DOI: 10.1145/2312005.2312018 (SPAA Conference Proceedings)

Brief announcement: the problem based benchmark suite

Published: 25 June 2012

ABSTRACT

This announcement describes the problem based benchmark suite (PBBS). PBBS is a set of benchmarks designed for comparing parallel algorithmic approaches, parallel programming language styles, and machine architectures across a broad set of problems. Each benchmark is defined concretely in terms of a problem specification and a set of input distributions. No requirements are placed on algorithmic approach, programming language, or machine architecture. The goal of the benchmarks is not only to compare runtimes, but also to be able to compare code and other aspects of an implementation (e.g., portability, robustness, determinism, and generality). As such, the code for an implementation of a benchmark is as important as its runtime, and the public PBBS repository will include both code and performance results.
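As a concrete illustration of what such a problem-centric definition looks like, the sketch below restates one of the suite's problems (remove duplicates) as a pure input/output contract, followed by a trivially correct sequential baseline in C++. The spec wording and the function name removeDuplicates are illustrative assumptions, not taken verbatim from the suite; any algorithm, language, or machine satisfying the contract is an equally valid implementation.

    // Hypothetical restatement of a PBBS-style problem specification
    // (wording and names are illustrative, not verbatim from the suite):
    //
    //   Problem: remove duplicates
    //   Input:   a sequence A of n elements of a type E supporting
    //            equality (and, for this baseline, hashing).
    //   Output:  a sequence containing exactly one occurrence of each
    //            distinct element of A, in any order.
    //
    // Only this input/output behavior is fixed; the algorithmic
    // approach is left entirely open.
    #include <unordered_set>
    #include <vector>

    template <typename E>
    std::vector<E> removeDuplicates(const std::vector<E>& A) {
      std::unordered_set<E> seen;   // elements emitted so far
      std::vector<E> out;
      out.reserve(A.size());
      for (const E& x : A)
        if (seen.insert(x).second)  // true iff x was not yet present
          out.push_back(x);
      return out;
    }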

The benchmarks are designed to make it easy for others to try their own implementations or to add new benchmark problems. Each benchmark problem includes the problem specification, the specification of input and output file formats, default input generators, test codes that check the correctness of the output for a given input, driver code that can be linked with implementations, a baseline sequential implementation, a baseline multicore implementation, and scripts for running timings (and checks) and outputting the results in a standard format. The current suite includes the following problems: integer sort, comparison sort, remove duplicates, dictionary, breadth-first search, spanning forest, minimum spanning forest, maximal independent set, maximal matching, k-nearest neighbors, Delaunay triangulation, convex hull, suffix arrays, n-body, and ray casting. For each problem, we report the performance of our baseline multicore implementation on a 40-core machine.
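To make the harness structure concrete, here is a minimal sketch of an implementation unit plugging into a driver of the kind described above. The entry point comparisonSort, the element type, and the division of labor (driver handles file I/O and timing; a separate checker validates the output) are assumptions chosen for illustration; the actual suite fixes its own per-problem interface and file formats.

    // Minimal sketch of an implementation unit that a per-problem
    // driver could link against. The driver is assumed to parse the
    // input file into memory, time only the call below, write the
    // result to an output file, and hand that file to the checker.
    // The entry-point name and signature are illustrative assumptions.
    #include <algorithm>
    #include <vector>

    void comparisonSort(std::vector<double>& A) {
      // A deliberately simple, trivially correct sequential baseline;
      // a submitted implementation would replace this body with its
      // own (e.g., parallel) algorithm.
      std::sort(A.begin(), A.end());
    }

Under this split, the checker never needs to know which algorithm produced the output, which is what allows implementations in different languages and styles to be compared on an equal footing.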

