Article
DOI: 10.1145/1287624.1287681

Measuring empirical computational complexity

Published: 7 September 2007

ABSTRACT

The standard language for describing the asymptotic behavior of algorithms is theoretical computational complexity. We propose a method for describing the asymptotic behavior of programs in practice by measuring their empirical computational complexity. Our method involves running a program on workloads spanning several orders of magnitude in size, measuring their performance, and fitting these observations to a model that predicts performance as a function of workload size. Comparing these models to the programmer's expectations or to theoretical asymptotic bounds can reveal performance bugs or confirm that a program's performance scales as expected. Grouping and ranking program locations based on these models focuses attention on scalability-critical code. We describe our tool, the Trend Profiler (trend-prof), for constructing models of empirical computational complexity that predict how many times each basic block in a program runs as a linear (y = a + bx) or a power-law (y = ax^b) function of user-specified features of the program's workloads. We ran trend-prof on several large programs and report cases where a program scaled as expected, beat its worst-case theoretical complexity bound, or had a performance bug.
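The model-fitting step described above can be sketched as follows: the power-law model y = ax^b is fit by ordinary least squares in log-log space, since log y = log a + b log x is linear. This is a minimal illustration with hypothetical data, not trend-prof's implementation; the tool itself obtains basic-block execution counts via instrumentation.

```python
import math

def fit_linear(xs, ys):
    # Ordinary least squares for the linear model y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def fit_powerlaw(xs, ys):
    # Fit y = a * x**b by linear regression on (log x, log y):
    # log y = log a + b * log x.
    log_a, b = fit_linear([math.log(x) for x in xs],
                          [math.log(y) for y in ys])
    return math.exp(log_a), b

# Hypothetical workloads spanning several orders of magnitude, with a
# basic-block count that grows as n log n (synthetic, for illustration).
sizes = [10, 100, 1000, 10000, 100000]
counts = [n * math.log(n) for n in sizes]

a, b = fit_powerlaw(sizes, counts)
print(f"count ~= {a:.2f} * n^{b:.2f}")
```

For n log n data the fitted exponent comes out slightly above 1, illustrating how an empirical power-law fit summarizes near-linear growth; comparing such fitted exponents against expected bounds is what flags a potential performance bug.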


Published in

ESEC-FSE '07: Proceedings of the 6th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering
September 2007
638 pages
ISBN: 9781595938114
DOI: 10.1145/1287624

Copyright © 2007 ACM


            Publisher

            Association for Computing Machinery

            New York, NY, United States



            Qualifiers

            • Article

            Acceptance Rates

Overall acceptance rate: 112 of 543 submissions, 21%
