DOI: 10.1145/3180155.3180177
Research article · ACM SIGSOFT Distinguished Paper

Towards optimal concolic testing

Published: 27 May 2018

ABSTRACT

Concolic testing integrates concrete execution (e.g., random testing) and symbolic execution for test case generation. It has been shown to be, at times, more cost-effective than random testing or symbolic execution alone. A concolic testing strategy is a function that decides when to apply random testing or symbolic execution and, in the latter case, which program path to symbolically execute. Many heuristic strategies have been proposed, yet which strategy is optimal remains an open problem. In this work, we make two contributions towards solving this problem. First, we show that the optimal strategy can be defined in terms of the probability of program paths and the cost of constraint solving; identifying the optimal strategy then reduces to a model checking problem on Markov Decision Processes with Costs. Second, given the complexity of identifying the optimal strategy, we design a greedy algorithm that approximates it. We conduct two sets of experiments: one based on randomly generated models and the other on a set of C programs. The results show that existing heuristics leave considerable room for improvement and that our greedy algorithm often outperforms them.
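The cost trade-off that drives a greedy strategy of this kind can be sketched under a deliberately simplified model. This is an illustration, not the paper's algorithm: it assumes each uncovered path's probability and constraint-solving cost are known, and at each step compares the expected cost of covering one more path by random testing against the cost of symbolically executing the cheapest uncovered path.

```python
# Illustrative sketch (hypothetical model, not the paper's exact algorithm):
# greedily choose between random testing and symbolic execution based on
# path probabilities and per-path constraint-solving costs.

def greedy_step(uncovered, random_cost):
    """Pick the next action for a simplified concolic tester.

    uncovered   -- dict: path id -> (hit probability, solving cost)
    random_cost -- cost of generating and running one random test

    Returns ("random", None) or ("symbolic", path_id).
    """
    hit_mass = sum(p for p, _ in uncovered.values())
    # A random test hits *some* uncovered path with probability hit_mass,
    # so the expected cost per newly covered path via random testing is:
    expected_random = float("inf") if hit_mass == 0 else random_cost / hit_mass

    # Symbolic execution covers one chosen path deterministically at its
    # solving cost; pick the cheapest uncovered path as the candidate.
    best_path = min(uncovered, key=lambda k: uncovered[k][1])
    best_solve = uncovered[best_path][1]

    if expected_random < best_solve:
        return ("random", None)
    return ("symbolic", best_path)


# Likely paths are cheap to hit randomly; a rare, deep path is worth solving.
paths = {"a": (0.45, 50.0), "b": (0.45, 40.0), "c": (0.0001, 30.0)}
print(greedy_step(paths, random_cost=1.0))      # random testing wins early
print(greedy_step({"c": (0.0001, 30.0)}, 1.0))  # solving the rare path wins
```

The two calls show the regime change the abstract alludes to: while high-probability paths remain, random testing covers paths more cheaply; once only low-probability paths are left, paying the constraint-solving cost becomes the better move.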


Published in

ICSE '18: Proceedings of the 40th International Conference on Software Engineering
May 2018, 1307 pages
ISBN: 9781450356381
DOI: 10.1145/3180155
Conference Chair: Michel Chaudron · General Chair: Ivica Crnkovic · Program Chairs: Marsha Chechik, Mark Harman
          Copyright © 2018 ACM


          Publisher

          Association for Computing Machinery

          New York, NY, United States



Acceptance Rates

Overall acceptance rate: 276 of 1,856 submissions, 15%
