Research article · DOI: 10.1145/2001420.2001425

Symbolic execution with mixed concrete-symbolic solving

Published: 17 July 2011

ABSTRACT

Symbolic execution is a powerful static program analysis technique that has been used for the automated generation of test inputs. Directed Automated Random Testing (DART) is a dynamic variant of symbolic execution that initially uses random values to execute a program and collects symbolic path conditions during the execution. These conditions are then used to produce new inputs that drive the program along different paths. It has been argued that DART can handle situations where classical static symbolic execution fails, due to incompleteness in decision procedures and the inability to handle external library calls.
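The DART-style loop described above can be sketched as follows. This is a toy model, not DART itself: the instrumented `program`, the brute-force `solve`, and the explicit flip-each-branch strategy are illustrative stand-ins for real instrumentation and a real constraint solver.

```python
import random

def program(x):
    # Toy program under test, instrumented to record the branch
    # decisions it takes as (condition, taken) pairs.
    path = []
    c1 = x > 5
    path.append(("x > 5", c1))
    if c1:
        c2 = x % 2 == 0
        path.append(("x % 2 == 0", c2))
    return path

def solve(constraints):
    # Stand-in decision procedure: brute-force search over a small domain.
    preds = {"x > 5": lambda x: x > 5, "x % 2 == 0": lambda x: x % 2 == 0}
    for x in range(-100, 101):
        if all(preds[c](x) == want for c, want in constraints):
            return x
    return None  # infeasible path condition

def dart(iterations=10):
    # Start from a random input, then repeatedly negate a branch of the
    # collected path condition to steer execution down unexplored paths.
    seen = set()
    worklist = [random.randint(-100, 100)]
    while worklist and iterations:
        iterations -= 1
        x = worklist.pop()
        path = tuple(program(x))
        if path in seen:
            continue
        seen.add(path)
        # For each prefix of the path, flip its last condition and ask
        # the solver for an input taking the other branch.
        for i in range(len(path)):
            flipped = list(path[:i]) + [(path[i][0], not path[i][1])]
            nxt = solve(flipped)
            if nxt is not None:
                worklist.append(nxt)
    return seen

paths = dart()  # covers all three feasible paths of the toy program
```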

We propose here a technique that mitigates these previous limitations of classical symbolic execution. The proposed technique splits the generated path conditions into (a) constraints that can be solved by a decision procedure and (b) complex non-linear constraints with uninterpreted functions to represent external library calls. The solutions generated from the decision procedure are used to simplify the complex constraints and the resulting path conditions are checked again for satisfiability. We also present heuristics that can further improve our technique. We show how our technique can enable classical symbolic execution to cover paths that other dynamic symbolic execution approaches cannot cover. Our method has been implemented within the Symbolic PathFinder tool and has been applied to several examples, including two from the NASA domain.
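The proposed split can be roughly illustrated as follows. This is a minimal sketch, not the paper's implementation: Symbolic PathFinder delegates the solvable constraints to a real decision procedure, while the `external` function, the brute-force `solve_simple`, and the finite search domain below are toy stand-ins.

```python
import math

# Path condition collected during symbolic execution, split into:
#   simple (decidable):  3 < x and x < 10
#   complex:             y == external(x) * x, where external models a
#                        library call as an uninterpreted function.

def external(x):
    # Stands in for a native/library call the solver cannot reason about.
    return math.floor(10 * math.sin(x))

def solve_simple(domain):
    # Toy "decision procedure": enumerate solutions of 3 < x < 10.
    return [x for x in domain if 3 < x < 10]

def mixed_solve(domain):
    # Step 1: solve the decidable part of the path condition.
    for x in solve_simple(domain):
        # Step 2: substitute the concrete solution into the complex
        # constraint, making external(x) a concrete value.
        rhs = external(x) * x
        # Step 3: the residual constraint y == rhs is now trivially
        # solvable, yielding an assignment for the full path condition.
        return {"x": x, "y": rhs}
    return None  # simple part unsatisfiable, so the whole PC is too

model = mixed_solve(range(-20, 21))
```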

References

  1. W. Bush, J. Pincus, and D. Sielaff. A static analyzer for finding dynamic programming errors. Software: Practice and Experience, 30(7):775--802, 2000.
  2. C. Cadar, D. Dunbar, and D. Engler. KLEE: unassisted and automatic generation of high-coverage tests for complex systems programs. In OSDI, pages 209--224. USENIX Association, 2008.
  3. C. Cadar, V. Ganesh, P. Pawlowski, D. Dill, and D. Engler. EXE: automatically generating inputs of death. TISSEC, 12(2):1--38, 2008.
  4. Choco Solver. http://www.emn.fr/z-info/choco-solver/.
  5. L. A. Clarke. A program testing system. In Proceedings of the 1976 Annual Conference, ACM '76, pages 488--491, 1976.
  6. A. Coen-Porisini, G. Denaro, C. Ghezzi, and M. Pezzé. Using symbolic execution for verifying safety-critical systems. In ESEC/FSE, page 151. ACM, 2001.
  7. X. Deng, Robby, and J. Hatcliff. Kiasan/KUnit: automatic test case generation and analysis feedback for open object-oriented systems. In TAICPART-MUTATION, pages 3--12, 2007.
  8. D. Giannakopoulou, D. Bushnell, J. Schumann, H. Erzberger, and K. Heere. Formal testing for separation assurance. Annals of Mathematics and Artificial Intelligence, Springer, 2011. To appear.
  9. P. Godefroid. Compositional dynamic test generation. In POPL, pages 47--54. ACM, 2007.
  10. P. Godefroid. Higher-order test generation. In Proc. PLDI, 2011.
  11. P. Godefroid, P. de Halleux, A. Nori, S. Rajamani, W. Schulte, N. Tillmann, and M. Levin. Automating software testing using program analysis. IEEE Software, 25(5):30--37, 2008.
  12. P. Godefroid, N. Klarlund, and K. Sen. DART: directed automated random testing. SIGPLAN Not., 40(6):213--223, 2005.
  13. Java PathFinder Tool-set. http://babelfish.arc.nasa.gov/trac/jpf.
  14. S. Khurshid, C. Păsăreanu, and W. Visser. Generalized symbolic execution for model checking and testing. In Proc. TACAS, pages 553--568, 2003.
  15. J. C. King. Symbolic execution and program testing. Comm. ACM, 19(7):385--394, 1976.
  16. K. Lakhotia, N. Tillmann, M. Harman, and J. De Halleux. FloPSy: search-based floating point constraint solving for symbolic execution. In ICTSS, pages 142--157. Springer-Verlag, Berlin, Heidelberg, 2010.
  17. T. Menzies and Y. Hu. Just enough learning (of association rules): the TAR2 "treatment" learner. Artif. Intell. Rev., 25(3):211--229, 2006.
  18. C. Păsăreanu and N. Rungta. Symbolic PathFinder: symbolic execution of Java bytecode. In ASE, pages 179--180. ACM, 2010.
  19. C. S. Păsăreanu, P. C. Mehlitz, D. H. Bushnell, K. Gundy-Burlet, M. Lowry, S. Person, and M. Pape. Combining unit-level symbolic execution and system-level concrete execution for testing NASA software. In Proc. ISSTA, 2008.
  20. C. S. Păsăreanu, J. Schumann, P. Mehlitz, M. Lowry, G. Karsai, H. Nine, and S. Neema. Model based analysis and test generation for flight software. In Proceedings of the Third IEEE International Conference on Space Mission Challenges for Information Technology, pages 83--90, Washington, DC, USA, 2009. IEEE Computer Society.
  21. R. Santelices and M. J. Harrold. Exploiting program dependencies for scalable multiple-path symbolic execution. In ISSTA, pages 195--206, 2010.
  22. K. Sen and G. Agha. A race-detection and flipping algorithm for automated testing of multi-threaded programs. In Proc. HVC, volume 4383 of LNCS, pages 166--182. Springer, 2007.
  23. K. Sen, D. Marinov, and G. Agha. CUTE: a concolic unit testing engine for C. In Proc. ESEC/FSE-13, pages 263--272. ACM, New York, NY, USA, 2005.
  24. S. Siegel, A. Mironova, G. Avrunin, and L. Clarke. Using model checking with symbolic execution to verify parallel numerical programs. In ISSTA, pages 157--168. ACM, 2006.
  25. M. Souza, M. Borges, M. d'Amorim, and C. S. Păsăreanu. CORAL: solving complex constraints for Symbolic PathFinder. In Proc. NFM, 2011.
  26. N. Tillmann and J. De Halleux. Pex: white box test generation for .NET. In TAP, pages 134--153. Springer-Verlag, 2008.
  27. A. Tomb, G. Brat, and W. Visser. Variably interprocedural program analysis for runtime error detection. In Proc. ISSTA, pages 97--107. ACM, New York, NY, USA, 2007.
  28. W. Visser, C. Păsăreanu, and R. Pelánek. Test input generation for Java containers using state matching. In ISSTA, pages 37--48. ACM, 2006.
  29. T. Xie, D. Marinov, W. Schulte, and D. Notkin. Symstra: a framework for generating object-oriented unit tests using symbolic execution. In Proc. TACAS, pages 365--381, 2005.

Published in

ISSTA '11: Proceedings of the 2011 International Symposium on Software Testing and Analysis
July 2011, 394 pages
ISBN: 9781450305624
DOI: 10.1145/2001420
Copyright © 2011 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance rate: 58 of 213 submissions (27%)
