
Inspecting Code Churns to Prioritize Test Cases

  • Conference paper
  • Testing Software and Systems (ICTSS 2020)

Abstract

In the context of software evolution, time-to-market pressure often leaves a company without enough time and/or resources to re-execute the whole test suite on a new software version to check for regressions. To address this issue, many Regression Test Prioritization techniques have been proposed, which rank test cases so that tests more likely to expose faults are given higher priority. Some of these techniques exploit code churn metrics, i.e., quantifications of the code changes between two subsequent versions of a software artifact, which have proven to be effective indicators of defect-prone components. In this paper, we first present three new Regression Test Prioritization strategies, based on a novel code churn metric, which we empirically assessed on an open-source software system. Results show that the proposal is promising, but that it might be further improved by a more detailed analysis of the nature of the changes introduced between two subsequent code versions. To this aim, we also sketch a more refined approach, currently under investigation, that quantifies changes in a code base at a finer-grained level. Intuitively, we seek to prioritize tests that stress more fault-prone changes (e.g., structural changes in the control flow) over tests that exercise changes less likely to introduce errors (e.g., the renaming of a variable). To do so, we propose to exploit the Abstract Syntax Tree (AST) representation of source code and to quantify differences between ASTs by means of specifically designed Tree Kernel functions, a type of similarity measure for tree-based data structures that, thanks to their customizability, have proven very effective in other domains.
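As a minimal illustration of the churn-driven idea (a sketch, not the paper's actual metric), the snippet below ranks tests by the total amount of change in the files they cover. Both inputs are hypothetical stand-ins: `changed_lines_per_file` would come from diffing the two versions, and `coverage` from a coverage tool such as JaCoCo.

```python
def churn_scores(changed_lines_per_file, coverage):
    """Score each test by the total churn of the files it covers.

    changed_lines_per_file: dict mapping file path -> number of changed lines
                            (hypothetical output of a diff between versions)
    coverage: dict mapping test name -> set of files the test exercises
              (hypothetical output of a coverage tool)
    """
    return {test: sum(changed_lines_per_file.get(f, 0) for f in files)
            for test, files in coverage.items()}


def prioritize(changed_lines_per_file, coverage):
    """Return test names sorted from highest to lowest churn score."""
    scores = churn_scores(changed_lines_per_file, coverage)
    return sorted(scores, key=scores.get, reverse=True)
```

For example, `prioritize({"a.py": 10, "b.py": 0}, {"t1": {"a.py"}, "t2": {"b.py"}})` ranks `t1` first, since it exercises the only file that changed.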
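To give an intuition of how tree-based similarity can separate structural changes from cosmetic ones, the sketch below implements a naive whole-subtree match over Python ASTs (far simpler than the customizable Tree Kernel functions the paper refers to). Because subtree signatures record only node types, a variable renaming leaves the similarity unchanged, while a change to the control flow lowers it.

```python
import ast
from collections import Counter


def subtree_signatures(tree):
    """Collect a multiset of serialized subtrees of a Python AST.

    Signatures use node type names only, so identifier renamings
    produce identical signatures.
    """
    counts = Counter()

    def walk(node):
        sig = (type(node).__name__ + "(" +
               ",".join(walk(c) for c in ast.iter_child_nodes(node)) + ")")
        counts[sig] += 1
        return sig

    walk(tree)
    return counts


def subtree_kernel(src_a, src_b):
    """Count identical subtrees shared by the ASTs of two code snippets."""
    a = subtree_signatures(ast.parse(src_a))
    b = subtree_signatures(ast.parse(src_b))
    return sum((a & b).values())  # multiset intersection
```

Comparing `"x = 1; y = x + 1"` against a renamed copy (`a`, `b` instead of `x`, `y`) yields the same kernel value as comparing it against itself, whereas wrapping the second assignment in an `if` lowers the value: exactly the kind of distinction the proposed prioritization wants to exploit.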



Notes

  1. https://webkit.org/.

  2. The JaCoCo tool can be obtained freely at http://www.eclemma.org/jacoco/.

  3. The SonarQube tool can be obtained freely at https://www.sonarqube.org/.


Author information


Correspondence to Sergio Di Martino.


Copyright information

© 2020 IFIP International Federation for Information Processing

About this paper


Cite this paper

Altiero, F., Corazza, A., Di Martino, S., Peron, A., Starace, L.L.L. (2020). Inspecting Code Churns to Prioritize Test Cases. In: Casola, V., De Benedictis, A., Rak, M. (eds) Testing Software and Systems. ICTSS 2020. Lecture Notes in Computer Science, vol. 12543. Springer, Cham. https://doi.org/10.1007/978-3-030-64881-7_17


  • DOI: https://doi.org/10.1007/978-3-030-64881-7_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-64880-0

  • Online ISBN: 978-3-030-64881-7

