DOI: 10.1145/2804322.2804329

Bayesian concepts in software testing: an initial review

Published: 30 August 2015

ABSTRACT

This work summarizes the main topics that have been researched in the area of software testing under the umbrella of "Bayesian approaches" since 2010. There is a growing trend in the use of Bayesian statistics and related Bayesian concepts, both in general and in software testing in particular. Following a Systematic Literature Review protocol and searching the main digital libraries and repositories, we selected around 40 references that apply Bayesian approaches to software testing since 2010. These references summarize the current state of the art and can foster better-focused research. So far, the main use of Bayesian concepts observed in the software testing field is the application of Bayesian networks to software reliability and defect prediction (the latter mainly based on static software metrics and Bayesian classifiers). Other areas of application are software estimation and test data generation. Beyond these basic Bayesian approaches, some areas remain largely unexplored, such as influence diagrams and dynamic Bayesian networks.
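To make the classifier-based line of work concrete, the sketch below trains a Naive Bayes classifier on static software metrics to flag fault-prone modules. It is a minimal illustration under stated assumptions, not the setup of any surveyed paper: the metric names (lines of code, cyclomatic complexity, fan-out), the toy data, and the labels are hypothetical, and numpy plus scikit-learn's GaussianNB are assumed to be available.

# Minimal sketch: Naive Bayes defect prediction from static software metrics.
# Hypothetical data and feature names; assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report

# Each row describes one module by static metrics:
# [lines of code, cyclomatic complexity, fan-out]  (toy values)
X = np.array([
    [120, 4, 3], [850, 27, 15], [60, 2, 1], [430, 18, 9],
    [300, 11, 6], [990, 35, 20], [75, 3, 2], [510, 22, 12],
    [200, 8, 4], [640, 25, 14], [90, 5, 2], [720, 30, 17],
])
# 1 = module had at least one post-release defect, 0 = defect-free (toy labels)
y = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

# Gaussian Naive Bayes applies Bayes' theorem under a conditional
# independence assumption between the metrics given the class.
model = GaussianNB()
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test), zero_division=0))
# Posterior probability that a new, unseen module is defect-prone
print(model.predict_proba(np.array([[400, 15, 8]])))

In practice the surveyed studies work with much larger metric sets and real defect data; the point of the sketch is only to show how a Bayesian classifier turns static metrics into a posterior probability of fault-proneness.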

References

  1. R. Abreu, A. Gonzalez-Sanchez, and A. J. Van Gemund. A diagnostic reasoning approach to defect prediction. In Modern Approaches in Applied Intelligence, pages 416–425. Springer, 2011.
  2. N. Angius. The problem of justification of empirical hypotheses in software testing. Philosophy & Technology, 27(3):423–439, 2014.
  3. J. Ba and S. Wu. Propred: A probabilistic model for the prediction of residual defects. In Mechatronics and Embedded Systems and Applications (MESA), 2012 IEEE/ASME International Conference on, pages 247–251. IEEE, 2012.
  4. M. Blackburn and B. Huddell. Hybrid Bayesian network models for predicting software reliability. In 2012 IEEE Sixth International Conference on Software Security and Reliability Companion, 2012.
  5. C. Catal, U. Sevim, and B. Diri. Practical development of an Eclipse-based software fault prediction tool using Naive Bayes algorithm. Expert Systems with Applications, 38(3):2347–2353, 2011.
  6. B. Cheng-Gang, J. Chang-Hai, and C. Kai-Yuan. A reliability improvement predictive approach to software testing with Bayesian method. In Control Conference (CCC), 2010 29th Chinese, pages 6031–6036, July 2010.
  7. D. Cotroneo, R. Natella, and R. Pietrantuono. Predicting aging-related bugs using software complexity metrics. Performance Evaluation, 70(3):163–178, 2013. Special Issue on Software Aging and Rejuvenation.
  8. B. L. Dalmazo, A. L. R. de Sousa, W. L. Cordeiro, J. Wickboldt, R. C. Lunardi, R. L. dos Santos, L. P. Gaspary, L. Z. Granville, C. Bartolini, M. Hickey, et al. IT project variables in the balance: A Bayesian approach to prediction of support costs. In Software Engineering (SBES), 2011 25th Brazilian Symposium on, pages 224–232. IEEE, 2011.
  9. K. Dejaeger, T. Verbraken, and B. Baesens. Toward comprehensible software fault prediction models using Bayesian network classifiers. IEEE Transactions on Software Engineering, 39(2):237–257, 2013.
  10. S. Dhankhar, H. Rastogi, and M. Kakkar. Software fault prediction performance in software engineering. In Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, pages 228–232, March 2015.
  11. H. Do, S. Mirarab, L. Tahvildari, and G. Rothermel. The effects of time constraints on test case prioritization: A series of controlled experiments. IEEE Transactions on Software Engineering, 36(5):593–617, 2010.
  12. EBSE. Template for a systematic literature review protocol. http://www.dur.ac.uk/ebse/resources/templates/SLRTemplate.pdf, 2010.
  13. Z. Fang and H. Sun. A software regression testing strategy based on Bayesian network. In Computational Intelligence and Software Engineering (CiSE), 2010 International Conference on, pages 1–4. IEEE, 2010.
  14. L. Han. Evaluation of software testing process based on Bayesian networks. In Computer Engineering and Technology (ICCET), 2010 2nd International Conference on, volume 7, pages V7-361. IEEE, 2010.
  15. R. Hewett. Mining software defect data to support software testing management. Applied Intelligence, 34(2):245–257, 2011.
  16. N. Jongsawat and W. Premchaiswadi. Developing a Bayesian network model based on a state and transition model for software defect detection. pages 295–300, 2012.
  17. Y. Jun-min, C. Ying-xiang, C. Jing-ru, and F. Jia-jie. Optimization model of software fault detection. In Proceedings of the 2012 International Conference on Information Technology and Software Engineering, pages 129–136. Springer, 2013.
  18. G. Khan, S. Sengupta, and K. Das. A probabilistic model for analysis and fault detection in the software system: An empirical approach. Lecture Notes in Electrical Engineering, 298:253–265, 2014.
  19. B. A. Kitchenham, T. Dybå, and M. Jørgensen. Evidence-based software engineering. In Proceedings of the 26th International Conference on Software Engineering (ICSE '04), pages 273–281, Washington, DC, USA, 2004. IEEE Computer Society.
  20. C. Kumar and D. Yadav. Software defects estimation using metrics of early phases of software development life cycle. International Journal of System Assurance Engineering and Management, pages 1–9, 2014.
  21. L. Li and H. Leung. Bayesian prediction of fault-proneness of agile-developed object-oriented system. Lecture Notes in Business Information Processing, 190:209–225, 2014.
  22. Q. Li and J. Wang. Determination of software reliability demonstration testing effort based on importance sampling and prior information. Advances in Intelligent and Soft Computing, 126:247–255, 2012.
  23. Z. Li, Q. Zhao, C. Li, and X. Yang. Design and reliability evaluation of simulation system for fire control radar network. In International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering (ICQR2MSE), pages 1314–1317, June 2012.
  24. J. Lv, B.-B. Yin, and K.-Y. Cai. Estimating confidence interval of software reliability with adaptive testing strategy. Journal of Systems and Software, 97:192–206, 2014.
  25. Y. Ma, G. Luo, X. Zeng, and A. Chen. Transfer learning for cross-company software defect prediction. Information and Software Technology, 54(3):248–256, 2012.
  26. A. T. Misirli and A. B. Bener. Bayesian networks for evidence-based decision-making in software engineering. IEEE Transactions on Software Engineering, 40(6), 2014.
  27. A. S. Namin and M. Sridharan. Position paper: Bayesian reasoning for software testing. In Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research (FoSER '10), 2010.
  28. A. Okutan and O. T. Yıldız. Software defect prediction using Bayesian networks. Empirical Software Engineering, 19(1):154–181, 2014.
  29. L. Qiuying, L. Haifeng, and W. Guodong. Sensitivity analysis on the influence factors of software reliability based on diagnosis reasoning. In Intelligence Computation and Evolutionary Computation, pages 557–566. Springer, 2013.
  30. L. Radlinski. A survey of Bayesian net models for software development effort prediction. International Journal of Software Engineering and Computing, 2(2):95–109, 2010.
  31. K. Rekab, H. Thompson, and W. Wu. A multistage sequential test allocation for software reliability estimation. IEEE Transactions on Reliability, 62(2):424–433, 2013.
  32. R. Sagarna, A. Mendiburu, I. Inza, and J. A. Lozano. Assisting in search heuristics selection through multidimensional supervised classification: A case study on software testing. Information Sciences, 258:122–139, 2014.
  33. T. Schulz, L. Radlinski, T. Gorges, and W. Rosenstiel. Defect cost flow model. In Proceedings of the 6th International Conference on Predictive Models in Software Engineering (PROMISE '10), New York, NY, USA, Sept. 2010. ACM Press.
  34. J. Schumann, T. Mbaya, O. Mengshoel, K. Pipatsrisawat, A. Srivastava, A. Choi, and A. Darwiche. Software health management with Bayesian networks. Innovations in Systems and Software Engineering, 9(4):271–292, June 2013.
  35. M. Sridharan and A. S. Namin. Prioritizing mutation operators based on importance sampling. In Software Reliability Engineering (ISSRE), 2010 IEEE 21st International Symposium on, pages 378–387. IEEE, 2010.
  36. R. Torkar, N. M. Awan, A. K. Alvi, and W. Afzal. Predicting software test effort in iterative development using a dynamic Bayesian network. In 21st IEEE International Symposium on Software Reliability Engineering. IEEE, 2010.
  37. J. VanderPlas. Frequentism and Bayesianism: A Python-driven primer. arXiv preprint arXiv:1411.5018, 2014.
  38. S. Wagner. A Bayesian network approach to assess and predict software quality using activity-based quality models. Information and Software Technology, 52(11):1230–1241, Nov. 2010.
  39. E. J. Weyuker, T. J. Ostrand, and R. M. Bell. Comparing the effectiveness of several modeling methods for fault prediction. Empirical Software Engineering, 15(3):277–295, 2010.
  40. M. Wiper, A. Palacios, and J. Marín. Bayesian software reliability prediction using software metrics information. Quality Technology & Quantitative Management, 9(1):35–44, 2012.
  41. Z. Yang, Z. Yu, and C. Bai. The approach of graphical user interface testing guided by Bayesian model. Lecture Notes in Electrical Engineering, 277:385–393, 2014.
  42. Z.-F. Yang, Z.-X. Yu, B.-B. Yin, and C.-G. Bai. GUI reliability assessment based on Bayesian network and structural profile. International Journal of Signal Processing, Image Processing and Pattern Recognition, 8(1):225–240, 2015.
  43. C. Zheng, F. Peng, J. Wu, and Z. Wu. Software life cycle-based defects prediction and diagnosis technique research. In Computer Application and System Modeling (ICCASM), 2010 International Conference on, volume 8, pages V8-192. IEEE, 2010.
  44. B. Zhou, H. Okamura, and T. Dohi. Markov chain Monte Carlo random testing. In Advances in Computer Science and Information Technology, pages 447–456. Springer, 2010.
  45. B. Zhou, H. Okamura, and T. Dohi. Enhancing performance of random testing through Markov chain Monte Carlo methods. IEEE Transactions on Computers, 62(1):186–192, 2013.


      • Published in

        A-TEST 2015: Proceedings of the 6th International Workshop on Automating Test Case Design, Selection and Evaluation
        August 2015
        46 pages
        ISBN:9781450338134
        DOI:10.1145/2804322

        Copyright © 2015 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 30 August 2015


        Qualifiers

        • research-article

