
Decision Support for Selecting Tools for Software Test Automation

Published: 05 January 2017

Abstract

Context: Test automation is an investment with a high initial economic impact on software development. Used well, it can reduce costs (e.g., by speeding up development iterations through repeatable tests and regression testing) and improve the quality of the software or system at large scale. However, approaches to test automation are not always appropriate or successful. The trade-off between manual and automated testing, and the tools to be used, must be identified and justified. Deciding which tools to use in order to maximize the benefits is not a trivial task: numerous software testing and test automation tools are available, both commercial and open source, and every development environment (context) has unique, multifaceted goals. The exact number of tools is unknown, and the opportunities or resources to try out different options are very limited. Objective: Contextual factors are acknowledged as an issue and are well known both to practitioners in the field and to consultation service providers. Selecting and utilizing the most effective and efficient tool(s) for specific purpose(s) in a specific context is essential for the success of the business. The goal of this research is to define a systematic, empirically validated decision support system (DSS) for selecting a tool for software test automation.
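
The DSS outlined in the abstract is about matching candidate tools to contextual factors. A common building block for this kind of decision support is weighted multi-criteria scoring of candidate tools; the minimal Python sketch below illustrates only that general idea, not the system proposed in the paper, and every tool name, criterion, and weight in it is a hypothetical placeholder.

    # Minimal sketch: weighted multi-criteria scoring for test-automation tool
    # selection. This illustrates a generic DSS building block, NOT the decision
    # support system described in this article; all tools, criteria and weights
    # below are hypothetical placeholders.

    # Contextual criteria and their relative importance (weights sum to 1.0).
    weights = {
        "license_cost": 0.20,      # cost attractiveness (higher score = cheaper)
        "ci_integration": 0.30,    # fit with the existing CI pipeline
        "learning_curve": 0.20,    # how quickly the team can adopt the tool
        "platform_support": 0.30,  # coverage of the target platforms/interfaces
    }

    # Each candidate tool is scored 0-10 per criterion by the evaluators.
    candidates = {
        "Tool A": {"license_cost": 9, "ci_integration": 6,
                   "learning_curve": 7, "platform_support": 5},
        "Tool B": {"license_cost": 4, "ci_integration": 9,
                   "learning_curve": 5, "platform_support": 8},
    }

    def weighted_score(scores):
        """Weighted sum of the criterion scores for one tool."""
        return sum(weights[c] * scores[c] for c in weights)

    # Rank the candidates by descending weighted score.
    for tool in sorted(candidates, key=lambda t: weighted_score(candidates[t]), reverse=True):
        print(f"{tool}: {weighted_score(candidates[tool]):.2f}")

In practice, the weights themselves would be derived from the contextual factors the abstract emphasizes (e.g., team skills, technology stack, budget), which is where such scoring schemes tend to differ most between organizations.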


                  • Published in

ACM SIGSOFT Software Engineering Notes, Volume 41, Issue 6
                    November 2016, 110 pages
                    ISSN: 0163-5948
                    DOI: 10.1145/3011286

Copyright © 2017 is held by the owner/author(s).

                    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

                    Publisher

                    Association for Computing Machinery

                    New York, NY, United States

                    Publication History

                    • Published: 5 January 2017


                    Qualifiers

                    • research-article
