Theoretical comparison of testing methods

Published: 1 November 1989

Abstract

Comparison of software testing methods is meaningful only if sound theory relates the properties compared to actual software quality. Existing comparisons typically use anecdotal foundations with no necessary relationship to quality, comparing methods on the basis of technical terms the methods themselves define. In the most seriously flawed work, one method whose efficacy is unknown is used as a standard for judging other methods! Random testing, as a method that can be related to quality (in both the conventional sense of statistical reliability, and the more stringent sense of software assurance), offers the opportunity for valid comparison.
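As a sketch of the reliability connection mentioned above: if N tests drawn independently from the operational input distribution all pass, then the claim "the failure probability is below θ" holds with confidence at least 1 - (1 - θ)^N. The Python fragment below is a minimal illustration of this standard bound, assuming independent draws from the operational profile; the function names are illustrative and not taken from the paper.

    # Minimal sketch (not from the paper): the standard bound relating
    # failure-free random tests to a statistical reliability claim,
    # assuming tests are drawn independently from the operational profile.
    import math

    def confidence_failure_rate_below(theta: float, n_passing: int) -> float:
        """Confidence that the true failure probability is below theta,
        given n_passing independent random tests with no failures:
        C = 1 - (1 - theta) ** n_passing."""
        return 1.0 - (1.0 - theta) ** n_passing

    def tests_needed(theta: float, confidence: float) -> int:
        """Smallest number N of failure-free random tests for which
        1 - (1 - theta) ** N >= confidence."""
        return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - theta))

    # Example: roughly 3000 failure-free random tests give about 95%
    # confidence that the failure probability is below 1 in 1000.
    print(confidence_failure_rate_below(1e-3, 3000))  # ~0.95
    print(tests_needed(1e-3, 0.95))                   # 2995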

Published in

ACM SIGSOFT Software Engineering Notes, Volume 14, Issue 8 (Dec. 1989), 213 pages. ISSN: 0163-5948. DOI: 10.1145/75309.

TAV3: Proceedings of the ACM SIGSOFT '89 third symposium on Software testing, analysis, and verification, November 1989, 229 pages. ISBN: 0897913426. DOI: 10.1145/75308.

Copyright © 1989 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States

