Abstract
Comparison of software testing methods is meaningful only if sound theory relates the properties compared to actual software quality. Existing comparisons typically use anecdotal foundations with no necessary relationship to quality, comparing methods on the basis of technical terms the methods themselves define. In the most seriously flawed work, one method whose efficacy is unknown is used as a standard for judging other methods! Random testing, as a method that can be related to quality (in both the conventional sense of statistical reliability, and the more stringent sense of software assurance), offers the opportunity for valid comparison.
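The link between random testing and statistical reliability that the abstract invokes can be illustrated with a small sketch. This is not from the paper itself; the function names (`random_test`, `confidence_bound`) are illustrative, and the bound used is the standard probable-correctness argument: if N independent random tests all succeed, confidence that the true failure probability is below θ is 1 − (1 − θ)^N.

```python
import random

def random_test(program, oracle, input_gen, n_tests):
    """Run n_tests random tests; return the number of observed failures."""
    failures = 0
    for _ in range(n_tests):
        x = input_gen()
        if program(x) != oracle(x):
            failures += 1
    return failures

def confidence_bound(n_tests, theta):
    """If n_tests random tests all pass, return the confidence that the
    true failure probability is below theta: 1 - (1 - theta)**n_tests."""
    return 1 - (1 - theta) ** n_tests

# Hypothetical example: test a square routine against a trusted oracle.
square = lambda x: x * x
fails = random_test(square, lambda x: x ** 2,
                    lambda: random.randint(-100, 100), 1000)
```

Note that the confidence figure, unlike a coverage percentage, is a statement about operational quality, which is precisely the property the abstract argues valid comparisons must rest on.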
References
- 1 V. Basili and R. Selby, Comparing the effectiveness of software testing strategies, IEEE Trans. Software Eng. SE-13 (December, 1987), 1278-1296.
- 2 T. A. Budd, The portable mutation testing suite, TR 83-8, Department of Computer Science, University of Arizona, March, 1983.
- 3 T. A. Budd and W. Miller, Testing numerical software, TR 83-18, Department of Computer Science, University of Arizona, November, 1983.
- 4 R. DeMillo, R. Lipton, and F. Sayward, Hints on test data selection: help for the practicing programmer, Computer 11 (April, 1978), 34-43.
- 5 R. A. DeMillo and A. Jefferson Offutt VI, Experimental results of automatically generated adequate test sets, Proceedings 6th Pacific Northwest Software Quality Conference, Portland, OR, September, 1988, 210-232.
- 6 J. Duran and S. Ntafos, An evaluation of random testing, IEEE Trans. Software Eng. SE-10 (July, 1984), 438-444.
- 7 J. Gourlay, A mathematical framework for the investigation of testing, IEEE Trans. Software Eng. SE-9 (November, 1983), 686-709.
- 8 R. Hamlet, Testing programs with the aid of a compiler, IEEE Trans. Software Eng. SE-3 (July, 1977), 279-290.
- 9 R. Hamlet, Probable correctness theory, Info. Proc. Letters 25 (April, 1987), 17-25.
- 10 R. Hamlet and R. Taylor, Partition testing does not inspire confidence, Proceedings Second Workshop on Software Testing, Verification, and Analysis, Banff, Canada, July, 1988, 206-215.
- 11 R. Hamlet, Editor's introduction, special section on software testing, CACM 31 (June, 1988), 662-667.
- 12 R. Hamlet, Unit testing for software assurance, Proceedings COMPASS 89, Washington, DC, June, 1989, 42-48.
- 13 W. Howden, Reliability of the path analysis testing strategy, IEEE Trans. Software Eng. SE-2 (1976), 208-215.
- 14 W. Howden, Functional Program Testing and Analysis, McGraw-Hill, 1987.
- 15 J. Laski and B. Korel, A data flow oriented program testing strategy, IEEE Trans. Software Eng. SE-9 (May, 1983), 347-354.
- 16 L. Lauterbach and W. Randall, Experimental evaluation of six test techniques, Proceedings COMPASS 89, Washington, DC, June, 1989, 36-41.
- 17 J. D. Musa, "Quality Time" column, Faults, failures, and a metrics revolution, IEEE Software, March, 1989, 85, 91.
- 18 S. C. Ntafos, An evaluation of required element testing strategies, Proc. 7th Int. Conf. on Software Engineering, Orlando, FL, 1984, 250-256.
- 19 S. C. Ntafos, A comparison of some structural testing strategies, IEEE Trans. Software Eng. SE-14 (June, 1988), 868-874.
- 20 T. J. Ostrand and M. Balcer, The category-partition method for specifying and generating functional tests, CACM 31 (June, 1988), 676-687.
- 21 D. L. Parnas, A. van Schouwen, and S. Kwan, Evaluation standards for safety critical software, TR 88-220, Department of Computing and Information Science, Queen's University, Kingston, Ontario, Canada.
- 22 D. Parnas, personal communication.
- 23 C. V. Ramamoorthy, S. F. Ho, and W. T. Chen, On the automated generation of program test data, IEEE Trans. Software Eng. SE-2 (Dec., 1976), 293-300.
- 24 S. Rapps and E. Weyuker, Selecting software test data using data flow information, IEEE Trans. Software Eng. SE-11 (April, 1985), 367-375.
- 25 D. Richardson and L. Clarke, A partition analysis method to increase program reliability, Proc. 5th Int. Conf. on Software Engineering, San Diego, 1981, 244-253.
- 26 R. W. Selby, V. Basili, and F. Baker, Cleanroom software development: an empirical evaluation, IEEE Trans. Software Eng. SE-13 (Sept., 1987), 1027-1038.
- 27 P. Thevenod-Fosse, Statistical validation by means of statistical testing, Dependable Computing for Critical Applications, Santa Barbara, CA, August, 1989.
- 28 M. Weiser, J. Gannon, and P. McMullin, Comparison of structural test coverage metrics, IEEE Software (March, 1985), 80-85.
- 29 E. J. Weyuker, Axiomatizing software test data adequacy, IEEE Trans. Software Eng. SE-12 (December, 1986), 1128-1138.
- 30 S. J. Zeil, The EQUATE testing strategy, Proceedings Workshop on Software Testing, Banff, Canada, July, 1986, 142-151.
- 31 S. H. Zweben and J. S. Gourlay, On the adequacy of Weyuker's test data adequacy axioms, IEEE Trans. Software Eng. SE-15 (April, 1989), 496-500.
Index Terms
- Theoretical comparison of testing methods
Published in TAV3: Proceedings of the ACM SIGSOFT '89 third symposium on Software testing, analysis, and verification.