DOI: 10.1145/3593434.3593954

Collecting cognitive strategies applied by students during test case design

Published: 14 June 2023

ABSTRACT

It is important to properly test software because this helps prevent bugs from going undetected in deployed systems. However, curricula often devote little attention to software testing, producing graduates who are inadequately prepared for the quality standards demanded by industry. This problem could be tackled by introducing bite-sized software testing education capsules that allow teachers to introduce software testing in a less time-consuming manner, with a hands-on component that facilitates learning. To design appropriate software testing educational tools, it is necessary to consider both the testing needs of industry and the cognitive models of students. This work-in-progress paper proposes an experimental design to gain an understanding of the cognitive strategies students apply during test case design based on real-life cases. Ultimately, the results of the experiment will be used to develop educational support for teaching software testing.


• Published in

    EASE '23: Proceedings of the 27th International Conference on Evaluation and Assessment in Software Engineering
    June 2023
    544 pages
    ISBN: 9798400700446
    DOI: 10.1145/3593434
    Copyright © 2023 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    • Published: 14 June 2023

    Qualifiers

    • research-article
    • Research
    • Refereed limited

    Acceptance Rates

    Overall Acceptance Rate: 71 of 232 submissions, 31%
