ABSTRACT
Block-based programming environments are widely used by novices who are learning computer science. However, even in block-based coding environments that have been carefully developed to serve novices, students frequently struggle and require additional support. A promising avenue for providing this support is the use of intelligent tutoring systems, which offer adaptive hints to assist learners. To provide students with the adaptive hints they need, we must investigate their help-seeking behaviors and identify patterns surrounding their need for support. In this experience report, we examine data collected from 174 college students in an introductory engineering course, who used an intelligent block-based coding environment to learn computer science. These students made more than 1,000 hint requests, which we represent in two-dimensional space along axes of elapsed time and code completeness. Analysis revealed five major clusters of hint requests, which we further characterized through qualitative examination of the coding trajectories that preceded each hint request. We also analyzed how students' incoming knowledge and perceived computer skill were related to their help-seeking behaviors. Students with higher incoming knowledge requested hints when their code was more complete than did students with lower incoming knowledge. Students with high perceived computer skill asked for hints when their code was less complete than did those with low perceived computer skill. The results presented here provide insight into student help-seeking behavior in computer science education, informing CS educators and system designers on how best to develop support strategies.
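The two-dimensional analysis described above can be sketched with a standard clustering algorithm. This is an illustrative example only: the synthetic data, normalization, and the use of plain k-means are assumptions, not the study's actual features, data, or method. It shows the general shape of clustering hint requests by elapsed time and code completeness into five groups.

```python
# Illustrative sketch: cluster hint requests represented as
# (elapsed_time, code_completeness) points. All data here is synthetic
# and the choice of k-means is an assumption for demonstration.
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means on 2-D points; returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = (
                    sum(p[0] for p in members) / len(members),
                    sum(p[1] for p in members) / len(members),
                )
    return centroids, labels

# Each hint request as (elapsed_time, code_completeness), normalized to [0, 1].
rng = random.Random(42)
requests = [(rng.random(), rng.random()) for _ in range(200)]
centroids, labels = kmeans(requests, k=5)
print(len(centroids))  # five clusters, mirroring the study's result
```

In practice, the number of clusters would be chosen with a criterion such as Sarle's cubic clustering criterion rather than fixed in advance, and each resulting cluster would then be examined qualitatively, as the report describes.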
Exploring Novice Programmers' Hint Requests in an Intelligent Block-Based Coding Environment