Abstract
AutoTutor-ARC (adult reading comprehension) is an intelligent tutoring system that uses conversational agents to help adult learners improve their comprehension skills. However, in such a system, not all lessons and items serve the same purposes equally well. In this paper, we describe a method for classifying items as instructive, evaluative, motivational, or potentially flawed based on analyses of the items’ psychometric properties. Further, there is no a priori way of determining which lessons are optimal given a learner’s reading profile needs. To address this, we evaluate how assessing learners’ component reading skills can inform various aspects of learner needs in AutoTutor lessons. More specifically, we compare learners who were classified as proficient, underengaged, conscientious, or struggling readers based on their experiences with AutoTutor. Together, these analyses suggest the utility of integrating assessments with instruction: efficient, adaptive learning at the lesson level; more efficient and valid post-testing; and, consequently, recommendations for more targeted, adaptive pathways through the instructional program/system.
Copyright information
© 2022 Springer Nature Switzerland AG
About this paper
Cite this paper
Hollander, J., Sabatini, J., Graesser, A. (2022). How Item and Learner Characteristics Matter in Intelligent Tutoring Systems Data. In: Rodrigo, M.M., Matsuda, N., Cristea, A.I., Dimitrova, V. (eds) Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners’ and Doctoral Consortium. AIED 2022. Lecture Notes in Computer Science, vol 13356. Springer, Cham. https://doi.org/10.1007/978-3-031-11647-6_106
Print ISBN: 978-3-031-11646-9
Online ISBN: 978-3-031-11647-6
eBook Packages: Computer Science (R0)