
The effects of adaptive learning in a massive open online course on learners' skill development

Published: 26 June 2018
DOI: 10.1145/3231644.3231651

ABSTRACT

We report an experimental implementation of adaptive learning functionality in a self-paced Microsoft MOOC (massive open online course) on edX. In a personalized adaptive system, the learner's progress toward clearly defined goals is continually assessed, assessment occurs when the learner is ready to demonstrate competency, and supporting materials are tailored to each learner's needs. Despite the promise of adaptive personalized learning, the field lacks evidence-based instructional design, transparency in many of the models and algorithms that drive adaptive technology, and a framework for rapid experimentation with different models. ALOSI (Adaptive Learning Open Source Initiative) provides open-source adaptive learning technology and a common framework for measuring learning gains and learner behavior. This study explored the effects of two different strategies for adaptive learning and assessment. Learners were randomly assigned to three groups. In the first adaptive group, ALOSI prioritized a strategy of remediation, serving learners items on the topics with the least evidence of mastery; in the second adaptive group, ALOSI prioritized a strategy of continuity, under which learners were more likely to be served items on similar topics in sequence until mastery was demonstrated. The control group followed the pathways of the course as set out by the instructional designer, with no adaptive algorithms. We found that the implemented adaptivity in assessment, with its emphasis on remediation, was associated with a substantial increase in learning gains, while having no large effect on dropout. Further research is needed to confirm these findings and to explore additional effects and implications for course design.
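
To make the two serving policies concrete, the following is a minimal Python sketch, assuming each learner carries a per-topic mastery estimate in [0, 1] maintained by some knowledge-tracing model. The function names and the 0.9 mastery threshold are illustrative assumptions for this sketch, not details of the ALOSI implementation.

    MASTERY_THRESHOLD = 0.9  # assumed cutoff for "mastery demonstrated"

    def next_topic_remediation(mastery):
        """Remediation: serve an item from the topic with the least
        evidence of mastery."""
        unmastered = {t: p for t, p in mastery.items() if p < MASTERY_THRESHOLD}
        if not unmastered:
            return None  # every topic mastered; nothing left to remediate
        return min(unmastered, key=unmastered.get)

    def next_topic_continuity(mastery, current_topic):
        """Continuity: keep serving items on the current topic until
        mastery is demonstrated, then fall back to the weakest topic."""
        if current_topic is not None and mastery.get(current_topic, 0.0) < MASTERY_THRESHOLD:
            return current_topic
        return next_topic_remediation(mastery)

    if __name__ == "__main__":
        estimates = {"loops": 0.45, "functions": 0.80, "recursion": 0.30}
        print(next_topic_remediation(estimates))          # -> "recursion"
        print(next_topic_continuity(estimates, "loops"))  # -> "loops"

Under these assumptions, the remediation policy always jumps to the learner's weakest topic, whereas the continuity policy stays on the current topic until the threshold is reached and only then falls back to remediation; the control condition simply follows the instructional designer's fixed sequence.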

Published in

L@S '18: Proceedings of the Fifth Annual ACM Conference on Learning at Scale
June 2018, 391 pages
ISBN: 9781450358866
DOI: 10.1145/3231644
Copyright © 2018 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery, New York, NY, United States

Acceptance Rates

L@S '18 Paper Acceptance Rate: 24 of 58 submissions, 41%
Overall Acceptance Rate: 117 of 440 submissions, 27%
