Abstract
This chapter examines the procedure followed in defining a scoring process to enable the reporting of individual student results for teachers to use in the classroom. The procedure begins with the identification of task features that match elements of the skills frameworks, followed by the generation of simple rules to collect data points representing these elements. The data points are extracted from log files generated as students engage in the assessment tasks; the log files document each event, chat message and action from each student. The chapter includes examples of the process for defining and generating global and local (task-specific) indicators, and examples of how the indicators are coded, scored and interpreted.
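To make the indicator idea concrete, below is a minimal sketch of how simple rules might extract data points from an event log. The log schema (timestamp, student, event type) and the two example indicators (a global chat-frequency count and a dichotomous "acts before chatting" code) are illustrative assumptions, not the actual ATC21S log format or coding scheme.

```python
# Minimal sketch: extracting two illustrative indicators from an event log.
# The log schema and indicator definitions are assumptions for illustration,
# not the ATC21S log format or coding scheme.

# Hypothetical log: one (seconds_elapsed, student_id, event_type) per record.
log = [
    (2.0, "A", "action"),
    (3.5, "B", "chat"),
    (4.0, "A", "chat"),
    (9.2, "B", "action"),
    (9.8, "A", "action"),
]

def chat_count(log, student):
    """Global indicator: how many chat events the student produced."""
    return sum(1 for _, s, event in log if s == student and event == "chat")

def acts_before_chatting(log, student):
    """Dichotomous indicator (coded 0/1): was the student's first logged
    event an action rather than a chat message?"""
    for _, s, event in log:
        if s == student:
            return 1 if event == "action" else 0
    return 0  # student produced no events

for student in ("A", "B"):
    print(student, chat_count(log, student), acts_before_chatting(log, student))
# A 1 1   (one chat; acted before chatting)
# B 1 0   (one chat; chatted before acting)
```

In practice, rules of this kind would run over the full stream of events, chats and actions from each student, and the resulting values would be coded and scored against elements of the skills framework, as the chapter describes.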
Notes
1. The acronym ATC21S™ has been globally trademarked. For purposes of simplicity, the acronym is presented throughout this chapter as ATC21S.
Copyright information
© 2015 Springer Science+Business Media Dordrecht
About this chapter
Cite this chapter
Adams, R., Vista, A., Scoular, C., Awwal, N., Griffin, P., Care, E. (2015). Automatic Coding Procedures for Collaborative Problem Solving. In: Griffin, P., Care, E. (eds) Assessment and Teaching of 21st Century Skills. Educational Assessment in an Information Age. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-9395-7_6
DOI: https://doi.org/10.1007/978-94-017-9395-7_6
Publisher Name: Springer, Dordrecht
Print ISBN: 978-94-017-9394-0
Online ISBN: 978-94-017-9395-7
eBook Packages: Humanities, Social Sciences and Law; Education (R0)