
Advance in Detecting Key Concepts as an Expert Model: Using Student Mental Model Analyzer for Research and Teaching (SMART)

  • Original research
  • Published in Technology, Knowledge and Learning

Abstract

While the key concepts embedded in an expert’s textual explanation have been considered an aspect of an expert model, the complexity of textual data makes determining key concepts demanding and time-consuming. To address this issue, we developed the Student Mental Model Analyzer for Research and Teaching (SMART), a technology that analyzes an expert’s textual explanation to elicit an expert concept map from which key concepts are automatically derived. SMART draws on four graph-based metrics (i.e., clustering coefficient, betweenness, PageRank, and closeness) to automatically filter key concepts from experts’ concept maps. This study investigated which filtering method extracts key concepts most accurately. Using textual data from 18 experts, we compared the accuracy of the four competing filtering methods on four accuracy measures (i.e., precision, recall, F-measure, and N-similarity). The results showed that the PageRank filtering method outperformed the other methods on all accuracy measures; for example, on average, PageRank recovered 79% of the key concepts identified by human experts. SMART’s automatic filtering can save human experts time when building an expert model, and it can validate their decisions about a list of key concepts.
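To make the filtering pipeline concrete, the sketch below is a minimal illustration, not the authors’ implementation: the function names, the toy concept map, the top-k cutoff, and the use of the networkx library are all assumptions. It scores the nodes of a small concept graph with the four graph-based metrics named in the abstract, keeps the top-ranked nodes as candidate key concepts, and compares each candidate set with a hypothetical human key-concept list using precision, recall, and F-measure (the paper’s N-similarity measure is study-specific and not reproduced here).

```python
# Minimal sketch of graph-based key-concept filtering (illustrative only).
import networkx as nx

def filter_key_concepts(concept_map: nx.Graph, metric: str = "pagerank", k: int = 5):
    """Return the k highest-scoring concepts under the chosen graph metric."""
    scorers = {
        "clustering": nx.clustering,              # local clustering coefficient
        "betweenness": nx.betweenness_centrality,
        "pagerank": nx.pagerank,
        "closeness": nx.closeness_centrality,
    }
    scores = scorers[metric](concept_map)         # dict: concept -> score
    ranked = sorted(scores, key=scores.get, reverse=True)
    return set(ranked[:k])

def accuracy(predicted: set, human: set):
    """Precision, recall, and F-measure of predicted key concepts vs. a human list."""
    true_pos = len(predicted & human)
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(human) if human else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

# Hypothetical concept map built from propositions in an expert explanation.
G = nx.Graph([("atom", "proton"), ("atom", "electron"), ("atom", "neutron"),
              ("proton", "charge"), ("electron", "charge"),
              ("electron", "electricity"), ("electricity", "conductor"),
              ("electricity", "insulator")])
human_keys = {"atom", "electron", "charge", "electricity", "conductor"}

for metric in ("clustering", "betweenness", "pagerank", "closeness"):
    picked = filter_key_concepts(G, metric, k=5)
    p, r, f = accuracy(picked, human_keys)
    print(f"{metric:12s} P={p:.2f} R={r:.2f} F={f:.2f}")
```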


Notes

  1. The text was retrieved from https://education.jlab.org/reading/electrostatics_r.html.


Author information


Corresponding author

Correspondence to Min Kyu Kim.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

Ethical Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Examples of Expert Textual Explanations (Human-Judged Key Concepts were Bolded in the Texts)

1.1 Model 1

Evaluation is a fundamental component of instructional design. Evaluation is the process of determining the merit, worth, and value of things. The types of evaluation include confirmatory evaluation, formative evaluation, and summative evaluation. For example, formative evaluation supports the process of improvement, focusing on learner ability. Summative evaluation focuses on the overall effectiveness, usefulness, or worth of the instruction. This chapter introduces several evaluation models. Stufflebeam proposes the CIPP model, which stands for context, input, process, and product evaluation. The CIPP model has influenced program planning, program structuring, and implementation decisions. In the CIPP model, an evaluator often participates in a project as a member of the project team. From a broad perspective, Rossi views evaluation as potentially including needs assessment, theory assessment, implementation assessment, impact assessment, and efficiency assessment. Chen proposes theory-driven evaluation, in which evaluators and stakeholders work together. The important role of an evaluator is to help articulate, evaluate, and improve the program theory, including an action model and a change model. Kirkpatrick suggests that training evaluation should examine four levels of outcomes: reaction, learning, behavior, and business results. Brinkerhoff emphasizes the use of success cases to evaluate a program. He suggests that an organization can gain profits by applying knowledge learned from success cases. Lastly, Patton views the use of evaluation findings as critical, and thus his evaluation model focuses on producing evaluation use. The utility of evaluation is judged by the degree of use. The use of evaluation findings can increase when stakeholders become active participants in the evaluation process.

 

1.2 Model 2

Technology implementations usually begin with an identified instructional need. Here, the instructional need was likely not fully identified due to insufficient study of how instructional practices in the classroom were already being conducted without the technology. One big issue is defining what a successful integration or change in instructional practice actually is. While teachers in the situation may have felt that they already knew this, the assumptions inherent in a design situation need to be articulated and checked if they are not to distort the design space by which instructional practices are manipulated. Teachers did not have enough professional development on using the technology in classroom teaching and learning, on ways to integrate its use into their teaching, and on best practices for effective educational use. Teacher professional development that addresses not just technical know-how but also pedagogy could help teachers realize how to do things differently in ways that take full advantage of the affordances of the tablets. Training as professional development should be extensive, including teacher beliefs and attitudes. Teacher beliefs play a role in adopting new practices and changing instructional practice. Teachers may not believe that students learn with laptops, and thus do not use laptops in their instruction. The only support teachers had during implementation was technical support; teachers lacked a mentor who could assist them as instructional issues arose throughout the year. Mentoring on additional and advanced uses of the technology in the classroom is critical for teachers to increase their skills and maintain their motivation in utilizing the technology. In addition, a mentor could help teachers maintain the belief that these efforts will have positive results. There are concerns that the environment does not support change. An ongoing supportive environment, in which teachers initially learn how to use the technology, how to use it with their content, and how to continue developing their expertise in the technology and incorporating it into the classroom, is critical.

The environment could also include a culture that does not support the desired performance. For example, the lack of incentives to make effective use of a new technology could contribute to a lack of use. The intervention seems to have been applied to this community rather than involving teachers from the beginning as collaborators in its design and modification. Teachers were not involved in the decision to implement the new media; thus, they did not fully “buy into” the plan.

 

1.3 Model 3 (Note 1)

Atoms, the basic building blocks of matter, are made of three basic components: protons, neutrons and electrons. The protons and neutrons cluster together to form the nucleus, the central part of the atom, and the electrons orbit about the nucleus. Protons and electrons both carry an electrical charge. The charges they carry are opposite to each other; protons carry a positive electrical charge while electrons carry a negative electrical charge. Neutrons are neutrally charged; they carry no charge at all.

Electricity is the movement of charged particles, usually electrons, from one place to another. Materials that electricity can move through easily are called conductors. Most metals, such as iron, copper and aluminum, are good conductors of electricity. Other materials, such as rubber, wood and glass, block the flow of electricity. Materials which prevent the flow of electricity are called insulators. Electrical cords are usually made with both conductors and insulators. Electricity flows through a conductor in the center of the cord. A layer of insulation surrounds the conductor and prevents the electricity from ‘leaking’ out.

Objects usually have equal numbers of positive and negative charges, but it isn’t too hard to temporarily create an imbalance. One way scientists can create an imbalance is with a machine called a Van de Graaff generator. It creates a large static charge by placing electrons on a metal dome using a motor and a big rubber band. Since like charges repel, the electrons push away from each other as they collect on the dome. Eventually, too many electrons are placed on the dome and they leap off, creating a spark that looks like a bolt of lightning.

Have you ever received a shock after having walked across a carpet? This shock was caused by extra electrons you collected while walking across the carpet. Your body became like the dome of the Van de Graaff generator, full of extra electrons looking for a way to get away. The path back to the carpet was blocked by the shoes you were wearing, but they were able to move through your hand and into the object that you touched, causing the shock. So, the next time you shuffle across a carpet and shock your friend on the ear, tell them you were just trying to be a Van de Graaff generator!

 


About this article


Cite this article

Kim, M.K., Gaul, C.J., Kim, S.M. et al. Advance in Detecting Key Concepts as an Expert Model: Using Student Mental Model Analyzer for Research and Teaching (SMART). Tech Know Learn 25, 953–976 (2020). https://doi.org/10.1007/s10758-019-09418-5
