Animated agents and learning: Does the type of verbal feedback they provide matter?
Introduction
As researchers continue to investigate methods and guidelines to increase the effectiveness of learning environments, attention is being focused on how motivation, social interaction and cognitive processes impact learning in multimedia environments (Mayer, Sobko, & Mautone, 2003; Moreno, 2007; Moreno & Mayer, 2007). Multimedia environments provide an interface that incorporates words and pictures in ways that can potentially capitalize on these factors and enhance learning (Mayer, 2005). For example, researchers have explored using animated pedagogical agents to enhance social interaction between the computer and the learner and promote learning processes (Atkinson, 2002; Craig, Gholson, & Driscoll, 2002; Dunsworth & Atkinson, 2007). An animated pedagogical agent is a lifelike character that provides instructional information through verbal and nonverbal forms of communication. An agent incorporates some or all of the following features: (a) a human-like look, (b) locomotion, (c) goal-directed gestures, (d) facial expression, (e) gaze, (f) a human voice, (g) personalized speech, and (h) interactive behavior by reacting to a learner's actions (e.g., providing verbal feedback). This study investigated the impact of an animated agent and the type of corrective feedback on learning, motivation and cognition in a multimedia environment.
Social agency theory (Atkinson, Mayer, & Merrill, 2005; Mayer, Sobko, et al., 2003) is one of the theoretical frameworks that researchers use to investigate the effectiveness of animated pedagogical agents in multimedia learning environments. According to this theory, an animated agent that appears on a computer screen and provides learners with verbal and/or non-verbal learning cues has the potential to prime their social-interaction schema and engage them in social interaction. As a result, learners may be prompted to interact with the agent in a computer-based multimedia learning environment in much the same way they would interact with a peer, mentor, or teacher in a classroom. Once learners perceive a computer-based instructional episode as a social event, they apply social rules—the conventions for human-to-human communication—when interacting with the computer (Reeves & Nass, 1996; Van der Meij, 2013). A number of social norms are primed by human–computer interaction—one of which is the cooperation principle (Grice, 1975). Grice proposed that a person listening to someone talk in a human-to-human communication scenario will assume that the speaker is making a concerted effort to communicate clearly by being informative, accurate, relevant, and concise. Learners in this situation are therefore potentially motivated to make sense of what is being presented to them and are more likely to process the information deeply and achieve meaningful learning. In effect, they will be more motivated to select relevant information and integrate it with their prior knowledge.
There is modest empirical evidence in the educational research literature supporting social agency theory as several studies have revealed positive learning effects of presenting an animated pedagogical agent in a multimedia environment. For instance, Atkinson (2002) conducted a study in which an animated parrot (Peedy) was used in a multimedia program to deliver worked-example instruction about proportion-word problems. He found that participants studying content with the agent that narrated the instruction performed significantly better on learning outcome measures than their counterparts studying the same content with narrated instruction but no agent. This finding indicated that the presence of the agent enhanced the learning effectiveness of the multimedia environment (i.e., image effect). Other studies (e.g., Dunsworth & Atkinson, 2007; Lester et al., 1997; Lusk & Atkinson, 2007; Moreno, Mayer, & Lester, 2000; Moreno, Mayer, Spires, & Lester, 2001; Yilmaz & Kılıç-Çakmak, 2012) also showed that the presence of an agent fostered learning in a multimedia environment. Kim and Ryu (2003) reviewed 28 studies and found a strong positive learning effect for visually presented agents that are utilized to deliver instruction. In addition, past research revealed the positive impact of agents' voices (e.g., personalized speech) and affective behaviors (e.g., facial expressions) on learners' affective states (e.g., motivation and interest) in multimedia environments (Atkinson et al., 2005; Baylor & Kim, 2005, 2009; Kim & Baylor, 2006; Kim, Baylor, & Shen, 2007). These findings provide further evidence of social-motivational aspects of agents. Additionally, Atkinson et al. (2005) found that learners who studied worked examples that were narrated by an agent with a human voice rated the agent's speech more positively and had better performance on transfer test questions than their peers who studied examples accompanied by the same agent with a computer voice. 
Therefore, learning, motivation and cognition should all be considered and investigated in multimedia environments, as these three factors are influenced by different instructional methods and media (Brünken, Plass, & Moreno, 2010; Moreno, 2010; Moreno & Mayer, 2007).
Cognitive load theory (CLT; Paas, Renkl, & Sweller, 2003; Schnotz & Kürschner, 2007; Sweller, 1994; Sweller, Ayres, & Kalyuga, 2011; Sweller, van Merriënboer, & Paas, 1998) provides another theoretical framework for researchers to explain their findings in agent-based learning environments. CLT is built around a multicomponent working memory model (Baddeley, 2007) that assumes humans process information via two sensory channels—an auditory/verbal channel and a visual/pictorial channel—and consequently have a limited working memory capacity. During the learning process, learners must select relevant information from the two channels, organize it in working memory and integrate it with their prior knowledge. This process is essential for learning, as it facilitates schema construction and the transfer of information to long-term memory (Sweller, 2005). Learners experience cognitive overload when their working memory capacity has been exceeded.
There are three sources of cognitive load—intrinsic load, extraneous load and germane load. Intrinsic load is due to the natural complexity of the learning content, which results from the number of interacting elements (element interactivity) that must be processed to complete the task (Sweller, 2005). More interacting elements increase the intrinsic load, the working memory load (Sweller, 2010) and the difficulty level of the task. Extraneous load is caused by ineffective instructional design and should be reduced to promote learning. Finally, germane load is caused by the effortful processing that is required to facilitate schema acquisition. Regardless of the source, the underlying cause of cognitive load that taxes limited working memory resources is proposed to be element interactivity (Sweller, 2010). Sweller suggested that this notion may make it difficult to assess how much load is caused by each source, but that overall cognitive load can still be determined and there is “…no reason why the currently commonly used subjective ratings of task difficulty…cannot be used to determine changes in overall cognitive load” (p. 128).
The design of instruction, or the instructional format, has the potential to affect how learners interact with a learning environment and how much cognitive load they experience. For example, it could be argued that a multimedia learning program designed with an animated agent has no effect, or even a negative effect, on learning. According to Harp and Mayer (1998), an animated agent that displays gestures, gaze, facial expressions or locomotion may present learners with too many seductive details, causing them to split their attention away from relevant information and consequently experience extraneous load (or additional element interactivity) in the learning environment. Results from several studies (Chen, 2012; Choi & Clark, 2006; Craig et al., 2002; Mayer, Dow, & Mayer, 2003) support this claim. For instance, in Choi and Clark's (2006) study, either an animated agent or an arrow was used in a multimedia program to teach an English language topic—relative clauses. The study failed to reveal any learning benefits for those who learned from the animated pedagogical agent. This finding is consistent with the results of Mayer, Dow, et al. (2003), who found that participants who studied with an animated agent did not perform significantly better on a transfer test than their peers who learned without an agent.
Irrespective of theoretical orientation, the current education research literature on the effectiveness of animated agents is rich with diverse research hypotheses and varied empirical outcomes (for review, see Heidig & Clarebout, 2011). In fact, some researchers have concluded that no generalization can be made about whether it is advantageous to embed an agent in a learning environment. Instead, research should investigate the specific conditions under which an agent enhances learning by taking into account a series of potential moderators, such as learner characteristics, the agent's functions, the agent's design, learning environments, and the type of knowledge (Atkinson et al., 2009; Johnson, DiDonato, & Reisslein, 2013; Kim & Wei, 2011; Ozogul, Johnson, Atkinson, & Reisslein, 2013; for review, see Dehn & van Mulken, 2000; Heidig & Clarebout, 2011). Therefore, they recommended that empirical research should address the effect of a specific type of agent in a specific domain. In order to shed light on the mixed and inconclusive empirical results on animated pedagogical agents, the current study was designed to investigate the learning and motivational benefits of an animated agent that functioned to provide verbal feedback in a multimedia environment designed to deliver science instruction.
Shute (2008) defined feedback as “information communicated to the learner that is intended to modify his or her thinking or behavior for the purpose of improving learning” (p. 154). Instructional designers consider feedback one of the key elements of effective instruction (Sullivan & Higgins, 1983), as it has the potential to help learners monitor their own learning (Butler & Winne, 1995). Over the past several decades, researchers have investigated the role of feedback in learning and instruction from multiple perspectives, e.g., the timing of feedback (immediate vs. delayed feedback; Schroth, 1992), the source of feedback (self-generated vs. externally provided feedback; Andre & Thieman, 1988) and the degree of elaboration of feedback (simple vs. elaborate feedback; Moreno, 2004). To help researchers and practitioners better understand the effectiveness of feedback, several models of feedback have been proposed in review articles (Bangert-Drowns, Kulik, Kulik, & Morgan, 1991; Butler & Winne, 1995; Hattie & Timperley, 2007). What these models have in common is that the effectiveness of feedback is related to a range of factors internal (e.g., meta-cognition) and external (e.g., task level) to learners. This is supported by a meta-analysis conducted by Azevedo and Bernard (1995), which revealed that the effect of a particular type of feedback was inconsistent across the literature.
One categorization distinguishes between simple and elaborate feedback based on the amount of information the feedback contains (Bangert-Drowns et al., 1991). Feedback can be as simple as a confirmation of whether a learner's response is correct (simple feedback), or it can provide an explanation for correct and incorrect responses (elaborate feedback). In a review of 40 research studies utilizing either computerized or non-computerized environments, Bangert-Drowns et al. (1991) found that studies using elaborate feedback produced larger effect sizes than studies using simple feedback. Studies in computer-based learning environments have likewise demonstrated the effectiveness of elaborate feedback (e.g., Narciss & Huth, 2006; Pridemore & Klein, 1991). For instance, Pridemore and Klein (1991) found that participants who received elaborate feedback outperformed their counterparts who received verification feedback (i.e., simple feedback), regardless of whether learner control was provided. One interpretation of this effect is that elaborate feedback cues learners into a cognitive elaboration process, which enhances deep understanding (Anderson & Reder, 1979).
One of the affordances of an animated pedagogical agent is its ability to serve as a source of verbal social cues (e.g., feedback) while learners interact with the multimedia environment. Given that feedback is most effective when it fosters cognitive processing (Azevedo & Bernard, 1995; Bangert-Drowns et al., 1991), it is plausible to predict that externally provided verbal feedback facilitates positive learning outcomes in agent-based environments. For instance, participants in two studies (Moreno, 2004; Moreno & Mayer, 2005) completed an activity in which they designed plants for various weather conditions in a discovery, game-like learning environment augmented with an animated agent (Herman the Bug). Both studies revealed that spoken explanatory feedback (i.e., elaborate feedback) provided by the agent promoted learning and reduced perceived cognitive load more effectively than simple feedback provided by the same agent. However, because the literature reveals a wide range of variables that influence the effectiveness of feedback, it is worthwhile to continue investigating the interplay between the agent and feedback by extending the studies of Moreno and her colleagues (Moreno, 2004; Moreno & Mayer, 2005) to a non-gaming environment and incorporating a no-agent control condition for delivering the different types of feedback.
Overview of experiment
The purpose of the current study was to investigate the effects of an animated pedagogical agent that provided verbal feedback in a multimedia learning environment. Specifically, the study was designed to test social agency theory and cognitive load theory by exploring the effect of the agent (agent vs. no agent) and the type of feedback (simple vs. elaborate), as well as the potential interaction between the two, on a learning outcome measure and perceived cognitive load.
Participants and design
The participants were 135 undergraduate and graduate students from a southwestern university in the US. They were recruited from a participant pool, as well as through flyers and emails distributed throughout campus. Participants came from a wide range of disciplines (Education, Engineering, Music, Business, Journalism, etc.), representing the general student population. Participants either received a small stipend ($20) or class credit for participation. The sample
Results
All participants' data were included in the analysis for two reasons: (a) there were no missing cases; and (b) preliminary data screening revealed no outliers. Table 3 presents the means and standard deviations (in parentheses) of participants' (a) total pretest scores, (b) total posttest scores, and (c) adjusted total posttest scores, where appropriate. Family-wise alpha was set at the .05 level. Cohen's f was used as the effect size measure, with .10, .25 and .40 defined as small, medium, and large effects, respectively (Cohen, 1988).
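For readers unfamiliar with the effect-size metric used above, Cohen's f can be derived from standard ANOVA output via partial eta squared. The sketch below illustrates the computation and the small/medium/large benchmarks; the sums of squares in the example are hypothetical, not values from this study.

```python
import math

def cohens_f(ss_effect: float, ss_error: float) -> float:
    """Cohen's f from ANOVA sums of squares.

    partial eta^2 = SS_effect / (SS_effect + SS_error)
    f = sqrt(partial eta^2 / (1 - partial eta^2))
    """
    eta_sq = ss_effect / (ss_effect + ss_error)
    return math.sqrt(eta_sq / (1 - eta_sq))

def interpret(f: float) -> str:
    """Benchmarks per Cohen (1988): .10 small, .25 medium, .40 large."""
    if f >= 0.40:
        return "large"
    if f >= 0.25:
        return "medium"
    if f >= 0.10:
        return "small"
    return "negligible"

# Hypothetical example: SS_effect = 1.0, SS_error = 15.0
f = cohens_f(1.0, 15.0)
print(round(f, 3), interpret(f))  # 0.258 medium
```

The same f value also drives a priori power analyses, which is why it is the conventional effect-size measure for factorial designs such as the 2 × 2 (agent × feedback type) design used here.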
Discussion
The findings in the educational research literature regarding the effects of animated pedagogical agents (image effect) are varied and inconclusive. Results from some studies support the agent's image effect (e.g., Atkinson, 2002; Dunsworth & Atkinson, 2007; Lester et al., 1997) while others do not (e.g., Moreno et al., 2001). This study was designed to explore Dehn and van Mulken's (2000) recommendation to study a specific type of agent in a specific domain and attempt to disentangle the
Conclusion
The results of the study indicate that an animated agent's ability to foster learning when deployed in a computer-based multimedia learning environment is moderated by instructional components, specifically the type of verbal feedback that an agent delivers. This study supports the idea that different types of verbal feedback may moderate the effect of the presence of an animated agent (image effect). It also suggests that when a computer-based multimedia learning environment is complemented by
References (63)
- Level of adjunct question, type of feedback, and learning concepts by reading. Contemporary Educational Psychology (1988).
- Fostering social agency in multimedia learning: examining the impact of an animated agent's voice. Contemporary Educational Psychology (2005).
- Designing nonverbal communication for pedagogical agents: when less is more. Computers in Human Behavior (2009).
- We care about you: incorporating pet characteristics with educational agents through reciprocal caring approach. Computers & Education (2012).
- Eliciting self-explanations improves understanding. Cognitive Science: A Multidisciplinary Journal (1994).
- The impact of animated interface agents: a review of empirical research. International Journal of Human-Computer Studies (2000).
- Fostering multimedia learning of science: exploring the role of an animated agent's image. Computers & Education (2007).
- Development of NASA-TLX (Task Load Index): results of experimental and theoretical research.
- Do pedagogical agents make a difference to student motivation and learning? Educational Research Review (2011).
- Animated agents in K-12 engineering outreach: preferred agent characteristics across age levels. Computers in Human Behavior (2013).
- The impact of learner attributes and learner choice in an agent-based environment. Computers & Education.
- Fostering achievement and motivation with bug-related tutoring feedback in a computer-based training for written subtraction. Learning and Instruction.
- Investigating the impact of pedagogical agent gender matching and learner choice on learning outcomes and perceptions. Computers & Education.
- Making the abstract concrete: visualizing mathematical solution procedures. Computers in Human Behavior.
- The effects of delay of feedback on a delayed concept formation transfer task. Contemporary Educational Psychology.
- Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction.
- Motivating agents in software tutorials. Computers in Human Behavior.
- Educational interface agents as social models to influence learner achievement, attitude and retention of learning. Computers & Education.
- An elaborative processing explanation of depth of processing.
- Optimizing learning from examples using animated pedagogical agents. Journal of Educational Psychology.
- Does the type and degree of animation present in a visual representation accompanying narration in a multimedia environment impact learning?
- A meta-analysis of the effects of feedback in computer-based instruction. Journal of Educational Computing Research.
- Working memory, thought, and action.
- The instructional effect of feedback in test-like events. Review of Educational Research.
- Simulating instructional roles through pedagogical agents. International Journal of Artificial Intelligence in Education.
- Current issues and open questions in cognitive load research.
- Feedback and self-regulated learning: a theoretical synthesis. Review of Educational Research.
- Cognitive and affective benefits of an animated pedagogical agent for learning English as a second language. Journal of Educational Computing Research.
- Statistical power analysis for the behavioral sciences.
- Animated pedagogical agents in multimedia educational environments: effects of agent properties, picture features and redundancy. Journal of Educational Psychology.
- Designing instructional examples to reduce intrinsic cognitive load: molar versus modular presentation of solution procedures. Instructional Science.