Article

Exploring the Effect of Robot-Based Video Interventions for Children with Autism Spectrum Disorder as an Alternative to Remote Education

by
Diego Antonio Urdanivia Alarcon
1,
Sandra Cano
2,*,
Fabian Hugo Rucano Paucar
1,
Ruben Fernando Palomino Quispe
1,
Fabiola Talavera-Mendoza
1 and
María Elena Rojas Zegarra
1
1
Universidad Nacional de San Agustín de Arequipa, Arequipa 04000, Peru
2
School of Computer Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso 2340000, Chile
*
Author to whom correspondence should be addressed.
Electronics 2021, 10(21), 2577; https://doi.org/10.3390/electronics10212577
Submission received: 16 September 2021 / Revised: 14 October 2021 / Accepted: 18 October 2021 / Published: 21 October 2021
(This article belongs to the Special Issue Human Computer Interaction and Its Future)

Abstract

Education systems are currently in a state of uncertainty in the face of the changes and complexities that have accompanied SARS-CoV-2, leading to new directions in educational models and curricular reforms. Video-based interventions (VBIs) are a form of observational learning based on social learning theory. This study therefore makes use of a humanoid robot called NAO, which has previously been used in educational interventions for children with autism spectrum disorder (ASD), integrating it into video-based interventions. The aim is to characterize, in an everyday context, the mediating role of the NAO robot presented in group videoconferences to stimulate video-based observational learning in children with cognitive and social communication deficits. The children in the study demonstrated a minimal ability to understand simple instructions. This qualitative study was applied to three children with level III ASD, special education students at a Center for Special Basic Education (CEBE) in the city of Arequipa, Perú. Likewise, an instrument was designed for the assessment of the VBIs by a group of psychologists. The results showed that the presence of the NAO robot in the VBIs successfully stimulated the children's interaction capabilities.

1. Introduction

The World Health Organization (WHO) declared COVID-19 a global pandemic. Consequently, in March 2020, face-to-face classes were suspended in all centers of public and private basic and higher education. The intention of this measure was to prevent educational institutions from becoming sources of contagion among students, asking students and parents to take responsibility and make appropriate decisions [1]. Education had to switch to a virtual mode (synchronous and asynchronous). However, some groups were more affected than others, such as children with autism spectrum disorder (ASD) [2]. ASD is a group of conditions characterized by social communication problems and difficulties with social interactions [3]. These children require educational and pedagogical means that help them to enhance their cognitive development and critical thinking in the environment in which they live and grow, in order to achieve better living conditions. ASD is considered by both the World Health Organization (WHO) and the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) [4] to be a pervasive developmental disorder, characterized by a wide variety of clinical and behavioral expressions that result from multifactorial dysfunctions in the development of the central nervous system [5,6].
Children with ASD are among the groups at highest risk of complications from COVID-19, because they may present anxiety, learning problems, immune system alterations, behavioral problems, impaired social communication, attention deficit with hyperactivity, irritability, and aggression, all of which represent additional challenges in times of a pandemic [7]. Children with special learning needs have greater difficulties during the teaching–learning process due to their cognitive and motor limitations, among other factors. According to [4], ASD is coded at three levels of severity. At level one, children need help: even though their vocabulary has not been affected, they express atypical or unsatisfactory responses to social overtures from others, and may seem to have little interest in social interactions. At level two, children need substantial help, as they have deficits in verbal and nonverbal communication skills, limited social interactions, and restricted and inflexible behaviors that affect their performance. At level three, children need very substantial help: severe deficits in verbal and nonverbal social communication skills cause severe disturbances in performance, very limited initiation of social interactions, and minimal responses to social overtures from others. Given these limitations, traditional learning methods are rudimentary and poorly adapted to the needs of this population. The result is a poor learning process and the isolation of this population, which needs to be educated and prepared for the environment that surrounds it. Any child's educational process is shaped by social experience, which begins in the family and continues in school and society.
Video-based interventions (VBIs) are a type of observational learning and imitation [8]. Hughes and Yakubova [9] mention that VBI has strong evidence for teaching social, communication, functional behavior, play, and self-help skills to students with ASD. The use of technology in interventions for children with ASD has a positive effect [10]. Therefore, using video as an instructional tool can help children with ASD in social skills teaching, because it allows the incorporation of various elements that highlight key social cues. VBI has been used as an effective social communication support for learners diagnosed with autism [11]. Reasons for using VBIs with children with ASD include poor motor imitation, reduced interactions with peers, and a lack of attention and eye contact with an appropriate model [12]. Imitation skill is therefore a prerequisite for VBI: the subject must be capable of imitating the model in order to benefit from the intervention [13]. However, children with ASD have specific imitation deficits, particularly those with a higher level of severity, which may impact VBI outcomes. In a study from 2020, Lytridis et al. [14] designed a set of video-based instructions for teaching personal hygiene life skills, in which the instructor is an NAO robot. That study does not make clear the ASD severity level at which it was conducted. In addition, the evaluation instrument used was a satisfaction questionnaire with a Likert scale from 1 to 10, answered by the parents; consequently, the study does not detail the expressive reactions the children had when they saw that the instructor was a robot. Moreover, although the authors mention that they designed personalized support material, they do not describe in depth what the personalization consisted of in each video for each child, nor do they go into much depth about the profile of children with ASD. Therefore, this type of VBI should follow a User-Centered Design (UCD) approach [15].
VBIs are a form of observational learning based on social learning theory, in which students learn by observing and then imitating the actions of others [16]. Observational learning requires the coordination of cognitive functions such as action representation, attention, and motivation [17]. Children with ASD have limited social abilities, such as imitation, joint attention, goal understanding, affect sharing, communicative use of language, and play [18]. A study by [19] showed that subjects with ASD imitate less across a variety of tasks. Another study [20] reported increased imitation in ASD, with behavior (echopraxia) and speech (echolalia) imitated from others without regard to the context and meaning of the actions.
Social robots are being used in educational interventions for children with ASD [21], where children's reactions have been very positive. It is therefore worth considering the inclusion of a social robot as multimedia material in VBIs, with the robot taking the role of instructor. Could a child with ASD react positively to an NAO robot? Could robot-based video interventions motivate imitation in children with ASD?
Therefore, the main objective of this study is summarized in the following research question: Could video-based interventions (VBIs), in which the NAO robot is used as an instructor, produce expressive reactions in children with autism spectrum disorder at severity level three?
This paper discusses our approach of using the NAO robot in VBIs to evaluate expressive reactions in children with ASD. The paper is structured as follows: Section 2 presents a brief description of the theoretical concepts related to this study; Section 3 presents the methodology, in which qualitative data were collected through an instrument designed to identify facial, verbal, and body expressions, and describes each activity applied with each child and the results obtained; Section 4 discusses the results and highlights the limitations of this study; and finally, Section 5 provides conclusions and proposes directions for future work.

2. Background

2.1. Child-Centered Design (CCD)

User-Centered Design (UCD) includes the end-user in all stages of product design. Applying this approach to children is useful, as it allows us to identify the needs of the child, which differ from those of an adult, especially if he/she has ASD.
UCD can be defined as “a multidisciplinary activity, which incorporates human factors and ergonomic knowledge and techniques with the aim of increasing efficiency and productivity, improving human working conditions, and counteracting the possible adverse effects of its use on human health, safety and performance” [22]. Another definition frames UCD as an approach to design based on gathering information about the people who will use the product. UCD processes focus on users throughout the planning, design, and development of a product [23].
The importance of studying product design for children arises from the term child-centered design, discussed by Hourcade [24] within the ACM community. Hourcade observes that as children grow up using interactive devices more and more frequently, the way they learn, play, and interact with others has changed. Whether these changes are positive or negative will depend on how the interactions with technology are designed and how the electronic devices are used.
Hourcade [24] defined ten aspects that must be considered to design a child-centered product: (1) multidisciplinary teamwork; (2) stakeholder participation; (3) assessing impact over time, as children do not change immediately when using technology; (4) ecological design, not just technology, since the use of technology is significantly affected by context; (5) addressing the practical reality of children, since for a technology designed for children to succeed, it needs to work in real children's contexts; (6) personalization, as children come to technologies with different life experiences, skill sets, neural structures, and bodies; (7) a hierarchy of skills, as in many domains, including music and education, learning processes build on basic competencies; (8) supporting creativity, as learning can be more motivating when it serves a purpose the child is conscious of, such as creation or construction; (9) increasing human connections and interactions with teachers, friends, and others; and (10) incorporating play, as physical elements can benefit children in many ways, including the development of learning competencies.

2.2. Video-Based Interventions

In 1969, Bandura described a type of observational learning in which the learner imitates an observed skill [16]. Rayner et al. [25] defined VBI as a set of concepts: (1) video feedback; (2) video modelling; (3) video self-modelling; (4) point-of-view modelling; (5) video prompting; and (6) computer-based video instruction. Video feedback is a self-monitoring technique in which an individual is recorded on video, and the observed behavior is then collected and reviewed with an experimenter. Video modelling is the most widely researched and implemented facet of VBI: desired behaviors are acquired by watching a video demonstration of the behavior by a proficient model and then imitating it [26]. In video self-modelling, participants observe themselves exhibiting the targeted behavior after the experimenter has edited out all undesirable behaviors to produce a video that displays only the targeted behaviors [27]. In point-of-view modelling (PVM), the camera is directed to embrace the scene as the participant would see it, perhaps aimed at a specific setting or at a pair of hands performing the desired task. Video-prompting procedures resemble video modelling procedures in that both present the participant with video clips of a model expertly performing a desired task or behavior. Finally, computer-based instruction (CBI) is often used interchangeably with the term “multimedia”: a computer is used to interactively present a variety of media, including text, music, pictures, and video [28]. In this study, computer-based instruction was applied.

2.3. Robot-Based Interventions

The field of robotics has been of interest for research and development in several areas, and is now gaining interest in applications for people with ASD. A study conducted by [29] used an NAO robot with seven children with high-functioning autism; the results show that the children displayed extreme excitement when first meeting NAO. However, six of the seven children with ASD had some difficulty responding to one or more of the tasks (verbal communication, facial expressions, joint attention, imitating, and pointing). In another study [30], the author proposed integrating a robot into the current learning environment for children with ASD, designing six scenarios based on the existing syllabus to teach communication skills.
Interventions with robots for children with ASD have thus been shown to increase children's emotional expression [31], attention [32], and joint attention [33]. NAO is one of the most widely used humanoid robots in special education [34]; this is because NAO has simpler features than a real human and, compared with other robots, is one of the most advanced commercially available humanoids and can be programmed. However, in these studies the NAO robots interact with the children physically. In the context of the pandemic, children with ASD are confined to their homes. Therefore, this study uses video-based modelling to teach social and communication skills through an NAO robot. The approach relies on the flexibility of watching and re-watching targeted skills being modelled; the skill or task to be imitated is recorded with an NAO robot as the instructor. Video-based modelling (VBM) has been particularly effective in supporting independent skill development in schools for children with ASD [35]. Video modelling involves a participant viewing a skill being modelled before completing the skill for himself or herself [36]. VBIs constitute a low-cost alternative to traditional approaches in which children usually interact with a robot in a real environment.

3. Methodology

This study follows a qualitative research design, which generates descriptive and explanatory data [37], to gain an in-depth understanding of the context and users through non-numerical means and direct observation. The data collected are video recordings, direct observations, user profiles, and an instrument designed to identify the expressivity of emotions through verbal and non-verbal behaviors. The study was designed in the following stages: initial diagnosis, design and development, execution and implementation, evaluation, and results analysis.

3.1. Data Collection Instruments

A psycho-pedagogical profile was designed, in which characteristics such as cognitive development, motor skills, communication level, language, social and emotional aspects, adaptive behavior, learning opportunity, and interests were considered to build the profile for each child. An instrument was designed by psychologists to evaluate the emotional expressions of each child. The instrument comprises three dimensions: facial, verbal, and body expression. The definitions of these dimensions are based on a study conducted by Trevarthen [38]. The instrument consists of 17 aspects, each evaluated on a Likert scale from 0 to 4, where the values are: never (0), almost never (1), sometimes (2), almost always (3), and always (4). The instrument was validated by five experts specialized in psychology for use with children aged 3 years and older; their validations were quantified by means of the Aiken V coefficient, obtaining a value of 0.9 for each dimension (facial expression, verbal expression, and body expression).
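As a rough illustration of how such expert ratings can be aggregated, the sketch below computes the Aiken V coefficient for a single item; the five ratings shown are hypothetical and are not the study's validation data.

```python
def aiken_v(ratings, low, high):
    """Aiken's V for one item: V = S / (n * (c - 1)), where S is the sum of
    each rater's score minus the lowest possible score, n is the number of
    raters, and c is the number of rating categories."""
    n = len(ratings)
    c = high - low + 1
    s = sum(r - low for r in ratings)
    return s / (n * (c - 1))

# Hypothetical example: five experts rate an item on a 1-5 relevance scale.
print(aiken_v([5, 5, 4, 5, 5], low=1, high=5))  # 0.95
```

Values of V close to 1 indicate strong agreement among the raters that the item is valid; the 0.9 per dimension reported above falls in this range.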

3.2. Participants

Initially, the study was designed for six children with level III ASD. However, during the global pandemic (SARS-CoV-2), some parents withdrew their children from the school, so the number of children was reduced to three. The pandemic context further reduced the sample because not all of the population with level III ASD in Peru has access to technology (internet). The population with which we present this study is therefore very small due to the factors mentioned above.
The children were diagnosed by medical psychiatrists duly accredited by the National Health System [1]. The study was applied to three children with a level III ASD diagnosis at the Center for Special Basic Education (CEBE) in the city of Arequipa, Perú. The children were between 9 and 13 years old. All of them met the criteria established in the DSM-5 (Diagnostic and Statistical Manual of Mental Disorders, 5th Edition); that is, they need very substantial help, displaying significant deficits at the verbal and non-verbal level, minimal social interactions, affected language, and restricted behaviors that significantly affect their performance. Level three is the most severe form of autism, requiring very substantial support; children at this level have severe deficits in verbal and nonverbal skills, which can make it very hard for them to function, interact socially, and deal with a change in focus or location. This is important because of the inclusion criteria that were considered: the children had been evaluated at birth with the Apgar test, had been diagnosed with ASD by professionals duly accredited by the health system in Peru (Ministry of Health, MINSA), and were enrolled in a special Basic Education Center, which reduced the number of participants. Furthermore, most of these children were of low and/or middle socio-economic level.
Moreover, one parent (male or female) and one special education teacher participated in this study to provide information on attention and emotions while the children with ASD watched the video of an educational activity in which an NAO robot acted as instructor. In the video, NAO communicated with the children through behaviors including dancing, speaking, and moving its hands. The objective was to stimulate social-affective skills such as imitation and emotional expression.
Parents provided their signed, informed consent to participate in the study, at which point the conditions and protocol of the experiment regarding publication of the data were explained. In turn, the procedures followed met the human experimentation ethical standards according to the Helsinki declaration.
Selection of the sample was conducted by a non-probabilistic convenience sampling approach. The inclusion criteria in this sample were autistic children who (a) did not present sensory problems; (b) who followed instructions; and (c) who repeated instructions. At the time of starting the study, the participants were receiving regular special basic education in the non-face-to-face modality, according to the recommendations of the Ministry of Education of Peru, due to the state of emergency relating to COVID-19.

3.3. Initial Diagnosis

According to the context of use, the children were educated remotely, as the global pandemic (SARS-CoV-2) led to the disruption of face-to-face classes for children with ASD. However, before the interruption of face-to-face classes, the children were able to interact face-to-face with the NAO robot (Figure 1). An interaction between NAO and a group of children with ASD was designed. First, the NAO robot introduces itself, saying, “I'm a little humanoid and my name is NAO”, and that it wants to know each child's name. Next, each child says his or her name, while the teacher coordinates who should start and who follows. Then the NAO robot says, “I am very good at dancing”, and starts dancing. Finally, the NAO robot says, “goodbye, nice to meet you”.
To specify the children’s requirements, a psycho-pedagogical profile was produced (see Table 1) with the help of two psychologists, where characteristics such as cognitive development, motor skills, the communication level, language, social and emotional aspects, adaptive behavior, learning opportunity, and interests were considered.
Table 1 shows that the children have attention and memory deficits. These basic mental functions are necessary for representation, i.e., for analyzing, processing, and storing information. Autistic children share a characteristic deficit in expressive and receptive language. A lack of interest in communicating with other people appears, together with difficulty in understanding and expressing nonverbal communication patterns such as eye contact, use of gestures, attention, and smiling. Difficulty in understanding and expressing what they feel causes great frustration, manifesting as adjustment problems: not speaking or maintaining eye contact in child 1, breaking objects in child 2, and a lack of expressiveness in child 3. Given these developmental deficits in autistic children, it is essential to promote joyful and dynamic interactions that foster a very positive mood and participation in their favorite activities, integrating language, nonverbal communication, cognition, and play. As for interests, the children were interested in videos and photos.
The three children who participated in the study had been evaluated at birth with the Apgar test [39], based on five criteria (appearance, pulse rate, reflex irritability, activity, and respiratory effort), obtaining a normal score. Child 1 presented a deficiency in vitamins B11 and B12, and child 3 was born prematurely (before 7 months), whilst children 1 and 2 were carried to 9 months. As for complications, children 1 and 3 were diagnosed with jaundice. The children selected for the study were diagnosed with ASD by professionals duly accredited by the health system in Peru (Ministry of Health, MINSA) and are enrolled in a special Basic Education Center (CEBE).

3.4. Design and Development of the Activities

The activities were carried out with the NAO robot, a humanoid robot developed by Aldebaran Robotics, France. NAO is 58 cm tall, weighs 5 kg, and has 25 degrees of freedom. NAO maintains its balance using four pressure sensors on each foot that track the center of pressure. It has speakers and a voice recognition and analysis system, which enable it to listen and speak. In this study, NAO communicated with the children through behaviors such as dancing, talking, and walking, among others.
According to the profile identified for each child (Table 1), customized activities were designed. The scenario for designing each activity has the following characteristics:
Social initiation: NAO says: “Hello XXXX (child name), I’m a small humanoid and my name is NAO”. While NAO speaks, he moves his hands and changes his posture.
Conversational skills: NAO asks the child if he remembers the robot's name. NAO says, “Hello XXXX (child name), do you remember my name? What is my name?”
Dancing: NAO asks the child by name if he would like to dance, and NAO begins dancing.
Imitation: The robot initiates an imitation exercise designed to determine whether the child remembers imitation skills gained during face-to-face lessons with the teacher. The exercise imitation was personalized for each child according to the profile identified. Figure 2 shows some imitation exercises made by the children.
Child 1: recognizing the parts of the body (head, shoulders, knees, feet), with raising and lowering exercises demonstrated by NAO.
Child 2: inflating a balloon. NAO holds a balloon and asks the child, “What's in my hand?”
Child 3: washing the hands (see Figure 3). NAO asks him about the toiletries and shows them one by one. Then, NAO shows the hand-washing sequence, which the child imitates with the real toiletries: (1) wetting the hands; (2) rubbing the hands with soap; (3) rinsing the hands with water; (4) drying the hands with a towel. Finally, NAO asks, “How should our hands be?”
It is important to mention that, across the four sessions held with each child, a new video was designed for each session according to the defined imitation learning objective.
Play: NAO invites the child to dance. This is to relax him a bit after his activity. NAO does the corresponding activity through a song with the intention of motivating the child further.
Reinforcement: NAO asks the child questions related to the activity being worked on.
The design of each activity was supported by pictograms and NAO. Moreover, the background of the video was green so that an effect called “chroma key” could be applied, in which the background is replaced by a still or moving image so that the robot appears to be in front of it. For each of the designed activities, the NAO robot had to be programmed to perform movements and to talk.
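As an aside, the chroma-key compositing described above can be sketched in a few lines of NumPy. This is a minimal illustration of the technique, not the actual video pipeline used for the study's recordings, and the green-detection thresholds are assumptions.

```python
import numpy as np

def chroma_key(frame, background, g_min=100, dominance=40):
    """Replace green-screen pixels of `frame` with the matching pixels of
    `background`. A pixel is keyed out when its green channel exceeds
    `g_min` and dominates red and blue by at least `dominance`."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    mask = (g > g_min) & (g - r >= dominance) & (g - b >= dominance)
    out = frame.copy()
    out[mask] = background[mask]  # copy replacement pixels where mask is True
    return out

# Tiny 1x2 RGB example: one pure-green pixel, one gray pixel.
frame = np.array([[[0, 255, 0], [128, 128, 128]]], dtype=np.uint8)
background = np.array([[[10, 20, 30], [10, 20, 30]]], dtype=np.uint8)
print(chroma_key(frame, background).tolist())  # [[[10, 20, 30], [128, 128, 128]]]
```

In production, video editors refine this basic idea with soft (feathered) masks and spill suppression, but the core operation is the per-pixel replacement shown here.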

3.5. Video-Based Interventions

During the recording of the videos, care was taken to ensure that the camera was positioned at the same level as the robot's eyes, giving the impression that the robot was looking at the child watching the video; in this way, eye contact with the child could be simulated, since gaze or facial tracking could not be applied for synchronous activities. The Google Meet platform was used for the synchronous meetings, as it did not require the parents to install any software on the computer.
Considering the deficiencies in children with autism, [40] stated that certain basic emotions, such as happiness, fear, and sadness, were necessary for developing communication, as well as for deliberately sharing experiences about facts and things [41].
To evaluate the video-based interventions, an instrument was used that integrates three dimensions: facial, verbal, and body expression. This instrument evaluates the effect of video-based interventions using the NAO robot for observational learning. The definitions of these dimensions were based on studies conducted by Trevarthen [42,43], which analyze how the wide variety of facial expressions shown by infants is interpreted by the caregiver, as well as how other expressive responses, such as voice, sounds, and body movements, are emitted by the infant to express an emotional state.
Therefore, the proposed instrument identifies the three dimensions considered most important for analyzing the facial, verbal, and body expressions of a child with ASD, each of which is associated with a set of aspects, as shown in Table 2.
To quantify the level of the child–robot interaction, the following rating scale was used for each dimension: Facial expression: high (20 to 14), medium (13 to 7), and low (6 to 0); Verbal Expression: high (36 to 24), medium (23 to 7), and low (6 to 0); and Body Expression: high (16 to 12), medium (11 to 7), and low (6 to 0).
A total score is also used to assess the expressiveness of emotions. A positive expression, with a score of 35 or higher, is when the child attends to, understands, and expresses emotions with gestures, voice, and body movements coherent with the context. A facilitative expression, with a score between 15 and 34, is when the child has some difficulty attending to, understanding, and expressing some emotions with gestures, voice, and body movements coherent with the context. Finally, a negative expression, with a score between 0 and 14, is when the child has difficulty attending to, understanding, and expressing emotions with gestures, voice, and movements coherent with the context.
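The scoring rules above can be expressed compactly; the following sketch simply encodes the cut-offs stated in the text as plain Python, for clarity.

```python
def dimension_level(dimension, score):
    """Map a dimension score to low/medium/high using the stated cut-offs."""
    cutoffs = {
        # dimension: (lower bound of medium, lower bound of high)
        "facial": (7, 14),  # high 14-20, medium 7-13, low 0-6
        "verbal": (7, 24),  # high 24-36, medium 7-23, low 0-6
        "body":   (7, 12),  # high 12-16, medium 7-11, low 0-6
    }
    medium, high = cutoffs[dimension]
    if score >= high:
        return "high"
    return "medium" if score >= medium else "low"

def total_expression(score):
    """Classify the total score as positive (35+), facilitative (15-34),
    or negative (0-14)."""
    if score >= 35:
        return "positive"
    return "facilitative" if score >= 15 else "negative"

print(dimension_level("facial", 10), total_expression(21))  # medium facilitative
```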

3.6. Results

The instrument (see Table 2) was applied to each child while he/she performed the activity, interacting with the NAO robot during the sessions (Figure 4). A psychologist assigned a score to each aspect of each dimension according to what he or she observed. The children were always accompanied by a responsible person (father or mother) and a special education teacher who supported them in the activity. Four sessions were carried out for children 1 and 3 and three sessions for child 2, because child 2 showed no improvement in score in the third session, so it was decided not to carry out a fourth session. Each session lasted between 2 and 5 min.
For the interactive aspect of the scenario to be successful, it was necessary to ensure constant eye contact with the child. Thus, when the child’s face was not visible, the robot called his name to get his attention, since the child’s attention span was very short.
The results obtained (Table 3) show that child 1 and child 3 obtained a facilitative expression score, indicating that these children have some difficulty in attending to, understanding, and expressing some emotions with gestures, voice, and body movements. Child 1 (Figure 4a) presented a better response in expressions in the second and third sessions compared with the first. This is because in the second session he tried to speak with the robot but got no response; he learned from this, as in the third session he did not try again but imitated the robot's sounds. In the case of child 2, there was an improvement in each session in which he interacted with the NAO robot. Figure 4b shows that his score improved incrementally in each session, although not as much as expected, since he had difficulty in attending to, understanding, and expressing emotions with gestures, voice, and body movements. It is important to mention that child 2 has an attention and memory deficit and does not speak. Meanwhile, child 3 (Figure 4c) obtained the best score of the three children in each of the sessions. Both child 2 and child 3 smiled when seeing and hearing the NAO robot in at least two sessions. Only child 1 showed fear, in session 4, when seeing and hearing the NAO robot.
Figure 4d shows the results averaged over the sessions for each child. The best result was obtained by child 3, with a score of 21.7 (SD: 5.4, Var: 29.6), followed by child 1 with 18 (SD: 1.41, Var: 2) and child 2 with 12.3 (SD: 2.51, Var: 6.3).
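The per-child summary statistics can be reproduced from the session scores reported in Table 3 using Python’s standard `statistics` module; a minimal sketch for children 1 and 2 (sample standard deviation and variance are assumed, as they match the reported values):

```python
from statistics import mean, stdev, variance

# Per-session scores from Table 3
scores = {
    "child 1": [16, 19, 19, 18],
    "child 2": [10, 12, 15],
}

for child, s in scores.items():
    # Sample (n-1) standard deviation and variance
    print(child, round(mean(s), 1), round(stdev(s), 2), round(variance(s), 2))
```

Small differences from the reported figures (e.g., 2.52 vs. 2.51) would come down to rounding.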
Figure 5 shows the results obtained in the different sessions for each child. Child 1 showed progress from the first session to the third, but no further progress in the fourth. Child 2, who had only three sessions, showed an effect only in session 3. Meanwhile, child 3 had the best reaction to the NAO robot. Both child 1 and child 3 presented better responses than child 2, perhaps because both children were interested in images and videos.
The results obtained for each dimension are described below.

Facial Expression 

Figure 6 presents the statistical analysis carried out on the facial expression dimension. The results of child 2 were not considered, since he did not show any facial expression; the graphs therefore show the results for child 1 (Figure 6a) and child 3 (Figure 6b). Facial expressions could not always be assessed accurately because of factors such as the distance between the child and the camera, the inclination of the face when listening to the presentation song, attention spans of only about 5 s, staring at the computer screen, and generally low facial expressiveness. To evaluate facial expressions, features such as the mouth, eyebrows, and eyes were considered. Child 1 obtained a mean score of 4.25 on the facial expression dimension, while child 3 obtained a mean score of 3, which is very low. Figure 6d shows the progress across sessions for each child, where small changes between sessions can be observed. For example, from the first to the second session, child 1 obtained a better response on the first aspect of this dimension, “smiles on seeing the robot”. However, in the third and fourth sessions he did not smile: in the third session he cried on seeing and hearing the robot, while in the fourth session he showed fear. Child 3 did not change his facial expressions between the first and third sessions, but in session 4 he smiled on seeing and hearing the robot. It can therefore be said that the variations in affect presented by the children in each session were due to the affective state each child was in, which likewise influenced their interaction with the NAO robot. Child 2, in the fourth session, was neutral and showed neither positive nor negative facial expressions.

Verbal Expression 

Figure 7 shows the verbal expression scores for each child. Child 1 (see Figure 7a) was assigned the same values in the third and fourth sessions for the four aspects assessed in this dimension (see Table 2), which relate to the intention to talk to the robot, imitating its sounds, and keeping silent when listening to and watching it. Child 1 responded from the first session by keeping silent when listening to and watching the robot, reflecting attention and curiosity. For child 2 (see Figure 7b), aspects 11 and 12 were maintained across all three sessions. Meanwhile, for child 3 (see Figure 7c), aspects 9, 10, 11, and 12 were maintained in the third and fourth sessions, but in the last session he made attempts at dialogue and at imitating the sounds of the NAO robot.
The sessions were averaged for each child as follows: child 1 (8.5), child 2 (8.7), and child 3 (9), which shows that this dimension had a positive effect for each child and is related to imitation of and attention to the robot. Figure 7d shows the results per session for each child, where their behavior is very similar.
Some observations obtained for each child when assessing this dimension are described below:
Child 1: Recognized NAO by its presentation song and watched the computer screen for 2 s at a time, with intervals of 1 s. In the arm movement session, he demonstrated satisfaction and an emotional expressiveness that increased in each session.
Child 2: On listening to the NAO song, she no longer showed irritability and observed NAO for 2 s, interacting with NAO using a balloon in her arm movement learning session.
Child 3: Recognized NAO by its introduction song and called it by its name, making hand signs of likes and actively participating in the hand washing session with enthusiasm. He thereby demonstrated the facilitation of emotional expressiveness.

Body Expression 

Figure 8 shows the results obtained by each child in the body expression dimension, which includes aspects 13 to 17 (see Table 2). Figure 8a shows that child 1 scored highest on aspect 17 (“ignores the movements of the robot”), while aspect 16 (“follows the movements of the robot with his eyes”) showed a change in the fourth session. It must be considered that child 1 has an attention deficit and presents language alterations; verbal communication is therefore not favorable for this child.
On the other hand, child 2 (see Figure 8b) improved in the third session compared to the first and second, showing some changes in imitating the robot, repeating its actions, and not completely ignoring its instructions. Child 3 (see Figure 8c) presented the best results compared to child 1 and child 2: in all sessions he imitated the robot’s movements, repeated its actions, and followed its movements with his eyes, and only in one session did he ignore some of the robot’s instructions. It should be considered that child 3 has an attention deficit (see Table 1).
The imitation activities that the NAO robot performed for the children to follow were: (1) washing hands, (2) body parts, and (3) blowing up a balloon. It is important to clarify that the children followed these instructions and movement actions while watching a video of the NAO robot performing them. Not all the children imitated the NAO robot; perhaps, if NAO had been tangibly present, these aspects would have changed. This is one of the questions raised by the study.
Some observations captured for each child in this dimension are described below:
Child 1: The body parts session was used. He followed NAO’s directions, demonstrating satisfaction and a desire to participate.
Child 2: The arm movement session was used. He followed the NAO’s instructions and did not show any difficulty.
Child 3: The hand washing session was used. He followed the instructions of NAO, interacting appropriately and paying attention to the instructions. He also taught the NAO how to wash its hands and it followed his instructions.
The sessions were averaged for each child: child 1 (4.3), child 2 (3.7), and child 3 (12.3), which reflects that this dimension had a positive effect on child 3, who obtained the best result of the three children. Figure 8d shows that child 3 maintained his values in all the aspects of this dimension across all sessions, while child 1 showed no change from the second to the fourth session and child 2 showed an effect only in the third session.
Based on the results obtained, Spearman’s correlation analysis was used to identify correlations between the scores the children received on the 17 items evaluated; the results are reported in Figure 9. Spearman’s rho (r) indicated strong, moderate, weak, or no relationships between items. Items 17 and 8 had a moderate positive relationship (r = 0.476). There was a weak positive correlation between item 16 and items 1 and 2 (r = 0.324 and r = 0.299), a weak correlation between item 10 and items 1, 2, 3, and 4 (r = 0.219, r = 0.194, r = 0.221, r = 0.221), and a moderate correlation between items 10 and 9 (r = 0.562). A moderate negative correlation was found between item 11 and items 9 and 10 (r = −0.418, r = −0.497), and a strong positive correlation between item 16 and items 13 and 14 (r = 0.802, r = 0.802).

4. Discussion

Video-Based Interventions (VBIs) have been shown to be useful tools for students with ASD, as they offer an alternative through which such students can learn by visual means [44]. Visually based approaches may therefore help address pervasive difficulties in students with ASD. For best results, the video should be viewed in a consistent environment; to increase the relevance of the instruction, this environment should be the place where the child is expected to demonstrate the skill. The materials used in the video should be the same as those that learners are expected to use when demonstrating the target behavior. The videos can be viewed in a group or independently, depending on the needs of the learners and the educational environment [45].
Therefore, technology can help as an educational tool. Jackson [45] wrote: “technology increases independent, personal productivity and empowerment. It can facilitate the kinds of interactions that occasion instruction, and it can transform static curriculum resources into flexible digital media and tools”.
The results showed that child 1 smiled whenever he saw and heard the robot in the three sessions. Within a short period of time, he tried to converse with and imitate the sounds of the robot and followed the movements of the robot with his gaze at various times. Child 2 only cried once in the three scheduled sessions. In the first two sessions, when he saw and heard the robot, he smiled; in the second session, he tried to converse with the robot, and in the last session, he followed the movements of the robot with his eyes. Finally, child 3 did not show emotion on either seeing or hearing the robot. In the first and last session, at one point, he tried talking to the robot. In the last session, he tried to imitate sounds, although he mostly remained silent. In the second and third sessions, at times, he tried to imitate and repeat the actions of the robot, and in the last session, he briefly followed the movements of the robot with his gaze. These results demonstrate the importance of robotics for children diagnosed with ASD and how, after such interaction, they respond positively in communication skills and eye contact. This relates well to [46], which indicates that people with ASD make less use of non-verbal communication channels such as eye contact, facial expressions, gestures, and body language.
The results are limited by the small number of participants, so they should be interpreted with caution. A personalized video-based intervention was designed that required not only a NAO robot but also digital material, such as pictograms, physical elements, and a scenario arranged so as not to distract or overload the child with ASD.
However, some recommendations can be made: (1) build a psycho-pedagogical profile for each child to identify his or her interests and psycho-educational needs; (2) give a clear description of the objective of the educational intervention; (3) the storytelling of the video should include social initiation, conversational skills, dancing, imitation, play, and reinforcement, with a robot of humanoid appearance acting as instructor; (4) the information presented must be precise and adapted for each child; (5) the multimedia material can be supported by augmentative-communication pictograms and audio; (6) VBIs must be short; and (7) the device on which the video is watched can influence the attention span.
Therefore, the following set of steps is proposed for applying the study (see Figure 10):
  • Step 1. Before the study, the teachers and parents receive a detailed explanation of the activities. In addition, the family of each child with autism must have the technological instruments available for the virtual presentation of the robot, and they must give their informed consent for the study.
  • Step 2. A psycho-pedagogical profile must be applied to the child.
  • Step 3. The design must be adapted to the profile of each child, and to achieve positive acceptance in each session, games, objects, and favorite music must be incorporated. Short videos showing the NAO robot performing whole-body movements must be presented to the participants, using a green screen background to avoid visual stimulus overload and ensure that the NAO robot remains the focus of the child’s attention.
  • Step 4. In the intervention phase, each child must participate in three scheduled sessions each week, lasting three minutes with content related to greeting, body parts and hand washing in an established sequence, to avoid multiple expressions and facilitate a simplified interaction. In this way, on presenting the structured and repetitive sessions, the child can pay attention to the words or any movement.
  • Step 5. Simultaneous interaction. The presentation of the NAO robot focuses on building a close relationship with the autistic children through reinforcing play routines and verbal and non-verbal language, to promote a positive state of mind during the learning process. To analyze the child–robot interaction, the instrument must be applied, which measures three dimensions: facial, verbal, and body expression.
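As an illustration only, the schedule described in steps 3–5 could be encoded as a small configuration structure (the field and function names below are assumptions for the sketch, not part of the study’s software):

```python
# Hypothetical encoding of the intervention plan described above.
session_plan = {
    "sessions_per_week": 3,
    "duration_minutes": 3,
    "background": "green screen",  # to avoid visual stimulus overload
    "contents": ["greeting", "body parts", "hand washing"],  # fixed sequence
    "dimensions_assessed": ["facial", "verbal", "body"],
}

def next_content(session_index):
    # Contents follow the established, repetitive sequence.
    return session_plan["contents"][session_index % len(session_plan["contents"])]
```

Keeping the sequence fixed and repetitive is what lets the child anticipate the session and attend to the words and movements.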

5. Conclusions and Future Work

The results obtained show that progress for each child is very slow and depends on his or her emotional state, which affects performance in the activity. This indicates that even more sessions are needed for the interaction with the robot to have a greater effect on facial, verbal, and body expression.
Moreover, the stronger interest shown by child 1 and child 3 may be explained by the fact that both had already interacted with NAO face-to-face, while child 2, who obtained the lowest results, had not. The children showed a medium level of expression when interacting with their immediate environment using the NAO robot as a mediator. This paves the way for the continued development of more interactive sessions, which confirm the need to start from real, situated situations that allow the development of cognitive and communicative processes. The use of external means and resources helps provoke greater interest and enthusiasm in the autistic child’s interaction with the robot in a social context.
The expression of emotions in children should be stimulated in order to develop their attention processes so that they can improve their learning, which will allow them to adapt to their social environment in a more functional way. Among the directions for future work, the adaptation of facial expression recognition algorithms, using artificial intelligence, to the specific expressive possibilities of children with autism could be considered.

Author Contributions

Conceptualization, D.A.U.A. and F.H.R.P.; methodology, S.C.; software, D.A.U.A.; validation, F.H.R.P., D.A.U.A., F.T.-M., M.E.R.Z. and R.F.P.Q.; formal analysis, F.H.R.P., F.T.-M.; investigation, F.H.R.P.; resources, F.H.R.P.; data curation, R.F.P.Q. and F.T.-M.; writing—original draft preparation, D.A.U.A.; writing—review and editing, S.C.; visualization, S.C.; supervision, F.H.R.P.; project administration, F.H.R.P.; funding acquisition, F.H.R.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Universidad Nacional de San Agustín de Arequipa, UNSA-INVESTIGA, Faculty of Education Sciences within the framework of the project “TEACHING CHILDREN WITH AUTISM SPECTRUM DISORDER USING INCLUSIVE ACTIVITIES BASED ON ROBOTICS”, grant number IAI-008-2018-UNSA.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of UNIVERSIDAD NACIONAL DE SAN AGUSTÍN (protocol code 008-2018-UNSA and 2018).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ministerio de Salud (Ministry of Health, Peru). Available online: https://www.gob.pe/minsa/-2020 (accessed on 5 April 2020).
  2. Amorim, R.; Catarino, S.; Miragaia, P.; Ferreras, C.; Viana, V.; Guardiano, M. The impact of COVID-19 on children with autism spectrum disorder. Rev. Neurol. 2020, 71, 285–291. [Google Scholar] [PubMed]
  3. Weitlauf, A.S.; McPheeters, M.L.; Peters, B.; Sathe, N.; Travis, R.; Aiello, R.; Williamson, E.; Veenstra-VanderWeele, J.; Krishnaswami, S.; Jerome, R.; et al. Therapies for Children with Autism Spectrum Disorder: Behavioral Interventions Update [Internet]; Comparative Effectiveness Review, No. 137; Agency for Healthcare Research and Quality (US): Rockville, MD, USA, 2014. Available online: https://www.ncbi.nlm.nih.gov/books/NBK241444/ (accessed on 21 October 2021).
  4. American Psychiatric Publishing. Diagnostic and Statistical Manual of Mental Disorders; APA: Arlington, VA, USA, 2013. [Google Scholar]
  5. Torras, M.E. Autism Spectrum Disorders. Educational Strategies for Children with Autism (Trastornos del Espectro Autista. Estrategias Educativas para Niños con Autismo); Universidad Internacional de Valencia: Valencia, Spain, 2014. [Google Scholar]
  6. Ledford, J.R.; King, S.; Harbin, E.R.; Zimmerman, K.N. Antecedent social skills interventions for individuals with ASD: What works, for whom, and under what conditions? Focus Autism Other Dev. Disabil. 2018, 3, 3–13. [Google Scholar] [CrossRef] [Green Version]
  7. Eshraghi, A.A. Covid-19: Overcoming the challenges faced by people with autism and their families. Lancet Psychia 2020, 7, 481–483. [Google Scholar] [CrossRef]
  8. Ayres, K.; Langone, J. Intervention and Instruction with Video for Students with Autism: A review of the literature. Educ. Train. Dev. Disabil. 2005, 40, 183–196. [Google Scholar]
  9. Hughes, E.M.; Yakubova, G. Developing Handheld Video Intervention for Students with Autism Spectrum Disorder. Interv. Sch. Clin. 2016, 52, 115–121. [Google Scholar] [CrossRef]
  10. Goldsmith, T.R.; LeBlanc, L.A. Use of technology in interventions for children with autism. J. Early Intensive Behav. Interv. 2004, 1, 166–178. [Google Scholar] [CrossRef] [Green Version]
  11. Delano, M.E. Video Modeling Interventions for Individuals with Autism. Remedial and Spec. Educ. 2007, 28, 33–42. [Google Scholar] [CrossRef] [Green Version]
  12. Mundy, P.; Sigman, M.; Kasari, C. Joint attention, developmental level, and symptom presentation in autism. Dev. Psychopathol. 1994, 6, 389–401. [Google Scholar] [CrossRef]
  13. Kleeberger, V.; Mirenda, P. Teaching generalized imitation skills to a preschooler with autism using video modeling. J. Posit. Behav. Interv. 2010, 12, 116–127. [Google Scholar] [CrossRef]
  14. Lytridis, C.; Bazinas, C.; Sidiropoulos, G.; Papakostas, G.A.; Kaburlasos, V.G.; Nikopoulou, V.-A.; Holeva, V.; Evangeliou, A. Distance Special Education Delivery by Social Robots. Electronics 2020, 9, 1034. [Google Scholar] [CrossRef]
  15. McAlindon, P.J. Computer interface design: A user-centered approach. Comput. Ind. Eng. 1992, 23, 205–207. [Google Scholar] [CrossRef]
  16. Bandura, A. Social Learning Theory; Prentice Hall: Englewood Cliffs, NJ, USA, 1977. [Google Scholar]
  17. Gallese, V.; Goldman, A. Mirror neurons and the simulation theory of mind-reading. Trends Cogn. Sci. 1998, 2, 493–501. [Google Scholar] [CrossRef]
  18. Rogers, S.J.; Williams, J.H.G. Imitation and the Social Mind: Autism and Typical Development; Guilford Press: New York, NY, USA, 2006. [Google Scholar]
  19. Williams, J.H.; Whiten, A.; Singh, T. A systematic review of action imitation in autistic spectrum disorder. J. Autism. Dev. Disord. 2004, 34, 285–299. [Google Scholar] [CrossRef] [PubMed]
  20. Bellini, S.; Akullian, J. A meta-analysis of video modelling and video self-modelling interventions for children and adolescents with autism spectrum disorders. Except. Child. 2007, 73, 264–287. [Google Scholar] [CrossRef]
  21. Ismail, L.I.; Shamsudin, S.; Yussof, H.; Hanapiah, F.A.; Zahari, N.I. Robot-based Intervention Program for Autistic Children with Humanoid Robot NAO: Initial Response in Stereotyped Behavior. Procedia Eng. 2012, 41, 1441–1447. [Google Scholar] [CrossRef] [Green Version]
  22. Usability.net. Usabilitynet. [online]. Available online: http://www.usabilitynet.org/home.htm (accessed on 5 April 2020).
  23. UPA. What Is User Centered Design? Usability Professionals’ Association. [Online]. Available online: http://www.usabilityprofessionals.org/usability_resources/about_usability/what_is_ucd.html (accessed on 5 April 2020).
  24. Hourcade, J.P. Child-Computer Interaction; CreateSpace Independent Publishing Platform: Scotts Valley, CA, USA, 2015. [Google Scholar]
  25. Rayner, C.; Denholm, C.; Sigafoos, J. Video-based intervention for individuals with autism: Key questions that remain unanswered. Res. Autism Spectr. Disord. 2009, 3, 291–303. [Google Scholar] [CrossRef]
  26. McCoy, K.; Hermansen, E. Video modeling for individuals with autism: A review of model types and effects. Educ. Treat. Child. 2007, 30, 183–213. [Google Scholar] [CrossRef]
  27. D'Ateno, P.; Mangiapanello, K.; Taylor, B.A. Using video modeling to teach complex play sequences to a preschooler with autism. J. Posit. Behav. Interv. 2003, 5, 5–11. [Google Scholar] [CrossRef]
  28. Mechling, L. The effect of instructor-created video programs to teach students with disabilities: A literature review. J. Spec. Educ. Technol. 2005, 20, 25–36. [Google Scholar] [CrossRef] [Green Version]
  29. Mavadati, S.M.; Feng, H.; Salvador, M.; Silver, S.; Gutierrez, A.; Mahoor, M.H. Robot-based therapeutic protocol for training children with Autism. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 855–860. [Google Scholar] [CrossRef]
  30. Shamsuddin, S.; Yussof, H.; Hanapiah, F.A.; Mohamed, S.; Jamil, N.F.F.; Yunus, F.W. Robot-assisted learning for communication-care in autism intervention. In Proceedings of the 2015 IEEE International Conference on Rehabilitation Robotics (ICORR), Singapore, 11–14 August 2015; pp. 822–827. [Google Scholar] [CrossRef]
  31. Duquette, A.; Michaud, F.; Mercier, H. Exploring the use of a mobile robot as an imitation agent with children with low functioning autism. Auton. Robot. 2008, 24, 147–157. [Google Scholar] [CrossRef]
  32. Bird, G.; Leighton, J.; Press, C.; Heyes, C. Intact automatic imitation of human and robot actions in autism spectrum disorders. Proc. Biol. Sci. 2007, 274, 3027–3031. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. De Silva, P.R.S.; Tadano, K.; Saito, A.; Lambacher, S.G.; Higashi, M. Therapeutic-Assisted Robot for Children with Autism; ACM Press: New York, NY, USA, 2009; pp. 3561–3567. [Google Scholar]
  34. Qidwai, U.; Kashem, S.B.A.; Conor, O. Humanoid Robot as a Teacher’s Assistant: Helping Children with Autism to Learn Social and Academic Skills. J. Intell. Robot. Syst. 2020, 98, 759–770. [Google Scholar] [CrossRef]
  35. Nikopoulos, C.K.; Keenan, M. Effects of video modeling on social initiations by children with autism. J. Appl. Behav. Anal. 2004, 37, 93–96. [Google Scholar] [CrossRef] [Green Version]
  36. LeBlanc, L.A.; Coates, A.M.; Daneshvar, S.; Charlop-Christy, M.H.; Morris, C.; Lancaster, B.M. Using video modeling and reinforcement to teach perspective-taking skills to children with autism. J. Appl. Behav. Anal. 2003, 36, 253–257. [Google Scholar] [CrossRef] [Green Version]
  37. Sampieri, H.; Fernández, C.; Baptista, L. Metodología de la Investigación, 6th ed.; McGRAW-HILL: Ciudad de Mexico, Mexico, 2014. [Google Scholar]
  38. Trevarthen, C. Facial expressions of emotions in mother-infant interaction. Human Neurobiology 1985, 4, 21–32, PMID 3997585. [Google Scholar]
  39. Apgar, V. A proposal for a new method of evaluation of newborn infant. Anesth. Analg. 1953, 32, 260–268. [Google Scholar] [CrossRef]
  40. Trevarthen, C. The function of emotions in early communication and development. In New Perspectives in Early Communicative Development; Nadel, J., Camaioni, L., Eds.; Routledge: Abingdon, UK, 1993. [Google Scholar]
  41. Trevarthen, C.; Hubley, P. Secondary intersubjectivity: Confidence, confiding and acts of meaning in the first year. In Action, Gesture and Symbol: The Emergence of Language; Lock, A., Ed.; Academic Press: London, UK, 1978; pp. 183–229. [Google Scholar]
  42. Trevarthen, C. The primary motives for cooperative understanding. In Social Cognition: Studies of the Development of Understanding; Butterworth, G., Light, P., Eds.; Harvester: Brighton, UK, 1982; pp. 77–109. [Google Scholar]
  43. Hodgdon, L.Q. Solving social-behavioral problems through the use of visually supported communication. In Teaching Children with Autism: Strategies to Enhance Communication and Socialization; Quill, K.A., Ed.; Delmar: New York, NY, USA, 1995; pp. 265–285. [Google Scholar]
  44. Buggey, T. Video modeling applications with students with autism spectrum disorder in a small private school setting. Focus Autism Other Dev. Disabil. 2005, 20, 52–63. [Google Scholar] [CrossRef]
  45. Jackson, R.M. Technologies Supporting Curriculum Access for Students with Disabilities; National Center on Accessing the General Curriculum: Wakefield, MA, USA, 2004. [Google Scholar]
  46. Kumazaki, H.; Muramatsu, T.; Yoshikawa, Y.; Corbett, B.A.; Matsumoto, Y.; Higashida, H.; Kikuchi, M. Job interview training targeting nonverbal communication using an android robot for individuals with autism spectrum disorder. Autism. 2019, 23, 1586–1595. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Interaction with NAO face-to face before the global crisis (NAO arriving at the CEBE).
Figure 2. Activities applied for each child with autism spectrum disorder (ASD).
Figure 3. A selection of pictograms used with NAO to design the activities.
Figure 4. Results obtained when applying the instrument to evaluate the interaction between children with ASD and a video-based intervention with the NAO robot. (a) Results by session child 1; (b) results by session child 2; (c) results by session child 3; (d) results total for child 1, child 2 and child 3.
Figure 5. Plot for each child by sessions.
Figure 6. Score of facial expression. (a) Facial Expression Score for child 1; (b) Facial Expression Score for child 3; (c) Total Score Facial Expression; (d) Plot of each session for each child.
Figure 7. Score of verbal expression. (a) Verbal expression for child 1; (b) verbal expression for child 2; (c) verbal expression for child 3; (d) Plot of each session for each child.
Figure 8. Score of body expression. (a) body expression for child 1; (b) body expression for child 2; (c) body expression for child 3; (d) Plot of each session for each child.
Figure 9. Spearman’s correlation analysis.
Figure 10. Flow diagram applying the study.
Table 1. Psycho-pedagogical profile for each child.
Child 1 (age 9). Cognitive: attention and memory deficit. Motor skills: deficit in motor coordination and planning. Communication: asks for things by pointing. Language: does not speak. Social-emotional: cries for no reason. Adaptive behavior: does not speak and does not keep his eyes on social interactions. Learning opportunity: does not establish spatial relationships. Interests: likes to be caressed.
Child 2 (age 11). Cognitive: attention and memory deficit. Motor skills: deficit in motor coordination and planning. Communication: language deficit at the expressive and comprehensive level. Language: low word articulation. Social-emotional: screams and cries when not given what she asks for. Adaptive behavior: breaks objects. Learning opportunity: will only look at the person speaking to her and at the object for a few seconds. Interests: likes to watch videos and look at photos.
Child 3 (age 13). Cognitive: attention deficit. Motor skills: deficit in motor coordination and planning. Communication: expressive and comprehensive language deficit. Language: echolalia. Social-emotional: gestures when asked. Adaptive behavior: lack of expressiveness. Learning opportunity: slow sensory processing. Interests: likes to watch videos and look at photos.
Table 2. Instrument to evaluate the VBIs for children with ASD.
Each aspect is rated on a scale from 0 to 4.
Facial Expression:
1. Smiles on seeing the robot
2. Smiles when listening to the robot
3. Shows fear on seeing the robot
4. Shows fear when listening to the robot
5. Gets angry on seeing the robot
6. Gets angry when listening to the robot
7. Cries on seeing the robot
8. Cries when listening to the robot
Verbal Expression:
9. Tries to converse with the robot
10. Makes sounds resembling what the robot is saying
11. Stays silent when listening to the robot
12. Stays silent on seeing the robot
Body Expression:
13. Imitates the movements of the robot
14. Repeats the actions performed by the robot
15. Follows the movements of the robot with a finger
16. Follows the movements of the robot with the eyes
17. Ignores the movements of the robot
Total Score for Expressiveness of Emotions:
Positive Expression (≥35): the child attends to, understands, and expresses emotions with gestures, voice, and body movements consistent with the context.
Facilitative Expression (15–34): the child has some difficulty attending to, understanding, and expressing some emotions with gestures, voice, and body movements coherent with the context.
Negative Expression (0–14): the child has difficulty attending to, understanding, and expressing emotions with gestures, voice, and movements consistent with the context.
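The score banding at the bottom of Table 2 maps a child’s total score to an expressiveness category; a minimal sketch of that mapping (the function name is ours, not the authors’):

```python
def expressiveness_category(total_score):
    # Bands from Table 2: positive >= 35, facilitative 15-34, negative 0-14.
    if total_score >= 35:
        return "Positive Expression"
    if total_score >= 15:
        return "Facilitative Expression"
    return "Negative Expression"
```

For example, the session averages in Table 3 (18, 12.3, and 21.7) map to Facilitative, Negative, and Facilitative Expression, respectively.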
Table 3. Results obtained for each child with ASD level III.
Child 1: session scores 16, 19, 19, 18; average 18 (Facilitative Expression).
Child 2: session scores 10, 12, 15; average 12.3 (Negative Expression).
Child 3: session scores 24, 21, 20, 32; average 21.7 (Facilitative Expression).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

