A physical learning companion for Mental-Imagery BCI user training

https://doi.org/10.1016/j.ijhcs.2019.102380

Highlights

  • Learning companions could improve Brain-Computer Interface (BCI) user experience.

  • Non-autonomous users can benefit from learning companions (increased performance).

  • Companions’ interventions should be adapted to users’ performance and progression.

Abstract

Mental-Imagery based Brain-Computer Interfaces (MI-BCI) present new opportunities to interact with digital technologies, such as wheelchairs or neuroprostheses, only by performing mental-imagery tasks (e.g., imagining an object rotating or imagining hand movements). MI-BCIs can also be used for several applications such as communication or post-stroke rehabilitation. However, their lack of reliability remains a barrier to a larger scale development of the technology. For example, when discriminating between two tasks, the correct one is recognized on average only 75% of the time. It has been shown that users are more likely to struggle using MI-BCIs if they are non-autonomous or tense. This might, at least in part, result from a lack of social presence and emotional support, which have so far been little investigated in MI-BCI, despite recommendations from the educational literature. One way to provide such a social and emotional context is to use a learning companion. Therefore, we designed, implemented and evaluated the first learning companion dedicated to the improvement of MI-BCI user training. We called this companion PEANUT, for Personalized Emotional Agent for Neurotechnology User Training. PEANUT provided social presence and emotional support, depending on the performance and progress of the user, through interventions combining both pronounced sentences and facial expressions. It was designed based on the literature, data analyses and user studies. We notably conducted several online user surveys to identify the desired characteristics of our learning companion in terms of appearance and supporting speech content. From the results of these surveys we notably deduced which characteristics (personal/non-personal, exclamatory/declarative) the sentences should have depending on the performance and progression of the learner. We also found that eyebrows could increase the expressiveness of cartoon-like faces. Then, once this companion was implemented, we evaluated it during real online MI-BCI use. We found that non-autonomous people, i.e., those who are more inclined to work in a group and who are usually at a disadvantage when using MI-BCIs, were advantaged compared to autonomous people when PEANUT was present, with an increase of 3.9% in peak performance. Furthermore, in terms of user experience, PEANUT seems to have improved by 7.4% how people felt about their ability to learn and memorize how to use an MI-BCI, which is one of the dimensions of user experience we assessed.

Introduction

Brain-Computer Interfaces (BCIs) enable their users to send commands to digital technologies using their brain activity alone, often recorded using electroencephalography (EEG) (Wolpaw and Wolpaw, 2012). One of the most commonly used types of BCI is the Mental-Imagery based BCI (MI-BCI), which we focus on in this article. Such BCIs are controlled by their users by performing mental-imagery (MI) tasks, such as imagining objects rotating or performing mental calculation. A famous example of MI-BCI is a smart wheelchair that is controlled by imagining left or right hand movements, e.g., imagining waving at someone, to make the wheelchair turn left or right, respectively (Carlson and Millán, 2013). MI-BCI applications are broad because they provide new interaction tools. For example, they can also be used to write by controlling a speller (Williamson et al., 2009) or to foster brain plasticity and improve motor rehabilitation for post-stroke patients (Biasiucci et al., 2018).

All MI-BCI applications rely on the ability of the system to recognize the correct command, i.e., the one selected by the user. However, this accuracy still has to be improved for the technology to undergo a strong growth outside of research laboratories. For example, when the system has to decide which of two tasks the user is performing, e.g., imagining a right versus a left hand movement, it is on average mistaken once every four guesses (Allison and Neuper, 2010). There are several lines of research aiming at improving the efficiency of MI-BCIs. Many of them focus on improving the acquisition and processing of the brain activity (McFarland and Wolpaw, 2018). However, MI-BCI applications also rely on the users themselves. On the one hand, the computer has to learn to discriminate the brain-activity patterns associated with the different tasks performed by a user. On the other hand, the user has to train and learn how to produce a stable and distinguishable brain-activity pattern for each of the tasks in order for them to be recognized by the computer (McFarland and Wolpaw, 2018).
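To make the machine side of this co-learning concrete, the short sketch below illustrates a standard two-class motor-imagery decoding pipeline, Common Spatial Patterns (CSP) followed by Linear Discriminant Analysis (LDA). It is an illustration only, not the pipeline used in this work, and it assumes the MNE and scikit-learn Python libraries; the EEG trials are random placeholders standing in for band-pass filtered epochs of, e.g., left- versus right-hand imagery.

# Illustrative two-class motor-imagery decoding sketch (CSP + LDA), assuming MNE and scikit-learn.
# The epochs below are placeholders; in practice they would be band-pass filtered EEG trials.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 40, 16, 500
epochs = rng.standard_normal((n_trials, n_channels, n_samples))  # placeholder EEG trials
labels = np.repeat([0, 1], n_trials // 2)                        # two MI tasks

# CSP learns spatial filters whose log-variance separates the two classes;
# LDA then classifies each trial from these features.
clf = make_pipeline(CSP(n_components=4, log=True), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, epochs, labels, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")

With real data, the cross-validated accuracy obtained this way is the kind of score that quantifies how well the system recognizes the commands sent by the user.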

When asked to imagine hand movements, users can adopt a great variety of strategies, e.g., imagining waving at someone or playing the piano. During the training, users have to find their own strategies, i.e., characteristics of mental imagery, which allow the system to recognize these tasks as reliably as possible. However, the adequacy of the feedback provided during the training has been questioned both by the theoretical literature (Lotte et al., 2013) and experimentally (Jeunet et al., 2016a). The inadequacy of the training, and more particularly of the feedback, is probably part of the reason why MI-BCIs remain insufficiently reliable (Lotte et al., 2013). Some users are more likely to struggle when using MI-BCIs (Jeunet et al., 2015a): the more “tense” and “non-autonomous” people are (based on the dimensions of the 16PF5 psychometric questionnaire (Cattell and Cattell, 1995)), the lower their performance tends to be. “Non-autonomous” people are those who would rather learn in a social context. Yet, while the educational and neurophysiological literature shows the importance of social feedback (Izuma et al., 2008; Mathiak et al., 2015), this aspect of feedback, as well as emotional support, has been neglected during MI-BCI training. Nevertheless, the educational literature shows that social presence and emotional support are very important to the learning process (Johnson and Johnson, 2009). It therefore seems promising to assess their impact on MI-BCI training.

Learning companions, a type of intelligent tutoring system, are computer-simulated, human-like, non-authoritative and social characters meant to foster learning (Chou et al., 2003). They have already proven their efficiency in providing social and emotional support in different learning situations (Nkambou et al., 2010) but have never been used for MI-BCIs. The work presented in this paper aimed at designing, implementing and testing the first learning companion dedicated to the improvement of user experience and/or user performances during MI-BCI training. We called this learning companion PEANUT for Personalized Emotional Agent for Neurotechnology User Training (see Fig. 1).

In the following sections, we first introduce the literature related to MI-BCI and learning companions. Then we describe the different steps that guided our design of the companion, which constitute our main contributions: (1) the design of the behavior of PEANUT, (2) the design of the physical appearance of PEANUT and (3) the implementation of PEANUT. Our design approach was carefully motivated and justified based on a review of the literature, the analysis of data from previous experiments and several user studies. We then present the experiment that enabled us to test the adequacy of PEANUT and its characteristics for improving MI-BCI training, and finally discuss these results.

Section snippets

MI-BCI user-training

As their name suggests, BCIs require an interaction between a human’s brain activity and a machine (Jeunet et al., 2016b). Thus, the computer has to be able to understand the mental command sent by the user. In order to facilitate this process, the user must provide the system with stable brain-activity patterns each time the same MI task is performed. Brain-activity patterns from the different MI tasks must also be distinct from one another and be consistent with the training set (Allison and Neuper, 2010) …

Designing the behavior of PEANUT

As stated above, theoretical knowledge is still lacking to provide users with informative, explanatory feedback. Moreover, during the training, users are asked not to move in order to limit motor-related artifacts that could create noise in the recorded brain activity. Therefore, a complex interaction between the user and the learning companion was hardly feasible. The behavior of the companion as well as its physical appearance had to be consistent. They had to reflect …
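As a purely hypothetical illustration of what such performance- and progression-dependent behavior can look like, the sketch below maps the user’s current performance and progression to the characteristics of an intervention (sentence style and form, facial expression). The thresholds and the mapping are assumptions for illustration only; the actual rules of PEANUT were derived from the literature, data analyses and the user surveys described in this paper.

# Hypothetical sketch of performance/progression-based intervention selection.
# The thresholds and mapping below are illustrative assumptions, not PEANUT's actual rules.
from dataclasses import dataclass

@dataclass
class Intervention:
    sentence_style: str    # "personal" or "non-personal"
    sentence_form: str     # "exclamatory" or "declarative"
    facial_expression: str

def select_intervention(performance: float, progression: float) -> Intervention:
    """Pick an intervention from the user's current performance (e.g., online
    classification accuracy in [0, 1]) and progression (change since the previous run)."""
    if performance >= 0.7 and progression > 0:
        return Intervention("personal", "exclamatory", "happy")
    if progression > 0:
        return Intervention("personal", "declarative", "encouraging")
    if performance >= 0.7:
        return Intervention("non-personal", "declarative", "neutral")
    return Intervention("non-personal", "declarative", "supportive")

print(select_intervention(performance=0.62, progression=-0.05))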

Physical appearance of PEANUT

Designing the appearance of PEANUT consisted of two steps: designing its body, and designing its face and facial expressions. The decisions concerning the face were based on a user study; those concerning the body were based on a review of the literature.

System architecture

Implementing the whole BCI system as well as PEANUT required designing, assembling and connecting multiple pieces of hardware and software. Users’ EEG signals were first measured using EEG hardware (g.tec gUSBAmp, g.tec, Austria) and then collected and processed online using the OpenViBE software (Renard et al., 2010). OpenViBE provided users with a visual feedback about the estimated mental task, and computed users’ performances, which were then transmitted to a home-made software, the “Rule Engine” …
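The excerpt above does not specify how performance values travel from OpenViBE to the Rule Engine, so the following sketch assumes a simple newline-delimited TCP stream on a hypothetical local port. It only illustrates the kind of glue code such an architecture involves: receiving one performance value per trial, deriving a progression value, and handing both to the intervention logic (for instance a function such as the illustrative select_intervention sketched above).

# Minimal sketch of the receiving side of a "Rule Engine": read per-trial performance
# values streamed by the signal-processing software and forward them to the intervention logic.
# The transport (newline-delimited TCP on port 5678) is an assumption, not the actual protocol.
import socket

HOST, PORT = "127.0.0.1", 5678  # hypothetical endpoint

def run_rule_engine(handle_performance):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, conn.makefile("r") as stream:
            previous = None
            for line in stream:                          # one performance value per trial
                performance = float(line.strip())
                progression = 0.0 if previous is None else performance - previous
                previous = performance
                handle_performance(performance, progression)

# Example: print the hypothetical intervention chosen for each incoming value.
# run_rule_engine(lambda perf, prog: print(select_intervention(perf, prog)))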

Evaluation of the efficiency of PEANUT to improve BCI user-training

Once the companion’s behavior and appearance had been designed and implemented, the next step consisted in testing its efficiency to improve MI-BCI user-training, both in terms of MI-BCI performance and user experience. Below we present the study performed to test the efficiency of PEANUT.
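For illustration only, the sketch below shows how a between-group comparison of peak performance in such a study (10 participants trained with PEANUT versus 18 controls, as reported in the Conclusion) could be run. The values are placeholders and the Mann-Whitney U test is merely one reasonable choice for small samples; it is not necessarily the analysis carried out by the authors.

# Illustrative between-group comparison of peak online performance (placeholder data).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
peanut_peak = rng.uniform(0.6, 0.9, size=10)   # placeholder peak accuracies, PEANUT group
control_peak = rng.uniform(0.6, 0.9, size=18)  # placeholder peak accuracies, control group

stat, p_value = mannwhitneyu(peanut_peak, control_peak, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")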

Conclusion

In this paper, we introduced the design, implementation and evaluation of the first learning companion dedicated to MI-BCI user-training: PEANUT. The strength of this experimental protocol lies in the design of the companion: a combination of recommendations from the literature, the analysis of data from previous experiments and user studies. PEANUT was evaluated in an MI-BCI study (10 participants trained with PEANUT, 18 control participants, 3 sessions per participant). This study revealed that …

CRediT authorship contribution statement

Léa Pillette: Conceptualization, Methodology, Software, Formal analysis, Investigation, Resources, Writing - original draft, Writing - review & editing, Supervision, Project administration. Camille Jeunet: Conceptualization, Methodology, Software, Resources, Writing - original draft, Writing - review & editing, Visualization, Supervision, Project administration. Boris Mansencal: Conceptualization, Software, Resources, Writing - original draft, Writing - review & editing, Supervision. Roger …

Declaration of Competing Interest

We declare that this work is not under consideration for publication elsewhere and that we do not have any competing interests to declare. There are no redundant or duplicate publications of this manuscript to report. Finally, all authors have agreed to the conditions noted on the Authorship Agreement Form.

Acknowledgements

This work was supported by the French National Research Agency (project REBEL, grant ANR-15-CE23-0013-01), the European Research Council with the Brain-Conquest project (grant ERC-2016-STG-714567) and the Initiative of Excellence (IdEx) from the University of Bordeaux, France. We also want to express our thanks to Marie Ecarlat for designing the potential faces of PEANUT, and to all our participants.

References (67)

  • B.Z. Allison et al., Could Anyone Use a BCI?, Brain-Computer Interfaces, 2010.
  • I. Arroyo et al., Affective gendered learning companions, AIED, 2009.
  • A. Biasiucci et al., Brain-actuated functional electrical stimulation elicits lasting arm motor recovery after stroke, Nat. Commun., 2018.
  • P. Boersma, Praat, a system for doing phonetics by computer, Glot International, 2002.
  • L. Bonnet et al., Two brains, one game: design and evaluation of a multiuser BCI video game based on motor imagery, IEEE Trans. Comput. Intell. AI Games, 2013.
  • T. Carlson et al., Brain-controlled wheelchairs: a robotic architecture, IEEE Robot. Autom. Mag., 2013.
  • R.B. Cattell et al., Personality structure and the new fifth edition of the 16PF, Educ. Psychol. Meas., 1995.
  • C.-Y. Chou et al., Redefining the learning companion: the past, present, and future of educational agents, Comput. Educ., 2003.
  • S.H. Fairclough, Fundamentals of physiological computing, Interact. Comput., 2009.
  • E.V. Friedrich et al., Whatever works: a systematic user-centered training protocol to optimize brain-computer interfacing individually, PLoS ONE, 2013.
  • G.D. Gargiulo et al., Investigating the role of combined acoustic-visual feedback in one-dimensional synchronous brain computer interfaces, a preliminary study, Med. Devices (Auckland, NZ), 2012.
  • R. Gervais et al., TOBE: tangible out-of-body experience, Proceedings of TEI ’16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction, 2016.
  • S. Graham et al., Theories and Principles of Motivation, 1996.
  • J. Heutte et al., The EduFlow model: a contribution toward the study of optimal learning environments, Flow Experience, 2016.
  • E. Hornecker, The role of physicality in tangible and embodied interactions, Interactions, 2011.
  • A.M. Isen et al., Positive affect facilitates creative problem solving, J. Pers. Soc. Psychol., 1987.
  • K. Izuma et al., Processing of social and monetary rewards in the human striatum, Neuron, 2008.
  • C. Jeunet et al., Why standard brain-computer interface (BCI) training protocols should be changed: an experimental study, J. Neural Eng., 2016.
  • C. Jeunet et al., Human learning for brain-computer interfaces, Brain-Computer Interfaces 1: Foundations and Methods, 2016.
  • C. Jeunet et al., Towards a cognitive model of MI-BCI user training, 7th International BCI Conference, 2017.
  • C. Jeunet et al., Predicting mental imagery-based BCI performance from personality, cognitive profile and neurophysiological patterns, PLoS ONE, 2015.
  • C. Jeunet et al., Continuous tactile feedback for motor-imagery based brain-computer interaction in a multitasking context, Human-Computer Interaction, 2015.
  • D.W. Johnson et al., An educational psychology success story: social interdependence theory and cooperative learning, Educ. Res., 2009.