
Brain Research

Volume 1071, Issue 1, 3 February 2006, Pages 145-152

Research Report
Walking from thought

https://doi.org/10.1016/j.brainres.2005.11.083

Abstract

Online analysis and classification of single electroencephalogram (EEG) trials during motor imagery were used for navigation in a virtual environment (VE). The EEG was recorded bipolarly with electrodes placed over the hand and foot representation areas. The aim of the study was to demonstrate, for the first time, that it is possible to move through a virtual street without muscular activity when the participant only imagines foot movements. This is achieved by exploiting a brain–computer interface (BCI) which transforms thought-modulated EEG signals into an output signal that controls events within the VE. The experiments were carried out in an immersive projection environment, commonly referred to as a "Cave" (Cruz-Neira, C., Sandin, D.J., DeFanti, T.A., Surround-screen projection-based virtual reality: the design and implementation of the CAVE. Proceedings of the 20th annual conference on Computer graphics and interactive techniques, ACM Press, 1993, pp. 135–142), where participants were able to move through a virtual street by foot imagery alone. Prior to the final experiments in the Cave, the participants underwent extensive BCI training.

Introduction

A relatively recent development is the EEG-based brain–computer interface (BCI). By means of a BCI, specific features extracted from EEG signals are used, e.g., to operate devices and assist people with compromised motor functions (Wolpaw et al., 2002) or to control events in a virtual environment (VE) (Bayliss and Ballard, 2000, Leeb et al., 2004). In general, a VE provides an excellent tool to test procedures which might subsequently be applied in reality, e.g. for patients with disabilities. If it can be shown that people are able to learn to control their movements through space within a VE, this would justify the much greater expense of building physical devices, such as a robot arm controlled by a BCI. Another application of combining BCI and virtual reality technologies is the use of a VE as a feedback medium, with the goal of enhancing classification accuracy and shortening the time needed for BCI training sessions, e.g. to re-establish a communication channel in patients who are totally paralyzed ("locked-in") (Pfurtscheller et al., 2005, Wolpaw et al., 2002).

Among the first to combine virtual reality (VR) and BCI technologies were Bayliss and Ballard (2000), who used the P300 evoked potential component in combination with a head-mounted display (HMD)-based VR system. Subjects were instructed to drive a modified go-cart through a virtual town and stop at red lights while ignoring both green and yellow lights. The red lights were made rare enough to keep the P300 suitable as a control signal (Donchin et al., 2000). In this type of BCI, the subject has to focus attention on the visual cue and receives visual feedback when the goal is achieved.

Besides focused visual attention, motor imagery is also a suitable mental strategy in BCI research. With motor imagery, participants do not have to focus on a flashing object in the VE but instead imagine a specific motor act such as a hand, foot or tongue movement. Motor imagery results in a somatotopic activation pattern similar to that of the same movement being physically executed (Porro et al., 1996). In particular, hand and foot motor imagery affects sensorimotor EEG rhythms in a way that allows a BCI to generate a control signal of high reliability.
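The effect of motor imagery on sensorimotor rhythms is commonly quantified as event-related desynchronization (ERD), a relative band-power decrease against a resting baseline (Pfurtscheller et al., 1999). The following is a minimal sketch of that computation; the sampling rate, band edges and synthetic signals are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def band_power(signal, fs, band):
    """Power of `signal` within frequency `band` (Hz), from an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum()

def erd_percent(trial, baseline, fs, band=(8, 12)):
    """Event-related desynchronization: relative band-power change vs. baseline.
    Negative values indicate desynchronization (a power decrease)."""
    p_trial = band_power(trial, fs, band)
    p_base = band_power(baseline, fs, band)
    return 100.0 * (p_trial - p_base) / p_base

# Synthetic demo: a 10 Hz mu-like rhythm whose amplitude halves during "imagery",
# so its band power drops to roughly a quarter of the baseline value.
fs = 250
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
baseline = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)
imagery = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)
print(round(erd_percent(imagery, baseline, fs), 1))  # strongly negative
```

Because ERD/ERS is lateralized and somatotopic, band-power features of this kind, taken from channels over the hand and foot areas, are what a classifier can turn into a reliable control signal.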

The goal of this paper is (i) to demonstrate that it is possible to move through a virtual street without any muscular activity, when the participant only imagines the movement of both feet, and (ii) to investigate whether feedback in the form of a moving VE scene disturbs or motivates the participants.

Three able-bodied subjects first underwent basic BCI training in which a computer monitor was used for feedback (FB). Thereafter, advanced training was performed within a VE delivered on a head-mounted display (HMD), and finally a "walking" task was carried out in the ReaCTor, a Cave-like system (Cruz-Neira et al., 1993) at University College London (UCL).

Section snippets

Classification accuracy over runs

After the experiment, the online classification output of the linear discriminant analysis (LDA) for each run was analyzed. The best (lowest) classification error within the feedback time of a trial (between seconds 4.25 and 8) was taken as the classification error of that run. In the case of training runs without FB, the classification error time course was calculated offline.
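The per-run score described above (the minimum classification error inside the feedback window) can be sketched as follows. This is an offline illustration only: the two-class Fisher LDA, the feature rate of 4 samples/s, the 2-D features and the synthetic trials are all assumptions for demonstration, not the study's actual pipeline:

```python
import numpy as np

def lda_fit(X, y):
    """Fisher LDA for two classes: weight vector and bias from data X (n, d), y in {0, 1}."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)   # within-class scatter
    w = np.linalg.solve(np.atleast_2d(Sw), m1 - m0)
    b = -w @ (m0 + m1) / 2
    return w, b

def error_time_course(feats, y):
    """Classification error at each time point; feats has shape (trials, times, d)."""
    errs = []
    for t in range(feats.shape[1]):
        w, b = lda_fit(feats[:, t, :], y)
        pred = (feats[:, t, :] @ w + b > 0).astype(int)
        errs.append(np.mean(pred != y))
    return np.array(errs)

# Synthetic run: 40 trials, 8 s each, features at 4 samples/s; the two classes
# only separate after second 5, as imagery-related EEG changes build up.
rng = np.random.default_rng(1)
times = np.arange(0, 8, 0.25)
y = np.repeat([0, 1], 20)
feats = rng.standard_normal((40, len(times), 2))
feats[y == 1, 20:, :] += 2.0

errs = error_time_course(feats, y)
window = (times >= 4.25) & (times <= 8)   # feedback window from the text
print(errs[window].min())                 # the run's classification error
```

Taking the minimum over the feedback window, rather than the error at a fixed time point, credits each run with the moment at which the imagery was most clearly expressed.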

The classification error of all runs without and with FB over a period of 5 months is displayed in Fig. 1. The runs with PC, HMD and Cave are

Discussion and conclusion

Combining BCI and virtual reality technologies may provide the possibility to realize a “natural” interface for navigation of a VE. As an important step in this direction, the data reported show that EEG recording and single trial processing are possible in a Cave-like system and that the obtained signals are even suitable to control events within a VE in real time. Furthermore, the present study revealed that motor imagery is an adequate mental strategy to control actions or events within the

Subjects and EEG-recording

The study was performed on three healthy volunteers aged 23, 28 and 30 years. All subjects were right-handed and had no history of neurological disease. They gave informed consent to participate in the study. The EEG was recorded bipolarly, with electrodes placed 2.5 cm anterior and 2.5 cm posterior to position C3 (channel C3), position C4 (channel C4) and position Cz (channel Cz) of the international 10/20 system. The ground electrode was positioned on the forehead. The signals were recorded
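A bipolar channel is simply the voltage difference between its two electrodes, which rejects activity common to both (e.g. the ground potential or far-field artifacts) while preserving the local sensorimotor rhythms between the pair. A minimal sketch with synthetic, purely illustrative signals:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
# Hypothetical monopolar signals from one electrode pair; in the study the two
# electrodes of each pair sat 2.5 cm anterior and posterior to C3, Cz or C4.
anterior = rng.standard_normal(n)
posterior = rng.standard_normal(n)

def bipolar(a, p):
    """Bipolar derivation: the channel is the voltage difference of its pair."""
    return a - p

# Common-mode activity, e.g. 50 Hz mains hum picked up equally by both
# electrodes, cancels in the difference.
artifact = 50.0 * np.sin(2 * np.pi * 50 * np.arange(n) / 250)
clean = bipolar(anterior, posterior)
with_hum = bipolar(anterior + artifact, posterior + artifact)
print(np.allclose(clean, with_hum))  # True
```

This common-mode rejection is one reason a bipolar montage over the hand and foot areas is attractive for single-trial online classification.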

Acknowledgments

The work was funded by the European PRESENCIA project (IST-2001-37927). Thanks to Vinoba Vinayagamoorthy and Marco Gillies for allowing the use of the street scene virtual environment and special thanks to Jörg Biedermann, Gernot Supp and Doris Zimmermann for their participation.

References (21)

  • B. Graimann et al., Visualization of significant ERD/ERS patterns in multichannel EEG and ECoG data, Clin. Neurophysiol. (2002)
  • G. Pfurtscheller et al., Event-related EEG/MEG synchronization and desynchronization: basic principles, Clin. Neurophysiol. (1999)
  • J.R. Wolpaw et al., Brain–computer interfaces for communication and control, Clin. Neurophysiol. (2002)
  • J.D. Bayliss et al., A virtual reality testbed for brain–computer interface research, IEEE Trans. Rehabil. Eng. (2000)
  • C.M. Bishop, Neural Networks for Pattern Recognition (1995)
  • C. Cruz-Neira et al., Surround-screen projection-based virtual reality: the design and implementation of the CAVE, Proceedings of the 20th annual conference on Computer graphics and interactive techniques, ACM Press (1993)
  • E. Donchin et al., The mental prosthesis: assessing the speed of a P300-based brain–computer interface, IEEE Trans. Rehabil. Eng. (2000)
  • W.C. Drevets et al., Blood flow changes in human somatosensory cortex during anticipated stimulation, Nature (1995)
  • D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning (1989)
  • S. Kastner et al., Mechanisms of directed attention in the human extrastriate cortex as revealed by functional MRI, Science (1998)
