
Computer Science Review

Volume 22, November 2016, Pages 65-87

Review Article
An overview of self-adaptive technologies within virtual reality training

https://doi.org/10.1016/j.cosrev.2016.09.001

Abstract

This overview presents the current state of the art of self-adaptive technologies within virtual reality (VR) training. Virtual reality training and assessment are increasingly used in five key areas: medical, industrial and commercial training, serious games, rehabilitation, and remote training such as Massive Open Online Courses (MOOCs). Adaptation can be applied to five core technologies of VR, including haptic devices, stereo graphics, adaptive content, assessment and autonomous agents. Automation of VR training can contribute to automation of actual procedures, including remote and robotic-assisted surgery, which can reduce injury and improve the accuracy of the procedure. Automated haptic interaction can enable tele-presence and tactile interaction with virtual artefacts in either remote or simulated environments. Automation, machine learning and data-driven features play an important role in providing trainee-specific adaptive training content. Data from trainee assessment can form an input to autonomous systems for customised training and automated difficulty levels to match individual requirements. Self-adaptive technology has previously been developed within individual technologies of VR training. One conclusion of this research is that an enhanced portable framework does not yet exist but is needed: it would be beneficial to combine automation of the core technologies into a reusable automation framework for VR training.

Introduction

No systematic study has previously been published on the use of self-adaptive systems for virtual reality training. As a result, it was unclear which self-adaptive methods have been applied within virtual training systems, and what these methods offer to their maintenance and capabilities. This systematic overview aims to provide this insight, which is important for researchers, engineers, developers and scientists.

This research provides a current overview of the use of self-adaptive, autonomous systems and data-driven training within Virtual Reality (VR) based training environments. Adaptation can benefit several core technologies within VR, including haptic devices, stereo graphics, adaptive content, scoring and assessment, and autonomous agents. Adaptation can provide customised or personalised content for individually tailored, learner-specific training for maximum learning efficiency.

The use of VR is continually increasing with mobile, online and ubiquitous technologies for 21st century learning. In 2014 Google released a VR interface for smartphones, and Facebook acquired Oculus, a VR headset company, for $2 billion. Children are introduced to VR at a younger age as an effective part of K-12 primary, secondary and higher education [1]. Worldwide, VR is used for everyday training of hand–eye coordination and physical skills. VR training has been applied to many disciplines, including flight simulation [2], military [3], engineering, space, automotive and manufacturing [4].

VR software applications are experiencing demand for increased mobility, including pervasive, ubiquitous and embedded features, providing virtual training over the Internet and ad-hoc wireless networks. This leads to an increasing demand to handle complexity and achieve quality goals at run time.

A current problem is that VR trainees all experience the same training routines, which are not customised to individual learning patterns. Yet every trainee learns in a different way and will require their training to focus on specific aspects of the tasks.

Conventionally, adapting or re-configuring distributed training systems requires human supervision, which is costly and time-consuming. Automation can remove the need for this supervision, reducing or avoiding these costs [5].

There is currently no defined method to enable adaptive VR training, and research on customising VR training to suit individual learners remains limited.

Self-adaptive VR could potentially make training more efficient and effective. VR training could be improved by incorporating autonomous, data-driven aspects to customise and automate training for individuals  [6].

A literature search was performed to identify existing publications, research and patents, which were reviewed as part of this research overview. Medical Subject Headings (MeSH) terms were used to search the Medline (PubMed) database. Additional keyword searches were conducted using alternative databases, including IEEE Xplore, ASME Digital Collection, ACM Digital Library, Google Scholar and IET/IEEE Electronic Library (IEL), which produced further relevant titles. Patent searches using the European Patent Office (EPO) worldwide patent database were conducted to identify existing intellectual property. The results contained more academic research than commercial/industrial applications.

The main objective of this overview paper is to clarify the current use of self-adaptive systems and automation within virtual reality training and assessment. Adaptation within VR includes a variety of immersive, high fidelity technologies. Self-adaptation is particularly useful in VR for generation of learner-specific training scenarios to enhance learning. The paper will assess current applications for:

  • Automating haptic feedback, for guiding novices based on expert knowledge.

  • Automating VR training, which can lead to automation of actual procedures such as robotic-assisted surgery.

  • Automating the generation of customised learning content, in terms of visual, haptic and audio material, to provide learning scenarios that challenge the individual appropriately and target the skills that would benefit most from improvement.

  • Automating the assessment and scoring mechanisms, which objectively generate feedback on a trainee’s performance. This forms a vital part of generating user-centred content.
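
As a concrete illustration of the last two points, the following minimal sketch (in Python) shows how an objective scoring mechanism could combine normalised performance metrics and flag weaknesses for subsequent content generation to target. The metric names, weights and threshold are illustrative assumptions rather than values taken from any of the surveyed systems.

```python
# Minimal sketch of an objective scoring mechanism for VR training.
# The metric names, weights and threshold are illustrative assumptions,
# not taken from a specific simulator described in this overview.

def objective_score(metrics, weights=None):
    """Combine normalised performance metrics (each in [0, 1], higher is
    better) into a single score that can drive content adaptation."""
    weights = weights or {"accuracy": 0.5, "speed": 0.3, "economy_of_motion": 0.2}
    return sum(weights[m] * metrics.get(m, 0.0) for m in weights)

def weaknesses(metrics, threshold=0.6):
    """Flag metrics below a pass threshold so that subsequent training
    content can target those specific skills."""
    return [m for m, value in metrics.items() if value < threshold]

# Example: a trainee who is accurate but slow.
trainee_metrics = {"accuracy": 0.9, "speed": 0.4, "economy_of_motion": 0.7}
print(objective_score(trainee_metrics))  # 0.71
print(weaknesses(trainee_metrics))       # ['speed']
```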

The rest of the paper is organised as follows. Section 2 outlines the principles of self-adaptive systems, providing a precursor to software adaptation, since the main focus of this work is adaptation within VR training environments.

This review has been organised so that Sections 3 (Core technologies of virtual reality training) and 4 (Existing virtual reality training systems) present the main bulk of this research. Section 3 details the five main technologies used within VR and their uses: (i) adaptive technology, (ii) haptic devices, (iii) head mounted displays (HMDs), (iv) assessment and (v) autonomous agents. Section 4 presents five main application areas of VR training: (i) medical, (ii) industrial and commercial, (iii) collaborative VR and Massive Open Online Courses (MOOCs), (iv) serious games and (v) rehabilitation. These five areas were identified by the increased number of publications in each focus area. This overview reports the state of the art within each of the five main VR application areas. The application area of medical VR is particularly active and has the largest number of publications over recent years, perhaps due to the potential benefits offered by VR in avoiding patient harm.

Other sections include Section 5, covering future VR technologies. Section 6 provides an overall comparison, enabling the reader to see at a glance which VR technologies have been used in which types of VR training. Section 7 draws conclusions from this overview, succinctly summarising the current state and future directions of adaptive training.

Section snippets

Principles of self-adaptive systems

An adaptive system can be defined as a set of interacting entities that together are able to respond to change. Examples of natural adaptive systems include ecosystems, organisms and human communities.

Feedback loops represent a key feature allowing response to change. In adaptive systems this loop is known as a control loop. Control loops in adaptive systems can be formed from four categories of machine learning: prediction, recognition, detection and optimisation  [7]. The outputs from the
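
To make the control-loop idea concrete, the following is a minimal sketch (in Python) of a monitor-analyse-plan-execute style loop applied to adaptive difficulty in a VR training session. The class and method names, and the error-rate interface of the session object, are hypothetical placeholders rather than an API from any of the surveyed systems.

```python
# Minimal sketch of a feedback control loop for adaptive VR training.
# The session interface (error_rate, set_difficulty) is a hypothetical
# placeholder, not an API from the systems surveyed here.

class AdaptiveTrainingLoop:
    def __init__(self, session, target_error_rate=0.15, step=0.1):
        self.session = session            # running VR training session (hypothetical)
        self.target = target_error_rate   # desired trainee error rate
        self.step = step                  # difficulty change per cycle
        self.difficulty = 0.5             # normalised difficulty in [0, 1]

    def monitor(self):
        # Collect performance data from the running simulation.
        return self.session.error_rate()

    def analyse(self, observed_error_rate):
        # Detection: how far is observed performance from the target?
        return observed_error_rate - self.target

    def plan(self, deviation):
        # Optimisation: ease the task if the trainee struggles, otherwise harden it.
        if deviation > 0:
            return max(0.0, self.difficulty - self.step)
        return min(1.0, self.difficulty + self.step)

    def execute(self, new_difficulty):
        # Feed the decision back into the environment, closing the loop.
        self.difficulty = new_difficulty
        self.session.set_difficulty(new_difficulty)

    def run_cycle(self):
        self.execute(self.plan(self.analyse(self.monitor())))
```

Each call to run_cycle corresponds to one pass around the feedback loop; prediction or recognition components could replace the simple threshold comparison in analyse.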

Core technologies of virtual reality training

This section outlines five technologies which are the core building blocks of VR-based training simulations:

  • (i)

    Adaptive technologies.

  • (ii)

    Haptic devices.

  • (iii)

    Head mounted displays.

  • (iv)

    Assessment and scoring feedback.

  • (v)

    Autonomous agents.

Throughout the remainder of Section 3, each of these five core technologies is discussed in a detailed subsection. These five VR technologies are visualised in Fig. 1, showing the data flow and interaction with adaptation and the various layers, building blocks for a

Existing virtual reality training systems

This section presents a review of existing virtual reality training systems and their technological features. The simulators are arranged by topic; the five main VR-based training topics covered in this section are:

  • (i)

    Medical VR.

  • (ii)

    Industrial and Commercial VR.

  • (iii)

    Collaborative VR and MOOCs.

  • (iv)

    Serious Games.

  • (v)

    Rehabilitation.

The five main VR topic areas shown above are covered in the following subsections, detailing their use of adaptation and current state of the art, and outlining the ways in

Future technology in VR training

The majority of VR research focuses on medical training, with other industries receiving less attention. There is currently increased research activity within rehabilitation games.

No common framework or method has yet been defined to customise learning content within VR training for individual trainees. Data mining of user scores and other measured performance data provides a possible solution. A database capturing results for all trainees could be data-mined by an algorithm to formulate a
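
As an illustration of this data-mining idea, the sketch below (in Python) derives a personalised training focus by comparing one trainee's per-skill averages against the cohort stored in a shared results database. The record layout and skill names are hypothetical assumptions; the surveyed literature does not prescribe a particular schema or algorithm.

```python
# Minimal sketch of mining a shared trainee-results store to derive a
# personalised training focus. The record layout and skill names are
# hypothetical; no specific schema is prescribed by the surveyed work.

from collections import defaultdict
from statistics import mean

def skill_averages(records):
    """Average score per skill from records shaped like
    {'trainee': str, 'skill': str, 'score': float in [0, 1]}."""
    per_skill = defaultdict(list)
    for record in records:
        per_skill[record["skill"]].append(record["score"])
    return {skill: mean(scores) for skill, scores in per_skill.items()}

def personalised_focus(all_records, trainee, n=1):
    """Return the n skills where the trainee lags furthest behind the
    cohort average, i.e. candidate targets for adapted content."""
    cohort = skill_averages(all_records)
    own = skill_averages([r for r in all_records if r["trainee"] == trainee])
    gaps = {skill: cohort[skill] - own.get(skill, 0.0) for skill in cohort}
    return sorted(gaps, key=gaps.get, reverse=True)[:n]

# Example usage with fabricated illustrative records:
records = [
    {"trainee": "A", "skill": "suturing",   "score": 0.55},
    {"trainee": "A", "skill": "navigation", "score": 0.90},
    {"trainee": "B", "skill": "suturing",   "score": 0.85},
    {"trainee": "B", "skill": "navigation", "score": 0.80},
]
print(personalised_focus(records, "A"))  # ['suturing']
```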

Summary of technologies used within virtual training simulators

This section presents a series of tables comparing the findings from this overview. Table 3 gives examples of VR simulators demonstrating which VR training topics have used which VR technologies. There are examples for all possible combinations, showing that generally all core VR technologies can be useful for all VR topic areas.

Table 4 shows examples of multimodal technologies within VR simulators, where two technologies have been combined together, including examples of existing VR

Conclusions

This overview has summarised the state of the art in virtual reality training systems using self-adaptive technology. Automation, machine learning and data-driven features play an important role in providing trainee-specific adaptive training content. Data forms a critical input to enable adaptation within virtual training systems. Trainee assessment data can enable customised training scenarios and automated difficulty levels to match individual expertise. Data from

Acknowledgements

The authors would like to acknowledge Bournemouth University for funding this research. They would also like to acknowledge the help of the anonymous reviewers whose feedback helped strengthen the research. The authors acknowledge the editors of the International Journal of Soft Computing and Software Engineering (JSCSE) for permission to reproduce Fig. 7.

References (154)

  • J. Novak-Marcincin et al., Application of virtual and augmented reality technology in education of manufacturing engineers.
  • M. Salehie et al., Self-adaptive software: Landscape and research challenges, ACM Trans. Auton. Adapt. Syst. (TAAS) (2009).
  • A. Boulton, Testing the limits of data-driven learning: language proficiency and training, ReCALL (2009).
  • S. Stenudd, Using machine learning in the adaptive control of a smart environment (2010).
  • R. Laddaga, Active software, in: Proc. of Int. Workshop on Self-Adaptive Software, 2000, pp....
  • D. Weyns et al., A survey of formal methods in self-adaptive systems.
  • J.O. Kephart et al., The vision of autonomic computing, Computer (2003).
  • R.W. Picard, Affective computing, Tech. Rep. 321 (1995).
  • T. Johns, Should You be Persuaded: Two Samples of Data-driven Learning Materials (1991).
  • C. Bouras et al., A framework for intelligent virtual training environment: The steps from specification to design, Educ. Technol. Soc. (2002).
  • D. Tavangarian et al., Is e-learning the solution for individual learning, Electron. J. E-Learn. (2004).
  • A. Ahmad, O. Basir, K. Hassanein, Adaptive user interfaces for intelligent e-Learning: issues and trends, in: The...
  • C.T. dos Santos et al., An intelligent and adaptive virtual environment and its application in distance learning.
  • D. Charles, A. Kerr, M. McNeill, M. McAlister, M. Black, J. Kcklich, K. Stringer, Player-centred game design: Player...
  • N. Rossol et al., A framework for adaptive training and games in virtual reality rehabilitation environments.
  • A. Del Blanco et al., Enhancing adaptive learning and assessment in virtual learning environments with educational games, IJDET-Intell. Learn. Syst. Advancements Comput.-Aided Instr.: Emerg. Stud. (2011).
  • A. Ewais et al., Authoring adaptive 3D virtual learning environments, Int. J. Virtual Pers. Learn. Environ. (IJVPLE) (2014).
  • B.W. Chen, Stereo vision calibration based on Elman neural network, Comput. Intell. Comput. Educ. Technol. (2014).
  • S.K. Rushton, K.L. Coles, J.P. Wann, Virtual reality technology in the assessment and rehabilitation of unilateral...
  • R.L. Myers et al., Virtual reality and left hemineglect: a technology for assessment and therapy, CyberPsychol. Behav. (2000).
  • G. Fichtinger et al., Image overlay guidance for needle insertion in CT scanner, IEEE Trans. Biomed. Eng. (2005).
  • O. Erat et al., How a surgeon becomes superman by visualization of intelligently fused multi-modalities.
  • Forbes. 2014....
  • MSDN, 2012....
  • IEEE. 2014. Microsoft Kinect Hack. http://sites.ieee.org/vancouver-cs/archives/290 (Accessed October...
  • J.P. Rolland et al., Quantification of adaptation to virtual-eye location in see-thru head-mounted displays.
  • P. DiZio et al., Spatial orientation, adaptation, and motion sickness in real and virtual environments, Presence: Teleoper. Vir. Environ. (1992).
  • L. James Smart et al., Influence of complexity and coupling of optic flow on visually induced motion sickness, Ecol. Psychol. (2014).
  • A. Tinwell et al., The Uncanny Valley and nonverbal communication in virtual characters.
  • T.R. Coles et al., The role of haptics in medical training simulators: a survey of the state of the art, IEEE Trans. Haptic. (2011).
  • S. Deng et al., Multimodality with eye tracking and haptics: A new horizon for serious games?, Int. J. Serious Games (2014).
  • O. Luzanin et al., Hand gesture recognition using low-budget data glove and cluster-trained probabilistic neural network, Assem. Autom. (2014).
  • G.J. Offer, Automated vehicles and electrification of transport, Energy Environ. Sci. (2015).
  • F. Olivari, F.M. Nieuwenhuizen, H.H. Bülthoff, L. Pollini, An experimental comparison of haptic and automated pilot...
  • M. Chauhan et al., Control of an omnidirectional walking simulator.
  • C. Gunn et al., Using collaborative haptics in remote surgical training.
  • H. Yu, Uses, costs and comparative effectiveness of robotic assisted, laparoscopic and open urological surgery, J. Urol. (2011).
  • A.M. Okamura, Haptic feedback in robot-assisted minimally invasive surgery, Curr. Opin. Urol. (2009).
  • M. Stark et al., The future of telesurgery: a universal system with haptic sensation, J. Turkish German Gynecol. Assoc. (2012).
  • F. Despinoy, J. Leon Torres, M.A. Vitrani, B. Herman, Toward remote teleoperation with eye and hand: A first...