Annual Reviews in Control

Volume 32, Issue 2, December 2008, Pages 253-261
Smart collaboration between humans and machines based on mutual understanding

https://doi.org/10.1016/j.arcontrol.2008.07.003

Abstract

To improve the safety and comfort of a human–machine system, the machine needs to ‘know’, in real time, the human operator in the system. The machine's assistance to the human can be fine-tuned if the machine is able to sense the human's state and intent. Related to this point, this paper discusses issues of human trust in automation, automation surprises, responsibility, and authority. Examples are given from a driver assistance system for an advanced automobile.

Introduction

A human uses a machine with the expectation that it can extend his/her capabilities or help him/her achieve a goal efficiently and with less burden. The machine must be designed so that it is easy for the human to: (1) understand what the machine can or cannot do, (2) give directives to the machine, (3) monitor what the machine is doing, and (4) intervene in machine control when necessary. The machine is thus required to be an agent that is faithful to the human and performs precisely what it is ordered to do. If the human's decision and the associated directive to the machine are correct, he/she can obtain a result that matches his/her goal and the situation at the time. In reality, however, the human can fail to give a proper directive to the machine in several ways. One such case is where the human's understanding of a given situation (and thus his/her decision) is incorrect for some reason, such as inattention or internal distraction. Another is where the human's understanding of the situation is correct, but little time is left to implement a necessary action, such as giving a directive to the machine.

The machine may need to implement some control actions when it determines that the human might be in a condition where he/she is unable to give directives to the machine. In other words, the machine might need to be smart enough to behave like a human friend (or a teammate) who tries to understand the partner's psychological/physiological condition, the situation at the time, what the partner is doing or trying to do, and whether the partner's intent or action matches the situation. In the automotive domain, for instance, various research projects have been conducted world-wide to develop smart machines that provide drivers with support functions for enhancing comfort and safety (see, e.g. Akamatsu & Sakaguchi, 2003; Amiditis, Lentziou, Polychronopoulos, Bolovinou, & Bekiaris, 2005; Cacciabue & Hollnagel, 2005; Furugori, Yoshizawa, Iname, & Miura, 2005; Panou, Bekiaris, & Papakostopoulos, 2005; Saad, 2005; Tango & Montanari, 2005; Witt, 2003). Development of a situation-adaptive Driver Assistance System (DAS) is one such approach.

The situation-adaptive DAS was developed by the author and his colleagues in a project supported by the Government of Japan (Inagaki, 2007). The DAS provides the driver with multi-layered assist functions (Fig. 1). In the first layer, the driver's situation recognition is enhanced so that proper decisions and actions can be made; understanding of the current situation determines what action needs to be taken (Hollnagel & Bye, 2000). In the second layer, the DAS monitors the driver's behavior and the traffic condition to evaluate whether his/her intent and behaviors match the traffic condition. When the DAS detects a deviation from normality, it gives the driver an alert or a warning to bring him/her back to normality. In the third layer, the DAS provides automatic safety control functions if the deviation from normality persists or if little time is left for the driver to cope with the traffic situation. The situation-adaptive DAS adjusts its assist functions dynamically so that they fit the human's intent, psychological/physiological condition, and the traffic condition. The adjustment is made in a machine-initiated manner (Inagaki, 2003; Scerbo, 1996) by inferring the human's intent and condition through monitoring of his/her behaviors. For instance, the DAS can implement control actions based on its own decisions.
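The three-layer dispatch described above can be sketched in code. The following is a minimal illustrative sketch, not the paper's implementation: all function names, signals, and the 1.5 s default reaction-time threshold are assumptions introduced here.

```python
from enum import Enum

class AssistLevel(Enum):
    INFORM = 1   # layer 1: enhance the driver's situation recognition
    WARN = 2     # layer 2: alert/warn on a deviation from normality
    CONTROL = 3  # layer 3: machine-initiated automatic safety control

def select_assist_level(deviation_detected: bool,
                        deviation_persists: bool,
                        time_to_event_s: float,
                        driver_reaction_time_s: float = 1.5) -> AssistLevel:
    """Illustrative dispatch over the three assist layers.

    Thresholds and parameter names are hypothetical, not from the paper.
    """
    # Layer 3: the deviation persists, or too little time is left
    # for the driver to cope -> machine-initiated control.
    if deviation_persists or time_to_event_s < driver_reaction_time_s:
        return AssistLevel.CONTROL
    # Layer 2: a deviation from normality was detected -> alert/warn.
    if deviation_detected:
        return AssistLevel.WARN
    # Layer 1: no deviation -> only support situation recognition.
    return AssistLevel.INFORM
```

Note how the machine-initiated character of the system shows up here: escalation to `CONTROL` does not wait for a driver directive when time runs short.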

This paper presents the benefits of mutual understanding between humans and machines for realizing smart collaboration, as well as the necessity of machine-initiated (instead of human-initiated) decision and control for assuring comfort and safety. The paper also discusses issues of trust, automation surprises, responsibility, and authority, especially when humans and machines monitor each other's behaviors.

Advanced safety vehicle

Before discussing the situation-adaptive DAS, it is useful to review the Advanced Safety Vehicle Project, one of the national projects in Japan. An Advanced Safety Vehicle (ASV) is defined as a vehicle equipped with technology-based support systems that assist drivers in enhancing safety under normal as well as time-critical situations. The ASV Project aims to promote the development of new technologies for reducing traffic accidents. The project is carried out through

Why is understanding of driver state necessary?

The pre-crash safety system described in the previous section applies the automatic brakes upon detecting a delay in the driver's action. In other words, the driver assistance function is activated only once it is certain that the driver's braking is too late. Note that if the driver assistance system could predict that the driver might be late in braking, it would be able to apply the automatic brakes a bit earlier, with a better result. To make such a prediction of action delay feasible, it is
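The benefit of predicting, rather than merely detecting, a braking delay can be illustrated with standard constant-deceleration kinematics. This sketch is hypothetical: the paper gives no such formula, and the signal names and the 7.0 m/s² deceleration figure are assumptions.

```python
def should_autobrake(distance_m: float,
                     closing_speed_mps: float,
                     predicted_reaction_s: float,
                     max_decel_mps2: float = 7.0) -> bool:
    """Trigger automatic braking *before* the driver is demonstrably
    late, using a predicted (not yet observed) reaction delay.

    Illustrative only; parameters are assumptions, not the paper's.
    """
    if closing_speed_mps <= 0:
        return False  # not closing on the obstacle
    # Distance covered while the driver is predicted to still react.
    reaction_dist = closing_speed_mps * predicted_reaction_s
    # Stopping distance under full braking: v^2 / (2a).
    braking_dist = closing_speed_mps ** 2 / (2.0 * max_decel_mps2)
    # Brake automatically if waiting for the driver would be too late.
    return reaction_dist + braking_dist >= distance_m
```

The longer the predicted reaction delay, the earlier the condition fires, which is exactly the margin a delay-detecting (rather than delay-predicting) system gives up.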

Real time sensing of driver state

The author conducted a research project, ‘Situation and Intent Recognition for Risk Finding and Avoidance,’ with the support of the Government of Japan from July 2004 to March 2007. The aim of the project was to develop proactive safety technologies realizing a driver assistance system that provides the driver with various support functions in a situation-adaptive and context-dependent manner (Inagaki, 2007). The idea of the project was: Although it is not possible to ‘see’ inside of a
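The project's premise, inferring the driver's internal state from observable behavior alone, can be sketched as a simple rule-based classifier. Everything here is an illustrative assumption: the signals, thresholds, and state labels are hypothetical, and, as the concluding remarks note, real sensing methods must be tuned to the application.

```python
def infer_driver_state(eyes_off_road_s: float,
                       steering_reversal_rate_hz: float,
                       lane_offset_m: float) -> str:
    """Rule-based sketch of driver-state inference from observables.

    Hypothetical signals and thresholds; not the project's method.
    """
    # Prolonged glance away from the road suggests distraction.
    if eyes_off_road_s > 2.0:
        return "distracted"
    # Low steering activity combined with lane drift suggests drowsiness.
    if steering_reversal_rate_hz < 0.1 and abs(lane_offset_m) > 0.5:
        return "drowsy"
    return "attentive"
```

A deployed system would fuse several such channels (the references include, e.g., seat-pressure analysis) rather than rely on fixed per-signal thresholds.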

What if a deviation from normality is detected?

The situation-adaptive DAS developed in the ‘Situation and intent recognition for risk finding and avoidance’ project monitors the driver's behavior and the outside traffic environment to evaluate whether his/her intent and behaviors match the traffic condition. When it detects a deviation from normality (such as the undesirable conditions or behaviors in Case 1, Case 2, Case 3, and Case 4, or the driver's intent to perform an inappropriate action as in Case 5), the DAS gives the driver an alert or a warning to let

Concluding remarks

This paper has discussed the need for, and importance of, the machine ‘knowing’ the human operator in the system. If the machine is able to sense whether the human is in a good condition, what he/she is trying to do, and whether he/she can accomplish the aim alone, then the machine's support to the human can be fine-tuned. We have seen that some sensing technologies and related methods are available for that purpose. However, no ‘universal sensing methods’ have been developed. It is necessary to tune

Acknowledgments

This work is partially based on the results of a research project, ‘Situation and intent recognition for risk finding and avoidance,’ conducted from 2004 to 2007 with the support of MEXT, the Ministry of Education, Culture, Sports, Science and Technology, Government of Japan. The author expresses his thanks to the members of the project for their great efforts. Thanks are extended to MLIT, the Ministry of Land, Infrastructure and Transport, Government of Japan, for their

References (38)

  • E. Hollnagel et al. Principles for modeling function allocation. Int. J. Human-Computer Studies (2000)
  • M. Akamatsu et al. Personal fitting driver assistance system based on driving behavior model
  • A. Amiditis et al. Real time traffic and environment monitoring for automotive applications
  • C.E. Billings. Aviation automation—The search for a human-centered approach (1997)
  • P.C. Cacciabue et al. Modelling driving performance: A review of criteria, variables and parameters
  • M.R. Endsley et al. The out-of-the-loop performance problem and the level of control in automation. Human Factors (1995)
  • S. Furugori et al. Estimation of driver fatigue by pressure distribution on seat in long term driving. Review of Automotive Engineering (2005)
  • E. Hollnagel et al. Joint cognitive systems: Foundations of cognitive systems engineering (2005)
  • ICAO (1998). Human factors training manual. Doc...
  • T. Inagaki. Adaptive automation: Sharing and trading of control
  • T. Inagaki. Design of human–machine interactions in light of domain-dependence of human-centered automation. Cognition, Technology & Work (2006)
  • T. Inagaki. Situation and intent recognition for risk finding and avoidance
  • T. Inagaki et al. Adaptive automation as an ultimate means for assuring safety
  • T. Inagaki et al. Driver support functions under resource-limited situations. Journal of Mechanical Systems for Transportation and Logistics (2008)
  • Inagaki, T., & Sheridan, T. B. (2008). Authority and responsibility in human–machine systems: Is machine-initiated...
  • T. Inagaki et al. Human supervision and control in engineering and music: Similarities, dissimilarities, and their implications. Proceedings of the IEEE (2004)
  • M. Itoh. Proactive detection of driver's potentially risky behavior via sensor fusion approach
  • M. Itoh. Real time detection of driver's intent via analyses of pressure distribution on the seat
  • M. Itoh et al. Driver behavior monitoring. Part II. Detection of driver's inattentiveness under distracting conditions


Toshiyuki Inagaki received BS, MS, and Doctor's degrees in systems engineering from Kyoto University in 1974, 1976, and 1979, respectively. From 1979 to 1980 he was a Research Associate at the University of Houston. In 1980 he joined the University of Tsukuba, where he has been a Professor since 1994. From 1990 to 1991 he was at the University of Kassel, Germany, as a Research Fellow of the Alexander von Humboldt Foundation. His research interests include the design and evaluation of human interactions with machine intelligence (e.g., situation-adaptive trading of authority between humans and automation), psychological aspects of human–machine systems (e.g., human trust in and distrust of automated systems), and decision making with imperfect information under time stress. Dr. Inagaki is a Senior Member of the IEEE and a Fellow of the IEICE. He served as Chair of the Special Interest Group on Human-Machine Systems of the SICE and Chair of the IEEE Reliability Society Japan Chapter. He received best paper awards from the Institute of Systems, Control, and Information Engineers in 1994, the Human Interface Society in 2001, and the Japanese Council of Traffic Science in 2008, and a best presentation award from the Japan Society of Automotive Engineers in 2004. Since 2006, Dr. Inagaki has been a member of committees for the Advanced Safety Vehicle (ASV), Ministry of Land, Infrastructure and Transport, Government of Japan.
