ABSTRACT
Motivation - To investigate ways of supporting human-automation teams working with real-world, imperfect automation, where many system failures are systematic rather than random.
Research approach - An experimental approach was used to investigate how variance in agent reliability may influence humans' trust in, and subsequent reliance on, agents' decision aids. Sixty command and control (C2) teams, each consisting of a human operator and two cognitive agents, were asked to detect and respond to battlefield threats in six ten-minute scenarios. At the end of each scenario, participants completed SAGAT (Situation Awareness Global Assessment Technique) queries, followed by the NASA-TLX workload questionnaire.
Findings/Design - Results revealed that teams with experienced human operators accepted significantly fewer inappropriate recommendations from agents than teams with inexperienced operators. More importantly, knowledge of agent reliability and the ratio of unreliable tasks had significant effects on operators' trust, as manifested in both team performance and operators' rectification of inappropriate agent recommendations.
Originality/Value - This study represents an important step toward uncovering the nature of human trust in human-agent collaboration.
Take away message - This research has shown that giving operators even a minimal basis for understanding when they should and should not trust agent recommendations allows them to make better automation use decisions (AUDs), to maintain better situation awareness of the critical issues associated with automation error, and to establish better-calibrated trust in intelligent agents.
Title: The influence of agent reliability on trust in human-agent collaboration