Artificial Intelligence in Conversational Agents: A Study of Factors Related to Perceived Humanness in Chatbots

ABSTRACT
Artificial intelligence (AI) is gaining traction in service-oriented businesses in the form of chatbots. A chatbot is a popular type of social AI that uses natural language processing to communicate with users. Past studies disagree on whether a chatbot should communicate and behave like a human. This article explores these discrepancies in order to contribute a list of factors related to perceived humanness in chatbots and to show how these factors may lead to a positive user experience. The results suggest that a chatbot should have the following characteristics: avoiding small talk and maintaining a formal tone; identifying itself as a bot and asking how it can help; providing specific information, articulated through a sophisticated choice of words and well-constructed sentences; asking follow-up questions during decision-making processes; and, when the context is not comprehensible, offering an apology followed by a question or a statement that moves the conversation forward. These results have implications for designers working in the field of AI, as well as for the broader discourse around the adoption of AI in society.