Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI

ABSTRACT
Trust is a central component of the interaction between people and AI: 'incorrect' levels of trust may cause misuse, abuse, or disuse of the technology. But what, precisely, is the nature of trust in AI? What are the prerequisites and goals of the cognitive mechanism of trust, and how can we promote them, or assess whether they are being satisfied in a given interaction? This work aims to answer these questions. We discuss a model of trust inspired by, but not identical to, interpersonal trust (i.e., trust between people) as defined by sociologists. This model rests on two key properties: the vulnerability of the user, and the user's ability to anticipate the impact of the AI model's decisions. We incorporate a formalization of 'contractual trust', such that trust between a user and an AI model is trust that some implicit or explicit contract will hold, and a formalization of 'trustworthiness' (which departs from the sociological notion of trustworthiness), along with the derived concepts of 'warranted' and 'unwarranted' trust. We present the possible causes of warranted trust as intrinsic reasoning and extrinsic behavior, and discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted. Finally, we elucidate the connection between trust and XAI using our formalization.
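The notion of 'contractual trust' and the warranted/unwarranted distinction can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's formal definition: all names (`Contract`, `trust_is_warranted`, the toy model) are invented here. The idea is that a contract is a verifiable expectation about model behavior, and trust is warranted only when every contract the user trusts the model to uphold actually holds empirically.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Contract:
    """An explicit, checkable expectation about model behavior,
    e.g. 'classifies a held-out test set correctly'."""
    description: str
    holds: Callable[[Callable], bool]  # empirical check against the model

def trust_is_warranted(model: Callable,
                       contracts: List[Contract]) -> Tuple[bool, List[str]]:
    """Trust in `model` is warranted iff every contract it is trusted
    to uphold actually holds; otherwise trust is unwarranted."""
    violated = [c.description for c in contracts if not c.holds(model)]
    return (len(violated) == 0, violated)

# Toy sentiment model trusted to uphold one (very weak) contract.
toy_model = lambda text: "positive" if "good" in text else "negative"

accuracy_contract = Contract(
    description="classifies these held-out examples correctly",
    holds=lambda m: all(
        m(x) == y
        for x, y in [("good movie", "positive"), ("bad movie", "negative")]
    ),
)

warranted, violations = trust_is_warranted(toy_model, [accuracy_contract])
print(warranted)  # True: the contract holds, so trust here is warranted
```

A user who trusts the model to uphold contracts that are never checked, or that fail the check, holds unwarranted trust under this sketch; anticipation corresponds to the user knowing which contracts the model is expected to satisfy.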