DOI: 10.1145/3313831.3376590
Research Article · Honorable Mention

Questioning the AI: Informing Design Practices for Explainable AI User Experiences

Published: 23 April 2020

ABSTRACT

A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic. While many recognize the necessity to incorporate explainability features in AI systems, how to address real-world user needs for understanding AI remains an open question. By interviewing 20 UX and design practitioners working on various AI products, we seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products. To do so, we develop an algorithm-informed XAI question bank in which user needs for explainability are represented as prototypical questions users might ask about the AI, and use it as a study probe. Our work contributes insights into the design space of XAI, informs efforts to support design practices in this space, and identifies opportunities for future XAI work. We also provide an extended XAI question bank and discuss how it can be used for creating user-centered XAI.
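Concretely, a question bank of this kind can be treated as a simple mapping from explanation-need categories to prototypical user questions, which designers can walk through as a probe or checklist. The sketch below is a minimal illustration in Python; the category names and sample questions are our assumptions for illustration, not the paper's verbatim question bank.

```python
# A minimal, illustrative sketch of an XAI question bank as a plain data
# structure: explanation-need categories mapped to prototypical user
# questions. Category names and sample questions here are assumptions
# for illustration, not the paper's verbatim bank.
XAI_QUESTION_BANK: dict[str, list[str]] = {
    "Input": ["What kind of data does the system learn from?"],
    "Output": ["What kind of output does the system give?"],
    "Performance": ["How accurate are the system's predictions?"],
    "How (global)": ["What is the overall logic of how the system makes decisions?"],
    "Why": ["Why is this instance given this prediction?"],
    "Why not": ["Why is this instance not given some other prediction?"],
    "What if": ["What would the system predict if this input changed?"],
}

def questions_for(category: str) -> list[str]:
    """Return the prototypical user questions for one explanation need."""
    return XAI_QUESTION_BANK.get(category, [])

if __name__ == "__main__":
    # Walk the bank, e.g., to seed a user-needs interview or a design probe.
    for category, questions in XAI_QUESTION_BANK.items():
        print(f"{category}: {questions[0]}")
```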


Published in:
CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
April 2020, 10688 pages
ISBN: 978-1-4503-6708-0
DOI: 10.1145/3313831
Copyright © 2020 ACM
Publisher: Association for Computing Machinery, New York, NY, United States
