“Better” Counterfactuals, Ones People Can Understand: Psychologically-Plausible Case-Based Counterfactuals Using Categorical Features for Explainable AI (XAI)

  • Conference paper
  • Case-Based Reasoning Research and Development (ICCBR 2022)

Abstract

A recent surge of research has focused on counterfactual explanations as a promising solution to the eXplainable AI (XAI) problem. Over 100 counterfactual XAI methods have been proposed, many emphasising the key role of features that are “important”, “causal”, or “actionable” in making explanations comprehensible to human users. However, these proposals rest on intuition rather than psychological evidence. Indeed, recent psychological evidence [22] shows that it is abstract feature-types that impact people’s understanding of explanations; categorical features better support people’s learning of an AI model’s predictions than continuous features. This paper proposes a more psychologically-valid counterfactual method, one extending case-based techniques with additional functionality to transform feature-differences into categorical versions of themselves. This enhanced case-based counterfactual method still generates good counterfactuals relative to baseline methods on coverage and distance metrics. It is the first counterfactual method specifically designed to meet identified psychological requirements of end-users, rather than merely reflecting the intuitions of algorithm designers.
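To make the transformation step concrete, the following is a minimal sketch of turning a continuous feature-difference into a categorical one, assuming a quantile (tercile) binning scheme; the bin labels and function names are our illustrative assumptions, as the paper’s exact discretisation procedure is not given in this excerpt.

    import numpy as np

    def to_categorical_level(feature_column, value, labels=("low", "medium", "high")):
        # Hypothetical discretisation: estimate tercile boundaries from
        # the dataset, then name the bin that the value falls into.
        edges = np.quantile(feature_column, [1/3, 2/3])
        return labels[int(np.searchsorted(edges, value))]

    def categorical_difference(feature_column, query_value, cf_value):
        # Re-express a continuous feature-difference (e.g. BMI 27.4 -> 31.2)
        # as a categorical one (e.g. "medium" -> "high").
        return (to_categorical_level(feature_column, query_value),
                to_categorical_level(feature_column, cf_value))

An explanation would then be phrased in terms of level changes (“if your BMI were high rather than medium…”) rather than raw numeric deltas.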

Notes

  1. Keane and Smyth [19] argued that, for tabular data, counterfactuals should be sparse, with no more than two feature-differences, so that people can understand them. Recent user studies show that people prefer counterfactuals with 2–3 feature differences [45]. (A sparsity check of this kind appears in the sketch after these notes.)

  2. As well as considering multiple natives, CB2-CF also considers nearest-like-neighbours of the native's x' (e.g., the three closest, same-class datapoints to x') to expand the variations of natives considered; see the sketch below. This second step is not implemented in our version of CB2-CF.
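The two notes above can be made concrete with a minimal sketch: a feature-difference count for the sparsity constraint (note 1) and the nearest-like-neighbour expansion around a native's x' (note 2). The numeric feature vectors, Euclidean distance, and function names below are our illustrative assumptions, not details taken from the paper.

    import numpy as np

    def n_feature_differences(query, candidate):
        # Note 1: a counterfactual is easier to understand when it
        # differs from the query on at most ~2 features.
        return int(np.sum(~np.isclose(query, candidate)))

    def nearest_like_neighbours(x_prime, X, y, cf_class, k=3):
        # Note 2: the k closest datapoints to x' that share its class,
        # used to expand the pool of native variations considered.
        same_class = X[y == cf_class]
        dists = np.linalg.norm(same_class - x_prime, axis=1)
        return same_class[np.argsort(dists)[:k]]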

References

  1. Gunning, D., Aha, D.W.: DARPA’s explainable artificial intelligence program. AI Mag. 40(2), 44–58 (2019)

  2. Adadi, A., Berrada, M.: Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138–52160 (2018)

  3. Miller, T.: Explanation in artificial intelligence. Artif. Intell. 267, 1–38 (2019)

  4. Goodman, B., Flaxman, S.: European Union regulations on algorithmic decision-making and a “right to explanation.” AI Mag. 38(3), 50–57 (2017)

  5. Leake, D., McSherry, D.: Introduction to the special issue on explanation in case-based reasoning. Artif. Intell. Rev. 24(2), 103–108 (2005)

  6. Sørmo, F., Cassens, J., Aamodt, A.: Explanation in case-based reasoning–perspectives and goals. Artif. Intell. Rev. 24(2), 109–143 (2005)

  7. Schoenborn, J.M., Althoff, K.D.: Recent trends in XAI. In: Case-Based Reasoning for the Explanation of Intelligent Systems (XCBR) Workshop (2019)

  8. Kenny, E.M., Keane, M.T.: Twin-systems to explain neural networks using case-based reasoning. In: IJCAI-19, pp. 326–333 (2019)

  9. Keane, M.T., Kenny, E.M.: How case-based reasoning explains neural networks: a theoretical analysis of XAI using post-hoc explanation-by-example from a survey of ANN-CBR twin-systems. In: Bach, K., Marling, C. (eds.) ICCBR 2019. LNCS (LNAI), vol. 11680, pp. 155–171. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29249-2_11

  10. Kenny, E.M., Keane, M.T.: Explaining deep learning using examples: optimal feature weighting methods for twin systems using post-hoc, explanation-by-example in XAI. Knowl.-Based Syst. 233, 107530 (2021)

  11. Nugent, C., Cunningham, P.: Gaining insight through case-based explanation. J. Intell. Inf. Syst. 32(3), 267–295 (2009)

  12. Cummins, L., Bridge, D.: KLEOR: a knowledge lite approach to explanation oriented retrieval. Comput. Inform. 25(2–3), 173–193 (2006)

  13. Kenny, E.M., Keane, M.T.: On generating plausible counterfactual and semi-factual explanations for deep learning. In: AAAI-21, pp. 11575–11585 (2021)

  14. Martens, D., Provost, F.: Explaining data-driven document classifications. MIS Q. 38, 73–100 (2014)

  15. Keane, M.T., Kenny, E.M., Delaney, E., Smyth, B.: If only we had better counterfactual explanations. In: IJCAI-21, pp. 4466–4474 (2021)

  16. Karimi, A.-H., Barthe, G., Schölkopf, B., Valera, I.: A survey of algorithmic recourse. arXiv preprint arXiv:2010.04050 (2020)

  17. Byrne, R.M.J.: Counterfactuals in explainable artificial intelligence (XAI): evidence from human reasoning. In: IJCAI-19, pp. 6276–6282 (2019)

  18. Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv. JL Tech. 31, 841 (2018)

  19. Keane, M.T., Smyth, B.: Good counterfactuals and where to find them: a case-based technique for generating counterfactuals for explainable AI (XAI). In: Watson, I., Weber, R. (eds.) ICCBR 2020. LNCS (LNAI), vol. 12311, pp. 163–178. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58342-2_11

  20. Smyth, B., Keane, M.T.: A few good counterfactuals: generating interpretable, plausible and diverse counterfactual explanations. In: ICCBR-22. Springer, Berlin (2022)

  21. Wexler, J., Pushkarna, M., Bolukbasi, T., Wattenberg, M., Viégas, F., Wilson, J.: The what-if tool: interactive probing of machine learning models. IEEE TVCG 26(1), 56–65 (2019)

  22. Warren, G., Keane, M.T., Byrne, R.M.J.: Features of explainability: how users understand counterfactual and causal explanations for categorical and continuous features in XAI. In: IJCAI-22 Workshop on Cognitive Aspects of Knowledge Representation (2022)

  23. Nugent, C., Cunningham, P.: A case-based explanation system for black-box systems. Artif. Intell. Rev. 24(2), 163–178 (2005)

  24. Kumar, R.R., Viswanath, P., Bindu, C.S.: Nearest neighbor classifiers: a review. Int. J. Comput. Intell. Res. 13(2), 303–311 (2017)

  25. Aggarwal, C.C., Chen, C., Han, J.: The inverse classification problem. J. Comput. Sci. Technol. 25(3), 458–468 (2010)

  26. Laugel, T., Lesot, M.J., Marsala, C., Renard, X., Detyniecki, M.: The dangers of post-hoc interpretability. In: IJCAI-19, pp. 2801–2807 (2019)

  27. Mothilal, R.K., Sharma, A., Tan, C.: Explaining machine learning classifiers through diverse counterfactual explanations. In: FAT*20, pp. 607–617 (2020)

  28. Van Looveren, A., Klaise, J.: Interpretable counterfactual explanations guided by prototypes. In: Oliver, N., Pérez-Cruz, F., Kramer, S., Read, J., Lozano, J.A. (eds.) ECML PKDD 2021. LNCS (LNAI), vol. 12976, pp. 650–665. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-86520-7_40

  29. Russell, C.: Efficient search for diverse coherent explanations. In: FAT-19, pp. 20–28 (2019)

  30. Kahneman, D., Miller, D.T.: Norm theory. Psychol. Rev. 93(2), 136–153 (1986)

  31. Ustun, B., Spangher, A., Liu, Y.: Actionable recourse in linear classification. In: FAT-19, pp. 10–19 (2019)

  32. Karimi, A.H., Barthe, G., Balle, B., Valera, I.: Model-agnostic counterfactual explanations for consequential decisions. In: AISTATS-20, Palermo, Italy, vol. 108. PMLR (2020)

  33. Wiratunga, N., Wijekoon, A., Nkisi-Orji, I., Martin, K., Palihawadana, C., Corsar, D.: Actionable feature discovery in counterfactuals using feature relevance explainers. In: CEUR Workshop Proceedings (2021)

  34. Karimi, A.H., von Kügelgen, J., Schölkopf, B., Valera, I.: Algorithmic recourse under imperfect causal knowledge. In: NeurIPS-20, 33 (2020)

  35. Ramon, Y., Martens, D., Provost, F., Evgeniou, T.: A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC, LIME-C and SHAP-C. Adv. Data Anal. Classif. 14(4), 801–819 (2020). https://doi.org/10.1007/s11634-020-00418-3

  36. Delaney, E., Greene, D., Keane, M.T.: Instance-based counterfactual explanations for time series classification. In: Sánchez-Ruiz, A.A., Floyd, M.W. (eds.) ICCBR 2021. LNCS (LNAI), vol. 12877, pp. 32–47. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-86957-1_3

  37. Dodge, J., Liao, Q.V., Zhang, Y., Bellamy, R.K., Dugan, C.: Explaining models: an empirical study of how explanations impact fairness judgment. In: IUI-19, pp. 275–285 (2019)

  38. Lucic, A., Haned, H., de Rijke, M.: Contrastive local explanations for retail forecasting. In: FAT*20, pp. 90–98 (2020)

  39. Van der Waa, J., Nieuwburg, E., Cremers, A., Neerincx, M.: Evaluating XAI: a comparison of rule-based and example-based explanations. Artif. Intell. 291 (2021)

  40. Lage, I., et al.: Human evaluation of models built for interpretability. In: HCOMP-19, pp. 59–67 (2019)

  41. Kirfel, L., Liefgreen, A.: What if (and how...)? Actionability shapes people’s perceptions of counterfactual explanations in automated decision-making. In: ICML-21 Workshop on Algorithmic Recourse (2021)

  42. Kahneman, D., Tversky, A.: The simulation heuristic. In: Kahneman, D., Slovic, P., Tversky, A. (eds.) Judgment Under Uncertainty: Heuristics and Biases, pp. 201–208. CUP (1982)

  43. Dua, D., Graff, C.: UCI Machine Learning Repository. University of California, School of Information and Computer Science, Irvine, CA (2019). http://archive.ics.uci.edu/ml

  44. Keil, F.C.: Explanation and understanding. Ann. Rev. Psychol. 57, 227–254 (2006)

  45. Förster, M., Klier, M., Kluge, K., Sigler, I.: Evaluating explainable artificial intelligence: what users really appreciate. In: ECIS-2020 (2020)

Acknowledgments

This research was supported by (i) the UCD Foundation, (ii) Science Foundation Ireland via the Insight SFI Research Centre for Data Analytics (12/RC/2289), and (iii) the Department of Agriculture, Food and the Marine via the VistaMilk SFI Research Centre (16/RC/3835).

Author information

Corresponding author

Correspondence to Greta Warren.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Warren, G., Smyth, B., Keane, M.T. (2022). “Better” Counterfactuals, Ones People Can Understand: Psychologically-Plausible Case-Based Counterfactuals Using Categorical Features for Explainable AI (XAI). In: Keane, M.T., Wiratunga, N. (eds.) Case-Based Reasoning Research and Development. ICCBR 2022. Lecture Notes in Computer Science, vol. 13405. Springer, Cham. https://doi.org/10.1007/978-3-031-14923-8_5

  • DOI: https://doi.org/10.1007/978-3-031-14923-8_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-14922-1

  • Online ISBN: 978-3-031-14923-8

  • eBook Packages: Computer Science, Computer Science (R0)
