
Exploiting patterns to explain individual predictions

  • Regular Paper
  • Published in: Knowledge and Information Systems

Abstract

Users need to understand the predictions of a classifier, especially when decisions based on those predictions can have severe consequences. The explanation of a prediction reveals why a classifier makes a certain prediction, and it helps users accept or reject the prediction with greater confidence. This paper proposes an explanation method called Pattern Aided Local Explanation (PALEX) that provides instance-level explanations for any classifier. PALEX takes as inputs a classifier, a test instance, and a frequent pattern set that summarizes the classifier's training data, and outputs the supporting evidence that the classifier considers important for its prediction on that instance. To study the local behavior of a classifier in the vicinity of the test instance, PALEX uses the frequent pattern set from the training data as an extra input to guide the generation of synthetic samples in that vicinity. PALEX also uses contrast patterns to identify locally discriminative features around the test instance. PALEX is particularly effective in scenarios where multiple explanations exist. In our experiments, we compare PALEX to several state-of-the-art explanation methods over a range of benchmark datasets and find that it identifies explanations with both high precision and high recall.
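As a rough illustration of the pipeline the abstract describes, the sketch below mimics the two high-level steps: pattern-guided sampling in the vicinity of the test instance, followed by extraction of locally discriminative conditions as supporting evidence. It is not the published PALEX algorithm; the function names (palex_explain, pattern_guided_samples, contrast_evidence), the 0.7 mixing weight, the restriction to integer-coded categorical features, and the use of a simple support difference in place of genuine contrast-pattern mining are all assumptions made only for illustration.

```python
import numpy as np

def pattern_guided_samples(x, patterns, feature_values, n_samples, rng):
    """Generate synthetic neighbours of the test instance x (a 1-D integer-coded array).

    Each sample starts as a copy of x; most of the time a frequent pattern
    (here, a dict mapping feature index -> value) mined from the training data
    is imposed on the copy, otherwise a single feature is perturbed at random.
    The patterns keep the synthetic points in regions that actually occur in
    the training data rather than in arbitrary parts of the feature space.
    """
    samples = []
    for _ in range(n_samples):
        z = x.copy()
        if patterns and rng.random() < 0.7:      # pattern-guided perturbation (weight is an assumption)
            pattern = patterns[rng.integers(len(patterns))]
            for f, v in pattern.items():
                z[f] = v
        else:                                    # plain random perturbation of one feature
            f = rng.integers(len(x))
            z[f] = rng.choice(feature_values[f])
        samples.append(z)
    return np.array(samples)

def contrast_evidence(samples, labels, target_class, top_k=5):
    """Rank feature=value conditions by how much more often they hold among
    neighbours predicted as target_class than among the rest -- a crude
    support-difference stand-in for contrast-pattern mining."""
    pos = samples[labels == target_class]
    neg = samples[labels != target_class]
    scores = {}
    for f in range(samples.shape[1]):
        for v in np.unique(samples[:, f]):
            sup_pos = float(np.mean(pos[:, f] == v)) if len(pos) else 0.0
            sup_neg = float(np.mean(neg[:, f] == v)) if len(neg) else 0.0
            scores[(f, int(v))] = sup_pos - sup_neg
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

def palex_explain(clf, x, patterns, feature_values, n_samples=500, seed=0):
    """Return the top feature=value conditions supporting clf's prediction on x."""
    rng = np.random.default_rng(seed)
    samples = pattern_guided_samples(x, patterns, feature_values, n_samples, rng)
    labels = clf.predict(samples)                # query the black-box classifier on the neighbourhood
    target = clf.predict(x.reshape(1, -1))[0]
    return contrast_evidence(samples, labels, target)
```

Under these assumptions, one would train any classifier exposing a predict method, mine frequent patterns (e.g., with FP-growth) from the discretized training data, and call palex_explain(clf, x, patterns, feature_values); the returned conditions play the role of the supporting evidence described above.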




Author information

Corresponding author

Correspondence to Yunzhe Jia.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Jia, Y., Bailey, J., Ramamohanarao, K. et al. Exploiting patterns to explain individual predictions. Knowl Inf Syst 62, 927–950 (2020). https://doi.org/10.1007/s10115-019-01368-9
