
Explaining End-to-End ECG Automated Diagnosis Using Contextual Features

  • Conference paper
Machine Learning and Knowledge Discovery in Databases. Applied Data Science and Demo Track (ECML PKDD 2020)

Abstract

We propose a new method to generate explanations for end-to-end classification models. The explanations consist of features that are meaningful to the user, namely contextual features. We instantiate our approach in the scenario of automated electrocardiogram (ECG) diagnosis and analyze the generated explanations in terms of interpretability and robustness. The proposed method uses a noise-insertion strategy to quantify the impact of intervals and segments of the ECG signal on the automated classification outcome. These intervals and segments and their impact on the diagnosis are commonplace to cardiologists, and using them in explanations enables a better understanding of the outcomes as well as the identification of sources of mistakes. The proposed method is particularly effective and useful for modern deep learning models that take raw data as input. We demonstrate our method by explaining diagnoses generated by a deep convolutional neural network.
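To make the noise-insertion strategy concrete, below is a minimal sketch of how segment-level impact scores could be computed for a trained classifier. It assumes a Keras-style model exposing model.predict on batches of raw ECG signals and a dictionary mapping clinically meaningful intervals (e.g., P wave, QRS complex, T wave) to sample ranges; the function name segment_impact, the Gaussian-noise parameters, and the example segment boundaries are illustrative assumptions, not the authors' exact procedure.

    import numpy as np

    def segment_impact(model, ecg, segments, noise_std=0.1, n_trials=30, seed=0):
        """Score each ECG interval/segment by how much replacing it with Gaussian
        noise lowers the model's confidence in its original prediction.
        Hypothetical sketch; not the authors' exact procedure."""
        rng = np.random.default_rng(seed)
        base = model.predict(ecg[np.newaxis])[0]        # baseline class probabilities
        top_class = int(np.argmax(base))                # class being explained
        impacts = {}
        for name, (start, end) in segments.items():     # e.g. {"QRS": (140, 180), ...}
            drops = []
            for _ in range(n_trials):
                perturbed = ecg.copy()
                # Overwrite the interval with noise to suppress its information.
                perturbed[start:end] = rng.normal(0.0, noise_std,
                                                  size=perturbed[start:end].shape)
                prob = model.predict(perturbed[np.newaxis])[0][top_class]
                drops.append(base[top_class] - prob)    # larger drop => more important
            impacts[name] = float(np.mean(drops))
        return impacts

    # Example usage (hypothetical segment boundaries given in samples):
    # impacts = segment_impact(cnn, ecg_signal,
    #                          {"P": (60, 100), "QRS": (140, 180), "T": (220, 300)})

Averaging the probability drop over several noise draws makes the score less sensitive to any single random perturbation, which is the property analyzed as robustness in the paper.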


Notes

  1. https://github.com/DerickMatheus/ECG-interpretation.

  2. https://doi.org/10.5281/zenodo.3625017.

  3. https://doi.org/10.5281/zenodo.3625006.


Acknowledgement

The authors would like to thank FAPEMIG, CNPq and CAPES for their financial support. This work was also partially funded by projects MASWeb, EUBra-BIGSEA, INCT-Cyber, ATMOSPHERE and by the Google Research Awards for Latin America program.

Author information

Corresponding author

Correspondence to Derick M. Oliveira.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Oliveira, D.M., Ribeiro, A.H., Pedrosa, J.A.O., Paixão, G.M.M., Ribeiro, A.L.P., Meira, W. (2021). Explaining End-to-End ECG Automated Diagnosis Using Contextual Features. In: Dong, Y., Ifrim, G., Mladenić, D., Saunders, C., Van Hoecke, S. (eds) Machine Learning and Knowledge Discovery in Databases. Applied Data Science and Demo Track. ECML PKDD 2020. Lecture Notes in Computer Science, vol 12461. Springer, Cham. https://doi.org/10.1007/978-3-030-67670-4_13

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-67670-4_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-67669-8

  • Online ISBN: 978-3-030-67670-4

  • eBook Packages: Computer Science, Computer Science (R0)
