Scientific Discovery by Generating Counterfactuals Using Image Translation

  • Conference paper
  • In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 (MICCAI 2020)

Abstract

Model explanation techniques play a critical role in understanding the source of a model’s performance and making its decisions transparent. Here we investigate whether explanation techniques can also be used as a mechanism for scientific discovery. We make three contributions: first, we propose a framework for converting the predictions of explanation techniques into a mechanism of discovery; second, we show how generative models combined with black-box predictors can be used to generate hypotheses (without human priors) that can be critically examined; third, we apply these techniques to study classification models that predict Diabetic Macular Edema (DME) from retinal images, where recent work [30] showed that a CNN trained on these images is likely learning novel features. We demonstrate that the proposed framework is able to explain the underlying scientific mechanism, thus bridging the gap between the model’s performance and human understanding.

A. Narayanaswamy and S. Venugopalan—Equal contribution.
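
The following is a minimal sketch (not the authors’ implementation) of the counterfactual-generation idea summarized in the abstract: a CycleGAN-style translator maps an image across the decision boundary of a black-box classifier, and the pixel-wise change map together with the shift in the classifier’s prediction serves as a human-inspectable hypothesis about which features drive the decision. The names BlackBoxCNN, Translator, and generate_counterfactual are hypothetical placeholders with untrained stand-in networks, written in PyTorch for illustration only.

```python
# Hedged sketch, not the paper's code: pair a black-box predictor with an
# unpaired image-translation generator and inspect (a) how the prediction
# moves and (b) where the pixels changed. All class and function names are
# hypothetical placeholders.
import torch
import torch.nn as nn


class BlackBoxCNN(nn.Module):
    """Stand-in for a trained DME classifier on fundus photographs."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # P(DME positive)


class Translator(nn.Module):
    """Stand-in for a CycleGAN-style generator (e.g., negative -> positive)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())

    def forward(self, x):
        return self.net(x)


def generate_counterfactual(classifier, generator, image):
    """Translate the image, then return the counterfactual, a pixel-wise
    change map, and the classifier's prediction before and after."""
    with torch.no_grad():
        counterfactual = generator(image)
        p_before = classifier(image)
        p_after = classifier(counterfactual)
    change_map = (counterfactual - image).abs().mean(dim=1, keepdim=True)
    return counterfactual, change_map, p_before.item(), p_after.item()


if __name__ == "__main__":
    x = torch.rand(1, 3, 64, 64)  # placeholder for a preprocessed fundus image
    cf, delta, p0, p1 = generate_counterfactual(BlackBoxCNN(), Translator(), x)
    print(f"prediction before: {p0:.3f}, after translation: {p1:.3f}")
```

In practice the change map would be reviewed by domain experts to judge whether the highlighted regions correspond to a plausible physiological signal; this is one way to read the abstract’s “hypotheses that can be critically examined”.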

References

  1. Bau, D., Zhou, B., Khosla, A., Oliva, A., Torralba, A.: Network dissection: quantifying interpretability of deep visual representations. In: CVPR (2017)

  2. Chang, C.H., Creager, E., Goldenberg, A., Duvenaud, D.: Explaining image classifiers by counterfactual generation. arXiv preprint arXiv:1807.08024 (2018)

  3. Chu, C., Zhmoginov, A., Sandler, M.: CycleGAN, a master of steganography. arXiv preprint arXiv:1712.02950 (2017)

  4. Dhurandhar, A., Chen, P.Y., Luss, R., Tu, C.C., Ting, P., Shanmugam, K., Das, P.: Explanations based on the missing: towards contrastive explanations with pertinent negatives. In: NeurIPS, pp. 592–603 (2018)

  5. Fong, R., Patrick, M., Vedaldi, A.: Understanding deep networks via extremal perturbations and smooth masks. In: ICCV, pp. 2950–2958 (2019)

  6. Fong, R.C., Vedaldi, A.: Interpretable explanations of black boxes by meaningful perturbation. In: ICCV, pp. 3429–3437 (2017)

  7. Goodfellow, I., et al.: Generative adversarial nets. In: NeurIPS (2014)

  8. Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., Courville, A.C.: Improved training of Wasserstein GANs. In: NeurIPS, pp. 5767–5777 (2017)

  9. Gulshan, V., et al.: Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316(22), 2402–2410 (2016)

  10. Harding, S., Broadbent, D., Neoh, C., White, M., Vora, J.: Sensitivity and specificity of photography and direct ophthalmoscopy in screening for sight threatening eye disease: the Liverpool diabetic eye study. BMJ 311(7013), 1131–1135 (1995)

  11. Joshi, S., Koyejo, O., Vijitbenjaronk, W., Kim, B., Ghosh, J.: Towards realistic individual recourse and actionable explanations in black-box decision making systems. arXiv preprint arXiv:1907.09615 (2019)

  12. Kapishnikov, A., Bolukbasi, T., Viégas, F., Terry, M.: XRAI: better attributions through regions. In: ICCV, pp. 4948–4957 (2019)

  13. Krause, J., et al.: Grader variability and the importance of reference standards for evaluating machine learning models for diabetic retinopathy. Ophthalmology 125(8), 1264–1272 (2018)

  14. Lee, R., Wong, T.Y., Sabanayagam, C.: Epidemiology of diabetic retinopathy, diabetic macular edema and related vision loss. Eye Vis. 2(1), 1–25 (2015)

  15. Liu, S., Kailkhura, B., Loveland, D., Han, Y.: Generative counterfactual introspection for explainable deep learning. arXiv preprint arXiv:1907.03077 (2019)

  16. Mackenzie, S., et al.: SDOCT imaging to identify macular pathology in patients diagnosed with diabetic maculopathy by a digital photographic retinal screening programme. PLoS ONE 6(5), e14811 (2011)

  17. Mahendran, A., Vedaldi, A.: Understanding deep image representations by inverting them. In: CVPR, pp. 5188–5196 (2015)

  18. Miller, A., Obermeyer, Z., Cunningham, J., Mullainathan, S.: Discriminative regularization for latent variable models with applications to electrocardiography. In: ICML. Proceedings of Machine Learning Research, PMLR (2019)

  19. Mordvintsev, A., Olah, C., Tyka, M.: DeepDream - a code example for visualizing neural networks. Google Res. 2(5) (2015)

  20. Petsiuk, V., Das, A., Saenko, K.: RISE: randomized input sampling for explanation of black-box models (2018)

  21. Recursion Pharmaceuticals: Recursion Cellular Image Classification - Kaggle contest. www.kaggle.com/c/recursion-cellular-image-classification/data

  22. Poplin, R., et al.: Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat. Biomed. Eng. 2(3), 158 (2018)

  23. Ribeiro, M.T., Singh, S., Guestrin, C.: Why should I trust you?: explaining the predictions of any classifier. In: ACM SIGKDD (2016)

  24. Samangouei, P., Saeedi, A., Nakagawa, L., Silberman, N.: ExplainGAN: model explanation via decision boundary crossing transformations. In: ECCV (2018)

  25. Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: ICCV, pp. 618–626 (2017)

  26. Singla, S., Pollack, B., Chen, J., Batmanghelich, K.: Explanation by progressive exaggeration. arXiv preprint arXiv:1911.00483 (2019)

  27. Smilkov, D., Thorat, N., Kim, B., Viégas, F., Wattenberg, M.: SmoothGrad: removing noise by adding noise (2017)

  28. Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for simplicity: the all convolutional net (2014)

  29. Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks (2017)

  30. Varadarajan, A.V., et al.: Predicting optical coherence tomography-derived diabetic macular edema grades from fundus photographs using deep learning. Nat. Commun. 11(1), 1–8 (2020)

  31. Wang, Y.T., Tadarati, M., Wolfson, Y., Bressler, S.B., Bressler, N.M.: Comparison of prevalence of diabetic macular edema based on monocular fundus photography vs optical coherence tomography. JAMA Ophthalmol. 134(2), 222–228 (2016)

  32. Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8689, pp. 818–833. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10590-1_53

  33. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: ICCV, pp. 2223–2232 (2017)

Author information

Corresponding author

Correspondence to Arunachalam Narayanaswamy.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (zip 3994 KB)

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Narayanaswamy, A. et al. (2020). Scientific Discovery by Generating Counterfactuals Using Image Translation. In: Martel, A.L., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. MICCAI 2020. Lecture Notes in Computer Science, vol. 12261. Springer, Cham. https://doi.org/10.1007/978-3-030-59710-8_27

  • DOI: https://doi.org/10.1007/978-3-030-59710-8_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59709-2

  • Online ISBN: 978-3-030-59710-8

  • eBook Packages: Computer Science, Computer Science (R0)
