
Augmented deep learning model for improved quantitative accuracy of MR-based PET attenuation correction in PSMA PET-MRI prostate imaging

  • Original Article
  • Published in: European Journal of Nuclear Medicine and Molecular Imaging

Abstract

Purpose

Estimation of accurate attenuation maps for whole-body positron emission tomography (PET) imaging in simultaneous PET-MRI systems is a challenging problem, as it affects the quantitative nature of the modality. In this study, we aimed to improve the accuracy of attenuation maps estimated from MRI Dixon contrast images by training an augmented generative adversarial network (GAN) in a supervised manner. We augmented the training data for the GAN by perturbing the non-linear deformation field during image registration between the MRI and the ground-truth CT images.
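The deformation-field perturbation described above can be sketched as follows. This is a minimal, hypothetical illustration using NumPy and SciPy: the function names, the perturbation amplitude, and the smoothing scale are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def perturb_deformation(field, amplitude=1.5, smoothness=8.0, rng=None):
    """Add a smooth random perturbation to a dense 2-D deformation field.

    field: array of shape (2, H, W) holding y/x displacements (in voxels),
    e.g. from MRI-to-CT non-linear registration. Returns a perturbed copy;
    smoothing the noise keeps the perturbed field anatomically plausible.
    """
    rng = np.random.default_rng(rng)
    noise = rng.standard_normal(field.shape)
    noise = np.stack([gaussian_filter(n, smoothness) for n in noise])
    return field + amplitude * noise / (np.abs(noise).max() + 1e-8)

def warp(image, field):
    """Apply a (2, H, W) displacement field to a 2-D image (bilinear)."""
    grid = np.mgrid[0:image.shape[0], 0:image.shape[1]].astype(float)
    return map_coordinates(image, grid + field, order=1, mode="nearest")
```

Each perturbed field, applied to the registered CT slice, yields a new plausible MRI/CT training pair, which is how a modest training set can be expanded many-fold.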

Methods

We acquired CT and the corresponding PET-MR images for a cohort of 28 prostate cancer patients. Data from 18 patients (2160 slices, later augmented to 270,000 slices) were used for training the GAN, and data from the remaining patients were used for validation. We calculated the error in bone and soft-tissue regions for the attenuation correction (AC) μ-maps and the reconstructed PET images.

Results

For quantitative analysis, we used the average relative absolute error and validated the proposed technique on 10 patients. The deep learning (DL)-based MR methods generated pseudo-CT AC μ-maps 4.5% more accurately than standard MR-based techniques. In particular, the proposed method demonstrated improved accuracy in the pelvic region without affecting the uptake values. The lowest error of the AC μ-map in the pelvic region was 1.9% for μ-map_GAN+aug, compared with 6.4% for μ-map_Dixon, 5.9% for μ-map_Dixon+bone, 2.1% for μ-map_U-Net and 2.0% for μ-map_U-Net+aug. For the reconstructed PET images, the lowest error was 2.2% for PET_GAN+aug, compared with 10.3% for PET_Dixon, 8.7% for PET_Dixon+bone, 2.6% for PET_U-Net and 2.4% for PET_U-Net+aug.
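The average relative absolute error reported above can be computed as in the following sketch; the array and mask names are assumptions for illustration (e.g. the mask could select the pelvic bone or soft-tissue region), not the authors' code.

```python
import numpy as np

def avg_relative_abs_error(pseudo_ct, reference_ct, mask):
    """Mean of |pseudo - reference| / |reference| over the voxels in
    `mask`, expressed as a percentage."""
    ref = reference_ct[mask]
    err = np.abs(pseudo_ct[mask] - ref) / np.maximum(np.abs(ref), 1e-8)
    return 100.0 * err.mean()
```

For example, estimates of 110 and 190 against references of 100 and 200 give relative errors of 10% and 5%, i.e. an average relative absolute error of 7.5%.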

Conclusion

The proposed technique of augmenting the training datasets for the GAN results in improved accuracy of the estimated μ-map and, consequently, of the PET quantification compared with the state of the art.

Figs. 1–6 (figures omitted; see full article)


Data availability

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.


Acknowledgements

The authors thank Richard McIntyre from Monash Biomedical Imaging for assistance in acquiring the data, and Keiran O’Brien and Daniel Staeb from Siemens Healthineers for useful discussions.

Funding

The research was supported by a grant from the Reignwood Cultural Foundation and an Australian Research Council (ARC) Linkage grant (LP170100494) that includes financial support from Siemens Healthineers. GE is supported by the ARC Centre of Excellence for Integrative Brain Function (CE140100007).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Zhaolin Chen.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the Monash University Human Research Ethical Committee.

Informed consent

Informed consent from all individual participants in this study was obtained by Dr. Jeremy Grummet.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Topical Collection on Technology


About this article


Cite this article

Pozaruk, A., Pawar, K., Li, S. et al. Augmented deep learning model for improved quantitative accuracy of MR-based PET attenuation correction in PSMA PET-MRI prostate imaging. Eur J Nucl Med Mol Imaging 48, 9–20 (2021). https://doi.org/10.1007/s00259-020-04816-9

