NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency

  • Original Article
  • Published in Medical & Biological Engineering & Computing

Abstract

In this article, a novel multimodal medical image fusion (MIF) method based on the non-subsampled contourlet transform (NSCT) and the pulse-coupled neural network (PCNN) is presented. The proposed MIF scheme exploits the advantages of both the NSCT and the PCNN to obtain better fusion results. The source medical images are first decomposed by the NSCT. The low-frequency subbands (LFSs) are fused using the ‘max selection’ rule. For fusing the high-frequency subbands (HFSs), a PCNN model is utilized. The modified spatial frequency of the NSCT coefficients serves as the input stimulus to the PCNN, and the coefficients with larger firing times are selected as the coefficients of the fused image. Finally, the inverse NSCT (INSCT) is applied to obtain the fused image. Subjective as well as objective analyses of the results, and comparisons with state-of-the-art MIF techniques, show the effectiveness of the proposed scheme in fusing multimodal medical images.
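The fusion pipeline described above (max-selection fusion of the low-frequency subbands; PCNN-based selection of the high-frequency subbands, with modified spatial frequency as the PCNN stimulus) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the NSCT decomposition itself is omitted (the sketch operates on generic subband coefficient arrays), the PCNN is a simplified single-layer variant, and the block size, linking strength and threshold-decay parameters are illustrative assumptions.

```python
import numpy as np

def modified_spatial_frequency(block):
    """A plausible variant of modified (extended) spatial frequency:
    row, column and diagonal gradient energies of a coefficient block,
    with diagonals down-weighted by the 1/sqrt(2) distance factor."""
    n = block.size
    rf2 = np.sum(np.diff(block, axis=1) ** 2) / n            # row frequency^2
    cf2 = np.sum(np.diff(block, axis=0) ** 2) / n            # column frequency^2
    wd = 1.0 / np.sqrt(2.0)                                  # diagonal weight
    mdf2 = wd * np.sum((block[1:, 1:] - block[:-1, :-1]) ** 2) / n
    sdf2 = wd * np.sum((block[1:, :-1] - block[:-1, 1:]) ** 2) / n
    return np.sqrt(rf2 + cf2 + mdf2 + sdf2)

def fuse_low(lfs_a, lfs_b):
    """'Max selection' rule: keep the coefficient of larger magnitude."""
    return np.where(np.abs(lfs_a) >= np.abs(lfs_b), lfs_a, lfs_b)

def pcnn_firing_times(S, n_iter=100, beta=0.2, alpha_theta=0.2, v_theta=20.0):
    """Simplified single-layer PCNN: each neuron is driven by the
    stimulus S, linked to its 8 neighbours' previous pulses, and fires
    when its internal activity exceeds a decaying threshold.
    Returns the accumulated firing times per neuron."""
    Y = np.zeros_like(S)          # pulse output
    theta = np.ones_like(S)       # dynamic threshold
    fires = np.zeros_like(S)      # firing-time counter
    for _ in range(n_iter):
        P = np.pad(Y, 1)
        L = (P[:-2, 1:-1] + P[2:, 1:-1] + P[1:-1, :-2] + P[1:-1, 2:] +
             P[:-2, :-2] + P[:-2, 2:] + P[2:, :-2] + P[2:, 2:])
        U = S * (1.0 + beta * L)                  # internal activity
        Y = (U > theta).astype(float)             # pulse generation
        theta = np.exp(-alpha_theta) * theta + v_theta * Y
        fires += Y
    return fires

def msf_map(subband, bs=8):
    """Blockwise modified-spatial-frequency map used as PCNN stimulus."""
    out = np.zeros_like(subband, dtype=float)
    h, w = subband.shape
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            out[i:i + bs, j:j + bs] = modified_spatial_frequency(
                subband[i:i + bs, j:j + bs])
    return out

def fuse_high(hfs_a, hfs_b):
    """Keep the coefficient whose PCNN neuron fired more often."""
    fa = pcnn_firing_times(msf_map(hfs_a))
    fb = pcnn_firing_times(msf_map(hfs_b))
    return np.where(fa >= fb, hfs_a, hfs_b)
```

In the full method these rules would be applied per subband after NSCT decomposition, and the fused subbands recombined by the inverse NSCT; here the two fusion rules are shown in isolation.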



Acknowledgments

We would like to thank the editor, associate editor and the anonymous reviewers for their invaluable suggestions. We are grateful to Dr. Pradip Kumar Das (Medicare Images, Asansol-4, West Bengal) for the subjective evaluation of the fused images. We would also like to thank http://www.imagefusion.org/ and http://www.med.harvard.edu/aanlib/home.html for providing the source medical images.

Author information

Correspondence to Sudeb Das.

Additional information

This work was supported by the Machine Intelligence Unit, Indian Statistical Institute, Kolkata-108 (Internal Academic Project).


About this article

Cite this article

Das, S., Kundu, M.K. NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency. Med Biol Eng Comput 50, 1105–1114 (2012). https://doi.org/10.1007/s11517-012-0943-3

