DOI: 10.1145/3371425.3371433

Infrared and visible image fusion using multi-resolution convolution neural network

Published: 19 December 2019

ABSTRACT

The purpose of infrared and visible image fusion is to generate a single image with richer content. Because the source images are captured by two different sensors, fusion merges their complementary, valid information into one image. In this paper, we propose a multi-resolution convolutional neural network constructed from multi-scale convolution operators, which computes features at different scales. The multi-scale operators capture multiple features from the source images, and these feature maps are used to compute the fusion parameters, which reduces the influence of noise and artifacts. The fusion results show that the multi-resolution fusion method is effective.
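The abstract only outlines the approach, so the following is a minimal PyTorch sketch of the idea as described: parallel convolution branches with different kernel sizes extract multi-scale features from each source image, and the per-pixel activity of those features is converted into fusion weights. The kernel sizes (3, 5, 7), channel widths, and softmax weighting rule are illustrative assumptions, not the paper's exact configuration.

# Minimal sketch (not the authors' released code): multi-scale feature
# extraction followed by activity-based weighted fusion. All layer sizes
# and the weighting rule are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFeatures(nn.Module):
    """Parallel conv branches with different receptive fields (multi-resolution)."""
    def __init__(self, in_channels=1, out_channels=16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, out_channels, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)  # assumed scales
        ])

    def forward(self, x):
        # Concatenate the responses of all scales along the channel axis.
        return torch.cat([F.relu(b(x)) for b in self.branches], dim=1)

def fuse(ir, vis, extractor):
    """Weighted fusion: weights derived from the activity of the multi-scale features."""
    f_ir, f_vis = extractor(ir), extractor(vis)
    a_ir = f_ir.abs().sum(dim=1, keepdim=True)    # per-pixel activity map, infrared
    a_vis = f_vis.abs().sum(dim=1, keepdim=True)  # per-pixel activity map, visible
    w = torch.softmax(torch.cat([a_ir, a_vis], dim=1), dim=1)  # normalize weights
    return w[:, :1] * ir + w[:, 1:] * vis

# Usage with dummy single-channel images (batch, channel, height, width).
extractor = MultiScaleFeatures()
ir = torch.rand(1, 1, 256, 256)
vis = torch.rand(1, 1, 256, 256)
fused = fuse(ir, vis, extractor)
print(fused.shape)  # torch.Size([1, 1, 256, 256])

In this sketch the weights are computed directly from feature activity rather than learned end to end; the paper's trained network presumably replaces this hand-crafted weighting step.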



Published in

      AIIPCC '19: Proceedings of the International Conference on Artificial Intelligence, Information Processing and Cloud Computing
December 2019, 464 pages
ISBN: 9781450376334
DOI: 10.1145/3371425

      Copyright © 2019 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance rate (AIIPCC '19): 78 of 211 submissions (37%)