ABSTRACT
The goal of infrared and visible image fusion is to generate a single image with richer content. Because the source images are captured by two different sensors, fusion can merge their complementary information into one image. In this paper, we propose a multi-resolution convolutional neural network built from multi-scale convolution operators, which computes features at several scales. These multi-scale operators extract feature maps from the source images, and the feature maps are then used to compute the fusion weights; this helps suppress the influence of noise and artifacts. The fusion results show that the proposed multi-resolution method is effective.
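The core idea described above, computing activity maps at several scales and turning them into per-pixel fusion weights, can be illustrated with a minimal NumPy sketch. This is not the paper's network: simple box filters stand in for the learned multi-scale convolution operators, and the function and parameter names (`box_filter`, `fuse`, `scales`) are hypothetical.

```python
import numpy as np

def box_filter(img, k):
    # k x k mean filter with edge padding (a stand-in for a learned
    # convolution operator at scale k)
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(ir, vis, scales=(3, 5, 7)):
    # Activity map: local contrast |img - local mean|, accumulated
    # over several scales (the "multi-scale feature" analogue).
    a_ir = sum(np.abs(ir - box_filter(ir, k)) for k in scales)
    a_vis = sum(np.abs(vis - box_filter(vis, k)) for k in scales)
    # Normalize activities into per-pixel fusion weights in [0, 1].
    w_ir = a_ir / (a_ir + a_vis + 1e-8)
    return w_ir * ir + (1.0 - w_ir) * vis
```

Because the weights are a convex combination at every pixel, the fused value always lies between the infrared and visible intensities; in the paper this weighting is derived from CNN feature maps rather than hand-crafted contrast measures.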