A Novel Fusion Strategy and Convolutional Sparse Coding for Robot Multisource Image Fusion

Published in: Automatic Control and Computer Sciences

Abstract

Image fusion combines images of the same target or scene collected by multiple sensors into a single image using image processing technology. In this way, the advantages of multiple sensors can be exploited to obtain more comprehensive feature information about the target or scene, which benefits both human observation and subsequent recognition and processing. Traditional fusion methods tend to lose image details, resulting in poor fusion quality. Therefore, this paper proposes a multisource image fusion method based on convolutional sparse coding and a novel fusion strategy. First, each image is decomposed into low-rank and sparse components via low-rank decomposition. Then, the sparse part is convolutionally decomposed to obtain a set of sparse filter dictionaries, which are applied to image fusion through convolutional sparse coding. A regional energy-Cauchy fuzzy function rule is adopted for the low-rank components, and regional Laplace energy is used for the sparse components. Finally, the weighted average method is used to obtain the final fusion result. Experimental results show that the proposed method achieves good results in terms of both visual effects and objective evaluation indexes.
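The overall pipeline described in the abstract (low-rank/sparse decomposition, per-component fusion rules, weighted reconstruction) can be outlined roughly as follows. This is a minimal NumPy/SciPy sketch under several assumptions: the low-rank split is approximated by a truncated SVD, the convolutional sparse coding stage (filter-dictionary learning on the sparse parts) is omitted and replaced by a simple activity-based selection, and all function names and parameters (rank, gamma, window size) are illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def low_rank_decompose(img, rank=10):
    # Approximate low-rank/sparse split via truncated SVD
    # (a stand-in for the paper's low-rank decomposition step).
    u, s, vt = np.linalg.svd(img.astype(float), full_matrices=False)
    low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
    return low_rank, img - low_rank

def regional_energy(x, size=3):
    # Local energy: mean of squared intensities over a size x size window.
    return ndimage.uniform_filter(x ** 2, size=size)

def regional_laplace_energy(x, size=3):
    # Local energy of the Laplacian response (a sharpness/detail measure).
    return ndimage.uniform_filter(ndimage.laplace(x) ** 2, size=size)

def cauchy_fuzzy_weight(e_a, e_b, gamma=1.0):
    # Cauchy-type membership function: the weight rises smoothly with
    # regional energy; gamma is an assumed scale parameter.
    mu_a = 1.0 / (1.0 + (gamma / (e_a + 1e-12)) ** 2)
    mu_b = 1.0 / (1.0 + (gamma / (e_b + 1e-12)) ** 2)
    return mu_a / (mu_a + mu_b + 1e-12)

def fuse(img_a, img_b):
    # img_a, img_b: 2D grayscale arrays of the same shape.
    la, sa = low_rank_decompose(img_a)
    lb, sb = low_rank_decompose(img_b)

    # Low-rank components: regional energy + Cauchy fuzzy weighting.
    w = cauchy_fuzzy_weight(regional_energy(la), regional_energy(lb))
    fused_low = w * la + (1.0 - w) * lb

    # Sparse components: keep the source with larger regional Laplace energy.
    keep_a = regional_laplace_energy(sa) >= regional_laplace_energy(sb)
    fused_sparse = np.where(keep_a, sa, sb)

    # Weighted-average reconstruction of the two fused components.
    return fused_low + fused_sparse
```

In the full method, the sparse components would first be encoded against learned convolutional filter dictionaries before the regional Laplace energy rule is applied; a convolutional sparse coding library such as SPORCO could supply that stage, which is left out of this sketch for brevity.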




Funding

This work was supported by the Key Scientific Research Project of Higher Education Institutions of Henan Province in 2018 (grant no. 21B460017).

Author information

Corresponding author

Correspondence to Yigui Lu.

Ethics declarations

The authors declare that they have no conflicts of interest.

About this article


Cite this article

Wang, J., Liu, J. & Lu, Y. A Novel Fusion Strategy and Convolutional Sparse Coding for Robot Multisource Image Fusion. Aut. Control Comp. Sci. 57, 185–195 (2023). https://doi.org/10.3103/S0146411623020086

