Infrared and visible image fusion using a generative adversarial network with a dual-branch generator and matched dense blocks

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

To obtain a better fusion effect for infrared and visible images, a generative adversarial network using a dual-branch generator with matched dense blocks is proposed. The dual-branch generator consists of two parallel sub-networks, the upper and lower branches, which are structurally asymmetrical and can nonlinearly extract textural and contrast information with multiple degrees of freedom. Based on this dual-branch structure, two dense blocks are designed by selectively arranging reduced concatenation connections so that shallow information is employed effectively; both blocks are non-fully connected and are added symmetrically to the upper and lower branches, respectively. In addition, a gradient loss term based on the mean square error is included in the generator loss function, which helps preserve more textural detail. With this generator, trained adversarially against the discriminator, the fused images retain more visible and infrared information while also producing satisfactory visual perception. Contrast and optimization experiments were carried out on open datasets. The results demonstrate that the proposed method preserves more detail and more salient contrast in faint features than other state-of-the-art methods, and that the dual-branch structure with matched dense blocks is appropriate for achieving a better fusion effect. The proposed method could be applied to infrared and visible image fusion in certain detection or monitoring fields.
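The gradient term described above can be made concrete with a small sketch. The following PyTorch fragment is a minimal, illustrative implementation, not the authors' exact formulation: the Sobel operator, the choice of the visible image as the gradient reference, the infrared image as the intensity reference, and the weight lam are assumptions introduced only for illustration.

```python
# Minimal sketch of a gradient-based generator content loss of the kind the
# abstract describes. The Sobel kernels, the reference images chosen for each
# term, and the weighting factor `lam` are illustrative assumptions.
import torch
import torch.nn.functional as F


def sobel_gradient(img):
    """Approximate image gradients with fixed Sobel kernels.

    Assumes a single-channel image batch of shape (N, 1, H, W).
    """
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # Sobel y kernel is the transpose of the x kernel
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return gx, gy


def generator_content_loss(fused, visible, infrared, lam=10.0):
    """MSE to the infrared intensities plus an MSE penalty on the gradient
    difference to the visible image (hypothetical weighting `lam`)."""
    gx_f, gy_f = sobel_gradient(fused)
    gx_v, gy_v = sobel_gradient(visible)
    grad_loss = F.mse_loss(gx_f, gx_v) + F.mse_loss(gy_f, gy_v)
    intensity_loss = F.mse_loss(fused, infrared)
    return intensity_loss + lam * grad_loss
```

In the full method this content term is combined with the adversarial loss supplied by the discriminator; only the content part is sketched here.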

Availability of data and materials

This research was carried out using the TNO dataset, which can be accessed at https://figshare.com/articles/dataset/TNO_Image_Fusion_Dataset/1008029, and the RoadScene dataset, which can be accessed at https://github.com/hanna-xu/RoadScene.

Acknowledgements

This research was supported in part by the Starting Research Fund Project of Xiangtan University under Grant 19QDZ16 and in part by the Research Foundation of Education Bureau of Hunan Province, China, under Contract Number 20C1794.

Funding

This research was supported in part by the Starting Research Fund Project of Xiangtan University under Grant 19QDZ16 and in part by the Research Foundation of Education Bureau of Hunan Province, China, under Contract Number 20C1794.

Author information

Contributions

LG wrote the main manuscript text and directed certain experiments. DT performed the experiments.

Corresponding author

Correspondence to Li Guo.

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests as defined by Springer, or other interests that might be perceived to influence the results and/or discussion reported in this paper.

Ethical approval

We declare that this paper does not involve any human and/or animal studies.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Guo, L., Tang, D. Infrared and visible image fusion using a generative adversarial network with a dual-branch generator and matched dense blocks. SIViP 17, 1811–1819 (2023). https://doi.org/10.1007/s11760-022-02392-z
