DGS-Fuse: unsupervised image fusion network combining global information
Kaijun Wu, Mengsi Wang, Yuan Mei, Hongquan Shan, Zehao Xu
Abstract

To better preserve the global semantic information and edge information of the original images during multi-focus image fusion, we propose an encoder–decoder network that combines a pixel loss function, a multiscale structural similarity loss function, and a total variation loss function to further reduce detail loss during image reconstruction. The model introduces dynamic convolution and a global context network to improve its feature expression ability and global context modeling ability. In the fusion stage, a fusion strategy based on the edge feature map is used to fuse the two feature maps output by the encoder and obtain the final decision map. By combining the gradient information of the deep feature map with the edge feature map, the fusion strategy more effectively extracts the flat parts of the focused and defocused regions and enhances the edge features of the image. Finally, the proposed algorithm is compared with seven state-of-the-art fusion algorithms and outperforms them on both subjective and objective evaluation indexes.
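The abstract describes a reconstruction objective built from three terms (pixel, multiscale structural similarity, and total variation). The sketch below is not the authors' code; it is a minimal PyTorch illustration of how such a combined loss could be assembled, assuming the third-party pytorch_msssim package for the MS-SSIM term and placeholder weights lambda_ssim and lambda_tv that are not taken from the paper.

import torch
import torch.nn.functional as F
from pytorch_msssim import ms_ssim  # third-party MS-SSIM implementation (assumed dependency)


def reconstruction_loss(output, target, lambda_ssim=1.0, lambda_tv=0.1):
    """Illustrative combined loss: pixel + MS-SSIM + total variation."""
    # Pixel loss: mean squared error between the reconstruction and the input image.
    pixel_loss = F.mse_loss(output, target)

    # MS-SSIM loss: 1 - MS-SSIM, so higher structural similarity lowers the loss
    # (images assumed to be normalized to [0, 1]).
    ssim_loss = 1.0 - ms_ssim(output, target, data_range=1.0)

    # Total variation loss: penalizes large local gradients in the reconstruction,
    # reducing noise in flat regions.
    tv_loss = (
        torch.mean(torch.abs(output[:, :, 1:, :] - output[:, :, :-1, :]))
        + torch.mean(torch.abs(output[:, :, :, 1:] - output[:, :, :, :-1]))
    )

    return pixel_loss + lambda_ssim * ssim_loss + lambda_tv * tv_loss

The relative weights of the three terms would in practice be tuned to the dataset; the values above are illustrative only.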

© 2022 SPIE and IS&T
Kaijun Wu, Mengsi Wang, Yuan Mei, Hongquan Shan, and Zehao Xu "DGS-Fuse: unsupervised image fusion network combining global information," Journal of Electronic Imaging 31(6), 063045 (12 December 2022). https://doi.org/10.1117/1.JEI.31.6.063045
Received: 5 September 2022; Accepted: 22 November 2022; Published: 12 December 2022
KEYWORDS: Image fusion, Feature fusion, Image processing, Convolution, Feature extraction, Education and training, Image restoration