Image fusion aims to merge information from multiple source images into a single image that conveys more complete and clearer scene information without introducing artifacts. Fusion results are required to remain faithful to the original image content; however, we observe that unfaithful fusions often contain hallucinated content that cannot be aligned with any of the input images. To address this, we propose, for the first time, a full-scale connection-based fusion network (FSCF-Net) for infrared and visible image fusion. To make full use of multiscale image features, the decoder of FSCF-Net employs full-scale skip connections to fuse the full-scale deep features extracted by the encoder. To enforce faithfulness, FSCF-Net adopts a hybrid loss function consisting of a detail-preserving loss and a structure-preserving loss. To fuse the features of different source images effectively, we propose a spatial-channel attention-based feature fusion strategy (SCAF), which measures the importance of features along both the spatial and channel dimensions. We provide a detailed analysis of the proposed model through experiments on publicly available datasets, comparing FSCF-Net with 10 state-of-the-art fusion algorithms using subjective evaluation and 6 objective metrics. The experiments demonstrate that our method achieves the best comprehensive performance on the overall metrics.
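The spatial-channel weighting idea behind SCAF can be illustrated with a minimal sketch: each source's features receive a per-channel weight (from globally pooled activations) and a per-pixel spatial weight (from channel-wise activity), and the two weighted fusions are combined. The function name, the use of softmax/L1-norm, and the equal-average combination are illustrative assumptions, not the paper's exact SCAF formulation.

```python
import numpy as np

def spatial_channel_fusion(f_ir, f_vis):
    """Fuse two feature maps of shape (C, H, W) by weighting along both the
    channel and spatial dimensions. Simplified sketch, not the exact SCAF."""
    # Channel attention: global-average-pooled activations per channel,
    # turned into per-channel source weights via a softmax over the two sources.
    c_ir = f_ir.mean(axis=(1, 2))               # (C,)
    c_vis = f_vis.mean(axis=(1, 2))             # (C,)
    e = np.exp(np.stack([c_ir, c_vis]))         # (2, C)
    w_c = e / e.sum(axis=0)                     # (2, C), sums to 1 per channel

    # Spatial attention: L1-norm across channels gives an activity map per
    # source; normalize the two maps so they sum to 1 at every pixel.
    s_ir = np.abs(f_ir).sum(axis=0)             # (H, W)
    s_vis = np.abs(f_vis).sum(axis=0)           # (H, W)
    total = s_ir + s_vis + 1e-8                 # avoid division by zero
    w_s_ir, w_s_vis = s_ir / total, s_vis / total

    # Combine the channel-weighted and spatially weighted fusions.
    fused_c = w_c[0][:, None, None] * f_ir + w_c[1][:, None, None] * f_vis
    fused_s = w_s_ir[None] * f_ir + w_s_vis[None] * f_vis
    return 0.5 * (fused_c + fused_s)
```

When both sources are identical, every weight pair is (0.5, 0.5) and the fused output reproduces the input, so the strategy degrades gracefully when the modalities agree.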
Keywords: Image fusion; Infrared imaging; Infrared radiation; Visible radiation; Feature fusion; Education and training; Image restoration