Article Information
Visible and Infrared Image Fusion using Spatial Adaptive Weights
Abstract
English
In this paper, a deep learning-based fusion technique is presented for visible and infrared image fusion. In general, the image fusion process consists of three stages: feature extraction by an encoder, feature fusion, and reconstruction of the fused image by a decoder. We propose a feature fusion scheme that assigns spatially adaptive weights to each infrared and visible feature pair during fusion. Features of the infrared image are used to determine the weights, based on the observation that only high-activation regions in the IR image contain salient information. We conduct both quantitative and qualitative analyses on two datasets. Experimental results show that our fusion method achieves better performance than the previous method.
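The abstract does not specify the exact weighting formula, but a minimal sketch of such a spatially adaptive fusion layer, assuming the IR weight at each pixel is the normalized channel-averaged IR activation and the visible weight is its complement, might look as follows. The function name spatial_adaptive_fusion and the min-max normalization are illustrative assumptions, not the authors' implementation.

import numpy as np

def spatial_adaptive_fusion(feat_vis, feat_ir, eps=1e-8):
    # Fuse visible and infrared feature maps of shape (C, H, W) with
    # spatially adaptive weights derived from IR activation.
    # Assumption: the per-pixel IR weight is the channel-averaged IR
    # activation normalized to [0, 1]; the visible weight is 1 minus that,
    # so highly activated IR regions dominate the fused features.

    # Channel-wise mean activation of the IR features -> (H, W) saliency map
    ir_activity = np.mean(np.abs(feat_ir), axis=0)

    # Normalize to [0, 1] to obtain a per-pixel IR weight map
    w_ir = (ir_activity - ir_activity.min()) / (ir_activity.max() - ir_activity.min() + eps)
    w_vis = 1.0 - w_ir

    # Broadcast the (H, W) weights over channels and blend the two feature maps
    fused = w_ir[None, :, :] * feat_ir + w_vis[None, :, :] * feat_vis
    return fused

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feat_vis = rng.standard_normal((64, 32, 32)).astype(np.float32)
    feat_ir = rng.standard_normal((64, 32, 32)).astype(np.float32)
    print(spatial_adaptive_fusion(feat_vis, feat_ir).shape)  # (64, 32, 32)

In this sketch the fused features would then be passed to the decoder stage described in the abstract to reconstruct the final fused image.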
Table of Contents
1. Introduction
2. Proposed Fusion Method
2.1. Training
2.2. Fusion Layer
3. Experiments
3.1. Experimental setup
3.2. Experimental result
4. Conclusions
Acknowledgement
References