Image fusion refers to the process of combining two or more images of the same scene into one composite
image. The complementary information contained within the individual source images is
integrated, so the resultant image has higher information content than any of the
individual source images. This fusion process is highly useful in the enhancement of
multi-focus images, medical images, satellite images, etc. In this research work,
satellite images are considered for the fusion process.
1.3 TYPES OF IMAGE FUSION
Image fusion combines data from different images, obtained simultaneously by sensors of
different wavelengths viewing the same scene, to form a composite image (Yufeng 2011).
It is done in order to obtain non-redundant and complementary information that cannot be
derived from any single sensor's data alone. The original information of the object is
preserved, so the fused image can be used to analyze various objects. Hence, specific
information should be preserved and artifacts should be minimized in the fused image.
This research work aims to modify various image fusion techniques for remote sensing
applications. Image fusion is an effective way of making optimum use of the large number of
PAN images of different spatial resolutions and MS images of different spectral resolutions
acquired by multiple imaging sensors for image analysis.
The panchromatic band has high spatial resolution but low spectral resolution. In a high
spatial resolution panchromatic image, detailed geometric features can easily be identified.
Multispectral images, in contrast, are rich in spectral information. More information can
therefore be obtained from the input images if the non-redundant details in the high spatial
resolution and high spectral resolution images are combined into a single image (Blum & Liu
2005). With appropriate algorithms, the multispectral image and the panchromatic image are
integrated to obtain a fused image that gives more detail than either source image.
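One simple and widely known scheme of this kind is the Brovey transform, which injects the spatial detail of the PAN band into each MS band by scaling it with the ratio of PAN to the mean MS intensity. The sketch below is illustrative only, not the specific algorithm modified in this work; it assumes co-registered NumPy arrays of the same spatial extent, and the function name and `eps` parameter are the author's own.

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-6):
    """Brovey-transform pansharpening sketch.

    ms  : (H, W, B) float array of multispectral bands
    pan : (H, W) float array, the co-registered panchromatic band
    eps : small constant to avoid division by zero (illustrative choice)
    """
    intensity = ms.mean(axis=2)            # per-pixel MS intensity
    ratio = pan / (intensity + eps)        # spatial-detail ratio from PAN
    return ms * ratio[..., np.newaxis]     # inject PAN detail into each band
```

Because each band is multiplied by the same per-pixel ratio, the relative spectral balance between bands is preserved while the PAN spatial structure is imposed.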
The purpose of image fusion is to enhance the spatial and spectral resolution obtained from
several low resolution images. For this reason, image fusion has become an interesting topic
for many researchers. Image fusion also enables features to be distinguished that are
impossible to perceive with any individual sensor, and it avoids the need for high-cost,
complex sensors. These advantages motivate image fusion as a means of obtaining
non-redundant, complementary, more timely and precise information at lower cost.
Image fusion can be performed at roughly three different levels: pixel level fusion,
feature level fusion and decision level fusion (Pohl & Genderen 1998).
In pixel level image fusion, the raw images obtained from different sensors are fused
to produce a new image. Pixel level image fusion can help a human observer easily detect
and recognize coarse details. The goal of pixel level image fusion is to combine, and
preserve in a single output image, all the important visual information present in a
number of input images. Nowadays, pixel level image fusion is widely used to fuse satellite
images, medical images, multi-focus images, etc. (Li et al. 2017).
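The simplest pixel level rule is a per-pixel average of the co-registered source images. The sketch below, with an assumed function name, shows only this baseline; practical pixel level methods replace the average with weighted or transform-domain combinations.

```python
import numpy as np

def pixel_average_fuse(img_a, img_b):
    """Baseline pixel-level fusion: per-pixel mean of two
    co-registered source images of identical shape."""
    assert img_a.shape == img_b.shape, "sources must be co-registered"
    # cast to float so integer inputs do not overflow or truncate
    return (img_a.astype(np.float64) + img_b.astype(np.float64)) / 2.0
```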
In feature-based fusion methods, similar features of the source images are extracted
and fused for further analysis using statistical approaches, artificial neural networks,
fuzzy logic or other soft computing techniques. The features extracted from the source
images are defined by shape, edges, texture and the intensities of neighboring pixels.
Corresponding features from the source images are matched to each other and fused for
further analysis.
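As one illustrative feature level rule (the author's own toy example, not a method from this work), an edge-energy feature can be computed for each source and the fused output can keep, at each pixel, the source whose feature response is stronger:

```python
import numpy as np

def gradient_energy(img):
    """A simple activity feature: squared finite-difference
    gradient magnitude of a 2-D image."""
    gy, gx = np.gradient(img.astype(np.float64))
    return gx**2 + gy**2

def feature_select_fuse(img_a, img_b):
    """Feature-level fusion sketch: per pixel, select the source
    image whose local edge energy (the extracted feature) is larger."""
    ea, eb = gradient_energy(img_a), gradient_energy(img_b)
    return np.where(ea >= eb, img_a, img_b)
```

Real feature level methods operate on matched regions or learned feature maps rather than single pixels, but the select-by-feature-strength decision shown here is the core idea.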