Image Fusion Refers to the Process of Combining Two or More Images into One Composite Image

Image fusion refers to the process of combining two or more images into a composite image by integrating the complementary information contained within individual source images of the same scene. This results in a fused image that has higher information content than any single source image. There are three main types of image fusion: pixel level fusion, which fuses images on a pixel-by-pixel basis; feature level fusion, which extracts and fuses features like shapes, edges and textures; and decision level fusion, which merges interpretations of images obtained after understanding them at a higher level of abstraction.


Image fusion refers to the process of combining two or more images into one composite
image. The complementary information contained in the individual source images of
the same scene is integrated, so the resultant image has higher information content than
any of the individual source images. This fusion process is particularly useful in the
enhancement of multi-focus images, medical images, satellite images, etc. In this research
work, satellite images are considered for performing the fusion process.
1.3 TYPES OF IMAGE FUSION

Image fusion combines data from images of the same scene acquired simultaneously by
sensors operating at different wavelengths to form a composite image (Yufeng 2011). It is
done in order to obtain non-redundant and complementary information which cannot be
derived from any single sensor alone. Since the original information of the object is
preserved, the fused image can be used to analyze various objects. Hence, specific
information should be preserved, and artifacts should be minimized, in the fused image.

This research work aims to modify various image fusion techniques for remote sensing
applications. Image fusion is an effective way of making optimum use of the large number
of panchromatic (PAN) images of differing spatial resolution and multispectral (MS) images
of differing spectral resolution produced by multiple imaging sensors for image analysis.

The panchromatic band has high spatial resolution but low spectral resolution. On a high
spatial resolution panchromatic image, detailed geometric features can easily be identified.
Multispectral images, in contrast, are rich in spectral information. More information can
therefore be obtained from the input images if the non-redundant details of the high spatial
resolution and high spectral resolution images are combined into a single image (Blum &
Liu 2005). With appropriate algorithms, the multispectral image and the panchromatic
image are integrated to obtain a fused image which gives more detail than either source image.
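The text does not prescribe a particular algorithm for integrating the PAN and MS images; as one illustrative sketch, the classical Brovey transform injects the PAN band's spatial detail into each MS band by rescaling it with the ratio of the PAN value to the total MS intensity:

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-9):
    """Brovey-transform pan-sharpening (illustrative sketch only).

    ms  : (bands, H, W) multispectral image, already resampled and
          registered to the PAN pixel grid
    pan : (H, W) panchromatic image
    Each MS band is multiplied by pan / sum(ms bands), so the spatial
    detail of the PAN band is transferred while the band ratios
    (spectral content) are preserved.
    """
    ms = np.asarray(ms, dtype=np.float64)
    pan = np.asarray(pan, dtype=np.float64)
    intensity = ms.sum(axis=0) + eps        # avoid division by zero
    return ms * (pan / intensity)           # broadcasts over bands

# Toy 3-band MS patch and a matching PAN patch
ms = np.array([[[10.0, 20.0]], [[10.0, 20.0]], [[10.0, 20.0]]])  # (3, 1, 2)
pan = np.array([[60.0, 30.0]])                                   # (1, 2)
fused = brovey_fuse(ms, pan)   # band values rescaled toward PAN detail
```

Where the PAN value exceeds the summed MS intensity, every band is brightened proportionally; where it is lower, every band is darkened, so relative band ratios stay fixed.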

The purpose of image fusion is to enhance the spatial and spectral resolution obtainable
from several low resolution images, which has made image fusion an interesting topic for
many researchers. Image fusion also enables features to be distinguished that are impossible
to perceive with any individual sensor, and it avoids the need for costly, complex sensors.
These advantages motivate the use of image fusion to obtain non-redundant, complementary,
more timely and precise information at lower cost.

Image fusion can be performed at roughly three different levels: pixel level fusion,
feature level fusion and decision level fusion (Pohl & Genderen 1998).

1.3.1 Pixel Level Image Fusion

Pixel-based fusion is performed on a pixel-by-pixel basis or over a small region of
pixels. Pixel level fusion operates directly on the pixel values produced by the sensor
outputs. The fusion of the input images from multiple sensors is initially associated with a
specific pixel; hence, the fusion of corresponding pixels in multiple registered images to
form a composite RGB image is also a form of pixel level fusion. In pixel-based fusion, the
corresponding pixel coefficients of the source images are fused either by selecting the
maximum or minimum value or by taking a weighted combination of the coefficients of the
two sub-images.

In pixel level image fusion, the raw images obtained from different sensors are fused
to produce a new image. Pixel level image fusion helps a human observer to easily detect
and recognize coarse details. The goal in pixel level image fusion is to combine and preserve
in a single output image all the important visual information that is present in the
input images. Nowadays, pixel level image fusion is widely used to fuse satellite
images, medical images, multi-focus images, etc. (Li et al. 2017).
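The two fusion rules named above, maximum selection and weighted combination, can be sketched for a pair of registered grayscale images (the function name and toy data are illustrative, not from the source):

```python
import numpy as np

def fuse_pixel_level(img_a, img_b, mode="average", weight=0.5):
    """Fuse two registered grayscale images pixel by pixel.

    mode="max"     : keep the larger value at each pixel (maximum selection)
    mode="average" : weighted combination, `weight` applied to img_a
    Both inputs must already be co-registered and of identical shape.
    """
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)
    if a.shape != b.shape:
        raise ValueError("source images must be registered to the same shape")
    if mode == "max":
        return np.maximum(a, b)
    if mode == "average":
        return weight * a + (1.0 - weight) * b
    raise ValueError(f"unknown mode: {mode}")

# Toy 2x2 patches standing in for two sensor outputs
src_a = np.array([[200.0,  50.0], [120.0,  80.0]])
src_b = np.array([[100.0, 150.0], [ 60.0, 200.0]])

fused_max = fuse_pixel_level(src_a, src_b, mode="max")      # per-pixel maximum
fused_avg = fuse_pixel_level(src_a, src_b, mode="average")  # equal-weight mean
```

Maximum selection favors the brighter (often better-focused) source at each pixel, while the weighted average trades spatial detail for smoothness; minimum selection would follow the same pattern with `np.minimum`.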

1.3.2 Feature Level Image Fusion

In feature-based fusion methods, corresponding features of the source images are extracted
and fused based upon statistical approaches or artificial neural networks employing fuzzy
logic or other soft computing techniques. The features extracted from the source images are
defined by shape, edges, texture and the pixel intensities of neighboring pixels. Corresponding
features from the source images are matched to one another and fused for further analysis.
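A minimal sketch of the idea, assuming a simple gradient-magnitude edge map as the extracted feature (the text names edges among the features but does not fix an extractor): fuse by keeping, at each location, the pixel from whichever source shows the stronger edge response.

```python
import numpy as np

def edge_feature(img):
    """Edge-magnitude feature map from finite differences -- a simple
    stand-in for the shape/edge/texture features described in the text."""
    gy, gx = np.gradient(np.asarray(img, dtype=np.float64))
    return np.hypot(gx, gy)

def fuse_by_feature(img_a, img_b):
    """Keep, at each location, the pixel whose source image has the
    stronger edge response -- a minimal feature-level selection rule.
    Ties go to img_a."""
    fa, fb = edge_feature(img_a), edge_feature(img_b)
    return np.where(fa >= fb, img_a, img_b)

# Toy multi-focus pair: img_a contains a sharp edge, img_b is flat
img_a = np.array([[0.0, 0.0, 10.0, 10.0]] * 4)
img_b = np.full((4, 4), 5.0)
fused = fuse_by_feature(img_a, img_b)
```

Real feature-level systems match extracted regions or objects between images before fusing; the per-pixel selection here is only the simplest instance of choosing content by feature strength.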

1.3.3 Decision Level Image Fusion

Decision level fusion merges the interpretations of the different source images, which are
obtained after understanding the images at a higher level of abstraction. The results produced
for the source images by multiple algorithms are combined to yield a fused result, based upon
decision rules grounded in common optical principles. The input images are processed
individually to extract the desired information, and the processed information from the
source images is then merged to obtain the required fused result.
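One common decision rule, used here purely as an illustration since the text does not specify one, is a per-pixel majority vote over class-label maps produced independently for each source image:

```python
import numpy as np

def fuse_decisions(label_maps):
    """Decision level fusion by per-pixel majority vote.

    label_maps : list of integer class-label arrays of identical shape,
    one per source image (or per classification algorithm).
    Ties resolve to the smallest class label (argmax keeps the first max).
    """
    stack = np.stack(label_maps)                     # (n_sources, H, W)
    n_classes = int(stack.max()) + 1
    # Count the votes each class receives at every pixel, then pick the winner.
    votes = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

# Three interpretations of the same 2x2 scene (labels 0, 1, 2)
maps = [
    np.array([[0, 1], [2, 1]]),
    np.array([[0, 1], [0, 2]]),
    np.array([[1, 1], [2, 2]]),
]
consensus = fuse_decisions(maps)   # majority label at each pixel
```

Each source is interpreted independently first; only the resulting decisions are merged, which is what distinguishes this level from pixel or feature level fusion.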
