Image Fusion based on Sparse Sampling Method and Hybrid Discrete Cosine Transformation
Abstract— Image fusion is the combination of several images into a single merged image that is more informative than any of the input images; it has been widely used in recent years. This work deals with image fusion based on the Discrete Cosine Transformation (DCT). Generally, in image fusion, several images of the same scene are given as input and one image of higher quality is obtained as output. Compared with Nyquist-rate sampling, compressed sensing theory offers an improved result. Among the many techniques, the DCT yields better quality while requiring less storage and lower cost.
Index Terms— Image fusion, compressive sensing, principal component analysis, discrete cosine transformation, sparse sampling, Nyquist theorem, low pass filter, fused image.
1 INTRODUCTION
ACQUIRING and reconstructing signals are essential operations in every signal processing system, and sampling theorems bridge the continuous and discrete domains. Image fusion based on the DCT can provide better performance than fusion based on other multi-scale methods. Multi-sensor data fusion has become an intense research area in recent years [1-2]. The cosine transform provides good localization in both the frequency and spatial domains. With the increasing number of remote sensing image fusion algorithms, real-time fusion faces challenges [3-5].

Remote sensing using satellites is one of the most powerful mechanisms for monitoring the surface of the earth and the atmosphere, since it provides wide coverage and mapping. The term "remote sensing" is generally used for electromagnetic techniques [6-9]. A consistent and periodic view of planet earth is provided by the remote sensing systems carried on satellites. These systems provide a broad range of spatial and temporal resolutions in order to support various applications. Many images are taken by the satellites, and their frequency bands lie in the visual and non-visual range [10-11]. A multispectral (MS) image is a collection of various bands of the same scene obtained by a sensor. A colour image is formed from a mixture of three bands combined in an RGB colour system. The information content is increased by the combination of the spectral bands of a colour image; otherwise it is tedious to differentiate the images, since various targets may look alike. Either a single multispectral sensor or multiple sensors operating at various frequencies can be used to acquire the different frequency bands [12-13].

————————————————
Hien Dang, Dept. of Computer Science and Engineering, ThuyLoi University, HaNoi, Vietnam. E-mail: [email protected]
K. Martin Sagayam, Dept. of ECE, Karunya Institute of Tech. and Sciences, Coimbatore. E-mail: [email protected]
P. Malin Bruntha, Dept. of ECE, Karunya Institute of Tech. and Sciences, Coimbatore. E-mail: [email protected]
S. Dhanasekar, Dept. of ECE, Sri Eshwar College of Engg., Coimbatore. E-mail: [email protected]
A. Amir Anton Jone, Dept. of ECE, Karunya Institute of Tech. and Sciences, Coimbatore. E-mail: [email protected]
G. Rajesh, Dept. of Information Technology, Anna University (MIT Campus), Chennai. E-mail: [email protected]
————————————————

2 RELATED WORK
2.1 Compressive Sensing (CS)
Compressive sensing, otherwise known as sparse sampling, is a method generally used to reconstruct a signal from a small number of measurements by seeking the sparsest estimate consistent with them. It is one of the techniques that have received the most interest in sampling procedures [14-17]. Compressive sensing takes advantage of the redundancy of signals for the reconstruction of the image. The performance of such algorithms is assessed both computationally and in terms of the conditions required for recovery [18-20].
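To make the idea concrete, the following is a minimal sketch of sparse sampling and recovery, assuming NumPy. The random measurement matrix, the dimensions and the orthogonal matching pursuit (OMP) recovery routine are illustrative choices, not the reconstruction method used in this paper.

```python
import numpy as np

# Illustrative compressive sensing sketch: a K-sparse signal of length N is
# measured with M << N random projections and recovered by OMP.
rng = np.random.default_rng(0)
N, M, K = 256, 80, 8                                  # length, measurements, sparsity

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # sparse ground truth

Phi = rng.standard_normal((M, N)) / np.sqrt(M)                # measurement matrix
y = Phi @ x                                                    # compressed measurements

def omp(Phi, y, k):
    """Greedy recovery: repeatedly pick the atom most correlated with the residual."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, K)
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```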
2.2 Discrete Cosine Transformation (DCT)
Using the DCT, the frequency components of a given signal can be obtained. Since it has good energy compaction, the DCT is normally adopted in image processing and compression applications. DCT-based fusion combines images through weighted coefficients and recovers the fused image with a reconstruction algorithm [21-24]. DCT-based image fusion is well suited to real-time applications, saves time, and yields images of better clarity. The DCT also separates a signal into low-frequency and high-frequency content, which makes it possible to retain the low-frequency components and discard the high-frequency ones. The JPEG standard, with which many images are compressed, is one of the main applications of the DCT [25-26].
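The energy-compaction property described above can be illustrated with a short sketch, assuming SciPy's scipy.fft.dct and idct; the test signal and the number of retained coefficients are arbitrary choices for demonstration.

```python
import numpy as np
from scipy.fft import dct, idct

# The DCT concentrates most of a smooth signal's energy in a few low-frequency
# coefficients, so the signal can be approximated well from them.
n = np.arange(64)
signal = np.cos(2 * np.pi * 3 * n / 64) + 0.5 * np.cos(2 * np.pi * 7 * n / 64)

coeffs = dct(signal, norm='ortho')            # frequency components of the signal
energy = np.cumsum(coeffs**2) / np.sum(coeffs**2)
print("energy captured by first 16 coefficients:", energy[15])

kept = coeffs.copy()
kept[16:] = 0                                 # discard high-frequency components
approx = idct(kept, norm='ortho')             # reconstruct from retained ones
print("max reconstruction error:", np.abs(approx - signal).max())
```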
3 PROBLEM DEFINITION
Recent survey papers have provided overviews of the developments and the current status of remote sensing in the field of image processing [2-4]. Remote sensing satellite techniques deal with the acquisition, processing, analysis and usage of data from space platforms and satellites [28].
Effects of atmosphere
The atmosphere attenuates and distorts the signal travelling from the earth's surface to the remote sensing sensor. The gases and aerosols present in the atmosphere scatter, absorb and emit radiant energy. The atmosphere is transparent only in certain narrow regions of the visible, infrared and microwave bands, and it is practically opaque in the ultraviolet, which is therefore not suitable for remote sensing. The main source of electromagnetic energy reaching the earth is the sun, and the maximum irradiance of the sun occurs at different wavelengths for the earth and the atmosphere. The atmosphere scatters, absorbs or emits this radiant energy. Rayleigh (molecular) scattering is dominant in the visible and near-ultraviolet regions and increases with an inverse power of the wavelength [10-13]. Some molecular species scatter and possess absorption bands that influence atmospheric absorption even when present in trace quantities; water vapour also plays a prominent role in absorption. Infrared cloud images can give a lot of information: the temperature of a cloud can be estimated from its infrared radiation and, from that, its altitude.

Fig. 1 Framework of the proposed work (block diagram: the input images pass through the discrete cosine transformation and low pass/high pass filtering, are processed by principal component analysis (PCA) and compressive sensing (CS), and the outputs are reconstructed by the inverse cosine transform to give the fused image).
Data Fusion
Data fusion is the process of combining many images using fusion algorithms. In order to improve the spatial and temporal resolution of an image, data fusion can merge dissimilar and complementary data; this process leads to more exact data [27]. The satellites which cover the earth provide data that encompass different regions of the electromagnetic spectrum at various spatial, temporal and spectral resolutions. Data fusion is an emerging area which can be used to handle such multi-source data. The fused images carry more information because different data, each with its own characteristics, are combined. Optical sensors and RADAR generate large volumes of data, and these data can be combined using the data fusion technique. The problem is that optical images may resonate with the spectral information of the object. Moreover, the intensities of RADAR are sensitive to the roughness and vertical characteristics of the target, while the optical images are affected by the carrier frequency of the electromagnetic waves [5-7]. In general, optical images are prone to geometric distortions. Hence, it is necessary to combine images obtained by the same or different sensors in order to eliminate distortions and noise and to obtain accurate images of the region of interest.

Discrete Cosine Transform (DCT)
Nowadays, image data are extensively available, and the volumes of data obtained from different processes are tremendously large. Hence, storing or transmitting the image data requires a high cost. To mitigate this problem, data compression techniques are adopted; one such technique is the Discrete Cosine Transform (DCT). Using the DCT, a noteworthy compression ratio can be achieved. However, some accuracy is lost when reconstructing the image; in applications such as videophones, this loss can be tolerated. In this paper, the DCT is employed as a compression technique for remote sensing images.
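As a rough illustration of DCT-based compression and its loss of accuracy, the sketch below applies a 2-D DCT to a single 8x8 block and keeps only the low-frequency coefficients; the block contents and the 4x4 retention mask are hypothetical, not the compression settings used in this paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Simplified, JPEG-like block compression: keep only the low-frequency corner
# of an 8x8 block's DCT and reconstruct from it.
block = np.add.outer(np.arange(8), np.arange(8)).astype(float) * 8   # smooth test block

coeffs = dctn(block, norm='ortho')          # 2-D DCT of the block
mask = np.zeros_like(coeffs)
mask[:4, :4] = 1                            # retain 16 of 64 coefficients (4:1 ratio)
compressed = coeffs * mask

reconstructed = idctn(compressed, norm='ortho')
rmse = np.sqrt(np.mean((reconstructed - block) ** 2))
print("RMSE introduced by discarding high-frequency coefficients:", rmse)
```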
Low pass filter
The low pass filter is used for blurring: it smooths an image by averaging the values of the neighbouring pixels and replacing each source pixel value with that average, which improves the uniformity of the intensity level. This is repeated for every pixel until a refined final output is obtained. The averaging reduces the noise level, although it cannot suppress the noise below a minimum level; applying the filter gradually at the individual pixel level enhances the quality of the image, and faint variations caused by noise are smoothed out by the filtering.
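A minimal sketch of the averaging operation described above, assuming SciPy's uniform_filter; the test image and noise level are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Low pass (averaging) filtering: each pixel is replaced by the mean of its
# 3x3 neighbourhood, which smooths out additive noise.
rng = np.random.default_rng(2)
image = np.tile(np.linspace(0, 255, 64), (64, 1))          # smooth test image
noisy = image + rng.normal(0, 20, image.shape)             # add Gaussian noise

smoothed = uniform_filter(noisy, size=3)                   # 3x3 mean filter

print("noise std before:", np.std(noisy - image).round(2),
      "after:", np.std(smoothed - image).round(2))
```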
High pass filter
The high-pass filter is used to improve the detail of an input image with good resolution; it provides more precise filtering than the low-pass filter. High pass and low pass filtering work in the same fashion but differ in the convolution kernel. Where there is no variation in intensity, no change occurs; but where a pixel is more intense than its neighbours, the difference is magnified, which improves the perceived sharpness of the image. While low-pass filtering smooths the noise in the input image, high-pass filtering magnifies the noise in the input content: if the input data are very noisy, the amplified noise can overwhelm the source content [9-11]. Used carefully, high-pass filtering can significantly improve image content by sharpening it without loss of information.
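A corresponding high-pass sketch, using a common 3x3 high-pass kernel with SciPy's convolve; the kernel and the sharpening weight are illustrative choices, not parameters taken from this paper.

```python
import numpy as np
from scipy.ndimage import convolve

# High pass filtering: the same convolution machinery as the low pass case,
# but with a kernel that responds only where a pixel differs from its
# neighbours, so edges are emphasised (and noise is amplified).
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)             # classic high-pass kernel

image = np.zeros((64, 64))
image[:, 32:] = 255.0                                      # vertical step edge
detail = convolve(image, kernel, mode='reflect')           # strong response at the edge

sharpened = image + 0.2 * detail                           # unsharp-style sharpening
print("max high-pass response:", np.abs(detail).max())
```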
Principal Component Analysis (PCA)
Principal Component Analysis (PCA) is used to transform multivariate data from correlated to uncorrelated variables. It is one of the significant statistical methods for dimensionality reduction and de-correlation. In image analysis, PCA emphasizes the determination of object orientation through eigenvector properties, and it is used to extract highly redundant feature values from the input data. The perceptual features are extracted from the input data, and the method emphasizes the structure of the data for exploratory data analysis and predictive approaches. Mathematically, PCA is defined as an orthogonal linear transformation of the data into a new coordinate system. Consider a data matrix X with zero empirical mean whose rows x(i), i = 1, …, n, are mapped by p-dimensional weight vectors w(k) = (w1, …, wp)(k) to new vectors t(i) = (t1, …, tl)(i) given by

tk(i) = x(i) · w(k), for i = 1, …, n and k = 1, …, l    (1)

The first weight vector is chosen to maximize the variance of the projected data:

w(1) = arg max‖w‖=1 { Σi ( x(i) · w )² }    (2)
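A common PCA-based fusion recipe is sketched below: the principal eigenvector of the covariance of the two source images supplies the fusion weights. This is a generic illustration in NumPy; the exact PCA variant used in this paper may differ.

```python
import numpy as np

# PCA fusion weights: the principal component of the 2x2 covariance of the two
# source images gives the weights of a weighted-average fused image.
def pca_fusion_weights(img_a, img_b):
    data = np.stack([img_a.ravel(), img_b.ravel()])        # 2 x (H*W) data matrix
    data = data - data.mean(axis=1, keepdims=True)         # zero empirical mean
    cov = np.cov(data)                                     # 2 x 2 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)                 # ascending eigenvalues
    w = np.abs(eigvecs[:, -1])                             # principal eigenvector
    return w / w.sum()                                     # normalised weights

rng = np.random.default_rng(4)
a = rng.random((64, 64))
b = 0.5 * a + 0.5 * rng.random((64, 64))                   # correlated second image
w1, w2 = pca_fusion_weights(a, b)
fused = w1 * a + w2 * b
print("PCA fusion weights:", round(w1, 3), round(w2, 3))
```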
The hybrid DCT method emphasizes two main concerns: compression capability and complexity. The cosine transform is used to decompose the original data into multiple sub-bands, and the fused image is recovered under two criteria: sparsity and incoherence. The orientation field and the iterative field are used in the Lagrangian multiplier updates shown below:

λ1 = λ1 + γ (H − ∇d)    (3)

λ2 = λ2 + γ (V − ∇d)    (4)

λ3 = λ3 + γ (P − ∇x)    (5)

λ4 = λ4 + γ (Q − P d)    (6)

Inverse Discrete Cosine Transform
The inverse DCT reconstructs a sequence from its DCT coefficients by applying the inverse transform with the appropriate normalization, which depends on the sequence length N. It reproduces the original data more accurately than other transformation techniques.

Fused Image
The image fusion algorithm divides the source images into non-overlapping blocks of N x N pixels and computes the DCT coefficients of each block. A fusion rule is applied to the DCT coefficients to obtain the fused coefficients, and the inverse transform then gives the fused block. This procedure is repeated until the complete fused image is obtained.
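The block-wise procedure just described can be sketched as follows, assuming SciPy's dctn/idctn. The maximum-magnitude fusion rule used here is one common choice and stands in for the weighted-coefficient rule of the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Block-wise DCT fusion: DCT each N x N block of both sources, fuse the
# coefficients with a simple rule, then inverse-transform the fused block.
def dct_block_fusion(img_a, img_b, block=8):
    assert img_a.shape == img_b.shape
    fused = np.zeros_like(img_a, dtype=float)
    h, w = img_a.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            ca = dctn(img_a[i:i+block, j:j+block], norm='ortho')   # DCT of block A
            cb = dctn(img_b[i:i+block, j:j+block], norm='ortho')   # DCT of block B
            cf = np.where(np.abs(ca) >= np.abs(cb), ca, cb)        # fusion rule
            fused[i:i+block, j:j+block] = idctn(cf, norm='ortho')  # inverse DCT
    return fused

rng = np.random.default_rng(5)
a = rng.random((64, 64))
b = rng.random((64, 64))
fused = dct_block_fusion(a, b)
print("fused image shape:", fused.shape)
```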
5 RESULT AND ANALYSIS
Preservation of the true information and the natural colour is very important for any image fusion method. Fusion performance is mainly assessed using subjective and objective metrics. Subjective tests may be accurate when the content is recognized correctly, but objective metrics are easier to obtain and consume less time. Hence, the performances of the different multi-sensor image fusion methods are evaluated here through the objective metrics generated from their outputs. An objective fusion test should be able to measure the quality of the fusion process as accurately as possible from the input data. The performance measures are determined from the edge information of the source images that is transferred into the fused image, and these objective metrics range between 0 and 1. They can be derived using objective evaluation methods; a few such parameters are entropy (E), standard deviation (SD), root mean square error (RMSE), peak signal to noise ratio (PSNR) and mutual information (MI).

Metric 1: QAB/F
The QAB/F metric measures the amount of edge information transferred from the source images A and B into the fused image F.

Metric 2: Entropy
En = − Σ p(i) log2 p(i), summed over i = 0, …, N

where p(i) is the probability of grey level i in the fused image. The total mutual information between the fused image f and the source images a and b is

MI_F = MI_FA(f; a) + MI_FB(f; b)
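For reference, the sketch below shows one straightforward way to compute the entropy, PSNR and mutual information metrics listed above, using NumPy histograms; the bin counts and test images are illustrative, not the evaluation settings of the paper.

```python
import numpy as np

# Objective fusion metrics for 8-bit images: entropy of the fused image, PSNR
# against a reference, and MI_F = MI(F;A) + MI(F;B) from joint histograms.
def entropy(img, bins=256):
    p, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def psnr(reference, test, peak=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float(10 * np.log10(peak ** 2 / mse))

def mutual_information(x, y, bins=64):
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(6)
a = rng.integers(0, 256, (64, 64))
b = rng.integers(0, 256, (64, 64))
fused = ((a.astype(float) + b) / 2).astype(np.uint8)
print("entropy:", round(entropy(fused), 3),
      "PSNR(a, fused):", round(psnr(a, fused), 2),
      "MI_F:", round(mutual_information(fused, a) + mutual_information(fused, b), 3))
```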
Fig. 2: Fused images of the leaf with their fusion techniques: (a) first image, (b) second image, (c) PCA with CS, (d) hybrid DCT fused image.

This method shows the significant output of each stage; the performance evaluation for the leaf image is given in Table 1. The hybrid DCT based sparse sampling method produced better results than the other methods, namely PCA, CS and PCA with CS.

6 CONCLUSION
The proposed algorithm is a hybrid DCT digital image fusion algorithm. The method exploits the strengths of the DCT to obtain a fused image of better quality, with a lower standard deviation and a higher PSNR than the existing techniques. In this work, the image fusion algorithm is proposed to fuse the MS images and the Pan image by combining the cosine transform with sparse representation. The cosine transform and the fusion rule are used to characterize the low- and high-frequency sub-images of the inputs, and the sparse sampling representation is used to capture better local information and spatial content from the images. As a result, the spectral distortion in the fused image is reduced.

REFERENCES
[1] L. Wald, "Some terms of reference in data fusion," IEEE Trans. Geosci. Remote Sens., vol. 37, no. 3, pp. 1190-1193, May 1999.
[2] M. Choi, "A new intensity-hue-saturation fusion approach to image fusion with a tradeoff parameter," IEEE Trans. Geosci. Remote Sens., vol. 44, no. 6, pp. 1672-1682, Jun. 2006.
[3] G. Piella, "A general framework for multiresolution image fusion: from pixels to regions," Information Fusion, vol. 4, no. 4, pp. 259-280, 2003.
[4] S. Li, J. T. Kwok and Y. Wang, "Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images," Information Fusion, vol. 3, no. 1, pp. 17-23, 2002.
[5] Zhihui Wang, Yang Tie and Yueping Liu, "Design and Implementation of Image Fusion System," IEEE International Conference on Computer Application and System Modelling (ICCASM), 2010, pp. V10-140 to V10-143.
[6] Qizhi Xu, Yun Zhang, Bo Li and Lin Ding, "Pansharpening Using Regression of Classified MS and Pan Images to Reduce Color Distortion," IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 1, pp. 28-32, January 2015.
[7] Jagdeep Singh and Vijay Kumar Banga, "An Enhanced DCT Based Image Fusion Using Adaptive Histogram Equalization," International Journal of Computer Applications (0975-8887), vol. 87, no. 12, February 2014.
[8] Sascha Klonus and Manfred Ehlers, "Performance of Evaluation Methods in Image Fusion," 12th International Conference on Information Fusion, Seattle, WA, USA, July 6-9, 2009.
[9] V. P. S. Naidu, "Discrete Cosine Transform Based Image Fusion Techniques," Journal of Communication, Navigation and Signal Processing, vol. 1, no. 1, pp. 35-45, January 2012.
[10] Mandeep Kaur Sandhu and Ajay Kumar Dogra, "A Detailed Comparative Study of Pixel Based Image Fusion Techniques," International Journal of Recent Scientific Research, vol. 4, issue 12, pp. 1949-1951, December 2013.
[11] Frosti Palsson, Johannes R. Sveinsson, Magnus Orn Ulfarsson and Jon Atli Benediktsson, "Model-Based Fusion of Multi- and Hyperspectral Images Using PCA and Wavelets," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 5, pp. 2652-2663, May 2015.
[12] Yufeng Zheng, "Image Fusion and Its Applications," InTech, ISBN 978-953-307-182-4, May 2011.
[13] Veeraraghavan Vijayaraj, Nicolas H. Younan and Charles G. O'Hara, "Concepts of Image Fusion in Remote Sensing Applications," IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2006, pp. 3781-3784.
[14] Wenkao Yang, Jing Wang and Jing Guo, "A Novel Algorithm for Satellite Image Fusion Based on Compressed Sensing and PCA," Mathematical Problems in Engineering, Hindawi Publishing Corporation, pp. 1-10, 2013.
[15] Yunxiang Tian and Xiaolin Tian, "Remote sensing image fusion based on orientation information in nonsubsampled contourlet transform domain," International Conference on Advanced Electronic Science and Technology, Atlantis Press, pp. 57-63, 2016.
[16] Yang Chen and Zheng Qin, "PCNN-based image fusion in compressed domain," Mathematical Problems in Engineering, Hindawi Publishing Corporation, http://dx.doi.org/10.1155/2015/536215, 2015.
[17] Vaibhav R. Pandit and R. J. Bhiwani, "Image fusion in remote sensing applications: A review," International Journal of Computer Applications, vol. 120, no. 10, pp. 22-32, 2015.
[18] Hongguang Li, Wenrui Ding, Xianbin Cao and Chunlei Liu, "Image registration and fusion of visible and infrared camera for medium-altitude unmanned aerial vehicle remote sensing," Remote Sensing, vol. 9, no. 5, doi:10.3390/rs9050441, 2017.
[19] Yong Yang, Song Tong, Shuying Huang, Pan Lin and Yuming Fang, "A hybrid method for multi-focus image fusion based on fast discrete curvelet transform," IEEE Access, doi:10.1109/ACCESS.2017.2698217, 2017.
[20] B. Rajalingam and R. Priya, "Multimodality medical image fusion based on hybrid fusion techniques," International Journal on Engineering and Manufacturing Science, vol. 7, no. 1, pp. 22-29, 2017.
[21] Biswajit Biswas, Biplab Kanti Sen and Ritamshirsa Choudhuri, "Remote sensing image fusion using PCNN model parameter estimation by Gamma distribution in Shearlet domain," Procedia Computer Science, vol. 70, pp. 304-310, 2015.
[22] Jun Li, Minghui Song and Yuanxi Peng, "Infrared and visible image fusion based on robust principal component analysis and compressed sensing," Infrared Physics and Technology, vol. 89, pp. 129-139, 2018.
[23] T. Xiang, L. Yan and R. Gao, "A fusion algorithm for infrared and visible images based on adaptive dual-channel unit-linking PCNN in NSCT domain," Infrared Physics and Technology, vol. 69, pp. 53-61, 2015.
[24] Z. Fu, X. Wang, J. Xu, N. Zhou and Y. Zhao, "Infrared and visible images fusion based on RPCA and NSCT," Infrared Physics and Technology, vol. 77, pp. 114-123, 2016.
[25] Q. Zhang and X. Maldague, "An adaptive fusion approach for infrared and visible images based on NSCT and compressed sensing," Infrared Physics and Technology, vol. 74, pp. 11-20, 2016.