
2014 IEEE Global Conference on Wireless Computing and Networking (GCWCN)

Effective Image Enhancement Using Hybrid Multi Resolution Image Fusion
Deepali Sale and Varsha Patil
Dept. of E&TC, Pad. Dr. D. Y. Patil Inst. of Engg. and Tech., Pimpri, Pune-411018, India
[email protected], [email protected]

Dr. Madhuri A. Joshi
Dept. of E&TC, College of Engineering, Pune (COEP), Pune-411005, India
[email protected]

Abstract — Image fusion plays an important role in image enhancement and retrieval. Multi-focus image fusion is related to frequency distortion, so a wavelet with the symmetric property is more useful to retrieve the image. Hence, selecting the wavelet family which provides better frequency resolution is of immense importance. Due to this property of the Haar and biorthogonal wavelet bases, they are used for multi-focus image fusion. Spatial frequency measures the overall activity level in an image. Abrupt spatial changes in the image, such as edges, are represented by the high spatial frequencies. Edges are the salient features of an image which give the information about the fine details. Low spatial frequencies, on the other hand, represent global information about the shape and orientation of an image. For this reason, spatial frequency is used as the selection feature. A novel hybrid approach to multi-focus image fusion is used: wavelet coefficients of both source images are combined by applying different fusion rules for the low and high frequency coefficients. A Laplacian pyramid based transform with a maximum selection fusion rule is applied to the low frequency coefficients of both input images, while the high frequency coefficients are selected based upon maximum spatial frequency. The presented algorithm is compared with the individual fusion methods, and better results are obtained with the combined (proposed) method. The statistical analysis is performed using reference and non-reference based metrics; important feature based metrics are considered.

Keywords — biorthogonal; Haar; image fusion; Laplacian pyramid; multi-focus; spatial frequency; wavelet

I. INTRODUCTION

Multi-focus image processing overcomes deficiencies in optical systems and the problem of limited depth of field. By integrating the in-focus portions of each image, a single composite image is obtained in which all the information is enhanced and sharp; the problem of depth of focus in optical lenses is thus overcome using image fusion. Many techniques for multi-focus image fusion have been proposed in the literature [6], [12]-[17] at pixel level, feature level, and block and region based level. The simplest technique is fusion at pixel level, where the values or intensities of two source images are combined using a maximum selection, minimum or average fusion rule. Fusion algorithms are categorized into spatial domain and transform domain fusion [5], [12]. Pixel level image fusion methods suffer from the problem of contrast reduction. In multi-focus images some areas are clear and some are unclear, which corresponds to a loss of high frequency information about edges or regional borders. High frequency data selection using a sharpness criterion is useful for blurred image enhancement; this method is properly used at the block, region and feature based fusion levels. Spatial domain fusion techniques directly incorporate the pixel values and fuse the source images using local spatial features such as gradient, spatial frequency, and local standard deviation; they comprise weighted average, nonlinear and estimation theory based methods. In a transform domain method of fusion, the images are first transformed into multiple levels of resolution. An image contains physically relevant features at many different scales or resolutions, and multi-scale or multi-resolution approaches provide a means to exploit this fact. Image fusion gives superior quality image enhancement for blurred (multi-focus) images by taking the relevant and important information from two or more images into a single image. Relevant and important information means the parts of the picture to which the Human Visual System (HVS) is sensitive. The HVS is more sensitive to the visually salient regions in an image; the salient regions are formed by contrast differences, i.e. the edges of the image. The resultant sharp image is more suitable for visualization, detection or recognition tasks [6]-[9], [17]. The multi-resolution transform contributes a good mathematical model of the HVS and provides information on the contrast changes, so multi-resolution techniques are popular for image fusion. The most frequently used multi-resolution technique for multi-focus image fusion is the Discrete Wavelet Transform (DWT), a spatial-frequency decomposition that provides flexible multi-resolution analysis of an image and is easy to implement with a filter bank (low and high frequency filters). But it has the drawback of shift variance. This drawback is removed by using the Shift Invariant DWT, which improves the quality of the fused image. This wavelet transform decomposes the image into low frequency and high frequency coefficients, each of which has a special physical significance. The low frequency coefficient is the original image at the coarser resolution level, which can be considered as a smoothed and sub-sampled version of the original image; hence most of the information is kept in the low frequency band. The high frequency coefficient contains the detail information of an image. It usually has large absolute values that correspond to sharp intensity changes. These coefficients preserve the salient information of an image such as edges, lines and the region borders.
978-1-4799-6298-3/14/$31.00 © 2014 IEEE 116
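The elementary pixel-level fusion rules mentioned in the introduction (maximum, minimum and average selection) can be sketched in a few lines of NumPy; the function name pixel_fuse is illustrative, not from the paper.

```python
import numpy as np

def pixel_fuse(a, b, rule="average"):
    """Combine two registered source images pixel by pixel."""
    a = a.astype(float)
    b = b.astype(float)
    if rule == "maximum":      # keep the brighter of the two pixels
        return np.maximum(a, b)
    if rule == "minimum":      # keep the darker of the two pixels
        return np.minimum(a, b)
    return (a + b) / 2.0       # simple average (default)
```

As the introduction notes, such rules operate on intensities alone, which is why pixel-level methods tend to reduce contrast compared with feature-level schemes.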


In wavelet based image fusion algorithms both the high-level sub bands and the LL sub band are important. The LL sub band holds the approximate values, and ignoring it may lose the contrast information of the original image. The Laplacian pyramid is a contrast enhancement method: applying it to the LL sub bands improves the contrast and quality of the final fused image. The main objective of the proposed hybrid model is to develop a reliable method of image fusion in which the fused image represents the visual information without the introduction of distortion or loss of information.

II. PROPOSED METHOD

Fig. 1: The proposed hybrid model

A number of techniques have been developed for multi-focus image fusion. Fusion is performed at all the levels, i.e. pixel, feature and decision level, and the multi-focus image fusion field still has not reached its maturity. Feature level fusion is giving the best results. In the proposed method a hybrid approach of fusion is used. Spatial frequency measures the overall activity level in an image. High spatial frequencies represent abrupt spatial changes in the image; edges correspond to the features and finer detail information of an image. Low spatial frequencies represent global information about the shape, such as general orientation and proportions. Hence the spatial frequency feature is considered for the selection of the high frequency components of an image.

III. LAPLACIAN PYRAMID

For image enhancement it is very important to analyze images containing objects of many sizes. These objects contain features of different sizes, and objects can be at various distances from the viewer. Any analysis procedure that is applied only at a single scale may miss information at other scales; the solution is to carry out the analysis at all scales simultaneously. A powerful but conceptually simple structure for representing images at more than one resolution or scale is the image pyramid (Burt and Adelson, 1983) [15]. The Laplacian pyramid of an image is a set of band pass images, each of which is a band pass filtered copy of its predecessor. Band pass copies can be obtained by calculating the difference between the low pass images at successive levels of a Gaussian pyramid. The basic idea of pyramid based fusion is to compose the fused image from the pyramid transforms: the pyramid transform is taken of the source images, and the fused image is then obtained by taking the inverse pyramid transform. The general block diagram of such algorithms is shown in figure 2. The first step is to perform the Laplacian pyramid decomposition of the images to be fused separately, establishing a Laplacian pyramid for each image. The decomposed image pyramid layers are then fused separately; different layers can be mixed with the fusion rule, as explained below, with spatial frequency as the criterion function.

Fig. 2: Laplacian pyramid fusion algorithm

IV. SHIFT INVARIANT DWT ALGORITHM

The Laplacian image fusion algorithm integrates multi-source information at the basic level and provides more abundant, accurate and reliable detail information of an image. But the Laplacian pyramid does not take into account important details of an image such as edges, boundaries and salient features larger than a single pixel; this information is normally included only in the high pass sub bands. The Laplacian pyramidal transform fails to introduce any spatial orientation selectivity in the decomposition process, which may cause a blocking effect. As it consists of band pass copies of the original image, it enhances the contrast at various sizes and makes it directly available for various image processing applications. The wavelet transform, on the other hand, provides information on both the low pass and the high pass sub bands, so combining the Laplacian pyramid and the wavelet gives better results: the fused image has both contrast and edge information enhanced. The most popular wavelet technique used for multi-focus image fusion is the discrete wavelet transform (DWT). DWT is a spatial-frequency decomposition that provides a flexible multi-resolution analysis of an image and is easy to implement using a filter bank (low and high frequency filters). But traditional DWT based fusion encounters a number of shortcomings: when fusion of sequences of images is considered, it is not shift invariant, and fusion methods using DWT lead to unstable and flickering results. To make the DWT shift invariant, a number of approaches have been suggested [11]. The Shift Invariant Discrete Wavelet Transform (SIDWT) is an exactly shift-invariant transform. It achieves this by removing the down samplers and up samplers in the DWT and up sampling the filter coefficients by a factor of 2^(j-1) at the jth level of the algorithm. The SIDWT is an inherently redundant scheme: the output of each level of the SIDWT contains the same number of samples as the input, so for a decomposition of N levels there is a redundancy of N in the wavelet coefficients. The actual fusion process in the SIDWT case is identical to the one in the generic wavelet fusion case. The input images are decomposed into their shift invariant wavelet representation, which consists of the four low-low, low-high, high-low and high-high frequency bands at different scales. Similar to the traditional DWT, here the LL sub bands are the original image at the coarser resolution level, which can be considered as a smoothed and sub-sampled version of the original image.
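The pyramid construction and its inverse, as described in Section III, can be sketched as below. A 3x3 box blur stands in for Burt and Adelson's 5-tap Gaussian kernel and nearest-neighbour expansion for their interpolation, so this is an illustrative sketch of the decomposition, not the authors' exact implementation.

```python
import numpy as np

def blur(img):
    # 3x3 box blur with edge padding (stand-in for the 5-tap Gaussian kernel)
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def down(img):
    # Low-pass filter, then drop every other row and column
    return blur(img)[::2, ::2]

def up(img, shape):
    # Nearest-neighbour expansion back to `shape`
    big = img.repeat(2, axis=0).repeat(2, axis=1)
    return big[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    gauss = [img.astype(float)]
    for _ in range(levels):
        gauss.append(down(gauss[-1]))
    # Each band-pass layer is the difference between successive Gaussian levels
    bands = [g - up(g_next, g.shape) for g, g_next in zip(gauss[:-1], gauss[1:])]
    return bands + [gauss[-1]]            # keep the coarsest low-pass residual

def reconstruct(pyramid):
    img = pyramid[-1]
    for band in reversed(pyramid[:-1]):   # add the detail back, level by level
        img = up(img, band.shape) + band
    return img
```

Because each band stores exactly the information removed by down-sampling, reconstruct() inverts the decomposition exactly; this is what makes the pyramid usable for fusion, since layers from two pyramids can be mixed before taking the inverse transform.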
Therefore most of the information of the source images is kept in the low frequency band, and it may inherit properties such as the mean intensity or texture information of the original image, while the high frequency coefficients (LH, HL, HH), whose large absolute values correspond to sharp intensity changes, present the salient features of the image such as edges, lines and region boundaries. For an M x N image F, with the gray value at pixel position (m, n) denoted by F(m, n), its spatial frequency is defined using Equation (1):

SF = sqrt(RF^2 + CF^2) .......... (1)

Here RF and CF are the row frequency and column frequency (Equations 2 and 3) respectively:

RF = sqrt( (1/(M*N)) * sum_{m=0..M-1} sum_{n=1..N-1} [F(m, n) - F(m, n-1)]^2 ) .......... (2)

CF = sqrt( (1/(M*N)) * sum_{m=1..M-1} sum_{n=0..N-1} [F(m, n) - F(m-1, n)]^2 ) .......... (3)

The same measure is computed for the corresponding regions of the source images, giving SF_A and SF_B .......... (4)

where A and B are the input images and F is the fused image.

V. FUSION ALGORITHM

The main steps of the fusion algorithm based on SF and SIDWT are given below and are shown schematically in figure 1.
• Decompose the input images using SIDWT into four wavelet coefficient bands (LL, LH, HL and HH).
• Combine the wavelet coefficients of both source images using different fusion rules for the LL and the (LH, HL, HH) coefficients. LL is combined using the Laplacian pyramid based image fusion explained above, while the high frequency coefficients are combined by considering the maximum SF.
• Selection of high frequency coefficients (Equations 5 and 6):

F(i, j) = A(i, j)                 if SF_A(i, j) > SF_B(i, j) .......... (5)
          B(i, j)                 if SF_B(i, j) > SF_A(i, j)
          [A(i, j) + B(i, j)] / 2 otherwise .......... (6)

• The final fused image is obtained by taking the inverse SIDWT of the fused coefficients obtained in the above steps.

VI. EXPERIMENTAL RESULTS

The widespread use of image fusion has led to the advent of different fusion algorithms, and better quality assessment tools are needed to compare the obtained results and to derive the optimal parameters of these techniques. The main objective of image quality measures is to study and compare the quality of the distorted images against the original image, and to find the quality assessment measures closest to the human visual system. Fusion performance has been investigated using both subjective and objective approaches. Subjective indices rely on people's comprehension and are hard to put into application, while objective indices can overcome the influence of human vision, mentality and knowledge, and let machines automatically select a superior algorithm to accomplish the image fusion task.

VII. DATABASE

Two complementary images are generated by blurring the source images using a Gaussian blur of radius 1.5. The images are complementary in the sense that the blurring occurs in the left half and the right half of the image respectively. Gaussian filtering is more effective for smoothing the images; its basis is analogous to the human visual perception system, as neurons have been found to create a similar filter when processing visual images.

VIII. PERFORMANCE METRICS

The metrics are selected according to the application used during the experimentation and evaluate the efficiency of the proposed algorithm. The metrics are:
1. Structural similarity index (SSIM)
2. Petrovic (edge based) parameter (QAB/F)
3. Average gradient (AG)

1. Structural similarity index (SSIM):
Wang [16] developed the Structural Similarity (SSIM) quality metric, which measures the similarity between two images. It is an improved version of traditional methods like PSNR and MSE, and is considered one of the most effective and consistent metrics. This parameter is based on the evidence that human visual perception is highly adapted to extracting structural information from a scene. Let x and y be the two images being compared, of which one is the reference image and the other the fused image. Let μx, σx² and σxy be the mean of x, the variance of x, and the covariance of x and y, respectively. Approximately, μx and σx are estimates of the luminance and contrast of x, and σxy measures the tendency of x and y to vary together, and is thus an indication of structural similarity. The SSIM index combines luminance, contrast and structure comparisons [16]:

SSIM(x, y) = [(2 μx μy + C1) / (μx² + μy² + C1)] * [(2 σx σy + C2) / (σx² + σy² + C2)] * [(σxy + C3) / (σx σy + C3)]

C1, C2 and C3 are small constants given by C1 = (K1 L)², C2 = (K2 L)² and C3 = C2/2 respectively. L is the dynamic range of the pixel values (L = 255 for 8 bits/pixel gray scale images), and K1 << 1 and K2 << 1 are two scalar constants. Table I shows the results obtained for the structural similarity index.

TABLE I: SSIM
IMAGES  LAPLACIAN PYRAMID  SIDWT   PROPOSED  % INCREASE WRT LAP.
A)      0.7419             0.8374  0.8836    19.09
B)      0.8349             0.8839  0.9089    8.86
C)      0.7855             0.8299  0.8675    10.44
D)      0.6279             0.6559  0.6624    5.49
E)      0.9184             0.9250  0.9477    3.19
F)      0.9137             0.9309  0.9658    5.70
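Equations (1)-(3) and the selection rule of Equations (5)-(6) can be sketched as follows; np.mean over the difference terms stands in for the 1/(MN) normalisation, and the function names are illustrative, not from the paper.

```python
import numpy as np

def spatial_frequency(block):
    f = block.astype(float)
    # Eq. (2): row frequency from horizontal neighbour differences
    rf = np.sqrt(np.mean((f[:, 1:] - f[:, :-1]) ** 2))
    # Eq. (3): column frequency from vertical neighbour differences
    cf = np.sqrt(np.mean((f[1:, :] - f[:-1, :]) ** 2))
    # Eq. (1): overall activity level of the block
    return np.sqrt(rf ** 2 + cf ** 2)

def select_block(a, b):
    # Eqs. (5)-(6): keep the block with the larger spatial frequency,
    # and average the two blocks on a tie
    sfa, sfb = spatial_frequency(a), spatial_frequency(b)
    if sfa > sfb:
        return a
    if sfb > sfa:
        return b
    return (a.astype(float) + b.astype(float)) / 2.0
```

A flat (defocused) block has SF = 0, while a block with strong edges scores high, which is exactly why SF works as an in-focus criterion.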
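The SSIM index of Section VIII can be sketched with a single global window (Wang et al. [16] compute it over local windows and average; this whole-image version is only an illustration). With C3 = C2/2, the three comparison terms collapse into the usual two-factor form used below.

```python
import numpy as np

def ssim_global(x, y, L=255.0, K1=0.01, K2=0.03):
    # Single-window SSIM: statistics are taken over the whole image
    x, y = x.astype(float), y.astype(float)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()           # variances (sigma^2)
    cxy = ((x - mx) * (y - my)).mean()  # covariance (sigma_xy)
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```

The index equals 1 only when the two images are identical, and decreases as their structure diverges.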

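The average gradient of Equation (8) can be sketched with forward differences standing in for the partial derivatives (an illustrative implementation, not the authors' code).

```python
import numpy as np

def average_gradient(img):
    # Eq. (8): mean over the image of sqrt(((dF/dx)^2 + (dF/dy)^2) / 2)
    f = img.astype(float)
    dx = f[:-1, 1:] - f[:-1, :-1]   # horizontal forward differences
    dy = f[1:, :-1] - f[:-1, :-1]   # vertical forward differences
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
```

On a pure horizontal ramp f(i, j) = j the value is sqrt(1/2), and it grows with texture and edge strength, which is why a larger AG indicates a sharper fused image.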
Fig. 4: Test images A) to F)

From the graph of SSIM it is observed that, among the individual fusion methods, images fused with the SIDWT method have better similarity than those fused with the Laplacian and DWT methods, as SIDWT has an additional directionality feature which the Laplacian and other methods lack.

2. Petrovic parameter (QAB/F):
The QAB/F framework [17] associates important visual information with gradient information and assesses fusion by evaluating the success of gradient information transfer from the inputs to the fused image. Total fusion performance is evaluated as a weighted sum of the edge information preservation values QAF and QBF of both input images, where the weight factors wA and wB represent the perceptual importance of each input image pixel. The range is 0 <= QAB/F <= 1, where 0 means complete loss of input information has occurred and QAB/F = 1 indicates ideal fusion with no loss of information. The perceptual weights wA and wB take the values of the corresponding gradient strength parameters QA and QB.

QAB/F = [ sum_n sum_m ( QAF(n, m) wA(n, m) + QBF(n, m) wB(n, m) ) ] / [ sum_n sum_m ( wA(n, m) + wB(n, m) ) ] .......... (7)

The comparative values of QAB/F for the three algorithms are given in Table II.

TABLE II: EDGE BASED PARAMETER
IMAGES  LAPLACIAN PYRAMID  SIDWT   PROPOSED  % INCREASE WRT LAP.
A)      0.6531             0.7033  0.7100    8.71
B)      0.6597             0.7002  0.7073    7.21
C)      0.6622             0.6699  0.6868    3.71
D)      0.4461             0.4495  0.4575    2.56
E)      0.5343             0.5660  0.6016    12.60
F)      0.6058             0.6352  0.6604    9.01

From the above results it can be seen that the SIDWT method extracts the edge information from the source images better than the Laplacian method. But as the proposed method is a combination of SIDWT and Laplacian, it gives better results than either method alone: the proposed method better transfers the gradient information of the source images into the fused image.

3. Average gradient (AG):
The average gradient reflects the small details of the image, texture variation and clarity; the larger this value, the better the fused image.

AG = (1/(M*N)) * sum_i sum_j sqrt( ( (dF/dx)^2 + (dF/dy)^2 ) / 2 ) .......... (8)

In the above formula, M and N respectively express the rows and columns of the image. The average gradient reflects the contrast level of the image detail, the texture variation, and the image clarity. The resultant values of the average gradient for the three algorithms are tabulated in Table III.

TABLE III: AVERAGE GRADIENT
IMAGES  LAPLACIAN PYRAMID  SIDWT   PROPOSED  % INCREASE WRT LAP.
A)      7.1575             7.1206  7.5863    5.99
B)      6.5640             6.5523  7.0428    7.36
C)      6.9940             6.9066  7.1540    2.29
D)      10.8331            9.4245  11.0922   2.40
E)      3.7428             3.6664  3.9130    4.55
F)      5.2640             5.2219  5.4187    2.94

The Laplacian pyramidal fusion method and the proposed method show better values of average gradient than the SIDWT method, reflecting the small details, texture variation and clarity of the final fused image; the proposed method still has a little more contrast than the Laplacian method, which enhances the final visual quality of the fused image.

TABLE IV: COMPARISON OF PSNR
IMAGES  LP       SIDWT    SIDWT+LP  % INCREASE WRT LAP.
A)      33.7941  36.1351  37.1509   9.94
B)      35.3305  36.1620  37.2657   5.48
C)      33.7434  35.9891  36.2193   7.35
D)      33.0635  35.9963  35.1000   6.17
E)      35.7594  40.1098  39.3403   10.01
F)      33.7136  36.5955  35.9699   6.70

IX. COMPARISON OF RMSE

TABLE V: RMSE
IMAGES  LP      SIDWT   SIDWT+LP  % DECREASE WRT LAP.
A)      5.2100  3.9791  3.9019    25.14
B)      4.3653  3.9668  3.4935    19.95
C)      5.2405  4.0466  3.9407    25.00
D)      5.6672  4.0432  4.4827    20.99
E)      4.1550  2.5180  2.7512    33.73
F)      5.2585  3.7737  4.0555    22.88

X. CONCLUSION AND FUTURE SCOPE

The proposed work is compared with paper [7], whose authors used pixel level fusion based on DWT and evaluated performance with the reference based quality measures RMSE and PSNR. It is found that the RMSE and PSNR values are improved in the proposed method. For example, for the clock image, RMSE is 4.9 and PSNR is 41.25 in paper [7], while in the proposed work RMSE is 2.75 and PSNR is 39.34. Although the PSNR value shows a minor reduction, the RMSE (error) has reduced considerably.

In our paper an SIDWT based feature level fusion scheme is used. The performance of the system can also be evaluated using some important quality measures that outperform the RMSE and PSNR measures: we used feature based quality measures such as the edge based metric QAB/F, the SSIM structural similarity measure and the average gradient. In multi-focus images edge information is missing because the image is blurred; hence it is important to evaluate how much structural and edge information has been transferred from both (blurred) source images into the fused image. Most of the information of the source image is kept in the low frequency band, which inherits properties such as the mean intensity or texture information of the original image, while the high frequency coefficients (LH, HL and HH) have large absolute values corresponding to sharp intensity changes and present the salient features of the image such as edges, lines and region boundaries. Hence, selecting the wavelet family that provides better frequency resolution is of immense importance. Spatial frequency measures the overall activity level in an image. Abrupt spatial changes in the image, such as edges, are represented by the high spatial frequencies; edges are the salient features of an image which give the information about the fine details. Low spatial frequencies, on the other hand, represent global information about the shape and orientation of an image. The algorithm can be tested with other types of images, such as multi-modal (medical), multi-illumination, multi-temporal and remote sensing images, and it can also be implemented for color image enhancement and retrieval. Important feature based parameters are considered for the quality analysis; reference and non-reference quality metrics are considered for experimentation.

XI. REFERENCES

[1] T. Sahoo, S. Mohanty, S. Sahu, "Multi-Focus Image Fusion Using Variance Based Spatial Domain and Wavelet Transform," International Conference on Multimedia, Signal Processing and Communication Technologies, 978-1-4577-1107-7, IEEE, 2011.
[2] A. B. Siddiqui, M. A. Jaffar, A. Hussain, A. M. Mirza, "Block-based Feature-level Multi-focus Image Fusion," IEEE, 978-1-4244-6949-9, 2010.
[3] S. Kor, U. Tiwary, "Feature level fusion of Multimodal Medical images in Lifting Wavelet transform domain," Proceedings of the 26th Annual International Conference of the IEEE EMBS, San Francisco, CA, USA, 0-7803-8439-3, September 1-5, 2004.
[4] C. M. S. Rani, V. V. Kumar, B. Sujatha, "An Efficient Block based Feature Level Image Fusion Technique using Wavelet Transform and Neural Network," International Journal of Computer Applications (0975-8887), Vol. 52, No. 12, August 2012.
[5] F. Cheng, Y. Zhang, J. Tian, "Multi-focus Image Fusion Based on Wavelet Coefficients Selection and Reconstruction," Fourth International Conference on Genetic and Evolutionary Computing, 978-0-7695-4281-2, IEEE, 2010, DOI 10.1109/ICGEC.2010.184.
[6] S. G. Mallat, "A Theory for Multiresolution Signal Decomposition: The Wavelet Representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 11, No. 7, July 1989.
[7] N. Indhumadhi, G. Padmavathi, "Enhanced Image Fusion Algorithm Using Laplacian Pyramid and Spatial Frequency Based Wavelet Algorithm," International Journal of Soft Computing and Engineering (IJSCE), ISSN 2231-2307, Vol. 1, Issue 5, November 2011.
[8] I. De, B. Chanda, "A simple and efficient algorithm for multifocus image fusion using morphological wavelets," Signal Processing, pp. 924-936, 2006.
[9] W. Wang, P. Shui, G. Song, "Multi-focus Image Fusion in Wavelet Domain," Proceedings of the Second International Conference on Machine Learning and Cybernetics, Xi'an, 2-5 November, 0-7803-7865-2, IEEE, 2003.
[10] M. B. A. Haghighat, A. Aghagolzadeh, H. Seyedarabi, "Real-Time Fusion of Multi-Focus Images for Visual Sensor Networks," 978-1-4244-9708-9, IEEE, 2010.
[11] O. Rockinger, "Image Sequence Fusion Using a Shift Invariant Wavelet Transform," Proceedings of the 1997 International Conference on Image Processing (ICIP '97), Vol. 3, p. 288.
[12] L. Zhang, H. Lu, Y. Li, S. Serikawa, "Maximum Local Energy based Multifocus Image Fusion in Mirror Extended Curvelet Transform Domain," 13th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing, 978-0-7695-4761-9, IEEE, 2012, DOI 10.1109/SNPD.2012.16.
[13] L. Xu, J. Du, J. M. Lee, Q. Hu, Z. Zhang, M. Fang, Q. Wang, "Multifocus Image Fusion Using Local Perceived Sharpness," 978-1-4673-5534-6, IEEE, 2013.
[14] Y. Sa, "Application of Multi-wavelet Transform in Multi-focus Image Fusion," First International Workshop on Education Technology and Computer Science, 978-0-7695-3557-9, IEEE, 2009, DOI 10.1109/ETCS.2009.657.
[15] P. J. Burt, E. H. Adelson, "The Laplacian Pyramid as a Compact Image Code," IEEE Transactions on Communications, Vol. COM-31, No. 4, April 1983.
[16] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, "Image Quality Assessment: From Error Visibility to Structural Similarity," IEEE Transactions on Image Processing, Vol. 13, No. 4, pp. 600-612, April 2004.
[17] V. Petrovic, C. Xydeas, "Objective Image Fusion Performance Characterisation," Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV '05), 1550-5499, IEEE, 2005.
[18] S. Mahajan, A. Singh, "A Comparative Analysis of Different Image Fusion Techniques," IPASJ International Journal of Computer Science (IIJCS), Vol. 2, Issue 1, pp. 8-15, ISSN 2321-5992, January 2014.
[19] Y. Yang, S. Huang, J. Gao, Z. Qian, "Multi-focus Image Fusion Using an Effective Discrete Wavelet Transform Based Algorithm," Measurement Science Review, Vol. 14, No. 2, 2014.

