
ISSN (Print): 0974-6846; ISSN (Online): 0974-5645
Indian Journal of Science and Technology, Vol 9(31), DOI: 10.17485/ijst/2016/v9i31/92553, August 2016

Analysis of Image Fusion Techniques based on Quality Assessment Metrics

K. Kalaivani1,2* and Y. Asnath Victy Phamila2
1 Department of Computer Science and Engineering, VELS University, Chennai – 600117, Tamil Nadu, India; [email protected]
2 School of Computing Science and Engineering, VIT University, Chennai – 632014, Tamil Nadu, India; [email protected]

Abstract

Objective: The objective of image fusion is to combine the relevant and essential information from several images into a single image that is more informative than any of the source images, so that the resultant fused image is better suited to human visual perception and to image processing tasks like segmentation, feature extraction and object recognition. Methods: This paper presents the basic concepts, the various types and levels of fusion, and a literature review of non-transform and transform based image fusion techniques from the perspective of their applications, advantages and limitations. Findings: The performance of existing image fusion methods, along with the various assessment metrics that determine the quality of fused images, is evaluated and theoretically analyzed. It is found that computational complexity is considerably reduced in Discrete Cosine Transformation based methods. Applications: Image fusion has been effectively applied in many fields such as remote sensing, military affairs, machine vision, medical imaging, and so on.

Keywords: Frequency Domain, Image Fusion, Multi-Focus, Quality Assessment Metrics, Spatial Domain

1. Introduction

Image fusion is the integration of complementary information present in multiple registered images into a single image with higher reliability of interpretation and quality of data. Image fusion combines multiple source images with the help of improved image processing techniques. A single fusion methodology cannot be utilized for all applications. Based on the input data and the purpose, image fusion methods are classified as i) Multiview fusion, ii) Multitemporal fusion, iii) Multifocus fusion and iv) Multimodal fusion. Multiview fusion combines the images taken by a sensor from different viewpoints at the same time; it provides an image with higher resolution and also recovers the 3D representation of a scene. Multimodal fusion refers to the combination of images from different sensors and is often referred to as multisensor fusion, which is widely used in applications like medical diagnosis, security, surveillance, etc. Multitemporal fusion integrates several images taken at various intervals to detect changes among them or to produce accurate images of objects.

It is impossible for an optical lens to capture all the objects at various focal lengths. Multifocus image fusion integrates the images of various focal lengths from the imaging equipment into a single image of better quality. This fusion methodology is widely used in Visual Sensor Networks (VSN), which refers to a spatially distributed system with a vast number of sensors installed at various locations for monitoring. The sensors are cameras, which record video sequences or still images. In a VSN, images are compressed before they are transmitted to other nodes; a large amount of data is acquired at each monitoring point, which drastically occupies the memory, so extensive research and study were required on image fusion. The objective of an image fusion technique is to effectively minimize the volume and maximize the quality and relevant information of a scene in terms of its application. The literature shows that the image fusion rule is applied to digital images for various tasks, as shown in Table 1.

*Author for correspondence

Table 1. Studies on various tasks of image fusion methodology

Ref      Task
[1]      Image Sharpening
[2,3]    Context Enhancement
[4,5]    Improved classification
[6–9]    Stereo-viewing capabilities
[10–12]  Change detection
[13,14]  Object recognition and Retrieval
[15]     3D Scene reconstruction
[16]     Emotion Recognition
The importance of fusion is increasing because of different image acquisition techniques17, which enrich the resultant image features, enabling improved detection and localization of the target18. The paper mainly addresses multifocus image fusion and is organized as follows: in Section 2, the processing levels of fusion are described. Section 3 addresses the various non-transform and transform based fusion methodologies adopted by researchers. Section 4 describes the different quality measures that are used to verify the performance of a fusion algorithm. Section 5 focuses on the issues in various fusion techniques and Section 6 provides the conclusion.

2. Levels of Fusion

Image fusion is carried out at three processing levels based on the stage in which the fusion needs to be performed. The different levels of fusion are pixel, feature and decision level19. The concept of the different processing levels of fusion is illustrated in Figure 1.

Figure 1. Levels of Fusion.

Pixel level fusion takes place at the lowest level with the integration of measured physical parameters. In feature level fusion, the features, i.e. characteristics of the individual images, are extracted and then a fusion rule is employed. Feature fusion uses either a statistical approach or an Artificial Neural Network for combining the extractions of the individual source images. In the decision approach, the individual images are processed for feature extraction and classification. Local decisions are then made before the fusion rule is employed to form a resultant image.

3. Image Fusion Techniques

The generic image fusion process involves four stages, which include spatial and temporal alignment, decision labeling and radiometric calibration20. The images to be fused are spatially aligned into a similar geometric base (image registration), which is a prerequisite for fusion, without which the spatial information among the different input images cannot be associated. In some cases the images are then resampled and the gray levels are interpolated21. Temporal alignment is required when the input images change over time. Feature maps are then generated with the identified characteristics of all input images. A decision map is constructed once the pixels or feature maps are labeled based on the criteria. Semantic equivalence is achieved by linking different inputs to a common phenomenon22.

The fusion methods are generally classified into spatial and frequency domain methods23,24. The spatial domain methods work directly on the pixel gray level and color space of the input images, and hence are referred to as single scale fusion methods or non-transform based fusion techniques. The frequency domain methods decompose the source images into a sequence of images by mathematical transformation and employ combination rules to obtain the fused coefficients. The inverse transformation is then applied to get the resultant fused image; hence this kind of fusion is referred to as multi-scale fusion or transform based fusion.

3.1 Non Transform based Fusion

Non-transform based fusion techniques fuse the images by directly computing on the pixel intensity values. The simplest pixel level fusion can be done based on the average, maximum or minimum of the pixel intensities of the source images.


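The simple pixel-level rules just described can be sketched directly with NumPy (illustrative only; the two inputs are assumed to be registered grayscale arrays of equal shape):

```python
import numpy as np

def fuse_average(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Pixel-wise mean of two registered source images."""
    return (img_a.astype(np.float64) + img_b.astype(np.float64)) / 2.0

def fuse_maximum(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Pixel-wise maximum: keeps the higher-intensity pixel at each location."""
    return np.maximum(img_a, img_b)

# Toy 2x2 "images": averaging blends intensities, maximum keeps contrast.
a = np.array([[10, 200], [30, 40]], dtype=np.uint8)
b = np.array([[50, 100], [90, 20]], dtype=np.uint8)
print(fuse_average(a, b))   # [[ 30. 150.] [ 60.  30.]]
print(fuse_maximum(a, b))   # [[ 50 200] [ 90  40]]
```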
The simplest spatial based averaging method results in undesirable side effects like reduced contrast25, and features are superimposed like a photographic double exposure effect26. The pixel averaging approach is good at eliminating Gaussian noise, at the cost of compromising the contrast information. The maximum pixel intensity approach produces an image with full contrast but carries over sensor noise27.

Some of the spatial based methods like the Brovey Transform, Intensity Hue Saturation and Principal Component Analysis28 suffer from spectral distortion, whereas methods such as High Pass Modulation and High Pass Filtering produce less spectral distortion. The performance of the non-transform based fusion techniques proposed by various researchers is better when compared to some of the transform based fusion techniques, and the list is shown in Table 2. Shutao Li et al.29 suggested a method which fuses images of diverse focuses by decomposing them into several blocks and then integrating them by the use of spatial frequency.

Table 2. Studies on Non-Transform based fusion techniques

Ref   Proposed Technique                          Techniques Compared
[29]  Spatial Frequency (SF) + Threshold          Wavelet: Db4, Db10, Sym 8, Bior 3.5
[30]  Avg + Segmentation by Normalized            Discrete Wavelet Transform
      cuts + SF
[31]  Spatial Frequency + Genetic Algorithm       Haar Wavelet, Morphological Wavelet
[32]  Sparse representation + Choose Max          Spatial Gradient, Wavelet Transform, Curvelet Transform,
                                                  Non Sub Sampled Contourlet Transform
[33]  Modified Pulse Coupled Neural Network       Conventional Pulse Coupled Neural Network

3.2 Transform based Fusion

Transform based fusion techniques apply a mathematical transformation on the images before a fusion rule is employed. There are various transform based techniques such as Discrete Cosine Transformation (DCT), Discrete Wavelet Transformation (DWT), Shift Invariant Discrete Wavelet Transform (SIDWT), Contourlet Transform (CT), Non-Subsampled Contourlet Transform (NSCT), Standard Deviation Weighted Average (SDWV), Simple Weighted Average (SWV), Entropy Metrics Weighted Average (EMWV) and so on. The Discrete Cosine Transformation is widely used in various image compression applications34 such as still image JPEG, Motion JPEG, H.263 video and MPEG35. For JPEG standard images in a VSN, the application of Spatial Frequency (SF), Averaging (Avg), Variance, Consistency Verification (CV) or any combination of these in the DCT domain is outstanding in terms of visual perception and the qualitative parameters compared to the conventional DCT, DWT and NSCT. Researchers have proposed different fusion techniques whose performance is comparatively better than some of the existing techniques; these are listed in Table 3.

Guo et al.43 proposed a method integrating the quaternion with the traditional curvelet transformation to address the blurring of an image. Wang et al.44 proposed the Multiresolution Analysis based Intensity Modulation method for high resolution fused images.

Table 3. Studies on Transform based fusion techniques

Ref   Proposed Technique                  Techniques Compared
[36]  DCT + Contrast                      DCT + Average, WT
[37]  DCT + Variance + CV                 DCT + Avg, DCT + Variance, DCT + Contrast, DWT, SIDWT
[38]  DCT + AC_Max + CV                   DCT + Avg, DCT + Variance, DCT + Variance + CV, DWT,
                                          SIDWT (Haar)
[39]  DCT + SF                            DCT + Avg, DCT + Contrast, DCT + Variance,
                                          DCT + Variance + CV, DWT
[40]  DWT + Adaptive Local Energy         SWV, SDWV, EMWV
      Metrics + Fast Maximum Selection,
      Continuous Linearized Augmented
      Lagrangian Method
[41]  Segmentation + DWT                  WT
[42]  NSCT                                DWT
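The DCT-based entries in Table 3 all make the fusion decision per 8×8 block, directly on the DCT coefficients. A minimal sketch of a variance-style rule (for an orthonormal DCT the block variance is proportional to its AC energy), assuming same-size grayscale inputs whose dimensions are multiples of 8; the function names are illustrative, not the cited papers' APIs:

```python
import numpy as np

N = 8  # JPEG-style block size

def dct_matrix(n: int = N) -> np.ndarray:
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix()

def fuse_dct_variance(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """For each 8x8 block, keep the source block with larger AC energy (~variance)."""
    out = np.empty(img_a.shape, dtype=np.float64)
    for i in range(0, img_a.shape[0], N):
        for j in range(0, img_a.shape[1], N):
            da = C @ img_a[i:i+N, j:j+N].astype(np.float64) @ C.T  # forward 2D DCT
            db = C @ img_b[i:i+N, j:j+N].astype(np.float64) @ C.T
            ea = (da ** 2).sum() - da[0, 0] ** 2   # AC energy of block A
            eb = (db ** 2).sum() - db[0, 0] ** 2   # AC energy of block B
            d = da if ea >= eb else db             # variance-style selection rule
            out[i:i+N, j:j+N] = C.T @ d @ C        # inverse DCT of the chosen block
    return out
```

The appeal in a VSN setting is that JPEG coding already supplies the 8×8 DCT coefficients, so the selection rule adds almost no transform cost on top of the codec.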


4. Fusion Metrics

The performance of the fused image can be assessed by the objective evaluation of metrics based on reference and non-reference images45. RMSE (Root Mean Squared Error), SSIM (Structural Similarity Index Measure), PSNR (Peak Signal to Noise Ratio), Petrovic, SF (Spatial Frequency), MG (Mean Gradient), LMI (Localized Mutual Information), FMI (Feature Mutual Information), Correlation Coefficient (CORR) and Piella are some of the metrics used by researchers to evaluate the quality of the fused image for source images taken from the image datasets46–49.

4.1 Root Mean Squared Error50,51 is used to find the dissimilarity between the reference image R and the fused image F. Low RMSE values indicate that the test image is close to the reference image. For M x N images,

    RMSE = sqrt( (1/MN) * Σ_i Σ_j [ R(i,j) − F(i,j) ]^2 )

4.2 Peak Signal to Noise Ratio measures the quality, and its value will be high if the fused image is more identical to the reference image.

    PSNR = 10 log10( r^2 / MSE )

where MSE refers to the Mean Squared Error and r is the peak value of the reference image. The metrics MSE and PSNR are used to measure the perceived errors of the fused image.

4.3 The Petrovic (QAB/F)52,53 metric is a pixel wise measure of the information preserved in the resultant image (F) from the source images (A and B).

    QAB/F = Σ_i Σ_j [ QAF(i,j) WA(i,j) + QBF(i,j) WB(i,j) ] / Σ_i Σ_j [ WA(i,j) + WB(i,j) ]

where QAF and QBF are calculated from the edge values and WA and WB are the weight factors. The value lies between 0 and 1, where 0 implies complete loss of information and 1 refers to ideal fusion. The performance of various techniques based on the Petrovic values for the test image "Pepsi" is shown in Table 4.

4.4 The Structural Similarity Index Measure (SSIM)55 measures the structural resemblance between two images; this reference metric considers image degradation as a modification in structural information.

    SSIM(A,B) = [ (2 μA μB + c1)(2 σAB + c2) ] / [ (μA^2 + μB^2 + c1)(σA^2 + σB^2 + c2) ]

where μA, μB refer to the means, σAB refers to the cross-covariance and c1, c2 are constants.

4.5 Spatial Frequency (SF) finds the clarity of the resultant fused image from the edge information computed using the row and column frequency. Higher spatial frequency indicates higher clarity of the image.

    SF = sqrt( RF^2 + CF^2 )

where RF and CF are the row and column frequencies, i.e. the root mean square of the horizontal and vertical gray level differences.
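As a concrete illustration, the reference-based metrics above are a few lines each in NumPy (a sketch; `ref` and `fused` are assumed to be same-size float arrays and `r` the reference peak value):

```python
import numpy as np

def rmse(ref: np.ndarray, fused: np.ndarray) -> float:
    """Root mean squared error between reference and fused image."""
    return float(np.sqrt(np.mean((ref - fused) ** 2)))

def psnr(ref: np.ndarray, fused: np.ndarray, r: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((ref - fused) ** 2)
    return float('inf') if mse == 0 else float(10.0 * np.log10(r * r / mse))

def spatial_frequency(img: np.ndarray) -> float:
    """Clarity measure from row/column gray-level differences; higher = sharper."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))
```

(For brevity the differences are averaged over the M×(N−1) valid positions rather than MN; the ranking of fused images is unaffected.)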
Table 4. Performance report of the various techniques on the test image "Pepsi"

Ref   Technique Used                       Petrovic (QAB/F)
[30]  Avg + Segmentation + SF              0.7593
[32]  Sparse representation + Choose Max   0.7660
[37]  DCT + Variance                       0.7700
[54]  RPCA                                 0.7600
[38]  DCT + AC_Max + CV                    0.7800


4.6 The Piella (Qw)56 metric finds the quantity of information transferred from the input images to the fused image.

    Qw(A,B,F) = Σ_w c(w) [ λ(w) Q0(A,F|w) + (1 − λ(w)) Q0(B,F|w) ]

where Q0 refers to the Wang-Bovik image quality index57 and λ(w) represents the local weight expressing the relative importance of source image A compared to B.

4.7 The Mean Gradient (MG)58 estimates the edge details of the resultant image. Higher values denote greater preservation of edge details in the fused image. The LMI59 and FMI60 metrics calculate the amount of mutual information between the resultant fused image and the source images. These values are computed by normalizing the joint and marginal histograms of the source and resultant images. The performance of various techniques based on Mutual Information (MI) is shown in Table 5. CORR is used to find the degree of correlation between the standard reference image and the fused image.

W. Huang et al.62 suggested a focus measure based on the Sum-Modified-Laplacian (SML) method, which differentiates focused from defocused image blocks. To assess the quality of multi-exposure multi-focus images, Rania Hassen et al. proposed the FQI (Fusion Quality Index) based on three key factors: i) Preserving Contrast, ii) Preserving Structure and iii) Sharpness63.
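The mutual-information family of metrics (MI, LMI59, FMI60) reduces to entropies computed from joint and marginal histograms, as described above. A minimal sketch for plain MI between two equal-size images (the bin count is an illustrative choice, not a value from the cited papers):

```python
import numpy as np

def mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 64) -> float:
    """MI in bits, from the normalized joint histogram of two images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()        # joint distribution
    p_a = p_ab.sum(axis=1)            # marginal of image A
    p_b = p_ab.sum(axis=0)            # marginal of image B
    nz = p_ab > 0                     # avoid log(0) on empty cells
    outer = np.outer(p_a, p_b)
    return float(np.sum(p_ab[nz] * np.log2(p_ab[nz] / outer[nz])))
```

For fusion assessment the score is typically reported over both sources, i.e. MI(A, F) + MI(B, F), which is how values such as those in Table 5 are obtained.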

Table 5. Mutual Information metric values of various techniques on the test image "Clock"

Ref   Technique Used                  MI
[61]  NSCT + Focused Area Detection   8.65
[54]  RPCA                            8.57
[38]  DCT + Max AC + CV               9.04

Table 6. Advantages and Limitations of different fusion methodologies

Method / Ref                               Advantages                                  Limitations
Spatial based method                       Simple                                      Reduced contrast
Block based method[65]                     Improves the convergence between pixels     Block effect due to difficulty in finding the
                                                                                       sub-block size
Evolution algorithm[66] and quad tree      Determines the sub-block size               Inaccurate in finding the sub-block size
structure
Bilateral gradient based method[67] &      Improves the accuracy in finding the        Unable to completely eliminate the "block
Artificial Neural Network[68]              size of sub-blocks                          effect" for sub-blocks which have both clear
                                                                                       and blurred areas
Wavelet Packet[69] & Frame Transform[70]   Overcomes the problem of single scale       Ineffective representation of plane as well as
                                           transform methods                           line singularities of images; inaccurate
                                                                                       representation of image edge directions
Contourlet Transform[71]                   Overcomes the limitations of the wavelet    Lack of shift invariance and presence of
                                           transform and provides an asymptotically    pseudo-Gibbs phenomena
                                           optimal representation of contours
Non Subsampled Contourlet Transform[72]    Retains shift invariance and effectually    Time consuming
                                           suppresses pseudo-Gibbs phenomena
Transformation with Sparse                 Excellent performance on both clean and     Time consuming and complicated
representation[32]                         noisy images


5. Discussion

The selection of a fusion technique and the level of fusion is application dependent. Feature and decision level fusion schemes are employed for applications like emotion recognition, pattern classification64, gaming environments, etc. In general, many of the spatial based methods are time consuming and inappropriate for real time applications. The block-based method improves the convergence across pixels in the resultant image, but it degrades the image quality due to the presence of the block effect. If the source images are not registered well, the Spatial Gradient method, which is based on single pixels, leads to artifacts in the resultant fused image. The various fusion methodologies adopted by researchers are summarized in Table 6.

The popular multi-scale transform techniques such as DWT, SIDWT and NSCT are time consuming and complex; hence they cannot be used in an environment like a resource constrained VSN. The usage of various methods in the DCT domain considerably reduces the computational complexity and makes implementation easy, especially for multi-focused images. The limitations of the multi-scale transform based methods can be addressed and minimized by the integration of spatial and transform based methods. A single image fusion metric cannot validate the performance of a fusion algorithm. Various metrics were studied: quality measures such as SSIM, PSNR, CORR and MSE are used for assessing the fusion when reference images are available, whereas metrics such as Petrovic, SF, MG, MI and FMI are used for non-reference images73.

6. Conclusion

This paper has presented an overview of various image fusion techniques in non-transform and transform based fusion methods like Pixel Averaging, Select Minima or Maxima, Brovey, Principal Component Analysis, DCT, DWT, SIDWT, NSCT and various integrations, with the objective of combining several source images into a single image of better quality and information, which cannot be achieved otherwise. The analysis and usage of different fusion schemes are elaborated. The various performance metrics, which are used to measure the quality of the fused image, were reviewed and analyzed.

7. References

1. Vivone G, Alparone L, Chanussot J, Mura MD, Garzelli A, Member S et al. A Critical Comparison Among Pansharpening Algorithms. IEEE Transactions on Geoscience and Remote Sensing. 2015; 53(5):2565–86.
2. Leckie DG. Synergism of synthetic aperture radar and visible/infrared data for forest type discrimination. Photogrammetric Engineering and Remote Sensing. 1990; 56(9):1237–46.
3. Raskar R, Ilie A, Yu J. Image fusion for context enhancement and video surrealism. In: ACM SIGGRAPH 2005 Courses. ACM. 2005; 4 pp.
4. Sheoran A, Haack B. Classification of California agriculture using quad polarization radar data and Landsat Thematic Mapper data. GIScience and Remote Sensing. 2013; 50(1):50–63.
5. Zhu Z, Woodcock CE, Rogan J, Kellndorfer J. Assessment of spectral, polarimetric, temporal, and spatial dimensions for urban and peri-urban land cover classification using Landsat and SAR data. Remote Sensing of Environment. 2012; 117:72–82.
6. Bloom AL, Fielding EJ, Fu X-Y. A demonstration of stereo-photogrammetry with combined SIR-B and Landsat TM images. International Journal of Remote Sensing. 1988; 9(5):1023–38.
7. Toutin T. SPOT and Landsat stereo fusion for data extraction over mountainous areas. Photogrammetric Engineering and Remote Sensing. 1998; 64(2):109–13.
8. Gudmundsson SA, Aanaes H, Larsen R. Fusion of stereo vision and time-of-flight imaging for improved 3D estimation. International Journal of Intelligent Systems Technologies and Applications. 2008; 5(3-4):425–33.
9. Franke U, Rabe C, Badino H, Gehrig S. 6d-vision: Fusion of stereo and motion for robust environment perception. Springer. 2005; 216–23.
10. Thamarai M, Mohanbabu K. An Improved Image Fusion and Segmentation using FLICM with GA for Medical Diagonosis. Indian Journal of Science and Technology. 2016; 9(12).
11. Nichol J, Wong MS. Satellite remote sensing for detailed landslide inventories using change detection and image fusion. International Journal of Remote Sensing. 2005; 26(9):1913–26.
12. Gong M, Zhou Z, Ma J. Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering. IEEE Transactions on Image Processing. 2012; 21(4):2141–51.
13. Bu S, Cheng S, Liu Z, Han J. Multimodal Feature Fusion for 3D Shape Recognition and Retrieval. IEEE MultiMedia. 2014; 21(4):38–46.
14. Annabattula J, Koteswara Rao S, Sampath Dakshina Murthy A, Srikanth KS, Das RP. Multi-sensor submarine surveillance system using MGBEKF. Indian Journal of Science and Technology. 2015; 8(35):1–5.
15. Li Z, Wang K, Meng D, Xu C. Multi-view stereo via depth map fusion: A coordinate decent optimization method. Neurocomputing. 2016; 178:46–61.

16. Sun B, Li L, Wu X, Zuo T, Chen Y, Zhou G et al. Combining feature-level and decision-level fusion in a hierarchical classifier for emotion recognition in the wild. Journal on Multimodal User Interfaces. 2015; 1–13.
17. Jiang Y, Wang M. Image fusion with morphological component analysis. Information Fusion. 2014; 18:107–18.
18. Toet A, Franken EM. Perceptual evaluation of different image fusion schemes. Displays, Elsevier. 2003; 24(1):25–37.
19. Pohl C, Van Genderen JL. Multisensor image fusion in remote sensing: Concepts, methods and applications. International Journal of Remote Sensing. 1998; 19(5):823–54.
20. Mitchell HB. Image fusion: Theories, techniques and applications. Springer, 2010.
21. Ludusan C, Lavialle O. Multifocus image fusion and denoising: a variational approach. Pattern Recognition Letters. 2012; 33(10):1388–96.
22. Hong G. Image Fusion, Image Registration, and Radiometric Normalization for High Resolution Image Processing. 2007.
23. Mitianoudis N, Stathaki T. Optimal contrast correction for ICA-based fusion of multimodal images. IEEE Sensors Journal. 2008; 8(12):2016–26.
24. Li H, Manjunath BS, Mitra SK. Multisensor image fusion using the wavelet transform. Graphical Models and Image Processing. 1995; 57(3):235–45.
25. Zhang Z, Blum RS. A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application. Proceedings of the IEEE. 1999; 87(8):1315–26.
26. Burt PJ, Kolczynski RJ. Enhanced image capture through fusion. Proceedings of IEEE Fourth International Conference on Computer Vision. 1993. p. 173–82.
27. Sharma RK, Leen TK, Pavel M. Probabilistic Image Sensor Fusion. 1999.
28. Amro I, Mateos J, Vega M, Molina R, Katsaggelos AK. A survey of classical methods and new trends in pansharpening of multispectral images. EURASIP Journal on Advances in Signal Processing. 2011; 2011(1):79.
29. Li S, Kwok JT, Wang Y. Combination of images with diverse focuses using the spatial frequency. Information Fusion, Elsevier. 2001; 2(3):169–76.
30. Li S, Yang B. Multifocus image fusion using region segmentation and spatial frequency. Image and Vision Computing, Elsevier. 2008; 26(7):971–9.
31. Kong J, Zheng K, Zhang J, Feng X. Multi-focus image fusion using spatial frequency and genetic algorithm. International Journal of Computer Science and Network Security. 2008; 8(2):220.
32. Yang B, Li S. Multifocus Image Fusion and Restoration with Sparse Representation. IEEE Transactions on Instrumentation and Measurement. 2010; 59(4):884–92.
33. Singhai DAJ. Multifocus image fusion using modified pulse coupled neural network for improved image quality. IET Image Processing. 2010; 4(March):443–51.
34. Djamel S, Mouldi B. Image compression via embedded coder in the transform domain. Asian Journal of Information Technology. 2006; 5(6):633–9.
35. Wallace GK. The JPEG Still Picture Compression Standard. IEEE Transactions on Consumer Electronics. 1992; 38(1).
36. Tang J. A contrast based image fusion technique in the DCT domain. Digital Signal Processing, Elsevier. 2004; 14(3):218–26.
37. Haghighat MBA, Aghagolzadeh A, Seyedarabi H. Multi-focus image fusion for visual sensor networks in DCT domain. Computers and Electrical Engineering. 2011; 37(5):789–97.
38. Phamila YAV, Amutha R. Discrete Cosine Transform based fusion of multi-focus images for visual sensor networks. Signal Processing, Elsevier. 2014; 95:161–70.
39. Liu C, Longxu J, Hongjiang T, Guoning L. Multi-focus image fusion based on spatial frequency in discrete cosine transform domain. IEEE Signal Processing Letters. 2015; 22(2):220–4.
40. Yang Z-Z, Yang Z. Novel multifocus image fusion and reconstruction framework based on compressed sensing. IET Image Processing. 2013; 7(9):837–47.
41. Jia-zheng Y, Qing L, Bo-xuan S. Multifocus Image Fusion Based on Region Selection. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2013; 11(11):400–4.
42. Zhang Q, Guo BL. Multifocus image fusion using the nonsubsampled contourlet transform. Signal Processing, Elsevier. 2009; 89(7):1334–46.
43. Guo L, Dai M, Zhu M. Multifocus color image fusion based on quaternion curvelet transform. Optics Express. 2012; 20(17):18846.
44. Wang Z, Ziou D, Armenakis C, Li D, Li Q. A Comparative Analysis of Image Fusion Methods. IEEE Transactions on Geoscience and Remote Sensing. 2005; 43(6):1391–402.
45. Phamila YAV, Amutha R. Low complexity multifocus image fusion in discrete cosine transform domain. Optica Applicata. 2013; 43(4).
46. Dataset of Standard Gray scale test images, Computer Vision Group, University of Granada. 2003. Available from: http://decsai.ugr.es/cvg/CG/base.htm
47. Image Repository, Fractal Coding and Analysis Group, University of Waterloo. 2009. Available from: http://links.uwaterloo.ca/Repository.html
48. Test Images, University of Southern California. 1981. Available from: http://sipi.usc.edu/database
49. Naidu VPS. Multi focus image fusion using the measure of focus. Journal of Optics, Springer. 2012; 41(2):117–25.
50. Rockinger O. Image Sequence Fusion Using a Shift-Invariant Wavelet Transform. Proceedings of the International Conference on Image Processing. 1997; 3. p. 288–91.
51. Wang Z, Bovik AC. A universal image quality index. IEEE Signal Processing Letters. 2002; 9(3):81–4.


52. Drajic D, Cvejic N. Adaptive fusion of multimodal surveillance image sequences in visual sensor networks. IEEE Transactions on Consumer Electronics. 2007; 53(4):1456–62.
53. Xydeas CS. Objective Image Fusion Performance Measure. Electronics Letters. 2000; 36(4):308–9.
54. Wan T, Zhu C, Qin Z. Multifocus image fusion based on robust principal component analysis. Pattern Recognition Letters, Elsevier. 2013; 34(9):1001–8.
55. Wang Z, Bovik AC, Simoncelli EP. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing. 2004; 13(4):600–12.
56. Piella G, Heijmans H. A new quality metric for image fusion. Proceedings of the IEEE International Conference on Image Processing. 2003.
57. Piella G. A general framework for multiresolution image fusion: from pixels to regions. Information Fusion. 2003; 4(4):259–80.
58. Bai X, Zhou F, Xue B. Edge preserved image fusion based on multiscale toggle contrast operator. Image and Vision Computing. 2011; 29(12):829–39.
59. Hossny M, Nahavandi S, Creighton D, Bhatti A. Image fusion performance metric based on mutual information and entropy driven quadtree decomposition. Electronics Letters. 2010; 46(18):1266–8.
60. Haghighat MBA, Aghagolzadeh A, Seyedarabi H. A non-reference image fusion metric based on mutual information of image features. Computers and Electrical Engineering. 2011; 37(5):744–56.
61. Yang Y, Tong S, Huang S, Lin P. Multifocus Image Fusion Based on NSCT and Focused Area Detection. IEEE Sensors Journal. 2015; 15(5):2824–38.
62. Huang W, Jing Z. Evaluation of focus measures in multi-focus image fusion. Pattern Recognition Letters. 2007; 28(4):493–500.
63. Hassen R, Wang Z, Salama MMA. Objective Quality Assessment for Multiexposure Multifocus Image Fusion. IEEE Transactions on Image Processing. 2015; 24(9):2712–24.
64. Mangai UG, Samanta S, Das S, Chowdhury PR. A survey of decision fusion and feature fusion strategies for pattern classification. IETE Technical Review. 2010; 27(4):293–307.
65. De I, Chanda B. Multi-focus image fusion using a morphology-based focus measure in a quad-tree structure. Information Fusion. 2013; 14(2):136–46.
66. Aslantas V, Kurban R. Fusion of multi-focus images using differential evolution algorithm. Expert Systems with Applications. 2010; 37(12):8861–70.
67. Tian J, Chen L, Ma L, Yu W. Multi-focus image fusion using a bilateral gradient-based sharpness criterion. Optics Communications. 2011; 284(1):80–7.
68. Li S, Kwok J, Wang Y. Multifocus image fusion using artificial neural networks. Pattern Recognition Letters, Elsevier. 2002; 23(8):985–97.
69. Kannan K, Perumal SA, Arulmozhi K. Area level fusion of multi-focused images using multi-stationary wavelet packet transform. International Journal of Computer Applications. 2010; 2(1):88–95.
70. Li S, Kwok JT, Wang Y. Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images. Information Fusion. 2002; 3(1):17–23.
71. Do MN, Vetterli M. The contourlet transform: an efficient directional multiresolution image representation. IEEE Transactions on Image Processing. 2005; 14(12):2091–106.
72. Da Cunha AL, Zhou J, Do MN. The nonsubsampled contourlet transform: theory, design, and applications. IEEE Transactions on Image Processing. 2006; 15(10):3089–101.
73. Qu G, Zhang D, Yan P. Information measure for performance of image fusion. Electronics Letters. 2002; 38(7):1.
