Survey of Multispectral Image Fusion Techniques in Remote Sensing Applications
1. Introduction
1.1 Definition of image fusion
With the development of multiple types of biosensors, chemical sensors, and remote
sensors on board satellites, more and more data have become available for scientific
research. As the volume of data grows, so does the need to combine data gathered from
different sources to extract the most useful information. Different terms, such as data
interpretation, combined analysis, and data integration, have been used. Since the early
1990s, the term "data fusion" has been adopted and widely used. The definition of data
fusion/image fusion varies. For example:
- Data fusion is a process dealing with data and information from multiple sources to
achieve refined/improved information for decision making (Hall 1992) [1].
- Image fusion is the combination of two or more different images to form a new image by using a
certain algorithm (Genderen and Pohl 1994) [2].
- Image fusion is the process of combining information from two or more images of a scene into a
single composite image that is more informative and is more suitable for visual perception or
computer processing (Guest editorial of Information Fusion, 2007) [3].
- Image fusion is a process of combining images, obtained by sensors of different wavelengths
simultaneously viewing the same scene, to form a composite image. The composite image is
formed to improve image content and to make it easier for the user to detect, recognize, and
identify targets and increase situational awareness (HCL Technologies, 2010;
http://www.hcltech.com/aerospace-and-defense/enhanced-vision-system/).
Generally speaking, in data fusion the information of a specific scene acquired by two or
more sensors at the same time or at separate times is combined to generate an interpretation of
the scene not obtainable from a single sensor [4]. Image fusion is a component of data fusion
in which the data type is restricted to the image format (Figure 1). Image fusion is an effective
way to make optimum use of large volumes of images from multiple sources. It seeks to
combine information from multiple sources to achieve inferences that are not feasible from a
single sensor or source; the aim is to integrate different data in order to obtain more
information than can be derived from each of the single sensor data alone ('1+1=3') [4].
Fig. 1. Satellite image fusion as a subset of image fusion, which is itself a component of data fusion
1. Signal-level fusion. In signal-based fusion, signals from different sensors are combined
to create a new signal with a better signal-to-noise ratio than the original signals.
2. Pixel-level fusion. Pixel-based fusion is performed on a pixel-by-pixel basis. It generates
a fused image in which the information associated with each pixel is determined from a set
of pixels in the source images, to improve the performance of image processing tasks such
as segmentation (see the sketch after this list).
3. Feature-level fusion. Fusion at the feature level requires the extraction of objects
recognized in the various data sources, i.e., of salient features such as pixel intensities,
edges, or textures, depending on their environment. These similar features from the input
images are then fused.
4. Decision-level fusion consists of merging information at a higher level of abstraction; it
combines the results from multiple algorithms to yield a final fused decision. Input
images are processed individually for information extraction, and the obtained information
is then combined by applying decision rules to reinforce the common interpretation.
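The pixel level can be illustrated with a minimal sketch: the following Python fragment fuses two co-registered images by weighted averaging. It is a generic example written for this survey, not the algorithm of any particular reference, and the weight w is an assumed free parameter.

import numpy as np

def pixel_level_fuse(img_a: np.ndarray, img_b: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Fuse two co-registered images pixel by pixel with a weighted average.

    img_a, img_b : 2-D arrays of identical shape (already registered).
    w            : weight given to img_a; (1 - w) goes to img_b.
    """
    if img_a.shape != img_b.shape:
        raise ValueError("input images must be co-registered and equally sized")
    return w * img_a.astype(np.float64) + (1.0 - w) * img_b.astype(np.float64)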
In the IHS technique, the intensity component is first computed from the MS bands:

I = (R + G + B)/3 (1)

The intensity component I is then replaced by the high-resolution PAN band, and the reverse
transform is applied to the PAN together with the hue (H) and saturation (S) bands, resulting
in an IHS fused image.
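A compact sketch of this procedure, assuming the linear IHS model with I = (R + G + B)/3 of equation (1) and a simple mean/standard-deviation match of the PAN band (an assumption of this sketch, not a prescription of the text), is:

import numpy as np

def ihs_fuse(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Linear IHS pan-sharpening: replace the intensity of the MS image
    with the PAN band and invert the transform.

    ms  : (rows, cols, 3) array of R, G, B bands, upsampled to PAN size.
    pan : (rows, cols) panchromatic band.
    """
    ms = ms.astype(np.float64)
    intensity = ms.mean(axis=2)  # I = (R + G + B) / 3, eq. (1)
    # Roughly match PAN to the intensity so radiometry is preserved.
    pan_m = (pan - pan.mean()) * (intensity.std() / pan.std()) + intensity.mean()
    # In the linear model, substituting I' for I in the inverse transform
    # amounts to adding (I' - I) to each band.
    return ms + (pan_m - intensity)[:, :, None]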
Different arithmetic combinations have been developed for image fusion. The Brovey
transform, Synthetic Variable Ratio (SVR), and Ratio Enhancement (RE) techniques are some
successful examples [9]. The basic procedure of the Brovey transform first multiplies each
MS band by the high-resolution PAN band, and then divides each product by the sum of the
MS bands, as shown in equation (5):

DN_fused,i = DN_i × DN_PAN / (DN_1 + DN_2 + … + DN_n) (5)
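Assuming the MS bands have already been resampled to the PAN grid, the Brovey procedure of equation (5) can be sketched as follows; the small eps guard against division by zero is our addition.

import numpy as np

def brovey_fuse(ms: np.ndarray, pan: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Brovey transform: each MS band is multiplied by PAN and divided
    by the sum of the MS bands (cf. equation (5)).

    ms  : (rows, cols, n_bands) multispectral image upsampled to PAN size.
    pan : (rows, cols) high-resolution panchromatic band.
    """
    ms = ms.astype(np.float64)
    band_sum = ms.sum(axis=2) + eps  # eps avoids division by zero
    return ms * (pan / band_sum)[:, :, None]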
transform seems to reduce the color distortion problem and to keep the statistical parameters
invariant.
small objects are often lost in the fused images; (3) it often requires the user to determine
appropriate values for certain parameters (such as thresholds). The development of more
sophisticated wavelet-based fusion algorithms (such as the Ridgelet, Curvelet, and Contourlet
transforms) could improve performance, but these new schemes may cause
greater complexity in computation and parameter setting.
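As an illustration of the basic wavelet scheme discussed above, the following sketch uses the PyWavelets package (assumed available) with a common fusion rule: average the approximation coefficients and keep the larger-magnitude detail coefficients. It is a generic example, not the specific algorithm of any cited work.

import numpy as np
import pywt  # PyWavelets

def wavelet_fuse(img_a: np.ndarray, img_b: np.ndarray,
                 wavelet: str = "db2", level: int = 2) -> np.ndarray:
    """Fuse two co-registered images in the wavelet domain: average the
    approximation coefficients, keep the detail coefficient with the
    larger magnitude (choose-max rule)."""
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]  # approximation: average
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)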
In Figure 6, the hidden layer has several neurons and the output layer has one neuron (or
more). The ith neuron of the input layer connects with the jth neuron of the hidden layer by
weight W_ij, and the weight between the jth neuron of the hidden layer and the tth neuron of
the output layer is V_jt (in this case t = 1). The weighting function is used to simulate and
recognize the response relationship between features of the fused image and the
corresponding features of the original images (image A and image B). The ANN model is
given as follows:
Y = f( Σ_{j=1}^{q} V_j H_j − γ ) (7)
In equation (7), Y = pixel value of the fused image exported from the neural network model,
f(·) = activation function (typically a sigmoid), q = number of hidden nodes (q ≈ 8 here),
V_j = weight between the jth hidden node and the output node (in this case, there is only one
output node), γ = threshold of the output node, and H_j = output value of the jth hidden node:
H_j = f( Σ_{i=1}^{n} W_ij a_i − θ_j ) (8)
where W_ij = weight between the ith input node and the jth hidden node, a_i = value of the ith
input factor, n = number of input nodes (n ≈ 5 here), and θ_j = threshold of the jth hidden node.
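Equations (7) and (8) amount to a single feed-forward pass. A minimal sketch, assuming a sigmoid activation f (the text does not name the activation), is:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ann_fused_pixel(a, W, theta, V, gamma):
    """Evaluate equations (7) and (8) for one feature vector.

    a     : (n,) normalized input features of the corresponding blocks
    W     : (n, q) weights W_ij between input and hidden nodes
    theta : (q,) hidden-node thresholds
    V     : (q,) weights V_j between hidden nodes and the output node
    gamma : scalar threshold of the output node
    """
    H = sigmoid(a @ W - theta)  # eq. (8): hidden-node outputs
    Y = sigmoid(H @ V - gamma)  # eq. (7): fused output
    return Y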
As the first step of ANN-based data fusion, the two registered images are decomposed into
several blocks of size M × N (Figure 6). Then, features of the corresponding blocks in
the two original images are extracted, and the normalized feature vectors fed to the neural
network are constructed [31]. The features used here to evaluate the fusion effect are
normally spatial frequency, visibility, and edges. The next step is to select some vector
samples to train the neural network. An ANN is a universal function approximator that
directly adapts to any nonlinear function defined by a representative set of training data.
Once trained, the ANN model can remember a functional relationship and be used for
further calculations. For these reasons, the ANN concept has been adopted to develop
strongly nonlinear models for multi-sensor data fusion. Thomas et al. discussed the
optimal fusion of TV and infrared images using artificial neural networks [32]. Since then,
many neural network models have been proposed for image fusion, such as BP, SOFM,
and ARTMAP neural networks. The BP algorithm has been used most widely. However, the
convergence of BP networks is slow and the global minimum of the error space may not
always be reached [33]. As an unsupervised network, the SOFM network clusters input
samples through competitive learning, but the number of output neurons must be set before
constructing the network model [34]. An RBF neural network can approximate an objective
function to any level of precision if enough hidden units are provided. The advantages of RBF
network training include no iteration, few training parameters, high training speed, and simple
processing and memory functions [35]. Hong explored the use of RBF neural networks
combined with a nearest-neighbor clustering method for clustering, with membership
weighting used for fusion; experiments show that this method can obtain a better cluster
fusion effect given a proper width parameter [36].
Gail et al. used Adaptive Resonance Theory (ART) neural networks to form a new
framework for self-organizing information fusion. The ARTMAP neural network can act as
a self-organizing expert system to derive hierarchical knowledge structures from
inconsistent training data [37]. ARTMAP information fusion resolves apparent
contradictions in input pixel labels by assigning output classes to levels in a knowledge
hierarchy [38]. Rong et al. presented a feature-level image fusion method based on
segmentation regions and neural networks. The results indicated that this combined fusion
scheme was more efficient than traditional methods [39].
The ANN-based fusion method exploits the pattern recognition capabilities of artificial
neural networks, while the learning capability of neural networks makes it feasible to
customize the image fusion process. Many applications have indicated that ANN-based
fusion methods have more advantages than traditional statistical methods, especially when
the input multi-sensor data are incomplete or noisy. ANN often serves as an efficient
decision-level fusion tool because of its self-learning character, especially in land use/land
cover classification. In addition, the multiple-input, multiple-output framework makes it a
possible approach for fusing high-dimension data, such as long-term time-series data or
hyperspectral data.
soil, water, and forests [5]. The volume of remote sensing images continues to grow at an
enormous rate due to advances in sensor technology for both high spatial and high temporal
resolution systems. Consequently, an increasing quantity of image data from
airborne/satellite sensors has become available, including multi-resolution, multi-temporal,
multi-frequency/spectral-band, and multi-polarization images. The goal of multi-sensor data
fusion is to integrate complementary and redundant information to provide a composite
image that allows a better understanding of the entire scene. It has been widely used in many
fields of remote sensing, such as object identification, classification, and change detection.
The following paragraphs describe the recent achievements of image fusion in more detail.
the full scale reveal that an accurate and reliable PAN-sharpening is achieved by the
proposed method [44]. A case study, which extracts crop fields using a high-spatial-resolution
image together with images of high temporal repetitiveness, is shown as follows.
Identification of crop types from satellite imagery is a challenging task. Here we present an
automatic approach for extracting planting areas in the mixed planting regions around Beijing
city using MODIS data and Landsat TM data. First, planting areas were distinguished from
non-crop areas in the Landsat TM image using a traditional supervised classifier. Then,
time series of NDVI derived from MODIS data were used to identify the different types of
crops. Because different crops have different growth stages, the maximum or minimum NDVI
values differ between crops and occur on different dates.
After investigating the planting structure of the main crops and analyzing the NDVI values of
the different crops from mid-March to mid-November 2002 in Beijing, the planting areas of
winter wheat, spring maize, summer maize, and bean in Beijing were extracted.
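The core of this phenology-based discrimination can be sketched as follows. The NDVI formula is standard, while the day-of-year windows attached to each crop are purely illustrative assumptions, not the thresholds used in the Beijing study.

import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + eps)

def crop_from_peak(ndvi_series: np.ndarray, dates: np.ndarray) -> np.ndarray:
    """Assign a crop label per pixel from the date of maximum NDVI.

    ndvi_series : (t, rows, cols) NDVI stack over the growing season
    dates       : (t,) day-of-year of each composite
    """
    peak_doy = dates[np.argmax(ndvi_series, axis=0)]
    labels = np.full(peak_doy.shape, "other", dtype=object)
    # Illustrative phenology windows only (hypothetical values):
    labels[(peak_doy >= 100) & (peak_doy < 140)] = "winter wheat"
    labels[(peak_doy >= 180) & (peak_doy < 230)] = "summer maize"
    return labels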
3.2 Classification
Classification is one of the key tasks of remote sensing applications. The classification accuracy
of remote sensing images is improved when multiple source image data are introduced to the
processing [4]. Images from microwave and optical sensors offer complementary information
that helps in discriminating the different classes. As discussed in the work of Wang et al., a
multi-sensor decision-level image fusion algorithm based on fuzzy theory is used for
classification of each sensor image, and the classification results are fused by the fusion rule.
Interesting results were achieved, mainly the high-speed classification and efficient fusion of
complementary information [45]. Land-use/land-cover classification has been improved using
data fusion techniques such as ANN and the Dempster-Shafer theory of evidence. The
experimental results show excellent classification performance compared to existing
classification techniques [46, 47]. Image fusion methods will lead to strong advances in
land use/land cover classification by exploiting the complementarity of data offering either
high spatial resolution or high temporal repetitiveness.
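As an illustration of the Dempster-Shafer decision-level fusion mentioned above, the following sketch combines the class masses of two classifiers with Dempster's rule, assuming focal elements restricted to single classes plus the full frame (ignorance); the numbers in the usage example are invented.

import numpy as np

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule for two mass functions whose focal elements are
    single classes plus the full frame 'THETA' (ignorance)."""
    classes = [k for k in m1 if k != "THETA"]
    combined = {}
    for c in classes:
        # Intersections yielding {c}: {c}&{c}, {c}&THETA, THETA&{c}
        combined[c] = m1[c] * m2[c] + m1[c] * m2["THETA"] + m1["THETA"] * m2[c]
    combined["THETA"] = m1["THETA"] * m2["THETA"]
    conflict = 1.0 - sum(combined.values())  # mass on empty intersections
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical example: optical and SAR classifiers partly disagree.
optical = {"urban": 0.6, "water": 0.1, "THETA": 0.3}
sar = {"urban": 0.5, "water": 0.3, "THETA": 0.2}
print(dempster_combine(optical, sar))  # mass concentrates on 'urban'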
For example, an Indian P5 panchromatic image (Figure 9b) of Yiwu City, Southeast China,
with a spatial resolution of 2.18 m, acquired in 2007, was fused with the multispectral bands of
China-Brazil CBERS data (spatial resolution: 19.2 m) (Figure 9a), also from 2007. The Brovey
transformation fusion method was used.
(a) Land use classification based on the CBERS multispectral image; (b) land use
classification based on the fused image
Fig. 10. Land use classification of Yiwu city, 2007
Results indicated that the accuracy of the residential areas of Yiwu city derived from the fused
image is much higher than that derived from the CBERS multispectral image (Table 2).
roof color, industrial structures, smaller vehicles, and vegetation [50]. Two examples of
change detection using image fusion methods are shown as follows.
1. Change detection using Landsat ETM+ and MODIS data
Recent studies indicated that urban expansion can be efficiently monitored using satellite
images of multi-temporal and multi-spatial resolution. For example, a Landsat ETM+
panchromatic image (Figure 4a) of Chongqing City, Southwest China, with a spatial resolution
of 10 m, acquired in 2000, was fused with the daily-received multispectral bands of MODIS
data (spatial resolution: 250 m) (Figure 4b) from 2006. The Brovey transformation fusion
method was used. Building areas that remained unchanged from 2000 to 2006 appear
grey-pink in the composed image, while newly established buildings appear dark red
(Figure 5) and can be easily identified.
2. Change detection using a former land-cover map and multispectral images
In the study area, the Qingpu district of Shanghai City, China, two kinds of data were fused for
automatic urban sprawl monitoring: a land cover map and multispectral images from the
Environment Satellite 1 (HJ-1). The land cover map of 2005 was used as prior knowledge
for the feature-space analysis and segmentation. The HJ-1 image of September 22, 2009 was
geometrically and radiometrically corrected. HJ-1 images consist of four spectral bands:
three visible bands and a near-infrared (NIR) band.
The two data layers were overlaid, and the spectral DN values of the five land cover types
were extracted. The results in Figure 3 show that the spectral DN values of the five land
cover types mostly cluster in their respective three-dimensional ellipsoid spaces. Outliers
were considered pixels with a higher probability of change. Based on the three-dimensional
feature space analysis, the map of urban expansion could be derived.
Fig. 6. Three-dimensional scatter plots and feature space of five kinds of land cover types
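One simple way to formalize the "outside the class ellipsoid" test described above is a Mahalanobis distance threshold per land cover class. The sketch below assumes this formulation and an illustrative threshold; it is not the exact rule used in the Qingpu study.

import numpy as np

def change_mask(pixels: np.ndarray, labels: np.ndarray,
                threshold: float = 3.0) -> np.ndarray:
    """Flag pixels that fall outside the ellipsoid of their mapped class.

    pixels : (n, 3) spectral DN values (e.g., two visible bands + NIR)
    labels : (n,) land-cover class of each pixel from the prior map
    Returns True where the Mahalanobis distance to the class centroid
    exceeds `threshold`, i.e., where change is likely.
    """
    changed = np.zeros(len(pixels), dtype=bool)
    for cls in np.unique(labels):
        sel = labels == cls
        x = pixels[sel].astype(np.float64)
        mu = x.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(x, rowvar=False))
        # Per-pixel quadratic form (x - mu)^T C^{-1} (x - mu)
        d = np.sqrt(np.einsum("ij,jk,ik->i", x - mu, cov_inv, x - mu))
        changed[sel] = d > threshold
    return changed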
performance results, but they often cause greater complexity in computation and parameter
setting. Another challenge for existing fusion techniques will be the ability to process
hyperspectral satellite sensor data. Artificial neural networks seem to be one possible
approach to handle the high-dimension nature of hyperspectral satellite sensor data.
2. Establishment of an automatic quality assessment scheme.
Automatic quality assessment is highly desirable to evaluate the possible benefits of fusion,
to determine an optimal setting of parameters for a certain fusion scheme, and to
compare results obtained with different algorithms [34]. Mathematical methods have been
used to judge the quality of merged imagery with respect to the improvement of spatial
resolution while preserving the spectral content of the data. Statistical indices, such as cross
entropy, mean square error, and signal-to-noise ratio, have been used for evaluation purposes.
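For concreteness, minimal implementations of three such indices, under common definitions (with cross entropy computed from grey-level histograms), might look like the following sketch.

import numpy as np

def mse(ref: np.ndarray, fused: np.ndarray) -> float:
    """Mean square error between a reference and the fused image."""
    diff = ref.astype(np.float64) - fused.astype(np.float64)
    return float(np.mean(diff ** 2))

def snr_db(ref: np.ndarray, fused: np.ndarray) -> float:
    """Signal-to-noise ratio of the fused image against a reference, in dB."""
    signal = np.mean(ref.astype(np.float64) ** 2)
    return float(10.0 * np.log10(signal / mse(ref, fused)))

def cross_entropy(ref: np.ndarray, fused: np.ndarray, bins: int = 256) -> float:
    """Cross entropy between the grey-level histograms of two images."""
    p, _ = np.histogram(ref, bins=bins)
    q, _ = np.histogram(fused, bins=bins)
    p = p / p.sum()
    q = q / q.sum()
    mask = (p > 0) & (q > 0)  # avoid log(0)
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))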
While a few image fusion quality measures have been proposed recently, analytical studies of
these measures have been lacking. The work of Yin et al. focused on one popular mutual
information-based quality measure and on weighted averaging image fusion [56]. Jiying
presented a new metric based on image phase congruency to assess the performance of
image fusion algorithms [57]. However, in general, no automatic solution has been achieved
that consistently produces high-quality fusion for different data sets [58]. It is expected that
fusing data from multiple independent sensors will offer the potential for better performance
than can be achieved by either sensor alone, and will reduce vulnerability to sensor-specific
countermeasures and deployment factors. We expect that future research will address new
performance assessment criteria and automatic quality assessment methods [59].
5. References
Hall, L.; Llinas, J. (1997). An introduction to multisensor data fusion. Proc. IEEE, Vol. 85,
pp. 6-23, ISSN 0018-9219
Genderen, J. L. van; Pohl, C. (1994). Image fusion: Issues, techniques and applications.
Intelligent Image Fusion, Proceedings EARSeL Workshop, Strasbourg, France, 11
September 1994, edited by J. L. van Genderen and V. Cappellini (Enschede: ITC),
pp. 18-26
Guest editorial. (2007). Image fusion: Advances in the state of the art. Information Fusion,
Vol. 8, pp. 114-118, ISSN 1566-2535
Pohl, C.; Van Genderen, J.L. (1998). Multisensor image fusion in remote sensing: concepts,
methods and applications. Int. J. Remote Sens., Vol. 19, pp. 823-854, ISSN 0143-1161
Simone, G.; Farina, A.; Morabito, F.C.; Serpico, S.B.; Bruzzone, L. (2002). Image fusion
techniques for remote sensing applications. Information Fusion, Vol. 3, pp. 3-15,
ISSN 1566-2535
Vijayaraj, V.; Younan, N.; O'Hara, C. (2006). Concepts of image fusion in remote sensing
applications. In Proceedings of IEEE International Conference on Geoscience and Remote
Sensing Symposium, Denver, CO, USA, July 31-August 4, 2006, pp. 3798-3801
Dasarathy, B.V. (2007). A special issue on image fusion: advances in the state of the art.
Information Fusion, Vol. 8, p. 113, ISSN 1566-2535
Smith, M.I.; Heather, J.P. (2005). Review of image fusion technology in 2005. In Proceedings
of Defense and Security Symposium, Orlando, FL, USA, 2005
Blum, R.S.; Liu, Z. (2006). Multi-Sensor Image Fusion and Its Applications; Special Series on
Signal Processing and Communications; CRC Press: Boca Raton, FL, USA, 2006
Thépaut, O.; Kpalma, K.; Ronsin, J. (2000). Automatic registration of ERS and SPOT
multisensor images in a data fusion context. Forest Ecology and Management,
Vol. 123, pp. 93-100, ISSN 0378-1127
Peli, T.; Young, M.; Knox, R.; Ellis, K.K.; Bennett, F. (1999). Feature-level sensor fusion.
Proc. SPIE, Vol. 3719, p. 332, ISSN 0277-786X
Aguena, M.L.S.; Mascarenhas, N.D.A. (2006). Multispectral image data fusion using POCS
and super-resolution. Computer Vision and Image Understanding, Vol. 102,
pp. 178-187, ISSN 1077-3142
Lei, Y.; Jiang, D.; Yang, X. (2007). Application of image fusion in urban expanding
detection. Journal of Geomatics, Vol. 32, No. 3, pp. 4-5, ISSN 1007-3817
Elaksher, A.F. (2008). Fusion of hyperspectral images and lidar-based DEMs for coastal
mapping. Optics and Lasers in Engineering, Vol. 46, pp. 493-498, ISSN 0143-8166
Dai, X.; Khorram, S. (1999). Data fusion using artificial neural networks: a case study on
multitemporal change analysis. Comput. Environ. Urban Syst., Vol. 23, pp. 19-31,
ISSN 0198-9715
Yun, Z. (2004). Understanding image fusion. Photogram. Eng. Remote Sens., Vol. 6,
pp. 657-661
Pouran, B. (2005). Comparison between four methods for data fusion of ETM+ multispectral
and pan images. Geo-spat. Inf. Sci., Vol. 8, pp. 112-122
Adelson, C.H.; Bergen, J.R. (1984). Pyramid methods in image processing. RCA Eng.,
Vol. 29, pp. 33-41
Miao, Q.G.; Wang, B.S. (2007). Multi-sensor image fusion based on improved Laplacian
pyramid transform. Acta Opt. Sin., Vol. 27, pp. 1605-1610
Xiang, J.; Su, X. (2009). A pyramid transform of image denoising algorithm based on
morphology. Acta Photon. Sin., Vol. 38, pp. 89-103, ISSN 1000-7032
Mallat, S.G. (1989). A theory for multiresolution signal decomposition: the wavelet
representation. IEEE Trans. Pattern Anal. Mach. Intell., Vol. 11, pp. 674-693,
ISSN 0162-8828
Gonzalo, P.; Jesus, M.A. (2004). Wavelet-based image fusion tutorial. Pattern Recognit.,
Vol. 37, pp. 1855-1872, ISSN 0031-3203
Ma, H.; Jia, C.Y.; Liu, S. (2005). Multisource image fusion based on wavelet transform.
Int. J. Inf. Technol., Vol. 11, pp. 81-91
Krista, A.; Yun, Z.; Peter, D. (2007). Wavelet based image fusion techniques: an
introduction, review and comparison. ISPRS J. Photogram. Remote Sens., Vol. 62,
pp. 249-263
Candes, E.J.; Donoho, D.L. (2000). Curvelets: a surprisingly effective nonadaptive
representation for objects with edges. In Curves and Surfaces; Vanderbilt University
Press: Nashville, TN, USA, pp. 105-120
Choi, M.; Kim, R.Y.; Nam, M.R. (2005). Fusion of multi-spectral and panchromatic satellite
images using the Curvelet transform. IEEE Geosci. Remote Sens. Lett., Vol. 2,
pp. 136-140, ISSN 0196-2892
Do, M.N.; Vetterli, M. (2002). Contourlets; Academic Press: New York, NY, USA
Do, M.N.; Vetterli, M. (2005). The contourlet transform: an efficient directional multiresolution
image representation. Available online:
http://lcavwww.epfl.ch/~vetterli/IP-4-2005.pdf (accessed June 29, 2009)
Louis, E.K.; Yan, X.H. (1998). A neural network model for estimating sea surface chlorophyll
and sediments from Thematic Mapper imagery. Remote Sens. Environ., Vol. 66,
pp. 153-165, ISSN 0034-4257
Dong, J.; Yang, X.; Clinton, N.; Wang, N. (2004). An artificial neural network model for
estimating crop yields using remotely sensed information. Int. J. Remote Sens.,
Vol. 25, pp. 1723-1732, ISSN 0143-1161
Shutao, L.; Kwok, J.T.; Yaonan, W. (2002). Multifocus image fusion using artificial neural
networks. Pattern Recognit. Lett., Vol. 23, pp. 985-997, ISSN 0167-8655
Thomas, F.; Grzegorz, G. (1995). Optimal fusion of TV and infrared images using artificial
neural networks. In Proceedings of Applications and Science of Artificial Neural
Networks, Orlando, FL, USA, April 21, 1995; Vol. 2492, pp. 919-925
Huang, W.; Jing, Z. (2007). Multi-focus image fusion using pulse coupled neural network.
Pattern Recognit. Lett., Vol. 28, pp. 1123-1132, ISSN 0167-8655
Wu, Y.; Yang, W. (2003). Image fusion based on wavelet decomposition and evolutionary
strategy. Acta Opt. Sin., Vol. 23, pp. 671-676, ISSN 0253-2239
Sun, Z.Z.; Fu, K.; Wu, Y.R. (2003). The high-resolution SAR image terrain classification
algorithm based on mixed double hint layers RBFN model. Acta Electron. Sin.,
Vol. 31, pp. 2040-2044
Zhang, H.; Sun, X.N.; Zhao, L.; Liu, L. (2008). Image fusion algorithm using RBF neural
networks. Lect. Notes Comput. Sci., Vol. 9, pp. 417-424
Gail, A.; Siegfried, M.; Ogas, J. (2005). Self-organizing information fusion and hierarchical
knowledge discovery: a new framework using ARTMAP neural networks. Neural
Netw., Vol. 18, pp. 287-295
Gail, A.; Siegfried, M.; Ogas, J. (2004). Self-organizing hierarchical knowledge discovery by
an ARTMAP image fusion system. In Proceedings of the 7th International Conference
on Information Fusion, Stockholm, Sweden, 2004; pp. 235-242
Wang, R.; Bu, F.L.; Jin, H.; Li, L.H. (2007). A feature-level image fusion algorithm based on
neural networks. Bioinf. Biomed. Eng., Vol. 7, pp. 821-824
Wu, H.; Siegel, M.; Stiefelhagen, R.; Yang, J. (2002). Sensor fusion using Dempster-Shafer
theory. In IEEE Instrumentation and Measurement Technology Conference, Anchorage,
AK, USA, 21-23 May 2002
Le Hégarat-Mascle, S.; Richard, D.; Ottlé, C. (2003). Multi-scale data fusion using
Dempster-Shafer evidence theory. Integrated Computer-Aided Engineering, Vol. 10,
No. 1, pp. 9-22, ISSN 1875-8835
Borotschnig, H.; Paletta, L.; Prantl, M.; Pinz, A. (1999). A comparison of probabilistic,
possibilistic and evidence theoretic fusion schemes for active object recognition.
Computing, Vol. 62, pp. 293-319
Jin, X.Y.; Davis, C.H. (2005). An integrated system for automatic road mapping from high-
resolution multi-spectral satellite imagery by information fusion. Information Fusion,
Vol. 6, pp. 257-273, ISSN 1566-2535
Garzelli, A.; Nencini, F. (2007). Panchromatic sharpening of remote sensing images using a
multiscale Kalman filter. Pattern Recognit., Vol. 40, pp. 3568-3577, ISSN 0031-3203
Wu, Y.; Yang, W. (2003). Image fusion based on wavelet decomposition and evolutionary
strategy. Acta Opt. Sin., Vol. 23, pp. 671-676, ISSN 0253-2239
Sarkar, A.; Banerjee, A.; Banerjee, N.; Brahma, S.; Kartikeyan, B.; Chakraborty, M.;
Majumder, K.L. (2005). Landcover classification in MRF context using Dempster-
Shafer fusion for multisensor imagery. IEEE Trans. Image Processing, Vol. 14,
pp. 634-645, ISSN 1057-7149
Liu, C.P.; Ma, X.H.; Cui, Z.M. (2007). Multi-source remote sensing image fusion classification
based on DS evidence theory. In Proceedings of Conference on Remote Sensing and GIS
Data Processing and Applications; and Innovative Multispectral Technology and
Applications, Wuhan, China, November 15-17, 2007; Vol. 6790, part 2
Rottensteiner, F.; Trinder, J.; Clode, S.; Kubik, K.; Lovell, B. (2004). Building detection by
Dempster-Shafer fusion of LIDAR data and multispectral aerial imagery. In
Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK,
August 23-26, 2004; Vol. 2, pp. 339-342
Ruvimbo, G.; Philippe, D.; Morgan, D. (2009). Object-oriented change detection for the city of
Harare, Zimbabwe. Exp. Syst. Appl., Vol. 36, pp. 571-588
Madhavan, B.B.; Sasagawa, T.; Tachibana, K.; Mishra, K. (2005). A decision level fusion of
ADS-40, TABI and AISA data. Nippon Shashin Sokuryo Gakkai Gakujutsu Koenkai
Happyo Ronbunshu, Vol. 2005, pp. 163-166
Duncan, S.; Sameer, S. (2006). Approaches to multisensor data fusion in target tracking: a
survey. IEEE Trans. Knowl. Data Eng., Vol. 18, pp. 1696-1710, ISSN 1041-4347
Chen, Y.; Han, C. (2005). Maneuvering vehicle tracking based on multi-sensor fusion. Acta
Autom. Sin., Vol. 31, pp. 625-630
Liu, C.; Feng, X. (2006). An algorithm of tracking a maneuvering target based on IR sensor
and radar in dense environment. J. Air Force Eng. Univ., Vol. 7, pp. 25-28,
ISSN 1009-3516
Zheng, M.; Zhao, Y.; Tian, H. (2006). Maneuvering target tracking based on fusion of multi-
sensor. J. Detect. Control, Vol. 28, pp. 43-45
Vahdati-khajeh, E. (2004). Tracking the maneuvering targets using multiple scan joint
probabilistic data association algorithm. In Proceedings of Target Tracking 2004:
Algorithms and Applications, IEE, Sussex University, Brighton, UK, January 1, 2004,
ISSN 0537-9989
Chen, Y.; Xue, Z.Y.; Blum, R.S. (2008). Theoretical analysis of an information-based quality
measure for image fusion. Information Fusion, Vol. 9, pp. 161-175, ISSN 1566-2535
Zhao, J.Y.; Laganiere, R.; Liu, Z. (2006). Image fusion algorithm assessment based on feature
measurement. In Proceedings of the 1st International Conference on Innovative
Computing, Information and Control, Beijing, China, August 30-September 1, 2006;
Vol. 2, pp. 701-704
Goshtasby, A.; Nikolov, S. (2007). Image fusion: advances in the state of the art. Information
Fusion, Vol. 8, pp. 114-118, ISSN 1566-2535
Jiang, D.; Zhuang, D.; Huang, Y.; Fu, J. (2009). Advances in multi-sensor data fusion:
algorithms and applications. Sensors, Vol. 9, No. 10, pp. 7771-7784, ISSN 1424-8220