An Image Fusion Approach Based On Markov Random Fields

Abstract—Markov random field (MRF) models are powerful tools to model image characteristics accurately and have been successfully applied to a large number of image processing applications. This paper investigates the problem of fusion of remote sensing images, e.g., multispectral image fusion, based on MRF models and incorporates the contextual constraints via MRF models into the fusion model. Fusion algorithms under the maximum a posteriori criterion are developed to search for solutions. Our algorithm is applicable to both multiscale decomposition (MD)-based image fusion and non-MD-based image fusion. Experimental results are provided to demonstrate the improvement of fusion performance by our algorithms.

Index Terms—Markov random field, multi-resolution decomposition, multispectral image fusion.

I. INTRODUCTION

a narrow spectral range [3], and different bands represent different aspects of the scene. Multispectral image fusion involves the fusion of several bands in order to improve spectral resolution.

The existing image fusion approaches can be classified into three categories: pixel-level, feature-level, and decision-level [4]. An overview of these image fusion approaches can be found in [4]. This paper is focused on the pixel-level fusion approach. Before image fusion, an image registration algorithm usually needs to be applied in order to align the source images [5]. In this paper, we assume that registered images are available prior to fusion. A variety of image fusion algorithms have been proposed for different applications [6]. The basic pixel-level fusion rule includes two steps.

1) First, we need to determine whether a source image contributes to the fused image.
performance while the use of the transform increases the computational complexity. Thus, one can choose whether or not to employ transforms on the images, depending on the application. For the MD-based fusion approaches, the basic fusion rule is applied to the MD representations of the images at each resolution level. For the non-MD-based fusion approach, the basic fusion rule is applied directly to the source images.

Generally, the main drawback of the pixel-level fusion rule is that the decision on whether a source image contributes to the fused image is made pixel by pixel and, therefore, may cause spatial distortion in the fused image, which affects further processing, such as classification and detection. As we know, the pixels in an image are spatially correlated. Thus, for a source image, if one of its pixels contributes to the fused image, its neighbors are also likely to contribute to the fused image. This implies that the decision making during the first step of the fusion process should exploit this spatial correlation. Thus, it is important to incorporate spatial correlation into the fusion model, and the use of such a fusion model is expected to improve fusion performance.

A straightforward approach to making use of spatial correlation is to use a window- or region-based method [7], [10]–[13]. The idea is to estimate the intensity of a fused pixel from that of the source images in a small window. Yang and Blum [11] assumed that the decision making of pixels within a small window is constant and developed an expectation-maximization algorithm that employs a Gaussian mixture image model to adaptively find the fusion result. Burt and Kolczynski [10] proposed a weighted average algorithm to estimate the fused image in a pyramid transform domain. The weights are measured based on a local energy or variance (called "salience") within a small window. Loza et al. [12] modified the weighted average algorithm by incorporating a generalized Gaussian statistical model. Lallier and Farooq [13] designed a weighted average scheme for fusing IR and visual images in a surveillance scenario. In their algorithm, larger weights are assigned to either the warmer or cooler pixels for the IR image and to the pixels having larger local variance for the visual image. The aforementioned algorithms [10]–[13] are used in the MD-based fusion approach.

The theory of Markov random fields (MRFs) provides a basis for modeling contextual constraints in visual processing and interpretation [14]. MRF models have proved to be successful in a variety of image processing applications, including multiresolution fusion [15], change detection [16], edge detection [17], image denoising [14], [18], image restoration [19], [20], and image classification [21]. In the image fusion application, an MRF model has been used to model the images for the fusion of edge information [22]. Yang and Blum proposed a statistical model to describe the fusion process [1], [11], [23]. However, the application of MRF models to pixel-level image fusion on images with the same resolution has not been considered. In this paper, we propose two fusion algorithms that incorporate the contextual constraints via MRF models into the fusion model. The first algorithm models the decision making at the first step of the fusion rule as an MRF, and the second algorithm models both the decision making and the true image as MRFs. Also, the first algorithm is applicable to both the MD-based fusion approach and the non-MD-based fusion approach, while the second algorithm is applicable only to the non-MD-based fusion approach.

This paper is organized as follows. In Section II, we formulate the image fusion problem based on a statistical model. Then, the MRF-based image fusion approach is presented in Section III. In Section IV, we compare our proposed fusion approach with other fusion approaches via two experiments. Finally, some concluding remarks are provided in Section V.

II. PROBLEM FORMULATION

Image fusion is essentially an estimation problem. The objective is to estimate the underlying scene, assuming that each source image contains a good view of only part of the scene [1]. Blum [1] has proposed a statistical model for the image fusion problem. Assume that there are N source images to fuse. Each source image can be modeled as

y_i(r) = H_i(r) x(r) + w_i(r),  i = 1, …, N    (1)

where r indicates the spatial coordinates of a pixel, y_i(r) is the intensity of the ith source image at r, x(r) is the intensity of the true scene at r to be estimated, w_i(r) is the noise, and H_i(r) is the sensor selectivity coefficient, taking on values from Θ = {q_1, q_2, …} representing the percentage of the true scene contributing to the ith source image [7]. In our work, we use Θ = {0, 1}, which determines whether or not the true scene contributes to the ith source image [1]. In the following, for simplicity of notation, "(r)" is omitted.

Note that (1) represents the relationship between the source images and the true scene. According to this model, if the true scene contributes to the source image, the source image is modeled as the true scene plus Gaussian noise. If the true scene does not contribute to the source image, the source image is modeled as Gaussian noise. In practice, particularly in multiple-sensor and multifocus applications, this model has some limitations. The source images obtained from different sensors sense different aspects of the true scene, and this model may be a coarse approximation in this case.

The image fusion problem essentially involves the estimation of H_i and x. The two traditional algorithms, namely, the averaging and maximizing algorithms, can also be expressed using this model. For the averaging algorithm, H_i = 1 for all i. For the maximizing algorithm, H_i = 1 for the i that maximizes y_i, and H_i = 0 otherwise.

When H_i is given, the pixel intensity of the fused image can be easily calculated by a least squares (LS) technique as [24]

x̂ = (H^T H)^{-1} H^T Y    (2)

where H denotes the vector [H_1, H_2, …, H_N]^T and Y denotes the vector [y_1, y_2, …, y_N]^T.

In practice, we only have the source images available without any prior information, and the coefficient H is usually unknown. According to the LS technique, from the set of all possible
5118 IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, VOL. 49, NO. 12, DECEMBER 2011
values that the coefficient H can take, the one which produces the highest energy should be selected, i.e.,

Ĥ = arg min_H (Y − H x̂)^T (Y − H x̂)
  = arg min_H [Y^T Y − (Y^T H)(H^T H)^{-1}(H^T Y)]
  = arg max_H [(Y^T H)(H^T H)^{-1}(H^T Y)].    (3)

Note that, since H_i ∈ {0, 1}, H has 2^N possible values. Once Ĥ is available, the intensity of the fused image at pixel r, i.e., x, is obtained by the LS approach [24], which is

x̂ = (Ĥ^T Ĥ)^{-1} Ĥ^T Y.    (4)

In the aforementioned model, both the coefficient H and the intensity of the fused image x at each pixel are estimated pixel by pixel, and therefore, the estimates are very sensitive to sensor noise. Furthermore, since the estimation of the fused image is based on the estimation of the coefficients, the estimation of the coefficient H plays an important role in the fusion process. The estimation accuracy of the coefficients directly influences the estimation of the fused image. Since the coefficient H of a pixel is likely to be similar to the coefficients of the other pixels in its neighborhood due to spatial correlation, we can get better estimates of H by utilizing spatial correlation. A straightforward and simple approach is to assume that the coefficients of the pixels within a small window are constant and then select the coefficient which produces the highest energy of the pixels within the window. This strategy has been used in [11]. However, the goal of the LS approach is to minimize the data error ‖y − ŷ‖², which does not necessarily lead to a small estimation error for either H or x. A popular strategy for improving the estimation error of LS is to incorporate prior information on H or x [25]. Motivated by this fact and the fact that the MRF model in the form of prior Gibbs distributions is currently the most effective way to describe the local behavior of both the intensity field and the discontinuity field [20], we propose to employ an MRF model to estimate the coefficients. This is expected to improve the estimation accuracy of the coefficients H, thereby leading to improved fusion results. In the next section, we develop our image fusion approaches based on MRF modeling.

III. PROPOSED ALGORITHMS

The image fusion problem is to estimate the true scene x. However, before the estimation of x, we need to obtain an accurate estimate of H, which represents the decision whether the true scene is present in the source image, i.e., whether the source image contributes to the fused image. In the previous section, we considered the estimation of x and H on a pixel level. In this section, we propose two MRF-based image fusion approaches, which design the estimator by incorporating the spatial correlation through the prior probability density function (pdf) of H and x. The intensity of a fused pixel then depends not only on the intensities of the pixel in the source images but also on those of the neighboring pixels. In the first algorithm, only the coefficients are modeled using an MRF, denoted as MRF_H. In the second algorithm, both the coefficients and the fused image are modeled using MRFs, denoted as MRF_HX. Some notations used in the remainder of this paper are listed as follows:
• X: the whole true scene (fused image);
• H_i: the coefficients of the ith source image;
• Y_i: the intensities of the ith source image;
• H: the coefficients of the source images, where H(r, i) = H_i;
• Y: the intensities of the source images, where Y(r, i) = Y_i.

The maximum a posteriori (MAP) criterion is used to find the optimal solution for the estimation problem. The estimation procedure based on the MAP criterion chooses the most likely values of the coefficients and the fused image among all possible values given the observed source images. The resulting probability of error is minimal among all other detectors [24]. This criterion is expressed as

{Ĥ, X̂} = arg max_{H,X} [P(X, H|Y)].    (5)

However, due to high computational complexity, it is difficult to directly obtain the final solution. Note that P(X, H|Y) = P(X|H, Y)P(H|Y) = P(H|X, Y)P(X|Y). Thus, a suboptimal method is adopted in this paper. We decompose problem (5) into two subproblems and iteratively solve them:

Ĥ^{n+1} = arg max_H P(H|Y, X̂^n)
X̂^{n+1} = arg max_X P(X|Y, Ĥ^n)    (6)

where Ĥ^n denotes the nth update of the estimate of H and X̂^n denotes the nth update of the estimate of X. It is easy to show that we can iteratively update the estimates of H and X such that

P(X^{n+1}, H^{n+1}|Y) ≥ P(X^n, H^n|Y)    (7)

and, therefore, achieve the optimum at the end.

A. Fusion Approach: MRF Modeling for Coefficients H (MRF_H)

Motivated by the fact that the coefficients of the source images exhibit spatial correlation, we model the coefficient H by an MRF model. Let S be a set of sites in an image and Λ = {0, 1, …, L − 1} be the phase space. We assume that the coefficients H(S) ∈ Λ^S follow MRF properties with the Gibbs potential U_c(H). The marginal pdf of H is written as [14]

P_H(H) = (1/Z_H) exp(−(1/T) Σ_{c⊂S} U_c(H))    (8)

where Z_H is a normalization constant given by

Z_H = Σ_{H∈Λ^S} exp(−(1/T) Σ_{c⊂S} U_c(H)).    (9)
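To make the prior in (8) concrete, the sketch below evaluates the clique-potential sum Σ_{c⊂S} U_c(H) for a binary coefficient field with pairwise cliques on the eight-neighborhood. The Ising-style agree/disagree potential and the weight `beta` are illustrative assumptions, not the paper's estimated parameters, and the normalization constant Z_H of (9) is intractable for realistic image sizes, so only the unnormalized Gibbs factor is computed.

```python
import numpy as np

def prior_energy(H, beta=1.0):
    """Evaluate sum_{c in S} U_c(H) for a binary coefficient field.

    H    : (rows, cols, N) array of 0/1 coefficients, one plane per source.
    beta : interaction weight (a hypothetical parameter; the paper's
           autologistic potential is parameterized differently).

    Uses Ising-style pair potentials on the eight-neighborhood: a pair
    contributes -beta when the two coefficients agree and +beta when they
    disagree, so smooth coefficient maps receive low energy.
    """
    H = np.asarray(H)
    energy = 0.0
    # vertical, horizontal, and the two diagonal pair cliques (each pair once)
    for dr, dc in [(1, 0), (0, 1), (1, 1), (1, -1)]:
        r0, r1 = max(dr, 0), H.shape[0] + min(dr, 0)
        c0, c1 = max(dc, 0), H.shape[1] + min(dc, 0)
        a = H[r0:r1, c0:c1]
        b = H[r0 - dr:r1 - dr, c0 - dc:c1 - dc]
        energy += beta * (np.sum(a != b) - np.sum(a == b))
    return energy

def gibbs_prior(H, T=1.0, beta=1.0):
    """Unnormalized Gibbs prior exp(-U/T) of (8); Z_H in (9) is never
    computed explicitly because the sum over all fields is intractable."""
    return np.exp(-prior_energy(H, beta) / T)
```

A uniform coefficient map minimizes this energy, which is exactly the smoothness preference the prior is meant to encode.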
XU et al.: IMAGE FUSION APPROACH BASED ON MARKOV RANDOM FIELDS 5119
The estimate of H is given by

Ĥ^{n+1} = arg max_H P(H|Y, X̂^n).    (10)

We apply Bayes' rule, which provides the following result:

P(H|Y, X̂^n) = P(H, Y|X̂^n) / P(Y|X̂^n) = P(Y|H, X̂^n) P(H) / P(Y|X̂^n)    (11)

and because P(Y|X̂^n) is constant for all values of H, (10) can be rewritten as

Ĥ^{n+1} = arg max_H [P(Y|H, X̂^n) P(H)].    (12)

In the model given in (1), the noise of each source pixel is assumed to be independent and identically distributed (i.i.d.) Gaussian with zero mean and variance σ², and therefore, the conditional pdf of the source images Y given H and X̂^n is

P(Y|H, X̂^n) = exp(−Σ_i (Y_i − H_i X̂^n)^T (Y_i − H_i X̂^n) / (2σ²)) / (2πσ²)^{M/2}    (13)

where M is the total number of pixels.

Then, substituting (8) and (13) into (12) and taking the constant term out, we obtain

Ĥ^{n+1} = arg max_H [exp(−E(H))]    (14)

where

E(H) = Σ_i (Y_i − H_i X̂^n)^T (Y_i − H_i X̂^n) / (2σ²) + Σ_{c⊂S} U_c(H).    (15)

According to the aforementioned result, we observe that the maximization in (14) is equivalent to the minimization of E(H). Thus, the optimal estimate for H can be expressed as

Ĥ^{n+1} = arg min_H E(H).    (16)

Note that, for two source images of size 300 × 300, H has a total of 4^{90000} possible configurations. Thus, in practice, due to the large search space of H, the solution of (16) cannot be obtained directly, and therefore, the simulated annealing (SA) algorithm [26] is applied here to search for the optimal solution of (16). The solution of the second subproblem, i.e., the estimate of X, is obtained by (4). The iterative algorithm is described in terms of the following steps.

1) Start with an initial estimate of H and X. Estimate the initial parameters (noise variance and some parameters in the pdf of H) and set the initial temperature.
2) At each iteration, obtain a new estimate of H based on the Gibbs pdf given in (8) with the Gibbs potential E(H) using a Gibbs sampling procedure [14].
3) Update the fused image using (4).
4) Reduce the temperature using a predetermined schedule and repeat 2) and 3) until convergence.

Here, the temperature is a parameter which is used to control the randomness of the coefficient generator, and we consider that the algorithm has converged when two consecutive updates are within a tolerance of each other. At steps 2) and 3), we visit each pixel from left to right and from top to bottom when we update the coefficients and the fused image. Eventually, the resulting coefficients converge to the solution of (16), and the fused image is simultaneously obtained. Compared with the maximizing approach, the averaging approach, and the LS approach, the solution of this algorithm is obtained through an optimization algorithm, and therefore, it increases the computation time. However, the MRF modeling of the coefficients in the image model describes the fusion process more faithfully, which improves the fusion performance.

In recent years, other optimization algorithms such as the graph-cut-based approach [27] have become very popular, and they can find the solution in a more computationally efficient manner than the SA optimization algorithm. The use of optimization algorithms such as the graph-cut-based approach will be investigated in the future to improve the efficiency of the fusion algorithm.

Assume that we have N source images of size M. Since the coefficient H_i(r) of the ith source image at pixel r takes values in {0, 1}, the coefficient vector of each pixel belongs to a set of size 2^N. Thus, the algorithm to estimate the coefficients has a computational complexity of O(M · 2^N) per iteration.

B. Fusion Approach: MRF Modeling for Coefficients H and Fused Image X (MRF_HX)

In the aforementioned algorithm, we assumed that the coefficients H follow an MRF model. Then, the intensity of the fused pixel is estimated by an LS technique. In practice, the fused image also has the property of high spatial correlation. Thus, one may assume that the fused image also follows an MRF model with a Gibbs potential V_c(X). Hence, the marginal pdf of X is written as [14]

P_X(X) = (1/Z_X) exp(−(1/T) Σ_{c⊂S} V_c(X))    (17)

where Z_X is a normalization constant given by

Z_X = Σ_X exp(−(1/T) Σ_{c⊂S} V_c(X)).    (18)

Under this assumption, the MAP criterion to obtain the optimal X is written as

X̂^{n+1} = arg max_X P(X|Y, Ĥ^n).    (19)

In a similar manner as for the estimation of Ĥ, (19) reduces to

X̂^{n+1} = arg min_X Δ(X)    (20)

where

Δ(X) = (1/(2σ²)) Σ_i (Y_i − Ĥ_i^n X)^T (Y_i − Ĥ_i^n X) + Σ_{c⊂S} V_c(X).    (21)
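Steps 2) and 3) of the MRF_H algorithm can be sketched as one Gibbs-sampling sweep over the coefficients followed by the per-pixel LS update of (4). This is a minimal illustration rather than the authors' implementation: the Ising-style pair weight `beta` and the guard against an all-zero coefficient vector in the LS step are assumptions filled in for the sketch.

```python
import itertools
import numpy as np

def mrf_h_sweep(Y, H, x, sigma2, T, beta=1.0, rng=None):
    """One left-to-right, top-to-bottom Gibbs-sampling sweep for MRF_H.

    Y : (rows, cols, N) source images      H : (rows, cols, N) 0/1 coefficients
    x : (rows, cols) current fused image   sigma2, T : noise variance, temperature

    Each pixel's 2**N candidate coefficient vectors are scored with the local
    part of the energy E(H) in (15): a Gaussian data term plus an assumed
    Ising-style disagreement penalty over the eight neighbors.  A candidate is
    drawn from the Gibbs conditional exp(-E/T), and the fused image is then
    refreshed with the per-pixel LS rule (4).
    """
    rng = rng or np.random.default_rng(0)
    rows, cols, N = Y.shape
    cand = np.array(list(itertools.product([0, 1], repeat=N)))  # 2**N x N
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for r in range(rows):
        for c in range(cols):
            e = np.sum((Y[r, c] - cand * x[r, c]) ** 2, axis=1) / (2 * sigma2)
            for dr, dc in nbrs:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    e = e + beta * np.sum(cand != H[rr, cc], axis=1)
            p = np.exp(-(e - e.min()) / T)  # stabilized Gibbs conditional
            H[r, c] = cand[rng.choice(len(cand), p=p / p.sum())]
    # LS update (4): x = (H^T H)^{-1} H^T Y per pixel; where every coefficient
    # is zero, (4) is undefined, so the previous value is kept (a sketch choice)
    k = H.sum(axis=2)
    return H, np.where(k > 0, (H * Y).sum(axis=2) / np.maximum(k, 1), x)
```

Repeating the sweep while lowering T according to a schedule gives the annealing loop of steps 1)–4); the cost of each sweep is O(M · 2^N), matching the complexity noted above.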
In the next section, we will show that the solution of (20) can be easily obtained when we use a Gaussian MRF to model the fused image.

Different from the first algorithm, MRF_H, where the fused image X is updated by an LS technique, this new algorithm uses a MAP solution to update the fused image X using (20). The whole procedure is described as follows.

1) Start with an initial estimate of H and X. Estimate the initial parameters (noise variance and some parameters in the pdfs of H and X) and set the initial temperature.
2) At each iteration, obtain a new estimate of H based on its Gibbs pdf given in (8) with the Gibbs potential E(H) using a Gibbs sampling procedure [14].
3) Update the fused image using the solution of (20).
4) Reduce the temperature using a predetermined schedule and repeat 2) and 3) until convergence.

In summary, this is an iterative estimation process that estimates both H and X, which increases the computational time. The MRF modeling of both the coefficients and the fused image more accurately represents high-resolution images and therefore produces better fusion results for the fusion of high-resolution source images. Moreover, the initial estimates of H and X are important. As the initial estimates of H and X get closer to the optimal values, the algorithm converges faster. Poor initial estimates may lead to local maxima of the a posteriori probability. Although the LS approach given by (3) and (4) does not take spatial correlation into account, it is a simple and effective fusion approach, and we use it to obtain our initial estimates of H and X. According to our experiments, this approach displays good fusion performance.

C. Extension to the MD-Based Fusion Framework

Here, the applicability of the two proposed algorithms to the MD-based fusion approach is discussed. For the non-MD-

Fig. 1. Cliques considered in the eight-neighborhood system.

which makes the model (1) more closely fit the MD representations. However, the use of multiresolution transforms increases the complexity of the algorithm. It is noted that, since the multiresolution transform may result in a loss of locality in MRF models for the MD image representations [28], i.e., the local MRF property may not hold for X, it is not suggested to use the algorithm MRF_HX with the MD-based fusion approach.

The coefficient H of each pixel represents whether the true scene X (fused image) contributes to the source image. The pixels in a large area may all contribute to the true scene; however, they may not all have the same intensities. Thus, the coefficient H has more spatial correlation over a larger area than the intensity of the true scene X. After the MD transformation, the coefficients H may still exhibit spatial correlation, while the MRF property may not hold for X. Thus, only the algorithm MRF_H is applied in the MD-based fusion approach. In the next section, some examples are provided for illustration.

IV. EXPERIMENTAL RESULTS

A. Choice of MRF Models

We provide three examples to evaluate the fusion performance of our fusion algorithms. For the two MRF-based fusion algorithms, MRF_H and MRF_HX, used in the following experiments, we consider five clique types in the eight-neighborhood system: C_1, C_2, C_3, C_4, C_5, associated with the singleton, vertical pairs, horizontal pairs, left-diagonal pairs, and right-diagonal pairs, respectively. They are shown in Fig. 1.

The Gibbs energy function of the coefficients of the source image is defined by an autologistic function, given by [14]

Σ_{c⊂S} U_c(H) = a^T L.    (22)
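The five clique types and the autologistic energy of (22) can be sketched as a clique-sum vector L with one entry per clique type, so that the energy is the inner product a^T L. The exact definitions of a and L are not reproduced in this excerpt, so the sketch assumes the common autologistic choice: a singleton sum for C_1 and sums of coefficient-pair products for the four pair directions C_2–C_5.

```python
import numpy as np

def clique_sums(H):
    """Clique-sum vector L for the autologistic energy of (22),
    sum_{c in S} U_c(H) = a^T L, over the five clique types of Fig. 1.

    H is a single binary coefficient plane.  L[0] is the singleton sum (C1);
    L[1..4] are the sums of products of coefficient pairs over vertical (C2),
    horizontal (C3), left-diagonal (C4), and right-diagonal (C5) cliques
    (an assumed instantiation, since the paper's definition is not shown).
    """
    H = np.asarray(H)

    def pair(dr, dc):
        r0, r1 = max(dr, 0), H.shape[0] + min(dr, 0)
        c0, c1 = max(dc, 0), H.shape[1] + min(dc, 0)
        # each neighboring pair in direction (dr, dc) is counted exactly once
        return np.sum(H[r0:r1, c0:c1] * H[r0 - dr:r1 - dr, c0 - dc:c1 - dc])

    return np.array([H.sum(), pair(1, 0), pair(0, 1), pair(1, 1), pair(1, -1)])

# The energy in (22) is then the inner product with the parameter vector a:
# energy = a @ clique_sums(H)
```

Restricting a to the first three entries of L recovers the four-neighborhood model (clique types C_1, C_2, C_3) used for comparison in the experiments.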
Fig. 3. MSE of the fusion result as the number of iterations increases for the MRF-based approaches with four- and eight-neighborhood systems. (a) MRF_H, SNR = 10 dB. (b) MRF_H, SNR = 30 dB. (c) MRF_HX, SNR = 10 dB. (d) MRF_HX, SNR = 30 dB.

repeated at two SNR levels, SNR = 10 dB and SNR = 30 dB. In addition to visual inspection, we employed the mean square error (mse) to evaluate the fusion performance in Experiment 1, in which a true reference scene is available. A smaller mse usually indicates better fusion performance. The solid lines in Fig. 3 show the mse of the fusion results using the MRF approaches with respect to the ground truth as the number of iterations increases. Initially, the coefficient mask and the fused image contain much noise. As the number of iterations increases, the resulting coefficient mask and the fused image approach the ground truth and produce the smallest mse. We found that our new fusion algorithms converge after around three iterations at an SNR level of 30 dB and after around ten iterations at an SNR level of 10 dB.

MRF modeling is employed to include the dependences of the decision making and/or pixel intensities of nearby pixels. The choice of neighborhood affects the effectiveness of MRF modeling. A larger neighborhood used in MRF modeling implies more spatial correlation in the image. In order to evaluate the fusion results for the different neighborhoods chosen, we also employed the four-neighborhood system, which only uses the three clique types C_1, C_2, and C_3 shown in Fig. 1 for MRF modeling, to compare with the fusion results using the eight-neighborhood system. The mses of the fusion results using both neighborhood systems for MRF modeling are shown in Fig. 3. Although the algorithms using both neighborhood systems eventually converge, it is observed that the MRF modeling using the eight-neighborhood system makes the algorithm converge faster than that for the four-neighborhood system. Furthermore, the use of the eight-neighborhood system produces a slightly smaller mse than that of the four-neighborhood system with the algorithm MRF_H, while it produces a slightly larger mse with the algorithm MRF_HX. This observation implies that the use of a larger neighborhood system does not necessarily improve the performance and that the choice of neighborhood in the MRF models does not impact the final fusion result much. We also note that the inclusion of more memory in the model results in improved performance, i.e., the eight-neighborhood system performs better than the four-neighborhood system.

Fig. 2 shows the three source images at an SNR level of 30 dB, and Fig. 4 shows the fusion results produced by the six fusion approaches, respectively. It is observed that the fused image produced by the algorithm MRF_HX displays reduced image noise at the cost of smoother image texture and is the closest to the ground truth. Fig. 5 shows the results for the estimated coefficients of the three source images using the six fusion approaches. The window-based approach removed the noise quite effectively but produced a mosaic coefficient mask. The two MRF algorithms outperform the other four approaches and demonstrate a good ability to accurately estimate the coefficients. Table I gives a quantitative comparison by means of the mse. We observed that the use of the spatial correlation property improves the fusion performance. Furthermore, the window-based approach and the algorithm MRF_H produce much smaller mses than the LS approach in the low-SNR case, while they do not improve the performance much in the high-SNR case. Our proposed fusion algorithm MRF_HX is observed to produce the smallest mse in both the low-SNR case and the high-SNR case. This also indicates that the estimation
TABLE II
EXECUTION TIME OF FUSION APPROACHES IN EXPERIMENT 1 (IN SECONDS)
Fig. 11. Magnified fusion results in Experiment 4—cloud images. (a) Averag-
ing. (b) LS. (c) Window. (d) MRF_H. (e) MRF_HX (non-MD-based approach).
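The quantitative comparisons above rely on the mean square error between the fused image and the known true scene (available in Experiment 1). For reference, the metric is simply the average squared intensity difference; a minimal helper, with hypothetical argument names:

```python
import numpy as np

def fusion_mse(fused, reference):
    """Mean square error between a fused image and the ground-truth scene,
    the figure of merit used where a true reference scene is available
    (smaller is better)."""
    fused = np.asarray(fused, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.mean((fused - reference) ** 2))
```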
when the raw source images are directly used for fusion without preprocessing, the fused image can also be modeled as an MRF, and then the fusion result can be obtained using the MAP criterion incorporating the a priori Gibbs distribution of the fused image. The second algorithm, MRF_HX, is only applicable to non-MD-based fusion approaches. Visual inspection and quantitative performance evaluation both demonstrate that the employment of the MRF model in the fusion approaches results in better fusion performance than the traditional fusion approaches. In our proposed image fusion algorithms, we assumed a simple relationship between each source image and the true scene, i.e., a source image either contributes to the fused image or does not. This results in a mismatch between the fusion model and real image data sets. To improve this, one can assume that the coefficient in the data model can take any real value, which may increase the accuracy of the fusion algorithms. In addition, in the developed image fusion algorithms, we assumed that the noise in the source images is i.i.d. Gaussian. Since this is a rather limiting assumption, if we can build the noise model to include non-Gaussian distortion or possibly correlated Gaussian mixture distortion, the model should be closer to realistic sensor images, and the estimation of the fused image may improve.

REFERENCES

[1] R. S. Blum, "On multisensor image fusion performance limits from an estimation theory perspective," Inf. Fusion, vol. 7, no. 3, pp. 250–263, Sep. 2006.
[2] Z. Wang, D. Ziou, C. Armenakis, D. Li, and Q. Li, "A comparative analysis of image fusion methods," IEEE Trans. Geosci. Remote Sens., vol. 43, no. 6, pp. 1391–1402, Jun. 2005.
[3] C. Thomas, T. Ranchin, L. Wald, and J. Chanussot, "Synthesis of multispectral images to high spatial resolution: A critical review of fusion methods based on remote sensing physics," IEEE Trans. Geosci. Remote Sens., vol. 46, no. 5, pp. 1301–1312, May 2008.
[4] C. Pohl and J. van Genderen, "Multisensor image fusion in remote sensing: Concepts, methods, and applications," Int. J. Remote Sens., vol. 19, no. 5, pp. 823–854, 1998.
[5] P. K. Varshney, B. Kumar, M. Xu, A. Drozd, and I. Kasperovich, "Image registration: A tutorial," in Proc. NATO ASI, Albena, Bulgaria, 2005.
[6] Z. Zhang and R. S. Blum, "A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application," Proc. IEEE, vol. 87, no. 8, pp. 1315–1326, Aug. 1999.
[7] R. K. Sharma, T. K. Leen, and M. Pavel, "Probabilistic image sensor fusion," in Proc. Adv. Neural Inf. Process. Syst. 11, 1999, pp. 824–830.
[8] H.-M. Chen, S. Lee, R. Rao, M.-A. Slamani, and P. Varshney, "Imaging for concealed weapon detection: A tutorial overview of development in imaging sensors and processing," IEEE Signal Process. Mag., vol. 22, no. 2, pp. 52–61, Mar. 2005.
[9] Y. Zhang, S. De Backer, and P. Scheunders, "Noise-resistant wavelet-based Bayesian fusion of multispectral and hyperspectral images," IEEE Trans. Geosci. Remote Sens., vol. 47, no. 11, pp. 3834–3843, Nov. 2009.
[10] P. Burt and R. Kolczynski, "Enhanced image capture through fusion," in Proc. 4th Int. Conf. Comput. Vis., 1993, pp. 173–182.
[11] J. Yang and R. Blum, "A statistical signal processing approach to image fusion for concealed weapon detection," in Proc. IEEE Int. Conf. Image Process., 2002, pp. 513–516.
[12] A. Loza, A. Achim, D. Bull, and N. Canagarajah, "Statistical image fusion with generalized Gaussian and Alpha-Stable distributions," in Proc. 15th Int. Conf. Digital Signal Process., 2007, pp. 268–271.
[13] E. Lallier and M. Farooq, "A real time pixel-level based image fusion via adaptive weight averaging," in Proc. 3rd Int. Conf. Inf. Fusion, 2000, pp. WEC3/3–WEC313.
[14] S. Z. Li, Markov Random Field Modeling in Computer Vision. New York: Springer-Verlag, 2001.
[15] M. Joshi and A. Jalobeanu, "MAP estimation for multiresolution fusion in remotely sensed images using an IGMRF prior model," IEEE Trans. Geosci. Remote Sens., vol. 48, no. 3, pp. 1245–1255, Mar. 2010.
[16] T. Kasetkasem and P. Varshney, "An image change detection algorithm based on Markov random field models," IEEE Trans. Geosci. Remote Sens., vol. 40, no. 8, pp. 1815–1823, Aug. 2002.
[17] Z. Tu and S. Zhu, "Image segmentation by data-driven Markov chain Monte Carlo," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 5, pp. 657–673, May 2002.
[18] H. Chen, "Mutual information based image registration with applications," Ph.D. dissertation, Syracuse Univ., Syracuse, NY, May 2002.
[19] S. Geman and D. Geman, "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images," in Readings in Uncertain Reasoning. San Francisco, CA: Morgan Kaufmann, 1990, pp. 452–472.
[20] L. Bedini, A. Tonazzini, and S. Minutoli, "Unsupervised edge-preserving image restoration via a saddle point approximation," Image Vis. Comput., vol. 17, no. 11, pp. 779–793, Sep. 1999.
[21] D. Kundur, D. Hatzinakos, and H. Leung, "Robust classification of blurred imagery," IEEE Trans. Image Process., vol. 9, no. 2, pp. 243–255, Feb. 2000.
[22] W. Wright, "Fast image fusion with a Markov random field," in Proc. 7th Int. Conf. Image Process. Appl., 1999, pp. 557–561.
[23] R. S. Blum, "Robust image fusion using a statistical signal processing approach," Inf. Fusion, vol. 6, no. 2, pp. 119–128, Jun. 2005.
[24] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Upper Saddle River, NJ: Prentice-Hall, 1993.
[25] Y. C. Eldar, A. Beck, and M. Teboulle, "Bounded error estimation: A Chebyshev center approach," in Proc. 2nd IEEE Int. Workshop Comput. Adv. Multi-Sensor Adapt. Process., 2007, pp. 205–208.
[26] S. Lakshmanan and H. Derin, "Simultaneous parameter estimation and segmentation of Gibbs random fields using simulated annealing," IEEE Trans. Pattern Anal. Mach. Intell., vol. 11, no. 8, pp. 799–813, Aug. 1989.
[27] V. Kolmogorov and R. Zabih, "What energy functions can be minimized via graph cuts?" IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 2, pp. 147–159, Feb. 2004.
[28] F. Heitz, "Restriction of a Markov random field on a graph and multiresolution statistical image modeling," IEEE Trans. Inf. Theory, vol. 42, no. 1, pp. 180–190, Jan. 1996.
[29] P. Bremaud, Markov Chains, Gibbs Fields, Monte Carlo Simulation, and Queues. New York: Springer-Verlag, 1999.
[30] H. Derin and H. Elliott, "Modeling and segmentation of noisy and textured images using Gibbs random fields," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-9, no. 1, pp. 39–55, Jan. 1987.
[31] T. Kasetkasem, "Image analysis methods based on Markov random field models," Ph.D. dissertation, Syracuse Univ., Syracuse, NY, Dec. 2002.
[32] S. S. Saquib, C. A. Bouman, and K. Sauer, "ML parameter estimation for Markov random fields, with applications to Bayesian tomography," IEEE Trans. Image Process., vol. 7, no. 7, pp. 1029–1044, Jul. 1998.
[33] J. Besag, "On the statistical analysis of dirty pictures," J. R. Stat. Soc., vol. 48, no. 3, pp. 259–302, 1986.
[34] S. Geman and C. Graffigne, "Markov random field image models and their application to computer vision," in Proc. Int. Congr. Mathematicians, 1986, pp. 1496–1517.
[35] R. C. Gonzalez and R. E. Woods, Digital Image Processing. Upper Saddle River, NJ: Prentice-Hall, 2008.

Min Xu (S'07–M'10) received the B.S. degree from the University of Science and Technology of China, Hefei, China, in 2002 and the M.S. and Ph.D. degrees in electrical engineering from Syracuse University, Syracuse, NY, in 2005 and 2009, respectively. Since December 2009, she has been a Researcher with Blue Highway, LLC, Syracuse. Her research interests are in the areas of statistical signal and image processing.
Hao Chen (S'06–M'08) received the Ph.D. degree in electrical engineering from Syracuse University, Syracuse, NY, in 2007. In 2007–2010, he was a Postdoctoral Research Associate and then a Research Assistant Professor with Syracuse University. Since August 2010, he has been an Assistant Professor with the Department of Electrical and Computer Engineering, Boise State University, Boise, ID. His research interests include statistical signal and image processing and communications.

Pramod K. Varshney (S'72–M'77–SM'82–F'97) received the B.S. degree in electrical engineering and computer science and the M.S. and Ph.D. degrees in electrical engineering from the University of Illinois at Urbana–Champaign, Urbana, in 1972, 1974, and 1976, respectively. Since 1976, he has been with Syracuse University, Syracuse, NY, where he is currently a Distinguished Professor of Electrical Engineering and Computer Science and the Director of The Center for Advanced Systems and Engineering. His current research interests are in distributed sensor networks and data fusion, detection and estimation theory, wireless communications, image processing, radar signal processing, and remote sensing. Dr. Varshney has received numerous awards. He serves as a Distinguished Lecturer for the IEEE Aerospace and Electronic Systems Society. He was the 2001 President of the International Society of Information Fusion.