
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, VOL. 51, NO. 9, SEPTEMBER 2013, p. 4779

Remote Sensing Image Fusion via Sparse Representations Over Learned Dictionaries

Shutao Li, Member, IEEE, Haitao Yin, and Leyuan Fang, Student Member, IEEE

Abstract—Remote sensing image fusion can integrate the spatial detail of a panchromatic (PAN) image and the spectral information of a low-resolution multispectral (MS) image to produce a fused MS image with high spatial resolution. In this paper, a remote sensing image fusion method is proposed with sparse representations over learned dictionaries. The dictionaries for the PAN image and the low-resolution MS image are learned from the source images adaptively. Furthermore, a novel strategy is designed to construct the dictionary for the unknown high-resolution MS image without a training set, which makes the proposed method more practical. The sparse coefficients of the PAN image and the low-resolution MS image are sought by the orthogonal matching pursuit algorithm. Then, the fused high-resolution MS image is calculated by combining the obtained sparse coefficients with the dictionary for the high-resolution MS image. By comparison with six well-known methods in terms of several universal quality evaluation indexes, with and without references, the simulated and real experimental results on QuickBird and IKONOS images demonstrate the superiority of our method.

Index Terms—Dictionary learning, image fusion, multispectral (MS) image, panchromatic (PAN) image, remote sensing, sparse representation.

I. INTRODUCTION

In optical remote sensing, the sensors are designed following a tradeoff among spectral resolution, spatial resolution, and signal-to-noise ratio. In addition, the spatial resolution of a remote sensing image is always limited by the onboard storage and bandwidth. As a result, remote sensing satellites often provide a panchromatic (PAN) image with high spatial resolution and a multispectral (MS) image with high spectral resolution. For example, the IKONOS satellite produces a PAN image with 1-m spatial resolution and an MS image with 4-m spatial resolution. Generally, remote sensing images with high spectral and high spatial resolutions are essential for a complete and accurate description of the observed scene. Remote sensing image fusion is an effective technique to integrate the spatial and spectral information of the PAN and MS images [1]. Through the remote sensing image fusion technique, we can not only overcome the limitation of information obtained from an individual sensor but also achieve a better observation [2].

Recently, various approaches have been proposed to address the problem of remote sensing image fusion. The existing methods can be categorized into three classes: component-substitution-based methods, multiresolution-analysis-based methods, and restoration-based methods. The representative component-substitution-based methods are the intensity–hue–saturation (IHS) [3], [4] and the principal component analysis [5]. The main steps of the classical IHS-based method are as follows. First, the spectral bands of the MS image are transformed into the IHS image space. Then, the intensity component is replaced by the PAN image. The final fused result is obtained by the inverse transform. The fused image obtained through the component-substitution-based methods can achieve high spatial resolution; however, spectral distortion is hard to avoid.

Multiresolution analysis provides another effective technique to fuse the PAN image and the MS image. The wavelet transform (WT)-based methods are the representative methods [6], [7]. The PAN image and each band of the MS image are decomposed into high- and low-frequency components through the WT. The high-frequency component extracted from the PAN image is merged into the MS bands. The fused image is obtained by performing the inverse WT. In [8], the additive wavelet luminance (AWL) method is proposed by combining the "à trous" WT (ATWT) and the IHS transform. In AWL, the details from the PAN image are injected into the luminance band of the MS image. A generalized proportional AWL method, termed AWL proportional (AWLP), injects the details with a self-adapting proportion [9]. Recently, other popular multiresolution-analysis-based methods have been proposed, such as the ATWT with context-based decision (CBD) injection model [10], the support value transform (SVT) [11], and Laplacian pyramids [12]. Compared with the component-substitution-based methods, the multiresolution-analysis-based methods preserve spectral information better. However, spatial distortions such as blurring and artifacts may occur [13].

Given the characteristics of satellite imaging, the PAN image and the MS image can be modeled as degraded versions of the high-resolution MS image. Based on these imaging models, the restoration-based methods address the remote sensing image fusion problem through optimization problems. Furthermore, regularization terms are necessary for restricting the solution, such as the inhomogeneous Gaussian Markov random field prior [14] and the constrained least squares [15]. In [16], Li and Yang proposed a remote sensing image fusion method based on compressive sensing (CS).

Manuscript received April 10, 2011; revised July 7, 2012, September 28, 2012, and November 13, 2012; accepted November 20, 2012. Date of publication February 1, 2013; date of current version August 30, 2013. This work was supported in part by the National Natural Science Foundation of China under Grant 61172161, by the Fundamental Research Funds for the Central Universities, Hunan University, and by the Scholarship Award for Excellent Doctoral Students granted by the Chinese Ministry of Education.
The authors are with the College of Electrical and Information Engineering, Hunan University, Changsha 410082, China (e-mail: [email protected]; [email protected]; [email protected]).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TGRS.2012.2230332

0196-2892 © 2012 IEEE



With the sparsity prior information, the high-resolution MS image is reconstructed through the CS theory. To enforce the sparsity, the dictionary used in [16] consists of image patches randomly sampled from high-resolution MS images. If enough patches are sampled, the dictionary constructed through this strategy contains abundant information about remote sensing images. Nevertheless, a large dictionary often leads to expensive computation.

In this paper, we focus on the restoration approach and propose a novel method based on sparse representation over learned dictionaries. The dictionaries for the PAN image and the low-resolution MS image are learned from the source images adaptively. Furthermore, a novel strategy is designed to construct the dictionary for the high-resolution MS image from the dictionaries for the PAN image and the low-resolution MS image. Through our dictionary learning method, the learned dictionaries can enforce that the PAN, low-resolution MS, and high-resolution MS images have the same sparse coefficients. These sparse coefficients are sought by the orthogonal matching pursuit (OMP) algorithm. Then, the fused MS image is reconstructed from the obtained sparse coefficients and the dictionary for the high-resolution MS image. Compared with the method presented in [16], our method has the following novelties. First, instead of randomly sampled patches, we adopt learned dictionaries, which can reduce the dimensionality of the dictionary, speed up the sparse decomposition, and improve the effectiveness and robustness of remote sensing image fusion. Second, the dictionaries are learned from the source images directly, which can improve their adaptability. Third, in our method, the dictionary for the high-resolution MS image is constructed from the dictionaries for the PAN image and the low-resolution MS image, which does not need a high-resolution MS training set. This strategy makes our method more practical. Generally, the choice of dictionary is crucial for sparse representation, as it directly affects the performance of the sparse representation. Therefore, the aforementioned three novelties indicate that our method offers significant improvements over [16].

This paper is organized as follows. In Section II, we briefly review the theory of sparse representation. The proposed method is presented in Section III. In particular, the relationships of the PAN image and the low-resolution MS image to the unknown high-resolution MS image are modeled. Then, the corresponding dictionary learning and construction strategies are discussed. The experimental results and comparisons are given in Section IV. The conclusions are drawn in Section V.

II. SPARSE REPRESENTATION

Sparse representation is a powerful tool to describe signals, which derives from the mechanism of human vision [17]. Recently, sparse representation has attracted much interest and has been applied to many image processing areas, such as image denoising [18] and image superresolution [19].

Generally, a natural image contains complicated and nonstationary information as a whole, while a local small image patch appears simple and has a consistent structure. For this reason, the small patch can be modeled more easily than the whole image. Let $x \in \mathbb{R}^n$ be a $\sqrt{n} \times \sqrt{n}$ image patch ordered lexicographically as a column vector. In sparse representation theory, the patch $x$ can be represented as a sparse linear combination of the columns of a dictionary $D \in \mathbb{R}^{n \times N}$ ($n < N$), i.e., $x = D\alpha$, where $\alpha \in \mathbb{R}^N$ is the sparse coefficient. The inequality $n < N$ implies that the dictionary $D$ is redundant. The sparsest $\alpha$ can be obtained through the following optimization problem:

$$\min_{\alpha} \|\alpha\|_0 \quad \text{subject to} \quad \|x - D\alpha\|_2^2 \le \varepsilon \qquad (1)$$

where $\|\cdot\|_0$ is the $\ell_0$ norm counting the number of nonzero entries in a vector and $\varepsilon \ge 0$ is the error tolerance. Solving (1) is non-deterministic polynomial-time hard (NP-hard) in general. Several relaxation strategies have been developed for approximating the solution of (1), such as basis pursuit [20], the focal underdetermined system solver [21], and OMP [22].

The choice of dictionary plays an important role in sparse representation. One approach focuses on mathematical models, such as the undecimated wavelet [23] and the discrete cosine transformation [24]. Another approach, which is widely studied at present, applies learning techniques to infer the dictionary from a training set, such as the method of optimal directions [25] and K-SVD [26]. Compared with a dictionary constructed from a mathematical model, a dictionary learned from a training set exhibits better performance in specific applications.

III. PROPOSED METHOD

A. PAN and MS Image Formation Models

In this paper, we assume that the PAN image $Y^{pan}$ and the low-resolution MS image $Y_j^{ms}$ ($j = 1, 2, \ldots, B$) have been registered, where $B$ is the number of spectral bands in the MS image. Let $x_j$ ($j = 1, 2, \ldots, B$) be the patches of size $\sqrt{n} \times \sqrt{n}$ extracted from the $j$th band $X_j$ of the unknown high-resolution MS image $X$ and ordered as column vectors. Then, $x = (x_1^T, \ldots, x_j^T, \ldots, x_B^T)^T \in \mathbb{R}^{Bn \times 1}$ denotes the unknown high-resolution MS patch with $B$ bands, where $T$ denotes the transpose of a vector or matrix. The PAN image covers all the wavelengths of the MS spectral bands, so the corresponding PAN patch $y^{pan} \in \mathbb{R}^{n \times 1}$ can be modeled as a linear combination of the spectral bands $x_j$ ($j = 1, 2, \ldots, B$), i.e.,

$$y^{pan} = \sum_{j=1}^{B} \omega_j x_j + n^{pan} \qquad (2)$$

where $\omega_j$ ($j = 1, 2, \ldots, B$) are the weights satisfying $\sum_{j=1}^{B} \omega_j = 1$ and $n^{pan}$ is assumed to be Gaussian noise. The weights $\omega_j$ can be calculated from the normalized spectral response curves [27]. By introducing the auxiliary variable

$$W = (\omega_1 I, \omega_2 I, \ldots, \omega_B I) \in \mathbb{R}^{n \times Bn} \qquad (3)$$

(2) can be equivalently transformed into

$$y^{pan} = Wx + n^{pan} \qquad (4)$$
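The weighted-sum model (2) and its matrix form (4) can be checked numerically. The following is a minimal sketch under assumed values: the band weights and patch size below are made up for illustration, not the calibrated weights obtained from the spectral response curves [27]. It builds $W$ as in (3) and verifies that $Wx$ reproduces the weighted band combination:

```python
import numpy as np

B, n = 4, 9                            # bands; pixels per 3 x 3 band patch
w = np.array([0.1, 0.2, 0.3, 0.4])     # assumed weights, summing to one

# W = (w_1 I, w_2 I, ..., w_B I) in R^{n x Bn}, as in (3)
W = np.hstack([wj * np.eye(n) for wj in w])

rng = np.random.default_rng(0)
bands = [rng.uniform(0, 255, n) for _ in range(B)]   # band patches x_j
x = np.concatenate(bands)                            # stacked patch x

# noise-free PAN patch: the sum in (2) and the product in (4) agree
y_pan_sum = sum(wj * xj for wj, xj in zip(w, bands))
y_pan_mat = W @ x
assert np.allclose(y_pan_sum, y_pan_mat)
assert W.shape == (n, B * n)
```

The block-diagonal-by-weight structure of $W$ is what later lets the PAN dictionary be derived from the high-resolution MS dictionary by a single matrix product.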
where $I \in \mathbb{R}^{n \times n}$ is an identity matrix.

The low-resolution MS image can be modeled as a degraded version of the unknown high-resolution MS image. The relationship between the high- and low-resolution MS image patches can be expressed as

$$y_j^{ms} = SH_j x_j + n_j^{ms}, \quad j = 1, 2, \ldots, B. \qquad (5)$$

In this formula, $y_j^{ms} \in \mathbb{R}^{(n/\gamma^2) \times 1}$ is the patch extracted from the $j$th band of the low-resolution MS image $Y_j^{ms}$ and ordered as a column vector, $\gamma$ is the spatial resolution ratio between the PAN image and the low-resolution MS image, $H_j$ is the blur filter for the $j$th band, $S$ denotes the decimation operator, and $n_j^{ms}$ is assumed to be the Gaussian noise for the $j$th band of the low-resolution MS image. Generally, a satellite imaging system has a different modulation transfer function (MTF) for each MS band. The MTF is bell shaped, and its magnitude at the cutoff Nyquist frequency is far lower than 0.5, to prevent aliasing [28]. Hence, $H_j$ ($j = 1, 2, \ldots, B$) is assumed to be an MTF-shaped filter with a different cutoff frequency for each band.

Let $y^{ms} = ((y_1^{ms})^T, \ldots, (y_j^{ms})^T, \ldots, (y_B^{ms})^T)^T \in \mathbb{R}^{(Bn/\gamma^2) \times 1}$. Equation (5) can be rewritten as

$$y^{ms} = L^{all} x + n^{ms} \qquad (6)$$

where $L^{all} = \mathrm{diag}(SH_1, \ldots, SH_j, \ldots, SH_B)$ and $n^{ms} = ((n_1^{ms})^T, \ldots, (n_j^{ms})^T, \ldots, (n_B^{ms})^T)^T$.

B. Fusion With Sparsity Prior Model

The task of our method is to reconstruct the unknown high-resolution MS image patches $x_j$ ($j = 1, 2, \ldots, B$) from the PAN image patch $y^{pan}$ and the low-resolution MS image patches $y_j^{ms}$ ($j = 1, 2, \ldots, B$) based on (4) and (6). However, (4) and (6) are underdetermined, and regularization terms need to be introduced. Due to the favorable statistical characteristics of sparsity, sparse regularization is applied to restrict the solution space. Sparse representation indicates that the unknown high-resolution MS image patch can be expressed as a linear combination of a few atoms. That is to say, the unknown high-resolution MS image patch $x \in \mathbb{R}^{Bn \times 1}$ can be represented as $x = \sum_{k=1}^{N} \alpha_k d_{hk}^{ms}$. The atoms $\{d_{hk}^{ms}\}_{k=1}^{N}$ constitute a dictionary $D_h^{ms} = (d_{h1}^{ms}, d_{h2}^{ms}, \ldots, d_{hN}^{ms}) \in \mathbb{R}^{Bn \times N}$ ($Bn < N$) for the high-resolution MS image. Set $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_N)^T \in \mathbb{R}^N$; then, the patch $x$ can be rewritten as $x = D_h^{ms}\alpha$. Referring to (4) and (6), the PAN image patch and the low-resolution MS image patch can be expressed as

$$y^{pan} = Wx + n^{pan} = WD_h^{ms}\alpha + n^{pan} = D^{pan}\alpha + n^{pan} \qquad (7)$$
$$y^{ms} = L^{all}x + n^{ms} = L^{all}D_h^{ms}\alpha + n^{ms} = D_l^{ms}\alpha + n^{ms} \qquad (8)$$

where $D^{pan} = WD_h^{ms}$ and $D_l^{ms} = L^{all}D_h^{ms}$ are the dictionaries for the PAN image and the low-resolution MS image, respectively. Equations (7) and (8) indicate that the unknown high-resolution MS, PAN, and low-resolution MS images have the same sparse coefficients with respect to the dictionaries $D_h^{ms}$, $D^{pan}$, and $D_l^{ms}$, respectively.

At this moment, we assume that the three dictionaries $D_h^{ms}$, $D^{pan}$, and $D_l^{ms}$ have been prepared. Then, the model of our method can be formulated as

$$\begin{cases} \alpha^* = \arg\min_{\alpha} \|\alpha\|_0 \ \ \text{subject to} \ \ \|y^{pan} - D^{pan}\alpha\|_2^2 \le \varepsilon_1, \ \|y^{ms} - D_l^{ms}\alpha\|_2^2 \le \varepsilon_2 \\ x = D_h^{ms}\alpha^* \end{cases} \qquad (9)$$

where $\varepsilon_1 \ge 0$ and $\varepsilon_2 \ge 0$ are the error tolerances for $y^{pan}$ and $y^{ms}$, respectively. The optimization problem in (9) can be approximately transformed into

$$\alpha^* = \arg\min_{\alpha} \|\alpha\|_0 \quad \text{subject to} \quad \left\| \begin{pmatrix} y^{pan} \\ y^{ms} \end{pmatrix} - \begin{pmatrix} D^{pan} \\ D_l^{ms} \end{pmatrix} \alpha \right\|_2^2 \le \varepsilon. \qquad (10)$$

At last, the fused high-resolution MS image patch can be calculated as $x = D_h^{ms}\alpha^*$. Due to its fast computation speed and low complexity, OMP is applied to solve (10). The proposed method is summarized as Algorithm 1.

C. Learning the Dictionaries $D^{pan}$ and $D_l^{ms}$

In this section, we describe a joint strategy for learning the dictionaries $D^{pan}$ and $D_l^{ms}$ from the training set. Let $\Omega = \{Z^{pan}, Z_l^{ms}\}$ be the training set, where $Z^{pan} = (z_1^{pan}, \ldots, z_i^{pan}, \ldots, z_{Num}^{pan})$ and $Z_l^{ms} = (z_1^{ms}, \ldots, z_i^{ms}, \ldots, z_{Num}^{ms})$ are the sets of PAN patches and low-resolution MS patches, respectively, $z_i^{pan}$ and $z_i^{ms}$ ($i = 1, 2, \ldots, Num$) are the $i$th samples of the PAN image patch and the low-resolution MS image patch with $B$ bands, and $Num$ denotes the number of samples.

According to (7) and (8), it can be seen that the PAN image and the MS image have the same sparse coefficients with respect to the dictionaries $D^{pan}$ and $D_l^{ms}$. Therefore, to enforce that the PAN image and the MS image have the same sparse coefficients, our dictionary learning task can be modeled as the following optimization problem:

$$\arg\min_{\{D^{pan}, D_l^{ms}, A\}} \|Z^{pan} - D^{pan}A\|_F^2 + \|Z_l^{ms} - D_l^{ms}A\|_F^2 \quad \text{subject to} \quad \forall i\ \|\alpha_i\|_0 \le \tau \qquad (11)$$

where $A = (\alpha_1, \alpha_2, \ldots, \alpha_{Num})$ is a sparse coefficient matrix and $\tau$ is a natural number controlling the sparsity level. By introducing the auxiliary variables

$$D^{train} = \begin{pmatrix} D^{pan} \\ D_l^{ms} \end{pmatrix}, \qquad Z = \begin{pmatrix} Z^{pan} \\ Z_l^{ms} \end{pmatrix} \qquad (12)$$

(11) can be rewritten as

$$\arg\min_{\{D^{train}, A\}} \|Z - D^{train}A\|_F^2 \quad \text{subject to} \quad \forall i\ \|\alpha_i\|_0 \le \tau. \qquad (13)$$

We apply the K-SVD algorithm to solve (13). In accordance with the solver used in (10), the OMP algorithm is applied in the sparse coding stage of K-SVD. In addition, the initial dictionaries used in K-SVD consist of randomly selected samples.
Algorithm 1: Image fusion via sparse representation

Input: The PAN image $Y^{pan}$, the low-resolution MS image $Y_j^{ms}$ ($j = 1, 2, \ldots, B$), three learned dictionaries $D_h^{ms} \in \mathbb{R}^{Bn \times N}$, $D^{pan} \in \mathbb{R}^{n \times N}$, and $D_l^{ms} \in \mathbb{R}^{(Bn/\gamma^2) \times N}$, the global error $\varepsilon$, the spatial resolution ratio $\gamma$ between the PAN image and the low-resolution MS image, the patch size $\sqrt{n}$ for the PAN image, and the patch size $\sqrt{n}/\gamma$ for the low-resolution MS image.
1: For each patch $y^{pan}$ of size $\sqrt{n} \times \sqrt{n}$ and corresponding patch $y_j^{ms}$ of size $(\sqrt{n}/\gamma) \times (\sqrt{n}/\gamma)$ extracted from $Y^{pan}$ and $Y_j^{ms}$ ($j = 1, 2, \ldots, B$), respectively:
  1) Solve the following problem by OMP:
  $$\alpha^* = \arg\min_{\alpha} \|\alpha\|_0 \quad \text{subject to} \quad \left\| \begin{pmatrix} y^{pan} \\ y^{ms} \end{pmatrix} - \begin{pmatrix} D^{pan} \\ D_l^{ms} \end{pmatrix} \alpha \right\|_2 \le \varepsilon$$
  where $y^{ms} = ((y_1^{ms})^T, \ldots, (y_j^{ms})^T, \ldots, (y_B^{ms})^T)^T$.
  2) Calculate the fused vector through $x = D_h^{ms}\alpha^*$.
  3) Reshape the vector $x$ into a high-resolution MS image patch of size $\sqrt{n} \times \sqrt{n} \times B$.
2: End
3: Generate the high-resolution MS image by averaging the overlapped pixels.
Output: High-resolution MS image $X$.

D. Learning the Dictionary $D_h^{ms}$

In this section, we design a novel strategy to construct the dictionary for the high-resolution MS image, since no real high-resolution MS image is available for dictionary learning.

Equations (7) and (8) imply that the dictionaries $D_h^{ms}$, $D^{pan}$, and $D_l^{ms}$ satisfy the relationships $D^{pan} = WD_h^{ms}$ and $D_l^{ms} = L^{all}D_h^{ms}$. Based on these relationships, the dictionary $D_h^{ms}$ for the unknown high-resolution MS image can be constructed from $D^{pan}$ and $D_l^{ms}$ through the following two optimization problems:

$$\min_{D_h^{ms}} \|D^{pan} - WD_h^{ms}\|_F^2 + \lambda\|D_h^{ms}\|_F^2 \qquad (14)$$
$$\min_{D_h^{ms}} \|D_l^{ms} - L^{all}D_h^{ms}\|_F^2. \qquad (15)$$

The solution of (14) can be calculated as $D_h^{ms} = (W^T W + \lambda I)^{-1} W^T D^{pan}$, which is used as the initial value for (15). Furthermore, due to the spectral characteristic of the MS image, the $i$th atom $d_{hi}^{ms}$ of the dictionary $D_h^{ms}$ can be partitioned as

$$d_{hi}^{ms} = \left( (d_{hi}^{ms}(1))^T, \ldots, (d_{hi}^{ms}(j))^T, \ldots, (d_{hi}^{ms}(B))^T \right)^T \qquad (16)$$

where $d_{hi}^{ms}(j) \in \mathbb{R}^{n \times 1}$ is the $j$th part of $d_{hi}^{ms}$ with respect to the $j$th spectral band. Similarly, the atoms of the dictionary $D_l^{ms}$ have the corresponding partition, i.e.,

$$d_{li}^{ms} = \left( (d_{li}^{ms}(1))^T, \ldots, (d_{li}^{ms}(j))^T, \ldots, (d_{li}^{ms}(B))^T \right)^T. \qquad (17)$$

Instead of (15), we study its subproblems

$$\min_{d_{hi}^{ms}(j)} \left\| d_{li}^{ms}(j) - SH_j d_{hi}^{ms}(j) \right\|_2^2, \quad j = 1, 2, \ldots, B; \ i = 1, 2, \ldots, N. \qquad (18)$$

The gradient descent method is used to solve (18), i.e.,

$$d_{hi}^{ms}(j)^{(t)} = d_{hi}^{ms}(j)^{(t-1)} + \left( \left( d_{li}^{ms}(j) - SH_j d_{hi}^{ms}(j)^{(t-1)} \right) \uparrow \gamma \right) * p \qquad (19)$$

where $d_{hi}^{ms}(j)^{(t)}$ is the $t$th iterative value, $\uparrow \gamma$ denotes upsampling by a factor of $\gamma$, $p$ is a back-projection filter, and $*$ denotes convolution. In addition, the filter $p$ is set to a Gaussian filter. Then, at the $t$th iteration, $D_h^{ms}$ can be computed as

$$(D_h^{ms})^{(t)} = \begin{pmatrix} d_{h1}^{ms}(1)^{(t)} & \cdots & d_{hi}^{ms}(1)^{(t)} & \cdots & d_{hN}^{ms}(1)^{(t)} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ d_{h1}^{ms}(j)^{(t)} & \cdots & d_{hi}^{ms}(j)^{(t)} & \cdots & d_{hN}^{ms}(j)^{(t)} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ d_{h1}^{ms}(B)^{(t)} & \cdots & d_{hi}^{ms}(B)^{(t)} & \cdots & d_{hN}^{ms}(B)^{(t)} \end{pmatrix} \qquad (20)$$

Based on Sections III-C and III-D, the dictionary learning method for $D_h^{ms}$, $D^{pan}$, and $D_l^{ms}$ can be summarized as Algorithm 2.

Algorithm 2: Learning the dictionaries $D_h^{ms}$, $D^{pan}$, and $D_l^{ms}$

Input: The training set $\Omega = \{Z^{pan}, Z_l^{ms}\}$, the sparsity level $\tau$, the spatial resolution ratio $\gamma$ between the PAN image and the low-resolution MS image, the iteration number $Iter$, and the weight matrix $W = (\omega_1 I, \omega_2 I, \ldots, \omega_B I)$.
1: Learn the dictionaries $D^{pan}$ and $D_l^{ms}$: the dictionaries $D^{pan}$ and $D_l^{ms}$ are learned based on (13), which is solved by the K-SVD algorithm.
2: Learn the dictionary $D_h^{ms}$:
  1) Compute the solution of (14) as
  $$\hat{D}_h^{ms} = (W^T W + \lambda I)^{-1} W^T D^{pan}$$
  and set $(D_h^{ms})^{(0)} = \hat{D}_h^{ms}$.
  2) For $t = 1 : Iter$
     The $i$th ($i = 1, 2, \ldots, N$) atom in $(D_h^{ms})^{(t-1)}$ is updated through
  $$d_{hi}^{ms}(j)^{(t)} = d_{hi}^{ms}(j)^{(t-1)} + \left( \left( d_{li}^{ms}(j) - SH_j d_{hi}^{ms}(j)^{(t-1)} \right) \uparrow \gamma \right) * p, \quad j = 1, 2, \ldots, B$$
     and $(D_h^{ms})^{(t)}$ is computed as in (20).
     End
Output: Three dictionaries $D_h^{ms}$, $D^{pan}$, and $D_l^{ms}$.

IV. EXPERIMENTS AND PERFORMANCE COMPARISONS

A. Experimental Data Sets

To evaluate the effectiveness of the proposed method, we consider two data sets acquired by the QuickBird and IKONOS satellites, respectively.

1) QuickBird: The QuickBird satellite provides a PAN image at 0.7-m spatial resolution and an MS image with four bands at 2.8-m spatial resolution. The images of Sundarbans acquired on November 21, 2002, including various land cover types, are used; they were downloaded from http://glcf.umiacs.umd.edu/data/quickbird/sundarbans.shtml.
TABLE I
NYQUIST CUTOFF FREQUENCIES OF QUICKBIRD AND IKONOS FOR DIFFERENT SPECTRAL BANDS

2) IKONOS: The IKONOS satellite provides a PAN image at 1-m spatial resolution and an MS image with four bands at 4-m spatial resolution. The IKONOS images captured over Sichuan, China, in May 2008 are used; they were downloaded from http://glcf.umiacs.umd.edu/data/ikonos/index.shtml.

B. Parameter Setting

To evaluate the proposed method, both simulated and real experiments are performed. In the simulated experiments, the original PAN image and MS image are degraded first. Then, the degraded PAN image and MS image are fused, and the original MS image is used as the reference image. Through the simulated experiments, the effects of the parameters on the proposed method and comparisons of the various methods are analyzed and discussed. In the real experiments, the original real PAN image and MS image are fused, which aims to evaluate the performance of the various methods in practice.

For the QuickBird and IKONOS satellites, the spatial resolution ratio between the PAN image and the low-resolution MS image is four. Therefore, in the simulated experiments, the degraded PAN image is generated by downsampling the original PAN image by a factor of four. The degraded MS image is obtained by filtering the original MS image with MTF-shaped filters and then downsampling by a factor of four. Approximated Gaussian filters with different Nyquist cutoff frequencies simulate the MTF of the satellite. The Nyquist cutoff frequencies of QuickBird and IKONOS for the different spectral bands are listed in Table I [29].

The proposed method is compared with six popular methods: the Gram–Schmidt (GS) algorithm [30], the fast IHS (FIHS) [3], the SVT [11], the ATWT-based method with the CBD injection model [10], the AWLP [9], and the CS-based method [16]. The GS method is implemented by the software Environment for Visualizing Images [30]. The decomposition level of the ATWT for the CBD- and AWLP-based methods is set to two. As to the SVT method, the $\sigma^2$ in the Gaussian radial basis function kernel is set to 1.2, and the parameter $\gamma$ of the mapped least-squares support vector machine (LS-SVM) is set to one, which gives the best fusion results for the SVT. For our method, the global error $\varepsilon$ of the OMP is generally chosen as $\sqrt{n} \cdot C \cdot \sigma$ [18] in the noisy case, where $n$ is the length of the signal, $C$ is a constant, and $\sigma$ denotes the noise level. If the source images are clean, the global $\varepsilon$ is set to one. The weights in (2) are set as follows [15], [31]: 1) $\omega_1 = 0.1139$, $\omega_2 = 0.2315$, $\omega_3 = 0.2308$, and $\omega_4 = 0.4239$ for QuickBird and 2) $\omega_1 = 0.1071$, $\omega_2 = 0.2646$, $\omega_3 = 0.2696$, and $\omega_4 = 0.3587$ for IKONOS. To improve the adaptability of the dictionaries $D_h^{ms}$, $D^{pan}$, and $D_l^{ms}$, the dictionaries are learned from the source images adaptively, i.e., the training samples are chosen from the source images directly. In the stage of learning the dictionary $D_h^{ms}$, the iteration number $Iter$ controls the stopping criterion; it is set to ten in the experiments.

C. Quality Assessment Indexes

In order to quantitatively assess the fusion performance, various quality indexes are used. In the simulated experiments, five quality indexes with reference are considered. The correlation coefficient (CC) [32] and the root-mean-square error (RMSE) between the fused MS image and the reference MS image are calculated for each band. The average spectral angle mapper (SAM) [33], the erreur relative globale adimensionnelle de synthèse (relative dimensionless global error in synthesis, ERGAS) [34], and Q4 [35] are computed over all spectral bands. In the real experiments, the "quality with no reference" (QNR) index [36] is used, which consists of the spectral distortion index $D_\lambda$ and the spatial distortion index $D_s$. The best values of CC, Q4, and QNR are one. The best values of RMSE, SAM, ERGAS, $D_\lambda$, and $D_s$ are zero.

D. Effects of Patch Size and Dictionary Size

In this section, the effects of patch size and dictionary size on the fusion performance are evaluated. The experiments are implemented on the degraded PAN image and the degraded MS image. The original MS image is used as the reference image. The CC, RMSE, SAM, ERGAS, and Q4 indexes are used to assess the quality of the fused images.

First, we consider the effect of patch size, which directly determines the amount of information contained in each patch. For this evaluation, the dictionary size is first fixed at 512. Three different patch sizes for the low-resolution MS image are studied: 2 × 2, 3 × 3, and 4 × 4. The corresponding patch sizes of the PAN image are 8 × 8, 12 × 12, and 16 × 16, respectively. Two pairs of remote sensing images shown in Fig. 1(a) and (b) (QuickBird images) and Fig. 1(c) and (d) (IKONOS images) are fused by the proposed method with the different patch sizes. Then, the quality indexes are calculated, where the average CC and RMSE over the four bands are presented. In addition, all the index values are normalized to the range [0, 1]. The normalized results with respect to the different patch sizes are plotted in Fig. 2, where the horizontal axis is the patch size of the low-resolution MS image and the vertical axis is the normalized result. Larger CC and Q4 indicate a better fused result, and smaller RMSE, SAM, and ERGAS indicate a better fused result. Based on the curves in Fig. 2, it can be seen that the performance of the proposed method improves as the patch size increases. However, our method with a bigger patch size needs more computation time.

Second, the effect of dictionary size, i.e., the number of atoms in the dictionary, is analyzed. Four different dictionary sizes, namely, 256, 512, 1024, and 1536, are considered. The patch sizes of the low-resolution MS and PAN images are fixed at 3 × 3 and 12 × 12, respectively. The MS image and PAN image shown in Fig. 1(a) and (b) and in Fig. 1(c) and (d) are fused by the proposed method with the different dictionary sizes. Fig. 3 exhibits the performance of the proposed method with the different dictionary sizes. From Fig. 3, it can be seen that the proposed method with a larger dictionary yields better performance at a higher computational cost.
Fig. 1. Two pairs of degraded remote sensing images. (a) Degraded MS image (red, green, blue bands) of QuickBird. (b) Degraded PAN image of QuickBird. (c) Degraded MS image (red, green, blue bands) of IKONOS. (d) Degraded PAN image of IKONOS.

Fig. 2. Performance of the proposed method with different patch sizes. The symbols "QB" and "IK" denote QuickBird and IKONOS, respectively.

Fig. 3. Performance of the proposed method with different dictionary sizes. The symbols "QB" and "IK" denote QuickBird and IKONOS, respectively.

Our method is implemented on local overlapping image patches. However, all MS bands contain aliasing patterns introduced by the spaceborne sensors, and insufficient overlap can make a fusion method based on sparse representation sensitive to these aliasing patterns: the aliasing patterns in the source image may be transferred into, or even enhanced in, the final results. One direct and feasible strategy to overcome the aliasing patterns is to increase the patch size. Figs. 2 and 3 show that the performance of our method can be further improved by increasing the patch size and the dictionary size, but at a higher computational cost. Therefore, the choices of patch size and dictionary size involve a tradeoff between performance and computation time. In the following experiments, the patch size of the low-resolution MS image and the dictionary size are set to 3 × 3 and 1024, respectively.

E. Simulated Experimental Results

First, the performance of the proposed method is evaluated on a pair of simulated QuickBird images. Fig. 4(a) and (b) shows the low-resolution MS image with a resolution of 11.2 m and the PAN image with a resolution of 2.8 m, respectively. The original MS image at 2.8-m resolution is used as the reference image, as shown in Fig. 4(c). The fused images of the various methods are reported in Fig. 4(d)–(j). In addition, a rectangular region containing trees and buildings is magnified and placed at the bottom left of each fused image. Fig. 4(d) and (e) shows the results of the GS and FIHS methods, respectively, which exhibit spectral distortion. The results of the SVT, CBD, AWLP, CS, and proposed methods are shown in Fig. 4(f)–(j), respectively. Compared with the reference image [Fig. 4(c)], the SVT, CBD, AWLP, CS, and proposed methods preserve the spectral information effectively but lose some spatial details. Focusing on the magnified regions, our method is comparable to the SVT, CBD, AWLP, and CS methods in preserving the details. Table II lists the quality assessment of the results in Fig. 4. The best result for each quality index is labeled in bold. The CC and RMSE results demonstrate that the proposed method produces the best match to the reference image except in the near-infrared band. As to the SAM index, the AWLP method provides the best result. The SVT method surpasses our method slightly in terms of the ERGAS index. Concerning the Q4 index, which measures both spectral and radiometric distortions, the proposed method generates the best result.

Second, the performance of the proposed method is evaluated on a pair of simulated IKONOS images. Fig. 5(a) and (b) shows a pair of simulated images with resolutions of 16 and 4 m, respectively. Fig. 5(c) shows the original MS image at 4-m resolution, which is used as the reference image. Fig. 5(d)–(j) shows the fused images produced by the various fusion methods. The bottom left of each fused image is a magnified rectangular region containing mountain vegetation and a canyon. Fig. 5(d) shows the result provided by the GS method, which exhibits unnatural color. Fig. 5(e), the result of the FIHS method, loses the most detail. From the magnified regions, it can be seen that the AWLP method generates the best contrast. However, some information about the canyon cannot be preserved effectively by the AWLP method. By contrast, our method preserves the characteristics of the canyon best, but the fused image has some aliasing patterns. Table III presents the quality indexes of the results in Fig. 5. Although the AWLP method generates the
LI et al.: REMOTE SENSING IMAGE FUSION VIA SPARSE REPRESENTATIONS OVER LEARNED DICTIONARIES 4785
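As a concrete illustration of the patch-size tradeoff discussed above, the following sketch (Python/NumPy, not the authors' Matlab code) extracts all overlapping patches from an image. Larger patches yield higher-dimensional sparse-coding problems over fewer patches, which is where the extra computational cost arises.

```python
import numpy as np

def extract_patches(img, patch_size, step=1):
    """Vectorize every overlapping patch_size x patch_size patch,
    sliding by `step` pixels, as the columns of a matrix."""
    h, w = img.shape
    cols = [img[i:i + patch_size, j:j + patch_size].ravel()
            for i in range(0, h - patch_size + 1, step)
            for j in range(0, w - patch_size + 1, step)]
    return np.array(cols).T   # shape: (patch_size**2, number_of_patches)

img = np.arange(64.0).reshape(8, 8)
P3 = extract_patches(img, 3)   # 3 x 3 patches, as used for the MS bands
P5 = extract_patches(img, 5)   # larger patches: fewer but higher dimensional
print(P3.shape, P5.shape)      # (9, 36) (25, 16)
```

With a sliding step of 1, the patch count shrinks and the patch dimension grows quadratically as the patch size increases, so the cost of sparse decomposition per patch rises accordingly.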

Fig. 4. Simulated QuickBird images and fused results by different methods. (a) Degraded MS image at 11.2-m spatial resolution. (b) Degraded PAN image at
2.8-m spatial resolution. (c) Original MS image at 2.8-m spatial resolution. (d) GS method. (e) FIHS method. (f) SVT method. (g) CBD method. (h) AWLP method.
(i) CS method. (j) Proposed method.
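The degraded inputs used in the simulated experiments are obtained by reducing the reference image by the resolution ratio of 4 (e.g., 2.8-m QuickBird MS to 11.2-m input). A minimal sketch of such a simulation follows; the Gaussian blur standing in for the sensor modulation transfer function, and its kernel parameters, are assumptions for illustration only, not the MTF-tailored filters referenced in the paper.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """Normalized 2-D Gaussian kernel (illustrative stand-in for the MTF)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def degrade(band, ratio=4, sigma=1.5):
    """Blur a band and decimate it by `ratio`, simulating a low-resolution input."""
    k = gaussian_kernel(sigma=sigma)
    pad = k.shape[0] // 2
    padded = np.pad(band, pad, mode='reflect')
    h, w = band.shape
    blurred = np.empty_like(band, dtype=float)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return blurred[::ratio, ::ratio]

band = np.random.default_rng(0).random((64, 64))
low = degrade(band)        # 64 x 64 reference -> 16 x 16 simulated input
print(low.shape)           # (16, 16)
```

The original full-resolution band then serves as the reference image against which the fused result is scored.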
TABLE II
QUANTITATIVE ASSESSMENT OF THE FUSION RESULTS IN FIG. 4

Fig. 5. Simulated IKONOS images and fused results by different methods. (a) Degraded MS image at 16-m spatial resolution. (b) Degraded PAN image at 4-m
spatial resolution. (c) Original MS image at 4-m spatial resolution. (d) GS method. (e) FIHS method. (f) SVT method. (g) CBD method. (h) AWLP method.
(i) CS method. (j) Proposed method.
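For reference, the reference-based indexes reported in Tables II and III (CC, RMSE, SAM, and ERGAS) can be computed from their standard definitions as sketched below. This is an independent illustration, not the evaluation code used in the paper; the ERGAS resolution ratio h/l = 1/4 reflects the fourfold scale gap between the PAN and MS images.

```python
import numpy as np

def cc(x, y):
    """Correlation coefficient between a fused band and the reference band."""
    x, y = x.ravel() - x.mean(), y.ravel() - y.mean()
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

def rmse(x, y):
    """Root-mean-square error between two bands."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def sam(fused, ref):
    """Spectral angle mapper (mean angle in degrees); inputs are (h, w, bands)."""
    dot = np.sum(fused * ref, axis=2)
    norms = np.linalg.norm(fused, axis=2) * np.linalg.norm(ref, axis=2)
    ang = np.arccos(np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0))
    return float(np.degrees(ang).mean())

def ergas(fused, ref, ratio=0.25):
    """ERGAS = 100 * (h/l) * sqrt(mean_k (RMSE_k / mu_k)^2), ratio = h/l."""
    terms = [(rmse(fused[..., k], ref[..., k]) / ref[..., k].mean()) ** 2
             for k in range(ref.shape[2])]
    return float(100.0 * ratio * np.sqrt(np.mean(terms)))

ref = np.random.default_rng(1).random((32, 32, 4)) + 0.5
perfect = ref.copy()   # a hypothetical distortion-free fused image
print(cc(perfect[..., 0], ref[..., 0]), sam(perfect, ref), ergas(perfect, ref))
# a distortion-free result scores CC = 1 with SAM and ERGAS near 0
```

The Q4 index, which additionally couples the four bands through a quaternion-valued quality measure, is omitted from this sketch for brevity.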

TABLE III
QUANTITATIVE ASSESSMENT OF THE FUSION RESULTS IN FIG. 5

Fig. 6. Real QuickBird images and fused results by different methods. (a) MS image at 2.8-m spatial resolution. (b) PAN image at 0.7-m spatial resolution.
(c) GS method. (d) FIHS method. (e) SVT method. (f) CBD method. (g) AWLP method. (h) Proposed method.

TABLE IV
QUANTITATIVE ASSESSMENT OF THE FUSION RESULTS IN FIG. 6

best SAM result, the best CC, RMSE, ERGAS, and Q4 results demonstrate the superiority of our method.

F. Real Experimental Results

In this section, we evaluate the proposed method on real remote sensing images. The quality assessment indexes that require no reference image, namely, Dλ, Ds, and QNR, are used to evaluate the fused images objectively.

Fig. 6(a) and (b) shows a pair of real QuickBird images at 2.8-m (MS image) and 0.7-m (PAN image) resolutions, respectively. The fused images of all tested fusion methods are shown in Fig. 6(c)–(h). As in the simulated experiments, the GS and FIHS methods generate spectral distortions, as shown in Fig. 6(c) and (d). The SVT, CBD, AWLP, and proposed methods generate high-resolution MS images with satisfactory spectral preservation, as shown in Fig. 6(e)–(h). A magnified rectangular region containing some trees is presented at the bottom left of each fused image. In this region, it is difficult to reconstruct the trees because of the large black area around them in the PAN image. From the magnified regions, we can see that the trees provided by our method

Fig. 7. Real IKONOS images and fused results by different methods. (a) MS image at 4-m spatial resolution. (b) PAN image at 1-m spatial resolution.
(c) GS method. (d) FIHS method. (e) SVT method. (f) CBD method. (g) AWLP method. (h) Proposed method.
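The no-reference indexes used in this section can be sketched as follows, following the standard QNR construction from the universal image quality index (UIQI). The global (non-sliding-window) UIQI and the unit exponents are simplifying assumptions for illustration; the published protocol computes Q over local windows.

```python
import numpy as np

def uiqi(x, y):
    """Universal image quality index Q, computed globally here
    (the published protocol uses sliding windows)."""
    x, y = x.ravel(), y.ravel()
    c = np.cov(x, y)
    num = 4.0 * c[0, 1] * x.mean() * y.mean()
    den = (c[0, 0] + c[1, 1]) * (x.mean() ** 2 + y.mean() ** 2)
    return num / den

def qnr_indexes(fused, ms, pan, pan_low):
    """D_lambda, D_s, and QNR = (1 - D_lambda) * (1 - D_s),
    with all exponents set to 1 (a simplifying assumption)."""
    n = ms.shape[2]
    dl = sum(abs(uiqi(fused[..., k], fused[..., r]) - uiqi(ms[..., k], ms[..., r]))
             for k in range(n) for r in range(n) if r != k) / (n * (n - 1))
    ds = np.mean([abs(uiqi(fused[..., k], pan) - uiqi(ms[..., k], pan_low))
                  for k in range(n)])
    return dl, float(ds), (1.0 - dl) * (1.0 - float(ds))

rng = np.random.default_rng(2)
pan = rng.random((64, 64))
pan_low = pan[::4, ::4]                                  # stand-in for a degraded PAN
ms = rng.random((16, 16, 4))
fused = np.repeat(np.repeat(ms, 4, axis=0), 4, axis=1)   # ideal pixel replication
dl, ds, q = qnr_indexes(fused, ms, pan, pan_low)
print(dl < 1e-6)  # spectral distortion vanishes when interband relations are preserved
```

Dλ penalizes changes in the interband relationships of the MS bands, Ds penalizes mismatch with the PAN image, and QNR approaches 1 as both distortions vanish.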

TABLE V
QUANTITATIVE ASSESSMENT OF THE FUSION RESULTS IN FIG. 7

are more natural. Table IV shows the QNR index results of the corresponding fused images in Fig. 6. Our method generates the best Dλ, Ds, and QNR values.

Another pair of real IKONOS images is used to evaluate the performance of our method. Fig. 7(a) and (b) shows the real MS and PAN images at 4- and 1-m spatial resolutions, respectively. The corresponding results are presented in Fig. 7(c)–(h). The colors of the images in Fig. 7(c) and (d), generated by the GS and FIHS methods, are unnatural. A rectangular region around a hazy branch of the river is extracted from the fused images and magnified. From the magnified regions, it can be seen that our method is comparable to the SVT, CBD, and AWLP methods in rendering the river branch. In addition, the fused image quality is evaluated, and the corresponding results are presented in Table V. Although the CBD method provides a better result in terms of the Dλ index, the best Ds and QNR values indicate that our method generates the fused image with the smallest spectral and spatial distortions overall.

Finally, experiments on noisy source images are performed to study the robustness of the proposed method. A pair of clean real QuickBird images is corrupted by Gaussian noise, and the standard deviation of the Gaussian distribution is regarded as the noise level. Fig. 8(a) and (b) shows the noisy MS and PAN images with noise level σ = 10, respectively. Fig. 8(c)–(h) shows the corresponding results of the various methods. As can be seen, our method provides a more natural fused result [Fig. 8(h)] with fewer artifacts than the other methods [Fig. 8(c)–(g)].

All the experiments are implemented in Matlab 7.10 and run on a Pentium 2.93-GHz PC with 2-GB memory. For fusing a PAN image of size 256 × 256 with an MS image of size 64 × 64 × 4, our method may take about 15 min. The GS, FIHS, SVT, and AWLP methods need less than 1 s, and the running time of the CBD method is about 30 s. Compared with the component-substitution- and multiresolution-analysis-based methods, the proposed method is time consuming. However, the algorithm can be dramatically sped up with a graphics processing unit (GPU).

V. CONCLUSION

In this paper, a restoration-based remote sensing image fusion method has been developed with sparsity regularization. In our dictionary learning method, the dictionaries for the PAN image and the low-resolution MS image are learned from the source images adaptively, and the dictionary for the unknown high-resolution MS image is constructed from them. The learned dictionaries reduce the dictionary dimensionality, speed up the sparse decomposition, and improve the effectiveness and robustness of remote sensing image fusion. Our method can

Fig. 8. Noisy source images and fused results by different methods. (a) Noisy MS image at 2.8-m spatial resolution (σ = 10). (b) Noisy PAN image at 0.7-m
spatial resolution (σ = 10). (c) GS method. (d) FIHS method. (e) SVT method. (f) CBD method. (g) AWLP method. (h) Proposed method.
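Corrupting the clean source pair for this robustness test amounts to adding zero-mean Gaussian noise at the stated level; a minimal sketch follows, assuming an 8-bit intensity range (the paper does not state the scale explicitly).

```python
import numpy as np

def add_gaussian_noise(img, sigma=10.0, seed=0):
    """Corrupt an 8-bit image with zero-mean Gaussian noise whose
    standard deviation sigma is the 'noise level' of the experiment."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 255.0)          # keep values in the valid range

clean = np.full((64, 64), 128.0)               # a flat stand-in for a clean band
noisy = add_gaussian_noise(clean, sigma=10.0)
print(round(float(noisy.std()), 1))            # close to the noise level of 10
```

The same corruption is applied to every MS band and to the PAN image before fusion.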

provide results comparable to those of other state-of-the-art methods, such as AWLP, while offering an alternative approach to remote sensing image fusion, albeit at a higher computational cost. In addition, the noisy source image experiments show the robustness of the proposed method. In future work, we will focus on fast sparse recovery algorithms and multiscale dictionary learning to improve the efficiency and effectiveness of the proposed method.

ACKNOWLEDGMENT

The authors would like to thank the editor and the anonymous reviewers for their helpful comments, which led to a significant improvement in both the presentation and the quality of this paper. The authors would also like to thank Prof. S. Zheng for providing the code of the support value transform-based method and Dr. M. Vega for providing the code of the modulation transfer function.

Shutao Li (M’07) received the B.S., M.S., and Ph.D. degrees in electrical engineering from Hunan University, Changsha, China, in 1995, 1997, and 2001, respectively.
From May to October 2001, he was a Research Associate with the Department of Computer Science, The Hong Kong University of Science and Technology, Kowloon, Hong Kong. From November 2002 to November 2003, he was a Postdoctoral Fellow with the Royal Holloway College, University of London, London, U.K. Since 2001, he has been with the College of Electrical and Information Engineering, Hunan University, where he is currently a Full Professor. He has authored or coauthored more than 130 refereed papers. His research interests include information fusion, image processing, and pattern recognition.

Haitao Yin received the B.S. and M.S. degrees in applied mathematics from Hunan University, Changsha, China, in 2007 and 2009, respectively, where he is currently working toward the Ph.D. degree in the College of Electrical and Information Engineering.
His research interests include image processing and sparse representation.

Leyuan Fang (S’10) received the B.S. degree in electrical engineering from Hunan University of Science and Technology, Xiangtan, China, in 2008. Since 2008, he has been working toward the Ph.D. degree in the College of Electrical and Information Engineering, Hunan University, Changsha, China.
Since September 2011, he has been a Visiting Ph.D. Student with the Department of Ophthalmology, Duke University, Durham, NC, supported by the China Scholarship Council. His research interests include sparse representation and multiresolution analysis applied to biomedical images and remote sensing images.
