


International Journal of Remote Sensing

ISSN: 0143-1161 (Print) 1366-5901 (Online) Journal homepage: http://www.tandfonline.com/loi/tres20

A refined classification approach by integrating Landsat Operational Land Imager (OLI)
and RADARSAT-2 imagery for land-use and land-cover mapping in a tropical area

Maher Ibrahim Sameen, Faten Hamed Nahhas, Faez Hussein Buraihi,
Biswajeet Pradhan & Abdul Rashid B. Mohamed Shariff

To cite this article: Maher Ibrahim Sameen, Faten Hamed Nahhas, Faez Hussein Buraihi,
Biswajeet Pradhan & Abdul Rashid B. Mohamed Shariff (2016) A refined classification approach
by integrating Landsat Operational Land Imager (OLI) and RADARSAT-2 imagery for land-use
and land-cover mapping in a tropical area, International Journal of Remote Sensing, 37:10,
2358-2375, DOI: 10.1080/01431161.2016.1176273

To link to this article: http://dx.doi.org/10.1080/01431161.2016.1176273

Published online: 06 May 2016.




A refined classification approach by integrating Landsat Operational Land Imager (OLI)
and RADARSAT-2 imagery for land-use and land-cover mapping in a tropical area

Maher Ibrahim Sameen(a), Faten Hamed Nahhas(a), Faez Hussein Buraihi(a),
Biswajeet Pradhan(a,b) and Abdul Rashid B. Mohamed Shariff(c)

(a) Department of Civil Engineering, Geospatial Information Science Research Center (GISRC),
Faculty of Engineering, Universiti Putra Malaysia, UPM, Serdang, Malaysia; (b) Department of
Geoinformation Engineering, Choongmu-gwan, Sejong University, Seoul, South Korea;
(c) Department of Biological and Agricultural Engineering, Geospatial Information Science
Research Center (GISRC), Faculty of Engineering, Universiti Putra Malaysia, UPM, Serdang, Malaysia

ABSTRACT
(Received 12 June 2015; Accepted 27 March 2016)

Producing accurate land-use and land-cover (LULC) maps from
optical remote-sensing data alone is a long-standing challenge,
especially in tropical regions due to the presence of clouds. To
supplement this, RADARSAT images can be useful in assisting
LULC mapping. The fusion of optical and active remote-sensing
data is important for accurate LULC mapping because the data
from different parts of the spectrum provide complementary infor-
mation and often lead to increased classification accuracy. Also,
the timeliness of using synthetic aperture radar (SAR) fills informa-
tion gaps during overcast or hazy periods. Therefore, this research
designed a refined classification procedure for LULC mapping for
tropical regions. Determining the best method for mapping with a
specific data source and study area is a major challenge because
of the wide range of classification algorithms and methodologies
available. In this study, different combinations and the potential of
Landsat Operational Land Imager (OLI) and RADARSAT-2 SAR data
were evaluated to select the best procedure for LULC classifica-
tion. Results showed that the best filter for SAR speckle reduction
is the 5 × 5 enhanced Lee. Furthermore, image-sharpening algo-
rithms were employed to fuse Landsat multispectral and panchro-
matic bands and subsequently these algorithms were analysed in
detail. The findings also confirmed that Gram–Schmidt (GS) per-
formed better than the other techniques employed. Fused Landsat
data and SAR images were then integrated to produce the LULC
map. Different classification algorithms were adopted to classify
the integrated Landsat and SAR data, and the maximum likelihood
classifier (MLC) was considered the best approach. Finally, a suitable
classification procedure was designed and proposed for
LULC mapping in tropical regions based on the results
obtained. An overall accuracy of 98.62% was achieved from the
proposed methodology. The proposed methodology is a useful
tool in industry for mapping purposes. Additionally, it is also useful
for researchers, who could extend the method for different data
sources and regions.

CONTACT Maher Ibrahim Sameen and Biswajeet Pradhan, [email protected], [email protected],
Department of Civil Engineering, Universiti Putra Malaysia, UPM, Serdang, Malaysia
© 2016 Informa UK Limited, trading as Taylor & Francis Group

1. Introduction
Accurate and reliable land-use and land-cover (LULC) classification is essential for a wide
range of applications (Werner, Storie, and Storie 2014). Such a classification has been
adopted for various applications, including global change studies, land monitoring, updating
geographical information databases, prediction of urban expansion, and natural hazard modelling
(Cihlar and Jansen 2001; Sang et al. 2014; Otukei and Blaschke 2010; Lu et al. 2011).
Remote sensing is used to determine LULC with a proper data source and classification
approach. The most common technique for LULC mapping is image classification, which
is a complicated process that considers several factors. The primary procedure in image
classification may include determination of a suitable classification system (Lu and Weng
2007), selection of training samples (Lu and Weng 2007), image pre-processing, feature
extraction, selection of suitable classification approaches, post-classification processing,
and accuracy assessment (Lu and Weng 2007).
Selecting a suitable classification algorithm is critical for achieving accurate LULC
mapping. In LULC mapping, numerous algorithms, techniques, and methodologies have
been used, including the maximum likelihood classifier (MLC) (Gevana et al. 2015) and
advanced algorithms such as artificial neural networks (Elatawneh et al. 2014; Iounousse
et al. 2015), decision trees (Löw, Conrad, and Michel 2015; Chasmer et al. 2014), support
vector machines (SVMs) (Singh et al. 2014), object-based algorithms (Tehrany, Pradhan,
and Jebuv 2014), data fusion techniques (Ghosh, Sharma, and Joshi 2014), and sensor
integration methods (Lucas et al. 2014). Nevertheless, scholars have always faced chal-
lenges in terms of deciding which classification algorithm to use (Srivastava et al. 2012).
No clear guidelines on selecting suitable algorithms and techniques for adoption in
practical projects have been established in the literature (Srivastava et al. 2012). The
user’s needs, scale of the study area, economic conditions, and analyst’s skills are
important factors that influence the selection of remotely sensed data, design of the
classification procedure, and quality of the classification results. Another important
factor that influences data selection is atmospheric conditions (Lu and Weng 2007).
The frequent cloudy conditions in tropical regions are often a problem in capturing
high-quality optical sensor images. In this event, active remote-sensing data are
regarded as a relevant supplementary data source. Therefore, integrating multi-sensory
data with various image characteristics could improve the quality of LULC mapping.
Colditz et al. (2006) investigated the influence of image fusion techniques on spectral
classification algorithms and clarified the accuracy of such techniques via Landsat 7
ETM+ imagery. In their study, fusion techniques such as the Brovey transform (BT) fusion
technique, hue-saturation-value (HSV) transform, and principal component analysis
(PCA) were evaluated. The findings revealed that the adaptive image fusion approach
presented the best results with low noise content. Such an observation resulted in a
major improvement compared to reference data, particularly along object edges.
Acceptable results were achieved via wavelet transform and PCA methods. BT and
HSV image fusion techniques performed poorly and hence these could not be recom-
mended for classifying fused imagery when Landsat 7 ETM+ data are used. Liu and
Zhang (2009) compared data fusion techniques for Beijing micro-satellite images. In
particular, the performance of various data fusion techniques, such as intensity-hue-
saturation (IHS), BT, Gram–Schmidt (GS), wavelet, IHS-wavelet, PCA-wavelet, and high-pass filter (HPF),
was evaluated. Their study determined that all fused images had a larger entropy value
than the original multispectral images. Therefore data fusion techniques enhanced the
spatial details, thereby allowing easy identification of the object. The BT image was
extremely dark, in which the ground features could hardly be identified. The HPF image
was exceedingly fuzzy and had a very poor visual effect. The IHS image lost certain
spectral information, whereas the IHS-wavelet method retained it better. The PCA method had some
spectral distortion because it did not extract a component representing the spatial
details from the image. Moreover, the PCA-wavelet technique performed better
than PCA.
SAR-optical data fusion is important for various applications for two main reasons.
First, the data from different parts of the spectrum provide complementary information
and often lead to increased classification accuracy. Second, the timeliness of using SAR
fills information gaps during overcast or hazy periods (Idi and Nejad 2013; Bakirman
et al. 2014; Rahman, Tetuko Sri Sumantyo, and Sadek 2010; Lu et al. 2011; Soria-Ruiz,
Fernandez-Ordonez, and Woodhouse 2010; Werner, Storie, and Storie 2014). Various
studies have investigated the fusion of SAR and optical data to improve LULC classifica-
tion. Bakirman et al. (2014) explored the fusion and classification of SAR and optical data
for general LULC classification. In particular, that investigation evaluated several fusion
techniques such as BT, HSV, GS, and PCA. The results showed that the highest accuracy
was achieved by the fusion of GS and k-nearest-neighbour classification methods. In a
recent paper, Lu et al. (2011) explored the integration of Landsat thematic mapper (TM)
and SAR data for land-cover mapping in tropical regions, and examined several data
fusion techniques such as PCA, wavelet merging, HPF, and normalized multiplication
(NMM). Their study revealed that data fusion improved the performance of qualitative
analysis, and different polarizations (i.e. HH and HV) work similarly when adopted in the
data fusion process. Landsat TM provided significantly better results than individual SAR
imagery, and the combination of SAR and Landsat TM data did not change the overall
accuracy of LULC classification. The wavelet fusion technique improved the overall
accuracy of the classification by 3.3–5.7%. Moreover, the performance of HPF was the
same, but PCA and NMM reduced overall accuracy by 5.1–6.1%. Finally, their study
recommended the application of wavelet merging techniques in tropical regions. In
another paper, Rahman, Tetuko Sri Sumantyo, and Sadek (2010) concluded that IHS and
PCA performed better than BT and wavelet techniques. Their study also specified that BT
mostly distorted image contrast, while the wavelet technique lost SAR information when
microwave (SIR-C) and Landsat images were used. In vegetation cover mapping, SAR
and optical data are commonly fused because of their efficiency. Soria-Ruiz, Fernandez-
Ordonez, and Woodhouse (2010) assessed the advantages of combining SAR and optical
remote sensing in the production of more accurate LULC maps. Their result demon-
strated that the Landsat sensor must be used to classify crop types during critical crop
growth periods. The LULC classification of vegetated area could be improved with the
application of radar data.
From the aforementioned literature review, it is possible to see that many fusion
techniques have been proposed using optical and RADARSAT images by various scholars
around the world. However, several key questions remain unanswered: (1) Does incor-
porating RADARSAT-2 SAR data into the classification process improve the overall
accuracy of LULC classification? (2) What is the best image fusion technique for fusing
Landsat Operational Land Imager (OLI) panchromatic and multispectral images? (3) What
is the best classification algorithm for LULC mapping with Landsat OLI and RADARSAT-2
SAR data? In order to address these questions, the current research aimed to investigate
the integration of Landsat OLI with the RADARSAT-2 sensor, and to examine several
pixel-based classification techniques to propose a refined classification procedure for
LULC mapping especially suitable for tropical regions. The outcome of this research will
provide a useful contribution to both industry and researchers. The refined classification
is a useful classification tool that could be used for accurate LULC mapping. Researchers
can improve and extend the proposed methodology to work on different data sources
and regions.

1.1. Test area


The area examined in this study was Pasir Panjang, in Negeri Sembilan, Malaysia
(Figure 1), with geographical coordinates 3° 37′ 00″ N and 101° 2′ 30″ E. The selected
subset from the study area occupies an area of 359 km2, and the altitude ranges from 0
to 160 m above mean sea level. The land covers of the area are ocean, coastal land,
paddy field, forest, oil palm, uncultivated land, and grassland. Pasir Panjang was selected
as the study area because of its location in a tropical country (Malaysia). SAR data were
expected to be good supplementary data for optical images. The subset was considered
for exploration because the diversity of LULC can provide a reliable conclusion about
classification accuracy assessment.

Figure 1. Study area.



Table 1. Spectral characteristics of Landsat OLI system.


Spectrum region Band name Wavelength range
Multispectral Coastal 0.43–0.45 μm
Blue 0.45–0.51 μm
Green 0.53–0.59 μm
Red 0.64–0.67 μm
NIR 0.85–0.88 μm
SWIR1 1.57–1.65 μm
SWIR2 2.11–2.29 μm
Panchromatic Panchromatic (Pan) 0.50–0.68 μm
Cirrus Cirrus 1.36–1.38 μm
Thermal TIR 1 10.60–11.19 μm
TIR 2 11.50–12.51 μm

Table 2. Characteristics of RADARSAT-2 and Landsat OLI data used in the study.
Dataset Parameter Characteristics
Landsat OLI Date Acquired 24 May 2014
Cloud Cover 2.49%
Spatial Resolution 30 m for multispectral, 80 m for thermal bands
RADARSAT-2 SAR Date 21 May 2014
Polarization HH
Frequency 5.405 GHz
Spatial Resolution 12.25 m

1.2. Remote-sensing and ground truth data


In this study, Landsat OLI and RADARSAT-2 SAR images were selected for general LULC
mapping. Landsat OLI imagery comprises 10 spectral bands and one panchromatic band
(Table 1). In this study, seven spectral bands from Landsat were used (i.e. coastal, blue,
green, red, NIR, SWIR1, and SWIR2). The characteristics of RADARSAT-2 and Landsat data
used in this study are shown in Table 2.
Reference data must be acquired to perform supervised classification, produce train-
ing samples, and evaluate classification accuracy. Reference data were obtained from
GPS points collected from the field or higher-resolution satellite images (Lu et al. 2011).
In this study, Google imagery was used to generate training and testing regions of
interest (ROIs). Because Google images have high spatial resolution, LULC classes were
defined through visual interpretation, and seven classes (i.e. uncultivated land,
ocean, coastal land, oil palm, forest, grassland, and paddy field) were identified in the
study area. The Google map used in this study was from 2014. In terms of sampling
procedure, random sampling was used to collect the reference ROIs for each class. The
sampling points were distributed throughout the study area to ensure better classifica-
tion results. The total number of selected pixels for each class was 250 within 31 ROIs.

2. Methodology
Key combinations of data and pre-processing options should be tested to design
an appropriate methodology for LULC classification with Landsat OLI and RADARSAT-2
data. In this study, the Landsat data were atmospherically corrected via the dark object
subtraction method and were subsequently used to generate an LULC map for the study
area. Consequently, four fusion levels were adopted to produce four different LULC
maps. The first fusion level tested the quality of the map when only the fused Landsat
OLI data (the best fusion technique is analysed prior to use) were utilized. The second
fusion level evaluated the result of the classification when both the original SAR image
and Landsat OLI data were adopted. In the third fusion level, filtered SAR (the best filter
is analysed and then used) and Landsat OLI data were evaluated for LULC mapping. Last,
the combination of filtered SAR and fused Landsat OLI data was tested. Each fusion level
tested a method that could potentially be used. By applying the accuracy assessment
technique using the reference data collected from the 2014 Google map for this
purpose, the best method was then selected. The overall methodology of this study is
shown in Figure 2.
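The dark object subtraction step mentioned above can be illustrated with a short sketch. This is a minimal, hypothetical implementation (not the exact routine used in the study); it assumes each band contains at least one genuinely dark target, so a low-percentile digital number approximates the additive haze signal:

```python
import numpy as np

def dark_object_subtraction(bands, percentile=1.0):
    """Subtract a per-band haze estimate (a low-percentile value) from each band.

    bands: float array of shape (n_bands, rows, cols).
    Assumes each band contains at least one near-zero-reflectance ("dark")
    object, so the low percentile approximates the atmospheric path radiance.
    """
    bands = np.asarray(bands, dtype=np.float64)
    corrected = np.empty_like(bands)
    for i, band in enumerate(bands):
        haze = np.percentile(band, percentile)          # estimated additive haze offset
        corrected[i] = np.clip(band - haze, 0.0, None)  # no negative radiance
    return corrected
```

In practice the haze value is often taken from a histogram inspection rather than a fixed percentile; the percentile form above simply makes the idea reproducible.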

2.1. Pre-processing
High geometric accuracy between images is generally required to perform accurate
sensor fusion (Otukei, Blaschke, and Collins 2015). Ground control points (GCPs) were
selected on clearly delineated plot corners. A second-order transformation with
nearest-neighbour resampling was applied, in which the related root mean square
error (RMSE) was 1.13 pixels.
The Landsat image was geometrically corrected based on selected ground points from
the Google map. Ten additional, regularly distributed GCPs were selected from different
parts of the image to correct the SAR image. A second-order transformation with cubic
resampling was applied, and the related RMSE was 1.28 pixels. Both Landsat and RADARSAT-2
data were then projected to UTM system Zone 47 N using the WGS84 reference
ellipsoid. Radiometric correction has proved to be an essential procedure in image
pre-processing because of the effects of sun illumination (Idi and Nejad 2013). Thus,
the Landsat multispectral (MS) and panchromatic (Pan) images were radiometrically
corrected, in which the digital numbers (DNs) of the raw image were converted into
meaningful radiance pixels. Dark object subtraction (Zhang et al. 2014) was performed
to correct the images for atmospheric conditions, and the SAR data were calibrated to
sigma-nought during radiometric correction (Lucas et al. 2014). The most important part
in SAR image pre-processing is the application of speckle reduction filters. These filters
are specially designed to reduce random noise in the SAR image. This noise is
caused by the coherent nature of radar waves and the resulting constructive and destructive
interference. In this study, five filters (i.e. Lee, enhanced Lee, Frost, Kuan, and Gamma)
were first evaluated before the speckle reduction filter was selected. A subset of Landsat
and SAR images was generated based on the study area.
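The second-order transformations and RMSE values reported above can be reproduced in outline by a least-squares polynomial fit to the GCP coordinates. The sketch below is illustrative only and uses synthetic control points in place of the real ones:

```python
import numpy as np

def design_matrix(x, y):
    """Terms of a second-order (quadratic) polynomial transformation."""
    x, y = np.asarray(x, dtype=np.float64), np.asarray(y, dtype=np.float64)
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_second_order(src_x, src_y, dst_x, dst_y):
    """Least-squares fit of the target coordinates as 2nd-order polynomials
    of the source coordinates; returns coefficients and the RMSE (pixels)."""
    A = design_matrix(src_x, src_y)
    dst_x = np.asarray(dst_x, dtype=np.float64)
    dst_y = np.asarray(dst_y, dtype=np.float64)
    cx, *_ = np.linalg.lstsq(A, dst_x, rcond=None)
    cy, *_ = np.linalg.lstsq(A, dst_y, rcond=None)
    resid2 = (A @ cx - dst_x) ** 2 + (A @ cy - dst_y) ** 2
    return cx, cy, float(np.sqrt(resid2.mean()))
```

A second-order model has six coefficients per axis, so at least six well-distributed GCPs are needed; the paper's ten additional GCPs leave residual degrees of freedom from which the RMSE is computed.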

2.1.1. Speckle removal


Original SAR images mostly contain speckle noise caused by the coherent illumination
used in SAR system sensors (Hong et al. 2014). Speckle is a kind of noise that degrades
the quality of an image and makes image interpretation extremely difficult. In this event,
speckle must be reduced or removed from the image prior to analysis. Different filters
for removing noise from an image have been designed based on several criteria, such
as (1) speckle reduction, (2) edge sharpness preservation, (3) line and point target
contrast preservation, (4) retention of texture information, and (5) computational
efficiency (Hong et al. 2014).

Figure 2. Overall methodology adopted in this study.

Frost, local region, Lee sigma, and Gamma-MAP filters are
the most commonly used for reducing speckle in SAR images (Lu et al. 2011). A filter is
normally selected based on numerous factors, such as data source, application type,
image processing methods, and software availability. Apart from the selection of a
suitable filter, the kernel size of the selected filter is another important aspect that must
be considered in image pre-processing. Although the optimal methodology for selecting
a suitable filter type for speckle reduction has not been specified, the Lee filter is
observed to be widely used in the literature (Idi and Nejad 2013; Jacob and Ban 2014;
Lu et al. 2011; Werner, Storie, and Storie 2014; Li et al. 2012; Shang et al. 2009; Hong
et al. 2014). This filter is proven to be efficient because it preserves the edges important
for urban mapping, and it provides better smoothing while preserving the detail in the
image. In other words, a good speckle filter smooths noise in the homogeneous regions
of a SAR image while preserving the pixel values in its heterogeneous areas. This study
evaluated different types of speckle reduction filter (Figure 3) by testing their perfor-
mance via MLC applied for the integrated Landsat and SAR images.
Based on the accuracy assessment (Table 3) of the classified image via MLC, the SAR
image whose speckle was reduced via the enhanced Lee filter produced the best results.
Quantitative results of the accuracy assessment revealed no significant difference between
the performance of the speckle filters, and therefore a qualitative examination was additionally
performed. This additional examination confirmed that the 5 × 5 enhanced Lee filter
is the best filter for the data set used, and it was selected for further processing.
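For readers who want to experiment, the adaptive core of the Lee filter family can be sketched as below. This is a minimal basic Lee filter, not the full enhanced Lee variant used in the paper (enhanced Lee additionally switches between pure averaging and an all-pass response using coefficient-of-variation thresholds):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=5, looks=1):
    """Basic adaptive Lee speckle filter for a SAR intensity image.

    Smooths homogeneous regions (weight -> 0) while preserving
    heterogeneous detail (weight -> 1), using local statistics
    computed over a size x size moving window.
    """
    img = np.asarray(img, dtype=np.float64)
    mean = uniform_filter(img, size)
    sqr_mean = uniform_filter(img ** 2, size)
    var = np.maximum(sqr_mean - mean ** 2, 0.0)     # local variance
    cu2 = 1.0 / looks                               # squared speckle coeff. of variation
    ci2 = var / np.maximum(mean ** 2, 1e-12)        # squared local coeff. of variation
    weight = np.clip(1.0 - cu2 / np.maximum(ci2, 1e-12), 0.0, 1.0)
    return mean + weight * (img - mean)
```

On a perfectly homogeneous patch the local variance vanishes, the weight drops to zero, and the filter returns the window mean; around strong edges the local variance dominates and the original pixel values pass through largely unchanged.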

Figure 3. Different types of speckle filters applied to original SAR image. (a) 5 × 5 Lee filter, (b) raw
SAR data, (c) 5 × 5 Frost filter, (d) 5 × 5 enhanced Lee filter, (e) 5 × 5 Kuan filter, and (f) 5 × 5
Gamma filter.

Table 3. Classification accuracy assessment of different SAR speckle filters


used with Landsat data.
Filter Type Overall Accuracy (%) Kappa Coefficient
Lee 98.4676 0.9815
Enhanced Lee 98.4876 0.9817
Kuan 98.4676 0.9815
Gamma 98.4847 0.9817
Frost 98.4847 0.9817

2.2. Image fusion and sensor integration


Pohl and Van Genderen (1998) described the concepts, methods, and applications of
multi-sensory data fusion. Data fusion is a process that deals with data from multiple
sources, improving them for decision making (Amarsaikhan et al. 2012; Li et al. 2012).
Image fusion is the combination of two or more different images to form a new image
by using a certain algorithm. Fusion techniques are generally classified into three levels,
namely, pixel/data, feature, and decision (Zhang 2010). Pixel-level fusion is the combina-
tion of raw data acquired from multiple sources into single-resolution data, which are
expected to be more informative and synthetic than the input data or may reveal the
changes among data sets acquired at different times.
Some data fusion techniques allow only a limited number of input bands to be used
(i.e. RGB and IHS), whereas others can be performed with any number of selected input
bands (Pohl and Van Genderen 1998). Optimum index factor (OIF) is a selection method
developed by Chavez, Berlin, and Sowers (1982). OIF relies on statistics to select the bands
that contain the most variance. This factor can be expressed as follows:

OIF = ( Σ_{i=1}^{3} σ_i ) / ( Σ_{j=1}^{3} |c_j| ),   (1)

where σ_i is the standard deviation of band i, and c_j is the correlation coefficient between
any two of the three bands.
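Equation (1) can be computed directly from band statistics. The sketch below adds a helper, `best_combination`, purely for illustration; it ranks every three-band subset of a stack by OIF:

```python
import numpy as np
from itertools import combinations

def oif(bands):
    """Optimum index factor of a 3-band set: sum of standard deviations
    divided by the sum of absolute pairwise correlation coefficients.

    bands: array of shape (3, n_pixels).
    """
    stds = bands.std(axis=1)
    corr = sum(abs(np.corrcoef(bands[i], bands[j])[0, 1])
               for i, j in combinations(range(3), 2))
    return float(stds.sum() / corr)

def best_combination(stack):
    """Index triple of the stack's bands with the highest OIF."""
    flat = stack.reshape(stack.shape[0], -1)
    return max(combinations(range(stack.shape[0]), 3),
               key=lambda idx: oif(flat[list(idx), :]))
```

High-variance, weakly correlated bands score highest, which is exactly the intuition behind OIF: they carry the most non-redundant information.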
In this study, three different fusion techniques (i.e. BT, GS, and modified IHS) were
used to fuse the multispectral and panchromatic bands of Landsat data. These
approaches were employed because of their popularity. SAR data were integrated
with Landsat data via layer stacking.
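Of the three fusion techniques, the Brovey transform is the simplest to state in code. Below is a minimal sketch of it, together with the layer stacking used to integrate the SAR band; the function names are illustrative, not from any particular package:

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-12):
    """Brovey transform pan-sharpening.

    ms:  (3, rows, cols) multispectral bands resampled to the pan grid.
    pan: (rows, cols) panchromatic band.
    Each output band is ms_i * pan / sum_k(ms_k): spatial detail is taken
    from the pan band while the inter-band ratios (colour) are preserved.
    """
    ms = np.asarray(ms, dtype=np.float64)
    total = ms.sum(axis=0) + eps   # eps guards against division by zero
    return ms * (pan / total)

def layer_stack(*rasters):
    """Stack co-registered single-band rasters into one multi-band array."""
    return np.stack(rasters, axis=0)
```

The ratio-based formulation explains the contrast distortion the literature review attributes to BT: the fused bands are rescaled by the pan intensity, so radiometry is sacrificed for spatial detail.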

2.3. Image classification


Numerous image classification techniques have been developed and deployed in the
literature. These techniques can be categorized into two groups, namely, pixel-based
and object-based classification (Ban, Hu, and Rangel 2010). Pixel-based classification
(e.g. supervised classification) determines the class of each pixel in the image by
comparing the n-dimensional data vector of each pixel to the prototype vector of
each class (Ban, Hu, and Rangel 2010). Data vectors typically consist of grey-level
values of the pixel from multispectral channels. Supervised classification can be used
to group pixels in a data set into classes, corresponding to user-defined training
classes. Data must be trained to guide the classification process. These training
samples comprise a group of homogeneous pixels or ROIs obtained from the class
categories present on the image. However, the groups of sample candidates from the
same class category may vary spectrally (Ban, Hu, and Rangel 2010). Thus, a wide
variety of candidate pixels must be sampled. Supervised classification techniques
include parallelepiped, minimum distance (MD), Mahalanobis distance, MLC, spectral
angle mapper (SAM), spectral information divergence, and binary encoding
(Bhaskaran, Paramananda, and Ramnarayan 2010). Pixel-based classification
approaches can be used for optical and SAR images (Qi et al. 2010; Taubenböck
et al. 2012; Otukei and Blaschke 2010; Jia et al. 2014). In this study, pixel-based
classification was used because of its efficiency with low-spatial-resolution
data (Taubenböck et al. 2012). Such an approach was also employed because
adopting an object-based method in a methodology intended for different study
areas is exceedingly complicated: the object-based method depends on the data
source and scale of the study area as well as on other
factors. For that reason, the supervised classification method and four classification
algorithms were evaluated in this research; MD, MLC, SAM, and SVM were the
classification algorithms selected for this study.
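Among the four algorithms, MLC is the one the paper ultimately selects. Its core, fitting one Gaussian per class and assigning each pixel to the class of highest log-likelihood, can be sketched as follows; this is a minimal illustration, not the ENVI implementation used in the study:

```python
import numpy as np

class MaximumLikelihoodClassifier:
    """Gaussian maximum likelihood classifier for pixel vectors.

    Fits one multivariate normal per class from training ROIs and assigns
    each pixel to the class with the highest log-likelihood.
    """
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            # Small ridge term keeps the covariance invertible.
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.params_[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mu, cov_inv, logdet = self.params_[c]
            d = X - mu
            maha = np.einsum('ij,jk,ik->i', d, cov_inv, d)  # Mahalanobis distance
            scores.append(-0.5 * (logdet + maha))           # log-likelihood up to a constant
        return self.classes_[np.argmax(scores, axis=0)]
```

With equal priors, maximizing the Gaussian log-likelihood reduces to minimizing the Mahalanobis distance penalized by the log-determinant of each class covariance, which is what the score above encodes.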

2.4. Accuracy assessment


Overall accuracy (OA) and Cohen’s kappa coefficient are widely used as measurement
tools for assessing the quality of classification results (Li et al. 2012; Nishii and Tanaka
1999). In this study, both of these measurement approaches were used, and their
calculation equations are provided below:


OA = ( Σ_{i=1}^{c} n_ii ) / n,   (2)

kappa coefficient = ( n Σ_{i=1}^{c} n_ii − Σ_{i=1}^{c} n_{i+} n_{+i} ) / ( n² − Σ_{i=1}^{c} n_{i+} n_{+i} ),   (3)

where c is the number of classes, n is the total number of pixels, n_ii is the number of
correctly classified pixels of class i, and n_{i+} and n_{+i} are the row and column totals of
the confusion matrix for class i.
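Equations (2) and (3) translate directly into code. A short sketch operating on a confusion matrix whose rows are reference classes and columns are predicted classes:

```python
import numpy as np

def overall_accuracy(cm):
    """Equation (2): proportion of correctly classified pixels."""
    cm = np.asarray(cm, dtype=np.float64)
    return float(np.trace(cm) / cm.sum())

def kappa_coefficient(cm):
    """Equation (3): Cohen's kappa, agreement corrected for chance."""
    cm = np.asarray(cm, dtype=np.float64)
    n = cm.sum()
    diag = np.trace(cm)                              # sum of n_ii
    marg = (cm.sum(axis=1) * cm.sum(axis=0)).sum()   # sum over i of n_{i+} * n_{+i}
    return float((n * diag - marg) / (n ** 2 - marg))
```

For a perfect classification the matrix is diagonal and both measures equal 1; kappa falls towards 0 as the agreement approaches what the class marginals would produce by chance.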

3. Results
3.1. Experiment 1: classification of Landsat OLI data
The confusion matrix approach is often used to evaluate LULC classification results.
This method provides a detailed assessment of the agreement between the classified
result and ground truth data, and it presents information on how misclassification
occurs among classes (Li et al. 2012). In the first experiment of this study, such a
matrix was used to evaluate the results of classification techniques applied for
Landsat OLI data. Atmospherically corrected Landsat data, which had a spatial resolu-
tion of 30 m, along with seven multispectral bands were used to produce an LULC
map for the study area. The mechanism of various classification algorithms was
investigated to determine how they work on the data used. Figure 4 shows the
results of LULC classification conducted with ENVI 5.1 software. MLC produced the
best result, both visually and statistically, with an OA of 98.48%. No significant
difference was observed between MLC and SVM algorithms, in which the difference
was 0.20%. However, the results revealed that the classification accuracy was reduced
below 93% when MD or SAM was used.

Figure 4. LULC maps produced using different classifiers applied to Landsat OLI data. (a) Maximum
likelihood, (b) minimum distance, (c) spectral angle mapper, and (d) support vector machine.

3.2. Experiment 2: classification of integrated Landsat OLI and SAR data


The second experiment aimed to evaluate the role of RADARSAT-2 data in improving LULC
classification accuracy. The same classification algorithms used in the first experiment were
applied to the integrated RADARSAT-2 and Landsat data. MLC and SVM obtained better
results, visually, than the other two classifiers (Figure 5); however, statistically, MLC pro-
duced the highest OA (98.47%). This result must be compared to the classification results of
using only Landsat data to evaluate the role of RADARSAT-2 data in LULC mapping.
Thus, the results of this experiment were compared to those obtained in the first
experiment. Table 4 demonstrates that when RADARSAT-2 data were incorporated into
Landsat images, the classification accuracy increased with most of the classifiers used.
The classification accuracy was evidently improved with MD, SAM, and SVM algorithms
as indicated in Table 4. In contrast, such an integration decreased the classification
accuracy slightly when the MLC algorithm was used, with an OA of 98.47%.
Nonetheless, this finding does not illustrate whether RADARSAT-2 might improve the
accuracy of LULC classification. Hence, further experiments must be performed to verify
the role of RADARSAT-2 data in improving such accuracy. The subsequent experiment
evaluated the integration of RADARSAT-2 data with the fused Landsat data to clarify the
role of RADARSAT-2 in improving LULC classification accuracy.

3.3. Experiment 3: classification of integrated fused Landsat OLI and filtered SAR data
The third experiment aimed to evaluate certain fusion algorithms and to confirm the
role of RADARSAT-2 data in improving LULC accuracy. The best filter type for speckle

Figure 5. LULC maps produced using different classifiers applied to integrated Landsat OLI and SAR
data. (a) Maximum likelihood, (b) minimum distance, (c) support vector machine, (d) spectral angle
mapper.

Table 4. Classification accuracy assessment of RADARSAT-2 and Landsat


data (all values shown are percentages).
Classifier
Data Used MD ML SAM SVM
Landsat DOS 90.87 98.48 92.96 98.28
RADARSAT-2 and Landsat DOS 91.31 98.47 93.98 98.35

reduction was analysed in Section 2.1.1. A 5 × 5 enhanced Lee filter was determined as
the best type for the data used and study area. This filter was therefore adopted to filter
the RADARSAT-2 data before they were integrated with the fused Landsat images. The BT,
modified IHS, and GS fusion algorithms were evaluated on the fused Landsat and filtered
RADARSAT-2 data, and the four classification techniques were again applied for evaluation.
GS was determined to be the best fusion algorithm for the Landsat data, as it retained the
spectral information while enhancing the spatial resolution. Among the classification
algorithms, MLC performed better than the other techniques used in the study. Figure 6
shows the classification results for each combination of fusion and classification technique.
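For reference, the enhanced Lee filtering applied to the RADARSAT-2 data can be sketched as below. This is a simplified single-band implementation of the usual enhanced Lee formulation (local mean in homogeneous areas, damped weighting in textured areas, pixel preservation at strong scatterers); the `looks` and `damping` parameters, and the implementation itself, are illustrative assumptions rather than the exact software used in the study.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhanced_lee(img, size=5, looks=1, damping=1.0):
    """Enhanced Lee speckle filter, single band, non-negative intensities.

    Homogeneous areas (Ci <= Cu) are replaced by the local mean, strong
    scatterers (Ci >= Cmax) are kept unchanged, and textured areas are
    weighted between the two with an exponential damping factor.
    """
    img = img.astype(float)
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    ci = np.sqrt(var) / np.maximum(mean, 1e-12)  # local coefficient of variation
    cu = 1.0 / np.sqrt(looks)                    # speckle coefficient of variation
    cmax = np.sqrt(1.0 + 2.0 / looks)
    # weight for the intermediate (textured) regime
    w = np.exp(-damping * (ci - cu) / np.maximum(cmax - ci, 1e-12))
    out = mean + w * (img - mean)
    out = np.where(ci <= cu, mean, out)          # homogeneous: pure local mean
    out = np.where(ci >= cmax, img, out)         # strong scatterer: keep pixel
    return out
```

On a constant image the filter returns the local mean (i.e. the image itself), while pixels with very high local variation are largely preserved.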

2370 M. I. SAMEEN ET AL.

Figure 6. LULC maps produced using different classifiers and fusion techniques applied to fused
Landsat OLI data. (a) Maximum likelihood and Brovey fusion, (b) minimum distance and Brovey
fusion, (c) support vector machine and Brovey fusion, (d) spectral angle mapper and Brovey fusion,
(e) maximum likelihood and MIHS fusion, (f) minimum distance and MIHS fusion, (g) support vector
machine and MIHS fusion, (h) spectral angle mapper and MIHS fusion, (i) maximum likelihood and
Gram–Schmidt fusion, (j) minimum distance and Gram–Schmidt fusion, (k) support vector machine
and Gram–Schmidt fusion, and (l) spectral angle mapper and Gram–Schmidt fusion.

4. Discussion

A comparison table was constructed from the three experiments performed in this study.
Table 5 summarizes the overall accuracy (OA) of each classifier across all of the data sets
used, from which the highest OA achieved by each classifier can be identified. According
to Table 5, the MD algorithm achieved its highest OA (95.46%) with the Brovey-fused
Landsat data, followed closely by the combination of enhanced Lee filtered SAR and
GS-fused Landsat data (95.29%). MLC and SAM performed best when the enhanced Lee
filtered RADARSAT-2 data and the GS-fused Landsat data were used. The results also
showed that SVM produced its highest classification accuracy (98.35%) when either the
original SAR or the enhanced Lee filtered SAR data were integrated with the Landsat
images. In general, MLC performed exceedingly well with most data sets used in the
classification process; SVM outperformed MLC only when the modified IHS fusion was
involved, that is, for the MIHS-fused Landsat data both with and without the enhanced
Lee filtered SAR. The highest OA overall (98.62%) was achieved when MLC was applied
to the enhanced Lee filtered SAR combined with the GS-fused Landsat data set. These
results indicate that SAR data play a role in improving LULC classification accuracy, as
the majority of the highest overall accuracies were observed when SAR data were
integrated with Landsat data.

Table 5. Classification accuracy assessment of different data fusion methods used in the current
study (all values shown are percentages).

Method                                  MD     ML     SAM    SVM
Landsat DOS                             90.87  98.48  92.96  98.28
Brovey Landsat DOS                      95.46  98.00  93.49  97.60
Gram–Schmidt (GS) Landsat DOS           94.78  98.31  94.09  98.03
MIHS Landsat DOS                        91.53  97.10  92.38  97.53
Original SAR + Landsat DOS              91.31  98.47  93.98  98.35
Enhanced Lee SAR + Landsat DOS          91.42  98.49  93.35  98.35
Enhanced Lee SAR + Brovey Landsat DOS   83.64  98.03  81.26  97.69
Enhanced Lee SAR + MIHS Landsat DOS     92.30  97.07  89.75  97.40
Enhanced Lee SAR + GS Landsat DOS       95.29  98.62  95.55  98.10

Table 6. User's accuracy (%) of the different fusion techniques and classifiers used in the current
study.

Data set/Method    Ocean  Grassland  Forest  Uncultivated land  Oil palm  Paddy field  Coastal land
MIHS/MD            91.26  94.39      97.56   99.18              65.59     72.07        99.91
Gram–Schmidt/MD    96.35  89.17      99.83   99.79              77.6      80.75        99.2
Brovey/MD          97.93  59.9       90.37   99.83              41.59     36.98        99.96
MIHS/ML            100    94.26      99.53   99.28              85.2      79.28        100
Gram–Schmidt/ML    100    97.48      99.99   99.05              95.34     87.2         100
Brovey/ML          100    96.63      99.97   99.35              95.45     77.97        99.94
MIHS/SAM           92.26  59.8       91.81   98.35              68.05     88.06        99.99
Gram–Schmidt/SAM   99.85  85.02      95.04   98.05              80.37     92.82        99.51
Brovey/SAM         96.92  38.43      75.51   99.14              49.74     59.42        99.08
MIHS/SVM           100    95.36      98.39   99.24              91.46     78.61        99.92
Gram–Schmidt/SVM   100    94.7       99.19   99.73              93.74     85.93        99.52
Brovey/SVM         99.99  90.37      99.3    99.66              92.48     85.85        99.71
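Several of the best-performing data sets above involve GS-fused Landsat imagery. Gram–Schmidt pan-sharpening can be sketched compactly as below; this is an illustrative simplification (the simulated pan is taken as an unweighted band mean, `ms` is assumed already resampled to the pan grid, and the function and variable names are ours), not the exact implementation used in the study.

```python
import numpy as np

def gram_schmidt_fuse(ms, pan):
    """Gram–Schmidt pan-sharpening sketch, single scene.

    ms:  (rows, cols, bands) multispectral image resampled to pan resolution.
    pan: (rows, cols) panchromatic band.
    """
    rows, cols, nb = ms.shape
    X = ms.reshape(-1, nb).astype(float)
    # 1. simulate a low-resolution pan as the mean of the MS bands
    sim_pan = X.mean(axis=1)
    # 2. Gram-Schmidt transform with the simulated pan as the first component
    gs = [sim_pan - sim_pan.mean()]
    phis = []
    for i in range(nb):
        b = X[:, i] - X[:, i].mean()
        phi = [np.dot(b, g) / (np.dot(g, g) + 1e-12) for g in gs]
        gs.append(b - sum(p * g for p, g in zip(phi, gs)))
        phis.append(phi)
    # 3. match the real pan to the first GS component and swap it in
    p = pan.ravel().astype(float)
    p = (p - p.mean()) * (gs[0].std() / p.std())
    gs_mod = [p] + gs[1:]
    # 4. inverse transform back to the sharpened MS bands
    out = np.empty_like(X)
    for i in range(nb):
        comp = gs_mod[i + 1] + sum(ph * gs_mod[k] for k, ph in enumerate(phis[i]))
        out[:, i] = comp + X[:, i].mean()
    return out.reshape(rows, cols, nb)
```

Setting `pan` equal to the band mean reproduces the input exactly, which is a convenient sanity check of the forward/inverse transform pair.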
Furthermore, the user’s accuracy obtained in the classification process was ana-
lysed. This investigation aimed to identify where SAR data have a more significant
role and which combination is optimal for each LULC type. Table 6 reveals that the
ocean class was identified with higher user’s accuracy using MLC and SVM algorithms.
The most suitable data fusion/integration for ocean classification was determined as
the GS and BT techniques. Regarding grassland classification, MLC and SVM were
found to be more appropriate yet again. However, the modified IHS fusion technique
was not satisfactory even when MLC was used. In terms of forest classification, MLC
and GS techniques produced remarkably better accuracy than the others.
Uncultivated land was accurately classified with most techniques applied, except
when the SAM classifier and modified IHS/GS fusion algorithms were adopted in
the classification process. The accuracy assessment of oil palm classification showed
that MLC and GS produced higher accuracy than the other methods used. The
classification of paddy field revealed that GS and SAM were more suitable than the
other methods. Lastly, for the classification of coastal lands, all classification techni-
ques and fusion algorithms produced extremely high user’s accuracy. This analysis
confirms that the GS fusion algorithm, as well as the MLC and SVM classification
techniques, are more suitable than the other methods investigated for LULC classifi-
cation with Landsat OLI and RADARSAT-2 data.
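The overall and user's accuracy figures analysed above derive from a standard confusion matrix. For reference, with rows as mapped (classified) classes and columns as reference (ground-truth) classes, the metrics can be computed as follows (a generic sketch with hypothetical counts):

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall, user's, and producer's accuracy from a confusion matrix.

    Rows are mapped (classified) classes; columns are reference classes.
    """
    cm = np.asarray(cm, dtype=float)
    overall = np.trace(cm) / cm.sum()
    users = np.diag(cm) / cm.sum(axis=1)      # correct / all pixels mapped to class
    producers = np.diag(cm) / cm.sum(axis=0)  # correct / all reference pixels of class
    return overall, users, producers
```

For example, for the two-class matrix [[50, 2], [3, 45]], the OA is 95% and the user's accuracies are 50/52 ≈ 96.2% and 45/48 ≈ 93.8%.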

Figure 7. Optimal LULC classification achieved.

The technique/algorithm combination that showed the best performance in this analysis
was used to produce the optimal LULC map for the study area. Figure 7 shows the
resulting LULC map, which attained an OA of 98.62%. This map was generated by
applying MLC to the fused Landsat data integrated with SAR imagery, where the Landsat
multispectral and panchromatic data were fused using the GS method and the SAR
image was filtered using the enhanced Lee technique with a filter size of 5 × 5. The refined
classification procedure is shown in Figure 8.
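The MLC decision rule behind this optimal map is the standard per-class Gaussian discriminant, g_c(x) = ln p(c) − ½ ln|Σ_c| − ½ (x − m_c)ᵀ Σ_c⁻¹ (x − m_c); a minimal numpy sketch is given below (illustrative only — the covariance regularization term and the training-proportion priors are our assumptions, not details reported in the study).

```python
import numpy as np

def mlc_fit(X, y):
    """Per-class mean, inverse covariance, and constant term for MLC.

    X: (n, bands) training samples; y: (n,) integer class ids.
    """
    stats = []
    for c in np.unique(y):
        Xc = X[y == c]
        mean = Xc.mean(axis=0)
        # small ridge keeps the covariance invertible for near-singular classes
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        const = np.log(len(Xc) / len(X)) - 0.5 * np.linalg.slogdet(cov)[1]
        stats.append((c, mean, np.linalg.inv(cov), const))
    return stats

def mlc_predict(X, stats):
    """Assign each sample to the class with the largest Gaussian log-likelihood."""
    scores = []
    for c, mean, inv_cov, const in stats:
        d = X - mean
        # quadratic form (x - m)^T S^-1 (x - m) for every sample at once
        scores.append(const - 0.5 * np.einsum('ij,jk,ik->i', d, inv_cov, d))
    labels = np.array([s[0] for s in stats])
    return labels[np.argmax(scores, axis=0)]
```

The same fit/predict pair would be run on the stacked enhanced Lee SAR and GS-fused Landsat bands to reproduce a map of this kind.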

5. Conclusion
This study investigated the use of Landsat OLI and RADARSAT-2 remote-sensing data for
general LULC mapping in tropical regions. The enhanced Lee filter with a kernel size of 5 × 5
was determined as the best filter type for reducing speckle in RADARSAT-2 data. The study
also revealed that SAR data can improve the accuracy of LULC classification when integrated
with Landsat data, particularly when MLC is used. Regarding image fusion, GS was determined
to be the most appropriate technique for the Landsat OLI data and the study area.
Based on these results, a classification procedure for more accurate LULC mapping in tropical
regions is proposed. The refined classification methodology proposed in this study could be
replicated for other tropical regions. Thus, related industries and government organizations
could benefit from using this method as a classification tool. This study recommends that the
integration of active and optical data for LULC mapping in tropical regions be investigated
further. Object-based classification methods should also be compared with pixel-based
methods using higher-spatial-resolution images, such as SPOT, IKONOS, and WorldView-3.

Disclosure statement
No potential conflict of interest was reported by the authors.

Figure 8. Refined classification procedure for Landsat OLI and RADARSAT-2 data.

ORCID
Biswajeet Pradhan http://orcid.org/0000-0001-9863-2054
