MASTERING OBJECT-BASED IMAGE ANALYSIS
Prof. Dr. A. Rasouli, Dr. M. Milani and Dr. B. Milani
ISBN: 978-625-8061-89-5
Cover Design: İbrahim KAYA
December / 2021
Ankara / Turkey
Size = 16x24 cm
The Book Brief Contents
About The Authors
Basic Background: Essential of OBIA
Table 1. Basic processing steps and experiences of OBIA inside the eCognition Software
OBIA Applications:
Changes in landcover and landuse are pervasive, rapid and can
significantly impact humans, the economy, and the environment.
Accurate landcover mapping is of paramount importance in
many applications, e.g., biodiversity conservation, urban
planning, forestry, natural hazards, etc. Unfortunately,
traditional landcover mapping processes are often inaccurate,
costly, and time-consuming. A classification for an image in one geographic location cannot be directly applied to a different image of a different site.
In practice, land cover maps are built-up by analyzing remotely
sensed imagery captured by satellites, airplanes, or drones, using
different classification methods. The accuracy of the results
depends on the quality of the input data (mainly based on the
spatial, spectral, and radiometric resolution of the images) and
on the used classification method. The most commonly used
methods could be divided into pixel-based classifiers and OBIA.
Pixel-based methods use only the spectral information available for each pixel. They are faster but can be ineffective in some cases, particularly for high-resolution images and the detection of heterogeneous objects. Object-based methods consider the spectral and spatial properties of image segments (i.e., sets of similar neighboring pixels). The most
devastated area in the region because of severe air, water, and soil pollution. The occupied Qarabagh region's rich flora, fauna, and enormous natural resources have been under great pressure due to the actions of the occupying forces.
Sum Up:
Many OBIA users have been working intensively on geographic and environmental topics; most users run eCognition applications for "green, environmental and climate change purposes." No doubt, there are thousands of different applications for such image analysis software. The authors are convinced that this is directly related to the massive power of images. The conventional wisdom that "one image is worth a thousand words" fully applies to human understanding of environmental issues. Every eCognition user in the geosciences and environmental community has a story to tell. Some of these stories will help us better understand how far the situation has progressed.
We are sure that you will not want to stop at the current introduction. What the eCognition software does (even across its different, progressively improved versions) is provide you with a detailed representation of all objects and their inter-relationships within diverse satellite images. With this cognition developer, we can recognize and measure practically everything contained in an image, and do so across very large areas and over long periods. In short, the thousands of minuscule facts documented in an image are now becoming
Informative Practices
Tips:
1) Trimble eCognition enables you to accelerate and automate
the interpretation of your geospatial data products by allowing
you to design your feature extraction and change detection
solutions.
2) You can learn about the technology that drives eCognition as
a market leader in feature extraction.
3) The eCognition Software is used by professionals, remote
sensing experts, and data scientists to automate geospatial
data analytics.
Workouts:
1) Summarize the basic processing steps of the eCognition
Software.
2) List the possible applications of OBIA in your research area.
3) List the eCognition suite of the different main components.
Quizzes:
1) What does an OBJECT mean?
2) What are the similarities and differences between OBIA and
GEOBIA approaches?
3) What are the basic goals of the OBIA?
Allied References:
1) Addink, E.A., Van Coillie, F.M.B. (eds.) (2010). GEOBIA:
Geographic object-based image analysis, Ghent, Belgium, 29.
June - 2. July 2010. ISPRS Vol. No. XXXVIII-4/C7, Archives
ISSN No 1682-1777.
http://geobia.ugent.be/proceedings/html/papers.html (April
2011).
2) Baatz, M., Schäpe, A. (2000). Multi-resolution segmentation –
an optimization approach for high quality multi-scale
segmentation. In: Strobl, J., et al. (eds.), Angewandte
Section One at a Glance
Basic Concepts:
The current book is prepared so that interested beginner, intermediate, and even advanced trainees, with introductory knowledge of image processing concepts and limited software skills, can become familiar with the basic concepts of OBIA while working in the eCognition 9.01 software setting. Accordingly, we will train students to access and install the eCognition Developer 9.01 on their computers during the first section. They will learn to access, download, manage, and display the Landsat 4, 5, 7, and 8 imagery through simple steps from recognized international websites.
We expect that, with access to acceptable standard satellite imagery and basic familiarity with the eCognition software, students will be able to practice the primary OBIA concepts. At the same time, one of the main goals of this section of the book is to teach how to extract applied information from raw satellite
SECTION ONE
Tutorial 1
Opening Statement:
In the current tutorial, you will learn how to access and install
the eCognition Developer 9.01 trial version, a powerful
development environment for Object-Based Image Analysis
(OBIA). eCognition 9.01 extends the existing knowledge-based
and supervised classification methods available for geospatial
applications. It is extensively used in earth sciences to develop
rule sets for advanced remote sensing data analysis. In addition,
you will have your first experience of importing Landsat-4 MSS satellite bands, subsetted for a small part of the Qarabag region of Azerbaijan, into the eCognition software environment.
Instructive Memo:
✓ Level: Beginner,
✓ Time: This tutorial should not take you more than 1
hour.
✓ Data: Landsat-4_19780713_T2.tar.gz,
✓ Software: eCognition Developer, Version 9.01,
Background Concepts:
The eCognition software operates for all common image
processing tasks such as vegetation mapping, feature extraction,
change detection, and object recognition. The object-based
approach facilitates analysis of all common data sources, such as
medium to high-resolution satellite data, high to very high-
resolution aerial photography, lidar, radar, and even
hyperspectral data. In this tutorial, we work with data from the Landsat 1-4 satellites, which offer a coarser ground resolution (60 meters) and fewer spectral bands (four MSS bands). Nevertheless, these data can be processed to track landcover efficiently and to document land changes due to climate change, urbanization, drought, wildfire, vegetation change, and a host of other natural and human-caused changes. You may use this version of eCognition to highlight the value of the most antiquated Landsat imagery of the Qarabag region, acquired many years ago, before military actions during the long-drawn-out fighting damaged the region's lush forests. You can download eCognition Developer 9.01 for free from different software libraries; Trimble GeoSpatial created this popular program.
2.5) Inside the Choose Components, accept all options and click
on the Next button (Figure 5).
2.6) When you notice the Choose Start Menu Folder, click on
the Next option (Figure 6).
2.16) Now, you can select one of the options to start the
eCognition software. In this step, use the Quick Map Mode to
notice the eCognition main window (Figure 14).
Figure 14: The eCognition software main window, in the Quick Map Mode
2.17) Then, try to create the image of the Qarabagh area in the
eCognition software by combining the Landsat 4 MSS bands,
which you have already prepared from the GloVis site (https://glovis.usgs.gov/app). How to access the satellite imagery will be covered in later tutorials.
2.18) Landsat 4 is the fourth satellite of the Landsat program. It
launched on July 16, 1982, with the primary goal of providing a
global archive of satellite imagery.
Landsat 4 carried the Multispectral Scanner (MSS) with four
spectral bands:
▪ Band 4 Visible (0.5 to 0.6 µm)
▪ Band 5 Visible (0.6 to 0.7 µm)
▪ Band 6 Near-Infrared (0.7 to 0.8 µm)
▪ Band 7 Near-Infrared (0.8 to 1.1 µm)
Step 3: Create a Project
Informative Practices
Tips:
1) eCognition 9.01 was designed to improve, accelerate and
automate the interpretation of various geospatial data.
2) Inside this version of eCognition, you can build analysis
logic structures into a series of steps to create a
computer-based representation of an expert's geospatial
interpretation process, a so-called "ruleset."
3) Landsat's space-based land imaging is essential because
it provides repetitive and synoptic observations of the
Earth otherwise unavailable to researchers and managers
who work across wide geographical areas and
applications.
Workouts:
1) Try to access the free trial of eCognition available via the www.ecognition.com website.
2) After installing the eCognition software, open the eCognition Developer version 9.01 in ruleset mode. Go to File > Load Image File in the main menu.
3) Download the Landsat images for 1996 and 2016 and compare them with the Landsat data (1978) you worked on in this tutorial. Try to interpret the existing changes, particularly in vegetation cover.
Quizzes:
1) What are the differences between eCognition Trial and
full-version modes?
2) How does the "software" work?
3) Are you familiar with earlier Landsat 4 applications?
Allied References:
1) Anuta, P. E., Bartolucci, L. A., Dean, M. E., Valdes, J. A., and
Valenzuela, C. R. (1984). Landsat-4 MSS and Thematic
Mapper design quality and information content analysis:
IEEE Transactions on Geoscience and Remote Sensing, v.
GE-22, no. 3, pp. 222-236.
2) Benz, U. C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.;
Heynen, M. (2004). "Multi-resolution, object-oriented fuzzy
analysis of remote sensing data for GIS-ready information".
ISPRS Journal of Photogrammetry and Remote Sensing. 58
(3): 239–258.
3) Castilla, G., & Hay, G. J. (2008). Image objects and
geographic objects. In: Object-based image analysis. Springer
Berlin Heidelberg. pp. 91-110.
4) DeGloria, S. D. (1984). Spectral variability of Landsat-4
Thematic Mapper and Multispectral Scanner data for selected
crop and forest cover types: IEEE Transactions on Geoscience
and Remote Sensing, v. GE-22, no. 3, pp. 308-311.
5) eCognition Developer (2014). Trimble Germany GmbH,
Arnulfstrasse 126, 80636 Munich, Germany. All rights
reserved.
6) Rasouli, A.A. and H. Mahmoudzadeh (2010). Fundamental of
Knowledge-Based Remote Sensing. ElmIran Press, Tabriz,
Iran.
Tutorial 2
Picking Up The Landsat Imagery With eCognition
Opening Statement:
This tutorial explores how to pick up the most valuable remote sensing sources available for the eCognition setting. You will access stored Landsat resources from previous years and learn how to use one of the most popular data sources, the Global Visualization Viewer (GloVis). As a practical exercise, you will access Landsat-5 imagery of the Shirvan National Park in the Republic of Azerbaijan and prepare it for use in an eCognition setting. Landsat Collection 1 Level-1 data has the highest available data quality, and it is important for change detection applications during the wet and dry months of the year.
Instructive Memo:
✓ Level: Beginner,
✓ Time: This tutorial should not take you more than 1 hour.
✓ Software: eCognition Developer, Version 9.01,
2.3) Then, click on the "Share Scene" option until the Share
Scene appears (Figure 6).
2.7) You need to select the View Scene List for our data set of interest. You will access the following window, and then you may request the Landsat image (Figure 9).
2.10) Uncompress the zipped files into a prepared folder; for our purposes, the LC05_L1TP_166033_19860413 file (Figure 11).
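The same unpacking step can also be scripted outside the file manager. Below is a minimal Python sketch using the standard tarfile module; the archive file name is an assumed example of the downloaded Landsat-5 package, and the target folder follows the name used in this step.

import tarfile
from pathlib import Path

archive = "LT05_L1TP_166033_19860413.tar.gz"   # assumed name of the downloaded archive
target = Path("LC05_L1TP_166033_19860413")     # folder named as in step 2.10
target.mkdir(exist_ok=True)

# Extract every band GeoTIFF plus the metadata files into the prepared folder.
with tarfile.open(archive) as tar:
    tar.extractall(path=target)

print(sorted(p.name for p in target.glob("*.TIF")))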
3.1) When you unzip the Landsat-5 bands, you should understand their main characteristics. Landsat 5 carried the Thematic Mapper (TM) sensor and created images consisting of six reflective spectral bands (Bands 1-5 and 7) with a spatial resolution of 30 meters, plus one thermal band (Band 6).
3.2) The approximate scene size is 170 km north-south by 183 km east-west (106 mi by 114 mi). TM cannot resolve individual houses or trees, but it can record features such as newly constructed houses or cleared forests. Here are the Landsat satellite band designations (Table 1).
Band 1, Visible Blue (0.45 - 0.52 µm, 30 m): bathymetric mapping, distinguishing soil from vegetation, and deciduous from coniferous vegetation.
4.2) Click the Windows Start icon, go to All Programs, and click the eCognition Developer icon.
Figure 12: The "Load Image File" dialog box, with recursive file
display selected
4.6) Click on the OK option to create an image. From the File menu, select the Save Project option to save your project. To create a new project, choose File > New Project on the main menu bar.
4.7) Using the "Modify a Project" function, you can add/remove
more images or thematic layers or rename the project. Modify a
selected project by exchanging or renaming image layers or
other operations. Choose File > Modify Open Project on the
main menu bar to modify a project. The Modify Project dialog
box opens (Figure 13).
image and drag to select a subset area. For our interest, outline
the Shirvan National Park in the image viewer. Alternatively,
you may enter or modify the subset coordinates by typing. Then confirm with OK to return to the main dialog box. You can clear the subset selection by clicking Clear Subset in the main dialog box (Figure 14).
4.11) When you are happy with the new image set, click on the
OK option. Then, click on the OK option in the Modify Project
window again. Then, modify the band combinations using the
"Edit Image Layer Mixing" dialog box (Figure 15).
Figure 16: Landsat-5 subsetted image for the Shirvan National Park
near the Caspian Sea coastlines
Bands (R, G, B): Composite
3, 2, 1: Natural Color
7, 5, 3: False Color (urban)
4, 3, 2: Color Infrared (vegetation)
5, 4, 1: Agriculture
7, 5, 4: Atmospheric Penetration
4, 5, 1: Healthy Vegetation & Land/Water
7, 4, 2 & 7, 4, 3: Natural with Atmospheric Removal & Shortwave Infrared
5, 4, 3: Vegetation Analysis
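Outside eCognition, one row of this table can be applied with a few lines of Python. The sketch below (an illustration, not part of the eCognition workflow) loads three TM band GeoTIFFs in red, green, blue order and writes a three-band composite; the band file naming pattern is an assumption.

import numpy as np
import rasterio

# A few of the combinations listed above, written as (R, G, B) band numbers.
combinations = {
    "natural_color": (3, 2, 1),
    "color_infrared": (4, 3, 2),
    "agriculture": (5, 4, 1),
}

def build_composite(scene_prefix, combo):
    layers, profile = [], None
    for band in combinations[combo]:                          # R, G, B display order
        with rasterio.open(f"{scene_prefix}_B{band}.TIF") as src:
            layers.append(src.read(1))
            profile = profile or src.profile
    profile.update(count=3)
    with rasterio.open(f"{combo}.tif", "w", **profile) as dst:
        dst.write(np.stack(layers))

build_composite("LT05_L1TP_166033_19860413", "color_infrared")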
Figure 17: Landcover changes during wet and dry seasons in Shirvan
National Park
6.3) At the end, you have to save the currently open project to a project file (extension .dpr). To save a project, do the following:
• Choose File > Save Project on the main menu bar.
• Choose File > Save Project As on the main menu bar. The Save Project dialog box opens.
• Select a folder and enter a name for the project file (.dpr). Click the Save button to store the file.
Sum Up:
The Landsat program is the longest-running enterprise for
acquiring satellite imagery of Earth, usually divided into scenes
for easy downloading. Landsat-5 data can assist a broad range of
specialists in managing the world's food, water, forests, and
other natural resources for a growing world population. The
Landsat images contain many layers of data collected at
different points along the visible and invisible light spectrum as
follows:
❖ As you may encounter, many distinct sorts of Landsat
satellite images are available to you at no cost from
different websites.
❖ Please note that Landsat scenes are large files: unzipped, Landsat 5 scenes are about 404 megabytes, Landsat 7 scenes about 654 megabytes, and Landsat 8 scenes nearly 972 megabytes. This is an important consideration when downloading and manipulating them.
❖ Landsat scenes are made of several files or layers
(bands) of data. Each band represents a section of the
electromagnetic spectrum that has been selected because
it is useful for distinguishing kinds of landcover and
landuse from one another and measuring the ways they
change over time.
❖ Working regularly with challenging satellite images and extracting useful information inside the eCognition software makes an enjoyable and instructive exercise for educational purposes.
Informative Practices
Tips:
1) You have explored the most extensive and widely used remote sensing
data portal (USGS EarthExplorer) in the current lesson. Still, many
remote sensing data portals are available on the relevant websites, all
with various applications and focus areas.
2) Take some time to explore some of the following data sources:
➢ NASA EarthDataSearch: https://search.earthdata.nasa.gov.
3) Landsat satellites have the optimal ground resolution and spectral
bands to efficiently track landuse and document land change due to
climate change, urbanization, drought, wildfire, biomass changes
(carbon assessments), and a host of other natural and human-caused
changes.
Workouts:
1) Download the Landsat-5 imagery for your living area and carefully
manage them.
2) Import Landsat bands to the eCognition software and list geographic
and environmental characteristics.
3) Try combining different Landsat-5 bands and interpreting the landcover characteristics based on visual interpretation elements (shape, color, and patterns).
Quizzes:
1) How many continents have Landsat-5 imagery available for them?
2) Will the cloud cover mask the satellite imagery considerably?
3) What are the following image's basic geographic and thematic characteristics?
Allied References:
1) Irons, J.R., Dwyer, J.L., and Barsi, J.A. (2012). The next Landsat
satellite: the Landsat data continuity mission. Remote Sensing of
Environment, Vol. 122: pp.11–21.
Tutorial 3
Opening Statement:
The main aim of the current tutorial is to familiarize students
with the primary structure of the eCognition Developer software
version 9.01. Accordingly, this tutor provides initial hands-on
exercises, introducing the eCognition simple workflow. Data
material focuses on Landsat-5 TM bands taken from the
Republic of Azerbaijan, Absheron Peninsula, Baku City. In a
general view, the eCognition Developer allows users to import
many types of image raster layers and create targeted projects.
The tutorial also extends the band combination of Landsat TM
bands, providing qualitative landcover information.
Instructive Memo:
✓ Level: Beginner.
✓ Time: This tutorial should not take you more than 1.5 hours.
✓ Software: The eCognition Developer software, Version 9.01.
✓ Data Sources: The Landsat-5 TIFF format files (Path: 166, Row: 32, Date: 19/07/1998) are available from the USGS GloVis service.
1.2) You can choose either Quick Map Mode or Rule Set Mode, for instance.
1.3) By choosing the Rule Set Mode and clicking on the OK button, you will see the following eCognition main interface with its different interface modes.
1.4) To access other interface modes, you must toggle the View > Appearance buttons. You may choose one of the different interface series (Figure 2).
When you start the eCognition software, you will see at least five windows with associated menu bars and many toolbars, as follows:
2.1) Data/Image Viewer: The image and classification data
viewer. The viewer lets you view the imagery you are
classifying, including manipulating the band order and image
stretching.
2.2) Process Tree: The Window within which you could develop
your ruleset script.
2.3) Class Hierarchy: The Window displays the classes you
develop.
2.4) Image Object Information: This Window displays selected
feature values for a selected object.
2.5) Feature View: This window displays a list of all the available features within eCognition Developer and allows the current image objects to be colored (high values in green, low values in blue) according to their value for a particular feature (Figure 3).
When you work inside the eCognition software, you need to run many functions, each with its own purpose.
3.1) Table 1 provides a glossary of the icons available on the various toolbars within eCognition Developer.
7.8) Once the project has been loaded, you can pan and zoom
around the data in the display region using the zoom toolbar,
shown below in Figure 6.
7.9) If the zoom functions toolbar is not displayed, you can turn
it on using the View>Toolbars menu.
Step 8: Selecting bands for Display
8.1) To select the layer(s) to be displayed, you need to click on the Edit Image Layer Mixing button.
8.2) Using the 'Layer Mixing' drop-down menu, you can select
the number of layers to be mixed in the display, and then by
selecting the individual layers, you may turn them on and off or
increase the weight (Figure 8).
Sum Up:
Informative Practices
Tips:
1) The eCognition software is a powerful out-of-the-box landcover and change detection mapping solution.
2) eCognition enables users at any skill level to quickly produce
high-quality, GIS-ready deliverables from imagery.
3) With eCognition, you can rapidly and easily combine Landsat's different bands to perceive much landcover information.
Workouts:
1) List the satellite imagery which could be imported into the eCognition setting.
2) Name the main parts and functions of the View Settings
Toolbar.
3) List the "Edit Image Layer Mixing dialog box" main
applicability .
Quizzes :
1) What is unique in eCognition?
2) What are typical Trimble eCognition use cases?
3) What is the main aim of band combination ideas?
Allied References:
1) eCognition Reference Book (2014). Trimble eCognition
Reference Book, Munich, Germany: Trimble Germany
GmbH.
2) eCognition User Guide, (2014). Trimble eCognition
Developer User Guide, Munich, Germany: Trimble Germany
GmbH.
3) Flanders, D., Hall-Beyer, M., and Pereverzoff, J. (2003).
Preliminary evaluation of eCognition object-based software
for cut block delineation and feature extraction. Canadian
Journal of Remote Sensing, 29(4), 441– 452.
Tutorial 4
Opening Statement:
This tutorial teaches you, through simple steps, to find, download, and view free Landsat-7 imagery in order to extract useful landcover information inside the eCognition software. Geospatial students and even professionals looking for satellite imagery often prefer medium-resolution imagery for a specific application but are unsure where to start. The primary method is to combine different bands into a few color composites, enabling you to interpret qualitative information visually. The main aim of the tutorial is then to develop a few band ratios for separating water surfaces from other types of landcover along the Kura River, Aran Rayon, Azerbaijan.
Instructive Memo:
✓ Level: Intermediate
✓ Time: This tutorial should not take you more than 1.5 hours.
✓ Software: The eCognition Developer software (Version 9.01),
gain for all scenes. The approximate scene size is 170 km north-
south by 183 km east-west.
Landsat-7 data products are provided free of charge to all data users, including students and researchers, under the terms and conditions of the USGS/NASA Landsat programme.
2.1) Set the layer mixing and equalizing options based on the "Edit Image Layer Mixing" dialog box capabilities. It affects the display of Landsat-7 images, with nine bands (see Table 2), as subsetted for small parts of the Aran Rayon.
2.2) Define the color composition to visualize image layers in
the map view. In addition, you can choose from different
equalizing options. It enables you to visualize the image better
and recognize the visual structures without changing them.
2.3) Choose to hide layers, which can be very helpful when
investigating image data and results. Note that changing the
image layer mixing changes the visual display of the image but
not the underlying image data – it has no impact on the process
of image analysis.
2.4) To do so, you need to open the Edit Image Layer Mixing dialog box.
Figure 2: The "Edit Image Layer Mixing" dialog box, the layer mixing and equalizing options
2.5) You may define the color composition to visualize image
layers in the map view, based on different RGB Layer Mixing
presets (Figure 3).
Figure 4: Edit Image Layer Mixing dialog box, Layer Mixing options
Band combination: Equalizing / Layer weights
B6: Linear (1.00%), no layer weights
B5, B4, B3: Linear (1.00%), no layer weights
B6t1, B5, B4: Standard Deviation (3), no layer weights
B5, B4, B3: Standard Deviation (3), no layer weights
Figure 6 (a & b): Setting the Landsat-7 bands for the Standard
Deviation and Gamma Correction parameters
on the Edit Image Layer Mixing icon to open its dialog box.
Now you can switch the display to the output layer. Once again, repeat the above-mentioned procedure for another band ratio, for example "(B3Red/B4NIR)", to extract another water component.
4.5) When you insert the different water ratio algorithms inside the Process Tree, it looks like Figure 9.
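For readers who want to check the arithmetic outside eCognition, the following numpy sketch reproduces the red/NIR ratio idea and turns it into a rough water mask. The band file names and the 1.0 threshold are illustrative assumptions, not values prescribed by the tutorial.

import numpy as np
import rasterio

with rasterio.open("LE07_B3.TIF") as red_src, rasterio.open("LE07_B4.TIF") as nir_src:
    red = red_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")
    profile = red_src.profile

ratio = red / (nir + 1e-9)                    # water tends to give high red/NIR values
water_mask = (ratio > 1.0).astype("uint8")    # threshold chosen only for illustration

profile.update(count=1, dtype="uint8")
with rasterio.open("water_ratio_mask.tif", "w", **profile) as dst:
    dst.write(water_mask, 1)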
Sum Up:
Informative Practices
Tips:
1) At an altitude of 705 km, a full surface scan by Landsat 7 takes 232 orbits, or 16 days. According to local solar time, the terrain survey takes place at approximately 10 am (± 15 minutes).
2) You can fuse (combine) Landsat-7 bands, particularly the panchromatic band B8, with other sensor data to enhance your research approaches.
3) You can process the Landsat-7 images for land monitoring studies, monitoring of vegetation, soil, and water cover, as well as observation of inland waterways and coastal areas.
Workouts:
1) Clarify the difference between TM and ETM+ sensors.
2) Download a set of Landsat-7 imagery for a region where you
live.
3) Moisture Stress Index (MSI (Landsat 4 – 7) = B5 / B4) is
applied for canopy stress analysis, productivity prediction,
and biophysical modeling. Apply this index for your selected
Landsat imagery.
Quizzes:
1) What sorts of band combinations are suitable for
demonstrating fired lands?
2) Which bands of Landsat-7 have the highest spatial
resolutions?
3) How many Landsat satellites are there in space?
Allied References:
Tutorial 5
Opening Statement:
In the current tutorial, you will learn to generate spectral indicators from Landsat-8 imagery by applying the eCognition software's computational methods, which offer qualitative information about ground surfaces. You will create spectral indices that highlight vegetation and water features in the SariSu Lake region, Aran Rayon, Azerbaijan, emphasizing the roles of the "Image Object Information" and "Feature View" windows as key entries to the OBIA procedures.
Instructive Memo:
✓ Level: Beginner and Intermediate
✓ Time: This tutorial should not take you more than 1.5 hours.
✓ Software: The eCognition Developer software (Version 9.01),
✓ Data Sources: Landsat-8 Imagery, https://glovis.usgs.gov/app,
✓ Subject Scene: Sari-Su Lake, Kura River, Azerbaijan.
Tutor Objectives:
By the end of this unit, you should be able to:
Figure 2: The Subject Scene, where the Sarisu and Kura River are
located
Band Numbers | Band Characters | Wavelength (micrometers) | Resolution (meters)
2 | Blue | 0.45-0.51 | 30
3 | Green | 0.53-0.59 | 30
4 | Red | 0.64-0.67 | 30
5 | Near Infrared (NIR) | 0.85-0.88 | 30
6 | SWIR 1 | 1.57-1.65 | 30
7 | SWIR 2 | 2.11-2.29 | 30
8 | Panchromatic | 0.50-0.68 | 15
9 | Cirrus | 1.36-1.38 | 30
10 | Thermal Infrared (TIRS) 1 | 10.60-11.19 | 100
11 | Thermal Infrared (TIRS) 2 | 11.50-12.51 | 100
2.1) There are many spectral indices that you may like to use to analyze various aspects of vegetation, water resources, snow, soil, and fire, among others, inside the eCognition setting.
3.1) Start the eCognition software and create the required initial
image based on the different band combinations.
3.2) Now, you need to set the main functional windows as View
Settings, Process Tree, Image Object Information, and Feature
View, as illustrated in Figure 3.
Figure 4: The Edit Process dialog box, setting up the Layer Arithmetic
algorithm parameters
4.3) Click on the Execute option and wait for a second to finish
the process. Soon you will notice the NDVI layer created.
4.4) Click on the Edit Image Layer Mixing and set it up, as you
can see in Figure 5. Then, click on the OK option to notice your
desired NDVI layer.
5.1.9) The initial black and white maps of vegetation and water
indices are illustrated in Figure 11.
Figure 11: Black and white maps of NDVI-2 and NDWI-2 spectral
indices created through an Arithmetic Customized Feature method
5.1.10) After creating NDVI and NDWI indices, you can find
the new arithmetic feature in the Image Object Information
window or the Feature View window under Object features >
Customized options.
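The two customized features correspond to simple band arithmetic. As a plain numpy sketch (independent of eCognition, with assumed band file names), NDVI = (NIR - Red) / (NIR + Red) and NDWI = (Green - NIR) / (Green + NIR) can be computed directly on Landsat-8 arrays:

import numpy as np
import rasterio

def read_band(path):
    with rasterio.open(path) as src:
        return src.read(1).astype("float32")

green = read_band("LC08_B3.TIF")
red   = read_band("LC08_B4.TIF")
nir   = read_band("LC08_B5.TIF")

ndvi = (nir - red) / (nir + red + 1e-9)       # vegetation: roughly 0.1 to 0.7
ndwi = (green - nir) / (green + nir + 1e-9)   # open water: positive values

print("NDVI range:", ndvi.min(), ndvi.max())
print("NDWI range:", ndwi.min(), ndwi.max())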
5-2) Relational Features
Relational features are used to compare a particular feature of
one object to those of related objects of a specific class within a
specified distance. Related objects are surrounding neighbors,
121
Mastering Object Based Image Analysis SECTION ONE
122
Mastering Object Based Image Analysis SECTION ONE
6.4) The selected feature values are displayed in the map view.
To compare single image objects, click another image object in
the map view, and the displayed feature values are updated
(Figure 13).
Informative Practices
Tips:
1) The Normalized Difference Vegetation Index (NDVI) is a
simple numerical indicator that you can use to analyze remote
sensing measurements. NDVI is related to vegetation, where
healthy vegetation reflects very well in the near-infrared part
of the spectrum.
2) Index values can range from -1.0 to 1.0, but vegetation values typically range between 0.1 and 0.7. Free-standing water (ocean, sea, lake, river, etc.) gives a rather low reflectance in both spectral bands and thus results in very low positive or even slightly negative NDVI values.
3) Soils generally exhibit a near-infrared spectral reflectance somewhat larger than the red, thus generating rather small positive NDVI values (say 0.1 to 0.2).
Workouts:
1) Clarify the difference between GNDVI and EVI Indices.
2) Download a set of Landsat-8 imagery for a region where you
live.
3) Try to create the basic domain spectral indices in the region.
Quizzes:
1) What sorts of indices are suitable for demonstrating fired
lands?
2) What is the basic functionality of the Image Object
Information window?
3) How could you use the Feature View window for the
thresholding aims?
Allied References:
1) Flanders, D., M. Hall-Beyer, and J. Pereverzoff. (2003).
Preliminary evaluation of eCognition object-based software
for cut block delineation and feature extraction. Can J Remote
Sens 29(4):441–52.
2) Fuentes, S. R. de Bei, J. Pech, and S. Tyerman (2012).
Computational water stress indices obtained from thermal
image analysis of grapevine canopies," Irrigation Science, vol.
30, no. 6, pp. 523–536.
3) Liu, B.; Chen, J.; Chen, J.; Zhang, W. (2018). Landcover
Change Detection Using Multiple Shape Parameters of
Spectral and NDVI Curves. Remote Sens., 10, 1251.
4) McFeeters, S. K. (1996). The use of the Normalized
Difference Water Index (NDWI) in the delineation of open
water features," International Journal of Remote Sensing, vol.
17, no. 7, pp. 1425–1432.
5) Rasouli, A.A., and Mammadov, R. (2020). Preliminary
Satellite Image Analysis Inside the ArcGIS Setting, Lambert
Academy Publishing, Germany.
6) Vermote, E., Justice, C., Claverie, M., & Franch, B. (2016).
Preliminary analysis of the performance of the Landsat 8/OLI
land surface reflectance product. Remote Sensing of
Environment, 185, 46-56.
SECTION TWO
Tutorial 6
A Quick Look at eCognition's OBIA Capabilities
Opening Statement:
in the toolbar.
2.2) Navigate to your working directory, for example, "E:\
Satellite Images\S1-T6-North Azer\
LC08_L1TP_167032_20210522. You may have your personnel
directory setting.
2.3) Select all *.Tif image files you need and click on the OK
button.
2.4) Inside the Create Project dialog box, you can edit or remove any band or bands that you do not need, or you may insert another raster layer.
2.5) For the current tutorial, you may keep the Landsat bands 2, 3, 4, 5, 6, 7, and 8 and a subset of a raster ALOS PALSAR DEM file, AP_05189_FBS_F0810_RT1, with 12.5-meter spatial resolution. Notice Table 1 to review the Landsat-8 band designations used in the current tutorial.
Table 1: Landsat-8 band designations
Band(s) Combination: Operation
2, 3, and 4: These bands are used to create a true-color band combination, or normal RGB picture of the visible light. The basic aim of these filters is to create a visual map of the area.
4, 3, and 2: These bands show agricultural farms around the image. Dark green in the picture indicates woods; greens are healthy plantations.
5, 4, and 3: You could use a combination of these bands to create a false-color image, with band 5 displayed as red (vigorous vegetation appears bright red), band 4 as green, and band 3 as blue.
6 and 7: These bands use different parts of the shortwave infrared and are helpful in terms of monitoring rocks and soils. When analyzing the image, the spectrum is almost fully absorbed by
2.6) When you have the Landsat-8 bands, you can select one or several bands to create a clearer picture of the landcover, according to the specific needs of different kinds of research. For example, it is possible to use false-color images to enhance the visual appearance of the data. The opportunity given is to substitute the true colors of the image with the required band combinations listed above.
2.7) To speed the processing procedure, you must first select the
bands you need, as indicated in Table 1. Also, you can subset a
small interest part of the image.
2.8) When you are happy with the image selection mode, click
on the OK option. The new image project is created with Blue,
Green, and Red bands.
3.1) Select "Image Layer Mixing from the "View" menu or click
Now that you have created a project, you can make your first
object-oriented image analysis as the main feature of
eCognition. For this reason, the first step in eCognition is
always to extract image object primitives (segments), which will
become the building blocks for subsequent classifications. You
will now produce such image objects with multiresolution
segmentation that generates image objects at any chosen
resolution.
4.1) Right-click inside the Process Tree and click on Append New; in the drop-down for Algorithm, select multiresolution segmentation and set the other segmentation parameters as in Figure 3. You may try different segmentation settings to find the best-fit image objects.
object. You can use the button to show the outlines colored
in the classification colors after classification is performed.
4.11) In this stage, you can right-click on the created segmented Level-1 inside the Process Tree and modify the segmentation process.
5.4) Another tool that helps you with this task is the Feature View. The Feature View allows you to display one feature for all image objects. The image objects are rendered in gray values corresponding to the feature value. The brighter an object is, the higher its feature value is for the selected feature.
5.5) From the "Tools" menu, choose "Feature View. Select the
feature "Object features > Layer values > Mean > B2-Blue.tif"
by double-clicking it. The objects will then be colored according
to their feature value for the selected feature. A low weight
value represents a high feature value; a high weight value a low
feature value. You can also visualize features out of every other
dialog where features are selected, such as the "Insert
Expression" dialog or the "Select displayed Feature" dialog
(Figure 7). In these cases, you open a pop-up menu with a right-
click and select "Update range."
6.2) Type Snow into the box for name and change the box for
selecting colors to Blue, then click OK.
6.3) Repeat these steps for Agriculture, Barren, Forest, Pasture, Snow, Water, and Snow-Tracks (patches), a geomorphological pattern of snow and firn accumulation that lies on the surface longer than other seasonal snow cover. Make sure that you label and give the appropriate colors to all classes (Figure 9).
6.5) Go to the Class Hierarchy box, click on the Snow class, and
make sure it is highlighted. It makes you confident that your
selections go to that class.
6.6) Zoom down to an area called Snow and double click inside
the segments that overlay Snow areas or hold the shift key and
click once in the segment. It will turn the segment to the color of
the selected class. It is easier to figure out what plays with the
segmentation tools shown in the image below (Figure 11).
You can check your samples during and after the sampling
process using two important tools.
7.1) First, you can apply the Sample Editor window, the
principal tool for inputting samples. From the Classification menu, select Samples and then the Sample Editor option. A selected
class (for example, Snow class) shows histograms of selected
features of samples in the currently active map. You can display
the same values for other image objects (for instance, Snow-
Tracks) at a certain level or all levels in the image object
hierarchy (Figure 12).
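The idea behind sample-based classification can also be sketched outside eCognition. The toy example below uses scikit-learn's nearest-neighbor classifier on per-object mean band values; the feature numbers, sample labels, and class names are invented purely for illustration and do not reproduce eCognition's own classifier.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical per-object features: rows = image objects, columns = mean band values.
object_features = np.array([
    [0.10, 0.12, 0.60],
    [0.35, 0.40, 0.20],
    [0.90, 0.85, 0.80],
    [0.15, 0.30, 0.55],
])

# Labeled sample objects, as collected by clicking segments in Step 6.
sample_features = np.array([[0.11, 0.13, 0.58], [0.88, 0.86, 0.82], [0.33, 0.41, 0.22]])
sample_labels = np.array(["Water", "Snow", "Barren"])

clf = KNeighborsClassifier(n_neighbors=1).fit(sample_features, sample_labels)
print(clf.predict(object_features))   # class assigned to every image object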
Figure 23: Output of the Error Matrix based on TTA Mask statistics
9.4.4) The Error Matrix Based on Samples dialog box displays a
statistic type used for accuracy assessment. Error Matrix based
on Samples is similar to Error Matrix Based on TTA Mask but
considers samples (not pixels) derived from manual sample
inputs. The match between the sample objects and the
classification is expressed in terms of parts of class samples
(Figure 24).
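The error-matrix logic itself is easy to reproduce outside eCognition. The sketch below compares invented reference sample labels with invented classified labels and reports overall accuracy and the kappa coefficient; it is only an illustration of the statistic, not an export from the software.

import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

reference  = np.array(["Snow", "Snow", "Water", "Forest", "Forest", "Barren"])
classified = np.array(["Snow", "Water", "Water", "Forest", "Barren", "Barren"])

labels = ["Snow", "Water", "Forest", "Barren"]
matrix = confusion_matrix(reference, classified, labels=labels)

print(matrix)                                               # rows: reference, columns: classified
print("overall accuracy:", np.trace(matrix) / matrix.sum())
print("kappa:", cohen_kappa_score(reference, classified, labels=labels))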
10.1) When you are happy with the results, you can export them by clicking on Export –> Export Results.
10.2) Leave the buttons to the left default; change the Export
File Name to Bazarduzu, for example.
10.3) Click on Select Classes and select all except for
unclassified, then click OK (Figure 25).
10.4) Click on Select Features to add all the attributes. The first
feature you want to add to the attribute table is the area of each
class.
10.5) Follow this path to add the area: Object Features –>
Geometry –> Extent –> Area and then double click on the area
to add it to the space to the right (Figure 26).
10.6) The class name is the second feature you want to add to
the attribute table. Follow this path to add the name: Class-
Related features –> Relations to Classification –> Class name
and double click on Create new Class Name, and in the new
window, click OK. It will let you select the box (Figure 27).
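If you later want to check the exported areas independently, a classified raster can be summarized with a short script. The sketch below is not an eCognition export; the file name and class codes are assumptions used only to show the per-class area idea.

import numpy as np
import rasterio

with rasterio.open("bazarduzu_classification.tif") as src:
    classes = src.read(1)
    pixel_area_m2 = abs(src.transform.a * src.transform.e)   # pixel width x height

class_names = {1: "Snow", 2: "Water", 3: "Forest", 4: "Barren"}   # assumed coding
for code, count in zip(*np.unique(classes, return_counts=True)):
    name = class_names.get(int(code), f"class {code}")
    print(f"{name}: {count * pixel_area_m2 / 1e6:.2f} km2")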
Sum Up:
Informative Practices
Tips:
1) In eCognition Developer, segmentation is an operation that
creates new image objects or alters the morphology of existing
image objects according to specific criteria. It means a
segmentation can be a subdividing operation, a merging
operation, or a reshaping operation.
2) There are two basic segmentation principles: Cutting
something big into smaller pieces, which is a top-down
strategy, and merging small pieces to get something bigger,
which is a bottom-up strategy.
3) Inside the eCognition 9.01, four classification algorithms are
available: Optimal Box, Nearest Neighbor, Brightness
Threshold, and Clutter Removal.
Workouts:
1) Define the different color compositions to visualize image
layers for display in the map view.
2) Try two different segmentation actions (for instance, Quadtree and Multiresolution modes) to create image objects and compare the results.
3) Use the Classification, Brightness Threshold action to classify
objects based on brightness.
Quizzes:
1) According to the experience from current and previous
tutorials, what are the eCognition software advantages and
disadvantages in processing Landsat-8 imagery?
2) How would you evaluate the final segmented map accuracy inside eCognition?
Section Two at a Glance
Basic Concepts:
SECTION TWO
Tutorial 1
Opening Statement:
The main purpose of the current tutorial is to teach students how
to work professionally inside the trial version of the eCognition
9.5. In the first step, you will download and install the eCognition Developer version 9.5 on your computer. In the second step, the tutorial teaches you simple steps to find, download, and manage free Sentinel-2 data by accessing the Copernicus Open Access Hub. In the third step, you will learn to import the downloaded imagery into the eCognition software. Creating a simple project helps intermediate learners understand the main aims of OBIA concepts, helping them design, improve, and accelerate work with real-world high-resolution satellite imagery during the next tutorials.
Instructive Memo:
✓ Level: Intermediate
✓ Time: This lesson should not take you more than 1 hour.
✓ Resources: The eCognition Developer software, Version 9.5
✓ Data Sources: Sentinel-2 Imagery,
L1C_T39TVE_A030362_20210415.jp2
✓ Scene Site: The Baku Peninsula
Tutor Objectives:
By the end of this unit, you should:
Learn how to download the eCognition Developer 9.5.
Start to install the eCognition Version 9.5.
Be trained to access the Sentinel-2 imagery.
Bring into the high-resolution imagery inside eCognition.
Background Concepts:
The eCognition trial version 9.5 enables researchers to examine
almost all high-resolution satellite imagery in pixel and object
levels, not in isolation, but contextual relations. Although
students do not have access to the official version of the new
eCognition software, they will acquire the necessary
constructive skills over time by doing simple exercises. Inside
the Trimble eCognition Developer Trial 9.5 version, you can
experience many image processing, classification functions as
experts and data scientists do in the geospatial data analyzing
main steps. Intermediate learners can design feature extraction
solutions to transform geo-data into geo-information. The
possibilities are continuous, with more feature extraction
experience. Besides, researchers could pioneer OBIA techniques
and continue to push the envelope of Sentinel-2 newly provided
imagery and integrated analyses.
Figure 11: the Completing the eCognition Developer Trial 9.5 Setup
Wizard window
1.15) You can start the eCognition Developer Trial version and set up the software based on your preferences (Figure 12).
2.10) Using the Search Criteria text box in the top-left, click on the menu and choose your data criteria, for instance, which date and Sentinel mission you need (Figure 16).
2.12) Click on this sign to view the product details (Figure 18).
folder similar to
in the jp2 format. Now, you need to unzip the contents of
the above folder in a particular folder.
2.14) The Sentinel-2 satellites each carry a single multispectral instrument (MSI) with 13 spectral channels in the visible/near-infrared (VNIR) and shortwave infrared (SWIR) spectral range. Keep in mind that Sentinel-2A Level-1C data already contains values in TOA reflectance. Therefore, you need to convert its bands into GeoTIFF in the SNAP software. Alternatively, you can convert the Sentinel image format into GeoTIFF inside the ArcGIS setting.
2.14.1) Download the SNAP software from the Sentinel Data Hub: https://step.esa.int/main/download/snap-download/. It is free, open-source software. It is a common open-source architecture for ESA Toolboxes, ideal for exploiting Earth Observation data.
2.14.2) Sentinel Data Hub offers three different installers for
your convenience. Choose the one which suits your
needs. During the installation process, each toolbox can
be excluded from the installation. Note that SNAP and
the individual Sentinel Toolboxes also support numerous
sensors other than Sentinel. Inside the SNAP software,
try File/ Open Product.
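As an alternative to the SNAP or ArcGIS route, a single JP2 band can be converted to GeoTIFF with rasterio, assuming your rasterio/GDAL build includes JPEG2000 support. The band file name below is a hypothetical example derived from the tutorial's product name.

import rasterio

src_path = "L1C_T39TVE_A030362_20210415_B04.jp2"   # hypothetical band file name
dst_path = "T39TVE_20210415_B04.tif"

with rasterio.open(src_path) as src:
    profile = src.profile
    profile.update(driver="GTiff")                  # keep CRS, transform and dtype
    with rasterio.open(dst_path, "w", **profile) as dst:
        dst.write(src.read())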
3.2) From the menu bar, click on the File and select the New
Project option. Soon the Create Project Window opens by
accompanying the Import Image Layers dialog box (Figure 20).
Quizzes:
Tutorial 2
Opening Statement:
In the first step, this tutorial teaches you to be familiar with the
eCognition 9.5 structure and functionality. In the second step,
the tutorial will teach you how to set up and configure your
workspace and projects within eCognition, based on Sentinel-2
datasets. Setting up a workspace and project is the first step in
eCognition, in which you configure your folder structure and
import your data. A project is the most basic format in the eCognition architecture; it contains one or more sets of satellite imagery.
Instructive Memo:
✓ Level: Intermediate,
✓ Time: This tutorial should not take you more than 1.5 hours,
✓ Software: A eCognition Developer, Version 9.5,
✓ Data Sources: Sentinel-2 imagery T39TUE_20200311.Tif,
✓ Subject Scene: Baku region, Azerbaijan.
Tutor Objectives:
By the end of this unit, you should:
Learn how to download Sentinel-2 Imagery.
Be familiar with the workspace and projects.
Create a unique workspace and a project.
Create multiple projects inside a workspace.
Background Concepts:
Using the new eCognition software, you can import various
geospatial data, fusing them into a rich stack of geo-data for the
analysis. You may prefer to have a logic arrangement step-
wisely to create a computer-based representation of an expert’s
geospatial interpretation process, a so-called OBIA. eCognition
then combines the analysis logic with scalable computing power
to identify changes over time or features on the earth’s surface
across very large sets of data.
The eCognition technology enables researchers to examine
almost all high-resolution satellite imagery, such as Sentinel-2
(A & B), in pixel and object levels, not in isolation but in
contextual relations. Inside the eCognition software, you can
build up a picture iteratively, recognizing groups of pixels as
objects. Just like the human mind, it uses the color, shape, texture, and size of objects, as well as their context and
relationships, to draw the same conclusions that an experienced
analyst would draw. Although you do not have access to the
official version of the new eCognition software, you will acquire
the necessary constructive skills over time by doing simple
exercises.
1.2) The Developer version has the following parts, each with its own function:
1) Source View: This dialog provides users a simple data
management area to modify input layer alias, display
orders, and access information on file details. To open
the dialog, choose View > Source View.
2) Item: All metadata items are listed in the feature tree if
existing. You can define a new metadata item by
clicking on Create a new Metadata item.
3) Main View: Inside the Main View, you could display
raster data, vector layers, and other derivative map
products.
4) Process Tree: eCognition Developer uses a cognition
language to create ruleware. These functions are
created by writing rule sets in the Process Tree
window.
5) View Settings: In this dialog, you can add image,
vector, and point cloud layers via drag and drop and
edit them according to view settings. Toggle between
detailed layer properties switches between grayscale
and RGB mixing, or - for point cloud data - select a 3D
subset to open the 3D viewer. The upper pane allows
individual layer settings, the lower pane global view
settings for the respective data type. If you click on the
View Settings Tab, you will notice the Process Tree,
1.3) eCognition offers the possibility to add all kinds of data via
drag and drop to the View Settings dialog. You can add image
layers, vector, and point cloud layers to a new or existing
project. You may wish to select the layers in the Windows File
Explorer and drag and drop them to the View Settings dialog to
import them. The upper pane allows individual layer settings,
the lower pane global view settings for the respective data type.
1.4) Alternatively, you can select File > Add data layer or View
> Source View > Add data layer button.
1.5) Furthermore, eCognition project (.dpr) and workspace (.dpj)
files can be added to eCognition by drag and drop (Figure 3).
2.2) Use the Single Layer Grayscale button in the View Settings
dialog to display image layers separately in grayscale.
2.3) To change from RGB to grayscale mode, press the button,
and the first image layer is shown in grayscale mode.
2.4) Step through all loaded image layers by using the Show
Next/Previous Image Layer button or open multiple views for
comparison (Figure 4).
Figure 4: Single layer grayscale view with layer 2 (left) and layer 12
(right)
2.5) Three Layers RGB button displays the first three layers
of your scene in RGB. By default, layer one is assigned to the
red channel, layer two to green, and layer three to blue,
indicated by a small circle in the respective field. You can
change view settings by clicking on a circle (removes circle) or
an empty field (adds circle).
In Three Layer Mix, the Show Next/Previous Image Layer buttons shift the displayed layer combination by one layer at a time. For example, if layers two, three, and four are displayed, the Show Previous Image Layer button changes the display to layers one, two, and three. When the first image layer is reached, stepping back wraps around to the last image layer.
3.3) You can easily change the image layers' order and aliases in the Source View dialog (View > Source View). To rename a layer alias, right-click it and select Rename, press F2, or double-click on the layer alias to enter renaming mode.
Step 4: Image Layer Equalization Dialog
4.3) This function must be enabled in: Tools > Options >
Display > Use right-mouse button for adjusting window leveling
> Yes.
4.4) Image equalization is performed after mixing image layers into a raw RGB (red, green, blue) image. If, as is usual, one image layer is assigned to each color, the effect is the same as applying equalization to the individual raw layer gray-value images. Image equalization leads to higher-quality results if more than one image layer is assigned to one screen color (red, green, or blue), because it is then performed after all image layers are mixed into a raw RGB image (Figure 9).
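For readers who want to see this difference outside the software, the short Python sketch below applies scikit-image's histogram equalization once per channel and once to the whole mixed RGB array. It only mirrors the idea described above; it is not eCognition's internal implementation.

# Sketch: per-channel vs. whole-image equalization on a stand-in RGB array.
import numpy as np
from skimage import exposure

rgb = np.random.rand(256, 256, 3).astype("float32")   # stand-in for a mixed RGB image

# equalize each displayed channel on its own (per-layer effect)
per_channel = np.dstack([exposure.equalize_hist(rgb[..., i]) for i in range(3)])

# equalize the mixed image as a whole (per-image effect)
whole_image = exposure.equalize_hist(rgb)

print(per_channel.shape, whole_image.shape)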
4.8) The Zoom Scene to Window button helps you reset the
observer position and all zoom and rotation steps to default.
With point cloud data loaded, you can open an additional toolbar
View > Toolbars > 3D for more visualization options.
✓ Auto-update: With this check box cleared, the view shows the new settings only after clicking Apply; the Discard and Apply buttons become active once the Auto-update check box is cleared.
✓ Apply to all Views: With this checkbox activated, you can
apply selected settings to all views at once.
✓ Note that changing the Global Settings in this lower pane overwrites all settings in the upper pane (Figure 10).
6.2.2) Now that we've created our project, let's explore how
eCognition handles these data sets. First, we will go to Edit Aliases > Image Layer Aliases (Figure 13) under the Process menu.
then click on the Save Project icon to ensure they are saved in the workspace.
6.2.11) Moving over to Windows Explorer, we see that our subsets are saved as new project files within the DPR folder, located within our workspace demo folder.
Step 7: Creating an Initial Project
7.1) To create a simple project – one without thematic layers,
metadata, or scaling (geocoding is detected automatically) – go
to File > Load Image File in the main menu (Figure 14).
Figure 14: The Load Image File dialog box for a simple project, with recursive file display selected
7.2) Load Image File (along with Open Project, Open
Workspace, and Load Ruleset) uses a customized dialog box.
Selecting a drive displays sub-folders in the adjacent pane; the
dialog will display the parent folder and the subfolder.
7.3) Clicking on a sub-folder then displays all the recognized file types within it (this is the default setting).
7.4) You can filter file names or file types using the File Name
field. To combine different conditions, separate them with a
semicolon (for example *.tif; *.las). The File Type drop-down
list lets you select from a range of predefined file types. The
buttons at the top of the dialog box let you easily navigate
between folders. Pressing the Home button returns you to the
8.4.4) If you want to rescale the scene during import, edit the
scale factor in the text box corresponding to the scaling method
used: resolution (m/pxl), magnification (x), percent (%), or
pixel.
8.4.5) To use the geocoding information from an image file to
be imported, select the Use Geocoding checkbox.
8.4.6) For feature calculations, value display, and export, you
can edit the Pixels Size (Unit). If you keep the default (auto), the
unit conversion is applied according to the coordinate system of
the image data.
8.4.7) If geocoding information is included, the pixel size equals
the resolution. In other cases, pixel size is 1.
8.5) In special cases, you may want to ignore the unit
information from the included geocoding information. To do so,
deactivate Initialize Unit Conversion from the Input File item in
Tools > Options in the main menu.
8.6) The Image Layer pane allows you to insert, remove, and edit image layers. The order of layers can be changed using the up and down arrows. If you use multi-dimensional image data sets, you can check and edit the multi-dimensional map parameters: you can set the number, the distance, and the starting item for both slices and frames.
8.6.1) If you load two-dimensional image data, you can set the
value of those pixels that are not to be analyzed. Select an image
layer and click the No Data button to open the Assign No Data
Values dialog box.
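To illustrate what assigning a no-data value means for later feature calculations, here is a tiny NumPy sketch that masks such pixels. The value 0 is only an assumed example, and the sketch is an analogy to, not a reimplementation of, the Assign No Data Values dialog.

# Sketch: excluding an assumed no-data value (0) from statistics.
import numpy as np

band = np.array([[0, 120, 130],
                 [140, 0, 150],
                 [160, 170, 0]], dtype="float32")

NO_DATA = 0.0
masked = np.ma.masked_equal(band, NO_DATA)        # masked pixels are ignored below

print("mean without no-data pixels:", masked.mean())
print("mean including no-data pixels:", band.mean())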
Informative Practices
Tips:
1) Any Workspace is a safe house for eCognition projects.
2) The image file and the associated data within a scene can be
independent of eCognition software (although this is not
always true).
3) You can fuse (combine) Sentinel-2 data and other sensor data to enhance your research approaches.
Workouts:
1) Create a workspace and place two different subsetted projects
inside it.
2) Load image bands for the project you created and then experiment with different image layer combinations.
3) Clarify the difference between Sentinel-2A and Sentinel-2B.
Quizzes:
1) What does this group of buttons allow you to do?
Tutorial 3
Background Concepts:
Segmentation is defined as the partitioning of an image into image objects; that is, an image object is a group of connected pixels in a scene. Segmentation means grouping neighboring pixels into regions (or segments) based on similarity criteria (digital number, texture). Image objects in remotely sensed imagery are often homogeneous and can be delineated by segmentation. It is always the first step of any process within eCognition Developer, as it generates the image objects on which the classification process will be performed. The important part is for the segmentation process to identify objects that represent the features you wish to classify and that are distinct in terms of the features available within eCognition (e.g., spectral values, shape, and texture). You can capture the data sources from Sentinel-2 imagery, subsetted to the area around the Mingachevir Dam. Mingachevir is the fourth-largest city in Azerbaijan.
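The grouping of neighboring pixels by similarity can be illustrated outside eCognition with scikit-image, as in the minimal sketch below. The Felzenszwalb algorithm used here is only a stand-in for the general idea of segmentation, not the algorithm eCognition applies.

# Sketch: segmentation as grouping neighbouring pixels into regions by similarity.
import numpy as np
from skimage.segmentation import felzenszwalb

image = np.zeros((100, 100), dtype="float32")     # toy single-band scene
image[30:70, 30:70] = 1.0                         # one bright, homogeneous feature

segments = felzenszwalb(image, scale=50, sigma=0.5, min_size=20)
print("number of image objects:", int(segments.max()) + 1)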
color. You can turn off the coverage map by clicking its icon.
1.2.2) Metadata Filter: The Metadata Filter provides options
that narrow search results. Update one or more filters, then click
Apply to save all selections and view the matching scenes.
a) You can Filter data temporally by Date Range using
mm/dd/yyyy to mm/dd/yyyy.
b) Enter a Cloud Cover range to narrow the results based on
the percentage (0-100%) of the scene covered by clouds.
Leave this filter empty for data sets that do not report
cloud cover, such as GLS.
c) Select Months to further limit search results to scenes
acquired during a portion of the year. Hold down the Shift
or Ctrl keys to select more than one month while selecting
additional rows.
1.2.3) Click the triangle icon in the upper right of the header to collapse the menu.
1.3) Define Area of Interest
The next step is to define the geographic area of interest.
1.3.1) To select a location, click and drag the map to pan, and scroll to zoom into your area of interest. Then, utilize the Jump To… menu in the upper right of the map. As the zoom function is activated, the display shows browse images that cover the map view area. You can enable Full Screen mode to view the map in full screen.
a) Pan and Zoom: Use the mouse scroll wheel or the +/- controls in the upper right corner of the map view to activate the browse imagery display or enlarge the browse images. The zoom level required to activate the browse images depends on scene size and varies by data set.
b) Pan to adjacent areas by clicking and dragging the map. The
data sets automatically refresh if the zoom level allows imagery
to display.
1.5.5) Hide Scene: Click the Hide Scene icon to hide the current scene from the list and remove it from the map. The number of hidden scenes is indicated by a counter next to the data set name in the Interface Controls panel. Click on the counter to clear hidden scenes and make them available for display again.
1.6.3) Download and Order options vary by data set (Figure 6).
The Export Scene List and Import Scene List are functions of
GloVis only.
Step 2: Set up a Project
As with all work within eCognition Developer, the first step is
to create a project containing all the datasets required for the
study.
2.1) Your targeted project should have the same parameters as those shown in the previous tutorials.
2.2) Figure 7 shows the current tutorial selected area, part of the
Mingachevir area, Azerbaijan, inside the eCognition project.
m of the first data set. Once you have matched your project window, select OK and create your project.
2.4) It is important to keep an eye on the size of the images you are creating; an eCognition Developer project can become very slow with very large datasets due to the number of objects generated during the segmentation process.
Step 3: Display Imagery
3.1) For these exercises, we recommend that you subset a small part of the Sentinel-2 imagery.
3.2) For the multispectral false-color image, use a band combination of the B2 (blue), B3 (green), B4 (red), B7 (VNIR), and B11 (SWIR) components, as illustrated in Figure 8.
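If you prefer to preview such a band combination outside eCognition, the Python sketch below builds a three-band false-color composite with a simple 2-98 percentile stretch. The band arrays are placeholders for the subsetted Sentinel-2 layers, and mapping SWIR, VNIR, and red to R, G, and B is just one illustrative choice.

# Sketch: a stretched false-colour composite from three placeholder bands.
import numpy as np

def stretch(band, lo=2, hi=98):
    """Linear percentile stretch to the 0..1 display range."""
    p_lo, p_hi = np.percentile(band, [lo, hi])
    return np.clip((band - p_lo) / (p_hi - p_lo + 1e-9), 0, 1)

b04 = np.random.rand(200, 200)    # red  (placeholder for the real B4 array)
b07 = np.random.rand(200, 200)    # VNIR (placeholder for B7)
b11 = np.random.rand(200, 200)    # SWIR (placeholder for B11)

false_colour = np.dstack([stretch(b11), stretch(b07), stretch(b04)])
print(false_colour.shape)         # (rows, cols, 3), ready for e.g. plt.imshow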
Figure 10: The Edit Process dialog box, a basic outline for the
segmentation processes
4.3) The Edit Process dialog box defines the main segmentation process. You can place several child segmentation processes beneath it by selecting the Insert Child option. In addition, you may arrange other algorithms in the template Process Tree before or after the segmentation processes, as you can see in Figure 11.
Figure 12: Edit Process dialog box, adjusted for the multiresolution
segmentation algorithm
5.3) Table 1 briefly describes the parameters available for this segmentation algorithm. The 'Edit Process' dialog is made up of
Figure 14: The Quadtree segmentation result for the part of Mingachevir Lake
7.1.4) Then, from the Value list, select bands or features to add them to the Selected Features pane on the right side of the Select Displayed Features dialog. Remember that you can even add some Geometry features, such as Area and Number of pixels, to the list of mean values.
7.1.5) The selected feature values are displayed in the map view.
To compare single image objects, click another image object in
the map view, and the displayed feature values are updated
(Figure 18).
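The kind of per-object values shown here (mean layer value, area as a pixel count) can be reproduced conceptually with a few lines of NumPy, as in the sketch below on a toy label image; it is only an analogy to eCognition's object features.

# Sketch: mean layer value and pixel count per image object (toy example).
import numpy as np

labels = np.zeros((60, 60), dtype=int)   # two "image objects": upper and lower half
labels[30:, :] = 1
band = np.random.rand(60, 60)            # stand-in for one image layer

for obj_id in np.unique(labels):
    mask = labels == obj_id
    print(f"object {obj_id}: {mask.sum()} pixels, mean value {band[mask].mean():.3f}")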
Figure 19: The Feature View window and information about objects
Sum Up:
You could use segmentation algorithms to subdivide entire
images at a pixel level or specific image objects from other
domains into smaller image objects. eCognition 9.5 software provides several diverse approaches to segmentation. These range from very simple algorithms, such as Chessboard and Quadtree-based segmentation, to highly sophisticated methods such as the Multiresolution Segmentation and Multi-Threshold Segmentation algorithms. Those are required to create new
image object levels based on image layer information. But they
are also a valuable tool to refine existing image objects by
subdividing them into smaller pieces for more accurate
classification.
A few examples of image segmentation algorithms are given
during the current tutorial. If you spend approximately 30
Informative Practices
Tips:
1) The first step of an eCognition image segmentation is to cut the
image into pieces.
2) Segmentation serves as a building block for further analysis.
3) There is a choice of several algorithms to do the segmentation
process.
Workouts:
1) Using the layer combination of your choice (bands with the 10-meter resolution are recommended), experiment with the image equalizations available. Observe how the various land cover types respond to these changes.
2) Decide on the most appropriate segmentation algorithm for
segmenting this scene.
3) As you are doing this, consider which elements you think provide a better segmentation and how you could use the different characteristics of the various algorithms to achieve the segmentation you require.
Quizzes:
Tutorial 4
Opening Statement:
The current tutorial will create a nearest neighbor classification
of a segmented Sentinel-2 image acquired from an area around
the Baku region, Azerbaijan. The main objective of image
classification is to identify and portray, as a unique gray level
(or color), the features occurring in an image in terms of the
object or type of landcover these features represent on the
ground.
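Conceptually, nearest neighbor classification assigns each unclassified image object to the class of its most similar sample object in feature space. The scikit-learn sketch below shows that idea with invented object features; it is an illustration of the principle, not the eCognition implementation.

# Sketch: 1-nearest-neighbour classification of image objects by two mean-band features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# sample objects: [mean_red, mean_nir] with their classes (values are invented)
X_train = np.array([[0.30, 0.10],    # water
                    [0.25, 0.60],    # vegetation
                    [0.55, 0.35]])   # built-up
y_train = np.array(["water", "vegetation", "built-up"])

clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# unclassified image objects described by the same two features
X_objects = np.array([[0.28, 0.12], [0.27, 0.55], [0.50, 0.30]])
print(clf.predict(X_objects))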
Instructive Memo:
✓ Level: Intermediate,
✓ Time: This unit should not take you more than 1.5 hours,
✓ Software: eCognition Developer, version 9.5,
✓ Data Sources: Sentinel-2: L1C_T39TVE_A030362_20210415,
✓ Subject Scene: Baku region, the Republic of Azerbaijan.
Tutor Objectives:
By the end of this unit, you should:
Figure 11: Editing dialog for selecting the features within the NN
feature space
Step 3: Setting up the Process Tree
4.3) After inputting the parameters into the process, click on the OK button at the bottom. You need to select samples before performing your classification; clicking the Active classes option opens the Edit Classification Filter (Figure 16).
5.9) Once you have selected your samples, you should have an image that is similar in appearance. Bear in mind that there is no single correct answer when selecting samples. Just select the samples you consider most representative of the classes you wish to separate and that give the best separation in the Sample Editor and Sample Selection windows.
Step 6: Running the Classification Process
6.1) Now, you can execute the classification process you
previously created (right-click and select execute on the
classification process). You should now have a nicely classified
image similar to Figure 21.
Figure 22: The Process Tree after the inclusion of the classification
process
6.3) To re-run the classification, open the process and click Execute, select the process and press F5, or right-click on the process and select Execute.
Step 7: Merging the Result
7.1) The next step is to set up the processes which will merge
your classification so that all neighboring objects of the same
class will form single objects.
7.2) It is important to merge your classification to identify complete objects. For instance, you can query the Urban Area class to find its complete area once merged. To merge the result, you will need to enter a merge process for each class (via 'Insert Child'). The merge parameters for the vegetation class are shown in Figure 23.
Figure 23: The Edit Process dialog box, the merge algorithm
parameters for the Built-up class
7.3) The class for merging is defined using the Image Object Domain. The class of interest is defined there; if you were to select multiple classes, all the selected classes would be merged, removing the boundaries and classification of these objects.
7.4) To save time, once you have created your first merge
process, you can copy-and-paste (ctrl-c, ctrl-v, or right-click on
the process) to duplicate it and then edit the class you wish to
merge.
7.5) Once you are happy with your classification, execute the
merge image objects processes you have previously created.
Your results should appear similar to those shown in Figure 24.
7.6) The purpose of merging is to create a final classification
representing the scene's objects. For example, we will now
calculate the area of the whole water surface features. But, be
Figure 24: Sentinel-2 RGB image (a) and the classified map (b)
Step 8: Feature Space Optimization Tool
8.1) To refine the classification further, eCognition Developer
offers an automated feature, the Feature Space Optimization
function, to automatically identify the features which 'best'
separate the classes for which samples have been selected
(Figure 25).
8.4) To use this tool, select the features you wish to compare. Initially, try the mean, standard deviation, and pixel ratio, but later try other combinations. Then select Calculate; once the calculation has finished, select Advanced to see which features offered the best separation, and choose 'Apply to the Std. NN' to use them within the classification. You can now re-run your classification step.
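A rough analogue of what Feature Space Optimization does is to score candidate features by how well they separate the sampled classes. The sketch below uses a simple ANOVA F-score from scikit-learn on invented samples; eCognition's own tool uses a distance-based separability measure, so treat this only as an illustration of the idea.

# Sketch: ranking candidate object features by class separability.
import numpy as np
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                    # 20 sample objects x 3 candidate features
y = np.repeat(["water", "vegetation"], 10)      # their sampled classes
X[y == "water", 0] += 3.0                       # make feature 0 clearly discriminative

scores, _ = f_classif(X, y)
for name, score in zip(["mean", "std_dev", "ratio"], scores):
    print(f"{name}: separation score {score:.1f}")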
Step 9: Exporting the Result
9.1) Finally, export the results. This produces an ESRI shapefile and creates a map as shown in Figure 27. Researchers usually wish to export the classification result from eCognition into a GIS, most often ArcGIS, for further processing or the production of a map.
Informative Practices
Tips:
1) Assign Class assigns a class to an image object with certain
features, using a threshold value,
2) Classification uses the class description to assign a class; Hierarchical Classification uses the class description and the hierarchical structure of classes,
3) Advanced Classification Algorithms are designed to perform
a specific classification task, such as finding minimum or
maximum values of functions or identifying connections
between objects.
Workouts:
1) Experiment with different classification methods, and be
aware of what types of landcover/landuse you need to create.
2) Experiment with different features within the standard NN
feature space. (Classification > Nearest Neighbor > Edit
Standard NN feature space).
3) Experiment with different features and maximum dimension levels within the feature optimization tool by applying Classification > Nearest Neighbor > Feature Space Optimization.
Quizzes:
Tutorial 5
Threshold Rule-Setting With eCognition
Opening Statement:
The main purpose of the current tutorial is to teach the application of combined thresholding rule-set techniques in the classification of water bodies. For this practice, you need six high-resolution bands of Sentinel-2A imagery subsetted inside eCognition version 9.5 for the Neftchala Peninsula, Azerbaijan. Accordingly, you will first load the satellite imagery, create a new project, and segment the imagery. In the second step, you will create two customized indices, the Normalized Difference Water Index (NDWI) and Brightness, which help you start a threshold-based classification of water bodies in a wetland comprising the sea, rivers, water canals, ponds, and lakes.
Initial Memo:
✓ Level: Intermediate,
✓ Time: This tutorial should not take you more than 2 hours,
1.6) Select the image bands you wish to import, then press
the Ok button to display the Create Project dialog box.
pixel (pxl/pxl). You can edit the Pixels Size (Unit) for feature calculations, value display, and export.
c) If you keep the default (auto), the unit conversion is applied
according to the unit of the coordinate system of the image
data. You may want to ignore the unit information from the
included geocoding information in special cases. To do so,
deactivate Initialize Unit Conversion from the Input File
item in Tools > Options in the main menu. Geocoding is the
assignment of positioning marks in images by coordinates.
In earth sciences, position marks serve as geographic
identifiers. But geocoding is helpful for life sciences image
analysis too. Typical examples include working with
subsets, multiple magnifications, or thematic layers to
transfer image analysis results. Typically, available
geocoding information is automatically detected.
d) If not, you can enter coordinates manually. Images without
geocodes automatically create a virtual coordinate system
with a value of 0/0 at the upper left and a unit of 1 pixel.
For such images, geocoding represents the pixel coordinates
instead of geographic coordinates.
5) The Image Layer pane allows you:
a) to insert, remove and edit image layers (Figure 4). The
order of layers can be changed using the up and down
arrows.
visible bands (B2, B3, and B4) and a SWIR band. Keep in mind that such a Sentinel-2 band combination makes a useful RGB composite for recognizing water surfaces.
5.5) Finally, update the scale parameter (52) to make your image objects a bit larger, then adjust the shape setting to emphasize shape (0.3) a little more and the compactness setting to get more compact objects (0.7).
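To make the roles of scale, shape, and compactness a little more concrete, the sketch below encodes the merge criterion as it is commonly described for multiresolution segmentation (after Baatz and Schape): color and shape heterogeneity are mixed by the shape weight, shape heterogeneity mixes compactness and smoothness, and a merge is accepted while the combined cost stays below the squared scale parameter. The numbers are invented, and this is a conceptual sketch rather than eCognition's code.

# Sketch: how the shape (0.3) and compactness (0.7) weights enter the merge cost.
def merge_cost(d_colour, d_compact, d_smooth, shape=0.3, compactness=0.7):
    """Combined heterogeneity change for a candidate merge."""
    d_shape = compactness * d_compact + (1.0 - compactness) * d_smooth
    return (1.0 - shape) * d_colour + shape * d_shape

SCALE = 52
cost = merge_cost(d_colour=900.0, d_compact=300.0, d_smooth=200.0)  # invented values
print("merge allowed:", cost < SCALE ** 2)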
Figure 10: The segmentation result for the part of the Neftchala Peninsula
5.10) You can then toggle on and off the image objects using the
Figure 15: The Customized Features dialog box with a formula for NDWI
7.6) You can also display the actual values by going to the
Feature View window. Double-clicking on NDWI, for example,
assigns a gray-scale color ramp based on the NDWI values to
each of the objects.
7.7) By going into the lower left-hand corner of the Feature
View window and selecting the checkbox, you can play around
with the actual NDWI value ranges. You'll probably want to
right-click on the feature and choose update range to get the full
range of values before doing this. You can then use the arrows
to select the lower and upper range of values.
7.8) This isn't doing classification; it's just previewing what
would happen if you use these threshold values for classification
(Figure 17).
Figure 17: Adjusted NDWI values for the water bodies, including the
segmented objects
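Outside eCognition, the NDWI feature and this kind of threshold preview can be sketched in a few lines of NumPy, as below. McFeeters' NDWI uses the green and near-infrared bands (Sentinel-2 B3 and B8), and the 0.2 threshold is only an illustrative value, not a recommendation.

# Sketch: computing NDWI and previewing a water threshold on placeholder bands.
import numpy as np

green = np.random.rand(100, 100).astype("float32")   # placeholder for B3
nir = np.random.rand(100, 100).astype("float32")     # placeholder for B8

ndwi = (green - nir) / (green + nir + 1e-9)

water_preview = ndwi >= 0.2                           # boolean mask, like a Feature View range
print("candidate water pixels:", int(water_preview.sum()))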
7.9) When you are satisfied with the adjusted water-body range (Figure 18) obtained with the Feature View window tools, you may move to the next step.
Figure 19: The Edit Process dialog box, the assign class algorithm with class filter and conditions
8.2) For classification, you can use the assign class algorithm, a very simple algorithm for threshold-based classification.
8.3) Under the class filter, check the box for unclassified so that you are only focusing on objects that are unclassified (Figure 20).
[Figure: the threshold rule-set and the resulting classified NDWI map]
Figure 23: The Edit Condition dialog box with NDVI and Average Brightness values
9.3) The View Classification button will display the classification, and you can then choose to display the classification as either outlines or a solid fill. Overall, NDVI was very effective in helping us classify vegetated objects (Figure 24).
Figure 24: The result of the thresholding process for the green-cover
classification
9.4) You can go back into the assign class algorithm and remove the brightness condition. You may insert it later in a different assign class algorithm. Rerunning the segmentation algorithm every time you change your threshold parameters for classification is not very efficient.
9.5) If you need to insert a new algorithm, you can use the "remove classification" algorithm. It goes right after the segmentation algorithm and simply clears the classification from the image objects. This is a very quick algorithm, and it is an efficient one to run if you want to play around with your classification parameters.
9.6) Finally, you can go back to the original rule sets and modify the less-than or greater-than threshold values of the NDWI, NDVI, or Brightness conditions assigned to the water or vegetation classes.
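The small rule set described in steps 8 and 9 can be mimicked conceptually with a per-object feature table and a few boolean conditions, as in the sketch below; the feature values and thresholds are invented purely for illustration.

# Sketch: threshold rules assigning classes to unclassified objects.
import numpy as np

# per-object features: columns are NDWI, NDVI, brightness (invented values)
features = np.array([[0.45, 0.05, 0.20],
                     [-0.10, 0.60, 0.35],
                     [-0.05, 0.10, 0.70]])
classes = np.full(len(features), "unclassified", dtype=object)

ndwi, ndvi, brightness = features[:, 0], features[:, 1], features[:, 2]
classes[(classes == "unclassified") & (ndwi >= 0.2)] = "water"
classes[(classes == "unclassified") & (ndvi >= 0.3)] = "vegetation"
classes[(classes == "unclassified") & (brightness >= 0.5)] = "built-up"

print(classes)   # ['water' 'vegetation' 'built-up']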
Sum Up:
This tutorial introduced you to threshold- and rule-based classifications inside eCognition 9.5 by processing Sentinel-2 imagery subsetted to parts of the Neftchala Peninsula near the Caspian Sea. You worked through the following procedures:
✓ creating a project by loading Sentinel-2A images,
✓ running a multiresolution segmentation to create image objects,
✓ getting an impression of how to obtain information associated with the NDWI index,
✓ classifying water bodies with a threshold rule set.
Informative Practices
Tips:
1) Note that the eCognition 9.5 trial software access is not limited
to a specific period.
2) Keep in mind that export functions, saving projects, and the workspace environment are restricted in the trial version.
3) The most important point is that you cannot open rule sets saved in the trial software in a fully licensed version of eCognition. If you are interested in exporting and analyzing the processed data, use an authorized eCognition version.
Workouts:
1) Experiment with different segmentation algorithms and parameters. You should not have to edit the thresholds you have entered to reclassify the resulting segments, but you may notice varying levels of accuracy between different segmentations.
2) The classification produced during this unit is superficially OK
but contains numerous errors when viewed in more detail. Try
to improve the quality of the classification through the
refinement of the existing rules.
3) In addition to the rule used within the classification, there may
be other features available within eCognition Developer which
could aid the classification.
Quizzes:
1) Which subsetting options are selectable inside the Subset
Selection Dialog Box?
2) What exactly does the Threshold Classification Process do?
3) Why do you use the "assign class algorithm" through a
threshold-based classification procedure?
Allied References:
1) European Space Agency (2015). Sentinel-2 MSI: Overview. https://sentinel.esa.int/documents/247904/685211/Sentinel-2_User_Handbook.
2) Athelogou, Maria; Schmidt, Günter; Schäpe, Arno; Baatz,
Martin; Binnig, Gerd (2007). Cognition Network Technology –
A Novel Multimodal Image Analysis Technique for Automatic
Identification and Quantification of Biological Image
Contents. Imaging Cellular and Molecular Biological
Functions. Principles and Practice. pp. 407–422.
Tutorial 6
Practicing Change Detection with OBIA
Background Concepts:
Coastal zone detection is an important task in national development and environmental protection, in which the extraction of shorelines should be regarded as fundamental research. Very dynamic coastlines such as the Caspian Sea coasts and their islands can pose considerable risk to the surrounding countries' socio-economic development. Thanks to the rapid advances in image processing methods, modern and reliable OBIA techniques can be used to detect changes and update the coastline geodatabase of these areas in order to explore rates of physical and ecological retreat. Natural and artificial land features are very dynamic, changing somewhat rapidly within our lifetime. By mapping these changes accurately, you can more fully understand the physical and human processes at work. An advanced OBIA plays a unique role in easing the interpretation of Caspian Sea changes.
In eCognition Developer, you can work with so-called 'maps.' A map is a "sub-project" that you can process independently. Within one project, you can have several maps. Maps are independent "sub-projects." The original scene is always the
'main' map; all other created maps can have individual names. One requirement of the change detection process is to map multispectral images quickly and correctly. Unsupervised classification (USC) is where the outcomes (groupings of pixels with common characteristics) are based on the software's analysis of an image without the user providing sample classes. eCognition uses techniques to determine which pixels are related and groups them into classes. USC using cluster algorithms is often applied when there are no field observations or other reliable geographic information. For USC, eCognition users can execute an ISODATA cluster analysis and categorize continuous pixel data into classes/clusters having similar spectral-radiometric values.
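As a stand-in for the ISODATA clustering step, the sketch below groups the pixels of a placeholder image stack with plain k-means from scikit-learn. ISODATA additionally splits and merges clusters between iterations, which is omitted here, so this only illustrates the unsupervised idea.

# Sketch: unsupervised clustering of pixels into spectrally similar groups.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
bands = rng.random((4, 120, 120)).astype("float32")   # placeholder 4-band image stack
pixels = bands.reshape(4, -1).T                       # (n_pixels, n_bands)

kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(pixels)
cluster_map = kmeans.labels_.reshape(120, 120)        # unsupervised class map
print(np.bincount(cluster_map.ravel()))               # pixels per cluster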
1.1) To achieve the main aims of the current tutorial, you need to download Sentinel-2 images from ESA's Open Access Hub (https://scihub.copernicus.eu), selected for specific dates in 2015 and 2021. Basic information on the Sentinel-2 satellites is given in the previous tutorials.
1.2) For the current tutorial, we selected the environmentally sensitive coastal part of the Caspian Sea in Azerbaijan, internationally recognized as the world's largest lake and of global importance.
1.3) To achieve the main objectives of this practice, several image pre-processing steps have to be applied to the zipped Sentinel-2 datasets, including opening the downloaded zipped files and managing the corresponding GeoTIFF bands. The details of the conversion methods are given in the previous tutorials.
3.1) From the main menu, select File > New Project to access the two downloaded sets of multispectral image layers from a subset of a Sentinel-2 scene. T1 holds the layers from 2015-09-14 and T2 those from 2021-09-27. Evaluate the loaded image layers in the new project, named Gil-Island Changes.
3.2) When you set the desired bands inside eCognition, you will notice the following screen, in which the layers of T1 and T2 are displayed (Figure 1).
3.3) Click the ‘Edit Image Layer Mixing’ button in the ‘View’
toolbar or go to the main menu View > Image Layer Mixing.
Note that in the lower right corner of the viewer, you see which
map is currently displayed (Figure 3). In our example right now,
it is the ‘main’ map.
3.4) Click on the up arrow in the lower right of the ‘Edit Image
Layer’ dialog box until the bullets are moved completely to the
T2 multispectral layers (Figure 4).
4.1) Right-click in the Process Tree and select Append New. Set
the Edit Process dialog box as Figure 5.
Figure 5: The Edit Process dialog box, named the Gil-Island Changes
d) In the field 'Image Layers,' the Image layers needed for the
new map are defined. If nothing is set, all Image layers of the source
map are copied to the new map.
e) In the field 'Thematic Layers,' the thematic layers for the new
map are defined.
f) If 'Yes' is set in the field 'Copy Image Object Hierarchy,' the
existing Image Object Levels are copied to the new map. If you want a
backup map, you will use this option.
4.4) The process settings to create the map for T1 are shown in
Figure 7.
Figure 8: The 'Select Image Layers' dialog box, only the T1 Layers are
selected
4.4.6) Execute the process to create 'MapT1'; you can then explore the new map.
4.4.7) To display a map, use the dropdown list in the ‘View
Navigate’ toolbar, right beside the ‘Delete Level’ button. Select
‘MapT1’ (Figure 9).
Figure 9: The name of the displayed map appears in the lower right
corner of the viewer window, now ‘MapT1’ is displayed.
4.4.8) Now repeat the settings mentioned above to create a map
for T2 (Figure 10).
Figure 10: The Edit Process dialog box, process settings to create the
Map-T2.
Figure 12: Edit Process dialog box set to Map1 with the USC
algorithm
5.2) To run the USC algorithm, you may prefer to keep all parameters at their default settings, as shown in Figure 12. More details on the functionality of all required parameters are given in Table 1.
Output Layer Name: Define the name for the temporary layer that contains the result of the cluster analysis (cluster IDs).
Number of Iterations: Define how many times the clustering algorithm will iterate.
Maximum number of clusters: Define the maximum number of clusters to be created.
Initial number of clusters: Define an initial number of clusters to be created.
Minimum cluster size: Define the minimum number of pixels per cluster.
Maximum standard deviation: Define the standard deviation that has to be exceeded for cluster splitting to occur. A value of 0 means that splitting is always allowed.
Minimum cluster distance: Define a distance threshold for cluster centers. If cluster centers are closer than this, clusters are merged. A value of 0 means that this threshold is ignored.
5.3) When you set all parameters, click on the Execute button to
produce the USC ISODATA output map with a defined number
of classes (Figure 13).
Figure 13: Selecting image layers for the Map-1 classified by USC
algorithm
5.4) Do not forget to select the image layers (for the Map-T2 bands) with adequate information in the USC procedure, as shown in Figure 14.
Figure 14: Edit Process dialog box set to Map 2 with the USC
algorithm
5.5) The result of the USC algorithm for Map 2 is shown in
Figure 15.
Figure 15: Selecting image layers for the Map-2 classified by USC
algorithm
5.6) Figure 16 shows a detailed visual comparison of the changes around Gil Island in the Caspian Sea.
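Once the T1 and T2 cluster maps exist, the change comparison can be summarized as a cross-tabulation of class labels. The sketch below does this for two placeholder label arrays; it illustrates the comparison step only and is not an eCognition export.

# Sketch: a change (cross-tabulation) matrix between two classified maps.
import numpy as np

rng = np.random.default_rng(2)
map_t1 = rng.integers(0, 3, size=(50, 50))   # placeholder T1 classes (e.g. 0=water)
map_t2 = rng.integers(0, 3, size=(50, 50))   # placeholder T2 classes

n_classes = 3
change_matrix = np.zeros((n_classes, n_classes), dtype=int)
np.add.at(change_matrix, (map_t1.ravel(), map_t2.ravel()), 1)

print("rows = T1 class, columns = T2 class (pixel counts):")
print(change_matrix)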
5.7) The Caspian Sea water depletion trend over the seven years is clearly detectable visually. Most likely, due to climate change in the region, the lowering of the water level will continue to be quite tangible in the coming years. Such changes will negatively affect the region's coastal environment and animal species and bring economic and social problems.
Informative Practices
Tips:
1) USC is where the outcomes (groupings of pixels with common characteristics) are based on the software analysis of an image without the user providing sample classes.
2) USC algorithms discover hidden patterns or data groupings
without the need for human intervention.
3) The 'copy map' algorithm's most frequently used options are:
defining a subset of the selected map using a region variable,
selecting a scale, setting a resampling method, copying all
layers, selected image layers, and thematic layers, and copying
the image object hierarchy of the source map.
Workouts:
1) List the different methods of image classifications.
2) Apply the current tutorial methodology to Sentinel-2 data for the Caspian Sea's major islands for two other acquisition years and compare the results.
3) Use an NDWI spectral indexing and multi-threshold
segmentation algorithm to detect changes on the other coastal
sides of the Caspian Sea.
Quizzes:
1) What is the difference between object-based and pixel-based
classification?
2) Is object-based classification supervised?
3) Which classification method is better for the high-resolution
satellite imagery?
Allied References:
1) Araya Y. H. and Hertagen, C. (2008). A Comparison of Pixel
and Object-based Land Cover Classification: A Case Study of
The Asmara Region, Eritrea. WIT Transaction on Built
Environment Vol 100, ISSN 1743-3509, Geo-Environment
Landscape Evolution III.
2) Kaplan, G., and Avdan, U. (2017). Object-based water body
extraction model using Sentinel-2 satellite imagery, European
Journal of Remote Sensing, 50:1, 137-143.
3) Kaplan, G., and Avdan, U. (2018). Sentinel-1 and Sentinel-2 Data
Fusion for Wetlands Mapping: Balikdami, Turkey, International
Archives of the Photogrammetry, Remote Sensing and Spatial
Information Sciences - ISPRS Archives 42.3 (2018): 729–734.
4) Rasouli, A.A. (2018). Geo-OBIA and Spatial Information
Sciences Implementations, Eurasian GIS Conference 2018, 04-
07 September 2018, Baku, Azerbaijan.
5) Rasouli, A.A. (2020). Detection of Caspian Sea Coastline
Changes by Fuzzy-Based Object-Oriented Image Analysis. The
Second Eurasian Conference RISK-2020, 12-19 April 2020, Tbilisi, Georgia.
6) Zerrouki, N. and Bouhaffra, D. (2014). Pixel-based Or Object-
based: Which Approach Is More Appropriate for Remote
Sensing Image Classification? IEEE International Conference
on Systems, Man & Cybernetics, San Diego (CA), pp. 864-869.