
MASTERING OBJECT-BASED IMAGE ANALYSIS

Prof. Dr. A. Rasouli, Dr. M. Milani and Dr. B. Milani


Copyright © 2021 by Iksad Publishing House
All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law.
Institution of Economic Development and Social Researches Publications®
(Publisher Licence Number: 2014/31220)
TURKEY TR: +90 342 606 06 75
USA: +1 631 685 0 853
E-mail: [email protected]
www.iksadyayinevi.com

It is the responsibility of the authors to abide by publishing ethics rules.


Iksad Publications – 2021©

ISBN: 978-625-8061-89-5
Cover Design: İbrahim KAYA
December / 2021
Ankara / Turkey
Size = 16x24 cm
The Book's Brief Contents

Mastering Object-Based Image Analysis

Imaging the Azerbaijan Geo-Environment

Authors:
Prof. Dr. A. Rasouli¹, Dr. M. Milani² and Dr. B. Milani²
¹ Macquarie University, Department of Environmental Sciences, Sydney, Australia
² Bandirma Onyedi Eylul University, Department of Computer Science, Bandirma, Turkey

Book Contents:

About The Authors
Introduction: Object-Oriented Image Analysis Concepts
Section One: OBIA Basic Requirements
  Tutorial 1: Running the eCognition Trial Version 9.01
  Tutorial 2: Picking Up the Landsat Imagery with eCognition
  Tutorial 3: Practicing the eCognition Basic Structure
  Tutorial 4: Illustrating Land Surfaces by Landsat Imagery
  Tutorial 5: Landcover Spectral Indexing inside eCognition
  Tutorial 6: A Quick Look at eCognition's OBIA Capabilities
Section Two: Diving into the OBIA Advanced Skills
  Tutorial 1: Headlong into eCognition 9.5 with Sentinel-2
  Tutorial 2: Taking a Plunge inside eCognition 9.5
  Tutorial 3: Examining the Image Segmentation Algorithms
  Tutorial 4: Objective Image Classification Processes
  Tutorial 5: Threshold Rule-Setting with eCognition
  Tutorial 6: Introducing Change Detection with OBIA
About The Authors

Professor Dr. Aliakbar Rasouli is a geoscientist who completed his Ph.D. at the University of Wollongong in Australia. He launched modern real-time remote sensing monitoring systems and digital weather observation devices at the University of Tabriz while teaching the principles of applied remote sensing and Geographic Information Systems. Later, he received an honorary fellowship from the Department of Environmental Sciences at Macquarie University, where he completed a research fellowship on severe thunderstorm monitoring and landcover changes in New South Wales and the Greater Sydney Metropolitan area. He has authored numerous books, papers, and research projects focused on practical training in advanced remote sensing, image processing, and geospatial information technology.
Over the past few years, Professor Rasouli has pursued joint research and educational cooperation with the Institute of Geography of the Azerbaijan National Academy of Sciences. That thoughtful invitation led to a fruitful collaboration: several books, various articles, a few workshops, and lectures at international conferences are among Prof. Rasouli's latest scientific activities in Azerbaijan. Among his most important academic goals are the following:
▪ introducing the basic principles of modern real-time satellite monitoring systems and the consequent digital image processing in more effective ways,


▪ encouraging advanced image processing and Spatial Information Sciences (SIS) practices among young scholars through fast training programs in Azerbaijan,
▪ visualizing Azerbaijan's most recent hydro-climatic, agricultural, and landcover/landuse geo-environmental changes by applying modern remote sensing technology,
▪ participating in the accurate identification of landcover/landuse changes in the liberated areas of Qarabag by applying advanced remote sensing technologies and processing satellite imagery with Object-Based Image Analysis (OBIA) and Machine-Learning/Deep-Learning techniques.


Assistant Professor Dr. Muhammed Milani was born in Iran, studied for bachelor's and master's degrees in computer engineering in Tehran, and completed his doctoral studies in Turkey. His interest in satellite imagery dates back to his undergraduate years. During his postgraduate studies, he worked on several satellite image processing and GIS projects. Muhammed is the author of books on VB programming and on the automatic generation and solving of mathematical expressions, and he currently lives in Turkey with his wife and daughter, working as an assistant professor.


Assistant Professor Dr. Bahar Milani completed her bachelor's degree in mathematics in Iran. After several years of research, she went to Turkey and completed her Ph.D. in Computer Engineering. After graduating, Bahar worked as an assistant professor at a Turkish university. Teaching and writing are among Bahar's hobbies. She has great enthusiasm for environmental research using computer science and artificial intelligence approaches, with several published papers in this regard.


An Introduction to the OBIA Approach

OBIA overcomes the limitations of traditional image processing approaches

Basic Background

This book will familiarize you with the basics of Object-Based Image Analysis (OBIA), general aspects of its use, and an overview of its applications across the Republic of Azerbaijan countryside. In a simple view, OBIA is becoming the primary advanced image analysis and processing procedure. To understand the main concepts, you will get your first glimpse of the earlier version 9.01 of the eCognition software, rather than going into the new version's details too soon. The authors also recommend that learners new to OBIA start with the theoretical and practical fundamentals of pixels and objects.

Essentials of OBIA

Around 2000, GIS and image processing began to grow together rapidly through OBIA, geospatial object-based image analysis. Accordingly, significant advances have been made in remote sensing in recent decades. Several factors seem to have contributed simultaneously to these remarkable successes, particularly in the field of satellite image processing. We pay the most attention to knowledge-creation procedures such as the OBIA field. The most distinctive feature of OBIA is object-oriented classification, which uses both spectral and spatial information. The process categorizes pixels based on their spectral characteristics, shape, texture, and spatial relationships with surrounding pixels. To learn and apply OBIA approaches, new learners must also pay attention to the other contributing variables associated with it; a few important factors are presented in Figure 1.

Figure 1. Controlling variables associated with OBIA

In practice, OBIA is one of several approaches developed to overcome the limitations of pixel-based approaches. It incorporates spectral, textural, and contextual information to identify thematic classes in an image. The first step in OBIA is to segment the image into homogeneous objects, where the term object stands for a contiguous cluster of pixels. Segmentation relies on pre-defined parameters like compactness, shape, and scale, derived from real-world knowledge of the features one wants to identify. For instance, crop mapping will require an understanding of the size and shape of farm fields in the area of interest. Over- and under-segmentation are obvious threats to these approaches, and they may not identify all semantically meaningful entities. OBIA generates segments by integrating similar pixels, and the segments are then assigned to landcover classes. In turn, object-based classification is well suited to high-resolution satellite imagery.
In a second step, each object (segment) is characterized by the statistical properties of the pixels it contains. All pixels within a segment are assigned to one class, eliminating the within-field spectral variability and mixed-pixel problems associated with pixel-based approaches. Traditional supervised and unsupervised classification is pixel-based: it works on square pixels, and each pixel receives its own class. OBIA, by contrast, groups pixels into representative vector shapes with size and geometry.
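As a minimal sketch of this idea (not eCognition's implementation), the following Python snippet turns a per-pixel classification into a per-object one by assigning every pixel in a segment the segment's majority class; segments and pixel_classes are assumed inputs of the same shape:

    import numpy as np

    def classify_objects_by_majority(segments, pixel_classes):
        """Give all pixels of a segment that segment's most frequent class."""
        object_classes = np.empty_like(pixel_classes)
        for seg_id in np.unique(segments):
            mask = segments == seg_id
            values, counts = np.unique(pixel_classes[mask], return_counts=True)
            object_classes[mask] = values[np.argmax(counts)]  # majority vote
        return object_classes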


In a simple view, OBIA segments an image by grouping pixels. It doesn't create single pixels; instead, it generates objects with different geometries. If you have the right image, the objects can be so meaningful that the segmentation effectively digitizes features for you. For example, the segmentation process highlights buildings, roads, and water bodies. In OBIA-based classification, you can employ different methods to classify objects (approximated in the sketch after this list), for example:
✓ SHAPE: If you want to classify buildings, you can use a shape statistic such as "rectangular fit," which tests an object's geometry against the shape of a rectangle.
✓ TEXTURE: Texture is the homogeneity of an object. For instance, water is mostly homogeneous because it's mostly dark blue, but forests have shadows and are a mix of green and black.
✓ SPECTRAL: You can utilize the mean value of spectral properties such as the near-infrared, short-wave infrared, red, green, or blue band.
✓ GEOGRAPHIC CONTEXT: Objects have proximity and distance relationships with their neighbors.
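The sketch below approximates three of these object features with scikit-image and NumPy rather than with eCognition; the inputs (an integer label image named segments and a NIR band named nir, of the same shape) are illustrative assumptions:

    import numpy as np
    from skimage.measure import regionprops

    for region in regionprops(segments, intensity_image=nir):
        rect_fit = region.area / region.bbox_area      # SHAPE: rectangular-fit proxy
        pixels = region.intensity_image[region.image]  # NIR values inside the object
        texture = pixels.std()                         # TEXTURE: homogeneity proxy
        spectral = pixels.mean()                       # SPECTRAL: mean NIR
        print(region.label, rect_fit, texture, spectral)

Geographic context could be added by comparing the region.centroid values of neighboring objects, e.g., their mutual distances.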

Think Both Objects and Pixels

How amazing would it be if you could digitize all the features in an image with just a click of a button? On top of that, what if you could classify each feature with another click? It sounds like magic, but these two processes, segmentation and classification, are exactly what OBIA performs. Let's examine what OBIA is and how you can use it to get your work done more efficiently and accurately. Human visual perception almost always outperforms computer vision algorithms. For example, your eyes know a river when they see one, but a computer can't readily distinguish rivers from lakes.


Traditional pixel-based image classification assigns a landcover class per pixel. All pixels are the same size and shape and have no concept of their neighbors. OBIA, however, segments an image by grouping small pixels into vector objects. Instead of working on a per-pixel basis, segmentation automatically digitizes the image for you. OBIA segmentation is a process that groups similar pixels into objects; what segmentation does is replicate what your eyes are doing. With these segmented objects, you then use their spectral, geometrical, and spatial properties to classify them into landcover.
In turn, OBIA classification uses objects' shape, size, and spectral properties to classify each object. By contrast, traditional image classification techniques often produce a salt-and-pepper look in the classification result. To recap, the basic building blocks of OBIA are:
➢ The layer arithmetic algorithms use a pixel-based operation to merge up to four layers with mathematical operations. For example, you can apply the surface calculation algorithm to derive the slope for each pixel of a digital elevation model (DEM). This can determine whether an area within a landscape is flat or steep, independent of the absolute height values.
➢ The Index Layer Calculation algorithm inserts a new image layer by calculating a spectral index. You may choose among the NDVI, NDWI, NDSI (soil), NDSnI (snow), NBR, SAVI, EVI, and GRV indexes (the slope and NDVI ideas are sketched in code after this list).
➢ Segmentation breaks the image up into objects representing land-based features. Segmentation algorithms subdivide entire images at the pixel level, or specific image objects from other domains, into smaller image objects. Trimble provides several different approaches to segmentation, ranging from very simple algorithms, such as chessboard and quadtree-based segmentation, to highly sophisticated methods such as multiresolution segmentation and contrast filter segmentation. Segmentation algorithms are required to create new image object levels based on image layer information, but they are also a very valuable tool for refining existing image objects by subdividing them into smaller pieces for more detailed analysis.
➢ Classification algorithms analyze image objects according to defined criteria and assign them to the class that best meets those criteria. You may evaluate the membership value of an image object against a list of selected classes; the classification result of the image object is updated according to the class evaluation result, and the three best classes remain in the image object classification result.
➢ The RuleSet algorithms let you control certain settings for the rule set or parts of rule sets. For example, you may want to apply particular settings to analyze large objects and change them to analyze small objects. Because the settings are part of the rule set and not of the client, they are applied when the rules are run.
➢ Deep learning, a subset of machine learning, utilizes artificial neural networks to analyze data. The approach and architectures were inspired by insights into the functioning of the nervous system and mimic human perception.
➢ Export algorithms let you export the results in several vector or raster formats. In addition, statistical information can be created or exported.
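A minimal NumPy sketch of the first two ideas above, assuming dem, red, and nir are 2-D arrays (these names and the 30 m cell size are illustrative assumptions, not eCognition API calls):

    import numpy as np

    def slope_degrees(dem, cell_size=30.0):
        """Layer arithmetic: per-pixel slope from a DEM, independent of absolute height."""
        dzdy, dzdx = np.gradient(dem.astype("float64"), cell_size)
        return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

    def ndvi(nir, red):
        """Index layer calculation: NDVI = (NIR - red) / (NIR + red)."""
        nir = nir.astype("float64")
        red = red.astype("float64")
        return np.where(nir + red == 0, 0.0, (nir - red) / (nir + red))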
Many supplementary algorithms and operators support advanced applications inside the eCognition Developer 9.01 suite, and you will become familiar with many of them in the included tutorials. Here, however, the segmentation and classification algorithms are highlighted. When you segment an image, the process groups pixels to form objects; suddenly, landcover features start popping out, similar to how your eyes process your surroundings. Based on your compactness and shape settings, this is the preliminary step in OBIA. How big do you want the objects to be? There is a scale parameter that you can tune to generate more meaningful objects. You can also configure weights for all the layers you want to segment. This means you are not limited to segmenting by red, green, or blue: you can also segment a Digital Elevation Model (DEM), Digital Surface Model (DSM), Near-Infrared (NIR) band, or even LiDAR imagery.
After you segment the image, it's time to classify each object. You can now classify because each object has statistics associated with it. For example, you can classify objects based on geometry, area, color, shape, texture, adjacency, and more. The options are hardly limiting, and this is where the true power of the eCognition software lies. There are many more practical examples in the following sections to demonstrate eCognition's powers.
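eCognition's multiresolution segmentation itself is proprietary, so as a rough, hedged stand-in the sketch below uses SLIC superpixels from scikit-image; n_segments plays approximately the role of the scale parameter and compactness that of the shape/compactness settings:

    from skimage import data
    from skimage.segmentation import slic

    image = data.astronaut()  # placeholder RGB image standing in for satellite data
    segments = slic(image, n_segments=500, compactness=10.0, start_label=1)
    print("objects created:", segments.max())

Raising n_segments yields smaller, more numerous objects, much like lowering the scale parameter in eCognition.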

Coping with the eCognition Suite:

Trimble eCognition software, in its different versions (9.01, 9.5, 10.1.1, and recently 10.1.2), is based on a new, object-oriented approach to image analysis, used by remote sensing experts, data scientists, and GIS professionals. In contrast to traditional image processing methods, the basic processing units of object-oriented image analysis are image objects or segments, not single pixels; even the classification acts on image objects. One motivation for the object-oriented approach is that the expected result of many image analysis tasks is the extraction of real-world objects, proper in shape and proper in classification. Common pixel-based approaches cannot fulfill this expectation.


Although the eCognition 9.01 version is an earlier combination of different contributing procedures, some basic characteristic aspects of the underlying object-oriented approach are independent of the particular methods. The networking of these image objects is directly connected to the representation of image information by means of objects. Whereas raster models give the topological relations of single, adjacent pixels, learners must explicitly work out the associations of adjacent image objects to address neighboring objects. Consequently, the resulting topological network has a big advantage, as it allows the efficient propagation of many different kinds of relational information. Inside eCognition, users can design feature extraction solutions to transform geo-data into geo-information, with many possibilities:
▪ eCognition takes a fundamentally different approach from usual approaches to data analysis due to its ability to emulate the human mind's cognitive powers and fuse geospatial input data.
▪ In terms of segmentation and classification processes, it can develop a robust method of rendering knowledge in a semantic network. The technology examines pixels/points not in isolation but in context. It builds up a picture iteratively, recognizing groups of pixels as objects.
▪ Like the human mind, it uses objects' color, shape, texture, size, context, and relationships to draw the same conclusions and inferences as an experienced analyst, but it adds the advantages of automation and standardization.
▪ eCognition classifies and analyzes imagery, vectors, and point clouds using all the semantic information required to interpret them correctly. Rather than examining stand-alone pixels or points, it distills meaning from the objects' connotations and mutual relationships, not only with neighboring objects but throughout the various input data.


▪ The eCognition software fuses various geospatial data, such as spectral raster data, 3D point cloud data, and thematic data.
▪ You can convert satellite images to point clouds, vectors to images, and all three to one another. Users can leverage the full power of their input data independent of data type and source.
▪ Trimble eCognition is an advanced analysis software for geospatial applications. It is designed to improve, accelerate, and automate the interpretation of various geospatial data. It enables users to design feature extraction or change detection solutions to transform geo-data into geo-information.
▪ The eCognition suite offers three components: eCognition Developer, eCognition Server, and eCognition Architect, which can be used stand-alone or in combination to solve even the most challenging image analysis tasks. In the following sections, you will work with trial versions of eCognition functions, which provide a powerful development environment for object-based image analysis.

OBIA inside eCognition Developer:


The human mind has a remarkable ability to make sense of
images, identify objects, and extract insights. It handles
ambiguous or partial information by making inferences based on
the image as a whole, the relationship between objects, and
external and contextual information. Attempts to computerize
this capacity for image analysis have been going on for decades.
But despite increases in computational power and imaging
capabilities, advances in automated image analysis have been
limited.
The root of the problem is that computers examine images on a pixel-by-pixel basis. Along those lines, "objects of interest" can be identified by using a series of pixel-based filters. These filters, such as intensity thresholds and gradients, distinguish patterns by comparing pixels to their neighbors. For an effective analysis, the original image is transformed so that simple threshold measures can extract the areas of interest.
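As a hedged illustration of such pixel-based filters (not how eCognition itself is implemented), the snippet below applies an intensity threshold and a gradient filter with scikit-image on a bundled placeholder image:

    from skimage import data, filters

    image = data.camera().astype(float)             # placeholder grayscale image
    bright = image > filters.threshold_otsu(image)  # intensity-threshold mask
    edges = filters.sobel(image)                    # gradient (edge) magnitude
    print(bright.sum(), edges.max())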
Identifying and analyzing objects with the eCognition technology is an iterative process. While identifying and measuring "objects of interest," users can test, refine, and fine-tune their analyses at any point in the workflow. This approach dramatically reduces the time needed to arrive at results and makes it easier and more intuitive to create and validate new applications that explore fresh avenues of investigation. The basic OBIA functionality is identified in Figure 2.

Figure 2. An example of Sentinel-2 satellite image processing according to the OBIA concept.


Figure 2-a) shows part of a Sentinel-2 satellite image of an agricultural area. It is a trivial exercise for a person to identify the green fields, but it is surprisingly difficult for a computer.
Figure 2-b): by applying the eCognition technology, it is possible to identify greenish parts by searching for green pixels. However, not all green pixels indicate farms; there may be individual green pixels in the middle of a field. Therefore, eCognition aggregates green pixels into clusters, identifies as farm bodies only those clusters that are sufficiently detected, and then delineates them.
Figure 2-c): separating green farms from non-green farms demands a different segmentation process, for instance, watershed segmentation. It requires the understanding that traditional agriculture farms are long and thin while modern farms are generally round.
Figure 2-d): by translating these insights into a set of rules and parameters, a rule-based OBIA approach can distinguish the different forms of farms.
The most advanced step is that the technology can measure the size of agricultural farms and compare the results to images of the same region at a previous time, accurately quantifying any changes. The eCognition software allows users to easily identify objects of interest and automate image analysis tasks with high accuracy. Since the technology works for all modalities, images from different sources can be compared, facilitating the maximum insight and applications extracted from image data.

Starting an eCognition Project:

Experience has proven that careful attention to several important factors in understanding and using the eCognition software is very helpful. Based on the authors' practical experience, the items listed in Table 1 are very effective factors in fast training programs limited to the eCognition Developer suite.


Table 1. Basic processing steps and experiences of OBIA inside the eCognition Software

Step 1. Outline the research aim(s): define an understandable research problem.
Step 2. Limit the study area to a small part: subset a relatively small geographic area.
Step 3. Download the satellite imagery: provide high-resolution satellite imagery (Landsat & Sentinel series).
Step 4. Open the eCognition software: start with eCognition Developer (version 9.01) to get basic skills.
Step 5. Create a project inside eCognition: save and evaluate the project as it changes.
Step 6. Combine image layers: create an RGB combination image with a few bands.
Step 7. Segment the image based on the correct scale, shape, and other relevant factors: base the segmentation on other researchers' work and your own experience.
Step 8. Define the landcover & landuse classes: do not label too many classes.
Step 9. Extract statistics of bands and classes: feature statistics layers result in better recognition of the training areas.
Step 10. Classify the segmented image based on a (un)supervised or ruleset classification process: a good ruleset is the key to an accurate automated image classification.
Step 11. Assess the classification accuracy: calculate the User's, Producer's, and Overall accuracy (sketched in code after this table).
Step 12. Export the results to a GIS setting: you can export all classified classes and feature statistics to ArcMap for further analysis.
Step 13. Create personal geo-informative datasets: this helps you keep geo-datasets for the change detection procedure.
Step 14. Reanalyze the datasets and create final models to address the research aims: verify the final models by comparing them with real land facts.
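For Step 11, a minimal sketch of the three accuracy measures, assuming reference and predicted are equal-length label sequences for the same test samples (names are illustrative):

    import numpy as np

    def accuracy_report(reference, predicted, classes):
        cm = np.zeros((len(classes), len(classes)), dtype=int)  # rows: reference
        for r, p in zip(reference, predicted):
            cm[classes.index(r), classes.index(p)] += 1
        overall = np.trace(cm) / cm.sum()
        producers = np.diag(cm) / cm.sum(axis=1)  # Producer's accuracy per class
        users = np.diag(cm) / cm.sum(axis=0)      # User's accuracy per class
        return cm, overall, producers, users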


The basic steps of running the eCognition software are shown schematically in Figure 3, which illustrates the goals of the OBIA application in the information process.

Figure 3. An example of a basic OBIA process inside the eCognition Software


OBIA Applications:
Changes in landcover and landuse are pervasive and rapid, and they can significantly impact humans, the economy, and the environment. Accurate landcover mapping is of paramount importance in many applications, e.g., biodiversity conservation, urban planning, forestry, and natural hazards. Unfortunately, traditional landcover mapping processes are often inaccurate, costly, and time-consuming, and a classification for an image in one geographic location cannot be directly applied to a different image of a different site.
In practice, landcover maps are built by analyzing remotely sensed imagery captured by satellites, airplanes, or drones, using different classification methods. The accuracy of the results depends on the quality of the input data (mainly the spatial, spectral, and radiometric resolution of the images) and on the classification method used. The most commonly used methods can be divided into pixel-based classifiers and OBIA. Pixel-based methods use only the spectral information available for each pixel; they are faster but ineffective in some cases, particularly for high-resolution images and heterogeneous object detection. Object-based methods also consider image segments' spectral and spatial properties (i.e., sets of similar neighboring pixels). The most commonly used software implementing OBIA methods is the proprietary eCognition Developer, which provides a friendly graphical user interface for non-programmers. Several free and open-source OBIA software packages exist, but they are less popular. There is a growing demand for accurate high-resolution landcover maps in many fields, e.g., land use planning and biodiversity conservation, particularly in geography and environmental research. Developing such maps has traditionally been performed using OBIA and GEOBIA methods, which usually reach good accuracies but require high human supervision.
Targeting Geo-Environmental Problems:
The Republic of Azerbaijan is situated in the Caucasus region of Eurasia and is dominated by three major physical features: the Caspian Sea, whose shoreline forms a natural boundary to the east; the Greater Caucasus mountain range to the north; and the extensive flatlands at the country's center. The elevation changes over a relatively short distance from lowlands to highlands, and nearly half the country is mountainous. The climate of Azerbaijan varies from subtropical humid in the southeast to subtropical and dry in the central and eastern parts of the country. At a simple glance, the country can be divided into recognized ecosystem complexes, all of which contribute to the large diversity of this wonderful country's forests, high mountains, scrublands, steppes, semi-deserts, wetlands, and coastal ecosystems (Figure 4).


Figure 4. A general map for the protected areas in Azerbaijan

We purposely applied OBIA's different image processing functions to recognize the geospatial features of Azerbaijan. Over time, Azerbaijan has held incredible natural resources unique to this part of the world, including its National Parks and State Reserves. Meanwhile, some parts of the country face serious environmental failures, such as air and water pollution and subtle changes due to the notable fluctuations of the Caspian Sea. Furthermore, other natural hazards such as droughts, floods, and bushfires may occasionally endanger the country's natural resources. For instance, local scientists consider the Apsheron Peninsula, including the cities of Baku and Sumqayit, and the Caspian Sea coastline to be the most ecologically devastated area in the region because of severe air, water, and soil pollution. The rich flora, fauna, and enormous natural resources of occupied Qarabagh have been under great pressure due to the occupying forces' actions.
Sum Up:
Many OBIA users have been working intensively on geographic and environmental topics; most users run eCognition applications for "green, environmental, and climate change purposes." No doubt, there are thousands of different applications for such image analysis software. The authors are convinced that this is directly related to the massive power of images. The conventional wisdom that "one image is worth a thousand words" fully applies to human understanding of environmental issues. Every eCognition user in the geosciences and environmental community has a story to tell, and some of these stories will help us better understand how far the situation has progressed.
We are sure that you don't want to stop here at the current introduction. You will find that what the eCognition software does (even across its different progressive versions) is provide you with a detailed representation of all objects and their inter-relationships within diverse satellite images. With the eCognition Developer, we can recognize and measure practically everything contained in an image, and do so across very large areas and over long periods. In short, the thousands of minuscule facts documented in an image are now becoming available in a standardized way, including the following advantages:
❖ An object-oriented approach is better for reaching practical knowledge of geographic and environmental behavior.
❖ OBIA, and in specific geographic cases GEOBIA, has to be integrated as a "particular vision" in higher-education teaching programs.
❖ It is now possible to import external, pre-trained advanced models into eCognition. Deep Learning users can save time in the sample generation, model creation, and training phases of their workflows.
❖ eCognition has added new tools to improve surface calculations and analysis. It is now possible to generate hill-shade layers based on DSM and DTM inputs. In addition, a Ridge Filter algorithm improves the extraction of ridge-like features in images and elevation data.
❖ The majority-vote functionality is supported by the latest eCognition versions, which increases the contextual relationship between image objects to make classification easier.
❖ Importantly, eCognition continues to improve its free trial versions, making it easier for new learners to utilize the latest OBIA advantages.
❖ As a final point, the eCognition software should be viewed as an advanced knowledge-creation technology, an inevitable requirement for future research investigations of the Azerbaijan geo-environment.


Informative Practices
Tips:
1) Trimble eCognition enables you to accelerate and automate
the interpretation of your geospatial data products by allowing
you to design your feature extraction and change detection
solutions.
2) You can learn about the technology that drives eCognition as
a market leader in feature extraction.
3) The eCognition Software is used by professionals, remote
sensing experts, and data scientists to automate geospatial
data analytics.
Workouts:
1) Summarize the basic processing steps of the eCognition
Software.
2) List the possible applications of OBIA in your research area.
3) List the main components of the eCognition suite.
Quizzes:
1) What does an OBJECT mean?
2) What are the similarities and differences between OBIA and
GEOBIA approaches?
3) What are the basic goals of the OBIA?
Allied References:
1) Addink, E.A., Van Coillie, F.M.B. (eds.) (2010). GEOBIA: Geographic Object-Based Image Analysis, Ghent, Belgium, 29 June - 2 July 2010. ISPRS Vol. XXXVIII-4/C7, Archives ISSN 1682-1777. http://geobia.ugent.be/proceedings/html/papers.html (April 2011).
2) Baatz, M., Schäpe, A. (2000). Multi-resolution segmentation: an optimization approach for high-quality multi-scale segmentation. In: Strobl, J., et al. (eds.), Angewandte Geographische Informationsverarbeitung XII, Beiträge zum AGIT-Symposium Salzburg 2000, Karlsruhe, Herbert Wichmann Verlag, 12-23.
3) Blaschke, T., Lang, S., Hay G.J. (eds.) (2008). Object-Based
Image Analysis: Spatial Concepts for Knowledge-Driven
Remote Sensing Applications (Lecture Notes in
geoinformation and cartography). Springer.
4) Rasouli, A.A., and Mammadov, R. (2020). Preliminary
Satellite Image Analysis Inside the ArcGIS Setting, Lambert
Academy Publishing, Germany.
5) Mammadov, R. and Rasouli, A.A. (2020). Practical Satellite
Image Processing Inside the ERDAS Imagine, Azerbaijan
National Academy of Sciences, The Institute of Geography.
6) Rasouli, A.A., Mammadov, G.Sh., and Asgarova, M.M. (2021). Mastering Spatial Data Analysis Inside the GIS Setting, Azerbaijan State Pedagogical University, Faculty of History and Geography, Baku.


The OBIA Approach Basic Requirements

A progressive OBIA needs a well-designed platform

Basic Concepts:
The current book prepares so that interested beginner,
intermediate, and even advanced trainees, with introductory
knowledge of image processing concepts and fewer software
skills, could be familiar with the basic concepts of OBIA
working in eCognition software 9.01 setting. Accordingly, we
will train students to access and install the eCognition Developer
9.01 on their computers during the first section. They will learn
to access, download, manage, and display the Landsat 4, 5, 7,
and 8 imagery through simple steps from recognized
international websites.
We expect students to, with access to acceptable standard
satellite imagery and basic familiarity with eCognition software,
it will be possible to practice with OBIA primary concepts. At
the same time, one of the main goals of this section of the book
is to teach how to extract applied information from raw satellite

25
Mastering Object Based Image Analysis Section One at a Glance

images in the Republic of Azerbaijan geo-environment context.


Due to the educational nature of the content provided, you may
encounter duplicate content and work with similar data sets.
Trainees have to try to understand the basic concepts of OBIA
and the structural capabilities of the eCognition Developer
settings during the first section. Nevertheless, we expect
interested apprentices to achieve their desired educational and
research goals with great perseverance before starting the
lectures of "Section Two" with more evolved goals.


Tutorial 1

Running the eCognition Trial Version 9.01

Every eCognition version includes a range of new features and rule sets.

Opening Statement:
In the current tutorial, you will learn how to access and install the eCognition Developer 9.01 trial version, a powerful development environment for Object-Based Image Analysis (OBIA). eCognition 9.01 extends the existing knowledge-based and supervised classification methods available for geospatial applications. It is extensively used in the earth sciences to develop rule sets for advanced remote sensing data analysis. In addition, you will have your first experience of importing Landsat-4 satellite MSS bands, subsetted to a small part of Azerbaijan's Qarabag region, into the eCognition software environment.
Instructive Memo:
✓ Level: Beginner,
✓ Time: This tutorial should not take you more than 1
hour.
✓ Data: Landsat-4_19780713_T2.tar.gz,
✓ Software: eCognition Developer, Version 9.01,

✓ Satellite Scene: Part of the Qarabag region,


By the end of this unit, you should be familiar with:
Free sources of the eCognition software.
Installing and setting up the software.
Starting and running the eCognition trial version.
Importing Landsat-4 bands into the eCognition setting.

Background Concepts:
The eCognition software operates on all common image processing tasks such as vegetation mapping, feature extraction, change detection, and object recognition. The object-based approach facilitates analysis of all common data sources, such as medium to high-resolution satellite data, high to very high-resolution aerial photography, lidar, radar, and even hyperspectral data. In this tutorial, we work with the Landsat 1-4 satellites, with a moderate ground resolution (60 meters) and few spectral bands (4 bands). Nevertheless, such data can be processed to track landcover efficiently and document land changes due to climate change, urbanization, drought, wildfire, vegetation changes, and a host of other natural and human-caused changes. You may use this version of eCognition to highlight the benefits of the oldest Landsat imagery of the Qarabag region, captured many years ago, before military actions damaged the region's lush forests during the long-drawn fights. You can download eCognition Developer 9.01 from different software libraries for free, as Trimble GeoSpatial made this popular program.


Mastering The Skills:


To learn the OBIA approaches, you need to install eCognition Developer 9.01 x64 on your personal computer. First launched in May 2000, eCognition introduced a more advanced method for extracting information from images, using a hierarchy of image objects (groups of pixels) instead of traditional pixel processing methods. To practice with the eCognition software, you need to undertake the following:
Step 1) Accessing the Free eCognition 9.01
1.1) Download eCognition Developer version 9.01 from one of the following sites:
a) https://en.freedownloadmanager.org/Windows-PC/eCognition-Developer.html
b) http://edutechsoft.blogspot.com/2018/12/ecognition-developer-901-full-crack.html
c) https://mega.nz/file/54EgCKSB#Td_M2cQLwfTxjHEQbgkpxVfpit0neW1wqchGLJYAMSg
1.2) Make a backup of the zipped file inside a dedicated folder.
1.3) Unzip, extract, and install the software, choosing "Set License Later."

Step 2) Installation of eCognition 9.01

2.1) First, run the unzipped Setup.exe file inside the backup folder (Figure 1).


Figure 1: Running Setup.exe file

2.2) When the Welcome to the eCognition Developer 64 9.0 Setup Wizard comes up, click on the Next button (Figure 2).

Figure 2: eCognition Developer 64 9.0 Setup Wizard

2.3) Inside the License Agreement window, click on "I accept the terms of the License Agreement" and then click the Next option (Figure 3).


Figure 3: the License Agreement Window

2.4) When the eCognition Licensing Option window comes up, select the Set Licensing Later option and click on the Next button (Figure 4).

Figure 4: Choosing the Set Licensing Later

2.5) Inside the Choose Components, accept all options and click
on the Next button (Figure 5).


Figure 5: the Choose Components dialog box

2.6) When you notice the Choose Start Menu Folder, click on
the Next option (Figure 6).

Figure 6: the Choose Start Menu Folder

2.7) In the Choose Install Location dialog box, accept the default path for the installation process (Figure 7).


Figure 7: the Choose Install Location dialog box

2.8) When the "Installation is Configured Successfully" dialog box comes up, click on the Install option (Figure 8).

Figure 8: the Installation is Configured Successfully dialog box

2.9) The eCognition software installation state is shown (Figure 9).


Figure 9: eCognition Installation state

2.10) When you notice the Installation Complete dialog box, click on the Next button (Figure 10).

Figure 10: the Installation complete dialog box


2.11) When the Completing the eCognition Developer 64 9.0
Setup Wizard appears, click on the Finish button (Figure 11).

Figure 11: Completing the eCognition Developer 64 9.0 Setup Wizard

2.12) Open the patch\Program Files\Trimble\eCognition Developer 64 9.0\bin path.
2.13) Copy the files listed in Figure 12.

Figure 12: The files to be copied to your system


2.14) Then, all these files have to be copy-pasted into the path C:\Program Files\Trimble\eCognition Developer 64 9.0\bin.


2.15) In the end, double-click on the shortcut and run the software based on your choices (Figure 13).

Figure 13: The eCognition Developer start dialog box

2.16) Now, you can select one of the options to start the
eCognition software. In this step, use the Quick Map Mode to
notice the eCognition main window (Figure 14).

Figure 14: the eCognition software the main window, in the Quick
Map Mode


2.17) Then, try to create the image of the Qarabagh area in the eCognition software by combining the Landsat 4 MSS bands, which you have already prepared from the GLOVIS site (https://glovis.usgs.gov/app). How to access the satellite imagery will be covered in later tutorials.
2.18) Landsat 4 was the fourth satellite of the Landsat program. It launched on July 16, 1982, with the primary goal of providing a global archive of satellite imagery.
Landsat 4 carried the Multispectral Scanner (MSS) with four spectral bands, listed below and stacked programmatically in the sketch that follows:
▪ Band 4 Visible (0.5 to 0.6 µm)
▪ Band 5 Visible (0.6 to 0.7 µm)
▪ Band 6 Near-Infrared (0.7 to 0.8 µm)
▪ Band 7 Near-Infrared (0.8 to 1.1 µm)
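A minimal sketch, assuming the four MSS GeoTIFFs extracted from the archive follow a "*_B4.TIF" to "*_B7.TIF" naming pattern (the file names here are hypothetical); eCognition's project dialog does the equivalent interactively:

    import numpy as np
    import rasterio

    paths = [f"landsat4_mss_B{b}.TIF" for b in (4, 5, 6, 7)]  # hypothetical names
    bands = []
    for p in paths:
        with rasterio.open(p) as src:
            bands.append(src.read(1))      # read the single band of each file
    stack = np.dstack(bands)               # rows x cols x 4 multispectral stack
    print(stack.shape)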
Step 3: Create a Project

3.1) You can create a project to display the Landsat 4 raster bands inside the eCognition software. Select File > New Project on the main menu bar or click the corresponding button in the toolbar.


3.2) Navigate and open the folder of the Landsat bands
directory. When you notice the Import Image Layers dialog box,
select the bands of 4 to 7 and click on the OK option (Figure
15).


Figure 15: The Import Image Layers dialog box


3.3) Your project opens, and the image should be viewable in
the Create Project window (Figure 16).

Figure 16: The Create Project window


3.4) Rename the image layers (channels) to make later references to them easier. Double-click on the band names, and when you notice the Layer Properties dialog box, rename the bands and click on the OK option (Figure 17).

Figure 17: The Layer Properties dialog box

3.5) Do not forget to click on the Subset Selection option and select a small part of the whole image, wherever you prefer.
3.6) Then, inside the Create Project window, click on the OK option; this will produce your Landsat 4 image of the Qarabagh region (Figure 18).


Figure 18: The Subset Selection option and selecting a small part of the image
3.7) Do not forget to select the Use Geocoding for Subset option and click on the OK option. Soon, you will display the Landsat 4 image in natural false-color mode.
3.8) Change the color composition of the displayed image. This is rather easy and is done using the following steps.
3.8.1) Select Image Layer Mixing from the View menu or click the "Edit Image Layer Mixing" button on the View Settings toolbar (Figure 19).


Figure 19: The "Edit Image Layer Mixing" dialog box

3.8.2) Select Standard Deviation (3.00) under Equalizing to apply a histogram stretch. By default, the first three channels are loaded, with the first channel assigned to the red color gun of the monitor, followed by green and then blue for the next two channels.
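A hedged NumPy equivalent of this "Standard Deviation (3.00)" equalizing option: a linear stretch clipped at the mean plus or minus three standard deviations, scaled to 0-255 for display:

    import numpy as np

    def stddev_stretch(band, n_std=3.0):
        lo = band.mean() - n_std * band.std()
        hi = band.mean() + n_std * band.std()
        scaled = (np.clip(band, lo, hi) - lo) / (hi - lo)
        return (scaled * 255).astype(np.uint8)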
3.8.3) Selecting "one layer grey" under Presets allows a user to view one channel at a time. It can provide an excellent chance to review the channel data. Before proceeding to the next step, ensure that the three-channel options described above are set.
3.8.4) Click OK. Soon, you will display the Landsat 4 image in natural false-color mode (Figure 20).


Figure 20: The subsetted Landsat 4 MSS image; a small part of Qarabag illustrated in natural false-color mode
Sum Up:
eCognition 9.01 is a comprehensive image analysis platform for
multi-dimensional OBIA procedures. It contains all the client
and server software needed to extract information from any
satellite image, for example, the Landsat 4, in a fully-automated
or semi-automated way with the following advantages:
➢ eCognition software enables users to design feature
extraction or change detection solutions to transform
geospatial data into geo-information.
➢ You can import various geospatial data in eCognition,
fusing them into a rich stack of geo-data for the analysis.
➢ The eCognition technology examines image pixels, not
in isolation but context.


➢ The sensors on each of the Landsat satellites acquire data in different ranges of frequencies along the electromagnetic spectrum.

Informative Practices
Tips:
1) eCognition 9.01 was designed to improve, accelerate and
automate the interpretation of various geospatial data.
2) Inside this version of eCognition, you can build analysis
logic structures into a series of steps to create a
computer-based representation of an expert's geospatial
interpretation process, a so-called "ruleset."
3) Landsat's space-based land imaging is essential because
it provides repetitive and synoptic observations of the
Earth otherwise unavailable to researchers and managers
who work across wide geographical areas and
applications.
Workouts:
1) Try to access the free trial of eCognition available via the http://www.ecognition.com website.
2) After installing the eCognition software, open eCognition Developer version 9.01 in ruleset mode. Go to File > Load Image File in the main menu.
3) Download the Landsat images for 1996 and 2016 and
compare them with the Landsat data (1976) you worked
on in this tutorial. Try to interpret the existing changes,
particularly on vegetation cover.

Quizzes:
1) What are the differences between eCognition Trial and
full-version modes?
2) How does the "software" work?
3) Are you familiar with earlier Landsat 4 applications?
Allied References:
1) Anuta, P. E., Bartolucci, L. A., Dean, M. E., Valdes, J. A., and Valenzuela, C. R. (1984). Landsat-4 MSS and Thematic Mapper design quality and information content analysis: IEEE Transactions on Geoscience and Remote Sensing, v. GE-22, no. 3, pp. 222-236.
2) Benz, U. C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.;
Heynen, M. (2004). "Multi-resolution, object-oriented fuzzy
analysis of remote sensing data for GIS-ready information".
ISPRS Journal of Photogrammetry and Remote Sensing. 58
(3): 239–258.
3) Castilla, G., & Hay, G. J. (2008). Image objects and
geographic objects. In: Object-based image analysis. Springer
Berlin Heidelberg. pp. 91-110.
4) DeGloria, S. D. (1984). Spectral variability of Landsat-4
Thematic Mapper and Multispectral Scanner data for selected
crop and forest cover types: IEEE Transactions on Geoscience
and Remote Sensing, v. GE-22, no. 3, pp. 308-311.
5) eCognition Developer (2014). Trimble Germany GmbH,
Arnulfstrasse 126, 80636 Munich, Germany. All rights
reserved.
6) Rasouli, A.A. and H. Mahmoudzadeh (2010). Fundamental of
Knowledge-Based Remote Sensing. ElmIran Press, Tabriz,
Iran.


Tutorial 2
Picking Up The Landsat Imagery With eCognition

Landsat: a right gate for picking up satellite imagery

Opening Statement:
This tutorial will explore picking up the most valuable remote sensing sources available for the eCognition setting: you will access stored Landsat resources from previous years and learn how to use one of the most popular data sources, the Global Visualization Viewer (GloVis). As a practical exercise, you will access the Landsat-5 imagery of the Shirvan National Park in the Republic of Azerbaijan and prepare it for use in an eCognition setting. The Landsat Collection 1 Level-1 data has the highest available data quality, and it is important for change detection applications during the wet and dry months of the year.
Instructive Memo:
✓ Level: Beginner,
✓ Time: This tutorial should not take you more than 1 hour.
✓ Software: eCognition Developer, Version 9.01,


✓ Data: Landsat-5 Imagery: LT05_166033_19860413, and


LT05_167032_19860826,
✓ Subject Scene: The Shirvan National Park, Azerbaijan.
Tutor Objectives:
By the end of this unit, you should be able to:
Navigate the GloVis interface and successfully download Landsat-5 imagery.
Explore some of the additional information on the Landsat-5 data products.
Prepare Landsat-5 imagery in the eCognition setting.
Explore the landcover changes in the wet and dry months based on the Landsat images.
Background Concepts:
Landsat satellites provide the longest-running temporal observation of the Earth's surface. Since active satellite datasets are constantly expanding, most data sources let you filter the available imagery by your desired attributes to help you find the appropriate imagery for your application. For example, you may only be interested in a cloud-free Landsat 5 image from the wet and dry months of 1986; most data sources allow you to detect changes inside and around a protected area with unique vegetation for rare animals.
Several websites and repositories on the internet allow you to
download remote sensing data products free of charge. Many
sites include datasets from multiple satellites and sensors in one
location. Glovis stands for the Global Visualization Viewer.
Since 2001, it has been a tool for accessing remote sensing data.
During this tutorial, you can take your data from Glovis, which
includes data from Landsat, Sentinel, and many other missions.


Mastering the Skills:


As you continue to work with satellite data and become more
accustomed to the sources available to you, you will notice
subtle differences between them. These small differences often
make one data source more advantageous for your particular
application over the other.

Step 1: Downloading Landsat-5 Imagery

The United States Geological Survey (USGS) GloVis site is perhaps the most comprehensive data portal. GloVis is used to query, search, and order remotely sensed datasets, many offered globally. Alongside Landsat images, data is available from ASTER and Sentinel-2, among others. The GloVis ordering page allows you to enter Landsat scene lists to see which scenes are currently available for download, which ones need to be processed, and which are unavailable.
1.1) Register for USGS (https://glovis.usgs.gov): to download data files from certain USGS sites, you need to register for a free account.
1.2) After registering, log in with your username and password. Then "LAUNCH GLOVIS IN FULL-SCREEN MODE."
1.3) To view data from Landsat 5, choose Choose Your Data Set(s) > Landsat 5 Level-1 from the menu.
1.4) Enter the Path/Row numbers. For example, path 166 and row 33 cover the western coast of the Caspian Sea (Figure 1).


Figure 1: USGS the main entrance box


1.5) To manage the Landsat 5 data, you may wish to filter by Date Range, Cloud Cover, and Months (Figure 2).

Figure 2: Data Set and Metadata Filtering


1.6) You can scroll through scenes chronologically using the


Previous Scene and Next Scene buttons. Click the download
icon for the file you selected (Figure 3).

Figure 3: USGS Glovis Selecting scenes dialog box

Step 2: An alternative way to access data


When through the first way, you may not be able to access the
images; therefore, you have to follow the following method:
2.1) Right-click on the mage that you have selected. Then,
first, select the "Select Scene" option. It is another alternative
way by clicking on the interactive map (Figure 4).


Figure 4: The interactive map for Landsat image options


2.2) Click on the View Metadata option to see information about the image you will download (Figure 5).

Figure 5: Metadata for a selected Landsat 5


2.3) Then, click on the "Share Scene" option until the Share Scene dialog box appears (Figure 6).

Figure 6: Share Scene dialog box


2.4) Click on the Scene Summary option until the ORDER
SCENE message appears (Figure 7).

Figure 7: Scene Summary for Landsat-5 data


2.5) Another option is to click on the ORDER SCENE option.


2.6) You should convert your input to a scene list (List ID
'order-process-ee-1638493800'). This list will be available for
seven days. Please note that some scenes may not have
download or order options available (Figure 8).

Figure 8: More ordering options

2.7) You need to select the View Scene List for the data set of interest. You will access the following window, and then you may request the Landsat image (Figure 9).

Figure 9: Landsat image and download order options

2.8) Select the Download option for your selected image. Click on the Level-1 GeoTiff Data Product item. You may prefer to download a LandsatLook Natural Color Image for visualization purposes. If you would like to view the Landsat image in your ArcGIS setting, you must select the LandsatLook Images with Geographic Reference option (Figure 10).


Figure 10: LandsatLook Natural Color Image


2.9) It takes time (normally 30-40 minutes) to download the image zip file.
2.10) Uncompress the zipped files inside a pre-created folder; for our purposes, the LC05_L1TP_166033_19860413 file (Figure 11).

Figure 11: Uncompressed Landsat-5 bands and associated attribute files
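A small scripted alternative for step 2.10, assuming the downloaded product is the usual .tar.gz archive (the file and folder names here are hypothetical):

    import tarfile

    with tarfile.open("LT05_L1TP_166033_19860413.tar.gz", "r:gz") as archive:
        archive.extractall(path="LC05_L1TP_166033_19860413")  # one folder per scene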


2.11) Now, you have all bands of the Landsat LT05_L1TP_166033 scene of the Shirvan National Park, captured on 13 April 1986. Remember to also obtain the satellite imagery for the following months, 1986-05-06 and 1986-08-26, to compare landcover changes at your favorite location.

Step 3: Landsat-5 Band Characteristics

3.1) Once you unzip the Landsat-5 bands, you should understand
their main characteristics. Landsat 5 carried the Thematic Mapper
(TM) sensor and created images consisting of six reflective
spectral bands with a spatial resolution of 30 meters (Bands 1-5
and 7) and one thermal band (Band 6).
3.2) The approximate scene size is 170 km north-south and 183
km east-west (106 mi by 114 mi). TM could not resolve
individual houses or trees, but it could record where houses were
constructed or forests cleared. Here are the Landsat satellite band
designations (Table 1).


Table 1: Landsat-5 TM bands (band name, wavelength, resolution, applications)

Band 1, Visible Blue (0.45-0.52 μm, 30 m): Bathymetric mapping; distinguishing soil from vegetation, and deciduous from coniferous vegetation.
Band 2, Visible Green (0.52-0.60 μm, 30 m): Emphasizes peak vegetation, which is useful for assessing plant vigor.
Band 3, Visible Red (0.63-0.69 μm, 30 m): Discriminates vegetation slopes.
Band 4, NIR (0.76-0.90 μm, 30 m): Emphasizes biomass content and shorelines.
Band 5, SWIR 1 (1.55-1.75 μm, 30 m): Discriminates moisture content of soil and vegetation; penetrates thin clouds.
Band 6, Thermal (10.40-12.50 μm, 120 m): Thermal mapping and estimated soil moisture.
Band 7, SWIR 2 (2.08-2.35 μm, 30 m): Hydrothermally altered rocks associated with mineral deposits.

3.3) Landsat-5 bands can discriminate vegetation types, cultural
features, biomass, and vigor, among other characteristics. In turn,
Band 6 measures Earth's emitted thermal energy, which is
particularly useful for tracking how land and water surfaces
change. Later, we will provide more information on Landsat data
and their uses.


Step 4: Landsat Imagery inside eCognition


eCognition provides a selection of tools and user interface
elements typically used for image processing and analysis
within the applied science domain. Here, we offer a brief guide
to loading the Landsat bands into eCognition.
4.1) To set up Landsat-5 imagery inside the eCognition
setting, you need to launch the software first.

4.2) Go to All Programs and click the icon to boot the
eCognition Developer.


4.3) You will see the Trimble eCognition startup screen; select
Quick Map Mode or Rule Set Mode by clicking its icon
(presented in the previous tutorial). Selecting Rule Set Mode
will take you to the standard Developer environment. For simple
analysis tasks, you may wish to try Quick Map Mode.
4.4) Click any portal item to stop the automatic opening; the most
recently used portal starts if you do not click a portal within
three seconds. To start a different portal, close the client and
start again.
4.5) Click on the OK option to display the default eCognition
9.01. To load the Landsat-5 bands into eCognition, click
File on the menu bar and select the Load Image File option
(Figure 12).


Figure 12: The "Load Image File" dialog box, with recursive file
display selected
4.6) Click on the OK option to create an image. From the File
menu, select the Save Project option to save your project. To
create a new project, choose File > New Project on the main
menu bar.
4.7) Using the "Modify a Project" function, you can add or
remove images or thematic layers, or rename the project. To
modify a project, choose File > Modify Open Project on the
main menu bar; the Modify Project dialog box opens (Figure 13).


Figure 13: The Modify Project dialog box

4.8) Double-click on the "Image Layer Alias" item and rename
the layers as Layer 1 (B1-Blue), Layer 2 (B2-Green), Layer 3
(B3-Red), Layer 4 (B4-Near IR), Layer 5 (B5-SWIR1), Layer 6
(B6-Thermal), and Layer 7 (B7-SWIR2).
4.9) Image files are normally large and slow to process, so we
will work with a smaller area that is easier to manage and takes
less memory and time.
4.10) To open the Subset Selection dialog box (after importing
image layers), press the Subset Selection button. Click on the
image and drag to select a subset area; for our interest, outline
the Shirvan National Park in the image viewer. Alternatively,
you may enter the subset coordinates and modify them by
typing. Confirm with OK to return to the main dialog box. You
can clear the subset selection by clicking Clear Subset in the
main dialog box (Figure 14).

Figure 14: The Subset Selection dialog box

4.11) When you are happy with the new image set, click OK, and
click OK again in the Modify Project window. Then modify the
band combinations using the "Edit Image Layer Mixing" dialog
box (Figure 15).


Figure 15: The "Edit Image Layer Mixing" dialog box.

4.12) You can now work on the subsetted image with your
desired adjustments (Figure 16).

Figure 16: Landsat-5 subsetted image for the Shirvan National Park
near the Caspian Sea coastlines


Step 5: Landsat-5 Bands Combinations

During image processing, you need a good understanding of how
image layers are composited inside eCognition.
5.1) Inherently, Landsat-5 images have advantages and
limitations. To get a better mental perception and visual
interpretation, you need to combine the relevant image bands.
5.2) It is necessary to practice the features and combine the
Landsat-5 bands in the eCognition software environment (Table 2).

Table 2: Different Landsat-5 band combinations inside eCognition 9.01

3, 2, 1: Natural Color
7, 5, 3: False Color (urban)
4, 3, 2: Color Infrared (vegetation)
5, 4, 1: Agriculture
7, 5, 4: Atmospheric Penetration
4, 5, 1: Healthy Vegetation & Land/Water
7, 4, 2 and 7, 4, 3: Natural with Atmospheric Removal & Shortwave Infrared
5, 4, 3: Vegetation Analysis
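To make the combinations in Table 2 concrete, here is a minimal Python sketch that stacks one of them (4, 3, 2, Color Infrared) into an RGB array using the rasterio and numpy libraries; the file-name pattern is a hypothetical placeholder for your unzipped, subsetted band files, and the stretch is only in the spirit of eCognition's Linear (1.00%) option, not its exact implementation.

import numpy as np
import rasterio

def read_band(n):
    # Hypothetical file pattern for the subsetted Landsat-5 bands.
    with rasterio.open(f"data/LT05_subset_B{n}.TIF") as src:
        return src.read(1).astype("float32")

def stretch(band, pct=1.0):
    # Simple linear percent stretch: clip extremes, scale to 0..1.
    lo, hi = np.percentile(band, (pct, 100.0 - pct))
    return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

# The 4, 3, 2 "Color Infrared (vegetation)" combination from Table 2.
rgb = np.dstack([stretch(read_band(b)) for b in (4, 3, 2)])
print(rgb.shape)  # (rows, cols, 3), ready for e.g. matplotlib's imshow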

Step 6: Comparing Seasonal Landcover Changes

6.1) Once again, to compare potential landcover changes in the
Shirvan Protected Area, prepare an extra Landsat-5 composite
for the dry season of 1986.


6.2) By visually examining the details of Figure 17, you can
see the changes that occurred. Interpretation elements such as
color, shape, and pattern help the interpreter judge the scale of
the changes.

Figure 17: Landcover changes during wet and dry seasons in Shirvan
National Park
6.3) At the end, you have to save the currently open project to a
project file (extension .dpr). To save a project, do one of the following:
• Choose File > Save Project on the main menu bar.
• Choose File > Save Project As on the main menu bar; the
Save Project dialog box opens. Select a folder, enter a name for
the project file (.dpr), and click the Save button to store the file.


Sum Up:
The Landsat program is the longest-running enterprise for
acquiring satellite imagery of Earth, usually divided into scenes
for easy downloading. Landsat-5 data can assist a broad range of
specialists in managing the world's food, water, forests, and
other natural resources for a growing world population. The
Landsat images contain many layers of data collected at
different points along the visible and invisible light spectrum as
follows:
❖ As you may encounter, many distinct sorts of Landsat
satellite images are available to you at no cost from
different websites.
❖ Please note that Landsat scenes are large files: unzipped
Landsat-5 scenes are about 404 MB, Landsat-7 scenes
about 654 MB, and Landsat-8 scenes nearly 972 MB.
This is an important consideration when downloading
and manipulating them.
❖ Landsat scenes are made of several files or layers
(bands) of data. Each band represents a section of the
electromagnetic spectrum that has been selected because
it is useful for distinguishing kinds of landcover and
landuse from one another and measuring the ways they
change over time.
❖ Working with challenging satellite images and extracting
useful information inside the eCognition software makes
an enjoyable and instructive exercise for educational
purposes.


Informative Practices
Tips:
1) You have explored the most extensive and widely used remote sensing
data portal (USGS EarthExplorer) in the current lesson. Still, many
remote sensing data portals are available on the relevant websites, all
with various applications and focus areas.
2) Take some time to explore some of the following data sources:
➢ NASA EarthDataSearch: https://search.earthdata.nasa.gov.
3) Landsat satellites have the optimal ground resolution and spectral
bands to efficiently track landuse and document land change due to
climate change, urbanization, drought, wildfire, biomass changes
(carbon assessments), and a host of other natural and human-caused
changes.
Workouts:
1) Download the Landsat-5 imagery for your living area and carefully
manage them.
2) Import Landsat bands into the eCognition software and list
geographic and environmental characteristics.
3) Try combining different Landsat-5 bands and interpreting the
landcover characteristics based on visual interpretation elements
(shape, color, and pattern).
Quizzes:
1) How many continents have Landsat-5 imagery available for them?
2) Will the cloud cover mask the satellite imagery considerably?
3) What are the following image's basic geographic and thematic
characters?

Allied References:
1) Irons, J.R., Dwyer, J.L., and Barsi, J.A. (2012). The next Landsat
satellite: the Landsat data continuity mission. Remote Sensing of
Environment, Vol. 122: pp.11–21.


2) National Aeronautics and Space Administration (2018). BOREAS
TE-18 Landsat TM Maximum Likelihood Classification Image of the
NSA. Paperback, 1st ed., NASA.
3) U.S. Geological Survey (2015). Landsat—Earth Observation
Satellites. In Fact Sheet; U.S. Geological Survey: Reston, VA, USA.
4) U.S. Geological Survey, (2018). Landsat Analysis Ready Data: U.S.
Geological Survey Fact Sheet.
5) Woodcock, C.E.; Allen, R.; Anderson, M.; Belward, A.; Bindschadler,
R.; Cohen, W.; Gao, F.; Goward, S.N.; Helder, D.; Helmer, E. (2008).
Free Access to Landsat Imagery. Science, 320, 1011a.
6) Wulder, M.A.; Masek, J.G.; Cohen, W.B.; Loveland, T.R.;
Woodcock, C.E. (2012). Opening the Archive: How Free Data has
Enabled the Science and Monitoring Promise of Landsat. Remote
Sens. Environ. 122, 2–10.


Tutorial 3

Practicing the eCognition 9.01 Basic Structure

Knowledge-based thinking with eCognition

Opening Statement:
The main aim of the current tutorial is to familiarize students
with the primary structure of the eCognition Developer software,
version 9.01. Accordingly, this tutorial provides initial hands-on
exercises introducing the simple eCognition workflow. The data
material focuses on Landsat-5 TM bands taken over the
Republic of Azerbaijan, Absheron Peninsula, Baku City. In a
general view, the eCognition Developer allows users to import
many types of raster image layers and create targeted projects.
The tutorial also extends the band combination of Landsat TM
bands, providing qualitative landcover information.
Instructive Memo:
✓ Level: Beginner.
✓ Time: This tutorial should not take you more than 1.5 hours.
✓ Software: The eCognition Developer software, Version 9.01.
✓ Data Sources: The Landsat-5 TIFF format files (Path: 166,
Row: 32, Date: 19/07/1998) are available from the USGS
GloVis service.

✓ Subject Scene: Baku region, Azerbaijan.


Tutor Objectives:
By the end of this unit, you should:
Be aware of the structure and user interface of the eCognition
Developer software and the types of projects for which the
software can be used.
Know how to create a new project within eCognition and the
options available for creating a project.
Be familiar with the main menu items of the eCognition
Developer user interface.
Recognize how to change the viewing properties and
associated dialog boxes.
Distinguish the geo-locational features of the Baku region.
Background Concepts:
The main purpose of the eCognition Developer is to facilitate
the development of object-oriented, rule-based image processing
procedures. Therefore, rather than simply classifying each pixel
within the scene independently, the image is split (segmented)
into regions representing objects within the scene. Working with
objects rather than pixels has numerous benefits over traditional
pixel-based analysis; for instance, you can represent the spatial
and contextual relationships between objects or analyze the
shape of an object. eCognition provides an easy-to-use interface
for representing the classification rules and a visual scripting
interface to control the segmentation and classification.


Mastering The Skills:


Inside the eCognition software, the object-oriented approach is
applied to highlight the principle independently of the specific
segmentation and classification techniques. However, the right
choice of processing methods can add much power to the
procedure, and the right training and classification methods give
the user the full advantage of the approach's potential.
Step 1: Understanding the User Interface
eCognition Developer is normally available from the Windows
start menu.
1.1) Start > All Programs > eCognition Developer 9.01. Once
started, it will present you with an interface similar to the one
shown in Figure 1.

Figure 1: eCognition 9.01 Start Menu


1.2) You can choose either Quick Map Mode or Rule Set
Mode.
1.3) By choosing Rule Set Mode and clicking on the OK
button, you will see the following eCognition main interface with
different interface modes.
1.4) To access other interface modes, toggle the
View > Appearance buttons. You may choose one of several
different interfaces (Figure 2).

Figure 2: eCognition optional interface modes


1.5) The eCognition clients share portals with predefined user
interfaces. Practically, a portal provides a selection of tools and
user interface elements typically used for image analysis within
an industry or science domain. However, most tools and user
interface elements that are hidden by default are still available.
We will explain their functionalities in the following tutorials.

Step 2: Components of the User Interface

When you start the eCognition software, you will see at least
five windows, with associated menu bars and many toolbars, as
follows:
2.1) Data/Image Viewer: the image and classification data
viewer. The viewer lets you view the imagery you are
classifying, including manipulating the band order and image
stretching.
2.2) Process Tree: the window within which you develop
your rule set script.
2.3) Class Hierarchy: the window that displays the classes you
develop.
2.4) Image Object Information: this window displays selected
feature values for a selected object.
2.5) Feature View: this window displays a list of all the
features available within eCognition Developer and allows the
current image objects to be colored (high values green, low
values blue) according to their value for a selected feature (Figure 3).


Figure 3: The user interface with the components labeled


Step 3: Toolbar Icons Functions

When you work inside the eCognition software, you need to run
many functions, each with its own purpose.
3.1) Table 1 provides a glossary of the icons available on the
various toolbars within eCognition Developer.

Table 1: Default Toolbar Buttons

File Toolbar: This group of buttons allows you to load image files and to open and save projects.
File Toolbar (workspaces): This group of buttons allows you to open and create new workspaces and opens the Import Scenes dialog to select predefined import templates.
View Settings (window layouts): These buttons, numbered from one to four, allow you to switch between the four window layouts: 1) Load and Manage Data, 2) Configure Analysis, 3) Review Results, and 4) Develop Rule Sets. The Develop Rule Sets view is most commonly used to organize and modify image analysis algorithms.
View Settings (image view options): This group of buttons allows you to select image view options. You can switch between views of image layers, classification, samples, and any features you wish to visualize.
View Settings (outlines and pixels): This group is concerned with displaying outlines and borders of image objects and views of pixels: 1) toggle between pixel view and object mean view; 2) show or hide outlines of image objects; 3) switch between transparent and non-transparent outlined objects; and 4) toggle between showing and hiding polygons. With Show Polygons active, you can visualize the skeletons of selected objects (if this button is not visible, go to View > Customize > Toolbars and select Reset All).
View Settings (scene comparison): This button allows the comparison of a down-sampled scene and toggles between Image View and Project Pixel View.
View Settings (layer display): These toolbar buttons allow you to visualize different layers in grayscale or RGB and, if available, to switch between layers.
View Settings (mixing dialogs): These toolbar buttons open the main View Settings, the Edit Image Layer Mixing dialog, and the Edit Vector Mixing dialog.
View Settings (point cloud): These toolbar buttons toggle between the 3D point cloud view and the 2D image view. The last button is only active in point cloud mode and opens the Point Cloud View Settings dialog.

3.2) Table 2 describes the zoom-function toolbars within
eCognition Developer.


Table 2: Zoom Tools and Functions

Zoom Functions Toolbar: offers normal cursor selection and the ability to drag an image, along with several zoom options.
View Navigate Toolbar: allows you to delete levels, select maps, and navigate the object hierarchy.
Tools Toolbar: allows access to advanced dialog boxes and toolbars. Its buttons launch the Manual Editing Toolbar, Manage Customized Features, Manage Variables, Manage Parameter Sets, Undo, Redo, Save Current Project State, and Restore Saved Project State.

Step 4: Splitting Windows Functions

There are several ways to customize the layout in eCognition
Developer, allowing you to display different views of the same
image. For example, you may wish to compare the results of a
segmentation alongside the original image.


4.1) Selecting Window > Split allows you to split the window
into four panes, horizontally and vertically, at a size of your
choosing.
4.2) Alternatively, you can select Window > Split Horizontally
or Window > Split Vertically to split the window into two.
4.3) Two more options give you the choice of synchronizing the
displays. Independent View allows you to change the size and
position of individual windows, such as zooming or dragging
images, without affecting other windows. Alternatively,
selecting Side-by-Side View will apply any changes made in
one window to the other windows.
4.4) A final option, Swipe View, displays the entire image
across multiple sections while still allowing you to change the
view of an individual section.
Step 5: Magnifier Options

The Magnifier feature lets you view a magnified area of a region
of interest in a separate window. It offers a zoom factor five
times greater than the one available in the normal map view.
5.1) To open the Magnifier window, select View > Windows >
Magnifier from the main menu.
5.2) Holding the cursor over any point of the map centers the
magnified view in the Magnifier window. You can release the
Magnifier window by dragging it while holding down the Ctrl
key.


Step 6: Docking Options

By default, the four commonly used windows, Process Tree,


Class Hierarchy, Image Object Information, and Feature View,
are displayed on the right-hand side of the workspace in the
default Develop Rule Set View. The menu item Window >
Enable Docking facilitates this feature.
6.1) When you deselect this item, the windows will display
independently of each other, allowing you to position and resize
them as you wish. This feature may be useful if you are working
across multiple monitors. Another option to undock windows is
to drag a window while pressing the Ctrl key.
6.2) You can restore the window layouts to their default
positions by selecting View > Restore Default.
6.3) Selecting View > Save Current View also allows you to
save any changes to the workspace view you make (Table 3).

Table 3: The most important toolbars and related functions

File Toolbar: Create New Project; Open Project; Save Project; Create New Workspace; Open Workspace; Save Workspace; Predefined Import.
View Settings Toolbar: Load & Manage Data; Configure Analysis; Review Results; Develop Rule Sets; View Settings; View Layer; View Classification; View Samples; Feature View; Pixel View or Object Mean View; Show or Hide Outline; Transparent/Non-Transparent; Show/Hide Polygons; Single Layer Grayscale; Mix Three Layers RGB; Show Previous Image Layer; Show Next Image Layer; Edit Image Layer Mixing; Show/Hide Vector Layer; Point Cloud View or Image View; Point Cloud View Settings.
Zoom Toolbar: Normal Cursor; Panning Tool; Area Zoom; Zoom Out Center; Zoom In Center; Select Zoom/Magnification Factor; Zoom Scene to Window.
View Navigation Toolbar: Delete Level; Select Active Map; Select Level in Object Hierarchy; Next Level Down in Object Hierarchy; Next Level Up in Object Hierarchy.
Tools Toolbar: Workspace; Image Object Information; Object Table; Undo Process Editing; Redo Process Editing.

Step 7: Creating the First eCognition Project

The first step when using eCognition Developer is to load your
data into the software. The following Azer-OBIA exercises will
always do this by creating an individual project. Just remember
some vital points: during the learning course, prefer images with
the smallest possible volume, fewer bands, and small geographic
areas, for fast-learning purposes. Suppose you are later required
to process very large datasets or many individual images; in that
case, a workspace provides a convenient and efficient way to
store and manage these data. Please refer to earlier lessons for
information about workspaces. For now, to create your new
project:
7.1) Either select File > New Project or select the New Project
icon; this presents you with the dialog to enter your Landsat-5
datasets (for this practice, *.TIF files) and the parameters to
create your project. In this example, you are going to load the
LT05_L1TP_166032_19980719_20161223_01_T1_B1:B7 files
(bands). The main spectral and spatial characteristics of the
Landsat-5 bands are given in Tutorial 2.
7.2) It is worth noting that Landsat 5 recorded many significant
events. It was the first satellite to capture the nuclear accident at
Chernobyl in 1986, documented deforestation occurring in
tropical regions, and captured the devastating 2004 tsunami in
Southeast Asia.
7.3) Inside an appropriate folder (Baku L5-19980719), select all
TIFF files (bands) and click OK (Figure 4).


Figure 4: The Import Image Layers dialog


7.4) When you click OK, the Create Project window appears, and
you can insert image layers by selecting 'Insert' next to the
Image Layer list. Once you have loaded your data, you need to
define the layer aliases, where bands 1-7 correspond to the
aliases BLUE, GREEN, RED, NIR, SWIR1, THERMAL, and
SWIR2, respectively. To bring up the layer properties dialog,
double-click on each image band, or select the band and click
the Edit button.
7.5) You can also add further raster files (such as DEM layers) or
thematic layers (for example, a polygon shapefile of buildings)
in the list below the image layers list, for use during
classification and segmentation. The next step is to give your
project a name; in this case, call it 'Baku-166032_19980719' and
check that the projection information for your image has been
read correctly. If this information is incorrect, you need to check
the 'Pixel size (unit)' setting on the right-hand side. You can also
resample your imagery to a resolution of your choice using the
'Resolution (m/pxl)' dialog box, and select a subset using the
'Subset Selection' button, which presents a dialog similar to the
one shown in previous tutorials.
7.6) To select a subset, you can either draw a red box on the
image in the dialog or provide the pixel limits of your subset.
Before finalizing the project and selecting OK, we will subset
the image, with minimum X set to 2022, maximum X set to
4521, minimum Y set to 3410, and maximum Y set to 4851.
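For reference, the same subset can be reproduced outside eCognition with a windowed read in the rasterio library; the sketch below uses the pixel limits from step 7.6, while the band path is a hypothetical placeholder.

import rasterio
from rasterio.windows import Window

# Pixel limits from step 7.6.
xmin, xmax, ymin, ymax = 2022, 4521, 3410, 4851
window = Window(col_off=xmin, row_off=ymin,
                width=xmax - xmin, height=ymax - ymin)

# Hypothetical path to one unzipped Landsat-5 band.
with rasterio.open("data/Baku_L5_19980719_B4.TIF") as src:
    subset = src.read(1, window=window)
    subset_transform = src.window_transform(window)  # georeferencing of the crop

print(subset.shape)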
7.7) Now click OK to create the project, and you will move back
to the eCognition Developer interface, Figure 5.

Figure 5: The eCognition Developer interface with a new project loaded


7.8) Once the project has been loaded, you can pan and zoom
around the data in the display region using the zoom toolbar,
shown below in Figure 6.

Figure 6: Zoom Functions toolbar

7.9) If the zoom functions toolbar is not displayed, you can turn
it on using the View>Toolbars menu.
Step 8: Selecting bands for Display
8.1) To select the layer(s) to be displayed, you need to click on

the 'Edit Image Layer Mixing' dialog (Figure 7).

Figure 7: The "Edit Image Layer Mixing" dialog

8.2) Using the 'Layer Mixing' drop-down menu, you can select
the number of layers to be mixed in the display, and then by
selecting the individual layers, you may turn them on and off or
increase the weight (Figure 8).


Figure 8: Selecting the layers for display in different modes


8.3) You can also adjust the equalization (stretch) applied to the
displayed data layers using the 'Equalizing' drop-down menu.
The available options are 'Linear (1.00%)', 'Standard Deviation
(3.00)', 'Gamma Correction (0.50)', 'Histogram', and 'Manual'.
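To build intuition for what these menu options do, the sketch below gives rough numpy analogues of three of them; eCognition's exact internal formulas are not documented here, so treat these as approximations rather than the software's implementation.

import numpy as np

def linear(band, pct=1.0):
    # 'Linear (1.00%)': clip the top and bottom percentiles, then scale to 0..1.
    lo, hi = np.percentile(band, (pct, 100.0 - pct))
    return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

def std_dev(band, n=3.0):
    # 'Standard Deviation (3.00)': map mean +/- n standard deviations onto 0..1.
    m, s = float(band.mean()), float(band.std())
    return np.clip((band - (m - n * s)) / (2.0 * n * s), 0.0, 1.0)

def gamma(band, g=0.50):
    # 'Gamma Correction (0.50)': nonlinear brightening of a 0..1 scaled band.
    scaled = (band - band.min()) / (band.max() - band.min())
    return scaled ** g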

Step 9: Managing Multiple Views

9.1) eCognition Developer also allows you to split your display,


allowing multiple views of the same data. This functionality is
available from the Window menu (Figure 9).

Figure 9: the Window Menu


9.2) Here, the current display can be split horizontally or


vertically, and once split, it can be 'linked' to provide views that
automatically move together.
9.3) Once you have split your screen by selecting the Window
you wish to change, you can use the same tools outlined above
to manipulate the display properties in each of the different
views (Figure 10).

Figure 10: The result of window split functions

Step 10: Common Landsat 5 TM Band Combinations

The image bands of Landsat satellites cover specific ranges of
the electromagnetic spectrum; each has its own characteristics
and provides part of the landcover information. By combining
different bands of Landsat images, you can access more
information. A few examples are provided in Table 5 to explain
combining image bands and related applications.
10.1) Accordingly, as with any Landsat bands, you can arrange
them in such a way as to extract unique and new information
inside eCognition 9.01. Matching the most suitable informative
band combinations with their respective images can be done
with the Edit Vector Layer Mixing dialog box.


Table 5: Landsat-5 TM band combinations illustrating Baku, Azerbaijan

(B3, B2, B1) Natural Color: uses a band combination of red (3), green (2), and blue (1). It replicates close to what our human eyes can see.
(B4, B3, B2) Color Infrared: also called the near-infrared (NIR) composite; uses near-infrared (4), red (3), and green (2). Because chlorophyll reflects near-infrared light, this band composition is useful for analyzing vegetation; in particular, areas in red have better vegetation health.
(B7, B5, B3) Short-Wave Infrared: uses SWIR-2 (7), SWIR-1 (5), and red (3). This composite displays vegetation in shades of green.
(B6, B5, B2) Agriculture: uses SWIR-1 (6), near-infrared (5), and blue (2). It is commonly used for crop monitoring because of the use of short-wave and near-infrared bands; healthy vegetation appears dark green.


10.2) You can open it by double-clicking in the lower-right pane
of the View Settings dialog, selecting View > Vector Layer
Mixing, or clicking the Show/Hide Vector Layers button. This
dialog lets you change the order of different layers by dragging
and dropping a thematic vector layer.
10.3) In the case of Landsat-5, some of the popular band
combinations include natural color, color infrared, and various
vegetation indices. To create Landsat-5 TM band combinations,
check out the "Edit Image Layer Mixing" dialog box, which
helps you visualize different band combinations in different
ways. You can read the next tutorials for more details.

Sum Up:

eCognition is an intuitive end-user software used to configure
and execute image analysis applications. It supports fully
automated or semi-automated workflows and guides users
through the application they are running. eCognition 9.01
incorporates all the necessary tools for users to import, view,
and visualize multiple Landsat bands and create associated
imagery. You should now be able to open the eCognition
Developer, create a project, and manipulate the display to view
the band combinations as you wish. The next tutorials will take
you through further imagery to load into your projects and some
more advanced features of the eCognition software.


Informative Practices
Tips:
1) The eCognition software is a powerful out-of-the-box
landcover and change-detection mapping solution.
2) eCognition enables users at any skill level to quickly produce
high-quality, GIS-ready deliverables from imagery.
3) With eCognition, you can rapidly and easily combine Landsat's
different bands to perceive much landcover information.
Workouts:
1) List the satellite imagery that can be imported into the
eCognition setting.
2) Name the main parts and functions of the View Settings
toolbar.
3) List the main applications of the "Edit Image Layer Mixing"
dialog box.
Quizzes:
1) What is unique in eCognition?
2) What are typical Trimble eCognition use cases?
3) What is the main aim of band combination ideas?
Allied References:
1) eCognition Reference Book (2014). Trimble eCognition
Reference Book, Munich, Germany: Trimble Germany
GmbH.
2) eCognition User Guide, (2014). Trimble eCognition
Developer User Guide, Munich, Germany: Trimble Germany
GmbH.
3) Flanders, D., Hall-Beyer, M., and Pereverzoff, J. (2003).
Preliminary evaluation of eCognition object-based software
for cut block delineation and feature extraction. Canadian
Journal of Remote Sensing, 29(4), 441– 452.


4) Thenkabail, P.S., Vermote, E.F., Vogelmann, J., Wulder,
M.A., and Wynne, R. (2008). Free access to Landsat imagery.
Science 302 (5879), 1011.
5) Trimble eCognition Suite (2015). SYSTEM
REQUIREMENTS, eCognition 9.1. Trimble Documentation.
6) U.S. Geological Survey (2016). Landsat brings understanding
to the impact of industrialization (ver. 1.1, September 2019)
U.S. Geological Survey Fact Sheet 2016–3054, p. 2.


Tutorial 4

Illustrating Land Surfaces by Landsat Imagery

Landsats are the most accurately informative Earth-observing satellites.

Opening Statement:
This tutorial teaches you, through simple steps, to find, download,
and view free Landsat-7 imagery and to extract useful landcover
information inside the eCognition software. Geospatial students
and even professionals looking for satellite imagery often prefer
medium-resolution imagery for a specific application but are
unsure where to start. The primary method is to combine
different bands into a few color images, enabling you to
interpret qualitative information visually. The main aim of the
tutorial is to develop a few band ratios for separating water
surfaces from other types of landcover along the Kura River,
Aran Rayon, Azerbaijan.
Instructive Memo:
✓ Level: Intermediate
✓ Time: This tutorial should not take you more than 1.5 hours.
✓ Software: The eCognition Developer software (Version 9.01),


✓ Data Sources: Landsat-7, Glovis site,


https://glovis.usgs.gov/app,
✓ Subject Scene: Kura River, LE07_L1TP_168032_2000-05-11.
Tutor Objectives:
By the end of this unit, you should:
Access the Landsat-7 bands.
Register and log in to the site.
Create image combinations.
Ratio water surfaces.
Background Concepts:
Surface water change is an important indicator of
environmental, climatic, and anthropogenic activities. Remote
sensors, such as Landsat-7, have been providing data for the last
four decades, useful for extracting landcover types such as
green covers (forest, pasture, agriculture) and water sources.
Researchers have proposed many surface water extraction
techniques, among which different band-combination methods
are popular owing to their simplicity and cost-effectiveness.
Also, by applying the standard methods, you can visualize water
features so that anyone can detect changes. Based on the current
tutorial aims, we will apply a few well-known water band
combinations and band ratios for surface water extraction using
Landsat-7 images. A unique geographic site represents overall
water features, such as rivers, lakes, ponds, and creeks, and
other morphologic shapes such as meanders in the region.


Table 1: Landsat 7 Enhanced Thematic Mapper Plus (ETM+) bands

Band 1, Blue (0.45-0.52 μm, 30 m): Bathymetric mapping; distinguishing soil from vegetation, and deciduous from coniferous vegetation.
Band 2, Green (0.52-0.60 μm, 30 m): Emphasizes peak vegetation, which is useful for assessing plant vigor.
Band 3, Red (0.63-0.69 μm, 30 m): Discriminates vegetation slopes.
Band 4, Near Infrared (0.77-0.90 μm, 30 m): Emphasizes biomass content and shorelines.
Band 5, Short-Wave Infrared (1.55-1.75 μm, 30 m): Discriminates moisture content of soil and vegetation; penetrates thin clouds.
Band 6, Thermal Infrared (10.40-12.50 μm, 60 m, resampled to 30 m): Thermal mapping and estimated soil moisture.
Band 7, Short-Wave Infrared (2.09-2.35 μm, 30 m): Hydrothermally altered rocks associated with mineral deposits.
Band 8, Panchromatic (0.52-0.90 μm, 15 m): Sharper image definition at 15-meter resolution.

Landsat 7 Enhanced Thematic Mapper Plus (ETM+) images
consist of eight spectral bands with a spatial resolution of 30
meters for Bands 1 to 7 (Table 1). The resolution of Band 8
(panchromatic) is 15 meters. All bands can collect one of two
gain settings (high or low) for increased radiometric sensitivity
and dynamic range, while Band 6 collects both high and low
gain for all scenes. The approximate scene size is 170 km north-
south by 183 km east-west.
Landsat-7 data products are provided free of charge to all data
users, including students and researchers, under the terms and
conditions prescribed by the GloVis programme.

Mastering The Skills:

Step 1: Setting up Landsat-7 Bands


1.1) Go to https://glovis.usgs.gov/app and create a user
account for yourself. For more details, see the previous tutorials.
1.2) Click the icon to boot the eCognition Developer in Rule
Set Mode.
1.3) Set up all the Landsat-7 bands inside the eCognition
setting and subset the image layers. For more details, see the
earlier tutorials.
1.4) Create a project, for example 'Kura River', adjusted for
part of the Aran Rayon along the Kura, the longest river in
Azerbaijan (Figure 1).


Figure 1: The Kura River location, in the Aran Rayon

Step 2: Mixing Image Layers

2.1) Set the layer mixing and equalizing options using the "Edit
Image Layer Mixing" dialog box capabilities. This affects the
display of the Landsat-7 image, with its nine band files (see
Table 2), as subsetted for a small part of the Aran Rayon.
2.2) Define the color composition to visualize image layers in
the map view. In addition, you can choose from different
equalizing options, which let you visualize the image better and
recognize visual structures without changing the data.
2.3) You can also choose to hide layers, which can be very
helpful when investigating image data and results. Note that
changing the image layer mixing changes the visual display of
the image but not the underlying image data; it has no impact on
the process of image analysis.


2.4) To do so, open the Edit Image Layer Mixing dialog box
via its icon (Figure 2).

Figure 2: The "Edit Image Layer Mixing" dialog box, the layer mixing
and equalizing options
2.5) You may define the color composition to visualize image
layers in the map view, based on different RGB Layer Mixing
presets (Figure 3).

Figure 3: Layer Mixing presets (from left to right): One-Layer Gray,
Three-Layer Mix, Six-Layer Mix
2.6) You can also choose from different equalizing options,
enabling you to visualize the image better and recognize the
visual structures without changing them. Hiding layers can
likewise be very helpful when investigating image data and
results.
2.7) Note that changing the image layer mixing only changes
the visual display of the image, not the underlying image data;
it has no impact on the process of image analysis.
2.8) When creating a new project, the first three image layers
are displayed in red, green, and blue. To change the layer
mixing, open the "Edit Image Layer Mixing" dialog box by
choosing View > Image Layer Mixing from the main menu.
2.9) Define the display color of each image layer. You can
also set the weighting of the red, green, and blue channels. Your
choices are displayed together as additive colors in the map
view. Any layer without a dot or a value in at least one column
will not be displayed.
2.10) Choose a layer mixing preset (see Figure 4).

Figure 4: Edit Image Layer Mixing dialog box, Layer Mixing options


2.10.1) Clear: all assignments and weightings are removed
from the Image Layer table. One Layer Gray displays one image
layer in grayscale mode, using the red, green, and blue channels
together.
2.10.2) False Color (Hot Metal) is recommended for single
image layers with large intensity ranges, displaying a color
range from black over red to white. Use this preset for image
data created with positron emission tomography.
2.10.3) False Color (Rainbow) is recommended for visualizing
single image layers in rainbow colors. Here, the regular color
range is converted to a color range between blue for darker pixel
intensity values and red for brighter pixel intensity values.
2.10.4) Three Layer Mix displays layer one in the red channel,
layer two in green, and layer three in blue.
2.10.5) Six Layer Mix displays additional layers. You may
change these settings to your preferred options with the Shift
button or by clicking the respective R, G, or B cell. One layer
can be displayed in more than one color, and more than one
layer can be displayed in the same color.
2.10.6) You can assign individual weights to each layer. Clear
the "No Layer Weights" checkbox and click a color for each
layer; left-clicking increases the layer's color weight, while
right-clicking decreases it. The Auto Update checkbox refreshes
the view with each change of the layer mixing settings; clear
this checkbox to show new settings only after clicking OK. With
the Auto Update checkbox cleared, the Preview button becomes
active.


2.10.7) Compare the available image equalization methods and
choose the one that gives you the best visualization of the
objects of interest. Equalization settings are stored in the
workspace and applied to all projects, or stored in a separate
project. In the Options dialog box, you can define a default
equalization setting (Figure 5).

Figure 5: Edit Image Layer Mixing dialog box, Equalizing options

2.10.8) You may prefer to click the Parameter button to change


the equalizing parameters, if available.

Step 3: Landsat-7 Band Combinations

As you have noticed, based on the Edit Image Layer Mixing dialog
box capabilities, you can extract specific qualitative information
from any Landsat-7 image inside the eCognition setting.


3.1) In Table 2, different band combinations highlight the water
surfaces in a subsetted image, indicating the Kura River and its
surroundings in Aran Rayon.

Table 2: Dissimilar band combinations indicating the Kura River and surrounding water surfaces (band(s) and equalizing parameters)

B6 (single layer): Linear (1.00%), no layer weights.
B5, B4, B3: Linear (1.00%), no layer weights.
B6-1, B5, B4: Standard Deviation (3), no layer weights.
B5, B4, B3: Standard Deviation (3), no layer weights.

3.2) As can be seen from the appearance of the above images,
each of the Landsat band compositions reveals the water
surfaces. However, combining different Landsat-7 bands with
the Gamma Correction method illustrates the water surfaces
more clearly.
3.3) To create more Landsat-7 band combinations for further
information, you can check out the Edit Image Layer Mixing
dialog box, which can help you visualize more (six-band)
combinations in different ways.
3.3.1) Set the Edit Image Layer Mixing dialog box for six bands,
as you see in Figure 6 (a & b).


Figure 6 (a & b): Setting the Landsat-7 bands with the Standard
Deviation and Gamma Correction parameters

3.3.2) Set the Equalizing mode to the Standard Deviation and
Gamma Correction options separately, and check the No Layer
Weights option. Then click OK to see the result, as it appears in
Figure 6.
Figure 6.
3.3.3) As you notice from Figure 6a, the water surfaces are
highlighted in dark and blue colors and the vegetation in green
colors. Non-vegetated and built-up areas are quite apparent,
marked in white to yellowish colors. If you change the
Equalizing mode to Gamma Correction, you will find much more
visual information on the water surfaces (Figure 6b).
3.3.4) In the Landsat-7 visible spectrum, clearer water reflects
less than turbid water. In the near-IR (B4) and visible bands,
water increasingly absorbs the light, making it darker. This
depends on water depth and wavelength; increasing amounts of
dissolved inorganic materials in water bodies tend to shift the
peak of visible reflectance from the green region (clearer water)
toward the red region of the spectrum.
3.3.5) Applying the "Edit Image Layer Mixing" dialog box of
eCognition, based on the B1, B3, B4, B5, B7, and B8 bands
combined with the Gamma Correction parameters, is very
helpful when you intend to pick out land from water.

Step 4: Landsat-7 Water Ratioing

A very useful image processing technique is band ratioing. For
each pixel, you can divide the digital number (DN) value of any
one band by the value of another band. You can rescale the
resulting ratio values to provide gray-tone (or colored) images,
from which you can extract higher levels of information,
depending on eCognition's algebra computation power.
4.1) To start, right-click inside the Process Tree and select
Append New. When the Edit Process dialog box opens, enter
the name 'Water Ratioing' and click on the OK option, as in
Figure 7.


Figure 7: The Edit Process dialog box, saved as Water Ratioing

4.2) Then, right-click on the Water Ratioing process and, from
the list that appears, choose the Insert Child option. When the
Edit Process dialog box opens, choose "layer arithmetics" from
the algorithms list (1) and set up the other options as in Figure 8.

Figure 8: The Edit Process window, inserting a water ratioing process
based on the "layer arithmetics" functionality in eCognition


4.3) First, set a ratio equation, for example "(B1Blue/B5SWIR1)",
in the Output Value option (2). For the Output Layer, type a
name, for example "B-R B1/B5" (3). Then set the Output Layer
Type to the "32Bit float" option (4). Finally, click on the Execute
key (5) to run the ratioing algorithm.
4.4) When the ratio computation finishes, click on the Edit Image
Layer Mixing icon on the menu bar to open its dialog box; you
can now switch the display to the output layer. Once again,
repeat the above-mentioned procedure for another band ratio, for
example "(B3Red/B4NIR)", to extract another water component.
4.5) When you insert the different water ratioing algorithms inside
the Process Tree, it looks like Figure 9.

Figure 9: The Process Tree with two water ratio algorithms


4.6) The results of two different water ratios are illustrated in
Figure 10 (a & b).


Figure 10: Examples of band ratioing to extract other water
components around the Kura River

4.7) Hence, the NIR band performs better than SWIR1 and SWIR2
for water extraction; the other spectral bands of Landsat-7 also
reveal a large contrast between water and non-water bodies.


4.8) Inside eCognition, there are further algebra algorithms to
combine and permute the Landsat-7 bands (for example, bands 1-5
and 7) into more than 20 RGB combinations. You may observe
that contrast increases when bands from different spectral regions
are combined, for example using bands 5, 7, and 3 (or 6, 7, and 4)
as numerators. In addition, you can apply thresholding methods to
the resulting ratio layers (as sketched in the code above) to obtain
much more accurate information on the water surfaces.

Sum Up:

Earth's surface has undergone various landcover/landuse
changes in recent years. Detecting these changes, or specific
classes such as farmland, forest, urban, and water, has been
important in various studies. However, these changes are slow,
and thus they go unnoticed unless they occur over large extents.
Long historical records, such as Landsat-7, provide concrete
evidence that helps the scientific community detect, understand,
and prevent these changes. Surface water is a vital resource in
everyday life; some of its uses are drinking, irrigation,
aquaculture, and thermoelectric cooling. Surface water is also a
good indicator of landcover changes in environments affected by
climatic and anthropogenic activities.


Informative Practices
Tips:
1) At an altitude of 705 km, a full surface scan by Landsat 7
takes 232 orbits, or 16 days. In local solar time, the terrain
survey takes place at approximately 10 am (± 15 minutes).
2) You can fuse (combine) Landsat-7 bands, particularly the
panchromatic B8, with other sensor data to enhance your
research approaches.
3) You can process Landsat-7 images to support land
monitoring studies: monitoring of vegetation, soil, and water
cover, as well as observation of inland waterways and coastal
areas.
Workouts:
1) Clarify the difference between the TM and ETM+ sensors.
2) Download a set of Landsat-7 imagery for the region where you
live.
3) The Moisture Stress Index (MSI, Landsat 4-7: B5 / B4) is
applied for canopy stress analysis, productivity prediction,
and biophysical modeling. Apply this index to your selected
Landsat imagery.
Quizzes:
1) What sorts of band combinations are suitable for
demonstrating fired lands?
2) Which bands of Landsat-7 have the highest spatial
resolutions?
3) How many Landsat satellites are there in space?
Allied References:


1) Butcher, G., Barnes, C., and Owen, L. (2019). Landsat: The
cornerstone of global land imaging. GIM International Magazine,
Earth Resources Observation and Science (EROS) Center report.
2) Dolan, K. P. Sabelhaus, D. Williams, (1988). “Landsat-7
Extending 25 Years of Global Coverage,” Proceedings of
Information for Sustainability, 27th International Symposium
on Remote Sensing of Environment, Tromsoe, Norway, pp.
622-625.
3) eCognition Developer (2014). Reference Book, Trimble
Germany GmbH, Arnulfstrasse 126, D-80636 Munich,
Germany.
4) Etter, M. P. (1990). Viewing the Earth: The Social
Construction of the Landsat Satellite System (Inside
Technology), The MIT Press.
5) Holm, T. (2013). “Landsat: Building a Future on 40 Years of
Success - Status: Landsat 7,” 12th Annual JACIE (Joint
Agency Commercial Imagery Evaluation) Workshop, St.
Louis, MO, USA.
6) National Aeronautics and Space Administration (2019).
Experimental study of digital image processing techniques for
LANDSAT data, Kindle Edition, NASA.


Tutorial 5

Landcover Spectral Indexing inside eCognition

Indices: key signs in landcover identification

Opening Statement:
In the current tutorial, you will learn to generate spectral
indicators from Landsat-8 imagery by applying the eCognition
software's computational methods, which offer qualitative
information about ground surfaces. You will cover creating
spectral indices that highlight vegetation and water features in
the Sari-Su Lake region, Aran Rayon, Azerbaijan, emphasizing
the roles of the "Image Object Information" and "Feature View"
windows as key entries to the OBIA procedures.
Instructive Memo:
✓ Level: Beginner and Intermediate
✓ Time: This tutorial should not take you more than 1.5 hours.
✓ Software: The eCognition Developer software (Version 9.01),
✓ Data Sources: Landsat-8 Imagery, https://glovis.usgs.gov/app,
✓ Subject Scene: Sari-Su Lake, Kura River, Azerbaijan.
Tutor Objectives:
By the end of this unit, you should be able to:

Understand Spectral Indices creation.


Prepare the Landsat-8 imagery.
Create spectral indices.
Manage Customized Features inside the eCognition.
Background Concepts:
Inside the eCognition software, it is possible to form many
spectral indices by transforming spectral data using ratios
between bands to reduce the data into meaningful information.
Landcover features could be extracted using spectral indices
ranging from vegetation, water, burned areas, snow, and among
many others. These indices enhance data interpretation and are
used in many scientific applications such as monitoring
plant/crop health, delineating water bodies, calculating burn
extents, spatial modeling, and even climate change detection
procedures.
While numerous spectral indices are available that emphasize
different landscape features, perhaps the most common is the
Normalized Difference Vegetation Index (NDVI). You can use
this index to detect vegetation density and health. NDVI is
derived from the equation NDVI = (NIR - Red) / (NIR + Red),
where NIR is the near-infrared band and Red is the red band;
for example, NIR = 0.5 and Red = 0.1 give NDVI = 0.4 / 0.6 ≈ 0.67.
Resultant values range from -1 to 1, where positive values
typically represent vegetation (with denser vegetation closer to
1) and negative values represent non-vegetation (bare soil, snow,
water, etc.), as shown in Figure 1.


Figure 1: Representation of NDVI values scaled from -1 to +1 for
different landcover types
The subject scene is part of the Aran Rayon, where Sari-Su, the
largest lake in Azerbaijan, lies in the Kur-Araz Lowland. This
lake stretches along the Kura River from Imishli Rayon
southeast to Sabirabad Rayon. It is one of the four lakes in the
area, mainly made up of wetlands and swamps, surrounded by
rural residents and agricultural lands (Figure 2).

Figure 2: The subject scene, where the Sari-Su Lake and Kura River
are located


Mastering the Skills:

There are several methods for creating spectral indices, some of
which are presented in this tutorial. Before producing these
indicators, note that you need to prepare the required conditions
for the indexing procedure inside eCognition.

Step 1: Landsat-8 Data Preparation

Landsat-8, a collaboration between NASA and the U.S.
Geological Survey, provides moderate-resolution measurements
of the Earth's terrestrial and polar regions in the visible, near-
infrared, shortwave infrared, and two thermal bands (Table 1).
Every day, staff receive and process approximately 450 new
Landsat-8 scenes, which are available for download at no cost
within 24 hours of acquisition.

Table 1: Landsat 8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) bands

Band 1, Coastal aerosol: 0.43-0.45 μm, 30 m
Band 2, Blue: 0.45-0.51 μm, 30 m
Band 3, Green: 0.53-0.59 μm, 30 m
Band 4, Red: 0.64-0.67 μm, 30 m
Band 5, Near Infrared (NIR): 0.85-0.88 μm, 30 m
Band 6, SWIR 1: 1.57-1.65 μm, 30 m
Band 7, SWIR 2: 2.11-2.29 μm, 30 m
Band 8, Panchromatic: 0.50-0.68 μm, 15 m
Band 9, Cirrus: 1.36-1.38 μm, 30 m
Band 10, Thermal Infrared (TIRS) 1: 10.60-11.19 μm, 100 m
Band 11, Thermal Infrared (TIRS) 2: 11.50-12.51 μm, 100 m

1.1) As you learned in the previous steps, download the Landsat-8
images from the relevant site (GloVis) and organize them in
TIFF format.
1.2) Import the Landsat-8 bands into the eCognition 9.01 setting.
Enter only the bands required to create the water and vegetation
indices (bands 2-8) into the software environment.
1.3) Inside the eCognition software, create the required project
and save the data to a specific path and folder. You may need to
review the previous tutorials for more details.

Step 2: Basic Spectral Indices

2.1) There are many spectral indices you can use to analyze
various aspects of vegetation, water resources, snow, soil, and
fire, among others, inside the eCognition setting.


Table 2: The main spectral indices adjusted for Landsat-8 bands

Normalized Difference Vegetation Index: NDVI = (B5 - B4) / (B5 + B4). NDVI is highly associated with vegetation content; high values correspond to areas that reflect more in the near-infrared spectrum.
Normalized Difference Water Index: NDWI = (B3 - B5) / (B3 + B5). Used for water body analysis; it enhances water information efficiently in most cases, but it is sensitive to built-up land and can over-estimate water bodies.
Green Normalized Difference Vegetation Index: GNDVI = (B5 - B3) / (B5 + B3). A modified version of NDVI that is more sensitive to variation in the chlorophyll content of crops.
Enhanced Vegetation Index: EVI = 2.5 * ((B5 - B4) / (B5 + 6 * B4 - 7.5 * B2 + 1)). EVI corrects for some atmospheric conditions and canopy background noise and is more sensitive in areas of dense vegetation.
Advanced Vegetation Index: AVI = [B5 * (1 - B4) * (B5 - B4)]^(1/3). Used in vegetation studies to monitor crop and forest variations over time.
Soil Adjusted Vegetation Index: SAVI = ((B5 - B4) / (B5 + B4 + 0.5)) * 1.5. SAVI corrects NDVI for the influence of soil brightness in areas where vegetative cover is low.
Normalized Difference Moisture Index: NDMI = (B5 - B6) / (B5 + B6). Used to determine vegetation water content.
Green Chlorophyll Index: GCI = (B5 / B3) - 1. Used to estimate the leaf chlorophyll content of various plant species.
Normalized Burned Ratio Index: NBRI = (B5 - B7) / (B5 + B7). Highlights forest fires, severe artificial or natural phenomena that destroy natural resources and livestock and unbalance local environments.
Bare Soil Index: BSI = ((B6 + B4) - (B5 + B2)) / ((B6 + B4) + (B5 + B2)). BSI combines blue, red, near-infrared, and shortwave infrared spectral bands to capture soil variations.
Normalized Difference Snow Index: NDSI = (B3 - B6) / (B3 + B6). Shows snow cover over land areas; since snow absorbs most of the incident radiation in the SWIR while clouds do not, NDSI can distinguish snow from clouds.
Normalized Difference Glacier Index: NDGI = (B3 - B4) / (B3 + B4). Helps detect and monitor glaciers using the green and red spectral bands.


2.2) Table 2 summarises the formulas and concepts of the main
spectral indices adjusted for Landsat-8.
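
For reference, these formulas translate directly into a few lines of NumPy. The sketch below is illustrative only, assuming the band stack from Step 1 (the file name landsat8_stack.tif is hypothetical) with bands 2-7 stored in slots 1-6:

# Sketch: computing NDVI, NDWI, and SAVI from a Landsat-8 band stack.
import numpy as np
import rasterio

with rasterio.open("landsat8_stack.tif") as src:  # hypothetical stack from Step 1
    b3 = src.read(2).astype("float32")   # Green (Landsat band 3)
    b4 = src.read(3).astype("float32")   # Red   (Landsat band 4)
    b5 = src.read(4).astype("float32")   # NIR   (Landsat band 5)

def safe_divide(num, den):
    # Return num/den, with 0 wherever the denominator is 0 (nodata areas).
    return np.divide(num, den, out=np.zeros_like(num), where=den != 0)

ndvi = safe_divide(b5 - b4, b5 + b4)               # vegetation
ndwi = safe_divide(b3 - b5, b3 + b5)               # open water
savi = 1.5 * safe_divide(b5 - b4, b5 + b4 + 0.5)   # soil-adjusted vegetation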

Step 3: Setting the eCognition Outline

3.1) Start the eCognition software and create the required initial
image based on the different band combinations.
3.2) Now, you need to set the main functional windows as View
Settings, Process Tree, Image Object Information, and Feature
View, as illustrated in Figure 3.

Figure 3: The eCognition main dialog boxes, setting Landsat-8 band
combinations
Step 4: Indices by Arithmetic Algorithm
4.1) With the Landsat-8 imagery in place, the first option is to
apply the Layer Arithmetic algorithm to execute equations that
create popular, well-known spectral indices such as NDVI =
(NIR_B5 - Red_B4) / (NIR_B5 + Red_B4) and NDWI =
(Green_B3 - NIR_B5) / (Green_B3 + NIR_B5). The procedure
below also guides you through creating any spectral index using
the arithmetic customized feature.
4.2) Inside the eCognition, right-click in the Process Tree and
select the Layer Arithmetic algorithm. Set up all parameters and
options as you notice in Figure 4.

Figure 4: The Edit Process dialog box, setting up the Layer Arithmetic
algorithm parameters
4.3) Click on the Execute option and wait for a second to finish
the process. Soon you will notice the NDVI layer created.
4.4) Click on Edit Image Layer Mixing and set it up as you can
see in Figure 5. Then click on the OK option to view your
desired NDVI layer.


Figure 5: The resulting NDVI layer, based on linear equalizing and
false-color layer mixing options
4.5) Throughout the subsetted area, because vegetation strongly
reflects near-infrared light and absorbs red light, the vegetation
index clearly quantifies the amount of vegetation. Water
surfaces are shown in blue, and areas with little or no vegetation
stand out clearly in the range of yellow colors.
4.6) Repeat the above-mentioned steps (4.2 to 4.5), this time to
create the NDWI index. Keep in mind the changes required to
apply the corresponding formula.
4.7) As you can see, the NDWI index is ideal for water-body
analysis and can enhance water information efficiently in most
cases, though it is sensitive to built-up land and can over-
estimate water bodies. In Figure 6, the false-colored water
surfaces are completely recognizable.


Figure 6: The resulting NDWI layer, based on linear equalizing and
false-color layer mixing options
4.8) You can even clearly see the vegetation expansion inside
the Sarisu Lake and river forms (meanders and oxbows)
alongside the Kura River.
4.9) When you conclude spectral indexing inside the eCognition
9.01, your Process Tree looks like Figure 7.

Figure 7: Process Tree, Spectral Indices defined

Step 5: Manage Customized Features

Customized features are eCognition tools that you can create
and adapt to your needs. They can be arithmetic or relational
features that depend on other existing features. All customized
features are based on the features shipped with eCognition
Developer 9.0 and on newly created customized features. The
Manage Customized Features dialog box allows you to add,
edit, copy, and delete customized features, and it enables you to
create new arithmetic and relational features based on existing
ones.
5-1) Arithmetic Features
5.1.1) Arithmetic features are existing features, variables, and
constants combined via arithmetic operations. Arithmetic
features can combine multiple features but apply only to a single
object.
5.1.2) To open the Manage Customized Features dialog box, do
one of the following:
5.1.2a) On the menu bar, click on Tools and select Manage
Customized Features.
5.1.2b) Or right-click on the "Image Object Information" dialog
box and click on the Manage Customized Features option
(Figure 8).

Figure 8: Manage Customized Features dialog box


5.1.3) Click Add to create a new customized feature. The
Customized Features dialog box will open, providing you with
tools to create arithmetic and relational features (Figure 9).
5.1.4) To edit a feature, select it and click Edit to open the
Customized Features dialog box. Click the Copy or Delete
options to copy or delete a feature.

Figure 9: Creating an arithmetic feature in the Customized Features
dialog box
5.1.5) To create an arithmetic customized feature, select Object
Features > Customized > Create New Arithmetic Feature. Then,
in the Feature View window, double-click on Create New
Arithmetic Feature and insert a name (for example, NDVI-2) for
the customized feature to be created.


5.1.6) Use the calculator to create the arithmetic expression.
You can type in new constants, or select features or variables in
the feature tree on the right.
5.1.7) Choose arithmetic operations or mathematical functions.
The expression you create is displayed in the text area above the
calculator. To calculate or delete an arithmetic expression,
highlight the expression with the cursor and click either
Calculate or Del as appropriate.
5.1.8) You can switch between degree (Deg) and radian (Rad)
measurements and invert the expression. To create the new
feature, click Apply to create it without leaving the dialog box,
or OK to create the feature and close the dialog box (Figure 10).

Figure 10: A modified arithmetic feature in the Customized Features
dialog box

5.1.9) The initial black and white maps of vegetation and water
indices are illustrated in Figure 11.

Figure 11: Black and white maps of NDVI-2 and NDWI-2 spectral
indices created through an Arithmetic Customized Feature method
5.1.10) After creating NDVI and NDWI indices, you can find
the new arithmetic feature in the Image Object Information
window or the Feature View window under Object features >
Customized options.
5-2) Relational Features
Relational features are used to compare a particular feature of
one object to those of related objects of a specific class within a
specified distance. Related objects are surrounding neighbors,
sub-objects, super-objects, or whole image object levels.
Relational features are composed of only a single feature but
refer to related objects. We will outline this subject in the
coming tutorials, when learners have become more familiar with
the nature and structure of segmented objects, and will discuss
these topics again in more detail there.

Step 6: Image Object Information Window

When analyzing individual images or developing rule sets, you
will need to investigate single image objects. The Features tab
of the Image Object Information window is used to gain
information on a selected image object.
6.1) Image objects consist of spectral, shape, and hierarchical
elements. These elements are called features in eCognition
Developer 9.0. The Feature tab in the Image Object Information
window displays the values of selected attributes when an image
object is selected from within the map view. The Image Object
Information window is open by default but can also be selected
from the View menu if required.
6.2) To get information on a specific image object, click on an
image in the map view (some features are listed by default).
6.3) To add or remove features, right-click the Image Object
Information window and choose Select Features to Display. The
Select Displayed Features dialog box opens, allowing you to
select a feature of interest (Figure 12).


Figure 12: The Select Features to Display dialog box

6.4) The selected feature values are displayed in the map view.
To compare single image objects, click another image object in
the map view, and the displayed feature values are updated
(Figure 13).

Figure 13: The Image Object Information window


6.5) Double-click a feature to display it in the map view; click it
in the map view a second time to deselect a selected image
object. If the processing of image object information takes too
long, or if you want to cancel the processing for any reason, you
can use the Cancel button in the status bar.

Step 7: Feature View Window

Image objects have spectral, shape, and hierarchical
characteristics, and these features are used as sources of
information to define the inclusion-or-exclusion parameters used
to classify image objects.
7.1) There are two major types of features. Object features are
attributes of image objects (for example, the area of an image
object). Global features are not connected to an individual
image object (for example, the number of image objects of a
certain class).
7.2) Available features are sorted in the feature tree, displayed in
the Feature View window (Figure 14). It is open by default but
can also be selected via Tools > Feature View or View > Feature
View.


Figure 14: The Feature View window


Sum Up:

In the current tutorial, you have processed Landsat-8 images
inside eCognition to determine various spectral indices,
particularly vegetation and water indices. The tutorial added a
brief overview of these functions to your growing OBIA
experience; for more details, see the next tutorial. Spectral
indexing is also very important for landcover/landuse
applications such as:
❖ monitoring plant growth,
❖ monitoring landcover and forests,
❖ providing information on pollution,
❖ detecting lake and coastal changes,
❖ mapping mud-volcanic eruptions,
❖ landslide modeling and disaster mapping, and
❖ flash-flood monitoring and watershed modeling.


Informative Practices
Tips:
1) The Normalized Difference Vegetation Index (NDVI) is a
simple numerical indicator that you can use to analyze remote
sensing measurements. NDVI is related to vegetation, where
healthy vegetation reflects very well in the near-infrared part
of the spectrum.
2) Index values can range from -1.0 to 1.0, but vegetation values
typically range between 0.1 and 0.7. Freestanding water
(ocean, sea, lake, river, etc.) gives a rather low reflectance in
both spectral bands and thus results in very low positive or
even slightly negative NDVI values.
3) Soils generally exhibit a near-infrared spectral reflectance
somewhat larger than the red, thus generating rather small
positive NDVI values (say 0.1 to 0.2).
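As a quick worked example of these ranges, take a hypothetical
healthy-vegetation pixel with NIR (B5) reflectance 0.50 and red
(B4) reflectance 0.08: NDVI = (0.50 - 0.08) / (0.50 + 0.08) =
0.42 / 0.58 ≈ 0.72, at the top of the typical vegetation range. A
hypothetical water pixel with NIR 0.02 and red 0.04 gives
NDVI = (0.02 - 0.04) / (0.02 + 0.04) ≈ -0.33, slightly negative
as expected.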
Workouts:
1) Clarify the difference between GNDVI and EVI Indices.
2) Download a set of Landsat-8 imagery for a region where you
live.
3) Try to create the basic domain spectral indices in the region.
Quizzes:
1) What sorts of indices are suitable for mapping burned lands?
2) What is the basic functionality of the Image Object
Information window?
3) How could you use the Feature View window for the
thresholding aims?


Allied References:
1) Flanders, D., M. Hall-Beyer, and J. Pereverzoff. (2003).
Preliminary evaluation of eCognition object-based software
for cut block delineation and feature extraction. Can J Remote
Sens 29(4):441–52.
2) Fuentes, S., R. de Bei, J. Pech, and S. Tyerman (2012).
Computational water stress indices obtained from thermal
image analysis of grapevine canopies. Irrigation Science, vol.
30, no. 6, pp. 523–536.
3) Liu, B.; Chen, J.; Chen, J.; Zhang, W. (2018). Landcover
Change Detection Using Multiple Shape Parameters of
Spectral and NDVI Curves. Remote Sens., 10, 1251.
4) McFeeters, S. K. (1996). The use of the Normalized
Difference Water Index (NDWI) in the delineation of open
water features. International Journal of Remote Sensing, vol.
17, no. 7, pp. 1425–1432.
5) Rasouli, A.A., and Mammadov, R. (2020). Preliminary
Satellite Image Analysis Inside the ArcGIS Setting, Lambert
Academy Publishing, Germany.
6) Vermote, E., Justice, C., Claverie, M., & Franch, B. (2016).
Preliminary analysis of the performance of the Landsat 8/OLI
land surface reflectance product. Remote Sensing of
Environment, 185, 46-56.


Tutorial 6
Quick Look to the eCognition's OBIA Capabilities

eCognition remains a powerful environment for OBIA.

Opening Statement:

The current tutorial introduces the basic functionality of
eCognition 9.01 that you will encounter while processing
Landsat-8 imagery. To build the main skills inside the software
step by step, you will first load the raster data and create a new
project, subsetting a small part of the Landsat-8 imagery
focused on the Greater Caucasus Mountains in the north of
Azerbaijan. The second step is to segment your image to create
objects for the Nearest Neighbor classification, one of the basic
OBIA operations inside the eCognition software. At last, you
will learn how to calculate the classification accuracy and
export the final classified layers to the GIS setting.
Instructive Memo:
✓ Level: Intermediate
✓ Time: This tutorial should not take you more than 1.5 hours.
✓ Software: The eCognition Developer, Version 9.01.


✓ Data Sources: Landsat-8, LC08_167032_2020-02-29 subsetted image.
✓ Subject Scene: North Azerbaijan, part of the Caucasus Mountains.
Tutor Objectives:
By the end of this unit, you should:
Be familiar with the main functionalities of eCognition 9.01
software.
Change the color composition of the displayed image.
Create image objects with the commonly used multiresolution
segmentation algorithm.
Classify the Landsat 8 imagery raster layers.
Calculate the accuracy of classified maps.
Export the results to the ArcGIS setting.
Background Concepts:

Trimble eCognition software exists to improve, accelerate, and
automate the interpretation of geospatial data. In eCognition,
real solutions are available for geospatial data analysts, giving
full flexibility and power to solve even the most challenging
remote sensing projects. Rather than going into too much detail
too soon, you start with this tutorial, rather than the theoretical
chapters, to get a feel for eCognition's user interface and its
most basic features. eCognition Developer is a powerful
development environment for object-based image analysis, used
in the earth sciences to develop rule sets for automatic remote
sensing data analysis.
For this exercise, a small part of the Landsat-8 satellite image
from the north of Azerbaijan has been selected, with the highest
peak of "Bazarduzu" in the Greater Caucasus range at 4,466
meters above sea level. Landcover in this area mostly includes
snow zones, forest cover, pastures, agriculture, and lands
affected by winter ice and snow patches (Figure 1).

Figure 1: The location of the current tutorial Subject Scene

Mastering The Skills:

Since eCognition stands on a new approach to image analysis,
taking this short tutorial may sometimes make you feel as if you
have been "thrown into the deep end." That is just what is
intended. The purpose of the tutorial is to get a first impression
of how to work with the traditional eCognition version and its
complex set of algorithms to reach the OBIA aims.

Step 1: Setting Up the eCognition Software

1.1) Start the eCognition 9.01 software by clicking on its
program icon to see the Trimble eCognition start modes.


1.2) Select the Rule Set Mode option and click on the OK
option.
1.3) The eCognition Software's main empty set appears.


Step 2: Creating a new project

2.1) From the "File" menu, choose "New Project" or click

in the toolbar.
2.2) Navigate to your working directory, for example,
"E:\Satellite Images\S1-T6-North Azer\LC08_L1TP_167032_20210522".
You may have your own personal directory setting.
2.3) Select all *.Tif image files you need and click on the OK
button.
2.4) Inside the Create Project dialog box, you can edit or remove
any bands that you do not need, or insert another raster layer.
2.5) For the current tutorial, keep Landsat bands 2, 3, 4, 5, 6, 7,
and 8 and a subset of an ALOS PALSAR DEM raster file,
AP_05189_FBS_F0810_RT1, with 12.5-meter spatial
resolution. See Table 1 for the Landsat-8 band designations used
in the current tutorial.
Table 1: Landsat-8 band designations

Bands 2, 3, and 4: These bands are used to create a true-color band combination, a normal RGB picture of the visible light. The basic aim of this combination is to create a visual map of the area.

Bands 4, 3, and 2: This combination highlights agricultural farms in the image. Dark green indicates woods; lighter greens are healthy plantations.

Bands 5, 4, and 3: You can use this combination to create a false-color image: band 5 is displayed as red, band 4 as green, and band 3 as blue, so vegetation appears bright red.

Bands 6 and 7: These bands use different parts of the shortwave infrared and are helpful for monitoring rocks and soils. Because this part of the spectrum is almost fully absorbed by water, water sources stand out clearly. These bands are also employed for ecological and geological research; the geological band combinations allow analysts to specify areas of interest for future geological study.

Bands 7, 6, and 4: This combination indicates vegetation; dark blue or black highlights water sources. Towns and urban areas appear within white to cyan to purple ranges. If the image shows bright red, the sensor has captured a volcano, forest fire, or solar panel field reflecting or radiating across the whole infrared spectrum.

Band 8: The panchromatic (black and white) band is a single band that usually spans a couple of hundred nanometers of bandwidth. This bandwidth lets it hold a high signal-to-noise ratio, making the panchromatic data available at high spatial resolution. This band can also be used in image sharpening and fine segmentation processes.
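
To preview such composites outside eCognition, the short sketch below (again using the hypothetical landsat8_stack.tif from the earlier tutorials) renders a 5-4-3 false-color image with matplotlib; the percentile stretch plays a role loosely similar to eCognition's linear equalizing:

# Sketch: display a Landsat-8 5-4-3 false-color composite.
import numpy as np
import rasterio
import matplotlib.pyplot as plt

with rasterio.open("landsat8_stack.tif") as src:  # bands 2..7 in slots 1..6
    nir   = src.read(4).astype("float32")   # Landsat band 5
    red   = src.read(3).astype("float32")   # Landsat band 4
    green = src.read(2).astype("float32")   # Landsat band 3

def stretch(band):
    # Simple 2-98 percentile contrast stretch to the 0..1 display range.
    lo, hi = np.percentile(band, (2, 98))
    return np.clip((band - lo) / (hi - lo), 0, 1)

rgb = np.dstack([stretch(nir), stretch(red), stretch(green)])
plt.imshow(rgb)     # vegetation shows up bright red
plt.axis("off")
plt.show()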

2.6) Once you have the Landsat-8 bands, you can select one or
several bands to create a clearer picture of the landcover,
depending on the specific needs of your research. For example,
it is possible to use false-color images to enhance the visual
appearance of the data, substituting the true colors of the image
with the combinations listed in Table 1.
2.7) To speed up processing, first select only the bands you
need, as indicated in Table 1. You can also subset a small area
of interest from the image.
2.8) When you are happy with the image selection mode, click
on the OK option. The new image project is created with Blue,
Green, and Red bands.


Step 3: Creating Color Composition

3.1) Select "Image Layer Mixing from the "View" menu or click

in the toolbar to open the layer mixing dialog.


3.2) Select the "Linear option or Standard Deviation under
"Equalizing" to apply a histogram stretch. And choose "six-layer
mix" as "presets" and click "OK to notice the image band color
changes (Figure 2).

Figure 2: Image layer mixing as representing false-color

3.3) Choose "Edit Highlight Colors" from the "View" menu>


Display Mode to change the highlight color setting. You may
like to select red as the selected color. Click "Active View" to
activate these highlight color settings.
3.4) From the File menu, select Save project in a working drive-
by given a name, for example, North-Azer.dpr.

Step 4: Creating Image Objects

Now that you have created a project, you can make your first
object-oriented image analysis as the main feature of
eCognition. For this reason, the first step in eCognition is
always to extract image object primitives (segments), which will
become the building blocks for subsequent classifications. You
will now produce such image objects with multiresolution
segmentation that generates image objects at any chosen
resolution.
4.1) Right-click inside the Process Tree and click on Append
New; in the drop-down for Algorithm, select the multiresolution
segmentation and set other segmentation parameters as the
Figure 3. You may try different segmentations settings to find
the best-fit image objects.

Figure 3: Edit Process dialog box, with arbitrary segmentation
parameters


4.2) From the "Segmentation" menu, choose "Multiresolution


Segmentation and Domain with the pixel level.
4.3) Weight "B5-Near Infrared.tif" 2 in the field "and keep other
bands with weights equal to 1.
4.4) Insert 72 in the field "Scale Parameter. Weight "Shape"
with 0.3 and Compactness 0.7 in the "Composition of
homogeneity criterion section.
4.5) Later, you might perform at least two more segmentation
modes with varying scale, shape, and compactness parameters.
State what parameters you used for the final segmentation and
attach them to your report. You can ask yourself which layers to
be used for creating Objects?
4.6) Remember that the basis of creating image objects is the
input data. Depending on the data and the algorithm you use,
objects result in different shapes. The first thing you have to
evaluate is which layers contain the important information; in
our case, we doubled the weight of the NIR band for image
object creation. You also need to know which scale parameter
to set. The 'Scale parameter' is an abstract term: it is the
restricting parameter that stops the objects from becoming too
heterogeneous. There is no definite rule for the 'Scale
parameter'; you have to use trial and error to determine which
value produces objects useful for your further classification
steps.
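
eCognition's multiresolution segmentation algorithm itself is proprietary, but you can build an intuition for how a scale parameter controls object size with an open-source analogue. The sketch below, a rough stand-in rather than a reproduction of eCognition's algorithm, runs scikit-image's Felzenszwalb graph-based segmentation on the hypothetical band stack at three scale values:

# Sketch: scale-controlled segmentation with a rough open-source analogue
# (scikit-image Felzenszwalb); this is NOT eCognition's algorithm.
import numpy as np
import rasterio
from skimage.segmentation import felzenszwalb

with rasterio.open("landsat8_stack.tif") as src:     # hypothetical stack
    img = np.dstack([src.read(b).astype("float32") for b in (4, 3, 2)])
img = (img - img.min()) / (img.max() - img.min())    # normalize to 0..1

# A larger scale merges more aggressively, yielding fewer, larger objects.
for scale in (25, 72, 150):
    segments = felzenszwalb(img, scale=scale, sigma=0.5, min_size=20)
    print(f"scale={scale}: {segments.max() + 1} image objects")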
4.7) To start the segmentation process, click on the Execute
button.


4.8) When the segmentation process is finished, there are
different ways to display the image objects (Figure 4).

Figure 4: A part of a segmented image, displaying the image objects


4.9) By default, the image objects are transparent and
highlighted only on mouse-click. To view the image objects
colored in their mean value, click the corresponding toolbar icon
or change the view settings for image data from "Pixel" to
"Object mean."
4.10) To view the borders of the objects, there are two
possibilities in eCognition:
4.10.1) To show or hide the polygons, use the polygon toolbar
button. Since polygons use memory, they should only be created
if you use features based on polygons or skeletons in the later
classification or export vector files. You can use the related
toolbar icons as you may need.


4.10.2) Use the outline toolbar button to show or hide outlines
without creating polygons first. It is also possible to view the
outlines in the color of the classification of the corresponding
object: after classification is performed, use the same button to
show the outlines colored in the classification colors.
4.11) At this stage, you can right-click on the created
segmentation Level-1 inside the Process Tree and modify the
segmentation process.

Step 5: Obtaining information about image objects

The "Image Object Information" dialog provides detailed


information about the selected image objects and their features
and classification. Since you have not classified the image, there
is no classification information available yet.
5.1) Open the Image Object Information dialog by clicking its
toolbar icon if it is not yet open.
5.2) Click an arbitrary image object (Figure 5).


Figure 5: A segmented image and an Image Object Information box

5.3) The Image Object Information window gives you all the
necessary information about one object. When creating a class
hierarchy, this dialog helps you find features that separate one
class from another (Figure 6).

Figure 6: The Select displayed Features dialog box


5.4) Another tool that helps you with this task is the Feature
View. The Feature View allows you to display one feature for
all image objects. The image objects are rendered in gray values
corresponding to the feature value: the brighter an object is, the
higher its feature value for the selected feature.
5.5) From the "Tools" menu, choose "Feature View". Select the
feature "Object features > Layer values > Mean > B2-Blue.tif"
by double-clicking it. The objects will then be colored according
to their feature value for the selected feature. A low weight
value represents a high feature value; a high weight value, a low
feature value. You can also visualize features from every other
dialog where features are selected, such as the "Insert
Expression" dialog or the "Select displayed Features" dialog
(Figure 7). In these cases, you open a pop-up menu with a right-
click and select "Update range."


Figure 7: Image Object Information, based on Feature View

Step 6: Class Sampling Procedures

Up to now, you have extracted image objects and learned how to
call up the information contained in them. You will now use this
information to classify the previously segmented image objects.
Before starting the current, uncomplicated classification process,
you need to define the classes.
6.1) Right-click inside the Class Hierarchy box and select Insert
Class (Figure 8).


Figure 8: Class Description dialog box

6.2) Type Snow into the name box, change the color selection to
blue, then click OK.
6.3) Repeat these steps for Agriculture, Barren, Forest, Pasture,
Water, and Snow-Tracks (patches), a geomorphological pattern
of snow and firn accumulation that lies on the surface longer
than other seasonal snow covers. Make sure that you label all
classes and give them appropriate colors (Figure 9).

Figure 9: Class Hierarchy dialog box


6.4) On the toolbar, go to Classification > Samples > Select
Samples (Figure 10).

Figure 10: Selecting classification samples

6.5) Go to the Class Hierarchy box, click on the Snow class, and
make sure it is highlighted. This ensures that your selections go
to that class.
6.6) Zoom in to a snow area and double-click inside the
segments that overlay snow, or hold the Shift key and click once
in the segment. This turns the segment to the color of the
selected class. It is easier to figure out what is going on by
playing with the segmentation tools shown in the image below
(Figure 11).

Figure 11: The Segmentation tools


6.7) Select at least 30 samples for each class, and remember to
select the class in the class hierarchy before selecting the
samples for it.

Step 7: Checking Sampling Procedures

You can check your samples during and after the sampling
process using two important tools.
7.1) First, you can use the Sample Editor window, the principal
tool for inputting samples. From the Classification menu, select
Samples and then the Sample Editor option. For a selected class
(for example, Snow), it shows histograms of selected features of
the samples in the currently active map. You can display the
same values for other image objects (for instance, Snow-Tracks)
at a certain level, or at all levels, in the image object hierarchy
(Figure 12).

Figure 12: The Sample Editor window


7.2) You can use the Sample Editor window to compare the
attributes or histograms of image objects and samples of
different classes. It is helpful for getting an overview of the
feature distribution of image objects or samples of specific
classes. You can compare the features of an image object to the
total distribution of that feature over one or all image object
levels.
7.3) Also, once a class has at least one sample, you can assess
the quality of a new sample in the Sample Selection Information
window. It can help you decide if an image object contains new
information for a class or belongs to another class.
7.4) To open the Sample Selection Information window, choose
Classification > Samples > Sample Selection Information or
View > Sample Selection Information from the main menu
(Figure 13).

Figure 13: The Sample Selection Information window

7.5) Names of classes are displayed in the Class column. The
Membership column shows the membership value of the
Nearest Neighbor classifier for the selected image object.


7.6) The Minimum Distance column displays the distance in
feature space to the closest sample of the respective class. The
Mean Distance column indicates the average distance to all
samples of the corresponding class.
7.7) The Critical Samples column displays the number of
samples within a critical distance to the selected class in the
feature space. The critical sample membership value can be
changed by right-clicking inside the window. Select Modify
Critical Sample Membership Overlap from the context menu.
The default value is 0.7, which means all membership values
higher than 0.7 are critical.
7.8) The Number of Samples column indicates the number of
samples selected for the corresponding class. The following
highlight colors are used for a better visual overview:
7.8.1) Gray: Used for the selected class.
7.8.2) Red: Used if a selected sample is critically close to
samples of other classes in the feature space.
7.8.3) Green: Used for all other classes that are not in a critical
relation to the selected class.

Step 8: Classification Procedures

8.1) Once you have finished selecting and matching samples, go
back to Classification > Nearest Neighbor > Edit Standard NN
Feature Space.
8.2) Open 'Object features', double-click on 'Layer Values', and
click OK (Figure 14).


Figure 14: Edit Standard NN Feature Space dialog box


8.3) Go back to Nearest Neighbor and select Apply Standard
NN to Classes. Click the All -->> button, then click OK (Figure
15).

Figure 15: Apply Standard NN to the Classes dialog box
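
Conceptually, the standard nearest neighbor classifier assigns each unclassified object to the class of its closest sample object in feature space. A toy sketch of that idea with scikit-learn is shown below; the per-object feature values are invented for illustration and are not eCognition output (eCognition additionally computes fuzzy membership values, which this sketch does not reproduce):

# Sketch: nearest-neighbor classification of image objects by mean layer values.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy per-object features: [mean NIR, mean Red, mean Green]
samples = np.array([[0.52, 0.08, 0.10],   # a Forest sample object
                    [0.30, 0.25, 0.28],   # a Barren sample object
                    [0.05, 0.04, 0.06]])  # a Water sample object
labels = np.array(["Forest", "Barren", "Water"])

clf = KNeighborsClassifier(n_neighbors=1).fit(samples, labels)

unclassified = np.array([[0.48, 0.10, 0.12],
                         [0.06, 0.05, 0.05]])
print(clf.predict(unclassified))   # -> ['Forest' 'Water']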


8.4) Right-click in the Process Tree box and click on Append
New; select classification from the Algorithm drop-down menu
(Figure 16).

Figure 16: Edit Process, with its Classification algorithm


8.5) Then, inside the Edit Process dialog box, for Active
Classes, click in the box showing "none" to reveal the three
dots, and double-click the field until the Edit Classification
Filter appears.
8.6) Ensure that inside the Edit Classification Filter, every class
besides Unclassified has a check (Figure 17).


Figure 17: The "Edit Classification Filter" dialog box

8.7) In the Edit Classification Filter dialog box, click OK, and
then click the Execute option in the Edit Process dialog box.
After several minutes, the classified map appears with all
defined classes (Figure 18).


Figure 18: The final classified map

8.8) Before the classified results are exported, they need to be
reviewed to ensure that everything worked correctly; to do this,
experiment with the visualization options. View the following
image for details on what each button does. Be sure to view
your classification by clicking the Classification View button
(Figure 19).

Figure 19: The visualization options


Step 9: Accuracy Assessment Tools


Accuracy assessment methods can produce statistical outputs to
check the quality of the classification results. Tables from
statistical assessments can be saved as .txt files, while you can
export graphical results in raster format.

Figure 20: Accuracy Assessment dialog box

9.1) Choose Tools > Accuracy Assessment on the menu bar to
open the Accuracy Assessment dialog box (Figure 20).
9.2) A project can contain different classifications on different
image object levels. Specify the image object level of interest
using the image object level drop-down menu. In the Classes
window, all classes and their inheritance structures are
displayed.
9.3) To select classes for assessment, click the Select Classes
button and make a new selection in the Select Classes for
Statistics dialog box. By default, all available classes are
selected. You can deselect classes with a double-click in the
right frame.
9.4) In the Statistic type drop-down list, select one of the
following methods for accuracy assessment:
9.4.1) The Classification Stability dialog box displays a statistic
type used for accuracy assessment (Figure 21). The difference
between the best and the second-best class assignment is
calculated as a percentage. The statistical output displays basic
statistical operations (number of image objects, mean, standard
deviation, minimum value, and maximum value) performed on
the best-to-second values per class.

Figure 21: Output of the Classification Stability statistics


9.4.2) The Best Classification Result dialog box displays a
statistic type used for accuracy assessment (Figure 22). The
statistical output for the best classification result is evaluated per
class. Basic statistical operations (number of image objects,
mean, standard deviation, minimum value, and maximum value)
are performed on the best classification result of the image
objects assigned to each class.

Figure 22: Output of the best Classification results

9.4.3) The Error Matrix based on TTA Mask dialog box
displays a statistic type used for accuracy assessment. Test areas
are used as a reference to check classification quality by
comparing the classification with reference values (called
ground truth in geographic and satellite imaging) based on
pixels. For this option, we do not have so-called ground truth
samples (Figure 23).


Figure 23: Output of the Error Matrix based on TTA Mask statistics

9.4.4) The Error Matrix Based on Samples dialog box displays a
statistic type used for accuracy assessment. It is similar to the
Error Matrix Based on TTA Mask but considers samples (not
pixels) derived from manual sample inputs. The match between
the sample objects and the classification is expressed in terms of
parts of class samples (Figure 24).

Figure 24: Output of the Error Matrix based on Samples statistics
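
The error matrix is the standard confusion matrix of accuracy assessment. As an illustration of how such a table and the overall accuracy are derived from reference and predicted labels (the label lists below are invented, not eCognition output), a scikit-learn sketch looks like this:

# Sketch: error (confusion) matrix and overall accuracy from class labels.
from sklearn.metrics import confusion_matrix, accuracy_score

reference = ["Snow", "Snow", "Forest", "Water", "Forest", "Pasture"]
predicted = ["Snow", "Forest", "Forest", "Water", "Forest", "Pasture"]

classes = ["Snow", "Forest", "Water", "Pasture"]
cm = confusion_matrix(reference, predicted, labels=classes)
print(cm)   # rows = reference classes, columns = predicted classes
print("Overall accuracy:", accuracy_score(reference, predicted))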


9.5) To view the accuracy assessment results, click Show
Statistics. To export the statistical output, click Save Statistics.
You can enter a file name of your choice in the Save Filename
text field. You may save it in comma-separated ASCII .txt
format; the extension .txt is attached automatically.

Step 10: Exporting the Results

10.1) When you are happy with the results, you can export them
by clicking on Export > Export Results.
10.2) Leave the buttons on the left at their defaults; change the
Export File Name, for example to Bazarduzu.
10.3) Click on Select Classes and select all except Unclassified,
then click OK (Figure 25).

Figure 25: Export Results dialog box


10.4) Click on Select Features to add all the attributes. The first
feature you want to add to the attribute table is the area of each
class.
10.5) Follow this path to add the area: Object Features –>
Geometry –> Extent –> Area and then double click on the area
to add it to the space to the right (Figure 26).

Figure 26: Select Features for Export as Attributes dialog box

10.6) The class name is the second feature you want to add to
the attribute table. Follow this path to add the name: Class-
Related Features > Relations to Classification > Class name,
double-click on Create new Class Name, and in the new
window click OK. This will let you select the box (Figure 27).


Figure 27: Adding New Class Name

10.7) Click OK to close the Feature Select window, make sure
your window matches the image below, and click Export.
10.8) Open the newly created shapefile in ArcMap and evaluate
the classified map.
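
If you prefer to inspect the export programmatically rather than in ArcMap, a small GeoPandas sketch can summarize the exported attributes. The file name matches the hypothetical export above, and the attribute names ("Class_name", "Area") are assumptions to check against your own shapefile:

# Sketch: summarizing the exported classification shapefile with GeoPandas.
import geopandas as gpd

gdf = gpd.read_file("Bazarduzu.shp")   # hypothetical export from Step 10
print(gdf.columns)                     # verify the exported attribute names

# Total exported area per class (attribute names assumed).
print(gdf.groupby("Class_name")["Area"].sum())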

Sum Up:

eCognition is such powerful software that it is challenging to
summarize its advantages and benefits within one chapter. In the
current tutorial, we introduced some of its main concepts before
you start working through specific tasks, learning how to apply
the main approaches step-by-step in the following sections. In
this exercise, you learned how to:
❖ load Landsat 8 imagery into a project,
❖ perform image segmentation,
❖ create a class hierarchy and insert the standard nearest
neighbor as a classifier,
❖ declare sample objects as initial points for the nearest
neighbor classification and feature space optimization,
❖ perform a nearest neighbor classification: click and classify,
❖ get some ideas about the accuracy assessment processes, and
❖ export the results to the ArcGIS setting.


Informative Practices
Tips:
1) In eCognition Developer, segmentation is an operation that
creates new image objects or alters the morphology of existing
image objects according to specific criteria. It means a
segmentation can be a subdividing operation, a merging
operation, or a reshaping operation.
2) There are two basic segmentation principles: Cutting
something big into smaller pieces, which is a top-down
strategy, and merging small pieces to get something bigger,
which is a bottom-up strategy.
3) Inside the eCognition 9.01, four classification algorithms are
available: Optimal Box, Nearest Neighbor, Brightness
Threshold, and Clutter Removal.
Workouts:
1) Define the different color compositions to visualize image
layers for display in the map view.
2) Try two different segmentation actions (for instance,
Quadtree and Multiresolution modes) to create image objects
and compare the results.
3) Use the Classification, Brightness Threshold action to classify
objects based on brightness.

Quizzes:
1) According to the experience from current and previous
tutorials, what are the eCognition software advantages and
disadvantages in processing Landsat-8 imagery?
2) How would you evaluate the final segmented map accuracy
inside eCognition?


3) What does the Classification (Nearest Neighbor) action mean?


Allied References:
1) eCognition Reference Book (2014). Trimble eCognition
Reference Book (Munich, Germany: Trimble Germany
GmbH).
2) Gonzalez R. and Woods R. (1992). Digital Image Processing,
Addison Wesley, pp 414 - 428.
3) Jalan, S. (2011). Exploring the Potential of Object-Based
Image Analysis for Mapping Urban Land Cover. Journal of
the Indian Society of Remote Sensing, vol. 40, no., pp. 507–
518.
4) Schiewe, J. (2002). Segmentation of high-resolution remotely
sensed data – concepts, applications, and problems. In:
Symposium on Geospatial Theory, Processing, and
Applications, Ottawa.
5) The eCognition User Guide (2014). Trimble eCognition
Developer User Guide (Munich, Germany: Trimble Germany
GmbH).
6) United States Geological Survey - USGS (2015). Landsat
Earth observation satellites, Earth Resources Observation and
Science (EROS) Center, Earth Resources Observation, and
Science.


Diving into the OBIA Advanced Skills

What would be the future of advanced image processing?

Basic Concepts:

Continuing from Section One, Section Two introduces
eCognition's more advanced capacity to import satellite image
bands, manage a workspace, create projects, and target Sentinel-2
imagery, with some recurrences. Any interested learner who has
worked through the Section One tutorials will dive into much
more advanced OBIA skills: segmenting and classifying the
satellite images, developing rulesets, and exporting the results to
the ArcGIS setting.
At a glance, beginners learn how to manage workspaces and
create a project in tutorial one. Then, in tutorial two, with access
to Sentinel-2 images, trainees will practice creating complex
landcover indices. Next, in the third workout, better-prepared
followers will produce the main components of the OBIA
framework, image objects, by applying several segmentation
methods. In the fourth tutorial, every learner will deal with the
main subject of satellite image processing, namely classification
procedures. After that, during tutorial five, intermediate students
will learn how to produce and implement rulesets with combined
thresholding techniques to classify water bodies. In tutorial six's
last step, you will try a change detection process based on an
"Unsupervised Classification (USC)" algorithm. This section
will also inform learners of accuracy calculation methods and
how to export more accurate results aimed at the OBIA targets.


Tutorial 1

Headlong Into The eCognition 9.5 with Sentinel-2

advanced analysis software available for geospatial applications

Opening Statement:
The main purpose of the current tutorial is to teach students how
to work professionally inside the trial version of eCognition 9.5.
Accordingly, in the first step you will download and install
eCognition Developer version 9.5 on your computer. In the
second step, the tutorial teaches you simple steps to find,
download, and manage free Sentinel-2 data by accessing the
Copernicus Open Access Hub. In the third step, you will learn
to import the downloaded imagery into the eCognition software.
Creating a simple project helps intermediate learners understand
the main aims of OBIA concepts, helping them design, improve,
and accelerate their practice with real-world high-resolution
satellite imagery during the next tutorials.


Instructive Memo:
✓ Level: Intermediate
✓ Time: This lesson should not take you more than 1 hour.
✓ Resources: The eCognition Developer software, Version 9.5
✓ Data Sources: Sentinel-2 Imagery, L1C_T39TVE_A030362_20210415.jp2
✓ Scene Site: The Baku Peninsula
Tutor Objectives:
By the end of this unit, you should:
Learn how to download the eCognition Developer 9.5.
Start to install the eCognition Version 9.5.
Be trained to access the Sentinel-2 imagery.
Bring into the high-resolution imagery inside eCognition.
Background Concepts:
The eCognition trial version 9.5 enables researchers to examine
almost all high-resolution satellite imagery at pixel and object
levels, not in isolation but in contextual relations. Although
students do not have access to the official version of the new
eCognition software, they will acquire the necessary
constructive skills over time by doing simple exercises. Inside
the Trimble eCognition Developer Trial 9.5, you can experience
many of the image processing and classification functions that
experts and data scientists use in the main steps of geospatial
data analysis. Intermediate learners can design feature extraction
solutions to transform geo-data into geo-information, and the
possibilities expand with more feature extraction experience.
Besides, researchers can pioneer OBIA techniques and continue
to push the envelope with Sentinel-2's newly provided imagery
and integrated analyses.


As soon as you can bring multispectral high-resolution Sentinel-2
(A & B) imagery into the eCognition setting, you will be able to
extract more information from the landcover/landuse of your
project area. Sentinel-2 carries an innovative wide-swath, high-
resolution multispectral imager with 13 spectral bands for a new
perspective of our land and vegetation. The combination of high
resolution, novel spectral capabilities, a swath width of 290 km,
and frequent revisit times provides unprecedented views of
Earth. The Sentinel-2 (A & B) spectral bands are given in Table 1.
Table 1: Sentinel-2 spectral bands

Band   Description                         Central Wavelength (μm)   Resolution (m)
B1     Ultra Blue (Coastal and Aerosol)    0.443                     60
B2     Blue                                0.490                     10
B3     Green                               0.560                     10
B4     Red                                 0.665                     10
B5     Visible and Near Infrared (VNIR)    0.705                     20
B6     Visible and Near Infrared (VNIR)    0.740                     20
B7     Visible and Near Infrared (VNIR)    0.783                     20
B8     Visible and Near Infrared (VNIR)    0.842                     10
B8a    Visible and Near Infrared (VNIR)    0.865                     20
B9     Short Wave Infrared (SWIR)          0.940                     60
B10    Short Wave Infrared (SWIR)          1.375                     60
B11    Short Wave Infrared (SWIR)          1.610                     20
B12    Short Wave Infrared (SWIR)          2.190                     20

Figure 1 also shows the Sentinel-2 bands' spectral
characteristics graphically.


Figure 1: Sentinel-2 bands and spectral characteristics

Mastering The Skills:


As noted above, eCognition lets you examine high-resolution
satellite imagery at both pixel and object levels, in contextual
relations rather than in isolation. The simple exercises below,
based on Sentinel-2 images, will build the necessary
constructive skills step by step, even without access to the
official version of the software.
Step 1: Accessing the eCognition Software
1.1) Note that customers with a valid maintenance contract
are entitled to the latest version of eCognition. The eCognition
Suite 9.5 includes a variety of new and improved productivity
tools. To download the current trial version of eCognition,
contact [email protected].
1.2) Fill out the form to request the eCognition Developer
Trial software. In the comments section, you are asked to offer
some information about the application you are interested in and
any other questions you have.


1.3) Trial software access is not limited to a specific period,
but export functions, saving projects, and the workspace
environment are restricted. Rulesets saved in trial software
cannot be opened in a fully licensed version of eCognition.
eCognition trial versions are available exclusively for 64-bit
versions of Windows.
1.4) When you download the eCognition trial version, keep
it in a folder inside one of your computer partitions.
1.5) Right-click on the eCognition Developer Trial zip folder
and select the Extract Here option (Figure 2).

Figure 2: Extracting eCognition Developer Trial zip files

1.6) When the eCognition Developer Trial 9.5 Setup dialog
box appears, click the Next option (Figure 3).

Figure 3: eCognition Developer Trial 9.5 Setup dialog box


1.7) In the License Agreement dialog box, select the "I accept
the terms of the License Agreement" option and click the Next
tab (Figure 4).

Figure 4: The License Agreement dialog box

1.8) When the Choose Components dialog appears, select
both the eCognition Developer Trial and NVIDIA GPU-
accelerated TensorFlow Library options, then click the Next tab.
Note that you need more than about 1.7 GB of free space on
your selected partition (Figure 5).


Figure 5: The Choose Components installation options dialog box

1.9) When Choose Start Menu Folder appears, select
Trimble eCognition Developer Trial 9.5 and the Start Menu
folder where you want the software shortcuts to be set (Figure 6).

Figure 6: The Choose Start Menu Folder dialog box

1.10) Click on the Next tab to open the Choose Install
Location dialog box. For example, you may prefer to select the
E:\ partition to install the eCognition Developer Trial 9.5
version (Figure 7).

Figure 7: Selecting the Destination Folder

1.11) The installation dialog box appears after selecting the
Next tab. Click on the Install tab to start the installation process
(Figure 8).

Figure 8: The installation process dialog box


1.12) The eCognition Developer Trial software installation
process starts. Wait while the software is installed (Figure 9).

Figure 9: The Installing dialog box


1.13) When the installation process is completed successfully,
click on the Next Tab (Figure 10).

Figure 10: Announcing the Installation completion dialog box


1.14) In the last step, inside the Completing the eCognition
Developer Trial 9.5 Setup Wizard window, click on the Finish
tab to end the installation process (Figure 11).

Figure 11: The Completing the eCognition Developer Trial 9.5 Setup
Wizard window

1.15) You can now start the eCognition Developer Trial version
and set up the software based on your preferences (Figure 12).

Figure 12: The eCognition Developer Trial version start state


Step 2: Accessing the Copernicus Hub


Sentinel-2 data products are provided free of charge to
all data users, including the students, and researchers under the
terms and conditions prescribed by the European Commission's
Copernicus Programme.
2.1) Go to https://scihub.copernicus.eu/dhus/#/home and
create a User Account for yourself.
2.2) In the top-right of the website map, click the Sign Up
option. Insert valid entries for your name, email and location.
Click register, and validate your email.
2.3) With a few clicks of the mouse, you've gained access to
ESA's Sentinel site.
2.4) Click on the login icon (1) to log in to the site. Enter your
username and password to access the main Copernicus Open
Access Hub.

2.5) Press the Insert Search Criteria icon (2) to open the search
panel. You can now fill in the options to define the images you
need inside the Advanced Search box. The most important
setting is the Sentinel mission type; in our case, tick the
Sentinel-2 mission option.


2.6) There are further options that can help users in the selection
procedure, such as satellite platform, product type, cloud cover
(%), sensing period, and so on (Figure 13).


Figure 13: Copernicus Open Access Hub


2.7) You can also use two other options, the map layer selector
(3) and Switch to Area Mode (4), for better selection of the
Sentinel images (Figure 14).


Figure 14: Insert Search Criteria dialog

2.8) To select the Area of Interest (AOI), zoom in to your study
area; in our example, the Baku region of Azerbaijan.
2.9) Click Polygon in the bottom-left and draw a polygon
around your area of interest (Figure 15).


Figure 15: Copernicus Open Access Hub dialog

2.10) Using the Search Criteria text box in the top-left, click on
the menu and choose the data features you need, for instance,
the date and Sentinel mission (Figure 16).

Figure 16: Search criteria dialog box


2.11) Click on the search button to show the results. All
available images will be displayed (Figure 17).

Figure 17: Result of selected Sentinel 2A images
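
The same search can also be scripted against the Copernicus Open Access Hub with the third-party sentinelsat package. The sketch below is a minimal example; the credentials and the GeoJSON footprint of the Baku area are placeholders you would replace with your own:

# Sketch: querying the Copernicus Open Access Hub with sentinelsat
# (pip install sentinelsat); credentials and AOI file are placeholders.
from sentinelsat import SentinelAPI, read_geojson, geojson_to_wkt

api = SentinelAPI("your_user", "your_password",
                  "https://scihub.copernicus.eu/dhus")

footprint = geojson_to_wkt(read_geojson("baku_aoi.geojson"))  # hypothetical AOI
products = api.query(footprint,
                     date=("20210401", "20210430"),
                     platformname="Sentinel-2",
                     processinglevel="Level-1C",
                     cloudcoverpercentage=(0, 30))

print(len(products), "scenes found")
# api.download_all(products)   # uncomment to download everything found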

2.12) Click on the details icon to view the product details (Figure 18).

Figure 18: Footprint and Quicklook summary


2.13) Select the product you want to download. When the
downloading process is finished, you will have a zip folder
containing the imagery in JP2 format. You now need to unzip
the contents of this folder into a particular folder.
2.14) The Sentinel-2 satellites each carry a single multispectral
instrument (MSI) with 13 spectral channels in the visible/near-
infrared (VNIR) and shortwave infrared (SWIR) spectral
ranges. Keep in mind that Sentinel-2A Level-1C data already
holds values in TOA reflectance. Therefore, you need to convert
its bands into GeoTIFF in the SNAP software; alternatively, you
can convert the Sentinel image format into GeoTIFF inside the
ArcGIS setting.
2.14.1) Download the SNAP software from the Sentinel Data
Hub: http://step.esa.int/main/download/snap-download/. It is
free, open-source software with a common architecture for the
ESA Toolboxes, ideal for exploiting Earth Observation data.
2.14.2) The Sentinel Data Hub offers three different installers
for your convenience; choose the one that suits your needs.
During the installation process, each toolbox can be excluded
from the installation. Note that SNAP and the individual
Sentinel Toolboxes also support numerous sensors other than
Sentinel. Inside the SNAP software, try File > Open Product.


2.14.3) After importing each band, click File > Export >
GeoTIFF. Now all the bands in GeoTIFF format are ready to
take into the eCognition software.
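
If you would rather script the conversion than click through SNAP band by band, the sketch below rewrites one Sentinel-2 JP2 band as GeoTIFF with rasterio. It assumes your GDAL build includes a JPEG2000 driver, and the file names are hypothetical:

# Sketch: converting one Sentinel-2 JP2 band to GeoTIFF with rasterio.
# Requires GDAL with JPEG2000 support; file names are hypothetical.
import rasterio

with rasterio.open("T39TVE_20210415_B04.jp2") as src:
    data = src.read()                    # all raster bands (here just one)
    profile = src.profile
    profile.update(driver="GTiff")       # switch the output format

with rasterio.open("T39TVE_B04.tif", "w", **profile) as dst:
    dst.write(data)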

Step 3: Sentinel-2 imagery inside the eCognition


3.1) When you click on the eCognition Developer Trial
version shortcut *.exe, you will soon notice the Developer Trial
Window (Figure 19).

Figure 19: The eCognition Developer Trial version start-up

3.2) From the menu bar, click on File and select the New Project
option. Soon the Create Project window opens, accompanied by
the Import Image Layers dialog box (Figure 20).


Figure 20: The Create Project window, accompanied by the Import
Image Layers dialog box

3.3) Select the Sentinel-2 bands you want and click OK. Inside
the Create Project window, set up the image layer names and
other options, and click OK (Figure 21).

Figure 21: The Create Project window, setting up layer names


3.4) From the Subset option, select a portion of the image using
your mouse and click OK (Figure 22).

Figure 22: The Subset option, selecting a portion of the image

3.5) By setting the View Settings options as shown in Figure 23,
you can create your first project inside eCognition 9.5.


Figure 23: An eCognition 9.5 project, adjusted to the Baku Peninsula,
Azerbaijan

3.6) At this point, your first project in the trial software
environment is ready. In the next chapter, you will follow your
project in more detail.
Sum Up:
The eCognition Suite 9.5 is advanced analysis software
available for many OBIA applications. It is designed to improve,
accelerate, and automate the interpretation of various geospatial
data, and it enables users to design feature extraction or change
detection solutions to transform geospatial data into geo-
information. These capabilities help us identify changes over
time or features on the earth's surface across very large data sets.


As highlighted above, Sentinel-2 products are provided free of charge to all data users, including students and researchers, under the terms and conditions prescribed by the European Commission's Copernicus Programme. You can process the satellite images to determine various spectral indices, such as vegetation and water-content indexes. This is particularly important for applications related to the country's landcover/landuse mapping, such as vegetation stands, river and lake bodies, and mud-volcano eruptions, and particularly for detecting Caspian Sea coastal changes.
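As a small illustration of such index calculations, the sketch below computes a vegetation index (NDVI) and a water index (NDWI) from exported GeoTIFF bands with numpy and rasterio. The band file names are placeholders, and the division by 10,000 assumes the standard Level-1C reflectance scaling:

    import numpy as np
    import rasterio

    # Band file names are placeholders for your exported GeoTIFFs.
    with rasterio.open("B04.tif") as r, rasterio.open("B08.tif") as n, \
            rasterio.open("B03.tif") as g:
        red = r.read(1).astype("float32") / 10000.0    # L1C stores TOA reflectance x 10000
        nir = n.read(1).astype("float32") / 10000.0
        green = g.read(1).astype("float32") / 10000.0

    ndvi = (nir - red) / (nir + red + 1e-6)        # vegetation index
    ndwi = (green - nir) / (green + nir + 1e-6)    # McFeeters water index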
Informative Practices
Tips:
1) eCognition 9.5 empowers customers with highly sophisticated
pattern recognition and correlation tools that automate the
classification of objects of interest for faster and more
accurate results.
2) eCognition 9.5 provides new algorithms to directly leverage state-of-the-art machine learning technology.
3) You can process Sentinel-2 images to support land monitoring studies, vegetation monitoring, soil and water cover, and observation of inland waterways and coastal areas.
Workouts:
1) Install the eCognition 9.5 Trial version on your computer, according to the current tutorial, and make sure you fully understand what the eCognition Developer Trial provides and how it is limited.
2) Open the eCognition software, and from the Help menu,
select the System Info option. It gives you a lot of basic
information about your installed platform.
3) Download a set of Sentinel imagery for the region where you live. Do you have any idea about counterpart multi-spectral satellite images?


Quizzes:

1) What does the eCognition Developer Trial software offer


freely?
2) What are its limitations, and what are the main windows of
the eCognition Developer when it starts?
3) How many Sentinel satellites are there in space?
Allied References:
1) Trimble eCognition Suite (2019) eCognition Developer 9.5
Reference Book, Munich, Germany. All rights reserved.
Trimble Documentation, Munich, Germany.
2) Trimble eCognition Suite (2019) eCognition Developer 9.5
System Requirements for Windows operating system.
Munich, Germany. All rights reserved. Trimble
Documentation, Munich, Germany.
3) eCognition Available online (2019):
http://www.ecognition.com/suite/ecognition-developer
(accessed on 3 April 2019).
4) Yan, L., D.P. Roy, H. Zhang, J. Li and H. Huang (2016) An
Automated Approach for Sub-Pixel Registration of Landsat-8
Operational Land Imager (OLI) and Sentinel-2 Multi-Spectral
Instrument (MSI) imagery. Remote Sens. 8, 520.
5) Radiometric Resolutions Sentinel-2 MSI (2020) User Guides,
Sentinel Online. Sentinel.ESA.int. Retrieved 5 March 2020.
6) Sentinel-2 MSI: Overview (2015) European Space Agency,
17 June 2015.


Tutorial 2

Taking a Plunge Inside The eCognition 9.5

establishing your home workspace is the first milestone

Opening Statement:
In the first step, this tutorial teaches you to be familiar with the
eCognition 9.5 structure and functionality. In the second step,
the tutorial will teach you how to set up and configure your
workspace and projects within eCognition, based on Sentinel-2
datasets. Setting up a workspace and project is the first step in
eCognition, in which you configure your folder structure and
import your data. A project is the most basic format in
eCognition architect that contains one or more sets of satellite
imagery.
Instructive Memo:
✓ Level: Intermediate,
✓ Time: This tutorial should not take you more than 1.5 hours,
✓ Software: eCognition Developer, Version 9.5,
✓ Data Sources: Sentinel-2 imagery T39TUE_20200311.tif,
✓ Subject Scene: Baku region, Azerbaijan.


Tutor Objectives:
By the end of this unit, you should:
Learn how to download Sentinel-2 imagery.
Be familiar with the workspace and projects.
Create a unique workspace and a project.
Create multiple projects inside a workspace.
Background Concepts:
Using the new eCognition software, you can import various
geospatial data, fusing them into a rich stack of geo-data for the
analysis. You may prefer a step-wise, logical arrangement to create a computer-based representation of an expert’s geospatial interpretation process, the so-called OBIA. eCognition
then combines the analysis logic with scalable computing power
to identify changes over time or features on the earth’s surface
across very large sets of data.
The eCognition technology enables researchers to examine
almost all high-resolution satellite imagery, such as Sentinel-2
(A & B), in pixel and object levels, not in isolation but in
contextual relations. Inside the eCognition software, you can
build up a picture iteratively, recognizing groups of pixels as
objects. Just like the human mind, it uses color, shape, texture,
shape, and size of objects, as well as their context and
relationships, to draw the same conclusions that an experienced
analyst would draw. Although you do not have access to the
official version of the new eCognition software, you will acquire
the necessary constructive skills over time by doing simple
exercises.


Mastering The Skills:


The main aim of this tutorial is to introduce the trial version of
eCognition 9.5 that was designed to improve, accelerate and
automate the processing and interpretation of geospatial data.
You can discover the full power of the eCognition's
development environment for data analysis. Inside eCognition
9.5, it is possible to develop rule sets and workflows for
advanced classification, feature extraction, and batch processing.
Step 1: Being Familiar with the eCognition Structure
1.1) When you click on the eCognition Developer Trial version
shortcut *.exe, you will soon notice the Developer Trial
Window (Figure 1).

Figure 1: The eCognition Developer version Default View main parts


1.2) The Developer version has the following parts, each with its own function:
1) Source View: This dialog provides users a simple data
management area to modify input layer alias, display
orders, and access information on file details. To open
the dialog, choose View > Source View.
2) Item: All metadata items are listed in the feature tree, if present. You can define a new metadata item by clicking on Create a new Metadata item.
3) Main View: Inside the Main View, you could display
raster data, vector layers, and other derivative map
products.
4) Process Tree: eCognition Developer uses a cognition
language to create ruleware. These functions are
created by writing rule sets in the Process Tree
window.
5) View Settings: In this dialog, you can add image,
vector, and point cloud layers via drag and drop and
edit them according to view settings. Toggling detailed layer properties switches between grayscale and RGB mixing; for point cloud data, you can select a 3D subset to open the 3D viewer. The upper pane allows
individual layer settings, the lower pane global view
settings for the respective data type. If you click on the
View Settings Tab, you will notice the Process Tree,


View Settings (image layers), and the Global Settings


tabs.
6) Global Settings: This option shows the image layers' background, raster dataset mode, equalization mode, and saturation percentage.
7) Image Object Information Tab: This window provides
information about the characteristics of image objects.
8) Class Hierarchy Tab: Image objects can be assigned to
classes by the user, which is displayed in the Class
Hierarchy window. The classes can be grouped in a
hierarchical structure, allowing child classes to inherit
attributes from parent classes.
9) Feature View Tab: In eCognition software, a feature
represents information such as measurements, attached
data, or values.
10) Default Toolbar Buttons and Dialogs: The eCognition
Developer has eleven Toolbar Tabs, each having
several illustrated buttons. These are shown in Figure 2
and associated descriptions in Table 1 in brief.

Figure 2: The eCognition 9.5 main Toolbar Tabs


Table 1: The eCognition 9.5 toolbars and buttons descriptions
1) File Toolbar: This group of buttons allows you to create a new project, and to open and save projects.
2) View Settings Toolbar: These buttons, numbered from 1 (Load and Manage Data), 2 (Configure Analysis), and 3 (Review Results) to 4 (Develop Rule Sets), allow you to switch between the four window layouts. To organize and modify image analysis algorithms, the Develop Rule Sets view (4) is most commonly used.
3) View Settings: This button allows you to open the View Settings dialog, add layers via drag and drop, and edit image, vector, and point cloud view settings.
4) Image View Options: This group of buttons allows you to select image view options. You can choose between viewing image layers, classification, samples, and any features you wish to visualize.
5) Displaying Outlines of Pixels and Image Objects: This group is concerned with displaying outlines and borders of image objects, and views of pixels and polygons based on image objects: 1) toggle between pixel view and object mean view; 2) show or hide outlines of image objects; 3) switch between transparent and non-transparent outlined objects; and 4) toggle between showing or hiding polygons.
6) Zoom Functions Toolbar: This region of the toolbar offers natural selection and the ability to pan through an image, along with several zoom options. You can enter the zoom level manually to any user-defined value.
7) View Navigate Toolbar: The View Navigate folder allows you to delete levels, select maps, and navigate the object hierarchy.
8) Tools Toolbar: The Tools toolbar allows access to the following dialog boxes and options: Workspace management, Image object information, and the Image Object Table. Further tools allow Undo, Redo, Save Current Project State, and Restore Saved Project State.
9) Legend and Manual Editing Toolbar: These tools allow access to the Class hierarchy, Process tree, Feature view, Manage Customized Features, and the Manual Editing Toolbar.

1.3) eCognition offers the possibility to add all kinds of data via
drag and drop to the View Settings dialog. You can add image
layers, vector, and point cloud layers to a new or existing
project. You may wish to select the layers in the Windows File
Explorer and drag and drop them to the View Settings dialog to
import them. The upper pane allows individual layer settings,
the lower pane global view settings for the respective data type.
1.4) Alternatively, you can select File > Add data layer or View
> Source View > Add data layer button.
1.5) Furthermore, eCognition project (.dpr) and workspace (.dpj)
files can be added to eCognition by drag and drop (Figure 3).


Figure 3: View Settings showing the Sentinel 2 A bands, excluding


layer (band) 1

1.6) Inside the View Settings dialog, this button toggles between an expanded and a collapsed view of layers, visualizing or leaving out details of loaded layers.

1.7) These buttons visualize different image layers in grayscale or RGB. If more than one image layer is loaded, they also allow shifting between layers and their layer mixing (see details below).

1.8) The 3D subset selection button enables the selection of a subset in the view to open a 3D view window (active when at least one point cloud layer is available).
Step 2: Recognizing the Image Layer Tools

2.1) Single Layer Grayscale: Scenes are automatically assigned RGB (red, green, and blue) colors by default when image data with three or more image layers are loaded.


2.2) Use the Single Layer Grayscale button in the View Settings
dialog to display image layers separately in grayscale.
2.3) To change from RGB to grayscale mode, press the button,
and the first image layer is shown in grayscale mode.
2.4) Step through all loaded image layers by using the Show
Next/Previous Image Layer button or open multiple views for
comparison (Figure 4).

Figure 4: Single layer grayscale view with layer 2 (left) and layer 12
(right)
2.5) Three Layers RGB button displays the first three layers
of your scene in RGB. By default, layer one is assigned to the
red channel, layer two to green, and layer three to blue,
indicated by a small circle in the respective field. You can
change view settings by clicking on a circle (removes circle) or
an empty field (adds circle).

2.6) Show Previous Image Layer: In Grayscale mode, this button displays the previous image layer. The number and name of the image layer are indicated in the middle of the status bar.


In Three Layer Mix, the color composition shifts one image layer up for each channel. For example, if layers two, three, and four are displayed, the Show Previous Image Layer button changes the display to layers one, two, and three. If the first image layer is reached, the display wraps around to the last image layer.

2.7) Show Next Image Layer: In Grayscale mode, this button displays the next image layer down. In Three Layer Mix, the color composition shifts one image layer down for each channel. For example, if layers two, three, and four are displayed, the Show Next Image Layer button changes the display to layers three, four, and five. If the last image layer is reached, the display wraps around to image layer one (Figure 5).

Figure 5: Single Layer Grayscale (left) and Three-Layer Mix (right)


for Sentinel 2A imagery


2.8) Image Layer Properties and Settings


2.8.1) R-G-B: To define the display color of each image layer,
you can add a circle for the red, green, and blue channels by a
single click for each layer separately. They are then displayed as
additive colors in the View. Any layer without a circle in at least
one column is not shown. When creating a new project, the first
three image layers are displayed in red, green, and blue. You can
change the order of image layers by dragging and dropping
within the Image Layer(s) section (Figure 6).

Figure 6: View Settings dialog - Image Layer Properties - Show All


2.8.2) A right-click on image layers opens the context menu
where you can select to Show All or Hide All image layers at
once or Delete and Rename single image layers. You can also
rename an image layer by double click in the view settings
dialog. One layer can be displayed in more than one color, and
more than one layer can be displayed in the same color. Change these settings to your preference by clicking in the respective R, G, or B cell.
2.8.3) Range: For the equalization settings (lower pane of the dialog) manual, false color (hot metal), and false color (rainbow), you can define the displayed range individually for each image layer, based on the Image Layer Equalization dialog. To open the dialog, click on the small dots at the end of the line of the image layer you want to adjust (for details, see Equalization and the Image Layer Equalization dialog).
2.8.4) Changing the view settings only changes the visual display of the image, not the underlying image data; it has no impact on the process of image analysis. You can define the
color composition to visualize image layers in the View.
Additionally, you can choose from different equalizing options.
It enables you to visualize the image better and recognize the
visual structures without changing them. You can also choose to
hide layers, which can be very helpful when investigating image
data and results.

Step 3: Lower Pane - Image Layers Global Settings

3.1) By changing the Global Settings in the lower pane, all settings in the upper pane are overwritten:
a) Background: Click on the square to change the
background color of the image view.
b) Raster mode: Select between Image Data and samples
(rasterized). The View visualizes a rasterized version of
these layers if vector layers or samples in a TTA mask are
loaded. (The rasterized layer is used in segmentation
algorithms).
c) Equalization: There are several modes for image equalization stretches. Compare the available methods and choose the one that gives you the best visualization of the objects of interest. Equalization settings are stored in the workspace and applied to all projects within the workspace, or stored in a separate project. In the Options dialog box, you can define a default equalization setting (Figure 7).

Figure 7: Image Layers Global Setting, Image Layer Equalization


dialog
3.2) Notice Table 2 for more details on the functionalities of the
Image Layers Global Settings options.
Table 2: The main functionalities of the Image Layers Global Settings options
None: No equalization allows you to see the scene as it is, which can be helpful at the beginning of ruleset development when looking for an approach. The image layer is displayed without further modification.
Linear: Linear equalization with a selectable saturation factor [range 0-50]. Default value 1.00%. Displays images with higher contrast than without image equalization.
Standard deviation: This equalization is the default mode. It renders a display similar to linear equalization. Use a parameter around 1.0 to exclude dark and bright outliers [range 0-10].
Gamma correction: Improves the contrast of dark or bright areas by spreading the corresponding gray values [range 0-5]. Default factor 0.5.
Histogram: Histogram equalization increases the contrast of the image layer but can lead to over-stretching. Because intensity values are better distributed on the histogram, it can be helpful in cases where you want to display areas with more contrast.
Manual: Image Layer Equalization enables you to control equalization in detail for each layer individually.
False-color (hot metal): Recommended for single image layers with large intensity ranges, displayed in a color range from black over red to white. Select the Range field in the upper pane of the dialog and insert border values to be displayed, or open the Image Layer Equalization dialog with a single click on the small dots at the end of the column (for details, see the Image Layer Equalization dialog).
False-color (rainbow): Recommended for single image layers to display a visualization in rainbow colors. Here, the regular color range is converted to a color range between blue for darker pixel intensity values and red for brighter pixel intensity values. Select the Range field in the upper pane of the dialog and insert border values to be displayed, or open the Image Layer Equalization dialog with a single click on the small dots at the end of the column.
Ignore Range: Activating the Ignore range checkbox, you can enter a range of values that will be ignored when computing the image statistics for image equalization (only active for the equalization modes linear, standard deviation, gamma correction, and histogram). This option is useful when displaying, e.g., elevation data, excluding no-data values and background areas from visualization. It affects image visualization only. No-data values can also be assigned when creating a project; see the chapter Assigning No-Data Values in Projects and Workspaces.
Elevation (3D view): With point cloud data loaded and the 3D viewer active, image layers can be activated in the View Settings dialog (upper pane) and are then visualized as a 2D image in the 3D viewer. The default elevation for the image layer is the minimum elevation of all currently displayed point cloud points in the 3D viewer; this corresponds to the lower-pane parameter value -auto-. Enter a new height value (in the project unit) in this field to change the elevation.
Auto update: This checkbox updates the view with each change of the view settings on the fly. Clear this checkbox to show the new settings only after clicking Apply. The Discard and Apply buttons become active with the Auto-update checkbox cleared.
Apply to all views: With this checkbox activated, you can apply selected settings to all views at once.
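To make the behavior of these stretches concrete, here is a minimal numpy sketch of the linear, standard-deviation, and gamma modes described in Table 2. It mimics the general idea only and is not eCognition's internal implementation:

    import numpy as np

    def linear_stretch(band, saturation=1.0):
        # Linear equalization, clipping `saturation` percent at each tail.
        lo, hi = np.percentile(band, (saturation, 100 - saturation))
        return np.clip((band - lo) / (hi - lo + 1e-9), 0, 1)

    def stddev_stretch(band, k=1.0):
        # Standard-deviation equalization: stretch mean +/- k sigma to [0, 1].
        m, s = band.mean(), band.std()
        return np.clip((band - (m - k * s)) / (2 * k * s + 1e-9), 0, 1)

    def gamma_stretch(band, gamma=0.5):
        # Gamma correction: gamma < 1 brightens dark areas, gamma > 1 darkens.
        norm = (band - band.min()) / (band.ptp() + 1e-9)
        return norm ** gamma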


3.3) You can easily change image layers' order and aliases in the Source View dialog (View > Source View). To rename a layer alias, right-click and select Rename, press F2 to enter renaming mode, or double-click on the layer alias.
Step 4: Image Layer Equalization Dialog

4.1) To open the Image Layer Equalization dialog, single click


on the small dots at the end of the line of the image layer you
want to adjust. In this dialog, you can define the input range by
either setting minimum and maximum values or dragging the
borders with the mouse (yellow lines). You can set the
equalization range for each layer individually (Figure 8).

Figure 8: Image Layer Equalization Dialog box


4.2) It is also possible to adjust these parameters for all image layers using the mouse. With the right-hand mouse button held down in the image view, moving the mouse in a vertical direction adjusts the center value; moving it in a horizontal direction adjusts the width of the interval.


4.3) This function must be enabled in: Tools > Options >
Display > Use right-mouse button for adjusting window leveling
> Yes.
4.4) Image equalization is performed after mixing the image layers into a raw RGB (red, green, blue) image. If, as is usual, one image layer is assigned to each color, the effect is the same as applying equalization to the individual raw-layer gray-value images. Image equalization leads to higher-quality results if more than one image layer is assigned to one screen color (red, green, or blue), since it is performed after all image layers are mixed into a raw RGB image (Figure 9).

Figure 9: An RGB setting, subsetted from Sentinel 2 A imagery, Baku


City
4.5) You can change the point cloud settings in the upper and
lower pane of the View Settings dialog. You can also show
point cloud layers individually using the respective checkbox of


a single point cloud layer. Deactivate the Point Clouds checkbox


to hide all point cloud layers in the View or select specific point
clouds to hide. You can change the order of point cloud layers
by dragging and dropping within the Point Clouds section.

4.6) The 3D subset selection button becomes active once a point cloud layer is loaded into the project. Select a subset using the 3D subset selection button; an additional window then opens where the subset is visualized in 3D (see Navigating in 3D). You can shift the subset when the left View is active, using the left, right, up, and down arrows of your keyboard.
4.7) With the 3D viewer active, image layers can be activated in
the View Settings dialog (upper pane) and are then visualized as
a 2D image in the 3D viewer. The default elevation for the
image layer is the minimum elevation of all currently displayed
point cloud points in the 3D viewer (-auto-). Change the
elevation of the image layer by using the parameter in the lower
pane Elevation (3D View) and enter a new height value (in the
project unit). 3D vector layers can be displayed in the 3D
viewer, supporting 2D and 3D points, lines, and polygon
outlines (without fill).

4.8) The Zoom Scene to Window button helps you reset the
observer position and all zoom and rotation steps to default.
With point cloud data loaded, you can open an additional toolbar
View > Toolbars > 3D for more visualization options.


Step 5: Vector Layer Settings


5.1) You can change the vector layer settings in the upper pane
(individual vector layer settings) and lower pane (global
settings) of the View Settings dialog. You can change the order
of thematic layers by drag and drop. All thematic layers can be
activated and inactivated for display via the Vector Layers
checkbox or individually using the respective checkbox of a
single vector layer.
5.1.1) Upper pane - Individual Vector Layer Properties
➢ Outline: Select an outline color for vector layers.
➢ Fill: Choose a fill color for polygons.
➢ Outline Width: The value for the outline width changes the
thickness of the vector outline for all vector layers. (This value
is saved in the user settings and therefore applied to different
projects.)
➢ Transparency: Select the transparency for a vector layer
individually.
5.1.2) Lower pane - Vector Layers Global Settings
✓ Background: Click on the square to change the background
color of the image view.
✓ Outline Width: The value for the outline width in this lower
pane of the dialog changes the thickness of the vector outline for
all vector layers. (This value is saved in the user settings and
therefore applied to different projects.)
✓ Auto Update: This checkbox updates the view with each
change of the view settings on the fly. Clear this check box to


show the new settings after clicking Apply only. The Discard
and Apply buttons become active with the Auto-update check
box cleared.
✓ Apply to all Views: With this checkbox activated, you can
apply selected settings to all views at once.
✓ Note that by changing the Global Settings in this lower pane, all settings in the upper pane are overwritten (Figure 10).

Figure 10: Vector Layer Settings for visualization of a vector layer

5.2) The Layer Visibility Flag: It is also possible to change the


visibility of individual layers and maps in the Manage Aliases
for Layers dialog box.
5.2.1) To display the dialog, go to Process > Edit Aliases >
Image Layer Aliases (or Thematic Layer Aliases).
5.2.2) Hide a layer by selecting the alias in the left-hand column
and un-checking the ‘visible’ checkbox (Figure 11).


Figure 11: Manage Aliases for Layers dialog box

Step 6: Workspace inside eCognition 9.5

Workspaces are at the top of the hierarchical tree and are


essentially containers for projects, allowing you to bundle
several of them together. They are especially useful for handling
complex image analysis tasks where information needs to be
shared.
6-1) How to Create a Workspace:
6.1.1) You can start by creating a new workspace by selecting
New Workspace from the File menu. Also, you may click on the

Create New Workspace icon . This workspace will contain


all of our projects.
6.1.2) We'll give our workspace a name, confirm it's in the
appropriate folder, and click OK to create the workspace. Once the workspace is created, you'll see it on the left side of the screen underneath the workspace section (Figure 12).


Figure 12: Create New Workspace dialog box


6.1.3) Moving over to Windows Explorer, we see a new folder created with the name of our workspace; in this case, the eCog_MudVolcan project. Within that folder is a *.dpj file, which is the workspace file. So once again, when you create a workspace, it creates a folder and a *.dpj file.


6.1.4) Back in eCognition, you can now create your desired

projects by clicking on the Create New Project icon. This project will be created within your workspace automatically. Next, navigate to the directory containing your data and begin loading it.
6.1.5) Then, you may start by loading in the image data set called naip.img. Pay attention to the metadata and specify the layer alias names for each of your layers. Band two corresponds to the blue wavelengths, band three to green, band four to red, and the final band to near-infrared.
6.1.6) Later, you can repeat this process for the Sentinel 2A data
sets. Don't be confused by the fact that it says import image
layers. It is the term eCognition used for continuous raster data.
Once again, I'm giving these layers meaningful aliases. Aliases
make it easier for me to work with these raster data sets within
eCognition. They also allow me to apply my ruleset to other
data sets, providing the alias names are the same. So this gives
me a tremendous amount of flexibility in developing and sharing
rulesets.
6.1.7) You may load in the mud-volcanoes location data as shapefile data. So instead of loading it as an image layer, we load it as a thematic layer. Nevertheless, we're still going to give it a layer alias. So once again, if we have a separate project that uses a feature data set similar to this one, our ruleset will work thanks to the alias name. Finally, we're going to give our project a meaningful name; we'll call it eCog_Mud Volcans in this case, and then click OK to create the project.
6.1.8) Now that I've created my project, I'll want to save it. I can
go over to my taskbar and click on the Save Project icon.
6-2) Working with the Workspace
6.2.1) We see a new DPR folder in our workspace folder within
Windows Explorer. This DPR folder contains all of the projects
within our workspace. Every time we save a project, it stores its
version number.


6.2.2) Now that we've created our project, let's explore how
eCognition handles these data sets. First, we will go to Edit
Aliases, Image Layer Aliases (Figure 13) under the Process
menu.

Figure 13: Manage Aliases Layers dialog box


6.2.3) The Manage Aliases Layers dialog box lists all of the data sets we specified as image layers: the continuous rasters, such as the Landsat 8, Sentinel 2A, and DEM bands, corresponding to our image data sets. We can see the alias names and the particular layers corresponding to them.
6.2.4) We can then do the same for our thematic aliases. It will
show the alias we identified for our vector data set of mud-
volcanoes location point data. The layers that we specified as
image layers could be viewed using the Image Layer Mixing
Dialogue.
6.2.5) Clicking on the Edit Image Layer Mixing Dialogue opens
the Image Layer Mixing Dialogue window. We can change the


layer mixing down below. For this example, we'll move to a


one-layer grey. And then, we'll cycle through all of our layers to
ensure that they're being displayed correctly within eCognition.
6.2.6) To confirm that our thematic data loaded correctly, we
can go to our view settings, and in the view settings dialogue,
change the layer from image data to our thematic point layer.
6.2.7) By right-clicking on your project and choosing to modify
it, you can go in and change the name of your project, add or
remove layers, or view your project setup.
6.2.8) If your project is large, it may be advantageous to
establish a subset of that project when developing your ruleset.
Subsets are smaller versions of your project that contain subsets
of the data. As a result, many of your algorithms, particularly
time-consuming ones such as a segmentation algorithm, will run
significantly faster.
6.2.9) To establish your subset, click on the subset tool, draw a
rectangle around an interesting area you'd like to subset, then
right-click and choose Save Subset to Workspace. This action
will save the subset as a new project within your workspace. It
will retain all the data and layers of the original project, only for
the smaller subset section that you selected.
6.2.10) You can establish multiple subsets from a single project.
For example, the first subset I establish would be good for
testing a building algorithm, but the second would be great for
testing out a water extraction algorithm. Once you've created
your subsets, you can double-click on them to view them and


then click on the Save Project icon to ensure they save in the
workspace.
6.2.11) Moving over to Windows Explorer, we see that our subsets are saved as new project files within the DPR folder, located within our workspace folder.
Step 7: Creating an Initial Project
7.1) To create a simple project – one without thematic layers,
metadata, or scaling (geocoding is detected automatically) – go
to File> Load Image File in the main menu (Figure 14).

Figure 14: The Load Image File dialog box for a simple project, with recursive file display selected
7.2) Load Image File (along with Open Project, Open
Workspace, and Load Ruleset) uses a customized dialog box.
Selecting a drive displays sub-folders in the adjacent pane; the
dialog will display the parent folder and the subfolder.
7.3) Clicking on a sub-folder then displays all the recognized file types within it (this is the default).


7.4) You can filter file names or file types using the File Name
field. To combine different conditions, separate them with a
semicolon (for example *.tif; *.las). The File Type drop-down
list lets you select from a range of predefined file types. The
buttons at the top of the dialog box let you easily navigate
between folders. Pressing the Home button returns you to the

root file system.


7.5) There are three additional buttons available. The Add to
Favorites button on the left lets you add a shortcut to the left-
hand pane, listed under the Favorites heading. The second
button, Restore Layouts, tidies up the display in the dialog box.
The third, Search Subfolders, additionally displays the contents
of any subfolders within a folder. By holding down Ctrl or Shift,
you can select more than one folder. Files can be sorted by
name, size, and by date modified. In the "Load Image File"
dialog box, you can:
7.5.1) Select multiple files by holding down the Shift or Ctrl
keys, as long as they have the same number of dimensions.
7.5.2) Access a list of recently accessed folders displayed in the
Go to Folder drop-down list. You can also paste a file path into
this field (updating the folder buttons at the top of the dialog
box).
Step 8: Creating a Project with Predefined Settings
8.1) When you create a new project, the software generates a
main map representing the image data of a scene. To prepare this, you select image layers and optional data sources, like thematic layers or metadata, for loading into the new project. You can
rearrange the image layers, select a subset of the image or
modify the project default settings. In addition, you can add
metadata.
8.2) An image file contains one or more image layers. For
example, an RGB image file contains three image layers, which
are displayed through the Red, Green, and Blue channels
(layers).
8.3) Open the Create Project dialog box by going to File> New
Project (for more detailed information on creating a project,
refer to The Create Project Dialog Box). The Import Image
Layers dialog box opens. Select the image data you wish to
import, then press the Open button to display the Create Project
dialog box.
8.3.1) Opening certain file formats or structures requires
selecting the correct driver in the File Type drop-down list.
8.3.2) Then, select the main file in the files area. If you
select a repository file (archive file), another Import Image
Layers dialog box opens, where you can select from the
contained files. Press Open to display the Create Project dialog
box.
8.3.3) The Create Project Dialog Box looks like Figure 15.


Figure 15: Create Project dialog box


8.4) The Create Project dialog box gives you several options.
We can edit these options at any time by selecting File> Modify
Open Project:
8.4.1) Change the name of your project in the Project Name
field. The Map selection is not active here but can be changed in
the Modify Project dialog box after project creation is finished.
8.4.2) If you load two-dimensional image data, you can define a
subset using the Subset Selection button (Figure 16).


Figure 16: The Subset Selection dialog box


8.4.3) If the complete scene to be analyzed is relatively large,
subset selection enables you to work on a smaller area to save
processing time (Figure 17).

Figure 17: A subsetted project for a smaller Qobustan area
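Outside eCognition, you can also pre-cut such a subset before import. The sketch below reads a single pixel window from a large GeoTIFF with rasterio; the file name and offsets are placeholders:

    import rasterio
    from rasterio.windows import Window

    # Read only a 1000 x 1000 pixel window from a large scene.
    with rasterio.open("T39TUE_20200311.tif") as src:
        window = Window(col_off=2000, row_off=3000, width=1000, height=1000)
        subset = src.read(window=window)
        transform = src.window_transform(window)  # geocoding of the subset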


8.4.4) If you want to rescale the scene during import, edit the scale factor in the text box corresponding to the scaling method used: resolution (m/pxl), magnification (x), percent (%), or pixel. For example, importing 10 m bands at 50 percent yields a 20 m scene.
8.4.5) To use the geocoding information from an image file to
be imported, select the Use Geocoding checkbox.
8.4.6) For feature calculations, value display, and export, you
can edit the Pixels Size (Unit). If you keep the default (auto), the
unit conversion is applied according to the coordinate system of
the image data.
8.4.7) If geocoding information is included, the pixel size equals
the resolution. In other cases, pixel size is 1.
8.5) In special cases, you may want to ignore the unit information from the included geocoding information. To do so, deactivate Initialize Unit Conversion from the Input File item in Tools > Options in the main menu.
8.6) The Image Layer pane allows you to insert, remove, and edit image layers. The order of layers can be changed using the up and down arrows. If you use multi-dimensional image data sets, you can check and edit multi-dimensional map parameters. You can set the number, the distance, and the starting item for both slices and frames.
8.6.1) If you load two-dimensional image data, you can set the
value of those pixels that are not to be analyzed. Select an image
layer and click the No Data button to open the Assign No Data
Values dialog box.
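For intuition, masking a no-data value before computing statistics looks like the sketch below in numpy and rasterio; the file name and the no-data value of 0 are assumptions for illustration:

    import numpy as np
    import rasterio

    # Exclude an assumed no-data value (0) from the statistics,
    # mirroring what the Assign No Data Values dialog achieves.
    with rasterio.open("B08.tif") as src:
        band = src.read(1)
    masked = np.ma.masked_equal(band, 0)
    print(masked.mean(), masked.std())  # statistics over valid pixels only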


8.6.2) If you import image layers of different sizes, the largest


image layer dimensions determine the scene's size. When
importing without geocoding, the smaller image layers keep
their size if the Enforce Fitting check box is cleared.
8.6.3) Select the Enforce Fitting checkbox to stretch the smaller
image layers to the scene size.
8.6.4) Thematic layers can be inserted, removed, and edited
similarly to image layers. If not done automatically, you can
load Metadata source files to make them available within the
map.
Step 9: Setting the Geocoding
9.1) Geocoding is the assignment of positioning marks in
images by coordinates. In earth sciences, position marks serve as
geographic identifiers. But geocoding is helpful for life sciences
image analysis too. Typical examples include working with
subsets, multiple magnifications, or thematic layers to transfer
image analysis results.
9.2) Typically, available geocoding information is automatically
detected: if not, you can enter coordinates manually. Images
without geocodes automatically create a virtual coordinate
system with a value of 0/0 at the upper left and a unit of 1 pixel.
For such images, geocoding represents the pixel coordinates
instead of geographic coordinates. The Layer Properties dialog box allows you to edit the geocoding information (Figure 18).


Figure 18: Layer Properties dialog box


9.3) The software cannot re-project image layers or thematic
layers. Therefore all image layers must belong to the same
coordinate system to be read properly. If the coordinate system
is supported, geographic coordinates from inserted files are
detected automatically. If the information is not included in the
image file but is nevertheless available, you can edit it manually.
9.4) After importing a layer in the Create New Project or
Modify Existing Project dialog boxes, double-click on a layer to
open the Layer Properties dialog box. To edit geocoding
information, select the Geocoding check box. You can edit the
following:
• x coordinate of the lower-left corner of the image
• y coordinate of the lower-left corner of the image
• Pixel size defining the geometric resolution
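These three pieces of information define an affine geotransform. For instance, with rasterio you can convert between pixel indices and map coordinates as sketched below (the file name is a placeholder):

    import rasterio

    with rasterio.open("T39TUE_20200311.tif") as src:
        x, y = src.xy(0, 0)         # map coordinates of the upper-left pixel center
        row, col = src.index(x, y)  # and back to pixel indices again
        print(src.crs, src.res)     # coordinate system and pixel size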


9.5) More Options To Apply


9.5.1) The Assign No Data Values Dialog Box: No-data values can only be assigned to scenes with two dimensions. This dialog allows you to set the value of pixels that are not analyzed. You can apply no-data-value definitions only to maps that have not yet been analyzed.
9.5.2) Importing Image Layers of Different Scales: You can
insert images and thematic layers with different resolutions
(scales) into a map. They need not have the same number of
columns and rows. To combine image layers of different
resolutions (scales), the images with the lower resolution –
having a larger pixel size – are resampled to the smallest pixel
size. If the layers have the same size and geographical position,
then geocoding is not necessary for the resampling of images.
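A scripted equivalent of this on-import resampling, shown here with rasterio for a hypothetical 20 m band upsampled by a factor of two to 10 m:

    import rasterio
    from rasterio.enums import Resampling

    # File name is a placeholder for a 20 m Sentinel-2 band.
    with rasterio.open("B11_20m.tif") as src:
        data = src.read(
            out_shape=(src.count, src.height * 2, src.width * 2),
            resampling=Resampling.bilinear,  # resample to the smaller pixel size
        )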
9.5.3) Editing Multidimensional Map Parameters: When
creating a new map, you can check and edit parameters of multi-
dimensional maps that represent time series. Typically, these
parameters are taken automatically from the image data set, and
this display is for checking only. However, you may want to
change the number, distance, and frames' starting item in special
cases.
9.5.4) Multisource Data Fusion: If the loaded image files are
geo-referenced to one single coordinate system, you can insert
image layers and thematic layers with different geographical
coverage, size, or resolution. It means that you can
simultaneously use image data and thematic data of various


origins. It can bring the different information channels into a


genuine relationship.
Sum Up:
eCognition version 9.5 is such powerful software that it is
challenging to summarize all of its advantages and benefits
within one tutorial. But in the current tutorial, we tried to
introduce some of its main structure and functionalities before
you can start working through specific tasks learning how to
apply the main approaches step-by-step in the following
tutorials.
Likewise, we believe that workspaces are at the top of the
hierarchical tree and are essentially containers for projects,
allowing you to bundle several of them together. They are
especially useful for handling complex image analysis tasks
where you must process high-resolution satellite imagery.


Informative Practices
Tips:
1) Any workspace is a safe container for eCognition projects.
2) The image file and the associated data within a scene can be independent of eCognition software (although this is not always true).
3) You can fuse (combine) Sentinel-2 data and other sensor data to enhance your research approaches.
Workouts:
1) Create a workspace and place two different subsetted projects
inside it.
2) Load image bands for the project you created, and then change the image layer mixing in different ways.
3) Clarify the difference between Sentinel-2A and 2B.
Quizzes:
1) What does this group of buttons allow you to do?

2) After the workspace is created, what to do next?


3) How many Sentinel-2 satellites are there in space?
Allied References:
1) Copernicus: Sentinel-2 (2020) Satellite Missions, eoPortal Directory. directory.eoportal.org. Retrieved 5 March 2020.
2) eCognition (2018) Trimble, Retrieved August 12, 2018, from
http://www.ecognition.com.
3) eCognition Developer (2013) Manual for Satellite Data
Analysis eCognition Developer, PNGFA. Trimble Germany
GmbH, Arnulfstrasse 126, D-80636 Munich, Germany.
4) eCognition Developer 9.1.2 (2015) Release Notes, Trimble
Germany GmbH, Arnulfstrasse 126, D-80636 Munich,
Germany.
5) Klatt, S. (2012) Recognition with eCognition, Skid trail
detection with multiresolution segmentation in eCognition


Developer Presentation from Research Colloqium, 4th


Semester M.Sc. Forest Information Technology.
6) Yan, L., D.P. Roy, H. Zhang, J. Li and H. Huang (2016) An
Automated Approach for Sub-Pixel Registration of Landsat-8
Operational Land Imager (OLI) and Sentinel-2 Multi-Spectral
Instrument (MSI) imagery. Remote Sens. 8, 520.


Tutorial 3

Examining The Image Segmentation Algorithms

sets of related pixels, also known as image objects


Opening Statement:
In the current tutorial, you first become acquainted with the
other way of accessing the Sentinel-2 imagery adjusted to the
northwest of Azerbaijan, around the Mingachevir Dam. Then,
you will learn how to segment such imagery in different ways
inside the eCognition software, which advances a new, object-
oriented approach to image analysis. In contrast to traditional
image processing methods, the basic processing units of OBIA
are image objects or segments and not single pixels. Even the
classification acts on image objects. For this reason, one
motivation for the object-oriented approach is that the expected
result of many image analysis tasks is the extraction of real-
world objects, proper in shape and proper in classification.
Instructive Memo:
✓ Level: Intermediate
✓ Time: This unit should not take you more than 1.5 hours.
✓ Software: eCognition Developer version 9.5.


✓ Data Sources: Sentinel-2, L1C_L1C_T38TPL_A030262_20210408T074154.tif.
✓ Subject Scene: Mingachevir Dam, Azerbaijan.
Tutor Objectives:
By the end of this unit, you should:
Be able to access the Sentinel-2 Images through the GLOVIS
website.
Be able to apply each of the segmentation techniques
available with eCognition Developer to an image.
Be aware of the difference between the various segmentation
algorithms and the types of objects (size and shape) they
produce.

Background Concepts:
Segmentation is defined as the partitioning of an image
into image objects; in a way, an image object is a group of
connected pixels in a scene. Segmentation means grouping
neighboring pixels into regions (or segments) based on
similarity criteria (digital number, texture). Image objects in
remotely sensed imagery are often homogenous and can be
delineated by segmentation. It is always the first step of any
process within eCognition Developer as it generates the image
objects on which the classification process will be performed.
The important part is for the segmentation process to identify
objects that represent the features you wish to classify and are
distinct in terms of the features available within eCognition
(e.g., spectral values, shape, and texture). You could capture the data sources from Sentinel-2 imagery, adjusted to the area around the Mingachevir Dam. Mingachevir is the fourth-largest city in Azerbaijan, with a population of about 110,000. It is often called the "city of lights" because of its hydroelectric power station on the Kur River, which divides the city down the middle (Figure 1).

Figure 1: The selected subject Scene, the Mingachevir Lake


The Mingachevir Dam (Hydro Power Station) is an earth-fill
embankment dam on the Kura River north
of Mingachevir in Azerbaijan.
Mastering The Skills:
In image segmentation, the expectation is, in many cases, to
automatically extract the desired objects of interest in an image
for a certain task. However, this expectation ignores the
considerable semantic multitude that in most cases needs to be
handled to achieve this result successfully, or it leads to the
development of highly specified algorithms applicable to only a
reduced class of problems and image data.
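Before turning to eCognition's own algorithms, it can help to see the general idea of segmentation in a few lines of code. The sketch below uses scikit-image's SLIC superpixels as a stand-in; it is not eCognition's multiresolution segmentation, and the file name is a placeholder:

    import numpy as np
    from skimage import io, segmentation

    # A small RGB subset exported as GeoTIFF (placeholder file name).
    img = io.imread("mingachevir_subset.tif")

    # Group neighboring, spectrally similar pixels into segments (image objects).
    segments = segmentation.slic(img, n_segments=500, compactness=10.0, start_label=1)

    # Per-object mean spectral values, the kind of feature used later in classification.
    means = {label: img[segments == label].mean(axis=0)
             for label in np.unique(segments)}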


Step 1: The Sentinel-2 Images From GLOVIS


1.1) Registration and Login
GloVis utilizes the USGS EROS Registration System (ERS) and
can be accessed with existing GloVis or Earth
Explorer credentials.
1.1.1) The ERS Help Document has step-by-step instructions
on the registration process. To set up new credentials, click
Create New Account to register.
1.1.2) Select Login on the top toolbar. Enter your username
and password and then click Sign In to access all features of
GloVis (Figure 2).

Figure 2: USGS GloVis site, the entrance hub


1.2) Select Data Set(s) and Apply Filters


Navigate GloVis by using the Interface Controls panel on the


left side of the screen. This panel provides options to set up a
data search.
1.2.1) Choose Your Data Set(s). To begin, select a data set from
the Choose Your Data Set menu. Toggle the data set on to activate and save the selection. The total


number of available scenes is listed below the data set name,
and the map view shows a coverage map to indicate data
availability. The coverage map for each data set is a different

color. You can turn off the coverage map by clicking its icon.
1.2.2) Metadata Filter: The Metadata Filter provides options
that narrow search results. Update one or more filters, then click
Apply to save all selections and view the matching scenes.
a) You can Filter data temporally by Date Range using
mm/dd/yyyy to mm/dd/yyyy.
b) Enter a Cloud Cover range to narrow the results based on
the percentage (0-100%) of the scene covered by clouds.
Leave this filter empty for data sets that do not report
cloud cover, such as GLS.
c) Select Months to further limit search results to scenes
acquired during a portion of the year. Hold down the Shift
or Ctrl keys to select more than one month while selecting
additional rows.
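The same date-range and cloud-cover filtering can also be done programmatically. As one alternative to the GloVis interface, the sketch below queries the Copernicus Open Access Hub with the sentinelsat package, using placeholder credentials and a placeholder footprint:

    from sentinelsat import SentinelAPI

    # Placeholder credentials and area of interest (WKT polygon).
    api = SentinelAPI("user", "password", "https://apihub.copernicus.eu/apihub")
    footprint = "POLYGON((46.9 40.6, 47.2 40.6, 47.2 40.9, 46.9 40.9, 46.9 40.6))"

    products = api.query(
        footprint,
        date=("20210401", "20210430"),       # date range filter
        platformname="Sentinel-2",
        processinglevel="Level-1C",
        cloudcoverpercentage=(0, 20),        # cloud cover filter
    )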
1.2.3) Click the triangle in the upper right of the header to collapse the menu.
1.3) Define Area of Interest
The next step is to define the geographic area of interest.


1.3.1) To select a location, click and drag the map to pan and scroll to zoom into your area of interest. Then, utilize the Jump To… menu in the upper right of the map. As the zoom function is activated, the display shows browse images that intersect the Map Center Point crosshair in the center of the viewport. To see data behind the Interface Controls, click the triangle in the upper right of that panel to collapse the panel. As the mouse moves across the map, the coordinates are displayed in decimal degrees in the upper right portion of the map view area. You can also enable Full-Screen mode to view the map in full screen.
a) Pan and Zoom: Use the mouse scroll wheel or the +/-
controls in the upper right corner of the map view to activate the
browse imagery display or enlarge the browse images. The
zoom level required to activate the browse images depends on scene size and varies by data set.
b) Pan to adjacent areas by clicking and dragging the map. The
data sets automatically refresh if the zoom level allows imagery
to display.

c) Jump To… Options: The Jump To… menu, in the upper


right of the map field, provides the opportunity to use Current
Location, Latitude/Longitude, or WRS Path/Row to select a
location. The Current Location shifts the center point of the map
to the current physical browser location once the browser shares
the current location with the GloVis site.


d) As the area of interest is modified, each data set updates the count of scenes that match. To indicate data availability, you can toggle on the browse-level coverage map icon.


1.4) Scene Navigation and Selection
The Scene Navigator panel appears near the lower right portion
of the screen when browse images are loaded.
1.4.1) The Scene Navigator shows the Data Set, Entity ID, and
Acquisition date of the most recent scene. The current scene is
depicted with a red outline and is displayed on top of all other
browse images. For the visually impaired, you can change the
outline's color in the Map Preferences section of the Preferences
menu.
1.4.2) Scenes are arranged by acquisition date. The Previous
button selects and highlights the next most recent scene. Use
the Previous and Next buttons to scroll through the results. The
display uses a rolling display of browse images closest to the
acquisition date of the current scene.
1.4.3) The Select button adds a scene to the Selected Scenes list
just below the black toolbar across the top of the map view.
Once a scene has been selected, click Unselect to remove it from
the Selected Scenes.


1.5) Scene Navigator Controls


The Scene Navigator panel also includes icons to Share Scene,
View Metadata, Download Scene, and Hide Scene. Login is
required for downloading.

1.5.1) Share Scene: The Share Scene icon opens a menu


with options for Scene Summary, Metadata XML, and Reduced
Resolution Browse (Figure 3).

Figure 3: Share Scene Summary dialog box


1.5.2) Scene Summary opens a new browser tab that shows the
browse image, links to download available products, full
metadata for the current scene, and links to order products
(WMS On-Demand and Bulk Download) options are available.
Metadata XML opens a new tab with the scene XML metadata.
And Reduced Resolution Browse opens a new tab with a larger
browse image.

1.5.3) View Metadata: Click the View Metadata icon to


display the full metadata for the current scene. The data set
attributes are displayed in table format (Figure 4).


Figure 4: Display of the full metadata for the current scene

1.5.4) Download Scene: Click the Download Scene icon to


see download options for the current scene. Click Download for
the desired product to download the data. Download options
and products vary by data set.

1.5.5) Hide Scene: Click the Hide Scene icon to hide the
current scene from the list and remove it from the map. The
number of Hidden scenes is indicated by a counter next to the
data set name in the Interface Controls panel. Click on the
counter to clear hidden scenes to make them available for
display again.


1.6) Scene List


Go to Selected Scenes at the top of the map view to view the
scene list created by selecting scenes in the Scene Navigator
panel (Figure 5).

Figure 5: Selected Scenes dialog box


1.6.1) Individual Scene Controls: The scene list shows each
scene's data set and entity ID. Individual Scene Controls include

Show Metadata, Download Scene, Order Scene, or Show Footprint. Download and Order options vary by data set.
1.6.2) Scene List Controls: The selected scenes panel allows
users to Show All Footprints, Export Scene ID List, Import
Scene List, or Clear All Scenes.

Figure 6: Sentinel 2A format, Download Options dialog


1.6.3) Download and Order options vary by data set (Figure 6).
The Export Scene List and Import Scene List are functions of
GloVis only.
Step 2: Set up a Project
As with all work within eCognition Developer, the first step is
to create a project containing all the datasets required for the
study.
2.1) Your targeted project should have the same parameters as those shown in the previous tutorials.
2.2) Figure 7 shows the area selected for the current tutorial, part of the Mingachevir region, Azerbaijan, inside the eCognition project.

Figure 7: Setting up the Mingachevir area inside the eCognition project
2.3) Please note the order in which the image bands are loaded, i.e., the high-resolution bands (blue, green, or red) first, as this decides the project's image resolution. In this case, the 20 m multispectral Sentinel-2 channels are resampled to the 10 m resolution of the first-loaded data. Once your project window matches, select OK to create your project.
2.4) It is important to keep an eye on the size of the images you are creating, as an eCognition Developer project can become very slow with very large datasets due to the number of objects generated during the segmentation process.
Step 3: Display Imagery
3.1) For these exercises, we recommend you subset a small part of the Sentinel-2 imagery.
3.2) For the multispectral false-color image, use the band
combination B2-Blue, B3-Green, B4-Red, B7-VNIR, and B11-
SWIR components, as is illustrated in Figure 8.

Figure 8: The layer mixing properties for the subsetted image


3.3) You need to set the eCognition interface with the project
and display parameters defined. For more details, read previous
tutorials.

Step 4: Set up your Process Tree

The Process Tree ( ) will contain the script you produce to control the processes (algorithms) that run and the order in which they are executed. To insert a process:
4.1) Right-click within the process tree window, and the
following menu will appear (Figure 9).

Figure 9: Process tree context menu


4.2) Select 'Append New,' and the Edit Process dialog will
appear (Figure 10).


Figure 10: The Edit Process dialog box, a basic outline for the
segmentation processes
4.3) The Edit Process dialog box defines the main segmentation process. You can nest several child segmentation processes beneath it by selecting the Insert Child option. In addition, you may arrange other algorithms in the Template Process Tree before or after the segmentation processes, as you can see in Figure 11.

Figure 11: Template Process Tree, a suggested arrangement


4.4) It is important to keep the scripts that you produce during your segmentation procedures as organized as possible; this will allow you to understand what you have done when you come back to them. With this in mind, Figure 11 contains a template you may aim to adhere to during your exercises.
Step 5: Examples of Segmentation Algorithms
Starting with Multiresolution Segmentation
5.1) The first and most general segmentation technique available
within eCognition Developer is multiresolution segmentation.
To insert this algorithm within your Process Tree, right-click on
your 'Segmentation' process in the template you previously
entered and select 'Insert Child.'
5.2) Select the algorithm 'Multiresolution Segmentation' within the dialog box. If this algorithm is not available, scroll to the bottom of the list, select More, and move the algorithms you wish to have in the list to the right-hand column. You should now be presented with the dialog box shown in Figure 12.

Figure 12: Edit Process dialog box, adjusted for the multiresolution
segmentation algorithm
5.3) Table 1 briefly describes the parameters available for this segmentation algorithm. The 'Edit Process' dialog is made up of several elements, each of which will become clear as you move through the notes:
Table 1: An overview of parameters for segmentation

a) Name: The name of the process, which can either be manually entered or automatically provided by the software. A good convention is to manually edit the name where nothing else is changed within the process; otherwise, use the automatic function. The note icon allows a comment to be written about the process.

b) Algorithm: The algorithm to execute. This drop-down menu allows you to select the algorithm you wish to execute; there is an extensive list of algorithms that will be used during these units.

c) Image Object Domain: Defines the object(s) on which the algorithm will execute. The drop-down box and 'Parameter' box allow the level to be selected. The following button ('all objects') allows a class (or classes) to be defined, while the final button ('no condition') allows a rule to be used, for example, area > 20 m2.

d) Loops & Cycles: It is possible to allow a process to form a loop, often in the form of a while loop, and the tick box allows this to be selected.

e) Algorithm Description: A simple description of the algorithm you are using.

f) Algorithm Parameters: The parameters associated with the selected algorithm, such as the name of the level in the hierarchy created by the segmentation algorithm.

g) Segmentation Settings: Image layer weights increase the weighting of a layer when calculating the heterogeneity measure used to decide whether pixels/objects are merged; zero ignores the layer. The scale parameter controls the amount of spectral variation within objects and their resultant size, and has no unit. If any thematic layers are available, Thematic Layer Usage allows thematic layers to be turned on and off individually for use within the segmentation process.

h) Composition of homogeneity criterion (Shape): The shape factor acts as a weighting between the object's shape and its spectral color: if 0, only color is considered, whereas if > 0, the object's shape and color are both considered and, therefore, fewer fractal boundaries are produced. The higher the value, the more shape is considered. Compactness is a weighting factor representing the compactness of the objects formed during the segmentation process.

5.4) To run the segmentation process, leave the parameters at their default values (as shown in Figure 12) and click Execute. It is recommended that you give your level a proper name; a common convention is to number levels, starting with Level 1. Once you are happy with the parameters and have executed the process, you will have completed your first segmentation.
5.5) The multiresolution segmentation creates objects using an
iterative algorithm, whereby objects (starting with individual
pixels) are grouped until a threshold representing the upper
object variance is reached. The variance threshold (scale
parameter) is weighted with shape parameters (with separation
of shape and compactness parameters) to minimize the fractal
borders of the objects. By increasing the variance threshold,
larger objects will be created, although their exact size and
dimensions are dependent on the underlying data.
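For readers who want to experiment with this idea outside eCognition, the short Python sketch below uses scikit-image's Felzenszwalb graph-based segmentation as a rough open-source analogue of scale-controlled region merging. It is not eCognition's proprietary multiresolution algorithm, and the file name and parameter values are illustrative assumptions only.

    # A rough analogue of scale-controlled region merging (NOT eCognition's
    # multiresolution algorithm), using scikit-image on a 3-band image.
    import numpy as np
    from skimage import io
    from skimage.segmentation import felzenszwalb

    image = io.imread("sentinel2_subset_rgb.tif").astype(float)  # hypothetical file
    # Larger 'scale' tolerates more internal variation: fewer, larger objects.
    segments = felzenszwalb(image, scale=50, sigma=0.5, min_size=20)
    print("image objects created:", segments.max() + 1)

Re-running the sketch with a larger scale value reproduces the behavior described above: the object count drops while the objects themselves grow.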
5.6) Once the segmentation is executed, select the 'Show or Hide Outlines' icon ( ), and the outlines of the objects (segments) created will be displayed over the image. Make sure the cursor is in 'cursor mode' rather than 'zoom mode,' then select the objects (with the outlines turned on or off) in turn. Using the 'Image Object Information' window, you will see the values for features associated with the selected object. Examples of a few segmented samples are shown in Table 2.
Table 2: Image objects created by Multiresolution Segmentation for different sampled sites (see Figure 8).

(a) Image objects well-adapted to the river path, a small lake, and traditional agricultural fields.
(b) Image objects well-adapted to the traditional and modern agricultural fields.
(c) Image objects well-adapted to a small salty lake and the non-vegetated areas around it.
(d) Image objects well-adapted to the Mingachevir Dam water surface and downstream channels.

5.7) Table 2 also shows the image objects created by the Multiresolution Segmentation algorithm for part of the modern irrigated agricultural fields located southwest of the Mingachevir Dam.
Step 6: Streaming Other Segmentation Processes
By spending approximately 30 minutes experimenting with different segmentation and input parameters, you will observe the differences in the image objects created. A few examples are given as follows:
6.1) Quadtree-based Segmentation
A Quadtree segmentation creates regular square objects, where the variation within the object defines its size. Unlike multiresolution segmentation, the objects are created by dividing larger objects until the resultant objects are all within the upper boundary of allowed variation. As with the multiresolution segmentation, the variation at which a final object is defined is set using a scale parameter.
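To make the splitting principle concrete, here is a minimal Python sketch of quadtree-style division on a single band: a block is split into four quadrants until its variance falls below a threshold that plays the role of the scale parameter. It illustrates the principle only and is not eCognition's implementation.

    # Minimal quadtree-style splitting: divide each block into four quadrants
    # until its variance drops below 'max_var' (the scale-like threshold).
    import numpy as np

    def quadtree(band, r0, c0, r1, c1, max_var, labels, counter):
        block = band[r0:r1, c0:c1]
        if block.var() <= max_var or (r1 - r0) <= 1 or (c1 - c0) <= 1:
            labels[r0:r1, c0:c1] = counter[0]  # homogeneous enough: one object
            counter[0] += 1
            return
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        for a, b, c, d in ((r0, c0, rm, cm), (r0, cm, rm, c1),
                           (rm, c0, r1, cm), (rm, cm, r1, c1)):
            quadtree(band, a, b, c, d, max_var, labels, counter)

    band = np.random.rand(64, 64)            # stand-in for a real image band
    labels = np.zeros(band.shape, dtype=int)
    quadtree(band, 0, 0, 64, 64, max_var=0.05, labels=labels, counter=[0])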
6.1.1) To create a quadtree segmentation, follow the same
procedure used in Figure 13.


Figure 13: The Edit Process dialog box, Quadtree-based segmentation
6.1.2) A scale factor of 20-60 is recommended for the
segmentation, although others might be more appropriate. Try
several scale factors for representing different landcover types
(Figure 14).

Figure 14: The Quadtree segmentation resulted for the part of the
Mingachevir Lake


6.2) Chessboard Segmentation

A chessboard segmentation is the simplest segmentation available, as it just splits the image into square objects with a size predefined by the user. The segmentation does not consider the underlying data; therefore, when large objects are created, it will not delineate the features within the data you are trying to classify.
6.2.1) To start a Chessboard segmentation, right-click on the
associated item inside the Process Tree and select the Edit
option.
6.2.2) When the Edit Process dialog box opens, set up all
parameters as Figure 15.

Figure 15: The Edit Process dialog box, Chessboard segmentation


6.2.3) This type of segmentation tends to be used in more advanced processes where segmentation is undertaken in several steps combined with a classification (Figure 16).

Figure 16: The chessboard segmentation result


6.2.4) To perform a default chessboard segmentation, set up the process the same way as the multiresolution and Quadtree segmentations, but select the chessboard segmentation algorithm. To begin with, use an object size of 10, which generates objects of 10 x 10 pixels, and then progressively increase or decrease this value. Notice, in each case, how the boundaries and spectral information of the underlying data are ignored.
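As a minimal illustration of how simple this tiling is, the Python sketch below labels an image grid in fixed square tiles without ever reading the pixel values; the grid shape and object size are arbitrary assumptions.

    # Chessboard-style labelling: fixed square tiles, pixel values ignored.
    import numpy as np

    def chessboard(shape, object_size):
        rows, cols = shape
        r_idx = np.arange(rows) // object_size   # tile row index per pixel row
        c_idx = np.arange(cols) // object_size   # tile column index per pixel column
        n_tile_cols = -(-cols // object_size)    # ceiling division
        return r_idx[:, None] * n_tile_cols + c_idx[None, :]

    labels = chessboard((1098, 1098), 10)  # objects of 10 x 10 pixels
    print("objects:", labels.max() + 1)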
6.2.5) Inside eCognition 9.5, you can experiment with more segmentation algorithms to create a segmentation adjusted to your data and aims. Descriptions and uses of some of these segmentation algorithms are given in Table 3.


Table 3: More dissimilar segmentation methods available inside eCognition 9.5

Contrast Split Segmentation
Description: This algorithm aims to split bright and dark objects using a threshold that maximizes the contrast between the resulting bright objects (consisting of pixel values above the threshold) and dark objects (consisting of pixel values below the threshold). To execute this algorithm, you will need to create two classes, one for the bright and one for the dark.
Uses: The algorithm aims to optimize this separation by considering different pixel values within the range provided by the user parameters, with values selected based on the inputted step size and stepping parameter.

Spectral Difference Segmentation
Description: A merging algorithm that will merge neighboring objects with a spectral mean difference below the given threshold to produce the final objects.
Uses: To use this segmentation algorithm, you must already have a segmentation (level) in place. You cannot create a new level using this algorithm.

Contrast Filter Segmentation
Description: The contrast filter segmentation uses a combination of two pixel filters to create a thematic raster layer, with the values: no object, object in the first layer (filter response 1), object in the second layer (filter response 2), object in both layers, and ignored by threshold.
Uses: As with the previous simple exercises, experiment with this algorithm to segment the image provided; although, as you will see, this algorithm does not produce results on par with the other algorithms outlined in this unit for the image subset provided.

Multi-Threshold Segmentation
Description: Multi-Threshold segmentation splits the domain based on pixel values. This kind of segmentation creates image objects and classifies them based on user-created thresholds. It can also create unclassified image objects based on pixel value thresholds.
Uses: Combining the automatic threshold algorithm with multi-threshold segmentation allows you to create fully automated and adaptive image analysis algorithms based on threshold segmentation. A manual definition of fixed thresholds is not necessary.


Step 7: Information from Image Objects


7.1) Image Object Information Window
Image objects consist of spectral, shape, and hierarchical elements. If required, this information can be extracted using the Image Object Information window and the Feature View window inside eCognition 9.5.
7.1.1) To get information on a specific image object, click on an image object in the map view (some features are listed by default).
7.1.2) To add or remove features, right-click the Image Object
Information window and choose the "Select Features to
Display." The Select Displayed Features dialog box opens,
allowing you to select a feature of interest.
7.1.3) If the Object Features are not listed in the Available Search Feature list (on the left side of the Select Displayed Features dialog), double-click on "Create new Mean" to show the Create Mean dialog box (Figure 17).

Figure 17: The Create Mean dialog box


7.1.4) Then, from the Value list, select bands or features to add them to the Selected Search Feature list on the right side of the Select Displayed Features dialog. Remember that you can even add some Geometry features, such as Area and Number of pixels, to the mean values list.
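Outside eCognition, the same per-object statistics can be computed from a label image in a few lines of Python; the arrays below are stand-ins for a real band and a real segmentation result.

    # Per-object mean values and areas from a label image (stand-in data).
    import numpy as np
    from scipy import ndimage

    band = np.random.rand(100, 100)                             # one image layer
    labels = (np.arange(10000).reshape(100, 100) // 1000) + 1   # fake objects 1..10
    ids = np.unique(labels)
    mean_per_object = ndimage.mean(band, labels=labels, index=ids)
    area_per_object = ndimage.sum_labels(np.ones_like(band), labels=labels, index=ids)
    print(dict(zip(ids.tolist(), mean_per_object.round(3))))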
7.1.5) The selected feature values are displayed in the map view.
To compare single image objects, click another image object in
the map view, and the displayed feature values are updated
(Figure 18).

Figure 18: The Image Object Information window and associated information on segmented objects
7.1.6) Double-click a feature to display it in the map view; click
it in the map view a second time to deselect a selected image
object. If the processing for image object information takes too
long, or if you want to cancel the processing for any reason, you
can use the Cancel button in the status bar. For more details,
refer to the previous tutorials.

7.2) Feature View Window


Image objects have spectral, shape, and hierarchical
characteristics and these features are used as sources of
information to define the inclusion-or-exclusion parameters used
to classify image objects.
7.2.1) There are two major types of features. Object features are attributes of individual image objects (for example, the area of an image object), while Global features are not connected to an individual image object (for example, the number of image objects of a certain class).
7.2.2) Available features are sorted in the feature tree, displayed in the Feature View window (Figure 19). It is open by default but can also be selected via Tools > Feature View or View > Feature View.
7.2.3) To set mean ranges for each selected band, right-click on the feature and select Update Range (Figure 19). By examining the mean values, it is possible to find threshold values for each feature.


Figure 19: The Feature View window and information about objects
Sum Up:
You can use segmentation algorithms to subdivide entire images at the pixel level, or specific image objects from other domains, into smaller image objects. The eCognition 9.5 software provides several diverse approaches to segmentation, ranging from very simple algorithms, such as Chessboard and Quadtree-based Segmentation, to highly sophisticated methods such as the Multiresolution Segmentation and Multi-Threshold Segmentation algorithms. These are required to create new image object levels based on image layer information, but they are also a valuable tool to refine existing image objects by subdividing them into smaller pieces for more accurate classification.
A few examples of image segmentation algorithms were given during the current tutorial. If you spend approximately 30 minutes with different segmentation algorithms and parameters, you will observe the differences in the image objects created.
Generally speaking, there is no definitive answer as to whether
one Segmentation is better than another. The final selection
depends upon whether you are satisfied that the objects you are
interested in classifying are adequately delineated.

Informative Practices
Tips:
1) The first step of an eCognition image segmentation is to cut the
image into pieces.
2) Segmentation serves as a building block for further analysis.
3) There is a choice of several algorithms to do the segmentation
process.
Workouts:
1) Using the layer combination of your choice (bands with the 10-meter resolution are recommended), experiment with the image equalizations available. Again, observe how the various land cover types respond to these changes.
2) Decide on the most appropriate segmentation algorithm for
segmenting this scene.
3) As you are doing this, consider what elements you think provide a better segmentation and how you could use the different characteristics of the various algorithms to achieve the segmentation you require.
Quizzes:


1) What are the main differences between a Chessboard segmentation and a Multi-Threshold segmentation?
2) What are the scale and shape parameters?
3) Why is the Multiresolution Segmentation algorithm the best
choice in the segmentation process?
Allied References:
1) El-naggar, A. M., (2018). Determination of optimum segmentation parameter values for extracting buildings from remote sensing images, Alexandria Engineering Journal, 57, 3089–3097.
2) Baatz, M. and A. Schäpe, (2000). Multiresolution Segmentation: An Optimization Approach for High Quality Multi-scale Image Segmentation, Angew. Geogr. Info. Verarbeitung, Wichmann-Verlag, Heidelberg, pp. 12-23.
3) Drǎguţ, L., D. Tiede, and S. Levick, (2010). ESP: a tool to estimate scale parameter for multiresolution image segmentation of remotely sensed data, Int. J. Geography Inform. Sci., 24 (6), pp. 859-871.
4) Ikokou, G. and J. Smit, (2013). A technique for optimal selection of segmentation scale parameters for object-oriented classification of urban scenes, South Afr. J. Geomatics, 2 (4).
5) Lucieer, A. and A. Stein, (2002). Existential uncertainty of spatial objects segmented from satellite sensor imagery, IEEE Trans. Geosci. Remote Sens., 40 (11).
6) Zhang, H., J. Fritts, and S. Goldman, (2008). Image segmentation evaluation: a survey of unsupervised methods, Comput. Vision Image Understanding, 110, pp. 260-280.


Tutorial 4

Objective Image Classification Processes

from raw images to informative knowledge

Opening Statement:
The current tutorial will create a nearest neighbor classification
of a segmented Sentinel-2 image acquired from an area around
the Baku region, Azerbaijan. The main objective of image
classification is to identify and portray, as a unique gray level
(or color), the features occurring in an image in terms of the
object or type of landcover these features represent on the
ground.
Instructive Memo:
✓ Level: Intermediate,
✓ Time: This unit should not take you more than 1.5 hours,
✓ Software: The eCognition version 9.5,
✓ Data Sources: Sentinel-2:
L1C_T39TVE_A030362_20210415,
✓ Subject Scene: Baku Region, The Republic of Azerbaijan.
Tutor Objectives:
By the end of this unit, you should:


Be able to complete all the steps required in the process tree.
Complete a classification based on the nearest neighbor classifier.
Be aware of the parameters and features that aid the classification process.
Be attentive to the merge and export processes.
Background Concepts:
Image classification is the process of categorizing and labeling
groups of pixels or vectors within an image based on specific
rules. The categorization law can be devised using spectral or
textural characteristics. Two general classification methods are
'supervised' and 'unsupervised.' In simple words, image
classification is a technique used to classify or predict the class
of a specific object in an image. This technique's main goal is to
identify the features in an image accurately. The OBIA nearest neighbor classification approach is one of the more recent methods; it delineates segments of homogeneous image areas as objects. In the next step, the delineated segments can be classified into real-world objects based on spectral, textural, neighborhood, and object-specific shape parameters and context information.
In contrast to traditional image processing methods, the basic
processing units of OBIA classification are image objects or
segments, not single pixels, and even the classification acts on
image objects. One motivation for the object-oriented approach
is that the expected result of many image analysis tasks is the
extraction of real-world objects, proper in shape and proper in
classification. Common pixel-based approaches cannot fulfill this expectation. Figure 1 illustrates an area that contains extensive built-up and water surfaces at various levels of development.

Figure 1: Map of the Baku Region, Absheron Peninsula


Mastering the Skills:
There are many image classification techniques inside the eCognition software. One of the most used is the Nearest Neighbour (NN) classifier, a supervised classification approach whereby training samples are located and used to classify all remaining (unknown) objects in the image for each class required. The NN classifier has been used successfully for many classification problems.
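As a minimal sketch of the idea behind the Standard NN (using scikit-learn rather than eCognition's own implementation, and with made-up feature values), each unclassified object is simply assigned the class of the most similar sample object in feature space:

    # Nearest-neighbour classification of image objects, sketched with scikit-learn.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Hypothetical per-object mean band values (rows = sample objects).
    X_train = np.array([[0.05, 0.02], [0.30, 0.45], [0.22, 0.18]])
    y_train = np.array(["Water Surface", "Green-Cover", "Built-up Area"])
    X_all = np.array([[0.06, 0.03], [0.28, 0.40], [0.21, 0.20]])  # all objects

    clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
    print(clf.predict(X_all))  # class of the closest sample, per object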
Step 1: Creating a Project
1.1) As with previous works within eCognition Developer,
the first step is to create a project containing all the datasets
required for the study. You will create a subset of the inputted


images once again. Your project should have the same parameters as those shown in former tutorials.
1.2) For the current tutorial, we clipped the subset area from the Sentinel-2 image, covering part of Baku City, with Minimum X: 400430, Maximum X: 414230, Minimum Y: 4482250, and Maximum Y: 4490670 as the bounding coordinates (Figure 2).

Figure 2: A project adjusted for the northern part of the Baku city, RGB image
1.3) As with previous works within eCognition Developer, you may prefer to create a project containing all the datasets required for your desired study area. We recommend creating a small subset of the inputted images with the same parameters as in former tutorials.
1.4) Note the order in which the image bands were loaded, i.e., the Blue (B2), Green (B3), Red (B4), and VNIR (B8) bands first, as these will decide the project's image resolution (10 m). In this tutorial, the 20 m VNIR (B8a) and SWIR (B11) bands are resampled to the 10 m resolution of the first set of bands.
1.5) Once you have matched your project window to that shown in Figure 2, select OK and create your project.
Step 2: Setting-Up the Class Hierarchy
2.1) For classification, the first task is to create the classes you
require and (in this case) to insert the Nearest Neighbour Feature
into each class.
2.2) To create a class, you require the 'Class Hierarchy' window
(shown in Figure 3) to be open. If the window is not already
visible, then click on the icon.

Figure 3: Class Hierarchy Window before inserting any classes


2.3) To insert a class, right-click in the Class Hierarchy window
and select 'Insert Class' (Figure 4).


Figure 4: Inserting a class into the class hierarchy


2.4) This provides you with an empty 'Class Description' window (Figure 5).

Figure 5: Class Description window for the Water Surface class


2.5) The next step is to edit your class description by giving your class a name. For example, name the class "Water Surface" and assign it a blue color (Figure 5). When you have done this, insert and name the other new classes. You should then have five classes inserted and named: Water Surface, Green-Cover, Built-up Area, Mixed Landuse, and Non-Vegetated.
2.6) After giving each class a name, select an appropriate color
for each. It can be anything you wish, although the final
classification will be easier to understand and interpret if you
choose a logical color (e.g., Green for vegetation).
2.7) Next, the features (e.g., mean object spectral response) to be
used for classification (in this case, the standard nearest
neighbor algorithm) need to be inserted into the class. To do
this, right-click on the 'and (min)' and select 'Insert new
Expression' (Figure 6).

Figure 6: Inserting a new expression into the Built-up class


2.8) This will present the window, where you need to select
'Standard Nearest Neighbour' and click Insert (Figure 7).


Figure 7: Selecting the expression to be used for the classification


2.9) Your resulting class description should be similar to that
shown in Figure 8 for the Built-up class, for example.


Figure 8: The resulting Built-up class description used for the classification
2.10) The same procedure needs to be repeated for the
remaining classes so that you end up with a classification
hierarchy similar to that shown in Figure 9.

Figure 9: The class hierarchy outline


2.11) To select the features used for the nearest neighbor classification, use the 'Edit Standard NN feature Space' function (Figure 10). Initially, you should use the mean spectral values of the objects.

Figure 10: The menu for editing the NN feature space


2.12) When the Edit Standard NN feature Space dialog box
opens, you can set the parameters as you want (Figure 11).

Figure 11: Editing dialog for selecting the features within the NN
feature space
Step 3: Setting up the Process Tree


3.1) As with all classifications in eCognition Developer, the first task is to perform segmentation. In this case, using a multiresolution segmentation is recommended, although you could investigate others.
3.2) As with previous units, it is recommended you create an
outline within your process tree mirroring that outlined in Figure
12. Remember, a process is created by right-clicking in the
process tree window and selecting 'Append Process' or 'Insert
Child Process'.

Figure 12: Process outline and Segmentation process


3.3) To create the process that performs the segmentation, right-
click on the Segmentation process you have already created,
select 'Insert Child Process,' and then the algorithm
'Multiresolution Segmentation'.


3.4) Choose the parameters shown in Figure 13 and, once you have entered these parameters, click on 'Execute' to perform the segmentation.

Figure 13: Parameters used for the segmentation of the Sentinel-2 image
3.5) Note that the layer weighting for the B2, B3, B4, and B8 bands could be increased to 2. This takes advantage of the extra spatial resolution of these bands, 10 m rather than the 20 m of the other multispectral bands. In addition, you may pay attention to the weights of the Scale (52), Shape (0.3), and Compactness (0.7) parameters (Figure 14).


Figure 14: The Multiresolution Segmentation algorithm result


Step 4: Classification Process
4.1) To run the classification, you need to add a classification
process to your process tree. You could achieve it by right-
clicking on the process you named 'Classification' and selecting
'Insert Child Process.' Edit the new process such that it is similar
in appearance to that shown in Figure 15.
4.2) To select multiple classes, use the 'Shift' and 'Control' keys
as you would in Windows Explorer.

Figure 15: The process parameters used for the classification


4.3) After inputting the parameters into the process, click on the OK button at the bottom. You need to select samples before performing your classification; clicking the Active classes option opens the Edit Classification Filter (Figure 16).

Figure 16: The Edit Classification Filter


4.4) Select all classes inside the Edit Classification Filter and
click on OK.
Step 5: Selecting Samples to Train Classifier
5.1) The next stage is to select the samples for each of the five
classes. You need to have executed the segmentation process
before undertaking these steps.
5.2) To create a sample, you need to activate the tool for sample
selection (Select Samples), as shown in Figure 17.


Figure 17: Activating sample selection function


5.3) Once you have activated sample selection, highlight the
class you wish to create a sample for in the class hierarchy
window. Double-click on the objects (you wish to select as
samples) or hold down the Shift key and use a single click. To
unselect a sample, repeat the selection process for each chosen
object.
5.4) To aid the selection of your samples, eCognition Developer offers two windows of information based on the selected samples: firstly, the 'Sample Editor' window and, secondly, the 'Sample Selection Information' window (Figure 18).


Figure 18: Sample Editor Window


5.5) The Sample Editor provides a visual comparison of two
classes using a range of selected features. In Figure 18, the
Green-Cover and Water Surface are compared using the object
means from each spectral band of the Sentinel-2. When an object is selected, a red arrow indicates where the object's mean falls relative to the other samples' means (Figure 19).

Figure 19: Sample Selection Information Window


5.6) To change the displayed features, right-click within the main window and select 'Features to Display.' If you only want the features used within the NN calculation, select 'Display Standard Nearest Neighbour Features' (Figure 20).

Figure 20: The Apply Standard NN to Classes dialog box


5.7) The Sample Selection Information window displays information on the NN membership boundaries and the distances from the selected object to the samples of the other classes.
5.8) To select the classes to be displayed, right-click within the
window and select 'Select classes to Display.' Classes displayed
in red have, for the selected object, an overlap in the distance
measure, and therefore the samples may need to be re-analyzed
and altered. To set the threshold at which a class is highlighted
in red, right-click in the Sample Selection Information window
and select 'Modify critical sample membership overlap.' The
threshold ranges from 0 - 1, and 0 highlights any overlap.

5.9) Once you have selected your samples, you should have a similar-looking image. Bear in mind that the selection of samples does not have a single correct answer; just select the samples you consider most representative of the classes you wish to separate, giving the best separation in the Sample Editor and Sample Selection Information windows.
Step 6: Running the Classification Process
6.1) Now, you can execute the classification process you
previously created (right-click and select execute on the
classification process). You should now have a nicely classified
image similar to Figure 21.

Figure 21: Final classification map of the Baku region


6.2) If you are unhappy with the classification, repeat the
procedure but select more or alternative samples before
reclassifying the image. The Process Tree after the classification
process is shown in Figure 22.


Figure 22: The Process Tree after the inclusion of the classification
process
6.3) To re-run the classification, open the process and click on Execute, select the process and press F5, or right-click on the process and select 'Execute'.
Step 7: Merging the Result
7.1) The next step is to set up the processes which will merge
your classification so that all neighboring objects of the same
class will form single objects.
7.2) It is important to merge your classification to identify complete objects. For instance, you can query the Urban Area to find its complete area once merged. To merge the result, you will need to enter a merge process ('Insert Child') for each class. The merge parameters for the vegetation class are shown in Figure 23.


Figure 23: The Edit Process dialog box, the merge algorithm
parameters for the Built-up class
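Conceptually, merging neighboring objects of the same class amounts to re-labelling connected components within each class, which this small Python sketch illustrates on a made-up class map:

    # Merging same-class neighbours = connected-component labelling per class.
    import numpy as np
    from scipy import ndimage

    class_map = np.array([[1, 1, 2, 2],
                          [1, 2, 2, 2],
                          [3, 3, 2, 1]])
    merged = np.zeros_like(class_map)
    offset = 0
    for cls in np.unique(class_map):
        comp, n = ndimage.label(class_map == cls)   # components of this class
        merged[comp > 0] = comp[comp > 0] + offset  # keep labels unique overall
        offset += n
    print(merged)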
7.3) The class for merging is defined using the Image Object Domain. If you were to select multiple classes, all the selected classes would be merged together, removing the boundaries and classification of these objects.
7.4) To save time, once you have created your first merge
process, you can copy-and-paste (ctrl-c, ctrl-v, or right-click on
the process) to duplicate it and then edit the class you wish to
merge.
7.5) Once you are happy with your classification, execute the
merge image objects processes you have previously created.
Your results should appear similar to those shown in Figure 24.
7.6) The purpose of merging is to create a final classification representing the scene's objects. For example, we will now calculate the area of the whole water surface features. But be aware that merging the image objects removes your samples, as the segmentation will have changed.

Figure 24: Sentinel-2 RGB image (a) and the classified map (b)
Step 8: Feature Space Optimization Tool
8.1) To refine the classification further, eCognition Developer
offers an automated feature, the Feature Space Optimization
function, to automatically identify the features which 'best'
separate the classes for which samples have been selected
(Figure 25).


Figure 25: The NN classification menu, Feature Space Optimization function
8.2) To use this feature, delete your classification (delete level) and re-run the segmentation process. You will also need to re-select your samples, as these are deleted each time you change the segmentation (i.e., merge or delete a level).
8.3) After selecting your samples, open the Feature Space
Optimization dialog box (Figure 26).

Figure 26: The Feature Space Optimization dialog box


8.4) To use this tool, select the features you wish to compare. Initially, try the mean, standard deviation, and pixel ratio, but later try other combinations. Then select Calculate; once the calculation has finished, select Advanced to see which features offered the best separation, and 'Apply to the Std. NN' to use them within the classification. You can now run your classification step.
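A rough stand-in for this feature ranking (not eCognition's own separation metric) is to score candidate object features by an ANOVA F-statistic over the sampled classes, as in this scikit-learn sketch with made-up samples:

    # Rank candidate object features by class separation (ANOVA F-score sketch).
    import numpy as np
    from sklearn.feature_selection import f_classif

    # Rows = sample objects; columns = candidate features (means, std. devs, ...).
    X = np.array([[0.05, 110.0, 0.9], [0.07, 120.0, 0.8],
                  [0.40, 480.0, 0.2], [0.38, 500.0, 0.3]])
    y = np.array(["Water Surface", "Water Surface",
                  "Green-Cover", "Green-Cover"])
    scores, _ = f_classif(X, y)
    print(np.argsort(scores)[::-1])  # feature indices, best-separating first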
Step 9: Exporting the Result
9.1) Finally, export the results. This will produce an ESRI shapefile and create the map shown in Figure 27. Researchers usually wish to export the classification result from eCognition into a GIS, mostly ArcGIS, for further processing or the production of a map.

Figure 27: The Select Features for Export dialog box


9.2) The final process will be to export the classification to an ESRI shapefile (Figure 28).

Figure 28: Process parameters to export the classification as a shapefile
9.3) Area is found under Object Features > Shape > Generic
while class name is found under Class-Related features >
Relations to Classification > Class name.
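If you ever need to reproduce this export step outside eCognition, the hedged Python sketch below (with a made-up class array, georeference, and an assumed UTM zone) shows the equivalent polygonize-and-export operation using rasterio and geopandas:

    # Polygonize a classified array and write a shapefile (illustrative sketch).
    import numpy as np
    import geopandas as gpd
    from rasterio import features
    from rasterio.transform import from_origin
    from shapely.geometry import shape

    classified = np.zeros((100, 100), dtype=np.int32)
    classified[20:60, 30:80] = 1                      # pretend class 1 = Water Surface
    transform = from_origin(400430, 4490670, 10, 10)  # hypothetical 10 m georeference

    records = [{"geometry": shape(geom), "class_id": int(val)}
               for geom, val in features.shapes(classified, transform=transform)]
    gdf = gpd.GeoDataFrame(records, geometry="geometry", crs="EPSG:32639")  # assumed CRS
    gdf["area_m2"] = gdf.geometry.area
    gdf.to_file("classification.shp")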
9.4) You will need to create the class name feature, right-click
on the 'Create new Class name' and select Create, leave the
parameters as their default values, and select OK. The shapefile
will output to the directory within which your project is saved. If
you have not saved your project, it will export the shapefile to
the input imagery's directory.
9.5) To select the classes to export, you again edit the Image Object Domain; remember, these parameters define the image objects to which the process will apply. The outputted shapefile is named 'Classification.' The features to be exported are the area (of the image object) and the class name. Remember that, when you are working inside a trial version of eCognition 9.5, you may notice a message such as "Sorry, this feature is not supported by this version." In this case, accept the message and finish up the tutorial.
9.6) When you finish your tutorial successfully, the Process Tree
should then be the same as the one shown below in Figure 29.

Figure 29: The final "Process Tree" dialog box algorithms


Sum Up:
You can process image objects locally in specific Sentinel satellite imagery based on the classification. A specific strength of object-oriented image analysis is the circular interplay between processing and classifying image objects. Often, specific information for classification becomes available through the segmentation, scale, and shape of image objects, as you have experienced during the previous tutorials. In turn, based on the classification, specific processing algorithms, such as NNC and Support Vector Machine (SVM), can be activated. In many applications, the desired geo-information and objects of interest are extracted step by step through iterative loops of classifying and rule-based image processing.

Informative Practices
Tips:
1) Assign Class assigns a class to an image object with certain
features, using a threshold value,
2) Classification uses the class description to assign a class; Hierarchical Classification uses the class description and the hierarchical structure of classes,
3) Advanced Classification Algorithms are designed to perform
a specific classification task, such as finding minimum or
maximum values of functions or identifying connections
between objects.
Workouts:
1) Experiment with different classification methods, and be
aware of what types of landcover/landuse you need to create.
2) Experiment with different features within the standard NN
feature space. (Classification > Nearest Neighbor > Edit
Standard NN feature space).
3) Experiment with different features and maximum dimension levels within the feature optimization tool by applying Classification > Nearest Neighbor > Feature Space Optimization.
Quizzes:


1) Could you list the process parameters used for the classification?
2) What is the Sample Editor Window main functionality?
3) What does the Feature Space Optimization Tool do?
Allied References:
1) Foody, G. (1996) Approaches for the production and evaluation of fuzzy land cover classifications from remotely-sensed data. Int. J. Remote Sens., 17, 1317–1340.
2) Franklin, S. E., & Wulder, M. A. (2002) Remote sensing methods in medium spatial resolution satellite data landcover classification of large areas. Progress in Physical Geography, 26, 173−205.
3) Di Gregorio, A. (2005) Land Cover Classification System (LCCS), version 2: Classification Concepts and User Manual. FAO Environment and Natural Resources Service Series, No. 8, Rome: FAO.
4) Haase, G. (1989) Medium-scale landscape classification in the German Democratic Republic. Landscape Ecology, 3 (1): 29-41.
5) Kaplan, G., Avdan, U. (2017) Object-based water body extraction model using Sentinel-2 satellite imagery. European Journal of Remote Sensing, 50(1), p. 137-143.
6) Leckie, D.G., Gougeon, F.A., Tinis, S., Nelson, T., Burnett, C.N., & Paradine, D. (2005) Automated tree recognition in old-growth conifer stands with high-resolution digital imagery. Remote Sensing of Environment, 94, 311-326.


Tutorial 5
Threshold Rule-Setting With eCognition

inside the eCognition, a child process is a matter

Opening Statement:
The main purpose of the current tutorial is to teach the
application of combined methods of thresholding rule-set
techniques in the classification of water bodies. For this practice,
there is a need for six high-resolution bands of Sentinel 2A
imagery subsetted inside the eCognition 9.5 version for the
Neftchala Peninsula, Azerbaijan. Accordingly, you will first need to load the satellite imagery, create a new project, and segment the imagery. In the second step, you will create two customized indices, the Normalized Difference Water Index (NDWI) and Brightness, which help you start a threshold-based classification of water bodies in a wetland comprising the sea, rivers, water canals, ponds, and lakes.
Initial Memo:
✓ Level: Intermediate,
✓ Time: This tutorial should not take you more than 2 hours,


✓ Resources: eCognition software, Version 9.5,


✓ Data Sources: Sentinel 2A imagery,
L1C_T39SUD_A022097_20210530_Tif
✓ Subject Scene: Azerbaijan, Caspian Sea, Neftchala Peninsula.
Tutor Objectives:
By the end of this unit, you should:
Learn how to create a multiresolution segmentation within eCognition Developer,
Distinguish how to create customized NDWI and Brightness indices,
Recognize the difference between absolute and fuzzy thresholds,
Introduce threshold-based classification of water bodies.
Background Concepts:
The first step of an eCognition image analysis is to cut the
image into pieces, which is called segmentation, and there is a
choice of several algorithms to do this. Another OBIA step typically follows, for example, computing a few spectral indices, to yield more practical information from high-resolution satellite imagery. Such information can be used in threshold rule-sets that determine whether an image object matches a condition or not.
Typically, you can apply thresholds in class descriptions if a
feature can separate classes. It is also possible to assign image
objects to a class based on only one condition; however, the
advantage of using class descriptions lies in combining several
conditions. The concept of threshold conditions is also available
for ruleset-based classification; in this case, the threshold
condition is part of the domain and can be added to most
algorithms. This tutorial will practice some of these concepts in
the eCognition 9.5 software environment in a part of the Neftchala Peninsula, mainly dominated by water surfaces (Figure 1).

Figure 1: The location of Neftchala Peninsula, Azerbaijan


Mastering The Skills:
Following on from the previous tutorials, which implemented a more detailed classification such as the nearest neighbor, this tutorial continues teaching OBIA methods. Its main aim is to provide you with experience in entering thresholds for a rule-based classification and creating the corresponding image objects to classify water surfaces. Multiple tools are at the user's fingertips inside the eCognition software to achieve this aim.
Step 1: Create a New Project
A project is the most basic format in eCognition Architect. A
project contains one or more maps and optionally a related rule
set. Projects can be saved separately as a *.dpr project file, but one or more projects can also be stored as part of a workspace. To create a simple project:
1.1) Open the eCognition Developer Trial version 9.5.
1.2) Go to File > New Project in the main menu.
1.3) The Import Image Layers dialog box opens.
1.4) Inside the "Import Image Layers" dialog box, you can access the Local Disk, main folders, subfolders, and the other options shown in Table 1. Here, we take an example of a Sentinel 2A image from the western coastal part of the Caspian Sea, Azerbaijan (Figure 2).

Figure 2: The Import Image Layers dialog box


1.5) Table 1 lists all items of this dialog box with their main functions.

Table 1: The Import Image Layers dialog box main functions

1) Data source panel icon
2) Local Disk, for example, C:/
3) Main Folder, for example, Azer-images
4) Subfolder, for example, Mud-Volcano
5) Click to add to favorites
6) Click to restore layouts
7) Click to search subfolders
8) Main folder name
9) Subfolder name
10) Image layers
11) Image band preview
12) Image properties
13) Access to the folder
14) File name filtering
15) File type (*.tif, *.img, and many other extensions)
16) File path
17) Confirming the next step
18) Canceling the import layers process

1.6) Select the image bands you wish to import, then press
the Ok button to display the Create Project dialog box.


Step 2: Managing the Project Dialog Box


2.1) The Create Project dialog box gives you several options.
You can edit these options at any time by selecting File >
Modify Open Project as follows:
1) The Menu Bar has the same functions inside the Create
Project Dialog Box (Figure 3).

Figure 3: The Project Dialog Box


2) You may change the name of your project in the Project
Name field. The Map selection is not active here but can be
changed in the Modify Project dialog box after project
creation is finished.
3) If you load two-dimensional image data, you can define a subset using the Subset Selection button. If the complete scene to be analyzed is relatively large, subset selection enables you to work on a smaller area to save processing time (Figure 4).

Figure 4: Subset Selection Dialog Box


4) You can view the image metadata and associated
information. Here for image bands, the pixel resolution is
presented. Select the Use Geocoding checkbox to use the
geocoding information from an image file to be imported.
You could set pixel size for each unit:
a) If geocoding information is included, the pixel size equals
the resolution. In other cases, pixel size is 1. The image
bands are listed here.
b) If you want to rescale the scene during import, edit the scale factor in the text box corresponding to the scaling method used: resolution (m/pxl), magnification (x), percent (%), or pixel (pxl/pxl). You can also edit the Pixel Size (Unit) used for feature calculations, value display, and export.
c) If you keep the default (auto), the unit conversion is applied
according to the unit of the coordinate system of the image
data. You may want to ignore the unit information from the
included geocoding information in special cases. To do so,
deactivate Initialize Unit Conversion from the Input File
item in Tools > Options in the main menu. Geocoding is the
assignment of positioning marks in images by coordinates.
In earth sciences, position marks serve as geographic
identifiers. But geocoding is helpful for life sciences image
analysis too. Typical examples include working with
subsets, multiple magnifications, or thematic layers to
transfer image analysis results. Typically, available
geocoding information is automatically detected.
d) If not, you can enter coordinates manually. Images without
geocodes automatically create a virtual coordinate system
with a value of 0/0 at the upper left and a unit of 1 pixel.
For such images, geocoding represents the pixel coordinates
instead of geographic coordinates.
5) The Image Layer pane allows you:
a) to insert, remove and edit image layers (Figure 4). The
order of layers can be changed using the up and down
arrows.


b) If you use multidimensional image data sets, check and edit the multidimensional map parameters. If you load two-dimensional image data, you can set the value of those pixels that are not to be analyzed. Select an image layer and click the No Data button to open the Assign No Data Values dialog box.
c) If you import image layers of different sizes, the largest
image layer dimensions determine the scene's size. When
importing without geocoding, the smaller image layers keep
their size if the Enforce Fitting check box is cleared.
d) If you want to stretch the smaller image layers to the scene
size, select the Enforce Fitting checkbox. Also, you can
change the band name by activating the Layer Properties
dialog box (Figure 5).

Figure 5: The Layer Properties dialog box and the geocoding information
6) Thematic layers can be inserted, removed, and edited
similarly to image layers.


7) If not done automatically, you can load Metadata source files to make them available within the map.
8) If you click on the OK option, you can create a new project
inside the eCognition Developer.
Step 3: Rearrange the eCognition Setting
When you start any eCognition project, you can apply your own modifications and settings.
3.1) You may prefer to change the color composition of the subsetted image.
3.2) For Sentinel-2, the RGB composite is as natural colors (4,
3, 2); false color Infrared (8, 4, 3); false color urban (12, 11, 4);
agriculture (11, 8, 2); healthy vegetation (8, 11, 2); land/water
(8, 11, 4).
3.3) You could also rearrange the eCognition main windows for
better use, as shown in Figure 6.

Figure 6: A new project for the Neftchala Peninsula, subsetting the Sentinel-2 image inside eCognition Developer


Step 4: Setting up the Process Tree


eCognition provides an artificial language for developing
advanced image analysis algorithms. These algorithms use
object-oriented image analysis and local adaptive processing
principles. You could achieve this by developing a series of
processes, also known as a rule-set. A single process is the
elementary unit of a rule-set providing a solution to a specific
image analysis problem. A single process allows the application
of a specific algorithm to a specific region of interest in the
image, the image object domain. All conditions for classification
and region of interest selection may incorporate semantic
information. Processes may have an arbitrary number of child
processes. The hierarchy formed in this way defines the structure and flow control of the image analysis. Arranging processes containing
different types of algorithms allows the user to build a
sequential image analysis routine.
4.1) To begin, the process window will need to be opened. On
the main toolbar, under Process Tab, you can click Process Tree
to open it. Remember that you can dock this window at any
location within or outside the software.
4.2) Right-click within the Process Tree window, and a context
menu offers commands for process management (Figure 7).


Figure 7: Context menu of the Process Tree window


4.3) Select Append New Process or click Ctrl + A. It will now
open the Edit Process dialog box (Figure 8).
4.4) Give it a name, for instance "Threshold Rule-Sets," and click OK. This process will serve as a parent process for all your algorithms to fall underneath. It is handy because you can execute this single process, and all other algorithms will subsequently execute as well.


Figure 8: Edit Process dialog box with the named "Threshold Rule-Sets" process
4.5) For detailed information regarding the Edit Process dialog
box, see the contents of the previous tutorial.
Step 5: Operating a Multiresolution Segmentation Algorithm
An Object-based classification requires image objects, so you
need to insert a child process employing a segmentation
algorithm.
5.1) Right-click on the "Threshold Rule-Sets" process and select
the Child Process option.
5.2) The "Edit Process" dialog box is displayed. Select the
multiresolution segmentation algorithm to segment pixels into
objects (Figure 9).
5.3) Define a new level, the level that the image objects are
stored on, as Level-1.
5.4) Adjust the image layer weight settings to emphasize the near-infrared band (B8-VNIR), given that you have only three visible bands (B2, B3, and B4) and a SWIR band. Keep in mind that such a Sentinel-2 band combination makes a useful RGB composite for recognizing water surfaces.
5.5) Finally, you need to update the scale parameter (52) to make your image objects a bit larger, and then adjust the shape and compactness settings to emphasize shape (0.3) a little more and to try to get more compact objects (0.7).

Figure 9: Edit Process dialog box, defining a multiresolution segmentation algorithm
5.6) When you are happy with the settings, click on the OK button.
5.7) To execute the multiresolution segmentation algorithm, click on it and choose the Execute button. This runs the algorithm, grouping the pixels into objects. In Figure 9, more details are given by a sequence of numbers.
5.8) Figure 10 shows a segmented image and the View Settings arrangement.

Figure 10: The segmented process result for the part of the Neftchala
Peninsula
5.9) You can then toggle the image object outlines on and off using the Show/Hide Outlines button ( ). To try more options, see the previous tutorials.
Step 6: Getting the Image Objects Information
The attributes of image objects, known as features within eCognition, are displayed within the Image Object Information window. Not all features are displayed by default.
6.1) So, by right-clicking and choosing Select Features to
Display, you can choose from the available list of features
(Figure 11).


Figure 11: Image Object Information window


6.2) Some of the most popular ones are under object features,
layer values, and mean values (Figure 12).

Figure 12: Select displayed Features box


6.3) For this aim, you have to add the mean values for all image layer bands.


6.4) Then, go down to Geometry > Extent and double-click on Area to add the Area feature, which gives the area of each image object in pixels.
6.5) When you select an object, you'll notice that the
corresponding feature information is displayed in the Image
Object Information window (Figure 13).

Figure 13: A selected image object-related features


6.6) Clicking around on the image objects, you will see that the
mean band values are useful but not yet perfect for separating
water bodies from the other features.
Step 7: Getting Customized Features
7.1) To create a customized feature, click on Tools from the
main Menu and select the Manage Customized Feature option
(Figure 14).


Figure 14: Manage Customized Features dialog box


7.2) Click on the Add option to open the Customized Features dialog box. A customized feature allows you to create indices such as the NDWI. You can give the feature a name and plug in the formula for the NDWI (Figure 15).

Figure 15: The Customized Features dialog box, a formula for NDWI


7.3) The following equation calculates the NDWI for Sentinel-2: NDWI = ([Mean B3-Green] - [Mean B8-VNIR]) / ([Mean B3-Green] + [Mean B8-VNIR]). The NDWI monitors changes related to the water content of water bodies. As water bodies strongly absorb light in the visible to infrared electromagnetic spectrum, the NDWI uses the green and near-infrared bands to highlight water bodies. It is sensitive to built-up land and can result in the over-estimation of water bodies.
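Expressed in Python (a sketch with plain numpy arrays standing in for the band means), the same formula, together with a simple mean-brightness feature, looks like this:

    # NDWI and a simple mean-brightness feature from band arrays (numpy sketch).
    import numpy as np

    def ndwi(green, nir):
        green, nir = green.astype(float), nir.astype(float)
        return (green - nir) / np.maximum(green + nir, 1e-9)  # guard divide-by-zero

    def brightness(*bands):
        return np.mean([b.astype(float) for b in bands], axis=0)

    b3 = np.array([[300.0, 1200.0], [280.0, 1150.0]])  # made-up green values
    b8 = np.array([[900.0, 300.0], [870.0, 310.0]])    # made-up NIR values
    print(ndwi(b3, b8))  # positive values suggest water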
7.4) When you select an object, the NDWI index appears in the
Image Object Information window (Figure 16).

Figure 16: NDWI values showing inside the "Image Object Information" dialog box
7.5) The NDWI index provides a robust means to separate water
bodies from vegetated and impervious surfaces. The values
within the Image Object Information window help you to assist
with bands and spectral indices information.


7.6) You can also display the actual values by going to the
Feature View window. Double-clicking on NDWI, for example,
assigns a gray-scale color ramp based on the NDWI values to
each of the objects.
7.7) By going to the lower left-hand corner of the Feature View window and selecting the checkbox, you can play around with the actual NDWI value ranges. You'll probably want to right-click on the feature and choose Update Range first to get the full range of values. You can then use the arrows to select the lower and upper bounds of the range.
7.8) This isn't doing classification; it's just previewing what
would happen if you use these threshold values for classification
(Figure 17).

Figure 17: Adjusted NDWI values for the water bodies, including the
segmented objects


7.9) When you are satisfied with the adjusted water-body range (Figure 18), set using the Feature View window tools, you may move to the next step.

Figure 18: Illustration of the water bodies (shown in green)


Step 8: Threshold-Based Classification
Now that you have a good idea of suitable thresholds for the features, you can classify the matching objects into a water-bodies class.
8.1) Right-click on the multiresolution segmentation algorithm, choose Append New, and insert the assign class algorithm (Figure 19).


Figure 19: The Edit Process dialog box, an assign class algorithm with class filter and conditions
8.2) For classification, you can use the assign class algorithm, a very simple algorithm suited to threshold-based classification.
8.3) Under the class filter, check the box for unclassified so that the process focuses only on objects that are still unclassified (Figure 20).

Figure 20: The "Edit Classification Filter" dialog box


8.4) Then go to Condition and click the ellipsis button. The first condition uses the NDWI values: under Value 1, choose From feature, go to the customized features, and select NDWI. Now select anything with a value greater than 0.02. This threshold condition selects all unclassified image objects that have an NDWI value greater than 0.02 and a Brightness <= 1150 (Figure 21).
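Conceptually, the assign class algorithm just tests per-object statistics against thresholds. A minimal sketch of the same water rule computed on per-object means, assuming objects is the label raster from the segmentation sketch and ndwi_img and brightness_img are per-pixel rasters (eCognition's Brightness feature is, roughly, the mean of the selected layers' mean values):

    import numpy as np

    def object_means(values: np.ndarray, labels: np.ndarray) -> np.ndarray:
        """Per-object mean of `values`, for labels 1..N (index 0 = object 1)."""
        sums = np.bincount(labels.ravel(), weights=values.ravel().astype(np.float64))
        counts = np.bincount(labels.ravel())
        return sums[1:] / np.maximum(counts[1:], 1)

    mean_ndwi = object_means(ndwi_img, objects)
    mean_bright = object_means(brightness_img, objects)

    # The water rule from above: NDWI > 0.02 and Brightness <= 1150.
    water_ids = np.where((mean_ndwi > 0.02) & (mean_bright <= 1150))[0] + 1
    water_mask = np.isin(objects, water_ids)      # per-pixel water map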

Figure 21: Defining two thresholds, NDWI and Brightness values, for the water bodies
8.5) By accepting these conditions, the result of the thresholding
process is shown in Figure 22.

Figure 22: The result of the thresholding process for the Neftchala Peninsula water bodies (panels: Sentinel-2 RGB image, threshold rule set, classified NDWI map)
Step 9: Looking for an Alternative Approach

Now let's look at an alternative approach, in which you separate the classification into distinct steps rather than using both conditional statements within a single assign class algorithm.
9.1) In the next step, you can likewise assign a threshold to a new Green-Cover class, because vegetation is closely interrelated with the water bodies in the wetland area you just classified. When you create the class, the Class Description dialog pops up, and you can choose a color to represent it.
9.2) Once you right-click and execute this algorithm, any objects that meet the criteria, that is, an NDVI greater than 0.25 and an Average Brightness >= 2000, will be assigned to the Green-Cover class. With some trial and error, you can apply more accurate thresholds to determine the best NDVI and Average Brightness values (Figure 23).

Figure 23: The Edit Condition dialog box with NDVI and Average Brightness values
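The stepwise approach maps naturally onto the same sketch: one rule per class, each run touching only objects that are still unclassified. The 0.25 and 2000 thresholds mirror the values above; ndvi_img is an assumed, precomputed NDVI raster, and the other names come from the earlier sketch:

    import numpy as np

    # Step-wise rules: 0 = unclassified, 1 = water, 2 = green cover.
    object_class = np.zeros(mean_ndwi.size, dtype=np.int32)
    object_class[(mean_ndwi > 0.02) & (mean_bright <= 1150)] = 1   # water rule

    mean_ndvi = object_means(ndvi_img, objects)
    still_open = object_class == 0                 # touch only unclassified objects
    object_class[still_open & (mean_ndvi > 0.25) & (mean_bright >= 2000)] = 2

    # Back to a per-pixel map, assuming every pixel carries a label 1..N.
    class_map = object_class[objects - 1]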
9.3) The View Classification button displays the classification, which you can show as either outlines or a solid fill. Overall, NDVI was very effective in helping us classify vegetated objects (Figure 24).

Figure 24: The result of the thresholding process for the green-cover
classification


9.4) You can go back into the assign class algorithm and remove the brightness condition; you may reinsert it later in a different assign class algorithm. Rerunning the segmentation algorithm every time you change the classification thresholds is not very efficient.
9.5) To support this, you can insert the 'remove classification' algorithm. It goes right after the segmentation algorithm and simply clears the classification from the image objects. It is a very quick algorithm and an efficient one to run when you want to play around with classification parameters.
9.6) Finally, you can go back to the original rule sets and adjust the NDWI, NDVI, or Brightness thresholds, modifying the less-than or greater-than values that assign objects to the water or vegetation classes.
Sum Up:
This tutorial introduced you to threshold- and rule-based classification inside eCognition 9.5 by processing Sentinel-2 imagery subset to part of the Neftchala Peninsula near the Caspian Sea. Along the way you handled the following procedures:
✓ creating a project by loading Sentinel-2A images,
✓ running a multiresolution segmentation to create image objects,
✓ obtaining information associated with NDWI indexing,
✓ producing a threshold- and rule-based classification of water bodies.


Informative Practices
Tips:
1) Note that the eCognition 9.5 trial software access is not limited
to a specific period.
2) Keep in mind that export functions, saving projects, and the
workspace environment are restricted.
3) The most important point is that you cannot open the rule sets
saved in trial software in a fully-licensed version of eCognition
software. If you are interested in exporting and analyzing the
processed data, use the eCognition authorized versions.
Workouts:
1) Experiment with different segmentation algorithms and parameters. Ideally, you should not have to edit the thresholds you have entered in order to reclassify the resulting segments; still, you may notice varying levels of accuracy between different segmentations.
2) The classification produced during this unit is superficially OK
but contains numerous errors when viewed in more detail. Try
to improve the quality of the classification through the
refinement of the existing rules.
3) In addition to the rules used within this classification, there may be other features available within eCognition Developer that could aid the classification.
Quizzes:
1) Which subsetting options are selectable inside the Subset
Selection Dialog Box?
2) What exactly does the Threshold Classification Process do?
3) Why do you use the "assign class algorithm" through a
threshold-based classification procedure?
Allied References:
1) European Space Agency (2015). Sentinel-2 MSI: Overview. https://sentinel.esa.int/documents/247904/685211/Sentinel-2_User_Handbook.
2) Athelogou, Maria; Schmidt, Günter; Schäpe, Arno; Baatz,
Martin; Binnig, Gerd (2007). Cognition Network Technology –
A Novel Multimodal Image Analysis Technique for Automatic
Identification and Quantification of Biological Image
Contents. Imaging Cellular and Molecular Biological
Functions. Principles and Practice. pp. 407–422.


3) Blaschke, T. and Strobl, J. (2001). What's wrong with pixels? Some recent developments interfacing remote sensing and GIS. GIS - Zeitschrift für Geoinformationssysteme, 6, 12-17.
4) Blaschke, T. (2010). Object based image analysis for remote sensing. ISPRS Journal of Photogrammetry and Remote Sensing, 65(1), 2-16.
5) Lucas, R.M., Rowlands, A., Brown, A., Keyworth, S. and Bunting, P. (2007). Rule-based classification of multi-temporal satellite imagery for habitat and agricultural land cover mapping. ISPRS Journal of Photogrammetry and Remote Sensing, 62(3), 165-185.
6) Torres-Sánchez, J., López-Granados, F. and Peña, J.M. (2015). An automatic object-based method for optimal thresholding in UAV images: Application for vegetation detection in herbaceous crops. Computers and Electronics in Agriculture, 114, 43-52.


Tutorial 6
Practicing Change Detection with OBIA

OBIA provides accurate change detection


Opening Statement:
In the current tutorial, you will learn how to detect changes in a small part of the Caspian Sea by processing Sentinel-2 imagery sampled in 2015 and 2021. The functionality is explained step by step, based on a change detection example that uses the "Unsupervised Classification (USC)" algorithm within Trimble eCognition 9.5. The algorithm follows the Iterative Self-Organizing Data Analysis Technique (ISODATA) approach and creates a single raster land-cover output with pixel values corresponding to water and non-water classes. Furthermore, you will learn how to apply the crucial "copy map" algorithm and how combining the two can strengthen a USC rule set.
Instructive Memo:
✓ Level: intermediate and advanced levels,
✓ Time: This tutorial should not take you more than 2 hours.
✓ Software: an eCognition Developer version 9.5,
✓ Data Sources: Sentinel-2 images for 2015 and 2021

✓ Subject Scene: Caspian Sea, Gil Island.


By the end of this unit, you will:
✓ create two independent maps,
✓ understand the nature of the unsupervised ISODATA approach,
✓ customize the algorithm for USC and segmentation,
✓ assign classes to each cluster,
✓ classify water and non-water classes in both maps individually,
✓ illustrate the actual changes in the Caspian Sea.

Background Concepts:
Coastal zone monitoring is an important task in national development and environmental protection, and shoreline extraction should be regarded as fundamental to it. Very dynamic coastlines such as the Caspian Sea coasts and islands can pose considerable risk to the surrounding countries' socio-economic development. Given the rapid advances in image processing methods, modern and reliable OBIA techniques are required to detect changes and update the coastline geodatabases of these areas, so that rates of physical and ecological retreat can be explored. Natural and artificial land features are very dynamic, changing somewhat rapidly within our lifetime; by detecting such changes accurately, you can more fully understand the physical and human processes at work. Advanced OBIA plays a unique role in making the Caspian Sea changes easier to interpret.
In eCognition Developer, you can work with so-called 'maps.' A map is a sub-project that can be processed independently, and one project can contain several maps. The original scene is always the 'main' map; all other, newly created maps can have individual names. One requirement of the change detection process is to classify multispectral images quickly and correctly. USC is classification in which the outcomes (groupings of pixels with common characteristics) are based on the software's analysis of an image without the user providing sample classes. eCognition uses techniques to determine which pixels are related and groups them into classes. USC with cluster algorithms is often used when there are no field observations or other reliable geographic information. For USC, eCognition users can execute an ISODATA cluster analysis and categorize continuous pixel data into classes/clusters having similar spectral-radiometric values.
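For intuition, the heart of ISODATA is a k-means-style loop plus splitting of high-variance clusters and merging of close cluster centers (the controls that appear later in Table 1). A much-reduced sketch of the clustering loop, with the split/merge steps noted but omitted; all names are illustrative:

    import numpy as np

    def simple_isodata(pixels: np.ndarray, k: int = 5, iterations: int = 20):
        """Reduced ISODATA-style clustering on a (n_pixels, n_bands) array.

        Full ISODATA additionally splits clusters whose standard deviation
        exceeds a maximum and merges centers closer than a minimum distance;
        both steps are omitted here for brevity.
        """
        pixels = pixels.astype(np.float64)
        rng = np.random.default_rng(seed=0)
        centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
        labels = np.zeros(len(pixels), dtype=np.int32)
        for _ in range(iterations):
            # Assign every pixel to its nearest cluster center.
            dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Move each center to the mean of its member pixels.
            for c in range(k):
                members = pixels[labels == c]
                if len(members):
                    centers[c] = members.mean(axis=0)
        return labels, centers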

Mastering The Skills:


Inside eCognition, one application of maps is change detection using two maps. A main map contains all image layers from both points in time (T1 = 2015 and T2 = 2021); then two independent maps are created, each holding only the image layers of one point in time. These are processed with the ISODATA approach, spectral indexing, and multi-threshold segmentation algorithms, the most straightforward techniques for an accurate final classification and a logical way to detect actual change from both maps.


Step 1: Preparation of Sentinel-2 Imagery

1.1) To meet the main aims of the current tutorial, you need to download Sentinel-2 images from ESA's Open Access Hub (https://scihub.copernicus.eu), selected for specific dates in 2015 and 2021. Basic information on the Sentinel-2 satellites is given in the previous tutorials.
1.2) For the current tutorial, we selected the environmentally sensitive coastal part of the Caspian Sea in Azerbaijan, internationally recognized as the world's largest lake and of global importance.
1.3) To achieve the main objectives of the current practice, several image pre-processing steps have to be applied to the Sentinel-2 zipped datasets, including opening the downloaded files and managing the corresponding GeoTIFF bands. The details of the conversion methods are given in the previous tutorials.

Step 2: Starting the eCognition Software

2.1) eCognition 9.5 is such powerful software that it is a challenge to summarize all of its advantages and benefits within one tutorial, but let's try to introduce some of its main concepts by working through specific change detection tasks and learning how to apply the main approaches step by step.
2.2) If you do not have access to the licensed version of the eCognition software, the eCognition 9.5 trial is available as a free installation; for more details on this process, read the previous tutorials. It must be mentioned that the trial version is limited: saving rule sets, saving projects, and export functions are disabled. Still, there is much to experience and learn about advanced image processing from such a capable developer program.
2.3) As soon as you can access the trial version of eCognition 9.5, you can follow the main functionality of the software. You may begin this tutorial by loading the Sentinel-2 imagery into eCognition Developer.

Step 3: Setting image layers of the ‘main map’

3.1) From the main menu, select File > New Project to access the two downloaded sets of multispectral image layers from a subset of a Sentinel-2 scene. T1 comprises the layers from 2015-09-14 and T2 those from 2021-09-27. Evaluate the loaded image layers in the new project, named Gil-Island Changes.
3.2) When you set the desired bands inside eCognition, you will notice the following screen, in which the layers of T1 and T2 are displayed (Figure 1).


Figure 1: Create Project window; notice the selected Sentinel-2 T1 and T2 bands
3.3) It is preferable to start with a small area subset from the main Sentinel-2 image, as you can see in Figure 2.

Figure 2: Create Project, Subset Selection option


3.4) Click the 'Edit Image Layer Mixing' button in the 'View' toolbar or go to the main menu View > Image Layer Mixing. Note that in the lower right corner of the viewer, you can see which map is currently displayed (Figure 3). In our example right now, it is the 'main' map.

Figure 3: The Image Layers of T1 (2015) are displayed

3.5) Click on the up arrow in the lower right of the 'Edit Image Layer Mixing' dialog box until the bullets have moved completely to the T2 multispectral layers (Figure 4).


Figure 4: The Image Layers of T2 (2021) are displayed

Step 4: Creating two independent maps

4.1) Right-click in the Process Tree and select Append New. Set up the Edit Process dialog box as in Figure 5.

Figure 5: The Edit Process dialog box, with the parent process named Gil-Island Changes


4.2) Right-click on the Gil-Island Changes process and select the Insert Child option. Set up all 'copy map' algorithm parameters as in Figure 6.

Figure 6: Process settings of algorithm ‘copy map’

4.3) The most important parameters of this algorithm are listed below; a toy sketch of the underlying idea follows the list.

a) The 'Source Region' defines whether you want to create a
map from the full extent of the original scene or if a region is a basis
for the new map.
b) The ‘Target Map Name’ is where the name of the new map is
defined.
c) If the map to be created should have a different resolution, this is defined in the 'Scale' field.


d) In the field 'Image Layers,' the Image layers needed for the
new map are defined. If nothing is set, all Image layers of the source
map are copied to the new map.
e) In the field 'Thematic Layers,' the thematic layers for the new
map are defined.
f) If 'Yes' is set in the field 'Copy Image Object Hierarchy,' the
existing Image Object Levels are copied to the new map. If you want a
backup map, you will use this option.
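Conceptually, a map is just a named container of image layers, and 'copy map' copies a chosen subset of those layers into a new container. A toy sketch of that idea (not eCognition's actual API; all names are illustrative):

    import numpy as np

    # Toy model only: a project holds named maps; each map holds named
    # image layers as arrays.
    def layer():
        return np.zeros((100, 100), dtype=np.float32)

    project = {"main": {"T1_B3": layer(), "T1_B8": layer(),
                        "T2_B3": layer(), "T2_B8": layer()}}

    def copy_map(project, source, target, image_layers=None):
        """Copy selected layers (all when None) from `source` into a new map."""
        src = project[source]
        names = image_layers if image_layers is not None else list(src)
        project[target] = {n: src[n].copy() for n in names}

    copy_map(project, "main", "MapT1", image_layers=["T1_B3", "T1_B8"])
    copy_map(project, "main", "MapT2", image_layers=["T2_B3", "T2_B8"])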
4.4) The process settings to create the map for T1 are shown in
Figure 7.

Figure 7: Process Tree with a process to create the MapT1 highlighted

4.4.1) Double-click on the first child process, 'copy map to MapT1,' to open it. In the Domain, the default settings are kept: no threshold must be set and no specific map chosen, as only one map exists so far.
4.4.2) The Source Region is kept as 'none' because no region exists that could serve as the basis for the map; you will copy the full extent of the loaded subset to the new map.


4.4.3) In the 'Target Map Name' field, type in the name 'MapT1' for the new map.
4.4.4) In the field 'Scale,' the default setting ‘Use current scene
scale’ is kept, as no change in resolution for the change
detection analysis is needed.
4.4.5) In the field 'Image Layers,' only the layers from T1 are
chosen. The new map will then contain only these layers. Click
on the '…' next to the 'Image Layers' field. The 'Select Image
Layers' dialog box opens (Figure 8).

Figure 8: The 'Select Image Layers' dialog box, only the T1 Layers are
selected
4.4.6) To create 'MapT1' and explore the new map, execute the process.
4.4.7) To display a map, use the dropdown list in the ‘View
Navigate’ toolbar, right beside the ‘Delete Level’ button. Select
‘MapT1’ (Figure 9).


Figure 9: The name of the displayed map appears in the lower right
corner of the viewer window, now ‘MapT1’ is displayed.
4.4.8) Now repeat the settings mentioned above to create a map
for T2 (Figure 10).

Figure 10: The Edit Process dialog box, process settings to create MapT2


4.4.9) Examine the algorithm parameters as in the previous process; the default settings are kept for the Domain. Here, of course, the 'Target Map Name' is 'MapT2,' and in the 'Image Layers' field, only the layers of T2 are chosen. Then execute the algorithm.
4.4.10) To evaluate both new maps, first open a second viewer, then display T1 in one viewer and T2 in the other, linking both viewers in 'Side by Side' mode. To open a second viewer, go to the main menu Window and select either 'Split Horizontally' or 'Split Vertically.' Then click in the left viewer window to make it active and select MapT1 from the dropdown list in the 'View Navigate' toolbar.
4.4.11) Having finished the first stage, click in the right viewer window to make it active and select MapT2 from the drop-down list in the 'View Navigate' toolbar. Then go to the main menu 'Window' and select 'Side by Side View.' After that, zoom in on maps T1 and T2 and compare the differences; the water, especially, is quite different in the two maps (Figure 11).


Figure 11: Two maps created and their views synchronized: MapT1 (left) and MapT2 (right)

Step 5: Classifying water in both maps independently


Now that you have two separate maps, you can segment and classify each individually. This means you can create two independent Image Object Hierarchies within one project, separated into two maps. Which process is applied to which map is controlled by the algorithm's Domain. If you need to apply several processes to one map, you can define the map in the parent process as the domain and use the setting 'From Parent' in the subsequent child processes.
5.1) Having set up maps T1 and T2, right-click in the Process Tree and select Append New. Then select the Cluster Analysis (Unsupervised Classification) algorithm from the Algorithm list to see the Edit Process dialog box (Figure 12).

Figure 12: Edit Process dialog box set to MapT1 with the USC algorithm
5.2) To run the USC algorithm, you may prefer to keep all parameters at the default settings shown in Figure 12. More details on the functionality of the required parameters are given in Table 1.

Table 1: Required parameters for USC functionality

Supported Domains: Pixel Level; Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Object; Linked Objects.

Algorithm parameters that have to be set to run the USC algorithm:
- Use Input Layer Array: choose whether to provide the input layers in a layered array or to select the layers explicitly.
- Input Image Layers: select the layers to be considered for cluster analysis.
- Output Layer Name: define the name of the temporary layer that contains the result of the cluster analysis (cluster IDs).
- Number of Iterations: define how many times the clustering algorithm will iterate.
- Maximum Number of Clusters: define the maximum number of clusters to be created.
- Initial Number of Clusters: define the initial number of clusters to be created.
- Minimum Cluster Size: define the minimum number of pixels per cluster.
- Maximum Standard Deviation: define the standard deviation that has to be exceeded for cluster splitting to occur; a value of 0 means that splitting is always allowed.
- Minimum Cluster Distance: define a distance threshold for cluster centers; if centers are closer than this, the clusters are merged. A value of 0 means that this threshold is ignored.
5.3) When you have set all the parameters, click on the Execute button to produce the USC ISODATA output map with a defined number of classes (Figure 13).


Figure 13: Image layers selected for MapT1, classified by the USC algorithm
5.4) Do not forget to select the image layers (the MapT2 bands) that carry adequate information for the USC procedure, as in Figure 14.

Figure 14: Edit Process dialog box set to MapT2 with the USC algorithm
5.5) The result of the USC algorithm for MapT2 is shown in Figure 15.


Figure 15: Image layers selected for MapT2, classified by the USC algorithm
5.6) Figure 16 shows details of the visual comparison of the Gil Island (Caspian Sea) changes.

Figure 16: Visualizing change detection on RGB images, NDWI values, and USC results for Gil Island, Caspian Sea


5.7) The Caspian Sea water-depletion trend over these seven years is quite detectable visually. Most likely, due to climate change in the region, the lowering of the water level will remain quite tangible in the coming years. Such changes will harm the region's coastal environment and animal species and bring negative economic and social consequences.

Step 6: Other complementary applicable methods

Based on its OBIA capabilities, there is more in the eCognition software with which to develop the current change detection procedure:
6.1) At a later stage, you may prefer to create a few index layers, such as NDVI, by applying index layer calculation algorithms (see the previous tutorial contents). Such spectral indices can be used along with different segmentation and classification processes.
6.2) The next step would be to introduce a multi-threshold segmentation based on the USC outputs, which effectively helps you move on to the most advanced OBIA techniques.
6.3) After both maps have been classified, they can be synchronized back to the main map. With their different classifications of water, the two levels form the basis for a change detection procedure; a minimal sketch of such a comparison follows.
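Once each map carries a binary water classification, the actual change detection reduces to a per-pixel comparison of the two masks. A minimal sketch, assuming water_t1 and water_t2 are boolean arrays of the same shape derived from MapT1 and MapT2:

    import numpy as np

    # Change codes: 0 = stable land, 1 = stable water,
    # 2 = water loss (water -> land), 3 = water gain (land -> water).
    change = np.zeros(water_t1.shape, dtype=np.uint8)
    change[water_t1 & water_t2] = 1
    change[water_t1 & ~water_t2] = 2
    change[~water_t1 & water_t2] = 3

    # Each 10 m Sentinel-2 pixel covers 100 m^2; convert the loss to km^2.
    lost_km2 = (change == 2).sum() * 100 / 1e6
    print("water area lost 2015-2021:", lost_km2, "km^2")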
6.4) To reduce the inaccuracies that may be introduced in the USC steps, you may prefer to apply a few more object-based analyses inside the eCognition software, for example the remove objects algorithm (considering neighborhood properties or context information), removing small doubtful objects from the USC result, which will considerably improve the final USC map.
6.5) Not least, one application field of maps is real change detection; eCognition 9.5 contains further algorithms for handling multiple time points (T1, T2, T3, and so on).
Sum Up:
eCognition offers many geoscience analysis methods: supervised and unsupervised machine learning, deep learning, and knowledge-based analysis with fuzzy logic and threshold conditions, all working together to run OBIA procedures. For unsupervised classification, we demonstrated executing an ISODATA cluster analysis addressing a subset of the Caspian Sea coast by processing two sets of Sentinel-2 imagery. You learned how to place several maps in one project. After that, you saw that ISODATA can categorize continuous pixel data into classes/clusters with similar spectral-radiometric values. Finally, you learned how to apply the 'copy map' algorithm, which can also be used for multi-scale image analysis.


Informative Practices
Tips:
1) USC is classification in which the outcomes (groupings of pixels with common characteristics) are based on the software's analysis of an image without the user providing sample classes.
2) USC algorithms discover hidden patterns or data groupings
without the need for human intervention.
3) The 'copy map' algorithm's most frequently used options are:
defining a subset of the selected map using a region variable,
selecting a scale, setting a resampling method, copying all
layers, selected image layers, and thematic layers, and copying
the image object hierarchy of the source map.
Workouts:
1) List the different methods of image classifications.
2) Apply the current tutorial methodology to imagery of the Caspian Sea's major islands for the years 1995 and 2000 (note that Sentinel-2 data only date from 2015, so for those dates you would need Landsat imagery) and compare the results.
3) Use NDWI spectral indexing and a multi-threshold segmentation algorithm to detect changes on other coastal stretches of the Caspian Sea.
Quizzes:
1) What is the difference between object-based and pixel-based
classification?
2) Is object-based classification supervised?
3) Which classification method is better for the high-resolution
satellite imagery?


Allied References:
1) Araya, Y.H. and Hergarten, C. (2008). A Comparison of Pixel and Object-based Land Cover Classification: A Case Study of the Asmara Region, Eritrea. WIT Transactions on the Built Environment, Vol. 100, ISSN 1743-3509, Geo-Environment and Landscape Evolution III.
2) Kaplan, G, and Avdan, U. (2017). Object-based water body
extraction model using Sentinel-2 satellite imagery, European
Journal of Remote Sensing, 50:1, 137-143.
3) Kaplan, G. and Avdan, U. (2018). Sentinel-1 and Sentinel-2 Data Fusion for Wetlands Mapping: Balikdami, Turkey. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives, 42(3), 729-734.
4) Rasouli, A.A. (2018). Geo-OBIA and Spatial Information Sciences Implementations, Eurasian GIS Conference 2018, 04-07 September 2018, Baku, Azerbaijan.
5) Rasouli, A.A. (2020). Detection of Caspian Sea Coastline Changes by Fuzzy-Based Object-Oriented Image Analysis. The Second Eurasian Conference RISK-2020, 12-19 April 2020, Tbilisi, Georgia.
6) Zerrouki, N. and Bouchaffra, D. (2014). Pixel-based or Object-based: Which Approach Is More Appropriate for Remote Sensing Image Classification? IEEE International Conference on Systems, Man & Cybernetics, San Diego (CA), pp. 864-869.


ISBN: 978-625-8061-89-5
