
UNIT 5

VISUAL AND DIGITAL IMAGE VISUALIZATION AND INTERPRETATION

Introduction
To make good use of remote sensing data, we must be able to extract meaningful
information from the imagery. This brings us to the topic of discussion in this
unit: visualization and interpretation.

5.1 VISUAL INTERPRETATION

Interpretation and analysis of remote sensing imagery involves the identification and/or
measurement of various targets in an image in order to extract useful information about them.

Targets in remote sensing have the following characteristics:


• Targets may be a point, line, or area feature.
• The target must be distinguishable; it must contrast with other features around it in the image.

Much interpretation and identification of targets in remote sensing imagery is performed
manually or visually, i.e. by a human interpreter. In many cases this is done using imagery
displayed in a pictorial or photograph type format, independent of what type of sensor was used
to collect the data and how the data were collected. Visual interpretation may also be performed
by examining digital imagery displayed on a computer screen. Both analogue and digital imagery
can be displayed as black and white (also called monochrome) images, or as color images by
combining different channels or bands representing different wavelengths.

When remote sensing data are available in digital format, digital processing and analysis may
be performed using a computer. Digital processing may be used to enhance data as a
prelude to visual interpretation. Digital processing and analysis may also be carried out to
automatically identify targets and extract information completely without manual intervention by
a human interpreter. However, rarely is digital processing and analysis carried out as a complete
replacement for manual interpretation. Often, it is done to supplement and assist the human
analyst.

Manual interpretation and analysis dates back to the early beginnings of remote sensing for air
photo interpretation. Digital processing and analysis is more recent with the advent of digital
recording of remote sensing data and the development of computers. Both manual and digital
techniques for interpretation of remote sensing data have their respective advantages and
disadvantages. Generally, manual interpretation requires little, if any, specialized equipment,
while digital analysis requires specialized, and often expensive, equipment. Manual
interpretation is often limited to analyzing only a single channel of data or a single image at a
time due to the difficulty in performing visual interpretation with multiple images.

The computer environment is more amenable to handling complex images of several or many
channels or from several dates. In this sense, digital analysis is useful for simultaneous analysis
of many spectral bands and can process large data sets much faster than a human interpreter.
Manual interpretation is a subjective process, meaning that the results will vary with different
interpreters. Digital analysis is based on the manipulation of digital numbers in a computer and is
thus more objective, generally resulting in more consistent results. However, determining the
validity and accuracy of the results from digital processing can be difficult.

It is important to repeat that visual and digital analyses of remote sensing imagery are not
mutually exclusive. Both methods have their merits. In most cases, a mix of both methods is
usually employed when analyzing imagery. In fact, the ultimate decision of the utility and
relevance of the information extracted at the end of the analysis process still must be made by
humans.

5.2 Elements of Visual Interpretation


As we noted in the previous section, analysis of remote sensing imagery involves the
identification of various targets in an image, and those targets may be environmental or artificial
features which consist of points, lines, or areas. Targets may be defined in terms of the way they
reflect or emit radiation. This radiation is measured and recorded by a sensor, and ultimately is
depicted as an image product such as an air photo or a satellite image.

A systematic study of aerial and space images usually involves several basic characteristics of
features shown on an image. The exact characteristics useful for any specific task and the manner
in which they are considered depend on the field of application. However, most applications
consider the following basic characteristics, or variations of them: shape, size, pattern, tone (or
hue), texture, shadows, site, association, and resolution (Olson, 1960).

Visual interpretation using these elements is often a part of our daily lives, whether we are
conscious of it or not. Examining satellite images on the weather report or following high speed
chases by views from a helicopter are all familiar examples of visual image interpretation.
Identifying targets in remotely sensed images based on these visual elements allows us to further
interpret and analyze. The nature of each of these interpretation elements is described below,
along with an image example of each.

• Tone (hue) refers to the relative brightness or color of objects in an image. Generally, tone is
the fundamental element for distinguishing between different targets or features. Figure 5.1
shows how relative photo tones can be used to distinguish between a water body and a land
mass on black and white photographs. The lighter toned areas are drier and covered by sand;
the darker toned areas are covered by water. Variations in tone also allow the elements of
shape, texture, and pattern of objects to be distinguished. Without tonal differences, the
shapes, patterns, and textures of objects could not be discerned.

Figure 5.1: visual interpretation using tone

Shape refers to the general form, structure, or outline of individual objects. Shape can be a very
distinctive clue for interpretation. Straight edge shapes typically represent urban or agricultural
(field) targets, while natural features, such as forest edges, are generally more irregular in shape,
except where man has created a road or clear cuts. Farm or crop land irrigated by rotating
sprinkler systems would appear as circular shapes. Figure 5.2 shows how shape can be used
to distinguish features.

Figure 5.2: visual interpretation using shape

Size of objects in an image is a function of scale. It is important to assess the size of a target
relative to other objects in a scene, as well as the absolute size, to aid in the interpretation of
that target. A quick approximation of target size can direct interpretation to an appropriate
result more quickly. For example, if an interpreter had to distinguish zones of land use, and
had identified an area with a number of buildings in it, large buildings such as factories or
warehouses would suggest commercial property, whereas small buildings would indicate
residential use. Figure 5.3 shows how size can be used to distinguish features.

Figure 5.3: visual interpretation using size

Pattern refers to the spatial arrangement of visibly discernible objects. Typically an orderly
repetition of similar tones and textures will produce a distinctive and ultimately recognizable
pattern (Figure 5.4). Orchards with evenly spaced trees and urban streets with regularly
spaced houses are good examples of pattern.

Figure 5.4: visual interpretation using pattern

Texture refers to the arrangement and frequency of tonal variation in particular areas of an
image. Rough textures would consist of a mottled tone where the grey levels change abruptly
in a small area, whereas smooth textures would have very little tonal variation. Smooth
textures are most often the result of uniform, even surfaces, such as fields, asphalt, or
grasslands (Figure 5.5).

Figure 5.5: visual interpretation using texture

Shadow is also helpful in interpretation as it may provide an idea of the profile and relative
height of a target or targets which may make identification easier. However, shadows can
also reduce or eliminate interpretation in their area of influence, since targets within shadows
are much less (or not at all) discernible from their surroundings. Shadow is also useful for
enhancing or identifying topography and landforms, particularly in radar imagery.

Figure 5.6: visual interpretation using shadow

Association takes into account the relationship between recognizable objects or features in
proximity to the target of interest. The identification of features that one would expect to
be associated with other features may provide information to facilitate identification. For
instance, commercial properties may be associated with proximity to major transportation
routes, whereas residential areas would be associated with schools, playgrounds, and sports
fields. In our example, a lake is associated with boats, a marina, and adjacent recreational
land.

Figure 5.7: visual interpretation using association

Resolution depends on many factors, but it always places a practical limit on interpretation
because some objects are too small or have too little contrast with their surroundings to be
clearly seen on the image. Other factors, such as image scale, image color balance, and condition
of images (e.g., torn or faded photographic prints) also affect the success of image interpretation
activities.

5.3 Image data characteristics


Remote sensing image data are more than a picture; they are measurements of EM energy. Image
data are stored in a regular grid format (rows and columns). A single image element is called a
pixel, a contraction of 'picture element'. For each pixel, the measurements are stored as Digital
Numbers, or DN-values. Typically, for each measured wavelength range a separate data set is
stored, which is called a band or a channel, and sometimes a layer.
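The grid-of-DN-values structure described above can be sketched in a few lines of Python. The array shape, the random DN-values, and the (band, row, column) axis order are all illustrative assumptions, not properties of any particular sensor:

```python
import numpy as np

# A hypothetical 3-band image of 4 rows x 5 columns with 8-bit DN-values.
# The (band, row, column) axis order is one common storage convention.
rng = np.random.default_rng(seed=42)
image = rng.integers(0, 256, size=(3, 4, 5), dtype=np.uint8)

n_bands, n_rows, n_cols = image.shape
print(f"{n_bands} bands, {n_rows} rows, {n_cols} columns")

# The DN-value recorded for band 0 at row 2, column 3:
dn = image[0, 2, 3]
print("DN-value:", int(dn))
```

Each band is one such rows-by-columns grid; a multispectral image is simply a stack of these grids, one per measured wavelength range.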

The quality of image data is primarily determined by the characteristics of the sensor-platform
system. The image characteristics are usually referred to as:
1. Spatial characteristics, which refer to the area measured.
2. Spectral characteristics, which refer to the spectral wavelengths that the sensor is sensitive to.
3. Radiometric characteristics, which refer to the energy levels that are measured by the sensor.
4. Temporal characteristics, which refer to the time of the acquisition.

Each of these characteristics can be further specified by the extremes that are observed
(coverage) and the smallest units that can be distinguished (resolution):

• Spatial coverage, which refers to the total area covered by one image. With multispectral
scanners this is proportional to the total field of view (FOV) of the instrument, which determines the
swath width on the ground.
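Under a simple flat-Earth assumption, the swath width follows directly from the platform altitude and the total FOV. The numbers below are illustrative (they are roughly in the range of a Landsat-class sensor at 705 km altitude):

```python
import math

def swath_width(altitude_km: float, fov_deg: float) -> float:
    """Ground swath width for a nadir-looking scanner (flat-Earth approximation)."""
    return 2 * altitude_km * math.tan(math.radians(fov_deg) / 2)

# Illustrative: 705 km altitude and a ~15 degree total FOV
# give a swath of roughly 185 km on the ground.
print(round(swath_width(705, 15), 1))
```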

• Spatial resolution, which refers to the smallest unit-area measured. This indicates the minimum
detail of objects that can be distinguished. The detail noticeable in an image is dependent on the
spatial resolution of the sensor and refers to the size of the smallest possible feature that can be
detected. Spatial resolution of passive sensors depends primarily on their Instantaneous Field of
View (IFOV).
Images where only large features are visible are said to have coarse or low resolution. In fine or
high resolution images, small objects can be detected. Military sensors for example, are
designed to view as much detail as possible, and therefore have very fine resolution. Commercial
satellites provide imagery with resolutions varying from a few meters to several kilometers.
Generally speaking, the finer the resolution, the less total ground area can be seen.

• Spectral resolution, which is related to the widths of the spectral wavelength bands that the sensor
is sensitive to. The finer the spectral resolution, the narrower the wavelength ranges for a particular
channel or band.

Many remote sensing systems record energy over several separate wavelength ranges at various
spectral resolutions. These are referred to as multi-spectral sensors. Advanced multi-spectral
sensors, called hyperspectral sensors, detect hundreds of very narrow spectral bands throughout
the visible, near-infrared, and mid-infrared portions of the electromagnetic spectrum. Their very
high spectral resolution facilitates fine discrimination between different targets based on their
spectral response in each of the narrow bands.

• Radiometric resolution, which refers to the smallest differences in levels of energy that can be
distinguished by the sensor. It describes the actual information content in an image. Every time
an image is acquired on film or by a sensor, its sensitivity to the magnitude of the
electromagnetic energy determines the radiometric resolution. The radiometric resolution of an
imaging system describes its ability to discriminate very slight differences in energy. The finer
the radiometric resolution of a sensor the more sensitive it is to detecting small differences in
reflected or emitted energy.
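Radiometric resolution is usually quoted in bits: n bits give 2^n distinguishable energy levels. The sketch below illustrates this relationship (8-bit and 11-bit data are typical of, for example, Landsat ETM+ and Ikonos respectively):

```python
def grey_levels(bits: int) -> int:
    """Number of distinguishable energy levels for a given radiometric resolution."""
    return 2 ** bits

# 8-bit data: DN-values 0..255; 11-bit data: DN-values 0..2047.
print(grey_levels(8))
print(grey_levels(11))
```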

• Temporal coverage is the span of time over which images are recorded and stored in image
archives.

• Revisit time, which is the (minimum) time between two successive image acquisitions over the
same location on Earth. This is sometimes referred to as temporal resolution. Temporal
Resolution refers to the length of time it takes for a satellite to complete one entire orbit cycle.
The revisit period of a satellite sensor is usually several days. Therefore the absolute temporal
resolution of a remote sensing system to image the exact same area at the same viewing angle a
second time is equal to this period.

Additional (related) properties of image data are:

• Pixel size, determined by the image coverage and the image size, is the area covered by one
pixel on the ground. Pixel sizes of different sensor systems may range from less than 1 meter
(high spatial resolution) to larger than 5 km (low spatial resolution).

• Number of bands refers to the number of distinct wavelength bands stored. Typical values are,
for example, 1 panchromatic band (black/white aerial photography), 15 multispectral bands
(Terra/ASTER), or 220 hyperspectral bands.

• Image size is related to the spatial coverage and the spatial resolution. It is expressed as the
number of rows (or lines) and number of columns (samples) in one scene. Typically, remote
sensing images contain thousands of rows and columns.
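The relation between coverage, image size, and pixel size can be sketched as follows. The scene dimensions are illustrative, chosen so that a 185 km swath divided over about 6167 samples per line yields the familiar ~30 m pixel of a Landsat-class multispectral sensor:

```python
def pixel_size_m(coverage_km: float, n_columns: int) -> float:
    """Ground size of one pixel, given the swath (spatial coverage)
    and the number of samples (columns) per line."""
    return coverage_km * 1000 / n_columns

# Illustrative Landsat-like scene: 185 km swath, 6167 samples per line.
print(round(pixel_size_m(185, 6167), 1))
```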

5.4 Data selection criteria


• Spatio-temporal characteristics
For the selection of the appropriate data type it is necessary to fully understand the information
requirements for a specific application. Therefore, you have to analyze the spatio-temporal
characteristics of the phenomena under investigation. For example, a different type of image
data is required for topographic mapping of an urban area than for studying changing global
weather patterns.

The time of image acquisition should also be given thought. In aerial photography, a low Sun
angle causes long shadows due to elevated buildings, which may obscure features that we are
interested in. To avoid long shadows, the aerial photographs could be taken around noon.
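The effect of Sun angle on shadow length is simple trigonometry: a vertical object of height h casts a shadow of length h / tan(Sun elevation) on flat ground. The building height below is a made-up example:

```python
import math

def shadow_length(height_m: float, sun_elevation_deg: float) -> float:
    """Length of the shadow cast by a vertical object on flat ground."""
    return height_m / math.tan(math.radians(sun_elevation_deg))

# A hypothetical 30 m building: shadows shrink sharply as the Sun
# climbs towards its noon maximum.
for elevation in (20, 45, 70):
    print(elevation, round(shadow_length(30, elevation), 1))
```

This is why flying around noon, when the Sun elevation is highest, minimizes the area obscured by shadow.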

The type and the amount of cloud cover also play a role, and under certain conditions no images
can be taken at all. Persistent cloud cover may be the reason for lack of image data in areas of the
tropical belt. The use of radar satellites may be a solution to this problem.

Yet another issue is seasonal cycles. In countries with a temperate climate, deciduous trees bear
no leaves in the winter and early spring, and therefore images taken during this time of year allow
the best view of the infrastructure. However, during the winter the Sun angle may be too low to
take good images, because of long shadows.

• Availability of image data

Once the image data requirements have been determined, you have to investigate their
availability and costs. Availability depends on whether the data were already acquired and stored
in archives, or need to be acquired at your request. The size and accessibility of image
archives is growing at an ever faster rate. If up-to-date data are required, these need to be
requested through an aerial survey company or from a remote sensing data distributor.

Examples of current operational space borne missions (platform/ sensor) are:


• High-resolution panchromatic sensors with a pixel size between 0.5 m and 6 m
(OrbView/PAN, Ikonos/PAN, QuickBird/PAN, EROS/PAN, IRS/PAN, Spot/PAN).
• Multispectral sensors with a spatial resolution between 4 m and 30 m (Landsat/ETM+,
Spot/HRG, IRS/Liss3, Ikonos/OSA, CBERS/CCD, Terra/Aster).
• A large number of weather satellites and other low-resolution sensors with a pixel size
between 0.1 km and 5 km (GOES/Imager, Meteosat/Seviri, Insat/VHRR, NOAA/AVHRR,
Envisat/Meris, Terra/MODIS, Spot/VEGETATION, IRS/WiFS).
• Imaging radar missions with a spatial resolution between 8 m and 150 m (Envisat/ASAR,
ERS/SAR, Radarsat/SAR).
Today, more than 1000 aerial survey cameras are in service and used to acquire (vertical) aerial
photography. Per year, an estimated 30 aerial survey cameras are sold. In addition to film
cameras, an increasing number of other types of sensors are used in airborne remote sensing,
with more than 200 types of instruments in service, including 15 types of imaging spectrometers,
20 radar systems and 20 laser scanners. Worldwide, a growing number of airborne laser scanners
(more than 50) are available, with the majority being flown in North America. A growing number
of airborne imaging spectrometers (more than 30) are also available for data acquisition; these
instruments are mainly owned by mining companies. The number of operational airborne radar
systems is relatively small: although many experimental radar systems exist, there are only a
few commercially operated systems.

• Cost of image data

It is rather difficult to provide indications about the costs of image data. Costs of different types
of image data can only be compared when calculated for a specific project with specific data
requirements. Existing data from the archive are cheaper than data that have to be specially
ordered. Another reason to be cautious in giving prices is that different qualities of image data
(processing level) exist.

The costs of vertical aerial photographs depend on the size of the area, the photo scale, the type
of film and processing used, and the availability of aerial reconnaissance companies. Under
European conditions, the cost of aerial photography is somewhere between 5 Euro/km² and
20 Euro/km². The cost of optical satellite data varies from free (public domain) to 45 Euro/km².
Usually, the images either have a fixed size, or a minimum order applies. Low resolution data
(NOAA AVHRR) can be downloaded for free from the Internet. Medium resolution data
(Landsat, SPOT, IRS) cost in the range of 0.01 Euro/km² to 0.70 Euro/km². High resolution
satellite data (Ikonos, SPIN-2) cost between 15 Euro/km² and 45 Euro/km². For Ikonos derived
information products, prices can go up to 150 Euro/km². Educational institutes often receive a
considerable reduction.

5.5 Normalized difference vegetation index

Ratio images are often useful for discriminating subtle differences in spectral variations in a
scene that are masked by brightness variations. Different band ratios are possible given the
number of spectral bands of the satellite image. The utility of any given spectral ratio depends
upon the particular reflectance characteristics of the features involved and the application at
hand. For example a near-infrared / red ratio image might be useful for differentiating between
areas of stressed and non-stressed vegetation.

Various mathematical combinations of satellite bands have been found to be sensitive indicators
of the presence and condition of green vegetation. These band combinations are thus referred to
as vegetation indices. Two such indices are the simple vegetation index (VI) and the normalized
difference vegetation index (NDVI):

NDVI = (NIR – Red) / (NIR + Red)
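The NDVI calculation can be sketched in a few lines. The DN-values below are illustrative stand-ins for a vegetated pixel, a water pixel, and a bare-soil pixel, and the simple VI is taken here as the NIR minus Red difference, a common definition:

```python
import numpy as np

# Illustrative reflectance values for three pixels:
# vegetation (high NIR, low Red), water (low NIR), bare soil (similar NIR and Red).
nir = np.array([50.0, 9.0, 30.0])
red = np.array([10.0, 10.0, 30.0])

vi = nir - red                    # simple vegetation index
ndvi = (nir - red) / (nir + red)  # normalized difference vegetation index

print(np.round(ndvi, 2))  # vegetation strongly positive, water negative, soil ~0
```

Because the NDVI divides by the total brightness (NIR + Red), the result is far less sensitive to overall illumination differences than the simple difference VI.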

Both are based on the reflectance properties of vegetated areas as compared to clouds, water and
snow on the one hand, and rocks and bare soil on the other. Vegetated areas have a relatively
high reflection in the near-infrared and a low reflection in the visible range of the spectrum.
Clouds, water and snow have larger visual than near-infrared reflectance. Rock and bare soil
have similar reflectance in both spectral regions. The effect of calculating the VI or the NDVI is
clearly demonstrated in the following table.

It is clearly shown that the discrimination between the three land cover types is greatly enhanced by
the calculation of a vegetation index. Green vegetation yields high values for the index. In contrast,
water yields negative values and bare soil gives indices near zero. The NDVI, as a normalized index,
is preferred over the VI because the NDVI also compensates for changes in illumination conditions,
surface slope, and aspect.

