Image Processing
Objectives
• Describe how raw satellite data is converted into an image
• Describe the characteristics of an image
• Describe techniques used by analysts to enhance and manipulate image data
Introduction
After data is collected and transmitted to the ground station, it must be processed and
converted into a format that is usable by the researcher who will interpret it. Satellite-derived
data is often converted into imagery that provides a visualization of what the sensor collected.
In its original form, however, the data rarely reveals much about the target to an interpreter. It
must usually be processed, enhanced, and manipulated to yield a useful set of information. This
technique, which is part science and part art, is called image processing.
Converting the data stream to an image
Satellite image data is sent from the satellite to the ground station in a raw digital format,
which is essentially a stream of numerical data. The smallest unit of digital data is the bit, a
binary digit with only two possible values, 0 or 1. A bit can represent any piece of data that has
two states, such as on/off, true/false, or open/closed. With only two potential values, however, a
single bit offers little flexibility for representing more complex data. Therefore, data is often
stored as a collection of eight bits, a unit called a byte.
Because a byte is composed of 8 bits, it provides a data element with 256 potential values
(2^8). Radiometers that measure the intensity of electromagnetic radiation generally convert
each detected energy level into a value ranging from 0 to 255 and represent it with a single byte.
These bytes are strung together in a pre-determined order, converted into a signal, and
transmitted to the collection facility. There, the signal is converted back into a digital stream of
bytes that can be read and interpreted by processing software. Images generated in this manner
are referred to as "8-bit digital images."
Characteristics of Images
Though remotely-sensed images are collected from a wide variety of sensors and
transmitted to a ground station through many different paths, all image data have certain
characteristics in common.
Pixels and Digital Number
When a stream of bytes is received from a satellite sensor, the value of each byte is
applied to a single dot, or pixel (short for "picture element"). The numerical value of the pixel,
known as its Digital Number (DN), is translated into a shade of gray that ranges somewhere
between white and black. These pixels, when arranged together in the correct order, form an
image of the target in which the varying shades of gray represent the varying energy levels
detected on the target.
The following image illustrates this concept. The Landsat 7 image clip in the upper left is
a false color composite centered over a reservoir in a portion of central Maryland. When a
selected portion of the image is magnified several times, it becomes apparent that the image is
really just composed of rows of pixels, each with its own color.
It is important to remember that a satellite image is not just a picture of the target, similar
to what a simple camera would take. Instead, it is a collection of numeric data that is capable of
being displayed as an image. The underlying dataset can be manipulated using algorithms (sets
of mathematical rules) that correct for errors (such as atmospheric interference), re-map the data
to a geographical reference point, or extract information that is not readily apparent in the data.
The data for two or more images of the same location can even be combined mathematically,
creating imagery that is a composite of multiple datasets. These data products, known as derived
products, are generated by performing calculations on the raw digital number data.
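As a hedged illustration of this idea, the following sketch differences two hypothetical
co-registered 8-bit images of the same location to produce a simple derived product; the arrays
and the re-scaling scheme are assumptions for demonstration only.

    import numpy as np

    date1 = np.array([[100, 110], [120, 130]], dtype=np.uint8)
    date2 = np.array([[100, 150], [120,  90]], dtype=np.uint8)

    # Work in a wider signed type so the subtraction cannot wrap around.
    difference = date2.astype(np.int16) - date1.astype(np.int16)

    # Re-scale so that "no change" maps to mid-gray (128) and the result
    # stays in the displayable 0-255 range.
    derived = np.clip(difference + 128, 0, 255).astype(np.uint8)
    print(derived)   # [[128 168]
                     #  [128  88]]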
Gray scale
Most raw, unprocessed satellite imagery is stored in a gray scale format. A gray scale is a color
scale that ranges from black to white through intermediate shades of gray. A commonly used
gray scale in remote sensing image processing has 256 shades, in which a value of 0 represents
pure black, a value of 255 represents pure white, and each value in between represents a
progressively lighter shade of gray.
Objects in a gray tone display have a brightness value (or digital number), which
represents the measured energy level of the item. Contrast refers to the difference in relative
brightness between an item and its surroundings as seen in the image. A particular feature is
easily detected in an image when the contrast between the item and its background is high;
when the contrast is low, the item might go undetected.
[256 level gray scale]
The following two images illustrate differences in contrast between images taken in
different portions of the electromagnetic spectrum. The image on the left is a clip from a
Landsat 7 Thematic Mapper Band 4 image, which depicts the spectral response in the near
infrared portion of the spectrum. This spectral band is especially sensitive to young vegetative
growth, whose leaf structure strongly reflects near infrared radiation. The image on the right is a
clip from the exact same scene and is a Landsat 7 Thematic Mapper Band 3 image, which
depicts the spectral response in the visible red portion of the spectrum. This channel is not very
sensitive to vegetation. In both images, contrast between the deep reservoir water (which
appears black in the lower center of each image) and land is relatively high. In the first image,
however, the contrast between the agricultural fields (bright white and light gray) and the
surrounding land use classes (pasture, suburban developments, forested areas) is much higher
than in the visible red image. This is caused by the heightened sensitivity of the Band 4 sensor
to reflected near infrared.
Thematic Mapper Band 4 (NEAR IR) Thematic Mapper Band 3 (RED)
Images of raw, unprocessed data streams are often not particularly useful to a human
interpreter: the contrast tends to be very low, and the human eye can only distinguish between a
few dozen shades of gray. Image processing techniques can be used to enhance the contrast
between the most important shades of gray that make up an unprocessed image.
Resolution
Resolution is a property of an image that describes the level of detail that can be discerned
from it. Since the smallest element in a satellite image is a single pixel, resolution describes the
area on the Earth's surface represented by a single pixel. For example, in a weather satellite
image that has a resolution of 1 km, each pixel represents the average brightness value over an
area that is 1 km by 1 km. Features smaller than 1 km will be difficult to discern clearly in an
image with 1 km resolution.
In higher resolution imagery, each pixel represents a much smaller portion of the Earth.
For example, Landsat 7 typically produces imagery with 30 meter resolution, so each pixel in a
Landsat image represents the average brightness of an area that is 30 meters by 30 meters.
Much greater detail can therefore be seen in a Landsat image than in a 1 km weather satellite
image.
[comparison of 3 different resolutions]
The resolution of a particular satellite sensor must be optimized for the intended use of
the data. Weather satellites generally monitor weather patterns that cover hundreds or even
thousands of miles, so there is no need for resolution finer than about 0.5 km. Landsat and other
land-use satellites, however, need to distinguish between much smaller items, such as a corn
field and a forested area, or between a road and a protected wetland, so a higher resolution is
required. The trade-off is that higher resolution produces far more data, which increases
transmission times and burdens the mission with a tremendous amount of data to store.
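A rough back-of-envelope calculation illustrates this trade-off, assuming a 180 km by 180 km
scene (roughly a Landsat footprint) and one byte per pixel; the figures are illustrative only.

    scene_km = 180  # assumed square scene, 180 km on a side

    for res_m in (1000, 30):
        pixels_per_side = scene_km * 1000 // res_m
        n_pixels = pixels_per_side ** 2
        print(f"{res_m:>4} m resolution: {pixels_per_side} x {pixels_per_side} pixels "
              f"= {n_pixels:,} values = {n_pixels / 1e6:.2f} MB per 8-bit band")

    # 1000 m resolution:  180 x 180   =     32,400 values =  0.03 MB per band
    #   30 m resolution: 6000 x 6000  = 36,000,000 values = 36.00 MB per band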
Enhancing and Manipulating Image Data
Raw satellite data often contain a vast amount of information that is not readily apparent
to the analyst. Therefore, image enhancement techniques are used to highlight features of interest
and expose subtle differences in the spectral signature of the components of the target. Some of
these techniques involve modifying an image in order to improve contrast between features in a
well defined spectral range or to improve resolution and detail, while other techniques use
complex mathematical calculations to derive an entirely new image from a set of raw image data.
Color and False Color
The human eye can distinguish only about 16 shades of gray in an image, but it can
distinguish millions of different colors. Thus, a common image enhancement technique is to
assign specific digital number (DN) values (or ranges of DN values) to specific colors, thereby
increasing the contrast between particular DN values and the surrounding pixels. An entire
image can be converted from gray scale to color, or only the portions of an image that represent
the DN values of interest can be colored.
A true color image is one in which colors are assigned to DN values that represent the
actual spectral range of those colors (blue features appear blue, green features appear green, red
features appear red, and so on). A photograph is an example of a true color image. The
following image is a "true color" image generated from Landsat 7. The color red is assigned to
the pixels with the strongest spectral response in the red band of the visible light spectrum, and
the same is true for green and blue. The result is an image that closely represents the Earth's
surface as it would appear to the human eye.
False color is a technique in which colors are assigned to spectral bands outside the
spectral range of those colors. This allows an analyst to highlight particular features of interest
with a color scheme that makes them stand out. For example, in the following image, the color
red has been assigned to DN values from the near infrared portion of the spectrum. Young
vegetation, which reflects near infrared strongly, appears bright red. This image, a 4-3-2
composite from Landsat 7, is useful for identifying areas where agricultural activity is
concentrated, since new spring planting is easily detected by its bright red tones.
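The following sketch shows how such composites can be assembled in software by stacking
three gray scale bands into the red, green, and blue display channels. The band numbering
follows the Landsat convention used above (1 = blue, 2 = green, 3 = red, 4 = near infrared); the
synthetic band arrays are placeholders for real data.

    import numpy as np

    def composite(red_band, green_band, blue_band):
        """Stack three 8-bit gray scale bands into an (rows, cols, 3) RGB image."""
        return np.dstack([red_band, green_band, blue_band])

    # Tiny synthetic stand-ins for real band data, keyed by band number.
    rng = np.random.default_rng(0)
    bands = {n: rng.integers(0, 256, (2, 2), dtype=np.uint8) for n in (1, 2, 3, 4)}

    rgb_true = composite(bands[3], bands[2], bands[1])    # 3-2-1 "true color"
    # 4-3-2 false color: the near infrared band drives the red channel,
    # so young vegetation renders bright red.
    rgb_false = composite(bands[4], bands[3], bands[2])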
Enhancement curves
When the brightness value and corresponding gray tone of each pixel in a gray tone
display are plotted on a graph, the resulting line illustrates the relationship between brightness
and gray tone. By default, this relationship is linear, distributing gray tones evenly across the
256 potential brightness values in the satellite image data.
The problem with this linear relationship is that the brightness values of interest may be
concentrated in a small portion of the brightness range, so the gray tones assigned to values
outside that range are essentially wasted. To improve the contrast in the portion of the image
that is of interest, we can use mathematically defined enhancement curves. These curves are a
commonly used tool in processing remotely sensed imagery: they re-distribute the gray scale
values to highlight a particular range or gradient of brightness values relevant to the
researcher's needs.
The image below illustrates the use of a simple enhancement curve that re-distributes the
gray scale values to highlight the brightness values in the upper middle range. Enhancement
curves can be much more complex than this simple example, and they are often developed to be
applied to imagery for specific purposes.
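One of the simplest enhancement curves is a linear contrast stretch, sketched below: the input
brightness range of interest is expanded across the full 0-255 gray scale, at the cost of
saturating values outside it. The cut-off values 100 and 180 are illustrative assumptions; in
practice they would be chosen from the histogram of the image.

    import numpy as np

    def linear_stretch(dn: np.ndarray, low: int, high: int) -> np.ndarray:
        """Map DN values in [low, high] onto [0, 255]; clip everything else."""
        stretched = (dn.astype(float) - low) / (high - low) * 255.0
        return np.clip(stretched, 0, 255).astype(np.uint8)

    dn = np.array([[90, 100], [140, 200]], dtype=np.uint8)
    print(linear_stretch(dn, low=100, high=180))
    # 90 and 100 map to 0, 140 maps to ~127, and 200 saturates at 255.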
Transformation
A transformation is an image created by converting raw image data into an entirely new
image, using mathematical formulas (algorithms) to calculate a new digital number for each
pixel. The underlying numerical data that makes up the raw satellite data is changed, or
transformed, into another form according to a set of established mathematical rules.
Composite images
A composite image is a transformation derived from two or more images of the same
geographical region taken in different bands of the electromagnetic spectrum. Each pixel value in
each of the images is extracted and a calculation is performed that produces a computed value.
The entire set of computed values is then stored as a new image and can be displayed in a gray
scale or color display the same way a non-derived image can be viewed. This derived product
may provide insights into the complex relationships between the data in each participating image
that would be otherwise undetectable from the individual images.
Composite images are especially useful with multispectral data, such as that produced by
the Landsat satellites. Images from each of the seven Landsat spectral bands can be combined to
create composite data products that collectively provide far more insight into the nature of the
target than the seven individual band images.
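A classic example of such a derived product is the Normalized Difference Vegetation Index
(NDVI), which combines the red and near infrared bands pixel by pixel. Healthy vegetation
reflects strongly in the near infrared and absorbs red light, so it scores near +1. The small band
arrays in the sketch below are illustrative assumptions.

    import numpy as np

    def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
        """NDVI = (NIR - RED) / (NIR + RED), computed per pixel."""
        red = red.astype(float)
        nir = nir.astype(float)
        # Guard against division by zero where both bands are dark.
        denom = np.where(nir + red == 0, 1.0, nir + red)
        return (nir - red) / denom

    red = np.array([[ 30, 200], [ 50, 180]], dtype=np.uint8)
    nir = np.array([[180,  60], [190,  70]], dtype=np.uint8)
    print(ndvi(red, nir))   # values near +0.7 flag vegetated pixels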
Classification
Classification is a process by which a set of items is grouped into classes based on
common characteristics. Classification of satellite image data is based on placing pixels with
similar values into groups and identifying the common characteristics of the items represented by
these pixels.
Classification is another tool that is very useful when multispectral imagery of the same
geographical region is compared. Algorithms can derive a value for each pixel from its
brightness values in each band. Plotting the resulting data on a two- or three-dimensional graph
can reveal clusters of pixels that share common spectral characteristics across multiple bands.
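The sketch below illustrates the idea with a simple unsupervised approach, grouping pixels by
their values in two bands using a minimal k-means routine. The band arrays and the choice of
two classes are illustrative assumptions.

    import numpy as np

    def kmeans(samples: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
        """Group the rows of `samples` into k clusters; return a label per row."""
        rng = np.random.default_rng(0)
        centers = samples[rng.choice(len(samples), k, replace=False)]
        for _ in range(iters):
            # Assign each pixel to the nearest cluster center.
            dists = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Move each center to the mean of its assigned pixels.
            for i in range(k):
                if np.any(labels == i):
                    centers[i] = samples[labels == i].mean(axis=0)
        return labels

    red = np.array([[ 30,  40], [200, 210]], dtype=float)
    nir = np.array([[180, 190], [ 60,  50]], dtype=float)
    pixels = np.column_stack([red.ravel(), nir.ravel()])  # one row per pixel
    classes = kmeans(pixels, k=2).reshape(red.shape)
    print(classes)   # e.g. a vegetation-like cluster vs. a bare-ground-like cluster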
Preprocessing
Before digital images can be analyzed, they usually require some degree of
preprocessing. This may involve radiometric corrections, which attempt to remove the effects of
sensor errors and/or environmental factors. Common corrections of this type include those that
attempt to adjust DN values that have been affected by atmospheric interference or absorption.
Ancillary data collected at the same time the image was obtained can be used as a calibration
tool to support radiometric corrections.
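One widely used correction of this kind is dark-object subtraction, sketched below. It assumes
that the darkest pixels in a band (deep water or shadow) should have a DN near zero, so any
positive offset there is attributed to atmospheric scattering and removed from the whole band.
The band values are illustrative assumptions.

    import numpy as np

    def dark_object_subtraction(band: np.ndarray) -> np.ndarray:
        # The darkest DN in the band is taken as the atmospheric (haze) offset.
        haze = int(band.min())
        corrected = band.astype(np.int16) - haze
        return np.clip(corrected, 0, 255).astype(np.uint8)

    band = np.array([[12, 60], [110, 240]], dtype=np.uint8)
    print(dark_object_subtraction(band))   # the offset of 12 is removed everywhere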
Geometric correction is another very important form of preprocessing. It attempts to
rectify errors introduced into an image by the geometry of the curved Earth's surface and the
movement of the satellite. In geometric correction, points in an image are registered to
corresponding points on a map or on another image that has already been rectified. The goal of
geometric correction is to put image elements in their proper planimetric (x and y) positions.
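The sketch below illustrates the registration step: an affine transform from image (column, row)
coordinates to map (x, y) coordinates is fitted by least squares to a handful of ground control
points, then used to place any pixel at its planimetric position. The control point values are
illustrative assumptions.

    import numpy as np

    # Each ground control point pairs a pixel location with its known map position.
    gcp_pixels = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
    gcp_map = np.array([[500000, 4200000], [503000, 4200100],
                        [499900, 4197000], [502900, 4197100]], dtype=float)

    # Solve map = [col, row, 1] @ coeffs for the six affine coefficients.
    design = np.column_stack([gcp_pixels, np.ones(len(gcp_pixels))])
    coeffs, *_ = np.linalg.lstsq(design, gcp_map, rcond=None)

    def pixel_to_map(col: float, row: float) -> np.ndarray:
        """Planimetric (x, y) position of a pixel under the fitted transform."""
        return np.array([col, row, 1.0]) @ coeffs

    print(pixel_to_map(50, 50))   # map coordinates of the scene center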