

Introduction to Image Processing

Image Pre-Processing
Once an image is acquired, it is generally processed to eliminate errors.
Two categories:
• Geometric correction
• Radiometric correction

Geometric Correction
Sources of distortion
• Variations in altitude
• Variations in velocity
• Earth curvature
• Relief displacement
• Atmospheric refraction
• Skew distortion from Earth's eastward rotation

Geometric Correction
• Raw digital images contain two types of geometric distortions:
systematic and random
• Systematic sources are understood and can be corrected by
applying formulas
• Random distortions, or ‘residual unknown systematic distortions’, are corrected using multiple regression of ground control points that are visible in the image

Radiometric Correction
Radiance measured at a given point is influenced by:
• Changes in illumination
• Atmospheric conditions (haze, clouds)
• Angle of view
• Instrument response characteristics
• Elevation of the sun (seasonal change in sun angle)
• Earth-sun distance variation

Image enhancement
• Improving image quality, particularly contrast
It includes a number of methods used for enhancing subtle radiometric
differences so that the eye can easily perceive them
Two types: point and local operations
• Point: modify brightness value of a given pixel independently
• Local: modify pixel brightness based on neighborhood brightness
values

Image enhancement
Three types of manipulation are:
• Contrast enhancement: Methods include gray level thresholding,
level slicing and contrast stretching
• Spatial feature manipulation: Methods include spatial filtering, edge
enhancement and Fourier analysis
• Multi-image manipulation: Methods include multispectral band
ratioing and differencing, principal components, canonical
components, vegetative components, decorrelation stretching, and others (a band-ratio sketch follows this list)
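
As a hedged illustration of band ratioing and differencing, the following Python/NumPy sketch computes an NDVI-style normalized ratio of two co-registered bands; the band names and the synthetic data are assumptions for the example, not values from these notes.

import numpy as np

def band_ratio(nir, red, eps=1e-6):
    """Normalized difference of two co-registered bands,
    e.g. an NDVI-style vegetation index: (NIR - red) / (NIR + red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

def band_difference(band_a, band_b):
    """Simple band differencing, e.g. for change detection between
    two dates of the same band."""
    return band_a.astype(np.float64) - band_b.astype(np.float64)

# Example with synthetic 8-bit bands (hypothetical data)
rng = np.random.default_rng(0)
nir = rng.integers(0, 256, size=(100, 100))
red = rng.integers(0, 256, size=(100, 100))
ndvi_like = band_ratio(nir, red)   # values roughly in [-1, 1]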

Contrast Enhancement
(Point Operation)

Most images start with low contrast; these point operations improve it

• Level slicing re-classes DNs into fewer classes, so differences can
be more easily seen; colours or grayscale values can be assigned to each class.
• Contrast stretching is the opposite: a narrow range of DN values is
stretched out over the full DN range (a minimal sketch of both follows).
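
A minimal sketch of a linear contrast stretch and a simple level slice (Python/NumPy, assuming an 8-bit single-band image; the number of slices is an arbitrary choice for illustration):

import numpy as np

def linear_stretch(dn, out_min=0, out_max=255):
    """Linearly stretch the occupied DN range to the full output range."""
    dn = dn.astype(np.float64)
    lo, hi = dn.min(), dn.max()
    if hi == lo:                       # flat image: nothing to stretch
        return np.full_like(dn, out_min, dtype=np.uint8)
    stretched = (dn - lo) / (hi - lo) * (out_max - out_min) + out_min
    return stretched.astype(np.uint8)

def level_slice(dn, n_slices=8):
    """Re-class DNs into a small number of equal-width classes (0..n_slices-1)."""
    edges = np.linspace(dn.min(), dn.max(), n_slices + 1)
    return np.digitize(dn, edges[1:-1])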

Contrast enhancement
Here is what spectral histograms look like

Note that the DN range does not start at zero for any of them

Source: http://www.sci-ctr.edu.sg/ssc/publication/remotesense/process.htm

Contrast enhancement

The image on the left is hazy because of atmospheric scattering; the image on the right is improved
through the use of gray level thresholding. Note that there is more contrast and features
can be better extracted.
Source: http://www.sci-ctr.edu.sg/ssc/publication/remotesense/process.htm

Spatial Feature Enhancement


(local operation)

• Spatial filtering / convolution: neighborhood operations that
calculate a new value for the center pixel based on the values of its
neighbors within a window; includes low-pass (emphasizes regional
spatial trends, de-emphasizes local variability) and high-pass
(emphasizes local spatial variability) filters (a sketch of both follows)
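
A minimal convolution sketch (Python with NumPy and SciPy), assuming a square mean kernel as the low-pass filter and obtaining the high-pass result by subtracting the low-pass image from the original; kernel size and border handling are illustrative choices:

import numpy as np
from scipy.ndimage import convolve

def low_pass(img, size=3):
    """size x size mean filter: emphasizes regional trends,
    de-emphasizes local variability."""
    kernel = np.full((size, size), 1.0 / (size * size))
    return convolve(img.astype(np.float64), kernel, mode='nearest')

def high_pass(img, size=3):
    """High-pass as original minus low-pass: emphasizes local variability."""
    return img.astype(np.float64) - low_pass(img, size)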

Spatial Feature Enhancement

Edge enhancement: a convolution method that combines elements of both
low- and high-pass filtering in a way that accentuates linear and local
contrast features without losing the regional patterns (sketched after the steps below)
• First, a high-pass image is made with local detail
• Next, all or some of the gray level of the original scene is added
back
• Finally, the composite image is contrast stretched
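
A self-contained sketch of these three steps (Python with NumPy and SciPy; the add-back fraction is a hypothetical tuning parameter, not a value from these notes):

import numpy as np
from scipy.ndimage import convolve

def edge_enhance(img, add_back=0.7, size=3):
    """High-pass detail plus a fraction of the original gray levels,
    followed by a linear contrast stretch of the composite."""
    img = img.astype(np.float64)
    kernel = np.full((size, size), 1.0 / (size * size))
    detail = img - convolve(img, kernel, mode='nearest')   # step 1: high-pass image
    composite = detail + add_back * img                    # step 2: add back original
    lo, hi = composite.min(), composite.max()              # step 3: contrast stretch
    return ((composite - lo) / (hi - lo + 1e-9) * 255).astype(np.uint8)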

Image classification

Spectral pattern recognition procedures classify a pixel based on its
pattern of radiance measurements in each band: more common and
easy to use

Spatial pattern recognition classifies a pixel based on its relationship
to surrounding pixels: more complex and difficult to implement

Temporal pattern recognition looks at changes in pixels over time to
assist in feature recognition

Spectral Classification
Two types of classification:
• Supervised: The analyst designates on-screen “training areas” of known
land cover type, from which an interpretation key is created describing
the spectral attributes of each cover class. Statistical techniques are
then used to assign pixel data to a cover class, based on which class its
spectral pattern most resembles (a minimum-distance sketch follows this list).
• Unsupervised: Automated algorithms produce spectral classes based
on natural groupings of multi-band reflectance values (rather than
through designation of training areas), and the analyst uses reference
data, such as field measurements, DOQs or GIS data layers, to assign
areas to the given classes.
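
One hedged illustration of the supervised approach is a minimum-distance-to-means classifier over per-pixel band vectors; the Python/NumPy sketch below assumes training pixels and labels have already been extracted from the training areas, and all names are hypothetical:

import numpy as np

def train_class_means(pixels, labels):
    """pixels: (n_samples, n_bands) spectra from training areas;
    labels: (n_samples,) integer cover-class ids. Returns class ids and mean spectra."""
    classes = np.unique(labels)
    means = np.vstack([pixels[labels == c].mean(axis=0) for c in classes])
    return classes, means

def classify_min_distance(image, classes, means):
    """image: (rows, cols, n_bands). Assign each pixel to the class whose
    mean spectrum is nearest in Euclidean distance."""
    flat = image.reshape(-1, image.shape[-1]).astype(np.float64)
    d = np.linalg.norm(flat[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)].reshape(image.shape[:2])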

Spectral Classification
Unsupervised:
• The computer groups all pixels according to their spectral relationships and looks for natural spectral groupings of pixels, called spectral classes
• Assumes that data in different cover classes will not belong to the same grouping
• Once the spectral classes are created, the analyst assesses their utility (a clustering sketch follows below)
(Figure: scatterplot of pixel values showing two natural groupings, spectral class 1 and spectral class 2)
Source: F.F. Sabins, Jr., 1987, Remote Sensing: Principles and Interpretation.
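
A corresponding hedged sketch of the unsupervised grouping step, using scikit-learn's KMeans as one possible clustering algorithm (not one prescribed by these notes); the number of spectral classes is an assumption the analyst would tune:

import numpy as np
from sklearn.cluster import KMeans

def unsupervised_spectral_classes(image, n_classes=5, seed=0):
    """Cluster per-pixel band vectors into n_classes spectral classes;
    the analyst then assigns each class to a cover type using reference data."""
    flat = image.reshape(-1, image.shape[-1]).astype(np.float64)
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit(flat)
    return km.labels_.reshape(image.shape[:2])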
CASE STUDY: An automated industrial conveyor belt system using image processing and hierarchical clustering for classifying marble slabs
In quality classification tasks, the classification output determines the
category, or quality group, of a particular item. A typical classification process
comprises five main steps (a minimal control-flow sketch follows the list):

(i) Locating or recognizing the items on the conveyor belt via some type of
sensor such as a camera, scanner, etc.

(ii) Acquiring the necessary data from the item (i.e. taking pictures,
measuring the amount of reflected light, electromagnetic waves, or another
type of signal). The acquisition device is usually located above the conveyor
belt to view the items orthographically.

(iii) Processing the data to extract several useful features.

(iv) Classification of the item using the extracted features and a classifier.

(v) Performing the necessary action following the classification result of the classifier.
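
The control flow of the five steps could be organized as a simple loop; in the Python sketch below every callable is a hypothetical placeholder, not the paper's actual implementation:

def process_item(capture_image, extract_features, classifier, actuate):
    """One pass of the conveyor-belt loop: acquire -> features -> classify -> act.
    All four callables are placeholders for the real hardware and classifier code."""
    image = capture_image()                # steps (i)-(ii): detect item and acquire data
    features = extract_features(image)     # step (iii): feature extraction
    quality_group = classifier(features)   # step (iv): classification
    actuate(quality_group)                 # step (v): route the slab to its bin
    return quality_group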
Marble quality classification is based on some physical, mechanical, and
technological properties required by universal standards.

At the same time, the classified marble slabs should reflect attractive
colour and pattern choices. Important constraints for aesthetic
appearance are homogeneity, texture, colour, and the distribution of limestone.
The marble specimens used in this study are extracted from a mine in
the Manisa region of Turkey. Although there are no unique criteria for
classifying marble specimens, colour scheme, homogeneity, size,
orientation, thickness, and distribution of the filled joints (red–brown
coloured veins) are often used by human experts to perform the
classification visually.
Fig. 1. Typical marble slab images from four different quality groups: (a) Group 1, (b) Group 2, (c) Group 3, and (d) Group 4.
(1) Homogenous limestone (beige colour) (Fig. 1(a));

(2) Limestone with filled thin joints (veins) (Fig. 1(b));

(3) Brecciated limestone (composed of limestone grains of different shape and size cemented with cohesive matrix) (Fig. 1(c)). Here, cohesive matrix is defined as the collection of joints (veins) that are unified to construct a larger area of material; and

(4) Homogenous cohesive matrix (Fig. 1(d)).


Fig. 2. Block diagram of the electro-mechanical conveyor belt system for marble classification.
Fig. 3. A look inside the closed and black-painted cabinet that is designed for image acquisition.
The image acquisition part of the system is designed in order to
standardize capturing of the marble surface images.

The system consists of a camera (an 8-megapixel Canon EOS 350D
digital camera with an 18–55 mm EF-S zoom lens, or a CCD web-cam),
connection cables, light sources, a desktop computer, and a cabinet
housing all these parts.

The complete image acquisition system is shown in Fig. 3.

The 8-megapixel camera produces high-quality images with a resolution
of 1575 x 1550.
Experiments showed that using a resolution of 315 x 310 was enough to
process images without affecting the success rate.

For this purpose, a CCD web-cam was also used.

The camera was positioned perpendicular to the bottom surface of the
cabinet, and the USB connection to the desktop personal computer was
established via cables.
The cabinet was used to ensure a fully isolated and uniformly
illuminated area.

The fluorescent light sources were positioned to minimize, as much as
possible, the glare that may occur at the surface of the marble samples.

Thus, in this system, fluorescent lamps suitably positioned in the
closed and black-painted cabinet are used for illumination.
Although the light sources inside the cabinet are located carefully so
that the illumination inside the image acquisition unit can be made uniform,
some non-uniform illumination remains.

To compensate for the remaining non-uniformity, an image of a white
paper is captured, and the maximum gray-level intensity value in the
green channel of this image is calculated.

The green channel is selected after several experiments on conversion
of the RGB image to grayscale.
Fig. 4. (a) Acquired image with non-uniform illumination, (b) correction of
non-uniform illumination by adding the template image obtained from
the white paper to the green channel of the acquired image, and (c) the
resulting gray scale image after illumination correction and background
cropping.
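
A hedged sketch of this flat-field style correction (Python/NumPy); the exact arithmetic in the paper may differ, and this simply follows the description of deriving a template from the green channel of the white-paper image and adding it to the green channel of each acquired image:

import numpy as np

def illumination_template(white_image_rgb):
    """Template from an image of white paper: per-pixel deficit of the green
    channel relative to its maximum gray level (one reading of the description)."""
    green = white_image_rgb[..., 1].astype(np.float64)
    return green.max() - green          # larger where illumination is weaker

def correct_illumination(image_rgb, template):
    """Add the template to the green channel and return a grayscale image."""
    green = image_rgb[..., 1].astype(np.float64)
    corrected = np.clip(green + template, 0, 255)
    return corrected.astype(np.uint8)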
Fig. 5. Four synthetic images used to represent cohesive material
regions in four different quality groups: (a) in typical samples of Group 1,
cohesive material regions are very small in size, (b) in typical samples of
Group 2, cohesive material regions have vein-like shapes, (c) in typical
samples of Group 3, cohesive material regions are unified, and (d) in
typical samples of Group 4, cohesive material regions are very large in
size.
The numbers of samples for each quality group (from 1 to 4) are
172, 388, 411, and 187, respectively, comprising a data set of 1158
samples in total.
Fig. 6. Hierarchical clustering scheme. N is the number of quality groups and K is the level index. At the end of the tree, each quality group is constructed by merging its corresponding clusters at each level using the union (∪) operation.
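
As a generic illustration of the hierarchical clustering idea (not the paper's exact algorithm), the sketch below uses SciPy's agglomerative clustering on slab feature vectors and cuts the tree into N quality-group clusters; the feature matrix and the Ward linkage choice are assumptions for the example:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def hierarchical_quality_groups(features, n_groups=4):
    """features: (n_samples, n_features) vectors extracted from slab images.
    Builds an agglomerative tree and cuts it into n_groups clusters; mapping
    clusters to quality groups would then use labelled reference slabs."""
    tree = linkage(features, method='ward')                    # agglomerative tree
    return fcluster(tree, t=n_groups, criterion='maxclust')    # labels 1..n_groups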
Fig. 7. (a) Mechanical structure of the system and inclined loading ramp,
(b) pneumatic pistons, (c) PLC used in the system (TWDLCAE40DRF),
(d) relay circuit, (e) 220 V panel, and (f) air compressor for the pneumatic pistons.
Fig. 8. Flow chart of the complete system.
Fig. 9. (a) SCADA interface of the system and (b) MATLAB GUI of the system.
