
Digital Image Processing

Lectures 1 & 2
Mahmood (Mo) Azimi, Professor
Colorado State University
Fort Collins, CO 80523
email: [email protected]


Introduction
The primary interest in transmitting and handling images in digital form goes back to the 1920s. However, due to the lack of powerful computer systems and the vast storage requirements, this area was not fully explored until the mid-1960s, after the successful Apollo mission and other space programs. Serious work in digital image processing (DIP) began at JPL, when the need to process lunar images was felt after the Apollo mission.

DIP, originally established to analyze and improve lunar images, is rapidly growing into a wealth of new applications, due to the enormous progress made in both algorithm development and computer engineering. At present, there is no technical area that is not affected in one way or another by DIP. The most important fields of growth appear to be medical image processing (e.g. surgical simulators and tele-surgery), data communication and compression (e.g. HDTV, 3D TV), remote sensing (e.g. meteorological, environmental and military), and computer vision (e.g. robotics, assembly-line inspection, and autonomous systems such as UAVs).
Currently, the emphasis is shifting toward real-time digital image processing. For the years ahead, trends in computer engineering, especially in parallel/pipeline processing technologies, coupled with newly emerging applications, indicate no limit to the horizon of the DIP field.



Applications:
1. Medical: Automatic detection/classification of tumors in X-ray images, magnetic resonance imaging (MRI), processing of CAT-scan and ultrasound images, chromosome identification, blood tests, etc.
2. Computer Vision: Identification of parts in an assembly line, robotics, tele-operation, autonomous systems, bin-picking, etc.
3. Remote Sensing: Meteorology and climatology, tracking of earth resources, geographical mapping, prediction of agricultural crops, urban growth and weather, flood and fire control, etc.
4. Radar and Sonar: Detection and recognition of various targets, guidance or maneuvering of aircraft or missiles.
5. Image Transmission: HDTV and 3D TV, teleconferencing, communications over computer networks/satellite, military communication, space missions.
6. Office Automation: Document storage, retrieval and reproduction.
7. Identification Systems: Facial, iris and fingerprint-based ID systems, airport and bank security, etc.



A typical image processing system acquires an image, preprocesses it (e.g. enhancement and restoration), applies transforms, and compresses the result for storage or transmission.

Digital Image: A sampled and quantized version of a 2D function that has been acquired by optical or other means, sampled on an equally spaced rectangular grid, and quantized in equal intervals of amplitude.

The task of digital image processing involves the handling, transmission, enhancement and analysis of digital images with the aid of digital computers. This calls for the manipulation of 2-D signals. There are generally three types of processing that are applied to an image: low-level, intermediate-level and high-level processing, which are described below.
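As a minimal sketch of this sampling and quantization step (Python with NumPy; the continuous scene is stood in for by a hypothetical function f(x, y), not anything from these notes):

import numpy as np

# Hypothetical continuous 2D intensity function with values in [0, 1]
def f(x, y):
    return 0.5 * (1.0 + np.cos(2 * np.pi * x) * np.sin(2 * np.pi * y))

# Sample on an equally spaced N x N rectangular grid over the unit square
N = 256
x = np.linspace(0.0, 1.0, N)
y = np.linspace(0.0, 1.0, N)
X, Y = np.meshgrid(x, y)
samples = f(X, Y)

# Quantize the amplitudes into 256 equal intervals (8 bits per pixel)
digital_image = np.round(samples * 255).astype(np.uint8)
print(digital_image.shape, digital_image.dtype)   # (256, 256) uint8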
Areas of Digital Image Processing (DIP):
Low-level processing starts with one image and produces a modified version of that image.
1. Image Representation and Modelling
An image can be represented either in the spatial domain or
the transform domain. An important consideration in image
representation is the fidelity or intelligibility criteria for measuring the quality of an image. Such measures include contrast (gray-level differences within an image), spatial frequencies, color and sharpness of the edge information.
Images represented in the spatial domain directly indicate the type and the physical nature of the imaging sensors, e.g. the luminance of objects in a scene for pictures taken by a camera, the absorption characteristics of body tissue for X-ray images, the radar cross-section of a target for radar imaging, the temperature profile of a region for infrared imaging, and the gravitational field of an area in geophysical imaging.
Linear statistical models can also be used to model the images in
the spatial domain. These models allow development of algorithms
that are useful for an entire class or an ensemble of images rather
than for a single image.
Alternatively, unitary or orthogonal image transforms can be
applied to digital images to extract such characteristics as spectral
(frequency) content, bandwidth, power spectra, or other salient
features for various applications such as filtering, compression, and
object recognition.
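As a hedged illustration of extracting spectral content with such a transform, the short NumPy sketch below computes the 2D DFT of a grayscale array and its (log) power spectrum; the variable digital_image is assumed to be any 2D uint8 array, e.g. from the earlier sampling sketch.

import numpy as np

# Assume digital_image is a 2D grayscale array (e.g. from the earlier sketch)
img = digital_image.astype(np.float64)

# 2D DFT, with the zero-frequency component shifted to the center
F = np.fft.fftshift(np.fft.fft2(img))

# Power spectrum (squared magnitude), usually inspected on a log scale
log_power = np.log1p(np.abs(F) ** 2)
print(log_power.shape)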

2. Image Enhancement (low-level):


No imaging system gives images of perfect quality. In image enhancement the aim is to manipulate an image in order to improve its quality. This requires an intelligent human viewer to recognize and extract useful information from the image. Since human subjective judgement may be either wise or fickle, certain difficulties may arise; a careful study therefore requires subjective testing with a group of human viewers. The psychophysical aspect should always be considered in this process.
Examples of image enhancement (low-level processing) are (a minimal contrast-stretching sketch follows this list):
• Contrast & gray-scale improvement
• Spatial frequency enhancement
• Pseudo coloring
• Noise removal
• Edge sharpening
• Magnification and zooming
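A minimal sketch of one such operation, linear contrast stretching of the gray scale, is given below in NumPy; the percentile limits are an illustrative choice, not a prescribed method.

import numpy as np

def contrast_stretch(img, low_pct=2, high_pct=98):
    # Linearly map gray levels between two percentiles onto [0, 255]
    img = img.astype(np.float64)
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = (img - lo) / max(hi - lo, 1e-12)   # guard against division by zero
    return np.clip(stretched * 255, 0, 255).astype(np.uint8)

# Example: enhance a low-contrast grayscale array `img`
# enhanced = contrast_stretch(img)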

3. Image Restoration (low-level):


As in image enhancement, the ultimate goal of image restoration is to improve the quality of an image in some sense. Image restoration involves recovering or estimating an image that has been degraded by some deterministic and/or stochastic phenomena. Blur is a deterministic phenomenon caused by atmospheric turbulence (satellite imaging), relative motion between the camera and the object, defocusing, etc. Noise, on the other hand, is a stochastic phenomenon which corrupts images additively and/or multiplicatively. Sources of additive noise are sensor imperfections, thermal noise and channel noise. Examples of multiplicative noise are speckle in coherent imaging systems such as synthetic aperture radar (SAR), laser and ultrasound images, and also film-grain noise.
Restoration techniques aim at modelling the degradation and then applying an appropriate scheme to recover the original image. Some of the typical methods are (a minimal Wiener-filter sketch follows this list):
• Image estimation and noise smoothing
• Deblurring
• Inverse filtering
• 2D Wiener and Kalman filters
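As a sketch of frequency-domain restoration, the code below applies a simple Wiener filter for deblurring in the presence of additive noise; the point-spread function and the constant noise-to-signal ratio K are assumed known here, whereas in practice they must be estimated.

import numpy as np

def wiener_deconvolve(blurred, psf, K=0.01):
    # Wiener filter with a constant noise-to-signal power ratio K:
    #   F_hat = conj(H) / (|H|^2 + K) * G
    H = np.fft.fft2(psf, s=blurred.shape)      # PSF assumed anchored at the origin
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(W * G))

# Example: 5x5 uniform (defocus-like) blur kernel
psf = np.ones((5, 5)) / 25.0
# restored = wiener_deconvolve(blurred_image, psf)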

Image reconstruction can also be viewed as a special class of restoration in which two- or higher-dimensional objects are reconstructed from several projections. Applications: CT scanners (medical), astronomy, radar imaging and NDE. Typical methods are (a filtered back-projection sketch follows this list):
• Radon transform
• Projection theorem
• Reconstruction algorithms
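A hedged sketch of reconstruction from projections, using the Radon transform and filtered back-projection from scikit-image (assuming a recent version where radon/iradon take the filter_name argument); the phantom and the angle set are purely illustrative.

import numpy as np
from skimage.transform import radon, iradon

# Simple rectangular phantom standing in for a 2D cross-section of an object
phantom = np.zeros((128, 128))
phantom[40:90, 50:80] = 1.0

# Forward projection: sinogram of line integrals over 180 angles
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=theta)

# Filtered back-projection (inverse Radon transform) recovers the object
reconstruction = iradon(sinogram, theta=theta, filter_name='ramp')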
4. Image Transforms (Intermediate-Level):
Image transformation involves mapping digital images to the transform domain using a unitary image transform such as the 2D DFT, the 2D Discrete Cosine Transform (DCT), or the 2D Discrete Wavelet Transform (DWT). In the transform domain, certain useful characteristics of the images, which typically cannot be ascertained in the spatial domain, are revealed. Image transformation performs both feature extraction and dimensionality reduction, which are crucial for various applications. These operations are considered intermediate-level since images are mapped to reduced-dimensional feature vectors.
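As a minimal sketch of such a transform-domain mapping, the code below applies an orthonormal 2D DCT with SciPy and keeps only the low-frequency coefficients as a crude form of feature extraction and dimensionality reduction; the input array and the 32 x 32 block size are assumptions for illustration.

import numpy as np
from scipy.fft import dctn, idctn

# Assume a 2D grayscale array (e.g. 256 x 256, from the earlier sketch)
img = digital_image.astype(np.float64)

# Orthonormal 2D Discrete Cosine Transform
coeffs = dctn(img, norm='ortho')

# Keep only the top-left 32 x 32 low-frequency block as a reduced feature set
k = 32
reduced = np.zeros_like(coeffs)
reduced[:k, :k] = coeffs[:k, :k]

# Approximate reconstruction from the reduced coefficients
approx = idctn(reduced, norm='ortho')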
5. Image Data Compression and Coding:
In many applications one needs to transmit or store images in digital form, and the number of bits required is tremendous. It is therefore essential to compress or efficiently code the data; e.g. LANDSAT-D sends approximately 3.7 × 10^15 bits of information per year. Storage and/or transmission of such a huge amount of data requires large capacity and/or bandwidth, which would be expensive or impractical.

Straight digitization requires 8 bits per picture element (pixel). Using intraframe coding techniques such as DPCM or transform coding, one can reduce this to 1-2 bits/pixel while preserving image quality. Using frame-to-frame coding (transmitting frame differences), further reduction is possible. Motion-compensated coding detects and estimates motion parameters from video image sequences, and motion-compensated frame differences are transmitted. An area of active research is stereo-video sequence compression and coding for 3D TV and virtual reality applications.
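A hedged sketch of the predictive (DPCM-style) idea is given below: each pixel is predicted by its reconstructed left neighbor and only the quantized prediction error is kept, which is why 1-2 bits/pixel becomes plausible after entropy coding; the quantizer step of 8 is an illustrative choice, not a standard.

import numpy as np

def dpcm_encode(img, step=8):
    # Row-wise DPCM: predict each pixel from its (reconstructed) left neighbor
    # and quantize the prediction error with a uniform quantizer of the given step.
    img = img.astype(np.int32)
    residual = np.zeros_like(img)   # small integers, cheap to entropy-code
    recon = np.zeros_like(img)      # what the decoder would reproduce
    for r in range(img.shape[0]):
        prev = 0
        for c in range(img.shape[1]):
            q = int(np.round((img[r, c] - prev) / step))
            residual[r, c] = q
            prev = int(np.clip(prev + q * step, 0, 255))
            recon[r, c] = prev
    return residual, recon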

Some of the typical schemes are:

• Pixel-by-pixel coding
• Predictive coding
• Transform coding
• Hybrid coding
• Frame-to-frame coding
• Vector quantization

6. Image Analysis and Computer Vision (High-level):


Image analysis and computer vision involve (a) segmentation, (b) feature extraction and (c) classification/recognition. Segmentation techniques are used to isolate the desired object from the scene so that its features can be measured easily and accurately, e.g. separation of targets from the background. The most useful features are then extracted from the segmented objects (targets). Quantitative evaluation of these features allows classification and description of the object.

The overall goal of image analysis is to build an automatic, interactive system which can derive symbolic descriptions from a scene. Pattern recognition can be regarded as the inverse of computer graphics: it starts with a picture or scene and transforms it into an abstract description, i.e. a set of numbers, a string of symbols or a graph. The differences and similarities between the three areas of computer graphics, image processing and pattern recognition are shown in the figure above.
A complete image processing or computer vision system is shown
below:

Sensor: Collects the image data.
Preprocessor: Compression of data, noise removal, etc.
Segmentation: Isolates the objects of interest using edge detection and region growing.
Feature Extraction: Extracts a representative set of features from the segmented objects.
Classifier: Classifies each object or region and extracts attributes from them.
Structural Analyzer: Determines the relationships among the classified primitives. The output is the description of the original scene.
World Model: Used to guide each stage of the analyzing system; the results of each stage can in turn be used to refine the world model. Before we start to analyze a scene, a world model is constructed incorporating as much a priori information about the scene as possible.
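A hedged sketch of the first stages of such a system (segmentation by global thresholding, region labelling, and simple feature extraction) is given below with scikit-image; the Otsu threshold and the chosen features are illustrative, not the specific pipeline of the figure.

import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def analyze_scene(img):
    # Segmentation: global Otsu threshold separates bright objects from background
    mask = img > threshold_otsu(img)
    # Label connected regions (candidate objects)
    labels = label(mask)
    # Feature extraction: area, centroid and eccentricity for each object
    return [(r.area, r.centroid, r.eccentricity) for r in regionprops(labels)]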

Typical operations applied in image analysis are:
• Edge extraction
• Line extraction
• Texture discrimination
• Shape recognition
• Object recognition
A more complete image analysis system is shown below:



The common questions underlying these areas are:
1. How do we describe or characterize images?
2. What mathematical techniques do we want to use on an image?
3. How do we implement these algorithms?
4. How do we evaluate the quality of the processed image?
Goals of this course:
• Fundamental concepts
• Image processing mathematics
• Basic techniques
• State-of-the-art
• Applications

