Dept: Electronics and Communication Engineering  Subject Name: Digital Image Processing  Subject Code: 15A04708
FACULTY:K.SELVAKUMARASAMY
ASSOCIATE PROFESSOR
ECE DEPT.
SYLLABUS
UNIT–I
Introduction to Digital Image Processing – example fields of its usage – image sensing and acquisition – image modeling – sampling, quantization and digital image representation – basic relationships between pixels – mathematical tools/operations applied on images – imaging geometry.
UNIT–II
2D Orthogonal and Unitary Transforms and their properties – Fast Algorithms – Discrete Fourier Transform – Discrete Cosine Transform – Walsh–Hadamard Transform – Hotelling Transform – comparison of the properties of the above.
UNIT–III
SYLLABUS
UNIT–IV
Degradation model, algebraic approach to restoration – inverse filtering – least mean square filters, constrained least squares restoration, blind deconvolution. Image segmentation: edge detection, edge linking, threshold-based segmentation methods – region-based approaches – template matching – use of motion in segmentation.
UNIT–V
TEXT BOOKS:
1. R.C. Gonzalez & R.E. Woods, “Digital Image Processing”, Addison Wesley/Pearson Education, 3rd Edition, 2010.
REFERENCES:
1. Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, “Digital Image Processing Using MATLAB”, Tata McGraw Hill, 2010.
3. William K. Pratt, “Digital Image Processing”, John Wiley, 3rd Edition, 2004.
COURSE OBJECTIVES AND OUTCOMES
To study the image fundamentals and mathematical transforms necessary for image processing.
To understand the concepts of image enhancement in the spatial and frequency domains.
To acquire knowledge on image compression methods.
To acquire knowledge on image segmentation methods.
To understand the concepts of image restoration.
Apply 2D filters for image enhancement in spatial and frequency domain. (BL-03)
UNIT 1 CONTENTS:
WHAT IS A DIGITAL IMAGE?
A colour image stores 4 samples per pixel (Red, Green, Blue, and “Alpha”, a.k.a. opacity).
Fig 1.3: For most of this course we will focus on grey-scale images.
The continuum from image processing to computer vision can be broken up into low-, mid-, and high-level processes.
HISTORY OF DIGITAL IMAGE PROCESSING
1980s – Today: The use of digital image processing techniques has exploded, and they are now used for all kinds of tasks in all kinds of areas:
▪ Image enhancement/restoration
▪ Artistic effects
▪ Medical visualisation
▪ Industrial inspection
▪ Law enforcement
APPLICATIONS – IMAGING MODALITIES
APPLICATIONS: IMAGE ENHANCEMENT
▪ One of the most common uses of DIP techniques: improve quality, remove noise, etc.
▪ Radio frequencies
▪ Magnetic Resonance Imaging (MRI)
APPLICATIONS: MEDICINE
Take a slice from an MRI scan of a canine heart, and find boundaries between types of tissue.
▪ Image with gray levels representing tissue density.
Fig 1.18: Original image of dog heart. Fig 1.19: Edge detection image.
APPLICATIONS: GIS
▪ Meteorology (NOAA)
(infra red)
▪ Global inventory of human
settlement
▪ Not hard to imagine the kind of
analysis that might be done
using this data
APPLICATIONS: LAW ENFORCEMENT
▪ Gesture recognition
Morphological processing deals with tools for extracting image components that are useful in the representation and description of the shape and boundary of objects; it is widely used in automated inspection applications.
Compression deals with techniques for reducing the storage required to save an image or the bandwidth required to transmit it, which is most important in Internet applications.
COMPONENTS OF IMAGE PROCESSING SYSTEM
Image Sensors: With reference to sensing, two elements are required to acquire a digital image: 1) a physical device that is sensitive to the energy radiated by the object we wish to image, and 2) a digitizer, a device for converting the output of the physical sensing device into digital form.
(Figure: block diagram of a general-purpose image processing system – image sensors, specialized image processing hardware, computer, image processing software, mass storage, hardcopy devices, and network.)
Image processing software consists of specialized modules that perform specific tasks; a well-designed package also includes the capability for the user to write code. More sophisticated software packages allow the integration of those modules.
ELEMENTS OF VISUAL PERCEPTION
STRUCTURE OF THE HUMAN EYE
IMAGE FORMATION IN THE EYE
• The distance between the center of the lens and the retina along the visual axis is approximately 17 mm.
Fig 1.41
IMAGE SENSING & ACQUISITION
▪ To generate a 2-D image using a single sensor, there have to be relative displacements in both the x- and y-directions between the sensor and the area to be imaged.
• Each number represents the brightness of the image at that point. The grabbed image is now a digital image and can be accessed as a two-dimensional array of data.
IMAGE SAMPLING AND QUANTIZATION
• The vertical tick marks indicate the
specific value assigned to each of the
eight intensity intervals.
• The continuous intensity levels are
quantized by assigning one of the eight
values to each sample.
• The assignment is made depending on
the vertical proximity of a sample to a
tick mark.
• Starting at the top of the image and carrying out this procedure line by line produces a two-dimensional digital image.
Fig 1.47: Sampling and Quantization
• In addition to the number of discrete levels used, the accuracy in quantization is highly dependent on the noise content of the sampled signal.
• In practice, the method of sampling is determined by the sensor
arrangement used to generate the image.
1. When an image is generated by a single sensing element combined with
mechanical motion , the output of the sensor is quantized in the manner
described above.
2. When a sensing strip is used for image acquisition the number of sensors in
the strip establishes the sampling limitations in one image direction.
Mechanical motion in the other direction can be controlled.
3. When a sensing array is used for image acquisition , there is no motion and
the number of sensors in the array establishes the limits of sampling in both
the directions.
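The quantization rule described above can be sketched in a few lines of Python. This is a minimal illustration, assuming an eight-level scale and a [0, 1] intensity range (the function name and ranges are not from the slides):

```python
def quantize(samples, levels=8, lo=0.0, hi=1.0):
    """Assign each continuous sample the nearest of `levels` evenly
    spaced intensity values (the vertical tick marks on the scale)."""
    step = (hi - lo) / (levels - 1)
    ticks = [lo + k * step for k in range(levels)]   # allowed intensities
    # "vertical proximity" rule: pick the closest tick for each sample
    return [min(ticks, key=lambda t: abs(t - s)) for s in samples]

# one scan line of continuous intensities
line = [0.02, 0.30, 0.49, 0.51, 0.97]
quantized = quantize(line)
```

Applying this line by line from the top of the image yields the two-dimensional digital image described above.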
Fig.1.48
▪ The amplitude of the image is nonzero and finite, i.e. 0 < f(x, y) < ∞.
▪ f(x, y) is the product of an illumination and a reflectance component, f(x, y) = i(x, y) r(x, y), where 0 < i(x, y) < ∞ and 0 < r(x, y) < 1.
WHAT ARE GRAY LEVELS?
REPRESENTATION OF DIGITAL IMAGES
Spatial and Gray-level Resolution
• Sampling is the principal factor determining the spatial resolution of an image.
• Spatial resolution is the smallest discernible detail in an image.
• Suppose that we construct a chart with vertical lines of width W, with the space
between the lines also having width W.
• A line pair consists of one such line and its adjacent space.
• The width of a line pair is 2W, and there are 1/(2W) line pairs per unit distance.
• Resolution is simply the smallest number of discernible line pairs per unit
distance.
• Gray-level resolution similarly refers to the smallest discernible change in
gray level.
• We have considerable discretion regarding the number of samples used to
generate a digital image, but this is not true for the number of gray levels. Due
to hardware considerations, the number of gray levels is usually an integer
power of 2.
Spatial and Gray-level Resolution
The subsampling was accomplished by deleting the appropriate number of rows and columns from the original image.
(a) 452×374, 256-level image. (b)–(d) Image displayed in 128, 64, and 32 gray levels, while keeping the spatial resolution constant. (e)–(h) Image displayed in 16, 8, 4, and 2 gray levels.
• An early study by Huang [1965] attempted to quantify experimentally the effects
on image quality produced by varying N and k simultaneously.
• Sets of these three types of images were generated by varying N and k, and
observers were then asked to rank them according to their subjective quality.
Results were summarized in the form of so-called isopreference curves in the Nk-plane.
• Each point in the Nk-plane represents an image having values of N and k equal to
the coordinates of that point.
• It was found in the course of the experiments that the isopreference curves tended to shift right and upward, but their shapes in each of the three image categories were similar. A shift up and to the right simply means larger values of N and k, which implies better picture quality.
(a) Image with a low level of detail. (b) Image with a medium level of detail. (c) Image with a relatively large amount of detail.
Representative isopreference curves for the three types of images.
• Pixel: A pixel is the smallest unit of a digital image or graphic that can be displayed and represented on a digital display device. A pixel is also known as a picture element.
• A pixel is the basic logical unit in digital graphics. Pixels are combined to
form a complete image, video, text or any visible thing on a computer
display.
• A pixel is represented by a dot or square on a computer monitor display
screen. Pixels are the basic building blocks of a digital image or display and
are created using geometric coordinates.
• Neighborhood
• Adjacency
• Connectivity
• Paths
BASIC RELATIONSHIPS BETWEEN PIXELS
• Neighbors of a pixel p at coordinates (x, y)
The neighborhood relation identifies adjacent pixels and is useful for analyzing regions.
4-neighbors of p:
N4(p) = {(x+1, y), (x−1, y), (x, y+1), (x, y−1)}
NEIGHBORS OF A PIXEL
Diagonal neighbors of p:
ND(p) = {(x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)}
The diagonal neighborhood relation considers only the diagonal neighbor pixels.
Fig 1.50: Diagonal neighbors of p
NEIGHBORS OF A PIXEL
8-neighbors of p (the union of the 4-neighbors and the diagonal neighbors):
N8(p) = {(x+1, y), (x−1, y), (x, y+1), (x, y−1), (x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)}
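The three neighborhoods can be sketched in a few lines of Python (the function names are illustrative, not from the slides):

```python
def n4(p):
    """4-neighbors of pixel p = (x, y): horizontal and vertical neighbors."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    """Diagonal neighbors of p."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    """8-neighbors of p: the union of N4(p) and ND(p)."""
    return n4(p) | nd(p)
```

Note that for pixels on the image border, some of these neighbor coordinates fall outside the image.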
BASIC RELATIONSHIPS BETWEEN PIXELS
• Adjacency: Let V be the set of intensity values used to define adjacency. A pixel p is adjacent to a pixel q if they are connected. Two image subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2.
Example
1 1 0 0 1 0 1 1
1 1 0 1 0 0 1 1
1 0 0 0 1 1 1 1
Consider two image subsets S1 and S2 shown in the
following figure, for V={1}, Determine whether these two
subsets are (a) 4-adjacent(b) 8-adjacent (c)m-adjacent
S1 S2
0 0 0 0 0 0 0 1 1 0
1 0 0 1 0 0 1 0 0 1
1 0 0 1 0 1 1 0 0 0
0 0 1 1 1 0 0 0 0 0
0 0 1 1 1 0 0 1 0 1
Connectivity, Adjacency
Binary image (Case 1: V = {1}; Case 2: V = {0}):
0 1 0 1
0 0 1 0
0 0 1 0
1 0 0 0
Gray-scale image (V = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}):
54 10 100 5
81 150 2 34
201 200 3 45
7 70 147 52
EXAMPLES: ADJACENCY AND PATH
V = {1}
The 3×3 image, with pixel coordinates (row, column) running from (1,1) at the top-left to (3,3) at the bottom-right:
0 1 1
0 1 0
0 0 1
8-adjacent: the 8-paths from (1,3) to (3,3) are
(i) (1,3), (1,2), (2,2), (3,3)
(ii) (1,3), (2,2), (3,3)
m-adjacent: the m-path from (1,3) to (3,3) is (1,3), (1,2), (2,2), (3,3)
1 1 1
1 0 0
1 1 1
0 0 1
1 1 1
• Path
A (digital) path (or curve) from pixel p with coordinates (x0, y0) to pixel q with coordinates (xn, yn) is a sequence of distinct pixels with coordinates (x0, y0), (x1, y1), …, (xn, yn), where pixels (xi, yi) and (xi−1, yi−1) are adjacent for 1 ≤ i ≤ n.
We can define 4-, 8-, and m-paths based on the type of adjacency used.
BASIC RELATIONSHIPS BETWEEN PIXELS
• Connected in S
Let S represent a subset of pixels in an image. Two pixels p with
coordinates (x0, y0) and q with coordinates (xn, yn) are said to be
connected in S if there exists a path (x0, y0), (x1, y1), …, (xn, yn), where, for 0 ≤ i ≤ n, (xi, yi) ∈ S.
Let S represent a subset of pixels in an image
• For every pixel p in S, the set of pixels in S that are connected to p is called
a connected component of S.
• If S has only one connected component, then S is called Connected Set.
• A Region R is a subset of pixels in an image such that all pixels in R form a
connected component. i.e We call R a region of the image if R is a
connected set
• Two regions, Ri and Rj are said to be adjacent if their union forms a
connected set.
• Regions that are not adjacent are said to be disjoint.
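The definitions of adjacency, connectivity, and connected components above can be combined into a small labeling sketch — a breadth-first flood fill. The function name and the default choice V = {1} are illustrative assumptions:

```python
from collections import deque

def connected_components(img, V=(1,), neighbors=4):
    """Label the connected components of pixels whose value is in V,
    using 4- or 8-adjacency (breadth-first flood fill)."""
    rows, cols = len(img), len(img[0])
    offs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if neighbors == 8:
        offs += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] in V and labels[r][c] == 0:
                count += 1                      # start a new component
                labels[r][c] = count
                q = deque([(r, c)])
                while q:
                    cr, cc = q.popleft()
                    for dr, dc in offs:
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and img[nr][nc] in V and labels[nr][nc] == 0):
                            labels[nr][nc] = count
                            q.append((nr, nc))
    return count, labels

img = [[1, 0, 0],
       [0, 1, 0],
       [0, 0, 1]]
n4_count, _ = connected_components(img, neighbors=4)   # three separate components
n8_count, _ = connected_components(img, neighbors=8)   # one diagonal component
```

The example shows why the choice of adjacency matters: the three diagonal 1s form one connected set under 8-adjacency but three disjoint sets under 4-adjacency.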
BASIC RELATIONSHIPS BETWEEN PIXELS
• Boundary (or border)
The boundary of the region R is the set of pixels in the region that
have one or more neighbors that are not in R.
If R happens to be an entire image, then its boundary is defined as
the set of pixels in the first and last rows and columns of the image.
QUESTION 1
1 1 1 Region 1
1 0 1
0 11 0 Region 2
0 0 11
1 1 1
1 1 1
DISTANCE MEASURES
• Given pixels p, q and z with coordinates (x, y), (s, t), (u, v) respectively, the distance function D has the following properties:
a. D(p, q) ≥ 0 (D(p, q) = 0 if and only if p = q)
b. D(p, q) = D(q, p)
c. D(p, z) ≤ D(p, q) + D(q, z)
DISTANCE MEASURES
The following are the different Distance measures:
a. Euclidean distance: the straight-line distance between two pixels,
De(p, q) = [(x − s)² + (y − t)²]^(1/2)
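Alongside the Euclidean distance, the city-block (D4) distance mentioned below is standard on the pixel grid; the chessboard (D8) distance is included here for completeness as an assumption of the sketch:

```python
import math

def d_euclidean(p, q):
    """Straight-line distance between pixels p = (x, y) and q = (s, t)."""
    (x, y), (s, t) = p, q
    return math.sqrt((x - s) ** 2 + (y - t) ** 2)

def d_cityblock(p, q):
    """D4 (city-block) distance: only horizontal/vertical steps count."""
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d_chessboard(p, q):
    """D8 (chessboard) distance: diagonal steps are also allowed."""
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))
```

For the same pixel pair, D8 ≤ De ≤ D4, which is why the city-block disc looks like a diamond and the chessboard disc like a square.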
Euclidean distance vs city block
• Imagine that the foreground regions in the input binary image are made of some uniform slow-burning material.
• The result of the transform is a grey-level image that looks similar to the input image, except that the grey-level intensities of points inside foreground regions are changed to show the distance from each point to the closest boundary.
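This can be computed with the classic two-pass chamfer algorithm; below is a minimal sketch using the city-block (D4) metric. The function name and the 0/1 image convention (0 = background, 1 = foreground) are assumptions:

```python
INF = 10**9

def cityblock_distance_transform(binary):
    """Two-pass chamfer distance transform with the city-block (D4) metric.
    Each foreground pixel (1) receives its D4 distance to the nearest
    background pixel (0); background pixels receive 0."""
    rows, cols = len(binary), len(binary[0])
    d = [[0 if binary[r][c] == 0 else INF for c in range(cols)]
         for r in range(rows)]
    # forward pass: propagate distances from the top-left
    for r in range(rows):
        for c in range(cols):
            if r > 0:
                d[r][c] = min(d[r][c], d[r - 1][c] + 1)
            if c > 0:
                d[r][c] = min(d[r][c], d[r][c - 1] + 1)
    # backward pass: propagate distances from the bottom-right
    for r in range(rows - 1, -1, -1):
        for c in range(cols - 1, -1, -1):
            if r < rows - 1:
                d[r][c] = min(d[r][c], d[r + 1][c] + 1)
            if c < cols - 1:
                d[r][c] = min(d[r][c], d[r][c + 1] + 1)
    return d
```

Two linear passes suffice for the D4 metric because every shortest city-block path can be split into a "coming from above/left" part and a "coming from below/right" part.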
• Just as there are many different types of distance transform, there are
many types of skeletonization algorithms all of which produce slightly
different results. However, the general effects are all similar.
• Thus, for instance, we can get a rough idea of the length of a shape by
considering just the end points of the skeleton and finding the maximally
separated pair of end points on the skeleton.
• We can distinguish many qualitatively different shapes from one another on the basis of how many ‘triple points’ there are, i.e. points where at least three branches of the skeleton meet.
(3,3)
3 1 2 1 q (0,3)
0 2 0 2
1 2 1 1
1 0 1 2
P (0,0)
(3,0)
(0,0)
a) 2 P 3 2 6 1 b) 4 2 2 P 3
6 2 3 6 2
4 3 2 1
5 3 2 3 5
1 2 2 0
2 4 3 5 2
1 3 1 0
4 5 2 3 6 q (4,4) q
INTRODUCTION TO MATHEMATICAL OPERATIONS IN DIP
• Array vs. Matrix Operation
Let A and B be two images of the same size.
Array operation: carried out on a pixel-by-pixel basis (corresponding elements are combined).
Matrix operation: follows the rules of matrix algebra (e.g. row-by-column multiplication).
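The distinction can be seen on a pair of 2×2 images (the pixel values here are arbitrary, chosen only for illustration):

```python
a = [[1, 2],
     [3, 4]]
b = [[5, 6],
     [7, 8]]

# Array (elementwise) product: each pixel multiplied by the corresponding pixel.
array_prod = [[a[i][j] * b[i][j] for j in range(2)] for i in range(2)]

# Matrix product: the usual row-by-column rule of matrix algebra.
matrix_prod = [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
               for i in range(2)]
```

Unless stated otherwise, image operations in DIP are array operations.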
LINEAR VERSUS NONLINEAR OPERATIONS
• We want to determine whether an operator acting on an input image is linear or nonlinear.
• Consider a general operator, H, that produces an output image, g(x, y), for a given input image, f(x, y):
H[f(x, y)] = g(x, y)
Then H is said to be a linear operator if
H[ai fi(x, y) + aj fj(x, y)] = H[ai fi(x, y)] + H[aj fj(x, y)]   (additivity)
                            = ai H[fi(x, y)] + aj H[fj(x, y)]   (homogeneity)
                            = ai gi(x, y) + aj gj(x, y)
Example: Sum operator – linear; Max operator – nonlinear.
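A quick numerical check of the sum/max example. The helper name `is_linear` and the test images are illustrative assumptions; passing the check on one input pair does not prove linearity, but failing it proves nonlinearity:

```python
def is_linear(H, f1, f2, a1=2, a2=-3):
    """Test the superposition property H(a1*f1 + a2*f2) == a1*H(f1) + a2*H(f2)
    on one pair of images and one pair of constants."""
    combo = [[a1 * x + a2 * y for x, y in zip(r1, r2)] for r1, r2 in zip(f1, f2)]
    return H(combo) == a1 * H(f1) + a2 * H(f2)

def sum_op(img):          # sums all pixel values
    return sum(sum(row) for row in img)

def max_op(img):          # maximum pixel value
    return max(max(row) for row in img)

f1 = [[0, 2], [2, 3]]
f2 = [[6, 5], [4, 7]]
```

Here the sum operator satisfies superposition, while the max operator fails it for these inputs and constants.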
ARITHMETIC OPERATIONS
EXAMPLE: ADDITION OF NOISY IMAGES FOR NOISE REDUCTION
K noisy versions gi(x, y) of an image are averaged:
g̅(x, y) = (1/K) Σ_{i=1}^{K} gi(x, y)
Writing each noisy acquisition as gi(x, y) = f(x, y) + ni(x, y), with zero-mean noise ni that is uncorrelated between acquisitions, the average of K such images is
g̅(x, y) = (1/K) Σ_{i=1}^{K} gi(x, y)
Its expected value and variance are
E{g̅(x, y)} = E{f(x, y) + (1/K) Σ_{i=1}^{K} ni(x, y)} = f(x, y)
σ²_{g̅(x, y)} = (1/K²) Σ_{i=1}^{K} σ²_{ni(x, y)} = (1/K) σ²_{n(x, y)}
so averaging leaves the image unchanged in expectation while reducing the noise variance by a factor of K.
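The variance reduction can be checked numerically on a single pixel; the noise level, seed, and K below are arbitrary choices for the sketch:

```python
import random

random.seed(1)
f = 100.0        # true noiseless intensity at one pixel
sigma = 10.0     # standard deviation of the additive noise
K = 500          # number of noisy acquisitions

# g_i(x, y) = f(x, y) + n_i(x, y), with zero-mean Gaussian noise
noisy = [f + random.gauss(0.0, sigma) for _ in range(K)]

# averaged value: its expected value is f, its variance sigma^2 / K
g_bar = sum(noisy) / K
```

The standard deviation of g_bar is sigma/√K ≈ 0.45 here, so the average sits far closer to f than a typical single acquisition does.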
AN EXAMPLE OF IMAGE SUBTRACTION: MASK MODE RADIOGRAPHY
Mask h(x,y): an X-ray image of a region of a patient’s body
Live images f(x,y): X-ray images captured at TV rates after injection of the
contrast medium
Enhanced detail g(x,y) = f(x,y) - h(x,y)
The procedure gives a movie showing how the contrast medium propagates through the various arteries in the area being observed.
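A toy version of the mask-mode subtraction above; the pixel values are made up for illustration:

```python
def subtract(live, mask):
    """g(x, y) = f(x, y) - h(x, y), computed pixel by pixel."""
    return [[fv - hv for fv, hv in zip(frow, hrow)]
            for frow, hrow in zip(live, mask)]

mask = [[10, 10], [10, 10]]   # h: pre-injection image (hypothetical values)
live = [[10, 60], [10, 10]]   # f: contrast medium has reached one pixel
diff = subtract(live, mask)   # only the changed region remains non-zero
```

Everything common to the two images cancels, leaving only the pixels where the contrast medium arrived.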
IMAGE SUBTRACTION FOR ENHANCING DIFFERENCES
IMAGE MULTIPLICATION AND DIVISION FOR SHADING CORRECTION
Fig 1.58(a): Shaded SEM image of a tungsten filament and support, magnified approx. 130×.
Fig 1.58(b): The shading pattern.
Fig 1.58(c): Product of (a) by the reciprocal of (b).
• The union of two gray-scale images (sets) A and B is defined as the set
A ∪ B = {max(a, b) | a ∈ A, b ∈ B}
LOGICAL OPERATIONS
IMAGE GEOMETRY
INTENSITY ASSIGNMENT
• Forward mapping:
(x, y) = T{(v, w)}
It is possible for two or more input pixels to be transformed to the same location in the output image.
• Inverse mapping:
(v, w) = T⁻¹{(x, y)}
The nearest input pixels determine the intensity of the output pixel value. Inverse mappings are more efficient to implement than forward mappings.
IMAGE INTERPOLATION
• Interpolation — Process of using known data to estimate
unknown values
e.g., zooming, shrinking, rotating, and geometric correction
• Interpolation (sometimes called resampling) — an imaging
method to increase (or decrease) the number of pixels in a digital
image.
Some digital cameras use interpolation to produce a larger image
than the sensor captured or to create digital zoom
Nearest-neighbor interpolation assigns each new pixel location the intensity of its closest pixel in the original image:
f₁(x₂, y₂) = f(round(x₂), round(y₂)) = f(x₁, y₁)
f₁(x₃, y₃) = f(round(x₃), round(y₃)) = f(x₁, y₁)
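Nearest-neighbor zooming per the rounding rule above, as a small sketch. Note that Python's built-in round() uses banker's rounding at exact halves, which slightly changes which pixel ties map to; the function name is an assumption:

```python
def nn_zoom(img, factor):
    """Zoom an image by `factor` using nearest-neighbor interpolation:
    each output pixel copies the intensity of the closest input pixel."""
    rows, cols = len(img), len(img[0])
    out_rows, out_cols = int(rows * factor), int(cols * factor)
    out = []
    for r in range(out_rows):
        src_r = min(rows - 1, round(r / factor))   # nearest source row
        row = []
        for c in range(out_cols):
            src_c = min(cols - 1, round(c / factor))  # nearest source column
            row.append(img[src_r][src_c])
        out.append(row)
    return out

small = [[1, 2],
         [3, 4]]
big = nn_zoom(small, 2)
```

Nearest-neighbor zooming is fast but produces the familiar blocky artifacts, which is what motivates the bicubic approach described next.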
BICUBIC INTERPOLATION
• The intensity value assigned to point (x, y) is obtained by the following equation; the sixteen coefficients aij are determined by using the sixteen nearest neighbors:
f₃(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} aij xⁱ yʲ
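One way to see the sixteen-coefficient fit concretely: build the 16×16 linear system from the sixteen nearest integer-grid neighbors and solve it. The function names are illustrative, the sample surface is made up, and a real implementation would use a library solver rather than this bare Gaussian elimination:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            k = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= k * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):    # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def bicubic_fit(samples):
    """Fit f3(x, y) = sum_{i,j=0..3} a_ij x^i y^j through 16 (x, y, f) samples."""
    A = [[x**i * y**j for i in range(4) for j in range(4)] for x, y, _ in samples]
    b = [f for _, _, f in samples]
    return solve(A, b)

def bicubic_eval(a, x, y):
    """Evaluate the fitted bicubic surface at (x, y)."""
    return sum(a[4 * i + j] * x**i * y**j for i in range(4) for j in range(4))

# sixteen nearest neighbors on an integer grid, sampled from a known surface
samples = [(x, y, x + 2 * y) for x in range(4) for y in range(4)]
a = bicubic_fit(samples)
```

Because the sample surface here is itself a polynomial of low degree, the fitted bicubic reproduces it exactly at non-grid points, which is the property interpolation relies on.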
EXAMPLES: INTERPOLATION