Image Processing Unit 1
UNIT – 1
Why do we need Image Processing?
➢ To improve pictorial information for human interpretation
1) Noise Filtering
2) Content Enhancement
a) Contrast enhancement
b) Deblurring
3) Remote Sensing
➢ Processing of image data for storage, transmission and representation for autonomous
machine perception
What is Image?
An image is a two-dimensional function f(x, y), where x and y are spatial (plane)
coordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray
level of the image at that point. When x, y and the intensity values of f are all finite, discrete
quantities, the image is called a Digital Image.
Analog Image – An analog image is mathematically represented as a continuous range of values
that give the position and intensity.
Digitization – It is the process of transforming an analog image into a digital image or
digital data.
PIXELS:
A digital image is composed of a finite number of elements, each of which has a particular
location and value. These elements are called picture elements, image elements, pels or
pixels.
What is Digital Image Processing?
Digital image processing is a method of performing operations on an image in order to
obtain an enhanced image or to extract useful information from it. It is a type of signal
processing in which the input is an image and the output may be an image or characteristics / features
associated with that image.
(or)
Digital image processing is defined as the process of analyzing and manipulating images
using a computer.
The main advantages of DIP:
• It allows a wide range of algorithms to be applied to the input data.
• It avoids noise and signal distortion problems.
1.1 Fundamentals of Digital Imaging:
The acquired image g can be modeled as the scene f acted on by the imaging system H plus additive noise η:
g(x, y) = H[f(x, y)] + η(x, y)
The electrical signals are converted into image format inside a computer. This
reception may also differ according to variations in the lens and filter design. A method called
three-pass scanning is commonly used, in which each movement of the scan head from one end to
the other places a different composite-color filter between the lens and the CCD sensors. After the
three composite colors are scanned, the scanner software assembles the three filtered images into
one single full-color image.
There is also a single-pass scanning method, in which the image captured by the lens is
split into three parts. Each part passes through one of the color composite filters, and the
output is then given to the CCD sensors. The single full-color image is then combined by
the scanner.
1.5 IMAGE SAMPLING AND QUANTIZATION:
In order to become suitable for digital processing, an image function f(x, y) must
be digitized both spatially and in amplitude. Typically, a frame grabber or digitizer is used to
sample and quantize the analog video signal. Hence, in order to create a digital image,
we need to convert the continuous data into digital form. This is done in two steps:
• Sampling
• Quantization
The sampling rate determines the spatial resolution of the digitized image, while
the quantization level determines the number of gray levels in the digitized image. The magnitude
of each sample is expressed as a digital value in image processing. The transition between
continuous values of the image function and its digital equivalent is called quantization.
The number of quantization levels should be high enough for human perception of
fine shading details in the image. The occurrence of false contours is the main problem in images
which have been quantized with an insufficient number of brightness levels.
• Sampling: Process of digitizing the coordinate values is called sampling
• Quantization: Process of digitizing the amplitude values is called quantization
The Basic concepts of image sampling and quantization can be explained with the
example given below.
Example:
Consider a continuous image f(x, y), shown in figure (a), which needs to be
converted into digital form. Its gray level plot along line AB is given in figure (b). This image is
continuous with respect to the x and y coordinates as well as in amplitude, i.e. the gray level values.
Therefore, to convert it into digital form, both the coordinates and the amplitude values should be
sampled.
To sample this function, equally spaced samples are taken along the line AB. The
samples are shown as small squares in figure (c). The set of these discrete locations gives the
sampled function.
Even after sampling, the gray level values of the samples have a continuous range.
Therefore, to make them discrete, the samples need to be quantized. For this purpose, a gray
level scale, shown at the right side of figure (c), is used. It is divided into eight discrete levels,
ranging from black to white. Now, by assigning one of the eight discrete gray levels to each
sample, the continuous gray levels are quantized.
Generating a digital image. (a) Continuous image. (b) A scan line from A to B in the
continuous image, used to illustrate the concepts of sampling and quantization. (c)
Sampling and quantization. (d) Digital scan line.
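The two steps in the example above can be sketched in a few lines of Python. This is a minimal illustration (function and variable names are my own, not a standard API): a dense array stands in for the continuous scan line AB, every tenth value is kept (sampling the coordinates), and each kept amplitude is mapped to one of 8 discrete gray levels (quantization), matching the 8-level scale in the example.

```python
import numpy as np

def sample_and_quantize(f, step, levels):
    """Sample a dense 1-D signal and quantize its amplitude."""
    # Sampling: keep every `step`-th value (digitize the coordinates).
    samples = f[::step]
    # Quantization: map each amplitude (assumed in [0, 1)) to one of
    # `levels` discrete gray levels (digitize the amplitude).
    return np.floor(samples * levels).clip(0, levels - 1).astype(int)

# A dense ramp stands in for the continuous gray levels along line AB.
line_ab = np.linspace(0.0, 1.0, 100, endpoint=False)
digital = sample_and_quantize(line_ab, step=10, levels=8)
print(digital)  # ten samples, each one of the 8 gray levels 0..7
```

Note that the sampling step fixes the spatial resolution (ten samples here), while the number of levels fixes the gray level resolution, exactly as described in the text.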
1.7 RELATIONSHIP BETWEEN PIXELS:
A relation between pixels plays an important role in digital image processing, where pixel
relations are used for finding the differences between images and also in their sub-images.
The 3 × 3 neighborhood of a pixel p at (x, y) is:

(x-1, y-1)   (x-1, y)    (x-1, y+1)
(x, y-1)     (x, y) ‘p’  (x, y+1)
(x+1, y-1)   (x+1, y)    (x+1, y+1)

1.7.1 Neighbors of a Pixel:
A pixel p can have three types of neighbors, known as,
1. 4 – Neighbors, N4(p)
2. Diagonal Neighbors, ND(p)
3. 8 – Neighbors, N8(p)
a. 4 - Neighbors, N4(p)
The neighbors of a pixel ‘p’ at coordinates (x, y) include two horizontal and two
vertical neighbors. The coordinates of these neighbors are given by,
(x+1, y), (x-1, y), (x, y+1), (x, y-1)
Here, each pixel is at unit distance from (x, y) as shown in the figure. If (x, y) is on the border of the
image, some of the neighbors of pixel ‘p’ lie outside the digital image.
a. 4-Connectivity
Two pixels p and q, both having values from a set V are 4-connected if q is from
the set N4(p).
b. 8-Connectivity
Two pixels p and q, both having values from a set V, are 8-connected if q is from
the set N8(p).
c. m-Connectivity
Mixed connectivity is a modification of 8-adjacency. It is used to remove the
ambiguities present in 8-connectivity.
Two pixels p and q with values from V are m-connected if either of the following
conditions is satisfied:
• q is in N4(p), or
• q is in ND(p) and the set [N4(p) ∩ N4(q)] has no pixels whose
values are from V.
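The two m-connectivity conditions above can be turned into a small check. This is an illustrative sketch, assuming the image is a list of lists and V is a set of allowed values; the function names are hypothetical, not a standard API.

```python
def four_neighbors(p):
    """N4(p) as a set of coordinate tuples."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def diag_neighbors(p):
    """ND(p) as a set of coordinate tuples."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def m_adjacent(p, q, img, V):
    """p and q are m-adjacent if q is in N4(p), or q is in ND(p) and
    N4(p) ∩ N4(q) has no pixel whose value is in V."""
    def in_V(pix):
        x, y = pix
        return 0 <= x < len(img) and 0 <= y < len(img[0]) and img[x][y] in V

    if not (in_V(p) and in_V(q)):
        return False
    if q in four_neighbors(p):
        return True
    if q in diag_neighbors(p):
        return not any(in_V(r) for r in four_neighbors(p) & diag_neighbors(p) ^ (four_neighbors(p) & four_neighbors(q)) & (four_neighbors(p) & four_neighbors(q)))
    return False
```

A cleaner way to write the diagonal test is `not any(in_V(r) for r in four_neighbors(p) & four_neighbors(q))`; with it, the diagonal link between (0, 0) and (1, 1) is suppressed whenever a shared 4-neighbor with a value in V offers a 4-path instead, which is exactly the ambiguity m-connectivity removes.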
• Closed Path:
In a path, if (X0, Y0) = (Xn, Yn), i.e. the first and last pixels are the same, it is known as
a closed path.
According to the adjacency present, paths can be classified as:
1. 4 – path
2. 8 – path
3. m – path
1.7.5 Region, Boundary and Edges:
• In an image I, a subset R of pixels is called a region of the image
if R is a connected set.
• A boundary is also known as a border or contour. The boundary of the region R is the set of
pixels in the region that have one or more neighbors that are not in R. If R is an entire
image, its boundary is defined as the set of pixels in the first and last rows and columns
of the image.
• An edge can be defined as a set of contiguous pixel positions where an abrupt change of
intensity (gray or color) values occurs.
1.7.6 Distance Measure:
Distance measures are used to determine the distance between two different pixels in the
same image.
Conditions: Consider three pixels p, q and z, p has coordinates (x, y), q has coordinates
(s, t) and z has coordinates (v, w). For these three pixels D is a Distance function or metric if
• D(p, q) ≥ 0, [D(p, q)=0 if p = q]
• D(p, q) = D(q, p) and
• D(p, z) ≤ D(p, q) + D(q, z)
Types:
• Euclidean Distance
• City – Block (or) D4 Distance
• Chessboard (or) D8 Distance
• Quasi-Euclidean Distance
• Dm Distance
a. Euclidean Distance: The Euclidean distance is the straight-line distance between two
pixels.
De(p, q) = √[(x − s)² + (y − t)²]
b. City-Block Distance: The city-block distance metric measures the path between the pixels
based on a 4-connected neighborhood. Pixels whose edges touch are 1 unit apart;
pixels diagonally touching are 2 units apart.
D4(p, q) = |x − s| + |y − t|
c. Chessboard Distance: The chessboard distance metric measures the path between
the pixels based on an 8-connected neighborhood. Pixels whose edges or corners
touch are 1 unit apart
D8(p, q) = max(|x − s|, |y − t|)
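The three metrics can be computed directly from the coordinate differences. The sketch below (function names are my own) also verifies the edge-touching and diagonally-touching claims made for the city-block and chessboard metrics.

```python
import math

def euclidean(p, q):
    """De(p, q): straight-line distance between two pixels."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def city_block(p, q):
    """D4(p, q) = |x - s| + |y - t|: path length on a 4-connected grid."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def chessboard(p, q):
    """D8(p, q) = max(|x - s|, |y - t|): path length on an 8-connected grid."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(euclidean(p, q))   # 5.0
print(city_block(p, q))  # 7
print(chessboard(p, q))  # 4

# Edge-touching vs. diagonally-touching pixels, as described above:
print(city_block((0, 0), (0, 1)), city_block((0, 0), (1, 1)))  # 1 2
print(chessboard((0, 0), (0, 1)), chessboard((0, 0), (1, 1)))  # 1 1
```

All three satisfy the metric conditions D(p, q) ≥ 0, D(p, q) = D(q, p), and the triangle inequality listed earlier.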
Gray level resolution refers to the number of distinguishable shades or
levels of gray in an image. In short, gray level resolution is determined by the number of bits per
pixel. The number of different colors in an image depends on the color depth, or bits per
pixel.
The mathematical relation that can be established between gray level resolution and bits
per pixel can be given as:
L = 2^k
In this equation, L refers to the number of gray levels; it can also be defined as the shades of
gray. And k refers to bpp, or bits per pixel. So 2 raised to the power of the bits per pixel is
equal to the gray level resolution.
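The relation L = 2^k is one line of code. A small sketch (the function name is my own), assuming k bits per pixel:

```python
def gray_levels(k):
    """Gray-level resolution: L = 2**k gray levels for k bits per pixel."""
    return 2 ** k

# 1 bpp gives a binary image; 8 bpp gives the usual 256 gray levels.
for k in (1, 2, 8):
    print(f"{k} bpp -> {gray_levels(k)} gray levels")
```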
THRESHOLD METHOD
The threshold method uses a threshold value to convert a grayscale image
into a binary image. The output image replaces all pixels in the input image with luminance
greater than the threshold value with the value 1 (white) and replaces all other pixels with the
value 0 (black).
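The rule above maps directly to one comparison per pixel. A minimal NumPy sketch (the function name threshold is my own, not a library API):

```python
import numpy as np

def threshold(gray, t):
    """Binarize: pixels with luminance > t become 1 (white), all others 0 (black)."""
    return (gray > t).astype(np.uint8)

gray = np.array([[ 10, 200],
                 [128, 127]])
print(threshold(gray, 127))  # [[0 1]
                             #  [1 0]]
```

Note the strict comparison: a pixel exactly equal to the threshold (127 here) maps to 0, matching the "greater than the threshold value" rule in the text.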