
Noida Institute of Engineering and Technology, Greater Noida

Image Segmentation & Image/Object Features Extraction

Unit: 3

Image Processing and Pattern Recognition


ACSAI0522 Suhail Rashid
Assistant Professor
(CSE)
(B.Tech Vth Sem)
NIET, Gr. Noida

12/10/24 MINI JAIN Image Processing and pattern recognition ACSAI0522 Unit 3
Evaluation Scheme

Syllabus

INTRODUCTION TO IMAGE PROCESSING & IMAGE FORMATION:


Image processing systems and its applications, Basic image file formats, Geometric and
photometric models; Digitization - sampling, quantization; Image definition, its representation and
neighbourhood metrics.

INTENSITY TRANSFORMATIONS & SPATIAL FILTERING:


Enhancement, contrast stretching, histogram specification, local contrast enhancement;
Smoothing, linear and order statistic filtering, sharpening, spatial convolution, Gaussian
smoothing, DoG, LoG.

Syllabus

IMAGE SEGMENTATION & IMAGE/OBJECT FEATURES EXTRACTION:


Pixel classification; Grey level thresholding, global/local thresholding; Optimum thresholding -
Bayes analysis, Otsu method; Derivative based edge detection operators, edge detection/linking,
Canny edge detector; Region growing, split/merge techniques, line detection, Hough transform,
Textural features - gray level co-occurrence matrix; Moments; Connected component analysis;
Convex hull; Distance transform, medial axis transform, skeletonization/thinning, shape
properties.

IMAGE REGISTRATION:

Mono-modal/multimodal image registration; Global/local registration; Transform and similarity measures for registration; Intensity/pixel interpolation.

Syllabus

COLOUR IMAGE PROCESSING & MORPHOLOGICAL FILTERING BASICS:

Fundamentals of different colour models - RGB, CMY, HSI, YCbCr, Lab; False colour; Pseudo colour;
Enhancement; Segmentation, Dilation and Erosion Operators, Top Hat Filters

Applications

• Image sharpening and restoration.
• Medical field.
• Remote sensing.
• Transmission and encoding.
• Machine/Robot vision.
• Color processing.
• Pattern recognition.
• Video processing.

Course Objective

1. Analyze the general terminology of digital image processing.
2. Examine various types of images, illumination factors, intensity transformations and spatial filtering.
3. Develop the Fourier transform for image processing in the frequency domain.
4. Evaluate the methodologies for image segmentation, restoration, etc.
5. Implement image processing and analysis algorithms for de-noising and de-blurring.
6. Apply image processing algorithms in practical applications for enhancing images.

Course Outcome

Course Outcomes (CO) with Bloom’s Knowledge Level (KL):

CO1: Understand the concept of image processing and its techniques. (K3)
CO2: Explain and exemplify spatial filtering and intensity transformation. (K2)
CO3: Perform image segmentation and understand image/object feature extraction techniques. (K3)
CO4: Analyze different image registration types. (K4)
CO5: Illustrate colour image processing techniques and perform morphological filtering. (K3)

Program Outcomes
1 Engineering knowledge
2 Problem analysis:
3 Design/development of solutions
4 Conduct investigations of complex problems
5 Modern tool usage
6 The engineer and society
7 Environment and sustainability
8 Ethics
9 Individual and team work
10 Communication
11 Project management and finance
12 Life-long learning
CO-PO Mapping

CO/PO PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12

1 KCS052.1 3 3 3 2 3 1 3 2 1 3
2 KCS052.2 3 3 3 3 3 3
3 KCS052.3 3 3 1 3 3 1 1 1
4 KCS052.4 3 3 2 3 1 3 1 3
5 KCS052.5 3 3 3 2 3 2 3

*3 = High  *2 = Medium  *1 = Low

Program Specific Outcomes

On successful completion of the graduation degree, Engineering graduates will be able to:
PSO1: The ability to design and develop hardware sensor devices and related interfacing software systems for solving complex engineering problems.
PSO2: The ability to understand interdisciplinary computing techniques and to apply them in the design of advanced computing.
PSO3: The ability to conduct investigation of complex problems with the help of technical, managerial and leadership qualities, and modern engineering tools provided by industry-sponsored laboratories.
PSO4: The ability to identify and analyze real-world problems and design their solutions using artificial intelligence, robotics, virtual/augmented reality, data analytics, blockchain technology and cloud computing.

PSO Mapping

Program Specific Outcomes and Course Outcomes Mapping


CO/PSO PSO1 PSO2 PSO3 PSO4

CO1 3 1 3 3

CO2 3 1 3 2

CO3 3 2 1

CO4 3 2 3 2

CO5 3 2 3 1

*3= High *2= Medium *1=Low

Program Educational Objectives

The Program Educational Objectives (PEOs) of the B.Tech (Computer Science & Engineering) program are established and listed as follows:
PEO 1: To have excellent scientific and engineering breadth so as to comprehend, analyze, design and provide sustainable solutions for real-life problems using state-of-the-art technologies.
PEO 2: To have a successful career in industry, to pursue higher studies or to support entrepreneurial endeavors, and to face global challenges.
PEO 3: To have effective communication skills, a professional attitude, ethical values and a desire to learn specific knowledge in emerging trends and technologies for research, innovation, product development and contribution to society.
PEO 4: To have life-long learning for up-skilling and re-skilling for a successful professional career as an engineer, scientist, entrepreneur or bureaucrat for the betterment of society.
Question Paper Template
SECTION – A (each part tagged with its CO)

1. Attempt all parts. [10×1=10]
   1-a. to 1-j. Question (1 mark each)

2. Attempt all parts. [5×2=10]
   2-a. to 2-e. Question (2 marks each)
SECTION – B (each part tagged with its CO)

3. Answer any five of the following. [5×6=30]
   3-a. to 3-g. Question (6 marks each)
SECTION – C (each part tagged with its CO)

4. to 8. Answer any one part (a or b) of each question. [5×10=50]
   4-a./4-b., 5-a./5-b., 6-a./6-b., 7-a./7-b., 8-a./8-b. Question (10 marks each)
Content (Unit-3)

Pixel classification; Grey level thresholding, global/local


thresholding; Optimum thresholding - Bayes analysis, Otsu
method; Derivative based edge detection operators, edge
detection/linking, Canny edge detector; Region growing,
split/merge techniques, line detection, Hough transform,
Textural features - gray level co-occurrence matrix; Moments;
Connected component analysis; Convex hull; Distance
transform, medial axis transform, skeletonization/thinning,
shape properties.

Pixel classification(CO3)

• Pixel classification refers to the process of assigning


labels or categories to individual pixels in an image.
• The goal is to analyze and understand the content of an
image at the pixel level by assigning each pixel a
specific class or category based on its characteristics.
• Pixel classification is a fundamental task in computer
vision and has applications in various fields such as
object recognition, semantic segmentation, and image
analysis.

Example of Pixel Classification

Pixel Classification using thresholding

• Pixel classification using thresholding is a simple and widely


used technique for segmenting images based on their pixel
intensities.
• Thresholding involves comparing each pixel's intensity value
to a predefined threshold value and assigning it to one of
the two classes based on the comparison result.
• Thresholding is one of the segmentation techniques that generates a binary image (one whose pixels take only two values, 0 and 1, and thus require only one bit to store pixel intensity) from a given grayscale image by separating it into two regions based on a threshold value.
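As a minimal sketch (assuming NumPy is available; the threshold value 128 is an arbitrary illustration), the per-pixel comparison described above reduces to a single vectorised operation:

```python
import numpy as np

def threshold_image(gray, t=128):
    """Classify each pixel as foreground (1) or background (0)
    by comparing its intensity to a threshold t."""
    return (gray >= t).astype(np.uint8)

# Tiny 2x3 "image": intensities below 128 map to 0, the rest to 1
img = np.array([[10, 200, 130],
                [255, 40, 128]], dtype=np.uint8)
binary = threshold_image(img)
# binary == [[0, 1, 1], [1, 0, 1]]
```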

Different thresholding Methods

• There are different types of thresholding methods. Some of


them are listed below :
Ø Grey level thresholding
Ø Global thresholding
Ø Optimum thresholding

Grey level thresholding (CO3)

• Grey level thresholding with a single threshold applied over the whole image is also known as global thresholding.


• It is a simple image segmentation technique that divides an image
into two classes based on the intensity values of the pixels.
• It assumes that the image has a clear intensity distribution with a
distinct separation between the foreground and background or
the object of interest and the background.
• In this a single threshold value is applied to the entire image,
dividing it into two classes. Pixels with intensity values below the
threshold are assigned to one class (e.g., background), while
pixels with intensity values equal to or above the threshold are
assigned to the other class (e.g., foreground).

Process of Gray level thresholding

• Image Conversion: If the input image is in colour, it is first converted to grayscale. This ensures that each pixel's intensity is represented by a single value, simplifying the thresholding process.
• Threshold Selection: A threshold value is determined to separate
the image into two classes. This value can be chosen manually
based on prior knowledge or visual inspection of the image, or it
can be determined automatically using various threshold
selection techniques.
• Pixel Classification: Each pixel in the image is compared to the
threshold value. If the pixel intensity is below the threshold, it is
assigned to one class (e.g., background). If the pixel intensity is
equal to or above the threshold, it is assigned to the other class
(e.g., foreground or object).

Contd…

• Output: The resulting segmented image consists of two


regions, each corresponding to one of the classes.
Typically, the background is assigned a value of 0 (black),
and the foreground or object is assigned a value of 255
(white) in a grayscale image.

Methods for selecting the threshold value

• Visual Inspection: This method involves visually examining the


image and selecting a threshold value based on the intensity
distribution. It is subjective and relies on the user's knowledge
and judgment.
• Otsu's Method: Otsu's method is an automatic threshold
selection technique that maximizes the inter-class variance. It
determines the threshold value that minimizes the weighted
sum of variances within each class.
• Triangle Method: The triangle method draws a line from the peak of the histogram to the far end of the intensity range and selects as threshold the intensity at which the histogram lies farthest (in perpendicular distance) from that line. It is well suited to images with one dominant peak and a long tail.
Global thresholding (CO3)

• Global thresholding is a form of grey level


thresholding where a single threshold value is
applied to the entire image without considering
local variations.
• The threshold value is determined based on
certain criteria, such as Otsu's method, which
maximizes the inter-class variance or entropy-
based methods.
• Global thresholding is simple and efficient but may
not be suitable for images with varying lighting
conditions or regions with different intensity
characteristics.

Local Thresholding (CO3)

• Unlike global thresholding, local thresholding takes


into account local variations in the image and applies
different threshold values to different regions.
• It involves dividing the image into smaller sub-regions
or using adaptive methods to determine the
threshold values based on the local characteristics.
• Local thresholding is beneficial when images have
uneven illumination, varying intensity gradients, or
distinct objects with different intensity properties.
• It helps to handle cases where a single global
threshold would result in inaccurate segmentation.

Local thresholding methods

• Adaptive Thresholding: Adaptive thresholding calculates different


thresholds for each pixel's neighborhood, taking into account
local image statistics. It adjusts the threshold based on the local
intensity mean, median, or other statistical measures.
• Otsu's Binarization: Otsu's method can be adapted to perform
local thresholding by dividing the image into smaller regions and
calculating the threshold value independently for each region.
This allows for better handling of local variations.
• Niblack's Binarization: Niblack's method is another local
thresholding technique that computes the threshold based on the
local mean and standard deviation within a neighborhood. It
provides a simple way to handle varying lighting conditions and
textural variations.
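The local-mean idea behind adaptive thresholding can be sketched as follows (a naive NumPy illustration, not a library implementation; the window size and offset `c` are arbitrary choices):

```python
import numpy as np

def adaptive_threshold(gray, win=3, c=0):
    """Local mean thresholding: each pixel is compared to the mean
    of its win x win neighbourhood minus a constant offset c."""
    pad = win // 2
    padded = np.pad(gray.astype(float), pad, mode='edge')
    out = np.zeros_like(gray, dtype=np.uint8)
    h, w = gray.shape
    for i in range(h):
        for j in range(w):
            local_mean = padded[i:i + win, j:j + win].mean()
            out[i, j] = 1 if gray[i, j] >= local_mean - c else 0
    return out

# A dark-to-bright gradient: each pixel is judged against its own
# neighbourhood rather than one global value
img = np.array([[10, 50, 90, 130],
                [10, 50, 90, 130]], dtype=np.uint8)
mask = adaptive_threshold(img, win=3, c=0)
```

A production version would replace the explicit loops with a box filter or integral image, but the decision rule per pixel is the same.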
Optimum thresholding (CO3)

• Optimum thresholding, also known as optimal thresholding, is a technique used in image processing to separate objects from the background based on their pixel intensities, choosing the threshold that is best under a statistical criterion.
• Thresholding is the process of converting a grayscale image into a
binary image, where pixels are classified as either foreground
(object) or background.
• In traditional thresholding methods, a global threshold value is
applied to the entire image.
• However, in many cases, the image may have non-uniform
lighting conditions or variations in contrast, which can lead to
suboptimal results.

Bayes analysis (CO3)

• Bayesian analysis can be applied to optimum thresholding by


formulating it as a statistical decision problem.
• In this approach, the goal is to find the threshold that maximizes
the posterior probability of correct classification based on the
observed data.
• It can be used to answer questions such as: what is the probability that the average male height is between 70 and 80 inches, or that the average female height is between 60 and 70 inches?

Steps in Bayes Analysis

• Define the problem: Clearly define the problem and specify the
desired criteria for correct classification. For example, you may
want to maximize the accuracy, precision, recall, or a
combination of these metrics.
• Formulate prior beliefs: Specify prior beliefs about the
distribution of pixel intensities in the foreground and background
classes. This can be based on prior knowledge or assumptions
about the data.
• Likelihood model: Formulate a likelihood model that describes
the probability distribution of the observed pixel intensities given
the true class labels. This model captures the statistical
relationship between the observed data and the underlying
classes.

Contd…

• Calculate posterior probabilities: Apply Bayes' theorem to


calculate the posterior probabilities of the class labels given the
observed data. The posterior probability can be computed as the
product of the prior probability and the likelihood, normalized by
the evidence.
• Decision rule: Define a decision rule that maps the posterior
probabilities to a threshold value. This decision rule can be based
on maximizing a specific performance metric, such as accuracy or
a cost function.
• Parameter estimation: Estimate the parameters of the prior and
likelihood models using techniques such as maximum likelihood
estimation or Bayesian estimation. This step involves fitting the
models to training data.

Contd…

• Threshold selection: Use the decision rule to select the


threshold that maximizes the chosen performance metric.
This can be done using optimization techniques, such as
grid search or gradient ascent.
• Classification: Apply the selected threshold to classify the
pixels in the image as foreground or background based on
their intensities.
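The steps above can be sketched for the simplest case: two classes with known Gaussian likelihoods and priors (all parameter values below are illustrative assumptions). The decision rule is a threshold scan that minimises a discrete approximation of the Bayes classification error:

```python
import math

def bayes_threshold(mu0, sd0, p0, mu1, sd1, p1, lo=0, hi=255):
    """Pick the threshold t minimising Bayes error for two Gaussian
    classes: background ~ N(mu0, sd0) with prior p0, foreground
    ~ N(mu1, sd1) with prior p1.  Pixels below t go to class 0."""
    def pdf(x, mu, sd):
        return math.exp(-((x - mu) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

    best_t, best_err = lo, float('inf')
    for t in range(lo, hi + 1):
        # misclassification: class-0 pixels at/above t plus class-1 pixels below t
        err0 = sum(pdf(x, mu0, sd0) for x in range(t, hi + 1)) * p0
        err1 = sum(pdf(x, mu1, sd1) for x in range(lo, t)) * p1
        if err0 + err1 < best_err:
            best_err, best_t = err0 + err1, t
    return best_t

# Equal priors and equal spreads: the optimal threshold lies
# midway between the two class means (near 120 here)
t = bayes_threshold(mu0=60, sd0=10, p0=0.5, mu1=180, sd1=10, p1=0.5)
```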

Bayes Analysis Method Representation

Otsu Method (CO3)

• The Otsu method, also known as Otsu's thresholding


or the maximum variance method, is a widely used
image processing technique for automatic threshold
selection.
• It is named after its inventor, Nobuyuki Otsu.
• The Otsu method determines an optimal threshold
value by maximizing the between-class variance of
the grayscale image.
• The between-class variance measures the separability
of the foreground and background pixels.

Steps in Otsu’s Method

• Convert the input image to grayscale if it is in color.


• Compute a histogram of the grayscale image. A histogram is
a plot that shows the distribution of pixel intensities in an
image.
• Calculate the total number of pixels in the image.
• Iterate through all possible threshold values from 0 to the
maximum intensity value. For each threshold value, divide
the pixels into two classes: foreground (pixels with
intensities less than or equal to the threshold) and
background (pixels with intensities greater than the
threshold).

Contd…

• Compute the probabilities of the foreground and


background classes by summing up the histogram
values for each class and dividing by the total
number of pixels.
• Compute the means of the foreground and
background classes by averaging the intensities of
the pixels in each class.
• Compute the between-class variance using the
probabilities and means of the foreground and
background classes.
• Update the threshold value if the between-class
variance is higher than the previous maximum
variance.
Contd…

• The threshold value with the maximum between-class variance


is selected as the optimal threshold.
• Apply the optimal threshold to the grayscale image, converting
it into a binary image where pixels below the threshold are
assigned to the foreground, and pixels above the threshold are
assigned to the background.
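The steps above translate directly into a short NumPy sketch (assuming NumPy is available and an 8-bit grayscale input; the sample data is illustrative):

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustively search the threshold that maximises the
    between-class variance of a uint8 grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(256):
        w0 = hist[:t + 1].sum() / total            # class probability, background
        w1 = 1.0 - w0                              # class probability, foreground
        if w0 == 0 or w1 == 0:
            continue                               # one class is empty: skip
        mu0 = (np.arange(t + 1) * hist[:t + 1]).sum() / (w0 * total)
        mu1 = (np.arange(t + 1, 256) * hist[t + 1:]).sum() / (w1 * total)
        var_between = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated intensity clusters: the chosen threshold
# falls between them, and the binary mask separates the clusters
img = np.array([20, 22, 25, 200, 205, 210], dtype=np.uint8)
t = otsu_threshold(img)
binary = (img > t).astype(np.uint8)
```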

Process of the Otsu threshold method

Derivative based edge detection operators (CO3)

• Derivative-based edge detection operators are image


processing techniques that use the concept of derivatives to
identify and highlight edges in an image.
• These operators work by computing the gradients or
derivatives of the image intensity values, which capture the
changes in pixel intensities across the image.
• These derivative-based edge detection operators are effective
in detecting edges because they respond strongly to regions
with significant intensity changes
• By focusing on the changes in pixel intensities, these operators
can identify the boundaries between objects or regions in an
image.

Commonly used derivative-based edge detection
operators
• Roberts Operator: The Roberts operator calculates the gradient by applying two 2x2 convolution kernels to the image. These kernels respond to changes along the two diagonal directions. The gradients are then combined to estimate the edge strength.
• Prewitt Operator: The Prewitt operator is an improvement over
the Roberts operator and uses two 3x3 convolution kernels.
One kernel highlights vertical changes, and the other
emphasizes horizontal changes. The gradients obtained from
these kernels are combined to estimate the edge strength.

Contd…

• Sobel Operator: The Sobel operator is another


popular derivative-based edge detection operator.
It employs two 3x3 convolution kernels to estimate
the gradients in the horizontal and vertical
directions. The gradients are then combined to
calculate the edge strength.
• Scharr Operator: The Scharr operator is an extension of the Sobel operator that provides better rotational symmetry. It also uses two 3x3 convolution kernels to estimate the gradients in the horizontal and vertical directions.
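The Sobel operator described above can be sketched as follows (a plain NumPy cross-correlation over interior pixels, assuming NumPy is available; a library routine would also handle borders and use separable filtering):

```python
import numpy as np

# Sobel kernels for horizontal (GX) and vertical (GY) intensity changes
GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
GY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def sobel_magnitude(gray):
    """Estimate edge strength as sqrt(gx^2 + gy^2) at each interior pixel."""
    h, w = gray.shape
    mag = np.zeros((h, w))
    g = gray.astype(float)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = g[i - 1:i + 2, j - 1:j + 2]
            gx = (patch * GX).sum()   # response to horizontal change
            gy = (patch * GY).sum()   # response to vertical change
            mag[i, j] = np.hypot(gx, gy)
    return mag

# A vertical step edge: the strongest responses sit on the boundary columns
img = np.zeros((5, 6), dtype=np.uint8)
img[:, 3:] = 255
mag = sobel_magnitude(img)
```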

Edge detection/Linking (CO3)

• Edge detection is a fundamental task in image processing


that aims to identify and locate the boundaries or edges
between different objects or regions in an image.
• Once the edges are detected, edge linking is often
performed to connect adjacent edge segments and form
longer, continuous contours or curves.
• Edge linking helps in creating more meaningful and
complete representations of objects or structures present
in the image.
• Edge detection and linking are widely used in various
applications such as image segmentation, object
recognition, boundary detection, and feature extraction.

Steps in Edge linking algorithms
• Edge Detection: The first step is to apply an edge
detection operator, such as the ones mentioned in the
previous response (e.g., Roberts, Prewitt, Sobel, Canny),
to identify the initial edge pixels in the image. This results
in a binary or grayscale image where the edge pixels are
highlighted.
• Edge Tracing: Once the initial edge pixels are detected,
edge tracing algorithms are used to follow the contours
of the edges. The most common approach is to examine
the neighborhood of each edge pixel and determine the
next pixel to connect. Different connectivity criteria, such
as 4-connectivity (horizontal and vertical neighbors) or 8-
connectivity (including diagonal neighbors), can be used
to guide the edge linking process.
Contd…
• Edge Linking: During the edge linking step, neighboring
edge pixels are connected to form longer edge segments or
curves. This is typically done by examining the local
properties of the edge pixels, such as intensity gradients,
orientations, or other edge characteristics, to determine
the continuity and coherence of the edges.
• Thresholding or Hysteresis: To control the linking process
and avoid spurious or noisy connections, thresholding or
hysteresis is often employed. A high threshold is set to
initially select strong edge pixels, and then a lower
threshold is used to selectively connect weaker edge pixels
if they are adjacent to the initially selected strong edges.
This helps to ensure that only relevant edges are connected
while suppressing noise and weak edge fragments.

Contd…

• Post-processing: After edge linking, post-processing steps can


be applied to refine the connected edges further. This may
include smoothing the edges, removing small or spurious edge
segments, or applying curve fitting techniques to approximate
the connected edges with smoother curves or contours.

Canny edge detector (CO3)

• The Canny edge detector is a popular and widely used


algorithm for edge detection in image processing.
• It was developed by John F. Canny in 1986
• It is known for its effectiveness in detecting edges
while suppressing noise.
• The Canny edge detector is known for its ability to
detect edges accurately, even in the presence of noise.
• It provides well-defined and thin edges and allows for
parameter tuning to adjust the edge detection results
according to specific requirements.
• The algorithm is widely used in various applications
such as image analysis, object detection, feature
extraction, and computer vision tasks.

Steps in Canny edge detection algorithm

• Gaussian Smoothing: To reduce noise interference, the


algorithm applies a Gaussian filter to the input image. The
Gaussian filter helps to smooth the image by convolving it
with a Gaussian kernel, effectively reducing high-frequency
noise.
• Gradient Calculation: The next step is to calculate the
gradient of the smoothed image. This is done by applying
derivative operators (typically Sobel operators) to estimate
the gradient magnitude and direction at each pixel. The
gradient magnitude represents the rate of change in
intensity, which is higher at the edges.

Steps in Canny edge detection algorithm

• Non-maximum Suppression: Non-maximum suppression is


performed to thin out the detected edges. It involves examining
the gradient magnitude and direction to identify the local maxima
along the edge directions. Only the local maxima are retained,
while the rest of the edge pixels are suppressed, ensuring that the
resulting edges are thin and have a consistent width.
• Double Thresholding: In this step, two threshold values are applied
to classify the remaining edge pixels into strong and weak edges.
The higher threshold, called the upper threshold, is used to identify
strong edges, while the lower threshold, called the lower
threshold, is used to identify weak edges. Pixels with gradient
magnitudes above the upper threshold are considered strong
edges, while those between the upper and lower thresholds are
considered weak edges.

Steps in Canny edge detection algorithm

• Edge Tracking by Hysteresis: To link weak edges and suppress


noise, edge tracking by hysteresis is performed. Starting from
the strong edge pixels, a connectivity-based approach is used
to track and connect adjacent weak edges that are likely to
belong to the same edge structure. This helps to form
continuous and connected edge contours while suppressing
isolated weak edges.
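The double-thresholding and hysteresis steps above can be sketched in isolation (a simplified NumPy illustration operating on a gradient-magnitude array; the threshold values are arbitrary assumptions, and the earlier smoothing/NMS stages are omitted):

```python
import numpy as np

def hysteresis(mag, low, high):
    """Double thresholding with edge tracking: keep strong pixels
    (>= high) and any weak pixels (>= low) 8-connected to them."""
    strong = mag >= high
    weak = (mag >= low) & ~strong
    edges = strong.copy()
    changed = True
    while changed:                       # grow strong edges into adjacent weak pixels
        changed = False
        for i, j in np.argwhere(weak & ~edges):
            i0, i1 = max(i - 1, 0), min(i + 2, mag.shape[0])
            j0, j1 = max(j - 1, 0), min(j + 2, mag.shape[1])
            if edges[i0:i1, j0:j1].any():
                edges[i, j] = True
                changed = True
    return edges.astype(np.uint8)

# A weak pixel (60) touching a strong one (200) survives;
# an isolated weak pixel (70) is suppressed
mag = np.array([[0, 200, 60, 0, 70],
                [0,   0,  0, 0,  0]], dtype=float)
out = hysteresis(mag, low=50, high=100)
```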

Canny Edge detection example

Region Growing (CO3)

• Region growing is a technique used in image processing and


computer vision to segment an image into meaningful
regions based on certain criteria.
• It is a pixel-based segmentation method that groups
adjacent pixels with similar properties to form coherent
regions.
• The basic idea behind region growing is to start with a seed
pixel or a set of seed pixels and iteratively grow the region
by adding neighboring pixels that satisfy a given similarity
criterion.
• The similarity criterion could be based on various
properties such as intensity values, color, texture, or a
combination of these factors.

Steps in Region Growing algorithm

• Seed Selection: Choose one or more seed pixels as the starting points for
region growing. These seeds can be manually selected or automatically
determined based on specific criteria.
• Pixel Similarity Check: Compare the properties of the seed pixel(s) with its
neighboring pixels. If a neighboring pixel satisfies the similarity criterion, it
is added to the growing region.
• Region Expansion: Repeat the similarity check and inclusion process for
the newly added pixels. This step is performed iteratively until no more
pixels can be added to the region.
• Termination Condition: Define a termination condition based on specific
criteria. For example, the region growing process may stop when the
difference between the properties of a neighboring pixel and the mean
properties of the current region exceeds a threshold.
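The steps above can be sketched as a breadth-first search (assuming NumPy is available; the similarity criterion here, absolute difference from the seed intensity within a tolerance, is one simple choice among those mentioned):

```python
import numpy as np
from collections import deque

def region_grow(gray, seed, tol=10):
    """Grow a region from `seed`: repeatedly add 4-connected
    neighbours whose intensity differs from the seed by <= tol."""
    h, w = gray.shape
    region = np.zeros((h, w), dtype=np.uint8)
    seed_val = int(gray[seed])
    q = deque([seed])
    region[seed] = 1
    while q:
        i, j = q.popleft()
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not region[ni, nj] \
                    and abs(int(gray[ni, nj]) - seed_val) <= tol:
                region[ni, nj] = 1       # pixel satisfies the similarity criterion
                q.append((ni, nj))
    return region

# Bright blob in the top-left corner: growing from (0, 0) recovers it
img = np.array([[100, 102,  10],
                [101,  99,  12],
                [ 11,  12,  13]], dtype=np.uint8)
mask = region_grow(img, seed=(0, 0), tol=10)
```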

Example of Region Growing algorithm

Split/merge techniques (CO3)

• Split/merge techniques are commonly used in image


segmentation to divide an image into regions or
objects of interest.
• These techniques combine two fundamental steps:
splitting, which divides an image into smaller
regions, and merging, which combines neighboring
regions to form larger coherent regions.
• The split/merge process continues iteratively until a
certain termination criterion is met.
• Split/merge techniques offer flexibility in handling
various image characteristics and can adapt to
different types of images.
• They can handle both homogeneous regions and
regions with complex structures.
Overview of the split/merge technique

1. Splitting:
I. Start with the entire image as a single region.
II. Apply a splitting criterion to determine if the region
should be divided into smaller sub-regions. The
splitting criterion can be based on various
properties such as intensity, color, texture, or edge
information.
III. If the splitting criterion is met, divide the region
into smaller sub-regions. A common choice is
quadtree decomposition, which divides the region
into four equal-sized sub-regions and is applied
recursively.
IV. Repeat the splitting process for each newly created
sub-region.
Contd…

2. Merging:
I. Compare adjacent regions and apply a merging
criterion to determine if they should be combined
into a larger region. The merging criterion is
typically based on similarity measures between the
neighboring regions, such as color similarity,
intensity difference, or texture coherence.
II. If the merging criterion is met, merge the adjacent
regions into a larger region.
III. Repeat the merging process for the remaining
adjacent regions.

Contd…

3. Iteration:
I. Iterate the splitting and merging steps until a
termination criterion is satisfied. The termination
criterion can be based on factors such as the
number of regions, the difference in region
properties, or the convergence of region
boundaries.
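The split and merge phases above can be sketched with a quadtree split followed by a union-find merge of adjacent leaf blocks. This is a simplified version: homogeneity is taken as (max − min intensity) ≤ tol, and the merge test compares the mean intensities of individual blocks rather than of the growing merged regions.

```python
def split_and_merge(img, tol):
    rows, cols = len(img), len(img[0])

    def block_vals(b):
        r0, c0, h, w = b
        return [img[r][c] for r in range(r0, r0 + h) for c in range(c0, c0 + w)]

    def split(r0, c0, h, w):
        v = block_vals((r0, c0, h, w))
        if max(v) - min(v) <= tol or h == 1 or w == 1:   # splitting criterion
            return [(r0, c0, h, w)]
        h2, w2 = h // 2, w // 2                          # quadtree subdivision
        out = []
        for b in ((r0, c0, h2, w2), (r0, c0 + w2, h2, w - w2),
                  (r0 + h2, c0, h - h2, w2), (r0 + h2, c0 + w2, h - h2, w - w2)):
            out += split(*b)
        return out

    blocks = split(0, 0, rows, cols)
    mean = [sum(block_vals(b)) / len(block_vals(b)) for b in blocks]

    parent = list(range(len(blocks)))                    # union-find for merging
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def adjacent(a, b):                                  # do two blocks share an edge?
        r0, c0, h, w = a
        R0, C0, H, W = b
        horiz = (r0 + h == R0 or R0 + H == r0) and (c0 < C0 + W and C0 < c0 + w)
        vert = (c0 + w == C0 or C0 + W == c0) and (r0 < R0 + H and R0 < r0 + h)
        return horiz or vert

    for i in range(len(blocks)):                         # merging criterion
        for j in range(i + 1, len(blocks)):
            if adjacent(blocks[i], blocks[j]) and abs(mean[i] - mean[j]) <= tol:
                parent[find(i)] = find(j)

    regions = {}
    for i, b in enumerate(blocks):
        regions.setdefault(find(i), []).append(b)
    return list(regions.values())

img = [[10, 10, 200, 200],
       [10, 10, 200, 200],
       [10, 10, 200, 200],
       [10, 10, 200, 200]]
regions = split_and_merge(img, tol=20)   # two coherent regions expected
```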

Example of Split/merge technique

Line detection (CO3)

• Line detection is a computer vision technique used to identify and extract lines or line segments from an image or video.
• It plays a crucial role in various applications, such as image
processing, robotics, self-driving cars, and industrial
automation.
• Implementations of line detection algorithms can be found in
popular computer vision libraries, such as OpenCV, which
provide ready-to-use functions and methods for line
detection.

Hough transform (CO3)

• The Hough Transform works by transforming the image space to a parameter space, where lines are represented as points.
• The Hough Transform is a popular method for line detection
due to its robustness against noise, partial occlusions, and
gaps in the lines.
• There are also other line detection techniques, such as the
LSD (Line Segment Detector) algorithm, which directly detects
line segments in an image without requiring a parameter
space transformation.

Steps in Hough transform

• Preprocess the image: Convert the image to grayscale and apply any necessary filters, such as Gaussian smoothing or edge detection (e.g., using the Canny edge detector).
• Define the parameter space: Create an accumulator array, often called the Hough space, which represents the possible parameters of lines. Typically, the parameters are the distance ρ of the line from the origin and the angle θ of its normal.
• Voting: For each edge pixel in the preprocessed image, vote in
the Hough space for all possible lines that could pass through
that pixel. Each vote increments the corresponding bin in the
accumulator array.

Steps in Hough transform

• Thresholding: Set a threshold value in the accumulator array to determine the minimum number of votes required to consider a line as valid.
• Extraction: Identify the local maxima in the accumulator array
above the threshold. Each local maximum corresponds to a line
candidate.
• Parameter reconstruction: Convert the line candidates from the
parameter space back to the image space. This involves calculating
the endpoints of the lines based on the parameters.
• Post-processing: Optionally, perform additional filtering or
refinement steps on the detected lines, such as merging or
removing overlapping lines or applying heuristics to improve
accuracy.
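The voting step above can be illustrated in pure Python with the normal parameterization ρ = x·cos θ + y·sin θ. The 1° angle bins and 1-pixel ρ bins are choices made here for illustration, not values fixed by the method.

```python
import math
from collections import Counter

def hough_vote(edge_points):
    """Accumulate votes in (theta, rho) space for a set of edge pixels."""
    acc = Counter()
    for x, y in edge_points:
        for t in range(180):                     # theta in degrees, 1-degree bins
            theta = math.radians(t)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] += 1                   # one vote per candidate line
    return acc

# Edge pixels lying on the vertical line x = 2.
points = [(2, y) for y in range(5)]
acc = hough_vote(points)
peak_votes = max(acc.values())
```

The bin (θ = 0°, ρ = 2), i.e. the vertical line x = 2, collects a vote from every point; thresholding and local-maxima extraction then keep only such strong bins.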

Example of Hough transform

Textural features (CO3)

• Textural features refer to specific characteristics or properties of the texture within an image or a piece of text.
• These features are often extracted and analyzed to gain
insights or make decisions in various fields such as computer
vision, image processing, and natural language processing.
• The choice of textural features often depends on the problem
at hand and the characteristics of the data being analyzed.

Gray level co-occurrence matrix (CO3)

• A gray level co-occurrence matrix (GLCM) is a statistical method used in image processing and analysis to describe the spatial relationship between pixel intensities in a grayscale image.
• It provides information about the texture and patterns
present in an image.
• The GLCM is derived from an input image by calculating the
frequency of occurrence of pairs of pixel intensity values at
specified pixel displacements and orientations.
• Each element of the GLCM represents the joint probability
distribution of two pixel intensity values occurring at a given
displacement and orientation.

Gray level co-occurrence matrix

• Once the GLCM is computed, various statistical measures can be derived from it to quantify different aspects of image texture.
• Some commonly used measures include contrast, energy,
homogeneity, entropy, and correlation.
• These measures provide information about the spatial variations,
uniformity, and relationship between neighboring pixel intensities
in the image.
• GLCM analysis has applications in various fields such as image
classification, pattern recognition, texture analysis, and
segmentation.
• It allows for the characterization of texture properties and can be
used as input features for machine learning algorithms to classify or
analyze images based on their texture patterns.

Steps to create GLCM

• Convert the input grayscale image into discrete intensity levels. This can be achieved by quantizing the pixel intensities into a fixed number of levels or by using a thresholding technique.
• Select a displacement vector that determines the spatial
relationship between pixel pairs. The displacement vector
defines the direction and distance between two pixels used to
calculate their co-occurrence.
• Specify the desired orientation(s) of the displacement vector.
This determines the angle at which the displacement vector is
applied relative to the image grid.

Steps to create GLCM

• For each pair of pixels at the specified displacement and orientation, calculate the co-occurrence frequency by incrementing the corresponding element in the GLCM.
• Normalize the GLCM by dividing each element by the sum of
all elements in the matrix. This normalization ensures that the
GLCM represents a probability distribution.
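The steps above can be sketched for a single displacement vector (dr, dc), assuming the image has already been quantized to `levels` gray levels:

```python
def glcm(img, dr, dc, levels):
    """Co-occurrence counts of intensity pairs at displacement (dr, dc),
    plus the normalized (joint-probability) matrix."""
    rows, cols = len(img), len(img[0])
    g = [[0] * levels for _ in range(levels)]
    for r in range(rows):
        for c in range(cols):
            nr, nc = r + dr, c + dc              # displaced partner pixel
            if 0 <= nr < rows and 0 <= nc < cols:
                g[img[r][c]][img[nr][nc]] += 1   # count this intensity pair
    total = sum(sum(row) for row in g)
    norm = [[v / total for v in row] for row in g]  # joint probabilities
    return g, norm

# 4 gray levels; displacement (0, 1) = horizontal neighbor to the right.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
counts, probs = glcm(img, 0, 1, levels=4)
```

Texture measures such as contrast, energy, or entropy are then computed as weighted sums over the normalized matrix `probs`.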

Gray level co-occurrence matrix example

Moments (CO3)

• Moments are mathematical descriptors used to characterize the shape, distribution, and other properties of an object or data set.
• In the context of image analysis, moments are often used to describe the
spatial distribution of pixel intensities within an image.
• There are different types of moments, including raw moments, central
moments, and normalized moments.
• Each type of moment provides different information about the image
distribution.
• Raw moments are computed by summing, over all pixels, the pixel intensity multiplied by powers of the spatial coordinates.
• Central moments are similar to raw moments but are calculated with
respect to the centroid of the image distribution, providing information
about the shape and orientation of the object.
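The raw and central moments described above can be written directly from their definitions. A small sketch, taking x as the column index and y as the row index (a convention choice):

```python
def raw_moment(img, p, q):
    """m_pq = sum over pixels of x^p * y^q * intensity (x = column, y = row)."""
    return sum((c ** p) * (r ** q) * img[r][c]
               for r in range(len(img)) for c in range(len(img[0])))

def central_moment(img, p, q):
    """mu_pq: the same sum, taken about the intensity centroid."""
    m00 = raw_moment(img, 0, 0)
    cx = raw_moment(img, 1, 0) / m00             # centroid x (column)
    cy = raw_moment(img, 0, 1) / m00             # centroid y (row)
    return sum(((c - cx) ** p) * ((r - cy) ** q) * img[r][c]
               for r in range(len(img)) for c in range(len(img[0])))

# Binary image with mass at the four corners of a 3x3 grid.
img = [[1, 0, 1],
       [0, 0, 0],
       [1, 0, 1]]
centroid = (raw_moment(img, 1, 0) / raw_moment(img, 0, 0),
            raw_moment(img, 0, 1) / raw_moment(img, 0, 0))
```

For this symmetric pattern the centroid is the center pixel, and the mixed central moment mu_11 vanishes, reflecting the lack of diagonal skew.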

Moments

• Normalized moments are derived from central moments and are useful for comparing different images or object shapes, as they are invariant to translation, rotation, and scale.
• Moments can be used for tasks such as image matching,
object recognition, and shape analysis.
• They provide a compact representation of the image or object
characteristics, which can be further utilized in various
algorithms and statistical analyses.

Connected component analysis (CO3)

• Connected component analysis, also known as connected component labeling or region labeling, is a technique used in image processing and computer vision to identify and analyze distinct objects or regions within an image.
• The main goal of connected component analysis is to group
together pixels or regions that belong to the same object based on
their spatial proximity and similar characteristics.
• Connected component analysis is widely used in various
applications, such as object recognition, image segmentation,
object tracking, and pattern recognition.
• It provides a foundation for higher-level image analysis tasks and
plays a crucial role in many computer vision algorithms.

Steps in Connected component analysis

• Image Preparation: Convert the input image to a binary format, where the
regions of interest are represented by foreground pixels (usually white) and
the background is represented by background pixels (usually black). This can
be achieved through various image thresholding techniques.
• Pixel Connectivity: Determine the connectivity criteria for neighboring pixels.
Typically, 4-connectivity (pixels connected horizontally and vertically) or 8-
connectivity (including diagonal connections) is used.
• Connected Component Labeling: Iterate through each foreground pixel in the
binary image and assign a label to it. Initially, each foreground pixel is
unlabeled. The labeling process involves examining the neighboring labeled
pixels and assigning the same label to the current pixel if they meet the
connectivity criteria. If the current pixel has multiple labeled neighbors, a
decision is made to either assign a new label or merge the labels.

Steps in Connected component analysis

• Label Equivalence: After labeling all the foreground pixels, there may be cases where different labels correspond to the same connected component. Label equivalence analysis is performed to identify and merge such labels, ensuring that each connected component has a unique label.
• Analysis and Visualization: Once the connected components
are labeled, various analyses can be performed on them. This
may include calculating the area, perimeter, centroid, or other
properties of each component. Additionally, the labeled
components can be visually highlighted or extracted for
further processing or analysis.
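The steps above describe the classic two-pass labeling with label-equivalence resolution; an equivalent single-pass flood-fill (breadth-first search) version is simpler to sketch and yields the same components:

```python
from collections import deque

def label_components(img, connectivity=4):
    """Assign a distinct label (1, 2, ...) to each connected foreground region."""
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    if connectivity == 4:
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:  # 8-connectivity adds the diagonal neighbors
        offsets = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0)]
    n_labels = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] == 1 and labels[r][c] == 0:
                n_labels += 1                    # new component discovered
                labels[r][c] = n_labels
                q = deque([(r, c)])
                while q:                         # flood-fill the whole component
                    cr, cc = q.popleft()
                    for dr, dc in offsets:
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and img[nr][nc] == 1 and labels[nr][nc] == 0):
                            labels[nr][nc] = n_labels
                            q.append((nr, nc))
    return labels, n_labels

img = [[1, 1, 0, 0, 1],
       [0, 1, 0, 0, 1],
       [0, 0, 0, 0, 1]]
labels, n = label_components(img)
```

Per-component properties (area, centroid, and so on) can then be computed by grouping pixels with the same label.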

Example of Connected component analysis

Convex hull (CO3)

• In computational geometry, the convex hull of a set of points is the smallest convex polygon or polyhedron that contains all the points.
• In simpler terms, it represents the outer boundary or envelope of a set
of points, forming a shape with no inward curves or indentations.
• In image analysis, the convex hull can be used to approximate the
shape of objects or regions. It helps in segmenting objects from the
background and extracting meaningful features.
• In pattern recognition, the convex hull is used to represent the shape of objects and to identify their distinctive features.
• In optimization, the convex hull can be utilized in problems where the objective is to find the smallest convex shape that encloses a given set of points. This is often encountered in facility
location problems, clustering algorithms, and resource allocation.
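For point sets in the plane, the convex hull can be computed with Andrew's monotone chain algorithm, one standard O(n log n) method:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns hull vertices in
    counter-clockwise order."""
    def cross(o, a, b):   # z-component of (a - o) x (b - o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                       # build the lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build the upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # drop the duplicated endpoints

# A square plus two interior points; only the corners survive.
hull = convex_hull([(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)])
```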

Distance transform (CO3)

• The distance transform is a mathematical operation used in image processing and computer vision to calculate the distance of each pixel in an image to a defined set of points or objects.
• It is a useful tool for various applications, including image
segmentation, shape analysis, and feature extraction.
• The distance transform assigns a numerical value to each
pixel, representing its distance to the nearest object or point
of interest.
• Typically, the distance is measured using Euclidean distance,
although other distance metrics such as Manhattan distance
or Chebyshev distance can also be used.

Computation of Distance transform

• The distance transform can be computed in different ways, but one common approach is the chamfer distance transform.
• In the chamfer distance transform, a set of reference points or
objects is defined, and the distance to each pixel is calculated by
finding the shortest path from that pixel to the nearest reference
point/object.
• This can be done efficiently using algorithms like the chamfer
matching algorithm or the fast marching method.
• Once the distance transform is computed, the resulting image or
data structure can be used for further analysis.
• For example, in image segmentation, the distance transform can be
used to identify regions or objects based on their distance to a
predefined seed point or region.
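A sketch of the transform for the city-block (Manhattan) metric, where the exact result is obtained with a two-pass chamfer-style sweep:

```python
def distance_transform(img):
    """City-block (Manhattan) distance of every pixel to the nearest
    object pixel (value 1), via a two-pass chamfer sweep."""
    INF = float('inf')
    rows, cols = len(img), len(img[0])
    dt = [[0 if img[r][c] else INF for c in range(cols)] for r in range(rows)]
    for r in range(rows):                        # forward pass (top-left first)
        for c in range(cols):
            if r > 0:
                dt[r][c] = min(dt[r][c], dt[r - 1][c] + 1)
            if c > 0:
                dt[r][c] = min(dt[r][c], dt[r][c - 1] + 1)
    for r in range(rows - 1, -1, -1):            # backward pass (bottom-right first)
        for c in range(cols - 1, -1, -1):
            if r < rows - 1:
                dt[r][c] = min(dt[r][c], dt[r + 1][c] + 1)
            if c < cols - 1:
                dt[r][c] = min(dt[r][c], dt[r][c + 1] + 1)
    return dt

# Single object pixel in the centre of a 5x5 image.
img = [[0] * 5 for _ in range(5)]
img[2][2] = 1
dt = distance_transform(img)
```

For the Euclidean metric, more elaborate sweeps (or the fast marching method mentioned above) are needed; the two-pass structure stays the same.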

Example of Distance transform

Medial axis transform (CO3)

• The medial axis transform (MAT) is a mathematical technique used in image processing and computational geometry to represent the geometric skeleton or centerline of a shape.
• It provides a compact and useful representation of the
shape's structure and is often employed in various
applications, such as shape recognition, object analysis, and
path planning.
• The medial axis of a shape refers to the set of all points within
the shape that have more than one closest point on the
boundary of the shape.
• The medial axis transform is a valuable tool for shape analysis
and understanding the structure of objects in various
domains, including computer vision, computer graphics, and
robotics.
Steps for computing Medial axis transform

• Shape representation: The shape of interest is typically represented as a binary image, where the object of interest is labeled as foreground (white) and the background is labeled as background (black).
• Distance transform: A distance transform is applied to the binary
image, assigning each pixel a value representing its distance to the
nearest boundary pixel. Various distance metrics can be used, such
as Euclidean distance or chamfer distance.
• Medial axis extraction: The medial axis is then obtained by
identifying the points where the distance transform reaches a local
maximum. These points correspond to the skeleton or centerline of
the shape.
• Post-processing: Depending on the specific application, post-
processing steps may be performed to refine the medial axis
representation or remove spurious branches.
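Steps 2 and 3 above can be combined into one sketch: a city-block distance transform to the background, followed by keeping foreground pixels whose distance is a local maximum over the 8-neighborhood. Real medial-axis implementations usually use Euclidean distance; this is a simplified approximation.

```python
def medial_axis(img):
    """Approximate medial axis: foreground pixels whose city-block distance
    to the background is a local maximum over the 8-neighborhood."""
    rows, cols = len(img), len(img[0])
    INF = float('inf')
    # Step 2: distance transform (distance of each pixel to the background).
    dt = [[INF if img[r][c] else 0 for c in range(cols)] for r in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if r > 0: dt[r][c] = min(dt[r][c], dt[r - 1][c] + 1)
            if c > 0: dt[r][c] = min(dt[r][c], dt[r][c - 1] + 1)
    for r in range(rows - 1, -1, -1):
        for c in range(cols - 1, -1, -1):
            if r < rows - 1: dt[r][c] = min(dt[r][c], dt[r + 1][c] + 1)
            if c < cols - 1: dt[r][c] = min(dt[r][c], dt[r][c + 1] + 1)
    # Step 3: keep foreground pixels where dt reaches a local maximum.
    skel = set()
    for r in range(rows):
        for c in range(cols):
            if img[r][c]:
                nbrs = [dt[nr][nc]
                        for nr in (r - 1, r, r + 1) for nc in (c - 1, c, c + 1)
                        if (nr, nc) != (r, c) and 0 <= nr < rows and 0 <= nc < cols]
                if all(dt[r][c] >= v for v in nbrs):
                    skel.add((r, c))
    return skel

# A 3-pixel-thick horizontal bar; its medial axis is the middle row.
img = [[0, 0, 0, 0, 0, 0, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 0, 0, 0, 0, 0, 0]]
axis = medial_axis(img)
```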
Example of Medial Axis transform

Skeletonization/thinning (CO3)

• Skeletonization, also known as thinning, is a digital image processing technique used to extract a skeletal representation of an object or shape.
• The skeleton, or thinned representation, captures the essential
structure and topology of the object while reducing it to a
simplified representation.
• Thinning algorithms operate on binary images, where the object
of interest is represented by foreground pixels (usually white)
and the background is represented by background pixels (usually
black).
• The goal of skeletonization is to iteratively erode the foreground
pixels until only the central line or medial axis, known as the
skeleton, remains.

Skeletonization/thinning

• Thinning algorithms typically work by examining local neighborhoods of pixels and removing certain foreground pixels based on predefined rules.
• The process continues until no more pixels can be removed without
altering the connectivity or topology of the object.
• Skeletonization has various applications in image analysis, pattern
recognition, and computer vision.
• It can be used for shape analysis, object recognition, feature extraction,
and skeleton-based representations for further processing or analysis.
• Although skeletonization is a powerful technique, the resulting skeleton may not always perfectly represent the original object's shape, especially in the presence of noise or complex structures.
• Therefore, the choice of thinning algorithm and preprocessing steps may
vary depending on the specific application and desired outcome.
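The neighborhood-rule removal described above can be made concrete with the Zhang-Suen algorithm, one classical thinning scheme (the slides do not commit to a particular algorithm). The sketch assumes a one-pixel background border around the object.

```python
def zhang_suen(img):
    """Zhang-Suen thinning on a binary image (1 = foreground)."""
    img = [row[:] for row in img]                # work on a copy
    rows, cols = len(img), len(img[0])

    def neighbours(r, c):
        # P2..P9, clockwise from the pixel directly above (r-1, c)
        return [img[r - 1][c], img[r - 1][c + 1], img[r][c + 1],
                img[r + 1][c + 1], img[r + 1][c], img[r + 1][c - 1],
                img[r][c - 1], img[r - 1][c - 1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):                      # the two sub-iterations
            to_clear = []
            for r in range(1, rows - 1):
                for c in range(1, cols - 1):
                    if img[r][c] != 1:
                        continue
                    p = neighbours(r, c)
                    b = sum(p)                   # number of foreground neighbours
                    a = sum(1 for i in range(8)  # 0 -> 1 transitions around P2..P9
                            if p[i] == 0 and p[(i + 1) % 8] == 1)
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    P2, _, P4, _, P6, _, P8, _ = p
                    if step == 0:
                        ok = P2 * P4 * P6 == 0 and P4 * P6 * P8 == 0
                    else:
                        ok = P2 * P4 * P8 == 0 and P2 * P6 * P8 == 0
                    if ok:
                        to_clear.append((r, c))
            for r, c in to_clear:                # delete only after the full scan
                img[r][c] = 0
            changed = changed or bool(to_clear)
    return img

# A 3-pixel-thick bar; thinning erodes it towards a 1-pixel curve.
bar = [[0] * 11 for _ in range(7)]
for r in range(2, 5):
    for c in range(2, 9):
        bar[r][c] = 1
thin = zhang_suen(bar)
```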

Skeletonization/thinning example

Shape properties (CO3)

• In image processing, shape properties refer to the characteristics or features of objects or regions within an image.
• These properties are often computed to analyze and describe
the shape of objects, identify patterns, or perform various
tasks such as object recognition or image segmentation.

Shape properties used in image processing

• Area: The area of an object or region represents the number of pixels it occupies in the image. It provides a measure of the size or extent of the object.
• Perimeter: The perimeter of an object or region is the total
length of its boundary. It indicates the length of the object's
outline.

Shape properties used in image processing

• Centroid: The centroid represents the geometric center of an object or region. It is computed as the average position of all the pixels within the object.

Shape properties used in image processing

• Boundary: The boundary of an object or region is the set of pixels that form its outermost edge. It is often used for shape extraction or contour detection.

Shape properties used in image processing

• Compactness: Compactness measures how closely an object or region resembles a compact shape, such as a circle or square. It is typically calculated as the ratio of the object's area to the area of a shape with the same perimeter.

Shape properties used in image processing

• Eccentricity: Eccentricity characterizes the elongation or flattening of an object's shape. It is determined by the ratio of the major axis to the minor axis of the object's bounding ellipse.

Shape properties used in image processing

• Convexity: Convexity quantifies the degree to which an object or region is convex. It is computed by comparing the area of the object to the area of its convex hull, which is the smallest convex polygon that encloses the object.

Shape properties used in image processing

• Orientation: Orientation describes the angle or direction of an object's major axis relative to a reference axis. It provides information about the object's orientation in the image.

Shape properties used in image processing

• Aspect Ratio: The aspect ratio represents the ratio of an object's width to its height. It is used to describe the elongation or stretching of the object.
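Several of the properties listed on the preceding slides can be computed together from a binary image. A minimal pure-Python sketch: the perimeter here is the count of pixel edges touching the background (one common discrete definition), and compactness uses the 4πA/P² form, which is 1.0 for an ideal disc.

```python
import math

def shape_properties(img):
    """Area, perimeter, centroid, aspect ratio and compactness of the
    foreground (value 1) in a binary image."""
    rows, cols = len(img), len(img[0])
    pix = [(r, c) for r in range(rows) for c in range(cols) if img[r][c]]
    area = len(pix)
    perim = 0   # pixel edges shared with the background or the image border
    for r, c in pix:
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= nr < rows and 0 <= nc < cols) or img[nr][nc] == 0:
                perim += 1
    cy = sum(r for r, _ in pix) / area           # centroid (row, column order)
    cx = sum(c for _, c in pix) / area
    rmin, rmax = min(r for r, _ in pix), max(r for r, _ in pix)
    cmin, cmax = min(c for _, c in pix), max(c for _, c in pix)
    aspect = (cmax - cmin + 1) / (rmax - rmin + 1)   # bounding-box width / height
    compact = 4 * math.pi * area / perim ** 2
    return {"area": area, "perimeter": perim, "centroid": (cy, cx),
            "aspect_ratio": aspect, "compactness": compact}

# A solid 3x3 square inside a 5x5 image.
img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
props = shape_properties(img)
```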

MCQ

1. Which technique is commonly used for separating objects from the background based
on pixel intensity values?
a) Region-based segmentation
b) Thresholding
c) Edge detection
d) Watershed segmentation
2. Which method groups pixels based on similarity criteria such as color, texture, or
intensity?
a) Region growing
b) Thresholding
c) Edge detection
d) Watershed segmentation

MCQ

3. What is the purpose of image segmentation?
a) To enhance image quality
b) To extract features from an image
c) To divide an image into multiple regions or segments
d) To remove noise from an image
4. Which technique identifies and separates different objects or regions within an image?
a) Image enhancement
b) Image segmentation
c) Image compression
d) Image restoration

MCQ

5. What is the main objective of image/object feature extraction?
a) To segment an image into multiple regions
b) To classify objects in an image
c) To enhance the visual quality of an image
d) To extract relevant information or characteristics from an image
6. Which of the following methods can be used for edge detection?
a) Sobel operator
b) K-means clustering
c) Histogram equalization
d) Gaussian blur

MCQ

7. Which technique is based on the concept of flooding an image from seed points to
identify regions?
a) Region growing
b) Thresholding
c) Edge detection
d) Watershed segmentation
8. Which feature extraction technique can be used to identify and describe shapes in an
image?
a) Histogram of Oriented Gradients (HOG)
b) Scale-Invariant Feature Transform (SIFT)
c) Principal Component Analysis (PCA)
d) Haar Cascade Classifier

Old Question Papers

• Write a short note on image restoration.
• Differentiate between band-pass filters and notch filters.
• Explain the degradation model.
• Explain region-based Wiener filtering.

Assignment

1. Define pixel classification using thresholding.
2. What are the steps in the Otsu method?
3. Explain each step in the region growing algorithm.
4. What are the steps in the Canny edge detection algorithm?
5. What are the steps to create a GLCM?
6. Explain the convex hull in detail.

Faculty Video Links, Youtube & NPTEL Video Links
and Online Courses Details

• https://www.youtube.com/watch?v=Y_-HgmvF9Zc

• https://www.youtube.com/watch?v=MiSS_aEEf8w

• https://www.youtube.com/watch?v=F3ZvWQMyj4I

• https://www.youtube.com/watch?v=onWJQY5oFhs

• https://www.youtube.com/watch?v=ecu8kreTwYM

• https://www.youtube.com/watch?v=7ImSbCj8bRI

• https://www.youtube.com/watch?v=yKFaHFwTg00

Summary

§ Image processing is a method to perform operations on an image in order to obtain an enhanced image or to extract useful information from it.
§ Image processing basically includes the following three steps:
i. Importing the image via image acquisition tools;
ii. Analyzing and manipulating the image;
iii. Producing the output, which can be an altered image or a report based on the image analysis.
