Chapter 11 and 12 - MIP and PR

This document provides an overview of morphological image processing and pattern recognition techniques. It discusses topics like morphological operations, structuring elements, dilation, erosion, opening, closing, hit-or-miss transformations, boundary extraction, region filling, thinning, thickening, skeletons, pruning, gray-scale morphology, morphological gradient, top-hat transformation, watersheds, and pattern recognition applications in image analysis. The document is intended as an introduction to these fundamental concepts in digital image processing and computer vision.


Digital Image Processing

(COSC 603)

Tessfu G. (PhD)
School of Computing
Department of Computer Science
Dire Dawa Institute of Technology
Morphological Image Processing
Introduction

• Morphology commonly denotes a branch of biology that deals with the
form and structure of animals and plants.

• Here, the same word is used for a tool that extracts image components
useful in the representation and description of region shape. It is also
used for pre- or post-processing, such as filtering.

• The language of mathematical morphology uses set theory to represent
objects in an image.

3
What are Morphological Operations?

• The name comes from “morphology” in biology, the study of the form and
structure of organisms.

• Image morphological operations are used to manipulate object shapes, for
example by thinning, thickening, and filling.

• Binary morphological operations are derived from set operations.

4
Basic Set Operations

• Concept of a set in binary image morphology:

• Each set may represent one object. Each pixel (x, y) either belongs to a
set or does not.

5
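As a small illustration (not part of the original slides), a binary image can be viewed as the set of its foreground pixel coordinates, so the basic set operations map directly onto boolean array operations. The sketch below uses NumPy on an invented toy image.

```python
import numpy as np

# Toy 5x5 binary images: True means "pixel belongs to the set (object)".
A = np.zeros((5, 5), dtype=bool)
B = np.zeros((5, 5), dtype=bool)
A[1:4, 1:3] = True          # object A
B[2:5, 2:4] = True          # object B

union        = A | B        # A ∪ B
intersection = A & B        # A ∩ B
complement_A = ~A           # A^c
difference   = A & ~B       # A − B

# Equivalent "set of coordinates" view of A:
coords_A = set(zip(*np.nonzero(A)))
```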
Translation and Reflection Operations

Translation Reflection

6
Logical Operations*

Remark: *For binary images only

7
Dilation Operations

• Dilation: A ⊕ B = { z | (B̂)_z ∩ A ≠ ∅ }

• Set B is commonly referred to as the structuring element; it can also be
viewed as a convolution mask.

• Although dilation is based on set operations whereas convolution is based
on arithmetic operations, the basic idea is analogous: B is flipped about
its origin and slid over the set (image) A.

8
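A minimal sketch of dilation, assuming SciPy is available; the toy image and cross-shaped structuring element are my own choices, not the slide's figure.

```python
import numpy as np
from scipy import ndimage

A = np.zeros((7, 7), dtype=bool)
A[3, 3] = True                                # single foreground pixel
B = ndimage.generate_binary_structure(2, 1)   # 3x3 cross-shaped structuring element

# Dilation: z belongs to A ⊕ B if the reflected B, shifted to z, hits A.
dilated = ndimage.binary_dilation(A, structure=B)
print(dilated.astype(int))                    # the single pixel grows into a cross
```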
Dilation Operations

∅ = empty set

Dilate means “extend”

A = Object to be dilated
B = Structuring element

9
Dilation Operations

10
Dilation Operations

11
Example: Application of Dilation

“Repair” broken characters


12
Erosion Operation

Erosion means “trim”

A = Object to be eroded
B = Structuring element

13
Erosion Operation

14
Erosion Operation

15
Example: Application of Dilation and Erosion

Remove small objects such as noise

16
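A hedged sketch of the noise-removal idea on invented data: erosion removes objects smaller than the structuring element, and a following dilation approximately restores the surviving objects.

```python
import numpy as np
from scipy import ndimage

img = np.zeros((40, 40), dtype=bool)
img[5:25, 5:25] = True                  # large object to keep
img[30, 30] = img[35, 8] = True         # isolated noise pixels

se = np.ones((3, 3), dtype=bool)
eroded   = ndimage.binary_erosion(img, structure=se)      # noise disappears
restored = ndimage.binary_dilation(eroded, structure=se)  # large object regrows

assert restored[10, 10] and not restored[30, 30]
```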
Duality Between Dilation and Erosion

where c = complement

Proof:

17
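The standard duality (A ⊖ B)^c = A^c ⊕ B̂ can be checked numerically. The sketch below pads the image with background so SciPy's border handling matches the set-theoretic definition; the random test image and symmetric structuring element are my choices.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
A = rng.random((32, 32)) > 0.5        # random binary image
B = np.ones((3, 3), dtype=bool)       # symmetric SE, so its reflection B̂ equals B

A_p = np.pad(A, 1, constant_values=False)                # background padding
lhs = ~ndimage.binary_erosion(A_p, structure=B)          # (A ⊖ B)^c
rhs = ndimage.binary_dilation(~A_p, structure=B)         # A^c ⊕ B̂
print(np.array_equal(lhs[1:-1, 1:-1], rhs[1:-1, 1:-1]))  # True
```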
Opening Operation

• Opening is the union of all translates of B that fit entirely within A.

• Opening eliminates narrow protrusions, small details, and sharp corners.


18
Example of Opening

19
Closing Operation

• Closing fills narrow gaps and notches

20
Example of Closing

21
Duality Between Opening and Closing

• Properties of opening

• Properties of closing

Idempotent property: applying the operation again produces no further change.


22
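A short sketch of opening, closing, and the idempotence property (SciPy; the blobby test image is invented and kept away from the border so border effects do not interfere).

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
A = ndimage.binary_dilation(rng.random((64, 64)) > 0.9)      # blobby test image
A[:2, :] = False; A[-2:, :] = False; A[:, :2] = False; A[:, -2:] = False
se = np.ones((3, 3), dtype=bool)

opened = ndimage.binary_opening(A, structure=se)   # erosion followed by dilation
closed = ndimage.binary_closing(A, structure=se)   # dilation followed by erosion

# Idempotence: applying opening (or closing) a second time changes nothing.
print(np.array_equal(opened, ndimage.binary_opening(opened, structure=se)))  # True
print(np.array_equal(closed, ndimage.binary_closing(closed, structure=se)))  # True
```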
Example: Application of Morphological Operations

Fingerprint enhancement

23
Hit-or-Miss Transformation

where X = shape to be detected


W = window that can contain X

24
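A sketch of the hit-or-miss transform using scipy.ndimage.binary_hit_or_miss; the pattern being detected (an isolated foreground pixel) and the test image are invented examples.

```python
import numpy as np
from scipy import ndimage

img = np.zeros((9, 9), dtype=bool)
img[2, 2] = True                 # isolated pixel (should be detected)
img[5:8, 5:8] = True             # solid block (should not be detected)

hit  = np.array([[0, 0, 0],      # must match the foreground
                 [0, 1, 0],
                 [0, 0, 0]], dtype=bool)
miss = np.array([[1, 1, 1],      # must match the background
                 [1, 0, 1],
                 [1, 1, 1]], dtype=bool)

found = ndimage.binary_hit_or_miss(img, structure1=hit, structure2=miss)
print(np.argwhere(found))        # [[2 2]]
```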
Hit-or-Miss Transformation

25
Boundary Extraction

Original image        Boundary

26
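The boundary can be computed as β(A) = A − (A ⊖ B); a minimal sketch on a toy square follows.

```python
import numpy as np
from scipy import ndimage

A = np.zeros((20, 20), dtype=bool)
A[4:16, 4:16] = True                    # filled square

se = np.ones((3, 3), dtype=bool)
boundary = A & ~ndimage.binary_erosion(A, structure=se)   # β(A) = A − (A ⊖ B)
print(boundary.sum())                   # size of the one-pixel-thick contour
```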
Region Filling

where X0 = seed pixel p

Original image        Results of region filling

27
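Region filling iterates X_k = (X_{k−1} ⊕ B) ∩ A^c from a seed pixel until convergence, then adds the boundary; a sketch on an invented boundary image follows.

```python
import numpy as np
from scipy import ndimage

# A: one-pixel-thick boundary of a square; the goal is to fill its interior.
A = np.zeros((15, 15), dtype=bool)
A[3, 3:12] = A[11, 3:12] = True
A[3:12, 3] = A[3:12, 11] = True

B = ndimage.generate_binary_structure(2, 1)   # 4-connected structuring element
X = np.zeros_like(A)
X[7, 7] = True                                # seed pixel p inside the region

while True:                                   # X_k = (X_{k-1} ⊕ B) ∩ A^c
    X_next = ndimage.binary_dilation(X, structure=B) & ~A
    if np.array_equal(X_next, X):
        break
    X = X_next

filled = X | A                                # filled region plus its boundary
```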
Extraction of Connected Components

where X0 = seed pixel p

28
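Connected components can be extracted with scipy.ndimage.label; a sketch on invented blobs:

```python
import numpy as np
from scipy import ndimage

img = np.zeros((12, 12), dtype=bool)
img[1:4, 1:4] = True         # component 1
img[6:9, 2:5] = True         # component 2
img[8:11, 8:11] = True       # component 3

labels, n = ndimage.label(img)                         # default 4-connectivity
sizes = ndimage.sum(img, labels, index=range(1, n + 1))
print(n, sizes)                                        # 3 components, 9 pixels each
```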
Example: Extraction of Connected Components

X-ray image
of bones

Thresholded
image

Connected
components

29
Convex Hull

30
Example: Convex Hull

31
Thinning

32
Example: Thinning

Make an object
thinner.

33
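A sketch of thinning and skeletonization, assuming scikit-image is available; the thick bar is an invented test object.

```python
import numpy as np
from skimage.morphology import skeletonize, thin

img = np.zeros((20, 40), dtype=bool)
img[8:13, 5:35] = True           # thick horizontal bar

skeleton = skeletonize(img)      # one-pixel-wide, medial-axis-like result
thinned  = thin(img)             # morphological thinning until convergence
print(img.sum(), skeleton.sum(), thinned.sum())
```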
Thickening

34
Skeletons

Dotted lines are the skeletons of this structure

35
Skeletons

36
Skeletons

37
Pruning

38
Example: Pruning

Panels: original image; after thinning 3 times; end points; dilation of end
points; pruned result

39
Summary of Binary Morphological Operations

40
Summary of Binary Morphological Operations

41
Summary of Binary Morphological Operations

42
Summary of Binary Morphological Operations

43
Basic Types of Structuring Elements

x = don’t care
44
Gray-Scale Dilation

45
Gray-Scale Dilation

46
Gray-Scale Erosion

47
Gray-Scale Erosion

48
Example: Gray-Scale Dilation and Erosion

Panels: original image; after dilation (brighter); after erosion (darker)

49
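With a flat structuring element, gray-scale dilation is a local maximum (brightens) and gray-scale erosion a local minimum (darkens); a sketch with SciPy on a random stand-in image:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)   # stand-in gray image

dil = ndimage.grey_dilation(img, size=(3, 3))   # local maximum: brighter
ero = ndimage.grey_erosion(img, size=(3, 3))    # local minimum: darker
print(img.mean(), dil.mean(), ero.mean())       # dilated ≥ original ≥ eroded
```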
Gray-Scale Opening

50
Gray-Scale Closing

51
Example: Gray-Scale Opening and Closing

Panels: original image; after opening (reduces white objects); after closing
(reduces dark objects)

52
Gray-scale Morphological Smoothing

Smoothing: Perform opening followed by closing

Original image After smoothing

53
Morphological Gradient

Original image Morphological Gradient


54
Top-Hat Transformation

Original image Results of top-hat transform

55
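A sketch combining the last three operations with SciPy on a synthetic gray image (the image and element size are invented): morphological smoothing (opening then closing), the morphological gradient (dilation minus erosion), and the white top-hat (image minus its opening).

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
img = ndimage.gaussian_filter(rng.random((128, 128)) * 255, sigma=3)  # synthetic image

size = (5, 5)                                             # flat structuring element
smoothed = ndimage.grey_closing(ndimage.grey_opening(img, size=size), size=size)
gradient = ndimage.morphological_gradient(img, size=size)  # dilation − erosion
tophat   = ndimage.white_tophat(img, size=size)            # image − opening
```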
Example: Texture Segmentation Application

Panels: original image (containing small and large blobs); segmented result
Algorithm (see the sketch below):
1. Perform closing on the image using successively larger structuring
elements until the small blobs vanish.
2. Perform opening to join the large blobs together.
3. Perform intensity thresholding.
56
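A rough sketch of this recipe with SciPy gray-scale morphology; all parameter values (element sizes, threshold) are invented and would need tuning for a real texture image.

```python
import numpy as np
from scipy import ndimage

def segment_textures(img, max_close=15, open_size=30, threshold=128):
    """Sketch of the closing-then-opening-then-threshold recipe above."""
    out = img.astype(float)
    # 1. Closing with successively larger flat elements removes the small blobs.
    for s in range(3, max_close + 1, 2):
        out = ndimage.grey_closing(out, size=(s, s))
    # 2. A large opening joins the remaining large blobs.
    out = ndimage.grey_opening(out, size=(open_size, open_size))
    # 3. Intensity thresholding separates the two texture regions.
    return out > threshold
```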
Example: Granulometry
Objective: to count the number of particles of each size
Method (a sketch follows below):
1. Perform opening using structuring elements of increasing size.
2. Compute the difference between the original image and the result after
each opening operation.
3. The difference images obtained in Step 2 are normalized and used to
construct the size-distribution graph.

Panels: original image; size-distribution graph

57
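A sketch of granulometry with SciPy; this variant takes differences between successive openings (the pattern-spectrum form), and the radius range is an invented parameter.

```python
import numpy as np
from scipy import ndimage

def granulometry(img, max_radius=20):
    """Surface area remaining after openings of increasing size."""
    img = img.astype(float)
    surface = [img.sum()]
    for r in range(1, max_radius + 1):
        opened = ndimage.grey_opening(img, size=(2 * r + 1, 2 * r + 1))
        surface.append(opened.sum())
    # A peak in the differences at radius r indicates many particles of that size.
    return -np.diff(np.array(surface))
```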
Morphological Watersheds

58
Morphological Watersheds

59
Morphological Watersheds

60
Gradient Image

Original P Surface of P
image
at edges look like
mountain ridges.
61
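A minimal marker-based watershed sketch, assuming scikit-image is available; the synthetic blobs and marker positions are invented.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import sobel
from skimage.segmentation import watershed

img = np.zeros((80, 80), dtype=float)           # two bright blobs on dark background
img[20:40, 20:40] = 1.0
img[45:70, 45:70] = 1.0
img = ndimage.gaussian_filter(img, sigma=2)

gradient = sobel(img)                           # edges look like mountain ridges

markers = np.zeros_like(img, dtype=int)         # one label per object + background
markers[30, 30] = 1
markers[57, 57] = 2
markers[5, 5] = 3

labels = watershed(gradient, markers)           # flood from the markers
print(np.unique(labels))                        # [1 2 3]
```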
Morphological Watersheds

62
Morphological Watersheds

63
Morphological Watersheds

64
Convex Hull

65
Pattern Recognition in Image Processing
Objectives
• To give an overview on techniques and applications of pattern
recognition as applied to image analysis.

67
Pattern Recognition in Image Analysis
• Image Analysis: Extraction of knowledge from image data.
• Pattern Recognition: Detection and extraction of patterns from data.
• Pattern: A subset of data that may be described by some well-defined
set of rules.
• Patterns may constitute the smallest entities in the data that represent
knowledge.

Figure: a group of pixels marked as “this is a pattern”.

68
Example Pattern Recognition Applications

69
Pattern Recognition in Image Analysis
• Pattern recognition (in general as well as applied to images) is mainly a
classification task.
• Major reference book: Duda, Hart, and Stork, Pattern Classification,
John Wiley & Sons
• Images often constitute the data source for pattern recognition

70
Pattern Recognition as a Classification Task

Features: f = (f1, f2, ..., fn)

Objects to be classified are described by features. Features are evaluated
to separate objects into different classes.

71
Feature Detection
• An important prerequisite for feature detection in images is the
extraction of structures that have common feature values
→ Segmentation.
• Why?
• Often, features are not computed from single pixels but from pixel sets.
Their computation is erroneous if feature values change over the set.

Pipeline: Image → Segmentation → Feature Computation → Classification
(Pattern Recognition)

73
Classification
• Classification: grouping patterns (samples) according to their features
into different classes.
• How do we decide this?
• Decide which features are relevant to the problem and decide
on a way to compute them.
• Decide on a general classification technique (based on the
type of features)
• Train a classifier based on samples of the images to be
analyzed:
• Find a way to differentiate between different meanings based on
feature characteristics.
• Find a suitable generalization of the feature values based on the
training set
• Estimate the error of the classifier using an independent,
representative test data set.
• Apply the classifier to images of the problem domain.
74
Segmentation and Classification
• Segmentation
• Creating pixel aggregates
• Criterion: Homogeneity in segments and discontinuity
between segments
• Classification
• Assigning a meaning to the segments (pixels)
• Criterion: predefined mapping of features onto classes

75
Classification Techniques
1. Statistical pattern recognition
2. Artificial neural networks (ANNs)
3. Semantic (syntactic) pattern recognition

76
Statistical Pattern Recognition
• Statistical pattern recognition: Assume that the pattern is a sample
drawn from one of a number of known distributions and assign it to the
class to which it most likely belongs.

• Requires a feature vector as input information

77
Statistical Pattern Recognition: Example
• Problem: Differentiating between men and
women in images.
• Observation: Most men have short hair while most women do not.
• Statistical Pattern Recognition Approach:

1. Feature extraction: length_hair(image) = 0...1 (short...long)
2. Decision:
IF P(person=man | length_hair) > P(person=woman | length_hair)
THEN person=man ELSE person=woman

78
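A toy sketch of this Bayes decision; the class-conditional densities and priors below are invented for illustration only.

```python
import numpy as np

def classify_person(length_hair, prior_man=0.5):
    """Toy Bayes decision for the hair-length example (all numbers invented)."""
    # Assumed class-conditional densities on the normalized length in [0, 1]:
    p_len_given_man   = 2.0 * (1.0 - length_hair)    # men tend toward short hair
    p_len_given_woman = 2.0 * length_hair             # women tend toward long hair
    prior_woman = 1.0 - prior_man

    # Compare posteriors P(class | length); the shared evidence term cancels.
    post_man   = p_len_given_man * prior_man
    post_woman = p_len_given_woman * prior_woman
    return "man" if post_man > post_woman else "woman"

print(classify_person(0.2), classify_person(0.8))     # man woman
```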
Artificial Neural Networks
• ANNs are a (simplified) approximation of biological neural networks
that are used to exploit the human capability of rapid, parallel
discrimination based on learned generalization.

• They are a very general tool to create arbitrary decision boundaries.

Diagram: features are the inputs; the network output is the classification
decision.

79
Semantic Pattern Recognition
• Features that decide class membership may also consist of rules and
spatial relationships that must be satisfied.
• Example: a chair is recognized as such because three to four legs
carry a plane-like structure that is connected to the chair’s back
• Rules and relationships may be expressed by graphs or grammars.

• The recognition task consists of finding a graph / a sentence in a knowledge


base that corresponds to relationships / rules which are derived from the
picture to be analyzed.
• Structures (entities) are nodes of a graph.
• Relationships are edges of a graph.
• Nodes and edges may carry values.
• Edges may be directed.

80
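A hypothetical data-structure sketch of the chair model as a value-carrying, directed graph; the node and edge names are my own, not from the slides.

```python
chair_model = {
    "nodes": {
        "seat":  {"type": "plane-like structure"},
        "back":  {"type": "plane-like structure"},
        "leg_1": {"type": "leg"},
        "leg_2": {"type": "leg"},
        "leg_3": {"type": "leg"},
        "leg_4": {"type": "leg", "optional": True},   # three or four legs
    },
    "edges": [                                        # directed, labeled edges
        ("leg_1", "carries", "seat"),
        ("leg_2", "carries", "seat"),
        ("leg_3", "carries", "seat"),
        ("leg_4", "carries", "seat"),
        ("back", "connected_to", "seat"),
    ],
}
# Recognition would search the image-derived graph for a subgraph matching
# this model (subgraph matching, or parsing with a grammar).
```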
Pattern Recognition System: Example
• A fish-processing plant wants to automate the process of sorting incoming
fish according to species (salmon or sea bass) on a conveyor belt using
optical sensing.

81
An Example in Detail

• The automation system consists of


• a conveyor belt for incoming products
• two conveyor belts for sorted products
• a pick-and-place robotic arm
• a vision system with an overhead CCD camera
• a computer to analyze images and control the robot arm
82
Elements of a classifier
• inputs: a feature vector f = (f1, f2, ...)
• output: classes { salmon, sea bass }

83
Problem Analysis
• Set up a camera and take some sample images to extract features.

salmon Sea bass


• Physical differences between the two types of fish:
• Length
• Lightness
• Width
• Number and shape of fins
• Position of the mouth
• etc...

84
An Example in Detail (Cont.)
• Preprocessing
• image processing algorithms
• segmentation: to isolate the fish from one another and from the
background
• Feature extraction
• feature construction: converting “raw” data into a set of useful
features
• feature selection: selecting relevant and informative features,
data reduction, visualization
• The features are passed to a classifier

85
Useful and not so Useful Features

These should have similar appearances!

Useful features pertain to the object itself.
Useless features are those pertaining to the imaging of the scene
(e.g., highlights ...).

86
An Example in Detail (Cont.)

87
Classification with Length
• Select the length of the fish as a possible feature for discrimination

• Compute the distribution (histogram) of lengths for both classes

• Determine a decision boundary (threshold) that minimizes the


classification error

88
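A sketch of picking the length threshold that minimizes the training error; the length distributions below are invented stand-ins for the two classes.

```python
import numpy as np

rng = np.random.default_rng(4)
salmon_len  = rng.normal(30, 4, 500)    # invented lengths (cm)
seabass_len = rng.normal(40, 4, 500)    # sea bass tend to be longer

# Count misclassifications for every candidate threshold l:
# salmon longer than l plus sea bass shorter than l.
candidates = np.linspace(20, 50, 301)
errors = [np.sum(salmon_len > l) + np.sum(seabass_len < l) for l in candidates]
l_star = candidates[int(np.argmin(errors))]
print(f"best threshold l* = {l_star:.1f} cm, errors = {min(errors)}")
```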
An Example in Detail (Cont.)
• Attempt to classify the fish merely by checking whether or not the length
of a fish exceeds some critical value l.
• Can we reliably separate sea bass from salmon by length alone?

89
An Example in Detail (Cont.)
• The length alone is a poor feature!
• Cost of misclassification:
• Salmon is misclassified as sea bass
• Sea bass is misclassified as salmon
• Select the lightness as a possible feature.

90
An Example in Detail (Cont.)
• Threshold decision boundary and cost relationship

• Move our decision boundary toward smaller values of lightness in order
to minimize the cost (reduce the number of sea bass that are classified
as salmon!) → a task of decision theory

91
Classification with Two Features
• Adopt the lightness and add the width of the fish
• Pattern for each fish: f = (f1, f2)

• f1 represents “lightness”
• f2 represents “width”

Scatter plot: width vs. lightness, showing the salmon and sea bass clusters.

92
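A sketch of a linear classifier on the two features (lightness, width); the sample clusters are invented and scikit-learn's LogisticRegression stands in for whatever classifier the course would actually use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
salmon   = rng.normal([3.0, 10.0], 0.8, size=(200, 2))   # (lightness, width) samples
sea_bass = rng.normal([6.0, 14.0], 0.8, size=(200, 2))

X = np.vstack([salmon, sea_bass])                        # feature vectors f = (f1, f2)
y = np.array([0] * len(salmon) + [1] * len(sea_bass))    # 0 = salmon, 1 = sea bass

clf = LogisticRegression().fit(X, y)                     # linear decision boundary
print(clf.score(X, y))                                   # training accuracy
print(clf.predict([[4.0, 11.0], [6.5, 14.5]]))           # likely [0 1]
```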
More Features for Classification
• We might add other features that are not correlated with the ones we
already have
• A precaution should be taken not to reduce the performance by adding
“noisy features”
• For our example
• Suppose that other features are too expensive to measure, or
provide little improvement or possibly even degrade the
performance
• We are forced to make our decision based on the two features

93
Complicated Model
• If our models were extremely complicated, our classifier would have a
decision boundary more complex than the simple straight line
• Ideally, the best decision boundary should be the one which provides an
optimal performance such that all the training samples would be separated
perfectly
Plots: two width-vs-lightness scatter plots with different decision boundaries.

94
Optimal Performance?

Plot: a decision boundary in the length–lightness feature space.

• However, our satisfaction is premature because the central aim of
designing a classifier is to correctly classify new input
→ the issue of generalization

95
Optimal Tradeoff

• The decision boundary shown might represent the optimal tradeoff between
performance on the training set and simplicity of the classifier, thereby
giving the highest accuracy on new patterns.

96
Summary: Pattern Recognition (PR) System

• Sensing
• Use of a camera or microphone
• The PR system depends on the bandwidth, resolution, and sensitivity of the sensor
• Segmentation and grouping
• Patterns should be well separated and should not overlap
• Feature extraction
• Discriminative features
• Invariant features with respect to translation, rotation and scale.
• Classification
• Use a feature vector provided by a feature extractor to assign the
object to a category

97
The Design Cycle

• Data collection
• How do we know when we have collected an adequately large
and representative set of examples for training and testing the
system?

• Feature Choice
• Depends on the characteristics of the problem domain. Features should be
simple to extract, invariant to irrelevant transformations, and insensitive
to noise.

• Model Choice
• If we are unsatisfied with the performance of our fish classifier, we may
want to jump to another class of model

98
The Design Cycle

• Training
• Use data to determine the classifier. Many different procedures for
training classifiers and choosing models

• Evaluation
• Measure the error rate (or performance), and possibly switch from one set
of features to another

• Computational Complexity
• What is the trade-off between computational ease and performance?
• (How does an algorithm scale as a function of the number of features,
patterns, or categories?)

99
End of the Course !!!

100
