Pattern Recognition 5
Uploaded by rememberme6783

Pattern recognition (PR)

1-Pattern Recognition System
PR Definitions
• Theory, algorithms, and systems to put patterns into categories

• Classification of noisy or complex data

• Relating a perceived pattern to previously perceived patterns
1-Pattern Recognition System
1. Domain-specific knowledge
– Acquisition, representation
2. Data acquisition
– camera, ultrasound, MRI,….
3. Preprocessing
– Image enhancement, restoration
4. Segmentation
– Region-based methods, boundary-based methods
5. Representation (feature extraction)
– Features: color, shape, texture,…
6. Post-processing; use of context
7. Classification or decision making
– Statistical (geometric) pattern recognition
– Syntactic (structural) pattern recognition
– Artificial neural networks

A class is a collection of objects that are similar, but not necessarily identical, and which is distinguishable from other classes. Fig. 1 illustrates the difference between classification where the classes are known beforehand and classification where the classes are created after inspecting the objects.

Fig. 1 Classification when the classes are (A) known and (B) unknown beforehand
Pattern Class
• A collection of similar (not necessarily
identical) objects
• A class is defined by class samples
(paradigms, exemplars, prototypes,
training/learning samples)
• Inter-class similarity, as shown in Fig. 2
• Intra-class variability, as shown in Fig. 3
• How do we define similarity?
Pattern Class
Inter-class Similarity

Fig. 2 Inter-class similarity: (a) identical twins, (b) characters that look similar

Pattern Class
Intra-class Variability

Fig. 3 Intra-class variability: (a) the same face under different expression, pose, and illumination, (b) the letter “T” in different typefaces
1-Pattern Recognition System
Pattern Class Model
• A mathematical or statistical model
(description) for each class (population);
other models: syntactic/structural, template
• The class description (class-conditional
density) that is learned from samples
• Given a pattern, choose the best-fitting
model for it; assign the pattern to the class
associated with the best-fitting model
1-Pattern Recognition System
Terminology
Classification Decision Rules

• Parametric = based upon statistical parameters (mean & standard deviation)

• Non-parametric = based upon objects (polygons) in feature space

• Decision rules = for sorting pixels into classes
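As a concrete sketch of the parametric idea (the class statistics below are invented for illustration, not taken from the slides), each class can be modeled by a mean and standard deviation, and a pixel assigned to the class with the highest Gaussian likelihood:

```python
import math

# A minimal sketch of a parametric decision rule. The class statistics
# here are made-up illustrative values, not from the slides.
def gaussian_pdf(x, mean, std):
    # Likelihood of value x under a 1-D Gaussian with the given parameters.
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

# Hypothetical classes, each described by (mean, standard deviation).
CLASSES = {"water": (30.0, 5.0), "vegetation": (80.0, 10.0)}

def decide(pixel):
    # Assign the pixel to the class whose Gaussian likelihood is highest.
    return max(CLASSES, key=lambda c: gaussian_pdf(pixel, *CLASSES[c]))

print(decide(35))   # → water
print(decide(70))   # → vegetation
```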
Feature extraction
 Features are characteristic properties of objects whose values should be similar for objects in a particular class, and different from the values for objects in other classes (or from the background).
Features may be either:
Continuous (i.e., with numerical values), such as length, area, and texture,
OR
Categorical (i.e., with labeled values), which are either
a- Ordinal [where the order of the labeling is meaningful (e.g., class standing, military rank, level of satisfaction)]
or
b- Nominal [where the ordering is not meaningful (e.g., name, zip code, department)].

The choice of appropriate features depends on the particular image and the application at hand.
Good features should be:
 Robust
 Discriminating
 Reliable
 Independent
 Structural features include:
 Measurements obtainable from the gray-level histogram of an object (using region-of-interest processing)
 The texture of an object, using either statistical moments of the gray-level histogram of the object or its fractal dimension

 Shape features include:
 The size or area, A, of an object
 Its circularity (a ratio of perimeter to area, or area to perimeter)
 Its aspect ratio
 Its skeleton or medial axis transform, or points within it
 The Euler number: the number of connected components (i.e., objects) minus the number of holes in the image
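As a minimal sketch of the circularity feature (the slides allow either a perimeter-to-area or area-to-perimeter ratio; the normalized form 4πA/P² used here is a common, dimensionless convention that equals 1.0 for a perfect circle):

```python
import math

# A minimal sketch: circularity as 4*pi*A / P^2, which is 1.0 for a
# perfect circle and smaller for elongated or irregular shapes.
def circularity(area, perimeter):
    return 4 * math.pi * area / perimeter ** 2

r = 10.0
print(circularity(math.pi * r ** 2, 2 * math.pi * r))  # circle → 1.0
print(circularity(10.0 * 10.0, 4 * 10.0))              # square → ~0.785
```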

 Statistical moments of the boundary (1D) or area (2D): the (m, n)th moment of a 2D discrete function f(x, y), such as a digital image with M × N pixels, is m_mn = Σ_x Σ_y x^m y^n f(x, y).
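The moment definition above can be sketched directly in code (a hypothetical helper, with a small binary blob as the image; m00 gives the area and the first-order moments give the centroid):

```python
# A minimal sketch (not from the slides): the raw (m, n) moment of a
# 2-D discrete function f(x, y), here a small binary image stored as
# nested lists indexed img[y][x].
def raw_moment(img, m, n):
    # m_mn = sum over x, y of x^m * y^n * f(x, y)
    return sum(x ** m * y ** n * img[y][x]
               for y in range(len(img))
               for x in range(len(img[0])))

# A 3x3 binary blob; m00 is its area, (m10/m00, m01/m00) its centroid.
blob = [[0, 1, 0],
        [1, 1, 1],
        [0, 1, 0]]
area = raw_moment(blob, 0, 0)        # → 5
cx = raw_moment(blob, 1, 0) / area   # → 1.0
cy = raw_moment(blob, 0, 1) / area   # → 1.0
```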
2-Pattern Classification example
Feature extraction
Task: to extract features which are good for classification.
Good features:
•Objects from the same class have similar feature values.
•Objects from different classes have different values.

The Classification stage

The classification stage assigns objects to certain categories (or classes) based on the feature information.

How many features should we measure? And which are the best?

The problem is that the more features we measure, the higher the dimension of the feature space, and the more complicated the classification will become (not to mention the added requirements for computation time and storage).
Feature selection—choosing the most informative subset of features, and removing as many irrelevant and redundant features as possible.

Feature extraction—combining the existing feature set into a smaller set of new, more informative features.
2-Pattern Classification example

Some Lessons Learnt

• Training samples help define the models

• Some features are more discriminating than others

• Multiple features can be used for better models

• Some decisions have higher costs

• General models perform better than overly complex models
Learning and Adaptation
 Training is the process of using data to determine the best set of features for a classifier.

 Learning (machine learning or artificial intelligence) refers to some form of adaptation of the classification algorithm to achieve a better response, i.e., to reduce the classification error on a set of training data.
Learning and Adaptation
• Supervised learning
A teacher provides a category label or cost for each
pattern in the training set.

• Unsupervised learning
The system forms clusters or “natural groupings” of
the input patterns

• Reinforcement learning
The output of the system is a sequence of actions to best reach the goal. The machine learning program must discover the sequence of actions that yields the best reward.
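To make the supervised case concrete, here is a hypothetical nearest-mean classifier on a single feature (all sample values and class names are invented for illustration): the "teacher" supplies a label for every training sample, and prediction assigns a new sample to the class whose feature mean is closest.

```python
# A minimal supervised-learning sketch: a nearest-mean classifier on
# one feature (e.g., object area). Sample values are made up.
def train(samples, labels):
    # Group training samples by their teacher-supplied label,
    # then compute each class's feature mean.
    groups = {}
    for x, y in zip(samples, labels):
        groups.setdefault(y, []).append(x)
    return {y: sum(v) / len(v) for y, v in groups.items()}

def predict(model, x):
    # Assign x to the class whose mean is nearest.
    return min(model, key=lambda y: abs(x - model[y]))

model = train([1.0, 1.2, 0.9, 4.8, 5.1], ["nut", "nut", "nut", "bolt", "bolt"])
print(predict(model, 1.1))   # → nut
print(predict(model, 4.5))   # → bolt
```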
Approaches to Classification
The goal of the classifier is to classify new
data (test data) to one of the classes,
characterized by a decision region. The
borders between decision regions are called
decision boundaries.
Classification techniques can be divided into two broad areas: statistical or structural (or syntactic) techniques, with a third area that borrows from both, sometimes called cognitive methods, which includes neural networks and genetic algorithms.
1- Statistical approaches: The first area deals with objects or patterns that have an underlying and quantifiable statistical basis for their generation and are described by quantitative features such as length, area, and texture.

2- Structural (or syntactic) approaches: The second area deals with objects best described by qualitative features describing structural or syntactic relationships inherent in the object.

3- Cognitive methods: The third area borrows from both and includes neural networks and genetic algorithms.
Examples
1- Classification by Shape

Fig. 4 (a) Original image, (b) after Otsu thresholding, (c) after subsequent skeletonization, (d) after conditionally dilating the branch pixels from (c), (e) after logically combining (b) and (d), (f) color coding the nuts and bolts
2- Classification by Size

Fig. 5 (a) Segmented, labeled image (using Fig. 4a), (b) one-dimensional feature space showing the areas of the features, (c) the features “painted” with grayscales representing their measured areas, and (d) after thresholding image (c) at a value of 800
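The size-based idea can be sketched in a few lines (the area values below are invented; the 800 threshold matches the Fig. 5 example): measure each segmented object's area and compare it against a threshold.

```python
# A minimal sketch of classification by size: label each segmented
# object by comparing its measured area against a threshold
# (800, as in the Fig. 5 example). The area values are made up.
def classify_by_area(areas, threshold=800):
    return ["large" if a > threshold else "small" for a in areas]

print(classify_by_area([350, 1200, 790, 950]))
# → ['small', 'large', 'small', 'large']
```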
Figure 6a is an image containing a number of electronic components of different shapes and sizes (the transistors are three-legged, the thyristors are three-legged with a hole in them, the electrolytic capacitors have a round body with two legs, and the ceramic capacitors are larger than the resistors). A combination of the shape and size methods can be used to separate the objects into their different classes (Fig. 6b).

Fig. 6 (a) Electronic components, (b) classified according to type, using shape and size
In Figure 7, circularity can distinguish the bananas from the other two fruits; size (or perhaps texture, but not color in this grayscale image) could be used to distinguish the apples from the grapefruit.
Part 2

Shape is not the only, but a very powerful, descriptor of image content.
What is shape?
The most commonly cited definition is as follows: shape is all the geometrical information that remains when location, scale, and rotation effects are filtered out from an object.

In other words, a shape is invariant to the Euclidean similarity transformations of scaling, translation, and rotation. Two objects have the same shape if they can be mapped onto each other by translation, rotation, and scaling.
Static and Dynamic Shapes
Shapes can be either static or dynamic.

• Static shapes are rigid shapes that do not change in time by deformation or articulation. For example, a model of a car is a static shape.

• A human face is a dynamic shape, since it changes while speaking and smiling, for instance.
Two-dimensional shapes can be described in two different ways:

1. Use of the object boundary and its features (e.g., boundary length). This method is directly connected to edge and line detection. The resulting description schemes are called external representations.

2. Description of the region occupied by the object on the image plane. This method is linked to region segmentation techniques. The resulting representation schemes are called internal representations.
What is a Good Representation?
• There are a variety of ways to represent a shape; however, there are certain attributes/criteria for a representation to be a good one:

1. Sufficient
2. Wide domain
3. Convenient
4. Sensitive
5. Unambiguous
6. Hierarchical
7. Generative
8. Stable
9. Accessible
10.Efficient
What is a Good Representation?
1- Sufficient
Is this representation sufficient? The answer mainly depends on the application.

If we want to detect humans versus other classes, this representation might work, but if we want to differentiate Ahmmed from Maryam, this representation is not sufficient.
What is a Good Representation?
2. Uniqueness
This is of crucial importance in object recognition, because each object must have a unique representation.

Not unique: dog, dog, dog
Unique: pitbull, collie, cocker-spaniel

Consider the domain to be animals. Those three are all dogs, but they are different members of the domain; they are not the same dogs. Hence the word "dog" as a representation is not unique for all members of this domain.
What is a Good Representation?
3. Completeness / Unambiguousness

This refers to unambiguous representation: an object/shape may have different representations, but no two distinct objects may have a common representation.

3 III
– The number three has different representations (Arabic and Roman), but the same representation cannot refer to different numbers.
What is a Good Representation?
4. Invariance
Invariance under translation, rotation, scaling, and reflection is very important for object recognition applications.

Generative: capable of directly generating/recovering the represented shape.
What is a Good Representation?

5. Sensitivity: the ability of a representation scheme to reflect easily the differences between similar objects.

6. Abstraction from detail: the ability of the representation to represent the basic features of a shape and to abstract from detail. This property is directly related to the noise robustness of the representation.
Shape Representation
The boundary of Binary Shapes

Shape Representation
Chain Codes: Boundary Representation

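Since the figure is not reproduced here, a small sketch of the idea (assuming the common 8-direction convention, with 0 = east and directions numbered counter-clockwise; y grows downward in image coordinates): a chain code represents a boundary as the sequence of directions taken when walking from pixel to pixel along it.

```python
# A minimal sketch (assumed 8-direction convention): direction number
# to (dx, dy) step, with 0 = east and y growing downward.
DIRECTIONS = {0: (1, 0), 1: (1, -1), 2: (0, -1), 3: (-1, -1),
              4: (-1, 0), 5: (-1, 1), 6: (0, 1), 7: (1, 1)}

def chain_code(boundary):
    # Encode a closed list of (x, y) boundary points as a chain code.
    inverse = {v: k for k, v in DIRECTIONS.items()}
    n = len(boundary)
    return [inverse[(boundary[(i + 1) % n][0] - boundary[i][0],
                     boundary[(i + 1) % n][1] - boundary[i][1])]
            for i in range(n)]

# A 2x2 square traced clockwise in image coordinates:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(chain_code(square))   # → [0, 6, 4, 2]
```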
Shape Representation
Chain Codes: Boundary Representation
Problems with the Chain Code
Chain code representation is conceptually appealing, yet it has the following three problems:
1. Dependence on the starting point
2. Dependence on the orientation
3. Dependence on scaling

To use boundary representation in object recognition, we need to achieve invariance to the starting point and orientation:

 Normalized codes
 Differential codes
Shape Representation
Chain Codes: Boundary Representation

1- Differentiation Strategy
Record the change in direction around the border (the differences between successive chain code numbers, modulo 4 or 8).
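The differentiation strategy above can be sketched as follows (assuming an 8-connected chain code): the differential code records the change in direction between consecutive links, modulo 8, which makes it rotation-invariant.

```python
# A minimal sketch of the differential (first-difference) chain code:
# the change in direction between consecutive links, modulo the
# connectivity (4 or 8), taken cyclically around the closed boundary.
def differential_code(chain, connectivity=8):
    n = len(chain)
    return [(chain[(i + 1) % n] - chain[i]) % connectivity
            for i in range(n)]

code = [0, 0, 2, 2, 4, 4, 6, 6]      # a small square, 8-connected
print(differential_code(code))        # → [0, 2, 0, 2, 0, 2, 0, 2]

# Rotating the shape by 90° adds 2 to every chain code number, but the
# differential code is unchanged:
rotated = [(c + 2) % 8 for c in code]
print(differential_code(rotated))     # → [0, 2, 0, 2, 0, 2, 0, 2]
```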
Shape Representation
Chain Codes: Boundary Representation
1- Differentiation Strategy
Differentiation example:
Shape Representation
Chain Codes: Boundary Representation
2- Normalization Strategy

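A sketch of the normalization strategy (assuming the common convention of picking a canonical starting point): treat the chain code as circular and take the cyclic rotation that forms the smallest sequence, which removes the dependence on the starting point.

```python
# A minimal sketch of chain code normalization: among all cyclic
# rotations of the code, keep the lexicographically smallest one,
# so the result no longer depends on where the trace started.
def normalize(chain):
    n = len(chain)
    rotations = [chain[i:] + chain[:i] for i in range(n)]
    return min(rotations)

# Two traces of the same boundary from different starting points
# normalize to the same code:
print(normalize([2, 1, 0, 3]))   # → [0, 3, 2, 1]
print(normalize([1, 0, 3, 2]))   # → [0, 3, 2, 1]
```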
Shape Representation
Chain Codes: Boundary Representation

Note that the shape numbers of two objects related by a 90° rotation are indeed identical.
Thank you

Any Questions?
