Unit2 Slides
Challenges in Data Collection
• The biometric features may change
• The presentation of the biometric feature at
the sensor may change
• The performance of the sensor itself may
change
• The surrounding environmental conditions
may change
Image Preprocessing
• Preprocessing involves the steps needed to make an image suitable for feature extraction and matching.
• Most engineers spend a good amount of time on data preprocessing before building the model.
• The aim of preprocessing is to improve the image data by suppressing undesired distortions or enhancing image features relevant to the subsequent processing and analysis task.
Image Preprocessing
The steps to be taken are:
• Read the image
• Image enhancement
– Remove noise
– Segmentation
– Removing specular highlights
– Region-of-interest detection, and so on…
• Morphology (smoothing edges) (see the pipeline sketch below)
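A minimal sketch of such a pipeline in Python with OpenCV. The file name finger.png, the filter sizes, and the use of Otsu thresholding for segmentation are illustrative assumptions, not steps prescribed by the slides.

```python
import cv2

# Read the image (the path "finger.png" is a placeholder).
img = cv2.imread("finger.png", cv2.IMREAD_GRAYSCALE)

# Remove noise; a 5x5 median filter is one common choice.
denoised = cv2.medianBlur(img, 5)

# Segment foreground from background; Otsu's method picks the
# threshold automatically (an assumption, not mandated by the slides).
_, mask = cv2.threshold(denoised, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphology: opening with an elliptical kernel smooths edges
# and removes small specks.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
smoothed = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```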
Image Enhancement
• Image enhancement refers to the process of
highlighting certain information of an image, as
well as removing any unnecessary information
according to specific needs.
• The tools used for image enhancement include many different kinds of software, such as filters, image editors, and other tools for changing various properties of an entire image or parts of an image.
• The principal objective of image enhancement is to modify the attributes of an image to make it more suitable for a given task and a specific observer.
Image Enhancement
Image Enhancement (Spatial Domain)
Image Enhancement (Frequency Domain)
Image Enhancement Techniques
Image Enhancement by Point Processing
(d) Gray-level slicing
• It is often desirable to highlight a specific range of gray levels in an image.
• Applications include enhancing features such as masses of water in satellite imagery and enhancing flaws in X-ray images.
• Example: a transformation function that highlights a range [A, B] of intensities while diminishing all others to a constant, as sketched below.
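A short sketch of this transformation; the cut-offs and the output constants in the usage comment are arbitrary illustrative values.

```python
import numpy as np

def gray_level_slice(img, a, b, high=255, low=10):
    """Highlight gray levels in [a, b]; diminish all others to `low`."""
    out = np.full_like(img, low)
    out[(img >= a) & (img <= b)] = high
    return out

# Example with arbitrary cut-offs, e.g. to emphasize mid-range
# intensities such as water masses in satellite imagery:
# sliced = gray_level_slice(image, 100, 150)
```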
Image Enhancement by Point Processing
2. Histogram processing
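The histogram-processing slides are figures only; as one concrete instance of the technique, the sketch below implements plain histogram equalization for an 8-bit grayscale image (my choice of example, not necessarily the exact variant shown on the slides).

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Stretch the cumulative histogram over the full [0, 255] range.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    return lut.astype(np.uint8)[img]
```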
Enhancement in the frequency domain
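A minimal sketch of frequency-domain enhancement: an ideal low-pass filter applied via the 2-D FFT. The cutoff radius of 30 is an arbitrary example; the slides' own figures may show other filters.

```python
import numpy as np

def ideal_lowpass(img, cutoff=30):
    """Smooth an image by zeroing frequencies beyond `cutoff`."""
    f = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.hypot(y - rows / 2, x - cols / 2)
    f[dist > cutoff] = 0          # keep only the low-frequency band
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```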
Removing Specular Highlights
• An image may contain specular highlights due to the light source.
• Such highlights can be removed in two stages (a sketch follows):
– Adaptive thresholding: the threshold value is calculated for smaller regions, so different regions get different threshold values.
– Hole filling: the hole-filling method fills in the holes in the depth image captured from the sensors used for rendering.
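A hedged sketch of the two stages on an ordinary intensity image: adaptive thresholding flags locally bright pixels, and inpainting stands in for the hole-filling step (the slide describes hole filling for depth images; inpainting is my substitution for illustration). The block size of 31 and offset of -15 are tunable assumptions.

```python
import cv2

img = cv2.imread("frame.png")                      # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Stage 1: adaptive threshold computed per 31x31 neighbourhood;
# a negative offset keeps only pixels well above the local mean.
mask = cv2.adaptiveThreshold(gray, 255,
                             cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY, 31, -15)

# Stage 2: fill the masked highlight "holes" from their surroundings.
restored = cv2.inpaint(img, mask, 5, cv2.INPAINT_TELEA)
```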
Approaches in Image Segmentation
• Similarity approach: this approach is based on detecting similarity between image pixels to form a segment, based on a threshold. Machine learning algorithms such as clustering follow this approach to segment an image.
• Discontinuity approach: this approach relies on the discontinuity of pixel intensity values in the image. Line, point, and edge detection techniques use this approach to obtain intermediate segmentation results, which can later be processed to obtain the final segmented image.
Image Segmentation Techniques
1. Threshold Based Segmentation
2. Edge Based Segmentation
3. Region-Based Segmentation
4. Clustering Based Segmentation
5. Artificial Neural Network Based Segmentation
Threshold based Segmentation
• Image thresholding segmentation is a simple form of image segmentation.
• It is a way to create a binary or multi-color image by setting a threshold value on the pixel intensities of the original image.
• In this thresholding process, we consider the intensity histogram of all the pixels in the image.
• Then we set a threshold to divide the image into sections.
Threshold based Segmentation
• For example, considering image pixels ranging from 0 to 255, we set a threshold of 60.
• So all the pixels with values less than or equal to 60 will be assigned a value of 0 (black), and all the pixels with values greater than 60 will be assigned a value of 255 (white).
• But this threshold has to be set carefully to segment an image into an object and a background (see the sketch below).
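The example above translates directly into a couple of lines of NumPy:

```python
import numpy as np

def binarize(img, t=60):
    """Pixels <= t become 0 (black); pixels > t become 255 (white)."""
    return np.where(img > t, 255, 0).astype(np.uint8)
```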
Edge Based Segmentation
• Edge-based segmentation relies on edges found in an
image using various edge detection operators.
• These edges mark image locations of discontinuity in
gray levels, color, texture, etc.
• When we move from one region to another, the gray
level may change. So if we can find that discontinuity,
we can find that edge
• We can use various edge detectors such as the Sobel operator, the Canny edge detector, and the Kirsch, Prewitt, and Roberts operators (see the sketch below).
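A brief sketch using two of the detectors named above; the image path and the Canny hysteresis thresholds (100, 200) are illustrative assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Sobel: first-derivative estimates along x and y, combined into
# a gradient magnitude image.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
magnitude = np.hypot(gx, gy)

# Canny: gradient, non-maximum suppression, and hysteresis.
edges = cv2.Canny(img, 100, 200)
```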
Region-Based Segmentation
• Region-based segmentation involves dividing an image
into regions with similar characteristics.
• The similarity between pixels can be in terms of intensity,
color, etc.
• Each region is a group of pixels, which the algorithm
locates via a seed point.
• Some predefined rules are set which have to be obeyed
by a pixel in order to be classified into similar pixel
regions. The preferred rule can be set as a threshold.
• Region-based segmentation methods are preferred over edge-based methods when the image is noisy.
Region-Based Segmentation
• Region-Based techniques are further classified into 2
types based on the approaches they follow.
– Region growing method
– Region splitting and merging method
• In the region-growing method, we start with some pixel as the seed pixel and then check the adjacent pixels.
• If an adjacent pixel abides by the predefined rules, it is added to the region of the seed pixel, and the process continues until no similar pixels remain.
• This method follows a bottom-up approach (a sketch follows).
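A compact sketch of region growing, assuming a grayscale image and using "within tol of the seed intensity" as the predefined rule; 4-connectivity and tol=10 are illustrative choices.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed` (y, x) over 4-connected neighbours
    whose intensity is within `tol` of the seed pixel."""
    h, w = img.shape
    region = np.zeros((h, w), dtype=bool)
    seed_val = int(img[seed])
    queue = deque([seed])
    region[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and abs(int(img[ny, nx]) - seed_val) <= tol):
                region[ny, nx] = True
                queue.append((ny, nx))
    return region
```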
Region-Based Segmentation
• In region splitting, the whole image is first taken as a single region.
• If the region does not follow the predefined rules, it is further divided into multiple regions (usually 4 quadrants), and the predefined rules are then applied to those regions to decide whether to subdivide further or to classify each as a region.
• This process continues until no further division of regions is required, i.e., every region follows the predefined rules.
Region-Based Segmentation
• Usually, region splitting is performed first so as to split an image into its maximal set of regions, and these regions are then merged to form a good segmentation of the original image.
• In region splitting, the following condition can be checked to decide whether to subdivide a region or not:
• If the absolute difference between the maximum and minimum pixel intensities in a region is less than or equal to a threshold value decided by the user, then the region does not require further splitting (see the sketch below).
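A sketch of the splitting half of split-and-merge, using exactly the max-min criterion above. Quartering into four quadrants and the threshold follow the slides; the recursive implementation structure is an illustrative choice.

```python
import numpy as np

def split_regions(img, y, x, h, w, t, out):
    """Quarter the region at (y, x) unless its max-min intensity
    difference is within threshold `t` (the homogeneity rule above)."""
    block = img[y:y + h, x:x + w]
    if int(block.max()) - int(block.min()) <= t or h < 2 or w < 2:
        out.append((y, x, h, w))      # homogeneous enough; keep whole
        return
    hh, hw = h // 2, w // 2
    for dy, dx, rh, rw in ((0, 0, hh, hw), (0, hw, hh, w - hw),
                           (hh, 0, h - hh, hw), (hh, hw, h - hh, w - hw)):
        split_regions(img, y + dy, x + dx, rh, rw, t, out)

# regions = []
# split_regions(img, 0, 0, img.shape[0], img.shape[1], 30, regions)
```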
Cluster-Based Segmentation
• Clustering algorithms are unsupervised classification algorithms that help identify hidden information in images.
• One of the most dominant clustering-based algorithms used for segmentation is K-Means clustering.
• This type of clustering can be used to make segments in a color image.
• The algorithm divides the image into clusters of pixels with similar characteristics, separating data elements and grouping similar elements into clusters.
Cluster-Based Segmentation
K-Means Clustering
• Let's imagine a 2-dimensional dataset for easier visualization.
• First, k centroids (where k is chosen by the user) are randomly initialized.
• Then the distance from every point to every centroid is calculated, and each point is assigned to the cluster at the least distance.
• Then the centroid of each cluster is recalculated as the mean of the points assigned to it.
• Then the data points are reassigned to the clusters, and the process continues until the algorithm converges to a good solution (see the sketch below).
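A short sketch of K-Means color segmentation with OpenCV; k = 4, the iteration limits, and the file name are illustrative assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("photo.png")                  # placeholder path
pixels = img.reshape(-1, 3).astype(np.float32)

# Stop after 20 iterations or when centroids move less than 1.0.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
k = 4                                          # illustrative cluster count
_, labels, centers = cv2.kmeans(pixels, k, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

# Paint each pixel with its cluster centre to form the segments.
segmented = centers.astype(np.uint8)[labels.flatten()].reshape(img.shape)
```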
Pattern classification and Extraction
• Patterns are everywhere in the digital world.
• A pattern can either be observed physically or detected mathematically by applying algorithms.
• Examples: the colors on clothes, speech patterns, etc. In computer science, a pattern is represented as a vector of feature values.
Pattern classification and Extraction
• What is Pattern Recognition?
Pattern recognition can be defined as the
classification of data based on knowledge already
gained or on statistical information extracted from
patterns and/or their representation.
• One of the important aspects of pattern recognition is
its application potential.
• Examples: Speech recognition, speaker identification,
multimedia document recognition (MDR), automatic
medical diagnosis.
Pattern classification and Extraction
• In a typical pattern recognition application, the raw data is
processed and converted into a form that is amenable for a
machine to use.
• Pattern recognition involves the classification and clustering of patterns.
• In classification, an appropriate class label is assigned to a pattern based on an abstraction that is generated using a set of training patterns or domain knowledge. Classification is used in supervised learning.
• Clustering generates a partition of the data that supports decision making, the specific decision-making activity of interest to us.
• Clustering is used in unsupervised learning.
Pattern classification and Extraction
• A pattern is defined as a composite of features that are characteristic of an individual.
• In classification, a pattern is a pair of variables {x, w}, where x is a collection of observations or features (the feature vector) and w is the concept behind the observation (the label).
• The quality of a feature vector is related to its ability to discriminate examples from different classes.
• Examples from the same class should have similar feature values, while examples from different classes should have different feature values.
Levels of features
• Level 1 features (orientation field or ridge flow, and singular points)
• Level 2 features (ridge skeleton)
• Level 3 features (ridge contours, pores, dots)
Three different levels of features
• At the global level, a fingerprint shows a smooth ridge structure except in one or more regions containing distinctive characteristics called singularities, which can be:
• delta (represented with the symbol ∆);
• loop (represented with the symbol ∩);
• whorl (represented with the symbol O)
Fingerprint acquisition
• The first step in fingerprint recognition is image
acquisition - the process of capturing and digitizing the
fingerprint of an individual for further processing.
• The primary reason for the popularity of fingerprint
recognition is the availability of mature, convenient,
and low-cost sensors that can rapidly acquire the
fingerprint of an individual with minimum or no
intervention from a human operator.
• These compact fingerprint sensors have also been
embedded in many consumer devices such as laptops
and mobile phones.
Fingerprint Scanners
Fingerprint Acquisition: Sensing Techniques
• Digital images of the fingerprints can be acquired using off-line or
on-line methods.
Off-line method:
• Off-line techniques generally do not produce the digital image
directly from the fingertip. Rather, the fingerprint is first
transferred to a substrate (e.g., paper) that is subsequently
digitized.
• For example, an inked fingerprint image, the most common form of
off-line capture, is acquired by first applying ink to the subject’s
fingertip and then rolling or pressing the finger on paper, thereby
creating an impression of the fingerprint ridges on paper.
• This kind of fingerprint is often called rolled fingerprint.
• A very important kind of off-line fingerprint image is the latent fingerprint: a partial fingerprint image lifted from a crime scene by a forensic expert. Compared to a rolled fingerprint, a latent print is usually of poor quality and hard to process.
Different fingerprint impressions
On-line techniques
• On-line techniques produce the digital image directly from a
subject’s fingertip via digital imaging technology that circumvents
the need for obtaining an impression on a substrate. The resulting
fingerprint image is referred to as a live-scan fingerprint.
• A typical fingerprint scanner comprises:
• a sensor to read the ridge pattern on the finger surface;
• an A/D (Analog to Digital) converter to convert the signal;
• an interface module responsible for communicating with
external devices.
• Almost all existing sensors belong to one of the following families: optical, solid-state, and ultrasound. These sensors are also called touch sensors.
Different ways of acquiring
Different sensors
(a) Optical Frustrated Total Internal Reflection
(FTIR)
(b) Capacitance
(c) Ultrasound Reflection
(d) Piezoelectric Effect
(e) Temperature Differential
FTIR Based Fingerprint Sensing
• Optical frustrated total internal reflection (FTIR).
• This technique uses a glass platen, an LED light source, and a CCD camera to construct fingerprint images.
• When the finger is placed on one side of the glass platen (prism), only the ridges of the finger are in contact with the platen, not the valleys.
• The light source illuminates the glass at a certain angle, and the camera is placed such that it can capture the light reflected from the glass.
FTIR Based Fingerprint Sensing
• The light incident on the ridges is randomly scattered (resulting in a dark image), while the light incident on the valleys undergoes total internal reflection (resulting in a bright image).
• It is difficult to build this arrangement in a compact form, since the focal length of small lenses can be very large.
FTIR Based Sensing
Capacitance
• Capacitance-based solid-state live-scan fingerprint sensors are more commonly used than optical FTIR sensors, since they are very small and can easily be embedded into laptops, mobile phones, and computer peripherals.
• The sensor essentially consists of an array of electrodes.
• There are tens of thousands of small capacitance plates (electrodes) embedded in the chip.
• The fingerprint skin acts as the other electrode, thereby forming a miniature capacitor.
Capacitance
Ultrasound reflection
• It is based on sending acoustic signals toward the
fingertip and capturing the echo signal.
• The sensor has two main components: the sender, which generates short acoustic pulses, and the receiver, which detects the responses obtained when these pulses bounce off the fingerprint surface.
• Resilient to dirt and oil accumulations that may visually mar the fingerprint.
• Expensive and not suited for large-scale production.
Temperature differential
• This mechanism is made up of pyroelectric material that generates a current based on temperature differentials.
• A temperature differential is created when two surfaces are brought into contact.
• Fingerprint ridges, being in contact with the sensor surface, produce a different temperature differential than valleys, which are away from the sensor surface.
Fingerprint Scanner Operation Methods
• A plain fingerprint is obtained by simply placing the finger on the surface of the fingerprint sensor.
• A rolled fingerprint is obtained by rolling the finger from nail to nail on the surface of the fingerprint sensor.
• A sweep fingerprint is obtained by combining narrow fingerprint slices (typically 3 mm wide) captured while the user swipes a finger across the sensor.
• Clarity of the ridge pattern is another important determinant of quality.
Feature extraction
• Commercial fingerprint recognition systems
are mainly based on Level 1 (ridge orientation
and frequency) and Level 2 features (ridges
and minutiae).
• Generally, Level 1 features are first extracted
and then Level 2 features are extracted with
the guidance of Level 1 features.
Feature extraction
Steps
A typical feature extraction algorithm includes four main steps, namely:
(a) ridge orientation and frequency estimation
(b) ridge extraction
(c) singularity extraction
(d) binarization and thinning
1. Ridge Orientation and Frequency Estimation
2. Singularity Extraction
• Fingerprint singularities can be extracted from the orientation field using the well-known Poincaré index method.
• The Poincaré index refers to the cumulative change of orientations along a closed path in an orientation field.
• To accurately detect the location and type of a singularity, the Poincaré index is generally evaluated using the eight neighbors of a pixel.
Types of Singular Points
The Poincaré index of a pixel corresponding to a singular point can take one of four possible values (in units of π):
0 (non-singular),
1 (loop),
-1 (delta), and
2 (whorl).
Loop, delta, and whorl are the types of singularities found in fingerprints (a sketch of the computation follows).
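A sketch of the eight-neighbor Poincaré index computation described above, assuming an orientation field `theta` with angles in [0, π); returning the index in units of π reproduces the 0 / 1 / -1 / 2 values listed.

```python
import numpy as np

def poincare_index(theta, i, j):
    """Poincare index, in units of pi, at pixel (i, j) of an
    orientation field `theta` whose angles lie in [0, pi)."""
    # The eight neighbors of (i, j), visited along a closed path.
    ring = [(-1, -1), (0, -1), (1, -1), (1, 0),
            (1, 1), (0, 1), (-1, 1), (-1, 0)]
    total = 0.0
    for k in range(8):
        di, dj = ring[k]
        dn, dm = ring[(k + 1) % 8]
        d = theta[i + dn, j + dm] - theta[i + di, j + dj]
        # Orientations are ambiguous modulo pi, so wrap each step
        # difference into (-pi/2, pi/2].
        if d > np.pi / 2:
            d -= np.pi
        elif d < -np.pi / 2:
            d += np.pi
        total += d
    return round(total / np.pi)  # 0: none, 1: loop, -1: delta, 2: whorl
```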
3. Ridge Extraction
(a) The hand contour and the relevant points for finding the regions of interest. (b) Processed finger on the hand contour.
Geometry and Lighting Normalization
(a) Original hand image with the regions of interest marked on it. (b)
Palm subimage. (c) Little-finger subimage. (d) Ring-finger subimage.
(e) Middle-finger subimage. (f) Index-finger subimage. (g) Thumb
subimage.
Geometry and Lighting Normalization
• After the subimages have been extracted, lighting normalization using histogram fitting is applied.
• In this process, a target histogram G(l) is selected for each of the six subimages (a sketch of histogram fitting follows).
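A minimal sketch of histogram fitting (histogram matching) via cumulative histograms, assuming 8-bit subimages and a length-256 target histogram G; the function name and the nearest-CDF mapping are illustrative.

```python
import numpy as np

def fit_histogram(img, target_hist):
    """Remap `img` so its histogram approximates `target_hist` (len 256)."""
    src_cdf = np.cumsum(np.bincount(img.ravel(), minlength=256))
    src_cdf = src_cdf / src_cdf[-1]
    tgt_cdf = np.cumsum(np.asarray(target_hist, dtype=np.float64))
    tgt_cdf = tgt_cdf / tgt_cdf[-1]
    # For each source level, choose the target level whose CDF is closest.
    lut = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255)
    return lut.astype(np.uint8)[img]

# normalized = fit_histogram(palm_subimage, G)   # G: target histogram G(l)
```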