Computer Vision - Session 2
Agenda
- Definition and Goals of Edge detection
- Modeling intensity changes.
- Steps in edge detection.
- Criteria for optimal edge detection.
- Methods of Edge Detection.
- Segmentation and its types.
- Image Segmentation Techniques.
• Edge Detection:
Edges are significant local changes in image intensity, typically caused by:
• discontinuities in depth,
• discontinuities in surface orientation,
• changes in material properties, and
• variations in scene illumination.
How do we benefit from edge detection?
Goal of edge detection:
1- Edge detection is a useful technique in computer vision in which the boundaries between objects are identified automatically. Having these boundaries makes it easy to segment the image into objects, which can then be recognized separately.
2- Important features can be extracted from the edges of an image (e.g., corners, lines, curves).
3- These features are used by higher-level computer vision algorithms (e.g., recognition).
• Modeling intensity changes:
- Edges can be modeled according to their intensity profiles.
- There are several such models:
1- Step edge.
2- Ramp edge.
3- Ridge edge.
4- Roof edge.
Step edge:
the image intensity abruptly changes from one value on one side of the discontinuity to a different value on the opposite side.
Ramp edge:
a step edge where the intensity change is not instantaneous but occurs over a finite distance.
Ridge edge:
the image intensity abruptly changes value but then returns to the starting value within some short distance (generated usually by thin lines in the image).
Roof edge:
a ridge edge where the intensity change is not instantaneous but occurs over a finite distance (generated usually by the intersection of surfaces).
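To make the four profiles concrete, here is a minimal NumPy sketch that builds 1-D versions of them; the widths and intensity levels are arbitrary illustration values, not taken from the slides.

import numpy as np

x = np.arange(100)

step  = np.where(x < 50, 10, 200)                     # abrupt jump at x = 50
ramp  = np.clip((x - 40) * 9.5 + 10, 10, 200)         # same jump spread over ~20 pixels
ridge = np.where((x >= 45) & (x < 55), 200, 10)       # thin bright line with abrupt sides
roof  = 200 - np.minimum(np.abs(x - 50) * 19, 190)    # peak at x = 50 with linear slopes

for name, profile in [("step", step), ("ramp", ramp), ("ridge", ridge), ("roof", roof)]:
    print(name, profile[::10])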
Why do we use derivatives in edge detection?
Edges correspond to local extrema of the first derivative of the image intensity (equivalently, zero-crossings of the second derivative), so differentiating the image highlights them.
Steps in edge detection:
(1) Smoothing.
(2) Detection.
(3) Localization.
(1) Smoothing:
suppress as much noise as possible, without destroying
the true edges.
(2) Detection:
determine which edge pixels should be discarded as
noise and which should be retained (usually, thresholding
provides the criterion used for detection).
(3) Localization:
determine the exact location of an edge. Edge thinning
and linking are usually required in this step.
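A minimal sketch of the three steps on a grayscale NumPy array, assuming SciPy is available; the Gaussian sigma and the threshold value are illustrative choices, not values from the slides.

import numpy as np
from scipy.ndimage import gaussian_filter

def detect_edges(img, sigma=1.5, thresh=20.0):
    smoothed = gaussian_filter(img.astype(float), sigma)  # (1) smoothing: suppress noise
    gy, gx = np.gradient(smoothed)                        # first derivatives along rows/columns
    magnitude = np.hypot(gx, gy)                          # gradient magnitude
    edges = magnitude > thresh                            # (2) detection: keep strong responses
    return edges, magnitude                               # (3) localization (thinning/linking) would follow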
Criteria for optimal edge detection:
1- Good detection: minimize the probability of missing true edge points and of marking false ones (caused by noise).
2- Good localization: the detected edge points should be as close as possible to the true edge.
3- Single response: the detector must return one point only for each true edge point; that is,
minimize the number of local maxima around the true edge (created by noise).
Methods of Edge Detection:
1- Prewitt Operator.
2- Sobel Operator.
3- Roberts Operator.
4- Canny.
Laplace Operator:
What does detecting zero-crossings (of the second derivative, e.g., the Laplacian) add over simply looking for local extrema of the first derivative?
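As an illustration of the zero-crossing idea, here is a hedged Laplacian-of-Gaussian sketch with OpenCV: smooth, take the second derivative, and mark pixels where the response changes sign. The sigma value and the simple neighbour test are illustrative choices, not from the slides.

import cv2
import numpy as np

def log_zero_crossings(gray, sigma=2.0):
    blurred = cv2.GaussianBlur(gray.astype(np.float64), (0, 0), sigma)  # smooth first
    lap = cv2.Laplacian(blurred, cv2.CV_64F)                            # second derivative
    sign = np.sign(lap)
    zc = np.zeros(lap.shape, dtype=bool)
    zc[:, :-1] |= sign[:, :-1] * sign[:, 1:] < 0   # sign change with the right neighbour
    zc[:-1, :] |= sign[:-1, :] * sign[1:, :] < 0   # sign change with the lower neighbour
    return zc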
Roberts operator:
The Roberts cross operator is one of the first edge detectors. It approximates the
gradient of an image through discrete differentiation, which is achieved by computing
the sum of the squares of the differences between diagonally adjacent pixels.
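A minimal sketch of the Roberts cross using its standard pair of 2x2 diagonal kernels; it returns the square root of the summed squared diagonal differences (the gradient magnitude).

import numpy as np
from scipy.ndimage import convolve

ROBERTS_X = np.array([[1, 0],
                      [0, -1]], dtype=float)   # one diagonal difference
ROBERTS_Y = np.array([[0, 1],
                      [-1, 0]], dtype=float)   # the other diagonal difference

def roberts_magnitude(gray):
    g = gray.astype(float)
    gx = convolve(g, ROBERTS_X)
    gy = convolve(g, ROBERTS_Y)
    return np.sqrt(gx**2 + gy**2)               # gradient magnitude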
Prewitt Operator:
Note:
Roberts uses four directions to determine the change.
Prewitt uses eight directions to determine the change.
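A sketch of the basic two-kernel Prewitt formulation; the eight-direction (compass) variant mentioned above rotates the same 3x3 mask through eight orientations. The kernel signs follow the common convention and are an assumption, not taken from the slides.

import numpy as np
from scipy.ndimage import convolve

PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)   # responds to vertical edges
PREWITT_Y = PREWITT_X.T                           # responds to horizontal edges

def prewitt_magnitude(gray):
    g = gray.astype(float)
    gx = convolve(g, PREWITT_X)
    gy = convolve(g, PREWITT_Y)
    return np.hypot(gx, gy)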
Sobel Operator:
- Uses only two filters: one horizontal and one vertical.
- Solves the problem of sensitivity to outliers found in the Roberts operator.
- Solves the multiple-mask problem of the Prewitt operator.
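A minimal Sobel sketch with OpenCV: cv2.Sobel applies the two 3x3 kernels (with the centre row/column weighted by 2), and the two responses are combined into a per-pixel magnitude.

import cv2

def sobel_magnitude(gray):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # horizontal derivative
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # vertical derivative
    return cv2.magnitude(gx, gy)                      # per-pixel sqrt(gx^2 + gy^2)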
Canny Operator:
1- Smoothing using a Gaussian filter, then compute the gradient magnitude (e.g., with Sobel X and Y filters).
2- Non-maximum suppression: keep only pixels that are local maxima along the gradient direction, which thins the detected edges.
3- Hysteresis thresholding (detailed below).
Hysteresis thresholding:
1- If the gradient at a pixel is above 'high', declare it an 'edge pixel'.
2- If the gradient at a pixel is below 'low', declare it a 'non-edge pixel'.
3- If the gradient at a pixel is between 'low' and 'high', declare it an 'edge pixel' if and
only if it is connected to an 'edge pixel'.
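A minimal Canny sketch with OpenCV; the blur parameters and the low/high hysteresis thresholds (50 and 150) are illustrative assumptions, not values from the slides. OpenCV performs the gradient computation, non-maximum suppression, and hysteresis internally.

import cv2

def canny_edges(gray, low=50, high=150):
    # gray: 8-bit single-channel image
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)   # Gaussian smoothing
    return cv2.Canny(blurred, low, high)            # gradient, NMS and hysteresis inside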
Segmentation
Image segmentation is a method of dividing a digital image
into subgroups called image segments, reducing the
complexity of the image and enabling further processing or
analysis of each image segment.
Segmentation may be:
Complete segmentation:
a set of disjoint regions corresponding uniquely to objects in the input image.
Partial segmentation:
regions do not correspond directly to image objects.
Types of Image Segmentation:
▪ Semantic segmentation:
Label all pixels into a semantic class.
▪ Instance segmentation:
Label pixels belonging to individual objects in the scene. Only objects are segmented.
▪ Panoptic segmentation:
Find pixels belonging to individual objects in the scene and group the remaining pixels into their semantic classes (a combination of instance and semantic segmentation).
Image Segmentation Techniques:
1- Edge-Based Segmentation
2- Threshold-Based Segmentation
3- Region-Based Segmentation
4- Cluster-Based Segmentation
Threshold-Based Segmentation:
is the simplest image segmentation method, dividing pixels based on
their intensity relative to a given value or threshold. It is suitable for
segmenting objects with higher intensity than other objects or the
background.
The threshold can be chosen:
– Manually
– Automatically via:
● Histogram centroid calculation (average pixel intensity)
● Optimal threshold algorithm
● Otsu threshold algorithm
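A sketch of a manual threshold and of Otsu's automatic threshold with OpenCV on an 8-bit grayscale image; the manual value 127 is an arbitrary illustration.

import cv2

def manual_threshold(gray, t=127):
    # t is chosen by hand after inspecting the histogram
    _, mask = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)
    return mask

def otsu_threshold(gray):
    # Otsu picks the threshold automatically from the histogram
    t, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return t, mask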
Manual Thresholding
Here we manually select the threshold that best splits the two clusters.
Histogram Centroid
Algorithm: use the centroid of the histogram (the average pixel intensity) as the threshold.
Here this automatically obtained mask requires a lot of manual post-processing to get it
closer to the one obtained via manual thresholding, so it wouldn't be so "automatic"
after all.
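A minimal sketch of histogram-centroid thresholding: the threshold is simply the average pixel intensity of the image.

import numpy as np

def centroid_threshold(gray):
    t = gray.mean()          # histogram centroid = average pixel intensity
    return t, gray > t       # boolean mask of pixels brighter than the centroid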
Cluster-Based Segmentation:
• Example: K-means
K-means:
Cluster the pixels (by intensity or colour) into k groups and treat each resulting cluster as a segment (see the sketch below).
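A minimal K-means segmentation sketch with OpenCV, clustering pixels by colour; k = 3, the attempt count, and the termination criteria are illustrative assumptions. Clustering in colour space ignores spatial position; appending the (x, y) coordinates as extra features is a common variant.

import cv2
import numpy as np

def kmeans_segment(img_bgr, k=3):
    samples = img_bgr.reshape(-1, 3).astype(np.float32)        # one row per pixel (B, G, R)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 5, cv2.KMEANS_RANDOM_CENTERS)
    labels = labels.flatten()
    segmented = centers[labels].reshape(img_bgr.shape)         # paint each pixel with its cluster centre
    return labels.reshape(img_bgr.shape[:2]), segmented.astype(np.uint8)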
Optimal threshold algorithm
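The slides do not spell out the optimal threshold algorithm; assuming it refers to the common iterative (Ridler-Calvard / ISODATA-style) scheme, here is a sketch that repeatedly sets the threshold to the midpoint of the background and foreground mean intensities until it stops changing.

import numpy as np

def iterative_optimal_threshold(gray, eps=0.5):
    g = gray.astype(float)
    t = g.mean()                                   # start from the global mean
    while True:
        background = g[g <= t]
        foreground = g[g > t]
        if background.size == 0 or foreground.size == 0:
            return t                               # degenerate split, keep the current threshold
        new_t = 0.5 * (background.mean() + foreground.mean())
        if abs(new_t - t) < eps:
            return new_t
        t = new_t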
Thank you