Practical No. 5
Aim:
Compare the results of any three edge detection algorithms on the same image dataset and do the
analysis of the results.
Theory:
Image Segmentation
The first step in image analysis is to segment the image. Segmentation is the process of
subdividing the image into its constituent parts or objects. The level to which this subdivision is
carried depends on the problem being solved.
In computer vision, image segmentation is the process of partitioning a digital image into
multiple segments (sets of pixels, also known as super-pixels). The goal of segmentation is to
simplify and/or change the representation of an image into something that is more meaningful
and easier to analyze. Image segmentation is typically used to locate objects and boundaries
(lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a
label to every pixel in an image such that pixels with the same label share certain characteristics.
The result of image segmentation is a set of segments that collectively cover the entire
image, or a set of contours extracted from the image (see edge detection). Each of the pixels in a
region is similar with respect to some characteristic or computed property, such as colour,
intensity, or texture. Adjacent regions are significantly different with respect to the same
characteristic(s). When
applied to a stack of images, typical in medical imaging, the resulting contours after image
segmentation can be used to create 3D reconstructions with the help of interpolation algorithms
like marching cubes.
Segmentation algorithms for monochrome images are generally based on one of the two
basic properties of grey level values:
1. Discontinuity: In this category, the approach is to partition the image based on abrupt changes
in grey level.
2. Similarity: In this category, the approaches are based on thresholding, region growing, and
region splitting and merging (a short thresholding sketch is given below).
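As an illustration of the similarity-based approach, the following is a minimal sketch of global
thresholding with OpenCV; the filename "input.jpg" and the threshold value 127 are assumptions
chosen only for illustration.

# Similarity-based segmentation by simple global thresholding (illustrative sketch).
# The filename and threshold are assumed values, not part of the practical's dataset.
import cv2

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Pixels brighter than the threshold become 255 (object), the rest become 0 (background).
_, segmented = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

cv2.imshow("Thresholded segmentation", segmented)
cv2.waitKey(0)
cv2.destroyAllWindows()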
Edge Detection :
Edge detection is by far the most common approach for detecting meaningful discontinuities in
gray level. The reason is that isolated points and thin lines are not frequent occurrences in
most practical applications. An edge is the boundary between two regions with relatively distinct
gray level properties. Note that an edge (a transition from dark to light) is modeled as a smooth,
rather than an abrupt, change of gray level. This model reflects the fact that edges in digital
images are generally slightly blurred as a result of sampling.
1. Gradient Operator:
The gradient vector always points in the direction of the maximum rate of change of f at
coordinates (x, y). The magnitude of this vector is the main quantity used in edge detection and
is given by

    |grad f| = sqrt(Gx^2 + Gy^2)

and the gradient direction by

    alpha(x, y) = arctan(Gy / Gx)

The use of the different gradient operators is discussed below for a 3x3 grey level neighbourhood
with values z1 to z9 and centre pixel z5:

    z1  z2  z3
    z4  z5  z6
    z7  z8  z9
2. Roberts Operator:
One of the simplest ways to implement the first-order partial derivatives is to use the Roberts
cross-gradient operator. The masks for the Roberts operator are given by

    Gx:  -1   0        Gy:   0  -1
          0   1              1   0

so the two cross differences for the x and y gradient components at point z5 are

    Gx = z9 - z5   and   Gy = z8 - z6
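The Roberts cross differences can be applied to the whole image as a pair of 2x2 kernels, for
example with cv2.filter2D. The following is a minimal sketch under that assumption; the input
filename is illustrative.

# Roberts cross-gradient edge detection (illustrative sketch using cv2.filter2D).
import cv2
import numpy as np

# Filename is an assumed example.
img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# The two Roberts masks: Gx = z9 - z5 and Gy = z8 - z6.
kernel_x = np.array([[-1, 0],
                     [ 0, 1]], dtype=np.float32)
kernel_y = np.array([[ 0, -1],
                     [ 1,  0]], dtype=np.float32)

gx = cv2.filter2D(img, -1, kernel_x)
gy = cv2.filter2D(img, -1, kernel_y)

# Gradient magnitude, clipped back to the 8-bit range for display.
magnitude = np.sqrt(gx ** 2 + gy ** 2)
roberts_edges = np.uint8(np.clip(magnitude, 0, 255))

cv2.imshow("Roberts edges", roberts_edges)
cv2.waitKey(0)
cv2.destroyAllWindows()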
Sobel operator edge detection algorithm
1. Apply the Sobel operator to find the x and y gradients of the image using the cv2.Sobel()
function.
2. Calculate the magnitude and angle of the gradients using the following formulae:
   magnitude = np.sqrt(sobel_x**2 + sobel_y**2)
   angle = np.arctan2(sobel_y, sobel_x)
3. Display the binary image with the detected edges using the cv2 library (a code sketch for these
steps follows the list).
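The following is a minimal sketch of the Sobel steps above; the input filename and the
binarization threshold of 100 are assumed values.

# Sobel operator edge detection (illustrative sketch).
import cv2
import numpy as np

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# x and y gradients with 3x3 Sobel kernels.
sobel_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
sobel_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

# Gradient magnitude and direction.
magnitude = np.sqrt(sobel_x ** 2 + sobel_y ** 2)
angle = np.arctan2(sobel_y, sobel_x)

# Binarize the magnitude to obtain an edge map (threshold chosen arbitrarily).
magnitude_8u = np.uint8(np.clip(magnitude, 0, 255))
_, sobel_edges = cv2.threshold(magnitude_8u, 100, 255, cv2.THRESH_BINARY)

cv2.imshow("Sobel edges", sobel_edges)
cv2.waitKey(0)
cv2.destroyAllWindows()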
Canny edge detection algorithm
1. Use the Canny edge detector to find the edges by setting the upper and lower thresholds and
using the cv2.Canny() function.
2. Display the binary image with the detected edges using the cv2 library (a code sketch follows
the list).
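A minimal sketch of the Canny steps above; the lower and upper thresholds (100 and 200) are
assumed values and should be tuned for the actual dataset.

# Canny edge detection (illustrative sketch).
import cv2

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Lower and upper hysteresis thresholds (assumed values).
canny_edges = cv2.Canny(img, 100, 200)

cv2.imshow("Canny edges", canny_edges)
cv2.waitKey(0)
cv2.destroyAllWindows()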
Laplacian operator edge detection algorithm
1. Use the Laplacian operator edge detector to find the edges using the cv2.Laplacian() function.
2. Display the binary image with the detected edges using the cv2 library (a code sketch follows
the list).
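A minimal sketch of the Laplacian steps above; the binarization threshold of 30 is an assumed
value.

# Laplacian operator edge detection (illustrative sketch).
import cv2
import numpy as np

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Second-derivative response; take its absolute value before display.
laplacian = cv2.Laplacian(img, cv2.CV_64F, ksize=3)
laplacian_abs = np.uint8(np.clip(np.absolute(laplacian), 0, 255))

# Binarize to show the detected edges (threshold chosen arbitrarily).
_, laplacian_edges = cv2.threshold(laplacian_abs, 30, 255, cv2.THRESH_BINARY)

cv2.imshow("Laplacian edges", laplacian_edges)
cv2.waitKey(0)
cv2.destroyAllWindows()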
Conclusion:
We have successfully compared the results of three edge detection algorithms on the same image
dataset and analysed the results.