Computer Vision Unit 1, 2
Fig. 124 Harris detector. The binary images represent the negative (contours),
weak (flat areas) and positive (corners) values of the coefficient R.
Morphological operations
Morphological operations are image-processing techniques used to analyze and
process geometric structures in binary and grayscale images. These operations
focus on the shape and structure of objects within an image. They are particularly
useful in image segmentation, object detection, and noise removal tasks.
Morphological operations aim to probe and transform an image based on its shape,
enhancing features or removing imperfections.
What are Morphological Operations?
Morphological operations are techniques used in image processing that focus on
the structure and form of objects within an image. These operations process images
based on their shapes and are primarily applied to binary images, but can also be
extended to grayscale images. The core idea is to probe an image with a
structuring element and modify the pixel values based on their spatial arrangement
and the shape of the structuring element. Key morphological operations include
erosion, dilation, opening, closing, and others, each serving distinct purposes in
enhancing and analyzing images.
Morphological operations rely on two key elements:
The Input Image: Usually a binary image, where the objects of interest are
represented by foreground pixels (typically white) and the background by
background pixels (typically black). Grayscale images can also be processed
using morphological operations.
The Structuring Element: A small matrix or kernel that defines the
neighborhood of pixels over which the operation is performed. The shape
and size of the structuring element can greatly influence the outcome of the
morphological operation.
Different Morphological Operations
1. Erosion
Erosion is a fundamental morphological operation that reduces the size of objects
in a binary image. It works by removing pixels from the boundaries of objects.
Purpose: To remove small noise, detach connected objects, and erode
boundaries.
How it Works: The structuring element slides over the image, and for each
position, if all the pixels under the structuring element match the foreground,
the pixel in the output image is set to the foreground. Otherwise, it is set to
the background.
2. Dilation
Dilation is the opposite of erosion and is used to increase the size of objects in an
image.
Purpose: To join adjacent objects, fill small holes, and enhance features.
How it Works: The structuring element slides over the image, and for each
position, if any pixel under the structuring element matches the foreground,
the pixel in the output image is set to the foreground.
3. Opening
Opening is a compound operation that involves erosion followed by dilation.
Purpose: To remove small objects or noise from the image while preserving
the shape and size of larger objects.
How it Works: First, the image undergoes erosion, which removes small
objects and noise. Then, dilation is applied to restore the size of the
remaining objects to their original dimensions.
4. Closing
Closing is another compound operation that consists of dilation followed by
erosion.
Purpose: To fill small holes and gaps in objects while preserving their
overall shape.
How it Works: First, dilation is applied to the image, filling small holes and
gaps. Then, erosion is applied to restore the original size of the objects.
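As a quick illustration, the four operations above can be reproduced with OpenCV. The sketch below assumes a hypothetical input file 'pic.jpeg' and a 5×5 square structuring element; the file name and kernel size are placeholders.
import cv2
import numpy as np

# Hypothetical input file; threshold a grayscale image to obtain a binary image
img = cv2.imread('pic.jpeg', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# 5x5 square structuring element (kernel)
kernel = np.ones((5, 5), np.uint8)

eroded = cv2.erode(binary, kernel, iterations=1)            # shrinks foreground regions
dilated = cv2.dilate(binary, kernel, iterations=1)          # grows foreground regions
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # erosion followed by dilation
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # dilation followed by erosion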
Useful link:
https://www.mathworks.com/help/images/morphological-dilation-and-erosion.html
Texture
What Is Texture Analysis In Computer Vision?
Texture is one of the major characteristics of image data and is used for identifying objects or regions of interest in an image.
In computer vision, we have to deal with the various structural characteristics of image and video data, and texture is one of the most important of these. In this section, we will build an understanding of texture and texture analysis, and discuss some of the important procedures that need to be followed in texture analysis. The major points to be discussed are listed below.
Table of Contents
1. What is Texture?
2. What is Texture Analysis?
3. Challenges in Texture Analysis
4. Feature Extraction Method for Categorizing Textures
1. Feature extraction
2. Classification
5. Application of Texture Analysis
Let’s start the discussion by understanding texture.
What is Texture?
There are two kinds of texture: tactile and optical. Tactile texture is what we feel by touching a surface, while optical (visual) texture refers to the shape and content of the image. Humans can easily recognize the texture of an image, but making a machine analyze image texture has its own complexity. In the field of image processing, we can consider the spatial variation of the brightness intensity of the pixels as the texture of the image.
In image processing, textural images are those in which a specific pattern of texture is repeated throughout the image, for example a brick wall or a woven fabric, where one small patch looks statistically like any other.
Having gone through the above points, we can define and recognize texture. The main aim now is to understand texture analysis; in the next section, we will see how texture analysis is defined and generalized.
What is Texture Analysis?
So far we have built an understanding of texture in image data. The aim now is to discuss how machines understand the texture of images so that they can perform different image processing tasks. Understanding the texture of images requires texture analysis, and texture analysis can be treated as a subject in its own right.
Broadly, texture analysis can be categorized into four areas: texture segmentation, texture synthesis, texture shape extraction, and texture classification.
Let’s have a brief discussion of each category so that we have a clear picture of the areas of image processing where they are used.
Texture Segmentation: In image data, we can find differences between image regions in terms of texture. In texture segmentation, we find the boundaries between the different textures in an image; that is, we compare different areas of the image and, where their textural characteristics differ, separate them by assigning boundaries.
Texture Synthesis: In texture synthesis, we use methods to generate images that have a texture similar to the input images. This part of texture analysis is used in the creation of computer games and computer graphics.
Texture Shape Extraction: Here we try to extract the 3D structure of regions in the image, which are normally covered by a unique or specific texture. This is useful for analyzing the shape and structure of objects in the image using the image’s textural properties and the spatial relationships of the textures with each other.
Texture Classification: This can be considered the most important task of texture analysis; it is responsible for describing the type of texture in an image. Texture classification is the process of assigning an unknown texture sample from an image to one of a set of predefined texture classes.
Here we have seen a basic introduction to texture analysis and its parts. Next, we need to know the challenges we may face in texture analysis.
Challenges in Texture Analysis
In the real world, we face two major challenges in texture analysis:
Image rotation
Image noise
These factors can have a destructive effect on texture analysis and image classification: methods that are not robust to rotation and noise do not hold up in practice, and the performance of the whole process can be reduced severely. We therefore want the process of analyzing and categorizing images to be robust and stable, neutralizing the effect of these challenges.
Images can also differ from each other in scale, viewpoint, brightness, or intensity of light, which likewise creates difficulties for texture classification. Various methods have been introduced to reduce the effect of these challenges. In particular, we can classify texture using feature extraction; let’s take a look at feature extraction for categorizing textures.
Feature Extraction Method for Categorizing Textures
As discussed in the previous section, classification is one of the most important parts of texture analysis, and the basic idea behind it is to assign labels to texture samples from an image according to their texture class. We can perform the classification using features extracted from the images. The process can be split into two parts:
1. The feature extraction part: In this part, we extract the textural properties of the images; the aim is to build a model for every texture that exists in the training data.
2. The classification part: In this part, we perform texture analysis on the test images with the same techniques applied to the training images and then apply a classification algorithm, which can be a statistical or a deep learning algorithm.
The images are examined by the feature extractor, and texture classification is then carried out by the classification algorithm.
Let’s have a look at a more detailed description of these two parts.
Feature extraction
As discussed above, the basic idea of this part is to extract texture features from the images, and for this we need a model for every texture present in the training images. These features can be discrete histograms, numerical values, empirical distributions, or texture attributes such as contrast, spatial structure, direction, etc. The extracted texture features are then used to train the classifier. There are various ways to classify texture, and their efficiency depends on the type of texture features extracted. These methods can be divided into the following groups:
1. Statistical methods
2. Structural methods
3. Model-based methods
4. Transform-based methods
We can use any of these methods for extracting features from the images.
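As an illustration of the statistical group, the sketch below computes gray-level co-occurrence matrix (GLCM) features with scikit-image. The file name is a placeholder, and the function names graycomatrix/graycoprops assume a recent scikit-image release (older versions spell them greycomatrix/greycoprops).
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Placeholder texture image, loaded as grayscale
img = cv2.imread('texture_sample.jpg', cv2.IMREAD_GRAYSCALE)

# Co-occurrence matrix for a 1-pixel offset at 0, 45, 90 and 135 degrees
glcm = graycomatrix(img, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# Statistical texture descriptors, averaged over the four directions
features = [graycoprops(glcm, prop).mean()
            for prop in ('contrast', 'homogeneity', 'energy', 'correlation')]
print(features)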
Classification
In the second stage of the process, we classify the extracted texture features using machine learning classification algorithms. The classification method selects an appropriate class for each texture: the feature vector extracted from a test image is compared with the feature vectors obtained during training, and its class is determined from the best match. This step is repeated for every image in the test set. The estimated classes are then compared with the actual classes and the recognition rate is calculated, which shows the efficiency of the implemented algorithm. The commonly used accuracy measure is:
Classification accuracy = (correct matches / number of test images) × 100
Here we have seen how texture classification can be applied to images, which is an important part of texture analysis.
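A minimal classification sketch, assuming feature vectors such as the GLCM descriptors above have already been extracted for labeled training and test images; the random arrays below are placeholders for those features, and the choice of a k-nearest-neighbour classifier is purely illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Placeholder feature vectors and labels; replace with real extracted features
rng = np.random.default_rng(0)
X_train, y_train = rng.random((100, 4)), rng.integers(0, 3, 100)
X_test, y_test = rng.random((30, 4)), rng.integers(0, 3, 30)

# Any statistical or deep learning classifier can be used; k-NN is shown here
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Classification accuracy = (correct matches / number of test images) x 100
print(accuracy_score(y_test, y_pred) * 100)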
Application of Texture Analysis
In the sections above, we have seen that the textures present in an image carry valuable information that can be used for various image processing tasks. Some of the tasks and applications that can be performed using texture analysis are:
1. Face detection
2. Tracking objects in videos
3. Assessment of product quality
4. Medical image analysis
5. Remote sensing
6. Vegetation mapping
Module 2
Transformation: Orthogonal, Euclidean, Affine and Projective.
In geometric transformations, "orthogonal" refers to a transformation that preserves angles and lengths (rotations and reflections); "Euclidean" (rigid) combines rotations and translations while preserving distances and angles; "affine" additionally allows scaling and shearing while still preserving parallel lines; and "projective" is the most general type, allowing perspective distortion and not necessarily preserving parallelism, lengths, or angles, but still maintaining collinearity.
Orthogonal Transformation: a linear transformation whose matrix Q satisfies QᵀQ = I; it preserves lengths and angles (pure rotations and reflections, with no translation).
Euclidean Transformation: a rigid transformation composed of a rotation (or reflection) and a translation; distances and angles are preserved.
Affine Transformation: a linear transformation plus a translation; it allows scaling and shearing and preserves parallelism of lines, but not necessarily lengths or angles.
Projective Transformation: the most general linear transformation in homogeneous coordinates (a homography); it models perspective distortion and preserves only collinearity.
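For reference, these classes can be written as 3×3 matrices acting on homogeneous coordinates (x, y, 1)ᵀ; this is a standard formulation, not tied to any one source:
\text{Euclidean (rigid):}\; \begin{bmatrix} \cos\theta & -\sin\theta & t_x \\ \sin\theta & \cos\theta & t_y \\ 0 & 0 & 1 \end{bmatrix} \qquad
\text{Affine:}\; \begin{bmatrix} a_{11} & a_{12} & t_x \\ a_{21} & a_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix} \qquad
\text{Projective:}\; \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \;(\text{defined up to scale})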
Affine Transformations:
Affine transformations are among the simplest forms of transformation. These transformations are linear in the sense that they satisfy the following
properties:
Lines map to lines
Points map to points
Parallel lines stay parallel
Some familiar examples of affine transforms
are translations, dilations, rotations, shearing, and reflections. Furthermore, any
composition of these transformations (like a rotation after a dilation) is another
affine transform.
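A minimal sketch of applying affine transforms with OpenCV; the input file and the particular rotation/shear parameters are illustrative.
import cv2
import numpy as np

img = cv2.imread('pic.jpeg')
h, w = img.shape[:2]

# Rotation by 30 degrees about the image center with a uniform scale of 0.8
M_rot = cv2.getRotationMatrix2D((w / 2, h / 2), 30, 0.8)
rotated = cv2.warpAffine(img, M_rot, (w, h))

# A general affine map defined by three point correspondences (here a horizontal shear)
src = np.float32([[0, 0], [w - 1, 0], [0, h - 1]])
dst = np.float32([[0, 0], [w - 1, 0], [w * 0.2, h - 1]])
M_aff = cv2.getAffineTransform(src, dst)
sheared = cv2.warpAffine(img, M_aff, (w, h))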
Fourier transform
The Fourier transform is a mathematical operation that analyzes the frequency
components of an image in computer vision. It's a useful tool for understanding the
features of an image or signal.
The Fourier Transform is a mathematical tool used to decompose a signal into its
frequency components. In the case of image processing, the Fourier Transform can
be used to analyze the frequency content of an image, which can be useful for tasks
such as image filtering and feature extraction.
Here we will discuss how to find the Fourier Transform of an image using the OpenCV Python library. We will begin with the basics of the Fourier Transform and its application in image processing, then move on to the steps involved in finding the Fourier Transform of an image using OpenCV.
Basics of the Fourier Transform
The Fourier Transform decomposes a signal into its frequency components by representing it as a sum of sinusoidal functions. For a signal represented as a function of time, t, the Fourier Transform is given by the following equation:
F(f) = \int_{-\infty}^{\infty} f(t)\, e^{-j 2\pi f t}\, dt
where F(f) is the Fourier Transform of the signal f(t), and f is the frequency in Hertz (Hz). The Fourier Transform can be thought of as a representation of the signal in the frequency domain, rather than the time domain.
In the case of image processing, the Fourier Transform can be used to analyze the
frequency content of an image. This can be useful for tasks such as image filtering,
where we want to remove certain frequency components from the image, or feature
extraction, where we want to identify certain frequency patterns in the image.
Steps to find the Fourier Transform of an image using OpenCV
Step 1: Load the image using the cv2.imread() function. This function takes in the
path to the image file as an argument and returns the image as a NumPy array.
Step 2: Convert the image to grayscale using the cv2.cvtColor() function. This is
optional, but it is generally easier to work with grayscale images when performing
image processing tasks.
Step 3: Use the cv2.dft() function to compute the discrete Fourier Transform of
the image. This function takes in the image as an argument and returns the Fourier
Transform as a NumPy array.
Step 4: Shift the zero-frequency component of the Fourier Transform to the center
of the array using the numpy.fft.fftshift() function. This step is necessary because
the cv2.dft() function returns the Fourier Transform with the zero-frequency
component at the top-left corner of the array.
Step 5: Compute the magnitude of the Fourier Transform using
the numpy.abs() function. This step is optional, but it is generally easier to
visualize the frequency content of an image by looking at the magnitude of the
Fourier Transform rather than the complex values.
Step 6: Scale the magnitude of the Fourier Transform using
the cv2.normalize() function. This step is also optional, but it can be useful for
improving the contrast of the resulting image.
Step 7: Use the cv2.imshow() function to display the magnitude of the Fourier
Transform.
Example 1
Here is the complete example of finding the Fourier Transform of an image
using OpenCV:
Input image (figure not reproduced).
Python3
import cv2
import numpy as np

# now we will be loading the image and converting it to grayscale
image = cv2.imread(r"Dhoni-dive_165121_730x419-m.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
Output image: the magnitude spectrum (Fourier transform) of the input.
In this example, we first load the image and convert it to grayscale using the
cv2.imread() and cv2.cvtColor() functions. Then, we compute the discrete Fourier
Transform of the image using the cv2.dft() function and store the result in the
‘fourier’ variable.
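The snippet above stops after loading and converting the image; a minimal sketch of the remaining steps described earlier (DFT, frequency shift, magnitude, normalization, display), assuming the same input file, is:
import cv2
import numpy as np

image = cv2.imread(r"Dhoni-dive_165121_730x419-m.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Compute the DFT (float32 input, complex two-channel output)
fourier = cv2.dft(np.float32(gray), flags=cv2.DFT_COMPLEX_OUTPUT)

# Shift the zero-frequency component to the center of the spectrum
fourier_shift = np.fft.fftshift(fourier, axes=(0, 1))

# Magnitude of the complex result (real and imaginary channels)
magnitude = cv2.magnitude(fourier_shift[:, :, 0], fourier_shift[:, :, 1])

# Log-scale and normalize to 0-255 for display
magnitude = 20 * np.log(magnitude + 1)
magnitude = cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)

cv2.imshow('Fourier Transform', magnitude)
cv2.waitKey(0)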
Convolution
A convolution filters an image by sliding a kernel over it and computing a weighted sum of the pixels under the kernel:
y(n,m) = \sum_{k} \sum_{l} h(k,l)\, x(n-k,\, m-l)
Here k, l are the row and column indices of the kernel h, x(n,m) is the input image and y(n,m) is the output image, and n, m are the row and column indices of the input and output images.
Notice that the output image size is smaller than the input image size. A larger
kernel size would further decrease the output image dimensions. One way to fix
this downsizing is to pad the input image. You can populate the padded image by
extending the pixel values at the edge. Extending the edge pixels is one of many
methods of padding. Below shows the input image padded by 1 pixel. The padded
pixels are outlined in blue dotted lines.
Convolution computation illustrated.
The updated illustration with padding is shown below. Now, the output image has
the same dimension as the original input image.
Figure: base image.
The following kernel sharpens the image.
Figure: sharpening result.
One application of convolutional filtering is edge detection. A diagonal edge-detecting kernel, for example, rewards changes in intensity along a diagonal.
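A minimal sketch with cv2.filter2D, using an example sharpening kernel and an example diagonal edge kernel (these particular kernels are illustrative, not necessarily the exact ones shown in the original figures); BORDER_REPLICATE reproduces the edge-extension padding described above.
import cv2
import numpy as np

img = cv2.imread('pic.jpeg', cv2.IMREAD_GRAYSCALE)

# Sharpening kernel: boosts the center pixel relative to its 4-neighbours
sharpen = np.array([[0, -1, 0],
                    [-1, 5, -1],
                    [0, -1, 0]], dtype=np.float32)

# Diagonal edge-detecting kernel: responds to intensity changes along a diagonal
diag_edge = np.array([[-1, -1, 2],
                      [-1, 2, -1],
                      [2, -1, -1]], dtype=np.float32)

# filter2D keeps the output the same size as the input; edge pixels are
# replicated to pad the image (cv2.BORDER_REPLICATE)
sharpened = cv2.filter2D(img, -1, sharpen, borderType=cv2.BORDER_REPLICATE)
edges = cv2.filter2D(img, -1, diag_edge, borderType=cv2.BORDER_REPLICATE)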
Histogram
A histogram is a graphical representation or visual display that shows the
distribution of data. It is commonly used in statistics, data analysis, and various
fields to illustrate the frequency or occurrence of different values within a dataset.
Histograms provide a way to understand the shape, central tendency, and spread of
data, making it easier to identify patterns and trends.
In the context of digital image processing, a histogram is a specific representation
that displays the frequency of each intensity level or color value within an image. It
is a fundamental tool for analyzing the tonal characteristics of an image, showing
how many pixels in the image have a particular intensity value. This information
can be used to evaluate image contrast, brightness, and overall visual quality.
A basic histogram typically consists of bars or bins that correspond to different
intensity or value ranges on the x-axis, while the frequency of pixels falling within
each range is displayed on the y-axis. By examining the shape and distribution of
the histogram, one can gain insights into the image’s characteristics, which can be
useful for image enhancement, processing, and analysis.
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Load an image and convert it to grayscale
image = cv2.imread('pic.jpeg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Compute and plot the intensity histogram (256 bins over the range 0-255)
hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
plt.plot(hist, color='black')
plt.show()
Figure: histogram of the image.
Histogram Equalization
Histogram equalization is a technique used in image processing to enhance the
contrast and dynamic range of an image. It works by redistributing the pixel
intensities in such a way that they become more uniformly distributed across the
entire available intensity range. This can result in an image with improved visual
quality and enhanced details, making it particularly useful for various computer
vision and image analysis applications.
Figure: histogram equalization (source: Wikipedia).
Histogram equalization applications
1. Medical Imaging: Histogram equalization is used to enhance medical
images, such as X-rays, MRIs, and CT scans, to make subtle details more
visible. This can aid in the detection of anomalies and improve the accuracy
of diagnoses.
2. Satellite and Remote Sensing: In satellite imagery and remote sensing,
histogram equalization can improve the visibility of features on the Earth’s
surface, which is vital for applications like land cover classification and
environmental monitoring.
3. Computer Vision: Histogram equalization is used in computer vision tasks,
such as object detection, recognition, and tracking, to enhance the contrast in
images and make objects and features more distinguishable.
4. Photography and Image Editing: In image editing software, histogram
equalization can be applied to adjust the contrast and brightness of
photographs. This can be especially useful when dealing with underexposed
or overexposed images.
5. Historical Image Restoration: For restoring old and deteriorated images,
histogram equalization can be used to enhance image quality, recover
details, and make historical documents and photos more accessible and
legible.
6. Astronomy: In astronomical image processing, histogram equalization can
bring out details in images of celestial objects, making it easier to study and
analyze distant galaxies, stars, and other astronomical phenomena.
7. Enhancing Low-Light Images: Images taken in low-light conditions can
suffer from poor visibility and high noise. Histogram equalization can help
improve the quality and reveal details in such images.
8. Ultrasound Imaging: In medical ultrasound imaging, histogram equalization
can be applied to enhance the visibility of structures within the body, aiding
in diagnosis.
9. Forensic Analysis: In forensic science, histogram equalization can be used to
enhance surveillance footage and images to better identify individuals and
objects.
10.Document Scanning and OCR: When scanning documents, histogram
equalization can enhance the text and illustrations, making it easier for
optical character recognition (OCR) software to accurately extract text.
11.Enhancing Historical and Cultural Artifacts: For preserving and studying
historical manuscripts, paintings, and artifacts, histogram equalization can
be applied to reveal faded or degraded details.
How to apply Histogram equalization
After understanding the applications of histogram equalization, let's go through the calculation formulas in detail.
Calculate the Histogram:
H(k) = \sum_{x}\sum_{y} \delta\big(I(x,y) - k\big), \qquad k = 0, 1, \dots, L-1
Here, δ is the Kronecker delta function, which is 1 if I(x,y)=k and 0 otherwise. This formula computes the histogram H(k) by counting the frequency of each intensity level k in the image.
Calculate the Cumulative Distribution Function (CDF)
C(k) = \frac{1}{N} \sum_{j=0}^{k} H(j)
Here, N is the total number of pixels in the image. This formula computes the CDF C(k) by summing up the relative frequencies of intensity levels from 0 to k.
Histogram Equalization Mapping
The equalization mapping function E(k) maps the original intensity levels to new levels, and it is given by
E(k) = \operatorname{round}\!\left( \frac{C(k) - C_{\min}}{1 - C_{\min}} \,(L - 1) \right)
Here, C_min is the minimum non-zero value of the CDF, and L is the number of possible intensity levels. This function scales the CDF values to cover the full intensity range (0 to L−1).
Apply Equalization
Finally, the equalized image I_eq(x,y) is created by applying the equalization mapping to the original image:
I_{eq}(x,y) = E\big(I(x,y)\big)
The result is an image with improved contrast due to the redistribution of pixel
intensities, which can enhance the visual quality and reveal details that might be
obscured in the original image. This technique is widely used in image processing
and computer vision for various applications, such as image enhancement and
feature extraction.
import cv2
import matplotlib.pyplot as plt
import numpy as np

# Load the image in grayscale and apply histogram equalization
image = cv2.imread('pic.jpeg', cv2.IMREAD_GRAYSCALE)
equalized_image = cv2.equalizeHist(image)

# Compute the histograms of the original and equalized images
hist_original = cv2.calcHist([image], [0], None, [256], [0, 256])
hist_equalized = cv2.calcHist([equalized_image], [0], None, [256], [0, 256])

# Plot the original and equalized images along with their histograms
plt.figure(figsize=(12, 8))
plt.subplot(2, 2, 1)
plt.title('Original Image')
plt.imshow(image, cmap='gray')
plt.subplot(2, 2, 2)
plt.title('Equalized Image')
plt.imshow(equalized_image, cmap='gray')
plt.subplot(2, 2, 3)
plt.title('Original Histogram')
plt.plot(hist_original, color='black')
plt.xlim([0, 256])
plt.subplot(2, 2, 4)
plt.title('Equalized Histogram')
plt.plot(hist_equalized, color='black')
plt.xlim([0, 256])
plt.show()
MODULE 3
Basics of Edge
Edges can be defined as the points in an image where the intensity of pixels
changes sharply. These changes often correspond to the physical boundaries of objects
within the scene.
Characteristics of Edges
1. Gradient Magnitude: The edge strength is determined by the gradient
magnitude, which measures the rate of change in intensity.
2. Gradient Direction: The direction of the edge is perpendicular to the
direction of the gradient, indicating the orientation of the boundary.
3. Localization: Edges should be well-localized, meaning they should
accurately correspond to the true boundaries in the image.
4. Noise Sensitivity: Edges can be affected by noise, making it essential to use
techniques that can distinguish between actual edges and noise.
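The gradient magnitude and direction described above can be computed with the Sobel operator; a minimal sketch (the input file name is a placeholder):
import cv2
import numpy as np

img = cv2.imread('pic.jpeg', cv2.IMREAD_GRAYSCALE)

# First-order derivatives along x and y (Sobel kernels, 64-bit output keeps the sign)
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

# Gradient magnitude (edge strength) and direction (perpendicular to the edge)
magnitude = np.sqrt(gx**2 + gy**2)
direction = np.arctan2(gy, gx)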
Types of Edge Detection
Edge detection techniques can be broadly categorized based on the method they
use to identify edges. Here are the main types:
1. Gradient-Based Methods
Sobel Operator
Roberts Cross Operator
Prewitt Operator
2. Second-Order Derivative Methods
Laplacian of Gaussian (LoG)
Difference of Gaussians (DoG)
3. Optimal Edge Detection
Canny Edge Detector
Laplacian of Gaussian (LoG)
The Laplacian of Gaussian (LoG) is a method used to detect edges in an image. It
involves smoothing the image with a Gaussian filter to reduce noise, followed by
applying the Laplacian operator to highlight regions of rapid intensity change. This
combination allows for effective edge detection while minimizing the impact of noise.
Mathematical Formulation
1. Gaussian Smoothing: The image is first smoothed using a Gaussian filter to reduce noise. The Gaussian filter is defined as:
G(x,y) = \frac{1}{2\pi\sigma^2} \, e^{-\frac{x^2 + y^2}{2\sigma^2}}
where σ is the standard deviation of the Gaussian.
2. Laplacian Operator: The Laplacian operator is then applied to the smoothed image. The Laplacian is defined as:
\nabla^2 f(x,y) = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}
3. LoG: The combined LoG operator is the result of applying the Laplacian to the Gaussian-smoothed image:
LoG(x,y) = \nabla^2 \big( G(x,y) * I(x,y) \big)
Advantages
Reduces noise through Gaussian smoothing before edge detection.
Effective at detecting edges of various orientations and scales.
Disadvantages
Computationally intensive due to the convolution operations.
Sensitive to the choice of σ (the standard deviation of the Gaussian).
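A minimal LoG sketch using OpenCV's Gaussian blur followed by the Laplacian; the file name, kernel size, and σ are illustrative.
import cv2
import numpy as np

# Placeholder input file, loaded as grayscale
img = cv2.imread('pic.jpeg', cv2.IMREAD_GRAYSCALE)

# Step 1: Gaussian smoothing to suppress noise
blurred = cv2.GaussianBlur(img, (5, 5), sigmaX=1.4)

# Step 2: Laplacian of the smoothed image (64-bit output keeps negative values)
log = cv2.Laplacian(blurred, cv2.CV_64F, ksize=3)

# Zero-crossings of 'log' correspond to edges; a simple visualization:
log_abs = cv2.convertScaleAbs(log)
cv2.imwrite('log_edges.jpg', log_abs)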
Difference of Gaussian (DoG)
The Difference of Gaussian (DoG) is an edge detection technique that
approximates the Laplacian of Gaussian by subtracting two Gaussian-blurred versions
of the image with different standard deviations. This method is simpler and faster to
compute than LoG while providing similar edge detection capabilities.
Mathematical Formulation:
1. Gaussian Smoothing: The image is smoothed using two Gaussian filters with different standard deviations, σ₁ and σ₂:
G_1(x,y) = \frac{1}{2\pi\sigma_1^2} \, e^{-\frac{x^2 + y^2}{2\sigma_1^2}}, \qquad G_2(x,y) = \frac{1}{2\pi\sigma_2^2} \, e^{-\frac{x^2 + y^2}{2\sigma_2^2}}
2. Difference of Gaussian: The DoG is computed by subtracting the two Gaussian-blurred images:
DoG(x,y) = \big( G_{\sigma_1}(x,y) - G_{\sigma_2}(x,y) \big) * I(x,y)
Advantages
Computationally more efficient than LoG.
Provides good edge detection by approximating the Laplacian of Gaussian.
Disadvantages
Less accurate than LoG due to the approximation.
Sensitive to the choice of the standard deviations (σ₁ and σ₂) for the Gaussian filters.
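A minimal DoG sketch, approximating LoG by subtracting two Gaussian-blurred images; the file name and the values of σ₁ and σ₂ are illustrative.
import cv2
import numpy as np

# Placeholder input; load as grayscale and convert to float to keep negative values
img = cv2.imread('pic.jpeg', cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Blur with two different standard deviations; ksize=(0, 0) derives the kernel from sigma
g1 = cv2.GaussianBlur(img, (0, 0), sigmaX=1.0)
g2 = cv2.GaussianBlur(img, (0, 0), sigmaX=2.0)

# Difference of Gaussians approximates the Laplacian of Gaussian
dog = g1 - g2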
Canny Edge Detector
The Canny Edge Detector is a multi-stage algorithm known for its accuracy and
robustness in detecting edges. Introduced by John Canny in 1986, this method aims to
find edges by looking for the local maxima of the gradient of the image. It optimizes the
edge detection process based on three criteria: low error rate, good localization, and
minimal response to noise.
Steps Involved:
1. Smoothing: The first step involves reducing noise in the image using a Gaussian filter:
G(x,y) = \frac{1}{2\pi\sigma^2} \, e^{-\frac{x^2 + y^2}{2\sigma^2}}
The image is convolved with this Gaussian kernel to produce a smoothed image.
2. Finding Gradients: The gradients of the smoothed image are computed using finite difference approximations, typically with the Sobel operators:
G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}
The gradient magnitude and direction are then computed as:
G = \sqrt{G_x^2 + G_y^2}, \qquad \theta = \tan^{-1}\!\left(\frac{G_y}{G_x}\right)
3. Non-Maximum Suppression: This step involves thinning the edges by
suppressing non-maximum gradient values. Only the local maxima in the
direction of the gradient are preserved, resulting in a set of thin edges.
4. Double Thresholding: Two thresholds, T_low and T_high, are applied to classify the gradient magnitudes into strong, weak, and non-relevant pixels:
Strong edges: G ≥ T_high
Weak edges: T_low ≤ G < T_high
Non-relevant pixels: G < T_low
5. Edge Tracking by Hysteresis: Weak edges connected to strong edges are
preserved, while others are discarded. This step ensures continuity and
accuracy in edge detection by linking weak edge pixels that form a
continuous line with strong edges.
Advantages
High accuracy and robustness to noise.
Good localization of edges.
Produces thin, well-defined edges.
Disadvantages
Computationally intensive due to multiple processing steps.
Sensitive to the choice of thresholds for double thresholding.
Line detectors (Hough Transform)
The Hough Transform is a widely applied algorithm in computer
vision for feature extraction. In theory, it can detect any kind of
shape, e.g. lines, circles, ellipses, etc.
Hough transform in its simplest form can be used to detect
straight lines in an image.
Algorithm
A straight line is the simplest boundary we can recognize in an image, and multiple straight lines can form a much more complex boundary. We transform the image space into Hough space: each line in image space, commonly parameterized as ρ = x cos θ + y sin θ, maps to a single point (ρ, θ) in Hough space, and collinear points in the image vote for the same (ρ, θ) cell.
Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline

# Read in the image
image = cv2.imread('images/phone.jpg')

# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
Performing Edge detection
# Convert image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

# Define our parameters for Canny
low_threshold = 50
high_threshold = 100
edges = cv2.Canny(gray, low_threshold, high_threshold)
plt.imshow(edges, cmap='gray')
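The code above produces the edge map but never defines line_image, which the final display call expects; a minimal sketch of the missing Hough transform step, continuing the same example (the parameter values below are illustrative), is:
# Hough transform parameters (illustrative values)
rho = 1                # distance resolution of the accumulator, in pixels
theta = np.pi / 180    # angular resolution of the accumulator, in radians
threshold = 60         # minimum number of accumulator votes for a line
min_line_length = 50
max_line_gap = 5

# Detect line segments in the edge map with the probabilistic Hough transform
lines = cv2.HoughLinesP(edges, rho, theta, threshold, np.array([]),
                        min_line_length, max_line_gap)

# Draw the detected segments on a copy of the original image
line_image = np.copy(image)
for line in lines:
    for x1, y1, x2, y2 in line:
        cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 5)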
plt.imshow(line_image)