RO190642 - Lab Manual - 2023
Ex. No: 1 READING AN IMAGE FROM A SOURCE FILE
Date:
AIM
To write a program that reads an image from a source file and displays it in a window using OpenCV (cv2) in Python.
REQUIREMENTS
Python 3.x
OpenCV
An image file (lena_bgr_cv.jpg) in the same directory as the script
PROCEDURE
1. Import the OpenCV library.
2. Read the image from the source file using cv2.imread().
3. Display the image in a window using cv2.imshow().
4. Wait for a key press using cv2.waitKey(0) before closing the window.
PROGRAM
import cv2
print("Package imported")

# Read the image from the source file
img = cv2.imread('lena_bgr_cv.jpg')

# Display the image and wait for a key press
cv2.imshow('OUTPUT', img)
cv2.waitKey(0)
OUTPUT
RESULT
The image specified in the cv2.imread() function will be displayed in a window created
by OpenCV. You can close the window by pressing any key on your keyboard.
Ex. No: 2 VIDEO PLAYBACK FROM A SOURCE FILE
Date:
AIM
To write a program that plays a video file using OpenCV (cv2) in Python.
REQUIREMENTS
Python 3.x
OpenCV
A video file (Boston Dynamics.mp4) in the same directory as the script
PROCEDURE
1. Import the OpenCV library.
2. Create a VideoCapture object for the video file using cv2.VideoCapture().
3. Read the video frame by frame in a loop using cap.read() and display each frame using cv2.imshow().
4. Break out of the loop when the video ends or when the "q" key is pressed.
PROGRAM
import cv2

cap = cv2.VideoCapture('Boston Dynamics.mp4')

while True:
    success, img = cap.read()
    if not success:   # stop when the video ends
        break
    cv2.imshow("video", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
OUTPUT
RESULT
The code plays the video file "Boston Dynamics.mp4" frame by frame and displays each
frame using the cv2.imshow() method. The video continues to play until it ends or until
the user presses the "q" key.
Ex. No: 3 DISPLAYING THE VIDEO CAPTURED BY A WEBCAM
Date:
AIM
To capture a video from the default camera and display it in a window using OpenCV
in Python.
REQUIREMENTS
Python 3.x
OpenCV
Webcam
PROCEDURE
1. Import the OpenCV library by adding the following line of code: "import cv2".
2. Create a VideoCapture object to capture the video from the default camera. The
argument "0" specifies that you want to capture the video from the default
camera.
3. Set the width and height of the video to 640 and 480 pixels respectively using
cap.
4. Start capturing the video in a loop using cap.read() and display it in a window
using cv2.imshow('video', img).
5. The loop will break and the video capture will stop if the "q" key is pressed.
PROGRAM
import cv2

cap = cv2.VideoCapture(0)   # 0 selects the default camera
cap.set(3, 640)             # property 3: frame width
cap.set(4, 480)             # property 4: frame height

while True:
    success, img = cap.read()
    if not success:
        break
    cv2.imshow("video", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
OUTPUT
RESULT
The code will capture the video from the default camera and display it in a window.
The video capture will stop if the "q" key is pressed.
Ex. No: 4 INCREASING THE BRIGHTNESS OF THE WEBCAM INPUT
Date:
AIM
To capture a video from the webcam and display it in real time after increasing the
brightness.
REQUIREMENTS
OpenCV library
Webcam
PROCEDURE
1. Import the OpenCV library.
2. Create a VideoCapture object for the default camera.
3. Set the frame width and height to 640 and 480 pixels using cap.set(3, 640) and cap.set(4, 480).
4. Increase the brightness by setting property 10 to 180 using cap.set(10, 180).
5. Read and display the video frame by frame in a loop; stop when the "q" key is pressed.
PROGRAM
import cv2

cap = cv2.VideoCapture(0)
cap.set(3, 640)    # frame width
cap.set(4, 480)    # frame height
cap.set(10, 180)   # property 10: brightness

while True:
    success, img = cap.read()
    if not success:
        break
    cv2.imshow('video', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
OUTPUT
RESULT
The code captures the video from the webcam and displays it in real-time. The user
can quit the video stream by pressing the 'q' key.
Ex. No: 5 IMAGE PROCESSING - DILATION
Date:
AIM
The AIM of this code is to perform a few image processing operations using OpenCV
and NumPy in Python and display the intermediate results.
REQUIREMENTS
Python 3.x
OpenCV
NumPy
An input image (lena_bgr_cv.jpg)
PROCEDURE
1. Import the OpenCV and NumPy libraries and read the input image.
2. Define a 5 x 5 kernel using np.ones().
3. Convert the image to grayscale and apply a Gaussian blur.
4. Detect edges using cv2.Canny().
5. Dilate the edge image using cv2.dilate() with the kernel.
6. Display the Canny and dilated images and wait for a key press.
PROGRAM
import cv2
import numpy as np

img = cv2.imread('lena_bgr_cv.jpg')

kernel = np.ones((5, 5), np.uint8)                 # 5 x 5 structuring element
imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # grayscale conversion
imgBlur = cv2.GaussianBlur(imgGray, (17, 17), 0)   # Gaussian blur
imgCanny = cv2.Canny(img, 150, 200)                # edge detection
imgDilation = cv2.dilate(imgCanny, kernel, iterations=1)

cv2.imshow('Canny', imgCanny)
cv2.imshow('Dilation', imgDilation)
cv2.waitKey(0)
OUTPUT
RESULT
After executing this code, the Canny and Dilation images will be displayed. These
images represent the intermediate results of the image processing operations performed
on the original image.
Ex. No: 6 IMAGE PROCESSING - EROSION
Date:
AIM
To write a program that performs a few image processing operations using OpenCV
and Numpy in Python and display the intermediate results.
REQUIREMENTS
Python 3.x
OpenCV
NumPy
An input image (lena_bgr_cv.jpg)
PROCEDURE
1. Import the OpenCV and NumPy libraries and read the input image.
2. Define a 5 x 5 kernel using np.ones().
3. Convert the image to grayscale and apply a Gaussian blur.
4. Detect edges using cv2.Canny() and dilate the result using cv2.dilate().
5. Erode the dilated image using cv2.erode() with the kernel.
6. Display the Canny and eroded images and wait for a key press.
PROGRAM
import cv2
import numpy as np

img = cv2.imread("lena_bgr_cv.jpg")

kernel = np.ones((5, 5), np.uint8)
imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
imgBlur = cv2.GaussianBlur(imgGray, (17, 17), 0)
imgCanny = cv2.Canny(img, 150, 200)
imgDilation = cv2.dilate(imgCanny, kernel, iterations=1)   # thicken the edges
imgEroded = cv2.erode(imgDilation, kernel, iterations=1)   # erode them back

cv2.imshow('Canny', imgCanny)
cv2.imshow('Eroded', imgEroded)
cv2.waitKey(0)
OUTPUT
RESULT
After executing this code, the Canny and Eroded images will be displayed. These
images represent the intermediate results of the image processing operations performed
on the original image.
Ex. No: 7 COUNTING SIMILARLY-SHAPED OBJECTS FROM THE IMAGE
Date:
AIM
To write a Python program to detect common objects in an image and draw bounding
boxes around them.
REQUIREMENTS
Python 3.x
OpenCV
NumPy
Matplotlib
cvlib
PROCEDURE
1. Import the OpenCV, NumPy, Matplotlib, and cvlib libraries.
2. Read the input image using cv2.imread().
3. Detect the common objects in the image using cvlib's detect_common_objects() function.
4. Draw bounding boxes around the detected objects and display the result.
5. Count the detected objects using the len() function.
PROGRAM
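The program listing was not captured in this copy of the manual. A minimal sketch of the intended approach, using cvlib's detect_common_objects() listed in the REQUIREMENTS, is given below; the input file name objects.jpg is an assumption.

import cv2
import matplotlib.pyplot as plt
import cvlib as cv
from cvlib.object_detection import draw_bbox

# Read the input image (file name assumed)
img = cv2.imread('objects.jpg')

# Detect common objects and draw bounding boxes around them
bbox, labels, conf = cv.detect_common_objects(img)
output = draw_bbox(img, bbox, labels, conf)

# Display the result and count the detected objects
plt.imshow(cv2.cvtColor(output, cv2.COLOR_BGR2RGB))
plt.show()
print("Number of objects detected:", len(labels))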
OUTPUT
RESULT
Thus, all the similarly-shaped objects present in the image have been highlighted and
counted.
Ex. No: 8 CLASSIFYING SIMILAR OBJECTS FROM AN IMAGE
Date:
AIM
To capture live video from the default camera (0), perform classification on the captured
video using a pre-trained Keras model and display the video with the predicted class
label and the frame rate.
REQUIREMENTS
OpenCV: Open Source Computer Vision Library for image and video processing.
cvzone: A wrapper around OpenCV to simplify computer vision tasks.
keras_model.h5: A pre-trained deep learning model saved in the Hierarchical
Data Format (HDF5) used for image classification.
labels.txt: A text file containing the class labels used in the classification task.
PROCEDURE
1. Import the OpenCV and cvzone libraries and the Classifier class.
2. Create a VideoCapture object for the default camera.
3. Load the pre-trained Keras model and the class labels into a Classifier object.
4. Read the video frame by frame, get the prediction and class index for each frame, and overlay the frame rate.
5. Display each frame along with the prediction until the 'q' key is pressed.
PROGRAM
import cvzone
import cv2
from cvzone.ClassificationModule import Classifier

cap = cv2.VideoCapture(0)
myClassifier = Classifier('keras_model.h5', 'labels.txt')   # pre-trained model and labels
fpsReader = cvzone.FPS()

while True:
    _, img = cap.read()
    predictions, index = myClassifier.getPrediction(img, scale=1)
    print(predictions, index)
    fps, img = fpsReader.update(img, pos=(450, 50))   # overlay the frame rate
    print(fps)
    cv2.imshow("image", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
OUTPUT
RESULT
The code captures live video from the default camera and displays the video with the
predicted class label and the frame rate. The predicted class label and the confidence
score are printed on the console. The user can exit the program by pressing the 'q' key.
Ex. No: 9 DETECTING ANGLES BETWEEN LINES IN IMAGES USING HOUGH TRANSFORM
Date:
AIM
To write a program to detect and measure the angle between two lines on a protractor
image using mouse clicks.
REQUIREMENTS
Python 3.x
OpenCV
NumPy
A protractor image (protractor.jpg) in the same directory as the script
PROCEDURE
1. Import the OpenCV, NumPy, and math libraries and read the protractor image.
2. Register a mouse callback that marks each clicked point on the image and draws an arrowed line from the first point to each new point.
3. After three points have been clicked, compute the slopes of the two lines through the first point and calculate the angle between them using the arctangent formula.
4. Overlay the computed angle on the image.
5. Press 'r' to reset the image and the points, or 'q' to quit.
PROGRAM
import cv2
import numpy as np
import math

points = []

def drawcircle(event, x, y, flags, params):
    if event == cv2.EVENT_LBUTTONDOWN:
        cv2.circle(img, (int(x), int(y)), 6, (255, 0, 0), -1)
        if len(points) != 0:
            cv2.arrowedLine(img, tuple(points[0]), (x, y), (255, 0, 0), 3)
        points.append([x, y])
        cv2.imshow('image', img)
        print(points)
        if len(points) == 3:
            degrees = findangle()
            print(degrees)

def findangle():
    a = points[-2]
    b = points[-3]   # vertex: the first clicked point
    c = points[-1]
    m1 = slope(b, a)
    m2 = slope(b, c)
    # angle between the two lines: atan((m2 - m1) / (1 + m1*m2))
    angle = math.atan((m2 - m1) / (1 + m1 * m2))
    angle = round(math.degrees(angle))
    if angle < 0:
        angle = 180 + angle
    cv2.putText(img, str(angle), (b[0] - 40, b[1] + 40),
                cv2.FONT_HERSHEY_DUPLEX, 1, (0, 0, 255), 1, cv2.LINE_AA)
    cv2.imshow('image', img)
    return angle

def slope(p1, p2):
    return (p2[1] - p1[1]) / (p2[0] - p1[0])

img = cv2.imread("protractor.jpg")

while True:
    cv2.imshow('image', img)
    cv2.setMouseCallback('image', drawcircle)
    if cv2.waitKey(1) & 0xff == ord('r'):   # reset the image and the points
        img = cv2.imread("protractor.jpg")
        points = []
        cv2.imshow('image', img)
        cv2.setMouseCallback('image', drawcircle)
    if cv2.waitKey(1) & 0xff == ord('q'):
        break
OUTPUT
RESULT
The code allows the user to click three points on the protractor image to calculate and
display the angle between two lines. The user can reset the image and the points by
pressing the 'r' key and exit the program by pressing the 'q' key. The protractor image
with the clicked points and the angle text is continuously displayed until the user
presses the 'q' key.
Ex. No: 10 DETECTING LINES IN IMAGES USING HOUGH TRANSFORM
Date:
AIM
To write a program that detects straight lines in an image using the Hough transform and draws them on the original image.
REQUIREMENTS
Python 3.x
OpenCV library
An image named "line-detection.png" in the same directory as the Python script.
PROCEDURE
1. Import the required libraries: The first step is to import the OpenCV and NumPy
libraries.
2. Read the input image: Read the image "line-detection.png" using the OpenCV
function cv2.imread() and store it in a variable called 'img'.
3. Convert the image to grayscale: Convert the BGR image to grayscale using the
OpenCV function cv2.cvtColor() and store it in a variable called 'gray'.
4. Apply edge detection: Apply the Canny edge detection method on the grayscale
image using the OpenCV function cv2.Canny() and store the result in a variable
called 'edges'.
5. Detect lines using the HoughLine method: Use the OpenCV function
cv2.HoughLines() to detect lines in the edge-detected image. This function
returns an array of r and theta values.
6. Draw lines on the original image: Use a for loop to iterate through each r_theta
value in the lines array. For each r_theta value, calculate the values of a, b, x0,
y0, x1, y1, x2, and y2 using the equations provided in the code. Finally, draw a
line on the original image using the OpenCV function cv2.line().
7. Display the result: Display the result image using the OpenCV function
cv2.imshow() and wait for a keyboard event using the cv2.waitKey() function.
8. Save the result: Save the result images in the current directory using the OpenCV
function cv2.imwrite().
PROGRAM
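The first part of the listing (reading the image, edge detection, and the Hough transform itself) was not captured in this copy; following the PROCEDURE above, it would look roughly like this (the Canny threshold values are assumptions):

import cv2
import numpy as np

# Read the input image and convert it to grayscale
img = cv2.imread('line-detection.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Apply Canny edge detection
edges = cv2.Canny(gray, 50, 150, apertureSize=3)

# Detect lines; each entry of 'lines' holds an (r, theta) pair
lines = cv2.HoughLines(edges, 1, np.pi / 180, 200)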
# The below for loop runs till r and theta values are in the range of the 2d array
for r_theta in lines:
    arr = np.array(r_theta[0], dtype=np.float64)
    r, theta = arr
    # Convert (r, theta) into two points on the line and draw it on the original image
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a * r
    y0 = b * r
    x1 = int(x0 + 1000 * (-b))
    y1 = int(y0 + 1000 * (a))
    x2 = int(x0 - 1000 * (-b))
    y2 = int(y0 - 1000 * (a))
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)

# All the changes made in the input image are finally written to new images
cv2.imwrite('grayDetected.jpg', gray)
cv2.imshow("result gray img", gray)
cv2.imwrite('edgeDetected.jpg', edges)
cv2.imshow("result edge img", edges)
cv2.imwrite('linesDetected.jpg', img)
cv2.imshow("Result Image", img)
cv2.waitKey(0)
OUTPUT
RESULT
Thus the lines in the image were detected using the Hough transform, drawn on the original image, and the grayscale, edge-detected, and line-detected images were saved.
Ex. No: 11 DETECTING CELLS USING IMAGE SEGMENTATION
Date:
AIM
To write a program that detects cells in an image using image segmentation, enhancing the image with histogram equalization and contrast stretching.
REQUIREMENTS
OpenCV (cv2)
NumPy
Matplotlib
PROCEDURE
1. Import the OpenCV, NumPy, and Matplotlib libraries and read the input image.
2. Convert the image to grayscale and apply histogram equalization using cv2.equalizeHist(); save the result.
3. Define a contrast stretching function that maps each input intensity level to an output intensity level using the parameters r1, s1, r2, and s2.
4. Vectorize the function with np.vectorize() and apply it to the grayscale image.
5. Save and display the input and output images.
PROGRAM
import numpy as np
import cv2
import matplotlib.pyplot as plt

# Read the input image and convert it to grayscale (the file name is assumed)
img = cv2.imread('cells.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# histogram equalization
histoNorm = cv2.equalizeHist(gray)
cv2.imwrite('histoNorm.png', histoNorm)

# contrast stretching
# Function to map each input intensity level to an output intensity level.
def pixelVal(pix, r1, s1, r2, s2):
    if 0 <= pix <= r1:
        return (s1 / r1) * pix
    elif r1 < pix <= r2:
        return ((s2 - s1) / (r2 - r1)) * (pix - r1) + s1
    else:
        return ((255 - s2) / (255 - r2)) * (pix - r2) + s2

# Define parameters.
r1 = 70
s1 = 0
r2 = 200
s2 = 255

# Vectorize the function to apply it to each value in the NumPy array.
pixelVal_vec = np.vectorize(pixelVal)
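The listing stops after vectorizing pixelVal. Applying it to the grayscale image and saving and displaying the result, as described in the RESULT, would look roughly like this (the output file name is an assumption):

# Apply contrast stretching to the grayscale image and save the result
contrastStretched = pixelVal_vec(gray, r1, s1, r2, s2)
cv2.imwrite('contrastStretch.png', contrastStretched)

# Display the input and output images
plt.imshow(gray, cmap='gray')
plt.show()
plt.imshow(contrastStretched, cmap='gray')
plt.show()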
OUTPUT
RESULT
The code saves the output images in PNG format, with names indicating the type of
processing applied. The input and output images are displayed using the
plt.imshow() function.
Ex. No: 12 TEXTURE SEGMENTATION OF AN IMAGE USING FILTERS
Date:
AIM
To identify and segment regions of an image based on their texture using MATLAB.
SOFTWARE REQUIREMENT
MATLAB with the Image Processing Toolbox
PROCEDURE
1. Read Image: Read and display a grayscale image of textured patterns on a bag.
2. Create Texture Images: Filter the image with entropyfilt, stdfilt, and rangefilt to create texture images showing the local entropy, local standard deviation, and local range, and rescale them for display.
3. Create a Mask for the Bottom Texture: Threshold the rescaled image Eim to segment the textures. A threshold value of 0.8 is selected because it is roughly the intensity value of pixels along the boundary between the textures. Remove the objects in the top texture by using bwareaopen. Use imclose to smooth the edges and imfill to fill holes in the object in closeBWao.
4. Use the Mask to Segment the Textures: Separate the top and bottom textures using the mask, then display the labelled segmentation regions and the boundary between the textures.
PROGRAM
I = imread('bag.png');
imshow(I)
title('Original Image')

% Create texture images using entropy, standard deviation, and range filters
E = entropyfilt(I);
S = stdfilt(I,ones(9));
R = rangefilt(I,ones(9));
Eim = rescale(E);
Sim = rescale(S);
Rim = rescale(R);
montage({Eim,Sim,Rim},'Size',[1 3],'BackgroundColor','w',"BorderSize",20)
title('Texture Images Showing Local Entropy, Local Standard Deviation, and Local Range')

% Threshold the entropy image and clean it up to create a mask for the bottom texture
BW1 = imbinarize(Eim,0.8);
imshow(BW1)
title('Thresholded Texture Image')
BWao = bwareaopen(BW1,2000);
imshow(BWao)
title('Area-Opened Texture Image')
nhood = ones(9);
closeBWao = imclose(BWao,nhood);
imshow(closeBWao)
title('Closed Texture Image')
mask = imfill(closeBWao,'holes');
imshow(mask)
title('Mask of Bottom Texture')

% Use the mask to separate the top and bottom textures
textureTop = I;
textureTop(mask) = 0;
textureBottom = I;
textureBottom(~mask) = 0;
montage({textureTop,textureBottom},'Size',[1 2],'BackgroundColor','w',"BorderSize",20)
title('Segmented Top Texture (Left) and Segmented Bottom Texture (Right)')

% Display the labelled regions and the boundary between the textures
L = mask+1;
imshow(labeloverlay(I,L))
title('Labeled Segmentation Regions')
boundary = bwperim(mask);
imshow(labeloverlay(I,boundary,"Colormap",[0 1 1]))
title('Boundary Between Textures')
OUTPUT
RESULT
Thus the identification and segmentation of regions based on their texture using
MATLAB was done.
Ex. No: 13 COLOUR-BASED SEGMENTATION USING K-MEANS CLUSTERING
Date:
AIM
To segment an image based on colour using k-means clustering and display the original and segmented images.
REQUIREMENTS
OpenCV (cv2)
NumPy
Matplotlib
PROCEDURE
1. The image is read using the cv2.imread() function, which returns an image in
the BGR color space.
2. The cv2.cvtColor() function is used to convert the image from BGR to RGB
color space, as the imshow() function in Matplotlib expects RGB images.
3. The image is displayed using the imshow() function from Matplotlib and the
cv2.imshow() function from OpenCV, which displays the image in a separate
window.
4. The image is reshaped into a 2D array of pixels and 3 color values (RGB)
using the reshape() function from NumPy.
5. The pixel values are converted to float type using the np.float32() function.
6. The criteria for the k-means algorithm to stop running is defined using the
cv2.TERM_CRITERIA_EPS and cv2.TERM_CRITERIA_MAX_ITER
flags.
7. The k-means clustering algorithm is applied using the cv2.kmeans() function,
with the number of clusters set to 3 and random centers initially chosen for
clustering.
8. The resulting centers are converted to 8-bit values using the np.uint8()
function.
9. The segmented data is reshaped into the original image dimensions using the
reshape() function.
10. The segmented image is displayed using the imshow() function from
Matplotlib and the cv2.imshow() function from OpenCV.
PROGRAM
import numpy as np
import matplotlib.pyplot as plt
import cv2

# Read the image and convert it from BGR to RGB (the input file name is assumed)
image = cv2.imread('image.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Reshaping the image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = np.float32(image.reshape((-1, 3)))

# Stopping criteria: stop after 100 iterations or when the accuracy epsilon is reached
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.85)

k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10,
                                     cv2.KMEANS_RANDOM_CENTERS)
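The remaining steps of the PROCEDURE (converting the centres to 8-bit values, reshaping the segmented data back to the image dimensions, and displaying the result) were not captured in the listing; a minimal sketch is:

# Convert the cluster centres to 8-bit values and rebuild the segmented image
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
segmented_image = segmented_data.reshape(image.shape)

# Display the original and segmented images
plt.imshow(image)
plt.show()
plt.imshow(segmented_image)
plt.show()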
OUTPUT
K=3
K=6
RESULT
The code segments the input image into 3 clusters based on color, producing a
segmented image. The original and segmented images are displayed using Matplotlib
and OpenCV, respectively.
Ex. No: 14 LINE FOLLOWER ROBOT CONTROL
Date:
ABOUT
A line follower robot is a robot which follows a certain path, controlled by a feedback
mechanism.
REQUIREMENTS
1. Arduino Uno R3 (ATmega328P) board x 1
2. Motor driver shield (L293D) x 1
3. IR sensors x 2
4. Gear motor single shaft x 4
5. Robot wheels x 4
6. Acrylic sheet x 1
7. Lithium-ion battery (18650) x 2
8. Battery holder x 1
9. Jumper wires (male-to-female)
PROCEDURE
1. First, we took an acrylic sheet as the chassis (base) of the robot.
2. Then, we attached all four gear motors with wheels on the bottom of the acrylic sheet.
3. Then, we attached the Arduino Uno on the top of the acrylic sheet.
4. Then, we mounted the motor driver shield (L293D) on top of the Arduino Uno.
5. Then, we connected the pins of all four gear motors to the motor driver as per the
connection diagram.
6. Then, we took two IR sensors and attached them at the front of the chassis with some
distance between them, matched to the width of the line path. One sensor is for left-side
detection and the other is for right-side detection.
7. We connected the VCC pins to 5 V and the ground pins to ground.
8. We connected both IR sensor signal pins to the motor driver shield as per the circuit
/connection diagram.
9. Finally, we connected the lithium-ion batteries (18650) to the motor driver and placed
the battery holder on the chassis of the line follower robot.
10. Then, we uploaded the Arduino program of our project into the Arduino Uno R3
using a USB cable.
PROGRAM
//include the Adafruit Motor Shield library (needed for AF_DCMotor)
#include <AFMotor.h>

//IR sensor pins (pin numbers assumed; set them to match your wiring)
#define left A0
#define right A1

//defining motors
AF_DCMotor motor1(1, MOTOR12_1KHZ);
AF_DCMotor motor2(2, MOTOR12_1KHZ);
AF_DCMotor motor3(3, MOTOR34_1KHZ);
AF_DCMotor motor4(4, MOTOR34_1KHZ);

void setup() {
  //declaring pin types
  pinMode(left, INPUT);
  pinMode(right, INPUT);
  //begin serial communication
  Serial.begin(9600);
}

void loop() {
  //printing values of the sensors to the serial monitor
  Serial.println(digitalRead(left));
  Serial.println(digitalRead(right));

  //line detected by both sensors
  if (digitalRead(left) == 0 && digitalRead(right) == 0) {
    //Forward
    motor1.run(FORWARD);
    motor1.setSpeed(150);
    motor2.run(FORWARD);
    motor2.setSpeed(150);
    motor3.run(FORWARD);
    motor3.setSpeed(150);
    motor4.run(FORWARD);
    motor4.setSpeed(150);
  }
  //line detected by left sensor only (turn-left block assumed, mirroring the turn-right block below)
  else if (digitalRead(left) == 0 && !digitalRead(right) == 0) {
    //turn left
    motor1.run(FORWARD);
    motor1.setSpeed(200);
    motor2.run(FORWARD);
    motor2.setSpeed(200);
    motor3.run(BACKWARD);
    motor3.setSpeed(200);
    motor4.run(BACKWARD);
    motor4.setSpeed(200);
  }
  //line detected by right sensor only
  else if (!digitalRead(left) == 0 && digitalRead(right) == 0) {
    //turn right
    motor1.run(BACKWARD);
    motor1.setSpeed(200);
    motor2.run(BACKWARD);
    motor2.setSpeed(200);
    motor3.run(FORWARD);
    motor3.setSpeed(200);
    motor4.run(FORWARD);
    motor4.setSpeed(200);
  }
  //line detected by neither sensor
  else if (!digitalRead(left) == 0 && !digitalRead(right) == 0) {
    //stop
    motor1.run(RELEASE);
    motor1.setSpeed(0);
    motor2.run(RELEASE);
    motor2.setSpeed(0);
    motor3.run(RELEASE);
    motor3.setSpeed(0);
    motor4.run(RELEASE);
    motor4.setSpeed(0);
  }
}
ROBOT
RESULT
Thus the control of a line follower robot was carried out successfully.
Ex. No: 15 STUDY OF NAVIGATION CONTROL OF MOBILE ROBOTS USING NEURAL NETWORK
Date:
ABOUT
Navigation control of mobile robots using neural network algorithms is an exciting area
of research and development in robotics. Neural networks are a type of artificial
intelligence algorithm inspired by the way the human brain processes information.
In the context of mobile robots, neural networks can be used to analyze sensor data and
make decisions about how to navigate in complex environments. This can be
particularly useful for robots operating in dynamic and unpredictable environments,
such as warehouses, hospitals, and factories.
REQUIREMENTS
Python 3.X
NumPy
TensorFlow
PROCEDURE
3. Compile the model:
model.compile(loss='mean_squared_error', optimizer='adam')
4. Train the model on the collected sensor data, then use the trained model to predict
movement from new sensor readings and send the resulting movement commands to the
robot (a sketch of these steps is given after the program listing).
PROGRAM
import numpy as np
import tensorflow as tf
# Generate training data
train_X = np.random.randn(10000, 10)
train_y = np.random.randn(10000, 2)
test_X = np.random.randn(1000, 10)
test_y = np.random.randn(1000, 2)
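The listing above stops after generating the synthetic data. A minimal sketch of the remaining steps (defining, compiling, training, and using the model) is given below; the layer sizes and the two-value output (for example, linear and angular velocity) are assumptions, not part of the original listing.

# Define a small fully connected network: 10 sensor inputs -> 2 movement outputs
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(2)   # assumed outputs, e.g. linear and angular velocity
])

# Compile and train the model
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(train_X, train_y, epochs=10, batch_size=32,
          validation_data=(test_X, test_y))

# Use the trained model to predict a movement command from new sensor data
sensor_data = np.random.randn(1, 10)
movement = model.predict(sensor_data)
print("Predicted movement command:", movement)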
RESULT
Thus the Navigation control of mobile robots using neural network algorithms was
performed.
Ex. No: 16 STUDY OF NAVIGATION CONTROL OF MOBILE ROBOTS USING FUZZY LOGIC ALGORITHM
Date:
AIM
To study the navigation control of mobile robots using a fuzzy logic algorithm.
REQUIREMENTS
Python 3.X
NumPy
scikit-fuzzy
PROCEDURE
3. Collect training data: Use the sensors to collect data about the environment,
such as images from the camera, distance measurements from the ultrasonic
sensors, and other relevant data. Use this data to train the neural network
algorithm.
4. Define input variables and membership functions: Define the input variables for
the fuzzy logic algorithm based on the sensor data. For example, if the robot is
using ultrasonic sensors to detect obstacles, the input variables could be
distance and angle. Define the membership functions for each input variable to
convert the sensor data into fuzzy sets.
5. Define output variables and membership functions: Define the output variables
for the fuzzy logic algorithm, which will be the control commands sent to the
robot. The output variables could be speed and direction. Define the
membership functions for each output variable.
6. Define rules: Define the rules for the fuzzy logic algorithm, which will dictate
how the input variables should be combined to produce the output variables.
For example, if the distance to an obstacle is close and the angle is small, the
speed should be reduced and the direction should be changed.
7. Design and train the neural network: Using TensorFlow, design a neural network
that takes in sensor data as input and produces control commands as output.
Train the network using the training data collected in the previous step.
8. Deploy the neural network: Once the neural network is trained, deploy it on the
Raspberry Pi 4. This can be done by writing a Python script that reads sensor
data, feeds it to the neural network, and sends the resulting control commands
to the mobile robot.
9. Test the navigation control: Test the navigation control system by running the
script and observing the behavior of the mobile robot. You may need to adjust
the neural network architecture and training data to improve the system's
performance.
10. Take appropriate safety measures while testing the navigation control system
to prevent any accidents or damage to the robot or surrounding environment.
PROGRAM
import numpy as np
import skfuzzy as fuzz
rule3 = fuzz.Rule(distance_low & angle_pos, direction_right & speed_slow)
rule4 = fuzz.Rule(distance_med & angle_neg, direction_left & speed_fast)
rule5 = fuzz.Rule(distance_med & angle_zero, direction_straight & speed_fast)
rule6 = fuzz.Rule(distance_med & angle_pos, direction_right & speed_fast)
rule7 = fuzz.Rule(distance_high & angle_neg, direction_left & speed_fast)
rule8 = fuzz.Rule(distance_high & angle_zero, direction_straight & speed_fast)
rule9 = fuzz.Rule(distance_high & angle_pos, direction_right & speed_fast)
print("Direction:", direction_output)
print("Speed:", speed_output)
RESULT
The output of this code is the direction and speed values that the control system
recommends based on the input values provided. The direction value represents the
degree of turn required, with negative values indicating a left turn, zero indicating a
straight path, and positive values indicating a right turn. The speed value represents the
recommended speed of the vehicle, with lower values indicating slower speeds and
higher values indicating faster speeds.
Ex. No: 17 IMPLEMENTING SLAM IN RASPBERRY PI MOBILE ROBOT
Date:
AIM
To implement SLAM (Simultaneous Localization and Mapping) on a Raspberry Pi based mobile robot and use the generated map for autonomous navigation.
PROCEDURE
1 RUNNING TELEOP
1. Open a new terminal using:
Ctrl+Alt+t
5. Start the node that reads the sensor data and updates odom:
roslaunch tortoisebot_firmware server_bringup.launch
7. Start teleop:
rosrun tortoisebot_control tortoisebot_teleop_key.py
2 RUNNING SLAM
1. Follow the same steps as Running Teleop, after which follow the below-mentioned steps:
2. Open a new terminal:
Ctrl+Alt+t
3. Start SLAM:
Note: Note down the starting position of the robot before you start mapping, as you must
start at this exact position while running Autonomous Navigation.
roslaunch tortoisebot_slam tortoisebot_slam.launch
4. After you are done mapping, save the map by running:
roslaunch tortoisebot_slam map_saver.launch map_name:=anything
(Eg: roslaunch tortoisebot_slam map_saver.launch map_name:=ialab)
3 RUNNING AUTONOMOUS NAVIGATION
3. Now in the opened RViz window use the "2D Nav Goal" button and mark the goal
location and orientation on the map.
TELEOP SAMPLE IMAGE
AUTONOMOUS NAVIGATION SAMPLE IMAGE
RESULT
Thus teleoperation, SLAM-based mapping, and autonomous navigation were carried out on the Raspberry Pi based mobile robot.
Ex. No: 18 CIRCLE CONTOUR DETECTION
Date:
ABOUT
Contour detection finds the boundaries of objects in an image. In this exercise, the circular contours of the coins in an image are detected, drawn on the image, and counted.
REQUIREMENTS
Python 3
OpenCV library
NumPy
Matplotlib library
Input image
PROCEDURE
9. Dilate the detected edges using the cv2.dilate() function with a kernel size of (1,
1) and one iteration. Store the dilated image in the dilated variable.
10. Display the dilated image using the plt.imshow() function with the cmap='gray'
argument.
11. Find the contours in the dilated image using the cv2.findContours() function with
the cv2.RETR_EXTERNAL and cv2.CHAIN_APPROX_NONE flags. Store the
contours and hierarchy in the contours and hierarchy variables, respectively.
12. Convert the input image from BGR to RGB using the cv2.cvtColor() function
and store it in the rgb variable.
13. Draw the contours on the RGB image using the cv2.drawContours() function
with a green color and a line thickness of 2.
14. Display the final image using the plt.imshow() function.
15. Print the number of coins found in the image using the len() function on the
contours variable.
PROGRAM
# Import libraries
import cv2
import numpy as np
import matplotlib.pyplot as plt
image = cv2.imread('bitss.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
plt.imshow(gray, cmap='gray')
#plt.show()
blur = cv2.GaussianBlur(gray, (11, 11), 0)
plt.imshow(blur, cmap='gray')
#plt.show()
canny = cv2.Canny(blur, 30, 150, 3)
plt.imshow(canny, cmap='gray')
#plt.show()
dilated = cv2.dilate(canny, (1, 1), iterations=1)   # one iteration, as in the procedure
plt.imshow(dilated, cmap='gray')
#plt.show()
(cnt, hierarchy) = cv2.findContours(
    dilated.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
cv2.drawContours(rgb, cnt, -1, (0, 255, 0), 2)
plt.imshow(rgb)
#plt.show()
print("coins in the image : ", len(cnt))
OUTPUT
RESULT
Thus the detection of circular contours has been successfully carried out in the image.