RO190642 - Lab Manual - 2023

CONTENTS

1. Reading an Image from a Source File
2. Video Playback from a Source File
3. Displaying the Video Captured by a Webcam
4. Increasing the Brightness of the Webcam Input
5. Image Processing – Dilation
6. Image Processing – Erosion
7. Counting Similarly-Shaped Objects from an Image
8. Classifying Similar Objects from an Image
9. Detecting Angles between Lines in Images using Hough Transform
10. Detecting Lines in Images using Hough Transform
11. Detecting Cells using Image Segmentation
12. Texture Segmentation of an Image using Filters
13. Colour-Based Segmentation using K-Means Clustering
14. Line Follower Robot Control
15. Study of Navigation Control of Mobile Robots using Neural Network
16. Study of Navigation Control of Mobile Robots using Fuzzy Logic Algorithm
17. Implementing SLAM in Raspberry Pi Mobile Robot
18. Circle Contour Detection
Ex. No: 1 READING AN IMAGE FROM A SOURCE FILE
Date:

AIM

To write a program that displays an image using OpenCV in PyCharm (Community


Edition).

REQUIREMENTS

 OpenCV (cv2) library must be installed.


 An image file must be present.

PROCEDURE

1. Import the cv2 library.


2. Use the cv2.imread() function to read an image file. The first argument is the path
to the image file. The second argument specifies how to read the image. In this
case, cv2.IMREAD_UNCHANGED means that the image is loaded with its
original color channels.
3. Use the cv2.imshow() function to display the image. The first argument is the
window name, and the second argument is the image to display.
4. Use the cv2.waitKey(0) function to wait for a keyboard event. The argument is
the time in milliseconds to wait. A value of 0 means waiting indefinitely.

PROGRAM

import cv2
print("Package imported")
img = cv2.imread('lena_bgr_cv.jpg', cv2.IMREAD_UNCHANGED)
cv2.imshow('OUTPUT', img)
cv2.waitKey(0)
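Note that cv2.imread() does not raise an error on a missing or unreadable file; it simply returns None, and the later cv2.imshow() call then fails. A minimal defensive sketch (using the same file name as above and the standard OpenCV read flags) is:

import cv2

# IMREAD_GRAYSCALE and IMREAD_COLOR are alternatives to IMREAD_UNCHANGED
img = cv2.imread('lena_bgr_cv.jpg', cv2.IMREAD_GRAYSCALE)

# imread() silently returns None on a bad path, so check before displaying
if img is None:
    print("Could not read the image - check the file path")
else:
    cv2.imshow('OUTPUT', img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()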

OUTPUT

RESULT

The image specified in the cv2.imread() function will be displayed in a window created
by OpenCV. You can close the window by pressing any key on your keyboard.
Ex. No: 2 VIDEO PLAYBACK FROM A SOURCE FILE
Date:

AIM

To write a program that plays a video file using OpenCV (cv2) in Python.

REQUIREMENTS

 OpenCV (cv2) library must be installed.


 A video file must be present with the name "Boston Dynamics.mp4".

PROCEDURE

1. Import the OpenCV library using "import cv2".


2. Use the "cv2.VideoCapture" method to capture the video file and store it in the
"cap" variable.
3. Start a while loop that continuously reads the video frame by frame and displays
each frame.
4. The loop breaks when the user presses the "q" key (as specified by the "if"
statement).

PROGRAM

import cv2

cap = cv2.VideoCapture('Boston Dynamics.mp4')
while True:
    success, img = cap.read()
    cv2.imshow("video", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
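The loop above assumes every cap.read() call succeeds; when the file ends, img becomes None and cv2.imshow() raises an error. A slightly more defensive sketch of the same playback loop (same file name, with the capture released at the end) could be:

import cv2

cap = cv2.VideoCapture('Boston Dynamics.mp4')
while True:
    success, img = cap.read()
    if not success:   # end of file or read error
        break
    cv2.imshow("video", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()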

OUTPUT

RESULT

The code will play the video file "Boston Dynamics.mp4" frame by frame and
display each frame using the "cv2.imshow()" method. The video will continue to play
until the user presses the "q" key.

Ex. No: 3 DISPLAYING THE VIDEO CAPTURED BY A WEBCAM
Date:

AIM

To capture a video from the default camera and display it in a window using OpenCV
in Python.

REQUIREMENTS

 OpenCV (cv2) library must be installed.

PROCEDURE

1. Import the OpenCV library by adding the following line of code: "import cv2".
2. Create a VideoCapture object to capture the video from the default camera. The
argument "0" specifies that you want to capture the video from the default
camera.
3. Set the width and height of the video to 640 and 480 pixels respectively using
cap.set().
4. Start capturing the video in a loop using cap.read() and display it in a window
using cv2.imshow('video', img).
5. The loop will break and the video capture will stop if the "q" key is pressed.

PROGRAM

import cv2

cap = cv2.VideoCapture(0)
cap.set(3, 640)
cap.set(4, 480)
while True:
    success, img = cap.read()
    cv2.imshow("video", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
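The numbers 3 and 4 passed to cap.set() are the numeric ids of the frame-width and frame-height properties. The same two calls can be written with OpenCV's named constants, which is easier to read:

import cv2

cap = cv2.VideoCapture(0)
# equivalent to cap.set(3, 640) and cap.set(4, 480)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)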

OUTPUT

RESULT

The code will capture the video from the default camera and display it in a window.
The video capture will stop if the "q" key is pressed.
Ex. No: 4 INCREASING THE BRIGHTNESS OF THE WEBCAM INPUT
Date:

AIM

To capture a video from the webcam and display it in real-time by increasing the
brightness.

REQUIREMENTS

 OpenCV library
 Webcam

PROCEDURE

1. Import the necessary library - cv2.


2. Create a VideoCapture object and specify the device index (0 for the default
webcam).
3. Set the resolution of the captured video using the set() method.
4. Start a loop to capture the video frames in real-time using the read() method of
the VideoCapture object.
5. Display the captured frames using the imshow() method of the cv2 library.
6. Check for any key press events and break the loop if the 'q' key is pressed.
7. Release the VideoCapture object and destroy all windows using the release() and
destroyAllWindows() methods respectively.

PROGRAM

import cv2

cap = cv2.VideoCapture(0)
cap.set(3, 640)
cap.set(4, 480)
cap.set(10, 180)
while True:
    success, img = cap.read()
    cv2.imshow('video', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
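Property id 10 corresponds to cv2.CAP_PROP_BRIGHTNESS, so cap.set(10, 180) asks the camera driver for a higher brightness setting; not every webcam driver honours this request. A hedged alternative is to brighten each frame in software with cv2.convertScaleAbs; the beta offset of 60 below is only an illustrative value:

import cv2

cap = cv2.VideoCapture(0)
# the same request written with the named constant (driver support varies)
cap.set(cv2.CAP_PROP_BRIGHTNESS, 180)

while True:
    success, img = cap.read()
    if not success:
        break
    # software fallback: add a constant offset to every pixel
    bright = cv2.convertScaleAbs(img, alpha=1.0, beta=60)
    cv2.imshow('video', bright)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()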

OUTPUT

RESULT

The code captures the video from the webcam, applies the requested brightness setting,
and displays it in real-time. The user can quit the video stream by pressing the 'q' key.

Ex. No: 5
IMAGE PROCESSING - DILATION
Date:

AIM

To perform a few image processing operations (Canny edge detection and dilation) using
OpenCV and NumPy in Python and display the intermediate results.

REQUIREMENTS

 OpenCV (cv2) library must be installed.


 NumPy library must be installed.
 Image File for dilation is required.

PROCEDURE

1. Import the OpenCV and Numpy libraries.


2. Read an image file using OpenCV's imread function.
3. The image file is located at the specified path.
4. Create a 5x5 rectangular structuring element using NumPy's ones function.
5. Convert the image to grayscale using OpenCV's cvtColor function.
6. Blur the grayscale image using OpenCV's GaussianBlur function.
7. Apply the Canny edge detection algorithm using OpenCV's Canny function to detect
the edges in the image.
8. Dilate the edge-detected image using OpenCV's dilate function. This operation
increases the thickness of the edges in the image.
9. Display the intermediate results (Canny and Dilation images) using OpenCV's
imshow function.
10. Wait for a key event using OpenCV's waitKey function.

PROGRAM

import cv2
import numpy as np

img = cv2.imread('lena_bgr_cv.jpg')
kernel = np.ones((5, 5), np.uint8)
imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
imgBlur = cv2.GaussianBlur(imgGray, (17, 17), 0)
imgCanny = cv2.Canny(img, 150, 200)
imgDilation = cv2.dilate(imgCanny, kernel, iterations=1)
cv2.imshow('Canny', imgCanny)
cv2.imshow('Dilation', imgDilation)
cv2.waitKey(0)
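The effect of dilation with the 5x5 kernel can be seen on a tiny binary array: every white pixel grows to cover its 5x5 neighbourhood, which is exactly what thickens the Canny edges above. A small illustrative sketch:

import cv2
import numpy as np

kernel = np.ones((5, 5), np.uint8)

# a 9x9 black image with a single white pixel in the centre
tiny = np.zeros((9, 9), np.uint8)
tiny[4, 4] = 255

dilated = cv2.dilate(tiny, kernel, iterations=1)
# the single white pixel has grown into a 5x5 white square
print(np.count_nonzero(tiny), "white pixel before,",
      np.count_nonzero(dilated), "white pixels after dilation")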

OUTPUT

RESULT

After executing this code, the Canny and Dilation images will be displayed. These
images represent the intermediate results of the image processing operations performed
on the original image.

Ex. No: 6
IMAGE PROCESSING - EROSION
Date:

AIM

To write a program that performs a few image processing operations using OpenCV
and Numpy in Python and display the intermediate results.

REQUIREMENTS

 OpenCV (cv2) library must be installed.


 NumPy library must be installed.
 Image File for erosion is required.

PROCEDURE

1. Import the OpenCV and Numpy libraries.


2. Read an image file using OpenCV's imread function.
3. The image file is located at the specified path.
4. Create a 5x5 rectangular structuring element using Numpy's ones function.
5. Convert the image to grayscale using OpenCV's cvtColor function.
6. Blur the grayscale image using OpenCV's GaussianBlur function.
7. This function uses a Gaussian filter to reduce the noise in the image.
8. Apply the Canny edge detection algorithm to the grayscale image using
OpenCV's Canny function to detect the edges in the image.
9. Erode the image using OpenCV's erode function.
10. This operation reduces the size of the objects in the image.
11. Display the intermediate results (Canny and Eroded images) using OpenCV's
imshow function.
12. Wait for a key event using OpenCV's waitKey function.

PROGRAM

import cv2
import numpy as np
img = cv2.imread("lena_bgr_cv.jpg")
kernel = np.ones((5, 5), np.uint8)
imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
imgBlur = cv2.GaussianBlur(imgGray, (17, 17), 0)
imgCanny = cv2.Canny(img, 150, 200)
imgDilation = cv2.dilate(imgCanny, kernel, iterations=1)
imgEroded=cv2.erode(imgDilation, kernel, iterations=1)
cv2.imshow('Canny', imgCanny)
cv2.imshow('Eroded', imgEroded)
cv2.waitKey(0)
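Dilating and then eroding with the same kernel, as done above, is the standard morphological closing operation: small gaps in the edges are filled while the edge thickness returns roughly to its original value. A minimal sketch (assuming the same input image) showing that cv2.morphologyEx with MORPH_CLOSE gives the same result as the two separate calls:

import cv2
import numpy as np

img = cv2.imread("lena_bgr_cv.jpg")
kernel = np.ones((5, 5), np.uint8)
imgCanny = cv2.Canny(img, 150, 200)

# dilate then erode ...
closedManually = cv2.erode(cv2.dilate(imgCanny, kernel, iterations=1),
                           kernel, iterations=1)
# ... is the same operation as a morphological closing
closedDirect = cv2.morphologyEx(imgCanny, cv2.MORPH_CLOSE, kernel)

print("identical:", np.array_equal(closedManually, closedDirect))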

OUTPUT

RESULT

After executing this code, the Canny and Eroded images will be displayed. These
images represent the intermediate results of the image processing operations performed
on the original image.

Ex. No: 7 COUNTING SIMILARLY-SHAPED OBJECTS FROM THE IMAGE
Date:

AIM
To write a Python program to detect common objects in an image and draw bounding
boxes around them.

REQUIREMENTS
 Python 3.x
 OpenCV
 NumPy
 Matplotlib
 cvlib

PROCEDURE

1. Import necessary libraries (cv2, numpy, matplotlib, cvlib).


2. Read the input image using the cv2.imread function and store it in a variable
‘img’.
3. Convert the image from BGR to RGB using cv2.cvtColor function and store the
result in a variable ‘img1’.
4. Display the original image using matplotlib’s imshow function.
5. Use cvlib's detect_common_objects function to detect objects in the image and
store the bounding boxes, labels, and confidence scores in the variables 'box',
'label', and 'conf' respectively.
6. Use cvlib’s draw_bbox function to draw bounding boxes around the detected
objects in the image and store the result in a variable ‘output’.
7. Convert the output image from BGR to RGB using cv2.cvtColor function.
8. Display the output image using matplotlib’s imshow function.
9. Print the number of objects detected in the image using the len function.

PROGRAM

# import required libraries to count objects in an image


import cv2
import numpy as np
import matplotlib.pyplot as plt
import cvlib as cv
from cvlib.object_detection import draw_bbox

# loading and viewing the image


img= cv2.imread('dog.jpg')
img1= cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(10, 10))
plt.axis("off")
plt.imshow(img1)
plt.show()
box, label, conf = cv.detect_common_objects(img)
output = draw_bbox(img, box, label, conf)

# creating boxes around various objects


output = cv2.cvtColor(output, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(10, 10))
plt.axis("off")
plt.imshow(output)
plt.show()

# count objects in the images


print("Number of objects in this image are " + str(len(label)))

OUTPUT

RESULT

Thus, all the similarly-shaped objects present in an image have been highlighted and
then counted.

Ex. No: 8 CLASSIFYING SIMILAR OBJECTS FROM AN IMAGE
Date:

AIM

To capture live video from the default camera (0), perform classification on the captured
video using a pre-trained Keras model and display the video with the predicted class
label and the frame rate.

REQUIREMENTS

 OpenCV: Open Source Computer Vision Library for image and video processing.
 cvzone: A wrapper around OpenCV to simplify computer vision tasks.
 keras_model.h5: A pre-trained deep learning model saved in the Hierarchical
Data Format (HDF5) used for image classification.
 labels.txt: A text file containing the class labels used in the classification task.

PROCEDURE

1. Import the required libraries: cvzone, cv2, and Classifier from


cvzone.ClassificationModule.
2. Initialize a VideoCapture object to capture live video from the default camera
(0).
3. Instantiate a Classifier object with the pre-trained Keras model (keras_model.h5)
and class labels (labels.txt) as arguments.
4. Initialize an FPS object from the cvzone module to measure the frame rate of the
captured video.
5. Enter into an infinite loop where each iteration captures a frame from the video,
passes it through the Classifier object to obtain the predicted class label and the
confidence score, and displays the frame with the predicted label and the frame
rate.
6. Display the output video using the imshow function from OpenCV.
7. Wait for 1 millisecond to allow for user input from the keyboard.
8. Exit the loop when the user presses any key.
PROGRAM

import cvzone
import cv2
from cvzone.ClassificationModule import Classifier

cap = cv2.VideoCapture(0)
myClassifier = Classifier('keras_model.h5', 'labels.txt')
fpsReader = cvzone.FPS()
while True:
    _, img = cap.read()
    predictions, index = myClassifier.getPrediction(img, scale=1)
    print(predictions, index)
    fps, img = fpsReader.update(img, pos=(450, 50))
    print(fps)
    cv2.imshow("image", img)
    if cv2.waitKey(1) != -1:   # exit when any key is pressed
        break
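getPrediction() returns the list of class scores and the index of the best-scoring class; the human-readable name comes from labels.txt. A small self-contained sketch of that lookup, assuming labels.txt holds one class label per line in index order (the format exported by Teachable Machine):

import cv2
from cvzone.ClassificationModule import Classifier

# read the class labels once (one label per line, in index order)
with open('labels.txt') as f:
    class_names = [line.strip() for line in f if line.strip()]

cap = cv2.VideoCapture(0)
myClassifier = Classifier('keras_model.h5', 'labels.txt')
success, img = cap.read()
predictions, index = myClassifier.getPrediction(img, scale=1)
print("Predicted:", class_names[index], "confidence:", max(predictions))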

OUTPUT

RESULT

The code captures live video from the default camera and displays the video with the
predicted class label and the frame rate. The predicted class label and the confidence
score are printed on the console. The user can exit the program by pressing any key
on the keyboard.

Ex. No: 9 DETECTING ANGLES BETWEEN LINES IN IMAGES USING HOUGH TRANSFORM
Date:

AIM

To write a program to detect and measure the angle between two lines on a protractor
image using mouse clicks.

REQUIREMENTS

 OpenCV: Open Source Computer Vision Library for image processing.


 numpy: A Python library for mathematical operations on arrays and matrices.
 math: A Python library for mathematical functions.

PROCEDURE

1. Import the required libraries: cv2, numpy, and math.


2. Define an empty list to store the clicked points.
3. Define a mouse callback function drawcircle that draws a circle at the clicked
point and an arrow between the last two clicked points.
4. Append the clicked points to the list points and display the updated image with
the clicked points and the arrow.
5. Call the findangle function when three points have been clicked, which calculates
the angle between the last two clicked points and the latest clicked point and
returns the angle in degrees.
6. Draw the angle text on the image and display the updated image.
7. Define the function slope that calculates the slope of a line between two points.
8. Load an image of a protractor using imread function of OpenCV.
9. Enter into an infinite loop where each iteration displays the protractor image with
the clicked points and waits for user input.
10. If the user presses the 'r' key, reset the image and the points list.
11. If the user presses the 'q' key, exit the infinite loop.
12. Release the OpenCV window and close all windows.

PROGRAM

import cv2
import numpy as np
import math

points = []

def drawcircle(event, x, y, flags, params):
    if event == cv2.EVENT_LBUTTONDOWN:
        cv2.circle(img, (int(x), int(y)), 6, (255, 0, 0), -1)
        if len(points) != 0:
            # draw an arrow from the first clicked point (the vertex) to the new point
            cv2.arrowedLine(img, tuple(points[0]), (x, y), (255, 0, 0), 3)
        points.append([x, y])
        cv2.imshow('image', img)
        print(points)
        if len(points) == 3:
            degrees = findangle()
            print(degrees)

def findangle():
    a = points[-2]
    b = points[-3]
    c = points[-1]
    m1 = slope(b, a)
    m2 = slope(b, c)
    # angle between two lines: tan(theta) = (m2 - m1) / (1 + m1 * m2)
    angle = math.atan((m2 - m1) / (1 + m1 * m2))
    angle = round(math.degrees(angle))
    if angle < 0:
        angle = 180 + angle
    cv2.putText(img, str(angle), (b[0] - 40, b[1] + 40),
                cv2.FONT_HERSHEY_DUPLEX, 1, (0, 0, 255), 1, cv2.LINE_AA)
    cv2.imshow('image', img)
    return angle

def slope(p1, p2):
    return (p2[1] - p1[1]) / (p2[0] - p1[0])

img = cv2.imread("protractor.jpg")
while True:
    cv2.imshow('image', img)
    cv2.setMouseCallback('image', drawcircle)
    if cv2.waitKey(1) & 0xff == ord('r'):
        img = cv2.imread("protractor.jpg")
        points = []
        cv2.imshow('image', img)
        cv2.setMouseCallback('image', drawcircle)
    if cv2.waitKey(1) & 0xff == ord('q'):
        break
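The slope-based formula in findangle() fails when one of the two lines is vertical, because slope() then divides by zero. A more robust alternative, sketched below, measures the angle between the two vectors that share the clicked vertex using math.atan2, which has no special case for vertical lines:

import math

def angle_at_vertex(vertex, p1, p2):
    # angle between the vectors vertex->p1 and vertex->p2, in degrees
    a1 = math.atan2(p1[1] - vertex[1], p1[0] - vertex[0])
    a2 = math.atan2(p2[1] - vertex[1], p2[0] - vertex[0])
    ang = abs(math.degrees(a1 - a2))
    return 360 - ang if ang > 180 else ang

# example: two perpendicular arms give a right angle
print(angle_at_vertex((0, 0), (10, 0), (0, 10)))   # 90.0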

OUTPUT

RESULT

The code allows the user to click three points on the protractor image to calculate and
display the angle between two lines. The user can reset the image and the points by
pressing the 'r' key and exit the program by pressing the 'q' key. The protractor image
with the clicked points and the angle text is continuously displayed until the user
presses the 'q' key.

Ex. No: 10 DETECTING LINES IN IMAGES USING HOUGH TRANSFORM
Date:

AIM

To detect lines in an image using the Hough Line method.

REQUIREMENTS
 Python 3.x
 OpenCV library
 image named "line-detection.png" in the same directory as this Python script.

PROCEDURE

1. Import the required libraries: The first step is to import the OpenCV and NumPy
libraries.

2. Read the input image: Read the image "line-detection.png" using the OpenCV
function cv2.imread() and store it in a variable called 'img'.

3. Convert the image to grayscale: Convert the BGR image to grayscale using the
OpenCV function cv2.cvtColor() and store it in a variable called 'gray'.

4. Apply edge detection: Apply the Canny edge detection method on the grayscale
image using the OpenCV function cv2.Canny() and store the result in a variable
called 'edges'.

5. Detect lines using the HoughLine method: Use the OpenCV function
cv2.HoughLines() to detect lines in the edge-detected image. This function
returns an array of r and theta values.

6. Draw lines on the original image: Use a for loop to iterate through each r_theta
value in the lines array. For each r_theta value, calculate the values of a, b, x0,
y0, x1, y1, x2, and y2 using the equations provided in the code. Finally, draw a
line on the original image using the OpenCV function cv2.line().

7. Display the result: Display the result image using the OpenCV function
cv2.imshow() and wait for a keyboard event using the cv2.waitKey() function.

8. Save the result: Save the result images in the current directory using the OpenCV
function cv2.imwrite().

PROGRAM

# Python program to illustrate the HoughLine method for line detection
import cv2
import numpy as np

# Reading the required image in which operations are to be done.
# Make sure that the image is in the same directory as this Python program.
img = cv2.imread('line-detection.png')

# Convert the img to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Apply edge detection method on the image
edges = cv2.Canny(gray, 50, 150, apertureSize=3)

# This returns an array of r and theta values
lines = cv2.HoughLines(edges, 1, np.pi/180, 200)

# The below for loop runs till r and theta values are in the range of the 2d array
for r_theta in lines:
    arr = np.array(r_theta[0], dtype=np.float64)
    r, theta = arr

    # Stores the value of cos(theta) in a
    a = np.cos(theta)
    # Stores the value of sin(theta) in b
    b = np.sin(theta)

    # x0 stores the value r*cos(theta)
    x0 = a * r
    # y0 stores the value r*sin(theta)
    y0 = b * r

    # x1 stores the rounded off value of (r*cos(theta) - 1000*sin(theta))
    x1 = int(x0 + 1000 * (-b))
    # y1 stores the rounded off value of (r*sin(theta) + 1000*cos(theta))
    y1 = int(y0 + 1000 * (a))
    # x2 stores the rounded off value of (r*cos(theta) + 1000*sin(theta))
    x2 = int(x0 - 1000 * (-b))
    # y2 stores the rounded off value of (r*sin(theta) - 1000*cos(theta))
    y2 = int(y0 - 1000 * (a))

    # cv2.line draws a line in img from the point (x1, y1) to (x2, y2).
    # (0, 0, 255) denotes the colour of the line to be drawn. In this case, it is red.
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)

# The grayscale, edge-detected and line-marked images are finally written to new files.
cv2.imwrite('grayDetected.jpg', gray)
cv2.imshow("result gray img", gray)
cv2.imwrite('edgeDetected.jpg', edges)
cv2.imshow("result edge img", edges)
cv2.imwrite('linesDetected.jpg', img)
cv2.imshow("Result Image", img)
cv2.waitKey(0)
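cv2.HoughLines() only returns each line as a (r, theta) pair, which is why the 1000-pixel extension trick above is needed to obtain drawable endpoints. The probabilistic variant cv2.HoughLinesP() returns finite segments with their endpoints directly; a short sketch on the same input image (the threshold and length parameters are illustrative values, not tuned for this image):

import cv2
import numpy as np

img = cv2.imread('line-detection.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)

# each entry is [x1, y1, x2, y2] - a finite line segment
segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                           minLineLength=50, maxLineGap=10)
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imshow("HoughLinesP result", img)
cv2.waitKey(0)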

OUTPUT

RESULT

The result of this code is three output images: "grayDetected.jpg", "edgeDetected.jpg",


and "linesDetected.jpg". The "grayDetected.jpg" image shows the input image
converted to grayscale. The "edgeDetected.jpg" image shows the result of edge
detection on the grayscale image. The "linesDetected.jpg" image shows the detected
lines drawn on the original image. Additionally, the result of the grayscale and edge-
detected images are displayed using the cv2.imshow() function.

Ex. No: 11 DETECTING CELLS USING IMAGE SEGMENTATION
Date:

AIM

To write an OpenCV program to detect cells using image segmentation.

REQUIREMENTS

 OpenCV (cv2)
 NumPy
 Matplotlib

PROCEDURE

1. Import the required libraries (NumPy, cv2, Matplotlib).


2. Load the input image "cell.png" using cv2.imread().
3. Convert the loaded image to grayscale using cv2.cvtColor().
4. Apply median and Gaussian blurring on the grayscale image using
cv2.medianBlur() and cv2.GaussianBlur() functions, respectively.
5. Perform histogram equalization using cv2.equalizeHist().
6. Apply Contrast Limited Adaptive Histogram Equalization (CLAHE) using
cv2.createCLAHE() function.
7. Apply contrast stretching on the grayscale image using the pixelVal() function
that maps each intensity level to an output intensity level.
8. Detect edges using the Canny edge detection algorithm using cv2.Canny()
function.
9. Save the output images using cv2.imwrite() function.
10. Display the input and output images using Matplotlib plt.imshow() function.

PROGRAM

import numpy as np
import cv2
import matplotlib.pyplot as plt

# read original image


image = cv2.imread("cell.png")
kernel = np.ones((5, 5), np.uint8)

# convert to gray scale image


gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imwrite('gray.png', gray)

# apply median filter for smoothing


blurM = cv2.medianBlur(gray, 5)
cv2.imwrite('blurM.png', blurM)

# apply gaussian filter for smoothing


blurG = cv2.GaussianBlur(gray, (9, 9), 0)
cv2.imwrite('blurG.png', blurG)

# histogram equalization
histoNorm = cv2.equalizeHist(gray)
cv2.imwrite('histoNorm.png', histoNorm)

# create a CLAHE object for


# Contrast Limited Adaptive Histogram Equalization (CLAHE)
clahe = cv2.createCLAHE(clipLimit = 2.0, tileGridSize=(8, 8))
claheNorm = clahe.apply(gray)
cv2.imwrite('claheNorm.png', claheNorm)

# contrast stretching
# Function to map each intensity level to output intensity level.
def pixelVal(pix, r1, s1, r2, s2):
    if (0 <= pix and pix <= r1):
        return (s1 / r1) * pix
    elif (r1 < pix and pix <= r2):
        return ((s2 - s1) / (r2 - r1)) * (pix - r1) + s1
    else:
        return ((255 - s2) / (255 - r2)) * (pix - r2) + s2

# Define parameters.
r1 = 70
s1 = 0
r2 = 200
s2 = 255
# Vectorize the function to apply it to each value in the Numpy array.
pixelVal_vec = np.vectorize(pixelVal)

# Apply contrast stretching.


contrast_stretched = pixelVal_vec(gray, r1, s1, r2, s2)
contrast_stretched_blurM = pixelVal_vec(blurM, r1, s1, r2, s2)
cv2.imwrite('contrast_stretch.png', contrast_stretched)
cv2.imwrite('contrast_stretch_blurM.png', contrast_stretched_blurM)

# edge detection using canny edge detector


edge = cv2.Canny(gray, 100, 200)
cv2.imwrite('edge.png', edge)
edgeG = cv2.Canny(blurG, 100, 200)
cv2.imwrite('edgeG.png', edgeG)
edgeM = cv2.Canny(blurM, 100, 200)
cv2.imwrite('edgeM.png', edgeM)
plt.imshow(image)
plt.show()
plt.imshow(edge)
plt.show()
plt.imshow(edgeG)
plt.show()
plt.imshow(edgeM)
plt.show()
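The steps above produce smoothed, contrast-enhanced edge maps but stop short of an actual cell count. One possible final step, sketched here as a continuation of the program (using the edgeM and kernel variables defined above), closes the edge map and counts the remaining contours; the 50-pixel area threshold is an assumed value that would need tuning for the real micrograph:

# contour-based cell count on the median-blurred edge image
closed = cv2.morphologyEx(edgeM, cv2.MORPH_CLOSE, kernel)
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
cells = [c for c in contours if cv2.contourArea(c) > 50]
print("approximate cell count:", len(cells))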

OUTPUT

RESULT

The code saves the output images in PNG format, with names indicating the type of
processing applied. The input and output images are displayed using the
plt.imshow() function.

Ex. No: 12 TEXTURE SEGMENTATION OF AN IMAGE USING FILTERS
Date:

AIM

To identify and segment regions based on their texture using MATLAB.

SOFTWARE REQUIREMENT

 MATLAB with the Image Processing Toolbox
PROCEDURE

1. Read Image:
Read and display a grayscale image of textured patterns on a bag.

2. Create Texture Image:


Use entropyfilt to create a texture image. Entropy is a statistical measure of
randomness.
Use stdfilt and rangefilt to achieve similar segmentation results. Use rescale to
rescale the texture images E and S so that pixel values are in the range [0, 1] as
expected of images of data type double. Display the three texture images in a
montage.

3. Create Mask for Bottom Texture:

Threshold the rescaled image Eim to segment the textures. A threshold value of 0.8 is
selected because it is roughly the intensity value of pixels along the boundary between
the textures. Remove the objects in the top texture by using bwareaopen.
Use imclose to smooth the edges. Use imfill to fill holes in the object in closeBWao.

4. Use Mask to Segment Textures:


Separate the textures into two different images.

5. Display Segmentation Results:


Create a label matrix that has the label 1 where the mask is false and the label 2
where the mask is true. Overlay label matrix on the original image. Outline the
boundary between the two textures in cyan.

PROGRAM

I = imread('bag.png');
imshow(I)
title('Original Image')
E = entropyfilt(I);
S = stdfilt(I,ones(9));
R = rangefilt(I,ones(9));
Eim = rescale(E);
Sim = rescale(S);
montage({Eim,Sim,R},'Size',[1 3],'BackgroundColor','w',"BorderSize",20)
title('Texture Images Showing Local Entropy, Local Standard Deviation, and Local Range')
BW1 = imbinarize(Eim,0.8);
imshow(BW1)
title('Thresholded Texture Image')
BWao = bwareaopen(BW1,2000);
imshow(BWao)
title('Area-Opened Texture Image')
nhood = ones(9);
closeBWao = imclose(BWao,nhood);
imshow(closeBWao)
title('Closed Texture Image')
mask = imfill(closeBWao,'holes');
imshow(mask);
title('Mask of Bottom Texture')
textureTop = I;
textureTop(mask) = 0;
textureBottom = I;
textureBottom(~mask) = 0;
montage({textureTop,textureBottom},'Size',[1 2],'BackgroundColor','w',"BorderSize",20)
title('Segmented Top Texture (Left) and Segmented Bottom Texture (Right)')
L = mask+1;
imshow(labeloverlay(I,L))
title('Labeled Segmentation Regions')
boundary = bwperim(mask);
imshow(labeloverlay(I,boundary,"Colormap",[0 1 1]))
title('Boundary Between Textures')

OUTPUT

RESULT

Thus the identification and segmentation of regions based on their texture using
MATLAB was done.

Ex. No: 13 COLOUR-BASED SEGMENTATION USING K-MEANS CLUSTERING
Date:

AIM

To perform color segmentation on an image using the k-means clustering algorithm.

REQUIREMENTS

 OpenCV (cv2)
 NumPy
 Matplotlib

PROCEDURE

1. The image is read using the cv2.imread() function, which returns an image in
the BGR color space.
2. The cv2.cvtColor() function is used to convert the image from BGR to RGB
color space, as the imshow() function in Matplotlib expects RGB images.
3. The image is displayed using the imshow() function from Matplotlib and the
cv2.imshow() function from OpenCV, which displays the image in a separate
window.
4. The image is reshaped into a 2D array of pixels and 3 color values (RGB)
using the reshape() function from NumPy.
5. The pixel values are converted to float type using the np.float32() function.
6. The criteria for the k-means algorithm to stop running is defined using the
cv2.TERM_CRITERIA_EPS and cv2.TERM_CRITERIA_MAX_ITER
flags.
7. The k-means clustering algorithm is applied using the cv2.kmeans() function,
with the number of clusters set to 3 and random centers initially chosen for
clustering.
8. The resulting centers are converted to 8-bit values using the np.uint8()
function.
9. The segmented data is reshaped into the original image dimensions using the
reshape() function.
10. The segmented image is displayed using the imshow() function from
Matplotlib and the cv2.imshow() function from OpenCV.

PROGRAM

import numpy as np
import matplotlib.pyplot as plt
import cv2

# Read in the image


image = cv2.imread('butterfly.jpg')

# Change color to RGB (from BGR)


image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
cv2.imshow("image",image)
cv2.waitKey(0)

# Reshaping the image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1, 3))

# Convert to float type


pixel_vals = np.float32(pixel_vals)
# the below line of code defines the criteria for the algorithm to stop running,
# which will happen when 100 iterations are run or the epsilon
# (the required accuracy) of 0.85 is reached
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.85)

# then perform k-means clustering with the number of clusters defined as 3
# random centres are initially chosen for k-means clustering
k = 3
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10,
                                     cv2.KMEANS_RANDOM_CENTERS)

# convert data into 8-bit values


centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
plt.imshow(segmented_image)
cv2.imshow("image1",segmented_image)
cv2.waitKey(0)
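The OUTPUT below shows results for both K=3 and K=6. The only change needed is the value of k passed to cv2.kmeans(), since more clusters preserve more distinct colours; a one-line variation continuing from the pixel_vals and criteria variables above:

# rerun the clustering with six colour clusters instead of three
k = 6
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10,
                                     cv2.KMEANS_RANDOM_CENTERS)
centers = np.uint8(centers)
segmented_image = centers[labels.flatten()].reshape(image.shape)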

OUTPUT

K=3

K=6

RESULT

The code segments the input image into 3 clusters based on color, producing a
segmented image. The original and segmented images are displayed using Matplotlib
and OpenCV, respectively.

Ex. No: 14 LINE FOLLOWER ROBOT CONTROL
Date:

WHAT IS A LINE FOLLOWER ROBOT?

A line follower robot is a robot which follows a certain path controlled by a feedback
mechanism.

REQUIREMENTS

1. Arduino Uno R3
ATmega 328P board x 1
2. Motor driver shield
(L293D) x 1
3. IR sensors x 2
4. Gear motor single shaft x 4
5. Robot wheels x 4
6. Acrylic sheet x 1
7. Lithium-ion battery(18650) x 2
8. Battery holder x 1
9. Jumper wires (Male to Female type wires)

PROCEDURE

1. First, we took the acrylic sheet of dimensions of 160x120 mm as a chassis (body)


for the project.

2. Then, we attached all four gear motors with wheels on the bottom of acrylic sheet.

3. Then, we attached the Arduino Uno on the top of the acrylic sheet.

4. Then, we attached the motor driver onto the Arduino Uno.

5. Then, we connected the pins of all four gear motors in the motor driver as per the
connections diagram.

6. Then, we took two IR sensors and attached them in the front of the chassis with
some distance between them with respect to the width of line path. One sensor is for
left side detection and another is for the right side detection.

7. We connected the VCC pins to 5 volts and the ground pins to ground.

8. We connected both the IR sensor signal pins in the motor driver as per the circuit
/connections diagram.

9. Finally, we connected the lithium ion batteries (18650) with the motor driver
and placed the battery holder on chassis or body of the line follower robot.

10. Then, we uploaded the arduino program of our project into the Arduino Uno
R3 using USB cable.

PROGRAM

// LINE FOLLOWER ROBOT USING ARDUINO UNO R3 //
// INSTALL THE AFMOTOR LIBRARY BEFORE UPLOADING THE CODE //
// GO TO SKETCH >> INCLUDE LIBRARY >> ADD .ZIP LIBRARY >> SELECT AF MOTOR ZIP FILE //

// including the libraries
#include <AFMotor.h>

// defining pins and variables
#define left A4
#define right A3

// defining motors
AF_DCMotor motor1(1, MOTOR12_1KHZ);
AF_DCMotor motor2(2, MOTOR12_1KHZ);
AF_DCMotor motor3(3, MOTOR34_1KHZ);
AF_DCMotor motor4(4, MOTOR34_1KHZ);

void setup() {
  // declaring pin types
  pinMode(left, INPUT);
  pinMode(right, INPUT);
  // begin serial communication
  Serial.begin(9600);
}

void loop() {
  // printing values of the sensors to the serial monitor
  Serial.println(digitalRead(left));
  Serial.println(digitalRead(right));

  // line detected by both sensors
  if (digitalRead(left) == 0 && digitalRead(right) == 0) {
    // Forward
    motor1.run(FORWARD);
    motor1.setSpeed(150);
    motor2.run(FORWARD);
    motor2.setSpeed(150);
    motor3.run(FORWARD);
    motor3.setSpeed(150);
    motor4.run(FORWARD);
    motor4.setSpeed(150);
  }
  // line detected by left sensor only
  else if (digitalRead(left) == 0 && digitalRead(right) == 1) {
    // turn left
    motor1.run(FORWARD);
    motor1.setSpeed(200);
    motor2.run(FORWARD);
    motor2.setSpeed(200);
    motor3.run(BACKWARD);
    motor3.setSpeed(200);
    motor4.run(BACKWARD);
    motor4.setSpeed(200);
  }
  // line detected by right sensor only
  else if (digitalRead(left) == 1 && digitalRead(right) == 0) {
    // turn right
    motor1.run(BACKWARD);
    motor1.setSpeed(200);
    motor2.run(BACKWARD);
    motor2.setSpeed(200);
    motor3.run(FORWARD);
    motor3.setSpeed(200);
    motor4.run(FORWARD);
    motor4.setSpeed(200);
  }
  // line detected by neither sensor
  else if (digitalRead(left) == 1 && digitalRead(right) == 1) {
    // stop
    motor1.run(RELEASE);
    motor1.setSpeed(0);
    motor2.run(RELEASE);
    motor2.setSpeed(0);
    motor3.run(RELEASE);
    motor3.setSpeed(0);
    motor4.run(RELEASE);
    motor4.setSpeed(0);
  }
}

ROBOT

RESULT

Thus the control of a line follower robot was carried out successfully.

Ex. No: 15 STUDY OF NAVIGATION CONTROL OF MOBILE ROBOTS USING NEURAL NETWORK
Date:

ABOUT

Navigation control of mobile robots using neural network algorithms is an exciting area
of research and development in robotics. Neural networks are a type of artificial
intelligence algorithm inspired by the way the human brain processes information.

In the context of mobile robots, neural networks can be used to analyze sensor data and
make decisions about how to navigate in complex environments. This can be
particularly useful for robots operating in dynamic and unpredictable environments,
such as warehouses, hospitals, and factories.

REQUIREMENTS

 Python 3.X
 NumPy
 Tensorflow

PROCEDURE

1. Import the necessary libraries:


import numpy as np
import tensorflow as tf

2. Define the neural network architecture


model = tf.keras.models.Sequential([
tf.keras.layers.Dense(32, input_shape=(10,), activation='relu'),
tf.keras.layers.Dense(16, activation='relu'),
tf.keras.layers.Dense(2, activation='tanh')
])

3. Compile the model:
model.compile(loss='mean_squared_error', optimizer='adam')

4. Train the model:

model.fit(train_X, train_y, epochs=100, batch_size=32,
          validation_data=(test_X, test_y))

5. Use the model for navigation control:


while True:
    # Get sensor readings
    sensor_data = get_sensor_data()

    # Predict movement using the trained model
    movement = model.predict(np.array([sensor_data]))

    # Send movement commands to the robot
    send_movement_command(movement[0])

Here we use the trained model to predict movement based on sensor data, and then send
movement commands to the robot. The functions get_sensor_data() and
send_movement_command() are placeholders for robot-specific sensor and motor routines.

PROGRAM

import numpy as np
import tensorflow as tf

# Define the neural network architecture


model = tf.keras.models.Sequential([
tf.keras.layers.Dense(32, input_shape=(10,), activation='relu'),
tf.keras.layers.Dense(16, activation='relu'),
tf.keras.layers.Dense(2, activation='tanh')
])
model.compile(loss='mean_squared_error', optimizer='adam')

# Generate training data (random dummy data for illustration)
train_X = np.random.randn(10000, 10)
train_y = np.random.randn(10000, 2)
test_X = np.random.randn(1000, 10)
test_y = np.random.randn(1000, 2)

# Train the model


model.fit(train_X, train_y, epochs=100, batch_size=32, validation_data=(test_X,
test_y))

# Use the model to control the robot


while True:
    # Get sensor readings
    sensor_data = get_sensor_data()

    # Predict movement using the trained model
    movement = model.predict(np.array([sensor_data]))

    # Send movement commands to the robot
    send_movement_command(movement[0])
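get_sensor_data() and send_movement_command() are not defined in the listing; they stand in for whatever sensor-reading and motor-driver routines the particular robot provides. Purely hypothetical stub versions, useful only for exercising the script on a PC, might look like this:

import numpy as np

def get_sensor_data():
    # hypothetical stub: a real robot would return its 10 sensor readings
    # (e.g. ultrasonic distances); here random numbers are used instead
    return np.random.randn(10)

def send_movement_command(movement):
    # hypothetical stub: a real implementation would drive the motors;
    # movement[0] and movement[1] are the two network outputs in [-1, 1]
    print("movement command:", movement[0], movement[1])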

RESULT

Thus the Navigation control of mobile robots using neural network algorithms was
performed.
Ex. No: 16 STUDY OF NAVIGATION CONTROL OF MOBILE ROBOTS USING FUZZY LOGIC ALGORITHM
Date:

AIM

To demonstrate the implementation of a fuzzy logic-based control system using the


scikit-fuzzy library in Python. The system takes in two input variables (distance and
angle) and provides two output variables (direction and speed).

REQUIREMENTS

 Python 3.X
 NumPy
 scikit-fuzzy

PROCEDURE

1. Connect sensors to the Raspberry Pi 4: In order to navigate a mobile robot using


a neural network, the Raspberry Pi 4 needs to be connected to sensors such as
cameras, ultrasonic sensors, and other relevant sensors that can provide data
about the environment the robot is in.

2. Install necessary libraries: Install OpenCV and TensorFlow libraries on the


Raspberry Pi 4. These libraries will help in processing the sensory data and
implementing the neural network algorithm.

3. Collect training data: Use the sensors to collect data about the environment,
such as images from the camera, distance measurements from the ultrasonic
sensors, and other relevant data. Use this data to train the neural network
algorithm.
4. Define input variables and membership functions: Define the input variables for
the fuzzy logic algorithm based on the sensor data. For example, if the robot is
using ultrasonic sensors to detect obstacles, the input variables could be
distance and angle. Define the membership functions for each input variable to
convert the sensor data into fuzzy sets.

5. Define output variables and membership functions: Define the output variables
for the fuzzy logic algorithm, which will be the control commands sent to the
robot. The output variables could be speed and direction. Define the
membership functions for each output variable.

6. Define rules: Define the rules for the fuzzy logic algorithm, which will dictate
how the input variables should be combined to produce the output variables.
For example, if the distance to an obstacle is close and the angle is small, the
speed should be reduced and the direction should be changed.

7. Design and train the neural network: Using TensorFlow, design a neural network
that takes in sensor data as input and produces control commands as output.
Train the network using the training data collected in the previous step.

8. Deploy the neural network: Once the neural network is trained, deploy it on the
Raspberry Pi 4. This can be done by writing a Python script that reads sensor
data, feeds it to the neural network, and sends the resulting control commands
to the mobile robot.

9. Test the navigation control: Test the navigation control system by running the
script and observing the behavior of the mobile robot. You may need to adjust
the neural network architecture and training data to improve the system's
performance.

10. Take appropriate safety measures while testing the navigation control system
to prevent any accidents or damage to the robot or surrounding environment.

PROGRAM

import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Define input variables (antecedents)
distance = ctrl.Antecedent(np.arange(0, 11, 1), 'distance')
angle = ctrl.Antecedent(np.arange(-90, 91, 1), 'angle')

# Define output variables (consequents)
direction = ctrl.Consequent(np.arange(-50, 51, 1), 'direction')
speed = ctrl.Consequent(np.arange(0, 101, 1), 'speed')

# Generate membership functions for input variables
distance['low'] = fuzz.trimf(distance.universe, [0, 0, 5])
distance['med'] = fuzz.trimf(distance.universe, [0, 5, 10])
distance['high'] = fuzz.trimf(distance.universe, [5, 10, 10])

angle['neg'] = fuzz.trimf(angle.universe, [-90, -90, 0])
angle['zero'] = fuzz.trimf(angle.universe, [-45, 0, 45])
angle['pos'] = fuzz.trimf(angle.universe, [0, 90, 90])

# Generate membership functions for output variables
direction['left'] = fuzz.trimf(direction.universe, [-50, -50, 0])
direction['straight'] = fuzz.trimf(direction.universe, [-25, 0, 25])
direction['right'] = fuzz.trimf(direction.universe, [0, 50, 50])

speed['slow'] = fuzz.trimf(speed.universe, [0, 0, 50])
speed['fast'] = fuzz.trimf(speed.universe, [50, 100, 100])

# Define fuzzy rules (each rule sets both outputs)
rule1 = ctrl.Rule(distance['low'] & angle['neg'], (direction['left'], speed['slow']))
rule2 = ctrl.Rule(distance['low'] & angle['zero'], (direction['straight'], speed['slow']))
rule3 = ctrl.Rule(distance['low'] & angle['pos'], (direction['right'], speed['slow']))
rule4 = ctrl.Rule(distance['med'] & angle['neg'], (direction['left'], speed['fast']))
rule5 = ctrl.Rule(distance['med'] & angle['zero'], (direction['straight'], speed['fast']))
rule6 = ctrl.Rule(distance['med'] & angle['pos'], (direction['right'], speed['fast']))
rule7 = ctrl.Rule(distance['high'] & angle['neg'], (direction['left'], speed['fast']))
rule8 = ctrl.Rule(distance['high'] & angle['zero'], (direction['straight'], speed['fast']))
rule9 = ctrl.Rule(distance['high'] & angle['pos'], (direction['right'], speed['fast']))

# Create control system and a simulation of it
nav_ctrl = ctrl.ControlSystem([rule1, rule2, rule3, rule4, rule5,
                               rule6, rule7, rule8, rule9])
nav_simulation = ctrl.ControlSystemSimulation(nav_ctrl)

# Define input values
distance_input = 7
angle_input = 30

# Set input values
nav_simulation.input['distance'] = distance_input
nav_simulation.input['angle'] = angle_input

# Compute output values
nav_simulation.compute()

# Get output values
direction_output = nav_simulation.output['direction']
speed_output = nav_simulation.output['speed']

print("Direction:", direction_output)
print("Speed:", speed_output)

RESULT

The output of this code is the direction and speed values that the control system
recommends based on the input values provided. The direction value represents the
degree of turn required, with negative values indicating a left turn, zero indicating a
straight path, and positive values indicating a right turn. The speed value represents the
recommended speed of the vehicle, with lower values indicating slower speeds and
higher values indicating faster speeds.

Ex. No: 17 IMPLEMENTING SLAM IN RASPBERRY PI MOBILE ROBOT
Date:

AIM

To implement SLAM in Raspberry Pi Mobile Robot.

PROCEDURE

1 RUNNING TELEOP

1. Open a new terminal using:
   Ctrl+Alt+t

2. SSH into the Pi:
   ssh pi@<ip address>
   (Eg: ssh pi@192.169.4.20)

3. Turn on ROS and the TortoiseBot sensors:
   roslaunch tortoisebot_firmware bringup.launch

4. Open a new terminal:
   Ctrl+Alt+t

5. Start the node that reads the sensor data and updates odom:
   roslaunch tortoisebot_firmware server_bringup.launch

6. Open a new terminal:
   Ctrl+Alt+t

7. Start teleop:
   rosrun tortoisebot_control tortoisebot_teleop_key.py
2 RUNNING SLAM

1. Follow the same steps as Running Teleop, after which follow the steps below.

2. Open a new terminal:
   Ctrl+Alt+t

3. Start SLAM:
   Note: Note down the starting position of the robot before you start mapping,
   as you must start from this exact position whilst running Autonomous Navigation.
   roslaunch tortoisebot_slam tortoisebot_slam.launch

4. After you are done mapping, save the map by running:
   roslaunch tortoisebot_slam map_saver.launch map_name:=<any name>
   (Eg: roslaunch tortoisebot_slam map_saver.launch map_name:=ialab)

3 RUNNING AUTONOMOUS NAVIGATION

1. Follow the same steps as Running Teleop till step number 6, after which follow:

2. Start Autonomous Navigation:
   roslaunch tortoisebot_navigation tortoisebot_navigation.launch map_file:=<map name>
   (Eg: roslaunch tortoisebot_navigation tortoisebot_navigation.launch map_file:=ialab)

3. Now, in the opened RViz window, use the "2D Nav Goal" button to mark the goal
location and orientation on the map.

TELEOP SAMPLE IMAGE

SLAM SAMPLE IMAGE

AUTONOMOUS NAVIGATION SAMPLE IMAGE

RESULT

Thus the implementation of SLAM in Raspberry Pi Mobile Robot was performed.

Ex. No: 18 CIRCLE CONTOUR DETECTION
Date:

AIM

To write a program that detects circle contours in an image.

REQUIREMENTS

 Python 3
 OpenCV library
 NumPy
 Matplotlib library
 Input image

PROCEDURE

1. Import the necessary libraries: cv2, numpy, and matplotlib.pyplot.


2. Load the input image using the cv2.imread() function and store it in the image
variable.
3. Convert the input image to grayscale using the cv2.cvtColor() function and store
it in the gray variable.
4. Display the grayscale image using the plt.imshow() function with the
cmap='gray' argument.
5. Apply Gaussian blur to the grayscale image using the cv2.GaussianBlur()
function with a kernel size of (11, 11) and sigma value of 0. Store the blurred
image in the blur variable.
6. Display the blurred image using the plt.imshow() function with the cmap='gray'
argument.
7. Apply Canny edge detection to the blurred image using the cv2.Canny() function
with a threshold of 30 and 150. Store the detected edges in the canny variable.
8. Display the detected edges using the plt.imshow() function with the cmap='gray'
argument.

9. Dilate the detected edges using the cv2.dilate() function with a kernel size of (1,
1) and one iteration. Store the dilated image in the dilated variable.
10. Display the dilated image using the plt.imshow() function with the cmap='gray'
argument.
11. Find the contours in the dilated image using the cv2.findContours() function with
the cv2.RETR_EXTERNAL and cv2.CHAIN_APPROX_NONE flags. Store the
contours and hierarchy in the contours and hierarchy variables, respectively.
12. Convert the input image from BGR to RGB using the cv2.cvtColor() function
and store it in the rgb variable.
13. Draw the contours on the RGB image using the cv2.drawContours() function
with a green color and a line thickness of 2.
14. Display the final image using the plt.imshow() function.
15. Print the number of coins found in the image using the len() function on the
contours variable.

PROGRAM

# Import libraries
import cv2
import numpy as np
import matplotlib.pyplot as plt

image = cv2.imread('bitss.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
plt.imshow(gray, cmap='gray')
#plt.show()
blur = cv2.GaussianBlur(gray, (11, 11), 0)
plt.imshow(blur, cmap='gray')
#plt.show()
canny = cv2.Canny(blur, 30, 150, apertureSize=3)
plt.imshow(canny, cmap='gray')
#plt.show()
dilated = cv2.dilate(canny, np.ones((1, 1), np.uint8), iterations=1)
plt.imshow(dilated, cmap='gray')
#plt.show()
(cnt, hierarchy) = cv2.findContours(
    dilated.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
cv2.drawContours(rgb, cnt, -1, (0, 255, 0), 2)

plt.imshow(rgb)
#plt.show()
print("coins in the image : ", len(cnt))

OUTPUT

RESULT

Thus the detection of circular contours has been successfully carried out in the image.
