
SRM INSTITUTE OF SCIENCE AND TECHNOLOGY

COLLEGE OF SCIENCE AND HUMANITIES


DEPARTMENT OF COMPUTER APPLICATIONS

PRACTICAL RECORD NOTE

STUDENT NAME :

REGISTER NUMBER :

CLASS :

YEAR & SEMESTER :

SUBJECT CODE :

SUBJECT TITLE :

April 2023

SRM INSTITUTE OF SCIENCE AND TECHNOLOGY


COLLEGE OF SCIENCE AND HUMANITIES
DEPARTMENT OF COMPUTER APPLICATIONS
SRM Nagar, Kattankulathur – 603 203

CERTIFICATE

Certified to be the bonafide record of practical work done
by _____________________________ Register No. ______________________ of
Degree course for UDS21402J-Introduction to Computer Vision in the
Computer Lab at SRM Institute of Science and Technology during the academic
year 2022-2023.

Staff In-charge Head of the Department

Submitted for Semester Practical Examination held on __________________.

Internal Examiner External Examiner



INDEX

Ex.No.   Date       Title of the experiment                                  Page no.   Staff sign.

1        2/1/23     Read, Write and Show Images
2        9/1/23     Color Space
3        18/1/23    Thresholding Techniques
4        25/1/23    Contour Detection
5        2/2/23     Image Scaling, Rotation and Translation
6        9/2/23     Edge Detection
7        16/2/23    Image Filtering, Box Blur
8        23/2/23    Gaussian Blur, Median Blur & Bilateral
9        2/3/23     Scale-Invariant Feature Transform (SIFT)
10       9/3/23     Binary Robust Independent Elementary Features (BRIEF)
11       16/3/23    Face Detection
12       16/3/23    Harris Corner Detection
13       24/3/23    Features from Accelerated Segment Test (FAST)
14       31/3/23    Morphological Operations: Erosion & Dilation
15       31/3/23    Morphological Operations: Open/Close

Date: 2/1/23
Ex.no: Read, Write and Show Images

Aim:

Algorithm:

Source Code:
!pip install opencv-python

# Mount Google Drive so the image files can be accessed
from google.colab import drive
drive.mount('/content/drive')

from google.colab.patches import cv2_imshow
import cv2
import numpy as np

# Read the image from Drive
image1 = cv2.imread('/content/drive/MyDrive/Human.png')
# Write the image back to Drive
cv2.imwrite('/content/drive/MyDrive/Human.png', image1)
# Show the image in the Colab notebook
cv2_imshow(image1)
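A small additional sketch (same image path as above, not part of the recorded code): cv2.imread does not raise an error for a missing or unreadable file, it silently returns None, so a quick check avoids confusing failures later.

import cv2

img = cv2.imread('/content/drive/MyDrive/Human.png')
if img is None:
    # imread returns None instead of raising an exception
    raise FileNotFoundError('Could not read the image from Drive')
print(img.shape, img.dtype)  # e.g. (rows, cols, 3) uint8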

Output :
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: opencv-python in /usr/local/lib/python3.9/dist-packages
(4.7.0.72)
Requirement already satisfied: numpy>=1.19.3 in /usr/local/lib/python3.9/dist-packages
(from opencv-python) (1.22.4)

Mounted at /content/drive

Date: 9/1/23
Ex. No: Color Space

Aim:

Algorithm:

Source Code:
import cv2
from google.colab.patches import cv2_imshow

image1 = cv2.imread('/content/drive/MyDrive/Human.png')

# Convert from BGR (OpenCV's default) to other color spaces and display each
img = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
cv2_imshow(img)
img = cv2.cvtColor(image1, cv2.COLOR_BGR2HSV)
cv2_imshow(img)
img = cv2.cvtColor(image1, cv2.COLOR_BGR2Lab)
cv2_imshow(img)
img = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)
cv2_imshow(img)
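A complementary sketch (not part of the recorded experiment): cv2.split separates the planes of a converted image, which makes it easier to see what each color space actually stores.

import cv2
from google.colab.patches import cv2_imshow

image1 = cv2.imread('/content/drive/MyDrive/Human.png')
hsv = cv2.cvtColor(image1, cv2.COLOR_BGR2HSV)
# hue, saturation and value planes as separate single-channel images
h, s, v = cv2.split(hsv)
cv2_imshow(h)
cv2_imshow(s)
cv2_imshow(v)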

Output:

Date:18/1/23
Ex. No: Thresholding Techniques

Aim:

Algorithm:

Source Code:
import cv2
from google.colab.patches import cv2_imshow

image1 = cv2.imread('/content/drive/MyDrive/Human.png')
# Thresholding is applied on a single-channel (grayscale) image
img = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)

# cv2.threshold(src, thresh, maxval, type) returns (retval, thresholded image)
ret, thresh1 = cv2.threshold(img, 140, 234, cv2.THRESH_BINARY)
ret, thresh2 = cv2.threshold(img, 45, 247, cv2.THRESH_BINARY_INV)
ret, thresh3 = cv2.threshold(img, 34, 254, cv2.THRESH_TRUNC)
ret, thresh4 = cv2.threshold(img, 249, 248, cv2.THRESH_TOZERO)
ret, thresh5 = cv2.threshold(img, 237, 167, cv2.THRESH_TOZERO_INV)

cv2_imshow(thresh1)
cv2_imshow(thresh2)
cv2_imshow(thresh3)
cv2_imshow(thresh4)
cv2_imshow(thresh5)
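A related variant, shown only as a sketch: with the THRESH_OTSU flag, OpenCV picks the threshold automatically from the image histogram, and the thresh argument passed in is ignored.

import cv2
from google.colab.patches import cv2_imshow

gray = cv2.cvtColor(cv2.imread('/content/drive/MyDrive/Human.png'), cv2.COLOR_BGR2GRAY)
# retval holds the threshold value Otsu's method selected
retval, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print('Otsu threshold:', retval)
cv2_imshow(otsu)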

Output:

Date: 25/1/23
Ex. No: Contour Detection

Aim:

Algorithm:

Source Code:
import cv2
import numpy as np
from google.colab.patches import cv2_imshow

img = cv2.imread('/content/drive/MyDrive/Human.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 160, 235, 0)
# findContours returns the list of contours and their hierarchy
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
print("Number of contours found=", len(contours))
# Draw contour number 100 (pass -1 instead to draw all contours)
cv2.drawContours(img, contours, 100, (150, 150, 0), 3)
cv2_imshow(img)

img = cv2.imread('/content/drive/MyDrive/Eagle.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 110, 234, 0)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
cv2.drawContours(img, contours, 100, (149, 170, 0), 2)
cv2_imshow(img)
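An illustrative sketch beyond the recorded code: passing -1 as the contour index draws every contour at once, and cv2.boundingRect gives an axis-aligned box around any single contour (here the largest by area).

import cv2
from google.colab.patches import cv2_imshow

img = cv2.imread('/content/drive/MyDrive/Human.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 160, 235, 0)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

# -1 draws all contours instead of a single one
cv2.drawContours(img, contours, -1, (0, 255, 0), 1)
# bounding rectangle around the largest contour by area
largest = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(largest)
cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2_imshow(img)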

Output:
Number of contours found= 157

Date: 2/2/23
Ex. No: Image Scaling, Rotation and Translation

Aim:

Algorithm:

Source Code:
import cv2
import numpy as np
import matplotlib.pyplot as plt
from google.colab.patches import cv2_imshow

# Image Scaling
image = cv2.imread('/content/drive/MyDrive/Human.png')
# When dsize is None, fx and fy give the scaling factors
half = cv2.resize(image, None, fx=0.1, fy=0.1)
bigger = cv2.resize(image, (1050, 1610))
stretch_near = cv2.resize(image, (780, 540), interpolation=cv2.INTER_NEAREST)

titles = ["Original", "Half", "Bigger", "Interpolation Nearest"]
images = [image, half, bigger, stretch_near]
count = 4

for i in range(count):
    plt.subplot(2, 2, i + 1)
    plt.title(titles[i])
    plt.imshow(images[i])
plt.show()

# Image Rotation
img = cv2.imread('/content/drive/MyDrive/Human.png')
image = cv2.rotate(img, cv2.ROTATE_90_COUNTERCLOCKWISE)
image1 = cv2.rotate(img, cv2.ROTATE_180)
image2 = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
cv2_imshow(image)
cv2_imshow(image1)
cv2_imshow(image2)

# Image Translation
image = cv2.imread('/content/drive/MyDrive/Human.png')
height, width = image.shape[:2]
# Translation matrix [[1, 0, tx], [0, 1, ty]]: shift by a quarter of the image size
T = np.float32([[1, 0, width / 4], [0, 1, height / 4]])
print('Original image')
cv2_imshow(image)
print('Translated image')
cv2_imshow(cv2.warpAffine(image, T, (width, height)))

Output:
Scaling

Rotation

Translation

Date: 09/02/23
Ex. No: Edge Detection

Aim:

Algorithm:

Source Code :
import cv2
from google.colab.patches import cv2_imshow

img = cv2.imread('/content/drive/MyDrive/Human.png')
# Lower and upper thresholds for Canny's hysteresis step
low = 100
upper = 250
edge = cv2.Canny(img, low, upper)
cv2_imshow(edge)
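A common refinement, included only as a sketch: smoothing the image slightly before Canny suppresses noise that would otherwise show up as spurious edges.

import cv2
from google.colab.patches import cv2_imshow

img = cv2.imread('/content/drive/MyDrive/Human.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# smooth first so only significant intensity changes survive as edges
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blurred, 100, 250)
cv2_imshow(edges)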

Output:

Date: 16/2/23
Ex. No: Image Filtering, Box Blurring

Aim:

Algorithm:

Source Code :
#Image Filtering :
import cv2
import numpy as np
from google.colab.patches import cv2_imshow

img = cv2.imread('/content/drive/MyDrive/Human.png')
# Identity kernel: the filtered image is identical to the input
id_kernel = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
flt_img = cv2.filter2D(src=img, ddepth=-1, kernel=id_kernel)
cv2_imshow(flt_img)

#Box Blurring:
import matplotlib.pyplot as plt

img = cv2.imread('/content/drive/MyDrive/Human.png')
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(rgb)
plt.show()
# Box blur kernel: a 3x3 unity matrix divided by 9
array = np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]])
box_blur_ker = array / 9.0
box_blur = cv2.filter2D(src=img, ddepth=-1, kernel=box_blur_ker)
# Showing the box blur image using matplotlib's plt.imshow()
plt.imshow(cv2.cvtColor(box_blur, cv2.COLOR_BGR2RGB))
plt.show()
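For comparison (a sketch, not part of the recorded code): OpenCV's built-in cv2.blur applies the same normalized box kernel without constructing the matrix by hand.

import cv2
from google.colab.patches import cv2_imshow

img = cv2.imread('/content/drive/MyDrive/Human.png')
# cv2.blur averages over the given window, i.e. a normalized box filter
box = cv2.blur(img, (3, 3))
cv2_imshow(box)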

Output :
Image Filtering:

Box Blurring :

Date:23/2/23
Ex.No: Gaussian Blur, Median Blur & Bilateral

Aim:

Algorithm:

Source Code:
#Gaussian Blur
import cv2
import numpy as np
import matplotlib.pyplot as plt
from google.colab.patches import cv2_imshow

img = cv2.imread('/content/drive/MyDrive/Human.png')
plt.imshow(img)
plt.show()
# ksize sets the size of the Gaussian kernel; sigma of 0 lets OpenCV derive it from ksize
gaussian_blur = cv2.GaussianBlur(src=img, ksize=(3, 3), sigmaX=0, sigmaY=0)
plt.imshow(gaussian_blur)
plt.show()

#Median Blur
img = cv2.imread('/content/drive/MyDrive/Human.png')
plt.imshow(img)
plt.show()
# ksize is the aperture size (must be odd); each pixel becomes the median of its neighbourhood
median_blur = cv2.medianBlur(src=img, ksize=9)
plt.imshow(median_blur)
plt.show()

# Bilateral Blur: d = neighbourhood diameter, then sigmaColor and sigmaSpace
bilateral = cv2.bilateralFilter(img, 9, 75, 75)
cv2_imshow(bilateral)
cv2.waitKey(0)
cv2.destroyAllWindows()
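To compare the results at a glance, here is a small sketch (assuming the same input image, so all outputs share one size) that recomputes the three blurs and stacks them next to the original.

import cv2
import numpy as np
from google.colab.patches import cv2_imshow

img = cv2.imread('/content/drive/MyDrive/Human.png')
gaussian_blur = cv2.GaussianBlur(img, (3, 3), 0)
median_blur = cv2.medianBlur(img, 9)
bilateral = cv2.bilateralFilter(img, 9, 75, 75)
# original and the three blurred results side by side
cv2_imshow(np.hstack([img, gaussian_blur, median_blur, bilateral]))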

Output:
Gaussian Blur
Input image                              Output image

Median Blur
input image output image

Bilateral Blur

Date:2/3/23
Ex.No: Scale-Invariant Feature Transform (SIFT)

Aim:

Algorithm:

Source Code:
#import required libraries
import cv2
from google.colab.patches import cv2_imshow
#read input image
img = cv2.imread('/content/drive/MyDrive/Human.png')
#convert the image to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
#Initiate SIFT object with default value
sift = cv2.SIFT_create()
#find the keypoints on image(grayscale)
kp=sift.detect(gray,None)
#draw keypoints in image
img2 = cv2.drawKeypoints(gray,kp,None,flags=0)
#display the image with keypoints drawn on it
cv2_imshow(img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
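A short follow-on sketch (not part of the recorded code): sift.detectAndCompute returns the descriptors as well as the keypoints, one 128-dimensional vector per keypoint.

import cv2

img = cv2.imread('/content/drive/MyDrive/Human.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
sift = cv2.SIFT_create()
# kp is the keypoint list, des is a (number of keypoints, 128) array
kp, des = sift.detectAndCompute(gray, None)
print(len(kp), des.shape)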
Output:

Date:9/3/23
Ex.No: Binary Robust Independent Elementary Features

Aim:

Algorithm:

Source Code:
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
from google.colab.patches import cv2_imshow
img = cv.imread('/content/drive/MyDrive/Human.png', cv.IMREAD_GRAYSCALE)
# Initiate FAST detector
star = cv.xfeatures2d.StarDetector_create()
# Initiate BRIEF extractor
brief = cv.xfeatures2d.BriefDescriptorExtractor_create()
# find the keypoints with STAR
kp = star.detect(img,None)
# compute the descriptors with BRIEF
kp, des = brief.compute(img, kp)
print( brief.descriptorSize() )
print( des.shape )

cv2_imshow(img)
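One environment note, stated as an assumption about the Colab setup: the StarDetector and BRIEF extractor live in the xfeatures2d module, which ships with the contrib build of OpenCV. If the imports above fail, installing the contrib package first is the usual fix.

# BRIEF/STAR come from opencv-contrib-python, not the base opencv-python package
!pip install opencv-contrib-python

import cv2 as cv
star = cv.xfeatures2d.StarDetector_create()
brief = cv.xfeatures2d.BriefDescriptorExtractor_create()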

Output:

Date:16/3/23
Ex.No: Face Detection

Aim:

Algorithm:

Source Code:
import cv2
from google.colab.patches import cv2_imshow

img = cv2.imread("/content/drive/MyDrive/Human.png")
# Load the pre-trained Haar cascade for frontal faces
classifier = cv2.CascadeClassifier("/content/drive/MyDrive/haarcascade_frontalface_default.xml")

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces and draw a rectangle around each one
face = classifier.detectMultiScale(gray)

for x, y, w, h in face:
    cv2.rectangle(img, (x, y), (x + w, y + h), (234, 45, 66), 2)

cv2_imshow(img)
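Detection can be tuned; the following sketch uses illustrative parameter values that are not taken from the record. scaleFactor controls how much the image is shrunk between scales, and minNeighbors sets how many overlapping detections are needed to keep a face.

import cv2

img = cv2.imread("/content/drive/MyDrive/Human.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
classifier = cv2.CascadeClassifier("/content/drive/MyDrive/haarcascade_frontalface_default.xml")
# illustrative values; adjust per image
faces = classifier.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))
print(len(faces), "face(s) found")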

Output:

Date:16/3/23
Ex.No: Harris Corner Detection

Aim:

Algorithm:

Source Code:
import cv2
from google.colab.patches import cv2_imshow
import numpy as np
image = cv2.imread('/content/drive/MyDrive/Human.png')
#convert the input image into grayscale color space
operatedImage = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
#cornerHarris expects a 32-bit floating point image
operatedImage = np.float32(operatedImage)
#apply cv2.cornerHarris(src, blockSize, ksize, k) to get the corner response map
dest = cv2.cornerHarris(operatedImage, 2, 5, 0.07)
#dilate the response so the marked corners are easier to see
dest = cv2.dilate(dest, None)
#mark strong corners (response above 1% of the maximum) in red on the original image
image[dest > 0.01 * dest.max()] = [0, 0, 255]
#show the output image with corners marked
cv2_imshow(image)
Output:

Date:24/3/23
Ex.No: Features from Accelerated Segment Test (FAST)

Aim:

Algorithm:

Source Code:
import cv2
from google.colab.patches import cv2_imshow
#read input image
img = cv2.imread('/content/drive/MyDrive/Human.png')
#convert the image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#Initiate FAST object with default values
fast = cv2.FastFeatureDetector_create()
#find the keypoints on image(grayscale)
kp = fast.detect(gray,None)
#draw keypoints in image
img2 = cv2.drawKeypoints(img,kp,None)
#Print all default params
print("Threshold: ",fast.getThreshold())
print("nonmaxSuppression: ", fast.getNonmaxSuppression())
print("neighborhood:", fast.getType())
print("Total Keypoints with nonmaxSuppression:", len(kp))
#display the image with keypoints drawn on it
cv2_imshow(img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
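As a sketch of the parameter printed above, non-maximum suppression can be switched off to see how many raw detections it was filtering out.

import cv2

img = cv2.imread('/content/drive/MyDrive/Human.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
fast = cv2.FastFeatureDetector_create()
# disable non-maximum suppression and compare the raw keypoint count
fast.setNonmaxSuppression(False)
kp_raw = fast.detect(gray, None)
print("Total Keypoints without nonmaxSuppression:", len(kp_raw))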
Output:

Date:31/3/23
Ex.No: Morphological Erosion & Dilation

Aim:

Algorithm:

Source Code:
#importing the required modules
import cv2
import numpy as np
from google.colab.patches import cv2_imshow
#reading the image which is to be eroded using imread() function
img = cv2.imread("/content/drive/MyDrive/Human.png")
#defining the kernel matrix
kernel = np.ones((5, 5), np.uint8)
#erosion shrinks bright regions; dilation grows them
erodedimage = cv2.erode(img, kernel, iterations=1)
img_dilation = cv2.dilate(img, kernel, iterations=1)
#displaying the eroded and dilated images as the output on the screen
cv2_imshow(erodedimage)
cv2_imshow(img_dilation)
cv2.waitKey(0)
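A small variation, shown only as a sketch: cv2.getStructuringElement builds non-rectangular kernels (ellipse, cross), which often give smoother morphological results than a plain matrix of ones.

import cv2
from google.colab.patches import cv2_imshow

img = cv2.imread("/content/drive/MyDrive/Human.png")
# elliptical 5x5 structuring element instead of np.ones
ellipse_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
eroded_ellipse = cv2.erode(img, ellipse_kernel, iterations=1)
cv2_imshow(eroded_ellipse)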
Output:

-1

Date:31/3/23
Ex.No: Morphological open & close

Aim :

Algorithm :

Source Code :
#Open
import cv2
import numpy as np
from google.colab.patches import cv2_imshow

#reading the image which is to be opened and closed using imread() function
img = cv2.imread("/content/drive/MyDrive/Human.png")
#defining the kernel matrix
kernel = np.ones((5, 5), np.uint8)
#opening = erosion followed by dilation; closing = dilation followed by erosion
opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
#displaying the opened and closed images on the screen
cv2_imshow(opening)
cv2_imshow(closing)
cv2.waitKey(0)

#CLOSE
#importing the required modules
import cv2
import numpy as np
#reading the image which is to be closed using imread() function
img = cv2.imread("/content/drive/MyDrive/Human.png")
#defining the kernel matrix
kernel = np.ones((5, 5), np.uint8)
close = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
#displaying the closed image on the screen
cv2_imshow(close)
Output:
Open

-1

Close
