CV Lab Manual PDF
EXP.NO 01
OPENCV INSTALLATION AND WORKING WITH PYTHON
DATE
AIM:
To study the procedure for installing OpenCV for Python on the Windows operating system.
TOOLS REQUIRED:
1. Computer with 32 bit or 64 bit Windows Operating system and 4GB RAM
2. Python3
3. OpenCV computer vision Library for Open CV in Python
By default, pip installs Python packages to a system directory. This requires administrator (root)
access, so a plain pip upgrade may fail without it. To overcome that problem, we need to execute
the following command:
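The command itself is missing from the scanned copy; the standard per-user pip upgrade is presumably what was intended:
pip install --user --upgrade pip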
We can download and install OpenCV-python directly using pip. If you have previously
installed a version of OpenCV, remove it before installation to avoid conflicts using the command:
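The command is missing from the scanned copy; for the default package it is presumably:
pip uninstall opencv-python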
To install OpenCV-python, execute the following code via the command prompt:
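Again the command does not survive in the copy; the standard one is:
pip install opencv-python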
Ensure that you select the correct package for your environment. Installing multiple OpenCV
packages in the same environment can cause conflicts; if several are installed, uninstall them all
with the command given above and reinstall only one package.
5. Installing OpenCV-contrib
Contrib modules are additional modules that are constantly under development and are often
used alongside the latest releases of OpenCV. Some functions are periodically transferred between
OpenCV-python and OpenCV-contrib-python.
The following command is executed on the command prompt to install OpenCV-contrib-python:
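The command is missing from the scanned copy; it is presumably:
pip install opencv-contrib-python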
By executing the above steps, we have successfully installed the latest version of OpenCV-python on
our machine for the Windows operating system.
RESULT:
Thus, the installation procedure of OpenCV for Python was studied, and the installation was
completed successfully.
EXP.NO 02
BASIC IMAGE PROCESSING - LOADING IMAGES, CROPPING, RESIZING, THRESHOLDING, CONTOUR ANALYSIS, BLOB DETECTION
DATE
AIM:
To write a python program to implement the following Basic Image Processing operations
1. Loading images.
2. Cropping.
3. Resizing.
4. Thresholding.
5. Contour analysis.
6. Blob detection.
TOOLS REQUIRED:
1. Computer with 32 bit or 64 bit Windows Operating system and 4GB RAM
2. Python3
3. OpenCV computer vision Library for Open CV in Python
ALGORITHM:
Step 3: Perform cropping, resizing, thresholding, contour analysis and blob detection over the input image.
PROGRAM:
import cv2
import numpy as np

# Read an image in colour and in grayscale
image = cv2.imread('image.jpg')
image1 = cv2.imread('image.jpg', 0)

# Resize the image
resized_image = cv2.resize(image, (400, 300))

# Read the image in grayscale for blob detection
im = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)

# Set up the blob detector with default parameters
# (this step is missing from the manual's listing)
detector = cv2.SimpleBlobDetector_create()

# Detect blobs.
keypoints = detector.detect(im)

# Draw detected blobs as red circles sized to the blob
# (this step is also missing from the manual's listing)
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0, 0, 255),
                                      cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

# Show keypoints
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)
cv2.destroyAllWindows()
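The listing above covers loading, resizing and blob detection; cropping, thresholding and contour analysis are named in the aim but do not survive in the scanned copy. A minimal sketch of those three operations (the crop coordinates and the threshold value of 127 are illustrative assumptions):

import cv2

image = cv2.imread('image.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Cropping: take a region of interest with NumPy slicing (rows, then columns)
cropped = image[50:250, 100:300]

# Thresholding: binarise the grayscale image at an intensity of 127
ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Contour analysis: find external contours in the binary image and draw them
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(image, contours, -1, (0, 255, 0), 2)

cv2.imshow('Cropped', cropped)
cv2.imshow('Threshold', thresh)
cv2.imshow('Contours', image)
cv2.waitKey(0)
cv2.destroyAllWindows()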
RESULT:
Thus, the python program to implement cropping, resizing, thresholding, contour analysis and blob
detection is executed successfully.
EXP.NO 03
IMAGE ANNOTATION - DRAWING LINES, TEXT, CIRCLE, RECTANGLE, ELLIPSE ON IMAGES
DATE
AIM:
To write a python program to implement the following image annotation operations:
1. Drawing lines.
2. Text.
3. Circle.
4. Rectangle.
5. Ellipse.
TOOLS REQUIRED:
1. Computer with 32 bit or 64 bit Windows Operating system and 4GB RAM
2. Python3
3. OpenCV computer vision Library for Open CV in Python
ALGORITHM:
Step 3: Draw lines, text, a circle, a rectangle and an ellipse over the input image.
PROGRAM:
# Import dependencies
import cv2
# Read Images
img = cv2.imread('image.jpg')
# Print error message if image is null
if img is None:
    print('Could not read image')
# Display Image
cv2.imshow('Original Image', img)
# Draw line on image
imageLine = img.copy()
#Draw the image from point A to B
pointA = (200,80)
pointB = (450,80)
cv2.line(imageLine, pointA, pointB, (255, 255, 0), thickness=3,
lineType=cv2.LINE_AA)
cv2.imshow('Image Line', imageLine)
# Make a copy of image
imageCircle = img.copy()
# define the center of circle
circle_center = (415,190)
# define the radius of the circle
radius =100
# Draw a circle using the circle() Function
cv2.circle(imageCircle, circle_center, radius, (0, 0, 255), thickness=3,
lineType=cv2.LINE_AA)
# Display the result
cv2.imshow("Image Circle",imageCircle)
# make a copy of the original image
imageRectangle = img.copy()
# define the starting and end points of the rectangle
start_point =(300,115)
end_point =(475,225)
# draw the rectangle
cv2.rectangle(imageRectangle, start_point, end_point, (0, 0, 255), thickness= 3,
lineType=cv2.LINE_8)
# display the output
cv2.imshow('imageRectangle', imageRectangle)
# make a copy of the original image
imageEllipse = img.copy()
# define the center point of ellipse
ellipse_center = (415,190)
# define the major and minor axes of the ellipse
axis1 = (100,50)
axis2 = (125,50)
# draw the ellipse
#Horizontal
cv2.ellipse(imageEllipse, ellipse_center, axis1, 0, 0, 360, (255, 0, 0), thickness=3)
#Vertical
cv2.ellipse(imageEllipse, ellipse_center, axis2, 90, 0, 360, (0, 0, 255), thickness=3)
# display the output
cv2.imshow('ellipse Image',imageEllipse)
# make a copy of the original image
imageText = img.copy()
#let's write the text you want to put on the image
text= 'I am a Happy dog!'
#org: Where you want to put the text
org = (50,350)
# write the text on the input image
cv2.putText(imageText, text, org, fontFace = cv2.FONT_HERSHEY_COMPLEX,
fontScale = 1.5, color= (150,225,100))
# display the output image with text over it
cv2.imshow("Image Text",imageText)
cv2.waitKey(0)
cv2.destroyAllWindows()
RESULT:
Thus, the python program to draw lines, text, a circle, a rectangle and an ellipse on images is
executed successfully.
EXP.NO 04
IMAGE ENHANCEMENT - UNDERSTANDING COLOR SPACES, COLOR SPACE CONVERSION, HISTOGRAM EQUALIZATION, CONVOLUTION, IMAGE SMOOTHING, GRADIENTS, EDGE DETECTION
DATE
AIM:
To write a python program to implement the following image enhancement operations: understanding color spaces, color space conversion, histogram equalization, convolution, image smoothing, gradients and edge detection.
TOOLS REQUIRED:
1. Computer with 32 bit or 64 bit Windows Operating system and 4GB RAM
2. Python3
3. OpenCV computer vision library for Python
ALGORITHM:
Step 3: Perform color space conversion, histogram equalization, convolution, image smoothing,
gradient computation and edge detection over the input image.
PROGRAM:
import cv2
import numpy as np
img = cv2.imread('image.jpg')
img1 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img2 = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Displaying the image color space conversion
cv2.imshow('original', img)
cv2.imshow('Gray', img1)
cv2.imshow('HSV', img2)
# Histogram equalization of each channel using cv2.equalizeHist()
# (note: cv2.split returns the channels in B, G, R order for a BGR image)
B, G, R = cv2.split(img)
output1_B = cv2.equalizeHist(B)
output1_G = cv2.equalizeHist(G)
output1_R = cv2.equalizeHist(R)
# Merge the equalized channels back and display the result
equalized = cv2.merge((output1_B, output1_G, output1_R))
cv2.imshow('Histogram Equalized', equalized)
cv2.waitKey(0)
cv2.destroyAllWindows()
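Convolution, image smoothing, gradients and edge detection are named in the aim but do not appear in the listing above. A minimal sketch of those operations (the kernel sizes and Canny thresholds are illustrative assumptions):

import cv2
import numpy as np

img = cv2.imread('image.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Convolution with a 5x5 averaging kernel via cv2.filter2D
kernel = np.ones((5, 5), np.float32) / 25
convolved = cv2.filter2D(img, -1, kernel)

# Image smoothing with a Gaussian blur
smoothed = cv2.GaussianBlur(img, (5, 5), 0)

# Gradients with the Sobel operator in the x and y directions
sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=5)
sobel_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=5)

# Edge detection with the Canny detector
edges = cv2.Canny(gray, 100, 200)

cv2.imshow('Convolved', convolved)
cv2.imshow('Smoothed', smoothed)
cv2.imshow('Edges', edges)
cv2.waitKey(0)
cv2.destroyAllWindows()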
OUTPUT:
RESULT:
Thus, the python program to implement color space understanding and conversion, histogram
equalization, convolution, image smoothing, gradients and edge detection is executed
successfully.
EXP.NO 05
IMAGE FEATURES, IMAGE ALIGNMENT AND IMAGE TRANSFORMS - FOURIER, HOUGH, EXTRACT ORB IMAGE FEATURES, FEATURE MATCHING, CLONING, FEATURE MATCHING BASED IMAGE ALIGNMENT
DATE
AIM:
To write a python program to implement the following Image Features, Image Alignment and Image
Transforms operations.
TOOLS REQUIRED:
1. Computer with 32 bit or 64 bit Windows Operating system and 4GB RAM
2. Python3
3. OpenCV computer vision library for Python
ALGORITHM:
Step 3: Perform Extraction of ORB image features, Feature Matching, Cloning, Feature matching based
image alignment, Fourier Transforms, Hough Transforms over the input image.
PROGRAM:
a) Extraction of ORB image features and cloning:
import numpy as np
import cv2
# Read the query image as query_img and the train image.
# The query image is what you need to find in the train image.
# Save both in the same directory as 'query.jpg' and 'train.jpg'.
query_img = cv2.imread('query.jpg')
train_img = cv2.imread('train.jpg')
# Convert it to grayscale
query_img_bw = cv2.cvtColor(query_img,cv2.COLOR_BGR2GRAY)
train_img_bw = cv2.cvtColor(train_img, cv2.COLOR_BGR2GRAY)
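# The ORB step itself does not survive in the manual; a minimal sketch,
# assuming the grayscale images prepared above:
orb = cv2.ORB_create()
queryKeypoints, queryDescriptors = orb.detectAndCompute(query_img_bw, None)
trainKeypoints, trainDescriptors = orb.detectAndCompute(train_img_bw, None)

# Match descriptors with a brute-force Hamming matcher and keep the best 20
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(queryDescriptors, trainDescriptors),
                 key=lambda m: m.distance)
final_img = cv2.drawMatches(query_img, queryKeypoints, train_img,
                            trainKeypoints, matches[:20], None)
cv2.imshow("Matches", final_img)
cv2.waitKey(0)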
# Clone seamlessly (src: source patch, dst: destination image, src_mask: mask
# around the object in src, center: its target location in dst -- their
# construction is not shown in the manual's listing).
output = cv2.seamlessClone(src, dst, src_mask, center, cv2.NORMAL_CLONE)
cv2.imshow("opencv-normal-clone.jpg", output)
# cv2.imshow("opencv-mixed-clone-example.jpg", mixed_clone)
cv2.waitKey(0)
b) Feature Matching, Cloning, Feature matching based image alignment, Fourier Transforms,
Hough Transforms:
# Python program to illustrate HoughLine
# method for line detection
import cv2
import numpy as np
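# Only the imports survive at this point in the manual; a minimal HoughLines
# sketch follows (the 'image.jpg' filename, Canny thresholds and accumulator
# threshold are assumptions, not the manual's own values):
img = cv2.imread('image.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)

# Detect lines in the edge map (rho resolution 1 pixel, theta 1 degree)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 200)
if lines is not None:
    for line in lines:
        rho, theta = line[0]
        a, b = np.cos(theta), np.sin(theta)
        x0, y0 = a * rho, b * rho
        # Extend each line far in both directions for drawing
        x1, y1 = int(x0 + 1000 * (-b)), int(y0 + 1000 * a)
        x2, y2 = int(x0 - 1000 * (-b)), int(y0 - 1000 * a)
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)

cv2.imshow('Hough Lines', img)
cv2.waitKey(0)
cv2.destroyAllWindows()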
OUTPUT:
RESULT:
Thus, the python program to implement the Extraction of ORB image features, Feature Matching,
Cloning, Feature matching based image alignment, Fourier Transforms and Hough Transforms is executed
successfully.
EXP.NO 06
IMAGE SEGMENTATION USING GRAPHCUT / GRABCUT
DATE
AIM:
To write a python program to implement the image segmentation process using the Graph cut and
Grab cut methods.
TOOLS REQUIRED:
1. Computer with 32 bit or 64 bit Windows Operating system and 4GB RAM
2. Python3
3. OpenCV computer vision library for Python
ALGORITHM:
Step 3: Perform image segmentation using the Graph cut and Grab cut methods over the input image.
PROGRAM:
a) GRAPH CUT:
#import cv2
import numpy as np
import matplotlib.pyplot as plt
import maxflow
from skimage import data, color
# Load an example image (you can replace this with your own image loading code)
image = data.astronaut()
# image = cv2.imread('image.jpg')
image_gray = color.rgb2gray(image)
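The listing stops after the grayscale conversion. A minimal completion sketch using the PyMaxflow library imported above (the smoothness weight of 50 and the intensity-based data term are illustrative assumptions, not the manual's own values):

g = maxflow.Graph[float]()
nodeids = g.add_grid_nodes(image_gray.shape)

# Smoothness term: penalise label changes between neighbouring pixels
g.add_grid_edges(nodeids, 50)

# Data term: tie bright pixels to one terminal, dark pixels to the other
pixels = image_gray * 255
g.add_grid_tedges(nodeids, pixels, 255 - pixels)

# Solve the min-cut / max-flow problem and read back the binary labels
g.maxflow()
segments = g.get_grid_segments(nodeids)

plt.imshow(np.logical_not(segments), cmap='gray')
plt.show()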
b) GRABCUT:
# Python program to illustrate
# foreground extraction using
# GrabCut algorithm
# organize imports
import numpy as np
import cv2
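# The manual's listing ends at the imports; a minimal GrabCut sketch follows
# (the 'image.jpg' filename and the rectangle coordinates are assumptions):
img = cv2.imread('image.jpg')
mask = np.zeros(img.shape[:2], np.uint8)

# Temporary model arrays used internally by the algorithm
bgdModel = np.zeros((1, 65), np.float64)
fgdModel = np.zeros((1, 65), np.float64)

# (x, y, w, h) rectangle that contains the foreground object
rect = (50, 50, 450, 290)
cv2.grabCut(img, mask, rect, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked as certain or probable background become 0, the rest 1
mask2 = np.where((mask == 2) | (mask == 0), 0, 1).astype('uint8')
result = img * mask2[:, :, np.newaxis]

cv2.imshow('GrabCut', result)
cv2.waitKey(0)
cv2.destroyAllWindows()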
OUTPUT:
GRAPH CUT:
[Figure 1: graph cut segmentation result]
GRABCUT:
[Figure 1: GrabCut foreground extraction result]
RESULT:
Thus, the python program to implement the Image segmentation process using Graph cut and Grab
cut method is executed successfully.
EXP.NO 07
CAMERA CALIBRATION WITH CIRCULAR GRID
DATE
AIM:
To write a python program to implement the concept of camera calibration with circular grid.
APPARATUS:
1. Computer with 32 bit or 64 bit Windows Operating system and 4GB RAM
2. Python3
3. OpenCV computer vision library for Python
ALGORITHM:
Step 3: Apply the above mentioned concept of camera calibration with circular grid in the live video.
PROGRAM:
import numpy as np
import cv2
import yaml
# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
######################################## Blob Detector ########################################
# Set up SimpleBlobDetector parameters (the creation of the params object is
# missing from the manual's listing)
blobParams = cv2.SimpleBlobDetector_Params()

# Change thresholds
blobParams.minThreshold = 8
blobParams.maxThreshold = 255

# Filter by Area.
blobParams.filterByArea = True
blobParams.minArea = 64 # minArea may be adjusted to suit your experiment
blobParams.maxArea = 2500 # maxArea may be adjusted to suit your experiment

# Filter by Circularity
blobParams.filterByCircularity = True
blobParams.minCircularity = 0.1

# Filter by Convexity
blobParams.filterByConvexity = True
blobParams.minConvexity = 0.87

# Filter by Inertia
blobParams.filterByInertia = True
blobParams.minInertiaRatio = 0.01

# Create the detector with the parameters
blobDetector = cv2.SimpleBlobDetector_create(blobParams)
###################################################################################################
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane

# objp must hold the known 3D circle-centre positions of the grid; its
# construction (for a 4 x 11 asymmetric grid) is not shown in the manual.
cap = cv2.VideoCapture(0)
found = 0
while found < 10: # Here, 10 can be changed to however many views you want
    ret, img = cap.read() # Capture frame-by-frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the circle grid (a 4 x 11 asymmetric pattern is assumed)
    ret, corners = cv2.findCirclesGrid(gray, (4, 11), None,
                                       flags=cv2.CALIB_CB_ASYMMETRIC_GRID,
                                       blobDetector=blobDetector)
    if ret == True:
        objpoints.append(objp) # Every loop objp is the same, in 3D.
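        # The manual's listing ends here; a sketch of how the loop and the
        # calibration call are typically completed (the refinement window
        # size and the YAML output are assumptions):
        corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        imgpoints.append(corners2)
        found += 1

cap.release()
cv2.destroyAllWindows()

# Compute the camera matrix and distortion coefficients from the collected views
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints,
                                                   gray.shape[::-1], None, None)

# Save the calibration result to a YAML file
data = {'camera_matrix': np.asarray(mtx).tolist(), 'dist_coeff': np.asarray(dist).tolist()}
with open('calibration.yaml', 'w') as f:
    yaml.dump(data, f)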
RESULT:
Thus, the python program to implement the concept of camera calibration with circular grid is
executed successfully.
EXP.NO 08
CREATING DEPTH MAP FROM STEREO IMAGES
DATE
AIM:
To write a python program to implement the concept of creating depth map from stereo images.
APPARATUS:
1. Computer with 32 bit or 64 bit Windows Operating system and 4GB RAM
2. Python3
3. OpenCV computer vision library for Python
ALGORITHM:
Step 3: Apply the above mentioned concept of creating depth map from stereo image to the image.
PROGRAM:
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
imgL = cv.imread('tsukuba_l.png', 0)
imgR = cv.imread('tsukuba_r.png', 0)
# cv.imshow('ImgL', imgL)
stereo = cv.StereoBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(imgL, imgR)
plt.imshow(disparity, 'gray')
plt.show()
OUTPUT:
[Figure 1: disparity map computed from the stereo pair]
RESULT:
Thus, the python program to implement the concept of creating depth map from stereo images is
executed successfully.
EXP.NO 09
OBJECT DETECTION AND TRACKING USING KALMAN FILTER, CAMSHIFT
DATE
AIM:
To write a python program to implement the following object detection and tracking operations
APPARATUS:
1. Computer with 32 bit or 64 bit Windows Operating system and 4GB RAM
2. Python3
3. OpenCV computer vision library for Python
ALGORITHM:
Step 3: Apply the above mentioned object detection and tracking operations on the prerecorded video or live
video from the camera.
PROGRAM:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
output = None
x, y, w, h = 0, 0, 0, 0
first_point_saved = False
second_point_saved = False
track_window = (x, y, w, h)
can_track = False

# initialize tracker
# (the frame-reading loop and the hsv/roi/roi_hist/term_crit setup that
# precede this fragment are not shown in the manual's listing)

# Start tracking
if can_track == True:
    dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # apply camshift to get the new location
    ret, track_window = cv2.CamShift(dst, track_window, term_crit)
    # Draw it on image
    pts = cv2.boxPoints(ret)
    pts = np.int0(pts)
    print(ret)
    print(track_window)
    cv2.imshow('roi', roi)
    output = cv2.polylines(frame, [pts], True, 255, 2)
else:
    output = frame
    if first_point_saved:
        cv2.circle(output, (x, y), 5, (0, 0, 255), -1)

cap.release()
cv2.destroyAllWindows()
OBJECT TRACKING:
import cv2
from Detector import detect
from KalmanFilter import KalmanFilter

def main():
    HiSpeed = 100
    ControlSpeedVar = 50 # frame delay control (a slider in the original source; a fixed value is assumed here)
    VideoCap = cv2.VideoCapture(0) # video source (not shown in the manual; the webcam is assumed)
    KF = KalmanFilter(0.1, 1, 1, 1, 0.1, 0.1)
    debugMode = 1
    while(True):
        # Read frame
        ret, frame = VideoCap.read()
        # Detect object
        centers = detect(frame, debugMode)
        # Predict
        (x, y) = KF.predict()
        # Draw a rectangle as the predicted object position
        cv2.rectangle(frame, (int(x - 15), int(y - 15)), (int(x + 15), int(y + 15)), (255, 0, 0), 2)
        # Update
        (x1, y1) = KF.update(centers[0])
        # Draw a rectangle as the estimated object position
        cv2.rectangle(frame, (int(x1 - 15), int(y1 - 15)), (int(x1 + 15), int(y1 + 15)), (0, 0, 255), 2)
        cv2.imshow('image', frame)
        cv2.waitKey(HiSpeed - ControlSpeedVar + 1)

if __name__ == "__main__":
    # execute main
    main()
OBJECT DETECTOR:
import cv2
import numpy as np

def detect(frame, debugMode):
    # Convert frame from BGR to GRAY
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if (debugMode):
        cv2.imshow('gray', gray)
    # Binarise the grayscale image (this thresholding step is missing from
    # the manual's listing; a plain binary threshold is assumed)
    ret, img_thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    # Find contours
    contours, _ = cv2.findContours(img_thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        # ref: https://docs.opencv.org/trunk/dd/d49/tutorial_py_contour_features.html
        (x, y), radius = cv2.minEnclosingCircle(c)
        radius = int(radius)
        centers.append(np.array([[x], [y]]))
    return centers
KALMAN FILTER (fragment):
import numpy as np
import matplotlib.pyplot as plt

# Initial State
self.x = np.matrix([[0], [0], [0], [0]])

def predict(self):
    # Refer to Eq.(9) and Eq.(10) in https://machinelearningspace.com/object-tracking-simple-implementation-of-kalman-filter-in-python/
    I = np.eye(self.H.shape[1])
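Only fragments of the KalmanFilter class survive in the manual. A minimal constant-velocity implementation consistent with those fragments (the matrix definitions below follow the tutorial cited above as a sketch; they are not the manual's own code):

import numpy as np

class KalmanFilter:
    def __init__(self, dt, u_x, u_y, std_acc, x_std_meas, y_std_meas):
        self.dt = dt
        self.u = np.matrix([[u_x], [u_y]]) # control input (acceleration)
        self.x = np.matrix([[0], [0], [0], [0]]) # state: x, y, vx, vy
        # State transition matrix for a constant-velocity model
        self.A = np.matrix([[1, 0, dt, 0],
                            [0, 1, 0, dt],
                            [0, 0, 1, 0],
                            [0, 0, 0, 1]])
        # Control matrix
        self.B = np.matrix([[dt**2 / 2, 0],
                            [0, dt**2 / 2],
                            [dt, 0],
                            [0, dt]])
        # Measurement matrix: only x and y are observed
        self.H = np.matrix([[1, 0, 0, 0],
                            [0, 1, 0, 0]])
        # Simplified process and measurement noise covariances
        self.Q = np.eye(4) * std_acc**2
        self.R = np.matrix([[x_std_meas**2, 0],
                            [0, y_std_meas**2]])
        self.P = np.eye(self.A.shape[1])

    def predict(self):
        # Project the state and error covariance ahead
        self.x = self.A * self.x + self.B * self.u
        self.P = self.A * self.P * self.A.T + self.Q
        return self.x[0:2]

    def update(self, z):
        # Compute the Kalman gain and correct the state with measurement z
        S = self.H * self.P * self.H.T + self.R
        K = self.P * self.H.T * np.linalg.inv(S)
        self.x = self.x + K * (np.matrix(z) - self.H * self.x)
        I = np.eye(self.H.shape[1])
        self.P = (I - K * self.H) * self.P
        return self.x[0:2]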
OUTPUT:
RESULT:
Thus, the python program to implement the concept of object detection and tracking operations is
executed successfully.
CONTENT BEYOND SYLLABUS
EXP.NO 01
CONTENT BASED VIDEO RETRIEVAL
DATE
AIM:
To write a python program to implement the concept of content based video retrieval.
APPARATUS:
1. Computer with 32 bit or 64 bit Windows Operating system and 4GB RAM
2. Python3
3. OpenCV computer vision library for Python
ALGORITHM:
PROGRAM:
import argparse
from collections import Counter
import time
from pyspin.spin import make_spin, Spin2
import app.config as config
from app.helpers import display_results_histogram, get_number_of_frames,
get_video_filenames, get_video_fps, \
get_video_first_frame, print_finished_training_message, terminal_yes_no_question,
show_final_match, \
video_file_already_stabilised
from app.histogram import HistogramGenerator
from app.video_operations import VideoStabiliser
def main():
    """
    Program entry point. Parses command line input to decide which phase of the system to run.
    :return: None
    """
    parser = argparse.ArgumentParser()
    parser.add_argument("-m", "--model",
                        help="The histogram model to use. Choose from the following options: 'rgb', 'hsv' or 'gray'. "
                             "Leave empty to train using all 3 histogram models.")
    parser.add_argument("--mode",
                        required=True,
                        help="The mode to run the code in. Choose from the following options: 'train', 'test' or "
                             "'segment'.")
    parser.add_argument("--showhists",
                        action='store_true',
                        help="Specify whether you want to display each generated histogram.")
    parser.add_argument("-d", "--debug",
                        action='store_true',
                        help="Specify whether you want to print additional logs for debugging purposes.")
    args = parser.parse_args()
    config.debug = args.debug
    config.mode = args.mode
    config.show_histograms = args.showhists
    config.model = args.model
    if config.mode == "train":
        off_line_colour_based_feature_extraction_phase()
    elif config.mode == "test":
        on_line_retrieval_phase()
    elif config.mode == "segment":
        database_preprocessing_phase()
    else:
        print("Wrong mode chosen. Choose from the following options: 'train', 'test' or 'segment'.")
        exit(0)
@make_spin(Spin2, "Generating histograms for database videos...".format(config.model))
def off_line_colour_based_feature_extraction_phase():
    """
    Generates and stores averaged greyscale, RGB and HSV histograms for all the videos in
    the directory-based database.
    :return: None
    """
    directory = "../footage/"
    files = get_video_filenames(directory)
    # start measuring runtime
    start_time = time.time()
    for file in files:
        if config.model == "gray":
            histogram_generator = HistogramGenerator(directory, file)
            histogram_generator.generate_video_greyscale_histogram()
        elif config.model == "rgb":
            histogram_generator = HistogramGenerator(directory, file)
            histogram_generator.generate_video_rgb_histogram()
        elif config.model == "hsv":
            histogram_generator = HistogramGenerator(directory, file)
            histogram_generator.generate_video_hsv_histogram()
        else:
            histogram_generator_gray = HistogramGenerator(directory, file)
            histogram_generator_gray.generate_video_greyscale_histogram()
            histogram_generator_rgb = HistogramGenerator(directory, file)
            histogram_generator_rgb.generate_video_rgb_histogram()
            histogram_generator_hsv = HistogramGenerator(directory, file)
            histogram_generator_hsv.generate_video_hsv_histogram()
    runtime = round(time.time() - start_time, 2)
    print_finished_training_message(config.model, directory, runtime)
def on_line_retrieval_phase():
    """
    Prompts the user to stabilise and crop the query video before generating the same
    averaged greyscale, RGB and HSV histograms to compare with the database videos'
    previously stored histograms using distance metrics.
    :return: None
    """
    directory = "../recordings/"
    recordings = ["recording1.mp4", "recording2.mp4", "recording3.mp4", "recording4.mp4",
                  "recording5.mp4", "recording6.mp4", "recording7.mp4", "recording8.mp4"]
    mismatches_directory = "../recordings/mismatches/"
    mismatches = ["mismatch1.mp4", "mismatch2.mp4"]
    # 0: cloudy-sky, 1: seal, 2: butterfly (skewed), 3: wind-turbine, 4: ice-hockey,
    # 5: jellyfish, 6: people-dancing, 7: jellyfish (skewed)
    file = recordings[7]
    # ask user to stabilise the input query video or not
    is_stabilise_video = terminal_yes_no_question("Do you wish to stabilise the recorded query video?")
    stable_filename = "stable-" + file[:-4] + ".avi"  # the stable version of the video
    # yes: stabilise the video and use the stable .avi version
    if is_stabilise_video:
        if not video_file_already_stabilised(directory + stable_filename):
            VideoStabiliser(directory, "{}".format(file))
        else:
            print("\nStabilised version of query already found: '{}'".format(stable_filename))
        file = stable_filename
    # no: check whether a stabilised version already exists - use it if it does
    else:
        if video_file_already_stabilised(directory + stable_filename):
            file = stable_filename
    print("\nUsing query: '{}'".format(file))
    print("\nPlease crop the recorded query video for the signature to be generated.")
    if config.model == "gray":
        histogram_generator = HistogramGenerator(directory, file)
        histogram_generator.generate_video_greyscale_histogram(is_query=True)
        histogram_generator.match_histograms()
    elif config.model == "rgb":
        histogram_generator = HistogramGenerator(directory, file)
        histogram_generator.generate_video_rgb_histogram(is_query=True)
        histogram_generator.match_histograms()
    elif config.model == "hsv":
        histogram_generator = HistogramGenerator(directory, file)
        histogram_generator.generate_video_hsv_histogram(is_query=True)
        histogram_generator.match_histograms()
    else:
        # calculate query histogram
        # greyscale
        histogram_generator_gray = HistogramGenerator(directory, file)
        histogram_generator_gray.generate_video_greyscale_histogram(is_query=True)
        cur_reference_points = histogram_generator_gray.get_current_reference_points()
        # start measuring runtime (after manual cropping)
        start_time = time.time()
        # RGB
        histogram_generator_rgb = HistogramGenerator(directory, file)
        histogram_generator_rgb.generate_video_rgb_histogram(is_query=True,
                                                             cur_ref_points=cur_reference_points)
        # HSV
        histogram_generator_hsv = HistogramGenerator(directory, file)
        histogram_generator_hsv.generate_video_hsv_histogram(is_query=True,
                                                             cur_ref_points=cur_reference_points)
        # calculate distances between query and DB histograms
        histogram_generator_gray.match_histograms(cur_all_model='gray')
        histogram_generator_rgb.match_histograms(cur_all_model='rgb')
        histogram_generator_hsv.match_histograms(cur_all_model='hsv')
        # Combine matches from all 3 histogram models to output one final result
        all_results = histogram_generator_hsv.get_results_array()  # array of all matches made (using weights)
        results_count = Counter(all_results)  # count the number of matches made for each video in all_results
        # transform from count to percentage of matches made
        results_percentage = dict()
        for match in results_count:
            percentage = round((results_count[match] / len(all_results)) * 100.0, 2)
            results_percentage[match] = percentage
        display_results_histogram(results_percentage)
        print("Matches made: {}".format(results_count))
        print("% of matches made: {}".format(results_percentage))
        # find best result
        final_result_name = ""
        final_result_count = 0
        for i, r in enumerate(results_count):
            if i == 0:
                final_result_name = r
                final_result_count = results_count[r]
            else:
                if results_count[r] > final_result_count:
                    final_result_name = r
                    final_result_count = results_count[r]
        # print results
        runtime = round(time.time() - start_time, 2)
        accuracy = final_result_count / len(all_results)
        get_video_first_frame(directory + file, "../results", is_query=True)
        get_video_first_frame("../footage/{}".format(final_result_name), "../results", is_result=True)
        show_final_match(final_result_name, "../results/query.png", "../results/result.png",
                         runtime, accuracy)
        print_finished_training_message(final_result_name, config.model, runtime, accuracy)
def database_preprocessing_phase():
    """
    Applies a shot boundary detection algorithm to a video for segmentation.
    :return: None
    """
    directory = "../recordings/"
    video = "scene-segmentation.mp4"
    # directory = "/Volumes/ADAM2/"
    # movies = ["Inception (2010).mp4"]
    # video = movies[0]
    shot_boundary_detector = HistogramGenerator(directory, video)
    video_capture = shot_boundary_detector.get_video_capture()
    frame_count = get_number_of_frames(vc=video_capture)
    fps = get_video_fps(vc=video_capture)
    print("Total Frames: {}".format(frame_count))
    print("FPS: {}\n".format(fps))
    # start measuring runtime
    start_time = time.time()
    # start processing video for shot boundary detection
    print("Starting to process video for shot boundary detection...")
    shot_boundary_detector.rgb_histogram_shot_boundary_detection(threshold=7)
    # print final results
    runtime = round(time.time() - start_time, 2)
    print("--- Number of frames in video: {} ---".format(frame_count))
    print("--- Runtime: {} seconds ---".format(runtime))


if __name__ == "__main__":
    main()
OUTPUT:
[Figure: greyscale histogram for 'stable-recording8.avi']
Match 'jellyfish.mp4' found in 8.88s with 75.0% accuracy
RESULT:
Thus, the python program to implement content based video retrieval is executed successfully.
EXP.NO 02
FACE DETECTION & RECOGNITION
DATE
AIM:
To write a python program to implement the face detection & recognition using OpenCV.
APPARATUS:
1. Computer with 32 bit or 64 bit Windows Operating system and 4GB RAM
2. Python3
3. OpenCV computer vision library for Python
ALGORITHM:
PROGRAM:
import cv2
import numpy as np
f_casd = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    face = f_casd.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in face:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 255), 2)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
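The listing above performs detection only, although the aim also names recognition. A minimal recognition sketch using the LBPH recognizer from the opencv-contrib package (the training image filenames and labels are placeholders, not part of the manual):

import cv2
import numpy as np

# LBPH face recognizer (requires the opencv-contrib-python package)
recognizer = cv2.face.LBPHFaceRecognizer_create()

# Train on grayscale face crops with integer identity labels (placeholder files)
faces = [cv2.imread('person1_face.jpg', 0), cv2.imread('person2_face.jpg', 0)]
labels = np.array([0, 1])
recognizer.train(faces, labels)

# Predict the identity of a new face crop; a lower confidence value means a closer match
test_face = cv2.imread('unknown_face.jpg', 0)
label, confidence = recognizer.predict(test_face)
print('Predicted label: {}, confidence: {:.2f}'.format(label, confidence))

RESULT:
Thus, the python program to implement face detection & recognition is executed successfully.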