Name: ARISH B
Semester: VI
19ADEN2016 – Image and Video Analytics
Name :
Class :
Roll No. :
Certified that this is a bonafide record of work done by the above student of the
__________ during the year __________.
List of Exercises
Dr. Mahalingam College of Engineering & Technology
Criteria    | Excellent | Good | Satisfactory | Needs Improvement
Total Marks | 75        | 64   | 49           | 33
EX NO:01
Write a program that computes the T-pyramid of an image
DATE :
AIM:
To write a program that computes the T-pyramid of an image.
ALGORITHM:
1. Read the input image.
2. Build a Gaussian pyramid by repeatedly smoothing and downsampling the image with cv2.pyrDown.
3. For each level, upsample the next coarser level back to the current size with cv2.pyrUp and take the absolute difference between the two.
4. Collect the difference images as the levels of the T-pyramid and display them.
PROGRAM:
import cv2
import numpy as np

def compute_t_pyramid(image, levels):
    # Build the Gaussian pyramid: each level is a smoothed, half-size copy
    gaussian_pyramid = [image]
    for i in range(levels - 1):
        gaussian_pyramid.append(cv2.pyrDown(gaussian_pyramid[i]))
    # Compute T-pyramid: difference between each level and the expanded next coarser level
    t_pyramid = []
    for i in range(levels - 1):
        expanded = cv2.pyrUp(gaussian_pyramid[i + 1],
                             dstsize=(gaussian_pyramid[i].shape[1],
                                      gaussian_pyramid[i].shape[0]))
        t_level = cv2.absdiff(gaussian_pyramid[i], expanded)
        t_pyramid.append(t_level)
    return t_pyramid

image = cv2.imread("input.jpg")  # placeholder path -- point this at the actual image
for i, level in enumerate(compute_t_pyramid(image, 4)):
    cv2.imshow(f"T-pyramid level {i}", level)
cv2.waitKey(0)
cv2.destroyAllWindows()
OUTPUT:
Original Image:
Criteria                 | Marks
Preparation              | /20
Observation              | /25
Interpretation of Result | /20
Viva                     | /10
Total                    | /75
RESULT:
Thus, the program that computes the T-pyramid of an image has been implemented successfully.
EX NO:02 Write a program that derives the quad tree representation of an image
using the homogeneity criterion of equal intensity.
DATE :
AIM:
To write a program that derives the quad tree representation of an image using the homogeneity criterion of equal intensity.
ALGORITHM:
1. Read the input image.
2. Split the image into four equal quadrants (a 2×2 grid).
3. Test each quadrant against the homogeneity criterion: a block is homogeneous when all of its pixels have the same intensity.
4. Recursively split every quadrant that is not homogeneous; homogeneous blocks become leaves of the quad tree.
5. Display the four quadrants and the image reassembled from them.
PROGRAM:
import cv2
import numpy as np
import matplotlib.pyplot as plt
from operator import add
from functools import reduce

# Placeholder path -- point this at the actual input image
img = cv2.imread(r"C://Users//Arish//Downloads//dog.webp")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # matplotlib expects RGB

def split4(image):
    # Split the image into four quadrants (a 2x2 grid)
    half_split = np.array_split(image, 2)
    res = map(lambda x: np.array_split(x, 2, axis=1), half_split)
    return reduce(add, res)

def concatenate4(nw, ne, sw, se):
    # Reassemble the four quadrants into one image
    top = np.concatenate((nw, ne), axis=1)
    bottom = np.concatenate((sw, se), axis=1)
    return np.concatenate((top, bottom), axis=0)

split_img = split4(img)
print(split_img[0].shape, split_img[1].shape)

fig, axs = plt.subplots(2, 2)
axs[0, 0].imshow(split_img[0])
axs[0, 1].imshow(split_img[1])
axs[1, 0].imshow(split_img[2])
axs[1, 1].imshow(split_img[3])

full_img = concatenate4(*split_img)
plt.figure()
plt.imshow(full_img)
plt.show()
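The program above performs only a single four-way split. A full quad tree under the equal-intensity homogeneity criterion recurses until every block is uniform; the sketch below shows that recursion, assuming a grayscale image and reusing split4 from the program (for real photographs almost every block splits down to single pixels, so this is practical mainly for synthetic or thresholded images):

def build_quadtree(block):
    # Homogeneity criterion: every pixel in the block has the same intensity
    if block.min() == block.max() or min(block.shape[:2]) <= 1:
        return {"leaf": True, "value": int(block.flat[0])}
    # Otherwise split into four quadrants and recurse on each
    return {"leaf": False,
            "children": [build_quadtree(q) for q in split4(block)]}

gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
tree = build_quadtree(gray)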
OUTPUT:
Original Image:
Output Image:
Criteria                 | Marks
Preparation              | /20
Observation              | /25
Interpretation of Result | /20
Viva                     | /10
Total                    | /75
RESULT:
Thus, the program that derives the quad tree representation of an image using the homogeneity criterion of equal intensity has been implemented successfully.
EX NO:03
Develop programs for the following geometric transforms
DATE :
AIM:
To develop programs for the following geometric transforms: (a) rotation, (b) change of scale, (c) skewing, (d) affine transform calculated from three pairs of corresponding points, and (e) bilinear transform calculated from four pairs of corresponding points.
ALGORITHM:
TRANSFORMATION MATRICES:
For each desired transformation, create a corresponding transformation matrix. For example:
• Translation: a 3×3 matrix with 1s on the diagonal and the translation values in the last column.
• Rotation: a rotation matrix computed from trigonometric functions (sin and cos) of the given rotation angle.
• Scaling: a 3×3 matrix with the scaling factors on the diagonal and 1 as the final diagonal element.
• Shearing: an affine transformation matrix with the shear factors in the off-diagonal elements.
COMBINE TRANSFORMATION MATRICES:
• Multiply the individual transformation matrices in the order you want to apply them. Matrix multiplication is not commutative, so the order matters: the matrix of the transform applied first sits rightmost in the product. The combined matrix represents the whole sequence of transformations.
APPLY THE COMBINED TRANSFORMATION MATRIX:
In image processing, you can use libraries such as OpenCV or Pillow to apply the combined transformation matrix to the image. For example, in OpenCV the top two rows of the combined 3×3 matrix can be passed to cv2.warpAffine, as in the sketch below.
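A minimal sketch of this build, combine, and apply pipeline with NumPy and OpenCV (the image path and the angle, scale, and translation values are illustrative):

import cv2
import numpy as np

image = cv2.imread("input.jpg")  # illustrative path
h, w = image.shape[:2]

# Individual 3x3 transformation matrices
theta = np.radians(30)
rotation = np.array([[np.cos(theta), -np.sin(theta), 0],
                     [np.sin(theta),  np.cos(theta), 0],
                     [0, 0, 1]])
scaling = np.diag([0.8, 0.8, 1.0])
translation = np.array([[1.0, 0, 40],
                        [0, 1.0, 25],
                        [0, 0, 1.0]])

# Combine: the transform applied first is rightmost in the product
combined = translation @ rotation @ scaling

# Apply: cv2.warpAffine takes the top two rows of the 3x3 matrix
result = cv2.warpAffine(image, combined[:2], (w, h))
cv2.imwrite("combined_transform.jpg", result)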
PROGRAM:
import cv2
import numpy as np

# NOTE: only fragments of the helper functions appeared in the record;
# the bodies below are a reconstruction using standard OpenCV calls.

# Placeholder path -- point this at the actual input image
image = cv2.imread("input.jpg")

def rotate_image(img, angle, center=None):
    h, w = img.shape[:2]
    if center is None:
        center = (w // 2, h // 2)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    rotated_image = cv2.warpAffine(img, M, (w, h))
    return rotated_image

def scale_image(img, factors):
    fx, fy = factors
    scaled_image = cv2.resize(img, None, fx=fx, fy=fy)
    return scaled_image

def skew_image(img, factors):
    h, w = img.shape[:2]
    sx, sy = factors
    M = np.float32([[1, sx, 0], [sy, 1, 0]])
    skewed_image = cv2.warpAffine(img, M, (w, h))
    return skewed_image

def affine_transform_image(img, M):
    h, w = img.shape[:2]
    transformed_image = cv2.warpAffine(img, M, (w, h))
    return transformed_image

def bilinear_transform_image(img, M):
    # Same warp, with bilinear interpolation made explicit
    h, w = img.shape[:2]
    dst = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_LINEAR)
    return dst

# Rotation
rotated_image = rotate_image(image, 45)

# Change of scale
scaling_factors = (0.5, 2)
scaled_image = scale_image(image, scaling_factors)

# Skewing
skewing_factors = (0.2, 0.3)
skewed_image = skew_image(image, skewing_factors)

# Affine transform
A = np.float32([[1, 0.5, 50],
                [0.5, 1, -20]])
transformed_image_affine = affine_transform_image(image, A)

# Bilinear transform
B = np.float32([[2, 0.5, 0],
                [0.5, 2, 0]])
transformed_image_bilinear = bilinear_transform_image(image, B)

# Display images
cv2.imshow("Original Image", image)
cv2.imshow("Rotated Image", rotated_image)
cv2.imshow("Scaled Image", scaled_image)
cv2.imshow("Skewed Image", skewed_image)
cv2.imshow("Affine Transformed Image", transformed_image_affine)
cv2.imshow("Bilinear Transformed Image", transformed_image_bilinear)
cv2.waitKey(0)
cv2.destroyAllWindows()
OUTPUT:
(e) Bilinear transform:
Criteria                 | Marks
Preparation              | /20
Observation              | /25
Interpretation of Result | /20
Viva                     | /10
Total                    | /75
RESULT:
Thus, the programs for the given geometric transforms have been implemented successfully.
EX NO:04
Develop a program to implement Object Detection and Recognition
DATE :
AIM:
To develop a program to implement object detection and recognition.
ALGORITHM:
1. Clone the YOLOv7 object-tracking repository and install its requirements.
2. Download the input video (here, a YouTube clip of people walking).
3. Load the pretrained yolov7.pt weights, downloading them first if they are not present.
4. Run inference on every frame, keeping only the person class, and apply non-maximum suppression to the raw detections.
5. Draw the detected bounding boxes on each frame and save the annotated video under runs/detect.
PROGRAM:
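Only the run log survives in this record. A minimal sketch of the notebook commands that produce such a log (the repository URL and script name are assumptions; the weights file, video URL, and flags appear verbatim in the Namespace line of the output below):

# Assumed repository URL, inferred from "Cloning into 'yolov7-object-tracking'"
!git clone https://github.com/RizwanMunawar/yolov7-object-tracking
%cd yolov7-object-tracking
!pip install -r requirements.txt

# Script name is an assumption; the flags are taken from the output's Namespace line
!python detect_and_track.py --weights yolov7.pt --download \
    --source "https://www.youtube.com/watch?v=ORrrKXGx2SE" \
    --classes 0 --name "YOLOV7 Object Tracking"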
OUTPUT:
Cloning into 'yolov7-object-tracking'...
remote: Enumerating objects: 223, done.
remote: Counting objects: 100% (23/23), done.
remote: Compressing objects: 100% (21/21), done.
remote: Total 223 (delta 8), reused 9 (delta 2), pack-reused 200
Receiving objects: 100% (223/223), 171.97 KiB | 4.41 MiB/s, done.
Resolving deltas: 100% (107/107), done.
[youtube] Extracting URL: https://www.youtube.com/watch?v=ORrrKXGx2SE
[youtube] ORrrKXGx2SE: Downloading webpage
[youtube] ORrrKXGx2SE: Downloading ios player API JSON
[youtube] ORrrKXGx2SE: Downloading android player API JSON
[youtube] ORrrKXGx2SE: Downloading player 190c935f
[youtube] ORrrKXGx2SE: Downloading m3u8 information
[info] ORrrKXGx2SE: Downloading 1 format(s): 248+251
[download] Destination: background video | people | walking | [ORrrKXGx2SE].f248.webm
[download] 100% of 4.15MiB in 00:00:00 at 21.18MiB/s
[download] Destination: background video | people | walking | [ORrrKXGx2SE].f251.webm
[download] 100% of 6.34KiB in 00:00:00 at 104.16KiB/s
[Merger] Merging formats into "background video | people | walking | [ORrrKXGx2SE].webm"
Deleting original file background video | people | walking | [ORrrKXGx2SE].f251.webm (pass -k to
keep)
Deleting original file background video | people | walking | [ORrrKXGx2SE].f248.webm (pass -k to
keep)
Namespace(weights=['yolov7.pt'], download=True, source='background video | people | walking |
[ORrrKXGx2SE].webm', img_size=640, conf_thres=0.25, iou_thres=0.45, device='', view_img=False,
save_txt=False, save_conf=False, nosave=False, classes=[0], agnostic_nms=False, augment=False,
update=False, project='runs/detect', name='YOLOV7 Object Tracking', exist_ok=False, no_trace=False,
colored_trk=False, save_bbox_dim=False, save_with_object_id=False)
Model weights not found. Attempting to download now...
yolov7.pt: 100% 72.1M/72.1M [00:01<00:00, 75.3MiB/s]
YOLOR yolov7-object-tracking-49-g45def67 torch 2.1.0+cu118 CPU
Fusing layers...
RepConv.fuse_repvgg_block
RepConv.fuse_repvgg_block
RepConv.fuse_repvgg_block
/usr/local/lib/python3.10/dist-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an
upcoming release, it will be required to pass the indexing argument. (Triggered internally at
../aten/src/ATen/native/TensorShape.cpp:3526.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Model Summary: 306 layers, 36905341 parameters, 6652669 gradients, 104.5 GFLOPS
Convert model to Traced-model...
traced_script_module saved!
model is traced!
video 1/1 (1/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 31 persons, Done. (1374.6ms) Inference, (38.8ms) NMS
OpenCV: FFMPEG: tag 0x7634706d/'mp4v' is not supported with codec id 12 and format 'webm / WebM'
[webm @ 0x596e5a782640] Only VP8 or VP9 or AV1 video and Vorbis or Opus audio and WebVTT
subtitles are supported for WebM.
video 1/1 (2/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 32 persons, Done. (1239.5ms) Inference, (1.3ms) NMS
video 1/1 (3/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 33 persons, Done. (1227.3ms) Inference, (1.3ms) NMS
video 1/1 (4/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 32 persons, Done. (1217.8ms) Inference, (1.3ms) NMS
video 1/1 (5/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 33 persons, Done. (1334.5ms) Inference, (3.0ms) NMS
video 1/1 (6/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1962.2ms) Inference, (1.9ms) NMS
video 1/1 (7/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 32 persons, Done. (1995.4ms) Inference, (2.0ms) NMS
video 1/1 (8/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 33 persons, Done. (1962.0ms) Inference, (3.2ms) NMS
video 1/1 (9/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 36 persons, Done. (1944.4ms) Inference, (2.1ms) NMS
video 1/1 (10/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1476.0ms) Inference, (1.4ms) NMS
video 1/1 (11/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 37 persons, Done. (1236.8ms) Inference, (1.5ms) NMS
video 1/1 (12/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (1259.4ms) Inference, (1.4ms) NMS
video 1/1 (13/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 36 persons, Done. (1259.5ms) Inference, (1.5ms) NMS
video 1/1 (14/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 37 persons, Done. (1278.8ms) Inference, (1.3ms) NMS
video 1/1 (15/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 37 persons, Done. (1239.8ms) Inference, (1.8ms) NMS
video 1/1 (16/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1221.2ms) Inference, (1.7ms) NMS
video 1/1 (17/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 33 persons, Done. (1260.1ms) Inference, (1.3ms) NMS
video 1/1 (18/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 32 persons, Done. (1901.0ms) Inference, (2.0ms) NMS
video 1/1 (19/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (2003.8ms) Inference, (2.0ms) NMS
video 1/1 (20/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 33 persons, Done. (2031.9ms) Inference, (2.0ms) NMS
video 1/1 (21/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1908.9ms) Inference, (2.2ms) NMS
video 1/1 (22/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (1706.6ms) Inference, (1.6ms) NMS
video 1/1 (23/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 36 persons, Done. (1289.3ms) Inference, (1.7ms) NMS
video 1/1 (24/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1230.8ms) Inference, (1.3ms) NMS
video 1/1 (25/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1243.9ms) Inference, (1.5ms) NMS
video 1/1 (26/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 37 persons, Done. (1283.7ms) Inference, (1.8ms) NMS
video 1/1 (53/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 36 persons, Done. (1281.7ms) Inference, (1.5ms) NMS
video 1/1 (80/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1987.4ms) Inference, (2.7ms) NMS
video 1/1 (81/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1926.9ms) Inference, (1.7ms) NMS
video 1/1 (82/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1934.2ms) Inference, (2.2ms) NMS
video 1/1 (83/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 36 persons, Done. (1571.0ms) Inference, (1.5ms) NMS
video 1/1 (84/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 36 persons, Done. (1242.1ms) Inference, (1.4ms) NMS
video 1/1 (85/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 37 persons, Done. (1265.2ms) Inference, (1.3ms) NMS
video 1/1 (86/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 38 persons, Done. (1231.5ms) Inference, (1.7ms) NMS
video 1/1 (87/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 38 persons, Done. (1259.7ms) Inference, (1.4ms) NMS
video 1/1 (88/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 37 persons, Done. (1275.2ms) Inference, (1.5ms) NMS
video 1/1 (89/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 37 persons, Done. (1276.3ms) Inference, (1.7ms) NMS
video 1/1 (90/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1283.4ms) Inference, (1.5ms) NMS
video 1/1 (91/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1962.5ms) Inference, (2.7ms) NMS
video 1/1 (92/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (2100.0ms) Inference, (4.3ms) NMS
video 1/1 (93/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (2046.1ms) Inference, (2.1ms) NMS
video 1/1 (94/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (2153.0ms) Inference, (5.7ms) NMS
video 1/1 (95/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (2163.6ms) Inference, (2.2ms) NMS
video 1/1 (96/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1426.7ms) Inference, (2.0ms) NMS
video 1/1 (97/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1362.8ms) Inference, (1.3ms) NMS
video 1/1 (98/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1220.3ms) Inference, (1.3ms) NMS
video 1/1 (99/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1230.2ms) Inference, (1.2ms) NMS
video 1/1 (100/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 32 persons, Done. (1248.5ms) Inference, (1.4ms) NMS
video 1/1 (101/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (1270.9ms) Inference, (1.4ms) NMS
video 1/1 (102/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (1273.4ms) Inference, (1.4ms) NMS
video 1/1 (103/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 33 persons, Done. (1513.6ms) Inference, (1.8ms) NMS
video 1/1 (104/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 33 persons, Done. (1944.6ms) Inference, (2.3ms) NMS
video 1/1 (105/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 33 persons, Done. (1996.1ms) Inference, (1.9ms) NMS
video 1/1 (106/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (2039.2ms) Inference, (2.1ms) NMS
video 1/1 (107/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (2141.4ms) Inference, (2.0ms) NMS
video 1/1 (108/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 33 persons, Done. (1658.1ms) Inference, (1.6ms) NMS
video 1/1 (109/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 33 persons, Done. (1351.7ms) Inference, (1.3ms) NMS
video 1/1 (110/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (1336.7ms) Inference, (2.1ms) NMS
video 1/1 (111/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (1361.9ms) Inference, (1.5ms) NMS
video 1/1 (112/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (1385.1ms) Inference, (1.5ms) NMS
video 1/1 (113/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (1390.3ms) Inference, (1.5ms) NMS
video 1/1 (114/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1358.9ms) Inference, (1.6ms) NMS
video 1/1 (115/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1689.9ms) Inference, (2.2ms) NMS
video 1/1 (116/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (1947.6ms) Inference, (1.9ms) NMS
video 1/1 (117/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (1983.9ms) Inference, (5.2ms) NMS
video 1/1 (118/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 36 persons, Done. (1953.3ms) Inference, (1.9ms) NMS
video 1/1 (119/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (2144.6ms) Inference, (2.1ms) NMS
video 1/1 (120/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (1505.0ms) Inference, (1.4ms) NMS
video 1/1 (121/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1350.5ms) Inference, (1.5ms) NMS
video 1/1 (122/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 33 persons, Done. (1321.6ms) Inference, (1.4ms) NMS
video 1/1 (123/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (1288.1ms) Inference, (1.7ms) NMS
video 1/1 (124/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 35 persons, Done. (1314.6ms) Inference, (1.6ms) NMS
video 1/1 (125/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (1340.8ms) Inference, (1.4ms) NMS
video 1/1 (126/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (1368.9ms) Inference, (1.6ms) NMS
video 1/1 (127/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 34 persons, Done. (1888.7ms) Inference, (1.9ms) NMS
video 1/1 (128/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 33 persons, Done. (1983.3ms) Inference, (1.9ms) NMS
video 1/1 (129/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 33 persons, Done. (1985.4ms) Inference, (3.2ms) NMS
video 1/1 (130/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 32 persons, Done. (1900.0ms) Inference, (1.9ms) NMS
video 1/1 (131/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 32 persons, Done. (1959.7ms) Inference, (2.2ms) NMS
video 1/1 (132/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 31 persons, Done. (1298.8ms) Inference, (1.4ms) NMS
video 1/1 (133/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 30 persons, Done. (1284.0ms) Inference, (1.3ms) NMS
video 1/1 (134/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 30 persons, Done. (1310.5ms) Inference, (1.7ms) NMS
video 1/1 (135/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 31 persons, Done. (1341.0ms) Inference, (2.3ms) NMS
video 1/1 (136/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 31 persons, Done. (1264.6ms) Inference, (1.4ms) NMS
video 1/1 (137/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 31 persons, Done. (1277.3ms) Inference, (1.4ms) NMS
video 1/1 (138/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 31 persons, Done. (1270.8ms) Inference, (1.5ms) NMS
video 1/1 (139/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 29 persons, Done. (1569.8ms) Inference, (2.0ms) NMS
video 1/1 (140/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 30 persons, Done. (1923.6ms) Inference, (1.9ms) NMS
video 1/1 (141/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 30 persons, Done. (2061.4ms) Inference, (2.1ms) NMS
video 1/1 (142/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 31 persons, Done. (2091.8ms) Inference, (2.0ms) NMS
video 1/1 (143/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 30 persons, Done. (1980.4ms) Inference, (2.3ms) NMS
video 1/1 (144/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 30 persons, Done. (1591.9ms) Inference, (1.4ms) NMS
video 1/1 (145/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 29 persons, Done. (1343.8ms) Inference, (1.3ms) NMS
video 1/1 (146/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 28 persons, Done. (1267.5ms) Inference, (1.3ms) NMS
video 1/1 (147/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 28 persons, Done. (1294.7ms) Inference, (1.4ms) NMS
video 1/1 (148/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 29 persons, Done. (1300.9ms) Inference, (1.3ms) NMS
video 1/1 (149/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 29 persons, Done. (1339.4ms) Inference, (1.8ms) NMS
video 1/1 (150/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 29 persons, Done. (1406.1ms) Inference, (1.4ms) NMS
video 1/1 (151/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 30 persons, Done. (1690.3ms) Inference, (4.7ms) NMS
video 1/1 (152/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 29 persons, Done. (1943.2ms) Inference, (2.0ms) NMS
video 1/1 (153/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 29 persons, Done. (1972.2ms) Inference, (6.3ms) NMS
video 1/1 (154/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 28 persons, Done. (1930.9ms) Inference, (1.9ms) NMS
video 1/1 (155/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 28 persons, Done. (1939.3ms) Inference, (2.1ms) NMS
video 1/1 (156/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 28 persons, Done. (1414.8ms) Inference, (1.8ms) NMS
video 1/1 (157/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 28 persons, Done. (1281.8ms) Inference, (1.5ms) NMS
video 1/1 (158/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 28 persons, Done. (1276.4ms) Inference, (1.5ms) NMS
video 1/1 (159/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 28 persons, Done. (1340.1ms) Inference, (1.5ms) NMS
video 1/1 (160/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 28 persons, Done. (1342.1ms) Inference, (1.4ms) NMS
video 1/1 (161/343) /content/yolov7-object-tracking/background video | people | walking |
[ORrrKXGx2SE].webm: 28 persons, Done. (1285.1ms) Inference, (1.5ms) NMS
RESULT:
Thus, the program to implement object detection and recognition has been implemented successfully.
EX NO:05
Develop a program for Facial Detection and Recognition
DATE :
AIM:
To develop a program for facial detection and recognition.
ALGORITHM:
Face Detection: The first task is to detect faces in the image or video stream. Once the exact location (coordinates) of a face is known, the face is cropped out for further processing.
Feature Extraction: From the cropped face we extract features using face embeddings. A neural network takes an image of the person's face as input and outputs a vector representing the most important features of the face. In machine learning this vector is called an embedding, and so we call it a face embedding; the step is sketched below.
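A minimal sketch of the embedding step using the face_recognition library (the library choice is an assumption; the recorded program further below performs detection only, with a Haar cascade):

import face_recognition

# Illustrative paths -- any two face photographs will do
known = face_recognition.load_image_file("person_a.jpg")
candidate = face_recognition.load_image_file("person_b.jpg")

# 128-dimensional face embeddings, one per detected face
known_embedding = face_recognition.face_encodings(known)[0]
candidate_embedding = face_recognition.face_encodings(candidate)[0]

# Faces match when the embedding distance is small (0.6 is the library's
# conventional threshold)
distance = face_recognition.face_distance([known_embedding], candidate_embedding)[0]
print("same person" if distance < 0.6 else "different people")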
ARCHITECTURE:
PROGRAM:
import numpy as np
import cv2

# Haar cascade face detector bundled with OpenCV
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # default webcam

while True:
    # Read a frame from the video capture device
    ret, img = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Detect faces and mark each one with a bounding box
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow("Face Detection", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release the video capture device and close all OpenCV windows
cap.release()
cv2.destroyAllWindows()
OUTPUT:
Criteria                 | Marks
Preparation              | /20
Observation              | /25
Interpretation of Result | /20
Viva                     | /10
Total                    | /75
RESULT:
Thus, the program for facial detection and recognition has been implemented successfully.
EX NO:06
Write a program for event detection in a video surveillance system
DATE :
AIM:
To write a program for event detection in a video surveillance system.
ALGORITHM:
1. Open the surveillance video stream.
2. Subtract the background from each frame to obtain a foreground mask of the moving regions.
3. Threshold the mask into a binary image.
4. Find the contours of the foreground regions.
5. Flag every sufficiently large region as an event and mark it on the frame.
PROGRAM:
import cv2

# Placeholder path -- point this at the surveillance footage to analyse
video_capture = cv2.VideoCapture("surveillance.mp4")
# Background subtractor models the static scene so moving objects stand out
bg_subtractor = cv2.createBackgroundSubtractorMOG2()

while video_capture.isOpened():
    ret, frame = video_capture.read()
    if not ret:
        break
    # Threshold the foreground mask into a binary image of moving regions
    fg_mask = bg_subtractor.apply(frame)
    _, thresh = cv2.threshold(fg_mask, 127, 255, cv2.THRESH_BINARY)
    # Find contours
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Flag any sufficiently large moving region as an event
    for c in contours:
        if cv2.contourArea(c) > 500:
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Event Detection", frame)
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()
OUTPUT:
Criteria                 | Marks
Preparation              | /20
Observation              | /25
Interpretation of Result | /20
Viva                     | /10
Total                    | /75
RESULT:
Thus, the program for event detection in a video surveillance system has been implemented successfully.
Overall Record Completion Status: Completed
Date of completion:
Faculty Signature: