Lab Record
EX. NO. 1
T-PYRAMID OF AN IMAGE
DATE :
AIM:
To write a python program for the T-pyramid of an image.
ALGORITHM:
1. First, load the image.
2. Construct a Gaussian pyramid with 3 levels.
3. For the Laplacian pyramid, the topmost level is the same as the topmost
Gaussian level. Each remaining level is constructed, from top to bottom, by
subtracting the expanded (upsampled) version of the level above from the
corresponding Gaussian level.
PROGRAM:
import cv2
import numpy as np
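# The record shows only the imports; the rest of this T-pyramid program is a
# minimal sketch following the algorithm above. The input file name "img.jpg"
# is a hypothetical placeholder.
img = cv2.imread("img.jpg")

# Gaussian pyramid: each level is the previous one blurred and downsampled
gaussian = [img]
for i in range(3):
    gaussian.append(cv2.pyrDown(gaussian[-1]))

# Laplacian pyramid: the top level equals the Gaussian top; every other level
# is a Gaussian level minus the expanded (pyrUp) version of the level above it
laplacian = [gaussian[-1]]
for i in range(len(gaussian) - 1, 0, -1):
    size = (gaussian[i - 1].shape[1], gaussian[i - 1].shape[0])
    expanded = cv2.pyrUp(gaussian[i], dstsize=size)
    laplacian.append(cv2.subtract(gaussian[i - 1], expanded))

for level, g in enumerate(gaussian):
    cv2.imshow("Gaussian level " + str(level), g)
cv2.waitKey(0)
cv2.destroyAllWindows()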
RESULT:
Thus, the program for T-pyramid has been implemented and the output is
obtained successfully.
EX. NO. 2
QUAD TREE REPRESENTATION
DATE :
AIM:
To write a python program for the quad tree representation of an image
using the homogeneity criterion of equal intensity.
ALGORITHM:
1. Divide the current two-dimensional space into four boxes.
2. If a box contains one or more points, create a child object, storing in it
the two-dimensional space of the box.
3. If a box does not contain any points, do not create a child for it.
4. Recurse for each of the children.
PROGRAM:
import matplotlib.pyplot as plt
import cv2
import numpy as np
from functools import reduce
from operator import add

def split4(image):
    half_split = np.array_split(image, 2)
    res = map(lambda x: np.array_split(x, 2, axis=1), half_split)
    return reduce(add, res)

img = cv2.imread("img.jpg")  # hypothetical input file; the original omits the load
split_img = split4(img)
print(split_img[0].shape, split_img[1].shape)

fig, axs = plt.subplots(2, 2)
axs[0, 0].imshow(split_img[0])
axs[0, 1].imshow(split_img[1])
axs[1, 0].imshow(split_img[2])
axs[1, 1].imshow(split_img[3])

def concatenate4(north_west, north_east, south_west, south_east):
    top = np.concatenate((north_west, north_east), axis=1)
    bottom = np.concatenate((south_west, south_east), axis=1)
    return np.concatenate((top, bottom), axis=0)

full_img = concatenate4(split_img[0], split_img[1], split_img[2], split_img[3])
plt.figure()
plt.imshow(full_img)
plt.show()
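The program above performs a single four-way split. A recursive version that
follows the algorithm's homogeneity criterion might look like the sketch
below; the intensity-range threshold and the helper name build_quadtree are
assumptions for illustration, not part of the original program.

import numpy as np

def build_quadtree(region, threshold=10):
    # Stop when the region is homogeneous (intensity range within threshold)
    # or can no longer be split.
    if int(region.max()) - int(region.min()) <= threshold or min(region.shape) <= 1:
        return {"leaf": True, "mean": float(region.mean())}
    h, w = region.shape[:2]
    return {"leaf": False, "children": [
        build_quadtree(region[:h//2, :w//2], threshold),   # north-west
        build_quadtree(region[:h//2, w//2:], threshold),   # north-east
        build_quadtree(region[h//2:, :w//2], threshold),   # south-west
        build_quadtree(region[h//2:, w//2:], threshold),   # south-east
    ]}

# Usage on a grayscale image:
# gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# tree = build_quadtree(gray)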
OUTPUT:
RESULT:
Thus, the python program for quad tree representation has been
implemented and the output is obtained successfully.
EX. NO. 3
GEOMETRIC TRANSFORMS
DATE :
AIM:
To develop programs for the following geometric transforms:
(a) Rotation.
(b) Change of scale.
(c) Skewing.
(d) Affine transform calculated from three pairs of corresponding points.
(e) Bilinear transform calculated from four pairs of corresponding points.
ALGORITHM:
TRANSFORMATION MATRICES:
For each desired transformation, create a corresponding transformation
matrix. For example:
1. Translation: Create a 3×3 matrix with a 1 in the diagonal and the
translation values in the last column.
2. Rotation: Compute the rotation matrix using trigonometric functions
(sin and cos) and the given rotation angle.
3. Scaling: Create a 3×3 matrix with scaling factors along the diagonal
and 1 in the last row and column.
4. Shearing: Create an affine transformation matrix with shear factors in
the off-diagonal elements.
COMBINE TRANSFORMATION MATRICES:
5. Multiply the individual transformation matrices in the order you want
to apply them. Matrix multiplication is not commutative, so the order matters.
The combined matrix represents the sequence of transformations.
APPLY THE COMBINED TRANSFORMATION MATRIX:
6. Convert the 3×3 matrix to a 2×3 matrix by removing the last row.
7. Use cv2.warpAffine() for affine transformations or
cv2.warpPerspective() for projective transformations.
8. Provide the combined transformation matrix and the input image as
arguments to apply the transformations.
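For instance, steps 5-7 might be sketched as follows; the scale and
translation values and the file name "img.jpg" are illustrative assumptions:

import cv2
import numpy as np

image = cv2.imread("img.jpg")
h, w = image.shape[:2]

scale = np.array([[1.5, 0, 0], [0, 1.5, 0], [0, 0, 1]], dtype=np.float32)
translate = np.array([[1, 0, 40], [0, 1, 20], [0, 0, 1]], dtype=np.float32)

combined = translate @ scale   # order matters: scale first, then translate
affine_2x3 = combined[:2, :]   # drop the last row: 3x3 -> 2x3

result = cv2.warpAffine(image, affine_2x3, (w, h))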
PROGRAM:
import cv2
import numpy as np
def rotate_image(image, angle):
    height, width = image.shape[:2]
    rotation_matrix = cv2.getRotationMatrix2D((width / 2, height / 2), angle, 1)
    rotated_image = cv2.warpAffine(image, rotation_matrix, (width, height))
    return rotated_image

# Usage
image = cv2.imread("img.jpg")
angle_degrees = 45
rotated = rotate_image(image, angle_degrees)
cv2.imshow("Rotated Image", rotated)
cv2.waitKey(0)
cv2.destroyAllWindows()
Skewing:
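# The record leaves this section blank. A minimal sketch, assuming a shear
# factor of 0.3 along x (placed in the off-diagonal element) and a
# hypothetical input file "img.jpg":
import cv2
import numpy as np

image = cv2.imread("img.jpg")
h, w = image.shape[:2]

skew_matrix = np.float32([[1, 0.3, 0],
                          [0, 1,   0]])
skewed = cv2.warpAffine(image, skew_matrix, (int(w + 0.3 * h), h))
cv2.imshow("Skewed Image", skewed)
cv2.waitKey(0)
cv2.destroyAllWindows()

# Sketches for aims (d) and (e); the point coordinates are illustrative, and
# the projective transform via cv2.getPerspectiveTransform stands in for the
# bilinear mapping here.
# (d) Affine transform from three pairs of corresponding points
src3 = np.float32([[0, 0], [w - 1, 0], [0, h - 1]])
dst3 = np.float32([[10, 20], [w - 30, 10], [20, h - 40]])
affine = cv2.getAffineTransform(src3, dst3)
affine_img = cv2.warpAffine(image, affine, (w, h))

# (e) Bilinear (projective) transform from four pairs of corresponding points
src4 = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
dst4 = np.float32([[15, 25], [w - 20, 5], [w - 5, h - 30], [5, h - 10]])
perspective = cv2.getPerspectiveTransform(src4, dst4)
perspective_img = cv2.warpPerspective(image, perspective, (w, h))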
RESULT:
Thus, the python program for geometric transforms has been
implemented and the output is obtained successfully.
EX. NO. 4
OBJECT DETECTION AND RECOGNITION
DATE :
AIM:
To develop a program to implement Object Detection and Recognition.
ALGORITHM:
1. The first step is to have Python installed on your computer.
Download and install Python 3 from the official Python
website.
2. Once you have Python installed on your computer, install the
following dependencies using pip:
(Python 3.7.6 itself is installed in step 1, not via pip.)
TensorFlow:
$ pip install tensorflow
OpenCV:
$ pip install opencv-python
Keras:
$ pip install keras
ImageAI:
$ pip install imageai
3. Download the TinyYOLOv3 model file that contains the classification
model that will be used for object detection.
4. Now let's see how to actually use the ImageAI library. Set up the
necessary folders.
5. Open your preferred text editor for writing Python code and create a
new file, detector.py.
6. Run the Python file detector.py.
PROGRAM:
# importing the required library
from imageai.Detection import ObjectDetection
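# The record omits the detector setup between the import and the loop below.
# A minimal sketch, assuming the TinyYOLOv3 weights downloaded in step 3 are
# saved as "tiny-yolov3.pt" and the input image as "img.jpg" (both names are
# hypothetical placeholders):
detector = ObjectDetection()
detector.setModelTypeAsTinyYOLOv3()
detector.setModelPath("tiny-yolov3.pt")
detector.loadModel()
detection = detector.detectObjectsFromImage(input_image="img.jpg",
                                            output_image_path="img_detected.jpg")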
# iterating through the items found in the image
for eachItem in detection:
    print(eachItem["name"], " : ", eachItem["percentage_probability"])
OUTPUT:
person : 69.11083459854126
person : 63.95843029022217
person : 62.82603144645691
person : 82.48097896575928
person : 84.3036949634552
person : 57.25393295288086
RESULT:
Thus, the python program for Object Detection and Recognition has been
implemented and the output is obtained successfully.
EX. NO. 5
MOTION ANALYSIS USING MOVING
DATE :
EDGES
AIM:
To develop a program for motion analysis using moving edges, and apply it
to your image sequences.
ALGORITHM:
1. Open the video and read the first frame; convert it to grayscale.
2. Read each subsequent frame and convert it to grayscale.
3. Apply Canny edge detection to both the previous and the current
grayscale frames.
4. Compute the absolute difference between the two edge maps; the
surviving edges belong to moving objects.
5. Display the moving edges, set the current frame as the previous one, and
repeat until the video ends or 'q' is pressed.
PROGRAM:
import cv2
import numpy as np

def motion_analysis(video_path):
    cap = cv2.VideoCapture(video_path)
    ret, prev_frame = cap.read()
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges_prev = cv2.Canny(prev_gray, 50, 150)
        edges_curr = cv2.Canny(gray, 50, 150)
        frame_diff = cv2.absdiff(edges_prev, edges_curr)
        # Display the moving edges
        cv2.imshow('Moving Edges', frame_diff)
        if cv2.waitKey(30) & 0xFF == ord('q'):
            break
        prev_gray = gray.copy()
    cap.release()
    cv2.destroyAllWindows()

video_path = "Human Analytics video.mp4"
motion_analysis(video_path)
OUTPUT:
RESULT:
Thus, the python program for motion analysis using moving edges was
implemented and the output is obtained successfully.
EX. NO. 6
FACIAL DETECTION AND RECOGNITION
DATE :
AIM:
To develop a program for Facial Detection and Recognition.
ALGORITHM:
Face Detection:
The very first task we perform is detecting faces in the image or video
stream. Once we know the exact location/coordinates of the face, we extract
the face region for further processing.
Feature Extraction:
Now that we have cropped the face out of the image, we extract features
from it. Here we are going to use face embeddings to extract the features out of
the face. A neural network takes an image of the person’s face as input and
outputs a vector that represents the most important features of the face. In
machine learning, this vector is called an embedding, and thus we call this
vector a face embedding.
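As a concrete illustration (not part of the recorded program, which instead
uses a Teachable Machine classifier), the third-party face_recognition
library computes such 128-dimensional embeddings directly; the file name
"img1.jpg" is a hypothetical placeholder:

import face_recognition

# load an image and compute a 128-dimensional embedding for each detected face
image = face_recognition.load_image_file("img1.jpg")
embeddings = face_recognition.face_encodings(image)
if embeddings:
    print("embedding length:", len(embeddings[0]))  # 128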
ARCHITECTURE:
Face Recognition:
Face recognition technology is a method of identifying or confirming an
individual’s identity using their face. It operates through biometric analysis,
which involves measuring and analysing specific biological characteristics.
1. Collecting face images using OpenCV and saving them in a folder.
2. Training an image classification model using Teachable Machine, a
web-based tool by Google.
3. Downloading the model in Keras format and loading it in Python.
4. Detecting faces from a webcam and predicting their names using the
trained model.
PROGRAM:
Face Detection:
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
image_path = 'img1.jpg'  # Replace 'img1.jpg' with the path to your image
image = cv2.imread(image_path)
# The detection/display steps below are reconstructed to complete the fragment.
faces = face_cascade.detectMultiScale(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY), 1.3, 5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imshow('Detected Faces', image)
cv2.waitKey(0)

Face image collection (datacollect.py):
import cv2
import os

facedetect = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
video = cv2.VideoCapture(0)
# Setup reconstructed: prompt for a name and prepare the output folder.
count = 0
nameID = str(input("Enter Your Name: "))
path = './images/' + nameID
isExist = os.path.exists(path)
if isExist:
    print("Name Already Taken")
    nameID = str(input("Enter Your Name Again: "))
else:
    os.makedirs(path)
while True:
    ret, frame = video.read()
    faces = facedetect.detectMultiScale(frame, 1.3, 5)
    for x, y, w, h in faces:
        count = count + 1
        name = './images/' + nameID + '/' + str(count) + '.jpg'
        print("Creating Images..." + name)
        cv2.imwrite(name, frame[y:y+h, x:x+w])
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 3)
    cv2.imshow("WindowFrame", frame)
    cv2.waitKey(1)
    if count > 500:
        break
video.release()
cv2.destroyAllWindows()
test.py:
import numpy as np
import cv2
from keras.models import load_model

facedetect = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)
cap.set(3, 640)
cap.set(4, 480)
font = cv2.FONT_HERSHEY_COMPLEX
model = load_model('keras_model.h5', compile=False)

def get_className(classNo):
    if classNo == 0:
        return "Paranjothi Karthik"
    elif classNo == 1:
        return "virat"

while True:
    success, imgOrignal = cap.read()
    faces = facedetect.detectMultiScale(imgOrignal, 1.3, 5)
    for x, y, w, h in faces:
        crop_img = imgOrignal[y:y+h, x:x+w]  # fixed: crop width uses x+w, not x+h
        img = cv2.resize(crop_img, (224, 224))
        img = img.reshape(1, 224, 224, 3)
        prediction = model.predict(img)
        # fixed: take the index of the most likely class (the original boolean
        # reduction was a bug); both classes were drawn identically, so the
        # duplicate if/elif branches are collapsed here
        classIndex = int(np.argmax(prediction))
        probabilityValue = np.amax(prediction)
        cv2.rectangle(imgOrignal, (x, y), (x+w, y+h), (0, 255, 0), 2)
        cv2.rectangle(imgOrignal, (x, y-40), (x+w, y), (0, 255, 0), -2)
        cv2.putText(imgOrignal, str(get_className(classIndex)), (x, y-10),
                    font, 0.75, (255, 255, 255), 1, cv2.LINE_AA)
        cv2.putText(imgOrignal, str(round(probabilityValue*100, 2)) + "%",
                    (180, 75), font, 0.75, (255, 0, 0), 2, cv2.LINE_AA)
    cv2.imshow("Result", imgOrignal)
    k = cv2.waitKey(1)
    if k == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
OUTPUT:
RESULT:
Thus, the python program for Facial Detection and Recognition was
implemented and output is obtained successfully.
EX. NO. 7
EVENT DETECTION IN VIDEO SURVEILLANCE SYSTEM
DATE :
AIM:
To write a program for event detection in a video surveillance system.
ALGORITHM:
1. Preprocessing:
This stage involves cleaning and preparing the data from sensors like
cameras. This might include noise reduction or format conversion.
2. Background Modeling:
This step establishes a baseline for "normal" activity in the scene. It can
use techniques like:
Frame differencing: Compares consecutive video frames to detect changes
(movement).
Statistical methods: Builds a model of the background based on pixel
intensity variations over time.
3. Object Detection and Tracking:
This stage identifies and tracks objects of interest (people, vehicles) in the
scene. Common techniques include:
Background subtraction: Isolates foreground objects from the background
model.
Machine Learning: Employs algorithms like Support Vector Machines
(SVMs) or Convolutional Neural Networks (CNNs) to identify objects
based on training data.
4. Event Definition and Classification:
Here, the system analyzes object behavior and interactions to define
events. This might involve:
Motion analysis: Tracks object movement patterns and speed.
Object interaction: Analyzes how objects interact with each other or the
environment (e.g., entering restricted zones).
Classification algorithms then categorize these events.
5. Decision Making and Alerting:
Finally, the system evaluates the classified event's severity and triggers
pre-defined actions based on rules. This might involve:
Generating alerts for security personnel.
Recording video footage of the event.
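As a sketch of step 4's "entering restricted zones" check, a simple
rectangle-overlap test can flag events; the zone coordinates and the helper
name in_restricted_zone are hypothetical, not part of the recorded program:

# Flag any detected bounding box that overlaps a restricted zone,
# defined here as a rectangle (zx, zy, zw, zh) in pixel coordinates.
def in_restricted_zone(x, y, w, h, zone=(100, 100, 200, 150)):
    zx, zy, zw, zh = zone
    return x < zx + zw and x + w > zx and y < zy + zh and y + h > zy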
PROGRAM:
import cv2

# Initialize video capture (replace with your video file)
video_capture = cv2.VideoCapture("human surveillance.mp4")

# Initialize background subtractor
bg_subtractor = cv2.createBackgroundSubtractorMOG2()

while video_capture.isOpened():
    ret, frame = video_capture.read()
    if not ret:
        break
    # Apply background subtraction
    fg_mask = bg_subtractor.apply(frame)
    # Apply thresholding to get a binary mask (fixed: threshold returns a tuple)
    _, thresh = cv2.threshold(fg_mask, 50, 255, cv2.THRESH_BINARY)
    # Find contours
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        # Filter contours based on area (adjust the threshold as needed)
        if cv2.contourArea(contour) > 100:
            # Draw a bounding box around detected objects or events
            # (the record is cut off here; the remaining lines are reconstructed)
            x, y, w, h = cv2.boundingRect(contour)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Event Detection", frame)
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break
video_capture.release()
cv2.destroyAllWindows()
OUTPUT:
RESULT:
Thus, the python program for event detection in video surveillance system
was implemented and output is obtained successfully.
EX. NO. 8
IMAGE SUPER-RESOLUTION
DATE :
AIM:
To develop a program for Image Super-Resolution.
ALGORITHM:
PROGRAM:
import torch
from PIL import Image
import torchvision.transforms as transforms
import torchvision.models as models
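# The record shows only the imports; the rest is a minimal sketch of an
# SRCNN-style super-resolution pipeline under those imports. The network here
# is untrained, and the file names "img.jpg", "srcnn.pth", and
# "sr_output.jpg" are hypothetical placeholders.
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.body(x)

# SRCNN's standard pipeline: upscale with bicubic interpolation first,
# then let the network refine the result
img = Image.open("img.jpg").convert("RGB")
upscaled = img.resize((img.width * 2, img.height * 2), Image.BICUBIC)
x = transforms.ToTensor()(upscaled).unsqueeze(0)

model = SRCNN()
# model.load_state_dict(torch.load("srcnn.pth"))  # trained weights would go here
with torch.no_grad():
    sr = model(x).clamp(0, 1)
transforms.ToPILImage()(sr.squeeze(0)).save("sr_output.jpg")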
OUTPUT:
RESULT:
Thus, the python program for Image Super-Resolution has been
implemented and the output is obtained successfully.