Yolo v3 Proj Draft
by
Mapúa University
July 2021
Chapter 1
INTRODUCTION
At present, most establishments implement social distancing and crowd control practices as the threat of COVID-19 is still present. As per the World Health Organization (WHO) last June 24, 2021, the total number of confirmed cases in the Philippines is 1,367,894, with new cases still numbering in the thousands since March. The action suggested by the WHO, the Inter-Agency Task Force (IATF), and the Department of Health is to maintain a distance of 1 meter between people, along with the proper wearing of a face mask and face shield. However, most establishments enforce these measures manually and must assign personnel to the task. Additionally, the measures may not always be properly implemented, as the flow of people entering and leaving the establishment is not in sync with the personnel's monitoring schedule. These concerns may be addressed by implementing a camera system that continuously counts the number of people and monitors the distance between people in the vicinity. An admonishing system will also be put in place for immediate mitigation of breaches of the social distancing and crowd limit rules.
Recent studies on improving social distancing and crowd control practices have been emerging. One such study, by I. Ahmed et al. (November 2020), focused on the development of a deep-learning framework for social distance tracking, implemented using an overhead camera perspective and You Only Look Once (YOLOv3) object recognition to measure the distance between detected humans in captured videos. Another recent study, by P. Gupta et al. (2020), focused on using the YOLOv3 algorithm and the Single Shot multi-box Detector (SSD) algorithm for counting the number of people present in captured videos while also considering detection efficiency. These two studies covered detection only and lacked an admonishing system that uses a camera and an alarm to mitigate the detected violations. A study by Y. Cai et al. (March 29, 2016) focused on utilizing a single Pan-Tilt-Zoom (PTZ) camera for dynamic people tracking and face capture. The zoom function of the PTZ camera is used for detecting the person and his or her face, while the pan and tilt functions are used for real-time detection and following of the person. The system first detects the person while the camera is in zoomed-out mode. At a given set time, the camera zooms in to detect a person in the testing area while associating the person with face images and trajectories. The detection by the PTZ camera is dynamic in nature as the camera continues to move and follow the person being detected. Receiving and processing images in this study must be done in real time.
The use of a camera sensor and an algorithm such as YOLOv3 provides an effective means of detecting social distancing activity, but these studies lack an admonishing system for immediate resolution of the violations. At present, establishments determine compliance with social distancing manually; however, their monitoring of the crowd limit and of the distance between people may not be done accurately, especially when the movement of people varies. Implementing a social distancing and admonishing system to automatically monitor social distancing activity and halt further violations can therefore be considered. There is no existing admonishing system that immediately addresses social distancing protocol violations; hence, the main objective of this study is to develop a camera-based admonishing system for social distancing protocol violations. The specific objectives of the study are as follows: (1) To design and construct the prototype setup for the Admonishing System for Social Distance and Crowd Control. (2) To utilize the YOLOv3 algorithm for counting people and measuring the distance between people in the vicinity using a camera sensor. (3) To develop the alarm and admonishing system, with camera capture focused on the violators and the violators shown on the assigned monitor display. (4) To determine the accuracy and the cycle time of the admonishing system.
The significance of the study lies in providing establishments (e.g., stores, offices) an automated means of monitoring the social distancing activity of people within their vicinity, which can improve the real-time monitoring and tracking performed by establishment personnel. Its automated capability reduces the manual workload of the personnel and further improves the implementation of crowd control and social distancing practices in establishments.
The main scope of the study is the development and evaluation of the Admonishing System for Social Distance and Crowd Control. Python will be the main programming language used for this study. For the development of the programs for social distance tracking, crowd control tracking, and the admonishing system, the system will be installed in a multi-purpose hall. The area represents the common lobbies of most establishments. The testing site will have a floor area of at least 20 sq. m., and the maximum number of people for the testing will be limited to eight (8).
As per various health institutions, such as the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) of the United States of America, social distancing (or physical distancing) means keeping a safe distance of at least around 1 meter between yourself and other people (others consider a greater distance of at least 6 feet). This practice is important in keeping people from being infected by viruses, including COVID-19, as the main means of transmission is considered to be from person to person. It is also widely considered a means of limiting the spread of disease and of avoiding close contact with contaminated surfaces and infected people in public places. It likewise reduces the chance of inhaling or being exposed to droplets and particles coming from other people when they talk, cough, or sneeze. Practicing social distancing also requires the wearing of a mask for further protection, as only a minimum distance is kept between people. Guidelines and rulings on the allowable crowd limit have been implemented around the world; full crowd capacity is allowed for essential workforces such as hospitals, frontline service institutions, and food preparation establishments, while only a fraction of the capacity is allowed for other establishments.
Object detection is a computer vision technique used on images and videos to identify object instances, leveraging machine learning or deep learning to yield better output. Object detection aims to replicate human object recognition. It is mostly used in video surveillance and image retrieval systems. Below is an example application of object detection in identifying and locating a "vehicle" object in a captured image.
The algorithm "You Only Look Once, Version 3" or YOLOv3 is a real-time object detection algorithm used to identify specific objects in images and videos by comparing them with its dataset repository. The algorithm is based on a Convolutional Neural Network (CNN), a classifier-based system that processes input images and videos as structured arrays of data and identifies patterns between them. YOLOv3 is usually implemented using the Keras and/or OpenCV deep learning libraries, and its object classification systems are commonly used by Artificial Intelligence (AI) programs for identifying objects. The main advantage of YOLO is that it identifies objects much faster than comparable detectors, making it suitable for real-time applications.
The YOLOv3 model looks at the whole image to perform predictions of the objects being detected. The algorithm makes use of similarity "scores" against predefined classes and its dataset. Each detected object is assigned the class with the highest similarity "score" among the predefined classes. YOLOv3 first separates the image into grid cells and predicts bounding boxes around detected objects that obtain high similarity "scores". Each object carries these scores indicating its agreement with the predefined classes, and the final bounding boxes are generated by clustering the grid cell predictions.
For this study, YOLOv3 will mainly be used for detecting and counting people from the video feed of the main camera. The dataset repository used for this algorithm will come from the commercially available dataset on the official website of the YOLO developer (https://pjreddie.com/darknet/yolo/).
Raspberry Pi
The Raspberry Pi is a small single-board computer that has the fundamental features that a basic computer can functionally perform. It also has ports that allow communication with other devices such as monitors, printers, routers, etc. One of its latest products, the Raspberry Pi 4 Model B, will be used for the project. Some of its specifications include 8 GB of SDRAM, a quad-core CPU (64-bit SoC @ 1.5 GHz), 4 USB ports (2 USB 3.0, 2 USB 2.0), micro HDMI ports, camera and display ports, and an SD card reader.
The Raspberry Pi allows users to create embedded systems with it as the main processor. With it, the Python programming language is used. According to the Raspberry Pi website, it offers "A Python learning tool with beginner exercises in using variables, data structures and basic control flow." The programming language is capable of handling the basic programming required for this study.
Arduino
The Arduino is a microcontroller development board published as open source, with both its software and hardware extensible by its users. This device is commonly used in systems requiring sensors and the control of various output peripherals. The Arduino IDE is based on the C++ programming language. For this study, the Arduino will control the Pan-Tilt-Zoom (PTZ) cameras for the admonishing feature of the whole system. This will be done whenever admonishing is triggered.
UART Communication
Universal Asynchronous Receiver-Transmitter (UART) communication allows two devices to exchange data directly using two lines. One line is used by the first device to transmit data to the second device, and the other line is used by the second device to transmit data to the first device. In this communication scheme, data are transmitted asynchronously and no external clock signal is required. The transmitting UART-enabled device adds start and stop bits to the data packet it transmits. When the receiving UART-enabled device detects the start bit, it reads the incoming bits at the agreed baud rate until the stop bit. For this study, UART will be used as the primary communication between the Arduino and the Raspberry Pi.
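As an illustration only, and not part of the prototype code itself, the sketch below shows one common way for a Python program on the Raspberry Pi to exchange simple text commands with an Arduino over UART using the pyserial library. The port name, baud rate, and the command string (a hypothetical PTZ cluster command) are assumptions for this example.

# Minimal UART sketch (assumptions: Arduino enumerated as /dev/ttyACM0 at 9600 baud,
# and an Arduino sketch that replies with one line per received command).
import time
import serial

def send_command(port: str, command: bytes, baud: int = 9600) -> bytes:
    with serial.Serial(port, baud, timeout=2) as ser:
        time.sleep(2)              # give the Arduino time to reset after the port opens
        ser.reset_input_buffer()   # discard any stale bytes
        ser.write(command)         # UART hardware frames the bytes with start/stop bits
        return ser.readline()      # read one reply line terminated by '\n'

if __name__ == "__main__":
    # Hypothetical command asking the Arduino to point a PTZ camera at cluster "1a".
    reply = send_command("/dev/ttyACM0", b"PTZ,1a\n")
    print("Arduino replied:", reply.decode(errors="ignore").strip())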
Camera Sensor
Commercially available modern camera sensors can typically capture moving images at a resolution of at least 720p at an average of 60 frames per second, with a field of view of around 72° and a viewing angle of around 90° to 100°. These specifications are sufficient for projects involving object detection, including the detection and counting of the human objects being captured. Modern camera sensors made specifically for the Raspberry Pi have specifications suitable for object detection, and most of them are powered directly by the Raspberry Pi board. The Raspberry Pi Camera Module v2 has an IMX219 8-megapixel sensor and is used for capturing high-definition (HD) video of up to 1080p and still pictures of 3280 x 2464 pixels. This device is compatible with all models of the Raspberry Pi (including the 4 Model B) and is attached through the dedicated Camera Serial Interface (CSI) port.
A Pan-Tilt-Zoom camera, commonly abbreviated as PTZ camera, is a device that acts as a regular camera but can be panned, tilted, and zoomed while in operation, and it is mostly used for visual surveillance monitoring. The pan, tilt, and zoom functions are usually programmed and controlled by another microcontroller or computer. For this study, PTZ cameras will mainly be used for the admonishing system, where they will capture the social distancing violators detected by the main camera. The video captured by the PTZ camera will be fed to the display monitor so that the people in the vicinity can identify the violators.
People detection and counting using YOLOv3 and SSD models
In this study, the researchers utilized the You Only Look Once (YOLOv3) and Single Shot multi-box Detector (SSD) algorithms to count the number of people at a junction. The algorithms were used independently, one for object detection and the other for object counting. The researchers found that SSD has higher accuracy than YOLOv3.
In this study, the researcher developed a framework utilizing a deep learning platform and YOLOv3 object recognition to detect humans using a camera set in an overhead perspective for social distance tracking. A pre-trained algorithm refined with an overhead human data set was used for the detection, utilizing the detected bounding box information. For each detected human, the pairwise centroid distances are determined using the Euclidean distance and compared against a violation threshold. The established violation threshold is used to determine whether the detected people are violating the social distancing protocol. The researchers' testing included different video sequences to determine the efficiency of the model. The researchers found that their framework was successful in identifying humans who breached social distancing protocols. Accuracies of 92% and 98% were achieved by the detection model without and with transfer learning, respectively. The tracking accuracy of the framework was also evaluated.
In this study, the researchers developed a video-based approach for people monitoring and counting: a real-time people counting system utilizing a single low-end video camera. The project also sought to overcome problems in estimating the scene background and in merge-split scenarios, where several persons moving closely together may be counted as a single person by the system. The researchers used background subtraction with an adaptive background and automatic thresholding to reduce noise and changes from static objects. An adaptive Kalman filter is utilized to continuously track moving objects and approximate their future positions, especially under heavy occlusion. Shadows are also removed from the captured video images. Results from their testing show that their method is capable of tracking and counting people.
In this study, the researchers developed a real-time method of measuring distance using a modified camera whose image sensor is inclined at a certain angle. The camera is set up so that it is focused on a projection plane. The object distance is determined by acquiring the image distance and applying it to the lens formula, where the image distance is the distance between the projection plane and the camera lens. The researchers concluded that their method is effective and can perform a measurement in an average time of 3.21 ms.
An Image Processing based Object Counting Approach for Machine Vision Application
In this study, the researchers utilized a camera to perform machine vision applications such as object-independent product counting. The camera used has a resolution of 1280 x 720 px, a frame rate of 59 fps, a global shutter, a CCD sensor, and a GigE interface. The approach is based on Otsu thresholding and the Hough transformation so that the system can automatically count the products by type and color. The products in this study are eggs and soda bottles. The camera captures images of the products on a conveyor, and the following are then applied to the images: Gaussian filter, S channel, Otsu threshold, Sobel filter, and Hough transformation.
A study by Y. Cai et al., last March 29, 2016, focused on utilizing a single Pan-Tilt-Zoom (PTZ) camera for persistent people tracking and face capture. The system first detects the person while the camera is in zoomed-out mode. At a given set time, the camera zooms in to detect a person in the testing area while associating the person with face images and trajectories. The detection by the PTZ camera is dynamic in nature as the camera continues to move and follow the person being detected. Receiving and processing images in this study must be done in real time.
CONCEPTUAL FRAMEWORK
The study will be conducted in a controlled environment (the multi-purpose hall of a subdivision) with a setup representing the common lobbies of most establishments. Figure 3.1 shows the conceptual framework of the study. The main camera sensor will capture videos of the testing area, which will be sent to the main computer (Raspberry Pi). The processes of detecting and counting people and of measuring the distance between people will be performed continuously in the computer. When social distancing protocol violations are detected for 30 straight seconds (i.e., the number of people in the vicinity exceeds the crowd limit or the distance between detected people is less than one (1) meter), the alarm system and admonishing system will be activated. The activated alarm system will warn the people in the vicinity of the violation and pinpoint the specific area (or cluster) where the violation occurred. The admonishing system will be activated primarily when the distance detected between two people is less than 1 meter for thirty (30) straight seconds. Depending on the area or cluster where the violation occurred, the assigned PTZ camera will focus on the violators so that they can be shown on the monitor display. The PTZ cameras will be controlled (panned, tilted, and zoomed) by the main controller as needed in order to obtain an optimized view of the detected violators. The accuracy and cycle time of the admonishing system will also be determined.
The block diagram of the methodology below shows the general procedural steps of the study:
The main objective of the study is to develop a camera-based admonishing system for social distancing protocol violations. The specific objectives as well as the procedural steps to address them are as follows: (1) To design and construct the prototype setup for the Admonishing System for Social Distance and Crowd Control. (2) To utilize the YOLOv3 algorithm for counting people and measuring the distance between people in the vicinity using a camera sensor. (3) To develop the alarm and admonishing system, with camera capture focused on the violators and the violators shown on the assigned monitor display. (4) To determine the accuracy and the cycle time of the admonishing system.
The following procedural steps will be done to address the specific objectives. Procedural steps A and B will address specific objective 1. Procedural step C will address specific objective 2. Procedural step D will address specific objective 3. And procedural step E will address specific objective 4. The video captured by the main camera sensor will be fed to the Raspberry Pi and processed by the people counting and social distance tracking programs. For a crowd limit violation, the main output devices are the speakers, which play the alarm sound for the crowd limit violation. For a social distancing violation, the speakers play the alarm sound for the social distancing violation and the admonishing process is carried out. For the admonishing, the pan-tilt-zoom (PTZ) camera, through the Arduino connected to the Raspberry Pi via UART communication, will focus on the violators, and the monitor display will show them.
Figure 3.4. illustrates the outline of the setup and clustering of the testing area:
The testing area will be divided into six (6) clusters, which will be considered by the alarm system and the admonishing system. Cameras will be placed in the testing area in a frontal-view setup: one main camera sensor that will primarily be used for monitoring social distancing activity, and PTZ cameras that will primarily be used for the admonishing system, each assigned three adjacent clusters (one PTZ camera will be assigned to clusters 1a, 1b, and 1c, while the other PTZ camera covers clusters 2a, 2b, and 2c). A speaker will also be placed in the testing area for the alarm system, and a monitor display will be used to show the violators captured by the PTZ cameras.
A. Preparation of Materials
Raspberry Pi 4
The Raspberry Pi 4 will be the main computer of the project. The programs for people counting, social distance tracking, and the alarm and admonishing systems will be installed and run on it.
Camera Sensor
The main camera sensor will be used for people counting and for measuring the distance between detected people. The video captured by the camera will be fed to the Raspberry Pi computer through the CSI port. An extender would also be used to allow proper positioning of the camera.
Arduino
The Arduino will control the PTZ cameras for the admonishing feature of the whole system. The Arduino and Raspberry Pi will be connected with each other and will communicate through UART.
PTZ Cameras
The PTZ cameras will mainly be used for the admonishing system, where they will capture the social distancing violators detected by the main camera. The video captured by the PTZ camera will be fed to the display monitor so that the people in the vicinity can identify the violators. The PTZ cameras will be connected to the output ports of the Arduino.
Table 3.4 shows the estimated cost of the components involved. Prices are subject to change.
Testing Area
The testing area for the development of the prototype will be the multi-purpose hall of a subdivision. The crowd limit is set to 8 people, and the minimum allowable distance between people is one (1) meter. The cameras will be positioned to obtain the most optimized views. The main camera will be placed so that it has a proper frontal view of the vicinity. The PTZ cameras will be placed and calibrated so that they can be panned and tilted to view their assigned clusters.
Figure 3.8 Sample Frontal View of Main Camera
The troubleshooting of the camera view will be done until the view practically covers the whole testing area. Once the optimized positioning of the cameras has been determined, proper mounting and wiring will be done. The image below shows the approximate positions of the cameras (main camera sensor and PTZ cameras), the monitor, and the speakers.
Once the main camera has been properly set, the equivalent real-world distance per pixel will be estimated as a reference for the social distance monitoring program. This will be done for both the horizontal and vertical directions. Let x be the equivalent horizontal distance per horizontal pixel (mm/pixel) and y be the equivalent vertical distance per vertical pixel (mm/pixel).
Considering: total number of horizontal pixels = 3280 & Horizontal distance of vicinity = 3
meters or 3000 mm
Figure 3.10 Sample Frontal View of Main Camera with Horizontal measurement
x = 3000 mm / 3280 pixels = 0.915 mm/pixel
The program will be calibrated locally so that it converts measured pixel distances into actual distances using these ratios.
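The conversion from pixel distances to real-world distances can be expressed as a short helper, shown below as a minimal sketch. The calibration factors, the bounding-box format, and the helper names are assumptions for illustration; the actual factors will come from the calibration described above.

import math

# Assumed calibration factors measured as described above (mm per pixel).
MM_PER_PIXEL_X = 0.915
MM_PER_PIXEL_Y = 0.915  # placeholder; the vertical factor is measured separately

def centroid(box):
    # Centroid (cx, cy) in pixels of a bounding box given as (x, y, w, h).
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def distance_mm(box_a, box_b):
    # Approximate real-world distance in millimeters between two detected people.
    (ax, ay), (bx, by) = centroid(box_a), centroid(box_b)
    return math.hypot((ax - bx) * MM_PER_PIXEL_X, (ay - by) * MM_PER_PIXEL_Y)

# Example: two boxes about 1100 pixels apart horizontally correspond to roughly 1 meter.
print(distance_mm((100, 200, 60, 160), (1200, 210, 60, 160)))  # ~1006 mm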
YOLOv3
The algorithm "You Only Look Once, Version 3" or YOLOv3 is a real-time object detection algorithm used to identify specific objects in images and videos by comparing them with its dataset repository. The algorithm is based on a Convolutional Neural Network (CNN), a classifier-based system that processes input images and videos as structured arrays of data and identifies patterns between them. YOLOv3 is usually implemented using the Keras and/or OpenCV deep learning libraries, and its object classification systems are commonly used by Artificial Intelligence (AI) programs for identifying objects. The main advantage of YOLO is that it identifies objects much faster than comparable detectors, making it suitable for real-time applications.
For this study, YOLOv3 will mainly be used for detecting and counting people from the video feed of the main camera. The object detection of the study will utilize pre-trained models, and the main Python program will be connected to these pre-trained models. The dataset repository used for this algorithm will come from the commercially available dataset on the official website of the developer (https://pjreddie.com/darknet/yolo/).
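As a hedged illustration of how the pre-trained model can be loaded for people detection, the sketch below uses the OpenCV DNN module with the yolov3.cfg and yolov3.weights files downloadable from the website above. The file paths, thresholds, and camera index are assumptions, not the final program.

import cv2
import numpy as np

# Assumed local paths to the files downloaded from https://pjreddie.com/darknet/yolo/
CFG, WEIGHTS = "yolov3.cfg", "yolov3.weights"
PERSON_CLASS_ID = 0                 # "person" is class 0 in the COCO class list
CONF_THRESH, NMS_THRESH = 0.5, 0.4

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
out_layers = net.getUnconnectedOutLayersNames()

def detect_people(frame):
    # Return bounding boxes (x, y, w, h) of detected people in a BGR frame.
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes, confidences = [], []
    for output in net.forward(out_layers):
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if class_id == PERSON_CLASS_ID and confidence > CONF_THRESH:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(confidence)
    keep = cv2.dnn.NMSBoxes(boxes, confidences, CONF_THRESH, NMS_THRESH)
    return [boxes[i] for i in np.array(keep).flatten()] if len(keep) else []

# Example: count people in a single frame from the main camera (device index 0 assumed).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print("People detected:", len(detect_people(frame)))
cap.release()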
Development of Program
The program will be developed using the PyCharm IDE and the Python programming language. The YOLOv3 algorithm and its dataset repository will be used for detecting people objects; the counting of people and the measurement of the distance between people will be encoded and programmed accordingly. The alarm and admonishing feature will also be developed in the same program, taking the connected output peripherals into consideration.
Pycharm
PyCharm is an Integrated Development Environment (IDE) developed by the Czech company JetBrains, mainly for computer programming in the Python language. Its main features include a smart code editor and inspections, on-the-fly code fixes, code debugging, and code refactoring. PyCharm is a cross-platform IDE supporting the following operating systems: Windows, macOS, and Linux. For the project, the free Community Edition of PyCharm will be used, as its included features are suitable for the development of the program.
Program Start
While Camera Sensor is ON
    Detect People object (using YOLOv3 algorithm)
    Count total number of detected people
    If counted detected people > assigned maximum limit people
        Assign time = 1 sec.
        If counted detected people > assigned maximum limit people
            time = time + 1 sec.
            If time = 30 secs then
                Activate Alarm System for Crowd Violations
            Else return
        Else
            Return
End
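Read as a per-second check, the crowd-limit logic above can be sketched in Python as follows. The get_frame, count_people, and trigger_crowd_alarm callables are hypothetical stand-ins for the camera feed, the YOLOv3-based counter, and the alarm routine.

import time

CROWD_LIMIT = 8            # assigned maximum number of people in the test setup
VIOLATION_SECONDS = 30     # continuous seconds over the limit before the alarm activates

def monitor_crowd(get_frame, count_people, trigger_crowd_alarm):
    # Check the people count once per second; alarm after 30 continuous seconds over the limit.
    over_limit_since = None
    while True:
        frame = get_frame()                      # latest frame from the main camera
        if count_people(frame) > CROWD_LIMIT:
            if over_limit_since is None:
                over_limit_since = time.time()   # start the internal violation timer
            elif time.time() - over_limit_since >= VIOLATION_SECONDS:
                trigger_crowd_alarm()            # crowd limit violated for 30 straight seconds
                over_limit_since = None          # reset after the alarm is raised
        else:
            over_limit_since = None              # violation interrupted; reset the timer
        time.sleep(1)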
Program Start
While Camera Sensor is ON
    Detect People object (using YOLOv3 algorithm)
    Identify centroid of detected people
    Measure distance between centroid of detected people
    If measured distance < 1 m for 30 secs.
        Activate system for social distance violation
        Activate Admonishing System
    Else
        Return
End
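The distance check above reduces to testing every pair of detected people against the 1-meter threshold; a minimal sketch is shown below, reusing a distance_mm helper such as the calibration sketch earlier (both names are assumptions for illustration).

from itertools import combinations

MIN_DISTANCE_MM = 1000  # one meter, per the social distancing protocol

def find_violating_pairs(people_boxes, distance_mm):
    # Return index pairs of detected people whose centroid distance is under 1 meter.
    return [
        (i, j)
        for (i, box_a), (j, box_b) in combinations(enumerate(people_boxes), 2)
        if distance_mm(box_a, box_b) < MIN_DISTANCE_MM
    ]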
C. Testing and Simulation of the program for counting people and distance between people
The program for counting people and measuring the distance between people will be developed utilizing the YOLOv3 algorithm with the associated dataset coming from the official YOLOv3 website (by the developer). The program will initially be tested in a controlled setup in order to improve its accuracy in detecting people, counting people, and measuring the distance between people. After the program is finalized, the different subsystems of the project will be tested in their assigned test cases. The general procedure for people counting and for measuring the distance between people is presented in the flowcharts, Figure 3.12 and Figure 3.13. The testing will be done in each of the assigned test cases.
For people counting, the system will identify people (objects) in the vicinity in real time. Every second, the number of detected people will be counted, and the system will check whether the count is still within the allowed crowd limit. If the count of detected people is greater than the designated crowd limit of eight (8) people, there is a possible crowd limit violation and the system will start an internal timer to track how long the violation continues. If the violation has been tracked for 30 straight seconds, the crowd limit alarm system will be initiated.
For social distance tracking, the system will likewise identify people (objects) in the vicinity in real time and, in addition, identify the centroid of each detected person. Every second, the system will measure the distance between the centroids of the detected people and check whether each measured distance is greater than the minimum allowable distance of one (1) meter. If a distance of less than 1 meter is detected, there is a possible social distancing violation and the system will start an internal timer to track how long the violation continues. If the violation has been tracked for 30 straight seconds, the social distance violation alarm and the admonishing system will be initiated.
This testing is for the prototype's ability to perform people detection and counting. It will be tested in each of the test cases enumerated below. The tables below will be used for testing the operation of the People Detection and Counting program. The real-time people count will be recorded automatically by the program and counted manually by the researcher. For this study, the threshold limit in terms of people count is eight (8) for all test cases. Each test case will have 10 trials. These test cases are:
1. People positioning at least 1 meter away from each other and not exceeding crowd
limit.
2. People positioning at least 1 meter away from each other and exceeding crowd limit.
3. Two (2) people violating social distancing protocol and are very close with each other.
The accuracy for each test case will be determined by dividing the number of trials in which the system count equals the manual count by the total number of trials and then multiplying by 100%.
Testing for Social Distance Tracking
This testing is for the prototype's ability to measure the distance between people. It will be tested in test cases with specific distances between two people. For each trial of each test case, the positioning of the two people will be varied throughout the test area while maintaining the assigned distance between them. Tables 3.10 to 3.12 will be used for testing the operation of the Social Distance Tracking program. Each test case will have 10 trials. The measurement of the distance between people will be done both manually and automatically by the system.
2. Distance Between is 1 m.
3. Distance Between is 2 m.
For each test case, the percentage accuracy will be calculated for each trial:
Percentage Accuracy, %Accuracy = [(actual − |actual − measured|) / actual] × 100%
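A quick check of this formula, written as a small illustrative Python helper:

def percentage_accuracy(actual_m: float, measured_m: float) -> float:
    # (actual - |actual - measured|) / actual x 100, per the formula above.
    return (actual_m - abs(actual_m - measured_m)) / actual_m * 100.0

# Example: a true spacing of 1.00 m measured as 0.96 m gives 96% accuracy,
# which meets the 95% acceptance threshold used in this study.
print(percentage_accuracy(1.00, 0.96))  # 96.0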
The accepted minimum percentage accuracy is 95%. The basis is the published work of R. Keniya and N. Mehendale titled "Real-time social distancing detector using SocialdistancingNet-19 deep learning network" (August 2020), which achieved an accuracy of 92.8%. Acquiring a percentage accuracy of at least 95% will mark a trial as successful. The accuracy of the social distance tracking will be computed as the total number of successful trials divided by the total number of trials, multiplied by 100%.
Alarm System
The alarm system will be activated when the following violations are detected by the preceding system: a crowd limit violation or a social distancing violation. Once the prototype has detected a crowd limit violation occurring for 30 straight seconds, the alarm system will play an alarm tone and a voice message indicating that the crowd limit has been violated. The alarm system will remain activated until the main camera sensor has detected no crowd limit violation for 15 seconds. The Alarm System for Crowd Limit violation will be tested in a controlled environment where the crowd limit may be varied between trials. The general procedure for this is presented in the flowchart, Figure 3.14.
For the testing of the Alarm System for Crowd Limit violation, the system will be tested in each of the test cases enumerated below. Each test case will be assigned a different crowd limit to determine whether the system is flexible with various crowd limit settings. Each test case will have ten trials. For each trial, the testing area will be populated in increments of 1, starting from 1 person. People in the area will purposely stay in the area for at least 30 seconds. There should be no alarm when the number of people is equal to or less than the assigned crowd limit, and the alarm should sound when the number of people is greater than the assigned crowd limit. This tests whether the alarm system for the crowd limit responds accordingly. The alarm system for crowd limit violation will only be activated if the detected number of people in the vicinity is greater than the designated people limit for 30 straight seconds. Once activated, the alarm tone will play, and an automated voice message will indicate that a crowd limit violation has occurred. Tables 3.13 to 3.15 will be used for testing the reliability of the Alarm System for Crowd Limit violation program.
For trials in which the alarm system is activated, the following succeeding procedure will be considered: once the alarm is activated, the number of people will be purposely reduced so that the total number of people is equal to or less than the assigned crowd limit. The alarm system should then stop after 15 continuous seconds with no crowd violation.
Table 3.13 Alarm System for Crowd Limit violation for Test Case #1
Trial | Crowd Limit Setting | Populated People in the Testing Area | Did Alarm System Activate? (YES/NO) | In case of activation, did alarm stop after continuous 15 seconds of no crowd violation? (YES/NO/NA) | Is Response Correct? (YES/NO)
1 | 6 | 1 | | |
2 | 6 | 2 | | |
3 | 6 | 3 | | |
4 | 6 | 4 | | |
5 | 6 | 5 | | |
6 | 6 | 6 | | |
7 | 6 | 7 | | |
8 | 6 | 8 | | |
9 | 6 | 9 | | |
10 | 6 | 10 | | |
Table 3.14 Alarm System for Crowd Limit violation for Test Case #2
Trial | Crowd Limit Setting | Populated People in the Testing Area | Did Alarm System Activate? (YES/NO) | In case of activation, did alarm stop after continuous 15 seconds of no crowd violation? (YES/NO/NA) | Is Response Correct? (YES/NO)
1 | 7 | 1 | | |
2 | 7 | 2 | | |
3 | 7 | 3 | | |
4 | 7 | 4 | | |
5 | 7 | 5 | | |
6 | 7 | 6 | | |
7 | 7 | 7 | | |
8 | 7 | 8 | | |
9 | 7 | 9 | | |
10 | 7 | 10 | | |
Table 3.15 Alarm System for Crowd Limit violation for Test Case #3
Trial | Crowd Limit Setting | Populated People in the Testing Area | Did Alarm System Activate? (YES/NO) | In case of activation, did alarm stop after continuous 15 seconds of no crowd violation? (YES/NO/NA) | Is Overall Response Correct? (YES/NO)
1 | 8 | 1 | | |
2 | 8 | 2 | | |
3 | 8 | 3 | | |
4 | 8 | 4 | | |
5 | 8 | 5 | | |
6 | 8 | 6 | | |
7 | 8 | 7 | | |
8 | 8 | 8 | | |
9 | 8 | 9 | | |
10 | 8 | 10 | | |
The reliability (in percentage) will be determined for each test case. It will be computed as the total count of correct overall responses of the system divided by the number of trials, multiplied by 100.
Once the prototype has detected a social distancing violation occurring for 30 straight seconds, the alarm system will play an alarm tone and a voice message indicating that the social distancing protocol has been violated and the cluster where the violation occurred. The alarm system will remain activated until the main camera sensor has detected that the violators are separated by at least 1 meter. The general procedure for this is presented in the flowchart, Figure 3.15.
Figure 3.15 Alarm System for Social Distancing Violation Flowchart
For the testing of the Alarm System for Social Distancing violation, the testing area will be populated with two (2) people standing less than 1 meter from each other. Three trials will be dedicated to each cluster, for a total of 18 trials. The alarm system for social distancing violation will only be activated if detected people objects are less than 1 meter away from each other for 30 straight seconds. For each trial, the two people in the area will purposely stay in the assigned cluster, close to each other, for at least 30 seconds. Afterwards, the alarm for social distancing violation should start, with a voiceover indicating the cluster where the violation occurred. This tests whether the alarm system for social distancing responds accordingly. Once the alarm is activated, the people will purposely move away from each other, and the alarm system should turn off once the distance between the detected violators is at least 1 m. Table 3.16 will be used for testing the reliability of the Alarm System for Social Distancing violation program.
Table 3.16 Alarm System for Social Distancing violation
For each trial, the alarm system will be tested to see whether it responds correctly. The alarm system for social distancing violation will only be activated if a measured distance between people is less than 1 meter for 30 straight seconds. Once activated, the alarm tone will play, and an automated voice message will indicate the cluster where the social distancing violation occurred. The reliability (in percentage) will be determined by the total count of correct responses of the system divided by the number of trials, multiplied by 100.
E. Testing to determine the Accuracy and Cycle Time of the Admonishing System
Admonishing System
The admonishing system will utilize the secondary PTZ cameras to obtain an optimized view of the violators. After the assigned PTZ camera has acquired an optimized view of the violators specifically detected by the main camera sensor, the following actions will occur at the same time: (1) an alarm tone will be activated, and (2) the PTZ camera will focus on the violators and a monitor display will show the optimized image of the violators. The actions of the admonishing system will stop once the main camera sensor has detected that the violation has been mitigated. The recent studies discussed also have general-use applications, but for this study, the application is for establishments where there is still a heavy volume of people during operating hours (e.g., government establishment lobbies, grocery marts).
The admonishing system will be activated once the prototype has detected a social distancing violation occurring for 30 straight seconds. Prior to the activation of the admonishing system, the cluster where the violation occurred will be identified in order to determine which assigned PTZ camera to use. The general procedure for this is presented in the flowchart.
Cycle Time
The cycle time of the admonishing system starts at the moment the prototype has detected a social distancing violation occurring for 30 straight seconds and ends when the PTZ camera has acquired an optimized view of the violators and the monitor display shows the optimized image of the violators.
Table 3.17 will be used for testing to determine the accuracy and cycle time of the Admonishing System. For each trial, the admonishing system will be tested to see whether it responds correctly. The assigned PTZ camera should be able to focus on the detected cluster and the detected violators so that the violators can be displayed on the monitor. The zoom function of the PTZ camera may be utilized to zoom in on the detected violators for better video presentation.
Table 3.17 Admonishing System Response
Trial | Did assigned PTZ focus on correct cluster? (YES/NO) | Did assigned PTZ focus on detected violators? (YES/NO) | Did the monitor display violators? (YES/NO) | Is Overall Response Correct? (YES, if all previous 3 items are YES; otherwise, NO) | Total cycle time (in secs.)
1 | | | | |
2 | | | | |
… | | | | |
20 | | | | |
For each trial, the admonishing system will be tested on the following functions: (1) the assigned PTZ camera focused on the correct cluster, (2) the assigned PTZ camera focused on the detected violators, and (3) the monitor properly displayed the detected violators. A "YES" will be tagged if the function worked properly. The overall response for each trial will be tagged "YES" if all 3 functions worked properly (i.e., all were tagged "YES"). The accuracy of the Admonishing System will be determined from the successful overall responses; it is equal to the number of successful overall responses divided by the total number of trials, multiplied by 100%.
To determine the cycle time of the admonishing system, the average of all recorded total cycle times will be computed.
REFERENCES
[1] World Health Organization, Philippines: WHO Coronavirus Disease (COVID-19) Dashboard. [Online]. Available: https://covid19.who.int/region/wpro/country/ph
[2] I. Ahmed, M. Ahmad, J. J. P. C. Rodrigues, G. Jeon, S. Din, A deep learning-based social distance monitoring framework for COVID-19, Nov. 2020. [Online]. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7603992/
[3] P. Gupta, V. Sharma, S. Varma. People detection and counting using YOLOv3 and SSD
models, ScienceDirect, Jan. 20, 2021. Accessed on: June 1, 2021. [Online]. Available:
https://www.sciencedirect.com/science/article/pii/S2214785320392312
[4] Y. Cai & G. Medioni, Persistent people tracking and face capture using a PTZ camera,
https://www.researchgate.net/publication/346028195_Persistent_people_tracking_and_face_
capture_using_a_PTZ_camera
[5] S. Gupta et. al., SD-Measure: A Social Distancing Detector, ResearchGate, Nov. 2020.
https://www.researchgate.net/publication/345316546_SD-
Measure_A_Social_Distancing_Detector
[6] T. Banas, How to Calculate Relative Accuracy, Sciencing, Oct. 31, 2020. Accessed on:
6069718.html
[7] R. Keniya, M. Mehendale, Real-Time Social Distancing Detector Using
Socialdistancingnet-19 Deep Learning Network, SSRN, Aug. 11, 2020. Accessed on: June 1,
[8] E.Y. Li, Dive Really Deep into YOLO v3: A Beginner’s Guide, Towards Data Science,
https://towardsdatascience.com/dive-really-deep-into-yolo-v3-a-beginners-guide-
9e3d2666280e
[9] YOLO: Real-Time Object Detection. n.d. Accessed on: June 1, 2021. [Online]. Available:
https://pjreddie.com/darknet/yolo/
[10] RASPBERRY PI FOUNDATION. RASPBERRY PI. n.d. Accessed on: June 1, 2021.
[11] RASPBERRY PI FOUNDATION. Camera Module v2. n.d. Accessed on: June 1, 2021.
[12] RASPBERRY PI FOUNDATION. Pi NoIR Camera v2. n.d. Accessed on: June 1, 2021.
[13] L. Duran-Polanco, M. Siller. Crowd management COVID-19. Science Direct. April 12,
https://www.sciencedirect.com/science/article/pii/S1367578821000249
[14] Ains. 15 Best Raspberry Pi Cameras and Lenses. Seeed Studio. n.d. Accessed on: June
raspberry-pi-cameras-and-lenses-m/
[15] MathWorks, Object Detection, n.d. Accessed on: June 1, 2021. [Online]. Available:
https://www.mathworks.com/discovery/object-detection.html
[16] World Health Organization. COVID-19: physical distancing. n.d. Accessed on: June 1,
19/information/physical-distancing
[17] World Health Organization. COVID-19 transmission and protective measures. n.d.
https://www.who.int/westernpacific/emergencies/covid-19/information/transmission-
protective-measures
[18] V. Meel, YOLOv3: Real-Time Object Detection Algorithm (What’s New?), viso.ai, Feb.
learning/yolov3-overview/
[19] Sparkfun, What is an Arduino, n.d. Accessed on: June 1, 2021. [Online]. Available:
https://learn.sparkfun.com/tutorials/what-is-an-arduino/all
[20] Arduino, What is Arduino?, Feb. 05, 2018. Accessed on: June 1, 2021. [Online].
Available: https://www.arduino.cc/en/Guide/Introduction
[21] N. Bansal, Object Detection using YoloV3 and OpenCV, Towards Data Science, Mar. 8,
https://towardsdatascience.com/object-detection-using-yolov3-and-opencv-19ee0792a420
[22] Q. -C. Mao, H. -M. Sun, Y. -B. Liu and R. -S. Jia, "Mini-YOLOv3: Real-Time Object
Detector for Embedded Applications," in IEEE Access, vol. 7, pp. 133529-133538, 2019, doi:
10.1109/ACCESS.2019.2941547.
[23] S. Campbell, Basics of UART Communication, Circuit Basics, n.d. Accessed on: June 1,
[24] Centers for Disease Control and Prevention, Guidance for Unvaccinated People, June
https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/social-distancing.html
[25] JetBrains, PyCharm Features, n.d. Accessed on: June 1, 2021. [Online]. Available:
https://www.jetbrains.com/pycharm/features/
[26] JetBrains, Get started, June 11, 2021 Accessed on: June 23, 2021. [Online]. Available:
https://www.jetbrains.com/help/pycharm/quick-start-guide.html#create
[27] S. N. Sinha,, Pan-Tilt-Zoom (PTZ) Camera, Springer Link, Feb. 5, 2016. Accessed on:
https://link.springer.com/referenceworkentry/10.1007%2F978-0-387-31439-6_496
[28] T. Bella, Places without social distancing have 35 times more potential coronavirus
spread, study finds, The Washington Post, May 15, 2020 .Accessed on: June 1, 2021.
[Online]. Available: https://www.washingtonpost.com/nation/2020/05/15/social-distancing-
study-coronavirus-spread/
[29] AdrieSenstosa, Raspberry Pi - Arduino Serial Communication, n.d. Accessed on: June
Communication/