Software Requirements Specification: Social Distancing Detection

The document provides a software requirements specification for a social distancing detection system. The system will use computer vision and deep learning techniques like OpenCV, Keras, YOLO object detection, and image classification to detect people in video streams and calculate distances between detected people to check for violations of social distancing guidelines. It is intended to help authorities analyze high-risk areas and redesign public spaces. The system will be trained on large datasets and tested in challenging environments to ensure real-world effectiveness, especially in indoor spaces like shopping centers.


Software Requirements

Specification
For

Social Distancing Detection

Version 1.0 approved

Prepared By:

Anubhav Gautam (1802710023)

Ashish Tyagi (1802710030)

Ashutosh Pandey (1802710032)


Table of Contents
Revision History
1. Introduction
    1.1 Purpose
    1.2 Intended Audience and Reading Suggestions
    1.3 Product Scope
    1.4 References
2. Overall Description
    2.1 Product Perspective
    2.2 Product Functions
    2.3 User Classes and Characteristics
    2.4 Operating Environment
    2.5 Design and Implementation Constraints
    2.6 Assumptions and Dependencies
3. External Interface Requirements
    3.1 User Interface
    3.2 Hardware Interface
4. System Features
    4.1 Classifying the Objects
    4.2 Tracking People
    4.3 Inter-Distance Estimation
5. Other Nonfunctional Requirements
    5.1 Performance Requirements
    5.2 Security Requirements
Appendix A: Keywords

Revision History
Name | Date | Reason For Changes | Version
1. Introduction

1.1 Purpose
Social distancing is a solution recommended by the World Health
Organisation (WHO) to minimise the spread of COVID-19 in public
places. The majority of governments and national health authorities
have set 2 m physical distancing as a mandatory safety measure
in shopping centres, schools and other covered areas.

We can use OpenCV, computer vision, and deep learning to
implement social distancing detectors.

1.2 Intended Audience and Reading Suggestions


The project is intended to help authorities redesign the layout of a
public place or take precautionary actions to mitigate high-risk zones.
The developed model is a generic and accurate people detection and
tracking solution that can be applied in many other fields, such as
autonomous vehicles, human action recognition, anomaly detection,
sports, crowd analysis, or any other research area where human
detection is at the centre of attention.

1.3 Product Scope


The model will be trained and tested on a large and
comprehensive dataset, in challenging environments and lighting
conditions. This will ensure the model is capable of performing in
real-world scenarios, particularly in covered shopping centres where
lighting conditions are less ideal than outdoors.
1.4 References
https://www.pyimagesearch.com/2020/06/01/opencv-social-distancing-detector/

2. Overall Description

2.1 Product Perspective


Since the onset of the coronavirus pandemic, many countries have used
technology-based solutions to inhibit the spread of the disease. For
example, the Indian government uses the Aarogya Setu app to detect
the presence of COVID-19 patients in the adjacent region with the
help of GPS and Bluetooth. This may also help other people
maintain a safe distance from an infected person.
The utilisation of Artificial Intelligence, Computer Vision, and
Machine Learning can help to discover correlations of high-level
features. For example, it may enable us to understand and predict
pedestrian behaviour in traffic scenes, sports activities, medical
imaging, or anomaly detection, by analysing spatio-temporal visual
information and statistical data of the image sequences.

2.2 Product Functions


• Apply object detection to detect all people in a video stream.
• Compute the pairwise distances between all detected people.
• Based on these distances, check to see if any two people are
  less than N pixels apart.
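The three steps above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the centroid format, the threshold value, and the function name are assumptions, and the object-detection step is represented by a pre-computed list of centroids.

```python
import math
from itertools import combinations

MIN_DISTANCE = 50  # the threshold N, in pixels (an assumed value)

def find_violations(centroids, min_distance=MIN_DISTANCE):
    """Return index pairs of people closer than min_distance pixels.

    `centroids` is a list of (x, y) bounding-box centres, assumed to
    come from an upstream object detector.
    """
    violations = []
    for (i, a), (j, b) in combinations(enumerate(centroids), 2):
        if math.dist(a, b) < min_distance:
            violations.append((i, j))
    return violations

people = [(100, 200), (120, 210), (400, 400)]
print(find_violations(people))  # -> [(0, 1)]: the first two people are too close
```

In a real pipeline this check would run per frame on the detector's output, and flagged pairs would be highlighted in the video stream.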
2.3 User Classes and Characteristics
• OpenCV
• Keras Library
• YOLO Object Detector
• COCO image classification
• ImageNet

2.4 Operating Environment


The document proposes a three-stage model comprising people
detection, tracking, and inter-distance estimation as a total solution for
social distancing monitoring and zone-based infection risk analysis.
The system can be integrated with and applied to all types of CCTV
surveillance cameras with any resolution from VGA to Full HD, with
real-time performance.

2.5 Design and Implementation Constraints


One major challenge encountered was that a robust detector
requires a rich set of training datasets. These should
include people of varied gender and age (men, women, boys,
girls) with millions of accurate annotations and labels. We selected
two large datasets, MS COCO and the Google Open Images dataset, that
satisfy these expectations.

2.6 Assumptions and Dependencies


Detecting distances between pedestrians from monocular
images without any extra information is not possible. One way
(though not very accurate) is to ask the user for specific inputs
leading to a distance estimate between the pedestrians. If the
user marks two points on the frame that are 6 feet apart, the
distance between other points on the frame can be found by
extrapolation. This would hold only if the camera were equidistant
from all points on the plane where the pedestrians walk. In
practice, the closer the pedestrians are to the camera, the bigger
they appear; and the closer two points (the same number of pixels
apart) are to the camera, the smaller the actual distance between
them.
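The naive extrapolation described above can be sketched as follows. All names here are hypothetical, and the sketch deliberately ignores perspective, which is exactly the inaccuracy the paragraph warns about:

```python
import math

def pixel_to_feet_scale(ref_a, ref_b, real_distance_ft=6.0):
    """Feet per pixel, from two reference points a known real distance apart."""
    return real_distance_ft / math.dist(ref_a, ref_b)

def estimated_distance_ft(p, q, scale):
    """Naive estimate that assumes a uniform scale across the whole frame."""
    return math.dist(p, q) * scale

# User marks two points 120 px apart that are known to be 6 ft apart.
scale = pixel_to_feet_scale((100, 300), (220, 300))
print(estimated_distance_ft((50, 300), (110, 300), scale))  # 60 px -> 3.0 ft
```

The estimate is only reliable near the reference points; away from them, the perspective effect makes the constant feet-per-pixel assumption break down.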

3. External Interface Requirements

3.1 User Interface


• The first step in image detection was using the tiny
  YOLO v2 model. However, any detection model would work.

• The second step is to calculate the distances between all detected
  objects. While this may sound simple, it is actually more
  complicated: a camera normally does not provide a top view.
  Instead, it sits at some angle, which introduces perspective.
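One common way to remove the camera's perspective is to warp ground points into a top view using a homography estimated from four reference points. The source does not spell out its correction method, so this is an illustrative sketch: the reference coordinates are made up, and OpenCV's `cv2.getPerspectiveTransform` would normally replace the hand-rolled solver.

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 homography H mapping src -> dst (4 point pairs).

    Uses the direct linear transform with h33 fixed to 1, giving an
    8x8 linear system.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def to_top_view(H, point):
    """Map an image point (e.g., a person's feet) into the top-view plane."""
    x, y, w = H @ np.array([point[0], point[1], 1.0])
    return (x / w, y / w)

# Hypothetical corners of a ground rectangle as seen by an angled camera,
# mapped to a 100x100 top-view square.
src = [(200, 400), (440, 400), (560, 600), (80, 600)]
dst = [(0, 0), (100, 0), (100, 100), (0, 100)]
H = homography_from_points(src, dst)
print(to_top_view(H, (200, 400)))  # maps back to (0, 0), up to rounding
```

Distances measured between points in the top-view plane are free of the perspective distortion described above, so a single pixels-to-metres scale applies everywhere.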
3.2 Hardware Interface
The working of the algorithm is shown in the diagram below:

4. System Features

4.1 Classifying the objects


The head of a DNN is responsible for classifying the objects (e.g.,
people, bicycles, chairs, etc.) as well as calculating the size of the
objects and the coordinates of the corresponding bounding boxes.
There are usually two types of head sections: one-stage (dense) and
two-stage (sparse). The two-stage detectors use region proposals
before applying the classification. First, the detector extracts a set of
object proposals (candidate bounding boxes) by a selective search.
Then it resizes them to a fixed size before feeding them to the CNN
model. This is similar to R-CNN based detectors [46–48]. In spite of
the accuracy of two-stage detectors, such methods are not suitable
for systems with restricted computational resources [99]. On the
other hand, the one-stage detectors perform a unified detection
process: they map the image pixels to enclosed grids and check
the probability of the existence of an object in each cell of the grid.
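The grid mapping used by one-stage detectors can be illustrated as follows. The 13x13 grid is typical of tiny YOLO's final feature map, but the value here is illustrative rather than taken from the source:

```python
def grid_cell(center, image_size, grid_size=13):
    """Map an object's centre (x, y) to the (row, col) grid cell
    responsible for predicting it.

    A one-stage detector divides the image into grid_size x grid_size
    cells and predicts object probabilities per cell.
    """
    x, y = center
    w, h = image_size
    col = min(int(x * grid_size / w), grid_size - 1)
    row = min(int(y * grid_size / h), grid_size - 1)
    return row, col

print(grid_cell((208, 208), (416, 416)))  # centre of a 416x416 image -> (6, 6)
```

Each cell then outputs class probabilities and box offsets for the objects whose centres fall inside it, which is what makes the single forward pass sufficient.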
4.2 Tracking People
The next step after the detection phase is people tracking and ID
assignment for each individual. We use the Simple Online and
Realtime Tracking (SORT) technique [103] as a framework, combining a
Kalman filter [104] with the Hungarian optimisation technique to track
people. The Kalman filter predicts the position of a person at time t
+ 1 based on the current measurement at time t and a
mathematical model of human movement. This is an effective
way to keep localising a person in case of occlusion. The Hungarian
algorithm is a combinatorial optimisation algorithm that helps to
assign a unique ID number to identify a given object across a set of
image frames, by examining whether a person in the current frame is
the same detected person as in the previous frames.
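The prediction step described above can be sketched with a constant-velocity motion model. This is only the predict half of a Kalman filter, and a simplification: SORT's actual state also tracks bounding-box scale and aspect ratio, and the full filter adds covariance propagation and a measurement-update step.

```python
import numpy as np

# Constant-velocity model: state s = [x, y, vx, vy], one frame per time step.
F = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

def predict(state):
    """Predict the state at time t+1 from the state at time t: s' = F s."""
    return F @ state

# A person at (100, 200) moving 5 px right and 2 px up per frame.
state = np.array([100.0, 200.0, 5.0, -2.0])
print(predict(state)[:2])  # predicted position next frame: x=105, y=198
```

Because the prediction needs no new measurement, the tracker can keep localising a person through short occlusions, re-associating the track once the detector sees them again.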

4.3 Inter-Distance Estimation


With a single camera, the projection of a 3-D world scene onto a 2-D
perspective image plane leads to unrealistic pixel distances between the
objects. This is called the perspective effect, in which we cannot perceive a
uniform distribution of distances across the image. For example, parallel lines
intersect at the horizon, and people farther from the camera appear much
shorter than people closer to the camera coordinate centre. In three-
dimensional space, the centre or reference point of each bounding box is
associated with three parameters (x, y, z), while in the image received from the
camera, the original 3D space is reduced to the two dimensions (x, y), and the
depth parameter (z) is not available. In such a lowered-dimensional space, the
direct use of the Euclidean distance criterion for inter-people distance
estimation would be erroneous. In order to apply a calibrated IPM
transformation, we first need a camera calibration, setting z = 0 to eliminate
the perspective effect. We also need to know the camera location, its height,
angle of view, and the optics specifications (i.e., the camera intrinsic
parameters).
By applying the IPM, the 2D pixel points (u, v) are mapped to the
corresponding world coordinate points (Xw, Yw, Zw),
where h is the camera height, f is the focal length, and ku and kv are the
measured calibration coefficient values in horizontal and vertical pixel units,
respectively. (cx, cy) are the principal point shifts that correct the optical axis
of the image plane.
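The full IPM equation is not reproduced in this document. As a hedged sketch using the symbols defined above, and under the simplifying assumption of a downward-looking pinhole camera at height h with the tilt already handled by calibration, the mapping takes the form:

```latex
X_w = \frac{h\,(u - c_x)}{f\,k_u}, \qquad
Y_w = \frac{h\,(v - c_y)}{f\,k_v}, \qquad
Z_w = 0
```

The complete IPM additionally accounts for the camera's tilt angle, which the simplified form above omits.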

5. Other Nonfunctional Requirements

5.1 Performance Requirements


In order to train the developed model, we adopted a transfer
learning approach, using models pre-trained on the Microsoft COCO
dataset followed by fine-tuning and optimisation of our YOLO-based
model.
Four common multi-object annotated datasets were investigated:
PASCAL VOC, Microsoft COCO, ImageNet ILSVRC, and the
Google Open Images Dataset V6+.
All benchmarking tests and comparisons were conducted on
the same hardware and software: a Windows 10 platform with
an Intel® Core™ i5-3570K processor and an NVIDIA RTX 2080 GPU
with CUDA version 10.1.

5.2 Security Requirements


One of the controversial opinions that we received from health
authorities concerned how to deal with family members and couples
in social distancing monitoring. Some researchers believed social
distancing should apply to every individual without any
exceptions, while others advised that couples and family
members can walk in close proximity without being counted as a
breach of social distancing. In some countries, such as the UK and
the EU region, the guidelines allow two family members or a couple to
walk together without considering it a breach of social distancing.

Appendix A: Keywords
social distancing; COVID-19; human detection and tracking; distance
estimation; deep convolutional neural networks; crowd monitoring;
pedestrian detection; inverse perspective mapping
