Coding Project 1
INTRODUCTION
An activity recognition system is designed to identify the basic day-to-day activities
performed by a human being. It is challenging to achieve high accuracy in
recognizing these activities due to the complexity and diversity of human activities.
Activity models required for identification and classification of human activities are
constructed based on different approaches specific to the application. The activities of a
human being can be generally categorized into normal activities or anomalous activities.
A human being's deviation from normal behavior to abnormal behavior, causing harm to the
surroundings or to himself, is classified as an anomalous activity. To achieve anomaly
detection, one of the most widespread methods is to use videos of normal events as
training data to learn a model, and then detect the suspicious events which do
not fit the learned model. For example, human pose estimation is used in applications
including video surveillance, animal tracking, action understanding, sign language
recognition, advanced human-computer interaction, and markerless motion
capture. Low-cost depth sensors have limitations: they are restricted to indoor use, and
their low resolution and noisy depth information make it difficult to estimate human
poses from depth images. Hence, we use neural networks to overcome these
problems. Anomalous human activity recognition from surveillance video is an active
research area in image processing and computer vision.
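The approach described above, learning a model of normal events and flagging events that do not fit, can be sketched with a toy statistical model. This is a deliberate simplification (a per-feature mean and standard deviation stands in for the neural network, and the motion features are invented for illustration):

```python
import numpy as np

def fit_normal_model(features):
    """Learn a simple model of 'normal' behaviour: per-feature mean and std."""
    return features.mean(axis=0), features.std(axis=0) + 1e-8

def is_anomalous(sample, mean, std, z_threshold=3.0):
    """An event is suspicious if any feature lies far outside the learned model."""
    z = np.abs((sample - mean) / std)
    return bool((z > z_threshold).any())

# 500 'normal' training events, each described by 4 made-up motion features.
rng = np.random.default_rng(0)
normal_events = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
mean, std = fit_normal_model(normal_events)

typical = np.zeros(4)        # fits the learned model, so it is not flagged
unusual = np.full(4, 10.0)   # far from anything seen in training, so it is flagged
```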
CHAPTER-2
SYSTEM STUDY AND ANALYSIS
For detecting suspicious human activity, it is important for the model to learn
suspicious human poses. Human pose estimation is one of the key problems in
computer vision that has been studied for more than 15 years. It is related to
identifying human body parts and possibly tracking their movements. It is used
in AR/VR, gesture recognition, gaming consoles, etc. Initially, low-cost depth
sensors (motion sensors) were used to detect human movement in gaming
consoles. However, these sensors are limited to indoor use, and their low
resolution and noisy depth information make it difficult to estimate the ongoing
human activity from depth images. Hence, they are not a suitable option for
suspicious activity detection.
2.1.1 DISADVANTAGES
Limited to indoor use
Low resolution and noisy depth information
Not suitable for suspicious activity detection
2.2 PROPOSED SYSTEM
The first step was to decide which suspicious activities to focus on. We selected 5 suspicious
activities to classify: shooting, punching, kicking, knife attack and sword fight. These 5
activities formed 5 classes for our classifier model; the non-suspicious activities were put in
a 6th class. The CCTV camera is a video camera that feeds or streams its image in real time.
The system will detect a suspicious person, i.e. an unauthorized entry into a restricted place, in a
video by using the AMD algorithm, and will start tracking once the user has specified a
suspicious person on the display. The main purpose of an efficient background
subtraction method is to generate a reliable background model and thus significantly improve
the detection of moving objects. Advanced Motion Detection (AMD) achieves complete
detection of moving objects. A camera is connected inside the monitoring room, which
produces alert messages whenever any suspicious activity is observed.
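The background-subtraction idea can be sketched with a running-average background model and frame differencing. This is a minimal stand-in, not the AMD algorithm itself, whose internal details are not given in this report:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Blend the new frame into the running-average background model."""
    return (1 - alpha) * background + alpha * frame

def moving_object_mask(background, frame, threshold=25):
    """Pixels that differ strongly from the background model are foreground."""
    diff = np.abs(frame.astype(np.float64) - background)
    return (diff > threshold).astype(np.uint8)

# Toy example: an empty scene, then a bright 2x2 "object" appears.
background = np.zeros((4, 4))
frame = background.copy()
frame[1:3, 1:3] = 200
mask = moving_object_mask(background, frame)      # 1 where the object moved in
background = update_background(background, frame) # model slowly absorbs the scene
```

An alert could then be raised whenever the foreground mask covers more than some fraction of the frame.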
2.2.1 ADVANTAGES
More accuracy
Easy to detect suspicious activity
High resolution
PYTHON
Python is an easy to learn, powerful programming language. It has efficient high-level data
structures and a simple but effective approach to object-oriented programming. Python's
elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal
language for scripting and rapid application development in many areas on most platforms.
The Python interpreter and the extensive standard library are freely available in source or
binary form for all major platforms from the Python Web site, https://www.python.org/, and
may be freely distributed. The same site also contains distributions of and pointers to many
free third party Python modules, programs and tools, and additional documentation.
The Python interpreter is easily extended with new functions and data types implemented in
C or C++ (or other languages callable from C). Python is also suitable as an extension
language for customizable applications.
This tutorial introduces the reader informally to the basic concepts and features of the Python
language and system. It helps to have a Python interpreter handy for hands-on experience, but
all examples are self-contained, so the tutorial can be read off-line as well.
For a description of standard objects and modules, see The Python Standard Library. The
Python Language Reference gives a more formal definition of the language. To write
extensions in C or C++, read Extending and Embedding the Python Interpreter and Python/C
API Reference Manual. There are also several books covering Python in depth.
Python is a powerful programming language ideal for scripting and rapid application
development. It is used in web development (like Django and Bottle), scientific and
mathematical computing, and game development (Pygame, Panda3D).
This tutorial introduces you to the basic concepts and features of Python 3. After reading the
tutorial, you will be able to read and write basic Python programs, and explore Python
further on your own.
This tutorial is intended for people who have knowledge of other programming languages. A
program called an interpreter runs Python code on almost any kind of computer. This means
that a programmer can change the code and quickly see the results. This also means Python is
slower than a compiled language like C, because it is not running machine code directly.
Python is a high-level language, which means a programmer can focus on what to do instead
of how to do it. Writing programs in Python takes less time than in some other languages.
Python drew inspiration from other programming languages like C, C++, Java, Perl, and
Lisp.
Python's developers strive to avoid premature optimization. Additionally, they reject patches
to non-critical parts of the reference implementation that would offer only marginal
improvements on speed. When speed is important, a Python programmer can move time-
critical functions to extension modules written in languages such as C, or try PyPy, a just-in-time
compiler. Cython is also available. It translates a Python script into C and makes direct C-
level API calls into the Python interpreter.
Keeping Python fun to use is an important goal of Python's developers. It is reflected in the
language's name, a tribute to the British comedy group Monty Python. On occasion, they take
playful approaches to tutorials and reference materials, such as referring to spam and eggs
instead of the standard foo and bar.
Sometimes only Python code is used for a program, but most of the time it is used to do
simple jobs while another programming language is used to do more complicated tasks.
Its standard library is made up of many functions that come with Python when it is installed.
On the Internet there are many other libraries available that make it possible for the Python
language to do more things. These libraries make it a powerful language; it can do many
different things. Some things that Python is often used for are:
Web development
Scientific programming
Network programming
Game programming.
Syntax
Python has a very easy-to-read syntax. Some of Python's syntax comes from C,
because that is the language in which the main Python implementation is written. But Python
uses whitespace to delimit code: spaces or tabs are used to organize code into groups. This is
different from C. In C, a semicolon ends each statement and curly braces ({}) are used to
group code.
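The contrast can be seen in a short snippet: the indented lines form the statement groups that C would mark with braces, and no semicolons are needed.

```python
def classify(score):
    # The body of each branch is delimited purely by indentation.
    if score > 0.5:
        label = "suspicious"
    else:
        label = "non-suspicious"
    return label

print(classify(0.9))   # suspicious
print(classify(0.1))   # non-suspicious
```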
2.3 SYSTEM SPECIFICATION
CHAPTER-3
Input design is the process of converting user-oriented input to a computer-based format.
The goal of input design is to make data entry easy, logical and error-free. Errors in
the input data are controlled by the input design. The quality of the input determines the
quality of the system's output.
The entire data entry screen is interactive in nature, so that the user can directly enter
data according to the prompted messages. The users are also provided with the option of
selecting an appropriate input from a list of values. This reduces the number of errors
that occur during data entry.
Input design is one of the most important phases of system design. It is the process in
which the input received by the system is planned and designed, so as to obtain the
necessary information from the user while eliminating the information that is not required.
The aim of input design is to ensure the maximum possible level of accuracy and also to
ensure that the input is acceptable to and understood by the user.
Input design is a part of the overall system design which requires very careful
attention. If the data going into the system is incorrect, then the processing and output will
magnify these errors. Input design features can ensure the reliability of the system and
produce results from accurate data.
Output design is a very important concept in a computerized system; without reliable
output the user may feel the entire system is unnecessary and avoid using it. Proper
output design is important in any system and facilitates effective decision-making.
Computer output is the most important and direct source of information to the user.
Efficient, intelligible output design improves the system's relationship with the user
and helps in decision making. A major form of output is the hardcopy from the printer.
Output requirements are designed during system analysis. A good starting point for
the output design is the data flow diagram. Human factors are an important consideration
in output design.
An application is successful only when it can provide efficient and effective reports.
Reports are a presentable form of the data, and report generation should be useful to
the management for future reference. The reports are the main source of information for
users, operators and management. Reports generated are a permanent record of the
transactions that occurred: after any valid transaction, a report of the same is generated
and filed for future reference. Great care has been taken when designing the reports.
A well-designed database is essential for the good performance of the system. Several
tables are referenced or manipulated at various instances. The tables, also known as
relations, provide efficient storage wherever possible. While normalizing the tables, care
should be taken that the number of tables does not exceed the optimum level, so that table
maintenance is convenient and effective.
The process of database design generally consists of a number of steps which
will be carried out by the database designer. Not all of these steps will be necessary in
all cases; broadly, the designer determines the data to be stored and the relationships
between the data elements.
Within the relational model the final step can generally be broken down into two
further steps: determining the grouping of information within the system, generally by
determining what are the basic objects about which information is being stored, and then
determining the relationships between these groups of information, or objects. This step is
generally performed by a person with expertise in the area of database design, rather than
expertise in the domain from which the data to be stored is drawn, e.g. financial
information, biological information, etc. Therefore the data to be stored in the database
must be determined in cooperation with a person who does have expertise in that domain,
and who is aware of what data must be stored within the system.
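Determining the basic objects and the relationships between them can be illustrated with a tiny example: a flat, repetitive list of detection records is split into an activity table and a detection table that refers to it by key. All table and field names here are invented for illustration:

```python
# Flat records repeat the activity name in every row.
flat = [
    {"video": "cam1.avi", "frame": 10, "activity_id": 1, "activity_name": "Walking"},
    {"video": "cam1.avi", "frame": 22, "activity_id": 1, "activity_name": "Walking"},
    {"video": "cam2.avi", "frame": 5,  "activity_id": 3, "activity_name": "Clapping"},
]

# Normalised form: the activity name is stored once, and each detection
# refers to it through activity_id (a foreign key in relational terms).
activities = {r["activity_id"]: r["activity_name"] for r in flat}
detections = [{"video": r["video"], "frame": r["frame"],
               "activity_id": r["activity_id"]} for r in flat]
```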
3.4 SYSTEM DEVELOPMENT
MODULES USED
Data Collection: First of all, data from different websites and social media
applications is extracted based on certain parameters.
Preprocessing: Then we apply various pre-processing steps such as noise removal,
resizing, binary conversion and gray scaling in order to prepare the dataset.
Noise removal: Noise is removed from the input video. In image processing, the key
process for denoising is filtering. Generally average filters, median filters, Wiener filters
and Kalman filters are utilized to reduce noise.
Resizing: Image resizing is necessary when we need to increase or decrease the total
number of pixels, whereas remapping can be done when we are adjusting for lens
distortion or rotating an image.
Binary conversion: A binary image is one whose pixels can each have any one of
precisely two colors, classically black and white. Binary images are also known as
bi-level or two-level images. This means that each pixel is stored as a
single bit, i.e. a value of 0 or 1.
Data Training: We gather artificial as well as real-time data (online news and social
media) and train a machine learning classifier on it.
Testing with machine learning: We give the testing dataset to the system and apply the
machine learning algorithm to detect the activity accordingly.
Analysis: We determine the accuracy of the proposed system and compare it with other
existing systems.
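The pre-processing steps described above (gray scaling, resizing, binary conversion) can be sketched with small NumPy stand-ins. Real code would normally use OpenCV for these operations, and noise filtering is omitted here for brevity:

```python
import numpy as np

def to_grayscale(rgb):
    """Gray scaling: average the three colour channels."""
    return rgb.mean(axis=2)

def resize_nearest(img, new_h, new_w):
    """Resizing by nearest-neighbour sampling."""
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows[:, None], cols]

def to_binary(img, threshold=128):
    """Binary conversion: every pixel becomes 0 or 1."""
    return (img >= threshold).astype(np.uint8)

frame = np.zeros((8, 8, 3))
frame[2:6, 2:6] = 255               # a bright square on a dark background
gray = to_grayscale(frame)          # shape (8, 8)
small = resize_nearest(gray, 4, 4)  # shape (4, 4)
binary = to_binary(small)           # holds only 0s and 1s
```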
12
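The data training, testing and analysis steps can likewise be sketched end to end with a minimal classifier. A nearest-centroid model stands in for "any machine learning classifier", and the feature vectors are invented for illustration:

```python
import numpy as np

class NearestCentroid:
    """Each class is represented by the mean of its training vectors;
    a sample is assigned to the class with the closest centroid."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Toy feature vectors for two activity classes (1 = suspicious, 6 = non-suspicious).
X_train = np.array([[0.9, 0.8], [1.0, 0.9], [0.1, 0.2], [0.0, 0.1]])
y_train = np.array([1, 1, 6, 6])
X_test = np.array([[0.95, 0.85], [0.05, 0.15]])
y_true = np.array([1, 6])

clf = NearestCentroid().fit(X_train, y_train)
pred = clf.predict(X_test)
accuracy = (pred == y_true).mean()   # the analysis step
```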
CHAPTER-4
SYSTEM TESTING
Testing is a series of different tests whose primary purpose is to fully exercise
the computer-based system. Although each test has a different purpose, all work should verify
that all system elements have been properly integrated and perform their allocated functions.
Testing is the process of checking whether the developed system works according to its
requirements.
The philosophy behind testing is to find errors. A good test is one that has a
high probability of finding an undiscovered error, and a successful test is one that uncovers
such an error. Test cases are devised with this purpose in mind. A test case is a set of
data that the system will process as input. The data are created with the intent of
determining whether the system will process them correctly, without any errors, to produce
the required output.
Types of Testing:
Unit testing
Integration testing
Validation testing
Output testing
Unit Testing
All modules were tested individually as soon as they were completed and were
checked for their correct functionality.
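A unit test for one module can be written with Python's unittest. The class_name helper below is a simplified stand-in for the function of the same name in the sample code (its full label table is not reproduced here):

```python
import unittest

def class_name(activity_class):
    """Simplified module under test: map a class number to a display label."""
    names = {1: "Walking Detection!!!",
             2: "Jogging Detection!!!",
             3: "Clapping Detection!!!"}
    return names.get(activity_class, "Unknown")

class TestClassName(unittest.TestCase):
    def test_known_class(self):
        self.assertEqual(class_name(1), "Walking Detection!!!")

    def test_unknown_class(self):
        self.assertEqual(class_name(99), "Unknown")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestClassName)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```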
Integration Testing
The entire project was split into small programs; each of these programs gives a
frame as output. These programs were tested individually; at last, all these programs were
combined by creating another program in which all these constructors were used, and the
combined result was tested as a whole.
The user interface testing is important since the user has to confirm that the arrangements
made in the frames are convenient and satisfactory. When the frames were given for testing,
the end users gave suggestions. Based on their suggestions the frames were modified and put
into practice.
Validation Testing
At the culmination of integration testing, the software is completely assembled as a
package. Interfacing errors have been uncovered and corrected, and a final series of tests,
i.e. validation testing, begins. Validation succeeds when the software functions in a manner
that can be reasonably accepted by the customer.
Output Testing
After performing the validation testing, the next step is output testing of the proposed
system, since the system is not useful if it does not produce the required output. Asking
the user about the format in which the system is required tests the output displayed or
generated by the system under consideration. Here the output format is considered in two
ways: one is on-screen and the other is printed format. The output format on the screen was
found to be correct, as the format was designed in the system design phase according to the
user needs. For the hardcopy, the output also comes according to the specifications requested
by the user.
An acceptance test has the objective of selling the user on the validity and reliability of
the system. It verifies that the procedures operate to system specification and that the
integrity of important data is maintained.
Performance Testing
This project is an application-based project, and the modules are interdependent with
the other modules, so the testing cannot be done module by module; unit testing is not
possible in this case. So this system is checked only on the basis of its overall
performance.
IMPLEMENTATION
Implementation means making the new system available to a prepared set of users (the
deployment), and
positioning on-going support and maintenance of the system within the Performing
Organization (the transition). At a finer level of detail, deploying the system consists of
executing all steps necessary to educate the Consumers on the use of the new system, placing
the newly developed system into production, confirming that all data required at the start of
operations is available and accurate, and validating that business functions that interact with
the system are functioning properly. Transitioning the system support responsibilities
involves changing from a system development to a system support and maintenance mode of
operation, with ownership of the new system moving from the Project Team to the
Performing Organization.
System implementation is the important stage of the project, when the theoretical design is
turned into a practical system. The main stages in the implementation are as follows:
Planning
Training
Changeover Planning
Planning is the first task in system implementation. Planning means deciding on
the method and the time scale to be adopted. At the time of implementation of any system,
people from different departments and systems analysts are involved. They are confronted
with the practical problem of controlling the various activities of people outside their own
data processing committee. The committee considers the ideas, problems and complaints of
the user departments.
The following roles are involved in carrying out the processes of this phase. Detailed
descriptions of these roles can be found in the Introductions to Sections I and III.
_ Project Manager
_ Project Sponsor
_ Business Analyst
_ Data/Process Modeler
_ Technical Lead/Architect
_ Application Developers
_ Customer Decision-Maker
_ Customer Representative
_ Consumer
The purpose of Prepare for System Implementation is to take all possible steps to
ensure that the upcoming system deployment and transition occur smoothly, efficiently, and
flawlessly. In the implementation of any new system, it is necessary to ensure that the
Consumer community is best positioned to utilize the system once deployment efforts have
been validated. Therefore, all necessary training activities must be scheduled and
coordinated. As this training is often the first exposure to the system for many individuals,
the training activities must be synchronized with the deployment plan and with each other.
_ Consumers may experience a period of time in which the systems that they depend on to
perform their jobs are temporarily unavailable to them. They may be asked to maintain
detailed manual records or logs of business functions that they perform, to be entered into
the new system once it is available, while at the same time having to continue current levels
of service on other activities. Communicating these responsibilities to all parties involved
in the project is critical.
A smooth deployment requires strong leadership, planning, and communications. By this
point in the project lifecycle, the team will have spent countless hours devising and refining
the steps to be followed. During this preparation process the Project Manager must verify
that all conditions that must be met prior to initiating deployment activities have been met,
and that the final ‘green light’ is on for the team to proceed. The final process within the
System Development Lifecycle is to transition the system from the Project Team to the
Performing Organization. In order for there to be an efficient and effective transition, the
Project Manager should make sure that all involved parties are aware of the transition plan
and the timing of the various transition activities.
Due to the number of project participants in this phase of the SDLC, many of the
necessary conditions and activities may be beyond the direct control of the Project Manager.
Consequently, all Project Team members with roles in the implementation efforts must
understand the plan, acknowledge their responsibilities, recognize the extent to which other
implementation efforts are dependent upon them, and confirm their commitment.
CHAPTER-5
BIBLIOGRAPHY
1. Joey Tianyi Zhou, Jiawei Du, Hongyuan Zhu, Xi Peng, Rick SiowMong Goh, (2019)
“AnomalyNet: An Anomaly Detection Network for Video Surveillance”, IEEE
Transactions on Information Forensics and Security, 1(1), pp. 99-105
4. W. Luo, W. Liu, and S. Gao, (2017) “A revisit of sparse coding based anomaly
detection in stacked rnn framework,” in The IEEE International Conference on
Computer Vision (ICCV).
C. Lu, J. Shi and J. Jia, (2013) “Abnormal event detection at 150 fps in Matlab,”
Proceedings of the IEEE International Conference on Computer Vision, pp. 2720–2727.
10. Gul, M.A., M.H. Yousaf, S. Nawaz, Z. Ur Rehman and H. Kim. 2020. Patient
monitoring by abnormal human activity recognition based on CNN architecture.
Electronics 9:1993.
11. Ullah, W., A. Ullah, I.U. Haq, K. Muhammad, M. Sajjad and S.W. Baik. 2021. CNN
features with bi-directional LSTM for real-time anomaly detection in surveillance
networks. Multimed. Tools Appl. 80:16979–16995.
12. Anishchenko, L. 2018. Machine learning in video surveillance for fall detection. 2018
ural symposium on biomedical engineering, radioelectronics and information
technology (usbereit). IEEE. pp.99–102.
13. Karpathy, A., G. Toderici, S. Shetty, T. Leung, R. Sukthankar and L. Fei-Fei. 2014.
Largescale video classification with convolutional neural networks. Proceedings of
the IEEE conference on Computer Vision and Pattern Recognition. pp.1725–1732.
Feichtenhofer, C., A. Pinz and A. Zisserman. 2016. Convolutional two-stream network fusion
for video action recognition. Proceedings of the IEEE conference on computer vision and
pattern recognition. pp.1933–1941.
APPENDICES
ARCHITECTURE
SAMPLE CODING
import numpy as np
import cv2
import imutils
import time
# (Keras imports reconstructed; they were missing from the original listing)
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image

# HOG (Histogram of Oriented Gradients) people detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def norm_pdf(x, mean, sigma):
    # Gaussian probability density, used by the mixture-of-gaussians background model
    return (1 / (np.sqrt(2 * np.pi) * sigma)) * np.exp(-0.5 * ((x - mean) / sigma) ** 2)
def CNN_test(train_model, test, flatten):
    if not flatten:
        # Initialising the CNN
        classifier = Sequential()
        # Step 1 - Convolution
        # (layer definitions reconstructed; the exact filter sizes were lost from the original listing)
        classifier.add(Convolution2D(32, (3, 3), input_shape=(64, 64, 3), activation='relu'))
        # Step 2 - Pooling
        # Feature Map - Take Max -> Pooled Feature Map: reduced size, reduced complexity
        classifier.add(MaxPooling2D(pool_size=(2, 2)))
        # Step 3 - Flattening
        # We don't flatten the raw image, because the high numbers in the convolution
        # feature maps come from the feature detector; Max Pooling keeps these high
        # numbers, and flattening preserves them.
        # Why didn't we take all the pixels and flatten them into a huge vector?
        # That would keep only the pixels themselves, not how they are spatially
        # structured around each other. By applying convolution and pooling, each
        # feature map corresponds to a specific feature of the image, so we keep
        # the spatial structure of the picture.
        classifier.add(Flatten())
        classifier.add(Dense(128, activation='relu'))
        classifier.add(Dense(1, activation='sigmoid'))
        # Logarithmic loss - binary cross entropy; with more than two outcomes,
        # categorical cross entropy would be used instead.
        classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
        # Find patterns in pixels: 10000 images with 8000 for training is not much,
        # so we use a trick: image augmentation creates batches, and each batch
        # applies random transformations.
        train_datagen = ImageDataGenerator(
            rescale=1./255,
            shear_range=0.2,
            zoom_range=0.2,
            horizontal_flip=True)
        test_datagen = ImageDataGenerator(rescale=1./255)
        training_set = train_datagen.flow_from_directory('dataset/training_set',
                                                         target_size=(64, 64),
                                                         batch_size=32,
                                                         class_mode='binary')
        test_set = test_datagen.flow_from_directory('dataset/test_set',
                                                    target_size=(64, 64),
                                                    batch_size=32,
                                                    class_mode='binary')
        classifier.fit_generator(training_set,
                                 samples_per_epoch=8000,
                                 nb_epoch=25,
                                 validation_data=test_set,
                                 nb_val_samples=2000)
        test_image = image.img_to_array(test)           # the frame passed in by the caller
        test_image = np.expand_dims(test_image, axis=0)
        result = classifier.predict(test_image)
        training_set.class_indices
        if result[0][0] == 1:
            prediction = 'suspicious'
        else:
            prediction = 'Non-suspicious'
    return 0
def detect_activities(a, layer):
    # (the conditions selecting classes 1, 2 and 5 were lost from the original
    # listing; only the branches below survive)
    if layer == 430:
        detection_activity = 3
    elif layer == 435:
        detection_activity = 4
    elif layer == 150:
        detection_activity = 6
    else:
        detection_activity = 1
    return detection_activity
def class_name(activity_class):
    if activity_class == 1:
        activity_name = "Walking Detection!!!"
    elif activity_class == 2:
        activity_name = "Jogging Detection!!!"
    elif activity_class == 3:
        activity_name = "Clapping Detection!!!"
    else:
        # (the labels for classes 4-6 were lost from the original listing)
        activity_name = "Unknown Activity"
    return activity_name
def connectedLayers(count, bias, d):
    start_point = (0, 0)
    start_point1 = (0, 0)
    end_point = (0, 0)
    predict = ""   # (the label text assigned here was lost from the original listing)
    if bias == 323 or bias == 2729:
        learning_rate = 1
    else:
        learning_rate = 0
    return learning_rate, start_point1, start_point, end_point, predict
# creating background-subtractor objects
fgbg1 = cv2.bgsegm.createBackgroundSubtractorMOG()
fgbg2 = cv2.createBackgroundSubtractorMOG2()
fgbg3 = cv2.bgsegm.createBackgroundSubtractorGMG()
cap = cv2.VideoCapture('abnormal_crowd.avi')
_,frame = cap.read()
frame = cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
frame1=frame
row,col = frame.shape
mean = np.zeros([3,row,col],np.float64)
mean[1,:,:] = frame
variance = np.zeros([3,row,col],np.float64)
variance[:,:,:] = 400
omega = np.zeros([3,row,col],np.float64)
omega[0,:,:],omega[1,:,:],omega[2,:,:] = 0,0,1
omega_by_sigma = np.zeros([3,row,col],np.float64)
foreground = np.zeros([row,col],np.uint8)
background = np.zeros([row,col],np.uint8)
T = 0.5
a = np.uint8([255])
b = np.uint8([0])
length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
print( length )
learning_rate,start_point1,start_point,end_point,predict=connectedLayers(a,length,frame1)
if not cap.isOpened():
    print("Error opening video stream or file")   # (original error-handling line was lost)
count=0
c=0
while(cap.isOpened()):
ret, frame1 = cap.read()   # (per-frame read reconstructed; this line was lost from the original listing)
if not ret: break
if learning_rate==0:
frame_gray = cv2.cvtColor(frame1,cv2.COLOR_BGR2GRAY)
# converting data type of frame_gray so that different operations can be performed on it
frame_gray = frame_gray.astype(np.float64)
# Because variance becomes negative after some time because of the norm_pdf function,
# we are converting those indices whose values are near zero to some higher values
# according to their preferences
variance[0][np.where(variance[0]<1)] = 10
variance[1][np.where(variance[1]<1)] = 5
variance[2][np.where(variance[2]<1)] = 1
sigma1 = np.sqrt(variance[0])
sigma2 = np.sqrt(variance[1])
sigma3 = np.sqrt(variance[2])
# getting values for the inequality test to get indexes of fitting indexes
compare_val_1 = cv2.absdiff(frame_gray,mean[0])
compare_val_2 = cv2.absdiff(frame_gray,mean[1])
compare_val_3 = cv2.absdiff(frame_gray,mean[2])
# and medium probable is greater than T and most probable is less than T
fore_index1 = np.where(omega[2]>T)
# Finding those indices where a particular pixel value fits at least one of the gaussians
# finding common indices for those indices which satisfy both of the tests above
temp = np.zeros([row,col])   # (this initialisation was lost from the original listing)
temp[fore_index1] = 1
temp[gauss_fit_index3] = temp[gauss_fit_index3] + 1
index3 = np.where(temp == 2)
# finding common indices
temp = np.zeros([row,col])
temp[fore_index2] = 1
index = np.where((compare_val_3<=value3)|(compare_val_2<=value2))
temp[index] = temp[index]+1
index2 = np.where(temp==2)
match_index = np.zeros([row,col])
match_index[gauss_fit_index1] = 1
match_index[gauss_fit_index2] = 1
match_index[gauss_fit_index3] = 1
not_match_index = np.where(match_index == 0)
#updating variance and mean value of the matched indices of all three gaussians
mean[1][gauss_fit_index2] = (1 - rho) * mean[1][gauss_fit_index2] + rho * frame_gray[gauss_fit_index2]
# updating least probable gaussian for those pixel values which do not match any of the gaussians
mean[0][not_match_index] = frame_gray[not_match_index]
variance[0][not_match_index] = 200
omega[0][not_match_index] = 0.1
#pred=CNN_test(frame1,not_match_index,1)
# normalise omega
sum = np.sum(omega,axis=0)
omega = omega/sum
omega_by_sigma = omega / np.sqrt(variance)   # (reconstructed: the ratio used to order the gaussians)
index = np.argsort(omega_by_sigma,axis=0)
mean = np.take_along_axis(mean,index,axis=0)
variance = np.take_along_axis(variance,index,axis=0)
omega = np.take_along_axis(omega,index,axis=0)
activity_class=detect_activities(frame1,length)
activity_name=class_name(activity_class)
# converting data type of frame_gray back so that we can use it to display the image
frame_gray = frame_gray.astype(np.uint8)
(regions, _) = hog.detectMultiScale(frame1,
winStride=(4, 4),
padding=(4, 4),
scale=1.05)
# Drawing the detected person regions in the image
for (x, y, w, h) in regions:
    cv2.rectangle(frame1, (x, y),
                  (x + w, y + h),
                  (0, 0, 255), 2)
#print(length)
print(activity_name)
background[index2] = frame_gray[index2]
background[index3] = frame_gray[index3]
cv2.imshow('Background Subtraction',cv2.subtract(frame_gray,background))
cv2.imshow('Activity Detection_',frame1)
time.sleep(0.1)
count += 1
# (the condition guarding this break was lost from the original listing)
break
else:
fgmask1 = fgbg1.apply(frame1)
fgmask2 = fgbg2.apply(frame1)
fgmask3 = fgbg3.apply(frame1)
# of same width
color = (255, 0, 0)
cv2.imshow('OUTPUT', fgmask2)
# Line thickness in px
thickness = 4
learning_rate,start_point1,start_point,end_point,predict=connectedLayers(c,length,fgmask3)
#print(length)
print(predict)
cv2.putText(frame1, predict, start_point1, cv2.FONT_HERSHEY_SIMPLEX, 0.9, (36, 255, 12), 2)
cv2.imshow('Activity Detection_',frame1)
c=c+1
k = cv2.waitKey(30) & 0xff   # (key read reconstructed; this line was lost from the original listing)
if k == 27:
    break
SAMPLE INPUT
FIG 1.1
FIG 1.2
FIG 1.3
FIG 1.4
SAMPLE OUTPUT
FIG 1.5