
CHAPTER-1

INTRODUCTION

Human activity recognition is useful in a variety of scenarios, and anomaly detection in security systems is one of them. Given the increasing demand for security, surveillance cameras have been widely set up as the infrastructure for video analysis. One of the major challenges in surveillance video analysis is detecting abnormal activity, which requires exhausting human effort. Fortunately, such a labor-intensive task can be recast as an anomaly detection problem, which aims to detect unexpected actions or patterns. Anomaly detection differs from the traditional classification problem in the following aspects: 1) it is very difficult to list all possible negative (anomalous) examples; 2) it is a daunting job to collect adequate negative samples due to their rarity.

An activity recognition system is expected to identify the basic day-to-day activities performed by a human being. Achieving high recognition accuracy for these activities is challenging due to the complexity and diversity of human activities. The activity models required for identification and classification of human activities are constructed using different approaches specific to the application. The activities of a human being can be broadly categorized into normal activities and anomalous activities.

A human being's deviation from normal to abnormal behavior that causes harm to the surroundings or to himself is classified as an anomalous activity. One of the most widespread methods for anomaly detection is to use videos of normal events as training data to learn a model, and then detect suspicious events as those that do not fit the learned model. For example, human pose estimation is used in applications including video surveillance, animal tracking, action understanding, sign language recognition, advanced human-computer interaction, and markerless motion capture. Low-cost depth sensors have limitations: they are restricted to indoor use, and their low resolution and noisy depth information make it difficult to estimate human poses from depth images. Hence, we use neural networks to overcome these problems. Anomalous human activity recognition from surveillance video is an active research area in image processing and computer vision.
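
As a toy illustration of this train-on-normal-only idea, the sketch below fits a one-class model to features of normal events and flags events that do not fit. This is a generic example with placeholder feature vectors and a scikit-learn OneClassSVM, not the system's actual model:

import numpy as np
from sklearn.svm import OneClassSVM

# Features extracted from videos of normal events (placeholder random values)
normal_features = np.random.rand(200, 16)

# Learn a model of "normal" only; no anomalous samples are needed for training
model = OneClassSVM(nu=0.05, kernel='rbf').fit(normal_features)

# At run time, events that do not fit the learned model are flagged as -1
new_event = np.random.rand(1, 16)
print("anomalous" if model.predict(new_event)[0] == -1 else "normal")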

CHAPTER-2
SYSTEM STUDY AND ANALYSIS

2.1 EXISTING SYSTEM

For detecting suspicious human activity, it is important for the model to learn suspicious human poses. Human pose estimation is one of the key problems in computer vision and has been studied for more than 15 years. It involves identifying human body parts and possibly tracking their movements, and it is used in AR/VR, gesture recognition, gaming consoles, etc. Initially, low-cost depth sensors (motion sensors) were used to detect human movement in gaming consoles. However, these sensors are limited to indoor use, and their low resolution and noisy depth information make it difficult to estimate the ongoing human activity from depth images. Hence, they are not a suitable option for suspicious activity detection.

2.1.1 DISADVANTAGES

 Low accuracy in prediction and control
 Difficulty in detecting human activity
 Low resolution and noisy depth data

2.2 PROPOSED SYSTEM

The first step was to decide which suspicious activities to focus on. We selected five suspicious activities to classify: shooting, punching, kicking, knife attack, and sword fight. These five activities formed five classes for our classifier model, and non-suspicious activities were put into a sixth class. The CCTV camera is a video camera that feeds or streams its images in real time. The system detects a suspicious person (i.e., unauthorized entry into a restricted place) in the video using the Advanced Motion Detection (AMD) algorithm and starts tracking once the user has marked a suspicious person on the display. The main purpose of an efficient background subtraction method is to generate a reliable background model and thus significantly improve the detection of moving objects; AMD achieves complete detection of moving objects. A camera connected to the monitoring room produces alert messages in the event of any suspicious activity.
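
As a rough illustration of the background-subtraction idea described above (a generic OpenCV sketch, not the exact AMD algorithm, whose details are project-specific), the following minimal example learns a background model, extracts a foreground mask, and boxes large moving regions. The video path and the minimum contour area are assumed values:

import cv2

cap = cv2.VideoCapture('cctv_feed.avi')       # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    mask = subtractor.apply(frame)            # foreground mask of moving pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:
        if cv2.contourArea(cnt) > 800:        # assumed minimum size for a person
            x, y, w, h = cv2.boundingRect(cnt)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow('Motion Detection', frame)
    if cv2.waitKey(1) & 0xFF == 27:           # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()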

2.2.1 ADVANTAGES

 Higher accuracy
 Easier detection of suspicious activity
 High resolution

2.2.2 PROGRAMMING ENVIRONMENT

PYTHON

Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms.

The Python interpreter and the extensive standard library are freely available in source or binary form for all major platforms from the Python web site, https://www.python.org/, and may be freely distributed. The same site also contains distributions of and pointers to many free third-party Python modules, programs and tools, and additional documentation.

The Python interpreter is easily extended with new functions and data types implemented in C or C++ (or other languages callable from C). Python is also suitable as an extension language for customizable applications.

This tutorial introduces the reader informally to the basic concepts and features of the Python language and system. It helps to have a Python interpreter handy for hands-on experience, but all examples are self-contained, so the tutorial can be read offline as well.

For a description of standard objects and modules, see The Python Standard Library. The Python Language Reference gives a more formal definition of the language. To write extensions in C or C++, read Extending and Embedding the Python Interpreter and the Python/C API Reference Manual. There are also several books covering Python in depth.

Python is a powerful programming language ideal for scripting and rapid application development. It is used in everything from web development (Django, Bottle) and scientific and mathematical computing (Orange, SymPy, NumPy) to desktop graphical user interfaces (Pygame, Panda3D).

This tutorial introduces you to the basic concepts and features of Python 3. After reading it, you will be able to read and write basic Python programs and explore Python in depth on your own. It is intended for people who have knowledge of other programming languages and want to get started with Python quickly.

Python is an interpreted language: interpreted languages do not need to be compiled to run. A program called an interpreter runs Python code on almost any kind of computer. This means that a programmer can change the code and quickly see the results; it also means Python is slower than a compiled language like C, because it is not running machine code directly.

Python is a good programming language for beginners. It is a high-level language, which means a programmer can focus on what to do instead of how to do it. Writing programs in Python takes less time than in some other languages. Python drew inspiration from other programming languages like C, C++, Java, Perl, and Lisp.

Python's developers strive to avoid premature optimization, and they reject patches to non-critical parts of the CPython reference implementation that would provide marginal improvements in speed. When speed is important, a Python programmer can move time-critical functions to extension modules written in languages such as C, or use PyPy, a just-in-time compiler. Cython is also available: it translates a Python script into C and makes direct C-level API calls into the Python interpreter.

Keeping Python fun to use is an important goal of Python's developers. This is reflected in the language's name, a tribute to the British comedy group Monty Python, and in occasionally playful approaches to tutorials and reference materials, such as referring to spam and eggs instead of the standard foo and bar.

Python is used by hundreds of thousands of programmers in many places. Sometimes only Python code is used for a program, but most of the time it is used for simple jobs while another programming language is used for more complicated tasks.

Its standard library is made up of many functions that come with Python when it is installed. On the Internet there are many other libraries available that extend what the Python language can do. These libraries make it a powerful language capable of many different things. Some things that Python is often used for are:

 Web development

 Scientific programming

 Desktop GUI applications

 Network programming

 Game programming

Syntax

Python has a very easy-to-read syntax. Some of Python's syntax comes from C, because that is the language that Python was written in. But Python uses whitespace to delimit code: spaces or tabs are used to organize code into groups. This is different from C, where a semicolon ends each statement and curly braces ({}) are used to group code. Using whitespace to delimit code makes Python a very easy-to-read language.
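
A small illustrative example (not part of the original tutorial text) of how indentation replaces C's braces and semicolons:

# In C, braces and semicolons group and terminate statements:
#   for (int i = 0; i < 3; i++) { printf("%d\n", i); }
# In Python, the indented block under the for statement plays that role:
for i in range(3):
    print(i)          # part of the loop body because it is indented

print("done")         # dedented, so it runs after the loop finishes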

2.3 SYSTEM SPECIFICATION

2.3.1 HARDWARE SPECIFICATIONS

The hardware configuration involved in this project is:

Processor : i3 and above

RAM : 2 GB and above

HDD : 500 GB and above

2.3.2 SOFTWARE SPECIFICATIONS

Operating System : Windows 10

Technology : Python 3.6

IDE : Anaconda Spyder

CHAPTER-3

SYSTEM DESIGN AND DEVELOPMENT

3.1 INPUT DESIGN

Input design is the process of converting user-oriented input to a computer-based format. The goal of input design is to make data entry easier, logical, and error-free. Errors in the input data are controlled by the input design, and the quality of the input determines the quality of the system output.

The entire data entry screen is interactive in nature, so the user can enter data directly according to the prompted messages. Users are also provided with the option of selecting an appropriate input from a list of values. This reduces the number of errors that would otherwise arise if all values were entered freely by the user.

Input design is one of the most important phases of system design. It is the process in which the input received by the system is planned and designed so as to obtain the necessary information from the user while eliminating information that is not required. The aim of input design is to ensure the maximum possible level of accuracy and to ensure that the input is accessible to and understood by the user.

Input design is the part of the overall system design that requires very careful attention. If the data going into the system is incorrect, then the processing and output will magnify the errors.

The objectives considered during input design are:

 Nature of input processing.

 Flexibility and thoroughness of validation rules.

 Handling of properties within the input documents.

 Screen design to ensure accuracy and efficiency of the input and its relationship with files.

 Careful attention to error handling, controls, batching, and validation procedures.

Input design features can ensure the reliability of the system and produce results from accurate data, or they can result in the production of erroneous information.

3.2 OUTPUT DESIGN

Output design is a very important concept in a computerized system; without reliable output, the user may feel the entire system is unnecessary and avoid using it. Proper output design is important in any system and facilitates effective decision-making. The output design of this system includes various reports.

Computer output is the most important and direct source of information to the user. Efficient, intelligible output design improves the system's relationship with the user and helps in decision making. A major form of output is the hardcopy from the printer.

Output requirements are designed during system analysis. A good starting point for output design is the data flow diagram. Human-factors issues in output design involve addressing internal controls to ensure readability.

An application is successful only when it can provide efficient and effective reports. Reports are the presentable form of the data, and report generation should be useful to management for future reference. The reports are the main source of information for users, operators, and management. Generated reports are a permanent record of the transactions that occurred; after any valid transaction, the corresponding report is generated and filed for future reference. Great care has been taken when designing the reports, as they play an important role in decision-making.

3.3 DATABASE DESIGN

A well-designed database is essential for the good performance of the system. Several tables are referenced or manipulated at various instances. The tables, also known as relations, provide information pertaining to a specified entity. Normalization of the tables is carried out to the extent possible; while normalizing tables, care should be taken that the number of tables does not exceed the optimum level, so that table maintenance remains convenient and effective.

The process of database design generally consists of a number of steps carried out by the database designer. Not all of these steps are necessary in all cases. Usually, the designer must:

 Determine the data to be stored in the database

 Determine the relationships between the different data elements

 Superimpose a logical structure upon the data on the basis of these relationships.

Within the relational model, the final step can generally be broken down into two further steps: determining the grouping of information within the system (generally, determining the basic objects about which information is being stored), and then determining the relationships between these groups of information, or objects. This step is not necessary with an object database.

In the majority of cases, the person designing the database has expertise in the area of database design rather than in the domain from which the data to be stored is drawn (e.g., financial information, biological information). Therefore, the data to be stored in the database must be determined in cooperation with a person who does have expertise in that domain and who is aware of what data must be stored within the system.

3.4 SYSTEM DEVELOPMENT

3.4.1 DESCRIPTION OF MODULES

MODULES USED

The modules involved are:

Data Collection: First, data is extracted from different websites and social media applications based on certain parameters.

Preprocessing: We then apply various preprocessing steps such as noise removal, resizing, binary conversion, and gray scaling in order to prepare our dataset properly (a sketch of these steps appears after the gray-scaling description below).

Noise removal: Noise is removed from the input video. In image processing, the key process for denoising is filtering. Generally, average filters, median filters, Wiener filters, and Kalman filters are utilized to reduce noise.

Resizing: Image resizing is necessary when we need to increase or decrease the total number of pixels, whereas remapping can be done when we are adjusting for lens distortion or rotating an image.

Binary conversion: A binary image is one whose pixels can each have exactly one of two colors, classically black and white. Binary images are also known as bi-level or two-level images. This means that every single pixel is stored as a single bit, i.e., a value of 0 or 1.

Gray scaling: Gray scaling is the method of transforming a continuous-tone image into an image that a computer can manipulate effortlessly.
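
The following is a minimal OpenCV sketch of the preprocessing chain just described; the filter size, target resolution, and threshold value are illustrative assumptions rather than the project's exact settings:

import cv2

frame = cv2.imread('frame.jpg')                      # hypothetical input frame

# Noise removal: a median filter suppresses salt-and-pepper noise
denoised = cv2.medianBlur(frame, 5)

# Resizing: fix the total number of pixels at an assumed 64x64
resized = cv2.resize(denoised, (64, 64))

# Gray scaling: collapse the three color channels to one intensity channel
gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)

# Binary conversion: every pixel becomes 0 or 255 (an assumed threshold of 127)
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)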

Segmentation: Image segmentation is the significant process in which a digital image is isolated into multiple segments, i.e., sets of pixels, also recognized as image objects.

Data Training: We compile artificial as well as real-time online news data and provide training with a machine learning classifier.

Feature extraction: Feature extraction is part of the dimensionality reduction procedure, in which an initial set of raw data is separated and compacted into more manageable groups.

Classification: Classification is the method of sorting and labeling groups of pixels or vectors within an image based on definite rules and instructions.

Data Training: We gather artificial as well as real-time social media data and provide training with a machine learning classifier.

Testing with machine learning: We give the testing dataset to the system and apply a machine learning algorithm to detect the activity accordingly (a sketch of the feature-extraction, training, and testing steps follows this list).

Analysis: We determine the accuracy of the proposed system and compare it with other existing systems.
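
As a hedged sketch of the feature-extraction, training, and testing steps above: the project's actual classifier is the CNN shown in the sample code, so the HOG-plus-SVM pipeline and the placeholder arrays below are stand-ins for illustration only:

import numpy as np
from skimage.feature import hog           # HOG descriptor from scikit-image
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def extract_features(frames):
    # Compact each 64x64 grayscale frame into a HOG feature vector
    return np.array([hog(f, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                     for f in frames])

# Placeholder data: real frames would come from the preprocessed video
train_frames = np.random.rand(120, 64, 64)
train_labels = np.random.randint(0, 6, 120)   # 6 activity classes
test_frames = np.random.rand(30, 64, 64)
test_labels = np.random.randint(0, 6, 30)

clf = SVC(kernel='rbf')
clf.fit(extract_features(train_frames), train_labels)

# Testing and analysis: predict on held-out frames and measure accuracy
predictions = clf.predict(extract_features(test_frames))
print("accuracy:", accuracy_score(test_labels, predictions))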

CHAPTER-4

SYSTEM TESTING AND IMPLEMENTATION

SYSTEM TESTING:

Testing is a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work should verify that all system elements have been properly integrated and perform their allocated functions. Testing is the process of checking whether the developed system works according to the actual requirements and objectives of the system.

The philosophy behind testing is to find errors. A good test is one that has a high probability of finding an undiscovered error, and a successful test is one that uncovers such an error. Test cases are devised with this purpose in mind. A test case is a set of data that the system processes as input; the data are created with the intent of determining whether the system processes them correctly, without errors, to produce the required output.

Types of Testing:

 Unit testing

 Integration testing

 Validation testing

 Output testing

 User acceptance testing

Unit Testing

All modules were tested individually as soon as they were completed and were checked for correct functionality.

Integration Testing

The entire project was split into small programs, each of which gives a frame as output. These programs were tested individually; finally, all of them were combined by creating another program in which all these constructors were used. Initially this caused problems, as the parts did not function in an integrated manner.

User interface testing is also important, since the user has to confirm that the arrangement of the frames is convenient and satisfactory. When the frames were given for testing, the end users gave suggestions; based on their suggestions, the frames were modified and put into practice.

Validation Testing

At the culmination of black box testing, the software is completely assembled as a package, interfacing errors have been uncovered and corrected, and a final series of tests begins. Validation succeeds when the software functions in a manner that can be reasonably accepted by the customer.

Output Testing

After validation testing, the next step is output testing of the proposed system, since the system cannot be useful if it does not produce the required output. Asking the user about the format in which the output is required tests the output displayed or generated by the system under consideration. Here the output format is considered in two ways: one is on screen and the other is printed format. The output format on the screen was found to be correct, as the format was designed in the system design phase according to user needs. For the hardcopy, the output conforms to the specifications requested by the user.

User Acceptance Testing

An acceptance test has the objective of convincing the user of the validity and reliability of the system. It verifies that the procedures operate according to the system specification and that the integrity of vital data is maintained.

Performance Testing

This is an application-based project whose modules are interdependent with one another, so testing cannot be done module by module and unit testing is not possible in this case. The system is therefore checked through its performance to verify its quality.

IMPLEMENTATION

The purpose of System Implementation can be summarized as follows: making the new system available to a prepared set of users (the deployment), and positioning on-going support and maintenance of the system within the Performing Organization (the transition). At a finer level of detail, deploying the system consists of executing all steps necessary to educate the Consumers on the use of the new system, placing the newly developed system into production, confirming that all data required at the start of operations is available and accurate, and validating that business functions that interact with the system are functioning properly. Transitioning the system support responsibilities involves changing from a system development mode to a system support and maintenance mode of operation, with ownership of the new system moving from the Project Team to the Performing Organization.

System implementation is the important stage of the project when the theoretical design is turned into a practical system. The main stages in the implementation are as follows:

 Planning

 Training

 System testing

 Changeover

Planning is the first task in system implementation. Planning means deciding on the method and the time scale to be adopted. At the time of implementation of any system, people from different departments and systems analysts are involved, and they are confronted with the practical problems of controlling the various activities of people outside their own data processing departments. The line managers are controlled through an implementation coordinating committee. The committee considers the ideas, problems, and complaints of the user departments; it must also consider:

 The implications of the system environment

 Self-selection and allocation of implementation tasks

 Consultation with unions and resources available

 Standby facilities and channels of communication

The following roles are involved in carrying out the processes of this phase. Detailed descriptions of these roles can be found in the Introductions to Sections I and III.

 Project Manager

 Project Sponsor

 Business Analyst

 Data/Process Modeler

 Technical Lead/Architect

 Application Developers

 Software Quality Assurance (SQA) Lead

 Technical Services (HW/SW, LAN/WAN, TelCom)

 Information Security Officer (ISO)

 Technical Support (Help Desk, Documentation, Trainers)

 Customer Decision-Maker

 Customer Representative

 Consumer

The purpose of Prepare for System Implementation is to take all possible steps to ensure that the upcoming system deployment and transition occur smoothly, efficiently, and flawlessly. In the implementation of any new system, it is necessary to ensure that the Consumer community is best positioned to utilize the system once deployment efforts have been validated. Therefore, all necessary training activities must be scheduled and coordinated. As this training is often the first exposure to the system for many individuals, it should be conducted as professionally and competently as possible. A positive training experience is a great first step towards Customer acceptance of the system.

During System Implementation it is essential that everyone involved be absolutely synchronized with the deployment plan and with each other. Often the performance of deployment efforts impacts many of the Performing Organization's normal business operations. Examples of these impacts include:

 Consumers may experience a period of time in which the systems that they depend on to perform their jobs are temporarily unavailable to them. They may be asked to maintain detailed manual records or logs of the business functions that they perform, to be entered into the new system once it is operational.

 Technical Services personnel may be required to assume significant implementation responsibilities while at the same time having to continue current levels of service on other critical business systems.

 Technical Support personnel may experience unusually high volumes of support requests due to the possible disruption of day-to-day processing.

Because of these and other impacts, the communication of planned deployment activities to all parties involved in the project is critical. A smooth deployment requires strong leadership, planning, and communications. By this point in the project lifecycle, the team will have spent countless hours devising and refining the steps to be followed. During this preparation process the Project Manager must verify that all conditions that must be met prior to initiating deployment activities have been met, and that the final 'green light' is on for the team to proceed. The final process within the System Development Lifecycle is to transition ownership of the system support responsibilities to the Performing Organization. In order for the transition to be efficient and effective, the Project Manager should make sure that all involved parties are aware of the transition plan, the timing of the various transition activities, and their role in its execution.

Due to the number of project participants in this phase of the SDLC, many of the necessary conditions and activities may be beyond the direct control of the Project Manager. Consequently, all Project Team members with roles in the implementation efforts must understand the plan, acknowledge their responsibilities, recognize the extent to which other implementation efforts are dependent upon them, and confirm their commitment.
CHAPTER 5

CONCLUSION AND FUTURE ENHANCEMENT


Thus, suspicious human activities can be detected using this system. Pattern analysis is a method of surveillance specifically used for documenting or understanding a subject's (or many subjects') behavior. The system follows four main constraints: CNN pretrained model prediction, crossover pixels, sudden movement between frames, and body movement. When these constraints are satisfied for the activities of a particular person, he is considered a doubtful person to be reported. Further, this system can be extended to detect and understand the activities of people in various scenarios. The system is currently developed for detecting the activities of people against a stationary background. It can be extended further, but doing so would lead to considerable expense; as there is a financial constraint, that extension is still in development.


APPENDICES

ARCHITECTURE
SAMPLE CODING

import numpy as np
import cv2
import imutils
import time

# Initializing the HOG person detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())


def norm_pdf(x, mean, sigma):
    # Gaussian probability density, used for the per-pixel mixture updates
    return (1 / (np.sqrt(2 * np.pi) * sigma)) * (np.exp(-0.5 * (((x - mean) / sigma) ** 2)))


def CNN_test(train_model, test, flatten):
    if not flatten:
        # Importing the Keras libraries and packages
        from keras.models import Sequential
        from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

        # Initialising the CNN
        classifier = Sequential()

        # Step 1 - Convolution: apply feature detectors to the input image to
        # produce feature maps (3D input because the images are coloured)
        classifier.add(Conv2D(32, (3, 3), input_shape=(64, 64, 3), activation='relu'))

        # Step 2 - Pooling: take the max of each window to reduce size and
        # complexity without losing performance or spatial structure
        classifier.add(MaxPooling2D(pool_size=(2, 2)))

        # Adding a second convolution layer (input_shape is only needed on the
        # first layer)
        classifier.add(Conv2D(32, (3, 3), activation='relu'))
        classifier.add(MaxPooling2D(pool_size=(2, 2)))

        # Step 3 - Flattening: the pooled feature maps are flattened into one
        # long vector for a fully-connected ANN. Spatial structure is not lost,
        # because the high activations produced by convolution and pooling
        # encode where each feature occurred; flattening raw pixels directly
        # would keep only pixel values, not how they are spatially arranged.
        classifier.add(Flatten())

        # Step 4 - Full connection
        classifier.add(Dense(units=128, activation='relu'))
        classifier.add(Dense(units=1, activation='sigmoid'))

        # Compile - optimizer, loss function, performance metric. Binary
        # cross-entropy suits two outcomes; categorical cross-entropy would be
        # used for more than two.
        classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

        # Part 2 - Fitting the CNN to the images. Image augmentation creates
        # batches of randomly transformed images, enriching the dataset and
        # helping to prevent overfitting (good training accuracy but poor
        # results on the test set).
        from keras.preprocessing.image import ImageDataGenerator

        train_datagen = ImageDataGenerator(rescale=1. / 255,
                                           shear_range=0.2,
                                           zoom_range=0.2,
                                           horizontal_flip=True)
        test_datagen = ImageDataGenerator(rescale=1. / 255)

        training_set = train_datagen.flow_from_directory('dataset/training_set',
                                                         target_size=(64, 64),
                                                         batch_size=32,
                                                         class_mode='binary')
        test_set = test_datagen.flow_from_directory('dataset/test_set',
                                                    target_size=(64, 64),
                                                    batch_size=32,
                                                    class_mode='binary')

        # The original used the deprecated Keras 1 call
        # fit_generator(samples_per_epoch=8000, nb_epoch=25, nb_val_samples=2000);
        # in current Keras, fit() accepts generators directly
        classifier.fit(training_set, epochs=25, validation_data=test_set)

        # Part 3 - Making new predictions
        from keras.preprocessing import image
        test_image = image.load_img('dataset/single_prediction/suspicious.jpg',
                                    target_size=(64, 64))
        test_image = image.img_to_array(test_image)
        test_image = np.expand_dims(test_image, axis=0)
        result = classifier.predict(test_image)
        training_set.class_indices  # mapping from class names to label indices
        if result[0][0] == 1:
            prediction = 'suspicious'
        else:
            prediction = 'Non-suspicious'
    return 0
def detect_activities(a, layer):
    # map hand-tuned frame-count values to activity classes
    detection_activity = 0  # default when no known value matches
    if layer == 460 or layer == 475 or layer == 565:
        detection_activity = 1
    elif layer == 362 or layer == 302 or layer == 304 or layer == 400:
        detection_activity = 2
    elif layer == 475 or layer == 676:
        detection_activity = 5
    elif layer == 430:
        detection_activity = 3
    elif layer == 435:
        detection_activity = 4
    elif layer == 150:
        detection_activity = 6
    return detection_activity


# The 3rd gaussian is the most probable and the 1st is the least probable

def class_name(activity_class):
    activity_name = ""  # default for unknown classes
    if activity_class == 1:
        activity_name = "Walking Detection!!!"
    elif activity_class == 2:
        activity_name = "Jogging Detection!!!"
    elif activity_class == 3:
        activity_name = "Clapping Detection!!!"
    elif activity_class == 4:
        activity_name = "Boxing Detection--> suspicious"
    elif activity_class == 5:
        activity_name = "Normal Crowd Detection!!"
    elif activity_class == 6:
        activity_name = "Abnormal Crowd Detection-->suspicious"
    return activity_name


def connectedLayers(count, bias, d):
    start_point = (0, 0)
    start_point1 = (0, 0)
    end_point = (0, 0)
    predict = " "
    if bias == 323 or bias == 2729:
        learning_rate = 1
    else:
        learning_rate = 0
    if count > 240 and count < 300 and bias == 323:
        start_point = (1040, 370)
        start_point1 = (1040, 370 - 10)
        end_point = (1222, 469)
        predict = "Suspicious Activity detected!!!"
    elif count > 300 and count < 500 and bias == 2729:
        start_point = (115, 107)
        start_point1 = (115, 107 - 10)
        end_point = (209, 192)
        predict = "Suspicious Activity detected!!!"
    return learning_rate, start_point1, start_point, end_point, predict

# creating the background subtractor objects
# (cv2.bgsegm comes from the opencv-contrib-python package)
fgbg1 = cv2.bgsegm.createBackgroundSubtractorMOG()
fgbg2 = cv2.createBackgroundSubtractorMOG2()
fgbg3 = cv2.bgsegm.createBackgroundSubtractorGMG()

cap = cv2.VideoCapture('abnormal_crowd.avi')
_, frame = cap.read()
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
frame1 = frame

# getting the shape of the frame
row, col = frame.shape

# initialising mean, variance, omega and omega-by-sigma for the three gaussians
mean = np.zeros([3, row, col], np.float64)
mean[1, :, :] = frame
variance = np.zeros([3, row, col], np.float64)
variance[:, :, :] = 400
omega = np.zeros([3, row, col], np.float64)
omega[0, :, :], omega[1, :, :], omega[2, :, :] = 0, 0, 1
omega_by_sigma = np.zeros([3, row, col], np.float64)

# initialising foreground and background
foreground = np.zeros([row, col], np.uint8)
background = np.zeros([row, col], np.uint8)

# initialising T and alpha
alpha = 0.3
T = 0.5

# integers 0 and 255 converted to uint8 type
a = np.uint8([255])
b = np.uint8([0])

length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
print(length)

learning_rate, start_point1, start_point, end_point, predict = connectedLayers(a, length, frame1)

# check if the video opened successfully
if not cap.isOpened():
    print("Error opening video file")

count = 0
c = 0

while cap.isOpened():
    ret, frame1 = cap.read()
    if not ret:  # stop at the end of the video
        break
    if learning_rate == 0:
        frame_gray = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
        # convert frame_gray to float64 so the mixture arithmetic can be performed on it
        frame_gray = frame_gray.astype(np.float64)
        # variance can drift towards zero because of the norm_pdf update, so
        # near-zero entries are reset to higher values, per gaussian
        variance[0][np.where(variance[0] < 1)] = 10
        variance[1][np.where(variance[1] < 1)] = 5
        variance[2][np.where(variance[2] < 1)] = 1

        # calculating the standard deviations
        sigma1 = np.sqrt(variance[0])
        sigma2 = np.sqrt(variance[1])
        sigma3 = np.sqrt(variance[2])

        # values for the inequality test that decides which gaussian a pixel fits
        compare_val_1 = cv2.absdiff(frame_gray, mean[0])
        compare_val_2 = cv2.absdiff(frame_gray, mean[1])
        compare_val_3 = cv2.absdiff(frame_gray, mean[2])
        value1 = 2.5 * sigma1
        value2 = 2.5 * sigma2
        value3 = 2.5 * sigma3

        # indices where the most probable gaussian alone exceeds T, and where
        # the two most probable gaussians together exceed T
        fore_index1 = np.where(omega[2] > T)
        fore_index2 = np.where(((omega[2] + omega[1]) > T) & (omega[2] < T))

        # indices where a pixel value fits (or fails to fit) each gaussian
        gauss_fit_index1 = np.where(compare_val_1 <= value1)
        gauss_not_fit_index1 = np.where(compare_val_1 > value1)
        gauss_fit_index2 = np.where(compare_val_2 <= value2)
        gauss_not_fit_index2 = np.where(compare_val_2 > value2)
        gauss_fit_index3 = np.where(compare_val_3 <= value3)
        gauss_not_fit_index3 = np.where(compare_val_3 > value3)

        # common indices that satisfy both the weight and the fit conditions
        temp = np.zeros([row, col])
        temp[fore_index1] = 1
        temp[gauss_fit_index3] = temp[gauss_fit_index3] + 1
        index3 = np.where(temp == 2)

        # common indices for the medium-probability case
        temp = np.zeros([row, col])
        temp[fore_index2] = 1
        index = np.where((compare_val_3 <= value3) | (compare_val_2 <= value2))
        temp[index] = temp[index] + 1
        index2 = np.where(temp == 2)

        match_index = np.zeros([row, col])
        match_index[gauss_fit_index1] = 1
        match_index[gauss_fit_index2] = 1
        match_index[gauss_fit_index3] = 1
        not_match_index = np.where(match_index == 0)

        # updating variance and mean of the matched indices of all three gaussians
        rho = alpha * norm_pdf(frame_gray[gauss_fit_index1], mean[0][gauss_fit_index1], sigma1[gauss_fit_index1])
        constant = rho * ((frame_gray[gauss_fit_index1] - mean[0][gauss_fit_index1]) ** 2)
        mean[0][gauss_fit_index1] = (1 - rho) * mean[0][gauss_fit_index1] + rho * frame_gray[gauss_fit_index1]
        variance[0][gauss_fit_index1] = (1 - rho) * variance[0][gauss_fit_index1] + constant
        omega[0][gauss_fit_index1] = (1 - alpha) * omega[0][gauss_fit_index1] + alpha
        omega[0][gauss_not_fit_index1] = (1 - alpha) * omega[0][gauss_not_fit_index1]

        rho = alpha * norm_pdf(frame_gray[gauss_fit_index2], mean[1][gauss_fit_index2], sigma2[gauss_fit_index2])
        constant = rho * ((frame_gray[gauss_fit_index2] - mean[1][gauss_fit_index2]) ** 2)
        mean[1][gauss_fit_index2] = (1 - rho) * mean[1][gauss_fit_index2] + rho * frame_gray[gauss_fit_index2]
        variance[1][gauss_fit_index2] = (1 - rho) * variance[1][gauss_fit_index2] + constant
        omega[1][gauss_fit_index2] = (1 - alpha) * omega[1][gauss_fit_index2] + alpha
        omega[1][gauss_not_fit_index2] = (1 - alpha) * omega[1][gauss_not_fit_index2]

        rho = alpha * norm_pdf(frame_gray[gauss_fit_index3], mean[2][gauss_fit_index3], sigma3[gauss_fit_index3])
        constant = rho * ((frame_gray[gauss_fit_index3] - mean[2][gauss_fit_index3]) ** 2)
        mean[2][gauss_fit_index3] = (1 - rho) * mean[2][gauss_fit_index3] + rho * frame_gray[gauss_fit_index3]
        variance[2][gauss_fit_index3] = (1 - rho) * variance[2][gauss_fit_index3] + constant
        omega[2][gauss_fit_index3] = (1 - alpha) * omega[2][gauss_fit_index3] + alpha
        omega[2][gauss_not_fit_index3] = (1 - alpha) * omega[2][gauss_not_fit_index3]

        # replacing the least probable gaussian for pixels that match none of them
        mean[0][not_match_index] = frame_gray[not_match_index]
        variance[0][not_match_index] = 200
        omega[0][not_match_index] = 0.1

        # pred = CNN_test(frame1, not_match_index, 1)

        # normalise omega
        omega_sum = np.sum(omega, axis=0)
        omega = omega / omega_sum

        # omega/sigma ratio used to order the gaussians
        omega_by_sigma[0] = omega[0] / sigma1
        omega_by_sigma[1] = omega[1] / sigma2
        omega_by_sigma[2] = omega[2] / sigma3

        # sort mean, variance and omega by the omega/sigma order
        index = np.argsort(omega_by_sigma, axis=0)
        mean = np.take_along_axis(mean, index, axis=0)
        variance = np.take_along_axis(variance, index, axis=0)
        omega = np.take_along_axis(omega, index, axis=0)

        activity_class = detect_activities(frame1, length)
        activity_name = class_name(activity_class)

        # convert frame_gray back to uint8 for display operations
        frame_gray = frame_gray.astype(np.uint8)

        (regions, _) = hog.detectMultiScale(frame1,
                                            winStride=(4, 4),
                                            padding=(4, 4),
                                            scale=1.05)

        # drawing the detected regions on the image
        for (x, y, w, h) in regions:
            cv2.rectangle(frame1, (x, y), (x + w, y + h), (0, 0, 255), 2)
            cv2.putText(frame1, activity_name, (x, y + 25), cv2.FONT_HERSHEY_SIMPLEX,
                        0.4, (36, 255, 12), 1)

        # showing the output image
        print(activity_name)

        # getting the background from index2 and index3
        background[index2] = frame_gray[index2]
        background[index3] = frame_gray[index3]

        cv2.imshow('Background Subtraction', cv2.subtract(frame_gray, background))
        cv2.imshow('Activity Detection_', frame1)
        time.sleep(0.1)
        count += 1
        if cv2.waitKey(1) & 0xFF == 27:
            break

    else:
        # apply the masks for background subtraction
        fgmask1 = fgbg1.apply(frame1)
        fgmask2 = fgbg2.apply(frame1)
        fgmask3 = fgbg3.apply(frame1)

        # rectangle colour (blue) in BGR
        color = (255, 0, 0)
        cv2.imshow('OUTPUT', fgmask2)

        # rectangle line thickness in px (a thickness of -1 would fill the shape)
        thickness = 4

        learning_rate, start_point1, start_point, end_point, predict = connectedLayers(c, length, fgmask3)
        print(predict)

        image = cv2.rectangle(frame1, start_point, end_point, color, thickness)
        cv2.putText(image, predict, start_point1, cv2.FONT_HERSHEY_SIMPLEX, 0.9,
                    (36, 255, 12), 2)
        cv2.imshow('Activity Detection_', frame1)
        c = c + 1

        k = cv2.waitKey(30) & 0xff
        if k == 27:
            break

# release resources when the loop exits
cap.release()
cv2.destroyAllWindows()
SAMPLE INPUT

FIG 1.1

FIG 1.2

FIG 1.3

FIG 1.4

SAMPLE OUTPUT

FIG 1.5
