Automated Attendance System Using Image Processing


Abstract:

In today's technology-aided world, image processing is gaining immense importance in the digital domain. Nowadays, the field of image processing has a wide range of applications in biometric recognition, behavioral analysis, teleconferencing and video surveillance. This paper puts forward the idea of using image processing techniques, namely detection and recognition of faces, to design a system that can automatically handle student attendance. Factors that act as challenges in face recognition include illumination, orientation, size, clarity, expression and intensity of the facial images. With the help of a training dataset, the system is trained to detect regions representing faces (positive images) and distinguish them from the background (negative images). The aim is to develop an automated system that detects and recognizes faces from video frames and records the attendance of students by identifying them from their distinguishing facial features. This makes it possible to maintain and handle attendance automatically, without any human intervention. The new system eases the hectic task of attendance maintenance, and handling attendance becomes more precise and efficient.
Introduction:

The facial image has turned out to be an important biometric feature: it is easily acquirable and does not require any special physical interaction between the subject and the device. As observed, face recognition is a complex and challenging task affected by a variety of parameters such as intensity, orientation, expression and size. Recognition of individuals is of great importance in today's world for varied reasons. Real-time applications of these algorithms face some limitations in coping with the loss of important information. Detection and recognition of faces in videos using image processing are discussed in the sections that follow, where the various steps for detection and recognition are demonstrated and the algorithms used to implement these techniques are described.

Face detection methods can be classified based on an individual's facial appearance, facial geometric structure, face colour, etc. Some image processing techniques extract depth features to detect faces with respect to geometric variations and textures; mapping of edges and skin-colour thresholding is also used to detect faces alongside Viola-Jones. The new system eases hectic attendance maintenance and makes handling attendance more precise and efficient. It performs human face detection with the help of the Viola-Jones algorithm and face recognition with the Fisher Face algorithm, achieving an accuracy of 45% to 50%. The system is trained on a dataset built from images of the students. Students first register themselves in the system with their details, and facial images are captured from different angles and positions. On successful completion of the registration process, the students' data is stored in the database. Video acquisition is done by capturing video of the class being conducted in a classroom.

The acquired video is used to detect and recognize the faces of different students and differentiate them from the background using image processing techniques, i.e. the Viola-Jones algorithm for face detection, cropping of faces, and the Fisher Face algorithm for face recognition. Verification is done by comparing the facial image of each student with the faces stored in the database. If a student's face is matched, their attendance is recorded and updated in the system. The system also ensures that once attendance is recorded for a particular student, no further updates to that student's attendance are recorded (i.e. 1 student = 1 attendance).

Literature Survey:

Reference 1:
Title: “An Analysis of the Viola-Jones Face Detection Algorithm, Image
Processing”

Author: Yi-Qing Wang

Description: The Viola-Jones algorithm was the first real-time face detection system. Three ingredients work in concert to enable fast and accurate detection: the integral image for feature computation, AdaBoost for feature selection, and an attentional cascade for efficient allocation of computational resources. The paper provides a complete algorithmic description, a learning code and a learned face detector that can be applied to any colour image.

Reference 2:

Title: “Rapid object detection using a boosted cascade of simple features”

Author: P. Viola and M. Jones

Description: The paper's first contribution is the introduction of a new image representation called the "integral image", which allows the features used by the detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly complex classifiers in a "cascade", which allows background regions of the image to be quickly discarded while spending more computation on promising, object-like regions.

Reference 3:

Title: "Face Detection by Using OpenCV's Viola-Jones Algorithm based on coding eyes"

Author: Abdul Mohsen Abdul Hossen

Description: Facial identification is one of the biometric approaches implemented for identifying any facial image using the basic properties of that face. The paper proposes a new, improved approach for face detection based on coding eyes using OpenCV's Viola-Jones algorithm, which removes falsely detected faces depending on the coded eyes.

Reference 4:

Title: “Face recognition using Fisherface algorithm”

Author: Hyung-Ji Lee

Description: The Fisherface algorithm, as a class-specific method, is robust to variations such as lighting direction and facial expression. In the proposed face recognition scheme adopting the two methods above, the linear projection per node of an image graph reduces the dimensionality of the labeled graph vector and provides a feature space that can be used effectively for classification.

Reference 5:

Title: “Face recognition using Eigenfaces”

Author: M.E. Gaikwad

Description: The face is a complex, multidimensional visual model, and developing a computational model for face recognition is difficult. The goal is to implement a system (model) for a particular face and distinguish it from a large number of stored faces, allowing for some real-time variations as well. The Eigenface approach uses Principal Component Analysis (PCA) for recognition of the images and gives an efficient way to find a lower-dimensional space.

Existing System:

Face recognition is a part of pattern recognition. In the early 1990s, Fisherfaces and Eigenfaces were proposed; Fisherfaces give better performance than Eigenfaces. Belhumeur, Hespanha and Kriegman present Eigenfaces and Fisherfaces as feature-based face recognition methodologies. These feature-based methods help to achieve stability towards lighting conditions and pose variations with the use of non-linear feature spaces.

Disadvantages:

As observed, image recognition is a complex and challenging task affected by a variety of parameters such as intensity, orientation, expression and size. Traditional attendance methods lack effectiveness, leading to wastage of time and paper, and allow proxy attendance, which is eliminated in the automated system.

Objective:

The aim is to develop an automated system that detects and recognizes faces from video frames and records the attendance of students by identifying them from their distinguishing facial features. This makes it possible to maintain and handle attendance automatically, without any human intervention, so that attendance handling becomes more precise and efficient. The proposed system performs human face detection with the help of the Viola-Jones algorithm and face recognition with the Fisher Face algorithm.

Scope:

The goal of the project is to detect students' faces and take attendance automatically, without human interaction. Faces are detected using the Viola-Jones algorithm and recognized using the Fisher Face algorithm (FFA) against the images stored in the database during training. The algorithm is trained with a dataset that contains student images taken from different angles to improve accuracy of detection. After performing multiple trials and rigorous training on the image set, it has been observed that face recognition is possible with near-accurate results.

In these experiments, frames are captured from the videos at a regular interval of 2 seconds. These frames are used to detect the number of faces present in the scene. Using the registered students' data, faces detected in the frames are recognized by matching their features with the database.

Motivation:

Nowadays, the field of image processing has a wide range of applications in biometric recognition, behavioral analysis, teleconferencing and video surveillance. This project puts forward the idea of using image processing techniques, namely detection and recognition of faces, to design a system that can automatically handle student attendance.
Proposed System:

The proposed system aims to automate attendance. To achieve the project objective, video segments of the classroom lecture are captured first. Pre-processing of the video is done to remove unwanted artifacts, i.e. noise and other invariants. The next stage covers detection of faces against complex backgrounds and recognition of the person. The system identifies each student to track his/her presence in the lecture and to avoid proxy attendance by unauthorized students. There are four stages of operation: video acquisition, detection and cropping of faces, extraction of features and recognition of faces.

Advantages:

This system has been designed to automate attendance maintenance. The main objective behind developing it is to eradicate the drawbacks of manual attendance handling. To overcome those drawbacks, this framework offers a better and more reliable solution with respect to both time and security.

Libraries Used:

Tkinter: (Tkinter is the standard GUI library for Python. Python when
combined with Tkinter provides a fast and easy way to create GUI
applications. Tkinter provides a powerful object-oriented interface to the Tk
GUI toolkit.)

CV2(OpenCV): (OpenCV is an open source computer vision and machine


learning software library. OpenCV was built to provide a common
infrastructure for computer vision applications and to accelerate the use of
machine perception in the commercial products.)
CSV: (The csv module is built into Python and allows it to parse CSV files. CSV files are used to store a large number of variables, or data; they are greatly simplified spreadsheets, think Excel, whose content is stored in plain text.)

PIL: (PIL is a free library for the Python programming language that adds
support for opening, manipulating, and saving many different image file
formats.)

OS: (The os module allows you to interface with the underlying operating system that Python is running on, be that Windows, Mac or Linux.)

Datetime: (The datetime module supplies classes for manipulating dates and
times.)

Time: (The Python time module provides many ways of representing time in
code, such as objects, numbers, and strings. It also provides functionality
other than representing time, like waiting during code execution and
measuring the efficiency of your code.)

NumPy: (NumPy is a module for Python with powerful data structures,


implementing multi-dimensional arrays and matrices.)

Pandas: (Pandas provides ready-to-use, high-performance data structures and data analysis tools. The pandas module runs on top of NumPy and is popularly used for data science and data analytics.)
Specifications:

HARDWARE REQUIREMENTS:

 Processor: Intel i3 and above
 RAM: 4 GB and higher
 Hard Disk: 500 GB minimum

SOFTWARE REQUIREMENTS:

 Programming Language / Platform: Python
 IDE: PyCharm / Jupyter

Methodology:

The various steps for detection and recognition are demonstrated below, along with details of the algorithms used to implement these techniques. Face detection methods can be classified based on an individual's facial appearance, facial geometric structure, face colour, etc.

The proposed system aims to automate attendance. To achieve the project objective, video segments of the classroom lecture are captured first. Pre-processing of the video is done to remove unwanted artifacts, i.e. noise and other invariants. The next stage covers detection of faces against complex backgrounds and recognition of the person. The system identifies each student to track his/her presence in the lecture and to avoid proxy attendance by unauthorized students. There are four stages of operation: video acquisition, detection and cropping of faces, extraction of features and recognition of faces.

Work flow of the system: Students first register themselves in the system with their details, and facial images are captured from different angles and positions. On successful completion of the registration process, the students' data is stored in the database. Video acquisition is done by capturing video of the class being conducted in a classroom. The acquired video is used to detect and recognize the faces of different students and differentiate them from the background using image processing techniques, i.e. the Viola-Jones algorithm for face detection, cropping of faces, and the Fisher Face algorithm for face recognition. A student's identity is verified by comparing the student's facial image with the faces stored in the database. If a student's face is matched, their attendance is recorded and updated in the system. The system also ensures that once attendance is recorded for a particular student, no further updates to that student's attendance are recorded (i.e. 1 student = 1 attendance).
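
To make this detection-then-recognition flow concrete, the minimal sketch below pairs OpenCV's bundled Haar cascade detector with the Fisher Face recognizer from the opencv-contrib package; the 100x100 crop size, the distance threshold and the label handling are illustrative assumptions rather than values taken from the project.

import cv2
import numpy as np

# Haar cascade face detector shipped with OpenCV (Viola-Jones style)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Fisher Face recognizer (requires the opencv-contrib-python package)
recognizer = cv2.face.FisherFaceRecognizer_create()

def train(face_images, labels):
    # face_images: equal-sized grayscale face crops; labels: integer student IDs
    recognizer.train(face_images, np.array(labels))

def mark_frame(frame, attendance, threshold=500.0):
    # Detect faces in one video frame and mark recognized students once
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (100, 100))
        label, distance = recognizer.predict(face)
        if distance < threshold:      # smaller distance means a closer match
            attendance.add(label)     # a set enforces "1 student = 1 attendance"
    return attendance

Because recognized labels are stored in a set, adding the same student a second time has no effect, which mirrors the one-attendance-per-student rule above.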

A. Viola-Jones Algorithm:

Viola and Jones formulated the first real-time face detection algorithm, which detects faces in images. In this module, a complete algorithmic description is implemented, including feature computation using integral images, selection of features with the help of AdaBoost training, and cascading for efficient allocation of computational resources. The algorithm provides fast and accurate detection, and it is used here for face detection. The Viola-Jones algorithm is divided into four phases:

1. Haar Feature Selection

In Haar feature selection, a scalar product is computed between the image and the Haar templates; the difference between the sum of pixels under the black regions and the sum under the white regions yields numerous features. All images are normalized using their mean and variance to compensate for different lighting conditions, and images with a variance lower than one, containing little information of interest, are excluded. Five Haar patterns are used to compute the various features from the facial images. These Haar patterns, marked with black and white regions, are moved over the image to compute all the features, which help to detect faces with the required computation.

2. Creating an Integral Image

The integral image is an effective way of computing the summation of pixel values over rectangular regions of a given image; it is mainly used to compute mean intensity values within such regions. First, an integral image is created in which the value at each pixel (o, p) is the sum of all pixels above and to the left of (o, p), inside the rectangle from the origin to (o, p). The value in the summed-area table SAT at (o, p) can be computed in a single pass as

SAT(o, p) = i(o, p) + SAT(o - 1, p) + SAT(o, p - 1) - SAT(o - 1, p - 1),

where i(o, p) is the pixel value at (o, p) and SAT is taken to be zero outside the image.
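
As a quick illustration of how such lookups work in practice, here is a minimal NumPy sketch; the 24x24 random patch and the half-window feature are made-up examples, not the project's actual templates.

import numpy as np

def integral_image(img):
    # Summed-area table with a leading row/column of zeros for easy lookups
    sat = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    sat[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return sat

def rect_sum(sat, top, left, height, width):
    # Sum of the pixels in a rectangle using only four table lookups
    return (sat[top + height, left + width] - sat[top, left + width]
            - sat[top + height, left] + sat[top, left])

img = np.random.randint(0, 256, (24, 24))   # toy 24x24 grayscale patch
sat = integral_image(img)

# Two-rectangle Haar-like feature: left half minus right half of the patch
feature = rect_sum(sat, 0, 0, 24, 12) - rect_sum(sat, 0, 12, 24, 12)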

3. Adaboost Training

AdaBoost training is used to select a subset of features and to construct the classifier. AdaBoost refers to a particular method of training a boosted classifier: each weak learner takes an object as input and returns a value indicating the class of the object, and the technique combines a number of weak classifiers into a strong classifier. AdaBoost training is performed to amplify the performance of the system on the classification problem.
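
The sketch below only illustrates this boosting idea using scikit-learn's AdaBoostClassifier, whose default weak learner is a depth-1 decision stump; the feature matrix X and labels y are random placeholders, not the project's Haar-feature training data.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Placeholder data: rows = image windows, columns = Haar-like feature values
X = np.random.rand(200, 50)
y = np.random.randint(0, 2, 200)   # 1 = face (positive), 0 = background (negative)

# 50 boosting rounds; each round fits a weak learner (a decision stump by default)
boost = AdaBoostClassifier(n_estimators=50)
boost.fit(X, y)
print(boost.predict(X[:5]))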
4. Cascading Classifiers

Cascading is a particular case of ensemble learning that concatenates multiple classifiers: the information gathered from the output of a given classifier is used as additional data for the next classifier in the cascade. Cascade classifiers are trained with several hundred "positive" sample images of faces and arbitrary "negative" images (i.e. background), and both the positive and negative images must be of the same size. Once the classifier is trained, it is applied to an image to detect faces: to search for faces in the entire frame, a search window travels across the image and the classifier checks every location.
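
For reference, a minimal sketch of applying a pre-trained Haar cascade to one frame of a video with OpenCV is shown below; the file name classroom.mp4 and the detectMultiScale parameters are illustrative assumptions.

import cv2

# Pre-trained frontal-face Haar cascade bundled with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("classroom.mp4")   # hypothetical classroom recording
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # The search window grows by a factor of 1.3 per step; 5 neighbours reduce false hits
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detected.png", frame)
cap.release()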

Development:
A neural network was proposed as an alternative to the user-based neighborhood approach. We first consider the dimensions of the input and output of the neural network. In order to maximize the amount of training data we can feed to the network, we consider a training example to be a user profile (i.e. a row from the user-item matrix R) with one rating withheld. The loss of the network on that training example must be computed with respect to the single withheld rating. The consequence of this is that each individual rating in the training set corresponds to a training example, rather than each user. As we are interested in what is essentially a regression, we choose to use root mean squared error (RMSE) with respect to known ratings as our loss function. Compared to the mean absolute error, root mean squared error more heavily penalizes predictions which are further off. We reason that this is good in the context of a recommender system because predicting a high rating for an item the user did not enjoy significantly impacts the quality of the recommendations. On the other hand, smaller errors in prediction likely result in recommendations that are still useful: perhaps the regression is not exactly correct, but at least the highest predicted ratings are likely to be relevant to the user.
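
As a quick numeric illustration of the RMSE-versus-MAE point above, the toy example below (with made-up true and predicted ratings) shows how a single large error inflates RMSE far more than MAE.

import numpy as np

true_ratings = np.array([4.0, 3.0, 5.0, 2.0])
predictions = np.array([4.2, 3.1, 1.0, 2.1])   # one badly wrong prediction

errors = predictions - true_ratings
mae = np.mean(np.abs(errors))           # mean absolute error, about 1.1
rmse = np.sqrt(np.mean(errors ** 2))    # root mean squared error, about 2.0
print(mae, rmse)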

Data processing is the task of converting data from a given form into a much more usable and desired form, i.e. making it more meaningful and informative. Using machine learning algorithms, mathematical modeling and statistical knowledge, this entire process can be automated. The output of the process can be in any desired form, such as graphs, videos, charts, tables or images, depending on the task being performed and the requirements of the machine. This might seem simple, but for really big organizations like Twitter or Facebook, administrative bodies like Parliament or UNESCO, and health-sector organizations, the entire process needs to be performed in a very structured manner.

Collection:

The most crucial step when starting with ML is to have data of good quality and accuracy. Data can be collected from any authenticated source such as data.gov.in, Kaggle or the UCI dataset repository. For example, while preparing for a competitive exam, students study from the best study material they can access so that they learn the best and obtain the best results. In the same way, high-quality and accurate data will make the learning process of the model easier and better, and at testing time the model will yield state-of-the-art results.
A huge amount of capital, time and resources is consumed in collecting data, so organizations and researchers have to decide what kind of data they need to execute their tasks or research.
Example: working on a facial expression recognizer needs a large number of images covering a variety of human expressions. Good data ensures that the results of the model are valid and can be trusted.

Preparation:
The collected data may be in a raw form which cannot be fed directly to the machine. Preparation is therefore the process of collecting datasets from different sources, analyzing them and then constructing a new dataset for further processing and exploration. This preparation can be performed either manually or with an automatic approach. Data can also be prepared in numeric form, which speeds up the model's learning.
Example: an image can be converted to a matrix of N x N dimensions, where the value of each cell indicates an image pixel, as in the sketch below.
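
A minimal sketch of that conversion, assuming a hypothetical file face.jpg and using the PIL and NumPy libraries listed earlier:

import numpy as np
from PIL import Image

# Load an image, convert it to grayscale and resize it to a fixed N x N shape
N = 100
img = Image.open("face.jpg").convert("L").resize((N, N))

matrix = np.asarray(img)   # N x N array; each cell holds a pixel intensity 0-255
print(matrix.shape, matrix.dtype)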

Input:
The prepared data may still be in a form that is not machine-readable, so conversion algorithms are needed to convert it into a readable form. Executing this task requires high computation and accuracy. Example: data can be collected from sources such as the MNIST digit data (images), Twitter comments, audio files and video clips.

Processing:
This is the stage where algorithms and ML techniques are required to
perform the instructions provided over a large volume of data with accuracy
and optimal computation.

Output:
In this stage, results are produced by the machine in a meaningful form which can be easily interpreted by the user. Output can be in the form of reports, graphs, videos, etc.

Storage:
This is the final step, in which the obtained output, the data model and all the useful information are saved for future use.

Data Preprocessing for Machine learning in Python


• Pre-processing refers to the transformations applied to our data before
feeding it to the algorithm.
• Data Preprocessing is a technique that is used to convert the raw data into
a clean data set. In other words, whenever the data is gathered from
different sources it is collected in raw format which is not feasible for the
analysis.

Need of Data Preprocessing


• To achieve better results from the applied model in machine learning projects, the data has to be in a proper format. Some machine learning models need information in a specific format; for example, the Random Forest algorithm does not support null values, so to execute it the null values have to be managed in the original raw data set.
• Another aspect is that the data set should be formatted in such a way that more than one machine learning or deep learning algorithm can be executed on it, and the best of them chosen.

1. Rescale Data

 When our data comprises attributes with varying scales, many machine learning algorithms can benefit from rescaling the attributes so they all have the same scale.
 This is useful for optimization algorithms used in the core of machine learning algorithms, like gradient descent.
 It is also useful for algorithms that weight inputs, like regression and neural networks, and for algorithms that use distance measures, like K-Nearest Neighbors.
 We can rescale our data with scikit-learn using the MinMaxScaler class, as sketched below.
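
A minimal sketch on a made-up feature matrix; by default MinMaxScaler maps every attribute into the [0, 1] range.

import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[180.0, 4000.0],
              [165.0, 5200.0],
              [172.0, 3100.0]])   # toy attributes on very different scales

rescaled = MinMaxScaler(feature_range=(0, 1)).fit_transform(X)
print(rescaled)                   # every column now lies between 0 and 1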

2. Binarize Data (Make Binary)


 We can transform our data using a binary threshold: all values above the threshold are marked 1 and all values equal to or below it are marked 0.
 This is called binarizing or thresholding your data. It can be useful when you have probabilities that you want to turn into crisp values, and also during feature engineering when you want to add new features that indicate something meaningful.
 We can create new binary attributes in Python with scikit-learn using the Binarizer class, as shown below.
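
A short sketch on made-up probability scores; the 0.5 threshold is an arbitrary choice for illustration.

import numpy as np
from sklearn.preprocessing import Binarizer

probs = np.array([[0.1, 0.7, 0.5, 0.9]])    # toy probability scores

binary = Binarizer(threshold=0.5).fit_transform(probs)
print(binary)                               # [[0. 1. 0. 1.]]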

3. Standardize Data
 Standardization is a useful technique to transform attributes with a
Gaussian distribution and differing means and standard deviations
to a standard Gaussian distribution with a mean of 0 and a standard
deviation of 1.
 We can standardize data with scikit-learn using the StandardScaler class; see the sketch below.
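
A minimal sketch on made-up data:

import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 200.0],
              [2.0, 230.0],
              [3.0, 260.0]])          # toy attributes with different means and spreads

standardized = StandardScaler().fit_transform(X)
print(standardized.mean(axis=0))      # approximately 0 for each column
print(standardized.std(axis=0))       # approximately 1 for each column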

Data Cleansing
Introduction:
Data cleaning is one of the important parts of machine learning. It plays a
significant part in building a model. Data Cleaning is one of those things that
everyone does but no one really talks about. It surely isn’t the fanciest part
of machine learning and at the same time, there aren’t any hidden tricks or
secrets to uncover. However, proper data cleaning can make or break your
project. Professional data scientists usually spend a very large portion of their time on this step, because of the belief that "better data beats fancier algorithms". If we have a well-cleaned dataset, we can get the desired results even with a very simple algorithm, which can prove very beneficial at times. Obviously, different types of data will require different types of cleaning; however, the systematic approach below can always serve as a good starting point.

Steps involved in Data Cleaning


1. Removal of unwanted observations
This includes deleting duplicate/ redundant or irrelevant values from your
dataset. Duplicate observations most frequently arise during data collection
and irrelevant observations are those that don’t actually fit the specific
problem that you’re trying to solve.
 Redundant observations alter the efficiency by a great extent as the
data repeats and may add towards the correct side or towards the
incorrect side, thereby producing unfaithful results.
 Irrelevant observations are any type of data that is of no use to us and
can be removed directly.
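
A minimal pandas sketch of this step, using a made-up table:

import pandas as pd

df = pd.DataFrame({
    "enrollment": ["101", "102", "102", "103"],
    "name":       ["Asha", "Ravi", "Ravi", "Meena"],
    "remarks":    ["ok", "ok", "ok", "n/a"],
})

df = df.drop_duplicates()            # remove exact duplicate rows
df = df.drop(columns=["remarks"])    # drop a column irrelevant to the task
print(df)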

2. Fixing Structural errors


The errors that arise during measurement, transfer of data or other similar situations are called structural errors. Structural errors include typos in feature names, the same attribute appearing under different names, mislabeled classes (i.e. separate classes that should really be the same) and inconsistent capitalization.
 For example, the model will treat "America" and "america" as different classes or values even though they represent the same value, or treat red, yellow and red-yellow as different classes even though one class can be included in the other two. These structural errors make our model inefficient and give poor-quality results; a simple normalization sketch follows this list.
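
A small sketch, with made-up country values, showing one way to fix the capitalization case described above:

import pandas as pd

df = pd.DataFrame({"country": ["America", "america", " AMERICA ", "India"]})

# Normalize case and whitespace so the variants collapse into one class
df["country"] = df["country"].str.strip().str.lower()
print(df["country"].value_counts())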

3. Managing Unwanted outliers


Outliers can cause problems with certain types of models; for example, linear regression models are less robust to outliers than decision tree models. Generally, we should not remove outliers unless we have a legitimate reason to do so. Sometimes removing them improves performance, sometimes not, so one must have a good reason, such as suspicious measurements that are unlikely to be part of the real data.

4. Handling missing data


Missing data is a deceptively tricky issue in machine learning. We cannot just ignore or remove missing observations; they must be handled carefully, as they can be an indication of something important. The two most common ways to deal with missing data are:

1. Dropping observations with missing values.


Dropping missing values is sub-optimal because when you drop
observations, you drop information.
 The fact that the value was missing may be informative in itself.
 Plus, in the real world, you often need to make predictions on new data
even if some of the features are missing!

2. Imputing the missing values from past observations.


Imputing missing values is sub-optimal because the value was originally
missing but you filled it in, which always leads to a loss in information, no
matter how sophisticated your imputation method is.
 Again, “missingness” is almost always informative in itself, and you
should tell your algorithm if a value was missing.
 Even if you build a model to impute your values, you’re not adding any
real information. You’re just reinforcing the patterns already provided
by other features.
 Both approaches are sub-optimal: dropping an observation means dropping information and thereby reducing the data, while imputing fills in values that were not present in the actual dataset, which also leads to a loss of information.
 Missing data is like a missing puzzle piece. If you drop it, that's like pretending the puzzle slot isn't there. If you impute it, that's like trying to squeeze in a piece from somewhere else in the puzzle.
 So, missing data is almost always informative and an indication of something important, and we must make our algorithm aware of missing data by flagging it. By using this technique of flagging and filling (sketched below), you essentially allow the algorithm to estimate the optimal constant for missingness, instead of just filling in the mean.
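
A minimal pandas sketch of the flag-and-fill idea, using a made-up column with missing marks:

import numpy as np
import pandas as pd

df = pd.DataFrame({"marks": [78.0, np.nan, 91.0, np.nan, 66.0]})

# Flag missingness as its own feature, then fill the gaps with a constant
df["marks_missing"] = df["marks"].isna().astype(int)
df["marks"] = df["marks"].fillna(0)
print(df)   # the model can now learn from both the flag and the filled value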

Some data cleansing tools

• OpenRefine
• Trifacta Wrangler
• TIBCO Clarity
• Cloudingo
• IBM InfoSphere QualityStage
Conclusion
So, we have discussed four different steps in data cleaning to make the data
more reliable and to produce good results. After properly completing the
Data Cleaning steps, we’ll have a robust dataset that avoids many of the
most common pitfalls. This step should not be rushed as it proves very
beneficial in the further process.

Feature Scaling is a technique to standardize the independent features


present in the data in a fixed range. It is performed during the data pre-
processing.

Machine Learning Process

How does Machine Learning Work?

A machine learning algorithm is trained using a training data set to create a model. When new input data is introduced to the ML algorithm, it makes a prediction on the basis of the model. The prediction is evaluated for accuracy; if the accuracy is acceptable, the machine learning algorithm is deployed, and if not, the algorithm is trained again and again with an augmented training data set.

The Machine Learning process involves building a Predictive model that can
be used to find a solution for a Problem Statement. To understand the
Machine Learning process let’s assume that you have been given a problem
that needs to be solved by using Machine Learning.
The below steps are followed in a Machine Learning process:

Step 1: Define the objective of the Problem Statement

At this step, we must understand what exactly needs to be predicted. In our


case, the objective is to predict the possibility of rain by studying weather
conditions. At this stage, it is also essential to take mental notes on what
kind of data can be used to solve this problem or the type of approach you
must follow to get to the solution.

Step 2: Data Gathering

At this stage, you must be asking questions such as,

 What kind of data is needed to solve this problem?


 Is the data available?
 How can I get the data?

Once you know the types of data that is required, you must understand how
you can derive this data. Data collection can be done manually or by web
scraping. However, if you’re a beginner and you’re just looking to learn
Machine Learning you don’t have to worry about getting the data. There are
1000s of data resources on the web, you can just download the data set and
get going.

Coming back to the problem at hand, the data needed for weather
forecasting includes measures such as humidity level, temperature,
pressure, locality, whether or not you live in a hill station, etc. Such data
must be collected and stored for analysis.

Step 3: Data Preparation

The data you collected is almost never in the right format. You will encounter
a lot of inconsistencies in the data set such as missing values, redundant
variables, duplicate values, etc. Removing such inconsistencies is very
essential because they might lead to wrongful computations and predictions.
Therefore, at this stage, you scan the data set for any inconsistencies and
you fix them then and there.

Step 4: Exploratory Data Analysis

Grab your detective glasses because this stage is all about diving deep into
data and finding all the hidden data mysteries. EDA or Exploratory Data
Analysis is the brainstorming stage of Machine Learning. Data Exploration
involves understanding the patterns and trends in the data. At this stage, all
the useful insights are drawn and correlations between the variables are
understood.

For example, in the case of predicting rainfall, we know that there is a strong
possibility of rain if the temperature has fallen low. Such correlations must
be understood and mapped at this stage.

Step 5: Building a Machine Learning Model


All the insights and patterns derived during Data Exploration are used to
build the Machine Learning Model. This stage always begins by splitting the
data set into two parts, training data, and testing data. The training data will
be used to build and analyze the model. The logic of the model is based on
the Machine Learning Algorithm that is being implemented.
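
A minimal sketch of the train/test split described above, using scikit-learn on placeholder arrays:

import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 5)          # placeholder feature matrix
y = np.random.randint(0, 2, 100)    # placeholder labels

# Hold out 20% of the data for testing the trained model
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape)  # (80, 5) (20, 5)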

Choosing the right algorithm depends on the type of problem you’re trying to
solve, the data set and the level of complexity of the problem. In the
upcoming sections, we will discuss the different types of problems that can
be solved by using Machine Learning.

Step 6: Model Evaluation & Optimization

After building a model by using the training data set, it is finally time to put
the model to a test. The testing data set is used to check the efficiency of the
model and how accurately it can predict the outcome. Once the accuracy is
calculated, any further improvements in the model can be implemented at
this stage. Methods like parameter tuning and cross-validation can be used
to improve the performance of the model.
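
For the cross-validation mentioned above, a short scikit-learn sketch (with a placeholder logistic-regression model and random data) might look like this:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X = np.random.rand(100, 5)
y = np.random.randint(0, 2, 100)

# 5-fold cross-validation: train on four folds, score on the held-out fold, repeat
scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print(scores.mean(), scores.std())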

Step 7: Predictions

Once the model is evaluated and improved, it is finally used to make


predictions. The final output can be a Categorical variable (eg. True or False)
or it can be a Continuous Quantity (eg. the predicted value of a stock).

In our case, for predicting the occurrence of rainfall, the output will be a
categorical variable.

STUDY OF THE SYSTEM


To provide flexibility to the users, interfaces have been developed that are accessible through a browser. The GUIs at the top level have been categorized as the administrative user interface and the operational or generic user interface (see UI Requirements below).

Analysis

Although the scale of this project is relatively small, to produce a professional solution it is imperative that the current problem is understood accurately. This task has been made doubly difficult by the lack of support from the company; thankfully, the application manager has been kind enough to spare some of his own time to discuss the problem further. Therefore, this chapter is concerned with analyzing the current situation and the user's expectations for this system.

Requirements:
The minimum requirements of the project are listed below:
 Examine the tools and methodologies required to gain an overview of
the system requirements for the proposed database.
 Examine suitable database management systems that can be used to
implement the proposed database.
 Evaluate appropriate website authoring and web graphic creation tools
that can be used to develop web based forms for the proposed
database

 Produce and apply suitable criteria for evaluating the solution

Requirement Analysis:
Taking into account the comparative analysis stated in the previous section, we can start specifying the requirements that our system should achieve. As a basis, an article on the different requirements for software development was taken into account during this process. We divide the requirements into two types: functional and non-functional requirements.
Functional requirements
Functional requirements should include the functions performed by a specific screen, outlines of the workflows performed by the system, and other business or compliance requirements the system must meet.
Functional requirements specify which outputs should be produced from the given inputs; they describe the relationship between the input and output of the system. For each functional requirement, a detailed description of all data inputs, their sources and the range of valid inputs must be specified.
The functional specification describes what the system must do; how the system does it is described in the design specification.
If a user requirement specification was written, all requirements outlined in it should be addressed in the functional requirements.

Non-functional requirements
Non-functional requirements describe user-visible aspects of the system that are not directly related to its functional behavior. They include quantitative constraints, such as response time (i.e. how fast the system reacts to user commands) or accuracy (i.e. how precise the system's numerical answers are).

 Portability
 Reliability
 Usability
 Time Constraints
 Error messages
 Actions which cannot be undone should ask for confirmation
 Responsive design should be implemented
 Space Constraints
 Performance
 Standards
 Ethics
 Interoperability
 Security
 Privacy
 Scalability

UI Requirements
1. Administrative user interface
The administrative user interface concentrates on consistent information that is practically part of the organizational activities and that needs proper authentication for data collection. These interfaces help the administrators with all the transactional states like data insertion, data deletion and data updating, along with extensive data search capabilities.

2. The operational or generic user interface

The operational or generic user interface helps the end users of the system in transactions with the existing data and required services. It also helps ordinary users manage their own information in a customized manner, as per the included flexibilities.

INPUT DESIGN AND OUTPUT DESIGN

INPUT DESIGN
Input design is the link between the information system and the user. It comprises developing the specification and procedures for data preparation, i.e. the steps necessary to put transaction data into a usable form for processing, which can be achieved by having the computer read data from a written or printed document or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps and keeping the process simple. The input is designed so that it provides security and ease of use while retaining privacy. Input design considered the following:
 What data should be given as input?
 How should the data be arranged or coded?
 The dialog to guide the operating personnel in providing input.
 Methods for preparing input validations and the steps to follow when errors occur.

OBJECTIVES

1. Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the management the correct direction for getting correct information from the computerized system.

2. It is achieved by creating user-friendly screens for data entry that can handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all data manipulations can be performed; it also provides record-viewing facilities.

3. When the data is entered it is checked for validity. Data can be entered with the help of screens, and appropriate messages are provided as and when needed so that the user is never left in a maze. Thus the objective of input design is to create an input layout that is easy to follow.

OUTPUT DESIGN
A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design, it is determined how the information is to be displayed for immediate need, as well as the hard-copy output. It is the most important and direct source of information for the user. Efficient and intelligent output design improves the system's relationship with the user and helps decision-making.
1. Designing computer output should proceed in an organized, well-thought-out manner; the right output must be developed while ensuring that each output element is designed so that people find the system easy and effective to use. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.
2. Select methods for presenting information.
3. Create the document, report, or other format that contains information produced by the system.
The output form of an information system should accomplish one or more of the following objectives:
 Convey information about past activities, current status or projections of the future.
 Signal important events, opportunities, problems, or warnings.
 Trigger an action.
 Confirm an action.

SYSTEM STUDY
FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements of the system is essential.

Three key considerations involved in the feasibility analysis are

 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY

ECONOMICAL FEASIBILITY

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available; only the customized products had to be purchased.
TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, since only minimal or no changes are required for implementing this system.

SOCIAL FEASIBILITY

This aspect of the study checks the level of acceptance of the system by the user, which includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users depends solely on the methods employed to educate them about the system and make them familiar with it. Their confidence must be raised so that they are also able to offer constructive criticism, which is welcomed, as they are the final users of the system.

SOFTWARE REQUIREMENT SPECIFICATION

What is SRS?
The Software Requirements Specification (SRS) is the starting point of the software development activity. As systems grew more complex, it became evident that the goals of the entire system cannot be easily comprehended; hence the need for the requirements phase arose. The software project is initiated by the client's needs. The SRS is the means of translating the ideas in the minds of the clients (the input) into a formal document (the output of the requirements phase).
The SRS phase consists of two basic activities:

Problem/Requirement Analysis:
This process is the harder and more nebulous of the two; it deals with understanding the problem, the goals and the constraints.

Requirement Specification:
Here the focus is on specifying what has been found during analysis; issues such as representation, specification languages and tools, and checking the specifications are addressed during this activity.
The requirements phase terminates with the production of the validated SRS document; producing this document is the basis of the phase.

Role of SRS:
The purpose of the SRS is to reduce the communication gap between the clients and the developers. The SRS is the medium through which the client and user needs are accurately specified. It forms the basis of software development, and a good SRS should satisfy all the parties involved in the system.

Purpose:
The purpose of this document is to describe all external requirements for the proposed system. It also describes the interfaces of the system.
 Scope:
This document is the only one that describes the requirements of the system. It is meant for use by the developers, and will also be the basis for validating the final delivered system. Any changes made to the requirements in the future will have to go through a formal change approval process. The developer is responsible for asking for clarifications where necessary, and will not make any alterations without the permission of the client.
 Overview:
The SRS begins the translation process that converts the software requirements into the language the developers will use. The SRS draws on the use cases from the user requirements document and analyses the situations from a number of perspectives to discover and eliminate inconsistencies, ambiguities and omissions before development progresses significantly under mistaken assumptions.

Proposed System Architecture:

The proposed system is built around a conventional three-tier architecture. The three-tier architecture allows programmers to separate various aspects of the solution design into modules and work on them separately; that is, a developer who is best at one part of development, say UI development, need not worry so much about the other implementation levels. It also allows for easy maintenance and future enhancements. The three tiers of the solution are:

 The Layout:

This tier is at the uppermost layer and is closely bound to


the user, i.e., the users of the system interact with it through this tier.

 The business-tier:

This tier is responsible for implementing all the business rules of the organization. It operates on the data provided by the users through the web tier and the data stored in the underlying data tier. In a way, this tier works on data from the web tier and the data tier in order to perform tasks for the users in agreement with the business rules of the organization.

 The data-tier:

This tier contains the persistable data that is required by the business tier to operate on. Data plays a very important role in the functioning of any organization; thus, persisting such data is very important, and the data tier performs the job of persisting the data.
Software Development Life Cycle:

The Systems Development Life Cycle (SDLC), or Software Development Life Cycle in systems engineering, information systems and software engineering, is the process of creating or altering systems, and the models and methodologies used to develop these systems.
Requirement Analysis and Design

Analysis gathers the requirements for the system. This stage includes a detailed study of the business needs of the organization; options for changing the business process may be considered. Design focuses on high-level design (what programs are needed and how they are going to interact), low-level design (how the individual programs are going to work), interface design (what the interfaces are going to look like) and data design (what data will be required). During these phases, the software's overall structure is defined. Analysis and design are very crucial in the whole development cycle: any glitch in the design phase could be very expensive to solve in a later stage of development, so much care is taken during this phase. The logical system of the product is developed in this phase.
Implementation

In this phase the designs are translated into code. Computer programs are written using a conventional programming language or an application generator. Programming tools like compilers, interpreters and debuggers are used to generate the code. High-level languages and platforms such as Python 3.6 and Anaconda are used for coding; the right programming language is chosen with respect to the type of application.

Testing

In this phase the system is tested. Normally programs are written as a series of individual modules, each subjected to separate and detailed tests. The system is then tested as a whole: the separate modules are brought together and tested as a complete system. The system is tested to ensure that interfaces between modules work (integration testing), that the system works on the intended platform and with the expected volume of data (volume testing), and that the system does what the user requires (acceptance/beta testing).

Maintenance

Inevitably the system will need maintenance. Software will definitely


undergo change once it is delivered to the customer. There are many
reasons for the change. Change could happen because of some unexpected
input values into the system. In addition, the changes in the system could
directly affect the software operations. The software should be developed to
accommodate changes that could happen during the post implementation
period.

SDLC METHODOLOGIES
This document plays a vital role in the software development life cycle (SDLC) as it describes the complete requirements of the system. It is meant for use by developers and will be the basis during the testing phase. Any changes made to the requirements in the future will have to go through a formal change approval process.
The SPIRAL MODEL was defined by Barry Boehm in his 1988 article "A Spiral Model of Software Development and Enhancement". This model was not the first to discuss iterative development, but it was the first to explain why the iteration matters.
As originally envisioned, the iterations were typically 6 months to 2 years
long. Each phase starts with a design goal and ends with a client reviewing
the progress thus far. Analysis and engineering efforts are applied at each
phase of the project, with an eye toward the end goal of the project.
The following diagram shows how the spiral model works:

The steps for Spiral Model can be generalized as follows:


 The new system requirements are defined in as much detail as possible.
 This usually involves interviewing a number of users representing all
the external or internal users and other aspects of the existing system.
 A preliminary design is created for the new system.
 A first prototype of the new system is constructed from the preliminary
design. This is usually a scaled-down system, and represents an
approximation of the characteristics of the final product.
 A second prototype is evolved by a fourfold procedure:

1. Evaluating the first prototype in terms of its strengths,


weakness, and risks.

2. Defining the requirements of the second prototype.

3. Planning and designing the second prototype.

4. Constructing and testing the second prototype.

 At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.
 The existing prototype is evaluated in the same manner as was the
previous prototype, and if necessary, another prototype is developed
from it according to the fourfold procedure outlined above.
 The preceding steps are iterated until the customer is satisfied that the
refined prototype represents the final product desired.
 The final system is constructed, based on the refined prototype.
 The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.
SYSTEM DESIGN

System design is the transition from a user-oriented document to one for programmers or database personnel. The design is a solution: an approach to the creation of a new system, composed of several steps. It provides the understanding and procedural details necessary for implementing the system recommended in the feasibility study. Designing goes through logical and physical stages of development: logical design reviews the present physical system, prepares the input and output specifications and the details of the implementation plan, and prepares a logical design walkthrough.

The database tables are designed by analyzing the functions involved in the system, and the format of the fields is also designed. The fields in the database tables should define their role in the system; unnecessary fields should be avoided because they affect the storage requirements of the system. In the input and output screen design, the design should be made user-friendly, and the menus should be precise and compact.

SOFTWARE DESIGN
In designing the software, the following principles are followed:

1. Modularity and partitioning: the software is designed such that each system consists of a hierarchy of modules, partitioned into separate functions.

2. Coupling: modules should have little dependence on other modules of the system.

3. Cohesion: each module should carry out a single processing function.

4. Shared use: avoid duplication by allowing a single module to be called by others that need the function it provides.

Data Flow Diagram:


Technical Architecture:
System Architecture:
UML DIAGRAMS

UML stands for Unified Modeling Language. UML is a standardized, general-purpose modeling language in the field of object-oriented software engineering. The standard is managed, and was created, by the Object Management Group.
The goal is for UML to become a common language for creating models of object-oriented computer software. In its current form, UML comprises two major components: a meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing, constructing and documenting the artifacts of a software system, as well as for business modeling and other non-software systems. The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and of the software development process; it uses mostly graphical notations to express the design of software projects.

GOALS:
The primary goals in the design of the UML are as follows:
Provide users with a ready-to-use, expressive visual modeling language so that they can develop and exchange meaningful models.
Provide extendibility and specialization mechanisms to extend the core concepts.
Be independent of particular programming languages and development processes.
Provide a formal basis for understanding the modeling language.
Encourage the growth of the OO tools market.
Support higher-level development concepts such as collaborations, frameworks, patterns and components.
Integrate best practices.

UML Diagrams:

Unified Modeling Language (UML) is a modelling language. The main purpose


of UML is to visualize the way a system has been designed. It is a visual
language to sketch the behavior and structure of the system. This was
adopted by Object Management Group (OMG) as a standard in 1997.

Use Case Diagram:

 The purpose of a use case diagram is to capture the dynamic aspect of a system. It is used to gather the requirements of a system, including internal and external influences.
 The main purpose of a use case diagram is to show which system functions are performed for which actor; the roles of the actors in the system can be depicted.
Use-case diagram for automated attendance management

Sequence Diagram:

 A sequence diagram details the interaction between objects in a sequential order, i.e. the order in which these interactions take place.
 These diagrams are sometimes known as event diagrams or event scenarios. They help in understanding how the objects and components interact to execute the process.
 A sequence diagram has two dimensions, representing time (vertical) and the different objects (horizontal).
Sequence diagram for automated attendance management system

Class Diagram:

• The class diagram describes the structure of a system by showing the
system's classes, their attributes, operations, and the relationships
among the classes.
• It explains which class contains which information and also describes
the responsibilities of the system. It is also known as a structural
diagram; a small code sketch mirroring such classes is given below the
diagram.
Class diagram for automated attendance
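As a purely illustrative mirror of such a class diagram in code, the sketch below defines hypothetical Student and AttendanceRecord classes with a simple association between them; the names, attributes and operations are assumptions made for this sketch, not the project's actual classes.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Student:
    # Attributes that would appear in the class diagram
    enrollment_id: str
    name: str
    face_images: List[str] = field(default_factory=list)   # paths to registered face images

@dataclass
class AttendanceRecord:
    # Association: each attendance record refers to one Student
    student: Student
    subject: str
    timestamp: datetime = field(default_factory=datetime.now)

    def as_csv_row(self):
        # Operation: serialise one record for the CSV attendance report
        return [self.student.enrollment_id, self.student.name,
                self.subject, self.timestamp.strftime('%Y-%m-%d %H:%M:%S')]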

Activity Diagram:

• It is a behavioural diagram that reveals the behaviour of a system. It
sketches the control flow from an initiation point to a finish point,
showing the several decision paths that exist while the activity is
being executed.
• It does not show message flow from one activity to another; although an
activity diagram is sometimes treated as a flowchart, the two are not
the same.
• In the Unified Modeling Language, activity diagrams can be used to
describe the business and operational step-by-step workflows of
components in a system.
Activity diagram for the automated attendance management system
Sample code:
import tkinter as tk
from tkinter import *
import cv2
import csv
import os
import numpy as np
from PIL import Image, ImageTk
import pandas as pd
import datetime
import time

# Main frame (window) of the system
window = tk.Tk()
window.title("FAMS-Face Recognition Based Attendance Management System")
window.geometry('1280x720')
window.configure(background='snow')

# GUI for manually filling attendance
def manually_fill():
    global sb
    sb = tk.Tk()
    sb.iconbitmap('AMS.ico')
    sb.title("Enter subject name...")
    sb.geometry('580x320')
    sb.configure(background='snow')

    def err_screen_for_subject():
        # Warning pop-up shown when no subject name has been entered
        def ec_delete():
            ec.destroy()
        global ec
        ec = tk.Tk()
        ec.geometry('300x100')
        ec.iconbitmap('AMS.ico')
        ec.title('Warning!!')
        ec.configure(background='snow')
        Label(ec, text='Please enter your subject name!!!', fg='red', bg='white',
              font=('times', 16, ' bold ')).pack()
        Button(ec, text='OK', command=ec_delete, fg="black", bg="lawn green",
               width=9, height=1, activebackground="Red",
               font=('times', 15, ' bold ')).place(x=90, y=50)

    def fill_attendance():
        ts = time.time()
        Date = datetime.datetime.fromtimestamp(ts).strftime('%Y_%m_%d')
        timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
        Time = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
        Hour, Minute, Second = timeStamp.split(":")

        # Create a table for this attendance session
        date_for_DB = datetime.datetime.fromtimestamp(ts).strftime('%Y_%m_%d')
        global subb
        subb = SUB_ENTRY.get()   # SUB_ENTRY: subject-name entry widget (defined later in the full program, not shown here)
        DB_table_name = str(subb + "_" + Date + "_Time_" + Hour + "_" + Minute + "_" + Second)
        import pymysql.connections

        # Connect to the database
        try:
            global cursor
            connection = pymysql.connect(host='localhost', user='root',
                                         password='root', db='manually_fill_attendance')
            cursor = connection.cursor()
        except Exception as e:
            print(e)

        sql = "CREATE TABLE " + DB_table_name + """
        (ID INT NOT NULL AUTO_INCREMENT,
        ENROLLMENT varchar(100) NOT NULL,
        NAME VARCHAR(50) NOT NULL,
        DATE VARCHAR(20) NOT NULL,
        TIME VARCHAR(20) NOT NULL,
        PRIMARY KEY (ID)
        );
        """

        try:
            cursor.execute(sql)   # create the attendance table
        except Exception as ex:
            print(ex)

        if subb == '':
            err_screen_for_subject()
        else:
            sb.destroy()
            MFW = tk.Tk()
            MFW.iconbitmap('AMS.ico')
            MFW.title("Manually attendance of " + str(subb))
            MFW.geometry('880x470')
            MFW.configure(background='snow')
Testing

Test strategy and approach


Field testing will be performed manually and functional tests will be written in
detail.
Test objectives

• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.

Features to be tested

• Verify that the entries are of the correct format.
• No duplicate entries should be allowed.
• All links should take the user to the correct page.

The actual purpose of testing is to discover errors. Testing is the process
of trying to discover every conceivable fault or weakness in a work
product. It is the process of exercising software with the intent of
ensuring that the software system meets its requirements and user
expectations and does not fail in an unacceptable manner.

TYPES OF TESTING
Many types of testing methods are available; the mainly used methods are
as follows.
Unit testing

Unit testing involves the design of test cases that validate that the
internal program logic is functioning properly and that program inputs
produce valid outputs. All decision branches and internal code flow should
be validated. It is the testing of individual software units of the
application and is done after the completion of an individual unit, before
integration. This is structural testing that relies on knowledge of the
unit's construction and is invasive. Unit tests perform basic tests at the
component level and test a specific business process, application, and/or
system configuration. Unit tests ensure that each unique path of a business
process performs accurately to the documented specifications and contains
clearly defined inputs and expected results.
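As a small illustration of unit testing for this project, the test below exercises a single helper in isolation. build_table_name is a hypothetical extraction of the table-name construction done inline in fill_attendance above, introduced here only so the logic can be tested without a GUI or database.

import unittest

def build_table_name(subject, date, hour, minute, second):
    # Hypothetical helper mirroring the inline string construction in fill_attendance
    return subject + "_" + date + "_Time_" + hour + "_" + minute + "_" + second

class TestBuildTableName(unittest.TestCase):
    def test_table_name_format(self):
        name = build_table_name("Maths", "2024_01_15", "10", "30", "05")
        self.assertEqual(name, "Maths_2024_01_15_Time_10_30_05")

    def test_table_name_starts_with_subject(self):
        # The GUI rejects an empty subject before this helper would be called
        self.assertTrue(build_table_name("Physics", "2024_01_15", "10", "30", "05").startswith("Physics"))

if __name__ == '__main__':
    unittest.main()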

Integration testing

Integration tests are designed to test integrated software
components to determine if they actually run as one program. Testing is
event driven and is more concerned with the basic outcome of screens or
fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the
combination of components is correct and consistent. Integration testing is
specifically aimed at exposing the problems that arise from the combination
of components.

Functional test
Functional tests provide systematic demonstrations that functions tested
are available as specified by the business and technical requirements,
system documentation, and user manuals.
Functional testing is centered on the following items:

Valid input         : identified classes of valid input must be accepted.
Invalid input       : identified classes of invalid input must be rejected.
Functions           : identified functions must be exercised.
Output              : identified classes of application outputs must be exercised.
Systems/Procedures  : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on
requirements, key functions, or special test cases. In addition, systematic
coverage pertaining to identifying business process flows, data fields,
predefined processes, and successive processes must be considered for
testing. Before functional testing is complete, additional tests are
identified and the effective value of the current tests is determined.
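As a minimal sketch of functional testing for the valid and invalid input classes listed above, the test below checks a hypothetical enrollment-ID validator; the project itself accepts the entry fields without such a dedicated function, so both the validator and the assumed 1-to-12-digit format are illustrative only.

import re
import unittest

def is_valid_enrollment(enrollment_id):
    # Hypothetical validator: enrollment IDs assumed to be 1 to 12 digits
    return bool(re.fullmatch(r"\d{1,12}", enrollment_id))

class TestEnrollmentInputClasses(unittest.TestCase):
    def test_valid_inputs_are_accepted(self):
        for value in ["1", "2023001", "999999999999"]:
            self.assertTrue(is_valid_enrollment(value))

    def test_invalid_inputs_are_rejected(self):
        for value in ["", "abc", "12 34", "1234567890123"]:
            self.assertFalse(is_valid_enrollment(value))

if __name__ == '__main__':
    unittest.main()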

System Test
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable
results. An example of system testing is the configuration-oriented system
integration test. System testing is based on process descriptions and flows,
emphasizing pre-driven process links and integration points.

White Box Testing


White box testing is testing in which the software tester has knowledge
of the inner workings, structure and language of the software, or at least
its purpose. It is used to test areas that cannot be reached from a
black-box level.
Black Box Testing
Black box testing is testing the software without any knowledge of the
inner workings, structure or language of the module being tested. Black-box
tests, like most other kinds of tests, must be written from a definitive
source document, such as a specification or requirements document. It is
testing in which the software under test is treated as a black box: you
cannot "see" into it. The test provides inputs and responds to outputs
without considering how the software works.

Test cases:

Test 1: Enter student details
    Inputs: name, enrollment ID
    Expected output: details entered
    Actual output: details taken successfully
    Status: success

Test 2: Take student images
    Inputs: student faces
    Expected output: detection with the given images
    Actual output: detection
    Status: success

Test 3: Training the model
    Inputs: student images with name and enrollment ID
    Expected output: model trained
    Actual output: successfully trained
    Status: success

Test 4: Automatic fill attendance
    Inputs: trained images
    Expected output: faces detected and CSV file of attendance created
    Actual output: CSV file with subject name and time created successfully
    Status: success

Test 5: Manually fill attendance
    Inputs: name, enrollment ID
    Expected output: entered successfully
    Actual output: successfully taken and CSV file created
    Status: success

Test 6: Checking registered students
    Inputs: admin login with credentials
    Expected output: CSV file of registered students
    Actual output: successfully opened
    Status: success
Screenshots:
Conclusion:
This system has been designed to automate attendance maintenance.
The main objective behind developing this system is to eradicate the
drawbacks and unconventional methods of manual attendance handling. The
traditional methods lack effectiveness, leading to time and paper wastage,
and allow proxy attendance, which is eliminated in the automated system.
To overcome these drawbacks of manual attendance, this framework proves to
be a better and more reliable solution with respect to both time and
security. In this way, the automated attendance system helps to distinguish
between the faces in a classroom and recognize them accurately to mark
attendance. The efficiency of the system can be further improved by
fine-tuning the training process.
References:

[1] S. V. Tathe, A. S. Narote, S. P. Narote, "Human Face Detection and
Recognition in Videos", International Conference on Advances in Computing,
Communications and Informatics (ICACCI), Sept 2016, 21-24, Jaipur, India.

[2] Hemdan, S. Karungaru, K. Terada, "Facial features based method for
human tracking", 17th Korea-Japan Joint Workshop on Frontiers of Computer
Vision, pp. 1-4, 2011.

[3] R. Sarkar, S. Bakshi, P. K. Sa, "A real-time model for multiple human
face tracking from low-resolution surveillance videos", Procedia Technology,
vol. 6, pp. 1004-1010, 2012.

[4] Yi-Qing Wang, "An Analysis of the Viola-Jones Face Detection Algorithm",
Image Processing On Line, 4 (2014), pp. 128-148.

[5] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of
simple features", Proceedings of the 2001 IEEE Computer Society Conference
on Computer Vision and Pattern Recognition, vol. 1, 2001, pp. 511-518.

[6] P. Viola and M. Jones, "Robust real-time face detection", International
Journal of Computer Vision, vol. 57, pp. 137-154, May 2004.

[7] M. A. Turk, A. P. Pentland, "Face recognition using eigenfaces", IEEE
Computer Society Conference on Computer Vision and Pattern Recognition,
pp. 586-591, 1991.

[8] Xiaoyang Tan and Bill Triggs, "Enhanced Local Texture Feature Sets for
Face Recognition Under Difficult Lighting Conditions", IEEE Trans. on Image
Processing, Vol. 19, No. 6, June 2010.

[9] Belhumeur, P., Hespanha, J., & Kriegman, D. (1997). "Eigenfaces vs.
Fisherfaces: Recognition Using Class Specific Linear Projection", IEEE
Transactions on Pattern Analysis and Machine Intelligence, 19(7).

[10] Liu, K., Cheng, Y.-Q. & Yang, J.-Y. (1993). "Algebraic feature extraction
for image recognition based on an optimal discriminant criterion", Pattern
Recognition, Vol. 26, No. 6, 903-906.

[11] Aleix Martinez, "Fisherfaces", Scholarpedia, Vol. 6, No. 2:4282, 2011.
[Online] http://www.scholarpedia.org/article/Fisherfaces

[12] Philipp Wagner, "Fisherfaces", June 03, 2012. [Online]
https://www.bytefish.de/blog/fisherfaces/

[13] OpenCV, "Cascade Classifier Training", OpenCV 2.4.9.0 documentation.
[Online] http://docs.opencv.org/doc/userguide=ugtraincascade:

[14] N. Seo, "OpenCV haartraining (Rapid Object Detection with a Cascade of
Boosted Classifiers Based on Haar-like Features)". [Online]
http://note.sonots.com/SciSoftware/haartraining.html
