Face Recognition Based Attendance System Using Python, Django, OpenCV and Machine Learning
1. INTRODUCTION
In colleges, universities, organizations, schools, and offices, taking attendance is one of the most important daily tasks. Most of the time it is done manually, by calling out names or roll numbers. The main goal of this project is to create a face recognition based attendance system that turns this manual process into an automated one. In the existing fingerprint based attendance system, a portable fingerprint device must first be configured with the students' fingerprints. Later, either during or before the lecture hours, each student records a fingerprint on the configured device to confirm attendance for the day. The problem with this approach is that it may distract students during lecture time. This project meets the requirements for modernizing the way attendance is handled, as well as the criteria for time management. The device is installed in the classroom, where each student's information, such as name, roll number, class, section, and photographs, is used for training. The images are extracted using OpenCV.
Before the start of the corresponding class, a student can approach the machine, which begins taking pictures and comparing them to the trained dataset. A Logitech C270 web camera and an NVIDIA Jetson Nano Developer Kit were used in this project as the camera and processing board. The image is processed as follows: first, faces are detected using a Haar cascade classifier; then faces are recognized using the LBPH (Local Binary Pattern Histogram) algorithm; the histogram data is checked against the established dataset; and the device automatically marks attendance. An Excel sheet is generated and updated every hour with the information from the respective class instructor. All students of the class must register themselves by entering the required details, after which their images are captured and stored in the dataset. During each session, faces are detected from the live streaming video of the classroom and compared with the images present in the dataset. If a match is found, attendance is marked for the respective student. At the end of each session, the list of absentees is mailed to the faculty member handling the session. Face recognition systems are part of facial image processing applications, and their significance as a research area has been increasing recently. They use human biometric information and are easier to apply than fingerprint, iris, or signature based methods, because those biometrics are not well suited to non-collaborative people. Face recognition systems are commonly deployed with security cameras in metropolitan life, and can be used for crime prevention, video surveillance, person verification, and similar security activities. In this work we describe a face recognition based automated attendance system with a Python GUI. This technique has many applications in everyday life, notably in schools and colleges. Scaling of the image is performed in the first phase, the pre-processing stage, to avoid or minimize information loss. Haar cascade and LBPH are the algorithms involved. Overall, we created a Python program that accepts an image from a database, performs all the conversions necessary for face identification, and then confirms the identity in video or in real time through a user-friendly interface by accessing the camera.
1.1 MOTIVATION
The management of attendance is compulsory and important in all institutions for knowing the performance of students. In order to keep track of student attendance, identify who is in class, and ensure their safety, schools need to take attendance. It can be used to find potential concerns or issues that might be affecting a student's attendance, as well as to help schools keep track of which pupils are present and which are absent. Taking attendance also enables schools to monitor attendance rates and make sure that students are showing up to class regularly, which can be used to pinpoint areas where student involvement and participation need to be improved. It also assists schools in planning future classes and ensuring proper class sizes. Teachers must record attendance properly because it is a primary determinant in raising educational standards. Schools have many different methods of taking attendance, ranging from traditional methods such as sign-in sheets to more technologically advanced solutions such as biometric systems, online platforms, and mobile apps. Traditional sign-in sheets require teachers to do a roll call or let students sign their names on a piece of paper when they enter the classroom. However, this method takes a lot of time, especially when the class has a large number of students; there is a high chance of misplacing the paper; and students are likely to cheat during sign-in by signing in for other students who are not present. Schools are also beginning to use ID scanners to automatically register student attendance, biometric systems using fingerprints or retinal scans, and online platforms that keep digital records of student attendance. Face recognition has many advantages over other biometric methods. With those methods, users must take a voluntary action, such as standing in front of a scanner or pressing a finger on a touch pad, which takes time since students are required to stand in a queue. With face recognition, on the other hand, no action is required: the camera can capture images from a distance and the features are extracted from the images for identification, which makes it much easier to recognize faces without wasting time. Feature extraction is a process used in face recognition in which the face region is analysed to extract meaningful information for further processing. There are three primary techniques for this purpose: holistic matching methods, feature based matching methods, and hybrid methods. Holistic methods use the whole face region as raw input, while feature based methods extract local features such as the eyes, nose, and mouth. Hybrid methods combine both techniques to offer the best recognition results. In this work, we develop a web-based automated multiple student face recognition attendance system using the deep learning library face_recognition. The model is deployed to the web, where users (teachers) interact with the system through the interface of the web app and mark attendance. Users can register new students and start and stop attendance taking. Identification entails matching the extracted face images from the video feed against references in the student database. The student name assigned to each detected face is added to a CSV file, which serves as the attendance record; the user can read and write this file, and it can be imported into an Excel sheet in xlsx format.
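As a sketch of how the CSV attendance record can be queried, the helper below derives the absentee list from the registered students. The column names (Name, Date, Time, Status) and the helper itself are illustrative assumptions, not the project's actual code.

```python
import csv
import io

def absentees(registered, attendance_csv):
    """Return registered students with no 'Present' or 'Late' row in the CSV.

    Assumes the CSV has the columns Name, Date, Time, Status; the real
    column layout in the project may differ.
    """
    present = set()
    for row in csv.DictReader(io.StringIO(attendance_csv)):
        if row["Status"] in ("Present", "Late"):
            present.add(row["Name"])
    return sorted(set(registered) - present)

records = (
    "Name,Date,Time,Status\n"
    "Alice,2024-01-15,09:02,Present\n"
    "Bob,2024-01-15,09:20,Late\n"
)
print(absentees(["Alice", "Bob", "Carol"], records))  # ['Carol']
```

A list produced this way could then be mailed to the faculty member handling the session, as described above.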
1.2 PROBLEM STATEMENT AND OBJECTIVES
Attendance is of prime importance for both the teacher and the student of an educational organization, so it is very important to keep a record of it. The problem arises when we consider the traditional process of taking attendance in the classroom. Calling the name or roll number of each student is not only time consuming but also requires energy, so an automatic attendance system can solve these problems. Some automatic attendance systems, such as biometric and RFID systems, are already used by many institutions. Although these are automatic and a step ahead of the traditional method, they fail to meet the time constraint: students have to wait in a queue to give attendance, which takes time. This project introduces an involuntary attendance marking system, devoid of any interference with the normal teaching procedure. The system can also be used during exam sessions or in other teaching activities where attendance is highly essential. It eliminates classical student identification methods, such as calling the student's name or checking identification cards, which can not only interfere with the ongoing teaching process but also be stressful for students during examination sessions. In addition, students have to register in the database to be recognized; the enrolment can be done on the spot through the user-friendly interface.
The traditional student attendance marking technique often faces a lot of trouble. The face recognition student attendance system emphasizes simplicity by eliminating classical attendance marking techniques, such as calling student names or checking identification cards, which not only disturb the teaching process but also cause distraction for students during exam sessions. Apart from calling names, an attendance sheet is often passed around the classroom during lecture sessions; a class with a large number of students may find it difficult to have the sheet passed around. Thus, a face recognition attendance system is proposed to replace the manual signing of student presence, which is burdensome and distracts students. Furthermore, the face recognition based automated student attendance system is able to overcome the problem of fraudulent sign-ins, and lecturers do not have to count the number of students several times to ensure their presence. The paper by Zhao, W. et al. (2003) lists the difficulties of facial identification, one of which is distinguishing between known and unknown images. In addition, Pooja G.R. et al. (2010) found that the training process for a face recognition student attendance system is slow and time-consuming, and Priyanka Wagh et al. (2015) mentioned that different lighting conditions and head poses are problems that can degrade the performance of a face recognition based student attendance system. Hence, there is a need to develop a student attendance system that operates in real time, meaning the identification process must be completed within defined time constraints to prevent omission. The features extracted from facial images, which represent the identity of the students, have to be consistent under changes in background, illumination, pose, and expression. High accuracy and fast computation time are the evaluation points of the performance.
1.3 CHALLENGES OF THE PROJECT
We designed a web-based face recognition system that captures student faces from a webcam and uses the face_recognition deep learning library to detect their facial features. The system was deployed as a web application after testing to ensure that it worked properly. The first step in building a face recognition attendance system is to collect face images of students during registration, using a camera or webcam connected to the system. The stored images in the database are compared to the faces detected in the video feed; once student faces are recognized, their attendance is marked in the system. The figure shows the overview of the proposed architecture of the face recognition attendance marking system. The system makes use of dlib's state-of-the-art face recognition model, which achieved an accuracy of 99.38% on the Labeled Faces in the Wild (LFW) benchmark dataset. The camera locates and recognizes student faces in the video feed, after which the system analyzes the face images by comparing them with the images stored in the database. According to the face_recognition library, the geometry of the face, including the separation between the eyes, the depth of the eye sockets, the distance from forehead to chin, the curve of the cheekbones, and the contours of the lips, ears, and chin, provides the important features for identifying faces. The faces detected in video frames are encoded using the face_encodings function and compared to the ones in the dataset using the compare_faces function; if a detected face matches an image in the student database, a prediction is made and the corresponding label is assigned. Faces that do not match any image in the database are assigned an "unknown" label and are not entered into the attendance record. The system interacts with users through a web app, which offers the interface via which users control the system's back-end. We designed the system so that users can register new students either through manual registration or through the web application's user interface. Manual registration is currently the most effective way to register: the administrator creates a folder labeled with the student's name and manually inserts the student's face images. Once the folder has been added to the record database, the new student's face can be recognized in the video feed during attendance taking. Alternatively, users can access the website and navigate to the registration form to register new students. The teacher enters the name of the student into the textbox of the form and presses submit. Once submitted, the form ID and input text (the name) are passed in the URL and read in the backend via request.GET. The backend uses the submitted name to create a directory in which the image data will be saved. Once the submit button is clicked, the camera automatically opens and captures faces from the video frames. However, since the camera takes a few seconds after opening to capture a set number of images (we assigned 10 images), it is not possible to show a visual of the video feed, so the user has to position the person in front of the camera before pressing submit. Once the camera turns off, the captured images have already been saved directly into the newly created student database folder. The system then requires the user to close the web server and restart it; otherwise an index error occurs, since the program has not yet processed the new student folder for recognition.
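The encode-and-compare flow described above can be sketched with the face_recognition library as follows. The helper names are illustrative, and the library import is deferred inside the function so that the labelling logic can be read and exercised on its own.

```python
def label_for(matches, known_names, unknown="unknown"):
    """Map the boolean list returned by compare_faces to a student name.

    Returns the name of the first matching reference encoding, or the
    'unknown' label when no reference matched.
    """
    for matched, name in zip(matches, known_names):
        if matched:
            return name
    return unknown

def recognize_frame(frame, known_encodings, known_names):
    # Deferred import so the sketch can be read without the library installed.
    import face_recognition
    labels = []
    for encoding in face_recognition.face_encodings(frame):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        labels.append(label_for(matches, known_names))
    return labels
```

Note that compare_faces accepts a tolerance parameter (0.6 by default) controlling how strict a match must be; faces labelled "unknown" are simply skipped when writing the attendance record.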
1.4 PROJECT OUTLINE
Face recognition is crucial in daily life in order to identify family, friends, or someone we are familiar with. We may not realize that several steps are actually taken in order to identify human faces. Human intelligence allows us to receive information and interpret it in the recognition process. We receive information through the image projected onto our eyes, specifically onto the retina, in the form of light. Light is a form of electromagnetic wave that is radiated from a source onto an object and projected to human vision. Robinson-Riegler, G., & Robinson-Riegler, B. (2008) mention that after visual processing by the human visual system, we classify the shape, size, contour, and texture of the object in order to analyse the information. The analysed information is then compared to other representations of objects or faces that exist in our memory in order to recognize them. In fact, it is a hard challenge to build an automated system with the same face recognition capability as a human. Moreover, recognizing many different faces requires a large memory; in a university, for example, there are many students of different races and genders, and it is impossible to remember every individual face without making mistakes. To overcome human limitations, computers with almost limitless memory and high processing speed and power are used in face recognition systems. The human face is a unique representation of individual identity. Thus, face recognition is defined as a biometric method in which an individual is identified by comparing a real-time captured image with the stored images of that person in the database (Margaret Rouse, 2012).
Nowadays, face recognition systems are prevalent due to their simplicity and excellent performance. For instance, airport protection systems and the FBI use face recognition in criminal investigations, tracking suspects, missing children, and drug activities (Robert Silk, 2017). Apart from that, Facebook, a popular social networking website, implements face recognition to allow users to tag their friends in photos for entertainment purposes (Sidney Fussell, 2018). Furthermore, Intel allows users to use face recognition to access their online accounts (Reichert, C., 2017), and Apple allows users to unlock the iPhone X using face recognition (deAgonia, M., 2017). Work on face recognition began in the 1960s. Woody Bledsoe, Helen Chan Wolf, and Charles Bisson introduced a system which required the administrator to locate the eyes, ears, nose, and mouth in images; the distances and ratios between the located features and common reference points were then calculated and compared. The studies were further enhanced by Goldstein, Harmon, and Lesk in 1970 by using other features, such as hair colour and lip thickness, to automate the recognition. In 1988, Kirby and Sirovich first suggested principal component analysis (PCA) to solve the face recognition problem. Many studies on face recognition have been conducted continuously until today (Ashley DuVal, 2012). For data to remain secure, measures must be taken to prevent unauthorized access. Security means that data are protected from various forms of destruction. The system security problem can be divided into four related issues: security, integrity, privacy, and confidentiality. Requiring a username and password to sign in ensures security, and data security is also provided since secured databases are used for maintaining the documents.
2. THEORETICAL BACKGROUND
Firstly, all students in the class have their images taken during registration; these are recorded in the student database in preparation for the attendance check and are used to generate the student face database as a reference for real-time face recognition. To check a student's attendance for the class, the system detects the student's face in the real-time video stream and employs dlib's face recognition module to predict whether the student matches anyone in the database and, if so, to identify the name of the student. The output of the face recognition is used to update the attendance record in the form of an Excel file. For the web development, the Django framework was used; the facial recognition model was deployed on it and a camera was integrated with it. The Django framework uses templates, in which HTML, CSS, and some JavaScript packages from Bootstrap were used to design the interface. URL patterns were added that tell Django which pages to generate in response to a URL request, and a mapper redirects HTTP requests to the appropriate view based on the request URL. To integrate the templates and the URLs, class-based views and path URL patterns were first imported. The content is taken from the views, the styling comes from the templates, and the face recognition results shown on the web interface are sent to the Excel file. The system was developed using Python 3.8 with the help of the face_recognition deep learning library and Python libraries such as OpenCV, Pandas, NumPy, and Django, on an Nvidia GeForce RTX 2080 with an Intel(R) Core(TM) i7-9700K CPU at 3.60 GHz.
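A minimal sketch of the URL routing and registration view described above; the route names, template name, and student_db folder are illustrative assumptions, not the project's actual files.

```python
# urls.py -- illustrative routing for the attendance app.
from django.urls import path
from . import views

urlpatterns = [
    path("", views.home, name="home"),                   # homepage with navigation
    path("register/", views.register, name="register"),  # registration form
    path("attendance/", views.take_attendance, name="attendance"),
]

# views.py -- sketch of the registration view: the submitted name arrives
# in the URL query string and is used to create the student's image folder.
import os
from django.shortcuts import render

def register(request):
    name = request.GET.get("name")
    if name:
        os.makedirs(os.path.join("student_db", name), exist_ok=True)
    return render(request, "register.html", {"name": name})
```

The mapper behaviour described in the text corresponds to Django matching the request URL against these path() patterns and dispatching to the named view.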
Face Recognition
The web app interface contains a homepage with navigation options from which the user can either register new students or take attendance. For taking attendance there are start and stop camera buttons. When a user clicks the start button, a request action is initiated, opening the camera and launching a video feed that records all identified faces, thus taking attendance.
Mark Attendance
An Excel file is used to mark and store the attendance records. The file contains four columns, the student's name, date, status, and time, which are filled in during marking. Students in the classroom are recorded using a web camera, and those whose faces appear in the video feed are detected. Faces detected and recognized in the video stream are recorded as present, while faces that are detected but not recognized, because they are not registered in the system or do not belong to that class, are given the "unknown" label. A condition was added so that faces labelled "unknown" are not recorded in the Excel file. We also added a time range for taking attendance, such that students who arrive late are marked as late, while those who did not attend class are not recorded and are regarded as absent. The system records the name, date, and time of each face it captures in the video into the Excel file; we added another condition so that once a student has been marked, the system does not record him or her again the next time the student's face is captured in the video feed. This avoids redundant information in the Excel file that would distort the records.
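The marking conditions above (skip "unknown" faces, mark late arrivals, avoid duplicate rows) can be sketched as a small function. The cut-off times and column names are assumptions for illustration, since the report does not state the exact values.

```python
from datetime import datetime, time

# Assumed cut-off times; the report does not state the exact ranges.
ON_TIME_END = time(9, 15)  # recognized before this -> "Present"
LATE_END = time(10, 0)     # recognized before this -> "Late"; after -> not recorded

def mark(records, name, now):
    """Append one attendance row per student per day, skipping 'unknown' faces."""
    if name == "unknown":
        return records
    today = now.date().isoformat()
    if any(r["Name"] == name and r["Date"] == today for r in records):
        return records  # already marked today: avoid a duplicate row
    t = now.time()
    if t <= ON_TIME_END:
        status = "Present"
    elif t <= LATE_END:
        status = "Late"
    else:
        return records  # outside the attendance window: regarded as absent
    records.append({"Name": name, "Date": today,
                    "Time": t.strftime("%H:%M"), "Status": status})
    return records

rows = []
mark(rows, "Alice", datetime(2024, 1, 15, 9, 5))    # on time -> "Present"
mark(rows, "Alice", datetime(2024, 1, 15, 9, 40))   # duplicate: ignored
mark(rows, "unknown", datetime(2024, 1, 15, 9, 6))  # unrecognized: ignored
mark(rows, "Bob", datetime(2024, 1, 15, 9, 30))     # after cut-off -> "Late"
```

The resulting rows map directly onto the four Excel columns described above.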
3. LITERATURE REVIEW
Author (Year) | Algorithm | Problem | Summary
Visar Shehu (2015) | PCA | Recognition rate of 56%; problems recognizing students in year 3 or 4 | Used a HAAR classifier and computer vision algorithms to implement face recognition
Syen Navaz (2018) | PCA, ANN | Low accuracy with large image sizes when training with PCA | Used PCA to reduce dimensionality and an ANN to classify input data and find patterns
Kar, Nirmalya (2016) | PCA | Repeated image capturing | Used eigenvectors and eigenvalues for face recognition
The human face has unique physical shapes and features that can be used to identify or confirm identity; face recognition records this facial biometric, and various face recognition methods measure facial biometric parameters. Facial recognition has become a very important topic in recent years. It is effectively applied in various applications such as security systems, authentication, access control, surveillance systems, smartphone unlocking, and social networking systems. Face recognition is not yet used as the primary form of identification in most practice. However, advances in technology and algorithms are allowing facial recognition systems to replace standard passwords and fingerprint scanners. Facial recognition systems are employed throughout the world today by governments and private companies. Their effectiveness varies, and some systems have previously been scrapped because of their ineffectiveness.
Better security: -
The proposed system eliminates the manual errors while entering the details of the users during
the registration.
Better service: -
The product avoids the burden of hard copy storage. We can also conserve the time and human resources needed for the same task, and the data can be maintained for a longer period with no loss.
4.7 DATASET DESCRIPTION
The main working principle of the project is that the captured video data is converted into images for detection and recognition. If a student's face is recognized, attendance is marked for that student; otherwise the student is marked absent in the database.
Face Detection
Face detection is a function that identifies a person's face in a digital image. The system identifies the face of a person present in an image or video. To determine which photos or videos contain a face (or several), the full structure of the face must be seen. Human faces share similar features such as the eyes, nose, forehead, mouth, and chin, so the purpose of face detection is to find the position and size of a face in an image. The detected face is then used in a face recognition algorithm. One of the important problems in computer vision is the automatic detection of objects in images without human intervention, and face detection is such a problem applied to human faces. Human faces may differ in some ways, but in general there are certain characteristics associated with every face. Although there are many different face detection algorithms, the Viola-Jones algorithm is one of the oldest methods still in use today, and it is the one used in this work. Face detection is usually the first step in many face-related technologies such as face recognition or face authentication, but it also has very useful applications on its own. Perhaps the most successful application of face detection is photography: when you take a picture of a friend, the digital camera's built-in face detection algorithm detects the position of the face and adjusts the focus accordingly.
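A minimal face detection sketch using OpenCV's bundled frontal-face Haar cascade, assuming an image file as input; the largest_face helper is an illustrative addition for picking the primary (closest) face.

```python
def largest_face(boxes):
    """Pick the biggest (x, y, w, h) box: the face nearest the camera."""
    return max(boxes, key=lambda b: b[2] * b[3]) if boxes else None

def detect_faces(image_path):
    # Deferred import so the helper above stays usable without OpenCV.
    import cv2
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Returns an array of (x, y, w, h) boxes, one per detected face.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```

The boxes returned by detectMultiScale give exactly the position and size of each face described above, and each cropped region can then be passed to the recognition stage.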
Feature Extraction
In this step, features are extracted from the detected face. In LBPH, the local binary image of the face is computed first, and a histogram is generated for face recognition; this creates a template, a data set that represents the unique characteristics of a detected face. Having cropped the face from the image, we extract the features from it. Face embeddings can also be used to extract facial features: a neural network takes an image of a human face as input and outputs a vector representing the most important facial features. In machine learning this vector is called an embedding, so this vector is called a face embedding.
Face Recognition
Face recognition allows you to uniquely identify and verify a person's face by comparing and analysing biometric data. A facial recognition system is an application used to identify or verify an identity in a digital image. At this stage, the algorithm has already been trained, and each generated histogram represents one image in the training data set. Given an input image, the same steps are performed on the new image to create a histogram that represents it. To find the image that matches the input image, we only need to compare the two histograms and return the image with the closest histogram. Different approaches can be used to compare histograms (that is, to compute the distance between two histograms), for example the Euclidean distance, chi-square, or absolute value. Using the Euclidean distance, the distance between two histograms H1 and H2 with n bins is D = sqrt(sum_i (H1_i - H2_i)^2). We can then use a threshold and this 'confidence' value to automatically estimate whether the algorithm has correctly recognized the image: recognition is considered successful if the confidence is lower than the defined threshold.
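The histogram comparison step can be sketched as follows, using the Euclidean distance and a confidence threshold as described; the reference histograms and the threshold value are toy examples.

```python
import math

def euclidean_distance(h1, h2):
    """D = sqrt(sum_i (h1_i - h2_i)^2) for two histograms of equal length."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def best_match(query, references, threshold):
    """Return (label, distance) of the closest stored histogram, or
    ('unknown', distance) when even the best match exceeds the threshold."""
    label, dist = min(
        ((name, euclidean_distance(query, hist)) for name, hist in references),
        key=lambda pair: pair[1])
    return (label, dist) if dist < threshold else ("unknown", dist)

refs = [("Alice", [4, 0, 2]), ("Bob", [0, 5, 1])]
print(best_match([4, 1, 2], refs, threshold=3.0))  # ('Alice', 1.0)
```

A lower distance means higher confidence; raising the threshold makes the system more permissive but increases the risk of false matches.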
4.9 MODEL ARCHITECTURE
4.10 EVALUATION METRICS
Coding allows humans to communicate with these devices. Modern technology such as traffic
lights, calculators, smart TVs, and cars use internal coding systems. Since computers do not
communicate like humans, coding acts as a translator. Code converts human input into
numerical sequences that computers understand.
Coding computers means programming them. Coding, sometimes called computer
programming, is the process of how we communicate with computers. Our cell phones, laptops,
and tablets need code to work. Coding helps us communicate with our devices. Without coding,
smart TVs and traffic lights wouldn’t operate. We wouldn’t be able to find our favourite
podcasts or stream a movie on television. Why? Because they all run on code.
OpenCV
OpenCV is an open-source machine learning and computer vision library. It is free and cross-platform, and was released in 1999 by Intel to promote CPU-intensive applications. It is developed in C++, provides bindings for the Java and Python programming languages, and runs on various operating systems such as Linux, Windows, and OS X. It focuses on video capture and on image processing and analysis, and includes face detection and object detection. OpenCV can be used to read and write images, to capture and save video, and to perform detection of features such as faces and cars. The library is used by Yahoo, Google, Microsoft, Intel, and many other well-known companies. In early face recognition work, a person had to manually and accurately determine the coordinates of facial features such as the centre of the pupil, the inner and outer corners of the eyes, and the widow's peak of the hairline. These coordinates were used to calculate 20 distances, including the width of the mouth and eyes. A human could process about 40 images per hour, building a database of calculated distances; the computer then automatically compared the distances in each photo, calculated the differences, and returned the closest records as possible matches. The face detection algorithm used here performs edge and line detection as proposed by Viola and Jones in their 2001 research paper "Rapid Object Detection using a Boosted Cascade of Simple Features". The algorithm is trained on many positive images (faces) and many negative images (non-faces).
This section describes how LBPH is used for face recognition. First, a data set for the images
is collected and each image is tagged with a unique ID. The image is split into an 8X8 grid and
converted to grayscale. A 3X3 matrix of each pixel containing the intensity (0-255) is extracted
from the image. The central value threshold of this matrix is taken, which determines the
adjacent values of the matrix. Each adjacent value is compared to the central value. Set to 1 if
the adjacency value is greater than or equal to the threshold and set to 0 if the adjacency value
is less than the threshold. The matrix then contains only binary values, and a decimal value is
calculated using the following formula:

LBP(x_c, y_c) = Σ_{n=0..7} S(i_n − i_c) · 2^n

In the above formula, n runs over the 8 neighbours of the center pixel, and i_c and i_n are the
gray-level values of the center pixel and of the neighbouring pixels, respectively. S(x) is equal
to 1 if x is greater than or equal to 0 (the neighbour meets the threshold), and S(x) is 0 if x
is less than 0.
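A minimal, pure-Python sketch of this per-pixel computation follows; the clockwise neighbour ordering below is an assumption for illustration, since different implementations enumerate the 8 neighbours in different orders:

```python
def lbp_value(patch):
    """Compute the LBP code of one pixel from its 3x3 neighbourhood."""
    center = patch[1][1]
    # Neighbours taken clockwise starting from the top-left pixel.
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    # S(x) = 1 when neighbour >= centre, else 0; weight each bit by 2^n.
    return sum((1 if n >= center else 0) << i for i, n in enumerate(neighbours))

patch = [[90, 80, 60],
         [70, 50, 40],
         [30, 20, 10]]
print(lbp_value(patch))  # 135
```

Here the three neighbours above the centre and the one to its left pass the threshold of 50, giving bits at positions 0, 1, 2, and 7, i.e. 1 + 2 + 4 + 128 = 135.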
Algorithm
Learning Objectives
1. Understand the intuition behind the LBPH (Local Binary Patterns Histograms)
algorithm for face recognition, which involves analysing pixel patterns and creating
histograms for image representation.
2. Learn about the representation of images using pixels and matrices, along with the
basics of images, pixels, and colour channels.
3. Explore the concept of histograms in statistics and their application in the LBPH
algorithm for counting colour occurrences in image squares.
4. Gain insights into implementing the LBPH algorithm for face recognition using Python
and OpenCV, including data gathering, cleaning, model training, and face recognition.
5. Understand the process of testing the LBPH face recognition model on test images and
interpreting the model predictions and expected outputs.
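The histogram step of the LBPH pipeline described above can be sketched in NumPy as follows; the 8×8 grid and 256 bins follow the earlier description, and the input array of LBP codes is randomly generated purely for illustration:

```python
import numpy as np

def grid_histograms(lbp_image, grid=(8, 8), bins=256):
    """Split an LBP-coded image into a grid and concatenate per-cell histograms."""
    h, w = lbp_image.shape
    ch, cw = h // grid[0], w // grid[1]
    hists = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            cell = lbp_image[gy * ch:(gy + 1) * ch, gx * cw:(gx + 1) * cw]
            hist, _ = np.histogram(cell, bins=bins, range=(0, bins))
            hists.append(hist)
    return np.concatenate(hists)

# Hypothetical 64x64 image of LBP codes (values 0-255).
codes = np.random.default_rng(0).integers(0, 256, size=(64, 64))
descriptor = grid_histograms(codes)
print(descriptor.shape)  # (16384,) = 8*8 cells x 256 bins
```

Recognition then reduces to comparing such concatenated histograms (for example with a chi-square or Euclidean distance) against those stored for each enrolled student.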
Images & Pixels
All images are represented in matrix format, composed of rows and columns. The basic
component of an image is the pixel: an image is made up of a set of pixels, each of which is a
small square, and placing them side by side forms the complete image. A single pixel is the
smallest possible unit of information in an image. For every image, the value of each pixel
channel ranges between 0 and 255.
For a 32×32 image, multiplying 32 by 32 gives 1024, which is the total number of pixels in the
image. In a colour image, each pixel is composed of three values, R, G, and B, which correspond
to the basic colours red, green, and blue. The combination of these three basic colours creates
all the other colours in the image, so we conclude that a single pixel has three channels, one
channel for each of the basic colours.
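These points can be checked directly with NumPy; the BGR channel order below is OpenCV's convention, and the blue fill is just an example:

```python
import numpy as np

# A 32 x 32 colour image: 32 rows, 32 columns, 3 channels per pixel.
# (OpenCV stores the channels in B, G, R order.)
image = np.zeros((32, 32, 3), dtype=np.uint8)
image[:, :] = (255, 0, 0)  # fill every pixel with pure blue (in BGR)

print(image.shape)                      # (32, 32, 3)
print(image.shape[0] * image.shape[1])  # 1024 pixels in total
print(image[0, 0])                      # channel values of the top-left pixel
```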
Haar Cascade Algorithm
Paul Viola and Michael Jones proposed the Haar cascade algorithm, which is widely used
for object detection. The algorithm is based on a machine learning approach in which a large
number of images, both positive and negative, are used to train the classifier.
o Positive Images: images that contain the object we want our classifier to
identify.
o Negative Images: images that contain anything else and do not contain the object to
be detected.
The first real-time face detector also used the Haar classifiers introduced here. A Haar
classifier, or Haar cascade classifier, is a machine learning program that finds objects in
pictures and videos.
1. Haar Feature Gathering:
Gathering the Haar features is the first stage. A Haar feature is nothing but a calculation
performed on adjacent rectangular regions at a certain location in a detection window. The
calculation mainly consists of summing the pixel intensities in each region and then computing
the difference between those sums. This is arduous in the case of large images, which is why
integral images are used to reduce the number of operations.
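To make the integral-image idea concrete, here is a small NumPy sketch; the two-rectangle feature at the end is an illustrative example, not one of the classifier's actual trained features:

```python
import numpy as np

def integral_image(img):
    """Each entry holds the sum of all pixels above and to the left (inclusive)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def region_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] from at most four table lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
# A two-rectangle Haar feature: left half minus right half of the top two rows.
feature = region_sum(ii, 0, 0, 1, 1) - region_sum(ii, 0, 2, 1, 3)
print(feature)  # -8
```

However large the rectangle, its pixel sum costs only a handful of lookups once the integral image is built, which is what makes evaluating thousands of Haar features per window feasible.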
2. Integral Image Creation:
Creating integral images reduces the calculation. Instead of summing over every pixel, sub-
rectangles are created, and array references to those sub-rectangles are used to calculate the
Haar features. Only the features of an object itself are important, and in the case of object
detection most of the remaining Haar features are irrelevant. But how do we choose, from
among the hundreds of thousands of Haar features, the ones that best reflect an object? Here
Adaboost enters the picture.
3. Adaboost Training:
Adaboost training combines "weak classifiers" to produce a "strong classifier" that the object
detection method can use. This essentially consists of selecting useful features and teaching
classifiers how to use them. Weak learners are created by moving a window across the input
image and computing the Haar features for each part of the image. Each computed difference
is compared to a learned threshold that separates objects from non-objects. Individually these
are "weak classifiers", and an accurate strong classifier needs many Haar features.
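The combination of weak classifiers into a strong one can be illustrated with a toy, pure-Python Adaboost over one-dimensional values, where decision stumps stand in for the Haar-feature classifiers; the data and thresholds below are made up purely for illustration:

```python
import math

# Toy 1-D data: feature values and labels (+1 = object, -1 = non-object).
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
y = [1, 1, 1, -1, -1, -1, 1, 1]

def stump(threshold, polarity):
    """Weak classifier: predicts +1 on one side of the threshold."""
    return lambda x: polarity * (1 if x >= threshold else -1)

def adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n                      # uniform sample weights
    ensemble = []                          # list of (alpha, weak classifier)
    candidates = [stump(t, p) for t in X for p in (1, -1)]
    for _ in range(rounds):
        # Pick the stump with the lowest weighted error.
        h, err = min(
            ((c, sum(wi for wi, xi, yi in zip(w, X, y) if c(xi) != yi))
             for c in candidates),
            key=lambda pair: pair[1])
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)  # vote strength of this stump
        ensemble.append((alpha, h))
        # Re-weight: boost the misclassified samples, shrink the correct ones.
        w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    def strong(x):
        return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
    return strong

strong = adaboost(X, y)
print([strong(x) for x in X])
```

No single stump can separate these labels, but the weighted vote of a few stumps can, which is exactly the effect the cascade exploits with its Haar-feature weak learners.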