
BAHIR DAR UNIVERSITY

BAHIR DAR INSTITUTE OF TECHNOLOGY

FACULTY OF ELECTRICAL AND COMPUTER ENGINEERING

SEMESTER PROJECT

Submitted in partial fulfillment of the requirements for the First Degree of BACHELOR OF SCIENCE in Electrical and Computer Engineering (Electronics and Communication Focus)

Arduino-Based Face Detection and Tracking System


BY

1. Burk Kindie ………………………………BDU0601166UR

2. Ermias Wasihun …………………………BDU0600650UR

3. Shambel Zienawi …………………………BDU0601472UR

4. Shegaw Zigale ……………………………BDU0601475UR

January 3, 2018

Approval
This final project has been approved in partial fulfillment of the requirements for the Degree of
Bachelor of Science in Electrical and Computer Engineering with specialization in Electronics
and Communication Engineering.

Faculty of Electrical and Computer Engineering.

Project Adviser: Signature:

1. Mr. Lijaddis G. ________________

Team Members:

1. Burk Kindie ________________

2. Ermias Wasihun ________________

3. Shambel Zienawi ________________

4. Shegaw Zigale ________________

Abstract

This project presents an application for automatic face detection and tracking on video streams from surveillance cameras in public or commercial places. In many situations it is useful to detect where people are, e.g. in exhibits, commercial malls, and public buildings. A prototype is designed to work with web cameras for face detection and tracking, built on the open-source Arduino platform. The system is based on the AdaBoost algorithm and extracts Haar-like face features. It can be used for security purposes to record visitors' faces as well as to detect and track them. A program that detects people's faces and tracks them from a web camera feed is developed in MATLAB. Security is an area that technology entered long ago, and modern security can hardly be imagined without it: in banks, corporate buildings, educational institutes, and elsewhere, vision-based systems are used to monitor the activities taking place. Face detection and tracking using Arduino on a video stream is a system that can be used for security purposes. This project combines different hardware and software to capture frames from a camera, detect faces, save the detected faces, and track them.

Acknowledgment

First of all, we are grateful to the Almighty God for enabling us to complete this project work. We also wish to express our sincere gratitude to Mr. Lijaddis G. for his expert, sincere and valuable guidance and encouragement. We are thankful for his aspiring guidance, invaluably constructive criticism and friendly advice during the project work, and we are sincerely grateful for his truthful and illuminating views on a number of issues related to the project.

Finally, we take this opportunity to sincerely thank all the faculty members of Electrical
and Computer Engineering for their help and encouragement in our educational endeavors.

Contents

Approval
Abstract
Acknowledgment
List of Figures

CHAPTER ONE: INTRODUCTION
1.1 Background
1.2 Problem Statement
1.3 Objective
1.3.1 General Objective
1.3.2 Specific Objective
1.4 Methodology
1.5 Contributions of the Project
1.6 Organization of the Project

CHAPTER TWO: REVIEW OF LITERATURE
2.1 Overview of Face Detection and Face Tracking
2.1.1 Face Detection
2.2 Face Detection Algorithms
2.2.1 Eigenfaces
2.2.2 Skin Color Detection
2.2.3 Haar-Based Feature Detection
2.3 Face Tracking
2.4 System Components and Operations

CHAPTER THREE: SYSTEM DESIGN AND ANALYSIS
3.1 System Model

CHAPTER FOUR: SIMULATION RESULTS AND DISCUSSION
4.1 Face Detection and Tracking

CHAPTER FIVE: CONCLUSIONS AND FUTURE WORK
5.1 Conclusion
5.2 Future Work

References
Appendix

List of Figures

Figure 1.1 Same face under different illumination variation
Figure 1.2 Same face with occlusion
Figure 1.3 Same face with different facial features
Figure 1.4 Same face with different pose variation
Figure 1.5 Diagram of methodology
Figure 2.1 Shapes for features
Figure 2.2 AdaBoost algorithm
Figure 2.3 Image processing stages
Figure 2.4 Arduino UNO board
Figure 2.5 Camera sensor (webcam)
Figure 2.6 Servo motor
Figure 2.7 DC motor diagram
Figure 2.8 Teardown of a servo motor
Figure 2.9 Jumper wire
Figure 2.10 Breadboard
Figure 3.1 Block diagram of the face detection and tracking system model
Figure 3.2 Experimental setup
Figure 3.3 Complete hardware setup of face detection and tracking
Figure 3.4 Output of the algorithm showing face detection
Figure 4.1 Face detected
Figure 4.2 Face tracking

List of Acronyms

2D ……………………… Two-Dimensional
3D ……………………… Three-Dimensional
AdaBoost ……………… Adaptive Boosting
GND …………………… Ground
GPU …………………… Graphics Processing Unit
OpenCV ………………… Open Source Computer Vision
DC ……………………… Direct Current
LED …………………… Light-Emitting Diode
I/O ……………………… Input/Output
PWM …………………… Pulse Width Modulation
TX ……………………… Transmit
RX ……………………… Receive

CHAPTER ONE

INTRODUCTION

1.1 Background

Detecting and tracking facial features in image sequences is an important and fundamental problem in computer vision. This area of research has many applications, including face identification systems, model-based coding, gaze detection, human-computer interaction, and teleconferencing.

This paper discusses Arduino-based face detection and tracking on video streams from surveillance cameras in public or commercial places. In many situations it is useful to detect where people are, e.g. in exhibits, commercial malls, and public buildings. The prototype is designed to work with web cameras for face detection and tracking, built on the open-source Arduino platform. The common face detection methods are the knowledge-based approach, the statistics-based approach, and integration approaches that combine different features or methods. The knowledge-based approach can, to some extent, detect faces in images with complex backgrounds and achieves high detection speed, but it needs more integrated features to further improve its adaptability. The statistics-based approach detects faces by passing every candidate image region through a classifier: the face region is treated as one class of patterns, and a large number of face and non-face training samples are used to construct the classifier. The system presented here is based on the AdaBoost algorithm and extracts Haar-like face features. It can be used for security purposes to record visitors' faces as well as to detect and track them. A program that detects people's faces and tracks them from a web camera is developed in MATLAB. Arduino is an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software.

The software consists of a standard programming language compiler and the boot loader that
runs on the board. Arduino can sense the environment by receiving input from a variety of
sensors and can affect its surroundings by controlling lights, motors, and other actuators.

The microcontroller on the board is programmed using the Arduino programming language and
the Arduino development environment. Arduino projects can be stand-alone or they can
communicate with software running on a computer.

The Processing language is a text-based programming language specifically designed to generate and modify images. Processing strives for a balance between clarity and advanced features. It facilitates teaching many computer graphics and interaction techniques, including vector/raster drawing, image processing, color models, mouse and keyboard events, network communication, and object-oriented programming. Libraries easily extend Processing's ability to generate sound, send and receive data in diverse formats, and import/export 2D and 3D file formats. Processing is for writing software that makes images, animations, and interactions. It is a dialect of the Java programming language; the syntax is almost identical, but Processing adds custom features related to graphics and interaction.

1.2 Problem Statement

Face detection is important for the interpretation of facial expressions in applications such as intelligent man-machine interfaces and communication, intelligent visual surveillance, and real-time animation from live motion images. Face detection is the process of analyzing an input image to determine the number, location, size, position and orientation of faces. It is the basis for face tracking and face recognition, and its results directly affect the process and accuracy of face recognition. The main challenges addressed by this project are the following.

Storage and image quality: large amounts of storage are needed, and good-quality images are required.

Illumination variation: images of the same face look different because of changes in lighting. Results worsen when non-uniform lighting changes the apparent color of the face.

Figure 1.1 Same face under different illumination variation

Occlusion: face detection must deal not only with different faces but also with arbitrary occluding objects; hairstyles and sunglasses are examples of occlusion in face detection. For global features, occlusion is one of the major difficulties in face detection. Faces may be partially occluded by other objects; in an image of a group of people, some faces may partially occlude others.

Figure 1.2 Same face with occlusion

Facial expression: the appearance of faces is directly affected by a person's facial expression. The presence or absence of structural components: facial features such as beards, mustaches, and glasses may or may not be present, and there is a great deal of variation among these components, including shape, color, and size.

Figure 1.3 Same face with different facial features

Pose variation: pose varies because people do not always face the camera, and the same face viewed at different angles can give different outputs, making this less accurate than other biometrics.

Figure 1.4 Same face with different pose variation

1.3 Objective
1.3.1 General objective

The aim of this project is to detect a face and to track the movements of the detected face, using Arduino, for security purposes in real time.

1.3.2 Specific objective


• To get a much deeper understanding of face detection and object tracking algorithms.

• To increase the security level and prevent unauthorized access at the door.

• To avoid the economic cost of a guard.

• Entertainment and human-robot/computer interaction.

1.4 Methodology

The main goal of this project is to design and model face detection and tracking using Arduino for security purposes. The project is based on study and simulation using the scientific computing software MATLAB: for the simulation analysis, we first identify the parameters of the system and then generate the MATLAB code.

For an effective and efficient analysis, study, and design of the Arduino-based face detection and tracking system, the following methodologies will be followed.

Data collection

For any dynamic system design, gathering the data necessary for analyzing the overall components of the system is the first step. The data will be gathered from the internet and books, from literature reviews, and by interviewing specialists in the area.

Data analysis

After sufficient information has been gathered from the relevant data sources, arranging and analyzing the data is the next step in designing the system, because well-organized and well-analyzed data make it easier to assess the feasibility of the system and to interpret and extrapolate to further processes.

System Modeling

System modeling is the next step after data analysis. The modeling process takes place in MATLAB.

System testing

Testing is the method of verifying the proper functioning of each component of the modeled system. This stage is considered the last step.

Document preparation

Based on the analyzed data and results, a well-prepared project document will be organized. The document will be written so as to present the findings of this study in a way that anybody can understand.

Figure 1.5 Diagram of methodology

1.5 Contributions of the Project

Face detection and tracking is a computer-based technology for determining the location of a face in an image regardless of its size, color, or illumination, ignoring all other constituents of the image. Face detection makes it possible to use facial images to authenticate a person into a secure system, for criminal identification, for passport verification, and so on. Face images are projected onto a face space that best encodes the variation among known face images; this face space is a collection of eigenfaces. The project develops a system that can automatically detect and track people's faces: a system which detects and matches a face with distinct features and generates and sends a control signal to the hardware according to the face recognized.

The face detection algorithm has been developed on the MATLAB platform by combining several image processing algorithms. Using the theory of image acquisition and the fundamentals of digital image processing, the face of a user is detected in real time. Using face detection and serial data communication, the state of an Arduino board pin is controlled. The MATLAB program implements a real-time computer vision system for face detection and tracking, using a camera as the image acquisition hardware. The Arduino program interfaces the hardware prototype with the control signals generated by real-time face detection and tracking. The resulting face detection and tracking system can be used for security purposes. This project combines different hardware and software to capture frames from a camera, detect faces, save the detected faces, and track them.
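As a sketch of the control-signal step described above, the following Python fragment maps the horizontal position of a detected face to a pan-servo angle. This is illustrative only: the function and parameter names (`face_to_pan_angle`, `dead_zone`, the default step size) are our own assumptions, not the project's actual code.

```python
def face_to_pan_angle(face_cx, frame_width, current_angle, step=2, dead_zone=30):
    """Nudge a pan servo (0-180 degrees) toward the detected face center.

    face_cx: x-coordinate of the face center in the frame, in pixels.
    dead_zone: half-width of the central band in which no movement is needed.
    All names and default values here are illustrative assumptions.
    """
    center = frame_width // 2
    if face_cx < center - dead_zone:
        # Face is left of the frame center: pan toward it.
        current_angle = min(180, current_angle + step)
    elif face_cx > center + dead_zone:
        # Face is right of the frame center: pan the other way.
        current_angle = max(0, current_angle - step)
    return current_angle
```

On the PC side, the resulting angle would be written to the Arduino over a serial link (e.g. one byte per update), and the Arduino sketch would forward it to the servo. Whether increasing the angle pans left or right depends on how the servo is mounted, so the sign convention above is also an assumption.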

1.6 Organizations of the Project


This semester project is written in partial fulfillment of the requirements for a degree in electronics and communication engineering. The broad objective of this project is to study face detection and tracking based on Arduino. The paper is organized in five chapters as follows. Chapter one presents the background, contributions of the project, problem statement, objectives, and the methodology used. Chapter two provides the review of literature: an overview of face detection and face tracking, face detection algorithms, and system components and operations. Chapter three is concerned with system design and analysis. Chapter four presents results and discussion, and chapter five draws conclusions and future work.

CHAPTER TWO

REVIEW OF LITERATURES

2.1 Overview of Face Detection and Face Tracking

2.1.1 Face Detection
Face detection is the first and most crucial step, and it is necessary for an efficient face detection system to distinguish the face region from the non-face region of the person of interest. However, detecting the person of interest is difficult because of many variables, such as skin color, location, orientation, pose, facial expression, illumination, occlusion and so on [1, 2]. According to [3], there are two approaches to face detection: the feature-based approach and the image-based approach. Examples of the feature-based approach include skin color, face geometry, motion analysis, snakes, and so on. The image-based approach addresses face detection as a general recognition problem, with face and non-face prototype classes.

In fact, the face detection area draws on a wide range of prior knowledge, and methods can be classified as follows.

Knowledge-based method

This method uses human knowledge to find a face pattern in test images. Based on the nature of human faces, the algorithm scans the image in top-to-bottom, left-to-right order to find facial features; for instance, a face should include eyes and a mouth. It is usually based on rules, especially human-coded rules, that encode information about the human face such as the eyes and/or mouth.

Feature-invariant method

This is a bottom-up approach used to find facial features (eyebrows, nose) even in the presence of occlusion and perspective variation, so it is difficult to find a face in real time using this method. Statistical methods have been developed to determine faces. Facial features of the human face include shape, texture, and skin. The approach relies on structural features that do not depend on pose, lighting or rotation of the face, for example skin color.

Template matching method

This method uses several templates to identify the face class and extract facial features. Rules are pre-defined to decide whether there is a face in the image. The method is based on comparing the input image with preselected template images to find the correlation between them.

Appearance-based method

The appearance-based method is similar to template matching, but the models are learned from a pre-labeled training set; one of the essential algorithms in this category is the eigenface method. Neural networks, Bayes classifiers, support vector machines, AdaBoost-based face detection, Fisher linear discriminant, statistical classifiers and hidden Markov models are all appearance-based methods.

2.2 Face detection algorithms


2.2.1 Eigenfaces

The eigenface detection method is one of the earliest methods for detecting and recognizing faces. Turk and Pentland implemented this method, and the overall process is described as follows:

• Begin with a large image set of faces.

• Subtract the mean of all faces from every face.

• Calculate the covariance matrix of all the faces.

• Calculate the eigenvalues and corresponding eigenvectors of the covariance matrix. The eigenvectors with the largest eigenvalues are chosen as a basis for face representation; these eigenvectors are called "eigenfaces".

When trying to determine whether a detection window contains a face, the image in the detection window is transformed to the eigenface basis. Since the eigenface basis is incomplete, image information is lost in the transformation. The difference, or error, between the detection window and its eigenface representation is then thresholded to classify the detection window: if the difference is small, there is a large probability that the detection window represents a face. While this method has been used successfully, one disadvantage is that it is computationally heavy to express every detection window in the eigenface basis.
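The steps above can be sketched in a few lines of Python/NumPy. This is a minimal illustration, not the project's MATLAB code; it uses the SVD of the mean-subtracted data instead of forming the covariance matrix explicitly, which yields the same eigenvectors ordered by decreasing eigenvalue.

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_samples, n_pixels) array, one flattened face image per row.
    Returns the mean face and the top-k eigenfaces (rows of `basis`)."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Right singular vectors of the centered data are the eigenvectors
    # of the covariance matrix, ordered by decreasing eigenvalue.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def reconstruction_error(image, mean, basis):
    """Distance between a window and its projection onto the eigenface
    subspace; a small error suggests the window contains a face."""
    x = image - mean
    projection = basis.T @ (basis @ x)
    return np.linalg.norm(x - projection)
```

A window is then classified as a face when `reconstruction_error` falls below a threshold tuned on the training set.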

2.2.2 Skin color detection

In normal lighting conditions, all human skin tones share certain properties in color space that can be used for skin tone detection. Human faces can be found by detecting which pixels in an image correspond to human skin; further examining and labeling skin regions can lead a program to conclude that a given region is a human face. When different skin tones are plotted in color spaces such as RGB, YUV and HSL, they span certain volumes of those spaces. These volumes can be enclosed by bounding planes, and color values can then be tested to see whether they lie within the skin tone volume. The main advantage of this method is that it is well suited to GPU implementation: a pixel shader can efficiently test whether a pixel color lies within a skin tone volume, since this type of per-pixel evaluation is what GPUs are designed for. However, skin detection methods have a few disadvantages:

• For accuracy, normalized color images are required, so each image has to be normalized before use.

• Segmented skin regions can contain noise or undesired information.
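As an illustration, one commonly cited explicit RGB rule for skin pixels can be written directly. The exact thresholds vary between papers, so treat the values below as an assumption rather than a definitive rule:

```python
def is_skin_rgb(r, g, b):
    """Classify one pixel as skin using an explicit RGB rule
    (thresholds for roughly uniform daylight illumination; these
    are commonly quoted values and may need tuning per camera)."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15   # enough color spread
            and abs(r - g) > 15                     # red clearly above green
            and r > g and r > b)                    # red is dominant
```

Applying this test to every pixel gives a binary skin mask, which is then typically cleaned up (e.g. with morphological operations) before labeling candidate face regions.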

2.2.3 Haar-based feature detection


Haar-based feature detection methods are based on comparing the sums of intensities in adjacent regions inside a detection window, called the template or mask. Usually, several features must be tested to determine whether what is inside the window is a face. Features exploit the fact that objects often have general properties: for faces, for example, the region around the eyes is darker than the forehead, and the two eye regions are similar for every face. In the implementation of face detection, Xi contains a huge number of face features, and some of the features with low error ϵi are selected to train the strong classifier; with the AdaBoost algorithm this can be done automatically [4]. In each iteration, ϵi is calculated for each feature in Xi and the feature with the lowest error is chosen. In this way, face detection can be made very fast. As the next part shows, there are many Haar-like features, so it is hard to make use of all of them. Face features are extracted from the input images and are used to train the classifiers and modify the weights, as mentioned in [4]. Haar-like features are rectangle features whose value is the sum of the pixels in the black region minus the sum of the pixels in the white region [7], as shown in the figure below.

There are many types of basic shapes for features, as the next figure shows:

Figure 2.1 Shapes for features

By varying the position and size of these features within the detection window, an over-complete set of different features is constructed. Creating the detector then consists of finding the best combination of features to separate faces from non-faces. The advantage of this method is that the detection process consists mainly of additions and threshold comparisons, which makes detection fast and well suited to our embedded use case; the main problem is that the accuracy of the face detector depends heavily on the database used for training. The main disadvantage is the need to calculate sums of intensities for each feature evaluation, which requires many lookups in the detection window, depending on the area covered by the feature.
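The sum-lookup trick that keeps this fast is the integral image: each rectangle sum costs only four array references, regardless of rectangle size. A minimal sketch follows (illustrative Python, not the project code; the sign convention follows the text above, black region minus white region, and which half of the feature is "black" is an assumption for illustration):

```python
import numpy as np

def integral_image(img):
    """ii[y, x] holds the sum of img[:y, :x]; padded with a zero row/column."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] using four lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_vertical_feature(ii, y, x, h, w):
    """Vertical two-rectangle Haar-like feature:
    (sum of the bottom half, taken as 'black') minus
    (sum of the top half, taken as 'white')."""
    half = h // 2
    return rect_sum(ii, y + half, x, half, w) - rect_sum(ii, y, x, half, w)
```

Because `rect_sum` is constant-time, evaluating a feature at any position or scale costs the same, which is what makes exhaustive scanning of detection windows feasible.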

ADABOOST

In 1995, Freund and Schapire introduced the AdaBoost algorithm [8], which has since been widely used in pattern recognition. AdaBoost is the "adaptive boosting" algorithm; the goal of boosting is to improve the accuracy of any given learning algorithm. First, a weak classifier with accuracy on the training set greater than chance is created; then new component classifiers are added to form an ensemble whose joint decision rule has arbitrarily high accuracy on the training set. In AdaBoost, each training pattern receives a weight that determines its probability of being selected for the training set of an individual component classifier. If a training pattern is accurately classified, its chance of being used again in a subsequent component classifier is reduced; conversely, if the pattern is not accurately classified, its chance of being used again is raised. In this way, AdaBoost focuses on the difficult patterns.

The AdaBoost face detection algorithm is a combination of several weak classifiers. It picks a few thousand features and assigns a weight to each one based on a set of training images. The aim of AdaBoost is to assign each weak classifier its best weight, and results have shown that this generalizes well. Numerous weak classifiers are combined to construct a strong classifier: a weak classifier handles only a few features, but many weak classifiers together increase overall performance, whereas a single strong classifier could not process a large number of input features in real time.

Figure 2.2 AdaBoost algorithm


The weak classifier accepts or rejects a single rectangular feature using an optimal threshold classification function. This can be written as

h_j(x) = 1 if p_j f_j(x) < p_j θ_j, and 0 otherwise,

where h_j(x) is the weak classifier computed from feature f_j, threshold θ_j and parity p_j.
The threshold of each classifier can be adjusted so that the false negative rate tends toward zero. The key point of the algorithm is that classifiers which are simple individually achieve a high detection rate collectively. This boosting idea makes the learning process simple and efficient.
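The boosting loop described above can be sketched as follows. This is an illustrative Python sketch using scalar stand-ins for Haar feature responses; a real detector would evaluate rectangle features over integral images, and the data and helper names here are invented for demonstration.

```python
import math

def train_stump(xs, ys, w):
    """Pick the threshold/parity stump with the lowest weighted error."""
    best = None
    for theta in sorted(set(xs)):
        for parity in (1, -1):
            preds = [1 if parity * x < parity * theta else 0 for x in xs]
            err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
            if best is None or err < best[0]:
                best = (err, theta, parity)
    return best

def adaboost(xs, ys, rounds=3):
    n = len(xs)
    w = [1.0 / n] * n                      # uniform initial weights
    ensemble = []
    for _ in range(rounds):
        err, theta, parity = train_stump(xs, ys, w)
        err = max(err, 1e-10)              # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, theta, parity))
        # raise the weights of misclassified samples, lower the rest
        for i in range(n):
            pred = 1 if parity * xs[i] < parity * theta else 0
            w[i] *= math.exp(alpha if pred != ys[i] else -alpha)
        total = sum(w)
        w = [wi / total for wi in w]       # renormalise to a distribution
    return ensemble

def classify(ensemble, x):
    """Weighted vote of the weak classifiers."""
    score = sum(a * (1 if p * x < p * t else -1) for a, t, p in ensemble)
    return 1 if score > 0 else 0
```

The weight update is the "focus on difficult patterns" step described in the text: wrongly classified samples gain weight, so the next stump must pay attention to them.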

2.2 Face Tracking

Real time face tracking refers to the task of locating human faces in a video stream and tracking
the detected faces. Nowadays, there are many real world applications of face detection and other
image processing techniques. Face detection and tracking are important in video content analysis
since the most important objects in most videos are human beings. Research on face tracking and animation techniques has advanced due to its wide range of applications in security, the entertainment industry, gaming, psychological facial-expression analysis, and human-computer interaction. Recent advances in face video processing and compression have made face-to-face communication practical in real-world applications. However, higher bandwidth is still in high demand due to increasingly intensive communication. Model-based low-bit-rate transmission of high-quality video offers great potential to mitigate the problem posed by limited communication resources. However, after a decade's effort, robust and realistic real-time face tracking and generation still pose a big challenge. The difficulty lies in a number of issues, including real-time face-feature tracking under a variety of imaging conditions (such as lighting variation, pose change, self-occlusion, and the deformation of multiple non-rigid features) and real-time realistic face modeling using a very limited number of feature parameters.

The appearance-driven approach requires a significant amount of training data to enumerate all the possible appearances of features. The model-based approach assumes that knowledge of a specific object is available, but its requirement of frontal facial views and constant illumination limits its application. All of the above tracking methods show certain limitations for accurate face-feature tracking under complex imaging conditions. Different types of facial features, such as skin color, edges, feature points, and motion, have been used for face tracking. Skin color has been used to track face motion in the X and Y directions as well as out-of-plane rotation. Because color alone is often too simple to encode the structural knowledge of a face, it is best suited to coarse face tracking.

Image processing stages

Figure 2.3 Stages of image processing

Per the figure above, the first step is image capture. As the name implies, this means obtaining the source image. Segmentation is a crucial stage in image processing: it is the division of the source image into sub-regions of interest, whether by color, by size, open regions, closed regions, etc. It is important to note that the sum of all regions equals the source image. The output of segmentation is very important because it becomes the input of the next stages. Representation is the step in which the segmented image is expressed in terms that the system can interpret. Feature detection and interpretation is the step in which the system examines the output of the segmentation; that is, the system has to determine whether the required object is in the segmented image. For example, suppose the system is searching for a yellow ball, but there is a yellow cube in the segmented image instead. If the image was segmented by color, the cube is still a region of interest, but with feature detection and interpretation the system will know that it is actually a cube, not a ball.
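The segmentation-by-color stage above can be sketched in a few lines. This illustrative Python sketch uses a tiny hard-coded "image" of RGB tuples and a crude yellow test, both of which are assumptions for demonstration rather than part of the project code; a real system would operate on camera frames.

```python
def segment_by_color(image, is_target):
    """Return a binary mask: 1 where the pixel matches the target color."""
    return [[1 if is_target(px) else 0 for px in row] for row in image]

def is_yellow(px):
    r, g, b = px
    return r > 200 and g > 200 and b < 100   # crude yellow test (assumed thresholds)

image = [
    [(255, 255, 0), (10, 10, 10)],
    [(0, 0, 255), (250, 240, 30)],
]
mask = segment_by_color(image, is_yellow)
# mask == [[1, 0], [0, 1]]
```

The mask is the "sub-region of interest" handed to the later representation and feature-detection stages; those stages would then decide whether the yellow region is a ball or a cube.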

2.3 System Components and Operations


There are several tools used to assist in the face detection system for this project. These include the integrated development environments (IDEs) used for hardware and software design, and the Windows operating system used in completing this project. The IDEs used are MATLAB R2013a and the Arduino IDE. MATLAB is a programming environment for algorithm development, data analysis, visualization, and numerical computation.

The components of a project are important when planning projects because the cost depends on
the price of the components. Therefore, the availability of resources can dictate the development
of a project. During the research portion of a project, several options for components and
software are tested. Components for this project have been selected because of their availability,
low cost, and efficiency of function. The components used in this project are thoroughly
described, and the reasons for selecting these individual components are provided.

The components used for this project are: a PC, preferably running Windows 7 SP1; an Arduino Uno or compatible board plus a power source (5 V DC); two standard servos; a webcam with a USB interface; a breadboard; jumper wires; and hobby wire to tie the pan/tilt servos and webcam together.

Arduino UNO

The Arduino UNO is built around a 28-pin microcontroller; fourteen of those pins can be used for digital I/O, six pins serve as analog inputs, and six of the digital pins also support PWM. The board provides an uncomplicated environment for physical computing: connecting sensors or servo motors to the Arduino UNO is simple. The Arduino UNO provides a built-in LED, a reset button, TX and RX indicators, and an uploading indicator. This project requires a controller that can read serial data from a serial port and can easily work at a low voltage. It is simple to power this controller board from a computer.

Arduino can sense the environment by receiving input from a variety of sensors and can affect its
surroundings by controlling lights, motors, and other actuators. The microcontroller on the board
is programmed using the Arduino programming language and the Arduino development
environment. Arduino projects can be stand-alone or they can communicate with software
running on a computer.

Figure 2.4 Arduino UNO board

Camera Sensor or webcam


A webcam is a hardware camera and input device that connects to a computer and captures either still pictures or motion video of a user. The Logitech Webcam C270 pictured is an example of what a webcam may look like. Today, most webcams are either embedded into the display of a laptop computer or connected to a USB port on the computer. A webcam is a video camera that feeds or streams its image in real time to or through a computer to a computer network. When "captured" by the computer, the video stream may be saved, viewed, or sent on to other networks via systems such as the Internet, for example emailed as an attachment. When sent to a remote location, the video stream may again be saved, viewed, or sent on. Unlike an IP camera (which connects using Ethernet or Wi-Fi), a webcam is generally connected by a USB cable or similar, or built into computer hardware such as laptops.

The term "webcam" may also be used in its original sense of a video camera connected to the web continuously for an indefinite time.

Figure 2.5 Camera sensor (webcam)

Servo Motor

A Tower Pro micro servo motor was selected for the rotation application. It can rotate about 90 degrees in each direction from center, for a total range of about 180 degrees, as explained in the Tower Pro user manual [16]. The Tower Pro servo motor has three wires: orange, red, and brown. The orange wire connects to a digital (PWM) pin, the red wire provides power to the motor, and the brown wire connects to ground. Compared to other servo motors available, this servo motor is efficient and lightweight. The servo motor used in this project is shown in the figure below.

Figure 2.6 Servo motor

Principle of the DC motor
DC motor operation is based on the Lorentz force law:

F = q(E + v × B) (2.1)

where q is the charge, E is the electric field vector, v is the charge velocity vector, and B is the magnetic flux density vector. In a DC motor, the coils and the flux density B of the magnetic field are arranged perpendicular to each other in order to generate torque on the rotor. Hence, in Equation 2.1, E is zero, the vectors v and B can be treated as scalars v and B, and Equation 2.1 simplifies to:

F = qvB (2.2)

To keep the torque in the same direction and hence the motor spinning, commutators are required to reverse the current every half cycle [5].

Figure 2.7 DC motor diagram


In sum, a DC motor has a two-wire connection: a power wire and a ground wire. All drive power is supplied over these two wires. When the DC motor is powered, it starts spinning.

Servo motor model
A servo motor is based on a DC motor. A servo motor has four parts: a normal DC motor, a gear
reduction unit, a position-sensing device (usually a potentiometer), and a control circuit. Its shaft
can be positioned to specific angular positions by sending the servo a coded signal. As long as
the coded signal exists on the input line, the servo will maintain the angular position of the shaft.
As the coded signal changes, the angular position of the shaft changes as well. In practice, servos
are used in radio controlled airplanes to position control surfaces like the elevators and rudders.
They are also used in radio controlled cars, puppets, and of course, robots [5].

The servo motor has control circuits and a potentiometer connected to the output shaft. In the figure below, the potentiometer can be seen on the right side of the circuit board. This potentiometer allows the control circuitry to monitor the current angle of the servo motor. If the shaft is at the correct angle, the motor shuts off. If the circuit finds that the angle is not correct, it turns the motor in the necessary direction until the angle is correct. The output shaft of the servo is capable of rotating about 180 degrees. A standard servo is used to control an angular motion of between 0 and 180 degrees.

Figure 2.8 Teardown of a servo motor

How is the required shaft angle communicated to the servo? A control wire is used to
communicate the angle. The angle is determined by the duration of a pulse that is applied to the
control wire. This is called Pulse Width Modulation. The servo expects to see a pulse every 20
milliseconds. The length of the pulse will determine how far the motor turns.
A 1.5 millisecond pulse, for example, will make the motor turn to the 90 degree position
(often called the neutral position). If the pulse is shorter than 1.5 ms, then the motor will
turn the shaft closer to 0 degrees. If the pulse is longer than 1.5 ms, the shaft turns closer to 180 degrees. The duration of the pulse dictates the angle of the output shaft.
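The mapping above can be sketched as a simple linear interpolation. This is an illustrative Python sketch: the 1.0-2.0 ms endpoints below are common nominal values assumed for demonstration (real servos vary); the 1.5 ms neutral point at 90 degrees comes directly from the text.

```python
def pulse_to_angle(pulse_ms, min_ms=1.0, max_ms=2.0):
    """Map a pulse width in milliseconds to a shaft angle in degrees."""
    pulse_ms = min(max(pulse_ms, min_ms), max_ms)   # clamp to the valid range
    return (pulse_ms - min_ms) / (max_ms - min_ms) * 180.0

print(pulse_to_angle(1.5))   # 90.0, the neutral position
print(pulse_to_angle(1.0))   # 0.0
print(pulse_to_angle(2.0))   # 180.0
```

On the Arduino side, the Servo library performs this conversion internally; the sketch only shows the relation, with the pulse repeated every 20 ms as described above.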

Jumpers
A jump wire is an electrical wire, or a group of them in a cable, with a connector or pin at each end, normally used to interconnect the components of a breadboard or other prototype or test circuit, internally or with other equipment or components, without soldering. Individual jump wires are fitted by inserting their end connectors into the slots provided in a breadboard, the header connector of a circuit board, or a piece of test equipment. Male jumper wires are used to connect the Arduino and the servo motors, as shown in the figure below.

Figure 2.9 Jumper wires

Breadboard
A breadboard is used to make the circuit connections without soldering.

Figure 2.10 Breadboard

CHAPTER THREE

SYSTEM DESIGN AND ANALYSIS


3.1 System model

The AdaBoost algorithm is used in the proposed project because of its simple implementation and effective results compared to other algorithms. To initiate the project, trials were conducted on the integrated webcam of a laptop together with the Arduino. The Haar cascade classifier first detects a face in a provided image to check the accuracy of face detection, and is then applied to the real-time video, which completes the initial task of detecting the face. Serial communication between the Arduino and the computer is used to transmit and receive data. The computer sends precise serial data to the Arduino when it detects a face. The Arduino is programmed to read the serial data received from the computer and to blink the LED when reading it. Specifically, the Arduino must blink the LED when reading 'H' from the serial port, since 'H' is set as the precise data.

Figure 3.1 Block diagram of face detection and tracking system model

The main approach of this project addresses retrieving data from the sensor and manipulating the data to obtain the desired output using the system. The system consists of the following components: a camera sensor, a computer, and a microcontroller board.

The output devices include the servo motor, which provides the analog output, and the feedback.
The block diagram of this project is represented above. In this project, the data is being extracted
from the camera sensor and the computer will process that data. After processing, the computer
sends a signal to the controller board, which in turn makes the servo motor rotate according to
the signals received from the computer.

Figure 3.2 Experimental setup

Figure 3.3 Complete hardware setup of face detection and tracking

A breadboard is used to make the connections. The various connections required are given below.

SERVOS:

1. The yellow/signal wire for the pan (x axis) servo goes to digital pin 9.

2. The yellow/signal wire for the tilt (y axis) servo goes to digital pin 10.

3. The red/VCC wires of both servos go to the Arduino's 5V pin.

4. The black/GND wires of both servos go to the Arduino's GND pin.

WEBCAM:

The webcam's USB cable goes to the PC. The code will identify it via a number representing the USB port it's connected to.

ARDUINO:

The Arduino Uno is connected to the PC via USB.

Processing takes the video input from the webcam and uses the OpenCV library for Processing to analyze the video. If a face is detected in the video, the library gives the Processing sketch the coordinates of the face. The Processing sketch determines where the face is located in the frame, relative to the center of the frame, and sends this data through a serial connection to the Arduino. The Arduino uses the data from the Processing sketch to move the servos of the pan/tilt setup.

a) Basically, a Haar-cascade classifier is used for detecting the faces.

b) The input video frame is read from the camera and temporary memory storage is created to store this frame.

c) A window is created to display the frame, and the frame is continuously monitored for its existence.

d) A function is called to detect the face, with the frame passed as a parameter.

e) Steps b-d are kept in a continuous loop until a user-defined key is pressed.

f) The classifier, frame, memory storage and the window are destroyed.

g) The (X, Y) coordinate of the image is plotted according to the movement of the face.

h) The difference between the face position and the center is calculated and sent to the Arduino serially.

Basically, the Arduino analyzes the serial input for commands and sets the servo positions accordingly. A command consists of two bytes: a servo ID and a servo position. If the Arduino receives a servo ID, it waits for another serial byte and then assigns the received position value to the servo identified by that ID. The Arduino Servo library is used to easily control the pan and tilt servos. A character variable is used to keep track of the characters that come in on the serial port.

a) The library Servo.h is used in the Arduino sketch to control the servo motors.

b) Depending on the difference found in step h), the two servo motors are sent appropriate commands for the pan-tilt movement of the camera.

c) Step b is kept in a continuous loop.
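The two-byte command handling described above can be sketched from the receiving side. This is an illustrative Python sketch of the protocol logic only; the servo IDs below (0 for pan, 1 for tilt) are assumed values for demonstration, as the report does not fix them.

```python
def parse_commands(stream):
    """Consume a byte stream of (servo ID, position) pairs and
    return the final commanded position of each servo."""
    positions = {}
    it = iter(stream)
    for servo_id in it:
        position = next(it, None)       # wait for the paired position byte
        if position is None:
            break                       # incomplete command: ignore it
        positions[servo_id] = position
    return positions

# pan (ID 0) -> 90, tilt (ID 1) -> 120, then pan updated to 95
print(parse_commands(bytes([0, 90, 1, 120, 0, 95])))   # {0: 95, 1: 120}
```

On the real Arduino the same state machine is written around Serial.read(), with the position byte passed to the Servo library's write call.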

Figure 3.4 Output of the algorithm showing face detection

CHAPTER FOUR

SIMULATION RESULTS AND DISCUSSION

4.1 Face detection and tracking


The image of the face captured by the webcam, with the help of Processing and the Arduino IDE, undergoes the steps mentioned below. A rectangle class is generated to keep track of the face coordinates. An instance of the serial library is created, which is needed to communicate with the Arduino. The screen-size parameters and the contrast/brightness values are adjusted, and the image coming from the webcam is converted to greyscale format. The sketch then checks whether any faces were detected. If a face is found, the midpoint of the first face in the frame is obtained by manipulating the detected rectangle's values. If the Y component of the face is below the middle of the screen, the tilt position variable is updated to lower the tilt servo; if it is above the middle, the tilt servo is raised. Likewise, if the X component of the face is to the left of the middle of the screen, the pan position variable is updated to move the servo to the left; if it is to the right, the servo is moved to the right. Finally, the servo positions are updated by sending the serial command to the Arduino. The pan and tilt positions of the servo motors linked with the web camera are directly proportional to the offsets of the X and Y components of the face from the midpoint of the frame, sent as serial commands to the Arduino.
Using this approach, it was found that the time taken to detect the face was less than 1 second, which means that this setup can be used in real time. The detection efficiency was greatly improved by using the Arduino. The average frame rate was found to be 15 fps.
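The above/below and left/right decision logic can be sketched as follows. This is an illustrative Python sketch: the frame size, the one-degree step, and the 0-180 degree limits are assumed values for demonstration, not taken from the project code.

```python
FRAME_W, FRAME_H = 640, 480   # assumed frame size
STEP = 1                      # degrees moved per frame (assumed)

def update_servos(face_mid, pan, tilt):
    """Return new (pan, tilt) angles given the face midpoint (x, y).
    Image y grows downward, so a face below centre means lowering the tilt."""
    x, y = face_mid
    if x < FRAME_W // 2:
        pan = max(0, pan - STEP)        # face left of centre: pan left
    elif x > FRAME_W // 2:
        pan = min(180, pan + STEP)      # face right of centre: pan right
    if y > FRAME_H // 2:
        tilt = max(0, tilt - STEP)      # face below centre: lower the tilt
    elif y < FRAME_H // 2:
        tilt = min(180, tilt + STEP)    # face above centre: raise the tilt
    return pan, tilt

print(update_servos((100, 400), 90, 90))   # (89, 89)
```

Stepping by a small fixed amount each frame, rather than jumping straight to the target, keeps the motion smooth at the 15 fps frame rate reported above.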

The result of face detection is shown in Figure 4.1; the images are frames extracted from the video. Sometimes the face detection algorithm may return more than one result even when there is only one face in the frame. In this case, post-processing is used: if the detector provides more than one rectangle indicating the position of the face, the distance between the center points of these rectangles is calculated. If this distance is smaller than a pre-set threshold, the average of these rectangles is computed and set as the final position of the detected face.
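The merging step just described can be sketched as follows. This Python sketch assumes (x, y, width, height) rectangles and an illustrative 40-pixel threshold, neither of which is specified exactly in the report.

```python
def center(rect):
    x, y, w, h = rect
    return (x + w / 2, y + h / 2)

def merge_detections(rects, threshold=40):
    """Average rectangles whose centres lie within `threshold` pixels."""
    if len(rects) < 2:
        return rects
    merged, used = [], set()
    for i, r in enumerate(rects):
        if i in used:
            continue
        group = [r]
        cx, cy = center(r)
        for j in range(i + 1, len(rects)):
            ox, oy = center(rects[j])
            if ((cx - ox) ** 2 + (cy - oy) ** 2) ** 0.5 < threshold:
                group.append(rects[j])
                used.add(j)
        n = len(group)
        # component-wise average of the grouped rectangles
        merged.append(tuple(sum(v) / n for v in zip(*group)))
    return merged

# two near-duplicate detections collapse into one averaged rectangle
print(merge_detections([(100, 100, 80, 80), (110, 108, 84, 76)]))
```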

Figure 4.1 Face detected

The result of face tracking is shown in Figure 4.2. The result is acceptable but not fully accurate: under non-uniform lighting conditions the result worsens, since the light changes the apparent color of the face.

Figure 4.2 Face tracking

CHAPTER FIVE

CONCLUSIONS AND FUTURE WORK

5.1 Conclusion
Face detection and tracking is an important task in the computer vision field. It consists of two major processes: face detection and tracking. Face detection in video obtained from a single camera with a static background, i.e. a fixed camera, is achieved by a background-subtraction approach. The face detection and tracking were performed using MATLAB/Simulink. In this project, we tried different videos from a fixed camera, with a single face and with multiple faces, to verify that the system is able to detect faces. The AdaBoost algorithm has been used so that the tracker is invariant to the representation of the face of interest. The main aim of this prototype system is to detect a face, track it, match it with stored eigenfaces, and accordingly set a digital pin of the Arduino board HIGH or LOW. The eigenfaces are stored first, and then a snapshot of the user's face is taken in real time. The user's face is then matched with the stored faces, and this face recognition is interfaced with the Arduino using serial communication. Hence, the Arduino can be used to control the pan/tilt of the mechanical setup, and the face location can be tracked using Simulink together with the servo motor that provides the camera's tracking motion.

5.2 Future Work

This project implements an embedded system capable of detecting and tracking a face in real
time, using a servo motor.

In the future, we can extend the work to detect moving objects against a non-static background, with multiple cameras that can be used in real-time surveillance applications; a fully embedded system can also be created that works automatically and saves images directly to a server. Along with face detection, face recognition may also be implemented.

References

[1] Oliver Jesorsky, Klaus J. Kirchberg, and Robert W. Frischholz (2001), "Face Detection Using the Hausdorff Distance", Proc. Third International Conference on Audio- and Video-Based Biometric Person Authentication, Lecture Notes in Computer Science, Vol. 4.

[2] M. H. Yang, D. J. Kriegman, and N. Ahuja (2002), "Detecting Faces in Images: A Survey", IEEE Trans. Pattern Analysis and Machine Intelligence, 24(1):34-58.

[3] M. J. Jones and J. M. Rehg (1999), "Statistical Color Models with Application to Skin Detection", Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Vol. 1, pp. 274-280.

[4] P. Viola and M. Jones (2001), "Rapid Object Detection Using a Boosted Cascade of Simple Features", Conf. on Computer Vision and Pattern Recognition, IEEE Press, pp. 511-518.

[5] L. Guo and Q. G. Wang (2009), "Research of Face Detection Based on AdaBoost Algorithm and OpenCV Implementation", J. Harbin University of Sci. and Tech., China, Vol. 14, pp. 123-126.

[6] Seattle Robotics Society, "What's a Servo: A Quick Tutorial", 2010, available at http://www.seattlerobotics.org/guide/servos.html.

[7] Ming Hu, Qiang Zhang, and Zhiping Wang (2008), "Application of Rough Sets to Image Pre-processing for Face Detection", IEEE International Conference on Information and Automation (ICIA 2008), Vol. 2, pp. 245-248.

Appendix

clc;
clear;
close all;
faceDetector = vision.CascadeObjectDetector(); % create the Viola-Jones face detector
vid = videoinput('winvideo', 1);
set(vid,'ReturnedColorSpace', 'rgb');

img = getsnapshot(vid);
figure, imshow(img), title('Detected faces');

while(1)
data = getsnapshot(vid);
bboxx = step(faceDetector, data);
imshow(data)
if(~isempty(bboxx))
hold on;
for i=1:size(bboxx,1)
bbox=bboxx(i,:);
rc=bbox+[-bbox(3)/4,-bbox(4)/4,bbox(3)/2,bbox(4)/2];
ht=round(rc(4)/20);
wd=round(rc(3)/20);
xm=round(rc(1)+(rc(3)/2));
ym=round(rc(2)+(rc(4)/2));
rectangle('Position',rc,...
'Curvature',0,...
'LineWidth',2,...
'LineStyle','--',...
'EdgeColor','y')

line([xm,xm],[rc(2),rc(2)+ht],...
'LineWidth',2,...
'Color','y');
line([rc(1),rc(1)+wd],[ym,ym],...
'LineWidth',2,...
'Color','y');

line([xm,xm],[rc(2)+rc(4),rc(2)+rc(4)-ht],...
'LineWidth',2,...
'Color','y');
line([rc(1)+rc(3),rc(1)+rc(3)-wd],[ym,ym],...
'LineWidth',2,...
'Color','y');
line([rc(1),rc(1)+wd],[rc(2),rc(2)],...
'LineWidth',4,...
'Color','y');
line([rc(1),rc(1)],[rc(2),rc(2)+ht],...
'LineWidth',4,...
'Color','y');
line([rc(1)+rc(3),rc(1)+rc(3)],[rc(2),rc(2)+ht],...
'LineWidth',4,...
'Color','y');
line([rc(1)+rc(3),rc(1)+rc(3)-wd],[rc(2),rc(2)],...
'LineWidth',4,...
'Color','y');
line([rc(1),rc(1)],[rc(2)+rc(4)-ht,rc(2)+rc(4)],...
'LineWidth',4,...
'Color','y');
line([rc(1),rc(1)+wd],[rc(2)+rc(4),rc(2)+rc(4)],...
'LineWidth',4,...
'Color','y');
line([rc(1)+rc(3),rc(1)+rc(3)-wd],[rc(2)+rc(4),rc(2)+rc(4)],...
'LineWidth',4,...
'Color','y');
line([rc(1)+rc(3),rc(1)+rc(3)],[rc(2)+rc(4),rc(2)+rc(4)-ht],...
'LineWidth',4,...
'Color','y');
end
hold off;
drawnow;
end
end

% Detects a human face in a selected photo


% let's start
clc;
clear;
close all;
faceDetector = vision.CascadeObjectDetector();
[FileName,PathName] = uigetfile('*.jpg','Select Image file'); % select the photo to check
data = imread([PathName,FileName]);% Read the input image
bboxx = step(faceDetector, data); % To detect The face

figure, imshow(data), title('Detected faces'); % show the image and annotate the detected faces

if(~isempty(bboxx))
hold on;
for i=1:size(bboxx,1) % loop over the detected faces
bbox=bboxx(i,:);
rc=bbox+[-bbox(3)/4,-bbox(4)/4,bbox(3)/2,bbox(4)/2];
ht=round(rc(4)/20);
wd=round(rc(3)/20);
xm=round(rc(1)+(rc(3)/2));
ym=round(rc(2)+(rc(4)/2));
rectangle('Position',rc,...
'Curvature',0,...
'LineWidth',2,...
'LineStyle','--',...
'EdgeColor','y')

line([xm,xm],[rc(2),rc(2)+ht],...
'LineWidth',2,...
'Color','y');
line([rc(1),rc(1)+wd],[ym,ym],...
'LineWidth',2,...
'Color','y');

line([xm,xm],[rc(2)+rc(4),rc(2)+rc(4)-ht],...
'LineWidth',2,...
'Color','y');
line([rc(1)+rc(3),rc(1)+rc(3)-wd],[ym,ym],...
'LineWidth',2,...
'Color','y');
line([rc(1),rc(1)+wd],[rc(2),rc(2)],...
'LineWidth',4,...
'Color','y');
line([rc(1),rc(1)],[rc(2),rc(2)+ht],...
'LineWidth',4,...
'Color','y');
line([rc(1)+rc(3),rc(1)+rc(3)],[rc(2),rc(2)+ht],...
'LineWidth',4,...
'Color','y');
line([rc(1)+rc(3),rc(1)+rc(3)-wd],[rc(2),rc(2)],...
'LineWidth',4,...
'Color','y');
line([rc(1),rc(1)],[rc(2)+rc(4)-ht,rc(2)+rc(4)],...
'LineWidth',4,...
'Color','y');

line([rc(1),rc(1)+wd],[rc(2)+rc(4),rc(2)+rc(4)],...
'LineWidth',4,...
'Color','y');
line([rc(1)+rc(3),rc(1)+rc(3)-wd],[rc(2)+rc(4),rc(2)+rc(4)],...
'LineWidth',4,...
'Color','y');
line([rc(1)+rc(3),rc(1)+rc(3)],[rc(2)+rc(4),rc(2)+rc(4)-ht],...
'LineWidth',4,...
'Color','y');
end % end of loop over the detected faces
hold off;
end
