Arduino Based Face Detection and Tracking System
SEMESTER PROJECT
2. Ermias wasihun…..……………………….BDU0600650UR
3. Shambel zienawi…………………………..BDU0601472UR
4. Shegaw zigale…………….……………….BDU0601475UR
Jan. 3, 2018
Approval
This final project has been approved in partial fulfillment of the requirements for the Degree of
Bachelor of Science in Electrical and Computer Engineering with specialization in Electronics
and Communication Engineering.
Team Members:
Abstract
This project discusses an application for automatic face detection and tracking on video streams from surveillance cameras in public or commercial places. In many situations it is useful to know where people are looking, e.g., in exhibits, commercial malls, and public buildings. A prototype is designed to work with a web camera for face detection and tracking, built on the open-source Arduino platform. The system is based on the AdaBoost algorithm and extracts Haar-like features of faces. It can be used for security purposes, to record visitors' faces as well as to detect and track them. A program is developed in MATLAB that detects people's faces in the web-camera stream and tracks them.
Security is one of the areas that technology entered long ago, and security without technology can hardly be imagined in modern times. Be it in a bank, a corporate building, or an educational institute, vision-based systems are used to keep a check on the activities going on at the place. Face detection and tracking using Arduino on a video stream is a system that can be used for such security purposes. This project uses different hardware and software to capture frames from a camera, detect the faces, save the detected faces, and track the faces.
Acknowledgment
First of all, we are grateful to the Almighty God for enabling us to complete this project work.
We also wish to express our sincere gratitude to Mr. Lijaddis G. for his expert, sincere, and valuable guidance and encouragement. We are thankful for his aspiring guidance, invaluably constructive criticism, and friendly advice during the project work, and we are sincerely grateful to him for sharing his truthful and illuminating views on a number of issues related to the project.
Finally, we take this opportunity to sincerely thank all the faculty members of Electrical
and Computer Engineering for their help and encouragement in our educational endeavors.
Contents
Approval
Abstract
INTRODUCTION
2.3 System Components and Operations
References
Appendix
List of Figures
Figure 3.1 Block diagram of face detection and tracking system model
List of Acronyms
2D…………………………………………………………………………..…Two-Dimensional
3D………………………………………………………………….……….….Three-Dimensional
AdaBoost……………………………………………………………………….Adaptive Boosting
DC………………………………………………………………………………Direct Current
I/O………………………………………………………………………………Input Output
TX………………………………………………………………………...Transmission
RX………………………………………………………………………..Receiving
CHAPTER ONE
INTRODUCTION
1.1 Background
Detecting and tracking facial features in an image sequence is an important and fundamental problem in computer vision. This area of research has many applications in face identification systems, model-based coding, gaze detection, human-computer interaction, teleconferencing, etc.
This paper discusses Arduino-based face detection and tracking on video streams from surveillance cameras in public or commercial places. In many situations it is useful to know where people are looking, e.g., in exhibits, commercial malls, and public buildings. A prototype is designed to work with a web camera for face detection and tracking, built on the open-source Arduino platform. The common face detection methods are the knowledge-based approach, the statistics-based approach, and integration approaches that combine different features or methods. The knowledge-based approach can achieve face detection in complex-background images to some extent and also obtains high detection speed, but it needs more integrated features to further enhance its adaptability. The statistics-based approach detects faces by judging all possible areas of the image with a classifier: the face region is treated as one class of patterns, and a large number of face and non-face training samples are used to construct the classifier. The present system is based on the AdaBoost algorithm and extracts Haar-like features of faces. It can be used for security purposes, to record visitors' faces as well as to detect and track them. A program is developed in MATLAB that detects people's faces in the web-camera stream and tracks them. Arduino is an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software.
The software consists of a standard programming language compiler and the boot loader that
runs on the board. Arduino can sense the environment by receiving input from a variety of
sensors and can affect its surroundings by controlling lights, motors, and other actuators.
The microcontroller on the board is programmed using the Arduino programming language and
the Arduino development environment. Arduino projects can be stand-alone or they can
communicate with software running on a computer.
1.2 Problem Statement
Face detection is important for the interpretation of facial expressions in applications such as intelligent man-machine interfaces and communication, intelligent visual surveillance, and real-time animation from live motion images. Face detection is the process of analyzing an input image to determine the number, location, size, position, and orientation of faces. It is the basis for face tracking and face recognition, and its results directly affect the process and accuracy of face recognition. The main problems addressed by this project are the following.
Illumination variation: images of the same face look different when the lighting changes. Under non-uniform lighting the result becomes worse, since the light changes the apparent color of the face.
Occlusion: face detection has to deal not only with different faces but also with arbitrary occluding objects; hairstyles and sunglasses are typical examples of occlusion in face detection. For global features, occlusion is one of the major difficulties in face detection. Faces may be partially occluded by other objects; in an image of a group of people, some faces may partially occlude other faces.
Figure 1.2 The same face with occlusion
Pose variation: pose varies because people do not always face the camera, and the same face seen from different angles can give different outputs, making this modality less accurate than some other biometrics.
1.3 Objective
1.3.1 General objective
The aim of this project is to detect a face and to track the movement of the detected face using an Arduino-controlled pan/tilt camera mount.
1.4 Methodology
The main goal of this project is to design and model face detection and tracking using Arduino for security purposes. The project is based on study and simulation using the scientific computing software MATLAB. For the simulation and result analysis in MATLAB, we first identify the parameters of the system and then generate the MATLAB code.
For an effective and efficient analysis, study, and design of Arduino-based face detection and tracking, the following methodologies will be followed.
Data collection
To design a dynamic system, the first step must be to gather the data necessary for analyzing all components of the system. The data will be gathered by collecting valid material from the internet and books, reviewing the literature, and interviewing specialists in the area.
Data analysis
After sufficient information has been gathered from the relevant data sources, arranging and analyzing the data is the next step in designing the system, because well-organized and well-analyzed data make it easier to assess the feasibility of the system and to interpret and extrapolate to further processes.
System Modeling
System modeling is the next step after data analysis. The modeling process will take place in the MATLAB software.
System testing
Testing is the method of verifying the proper functioning of each component of the modeled system.
Document preparation
Based on the analyzed data and results, a well-prepared project document will be organized. The document will be written so as to present the findings of this study in a meaningful way that anybody can understand.
1.5 Contributions of the Project
Face detection and tracking is a computer-based technology for determining the location of a face in an image regardless of its size, color, and illumination, ignoring all other constituents of the image. Face detection makes it possible to use facial images of a person to authenticate him into a secure system, for criminal identification, for passport verification, etc. Face images are projected onto a face space that encodes the main variation among known face images; the face space is a collection of eigenfaces. The project develops a system that automatically detects and tracks people's faces: it detects a face, matches it against stored faces with distinct features, and generates and sends a control signal to the hardware according to the face recognized.
The face detection algorithm has been developed on the MATLAB platform by combining several image processing algorithms. Using the theory of image acquisition and the fundamentals of digital image processing, the face of a user is detected in real time. Using face detection and serial data communication, the state of an Arduino board pin is controlled. The MATLAB program implements a real-time computer vision system for face detection and tracking, with a camera as the image acquisition hardware. The Arduino program provides the interfacing of a hardware prototype with the control signals generated by the real-time face detection and tracking. The face detection and tracking system using Arduino can be used for security purposes: it combines different hardware and software to capture frames from a camera, detect the faces, save the detected faces, and track the faces.
CHAPTER TWO
REVIEW OF LITERATURE
In fact, the face detection area builds on a wide range of prior knowledge, and the existing methods can be classified in several different ways.
2.1 Face Detection
These are bottom-up approaches used to find facial features (eyebrows, nose, etc.) even in the presence of varying composition and perspective, so it is difficult to find a face in real time using this method. Statistical methods have been developed to detect faces. Facial features of the human face include shape, texture, and skin; these include structural features that do not depend on the pose, lighting, or rotation of the face, for example skin color.
These models use several templates to identify the face class and extract facial features. Rules are pre-defined to decide whether there is a face in the image; the method is based on comparing the input images with preselected images to find the correlation between them. Appearance-based methods are similar to template matching, except that the models are learned from a pre-labeled training set; one of the essential algorithms in this category is the eigenface method. Neural networks, Bayes classifiers, support vector machines, AdaBoost-based face detection, the Fisher linear discriminant, statistical classifiers, and hidden Markov models are all appearance-based methods.
The eigenface detection method is one of the earliest methods for detecting and recognizing faces. Turk and Pentland implemented this method, and the overall process is described as follows:
When trying to determine whether a detection window contains a face, the image in the detection window is transformed to the eigenface basis. Since the eigen basis is incomplete, image information is lost in the transformation. The difference, or error, between the detection window and its eigenface representation is then thresholded to classify the detection window: if the difference is small, there is a high probability that the detection window represents a face. While this method has been used successfully, one disadvantage is that it is computationally heavy to express every detection window in the eigenface basis.
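To make the classification rule concrete, the following MATLAB sketch applies the eigenface reconstruction-error test to one detection window. It assumes a precomputed orthonormal eigenface basis U and mean face mu (obtained from training faces), and the threshold tau is illustrative rather than a value from the literature.

% Sketch: classify a detection window by its eigenface reconstruction error.
% U (D-by-K, orthonormal columns) and mu (D-by-1) are assumed precomputed
% from training faces; the threshold tau is illustrative.
function isFace = eigenfaceTest(window, U, mu, tau)
x = double(window(:)) - mu; % vectorize the window and subtract the mean face
w = U' * x; % project onto the (incomplete) eigenface basis
xHat = U * w; % reconstruct the window from the projection
err = norm(x - xHat); % information lost in the transformation
isFace = err < tau; % a small error suggests a face
end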
In normal lighting conditions, all human skin tones have certain properties in color space that can be used for skin-tone detection. Human faces can be found by detecting which pixels in an image correspond to human skin; further examining and labeling the skin regions can lead a program to conclude that a given region is a human face. It has been shown that when different skin tones are plotted in color spaces such as RGB, YUV, and HSL, they span certain volumes of those spaces. These volumes can be enclosed by bounding planes, and color values can then be tested to see whether they lie within the skin-tone volume. The main advantage of this method is that it is well suited for GPU implementation: a pixel shader can efficiently check whether a pixel color lies within a skin-tone volume, since this type of per-pixel evaluation is what GPUs are designed for. However, skin detection methods have a few disadvantages: each image has to be normalized before use, and segmented skin regions may contain noise or undesired information.
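As a minimal sketch of this idea, the following MATLAB fragment tests every pixel against a box-shaped skin-tone volume in YCbCr space. The Cb/Cr ranges used here are commonly quoted heuristic values, not the exact volumes of the cited work, and the input file name is illustrative.

% Sketch: per-pixel skin-tone test using a box volume in YCbCr space.
% The Cb/Cr ranges are common heuristics, not values from the references.
rgb = imread('scene.jpg'); % illustrative input image
ycc = rgb2ycbcr(rgb);
Cb = ycc(:,:,2);
Cr = ycc(:,:,3);
skin = Cb >= 77 & Cb <= 127 & Cr >= 133 & Cr <= 173;
skin = imfill(skin, 'holes'); % clean up noise in the segmented regions
figure, imshow(skin), title('Candidate skin regions');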
Face features are extracted from the input images and are used to train the classifiers and modify their weights, as mentioned in [4]. Haar-like features are rectangle features whose value is the sum of the pixels in the black region minus the sum of the pixels in the white region [7], as shown in the figure below.
There are many types of basic shapes for features, as the next figure shows:
By varying the position and size of these features within the detection window, an over-complete set of different features is constructed. Building the detector then consists of finding the best combination of features to separate faces from non-faces. The advantage of this method is that the detection process consists mainly of additions and threshold comparisons, which makes detection fast and well suited to our embedded use case. The main problem is that the accuracy of the face detector depends heavily on the database used for training. A further disadvantage is the need to calculate sums of intensities for each feature evaluation, which requires many lookups in the detection window, depending on the area covered by the feature.
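The integral image is what makes these lookups cheap: the sum over any rectangle costs only four table accesses. The following MATLAB sketch evaluates one two-rectangle Haar-like feature this way; the feature position and size are illustrative.

% Sketch: evaluate a two-rectangle Haar-like feature with an integral image.
I = double(rgb2gray(imread('face.jpg'))); % illustrative input image
ii = zeros(size(I) + 1); % integral image, padded with a zero row and column
ii(2:end, 2:end) = cumsum(cumsum(I, 1), 2);
% Sum of pixels in the rectangle with top-left (r,c), height h, width w.
rectSum = @(r,c,h,w) ii(r+h,c+w) - ii(r,c+w) - ii(r+h,c) + ii(r,c);
r = 60; c = 80; h = 24; w = 12; % illustrative feature placement
f = rectSum(r,c,h,w) - rectSum(r,c+w,h,w); % black (left) minus white (right)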
ADABOOST
In 1995, Freund and Schapire first introduced the AdaBoost algorithm [8], which has since been widely used in pattern recognition. AdaBoost is the "adaptive boosting" algorithm; the goal of boosting is to improve the accuracy of any given learning algorithm. First, a weak classifier with accuracy on the training set greater than chance is created; new component classifiers are then added to form an ensemble whose joint decision rule has arbitrarily high accuracy on the training set. In AdaBoost, each training pattern receives a weight that determines its probability of being selected for the training set of an individual component classifier. If a training pattern is accurately classified, its chance of being used again in a subsequent component classifier is reduced; conversely, if the pattern is not accurately classified, its chance of being used again is raised. In this way AdaBoost focuses on the difficult patterns.
The AdaBoost face detection algorithm is a combination of several weak classifiers: it picks a few thousand features and assigns weights to each one based on a set of training images. The aim of AdaBoost is to assign each weak classifier its best weight, and results have shown that it generalizes well. It combines numerous weak classifiers into a strong classifier: each weak classifier handles only a few features, but together they raise the overall performance, whereas a single strong classifier could not process a large number of input features in real time.
The threshold of each classifier can be adjusted so that the false-negative rate tends toward zero. The key to this algorithm is that classifiers which are simple individually achieve a high detection rate overall; this boosting idea makes the learning process simple and efficient.
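The weight update described above can be written compactly. The following MATLAB sketch shows one AdaBoost training loop with generic weak learners; bestStump stands for a weak-learner training routine (for example, a decision stump over a single Haar feature) and is a hypothetical helper, not a built-in function.

% Sketch: AdaBoost training loop (weight update as described in the text).
% X: N-by-D features; y: N-by-1 labels in {-1,+1}; T: number of rounds.
% bestStump is a hypothetical weak-learner routine, not a built-in.
function [stumps, alpha] = adaboostTrain(X, y, T)
N = size(X, 1);
w = ones(N, 1) / N; % every training pattern starts with equal weight
alpha = zeros(T, 1);
stumps = cell(T, 1);
for t = 1:T
[stumps{t}, pred] = bestStump(X, y, w); % weak learner minimizing weighted error
err = sum(w .* (pred ~= y)); % weighted training error
alpha(t) = 0.5 * log((1 - err) / max(err, eps)); % weight of this weak classifier
w = w .* exp(-alpha(t) * y .* pred); % misclassified patterns gain weight
w = w / sum(w); % renormalize to a distribution
end
end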
2.2 Face Tracking
Real-time face tracking refers to the task of locating human faces in a video stream and tracking the detected faces. Nowadays there are many real-world applications of face detection and other image processing techniques. Face detection and tracking are important in video content analysis, since the most important objects in most videos are human beings. Research on face tracking and animation techniques has grown because of its wide range of applications in security, the entertainment industry, gaming, psychological facial expression analysis, and human-computer interaction. Recent advances in face video processing and compression have made face-to-face communication practical in real-world applications; however, increasingly intensive communication still demands higher bandwidth. Model-based low-bit-rate transmission with high-quality video offers great potential to mitigate the problem of limited communication resources. However, after a decade's effort, robust and realistic real-time face tracking and generation still pose a big challenge. The difficulty lies in a number of issues, including real-time face feature tracking under a variety of imaging conditions (lighting variation, pose change, self-occlusion, and the deformation of multiple non-rigid features) and real-time realistic face modeling using a very limited number of feature parameters.
The appearance-driven approach requires a significant amount of training data to enumerate all possible appearances of the features. The model-based approach assumes that knowledge of a specific object is available, but its requirements of frontal facial views and constant illumination limit its application. All of the above tracking methods have shown certain limitations for accurate face feature tracking under complex imaging conditions. Different types of facial features, such as skin color, edges, feature points, and motion, have been used for face tracking. Skin color has been used for tracking face motion in the X and Y directions and in out-of-plane rotation; it is usually too simple to encode structural knowledge of the face, so it is best suited for coarse face tracking.
Image processing stages
Per the figure above, the first step is image capture; as the name implies, this means obtaining the source image. Segmentation is a crucial stage in image processing: it is the division of the source image into sub-regions of interest, which could mean segmentation by color, by size, into open regions, closed regions, etc. It is important to note that the sum of all regions equals the source image. The output of segmentation is very important because it is the input to the next stages. Representation is the step in which the segmented image is expressed in terms that the system can interpret. Feature detection and interpretation is the step in which the system examines the output of the segmentation: the system has to know whether the required object is in the segmented image. For example, suppose the system is searching for a yellow ball but there is a yellow cube in the segmented image instead. If segmentation is by color, the cube is still an object of interest, but with feature detection and interpretation the system will know that it is actually a cube, not a ball.
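To make the yellow-ball example concrete, the following MATLAB sketch segments by color and then uses a simple shape feature (circularity) to distinguish a ball from a cube. The hue band and the circularity threshold are illustrative assumptions.

% Sketch: segment yellow regions, then use circularity to reject non-balls.
rgb = imread('scene.jpg'); % illustrative input image
hsv = rgb2hsv(rgb);
mask = hsv(:,:,1) > 0.12 & hsv(:,:,1) < 0.20 & hsv(:,:,2) > 0.4; % yellow band (illustrative)
props = regionprops(mask, 'Area', 'Perimeter');
for k = 1:numel(props)
circ = 4*pi*props(k).Area / props(k).Perimeter^2; % 1 for a circle, lower for a square
if circ > 0.85 % illustrative threshold
fprintf('Region %d looks like a ball.\n', k);
end
end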
2.3 System Components and Operations
The components of a project are important when planning, because the cost depends on the price of the components; the availability of resources can therefore dictate the development of a project. During the research portion of a project, several options for components and software are tested. The components for this project were selected for their availability, low cost, and functional efficiency; they are thoroughly described below, together with the reasons for selecting them.
The components used in this project are: a PC (preferably running Windows 7 SP1), an Arduino Uno or compatible plus a 5 V DC power source, two standard servos, a webcam with a USB interface, a breadboard, jump wires, and hobby wire to tie the pan/tilt servos and webcam together.
Arduino UNO
The Arduino UNO is built around a 28-pin microcontroller; half of those pins can be used for digital I/O, six can be used for analog input, and six of the digital pins also provide PWM. The board comes with its clock circuitry already in place, so, unlike a bare controller, the user does not have to add an external crystal; this makes it an uncomplicated environment for physical computing. Connecting sensors or servo motors to the Arduino UNO is simple. The Arduino UNO provides a built-in LED, a reset button, TX and RX indicators, and an upload indicator. This project requires a controller that can read serial data from a serial port and can easily work at low voltage, and it is simple to power this controller board from a computer.
The term "webcam “may also be used in its original sense of a video camera connected to the
web continuously for an indefinite time.
Servo Motor
A Tower Pro micro servo motor was selected for the rotation application. It can rotate about 90 degrees in each direction, for a total travel of roughly 180 degrees, as explained in the Tower Pro user manual [16]. The Tower Pro servo motor has three wires: orange, red, and brown. The orange wire connects to a digital (PWM) pin, the red wire provides power to the motor, and the brown wire connects to ground. Compared to other available servo motors, this servo motor is efficient and lightweight. The servo motor used in this project is shown in the figure below.
Principle of the DC motor
DC motor operation is based on the Lorentz force law:

F = q[E + v × B]    (2.1)

where q is the charge, E is the electric field vector, v is the charge velocity vector, and B is the magnetic flux density vector. In a DC motor, the coils and the flux density B are arranged perpendicular to each other in order to generate torque on the rotor. Hence, in Equation 2.1, E is zero, the vectors v and B reduce to the scalars v and B, and Equation 2.1 simplifies to

F = qvB    (2.2)

To keep the torque in the same direction, and hence the motor spinning, commutators are required to reverse the current every half cycle [5].
Servo motor model
A servo motor is based on a DC motor. A servo motor has four parts: a normal DC motor, a gear
reduction unit, a position-sensing device (usually a potentiometer), and a control circuit. Its shaft
can be positioned to specific angular positions by sending the servo a coded signal. As long as
the coded signal exists on the input line, the servo will maintain the angular position of the shaft.
As the coded signal changes, the angular position of the shaft changes as well. In practice, servos
are used in radio controlled airplanes to position control surfaces like the elevators and rudders.
They are also used in radio controlled cars, puppets, and of course, robots [5].
The servo motor has control circuits and a potentiometer that is connected to the output shaft. In
Figure below, the potentiometer can be seen on the right side of the circuit board.
This potentiometer allows the control circuitry to monitor the current angle of the servo motor. If the shaft is at the correct angle, the motor shuts off. If the circuit finds that the angle is not correct, it turns the motor in the necessary direction until the angle is correct. The output shaft of the servo is capable of rotating about 180 degrees; a standard servo is used to control an angular motion between 0 and 180 degrees.
How is the required shaft angle communicated to the servo? A control wire is used to
communicate the angle. The angle is determined by the duration of a pulse that is applied to the
control wire. This is called Pulse Width Modulation. The servo expects to see a pulse every 20
milliseconds. The length of the pulse will determine how far the motor turns.
A 1.5 millisecond pulse, for example, will make the motor turn to the 90-degree position (often called the neutral position). If the pulse is shorter than 1.5 ms, the motor turns the shaft closer to 0 degrees; if the pulse is longer than 1.5 ms, the shaft turns closer to 180 degrees. The duration of the pulse dictates the angle of the output shaft.
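Under the common convention that a 1.0 ms pulse maps to 0 degrees and a 2.0 ms pulse to 180 degrees (with 1.5 ms at the 90-degree neutral point, as stated above), the mapping is linear. The exact endpoints vary between servo models, so the following MATLAB one-liner is a sketch of that assumed convention:

% Sketch: pulse width (ms) -> shaft angle (degrees), assuming the common
% 1.0 ms = 0 deg, 1.5 ms = 90 deg, 2.0 ms = 180 deg convention.
pulseToAngle = @(tMs) (tMs - 1.0) / 1.0 * 180;
pulseToAngle(1.5) % returns 90 (neutral position)
pulseToAngle(1.0) % returns 0
pulseToAngle(2.0) % returns 180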
Jumpers
Jump wire is an electrical wire or group of them in a cable with a connector or pin at each end,
which is normally used to interconnect the components of a breadboard or other prototype or test
circuit, internally or with other equipment or components, without soldering. Individual jump
wires are fitted by inserting their "end connectors" into the slots provided in a breadboard, the
header connector of a circuit board, or a piece of test equipment. Male jumper wires are used to connect the Arduino and the servo motors, as shown in the figure below.
Breadboard
A breadboard is used to make the circuit connections.
Figure 2.10 Breadboard
CHAPTER THREE
The AdaBoost algorithm is used in the proposed project because of its simple implementation and its effective results compared to other algorithms. To initiate the project, trials were conducted on the integrated webcam of a laptop together with the Arduino. The Haar cascade classifier first detects a face in a provided still image, to check the accuracy of face detection, and is then applied to real-time video, which completes the initial task of detecting the face. Serial communication between the Arduino and the computer is used to transmit and receive data: the computer sends a specific serial byte to the Arduino whenever it detects a face, and the Arduino is programmed to read the serial data received from the computer and to blink its LED when it reads 'H' from the serial port, 'H' being the agreed byte.
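A minimal MATLAB sketch of the PC side of this handshake is shown below, using the serial interface available in MATLAB releases of the period. The port name 'COM3' and the baud rate are assumptions to be adjusted for the actual system.

% Sketch: send 'H' to the Arduino whenever a face is detected (PC side).
s = serial('COM3', 'BaudRate', 9600); % 'COM3' is an assumed port name
fopen(s);
vid = videoinput('winvideo', 1); % webcam, as in the Appendix listing
faceDetector = vision.CascadeObjectDetector();
frame = getsnapshot(vid);
bbox = step(faceDetector, frame);
if ~isempty(bbox)
fwrite(s, 'H'); % the Arduino blinks its LED on 'H'
end
fclose(s);
delete(s);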
The main approach of this project is to retrieve data from the sensor and manipulate that data to obtain the desired output from the system. The system consists of the following components: a camera sensor, a computer, and a microcontroller board.
The output devices include the servo motors, which provide the analog output, and the feedback. The block diagram of this project is represented above. In this project the data is extracted from the camera sensor and processed by the computer; after processing, the computer sends a signal to the controller board, which in turn makes the servo motors rotate according to the signals received from the computer.
Figure 3.3 Complete hardware setup of face detection and tracking
A breadboard is used to make the connections. The various connections required are given below.
SERVOS:
1. The yellow/signal wire for the pan (x axis) servo goes to digital pin 9.
2. The yellow/signal wire for the tilt (y axis) servo goes to digital pin 10.
WEBCAM:
The webcam's USB cable goes to the PC. The code will identify it via a number representing the USB port to which it is connected.
ARDUINO:
Processing takes the video input from the webcam and uses a face detection library to analyze the video. If a face is detected in the video, the library gives the Processing sketch the coordinates of the face. The Processing sketch determines where the face is located in the frame, relative to the center of the frame, and sends this data through a serial connection to the Arduino. The Arduino uses the data from the Processing sketch to move the servos of the pan/tilt setup.
b) The input video frame is read from the camera, and temporary memory storage is created to store this frame.
c) A window is created to display the captured frame, and the frame is continuously monitored for its existence.
d) A function is called to detect the face, with the frame passed as a parameter.
f) The classifier, frame, memory storage, and window are destroyed.
h) The difference between the face position and the frame center is calculated and sent to the Arduino serially.
Basically, the Arduino analyzes the serial input for commands and sets the servo positions accordingly. A command consists of two bytes: a servo ID and a servo position. When the Arduino receives a servo ID, it waits for the next serial byte and then assigns the received position value to the servo identified by that ID. The Arduino Servo library is used to control the pan and tilt servos easily. A character variable keeps track of the characters that come in on the serial port.
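A minimal MATLAB sketch of the sending side of this two-byte protocol follows, continuing the serial-link sketch given earlier (s, frame, and bbox as defined there). The servo IDs 0 (pan) and 1 (tilt) and the 0.1 proportional gain are assumptions, not values fixed by the text.

% Sketch: send [servo ID, position] byte pairs to the Arduino (PC side).
cx = size(frame, 2) / 2; % frame center, x
cy = size(frame, 1) / 2; % frame center, y
faceX = bbox(1,1) + bbox(1,3) / 2; % center of the first detected face
faceY = bbox(1,2) + bbox(1,4) / 2;
panPos = min(max(90 + 0.1 * (faceX - cx), 0), 180); % proportional correction
tiltPos = min(max(90 - 0.1 * (faceY - cy), 0), 180);
fwrite(s, uint8([0, round(panPos)])); % pan servo: ID byte, then position byte
fwrite(s, uint8([1, round(tiltPos)])); % tilt servo: ID byte, then position byte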
b) Depending on the difference found in step h), the two servo motors are sent the appropriate controls for the pan/tilt movement of the camera.
CHAPTER FOUR
The result of face detection is shown in Figure 4.1; the images are frames extracted from the video. Sometimes the face detection algorithm returns more than one result even when there is only one face in the frame. In this case post-processing is applied: if the detector provides more than one rectangle indicating the position of the face, the distances between the center points of these rectangles are calculated, and if a distance is smaller than a preset threshold, the average of these rectangles is computed and set as the final position of the detected face.
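A minimal MATLAB sketch of this post-processing step is given below; the 40-pixel distance threshold is illustrative.

% Sketch: merge detector rectangles whose centers nearly coincide.
% bboxes is N-by-4 ([x y w h]); the 40-pixel threshold is illustrative.
centers = bboxes(:,1:2) + bboxes(:,3:4) / 2;
d = sqrt(sum((centers(1,:) - centers(2:end,:)).^2, 2)); % distances to the first center
if all(d < 40)
finalBox = mean(bboxes, 1); % the averaged rectangle is the final face position
else
finalBox = bboxes(1,:); % otherwise keep the first detection
end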
Figure 4.1 Face detected
The result of face tracking is acceptable but not accurate. Affected by non-uniform lighting conditions, the result becomes worse, since the light changes the apparent color of the face.
CHAPTER FIVE
5.1 Conclusion
Face detection and tracking is an important task in the computer vision field. The system consists of two major processes: face detection and face tracking. Face detection in video images obtained from a single camera with a static background (i.e., a fixed camera) is achieved by a background subtraction approach. The face detection and tracking were performed using MATLAB/Simulink. In this project we tried different videos from a fixed camera, with a single face and with multiple faces, to see whether the system is able to detect faces. The AdaBoost algorithm has been used, so the tracker is invariant to the representation of the face of interest. The main aim of this prototype system is to detect a face, track it, match it with stored eigenfaces, and accordingly set a digital pin of the Arduino board HIGH or LOW. The eigenfaces are stored first, and then a snapshot of the user's face is taken in real time. The user's face is matched with the stored faces, and this face recognition is interfaced with the Arduino using serial communication. Hence the Arduino can control the mechanical pan/tilt setup, and the face location can be tracked using Simulink together with the servo motor that drives the camera's tracking motion.
This project implements an embedded system capable of detecting and tracking a face in real time, using a servo motor.
In the future, the work can be extended to detect moving objects against a non-static background and to use multiple cameras for real-time surveillance applications; a fully embedded system can be created that works automatically and saves images directly to a server. Along with face detection, face recognition may also be implemented.
References
[1] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz (2001), "Face Detection Using the Hausdorff Distance," Proc. Third International Conference on Audio- and Video-Based Biometric Person Authentication, Lecture Notes in Computer Science, Vol. 4.
[3] M. J. Jones and J. M. Rehg (1999), "Statistical Color Models with Application to Skin Detection," Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Vol. 1, pp. 274-280.
[4] P. Viola and M. Jones (2001), "Rapid Object Detection Using a Boosted Cascade of Simple Features," Conference on Computer Vision and Pattern Recognition, IEEE Press, pp. 511-518.
[5] L. Guo and Q. G. Wang (2009), "Research of Face Detection Based on AdaBoost Algorithm and OpenCV Implementation," J. Harbin University of Sci. and Tech., China, Vol. 14, pp. 123-126.
[6] Seattle Robotics Society, "What's a Servo: A Quick Tutorial," 2010, available at http://www.seattlerobotics.org/guide/servos.html.
[7] M. Hu, Q. Zhang, and Z. Wang (2008), "Application of Rough Sets to Image Pre-processing for Face Detection," IEEE International Conference on Information and Automation (ICIA 2008), Vol. 2, pp. 245-248.
Appendix
clc;
clear;
close all;
% Create the Viola-Jones face detector and the webcam video input.
faceDetector = vision.CascadeObjectDetector();
vid = videoinput('winvideo', 1);
set(vid,'ReturnedColorSpace', 'rgb');
img = getsnapshot(vid); % use img, not figure, to avoid shadowing the figure function
figure, imshow(img), title('Detected faces');
while(1)
data = getsnapshot(vid); % grab the current frame
bboxx = step(faceDetector, data); % detect faces: N-by-4 [x y w h] boxes
imshow(data)
if(~isempty(bboxx))
hold on;
for i=1:size(bboxx,1)
bbox=bboxx(i,:);
rc=bbox+[-bbox(3)/4,-bbox(4)/4,bbox(3)/2,bbox(4)/2]; % grow the box by 25% per side
ht=round(rc(4)/20); % corner-tick length, vertical
wd=round(rc(3)/20); % corner-tick length, horizontal
xm=round(rc(1)+(rc(3)/2)); % box center, x
ym=round(rc(2)+(rc(4)/2)); % box center, y
rectangle('Position',rc,...
'Curvature',0,...
'LineWidth',2,...
'LineStyle','--',...
'EdgeColor','y')
line([xm,xm],[rc(2),rc(2)+ht],...
'LineWidth',2,...
'Color','y');
line([rc(1),rc(1)+wd],[ym,ym],...
'LineWidth',2,...
'Color','y');
line([xm,xm],[rc(2)+rc(4),rc(2)+rc(4)-ht],...
'LineWidth',2,...
'Color','y');
line([rc(1)+rc(3),rc(1)+rc(3)-wd],[ym,ym],...
'LineWidth',2,...
'Color','y');
line([rc(1),rc(1)+wd],[rc(2),rc(2)],...
'LineWidth',4,...
'Color','y');
line([rc(1),rc(1)],[rc(2),rc(2)+ht],...
'LineWidth',4,...
'Color','y');
line([rc(1)+rc(3),rc(1)+rc(3)],[rc(2),rc(2)+ht],...
'LineWidth',4,...
'Color','y');
line([rc(1)+rc(3),rc(1)+rc(3)-wd],[rc(2),rc(2)],...
'LineWidth',4,...
'Color','y');
line([rc(1),rc(1)],[rc(2)+rc(4)-ht,rc(2)+rc(4)],...
'LineWidth',4,...
'Color','y');
line([rc(1),rc(1)+wd],[rc(2)+rc(4),rc(2)+rc(4)],...
'LineWidth',4,...
'Color','y');
line([rc(1)+rc(3),rc(1)+rc(3)-wd],[rc(2)+rc(4),rc(2)+rc(4)],...
'LineWidth',4,...
'Color','y');
line([rc(1)+rc(3),rc(1)+rc(3)],[rc(2)+rc(4),rc(2)+rc(4)-ht],...
'LineWidth',4,...
'Color','y');
end
hold off;
drawnow;
end
end