Abstract
An application for automatic face detection and tracking on video streams from surveillance cameras in public or commercial places is discussed in this paper. In many situations it is useful to detect where people are looking, e.g. in exhibits, commercial malls, and public buildings. A prototype is designed to work with web cameras; the face detection and tracking system is built on the open source platforms Arduino and OpenCV. The system is based on the AdaBoost algorithm and extracts Haar-like features of faces. It can be used for security purposes to record visitors' faces as well as to detect and track them. A program is developed using OpenCV that can detect people's faces in the web camera stream and track them.
INTRODUCTION
Computer vision is a research field that tries to perceive and represent 3D information about objects in the world. Its essence is to reconstruct the visual aspects of a 3D object by analyzing the 2D information extracted from it. Surface reconstruction and representation of 3D objects not only provide theoretical benefits, but are also required by numerous applications.
Face detection is a process that analyzes an input image and determines the number, location, size, position and orientation of the faces in it. Face detection is the basis for face tracking and face recognition, and its results directly affect the process and accuracy of face recognition. The common face detection methods are the knowledge-based approach, the statistics-based approach, and approaches that integrate different features or methods. The knowledge-based approach [Feng, 2004; Faizi, 2008] can achieve face detection in images with complex backgrounds to some extent and also obtain high detection speed, but it needs more integrated features to further enhance its adaptability. The statistics-based approach [Liang et al., 2002; Wang et al., 2008] detects faces by judging all possible areas of an image with a classifier: it treats the face region as a class of models and uses a large number of “face” and “non-face” training samples to construct the classifier.
The method has strong adaptability and robustness; however, its detection speed needs to be improved, because it must test all possible windows by exhaustive search and therefore has high computational complexity. The AdaBoost algorithm [Zhang, 2008; Guo & Wang, 2009] has emerged in recent years; it trains weak classifiers on the key category features and cascades them into a strong classifier for face detection. The method offers real-time detection speed and high detection accuracy, but needs a long training time. The digital image of a face is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels [Lu et al., 1999]. Pixel values typically represent gray levels, colours, heights, opacities, etc. It should be noted that digitization implies that a digital image is an approximation of a real scene.
Recently there has been tremendous growth in the field of computer vision. The conversion of this huge amount of low-level information into usable high-level information is the subject of computer vision. It deals with the development of the theoretical and algorithmic [Jiang, 2007] basis by which useful information about the 3D world can be automatically extracted and analyzed from a single or multiple 2D images of the world, as shown in figure 1.
Figure 1: Computer Vision
This paper describes a system that can detect and track a human face in real time using Haar-like features, where the detection algorithm is based on the wavelet transform. In computer vision, low-level processing involves image processing tasks in which the quality of the image is improved for the benefit of human observers and so that higher-level routines perform better [Viola & Jones, 2001]. Intermediate-level processing involves feature extraction and pattern detection tasks.
DESCRIPTION OF TOOLS
In this section the tools and methodology used to implement and evaluate face detection and tracking with OpenCV are detailed. Most of the new developments and algorithms in OpenCV are now written in the C++ interface [Bradski & Kaehler, 2009]. Unfortunately, it is much more difficult to provide wrappers in other languages for C++ code than for C code; therefore the other language wrappers generally lack some of the newer OpenCV 2.0 features. A CUDA-based GPU interface has been in progress since September 2010.
Processing Software
The Processing language is a text programming language specifically designed to generate and modify images. Processing strives to achieve a balance between clarity and advanced features. The system facilitates teaching many computer graphics and interaction techniques, including vector/raster drawing, image processing, color models, mouse and keyboard events, network communication, and object-oriented programming. Libraries easily extend Processing's ability to generate sound, send/receive data in diverse formats, and import/export 2D and 3D file formats [Ben Fry & Casey Reas, 2007].
Processing is for writing software to make images, animations, and interactions. Processing is a dialect of the Java programming language; the syntax is almost identical, but Processing adds custom features related to graphics and interaction, as shown in figure 3. The graphic elements of Processing are related to PostScript (a foundation of PDF) and OpenGL (a 3D graphics specification). Because of these shared features, learning Processing is an entry-level step toward programming in other languages and using different software tools.
Arduino
Arduino can sense the environment by receiving input from a variety of sensors and can affect its surroundings by controlling lights, motors, and other actuators. The microcontroller on the board is programmed using the Arduino programming language (based on Wiring) and the Arduino development environment (based on Processing). Arduino projects can be stand-alone, or they can communicate with software running on a computer (e.g. Flash, Processing, and MaxMSP).
The board, as shown in figure 5, can be built by hand or purchased preassembled; the software can be downloaded for free. The hardware reference designs (CAD files) are available under an open-source license.
ADABOOST
In 1995, Freund and Schapire first introduced the AdaBoost algorithm [Faizi, 2008]. It has since been widely used in pattern recognition.
The AdaBoost Algorithm
1. Input: sample set S = {(x1, y1), ..., (xN, yN)}, xi ∈ X, yi ∈ Y = {-1, +1}, and the number of iterations T.
2. Initialize: weights W1(i) = 1/N for i = 1, ..., N.
3. For t = 1, 2, ..., T:
   i) Train a weak classifier ht using the distribution Wt.
   ii) Calculate the weighted training error of ht: εt = Σi Wt(i)·[ht(xi) ≠ yi].
   iii) Set the classifier weight: αt = ½ ln((1 − εt) / εt).
   iv) Update the weights: Wt+1(i) = Wt(i)·exp(−αt yi ht(xi)) / Zt, where Zt normalizes Wt+1 to a distribution.
4. Output: the strong classifier H(x) = sign(Σt αt ht(x)).
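A minimal sketch of this training loop is given below, assuming scalar features and simple threshold (decision-stump) weak classifiers; the data loading, feature extraction and all identifier names are illustrative rather than part of the original system:

```cpp
#include <cmath>
#include <limits>
#include <vector>

// A decision stump: predicts +1 if polarity*(x - threshold) > 0, else -1.
struct Stump {
    double threshold = 0.0;
    int    polarity  = 1;     // +1 or -1
    double alpha     = 0.0;   // classifier weight from step 3(iii)
    int predict(double x) const { return polarity * (x - threshold) > 0 ? 1 : -1; }
};

// Run T boosting rounds on samples (x_i, y_i) with y_i in {-1,+1}.
std::vector<Stump> adaboost(const std::vector<double>& x,
                            const std::vector<int>& y, int T) {
    const size_t N = x.size();
    std::vector<double> w(N, 1.0 / N);          // step 2: uniform weights
    std::vector<Stump> strong;

    for (int t = 0; t < T; ++t) {
        // Step 3(i): pick the stump with the lowest weighted error.
        Stump best; double bestErr = std::numeric_limits<double>::max();
        for (size_t j = 0; j < N; ++j) {
            for (int pol : {1, -1}) {
                Stump s; s.threshold = x[j]; s.polarity = pol;
                double err = 0.0;                // step 3(ii): weighted error
                for (size_t i = 0; i < N; ++i)
                    if (s.predict(x[i]) != y[i]) err += w[i];
                if (err < bestErr) { bestErr = err; best = s; }
            }
        }
        // Step 3(iii): classifier weight alpha_t.
        best.alpha = 0.5 * std::log((1.0 - bestErr) / (bestErr + 1e-10));

        // Step 3(iv): re-weight the samples and normalise.
        double Z = 0.0;
        for (size_t i = 0; i < N; ++i) {
            w[i] *= std::exp(-best.alpha * y[i] * best.predict(x[i]));
            Z += w[i];
        }
        for (double& wi : w) wi /= Z;

        strong.push_back(best);
    }
    return strong;   // step 4: H(x) = sign(sum_t alpha_t * h_t(x))
}
```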
FACE DETECTION
In this section the base algorithm used to detect the face is discussed [Feng, 2004]. The AdaBoost algorithm is discussed first, followed by feature selection. The hardware is connected as follows.
SERVO:
The black/GND wires of both servos go to the Arduino's GND pin.
WEBCAM:
The webcam's USB cable goes to the PC. The code identifies the camera by a number representing the USB port it is connected to.
ARDUINO:
The Arduino Uno is connected to the PC via USB. Take note of the COM port the USB is connected to; the COM port can be found in the Arduino Tools/Serial Port menu. The check mark next to the active USB port shows the COM port used to communicate with the Arduino.
IMPLEMENTATION
After a classifier is trained, it can be applied to a region of interest (of the same size as used during training) in an input image. The classifier output is “1” if the region is likely to show a face and “0” otherwise. To search for the object in the whole image, one can move the search window across the image and check every location using the classifier. Here we use two different codes for face detection and tracking, respectively. The algorithm used for both codes (Processing & Arduino) is detailed in this section.
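A minimal sketch of this window-scanning step, assuming the OpenCV C++ interface and a pre-trained frontal-face Haar cascade; the cascade file name, camera index and detection parameters are illustrative:

```cpp
#include <vector>
#include <opencv2/opencv.hpp>

int main() {
    // Load a pre-trained frontal-face Haar cascade (path is illustrative).
    cv::CascadeClassifier face_cascade;
    if (!face_cascade.load("haarcascade_frontalface_alt.xml")) return -1;

    cv::VideoCapture cap(0);                  // webcam on the first device
    cv::Mat frame, gray;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::equalizeHist(gray, gray);

        // detectMultiScale moves the search window across the image at
        // several scales and returns every region the cascade accepts.
        std::vector<cv::Rect> faces;
        face_cascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(30, 30));

        for (size_t i = 0; i < faces.size(); ++i)
            cv::rectangle(frame, faces[i], cv::Scalar(0, 255, 0), 2);

        cv::imshow("faces", frame);
        if (cv::waitKey(10) >= 0) break;      // stop on any key press
    }
    return 0;
}
```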
Extended Haar-like Features
1. Edge Features
2. Line Features
3. Centre-Surround Features
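To illustrate how such rectangle features are evaluated efficiently, the sketch below computes a simple two-rectangle edge feature from an integral image; the window geometry and function names are illustrative and do not reproduce the exact feature set used by the trained cascade:

```cpp
#include <opencv2/opencv.hpp>

// Sum of pixel values inside rect r, using an integral image ii
// (ii has one extra row and column, as produced by cv::integral).
static double rectSum(const cv::Mat& ii, const cv::Rect& r) {
    return ii.at<double>(r.y, r.x)
         + ii.at<double>(r.y + r.height, r.x + r.width)
         - ii.at<double>(r.y, r.x + r.width)
         - ii.at<double>(r.y + r.height, r.x);
}

// Horizontal edge feature: sum of the left half minus sum of the right half.
double edgeFeature(const cv::Mat& gray, const cv::Rect& window) {
    cv::Mat ii;
    cv::integral(gray, ii, CV_64F);           // integral image in double precision
    cv::Rect left (window.x, window.y, window.width / 2, window.height);
    cv::Rect right(window.x + window.width / 2, window.y,
                   window.width / 2, window.height);
    return rectSum(ii, left) - rectSum(ii, right);
}
```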
1. Implementation of Software
Processing takes the video input from the webcam and uses the OpenCV library to analyze the video. If a face is detected in the video, the OpenCV library gives the Processing sketch the coordinates of the face. The Processing sketch determines where the face is located in the frame, relative to the centre of the frame, and sends this data through a serial connection to an Arduino. The Arduino uses the data from the Processing sketch to move the servos connected in the servo setup shown in figure 9.
a) Basically, a Haar cascade classifier is used for detecting the faces.
b) The input video frame is read from the camera, and temporary memory storage is created to store this frame.
c) A window is created to display the captured frame, and the frame is continuously monitored for its existence.
d) A function is called to detect the face, with the frame passed as a parameter.
e) Steps b)-d) are kept in a continuous loop until a user-defined key is pressed.
f) The classifier, frame, memory storage and the window are then destroyed.
g) The (X, Y) coordinate of the face in the image is plotted according to the movement of the face.
h) The difference between the face position and the frame centre is calculated and sent to the Arduino serially (see the sketch following this list).
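A minimal sketch of steps g) and h) above, showing how the offset of the detected face from the frame centre could be packed into the two-byte servo commands described in the hardware section; the gain constant and servo IDs are illustrative, and the actual serial write (which is platform dependent) is only indicated by a comment:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>
#include <opencv2/opencv.hpp>   // cv::Rect, the face bounding box from step d)

// IDs understood by the Arduino side: one byte selects a servo,
// the next byte gives its position in degrees (0-180). Illustrative values.
const uint8_t PAN_SERVO_ID  = 0;
const uint8_t TILT_SERVO_ID = 1;

// Steps g) and h): compare the face position with the frame centre and
// pack the updated pan/tilt angles into two 2-byte serial commands.
std::vector<uint8_t> faceToCommands(const cv::Rect& face,
                                    int frameWidth, int frameHeight,
                                    int panPos, int tiltPos) {
    int faceCx = face.x + face.width  / 2;   // centre of the detected face
    int faceCy = face.y + face.height / 2;

    int dx = faceCx - frameWidth  / 2;       // offset from the frame centre
    int dy = faceCy - frameHeight / 2;

    // Nudge the servo angles toward the face; the gain of 1/20 is illustrative.
    panPos  = std::min(180, std::max(0, panPos  - dx / 20));
    tiltPos = std::min(180, std::max(0, tiltPos + dy / 20));

    // These bytes would then be written to the Arduino's COM port.
    return { PAN_SERVO_ID,  static_cast<uint8_t>(panPos),
             TILT_SERVO_ID, static_cast<uint8_t>(tiltPos) };
}
```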
2. Implementation of Hardware
Basically, the Arduino analyzes the serial input for commands and sets the servo positions accordingly. A command consists of two bytes: a servo ID and a servo position. If the Arduino receives a servo ID, it waits for another serial byte and then assigns the received position value to the servo identified by that ID. The Arduino Servo library is used to easily control the pan and tilt servos. A character variable is used to keep track of the characters that come in on the serial port.
a) The Servo.h library is used on the Arduino to control the servo motors, based on the data obtained from OpenCV through the COM port.
b) Depending on the difference found in step h) of the software implementation, the two servo motors are sent appropriate commands for the pan-tilt movement of the camera.
c) Step b) is kept in a continuous loop (a minimal sketch of this protocol follows this list).
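A minimal Arduino sketch of this two-byte command protocol, assuming the pan and tilt servos are attached to pins 9 and 10 and that servo IDs 0 and 1 match the sender; the pin numbers, IDs and baud rate are illustrative:

```cpp
#include <Servo.h>

Servo panServo;
Servo tiltServo;

int pendingId = -1;   // servo ID received, waiting for its position byte

void setup() {
  Serial.begin(9600);       // must match the baud rate used on the PC side
  panServo.attach(9);       // pan servo signal pin (illustrative)
  tiltServo.attach(10);     // tilt servo signal pin (illustrative)
}

void loop() {
  while (Serial.available() > 0) {
    int b = Serial.read();
    if (pendingId < 0) {
      pendingId = b;                              // first byte: servo ID
    } else {
      if (pendingId == 0)      panServo.write(b); // second byte: position 0-180
      else if (pendingId == 1) tiltServo.write(b);
      pendingId = -1;                             // ready for the next command
    }
  }
}
```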
REFERENCES
[1] A. Faizi (2008), “Robust Face Detection using Template Matching Algorithm”, University of Toronto, Canada.
[2] P. Feng (2004), “Face Recognition based on Elastic Template”, Beijing University of Technology, China.
[3] L.H. Liang, H.ZH. Ai & G.Y. Xu (2002), “A Survey of Human Face Detection,” J.Computers. China, Vol. 25, Pp. 1–10.
[4] K.J. Wang, SH.L. Duan & W.X. Feng (2008), “A Survey of Face Recognition using Single Training Sample”, Pattern Recognition and
Artificial Intelligence, China, Vol. 21, Pp. 635–642.
[5] Z. Zhang (2008), “Implementation and Research of Embedded Face Detection using Adaboost”, Shanghai JiaoTong
University, China.
[6] L. Guo & Q.G. Wang (2009), “Research of Face Detection based on Adaboost Algorithm and OpenCV Implementation”, J. Harbin
University of Sci. and Tech., China, Vol. 14, Pp. 123–126.
[7] CH. Y. Lu, CH.SH. Zhang & F. Wen (1999), “Regional Feature based Fast Human Face Detection”, J. Tsinghua Univ. (Sci. and Tech.),
China, Vol. 39, Pp. 101–105.
[8] H. J. Jiang (2007), “Research on Household Anti-Theft System based on Face Recognition Technology”, Nanjing University of
Aeronautics and Astronautics, China.
[9] P. Viola & M. Jones (2001), “Rapid Object Detection using a Boosted Cascade of Simple Feature”, Conference on Computer Vision and
Pattern Recognition. IEEE Press, Pp. 511–518.
[10] G. Bradski & A. Kaehler (2009), “Learning OpenCV”, China: Southeast Univ. Press.
[11] Ben Fry & Casey Reas (2007), “Processing: A Programming Handbook for Visual Designers and Artists”, MIT.
[12] “Open Source Computer Vision Library Reference Manual”, Intel Corporation.
[13] Gary Bradski & Adrian Kaehler (2008), “Learning OpenCV”, O'Reilly Media.
[14] “Introduction to OpenCV”, [Online] Available: www.cse.iitm.ac.in/~sdas/courses/CV_DIP/PDF/INTRO_C
[15] “DCRP Review: Canon PowerShot S5 IS”, Available: http://www.dcresource.com/reviews/canon/powershot_s5review/
[16] “Energy Conservation”, Available: http://portal.acm.org.offcampus.lib.washington.edu/citation.cfm?
[17] Sarala A. Dabhade & Mrunal S. Bewoor (2012), “Real Time Face Detection and Recognition using Haar - based Cascade Classifier and
Principal Component Analysis”, International Journal of Computer Science and Management Research, Vol. 1, No. 1.
[18] Faizan Ahmad, Aaima Najam & Zeeshan Ahmed (2013), “Image-based Face Detection and Recognition: State of the Art”, IJCSI
International Journal of Computer Science Issues, Vol. 9, Issue. 6, No. 1.
[19] Hussein Rady (2011), “Face Recognition using Principle Component Analysis with Different Distance Classifiers”, IJCSNS International Journal of Computer Science and Network Security, Vol. 11, No. 10, Pp. 134–144.