Face Recognition
CHAPTER 1
INTRODUCTION
An internship gives students a valuable chance to relate their theoretical knowledge to the competitive and demanding environment of a real job. Moreover, since the internship period falls within the bachelor programme and students return to academia after completing it, the skills developed during the internship help them achieve a sounder academic result. After returning from the internship, students get one month to prepare themselves for the company and their future career.
I was fortunate to get the opportunity to complete my internship attachment with TECHNOFLY Solutions (P) Ltd. The internship came at the right time to take a deep look into their development methodology, working models, deals and industrial behaviour, and to see what the software industry looks like and what its rules, responsibilities and environment are. The company works with almost every platform and technology. During the internship I worked as part of the Machine Learning team and generated ideas based on their technology. The journey was not simple; there were many obstacles and new technologies to handle, yet after overcoming each challenge I discovered new potential. The skills I have gathered are priceless, and I cannot wait to apply them in the upcoming semesters. In addition, some of the non-technical skills I have acquired, combined with those technical skills, will certainly prove handy in my future jobs.
CHAPTER 2
COMPANY PROFILE
Technofly Solutions is a leading electronics product design, development and services company. It was established by professionals with industrial experience in embedded technology, real-time software, process control and industrial electronics.
The company is a pioneer in the design and development of Single Board Computers and compilers for microcontrollers within India. Talented professionals in the field of embedded hardware and software design and development work to reach excellence.
Technofly Solutions & Consulting was founded in 2017 by a team with 14+ years of experience in the embedded systems domain. Technofly Solutions focuses globally on automotive embedded technologies, VLSI design, corporate training and consulting. So far the company has delivered more than 15 corporate trainings for companies working in embedded automotive technologies in India, and it is also involved in the development of an OBD2 (On-Board Diagnostics) product for passenger cars for clients in India.
Process Quality:
1. Experience in SPICE Level 3 development.
2. Functional Safety ISO 26262 - ASIL B products
3. Adaptable to Customer procedures and guidelines
2.2. Technologies:
1. Microcontrollers: 8-, 16- and 32-bit
2. Embedded C, Python, IoT (PHP front end & MySQL back end); Wireless – Bluetooth, GPS, GPRS, Wi-Fi
3. Communication protocols – SPI, I2C, CAN, LIN
2.3. Management:
The management team is a mixture of technical and business development expertise with 14+ years of experience in the information technology field.
We can help you cut risk on embedded systems R&D, and accelerate time to market. Technofly
is your best choice for designing and developing embedded products from concept to delivery.
Our team is well-versed in product life cycles. We build complex software systems for real-time
environments and have unique expertise and core competencies in the following domains:
Wireless, Access and IoT/Cloud.
The team is associated with the R&D in Wireless Communication Technologies department of the company. The team is currently working on 4G/5G technologies associated with cognitive devices such as WLAN, Bluetooth, Zigbee and other mobile networks, for better achievable network efficiencies. The work involves examining various methodologies, currently available and under development, and implementing them for further analysis and an in-depth understanding of the effects of these methods on network capacities.
The department is currently developing and examining optimal solutions for maximizing the network data rate in both cooperative and non-cooperative network-user scenarios involving cognitive (SU) and non-cognitive (PU) devices. The work is mainly concentrated on the areas described below.
The department is actively involved in acquiring projects related to the latest technologies in low-power VLSI and the wireless domain; these projects are well thought out and detailed implementations are carried out. Projects are mainly done on the Verilog and MATLAB (from MathWorks) platforms, and may also depend on the NS2, NetSim and Xilinx platforms as per the requirements of the project in progress.
The current internship involves the study, implementation and analysis of a high-speed, energy-efficient Carry Skip Adder (CSKA) with a hybrid model. The main points are:
1. Study requirements: fundamentals of low-power VLSI design, achieving high speed and reducing the power consumption of digital circuits
2. Implementation requirements: Verilog code / ModelSim tool
3. Detection test statistics: simulation results
4. Platform: Verilog, simulated with ModelSim 6.4c and synthesized with the Xilinx tool.
Real Time Embedded System and Low Power VLSI Design Department:
Technofly Solutions provides embedded software, hardware and system development, system integration, verification and product realization services to customers in the automotive electronics and consumer electronics segments worldwide. Technofly Solutions has more than 14 years of experience in embedded systems on a variety of platforms such as microprocessors, Programmable Logic Devices (PLDs) and ASICs, and develops applications based on the various commercially available real-time and embedded operating systems.
The hardware design and development follow stringent life-cycle guidelines laid out at Technofly Solutions while accomplishing the following:
Design Assurance
1. Signal Integrity
2. Cross-talk
3. Matching and Impedance control
4. Power supply design with due emphasis on low-power, battery-operated applications
5. Thermal analysis
6. Clock distribution
7. Timing analysis
8. PCB layer stacking
Design optimization
Selection of components, keeping in mind:
1. Cost and size
2. Operating and storage temperature
3. MIL/Industrial/Commercial grades based on application
PCB design
1. Optimum number of layers for a given application
2. Material used for PCB
3. Rigid, Flexi and Rigid-Flexi designs based on applications
Software Development
Software design and development services are related to
1. Real-time Embedded Application Development
2. Device Driver Development
3. BSP Development
4. Processor/OS Porting Services
5. RTOS based development
6. Board bring-up
7. Digital Signal Processing Algorithms
8. Porting across platforms
Skill Set
1. Language: C, C++, Assembly languages, Verilog and SystemVerilog
2. Hardware Platforms: ADI DSPs, TI DSPs, ARM, PowerPC, Xscale architecture
3. RTOS: Integrity, VDK, DSP OS, Micro C OS and OASYS
4. FPGA: Xilinx (Spartan and Virtex), Actel, Altera
Tools
1. Development Tools: In-circuit emulators of various processor environments
2. Compilers: Compilers/IDEs of various processor environments
Simulation
1. Xilinx ModelSim SE
Hardware Tools:
1. Spectrum Analyser
2. Signal Generators
3. Logic Analyser
4. Digital Storage Oscilloscopes
5. Multifunction Counters
6. Development tools and in-circuit emulators for all ADI DSPs, TI DSPs, ARM processors and PowerPC
7. ORCAD, Allegro, PSpice
8. Temperature and humidity chamber
Following are the skill sets Technofly Solutions has garnered in the area of software:
1. Programming languages: C, C++, VC++, Java, C#, ASP.NET, PHP, Lex & Yacc, Perl, Python, assembly language and Ada
2. Operating environments: real-time operating systems such as GreenHills Integrity, Micro C/OS, DSP OS, VDK and OASYS, as well as MS WinCE, MS Windows, Unix/Linux and MPE/iX.
CHAPTER 3
INTRODUCTION TO PYTHON
3.1 Python
Python is a general-purpose, interpreted, interactive, object-oriented, high-level programming language. It was created by Guido van Rossum during 1985-1990. Like Perl, Python source code is also available under the GNU General Public License (GPL).
This chapter gives a brief overview of the Python programming language. Its key features include:
Easy to read − Python code is clearly defined and visible to the eyes.
A broad standard library − the bulk of Python's library is very portable and cross-platform compatible on UNIX, Windows and Macintosh.
Interactive mode − Python supports an interactive mode which allows interactive testing and debugging of snippets of code.
Portable − Python can run on a wide variety of hardware platforms and has the same interface on all platforms.
Extendable − you can add low-level modules to the Python interpreter. These modules enable programmers to add to or customize their tools to be more efficient.
GUI programming − Python supports GUI applications that can be created and ported to many system calls, libraries and windowing systems, such as Windows MFC, Macintosh, and the X Window System of Unix.
Scalable − Python provides better structure and support for large programs than shell scripting.
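These points can be illustrated with a short, self-contained snippet that relies only on the standard library; the file name and values below are purely illustrative:

    # Readable, cross-platform Python using only the standard library.
    from pathlib import Path
    from statistics import mean

    scores = [72, 88, 95, 61]
    print(f"Average score: {mean(scores):.1f}")

    # The same path-handling code runs unchanged on Windows, Linux and macOS.
    report = Path("results") / "summary.txt"
    print(report)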
Flavors of Python
• PyPy − uses a just-in-time (JIT) compiler internally, so its performance is very good.
• Stackless Python − designed for concurrency; it supports large numbers of lightweight tasklets, so it is the choice when concurrent execution is needed.
Applications of Python
2. Games:
Python has various modules, libraries and platforms that support the development of games. For example, PySoy is a 3D game engine supporting Python 3, and PyGame provides functionality and a library for game development. Numerous games have been built using Python, including Civilization IV, Disney's Toontown Online and Vega Strike.
3. Web Applications:
Web applications in areas such as banking, Odoo – a consolidated suite of business applications – and Google App Engine are a few of the popular applications based on Python.
4. Operating Systems:
Python is often an integral part of Linux distributions. For instance, Ubuntu’s Ubiquity Installer,
and Fedora’s and Red Hat Enterprise Linux’s Anaconda Installer are written in Python. Gentoo
Linux makes use of Python for Portage, its package management system.
5. Language Development:
Python’s design and module architecture has influenced the development of numerous languages. The Boo language uses an object model, syntax and indentation similar to Python. Further, the syntax of languages like Apple’s Swift, CoffeeScript, Cobra and OCaml shares similarities with Python.
6. Prototyping:
Besides being quick and easy to learn, Python also has the open source advantage of being free with
the support of a large community. This makes it the preferred choice for prototype development.
Further, the agility, extensibility and scalability and ease of refactoring code associated with Python
allow faster development from initial prototype. Since its origin in 1989, Python has grown to
become part of a plethora of web-based, desktop-based, graphic design, scientific, and
computational applications. With Python available for Windows, Mac OS X and Linux / UNIX,
it offers ease of development for enterprises. Additionally, the latest release Python 3.4.3 builds
on the existing strengths of the language, with drastic improvement in Unicode support, among
other new features.
CHAPTER 4
TASK PERFORMED
Sl. No.    Week (Date)    Task Assigned    Task Completed    Remarks
CHAPTER 5
FACE RECOGNITION
Face recognition has been an active research area in the pattern recognition and computer vision domains. It has many potential applications, such as surveillance, credit cards, passports and security. A number of methods have been proposed over the last decades. In the field of face recognition, the dimension of the facial images is very high, and classification therefore requires a considerable amount of computing time. The classification and subsequent recognition time can be reduced by reducing the dimensions of the image data.
Principal Component Analysis (PCA) is one of the popular methods used for feature extraction and data representation. It not only reduces the dimensionality of the image, but also retains some of the variation in the image data and provides a compact representation of a face image. The key idea of the PCA method is to transform the face images into a small set of characteristic feature images, called eigenfaces, which are the principal components of the initial training set of face images. PCA yields projection directions that maximize the total scatter across all classes, i.e. across all face images.
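As a rough illustration of the eigenface idea (a sketch, not the exact procedure used in this work; the image size, training-set size and number of components are arbitrary example values), the principal components can be computed with NumPy as follows:

    import numpy as np

    # Suppose each training face is a 100x100 grayscale image,
    # flattened into a 10000-dimensional row vector.
    faces = np.random.rand(40, 100 * 100)   # placeholder for real training data

    mean_face = faces.mean(axis=0)
    centered = faces - mean_face             # remove the average face

    # The principal components are the eigenvectors of the covariance matrix;
    # an SVD of the centered data gives them directly and more stably.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = Vt[:20]                     # keep the top 20 components

    # Project a face into the low-dimensional eigenface space.
    weights = (faces[0] - mean_face) @ eigenfaces.T
    print(weights.shape)                     # (20,) compact representation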
We focus on image-based face recognition. Given a picture taken with a digital camera, we would like to know whether there is any person in it, where his/her face is located, and who he/she is. Towards this goal, we generally separate the face recognition procedure into three steps: face detection, feature extraction and face recognition.
CHAPTER 6
SYSTEM REQUIREMENTS SPECIFICATION
Maintainability:
The system should be optimized for supportability, or ease of maintenance, as far as possible. This may be achieved through the use of documentation, coding standards, naming conventions, class libraries and abstraction.
System: Intel.
Hard Disk: 120 GB.
Monitor: 15” LED.
Input Devices: Keyboard, Mouse.
RAM: 4 GB.
CHAPTER 7
DESIGN
Fig. 7.1. Workflow diagram
Execution of the proposed system is initialised by capturing a frame from the video using the system's camera.
The face detection algorithm then processes the captured video frames to give out the face enclosed in a rectangular box. This output from the face detection algorithm is then processed using an AdaBoost classifier to detect the eye region in the face.
The detected eye is then checked for any movement of the eyeball.
If there is movement, it is tracked to give out the combination the patient is using to express the dialogue.
If not, then the blink pattern is processed to give out the voice as well as the text output for the respective dialogue.
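A minimal OpenCV sketch of the capture, face detection and eye detection part of this workflow is shown below; the cascade files are the standard ones shipped with OpenCV, while the camera index and detection parameters are assumptions:

    import cv2

    # Haar cascades bundled with OpenCV (face + eyes).
    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture(0)                    # system camera (assumed index 0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
            roi = gray[y:y + h, x:x + w]         # search for eyes only inside the face
            for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
                cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
        cv2.imshow("Face and eye detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()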
7.2. METHODOLOGY:
Face Detection:
The main function of this step is to determine (1) whether human faces appear in a given image, and (2) where these faces are located. The expected outputs of this step are patches containing each face in the input image. In order to make the subsequent face recognition system more robust and easy to design, face alignment is performed to adjust the scales and orientations of these patches. Besides serving as pre-processing for face recognition, face detection can be used for region-of-interest detection, retargeting, video and image classification, etc.
Feature Extraction:
After the face detection step, human-face patches are extracted from images. Directly using these patches for face recognition has some disadvantages. First, each patch usually contains over 1000 pixels, which is too large to build a robust recognition system. Second, face patches may be taken from different camera alignments, with different facial expressions and illuminations, and may suffer from occlusion and clutter. To overcome these drawbacks, feature extraction is performed to do information packing, dimension reduction, salience extraction and noise cleaning. After this step, a face patch is usually transformed into a vector of fixed dimension or a set of fiducial points and their corresponding locations. In some literature, feature extraction is either included in face detection or in face recognition.
Face Recognition:
After formulating the representation of each face, the last step is to recognize the identities of these faces. In order to achieve automatic recognition, a face database needs to be built. For each person, several images are taken and their features are extracted and stored in the database. Then, when an input face image comes in, we perform face detection and feature extraction, and compare its features to each face class stored in the database. There has been much research and many algorithms proposed to deal with this classification problem, and we will discuss them in later sections. There are two general applications of face recognition: one is called identification and the other is called verification. Face identification means that, given a face image, we want the system to tell who he/she is, or the most probable identity; while in face verification, given a face image and a claimed identity, we want the system to tell whether the claim is true or false.
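The difference between the two modes can be sketched as follows, assuming each face has already been reduced to a fixed-length feature vector; the function names, the Euclidean distance metric and the threshold value are illustrative choices, not the exact ones used in this work:

    import numpy as np

    def verify(probe, claimed_template, threshold=0.6):
        """Verification: accept or reject a claimed identity."""
        distance = np.linalg.norm(probe - claimed_template)
        return distance <= threshold

    def identify(probe, database):
        """Identification: return the most probable identity in the database."""
        best_name, best_dist = None, float("inf")
        for name, template in database.items():
            d = np.linalg.norm(probe - template)
            if d < best_dist:
                best_name, best_dist = name, d
        return best_name, best_dist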
The tasks and cases discussed in the previous sections give an overview of pattern recognition. To gain more insight into the performance of pattern recognition techniques, we need to consider some important factors. In template matching, the number of templates for each class and the adopted distance metric directly affect the recognition result. In statistical pattern recognition, there are four important factors: the size of the training data N, the dimensionality of each feature vector d, the number of classes C, and the complexity of the classifier h. In the syntactic approach, we expect that the more rules are considered, the higher the recognition performance we can achieve, while the system becomes more complicated; sometimes it is hard to transfer and organize human knowledge into algorithms. Finally, in neural networks, the number of layers, the number of perceptrons (neurons) used, the dimensionality of the feature vectors and the number of classes all have an effect on recognition performance. More interestingly, neural networks have been shown to have close relationships with statistical pattern recognition techniques.
1. Pre-Processing:
To reduce the variability in the faces, the images are processed before they are fed into the network. All positive examples, that is, the face images, are obtained by cropping images with frontal faces to include only the front view. All the cropped images are then corrected for lighting using standard algorithms.
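One common way to implement the cropping and lighting correction described above is shown in this sketch; histogram equalization is assumed here as the "standard algorithm", and the patch size is an example value:

    import cv2

    def preprocess_face(gray_image, box, size=(100, 100)):
        """Crop the detected face, resize it, and correct for lighting."""
        x, y, w, h = box
        face = gray_image[y:y + h, x:x + w]   # keep only the frontal face region
        face = cv2.resize(face, size)         # fixed input size for the network
        face = cv2.equalizeHist(face)         # simple lighting normalisation
        return face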
2. Classification:
Neural networks are implemented to classify the images as faces or non-faces by training on these examples. We use both our own implementation of the neural network and the Matlab neural network toolbox for this task. Different network configurations are experimented with to optimize the results.
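As a rough, hypothetical Python analogue of this face/non-face training step (the report uses a custom network and the Matlab toolbox; here scikit-learn's MLPClassifier stands in, and the data arrays are random placeholders):

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # X: flattened, pre-processed patches; y: 1 = face, 0 = non-face (placeholders).
    X = np.random.rand(200, 100 * 100)
    y = np.random.randint(0, 2, size=200)

    # Different hidden-layer configurations can be experimented with.
    net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)
    net.fit(X, y)
    print(net.predict(X[:5]))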
3. Localization:
The trained neural network is then used to search for faces in an image and, if present, localize them in a bounding box. The features of the face on which the work has been done are position, scale, orientation and illumination.
CHAPTER 8
IMPLEMENTATION
8.1. MODULES:
o Image Capturing.
o Experimental Setup.
o Face recognition.
o Testing phase.
Image Capturing:
The proposed system consists of a rotating high-definition camera, placed in the streets to capture every person. From the captured image frames, people's faces are detected using the OpenCV face detection technique.
Experimental Setup:
In this experiment we used OpenCV with the cascade model; the hardware platform is a 64-bit operating system (Linux 16.04), a 2.5 GHz processor, 8 GB of memory and 16 MP high-definition cameras. The setup was tested in a real classroom containing 20 students with all variations of poses. We tested the proposed face detection method and existing face detection techniques using the benchmark dataset FDDB. This dataset contains images of human faces in multiple poses. Out of 3500 FDDB images, the Haar cascade classifier technique detects faces with an accuracy of 94.71%.
Face Recognition:
We propose a face detection technique incorporating the Haar cascade classifier and LBPH techniques. This technique does not perform any sub-sampling, but optimizes over all sub-windows. The method accurately detects varied faces positioned frontally, tilted up/right/left/down, and occluded faces, with 99.69% accuracy. The following figure shows some samples of faces detected using the proposed method.
Testing phase:
Whenever an image is captured, the face encodings of the image are extracted and then compared to the face encodings of the images stored in the database. If the distance between the encoding of the captured image and the encoding of an image in the database is less than or equal to the threshold, then the faces in both images belong to the same person, as shown in Figure 1. In that case, the user is notified that a match has been found, along with the picture from the database that matched the uploaded picture. If the distance between the encodings is more than the threshold, it means that the faces in the images are not of the same person. In this way, our proposed system will help in identifying missing people.
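A sketch of this matching step is given below, using the open-source face_recognition library as one possible way to obtain and compare encodings; the text does not specify the library used, and the threshold and file paths are illustrative:

    import face_recognition

    THRESHOLD = 0.6                      # a typical default for this library

    # Encoding of an image stored in the database (assumes one face per image).
    known_image = face_recognition.load_image_file("database/person1.jpg")
    known_encoding = face_recognition.face_encodings(known_image)[0]

    # Encoding of the newly captured image.
    captured_image = face_recognition.load_image_file("captured.jpg")
    captured_encoding = face_recognition.face_encodings(captured_image)[0]

    distance = face_recognition.face_distance([known_encoding], captured_encoding)[0]
    if distance <= THRESHOLD:
        print("Match found: the captured face matches person1")
    else:
        print("No match: the faces belong to different people")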
8.2.4: Now we will create a dataset for training the machine. The more data collected, the more accurate the results will be. Here we will be collecting 50 samples per person for training purposes, as in the sketch below.
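One possible implementation of this collection step is shown here; the 50-sample count comes from the text, while the camera index, the dataset/ output folder (assumed to exist) and the file-naming scheme are assumptions:

    import cv2

    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)

    user_id, count = 1, 0
    while count < 50:                                    # 50 samples per person
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            count += 1
            # Save the cropped grayscale face; the user ID is encoded in the name.
            cv2.imwrite(f"dataset/user.{user_id}.{count}.jpg", gray[y:y + h, x:x + w])
    cap.release()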
8.2.7: After successfully training on the dataset, we need to write code for face recognition, so that when we execute the recognition code the machine is able to detect and recognize the face.
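A minimal training-and-recognition sketch with OpenCV's LBPH recognizer (mentioned earlier in this report) is given below. It requires the opencv-contrib package; the folder layout and file naming follow the collection sketch above and are assumptions, and in practice the face crops should share a common size:

    import os
    import cv2
    import numpy as np

    # Train the LBPH recognizer on the images collected in the previous step.
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    images, labels = [], []
    for name in os.listdir("dataset"):
        img = cv2.imread(os.path.join("dataset", name), cv2.IMREAD_GRAYSCALE)
        labels.append(int(name.split(".")[1]))   # user ID encoded in the file name
        images.append(img)
    recognizer.train(images, np.array(labels))

    # Recognise a face in a new grayscale image (cropped as during collection).
    test = cv2.imread("test_face.jpg", cv2.IMREAD_GRAYSCALE)
    label, confidence = recognizer.predict(test)
    print(f"Predicted ID {label} (confidence {confidence:.1f})")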
CHAPTER 9
RESULT
Detected Face
As output, we get the ID of the matching image from the database if the test image is recognized.
CONCLUSION
This report describes the mini-project for the visual perception and autonomy module. It explains the technologies used in the project and the methodology followed, and finally presents the results and discusses the challenges and how they were resolved.
Using Haar cascades for face detection worked extremely well, even when subjects wore spectacles. Real-time video speed was satisfactory as well, devoid of noticeable frame lag. Considering all factors, LBPH combined with Haar cascades can be implemented as a cost-effective face recognition platform. An example is a system to identify known troublemakers in a mall or a supermarket, to warn the owner and keep him alert, or for automatic attendance taking in a class.
REFERENCES
[1] Takeo Kanade. Computer Recognition of Human Faces, volume 47. Birkhäuser Basel, 1977.
[2] Lawrence Sirovich and Michael Kirby. Low-dimensional procedure for the characterization of human faces. JOSA A, 4(3):519–524, 1987.
[3] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71–86, Jan 1991.
[4] Dong-Chen He and Li Wang. Texture unit, texture spectrum, and texture analysis. IEEE Transactions on Geoscience and Remote Sensing, 28(4):509–512, Jul 1990.
[5] X. Wang, T. X. Han, and S. Yan. An HOG-LBP human detector with partial occlusion handling. In 2009 IEEE 12th International Conference on Computer Vision, pages 32–39, Sept 2009.
[7] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), volume 1, pages I-511–I-518, 2001.
[8] John G. Daugman. Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. JOSA A, 2(7):1160–1169, 1985.