
A Project Report On

Attendance Management System Based On Face Recognition
Submitted in partial fulfillment of the requirements for the Degree of
Bachelor of Technology
in
Electrical Engineering
By
Tarun Saini (1608220029)
Ritik Sharma (1608220025)
Abhishek Saini (1608220002)
Km. Lavi (1608220015)

Under the guidance of

Project Guide:
Mr. Saurabh Saxena (Assistant Professor)

Project Coordinators:
Dr. Rajul Misra (HOD)
Mr. Saurabh Saxena (Assistant Professor)

ELECTRICAL ENGINEERING DEPARTMENT


MORADABAD INSTITUTE OF TECHNOLOGY, MORADABAD
Dr. A.P.J. Abdul Kalam Technical University, Lucknow (U.P.)

DECLARATION
We hereby declare that this submission is our own work and that, to the best of our knowledge
and belief, it contains no material previously published or written by another person, nor
material which to a substantial extent has been accepted for the award of any degree or
diploma of a university or other institute of higher learning, except where due
acknowledgement has been made in the text.

NAME : TARUN SAINI


ROLL NO : 1608220029
DATE :
SIGNATURE :

NAME : RITIK SHARMA


ROLL NO : 1608220025
DATE :
SIGNATURE :

NAME : ABHISHEK SAINI


ROLL NO : 1608220002
DATE :
SIGNATURE :

NAME : Km. LAVI


ROLL NO : 1608220015
DATE :
SIGNATURE :

MORADABAD INSTITUTE OF TECHNOLOGY, MORADABAD
ELECTRICAL ENGINEERING DEPARTMENT
Session 2019–2020
CERTIFICATE

This is to certify that the major project entitled “Attendance Management System Based on
Face Recognition”, submitted by Tarun Saini (1608220029), Ritik Sharma (1608220025),
Abhishek Saini (1608220002), and Km. Lavi (1608220015) in partial fulfillment of the
requirements for the Degree of Bachelor of Technology in Electrical Engineering, embodies
the work done by them under my guidance.

Signature of the Project Guide Signature of the Project Coordinator

Name: Mr. Saurabh Saxena Name: Dr. Rajul Misra

Designation: Assistant Professor Designation: HOD

Date: Date:

Name: Mr. Saurabh Saxena

Designation: Assistant Professor

Date:

ACKNOWLEDGEMENT
We express our deepest sense of gratitude towards our guide Mr. Saurabh Saxena
(Assistant Professor), Electrical Engineering Department, Moradabad Institute of
Technology, Moradabad, for his patience, inspiration, constant encouragement, moral support,
keen interest, and valuable suggestions during the preparation of the project.

Our heartfelt gratitude goes to Dr. Rajul Misra, Head of Department, and all faculty members
of the Electrical Engineering Department, who with their encouraging and caring words and most
valuable suggestions have contributed, directly or indirectly, in a significant way towards the
completion of the project.

We are indebted to all our classmates for taking an interest in discussing our problems and
encouraging us. We owe a debt of gratitude to our parents for their consistent support,
sacrifice, candid views, and meaningful suggestions given to us at different stages of this work.

Last but not least, we are thankful to the Almighty, who gave us the strength and health to
complete our project report.

Tarun Saini
Ritik Sharma
Abhishek Saini
Km. Lavi

ABSTRACT
Nowadays, educational institutes are concerned about the regularity of student attendance.
There are mainly two conventional methods of marking attendance: calling out roll numbers
or taking students' signatures on paper. Both are time consuming and cumbersome. In this
project we implement an attendance management system using face recognition through
computer vision. The proposed "Attendance Management System Based on Facial Recognition"
has a wide range of applications. This project will save time and eliminate chances of proxy
attendance. The main goal of this automated system of face detection and recognition is to
perform face recognition in a real-time environment, so that educational institutes or
organizations can mark the attendance of their students and employees on a daily basis and
keep track of their presence. The system will mark and record the attendance in any
environment. The system is fully automated: the user captures video, attendance is marked
accordingly with greatly improved accuracy, and finally an attendance report is generated.

Signature of Group Members Signature of Guide

List of Figures
FIGURE NO. NAME PAGE NO.

Fig-1 Raspberry Pi 3 B 3
Fig-2 Raspberry Pi 3 B (different components) 4
Fig-3 Raspberry Pi GPIO Pin Diagram 5
Fig-4 Broadcom BCM2837 System-on-chip 6
Fig-5 GPIO 7
Fig-6 SMSC LAN9514 7
Fig-7 Antenna 8

Fig-8 Keyboard 9

Fig-9 Computer Mouse 9

Fig-10 WebCam 10

Fig-11 HDMI Cable 11

Fig-12 Ethernet Cable 12

Fig-13 Raspberry Pi Website Downloads Page 13

Fig-14 Raspbian OS 14

Fig-15 Micro SD Card 16GB 14

Fig-16 Etcher Flashing OS into SD Card 15


Fig-17 Placing CONF and SSH File 16
Fig-18 Scanning IP of Raspberry Pi 17
Fig-19 Feeding IP Address into PuTTY 17
Fig-20 Logging in Raspberry Pi 18
Fig-21 Enabling VNC 18
Fig-22 Connecting with VNC 19
Fig-23 Raspbian OS Homescreen 20
Fig-24 Fibonacci Series in Python 23
Fig-25 The YOLO Detection System 27
Fig-26 Classification and Object Detection 28
Fig-27 The Model 29
Fig-28 Example of Eigen Faces 38

Fig-29 Example of Line Edge Map 39
Fig-30 Example of HOG 40
Fig-31 Block Diagram of Image Processing 41
Fig-32(A) Training Images 45
Fig-32(B) Training Images 45
Fig-32(C) Training Images 46
Fig-32(D) Training Images 46

Fig-33(A) Marking Attendance(Tracking Images) 50

Fig-33(B) Marking Attendance(Tracking Images) 50

TABLE OF CONTENTS

CONTENTS PAGE NO.

Title Page i
Declaration ii
Certificate iii
Acknowledgement iv
Abstract v
List of Figure vi-vii
CHAPTER 1: INTRODUCTION 1

1.1 Hardware Implementation 2


1.1.1 Raspberry Pi 2
1.1.2 Raspberry Pi 3 Model B V2 2-4
1.1.3 Pin Diagram 5
1.1.4 SoC 6
1.1.4.1 What is SoC 6
1.1.4.2 Raspberry Pi SoC 6

1.1.5 GPIO 7
1.1.6 USB Chip 7
1.1.7 Antenna 8
1.1.8 Peripherals 8
1.1.8.1 Peripherals Required 8

1.1.9 Display 10
1.1.9.1 HDMI 11
1.1.9.2 Ethernet Cable 11

1.2 Software Implementation 12
1.2.1 Downloading the OS 13
1.2.2 Installing the OS 14
1.2.2.1 Writing Raspbian Image to SD Card 15
1.2.2.2 Getting Display on PC Screen 15-19

CHAPTER 2: PROGRAMMING 20

2.1 Technical Specification 20-21


2.2 Programming The Project 22
2.2.1 Python 22
2.2.2 Python Modules Required For the Project 22
2.2.2.1 OpenCV 23
2.2.2.2 Numpy 23
2.2.2.3 Pandas 24
2.2.2.4 Image Module (Pillow) 25
2.2.2.5 Tkinter 25
2.2.2.6 Datetime 25
2.2.2.7 Calendar 25

2.2.3 YOLO Algorithm For Object Detection 26


2.2.3.1 Unified Detection 27-28
2.2.3.2 YOLO Algorithm on Images 29-32
2.2.3.3 Detection Through Live Video 32-36

2.2.4 Face Detection and Recognition 37-40


2.2.5 Training of Faces (Coding) 41-45
2.2.6 Face Recognition (Coding) 46-49

CHAPTER 3: CONCLUSION AND FUTURE SCOPE 50

3.1 Conclusion 50
3.2 Future Scope of Project 51


CHAPTER-1

INTRODUCTION

Image processing is a type of signal processing for which the input is a photograph, a video
frame, or an image. There are two types of image processing: analog and digital. Analog image
processing is a technique used on hard copies such as photographs and printouts, while digital
image processing involves the manipulation of digital images using computers. Nowadays,
student attendance plays a significant role in many colleges, universities, and schools.
There can be two types of attendance:

1. Attendance system (Manual)

2. Attendance system (Automated)

An automated attendance system will capture an image when a person enters the classroom
and will mark the attendance accordingly. A manual attendance system, on the other hand,
requires verifying and managing each and every student record on paper, which takes more
time and effort from the faculty or staff, and the chances of proxies are also higher. The
proposed system will be efficient and more user friendly, as it can run on devices that almost
everyone has nowadays. This study attempts to provide an automated attendance system that
identifies students using face recognition technology, through an image or video stream, for
recording attendance in any classroom environment and estimating efficiency accordingly. By
continuously detecting facial information, this method addresses the low efficiency of existing
technologies and improves the accuracy of face recognition. We studied and planned a
technique that marks attendance using face recognition based on continuous surveillance. In
this proposed method, our aim is to obtain images or video of the students' faces, their
positions, and their attendance, which is useful information in the lecture or classroom
environment.

1.1 HARDWARE IMPLEMENTATIONS
1.1.1 RASPBERRY PI
The Raspberry Pi is a series of small single-board computers developed in the United
Kingdom by the Raspberry Pi Foundation to promote the teaching of basic computer science in
schools and in developing countries. The original model became far more popular than
anticipated, selling outside its target market for uses such as robotics. It is now widely used
even in research projects, such as weather monitoring, because of its low cost and
portability. Thanks to these advantages, this project also makes use of the Raspberry Pi 3,
with its convenient and easy-to-use operating system.

The GPU provides OpenGL ES 2.0, hardware-accelerated OpenVG, and 1080p30 H.264
high-profile decode, and is capable of 1 Gpixel/s, 1.5 Gtexel/s, or 24 GFLOPs of general-purpose
compute. What does all that mean? It means that if you plug the Raspberry Pi 3 into
your HDTV, you can watch Blu-ray-quality video, using H.264 at 40 Mbit/s.

Processor speed ranges from 700 MHz to 1.4 GHz for the Pi 3 Model B+ or 1.5 GHz for the
Pi 4; on-board memory ranges from 256 MiB to 1 GiB random-access memory (RAM), with
up to 8 GiB available on the Pi 4. Secure Digital (SD) cards in MicroSDHC form factor
(SDHC on early models) are used to store the operating system and program memory. The
boards have one to five USB ports. For video output, HDMI and composite video are
supported, with a standard 3.5 mm tip-ring-sleeve jack for audio output. Lower-level output
is provided by a number of GPIO pins, which support common protocols like I²C. The B-
models have an 8P8C Ethernet port and the Pi 3, Pi 4 and Pi Zero W have on-board Wi-Fi
802.11n and Bluetooth.

1.1.2 RASPBERRY PI 3 MODEL B V2


Raspberry Pi 4 Model B was released in June 2019 with a 1.5 GHz 64-bit quad core ARM
Cortex-A72 processor, on-board 802.11ac Wi-Fi, Bluetooth 5, full gigabit Ethernet
(throughput not limited), two USB 2.0 ports, two USB 3.0 ports, and dual-monitor support
via a pair of micro HDMI (HDMI Type D) ports for up to 4K resolution. The Pi 4 is also
powered via a USB-C port, enabling additional power to be provided to downstream
peripherals, when used with an appropriate PSU.

(a)

(b) (c)

Fig. 1 – Raspberry Pi 3 B

The biggest change that has been enacted with the Raspberry Pi 3 is an upgrade to a next
generation main processor and improved connectivity with Bluetooth Low Energy (BLE)
and BCM43143 Wi-Fi on board. Additionally, the Raspberry Pi 3 has improved power
management, with an upgraded switched power source up to 2.5 Amps, to support more
powerful external USB devices.

The Raspberry Pi 3’s four built-in USB ports provide enough connectivity for a mouse,
keyboard, or anything else that you feel the RPi needs, but if you want to add even more you
can still use a USB hub. Keep in mind, it is recommended that you use a powered hub so as
not to overtax the on-board voltage regulator. Powering the Raspberry Pi 3 is easy: just plug
any USB power supply into the micro-USB port.

There's no power button, so the Pi will begin to boot as soon as power is applied; to turn it
off, simply remove power. The four built-in USB ports can even output up to 1.2 A, enabling
you to connect more power-hungry USB devices (this does require a 2 A micro-USB power
supply).

Fig. 2 – Raspberry Pi 3 B (different components)

1.1.3 Pin Diagram
The following figure shows the pin diagram of Raspberry Pi 3 Model B.

Fig. 3 – Raspberry Pi GPIO Pin Diagram

1.1.4 SoC
1.1.4.1 What is SoC?
A system on a chip is an integrated circuit (also known as a "chip") that integrates all or most
components of a computer or other electronic system. These components almost always include
a central processing unit (CPU), memory, input/output ports and secondary storage – all on a
single substrate or microchip, about the size of a coin.

1.1.4.2 Raspberry Pi SoC


Built specifically for the new Pi 3, the Broadcom BCM2837 system-on-chip (SoC) includes
four high-performance ARM Cortex-A53 processing cores running at 1.2GHz with 32kB
Level 1 and 512kB Level 2 cache memory, a VideoCore IV graphics processor, and is
linked to a 1GB LPDDR2 memory module on the rear of the board.

Fig. 4 - Broadcom BCM2837 system-on-chip (SoC)

1.1.5 GPIO
The Raspberry Pi 3 features the same 40-pin general-purpose input-output (GPIO) header as all the Pis
going back to the Model B+ and Model A+. Any existing GPIO hardware will work without
modification; the only change is a switch to which UART is exposed on the GPIO’s pins, but that’s
handled internally by the operating system.

Fig. 5 - GPIO

1.1.6 USB CHIP


The Raspberry Pi 3 shares the same SMSC LAN9514 chip as its predecessor, the Raspberry
Pi 2, adding 10/100 Ethernet connectivity and four USB channels to the board. As before,
the SMSC chip connects to the SoC via a single USB channel, acting as a USB-to-Ethernet
adaptor and USB hub.

Fig.6 - SMSC LAN9514

1.1.7 ANTENNA
There’s no need to connect an external antenna to the Raspberry Pi 3. Its radios are connected to
this chip antenna soldered directly to the board, in order to keep the size of the device to a
minimum. Despite its diminutive stature, this antenna should be more than capable of picking up
wireless LAN and Bluetooth signals – even through walls.

Fig.7 - Antenna

1.1.8 PERIPHERALS
Although often pre-configured to operate as a headless computer, the Raspberry Pi may also
optionally be operated with any generic USB computer keyboard and mouse. It may also be used
with USB storage, USB to MIDI converters, and virtually any other device/component with USB
capabilities, depending on the installed device drivers in the underlying operating system (many of
which are included by default). Other peripherals can be attached through the various pins and
connectors on the surface of the Raspberry Pi.
1.1.8.1 Peripherals Required For This Project
This project will need some peripherals, which can be connected to the Raspberry Pi simply
through its USB ports. These peripherals are a keyboard, a mouse, and a webcam; the regular
keyboard, mouse, and webcam normally used with computers will work with the Raspberry Pi.
Along with that, initially, for programming purposes, it will also need an HDMI cable or an
Ethernet cable, so that it can be connected to a laptop screen and programming can be done
before we finally install the device somewhere.

A computer keyboard is a typewriter-style device which uses an arrangement of buttons or keys to
act as mechanical levers or electronic switches. Following the decline of punch cards and paper
tape, interaction via teleprinter-style keyboards became the main input method for computers. It
will be useful for inputting student data, such as name, roll number, etc.

Fig.8 - Keyboard

A computer mouse is a hand-held pointing device that detects two-dimensional motion relative to a
surface. This motion is typically translated into the motion of a pointer on a display, which allows a
smooth control of the graphical user interface of a computer. Graphical control will be required
for easily feeding data and marking attendance.

Fig.9 – Computer Mouse

A webcam is a video camera that feeds or streams an image or video in real time to or through a
computer to a computer network, such as the Internet. Webcams are typically small cameras that sit
on a desk, attach to a user's monitor, or are built into the hardware. Webcams can be used during a
video chat session involving two or more people, with conversations that include live audio and
video. This will be the eyes of the project. It will feed all the visual information required by our
program into Raspberry Pi, and allow image detection and processing.

Fig.10 – WebCam

1.1.9 DISPLAY
During the development phase, a Raspberry Pi can be connected directly to a PC or laptop's
screen through either an HDMI cable or an Ethernet cable; both will work fine. Initially they can
be used to install the OS and the required libraries and to get the code working, before actually
connecting the Pi to a separate LCD screen made for the Raspberry Pi. These LCD screens are a
bit costly, so those using the Raspberry Pi only for educational purposes should learn to connect
it to a desktop screen.

1.1.9.1 HDMI
HDMI stands for High-Definition Multimedia Interface and is the most frequently used HD signal
for transferring both high-definition audio and video over a single cable. It is used in the
commercial AV sector and is the most used cable in homes, connecting devices such as digital TVs,
DVD players, Blu-ray players, Xbox, PlayStation, and Apple TV to the television. Below is an
image of an HDMI cable. The Raspberry Pi 3 model has an HDMI port and can be connected to a
laptop's HDMI port directly, provided that it is supplied through a 2.5 A power adapter.

Fig.11 – HDMI cable

1.1.9.2 Ethernet Cable


An Ethernet cable is a common type of network cable used with wired networks. Ethernet cables
connect devices such as PCs, routers, and switches within a local area network. These physical
cables are limited by length and durability. If a network cable is too long or of poor quality, it won't
carry a good network signal. These limits are one reason there are different types of Ethernet cables
that are optimized to perform certain tasks in specific situations. The figure shows an Ethernet
cable which can be used to get the display of our Raspberry Pi on our laptop screen and get the
work started.

It will help us get the virtual display of the OS running on our Raspberry Pi through the Ethernet
connection.

Fig.12 – Ethernet Cable

1.2 SOFTWARE IMPLEMENTATIONS


The first and foremost step in the software implementation is the installation of an OS, i.e., an
operating system, on the Raspberry Pi. Raspberry Pi OS (previously called Raspbian) is the
official operating system for all models of the Raspberry Pi. Other third-party operating systems
can also be installed if the official version does not work on your Raspberry Pi.

1.2.1 Downloading the OS
The OS can be installed by going to the website raspberrypi.org/downloads and downloading
either Raspbian or NOOBS.

Fig.13 – Raspberry Pi Website Downloads Page

The Raspbian OS is available in three versions. The first is “Raspberry Pi OS (32-bit) with
desktop and recommended software” (an image with desktop and recommended software, based on
Debian Buster). The second is “Raspberry Pi OS (32-bit) with desktop” (an image with desktop
based on Debian Buster), and the third is “Raspberry Pi OS (32-bit) Lite” (a minimal image based
on Debian Buster). Any one of these OS images can be downloaded and later installed on the
Raspberry Pi. These three versions are shown in the image attached below.

Fig.14 – Raspbian OS

1.2.2 Installing the OS


Before moving to connect our Raspberry Pi to our laptop display, we need an SD card with the OS
installed.

Fig.15 – Micro SD Card 16GB

Many operating systems are available for the Raspberry Pi and most are based on Linux, but the
most popular is Raspbian. This OS not only provides a fully functional desktop environment with
commonly used programs such as Chromium and a word processor, but it also includes a wide range
of programming tools. Since 2015, the Raspberry Pi Foundation has endorsed Raspbian as the
primary operating system for the Raspberry Pi, and it is open source.
1.2.2.1 Writing Raspbian Image to SD Card
The downloaded image then needs to be written to the SD card. This can be done with tools such
as Rufus, Etcher, or Raspberry Pi Imager: plug the SD card into a laptop, select the image and
the SD card in any of these tools, and flash the OS onto the card. The figure shows Etcher
flashing Raspbian OS onto the SD card.

Fig.16 – Etcher Flashing OS into SD Card

1.2.2.2 Getting Display on PC Screen


Once the image has been written, Raspbian OS is installed. Now, before inserting the SD card
into the Raspberry Pi, a couple of files need to be added to the SD card that will enable SSH and
connect the Pi to Wi-Fi on boot. A file named "wpa_supplicant.conf" has to be created with the
code below in it, so that the Raspberry Pi can connect to the Wi-Fi network.

Code:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=IN #Your country code

network={
ssid="Tarun" #Your Wi-Fi Name
psk="huehuehue" #Your Wi-Fi password
key_mgmt=WPA-PSK
}

Next, an empty file named "ssh", without any extension, has to be created. This will enable
SSH on the Raspberry Pi and we will be able to connect to it from our PC.

Fig.17 – Placing CONF and SSH file

Now the SD card can be removed and inserted into the Raspberry Pi. The Raspberry Pi boots up
automatically and connects to our Wi-Fi once it is connected to the power supply. An Ethernet
cable is also inserted into both the Raspberry Pi and our PC to connect the two.

After that, we need to find the IP address of our Raspberry Pi; for that, Advanced IP Scanner
can be used, as shown in the figure below.

Fig.18 – Scanning IP of Raspberry Pi

Now, we need to SSH into the Raspberry Pi. To do this, we will use the PuTTY software. The IP
address obtained through Advanced IP Scanner is fed into PuTTY as shown in the figure.

Fig.19 – Feeding IP Address into PuTTY

Alternatively, this can also be done by writing raspberrypi.mshome.net instead of the IP address;
both will work fine. It will ask for a username and password. The default username is 'pi' and
the password is 'raspberry'.

Fig.20 – Logging in Raspberry Pi

Once successfully logged in, we need to enable VNC so that we can use a mouse and keyboard to
control the Pi. To do that, the following command has to be used:
sudo raspi-config
The following window will pop up; VNC can be enabled by going to Interfacing Options > VNC and
enabling it.

Fig.21 – Enabling VNC

Now, we need to open VNC and type the IP address of the Raspberry Pi into it. In computing,
Virtual Network Computing (VNC) is a graphical desktop-sharing system that uses the Remote
Frame Buffer (RFB) protocol to remotely control another computer. It transmits the keyboard and
mouse events from one computer to another, relaying the graphical-screen updates back in the
other direction, over a network. It will ask for a username and password; the default username
is 'pi' and the password is 'raspberry'. This is also shown in the figure below.

Fig.22 – Connecting with VNC

Finally, the Raspberry Pi desktop should appear as a VNC window. We will be able to access the
GUI and do everything as if we were using the Pi's keyboard, mouse, and monitor directly. After
clicking "OK", the final screen will look something like the image shown below. This is what our
freshly installed Raspbian OS looks like.

Fig.23 – Raspbian OS Homescreen

CHAPTER-2

PROGRAMMING

2.1 TECHNICAL SPECIFICATIONS


Processor
• Broadcom BCM2837 chipset.
• 1.2GHz Quad-Core ARM Cortex-A53 (64Bit)

802.11 b/g/n Wireless LAN and Bluetooth 4.1 (Bluetooth Classic and LE)
• IEEE 802.11 b/g/n Wi-Fi. Protocols: WEP, WPA, WPA2; AES-CCMP encryption
(maximum key length of 256 bits); maximum range of 100 meters.
• IEEE 802.15 Bluetooth; symmetric encryption with the Advanced Encryption Standard
(AES) using a 128-bit key; maximum range of 50 meters.

GPU
• Dual Core Video Core IV® Multimedia Co-Processor. Provides Open GL ES 2.0, hardware-
accelerated Open VG, and 1080p30 H.264 high-profile decode.
• Capable of 1Gpixel/s, 1.5Gtexel/s or 24GFLOPs with texture filtering and DMA
infrastructure

Memory
• 1GB LPDDR2

Operating System
• Boots from Micro SD card, running a version of the Linux operating system or Windows 10
IoT

Dimensions
• 85 x 56 x 17mm

Power
• Micro USB socket, 5 V, 2.5 A

Connectors:
Ethernet
• 10/100 Base T Ethernet socket

Video Output
• HDMI (rev 1.3 & 1.4)
• Composite RCA (PAL and NTSC)

Audio Output
• 3.5 mm audio output jack
• HDMI

USB
• 4 x USB 2.0 Connector

GPIO Connector
• 40-pin 2.54 mm (100 mil) expansion header: 2x20 strip

• Providing 27 GPIO pins as well as +3.3 V, +5 V and GND supply lines

Camera Connector
• 15-pin MIPI Camera Serial Interface (CSI-2)

Display Connector
• Display Serial Interface (DSI) 15 way flat flex cable connector with two data lanes and a
clock lane

Memory Card Slot


• Push/pull Micro SDIO

Features
• Low Cost
• Low Power
• High Availability
• High Reliability

Applications
• Hobby Projects
• Low cost PC/tablet/laptop
• IoT Applications
• Media Center
• Robotics
• Industrial/Home Automation
• Server/Cloud Server
• Print Server
• Security Monitoring
• Web Camera
• Gaming
• Wireless Access Point
• Environmental Sensing/Monitoring (e.g. Weather Station)

2.2 PROGRAMMING THE PROJECT
The project involves complex tasks, like image processing and recognizing human faces, which
are best handled by a high-level programming language. We will use Python in this project and
code our Raspberry Pi in Python to perform these tasks.
2.2.1 PYTHON
Python is an interpreted, high-level, general-purpose programming language. Created by Guido
van Rossum and first released in 1991, Python's design philosophy emphasizes code readability
with its notable use of significant whitespace. Its language constructs and object-oriented approach
aim to help programmers write clear, logical code for small and large-scale projects.
The following is a program for the Fibonacci series in Python (3.7), to give a brief overview of
Python's basic syntax.

Fig.24 – Fibonacci Series in Python
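Since Fig. 24 is reproduced only as an image, here is a minimal Fibonacci program of the same
kind (a sketch; the exact code in the figure may differ):

def fibonacci(n):
    # Print the first n terms of the Fibonacci series
    a, b = 0, 1
    for _ in range(n):
        print(a, end=" ")
        a, b = b, a + b

fibonacci(10)  # prints: 0 1 1 2 3 5 8 13 21 34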

2.2.2 Python Modules Required For the Project


The following Python modules will be required in the project for computer vision and image
processing:
• OpenCV
• NumPy
• Pandas
• Image (Pillow)
• Pickle
• Tkinter
• Date and Time
• Calendar

2.2.2.1 OpenCV
OpenCV (Open Source Computer Vision Library) is a library of programming functions mainly
aimed at real-time computer vision. Originally developed by Intel, it was later supported by
Willow Garage then Itseez (which was later acquired by Intel). The library is cross-platform and
free for use under the open-source BSD license.
Features of this library:
• OpenCV is open source and released under the BSD 3-Clause License. It is free for
commercial use.
• OpenCV is a highly optimized library with a focus on real-time applications.
• It is cross-platform, with C++, Python, and Java interfaces, and supports Linux, macOS,
Windows, iOS, and Android.
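As a small illustration of how the library is used (a sketch assuming an image file named
"sample.jpg" exists in the working directory):

import cv2

img = cv2.imread("sample.jpg")                 # load an image from disk
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # convert from BGR to grayscale
cv2.imshow("Grayscale", gray)                  # display the result in a window
cv2.waitKey(0)                                 # wait until a key is pressed
cv2.destroyAllWindows()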

2.2.2.2 Numpy
NumPy is a library for the Python programming language, adding support for large, multi-
dimensional arrays and matrices, along with a large collection of high-level mathematical functions
to operate on these arrays. NumPy brings the computational power of languages like C and Fortran
to Python, a language much easier to learn and use. With this power comes simplicity: a solution in
NumPy is often clear and elegant.
Features of this library:
• POWERFUL N-DIMENSIONAL ARRAYS: Fast and versatile, the NumPy vectorization,
indexing, and broadcasting concepts are the de-facto standards of array computing today.
• NUMERICAL COMPUTING TOOLS: NumPy offers comprehensive mathematical
functions, random number generators, linear algebra routines, Fourier transforms, and
more.
• INTEROPERABLE: NumPy supports a wide range of hardware and computing platforms,
and plays well with distributed, GPU, and sparse array libraries.
• PERFORMANT: The core of NumPy is well-optimized C code. Enjoy the flexibility of
Python with the speed of compiled code.
• EASY TO USE: NumPy's high-level syntax makes it accessible and productive for
programmers from any background or experience level.
• OPEN SOURCE: Distributed under a liberal BSD license, NumPy is developed and
maintained publicly on GitHub by a vibrant, responsive, and diverse community.
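Because every OpenCV image is simply a NumPy array, array operations apply directly to images;
a brief sketch:

import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])   # a 2x3 array, like a tiny grayscale image
print(a.shape)              # (2, 3)
print(a.mean())             # 3.5
print(a * 2)                # element-wise scaling of every value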

2.2.2.3 Pandas
Pandas is a software library written for the Python programming language for data manipulation
and analysis. In particular, it offers data structures and operations for manipulating numerical
tables and time series. It is free software released under the three-clause BSD license. The name is
derived from the term "panel data", an econometrics term for data sets that include observations
over multiple time periods for the same individuals. Pandas is a fast, powerful, flexible and easy to
use open source data analysis and manipulation tool, built on top of the Python programming
language.
Features of this library:
• DataFrame object for data manipulation with integrated indexing.
• Tools for reading and writing data between in-memory data structures and different file
formats.
• Data alignment and integrated handling of missing data.
• Reshaping and pivoting of data sets.
• Label-based slicing, fancy indexing, and subsetting of large data sets.
• Data structure column insertion and deletion.
• Group-by engine allowing split-apply-combine operations on data sets.
• Data set merging and joining.
• Hierarchical axis indexing to work with high-dimensional data in a lower-dimensional data
structure.
• Time series functionality: date range generation and frequency conversion, moving
window statistics, moving window linear regressions, date shifting and lagging.
• Data filtration.
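In this project pandas is useful for building and saving the attendance sheet; a minimal sketch
(the column names are illustrative, not the project's exact schema):

import pandas as pd

attendance = pd.DataFrame({
    "Roll No": ["1608220029", "1608220025"],
    "Name": ["Tarun", "Ritik"],
    "Status": ["Present", "Present"],
})
attendance.to_csv("attendance.csv", index=False)  # write the sheet to a CSV file
print(attendance)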

2.2.2.4 Image Module (Pillow)
Python Imaging Library is a free and open-source additional library for the Python programming
language that adds support for opening, manipulating, and saving many different image file
formats. It is available for Windows, Mac OS X and Linux. The Image module provides a class
with the same name which is used to represent a PIL image. The module also provides a number of
factory functions, including functions to load images from files, and to create new images.
Features of this library:
• Per-pixel manipulations,
• Masking and transparency handling,
• Image filtering, such as blurring, contouring, smoothing, or edge finding,
• Image enhancing, such as sharpening, adjusting brightness, contrast or color,
• Adding text to images, and much more.
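The training code later in this report uses the Image module in roughly this way (a sketch
assuming a file named "face.jpg"):

from PIL import Image
import numpy as np

pil_image = Image.open("face.jpg").convert("L")  # open and convert to grayscale
resized = pil_image.resize((550, 550))           # normalize the image size
image_array = np.array(resized, "uint8")         # NumPy array usable by OpenCV
print(image_array.shape)                         # (550, 550)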

2.2.2.5 Tkinter


Tkinter is a Python binding to the Tk GUI toolkit. It is the standard Python interface to the Tk
GUI toolkit and is Python's de facto standard GUI. Tkinter is included with standard Linux,
Microsoft Windows, and Mac OS X installs of Python. The name Tkinter comes from "Tk interface".
It is the library responsible for giving a graphical interface to the user, making the device
easy to use.
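A minimal Tkinter window of the kind used for the project's GUI (an illustrative sketch, not the
project's actual interface):

import tkinter as tk

window = tk.Tk()
window.title("Attendance Management System")
tk.Label(window, text="Enter Roll Number:").pack()   # a text label
tk.Entry(window).pack()                              # a single-line input box
tk.Button(window, text="Mark Attendance").pack()     # a clickable button
window.mainloop()                                    # start the GUI event loop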
2.2.2.6 Datetime
The datetime module supplies classes for manipulating dates and times. While date and time
arithmetic is supported, the focus of the implementation is on efficient attribute extraction for
output formatting and manipulation.
2.2.2.7 Calendar
The calendar module in Python has the Calendar class, which allows calculations for various
tasks based on date, month, and year. On top of that, the TextCalendar and HTMLCalendar classes
in Python allow you to render and customize the calendar as per your requirements.
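For instance, the timestamp and weekday for an attendance entry can be produced like this (a
small sketch):

import datetime
import calendar

now = datetime.datetime.now()
print(now.strftime("%Y-%m-%d %H:%M:%S"))   # e.g. 2020-05-18 10:30:00
print(calendar.day_name[now.weekday()])    # e.g. Monday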

2.2.3 YOLO Algorithm for Object Detection Through Images
Humans glance at an image and instantly know what objects are in the image, where they are, and
how they interact. The human visual system is fast and accurate, allowing us to perform complex
tasks like driving with little conscious thought. Fast, accurate algorithms for object detection would
allow computers to drive cars without specialized sensors, enable assistive devices to convey real-
time scene information to human users, and unlock the potential for general purpose, responsive
robotic systems.

Fig.25 – The YOLO Detection System.

YOLO is an extremely fast real-time multi-object detection algorithm. YOLO stands for “You Only
Look Once”. The algorithm applies a single neural network to the entire image. The network
divides the image into an S x S grid and produces bounding boxes (boxes drawn around detected
objects) along with predicted probabilities for each of these regions.

YOLO is refreshingly simple: see Figure 25. A single convolutional network simultaneously
predicts multiple bounding boxes and class probabilities for those boxes. YOLO trains on full
images and directly optimizes detection performance. This unified model has several benefits over
traditional methods of object detection.

First, YOLO is extremely fast. Since we frame detection as a regression problem we don’t need a
complex pipeline. We simply run our neural network on a new image at test time to predict
detections. Our base network runs at 45 frames per second with no batch processing on a Titan X
GPU and a fast version runs at more than 150 fps. This means we can process streaming video in
real-time with less than 25 milliseconds of latency. Furthermore, YOLO achieves more than twice
the mean average precision of other real-time systems.

Second, YOLO reasons globally about the image when making predictions. Unlike sliding window
and region proposal-based techniques, YOLO sees the entire image during training and test time so
it implicitly encodes contextual information about classes as well as their appearance. Fast R-CNN,
a top detection method, mistakes background patches in an image for objects because it can’t see
the larger context. YOLO makes less than half the number of background errors compared to Fast
R-CNN.

Fig.26 – Classification and Object Detection

Third, YOLO learns generalizable representations of objects. When trained on natural images and
tested on artwork, YOLO outperforms top detection methods like DPM and R-CNN by a wide
margin. Since YOLO is highly generalizable it is less likely to break down when applied to new
domains or unexpected inputs.
YOLO still lags behind state-of-the-art detection systems in accuracy. While it can quickly identify
objects in images it struggles to precisely localize some objects, especially small ones.
2.2.3.1 Unified Detection
The YOLO design enables end-to-end training and real-time speeds while maintaining high average
precision. Each grid cell predicts B bounding boxes and confidence scores for those boxes. These
confidence scores reflect how confident the model is that the box contains an object and also
how accurate it thinks the predicted box is.

Formally, we define confidence as Pr(Object) × IOU(truth, pred). If no object exists in that
cell, the confidence score should be zero. Otherwise we want the confidence score to equal the
intersection over union (IOU) between the predicted box and the ground truth.
Each bounding box consists of 5 predictions: x, y, w, h, and confidence. The (x, y) coordinates
represent the center of the box relative to the bounds of the grid cell. The width and height
are predicted relative to the whole image. Finally, the confidence prediction represents the
IOU between the predicted box and any ground truth box.
Each grid cell also predicts C conditional class probabilities, Pr(Classi | Object). These
probabilities are conditioned on the grid cell containing an object. We only predict one set of
class probabilities per grid cell, regardless of the number of boxes B. At test time we multiply
the conditional class probabilities and the individual box confidence predictions,

Pr(Classi | Object) × Pr(Object) × IOU(truth, pred) = Pr(Classi) × IOU(truth, pred),

which gives us class-specific confidence scores for each box. These scores encode both the
probability of that class appearing in the box and how well the predicted box fits the object.
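For example, if a grid cell's conditional probability for the class "person" is
Pr(person | Object) = 0.9 and one of its boxes has confidence Pr(Object) × IOU = 0.6, the
class-specific confidence score for "person" in that box is 0.9 × 0.6 = 0.54.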

Fig.27 – The Model

2.2.3.2 YOLO Algorithm: On Images

import cv2
import numpy as np

# Load YOLO: network weights, configuration, and class names
net = cv2.dnn.readNet("yolov3.weights", "yolov3.cfg")
classes = []
with open("coco.names", "r") as f:
    classes = [line.strip() for line in f.readlines()]
layer_names = net.getLayerNames()
output_layers = [layer_names[i[0] - 1] for i in net.getUnconnectedOutLayers()]
colors = np.random.uniform(0, 255, size=(len(classes), 3))

# Loading image
img = cv2.imread("IMG_20191104_141251.jpg")
img = cv2.resize(img, None, fx=0.3, fy=0.2)
height, width, channels = img.shape

# Detecting objects
blob = cv2.dnn.blobFromImage(img, 0.00392, (320, 320), (0, 0, 0), True, crop=False)
net.setInput(blob)
outs = net.forward(output_layers)

# Showing information on the screen
class_ids = []
confidences = []
boxes = []
for out in outs:
    for detection in out:
        scores = detection[5:]
        class_id = np.argmax(scores)
        confidence = scores[class_id]
        if confidence > 0.5:
            # Object detected: scale the box center, width, and height to the image
            center_x = int(detection[0] * width)
            center_y = int(detection[1] * height)
            w = int(detection[2] * width)
            h = int(detection[3] * height)
            # Rectangle coordinates (top-left corner)
            x = int(center_x - w / 2)
            y = int(center_y - h / 2)
            boxes.append([x, y, w, h])
            confidences.append(float(confidence))
            class_ids.append(class_id)

# Non-maximum suppression removes overlapping duplicate boxes
indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
print(indexes)
font = cv2.FONT_HERSHEY_PLAIN
for i in range(len(boxes)):
    if i in indexes:
        x, y, w, h = boxes[i]
        label = str(classes[class_ids[i]])
        color = colors[i]
        cv2.rectangle(img, (x, y), (x + w, y + h), color, 2)
        cv2.putText(img, label, (x, y + 30), font, 2, color, 2)

cv2.imshow("Image", img)
cv2.waitKey(0)
cv2.destroyAllWindows()

2.2.3.3 YOLO Real-Time Object Detection Technique – Detection Through Live Video
import cv2
import numpy as np
import time

# Load YOLO (the tiny model, which is faster for real-time video)
net = cv2.dnn.readNet("weights/yolov3-tiny.weights", "cfg/yolov3-tiny.cfg")
classes = []
with open("coco.names", "r") as f:
    classes = [line.strip() for line in f.readlines()]
layer_names = net.getLayerNames()
output_layers = [layer_names[i[0] - 1] for i in net.getUnconnectedOutLayers()]
colors = np.random.uniform(0, 255, size=(len(classes), 3))

# Capture live video from the default webcam
cap = cv2.VideoCapture(0)
font = cv2.FONT_HERSHEY_PLAIN
starting_time = time.time()
frame_id = 0

while True:
    _, frame = cap.read()
    frame_id += 1
    height, width, channels = frame.shape

    # Detecting objects
    blob = cv2.dnn.blobFromImage(frame, 0.00392, (416, 416), (0, 0, 0), True, crop=False)
    net.setInput(blob)
    outs = net.forward(output_layers)

    # Showing information on the screen
    class_ids = []
    confidences = []
    boxes = []
    for out in outs:
        for detection in out:
            scores = detection[5:]
            class_id = np.argmax(scores)
            confidence = scores[class_id]
            if confidence > 0.2:
                # Object detected
                center_x = int(detection[0] * width)
                center_y = int(detection[1] * height)
                w = int(detection[2] * width)
                h = int(detection[3] * height)
                # Rectangle coordinates
                x = int(center_x - w / 2)
                y = int(center_y - h / 2)
                boxes.append([x, y, w, h])
                confidences.append(float(confidence))
                class_ids.append(class_id)

    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.8, 0.3)
    for i in range(len(boxes)):
        if i in indexes:
            x, y, w, h = boxes[i]
            label = str(classes[class_ids[i]])
            confidence = confidences[i]
            color = colors[class_ids[i]]
            cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
            cv2.putText(frame, label + " " + str(round(confidence, 2)),
                        (x, y + 30), font, 3, color, 3)  # thickness restored; line was truncated in the report

    # Overlay the measured frame rate
    elapsed_time = time.time() - starting_time
    fps = frame_id / elapsed_time
    cv2.putText(frame, "FPS: " + str(round(fps, 2)), (10, 50), font, 4, (0, 0, 0), 3)
    cv2.imshow("Image", frame)
    key = cv2.waitKey(1)
    if key == 27:  # Esc key stops the loop
        break

cap.release()
cv2.destroyAllWindows()

2.2.4 Face Detection and Recognition
Face recognition is a technique used to identify the faces of individuals whose images are saved
in the dataset. Even though other methods of identification can be more accurate, face
recognition has always remained a significant focus of research because of its non-intrusive
nature and because it is people's natural method of personal identification.
There are many approaches to face recognition. Here we use OpenCV. In face recognition, the
images are first preprocessed and then the face recognizer is trained to recognize the faces.
After training the recognizer, we test it to see the results.
Existing algorithms are:
1. EigenFaces Face Recognizer: The EigenFaces face recognizer looks at all the training images
of all the subjects as a whole and tries to extract the components that are necessary and
helpful (the parts that capture the most variance/change), discarding the rest. This way it
not only extracts the essential elements from the training data but also saves memory by
rejecting the less critical segments.
i. This algorithm extracts the necessary information from an image and efficiently encodes
it.
ii. To capture variations, a number of pictures of a single person are taken.
iii. For the set of face images, the eigenvectors of the covariance matrix are calculated and
stored.
iv. Since every image is represented by eigenvectors, the dataset helps produce variety for
the system.
v. A representation of these eigenvectors is called eigenfaces.

Fig.28 – Example of EigenFaces

2. Line Edge Map: The Line Edge Map (LEM) is a useful object representation feature that is
insensitive to illumination changes to a certain extent. Edge images of objects can be used
for object recognition and achieve accuracy similar to gray-level pictures. The
above-mentioned approach made use of edge maps to measure the similarity of face images;
92% accuracy was achieved. The Line Edge Map approach extracts lines from a face edge map
as features. This approach can be considered a combination of template matching and
geometrical feature matching.
i. One of the popular methods is the Line Edge Map algorithm.
ii. In this method, line matching is done to map the features of the face.
iii. The algorithm mainly uses the most prominent features of the face: mainly the eyes,
nose, and mouth, which have highly distinctive characteristics.
iv. The color images are converted to grayscale to observe and extract the similarities in
the faces.
v. The Sobel edge detection algorithm is used to encode the grayscale images into binary
edge maps.
vi. This technique was developed by studying how we human beings remember other people's
faces (remembering a face's prominent features).

Fig.29 – Example of Line Edge Map

3. Histogram of Oriented Gradients (HOG): In HOG feature descriptors, the distributions
(histograms) of gradient directions (oriented gradients) are used as features. Gradients
(x and y derivatives) of an image are useful because the magnitude of gradients is large
around edges and corners (regions of abrupt intensity changes), and we know that edges and
corners pack in a lot more information about object shape than flat regions.
i. This technique can be applied to detecting objects as well as faces.
ii. All images used are converted to grayscale and every pixel in the image is assigned an
integer.
iii. Every pixel compares its value to its neighboring pixels.
iv. The primary motive is to find the dark regions of the face in the image.
v. The direction pointing to that dark region is marked with a white arrow pointing towards
it.
vi. This treatment is done for each pixel of the picture.

Fig.30 – Example of HOG

4. LBPH Algorithm: This is the algorithm that we are going to use in our project. LBPH
stands for Local Binary Pattern Histogram, a basic algorithm used to detect faces
from the front side. It is used for object as well as face detection. The LBP operator
extracts local features as Local Binary Pattern codes; the local spatial structure of the
face is summarized by these LBP codes. The LBP operator works over the pixels of the face
in the image. Every pixel is associated with the 8 neighbor pixels that surround it, and
each pixel value is then compared with the surrounding neighbor pixel values.

The LBP code for a pixel is computed as:

LBP(xc, yc) = Σn=0..7 s(in − ic) · 2^n,  where s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise,

and ic is the value of the center pixel (xc, yc) while in (n = 0, …, 7) are the values of the
eight surrounding pixels.
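To make the operator concrete, the following sketch computes the LBP code of a single interior
pixel of a grayscale image (a simplified illustration; OpenCV's LBPH recognizer does this
internally over the whole image and builds histograms from the codes):

import numpy as np

def lbp_code(img, x, y):
    # Compute the 8-neighbor LBP code of pixel (x, y) of a 2-D array
    center = img[y, x]
    # Clockwise neighbors starting from the top-left
    neighbors = [img[y-1, x-1], img[y-1, x], img[y-1, x+1], img[y, x+1],
                 img[y+1, x+1], img[y+1, x], img[y+1, x-1], img[y, x-1]]
    code = 0
    for n, value in enumerate(neighbors):
        if value >= center:          # s(in - ic) = 1 when in >= ic
            code |= 1 << n           # set bit n, i.e. add 2^n
    return code

img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]], dtype=np.uint8)
print(lbp_code(img, 1, 1))  # LBP code for the center pixel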

Fig.31 – Block Diagram of Image Processing

2.2.5 Training of Faces (Coding)
import cv2
import os
import numpy as np
from PIL import Image
import pickle

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
image_dir = os.path.join(BASE_DIR, "images")

face_cascade = cv2.CascadeClassifier('cascades/data/haarcascade_frontalface_alt2.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create()

current_id = 0
label_ids = {}
y_labels = []
x_train = []

for root, dirs, files in os.walk(image_dir):
    for file in files:
        if file.endswith("png") or file.endswith("jpg"):
            path = os.path.join(root, file)
            # The folder name (the person's name) is used as the label
            label = os.path.basename(root).replace(" ", "-").lower()
            if not label in label_ids:
                label_ids[label] = current_id
                current_id += 1
            id_ = label_ids[label]
            # Open the training image, convert to grayscale, and resize it
            pil_image = Image.open(path).convert("L")  # grayscale
            size = (550, 550)
            final_image = pil_image.resize(size, Image.ANTIALIAS)
            image_array = np.array(final_image, "uint8")
            # Detect the face region within the training image
            faces = face_cascade.detectMultiScale(image_array, scaleFactor=1.5,
                                                  minNeighbors=5)
            for (x, y, w, h) in faces:
                roi = image_array[y:y+h, x:x+w]  # region of interest: the face only
                x_train.append(roi)
                y_labels.append(id_)

# Save the name-to-id mapping for use by the recognition script
with open("labels.pickle", 'wb') as f:
    pickle.dump(label_ids, f)

# Train the LBPH recognizer on the face regions and save the model
recognizer.train(x_train, np.array(y_labels))
recognizer.save("recognizers/face-trainner.yml")

By using this code, one can train the software, i.e., feed face data into the system, which can
later be compared against incoming data to mark the attendance. The training process and the
GUI of the training program are shown in the screenshots below:

Fig.32(A) – Training Images

Fig.32(B) – Training Images

Fig.32(C) – Training Images

Fig.32(D) – Training Images

2.2.6 Face Recognition (Coding)
import numpy as np
import cv2
import pickle

face_cascade = cv2.CascadeClassifier('cascades/data/haarcascade_frontalface_alt2.xml')
eye_cascade = cv2.CascadeClassifier('cascades/data/haarcascade_eye.xml')
smile_cascade = cv2.CascadeClassifier('cascades/data/haarcascade_smile.xml')

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("./recognizers/face-trainner.yml")

# Invert the saved name-to-id mapping so ids can be turned back into names
labels = {"person_name": 1}
with open("labels.pickle", 'rb') as f:
    og_labels = pickle.load(f)
    labels = {v: k for k, v in og_labels.items()}

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi_gray = gray[y:y+h, x:x+w]    # (ycord_start, ycord_end)
        roi_color = frame[y:y+h, x:x+w]
        # Recognize the face with the trained LBPH model
        id_, conf = recognizer.predict(roi_gray)
        if conf >= 35 and conf <= 90:
            # Draw the matched person's name above the face
            font = cv2.FONT_HERSHEY_SIMPLEX
            name = labels[id_]
            color = (255, 255, 255)
            stroke = 2
            cv2.putText(frame, name, (x, y), font, 1, color, stroke, cv2.LINE_AA)
        img_item = "7.png"
        cv2.imwrite(img_item, roi_color)
        # Draw a rectangle around the detected face
        color = (255, 0, 0)  # BGR 0-255
        stroke = 2
        end_cord_x = x + w
        end_cord_y = y + h
        cv2.rectangle(frame, (x, y), (end_cord_x, end_cord_y), color, stroke)
        #subitems = smile_cascade.detectMultiScale(roi_gray)
        #for (ex, ey, ew, eh) in subitems:
        #    cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)

    # Display the resulting frame
    cv2.imshow('frame', frame)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()

By using this code, the software tracks the faces in the live video and matches them against the
existing database, marking the attendance accordingly if a match is found. This process is shown
in the screenshots below.
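The attendance-writing step itself is not reproduced in this report; one plausible sketch of it,
using the pandas and datetime modules introduced earlier (function name, column names, and file
path are illustrative, not the project's exact code), is:

import datetime
import pandas as pd

def mark_attendance(name, csv_path="attendance.csv"):
    # Append a (name, date, time) row to the attendance sheet, once per day
    now = datetime.datetime.now()
    row = {"Name": name,
           "Date": now.strftime("%Y-%m-%d"),
           "Time": now.strftime("%H:%M:%S")}
    try:
        df = pd.read_csv(csv_path)
    except FileNotFoundError:
        df = pd.DataFrame(columns=["Name", "Date", "Time"])
    # Avoid duplicate entries for the same person on the same day
    already = ((df["Name"] == name) & (df["Date"] == row["Date"])).any()
    if not already:
        df = pd.concat([df, pd.DataFrame([row])], ignore_index=True)
        df.to_csv(csv_path, index=False)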

Fig.33(A) – Marking Attendance (Tracking Images)

Fig.33(B) – Marking Attendance (Tracking Images)

CHAPTER-3

CONCLUSION AND FUTURE SCOPE

3.1 Conclusion

The face recognition based attendance management system provides accurate attendance
information for the students in an easy way and stores the attendance in an Excel file with all
the necessary details. The system is convenient, easy to use, and gives better security. It can
also detect whether a person is really present, or whether someone is trying to show it a
photograph on a phone or on paper. It runs on a mere 5 V supply, so it won't consume a lot of
power, and it provides a faster means of marking attendance than conventional methods or even
fingerprint systems can offer: with those, only one attendance can be marked at a time, whereas
this system can mark multiple attendances at once when it recognises the faces.

3.2 Future Scope

Although the system is quite fast and has high accuracy compared to the conventional roll-call
method of attendance, or even the newly introduced fingerprint-based attendance systems, the
accuracy rate of this algorithm is still around 80%. This can be improved by using the Microsoft
Azure Face API, a relatively new API developed by Microsoft for face recognition, whose accuracy
is claimed to be close to 100%. The Microsoft Face API could therefore be used in the future to
raise the accuracy rate. The Raspberry Pi Camera module can also be used to increase the
efficiency rate.
