
CHAPTER-3

MODEL IMPLEMENTATION
AND ANALYSIS

3.1 INTRODUCTION:

Face detection involves separating image windows into two classes: one
containing faces, and one containing the background (clutter). It is difficult
because, although commonalities exist between faces, they can vary considerably
in terms of age, skin color, and facial expression. The problem is further
complicated by differing lighting conditions, image qualities, and geometries,
as well as the possibility of partial occlusion and disguise. An ideal face
detector would therefore be able to detect the presence of any face under any
set of lighting conditions, upon any background. The face detection task can be
broken down into two steps. The first step is a classification task that takes
some arbitrary image as input and outputs a binary value of yes or no,
indicating whether there are any faces present in the image. The second step is
the face localization task, which takes an image as input and outputs the
location of any face or faces within that image as a bounding box
(x, y, width, height). After taking the picture, the system compares it against
the pictures in its database and returns the most closely matching result. We
use an NVIDIA Jetson Nano Developer Kit, a Logitech C270 HD Webcam, and the
OpenCV platform, and do the coding in the Python language.
3.2 Model Implementation:

Figure 3.1: Model Implementation

The main component used in the implementation approach is the open-source
computer vision library OpenCV. One of OpenCV’s goals is to provide a simple-
to-use computer vision infrastructure that helps people build fairly
sophisticated vision applications quickly. The OpenCV library contains over 500
functions that span many areas of vision, and it is the primary technology
behind our face recognition. The user stands in front of the camera, keeping a
minimum distance of 50 cm, and their image is taken as input. The frontal face
is extracted from the image, converted to grayscale, and stored. The Principal
Component Analysis (PCA) algorithm is performed on the images and the
eigenvalues are stored in an XML file. When a user requests recognition, the
frontal face is extracted from the video frame captured through the camera. The
eigenvalues are re-calculated for the test face and matched against the stored
data for the nearest neighbour.
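
The following is a minimal sketch of this training flow in Python. It assumes
the opencv-contrib-python package (which provides the cv2.face module) and a
hypothetical faces/ directory of 50x50 grayscale face crops named
<label>_<n>.png; it illustrates the idea rather than reproducing our exact
implementation:

import glob
import os

import cv2
import numpy as np

# Collect 50x50 grayscale training crops; the faces/ layout and the
# <label>_<n>.png naming scheme are illustrative assumptions.
images, labels = [], []
for path in sorted(glob.glob("faces/*.png")):
    labels.append(int(os.path.basename(path).split("_")[0]))  # "3_07.png" -> 3
    images.append(cv2.imread(path, cv2.IMREAD_GRAYSCALE))

# Train the eigenface (PCA) model and store it, eigenvalues included,
# in an XML file as described above.
recognizer = cv2.face.EigenFaceRecognizer_create()
recognizer.train(images, np.array(labels))
recognizer.write("facedata.xml")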

3.3 Design Requirements:


We used several tools to build the system; without them it would not have been
possible. The most important ones are discussed below.

3.3.1 Software Implementation:


1. OpenCV: We used the OpenCV 3 dependency for Python 3. OpenCV is a library
in which a large number of image processing functions are available, which
makes it very useful for image processing; many common tasks can be
accomplished with very little code. The library is cross-platform and
free for use under the open-source BSD license. Examples of supported
functionality are given below (a short illustrative snippet follows the list):
● Derivation: Gradient/Laplacian computing, contours delimitation
● Hough transforms: lines, segments, circles, and geometrical shapes
detection

● Histograms: computing, equalization, and object localization with back
projection algorithm
● Segmentation: thresholding, distance transform, foreground/background
detection, watershed segmentation

● Filtering: linear and nonlinear filters, morphological operations


● Cascade detectors: detection of face, eye, car plates

● Interest points: detection and matching

● Video processing: optical flow, background subtraction, CamShift
(object tracking)
● Photography: panorama realization, high dynamic range imaging (HDR),
image inpainting
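
As a small illustration of two of these areas (derivation and Hough
transforms), the following sketch runs Canny edge detection and probabilistic
Hough line detection; the input file name test.jpg is a placeholder:

import cv2
import numpy as np

# Gradient-based edge map (derivation) followed by line-segment
# detection with the probabilistic Hough transform.
gray = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                        minLineLength=30, maxLineGap=5)
print(0 if lines is None else len(lines), "line segments found")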

So it was very important to install OpenCV. But installing OpenCV 3 is a
complex process. How we did it is given below:

Figure 3.2: Installing OpenCV

We copied this script into a directory on our Raspberry Pi and saved it. Then,
through the terminal, we made the script executable and ran it. These are the
command lines we used:

$ sudo chmod 755 /myfile/pi/installopencv.bash
$ sudo /myfile/pi/installopencv.bash

2. Python IDE: There are many IDEs for Python; some of them are PyCharm,
Thonny, Ninja, and Spyder. Ninja and Spyder are both excellent and free, but
we used Spyder as it is more feature-rich than Ninja. Spyder is a little bit
heavier than Ninja but still much lighter than PyCharm. You can run it on the
Pi and get the GUI on your PC through ssh -Y. We installed Spyder through the
command line below:

$ sudo apt-get install spyder

3.3.2 Hardware Implementation:

Figure 3.3: Jetson Board

3.3.2.1 NVIDIA Jetson Nano Developer kit:
The NVIDIA® Jetson Nano™ Developer Kit is a small, powerful computer that lets
you run multiple neural networks in parallel for applications like image
classification, object detection, segmentation, and speech processing, all in
an easy-to-use platform that runs on as little as 5 watts.

It’s simpler than ever to get started! Just insert a microSD card with the system
image, boot the developer kit, and begin using the same NVIDIA JetPack SDK used
across the entire NVIDIA Jetson™ family of products. JetPack is compatible with
NVIDIA’s world-leading AI platform for training and deploying AI software, reducing
complexity and effort for developers.

Specifications:

GPU 128-core NVIDIA Maxwell™

CPU Quad-core ARM® A57 @ 1.43 GHz

Memory 2 GB 64-bit LPDDR4 25.6 GB/s

Storage microSD (Card not included)

Video Encode 4Kp30 | 4x 1080p30 | 9x 720p30 (H.264/H.265)

Video Decode 4Kp60 | 2x 4Kp30 | 8x 1080p30 | 18x 720p30 (H.264/H.265)

Connectivity Gigabit Ethernet, 802.11ac wireless†

Camera 1x MIPI CSI-2 connector

Display HDMI

USB 1x USB 3.0 Type A, 2x USB 2.0 Type A, USB 2.0 Micro-B

Others 40-pin header (GPIO, I2C, I2S, SPI, UART)


12-pin header (Power and related signals, UART), 4-pin Fan header†

Mechanical 100 mm x 80 mm x 29 mm

Table 3.1 Specifications of Jetson Nano Developer kit

The developer kit uses a microSD card as the boot device and for main storage.
It’s important to have a card that’s fast and large enough for your projects;
the minimum requirement is a 32 GB UHS-1 card, so we used a 64 GB microSD card.
Before utilizing it, we have to configure our NVIDIA Jetson Nano board for
computer vision and deep learning with TensorFlow, Keras, TensorRT, and OpenCV.
The NVIDIA Jetson Nano packs 472 GFLOPS of computational horsepower; while it
is a very capable machine, it is not easy to configure.


Step #1: Flash NVIDIA’s Jetson Nano Developer Kit .img to a microSD for Jetson
Nano
In this step, we will download NVIDIA’s Jetpack 4.2 Ubuntu-based OS image
and flash it to a microSD. You will need the microSD flashed and ready to go to follow
along with the next steps. So ensure that you download the “Jetson Nano Developer Kit
SD Card image” as shown in the following screenshot:

Figure 3.4: The first step to configure your NVIDIA Jetson Nano for computer vision and deep learning is
to download the Jetpack SD card image
While your Nano SD image is downloading, go ahead and download and
install balenaEtcher, a disk image flashing tool:

Figure 3.5: Download and install balenaEtcher for your OS. You will use it to flash your Nano image to a
microSD card.

Once both (1) your Nano Jetpack image is downloaded, and (2) balenaEtcher is installed,
you are ready to flash the image to a microSD.
Insert the microSD into the card reader, and then plug the card reader into a USB port on
your computer. From there, fire up balenaEtcher and proceed to flash.

Figure 3.6: Flashing NVIDIA’s Jetpack image to a microSD card with balenaEtcher is one of the first steps
for configuring your Nano for computer vision and deep learning.
When flashing has successfully completed, you are ready to move on to Step #2.
Step #2: Boot your Jetson Nano with the microSD and connect to a network
● Insert your microSD into your Jetson Nano as shown in Figure 3.7:

Figure 3.7: To insert your Jetpack-flashed microSD, find the microSD slot as
shown by the red circle in the image. Insert the card until it clicks into
place.

From there, connect your screen, keyboard, mouse, and network interface.

Finally, apply power. Insert the power plug of your power adapter into your Jetson Nano
(use the J48 jumper if you are using a 20W barrel plug supply).

Figure 3.8: Use the icon near the top right corner of your screen to configure networking settings on your
NVIDIA Jetson Nano. You will need internet access to download and install computer vision and deep
learning software.

Once you see your NVIDIA + Ubuntu 18.04 desktop, you should configure your wired or
wireless network settings as needed using the icon in the menu bar, as shown in Figure 3.8.
When you have confirmed that you have internet access on your NVIDIA Jetson Nano,
you can move on to the next step.

Step #3: Open a terminal or start an SSH session

In this step we will do one of the following:

1. Option 1: Open a terminal on the Nano desktop, and assume that you’ll perform
all steps from here forward using the keyboard and mouse connected to your
Nano

2. Option 2: Initiate an SSH connection from a different computer so that we can
remotely configure our NVIDIA Jetson Nano for computer vision and deep
learning

Both options are equally good.

Option 1: Use the terminal on your Nano desktop

For Option 1, open up the application launcher, and select the terminal app. You may
wish to right click it in the left menu and lock it to the launcher, since you will likely use
it often.

You may now continue to Step #4 while keeping the terminal open to enter commands.

Option 2: Initiate an SSH remote session

For Option 2, you must first determine the username and IP address of your Jetson Nano.
On your Nano, fire up a terminal from the application launcher, and enter the following
commands at the prompt:

$ whoami
nvidia
$ ifconfig
en0: flags=8863 mtu 1500
options=400
ether 8c:85:90:4f:b4:41
inet6 fe80::14d6:a9f6:15f8:401%en0 prefixlen 64 secured scopeid 0x8
inet6 2600:100f:b0de:1c32:4f6:6dc0:6b95:12 prefixlen 64 autoconf secured
inet6 2600:100f:b0de:1c32:a7:4e69:5322:7173 prefixlen 64 autoconf temporary
inet 192.168.1.4 netmask 0xffffff00 broadcast 192.168.1.255
nd6 options=201
media: autoselect

status: active

Grab your IP address. Then, on a separate computer, such as your laptop/desktop, initiate
an SSH connection as follows:

$ ssh nvidia@192.168.1.4
Notice how I’ve entered the username and IP address of the Jetson Nano in my command
to remotely connect.

Step #4: Update your system and remove programs to save space

In this step, we will remove programs we don’t need and update our system. First, let’s set
our Nano to use maximum power capacity:

$ sudo nvpmodel -m 0
$ sudo jetson_clocks

The nvpmodel command handles two power options for your Jetson Nano: (1) 5W is
mode 1 and (2) 10W is mode 0. The default is the higher wattage mode, but it is always
best to force the mode before running the jetson_clocks command.
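
To confirm which mode is currently active, nvpmodel also provides a query
option (a quick check, to the best of our knowledge):

$ sudo nvpmodel -q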

After you have set your Nano for maximum power, go ahead and remove LibreOffice —
it consumes lots of space, and we won’t need it for computer vision and deep learning:

$ sudo apt-get purge libreoffice*


$ sudo apt-get clean

From there, let’s go ahead and update system level packages:

$ sudo apt-get update && sudo apt-get upgrade

In the next step, we’ll begin installing software.

Step #5: Install OpenCV system-level dependencies and other development
dependencies

Let’s now install OpenCV dependencies on our system, beginning with the tools
needed to build and compile OpenCV with parallelism:

$ sudo apt-get install build-essential pkg-config


$ sudo apt-get install libtbb2 libtbb-dev

Next, we’ll install a handful of codecs and image libraries:

$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev


$ sudo apt-get install libxvidcore-dev libavresample-dev
$ sudo apt-get install libtiff-dev libjpeg-dev libpng-dev

And then we’ll install a selection of GUI libraries:

$ sudo apt-get install python-tk libgtk-3-dev


$ sudo apt-get install libcanberra-gtk-module libcanberra-gtk3-module

Lastly, we’ll install Video4Linux (V4L) so that we can work with USB webcams and
install a library for FireWire cameras:

$ sudo apt-get install libv4l-dev libdc1394-22-dev
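
With V4L in place, a quick way to confirm that a USB webcam such as the C270
is reachable from Python is the sketch below; it assumes the camera is the
first video device (index 0):

import cv2

# Open the first V4L capture device, grab one frame, and report its size.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print("Webcam OK, frame shape:", frame.shape)
else:
    print("Could not read a frame from the webcam")
cap.release()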

Step #6: Set up Python virtual environments on your Jetson Nano

Figure 3.9: Each Python virtual environment you create on your NVIDIA Jetson Nano is separate and
independent from the others.

I can’t stress this enough: Python virtual environments are a best practice when both
developing and deploying Python software projects.

Virtual environments allow for isolated installs of different Python packages. When you
use them, you could have one version of a Python library in one environment and another
version in a separate, sequestered environment.

In the remainder of this tutorial, we’ll create one such virtual environment;
however, you can create multiple environments for your needs after you complete
this Step #6. Be sure to read the RealPython guide on virtual environments if
you aren’t familiar with them.

First, we’ll install the de facto Python package management tool, pip:

$ wget https://bootstrap.pypa.io/get-pip.py
$ sudo python3 get-pip.py
$ rm get-pip.py

And then we’ll install my favorite tools for managing virtual
environments, virtualenv and virtualenvwrapper:

$ sudo pip install virtualenv virtualenvwrapper

The virtualenvwrapper tool is not fully installed until you add information to
your bash profile. Go ahead and open up your ~/.bashrc with the nano editor:

$ nano ~/.bashrc

And then insert the following at the bottom of the file:

# virtualenv and virtualenvwrapper


export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh

Save and exit the file using the keyboard shortcuts shown at the bottom of the nano
editor, and then load the bash profile to finish the virtualenvwrapper installation:

$ source ~/.bashrc

Figure 3.10: Terminal output from the virtualenvwrapper setup installation indicates that there are no
errors. We now have a virtual environment management system in place so we can create computer
vision and deep learning virtual environments on our NVIDIA Jetson Nano.

So long as you don’t encounter any error messages, both virtualenv and
virtualenvwrapper are now ready for you to create and destroy virtual
environments as needed in Step #7.

Step #7: Create your ‘py3cv4’ virtual environment

This step is dead simple once you’ve installed virtualenv and virtualenvwrapper in the
previous step. The virtualenvwrapper tool provides the following commands to work with
virtual environments:

● mkvirtualenv: Create a Python virtual environment

● lsvirtualenv: List the virtual environments installed on your system

● rmvirtualenv: Remove a virtual environment

● workon: Activate a Python virtual environment

● deactivate: Exit the virtual environment, taking you back to your system environment

Assuming Step #6 went smoothly, let’s create a Python virtual environment on our
Nano:

$ mkvirtualenv py3cv4 -p python3

I’ve named the virtual environment py3cv4, indicating that we will use Python 3
and OpenCV 4. You can name yours whatever you’d like depending on your project
and software needs, or even your own creativity. When your environment is
ready, your bash prompt will be preceded by (py3cv4). If your prompt is not
preceded by the name of your virtual environment, you can use the workon
command at any time as follows:

$ workon py3cv4

Figure 3.11: Ensure that your bash prompt begins with your virtual environment name for the remainder of
this tutorial on configuring your NVIDIA Jetson Nano for deep learning and computer vision.

For the remaining steps, you must be “in” the py3cv4 virtual environment.

3.3.2.2 Webcam:

Figure 3.12: Web Camera

Specifications:
• The Logitech C270 Web Camera (960-000694) is supported by the NVIDIA Jetson
Nano Developer Kit.
• The C270 HD Webcam gives you sharp, smooth conference calls (720p/30 fps) in
a widescreen format. Automatic light correction shows you in lifelike, natural
colors.
• It is suitable for use with both the NVIDIA Jetson Nano and NVIDIA Jetson
Xavier NX Development Kits.

3.4 Experimental Results:

The steps of the experimental process are given below:

Face Detection:

Start capturing images through the web camera on the client side (a minimal
sketch in plain OpenCV follows these steps). Begin:

● Pre-process the captured image and extract the face image.

● Calculate the eigenvalues of the captured face image and compare them with
the eigenvalues of the existing faces in the database.

● If the eigenvalues do not match the existing ones, save the new face image
information to the face database (XML file).

● If the eigenvalues match an existing one, the recognition step is performed.

End
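
The sketch below performs the detection step in plain OpenCV; our project wraps
this logic in helpers such as OpenCAM_CB() and ExtractFace() (see Table 3.2),
and the bundled cascade path assumes a pip-installed OpenCV that ships the
Haar cascade files:

import cv2

# Detect frontal faces with OpenCV's bundled Haar cascade and store a
# 50x50 grayscale crop, as described in the steps above.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.3,
                                             minNeighbors=5):
    face = cv2.resize(gray[y:y + h, x:x + w], (50, 50))
    cv2.imwrite("captured_face.png", face)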

Face Recognition:

Using the PCA algorithm, the following steps are followed for face recognition
(a sketch is given after the steps).

Begin:

● Find the face information of the matched face image in the database.

● Update the log table with the corresponding face image and the system time,
which completes the attendance record for an individual student.

End
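
A sketch of this recognition-and-logging step is given below; the match
threshold (2500) and the CSV log file name are illustrative assumptions, not
values from our implementation:

import csv
import time

import cv2

# Load the stored eigenface model and match a 50x50 grayscale test face
# against it (nearest neighbour in eigenspace).
recognizer = cv2.face.EigenFaceRecognizer_create()
recognizer.read("facedata.xml")

test_face = cv2.imread("captured_face.png", cv2.IMREAD_GRAYSCALE)
label, distance = recognizer.predict(test_face)

if distance < 2500:  # assumed threshold: smaller distance = closer match
    # Update the log table with the matched label and the system time.
    with open("attendance_log.csv", "a", newline="") as log:
        csv.writer(log).writerow([label, time.strftime("%Y-%m-%d %H:%M:%S")])
else:
    print("No match found; the face can be enrolled in the database")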
This section presents the results of the experiment conducted to capture the
face into a grayscale image of 50x50 pixels.

Test data | Expected Result | Observed Result | Pass/Fail
OpenCAM_CB() | Connects with the installed camera and starts playing | Camera started | Pass
LoadHaarClassifierCascade() | Loads the Haar Classifier Cascade files for frontal face | Gets ready for extraction | Pass
ExtractFace() | Initiates the Paul Viola face-extracting framework | Face extracted | Pass
Learn() | Starts the PCA algorithm | Updates facedata.xml | Pass
Recognize() | Compares the input face with the saved faces | Nearest face | Pass

Table 3.2 Experimental Results-1

Here is our data set sample.

Figure 3.13: Dataset sample

Face Orientation | Detection Rate | Recognition Rate
0° (Frontal face) | 98.7% | 95%
18° | 80.0% | 78%
54° | 59.2% | 58%
72° | 0.00% | 0.00%
90° (Profile face) | 0.00% | 0.00%

Table 3.3 Experimental Results-2


We performed a set of experiments to demonstrate the efficiency of the proposed
method. 30 different images of 10 persons were used in the training set.
Figure 3.13 shows a sample binary image detected by the ExtractFace() function
using the Paul Viola face-extraction framework detection method.
