Chapter-3 Model Implementation and Analysis
3.1 INTRODUCTION:
Face detection involves separating image windows into two classes: one
containing faces, and one containing the background (clutter). It is difficult because,
although commonalities exist between faces, they can vary considerably in terms of age,
skin color and facial expression. The problem is further complicated by differing
lighting conditions, image qualities and geometries, as well as the possibility of
partial occlusion and disguise. An ideal face detector would therefore be able to
detect the presence of any face under any set of lighting conditions, upon any
background. The face detection task can be broken down into two steps. The first
step is a classification task that takes some arbitrary image as input and outputs a
binary value of yes or no, indicating whether there are any faces present in the
image. The second step is the face localization task, which takes an image as
input and outputs the location of any face or faces within that image as a
bounding box (x, y, width, height). After taking the picture, the system compares
it against the pictures in its database and returns the closest match. We use an
NVIDIA Jetson Nano Developer Kit, a Logitech C270 HD Webcam and the OpenCV
platform, and do the coding in the Python language.
3.2 Model Implementation:
The main component used in the implementation is the open source
computer vision library (OpenCV). One of OpenCV’s goals is to provide a simple-
to-use computer vision infrastructure that helps people build fairly sophisticated
vision applications quickly. The OpenCV library contains over 500 functions that span
many areas of vision, and it is the primary technology behind our face recognition.
The user stands in front of the camera, keeping a minimum distance of 50 cm, and his
image is taken as input. The frontal face is extracted from the image, then
converted to grayscale and stored. The Principal Component Analysis (PCA)
algorithm is performed on the images and the eigenvalues are stored in an XML file.
When a user requests recognition, the frontal face is extracted from the video frame
captured through the camera. The eigenvalue vector is re-calculated for the test face
and matched against the stored data to find the closest neighbour.
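The PCA step described above can be sketched as follows. This is a simplified, self-contained illustration using NumPy and synthetic data; the function names, the number of components and the random "faces" are our own choices for illustration, not the deployed system's code:

```python
import numpy as np

def train_pca(faces, num_components=4):
    """Compute the mean face and top principal components (eigenfaces).

    faces: (n_samples, n_pixels) array of flattened grayscale images.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Eigen-decomposition via SVD of the centered data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:num_components]      # (k, n_pixels) eigenfaces
    weights = centered @ components.T     # (n_samples, k) stored per face
    return mean, components, weights

def project(face, mean, components):
    """Project a face into eigenface space (its eigenvalue vector)."""
    return (face - mean) @ components.T

def nearest_match(face, mean, components, weights):
    """Return the index of the closest stored face and its distance."""
    w = project(face, mean, components)
    dists = np.linalg.norm(weights - w, axis=1)
    idx = int(np.argmin(dists))
    return idx, float(dists[idx])

# Tiny synthetic example: 6 "faces" of 50x50 pixels each.
rng = np.random.default_rng(0)
faces = rng.random((6, 50 * 50))
mean, comps, weights = train_pca(faces)
idx, dist = nearest_match(faces[2], mean, comps, weights)
```

A face already in the training set matches itself with near-zero distance, which is the "closest neighbour" test described above.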
● Histograms: computing, equalization, and object localization with back
projection algorithm
● Segmentation: thresholding, distance transform, foreground/background
detection, watershed segmentation
We copied this script into a directory on our Raspberry Pi and saved it.
Then, through the terminal, we made the script executable and ran it. These are
the commands we used:
$ sudo chmod 755 /myfile/pi/installopencv.bash
$ sudo /myfile/pi/installopencv.bash
2. Python IDE: There are many IDEs for Python; some of them are PyCharm,
Thonny, Ninja and Spyder. Ninja and Spyder are both excellent and free, but
we used Spyder as it is more feature-rich than Ninja. Spyder is a little heavier
than Ninja but still much lighter than PyCharm. You can run them on the Pi and
get the GUI on your PC.
3.3.2.1 NVIDIA Jetson Nano Developer kit:
The NVIDIA® Jetson Nano™ Developer Kit is a small, powerful computer that lets you
run multiple neural networks in parallel for applications like image classification, object
detection, segmentation, and speech processing, all in an easy-to-use platform that runs
on as little as 5 watts.
It’s simpler than ever to get started! Just insert a microSD card with the system
image, boot the developer kit, and begin using the same NVIDIA JetPack SDK used
across the entire NVIDIA Jetson™ family of products. JetPack is compatible with
NVIDIA’s world-leading AI platform for training and deploying AI software, reducing
complexity and effort for developers.
Specifications:
● Camera: 1x MIPI CSI-2 connector
● Display: HDMI
● USB: 1x USB 3.0 Type A, 2x USB 2.0 Type A, USB 2.0 Micro-B
● Mechanical: 100 mm x 80 mm x 29 mm
The developer kit uses a microSD card as boot device and for main storage. It’s important
to have a card that’s fast and large enough for your projects; the minimum requirement is
a 32GB UHS-1 card.
We therefore used a 64 GB microSD card.
Before utilizing it, we have to configure our NVIDIA Jetson Nano Board for Computer
Vision and Deep Learning with TensorFlow, Keras, TensorRT, and OpenCV.
The NVIDIA Jetson Nano packs 472 GFLOPS of computational horsepower.
Figure 3.4: The first step to configure your NVIDIA Jetson Nano for computer vision and deep learning is
to download the Jetpack SD card image
While your Nano SD image is downloading, go ahead and download and
install balenaEtcher, a disk image flashing tool:
Figure 3.5: Download and install balenaEtcher for your OS. You will use it to flash your Nano image to a
microSD card.
Once both (1) your Nano Jetpack image is downloaded, and (2) balenaEtcher is installed,
you are ready to flash the image to a microSD.
Insert the microSD into the card reader, and then plug the card reader into a USB port on
your computer. From there, fire up balenaEtcher and proceed to flash.
Figure 3.6: Flashing NVIDIA’s Jetpack image to a microSD card with balenaEtcher is one of the first steps
for configuring your Nano for computer vision and deep learning.
When flashing has successfully completed, you are ready to move on to Step #2.
Step #2: Boot your Jetson Nano with the microSD and connect to a network
● Insert your microSD into your Jetson Nano as shown in Figure 3.7:
Figure 3.7: To insert your JetPack-flashed microSD card,
find the microSD slot as shown by the red circle in the image. Insert the card until it clicks into
place.
From there, connect your screen, keyboard, mouse, and network interface.
Finally, apply power. Insert the power plug of your power adapter into your Jetson Nano
(use the J48 jumper if you are using a 20W barrel plug supply).
Figure 3.8: Use the icon near the top right corner of your screen to configure networking settings on your
NVIDIA Jetson Nano. You will need internet access to download and install computer vision and deep
learning software.
Once you see your NVIDIA + Ubuntu 18.04 desktop, you should configure your wired or
wireless network settings as needed using the icon in the menubar, as shown in Figure 3.8.
When you have confirmed that you have internet access on your NVIDIA Jetson Nano,
you can move on to the next step.
Step #3: Open a terminal or start an SSH session
1. Option 1: Open a terminal on the Nano desktop, and assume that you’ll perform
all steps from here forward using the keyboard and mouse connected to your
Nano
2. Option 2: Initiate an SSH connection from a different computer so that we can
remotely configure our NVIDIA Jetson Nano for computer vision and deep
learning
For Option 1, open up the application launcher, and select the terminal app. You may
wish to right click it in the left menu and lock it to the launcher, since you will likely use
it often.
You may now continue to Step #4 while keeping the terminal open to enter commands.
For Option 2, you must first determine the username and IP address of your Jetson Nano.
On your Nano, fire up a terminal from the application launcher, and enter the following
commands at the prompt:
$ whoami
nvidia
$ ifconfig
en0: flags=8863 mtu 1500
options=400
ether 8c:85:90:4f:b4:41
inet6 fe80::14d6:a9f6:15f8:401%en0 prefixlen 64 secured scopeid 0x8
inet6 2600:100f:b0de:1c32:4f6:6dc0:6b95:12 prefixlen 64 autoconf secured
inet6 2600:100f:b0de:1c32:a7:4e69:5322:7173 prefixlen 64 autoconf temporary
inet 192.168.1.4 netmask 0xffffff00 broadcast 192.168.1.255
nd6 options=201
media: autoselect
status: active
Grab your IP address. Then, on a separate computer, such as your laptop/desktop, initiate
an SSH connection as follows:
$ ssh nvidia@192.168.1.4
Notice how I’ve entered the username and IP address of the Jetson Nano in my command
to remotely connect.
Step #4: Update your system and remove programs to save space
In this step, we will remove programs we don’t need and update our system. First, let’s set
our Nano to use maximum power capacity:
$ sudo nvpmodel -m 0
$ sudo jetson_clocks
The nvpmodel command handles two power options for your Jetson Nano: (1) 5W is
mode 1 and (2) 10W is mode 0. The default is the higher wattage mode, but it is always
best to force the mode before running the jetson_clocks command.
After you have set your Nano for maximum power, go ahead and remove LibreOffice —
it consumes lots of space, and we won’t need it for computer vision and deep learning:
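The exact commands can vary, but removing LibreOffice and cleaning the package cache typically looks like this (package names assume the stock Ubuntu 18.04 Jetson image):

```shell
$ sudo apt-get purge libreoffice* -y
$ sudo apt-get clean
```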
Step #5: Install OpenCV system-level dependencies and other development
dependencies
Let’s now install OpenCV dependencies on our system, beginning with the tools needed
to build and compile OpenCV with parallelism:
Lastly, we’ll install Video4Linux (V4L) so that we can work with USB webcams and
install a library for FireWire cameras:
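A representative set of commands for these dependencies is shown below; the package list is indicative rather than exhaustive (v4l-utils and libv4l-dev for Video4Linux, libdc1394-22-dev for FireWire cameras):

```shell
$ sudo apt-get update
$ sudo apt-get install -y build-essential cmake git pkg-config
$ sudo apt-get install -y v4l-utils libv4l-dev libdc1394-22-dev
```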
Step #6: Set up Python virtual environments on your Jetson Nano
Figure 3.9: Each Python virtual environment you create on your NVIDIA Jetson Nano is separate and
independent from the others.
I can’t stress this enough: Python virtual environments are a best practice when both
developing and deploying Python software projects.
Virtual environments allow for isolated installs of different Python packages. When you
use them, you could have one version of a Python library in one environment and another
version in a separate, sequestered environment.
In the remainder of this tutorial, we’ll create one such virtual environment; however, you
can create multiple environments for your needs after you complete this Step#6. Be sure
to read the RealPython guide on virtual environments if you aren’t familiar with them.
First, we’ll install the de facto Python package management tool, pip:
$ wget https://bootstrap.pypa.io/get-pip.py
$ sudo python3 get-pip.py
$ rm get-pip.py
And then we’ll install my favorite tools for managing virtual
environments, virtualenv and virtualenvwrapper:
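Both tools install with a single pip command:

```shell
$ sudo pip install virtualenv virtualenvwrapper
```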
The virtualenvwrapper tool is not fully installed until you add information to your bash
profile. Go ahead and open up your ~/.bashrc with the nano editor:
$ nano ~/.bashrc
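The lines to add at the bottom of ~/.bashrc are along these lines; the path to virtualenvwrapper.sh depends on where pip installed it, and /usr/local/bin is typical for a system-wide pip install on Ubuntu 18.04:

```shell
# Add to the bottom of ~/.bashrc:
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh
```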
Save and exit the file using the keyboard shortcuts shown at the bottom of the nano
editor, and then load the bash profile to finish the virtualenvwrapper installation:
$ source ~/.bashrc
Figure 3.10: Terminal output from the virtualenvwrapper setup installation indicates that there are no
errors. We now have a virtual environment management system in place so we can create computer
vision and deep learning virtual environments on our NVIDIA Jetson Nano.
This step is dead simple once you’ve installed virtualenv and virtualenvwrapper in the
previous step. The virtualenvwrapper tool provides the following commands to work with
virtual environments:
● mkvirtualenv : Creates a Python virtual environment
● lsvirtualenv : Lists the virtual environments on your system
● rmvirtualenv : Removes a virtual environment
● workon : Activates a Python virtual environment
● deactivate : Exits the virtual environment, taking you back to your system environment
Assuming Step #6 went smoothly, let’s create a Python virtual environment on our
Nano:
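The command to create the environment is:

```shell
$ mkvirtualenv py3cv4 -p python3
```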
I’ve named the virtual environment py3cv4, indicating that we will use Python 3 and
OpenCV 4. You can name yours whatever you’d like depending on your project and
software needs, or even your own creativity. When your environment is ready, your bash
prompt will be preceded by (py3cv4). If your prompt is not preceded by the name of your
virtual environment, you can use the workon command at any time as follows:
$ workon py3cv4
Figure 3.11: Ensure that your bash prompt begins with your virtual environment name for the remainder of
this tutorial on configuring your NVIDIA Jetson Nano for deep learning and computer vision.
For the remaining steps, you must be “in” the py3cv4 virtual environment.
3.3.2.2 Webcam:
Specifications:
• The Logitech C270 Web Camera (960-000694) is supported by the NVIDIA Jetson
Nano Developer Kit.
• The C270 HD Webcam gives you sharp, smooth conference calls (720p/30fps) in
a widescreen format. Automatic light correction shows you in lifelike, natural
colors.
• It is suitable for use with the NVIDIA Jetson Nano and NVIDIA Jetson
Xavier NX Developer Kits.
Face Detection:
Start capturing images through the web camera on the client side.
Begin:
● Calculate the eigenvalue vector of the captured face image and compare it
with the eigenvalue vectors of the existing faces in the database.
● If the eigenvalue vector does not match any existing one, save the new face
image information to the face database (XML file).
● If the eigenvalue vector matches an existing one, the recognition step is performed.
End
Face Recognition:
Using the PCA algorithm, the following steps are followed for face
recognition:
Begin:
● Find the face information of the matched face image in the database.
● Update the log table with the corresponding face image and system
time, which completes the attendance entry for an individual
student.
End
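The enrol-or-recognize decision described above can be sketched as follows. This is a simplified stand-in: a plain Python dict replaces the XML face database, Euclidean distance between eigenvalue vectors replaces the full PCA pipeline, and the distance threshold of 10.0 is an arbitrary illustrative value:

```python
import datetime

def process_face(eigen_vec, database, log, threshold=10.0, now=None):
    """Match a face's eigen-vector against the database; enrol it if unseen,
    otherwise record attendance in the log with the current system time."""
    now = now or datetime.datetime.now()
    # Find the closest enrolled face by Euclidean distance.
    best_id, best_dist = None, float("inf")
    for face_id, stored in database.items():
        d = sum((a - b) ** 2 for a, b in zip(eigen_vec, stored)) ** 0.5
        if d < best_dist:
            best_id, best_dist = face_id, d
    if best_id is None or best_dist > threshold:
        # No match: save the new face to the database (enrolment).
        new_id = f"student_{len(database) + 1}"
        database[new_id] = eigen_vec
        return ("enrolled", new_id)
    # Match: mark attendance for that student.
    log.append((best_id, now.isoformat(timespec="seconds")))
    return ("recognized", best_id)

db, log = {}, []
process_face([1.0, 2.0, 3.0], db, log)                # unseen face: enrolled
status, who = process_face([1.1, 2.0, 3.0], db, log)  # close match: recognized
```

The log entries (student identifier plus timestamp) correspond to the attendance records described above.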
This section presents the results of the experiment conducted to capture the
face into a grayscale image of 50x50 pixels.
Here is a sample from our dataset.
Figure 3.13 : Dataset sample
Angle                Detection rate    Recognition rate
18º                  80.0 %            78%
54º                  59.2 %            58%
72º                  0.00 %            0.00%
90º (Profile face)   0.00 %            0.00%