
PRESENTATION

ON
GAIT BIOMETRIC RECOGNITION
By
(Project Id: 21D03)

Shreya Singh (1716410238)
Swapnil (1716410263)
Surabhi Singh (1716410259)
Shreya Pal (1716410237)

Under the supervision of
Deepti Sharma
(Assistant Professor)
Introduction
Biometrics:
Biometrics are used in a wide array of applications, which makes a precise
definition difficult to establish. The most general definition of a biometric is:

“A physiological or behavioral characteristic which can be used to identify
and verify the identity of an individual.”

There are numerous biometric measures which can be used to help derive an
individual's identity. They can be classified into two distinct categories:
• Physiological
• Behavioral
Physiological – These are biometrics derived from a direct measurement of a
part of the human body. The most prominent and successful of these measures
to date are fingerprints, face recognition, iris scans, and hand scans.

Behavioral – These extract characteristics based on an action performed by an
individual; they are an indirect measure of the characteristics of the human
form. The main feature of a behavioral biometric is the use of time as a
metric. Established measures include keystroke-scan and speech patterns.
Why Gait?
Gait is defined as:
“A particular way or manner of moving on foot”

We use the term gait recognition to signify the identification of an individual
from a video sequence of the subject walking. This does not mean that gait is
limited to walking; it can also be applied to running or any means of movement
on foot. Gait as a biometric can be seen as advantageous over other forms of
biometric identification for the following reasons:

• Unobtrusive – The gait of a walking person can be extracted without the
user knowing they are being analyzed and without any cooperation from the
user in the information-gathering stage, unlike fingerprinting or retina scans.

• Distance recognition – The gait of an individual can be captured at a
distance, unlike other biometrics such as fingerprint recognition.
• Reduced detail – Gait recognition does not require captured images to be of
very high quality, unlike other biometrics such as face recognition, which is
easily affected by low-resolution images.

• Difficult to conceal – The gait of an individual is difficult to disguise; by
trying to do so, the individual will probably appear more suspicious. With
other biometric techniques such as face recognition, the individual's face can
easily be altered or hidden.
MODULES INTRODUCTION
VIDEO CAPTURE

• Video capture is the very first process in gait biometric recognition. A
video camera is used to capture the motion when the subject needs to be
identified remotely.
CONTOUR DETECTION
In this stage, the contours of the person are detected to specify the outer
boundary of the human body. There are several methods available for the
purpose, and choosing the right one depends on system objectives and other
sub-systems such as the gait capture method used. We usually use the CANNY
EDGE DETECTION ALGORITHM for contour detection.
CANNY EDGE DETECTION ALGORITHM
• The Canny edge detection algorithm is one of the most strictly defined
methods and provides good and reliable detection. Canny edge detection is a
technique to extract useful structural information from different vision
objects and dramatically reduce the amount of data to be processed.
The general criteria for edge detection include:
 Detection of edges with a low error rate, which means that the detection
should accurately catch as many edges shown in the image as possible.
 The edge point detected by the operator should accurately localize on the
center of the edge.
 A given edge in the image should only be marked once, and where possible,
image noise should not create false edges.
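
As a minimal sketch, Canny detection is available through the Image
Processing Toolbox edge function; the frame file name below is a hypothetical
placeholder:

% Read one frame of the walking subject and convert it to grayscale
frame = imread('walk_frame.png');   % hypothetical file name
gray  = rgb2gray(frame);

% Apply the Canny detector; MATLAB picks thresholds automatically,
% but they can also be passed explicitly as edge(gray,'canny',thresh)
bw = edge(gray, 'canny');

% Show the grayscale frame next to the detected contours
imshowpair(gray, bw, 'montage');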
Silhouette segmentation

• This stage takes place when the gait data includes a video feed. A
silhouette is a binary image extracted from the video feed of the moving
subject. Silhouette segmentation is an important part of implementing
machine vision. Extracting the silhouette of a moving subject makes it easy
to process and map, as the system does not have to deal with unnecessary 3D
details. It also takes less processing power than processing 3D images.
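
The slides do not name a specific segmentation method; background
subtraction is a common choice, sketched below (file names and the threshold
value are assumptions):

% Extract a binary silhouette by subtracting a static background frame
background = rgb2gray(imread('empty_scene.png'));  % hypothetical background
frame      = rgb2gray(imread('walk_frame.png'));   % hypothetical subject frame

% Threshold the absolute difference between the frame and the background
diffImage  = imabsdiff(frame, background);
silhouette = imbinarize(diffImage, 0.15);          % threshold tuned by experiment

% Remove small noise blobs and fill holes inside the silhouette
silhouette = bwareaopen(silhouette, 50);
silhouette = imfill(silhouette, 'holes');
imshow(silhouette);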
FEATURE EXTRACTION AND CLASSIFICATION
• In these stages, gait features are extracted and, finally, a classifier is
used to identify the person. In classification, the similarity between the
extracted gait features and the stored ones is computed to identify the
walking person.
• In this project we use SUPERVISED MACHINE LEARNING for feature
extraction and classification.
SUPERVISED MACHINE LEARNING
• In supervised learning, you train the machine using data which is well
"labeled," meaning some data is already tagged with the correct answer. It
can be compared to learning which takes place in the presence of a
supervisor or teacher.
• A supervised learning algorithm learns from labeled training data and
helps you predict outcomes for unseen data.
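
The slides do not name a specific classifier; as a minimal sketch, a
k-nearest-neighbor model from the Statistics and Machine Learning Toolbox
could be trained on the extracted gait features (the feature matrix and
labels below are random placeholders):

% features: one row per walking sequence, one column per gait feature
% labels:   the known subject identity for each training sequence
features = rand(100, 8);        % placeholder gait feature matrix
labels   = randi(10, 100, 1);   % placeholder subject IDs 1..10

% Train a k-nearest-neighbor classifier on the labeled data
model = fitcknn(features, labels, 'NumNeighbors', 3);

% Classify a new, unseen gait feature vector
newFeature  = rand(1, 8);
predictedID = predict(model, newFeature);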
GAIT DATABASE
Technology used

• Computer vision
• Digital image processing
• Machine learning
Computer vision
Computer vision is an interdisciplinary scientific field that deals with how
computers can gain high-level understanding from digital images or videos.
From the perspective of engineering, it seeks to understand and automate
tasks that the human visual system can do. Computer vision tasks include
methods for acquiring, processing, analyzing, and understanding digital
images, and extracting high-dimensional data from the real world in order to
produce numerical or symbolic information, e.g. in the form of decisions.
Understanding in this context means the transformation of visual images into
descriptions of the world that make sense to thought processes and can elicit
appropriate action.
Digital image processing

In computer science, digital image processing is the use of a digital computer
to process digital images through an algorithm. As a subcategory or field of
digital signal processing, digital image processing has many advantages: it
allows a much wider range of algorithms to be applied to the input data and
can avoid problems such as the build-up of noise and distortion during
processing. Since images are defined over two dimensions (perhaps more),
digital image processing may be modeled in the form of multidimensional
systems. The generation and development of digital image processing are
mainly affected by three factors:
• the development of computers
• the development of mathematics
• the growing demand for a wide range of applications in environment,
agriculture, military, industry, and medical science.
Machine learning
• Machine learning (ML) is the study of computer algorithms that improve
automatically through experience. It is seen as a subset of artificial
intelligence. Machine learning algorithms build a mathematical
model based on sample data, known as "training data", in order to make
predictions or decisions without being explicitly programmed to do
so. Machine learning algorithms are used in a wide variety of applications,
such as email filtering and computer vision, where it is difficult or
infeasible to develop conventional algorithms to perform the needed
tasks.
Software used
• MATLAB is a multi-paradigm numerical computing environment
and proprietary programming language developed by MathWorks.
MATLAB allows matrix manipulations, plotting of functions and data,
implementation of algorithms, creation of user interfaces, and interfacing
with programs written in other languages.
Project completion so far
• Learned how to write MATLAB programs using the Image Acquisition
Toolbox: Image Acquisition Toolbox™ provides functions
and blocks for connecting cameras and lidar sensors to
MATLAB® and Simulink®. It includes a MATLAB app that lets you
interactively detect and configure hardware properties. You can then
generate equivalent MATLAB code to automate your acquisition in
future sessions. The toolbox enables acquisition modes such as
processing in-the-loop, hardware triggering, background acquisition,
and synchronizing acquisition across multiple devices.
• Image Acquisition Toolbox supports all major standards and
hardware vendors, including USB3 Vision, GigE Vision®, and
GenICam™ GenTL. You can connect to Velodyne LiDAR® sensors,
machine vision cameras, and frame grabbers, as well as high-end
scientific and industrial devices.
• Getting Started Doing Image Acquisition Programmatically
Step 1: Install Your Image Acquisition Device

• Installing the frame grabber board in your computer.
• Installing any software drivers required by the device. These are supplied by the
device vendor.
• Connecting a camera to a connector on the frame grabber board.
• Verifying that the camera is working properly by running the application software
that came with the camera and viewing a live video stream.
• Generic Windows image acquisition devices, such as webcams and digital video
camcorders, typically do not require the installation of a frame grabber board. You
connect these devices directly to your computer via a USB or FireWire port.
• After installing and configuring your image acquisition hardware, start MATLAB on
your computer by double-clicking the icon on your desktop. You do not need to
perform any special configuration of MATLAB to perform image acquisition.
• Step 2: Retrieve Hardware Information
• In this step, we get several pieces of information that the toolbox needs to
uniquely identify the image acquisition device we want to access. We use
this information when we create an image acquisition object.
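A minimal sketch of retrieving hardware information with the toolbox's
imaqhwinfo function (the adaptor name 'winvideo' is a Windows-specific
assumption; names vary by system):

% List the image acquisition adaptors installed on this system
info = imaqhwinfo;
disp(info.InstalledAdaptors)

% Query one adaptor for its connected devices
deviceInfo = imaqhwinfo('winvideo');
disp(deviceInfo.DeviceInfo(1).DeviceName)
disp(deviceInfo.DeviceInfo(1).DefaultFormat)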
• Step 3: Create a Video Input Object
• In this step you create the video input object that the toolbox uses to
represent the connection between MATLAB and an image acquisition
device. Using the properties of a video input object, you can control many
aspects of the image acquisition process.
• To create a video input object, use the videoinput function in MATLAB.
The deviceInfo structure returned by the imaqhwinfo function contains the
default videoinput syntax for a device in its VideoInputConstructor field.
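A minimal sketch of creating the object (the adaptor name and device ID are
system-dependent assumptions):

% Create a video input object for device 1 of the 'winvideo' adaptor
vid = videoinput('winvideo', 1);

% Adjust properties that control the acquisition
vid.FramesPerTrigger   = 30;           % capture 30 frames per trigger
vid.ReturnedColorSpace = 'grayscale';  % return frames ready for processing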
• Step 4: Acquire Image Data
• Starting the video input object — You start an object by calling the start
function. Starting an object prepares it for data acquisition. For example,
starting an object locks the values of certain object properties (they become
read-only). Starting an object does not initiate the acquiring of image
frames, however; the initiation of data logging depends on the execution of a
trigger.
• Triggering the acquisition — To acquire data, a video input object
must execute a trigger. Triggers can occur in several ways,
depending on how the TriggerType property is configured. For
example, if you specify an immediate trigger, the object executes a
trigger automatically immediately after it starts.
• If you specify a manual trigger, the object waits for a call to the trigger
function before initiating the acquisition.
• Bringing data into the MATLAB workspace — The toolbox stores acquired
data in a memory buffer, a disk file, or both, depending on the value of the
video input object's LoggingMode property. To work with this data, you must
bring it into the MATLAB workspace. To bring multiple frames into the
workspace, use the getdata function.
• Once the data is in the MATLAB workspace, you can manipulate it as you
would any other data.
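A minimal sketch of the start/trigger/getdata sequence, continuing from the
vid object created above:

% Configure a manual trigger, then start the object
triggerconfig(vid, 'manual');
start(vid);     % object is ready; logging waits for the trigger

% Trigger the acquisition and bring the frames into the workspace
trigger(vid);
frames = getdata(vid, 30);   % H-by-W-by-bands-by-30 array of frames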
• Step 5: Clean Up
• When you finish using your image acquisition objects, you can
remove them from memory and clear the MATLAB workspace of the
variables associated with these objects.
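Continuing the sketch above:

% Release the hardware and remove the object from the workspace
delete(vid);
clear vid;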
• Graphical user interface: A GUI consists of a display containing windows
with controls, the "components," which offer several tasks to a user who
does not need to develop MATLAB routines or insert commands directly from
the command line. The user just creates the desired GUI environment,
without needing to understand how the underlying tasks work.
• The user can select from a set of tools like menus, toolbars, push buttons,
radio buttons, list boxes, or sliders, and can also communicate with external
sources, for example reading and writing files, showing tables and plots, or
interacting with other GUIs.
• Typically, GUIs are made to wait for the user to execute some control,
responding to the request by acting on one or more user-written routines,
called "callbacks" (these routines "call back" to MATLAB in order to perform
actions); this is called event-driven programming. Each callback can be
activated by user actions such as pressing buttons, selecting a menu item, or
typing a string or a value, in an asynchronous way, or can be triggered by
external events. In our case, the user can interact directly with the created
GUI, and the GUI can then respond with external actions such as creating a
file or calling another computer device.
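A minimal sketch of an event-driven callback (the window title and button
label are hypothetical):

% Create a window with one button; its callback runs when the button is pushed
fig = uifigure('Name', 'Gait Demo');
btn = uibutton(fig, 'Text', 'Acquire', ...
    'Position', [100 100 100 30], ...
    'ButtonPushedFcn', @(src, event) disp('Acquisition started'));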
• Image preprocessing:
• Converting images to grayscale:
• The rgb2gray function converts RGB images to grayscale by eliminating the
hue and saturation information while retaining the luminance. If you have
Parallel Computing Toolbox™ installed, rgb2gray can perform this conversion
on a GPU. newmap = rgb2gray(map) returns a grayscale colormap equivalent
to map.
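A minimal usage sketch (the frame file name is a placeholder):

% Convert an RGB frame to grayscale before further processing
frame = imread('walk_frame.png');   % hypothetical file name
gray  = rgb2gray(frame);
imshowpair(frame, gray, 'montage');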
• Noise Removal:
• Digital images are prone to various types of noise. Noise is the result of
errors in the image acquisition process that produce pixel values that do not
reflect the true intensities of the real scene. There are several ways that
noise can be introduced into an image, depending on how the image is
created. For example:
• If the image is scanned from a photograph made on film, the film
grain is a source of noise. Noise can also be the result of damage to
the film, or be introduced by the scanner itself.
• If the image is acquired directly in a digital format, the mechanism
for gathering the data (such as a CCD detector) can introduce
noise.
• Electronic transmission of image data can introduce noise.
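
The slides do not name a specific denoising method; a median filter is a
common choice for this kind of noise, sketched below (continuing from the
grayscale frame gray above):

% Simulate acquisition noise, then remove it with a 3x3 median filter
noisy    = imnoise(gray, 'salt & pepper', 0.02);
denoised = medfilt2(noisy, [3 3]);
imshowpair(noisy, denoised, 'montage');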
