
Available online at www.sikhiva.org

Volume 1, Issue 2 | Pages 72-78

Received: 06 May 2019 | Accepted: 07 May 2019 | Published: 08 May 2019

3D SECURITY USER IDENTIFICATION IN BANKING


N. NITHYA1* & N. CHARUMATHI2

1Assistant Professor, Department of Computer Applications, Dhanalakshmi Srinivasan College of Arts and Science for Women (Autonomous), Bharathidhasan University, Perambalur, Tamil Nadu, India

2Research Scholar, Department of Computer Applications, Dhanalakshmi Srinivasan College of Arts and Science for Women (Autonomous), Bharathidhasan University, Perambalur, Tamil Nadu, India

*Corresponding Author: N. Nithya (MCA, MPhil), Assistant Professor, Department of Computer Applications,
Dhanalakshmi Srinivasan College of Arts and Science for Women (Autonomous), Bharathidhasan University,
Perambalur, Tamil Nadu, India

ABSTRACT
Biometric systems may be divided into two categories depending on the characteristics used. One category uses physical characteristics that are associated with the shape and presence of the body and its parts, such as fingerprint, finger knuckles, face (2-D and 3-D), DNA, hand and palm geometry, iris texture, and retinal vasculature. Systems in the second category use behavioural characteristics, such as gait, handwriting, keystroke dynamics, and speech. Research in face recognition has continuously been challenged by extrinsic (head pose, lighting conditions) and intrinsic (facial expression, aging) sources of variability. Such systems are used by many organizations and in many applications for security purposes.

Many approaches to face recognition exist. This project focuses on a comparative study of 3D face recognition under expression variations. First, 3D face databases with expressions are listed, the most significant ones are presented, and their complexity is quantified using principal component analysis, linear discriminant analysis and local binary patterns. The project is implemented in real time on datasets in order to classify the various types of expressions. Recognition performance on the images is evaluated with three different techniques (principal component analysis, linear discriminant analysis, and local binary patterns) on the Face Recognition Grand Challenge and Bosphorus 3D face databases.

KEYWORDS: fingerprint, 3D face recognition, principal component analysis, linear discriminant analysis, local binary patterns

INTRODUCTION

Biometrics (or biometric authentication) refers to the identification of humans by their physical characteristics or traits. Biometrics is used in computing as a form of identification and access control. It is also used to identify individuals in groups that are under surveillance. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. Biometric identifiers are usually classified as physiological versus behavioural characteristics. Physiological characteristics are related to the shape of the body. Examples include, but are not limited to, fingerprint, face recognition, DNA, palm print, hand geometry, iris recognition, retina and odour/scent.

Behavioural characteristics are related to the pattern of behaviour of a person, including but not limited to typing rhythm, gait, and voice. Some researchers have coined the term behaviometrics to describe this latter class of biometrics. Recognition of humans has become an important topic today as the need for security applications grows continuously. Biometrics enables reliable and efficient identity management systems by exploiting physical and behavioural characteristics of the subjects, which are permanent, universal and easy to access. The motivation to improve security systems based on single or multiple biometric traits, rather than passwords and tokens, stems from the fact that controlling a person's identity is less unsafe than controlling what he or she possesses or knows. In addition, biometrics-based procedures remove the need to remember a PIN or carry a badge. Various biometric systems exist that utilize different human characteristics such as iris, voice, face, fingerprint, gait or DNA, each having its own limitations. The system constraints and requirements should be taken into consideration, as well as the conditions of the use context, which include technical, social and ethical factors.

Face recognition stands out with its favourable trade-off between accessibility and reliability. It permits identification at relatively long distances for unaware subjects who do not have to cooperate. Like other biometric traits, the face recognition problem can briefly be stated as the identification or verification of one or more persons by matching the patterns extracted from a 2D or 3D still image or a video against templates previously stored in a database. Image processing is a technique to convert an image into digital form and perform some operations on it, in order to obtain an enhanced image or to extract some useful information from it. It is a form of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or characteristics associated with that image. Typically, an image processing system treats images as two-dimensional signals while applying established signal processing methods to them. It is among the rapidly growing technologies today, with applications in various aspects of business. Image processing also forms a core research area within the engineering and computer science disciplines.
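As a simple illustration (not part of the original work), the following Python sketch treats an image as a two-dimensional signal: it is loaded, converted to grayscale and smoothed with a standard filter. The file name is only a placeholder.

    import cv2

    img = cv2.imread("face.jpg")                      # read the image as a BGR array (placeholder file)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # single-channel 2-D signal
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)      # standard signal-processing (smoothing) operation
    cv2.imwrite("face_smoothed.jpg", smoothed)        # write back the enhanced image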

LITERATURE SURVEY

1. AUTOMATIC ASYMMETRIC 3D-2D FACE RECOGNITION


Di Huang, Mohsen Ardabilian, Yunhong Wang, Liming Chen, MI Department, LIRIS Laboratory,
Ecole Centrale de Lyon (2010)
This paper presents an asymmetric 3D-2D face recognition technique, aiming to limit the use of 3D data to where it really helps to improve performance. The approach uses textured 3D face models for enrollment, while only 2D facial images are used for identification, which makes it distinctive in comparison with the state of the art. Since every 3D face model consists of one point cloud and its corresponding 2D image, the approach contains two separate matching steps: 2D-2D matching based on a Sparse Representation Classifier (SRC), and 3D-2D matching by Canonical Correlation Analysis (CCA). Both matching scores are combined for the final decision. Robustness is greatly improved by a new preprocessing pipeline making use of logarithmic total variation (LTV) to decrease the influence of illumination and Active Appearance Models (AAM) to normalize pose. Sparse representation for signal classification (SRSC) was first proposed to incorporate reconstruction properties and discriminative power as well as sparseness for robust classification. CCA is a powerful analysis technique especially useful for relating two sets of variables by maximizing correlation in the CCA subspace. Here, it is introduced to learn the mapping between range and 2D LBP faces.
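The CCA step described above can be illustrated with the following hedged Python sketch using scikit-learn; the feature matrices are random placeholders and do not reproduce the paper's data or dimensions.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    range_lbp = rng.random((200, 59))    # placeholder: 200 gallery samples, 59-bin LBP histograms of range images
    texture_lbp = rng.random((200, 59))  # placeholder: corresponding 2-D texture LBP histograms

    cca = CCA(n_components=20)
    cca.fit(range_lbp, texture_lbp)                       # learn the mapping between the two feature sets
    r_proj, t_proj = cca.transform(range_lbp, texture_lbp)

    # A simple 3D-2D matching score: correlation between the projected features of a pair
    score = np.corrcoef(r_proj[0], t_proj[0])[0, 1]
    print("CCA matching score:", score)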


2. AN EFFICIENT IRIS AND EYE CORNERS EXTRACTION METHOD


Nesli Erdogmus, Jean-Luc Dugelay
The facial region in the image is assumed to be known, and the eye region is taken to be the non-skin region in the upper half of the facial image, under the assumption of a frontal face with the nose being vertical. Firstly, a rough localization of the irises is performed in the estimated eye region by circle detection using the Hough transform. The detected circles are subjected to elimination with the help of a priori knowledge about the relative size and position of the irises. Afterwards, the colour images of the eye regions (windows around the coarsely detected iris centers) are further processed to refine the iris radius and location. Finally, the cropped eye images are divided into three colour regions and, contrary to previous works, the eyelid contours are estimated first to obtain the eye corners at their intersection points. The eye region in the facial image is extracted under the assumptions that the face is frontal and that the line connecting the eye centers is close to horizontal. Hence, the upper half of the face is taken for analysis. Even if the face image is cropped to its upper half, where the eyes are located, the skin pixels still constitute the majority. Taking the histogram into account, a threshold is set according to the maximum count and the image size. Afterwards, the pixels with a higher value than this threshold are eliminated as skin pixels. Lastly, the small islands in the obtained binary mask are removed. After obtaining the eye regions, edge maps are first produced by the Canny edge detector. The drawback of this edge detection technique is that it requires a good adjustment of the threshold. In order to overcome this issue, the authors propose to apply the edge detector iteratively, tuning the threshold parameter until a descriptive edge map is obtained. In the eye corners extraction, the eyelid contours are detected first, which can then be used to determine the eye corners.
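The two key operations described above, coarse iris localization with the Hough circle transform and iterative tuning of the edge-detection threshold, can be sketched with OpenCV as follows; the parameter values and file name are illustrative assumptions, not the authors' settings.

    import cv2
    import numpy as np

    eye = cv2.imread("eye_region.png", cv2.IMREAD_GRAYSCALE)  # placeholder eye-region crop

    # Coarse iris localization via the Hough circle transform
    circles = cv2.HoughCircles(eye, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                               param1=100, param2=30, minRadius=10, maxRadius=40)

    # Iterative edge detection: lower the Canny threshold until the edge map
    # contains a reasonable fraction of edge pixels (illustrative criterion).
    threshold = 200
    edges = cv2.Canny(eye, threshold / 2, threshold)
    while np.count_nonzero(edges) / edges.size < 0.02 and threshold > 20:
        threshold -= 20
        edges = cv2.Canny(eye, threshold / 2, threshold)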

3. AUTOMATIC MULTI-VIEW FACE RECOGNITION VIA 3D MODEL BASED POSE REGULARIZATION


Koichiro Niinuma, Hu Han, and Anil K. Jain, Department of Computer Science and Engineering,
Michigan State University, East Lansing (2013)
This work presents a new, fully automatic multi-view face recognition technique via 3D model based pose regularization, and extends existing face recognition systems to multi-view scenarios. The proposed approach consists of two main modules: (i) pose regularization based on a 3D model, and (ii) face matching with block based multi-scale LBP (MLBP) features. Unlike previous pose normalization approaches, where non-frontal face images were transformed into frontal images, the proposed 3D model based pose regularization technique generates synthetic target images that match the pose variations in query images. It should be noted that generating non-frontal views from frontal face images is much easier and more accurate than recovering frontal views from non-frontal face images. This is because it is difficult to automatically detect the accurate landmarks under large pose variations that are needed to build a 3D face model. Additionally, since many areas of a face are significantly occluded under large pose variations, it is problematic to recover the frontal view for the occluded facial regions. In the 3D model based pose regularization, a 3D model is built from each frontal target face image and is used to generate synthetic target face images. The pose of a query face image is also estimated so that the generated synthetic target face images are able to match the pose variation of the query face image.


4. ILLUMINATION NORMALIZATION OF FACIAL IMAGE BY REVERSING THE PROCESS OF IMAGE FORMATION

Faisal R. Al-Osaimi, M. Bennamoun, A. Mian (2011)

This paper presents a fully automatic illumination normalization algorithm for colour facial images. The algorithm falls in the third class and, in contrast to other techniques, it takes into account cast shadows, multiple directional light sources (including extended light sources), the effect of illumination on colours, and both Lambertian and specular light reflections. Additionally, it does not assume any prior knowledge about the facial pose or expressions. Experiments were performed on the Face Recognition Grand Challenge (FRGC) v2.0 dataset (9,900 2D and 3D faces), which is challenging in the sense that the faces have major expression variations and are lit by various extended light sources. The results show that the algorithm can remove lighting variations without compromising the local features or affecting the colour of the face albedo. The proposed algorithm is designed for data acquired by 3D digitizers, because it assumes the existence of co-registered 2D images and 3D point clouds. Most current commercial 3D digitizers, such as the Minolta Vivid 910, provide co-registered 2D images and 3D point clouds. Co-registered 2D images and 3D point clouds have many potential applications, especially in 2D and 3D fusion for face recognition, which has recently gained significant attention in the computer vision community. The proposed approach takes advantage of the readily available shape information to tackle the illumination normalization problem with minimal assumptions and aims for increased reliability.

5. ILLUMINATION ALIGNMENT USING LIGHTING RATIO: APPLICATION TO 3D-2D FACE RECOGNITION

Xi Zhao, Shishir K. Shah, and Ioannis A. Kakadiaris (2011)

This work uses the lighting ratio to approximate the factors leading to illumination variations and thus align the dominant lighting conditions on facial textures. The lighting ratio is the ratio between an input image and its smoothed version with adjusted lighting conditions. Assuming that most of the illumination effects vary slowly across the facial texture, and that the bulk of the energy of the illumination is distributed among the low frequencies, the lighting ratio is estimated via low-pass filtering in the frequency domain. In order to choose the cut-off frequency of the filter for images under various lighting conditions, an image-specific low-pass filter is used. The lighting ratio is adjusted to reduce the Frobenius norm between the result of the division and a reference texture, so as to further reduce the contrast and exposure variations on facial images due to skin type, camera parameters, and lighting conditions. The lighting ratio based illumination alignment methods are then used in a 3D-2D face recognition system to address illumination challenges. The 3D data in the gallery can be used to register 2D images under various poses and normalize head orientations into a frontal pose.
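A minimal Python sketch of the lighting-ratio idea, as an approximation rather than the authors' image-specific filter: the slowly varying illumination is estimated with a Gaussian low-pass filter and divided out. The sigma value and file name are assumptions.

    import cv2
    import numpy as np

    texture = cv2.imread("facial_texture.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)  # placeholder
    low_pass = cv2.GaussianBlur(texture, (0, 0), sigmaX=15)       # low-frequency illumination estimate
    ratio = texture / (low_pass + 1e-6)                           # lighting ratio
    aligned = cv2.normalize(ratio, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    cv2.imwrite("facial_texture_aligned.png", aligned)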

6. E-BLIND EXAMINATION SYSTEM


Akshay Naik, Kavita Patil, Department of Computer Engineering, PVPPCOE, Mumbai

The proposed system appears to be far better and more efficient from a technology and integration point of view. The accuracy of the speech recognition system was among the top challenges. The proposed system provides a better option for blind people to appear for the examination.


PROPOSED SYSTEM
In the proposed system, enrollment is assumed to be done in both 2D and 3D for every subject under a controlled environment: frontal face images with a neutral expression and under ambient illumination. The obtained 3D shape of the facial surface, along with the registered texture, is preprocessed, first to extract the face region. On the extracted facial surface, scanner-induced holes and spikes are cleaned and a bilateral smoothing filter is applied to remove noise while preserving the edges. After the hole- and noise-free face model (texture and shape) is obtained, seventeen feature points are automatically detected using either shape, texture or both, according to the regional properties of the face.
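A minimal illustrative sketch of the texture-denoising step, assuming OpenCV; hole and spike removal on the 3D mesh itself is scanner specific and is not shown, and the file name is a placeholder.

    import cv2

    texture = cv2.imread("registered_texture.png")                # placeholder registered texture
    # Bilateral filter: removes noise while preserving edges.
    # d, sigmaColor and sigmaSpace are illustrative values, not tuned settings.
    denoised = cv2.bilateralFilter(texture, d=9, sigmaColor=75, sigmaSpace=75)
    cv2.imwrite("registered_texture_clean.png", denoised)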

MODULES

• Face Image Acquisition
• Preprocessing
• Facial Points Description
• Expression Recognition
• Performance Evaluation

Face Image Acquisition

In this module, face images are either captured or uploaded as datasets. The uploaded datasets contain 3D face images. In face registration, the faces captured by the web camera are identified. The web camera images are 2D images; these face images are then converted into 3D images.
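For illustration only, a single 2D frame can be grabbed from the web camera with OpenCV as sketched below; the 2D-to-3D conversion mentioned above depends on the reconstruction method used and is not shown.

    import cv2

    cap = cv2.VideoCapture(0)                 # default web camera
    ret, frame = cap.read()                   # capture a single 2D frame
    if ret:
        cv2.imwrite("captured_face.png", frame)
    cap.release()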

Preprocessing

In preprocessing, steps such as grayscale conversion, inversion and border analysis are used to detect edges and identify regions. Grayscale images are also called monochromatic, denoting the presence of only one (mono) colour (chrome). Edge detection is used to analyse the connected curves that indicate the boundaries of objects and of surface markings, as well as curves that correspond to discontinuities in surface orientation. The regions and boundaries of the images are then extracted to obtain the features of the 3D images.
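The preprocessing chain can be sketched in Python with OpenCV as follows; the threshold values and file name are illustrative assumptions.

    import cv2

    img = cv2.imread("face_input.png")                            # placeholder input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                  # grayscale conversion
    inverted = cv2.bitwise_not(gray)                              # inversion step
    edges = cv2.Canny(gray, 50, 150)                              # edge detection
    # Region/boundary extraction: connected contours on the edge map
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)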

Facial Points Description

In this module, the examined image is divided into cells (e.g. 16x16 pixels per cell). Each pixel in a cell is compared to each of its eight neighbours (on its left-top, left-middle, left-bottom, right-top, etc.), following the pixels along a circle, i.e. clockwise or counter-clockwise. The resulting feature vector can then be processed using local binary patterns or another machine learning algorithm to classify images. Such a classifier can be used for face recognition or quality analysis. Principal component analysis, linear discriminant analysis and local binary patterns are implemented to extract the landmark points from the face images.
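A minimal sketch of the cell-based local binary pattern feature vector described above, using scikit-image; the 16x16 cell size follows the text, while the remaining parameters are assumptions.

    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_feature_vector(gray, cell=16, n_points=8, radius=1):
        # Compare each pixel to its 8 circular neighbours and encode the result
        codes = local_binary_pattern(gray, n_points, radius, method="uniform")
        n_bins = n_points + 2                                     # uniform patterns + "non-uniform" bin
        hists = []
        for r in range(0, gray.shape[0] - cell + 1, cell):
            for c in range(0, gray.shape[1] - cell + 1, cell):
                block = codes[r:r + cell, c:c + cell]
                hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
                hists.append(hist / hist.sum())                   # normalised per-cell histogram
        return np.concatenate(hists)                              # concatenated feature vector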

The 3D surface around the eyes tends to be noisy owing to the reflective properties of the sclera, the pupil and the eyelashes. On the other hand, its texture carries highly descriptive information about the shape of the eye. To start with, the yaw angle of the face is corrected in 3D. For this purpose, the horizontal curve passing through the nose tip is examined. Ideally, the area under this curve should be equally separated by a vertical line passing through its maximum (assuming the nose is symmetrical). Since the work is on faces with neutral expressions, the mouth is assumed to be closed. A closed mouth always yields a darker line between the two lips. The contact point of the lips is found by applying a vertical projection analysis.
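The vertical projection analysis for locating the lip contact line can be sketched as follows; the mouth-region crop is assumed to be available beforehand and the file name is a placeholder.

    import cv2
    import numpy as np

    mouth = cv2.imread("mouth_region.png", cv2.IMREAD_GRAYSCALE)  # placeholder mouth-region crop
    row_intensity = mouth.sum(axis=1)                             # project intensities onto rows
    lip_contact_row = int(np.argmin(row_intensity))               # darkest row = lip contact line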

Expression Recognition

Classifiers are supervised learning models with associated learning algorithms that analyse the data and recognise patterns; they are used for classification and regression analysis. The training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. The model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap. The proposed classifier performs the expression recognition.
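The description above corresponds to a linear support vector machine. A hedged scikit-learn sketch on placeholder data is given below; in practice the feature vectors from the previous module would be used instead of random values.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    features = rng.random((300, 100))          # placeholder: 300 faces, 100-dim feature vectors
    labels = rng.integers(0, 2, 300)           # placeholder: neutral (0) vs. expressive (1)

    X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.3, random_state=0)
    clf = LinearSVC(max_iter=5000)             # non-probabilistic binary linear classifier with a margin
    clf.fit(X_train, y_train)
    print("expression classification accuracy:", clf.score(X_test, y_test))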

Performance Evaluation

The algorithm is able to perform well under substantial occlusions, expressions and small pose variations, and provides good accuracy in face recognition. The proposed system improves the verification rate and the identification rate and reduces the error rate, with the PCA algorithm providing the best performance among the evaluated algorithms.
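For illustration, a rank-1 identification rate can be computed from a probe-gallery similarity matrix as sketched below; the data are placeholders and no results from the paper are reproduced.

    import numpy as np

    rng = np.random.default_rng(2)
    similarity = rng.random((50, 100))          # placeholder: 50 probes vs. 100 gallery templates
    gallery_ids = rng.integers(0, 25, 100)      # placeholder gallery identity labels
    probe_ids = rng.integers(0, 25, 50)         # placeholder probe identity labels

    best_match = similarity.argmax(axis=1)                        # closest gallery entry per probe
    rank1_rate = np.mean(gallery_ids[best_match] == probe_ids)    # rank-1 identification rate
    print("rank-1 identification rate:", rank1_rate)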

CONCLUSIONS

Automatic emotion recognition from facial expressions is closely related to face recognition and human-computer interaction. Due to the lack of 3-D features and dynamic analysis, the functional aspect of affective computing is insufficient for natural interaction. The automatic face recognition with expression variations approach works on real-time datasets based on a landmark-point-controlled 3-D facial model. The facial region is first detected with local normalization in the input dataset. The 17 landmark points are then located on the facial region and tracked with algorithms such as PCA, LDA and LBP. The displacement of the landmark points may be used to synthesize the input expressions, so faces can easily be recognized under various expressions.

ABOUT THE AUTHOR

N. CHARUMATHI

N. Charumathi graduated with a BCA in Computer Applications from Bon Secours College of Arts and Science for Women, Thanjavur, in 2016, and received her MCA degree in Computer Applications from Dhanalakshmi Srinivasan College of Arts & Science for Women, Bharathidhasan University, Trichy, India, in 2019.

REFERENCES
1. Di Huang, Mohsen Ardabilian, Yunhong Wang, Liming Chen, "Automatic Asymmetric 3D-2D Face Recognition", MI Department, LIRIS Laboratory, Ecole Centrale de Lyon (2010).
2. Koichiro Niinuma, Hu Han, and Anil K. Jain, "Automatic Multi-view Face Recognition via 3D Model Based Pose Regularization", Department of Computer Science and Engineering, Michigan State University, East Lansing (2013).
3. Faisal R. Al-Osaimi, M. Bennamoun, A. Mian, "Illumination Normalization of Facial Image by Reversing the Process of Image Formation" (2011).
4. Xi Zhao, Shishir K. Shah, and Ioannis A. Kakadiaris, "Illumination Alignment Using Lighting Ratio: Application to 3D-2D Face Recognition" (2011).
5. Nesli Erdogmus, Jean-Luc Dugelay, "An Efficient Iris and Eye Corners Extraction Method".

