Signature Verification
In Partial Fulfillment
of the requirements for the degree
Bachelor of Engineering in Software Engineering (BESE)
Supervisor: Signature
Date: ______________
Place: ______________
DEDICATION
ACKNOWLEDGEMENT
First of all, we would like to thank all the members of our project committee for
their guidance and encouragement throughout the course of this project, without
which we might not have been able to accomplish it. We would especially like to
thank our project supervisor, Dr. Muhammad Imran Malik, whose patience and
support were instrumental in overcoming the many challenges that we faced in
our research.
We are also grateful to our fellow students for their feedback and cooperation,
which allowed us to complete and fine-tune our project. In addition, we would like
to express gratitude to our families and friends, whose support kept our morale
up and helped us successfully conclude this project.
TABLE OF CONTENTS
ABSTRACT
1 INTRODUCTION
2.2.3 Handcrafted Feature Extractors
4 METHODOLOGY
4.3 OUR ALGORITHM
6.2 CORE FUNCTIONALITIES
9 REFERENCES
LIST OF FIGURES
Fig 1(a): Original Signature
Fig 1(b): SURF Features of Original Signature
Fig 2: Forgery Types
Fig 3: Research Methodology
Fig 4: High-Level System Overview
Fig 5: Subsystem Overview
Fig 6(a): ROC curves for signet_f model
Fig 6(b): ROC curves for signet and facenet models
LIST OF TABLES
Table 1: Accuracy of One-Class SVM on Signet Features
Table 2: Equal Error Rates on ICFHR datasets
ABSTRACT
Chapter 1
INTRODUCTION
However, because of their popularity, offline verification is an active research area,
with different techniques being used to improve its accuracy and generalization
[3]. A signature is a behavioural biometric: it requires the active participation of the
user and is characterised by a behavioural trait that is learnt and acquired over time,
rather than by a physiological trait. No two signatures by the same person are exactly
identical, giving rise to intrapersonal variation, and signatures even change over time.
Therefore, verification of offline signatures is not a trivial pattern recognition
problem.
Furthermore, offline signature verification techniques have potential
applications ranging from forensic investigations to commonplace signature
authentication. Research that improves the state of the art in offline signature
verification will therefore aid in enhancing forensic investigation and biometric
authentication systems, as well as in bridging the gap between
pattern-recognition-based verification techniques and forensic science.
The phases of signature verification for bank cheques are briefly defined in
the following subsections.
define its intrinsic characteristics which allow us to associate it with a particular
author.
Chapter 2
LITERATURE REVIEW
to move horizontally from left to right over the approximation area. The width of the
window is fixed to a certain number of pixels, and its height is set according to the
height of the approximation area. As the sliding window moves one pixel at a time,
the density of the pixels within the current window is calculated. Entropy is a better
choice than density because it yields a larger range of values, leading to easier and
more accurate segmentation. A major problem with this approach is that it relies on
a priori information about the location of the signature.
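To make this concrete, the following is a minimal sketch (not the cited authors' code) of scoring each horizontal position of a sliding window by entropy; the window width of 20 pixels is an illustrative assumption.

```python
import numpy as np

def window_entropy(window):
    """Shannon entropy of the pixel intensities inside one window position."""
    hist, _ = np.histogram(window, bins=256, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def sliding_window_scores(area, width=20):
    """Entropy score at each horizontal position of the sliding window;
    the window height equals the height of the approximation area."""
    return np.array([window_entropy(area[:, x:x + width])
                     for x in range(area.shape[1] - width + 1)])
```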
Detection at a particular scale σi proceeds in three steps: first, convolve the
image with a Gaussian kernel Gσi; then re-sample it using the Lanczos filter [12] at
the factor dσi; and finally compute its edges using the Canny edge detector [13].
This effectively obtains a coarse representation of the original image in which small
gaps in the curve are bridged by the smoothing followed by re-sampling.
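A minimal OpenCV sketch of one detection scale, assuming the re-sampling factor d shrinks the image and using illustrative Canny thresholds:

```python
import cv2

def coarse_edges(gray, sigma, d):
    """One detection scale: Gaussian smoothing with G_sigma, Lanczos
    re-sampling at factor d, then Canny edge detection."""
    blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=sigma)
    h, w = blurred.shape
    resampled = cv2.resize(blurred, (max(1, int(w * d)), max(1, int(h * d))),
                           interpolation=cv2.INTER_LANCZOS4)
    return cv2.Canny(resampled, 50, 150)   # illustrative thresholds
```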
and classification, while local descriptors are used for object
recognition/identification.
Global features describe the image as a whole to generalize the entire
object, whereas local features describe image patches (key points in the
image) of an object. Global features include contour representations, shape
descriptors, and texture features, while local features represent the texture in an
image patch. Shape Matrices [17], Invariant Moments [18], Histograms of Oriented
Gradients (HOG) and Co-HOG [19] are some examples of global descriptors. SIFT
(Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features), LBP
(Local Binary Patterns), Binary Robust Invariant Scalable Keypoints (BRISK),
Maximally Stable Extremal Regions (MSER) and Fast Retina Keypoints (FREAK)
are some examples of local descriptors [20][21][22][23][24].
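The distinction can be illustrated with OpenCV, where a global descriptor yields one vector for the whole image and a local descriptor yields one vector per keypoint; the file name below is a placeholder:

```python
import cv2

img = cv2.imread("signature.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Global descriptor: a single HOG vector summarizing the whole image.
hog = cv2.HOGDescriptor()                 # default 64x128 detection window
global_vec = hog.compute(cv2.resize(img, (64, 128)))

# Local descriptors: one 128-value SIFT vector per detected keypoint.
sift = cv2.SIFT_create()
keypoints, local_vecs = sift.detectAndCompute(img, None)

print(global_vec.shape, len(keypoints))
```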
An alternative to the personal approach is the global approach, or
writer-independent model. It is based on the forensic questioned-document
examination approach and classifies the writing, in terms of authenticity, into
genuine and forgery using a single global model.
1. Geometric Features
Geometric features measure the overall shape of a signature. This includes
basic descriptors, such as the signature height, width, caliber
(height-to-width ratio) and area.
2. Graphometric Features
Forensic document examiners use the concepts of graphology and
graphometry to examine handwriting for several purposes, including
detecting authenticity and forgery.
3. Directional Features
Directional features seek to describe the image in terms of the direction of
the strokes in the signature.
4. Mathematical Transformations
Researchers have used a variety of mathematical transformations as feature
extractors such as Hadamard transform and spectrum analysis, Contourlet
transform, Discrete Radon transform, Wavelet transform.
5. Shadow-Code
Sabourin et al. proposed an Extended Shadow Code in [28] for signature
verification. A grid is overlaid on top of the signature image, containing
horizontal, vertical and diagonal bars, each bar containing a fixed number
of bins. Each pixel of the signature image is then projected to its closest bar
in each direction, activating the respective bin. The count of active bins in
the projections is then used as a descriptor of the signature.
6. Texture Features
Texture features, in particular variants of Local Binary Patterns (LBP),
have been used in many experiments in recent years. The LBP operator
describes the local patterns in the image, and the histogram of these patterns
is used as a feature descriptor (a minimal sketch appears after this list).
7. Interest Point Matching
Interest point matching methods, such as SIFT (Scale-Invariant Feature
Transform) and SURF (Speeded Up Robust Features), have been widely
used for computer vision tasks. Ruiz-del-Solar et al. used SIFT to extract
local interest points from both query and reference samples to build a
writer-dependent classifier [29]. After extracting interest points from both
images, they generated a set of 12 features.
8. Pseudo-Dynamic Features
Oliveira et al. presented a set of pseudo-dynamic features based on
graphometric studies: distribution of pixels; progression, which measures
the tension in the strokes, providing information about speed, continuity
and uniformity; slant; and form, measuring the concavities in the signature
[30].
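As referenced in the texture features item above, the following is a minimal sketch of an LBP histogram descriptor using scikit-image; the neighbourhood size and radius are illustrative defaults:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, n_points=8, radius=1):
    """Compute a uniform LBP code per pixel and summarize the codes as a
    normalized histogram, used as the texture descriptor."""
    codes = local_binary_pattern(gray, n_points, radius, method="uniform")
    n_bins = n_points + 2          # P+1 uniform patterns plus one "other" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()
```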
Early work applying representation learning to the task used private
datasets and did not report much success: Ribeiro et al. used RBMs to learn a
representation for signatures, but only reported a visual representation of the
learned weights, not the results of using such features to discriminate between
genuine signatures and forgeries [31]. Khalajzadeh used CNNs for Persian
signature verification, but only considered random forgeries in their tests [32].
Considering work that targeted the classification between genuine signatures and
skilled forgeries, we find two main approaches in recent literature:
feature representation that will therefore assign small distances when comparing a
genuine signature to another (reference) genuine signature, and larger distances
when comparing a skilled forgery with a reference.
2.3.2 Support Vector Machines
Support Vector Machines have been extensively used for signature
verification, for both writer-dependent and writer-independent classification,
and have empirically shown to be among the most effective classifiers for the task
[41]. In recent years, Guerbai et al. used One-Class SVMs for the task. This type of
model attempts to model only one class (in the case of signature verification, only
the genuine signatures), which is a desirable property, since for the actual users
enrolled in the system we only have genuine signatures to train the model.
However, the low number of genuine signatures presents an important challenge for
this strategy [42].
Chapter 3
PROBLEM DEFINITION
and by whom a signature is written, signatures can be divided into the following
five categories:
A simple forgery is written by a person who knows the shape of the original
signature but does not have much practice producing it. This type of forgery is also
called a blind forgery.
3.3 CHALLENGES
Now that we have given this problem a context and defined the main
terminologies, in this section we discuss the major challenges faced in handwritten
signature verification.
3.3.1 Challenges in Signature Extraction
Completely automatic, reliable and accurate extraction of handwritten
signatures from bank cheques is a challenging problem because of a number of
factors.
Text lines and any other foreground elements overlapping with the signature
pose a further challenge in signature extraction.
including mood, time, and tiredness level. No two signatures by the same person
are exactly identical, unless one was copied from another sample by some method,
which would make it a forgery. Some people's signatures vary more on each
attempt than other people's, and these variations are themselves different in nature
for different people. For example, for some people the aspect ratio and center of
mass of the signature remain almost the same while the inclination varies, whereas
for other people the aspect ratio and center of mass change too. Features which
remain mostly unchanged across different samples of signatures from the same
author are called stable features and can be used to obtain a good classification for
that user. However, the set of features that is stable for some authors is not
necessarily stable for all authors.
that is, thin all strokes to one pixel width. However, some people argue that
thinning can lead to loss of other important signature features.
Secondly, in real-world scenarios only genuine signatures are available for
training. Much of the research work on signature verification treats it as a
multi-class problem, where more than one class of signatures per author is
available when training the classifier. But it is impractical to assume that banks
will also have forgeries available for all of their customers. They only have a few
genuine signature samples, which makes signature verification a one-class problem
in reality. This has a significant negative impact on the accuracy of the classifier,
because multi-class classifiers are known to generally perform better than
one-class classifiers [50].
Chapter 4
METHODOLOGY
4. Requirements Validation: We also checked the specifications for
completeness and accuracy.
4.1.3 Testing
As we followed an iterative development approach, testing was
interleaved with implementation. We constantly looked for bugs and fixed
parts of the code that could break the system, using exception handling.
4.2 HOW WE CAME UP WITH A SOLUTION
Before starting the project, we met a domain expert: the branch manager
of a well-known local bank. We described the system and asked about the need for
it. He told us that banks are in urgent need of such a system, but that accuracy
is key. He also stressed that introducing such a system must not hinder the
day-to-day operations of the bank, and must not increase the workload of the
cashier or of the person scanning the signature specimen cards.
Thus, we got the idea of developing an end-to-end, fully functional system
that can be deployed with little change in any bank, and that scales easily as the
number of bank account holders increases. The cashier's workload is not
increased, as the system only needs a scanned image of the cheque and its MICR
code. The MICR code is already read by a MICR reader to check whether the
cheque is genuine.
Our vision was to develop a system so user-friendly that the cashier only
has to scan the cheque; all the heavy lifting is done by the system at the backend,
and the cashier gets the results displayed on his computer screen. The workload of
the data-entry person who scans signature specimen cards is not increased either:
previously he had to scan one card per account holder, and after deployment of our
system he still only has to scan one page (holding eight genuine signature samples
collected at the time of account opening). The system automatically crops out and
saves the genuine samples of a user. In fact, the system keeps only one genuine
sample image, as it only needs the deep feature vectors of all the genuine samples.
We found that, for a generalized and scalable system, we must use a
writer-independent approach. We researched the literature and learned about the
state of the art in image classification and recognition, and found that deep
learning models perform very well in scenarios with large intra-class variation
(as in our problem, where the signatures of one person vary from time to time). In
our research we came across [35], which trained a deep CNN to distinguish
between the genuine signatures of 531 users of the GPDS-960 dataset (to the best
of our knowledge, the largest available signature dataset of real users to date). The
last softmax layer of the trained classifier network was removed, and the 2048
activations of the fc7 layer (the last layer just before the softmax) were taken as
writer-independent features. This is a reasonable assumption, because these
activations are the means by which the network decided how to classify the
signatures; thus, they are deep features that capture the unique handwriting
characteristics of people.
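The following sketch illustrates this idea in PyTorch with a small stand-in network (not the architecture of [35]): the classification head is dropped, and the activations of the layer before it are kept as the feature vector.

```python
import torch
import torch.nn as nn

# Stand-in for the trained classifier CNN: a convolutional trunk, a
# 2048-wide fc7 layer, and a final 531-way classification head.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2048), nn.ReLU(),        # fc7: activations kept as features
    nn.Linear(2048, 531),                  # classification head (removed below)
)

# Drop the head; what remains maps a signature image to its fc7 features.
feature_extractor = nn.Sequential(*list(model.children())[:-1])
feature_extractor.eval()

with torch.no_grad():                      # inference only: cheap forward pass
    img = torch.rand(1, 1, 150, 220)       # a preprocessed signature canvas
    features = feature_extractor(img).squeeze(0)
print(features.shape)                      # torch.Size([2048])
```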
For all the genuine signature specimens (taken by the bank at the time of
account opening), these 2048 features are computed. This is computationally cheap
and takes near-constant time, since a forward pass through an already-trained
neural network is a constant-time, inexpensive operation. The feature vectors for
each specimen of a person are stored in a file on the bank server. When a bank
cheque arrives, it is scanned. The user ID is extracted from the MICR (Magnetic
Ink Character Recognition) code, and the signature is detected and extracted
automatically from the scanned cheque image, so the cashier does not need to
perform any extra laborious task. The system answers the question, "Is the
signature in the scanned image written by the author with this user ID?" The image
is sent to the bank server along with the user ID (the bank account number in this
case), and the 2048 deep features are extracted from the questioned signature. A
one-class SVM is trained on the features of the genuine signature samples of that
specific ID, and the feature vector of the questioned signature is classified, yielding
a probability score of whether the signature belongs to the genuine class or is a
forgery. This probability, as well as a boolean decision (Genuine or Forged), is
returned by the bank server to the cashier's computer and displayed on a GUI.
Thus, the cashier's task of signature matching and verification is simplified: he
may still choose whether or not to accept a signature that the system has flagged
as "Forged".
The boolean decision returned by the system is based on the probability
score from the one-class SVM compared against a global threshold set at the time
of system development. Its value was set empirically to the point where the False
Acceptance Rate equals the False Rejection Rate, termed the Equal Error Rate. For
banks, we may want false rejections to be more frequent than false acceptances,
since a genuine signature flagged as forged causes no financial loss: the signer can
simply be asked to sign again.
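The following is a minimal sketch of how such a threshold can be chosen on a validation set, assuming scores where higher means more likely genuine:

```python
import numpy as np

def eer_threshold(genuine_scores, forgery_scores):
    """Sweep candidate thresholds and return the one where FAR ~= FRR."""
    thresholds = np.sort(np.concatenate([genuine_scores, forgery_scores]))
    best_t, best_gap = None, np.inf
    for t in thresholds:
        far = np.mean(forgery_scores >= t)   # forgeries accepted as genuine
        frr = np.mean(genuine_scores < t)    # genuine rejected as forged
        if abs(far - frr) < best_gap:
            best_t, best_gap = t, abs(far - frr)
    return best_t
```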
In the next section, we describe, in detail, the algorithm that we have
implemented:
● Automatic and exact signature extraction from a scanned bank cheque image
● Signature verification, i.e., classification of signatures as genuine or forged.
decision tree were trained with these SURF features from signatures and
non-signatures.
4.2.1.2 Usage
1. The bottom-right quarter of the cheque image is cropped out
because the signature is generally located in this region.
2. Using adaptive Gaussian thresholding with a block size of 25, the
background of the cheque image is removed to obtain a binary
image.
3. A heuristic which looks at each scan line to see if it contains text
lines is used to remove all text lines. Any horizontal or vertical scan
line made up of 25% or more foreground pixels is removed by
marking each of its pixels as background. Pixels on these removed
lines whose neighbouring pixels are foreground are not marked as
background; in this way, the strokes of handwritten text crossing the
scan lines are preserved.
4. After removing lines and filling any broken strokes, 8-way
connected component analysis is performed for all the remaining
foreground pixels.
5. Then, for each connected component, SURF features are computed
using a Hessian threshold of 400. These features are passed to the
trained decision-tree classifier, which marks some of the strokes as
signature and the rest as non-signature. All non-signature pixels are
marked as background, leaving only the signature.
6. A bounding box is computed around the remaining foreground
pixels, and the signature is cropped out.
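A condensed sketch of this pipeline is given below. It omits the scan-line removal and stroke-filling of steps 3 and 4, and assumes SURF from opencv-contrib and a scikit-learn decision tree clf trained with label 1 for signature strokes:

```python
import cv2
import numpy as np

def extract_signature(cheque_gray, clf):
    """clf: the trained decision-tree classifier over SURF descriptors."""
    h, w = cheque_gray.shape
    roi = cheque_gray[h // 2:, w // 2:]           # bottom-right quarter

    # Adaptive Gaussian threshold (block size 25) to remove the background.
    binary = cv2.adaptiveThreshold(roi, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 25, 10)

    # 8-way connected component analysis over the remaining foreground.
    n, labels = cv2.connectedComponents(binary, connectivity=8)

    # SURF (opencv-contrib) with Hessian threshold 400 on each component.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    for i in range(1, n):
        comp = np.where(labels == i, 255, 0).astype(np.uint8)
        _, desc = surf.detectAndCompute(comp, None)
        if desc is None or clf.predict(desc).mean() < 0.5:
            binary[labels == i] = 0               # drop non-signature strokes

    ys, xs = np.nonzero(binary)                   # bounding box, then crop
    return binary[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```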
4.2.2 Preprocessing
The extracted signature, which is a binary image, is thinned using the
algorithm proposed by Zhang et al. in [53]. This makes the signature invariant to
the pen and paper type used. Next, the signature is centered inside a 150x220 white
canvas such that its aspect ratio is preserved. This makes our algorithm robust to
changes in scale, and invariant to variations in the font size of an author's different
signatures.
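A minimal sketch of this preprocessing, assuming scikit-image (whose default 2D skeletonization follows Zhang's algorithm); the exact resampling choices are illustrative:

```python
import numpy as np
from skimage.morphology import skeletonize
from skimage.transform import resize

def preprocess(binary_sig, canvas_hw=(150, 220)):
    """Thin the strokes to one pixel width, then center the result on a
    fixed canvas, preserving the aspect ratio. binary_sig: bool, True = ink."""
    skeleton = skeletonize(binary_sig)            # Zhang-style thinning

    ch, cw = canvas_hw
    h, w = skeleton.shape
    scale = min(ch / h, cw / w)                   # fit without distortion
    nh, nw = max(1, int(h * scale)), max(1, int(w * scale))
    resized = resize(skeleton.astype(float), (nh, nw)) > 0.5

    canvas = np.zeros(canvas_hw, dtype=bool)      # blank canvas, ink = True
    top, left = (ch - nh) // 2, (cw - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas
```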
4.2.4 Signature Verification
For verification, a scanned image of a bank cheque and a user ID are
received as input. The signature is extracted and preprocessed as described in
sections 4.2.1 and 4.2.2 respectively. A 2048-value feature vector representing
writer-independent features of the signature is computed from it as above. Then, the
40 feature vectors of the user whose ID was given are retrieved from the database
and used to train a one-class SVM. The trained SVM is then used to classify the
test signature extracted from the cheque as either forged or genuine. The SVM
returns a signed distance value from the hyperplane fitted on the reference feature
vectors; if this distance is negative, the signature is classified as forged, otherwise
as genuine.
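A minimal sketch of this step with scikit-learn's OneClassSVM; the kernel parameters are illustrative assumptions, and the sign of the decision function distinguishes genuine from forged:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# refs: reference feature vectors of the claimed user, shape (n_refs, 2048).
# test_vec: the 2048-value feature vector of the questioned signature.
def verify(refs, test_vec, nu=0.1, gamma="scale"):
    svm = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(refs)
    distance = svm.decision_function(test_vec.reshape(1, -1))[0]
    return ("genuine" if distance >= 0 else "forged"), distance
```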
The accuracy of different SVMs and CNNs, among others, was tested with the
extracted feature sets, with the aim of outperforming the state-of-the-art techniques
whose results are described in [7]. The block diagram below sketches our research
methodology.
Fig 3: Research Methodology
Chapter 5
mobile device (an Android smartphone) and its camera to scan and verify signatures.
But this had inherent issues of scalability, reliability, availability, security and
robustness, so we switched to a client-server model. All our core libraries and
signature verification code reside on the server, and a client only needs to send a
signature image to it. The server performs the verification (in the case of a plain
signature image with a white background) or extraction plus verification (in the case
of a handwritten signature on a complex, cluttered background such as a bank
cheque). The server also provides the capability to enroll new users. This
architecture makes our system highly scalable and flexible: we can use all sorts of
clients, since a client only needs to capture a signature image (using the camera on
mobile devices and a scanner on desktop clients). We can therefore change the
client application without a single change to our core library code, which helps
code robustness.
We wrote our code in the form of modules, so that each module interacts
with the others seamlessly. This reduced our development time and also resulted in
a system with high cohesion within the modules and low coupling between them.
such as managers, are able to enroll new people into the system. Cashiers are only
able to verify whether a given signature belongs to a specific person; they cannot
enroll new users. The second functionality is signature extraction from the bank
cheque. This is important because the more accurately we extract the signature
from the cheque, the higher the accuracy of our verification algorithm. The
signatures are extracted from the bank cheque and preprocessed by the
preprocessing module before being sent to the verification module. The
verification module extracts deep features from the signatures and, based on these
deep features, decides whether the signature is genuine or forged.
As can be seen above, the system takes a scanned cheque as input, from
which the signature image is extracted by the extraction module and, along with
the user ID, fed to the verification module. The verification module sends the user
ID to the database to get the reference models of the given user, and the signature
image is processed and a feature vector is computed. This feature vector, together
with the reference feature vectors, is then fed to a classifier which decides whether
the new signature matches the model learned from the user's reference signatures.
A binary output is returned by the system.
5.2.2 Verification Sequence
5.2.3.2 Desktop Application
Chapter 6
application and the Android application for their operation. These functionalities
can be divided into three atomic modules.
This system has two main use cases: enrollment of new users in the system,
and verification of the signature of an enrolled user. Both are considered separately
below.
6.3.1 User Registration
First, we discuss the enrollment module of the system. A new user must be
registered with the system using this module before their signatures can be
verified using the verification module.
When a new user is to be added to the system, they are given an A4-sized
white sheet of paper with a 4x2 grid printed on it. The new user provides eight
samples of their genuine signature on this sheet, which is then scanned using a
scanning device and fed to our system. In the case of a bank, signature acquisition
would be done when a customer opens a new bank account, and bank personnel
would then scan that sheet of paper and feed it to our system at a later time. A
unique identifier for the new user is manually provided to the system along with
the scanned image of the specimen sheet. In the banking use case, this unique
identifier would be the bank account number.
Upon receiving these two inputs, a unique user identifier and an image of
eight signatures on the specified specimen paper, the system works behind the
scenes to complete the registration process. If the provided user identifier already
exists, an error message is displayed. All eight signatures are automatically located
and then extracted from the specimen sheet. The specimen sheet with the located
signatures outlined is shown in the software, providing feedback that lets the
operator verify that the signatures are being located correctly.
The eight signatures are extracted into eight separate images, which are
preprocessed as discussed in Chapter 4; writer-independent features are then
extracted from each of these eight signatures by the CNN-based feature extractor.
This gives us eight 2048-sized feature vectors, called the reference feature vectors,
which are saved in a filesystem-based database under the unique identifier of this
user. This concludes the user registration sequence.
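A minimal sketch of this filesystem-based storage; the directory layout and function names are illustrative assumptions:

```python
import os
import numpy as np

def register_user(user_id, signature_images, extract_features, db_dir="db"):
    """Save the eight reference feature vectors under the user's ID.
    extract_features: the CNN-based extractor, image -> 2048-value vector."""
    user_dir = os.path.join(db_dir, str(user_id))
    if os.path.exists(user_dir):
        raise ValueError(f"user {user_id} already registered")
    os.makedirs(user_dir)
    refs = np.stack([extract_features(img) for img in signature_images])
    np.save(os.path.join(user_dir, "refs.npy"), refs)   # shape (8, 2048)
```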
6.3.2 Signature Verification
In this section, we discuss the signature verification module, which takes
either an image of a bank cheque with a signature on it or an image of a signature
only, together with a user ID, and verifies whether the provided signature belongs
to the user with the given ID.
When an enrolled user's signature comes in for testing, the system first
checks whether a user with the provided ID is registered. If the second input is an
image of a bank cheque, the signature is automatically located and extracted from
the cheque. Once the system has an image of the signature only, this image too is
preprocessed like the reference signatures in the previous section. Then the same
CNN-based feature extractor as above is used to extract 2048 deep features, called
the test feature vector, from the questioned signature. The eight reference feature
vectors of the specified user are retrieved from the database and used to train a
one-class SVM, which compares the test feature vector against the model learned
from the reference feature vectors and classifies the questioned signature as either
genuine or forged. The verification result is displayed by the system.
down the system into components and developed it component by component.
First of all, we focused on the verification module, which is the heart of the
system. For verification we started with classic machine learning approaches such
as the grid-based approach, but it was not robust enough to capture the variations
that occur in signatures. So we switched to modern deep learning techniques: we
first studied them and then tried a number of techniques from the literature. In the
literature, a deep CNN had been used for the signature verification problem [35].
The author trained a CNN on a signature dataset named GPDS-960, taking 531
users randomly from the dataset and training the CNN as a classifier whose job
was to distinguish between the signatures of those 531 users. The author's
hypothesis was that if a convolutional neural network is able to distinguish
between the users, it has captured certain deep features that characterize a
signature. He trained the CNN in two formulations. The first was a classifier using
only the genuine signatures. In the second formulation, the network was given the
signature image, the signer ID, and a genuine/forged label; for the label, there was
an additional binary output, where 1 meant forged and 0 meant genuine. In this
way the network learned both to classify signatures as genuine or forged and to
distinguish between different users, so the features it learned could essentially tell
genuine signatures from forgeries. To test this hypothesis, the author took the
fc7-layer feature vector from the trained network, trained a one-class SVM on it,
and then predicted the signatures of a validation set. This approach surpassed the
state of the art. We mainly used this approach, but we also tried other networks.
We experimented with Google FaceNet, which was designed for one-shot learning
of facial images. We took the pre-trained model, trained on facial images, and tried
it on signatures. We observed that the network recognised genuine signatures as
similar to the original specimen signatures and forged signatures as dissimilar.
That was exactly the problem we were trying to solve, so we decided to train
FaceNet on the GPDS-960 dataset.
We retrained the ResNet-based model on the GPDS-960 dataset of signature
images. Then we used the ResNet deep features (fc7 layer) that characterize a
signature and trained the SVM in the same way as described above. The results
show that this approach surpasses the state-of-the-art accuracy in signature
verification on standard datasets.
Now, we come to signature extraction from the bank cheque. For this
we used an approach based on SURF (Speeded Up Robust Features). The cheque
image is converted to grayscale and then binarized, and the connected components
of the binary image are extracted. For each connected component, SURF features
are computed. The feature vectors of each connected component are classified by
a nearest-neighbour classifier that compares them against a database of SURF
descriptors of handwritten connected components and of connected components
from machine-printed images. If most of the feature vectors of a connected
component are closer to machine-printed text features, that connected component
is whited out, so that only handwritten content remains on the cheque image. The
handwritten content contains the courtesy amount and other information as well as
the handwritten signature. Since we are only interested in the handwritten
signature in this case, we use prior knowledge of the bank cheque format and crop
the bottom right of the image to get the signature image. The signature image is
then reconstructed using morphological operations, so that the parts lost during the
connected-component classification stage are recovered. Finally, the signature
image is fed to the verification module for verification, and the result is displayed
on the GUI (Graphical User Interface).
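The report does not specify the exact morphological operations; a closing with a small elliptical kernel is one plausible sketch of the reconstruction step:

```python
import cv2

def reconstruct_strokes(signature_binary, iterations=1):
    """Close small gaps left by the connected-component classification
    stage by dilating and then eroding the strokes (morphological closing)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(signature_binary, cv2.MORPH_CLOSE, kernel,
                            iterations=iterations)
```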
Chapter 7
The false acceptance rate (FAR) is the ratio of the number of forged signatures
accepted as genuine to the total number of forged signatures. The false rejection
rate (FRR) is the ratio of the number of genuine signatures rejected as forged to the
total number of genuine signatures.
Therefore, we decided to test Sabourin's network with a one-class SVM with an
RBF kernel.
Sabourin has proposed two different models, namely signet and signet_f,
both of which are publicly available, along with the features extracted with these
models from the GPDS, MCYT, and CEDAR datasets. We took the features
extracted with the signet model and gave them to the one-class SVM. The
following results were observed:
Dataset Accuracy
GPDS 93.13%
MCYT 84.70%
CEDAR 93.64%
Table 1: Accuracy of One-Class SVM on Signet Features
Facenet [54], a network based on the ResNet architecture [55], was
originally developed for facial recognition. During our experiments, we tried using
the Facenet model trained on faces to classify signatures, and observed that it gave
promising results. Therefore, we retrained Facenet on signatures: it was trained on
the genuine signatures of the first 531 users of the GPDS database. It is normally
used as a classifier; instead of doing that, however, we took the embeddings
calculated by Facenet from images as 128-value feature vectors and used these to
classify signatures with our one-class SVM. As can be seen in Table 2 above, this
gave results comparable to Sabourin's models, and sometimes even better.
The figures above and below show ROC curves for the ICFHR datasets when
features were extracted from them with Sabourin's signet, signet_f with lambda
0.95, then with lambda 0.999, and finally with our own facenet.
Fig 6(b): ROC curves for signet and facenet models
Chapter 8
The problem statement described at the start of this report has been
addressed. We have developed a complete end-to-end solution consisting of a user
enrollment module, signature extraction, and signature verification. This
end-to-end system can easily be deployed in a bank with little or no change to the
existing system: only the dependencies need to be installed on the bank's current
systems. There is no need for the bank to deploy a server with a GPU, as we use an
already-trained model.
Predicting from a trained model does not require heavy computation and is
a near-constant-time operation. The system that we have proposed is a
writer-independent approach, so no matter how many people open accounts in a
bank every day, the system does not need to be retrained.
In the future, we can work on the system to enhance the performance of both
the signature extraction module and the signature verification module. The
extraction module can be improved by using the filiformity approach of Djeziri et
al. [10], which is inspired by the human visual system and how it can effortlessly
distinguish between handwritten and printed text. We can also work on better
reconstruction of signatures after removal of the printed line (the line printed on
cheques marking the place for signatures).
Signature verification accuracy can be enhanced through further research,
while the core system does not need to change, because it is flexible enough to
incorporate such changes without any modification of the pipeline. We have
described an experiment using Google FaceNet's embeddings (feature vectors for
one-shot learning of facial images). We can refine this approach and train the
FaceNet Siamese network on GPDS-960 offline handwritten signature images. We
can also apply transfer learning to the already-trained FaceNet and retrain it on
signature images. This may result in better performance.
Chapter 9
REFERENCES
[1] Jain, A., Hong, L., & Pankanti, S. (2000). Biometric identification.
Communications of the ACM, 43(2), 90-98.
[2] Kalera, M. K., Srihari, S., & Xu, A. (2004). Offline signature verification
and identification using distance statistics. International Journal of Pattern
Recognition and Artificial Intelligence, 18(07), 1339-1360.
[3] Al-Omari, Y. M., Abdullah, S. N. H. S., & Omar, K. (2011, June).
State-of-the-art in offline signature verification system. In Pattern Analysis
and Intelligent Robotics (ICPAIR), 2011 International Conference on (Vol.
1, pp. 59-64). IEEE.
[4] Check Fraud Statistics & Techniques You Should Know About. (2017, July
07). Retrieved from
http://www.relyco.com/blog/laser-check-printing/check-fraud-statistics-tec
hniques-know/
[5] Malik, M. I. (2015). Automatic Signature Verification: Bridging the Gap
between Existing Pattern Recognition Methods and Forensic Science.
[6] Radhika, K. R., Venkatesha, M. K., & Sekhar, G. N. (2010). Off-line
signature authentication based on moment invariants using support vector
machine. Journal of Computer Science, 6(3), 305.
[7] Impedovo, D., & Pirlo, G. (2008). Automatic signature verification: The
state of the art. IEEE Transactions on Systems, Man, and Cybernetics, Part
C (Applications and Reviews), 38(5), 609-635.
[8] Malik, M. I., Liwicki, M., & Dengel, A. (2011, September). Evaluation of
Local and Global Features for Offline Signature Verification. In AFHA (pp.
26-30).
[9] Madasu, V. K., Mohd. Hafizuddin Mohd. Yusof, Hanmandlu, M., & Kubik,
K. (2003, December). Automatic Extraction of Signatures from Bank
Cheques and Other Documents. In DICTA (Vol. 3, pp. 591-600).
[10] Djeziri, S., Nouboud, F., & Plamondon, R. (1998). Extraction of signatures
from check background based on a filiformity criterion. IEEE Transactions
on Image Processing, 7(10), 1425-1438.
[11] Zhu, G., Zheng, Y., Doermann, D., & Jaeger, S. (2007, June). Multi-scale
structural saliency for signature detection. In Computer Vision and Pattern
Recognition, 2007. CVPR'07. IEEE Conference on (pp. 1-8). IEEE.
[12] Duchon, C. E. (1979). Lanczos filtering in one and two dimensions.
Journal of applied meteorology, 18(8), 1016-1022.
[13] Canny, J. (1987). A computational approach to edge detection. In Readings
in Computer Vision (pp. 184-203).
[14] Mandal, R., Roy, P. P., & Pal, U. (2011, September). Signature
segmentation from machine printed documents using conditional random
field. In Document Analysis and Recognition (ICDAR), 2011 International
Conference on (pp. 1170-1174). IEEE.
[15] Jiang, R., Al-Maadeed, S., Bouridane, A., Crookes, D., & Beghdadi, A.
(Eds.). (2016). Biometric Security and Privacy: Opportunities &
Challenges in the Big Data Era. Springer.
[16] Novaković, J. (2016). Toward optimal feature selection using ranking
methods and classification algorithms. Yugoslav Journal of Operations
Research, 21(1).
[17] Goshtasby, A. (1985). Description and discrimination of planar shapes
using shape matrices. IEEE Transactions on Pattern Analysis and Machine
Intelligence, (6), 738-743.
[18] Khotanzad, A., & Hong, Y. H. (1990). Invariant image recognition by
Zernike moments. IEEE Transactions on pattern analysis and machine
intelligence, 12(5), 489-497.
[19] Dalal, N., & Triggs, B. (2005, June). Histograms of oriented gradients for
human detection. In Computer Vision and Pattern Recognition, 2005.
CVPR 2005. IEEE Computer Society Conference on (Vol. 1, pp. 886-893).
IEEE.
[20] Lowe, D. G. (1999). Object recognition from local scale-invariant features.
In Computer vision, 1999. The proceedings of the seventh IEEE
international conference on (Vol. 2, pp. 1150-1157). IEEE.
[21] Bay, H., Tuytelaars, T., & Van Gool, L. (2006, May). Surf: Speeded up
robust features. In European conference on computer vision (pp. 404-417).
Springer, Berlin, Heidelberg.
[22] Ojala, T., Pietikainen, M., & Harwood, D. (1994, October). Performance
evaluation of texture measures with classification based on Kullback
discrimination of distributions. In Pattern Recognition, 1994. Vol.
1-Conference A: Computer Vision & Image Processing., Proceedings of the
12th IAPR International Conference on (Vol. 1, pp. 582-585). IEEE.
[23] Leutenegger, S., Chli, M., & Siegwart, R. Y. (2011, November). BRISK:
Binary robust invariant scalable keypoints. In Computer Vision (ICCV),
2011 IEEE International Conference on (pp. 2548-2555). IEEE.
[24] Alahi, A., Ortiz, R., & Vandergheynst, P. (2012, June). Freak: Fast retina
keypoint. In Computer vision and pattern recognition (CVPR), 2012 IEEE
conference on (pp. 510-517). IEEE.
[25] Pavelec, D., Justino, E., Batista, L. V., & Oliveira, L. S. (2008, March).
Author identification using writer-dependent and writer-independent
strategies. In Proceedings of the 2008 ACM symposium on Applied
computing (pp. 414-418). ACM.
[26] Pal, S., Blumenstein, M., & Pal, U. (2011, February). Off-line signature
verification systems: a survey. In Proceedings of the International
Conference & Workshop on Emerging Trends in Technology (pp. 652-657).
ACM.
[27] Satyarthi, D., Maravi, Y. P. S., Sharma, P., & Gupta, R. K. (2013).
Comparative study of offline signature verification techniques.
International Journal of Advancements in Research & Technology, 2(2),
1-6.
[28] Sabourin, R., & Genest, G. (1994, October). An extended-shadow-code
based approach for off-line signature verification. i. evaluation of the bar
mask definition. In Pattern Recognition, 1994. Vol. 2-Conference B:
Computer Vision & Image Processing., Proceedings of the 12th IAPR
International. Conference on (Vol. 2, pp. 450-453). IEEE.
[29] Ruiz-del-Solar, J., Devia, C., Loncomilla, P., & Concha, F. (2008,
September). Offline signature verification using local interest points and
descriptors. In Iberoamerican Congress on Pattern Recognition (pp.
22-29). Springer, Berlin, Heidelberg.
[30] Oliveira, L. S., Justino, E., Freitas, C., & Sabourin, R. (2005, June). The
graphology applied to signature verification. In 12th Conference of the
International Graphonomics Society (pp. 286-290).
[31] Ribeiro, B., Gonçalves, I., Santos, S., & Kovacec, A. (2011, November).
Deep learning networks for off-line handwritten signature recognition. In
Iberoamerican Congress on Pattern Recognition (pp. 523-532). Springer,
Berlin, Heidelberg.
[32] Khalajzadeh, H., Mansouri, M., & Teshnehlab, M. (2012). Persian
signature verification using convolutional neural networks. International
Journal of Engineering Research and Technology, 1.
[33] Bertolini, D., Oliveira, L. S., Justino, E., & Sabourin, R. (2010). Reducing
forgeries in writer-independent off-line signature verification through
ensemble of classifiers. Pattern Recognition, 43(1), 387-396.
[34] Hafemann, L. G., Sabourin, R., & Oliveira, L. S. (2016, July).
Writer-independent feature learning for offline signature verification using
deep convolutional neural networks. In Neural Networks (IJCNN), 2016
International Joint Conference on (pp. 2576-2583). IEEE.
[35] Hafemann, L. G., Sabourin, R., & Oliveira, L. S. (2017). Learning features
for offline handwritten signature verification using deep convolutional
neural networks. Pattern Recognition, 70, 163-176.
[36] Justino, E. J., Bortolozzi, F., & Sabourin, R. (2005). A comparison of SVM
and HMM classifiers in the off-line signature verification. Pattern
recognition letters, 26(9), 1377-1385.
[37] Dolfing, J. G. A., Aarts, E. H., & Van Oosterhout, J. J. G. M. (1998,
August). On-line signature verification with Hidden Markov Models. In
Pattern Recognition, 1998. Proceedings. Fourteenth International
Conference on (Vol. 2, pp. 1309-1312). IEEE.
[38] Coetzer, J., Herbst, B. M., & du Preez, J. A. (2004). Offline signature
verification using the discrete radon transform and a hidden Markov model.
EURASIP Journal on applied signal processing, 2004, 559-571.
[39] Kashi, R. S., Hu, J., Nelson, W. L., & Turin, W. (1997, August). On-line
handwritten signature verification using hidden Markov model features. In
Document Analysis and Recognition, 1997., Proceedings of the Fourth
International conference on (Vol. 1, pp. 253-257). IEEE.
[40] Yang, L., Widjaja, B. K., & Prasad, R. (1995). Application of hidden
Markov models for signature verification. Pattern recognition, 28(2),
161-170.
[41] Kumar, R., & Singhal, P. (2017). Review on Offline Signature Verification
by SVM.
[42] Guerbai, Y., Chibani, Y., & Hadjadji, B. (2015). The effective use of the
one-class SVM classifier for handwritten signature verification based on
writer-independent parameters. Pattern Recognition, 48(1), 103-113.
[43] Huang, K., & Yan, H. (1997). Off-line signature verification based on
geometric feature extraction and neural network classification. Pattern
Recognition, 30(1), 9-17.
[44] Shekar, B. H., Bharathi, R. K., Kittler, J., Vizilter, Y. V., & Mestestskiy, L.
(2015, May). Grid structured morphological pattern spectrum for off-line
signature verification. In Biometrics (ICB), 2015 International Conference
on (pp. 430-435). IEEE.
[45] Soleimani, A., Araabi, B. N., & Fouladi, K. (2016). Deep multitask metric
learning for offline signature verification. Pattern Recognition Letters, 80,
84-90.
[46] Jain, A. K., Bolle, R., & Pankanti, S. (Eds.). (2006). Biometrics: personal
identification in networked society (Vol. 479). Springer Science & Business
Media.
[47] Masek, L. (2003). Recognition of human iris patterns for biometric
identification.
[48] Zhao, W., Chellappa, R., Phillips, P. J., & Rosenfeld, A. (2003). Face
recognition: A literature survey. ACM computing surveys (CSUR), 35(4),
399-458.
[49] Ratha, N., & Bolle, R. (Eds.). (2003). Automatic fingerprint recognition
systems. Springer Science & Business Media.
[50] Rifkin, R., & Klautau, A. (2004). In defense of one-vs-all classification.
Journal of machine learning research, 5(Jan), 101-141.
[51] Lewis, D., Agam, G., Argamon, S., Frieder, O., Grossman, D., & Heard, J.
(2006, August). Building a test collection for complex document
information processing. In Proceedings of the 29th annual international
ACM SIGIR conference on Research and development in information
retrieval (pp. 665-666). ACM.
[52] Agam, G., Argamon, S., Frieder, O., Grossman, D., & Lewis, D. (2006).
The complex document image processing (CDIP) test collection project.
Illinois Institute of Technology.
[53] Zhang, T. Y., & Suen, C. Y. (1984). A fast parallel algorithm for thinning
digital patterns. Communications of the ACM, 27(3), 236-239.
[54] Schroff, F., Kalenichenko, D., & Philbin, J. (2015). Facenet: A unified
embedding for face recognition and clustering. In Proceedings of the IEEE
conference on computer vision and pattern recognition (pp. 815-823).
[55] Szegedy, C., Ioffe, S., Vanhoucke, V., & Alemi, A. A. (2017, February).
Inception-v4, inception-resnet and the impact of residual connections on
learning. In AAAI (Vol. 4, p. 12).