Mpip12 - Final Document
After rigorous testing, the Random Forest Classifier (RFC) demonstrated the highest
accuracy, achieving a score of approximately 0.99. This suggests that RFC is the most effective
model among those tested for distinguishing between genuine and forged signatures. On average,
the system has proven to be highly successful in verifying signature images with a significant
level of accuracy. The results of this study indicate that machine learning-based approaches,
particularly RFC, can provide a reliable method for signature authentication, which could be
beneficial in various real-time applications such as banking, legal document verification, and
identity authentication.
TABLE OF CONTENTS
ABSTRACT
LIST OF FIGURES
LIST OF SYMBOLS
LIST OF ABBREVIATIONS
LIST OF TABLES
1. CHAPTER 1 : INTRODUCTION
1.1 GENERAL
1.2 OBJECTIVE
1.3 EXISTING SYSTEM
1.3.1 EXISTING SYSTEM DISADVANTAGES
1.3.2 LITERATURE SURVEY
1.4 PROPOSED SYSTEM
1.4.1 PROPOSED SYSTEM ADVANTAGES
2. CHAPTER 2 : PROJECT DESCRIPTION
2.1 GENERAL
2.2 METHODOLOGIES
2.2.1 MODULES NAME
2.2.2 MODULES EXPLANATION
2.2.3 MODULE DIAGRAM
2.2.4 GIVEN INPUT AND EXPECTED OUTPUT
2.3 TECHNIQUE OR ALGORITHM
3. CHAPTER 3 : REQUIREMENTS
3.1 GENERAL
LIST OF FIGURES
FIGURE NO. NAME OF THE FIGURE
2.3.2 Module Diagram
LIST OF SYMBOLS
S.NO NAME DESCRIPTION
Class Notation shows the class name with its -private attributes.
7. Communication Communication between various use cases.
14. Component Represents physical modules which are a collection of components.
15. Node Represents physical modules which are a collection of components.
18. Transition Represents communication that occurs between processes.
LIST OF ABBREVIATIONS
5. AI Artificial Intelligence
A great deal of research has already examined and demonstrated pivotal contributions in the field of biometric systems, particularly in off-line signature verification. Although off-line signature verification is among the most important biometric verification methods in today's era, it remains a very challenging area of research. The main contributions of our work include: (1) Three kinds of pseudo-dynamic features based on gray level were deliberately modeled and adapted to meet the special needs of SV. The proposed features contain complementary information, so their fusion could achieve better results. (2) We employed an ensemble learning method, RF, for writer-independent offline SV and achieved competitive performance with state-of-the-art algorithms. (3) We empirically analyzed two kinds of methods for generating negative samples for writer-independent SV and found that different methods create different decision boundaries and are suitable for different forgery styles.
1.3 Existing System:
A signature verification system, and the techniques used to solve this problem, can be divided into two classes: on-line and off-line. The on-line approach uses an electronic tablet and a stylus connected to a computer to extract information about a signature, capturing dynamic information such as pressure, velocity, and speed of writing for verification purposes. Off-line signature verification involves less electronic control and uses signature images captured by a scanner or camera. An off-line signature verification system uses features extracted from the scanned signature image. The features used for off-line signature verification are much simpler; only the pixel image needs to be evaluated. However, off-line systems are difficult to design because many desirable characteristics, such as the order of strokes, the velocity, and other dynamic information, are not available in the off-line case [4, 5]. The verification process has to rely wholly on the features that can be extracted from the trace of the static signature images only.
Disadvantages:
Year: 2009.
Description:
This chapter presents an off-line signature verification and forgery detection system based on
fuzzy modeling. The various handwritten signature characteristics and features are first studied
and encapsulated to devise a robust verification system. The verification of genuine signatures
and detection of forgeries is achieved via angle features extracted using a grid method. The
derived features are fuzzified by an exponential membership function, which is modified to
include two structural parameters. The structural parameters are devised to take account of
possible variations due to handwriting styles and to reflect other factors affecting the scripting of
a signature. The efficacy of the proposed system is tested on a large database of signatures
comprising more than 1,200 signature images obtained from 40 volunteers.
Title: A new signature verification technique based on a two-stage neural network classifier.
Year: 2001.
Description:
This paper presents a new technique for off-line signature recognition and verification. The
proposed system is based on global, grid and texture features. For each one of these feature sets a
special two stage Perceptron OCON (one-class-one-network) classification structure has been
implemented. In the first stage, the classifier combines the decision results of the neural networks
and the Euclidean distance obtained using the three feature sets. The results of the first-stage
classifier feed a second-stage radial base function (RBF) neural network structure, which makes
the final decision. The entire system was extensively tested and yielded high recognition and
verification rates.
Title: Dynamic selection of generative–discriminative ensembles for off-line signature
verification.
Year: 2012.
Title: Recovery of temporal information from static images of handwriting.
Year: 2001.
Description:
A taxonomy of local, regional, and global temporal clues that, along with a detailed examination
of the document, allow temporal properties to be recovered from the image is provided. It is
shown that this system will benefit from obtaining a comprehensive understanding of the
handwriting signal and that it requires a detailed analysis of stroke and sub-stroke properties. It is
suggested that this task requires breaking away from traditional thresholding and thinning
techniques, and a framework for such analysis is presented. It is shown how the temporal clues
can reliably be extracted from this framework and how many of the seemingly ambiguous
situations can be resolved by the derived clues and knowledge of the writing process.
Title: Towards automated transactions based on the offline handwritten signatures.
Year: 2013.
Description:
Automating business transactions over the Internet relies on digital signatures, a replacement of
conventional handwritten signatures in paper-based processes. Although they guarantee data
integrity and authenticity, digital signatures are not as convenient to users as the manuscript
ones. In this paper, a methodology is proposed to produce digital signatures using off-line handwritten signatures. This methodology facilitates the automation of business processes in which users continually employ their handwritten signatures for authentication. Users are isolated from the details related to the generation of digital signatures, yet benefit from enhanced security. First, signature templates from a user are captured and employed to lock his private key in a fuzzy vault. Then, when the user signs a document by hand, his handwritten signature image is employed to unlock his private key. The unlocked key produces a digital signature that is attached to the digitized document. The verification of the digital signature by a recipient implies authenticity of the manuscript signature and integrity of the signed document. Experimental results on the Brazilian off-line signature database (which includes various forgeries) confirm the viability of the proposed approach. Private keys of 1024 bits were unlocked by signature images with an Average Error Rate of about 7.8%.
1.4 Proposed System
They were not able to adapt to the movement changes efficiently. Freezing of gait was detected by the proposed freezing ratio (freezing frequencies divided by the power of the gait frequencies of horizontal shank acceleration). Reference [50] also proposed a similar implementation of wearable sensors that collected datasets from healthy and osteoarthritis-affected patients walking on the ground surface. Gait parameters such as joint kinematics, segment orientation, and joint forces were noted. The paper reviewed research on different body parts that affected the gait of the patient. It showed that fewer articles were involved in feedback than in sensing. The hip, thigh, and pelvis were the least noted body parts for both sensing and feedback gait parameters.
1.4.2 ADVANTAGES:
2.1 METHODOLOGIES
1. Signature Pre-Processing
2. Binarization
3. Complementation
4. Feature Extraction
1. Signature Pre-Processing
The preprocessing step is applied both to the signatures already stored in the database and to the signature to be tested. The purpose of this step is to normalize the signature into one form and to improve the quality of the image so that it is suitable for feature extraction. The preprocessing stage includes the following operations.
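The normalization step above can be sketched as follows. This is a minimal illustration using only NumPy; the output size (128 × 256), the contrast-stretching rule, and the nearest-neighbour resampling are illustrative assumptions, not the exact pipeline used in the project.

```python
import numpy as np

def preprocess_signature(img, size=(128, 256)):
    """Normalize a grayscale signature image to a fixed size and [0, 1] range.

    img  : 2-D array of gray-level pixel intensities.
    size : (rows, cols) of the normalized output (illustrative default).
    """
    img = np.asarray(img, dtype=np.float64)
    # Stretch intensities to the full [0, 1] range to improve contrast
    lo, hi = img.min(), img.max()
    img = (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    # Nearest-neighbour resampling to the fixed target size
    rows = (np.arange(size[0]) * img.shape[0] / size[0]).astype(int)
    cols = (np.arange(size[1]) * img.shape[1] / size[1]).astype(int)
    return img[np.ix_(rows, cols)]
```

Every stored and test signature would pass through the same function, so that the later feature extraction always sees images of one common size and intensity range.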
2. Binarization
Binarization of the gray-scale signature is obtained through an adaptive thresholding value. It begins with a threshold value for each signature, calculated by the thresholding method. For each signature pixel, an associated intensity gradient is computed by taking the maximum of the difference between the left and right pixel intensities and the difference between the upper and lower pixel intensities, which is used to calculate the corresponding threshold. Finally, thresholding is exploited for binarization. In this technique, the intensity of every signature pixel is compared with the threshold value: the pixel is set to 1 when its intensity is higher than the threshold and to 0 otherwise.
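The per-pixel comparison against an adaptive threshold can be sketched as below. Note this is a simplified variant: instead of the gradient-based threshold described above, it uses a local-mean threshold computed with an integral image; the `block` size and `offset` are illustrative parameters.

```python
import numpy as np

def binarize_signature(gray, block=15, offset=0.02):
    """Adaptive thresholding: compare each pixel with the mean of its local
    block x block neighbourhood; pixels brighter than the local threshold
    become 1, all others become 0."""
    gray = np.asarray(gray, dtype=np.float64)
    pad = block // 2
    padded = np.pad(gray, pad, mode='edge')
    # Integral image lets us compute any window sum in O(1)
    integ = padded.cumsum(axis=0).cumsum(axis=1)
    integ = np.pad(integ, ((1, 0), (1, 0)))
    r, c = gray.shape
    # Sum over the block x block window centred on every pixel
    s = (integ[block:block + r, block:block + c]
         - integ[block:block + r, :c]
         - integ[:r, block:block + c]
         + integ[:r, :c])
    local_mean = s / (block * block)
    # 1 where intensity exceeds the local threshold, 0 otherwise
    return (gray > local_mean - offset).astype(np.uint8)
```

On a light background with dark strokes, the background maps to 1 and the strokes to 0, matching the rule stated above.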
3. Complementation
Complementing a binarized signature means converting the zeros into ones and the ones into zeros. Complementation of the signature image results in better visibility of large differences in gray levels. It helps to identify fine detail in the signature image precisely, which in turn helps to correctly calculate the features used to classify a signature as genuine or forged. The complemented signature image has more clarity for further operations, since the lighter pixels in the signature become dark and the dark areas become lighter.
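The zeros-to-ones swap described above is a one-line operation on a binarized array:

```python
import numpy as np

def complement(binary_img):
    """Invert a binarized signature: zeros become ones and ones become
    zeros, so the stroke pixels stand out for feature calculation."""
    return 1 - np.asarray(binary_img, dtype=np.uint8)
```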
4. Feature Extraction
It is quite difficult to decide which features to choose for further verification of a signature. Features of the signature image are therefore extracted and used as reference points for further verification. In our work, a feature vector of seven entities, including entropy, the number of closed loops, and so on, is used to precisely authenticate and verify the test signature. These features are depicted in Table 1.
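A sketch of the extraction step is shown below. Only a subset of features is implemented here (entropy of the binary pixel distribution, stroke density, and the stroke centroid); the full seven-entity vector, including the number of closed loops, is the one listed in Table 1, and the exact definitions used in the project may differ.

```python
import numpy as np

def extract_features(binary_img):
    """Illustrative subset of the signature feature vector:
    [entropy, stroke density, centroid row, centroid col]."""
    img = np.asarray(binary_img)
    # Shannon entropy of the two-level (0/1) pixel distribution
    p = np.bincount(img.ravel(), minlength=2) / img.size
    entropy = -sum(pi * np.log2(pi) for pi in p if pi > 0)
    # Fraction of pixels belonging to the signature stroke
    density = img.mean()
    # Normalized centre of mass of the stroke pixels
    ys, xs = np.nonzero(img)
    cy = ys.mean() / img.shape[0] if ys.size else 0.0
    cx = xs.mean() / img.shape[1] if xs.size else 0.0
    return np.array([entropy, density, cy, cx])
```

The resulting vector for each signature is what the classifier consumes when deciding between genuine and forged.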
CHAPTER 3
REQUIREMENTS ENGINEERING
3.1 GENERAL
We can see from the results that on each database, the error rates are very low due to the
discriminatory power of features and the regression capabilities of classifiers. Comparing the
highest accuracies (corresponding to the lowest error rates) to those of previous works, our
results are very competitive.
3.2 HARDWARE REQUIREMENTS
The hardware requirements may serve as the basis for a contract for the implementation of the system and should therefore be a complete and consistent specification of the whole system. They are used by software engineers as the starting point for the system design. They should state what the system should do, not how it should be implemented.
The software requirements document is the specification of the system. It should include both a definition and a specification of requirements. It describes what the system should do rather than how it should do it. The software requirements provide a basis for creating the software requirements specification, which is useful for estimating cost, planning team activities, performing tasks, and tracking the team's progress throughout the development activity.
Platform : Spyder3
EFFICIENCY
Our multi-modal event tracking and evolution framework is suitable for multimedia documents from various social media platforms; it can not only effectively capture their multi-modal topics but also obtain the evolutionary trends of social events and generate effective event summaries over time. Our proposed mmETM model can exploit the multi-modal property of social events: it can effectively model social media documents that include long text with related images, and it learns the correlations between textual and visual modalities to separate visual-representative topics from non-visual-representative topics.
CHAPTER 4
DESIGN ENGINEERING
4.1 GENERAL
Design engineering deals with the various UML (Unified Modeling Language) diagrams for the implementation of the project. Design is a meaningful engineering representation of a thing that is to be built. Software design is a process through which the requirements are translated into a representation of the software. Design is the place where quality is rendered in software engineering.
4.2 UML DIAGRAMS
4.2.1 USE CASE DIAGRAM
[Use case diagram: Dataset Input / User Input → Read Images → Preprocessing → Random Forest → Predictions]
EXPLANATION:
The main purpose of a use case diagram is to show what system functions are performed for which actor, and the roles of the actors in the system can be depicted. The above diagram has the user as its actor; each actor plays a certain role to achieve the concept.
4.2.2 CLASS DIAGRAM
[Class diagram: User Input (user signature image input; inputPath(), sklearn()) → Random Forest (train & extract features; classification()) → Predictions (predict values from train data; predict(), HTML(), display())]
EXPLANATION
This class diagram represents how the classes, with their attributes and methods, are linked together to perform the verification with security. The diagram above shows the various classes involved in our project.
4.2.3 OBJECT DIAGRAM
EXPLANATION:
The diagram above shows the flow of objects between the classes. It is a diagram that presents a complete or partial view of the structure of a modeled system. This object diagram represents how the classes, with their attributes and methods, are linked together to perform the verification with security.
4.2.8 STATE DIAGRAM
[State diagram: Dataset Input → Read Image → Pre-processing → Random Forest → User Input → Predictions]
EXPLANATION:
State diagrams are loosely defined diagrams that show workflows of stepwise activities and actions, with support for choice, iteration, and concurrency. State diagrams require that the system described be composed of a finite number of states; sometimes this is indeed the case, while at other times it is a reasonable abstraction. Many forms of state diagrams exist, which differ slightly and have different semantics.
4.2.9 ACTIVITY DIAGRAM
[Activity diagram: User Login → Read Image → Pre-processing → Random Forest → User Input → Predictions]
EXPLANATION:
Activity diagrams are graphical representations of workflows of stepwise activities and
actions with support for choice, iteration and concurrency. In the Unified Modeling Language,
activity diagrams can be used to describe the business and operational step-by-step workflows of
components in a system. An activity diagram shows the overall flow of control.
4.2.6 SEQUENCE DIAGRAM
[Sequence diagram: User Input → Signature Real/Fraud Detection]
4.2.7 COLLABORATION DIAGRAM
EXPLANATION:
A collaboration diagram, also called a communication diagram or interaction diagram, is
an illustration of the relationships and interactions among software objects in the Unified
Modeling Language (UML). The concept is more than a decade old although it has been refined
as modeling paradigms have evolved.
4.2.4 COMPONENT DIAGRAM
[Component diagram: User Login → Pre-processing]
EXPLANATION
In the Unified Modeling Language, a component diagram depicts how components are wired together to form larger components and/or software systems. They are used to illustrate the structure of arbitrarily complex systems. The user gives the main query, which is converted into sub-queries and sent through data dissemination to data aggregators. Results are shown to the user by the data aggregators. All boxes are components, and the arrows indicate dependencies.
4.2.5 DEPLOYMENT DIAGRAM
[Deployment diagram: Signature Real/Fraud Detection → Predictions]
EXPLANATION:
A deployment diagram is a type of diagram that specifies the physical hardware on which the software system will execute. It also determines how the software is deployed on the underlying hardware, mapping the software pieces of a system to the devices that are going to execute them.
Data Flow Diagram
Level-0: [User → Dataset Input → Pre-Processing]
Level-1: [Splitting → Random Forest → User Input → System Prediction]
DEVELOPMENT TOOLS
5.1 Python
Python was developed by Guido van Rossum in the late eighties and early nineties at the
National Research Institute for Mathematics and Computer Science in the Netherlands.
Python is derived from many other languages, including ABC, Modula-3, C, C++, Algol-68,
SmallTalk, and Unix shell and other scripting languages.
Python is copyrighted. Like Perl, Python source code is now available under the GNU General
Public License (GPL).
Python is now maintained by a core development team at the institute, although Guido van
Rossum still holds a vital role in directing its progress.
Easy-to-learn − Python has few keywords, simple structure, and a clearly defined
syntax. This allows the student to pick up the language quickly.
Easy-to-read − Python code is more clearly defined and visible to the eyes.
Easy-to-maintain − Python's source code is fairly easy-to-maintain.
A broad standard library − Python's bulk of the library is very portable and cross-
platform compatible on UNIX, Windows, and Macintosh.
Interactive Mode − Python has support for an interactive mode which allows interactive
testing and debugging of snippets of code.
Portable − Python can run on a wide variety of hardware platforms and has the same
interface on all platforms.
Extendable − You can add low-level modules to the Python interpreter. These modules
enable programmers to add to or customize their tools to be more efficient.
Databases − Python provides interfaces to all major commercial databases.
GUI Programming − Python supports GUI applications that can be created and ported
to many system calls, libraries and windows systems, such as Windows MFC,
Macintosh, and the X Window system of Unix.
Scalable − Python provides a better structure and support for large programs than shell
scripting.
Apart from the features mentioned above, Python offers a long list of other good features.
CHAPTER 6
IMPLEMENTATION
6.1 GENERAL
Coding:
# Importing the libraries
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score

dataset = pd.read_csv('../Dataset/diabetes.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 8].values

# Splitting the dataset into the Training set and Test set
# (test_size here is illustrative)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Feature Scaling
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# Parameter evaluation via grid search
treeclf = DecisionTreeClassifier(random_state=42)
parameters = {'max_features': [1, 2, 3, 4]}
gridsearch = GridSearchCV(treeclf, parameters, cv=5)
gridsearch.fit(X, y)
print(gridsearch.best_params_)
print(gridsearch.best_score_)

# Final model with the chosen parameters
tree = DecisionTreeClassifier(min_samples_split=5,
                              random_state=42)
tree.fit(X_train, y_train)
y_pred = tree.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print(round(roc_auc_score(y_test, y_pred), 5))
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score

dataset = pd.read_csv('../Dataset/diabetes.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 8].values

# Splitting the dataset into the Training set and Test set
# (test_size here is illustrative)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Feature Scaling
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# Parameter evaluation via grid search
knnclf = KNeighborsClassifier()
parameters = {'n_neighbors': range(1, 20)}
gridsearch = GridSearchCV(knnclf, parameters, cv=5)
gridsearch.fit(X, y)
print(gridsearch.best_params_)
print(gridsearch.best_score_)

# Final model with the best number of neighbours found above
knnClassifier = KNeighborsClassifier(n_neighbors=gridsearch.best_params_['n_neighbors'])
knnClassifier.fit(X_train, y_train)
y_pred = knnClassifier.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print(round(roc_auc_score(y_test, y_pred), 5))
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, roc_auc_score, classification_report

dataset = pd.read_csv('../Dataset/diabetes.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 8].values

# Splitting the dataset into the Training set and Test set
# (test_size here is illustrative)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Feature Scaling
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# Parameter evaluation for several scoring criteria
parameters = {'gamma': (1, 2, 3, 'auto'),
              'decision_function_shape': ('ovo', 'ovr'),
              'shrinking': (True, False)}
for score in ['precision', 'recall']:
    print()
    svm = GridSearchCV(SVC(), parameters, cv=5,
                       scoring='%s_macro' % score)
    svm.fit(X_train, y_train)
    print(svm.best_params_)
    means = svm.cv_results_['mean_test_score']
    stds = svm.cv_results_['std_test_score']
    y_true, y_pred = y_test, svm.predict(X_test)
    print(classification_report(y_true, y_pred))

# Final model with the best parameters found above
svm_model = SVC(**svm.best_params_)
svm_model.fit(X_train, y_train)
y_pred = svm_model.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print(round(roc_auc_score(y_test, y_pred), 5))
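The abstract reports that the Random Forest Classifier (RFC) achieved the best accuracy, yet the listings above only cover the decision tree, KNN, and SVM baselines. The sketch below shows the analogous RF pipeline; it uses synthetic data as a stand-in for the extracted signature feature vectors, and the hyper-parameters (n_estimators, test_size) are illustrative assumptions rather than the values used in the project.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix, roc_auc_score

# Synthetic stand-in for the feature vectors extracted from signature images
X, y = make_classification(n_samples=500, n_features=8, random_state=42)

# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Feature Scaling
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# Random Forest: an ensemble of decision trees voting genuine vs forged
rfc = RandomForestClassifier(n_estimators=100, random_state=42)
rfc.fit(X_train, y_train)
y_pred = rfc.predict(X_test)

cm = confusion_matrix(y_test, y_pred)
print(round(roc_auc_score(y_test, y_pred), 5))
```

With the real signature features in place of the synthetic data, this is the pipeline whose score the abstract cites.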
CHAPTER 7
SNAPSHOTS
General:
This project is implemented as an application using Python; the server process is maintained using sockets (SOCKET & SERVERSOCKET), and the design part is handled by Cascading Style Sheets.
CHAPTER 8
SOFTWARE TESTING
8.1 GENERAL
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies, and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests; each test type addresses a specific testing requirement.
8.3 Types of Tests
Any project can be divided into units on which detailed processing can be performed. A testing strategy for each of these units is then carried out. Unit testing helps to identify the possible bugs in the individual components, so the components that have bugs can be identified and rectified.
CHAPTER 9
FUTURE ENHANCEMENT
In the future, we will incorporate some additional ideas, such as selecting an effective preprocessing method, using structural features, and utilizing advanced feature selection and classifiers [13–15, 24–26, 31, 51, 52]. In addition, it is a promising research topic to investigate how to use fewer reference signatures for verification and still obtain a reasonable result.
CHAPTER 10
REFERENCES
1. M. Ammar, Y. Yoshida and T. Fukumura, A new effective approach for automatic off-line verification of signatures by using pressure features, in Int. Conf. Pattern Recognition, IEEE Computer Society Press (Washington DC, USA, 1986), pp. 566–569.
4. N. Dalal and B. Triggs, Histograms of oriented gradients for human detection, IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognit. 1 (2005) 886–893.
11. G. S. Eskander, R. Sabourin and E. Granger, Offline signature-based fuzzy vault (OSFV): Review and new results, arXiv:1408.3985.
12. M. A. Ferrer, F. Vargas, C. M. Travieso and J. B. Alonso, Signature verification using local directional pattern (LDP), in Int. Carnahan Conf. on Security Technology (ICCST) (Institute of Electrical and Electronics Engineers Inc., New Jersey, USA, 2010), pp. 336–340.
13. Z. Fu, X. Sun, Q. Liu, L. Zhou and J. Shu, Achieving efficient cloud search services: Multi-keyword ranked search over encrypted cloud data supporting parallel computing, IEICE Trans. Commun. E98-B(1) (2015) 190–200.
14. B. Gu, V. S. Sheng, K. Y. Tay, W. Romano and S. Li, Incremental support vector learning for ordinal regression, IEEE Trans. Neural Netw. Learn. Syst. 26(7) (2015) 1403–1416.
15. B. Gu, V. S. Sheng, Z. Wang, D. Ho, S. Osman and S. Li, Incremental learning for ν-support vector regression, Neural Netw. 67 (2015) 140–150.
16. Y. Guerbai, Y. Chibani and B. Hadjadji, The effective use of the one-class SVM classifier for handwritten signature verification based on writer-independent parameters, Pattern Recognit. 48(1) (2015) 103–113.
17. J. Guo, D. Doermann and A. Rosenfeld, Forgery detection by local correspondence, Int. J. Pattern Recognit. Artif. Intell. 15(4) (2001) 579–641.
19. J. Hu and Y. Chen, Fusion of features and classifiers for off-line handwritten signature verification, in Asian Conf. Pattern Recognition (IEEE Computer Society, Washington, USA, 2011), pp. 174–178.
20. J. Hu and Y. Chen, Offline signature verification using real adaboost classifier combination of pseudo-dynamic features, in Int. Conf. Document Analysis and Recognition (ICDAR) (IEEE Computer Society, Washington, USA, 2013), pp. 1345–1349.
21. K. Huang and H. Yan, Off-line signature verification using structural feature correspondence, Pattern Recognit. 35(11) (2002) 2467–2477.
23. R. Kumar, J. D. Sharma and B. Chanda, Writer-independent off-line signature verification using surroundedness feature, Pattern Recognit. Lett. 33(3) (2012) 301–308.
24. Z. Lai, W. Wong, Y. Xu, C. Zhao and M. Sun, Sparse alignment for robust tensor learning, IEEE Trans. Neural Netw. Learn. Syst. 25(10) (2014) 1779–1792.
25. Z. Lai, W. Wong, Y. Xu, J. Yang and D. Zhang, Approximate orthogonal sparse embedding for dimensionality reduction, IEEE Trans. Neural Netw. Learn. Syst. 27(4) (2016) 723–735.
26. Z. Lai, Y. Xu, Q. Chen, J. Yang and D. Zhang, Multilinear sparse principal component analysis, IEEE Trans. Neural Netw. Learn. Syst. 25(10) (2014) 1942–1950.
27. F. Leclerc and R. Plamondon, Automatic signature verification: the state of the art — 1989–1993, Int. J. Pattern Recognit. Artif. Intell. 8(3) (1994) 643–660.
28. S. Lee and J. C. Pan, Offline tracing and representation of signatures, IEEE Trans. Syst. Man Cybern. 22(4) (1992) 755–771.
29. A. Liaw and M. Wiener, Classification and regression by random forest, R News 2(3) (2002) 18–22.
32. H. Lv, W. Wang, C. Wang and Q. Zhuo, Off-line Chinese signature verification based on support vector machines, Pattern Recognit. Lett. 26(15) (2005) 2390–2399.
34. A. Mitra, P. Kumar and C. Ardil, Automatic authentication of handwritten documents via low density pixel measurements, Int. J. Comput. Intell. 2(4) (2005) 219–223.
35. V. Nguyen, M. Blumenstein and G. Leedham, Global features for the off-line signature verification problem, in Int. Conf. Document Analysis and Recognition (IEEE Computer Society, Washington, USA, 2009), pp. 1300–1304.
36. T. Ojala, M. Pietikäinen and T. Mäenpää, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Trans. Pattern Anal. Mach. Intell. 24(7) (2002) 971–987.
37. L. Oliveira, E. Justino, C. Freitas and R. Sabourin, The graphology applied to signature verification, in Int. Conf. of Graphonomics Society (Institute of Physics Publishing, Bristol, UK, 2005), pp. 286–290.
39. M. Parodi and C. G. Juan, Online signature verification: Improving performance through pre-classification based on global features, New Trends in Image Analysis and Processing (ICIAP) (Springer Verlag, Berlin, Germany, 2013), pp. 69–76.
40. G. Pirlo, D. Impedovo and M. Fairhurst (eds.), Advances in Digital Handwritten Signature Processing: A Human Artifact for E-Society (World Scientific, 2014).
41. R. Plamondon and S. N. Srihari, Online and off-line handwriting recognition: A comprehensive survey, IEEE Trans. Pattern Anal. Mach. Intell. 22(1) (2000) 63–84.
43. G. Ridgeway, The state of boosting, Comput. Sci. Statist. 31 (1999) 172–181.
45. I. Siddiqi and N. Vincent, A set of chain code based features for writer recognition, in Int. Conf. on Document Analysis and Recognition (IEEE Computer Society, Washington, USA, 2009), pp. 981–985.
46. S. N. Srihari, C. Huang, H. Srinivasan and V. Shah, Biometric and forensic aspects of digital document processing, Digital Document Processing (Springer London, 2007), pp. 379–405.
47. S. N. Srihari, A. Xu and M. K. Kalera, Learning strategies and classification methods for off-line signature verification, in 9th Int. Workshop on Frontiers in Handwriting Recognition (IEEE Computer Society, Washington, USA, 2004), pp. 161–166.
53. B. Xu, D. Lin, L. Wang, H. Chao, W. Li and Q. Liao, Performance comparison of local directional pattern to local binary pattern in off-line signature verification system, Int. Congr. Image Signal Process. (CISP) (Institute of Electrical and Electronics Engineers Inc., NY, USA, 2014), pp. 308–312.
54. M. B. Yilmaz and B. Yanikoglu, Score level fusion of classifiers in off-line signature verification, Inform. Fusion 32 (2016) 109–119.
55. M. B. Yilmaz, B. Yanikoglu, C. Tirkaz and A. Kholmatov, Offline signature verification using classifier combination of HOG and LBP features, in Int. Joint Conf. on Biometrics (IJCB) (IEEE Computer Society, Washington, USA, 2011), pp. 1–7.