Data Classification Using Support Vector Machine
Durgesh K. Srivastava, Lekha Bhambhu
ABSTRACT
Classification is one of the most important tasks in many different applications, such as text categorization, tone recognition, image classification, micro-array gene expression analysis, protein structure prediction, and data classification in general. Most of the existing supervised classification methods are based on traditional statistics, which can provide ideal results when the sample size tends to infinity. However, only finite samples can be acquired in practice. In this paper, a novel learning method, the Support Vector Machine (SVM), is applied to different data sets (Diabetes data, Heart data, Satellite data and Shuttle data) which have two or more classes. SVM is a powerful machine learning method developed from statistical learning theory and has achieved significant results in several fields. Introduced in the early 1990s, SVMs led to an explosion of interest in machine learning. The foundations of SVM were developed by Vapnik, and SVMs are gaining popularity in the field of machine learning due to many attractive features and promising empirical performance. The SVM method does not suffer from the limitations of data dimensionality and limited samples [1] & [2].
In our experiments, the support vectors, which are critical for classification, are obtained by learning from the training samples. In this paper we show comparative results using different kernel functions for all data samples.
1. INTRODUCTION
The Support Vector Machine (SVM) was first proposed by Vapnik and has since attracted a high degree of interest in the machine learning research community [2]. Several recent studies have reported that SVMs (support vector machines) are generally capable of delivering higher performance, in terms of classification accuracy, than other data classification algorithms. SVMs have been employed in a wide range of real-world problems such as text categorization, hand-written digit recognition, tone recognition, image classification and object detection, micro-array gene expression data analysis, and data classification. It has been shown that SVMs are consistently superior to other supervised learning methods. However, for some datasets, the performance of SVM is very sensitive to how the cost parameter and kernel parameters are set. As a result, the user normally needs to conduct extensive cross validation in order to figure out the optimal parameter setting. This process is commonly referred to as model selection. One practical issue with model selection is that it is very time consuming. We have experimented with a number of parameters associated with the use of the SVM algorithm that can impact the results. These parameters include the choice of kernel function, the standard deviation of the Gaussian kernel, the relative weights associated with slack variables to account for the non-uniform distribution of labeled data, and the number of training examples.
For example, we have taken four different application data sets, namely diabetes data, heart data, satellite data and shuttle data, which all have different features, classes, numbers of training data and numbers of testing data. All these data are taken from the RSES data sets and http://www.ics.uci.edu/~mlearn/MLRepository.html [5]. This paper is organized as follows. In the next section, we introduce some related background,
including some basic concepts of SVM, kernel function selection, and model selection (parameter selection) for SVM. In Section 3, we detail all experimental results. Finally, we give some conclusions and future directions in Section 4.

2. SUPPORT VECTOR MACHINE

In this section we introduce some basic concepts of SVM, different kernel functions, and model selection (parameter selection) for SVM.

2.1 OVERVIEW OF SVM

Consider a training set {(x1, y1), (x2, y2), (x3, y3), ..., (xn, yn)}, where yn = 1 or -1 is a constant denoting the class to which the point xn belongs, and n is the number of samples. Each xn is a p-dimensional real vector. Scaling is important to guard against variables (attributes) with larger variance. We can view this training data by means of the dividing (or separating) hyperplane, which takes the form

w . x + b = 0 ----- (1)

where b is a scalar and w is a p-dimensional vector. The vector w points perpendicular to the separating hyperplane. Adding the offset parameter b allows us to increase the margin; absent b, the hyperplane is forced to pass through the origin, restricting the solution. As we are interested in the maximum margin, we consider the SVM together with the parallel hyperplanes, which can be described by the equations

w . x + b = 1 ----- (2)
w . x + b = -1

If the training data are linearly separable, we can select these hyperplanes so that there are no points between them and then try to maximize their distance. By geometry, the distance between the hyperplanes is 2 / ||w||, so we want to minimize ||w||. To exclude data points from falling into the margin, we need to ensure that for all i either w . xi + b >= 1 or w . xi + b <= -1.

Figure 1: Maximum margin hyperplanes for an SVM trained with samples from two classes

Samples lying on the hyperplanes are called Support Vectors (SVs). A separating hyperplane with the largest margin, defined by M = 2 / ||w||, specifies the support vectors, i.e. the training data points closest to it, which satisfy

yj [wT xj + b] = 1 ----- (3)

The Optimal Canonical Hyperplane (OCH) is a canonical hyperplane having a maximum margin. For all the data, the OCH should satisfy the following constraints:

yi [wT xi + b] >= 1 ; i = 1, 2, ..., l ----- (4)

where l is the number of training data points.
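To make these quantities concrete, the following minimal sketch (ours, not from the paper) fits a linear SVM on a toy two-class data set using scikit-learn's SVC, which wraps LIBSVM [5], and reads off w, b, the support vectors and the margin 2 / ||w||; the toy data and variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC  # scikit-learn's SVC wraps LIBSVM

# Toy linearly separable two-class data (illustrative only).
X = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0],   # class -1
              [4.0, 4.0], [5.0, 4.5], [4.5, 5.0]])  # class +1
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e3)  # a large C approximates a hard margin
clf.fit(X, y)

w = clf.coef_[0]                   # normal vector of the hyperplane w . x + b = 0
b = clf.intercept_[0]              # offset b
margin = 2.0 / np.linalg.norm(w)   # distance between the two parallel hyperplanes

print("w =", w, " b =", b)
print("support vectors:\n", clf.support_vectors_)
print("margin 2/||w|| =", margin)
```

The points returned in support_vectors_ are exactly the training samples lying on the two parallel hyperplanes of equation (2).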
In order to find the optimal separating hyperplane having a maximal margin, a learning machine should minimize ||w||^2 subject to the inequality constraints (4). This optimization problem is solved via the saddle points of the Lagrange function

LP = L(w, b, α) = 1/2 ||w||^2 - Σ(i=1..l) αi [yi (wT xi + b) - 1]
               = 1/2 wT w - Σ(i=1..l) αi [yi (wT xi + b) - 1] ----- (5)

where αi is a Lagrange multiplier. The search for an optimal saddle point (w0, b0, α0) is necessary because the Lagrangian must be minimized with respect to w and b and maximized with respect to the nonnegative αi (αi >= 0). This problem can be solved either in primal form (in terms of w and b) or in dual form (in terms of the αi). Equations (4) and (5) are convex and satisfy the KKT conditions, which are necessary and sufficient conditions for a maximum of equation (4). Partially differentiating equation (5) with respect to the saddle point (w0, b0, α0) gives

∂L / ∂w0 = 0 , i.e. w0 = Σ(i=1..l) αi yi xi ----- (6)

and

∂L / ∂b0 = 0 , i.e. Σ(i=1..l) αi yi = 0 ----- (7)

Substituting equations (6) and (7) into equation (5) changes the primal form into the dual form:

Ld(α) = Σ(i=1..l) αi - 1/2 Σ(i,j) αi αj yi yj xiT xj ----- (8)

In order to find the optimal hyperplane, the dual Lagrangian Ld has to be maximized with respect to the nonnegative αi (i.e. the αi must lie in the nonnegative quadrant) and subject to the equality constraint, as follows:

αi >= 0 , i = 1, 2, ..., l
Σ(i=1..l) αi yi = 0

Note that the dual Lagrangian Ld(α) is expressed in terms of the training data and depends only on the scalar products of input patterns (xiT xj). More detailed information on SVM can be found in references [1] & [2].
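As a quick numerical check of equation (6) (again our illustration, not part of the paper), scikit-learn exposes the products αi yi of the support vectors as dual_coef_, so w0 = Σ αi yi xi can be reconstructed and compared with the fitted coef_; this assumes the clf object from the previous sketch.

```python
# Reconstruct w from the dual solution: w0 = sum_i alpha_i * y_i * x_i.
# dual_coef_ holds alpha_i * y_i for the support vectors only; all other
# training points have alpha_i = 0 and drop out of the sum.
w_from_dual = clf.dual_coef_ @ clf.support_vectors_

print(w_from_dual.ravel())  # matches clf.coef_[0] up to numerical tolerance
print(clf.coef_[0])
```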
2.2 KERNEL SELECTION OF SVM

Training vectors xi are mapped into a higher (possibly infinite) dimensional space by a function φ. SVM then finds a linear separating hyperplane with the maximal margin in this higher dimensional space; C > 0 is the penalty parameter of the error term. Furthermore, K(xi, xj) ≡ φ(xi)T φ(xj) is called the kernel function [2]. There are many kernel functions for SVM, so how to select a good kernel function is itself a research issue. However, for general purposes, there are some popular kernel functions [2] & [3]:

Linear kernel: K(xi, xj) = xiT xj
Polynomial kernel: K(xi, xj) = (γ xiT xj + r)^d , γ > 0
RBF kernel: K(xi, xj) = exp(-γ ||xi - xj||^2) , γ > 0
Sigmoid kernel: K(xi, xj) = tanh(γ xiT xj + r)

Here γ, r and d are kernel parameters. Among these popular kernel functions, the RBF kernel is the main choice for the following reasons [2]:

1. The RBF kernel nonlinearly maps samples into a higher dimensional space, unlike the linear kernel.
2. The RBF kernel has fewer hyperparameters than the polynomial kernel.
3. The RBF kernel has fewer numerical difficulties.
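For reference, the four kernels above can be written directly in NumPy. This is a small sketch of ours; the parameter defaults are arbitrary example values, not values used in the paper.

```python
import numpy as np

def linear_kernel(xi, xj):
    return xi @ xj

def polynomial_kernel(xi, xj, gamma=1.0, r=1.0, d=3):
    # (gamma * xi^T xj + r)^d with gamma > 0
    return (gamma * (xi @ xj) + r) ** d

def rbf_kernel(xi, xj, gamma=0.5):
    # exp(-gamma * ||xi - xj||^2) with gamma > 0
    return np.exp(-gamma * np.sum((xi - xj) ** 2))

def sigmoid_kernel(xi, xj, gamma=0.5, r=0.0):
    return np.tanh(gamma * (xi @ xj) + r)
```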
2.3 MODEL SELECTION OF SVM

Model selection is also an important issue in SVM. Recently, SVMs have shown good performance in data classification. Their success depends on the tuning of several parameters which affect the generalization error. We often call this parameter tuning procedure model selection. If you use the linear SVM, you only need to tune the cost parameter C. However, the linear SVM only suits linearly separable problems, and many problems are non-linearly separable. For example, the Satellite data and Shuttle data are not linearly separable. Therefore, we often apply a nonlinear kernel to solve classification problems, and then we need to select the cost parameter C and the kernel parameters (γ, d) [4] & [5].

We usually use the grid-search method with cross validation to select the best parameter set. We then apply this parameter set to the training dataset to obtain the classifier, and finally use the classifier on the testing dataset to get the generalization accuracy.
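The grid-search procedure just described can be sketched as follows, using scikit-learn's GridSearchCV with an RBF-kernel SVC and a power-of-two grid in the spirit of the values reported later in Table 1; the synthetic data is a stand-in of ours for the real data sets.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Stand-in data; the paper uses the UCI / RSES data sets instead.
X_train, y_train = make_classification(n_samples=200, n_features=8,
                                       random_state=0)

# Power-of-two grid for C and gamma.
param_grid = {
    "C": 2.0 ** np.arange(-5, 16, 2),      # 2^-5 ... 2^15
    "gamma": 2.0 ** np.arange(-15, 4, 2),  # 2^-15 ... 2^3
}

# Grid search with 5-fold cross validation, as described above.
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X_train, y_train)

print("best (C, gamma):", search.best_params_)
print("best cross validation rate:", search.best_score_)
```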
3. INTRODUCTION OF ROUGH SET

Rough set theory is a relatively new mathematical tool for dealing with incomplete and uncertain knowledge. It can effectively analyze and handle all kinds of fuzzy, conflicting and incomplete information, discover the hidden knowledge in such data, and reveal its underlying rules. It was first put forward by Z. Pawlak, a Polish mathematician, in 1982. In recent years, rough set theory has received wide attention for its applications in the fields of data mining and artificial intelligence.

3.1 THE BASIC DEFINITIONS OF ROUGH SET

For a set of attributes P, the classes of the partition U / IND(P) can also be called the equivalence classes of the indiscernibility relation IND(P). For X ⊆ U and P ⊆ Q, the inferior (lower) approximation P(X) and the superior (upper) approximation P*(X) are defined as follows:

P(X) = ∪ { Y ∈ U / IND(P) : Y ⊆ X }
P*(X) = ∪ { Y ∈ U / IND(P) : Y ∩ X ≠ ∅ }

Rough set theory is successfully used in feature selection and is based on finding a reduct of the original set of attributes. Data mining algorithms then run not on the original set of attributes but on this reduct, which is equivalent to the original set. The set of attributes Q of the information system S = (U, Q, V, f) can be divided into two subsets C and D, such that C ⊆ Q, D ⊆ Q and C ∩ D = ∅. Subset C contains the condition attributes, while subset D contains the decision attributes. The equivalence classes U / IND(C) and U / IND(D) are called condition classes and decision classes, respectively.

The degree of dependency of the set of decision attributes D on the set of condition attributes C is denoted γC(D) and is defined by

γC(D) = card(POSC(D)) / card(U)

where POSC(D), the positive region, is the union of the lower approximations of the decision classes with respect to C. An attribute a is indispensable for C if POSC(D) ≠ POSC-{a}(D). The core of C is the union of all indispensable attributes in C; the core has two equivalent definitions. More detailed information on RSES can be found in [1] & [2].
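To make these definitions concrete, here is a small sketch of ours (not from the paper) that computes the equivalence classes of IND(P) and the lower and upper approximations of a set X for a toy information system.

```python
from collections import defaultdict

# Toy information system: objects described by attribute values (ours).
objects = {
    "o1": {"a": 0, "b": 1},
    "o2": {"a": 0, "b": 1},
    "o3": {"a": 1, "b": 0},
    "o4": {"a": 1, "b": 1},
}

def ind_classes(objs, P):
    """Equivalence classes of the indiscernibility relation IND(P)."""
    classes = defaultdict(set)
    for name, values in objs.items():
        classes[tuple(values[a] for a in P)].add(name)
    return list(classes.values())

def approximations(objs, P, X):
    """Lower and upper approximations of X with respect to IND(P)."""
    lower, upper = set(), set()
    for Y in ind_classes(objs, P):
        if Y <= X:   # Y is contained in X -> certainly in X
            lower |= Y
        if Y & X:    # Y intersects X -> possibly in X
            upper |= Y
    return lower, upper

X = {"o1", "o2", "o4"}
# With P = ["a"]: lower = {o1, o2}, upper = {o1, o2, o3, o4}.
print(approximations(objects, ["a"], X))
```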
4. RESULTS OF EXPERIMENTS

The classification experiments are conducted on different data sets: Heart data, Diabetes data, Satellite data and Shuttle data. These data are taken from http://www.ics.uci.edu/~mlearn/MLRepository.html and the RSES data sets [5]. In these experiments, we applied both methods to the different data sets.

First, LIBSVM is used with the different kernels (linear, polynomial, sigmoid and RBF) [5], and the RBF kernel is employed for the main experiments. Accordingly, there are two parameters, the RBF kernel parameter γ and the cost parameter C, to be set. Table 1 lists the main characteristics of the data sets used in the experiments; the diabetes, heart and satellite data sets are from the machine learning repository collection. In these experiments, 5-fold cross validation is conducted to determine the best values of the parameters C and γ, i.e. the combination of (C, γ) that is most appropriate for the given data classification problem with respect to prediction accuracy. The values of (C, γ) for all data sets are shown in Table 1.

Applications   | Training data | Testing data | Best C and γ with five-fold CV        | Cross validation rate
Diabetes data  | 500           | 200          | C = 2^11 = 2048, γ = 2^-7 = 0.007812  | 75.6
Heart Data     | 200           | 70           | C = 2^5 = 32, γ = 2^-7 = 0.007812     | 82.5
Satellite Data | 4435          | 2000         | C = 2^1 = 2, γ = 2^1 = 2              | 91.725
Shuttle Data   | 43500         | 14435        | C = 2^15 = 32768, γ = 2^1 = 2         | 99.92

Table 1: Best values of (C, γ) and cross validation rates
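Once the best (C, γ) pair has been found, the final classifier is trained with it and evaluated on the held-out test split. A minimal sketch of ours follows, with the diabetes values from Table 1 plugged in; the synthetic data merely mimics the 500/200 train/test split sizes.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in data with the Table 1 diabetes split sizes (500 train / 200 test);
# the real experiments use the UCI diabetes data instead.
X, y = make_classification(n_samples=700, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=500, test_size=200, random_state=0)

# Best diabetes parameters from Table 1: C = 2^11 = 2048, gamma = 2^-7.
clf = SVC(kernel="rbf", C=2.0 ** 11, gamma=2.0 ** -7)
clf.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```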
Second, the RSES tool set is used for data classification on all data sets using different classifier techniques: the Rule Based classifier, the Rule Based classifier with Discretization, the k-NN classifier and the LTF (Local Transfer Function) classifier. The hardware platform used in the experiments is a workstation with a Pentium IV 1 GHz CPU and 256 MB RAM, running Windows XP (using the MS-DOS prompt).

The following tables present the different experimental results. Table 1 shows the best values of the RBF parameters (C, γ) and the cross validation rate with 5-fold cross validation using the grid search method [5] & [6]. Table 2 shows the total execution time (in seconds) taken by each method to predict the accuracy on each data set.

Applications   | SVM (s)  | RSES (s)
Heart data     | 71       | 14
Diabetes data  | 22       | 7.5
Satellite data | 74749    | 85
Shuttle Data   | 252132.1 | 220

Table 2: Execution Time in Seconds using SVM & RSES

Figures 2 and 3 show the accuracy comparison for the Diabetes data set after taking different training sets and the whole testing set, for both techniques (SVM & RSES), using the RBF kernel function for SVM and the Rule Based classifier for RSES.
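Timings like those in Table 2 can be collected with a simple wall-clock timer around the prediction call; a sketch of ours, reusing clf and X_test from the previous example.

```python
import time

start = time.perf_counter()
y_pred = clf.predict(X_test)
elapsed = time.perf_counter() - start

print(f"total execution time to predict: {elapsed:.4f} s")
```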
Fig. 3: Accuracy of Diabetes data with SVM & RSES

REFERENCES:

[1] Boser, B. E., I. Guyon, and V. Vapnik (1992). A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pages 144-152. ACM Press, 1992.
[2] V. Vapnik. The Nature of Statistical Learning Theory. NY: Springer-Verlag, 1995.
[3] Chih-Wei Hsu, Chih-Chung Chang, and Chih-Jen Lin. A Practical Guide to Support Vector Classification. Dept. of Computer Science, National Taiwan University, Taipei 106, Taiwan. http://www.csie.ntu.edu.tw/~cjlin, 2007.
[4] C.-W. Hsu and C.-J. Lin. A comparison of methods for multi-class support vector machines. IEEE Transactions on Neural Networks, 13(2):415-425, 2002.
[5] Chang, C.-C. and C.-J. Lin (2001). LIBSVM: a library for support vector machines. http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[6] Li Maokuan, Cheng Yusheng, Zhao Honghai. Unlabeled data classification via SVM and k-means clustering. Proceedings of the
BIOGRAPHY:
Mr. Durgesh K. Srivastava received his degree in Information Technology (IT) from MIET, Meerut, UP, INDIA in 2006. He was a research student at the Birla Institute of Technology (BIT), Mesra, Ranchi, Jharkhand, INDIA in 2008. Currently, he is an Assistant Professor (AP) at BRCM CET, Bahal, Bhiwani, Haryana, INDIA. His interests are in software engineering, modeling and design, and machine learning.