
International Journal of Computer Applications (0975 – 8887)
Volume 128 – No.3, October 2015

Time Complexity Analysis of Support Vector Machines (SVM) in LibSVM

Abdiansah Abdiansah
Intelligent System Laboratory
Comp. Science Department
Sriwijaya University

Retantyo Wardoyo
Intelligent System Laboratory
Comp. Science & Electronic Dept.
Gadjah Mada University

ABSTRACT
Support Vector Machines (SVM) is one of the machine learning methods that can be used to perform classification tasks. Many researchers use SVM libraries to accelerate their research: such a library saves time and avoids writing code from scratch. LibSVM is one SVM library that has been widely used by researchers to solve their problems; it is also integrated into WEKA, one of the popular data mining tools. This article contains the results of our work on the complexity analysis of Support Vector Machines. Our work focuses on the SVM algorithm and its implementation in LibSVM. We use two popular programming languages, C++ and Java, with three different datasets to test our analysis and experiments. Our results show that the complexity of SVM (LibSVM) is O(n³), that C++ is faster than Java in both training and testing, and that data growth increases the computation time.

General Terms
Machine Learning, Support Vector Machines, LibSVM, Complexity

Keywords
SVM, LibSVM, C++, Java, WEKA, Data Mining

1. INTRODUCTION
Support Vector Machines (SVM), proposed by Vapnik in the early 1990s, is a computational tool for supervised learning with advantages over other methods in various types of applications [18]. In the classification task, SVM leads other methods because it provides a global solution for data classification: SVM gives a unique global hyperplane that separates the data points of different classes. SVM uses the Structural Risk Minimization (SRM) principle to reduce risk during the training phase. SVM is generally used for classification and regression [1][3][6][10][11][16]. At first, SVM could only be used for binary classification, but it can now also be used for multi-class problems [7][15][9]. SVM is widely used for classification in areas such as disease detection, text categorization, software defect prediction, intruder detection, time-series forecasting, and others.

LibSVM is a programming library that makes it easy for researchers to perform SVM classification; it was developed by [8]. Many researchers use LibSVM to shorten the software development process so that they can focus on data rather than tools. LibSVM is also integrated into WEKA as the default SVM module. WEKA is a tool for testing and evaluation that contains many algorithms for data mining tasks such as preprocessing, classification, clustering, association, attribute selection and visualization. This makes LibSVM worth considering as an SVM library; many researchers have used it both in experiments and in real applications.

This paper contains an analysis of LibSVM using algorithm complexity (covering all routines in LibSVM), tested with two popular programming languages, C++ and Java. The results of the experiments are expected to provide information about the complexity of the algorithms in LibSVM and to indicate the running time of training and testing in both C++ and Java. The rest of this paper is organized as follows: Section 2 presents the methodology used in this experiment, i.e. data analysis, a brief theory of SVM, LibSVM, and the tools used. Section 3 presents the analysis and experiments, i.e. finding the main routines in LibSVM, computing the algorithms' complexity, and implementing them in C++ and Java. Section 4 discusses the results, and Section 5 concludes.

2. METHODOLOGY
2.1 Data Analysis
Generally, data is organized as datasets with a particular format. The following is the data format used in LibSVM:

<label> <index1>:<value1> <index2>:<value2> ...

where <label> is a binary-class (-1, 1) or multi-class label, <index> is an attribute index represented as an integer from 1 to n, and <value> is the attribute value represented as a real number. Three datasets are used in this experiment:

- LibSVM: heart_scale (binary-class, 270 records, 13 attributes)
- Hsu et al. [5]: train.3 (binary-class, 2,000 records, 22 attributes)
- WEKA: iris.train (multi-class, 150 records, 4 attributes)

WEKA uses LibSVM to apply the SVM algorithm, so we conducted an experiment to compare the classification accuracy of (WEKA + LibSVM) against (C++ + LibSVM) and (Java + LibSVM) on the iris.train dataset; the results are identical. These experiments show that LibSVM gives consistent results even on different platforms. LibSVM is also highly compatible because it uses standard programming libraries.

2.2 Support Vector Machines
Support Vector Machines (SVM) is a machine learning algorithm that uses supervised learning models for pattern recognition. SVM is often used for classification and regression analysis; [17] showed that SVM classification performs quite well, with accuracy above 80.0%. For example, given a training set

(X_i, y_i), i = 1, ..., n,


where

X_i = (x_i1, ..., x_id), y_i ∈ {1, -1},

X_i is a d-dimensional sample and y_i is the label given to the sample. The SVM task is to find a linear discriminant function

g(x) = w^T x + w_0

such that

w^T x_i + w_0 ≥ +1 for y_i = +1
w^T x_i + w_0 ≤ -1 for y_i = -1

Solutions to this problem must satisfy the following equation:

y_i (w^T x_i + w_0) ≥ 1, i = 1, ..., n    (1)

The optimal linear function can be obtained by minimizing the following quadratic programming problem [13]:

min (1/2) w^T w - Σ_{i=1}^{n} α_i (y_i (w^T x_i + w_0) - 1)    (2)

which produces the following solution:

w = Σ_{i=1}^{n} α_i y_i x_i    (3)

where {α_i, i = 1, ..., n; α_i ≥ 0} are Lagrange multipliers.

To make the data linearly separable, the feature space is mapped into a high-dimensional space. The technique used to perform the mapping is called a kernel. A kernel is a function

k : χ × χ → ℝ

which takes two samples from the input space and maps them to a real number indicating their level of similarity. For all x_i, x_j ∈ χ, the kernel function must satisfy:

k(x_i, x_j) = ⟨φ(x_i), φ(x_j)⟩    (4)

where φ is an explicit mapping from the input space χ to a dot-product feature space H [4]. To apply the kernel to SVM, equation (2) is generally solved through the following equation:

max Σ_{i=1}^{n} α_i - (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} α_i α_j y_i y_j (x_i · x_j)    (5)

where x_i · x_j, the inner product of the two samples, is an implicit kernel in the equation, a similarity measure between x_i and x_j. The inner product can be replaced by another kernel function, so that equation (5) becomes:

max Σ_{i=1}^{n} α_i - (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} α_i α_j y_i y_j k(x_i, x_j)    (6)

There are four basic types of kernels: linear, polynomial, radial basis function and sigmoid. SVM can also be used for multi-class problems using the one-against-one strategy [14], which was tested by [5] with quite good results.

2.3 LibSVM
LibSVM is a programming library for the SVM algorithm developed by [8]; it is used by researchers for classification and regression tasks. LibSVM is also integrated into WEKA, which contains a collection of machine learning algorithms for data mining; the algorithms in WEKA can be used directly or invoked from a Java library. LibSVM has been used in various areas: from 2000 to 2010, more than 250,000 people downloaded it, and 10,000 emails from users asked questions related to the library [2]. LibSVM supports three functions: 1) SVC (Support Vector Classification, binary-class and multi-class), which can be used for classification tasks; 2) SVR (Support Vector Regression), which is used for regression tasks; and 3) One-Class SVM, which is used for distribution estimation. This paper discusses only LibSVM classification, because the concepts of the library are the same for all functions.

In Figure 1, we can see the organization of the LibSVM library for the training process. svm_train is the main routine used to perform SVM training. svm_train_one, underneath it, is the routine that selects one of the three LibSVM functions (SVC, SVR and one-class SVM). Under svm_train_one there are various types of SVM functions, used depending on the choice made by svm_train_one. These options produce a solved model for the data that has been trained.

Figure 1. Library organization in LibSVM

Classification Accuracy
After the training and testing phases, accuracy is measured using the following equation:

Accuracy = (# correctly predicted data / # total testing data) × 100    (7)

Short Example of LibSVM
LibSVM's technical tutorial can be found in the README file and in a paper written by [8]. This example is taken from [2]; it uses the default LibSVM dataset (heart_scale), which contains 270 records divided into 170 records for training (heart_scale.tr) and 100 records for testing (heart_scale.te). There are two executable files: svm_train, which conducts training, and svm_predict, which classifies. Figures 2 and 3 show the results of executing each application.

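As a side note before the example output, the LibSVM data format described in Section 2.1 is straightforward to read programmatically. The following Python sketch is illustrative only; parse_libsvm_lines is a hypothetical helper, not part of LibSVM itself:

```python
# Minimal sketch: read LibSVM-formatted lines ("<label> <index>:<value> ...")
# into (label, {index: value}) pairs with a sparse attribute dictionary.
def parse_libsvm_lines(lines):
    samples = []
    for line in lines:
        parts = line.split()
        if not parts:
            continue  # skip empty lines
        label = float(parts[0])
        # Remaining tokens are sparse "index:value" attribute pairs.
        features = {}
        for token in parts[1:]:
            index, value = token.split(":")
            features[int(index)] = float(value)
        samples.append((label, features))
    return samples

# Two toy records in the heart_scale style (binary labels +1/-1).
data = parse_libsvm_lines([
    "+1 1:0.708 2:1 3:1 4:-0.320",
    "-1 1:0.583 2:-1 3:0.333",
])
print(data[0])  # (1.0, {1: 0.708, 2: 1.0, 3: 1.0, 4: -0.32})
```

LibSVM ships its own readers in C++ and Java; the sketch only mirrors the "<label> <index>:<value>" layout shown above.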

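The accuracy value printed by svm_predict (Figure 3) follows equation (7). A minimal sketch of that computation, using toy label lists rather than the actual heart_scale predictions:

```python
# Equation (7): Accuracy = (# correctly predicted data / # total testing data) x 100.
def accuracy(predicted, actual):
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return 100.0 * correct / len(actual)

# Toy example: 4 of 5 test labels predicted correctly.
print(accuracy([1, -1, 1, 1, -1], [1, -1, -1, 1, -1]))  # 80.0
```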
$ ./svm_train heart_scale.tr
***
optimization finished, #iter = 87
nu = 0.471645
obj = -67.299458, rho = 0.203495
nSV = 88, nBSV = 72
Total nSV = 88

Figure 2. Result of svm_train

svm_train automatically creates the heart_scale.tr.model file. This file is then used as input by svm_predict.

$ ./svm_predict heart_scale.te heart_scale.tr.model output
Accuracy = 83% (83/100) (classification)

Figure 3. Result of svm_predict

LibSVM Code Organization
All of the SVM (LibSVM) algorithms for training and testing are implemented in the svm file (svm.cpp/svm.java). svm_train and svm_predict, discussed earlier, are examples of user-interface applications; these applications call methods in the svm file to perform the classification task. Therefore, this paper discusses the complexity of the code in the svm file.

2.4 Experimental Tools
In this experiment we used two of LibSVM's libraries, for C++ and Java. Both libraries were tested on a computer with the following specifications: Intel(R) Core(TM) i5-3230M at 2.60 GHz (4 CPUs), 4 GiB RAM, Linux (Linux Mint KDE, based on Ubuntu 14.04 LTS, 32-bit), NetBeans 8.0.2 with JDK 1.8.0 as the Java IDE, and Code::Blocks 13.12 with the GNU GCC compiler for C++.

3. ANALYSIS AND EXPERIMENTS
This section discusses the analysis and experiments conducted on LibSVM. The analysis was done by tracing the code of LibSVM's libraries, which is implemented in three files: svm_train, svm_predict and svm. Next, the main routines were identified and their complexity calculated using Big-O notation. Lastly, the LibSVM application was run using C++ and Java to measure the running time.

3.1 Finding LibSVM's Routines
The experiments were conducted using the default parameters, and the type of problem is classification. Based on the search results, some routines were ignored because they are not relevant under the default parameters. This paper examines only the routines used in the files svm_train, svm_predict and svm (as methods, not files). There are ten methods whose complexity was analyzed and computed. In general, the relationship between the three files can be seen in Figure 4.

Figure 4. Relationship of the three files

In Figure 4, svm_train and svm_predict call svm, which contains the routines of the SVM algorithm. svm_train produces a model that becomes the input for svm_predict. In svm_train, the following routines were analyzed and their complexity computed:

- parse_command_line
- read_problem
- svm_train (call)
- svm_check_parameter (call)
- svm_save_model (call)

In svm_predict:

- svm_load_model (call)
- svm_predict (call)
- predict
- svm_check_probability_model (call)
- svm_predict_probability (call)

Figure 5. Hierarchy of methods in LibSVM

The svm routines are called by svm_train and svm_predict, and both are marked with a 'call in parentheses' and a 'down arrow' symbol (see Figure 5). Figure 5 shows the hierarchy of the routines; e.g. the main code of svm_train can be seen in Figure 6.

int main(int argc, char **argv) {
    char input_file_name[1024];
    char model_file_name[1024];
    const char *error_msg;

    parse_command_line(argc, argv, input_file_name, model_file_name);

    read_problem(input_file_name);
    error_msg = svm_check_parameter(&prob,&param);

    if(error_msg) {
        fprintf(stderr,"ERROR: %s\n",error_msg);
        exit(1);
    }
    if(cross_validation)
        do_cross_validation();
    else {
        model = svm_train(&prob,&param);
        if(svm_save_model(model_file_name,model)) {
            fprintf(stderr, "can't save model to file %s\n", model_file_name);
            exit(1);
        }
        svm_free_and_destroy_model(&model);
    }
    svm_destroy_param(&param);
    free(prob.y); free(prob.x);
    free(x_space); free(line);
    return 0;
}

Figure 6. svm_train main code

In Figure 6 it can be seen which routines were analyzed (in bold) and which were not (crossed out). The do_cross_validation routine is one example of a routine that was not analyzed, because the cross_validation variable contains 0 (false), which is the default value. Figure 7 shows the default parameters in the file svm_train.

// default values
param.svm_type = C_SVC;
param.kernel_type = RBF;
param.degree = 3;
param.gamma = 0; // 1/num_features
param.coef0 = 0;
param.nu = 0.5;
param.cache_size = 100;
param.C = 1;
param.eps = 1e-3;
param.p = 0.1;
param.shrinking = 1;
param.probability = 0;
param.nr_weight = 0;
param.weight_label = NULL;
param.weight = NULL;
cross_validation = 0;

Figure 7. Default parameters of LibSVM

3.2 Algorithm Complexity
The complexity of an algorithm is generally expressed using Big-O notation. Complexity can be divided into two kinds: 1) time complexity, which concerns how long the algorithm takes to execute, and 2) space complexity, which concerns how much memory the algorithm uses. In this paper we discuss only time complexity. An algorithm processes some amount of data, where N denotes the amount of data. If an algorithm does not depend on N, then it has constant complexity, symbolized by O(1) (Big-O one). On the contrary, if the algorithm depends on N, the complexity depends on the code of the algorithm and can be O(n), O(n²), O(log n) and so on.

To explain the Big-O calculation used in this article, we give an example of computing the complexity of svm_check_parameter. The code snippet can be seen in Figure 8.

for(i=0;i<nr_class;i++) {
    int n1 = count[i];
    for(int j=i+1;j<nr_class;j++) {
        int n2 = count[j];
        if(param->nu*(n1+n2)/2 > min(n1,n2)) {
            free(label);
            free(count);
            return "specified nu is infeasible";
        }
    }
}

for(i=0;i<nr_class;i++) {
    ...
    for(int j=i+1;j<nr_class;j++) {
        ...
    }
}

Figure 8. Code snippets of svm_check_parameter

Figure 8 is divided into two parts, top and bottom. The upper part contains the complete code snippet, and the lower part contains a reduced piece of code that keeps only the nesting. Code that is not a loop is considered constant, i.e. its complexity is O(1). Otherwise, the complexity is affected by the lower bound, the upper bound and the total number of loop iterations (N). Next, nr_class is assigned the value 5 and the total number of iterations of the code in Figure 8 (bottom) is counted; Table 1 shows the simulation of the iterations.

Table 1. Loop Simulation

i    j             Number of Iterations
0    1, 2, 3, 4    4 times
1    2, 3, 4       3 times
2    3, 4          2 times
3    4             1 time
4    -             -
Total of Iterations: 10 times

From these simulations we derived an equation, based on the analytic experiment, for N items of data, formulated as follows:


Σ_{i=0}^{n-1} ((n - 1) - i)    (8)

where N (nr_class) is the number of data items and i is the counter variable. To find the complexity, equation (8) is first broken down using the rules for sums:

Σ_{i=0}^{n-1} ((n - 1) - i) = Σ_{i=0}^{n-1} (n - 1) - Σ_{i=0}^{n-1} i    (9)

Equation (9) can then be evaluated:

n(n - 1) - (1/2) n(n - 1)

= (n² - n) / 2    (10)

Equation (10) gives the total number of iterations for N items of data. Based on equation (10), the complexity of the code snippet in Figure 8 is O(n²/2), i.e. O(n²): the complexity keeps only the most significant term, and in this case n² is more significant than n. The complexity of svm_train, svm_predict and svm can be seen in Figures 9, 10 and 11. Table 2 shows the total complexity for all methods in SVM.

Figure 9. svm_train complexity

Figure 10. svm_predict complexity

Figure 11. svm complexity

Table 2. Total of Complexity

Num.  Methods                          Complexity (Big-O)
1     parse_command_line               O(1) + O(n)
2     read_problem                     O(1) + 2*O(m*n)
3     svm_predict (main) and predict   O(1) + O(n) + O(n*m)
4     svm_check_parameter              O(1) + O(m*n) + O(n²)
5     svm_train                        O(1) + 9*O(n) + 2*O(m*n) + 6*O(n²*m)
6     svm_save_model                   O(1) + 5*O(n) + 2*O(m*n)
7     svm_load_model                   O(1) + 2*O(n) + 2*O(m*n)
8     svm_check_probability_model     O(1)
9     svm_predict_probability          O(1) + 3*O(n) + O(n²)
10    svm_predict                      O(1) + 4*O(n) + 2*O(n³)

Table 2 shows the analyzed methods in LibSVM; svm_predict has the highest complexity, O(n³). svm_train actually has the same complexity as svm_predict, but in this case we limited our computation to the main routine without searching the sub-routines further, so some routines in svm_train are considered O(1). Our analysis shows that the SVM algorithm requires three nested loops, so the complexity of SVM is O(n³). This analysis is in accordance with the standard complexity of SVM [12].

3.3 LibSVM Implementation
LibSVM was implemented using two programming languages, C++ and Java. The reason for choosing these languages is subjective, based on the general knowledge and opinion that both are quite widely used among researchers. This implementation merely shows LibSVM's performance on the tested datasets, to see how much time it takes to perform classification (running time). The time was recorded from the beginning of execution to the end using the TIME application (a default application on Linux); this technique follows [5]. Tables 3 and 4 give the results of the implementation using C++ and Java with the three sample datasets. Experiments were conducted to measure the running time of training and testing. In


this experiment, we used the training data as the test data. Tables 5 and 6 give the running-time results for the train.3 dataset, which was divided into five subsets: 400 records, 800 records, 1200 records, 1600 records and 2000 records.

Table 3. Running Time in C++

Datasets      Training Time (sec.)   Testing Time (sec.)
heart_scale   0.018                  0.013
train.3       0.752                  0.598
iris.train    0.007                  0.005

Table 4. Running Time in Java

Datasets      Training Time (sec.)   Testing Time (sec.)
heart_scale   0.145                  0.122
train.3       1.557                  1.185
iris.train    0.112                  0.102

Table 5. Running Time of train.3 Using C++

Datasets        Training Time (sec.)   Testing Time (sec.)
train.3 (400)   0.047                  0.041
train.3 (800)   0.145                  0.117
train.3 (1200)  0.392                  0.301
train.3 (1600)  0.535                  0.435
train.3 (2000)  0.752                  0.598

Table 6. Running Time of train.3 Using Java

Datasets        Training Time (sec.)   Testing Time (sec.)
train.3 (400)   0.255                  0.182
train.3 (800)   0.428                  0.339
train.3 (1200)  0.657                  0.485
train.3 (1600)  0.928                  0.689
train.3 (2000)  1.557                  1.185

Figure 12. Running-time graph using C++

Figure 13. Running-time graph using Java

Figures 12 and 13 show the running-time graphs for C++ and Java on the train.3 dataset divided into five subsets. Based on the charts, it can be concluded that the running time of C++ is lower than that of Java, both for training and for testing. The graphs also show that data growth affects the running time: more data requires more time.

4. DISCUSSION
The experiments on LibSVM's complexity have been carried out and the results obtained, but the calculation does not cover all existing routines. The experiments on LibSVM were restricted to the default parameters mentioned previously; several LibSVM functions were not counted because, by default, those functions are not executed. Furthermore, the implementation of LibSVM was done by recompiling the original library. The experiments showed that LibSVM's portability is very good, so it is not difficult to re-implement. One problem is that the results differ slightly between runs, but the difference is not significant; to obtain an average time, we ran each experiment more than once and used three decimal digits for higher precision.

There are two experimental scenarios, which aimed to examine the running-time results. The first scenario uses three datasets, heart_scale, train.3 and iris.train; the results of the experiments showed that C++ is faster than Java because C++ is native. Testing time is smaller than training time, and larger data increases the computation time. In the second scenario, the experiments focus on the train.3 data divided into five subsets, aiming to observe the effect on running time as the data grows bigger.

5. CONCLUSION
Support Vector Machines is a machine learning method that uses supervised learning for training. In the classification task, SVM is favored over other methods because it provides a global solution for data classification. To make it easier for researchers to use the SVM algorithm, Lin et al. developed LibSVM, which has been widely used by researchers and has been integrated into WEKA. This paper presented an analysis of LibSVM through two experiments: computing the complexity of the algorithm and implementing it using two programming languages, C++ and Java. The experiments showed that the running time using C++ is faster than Java because C++ is native. The results also showed that the running time for training and testing on the train.3 dataset rises quadratically. Broadly speaking, the experimental results indicate that the running time of testing is smaller than that of training.
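As a closing sanity check on the loop analysis in Section 3.2, the iteration count of Table 1 and the closed form (n² - n)/2 can be reproduced with a short simulation. This is a sketch only; nested_loop_iterations is a hypothetical helper mirroring the nesting of Figure 8, not LibSVM code:

```python
# Count iterations of the doubly nested loop in svm_check_parameter
# (for i in 0..n-1: for j in i+1..n-1) and compare with (n^2 - n) / 2.
def nested_loop_iterations(n):
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            count += 1
    return count

# The closed form holds for any n, not just the nr_class = 5 case of Table 1.
for n in (5, 10, 100):
    assert nested_loop_iterations(n) == (n * n - n) // 2

print(nested_loop_iterations(5))  # 10, matching the total in Table 1
```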


6. REFERENCES
[1] Agarwal, S. (2011). Weighted support vector regression approach for remote healthcare monitoring. In 2011 International Conference on Recent Trends in Information Technology (ICRTIT), IEEE, pp. 969–974; Che, J. (2013). Support vector regression based on optimal training subset and adaptive particle swarm optimization algorithm. Appl. Soft Comput. 13(8), pp. 3473–3481.

[2] Chang, C. C., & Lin, C. J. (2011). LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3), 27.

[3] Gunn, S. R. (1998). Support vector machines for classification and regression. ISIS Technical Report 14.

[4] Hofmann, T., Schölkopf, B., & Smola, A. J. (2008). Kernel methods in machine learning. The Annals of Statistics, 1171–1220.

[5] Hsu, C. W., & Lin, C. J. (2002). A comparison of methods for multiclass support vector machines. IEEE Transactions on Neural Networks, 13(2), 415–425.

[6] Huang, W., & Shen, L. (2008). Weighted support vector regression algorithm based on data description. In ISECS International Colloquium on Computing, Communication, Control, and Management (CCCM '08), vol. 1, IEEE, pp. 250–254.

[7] Lee, Y., Lin, Y., & Wahba, G. (2001). Multicategory support vector machines. Comput. Sci. Stat. 33, pp. 498–512.

[8] Lin, C. J., Hsu, C. W., & Chang, C. C. (2003; last updated April 15, 2010). A practical guide to support vector classification. National Taiwan University. www.csie.ntu.edu.tw/cjlin/papers/guide/guide.pdf

[9] Nemmour, H., & Chibani, Y. (2006). Multi-class SVMs based on fuzzy integral mixture for handwritten digit recognition. Geometric Modeling and Imaging—New Trends, pp. 145–149.

[10] Suykens, J. A. K., De Brabanter, J., Lukas, L., & Vandewalle, J. (2002). Weighted least squares support vector machines: robustness and sparse approximation. Neurocomputing 48(1), 85–105.

[11] Tomar, D., Arya, R., & Agarwal, S. (2011). Prediction of profitability of industries using weighted SVR. Int. J. Comput. Sci. Eng. 3(5), pp. 1938–1945.

[12] Tsang, I. W., Kwok, J. T., & Cheung, P. M. (2005). Core vector machines: Fast SVM training on very large data sets. Journal of Machine Learning Research, pp. 363–392.

[13] Vapnik, V. (2000). The Nature of Statistical Learning Theory. Springer Science & Business Media.

[14] Webb, A. R. (2002). Statistical Pattern Recognition, 2nd Edition. John Wiley & Sons.

[15] Weston, J., & Watkins, C. (1998). Multi-class support vector machines. CSD-TR-98-04, Royal Holloway, University of London, Egham, UK.

[16] Xue, Z., & Liu, W. (2012). A fuzzy rough support vector regression machine. In 2012 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Dover, pp. 840–844.

[17] Zhang, D., & Lee, W. S. (2003). Question classification using support vector machines. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 26–32). ACM.

[18] Zhu, G., Huang, D., Zhang, P., & Ban, W. (2015). ε-Proximal support vector machine for binary classification and its application in vehicle recognition. Neurocomputing, 161, 260–266.

