


TOWARDS ANALOG IMPLEMENTATION OF SUPPORT VECTOR
MACHINES: A TIME-CONTINUOUS FORMULATION OF THE
CLASSIFICATION PHASE

G. OLTEAN, M. GORDAN
TECHNICAL UNIVERSITY OF CLUJ-NAPOCA, ROMANIA

KEYWORDS: Support vector machine, Fast classification, Time-continuous signal processing, Analog implementation

ABSTRACT: Support vector machines (SVMs) are powerful classifiers for large-scale tasks such as image and video-sequence analysis. A practical implementation issue for such problems is their real-time operation in the classification phase. One recent strategy is the development of algorithmic formulations of SVMs suitable for hardware implementation. We propose such an approach, which allows a partially sequential – partially parallel implementation of the SVM classification through the description of the feature vectors as time-continuous signals. This allows a simple analog implementation of the dot product between each test and support vector in a sequential fashion, and a parallel computation of all the dot products for a test vector. The functionality of the algorithm is validated through a Simulink model and a set of experiments on the IRIS dataset. The solution offers a good speed-complexity compromise in the SVM classification implementation.

INTRODUCTION

Support vector machines (SVMs) represent a powerful machine learning technique that has recently gained great interest from the scientific community, especially due to its performance in solving difficult classification and pattern recognition tasks. One of the most noticeable application fields where SVM classifiers have proved their very good performance is image analysis. Several applications have recently been reported in face detection, object tracking in video-sequences, face recognition, facial feature localization etc. [1,2]. These are difficult recognition tasks, due to the variability of the appearance of the same pattern and to the difficulty of defining and extracting reliable features that describe the patterns in a way that maximizes the inter-class and minimizes the intra-class variance. SVM classifiers prove able to learn from small sets of (often sparse) examples, without the need for a carefully selected feature extraction strategy, with very small recognition error and high generalization performance.

In its basic form, an SVM is a binary classifier based on the optimal separating hyperplane algorithm, which implements the Structural Risk Minimisation principle. In its training phase, the SVM receives at its input a set of labelled training patterns of the form {xi, yi}, i = 1, 2, ..., Ntrn, where xi is a vector of N real-valued features, xi ∈ ℜ^N, and yi is its label, yi ∈ {−1, +1}; the value +1 is assigned to a positive example for the classifier, whereas −1 is assigned to a negative example. Based on the training set, the SVM learning algorithm derives the so-called optimal separating hyperplane. This will (ideally) perfectly separate the positive from the negative examples, ensuring a maximal distance between the closest positive and the closest negative example to the hyperplane [3]. The training patterns xs, s = 1, 2, ..., Ns, Ns << Ntrn, that are the closest positive and closest negative to the hyperplane are called the support vectors of the SVM classifier. The support vectors, together with their associated positive Lagrange multipliers αs and with their labels ys, completely define the decision function of the SVM classifier, in the form:

f(x) = ∑_{s=1}^{Ns} αs ys K(x, xs) + b ,    (1)

where x denotes an unlabeled pattern to be classified by the trained SVM, x ∈ ℜ^N; b represents the bias term of the hyperplane; K(⋅,⋅) represents a kernel function used to compute the dot product between x and xs, either in their original space ℜ^N (for a linear SVM) or in a higher-dimensional feature space ℜ^M, M >> N, where the data are projected to become linearly separable (for non-linear SVMs) [3]. Typical forms of kernels are:
a) for the linear SVM:

K(x, xs) = x ⋅ xs = xᵀxs    (2)

b) for a non-linear SVM, the polynomial kernel of degree d, with coefficients p ∈ ℜ, q ∈ ℜ:

K(x, xs) = (p ⋅ xᵀxs + q)^d    (3)
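For illustration, the evaluation of equations (1)-(3) can be written in a few lines of numerical code. The following Python/NumPy sketch is ours, not part of the paper's implementation, and all example values in it are hypothetical:

import numpy as np

def svm_decision(x, sv, alpha, y, b, kernel="linear", p=1.0, q=1.0, d=2):
    # Evaluate f(x) of equation (1); sv holds one support vector per row.
    dots = sv @ x                      # the Ns dot products x^T xs, eq. (4)
    if kernel == "linear":
        k = dots                       # linear kernel, eq. (2)
    else:
        k = (p * dots + q) ** d        # polynomial kernel, eq. (3)
    return np.sum(alpha * y * k) + b   # weighted sum plus bias, eq. (1)

# Hypothetical toy classifier: Ns = 2 support vectors, N = 4 features.
sv = np.array([[5.1, 3.5, 1.4, 0.2], [6.2, 2.9, 4.3, 1.3]])
alpha, y = np.array([0.4, 0.4]), np.array([+1, -1])
f = svm_decision(np.array([5.0, 3.4, 1.5, 0.2]), sv, alpha, y, b=1.0)
label = +1 if f > 0 else -1            # the sign of f(x) gives the class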
The number of support vectors Ns increases with the complexity of the classification task. Furthermore, difficult classification problems are in general associated with a large dimensionality of the pattern space, i.e. a large N. Therefore, the more difficult the classification problem, the larger the computational complexity of the SVM classification process, both in the training phase [4] and in the classification phase [5]. Although a great effort has been devoted to finding efficient computational implementations both for the training phase (in which the tractability of the optimisation process leading to the SVM classifier is the most important issue) and for the test phase (where the problem is the real-time evaluation of f(x)), in this paper we address only the latter aspect of the problem. Reducing the numerical complexity of the classification phase is extremely important for applications that must run in real time, e.g. video-sequence analysis [2]; therefore this topic is of current interest for the scientific community.

The computational complexity of the classification phase can be evaluated in terms of the number of elementary operations needed to evaluate f(x) for every unlabeled pattern x. Examining equations (1), (2) and (3), one can see that each evaluation requires Ns dot product computations of the form xᵀxs, s = 1, 2, ..., Ns, followed by Ns kernel evaluations (one for every xᵀxs), Ns multiplications with the constants ys αs and Ns−1 additions. The dot product is expressed as:

xᵀxs = ∑_{i=1}^{N} xi xsi ,    (4)

where x = [x1 x2 ... xN]ᵀ and xs = [xs1 xs2 ... xsN]ᵀ. Thus, to evaluate every xᵀxs one needs N multiplications and N−1 additions, which leads to a total of approximately 2N⋅Ns operations per pattern classification. Furthermore, considering that many image analysis tasks decompose the image into a set of P patterns (partially overlapping image windows), the number of operations needed for the complete classification increases P times, to 2P⋅N⋅Ns [1,2]. Therefore, speeding up the classification phase becomes a non-trivial task for real-world applications of SVM classifiers.
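To make these operation counts concrete, here is a short back-of-the-envelope computation in Python; the problem sizes below are hypothetical, chosen by us purely for illustration:

N = 400         # features per pattern, e.g. a 20x20 grey-level window
Ns = 40         # number of support vectors
P = 10000       # partially overlapping windows per image

ops_per_pattern = 2 * N * Ns    # approx. N multiplies + N-1 adds, Ns times
ops_per_image = P * ops_per_pattern
print(ops_per_pattern)          # 32000 operations per pattern
print(ops_per_image)            # 320000000 operations per image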
The types of solutions reported in the literature depend on the type of implementation:
1) Solutions devoted to software implementations of SVM classifiers. In this case, the computational time can be decreased only by reducing the number of support vectors Ns or the dimension of the feature space N, since all the computations are performed sequentially. Ns can be minimised by selecting only the most significant support vectors [5]. To reduce the feature space dimension N, various data compression techniques can be used, e.g. principal component analysis [6].
2) Solutions achieved by the hardware implementation of SVM classifiers. In this case, the duration of the classification phase is reduced by parallelising, to the largest possible extent, the computation of f(x). Analog implementations have recently been reported in the literature [7]. It is easy to see from equation (1) that all the evaluations of the kernel function K(x, xs), s = 1, 2, ..., Ns, and the N feature-by-feature multiplications from the computation of xᵀxs can be performed in parallel. This leads to massively parallel circuit structures. However, as the parallelism increases, the circuit-level complexity of the SVM classifier increases as well. The massively increased complexity resulting for a large-scale classification problem might be impractical for certain applications. A compromise solution would be a partially sequential – partially parallel operation, lying somewhere between the software and the hardware implementation; this could provide a good trade-off between the duration of the classification phase and the hardware complexity, thus being also more suitable for large-scale problems. Such a solution is proposed in this paper. The proposed approach is the implementation of each dot product xᵀxs, s = 1, 2, ..., Ns, in a sequential analog fashion, whereas the Ns dot product computations (and of course the kernel evaluations) are done in parallel. A number of Ns parallel analog circuit structures are needed for the evaluation of f(x). The computation of xᵀxs in the analog sequential way is achieved by providing time-continuous descriptions of the pattern x and of the support vectors xs, using signal reconstruction techniques applied to their samples. With the time-continuous formulation of x and xs as analog signals x(t) and xs(t), the computation of xᵀxs can be formulated as an analog signal multiplication followed by a signal integration; these operations can be implemented with simple analog structures.

In this paper we prove the validity of the proposed algorithm through a mathematical demonstration, a Simulink model implementation and a set of experimental results obtained with the implemented Simulink model on a standard data classification set. Its circuit-level implementation is currently a topic of our ongoing research work in the field.
TIME-CONTINUOUS FORMULATION OF THE CLASSIFICATION PHASE

The motivation for the proposed time-continuous formulation of the SVM classification phase originates from a typical application of SVM classifiers: the classification of grey-level images based on their content [1]. In the simplest implementation, the feature vector for such a task is obtained by scanning the grey-level digital image in row order. Thus every pattern x is just an ordered collection of grey levels, x = [x1 x2 ... xN]ᵀ, as illustrated in Figure 1a). However, one could consider not the digital image representation, but the analog image representation, as in classical television applications. The analog version of the image can be represented as a sequence of time-continuous signals corresponding to each scan line, as in Figure 1b). The digital image is the sampled and quantized version of the analog signal. Thus x can also be represented as the sampled and quantized version of the analog image, as in Figure 1c).

[Figure 1]
Fig. 1. Image description: a) as a digital image and in vector form; b) in analog form (one scan line); c) alternate representation as a sampled and quantized signal

Let us denote the description of the pattern x as a sampled signal by X(t). In analytical form, the sampled signal X(t) can be expressed as follows:

X(t) = x_{k+1}, if t = t_k = k/fS, k = 0, 1, ..., N−1; X(t) = 0 otherwise.    (5)

In equation (5), fS denotes the sampling frequency applied to the original analog signal from Figure 1b) to obtain the sampled signal in Figure 1c). Of course, every support vector xs can be expressed in just the same fashion by a corresponding sampled signal Xs(t), s = 1, 2, ..., Ns.

We can consider that the (in this case, available) analog versions of the images representing the pattern x and the support vectors xs, namely the time-continuous signals denoted by x(t) and xs(t), are just more complete versions of the sampled signals X(t) and Xs(t). Then, if we can express the decision function f(x) of the SVM classifier given by equation (1) with respect to x(t) and xs(t), s = 1, 2, ..., Ns, we obtain a time-continuous formulation of the SVM classification phase. For the time being, we consider in equation (1) the use of the linear kernel or of the polynomial kernel, as in these functions x and xs appear only in the form of their dot product, xᵀxs. Actually, this is the only term that needs to be expressed in a time-continuous fashion, according to our implementation goal formulated in the previous section.

Describing x and xs as the sampled signals X(t) and Xs(t), the dot product xᵀxs given by equation (4) can be expressed as:

xᵀxs = ∑_{k=0}^{N−1} X(t_k) Xs(t_k) ,    (6)

which furthermore, neglecting the effect of quantization, can be expressed in terms of the analog time-continuous signals x(t) and xs(t) as:

xᵀxs = ∑_{k=0}^{N−1} x(t_k) xs(t_k) .    (7)

Equation (7) can be written in continuous form as:

xᵀxs = ∫_0^{N/fS} x(t) xs(t) dt ,    (8)

which gives the basis of the proposed time-continuous formulation of the SVM classification.

Although this time-continuous formulation was initiated considering an image classification application, it can be extended to any type of patterns x, regardless of whether their features are of the same kind, as long as they are real-valued. However, in the latter case, no time-continuous analog version of x and xs, s = 1, 2, ..., Ns, was ever available. Therefore, signal reconstruction methods, e.g. the generation of quantized but not sampled signals, or interpolation techniques, must be applied to obtain analog versions of the patterns. In practice, depending on the reconstruction method, the integration may yield a value larger than xᵀxs. However, as long as the result is proportional to the dot product by a known constant factor, this is not a problem for the computation.

With this formulation, the SVM classification phase can be implemented using equations (8), (1) and, depending on the type of SVM, (2) or (3), as follows:
1) A sequential computation of each dot product xᵀxs, s = 1, 2, ..., Ns, using an analog multiplier block that has at its inputs the signals x(t) and xs(t), followed by an analog integrator whose output is read at the moment N/fS after the integration start and multiplied by the constant αs ys. This is illustrated in Figure 2, which is the detailed description of the "Weighted sum" blocks from Figure 3.
2) A parallel computation of all the dot products xᵀxs for the Ns support vectors. This process requires Ns structures such as the one given in Figure 2. For a linear SVM, the Ns scalar outputs simply enter a summation block along with the bias term b. For a non-linear SVM with a polynomial kernel, every scalar output xᵀxs is first processed according to equation (3) prior to summation.

[Figure 2]
Fig. 2. Block diagram illustrating the sequential analog implementation of the dot product xᵀxs

Usually the number of support vectors Ns is much smaller than the length N of the feature vector (in practical applications, at least by a factor of 10). Therefore the circuit complexity (defined as the number of simple analog building blocks needed for the implementation) is much smaller than in the massively parallel structures, which makes the approach suitable even for large-scale SVM classification problems. On the other hand, the computational speed, although lower than in the case of a fully parallel system, is higher than in the case of a fully sequential system.

The last issue needed to complete the proposed formulation of the SVM classification phase is the generation of the time-continuous signals x(t) and xs(t), through signal reconstruction techniques, from their sampled versions X(t) and Xs(t), s = 1, 2, ..., Ns. Two issues should be mentioned here:
1) Assuming the same sampling frequency fS for X(t) and for every Xs(t), any pair of sampled signals of the form (X(t), Xs(t)) will ideally be perfectly synchronised. In practice, however, this never holds. Thus, if one wants to use the sampled signals directly as inputs to the computational block given in Figure 2, any time delay between X(t) and Xs(t) produces a large error in the resulting dot product.
2) In the general case, when x and xs do not originate from analog, time-continuous signals, a value fS → ∞ (corresponding to ideal sampling) will not be as beneficial as for analog signals. On the contrary, the time-continuous signal multiplication will again be prone to errors, as in the first case considered here, for any time delay between the two signals.
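Issue 1) can be made concrete with a small numerical experiment. The sketch below is our illustration, with hypothetical feature values: when X(t) and Xs(t) are non-zero only at the isolated sampling instants, as in equation (5), any misalignment makes their product vanish, so the resulting dot product collapses to zero:

import numpy as np

dt, f_s, N = 1e-3, 1.0, 4
t = np.arange(0, N / f_s, dt)
x  = np.array([5.0, 3.4, 1.5, 0.2])    # hypothetical test pattern
xs = np.array([5.1, 3.5, 1.4, 0.2])    # hypothetical support vector

def sampled(v):
    # Sampled signal per eq. (5): non-zero only at t_k = k / f_s.
    s = np.zeros_like(t)
    s[(np.arange(N) / (f_s * dt)).astype(int)] = v
    return s

X, Xs = sampled(x), sampled(xs)
print(np.sum(X * Xs))               # 39.54: perfectly aligned samples overlap
print(np.sum(X * np.roll(Xs, 1)))   # 0.0: a one-sample delay, nothing overlaps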
In order to avoid these problems, one needs to provide suitably reconstructed waveforms x(t) and xs(t) such that, even in the presence of a time delay τd, the dot product is computed with a very small error. Two simple solutions to produce signals x(t) and xs(t) that satisfy this condition are given in the following.
1) The simplest solution is to consider x(t) and xs(t) in the form of quantized but not sampled signals. In this case, the "restoration" of the time-continuous signals is achieved by repeating, on each time interval [t_k; t_{k+1}), t_k = k/fS, k = 0, 1, ..., N−1, the corresponding sample (feature) value, namely x_{k+1} for x(t) and x_{s,k+1} for xs(t). Thus x(t) and xs(t) will have the following expressions:

x(t) = x_{k+1} for t ∈ [t_k; t_{k+1}), ∀k = 0, 1, ..., N−1    (9a)
xs(t) = x_{s,k+1} for t ∈ [t_k; t_{k+1}), ∀k = 0, 1, ..., N−1    (9b)

The resulting signals x(t) and xs(t) for some s ∈ {1, 2, ..., Ns} are illustrated in Figure 5 of the Experimental results section, for a particular example with Ns = 2 and N = 4 (the plots denoted XTest and SV1 on Scope 1). They can be considered periodical square-wave signals with the period N/fS.

Thus, in the ideal case of no time delay between x(t) and xs(t), at the output of the integrator from Figure 2 we will have, at the moment N/fS after the start of the integration, the numerical value:

∫_0^{N/fS} x(t) xs(t) dt = ∑_{k=0}^{N−1} ∫_{t_k}^{t_{k+1}} x(t) xs(t) dt = (1/fS) ∑_{i=1}^{N} xi xsi = (1/fS) ⋅ xᵀxs , s = 1, 2, ..., Ns ,    (10)

which is proportional to the dot product by a factor of 1/fS. If fS = 1, equation (10) gives exactly the dot product xᵀxs.
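Equation (10) can be checked numerically. In the sketch below (ours, with hypothetical feature values), the stepwise signals of equations (9a)-(9b) are built on a fine time grid, and the integral of their product recovers (1/fS)⋅xᵀxs; a small misalignment now produces only a small error, in contrast to the sampled case above:

import numpy as np

f_s, dt = 1.0, 1e-4
x  = np.array([5.0, 3.4, 1.5, 0.2])    # hypothetical test pattern, N = 4
xs = np.array([5.1, 3.5, 1.4, 0.2])    # hypothetical support vector
t = np.arange(0, len(x) / f_s, dt)

hold = lambda v: v[np.minimum((t * f_s).astype(int), len(v) - 1)]  # eqs (9a,9b)
integral = np.sum(hold(x) * hold(xs)) * dt     # left-hand side of eq. (10)
print(integral, (x @ xs) / f_s)                # both ~ 39.54 for f_s = 1

delayed = np.roll(hold(xs), int(0.01 / dt))    # crude model of a 10 ms delay
print(np.sum(hold(x) * delayed) * dt)          # ~ 39.5: the error stays small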
2) Another simple solution would be to generate the time-continuous signals x(t) and xs(t), s = 1, 2, ..., Ns, from their discrete samples by interpolation. The simplest procedure is linear interpolation. The advantage of such an approach in the generation of x(t) and xs(t) is that the signals will not have any local discontinuities. However, when x(t) and xs(t) are piecewise linear, their product on each time interval [t_k; t_{k+1}) will be a 2nd degree polynomial, thus its integral will no longer be proportional to the dot product xᵀxs by a constant factor. Therefore, better interpolation methods should be found to generate x(t) and xs(t). The investigation of such methods will make the object of our future work.

A SIMULINK MODEL OF THE TIME-CONTINUOUS SVM CLASSIFIER

The model of the time-continuous implementation of the classification phase using an SVM is shown in Figure 3. This is a general block diagram that can be used for any number of support vectors Ns. In this model, we describe each support vector by the pair (xs, ys). The support vectors, their corresponding Lagrange multipliers αs, s = 1, 2, ..., Ns, and the bias term of the hyperplane b have previously been determined in the training phase of the support vector machine. Our implementation of the classification phase is independent of the training method (i.e. software or hardware).

The signals entering the classification phase are:
• The support vectors, denoted as Support Vector s, s = 1, ..., Ns, each vector being described by its corresponding time-continuous signal xs(t) and its label ys (as a time constant).
• The Lagrange multipliers αs, denoted by alpha s, s = 1, 2, ..., Ns, as time constants.
• The pattern to be classified x, denoted by Xtest, described in the time-continuous fashion as the x(t) given in the previous section.
• The bias term b, denoted as Bias, as a time constant.

[Figure 3]
Fig. 3. Block diagram of the time-continuous SVM classifier

The number of computational blocks "Weighted sum s" equals the number of support vectors; each computes the weighted sum of the corresponding support vector and the Xtest vector. The resulting time-continuous signals from these blocks, WSigma s, are added together with the bias term, Bias, by the Sum block. The result is the signal f(Xtest), which gives the evolution of the decision function of the classifier evaluated in Xtest as a time-continuous signal. It is worth mentioning that the intermediate values of f(Xtest) show the effect of each individual feature on the decision function. This is a specific characteristic of our implementation as compared to other implementations. In any case, the final value of the decision function is read as the value of f(Xtest) at the moment N/fS, as described in the previous section. The final classification result, i.e. the Label, is provided by the Zero threshold comparator block, as +1 if f(Xtest) > 0 and −1 if f(Xtest) < 0.

To verify the operation of the designed analog SVM classifier, we implemented it in Simulink, under the Matlab environment. To keep the implementation simple, in order to observe the full behavior of the classifier, we chose a classifier with two support vectors. The detailed Simulink model is presented in Figure 4. As one can see, the Simulink model has two computation channels: the first one corresponds to the first support vector (upper part) and the second one corresponds to the second support vector (lower part).

First of all, we generate the continuous-time signals corresponding to the features of all the involved vectors:
- the Xtest signal for the test vector, using the block TestVect;
- the SV1 signal for the first support vector, using the block SupVect1;
- the SV2 signal for the second support vector, using the block SupVect2.
An example of these time-continuous signals can be seen in the Experimental results section, in Figure 5.
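The block structure of Figure 3 maps directly onto sequential code. The sketch below is our numerical analogue of the diagram (not the Simulink model itself, and all parameter values are hypothetical): one "Weighted sum" channel per support vector, followed by the Sum block and the zero-threshold comparator:

import numpy as np

f_s, dt = 1.0, 1e-3                  # 1 s per feature, as in our model
def hold(v, t):
    # Stepwise time-continuous signal per eqs (9a)-(9b).
    return v[np.minimum((t * f_s).astype(int), len(v) - 1)]

def classify(x, sv, alpha, y, bias):
    # Numerical analogue of Figure 3 for a linear SVM (f_s = 1).
    t = np.arange(0, len(x) / f_s, dt)
    x_t = hold(x, t)                            # block TestVect
    f = bias                                    # Bias enters the Sum block
    for xs, a, ys in zip(sv, alpha, y):         # one channel per support vector
        sigma = np.sum(x_t * hold(xs, t)) * dt  # multiplier + integrator
        f += a * ys * sigma                     # "Weighted sum s" output
    return f, (+1 if f > 0 else -1)             # Zero threshold comparator

# Hypothetical parameters (the actual trained values appear in Figs. 4-5):
sv = [np.array([5.1, 3.5, 1.4, 0.2]), np.array([6.2, 2.9, 4.3, 1.3])]
f_val, label = classify(np.array([5.0, 3.4, 1.5, 0.2]), sv,
                        alpha=[0.4, 0.4], y=[+1, -1], bias=1.0)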
[Figure 4]
Fig. 4. The Simulink model of the time-continuous SVM classifier

These signals are collected with Scope1, Scope2 and Scope in the Simulink model. In our model we considered a duration of 1 s for each feature of the input vector. The Product11 and Product21 blocks are responsible for the multiplication of the test vector (the signal Xtest) with the first support vector (the signal SV1) and with the second support vector (the signal SV2), thus producing the time-continuous signals Xtest*SV1 and Xtest*SV2. The integration of each product signal by the blocks Integrator1 and Integrator2 gives the cumulated sum corresponding to the dot product. The resulting signals are Sigma1 and Sigma2, which are further multiplied by the corresponding Lagrange multiplier and the support vector's label. The resulting weighted signals are WSigma1 and WSigma2. Finally, from these "partial" signals, the decision function f(Xtest) is obtained according to equation (1) applied for a linear SVM, using a summation block with three inputs: WSigma1, WSigma2 and Bias.

The signal f(Xtest) on the Scope represents the evolution in time of the decision function of the classifier. Although the classification result is given by the value of f(Xtest) only at the moment N/fS (in our case, t = 4 s), one can notice that the proposed implementation allows an even better observation of the SVM classification process, as follows. We can consider in our example four significant time moments: three intermediate and one final. At t = 1 s, the decision function contains only the classification result according to the first feature, and so on, up to t = 4 s, when we get the final value of the decision function we are interested in. Thus, in order to reduce the computation time, with a proper ordering of the features according to their significance, it becomes possible to take as the final value of the decision function the value at an intermediate time moment, after taking into consideration only the most significant features.
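This early-read idea can be sketched numerically: reading the running sum after the first k features amounts to truncating every dot product to its first k terms. A small illustration of ours, with hypothetical parameter values:

import numpy as np

def partial_decision(x, sv, alpha, y, bias, k):
    # f(Xtest) read at t = k/f_s: only the first k features integrated so far.
    return sum(a * ys * (x[:k] @ xs[:k])
               for xs, a, ys in zip(sv, alpha, y)) + bias

x = np.array([5.0, 3.4, 1.5, 0.2])     # hypothetical test pattern
sv = [np.array([5.1, 3.5, 1.4, 0.2]), np.array([6.2, 2.9, 4.3, 1.3])]
for k in (1, 2, 3, 4):                 # read-out moments t = 1, 2, 3, 4 s
    print(k, partial_decision(x, sv, [0.4, 0.4], [+1, -1], 1.0, k))
# If the sign stabilises at some t < N/f_s, the label could be read early.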
The label assignment to the unknown pattern Xtest is implemented by the Sign block in the Simulink model. This block is a simple comparator with a zero threshold. The value of the output signal of this block, Label, should be read at the moment t = 4 s, after all the vector's features have been processed.

The Simulink model can be very easily extended to any number of support vectors, by simply replicating the computation channels and adding the necessary inputs to the summation block.

Our implementation is also very useful from the didactical point of view. Because we have access to, and can observe, the signals at all the intermediate points of the computing flow, one can easily understand the operations involved in the classification phase of the SVM. Also, the verification of our future implementation ideas mentioned in the previous section is very easy with this model, due to its interactive mode of operation.

EXPERIMENTAL RESULTS

We tested our implementation on a standard data set for classification tasks, namely the IRIS data [8]. Each pattern in the data set is described by 4 features of an iris flower: the petal length and width and the sepal length and width. There are 3 classes of irises: Setosa, Versicolor and Virginica. The goal is to classify each individual pattern into one of the 3 classes. We consider in our experiment the binary classification of an unknown pattern into the class Setosa vs. the other 2 classes. For the SVM training (needed to obtain the support vectors, their Lagrange multipliers and the bias term) we used Steve Gunn's Matlab application [9]. The IRIS data set contains 75 training patterns. The training is performed for a linear SVM with the error penalty parameter C = 1. After the training phase we get 2 support vectors, whose features, labels and Lagrange multipliers are presented in Figures 4 and 5, along with the bias term of the resulting classifier.

As test vectors, we selected 12 patterns from the standard IRIS test set to be classified using two implementations: Steve Gunn's implementation (considered here as the reference classifier, and denoted SG) and our Simulink SVM classification implementation (denoted SCT). The values of the decision function in the 2 implementations, along with the "target" classification results, are presented in Table 1. Since the classification labels always match the target, we list only the target ones, denoted there simply as "Label".

As one can notice by examining the results in Table 1, the differences between the real values of the decision function computed by the SG implementation and by our SCT implementation (the error column in Table 1) can be considered zero, since their order of magnitude is 1e-7 for all the test vectors. So our SCT implementation always provides correct values, for various signs and magnitudes of the decision function.
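For readers who wish to reproduce a comparable setup today, an equivalent linear SVM can be trained in a few lines. This sketch uses scikit-learn as a modern stand-in for Gunn's Matlab toolbox (which is what we actually used); the resulting support vectors depend on the chosen training subset:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

iris = load_iris()
X = iris.data                             # 4 features per pattern
y = np.where(iris.target == 0, 1, -1)     # Setosa = +1 vs. the other 2 classes

clf = SVC(kernel="linear", C=1.0).fit(X, y)
print(clf.support_vectors_)               # the support vectors xs
print(clf.dual_coef_)                     # the products alpha_s * y_s
print(clf.intercept_)                     # the bias term b
print(clf.decision_function(X[:2]))       # f(x) for two sample patterns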
The computational details of each step of the algorithm can be seen in Figure 5. This figure illustrates the waveforms of all the input, intermediate and output signals during the classification of test vector 1 from Table 1. The signal Xtest is very similar to the signal SV1 but different from the signal SV2. This observation is important, as it provides qualitative information about the membership of the test vector in the same class as support vector 1. The time variation of the decision function f(Xtest) is also important because, after each time period, it offers information about the intermediate membership of the test vector when only the first features have been taken into consideration. According to the first feature (time moment t = 1 s), f(Xtest) = 0.364 > 0, label = +1, so the test vector belongs to the Setosa class. Also, considering the first two features (t = 2 s), three features (t = 3 s), or all four features (t = 4 s), we have f(Xtest) > 0, label = +1, so again the Setosa class.

TABLE 1. Classification results

Xtest   f(Xtest) SG      f(Xtest) SCT     error       Label
1.       0.8415397971     0.8415394766    -3.20e-07    1
2.       0.9282835252     0.9282832184    -3.06e-07    1
3.       0.7703558444     0.7703555818    -2.62e-07    1
4.       0.9946513779     0.9946510561    -3.22e-07    1
5.       1.2129277395     1.2129273884    -3.51e-07    1
6.       1.7282735109     1.7282731483    -3.63e-07    1
7.      -4.5566642408    -4.5566646840    -4.43e-07   -1
8.      -1.2689215012    -1.2689217966    -2.95e-07   -1
9.      -2.3058037663    -2.3058041356    -3.69e-07   -1
10.     -2.2193649549    -2.2193653424    -3.87e-07   -1
11.     -1.3893638414    -1.3893641758    -3.34e-07   -1
12.     -3.3223390851    -3.3223394769    -3.91e-07   -1

[Figure 5]
Fig. 5. Classification of test vector 1 using the SCT implementation

CONCLUSIONS

In this paper we proposed a new algorithm for the implementation of the classification phase of SVM classifiers. The aim is a good compromise between the duration of the classification phase and its complexity in an analog hardware implementation. The proposed algorithm is based on the description of the feature vector x and the support vectors xs of the SVM as analog signals x(t) and xs(t) respectively, which makes it possible to describe the kernel evaluation in a simple analog signal processing fashion. The equivalence of the standard formulation and the proposed time-continuous formulation of the SVM classification phase is proven by building a Simulink model and by a set of experiments on the IRIS classification set. The graphical illustration of the result provided by the time-continuous representation of the signals is also useful for understanding the SVM classification, and for further reducing the computational time, provided an ordering of the features based on their significance can be found. In our future work we will implement the proposed solution with simple analog circuits and search for interpolation methods to generate the continuous signals x(t) and xs(t).

THE AUTHORS

Gabriel Oltean and Mihaela Gordan are with the Basis of Electronics Department, Technical University of Cluj-Napoca, C. Daicoviciu 15, Cluj-Napoca, Romania. E-mail: [email protected]

REFERENCES

[1] I. Buciu, C. Kotropoulos, I. Pitas, "Combining support vector machines for accurate face detection", IEEE Int. Conf. on Image Processing, Thessaloniki, Greece, 2001, pp. 1054-1057
[2] V. P. Kumar, T. Poggio, "Learning-based approach to real time tracking and analysis of faces", Proc. of AFGR 2000, France, 2000, pp. 96-101
[3] V. N. Vapnik, Statistical Learning Theory, J. Wiley, N.Y., 1998
[4] T. Joachims, "Making large-scale SVM learning practical", in Advances in Kernel Methods - Support Vector Learning, B. Schoelkopf, C. Burges and A. Smola (eds.), MIT Press, 1999
[5] Ding Ai-ling, Liu Fang, Zhao Xiang-mo, "The massive data classifiers based on reduced set vectors method", IEEE 2002 Int. Conf. on Comm., Circuits and Syst., vol. 2, pp. 1239-1242
[6] N. Ancona, G. Cicirelli, E. Stella and A. Distante, "Object detection in images: complexity reduction and parameter selection", Proc. ICPR02, vol. 2, pp. 426-429
[7] R. Genov, S. Chakrabartty and G. Cauwenberghs, "Silicon Support Vector Machine with On-Line Learning", Int. J. of Pattern Recognition and Artificial Intelligence, World Scientific, 2003, pp. 385-404
[8] R. A. Fisher, "The Use of Multiple Measurements in Taxonomic Problems", Annals of Eugenics 7, pp. 179-188, 1936
[9] S. R. Gunn, MATLAB Support Vector Machine Toolbox (Internet), March 1998
