
ECCTD’01 - European Conference on Circuit Theory and Design, August 28-31, 2001, Espoo, Finland

Edge Detection by Using the Support Vector Machines


H. Gómez-Moreno*; S. Maldonado-Bascón*; F. López-Ferreras*; F.J. Acevedo-Rodríguez*;
P. Martín-Martín*
* Dpto. de Teoría de la Señal y Comunicaciones. Universidad de Alcalá. Crta. Madrid-Barcelona km. 33,600, D.P. 28871 Alcalá de Henares, Madrid (Spain). Email: [email protected]. Tfno: +34918856703

Abstract. In this paper a new method for edge detection based on the use of support vector machines is presented. The new method shows how support vector machines may be applied to image processing in a simple way and with only a few training examples. The results presented are comparable to those of classical edge detection methods. In addition, we show how the training parameters affect the edge detection and how they may be used to improve the overall performance of the support vector machines in this task.

1 Introduction

Edge detection methods are an important part of digital image processing, and a great number of papers have been dedicated to them. There are several ways to perform edge detection, but the most usual is to apply two-dimensional FIR filters (or masks) that approximate the derivative. The Sobel, Prewitt and Roberts filters are some of these approximations [1].

In this paper we present a new way to detect edges from a different point of view. We do not try to approximate the derivative or to use another mathematical method. The main idea of this work is to train the computer to recognize the presence of edges in an image. To do so, we use the support vector machine (SVM), a tool that has given good results in other classification problems [2][3].

The training is performed using a few images created by us that contain clearly defined, easily located edges. The results obtained are similar to those of previous methods, but they are reached from a different point of view and open new paths for improving edge detection.

2 SVM Classification

There are several ways to classify: Bayesian decision, neural networks or support vector machines, for example. In this work the SVM classifier is the best option, since only a small number of training examples is required. The SVM gives us a simple way to obtain good classification results with reduced knowledge of the problem. The principles of SVM were developed by Vapnik [4] and are well founded in statistical theory.

An explanation of the goal of SVM can be found in Fig. 1. In this example we have a number of vectors divided into two groups, and we must find the optimal decision frontier that separates the sets. The frontier chosen may be any one that divides the sets, but only one of them is the optimal choice: the one that maximizes the distance from the frontier to the data. Fig. 1 shows the two-dimensional case, where the frontier is a line; in a multidimensional space the frontier will be a hyperplane.

Figure 1. Hyperplane separation example.

Normally the data are not linearly separable and this scheme cannot be used directly. To avoid this problem, the SVM can map the input data into a high-dimensional feature space. The SVM constructs an optimal hyperplane in the high-dimensional space and then returns to the original space, transforming this hyperplane into a non-linear decision frontier. The choice of the non-linear mapping function, or kernel, is very important for the performance of the SVM.

It is possible that some data in the sets cannot be separated, and then the SVM can include a penalty term that makes the misclassification errors more or less important. This term and the kernel are the only parameters that must be chosen to obtain the SVM.

The SVM applied in this work uses the radial basis function to perform the mapping, since the best results have been obtained with this kernel. The function has the following expression:

K(x, y) = exp(−(x − y)² / (2σ²))    (1)
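For reference, equation (1) translates directly into code. The following is a minimal sketch in Python (an assumption here, since the authors' own implementation was written in C++ on top of LIBSVM); it also notes how the σ of this paper maps onto the gamma parameterization used by common SVM libraries:

    import numpy as np

    def rbf_kernel(x, y, sigma):
        """Radial basis function kernel of equation (1): exp(-||x - y||^2 / (2 * sigma^2))."""
        diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
        return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

    # Libraries such as LIBSVM and scikit-learn write the same kernel as exp(-gamma * ||x - y||^2),
    # so the sigma used in this paper corresponds to gamma = 1 / (2 * sigma**2).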

The σ parameter in (1) must be chosen to reflect the degree of generalization applied to the data. The more data available, the less generalization is needed in the SVM: a large σ gives more generalization and a small one gives less. Another parameter to be set is the cost of the misclassification errors due to non-separable data. The smaller this parameter is, the more important the misclassification errors become.

Figure 2. Training images (vertical edge 1, vertical edge 2, horizontal edge 1, horizontal edge 2).

3 Training

The first step in the training of our SVM is to define the decision that we want to perform. In this case, the decision needed is between "the pixel is part of an edge" and "the pixel is not part of an edge".

In order to obtain this decision we must extract the necessary information from the images. In this work a vector is formed for each pixel from the differences between that pixel and the pixels in a 3x3 neighborhood around it. In this way an eight-component vector is calculated at each pixel except on the border of the image, where the differences cannot be calculated. This vector is used as the input to the SVM, both in the training process and when we apply the trained SVM to real images.

The images used to train the SVM are shown in Fig. 2. They are images created by us in order to obtain a good model for the detection. The only edges used in the training are vertical and horizontal ones, and we expect the other edge orientations to be generalized by the SVM. The pixels considered as edges are those in each image that lie on the border between the bright and dark zones, i.e. the points in the dark zone next to the bright one and vice versa. The results obtained over real images show that edges are correctly generalized in directions other than vertical and horizontal.

The dark and bright zones are not homogeneous: the intensity at each pixel is a random (Gaussian) value, and the values in one zone never reach those of the other zone. By using these random values we avoid the false detection of an edge when there are small changes between pixel values within a zone that are not due to an edge. Several tests using images with a non-random background show that isolated points are easily and inaccurately considered as edges.

The random nature of the training images makes the SVM training itself random. For example, the number of support vectors may change every time we use different images, but the results obtained are very similar in all cases.

4 Results

The results presented here have been obtained using LIBSVM [5] as the SVM implementation. The programs used were written in C++ and compiled with the Visual C++ 6.0 compiler.

The decision function of the classifier returns a value for each pixel in the image. Ideally this value would be +1 or −1, but normally it is simply a positive or negative value.

We could use the sign of these values to decide whether a pixel is an edge or not, but in this way information is lost. It is better to use the values themselves and say that there is a gradual change between "no edge" and "edge"; the values then indicate the probability of being an edge or not.

Fig. 3 shows the original image on which the method presented here is tested. It is a 256x256 eight-bit gray-scale image.

The results presented are gray-scale images in which the values from the SVM have been translated to gray levels: 0 (black) represents edge detection and 255 (white) represents no edge detection (this translation improves the visualization by avoiding a black background in the figures). The intermediate values show a gradation between these extremes.

Fig. 4 shows a comparison between the method presented here and the classical Sobel operator applied in the horizontal and vertical directions. The SVM in this case is trained using a value of σ equal to 60 and a misclassification cost parameter (C) equal to 100.

It is easy to see in this figure that the performance of both methods is similar, by comparing the gray-scale images from each one.
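As a concrete illustration of the pixel-difference features of Section 3 and the training setup just described, the following is a minimal sketch. It assumes Python with NumPy and scikit-learn's SVC in place of the authors' C++/LIBSVM programs, and the synthetic training image, labelling rule and helper name are illustrative choices, not the authors' exact code:

    import numpy as np
    from sklearn.svm import SVC  # assumption: scikit-learn used instead of the paper's C++/LIBSVM setup

    def pixel_difference_features(image):
        """Eight-component difference vector of Section 3 for every interior pixel.

        Each vector holds the differences between a pixel and its 8 neighbours in a
        3x3 window; border pixels are skipped because the differences cannot be formed.
        """
        img = image.astype(np.float64)
        h, w = img.shape
        offsets = [(-1, -1), (-1, 0), (-1, 1),
                   (0, -1),           (0, 1),
                   (1, -1),  (1, 0),  (1, 1)]
        feats = []
        for r in range(1, h - 1):
            for c in range(1, w - 1):
                feats.append([img[r, c] - img[r + dr, c + dc] for dr, dc in offsets])
        return np.asarray(feats)

    # Hypothetical training image: a vertical edge with random (Gaussian) dark and bright zones, as in Fig. 2.
    rng = np.random.default_rng(0)
    train_img = np.hstack([rng.normal(60, 10, (32, 16)), rng.normal(190, 10, (32, 16))])
    X = pixel_difference_features(train_img)
    # Label as "edge" (+1) the pixels on either side of the dark/bright border (columns 15 and 16 here).
    labels = np.array([+1 if c in (15, 16) else -1
                       for r in range(1, 31) for c in range(1, 31)])

    sigma, C = 60.0, 100.0                                   # values quoted in Section 4
    clf = SVC(kernel="rbf", gamma=1.0 / (2 * sigma ** 2), C=C)
    clf.fit(X, labels)

    # Gray-scale edge map as in Section 4: strongest "edge" response -> 0 (black),
    # strongest "no edge" response -> 255 (white), keeping the gradual values instead of the sign.
    scores = clf.decision_function(X)
    gray = np.round(255 * (scores.max() - scores) / (scores.max() - scores.min())).astype(np.uint8)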

Figure 3. Original image of Saturn.

Figure 4. (a) Edge image with the SVM method (σ=60). (b) Edge image obtained with the Sobel operator.

Figure 5. (a) Edge image with the SVM method (σ=40). (b) Edge image with the SVM method (σ=20).
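The text below goes on to note that the sign of the decision value yields a binary edge map (Fig. 6) and that σ also affects the number of support vectors. A short continuation of the earlier sketch, reusing its hypothetical clf and X:

    import numpy as np

    # Binary decision: threshold the decision value at zero; 0 (black) marks a detected edge,
    # 255 (white) marks no edge, as in the binary image of Fig. 6.
    binary_map = np.where(clf.decision_function(X) >= 0, 0, 255).astype(np.uint8)

    # The number of support vectors depends on sigma and C; a large count slows classification,
    # which is why the text recommends keeping it small without losing detection performance.
    print("support vectors:", clf.support_vectors_.shape[0])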
By using the method proposed here we can obtain not just one edge image but several, by changing the σ parameter. The effect of changing σ on the resulting edge images can be seen in Fig. 5, where the edges become more marked as σ is reduced.

The effect of σ is seen not only in the resulting image but also in the number of support vectors. The number of support vectors indicates how many of the training vectors were significant in the separation problem, and it is an important quantity since a large number slows down the classification. It is therefore important to reduce the number of support vectors to improve the execution speed, but without losing edge detection performance. The cost parameter C also controls the number of support vectors, but we have used a cost equal to 100 in all tests, since the best results were obtained with it.

Figure 6. Binary decision image of Saturn.

With an appropriate σ, the SVM method may work as a binary decision by using the sign of the values obtained to decide between edge and no edge. In this way we obtain a binary image with black pixels marking the edge points. Fig. 6 shows an example with a value of σ equal to 20.

The binary image obtained depends strongly on σ and on the image used, and it is only useful after several tests with different values of σ. It is therefore better to use the gray-scale image to make the decision between edge and no edge.

5 Conclusions

The main conclusion is that the SVM may be used to perform edge detection; in this way we have a new tool for this task.

The results presented show that the proposed method has a parameter (σ) which controls the edge detection. We can therefore control the overall process by setting different parameters in the training task (not only σ but C as well).

The control obtained over this process gives the possibility of adapting the edge detection to a given type of image and obtaining better results.

This work shows that the SVM may be used in digital image processing and that investigation in this area is still open.

References

[1] A.K. Jain. "Fundamentals of Digital Image Processing". Prentice Hall, Englewood Cliffs, NJ, 1989.
[2] N. Cristianini and J. Shawe-Taylor. "An Introduction to Support Vector Machines and Other Kernel-Based Methods". Cambridge University Press, Cambridge, UK, 2000.
[3] H. Gómez, S. Maldonado, F. López, P. Martín, J.M. Villafranca, "Motion Detection Using Support Vector Machines", Proc. IASTED Int. Conf. on Signal Processing and Communications, Marbella, Spain, pp. 244-248, September 2000.
[4] V. Vapnik. "The Nature of Statistical Learning Theory". Springer-Verlag, New York, 1995.
[5] C.C. Chang and C.J. Lin. "LIBSVM: Introduction and benchmarks". http://www.csie.ntu.edu.tw/~cjlin/papers.
