Application of Learning Machine Methods to 3D Object Modeling
C. García and J.A. Moreno
1 Introduction
dimensional noisy data. The proposed approach to 3D object modeling takes ad-
vantage of the low-cost surface representation capacity inherent in these learning
algorithms.
In this paper we describe the application of the Support Vector Kernel
Method (SVM) [5], the Kohonen Self-Organizing Feature Maps [6] and the Grow-
ing Grid [7] machine learning algorithms to model objects from data sets that
contain information from one or more 3D objects. In general, the application of
the algorithms starts with a cloud of 3D data points and no a priori information
regarding the shape or topology of the object or objects in the scene. These
data points are fed to the learning algorithms, which, with simple modifications
to the learning rules, adaptively generate a surface adjusted to the data points.
In the case of the Kohonen Feature Map and the Growing Grid, a K-means
clustering algorithm, with as many prototype vectors as objects in the scene,
is initially applied to the data. Next, a spherical network is initialized in the
neighborhood of each prototype vector, in the interior of the clouds of points.
Finally the learning rule is applied and the networks deform and grow (Growing
Grid) until they reach stability at the surface of the clouds of points. In the
case of the Support Vector Kernel Method the data points are mapped to a high-
dimensional feature space, induced by a Gaussian kernel, where support vectors
are used to define a sphere enclosing them. The boundary of the sphere forms in
data space (3D) a set of closed surfaces containing the clouds of points. As the
width parameter of the Gaussian kernel is increased, these surfaces fit the data
more tightly and splitting of surfaces can occur allowing the modeling of several
objects in the scene. At the end of each of these processes we will have a model
for each object.
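To make this initialization concrete, the following sketch (not the authors' code) clusters the cloud with k-means and seeds a small spherical arrangement of units around each prototype, inside the cloud; the unit count, the radius r0 and the use of scikit-learn's KMeans are illustrative choices, and the edges of the spherical topology are omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans

def init_spherical_nets(points, n_objects, n_units=42, r0=0.1):
    """Cluster the cloud and seed one small spherical net per object.

    points   : (N, 3) array of 3D surface samples
    n_objects: number of objects assumed to be present in the scene
    returns  : list of (n_units, 3) arrays of unit positions, one net per prototype
    """
    centers = KMeans(n_clusters=n_objects, n_init=10).fit(points).cluster_centers_

    # Units spread on a small sphere (Fibonacci lattice) around each prototype,
    # well inside the cloud, so that the net can expand outward during learning.
    k = np.arange(n_units)
    phi = np.arccos(1.0 - 2.0 * (k + 0.5) / n_units)      # polar angle
    theta = np.pi * (1.0 + 5.0 ** 0.5) * k                # golden-angle azimuth
    sphere = r0 * np.stack([np.sin(phi) * np.cos(theta),
                            np.sin(phi) * np.sin(theta),
                            np.cos(phi)], axis=1)
    return [c + sphere for c in centers]
```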
The organization of the paper is as follows: in the second section, an overview
of the applied learning machine methods and their modifications for 3D object
modeling is given. In the third section experimental results are presented,
and finally in the fourth section the conclusions and further work are described.
The Karush-Kuhn-Tucker complementarity conditions are
$$\xi_i \mu_i = 0 \,. \qquad (7)$$
$$\left(R^2 + \xi_i - \|\Phi(x_i) - a\|^2\right)\beta_i = 0 \,. \qquad (8)$$
From these relations it is easy to verify that a point $x_i$ with $\xi_i > 0$ lies outside
the sphere in feature space; such points have $\mu_i = 0$ and $\beta_i = C$. A point with
$\xi_i = 0$ is inside or on the surface of the sphere in feature space; to be on the
surface it must have $\beta_i$ different from zero. Points with $0 < \beta_i < C$ will be
referred to as Support Vectors.
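Once the optimization described next has produced the coefficients $\beta_i$, these relations give a direct way to label the training points. A minimal sketch; the tolerance and the label names are illustrative choices:

```python
import numpy as np

def label_points(beta, C, tol=1e-7):
    """Label each training point from its multiplier beta_i (Eqs. 7-8).

    'outside' : beta_i = C       (bounded support vector, xi_i > 0)
    'surface' : 0 < beta_i < C   (support vector, lies on the sphere)
    'inside'  : beta_i = 0
    """
    labels = np.full(len(beta), 'inside', dtype=object)
    labels[beta >= C - tol] = 'outside'
    labels[(beta > tol) & (beta < C - tol)] = 'surface'
    return labels
```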
The above relations allow the derivation of the Wolfe dual of the Lagrangian:
$$W = \sum_i \beta_i\,\Phi(x_i)\cdot\Phi(x_i) \;-\; \sum_{i,j} \beta_i\beta_j\,\Phi(x_i)\cdot\Phi(x_j) \,, \qquad (9)$$
and the problem is solved by maximizing the dual. The dot products $\Phi(x_i)\cdot\Phi(x_j)$
can be conveniently replaced by a suitable Mercer kernel $K(x_i, x_j)$; in this way
the Wolfe dual can be rewritten as
$$W = \sum_i \beta_i K(x_i, x_i) \;-\; \sum_{i,j} \beta_i\beta_j K(x_i, x_j) \,. \qquad (10)$$
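A minimal sketch of this step, not the authors' implementation: it assumes the Gaussian kernel $K(x, y) = \exp(-q\|x - y\|^2)$ suggested by the width parameter q of Section 3, and the usual constraints of the enclosing-sphere formulation, $\sum_i \beta_i = 1$ and $0 \le \beta_i \le C$, which are not visible in this excerpt; a general-purpose SLSQP solver stands in for the SMO procedure of [12].

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(X, Y, q):
    """K(x, y) = exp(-q * ||x - y||^2) for all pairs of rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-q * d2)

def fit_enclosing_sphere(X, q=2.0, C=1.0):
    """Maximize the Wolfe dual of Eq. (10) over the multipliers beta."""
    n = len(X)
    K = gaussian_kernel(X, X, q)

    def neg_dual(beta):                      # minimize -W(beta)
        return -(beta @ np.diag(K) - beta @ K @ beta)

    res = minimize(neg_dual, np.full(n, 1.0 / n), method='SLSQP',
                   bounds=[(0.0, C)] * n,
                   constraints=[{'type': 'eq', 'fun': lambda b: b.sum() - 1.0}])
    return res.x, K
```

For clouds of the size used in Section 3 a dense general-purpose solver of this kind is slow; the SMO algorithm [12] is the usual choice in practice.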
In feature space the square of the distance of each point to the center of the
sphere is
$$R^2(x) = \|\Phi(x) - a\|^2 \,. \qquad (12)$$
The radius $R$ of the sphere is the feature-space distance from its center to any
support vector, and the boundary of the sphere is mapped back to data space as
the set of points
$$\{\, x \mid R(x) = R \,\} \,, \qquad (14)$$
which forms the closed surfaces that model the objects.
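Under the same assumptions, and additionally assuming the usual kernel expansion of the sphere center, $a = \sum_i \beta_i \Phi(x_i)$ (its derivation is part of the material not shown in this excerpt), $R^2(x)$ can be evaluated in data space and the modeled surface tested point by point. A minimal sketch:

```python
import numpy as np

def radius2(x, X, beta, q):
    """R^2(x) = K(x,x) - 2 sum_j beta_j K(x_j,x) + sum_ij beta_i beta_j K(x_i,x_j),
    with the Gaussian kernel, for which K(x, x) = 1."""
    k_x = np.exp(-q * ((X - x) ** 2).sum(axis=-1))             # K(x_j, x)
    K = np.exp(-q * ((X[:, None] - X[None, :]) ** 2).sum(-1))  # K(x_i, x_j)
    return 1.0 - 2.0 * beta @ k_x + beta @ K @ beta

def sphere_radius2(X, beta, q, C=1.0, tol=1e-7):
    """R^2 of the sphere, taken as R^2(x_k) averaged over the support vectors."""
    sv = (beta > tol) & (beta < C - tol)
    return float(np.mean([radius2(xk, X, beta, q) for xk in X[sv]]))

def on_model_surface(x, X, beta, q, C=1.0, eps=1e-3):
    """True if x lies (approximately) on the set {x | R(x) = R} of Eq. (14)."""
    return abs(radius2(x, X, beta, q) - sphere_radius2(X, beta, q, C)) < eps
```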
In the Kohonen Self-Organizing Feature Map the weight vectors of the network
units are updated according to
$$w_i(t+1) = w_i(t) + \varepsilon(t)\,\Phi_{ij}(t)\,\bigl[x(t) - w_i(t)\bigr] \,, \qquad (15)$$
where the learning rate $\varepsilon(t)$ is a linear time-decreasing function, $x(t)$ is the input
vector at time $t$, and $\Phi_{ij}(t)$ is the neighborhood function
$$\Phi_{ij}(t) = \exp\!\left(\frac{-|j-i|^2}{\sigma^2(t)}\right) . \qquad (16)$$
Here j is the position of the BMU in the topological network and i the
position of a unit in its neighborhood. The width parameter σ(t) is also a linear
time decreasing function. It can be noted that since the learning rate and the
width parameter both decrease in time the adjustments made on the weight
vectors become smaller as the training progresses. On a more abstract level, this
means that the map will become more stable in the later stages of the training.
It can be appreciated that the result of learning is that the weight vectors of
the units resemble the training vectors. In this way the Kohonen Feature Map
adapts to the distribution of the data and settles on the surface of the cloud of points.
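A sketch of one adaptation step built from these rules; the spherical topology is represented here by a precomputed matrix D of topological distances |j − i| between units (a representation choice of this sketch), while the default parameter values are the ones reported for the feature map in Section 3.

```python
import numpy as np

def som_step(W, x, t, T, D, eps0=0.5, epsf=0.0001, sig0=4.0, sigf=0.01):
    """One Kohonen adaptation step, Eqs. (15)-(16).

    W : (n_units, 3) weight vectors of the spherical network
    x : one 3D point drawn from the cloud at step t (of T total steps)
    D : (n_units, n_units) matrix of topological distances within the network
    """
    frac = t / float(T)
    eps = eps0 + (epsf - eps0) * frac          # linearly decreasing learning rate
    sig = sig0 + (sigf - sig0) * frac          # linearly decreasing width sigma(t)

    bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best matching unit j
    h = np.exp(-D[bmu] ** 2 / sig ** 2)           # neighborhood function, Eq. (16)
    W += eps * h[:, None] * (x - W)               # weight update, Eq. (15)
    return W
```

Iterating this step over random samples of the cloud drives the spherical net from the interior of the cloud toward its surface.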
The Growing Grid [7] is an enhancement of the feature map. The main difference is that
the initially constrained network topology grows during the learning process. The
initial architecture of the units is a constrained spherical network with a small
number of units. A series of adaptation steps, similar to the Kohonen learning
rule, are executed in order to update the weight vectors of the units and to gather
local error information at each unit. This error information is used to decide
where to insert new units. A new unit is always inserted by splitting the longest
edge connection emanating from the unit with maximum accumulated error.
In doing this, additional units and edges are inserted such that the topological
structure of the network is conserved.
The implemented process involves three different phases: an initial phase
where the number of units is held constant allowing the grid to stretch, a phase
where new units are inserted and the grid grows, and finally a fine tuning phase.
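A simplified sketch of the insertion step alone: it splits a single edge emanating from the worst unit and redistributes its accumulated error, but it does not reproduce the extra units and edges that the full algorithm [7] inserts to conserve the grid topology; the error-halving rule is an illustrative choice.

```python
import numpy as np

def insert_unit(W, edges, error):
    """Insert one unit on the longest edge of the unit with maximum error.

    W     : (n, 3) array of unit weight vectors
    edges : set of (i, j) index pairs describing the network connectivity
    error : (n,) accumulated local error per unit
    """
    q = int(np.argmax(error))                                  # worst unit
    nbrs = [j for i, j in edges if i == q] + [i for i, j in edges if j == q]
    f = max(nbrs, key=lambda j: np.linalg.norm(W[q] - W[j]))   # end of longest edge
    new_w = 0.5 * (W[q] + W[f])                                # midpoint of that edge
    r = len(W)                                                 # index of the new unit
    W = np.vstack([W, new_w])
    edges.discard((q, f)); edges.discard((f, q))
    edges.update({(q, r), (r, f)})                             # replace edge q-f by q-r-f
    error[q] *= 0.5; error[f] *= 0.5                           # redistribute the error
    error = np.append(error, 0.5 * (error[q] + error[f]))
    return W, edges, error
```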
3 Experimental Results
The learning algorithms for modeling are applied to several synthetic objects,
each represented as a cloud of 3000 points obtained from a 3D lemniscate with
4, 5 and 6 foci. The initial and final parameters used for the Kohonen Feature
Map were: ε0 = 0.5, εf = 0.0001, σ0 = 4, σf = 0.01, with a total of 10^5
iterations. For the Growing Grid: ε0 = 0.05, εf = 0.001, and a constant σ = 0.7;
the number of iterations is distributed as 10 × (number of units) in the stretching
phase, 200 × (number of units) in the growing phase, and 200 × (number of
units) for fine tuning. The parameters in the SVM algorithm are C = 1.0 and
q = 2 (Fig. 1 and Fig. 3), q = 4 (Fig. 2), q = 0.00083 (Fig. 4), and q = 3.33
(Fig. 5).
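The paper does not state how the lemniscate clouds were generated. One plausible reconstruction treats an n-foci 3D lemniscate as the implicit surface ∏_i ‖x − f_i‖ = c and rejection-samples a thin shell around it; the foci positions, the constant c, the shell thickness and the bounding box below are arbitrary choices of this sketch.

```python
import numpy as np

def lemniscate_cloud(foci, c, n_points=3000, tol=0.02, box=2.0, seed=0):
    """Sample n_points 3D points close to the surface prod_i ||x - f_i|| = c.

    foci : (m, 3) array of focus positions
    c    : level constant of the implicit surface
    tol  : relative half-thickness of the accepted shell around the surface
    """
    rng = np.random.default_rng(seed)
    pts = []
    while len(pts) < n_points:
        x = rng.uniform(-box, box, size=(4096, 3))                     # candidates
        d = np.linalg.norm(x[:, None, :] - foci[None, :, :], axis=-1)  # |x - f_i|
        keep = np.abs(d.prod(axis=1) - c) < tol * c                    # thin shell
        pts.extend(x[keep])
    return np.asarray(pts[:n_points])

# Example (illustrative foci and constant, not the paper's):
# cloud = lemniscate_cloud(np.array([[0.5, 0, 0], [-0.5, 0, 0],
#                                    [0, 0.5, 0], [0, -0.5, 0]]), c=0.1)
```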
In Fig. 1 the original surface (5 foci lemniscate) and the surface models
resulting from the application of the three learning methods (a) Kohonen Feature
Map (b) Growing Grid and (c) SVM algorithms are shown. The Kohonen Map
consisted of a spherical topological network with 182 units initialized in the
interior of the cloud of points. The Growing Grid was also a spherical network
with 6 initial units initialized in the interior of the cloud, the network was grown
up to 162 units. The SVM model was constructed with 49 support vectors. It can
be appreciated that the three algorithms achieve a reasonable modeling of the
original object.
Fig. 1. Results of the three machine learning methods in the modeling of a surface
from a solid generated by a 5 foci lemniscate. (a) Kohonen Feature Map (b) Growing
Grid and (c) SVM Kernel method
In Fig. 2 the original surface (6 foci lemniscate) and the surface models re-
sulting from the application of the three learning methods (a) Kohonen Feature
Map (b) Growing Grid and (c) SVM algorithms are shown. The Kohonen Map
consisted of a spherical topological network with 266 units initialized in the in-
terior of the cloud of points. The Growing Grid was also a spherical network with
6 initial units initialized in the interior of the cloud and grown up to 338 units.
The SVM model was constructed with 77 support vectors. It can be appreciated
that again in this experiment the three algorithms achieve a reasonable modeling
of the original object. The best results are produced by the Growing Grid and
SVM methods.
In Fig. 3 the original surface (4 foci lemniscate) and the surface models re-
sulting from the application of two learning methods, the Growing Grid (a, c)
and the SVM (b) algorithms, are shown. In these experiments the object is of partic-
ular interest since it consists of two parts joined by a point, an approximation
to a scene of two separate objects. For the experiment leading to Fig. 3(a), a
single Growing Grid initialized as a spherical network with 6 initial units in the
center of the cloud of points was used. The network was grown up to 134 units.
It is clear that the model produced in this case is not a good one. In order to
overcome this deficiency the data is initially clustered by a K-means algorithm
with two prototype vectors. Then two Growing Grids, similar to the one just described, are
initialized in the neighborhood of the prototype coordinates. A better model,
shown in Fig. 3(c), is then obtained. The SVM model was constructed with 38
support vectors. In this experiment both the Kohonen Feature Map and the sin-
gle Growing Grid do not produce good models of the surface. However, it can
be appreciated that the SVM method and the multiple Growing Grids achieve
reasonable models and are superior.
Fig. 2. Results of the three machine learning methods in the modeling of a surface
from a solid generated by a 6 foci lemniscate. (a) Kohonen Feature Map (b) Growing
Grid and (c) SVM Kernel method
Fig. 3. Results of two machine learning methods in the modeling of a surface from a
solid generated by a 4 foci lemniscate. (a) Growing Grid, (b) SVM Kernel method,
and (c) Double Growing Grid
In Fig. 4 the original surface (experimental data from the left ventricle of a
human heart echocardiogram) and the surface models resulting from the appli-
cation of the two learning methods (a) Growing Grid and (b) SVM algorithms
are shown. For this case the Growing Grid was a spherical network with 6 initial
units initialized in the interior of the cloud of points. The network was grown
up to 282 units. The SVM model was constructed with 46 support vectors.
Fig. 4. Results of two machine learning methods in the modeling of a surface from
experimental data (left ventricle of a human heart echocardiogram). (a) Growing Grid
and (b) SVM Kernel method
In Fig. 5 the original surface (4 foci disconnected lemniscate) and the surface
models resulting from the application of the two learning methods (a) Double
Growing Grid and (b) SVM algorithms are shown. In these experiments the
methods are applied to two separate objects in the scene. In the application
of the Growing Grid algorithm an initial clustering step with a two prototype
K-means algorithm is carried out. Then two Growing Grids initialized as spher-
ical networks with 6 units in the neighborhood of the prototype coordinates
are evolved. The grids are grown up to 240 units each. The SVM model was
constructed with 50 support vectors. It can be appreciated that both the SVM
method and the multiple Growing Grids achieve reasonably good models in this
multiple object case.
Fig. 5. Results of two machine learning methods in the modeling of a surface from a
solid generated by a 4 foci disconnected lemniscate (Two separate objects). (a) Double
Growing Grid and (b) SVM Kernel method
In Table 1 the average execution times for the three machine learning al-
gorithms, measured on a Pentium III 500 MHz processor, are shown. It can
be appreciated that the execution time for the Growing Grid method is an or-
der of magnitude greater than those of the other two methods. In any case the
computational costs of the algorithms are reasonably low.
Table 1. Average execution times for the three machine learning methods
4 Conclusions
This work compared the application of three machine learning algorithms in the
task of modeling 3D objects from a cloud of points that represents either one or
two disconnected objects. The experiments show that the Kohonen Feature Map
and the Growing Grid methods generate reasonable models for single objects
with smooth spheroidal surfaces. If the object possesses pronounced curvature
changes on its surface, the models produced by these methods are not very
good. A way to improve this result is to allow the number of units in the network
to increase, together with a systematic fine-tuning mechanism of the units, in
order to account for the abrupt changes on the surface. These modifications
are a theme of further work in the case of the Growing Grid algorithm.
On the other hand, the experimental results with the Support Vector Ker-
nel Method are very good. In the case of single smooth objects the algorithm
produces a sparse model (small number of support vectors) of the objects, a
very convenient result for computer graphics manipulations. This extends to the
case with two objects, in which the method is able to produce models with split
surfaces. A convenient modification of the SVM algorithm would be to include
better control of the number of support vectors needed in the model. This
possibility could counteract the rounding tendency observed in the SVM models and
allow the modeling of abrupt changes of the surface, as seen in the data of the
echocardiogram.
To model several objects, with the Kohonen and Growing Grid methods, a
good alternative is to evolve multiple networks conveniently initialized in the
interior of the clouds of points of each object. To this end, an initial K-means
clustering algorithm, with as many prototypes as objects in the scene, is ap-
plied to the data. The networks are then initialized in the neighborhood of the
coordinates of the prototypes.
The data sets considered for multiple objects are not a representation of
a real scene. In a real scene the clusters of data points will be connected by
background information (walls and floor). Future work will also include the
extension of the present algorithms to make them applicable to real scenes.
Finally it must be noted that in all cases the computational costs of the
algorithms are reasonably low, a fact that can lead to real-time implementations.
References
1. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: Active Contour Models. Int. J. of
Computer Vision 1 (1988) 321–331
2. Gibson, S.F.F, Mirtich, B.: A Survey of Deformable Modeling in Computer Graph-
ics. Mitsubishi Electric Research Laboratory Technical Report (1997)
3. Delingette, H.: Simplex Meshes: A General Representation for 3D Shape Recon-
struction. Technical Report 2214, INRIA, France (1994)
4. Delingette, H.: General Object Reconstruction based on Simplex Meshes. Technical
Report 3111, INRIA, France (1997)
5. Cristianini, N., Shawe-Taylor, J.: An Introduction to Support Vector Machines.
Cambridge University Press (2000)
6. Kohonen, T.: Self-Organization and Associative Memory. 3rd edn. Springer-Verlag,
Berlin (1989)
7. Fritzke, B.: Some Competitive Learning Methods. Institute for Neural Computa-
tion, Ruhr-Universität Bochum, Draft Report (1997)
8. Bro-Nielsen, B.: Active Nets and Cubes. Technical Report, Institute of Mathemat-
ical Modeling Technical University of Denmark (1994)
9. Metaxas, D.: Physics-based Deformable Models: Applications to Computer Vision,
Graphics, and Medical Imaging. Kluwer Academic, Boston (1997)
10. Yoshino, K., Kawashima, T., Aoki, Y.: Dynamic Reconfiguration of Active Net
Structure. Proc. Asian Conf. Computer Vision (1993) 159–162
11. Ben-Hur, A., Horn, D., Siegelmann, H. T., Vapnik, V.: A Support Vector Method
for Clustering. International Conference on Pattern Recognition (2000)
12. Platt, J.: Fast Training of Support Vector Machines Using Sequential Minimal
Optimization. http://www.research.microsoft.com/~jplatt