Fuzzy and Possibilistic Shell Clustering Algorithms and Their Application To Boundary Detection and Surface Approximation-Part I
Abstract— Traditionally, prototype-based fuzzy clustering algorithms such as the Fuzzy C Means (FCM) algorithm have been used to find "compact" or "filled" clusters. Recently, there have been attempts to generalize such algorithms to the case of hollow or "shell-like" clusters, i.e., clusters that lie in subspaces of feature space. The shell clustering approach provides a powerful means to solve the hitherto unsolved problem of simultaneously fitting multiple curves/surfaces to unsegmented, scattered and sparse data. In this paper, we present several fuzzy and possibilistic algorithms to detect linear and quadric shell clusters. We also introduce generalizations of these algorithms in which the prototypes represent sets of higher-order polynomial functions. The suggested algorithms provide a good trade-off between computational complexity and performance. Since the objective function used in these algorithms is the sum of squared distances, the clustering is sensitive to noise and outliers. We show that by using a possibilistic approach to clustering, one can make the proposed algorithms robust.

I. INTRODUCTION

than to the cluster centers. Coray seems to have been the first to suggest the use of this idea to find circular clusters [10]. More recently, Dave's Fuzzy C Shells (FCS) algorithm [13] and the Adaptive Fuzzy C-Shells (AFCS) algorithm [16] have proven to be successful in detecting circular and elliptical shapes. These algorithms are computationally rather intensive, since one needs to solve coupled nonlinear equations to update the shell parameters in every iteration [5]. They also assume that the number of clusters is known. A computationally simpler Fuzzy C Spherical Shells algorithm for clustering hyperspherical shells, and an unsupervised version to be used when the number of clusters is unknown, have also been introduced [36]. Extensions to more general quadric shapes have also been proposed [16], [31], [32]. One problem with the proposed extensions is that they use a highly nonlinear algebraic distance, which results in unsatisfactory performance when the data are scattered [16], [32]. Finally, none of the above shell clustering algorithms can deal with situations in
30 IEEE TRANSACTIONS ON FUZZY SYSTEMS, VOL. 3, NO. 1, FEBRUARY 1995
functions. In Section IX, we describe a possibilistic approach to clustering, which has the advantage that the partition and the prototype estimates are much less sensitive to noise when compared with the fuzzy approach.

II. PROTOTYPE-BASED FUZZY CLUSTERING

Let X = {x_j | j = 1, ..., N} be a set of feature vectors in an n-dimensional feature space with coordinate-axis labels [x_1, x_2, ..., x_n], where x_j = [x_{j1}, x_{j2}, ..., x_{jn}]^T. Let B = (β_1, ..., β_C) represent a C-tuple of prototypes, each of which characterizes one of the C clusters. Each β_i consists of a set of parameters. In the following, we use β_i to denote both cluster i and its prototype. Let u_{ij} represent the grade of membership of feature point x_j in cluster β_i. The C × N matrix U = [u_{ij}] is called a constrained fuzzy C-partition matrix if it satisfies the following conditions [4], [25]:

$$u_{ij} \in [0,1] \text{ for all } i, j; \quad 0 < \sum_{j=1}^{N} u_{ij} < N \text{ for all } i; \quad \text{and} \quad \sum_{i=1}^{C} u_{ij} = 1 \text{ for all } j. \qquad (1)$$

The problem of fuzzily partitioning the feature vectors into C clusters can be formulated as the minimization of an objective function J(B, U; X) of the form

$$J(B, U; X) = \sum_{i=1}^{C} \sum_{j=1}^{N} (u_{ij})^m \, d^2(x_j, \beta_i) \qquad (2)$$

where m ∈ [1, ∞) is the fuzzifier and d²(x_j, β_i) is the distance of feature point x_j from prototype β_i. Minimizing (2) subject to the constraint in (1) gives us [4]

$$u_{ij} = \left[\sum_{k=1}^{C} \left(\frac{d^2(x_j, \beta_i)}{d^2(x_j, \beta_k)}\right)^{1/(m-1)}\right]^{-1}. \qquad (3)$$

The Gustafson-Kessel (G-K) algorithm [23] uses the distance measure d²(x_j, c_i) = |C_i|^{1/n} (x_j − c_i)^T C_i^{-1} (x_j − c_i). The centers are updated as above, and the covariance matrices are updated by

$$C_i = \frac{\sum_{j=1}^{N} (u_{ij})^m (x_j - c_i)(x_j - c_i)^T}{\sum_{j=1}^{N} (u_{ij})^m}.$$

The G-K algorithm and the unsupervised fuzzy partition-optimum number of clusters algorithm due to Gath and Geva [20] assume that the clusters are compact ellipsoids and allow each cluster to have a different size and orientation. They can also be used to detect linear clusters in 2-D and planar clusters in 3-D, since these are extreme cases of ellipsoids [30].

The general form of prototype-based clustering algorithms is given below.

PROTOTYPE-BASED FUZZY CLUSTERING
  Fix the number of clusters C; fix m, m ∈ [1, ∞);
  Initialize the fuzzy C-partition U;
  REPEAT
    Update the parameters of each cluster prototype;
    Update the partition matrix U by using (3);
  UNTIL (‖ΔU‖ < ε);

Ties are broken arbitrarily. In practice, the hard versions do not perform as well as their fuzzy counterparts. Due to space limitations, we do not deal with the hard algorithms in this paper.
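The generic loop above can be instantiated with point prototypes and the Euclidean distance, which gives the classical FCM. The following is a minimal sketch; the function name and the random initialization scheme are our own, not from the paper:

```python
import numpy as np

def fuzzy_c_means(X, C, m=2.0, eps=1e-5, max_iter=100, seed=0):
    """Prototype-based fuzzy clustering with point prototypes (classical FCM).

    X: (N, n) array of feature vectors; C: number of clusters;
    m: fuzzifier in [1, inf). Returns (centers, U), where U is a C x N
    constrained fuzzy C-partition (columns sum to one).
    """
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    U = rng.random((C, N))
    U /= U.sum(axis=0)                      # enforce constraint (1)
    for _ in range(max_iter):
        W = U ** m
        # Prototype update: fuzzily weighted means of the data.
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        # Squared Euclidean distances d^2(x_j, beta_i), shape (C, N).
        d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)          # guard points sitting on a center
        # Membership update (3): u_ij ~ d_ij^(-2/(m-1)), normalized over i.
        U_new = d2 ** (-1.0 / (m - 1.0))
        U_new /= U_new.sum(axis=0)
        if np.abs(U_new - U).max() < eps:   # UNTIL ||dU|| < eps
            U = U_new
            break
        U = U_new
    return centers, U
```

Shell algorithms in later sections replace only the prototype update and the distance; the REPEAT/UNTIL skeleton is unchanged.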
The algebraic distance is biased towards smaller curves (surfaces), and for a particular curve (surface) it is biased towards points inside the curve (surface) as opposed to points outside. Thus, the distance measure gives rather eccentric and highly curved fits if the data are scattered, as the prototypes try to enclose more points inside the curve. The distance is also sensitive to the placement of the feature point with respect to the curve. Consider, for example, two ellipses E1 and E2 as shown in Fig. 2. Ellipse E1 is centered at (0, 0), and its major and minor semi-axes have the values 2 and 1. Ellipse E2 is centered at (5, 0), and its major and minor semi-axes have the values 1 and 1/2. Consider points A = (3, 0), B = (1, 0), and C = (0, 2). Note that all three points are equidistant from E1 in the Euclidean sense, and point A is equidistant from both E1 and E2. Keeping in mind constraint iv) in (10), we can write the expressions for the algebraic distances of a point x = (x, y) from E1 and E2 as

$$d^2(x, E_1) = \frac{16}{17}\left\{\frac{x^2}{4} + y^2 - 1\right\}^2 \quad \text{and} \quad d^2(x, E_2) = \frac{1}{17}\left\{(x-5)^2 + 4y^2 - 1\right\}^2.$$

Thus, we see that d²(A, E1) = 25/17, d²(B, E1) = 9/17, d²(C, E1) = 144/17, and d²(A, E2) = 9/17. This clearly illustrates the bias of this distance measure as discussed above. Another problem with the nonlinear distance is that it makes the fit of each curve rather sensitive to the presence of other curves, which sometimes leads to unstable hyperbolic fits. An example is shown in the next section.

Here we would like to note that although the distance used in the AFCS algorithm [16] for elliptical clusters is not biased towards points inside the curve, it is still sensitive to the size of the curve as well as the placement of the feature point with respect to the curve. For example, the expressions for the AFCS distance of a point x = (x, y) from E1 and E2 can be written as

$$d^2(x, E_1) = \left\{\left[\frac{x^2}{4} + y^2\right]^{1/2} - 1\right\}^2 \quad \text{and} \quad d^2(x, E_2) = \left\{\left[(x-5)^2 + 4y^2\right]^{1/2} - 1\right\}^2.$$

Thus, we see that d²(A, E1) = 1/4, d²(B, E1) = 1/4, d²(C, E1) = 1, and d²(A, E2) = 1. In the next sections we introduce modifications to the basic FCQS algorithm to mitigate this problem, keeping in mind that one needs to keep the computational complexity as low as possible.

Fig. 2. An example to illustrate the sensitivity of the algebraic distance to the size of the curve as well as the location of the feature point with respect to the curve. Points A, B, and C are all geometrically equidistant from the bigger ellipse, and point A is equidistant from both ellipses. However, the algebraic distance does not reflect this.

IV. MODIFICATIONS TO THE FUZZY C QUADRIC SHELLS ALGORITHM

One possible way to alleviate the problem due to the nongeometric nature of d²_{Qij} is to use the geometric (perpendicular) distance, denoted by d²_{Gij}, between the point x_j and the shell β_i. To compute d²_{Gij}, we first rewrite (7) as

$$x^T A_i x + x^T b_i + c_i = 0.$$

Then the distance d²_{Gij} can be obtained by minimizing ‖x_j − z‖² subject to

$$z^T A_i z + z^T b_i + c_i = 0 \qquad (12)$$

where z is a point on the quadric β_i. By using a Lagrange multiplier λ, the solution is found to be

$$z = \frac{1}{2}\,(I - \lambda A_i)^{-1}(\lambda b_i + 2 x_j). \qquad (13)$$

Substituting (13) in (12) yields a quartic (fourth-degree) equation in λ in the 2-D case (see Appendix A for details), which has at most four real roots λ_k, k = 1, ..., 4. The four roots can be computed using the standard closed-form solution. For higher dimensions, the equation is of sixth degree or higher, and iterative root-finding techniques need to be used. For each real root λ_k so computed, we calculate the corresponding point z_k using (13). Then, we compute d²_{Gij} using

$$d^2_{Gij} = \min_k \|x_j - z_k\|^2. \qquad (14)$$

Minimization of the objective function in (2) with respect to β_i when d²_{Gij} is used as the underlying distance measure can be achieved only by using iterative techniques such as the Levenberg-Marquardt algorithm [39], [46]. To overcome this problem, we may assume that we can obtain approximately the same values for β_i by using (11), which will be true if all the feature points lie reasonably close to the hyperquadric shells. This leads to a modified FCQS algorithm, in which the memberships are computed using d²_{Gij}, but the parameters are updated using d²_{Qij}. An alternative is to use the Levenberg-Marquardt algorithm after initializing it with the solution obtained by (11) in each iteration. This is implementationally and computationally more complex, however, and is recommended only for small data sets. We have observed that this can increase the CPU time by an order of magnitude, although the overall number of iterations required for the FCQS algorithm to converge is somewhat lower. Moreover, our simulations indicate that the performance of the modified FCQS algorithm is adequate for most computer vision applications. The initialization procedure recommended for the FCQS algorithm can also be used for the modified version with good results.

Fig. 3(a) shows a data set with three curves for which the original version of the FCQS algorithm fails. It can be seen that due to the presence of other curves, the fit for the circle becomes distorted, resulting in a hyperbola. Fig. 3(b) shows the result of the modified FCQS algorithm, illustrating the advantage of the more meaningful membership assignments in the modified version. It is to be noted that in the modified algorithm, although the memberships are based on geometric distances, the parameters are still estimated by minimizing the algebraic distance. This may give poor fits when the data is highly scattered. An example of this behavior is shown in Section VI. Another problem with the modified FCQS algorithm is that d²_{Gij} has a closed-form solution only in the 2-D case. In higher dimensions, solving for d²_{Gij} is not trivial. Henceforth we will simply use the acronym FCQS to denote the modified version.
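The bias of the algebraic distance, and the exact distance of (12)-(14), can be checked numerically. The sketch below is our own illustration: it assumes that the normalization in constraint iv) of (10) makes the sum of squared second-degree coefficients equal to one (which reproduces the 16/17 and 1/17 factors for E1 and E2), and it recovers the quartic in λ by exact polynomial interpolation rather than by expanding its coefficients symbolically. Degenerate configurations where the nearest point corresponds to a pole of (13) are not handled.

```python
import numpy as np

def algebraic_dist2(A, b, c, x):
    """Normalized algebraic distance to the conic x^T A x + b^T x + c = 0.
    Assumes the normalization divides by the sum of squared second-degree
    coefficients (this reproduces the 16/17 and 1/17 factors in the text)."""
    r = x @ A @ x + b @ x + c
    return r * r / np.sum(A ** 2)

def geometric_dist2(A, b, c, x):
    """Exact squared geometric distance in 2-D via (13):
    z(l) = 0.5 (I - l A)^{-1} (l b + 2 x). Substituting z(l) into the conic
    and clearing denominators gives a quartic in l, cf. (12)-(14)."""
    I = np.eye(2)

    def z_of(l):
        return 0.5 * np.linalg.solve(I - l * A, l * b + 2.0 * x)

    def cleared(l):  # conic residual times det(I - l A)^2: a quartic in l
        z = z_of(l)
        return (z @ A @ z + b @ z + c) * np.linalg.det(I - l * A) ** 2

    scale = 0.1 / np.max(np.abs(np.linalg.eigvalsh(A)))   # samples off the poles
    ls = scale * np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
    coeffs = np.polyfit(ls, [cleared(l) for l in ls], 4)  # exact: it IS a quartic
    roots = np.roots(coeffs)
    good = [float(l.real) for l in roots
            if abs(l.imag) < 1e-8 and abs(np.linalg.det(I - l.real * A)) > 1e-6]
    return min(np.sum((x - z_of(l)) ** 2) for l in good)  # eq. (14)

# Fig. 2 setup: E1 is x^2/4 + y^2 - 1 = 0, E2 is (x-5)^2 + 4y^2 - 1 = 0.
E1 = (np.diag([0.25, 1.0]), np.zeros(2), -1.0)
E2 = (np.diag([1.0, 4.0]), np.array([-10.0, 0.0]), 24.0)
```

With these definitions, the algebraic distances of A = (3, 0), B = (1, 0), and C = (0, 2) from E1 come out as 25/17, 9/17, and 144/17, while the geometric distance of both A and C from E1 is 1, reproducing the bias discussed above.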
KRISHNAPURAM et al.: SHELL CLUSTERING ALGORITHMS-PART I 33
Fig. 4. Examples illustrating the tendency of the FCQS algorithm to fit pathological prototypes to scattered linear data. (a) A "flat" hyperbolic fit for two parallel lines. (b) An extremely elongated elliptical fit for two parallel lines.

    Initialize the new linear prototypes as the two tangents
    to the hyperbola at its two vertices;
  END IF;
END FOR;
Run the G-K algorithm on the data set X with C clusters,
using the initialization for the prototype of each cluster.
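The last step above reads the line parameters off each resulting cluster's center and covariance matrix. A minimal sketch of that step (our own illustration, using a crisp covariance in place of the fuzzy one): the center is a point on the line, and the principal eigenvector of the covariance matrix gives the line's direction.

```python
import numpy as np

def line_from_cluster(points):
    """Recover a 2-D line from the points of a (nearly) linear cluster:
    the cluster center lies on the line, and the principal eigenvector
    of the covariance matrix is the line's direction."""
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)
    cov = np.cov(points.T)
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    direction = vecs[:, -1]               # principal axis
    return center, direction / np.linalg.norm(direction)

def dist_to_line(p, center, direction):
    """Perpendicular distance of point p to the recovered line."""
    r = np.asarray(p, dtype=float) - center
    return np.abs(r[0] * direction[1] - r[1] * direction[0])
```

For points sampled from y = 2x + 1, the recovered direction is parallel to (1, 2), and on-line points have zero perpendicular distance.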
V. LINE DETECTION USING THE FCQS ALGORITHM

The FCQS algorithm can be used to find linear clusters, even though the constraint forces all prototypes to be of second degree. This is because in practice this algorithm fits a pair of coincident lines for a single line, a hyperbola for two intersecting lines, and a very "flat" hyperbola (see Fig. 4(a)), an elongated ellipse (see Fig. 4(b)), or a pair of lines for two parallel lines. Hyperbolas and extremely elongated ellipses occur rarely in practice. When the data set contains many linear clusters, the FCQS algorithm characterizes them variously as hyperbolas, extremely elongated ellipses, etc. In this case, we can group all the points belonging to such pathological clusters into a data set and then run a line-finding algorithm such as the G-K algorithm (see [30]) on this data set with an appropriate initialization. The parameters of the lines can be determined from the centers and the covariance matrices of the clusters.

Appendix B summarizes the various conditions that one needs to check to determine the nature of the second-degree curve. The initialization procedures for the various cases in the line detection algorithm are described in Appendix C. Since the initialization is excellent, the G-K algorithm converges in a couple of iterations. The above algorithm successfully handles the pathological cases shown in Fig. 4.

VI. THE FUZZY C PLANO-QUADRIC SHELLS (FCPQS) ALGORITHM

When the exact distance is too complex to compute, one could use what is known as the "approximate distance" (first-order approximation of the exact distance) given by [24], [46]

$$d^2(x_j, \beta_i) = \frac{d^2_{Qij}}{\|\nabla d_{Qij}\|^2} = \frac{(p_i^T q_j)^2}{p_i^T D(q_j) D(q_j)^T p_i}.$$

The resulting objective function (16) is minimized subject to the constraint

$$p_i^T \left[\sum_{j=1}^{N} (u_{ij})^m D(q_j) D(q_j)^T\right] p_i = N_i \quad \text{for } i = 1, \dots, C, \qquad (18)$$

or

$$p_i^T D_i p_i = N_i \quad \text{for } i = 1, \dots, C, \qquad (19)$$

where

$$D_i = \sum_{j=1}^{N} (u_{ij})^m \left[D(q_j) D(q_j)^T\right]. \qquad (20)$$

Minimization of (16) with respect to p_i subject to (19) yields a complicated equation which cannot be solved for p_i explicitly. To avoid an iterative solution, we make the following assumptions:

i) All data points are reasonably close to some cluster, i.e., the u_{ij} are close to being hard. This assumption is valid when the data is not very scattered. (It is reasonable for the noisy case if we use possibilistic memberships, as will be discussed in Section IX.)

ii) The magnitude of the gradient at all points x_j that have a high membership in β_i is approximately constant, i.e., |∇d_{Qij}|² = p_i^T D(q_j) D(q_j)^T p_i ≈ 1.

We now discuss the second assumption for the special case of hyperspheres. Hyperspheres are described by

$$p_i^T q = [p_{i1}, p_{i2}, \dots, p_{i(n+1)}, p_{i(n+2)}]\,[(x_1^2 + x_2^2 + \dots + x_n^2),\, x_1, \dots, x_n,\, 1]^T = 0. \qquad (21)$$

Here p_i represents a prototype parameter vector. Let x_j = [x_{1j}, ..., x_{nj}]^T denote a feature point, and let q_j = [(x_{1j}² + x_{2j}² + ... + x_{nj}²), x_{1j}, ..., x_{nj}, 1]^T. Then we have

$$|\nabla d_{Qij}|^2 = (2p_{i1}x_{j1} + p_{i2})^2 + \dots + (2p_{i1}x_{jn} + p_{i(n+1)})^2 = 4p_{i1}\left(p_{i1}x_{j1}^2 + \dots + p_{i1}x_{jn}^2 + p_{i2}x_{j1} + \dots + p_{i(n+1)}x_{jn}\right) + \left(p_{i2}^2 + \dots + p_{i(n+1)}^2\right).$$

Under these assumptions, the objective function reduces to

$$J = \sum_{i=1}^{C} p_i^T E_i p_i$$

where N_i is as in (3), and E_i = Σ_{j=1}^{N} (u_{ij})^m M_j. The above objective function is essentially the same as the one used in the FCQS algorithm; however, the constraint used is different. Minimization of the simplified objective function subject to the constraint in (19) leads to

$$E_i p_i = \lambda_i D_i p_i. \qquad (22)$$

It is easily verified that D(q_j) has the block form

$$D(q_j) = \begin{bmatrix} A_1 \\ A_2 \\ A_3 \end{bmatrix}, \quad \text{where } A_1 = \begin{bmatrix} 2x_{j1} & 0 & \cdots & 0 \\ 0 & 2x_{j2} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 2x_{jn} \end{bmatrix},$$

A_2 contains the derivatives of the cross-product terms (its row corresponding to x_{jk}x_{jl} has x_{jl} in column k and x_{jk} in column l), and A_3 stacks the n × n identity matrix (for the linear terms) over a row of zeros (for the constant term). Since the last row of D(q_j) is always equal to [0 0 ... 0], D_i is singular, and the above generalized eigenvector problem cannot be converted to a regular eigenvector problem; however, we may solve it using the QZ algorithm [46]. Care must be exercised while solving (22), because the matrices D_i and E_i are highly unbalanced, and several methods for balancing matrices may be applied.
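For hyperspherical shells in 2-D (circles), the generalized eigenvector problem (22) can be solved directly; `scipy.linalg.eig` with two matrix arguments uses the QZ algorithm and tolerates the singular D_i. The sketch below is our own illustration with hard memberships (u_{ij} = 1); SciPy is assumed to be available.

```python
import numpy as np
from scipy.linalg import eig  # eig(E, D) solves E p = lambda D p via QZ

def fit_circle_shell(points):
    """Fit p1 (x^2 + y^2) + p2 x + p3 y + p4 = 0 to 2-D points by solving
    the generalized eigenproblem E p = lambda D p (hard memberships),
    in the spirit of the FCPQS formulation."""
    E = np.zeros((4, 4))
    D = np.zeros((4, 4))
    for x, y in points:
        q = np.array([x * x + y * y, x, y, 1.0])
        J = np.array([[2 * x, 2 * y],      # Jacobian D(q) of q wrt (x, y);
                      [1.0, 0.0],          # its last row is all zeros, so D
                      [0.0, 1.0],          # is singular and QZ is needed.
                      [0.0, 0.0]])
        E += np.outer(q, q)
        D += J @ J.T
    w, V = eig(E, D)
    finite = np.isfinite(w.real) & np.isfinite(w.imag) & (np.abs(w.imag) < 1e-8)
    idx = np.where(finite)[0][np.argmin(w.real[finite])]  # smallest finite eigenvalue
    p = V[:, idx].real
    center = np.array([-p[1], -p[2]]) / (2.0 * p[0])
    radius = np.sqrt(center @ center - p[3] / p[0])
    return center, radius
```

For noise-free points on a circle, the smallest finite generalized eigenvalue is zero and the corresponding eigenvector recovers the circle parameters up to scale.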
Fig. 6. An example illustrating the advantage of the FCPQS algorithm over the FCQS algorithm for scattered data. (a) Prototype found by the FCQS algorithm for a scattered ellipse. (b) Prototype found by the FCPQS algorithm for the same scattered ellipse.

Fig. 7. Effect of the reweight procedure on the FCQS algorithm. The prototype for two intersecting lines found by the FCQS algorithm (a) without reweight and (b) with reweight.
VIII. GENERALIZATION TO PROTOTYPES REPRESENTED BY SETS OF HIGHER-ORDER POLYNOMIAL FUNCTIONS

Consider the set of zeros of f(x) defined by

$$f_1(x) = 0;\; f_2(x) = 0;\; \dots;\; f_k(x) = 0. \qquad (25)$$

Here x^T = [x_1, x_2, ..., x_n] is an n-dimensional coordinate vector, as before. It is to be noted that f(x) is a vector consisting of k functions, all of which have to be simultaneously satisfied. Thus, if n = 2 and k = 1, we have a planar curve; if n = 3 and k = 1, we have a 3-D surface; and if n = 3 and k = 2, we have a space curve. For example, a circle of radius 1 on the xy-plane in 3-D is defined by the equations x² + y² − 1 = 0 and z = 0. Let

$$F(x) = [F_1(x), F_2(x), \dots, F_h(x)]^T$$

be a vector of h polynomials. Each of the F_r(x) is a polynomial of degree p or less in each of the coordinate-axis labels x_1, x_2, ..., x_n. For example, when n = 2, k = 1, and p = 2, F(x) can be chosen to be q, where q^T = [x_1², x_2², ..., x_n², x_1x_2, ..., x_{n−1}x_n, x_1, x_2, ..., x_n, 1] as in (8), in which case each element of F(x) is a monomial. For a suitable choice of F(x), we can write (25) as

$$f(x) = P F(x) = 0$$

where P is a k × h matrix of parameters consisting of the coefficients of the functions f_i(x). Thus P represents the prototype parameters, where each prototype is a set of functions f(x). We can construct an objective function based on C such prototypes as

$$J(\Pi, U; X) = \sum_{i=1}^{C} \sum_{j=1}^{N} (u_{ij})^m \|P_i F(x_j)\|^2 \qquad (26)$$

where Π = (P_1, ..., P_C) represents a C-tuple of prototypes. Here ‖P_i F(x_j)‖² represents the algebraic distance from x_j to prototype P_i. The above objective function can be written as

$$J(\Pi, U; X) = \sum_{i=1}^{C} \sum_{j=1}^{N} (u_{ij})^m \mathrm{Tr}\left[P_i F(x_j) F^T(x_j) P_i^T\right] = \sum_{i=1}^{C} \sum_{j=1}^{N} (u_{ij})^m \mathrm{Tr}\left[P_i M_j P_i^T\right] = \sum_{i=1}^{C} \mathrm{Tr}\left[P_i E_i P_i^T\right]$$

where

$$M_j = F(x_j) F^T(x_j), \quad \text{and} \quad E_i = \sum_{j=1}^{N} (u_{ij})^m M_j.$$

The above objective function can be minimized subject to the constraint

$$P_i D_i P_i^T = N_i I_k \qquad (27)$$

where I_k is a k × k identity matrix, and

$$D_i = \sum_{j=1}^{N} (u_{ij})^m \left[D(x_j) D(x_j)^T\right].$$

D(x_j) is the Jacobian matrix D(x) evaluated at x_j, given by

$$D(x) = \begin{bmatrix} \partial F_1(x)/\partial x_1 & \cdots & \partial F_1(x)/\partial x_n \\ \vdots & & \vdots \\ \partial F_h(x)/\partial x_1 & \cdots & \partial F_h(x)/\partial x_n \end{bmatrix}.$$

It can be shown [46] that the solution to the minimization of (26) subject to (27) is given by the eigenvectors corresponding to the least k eigenvalues of the generalized eigenvector problem

$$p_i E_i = \lambda_i p_i D_i.$$

Each of the (row) eigenvector solutions p_i gives us one row of P_i. As in the case of the FCPQS algorithm, the use of the constraint in (27) gives us fits that correspond to the approximate distance, even though the objective function uses the algebraic distance, especially when reweighting is used. It is to be noted that in this general case, each prototype is represented by a set of higher-order polynomials. In this sense, the above algorithm is also a generalization of the Fuzzy C Regression Models (FCRM) recently introduced by Hathaway and Bezdek [25]. The FCRM considers two-dimensional prototypes of the form y = f(x), where f(x) is a polynomial. In other words, y is explicitly considered as a dependent variable, and higher powers of y do not appear in this model. The FCRM uses the algebraic distance and the implicit constraint that the coefficient of y is one.

IX. POSSIBILISTIC MEMBERSHIPS FOR ROBUST CLUSTERING

Fuzzy clustering algorithms do not always estimate the prototype parameters of the clusters accurately. The main source of this problem is the probabilistic constraint used in fuzzy clustering, which states that the memberships of a data point across all clusters must sum to one [see (1)]. This problem has been discussed in detail in [34]. Here we present two simple examples to illustrate its drawbacks as related to shell clustering. Fig. 8(a) shows a situation where there are two linear clusters. Fuzzy clustering would produce very different (asymmetric) memberships in cluster 1 for points A and B, even though they are equidistant from the prototype. Similarly, point A and point C may have equal membership values in cluster 1, even though point C is far less typical of cluster 1 than point A. The resulting fit for the left cluster would thus be skewed. Fig. 8(b) presents a situation with two intersecting circular shell clusters. In this case, intuitively, point A might be considered a "good" member of both clusters, whereas point B might be considered a "poor" member, and point C an outlier. Here again, the constraint in (1) would force points A, B, and C to have memberships of 0.5 in both clusters. The membership values cannot distinguish between a moderately typical member and an extremely atypical member, because the membership of a point in a class is a relative number. In other words, the memberships that result from the constraint in (1) denote degrees of sharing rather than degrees of typicality.
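To illustrate the generalization of Section VIII for k = 2, n = 3 (a space "curve", here a straight 3-D line represented as the intersection of two planes), the following sketch is our own: with F(x) = [x_1, x_2, x_3, 1]^T the Jacobian is constant and D_i is singular only in its constant-term row and column, so that row can be eliminated by a Schur complement, leaving an ordinary symmetric eigenproblem whose two smallest eigenvectors give the rows of P_i. This elimination is our simplification for this special case, not the paper's general QZ-based procedure.

```python
import numpy as np

def fit_line_as_two_planes(points):
    """Fit a 3-D line as the zero set of two affine functions
    f_r(x) = P[r] @ [x1, x2, x3, 1]  (k = 2, h = 4, p = 1).

    Minimizes sum_j ||P F(x_j)||^2 with hard memberships, constraining the
    normal parts of the rows; the constant coefficient is eliminated via
    the Schur complement of the singular D_i."""
    points = np.asarray(points, dtype=float)
    N = len(points)
    F = np.c_[points, np.ones(N)]          # rows are F(x_j)^T
    E = F.T @ F                            # E_i with u_ij = 1
    # For a fixed normal part n, the optimal constant coefficient is
    # a4(n) = -E[3,:3] @ n / E[3,3]; substituting it back gives the
    # Schur complement of the constant-term block.
    S = E[:3, :3] - np.outer(E[:3, 3], E[3, :3]) / E[3, 3]
    vals, vecs = np.linalg.eigh(S)         # ascending eigenvalues
    P = []
    for r in range(2):                     # two smallest -> two planes
        n = vecs[:, r]
        a4 = -(E[3, :3] @ n) / E[3, 3]
        P.append(np.append(n, a4))
    return np.array(P)                     # 2 x 4 prototype matrix
```

For noise-free points on a line, both recovered planes contain the line, and the cross product of their normals recovers the line's direction.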
One can cast the clustering problem into the framework of possibility theory [18], [49] by relaxing the constraint in (1) and reformulating the objective function in (2) as [34]

$$J_m(B, U; X) = \sum_{i=1}^{C} \sum_{j=1}^{N} (u_{ij})^m d^2(x_j, \beta_i) + \sum_{i=1}^{C} \eta_i \sum_{j=1}^{N} (1 - u_{ij})^m \qquad (28)$$

where the η_i are suitable positive numbers. The first term in (28) requires that the distances from the feature vectors to the prototypes be as low as possible, whereas the second term forces the u_{ij} to be as large as possible, thus avoiding the trivial solution. It is easy to show [34] that U may be a global minimum of J_m(B, U; X) only if the memberships are updated by

$$u_{ij} = \frac{1}{1 + \left(d^2(x_j, \beta_i)/\eta_i\right)^{1/(m-1)}}.$$

Fig. 8. Disadvantages of constrained fuzzy memberships. (a) A data set with two linear clusters. Membership values in cluster 1 for points A and B will be different, even though they are equidistant from the prototype. Point A and point C may have equal membership values in cluster 1, even though point C is far less typical of cluster 1 than point A. (b) A data set with two intersecting circular-shell clusters. The memberships of points A, B, and C are all 0.5 in the two clusters, even though point B is less typical of the clusters, and point C is an outlier.
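The possibilistic update from [34] is absolute rather than relative: u_{ij} depends only on d²(x_j, β_i) and η_i, not on the other clusters. A small sketch (ours) contrasting it with the constrained memberships of (1) for the outlier situation of Fig. 8(b):

```python
import numpy as np

def possibilistic_u(d2, eta, m=2.0):
    """Possibilistic update: u_ij = 1 / (1 + (d2_ij / eta_i)^(1/(m-1)))."""
    d2 = np.asarray(d2, dtype=float)
    eta = np.asarray(eta, dtype=float).reshape(-1, 1)
    return 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))

def constrained_u(d2, m=2.0):
    """Fuzzy update (3): memberships forced to sum to one over the clusters."""
    d2 = np.asarray(d2, dtype=float)
    u = d2 ** (-1.0 / (m - 1.0))
    return u / u.sum(axis=0)

# Two shell clusters; first column: a point lying on both shells (like A),
# second column: a far outlier equidistant from both (like C).
d2 = np.array([[0.01, 100.0],
               [0.01, 100.0]])
fuzzy = constrained_u(d2)               # both columns are 0.5: indistinguishable
poss = possibilistic_u(d2, eta=[1.0, 1.0])
```

The constrained memberships are 0.5 for both points, while the possibilistic memberships are near one for the typical point and near zero for the outlier, i.e., degrees of typicality rather than degrees of sharing.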
a good initialization for the shell clustering algorithms, thus significantly reducing the computational burden. This is a fuzzy generalization of the method suggested by O'Gorman and Clowes [40], who obtain a crude segmentation of the data set using the HT and then fit lines to obtain more accurate results. This approach is particularly viable when the parameter space is of low dimensionality.

Finally, we would like to note that, to our knowledge, no general proof of convergence has been presented for any of the shell clustering algorithms, although in practice these algorithms always seem to converge. This is an important topic that needs to be researched in the future.

Since A'_i is a diagonal matrix, (I − λA'_i)^{-1} can be easily inverted, and from (A3) we obtain

$$z'^T = \left[\frac{\lambda p'_{i4} + 2x'_{j1}}{2(1 - \lambda p'_{i1})},\; \frac{\lambda p'_{i5} + 2x'_{j2}}{2(1 - \lambda p'_{i2})}\right]. \qquad (A4)$$

Substituting (A4) into (A2) yields the following quartic equation in λ:

$$c_4 \lambda^4 + c_3 \lambda^3 + c_2 \lambda^2 + c_1 \lambda + c_0 = 0 \qquad (A5)$$

where

$$c_4 = 4 p'_{i1} p'_{i2} \left(4 p'_{i1} p'_{i2} p'_{i6} - p'_{i2} p'^{2}_{i4} - p'_{i1} p'^{2}_{i5}\right).$$
Here A_i is the symmetric n × n matrix formed from the second-degree coefficients of p_i (diagonal entries p_{i1}, ..., p_{in}, and off-diagonal entries p_{ir}/2 for the corresponding cross-product coefficients),

$$b_i = [p_{i(r+1)}, \dots, p_{i(r+n)}]^T$$

is the vector of linear coefficients, and c_i is the constant coefficient. We first rotate the cluster prototype p_i and the point x_j so that the matrix A_i becomes diagonal. This does not change the distance. The angle of rotation α_i in the 2-D case is given by

$$\alpha_i = \frac{1}{2}\tan^{-1}\!\left(\frac{p_{i3}}{p_{i1} - p_{i2}}\right),$$

with the corresponding rotation matrix

$$R_i = \begin{bmatrix} \cos\alpha_i & \sin\alpha_i \\ -\sin\alpha_i & \cos\alpha_i \end{bmatrix}.$$

APPENDIX B
SUMMARY OF SECOND-DEGREE CURVE AND SURFACE TYPES

A. Two-Dimensional Case

The nature of the graph of the general quadratic equation in x_1 and x_2 is described in Table I.

B. Three-Dimensional Case

The nature of the graph of the general quadratic equation in x_1, x_2, and x_3 given by

$$p_1 x_1^2 + p_2 x_2^2 + p_3 x_3^2 + p_4 x_1 x_2 + p_5 x_1 x_3 + p_6 x_2 x_3 + p_7 x_1 + p_8 x_2 + p_9 x_3 + p_{10} = 0$$

is described in Table II.
TABLE I
TWO-DIMENSIONAL QUADRATIC CURVE TYPES

TABLE II
THREE-DIMENSIONAL QUADRATIC SURFACE TYPES
(Entries include the hyperbolic paraboloid, parabolic cylinder, real parallel planes, imaginary parallel planes, and coincident planes.)
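The coarse ellipse/parabola/hyperbola split underlying Table I is the standard discriminant test on the second-degree coefficients. A minimal sketch (ours; the degenerate subtypes, such as pairs of lines or coincident lines, require the full conditions of the table):

```python
def conic_type(p1, p2, p3, tol=1e-12):
    """Classify p1*x1^2 + p2*x2^2 + p3*x1*x2 + ... = 0 by the discriminant
    p3^2 - 4*p1*p2 (degenerate subtypes are not distinguished here)."""
    disc = p3 * p3 - 4.0 * p1 * p2
    if disc < -tol:
        return "ellipse"
    if disc > tol:
        return "hyperbola"
    return "parabola"
```

For example, x² + y² − 1 = 0 is classified as an ellipse, xy − 1 = 0 as a hyperbola, and x² − y = 0 as a parabola.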
B. Line Detection from a Pair of Parallel Lines or a Very Elongated Ellipse or a Very Flat Hyperbola

If cluster β_i is a pair of parallel lines, then after the prototype is rotated the two lines can be either parallel to the x_1 axis or to the x_2 axis. If the lines are parallel to the x_1 axis, then p'_{i1} ≈ p'_{i4} ≈ 0, and the equations of the two lines are given by the two roots of p'_{i2} x_2² + p'_{i5} x_2 + p'_{i6} = 0,

$$x_2 = \frac{-p'_{i5} \pm \sqrt{p'^{2}_{i5} - 4 p'_{i2} p'_{i6}}}{2 p'_{i2}}.$$

On the other hand, if the inequality in (C3) is reversed, then the above expressions for the conjugate and transverse axis lengths are interchanged, and the negative sign inside the root appears in the expression for the transverse axis length:

$$\text{Transverse Axis Length} = 2\sqrt{-\frac{p'^{2}_{i5}}{4 p'^{2}_{i2}} - \frac{p'_{i6}}{p'_{i2}}}.$$

REFERENCES

[18] D. Dubois and H. Prade, Possibility Theory: An Approach to Computerized Processing of Uncertainty. New York: Plenum, 1988.
[19] O. D. Faugeras and M. Hebert, "The representation, recognition, and positioning of 3-D shapes from range data," in Techniques for 3-D Machine Perception, A. Rosenfeld, Ed. Amsterdam, The Netherlands: Elsevier, 1986, pp. 113-148.
[20] I. Gath and A. B. Geva, "Unsupervised optimal fuzzy clustering," IEEE Trans. Pattern Anal. Machine Intell., vol. 11, no. 7, pp. 773-781, Jul. 1989.
[21] R. Gnanadesikan, Methods for Statistical Data Analysis of Multivariate Observations. New York: Wiley, 1977.
[22] W. E. L. Grimson and D. P. Huttenlocher, "On the sensitivity of the Hough transform for object recognition," IEEE Trans. Pattern Anal. Machine Intell., vol. 12, no. 3, pp. 255-274, 1990.
[23] E. E. Gustafson and W. C. Kessel, "Fuzzy clustering with a fuzzy covariance matrix," in Proc. IEEE CDC, San Diego, CA, 1979, pp. 761-766.
[24] R. M. Haralick and L. G. Shapiro, Computer and Robot Vision, vol. I. Reading, MA: Addison-Wesley, 1992, Appendices.
[25] R. J. Hathaway and J. C. Bezdek, "Switching regression models and fuzzy clustering," IEEE Trans. Fuzzy Syst., vol. 1, no. 3, pp. 195-204, Aug. 1993.
[26] J. Illingworth and J. Kittler, "A survey of Hough transforms," Computer Vision, Graphics, and Image Processing, vol. 44, no. 1, pp. 87-116, Oct. 1988.
[27] A. K. Jain and R. C. Dubes, Algorithms for Clustering Data. Englewood Cliffs, NJ: Prentice-Hall, 1988.
[28] J.-M. Jolion, P. Meer, and S. Bataouche, "Robust clustering with applications in computer vision," IEEE Trans. Pattern Anal. Machine Intell., vol. 13, no. 8, pp. 791-801, Aug. 1991.
[29] J.-M. Jolion and A. Rosenfeld, "Cluster detection in background noise," Pattern Recognition, vol. 22, no. 5, pp. 603-607, 1989.
[30] R. Krishnapuram and C.-P. Freg, "Fitting an unknown number of lines and planes to image data through compatible cluster merging," Pattern Recognition, vol. 25, no. 4, pp. 385-400, 1992.
[31] R. Krishnapuram, H. Frigui, and O. Nasraoui, "New fuzzy shell clustering algorithms for boundary detection and pattern recognition," in Proc. SPIE Conf. Robotics and Computer Vision, Boston, Nov. 1991, SPIE vol. 1607, pp. 458-465.
[32] ——, "Quadratic shell clustering algorithms and the detection of second-degree curves," Pattern Recognition Lett., vol. 14, no. 7, pp. 545-552, Jul. 1993.
[33] ——, "A fuzzy clustering algorithm to detect planar and quadric shapes," in Proc. N. Am. Fuzzy Inform. Process. Soc. Workshop, Puerto Vallarta, Mexico, vol. I, Dec. 1992, pp. 59-68.
[34] R. Krishnapuram and J. M. Keller, "A possibilistic approach to clustering," IEEE Trans. Fuzzy Syst., vol. 1, no. 2, pp. 98-110, May 1993.
[35] ——, "Fuzzy and possibilistic clustering methods for computer vision," in Neural and Fuzzy Systems, S. Mitra, M. Gupta, and W. Kraske, Eds., SPIE Institute Series, vol. IS-12, 1994, pp. 133-159.
[36] R. Krishnapuram, O. Nasraoui, and H. Frigui, "The fuzzy C spherical shells algorithm: A new approach," IEEE Trans. Neural Networks, vol. 3, no. 5, pp. 663-671, Sept. 1992.
[37] D. G. Lowe, "Fitting parametrized three-dimensional models to images," IEEE Trans. Pattern Anal. Machine Intell., vol. 13, no. 5, pp. 441-450, May 1991.
[38] V. Milenkovic, "Multiple resolution search techniques for the Hough transform in high dimensional parameter spaces," in Techniques for 3-D Machine Perception, A. Rosenfeld, Ed. Amsterdam, The Netherlands: Elsevier, 1986, pp. 231-255.
[39] J. J. Moré, "The Levenberg-Marquardt algorithm: Implementation and theory," in Numerical Analysis, G. A. Watson, Ed., Lecture Notes in Mathematics. Berlin: Springer-Verlag, 1977, pp. 105-116.
[40] F. O'Gorman and M. B. Clowes, "Finding picture edges through collinearity of feature points," IEEE Trans. Comput., vol. 25, pp. 133-142, 1976.
[41] K. Paton, "Conic sections in chromosome analysis," Pattern Recognition, vol. 2, no. 1, pp. 39-51, Jan. 1970.
[42] V. Pratt, "Direct least-squares fitting of algebraic surfaces," Computer Graphics, vol. 21, no. 4, pp. 145-152, 1987.
[43] J. Princen, J. Illingworth, and J. Kittler, "A hierarchical approach to line extraction based on the Hough transform," Computer Vision, Graphics, and Image Processing, vol. 52, pp. 57-77, 1990.
[44] ——, "Hypothesis testing: A framework for analyzing and optimizing the Hough transform performance," IEEE Trans. Pattern Anal. Machine Intell., vol. 16, no. 4, pp. 329-341, Apr. 1994.
[45] P. D. Sampson, "Fitting conic sections to 'very scattered' data: An iterative refinement of the Bookstein algorithm," Computer Graphics and Image Processing, vol. 18, pp. 97-108, 1982.
[46] G. Taubin, "Estimation of planar curves, surfaces, and nonplanar space curves defined by implicit equations with application to edge and range image segmentation," IEEE Trans. Pattern Anal. Machine Intell., vol. 13, no. 11, pp. 1115-1138, Nov. 1991.
[47] I. Weiss, "Straight line fitting in a noisy image," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1988, pp. 647-652.
[48] P. Whaite and F. P. Ferrie, "From uncertainty to visual exploration," IEEE Trans. Pattern Anal. Machine Intell., vol. 13, no. 10, pp. 1038-1049, Oct. 1991.
[49] L. A. Zadeh, "Fuzzy sets as a basis for a theory of possibility," Fuzzy Sets and Systems, vol. 1, pp. 3-28, 1978.

Raghu Krishnapuram (S'83-M'84) received the B.Tech. degree in electrical engineering from the Indian Institute of Technology, Bombay, in 1978. He obtained the M.S. degree in electrical engineering from Louisiana State University, Baton Rouge, in 1985, and the Ph.D. degree in electrical and computer engineering from Carnegie Mellon University, Pittsburgh, in 1987.
Dr. Krishnapuram was with Bush India, Bombay, for a year, where he participated in developing electronic audio entertainment equipment. From 1979 to 1982, he was a deputy engineer at Bharat Electronics Ltd., Bangalore, India, manufacturers of defense equipment. He is currently an Associate Professor in the Electrical and Computer Engineering Department at the University of Missouri, Columbia. In 1993, he visited the European Laboratory for Intelligent Techniques Engineering (ELITE), Aachen, Germany, as a Humboldt Fellow. His current research interests are many aspects of computer vision and pattern recognition, as well as applications of fuzzy set theory and neural networks to pattern recognition and computer vision.

Hichem Frigui received the B.S. degree in electrical and computer engineering in 1990 and the M.S. degree in electrical engineering in 1992, both from the University of Missouri, Columbia.
From 1992 to 1994, he worked with IDEE, Tunis, where he participated in the development of banking software applications. He is currently pursuing the Ph.D. degree in electrical engineering at the University of Missouri, Columbia.
His current research interests include pattern recognition, computer vision, fuzzy set theory, and artificial intelligence.

Olfa Nasraoui received the B.S. degree in electrical and computer engineering and the M.S. degree in electrical engineering, in 1990 and 1992, respectively, from the University of Missouri, Columbia.
She worked as a Software Engineer with IDEE, Tunis, from 1992 to 1994, and is currently pursuing the Ph.D. degree in electrical engineering at the University of Missouri, Columbia.
Her current research interests include pattern recognition, computer vision, neural networks, and applications of fuzzy set theory.