An Optimal Algorithm for Approximate Nearest Neighbor Searching in Fixed Dimensions
SUNIL ARYA
The Hong Kong University of Science and Technology, Kowloon, Hong Kong
DAVID M. MOUNT
University of Maryland, College Park, Maryland
NATHAN S. NETANYAHU
University of Maryland, College Park, Maryland and NASA Goddard Space Flight Center, Greenbelt,
Maryland
RUTH SILVERMAN
University of the District of Columbia, Washington, D.C., and University of Maryland, College Park,
Maryland
AND
ANGELA Y. WU
The American University, Washington, D.C.
Abstract. Consider a set S of n data points in real d-dimensional space, R^d, where distances are
measured using any Minkowski metric. In nearest neighbor searching, we preprocess S into a data
A preliminary version of this paper appeared in Proceedings of the 5th Annual ACM-SIAM Symposium
on Discrete Algorithms. ACM, New York, 1994, pp. 573–582.
The work of S. Arya was partially supported by HK RGC grant HKUST 736/96E. Part of this
research was conducted while S. Arya was visiting the Max-Planck-Institut für Informatik, Saar-
brücken, Germany.
The work of D. M. Mount was supported by the National Science Foundation (NSF) under grant
CCR 97-12379.
The research of N. S. Netanyahu was carried out, in part, while the author held a National Research
Council NASA Goddard Associateship.
The work of R. Silverman was supported by the NSF under grant CCR 93-10705.
Authors’ present addresses: S. Arya, Department of Computer Science, The Hong Kong University of
Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China, e-mail: [email protected];
D. M. Mount, Department of Computer Science and Institute for Advanced Computer Studies,
University of Maryland, College Park, MD 20742, e-mail: [email protected]; N. S. Netanyahu,
Department of Mathematics and Computer Science, Bar-Ilan University, Ramat-Gan 52900, Israel,
e-mail: [email protected]; R. Silverman, Department of Computer Science, University of the
District of Columbia, Washington, DC and Center for Automation Research, University of Maryland,
College Park, MD 20742, e-mail: [email protected]; A. Y. Wu, Department of Computer Science
and Information Systems, The American University, Washington, DC, e-mail: [email protected].
Permission to make digital / hard copy of part or all of this work for personal or classroom use is
granted without fee provided that the copies are not made or distributed for profit or commercial
advantage, the copyright notice, the title of the publication, and its date appear, and notice is given
that copying is by permission of the Association for Computing Machinery (ACM), Inc. To copy
otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission
and / or a fee.
© 1999 ACM 0004-5411/99/1100-0891 $5.00
Journal of the ACM, Vol. 45, No. 6, November 1998, pp. 891–923.
structure, so that given any query point q ∈ R^d, the closest point of S to q can be reported quickly.
Given any positive real ε, a data point p is a (1 + ε)-approximate nearest neighbor of q if its distance
from q is within a factor of (1 + ε) of the distance to the true nearest neighbor. We show that it is
possible to preprocess a set of n points in R^d in O(dn log n) time and O(dn) space, so that given a
query point q ∈ R^d and ε > 0, a (1 + ε)-approximate nearest neighbor of q can be computed in
O(c_{d,ε} log n) time, where c_{d,ε} ≤ d⌈1 + 6d/ε⌉^d is a factor depending only on dimension and ε. In
general, we show that given an integer k ≥ 1, (1 + ε)-approximations to the k nearest neighbors of
q can be computed in additional O(kd log n) time.
Categories and Subject Descriptors: E.1 [Data]: Data Structures; F.2.2 [Analysis of Algorithms and
Problem Complexity]: Nonnumerical Algorithms and Problems; H.3.3 [Information Storage and
Retrieval]: Information Search and Retrieval
General Terms: Algorithms, Theory.
Additional Key Words and Phrases: Approximation algorithms, box-decomposition trees, closest-
point queries, nearest neighbor searching, post-office problem, priority search.
1. Introduction
Nearest neighbor searching is the following problem: We are given a set S of n
data points in a metric space, X, and the task is to preprocess these points so
that, given any query point q [ X, the data point nearest to q can be reported
quickly. This is also called the closest-point problem and the post office problem.
Nearest neighbor searching is an important problem in a variety of applications,
including knowledge discovery and data mining [Fayyad et al. 1996], pattern
recognition and classification [Cover and Hart 1967; Duda and Hart 1973],
machine learning [Cost and Salzberg 1993], data compression [Gersho and Gray
1991], multimedia databases [Flickner et al. 1995], document retrieval [Deer-
wester et al. 1990], and statistics [Devroye and Wagner 1982].
High-dimensional nearest neighbor problems arise naturally when complex
objects are represented by vectors of d numeric features. Throughout we will
assume the metric space X is real d-dimensional space R^d. We also assume
distances are measured using any Minkowski L_m distance metric. For any integer
m ≥ 1, the L_m-distance between points p = (p_1, p_2, . . . , p_d) and q = (q_1, q_2,
. . . , q_d) in R^d is defined to be the mth root of Σ_{1≤i≤d} |p_i − q_i|^m. In the
limiting case, where m = ∞, this is equivalent to max_{1≤i≤d} |p_i − q_i|. The L_1, L_2,
and L_∞ metrics are the well-known Manhattan, Euclidean, and max metrics,
respectively. We assume that the distance between any two points in R d can be
computed in O(d) time. (Note that the root need not be computed when
comparing distances.) Although this framework is strong enough to include many
nearest neighbor applications, it should be noted that there are applications that
do not fit within this framework (e.g., computing the nearest neighbor among
strings, where the distance function is the edit distance, the number of single
character changes).
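For concreteness, these definitions can be sketched in code. This is an illustrative sketch of ours, not part of the paper's presentation; the function names are invented:

```python
def minkowski_dist(p, q, m):
    """L_m distance between points p and q (sequences of equal length d).
    For m = infinity this is the max metric; otherwise the mth root of
    the sum of mth powers of coordinate differences."""
    if m == float("inf"):
        return max(abs(pi - qi) for pi, qi in zip(p, q))
    return sum(abs(pi - qi) ** m for pi, qi in zip(p, q)) ** (1.0 / m)

def dist_to_mth_power(p, q, m):
    """Root-free form: a monotone function of the true L_m distance,
    so it suffices for comparing distances (the root need not be taken),
    at O(d) time per evaluation."""
    return sum(abs(pi - qi) ** m for pi, qi in zip(p, q))
```

The second form is why an O(d)-time distance comparison needs no root extraction.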
Obviously the problem can be solved in O(dn) time through simple brute-
force search. A number of methods have been proposed which provide relatively
modest constant factor improvements (e.g., through partial distance computation
[Bei and Gray 1985], or by projecting points onto a single line [Friedman et al.
1975; Guan and Kamel 1992; Lee and Chen 1994]). Our focus here is on methods
using data structures that are stored in main memory. There is a considerable
literature on nearest neighbor searching in databases. For example, see Berchtold
et al. [1996; 1997], Lin et al. [1994], Roussopoulos et al. [1995], and White
and Jain [1996].
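As a baseline, the O(dn) brute-force search amounts to a single scan. The sketch below is ours, for illustration only; it compares squared Euclidean distances so that no root is ever taken:

```python
def brute_force_nn(points, q):
    """Return the point of `points` closest to q under the Euclidean
    metric, by a single O(dn) scan comparing squared distances."""
    best, best_d2 = None, float("inf")
    for p in points:
        d2 = sum((pi - qi) ** 2 for pi, qi in zip(p, q))
        if d2 < best_d2:
            best, best_d2 = p, d2
    return best
```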
For uniformly distributed point sets, good expected case performance can be
achieved by algorithms based on simple decompositions of space into regular
grids. Rivest [1974] and later Cleary [1979] provided analyses of these methods.
Bentley et al. [1980] also analyzed a grid-based method for distributions satisfy-
ing certain bounded-density assumptions. These results were generalized by
Friedman et al. [1977] who showed that O(n) space and O(log n) query time are
achievable in the expected case through the use of kd-trees. However, even these
methods suffer as dimension increases. The constant factors hidden in the
asymptotic running time grow at least as fast as 2^d (depending on the metric).
Sproull [1991] observed that the empirically measured running time of kd-trees
does increase quite rapidly with dimension. Arya et al. [1995] showed that if n is
not significantly larger than 2^d, as arises in some applications, then boundary
effects mildly decrease this exponential dimensional dependence.
From the perspective of worst-case performance, an ideal solution would be to
preprocess the points in O(n log n) time, into a data structure requiring O(n)
space so that queries can be answered in O(log n) time. In dimension 1, this is
possible by sorting the points, and then using binary search to answer queries. In
dimension 2, this is also possible by computing the Voronoi diagram for the point
set and then using any fast planar point location algorithm to locate the cell
containing the query point. (For example, see de Berg et al. [1997], Edelsbrunner
[1987], and Preparata and Shamos [1985].) However, in dimensions larger than
2, the worst-case complexity of the Voronoi diagram grows as O(n^⌈d/2⌉). Higher
dimensional solutions with sublinear worst-case performance were considered by
Yao and Yao [1985]. Later Clarkson [1988] showed that queries could be
answered in O(log n) time with O(n^{⌈d/2⌉+δ}) space, for any δ > 0. The
O-notation hides constant factors that are exponential in d. Agarwal and
Matoušek [1993] generalized this by providing a trade-off between space and
query time. Meiser [1993] showed that exponential factors in query time could be
eliminated by giving an algorithm with O(d^5 log n) query time and O(n^{d+δ})
space, for any δ > 0. In any fixed dimension greater than 2, no known method
achieves the simultaneous goals of roughly linear space and logarithmic query
time.
The apparent difficulty of obtaining algorithms that are efficient in the worst
case with respect to both space and query time for dimensions higher than 2,
suggests that the alternative approach of finding approximate nearest neighbors is
worth considering. Consider a set S of data points in R^d and a query point q ∈
R^d. Given ε > 0, we say that a point p ∈ S is a (1 + ε)-approximate nearest
neighbor of q if dist(p, q) ≤ (1 + ε) dist(p*, q), where p* is the true nearest
neighbor of q.
we cannot provide bounds on running time, other than a trivial O(dn log n) time
bound needed to search the entire tree by our search algorithm. Unfortunately,
exponential factors in query time do imply that our algorithm is not practical for
large values of d. However, our empirical evidence in Section 6 shows that the
constant factors are much smaller than the bound given in Theorem 1 for the
many distributions that we have tested. Our algorithm can provide significant
improvements over brute-force search in dimensions as high as 20, with a
relatively small average error. There are a number of important applications of
nearest neighbor searching in this range of dimensions.
The algorithms for both preprocessing and queries are deterministic and easy
to implement. Our data structure is based on a hierarchical decomposition of
space, which we call a balanced box-decomposition (BBD) tree. This tree has
O(log n) height, and subdivides space into regions of O(d) complexity defined
by axis-aligned hyperrectangles that are fat, meaning that the ratio between the
longest and shortest sides is bounded. This data structure is similar to balanced
structures based on box-decomposition [Bern et al. 1993; Callahan and Kosaraju
1992; Bespamyatnikh 1995], but there are a few new elements that have been
included for the purposes of nearest neighbor searching and practical efficiency.
Space is recursively subdivided into a collection of cells, each of which is either a
d-dimensional rectangle or the set-theoretic difference of two rectangles, one
enclosed within the other. Each node of the tree is associated with a cell, and
hence it is implicitly associated with the set of data points lying within this cell.
Each leaf cell is associated with a single point lying within the bounding rectangle
for the cell. The leaves of the tree define a subdivision of space. The tree has
O(n) nodes and can be built in O(dn log n) time.
Here is an intuitive overview of the approximate nearest neighbor query
algorithm. Given the query point q, we begin by locating the leaf cell containing
the query point in O(log n) time by a simple descent through the tree. Next, we
begin enumerating the leaf cells in increasing order of distance from the query
point. We call this priority search. When a cell is visited, the distance from q to
the point associated with this cell is computed. We keep track of the closest point
seen so far. For example, Figure 1(a) shows the cells of such a subdivision. Each
cell has been numbered according to its distance from the query point.
Let p denote the closest point seen so far. As soon as the distance from q to
the current leaf cell exceeds dist(q, p)/(1 + ε) (illustrated by the dotted circle in
Figure 1(a)), it follows that the search can be terminated, and p can be reported
as an approximate nearest neighbor to q. The reason is that any point located in
a cell visited later lies at distance greater than dist(q, p)/(1 + ε) from q, and
hence p is within a factor of (1 + ε) of the distance to the true nearest neighbor.
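The priority search strategy just outlined can be sketched as follows. This is our simplified model, not the paper's implementation: the leaf cells are abstracted as precomputed (cell distance, point distance, point) triples rather than being enumerated incrementally from the tree.

```python
import heapq

def priority_search(leaf_cells, eps):
    """leaf_cells: iterable of (cell_dist, point_dist, point), where
    cell_dist is the distance from the query q to the leaf cell and
    point_dist is the distance from q to the cell's data point.
    Visits cells in increasing cell_dist order; returns a
    (1+eps)-approximate nearest neighbor of q and its distance."""
    heap = list(leaf_cells)
    heapq.heapify(heap)  # priority queue keyed on cell_dist
    best_p, best_d = None, float("inf")
    while heap:
        cell_dist, point_dist, p = heapq.heappop(heap)
        if cell_dist > best_d / (1.0 + eps):
            break  # no remaining cell can beat best_d by the required factor
        if point_dist < best_d:
            best_d, best_p = point_dist, p
    return best_p, best_d
```

Early termination here is exactly the criterion of the text: once the nearest unvisited cell is farther than dist(q, p)/(1 + ε), the current candidate is reported.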
2. The BBD-Tree
In this section, we introduce the balanced box-decomposition tree or BBD-tree,
which is the primary data structure used in our algorithm. It is among the general
class of geometric data structures based on a hierarchical decomposition of space
into d-dimensional rectangles whose sides are orthogonal to the coordinate axes.
The main feature of the BBD-tree is that it combines, in one data structure, two
important properties that appear separately in the structures described below.
First, consider the optimized kd-tree [Friedman et al. 1975]. This data
structure recursively subdivides space by a hyperplane that is orthogonal to one
of the coordinate axes and that partitions the data points as evenly as possible.
As a consequence, as one descends any path in the tree, the cardinality of points
associated with the nodes on this path decreases exponentially. In contrast,
consider quadtree-based data structures, which decompose space into regions
that are either hypercubes, or generally rectangles whose aspect ratio (the ratio of
the length of the longest side to the shortest side) is bounded by a constant.
These include PR-quadtrees (see Samet [1990]), and structures by Clarkson
[1983], Feder and Greene [1988], Vaidya [1989], Callahan and Kosaraju [1992]
and Bern [1993], among others. An important feature of these data structures is
that as one descends any path in these trees, the geometric size of the associated
regions of space (defined, for example, to be the length of the longest side of the
associated rectangle) decreases exponentially. The BBD-tree is based on a spatial
decomposition that achieves both exponential cardinality and geometric size
reduction as one descends the tree.
The BBD-tree is similar to other balanced structures based on spatial decom-
position into rectangles of bounded aspect ratio. In particular, Bern et al. [1993],
Schwartz et al. [1994], Callahan and Kosaraju [1995], and Bespamyatnikh [1995]
have all observed that the unbalanced trees described earlier can be combined
with auxiliary balancing data structures, such as centroid decomposition trees
[Chazelle 1982], dynamic trees [Sleator and Tarjan 1983], or topology trees
[Frederickson 1993] to achieve the desired combination of properties. However,
these auxiliary data structures are of considerable complexity. We will show that
it is possible to build a single balanced data structure without the need for any
complex auxiliary data structures. (This is a major difference between this and
the earlier version of this paper [Arya et al. 1994].)
The principal difference between the BBD-tree and the other data structures
listed above is that each node of the BBD-tree is associated not simply with a
d-dimensional rectangle, but generally with the set theoretic difference of two
such rectangles, one enclosed within the other. Note, however, that any such
region can always be decomposed into at most 2d rectangles by simple hyper-
plane cuts, but the resulting rectangles will not generally have bounded aspect
ratio. We show that a BBD-tree for a set of n data points in R d can be
constructed in O(dn log n) time and has O(n) nodes.
Before describing the construction algorithm, we begin with a few definitions.
By a rectangle in R d we mean the d-fold product of closed intervals on the
coordinate axes. For 1 ≤ i ≤ d, the ith length of a rectangle is the length of the
ith interval. The size of a rectangle is the length of its longest side. We define a
box to be a rectangle whose aspect ratio (the ratio between the longest and
shortest sides) is bounded by some constant, which for concreteness we will
assume to be 3.
Each node of the BBD-tree is associated with a region of space called a cell. In
particular, define a cell to be either a box or the set theoretic difference of two
boxes, one enclosed within the other. Thus, each cell is defined by an outer box
and an optional inner box. Each cell will be associated with the set of data points
lying within the cell. Cells are considered to be closed, and hence points that lie
on the boundary between cells may be assigned to either cell. The size of a cell is
the size of its outer box.
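These definitions can be captured by a small sketch (our own naming, purely illustrative; the authors give no code):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# A rectangle is a d-fold product of closed intervals, one per coordinate.
Rect = Tuple[Tuple[float, float], ...]

def size(r: Rect) -> float:
    """Size of a rectangle: the length of its longest side."""
    return max(hi - lo for lo, hi in r)

def is_box(r: Rect, ratio: float = 3.0) -> bool:
    """A box is a rectangle whose aspect ratio (longest side over
    shortest side) is bounded; for concreteness the bound is 3."""
    shortest = min(hi - lo for lo, hi in r)
    return shortest > 0 and size(r) / shortest <= ratio

@dataclass
class Cell:
    outer: Rect                    # outer box
    inner: Optional[Rect] = None   # optional inner box (subtracted region)

    def size(self) -> float:
        # The size of a cell is the size of its outer box.
        return size(self.outer)
```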
An important concept which restricts the nature of inner boxes is a property
called stickiness. Consider a cell with outer box b_O and inner box b_I. Intuitively,
the box b_I is sticky for b_O if each face is either sufficiently far from or else
touching the corresponding face of b_O. To make this precise, consider two closed
intervals, [x_I, y_I] ⊆ [x_O, y_O], and let w = y_I − x_I denote the width of the inner
interval. We say that [x_I, y_I] is sticky for [x_O, y_O] if each of the two distances
between the inner interval and the outer interval, x_I − x_O and y_O − y_I, is either
0 or at least w. The inner box b_I is sticky for the outer box b_O if each of the d
intervals of b_I is sticky for the corresponding interval of b_O. (See Figure 2(a).)
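The stickiness test can be stated directly in code (our illustrative sketch; exact zero comparisons are used for simplicity):

```python
def interval_sticky(inner, outer):
    """[x_I, y_I] is sticky for [x_O, y_O] if each of the two gaps
    between the inner and outer interval is either 0 or at least w,
    the width of the inner interval."""
    (x_i, y_i), (x_o, y_o) = inner, outer
    w = y_i - x_i
    return all(gap == 0 or gap >= w for gap in (x_i - x_o, y_o - y_i))

def box_sticky(inner_box, outer_box):
    """b_I is sticky for b_O iff the interval condition holds in each
    of the d coordinates."""
    return all(interval_sticky(i, o) for i, o in zip(inner_box, outer_box))
```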
2.2. PARTITIONING POINTS. Before presenting the details on how the splitting
plane or shrinking box is computed, we describe how the points are partitioned.
We employ a technique for partitioning multidimensional point sets due to
Vaidya [1989]. We assume that the data points that are associated with the
current node are stored in d separate doubly-linked lists, each sorted according
to one of the coordinate axes. Actually, the coordinates of each point are stored
only once. Consider the list for the ith coordinate. Each entry of this doubly-
linked list contains a pointer to the point's coordinate storage, and also a cross
link to the instance of this same point in the list sorted along coordinate i + 1
(where indices are taken modulo d). Thus, if a point is deleted from any one list,
it can be deleted from all other lists in O(d) time by traversing the cross links.
Since each point is associated with exactly one node at any stage of the
construction, the total space needed to store all these lists is O(dn). The initial
lists containing all the data points are built in O(dn log n) time by sorting the
data points along each of the d coordinates.
To partition the points, we enumerate the points associated with the current
node, testing on which side of the splitting plane or shrinking box each one lies. We label
each point accordingly. Then, in O(dn) time, it is easy to partition each of the d
sorted lists into two sorted lists. Since two nodes on the same level are associated
with disjoint subsets of S, it follows that the total work to partition the nodes on
a given level is O(dn). We will show that the tree has O(log n) depth. From this
it will follow that the total work spent in partitioning over the entire construction
algorithm will be O(dn log n). (The d sorted lists are not needed for the
efficiency of this process, but they will be needed later.)
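The partition step can be sketched as follows. For simplicity, our sketch models the d cross-linked doubly-linked lists as plain Python lists of point ids; this preserves the stable one-pass splitting idea (each output list stays sorted) but not the O(d)-time deletion that the cross links provide:

```python
def partition_sorted_lists(sorted_lists, side_of):
    """sorted_lists: d lists of point ids, each sorted along one axis.
    side_of: maps point id -> 'low' or 'high', its side of the split.
    Returns (low_lists, high_lists); the stable one-pass split keeps
    each of the d output lists sorted, for O(dn) work overall."""
    low_lists, high_lists = [], []
    for lst in sorted_lists:
        low_lists.append([p for p in lst if side_of[p] == "low"])
        high_lists.append([p for p in lst if side_of[p] == "high"])
    return low_lists, high_lists
```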
To complete the description of the construction algorithm, it suffices to
describe how the splitting plane and shrinking box are computed and show that
this can be done in time linear in the number of points associated with each
node. We will present two algorithms for these tasks, the midpoint algorithm and
the middle-interval algorithm (borrowing a term from Bespamyatnikh [1995]). The
midpoint algorithm is conceptually simpler, but its implementation assumes that
nonalgebraic manipulations such as exclusive-or, integer division, and integer
logarithm can be performed on the coordinates of the data points. In contrast,
the middle-interval algorithm does not make these assumptions, but is somewhat
more complex. The midpoint algorithm is a variant of the one described in an
earlier version of this paper [Arya et al. 1994], and the middle-interval algorithm
is a variant of the algorithm given by Callahan and Kosaraju [1995] and
developed independently by Bespamyatnikh [1995].
Midpoint splitting rule. Let b be a midpoint box, and let i be the coordinate
index of the longest side of b (among all sides having the same maximum length,
take the smallest index). Split b into two identical boxes by a hyperplane passing
through the center of b and orthogonal to the ith coordinate axis. (See Figure 3(a).)
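The rule can be sketched directly (our illustrative code, with boxes represented as tuples of intervals):

```python
def midpoint_split(box):
    """box: tuple of (lo, hi) intervals, one per coordinate.
    Split through the center, orthogonal to the longest side
    (ties broken toward the smallest coordinate index).
    Returns (low_box, high_box), two boxes of identical shape."""
    lengths = [hi - lo for lo, hi in box]
    i = lengths.index(max(lengths))  # longest side; smallest index on ties
    lo, hi = box[i]
    mid = (lo + hi) / 2.0
    low_box = box[:i] + ((lo, mid),) + box[i + 1:]
    high_box = box[:i] + ((mid, hi),) + box[i + 1:]
    return low_box, high_box
```

Note that repeated application keeps every side length a power of 1/2 of the original, which is the fact the stickiness argument below relies on.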
This can be seen as a binary variant of the standard quadtree splitting rule
[Samet 1990]. We split through the midpoint each time by a cyclically repeating
sequence of orthogonal hyperplanes. The midpoint algorithm uses only midpoint
boxes in the BBD-tree. It is easy to verify that every midpoint box has an aspect
ratio of either 1 or 2. If we assume that C is scaled to the unit hypercube [0, 1]^d,
then the length of each side of a midpoint box is a nonnegative power of 1/2, and
if the ith length is 1/2^k, then the endpoints of this side are multiples of 1/2^k. It
follows that if b_O and b_I are midpoint boxes, with b_I ⊂ b_O, then b_I is sticky for
b_O (since the ith length of b_O is at least as long as that of b_I, and hence is aligned
with either the same or a smaller power of 1/2). Thus, we need to take no special
care to maintain stickiness in this case.
Problem 2. The resulting shrinking box does not necessarily contain the inner
box of the original cell, as required in the shrink operation.
To remedy Problem 1, we need to accelerate the decomposition algorithm
when points are tightly clustered. Rather than just repeatedly splitting, we
combine two operations, first shrinking to a tight enclosing midpoint box and
then splitting this box. From the sorted coordinate lists, we can determine a
minimum bounding rectangle (not necessarily a box) for the current subset of
data points in O(d) time. Before applying each midpoint split, we first compute
the smallest midpoint box that contains this rectangle. We claim that this can be
done in O(d) time, assuming a model of computation in which exclusive-or,
integer floor, powers of 2, and integer logarithm can be computed on point
coordinates. (We omit the proof here, since we will see in the next section that
this procedure is not really needed for our results. See Bern et al. [1993] for a
solution to this problem based on a bit-interleaving technique.) Then we apply
the split operation to this minimal enclosing midpoint box. From the minimality
of the enclosing midpoint box, it follows that this split will produce a nontrivial
partition of the point set. Therefore, after at most n_c/3 = O(n_c) repetitions of
this shrink-and-split combination, we will have succeeded in reducing the number
of remaining points to at most 2n_c/3.
To remedy Problem 2, we replace the single stage shrink described in the
simple approach with a 3-stage decomposition, which shrinks, then splits, then
shrinks. Suppose that we are applying the centroid shrink to a cell that contains
an inner box b_I. When we compute the minimum enclosing rectangle for the data
points, we make sure that it includes b_I as well. This can be done easily in O(d)
time, given the minimum enclosing rectangle for the data points. Now we apply
the above iterated shrinking/splitting combination, until (if ever) we first encounter
a split that separates b_I from the box containing the majority of the remaining
points. Let b denote the box that was just split. (See Figure 4(b).) We create a
shrinking node whose shrinking box is b. The outer child contains the points
lying outside of b. The inner child consists of a splitting node, with the box
containing b_I on one side, and the box containing the majority of the data points
on the other side. Finally, we continue with the centroid shrinking procedure
with the child cell that contains the majority of points. Since this cell has no inner
box, the above procedure will correctly compute the desired shrinking node. The
nodes created are illustrated in Figure 4(c). The final result from the centroid
shrink is box c in the lower left. Note that this figure illustrates the most general
case. For example, if the first split separates b_I from the majority, then there is
no need for the first shrinking node. The (up to) four remaining cells are
decomposed recursively. Also note that none of these cells contains more than
2n c /3 data points.
LEMMA 1. Given a parent node associated with n_c points, and assuming that the
points have been stored in coordinate-sorted lists, each split and centroid shrink can
be performed in O(dn_c) time.
PROOF. The centroid shrink is clearly the more complex of the two operations,
so we will present its analysis only. We begin by making a copy of the d
coordinate-sorted point lists. The running time T(k) of the procedure applied to a
subset of k of the n_c points satisfies the recurrence

T(k) = 1, if k ≤ 2n_c/3,
T(k) = max_{1 ≤ j ≤ k/2} (dj + T(k − j)), otherwise.

An easy induction argument shows that T(n_c) ≤ dn_c, and hence the total
running time for each operation is O(dn_c). □
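The induction bound can be spot-checked numerically. The sketch below is ours, with illustrative parameters; it evaluates the recurrence by memoized recursion and confirms T(n_c) ≤ dn_c:

```python
from functools import lru_cache

def check_recurrence(n_c, d):
    """Evaluate T(k) = 1 for k <= 2*n_c/3, and otherwise
    T(k) = max over 1 <= j <= k/2 of (d*j + T(k - j)),
    then confirm the claimed bound T(n_c) <= d * n_c."""
    @lru_cache(maxsize=None)
    def T(k):
        if k <= 2 * n_c / 3:
            return 1
        return max(d * j + T(k - j) for j in range(1, k // 2 + 1))
    return T(n_c) <= d * n_c
```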
In conclusion, we can compute the splitting plane and shrinking box in O(dn_c)
time. Since we alternate splits with shrinks, and shrinking reduces the number of
points in each cell by a constant factor, it follows that the resulting tree has
height O(log n). From the arguments made earlier, it follows that the entire tree
can be constructed in O(dn log n) time.
If there is an inner box b_I, then care must be taken that the splitting
hyperplane does not intersect the interior of b_I, and that stickiness is not violated
after splitting. Consider the projection of b_I onto the longest side of b_O. If this
projection fully covers the longest side of b_O, then we consider the second
longest side of b_O, and so on, until finding one for which b_I does not fully cover
this side. Such a side must exist, since b_I ≠ b_O. Observe that, by stickiness, this
projected interval cannot properly contain the central third of this side. If the
projection of b_I lies partially within the central third of this side, we select a
splitting plane in the central third passing through a face of b_I (see Figure 5(a)).
Otherwise, the projection of b_I lies entirely within either the first or last third. In
this case, we split at the farther edge of the center strip (see Figure 5(b)).
It is easy to see that this operation preserves stickiness. We show in the
following lemma that the aspect ratio is preserved as well.
LEMMA 2. Given a cell consisting of outer box b_O and inner box b_I satisfying the
3:1 aspect ratio bound, the child boxes produced by the middle-interval split
algorithm also satisfy the bound.
PROOF. First, observe that the longest side of the two child boxes is not
greater than the longest side of b_O. We consider two cases: first, where the
longest side of b_O is split, and second, where some other side of b_O is split. In the
first case, if the shortest side of the child is any side other than the one that was
split, then clearly the aspect ratio cannot increase after splitting. If the shortest
side of the child is the splitting side, then by our construction it is at least one
third the length of its parent's longest side, implying that it is at least one third
the length of its own longest side.
In the second case, the longest side of b_O was not split. By our construction,
this implies that the projection of b_I along this dimension fully covers this side.
It follows that b_I and b_O have the same longest side length, that is, the same
size. By hypothesis, b_I satisfies the aspect ratio bound, and so it suffices to show
that each side of each child is at least as long as the shortest side of b_I. For
concreteness, suppose that the high child contains b_I (as in Figure 5(c)). Clearly
the high child satisfies this condition. The low child differs from the high child in
only one dimension (namely the dimension that was split). Let x_O, x_I, and x_c
denote the lengths of b_O, b_I, and the low child, respectively, along this
dimension. We assert that b_I overlaps the middle interval of b_O. If not, then it
follows that x_I < x_O/3 ≤ size(b_O)/3 = size(b_I)/3, contradicting the hypothesis
that b_I satisfies the aspect ratio bound. Since b_I overlaps the middle interval, the
splitting plane passes through a face of b_I, implying that the distance from b_I to
the low side of the low child is x_c. But, since b_I is sticky for b_O, it follows that
x_c ≥ x_I. This completes the proof. □
Computing a centroid shrink is more complicated, but the same approach used
in the previous section can still be applied. Recall that the goal is to decompose
the current cell into a constant number of cells, such that each contains at most a
fraction of 2/3 of the data points. As before, this can be done by repeatedly
applying fair splits and recursing on the cell containing the majority of the
remaining points, until the number of points falls below 2/3 of the original.
Problems 1 and 2, which arose in the previous section, arise here as well.
Problem 2 is solved in exactly the same way as before, thus each centroid shrink
will generally produce three nodes in the tree, first a shrink to a box containing
the old inner box, a split separating the inner box from the majority of points,
and a shrink to the new inner box.
To solve Problem 1 we need to eliminate the possibility of performing more
than a constant number of splits before succeeding in nontrivially partitioning the
remaining points. As before, the idea is to compute a minimal box that encloses
both the data points and any inner box that may already be part of the current
cell. Achieving both minimality and stickiness is rather difficult, but if r denotes
the minimum rectangle (not necessarily a box) which encloses the data points
and inner box, then it suffices to construct a box b which contains this rectangle,
and whose size is at most a constant factor larger than the size of r. Once such a
box is computed, O(d) splits are sufficient to generate a nontrivial partition of r.
This in turn implies a nontrivial partition of the point set, or a partition
separating the inner box from the majority of points. This box b must also satisfy
the stickiness conditions: b is sticky for the current outer box, and the inner box
(if it exists) is sticky for b. The construction of such a box is presented in the
proof of the next lemma.
LEMMA 3. Given a cell and the minimum bounding rectangle r enclosing both
the subset of data points and the inner box of the cell (if there is an inner box), in
O(d) time it is possible to construct a box b which is contained within the cell’s outer
box and which contains r, such that
(i) the longest side of b is at most a constant factor larger than the longest side of r,
(ii) the cell’s inner box (if it exists) is sticky for b, and
(iii) b is sticky for the cell’s outer box.
PROOF. Let b_O denote the cell's outer box. Recall that the size of a rectangle
is the length of its longest side. First, observe that if the size of r exceeds 1/36 of
the size of b_O, then we can let b = b_O. Otherwise, the size of r is at most 1/36 of
the size of b_O. (We have made no attempt to optimize this constant.) We
construct b by applying a series of expansions to r.
First, we consider whether the cell has an inner box. If so, let b I be this box. By
hypothesis, r contains b I . We expand each side of r so that it encloses the
intersection of b O with the 3 d regular grid of copies of b I surrounding b I . (See
Figure 6(a).) Note that because b I is sticky for b O , this expansion will necessarily
lie within b O . Subsequent expansions of r cannot cause stickiness with respect to
b I to be violated. This may increase the longest side of r by a factor of 3, so the
906 ARYA ET AL.
size of r is at most 1/12 of the size of b_O. Because b_O satisfies the aspect ratio
bound, the size of r is at most 1/4 of the length of any side of b_O.
Next, we expand r to form a hypercube. Let l_max denote the size of r. Each side
of r whose length is less than l_max is expanded to l_max. (See Figure 6(b).) Since
l_max is less than the length of each side of b_O, this expansion can be contained
within b_O. This expansion does not increase the length of the longest side of r.
Finally, we consider whether r is sticky for b_O. If it is not, then we expand each
of the violating sides of r until it meets the corresponding side of b_O. (See Figure
6(c).) Let b be this expanded rectangle. Since each side of r is not more than 1/4
of the length of the corresponding side of b_O, it follows that this expansion will at
most double the length of any side of r. (In particular, r may be expanded in one
direction along each dimension, but not in both directions.) Thus, the longest
side of b is at most 2l_max, and its shortest side is at least l_max. Thus, b satisfies
the aspect ratio bound. This establishes (i). By construction, b also satisfies
properties (ii) and (iii). The size of b is at most 6 times the size of r. Finally, each
of the three expansion steps can easily be performed in O(d) time. □
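The three expansion steps of this proof can be sketched as follows (a hypothetical helper with our own box representation; the slide-back in step 2 is one way to keep the hypercube inside b_O, which the proof guarantees is possible):

```python
def expand_to_sticky_box(r, b_o, b_i=None):
    """Lemma 3 sketch: grow rectangle r into a box b with r inside b inside
    b_o, where b has bounded aspect ratio and is sticky for b_o.
    Boxes are (lo, hi) pairs of coordinate lists."""
    (rlo, rhi), (olo, ohi) = [list(x) for x in r], [list(x) for x in b_o]
    d = len(rlo)
    size = lambda lo, hi: max(hi[i] - lo[i] for i in range(d))
    # If r is already large relative to b_o, just take b = b_o.
    if size(rlo, rhi) > size(olo, ohi) / 36.0:
        return olo, ohi
    # Step 1: enclose the 3^d grid of copies of the inner box, clipped to b_o.
    if b_i is not None:
        ilo, ihi = b_i
        for i in range(d):
            w = ihi[i] - ilo[i]
            rlo[i] = min(rlo[i], max(olo[i], ilo[i] - w))
            rhi[i] = max(rhi[i], min(ohi[i], ihi[i] + w))
    # Step 2: expand r to a hypercube of side l_max, sliding back if a side
    # would leave the outer box.
    l_max = size(rlo, rhi)
    for i in range(d):
        rhi[i] = rlo[i] + l_max
        if rhi[i] > ohi[i]:
            rlo[i] -= rhi[i] - ohi[i]
            rhi[i] = ohi[i]
    # Step 3: restore stickiness for b_o -- a face closer than the box width
    # to the matching face of b_o (but not touching it) is pushed out to meet it.
    for i in range(d):
        w = rhi[i] - rlo[i]
        if 0.0 < rlo[i] - olo[i] < w:
            rlo[i] = olo[i]
        if 0.0 < ohi[i] - rhi[i] < w:
            rhi[i] = ohi[i]
    return rlo, rhi
```

In the worst case a face is pushed out to the outer wall in one direction per dimension, which is why the longest side at most doubles.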
This lemma solves Problem 1. The centroid shrinking box is computed
essentially as it was in the previous section. We repeatedly compute the enclosing
box b described above. Then we perform O(d) splits until nontrivially partition-
ing the point set. (Note that each trivial split can be performed in O(1) time,
since no partitioning is needed.) Finally, we recurse on the larger half of the
partition. This process is repeated until the number of points decreases by a
factor of 2/3. In spite of the added complexity, the operation generates only three
new nodes in the tree. Partitioning of data points is handled exactly as it was in
the previous algorithm. Thus, the entire construction can be performed in O(dn
log n) time.
2.5. FINAL MODIFICATIONS. This concludes the description of the construc-
tion algorithm for the BBD-tree. However, it will be necessary to perform a few
final modifications to the tree, before describing the nearest neighbor algorithm.
A split or shrink is said to be trivial if one of the children contains no data points.
It is possible for the tree construction algorithms to generate trivial splits or
shrinks (although it can be shown that there can never be more than a constant
number of consecutive trivial partitions). It is not hard to see, however, that any
contiguous sequence of trivial splits and shrinks can be replaced by a single trivial
shrink. We may assume that the data points all lie within the inner box of such a
shrinking node, for otherwise we could simply remove this inner box without
affecting the subdivision. After constructing the BBD-tree, we replace each such
sequence of trivial splits and shrinks by a single trivial shrink. This can be done in
O(n) time by a simple tree traversal.
Approximate Nearest Neighbor Searching 907
We would like to be able to assume that each leaf contains at least one data
point, but this is generally not the case for the leaf nodes resulting from trivial
shrinks. We claim that we can associate a data point with each such empty leaf
cell by borrowing a point from its inner box. Furthermore, we claim that this can
be done so that each data point is associated with at most two leaf cells. To see
this, consider the following borrowing procedure. Each nontrivial split or shrink
node recursively borrows one point from each of its two children, and passes
these to its parent. If the parent is a trivial shrink, it uses one of the points for its
empty leaf child, and passes the other up the tree. Because there are no two
consecutive trivial shrinks or splits, the grandparent must be nontrivial, and so
this procedure succeeds in borrowing a different data point for each empty leaf.
In summary, we have the following characterization of the BBD-tree.
THEOREM 2. Given a set of n data points S in R^d, in O(dn log n) time, we can
construct a binary tree having O(n) nodes representing a hierarchical decomposition
of R^d into cells (satisfying the stickiness properties given earlier) such that
(i) The height of the tree is O(log n), and in general, with every 4 levels of descent
in the tree, the number of points associated with the nodes decreases by at least
a factor of 2/3.
(ii) The cells have bounded aspect ratio, and with every 4d levels of descent in the
tree, the sizes of the associated cells decrease by at least a factor of 2/3.
(iii) Each leaf cell is associated with one data point, which is either contained
within the cell, or contained within the inner box of the cell. No data point is
associated with more than two leaf cells.
and deletions in O(log n) time each. See either Callahan and Kosaraju [1995] or
Bespamyatnikh [1995] for details. A somewhat more practical approach to
insertion and deletion would be to achieve O(log n) amortized time for insertion
and deletion by rebuilding unbalanced subtrees, using the same ideas as scapegoat
trees [Galperin and Rivest 1993]. The key fact is that given an arbitrarily
unbalanced subtree of a box-decomposition tree, it is possible to replace it with a
balanced subtree (representing the same underlying spatial subdivision) in time
linear in the size of the subtree. For example, this can be done by building a
topology tree for the subtree [Frederickson 1985].
3. Essential Properties
Before describing the nearest neighbor algorithm, we enumerate some important
properties of the BBD-tree, which will be relevant to nearest neighbor searching.
These will be justified later. Recall that each cell is either a rectangle, or the
difference of two rectangles, one contained within the other. Recall that the leaf
cells of the BBD-tree form a subdivision of space. The cells of this subdivision
satisfy the following properties.
PROOF. From the 3:1 aspect ratio bound, the smallest side length of a box of
size s is at least s/3. We say that a set of boxes is disjoint if their interiors are
pairwise disjoint. We first show that the maximum number of disjoint boxes of
side length at least s/3 that can overlap any Minkowski ball of radius r is
⌈1 + 6r/s⌉^d. For any m, an L_m Minkowski ball of radius r can be enclosed in an
axis-aligned hypercube of side length 2r. (The tightest fit is realized in the L_∞
case, where the ball and the hypercube are equal.) The densest packing of
axis-aligned rectangles of side length at least s/3 is realized by a regular grid of
cubes of side length exactly s/3. Since an interval of length 2r can intersect at
most ⌈1 + 6r/s⌉ intervals of length s/3, it follows that the number of grid cubes
overlapping the cube of side length 2r is at most ⌈1 + 6r/s⌉^d. Therefore, this is
an upper bound on the number of disjoint boxes of size at least s that can overlap any
Minkowski ball of radius r.
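The one-dimensional counting step of this argument is easy to check numerically (a hypothetical brute-force helper; the grid is anchored at the origin):

```python
import math
import random

def grid_intervals_met(r, s, offset):
    """Count the grid intervals of length s/3 (grid anchored at 0) that an
    interval of length 2r starting at `offset` overlaps."""
    cell = s / 3.0
    return math.ceil((offset + 2.0 * r) / cell) - math.floor(offset / cell)

# An interval of length 2r meets at most ceil(1 + 6r/s) intervals of length s/3.
random.seed(1)
r, s = 1.0, 0.7
bound = math.ceil(1.0 + 6.0 * r / s)
assert all(grid_intervals_met(r, s, random.uniform(0.0, 10.0)) <= bound
           for _ in range(1000))
```

Raising the per-dimension count to the d-th power gives the grid-packing bound used in the proof.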
The above argument cannot be applied immediately to the outer boxes of the
leaf cells, because they are not disjoint from the leaves contained in their inner
boxes. To complete the proof, we show that we can replace any set of leaf cells
each of size at least s that overlap the Minkowski ball with an equal number of
disjoint boxes (which are not necessarily part of the spatial subdivision) each of
size at least s that also overlap the ball. Then we apply the above argument to
these disjoint boxes.
For each leaf cell of size at least s that either has no inner box, has an inner
box of size less than s, or has an inner box that does not overlap the ball, we take
the outer box of this cell to be in the set. In these cases, the inner box cannot
contribute a leaf to the set of overlapping cells.
On the other hand, consider a leaf cell c, formed as the difference of an outer
box b_O and inner box b_I, such that the size of b_I is at least s, and both b_I and c
overlap the ball. Since b_O has at most one inner box, and by convexity of boxes
and balls, it follows that there is a point p on the boundary between c and b_I that
lies within the ball. (See Figure 7.) Any neighborhood about p intersects the
interiors of both c and b_I. By stickiness, we know that each of the 3^d − 1
congruent copies of b_I surrounding b_I either lies entirely within b_O or has
interior disjoint from b_O. Clearly, there must be one such box containing p
on its boundary, and this box is contained within b_O. (In Figure 7, this is the box
lying immediately below p.) We take this box to replace c in the set. This box is
disjoint from b_I, its size is equal to the size of b_I, and it overlaps the ball.
Because leaf cells have disjoint interiors, and each has only one inner box, it
follows that the replacement box will be disjoint from all other replacement
boxes. Now, applying the above argument to the disjoint replacement boxes
completes the proof. □
determined in O(d log n) time. Thus, the time needed to enumerate the nearest
m cells to the query point is O(md log n). This establishes property (e).
Before implementing this data structure as stated, there are a number of
practical compromises that are worth mentioning. First, we have observed that
the size of the priority queue is typically small enough that it suffices to use a
standard binary heap (see, e.g., Cormen et al. [1990]), rather than the somewhat
more sophisticated Fibonacci heap. It is also worth observing that splitting nodes
can be processed quite a bit more efficiently than shrinking nodes. Each
shrinking node requires O(d) processing time, to determine whether the query
point lies within the inner box, or to determine the distance from the query point
to the inner box. However, it is possible to show that splitting nodes containing
no inner box can be processed in time independent of dimension. It takes only
one arithmetic comparison to determine on which side of the splitting plane the
query point lies. Furthermore, with any Minkowski metric, it is possible to
incrementally update the distance from the parent box to each of its children
when a split is performed. The technique, called incremental distance computation,
is described in Arya and Mount [1993b]. Intuitively, it is based on the
observation that, for any Minkowski metric, it suffices to maintain the sum of the
appropriate powers of the coordinate differences between the query point and
the nearest point of the outer box. When a split is performed, the closer child is
at the same distance as the parent, and the further child’s distance differs only in
the contribution of the single coordinate along the splitting dimension. The
resulting improvement in running time can be of significant value in higher
dimensions. This is another reason that shrinking should be performed sparingly,
and only when it is needed to guarantee balance in the BBD-tree.
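The incremental update described above can be sketched as follows (function and parameter names are our own; this follows the idea in Arya and Mount [1993b], not their code):

```python
def split_child_distance(parent_dist_pow, q, box_lo, box_hi, dim, cut, m=2):
    """Incremental distance computation for a splitting node (a sketch).
    parent_dist_pow is sum_i |excess_i|^m, the m-th power of the L_m
    distance from query point q to the parent box.  Returns the
    corresponding values for the low and high children of a split of the
    box at coordinate `cut` along dimension `dim`."""
    # Contribution of coordinate `dim` to the parent's distance.
    old = max(box_lo[dim] - q[dim], 0.0, q[dim] - box_hi[dim]) ** m
    # Low child occupies [box_lo[dim], cut]; high child [cut, box_hi[dim]].
    low = max(box_lo[dim] - q[dim], 0.0, q[dim] - cut) ** m
    high = max(cut - q[dim], 0.0, q[dim] - box_hi[dim]) ** m
    base = parent_dist_pow - old
    return base + low, base + high
```

Only one coordinate's contribution changes, so the update is O(1) rather than O(d), and the closer child keeps exactly the parent's distance.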
LEMMA 5. The number of leaf cells visited by the nearest neighbor algorithm is
at most C_{d,ε} ≤ ⌈1 + 6d/ε⌉^d for any Minkowski metric.
PROOF. Let r denote the distance from the query point to the last leaf cell
that did not cause the algorithm to terminate. We know that all cells that have
been encountered so far are within distance r from the query point. If p is the
closest data point encountered so far, then because we did not terminate we have
r(1 + ε) ≤ dist(q, p).
We claim that no cell seen so far can be of size less than rε/d. Suppose that such
a cell was visited. This cell is within distance r of q, and hence overlaps a ball of
radius r centered at q. The diameter of this cell in any Minkowski metric is at
most d times its longest side length (in general, d^{1/m} times the longest side in the
L_m metric), and hence is less than rε. Since the cell is associated with a data
point lying within the outer box of the cell, the search must have encountered a
data point at distance less than r + rε = r(1 + ε) from q. However, this
contradicts the hypothesis that p is the closest point seen so far.
Thus, the number of cells visited up until termination is bounded by the
number of cells of size at least rε/d that can overlap a ball of radius r.
From property (d), we know that the number of such cells is a function of ε
and d. Using the bounds derived in Lemma 4, the number of cells is at most
⌈1 + 6d/ε⌉^d. □
By combining the results of this and the previous sections, we have established
Theorem 1(i). The extra factor of d differentiating c_{d,ε} in the theorem from C_{d,ε}
in the lemma above is due to the O(d) processing time to compute the distance
from the query point to each visited node in the tree.
PROOF. To bound the number of leaf cells visited by the algorithm, recall
from property (b) that each point is associated with at most two cells. Thus, the k
data points reported by the search were contributed by at most 2k leaf cells that
were visited in the search. We claim that the algorithm encounters at most C_{d,ε}
other noncontributing leaf cells.
The argument is a straightforward generalization of the one used in Lemma 5.
Consider the set of visited leaf cells that did not contribute a point to the final
answer. Let r denote the distance to the last cell of this set that did not cause
termination. Let p be the kth closest point encountered so far. As in Lemma 5,
we have r(1 + ε) ≤ dist(q, p), and so none of the noncontributing cells seen so
far can be of size less than rε/d, or else they would have contributed a point that
is closer than p. The final result follows by applying Lemma 4. □
To complete the proof, we recall that the algorithm spends O(d log n) time to
process each leaf cell, and in time O(log k) # O(log n) we determine whether
the point is among the k nearest points encountered so far, and add it to the set
if it is. Combining this with the earlier remarks of this section establishes
Theorem 1(ii).
6. Experimental Results
In order to establish the practical value of our algorithms, we implemented them
and ran a number of experiments on a number of different data sizes and with
point sets sampled from a number of different distributions.
Our implementation differed slightly from the description in the previous
sections. First, in preprocessing we did not perform the partitioning using the
asymptotically efficient method described in Section 2, which stores the points
sorted along each of the d dimensions. Instead, we opted for the much simpler
technique of applying a standard partitioning algorithm as used in QuickSort (see
Cormen et al. [1990]). This does not affect the structure of the resulting tree, but
if splits are very unbalanced then the preprocessing may take longer than O(dn
log n) time. On the other hand, we save a factor of d with each invocation, since
only one coordinate is accessed with each partition. Second, we did not use the
rather sophisticated algorithms for accelerating the shrinking operation. We just
performed repeated splits. We observed no unusually high preprocessing times
for the data sets that were tested.
We mentioned earlier that splitting is generally preferred to shrinking because
of the smaller factors involved. However, splitting without shrinking may result in
trees of size greater than O(n) and height greater than O(log n). In our
implementation we performed shrinking only if a sequence of d/2 consecutive
splits failed to reduce the fraction of points by at least one half. For most of the
distributions that we tested, no shrinking nodes were generated. Even for the
highly clustered distributions, a relatively small fraction of shrinking was observed
(ranging from 5% to 20% of the total nodes in the tree). In part, this explains
why simple data structures such as kd-trees perform well for most point
distributions.
As in Arya and Mount [1993b], incremental distance calculation (described in
Section 3) was used to speed up distance calculations for each node. We
experimented with two schemes for selecting splitting planes. One was the
midpoint-split rule described in Section 2.3 and the other was a variant of the
middle-interval rule described in Section 2.4. The latter rule, called the fair-split
rule, was inspired by the term introduced in Callahan and Kosaraju [1992]. Given
a box, we first determine the sides that can be split without violating the 3:1
aspect ratio bound. Given a subset of the data points, define the spread of these
points along some dimension to be the difference between the maximum and
minimum coordinates in this dimension. Among the sides that can be split, select
the dimension along which the points have maximum spread, and then split along
this dimension. The splitting hyperplane is orthogonal to this dimension and is
positioned so the points are most evenly distributed on either side, subject to the
aspect ratio bound.
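A simplified, conservative reading of the fair-split rule can be sketched as follows (helper names and the sufficient aspect-ratio test are our own):

```python
def fair_split(points, box_lo, box_hi, aspect=3.0):
    """Fair-split sketch: among the sides that can be cut without violating
    the aspect-ratio bound, split the dimension of maximum point spread,
    placing the cut as evenly as the bound allows."""
    d = len(box_lo)
    side = [box_hi[i] - box_lo[i] for i in range(d)]
    # Conservative sufficient condition: require each resulting piece to be
    # at least (longest side)/aspect wide, so no child violates 3:1.
    min_piece = max(side) / aspect
    splittable = [i for i in range(d) if side[i] >= 2.0 * min_piece]
    spread = [max(p[i] for p in points) - min(p[i] for p in points)
              for i in range(d)]
    dim = max(splittable, key=lambda i: spread[i])
    coords = sorted(p[dim] for p in points)
    cut = coords[len(coords) // 2]          # aim near the point median...
    # ...but clamp the cut into the interval allowed by the aspect bound.
    cut = min(max(cut, box_lo[dim] + min_piece), box_hi[dim] - min_piece)
    return dim, cut
```

The longest side always satisfies the conservative test, so `splittable` is never empty.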
We ran experiments on these two data structures, and for additional compari-
son we also implemented an optimized kd-tree [Friedman et al. 1977]. The cut
planes were placed at the median, orthogonal to the coordinate axis having
maximum spread. Although the kd-tree is guaranteed to be of logarithmic depth,
there is no guaranteed bound on the aspect ratios of the resulting cells (and
indeed ratios in the range from 10:1 to 20:1 and even higher were not
uncommon). We know of no prior work suggesting the use of a kd-tree for
approximate nearest neighbor queries, but the same termination condition given
in Section 4 can be applied here. Unlike the box-decomposition tree, we cannot
prove upper bounds on the execution time of query processing. Given the
similarity to our own data structure, one would expect that running times would
be similar for typical point distributions, and our experiments bear this out.
Our experience showed that adjusting the bucket size, that is, the maximum
number of points allowed for each leaf cell, affects the search time. For the more
flexible kd-tree and the fair-split rule, we selected a bucket size of 5, but found
that for the more restricted midpoint-split rule, a bucket size of 8 produced
somewhat better results.
The experiments were run on a Sun Sparc 20 running Solaris. Each experiment
consisted of 100,000 data points in dimension 16 and the averages were
computed over 1,000 query points. More query points were taken when measur-
ing CPU times, due to greater variations in CPU time caused by varying system
loads. For each query we computed the nearest neighbor in the L_2 metric.
Except where noted, query points and data points were taken from the same
distribution.
Typical preprocessing times ranged from 20 to 100 CPU seconds. The higher
running times were most evident with highly clustered data sets and when using
the midpoint-split rule. This is because shrinking was needed the most in these
cases. In contrast, the optimized kd-tree, whose running time is independent of
the data distribution, had preprocessing times uniformly around 20 CPU seconds.
6.1. DISTRIBUTIONS TESTED. The distributions that we tested are listed below.
The correlated Gaussian and correlated Laplacian point distributions were
chosen to model data from applications in speech processing. These two point
distributions were formed by grouping the output of autoregressive sources into
vectors of length d. An autoregressive source uses the following recurrence to
generate successive outputs:
X_n = ρX_{n−1} + W_n,
Uniform. Each coordinate was chosen uniformly from the interval [0, 1].
Gaussian. Each coordinate was chosen from the Gaussian distribution with zero
mean and unit variance.
Laplace. Each coordinate was chosen from the Laplacian distribution with zero
mean and unit variance.
Correlated Gaussian. W_n was chosen so that the marginal density of X_n is
normal with unit variance.
Correlated Laplacian. W_n was chosen so that the marginal density of X_n is
Laplacian with unit variance.
Clustered Gaussian. Ten points were chosen from the uniform distribution and
a Gaussian distribution with a standard deviation 0.05 was centered at each
point.
Clustered Segments. Eight orthogonal line segments were sampled from a hyper-
cube as follows. For each line segment, a random coordinate axis x_k was
selected, and a point p was sampled uniformly from the hypercube. The line
segment is the intersection of the hypercube with the line parallel to x_k
passing through p. An equal number of points were generated uniformly along
the length of each line segment plus a Gaussian error with standard deviation
of 0.001.
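The autoregressive sources above can be sketched as follows for the Gaussian case (ρ = 0.9 and the seed are illustrative assumptions; the paper specifies only that the marginal variance is unity):

```python
import math
import random

def correlated_gaussian(n, d, rho=0.9, seed=0):
    """Sketch of the correlated Gaussian source: successive outputs follow
    X_n = rho * X_{n-1} + W_n, with W_n Gaussian and scaled so that X_n
    has unit marginal variance, grouped into n vectors of length d."""
    rng = random.Random(seed)
    sigma_w = math.sqrt(1.0 - rho * rho)   # keeps Var(X_n) = 1 in steady state
    x = rng.gauss(0.0, 1.0)                # start in the stationary distribution
    vectors = []
    for _ in range(n):
        vec = []
        for _ in range(d):
            x = rho * x + rng.gauss(0.0, sigma_w)
            vec.append(x)
        vectors.append(vec)
    return vectors
```

Larger ρ produces stronger correlation between consecutive coordinates, which concentrates the point set near a low-dimensional subspace.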
For the clustered segments distribution, five trials were run, with newly
generated cluster centers for each trial, and each involving 200 query points.
Query points were sampled from a uniform distribution. We show results only for
the uniform distribution and two extreme cases, the correlated Laplacian and
clustered segments distributions. The results for other distributions generally
varied between the uniform case and the correlated Laplacian.
FIG. 9. (a) Floating point operations and (b) CPU time versus ε for the uniform distribution.
FIG. 10. (a) Floating point operations and (b) CPU time versus ε for the correlated Laplacian
distribution.
FIG. 11. (a) Floating point operations and (b) CPU time versus ε for the clustered segments
distribution.
over all query points is called the average relative error (or simply average error).
This is shown in Figure 12 for the uniform and correlated Laplacian distribu-
tions. Again, most of the other distributions showed a similar behavior. The
results show that even for very large values of ε, the average error committed is
typically at least an order of magnitude smaller than ε. Although we have no theoretical
justification for this phenomenon, this better average-case performance may be
of interest in applications where average error over a large number of queries is
of interest, and suggests an interesting topic for future study.
A related statistic is how often the algorithm succeeds in finding the true
nearest neighbor as a function of ε. We found that the algorithm manages to
locate the true nearest neighbor in a surprisingly large number of instances, even
with relatively large values of ε. To show this, we plotted the fraction of instances
in which the algorithm fails to return the true nearest neighbor for these
distributions. Results are shown in Figure 13.
FIG. 12. Average error for the (a) uniform and (b) correlated Laplacian distributions versus ε.
FIG. 13. Fraction of nearest neighbors missed for the (a) uniform and (b) correlated Laplacian
distributions versus ε.
more accurate sense of what sort of factors could be expected in practice, we ran
an experiment to measure how the number of cells visited by the algorithm varies
as a function of d and e. We also sought an analytical explanation of these
results.
We chose a relatively well-behaved case to consider for these experiments,
namely uniformly distributed points in a unit hypercube, and the L_∞ metric.
Because of the negligible differences in the various data structures for uniformly
distributed data (as evidenced by Figure 9 above), we ran experiments only for
the kd-tree using a bucket size of 1. We considered dimensions varying from 1 to
16, and values of ε varying from 0 to 10. We considered data sets of size 100,000,
and for each data set averaged results over 1,000 queries.
A plot of the relationship between the logarithm (base 10) of the number of
leaf cells visited versus ε and dimension is shown in Figure 14(a). Indeed, the
figure shows that the number of cells is significantly smaller than the huge values
predicted by the above formula. For example, for ε = 1 and dimension 16, the
formula provides the unusable bound of 10^32, whereas the plot shows that the
number of cells is roughly 100 for this distribution.
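For concreteness, the quoted bound can be evaluated directly (using the Lemma 5 form ⌈1 + 6d/ε⌉^d; the script below is illustrative):

```python
import math

d, eps = 16, 1.0
# Worst-case visited-cell bound from Lemma 5: ceil(1 + 6d/eps)^d.
bound = math.ceil(1 + 6 * d / eps) ** d
print(bound)   # about 10^32, versus roughly 100 cells observed empirically
```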
We can provide an informal analytical justification for these empirical results.
We follow the general structure of the analysis by Friedman et al. [1977]. For
large uniformly distributed data sets, it is reasonable to model a kd-tree’s
decomposition of a unit hypercube as a regular grid of hypercubes, where each
hypercube has side length roughly a = 1/n^{1/d}. Ignoring boundary effects, the
expected side length of the L_∞ nearest neighbor ball for a random query point is
also 1/n^{1/d}. For ε > 0, our algorithm will need to visit any leaf cell that overlaps
a shrunken nearest neighbor ball whose side length is b = a/(1 + ε). It is easy
to see that the expected number of intervals of width a that are overlapped by a
randomly placed interval of width b is 1 + b/a. It follows that the expected number of
grid cubes of width a that are overlapped by a randomly placed cube of width b is

(1 + b/a)^d = (1 + 1/(1 + ε))^d = ((2 + ε)/(1 + ε))^d.
From this, it follows that for any fixed dimension, a linear relationship is to be
expected between the logarithm of the number of cells and the logarithm of
(2 + ε)/(1 + ε). This relationship is evidenced in Figure 14(b). (Note that both axes
are on a logarithmic scale.) Boundary effects probably play a role since the
empirically observed values are somewhat smaller than predicted by the formula
[Arya et al. 1995].
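The model's prediction is easy to tabulate (a hypothetical helper; boundary effects are ignored, as in the derivation above):

```python
import math

def log10_cells(d, eps):
    """log10 of the expected number of overlapped grid cells,
    ((2 + eps) / (1 + eps))**d, from the uniform-grid model above."""
    return d * math.log10((2.0 + eps) / (1.0 + eps))

# For fixed d the prediction is linear in log10((2+eps)/(1+eps)) with slope d,
# matching the straight lines of Figure 14(b); it decreases as eps grows.
print(log10_cells(16, 0.0), log10_cells(16, 1.0))
```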
7. Conclusions
We have shown that through the use of the BBD-tree, (1 + ε)-approximate
nearest neighbor queries for a set of n points in R^d can be answered in O(c_{d,ε}
log n) time, where c_{d,ε} ≤ d⌈1 + 6d/ε⌉^d is a constant depending only on
dimension and ε. The data structure uses optimal O(dn) space and can be built
in O(dn log n) time. The algorithms we have presented are simple (especially
the midpoint splitting rule) and easy to implement. Empirical studies indicate
good performance on a number of different point distributions. Unlike many
recent results on approximate nearest neighbor searching, the preprocessing is
independent of ε, and so different levels of precision can be provided from one
data structure. Although constant factors in query time grow exponentially with
dimension, constant factors in space and preprocessing time grow only linearly in
d. We have also shown that the algorithms can be generalized to enumerate
approximate k-nearest neighbors in additional O(kd log n) time. Using auxiliary
data structures, it is possible to handle point insertions and deletions in O(log n)
time each.
A somewhat simplified version of the BBD-tree has been implemented in
C++. The software is available on the web from http://www.cs.umd.edu/~mount/ANN/.
There are a number of important open problems that remain. One is that of
improving constant factors for query time. Given the practical appeal of a data
structure of optimal O(dn) size for large data sets, an important question is what
lower bounds can be established for approximate nearest neighbor searching
using data structures of this size. Another question is whether the approximate
kth nearest neighbor can be computed in time that is polylogarithmic in both n
and k.
REFERENCES
AGARWAL, P. K., AND MATOUŠEK, J. 1993. Ray shooting and parametric search. SIAM J. Comput.
22, 4, 794–806.
ARYA, S., AND MOUNT, D. M. 1993a. Algorithms for fast vector quantization. In Proceedings of the
DCC’93: Data Compression Conference. J. A. Storer and M. Cohn, eds. IEEE Press, New York, pp.
381–390.
ARYA, S., AND MOUNT, D. M. 1993b. Approximate nearest neighbor queries in fixed dimensions.
In Proceedings of the 4th ACM–SIAM Symposium on Discrete Algorithms. ACM, New York, pp.
271–280.
ARYA, S., AND MOUNT, D. M. 1995. Approximate range searching. In Proceedings of the 11th
Annual ACM Symposium on Computational Geometry (Vancouver, B.C., Canada, June 5–7). ACM,
New York, pp. 172–181.
ARYA, S., MOUNT, D., AND NARAYAN, O. 1995. Accounting for boundary effects in nearest
neighbor searching. In Proceedings of the 11th Annual ACM Symposium on Computational Geometry
(Vancouver, B.C., Canada, June 5–7). ACM, New York, pp. 336–344.
ARYA, S., MOUNT, D. M., NETANYAHU, N., SILVERMAN, R., AND WU, A. Y. 1994. An optimal
algorithm for approximate nearest neighbor searching in fixed dimensions. In Proceedings of the 5th
ACM–SIAM Symposium on Discrete Algorithms. ACM, New York, pp. 573–582.
BEI, C.-D., AND GRAY, R. M. 1985. An improvement of the minimum distortion encoding
algorithm for vector quantization. IEEE Trans. Commun. 33, 1132–1133.
BENTLEY, J. L., WEIDE, B. W., AND YAO, A. C. 1980. Optimal expected-time algorithms for closest
point problems. ACM Trans. Math. Softw. 6, 4, 563–580.
BERCHTOLD, S., BÖHM, C., KEIM, D. A., AND KRIEGEL, H.-P. 1997. A cost model for nearest
neighbor search in high-dimensional data space. In Proceedings of the 16th Annual ACM SIGACT-
SIGMOD-SIGART Symposium on Principles of Database Systems (Tucson, Ariz., May 12–14). ACM,
New York, pp. 78–86.
BERCHTOLD, S., KEIM, D. A., AND KRIEGEL, H.-P. 1996. The X-tree: An index structure for high
dimensional data. In Proceedings of the 22nd VLDB Conference. pp. 28–39.
BERN, M. 1993. Approximate closest-point queries in high dimensions. Inf. Proc. Lett. 45, 95–99.
BERN, M., EPPSTEIN, D., AND TENG, S.-H. 1993. Parallel construction of quadtrees and quality
triangulations. In Proceedings of the 3rd Workshop Algorithms Data Structures. Lecture Notes in
Computer Science, vol. 709. Springer-Verlag, New York, pp. 188–199.
BESPAMYATNIKH, S. N. 1995. An optimal algorithm for closest pair maintenance. In Proceedings of
the 11th Annual ACM Symposium on Computational Geometry (Vancouver, B.C., Canada, June
5–7). ACM, New York, pp. 152–161.
CALLAHAN, P. B., AND KOSARAJU, S. R. 1992. A decomposition of multi-dimensional point-sets
with applications to k-nearest-neighbors and n-body potential fields. In Proceedings of the 24th
Annual ACM Symposium on the Theory of Computing (Vancouver, B.C., Canada, May 4–6). ACM,
New York, pp. 546–556.
CALLAHAN, P. B., AND KOSARAJU, S. R. 1995. Algorithms for dynamic closest pair and n-body
potential fields. In Proceedings of the 6th Annual ACM–SIAM Symposium on Discrete Algorithms
(San Francisco, Calif., Jan. 22–24). ACM, New York, pp. 263–272.
CHAN, T. 1997. Approximate nearest neighbor queries revisited. In Proceedings of the 13th Annual
ACM Symposium on Computational Geometry (Nice, France, June 4–6). ACM, New York, pp.
352–358.
CHAZELLE, B. 1982. A theorem on polygon cutting with applications. In Proceedings of the 23rd
Annual IEEE Symposium on Foundations of Computer Science. IEEE Computer Society Press, Los
Alamitos, Calif., pp. 339–349.
CLARKSON, K. L. 1983. Fast algorithms for the all nearest neighbors problem. In Proceedings of the
24th Annual IEEE Symposium on the Foundations of Computer Science. IEEE Computer Society
Press, Los Alamitos, Calif., pp. 226–232.
CLARKSON, K. L. 1988. A randomized algorithm for closest-point queries. SIAM J. Comput. 17, 4,
830–847.
CLARKSON, K. L. 1994. An algorithm for approximate closest-point queries. In Proceedings of the
10th Annual ACM Symposium on Computational Geometry (Stony Brook, N.Y., June 6–8). ACM,
New York, pp. 160–164.
CLEARY, J. G. 1979. Analysis of an algorithm for finding nearest neighbors in Euclidean space.
ACM Trans. Math. Softw. 5, 2 (June), 183–192.
CORMEN, T. H., LEISERSON, C. E., AND RIVEST, R. L. 1990. Introduction to Algorithms. MIT Press,
Cambridge, Mass.
COST, S., AND SALZBERG, S. 1993. A weighted nearest neighbor algorithm for learning with
symbolic features. Mach. Learn. 10, 57–78.
COVER, T. M., AND HART, P. E. 1967. Nearest neighbor pattern classification. IEEE Trans. Info.
Theory 13, 57–67.
DE BERG, M., VAN KREVELD, M., OVERMARS, M., AND SCHWARZKOPF, O. 1997. Computational
Geometry: Algorithms and Applications. Springer-Verlag, New York.
DEERWESTER, S., DUMAIS, S. T., FURNAS, G. W., LANDAUER, T. K., AND HARSHMAN, R. 1990.
Indexing by latent semantic analysis. J. Amer. Soc. Inf. Sci. 41, 391–407.
DEVROYE, L., AND WAGNER, T. J. 1982. Nearest neighbor methods in discrimination. In Handbook
of Statistics, vol. 2. P. R. Krishnaiah and L. N. Kanal, eds. North-Holland, Amsterdam, The
Netherlands.
DUDA, R. O., AND HART, P. E. 1973. Pattern Classification and Scene Analysis. Wiley, New York.
EDELSBRUNNER, H. 1987. Algorithms in Combinatorial Geometry, vol. 10 of EATCS Monographs on
Theoretical Computer Science. Springer-Verlag, New York.
FARVARDIN, N., AND MODESTINO, J. W. 1985. Rate-distortion performance of DPCM schemes for
autoregressive sources. IEEE Trans. Inf. Theory 31, 3 (May), 402–418.
FAYYAD, U. M., PIATETSKY-SHAPIRO, G., SMYTH, P., AND UTHURUSAMY, R. 1996. Advances in
Knowledge Discovery and Data Mining. AAAI Press/MIT Press, Cambridge, Mass.
FEDER, T., AND GREENE, D. 1988. Optimal algorithms for approximate clustering. In Proceedings
of the 20th Annual ACM Symposium on Theory of Computing (Chicago, Ill., May 2–4). ACM, New
York, pp. 434–444.
FLICKNER, M., SAWHNEY, H., NIBLACK, W., ASHLEY, J., HUANG, Q., DOM, B., GORKANI, M., HAFNER,
J., LEE, D., PETKOVIC, D., STEELE, D., AND YANKER, P. 1995. Query by image and video content:
The QBIC system. IEEE Computer 28, 23–32.
FREDERICKSON, G. N. 1985. Data structures for on-line updating of minimum spanning trees, with
applications. SIAM J. Comput. 14, 781–798.
FREDERICKSON, G. N. 1993. A data structure for dynamically maintaining rooted trees. In Proceed-
ings of the 4th Annual ACM–SIAM Symposium on Discrete Algorithms. ACM, New York, pp.
175–194.
FREDMAN, M. L., AND TARJAN, R. E. 1987. Fibonacci heaps and their uses in improved network
optimization algorithms. J. ACM 34, 3 (July), 596–615.
FRIEDMAN, J. H., BENTLEY, J. L., AND FINKEL, R. A. 1977. An algorithm for finding best matches
in logarithmic expected time. ACM Trans. Math. Softw. 3, 3 (Sept.), 209–226.
FRIEDMAN, J. H., BASKETT, F., AND SHUSTEK, L. J. 1975. An algorithm for finding nearest
neighbors. IEEE Trans. Comput. C-24, 10, 1000–1006.
GALPERIN, I., AND RIVEST, R. L. 1993. Scapegoat trees. In Proceedings of the 4th ACM–SIAM
Symposium on Discrete Algorithms. ACM, New York, pp. 165–174.
GERSHO, A., AND GRAY, R. M. 1991. Vector Quantization and Signal Compression. Kluwer
Academic, Boston, Mass.
GUAN, L., AND KAMEL, M. 1992. Equal-average hyperplane partitioning method for vector quan-
tization of image data. Patt. Recog. Lett. 13, 693–699.
INDYK, P., AND MOTWANI, R. 1998. Approximate nearest neighbors: Towards removing the curse
of dimensionality. In Proceedings of the 30th Annual ACM Symposium on Theory of Computing
(Dallas, Tex., May 23–26). ACM, New York, pp. 604–613.
KLEINBERG, J. M. 1997. Two algorithms for nearest-neighbor search in high dimensions. In
Proceedings of the 29th Annual ACM Symposium on Theory of Computing (El Paso, Tex., May 4–6).
ACM, New York, pp. 599–608.
KUSHILEVITZ, E., OSTROVSKY, R., AND RABANI, Y. 1998. Efficient search for approximate nearest
neighbor in high dimensional spaces. In Proceedings of the 30th Annual ACM Symposium on
Theory of Computing (Dallas, Tex., May 23–26). ACM, New York, pp. 614–623.
LEE, C.-H., AND CHEN, L.-H. 1994. Fast closest codeword search algorithm for vector quantisation.
IEE Proc. Vis. Image Sig. Proc. 141, 143–148.
LIN, K. I., JAGADISH, H. V., AND FALOUTSOS, C. 1994. The TV-tree: An index structure for
high-dimensional data. VLDB J. 3, 4, 517–542.
MEISER, S. 1993. Point location in arrangements of hyperplanes. Inf. Comput. 106, 2, 286–303.
MOUNT, D. M., NETANYAHU, N., SILVERMAN, R., AND WU, A. Y. 1995. Chromatic nearest
neighbor searching: A query sensitive approach. In Proceedings of the 7th Canadian Conference on
Computational Geometry (Quebec City, Que., Canada, Aug. 10–13). pp. 261–266.
PREPARATA, F. P., AND SHAMOS, M. I. 1985. Computational Geometry: An Introduction. Springer-
Verlag, New York.
RIVEST, R. L. 1974. On the optimality of Elias's algorithm for performing best-match searches. In
Information Processing. North-Holland Publishing Company, Amsterdam, The Netherlands, pp.
678–681.
ROUSSOPOULOS, N., KELLEY, S., AND VINCENT, F. 1995. Nearest neighbor queries. In Proceedings
of the 1995 ACM SIGMOD Conference on Management of Data (San Jose, Calif., May 23–25).
ACM, New York, pp. 71–79.
SAMET, H. 1990. The Design and Analysis of Spatial Data Structures. Addison-Wesley, Reading,
Mass.
SCHWARZ, C., SMID, M., AND SNOEYINK, J. 1994. An optimal algorithm for the on-line closest-pair
problem. Algorithmica 12, 18–29.
SLEATOR, D. D., AND TARJAN, R. E. 1983. A data structure for dynamic trees. J. Comput. Syst. Sci.
26, 362–391.
SPROULL, R. L. 1991. Refinements to nearest-neighbor searching. Algorithmica 6, 579–589.
VAIDYA, P. M. 1989. An O(n log n) algorithm for the all-nearest-neighbors problem. Disc.
Comput. Geom. 4, 101–115.
WHITE, D. A., AND JAIN, R. 1996. Similarity indexing with the SS-tree. In Proceedings of the 12th
IEEE International Conference on Data Engineering. IEEE Computer Society Press, Los Alamitos,
Calif., pp. 516–523.
YAO, A. C., AND YAO, F. F. 1985. A general approach to d-dimensional queries. In Proceedings of
the 17th Annual ACM Symposium on Theory of Computing (Providence, R.I., May 6–8). ACM, New
York, pp. 163–168.