Robust Kinetic Convex Hulls in 3D
Abstract. Kinetic data structures provide a framework for computing combinatorial properties of continuously moving objects. Although kinetic data structures for many problems have been proposed, some difficulties remain in devising and implementing them, especially robustly. One set of difficulties stems from the required update mechanisms used for processing certificate failures: devising efficient update mechanisms can be difficult, especially for sophisticated problems such as those in 3D. Another set of difficulties arises due to the strong assumption in the framework that the update mechanism is invoked with a single event. This assumption requires ordering the events precisely, which is generally expensive. This assumption also makes it difficult to deal with simultaneous events that arise due to degeneracies or due to intrinsic properties of the kinetized algorithms. In this paper, we apply advances on self-adjusting computation to provide a robust motion simulation technique that combines kinetic event-based scheduling and the classic idea of fixed-time sampling. The idea is to divide time into a lattice of fixed-size intervals, and process events at the resolution of an interval. We apply the approach to the problem of kinetic maintenance of convex hulls in 3D, a problem that has been open since the 1990s. We evaluate the effectiveness of the proposal experimentally. Using the approach, we are able to run simulations consisting of tens of thousands of points robustly and efficiently.
Introduction
Acar, Blelloch, and Tangwongsan are supported in part by a gift from Intel.
D. Halperin and K. Mehlhorn (Eds.): ESA 2008, LNCS 5193, pp. 29–40, 2008.
© Springer-Verlag Berlin Heidelberg 2008
certificates fail, i.e., change value. When a certificate fails, the motion simulator notifies the data structure representing the property. The data structure then updates the computed property and the proof, by deleting the certificates that are no longer valid and by inserting new certificates. To determine the time at which the certificates fail, it is typically assumed that the points move along polynomial trajectories of time. When a comparison is performed, the polynomial that represents the comparison is calculated; the roots of this polynomial at which the sign of the polynomial changes become the failure times of the computed certificate.
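As an illustration of how failure times arise from certificate polynomials, here is a minimal sketch (ours, not the paper's code) for a 2D orientation certificate of points moving along linear trajectories; the 3D plane-side tests discussed above work the same way, one degree higher.

```python
def orient_poly(p, q, r):
    """Certificate polynomial for the 2D orientation test of three points
    moving linearly: each point is (x, y, vx, vy), so its position at time
    t is (x + vx*t, y + vy*t).  Returns coefficients [c0, c1, c2] of
    cross(q(t) - p(t), r(t) - p(t)) = c0 + c1*t + c2*t^2."""
    (px, py, pvx, pvy), (qx, qy, qvx, qvy), (rx, ry, rvx, rvy) = p, q, r
    # Relative positions and velocities: u(t) = u0 + u1*t, w(t) = w0 + w1*t.
    ux0, uy0, ux1, uy1 = qx - px, qy - py, qvx - pvx, qvy - pvy
    wx0, wy0, wx1, wy1 = rx - px, ry - py, rvx - pvx, rvy - pvy
    c0 = ux0 * wy0 - uy0 * wx0
    c1 = ux0 * wy1 + ux1 * wy0 - uy0 * wx1 - uy1 * wx0
    c2 = ux1 * wy1 - uy1 * wx1
    return [c0, c1, c2]

def failure_times(coeffs):
    """Real roots at which the (degree <= 2) certificate changes sign."""
    c0, c1, c2 = coeffs
    if c2 == 0:
        return [] if c1 == 0 else [-c0 / c1]
    disc = c1 * c1 - 4 * c2 * c0
    if disc <= 0:          # no real root, or a double root: no sign change
        return []
    s = disc ** 0.5
    return sorted([(-c1 - s) / (2 * c2), (-c1 + s) / (2 * c2)])
```

For example, with p = (0,0) and q = (1,0) fixed and r = (0,1) moving straight down, the certificate is 1 − t, which fails at t = 1, exactly when the three points become collinear.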
Since their introduction [BGH99], many kinetic data structures have been designed and analyzed. We refer the reader to survey articles [AGE+02, Gui04] for references to various kinetic data structures, but many problems, especially in three dimensions, remain essentially open [Gui04]. Furthermore, several difficulties remain in making them effective in practice [AGE+02, GR04, RKG07, Rus07]. One set of difficulties stems from the fact that current KDS update mechanisms strongly depend on the assumption that the update is invoked to repair a single certificate failure [AGE+02]. This assumption requires a precise ordering of the roots of the polynomials so that the earliest can always be selected, possibly requiring exact arithmetic. The assumption also makes it particularly difficult to deal with simultaneous events. Such events can arise naturally due to degeneracies in the data, or due to the intrinsic properties of the kinetized algorithm.¹
Another set of difficulties concerns the implementation of the algorithms. In the standard scheme, the data structures need to keep track of what needs to be updated on a certificate failure, and properly propagate those changes. This can lead to quite complicated and error-prone code. Furthermore, the scheme makes no provision for composing code: there is no simple way, for example, to use one kinetic algorithm as a subroutine for another. Together, this makes it difficult to implement sophisticated algorithms.
Recent work [ABTV06] proposed an alternative approach to kinetizing algorithms using self-adjusting computation [ABH+04, Aca05, ABBT06, AH06]. The idea is that one implements a static algorithm for the problem, and then runs it under a general-purpose interpreter that keeps track of dependences in the code (e.g., some piece of code depends on the value of a certain variable or on the outcome of a certain test). When the variable or test outcome changes, the code that depends on it is re-run, in turn possibly invalidating old code and updates, and making new updates. The algorithm that propagates these changes is called a change-propagation algorithm, and it is guaranteed to return the output to the same state as if the static algorithm were run directly on the modified input. The efficiency of the approach for a particular static algorithm and class of input/test changes can be analyzed using trace stability, which can be thought of as an edit distance between computations represented as sequences of operations [ABH+04].
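The dependence-tracking idea can be caricatured in a few lines. The sketch below is a drastically simplified toy of our own devising (the names Mod, read, and write are ours, not the library's API): a value records which computations read it, and a write re-runs those readers, which is the essence of change propagation.

```python
class Mod:
    """Toy 'modifiable' cell: readers register themselves on read and are
    re-executed on write.  Real self-adjusting computation additionally
    memoizes and reuses subcomputations; none of that is modeled here."""
    def __init__(self, value):
        self.value, self.readers = value, []

    def read(self, reader):
        if reader not in self.readers:
            self.readers.append(reader)
        return self.value

    def write(self, value):
        if value != self.value:
            self.value = value
            for reader in list(self.readers):
                reader()          # change propagation: re-run dependents

log = []
x = Mod(3)

def certificate():
    # This 'certificate' depends on x; it re-executes whenever x changes.
    log.append(x.read(certificate) > 0)

certificate()   # initial run: records True
x.write(-1)     # change propagation re-runs the certificate: records False
```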
¹ For example, the standard incremental 3D convex hull algorithm can perform a plane-side test between a face and a point twice, once when deleting a face and once when identifying the conflict between a point and the face.
The approach can make it significantly simpler to implement kinetic algorithms for a number of reasons: only the static algorithms need to be implemented²; algorithms are trivial to compose, as static algorithms compose in the normal way; and simultaneous updates of multiple certificates are possible because the change-propagation algorithm can handle any number of changes. Acar et al. [ABTV06] used the ability to process multiple updates to help deal with numerical inaccuracy. The observation was that if the roots can be limited to an interval in time (e.g., using interval arithmetic), then one need only identify a position in time not covered by any root. It is then safe to move the simulation forward to that position and simultaneously process all certificates before it. Although the approach using floating-point arithmetic worked for the 2D examples in that paper, it has proved to be more difficult to find such positions in time for problems in three dimensions.
In this paper, we propose another approach to advancing time for robust motion simulation and apply it to a 3D convex hull algorithm. We then evaluate the approach experimentally. The approach is a hybrid between kinetic event-based scheduling and classic fixed-time sampling. The idea is to partition time into a lattice of intervals of fixed size δ, and only identify events to the resolution of an interval. If many roots fall within an interval, they are processed as a batch without regard to their ordering. As with kinetic event-based scheduling, we maintain a priority queue, but in our approach, the queue maintains non-empty intervals, each possibly with multiple events. To separate roots to the resolution of intervals, we use Sturm sequences in a similar way as used for exact separation of roots [GK99], but the fixed resolution allows us to stop the process early. More specifically, in exact separation, one finds smaller and smaller intervals (e.g., using binary search) until all roots fall into separate intervals. In our case, once we reach the lattice interval, we can stop without further separation. This means that if events are degenerate and happen at the same time, for example, we need not determine this potentially expensive fact.
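A hedged sketch of this root-separation step, assuming standard Sturm theory: build the Sturm chain of a certificate polynomial, count sign variations at lattice points, and report only the fixed-size intervals containing at least one root, never subdividing below δ. (Coefficients are low-to-high; the boundary convention (a, b] is a simplification of ours.)

```python
from fractions import Fraction as F

def deriv(p):
    return [F(i) * c for i, c in enumerate(p)][1:]

def rem(a, b):
    """Remainder of polynomial division a / b."""
    a = list(a)
    while len(a) >= len(b):
        if a[-1] == 0:
            a.pop()
            continue
        k, d = F(a[-1]) / b[-1], len(a) - len(b)
        for i, c in enumerate(b):
            a[i + d] -= k * c
        a.pop()
    while a and a[-1] == 0:
        a.pop()
    return a

def sturm_chain(p):
    chain = [p, deriv(p)]
    while chain[-1]:
        r = rem(chain[-2], chain[-1])
        if not r:
            break
        chain.append([-c for c in r])
    return chain

def variations(chain, x):
    signs = []
    for p in chain:
        v = sum(c * x**i for i, c in enumerate(p))
        if v != 0:
            signs.append(v > 0)
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def roots_in(chain, a, b):
    """Number of distinct real roots in (a, b] (Sturm's theorem)."""
    return variations(chain, a) - variations(chain, b)

def nonempty_intervals(p, lo, hi, delta):
    """Lattice intervals of width delta in [lo, hi) holding >= 1 root.
    Unlike exact root separation, we never subdivide below delta, so
    several (possibly coincident) roots may share one interval."""
    chain, out, t = sturm_chain(p), [], lo
    while t < hi:
        if roots_in(chain, F(t), F(t + delta)) > 0:
            out.append(t)
        t += delta
    return out
```

For t² − 2, whose roots are ±√2, scanning [0, 3) with δ = 1 reports only the interval starting at 1, without ever computing √2.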
For kinetic 3D convex hulls, we use a static randomized incremental convex hull algorithm [CS89, BDH96, MR95] and kinetize it using self-adjusting computation. To ensure that the algorithm responds to kinetic events efficiently, we make some small changes to the standard incremental 3D convex-hull algorithm. This makes progress on the problem of kinetic 3D convex hulls, which was identified in the late 1990s [Gui98]. To the best of our knowledge, currently the best way to compute 3D kinetic convex hulls is to use the kinetic Delaunay algorithm of the CGAL package [Boa07], which computes the convex hull as a byproduct of the 3D Delaunay triangulation (of which the convex hull is a subset). As shown in our experiments, this existing solution generally requires processing many more events than necessary for computing convex hulls.
We present experimental results for the proposed kinetic 3D convex hull algorithm with the robust motion simulator. Using our implementation, we can run simulations with tens of thousands of moving points in 3D and test their accuracy. We can perform robust motion simulation by processing an average
² In the current system, some annotations are needed to mark changeable values.
of about two certificate failures per step. The 3D hull algorithm seems to take (poly)logarithmic time on average to respond to a certificate failure, as well as to an integrated event: an insertion or deletion that occurs during a motion simulation.
[Figure 1: A hypothetical event timeline on a lattice with δ = 1: events a through i and x fall into fixed-size intervals between times 0 and 7, with some intervals holding multiple events.]
algorithm in the self-adjusting framework. This computes a certificate polynomial for each comparison. For each certificate, we find the lattice intervals at which the sign of the corresponding polynomial changes, and for each such interval, we insert an event into the priority queue. After the initialization, we simulate motion by advancing the time to the smallest lattice point t such that the lattice interval [t − δ, t) contains an event. To find the new time t, we remove from the priority queue all the events contained in the earliest nonempty interval. We then change the outcome of the removed certificates and perform change propagation at time t. Change propagation updates the output and the queue by inserting new events and removing invalidated ones. We repeat this process until there are no more certificate failures. Figure 1 shows a hypothetical example with δ = 1. We perform change propagation at times 1, 2, 3, 5, 7. Note that multiple events are propagated simultaneously at time 2 (events b and c), time 5 (events e and f), and time 7 (events h, i, and x).
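In outline, the simulation loop might look as follows. This is a hypothetical skeleton of ours, not the paper's implementation; propagate stands in for change propagation and is assumed to schedule only events at or after the current time.

```python
import heapq

def simulate(initial_events, propagate, delta=1):
    """Bucket events into lattice intervals [k*delta, (k+1)*delta) and,
    at each step, process *all* events of the earliest nonempty interval
    as one batch, without ordering them.  `propagate(t, batch)` returns
    newly scheduled (time, event) pairs."""
    buckets, heap = {}, []

    def schedule(time, event):
        k = int(time // delta)
        if k not in buckets:
            buckets[k] = []
            heapq.heappush(heap, k)
        buckets[k].append(event)

    for time, event in initial_events:
        schedule(time, event)

    trace = []
    while heap:
        k = heapq.heappop(heap)
        batch = buckets.pop(k)
        t = (k + 1) * delta          # advance to the next lattice point
        trace.append((t, sorted(batch)))
        for time, event in propagate(t, batch) or []:
            schedule(time, event)
    return trace
```

With four events at times 0.5, 1.2, 1.7, and 2.1, the loop advances to lattice points 1, 2, and 3, batching the two events that share the interval [1, 2).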
When performing change propagation at a given time t, we may encounter a polynomial that is zero at t, representing a degeneracy. In this case, we use the derivatives of the polynomial to determine the sign immediately before t. Using this approach, we are able to avoid degeneracies throughout the simulation, as long as the certificate polynomials are not identically zero.
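The derivative trick can be made concrete as follows. For s slightly below t, p(s) ≈ p⁽ᵏ⁾(t)·(s − t)ᵏ/k!, where p⁽ᵏ⁾ is the first derivative that does not vanish at t, so the sign immediately before t is sign(p⁽ᵏ⁾(t))·(−1)ᵏ. A small sketch of ours:

```python
def sign_just_before(p, t):
    """Sign of polynomial p (coefficients low-to-high) immediately
    before t.  If p(t) = 0 (a degenerate event), fall through to
    successive derivatives; the sign just below t is the sign of the
    first nonvanishing derivative at t, flipped once per differentiation."""
    k = 0
    while any(p):
        v = sum(c * t**i for i, c in enumerate(p))
        if v != 0:
            return (1 if v > 0 else -1) * (-1) ** k
        p = [i * c for i, c in enumerate(p)][1:]   # differentiate
        k += 1
    return 0   # identically zero: a genuine degeneracy
```

For example, (t − 1)² vanishes at t = 1 but is positive just before it, while t − 1 vanishes there and is negative just before it; the sketch recovers both signs without evaluating at any perturbed point.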
We note that the approach described here is quite different from the approach suggested by Abam et al. [AAdBY06]. In that approach, root isolation is avoided by allowing certificate failures to be processed out of order. This can lead to incorrect transient results and requires care in the design of the kinetic structures. We do not process certificates out of order, but rather as a batch.
Algorithm
crucial for stability, i.e., that minimize the number of operations that need to be updated when a certificate fails.
Given S ⊂ R³, the convex hull of S, denoted by conv(S), is the smallest convex polyhedron enclosing all points in S. During the execution of the algorithm on input S, each face f of the convex hull will be associated with a set B(f) ⊆ S of points (possibly empty). Each input point p will be given a real number π(p) ∈ [0, 1], called its priority. Each face f will have the priority π(f) := min{π(p) : p ∈ B(f)}. We say that a face of the hull is visible from a point if the point is outside the plane defined by the face.
The algorithm takes as input a set of points S = {p₁, p₂, ..., pₙ}, and performs the following steps:
1.–5. [The five steps of the algorithm were lost in extraction.]
In our implementation, we maintain a priority queue of faces, ordered by the priorities of the faces. We also store at each face the point in B(f) with priority π(f). This allows us to perform step 5(a) efficiently.
Even though the algorithm presented above is fairly standard, certain key elements of this implementation appear to be crucial for stability: without them, the algorithm would be unstable. For stability, we want the edit distance between the traces to be small. Towards this goal, the algorithm should always insert points in the same order, even when new points are added or old points deleted. We ensure this by assigning a random priority to every input point. The use of random priorities makes it easy to handle new points, and obviates the need to explicitly remember the insertion order.
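One hedged way to realize such priorities is to derive them deterministically from the points themselves, e.g., by hashing coordinates, so that the induced insertion order is independent of arrival order. The functions below are illustrative sketches of ours; the actual implementation may simply draw fresh random numbers.

```python
import hashlib

def priority(point):
    """Pseudo-random priority in [0, 1) obtained by hashing the point's
    coordinates.  Identical points always get identical priorities, so
    the insertion order is a function of the point set, not of when
    points happened to arrive."""
    h = hashlib.sha256(repr(point).encode()).digest()
    return int.from_bytes(h[:8], "big") / 2**64

def insertion_order(points):
    """Insert points in increasing priority order."""
    return sorted(points, key=priority)
```

Adding a new point then leaves the relative order of all existing points unchanged, which is exactly what keeps the trace edit distance small.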
For better stability, we also want the insertion of a point p to visit faces of the convex hull in the same order every time. While the presented algorithm cannot guarantee this, we use the following heuristic to enhance stability. The point-to-face assignment with respect to a center point c ensures that the insertion of p always starts excavating at the same face, increasing the likelihood that the faces are visited in the same order. Note that the choice of the center point is arbitrary, with the only requirement that the center point has to lie in the convex hull. Our implementation takes c to be the centroid of the tetrahedron formed by A4.
Implementation
Our implementation consists of three main components: 1) the self-adjusting-computation library, 2) the incremental 3D convex-hull algorithm, and 3) the motion simulator. Previous work [ABBT06] provided an implementation of the self-adjusting-computation library. The library requires that the user add some annotations to their static algorithms to mark what values can change and what needs to be memoized. These annotations are used by the system to track the dependences and know when to reuse subcomputations.
In our experiments, we use both the original static 3D convex-hull algorithm and the self-adjusting version with the annotations added. The static version uses exact arithmetic predicates to determine the outcomes of comparisons precisely (we use the static version for checking the robustness of the simulation). The self-adjusting version uses the root solver to find the roots of the polynomial certificates, and inserts them into the event queue of the motion simulator.
We implement a motion simulator as described in Section 2. Given a precision parameter δ and a bound Mt on the simulation time, the simulator uses an event scheduler to perform a motion simulation on the lattice with precision δ until Mt is reached. We model the points with an initial location traveling at constant speed in a fixed direction. For each coordinate, we use B and Bv bits to represent the initial location and the velocity, respectively; B and Bv can be set to arbitrary positive natural numbers.
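The motion model can be sketched as follows. The signedness of the coordinates and the uniform distribution are our assumptions; the paper only fixes the bit widths B and Bv.

```python
import random

def random_moving_point(B, Bv, rng=random):
    """A point moving at constant speed in a fixed direction: per
    coordinate, a B-bit signed initial position and a Bv-bit signed
    velocity (uniform sampling is an assumption of this sketch)."""
    pos = tuple(rng.randrange(-(2**(B - 1)), 2**(B - 1)) for _ in range(3))
    vel = tuple(rng.randrange(-(2**(Bv - 1)), 2**(Bv - 1)) for _ in range(3))
    return pos, vel

def position_at(point, t):
    """Linear trajectory: p(t) = p0 + v * t, per coordinate."""
    pos, vel = point
    return tuple(p + v * t for p, v in zip(pos, vel))
```

Because trajectories are linear, every certificate polynomial has degree bounded by the degree of the underlying geometric test, e.g., at most 3 for a 3D plane-side determinant.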
Experiments
[Figure 2: Running time (sec) of our static algorithm and of CGAL's incremental convex-hull implementation, for input sizes from 5000 to 25000; times range up to about 1.8 sec.]
[Figure 3: Number of events and total time for CGAL's kinetic 3D Delaunay triangulation vs. our algorithm.]

  Input Size | CGAL # Events | CGAL Total Time (s) | Our # Events | Our Total Time (s)
          22 |           357 |               13.42 |           71 |               2.66
          49 |          1501 |              152.41 |          151 |              11.80
          73 |          2374 |              391.31 |          218 |              23.42
         109 |          4662 |             1270.24 |          316 |              40.37
         163 |          7842 |             3552.48 |          380 |              70.74
         244 |         15309 |            12170.08 |          513 |             125.16
[Figure 4: Initial-run time (sec) of the kinetic algorithm with Sturm sequences, plotted against 15× the static algorithm with Sturm sequences and 90× the static algorithm with floating-point arithmetic, for input sizes from 5000 to 25000.]
kinetic simulation. When the inputs are large (more than 1000 points), we check the output at randomly selected events (with varying probabilities between 1% and 20%) to save time.
Baseline Comparison. To assess the efficiency of the static version of our algorithm, we compare it to CGAL 3.3's implementation of the incremental convex-hull algorithm. Figure 2 shows the timings for our static algorithm and for the CGAL implementation with the Homogeneous<double> kernel. Inputs to the algorithms are generated by sampling from the same distribution; the reported numbers are averaged over three runs. Our implementation is about 30% slower than CGAL's. Implementation details, or our use of a high-level, garbage-collected language, may be causing this difference.
We also want to compare our kinetic implementation with an existing kinetic implementation capable of computing 3D convex hulls. Since there is no direct implementation for kinetic 3D convex hulls, we compare our implementation with CGAL's kinetic 3D Delaunay-triangulation implementation, which computes the convex hull as part of the triangulation. Figure 3 shows the timings for our algorithm and for CGAL's implementation of kinetic 3D Delaunay (using the Exact_simulation_traits traits). These experiments are run until the event queue is empty. As expected, the experiments show that kinetic Delaunay processes many more events than necessary for computing convex hulls.
Kinetic motion simulation. To perform a motion simulation, we first run our kinetic algorithm on the given input at time t = 0, which we refer to as the initial run. This computes the certificates and inserts them into the priority queue of the motion scheduler. Figure 4 illustrates the running time for the initial
run of the kinetic algorithm compared to that of our static algorithm, which does not create certificates. Timings show a factor of about 15 gap between the kinetic algorithm (using Sturm sequences) and the static algorithm that uses exact arithmetic. The static algorithm runs slower by a factor of 6 when it uses exact arithmetic compared to using floating-point arithmetic. These experiments indicate that the overheads of initializing the kinetic simulations are moderately high: more than an order of magnitude over the static algorithm with exact arithmetic, and almost two orders of magnitude over the static algorithm with floating-point arithmetic. This is due both to the cost of creating certificates and to the overhead of maintaining the dependence structures used by the change-propagation algorithm.
After completing the initial run, we are ready to perform the motion simulation. One measure of the effectiveness of the motion simulation is the average time for a kinetic event, calculated as the total time for the simulation divided by the number of events. Figure 5 shows the average times for a kinetic event when we use our δ-precision root solver. These averages are for the first 5n events on an input of size n. The average time per kinetic event appears asymptotically bounded by the logarithm of the input size. A kinetic structure is said to be responsive if the cost per kinetic event is small, usually in the worst case. Although our experiments do not indicate responsiveness in the worst case, they do indicate responsiveness in the average case.
One concern with motion simulation with kinetic data structures is that the overhead of computing the roots can exceed the speedup that we may hope to obtain by performing efficient updates. This does not appear to be the case in our system. Figure 6 shows the speedup for a kinetic event, computed as the time for a from-scratch execution of the static algorithm (using our solver) divided by the time for change propagation.
In many cases, we also want to be able to insert and remove points or change the motion parameters during the motion simulation. This is naturally supported in our system, because self-adjusting computations can respond to any combination of changes to their data. We perform the following experiment to study the effectiveness of our approach at supporting these integrated changes. During the motion simulation, at every event, the motion function of an input point is updated from r(t) to (3/4)·r(t). We update these points in the order they appear in the input, ensuring that every point is updated at least once. From this
[Figure 5: Average time (sec) per kinetic event vs. input size, for input sizes up to 25000.]

[Figure 6: Speedup per kinetic event vs. input size; observed speedups reach into the hundreds.]

[Figure 7: Average time (sec) per integrated dynamic+kinetic event vs. input size, with fitted curve 0.0012·log(n).]

[Figure 8: Ratio of internal to external events vs. input size, with fitted curve 0.035·log⁴(n).]
experiment, we report the average time per integrated event, calculated by dividing the total time by the number of events. Figure 7 shows the average time per integrated event for different input sizes. The time per integrated event appears asymptotically bounded by the logarithm of the input size, and is similar to the times for kinetic events only. A kinetic structure is said to have good locality if the number of certificates a point is involved in is small. We note that the time for a dynamic change is directly affected by the number of certificates the point is involved in. Again, although our experiments do not indicate good locality in the worst case, they do indicate good locality averaged across points.
In a kinetic simulation, we say that an event is internal if it does not cause the output to change. Similarly, we say that an event is external if it causes the output to change. A kinetic algorithm is said to be efficient if the ratio of internal events to external events is small. Figure 8 shows this ratio in complete simulations with our algorithm. The ratio can be reasonably large but appears to grow sublinearly.
Another measure of the effectiveness of a kinetic motion simulation is compactness, which is a measure of the total number of certificates that are live at any time. Since our implementation uses change propagation to update the computation when a certificate fails, it guarantees that the total number of certificates is equal to the number of certificates created by a from-scratch execution at the current position of the points. Figure 9 shows the total number of certificates created by a from-scratch run of the algorithm with the initial positions. The number of certificates appears to be bounded by O(n log n).

[Figure 9: Number of certificates (millions) vs. input size n, for n up to 25000, with fitted curve 20·n·log(n).]
Conclusion
References

[AAB08]
[AH06] Acar, U.A., Hudson, B.: Optimal-time dynamic mesh refinement: preliminary results. In: Proceedings of the 16th Annual Fall Workshop on Computational Geometry (2006)
[BDH96] Barber, C.B., Dobkin, D.P., Huhdanpaa, H.: The quickhull algorithm for convex hulls. ACM Trans. Math. Softw. 22(4), 469–483 (1996)
[BGH99] Basch, J., Guibas, L.J., Hershberger, J.: Data structures for mobile data. Journal of Algorithms 31(1), 1–28 (1999)
[Boa07] CGAL Editorial Board: CGAL User and Reference Manual, 3.3 edn. (2007)
[CS89] Clarkson, K.L., Shor, P.W.: Applications of random sampling in computational geometry, II. Discrete and Computational Geometry 4(1), 387–421 (1989)
[GK99] Guibas, L.J., Karavelas, M.I.: Interval methods for kinetic simulations. In: SCG 1999: Proceedings of the Fifteenth Annual Symposium on Computational Geometry, pp. 255–264. ACM Press, New York (1999)
[GR04] Guibas, L., Russel, D.: An empirical comparison of techniques for updating Delaunay triangulations. In: SCG 2004: Proceedings of the Twentieth Annual Symposium on Computational Geometry, pp. 170–179. ACM Press, New York (2004)
[Gui98] Guibas, L.J.: Kinetic data structures: a state of the art report. In: WAFR 1998: Proceedings of the Third Workshop on the Algorithmic Foundations of Robotics, Natick, MA, USA, pp. 191–209. A. K. Peters, Ltd. (1998)
[Gui04] Guibas, L.: Modeling motion. In: Goodman, J., O'Rourke, J. (eds.) Handbook of Discrete and Computational Geometry, 2nd edn., pp. 1117–1134. Chapman and Hall/CRC (2004)
[MLt] MLton
[MR95] Motwani, R., Raghavan, P.: Randomized Algorithms. Cambridge University Press, Cambridge (1995)
[RKG07] Russel, D., Karavelas, M.I., Guibas, L.J.: A package for exact kinetic data structures and sweepline algorithms. Comput. Geom. Theory Appl. 38(1–2), 111–127 (2007)
[Rus07] Russel, D.: Kinetic Data Structures in Practice. PhD thesis, Department of Computer Science, Stanford University (March 2007)
[Wee06] Weeks, S.: Whole-Program Compilation in MLton. In: ML 2006: Proceedings of the 2006 Workshop on ML, p. 1. ACM, New York (2006)