
COMPUTER GRAPHICS forum, Volume 21 (2002), number 2, pp. 149–172

The 3D Model Acquisition Pipeline


Fausto Bernardini and Holly Rushmeier

IBM Thomas J. Watson Research Center, Yorktown Heights, New York, USA

Abstract
Three-dimensional (3D) image acquisition systems are rapidly becoming more affordable, especially systems
based on commodity electronic cameras. At the same time, personal computers with graphics hardware capable
of displaying complex 3D models are also becoming inexpensive enough to be available to a large population.
As a result, there is potentially an opportunity to consider new virtual reality applications as diverse as cultural
heritage and retail sales that will allow people to view realistic 3D objects on home computers.
Although there are many physical techniques for acquiring 3D data—including laser scanners, structured light
and time-of-flight—there is a basic pipeline of operations for taking the acquired data and producing a usable
numerical model. We look at the fundamental problems of range image registration, line-of-sight errors, mesh
integration, surface detail and color, and texture mapping. In the area of registration we consider both the
problems of finding an initial global alignment using manual and automatic means, and refining this alignment
with variations of the Iterative Closest Point methods. To account for scanner line-of-sight errors we compare
several averaging approaches. In the area of mesh integration, that is finding a single mesh joining the data from
all scans, we compare various methods for computing interpolating and approximating surfaces. We then look
at various ways in which surface properties such as color (more properly, spectral reflectance) can be extracted
from acquired imagery. Finally, we examine techniques for producing a final model representation that can be
efficiently rendered using graphics hardware.
Keywords: 3D scanning, range images, reflectance models, mesh generation, texture maps, sensor fusion
ACM CSS: I.2.10 Vision and Scene Understanding—Modeling and recovery of physical attributes, shape,
texture; I.3.5 Computational Geometry and Object Modeling—Geometric algorithms, languages and systems;
I.3.7 Three-Dimensional Graphics and Realism—Color, shading, shadowing, and texture; I.4.1 Digitization and
Image Capture—Reflectance, sampling, scanning

1. Introduction

The past few years have seen dramatic decreases in the cost of three-dimensional (3D) scanning equipment, as well as in the cost of commodity computers with hardware graphics display capability. These trends, coupled with increasing Internet bandwidth, are making the use of complex 3D models accessible to a much larger audience. The potential exists to expand the use of 3D models beyond the well established games market to new applications ranging from virtual museums to e-commerce. To realize this potential, the pipeline from data capture to usable 3D model must be further developed. In this report we examine the state of the art of the processing of the output of range scanners into efficient numerical representations of objects for computer graphics applications.

Three-dimensional scanning has been widely used for many years for reverse engineering and part inspection [1]. Here we focus on acquiring 3D models for computer graphics applications. By 3D model, we refer to a numerical description of an object that can be used to render images of the object from arbitrary viewpoints and under arbitrary lighting conditions. We consider models that can be used to simulate the appearance of an object in novel synthetic environments. Furthermore, the models should be editable to provide the capability of using existing physical objects as the starting point for the design of new objects in computer modeling systems. The geometry should be editable—i.e. holes can be cut, the object can be stretched, or appended to other objects. The surface appearance properties should also be editable—i.e. surfaces can be changed from shiny to dull, or the colors of the surface can be changed.

© The Eurographics Association and Blackwell Publishers Ltd 2002. Published by Blackwell Publishers, 108 Cowley Road, Oxford OX4 1JF, UK and 350 Main Street, Malden, MA 02148, USA.

To achieve this flexibility in the use of scanned objects, we consider systems which output shape in the form of clouds of points that can be connected to form triangle meshes, and/or fitted with NURBS or subdivision surfaces. The 3D points are augmented by additional data to specify surface finish and color. With the exception of surfaces with relatively uniform spatial properties, fine scale surface properties such as finish and color are ultimately stored as image maps covering the geometry.

The shape of 3D objects may be acquired by a variety of techniques, with a wide range in the cost of the acquisition hardware and in the accuracy and detail of the geometry obtained. On the high cost end, an object can be CAT scanned [2], and a detailed object surface can be obtained with isosurface extraction techniques. On the low cost end, models with relatively sparse 3D spatial sampling can be constructed from simple passive systems such as video streams by exploiting structure from motion [3], or by observing silhouettes and using space carving techniques [4].

In this report we focus on scanning systems that capture range images—that is, an array of depth values for points on the object from a particular viewpoint. While these scanners span a wide range of cost, they are generally less expensive and more flexible than full 3D imaging systems such as CAT scanners, while obtaining much more densely sampled shapes than completely passive systems. We briefly review various types of range image scanners, and the principles they work on. However, for this report we consider a range scanner as a generic component, and consider the model building process given range images as input.

The process of building models from a range scanning system is shown in Figure 1. There are fundamentally two streams of processing—one for the geometry, and one for the fine scale surface appearance properties. As indicated by the dotted lines, geometric and surface appearance information can be exchanged between the two processing streams to improve both the quality and efficiency of the processing of each type of data. In the end, the geometry and fine scale surface appearance properties are combined into a single compact numerical description of the object.

Figure 1: The sequence of steps required for the reconstruction of a model from multiple overlapping scans. (Geometric stream: range images, registration, line-of-sight error compensation, integration of scans into a single mesh, postprocessing and parameterization. Appearance stream: intensity images, texture-to-geometry registration, computation of illumination invariants, texture map reconstruction. The two streams combine into the textured model.)

2. Scanning Hardware

Many different devices are commercially available to obtain range images. Extensive lists of vendors are maintained at various web sites. To build a model, a range scanner can be treated as a "black box" that produces a cloud of 3D points. It is useful however to understand the basic physical principles used in scanners. Characteristics of the scanner should be exploited to generate models accurately and efficiently.

The most common range scanners are triangulation systems, shown generically in Figure 2. A lighting system projects a pattern of light onto the object to be scanned—possibly a spot or line produced by a laser, or a detailed pattern formed by an ordinary light source passing through a mask or slide. A sensor, frequently a CCD camera, senses the reflected light from the object. Software provided with the scanner computes an array of depth values, which can be converted to 3D point positions in the scanner coordinate system, using the calibrated position and orientation of the light source and sensor. The depth calculation may be made robust by the use of novel optics, such as the laser scanning systems developed at the National Research Council of Canada [5]. Alternatively, calculations may be made robust by using multiple sensors [6]. A fundamental limitation of what can be scanned with a triangulation system is having an adequate clear view for both the source and sensor to see the surface point currently being scanned. Surface reflectance properties affect the quality of data that can be obtained. Triangulation scanners may perform poorly on materials that are shiny, have low surface albedo, or that have significant subsurface scattering.

An alternative class of range scanners are time-of-flight systems. These systems send out a short pulse of light, and estimate distance by the time it takes the reflected light to return. These systems have been developed with near real time rates, and can be used over large (e.g. 100 m) distances. Time-of-flight systems require high precision in time measurements, and so errors in time measurement fundamentally limit how accurately depths are measured.
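To make the triangulation principle concrete, the sketch below intersects the viewing ray through a lit pixel with the calibrated plane of laser light, with the camera at the origin. The function name and calibration inputs are illustrative assumptions, not the interface of any particular scanner's software.

```python
import numpy as np

def triangulate(pixel_ray, plane_point, plane_normal):
    """Recover a 3D point by intersecting the camera ray through a lit pixel
    with the calibrated plane of laser light (camera at the origin).
    pixel_ray: direction of the viewing ray through the pixel.
    plane_point, plane_normal: the laser-light plane, from calibration."""
    denom = float(np.dot(plane_normal, pixel_ray))
    if abs(denom) < 1e-12:
        raise ValueError("viewing ray is parallel to the light plane")
    t = float(np.dot(plane_normal, plane_point)) / denom
    # The 3D point, in the scanner (camera) coordinate system.
    return t * np.asarray(pixel_ray, dtype=float)
```

In a real system the per-pixel ray comes from the intrinsic camera calibration, and the plane parameters change as the laser sweeps across the object.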



Figure 2: Principles of a laser triangulation system. A laser projector shines a thin sheet of light onto the object. The CCD sensor detects, on each scan line, the peak of reflected laser light. 3D point positions are computed by intersecting the line through the pixel with the known plane of laser light.

Basic characteristics to know about a range scanner are its scanning resolution, and its accuracy. Accuracy is a statement of how close the measured value is to the true value. The absolute accuracy of any given measurement is unknown, but a precision, a value for the standard deviation that typifies the distribution of distances of the measured point to the true point, can be provided by the manufacturer. The tests used by manufacturers to determine precision are based on standard tests for length measurement developed for coordinate measurement machines or surveying applications, depending on the scale of the application. The absolute value of error increases with distance between the scanner and object. The deviation of measurements is a thin ellipsoid rather than a sphere—the error is greatest along the line-of-sight of the sensor. The precision of the measurements may vary across a range image. There are some effects that produce random errors of comparable magnitude at each point. Other effects may be systematic, increasing the error towards the edges of the scan. Because models are built from points acquired from many different range images, it is important to understand the relative reliability of each point to correctly combine them.

Resolution is the smallest distance between two points that the instrument measures. The accuracy of measured 3D points may be different than the resolution. For example, a system that projects stripes on an object may be able to find the depth at a particular point with submillimeter accuracy. However, because the stripes have some width, the device may only be able to acquire data for points spaced millimeters apart on the surface. Resolution provides a fundamental bound on the dimensions of the reconstructed surface elements, and dictates the construction of intermediate data structures used in forming the integrated representation.

Range scanners do not simply provide clouds of 3D points [7], but implicitly provide additional information. Simply knowing a ray from each 3D point to the scanning sensor indicates that there are no occluding surfaces along that ray, and provides an indicator of which side of the point is outside the object. Since range images are organized as two-dimensional (2D) arrays, an estimate of the surface normal at each point can be obtained by computing vector cross products for vectors from each point to its immediate neighbors. These indicators of orientation can be used to more efficiently reconstruct a full surface from multiple range images.

3. Registration

For all but the simplest objects, multiple range scans must be acquired to cover the whole object's surface. The individual range images must be aligned, or registered, into a common coordinate system so that they can be integrated into a single 3D model.

In high-end systems registration may be performed by accurate tracking. For instance, the scanner may be attached to a coordinate measurement machine that tracks its position and orientation with a high degree of accuracy. Passive mechanical arms as well as robots have been used. Optical tracking can also be used, both of features present in the scene or of special fiducial markers attached to the model or scanning area.

In less expensive systems an initial registration is found by scanning on a turntable, a simple solution that limits the size and geometric complexity of scannable objects (they must fit on the turntable, and the system provides only a cylindrical scan which cannot reconstruct self-occluding objects), and that leaves unsolved the problem of registration for scans of the top and bottom of the object. Many systems rely on interactive alignment: a human operator is shown side-by-side views of two overlapping scans, and must identify three or more matching feature points on the two images, which are used to compute a rigid transformation that aligns the points.

Automatic feature matching for computing the initial alignments is an active area of research (recent work includes [8–12]). The most general formulation of the problem, which makes no assumptions on the type of features (in the range and/or associated intensity images) or on an initial approximate registration, is extremely hard to solve. Approximate position and orientation of the scanner can be tracked with fairly inexpensive hardware in most situations, and can be used as a starting point to avoid searching a large parameter space.
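The cross-product normal estimate from a range image grid can be sketched as follows. The array layout (an H × W × 3 grid of 3D points) and the central-difference neighbor choice are assumptions for illustration; the resulting orientation can then be flipped, if needed, using the known ray back to the sensor.

```python
import numpy as np

def estimate_normals(points):
    """Estimate per-point unit normals on a range image.
    points: (H, W, 3) array of 3D positions on the 2D scan grid.
    Uses the cross product of vectors to horizontal and vertical
    neighbors; border points are left with zero normals."""
    normals = np.zeros_like(points)
    du = points[1:-1, 2:] - points[1:-1, :-2]   # vectors across columns
    dv = points[2:, 1:-1] - points[:-2, 1:-1]   # vectors across rows
    n = np.cross(dv, du)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    normals[1:-1, 1:-1] = n / np.maximum(norm, 1e-12)
    return normals
```

A production implementation would also reject normals computed across depth discontinuities, where neighboring samples belong to different surfaces.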



3.1. Registration of two views

Neither the controlled motion nor the feature matching techniques can usually achieve the same degree of accuracy as the range measurements. The initial alignment must therefore be refined by a different technique. The most successful approach to solve this problem has been the Iterative Closest Point (ICP) algorithm, originally proposed by Besl and McKay [13], Chen and Medioni [14], and Zhang [15].

The ICP algorithm consists of two steps: in the first step, pairs of candidate corresponding points are identified in the area of overlap of two range scans. Subsequently, an optimization procedure computes a rigid transformation that reduces the distance (in the least-squares sense) between the two sets of points. The process is iterated until some convergence criterion is satisfied. The general idea is that at each iteration the distance between the two scans is reduced, allowing for a better identification of true matching pairs, and therefore an increased chance of a better alignment at the next iteration. It has been proved [13] that the process converges to a local minimum, and in good implementations it does so in few steps. However, the algorithm may or may not converge to a global minimum, depending on the initial configuration. One obvious problem arises with surfaces that have few geometric features: two aligned partial scans of a cylindrical surface can slide relative to each other while the distance between corresponding points remains zero. When available, features in co-acquired texture images can help solve this underconstrained problem (see Section 3.3).

Figure 3: One step of the ICP algorithm. Point matches are defined based on shortest Euclidean distance. Scan P is then transformed to minimize the length of the displacement vectors, in the least-squares sense.

Figure 4: In Chen and Medioni's method, a matching pair is created between a control point p on scan P and the closest point q on the tangent plane to Q at q′. q′ is the sample point on Q closest to the intersection with the line ℓ perpendicular to P in p.

Variations of the algorithm differ in how the candidate matching pairs are identified, which pairs are used in computing the rigid transformation, and in the type of optimization procedure used. Besl and McKay [13] use the Euclidean closest point as the matching candidate to a given point. Chen and Medioni [14] find the intersection between a line normal to the first surface at the given point and the second surface, then minimize the distance between the given point and the tangent plane to the second surface at the intersection point. This technique has two advantages: it is less sensitive to non-uniform sampling, and poses no penalty for two smooth surfaces sliding tangentially one with respect to the other, a desirable behavior because in flat areas false matches can easily occur. See Figures 3 and 4.

Points from the first surface (control points) can be selected using uniform subsampling, or by identifying surface features. The set of candidate pairs can be weighted and/or pruned based on estimates of the likelihood of an actual match, and confidence in the data. Zhang [15] introduces a maximum tolerable distance and an orientation consistency check to filter out spurious pairings. Dorai et al. [16] model sensor noise and study the effect of measurement errors on the computation of surface normals. They employ a minimum variance estimator to formulate the error function to be minimized. They report more accurate registration results than Chen and Medioni's original method in controlled experiments. In related work, Dorai et al. [17] check distance constraints (given points $p_1$ and $p_2$ on the first surface, and corresponding points $q_1$, $q_2$ on the second surface, $\left| \|p_1 - p_2\| - \|q_1 - q_2\| \right| < \varepsilon$ must hold) to prune incompatible matches, also leading to improved registration results. Many researchers have proposed incorporating other features for validating matches: for example thresholding the maximum distance, discarding matches along surface discontinuities, evaluating visibility, and comparing surface normals, curvature or surface color information (see for example the good review in [18]). Use of the texture images as an aid to registration is further discussed in Section 3.3.

Given the two sets of matching points $P = \{p_1, \ldots, p_n\}$ and $Q = \{q_1, \ldots, q_n\}$, the next problem is computing a rotation matrix $R$ and translation vector $T$ such that the sum of squares of pairwise distances

$$e = \sum_{i=1}^{n} \left\| p_i - (R q_i + T) \right\|^2$$

is minimized. This problem can be solved in closed form by expressing the rotation as a quaternion [19], by linearizing the small rotations [14], or by using the Singular Value Decomposition. More statistically robust approaches have been investigated to avoid having to preprocess the data to eliminate outliers [20,21].
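A minimal sketch of the closed-form SVD solve for the error above, together with a brute-force ICP loop built on it, is given below. It follows the Besl–McKay Euclidean closest-point matching described in this section; the match pruning, weighting, and acceleration structures of real implementations are deliberately omitted, and the function names are illustrative.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Closed-form minimizer of e = sum_i ||p_i - (R q_i + T)||^2 via SVD.
    P, Q: (n, 3) arrays of already-matched points."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))     # guard against a reflection solution
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    T = p_bar - R @ q_bar
    return R, T

def icp(P, Q, iters=20):
    """Minimal Besl-McKay-style ICP aligning scan Q to scan P: repeatedly
    match each point of Q to its Euclidean closest point in P, then apply
    the closed-form solve. Returns the accumulated R, T."""
    R_acc, T_acc = np.eye(3), np.zeros(3)
    moved = Q.copy()
    for _ in range(iters):
        dists = np.linalg.norm(P[None, :, :] - moved[:, None, :], axis=2)
        matches = P[np.argmin(dists, axis=1)]   # closest point in P per q
        R, T = best_rigid_transform(matches, moved)
        moved = moved @ R.T + T
        R_acc, T_acc = R @ R_acc, R @ T_acc + T
    return R_acc, T_acc
```

As noted above, convergence is only to a local minimum: this loop assumes the initial misalignment is already small, and a k-d tree would replace the quadratic distance matrix for realistically sized scans.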



3.2. Registration of multiple views

When pairwise registration is used sequentially to align multiple views errors accumulate, and the global registration is far from optimal. Turk and Levoy [22] use a cylindrical scan that covers most of the surface of the object, and then incrementally register other scans to it. In their variation of ICP, they compute partial triangle meshes from the range scans, then consider the distance from each vertex of one mesh to the triangulated surface representing the other scan.

Bergevin et al. [23] extend the incremental approach to handle multiple views. One of the views is selected as the central (or reference) view. All the other views are transformed into the reference frame of the central view. At each iteration, each view is registered with respect to all other views using a variation of Chen and Medioni's method. The process is repeated until all incremental registration matrices are close to the identity matrix. Benjemaa and Schmitt [24] use a similar approach, but accelerate finding matching pairs by resampling the range images from a common direction of projection, and then performing the searches for the closest points on these images.

Pulli [25] describes another incremental multiview registration method that is particularly suited to the registration of large datasets. Pulli's method consists of two steps: in the first step, range scans are registered pairwise using Chen and Medioni's method. Matching points are discarded if they lie on a scan's boundaries, if the estimated normals differ by more than a constant threshold, or when their distance is too large. A dynamic fraction of the best remaining pairs (the shorter ones), which increases as the registration gradually improves, is then used for the alignment. After this initial registration, the overlap areas of each pair of scans are uniformly sampled, and the relative positions of sample points are stored and used in the successive step: the algorithm will assume that the pairwise registration is exact and will try to minimize relative motion. The second step considers the scans one at a time, and aligns each to the set of scans already considered. An inner loop in the algorithm considers all the scans that overlap with the current scan, and recursively aligns each of these scans until the relative change is smaller than a threshold, diffusing error evenly among all scans. By using a small number of pairs of points in the global registration phase, the need to have all the scans in memory is eliminated.

Blais and Levine [26] search for a simultaneous solution of all the rigid motions using a simulated annealing algorithm. Execution times for even just a few views are reportedly long. Neugebauer [27] uses the Levenberg–Marquardt method to solve a linearized version of the least-squares problem. A resolution hierarchy is used to improve robustness and efficiency. Invalid matches are detected and discarded at each iteration.

A different class of methods models the problem by imagining a set of springs attached to point pairs, and simulating the relaxation of the dynamic system. Stoddart and Hilton [28] assume that point pairs are given and remain fixed. Eggert et al. [18] link each data point to the corresponding tangent plane in another view with a spring. They use a hierarchical subsampling that employs an increasing number of control points as the algorithm progresses, and update correspondences at each iteration. They report better global registration error and a larger radius of convergence than other methods, at the expense of longer computation times. Their method also assumes that each portion of the object surface appears in at least two views.

3.3. Using the textures to aid registration

Images that record the ambient light reflected from an object (rather than a structured light pattern used for triangulation) may also be captured coincidently with the range images. Color or grayscale images are recorded to be used as texture maps (see Section 7). Range and texture images in systems that acquire both coincidently are registered to one another by calibration. That is, the relative position and orientation of the texture and range sensors are known, and so the projective mapping of the texture image onto the range image is known. When texture images registered to the range images are available, they may be used in the scan registration process. This is particularly advantageous when the texture images have a higher spatial resolution than the range images, and/or the object itself has features in the surface texture in areas that have few geometric features.

Texture images may be used in the initial alignment phase. Gagnon et al. [29] use texture data to assist a human operator in the initial alignment. Pairs of range images are aligned manually by marking three points on overlapping texture images. The locations of the matching points are refined by an algorithm that searches in their immediate neighborhoods using image cross-correlation [30]. A least-squares optimization follows to determine a general 3D transformation between the scans that minimizes the distances between the point pairs.

Roth [9] used textures in an automatic initial alignment procedure. "Interest" points in each texture image, such as corners, are identified using any of a variety of image processing techniques. A 3D Delaunay tetrahedralization is computed for all interest points in each scan. All matching triangles are found from pairs of potentially overlapping scans, and the transformation that successfully registers the most matching triangles is used. The advantage of using the triangles is that it imposes a rigidity constraint that helps insure that the matches found are valid.
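The cross-correlation refinement used by Gagnon et al. can be illustrated with a small search of the following kind: an operator-marked point is adjusted by maximizing normalized cross-correlation over a window in the other image. The function names, patch size, and search radius are hypothetical, not the published parameters.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def refine_match(img1, img2, x, y, half=3, search=5):
    """Refine the point (x, y), roughly matched across two overlapping
    texture images, by maximizing NCC over a small search window in img2.
    Returns the refined (x, y) in img2 and the best correlation score."""
    ref = img1[y - half:y + half + 1, x - half:x + half + 1]
    best, best_xy = -2.0, (x, y)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img2[y + dy - half:y + dy + half + 1,
                        x + dx - half:x + dx + half + 1]
            score = ncc(ref, cand)
            if score > best:
                best, best_xy = score, (x + dx, y + dy)
    return best_xy, best
```

The refined 2D correspondences are then lifted to 3D through the texture-to-range calibration, and a least-squares rigid transformation is fit to them as described above.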



The method requires an adequate number of "interest" points in the textures. However, a relatively sparse pattern of points can be projected onto an object using laser light to guarantee that such points are available. Projected points were added to texture maps in the case study presented by Bernardini and Rushmeier [31]; however, the number of points per scan was not adequate for a completely automatic initial alignment.

Texture images may also be used in the refinement of the initial alignment. In general, there are two major approaches to using texture image data in the refinement phase. In one approach, the color image values are used as additional coordinates defining each point captured in the scan. In the other approach, matching operations are performed using the images directly.

Johnson and Kang [32,33] describe a method in which they use color from a texture as an additional coordinate for each point in an ICP optimization. Because the range images they use are of lower spatial resolution than the texture images, the range images are first supersampled to the texture resolution, and a color triplet is associated with each 3D point. The color triplets need to be adjusted to be comparable in influence to the spatial coordinates. They recommend scaling the color coordinates so that the range of values matches the range of values in the spatial coordinates. Further, to minimize image-to-image illumination variations they recommend using color in terms of YIQ rather than RGB, and applying a scale factor to the luminance Y coordinate that is much smaller than that applied to the chrominance I and Q coordinates. The closest point search now becomes a search in 6D space, and a 6D k-d tree is used to accelerate the search. For tests using scanned models of rooms which have many planar areas with high texture variation, they demonstrate order of magnitude reductions in alignment errors. Schütz et al. [34] present a similar extended-coordinate ICP method that uses scaled normals data (with normals derived from the range data) as well as color data.

The alternative approach to using texture image data is to perform matching operations on image data directly. This allows image structure to be exploited, and avoids search in high dimensional coordinate space. To compare texture images directly, these types of methods begin by using the range scan and an initial estimate of registration to project the texture images into a common view direction, as illustrated in Figure 5.

Figure 5: Registration methods that work with images begin by projecting overlapping textures into the same view. Here geometries Si and Sj are used to project the corresponding texture maps Di and Dj into the same view as a third scan Sm.

Weik [35] projects both the texture image and the texture gradient image of a source scan to be aligned with a second destination scan. The differences in intensities in the two images in the same view are then computed. The texture difference image and gradient image are then used to estimate the locations of corresponding points in the two images. A rigid transformation is then computed that minimizes the sum of the 3D distances between the corresponding point pairs. Pulli [36] describes a method similar to Weik's that replaces the use of image gradient and differences with a full image registration to find corresponding points. Pulli's technique uses a version of planar perspective warping described by Szeliski and Shum [37] for image registration. To make the registration more robust, Pulli describes a hierarchical implementation. Similar to Johnson and Kang, Pulli examines alternative color spaces to minimize the effects of illumination variations. For the test cases used—small objects with rich geometric and textural features—there appears to be no advantage of using images in color spaces other than RGB.

Both Weik's and Pulli's methods require operations on the full high-resolution texture images. A high degree of overlap is required, and scan-to-scan variability in illumination introduces error. Fine scale geometry is matched only if these details are revealed by lighting in the images. Both methods can be effective if there are substantial albedo variations in the scans that dominate illumination variations.

Bernardini et al. [38] present a registration method that combines elements of several of the other texture-based techniques. The initial alignment is first refined with a purely geometric ICP. Similar to Weik and Pulli, the texture images are projected into a common view. Similar to Roth, feature points are located in the texture images. However, unlike Roth the method does not attempt to match feature points. Rather, similar to the approach by Gagnon et al., the initial correspondences are refined by doing a search in a small neighborhood around each point, and finding corresponding pixels where an image cross-correlation measure is minimized. A rigid transformation is then found that minimizes the distance between the newly identified corresponding points.



3.4. Future directions

Successful refinement of an initial registration has been demonstrated for a large class of objects. This step does not appear to be a major obstacle to a fully automatic model-building pipeline. Robust solutions for the automatic alignment of totally uncalibrated views are not available, although some progress is being made. Scanner instrumentation with an approximate positioning device seems a feasible solution in most cases. Very promising is the use of improved feature-tracking algorithms from video sequences as an inexpensive way of producing the initial registration estimate.

Figure 6: Probabilistic model of measurement error (adapted from Rutishauser et al. [39]). [Figure labels: real surface, measurement 1, measurement 2.]
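The registration refinement discussed above ultimately reduces to repeatedly fitting a rigid motion to point correspondences. A minimal least-squares sketch of that core step follows (a generic SVD-based fit, not the formulation of any one cited paper):

```python
import numpy as np

def fit_rigid(P, Q):
    """Least-squares rigid motion (R, t) minimizing sum ||R @ P[i] + t - Q[i]||^2
    for corresponding 3D point sets P, Q (n x 3 arrays), via the SVD of the
    cross-covariance matrix. This is one inner step of an ICP-style refinement."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper (reflecting) solution so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

In an iterative scheme, the correspondences are re-estimated after each application of (R, t) until the alignment converges.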
4. Line-of-sight Error

After the scans have been aligned, the individual points would ideally lie exactly on the surface of the reconstructed object. However, one still needs to account for residual error due to noise in the measurements, inaccuracy of sensor calibration, and imprecision in registration. The standard approach to deal with the residual error is to define new estimates of actual surface points by averaging samples from overlapping scans. Often the specific technique used is chosen to take advantage of the data structures used to integrate the multiple views into one surface. Because of this, details of the assumed error model and averaging method are often lost or overlooked by authors. We believe that this problem is important enough to deserve a separate discussion. In addition, line-of-sight error compensation, together with resampling and outlier filtering, is a necessary preprocessing step for interpolatory mesh integration methods.

Among the first to recognize the need for a mathematical model of scanner inaccuracies and noise were Hébert et al. [40], in the context of data segmentation and polynomial section fitting. Their error model incorporates the effects of viewing angle and distance, and is expressed as an uncertainty ellipsoid defined by a Gaussian distribution. Other sources of non-Gaussian error, such as shadows, surface specularities and depth discontinuities, which generally produce outliers, are not included in the model. For a typical triangulation scanner the error in estimating the x, y position of each sample is much smaller than the error in estimating the depth z. Therefore the ellipsoid is narrow, with its longer axis aligned with the direction towards the sensor; see Figure 6. Building on the work of Hébert et al. [40], Rutishauser et al. [39] define an optimal reconstruction of a surface from two sets of estimates, in the sense of probability theory. However, they have to resort to some approximations in their actual computations. For a measured point on one scan, they find the best matching point (again, in the probabilistic sense) on the triangle defined by the three closest samples on the second scan. The optimal estimate of point location is then computed using the modified Kalman minimum-variance estimator.

Soucy and Laurendeau [41] model error in a laser triangulation system as proportional to the fraction of illuminance received by the sensor, expressed by the squared cosine of the angle between the surface normal at the measured point and the sensor viewing direction. Overlapping range data is resampled on a common rectangular grid lying on a plane perpendicular to the average of the viewing directions of all contributing scans. Final depth values are computed as weighted averages of the resampled values, where the weight used is the same squared cosine defined above. These points are then connected into a triangle mesh.

Turk and Levoy [22] employ a similar method, but invert the steps of creating a triangulated surface and finding better surface position estimates. In their approach individual range scans are first triangulated, then stitched together. In areas of overlap, vertices of the resulting mesh are moved along the surface normal to a position computed as the average of the intersections of a line through the point in the direction of the normal with all the overlapping range scans.

Neugebauer [27] adjusts point positions along the scanner line-of-sight. He uses a weighted average where each weight is the product of three components: the first is the cosine of the angle between surface normal and sensor viewing direction (if the cosine is smaller than 0.1, the weight is set to zero); the second is a function that approximates the squared distance of a sample point to the scan boundary, allowing a smooth transition between scans; the third is Tukey's biweight function, used to filter outliers. The weighting is applied iteratively.

In volumetric methods, line-of-sight error compensation is done by computing a scalar field that approximates the signed distance to the true surface, based on a weighted average of distances from sample points on individual range scans. The details of the various methods will be discussed in the next section.
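As a concrete illustration, a cosine-square weighted blend of overlapping depth samples, in the spirit of the schemes above, might look like the following sketch. The grazing-angle cutoff follows Neugebauer's 0.1 threshold; the rest is a simplified assumption (one grid cell, unit viewing directions, no boundary or outlier terms).

```python
import numpy as np

def blend_depths(depths, normals, view_dirs, min_cos=0.1):
    """Blend overlapping depth samples for one resampled grid cell.
    Each sample is weighted by the squared cosine of the angle between its
    unit surface normal and unit sensor viewing direction; samples seen at
    grazing angles (cos < min_cos) are discarded. Returns None when no
    reliable sample remains, leaving a hole rather than guessing."""
    depths = np.asarray(depths, float)
    cos = np.einsum('ij,ij->i', np.asarray(normals, float),
                    np.asarray(view_dirs, float))   # per-sample dot products
    w = np.where(cos >= min_cos, cos ** 2, 0.0)
    if w.sum() == 0.0:
        return None
    return float((w * depths).sum() / w.sum())
```

A full implementation would apply this per cell of the common resampling grid, after projecting all contributing scans into it.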
5. Scan Integration

For most applications, it is desirable to merge the aligned multiple scans into a unified, non-redundant surface representation. A significant amount of research in this direction has been done in the past. In this section, we will try to classify this work based on the type of assumptions and approach taken, and we will point to recent publications that are representative of each category, without trying to exhaustively cite the vast literature available on this subject. Previous reviews of work in this field include [42–44].

The goal of scan integration is to reconstruct the geometry and topology of the scanned object from the available data. The problem is difficult because in general the data points are noisy, they may contain outliers, parts of the surface may not have been reached by the scanner, and in general there is no guarantee that the sampling density is even sufficient for a correct reconstruction.

Some progress is being made in characterizing the problem more rigorously, at least in restricted settings. A first classification of methods can be made based on whether the input data is assumed to be unorganized points (a point cloud) or a set of range scans. Techniques that deal with the first kind of input are more general, but also usually less robust in the presence of noise and outliers. The second category uses information in addition to simple point position, such as estimated surface normals, the partial connectivity embedded in the range scan, and sensor position, to better estimate the actual surface.

A second classification groups techniques based on the approach taken to reconstruct surface connectivity. A practical consequence of this choice is the size of the problem that can be solved using given computing resources. We will review selected work based on this second categorization.

5.1. Delaunay-based methods

The Delaunay complex D(S) associated with a set of points S in R^3 decomposes the convex hull of S and imposes a connectivity structure. Delaunay-based methods reconstruct a surface by extracting a subcomplex from D(S), a process sometimes called sculpting. This class of algorithms usually assumes only a point cloud as input. A recent review and unified treatment of these methods appears in [45].

One technique to select an interesting subcomplex, in fact a parameterized family of subcomplexes, is based on alpha-shapes [46]. Bajaj et al. [44,47] use a binary search on the parameter α to find a subcomplex that defines a closed surface containing all the data points. Smaller concave features not captured by the alpha-shape are found with the use of heuristics. The surface is then used to define a signed distance. A C^1 implicit piecewise-polynomial function is then adaptively fit to the signed distance field.

A commercial software product by Geomagic is based on a different technique to extract the subcomplex, called the wrap complex [48]. The technique can handle non-uniform samplings, but requires some interactive input.

Amenta et al. [49,50] introduce the concept of the crust, the subcomplex of the Delaunay complex of S ∪ P, where P is the set of poles of the Voronoi cells of S, formed by only those simplices whose vertices belong to S. The poles of a sample point s ∈ S are the two farthest vertices of its Voronoi cell. The algorithm automatically handles non-uniform samplings, and its correctness, under somewhat stringent sampling density conditions, has been proven, both in the sense of a topologically correct reconstruction and of convergence to the actual surface for increasing sampling density. Experimental results prove that the algorithm performs well in practice for much less dense samplings than the theoretical bound. Based on a similar concept, but leading to a more efficient and robust implementation, is the power crust algorithm [51,52]. The first step of the power crust algorithm is to compute a piecewise-linear approximation of the medial axis transform, interpolating the poles P of the Voronoi cells of S, defined as above. The poles are weighted with the associated (approximate) radius of the maximal balls that do not intersect the surface. The second step computes a piecewise-linear approximation of the surface as a subset of the faces of the power diagram of the set of weighted poles. One additional benefit of the algorithm is that it produces a closed ("watertight") surface in the presence of uneven sampling density. Sampling assumptions and theoretical guarantees are defined in [52]. Practical extensions to deal with sharp features, holes and noise are discussed in [51]. Experimental results for datasets containing several hundred thousand points are shown.

Also using the concept of poles to define a local surface approximation is the cocone algorithm proposed by Amenta et al. [53]. Here the poles of the Voronoi diagram of P are used to define an approximate normal for each sample point. The complement (restricted to the Voronoi cell of the point) of a double cone centered at p, with axis aligned with the sample point normal and an aperture of 3π/8, is defined as the cocone of p. It is proved that the cocone constitutes a good approximation for the surface in the neighborhood of p. The local surface reconstruction is then defined by the collection of Delaunay triangles incident on p that are dual to Voronoi edges contained in the cocone of p. The union of all the local reconstructions constitutes a superset of the final manifold triangulation, which is obtained with a global prune and walk algorithm. These results are presented in the context of a practical implementation by Dey et al. [54]. The authors employ a divide and conquer method based on an octree partition of the input points to avoid a global Voronoi computation. The pointsets contained in each octree node are padded with enough points from neighboring nodes to enforce the computation of compatible triangulations along common boundaries. Again, a global
prune and walk algorithm selects a manifold subset of the candidate triangles. The divide and conquer approach leads to reduced computation times and memory usage, allowing the treatment of datasets with millions of samples on common workstations.

In the context of Delaunay-based methods it is possible to study the sampling conditions that guarantee a correct reconstruction. Attempts so far have been mostly restricted to the 2D case [55–57], with the exception of [50] and [52]. The main shortcomings of these methods are their sensitivity to noise and outliers (these algorithms interpolate the data points, so outliers must be removed in preprocessing), and their computational complexity. Robustly computing and representing the connectivity of the 3D Delaunay complex can be a costly task. Experimental results are usually limited to "clean" datasets with less than a few hundred thousand points (with the exception of [54]).

5.2. Surface-based methods

Surface-based methods create the surface by locally parameterizing (or implicitly assuming a local parameterization of) the surface and connecting each point to its neighbors by local operations. Some methods make use of the partial connectivity implicit in the range images.

The zippering approach of Turk and Levoy [22] works by first individually triangulating all the range scans. The partial meshes are then eroded to remove redundant, overlapping triangles. The intersecting regions are then locally retriangulated and trimmed to create one seamless surface. Vertex positions are then readjusted to reduce error, as described in Section 4.

Soucy and Laurendeau [41] use canonical Venn diagrams to partition the data into regions that can be easily parameterized. Points in each region are resampled and averaged (see Section 4), and locally triangulated. Patches are then stitched together with a constrained Delaunay algorithm.

A recent paper by Bernardini et al. [58] describes an algorithm to interpolate a point cloud that is not based on sculpting a Delaunay triangulation. Their method follows a region-growing approach, based on a ball-pivoting operation. A ball of fixed radius (approximately the spacing between two sample points) is placed in contact with three points, which form a seed triangle. The three edges initialize a queue of edges on the active boundary of the region. Iteratively, an edge is extracted from the queue, and the ball pivots around the extracted edge until it touches a new point. A new triangle is formed, the region boundary updated, and the process continues. The approach can easily be extended to restart with a larger ball radius to triangulate regions with sparser data points. This method was implemented to make efficient use of memory by loading at any time only the data in the region currently visited by the pivoting ball, rather than the entire dataset. This allowed the triangulation of a large collection of scans with millions of samples.

Gopi et al. [59] compute local 2D Delaunay triangulations by projecting each point and its neighborhood on a tangent plane, and then lift the triangulation to 3D.

Surface-based methods can easily process large datasets, and can handle (and compensate for) small-scale noise in the data. Robustness issues arise when the noise makes it difficult to locally detect the correct topology of the surface.

5.3. Volumetric methods

Volumetric methods [60–62] are based on computing a signed distance field in a regular grid enclosing the data (usually, only in proximity of the surface), and then extracting the zero-set of the trivariate function using the marching cubes algorithm [63]. The various approaches differ in the details of how the signed distance is estimated from the available data.

Curless and Levoy [60] compute the signed distance from each scan by casting a ray from the sensor through each voxel near the scan. The length of the ray from the voxel to the point in which it intersects the range surface is computed and accumulated at the voxel with values computed from other scans, using weights dependent, as usual, on surface normal and viewing direction. This approach may lead to a biased estimate of surface location, as noted in [61]. Hilton et al. [62] also blend signed distances from individual scans, and use extra rules to correctly handle the case of different surfaces in close proximity, both with the same and opposite orientation. Wheeler et al. [61] propose a solution that is less sensitive to noise, outliers, and orientation ambiguities. They assign to each voxel the signed distance to the closest point on the consensus surface, a weighted average of nearby measurements. Only measurements for which a user-specified quorum of samples with similar position and orientation is found are used.

Boissonnat and Cazals [64] use natural neighbor interpolation to define a global signed distance function. The natural neighbors of a point x are the neighbors of x in the Delaunay triangulation of P ∪ {x}. Using natural neighbors avoids some of the pitfalls of other local surface approximations (for example taking just the points within a given distance from x, or its k closest neighbors). However, it requires the computation of a global Delaunay triangulation, which limits the size of the datasets that can be handled by the algorithm in practice. Since the Delaunay triangulation of the points must be computed, it can also be used as the starting point for the construction of a piecewise-linear approximation of the surface that satisfies a user-specified tolerance. The initial approximation is formed by all those Delaunay triangles whose dual Voronoi edge is bipolar, that is, such that the global signed distance function has different signs at its
two endpoints. This triangulation is then incrementally refined until the tolerance condition is satisfied. Examples of reconstruction from datasets of moderate size are shown in the paper.

Volumetric methods are well suited for very large datasets. Once the individual range scans have been processed to accumulate signed distance values, storage and time complexity are output sensitive: they mainly depend on the chosen voxel size, or resolution of the output mesh. Memory usage can be reduced by explicitly representing only voxels in close proximity to the surface [60] and by processing the data in slices. The choice of voxel size is usually left to the user. Small voxels produce an unnecessarily large number of output triangles and increase usage of time and space. Large voxels lead to oversmoothing and loss of small features. These problems can be alleviated by using an adaptive sampling (e.g. an octree rather than a regular grid [65]) and/or by postprocessing the initial mesh with a data-fitting procedure [66–68].

Volumetric methods are also well suited to producing water-tight models. By using the range images to carve out a spatial volume, an object definition can be obtained without holes in the surface. Reed and Allen [69] demonstrate the evolution of a solid model from a series of range images, with the data from each image carving away the solid that lies between the scanner and each sample point. Rocchini et al. [70] also describe a volumetric method that fills holes.

5.4. Deformable surfaces

Another class of algorithms is based on the idea of deforming an initial approximation of a shape, under the effect of external forces and internal reactions and constraints.

Terzopoulos et al. [71] use an elastically-deformable model with intrinsic forces that induce a preference for symmetric shapes, and apply it to the reconstruction of shapes from images. The algorithm is also capable of inferring non-rigid motion of an object from a sequence of images.

Pentland and Sclaroff [72] adopted an approach based on the finite element method and parametric surfaces. They start with a simple solid model (like a sphere or cylinder) and attach virtual "springs" between each data point and a point on the surface. The equilibrium condition of this dynamic system is the reconstructed shape. They also show how the set of parameters that describe the recovered shape can be used in object recognition.

Recently a number of methods based on the concept of level sets have been proposed. These methods combine a robust statistical estimation of surface position in the presence of noise and outliers with an efficient framework for surface evolution. See e.g. [73,74].

6. Postprocessing

Postprocessing operations are often necessary to adapt the model resulting from scan integration to the application at hand. Very common is the use of mesh simplification techniques to reduce mesh complexity [75].

To relate a texture map to the integrated mesh, the surface must be parameterized with respect to a 2D coordinate system. A simple parameterization is to treat each triangle separately [32,76] and to pack all of the individual texture maps into a larger texture image. However, the use of mip-mapping in this case is limited since adjacent pixels in the texture may not correspond to adjacent points on the geometry. Another approach is to find patches of geometry which are height fields that can be parameterized by projecting the patch onto a plane. Stitching methods [2] use this approach by simply considering sections of the scanned height fields as patches.

Many parameterization methods have been developed for the general problem of texture mapping. Several methods seek to preserve the relative distance between 3D points in their pairing to a 2D coordinate system [77,78]. Marschner [79] describes an example of applying a relative-distance-preserving parameterization in a scanning application. The surface is subdivided into individual patches by starting with seed triangles distributed over the object, and growing regions around each seed. Harmonic maps are found to establish a 2D coordinate system for each patch, so individual patches need not be height fields.

Sloan et al. [80] have observed that maintaining relative distances may not produce optimal parameterizations for texture mapping. They suggest that uniform texture information, rather than distance preservation, should drive the parameterization. They applied this idea to synthetic textures only, but it may prove to be an effective approach in some scanning applications as well.

Another important step for applications that involve editing and animating the acquired model is the conversion of the mesh to a parametric, higher-order surface representation, for example using NURBS or a subdivision scheme. The technique of Hoppe et al. [81] starts with a triangle mesh and produces a smooth surface based on Loop's subdivision scheme [82]. Their method is based on minimizing an energy function that trades off conciseness and accuracy-of-fit to the data, and is capable of representing surfaces containing sharp features, such as creases and corners.

More recently, Eck and Hoppe [83] proposed an alternative surface-fitting approach based on tensor-product B-spline patches. They start by using a signed-distance zero-surface extraction method [84]. An initial parameterization is built by projecting each data point onto the closest face. The method continues by building from the initial mesh a base complex (a quadrilateral-domain complex, with the
same topology as the initial mesh) and a continuous parameterization from the base complex to the initial mesh, leveraging the work of Eck et al. [78]. A tangent-plane continuous network of tensor-product B-spline patches, having the base complex as parametric domain, is then fit to the data points, based on the scheme of Peters [85]. The fitting process is cast as an iterative minimization of a functional, which is a weighted sum of a distance functional (the sum of squared Euclidean distances of the data points from the surface) and a fairness functional (a thin-plate energy functional).

Another NURBS fitting technique is described by Krishnamurthy and Levoy [86]. The user interactively chooses how to partition the mesh into quadrilateral patches. Each polygonal patch is parametrized and resampled, using a spring model and a relaxation algorithm. Finally, a B-spline surface is fit to each quadrilateral patch. In addition, a displacement map is computed that captures the fine geometric detail present in the data.

Commercial packages that allow a semi-automated parametrization and fitting are available.

7. Texture

In addition to the overall shape of an object, the rendering of high quality images requires the fine scale surface appearance, which includes surface color and finish. We will refer to such properties generically as the surface texture. Beyond color and finish, texture may also include descriptions of fine scale surface geometry, such as high spatial-resolution maps of surface normals or bidirectional textures.

Surface color and finish are informal terms. Color is actually a perceived quantity, depending on the illumination of an object, human visual response, and the intrinsic spectral reflectance of the object. Finish—such as smoothness or gloss—is also not a directly acquired property, but is a consequence of an object's intrinsic reflectance properties. The fundamental quantity that encodes the intrinsic properties of the surface is the Bidirectional Reflectance Distribution Function (BRDF). To fully render an accurate image, the BRDF must be known for all points on a surface. The BRDF f_r(λ, x, y, ω_i, ω_r) at a surface point (x, y) is the ratio of radiance reflected in a direction ω_r to an incident energy flux density from direction ω_i for wavelength λ. The BRDF can vary significantly with position, direction and wavelength. Most scanning systems consider detailed positional variations only, with wavelength variations represented by an RGB triplet, and Lambertian (i.e. uniform for all directions) behavior assumed. Furthermore, most scanning systems acquire relative estimates of reflectance, rather than attempting to acquire an absolute value.

Here we will consider how texture data is acquired, and then how it is processed to provide various types of BRDF estimates, and estimates of fine-scale surface structure.

7.1. Texture-geometry registration

It is possible to capture the spectral reflectance of an object as points are acquired with a polychromatic laser scanner [87]. However, data for texture is typically acquired by an electronic color camera or using conventional color photographs that are subsequently scanned into electronic form. The texture images need to be registered with the acquired 3D points. The most straightforward system for doing this is registration by calibration. That is, color images corresponding to each range image are acquired at the same time, using a camera with a known, measured position and orientation relative to the sensor used for obtaining geometry. As discussed in Section 3.3, an advantage of this approach is that acquired texture can be used in the geometric registration process.

When textures are acquired separately from geometry, the texture-to-geometry registration is performed after the full mesh integration phase. Finding the camera position and orientation associated with a 2D image of a 3D object is the well-known camera calibration problem. Numerous references on solutions to this problem can be found in Price's Computer Vision bibliography [88], Section 15.2, "Camera Calibration Techniques." Camera calibration involves estimating both the extrinsic and intrinsic parameters. The extrinsic parameters are the translation and rotation to place the camera viewpoint correctly in the object coordinate system. The intrinsic parameters include focal length and radial distortion. For objects which have an adequate number of unique geometric features, it is possible to manually identify pairs of corresponding points in the 2D images and on the numerical 3D object. Given such correspondences, classic methods such as that described by Tsai [89] can be used to register the captured color images to the 3D model [2].

For some objects it may not be possible for a user to find a large number of accurate 2D–3D correspondences. Neugebauer and Klein [90] describe a method for refining the registration of a group of existing texture images to an existing 3D geometric model. The method begins with a rough estimate of the camera parameters for each image in the set, based on correspondences that are not required to be highly accurate. The parameters for all of the texture images are improved simultaneously by assuming the intrinsic camera parameters are the same for all images, and enforcing criteria that attempt to match the object silhouettes in the image with the silhouette of the 3D model, and to match the image characteristics at locations in texture images that correspond to the same 3D point.

Lensch et al. [91] present a method for finding the camera position for texture images in terms of a geometric object coordinate system using comparisons of binary images. First, a binary version of each texture image is computed by segmenting the object from the background. This is compared to a synthetic binary image generated by projecting the known
geometry into a camera view based on an initial guess of


camera parameters. The values of the camera parameters are
refined by using a downhill simplex method to minimize the ωr ωi
difference between the binary texture image and the syn-
thetic image. In a subsequent step the camera parameters no
for all views of an object are adjusted simultaneously to
minimize the error in overlapping textures from neighboring
views. rs
ωc
Nishino et al. [92] apply an alternative technique that
relies on image intensities rather than identifying features
nc
or extracting contours. They employ the general approach ωs
developed by Viola [93] that formulates the alignment as
the maximization of the mutual information between the 3D ns
model and the texture image.

Rather than using an ad hoc method for deciding the


As
positions for capturing texture images, Matsushita and
Kaneko [94] use the existing 3D geometric model to plan Figure 7: Generic geometry of texture map acquisition.
the views for capturing texture. Methods to plan texture
image capture can draw on the numerous computer vision
techniques for view planning, e.g. see [88] Section 15.1.4.1, "Planning Sensor Position." Matsushita and Kaneko develop a table of a set of candidate views and the object facets that are visible in each view. Views are selected from the table to obtain the views that image the largest number of yet-to-be-imaged facets. After the view set is selected, synthetic images which form the views are generated. For each synthetic image the real camera is then guided around the object to find the view that approximates the synthetic image, and a texture image is captured. The texture image to model registration is refined after capture using a variation of Besl and McKay's ICP algorithm [13] that acts on points on the silhouettes of the real and synthetic images.

7.2. Illumination invariance

The goal of capturing texture is to obtain a surface description that is illumination invariant—i.e. intrinsic to the surface and independent of specific lighting conditions. The pixel values in an image acquired by an electronic camera depend on the environmental lighting and the camera transfer parameters as well as the object properties. Approximate illumination invariants can be obtained directly by appropriate lighting and camera design. More complete estimates require processing of the acquired images. The variety of techniques can be understood by examining the specific relationships between the physical acquisition equipment and the final numerical value stored in an image.

Figure 7 shows a generic simplified system for obtaining a texture image. A light source with radiance L_s(λ, ω_s) in direction ω_s from the normal of the source surface is at distance r_s from the object. Light incident from direction ω_i is reflected with radiance L_p(λ, ω_r) into the direction of a pixel p. The radiance L_p(λ) is related to the object BRDF by:

    L_p(λ) = ∫ f_r(λ, x, y, ω_i, ω_r) L_s(λ, ω_s) (n_o · ω_i)(n_s · ω_s) dA_s / r_s²    (1)

The energy per unit area and time E_p(λ) incident on the pixel from direction ω_c for an exposure time of τ is:

    E_p(λ) = τ ∫_Ω L_p(λ) (n_c · ω_c) dΩ    (2)

where Ω is the solid angle of the object area viewed by the pixel, determined by the camera focal length and pixel size. This is converted to a 0–255 value C (for an 8-bit sensor), where C corresponds to the red (R), green (G) or blue (B) channel, by:

    C = K ( ∫_λ E_p(λ) s_C(λ) dλ )^γ + C_o    (3)

where K is the system sensitivity, s_C(λ) is the normalized sensor spectral response for channel C, C_o is the response for zero illumination, and γ is the system non-linearity. Even cameras with sensors that have an essentially linear response to light may produce images that have values adjusted with a value of γ other than one for the efficient use of the 0–255 range.

7.3. Direct use of captured images

Most inexpensive systems attempt to capture a relative estimate of Lambertian reflectance, expressed directly in terms of RGB. A Lambertian reflector reflects the same radiance in all directions for any incident energy flux density.
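Before captured pixel values can serve as estimates of incident energy, the non-linearity in equation (3) must be inverted. A minimal sketch, assuming γ, the gain K, and the offset C_o are known (e.g. from a grayscale-card calibration); the function and parameter names are illustrative, not from any particular system:

```python
import numpy as np

def linearize(C, gamma=2.2, K=1.0, C0=0.0):
    """Invert the camera response of equation (3),
    C = K * (integral of E_p * s_C)**gamma + C0,
    returning values proportional to the incident energy E_p."""
    C = np.asarray(C, dtype=np.float64)
    return (np.clip(C - C0, 0.0, None) / K) ** (1.0 / gamma)

# An 8-bit RGB pixel, normalized to [0, 1] before linearization
pixel = np.array([128, 64, 200]) / 255.0
linear = linearize(pixel, gamma=2.2)
```

Values linearized this way can be fed to rendering systems that expect quantities on a physically linear scale.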


© The Eurographics Association and Blackwell Publishers Ltd 2002
F. Bernardini and H. Rushmeier / 3D Model Acquisition 161

The Lambertian reflectance ρ_d is the fraction of incident energy reflected, and is related to the BRDF by:

    f_r(λ, x, y, ω_i, ω_r) = ρ_d(λ, x, y)/π    (4)

The radiance reflected for a Lambertian surface then is:

    L_p(λ) = ρ_d(λ, x, y) ∫ L_s(λ, ω_s) (n_o · ω_i)(n_s · ω_s) dA_s / r_s²    (5)

The reflected radiances measured at each pixel are then a good estimate of the relative spatial variation for Lambertian surfaces if (n_o · ω_i)(n_s · ω_s) and r_s² are approximately the same for all points on the surface imaged at any given time. Maintaining a constant r_s is relatively straightforward for systems with a fixed scanner location and the object placed on a turntable. As long as the distance to the light source is large relative to the size of the surface area being imaged, the effect of varying r_s will be small. One approach to controlling the variation due to the changing incident angle is to use a large diffuse light source, so that each point on the surface is illuminated by nearly the entire hemisphere above it. Relying on indirect illumination in a room can achieve this effect. Alternatively, for systems that acquire texture simultaneously with range images, a camera flash can be used nearly collocated with the camera sensor (the standard design for a commodity camera). Surfaces obtained in each range image are oriented so that the surface normal is nearly parallel to the direction of the camera sensor. The captured points will then all be illuminated with a value of (n_o · ω_i)(n_s · ω_s) close to 1.0. An additional advantage of using the flash built into the camera is that it is designed to be compatible with the spectral sensitivity of the camera sensor, to produce a good color match.

Figure 8: An example of a texture-mapped model obtained from an inexpensive scanner; (a) the captured geometry; (b) texture displayed as-captured; (c) textured model relit from above; and (d) textured model relit from the back.
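Under the conditions of equation (5), dividing out the geometric falloff recovers values proportional to ρ_d. A sketch for a small point-like source; the names and array shapes are illustrative assumptions, not the formulation of any of the cited systems:

```python
import numpy as np

def lambertian_correction(I, points, normals, light_pos):
    """Divide observed intensities I (shape (N,)) by the
    (n_o . omega_i) / r_s**2 factor of equation (5), yielding
    values proportional to the Lambertian reflectance rho_d.
    points, normals: (N, 3) arrays; light_pos: (3,) position."""
    to_light = light_pos - points                # vectors toward the light
    r2 = np.sum(to_light**2, axis=1)             # squared distance r_s**2
    omega_i = to_light / np.sqrt(r2)[:, None]    # unit incident directions
    cos_i = np.clip(np.sum(normals * omega_i, axis=1), 1e-6, 1.0)
    return I * r2 / cos_i
```

The clipping of the cosine term avoids division blow-ups at grazing angles, where the estimate is unreliable anyway.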

Captured image data can represent rich appearance details, as can be seen by contrasting the texture-mapped model shown in Figure 8(b) with the geometry alone in Figure 8(a). The details of the fur, which would be essentially impossible to capture as geometry, can be seen in the texture. However, there are clearly shadows on the bunny's coat that are fixed in the texture. Figures 8(c) and (d) show the model relit from novel directions. The texture looks flatter because the detail shadows do not appear consistent with the overall lighting direction.

7.4. Correcting captured images

While they produce approximations of the relative reflectance, inexpensive camera systems leave the texture pixels in the form given by (3). If data from such systems are to be used in rendering systems that use true physical parameters, a grayscale card should be used to estimate the γ of the color camera. A grayscale card image can also be used to assess the effect of the light source and camera spectral sensitivities on the RGB values. Absolute reflectance values can be estimated by capturing a reference white card with the object, or by obtaining separate spot measurements of the spectral reflectance of the object.

High-end systems that capture very accurate, dense range images, coupled with low-noise, high-resolution color cameras, may also be used to capture texture images. In these systems, images can be corrected using the geometric information to adjust for variations in angle and distance. Thresholding can be used to eliminate low values in shadow and high values in specular highlights. Alternatively, the geometry can be used to predict areas that will be in shadow or potentially in narrow specular peaks. Levoy et al. [95] describe the use of a CCD digital still camera with a laser stripe scanner to acquire accurate estimates of Lambertian reflectance.

7.5. Spatially uniform, directionally varying BRDF

An alternative to acquiring a spatially detailed map of BRDF that has no directional variation is to acquire details of a directionally varying BRDF on objects with no spatial variation of surface properties. Such methods have been described by Ikeuchi and Sato [96] for a range and intensity image pair, and by Baribeau et al. [87] for polychromatic laser data. These methods use systems in which the angle between sensor and light source position is fixed. However, because the scanner sees a uniform BRDF surface with a variety of surface orientations, data are obtained for L_r(λ, ω_i, ω_r) for a variety of values of (ω_i, ω_r). The methods compensate for not sampling the entire range of angles over the hemisphere by using the observed data to fit a parametric reflectance model. Each paper uses a version of the Torrance–Sparrow model [97]. Torrance–Sparrow-inspired models of BRDF are expressed generically as:

    f_r(λ, ω_i, ω_r) = ρ_d(λ)/π + ρ_s(λ) g(σ, ω_i, ω_r)    (6)

where ρ_d is the fraction of incident light reflected diffusely (i.e. as a Lambertian reflector), ρ_s is the fraction of light reflected near the specular direction in excess of the diffusely reflected light in that direction, and g is a function that depends on a parameter σ characterizing surface roughness as well as the angles of incidence and reflection. Methods attempt to estimate the three parameters ρ_d, ρ_s and σ to give the shape of the reflectance function diagrammed in Figure 9.

Figure 9: Torrance–Sparrow-inspired reflectance models attempt to model the magnitude of Lambertian reflected light (a) with a parameter ρ_d, the magnitude of directionally reflected light (b) with a parameter ρ_s, and the width of the directional lobe (c) with a parameter σ.

For example, Ikeuchi and Sato [96] begin by assuming all pixels reflect diffusely, and estimate values of ρ_d and the light source direction (assumed uniform across the surface). This value is then refined by thresholding pixels which have values well above that predicted by the product of ρ_d and n_o · ω_i (which result either from specular reflections or surface interreflections) and well below the predicted value (which result from either attached or cast shadows). After the estimates of ρ_d and ω_i are made, an iterative process over non-Lambertian pixels distinguishes specular versus interreflection pixels based on the observed angle relative to the angle of reflection. From the values of radiance recorded for specular pixels, values of the specular reflectance and surface roughness parameters are estimated. Alternatively, Baribeau et al. [87] capture samples of BRDF for a variety of incident/reflected angle pairs using the polychromatic range sensor. These data are then fit to the parametric model using a non-linear least-squares algorithm.

These spatially uniform techniques of course do not require objects that are completely uniform, but objects with surfaces that can be segmented into reasonably large uniform areas.

7.6. Spatially and directionally varying BRDF

To capture both spatially and directionally varying BRDF, methods based on photometric stereo are used. Photometric stereo, introduced by Woodham [98], uses N images of an object from a single viewpoint under N different lighting conditions. Initially, photometric stereo was used to estimate surface normals, and from the normals surface shape. Assuming a Lambertian surface and small light sources of uniform strength, an equation for the surface normal n_o visible through each pixel p in each image m, for each light source in direction ω_{m,i}, is given by:

    ω_{m,i} · n_o = ξ G_{m,p}    (7)

where G_{m,p} is the image grayscale value after correction for non-linear γ values, and ξ is a scaling constant that includes the light source radiance and subtended solid angle. Since n_o has unit length and thus represents only two independent variables, we can solve three equations for n_o and ξ.

Kay and Caelli [99] couple the idea of images from a photometric stereo system with a range image obtained from the same viewpoint to expand on the idea introduced by Ikeuchi and Sato. Rather than sampling a variety of directions by viewing many orientations across the surface, multiple incident light directions are observed for each surface point from the set of photometric images. Kay and Caelli used high dynamic range images to be able to capture specular objects, by taking pairs of images for each lighting condition with and without a grayscale filter. Because the directional sampling is still sparse, the data are fit to a Torrance–Sparrow-inspired reflectance model. The fitting process proceeds in four passes. First, weights are estimated to account for noise in the surface and image data. Next, pixels are classified as to whether there is enough data to estimate the model parameters. In the third pass the parameters are estimated where data is adequate. In the final pass parameters are estimated for the areas in which there was insufficient data from the intensity maps. The only



restriction on the technique is that interreflections are not accounted for, so strictly the method applies only to convex objects.
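The photometric stereo relation of equation (7) that underlies these methods reduces to a small linear solve per pixel. A minimal sketch for the three-light Lambertian case (array shapes are illustrative; shadowing and interreflection are not handled):

```python
import numpy as np

def photometric_stereo(light_dirs, G):
    """Solve omega_{m,i} . n_o = xi * G_{m,p} (equation (7)).
    light_dirs: (3, 3) unit light directions, one per row.
    G: (3, N) gamma-corrected grayscale values, one column per pixel.
    Returns unit normals (N, 3) and the per-pixel scale xi (N,)."""
    m = np.linalg.solve(light_dirs, G)   # m = n_o / xi for each pixel
    length = np.linalg.norm(m, axis=0)   # |m| = 1 / xi, since |n_o| = 1
    normals = (m / length).T             # normalize to unit length
    xi = 1.0 / length
    return normals, xi
```

With more than three lights, `np.linalg.solve` would be replaced by a least-squares solve, which also gives some robustness to noise.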

Sato et al. [100] presented a method for obtaining an estimate of BRDF for a full object. Range and color images are obtained for an object, with the object, sensor and light source positions registered by calibration, by moving the object with a robot arm manipulator. After the full object is reconstructed, the color images—showing the object from a variety of views and illumination directions—are used to fit a Torrance–Sparrow-inspired model. The parameter fitting problem is simplified by separating diffusely and specularly reflected light in each image by examining the color of each point on the surface in various images. Assuming non-white, dielectric materials, the diffuse component will be the color of the object (i.e. the result of body reflection), while the specular component will be the color of the light source (i.e. the result of surface reflection) [101]. Because the specular component is sampled sparsely along the surface (there is no way to guarantee that a specular highlight will be obtained for each point, even with a large number of images), the estimates of the specular reflectance parameters are interpolated over larger areas of the object.

Lensch et al. [102] take a "top-down" approach to estimating spatially varying BRDF. High dynamic range images are taken from multiple views of an object of known shape. Initially, it is assumed that all the pixel luminances represent samples from a single BRDF. After this global BRDF is computed, two groups of luminance values are formed based on their distance from the global BRDF estimate, and two new BRDFs are computed. This splitting process is repeated until the distance of the samples in a group from the BRDF computed from them falls below some threshold. Point-by-point variations are then captured by computing the BRDF for each point on the final model as a linear combination of the set of BRDFs formed from the groups.

7.7. Capturing reflectance and small scale structure

Methods for obtaining texture may not just estimate reflectance, but may also capture small-scale details at a resolution finer than the underlying range image. Rushmeier et al. [103] developed a photometric stereo system attached to a range imaging system. The photometric system allowed the calculation of normals maps on the surface at a higher spatial resolution than the underlying range image. They developed a method [104] to use the normals of the underlying low spatial resolution range image to adjust the images acquired by the photometric system, to ensure that the fine detail normals that are computed are consistent with the underlying mesh. Given the range images and detailed normals, the acquired color images were then adjusted to produce estimates of the Lambertian reflectance of the surface. Figure 10 shows an example of an underlying low resolution geometry sampled at approximately every 2 mm, and the same geometry with a normals map added to show detailed features every 0.5 mm.

Figure 10: An example of a normals map used to enhance the display of geometric detail. (a) shows the underlying 2 mm resolution geometry. (b) shows the geometry displayed with a 0.5 mm resolution normals map. The illumination is from a novel direction—i.e. not the direction of the illumination in any of the captured images.

Dana et al. [105] observed that even full BRDF and normals maps are not adequate for capturing the change in detail surface appearance with lighting and view for surfaces with fine-scale geometric complexity, such as bread and velvet. They developed the concept of bidirectional textures, which are sets of images (rather than individual values) of surfaces for varying light and viewpoint.

No scanning method has been developed to truly capture bidirectional textures for complete objects. However, there have been a number of techniques that use the concept of view-dependent texture maps. View-dependent texture maps were introduced by Debevec et al. [106] in the context of building models from photogrammetry and generic parameterized models. A different texture map, obtained from the points closest to the current view, is used for each view of a model. View-dependent texture maps can portray the variation of surface appearance due to changes in self-occlusion as well as BRDF. View-dependent texture maps as described in [106] are not varied for different lighting conditions. Pulli et al. [107] applied the idea to texturing range images. In an interactive viewer, only the range and color images that would be visible from the current view are used. Texture is synthesized on the fly using a combination of the three acquired textures closest to the current view. The effect is to render the effects of BRDF, occlusion and shadowing for the lighting conditions that existed during acquisition. Since the textures were acquired with both lighting and view changing, the effect is approximately the same as observing the object with a headlight at the viewer position.
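The selection and weighting of the closest captured views can be sketched as follows. This is an illustrative cosine-weighting scheme in the spirit of view-dependent texturing, not the exact weighting used in [107]:

```python
import numpy as np

def closest_view_weights(view_dir, capture_dirs, k=3):
    """Choose the k captured viewing directions most aligned with
    the current view and weight them by cosine proximity.
    view_dir: (3,) unit vector; capture_dirs: (M, 3) unit vectors.
    Returns (indices (k,), normalized weights (k,))."""
    cos = capture_dirs @ view_dir
    idx = np.argsort(-cos)[:k]            # the k most aligned views
    w = np.clip(cos[idx], 0.0, None)      # ignore back-facing views
    total = w.sum()
    w = w / total if total > 0 else np.full(k, 1.0 / k)
    return idx, w
```

The per-pixel texture value would then be the weighted combination of the k selected source textures.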



Miller et al. [108] developed the idea of surface light fields that represent the light leaving each point on a surface in all directions. They applied this idea to synthetic models. Nishino et al. [109] developed the Eigen-Texture system to capture and represent surface light field data. In their method, a light is fixed to the object coordinate system, and M views of an object are obtained using a turntable. The result is M small texture maps for each triangle on a simplified version of the geometry obtained from the range images. The series of M small texture maps is compressed by performing an eigenstructure analysis on the series and finding a small number of textures that can be used as an approximate basis set to form textures in the view space encompassed by the original M textures. The textures then represent the effects of BRDF, self-shadowing, and self-occlusion for the single lighting condition. Eigen-Textures obtained for many different lighting conditions can be combined linearly to generate textures for novel lighting conditions. Wood et al. [110] proposed an alternate method for capturing and storing surface light fields, using a different approach for data compression. They also demonstrated how small changes could be made in an object represented by surface light fields while maintaining a plausible, if not completely accurate, appearance.

Figure 11: In determining which parts of a captured texture image Ai can be used in a texture map A for a surface P, occlusion effects must be accounted for. Here the captured texture pixel a1i should not appear in the final texture map pixel a1, because the point p1 is occluded from the point of view of camera Ci.
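The eigenstructure analysis at the heart of such compression can be sketched with an SVD. This is an illustrative stand-in for the Eigen-Texture computation; the array shapes and the choice of a truncated SVD are assumptions:

```python
import numpy as np

def eigen_texture_basis(textures, k):
    """Given M flattened per-triangle textures as the rows of an
    (M, P) array, compute a k-texture approximate basis by
    eigenstructure (SVD) analysis of the mean-centered series.
    Returns (mean, basis (k, P), coefficients (M, k))."""
    mean = textures.mean(axis=0)
    centered = textures - mean
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:k]                 # top-k "eigen-textures"
    coeffs = centered @ basis.T    # projection of each view onto the basis
    return mean, basis, coeffs

def reconstruct_textures(mean, basis, coeffs):
    """Approximate the original M textures from k coefficients each."""
    return mean + coeffs @ basis
```

Storing only the mean, the k basis textures, and k coefficients per view is what makes the per-triangle texture series compact.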

8. Texture Map Reconstruction

Texture map reconstruction involves combining all the texture maps acquired for an object into a single non-redundant map over the entire object. Texture map reconstruction may start with meshes that store a color for each vertex point, and form images. Other methods begin with acquired (and possibly processed) images. Methods for texture map reconstruction starting with images may either select one piece from one acquired image to texture each surface area, or they may combine multiple maps that cover each surface area.

Soucy et al. [76] developed a method for generating a texture map from color-per-vertex models. The dense triangle mesh is simplified to reduce the triangle count. Barycentric coordinates are saved for each color triplet for which the vertex has been removed. Separate texture maps are created for each triangle in the simplified mesh. The texture image for each triangle is required to be a half-square triangle. Appropriate colors are assigned to texture pixels using the original vertex colors and their barycentric coordinates. Continuity between the texture maps is ensured by requiring vertices to coincide with pixel centers in the texture map, and by requiring the numbers of pixels along the edges of adjacent texture maps to be integer multiples of one another. With this constraint, pixels representing the same location on two different texture maps can be forced to have identical values. All of the individual texture maps are then packed into a single texture image.

Methods for reconstructing texture from sets of images have in common that, for each texture image, the triangles visible in that image are identified. As shown in Figure 11, simply checking that a surface is contained within the image view frustum and is oriented toward the camera position is not adequate. A full rendering of the model is required to detect whether another surface occludes the surface being mapped.

Methods for reconstructing non-redundant texture for inexpensive scanner systems that use captured images directly for building maps generally select a piece of a single image for each triangle in the mesh. An example of this sort of method is described by Matsumoto et al. [111]. There are two desirable properties in selecting the image that contributes the texture for a given triangle—it should be from the viewpoint in which the triangle projects to the largest area, and it should be from the same image as adjacent triangles. Matsumoto et al. cast this as an energy minimization problem, where the energy is defined as the difference between a penalty function expressing the distance between the images used for adjacent triangles and the scaled projected area of a triangle on an image.

An example of a texture map produced by an inexpensive scanning system that selects image segments as large as possible and then packs them into a single texture map is shown in Figure 12, for the model that was shown in Figure 8.

Individual textures may be selected for regions of the surface encompassing multiple triangles. Rocchini et al. [2] describe a method for selecting one source texture per region, with regions covered by a single source map made



as large as possible. First, a list of images containing each vertex is found. Then, in an iterative procedure, regions are grown so that large regions of the surface are mapped to the same image. The problem then remains of adjusting the boundaries between regions so that seams are not visible. For the triangles on boundaries between different source images, a detailed local registration is performed so that details from the two source texture images match.

Figure 12: An example of the texture image used to display the model in Figure 8.

Methods that use zippering for mesh integration use the original texture map for each section of range image used in the final mesh. Just as overlapping meshes are used to adjust point positions to reduce line-of-sight errors, overlapping textures are used to adjust texture values to eliminate abrupt color changes in the texture. Texture in the overlap region is the weighted average of the two overlapping textures, with the weight of each texture decreasing with distance to the edge of the corresponding range image. Figures 13(a) and (b) show two overlapping scans to be merged. Figure 13(c) shows the result after the geometries have been zippered (or stitched) together, with the original texture maps. Figure 13(d) shows the final result after the texture in the overlap region has been adjusted.

Figure 13: An example of the zippering approach to combining texture maps: (a) and (b) show two input scans to be merged; (c) shows the merged textures without adjustment; (d) shows the final texture after adjustment.

Rather than just using multiple textures pairwise, other methods use data from all of the textures that contain each triangle. Such methods are successful and avoid ghosting and blurring artifacts if they are preceded by registration techniques that make use of texture image data. Johnson and Kang [32] use all textures containing each triangle, with a weighted average that uses the angle between the surface normal and the direction to the camera for each image as the weight. Pulli et al.'s [107] view-dependent texturing uses three types of weights in combining three source textures. First, a weight representing the angle between the current view and each source view is computed. Then, similar to Johnson and Kang, the surface normal to view angle is used. Finally, similar to the zippering methods, these weights are combined with a weight that decreases with distance to the texture edge. Neugebauer and Klein [90] combine multiple textures using weights that account for the angle between the surface normal and view direction, and the distance to the edge of the region of a texture image that will be used in the final model. Because they use images that may still contain artifacts such as specular highlights, Neugebauer and Klein use a third weight that eliminates outliers.

Bernardini et al. [38] describe a method that uses all available maps representing reflectance and normals at each triangle, obtained using the system described in [103]. To minimize color variations, before the maps representing reflectance are combined, a global color balance is performed [104]. A set of points is randomly



sampled on the integrated surface mesh. All of the maps representing reflectance are projected onto the integrated mesh, and for each point all of the maps that contain the point are identified. A set of linear equations is formed for a scalar correction factor for each channel of each image that sets the colors in each map representing a common point equal. A least-squares solution is performed to compute the correction factors for this overdetermined system. The normals maps were previously made consistent with one another by the process used to make them consistent with the underlying integrated mesh. The map values are then combined using three weights. Similar to the other methods, one is based on the area of the triangle in the image, using the dot product of normal and view direction combined with the distance from the camera sensor to the triangle. Another is a weight which diminishes with distance to the edge of the texture. Finally, a third weight is used that indicates whether it was possible to compute a normal from the photometric images, or if the normal from the underlying integrated mesh was used.

Figure 14: (left) A photograph of Michelangelo's Florentine Pietà. (right) A synthetic picture from the 3D computer model.
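The global color balance can be sketched as an overdetermined linear system solved per channel. This is a simplified scalar-gain model with illustrative names, not the exact formulation of [104]:

```python
import numpy as np

def color_balance_gains(n_images, matches):
    """Solve, in the least-squares sense, for one gain per image so
    that observations of the same surface point agree across maps.
    matches: list of (i, j, ci, cj), meaning a common point was seen
    with value ci in image i and cj in image j. The first image's
    gain is anchored to 1 to remove the global scale ambiguity."""
    rows, rhs = [], []
    anchor = np.zeros(n_images)
    anchor[0] = 1.0
    rows.append(anchor)                   # equation g[0] = 1
    rhs.append(1.0)
    for i, j, ci, cj in matches:
        r = np.zeros(n_images)
        r[i], r[j] = ci, -cj              # g[i]*ci - g[j]*cj = 0
        rows.append(r)
        rhs.append(0.0)
    gains, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return gains
```

Each reflectance map would be scaled by its gain before the weighted combination described above.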
9. Scanning Systems and Projects

By combining different features from the various methods for each step outlined in Figure 1, it is possible to compose many different systems for producing a 3D model of an existing object suitable for computer graphics modeling and rendering. The design of the particular processing pipeline depends on the requirements of the end application, constrained by the budgetary limitations for acquiring the data.

A number of scanning applications with emphasis on graphic display as the end product have been documented. Major application areas include scanning historical objects for scholarly study and virtual museums, scanning of humans, and e-commerce.

The National Research Council of Canada has conducted a series of projects over the past 15 years scanning historical artifacts, ranging from 3D representations of oil paintings to archeological sites. Their experiences acquiring and displaying geometry and color reflectance of a variety of objects are described in various publications [112]. In particular, Beraldin et al. [113] present a detailed practical discussion of using a portable (i.e. suitcase-sized) scanner to scan a number of sculptural and architectural features on site in Italy. As an example of current capabilities, they describe the scanning of Pisano's Madonna col Bambino in the Cappella degli Scrovegni in Padova. They were able to acquire 150 scans at 1 mm resolution of the approximately 1 m tall statue in a 7 h period. The range images were registered and integrated using Polyworks™ software.

Many other cultural heritage projects are ongoing or recently completed. Zheng, of the Kyushu Institute of Technology, in collaboration with the Museum of Qin Shihuang Terra Cotta Warriors and Horses, is conducting an extensive scanning project to build models of relics found at the site [114]. A custom portable laser scanner coupled with a digital video camera was designed for the project. Besides presenting the models as they are, the project seeks to facilitate piecing together damaged relics, and digitally restoring full color to figures using pigment fragments that have been found.

Ikeuchi et al. have developed many techniques for the steps in the model acquisition pipeline. These techniques are now being applied to building a model of the 13 m tall Kamakura Buddha from color images and time-of-flight range scanning data [115].

Levoy et al. recently used a combination of laser triangulation range scanning and high-resolution digital color imaging to acquire models of many of the major works of Michelangelo [95]. The high-end equipment employed produced large quantities of data. To make the results usable, they developed a novel rendering system that generates images directly from points rather than from triangle primitives [116].

Bernardini et al. [117] used a lower resolution structured light system, coupled with a photometric lighting system for higher resolution reflectance and normals maps, to scan Michelangelo's Florentine Pietà. A rendering of the model is shown next to a photograph of the statue in Figure 14.

Several projects are addressing the scanning of human shape, e.g. [118]. Many of these applications address purely geometric issues such as fit and ergonomic design, rather than preparing models for computer graphics display. For



animation systems, however, there has been a great deal of interest in the scanning of human faces. Building a realistic face model is one of the most demanding applications, because of human familiarity with the smallest details of the face. Yau [119] described a system for building a face model from a range scan that used light striping and a color image for texture mapping. Nahas et al. [120] describe obtaining a realistic face model from a laser range scanner that captures reflectance information as well. Marschner et al. [121] described a method to obtain skin BRDF for realistic faces using color images and a detailed model from a range scanner, in a method similar to that used by Ikeuchi and Sato [96]. This work was extended to spatially varying skin reflectance [122].

Debevec et al. [123] designed a specialized rig for obtaining hundreds of images of an individual face with calibrated lighting. They use this data to compute spatially varying BRDFs and normals that are mapped onto a lower resolution model of the face that is obtained with a structured light system. Haro et al. [124] describe a less rigorous, but also much less expensive, method for obtaining detailed facial geometry. Photometric stereo is used to capture the geometry of small patches of skin impressions made in a polymeric material. These patches are placed onto appropriate areas of the face model, and grown using texture synthesis techniques to cover the whole face.

The cultural heritage and human face applications discussed above have emphasized using relatively high-end systems. An emerging application for acquired 3D models is e-commerce—using 3D models to allow shoppers to examine and/or customize items for purchase over the internet. This new application requires both inexpensive equipment and a much higher level of "ease of use." Companies targeting this application area are offering systems at relatively low (<$10,000) prices for scanning small objects.

10. Conclusions

The current state of the art allows the acquisition of a large class of objects, but requires expert operators and time-consuming procedures for all but the simplest cases. Research is needed to improve the acquisition pipeline in several key aspects:

• planning methods for data acquisition;
• reliable capture and robust processing of data for a larger class of objects, including large-size objects, environments, and objects with challenging surface properties;
• automation of all the steps, to minimize user input;
• real-time feedback of the acquired surface;
• improved capture and representation of surface appearance;
• methods for assessing global model accuracy after range scan registration.

Scanning and reconstruction technology will enable a more extensive use of 3D computer graphics in a wide range of applications.

References

1. T. Várady, R. R. Martin and J. Cox. Reverse engineering of geometric models—an introduction. Computer Aided Design, 29(4):255–268, 1997.

2. C. Rocchini, P. Cignoni, C. Montani and R. Scopigno. Multiple textures stitching and blending on 3D objects. In Proceedings of the 10th Eurographics Workshop on Rendering, Granada, Spain, pp. 127–138, June 1999.

3. M. Polleyfeys, R. Koch, M. Vergauwen and L. V. Gool. Hand-held acquisition of 3D models with a video camera. In Proceedings of the 2nd International Conference on 3D Digital Imaging and Modeling, Ottawa, Canada, pp. 14–23, October 1999.

4. J. Y. Zheng. Acquiring 3D models from sequences of contours. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(2):163–178, 1994.

5. J.-A. Beraldin, S. F. El-Hakim and F. Blais. Performance evaluation of three active vision systems built at the National Research Council of Canada. In Proceedings of Optical 3D Measurement Techniques III, NRC Technical Report 39165, Vienna, pp. 352–361, 1995.

6. C. Zitnick and J. A. Webb. Multi-baseline stereo using surface extraction. Technical Report CMU-CS-96-196, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, 1996.

7. P. Boulanger. Knowledge representation and analysis of range data. Tutorial Notes, Proceedings of the 2nd International Conference on 3D Digital Imaging and Modeling, 1999.

8. G. M. Cortelazzo, C. Doretto and L. Lucchese. Free-form textured surfaces registration by a frequency domain technique. In International Conference on Image Processing, ICIP '98, pp. 813–817, 1998.

9. G. Roth. Registering two overlapping range images. In Proceedings of the 2nd International Conference on 3D Digital Imaging and Modeling, Ottawa, Canada, pp. 191–200, October 1999.

10. D. Zhang and M. Hebert. Harmonic maps and their applications in surface matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '99), pp. 524–530, 1999.


© The Eurographics Association and Blackwell Publishers Ltd 2002
168 F. Bernardini and H. Rushmeier / 3D Model Acquisition

11. A. E. Johnson and M. Hebert. Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(5):433–449, May, 1999.

12. M. A. Greenspan and P. Boulanger. Efficient and reliable template set matching for 3D object recognition. In Proceedings of the 2nd International Conference on 3D Digital Imaging and Modeling (3DIM), pp. 230–239. 1999.

13. P. J. Besl and N. D. McKay. A method for registration of 3D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239–256, February, 1992.

14. Y. Chen and G. G. Medioni. Object modeling by registration of multiple range images. Image and Vision Computing, 10(3):145–155, 1992.

15. Z. Zhang. Iterative point matching for registration of free-form curves and surfaces. International Journal of Computer Vision, 13(2):119–152, 1994.

16. C. Dorai, J. Weng and A. K. Jain. Optimal registration of object views using range data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(10):1131–1138, October, 1997.

17. C. Dorai, G. Wang, A. K. Jain and C. Mercer. Registration and integration of multiple object views for 3D model construction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1):83–89, January, 1998.

18. D. W. Eggert, A. W. Fitzgibbon and R. B. Fisher. Simultaneous registration of multiple range views for use in reverse engineering of CAD models. Computer Vision and Image Understanding, 69(3):253–272, March, 1998.

19. B. K. P. Horn. Closed form solutions of absolute orientation using unit quaternions. Journal of the Optical Society of America A, 4(4):629–642, April, 1987.

20. R. M. Haralick, H. Joo, C. Lee, X. Zhuang, V. G. Vaidya and M. B. Kim. Pose estimation from corresponding point data. IEEE Transactions on Systems, Man and Cybernetics, 19(6):1426–1446, 1989.

21. T. Masuda, K. Sakaue and N. Yokoya. Registration and integration of multiple range images for 3D model construction. In Proceedings of ICPR '96, IEEE. pp. 879–883. 1996.

22. G. Turk and M. Levoy. Zippered polygon meshes from range images. In A. Glassner (ed), Proceedings of SIGGRAPH 94, Computer Graphics Proceedings, Annual Conference Series. pp. 311–318. July, 1994.

23. R. Bergevin, M. Soucy, H. Gagnon and D. Laurendeau. Towards a general multiview registration technique. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(5):540–547, May, 1996.

24. R. Benjemaa and F. Schmitt. Fast global registration of 3D sampled surfaces using a multi-z-buffer technique. In Proceedings of the International Conference on Recent Advances in 3D Digital Imaging and Modeling, Ottawa, Canada: pp. 113–120. May, 1997.

25. K. Pulli. Multiview registration for large data sets. In Proceedings of the 2nd International Conference on 3D Digital Imaging and Modeling, Ottawa, Canada: pp. 160–168. October, 1999.

26. G. Blais and M. D. Levine. Registering multiview range data to create 3D computer objects. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(8):820–824, August, 1995.

27. P. J. Neugebauer. Reconstruction of real-world objects via simultaneous registration and robust combination of multiple range images. International Journal of Shape Modeling, 3(1 & 2):71–90, 1997.

28. A. J. Stoddart and A. Hilton. Registration of multiple point sets. In Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria: pp. B40–B44. 1996.

29. E. Gagnon, J.-F. Rivest, M. Greenspan and N. Burtnyk. A computer-assisted range image registration system for nuclear waste cleanup. IEEE Transactions on Instrumentation and Measurement, 48(3):758–762, 1999.

30. R. C. Gonzalez and R. E. Woods. Digital Image Processing. Addison-Wesley, Reading, MA, 1993.

31. F. Bernardini and H. Rushmeier. Strategies for registering range images from unknown camera positions. In Three-Dimensional Image Capture and Applications III (Proceedings of SPIE 3958), pp. 200–206. 2000.

32. A. Johnson and S. Kang. Registration and integration of textured 3D data. In Proceedings of the International Conference on Recent Advances in 3D Digital Imaging and Modeling, Ottawa, Canada: pp. 234–241. May, 1997.

33. A. Johnson and S. Kang. Registration and integration of textured 3D data. Technical Report CRL 96/4, DEC-CRL. September, 1996.

34. C. Schütz, T. Jost and H. Hügli. Multi-feature matching algorithm for free-form 3D surface registration. In Proceedings of the International Conference on Pattern Recognition, Brisbane, Australia: August, 1998.
35. S. Weik. Registration of 3D partial surface models using luminance and depth information. In Proceedings of the International Conference on Recent Advances in 3D Digital Imaging and Modeling, Ottawa, Canada: pp. 93–100. May, 1997.

36. K. Pulli. Surface reconstruction and display from range and color data. PhD Thesis, Department of Computer Science and Engineering, University of Washington, 1997.

37. R. Szeliski and H. Shum. Creating full panoramic mosaics and environment maps. In Proceedings of SIGGRAPH 97, Computer Graphics Proceedings, Annual Conference Series. pp. 251–258. 1997.

38. F. Bernardini, I. Martin and H. Rushmeier. High-quality texture reconstruction. IEEE Transactions on Visualization and Computer Graphics, 7(4):318–332, 2001.

39. M. Rutishauser, M. Stricker and M. Trobina. Merging range images of arbitrarily shaped objects. In Proceedings of CVPR '94, pp. 573–580. 1994.

40. P. Hébert, D. Laurendeau and D. Poussart. Scene reconstruction and description: geometric primitive extraction from multiple view scattered data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 286–292. 1993.

41. M. Soucy and D. Laurendeau. A general surface approach to the integration of a set of range views. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(4):344–358, April, 1995.

42. R. M. Bolle and B. C. Vemuri. On surface reconstruction methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(1):1–13, 1991.

43. R. Mencl and H. Müller. Interpolation and approximation of surfaces from three-dimensional scattered data points. In State of the Art Report (STAR), Eurographics '98. 1998.

44. F. Bernardini, C. Bajaj, J. Chen and D. Schikore. Automatic reconstruction of 3D CAD models from digital scans. International Journal of Computational Geometry and Applications, 9(4 & 5):327–370, August–October, 1999.

45. H. Edelsbrunner. Shape reconstruction with Delaunay complex. In A. V. Moura and C. L. Lucchesi (eds), LATIN'98: Theoretical Informatics. Third Latin American Symposium, Campinas, Brazil, Lecture Notes in Computer Science, LNCS 1380. New York: Springer, pp. 119–132. 1998.

46. H. Edelsbrunner and E. P. Mücke. Three-dimensional alpha shapes. ACM Transactions on Graphics, 13(1):43–72, January, 1994.

47. C. Bajaj, F. Bernardini and G. Xu. Automatic reconstruction of surfaces and scalar fields from 3D scans. In Proceedings of SIGGRAPH 95, Computer Graphics Proceedings, Annual Conference Series. pp. 109–118. 1995.

48. H. Edelsbrunner. Surface reconstruction by wrapping finite sets in space. Technical Report 96-001, Raindrop Geomagic Inc., 1996.

49. N. Amenta, M. Bern and M. Kamvysselis. A new Voronoi-based surface reconstruction algorithm. In Proceedings of SIGGRAPH 98, Computer Graphics Proceedings, Annual Conference Series. pp. 415–421. July, 1998.

50. N. Amenta and M. Bern. Surface reconstruction by Voronoi filtering. Discrete & Computational Geometry, 22(4):481–504, 1999.

51. N. Amenta, S. Choi and R. Kolluri. The power crust. In Proceedings of the 6th ACM Symposium on Solid Modeling and Applications. 2001.

52. N. Amenta, S. Choi and R. Kolluri. The power crust, unions of balls, and the medial axis transform. International Journal of Computational Geometry and its Applications (special issue on surface reconstruction, in press).

53. N. Amenta, S. Choi, T. K. Dey and N. Leekha. A simple algorithm for homeomorphic surface reconstruction. In Proceedings of the 16th ACM Symposium on Computational Geometry, pp. 213–222. 2000.

54. T. K. Dey, J. Giesen and J. Hudson. Delaunay based shape reconstruction from large data. In Proceedings of IEEE Visualization 2001. 2001, (in press).

55. F. Bernardini and C. Bajaj. Sampling and reconstructing manifolds using alpha-shapes. In Proceedings of the 9th Canadian Conference on Computational Geometry, pp. 193–198. August, 1997. Updated online version available at www.qucis.queensu.ca/cccg97.

56. D. Attali. r-regular shape reconstruction from unorganized points. Computational Geometry: Theory and Applications, 10:239–247, 1998.

57. T. K. Dey, K. Mehlhorn and E. A. Ramos. Curve reconstruction: connecting dots with good reason. Computational Geometry: Theory and Applications, 15:229–244, 2000.
58. F. Bernardini, J. Mittleman, H. Rushmeier, C. Silva and G. Taubin. The ball-pivoting algorithm for surface reconstruction. IEEE Transactions on Visualization and Computer Graphics, 5(4):349–359, October–December, 1999.

59. M. Gopi, S. Krishnan and C. T. Silva. Surface reconstruction based on lower dimensional localized Delaunay triangulation. In Proceedings of Eurographics 2000, pp. 467–478. 2000.

60. B. Curless and M. Levoy. A volumetric method for building complex models from range images. In Proceedings of SIGGRAPH 96, Computer Graphics Proceedings, Annual Conference Series. pp. 303–312. August, 1996.

61. M. Wheeler, Y. Sato and K. Ikeuchi. Consensus surfaces for modeling 3D objects from multiple range images. In Sixth International Conference on Computer Vision, IEEE. pp. 917–924. 1998.

62. A. Hilton, A. Stoddart, J. Illingworth and T. Windeatt. Reliable surface reconstruction from multiple range images. In Fourth European Conference on Computer Vision, pp. 117–126. 1996.

63. W. Lorensen and H. Cline. Marching cubes: a high resolution 3D surface construction algorithm. Computer Graphics, 21(4):163–170, 1987.

64. J.-D. Boissonnat and F. Cazals. Smooth surface reconstruction via natural neighbour interpolation of distance functions. In Proceedings of the 16th ACM Symposium on Computational Geometry, pp. 223–232. 2000.

65. K. Pulli, T. Duchamp, H. Hoppe, J. McDonald, L. Shapiro and W. Stuetzle. Robust meshes from multiple range maps. In Proceedings of the International Conference on Recent Advances in 3D Digital Imaging and Modeling, Ottawa, Canada: pp. 205–211. May, 1997.

66. H. Hoppe, T. DeRose, T. Duchamp, J. McDonald and W. Stuetzle. Mesh optimization. In Proceedings of SIGGRAPH '93, Computer Graphics Proceedings, Annual Conference Series. pp. 19–26. 1993.

67. M.-E. Algorri and F. Schmitt. Surface reconstruction from unstructured 3D data. Computer Graphics Forum, 15(1):47–60, 1996.

68. J. Neugebauer and K. Klein. Adaptive triangulation of objects reconstructed from multiple range images. In IEEE Visualization '97, Late Breaking Hot Topics. 1997.

69. M. Reed and P. Allen. 3D modeling from range imagery: an incremental method with a planning component. Image and Vision Computing, 17:99–111, 1999.

70. C. Rocchini, P. Cignoni, F. Ganovelli, C. Montani, P. Pingi and R. Scopigno. Marching intersections: an efficient resampling algorithm for surface management. In Proceedings of the International Conference on Shape Modeling and Applications (SMI 2001), Genova, Italy: 7–11 May, 2001.

71. D. Terzopoulos, A. Witkin and M. Kass. Constraints on deformable models: recovering 3D shape and nonrigid motion. Artificial Intelligence, 36:91–123, 1988.

72. A. Pentland and S. E. Sclaroff. Closed form solutions for physically based shape modelling and recovery. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(7):715–729, 1991.

73. R. T. Whitaker. A level-set approach to 3D reconstruction from range data. International Journal of Computer Vision, 29(3):203–231, 1998.

74. J. Gomes and O. Faugeras. Level sets and distance functions. In Proceedings of the 6th European Conference on Computer Vision, pp. 588–602. 2000.

75. M. Garland. Multiresolution modelling: survey and future opportunities. In State of the Art Report (STAR), Eurographics '99. 1999.

76. M. Soucy, G. Godin and M. Rioux. A texture-mapping approach for the compression of colored 3D triangulations. The Visual Computer, 12:503–513, 1996.

77. J. Maillot, H. Yahia and A. Verroust. Interactive texture mapping. In Proceedings of SIGGRAPH 93, Computer Graphics Proceedings, Annual Conference Series. pp. 27–34. 1993.

78. M. Eck, T. DeRose, T. Duchamp, H. Hoppe, M. Lounsbery and W. Stuetzle. Multiresolution analysis of arbitrary meshes. In R. Cook (ed), Proceedings of SIGGRAPH 95, Computer Graphics Proceedings, Annual Conference Series. pp. 173–182. August, 1995.

79. S. R. Marschner. Inverse rendering for computer graphics. PhD Thesis, Cornell University, 1998.

80. P. Sloan, D. Weinstein and J. Brederson. Importance driven texture coordinate optimization. Computer Graphics Forum, 17(3):97–104, 1998. Proceedings of EUROGRAPHICS '98.

81. H. Hoppe, T. DeRose, T. Duchamp, M. Halstead, H. Jin, J. McDonald, J. Schweitzer and W. Stuetzle. Piecewise smooth surface reconstruction. In Proceedings of SIGGRAPH 94, Computer Graphics Proceedings, Annual Conference Series. pp. 295–302. 1994.
82. C. Loop. Smooth subdivision surfaces based on triangles. MS Thesis, Department of Mathematics, University of Utah, August, 1987.

83. M. Eck and H. Hoppe. Automatic reconstruction of B-spline surfaces of arbitrary topological type. In Proceedings of SIGGRAPH 96, Computer Graphics Proceedings, Annual Conference Series. pp. 325–334. 1996.

84. H. Hoppe, T. DeRose, T. Duchamp, J. McDonald and W. Stuetzle. Surface reconstruction from unorganized points. Computer Graphics, 26(2):71–78, July, 1992. Proceedings of SIGGRAPH 92.

85. J. Peters. Constructing C^1 surfaces of arbitrary topology using biquadratic and bicubic splines. In N. Sapidis (ed), Designing Fair Curves and Surfaces, SIAM, pp. 277–293. 1994.

86. V. Krishnamurthy and M. Levoy. Fitting smooth surfaces to dense polygon meshes. In Proceedings of SIGGRAPH 96, Computer Graphics Proceedings, Annual Conference Series. pp. 313–324. August, 1996.

87. R. Baribeau, M. Rioux and G. Godin. Color reflectance modeling using a polychromatic laser range sensor. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):263–269, 1992.

88. K. Price. Computer vision bibliography. http://iris.usc.edu/Vision-Notes/bibliography/contents.html.

89. R. Y. Tsai. An efficient and accurate camera calibration technique for 3D machine vision. In Computer Vision and Pattern Recognition, pp. 364–374. June, 1986.

90. P. Neugebauer and K. Klein. Texturing 3D models of real world objects from multiple unregistered photographic views. Computer Graphics Forum, 18(3):C245–C256, 1999. Proceedings of EUROGRAPHICS '99.

91. H. P. A. Lensch, W. Heidrich and H.-P. Seidel. Automated texture registration and stitching for real world models. In W. Wang, B. A. Barsky and Y. Shinagawa (eds), Proceedings of the 8th Pacific Conference on Computer Graphics and Applications, pp. 317–326. IEEE Computer Society, 2000.

92. K. Nishino, Y. Sato and K. Ikeuchi. Appearance compression and synthesis based on 3D model for mixed reality. In Proceedings of ICCV '99, Vol. 1, pp. 38–45. September, 1999.

93. P. Viola. Alignment by maximization of mutual information. PhD Thesis, MIT AI-Lab, June, 1995.

94. K. Matsushita and T. Kaneko. Efficient and handy texture mapping on 3D surfaces. Computer Graphics Forum, 18(3):C349–C357, 1999. Proceedings of EUROGRAPHICS '99.

95. M. Levoy et al. The digital Michelangelo project: 3D scanning of large statues. In Proceedings of SIGGRAPH 00, Computer Graphics Proceedings, Annual Conference Series. pp. 131–144. 2000.

96. K. Ikeuchi and K. Sato. Determining reflectance properties of an object using range and brightness images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(11):1139–1153, 1991.

97. K. Torrance and E. Sparrow. Theory for off-specular reflection from roughened surfaces. Journal of the Optical Society of America, 57:1105–1114, September, 1967.

98. R. J. Woodham. Photometric method for determining surface orientation from multiple images. Optical Engineering, 19:139–144, 1980.

99. G. Kay and T. Caelli. Inverting an illumination model from range and intensity maps. CVGIP—Image Understanding, 59(2):183–201, 1994.

100. Y. Sato, M. Wheeler and K. Ikeuchi. Object shape and reflectance modeling from observation. In Proceedings of SIGGRAPH 97, Computer Graphics Proceedings, Annual Conference Series. pp. 379–388. 1997.

101. G. Klinker, S. Shafer and T. Kanade. Using a color reflection model to separate highlights from object color. In International Conference on Computer Vision, pp. 145–150. 1987.

102. H. P. A. Lensch, J. Kautz, M. Goesele, W. Heidrich and H.-P. Seidel. Image-based reconstruction of spatially varying materials. In K. Myszkowski and S. Gortler (eds), Rendering Techniques '01 (Proceedings of the 12th Eurographics Rendering Workshop). 2001.

103. H. Rushmeier, F. Bernardini, J. Mittleman and G. Taubin. Acquiring input for rendering at appropriate level of detail: digitizing a Pietà. In Proceedings of the 9th Eurographics Workshop on Rendering, Vienna, Austria: pp. 81–92. June, 1998.

104. H. Rushmeier and F. Bernardini. Computing consistent normals and colors from photometric data. In Proceedings of the 2nd International Conference on 3D Digital Imaging and Modeling, Ottawa, Canada: pp. 99–108. October, 1999.

105. K. Dana, B. van Ginneken, S. Nayar and J. Koenderink. Reflectance and texture of real-world surfaces. ACM Transactions on Graphics, 18(1):1–34, 1999.
106. P. Debevec, C. Taylor and J. Malik. Modeling and rendering architecture from photographs: a hybrid geometry- and image-based approach. In Proceedings of SIGGRAPH 96, Computer Graphics Proceedings, Annual Conference Series. pp. 11–20. August, 1996.

107. K. Pulli, M. Cohen, T. Duchamp, H. Hoppe, L. Shapiro and W. Stuetzle. View-based rendering: visualizing real objects from scanned range and color data. In Proceedings of the 8th Eurographics Workshop on Rendering, St Etienne, France: pp. 23–34. June, 1997.

108. G. S. P. Miller, S. Rubin and D. Ponceleon. Lazy decompression of surface light fields for precomputed global illumination. In Proceedings of the 9th Eurographics Workshop on Rendering, Vienna, Austria: pp. 281–292. June, 1998.

109. K. Nishino, Y. Sato and K. Ikeuchi. Eigen-texture method: appearance compression based on 3D model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 618–624. June, 1999.

110. D. N. Wood, D. I. Azuma, K. Aldinger, B. Curless, T. Duchamp, D. H. Salesin and W. Stuetzle. Surface light fields for 3D photography. In Proceedings of SIGGRAPH 00, Computer Graphics Proceedings, Annual Conference Series. pp. 287–296. 2000.

111. Y. Matsumoto, H. Terasaki, K. Sugimoto and T. Arakawa. A portable three-dimensional digitizer. In Proceedings of the International Conference on Recent Advances in 3D Digital Imaging and Modeling, Ottawa, Canada: pp. 197–204. May, 1997.

112. P. Boulanger, J. Taylor, S. F. El-Hakim and M. Rioux. How to virtualize reality: an application to the re-creation of world heritage sites. In Proceedings of the Conference on Virtual Systems and MultiMedia, Gifu, Japan. NRC Technical Report 41599. 1998.

113. J.-A. Beraldin, F. Blais, L. Cournoyer, R. Rodella, F. Bernier and N. Harrison. Digital 3D imaging system for rapid response on remote sites. In Proceedings of the 2nd International Conference on 3D Digital Imaging and Modeling, Ottawa, Canada: pp. 34–45. October, 1999.

114. J. Y. Zheng and L. Z. Zhong. Virtual recovery of excavated relics. IEEE Computer Graphics and Applications, 19(3):6–11, May–June, 1999.

115. K. Ikeuchi et al. Modeling cultural heritage through observation. In Proceedings of the First Pacific Rim Conference on Multimedia, Sydney, Australia: December, 2000, (in press).

116. S. Rusinkiewicz and M. Levoy. QSplat: a multiresolution point rendering system for large meshes. In Proceedings of SIGGRAPH 00, Computer Graphics Proceedings, Annual Conference Series. pp. 343–352. 2000.

117. F. Bernardini, I. Martin, J. Mittleman, H. Rushmeier and G. Taubin. Building a digital model of Michelangelo's Florentine Pietà. IEEE Computer Graphics and Applications, 22(1):59–67, 2002.

118. K. Robinette, H. Daanen and E. Paquet. The Caesar project: a 3D surface anthropometry survey. In Proceedings of the 2nd International Conference on 3D Digital Imaging and Modeling, Ottawa, Canada: pp. 380–387. October, 1999.

119. J. Yau. A texture mapping approach to 3D facial image synthesis. Computer Graphics Forum, 7(2):129–134, 1988.

120. M. Nahas, H. Hutric, M. Rioux and J. Domey. Facial image synthesis using skin texture recording. The Visual Computer, 6(6):337–343, 1990.

121. S. Marschner, S. Westin, E. Lafortune, K. Torrance and D. Greenberg. Image-based BRDF measurement including human skin. In Proceedings of the 10th Eurographics Workshop on Rendering, Granada, Spain: pp. 131–144. June, 1999.

122. S. Marschner, B. Guenter and S. Raghupathy. Modeling and rendering for realistic facial animation. In Proceedings of the 11th Eurographics Workshop on Rendering, Brno, Czech Republic: June, 2000.

123. P. Debevec, T. Hawkins, C. Tchou, H.-P. Duiker, W. Sarokin and M. Sagar. Acquiring the reflectance field of a human face. In Proceedings of SIGGRAPH 00, Computer Graphics Proceedings, Annual Conference Series. pp. 145–156. 2000.

124. A. Haro, B. Guenter and I. Essa. Real-time, photo-realistic, physically based rendering of fine scale human skin structure. In K. Myszkowski and S. Gortler (eds), Rendering Techniques '01 (Proceedings of the 12th Eurographics Rendering Workshop). 2001.