
3D is here: Point Cloud Library (PCL)

Radu Bogdan Rusu and Steve Cousins


Willow Garage
68 Willow Rd., Menlo Park, CA 94025, USA
{rusu,cousins}@willowgarage.com

Abstract— With the advent of new, low-cost 3D sensing hardware such as the Kinect, and continued efforts in advanced point cloud processing, 3D perception is gaining more and more importance in robotics, as well as in other fields.
In this paper we present one of our most recent initiatives in the area of point cloud perception: PCL (the Point Cloud Library, http://pointclouds.org). PCL presents an advanced and extensive approach to the subject of 3D perception, and it is meant to provide support for all the common 3D building blocks that applications need. The library contains state-of-the-art algorithms for filtering, feature estimation, surface reconstruction, registration, model fitting, and segmentation. PCL is supported by an international community of robotics and perception researchers. We provide a brief walkthrough of PCL, including its algorithmic capabilities and implementation strategies.

Fig. 1. The Point Cloud Library logo.

I. INTRODUCTION

For robots to work in unstructured environments, they need to be able to perceive the world. Over the past 20 years we have come a long way, from simple range sensors based on sonar or IR providing a few bytes of information about the world, to ubiquitous cameras and laser scanners. In the past few years, sensors like the Velodyne spinning LIDAR used in the DARPA Urban Challenge and the tilting laser scanner used on the PR2 have given us high-quality 3D representations of the world: point clouds. Unfortunately, these systems are expensive, costing thousands or tens of thousands of dollars, and are therefore out of the reach of many robotics projects.
Very recently, however, 3D sensors have become available that change the game. For example, the Kinect sensor for the Microsoft Xbox 360 game system, based on underlying technology from PrimeSense, can be purchased for under $150 and provides real-time point clouds as well as 2D images. As a result, we can expect that most robots in the future will be able to "see" the world in 3D. All that is needed is a mechanism for handling point clouds efficiently, and that is where the open-source Point Cloud Library, PCL, comes in. Figure 1 presents the logo of the project.
PCL is a comprehensive, free, BSD-licensed library for n-D point clouds and 3D geometry processing. PCL is fully integrated with ROS, the Robot Operating System (see http://ros.org), and has already been used in a variety of projects in the robotics community.
II. ARCHITECTURE AND IMPLEMENTATION

PCL is a fully templated, modern C++ library for 3D point cloud processing. Written with efficiency and performance on modern CPUs in mind, the underlying data structures in PCL make heavy use of SSE optimizations. Most mathematical operations are implemented with and based on Eigen, an open-source template library for linear algebra [1]. In addition, PCL provides support for OpenMP (see http://openmp.org) and the Intel Threading Building Blocks (TBB) library [2] for multi-core parallelization. The backbone for fast k-nearest-neighbor search operations is provided by FLANN (Fast Library for Approximate Nearest Neighbors) [3]. All the modules and algorithms in PCL pass data around using Boost shared pointers (see Figure 2), thus avoiding the need to re-copy data that is already present in the system. As of version 0.6, PCL has been ported to Windows, MacOS, and Linux, and Android ports are in the works.
From an algorithmic perspective, PCL is meant to incorporate a multitude of 3D processing algorithms that operate on point cloud data, including: filtering, feature estimation, surface reconstruction, model fitting, segmentation, registration, etc. Each set of algorithms is defined via base classes that attempt to integrate all the common functionality used throughout the entire pipeline, thus keeping the implementations of the actual algorithms compact and clean. The basic interface for such a processing pipeline in PCL is:
• create the processing object (e.g., filter, feature estimator, segmentation);
• use setInputCloud to pass the input point cloud dataset to the processing module;
• set some parameters;
• call compute (or filter, segment, etc.) to get the output.
The sequence of pseudo-code presented in Figure 2 shows a standard feature estimation process in two steps, where a NormalEstimation object is first created and passed an input dataset, and the results, together with the original input, are then passed to an FPFH [4] estimation object; a minimal code sketch of this two-step process is given below the figure caption.

Fig. 2. An example of the PCL implementation pipeline for Fast Point Feature Histogram (FPFH) [4] estimation: PointCloudConstSharedPtr &cloud → NormalEstimation → [PointCloud &normals] → FPFHEstimation → [PointCloud &fpfh].
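The following is a minimal sketch of that two-step pipeline, written against the current (post-1.0) PCL API; the class and type names used here (pcl::NormalEstimation, pcl::FPFHEstimation, pcl::FPFHSignature33) and the search radii are assumptions relative to the 0.x interfaces described in this paper and are intended as illustration only.

#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>

// Estimate FPFH descriptors for an input cloud in two steps:
// surface normals first, then the FPFH signature for each point.
void
estimateFPFH (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr &cloud,
              pcl::PointCloud<pcl::FPFHSignature33> &descriptors)
{
  // Step 1: NormalEstimation consumes the raw cloud and produces normals.
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud (cloud);
  ne.setRadiusSearch (0.03);                 // example 3 cm neighborhood
  pcl::PointCloud<pcl::Normal>::Ptr normals (new pcl::PointCloud<pcl::Normal>);
  ne.compute (*normals);

  // Step 2: FPFHEstimation consumes the original cloud plus the normals.
  pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
  fpfh.setInputCloud (cloud);
  fpfh.setInputNormals (normals);
  fpfh.setRadiusSearch (0.05);               // must be larger than the normal radius
  fpfh.compute (descriptors);
}

Note how the clouds are handed around as shared pointers, matching the zero-copy design described above.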
To further simplify development, PCL is split into a series of smaller code libraries that can be compiled separately:
• libpcl_filters: implements data filters such as downsampling, outlier removal, indices extraction, projections, etc. (a short downsampling and I/O sketch follows this list);
• libpcl_features: implements many 3D features such as surface normals and curvatures, boundary point estimation, moment invariants, principal curvatures, PFH and FPFH descriptors, spin images, integral images, NARF descriptors, RIFT, RSD, VFH, SIFT on intensity data, etc.;
• libpcl_io: implements I/O operations such as writing to/reading from PCD (Point Cloud Data) files;
• libpcl_segmentation: implements cluster extraction, model fitting via sample consensus methods for a variety of parametric models (planes, cylinders, spheres, lines, etc.), polygonal prism extraction, etc.;
• libpcl_surface: implements surface reconstruction techniques, meshing, convex hulls, Moving Least Squares, etc.;
• libpcl_registration: implements point cloud registration methods such as ICP, etc.;
• libpcl_keypoints: implements different keypoint extraction methods that can be used as a preprocessing step to decide where to extract feature descriptors;
• libpcl_range_image: implements support for range images created from point cloud datasets.
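As an illustration of the filtering and I/O libraries above, here is a minimal sketch, again assuming the post-1.0 PCL headers; the file names and the 1 cm leaf size are placeholders. It downsamples a cloud with a voxel grid and writes the result back to a PCD file.

#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/filters/voxel_grid.h>

int
main ()
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ> downsampled;

  // Read a point cloud from disk (libpcl_io); the file name is an example.
  if (pcl::io::loadPCDFile<pcl::PointXYZ> ("input.pcd", *cloud) < 0)
    return (-1);

  // Downsample with a 1 cm voxel grid (libpcl_filters).
  pcl::VoxelGrid<pcl::PointXYZ> grid;
  grid.setInputCloud (cloud);
  grid.setLeafSize (0.01f, 0.01f, 0.01f);
  grid.filter (downsampled);

  // Write the result back to disk (libpcl_io).
  pcl::io::savePCDFileBinary ("downsampled.pcd", downsampled);
  return (0);
}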
To ensure the correctness of operations in PCL, the methods and classes in each of the above mentioned libraries contain unit and regression tests. The suite of unit tests is compiled on demand and verified frequently by a dedicated build farm, and the respective authors of a specific component are informed immediately when that component fails its tests. This ensures that any changes in the code are tested thoroughly and that any new functionality or modification will not break existing code that depends on PCL.
In addition, a large number of examples and tutorials are available either as C++ source files or as step-by-step instructions on the PCL wiki web pages.

III. PCL AND ROS

One of the cornerstones of the PCL design philosophy is represented by Perception Processing Graphs (PPG). The rationale behind PPGs is that most applications that deal with point cloud processing can be formulated as a concrete set of building blocks that are parameterized to achieve different results. For example, there is no algorithmic difference between a wall detection algorithm, a door detection algorithm, or a table detection algorithm: all of them share the same building block, which in this case is a constrained planar segmentation algorithm. What changes in the above mentioned cases is a subset of the parameters used to run the algorithm.
With this in mind, and based on previous experience designing other 3D processing libraries and, most recently, ROS, we decided to make each algorithm in PCL available as a standalone building block that can be easily connected with other blocks, thus creating processing graphs, in the same way that nodes connect together in a ROS ecosystem. Furthermore, because point clouds are extremely large in nature, we wanted to guarantee that there would be no unnecessary data copying or serialization/deserialization for critical applications that can afford to run in the same process. For this we created nodelets, which are dynamically loadable plugins that look and operate like ROS nodes, but run in a single process (as single or multiple threads); a minimal nodelet skeleton is sketched after Figure 3.
A concrete nodelet PPG example for the problem of identifying a set of point clusters supported by horizontal planar areas is shown in Figure 3.

Fig. 3. A ROS nodelet graph for the problem of object clustering on planar surfaces.
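For readers unfamiliar with nodelets, the skeleton below shows the general shape of such a plugin under ROS 1; the class name, namespace, and the PLUGINLIB_EXPORT_CLASS registration macro are assumptions for recent pluginlib versions and are not part of PCL itself. The nodelet is compiled into a shared library, loaded into a running nodelet manager, and exchanges point clouds with other nodelets by pointer rather than through serialized messages.

#include <nodelet/nodelet.h>
#include <pluginlib/class_list_macros.h>

namespace pcl_ppg_example
{
  // Minimal nodelet skeleton: behaves like a ROS node, but runs as a
  // dynamically loaded plugin inside the nodelet manager's process.
  class PlanarSegmentationNodelet : public nodelet::Nodelet
  {
    virtual void onInit ()
    {
      NODELET_INFO ("PlanarSegmentationNodelet loaded.");
      // Publishers and subscribers would be created here via getNodeHandle ().
    }
  };
}

// Register the class so the nodelet manager can load it by name.
PLUGINLIB_EXPORT_CLASS (pcl_ppg_example::PlanarSegmentationNodelet, nodelet::Nodelet)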
IV. VISUALIZATION

PCL comes with its own visualization library, based on VTK [5]. VTK offers great multi-platform support for rendering 3D point cloud and surface data, including visualization support for tensors, texturing, and volumetric methods.
The PCL Visualization library is meant to integrate PCL with VTK, by providing a comprehensive visualization layer for n-D point cloud structures. Its purpose is to be able to quickly prototype and visualize the results of algorithms operating on such hyper-dimensional data. As of version 0.2, the visualization library offers:
• methods for rendering and setting visual properties (colors, point sizes, opacity, etc.) for any n-D point cloud dataset;
• methods for drawing basic 3D shapes on screen (e.g., cylinders, spheres, lines, polygons, etc.) either from sets of points or from parametric equations;
• a histogram visualization module (PCLHistogramVisualizer) for 2D plots;
• a multitude of geometry and color handlers, with which the user can specify which dimensions are to be used for the point positions in 3D Cartesian space (see Figure 4) and which colors should be used to render the points (see Figure 5);
• RangeImage visualization modules (see Figure 6).
The handler interactors are modules that describe how colors and the 3D geometry at each point in space are computed, displayed on screen, and how the user interacts with the data. They are designed with simplicity in mind and are easily extendable. A code snippet that produces results similar to the ones shown in Figure 4 is presented in Algorithm 1.

Algorithm 1 Code example for the results shown in Figure 4.
using namespace pcl_visualization;
PCLVisualizer p ("Test");
PointCloudColorHandlerRandom handler (cloud);
p.addPointCloud (cloud, handler, "cloud_random");
p.spin ();
p.removePointCloud ("cloud_random");
PointCloudGeometryHandlerSurfaceNormal handler2 (cloud);
p.addPointCloud (cloud, handler2, "cloud_random");
p.spin ();

Fig. 6. An example of a RangeImage display using PCL Visualization (bottom) for a given 3D point cloud dataset (top).

The library also offers a few general purpose tools for visualizing PCD files, as well as for visualizing streams of data from a sensor in real-time in ROS.

V. USAGE EXAMPLES

In this section we present two code snippets that exhibit the flexibility and simplicity of using PCL for filtering and segmentation operations, followed by three application examples that make use of PCL for solving the perception problem: i) navigation and mapping, ii) object recognition, and iii) manipulation and grasping.
Filtering constitutes one of the most important operations that any raw point cloud dataset usually goes through before any higher-level operations are applied to it. Algorithm 2 and Figure 7 present a code snippet and the results obtained after running it on the point cloud dataset shown in the left part of the figure. The filter is based on estimating a set of statistics for the points in a given neighborhood (k = 50 here), and using them to select all points within 1.0·σ of the mean distance µ as inliers (see [6] for more information).

Algorithm 2 Code example for the results shown in Figure 7.
pcl::StatisticalOutlierRemoval<pcl::PointXYZ> f;
f.setInputCloud (input_cloud);
f.setMeanK (50);
f.setStddevMulThresh (1.0);
f.filter (output_cloud);
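In the snippet above, input_cloud is a shared pointer to the raw cloud and output_cloud receives the inliers. The sketch below is an assumption-based completion showing how the surrounding declarations might look with the post-1.0 headers (the PCD file name is a placeholder); it also retrieves the rejected outliers shown on the right of Figure 7 by re-running the filter with setNegative (true).

#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/filters/statistical_outlier_removal.h>

int
main ()
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr input_cloud (new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ> output_cloud, outliers;

  // The file name is a placeholder for any raw laser or Kinect scan.
  if (pcl::io::loadPCDFile<pcl::PointXYZ> ("raw_scan.pcd", *input_cloud) < 0)
    return (-1);

  // Statistical outlier removal: k = 50 neighbors, 1.0 * sigma threshold.
  pcl::StatisticalOutlierRemoval<pcl::PointXYZ> f;
  f.setInputCloud (input_cloud);
  f.setMeanK (50);
  f.setStddevMulThresh (1.0);
  f.filter (output_cloud);          // inliers

  f.setNegative (true);             // flip the condition to keep the rejected points
  f.filter (outliers);              // outliers

  return (0);
}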
Fig. 4. An example of two different geometry handlers applied to the same dataset. Left: the 3D Cartesian space represents XYZ data, with the arrows representing surface normals estimated at each point in the cloud. Right: the Cartesian space represents the 3 dimensions of the normal vector at each point for the same dataset.

Fig. 7. Left: a raw point cloud acquired using a tilting laser scanner. Middle: the resultant filtered point cloud (i.e., inliers) after a StatisticalOutlierRemoval operator was applied. Right: the rejected points (i.e., outliers).

Fig. 5. An example of two different color handlers applied to the same dataset. Left: the colors represent the distance from the acquisition viewpoint. Right: the colors represent the RGB texture acquired at each point.

The second example constitutes a segmentation operation for planar surfaces, using a RANSAC [7] model, as shown in Algorithm 3. The input and output results are shown in Figure 8. In this example, we use a robust RANSAC estimator to randomly select 3 non-collinear points and calculate the best possible model in terms of the overall number of inliers. The inlier thresholding criterion is set to a maximum distance of 1 cm from each point to the plane model.

Algorithm 3 Code example for the results shown in Figure 8.
pcl::SACSegmentation<pcl::PointXYZ> s;
s.setInputCloud (input_cloud);
s.setModelType (pcl::SACMODEL_PLANE);
s.setMethodType (pcl::SAC_RANSAC);
s.setDistanceThreshold (0.01);
s.segment (output_cloud);
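In the released PCL API, segment () reports the planar inliers as point indices together with the plane coefficients rather than as a cloud. The sketch below, assuming the post-1.0 headers and an already populated input_cloud, shows one way to recover the actual plane points with ExtractIndices; it is an illustrative variant, not the snippet used for Figure 8.

#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/filters/extract_indices.h>

// Assumes input_cloud is a populated pcl::PointCloud<pcl::PointXYZ>::Ptr.
void
extractPlane (const pcl::PointCloud<pcl::PointXYZ>::Ptr &input_cloud,
              pcl::PointCloud<pcl::PointXYZ> &plane_cloud)
{
  pcl::ModelCoefficients coefficients;
  pcl::PointIndices::Ptr inliers (new pcl::PointIndices);

  // Fit a plane with RANSAC, accepting points within 1 cm of the model.
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setInputCloud (input_cloud);
  seg.setModelType (pcl::SACMODEL_PLANE);
  seg.setMethodType (pcl::SAC_RANSAC);
  seg.setDistanceThreshold (0.01);
  seg.segment (*inliers, coefficients);

  // Copy only the inlier points into the output cloud.
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud (input_cloud);
  extract.setIndices (inliers);
  extract.setNegative (false);            // keep the plane; true would keep the rest
  extract.filter (plane_cloud);
}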
Fig. 10. Experiments with PCL in grasping applications [11], from left to right: a visualization of the collision environment, including points associated with unrecognized objects (blue) and obstacles with semantic information (green); a detail showing 3D point cloud data (grey) with 3D meshes superimposed for recognized objects; a successful grasp showing the bounding box associated with an unrecognized object (brown) attached to the gripper.
Fig. 8. Left: the input point cloud. Right: the segmented plane represented by the inliers of the model, marked in purple.
An example of a more complex navigation and mapping application is shown in the left part of Figure 9, where the PR2 robot had to autonomously identify doors and their handles [8] in order to explore rooms and find power sockets [9]. Here, the modules used included constrained planar segmentation, region growing methods, convex hull estimation, and polygonal prism extraction. The results of these methods were then used to extract certain statistics about the shape and size of the door and the handle, in order to uniquely identify them and to reject false positives.
The right part of Figure 9 shows an experiment with real-time object identification from complex 3D scenes [10]. Here, a set of complex 3D keypoints and feature descriptors are used in a segmentation and registration framework that aims to identify previously seen objects in the world.
Figure 10 presents a grasping and manipulation application [11], where objects are first segmented from horizontal planar tables and clustered into individual units, and a registration operation is applied that attaches semantic information to each cluster found.

Fig. 9. Left: example of door and handle identification [8] during a navigation and mapping experiment [9] with the PR2 robot. Right: object recognition experiments (chair, person sitting down, cart) using Normal Aligned Radial Features (NARF) [10] with the PR2 robot.
VI. COMMUNITY AND FUTURE PLANS

PCL is a large collaborative effort, and it would not exist without the contributions of several people. Though the community is larger, and we accept patches and improvements from many users, we would like to acknowledge the following institutions for their core contributions to the development of the library: AIST, UC Berkeley, University of Bonn, University of British Columbia, ETH Zurich, University of Freiburg, Intel Research Seattle, LAAS/CNRS, MIT, University of Osnabrück, Stanford University, University of Tokyo, TUM, Vienna University of Technology, and Washington University in St. Louis.
Our current plan for PCL is to improve the documentation, unit tests, and tutorials and release a 1.0 version. We will continue to add functionality and make the system available on other platforms such as Android, and we plan to add support for GPUs using CUDA and OpenCL.
We welcome any new contributors to the project, and we hope to emphasize the importance of code sharing for 3D processing, which is becoming crucial for advancing the robotics field.

REFERENCES

[1] G. Guennebaud, B. Jacob, et al., "Eigen v3," http://eigen.tuxfamily.org, 2010.
[2] J. Reinders, Intel Threading Building Blocks: Outfitting C++ for Multi-core Processor Parallelism. O'Reilly, 2007.
[3] M. Muja and D. G. Lowe, "Fast approximate nearest neighbors with automatic algorithm configuration," in International Conference on Computer Vision Theory and Applications (VISSAPP'09). INSTICC Press, 2009, pp. 331–340.
[4] R. B. Rusu, N. Blodow, and M. Beetz, "Fast Point Feature Histograms (FPFH) for 3D Registration," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, May 12-17, 2009.
[5] W. Schroeder, K. Martin, and B. Lorensen, Visualization Toolkit: An Object-Oriented Approach to 3D Graphics, 4th Edition. Kitware, December 2006.
[6] R. B. Rusu, Z. C. Marton, N. Blodow, M. Dolha, and M. Beetz, "Towards 3D Point Cloud Based Object Maps for Household Environments," Robotics and Autonomous Systems Journal (Special Issue on Semantic Knowledge), 2008.
[7] M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381–395, June 1981.
[8] R. B. Rusu, W. Meeussen, S. Chitta, and M. Beetz, "Laser-based Perception for Door and Handle Identification," in International Conference on Advanced Robotics (ICAR), June 22-26, 2009.
[9] W. Meeussen, M. Wise, S. Glaser, S. Chitta, C. McGann, P. Mihelich, E. Marder-Eppstein, M. Muja, V. Eruhimov, T. Foote, J. Hsu, R. Rusu, B. Marthi, G. Bradski, K. Konolige, B. Gerkey, and E. Berger, "Autonomous Door Opening and Plugging In with a Personal Robot," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Anchorage, Alaska, May 3-8, 2010.
[10] B. Steder, R. B. Rusu, K. Konolige, and W. Burgard, "Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries," submitted to the IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May 9-13, 2011.
[11] M. Ciocarlie, K. Hsiao, E. G. Jones, S. Chitta, R. B. Rusu, and I. A. Sucan, "Towards reliable grasping and manipulation in household environments," New Delhi, India, December 2010.
