PCL (Point Cloud Library)

From Wikipedia, the free encyclopedia
PCL (Point Cloud Library)
Original author(s): Willow Garage
Stable release: 1.7.1 / October 7, 2013
Operating system: Cross-platform
Type: Library
License: BSD license
Website: pointclouds.org

PCL (Point Cloud Library)[1] is a standalone, open-source framework comprising numerous state-of-the-art algorithms for n-dimensional point cloud and 3D geometry processing. The library contains algorithms for filtering, feature estimation, surface reconstruction, registration, model fitting, and segmentation. PCL is developed by a large consortium of researchers and engineers around the world; it is written in C++ and released under the BSD license. In robot perception, for example, these algorithms can be used to filter outliers from noisy data, stitch 3D point clouds together, segment relevant parts of a scene, extract keypoints and compute descriptors to recognize objects by their geometric appearance, and reconstruct and visualize surfaces from point clouds.[2]

History

Development of the Point Cloud Library started in March 2010 at Willow Garage. The project initially resided on a Willow Garage subdomain before moving to its own website, www.pointclouds.org, in March 2011.[3] PCL's first official release (version 1.0) followed two months later, in May 2011.[4]

Modules

PCL is split into a number of modular libraries.[5] The most important released modules are described below:

Filters

Noise removal is an example of a filter provided by PCL. Due to measurement errors, certain datasets present a large number of shadow points, which complicates the estimation of local point cloud 3D features. Some of these outliers can be filtered by performing a statistical analysis on each point's neighborhood and trimming those that do not meet a certain criterion. The sparse outlier removal implementation in PCL is based on the distribution of point-to-neighbor distances in the input dataset: for each point, the mean distance from it to all its neighbors is computed. Assuming the resulting distribution is Gaussian, all points whose mean distances fall outside an interval defined by the global mean and standard deviation can be considered outliers and trimmed from the dataset.

Features

The features library contains data structures and mechanisms for 3D feature estimation from point cloud data. 3D features are representations at certain 3D points, or positions, in space that describe geometric patterns based on the information available around the point. The data space selected around the query point is usually referred to as the k-neighborhood. Two of the most widely used geometric point features are the underlying surface's estimated curvature and normal at a query point p. Both are considered local features, as they characterize a point using the information provided by its k closest neighbors. To determine these neighbors efficiently, the input dataset is usually split into smaller chunks using spatial decomposition techniques such as octrees or k-d trees, and closest-point searches are then performed in that space. Depending on the application, one can opt for either a fixed number of k points in the vicinity of p, or all points found inside a sphere of radius r centered at p. One of the easiest methods for estimating the surface normal and curvature changes at a point p is to perform an eigendecomposition (i.e., compute the eigenvectors and eigenvalues) of the covariance matrix of the k-neighborhood surface patch.

Keypoints

The keypoints library contains implementations of two point cloud keypoint detection algorithms. Keypoints (also referred to as interest points) are points in an image or point cloud that are stable, distinctive, and can be identified using a well-defined detection criterion. Typically, the number of interest points in a point cloud will be much smaller than the total number of points in the cloud, and when used in combination with local feature descriptors at each keypoint, the keypoints and descriptors can be used to form a compact, yet descriptive, representation of the original data.

Registration

Combining several datasets into a globally consistent model is usually performed using a technique called point set registration. The key idea is to identify corresponding points between the data sets and find a transformation that minimizes the distance (alignment error) between corresponding points. This process is repeated, since correspondence search is affected by the relative position and orientation of the data sets. Once the alignment errors fall below a given threshold, the registration is said to be complete. The registration library implements a plethora of point cloud registration algorithms for both organized and unorganized (general purpose) datasets. For instance, PCL contains a set of powerful algorithms that allow the estimation of multiple sets of correspondences, as well as methods for rejecting bad correspondences and estimating transformations in a robust manner.

KdTree

The PCL kdtree library provides the kd-tree data structure, using FLANN, for fast nearest-neighbor searches. A k-d tree (k-dimensional tree) is a space-partitioning data structure that stores a set of k-dimensional points in a tree that enables efficient range and nearest-neighbor searches. Nearest-neighbor searches are a core operation when working with point cloud data: they can be used to find correspondences between groups of points or feature descriptors, or to define the local neighborhood around a point or points.

Octree

The octree library provides efficient methods for creating a hierarchical tree data structure from point cloud data. This enables spatial partitioning, downsampling, and search operations on the point data set. Each octree node has either eight children or none. The root node describes a cubic bounding box that encapsulates all points; at every tree level this space is subdivided by a factor of 2 along each axis, doubling the voxel resolution. The octree implementation provides efficient nearest-neighbor search routines, such as “neighbors within voxel search”, “k nearest neighbor search”, and “neighbors within radius search”, and it automatically adjusts its dimensions to the point data set. A set of leaf-node classes provides additional functionality, such as spatial “occupancy” and “point density per voxel” checks. Functions for serialization and deserialization allow the octree structure to be encoded efficiently into a binary format. Furthermore, a memory-pool implementation reduces expensive memory allocation and deallocation operations in scenarios where octrees need to be created at a high rate.

Segmentation

The segmentation library contains algorithms for segmenting a point cloud into distinct clusters. These algorithms are best suited for processing a point cloud that is composed of a number of spatially isolated regions. In such cases, clustering is often used to break the cloud down into its constituent parts, which can then be processed independently.

Sample Consensus

The sample_consensus library holds SAmple Consensus (SAC) methods, such as RANSAC, and models, such as lines, planes, cylinders, and spheres. These can be combined freely in order to detect specific models and their parameters in point clouds. Plane fitting is often applied to the task of detecting common indoor surfaces such as walls, floors, and table tops, while other models can be used to detect and segment objects with common geometric structures (e.g., fitting a cylinder model to a mug).

Surface

The surface library deals with reconstructing the original surfaces from 3D scans. Depending on the task at hand, this can be, for example, the hull of the cloud, a mesh representation, or a smoothed/resampled surface with normals. Smoothing and resampling can be important if the cloud is noisy, or if it is composed of multiple scans that are not aligned perfectly. The complexity of the surface estimation can be adjusted, and normals can be estimated in the same step if needed. Meshing is a general way to create a surface out of points, and currently two algorithms are provided: a very fast triangulation of the original points, and a slower meshing that also performs smoothing and hole filling. Creating a convex or concave hull is useful, for example, when a simplified surface representation is needed or when boundaries must be extracted.

Range Image

The range_image library contains two classes for representing and working with range images. A range image (or depth map) is an image whose pixel values represent a distance or depth from the sensor’s origin. Range images are a common 3D representation and are often generated by stereo or time-of-flight cameras. With knowledge of the camera’s intrinsic calibration parameters, a range image can be converted into a point cloud.

IO

The io library contains classes and functions for reading and writing point cloud data (PCD) files, as well as capturing point clouds from a variety of sensing devices.

Visualization

The visualization library was built to allow rapid prototyping and visualization of algorithms operating on 3D point cloud data. Similar to OpenCV’s highgui routines for displaying 2D images and for drawing basic 2D shapes on screen, the library offers: a) methods for rendering and setting visual properties (colors, point sizes, opacity, etc.) for any n-D point cloud dataset in pcl::PointCloud<T> format; b) methods for drawing basic 3D shapes on screen (e.g., cylinders, spheres, lines, polygons, etc.), either from sets of points or from parametric equations; c) a histogram visualization module (PCLHistogramVisualizer) for 2D plots; d) a multitude of geometry and color handlers for pcl::PointCloud<T> datasets; and e) a pcl::RangeImage visualization module.

Common

The common library contains the common data structures and methods used by the majority of PCL libraries. The core data structures include the PointCloud class and a multitude of point types that are used to represent points, surface normals, RGB color values, feature descriptors, etc. It also contains numerous functions for computing distances/norms, means and covariances, angular conversions, geometric transformations, and more.

Search

The search library provides methods for searching for nearest neighbors using different data structures, including kd-trees, octrees, brute-force search, and specialized searches for organized datasets.

References

  1. ^ PointClouds official website: http://www.pointclouds.org
  2. ^ Robot Operating System: http://www.ros.org/wiki
  3. ^ Rusu, Radu B. (28 March 2011). "PointClouds.org: A new home for Point Cloud Library (PCL)". Willow Garage. Retrieved 26 November 2012.
  4. ^ "PCL 1.0!". PCL. 12 May 2011. Retrieved 24 May 2013. 
  5. ^ PCL documentation and tutorials: http://pointclouds.org/documentation/
