
Stixel

Top: Grayscale input image with stixels superimposed on it, with colour denoting depth (from red denoting closer, to blue denoting farther). Bottom: Dense disparity map, with brighter intensity denoting higher values of disparity (lower depth), darker intensity denoting lower values of disparity (higher depth), and black denoting invalid disparity.

In computer vision, a stixel (portmanteau of "stick" and "pixel") is a superpixel representation of depth information in an image, in the form of a vertical stick that approximates the closest obstacles within a certain vertical slice of the scene. Introduced in 2009,[1] stixels have applications in robotic navigation and advanced driver-assistance systems, where they can be used to define a representation of robotic environments and traffic scenes with a medium level of abstraction.[2][3]

Definition


One of the problems of scene understanding in computer vision is to determine the horizontal freespace around the camera, where the agent can move, and the vertical obstacles delimiting it. An image can be paired with depth information (produced e.g. from stereo disparity, lidar, or monocular depth estimation), allowing a dense three-dimensional reconstruction of the observed scene. One drawback of dense reconstruction is the large amount of data involved, since each pixel in the image is mapped to an element of a point cloud. Vision problems characterised by planar freespace delimited by mostly vertical obstacles, such as traffic scenes or robotic navigation, can benefit from a condensed representation that saves memory and processing time.

Stixels are thin vertical rectangles, each representing a slice of a vertical surface belonging to the closest obstacle in the observed scene. They dramatically reduce the amount of information needed to represent a scene in such problems. A stixel is characterised by three parameters: the vertical coordinate of its bottom, the height of the stick, and its depth. Stixels have a fixed width, with each stixel spanning a certain number of image columns, which allows downsampling of the horizontal image resolution. In the original formulation, each column contains at most one stixel; later extensions allow multiple stixels per column, so that multiple objects at different distances can be represented.[4]
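As an illustration, a minimal data structure for this representation might look like the following sketch; the field names and the fixed-width constant are assumptions for the example, not part of any standard definition:

```python
from dataclasses import dataclass

STIXEL_WIDTH = 5  # image columns covered by each stixel (hypothetical value)

@dataclass
class Stixel:
    column: int   # index of the stixel along the (downsampled) horizontal axis
    bottom: int   # image row of the stixel's base point
    height: int   # vertical extent of the stick, in pixels
    depth: float  # distance to the represented obstacle surface, e.g. in metres
```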

Stixel estimation


The input to stixel estimation is a dense depth map, which can be computed from stereo disparity or by other means. The original approach computes an occupancy grid that can be segmented to estimate the freespace, with dynamic programming providing an efficient method to find an optimal segmentation.[5] Alternative approaches can be used instead of occupancy grid mapping, such as manifold-based methods.[6]
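The following sketch illustrates the general idea of selecting one obstacle distance per image column from an occupancy grid with dynamic programming; the cost terms, the smoothness penalty, and all names are simplifying assumptions rather than the original formulation:

```python
import numpy as np

def freespace_from_occupancy(occupancy, smoothness=1.0):
    """Pick one occupied distance bin per image column with dynamic programming.

    `occupancy` is a (num_columns, num_bins) grid of obstacle evidence
    (higher = more likely occupied); the function returns, per column, the
    distance bin of the chosen obstacle. Simplified sketch only.
    """
    num_cols, num_bins = occupancy.shape
    cost = -occupancy.astype(float)        # reward landing on strong evidence
    acc = np.zeros_like(cost)
    back = np.zeros((num_cols, num_bins), dtype=int)
    acc[0] = cost[0]
    bins = np.arange(num_bins)
    for c in range(1, num_cols):
        # transition penalty discourages large jumps between neighbouring columns
        jump = smoothness * np.abs(bins[:, None] - bins[None, :])
        total = acc[c - 1][None, :] + jump  # shape: (current_bin, previous_bin)
        back[c] = np.argmin(total, axis=1)
        acc[c] = cost[c] + np.min(total, axis=1)
    # backtrack the optimal path of distance bins across columns
    path = np.empty(num_cols, dtype=int)
    path[-1] = int(np.argmin(acc[-1]))
    for c in range(num_cols - 1, 0, -1):
        path[c - 1] = back[c, path[c]]
    return path
```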

The freespace boundary provides the base points of the obstacles at the closest longitudinal distance; however, multiple objects at different distances might appear in each column of the image. To fully define the obstacles, their height must also be estimated, which is accomplished by segmenting the depth of the object from the depth of the background. A membership function over the pixels can be defined based on the depth value, where the membership represents the confidence that a pixel belongs to the closest vertical obstacle or to the background, and a cut separating the obstacles from the background can again be computed efficiently with dynamic programming.
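A simplified sketch of this height segmentation step is shown below, assuming a relative depth tolerance and omitting the smoothness term across columns used in practice; all names and the tolerance value are illustrative assumptions:

```python
import numpy as np

def obstacle_top_rows(depth, base_rows, base_depths, tolerance=0.2):
    """Estimate the top row of the closest obstacle in every image column.

    For each column, pixels whose depth is within `tolerance` (relative) of
    the obstacle's base depth get a positive membership, others a negative
    one; the cut row maximising the membership summed below it is chosen.
    """
    rows, cols = depth.shape
    top = np.array(base_rows, dtype=int).copy()
    for c in range(cols):
        base_row, base_depth = int(base_rows[c]), float(base_depths[c])
        if base_row <= 0 or base_depth <= 0:
            continue  # degenerate column: leave a zero-height stixel
        column_depth = depth[:base_row, c]
        # membership in [-1, 1]: positive if depth is close to the base depth
        membership = 1.0 - 2.0 * np.minimum(
            np.abs(column_depth - base_depth) / (tolerance * base_depth), 1.0)
        # score each candidate top row v by the membership summed over [v, base_row)
        scores = np.cumsum(membership[::-1])[::-1]
        top[c] = int(np.argmax(scores))
    return top
```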

Once both the freespace and the obstacle height are known, the stixels can be computed by fusing the information over the columns spanned by each stixel; finally, a refined depth for each stixel can be estimated via model fitting over the depths of the pixels it covers, possibly paired with confidence information (e.g. disparity confidence produced by methods such as semi-global matching).[7]
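As a hedged example, the fusion step could be approximated with a confidence-weighted median over the pixels covered by a stixel; this is a simple, outlier-robust stand-in for the model fitting described above, and the function name and arguments are illustrative assumptions:

```python
import numpy as np

def refine_stixel_depth(depth, confidence, col_start, col_end, top, bottom):
    """Fuse the per-pixel depths covered by one stixel into a single value."""
    d = depth[top:bottom, col_start:col_end].ravel()
    w = confidence[top:bottom, col_start:col_end].ravel()
    valid = np.isfinite(d) & (w > 0)
    d, w = d[valid], w[valid]
    if d.size == 0:
        return np.nan
    order = np.argsort(d)
    cum = np.cumsum(w[order])
    # weighted median: first depth whose cumulative weight passes half the total
    idx = np.searchsorted(cum, 0.5 * cum[-1])
    return float(d[order][idx])
```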

References


Sources

  • Badino, Hernán; Franke, Uwe; Pfeiffer, David (2009). The stixel world – A compact medium level representation of the 3D-world. Joint Pattern Recognition Symposium.
  • Benenson, Rodrigo; Mathias, Markus; Timofte, Radu; Van Gool, Luc (2012). Fast stixel computation for fast pedestrian detection. European Conference on Computer Vision.
  • Erbs, Friedrich; Barth, Alexander; Franke, Uwe (2011). Moving vehicle detection by optimal segmentation of the dynamic stixel world. 2011 IEEE Intelligent Vehicles Symposium (IV).
  • Pfeiffer, David (2012). The stixel world – A Compact Medium-level Representation for Efficiently Modeling Dynamic Three-dimensional Environments (Ph.D. thesis). Humboldt University of Berlin.
  • Saleem, Noor Haitham; Rezaei, Mahdi; Klette, Reinhard (2017). Extending the stixel world using polynomial ground manifold approximation. 2017 24th International Conference on Mechatronics and Machine Vision in Practice (M2VIP).