Hyperplane separation theorem


In geometry, the hyperplane separation theorem is either of two theorems about disjoint convex sets in n-dimensional Euclidean space. In the first version, if both sets are closed and at least one of them is compact, then there is a hyperplane between them, and in fact two parallel hyperplanes between them separated by a gap. In the second version, if both disjoint convex sets are open, then there is a hyperplane between them, but not necessarily any gap. An axis orthogonal to a separating hyperplane is a separating axis, because the orthogonal projections of the convex bodies onto that axis are disjoint.

The hyperplane separation theorem is due to Hermann Minkowski. The Hahn–Banach separation theorem generalizes the result to topological vector spaces.

A related result is the supporting hyperplane theorem. In geometry, a maximum-margin hyperplane is a hyperplane which separates two 'clouds' of points and is at equal distance from the two. The margin between the hyperplane and the clouds is maximal. See the article on Support Vector Machines for more details.

Statement and proof

Let A and B be two disjoint, closed, convex sets, and suppose that A is compact. Then A and B have a closest pair of points p ∈ A and q ∈ B. (The distance function x ↦ d(x, B) is continuous and strictly positive on A, since A and B are disjoint and B is closed; because A is compact, it attains a positive minimum at some point p ∈ A, and q is then a nearest point of B to p.) Any hyperplane H which is perpendicular to the segment from p to q, and which meets the interior of this segment, must separate A from B.
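This argument can be illustrated on a concrete instance (a minimal sketch; the particular sets, the points p and q, and the helper `side` are choices made here for illustration, not part of the theorem): take A to be the closed unit disk and B the half-plane x ≥ 2, so the closest pair is p = (1, 0), q = (2, 0), and the perpendicular hyperplane through the midpoint of the segment separates sampled points of the two sets.

```python
import math

# Hypothetical concrete instance: A is the closed unit disk (compact),
# B is the half-plane {x >= 2} (closed); both are convex and disjoint.
# Closest pair of points: p = (1, 0) in A, q = (2, 0) in B.
p, q = (1.0, 0.0), (2.0, 0.0)

# Separating hyperplane H through the midpoint of [p, q], with normal n = q - p:
# H = {x : n . x = c}, where c = n . midpoint.
n = (q[0] - p[0], q[1] - p[1])
mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
c = n[0] * mid[0] + n[1] * mid[1]

def side(pt):
    """Signed value n . pt - c; its sign tells which side of H the point is on."""
    return n[0] * pt[0] + n[1] * pt[1] - c

# Sample points of A (unit disk) and B (half-plane x >= 2) and check that
# they land strictly on opposite sides of H.
disk_samples = [(r * math.cos(t), r * math.sin(t))
                for r in (0.0, 0.5, 1.0)
                for t in (k * math.pi / 6 for k in range(12))]
half_plane_samples = [(2.0 + dx, dy) for dx in (0.0, 1.0, 5.0)
                      for dy in (-3.0, 0.0, 4.0)]

assert all(side(a) < 0 for a in disk_samples)
assert all(side(b) > 0 for b in half_plane_samples)
```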

For the second version of the theorem, suppose that A and B are disjoint, convex, and open. Then they can be exhausted by increasing sequences of compact, convex subsets An and Bn. The first version of the theorem supplies a sequence of separating hyperplanes Hn; after normalizing their defining equations, these have a subsequence that converges to a hyperplane H. This hyperplane must separate A from B.

Counterexamples and uniqueness

The theorem does not apply if one of the bodies is not convex.

If one of A or B is not convex, then there are many possible counterexamples. For example, A and B could be concentric circles. A more subtle counterexample is one in which A and B are both closed but neither one is compact. For example, if A is a closed half-plane and B is the region on or above one branch of a hyperbola, then there is no separating hyperplane:

A = \{(x,y) : x \le 0\}
B = \{(x,y) : x > 0, y \ge 1/x\}.

(Although, by an instance of the second theorem, there is a hyperplane that separates their interiors.) Another type of counterexample has A compact and B open. For example, A can be a closed square and B can be an open square that touches A.
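The failure in this counterexample can be checked numerically (a quick sketch; the helper `dist_to_A` is a name chosen here): the sets are disjoint, yet the distance between them is 0, so no separation with a gap is possible, and in fact no separating hyperplane exists at all.

```python
# The counterexample sets A = {x <= 0} and B = {x > 0, y >= 1/x} are
# disjoint, yet points of B come arbitrarily close to A.
def dist_to_A(x, y):
    """Distance from a point with x > 0 to the half-plane A = {x <= 0}.
    The nearest point of A is (0, y), so the distance is just x."""
    return x

# The points (1/n, n) lie in B, since n >= 1/(1/n) = n, and approach A.
gaps = [dist_to_A(1.0 / n, float(n)) for n in (1, 10, 100, 1000)]
print(gaps)  # the gaps shrink toward 0: [1.0, 0.1, 0.01, 0.001]
```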

In the first version of the theorem, evidently the separating hyperplane is never unique. In the second version, it may or may not be unique. Technically a separating axis is never unique because it can be translated; in the second version of the theorem, a separating axis can be unique up to translation.

Use in collision detection

The separating axis theorem (SAT) says that:

Two convex objects do not overlap if there exists a line (called axis) onto which the two objects' projections do not overlap.

SAT suggests an algorithm for testing whether two convex solids intersect or not.

Regardless of dimensionality, the separating axis is always a line. For example, in 3D, the space is separated by planes, but the separating axis is perpendicular to the separating plane.

The separating axis theorem can be applied for fast collision detection between polygon meshes. The candidate axes are the face normals (or other feature directions) of each mesh, together with the cross products of edge directions from the two meshes. Note that this yields candidate separating axes, not separating lines/planes.

If the cross-product axes were omitted, certain edge-on-edge non-colliding cases would be reported as colliding. For increased efficiency, parallel axes may be tested as a single axis.
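The procedure can be sketched for 2D convex polygons (a minimal sketch; the function name `sat_overlap` and the example polygons are choices made here). In 2D the candidate axes are simply the edge normals of both polygons; the cross-product axes are only needed in 3D.

```python
def sat_overlap(poly_a, poly_b):
    """Separating axis test for two convex polygons given as vertex lists.

    For every edge of either polygon, test its normal as a candidate axis:
    project both polygons onto the axis and compare the resulting intervals.
    Returns True when the polygons overlap (no separating axis exists).
    """
    for poly in (poly_a, poly_b):
        n = len(poly)
        for i in range(n):
            # Edge from vertex i to vertex i+1; its normal is a candidate axis.
            ex = poly[(i + 1) % n][0] - poly[i][0]
            ey = poly[(i + 1) % n][1] - poly[i][1]
            ax, ay = -ey, ex  # perpendicular to the edge
            proj_a = [ax * x + ay * y for x, y in poly_a]
            proj_b = [ax * x + ay * y for x, y in poly_b]
            # Disjoint projection intervals => separating axis found.
            if max(proj_a) < min(proj_b) or max(proj_b) < min(proj_a):
                return False
    return True

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
far_triangle = [(5, 0), (7, 0), (6, 2)]
near_triangle = [(1, 1), (3, 1), (2, 3)]
print(sat_overlap(square, far_triangle))   # False: a vertical axis separates them
print(sat_overlap(square, near_triangle))  # True: every projection pair overlaps
```

Production implementations typically also cache the axis along which separation was last found, since it is likely to separate the same pair again on the next frame.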
