Hyperplane separation theorem

Illustration of the hyperplane separation theorem.

In geometry, the hyperplane separation theorem is either of two theorems about disjoint convex sets in n-dimensional Euclidean space. In the first version of the theorem, if both these sets are closed and at least one of them is compact, then there is a hyperplane between them, and even two parallel hyperplanes between them separated by a gap. In the second version, if both disjoint convex sets are open, then there is a hyperplane between them, but not necessarily any gap. An axis which is orthogonal to a separating hyperplane is a separating axis, because the orthogonal projections of the convex bodies onto the axis are disjoint.

The hyperplane separation theorem is due to Hermann Minkowski. The Hahn–Banach separation theorem generalizes the result to topological vector spaces.

A related result is the supporting hyperplane theorem. In geometry, a maximum-margin hyperplane is a hyperplane which separates two 'clouds' of points and is at equal distance from the two. The margin between the hyperplane and the clouds is maximal. See the article on Support Vector Machines for more details.

Statements and proof

Hyperplane separation theorem[1] — Let A and B be two disjoint nonempty convex subsets of Rn. Then there exists a nonzero vector v and a real number c such that

\langle x, v \rangle \ge c \, \text{ and } \langle y, v \rangle \le c

for all x in A and y in B; i.e., the hyperplane \langle \cdot, v \rangle = c, v the normal vector, separates A and B.

The proof is based on the following lemma:

Lemma — Let K be a nonempty closed convex subset of Rn. Then there exists a unique vector in K of minimum norm (length).

Proof of lemma: Let \delta = \inf \{ |x| \mid x \in K \}. Let x_j be a sequence in K such that |x_j| \to \delta. Note that (x_i + x_j)/2 is in K since K is convex, so |(x_i + x_j)/2| \ge \delta and hence |x_i + x_j|^2 \ge 4 \delta^2. Since, by the parallelogram law,

|x_i - x_j|^2 = 2|x_i|^2 + 2|x_j|^2 - |x_i + x_j|^2 \le 2|x_i|^2 + 2|x_j|^2 - 4\delta^2 \to 0

as i, j \to \infty, x_i is a Cauchy sequence and so has a limit x in K. It is unique since, if y is in K and has norm \delta, then |x - y|^2 \le 2|x|^2 + 2|y|^2 - 4\delta^2 = 0 and x = y. \square
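The minimum-norm point of the lemma can be computed explicitly for simple closed convex sets. As a hedged illustration (the set, points, and function name below are hypothetical, not from the source), take K to be a line segment and project the origin onto it:

```python
# Hypothetical numeric illustration of the lemma: K is the closed convex set
# given by the segment [p, q], and we find its unique minimum-norm vector by
# minimizing |p + t(q - p)|^2 over t and clamping t to [0, 1].

def min_norm_on_segment(p, q):
    """Return the point of the segment [p, q] closest to the origin."""
    d = [qi - pi for pi, qi in zip(p, q)]            # direction q - p
    dd = sum(di * di for di in d)                    # |q - p|^2
    if dd == 0:
        return list(p)                               # degenerate segment
    # unconstrained minimizer of |p + t d|^2, then clamp to the segment
    t = -sum(pi * di for pi, di in zip(p, d)) / dd
    t = max(0.0, min(1.0, t))
    return [pi + t * di for pi, di in zip(p, d)]

v = min_norm_on_segment((1.0, 2.0), (3.0, 0.0))      # 0 is not in K here
```

Uniqueness is visible in the formula: the clamped t is a single well-defined number, so there is exactly one closest point.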

Proof of theorem: Given disjoint nonempty convex sets A, B, let

K = A + (-B) = \{ x - y \mid x \in A, y \in B \}.

Since -B is convex and the sum of convex sets is convex, K is convex. By the lemma, the closure \overline{K} of K, which is convex, contains a vector v of minimum norm. Since \overline{K} is convex, for any x in K, the line segment

v + t(x - v), \, 0 \le t \le 1

lies in \overline{K} and so

|v|^2 \le |v + t(x - v)|^2 = |v|^2 + 2 t \langle v, x - v \rangle + t^2|x-v|^2.

For 0 < t \le 1, we thus have:

0 \le 2 \langle v, x \rangle - 2 |v|^2 + t|x-v|^2

and letting t \to 0 gives: \langle x, v \rangle \ge |v|^2. Hence, for any x in A and y in B, we have: \langle x - y, v \rangle \ge |v|^2. Thus, if v is nonzero, the proof is complete since

\inf_{x \in A} \langle x, v \rangle \ge |v|^2 + \sup_{y \in B} \langle y, v \rangle.

In general, if the interior of K is nonempty, then it can be exhausted by compact convex subsets K_n. Since 0 is not in K, each K_n contains a nonzero vector v_n of minimum length, and by the argument in the first part we have \langle x, v_n \rangle \ge 0 for any x \in K_n. Normalizing the v_n to have length one, the sequence v_n contains a convergent subsequence with limit v, which has length one and is in particular nonzero. We have \langle x, v \rangle \ge 0 for any x in the interior of K, and by continuity the same holds for all x in K. We then finish the proof as before. Finally, if K has empty interior, the affine set that it spans has dimension less than that of the whole space; consequently K is contained in some hyperplane \langle \cdot, v \rangle = c, so \langle x, v \rangle \ge c for all x in K and we finish the proof as before. \square
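The construction in the proof, v = the minimum-norm vector of A + (-B), can be carried out in closed form for two disjoint closed discs. The following sketch is a hypothetical example, not part of the source; the function name and the midpoint choice of c are assumptions. The gap \inf_A \langle x, v \rangle - \sup_B \langle y, v \rangle it produces equals |v|^2, exactly as in the displayed inequality above.

```python
import math

# Hedged, concrete instance of the proof's construction: for disjoint closed
# discs A = disc(a, ra) and B = disc(b, rb), the minimum-norm vector of
# K = A + (-B) is (a - b) shrunk by both radii, and <., v> = c separates them.

def separating_hyperplane_discs(a, ra, b, rb):
    """Return (v, c) with <x, v> >= c on disc(a, ra), <y, v> <= c on disc(b, rb)."""
    dx, dy = a[0] - b[0], a[1] - b[1]
    dist = math.hypot(dx, dy)
    assert dist > ra + rb, "the discs must be disjoint"
    scale = 1.0 - (ra + rb) / dist                   # shrink a - b by both radii
    v = (dx * scale, dy * scale)                     # minimum-norm vector of A - B
    vnorm = math.hypot(*v)
    inf_A = a[0] * v[0] + a[1] * v[1] - ra * vnorm   # inf of <x, v> over A
    sup_B = b[0] * v[0] + b[1] * v[1] + rb * vnorm   # sup of <y, v> over B
    c = 0.5 * (inf_A + sup_B)                        # threshold halfway in the gap
    return v, c
```

A short check of the algebra: inf_A - sup_B = \langle a - b, v \rangle - (ra + rb)|v| = |v|^2 > 0, so any c between sup_B and inf_A works; the midpoint is one convenient choice.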

The above proof also proves the first version of the theorem mentioned in the lede (to see this, note that under the hypotheses below, K = A + (-B) is closed, since the sum of a closed set and a compact set is closed).

Separation theorem I —  Let A and B be two disjoint nonempty closed convex sets, one of which is compact. Then there exists a nonzero vector v and real numbers c_1 < c_2 such that

\langle x, v \rangle > c_2 \, \text{ and } \langle y, v \rangle < c_1

for all x in A and y in B.

Here, the compactness in the hypothesis cannot be relaxed; see an example in the next section. Also,

Separation theorem II —  Let A and B be two disjoint nonempty convex sets. If A is open, then there exists a nonzero vector v and real number c such that

\langle x, v \rangle > c \, \text{ and } \langle y, v \rangle \le c

for all x in A and y in B.

This follows from the standard version since the separating hyperplane cannot intersect the interiors of the convex sets.

Counterexamples and uniqueness

The theorem does not apply if one of the bodies is not convex.

If one of A or B is not convex, then there are many possible counterexamples. For example, A and B could be concentric circles. A more subtle counterexample is one in which A and B are both closed but neither one is compact. For example, if A is a closed half-plane and B is the closed region on or above one branch of a hyperbola, then there is no strictly separating hyperplane:

A = \{(x,y) : x \le 0\},
B = \{(x,y) : x > 0, y \ge 1/x\}.

(Although, by an instance of the second theorem, there is a hyperplane that separates their interiors.) Another type of counterexample has A compact and B open. For example, A can be a closed square and B can be an open square that touches A.
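The half-plane/hyperbola counterexample can be checked numerically. The snippet below is a hypothetical illustration (the helper name is an assumption): B contains points arbitrarily close to A, so the distance between the two disjoint closed convex sets is 0 and no hyperplane can separate them with a gap.

```python
# Numeric check of the counterexample: A = {(x, y) : x <= 0} and
# B = {(x, y) : x > 0, y >= 1/x} are disjoint, yet the distance between
# them is 0, because B has boundary points (x, 1/x) with x arbitrarily small.

def point_in_B(x):
    """A boundary point (x, 1/x) of B, for x > 0."""
    assert x > 0
    return (x, 1.0 / x)

# horizontal distance from these points of B to A is just x, shrinking to 0
distances = [point_in_B(10.0 ** -k)[0] for k in range(1, 7)]
```

Every distance is strictly positive (the sets really are disjoint), but the infimum is 0, which is exactly why the strict version of the theorem needs one set to be compact.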

In the first version of the theorem, evidently the separating hyperplane is never unique. In the second version, it may or may not be unique. Technically a separating axis is never unique because it can be translated; in the second version of the theorem, a separating axis can be unique up to translation.

Use in collision detection

The separating axis theorem (SAT) says that:

Two convex objects do not overlap if there exists a line (called axis) onto which the two objects' projections do not overlap.

SAT suggests an algorithm for testing whether two convex solids intersect or not.

Regardless of dimensionality, the separating axis is always a line. For example, in 3D, the space is separated by planes, but the separating axis is perpendicular to the separating plane.

The separating axis theorem can be applied for fast collision detection between polygon meshes. Each face's normal and other feature directions are used as candidate separating axes, as are the cross products of pairs of edge directions. Note that this yields possible separating axes, not separating lines/planes.

If the cross products were not used, certain edge-on-edge non-colliding cases would be treated as colliding. For increased efficiency, parallel axes may be tested as a single axis.
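The algorithm suggested by SAT can be sketched compactly in 2D, where the candidate axes are just the edge normals of the two polygons (no cross products are needed). This is a minimal sketch under those assumptions, not a production collision routine; the function names are hypothetical.

```python
# Minimal 2D SAT test for two convex polygons, given as lists of (x, y)
# vertices in order. The polygons are disjoint iff some edge normal of
# either polygon yields disjoint projection intervals.

def project(poly, axis):
    """Projection interval (min, max) of a polygon onto an axis direction."""
    dots = [x * axis[0] + y * axis[1] for x, y in poly]
    return min(dots), max(dots)

def sat_overlap(poly_a, poly_b):
    """Return True if the convex polygons overlap (touching counts as overlap)."""
    for poly in (poly_a, poly_b):
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            axis = (y1 - y2, x2 - x1)        # normal to the edge (unnormalized)
            a_min, a_max = project(poly_a, axis)
            b_min, b_max = project(poly_b, axis)
            if a_max < b_min or b_max < a_min:
                return False                 # separating axis found: disjoint
    return True                              # no axis separates: they overlap
```

Note that the axes are left unnormalized; since only the ordering of projections matters, normalizing each axis would cost a square root per edge without changing the answer.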

Notes
  1. ^ Boyd–Vandenberghe, Exercise 2.22.


References

  • Boyd, Stephen P.; Vandenberghe, Lieven (2004). Convex Optimization. Cambridge University Press. ISBN 978-0-521-83378-3.
  • Golshtein, E. G.; Tretyakov, N. V. (1996). Modified Lagrangians and Monotone Maps in Optimization. Translated by N. V. Tretyakov. New York: Wiley. p. 6. ISBN 0-471-54821-9.
  • Shimizu, Kiyotaka; Ishizuka, Yo; Bard, Jonathan F. (1997). Nondifferentiable and Two-Level Mathematical Programming. Boston: Kluwer Academic Publishers. p. 19. ISBN 0-7923-9821-1.
