# Chan's algorithm

In computational geometry, Chan's algorithm,[1] named after Timothy M. Chan, is an optimal output-sensitive algorithm to compute the convex hull of a set ${\displaystyle P}$ of ${\displaystyle n}$ points in 2- or 3-dimensional space. The algorithm takes ${\displaystyle O(n\log h)}$ time, where ${\displaystyle h}$ is the number of vertices of the output (the convex hull). In the planar case, the algorithm combines an ${\displaystyle O(n\log n)}$ algorithm (Graham scan, for example) with Jarvis march (${\displaystyle O(nh)}$) to obtain an optimal ${\displaystyle O(n\log h)}$ running time. Chan's algorithm is notable because it is much simpler than the Kirkpatrick–Seidel algorithm, and it naturally extends to 3-dimensional space. This paradigm[2] was independently developed by Frank Nielsen in his Ph.D. thesis.[3]

## Algorithm

Initially, we assume that the value of ${\displaystyle h}$ is known and set a parameter ${\displaystyle m=h}$. This assumption is not realistic, but we remove it later. The algorithm starts by arbitrarily partitioning ${\displaystyle P}$ into at most ${\displaystyle 1+n/m}$ subsets ${\displaystyle Q}$ with at most ${\displaystyle m}$ points each. Then, it computes the convex hull of each subset ${\displaystyle Q}$ using an ${\displaystyle O(m\log m)}$ algorithm (for example, Graham scan). Note that, as there are ${\displaystyle O(n/m)}$ subsets of ${\displaystyle O(m)}$ points each, this phase takes ${\displaystyle O(n/m)\cdot O(m\log m)=O(n\log m)}$ time.
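The first phase can be sketched in Python as follows. This is a minimal illustration, not Chan's reference implementation: Andrew's monotone chain (also ${\displaystyle O(m\log m)}$) stands in for Graham scan, and the helper names `convex_hull` and `partition_hulls` are ours, not from the paper.

```python
def cross(o, a, b):
    """Cross product of vectors OA and OB; positive means a left turn at o."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build the lower chain left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build the upper chain right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints

def partition_hulls(P, m):
    """Split P into ceil(n/m) groups of at most m points and hull each group."""
    return [convex_hull(P[i:i + m]) for i in range(0, len(P), m)]
```

Because the groups are hulled independently, this phase parallelizes trivially if desired.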

The second phase consists of executing Jarvis's march algorithm and using the precomputed convex hulls to speed up the execution. At each step in Jarvis's march, we have a point ${\displaystyle p_{i}}$ in the convex hull, and need to find a point ${\displaystyle p_{i+1}=f(p_{i},P)}$ such that all other points of ${\displaystyle P}$ are to the right of the line ${\displaystyle p_{i}p_{i+1}}$. If we know the convex hull of a set ${\displaystyle Q}$ of ${\displaystyle m}$ points, then we can compute ${\displaystyle f(p_{i},Q)}$ in ${\displaystyle O(\log m)}$ time, by using binary search. We can compute ${\displaystyle f(p_{i},Q)}$ for all the ${\displaystyle O(n/m)}$ subsets ${\displaystyle Q}$ in ${\displaystyle O((n/m)\log m)}$ time. Then, we can determine ${\displaystyle f(p_{i},P)}$ using the same technique as normally used in Jarvis's march, but only considering the points that are ${\displaystyle f(p_{i},Q)}$ for some subset ${\displaystyle Q}$. As Jarvis's march repeats this process ${\displaystyle O(h)}$ times, the second phase also takes ${\displaystyle O(n\log m)}$ time, and therefore ${\displaystyle O(n\log h)}$ time if ${\displaystyle m=h}$.
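One wrapping step over the precomputed subset hulls might look like the sketch below. For brevity, each subset hull is scanned linearly, i.e. ${\displaystyle O(m)}$ per subset rather than the ${\displaystyle O(\log m)}$ binary search described above; a full implementation would replace the inner loop with a tangent-finding binary search on each hull. The helper name `next_hull_point` is illustrative.

```python
def cross(o, a, b):
    """Positive if b lies to the left of the directed line from o through a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def dist2(p, q):
    """Squared Euclidean distance, used only to break collinear ties."""
    return (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2

def next_hull_point(p, hulls):
    """f(p, P): the point q such that every other point is right of line p->q."""
    best = None
    for hull in hulls:
        for q in hull:                 # linear scan; binary search in the paper
            if q == p:
                continue
            if best is None:
                best = q
                continue
            c = cross(p, best, q)
            if c > 0 or (c == 0 and dist2(p, q) > dist2(p, best)):
                best = q               # q is more counter-clockwise than best,
    return best                        # or collinear with it but farther away
```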

By running the two phases described above, we can compute the convex hull of ${\displaystyle n}$ points in ${\displaystyle O(n\log h)}$ time, assuming that we know the value of ${\displaystyle h}$. If ${\displaystyle m<h}$, the march cannot complete the hull, and we can abort the execution after ${\displaystyle m+1}$ steps, therefore spending only ${\displaystyle O(n\log m)}$ time (but not computing the convex hull). We can initially set ${\displaystyle m}$ as a small constant (we use 2 for our analysis, but in practice numbers around 5 may work better), and increase the value of ${\displaystyle m}$ until ${\displaystyle m>h}$, in which case we obtain the convex hull as a result.

If we increase the value of ${\displaystyle m}$ too slowly, we may need to repeat the steps mentioned before too many times, and the execution time will be large. On the other hand, if we increase the value of ${\displaystyle m}$ too quickly, we risk making ${\displaystyle m}$ much larger than ${\displaystyle h}$, also increasing the execution time. Similar to the strategy used in Chazelle and Matoušek's algorithm,[4] Chan's algorithm squares the value of ${\displaystyle m}$ at each iteration, and makes sure that ${\displaystyle m}$ is never larger than ${\displaystyle n}$. In other words, at iteration ${\displaystyle t}$ (starting at 0), we have ${\displaystyle m=\min(n,2^{2^{t}})}$. The total running time of the algorithm is

${\displaystyle \sum _{t=0}^{\lceil \log \log h\rceil }O\left(n\log(2^{2^{t}})\right)=O(n)\sum _{t=0}^{\lceil \log \log h\rceil }O(2^{t})=O\left(n\cdot 2^{1+\lceil \log \log h\rceil }\right)=O(n\log h).}$
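Putting the pieces together, the whole planar algorithm can be sketched as one self-contained routine. This is a simplified illustration under the same assumptions as the earlier sketches: Andrew's monotone chain replaces Graham scan, and the subset hulls are scanned linearly rather than binary-searched, which keeps the code short at the cost of the ${\displaystyle O(n\log h)}$ bound.

```python
def cross(o, a, b):
    """Positive if b lies to the left of the directed line from o through a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def dist2(p, q):
    """Squared distance, used only to break collinear ties."""
    return (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2

def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def chan(P):
    """Simplified Chan's algorithm: guess m, hull the groups, wrap, abort."""
    n = len(P)
    t = 0
    while True:
        m = min(n, 2 ** (2 ** t))      # iteration t guesses m = 2^(2^t)
        hulls = [convex_hull(P[i:i + m]) for i in range(0, n, m)]
        start = min(P)                 # the leftmost point is on the hull
        hull, p = [start], start
        for _ in range(m):             # give up after m wrapping steps
            best = None
            for H in hulls:
                for q in H:            # linear scan stands in for the
                    if q == p:         # O(log m) binary search on each hull
                        continue
                    if best is None:
                        best = q
                        continue
                    c = cross(p, best, q)
                    if c > 0 or (c == 0 and dist2(p, q) > dist2(p, best)):
                        best = q
            if best == start:
                return hull            # wrapped back to the start: done
            hull.append(best)
            p = best
        t += 1                         # m < h, so retry with m squared
```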

To generalize this construction for the 3-dimensional case, an ${\displaystyle O(n\log n)}$ algorithm to compute the 3-dimensional convex hull should be used instead of Graham scan, and a 3-dimensional version of Jarvis's march needs to be used. The time complexity remains ${\displaystyle O(n\log h)}$.

## Implementation

Chan's paper contains several suggestions that may improve the practical performance of the algorithm, for example:

• When computing the convex hulls of the subsets, eliminate the points that are not in the convex hull from consideration in subsequent executions.
• The convex hulls of larger point sets can be obtained by merging previously calculated convex hulls, instead of recomputing from scratch.
• With the above idea, the dominant cost of the algorithm lies in the pre-processing, i.e., the computation of the convex hulls of the groups. To reduce this cost, we may reuse the hulls computed in the previous iteration, merging them as the group size is increased.
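The merging idea in the last two points can be sketched as follows; `merge_hulls` is a hypothetical helper (not code from Chan's paper) that rehulls the concatenated vertex lists with Andrew's monotone chain. This is cheap because the inputs are already small hulls rather than full groups, so interior points have already been discarded.

```python
def cross(o, a, b):
    """Positive if b lies to the left of the directed line from o through a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def merge_hulls(h1, h2):
    """Hull of the union of two convex polygons, by rehulling their vertices."""
    return convex_hull(h1 + h2)
```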

## Extensions

Chan's paper contains some other problems whose known algorithms can be made optimal output sensitive using his technique, for example:

• Computing the lower envelope ${\displaystyle L(S)}$ of a set ${\displaystyle S}$ of ${\displaystyle n}$ line segments, defined as the lower boundary of the unbounded trapezoid formed by the intersections. Hershberger[5] gave an ${\displaystyle O(n\log n)}$ algorithm, which can be sped up to ${\displaystyle O(n\log h)}$, where ${\displaystyle h}$ is the number of edges in the envelope.
• Constructing output-sensitive algorithms for higher-dimensional convex hulls. By grouping points and using efficient data structures, ${\displaystyle O(n\log h)}$ complexity can be achieved, provided that ${\displaystyle h}$ is of polynomial order in ${\displaystyle n}$.