# Karmarkar's algorithm

Karmarkar's algorithm is an algorithm introduced by Narendra Karmarkar in 1984 for solving linear programming problems. It was the first reasonably efficient algorithm that solves these problems in polynomial time. The ellipsoid method is also polynomial time but proved to be inefficient in practice.

Where $n$ is the number of variables and $L$ is the number of bits of input to the algorithm, Karmarkar's algorithm requires $O(n^{3.5} L)$ operations on $O(L)$ digit numbers, as compared to $O(n^6 L)$ such operations for the ellipsoid algorithm. The runtime of Karmarkar's algorithm is thus

$O(n^{3.5} L^2 \cdot \log L \cdot \log \log L)$

using FFT-based multiplication (see Big O notation).

Karmarkar's algorithm falls within the class of interior point methods: the current guess for the solution does not follow the boundary of the feasible set as in the simplex method, but it moves through the interior of the feasible region, improving the approximation of the optimal solution by a definite fraction with every iteration, and converging to an optimal solution with rational data.[1]

## The Algorithm

Consider a Linear Programming problem in matrix form:

 maximize $c^T x$ subject to $Ax \leq b$.

At each iteration, the algorithm determines a feasible direction of improvement and steps along it, scaling the step back by a factor $0 < \gamma \leq 1$ so that the iterate remains strictly inside the feasible region.

Karmarkar's algorithm is rather complicated; interested readers can refer to the references below.[2][3][4][5][6] A simplified version, called the affine-scaling method, proposed and analyzed by others,[7] can be described succinctly as follows. Note that the affine-scaling algorithm, while efficient in practice, is not a polynomial-time algorithm.

Algorithm Affine-Scaling
Input: A, b, c, $x^0$, stopping criterion, $\gamma$.

$k \leftarrow 0$
do while stopping criterion not satisfied
    $v^k \leftarrow b - Ax^k$
    $D_v \leftarrow \operatorname{diag}(v_1^k,\ldots,v_m^k)$
    $h_x \leftarrow (A^T D_v^{-2} A)^{-1}c$
    $h_v \leftarrow -Ah_x$
    if $h_v \ge 0$ then
        return unbounded
    end if
    $\alpha \leftarrow \gamma\cdot \min\{-v_i^k/(h_v)_i \,\,|\,\, (h_v)_i < 0,\, i=1,\ldots,m\}$
    $x^{k+1} \leftarrow x^k + \alpha h_x$
    $k \leftarrow k+1$
end do

• "←" is a shorthand for "changes to". For instance, "largest ← item" means that the value of largest changes to the value of item.
• "return" terminates the algorithm and outputs the value that follows.
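The pseudocode above translates almost line for line into NumPy. The following is a minimal sketch, assuming NumPy is available; the function name `affine_scaling` and the default parameter values are illustrative choices, not part of the original algorithm description.

```python
import numpy as np

def affine_scaling(A, b, c, x0, gamma=0.5, tol=1e-6, max_iter=100):
    """Sketch of the affine-scaling iteration: maximize c^T x
    subject to A x <= b, starting from a strictly interior point x0
    (all slacks b - A x0 must be positive)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        v = b - A @ x                      # slack vector v^k
        Dinv2 = np.diag(1.0 / v**2)        # D_v^{-2}
        hx = np.linalg.solve(A.T @ Dinv2 @ A, c)  # direction in x
        hv = -A @ hx                       # induced change in the slacks
        if np.all(hv >= 0):
            raise ValueError("problem is unbounded")
        # largest step keeping all slacks positive, scaled back by gamma
        alpha = gamma * np.min(-v[hv < 0] / hv[hv < 0])
        if alpha * np.linalg.norm(hx) < tol:   # step size as stopping criterion
            return x
        x = x + alpha * hx
    return x
```

For instance, maximizing $x_1 + x_2$ over the unit box $0 \le x_1, x_2 \le 1$ from the interior point $(0.5, 0.5)$ drives the iterates toward the vertex $(1, 1)$.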

## Example

*Example solution*

Consider the linear program

 maximize $x_1 + x_2$ subject to $2px_1 + x_2 \leq p^2 + 1$ for $p = 0.0, 0.1, 0.2, \ldots, 0.9, 1.0$.

That is, there are 2 variables $x_1, x_2$ and 11 constraints, one for each value of $p$. The accompanying figure (not reproduced here) shows each iteration of the algorithm as a red circle, with the constraints shown as blue lines.
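This example can be set up and solved numerically with the affine-scaling update written out inline. This is an illustrative sketch assuming NumPy; the variable names, $\gamma = 0.5$, and the iteration cap are arbitrary choices. The optimal value of this program is $1.25$ (attained along the segment where the $p = 0.5$ constraint is tight).

```python
import numpy as np

# Build the example LP: maximize x1 + x2
# subject to 2*p*x1 + x2 <= p^2 + 1 for p = 0.0, 0.1, ..., 1.0.
p = np.linspace(0.0, 1.0, 11)
A = np.column_stack([2 * p, np.ones(11)])   # 11 constraints, 2 variables
b = p**2 + 1
c = np.array([1.0, 1.0])

# Affine-scaling iterations, started from the strictly interior
# point (0, 0); every slack b_i = p_i^2 + 1 is positive there.
x = np.zeros(2)
gamma = 0.5
for _ in range(200):
    v = b - A @ x                           # positive slacks
    Dinv2 = np.diag(1.0 / v**2)
    hx = np.linalg.solve(A.T @ Dinv2 @ A, c)
    hv = -A @ hx
    if np.all(hv >= 0):                     # would signal unboundedness
        break
    alpha = gamma * np.min(-v[hv < 0] / hv[hv < 0])
    x = x + alpha * hx
    if alpha * np.linalg.norm(hx) < 1e-9:   # step size negligible: stop
        break

print(x, c @ x)   # objective value approaches the optimum 1.25
```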

## Patent controversy: "Can mathematics be patented?"

At the time he invented the algorithm, Narendra Karmarkar was employed by AT&T, which recognized that his invention could be of practical importance. In April 1985, AT&T promptly applied for a patent on Karmarkar's algorithm, adding fuel to the ongoing controversy over software patents.[8] This left many mathematicians uneasy, among them Ronald Rivest (himself one of the holders of the patent on the RSA algorithm), who expressed the opinion that research had proceeded on the basis that algorithms should be free. Even before the patent was actually granted, it appeared that there was prior art that might have been applicable.[9] Mathematicians specializing in numerical analysis, such as Philip Gill and others, showed that Karmarkar's algorithm is actually equivalent to a projected Newton barrier method with a logarithmic barrier function, if the parameters are chosen suitably.[10] The methods referred to by Gill had been widely used for nonlinear programming since the 1960s; in fact, one well-known book first published in 1968 described the technique specifically in the context of linear programming.[11] However, some argue that Gill's argument is flawed, insofar as the method he describes does not even qualify as an "algorithm", since it requires choices of parameters that do not follow from the internal logic of the method but rely on external guidance, essentially from Karmarkar's algorithm.[12] Furthermore, Karmarkar's contributions are considered far from obvious in light of all prior work, including Fiacco–McCormick, Gill, and others cited by Saltzman.[12][13][14] The patent was debated in the U.S. Senate and granted, in recognition of the essential originality of Karmarkar's work, as U.S. Patent 4,744,026, "Methods and apparatus for efficient resource allocation", in May 1988.

AT&T supplied the KORBX system,[15][16] based on this patent, to the Pentagon,[17][18] enabling it to solve mathematical programming problems that were previously considered unsolvable. Opponents of software patents have further alleged that the patents ruined the positive interaction cycles that had previously characterized the relationship between researchers in linear programming and industry, and in particular isolated Karmarkar himself from the network of mathematical researchers in his field.[19]

The patent itself expired in April 2006, and the algorithm is presently in the public domain.

## References

1. ^ Strang, Gilbert (1 June 1987). "Karmarkar's algorithm and its place in applied mathematics". The Mathematical Intelligencer (New York: Springer) 9 (2): 4–10. doi:10.1007/BF03025891. ISSN 0343-6993. MR 883185.
2. ^ http://dl.acm.org/citation.cfm?id=808695
4. ^ http://onlinelibrary.wiley.com/doi/10.1002/j.1538-7305.1989.tb00316.x/abstract
5. ^ http://www.ams.org/books/conm/114/conm114-endmatter.pdf