Lattice reduction

Lattice reduction in two dimensions: the black vectors are the given basis for the lattice (represented by blue dots), the red vectors are the reduced basis

In mathematics, the goal of lattice basis reduction is, given an integer lattice basis as input, to find a basis with short, nearly orthogonal vectors. This is realized using different algorithms, whose running time is usually at least exponential in the dimension of the lattice.

Nearly orthogonal

One measure of nearly orthogonal is the orthogonality defect. This compares the product of the lengths of the basis vectors with the volume of the parallelepiped they define. For perfectly orthogonal basis vectors, these quantities would be the same.

Any particular basis of ${\displaystyle n}$ vectors may be represented by a matrix ${\displaystyle B}$ whose columns are the basis vectors ${\displaystyle b_{i},i=1,\ldots ,n}$. In the full-dimensional case, where the number of basis vectors equals the dimension of the space they occupy, this matrix is square and the volume of the fundamental parallelepiped is simply the absolute value of the determinant of this matrix, ${\displaystyle |\det(B)|}$. If the number of vectors is less than the dimension of the underlying space, then the volume is ${\displaystyle {\sqrt {\det(B^{T}B)}}}$. For a given lattice ${\displaystyle \Lambda }$, this volume is the same for any basis, and hence is referred to as the determinant of the lattice ${\displaystyle \det(\Lambda )}$ or lattice constant ${\displaystyle d(\Lambda )}$.

The orthogonality defect is the product of the basis vector lengths divided by the parallelepiped volume:

${\displaystyle \delta (B)={\frac {\prod _{i=1}^{n}\|b_{i}\|}{\sqrt {\det(B^{T}B)}}}={\frac {\prod _{i=1}^{n}\|b_{i}\|}{d(\Lambda )}}}$

From the geometric definition it may be appreciated that ${\displaystyle \delta (B)\geq 1}$ with equality if and only if the basis is orthogonal.
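As an illustration, the defect can be computed directly from the definition. The following NumPy sketch (the matrices are invented examples, not from the source) evaluates ${\displaystyle \delta (B)}$ for an orthogonal basis and for a skewed basis of the same lattice:

```python
import numpy as np

def orthogonality_defect(B):
    """Orthogonality defect of the basis given by the columns of B.

    delta(B) = (prod_i ||b_i||) / sqrt(det(B^T B)), which equals 1
    exactly when the basis vectors are mutually orthogonal.
    """
    lengths = np.linalg.norm(B, axis=0)        # ||b_i|| for each column
    volume = np.sqrt(np.linalg.det(B.T @ B))   # lattice determinant d(Lambda)
    return lengths.prod() / volume

# An orthogonal basis of a rectangular lattice has defect exactly 1 ...
B_orth = np.array([[2.0, 0.0], [0.0, 3.0]])
# ... while a skewed basis of the same lattice (b2 replaced by b1 + b2)
# spans the same volume with longer vectors, so its defect exceeds 1.
B_skew = np.array([[2.0, 2.0], [0.0, 3.0]])

print(orthogonality_defect(B_orth))  # 1.0
print(orthogonality_defect(B_skew))  # ~1.2019 (2 * sqrt(13) / 6)
```

Both matrices have determinant 6, so the defect differs only through the vector lengths in the numerator.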

If the lattice reduction problem is defined as finding the basis with the smallest possible defect, then the problem is NP-hard. However, there exist polynomial time algorithms to find a basis with defect ${\displaystyle \delta (B)\leq c}$ where c is some constant depending only on the number of basis vectors and the dimension of the underlying space (if different). This is a good enough solution in many practical applications.

In two dimensions

For a basis consisting of just two vectors, there is a simple and efficient method of reduction closely analogous to the Euclidean algorithm for the greatest common divisor of two integers. As with the Euclidean algorithm, the method is iterative; at each step the larger of the two vectors is reduced by adding or subtracting an integer multiple of the smaller vector.

The pseudocode of the algorithm, often known as Lagrange's algorithm or the Lagrange–Gauss algorithm, is as follows:

    Input: ${\textstyle (u,v)}$, a basis for the lattice ${\textstyle L}$. Assume that ${\textstyle ||v||\leq ||u||}$; otherwise swap them.
    Output: A basis ${\textstyle (u,v)}$ with ${\textstyle ||u||=\lambda _{1}(L),||v||=\lambda _{2}(L)}$.

    While ${\textstyle ||v||<||u||}$:
        ${\textstyle q:=\lfloor {\langle u,{\frac {v}{||v||^{2}}}\rangle }\rceil }$
        ${\textstyle r:=u-qv}$
        ${\textstyle u:=v}$
        ${\textstyle v:=r}$

The loop condition is strict so that the algorithm terminates once the two vectors have equal length (otherwise a basis with ${\textstyle ||u||=||v||}$ and ${\textstyle q=0}$ would be swapped forever).
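The pseudocode above translates directly into a short program. A minimal Python sketch for 2D integer vectors, using exact integer arithmetic (the example basis is an invented one):

```python
def lagrange_gauss(u, v):
    """Lagrange-Gauss reduction of a 2D lattice basis (u, v).

    Mirrors the pseudocode above: repeatedly subtract the nearest
    integer multiple of the shorter vector from the longer one,
    exactly as the Euclidean algorithm does with remainders.
    Returns a basis whose first vector is a shortest nonzero vector.
    """
    def norm2(w):                     # squared Euclidean length
        return w[0] * w[0] + w[1] * w[1]

    if norm2(u) < norm2(v):           # ensure ||v|| <= ||u||
        u, v = v, u
    while norm2(v) < norm2(u):
        # q = nearest integer to <u, v> / ||v||^2
        dot = u[0] * v[0] + u[1] * v[1]
        q = (2 * dot + norm2(v)) // (2 * norm2(v))
        r = (u[0] - q * v[0], u[1] - q * v[1])
        u, v = v, r
    return u, v

# The "Fibonacci" basis (5, 8), (3, 5) spans the integer lattice Z^2,
# so reduction recovers two unit-length vectors.
print(lagrange_gauss((5, 8), (3, 5)))
```

The integer expression for `q` rounds `dot / norm2(v)` to the nearest integer without floating-point error, which keeps the whole computation exact.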


See the section on Lagrange's algorithm in [1] for further details.

Applications

Lattice reduction algorithms are used in a number of modern number-theoretic applications, including the discovery of a spigot algorithm for ${\displaystyle \pi }$. Although determining a shortest basis is believed to be computationally intractable, algorithms such as the LLL algorithm[2] can find a short (not necessarily shortest) basis in polynomial time with guaranteed worst-case performance. LLL is widely used in the cryptanalysis of public-key cryptosystems.

When used to find integer relations, a typical input to the algorithm is an ${\displaystyle n\times n}$ identity matrix augmented with a final column whose entries are the ${\displaystyle n}$ elements between which the relation is sought, each multiplied by a large positive constant ${\displaystyle w}$ to penalize combinations that do not sum to zero.
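The construction can be sketched concretely. In this invented toy example (not from the source) the relation sought is ${\displaystyle 2+3-5=0}$; the point is only to show the shape of the input matrix and why a valid relation corresponds to a short lattice vector:

```python
import numpy as np

# Elements among which we seek an integer relation: 2 + 3 - 5 = 0.
x = np.array([2, 3, 5])
w = 10**6  # large penalty weight (illustrative choice)

# Rows of the lattice basis: the identity augmented by a last column
# holding the scaled elements w * x_i.
B = np.hstack([np.eye(3, dtype=np.int64), (w * x).reshape(3, 1)])

# The relation coefficients (1, 1, -1) combine the rows into a short
# vector: its last entry is w * (2 + 3 - 5) = 0, so the penalty
# vanishes. A lattice reduction algorithm such as LLL applied to this
# basis would surface exactly such a vector.
coeffs = np.array([1, 1, -1])
print(coeffs @ B)  # [ 1  1 -1  0]
```

Any integer combination that is *not* a relation keeps a nonzero last entry of magnitude at least ${\displaystyle w}$, making it long; this is how the weight steers the reduction toward genuine relations.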

The LLL algorithm for computing a nearly-orthogonal basis was used to show that integer programming in any fixed dimension can be done in polynomial time.[3]

Algorithms

The following algorithms reduce lattice bases; several public implementations of these algorithms are also listed.

Year Algorithm Implementation
1773 Lagrange/Gauss reduction for 2D lattices
1982 Lenstra–Lenstra–Lovász (LLL) reduction NTL, fplll
1987 Block Korkine–Zolotarev (BKZ)[4] NTL, fplll
1993 Seysen reduction[5]

References

1. ^ Nguyen, Phong Q. (2009). "Hermite's Constant and Lattice Algorithms". The LLL Algorithm. Information Security and Cryptography. Berlin, Heidelberg: Springer Berlin Heidelberg. pp. 19–69. doi:10.1007/978-3-642-02295-1_2. ISBN 978-3-642-02294-4. ISSN 1619-7100.
2. ^ Lenstra, A. K.; Lenstra, H. W. Jr.; Lovász, L. (1982). "Factoring polynomials with rational coefficients". Mathematische Annalen. 261 (4): 515–534. CiteSeerX 10.1.1.310.318. doi:10.1007/BF01457454. hdl:1887/3810. MR 0682664. S2CID 5701340.
3. ^ Lenstra, H.W. (1983). "Integer programming with a fixed number of variables". Math. Oper. Res. 8 (4): 538–548. CiteSeerX 10.1.1.431.5444. doi:10.1287/moor.8.4.538.
4. ^ Hanrot, Guillaume; Stehlé, Damien (2008). "Worst-Case Hermite-Korkine-Zolotarev Reduced Lattice Bases". arXiv:0801.3331 [math.NT].
5. ^ Seysen, Martin (September 1993). "Simultaneous reduction of a lattice basis and its reciprocal basis". Combinatorica. 13 (3): 363–376. doi:10.1007/BF01202355. S2CID 206791637.