# Binary Goppa code

In mathematics and computer science, the binary Goppa code is an error-correcting code that belongs to the class of general Goppa codes originally described by Valerii Denisovich Goppa. Its binary structure gives it several mathematical advantages over non-binary variants and also makes it a better fit for common computer and telecommunication applications. Binary Goppa codes have properties that make them suitable for cryptography, notably in McEliece-like cryptosystems and similar setups.

## Construction and properties

A binary Goppa code is defined by a polynomial ${\displaystyle g(x)}$ of degree ${\displaystyle t}$ over a finite field ${\displaystyle GF(2^{m})}$ without multiple zeros, and a sequence ${\displaystyle L}$ of ${\displaystyle n}$ distinct elements of ${\displaystyle GF(2^{m})}$ that are not roots of the polynomial:

${\displaystyle \forall i,j\in \{0,\ldots ,n-1\}:L_{i}\in GF(2^{m})\land ((i=j)\lor (L_{i}\neq L_{j}))\land g(L_{i})\neq 0}$
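To make the construction concrete, the following sketch builds such a pair ${\displaystyle (g,L)}$ for a toy code. The field ${\displaystyle GF(2^{4})}$ with modulus ${\displaystyle x^{4}+x+1}$ and the choice ${\displaystyle g(x)=x^{2}+x+1}$ (so ${\displaystyle t=2}$) are illustrative assumptions, not canonical parameters; ${\displaystyle g}$ happens to have two distinct roots in this field, and those two elements are simply excluded from ${\displaystyle L}$:

```python
M, m = 0b10011, 4   # GF(2^4) with modulus x^4 + x + 1 (example choice)

def gf_mul(a, b):
    """Multiply two field elements: carry-less product reduced modulo M."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= M
    return r

def poly_eval(p, x):
    """Evaluate a polynomial (coefficients listed low to high) by Horner's rule."""
    r = 0
    for c in reversed(p):
        r = gf_mul(r, x) ^ c
    return r

g = [1, 1, 1]   # g(x) = x^2 + x + 1, degree t = 2, no multiple zeros

# support: every field element that is not a root of g
L = [a for a in range(2**m) if poly_eval(g, a) != 0]
```

This leaves ${\displaystyle n=14}$ support elements, so the toy code has length 14, dimension at least ${\displaystyle n-mt=6}$, and corrects ${\displaystyle t=2}$ errors.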

Codewords belong to the kernel of the syndrome function, forming a subspace of ${\displaystyle \{0,1\}^{n}}$:

${\displaystyle \Gamma (g,L)=\left\{c\in \{0,1\}^{n}\left|\sum _{i=0}^{n-1}{\frac {c_{i}}{x-L_{i}}}\equiv 0\mod g(x)\right.\right\}}$
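Membership in ${\displaystyle \Gamma (g,L)}$ can be tested directly from this definition. The sketch below uses a toy example over ${\displaystyle GF(2^{4})}$ (modulus ${\displaystyle x^{4}+x+1}$) with the illustrative choice ${\displaystyle g(x)=x^{2}+x+1}$, and computes the sum via the identity ${\displaystyle 1/(x-a)\equiv {\frac {g(x)-g(a)}{x-a}}\cdot g(a)^{-1}{\bmod {g}}(x)}$, where the quotient falls out of synthetic division:

```python
M, m = 0b10011, 4                       # GF(2^4), modulus x^4 + x + 1 (example choice)

def gf_mul(a, b):
    """Multiply two field elements: carry-less product reduced modulo M."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= M
    return r

def gf_inv(a):
    """Invert via a^(2^m - 2) (Fermat's little theorem for GF(2^m))."""
    r, e = 1, 2**m - 2
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def poly_eval(p, x):
    """Horner evaluation; p lists coefficients from low to high degree."""
    r = 0
    for c in reversed(p):
        r = gf_mul(r, x) ^ c
    return r

g = [1, 1, 1]                           # g(x) = x^2 + x + 1, so t = 2
L = [a for a in range(2**m) if poly_eval(g, a) != 0]
t = len(g) - 1

def inv_x_minus(a):
    """Coefficients of 1/(x - a) mod g(x), via synthetic division of g by (x - a)."""
    h, c = [0] * t, g[t]
    for j in range(t - 1, -1, -1):
        h[j] = c
        c = gf_mul(c, a) ^ g[j]         # after the loop, c == g(a)
    ga_inv = gf_inv(c)
    return [gf_mul(x, ga_inv) for x in h]

def syndrome(word):
    """Coefficients of the sum of c_i / (x - L_i) mod g(x)."""
    s = [0] * t
    for i, ci in enumerate(word):
        if ci:
            s = [u ^ v for u, v in zip(s, inv_x_minus(L[i]))]
    return s
```

A word lies in ${\displaystyle \Gamma (g,L)}$ exactly when its syndrome is the zero polynomial; the zero word trivially does, while no word of weight 1 can, since the minimum distance is ${\displaystyle 2t+1=5}$.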

The code defined by a tuple ${\displaystyle (g,L)}$ has minimum distance at least ${\displaystyle 2t+1}$, thus it can correct ${\displaystyle t=\left\lfloor {\frac {(2t+1)-1}{2}}\right\rfloor }$ errors in codewords of size ${\displaystyle n}$, while encoding messages of at least ${\displaystyle n-mt}$ bits. It also possesses a convenient parity-check matrix ${\displaystyle H}$ of the form

${\displaystyle H=VD={\begin{pmatrix}1&1&1&\cdots &1\\L_{0}^{1}&L_{1}^{1}&L_{2}^{1}&\cdots &L_{n-1}^{1}\\L_{0}^{2}&L_{1}^{2}&L_{2}^{2}&\cdots &L_{n-1}^{2}\\\vdots &\vdots &\vdots &\ddots &\vdots \\L_{0}^{t-1}&L_{1}^{t-1}&L_{2}^{t-1}&\cdots &L_{n-1}^{t-1}\end{pmatrix}}{\begin{pmatrix}{\frac {1}{g(L_{0})}}&&&&\\&{\frac {1}{g(L_{1})}}&&&\\&&{\frac {1}{g(L_{2})}}&&\\&&&\ddots &\\&&&&{\frac {1}{g(L_{n-1})}}\end{pmatrix}}}$
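Filling in ${\displaystyle H=VD}$ is a direct transcription of the definition. The sketch below assumes a toy code over ${\displaystyle GF(2^{4})}$ (modulus ${\displaystyle x^{4}+x+1}$) with the illustrative choice ${\displaystyle g(x)=x^{2}+x+1}$:

```python
M, m = 0b10011, 4                       # GF(2^4), modulus x^4 + x + 1 (example choice)

def gf_mul(a, b):
    """Multiply two field elements: carry-less product reduced modulo M."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= M
    return r

def gf_pow(a, e):
    """Square-and-multiply exponentiation in GF(2^m)."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def gf_inv(a):
    return gf_pow(a, 2**m - 2)

def poly_eval(p, x):
    """Horner evaluation; p lists coefficients from low to high degree."""
    r = 0
    for c in reversed(p):
        r = gf_mul(r, x) ^ c
    return r

g = [1, 1, 1]                           # g(x) = x^2 + x + 1, t = 2
L = [a for a in range(2**m) if poly_eval(g, a) != 0]
t, n = len(g) - 1, len(L)

V = [[gf_pow(a, i) for a in L] for i in range(t)]    # Vandermonde rows: L_j^i
D = [gf_inv(poly_eval(g, a)) for a in L]             # diagonal entries: 1/g(L_j)
H = [[gf_mul(V[i][j], D[j]) for j in range(n)] for i in range(t)]
```

The first row of ${\displaystyle H}$ is just the diagonal of ${\displaystyle D}$, since ${\displaystyle L_{j}^{0}=1}$.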

Note that this form of the parity-check matrix, being composed of a Vandermonde matrix ${\displaystyle V}$ and a diagonal matrix ${\displaystyle D}$, shares its shape with the check matrices of alternant codes, so alternant decoders can be used on it. Such decoders usually provide only limited error-correcting capability (in most cases ${\displaystyle \lfloor t/2\rfloor }$ errors).

For practical purposes, the parity-check matrix of a binary Goppa code is usually converted to a more computer-friendly binary form by a trace construction, which converts the ${\displaystyle t}$-by-${\displaystyle n}$ matrix over ${\displaystyle GF(2^{m})}$ to an ${\displaystyle mt}$-by-${\displaystyle n}$ binary matrix by writing the polynomial coefficients of each ${\displaystyle GF(2^{m})}$ element on ${\displaystyle m}$ successive rows.
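The trace construction amounts to slicing each field element of ${\displaystyle H=VD}$ into its ${\displaystyle m}$ coefficient bits. A sketch under the same kind of toy assumptions as above (${\displaystyle GF(2^{4})}$ with modulus ${\displaystyle x^{4}+x+1}$, ${\displaystyle g(x)=x^{2}+x+1}$; both illustrative choices):

```python
M, m = 0b10011, 4                       # GF(2^4), modulus x^4 + x + 1 (example choice)

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= M
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def gf_inv(a):
    return gf_pow(a, 2**m - 2)

def poly_eval(p, x):
    r = 0
    for c in reversed(p):
        r = gf_mul(r, x) ^ c
    return r

g = [1, 1, 1]                           # g(x) = x^2 + x + 1, t = 2
L = [a for a in range(2**m) if poly_eval(g, a) != 0]
t, n = len(g) - 1, len(L)

# the t-by-n parity-check matrix H = VD over GF(2^m), entry-wise: L_j^i / g(L_j)
H = [[gf_mul(gf_pow(L[j], i), gf_inv(poly_eval(g, L[j]))) for j in range(n)]
     for i in range(t)]

# trace construction: each GF(2^m) row becomes m binary rows of coefficient bits
Hbin = [[(H[i][j] >> b) & 1 for j in range(n)]
        for i in range(t) for b in range(m)]
```

The result is an ${\displaystyle mt}$-by-${\displaystyle n}$ (here 8-by-14) matrix over ${\displaystyle GF(2)}$: bit ${\displaystyle b}$ of every entry of row ${\displaystyle i}$ lands in binary row ${\displaystyle im+b}$.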

## Decoding

Decoding of binary Goppa codes is traditionally done with the Patterson algorithm, which gives good error-correcting capability (it corrects all ${\displaystyle t}$ design errors) and is fairly simple to implement.

The Patterson algorithm converts a syndrome to a vector of errors. The syndrome of a word ${\displaystyle c=(c_{0},\dots ,c_{n-1})}$ is expected to take the form

${\displaystyle s(x)\equiv \sum _{i=0}^{n-1}{\frac {c_{i}}{x-L_{i}}}\mod g(x)}$

An alternative form of the parity-check matrix, based on the formula for ${\displaystyle s(x)}$, can be used to produce such a syndrome with a simple matrix multiplication.
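One way to realize this is to place the coefficients of ${\displaystyle 1/(x-L_{j}){\bmod {g}}(x)}$ in column ${\displaystyle j}$ of a matrix, binarize it by the trace construction, and multiply it by the received word. The sketch below does so under toy assumptions (${\displaystyle GF(2^{4})}$ with modulus ${\displaystyle x^{4}+x+1}$, ${\displaystyle g(x)=x^{2}+x+1}$, and an arbitrarily chosen received word) and compares the result with the polynomial syndrome:

```python
M, m = 0b10011, 4                       # GF(2^4), modulus x^4 + x + 1 (example choice)

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= M
    return r

def gf_inv(a):
    r, e = 1, 2**m - 2                  # a^(2^m - 2) = a^(-1)
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def poly_eval(p, x):
    r = 0
    for c in reversed(p):
        r = gf_mul(r, x) ^ c
    return r

g = [1, 1, 1]                           # g(x) = x^2 + x + 1, t = 2
L = [a for a in range(2**m) if poly_eval(g, a) != 0]
t, n = len(g) - 1, len(L)

def inv_x_minus(a):
    """Coefficients of 1/(x - a) mod g(x) (synthetic division of g by x - a)."""
    h, c = [0] * t, g[t]
    for j in range(t - 1, -1, -1):
        h[j] = c
        c = gf_mul(c, a) ^ g[j]         # c ends up equal to g(a)
    ga_inv = gf_inv(c)
    return [gf_mul(x, ga_inv) for x in h]

def syndrome(word):
    s = [0] * t
    for i, ci in enumerate(word):
        if ci:
            s = [u ^ v for u, v in zip(s, inv_x_minus(L[i]))]
    return s

# alternative check matrix: column j holds the coefficients of 1/(x - L_j) mod g(x)
S = [[inv_x_minus(a)[k] for a in L] for k in range(t)]
# trace construction down to a binary matrix
Sbin = [[(S[k][j] >> b) & 1 for j in range(n)]
        for k in range(t) for b in range(m)]

word = [0] * n
word[1] = word[4] = 1                   # an arbitrary received word

bits_via_matrix = [sum(rb & wb for rb, wb in zip(row, word)) % 2 for row in Sbin]
s = syndrome(word)
bits_via_poly = [(s[k] >> b) & 1 for k in range(t) for b in range(m)]
```

Both routes give the same ${\displaystyle mt}$ syndrome bits, because the bit expansion of a XOR of field elements is the XOR of their bit expansions.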

The algorithm then computes ${\displaystyle v(x)\equiv {\sqrt {s(x)^{-1}-x}}\mod g(x)}$. The inversion fails when ${\displaystyle s(x)\equiv 0}$, but that is exactly the case when the input word is already a codeword, so no error correction is necessary.

${\displaystyle v(x)}$ is reduced to polynomials ${\displaystyle a(x)}$ and ${\displaystyle b(x)}$ using the extended Euclidean algorithm, so that ${\displaystyle a(x)\equiv b(x)\cdot v(x)\mod g(x)}$, while ${\displaystyle \deg(a)\leq \lfloor t/2\rfloor }$ and ${\displaystyle \deg(b)\leq \lfloor (t-1)/2\rfloor }$.
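The only change from the textbook extended Euclidean algorithm is stopping early, as soon as the remainder degree drops to ${\displaystyle \lfloor t/2\rfloor }$. A sketch over ${\displaystyle GF(2^{4})}$; the modulus and the input ${\displaystyle v(x)}$ below are arbitrary illustrative polynomials, not values derived from an actual syndrome:

```python
M, m = 0b10011, 4                       # GF(2^4), modulus x^4 + x + 1 (example choice)

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= M
    return r

def gf_inv(a):
    r, e = 1, 2**m - 2                  # a^(2^m - 2) = a^(-1)
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def deg(p):
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d                            # -1 for the zero polynomial

def poly_add(p, q):                     # addition is XOR in characteristic 2
    k = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) ^ (q[i] if i < len(q) else 0)
            for i in range(k)]

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                r[i + j] ^= gf_mul(pi, qj)
    return r

def poly_divmod(p, q):
    p, dq = p[:], deg(q)
    lead_inv = gf_inv(q[dq])
    quo = [0] * max(1, len(p) - dq)
    while deg(p) >= dq:
        d = deg(p)
        c = gf_mul(p[d], lead_inv)
        quo[d - dq] = c
        for i in range(dq + 1):
            p[i + d - dq] ^= gf_mul(c, q[i])
    return quo, p

def half_eea(v, gx, t):
    """Run the EEA on (gx, v), stopping once deg(r1) <= t // 2.

    Returns (a, b) with a == b * v mod gx and the degree bounds from the text."""
    r0, r1 = gx, v
    b0, b1 = [0], [1]
    while deg(r1) > t // 2:
        q, r = poly_divmod(r0, r1)
        r0, r1 = r1, r
        b0, b1 = b1, poly_add(b0, poly_mul(q, b1))
    return r1, b1

gx = [1, 1, 0, 0, 1]                    # illustrative degree-4 modulus, t = 4
v = [1, 3, 0, 7]                        # illustrative v(x) of degree 3
a, b = half_eea(v, gx, 4)
```

For these inputs the returned pair satisfies ${\displaystyle \deg(a)\leq 2}$, ${\displaystyle \deg(b)\leq 1}$, and ${\displaystyle a\equiv b\cdot v\mod g}$.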

Finally, the error locator polynomial is computed as ${\displaystyle \sigma (x)=a(x)^{2}+x\cdot b(x)^{2}}$. Note that in the binary case, locating the errors is sufficient to correct them, as there is only one other value possible. In non-binary cases, a separate error-correction polynomial has to be computed as well.

If the original word was decodable and ${\displaystyle e=(e_{0},e_{1},\dots ,e_{n-1})}$ was the error vector, then

${\displaystyle \sigma (x)=\prod _{i:e_{i}=1}(x-L_{i})}$

Factoring or evaluating all roots of ${\displaystyle \sigma (x)}$ therefore gives enough information to recover the error vector and fix the errors.
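This last step is a simple root search over the support. The sketch below uses toy assumptions (${\displaystyle GF(2^{4})}$ with modulus ${\displaystyle x^{4}+x+1}$, ${\displaystyle g(x)=x^{2}+x+1}$), and the error positions are picked by hand rather than produced by the square-root step, to show that ${\displaystyle \sigma }$ vanishes exactly at the corresponding support elements:

```python
M, m = 0b10011, 4                       # GF(2^4), modulus x^4 + x + 1 (example choice)

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= M
    return r

def poly_eval(p, x):
    r = 0
    for c in reversed(p):
        r = gf_mul(r, x) ^ c
    return r

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                r[i + j] ^= gf_mul(pi, qj)
    return r

g = [1, 1, 1]                           # g(x) = x^2 + x + 1
L = [a for a in range(2**m) if poly_eval(g, a) != 0]

errors = {2, 5}                         # hypothetical error positions
sigma = [1]
for i in sorted(errors):
    sigma = poly_mul(sigma, [L[i], 1])  # multiply in the factor (x - L_i)

# evaluate sigma over the whole support; its roots mark the error positions
found = {j for j, a in enumerate(L) if poly_eval(sigma, a) == 0}
```

Flipping the bits of the received word at the recovered positions completes the correction.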

## Properties and usage

Viewed as a special case of general Goppa codes, binary Goppa codes have the interesting property that they correct the full ${\displaystyle \deg(g)}$ design errors, whereas only ${\displaystyle \deg(g)/2}$ errors are guaranteed in the ternary and all other cases. Asymptotically, this error-correcting capability meets the Gilbert–Varshamov bound.

Because of their high error-correction capacity relative to the code rate, and because the parity-check matrix in its usual form is hard to distinguish from a random binary matrix of full rank, binary Goppa codes are used in several post-quantum cryptosystems, notably the McEliece cryptosystem and the Niederreiter cryptosystem.
