Schönhage–Strassen algorithm

The Schönhage–Strassen algorithm is an asymptotically fast multiplication algorithm for large integers, published by Arnold Schönhage and Volker Strassen in 1971.[1] It works by recursively applying fast Fourier transforms (FFT) over the integers modulo ${\displaystyle 2^{n}+1}$. The run-time bit complexity to multiply two n-digit numbers using the algorithm is ${\displaystyle O(n\cdot \log n\cdot \log \log n)}$ in big O notation.

The Schönhage–Strassen algorithm was the asymptotically fastest multiplication method known from 1971 until 2007. It is asymptotically faster than older methods such as Karatsuba and Toom–Cook multiplication, and starts to outperform them in practice for numbers beyond about 10,000 to 100,000 decimal digits.[2] In 2007, Martin Fürer published an algorithm with faster asymptotic complexity.[3] In 2019, David Harvey and Joris van der Hoeven demonstrated that multi-digit multiplication has theoretical ${\displaystyle O(n\log n)}$ complexity; however, their algorithm has constant factors which make it impossibly slow for any conceivable practical problem (see galactic algorithm).[4]

Applications of the Schönhage–Strassen algorithm include large computations done for their own sake such as the Great Internet Mersenne Prime Search and approximations of π, as well as practical applications such as Lenstra elliptic curve factorization via Kronecker substitution, which reduces polynomial multiplication to integer multiplication.[5][6]

Description

This section has a simplified version of the algorithm, showing how to compute the product ${\displaystyle ab}$ of two natural numbers ${\displaystyle a,b}$, modulo a number of the form ${\displaystyle 2^{n}+1}$, where ${\displaystyle n=2^{k}M}$ is some fixed number. The integers ${\displaystyle a,b}$ are to be divided into ${\displaystyle D=2^{k}}$ blocks of ${\displaystyle M}$ bits, so in practical implementations, it is important to strike the right balance between the parameters ${\displaystyle M,k}$. In any case, this algorithm will provide a way to multiply two positive integers, provided ${\displaystyle n}$ is chosen so that ${\displaystyle ab<2^{n}+1}$.

Let ${\displaystyle n=DM}$ be the number of bits in the signals ${\displaystyle a}$ and ${\displaystyle b}$, where ${\displaystyle D=2^{k}}$ is a power of two. Divide the signals ${\displaystyle a}$ and ${\displaystyle b}$ into ${\displaystyle D}$ blocks of ${\displaystyle M}$ bits each, storing the resulting blocks as arrays ${\displaystyle A,B}$ (whose entries we shall consider for simplicity as arbitrary precision integers).

We now select a modulus for the Fourier transform, as follows. Let ${\displaystyle M'}$ be such that ${\displaystyle DM'\geq 2M+k}$. Also put ${\displaystyle n'=DM'}$, and regard the elements of the arrays ${\displaystyle A,B}$ as (arbitrary precision) integers modulo ${\displaystyle 2^{n'}+1}$. Observe that since ${\displaystyle 2^{n'}+1\geq 2^{2M+k}+1=D2^{2M}+1}$, the modulus is large enough to accommodate any carries that can result from multiplying ${\displaystyle a}$ and ${\displaystyle b}$. Thus, the product ${\displaystyle ab}$ (modulo ${\displaystyle 2^{n}+1}$) can be calculated by evaluating the convolution of ${\displaystyle A,B}$. Also, with ${\displaystyle g=2^{2M'}}$, we have ${\displaystyle g^{D/2}\equiv -1{\pmod {2^{n'}+1}}}$, and so ${\displaystyle g}$ is a primitive ${\displaystyle D}$th root of unity modulo ${\displaystyle 2^{n'}+1}$.

We now take the discrete Fourier transform of the arrays ${\displaystyle A,B}$ in the ring ${\displaystyle \mathbb {Z} /(2^{n'}+1)\mathbb {Z} }$, using the root of unity ${\displaystyle g}$ for the Fourier basis, giving the transformed arrays ${\displaystyle {\widehat {A}},{\widehat {B}}}$. Because ${\displaystyle D=2^{k}}$ is a power of two, this can be achieved in ${\displaystyle O(D\log D)}$ ring operations using a fast Fourier transform.

Let ${\displaystyle {\widehat {C}}_{i}={\widehat {A}}_{i}{\widehat {B}}_{i}}$ (pointwise product), and compute the inverse transform ${\displaystyle C}$ of the array ${\displaystyle {\widehat {C}}}$, again using the root of unity ${\displaystyle g}$. The array ${\displaystyle C}$ is now the convolution of the arrays ${\displaystyle A,B}$. Finally, the product ${\displaystyle ab{\pmod {2^{n}+1}}}$ is given by evaluating ${\displaystyle ab\equiv \sum _{j}C_{j}2^{Mj}\mod {2^{n}+1}.}$

This basic algorithm can be improved in several ways. Firstly, it is not necessary to store the digits of ${\displaystyle a,b}$ to arbitrary precision, but rather only up to ${\displaystyle n'+1}$ bits, which gives a more efficient machine representation of the arrays ${\displaystyle A,B}$. Secondly, it is clear that the multiplications in the forward transforms are simple bit shifts. With some care, it is also possible to compute the inverse transform using only shifts. Taking care, it is thus possible to eliminate any true multiplications from the algorithm except for where the pointwise product ${\displaystyle {\widehat {C}}_{i}={\widehat {A}}_{i}{\widehat {B}}_{i}}$ is evaluated. It is therefore advantageous to select the parameters ${\displaystyle D,M}$ so that this pointwise product can be performed efficiently, either because it is a single machine word or using some optimized algorithm for multiplying integers of a (ideally small) number of words. Selecting the parameters ${\displaystyle D,M}$ is thus an important area for further optimization of the method.
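The simplified scheme above can be sketched directly in Python, whose integers have arbitrary precision. This is a toy rendering, not a production routine: the names `dft` and `ssa_multiply` are illustrative only, a naive O(D²) transform stands in for a real FFT, and the inputs are restricted to the low D/2 blocks so that the cyclic convolution equals the ordinary product.

```python
def dft(arr, g, mod):
    """Naive O(D^2) transform over Z/(mod)Z; a real implementation uses an
    FFT, in which the multiplications by powers of g are mere bit shifts."""
    D = len(arr)
    return [sum(arr[j] * pow(g, i * j, mod) for j in range(D)) % mod
            for i in range(D)]

def ssa_multiply(a, b, k, M):
    """Multiply a and b (each below 2^(D*M/2)) with the simplified scheme."""
    D = 1 << k                        # number of blocks
    n = D * M                         # total bit size
    assert a < 1 << (n // 2) and b < 1 << (n // 2)
    Mp = -(-(2 * M + k) // D)         # ceil, so that D*M' >= 2M + k
    mod = (1 << (D * Mp)) + 1         # 2^{n'} + 1
    g = 1 << (2 * Mp)                 # g^(D/2) = 2^{n'} = -1: a D-th root of unity
    mask = (1 << M) - 1
    A = [(a >> (M * i)) & mask for i in range(D)]   # D blocks of M bits
    B = [(b >> (M * i)) & mask for i in range(D)]
    Ah, Bh = dft(A, g, mod), dft(B, g, mod)
    Ch = [x * y % mod for x, y in zip(Ah, Bh)]      # pointwise product
    # Inverse transform: same transform with g^(-1), then divide by D.
    C = dft(Ch, pow(g, -1, mod), mod)
    Dinv = pow(D, -1, mod)
    C = [c * Dinv % mod for c in C]
    return sum(c << (M * j) for j, c in enumerate(C))  # sum of C_j * 2^(M*j)
```

With k = 3 and M = 8 this multiplies 32-bit numbers exactly; a full implementation would instead reduce the result modulo ${\displaystyle 2^{n}+1}$ and recurse on the pointwise products.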

Details

Every number in base B can be written as a polynomial:

${\displaystyle X=\sum _{i=0}^{N}{x_{i}B^{i}}}$

Furthermore, multiplication of two numbers can be thought of as a product of two polynomials:

${\displaystyle XY=\left(\sum _{i=0}^{N}{x_{i}B^{i}}\right)\left(\sum _{j=0}^{N}{y_{j}B^{j}}\right)}$

Because the coefficient of ${\displaystyle B^{k}}$ in the product is ${\displaystyle c_{k}=\sum _{(i,j):i+j=k}{x_{i}y_{j}}=\sum _{i=0}^{k}{x_{i}y_{k-i}}}$, the coefficient sequence is the convolution of the two digit sequences.

By using the FFT (fast Fourier transform), which the original version of the algorithm used rather than the NTT,[7] together with the convolution rule, we get

${\displaystyle {\hat {f}}(x*y)={\hat {f}}\left(\sum _{i=0}^{k}x_{i}y_{k-i}\right)={\hat {f}}(x)\bullet {\hat {f}}(y).}$

That is, ${\displaystyle {\hat {c}}_{k}={\hat {x}}_{k}\bullet {\hat {y}}_{k}}$, where ${\displaystyle {\hat {c}}_{k}}$ is the corresponding coefficient in Fourier space. This can also be written as fft(x * y) = fft(x) ● fft(y).

We have the same coefficients due to linearity of the Fourier transform, and because these polynomials only consist of one unique term per coefficient:

${\displaystyle {\hat {f}}(x^{n})=\left({\frac {i}{2\pi }}\right)^{n}\delta ^{(n)}}$ and
${\displaystyle {\hat {f}}(a\,X(\xi )+b\,Y(\xi ))=a\,{\hat {X}}(\xi )+b\,{\hat {Y}}(\xi )}$

Convolution rule: ${\displaystyle {\hat {f}}(X*Y)=\ {\hat {f}}(X)\bullet {\hat {f}}(Y)}$

Through the FFT, we have reduced the convolution problem to a pointwise multiplication problem.

By computing the inverse FFT (polynomial interpolation), one gets the desired coefficients ${\displaystyle c_{k}}$.

The algorithm uses a divide-and-conquer strategy to split the problem into subproblems.
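The reduction can be checked on a small worked example in Python, with base B = 10 and illustrative digits: the coefficients of the product polynomial are the convolution of the digit sequences, and evaluating at B recovers the integer product.

```python
B = 10
x, y = 123, 456
xd = [3, 2, 1]                      # digits of x, least significant first
yd = [6, 5, 4]                      # digits of y
# c_k = sum over i+j=k of x_i * y_j  (the acyclic convolution)
c = [0] * (len(xd) + len(yd) - 1)
for i, xi in enumerate(xd):
    for j, yj in enumerate(yd):
        c[i + j] += xi * yj
# Evaluating the product polynomial at B recovers the integer product.
assert sum(ck * B**k for k, ck in enumerate(c)) == x * y
```

A fast multiplication algorithm computes the convolution `c` via FFT and pointwise products instead of the double loop.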

Convolution under mod N

${\displaystyle c_{k}=\sum _{(i,j):i+j\equiv k{\pmod {N(n)}}}a_{i}b_{j}}$, where ${\displaystyle N(n)=2^{n}+1}$ and ${\displaystyle N(N)=2^{N}+1}$ in the Schönhage–Strassen algorithm.

By letting:

${\displaystyle a_{i}'=\theta ^{i}a_{i}}$ and ${\displaystyle b_{j}'=\theta ^{j}b_{j},}$

where ${\displaystyle \theta ^{n}=-1}$ (that is, θ is a primitive ${\displaystyle 2n}$-th root of unity),

One sees that:[8]

{\displaystyle {\begin{aligned}C_{k}&=\theta ^{-k}\sum _{(i,j):i+j\equiv k{\pmod {N(n)}}}a_{i}'b_{j}'\\[6pt]&=\theta ^{-k}\left(\sum _{(i,j):i+j=k}a_{i}'b_{j}'+\sum _{(i,j):i+j=k+n}a_{i}'b_{j}'\right)\\[6pt]&=\theta ^{-k}\left(\sum _{(i,j):i+j=k}a_{i}b_{j}\theta ^{k}+\sum _{(i,j):i+j=k+n}a_{i}b_{j}\theta ^{n+k}\right)\\[6pt]&=\sum _{(i,j):i+j=k}a_{i}b_{j}+\theta ^{n}\sum _{(i,j):i+j=k+n}a_{i}b_{j}.\end{aligned}}}

This means that one can apply the weights ${\displaystyle \theta ^{i}}$ to the inputs, and then multiply by ${\displaystyle \theta ^{-k}}$ afterwards.

Instead of using weights, one can, in the first step of the recursion (when ${\displaystyle n=N}$), use the fact that ${\displaystyle \theta ^{N}=-1}$ and calculate directly:

${\displaystyle C_{k}=\sum _{(i,j):i+j\equiv k{\pmod {N(N)}}}a_{i}b_{j}=\sum _{(i,j):i+j=k}a_{i}b_{j}-\sum _{(i,j):i+j=k+n}a_{i}b_{j}}$
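The weighting identity can be checked numerically. The Python sketch below uses illustrative toy values (n = 4, arithmetic modulo the Fermat prime 257, where θ = 4 satisfies θ⁴ ≡ −1) and confirms that weighting by θ^i, taking a plain cyclic convolution, and unweighting by θ^(−k) yields the negacyclic convolution.

```python
p = 257                      # 2^8 + 1, a Fermat prime
n = 4
theta = 4                    # 4^4 = 256 ≡ -1 (mod 257), so theta^n = -1

a = [3, 1, 4, 1]
b = [2, 7, 1, 8]

def negacyclic_direct(a, b):
    """C_k = sum_{i+j=k} a_i b_j - sum_{i+j=k+n} a_i b_j  (mod p)."""
    c = [0] * n
    for i in range(n):
        for j in range(n):
            if i + j < n:
                c[i + j] += a[i] * b[j]
            else:
                c[(i + j) % n] -= a[i] * b[j]
    return [x % p for x in c]

def negacyclic_weighted(a, b):
    """Weight by theta^i, take the plain cyclic convolution,
    then unweight by theta^(-k)."""
    aw = [a[i] * pow(theta, i, p) % p for i in range(n)]
    bw = [b[j] * pow(theta, j, p) % p for j in range(n)]
    cyc = [0] * n
    for i in range(n):
        for j in range(n):
            cyc[(i + j) % n] += aw[i] * bw[j]
    tinv = pow(theta, -1, p)
    return [cyc[k] * pow(tinv, k, p) % p for k in range(n)]

assert negacyclic_direct(a, b) == negacyclic_weighted(a, b)
```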

In a normal FFT, which operates over the complex numbers, one would use:

${\displaystyle \exp \left({\frac {2k\pi i}{n}}\right)=\cos {\frac {2k\pi }{n}}+i\sin {\frac {2k\pi }{n}},\qquad k=0,1,\dots ,n-1.}$
{\displaystyle {\begin{aligned}C_{k}&=\theta ^{-k}\left(\sum _{(i,j):i+j=k}a_{i}b_{j}\theta ^{k}+\sum _{(i,j):i+j=k+n}a_{i}b_{j}\theta ^{n+k}\right)\\[6pt]&=e^{-i2\pi k/n}\left(\sum _{(i,j):i+j=k}a_{i}b_{j}e^{i2\pi k/n}+\sum _{(i,j):i+j=k+n}a_{i}b_{j}e^{i2\pi (n+k)/n}\right)\end{aligned}}}

However, the FFT can also be used as an NTT (number-theoretic transform) in Schönhage–Strassen. This means that we have to use a θ that generates numbers in a finite field (for example ${\displaystyle \mathrm {GF} (2^{n}+1)}$).

A root of unity in a finite field GF(r) is an element θ such that ${\displaystyle \theta ^{r-1}\equiv 1}$ or ${\displaystyle \theta ^{r}\equiv \theta }$. For example, GF(p), where p is a prime, gives ${\displaystyle \{1,2,\ldots ,p-1\}}$.

Notice that ${\displaystyle 2^{n}\equiv -1}$ in ${\displaystyle \operatorname {GF} (2^{n}+1)}$, and that when a square root of 2 exists in ${\displaystyle \operatorname {GF} (2^{n+2}+1)}$ it satisfies ${\displaystyle ({\sqrt {2}})^{2(n+2)}\equiv -1}$ there. For these candidates, ${\displaystyle \theta ^{N}\equiv -1}$ in their finite field, and they therefore act the way we want.

The same FFT algorithms can still be used, though, as long as θ is a root of unity of a finite field.
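For example (a quick Python check with n = 8, so that ${\displaystyle 2^{n}+1=257}$): 2 itself is then a root of unity of order 2n, and every multiplication by a power of 2 in this ring is a bit shift.

```python
n = 8
N = (1 << n) + 1                  # 257
assert pow(2, n, N) == N - 1      # 2^n ≡ -1 (mod 2^n + 1)
assert pow(2, 2 * n, N) == 1      # hence 2 is a root of unity of order 2n
```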

To compute the FFT/NTT transform, we do the following:

{\displaystyle {\begin{aligned}C_{k}'&={\hat {f}}(k)={\hat {f}}\left(\theta ^{-k}\left(\sum _{(i,j):i+j=k}a_{i}b_{j}\theta ^{k}+\sum _{(i,j):i+j=k+n}a_{i}b_{j}\theta ^{n+k}\right)\right)\\[6pt]C_{k+k}'&={\hat {f}}(k+k)={\hat {f}}\left(\sum _{(i,j):i+j=2k}a_{i}b_{j}\theta ^{k}+\sum _{(i,j):i+j=n+2k}a_{i}b_{j}\theta ^{n+k}\right)\\[6pt]&={\hat {f}}\left(\sum _{(i,j):i+j=2k}a_{i}b_{j}\theta ^{k}+\sum _{(i,j):i+j=2k+n}a_{i}b_{j}\theta ^{n+k}\right)\\[6pt]&={\hat {f}}\left(A_{k\leftarrow k}\right)\bullet {\hat {f}}(B_{k\leftarrow k})+{\hat {f}}(A_{k\leftarrow k+n})\bullet {\hat {f}}(B_{k\leftarrow k+n})\end{aligned}}}

The first product contributes to ${\displaystyle c_{k}}$ for each k. The second also contributes to ${\displaystyle c_{k}}$, due to the wrap-around of ${\displaystyle (i+j)}$ mod ${\displaystyle N(n)}$.

To do the inverse:

${\displaystyle C_{k}=2^{-m}{\hat {f^{-1}}}(\theta ^{-k}C_{k+k}')}$ or ${\displaystyle C_{k}={\hat {f^{-1}}}(\theta ^{-k}C_{k+k}')}$

depending on whether the FFT implementation normalizes its data or not.

One multiplies by ${\displaystyle 2^{-m}}$ to normalize the FFT data to a specific range, where ${\displaystyle {\frac {1}{n}}\equiv 2^{-m}{\bmod {N}}(n)}$; the value ${\displaystyle 2^{-m}}$ is found using the modular multiplicative inverse.
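Since 2 is itself a root of unity of even order in the Fermat ring, the inverse of a power-of-two transform length is again a power of 2, so the normalization really is a cyclic shift. A quick Python check with illustrative sizes (variable names are ours, not from the algorithm):

```python
m = 3
length = 1 << m            # transform length n = 2^m = 8
nbits = 12
N = (1 << nbits) + 1       # coefficient modulus 2^12 + 1 = 4097
# 2 has order 2*nbits modulo 2^nbits + 1, so 2^(-m) is again a power of 2:
inv_len = pow(2, 2 * nbits - m, N)
assert inv_len == pow(length, -1, N)   # it equals 1/n mod N(n)
assert length * inv_len % N == 1
```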

Implementation details

Why N = 2^M + 1 in mod N

In the Schönhage–Strassen algorithm, ${\displaystyle N=2^{M}+1}$. One should think of this as a binary tree, where one has values in ${\displaystyle 0\leq {\text{index}}\leq 2^{M}=2^{i+j}}$. By letting ${\displaystyle K\in [0,M]}$, one can for each K find all ${\displaystyle i+j=K}$: one can group all ${\displaystyle (i,j)}$ pairs into M different groups. Using ${\displaystyle i+j=k}$ to group ${\displaystyle (i,j)}$ pairs through convolution is a classical problem in algorithms.[9] For example: let k be the total income, i the man's income, and j the woman's income; by using convolution, one can group the ${\displaystyle (i,j)}$ pairs into K groups based on the desired total income.

With this in mind, ${\displaystyle N=2^{M}+1}$ helps us group ${\displaystyle (i,j)}$ into ${\displaystyle {\frac {M}{2^{k}}}}$ groups for each group of subtasks at depth k in the tree, with ${\displaystyle N=2^{\frac {M}{2^{k}}}+1}$.

Notice that ${\displaystyle N=2^{M}+1=2^{2^{L}}+1}$, for some L, is a Fermat number. When working modulo ${\displaystyle N=2^{M}+1=2^{2^{L}}+1}$, we have a so-called Fermat ring.

Because some Fermat numbers are Fermat primes, one can in some cases avoid calculations.

There are other N that could have been used, of course, with the same prime-number advantages. By letting ${\displaystyle N=2^{k}-1}$, one has the maximal number representable in ${\displaystyle k}$ bits. ${\displaystyle N=2^{k}-1}$ is a Mersenne number, which in some cases is a Mersenne prime. It is a natural candidate alongside the Fermat number ${\displaystyle N=2^{2^{L}}+1}$.

In search of another N

Doing several modular computations against different N can be helpful for computing the integer product. By splitting M into several smaller moduli of different types and using the Chinese remainder theorem, one can recover the result of the multiplication xy.[10]

Fermat numbers and Mersenne numbers are just two types of number in a family called generalized Fermat–Mersenne numbers (GSM), with formula:[11]

${\displaystyle G_{q,p,n}=\sum _{i=1}^{p}q^{(p-i)n}={\frac {q^{pn}-1}{q^{n}-1}}}$
${\displaystyle M_{p,n}=G_{2,p,n}}$

In this formula, ${\displaystyle M_{2,2^{k}}}$ is a Fermat number and ${\displaystyle M_{p,1}}$ is a Mersenne number.
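A quick Python check of the two special cases, with illustrative parameters:

```python
def G(q, p, n):
    """Generalized Fermat–Mersenne number G_{q,p,n} = (q^(p*n) - 1) / (q^n - 1)."""
    return (q**(p * n) - 1) // (q**n - 1)

assert G(2, 2, 8) == 2**8 + 1     # M_{2,n} = 2^n + 1: a Fermat number when n = 2^k
assert G(2, 7, 1) == 2**7 - 1     # M_{p,1} = 2^p - 1: a Mersenne number
```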

This formula can be used to generate sets of equations that can be used in the CRT (Chinese remainder theorem):[12]

${\displaystyle g^{\frac {(M_{p,n}-1)}{2}}\equiv -1{\pmod {M_{p,n}}}}$, where g is a number such that there exists an x with ${\displaystyle x^{2}\equiv g{\pmod {M_{p,n}}}}$, assuming ${\displaystyle N=2^{n}}$.

Furthermore, ${\displaystyle g^{2^{(p-1)n}-1}\equiv a^{2^{n}-1}{\pmod {M_{p,n}}}}$, where a is an element that generates the elements of ${\displaystyle \{1,2,4,\ldots ,2^{n-1},2^{n}\}}$ in a cyclic manner.

If ${\displaystyle N=2^{t}}$, where ${\displaystyle 1\leq t\leq n}$, then ${\displaystyle g_{t}=a^{(2^{n}-1)2^{n-t}}}$.

How to choose K for a specific N

The following formula helps to find a proper K (the number of groups to divide the N bits into) for a given bit size N, by calculating the efficiency:[13]

${\displaystyle E={\frac {{\frac {2N}{K}}+k}{n}}}$

Here N is the bit size (the one used in ${\displaystyle 2^{N}+1}$) at the outermost level, and K gives ${\displaystyle {\frac {N}{K}}}$ groups of bits, where ${\displaystyle K=2^{k}}$.

n is found through N, K and k by finding the smallest x such that ${\displaystyle 2N/K+k\leq n=K2^{x}}$.

If one assumes an efficiency above 50%, so that ${\displaystyle {\frac {n}{2}}\leq {\frac {2N}{K}}}$ and ${\displaystyle K\leq n}$, and notes that k is very small compared to the rest of the formula, one gets

${\displaystyle K\leq 2{\sqrt {N}}}$

This means that when the method is efficient, K is bounded above by ${\displaystyle 2{\sqrt {N}}}$, i.e. asymptotically bounded above by ${\displaystyle {\sqrt {N}}}$.
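The bound can be explored numerically. The helper below is our own sketch, not the exact procedure of [13]; it approximates 2N/K by integer division, and shows that, for an illustrative outer bit size, efficiencies above 50% occur only while ${\displaystyle K\leq 2{\sqrt {N}}}$.

```python
import math

def efficiency(N, k):
    """E = (2N/K + k) / n, with n the smallest K * 2^x satisfying 2N/K + k <= n.
    Toy helper with integer division as an approximation."""
    K = 1 << k
    need = 2 * N // K + k
    n = K
    while n < need:
        n *= 2
    return need / n

N = 1_000_000                     # illustrative outer bit size
# Efficient parameter choices keep K below roughly 2*sqrt(N):
assert efficiency(N, 10) > 0.5 and (1 << 10) <= 2 * math.isqrt(N)
assert efficiency(N, 12) < 0.5 and (1 << 12) > 2 * math.isqrt(N)
```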

Pseudocode

The following algorithm, the standard modular Schönhage–Strassen multiplication algorithm (with some optimizations), is found in overview in:[14]

1. Split both input numbers a and b into n coefficients of s bits each. Use at least ${\displaystyle K+1}$ bits to store them, to allow encoding of the value ${\displaystyle 2^{K}}$.
2. Weight both coefficient vectors according to (2.24) with powers of θ by performing cyclic shifts on them.
3. Shuffle the coefficients ${\displaystyle a_{i}}$ and ${\displaystyle b_{j}}$ .
4. Evaluate ${\displaystyle a_{i}}$ and ${\displaystyle b_{j}}$ . Multiplications by powers of ω are cyclic shifts.
5. Do n pointwise multiplications ${\displaystyle c_{k}:=a_{k}b_{k}}$ in ${\displaystyle Z/(2^{K}+1)Z}$. If SMUL is used recursively, provide K as parameter. Otherwise, use some other multiplication function like T3MUL and reduce modulo ${\displaystyle 2^{K}+1}$ afterwards.
6. Shuffle the product coefficients ${\displaystyle c_{k}}$.
7. Evaluate the product coefficients ${\displaystyle c_{k}}$.
8. Apply the counterweights to the ${\displaystyle c_{k}}$ according to (2.25). Since ${\displaystyle \theta ^{2n}\equiv 1}$ it follows that ${\displaystyle \theta ^{-k}\equiv \theta ^{2n-k}}$.
9. Normalize the ${\displaystyle c_{k}}$ with ${\displaystyle 1/n\equiv 2^{-m}}$ (again a cyclic shift).
10. Add up the ${\displaystyle c_{k}}$ and propagate the carries. Make sure to properly handle negative coefficients.
11. Do a reduction modulo ${\displaystyle 2^{N}+1}$.
• T3MUL = Toom–Cook multiplication
• SMUL = Schönhage–Strassen multiplication
• Evaluate = FFT/IFFT
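The steps above can be sketched in Python. This is a toy rendering under several simplifications: a naive O(D²) transform replaces the shift-only FFT of steps 3–4 and 6–7, the recursive SMUL/T3MUL call of step 5 is replaced by Python's built-in multiplication, and all names (`ntt`, `smul_mod`) are illustrative.

```python
def ntt(arr, g, mod):
    """Naive O(D^2) number-theoretic transform (steps 3-4 / 6-7 use an FFT)."""
    D = len(arr)
    return [sum(arr[j] * pow(g, i * j, mod) for j in range(D)) % mod
            for i in range(D)]

def smul_mod(a, b, k, M):
    """Toy a*b mod 2^(D*M)+1 via the weighted (negacyclic) transform above."""
    D = 1 << k
    n = D * M
    N = (1 << n) + 1
    assert 0 <= a < N - 1 and 0 <= b < N - 1
    mask = (1 << M) - 1
    # 1. split into D coefficients of M bits each
    A = [(a >> (M * i)) & mask for i in range(D)]
    B = [(b >> (M * i)) & mask for i in range(D)]
    # inner modulus 2^{n'}+1, large enough for the signed coefficients
    Mi = -(-(2 * M + k + 1) // D)            # ceil((2M+k+1)/D)
    mod = (1 << (D * Mi)) + 1
    theta = 1 << Mi                          # theta^D = 2^{n'} ≡ -1
    g = theta * theta % mod                  # primitive D-th root of unity
    # 2. weight both coefficient vectors with powers of theta (bit shifts)
    A = [A[i] * pow(theta, i, mod) % mod for i in range(D)]
    B = [B[i] * pow(theta, i, mod) % mod for i in range(D)]
    # 4./5. evaluate and multiply pointwise
    C = [x * y % mod for x, y in zip(ntt(A, g, mod), ntt(B, g, mod))]
    # 7. inverse transform; 8. counterweights; 9. normalize by 1/D
    C = ntt(C, pow(g, -1, mod), mod)
    Dinv, tinv = pow(D, -1, mod), pow(theta, -1, mod)
    C = [c * Dinv * pow(tinv, j, mod) % mod for j, c in enumerate(C)]
    # 10. lift to signed values (negative coefficients!) and add with carries
    C = [c - mod if c > mod // 2 else c for c in C]
    # 11. reduce modulo 2^n + 1
    return sum(c << (M * j) for j, c in enumerate(C)) % N
```

For example, `smul_mod(a, b, 3, 8)` computes ${\displaystyle ab{\bmod {2^{64}+1}}}$ for any a, b below ${\displaystyle 2^{64}}$.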

Further study

For implementation details, one can read the book Prime Numbers: A Computational Perspective.[15] This variant differs somewhat from Schönhage's original method in that it exploits the discrete weighted transform to perform negacyclic convolutions more efficiently. Another source for detailed information is Knuth's The Art of Computer Programming.[16]

Optimizations

This section explains a number of important practical optimizations used when implementing Schönhage–Strassen.

Use of other multiplication algorithms inside the algorithm

Below a certain cutoff point, it's more efficient to use other multiplication algorithms, such as Toom–Cook multiplication.[17]

Square root of 2 trick

The idea is to use ${\displaystyle {\sqrt {2}}}$ as a root of unity of order ${\displaystyle 2^{n+2}}$ in the finite field ${\displaystyle \mathrm {GF} (2^{n+2}+1)}$ (it is a solution to the equation ${\displaystyle \theta ^{2^{n+2}}\equiv 1{\pmod {2^{n+2}+1}}}$) when weighting values in the NTT (number-theoretic transform) approach. It has been shown to save 10% in integer multiplication time.[18]
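The square root is explicit: in the ring ${\displaystyle \mathbb {Z} /(2^{n}+1)}$ with 4 dividing n, one can take ${\displaystyle {\sqrt {2}}=2^{3n/4}-2^{n/4}}$, so applying it still costs only shifts and a subtraction. A quick Python check with an illustrative n:

```python
n = 16                                  # any multiple of 4 works
N = (1 << n) + 1                        # 65537
sqrt2 = (pow(2, 3 * n // 4, N) - pow(2, n // 4, N)) % N
assert sqrt2 * sqrt2 % N == 2           # it really is a square root of 2
assert pow(sqrt2, 2 * n, N) == N - 1    # twice the order of 2: (sqrt2)^(2n) ≡ -1
```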

Granlund's trick

By letting ${\displaystyle m=N+h}$, one can compute ${\displaystyle uv{\bmod {2^{N}+1}}}$ and ${\displaystyle (u{\bmod {2^{h}}})(v{\bmod {2^{h}}})}$. In combination with the CRT (Chinese remainder theorem), this gives the exact value of the product uv.[19]
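A Python sketch of the recombination, with illustrative sizes of our own choosing: since ${\displaystyle 2^{N}+1}$ is odd, it is coprime to ${\displaystyle 2^{h}}$, and uv is recovered from its two residues whenever ${\displaystyle uv<2^{h}(2^{N}+1)}$.

```python
N, h = 17, 16
u, v = 50021, 60013               # chosen so that u*v < 2^h * (2^N + 1)
M1, M2 = (1 << N) + 1, 1 << h
assert u * v < M1 * M2
r1 = u * v % M1                   # residue produced by the mod-(2^N+1) multiply
r2 = (u % M2) * (v % M2) % M2     # cheap product of the low h bits
inv = pow(M1, -1, M2)             # M1 is odd, hence invertible mod 2^h
x = (r1 + M1 * ((r2 - r1) * inv % M2)) % (M1 * M2)
assert x == u * v                 # CRT recombination recovers the exact product
```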

References

1. ^ Schönhage, Arnold; Strassen, Volker (1971). "Schnelle Multiplikation großer Zahlen" [Fast multiplication of large numbers]. Computing (in German). 7 (3–4): 281–292. doi:10.1007/BF02242355. S2CID 9738629.
2. ^ Karatsuba multiplication has asymptotic complexity of about ${\displaystyle O(n^{1.58})}$ and Toom–Cook multiplication has asymptotic complexity of about ${\displaystyle O(n^{1.46}).}$

Van Meter, Rodney; Itoh, Kohei M. (2005). "Fast Quantum Modular Exponentiation". Physical Review. 71 (5): 052320. arXiv:quant-ph/0408006. Bibcode:2005PhRvA..71e2320V. doi:10.1103/PhysRevA.71.052320. S2CID 14983569.

A discussion of practical crossover points between various algorithms can be found in: Overview of Magma V2.9 Features, arithmetic section Archived 2006-08-20 at the Wayback Machine

Luis Carlos Coronado García, "Can Schönhage multiplication speed up the RSA encryption or decryption?", University of Technology, Darmstadt (2005)

The GNU Multi-Precision Library uses it for values of at least 1728 to 7808 64-bit words (33,000 to 150,000 decimal digits), depending on architecture. See:

"FFT Multiplication (GNU MP 6.2.1)". gmplib.org. Retrieved 2021-07-20.

"MUL_FFT_THRESHOLD". GMP developers' corner. Archived from the original on 24 November 2010. Retrieved 3 November 2011.

"MUL_FFT_THRESHOLD". gmplib.org. Retrieved 2021-07-20.

3. ^ Fürer's algorithm has asymptotic complexity ${\textstyle O{\bigl (}n\cdot \log n\cdot 2^{\Theta (\log ^{*}n)}{\bigr )}.}$
Fürer, Martin (2007). "Faster Integer Multiplication" (PDF). Proc. STOC '07. Symposium on Theory of Computing, San Diego, Jun 2007. pp. 57–66. Archived from the original (PDF) on 2007-03-05.
Fürer, Martin (2009). "Faster Integer Multiplication". SIAM Journal on Computing. 39 (3): 979–1005. doi:10.1137/070711761. ISSN 0097-5397.

Fürer's algorithm is used in the Basic Polynomial Algebra Subprograms (BPAS) open source library. See: Covanov, Svyatoslav; Mohajerani, Davood; Moreno Maza, Marc; Wang, Linxiao (2019-07-08). "Big Prime Field FFT on Multi-core Processors". Proceedings of the 2019 on International Symposium on Symbolic and Algebraic Computation (PDF). Beijing China: ACM. pp. 106–113. doi:10.1145/3326229.3326273. ISBN 978-1-4503-6084-5. S2CID 195848601.

4. ^ Harvey, David; van der Hoeven, Joris (2021). "Integer multiplication in time ${\displaystyle O(n\log n)}$" (PDF). Annals of Mathematics. Second Series. 193 (2): 563–617. doi:10.4007/annals.2021.193.2.4. MR 4224716. S2CID 109934776.
5. ^ This method is used in INRIA's ECM library.
6. ^ "ECMNET". members.loria.fr. Retrieved 2023-04-09.
7. ^ Becker, Hanno; Hwang, Vincent; J. Kannwischer, Matthias; Panny, Lorenz (2022). "Efficient Multiplication of Somewhat Small Integers using Number-Theoretic Transforms" (PDF).
8. ^ Lüders, Christoph (2014). "Fast Multiplication of Large Integers: Implementation and Analysis of the DKSS Algorithm". p. 26.
9. ^ Kleinberg, Jon; Tardos, Eva (2005). Algorithm Design (1 ed.). Pearson. p. 237. ISBN 0-321-29535-8.
10. ^ Gaudry, Pierrick; Alexander, Kruppa; Paul, Zimmermann (2007). "A GMP-based implementation of Schönhage-Strassen's large integer multiplication algorithm" (PDF). p. 6.
11. ^ S. Dimitrov, Vassil; V. Cooklev, Todor; D. Donevsky, Borislav (1994). "Generalized Fermat-Mersenne Number Theoretic Transform". p. 2.
12. ^ S. Dimitrov, Vassil; V. Cooklev, Todor; D. Donevsky, Borislav (1994). "Generalized Fermat-Mersenne Number Theoretic Transform". p. 3.
13. ^ Gaudry, Pierrick; Kruppa, Alexander; Zimmermann, Paul (2007). "A GMP-based Implementation of Schönhage-Strassen's Large Integer Multiplication Algorithm" (PDF). p. 2.
14. ^ Lüders, Christoph (2014). "Fast Multiplication of Large Integers: Implementation and Analysis of the DKSS Algorithm". p. 28.
15. ^ R. Crandall & C. Pomerance. Prime Numbers – A Computational Perspective. Second Edition, Springer, 2005. Section 9.5.6: Schönhage method, p. 502. ISBN 0-387-94777-9
16. ^ Knuth, Donald E. (1997). "§ 4.3.3.C: Discrete Fourier transforms". The Art of Computer Programming. Vol. 2: Seminumerical Algorithms (3rd ed.). Addison-Wesley. pp. 305–311. ISBN 0-201-89684-2.
17. ^ Gaudry, Pierrick; Kruppa, Alexander; Zimmermann, Paul (2007). "A GMP-based implementation of Schönhage-Strassen's large integer multiplication algorithm" (PDF). p. 7.
18. ^ Gaudry, Pierrick; Kruppa, Alexander; Zimmermann, Paul (2007). "A GMP-based implementation of Schönhage-Strassen's large integer multiplication algorithm" (PDF). p. 6.
19. ^ Gaudry, Pierrick; Kruppa, Alexander; Zimmermann, Paul (2007). "A GMP-based implementation of Schönhage-Strassen's large integer multiplication algorithm" (PDF). p. 6.