# Talk:AKS primality test


## formula cleanup

Added request for help cleaning up the mathematical formulae on Wikipedia talk:WikiProject Mathematics. Hv 13:41, 10 August 2005 (UTC)

variables need to be defined 68.6.112.70 03:48, 21 November 2005 (UTC)
Which variables? -- Jitse Niesen (talk) 11:05, 21 November 2005 (UTC)
x and a need to be defined 140.211.137.232 21:04, 4 March 2006 (UTC)

For the original paper: try http://www.cse.iitk.ac.in/users/manindra/primality_original.pdf

## (mod n, x^r-1)??

I understand modular arithmetic but why are there two values in the modulo, n and x^r-1, I only know how to use modulo when there is only one value.

It means that things are equivalent if they differ by a multiple of n, by a multiple of x^r - 1, or by a sum of such multiples. You could do the same thing in the integers, but mod (10, 12) would be equivalent to mod (2), where 2 is the greatest common divisor. Integer polynomials don't work quite the same way. Septentrionalis 22:40, 4 April 2006 (UTC) (and we should put in a link to principal ideal domain)

Two questions: first, can you name a fast algorithm for calculating these? Also, mod (10, 12) wouldn't be equivalent to mod (2): take an example, 2 ≡ 0 (mod 2), but the same isn't true for mod (10, 12). (Unless negative multiples are allowed; I guess they are.)

Yes, negative multiples are allowed; they appear in 0 ≡ 2 (mod 2), which is true because 0 = 2 - 2. (For simple ideals they're not necessary, if you are allowed both sides of the equation.) See Ideal (ring theory). Septentrionalis 04:22, 5 April 2006 (UTC)

Thank you, how would a computer calculate this?

This is getting a little off topic. I suggest you read a textbook on algorithmic number theory, which should explain in enough detail how one would do that; most books on applied cryptography also cover it. JoshuaZ 20:45, 6 April 2006 (UTC)

ok, sorry. Just so I'm getting this right: (mod 9, 11) would be the same as (mod 1), because 5·9 - 4·11 = 1, right? --Sur3 00:37, 7 August 2008 (MEZ) —Preceding unsigned comment added by 131.234.130.35 (talk)

Yes, in general for two integers, ${\displaystyle {\pmod {a_{1},\ldots ,a_{n}}}}$ is the same as ${\displaystyle {\pmod {\gcd(a_{1},\ldots ,a_{n})}}}$ --Nick
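The collapse to the gcd can be checked by brute force. A throwaway Python sketch (not from the discussion, purely illustrative): the set of integer combinations u·a + v·b is exactly the set of multiples of gcd(a, b), so for a = 9, b = 11 every integer difference is reachable and (mod 9, 11) is the same as (mod 1).

```python
from math import gcd

# x ≡ y (mod a, b) means x - y = u*a + v*b for some integers u, v.
# The set of such combinations is exactly the multiples of gcd(a, b).
a, b = 9, 11
reachable = {u * a + v * b for u in range(-20, 21) for v in range(-20, 21)}

print(gcd(a, b))                                     # 1
print(all(k in reachable for k in range(-10, 11)))   # True
```

This matches Sur3's example 5·9 - 4·11 = 1: once 1 is reachable, so is every integer.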

Is

${\displaystyle (x-a)^{n}\equiv (x^{n}-a){\pmod {n,x^{r}-1}}}$

equivalent to saying that, for each x,

${\displaystyle ((x-a)^{n}\mod {(x^{r}-1)})\mod {n}=((x^{n}-a)\mod {(x^{r}-1)})\mod {n}}$

where ${\displaystyle (...)\mod {(x^{r}-1)}}$ is the polynomial long division remainder

and ${\displaystyle (...)\mod {n}}$ is the ordinary integer division remainder?

Maksim-e (talk) 20:10, 13 May 2008 (UTC)

Both should be polynomial divisions. — Emil J. (formerly EJ) 10:14, 8 August 2008 (UTC)
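Emil J.'s point, that both reductions are polynomial operations and can therefore be applied in either order (up to a final coefficient reduction), can be illustrated with a small Python sketch. The helper names are mine, not from the discussion:

```python
from math import comb

def mod_xr1(p, r):
    # reduce a coefficient list (index = power of x) mod x^r - 1
    out = [0] * r
    for i, c in enumerate(p):
        out[i % r] += c
    return out

def mod_n(p, n):
    # reduce each coefficient mod n (this is the *polynomial* mod n)
    return [c % n for c in p]

n, r, a = 7, 3, 2
# coefficients of (x - a)^n over Z; p[k] is the coefficient of x^k
p = [comb(n, n - k) * (-a) ** (n - k) for k in range(n + 1)]

left = mod_n(mod_xr1(p, r), n)
right = mod_n(mod_xr1(mod_n(p, n), r), n)
# both orders give x + 5, i.e. (x-2)^7 ≡ x^7 - 2^7 ≡ x + 5 (mod 7, x^3 - 1)
print(left == right, left)   # True [5, 1, 0]
```

Evaluating the polynomial at an integer x and then taking integer remainders, as in the question, is a different (and weaker) operation.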

## confusion over "mod a, b" meaning

I can't see anywhere on Wikipedia that officially explains the (mod a, b) syntax. I would have intuited this as (mod lcm(a, b)), so I think it would be better to disambiguate this. If it actually means (mod gcd(a, b)), I think we should change it to say that.

On a side note, it's hard to tell from the article what "r" is supposed to be. Is it supposed to be "all r", "any r within a certain range", "a single specific r", etc? CountingPine (talk) 14:31, 12 November 2008 (UTC)

There is no explanation because it is obvious. Note that the authors of the original paper also do not bother to define it. I don't understand why you are trying to complicate matters by bringing lcm or gcd into it. The congruence
${\displaystyle x\equiv y{\pmod {a}}\,}$
is defined as
${\displaystyle \exists u\,(x=y+ua),}$
and the generalization to more moduli
${\displaystyle x\equiv y{\pmod {a_{1},\dots ,a_{k}}}\,}$
is therefore naturally introduced as
${\displaystyle \exists u_{1},\dots ,u_{k}\,(x=y+u_{1}a_{1}+\dots +u_{k}a_{k}).}$
How else could it be defined in a sensible way? Even more generally, one sometimes encounters
${\displaystyle x\equiv y{\pmod {I}}\,}$
meaning
${\displaystyle x-y\in I,}$
where I is an arbitrary ideal. — Emil J. 14:54, 12 November 2008 (UTC)
Maybe it is obvious... But I am coming to this page as someone who's not an expert in the subject matter, and as far as I can tell a sensible way would be to define it as a stronger statement, rather than a weaker one, i.e. in a rough equivalence to your syntax, I would naturally introduce it as
${\displaystyle \exists u_{1},\dots ,u_{k}\,(x=y+u_{1}a_{1},x=y+u_{2}a_{2},\dots ,x=y+u_{k}a_{k}).}$
I think this is equally "obvious". Having no other examples of the syntax, the only clue that it means the weaker form is that with the stronger form the mod n expression would probably have been omitted, since it's given earlier, and it would be an understatement to call it a "related" equivalence.
Despite this, and despite what you've said, I don't think the meaning is obvious, and I can't find an official definition of it anywhere on WP. Perhaps this is an issue that should be addressed on the Modular arithmetic page though...
Anyway, I hope that we can find a reasonable solution to this, because it seems like a page that could be understandable by people without too much grounding in the subject, but it just seems a little too easy to get lost on this bit. CountingPine (talk) 10:47, 15 November 2008 (UTC)
Aha, now I can see where you got the lcm: you read "${\displaystyle x\equiv y{\pmod {a,b}}}$" as a shorthand for "${\displaystyle x\equiv y{\pmod {a}}}$ and ${\displaystyle x\equiv y{\pmod {b}}}$". Well, that would be a pointless notation, as it would indeed be equivalent to the simple ${\displaystyle x\equiv y{\pmod {lcm(a,b)}}}$. But I can see now that it is potentially confusing. It can't hurt to define the notation in the article after all. — Emil J. 14:13, 17 November 2008 (UTC)
Thanks for changing that, and for bearing with me. Are the polynomials f and g in x? CountingPine (talk) 16:17, 17 November 2008 (UTC)
Yes, all the polynomials are in x. — Emil J. 17:04, 17 November 2008 (UTC)

Just for some clarity. ${\displaystyle x\equiv y{\pmod {a,b}}}$ is equivalent to ${\displaystyle x\equiv y{\pmod {\gcd(a,b)}}}$. For example ${\displaystyle 1\equiv 3{\pmod {4,6}}}$, since ${\displaystyle 1=3+4\cdot (1)+6\cdot (-1)}$ and we have ${\displaystyle 1\equiv 3{\pmod {2}}}$. This comes from the fact that we are doing modulo over the ideal generated by ${\displaystyle a}$ and ${\displaystyle b}$, which in a Principal ideal domain is the same ideal that the one generated by ${\displaystyle \gcd(a,b)}$. The ${\displaystyle lcm}$ is used to satisfy several congruences simultaneously, but that is another matter. 193.144.198.250 (talk) 12:51, 11 March 2014 (UTC)

In section 3 of the published article, it is explained that the congruence in question is a shorthand for an equality in Zn[x]/(xr - 1), the quotient ring of a polynomial ring. I included this in the article. Nxavar (talk) 12:17, 10 December 2015 (UTC)

Here's an example of the long division involved, taken from Example 1, step 5, part A), to hopefully put this issue to rest (note: % is used as the mod operator, i.e. 5 % 3 = 2):

calculate ${\displaystyle (x+a)^{31}\equiv x^{2}+a^{31}{\pmod {(x^{29}-1),31}}}$ by first finding the polynomial remainder, then reducing that remainder term by term mod 31.

Expanding (x+a)^31 = Σ_{k=0}^{31} C(31,k) a^k x^(31-k), long division by x^29 - 1 simply folds the three highest terms down (x^31 ≡ x^2, x^30 ≡ x, x^29 ≡ 1), giving the polynomial remainder

4495a^3x^28 + 31465a^4x^27 + ... + 4495a^28x^3 + (1 + 465a^29)x^2 + (31a + 31a^30)x + (465a^2 + a^31)

Reducing each coefficient mod 31 annihilates every binomial coefficient C(31,k) with 0 < k < 31 (each is divisible by 31, since 31 is prime), so the polynomial modulus is

x^2 + a^31

This is by no means the most efficient method, but it does demonstrate how to do the calculation by hand. 199.34.4.20 (talk) 21:06, 5 April 2016 (UTC)
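The hand calculation above can also be checked mechanically. A short Python sketch (my own, not part of the example): expand (x+a)^31 via binomial coefficients, fold exponents mod 29, reduce coefficients mod 31, and confirm only x^2 + a^31 survives.

```python
from math import comb

# Rebuild the n = 31, r = 29 computation: (x+a)^31 = sum_j C(31,j) a^j x^(31-j).
# Fold exponents mod 29 (i.e. mod x^29 - 1) and reduce coefficients mod 31.
n, r = 31, 29
res = {}                       # {power of x: {power of a: coefficient mod n}}
for j in range(n + 1):
    xp = (n - j) % r
    d = res.setdefault(xp, {})
    d[j] = (d.get(j, 0) + comb(n, j)) % n
# discard zero coefficients
res = {xp: {aj: c for aj, c in d.items() if c} for xp, d in res.items()}
res = {xp: d for xp, d in res.items() if d}

print(res)   # {2: {0: 1}, 0: {31: 1}}, i.e. x^2 + a^31
```

Every C(31,j) with 0 < j < 31 vanishes mod 31, which is exactly why only the folded x^31 term and the a^31 term remain.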

## Worked Example

Would it be possible for someone to provide a worked example, like has been done for the Miller–Rabin primality test? It'll be much appreciated. Thanks in advance. Gulliveig (talk) 15:17, 15 June 2009 (UTC)

The worked example provided for n = 31 is the smallest prime to make it through to Step 5. Although the gain from (x+a)^31 (mod x^29-1) is only a reduction of the polynomial from degree 31 to degree 28, n = 31 does demonstrate the implementation of the full algorithm. Moreover, while a bigger choice of n would show a more significant reduction of (x+a)^n (mod x^r-1), it would also make showing the expansion of (x+a)^n a bit unwieldy.

Also, my intent for an Example 2 was to find an n that is not prime which also gets past Step 3. Unfortunately n = 74,513 = 269·277, r = 263 is the first composite to get to Step 5, and wolframalpha.com will not currently run PolynomialMod[PolynomialRemainder[(x+a)^74513, x^263-1, x], 74513] for free... However, if you choose to ignore the results of Step 3 for n = 11·5 = 55, r = 10 (Step 3: gcd(5,55) = 5), then n = 55, r = 10 passes Step 4 (55 > 10) and you can see what it looks like for Step 5 to determine that n is composite.
Step 5: n=55, r=10
(x+a)^55 (mod x^10-1, 55) =
11a^5 + 22a^25 + a^55 + 5a^44x + 10a^33x^2 + 10a^22x^3 + 5a^11x^4 + (22a^30 + 11a^50 + 1)x^5

plugging in a = {1, 2, 3, ..., max = ⌊log2(55)·√(φ(10))⌋ = 11} and modding by 55 results in the following polynomials:
34+5x+10x^2+10x^3+5x^4+34x^5
54+25x+25x^2+40x^3+10x^4+23x^5
1+20x+50x^2+35x^3+15x^4+23x^5
1+15x+35x^2+50x^3+20x^4+34x^5
45+45x+40x^2+30x^3+25x^4+x^5
54+45x+15x^2+30x^3+30x^4+34x^5
54+15x+20x^2+50x^3+35x^4+23x^5
21+20x+5x^2+35x^3+40x^4+23x^5
1+25x+30x^2+40x^3+45x^4+34x^5
10+5x+45x^2+10x^3+50x^4+x^5
44+34x^5

none of which is congruent mod 55 to x^5 + a ≡ x^55 + a (mod x^10-1)
71.211.195.249 (talk) 01:17, 10 February 2014 (UTC)

I think the worked example should be modified to show the use of repeated squaring mod n and mod x^r-1 to find the value of the congruence. After all, if the whole congruence is evaluated before the modulus as shown currently, the algorithm would take time proportional to n (exponential time).4.30.129.162 (talk) 06:40, 31 August 2014 (UTC)

## Finding r

In the section "Algorithm" it's said: find the smallest r such that o_r(n) > log²(n). That condition is not sufficient, because with only that limitation Carmichael numbers are also output as prime. The correct condition on r is: find the smallest r such that o_r(n) > log²(n) and n-1 is not divisible by r. Kaluppollo (talk) 13:28, 15 November 2009 (UTC)

I don't understand your reasoning with Carmichael numbers, but at any rate if r divides n − 1, then the multiplicative order of n modulo r is 1, which is not greater than log²(n), hence your extra condition is redundant. — Emil J. 13:20, 16 November 2009 (UTC)

## New Algorithm Version error

It is stated in the article: "Again the AKS algorithm consists of two parts, and the first step is to find a suitable r; however, in the new version r is the smallest number such that o_r(n) > log²(n)." However, it seems to me that that is the same as what is listed in the original algorithm in Step 2. Perhaps I am incorrect, but if not that sentence needs to be changed at the very least because it implies that the step was updated when in fact it appears not to be. I propose that the sentence be reworded or deleted. —Preceding unsigned comment added by Jessemv (talkcontribs) 06:17, 20 November 2009 (UTC)

The algorithm incorrectly described as the "original algorithm" in the article is actually the new one. I think we should stick to that new version of the algorithm as it is the published version, but reduce the discussion of the preliminary version and its updates to a history section or something like that. — Emil J. 11:36, 20 November 2009 (UTC)
Done. — Emil J. 12:29, 20 November 2009 (UTC)
Better. Nice job EmilJ. However, I think the article should state the original algorithm somewhere, although you are correct in sticking with the new algorithm. Also, are you sure the Big-O notation is the "soft-O" symbol? The mathematics is a little too complicated for me to handle, but I would still like to make sure that the symbol is used correctly. The Big-O notation seems to differ between the algorithm versions. Jessemv (talk) 01:22, 23 November 2009 (UTC)
I see no reason to bother with an outdated preliminary version of an algorithm, and generally speaking, I don't understand why it is that this article seems to be pathologically obsessed with the old algorithm. This normally never happens in computer science or mathematics; people just refer to the final published version of a result in case of any differences. As for the big-O notation, all versions of the paper I've seen use O-tilde for the time bounds (and this is pretty much unavoidable, the extra polylog factors come from the time complexity of multiplication). Sometimes they write it as ${\displaystyle {\tilde {O}}}$ and sometimes as ${\displaystyle O^{\sim }\,}$, but that's only a typographic change. — Emil J. 13:00, 23 November 2009 (UTC)
Ok. Guess you're right about the algorithm. I just thought it would be a credit the original creators more to show what they thought up, but I now agree with you that the final version is more important. Thanks for the followup. Jessemv (talk) 22:41, 23 November 2009 (UTC)

## Formula Update

The algorithm was updated and improved shortly after release, but is not shown here: http://teal.gmu.edu/courses/ECE746/project/F06_Project_resources/Salembier_Southerington_AKS.pdf

In addition, (x + a)^n ≡ x^n + a (mod n) was changed to (x + a)^n ≡ x^n + a (mod x^r - 1, n) — Preceding unsigned comment added by Cable729 (talkcontribs) 20:36, 23 March 2011 (UTC)

## History and running time

log^12(x) is log(log(...(x)...))

log(x)^12 is (log x)^12

Please synchronize formulas and textual description. Which is meant?--134.130.183.101 (talk) 08:35, 20 July 2011 (UTC)

log^12(x) can also mean (log x)^12. That's what's meant in this article. --Roentgenium111 (talk) 14:25, 6 October 2011 (UTC)

## Must r be prime?

In the external link to Scott Aaronson's paper, it states "The way they speed things up is to work, not only mod N, but also mod a polynomial X^R − 1 (where R is a reasonably small prime)." The restriction that r be prime is not in the Wikipedia article. Which is right? Gaohoyt (talk) 18:41, 17 August 2012 (UTC)

There’s no point in r being prime, and in the published paper, they allow composite r. However, in the first version of the paper, they used prime r, and Aaronson’s paper was presumably written before the update.—Emil J. 19:25, 17 August 2012 (UTC)

## practicality

Could the article have some discussion about how fast it works in practice, if known, and compare that with other methods? I mean, the article is saying for sufficiently big x: T < M·(ln x)^6, but if it only suffices for x more than 9!^9!^9!^9!, or if M is similarly large, it may not be practical (or still could be practical). -Richard Peterson198.189.194.129 (talk) 17:00, 19 September 2012 (UTC)

From the implementations I've done and seen, it's very slow in practice. A straightforward by-the-v6-paper AKS implementation is far too slow to be useful for basically any input size. There are lots of optimizations possible (see, for instance the paper of Crandall and Papadopoulos, as well as all the Bernstein variants), but I haven't heard of AKS being used over, say, APR-CL or ECPP. For numbers <= 2^64, use deterministic M-R (no more than 7 tests needed) or BPSW. Pocklington-Lehmer or its improvement BLS75 work pretty well for numbers <= ~ 2^128, are easy to write if you have half-decent factoring code, and are much faster than AKS for this input size. APR-CL (e.g. Pari 2.3+) and ECPP (e.g. PRIMO) are much faster.
This is all rather anecdotal and full of my opinions. One also has to make a distinction between the standard exponent 6+o(1) algorithm and the various exponent 4+o(1) variants (e.g. Bernstein). For a little more reliable evidence, Mathworld states "ECPP is the fastest known general-purpose primality testing algorithm." Bernstein's 2006 paper indicates that at that time he believed ECPP was faster than his exponent 4+o(1) algorithm, but it wasn't clear cut. The AKS article on The Primes Pages says similar things. DAJ NT (talk) 09:03, 28 December 2012 (UTC)

This 2010 presentation by R.P. Brent Uncertainty can be Better than Certainty: Some Algorithms for Primality Testing gives some timing results and estimates, and has in the conclusions, "The AKS algorithm is theoretically significant[...] AKS is not a practical algorithm. ECPP is much faster." His estimates based on Magma implementations show ECPP at ~16 million times faster at ~332-bits (100 digits), and about 1000 million times faster at 1024 bits. My personal experience gives even higher factors, as ECPP can run quite a bit faster.
Also of interest is the time to verify a certificate. n-1 certificates are extremely fast. ECPP certificates are very fast. Both are, in general, much faster to verify than produce. AKS (standard style) doesn't really have certificates -- to verify the results you have to reproduce the entire computation. While the argument goes that this is polynomial time, in practice this is milliseconds vs. years. DAJ NT (talk) 05:25, 27 May 2013 (UTC)

## Numberphile video

The numberphile video about the AKS test is completely wrong. I realize the presenter says it is the AKS test, but it is not. It describes Hardy and Wright theorems 75 and 76, not the AKS test. At the 2:26, the presenter waves his hands and says, they actually did something a little different, then moves on. See Agrawal and Biswas "Primality and Identity Testing via Chinese Remaindering" (FOCS 1999, as paper in 2003), or read the actual AKS paper. They are describing Lemma 2.1 of the V6 paper. This is not AKS, it is not polynomial time, and it is not fast.

Please remove the link. DAJ NT (talk) 16:40, 1 March 2014 (UTC)

Having put it there, if others want it gone, I won't put it back a third time. The argument isn't really with me, it's with James Grime of Cambridge. user:JMOprof ©¿©¬ 16:56, 1 March 2014 (UTC)

## r in the example is wrong

The smallest r such that o_r(31) > log²(31) is 17, not 29. Can someone else confirm, or did I do something wrong?

It would help if some of the values were shown (e.g. what are maxk and maxr). maxk = floor(24.54406...) = 24. znorder(Mod(31,17)) = 16, znorder(Mod(31,23)) = 11, znorder(Mod(31,29)) = 28. r = 29 is correct. Double check with Pari/GP:
for (r=2, 29, print(r, " ", znorder(Mod(31, r)))).
DAJ NT (talk) 23:09, 22 May 2015 (UTC)
According to the page, it's not max r, it's min r. The page says nothing about the binary logarithm; so Log(31)^2 = 11.7923, so if the page is right, your own calculation shows that 17 is correct over 29. — Preceding unsigned comment added by 75.168.59.69 (talk) 23:43, 22 May 2015 (UTC)
Nope, I'm wrong, it's in the text after the description of the algorithm. I'm going to add a subscript to indicate in the algorithm itself that the logarithm is binary. Thanks for catching my mistake! — Preceding unsigned comment added by 75.168.59.69 (talk) 23:46, 22 May 2015 (UTC)
Thanks, that's a good edit. It is in page 2 of the reference paper: "We use log for base 2 logarithm, and ln for natural logarithm." I've seen it trip up implementations. DAJ NT (talk) 00:16, 23 May 2015 (UTC)
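The search for r discussed above is easy to replay; a minimal Python sketch (helper names are mine) using the base-2 logarithm, per the paper's convention:

```python
from math import gcd, log2

def multiplicative_order(n, r):
    # order of n in (Z/rZ)*; assumes gcd(n, r) == 1
    k, t = 1, n % r
    while t != 1:
        t = (t * n) % r
        k += 1
    return k

def find_r(n):
    # smallest r coprime to n with ord_r(n) > (log2 n)^2
    limit = log2(n) ** 2
    r = 2
    while True:
        if gcd(n, r) == 1 and multiplicative_order(n, r) > limit:
            return r
        r += 1

print(find_r(31))   # 29
```

With the natural log instead of log2, the threshold drops from about 24.5 to about 11.8 and the search wrongly stops at r = 17, which is exactly the confusion above.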

## x is a variable

In fact, x is not "over the integers" or "over the integers mod n".

The correct statement is:

1. ${\displaystyle (x+a)^{n}\equiv (x^{n}+a){\pmod {n}}{\text{ in }}\mathbb {Z} [x]}$
or
2. ${\displaystyle (x+a)^{n}=(x^{n}+a){\text{ in }}\mathbb {Z} [x]/(n)}$
or
3. ${\displaystyle (x+a)^{n}=(x^{n}+a){\text{ in }}\mathbb {Z} _{n}[x]}$
Not-obviously-equivalent formulations are:
4. ${\displaystyle (x+a)^{n}\equiv (x^{n}+a){\pmod {n}}{\text{ for }}x\in \mathbb {Z} }$
and
5. ${\displaystyle (x+a)^{n}=(x^{n}+a){\text{ for }}x\in \mathbb {Z} _{n}[x]}$

What was in the article before Nxavar's edit resembles #4, while what was there afterward resembled #5, but we should be arguing about which of #1, #2, #3 should be in the article. — Arthur Rubin (talk) 17:56, 1 February 2016 (UTC)

I am in favour of 3, which is consistent with the formulation of later statements. Nxavar (talk) 19:29, 1 February 2016 (UTC)
if ${\displaystyle a}$ and ${\displaystyle b}$ were both integers then it would be understood that the congruence modulo ${\displaystyle n}$ notation ${\displaystyle a\equiv b{\pmod {n}}}$ would imply that ${\displaystyle a,b\in \mathbb {Z} _{n}}$; likewise ${\displaystyle a}$ and ${\displaystyle b}$ being polynomials with ${\displaystyle n}$ an integer, then the statement ${\displaystyle a\equiv b{\pmod {n}}}$ implies that ${\displaystyle a,b\in \mathbb {Z} _{n}[x]}$. More specifically ${\displaystyle (x+a)^{n}\in \mathbb {Z} _{n}[x]}$ and ${\displaystyle (x^{n}+a)\in \mathbb {Z} _{n}[x]}$, where the coefficients for both of these two poly's would be in ${\displaystyle \mathbb {Z} _{n}}$ and ${\displaystyle x}$ itself is a free variable of the poly-space which is not necessarily restricted to ${\displaystyle \mathbb {Z} _{n}}$ or even ${\displaystyle \mathbb {Z} }$. In fact restricting ${\displaystyle x\in \mathbb {Z} _{n}}$ would limit the generality of the statement, and stating that ${\displaystyle 0x^{0}+1x^{1}\in \mathbb {Z} _{n}[x]}$ is redundant. 199.34.4.20 (talk) 22:04, 4 April 2016 (UTC)

## Stage 1, Power of a Prime?

I have added a suggested more efficient algorithm for finding whether n is a^b to the worked example. If n = i·p or n = i^p, where i > 1 is an integer and p < P is prime, with P the smallest prime for which n < P^P, then output composite. If n = a^b < P^P then log n = b·log a < P·log P; therefore either b < P or a < P. Therefore we could check for n = p^i and n = i^p. We don't need to check the first equation beyond the first division by p, as other n = i·p would be removed later. We only need to check prime values since, for example, if n = 9^9 it would also be 3^18 and 729^3. I am a newbie so please be kind if I have not followed procedure correctly. TheOldRic (talk) 15:55, 15 February 2016 (UTC)

That would be (even if you entered correctly) a different algorithm. It's not AKS. — Arthur Rubin (talk) 09:10, 17 February 2016 (UTC)
I agree with Arthur, that it should just reflect the AKS algorithm. Steps 1 and 3 aren't an optimization, but required for step 5+6 to be correct. The implementation could certainly be different than the example, but I think that's unnecessary (a link to Perfect_power#Detecting_perfect_powers at best). There are certainly better methods for perfect power detection, but that really is going way too far for this simple example. If we were to do that, then we'd end up with a digression on fast modular polynomial multiplication via binary segmentation, which is far more critical to the algorithm performance. Note that in a practical situation we would preface this test with fast compositeness tests (e.g. trial division, M-R, BPSW), so we're not wasting time on easily detected composites. But that is purely up to an implementation -- the algorithm on the page includes everything necessary for correctness. DAJ NT (talk) 19:07, 17 February 2016 (UTC)
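For reference, the standard Step 1 check (is n a perfect power a^b?) can be sketched in a few lines. This is a generic textbook-style method, not TheOldRic's proposal and not the fastest known approach:

```python
def is_perfect_power(n):
    # Step 1 of AKS: is n = a^b for some integers a > 1, b > 1?
    b = 2
    while (1 << b) <= n:                 # smallest base is 2, so b <= log2(n)
        a = round(n ** (1.0 / b))
        for cand in (a - 1, a, a + 1):   # guard against float rounding
            if cand > 1 and cand ** b == n:
                return True
        b += 1
    return False

print(is_perfect_power(729))   # True  (27^2 = 3^6)
print(is_perfect_power(31))    # False
```

As DAJ NT notes, better perfect-power detection exists, but any correct test of this form suffices for the algorithm's correctness.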

## Step 4

I was a bit confused by step 4. If n < r and step 3 takes r steps, then the number of applications of gcd is exponential in n. Then I noticed that the original paper has a footnote. As AKS prove that r is small (less than log^5 n), this step is only inserted for small n (AKS claim some number < 6). I think this footnote is necessary to avoid confusion and would like to see it added. — Preceding unsigned comment added by Leentorenvliet (talkcontribs) 12:26, 8 August 2016 (UTC)

It may be further worth pointing out that Step 3 is performing trial division from 2 to r. So practically we wouldn't want to do a gcd with every integer, but just divisibility tests (probably with 2+odds, a wheel, or primes). Step 4 then further works if r >= sqrt(n). DAJ NT (talk) 14:59, 8 August 2016 (UTC)
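The reading of Steps 3–4 as trial division sketches out like this in Python (function name is mine; the early-prime shortcut for r ≥ √n mentioned above is noted but not applied, to stay faithful to the algorithm as stated):

```python
def steps_3_and_4(n, r):
    # Step 3 as plain trial division: any a in 2..min(r, n-1) dividing n
    # proves n composite (equivalent to finding gcd(a, n) > 1)
    for a in range(2, min(r, n - 1) + 1):
        if n % a == 0:
            return "composite"
    # Step 4: if n <= r, every potential factor up to sqrt(n) was tried
    # (this would also be sound whenever r >= sqrt(n))
    if n <= r:
        return "prime"
    return "continue to step 5"

print(steps_3_and_4(31, 29))   # continue to step 5
print(steps_3_and_4(55, 10))   # composite
print(steps_3_and_4(5, 29))    # prime
```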
