Miller–Rabin primality test

From Wikipedia, the free encyclopedia

The Miller–Rabin primality test or Rabin–Miller primality test is a primality test: an algorithm which determines whether a given number is prime, similar to the Fermat primality test and the Solovay–Strassen primality test. Its original version, due to Gary L. Miller, is deterministic, but the determinism relies on the unproven generalized Riemann hypothesis;[1] Michael O. Rabin modified it to obtain an unconditional probabilistic algorithm.[2]

Concepts

Just like the Fermat and Solovay–Strassen tests, the Miller–Rabin test relies on an equality or set of equalities that hold true for prime values, then checks whether or not they hold for a number that we want to test for primality.

First, a lemma about square roots of unity in the finite field Z/pZ, where p is prime and p > 2. Certainly 1 and −1 always yield 1 when squared mod p; call these trivial square roots of 1. There are no nontrivial square roots of 1 mod p (a special case of the result that, in a field, a polynomial has no more zeroes than its degree). To show this, suppose that x is a square root of 1 mod p. Then:

x^2 ≡ 1 (mod p)
(x − 1)(x + 1) ≡ 0 (mod p)

In other words, p divides the product (x − 1)(x + 1). It thus divides one of the factors and it follows that x is either congruent to 1 or −1 mod p.
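
For illustration (this snippet is not part of the article; the moduli 17 and 221 are arbitrary choices), one can enumerate the square roots of 1 for a prime and for a composite modulus:

def square_roots_of_one(m):
    """Return all x in [0, m) with x^2 ≡ 1 (mod m)."""
    return [x for x in range(m) if (x * x) % m == 1]

print(square_roots_of_one(17))   # [1, 16]: only the trivial roots 1 and −1 for a prime
print(square_roots_of_one(221))  # [1, 103, 118, 220]: a composite can have nontrivial roots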

Now, let n be an odd prime. Then n − 1 is even and we can write it as 2^s·d, where s and d are positive integers (d is odd). For each a in (Z/nZ)*, either

a^d ≡ 1 (mod n)

or

a^(2^r·d) ≡ −1 (mod n)

for some 0 ≤ r ≤ s − 1.

To show that one of these must be true, recall Fermat's little theorem, which for prime n gives:

a^(n−1) ≡ 1 (mod n)

By the lemma above, if we keep taking square roots of a^(n−1), we will get either 1 or −1. If we get −1, then the second equality holds and we are done.

In the case when we've taken out every power of 2 and the second equality never held, we are left with the first expression a^d, which also must be congruent to 1 or −1, as it too is a square root. However, if the second equality never held, then in particular it could not have held for r = 0, meaning that

a^d ≢ −1 (mod n)

Since a^d must be 1 or −1 and cannot be −1, in the case the second equality doesn't hold, the first equality must.
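
To make this concrete, here is a short Python sketch (not part of the article; n = 97 and a = 5 are arbitrary choices) that factors the powers of 2 out of n − 1 and prints the chain a^d, a^(2d), ..., a^(2^s·d) mod n:

def decompose(n):
    """Write n − 1 as 2^s · d with d odd; return (s, d)."""
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    return s, d

n, a = 97, 5                 # 97 is prime, so the claim above applies
s, d = decompose(n)          # 96 = 2^5 · 3, so s = 5 and d = 3
chain = [pow(a, d * 2 ** r, n) for r in range(s + 1)]
print(s, d, chain)           # 5 3 [28, 8, 64, 22, 96, 1]: the term 96 = n − 1 appears,
                             # so the second equality holds for r = 4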

The Miller–Rabin primality test is based on the contrapositive of the above claim. That is, if we can find an a such that

a^d ≢ 1 (mod n)

and

a^(2^r·d) ≢ −1 (mod n) for all 0 ≤ r ≤ s − 1,

then a is a witness for the compositeness of n (sometimes misleadingly called a strong witness, although it is a certain proof of this fact). Otherwise a is called a strong liar, and n is a strong probable prime to base a. The term "strong liar" refers to the case where n is composite but nevertheless the equations hold as they would for a prime.

For every odd composite n, there are many witnesses a. However, no simple way of generating such an a is known. The solution is to make the test probabilistic: we choose a nonzero a in Z/nZ randomly, and check whether or not it is a witness for the compositeness of n. If n is composite, most of the choices for a will be witnesses, and the test will detect n as composite with high probability. There is, nevertheless, a small chance that we are unlucky and hit an a which is a strong liar for n. We may reduce the probability of such error by repeating the test for several independently chosen a.

Example

Suppose we wish to determine if n = 221 is prime. We write n − 1 = 220 as 2^2·55, so that we have s = 2 and d = 55. We randomly select a number a < n, say a = 174. We proceed to compute:

  • a^(2^0·d) mod n = 174^55 mod 221 = 47 ≠ 1, n − 1
  • a^(2^1·d) mod n = 174^110 mod 221 = 220 = n − 1.

Since 220 ≡ −1 mod n, either 221 is prime, or 174 is a strong liar for 221. We try another random a, this time choosing a=137:

  • a^(2^0·d) mod n = 137^55 mod 221 = 188 ≠ 1, n − 1
  • a^(2^1·d) mod n = 137^110 mod 221 = 205 ≠ n − 1.

Hence 137 is a witness for the compositeness of 221, and 174 was in fact a strong liar. Note that this tells us nothing about the factors of 221 (which are 13 and 17).
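
These computations are easy to reproduce; for instance, a minimal Python check using the built-in three-argument pow for modular exponentiation:

n, d = 221, 55
for a in (174, 137):
    print(a, pow(a, d, n), pow(a, 2 * d, n))
# 174 47 220  -> 220 = n − 1, so 174 is a strong liar
# 137 188 205 -> neither value is 1 or n − 1, so 137 is a witness and 221 is composite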

Algorithm and running time

The algorithm can be written in pseudocode as follows:

Input: n > 3, an odd integer to be tested for primality;
Input: k, a parameter that determines the accuracy of the test
Output: composite if n is composite, otherwise probably prime
write n − 1 as 2^s·d with d odd by factoring powers of 2 from n − 1
LOOP: repeat k times:
   pick a randomly in the range [2, n − 2]
   x ← a^d mod n
   if x = 1 or x = n − 1 then do next LOOP
   for r = 1 .. s − 1
      x ← x^2 mod n
      if x = 1 then return composite
      if x = n − 1 then do next LOOP
   return composite
return probably prime
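
The pseudocode translates directly into, for example, Python; the following is an illustrative sketch (function and variable names are not from the article), relying on the built-in pow(a, d, n) for modular exponentiation:

import random

def miller_rabin(n, k):
    """Return 'composite' or 'probably prime' for an odd integer n > 3."""
    # write n − 1 as 2^s·d with d odd
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    for _ in range(k):                     # LOOP: repeat k times
        a = random.randrange(2, n - 1)     # pick a randomly in [2, n − 2]
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue                       # next LOOP
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break                      # next LOOP
        else:
            return "composite"
        # the pseudocode's early exit on x = 1 is covered by the else branch above,
        # since once x becomes 1 it can never reach n − 1 by further squaring
    return "probably prime"

print(miller_rabin(221, 10))   # composite (with overwhelming probability)
print(miller_rabin(1009, 10))  # probably prime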

Using modular exponentiation by repeated squaring, the running time of this algorithm is O(k log^3 n), where k is the number of different values of a we test; thus this is an efficient, polynomial-time algorithm. FFT-based multiplication can push the running time down to O(k log^2 n · log log n · log log log n) = Õ(k log^2 n).

In the case that the algorithm returns "composite" because x = 1, it has also discovered that 2^r·d is (an odd multiple of) the order of a, a fact which can (as in Shor's algorithm) be used to factorize n, since n then divides a^(2^r·d) − 1 = (a^(2^(r−1)·d) − 1)(a^(2^(r−1)·d) + 1) but not either factor by itself. The reason Miller–Rabin does not yield a probabilistic factorization algorithm is that if a^(n−1) ≢ 1 (mod n) (i.e., n is not a pseudoprime to base a) then no such information is obtained about the period of a, and the second "return composite" is taken.
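
As a sketch of this observation (the function name and the example root are illustrative, not from the article): once a nontrivial square root of 1 mod n is known, a factor of n can be read off with a gcd:

from math import gcd

def split_from_root(x_prev, n):
    """Given x_prev^2 ≡ 1 (mod n) with x_prev ≢ ±1 (mod n), return a nontrivial factorization."""
    f = gcd(x_prev - 1, n)   # n divides (x_prev − 1)(x_prev + 1) but neither factor alone
    return f, n // f

# 103^2 = 10609 ≡ 1 (mod 221), and 103 is neither 1 nor 220
print(split_from_root(103, 221))   # (17, 13)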

Accuracy of the test

The more bases a we test, the better the accuracy of the test. It can be shown that for any odd composite n, at least ¾ of the bases a are witnesses for the compositeness of n.[2][3] The Miller–Rabin test is strictly stronger than the Solovay–Strassen primality test in the sense that for every composite n, the set of strong liars for n is a subset of the set of Euler liars for n, and for many n, the subset is proper. If n is composite then the Miller–Rabin primality test declares n probably prime with a probability at most 4^−k. On the other hand, the Solovay–Strassen primality test declares n probably prime with a probability at most 2^−k.
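
For illustration (this sketch is not part of the article; n = 65 is an arbitrary small composite), both sets of liars can be enumerated by brute force, using the standard binary algorithm for the Jacobi symbol:

from math import gcd

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via the standard binary algorithm."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def strong_liars(n):
    """Bases a in [2, n − 2] for which the composite n is a strong probable prime."""
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    liars = set()
    for a in range(2, n - 1):
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            liars.add(a)
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                liars.add(a)
                break
    return liars

def euler_liars(n):
    """Bases a in [2, n − 2] with gcd(a, n) = 1 and a^((n−1)/2) ≡ (a/n) (mod n)."""
    return {a for a in range(2, n - 1)
            if gcd(a, n) == 1 and pow(a, (n - 1) // 2, n) == jacobi(a, n) % n}

n = 65
print(sorted(strong_liars(n)))            # [8, 18, 47, 57]
print(strong_liars(n) <= euler_liars(n))  # True: every strong liar is an Euler liar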

On average the probability that a composite number is declared probably prime is significantly smaller than 4^−k. Damgård, Landrock and Pomerance[4] compute some explicit bounds. Such bounds can, for example, be used to generate primes; however, they should not be used to verify primes with unknown origin, since in cryptographic applications an adversary might try to send you a pseudoprime in a place where a prime number is required. In such cases, only the error bound of 4^−k can be relied upon.

Deterministic variants of the test

The Miller–Rabin algorithm can be made deterministic by trying all possible a below a certain limit. The problem in general is to set the limit so that the test is still reliable.

If the tested number n is composite, the strong liars a coprime to n are contained in a proper subgroup of the group (Z/nZ)*, which means that if we test all a from a set which generates (Z/nZ)*, one of them must be a witness for the compositeness of n. Assuming the truth of the generalized Riemann hypothesis (GRH), it is known that the group is generated by its elements smaller than O((log n)^2), which was already noted by Miller.[1] The constant involved in the Big O notation was reduced to 2 by Eric Bach.[5] This leads to the following conditional primality testing algorithm:

Input: n > 1, an odd integer to test for primality.
Output: composite if n is composite, otherwise prime
write n − 1 as 2^s·d by factoring powers of 2 from n − 1
repeat for all a in the range [2, min(n − 2, ⌊2(ln n)^2⌋)]:
    if a^d mod n ≠ 1 and a^(2^r·d) mod n ≠ n − 1 for all r in 0 .. s − 1
    then return composite
return prime
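
Under the same assumption, the pseudocode above could be rendered in Python roughly as follows (an illustrative sketch, not a reference implementation; the bound ⌊2(ln n)^2⌋ follows Bach's result cited above):

from math import log

def miller_rabin_grh(n):
    """Deterministic test for odd n > 3; correctness of the base bound assumes GRH."""
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    for a in range(2, min(n - 2, int(2 * log(n) ** 2)) + 1):
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return "composite"
    return "prime"

print(miller_rabin_grh(221), miller_rabin_grh(1009))   # composite prime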

The running time of the algorithm is Õ((log n)^4). The full power of the generalized Riemann hypothesis is not needed to ensure the correctness of the test: as we deal with subgroups of even index, it suffices to assume the validity of GRH for quadratic Dirichlet characters.[3]

This algorithm is not used in practice, as it is much slower than the randomized version of the Miller–Rabin test. For theoretical purposes, it was superseded by the AKS primality test, which does not rely on unproven assumptions.

When the number n to be tested is small, trying all a < 2(ln n)^2 is not necessary, as much smaller sets of potential witnesses are known to suffice. For example, Pomerance, Selfridge and Wagstaff[6] and Jaeschke[7] have verified that

  • if n < 1,373,653, it is enough to test a = 2 and 3;
  • if n < 9,080,191, it is enough to test a = 31 and 73;
  • if n < 4,759,123,141, it is enough to test a = 2, 7, and 61;
  • if n < 2,152,302,898,747, it is enough to test a = 2, 3, 5, 7, and 11;
  • if n < 3,474,749,660,383, it is enough to test a = 2, 3, 5, 7, 11, and 13;
  • if n < 341,550,071,728,321, it is enough to test a = 2, 3, 5, 7, 11, 13, and 17.

(sequence A014233 in the OEIS)

See The Primes Page[8] and Zhang and Tang[9] for other criteria of this sort. These results give very fast deterministic primality tests for numbers in the appropriate range, without any assumptions.
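
For instance, the bounds above can be combined into a deterministic test for n below 341,550,071,728,321 along the following lines (an illustrative Python sketch, not taken from the cited papers; the initial trial division by the bases themselves is an added convenience):

def is_prime_det(n):
    """Deterministic primality test for n < 341,550,071,728,321 using the base sets above."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17):
        if n % p == 0:
            return n == p
    if n >= 341550071728321:
        raise ValueError("n out of range for these base sets")
    if n < 1373653:
        bases = (2, 3)
    elif n < 9080191:
        bases = (31, 73)
    elif n < 4759123141:
        bases = (2, 7, 61)
    elif n < 2152302898747:
        bases = (2, 3, 5, 7, 11)
    elif n < 3474749660383:
        bases = (2, 3, 5, 7, 11, 13)
    else:
        bases = (2, 3, 5, 7, 11, 13, 17)
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    for a in bases:
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

# 3215031751 = 151 · 751 · 28351 is composite but a strong pseudoprime to bases 2, 3, 5, 7
print(is_prime_det(221), is_prime_det(1009), is_prime_det(3215031751))   # False True False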

There is a small list of potential witnesses for every possible input size (at most n values for n-bit numbers), which can be observed by noting that BPP is contained in P/poly. However, no finite set of bases is sufficient for all composite numbers. Alford, Granville, and Pomerance have shown that there exist infinitely many composite numbers n whose smallest compositeness witness is at least (ln n)^(1/(3 ln ln ln n)).[10] They also argue heuristically that the smallest number w such that every composite number below n has a compositeness witness less than w should be of order (log n · log log n).

Notes

  1. ^ a b Miller, Gary L. (1976), "Riemann's Hypothesis and Tests for Primality", Journal of Computer and System Sciences, 13 (3): 300–317, doi:10.1145/800116.803773
  2. ^ a b Rabin, Michael O. (1980), "Probabilistic algorithm for testing primality", Journal of Number Theory, 12 (1): 128–138, doi:10.1016/0022-314X(80)90084-0
  3. ^ a b Schoof, René, "Four primality testing algorithms", Algorithmic Number Theory: Lattices, Number Fields, Curves and Cryptography (PDF), Cambridge University Press, ISBN 0521808545
  4. ^ Damgård, I.; Landrock, P.; Pomerance, C. (1993), "Average case error estimates for the strong probable prime test" (PDF), Mathematics of Computation, 61 (203): 177–194
  5. ^ Bach, Eric (1990), "Explicit bounds for primality testing and related problems", Mathematics of Computation, 55 (191): 355–380, doi:10.2307/2008811
  6. ^ Pomerance, C.; Selfridge, J. L.; Wagstaff, S. S., Jr. (1980), "The pseudoprimes to 25·10^9", Mathematics of Computation, 35 (151): 1003–1026, doi:10.2307/2006210
  7. ^ Jaeschke, Gerhard (1993), "On strong pseudoprimes to several bases", Mathematics of Computation, 61 (204): 915–926, doi:10.2307/2153262
  8. ^ The Primes Page
  9. ^ Zhang, Zhenxiang; Tang, Min (2003), "Finding strong pseudoprimes to several bases. II", Mathematics of Computation, 72 (44): 2085–2097, doi:10.1090/S0025-5718-03-01545-X
  10. ^ Alford, W. R.; Granville, A.; Pomerance, C. (1994), "On the difficulty of finding reliable witnesses", Lecture Notes in Computer Science, 877, Springer-Verlag: 1–16, doi:10.1007/3-540-58691-1_36, http://www.math.dartmouth.edu/~carlp/PDF/reliable.pdf