Sieve of Eratosthenes
In mathematics, the sieve of Eratosthenes (Greek: κόσκινον Ἐρατοσθένους), one of a number of prime number sieves, is a simple, ancient algorithm for finding all prime numbers up to any given limit. It does so by iteratively marking as composite (i.e., not prime) the multiples of each prime, starting with the multiples of 2.
The multiples of a given prime are generated as a sequence of numbers starting from that prime, with constant difference between them that is equal to that prime. This is the sieve's key distinction from using trial division to sequentially test each candidate number for divisibility by each prime.
The sieve of Eratosthenes is one of the most efficient ways to find all of the smaller primes. It is named after Eratosthenes of Cyrene, a Greek mathematician; although none of his works has survived, the sieve was described and attributed to Eratosthenes in the Introduction to Arithmetic by Nicomachus.
To find all the prime numbers less than or equal to a given integer n by Eratosthenes' method:
- Create a list of consecutive integers from 2 through n: (2, 3, 4, ..., n).
- Initially, let p equal 2, the first prime number.
- Starting from p, enumerate its multiples by counting to n in increments of p, and mark them in the list (these will be 2p, 3p, 4p, ...; p itself should not be marked).
- Find the first number greater than p in the list that is not marked. If there is no such number, stop. Otherwise, let p now equal this new number (which is the next prime), and repeat from step 3.
When the algorithm terminates, all the numbers in the list that are not marked are prime.
The main idea here is that every value for p is prime, because we have already marked all the multiples of the numbers less than p. Note that some of the numbers being marked may have already been marked earlier (e.g., 15 will be marked both for 3 and 5).
As a refinement, it is sufficient to mark the numbers in step 3 starting from p², as all the smaller multiples of p will have already been marked at that point. This means that the algorithm is allowed to terminate in step 4 when p² is greater than n.
Another refinement is to initially list odd numbers only, (3, 5, ..., n), and count in increments of 2p in step 3, thus marking only odd multiples of p. This actually appears in the original algorithm. This can be generalized with wheel factorization, forming the initial list only from numbers coprime with the first few primes and not just from odds, i.e., numbers coprime with 2.
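The odds-only refinement can be sketched as follows (an illustrative Python sketch, not part of the original description; the function name and indexing scheme are the author's own). Index i of the array represents the odd value 2i + 3, so memory and work are both roughly halved:

```python
def primes_odds_only(n):
    """Sieve of Eratosthenes over odd numbers only: index i represents
    the odd value 2*i + 3, halving memory and work versus the full list."""
    if n < 2:
        return []
    sieve_len = (n - 1) // 2              # count of odd numbers in 3..n
    is_prime = [True] * sieve_len
    i = 0
    while True:
        p = 2 * i + 3
        if p * p > n:                     # refinement: stop once p*p > n
            break
        if is_prime[i]:
            # start marking at p*p; stepping by p in index space visits
            # the odd multiples p*p, p*p + 2p, p*p + 4p, ... only
            for j in range((p * p - 3) // 2, sieve_len, p):
                is_prime[j] = False
        i += 1
    return [2] + [2 * i + 3 for i in range(sieve_len) if is_prime[i]]
```

Note that 2 is simply prepended to the result, since it is the only even prime and never needs to participate in the sieving.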
As the sieving range gets larger, it becomes necessary to change the implementation to sieve per page segment, for two reasons. First, memory: only the base primes up to the square root of the maximum limit of the current page need to be stored for use on succeeding pages. Second, CPU cache performance: memory access times vary from about one CPU clock cycle for the L1 cache to over a hundred clock cycles for main RAM once cache sizes are exceeded, at which point the one-big-array algorithm becomes bound by memory access speed.
An incremental formulation of the sieve generates primes indefinitely (i.e., without an upper bound) by interleaving the generation of primes with the generation of their multiples (so that primes can be found in gaps between the multiples), where the multiples of each prime p are generated directly, by counting up from the square of the prime in increments of p (or 2p for odd primes). The generation must be initiated only when the prime's square is reached, to avoid adverse effects on efficiency.
Incremental versions of the sieve are always slower and take more memory than the non-incremental versions, at best by a constant factor. They are slower because they must store the future composite culls, one element per base prime less than the square root of the range, in a structure such as a binary tree map, a priority queue (PQ), or a hash table. With a map or PQ there is an additional log(range) factor in the computational complexity; a hash table adds no such factor, but hash tables are usually a constant factor slower per operation. Either way, these structures are a large constant factor slower to process than the usual implementation using an array, often a bit-packed array with each bit representing one prime candidate number. The array representation also takes much less memory for a page-segmented version of the sieve than the stored "future composites" representation required for the incremental sieve: the former requires only a few bits per base prime, whereas the latter requires a full record holding at least the current base prime and its current future cull position, tens of bits in total and many times more for some representations.
Trial division can be used to produce primes by filtering out the composites found by testing each candidate number for divisibility by its preceding primes. It is often confused with the sieve of Eratosthenes, although the latter directly generates the composites instead of testing for them. Trial division has worse theoretical complexity than that of the sieve of Eratosthenes in generating ranges of primes.
When testing each candidate number, the optimal trial division algorithm uses just those prime numbers not exceeding its square root. The widely known 1975 functional code by David Turner is often presented as an example of the sieve of Eratosthenes but is actually a sub-optimal trial division algorithm.
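The optimal trial division strategy described here can be sketched in Python (an illustrative sketch; the function name is the author's own). The sub-optimal Turner-style version would omit the square-root cutoff and test each candidate against every preceding prime:

```python
def primes_trial_division(limit):
    """Optimal trial division: test each candidate n only against the
    primes not exceeding sqrt(n). (Turner's sub-optimal version instead
    tests against all preceding primes, which is asymptotically worse.)"""
    primes = []
    for n in range(2, limit + 1):
        is_prime = True
        for p in primes:
            if p * p > n:          # no prime factor <= sqrt(n): n is prime
                break
            if n % p == 0:         # found a divisor: n is composite
                is_prime = False
                break
        if is_prime:
            primes.append(n)
    return primes
```

Note the contrast with the sieve: this tests each candidate for divisibility, whereas the sieve generates the composites directly by counting.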
To find all the prime numbers less than or equal to 30, proceed as follows.
First generate a list of integers from 2 to 30:
2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
First number in the list is 2; cross out every 2nd number in the list after it by counting up from 2 in increments of 2 (these will be all the multiples of 2 in the list), leaving:
2 3 5 7 9 11 13 15 17 19 21 23 25 27 29
Next number in the list after 2 is 3; cross out every 3rd number in the list after it by counting up from 3 in increments of 3 (these will be all the multiples of 3 in the list), leaving:
2 3 5 7 11 13 17 19 23 25 29
Next number not yet crossed out in the list after 3 is 5; cross out every 5th number in the list after it by counting up from 5 in increments of 5 (i.e. all the multiples of 5), leaving:
2 3 5 7 11 13 17 19 23 29
Next number not yet crossed out in the list after 5 is 7; the next step would be to cross out every 7th number in the list after 7, but they are all already crossed out at this point, as these numbers (14, 21, 28) are also multiples of smaller primes. Because 7 × 7 = 49 is greater than 30, the algorithm stops here. The numbers left not crossed out in the list at this point are all the prime numbers below 30:
2 3 5 7 11 13 17 19 23 29
Input: an integer n > 1.
Output: all prime numbers from 2 through n.

    let A be an array of Boolean values, indexed by integers 2 to n, initially all set to true.

    for i = 2, 3, 4, ..., not exceeding √n:
        if A[i] is true:
            for j = i², i² + i, i² + 2i, ..., not exceeding n:
                set A[j] := false

    return all i such that A[i] is true.
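This pseudocode translates directly into, for example, Python (an illustrative rendering; the function name is the author's own):

```python
import math

def sieve_of_eratosthenes(n):
    """Return all primes <= n: mark the multiples of each prime i,
    starting at i*i, for every i not exceeding sqrt(n)."""
    is_prime = [True] * (n + 1)        # index k records whether k is (still) prime
    is_prime[0] = is_prime[1] = False
    for i in range(2, math.isqrt(n) + 1):
        if is_prime[i]:
            # mark i*i, i*i + i, i*i + 2i, ..., up to n
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return [i for i in range(2, n + 1) if is_prime[i]]
```

Running this with n = 30 reproduces the worked example above.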
Large ranges may not fit entirely in memory. In these cases it is necessary to use a segmented sieve where only portions of the range are sieved at a time.
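A segmented sieve can be sketched as follows (an illustrative Python sketch, with an arbitrary default page size; names and structure are the author's own). Only the base primes up to √n plus one page of the range are held in memory at a time:

```python
import math

def segmented_sieve(n, segment_size=32768):
    """Segmented Sieve of Eratosthenes: sieve [2, n] one fixed-size page
    at a time, holding in memory only the base primes up to sqrt(n) plus
    the current page (page size would normally be tuned to the CPU cache)."""
    limit = math.isqrt(n)
    # Base primes up to sqrt(n), found with a small ordinary sieve.
    base = [True] * (limit + 1)
    base_primes = []
    for i in range(2, limit + 1):
        if base[i]:
            base_primes.append(i)
            for j in range(i * i, limit + 1, i):
                base[j] = False
    primes = list(base_primes)
    low = limit + 1
    while low <= n:
        high = min(low + segment_size - 1, n)
        page = [True] * (high - low + 1)
        for p in base_primes:
            # first multiple of p inside [low, high], and at least p*p
            start = max(p * p, ((low + p - 1) // p) * p)
            for m in range(start, high + 1, p):
                page[m - low] = False
        primes.extend(low + i for i, flag in enumerate(page) if flag)
        low = high + 1
    return primes
```

Each base prime's starting offset is recomputed per page, so pages can be processed one after another without ever allocating the full range.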
For ranges with upper limit n so large that the sieving primes below √n required by the page-segmented sieve of Eratosthenes cannot fit in memory, a slower but much more space-efficient sieve like the sieve of Sorenson can be used instead.
The work performed by this algorithm is almost entirely the operations to cull the composite number representations, which for the basic non-optimized version is the sum of the range divided by each of the primes up to that range, or

    n Σ_{p prime ≤ n} (1/p),

where n is the sieving range in this and all further analysis.
The optimization of starting at the square of each prime and only culling for primes less than the square root changes the "n" in the above sum to √n (i.e., n^(1/2)), and not culling until the square means that the sum of the base primes, each minus two, is subtracted from the operations. As the sum of the first x primes is approximately (x² ln x)/2, and the prime number theorem says that the number of primes up to n is approximately n/ln n, the sum of the primes up to n is approximately n²/(2 ln n); therefore the sum of the base primes up to √n is n/ln n, which expressed as a factor of n is 1/ln n. The extra offset of two per base prime is 2π(√n), where π is the prime-counting function, in this case approximately 2√n/ln √n = 4√n/ln n; expressed as a factor of n, as are the other terms, this is 4/(√n ln n). Combining all of this, the expression for the number of optimized operations without wheel factorization is

    n (ln ln √n + M − 1/ln n + 4/(√n ln n)),

where M ≈ 0.26150 is the Mertens constant (the limit of Σ 1/p − ln ln n over the primes).
For the wheel factorization cases, there is a further offset for the operations not done, namely n Σ_{p ≤ x} (1/p), where x is the highest wheel prime; and a constant factor is applied to the whole expression, namely the fraction of prime candidates remaining as compared to the repeating wheel circumference. The wheel circumference is Π_{p ≤ x} p, and it can easily be determined that this wheel factor is Π_{p ≤ x} (p − 1)/p, as (x − 1)/x is the fraction of remaining candidates for the highest wheel prime x, and each succeeding smaller prime leaves its corresponding fraction of the previous combined fraction.
Combining all of the above analysis, the total number of operations for a sieving range up to n, including wheel factorization for primes up to x, is approximately

    n (Π_{p ≤ x} (p − 1)/p) (ln ln √n + M − 1/ln n + 4/(√n ln n) − Σ_{p ≤ x} 1/p).
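This operation-count model can be evaluated numerically; the following sketch (assuming the model takes the form n · Π_{p ≤ x}(p − 1)/p · (ln ln √n + M − 1/ln n + 4/(√n ln n) − Σ_{p ≤ x} 1/p), with M the Mertens constant; the function name is illustrative) reproduces the predicted fractions in the comparison table:

```python
import math

M = 0.2614972128  # Mertens constant

def predicted_ops_fraction(n, wheel_primes=()):
    """Predicted composite-cull operations as a fraction of the sieving
    range n, for a wheel built from the given small primes (empty tuple
    means no wheel; (2,) means the odds-only sieve)."""
    frac = (math.log(math.log(math.sqrt(n))) + M
            - 1 / math.log(n)
            + 4 / (math.sqrt(n) * math.log(n)))
    frac -= sum(1 / p for p in wheel_primes)       # culls skipped by the wheel
    for p in wheel_primes:
        frac *= (p - 1) / p                        # fraction of candidates left
    return frac
```

For example, `predicted_ops_fraction(10**6)` gives about 2.1220 and `predicted_ops_fraction(10**6, (2, 3, 5))` about 0.2903, matching the corresponding predicted entries in the table for n = 10^6.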
To show that the above expression is a good approximation to the number of composite-number cull operations performed by the algorithm, the following table compares the actually measured number of operations for a practical implementation of the sieve of Eratosthenes against the number predicted from the above expression, both expressed as a fraction of the range (measured/predicted, rounded to four decimal places), for different sieve ranges and wheel factorizations. Note that the last column is a maximum practical wheel.
n      no wheel       odds           2/3/5 wheel    2/3/5/7 wheel  2/3/5/7/11/13/17/19 wheel
10^3   1.4090/1.3740  0.4510/0.4370  0.1000/0.0900  0.0580/0.0450  0.0060/------
10^4   1.6962/1.6844  0.5972/0.5922  0.1764/0.1736  0.1176/0.1161  0.0473/0.0391
10^5   1.9299/1.9261  0.7148/0.7130  0.2388/0.2381  0.1719/0.1714  0.0799/0.0805
10^6   2.1218/2.1220  0.8109/0.8110  0.2902/0.2903  0.2161/0.2162  0.1134/0.1140
10^7   2.2850/2.2863  0.8925/0.8932  0.3337/0.3341  0.2534/0.2538  0.1419/0.1421
10^8   2.4257/2.4276  0.9628/0.9638  0.3713/0.3718  0.2856/0.2860  0.1660/0.1662
The above table shows that the above expression is a very good approximation to the total number of culling operations for sieve ranges of about a hundred thousand (10^5) and above.
As can be seen from the above by removing all constant offsets and constant factors and ignoring terms that trend to zero as n approaches infinity, the time complexity of calculating all primes below n in the random access machine model is O(n log log n) operations, a direct consequence of the fact that the prime harmonic series asymptotically approaches log log n. It has an exponential time complexity with regard to input size, though, which makes it a pseudo-polynomial algorithm. The basic algorithm requires O(n) memory.
The normally implemented page-segmented version has the same operational complexity of O(n log log n) as the non-segmented version, but reduces the space requirements to the very minimal size of the segment page plus the memory required to store the base primes less than the square root of the range, used to cull composites from successive page segments of size O(√n).
To show that the above approximation in complexity is not very accurate even for ranges about as large as is practical, the following table gives, for various ranges and wheel factorizations, the estimated number of operations as a fraction of the range (rounded to four places), the ratio implied by this estimate for a factor-of-ten increase in range, and the corresponding ratio implied by the log log n estimate (each cell gives estimate/estimate ratio/log log ratio). The combo column uses a frequently employed combination: pre-culling by the maximum wheel factorization, but only the 2/3/5/7 wheel for the wheel factor, as the full factorization is difficult to implement efficiently for page segmentation.
n       no wheel            odds                2/3/5 wheel         2/3/5/7 wheel       combo wheel         2/3/5/7/11/13/17/19 wheel
10^6    2.122/1.102/1.075   0.811/1.137/1.075   0.2903/1.22/1.075   0.2162/1.261/1.075  0.1524/1.416/1.075  0.114/1.416/1.075
10^7    2.2863/1.077/1.059  0.8932/1.101/1.059  0.3341/1.151/1.059  0.2537/1.174/1.059  0.1899/1.246/1.059  0.1421/1.246/1.059
10^8    2.4276/1.062/1.048  0.9638/1.079/1.048  0.3718/1.113/1.048  0.286/1.127/1.048   0.2222/1.17/1.048   0.1662/1.17/1.048
10^9    2.5514/1.051/1.04   1.0257/1.064/1.04   0.4048/1.089/1.04   0.3143/1.099/1.04   0.2505/1.127/1.04   0.1874/1.127/1.04
10^10   2.6615/1.043/1.035  1.0808/1.054/1.035  0.4342/1.073/1.035  0.3395/1.08/1.035   0.2757/1.101/1.035  0.2063/1.101/1.035
10^11   2.7608/1.037/1.03   1.1304/1.046/1.03   0.4607/1.061/1.03   0.3622/1.067/1.03   0.2984/1.082/1.03   0.2232/1.082/1.03
10^12   2.8511/1.033/1.027  1.1755/1.04/1.027   0.4847/1.052/1.027  0.3828/1.057/1.027  0.319/1.069/1.027   0.2387/1.069/1.027
10^13   2.9339/1.029/1.024  1.217/1.035/1.024   0.5068/1.046/1.024  0.4018/1.049/1.024  0.3379/1.059/1.024  0.2528/1.059/1.024
10^14   3.0104/1.026/1.022  1.2552/1.031/1.022  0.5272/1.04/1.022   0.4193/1.044/1.022  0.3554/1.052/1.022  0.2659/1.052/1.022
10^15   3.0815/1.024/1.02   1.2907/1.028/1.02   0.5462/1.036/1.02   0.4355/1.039/1.02   0.3717/1.046/1.02   0.2781/1.046/1.02
10^16   3.1478/1.022/1.018  1.3239/1.026/1.018  0.5639/1.032/1.018  0.4507/1.035/1.018  0.3868/1.041/1.018  0.2894/1.041/1.018
The above shows that the log log n estimate is not very accurate even for maximum practical ranges of about 10^16.
One should also note, using the calculated ratios of operations to sieve range, that the ratio must be less than about 0.2587 for the sieve of Eratosthenes to be faster than the often-compared sieve of Atkin, if the operations take approximately the same time each in CPU clock cycles, which is a reasonable assumption for the one-huge-bit-array algorithm. Under that assumption, the sieve of Atkin is only faster than the maximally wheel-factorized sieve of Eratosthenes for ranges over 10^13, at which point the huge sieve buffer array would need about a terabyte (10^12 bytes) of RAM even with bit packing; in other words, it is not very practical. An analysis of the page-segmented versions shows that the assumption of equal time per operation does not hold between the two algorithms, and that the sieve of Atkin's operations get slower much faster than the sieve of Eratosthenes' with increasing range. Thus for practical purposes, the maximally wheel-factorized sieve of Eratosthenes is faster than the sieve of Atkin, although the sieve of Atkin is faster for lesser amounts of wheel factorization.
Euler's proof of the zeta product formula contains a version of the sieve of Eratosthenes in which each composite number is eliminated exactly once. It, too, starts with a list of numbers from 2 to n in order. On each step the first element is identified as the next prime and the results of multiplying this prime with each element of the list are marked in the list for subsequent deletion. The initial element and the marked elements are then removed from the working sequence, and the process is repeated:
(3) 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79 ...
(5) 7 11 13 17 19 23 25 29 31 35 37 41 43 47 49 53 55 59 61 65 67 71 73 77 79 ...
(7) 11 13 17 19 23 29 31 37 41 43 47 49 53 59 61 67 71 73 77 79 ...
(11) 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 ...
[...]
Here the example is shown starting from odds, after the first step of the algorithm. Thus, on the kth step all the remaining multiples of the kth prime are removed from the list, which will thereafter contain only numbers coprime with the first k primes (cf., wheel factorization), so that the list will start with the next prime, and all the numbers in it below the square of its first element will be prime too.
Thus, when generating a bounded sequence of primes, when the next identified prime exceeds the square root of the upper limit, all the remaining numbers in the list are prime. In the example given above that is achieved on identifying 11 as next prime, giving a list of all primes less than or equal to 80.
Note that numbers that will be discarded by a step are still used while marking the multiples in that step, e.g., for the multiples of 3 it is 3 · 3 = 9, 3 · 5 = 15, 3 · 7 = 21, 3 · 9 = 27, ..., 3 · 15 = 45, ..., so care must be taken dealing with this.
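An array-based formulation in the same spirit, often called the linear sieve, also eliminates each composite exactly once, as the product of its smallest prime factor and its largest proper divisor (an illustrative Python sketch; this is a commonly used modern variant, not Euler's original list-based procedure):

```python
def euler_sieve(n):
    """Linear ('Euler') sieve: every composite is crossed out exactly
    once, as (smallest prime factor) * (largest proper divisor),
    giving O(n) total work."""
    smallest_factor = [0] * (n + 1)      # 0 means "not yet crossed out"
    primes = []
    for i in range(2, n + 1):
        if smallest_factor[i] == 0:      # i was never crossed out: prime
            smallest_factor[i] = i
            primes.append(i)
        for p in primes:
            # only cross out i*p while p <= smallest factor of i, so each
            # composite is reached through its smallest prime factor only
            if p > smallest_factor[i] or i * p > n:
                break
            smallest_factor[i * p] = p
    return primes
```

The guard `p > smallest_factor[i]` is what ensures the "exactly once" property that distinguishes this family of sieves from the ordinary sieve of Eratosthenes.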
- Horsley, Rev. Samuel, F. R. S., "Κόσκινον Ερατοσθένους or, The Sieve of Eratosthenes. Being an account of his method of finding all the Prime Numbers," Philosophical Transactions (1683–1775), Vol. 62. (1772), pp. 327–347.
- O'Neill, Melissa E., "The Genuine Sieve of Eratosthenes", Journal of Functional Programming, published online by Cambridge University Press 9 October 2008, doi:10.1017/S0956796808007004, pp. 10, 11 (contains two incremental sieves in Haskell: a priority-queue-based one by O'Neill and a list-based one by Richard Bird).
- Nicomachus, Introduction to Arithmetic, I, 13. 
- J. C. Morehead, "Extension of the Sieve of Eratosthenes to arithmetical progressions and applications", Annals of Mathematics, Second Series 10:2 (1909), pp. 88–104.
- Clocksin, William F., Christopher S. Mellish, Programming in Prolog, 1984, p. 170. ISBN 3-540-11046-1.
- Colin Runciman, "Functional Pearl: Lazy wheel sieves and spirals of primes", Journal of Functional Programming, Volume 7, Issue 2, March 1997.
- Turner, David A., SASL Language Manual, Tech. rept. CS/75/1, Department of Computational Science, University of St. Andrews, 1975. (The sieve one-liner: sieve (p:xs) = p : sieve [x | x <- xs, rem x p > 0]; primes = sieve [2..])
- Sedgewick, Robert (1992). Algorithms in C++. Addison-Wesley. ISBN 0-201-51059-6, p. 16.
- Jonathan Sorenson, An Introduction to Prime Number Sieves, Computer Sciences Technical Report #909, Department of Computer Sciences University of Wisconsin-Madison, January 2, 1990 (the use of optimization of starting from squares, and thus using only the numbers whose square is below the upper limit, is shown).
- Crandall & Pomerance, Prime Numbers: A Computational Perspective, second edition, Springer: 2005, pp. 121–24.
- J. Sorenson, The pseudosquares prime sieve, Proceedings of the 7th International Symposium on Algorithmic Number Theory. (ANTS-VII, 2006).
- E. Bach and J. Shallit, §2.7 in Algorithmic Number Theory, Vol. 1: Efficient Algorithms, MIT Press, Cambridge, MA, 1996.
- Pritchard, Paul, "Linear prime-number sieves: a family tree," Sci. Comput. Programming 9:1 (1987), pp. 17–35.
- Paul Pritchard, A sublinear additive sieve for finding prime numbers, Communications of the ACM 24 (1981), 18–23. MR 82c:10011
- Paul Pritchard, Explaining the wheel sieve, Acta Informatica 17 (1982), 477–485. MR 84g:10015
- Paul Pritchard, Fast compact prime number sieves (among others), Journal of Algorithms 4 (1983), 332–344. MR 85h:11080
- Eratosthenes, sieve of at Encyclopaedia of Mathematics
- Sieve of Eratosthenes by George Beck, Wolfram Demonstrations Project.
- Sieve of Eratosthenes in Haskell
- Sieve of Eratosthenes algorithm illustrated and explained. Java and C++ implementations.
- A related sieve written in x86 assembly language
- A highly optimized Sieve of Eratosthenes in C
- A parallel implementation in C#
- SieveOfEratosthenesInManyProgrammingLanguages c2 wiki page
- The Art of Prime Sieving Sieve of Eratosthenes in C from 1998 with nice features and algorithmic tricks explained.