Divergence of the sum of the reciprocals of the primes

[Figure: The sum of the reciprocals of the primes growing without bound. The x-axis is in log scale, showing that the divergence is very slow. The purple curve is a lower bound that also diverges.]

The sum of the reciprocals of all prime numbers diverges; that is:

\sum_{p\text{ prime }}\frac1p = \frac12 + \frac13 + \frac15 + \frac17 + \frac1{11} + \frac1{13} + \frac1{17} + \cdots = \infty

This was proved by Leonhard Euler in 1737, and strengthens Euclid's 3rd-century-BC result that there are infinitely many prime numbers.

There are a variety of proofs of Euler's result, including a lower bound for the partial sums stating that

\sum_{\scriptstyle p\text{ prime }\atop \scriptstyle p\le n}\frac1p \ge \log \log (n+1) - \log\frac{\pi^2}6

for all natural numbers n. The double natural logarithm suggests that the divergence is very slow, which is indeed the case; see the Meissel–Mertens constant.
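The lower bound can be checked numerically. A minimal Python sketch (the helper names and cutoffs below are ours, chosen for illustration):

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def prime_reciprocal_sum(n):
    """Partial sum of 1/p over primes p <= n."""
    return sum(1.0 / p for p in primes_up_to(n))

# The partial sums stay above log log(n+1) - log(pi^2/6).
for n in (10, 100, 10_000, 1_000_000):
    lower = math.log(math.log(n + 1)) - math.log(math.pi**2 / 6)
    print(n, round(prime_reciprocal_sum(n), 4), round(lower, 4))
```

Both columns grow, but extremely slowly, which is what the double logarithm predicts.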

The harmonic series

First, we describe how Euler originally discovered the result. He was considering the harmonic series


 \sum_{n=1}^\infty \frac{1}{n} = 
  1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots

He had already used the following "product formula" to show the existence of infinitely many primes.


 \sum_{n=1}^\infty \frac{1}{n} = \prod_{p} \frac{1}{1-p^{-1}}
  = \prod_{p} \left( 1+\frac{1}{p}+\frac{1}{p^2}+\cdots \right)

(Here, the product is taken over all primes p; in the following, a sum or product taken over p always represents a sum or product taken over a specified set of primes, unless noted otherwise.)

Such infinite products are today called Euler products. The product above is a reflection of the fundamental theorem of arithmetic. Of course, the above "equation" is not rigorous as written, since both sides diverge; it should be read as a formal identity. This type of formal manipulation was common at the time, when mathematicians were still experimenting with the new tools of calculus.
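With a finite set of primes everything converges, and the mechanism behind the product formula can be seen directly: by unique factorization, the sum of 1/n over all 3-smooth numbers n = 2^a·3^b equals the product over p in {2, 3} of 1/(1 − 1/p) = 3. A sketch (the exponent cutoff of 60 is an arbitrary truncation):

```python
from itertools import product

# Euler product over the finite prime set {2, 3}: 2 * (3/2) = 3.
euler_product = (1 / (1 - 1/2)) * (1 / (1 - 1/3))

# Sum of reciprocals of 3-smooth numbers, truncated at exponent 60;
# each pair (a, b) contributes 1/(2^a * 3^b), and each 3-smooth n
# arises from exactly one pair, by the fundamental theorem of arithmetic.
smooth_sum = sum(1 / (2**a * 3**b) for a, b in product(range(60), repeat=2))

print(euler_product, round(smooth_sum, 12))  # both essentially 3
```

The same expansion over *all* primes is what formally identifies the product with the full harmonic series.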

Euler noted that if there were only a finite number of primes, then the product on the right would clearly converge, contradicting the divergence of the harmonic series. (In modern language, we now say that the existence of infinitely many primes is reflected by the fact that the Riemann zeta function has a simple pole at s = 1.)

Proofs

First

Euler took the above product formula and proceeded to make a sequence of audacious leaps of logic. First, he took the natural logarithm of each side; then he used the Taylor series expansion of ln(1 − x) as well as the sum of a geometric series:


\begin{align}
 \ln \left( \sum_{n=1}^\infty \frac{1}{n}\right) & {} = \ln\left( \prod_p \frac{1}{1-p^{-1}}\right)
  = -\sum_p \ln \left( 1-\frac{1}{p}\right) \\
 & {} = \sum_p \left( \frac{1}{p} + \frac{1}{2p^2} + \frac{1}{3p^3} + \cdots \right) \\
 & {} = \left( \sum_{p}\frac{1}{p} \right) + \sum_p \frac{1}{p^2} \left( \frac{1}{2} + \frac{1}{3p} + \frac{1}{4p^2} + \cdots \right) \\
 & {} < \left( \sum_p \frac{1}{p} \right) + \sum_p \frac{1}{p^2} \left( 1 + \frac{1}{p} + \frac{1}{p^2} + \cdots \right) \\
 & {} = \left( \sum_p \frac{1}{p} \right) + \left( \sum_p \frac{1}{p(p-1)} \right) \\
 & {} = \left( \sum_p \frac{1}{p} \right) + C
\end{align}

for a fixed constant C < 1. Since the sum of the reciprocals of the first n positive integers is asymptotic to ln(n) (i.e., their ratio approaches one as n approaches infinity), Euler then concluded

\frac{1}{2} + \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \frac{1}{11} + \cdots = \ln \ln (+ \infty)

It is almost certain that Euler meant that the sum of the reciprocals of the primes less than n is asymptotic to ln(ln(n)) as n approaches infinity. It turns out this is indeed the case; Euler had reached a correct result by questionable means.

A variation


Taking the logarithm of the product formula as before,

\log \left( \sum_{n=1}^\infty \frac{1}{n}\right) = \log \left( \prod_p \frac{1}{1-p^{-1}}\right) = \sum_p \log \left( \frac{p}{p-1}\right) = \sum_p \log\left(1+\frac{1}{p-1}\right)

Since

 e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots

we have \scriptstyle e^x \,\geq\, 1 \,+\, x for \scriptstyle x \,\geq\, 0, and taking logarithms gives \scriptstyle x \,\geq\, \log(1 \,+\, x). So

 \sum_p \frac{1}{p - 1}\geq\sum_p \log\left(1+\frac{1}{p-1}\right) = \log \left( \sum_{n=1}^\infty \frac{1}{n}\right)

Hence \scriptstyle \sum_p \frac{1}{p - 1} diverges. But \scriptstyle \frac{1}{p_i - 1} \,\leq\, \frac{1}{p_{i - 1}} for every \scriptstyle i \,\geq\, 2, where \scriptstyle p_i is the ith prime, because \scriptstyle {p_i-1} \,\geq\, {p_{i-1}}.

Hence, by the comparison test, \scriptstyle \sum_p \frac{1}{p} diverges.
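The comparison 1/(p_i − 1) ≤ 1/p_{i−1} is easy to check on an initial segment of the primes; a small sketch (naive trial division suffices here):

```python
def is_prime(n):
    """Naive trial-division primality test, fine for small n."""
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

primes = [n for n in range(2, 10_000) if is_prime(n)]

# p_i - 1 >= p_{i-1} for every i >= 2 (0-based: primes[i] - 1 >= primes[i-1]),
# hence 1/(p_i - 1) <= 1/p_{i-1} term by term.
ok = all(primes[i] - 1 >= primes[i - 1] for i in range(1, len(primes)))
print(ok)  # True
```

The inequality is in fact immediate, since consecutive primes satisfy p_{i−1} < p_i.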

Second

The following proof by contradiction is due to Paul Erdős.

Let pi denote the ith prime number. Assume that the sum of the reciprocals of the primes converges; i.e.,

\sum_{i=1}^\infty {1\over p_i} < \infty

Then there exists a smallest positive integer k such that

\sum_{i=k+1}^\infty {1\over p_i} < {1\over 2} \qquad(1)

For a positive integer x let Mx denote the set of those n in {1, 2, . . ., x} which are not divisible by any prime greater than pk. We will now derive an upper and a lower estimate for the number |Mx| of elements in Mx. For large x, these bounds will turn out to be contradictory.

Upper estimate

Every n in Mx can be written as n = r m2 with positive integers m and r, where r is square-free. Since only the k primes p1, …, pk can show up (with exponent 1) in the prime factorization of r, there are at most 2k different possibilities for r. Furthermore, there are at most √x possible values for m. This gives us the upper estimate

|M_x| \le  2^k\sqrt{x} \qquad(2)
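The bound (2) can be verified exhaustively for small parameters; a sketch (the choices of k and x below are arbitrary):

```python
def is_prime(n):
    """Naive trial-division primality test, fine for small n."""
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

def M(x, k):
    """The set of n in {1, ..., x} with no prime factor greater than p_k."""
    primes = [p for p in range(2, x + 1) if is_prime(p)]
    small = primes[:k]  # p_1, ..., p_k
    def smooth(n):
        for p in small:
            while n % p == 0:
                n //= p
        return n == 1  # n had no other prime factors
    return [n for n in range(1, x + 1) if smooth(n)]

for k, x in ((3, 1_000), (4, 5_000)):
    size, bound = len(M(x, k)), 2**k * x**0.5
    print(k, x, size, round(bound, 1))  # size <= bound
```

The count stays well under 2^k √x, reflecting the square-free-times-square decomposition.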

Lower estimate

The remaining x − |Mx| numbers in the set difference {1, 2, . . ., x} \ Mx are all divisible by a prime greater than pk. Let Ni,x denote the set of those n in {1, 2, . . ., x} which are divisible by the ith prime pi. Then

\{1,2,\ldots,x\}\setminus M_{x}=\bigcup_{i=k+1}^\infty N_{i,x}

Since the number of integers in Ni,x is at most x/pi (actually zero for pi > x), we get

x-|M_x| \le \sum_{i=k+1}^\infty |N_{i,x}|< \sum_{i=k+1}^\infty {x\over p_i}

Using (1), this implies

{x\over 2} < |M_x| \qquad(3)

Contradiction

When x ≥ 2^{2k+2}, the estimates (2) and (3) cannot both hold, because \tfrac{x}{2}\ge 2^k\sqrt{x}.
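What fails for the actual primes is assumption (1): for any fixed k, the tail sum over the primes beyond p_k eventually exceeds 1/2 (indeed, it grows without bound). A sketch (the cutoffs are arbitrary):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_up_to(100_000)
k = 5  # tail starts after p_5 = 11
tail = sum(1 / p for p in primes[k:])
print(round(tail, 3))  # well above 1/2 already at this cutoff
```

So no k as in (1) exists, which is exactly what the contradiction establishes.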

Third

Here is another proof that actually gives a lower estimate for the partial sums; in particular, it shows that these sums grow at least as fast as log(log(n)). The proof is an adaptation of the product expansion idea of Euler. In the following, a sum or product taken over p always represents a sum or product taken over a specified set of primes.

The proof rests upon the following four inequalities:

  • Every positive integer i can be uniquely expressed as the product of a square-free integer and a square. This gives the inequality
 \sum_{i=1}^n{\frac{1}{i}} \le \prod_{p \le n}{\left(1 + \frac{1}{p}\right)}\sum_{k=1}^n{\frac{1}{k^2}}
where for every i between 1 and n the (expanded) product corresponds to the square-free part of i and the sum corresponds to the square part of i (see fundamental theorem of arithmetic).
  • The lower estimate for the natural logarithm

   \log(n+1)
 = \int_1^{n+1}\frac{dx}x
 = \sum_{i=1}^n\underbrace{\int_i^{i+1}\frac{dx}x}_{{} \,<\, 1/i}
 < \sum_{i=1}^n{\frac{1}{i}}

  • The upper estimate 1 + x < exp(x) for x > 0, applied with x = 1/p:
 1 + \frac{1}{p} < \exp\left(\frac{1}{p}\right)
  • Let n ≥ 2. The upper bound (using a telescoping sum) for the partial sums (convergence is all we really need)

   \sum_{k=1}^n{\frac{1}{k^2}}
 < 1 + \sum_{k=2}^n\underbrace{\left(\frac1{k - \frac{1}{2}} - \frac1{k + \frac{1}{2}}\right)}_{=\, 1/(k^2 - 1/4) \,>\, 1/k^2}
 = 1 + \frac23 - \frac1{n + \frac{1}{2}} < \frac53
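The telescoping bound can be confirmed numerically; a quick sketch (the cutoff n is arbitrary):

```python
n = 10_000
# Partial sum of 1/k^2, which converges to pi^2/6 = 1.6449...
partial = sum(1 / k**2 for k in range(1, n + 1))
# Closed form of the telescoping majorant: 1 + 2/3 - 1/(n + 1/2).
telescoped = 1 + 2/3 - 1/(n + 0.5)

print(round(partial, 6), round(telescoped, 6))  # partial < telescoped < 5/3
```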

Combining all these inequalities, we see that

\begin{align}
    {} & {} \log(n+1) \\
     < &\sum_{i=1}^n\frac{1}{i} \\
   \le &\prod_{p \le n}{\left(1 + \frac{1}{p}\right)}\sum_{k=1}^n{\frac{1}{k^2}} \\
     < &\frac53\prod_{p \le n}{\exp\left(\frac{1}{p}\right)} \\
     = &\frac53\exp\left(\sum_{p \le n}{\frac{1}{p}}\right)
\end{align}

Dividing through by 5/3 and taking the natural logarithm of both sides gives

\log \log(n + 1) - \log\frac53 < \sum_{p \le n}{\frac{1}{p}}

as desired. 

Using

\sum_{k=1}^\infty{\frac{1}{k^2}} = \frac{\pi^2}6

(see the Basel problem), the above constant ln(5/3) = 0.51082… can be improved to ln(π²/6) = 0.4977…; in fact it turns out that


 \lim_{n \to \infty } \left(
  \sum_{p \leq n} \frac{1}{p} - \log \log(n)
 \right) = M

where M = 0.261497… is the Meissel–Mertens constant (somewhat analogous to the much more famous Euler–Mascheroni constant).

Fourth

From Dusart's inequality, we get

 p_n <  n \log n + n \log \log n \quad\mbox{for } n \ge 6

Then

\begin{align}
 \sum_{n=1}^\infty \frac1{ p_n}
  &\ge \sum_{n=6}^\infty \frac1{ p_n} \\
  &\ge \sum_{n=6}^\infty \frac1{ n \log n + n \log \log n} \\
  &\ge \sum_{n=6}^\infty \frac1{2n \log n} \\
  &= \infty
\end{align}

by the integral test for convergence. This shows that the series on the left diverges.
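Both steps — Dusart's inequality for moderate n, and the slow growth of the divergent comparison series Σ 1/(2n log n) — can be spot-checked; a sketch (the ranges are arbitrary):

```python
import math

def is_prime(n):
    """Naive trial-division primality test, fine for small n."""
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

# First 1000 primes, p_1 = 2 (1-based indexing as in the statement).
primes = []
candidate = 2
while len(primes) < 1000:
    if is_prime(candidate):
        primes.append(candidate)
    candidate += 1

# Dusart: p_n < n log n + n log log n for n >= 6.
dusart_ok = all(
    primes[n - 1] < n * math.log(n) + n * math.log(math.log(n))
    for n in range(6, 1001)
)
print(dusart_ok)  # True

# Partial sums of the comparison series grow without bound, but very slowly.
for N in (10**2, 10**4, 10**6):
    s = sum(1 / (2 * n * math.log(n)) for n in range(6, N + 1))
    print(N, round(s, 3))
```

The partial sums behave like (1/2) log log N, matching the double-logarithmic growth seen in the other proofs.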
