
Hardy–Ramanujan–Littlewood circle method

From Wikipedia, the free encyclopedia


In mathematics, the Hardy-Littlewood circle method is one of the most frequently used techniques of analytic number theory. It is named for G. H. Hardy and J. E. Littlewood, who developed it in a series of papers on Waring's problem. The initial germ of the idea is usually attributed to the work of Hardy with Srinivasa Ramanujan a few years earlier, in 1916 and 1917, on the asymptotics of the partition function. It was taken up by many other researchers, including Harold Davenport and I. M. Vinogradov, who modified the formulation slightly (moving from complex analysis to exponential sums), without changing the broad lines. Hundreds of papers followed, and as of 2005 the method still yields results.

The circle in question was initially the unit circle in the complex plane. Suppose the problem has been formulated so that, for a sequence of complex numbers

a_n, n = 0, 1, 2, 3, ...

we want some asymptotic information of the type

a_n ~ F(n)

where we have some heuristic reason to guess the form taken by F, we write

f(z) = Σ a_n z^n,

a power series generating function. The interesting cases are those in which f then has radius of convergence equal to 1, and we suppose that the problem as posed has been modified to present this situation.
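For example, in the partition problem studied by Hardy and Ramanujan one takes a_n = p(n), the number of partitions of n, so that

f(z) = Σ p(n) z^n = Π_(k ≥ 1) (1 − z^k)^(−1),

a product whose radius of convergence is exactly 1, and the asymptotic sought is of the shape

p(n) ~ F(n) = exp(π·√(2n/3)) / (4n·√3).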

From that formulation, it is direct from the residue theorem that

I_n = ∫ f(z) z^(−(n + 1)) dz = 2πi·a_n

for integers n ≥ 0, where the integral is taken over the circle of radius r and centred at 0, for any r with

0 < r < 1.

That is, this is a contour integral, the contour being the circle just described, traversed once anticlockwise. So much is relatively elementary. We would like to take r = 1 directly, i.e. to use the unit circle contour. In the complex-analysis formulation this is problematic, since the values of f are not in general defined there.
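For r strictly inside the unit circle the coefficient formula can be checked numerically. The following sketch is purely illustrative (the test function, the radius r = 0.5 and the sample count are arbitrary choices); it recovers a_n by sampling f on the circle |z| = r:

import cmath

def coefficient(f, n, r=0.5, samples=4096):
    # Approximate a_n = (1/2πi) ∫ f(z) z^(-(n+1)) dz over the circle |z| = r.
    # With z = r·exp(iθ) we have dz = iz·dθ, so the integrand reduces to f(z)·z^(-n)·dθ/(2π).
    total = 0j
    for k in range(samples):
        z = r * cmath.exp(2j * cmath.pi * k / samples)
        total += f(z) * z ** (-n)
    return total / samples

# Illustrative check: f(z) = 1/(1 - z)^2 has a_n = n + 1.
f = lambda z: 1 / (1 - z) ** 2
print([round(coefficient(f, n).real) for n in range(6)])   # [1, 2, 3, 4, 5, 6]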

The problem addressed by the circle method is to force the issue of taking r = 1, by a good understanding of the nature of the singularities f exhibits on the unit circle. The fundamental insight is the role played by the Farey series of rational numbers, or equivalently by the roots of unity

ζ = exp(2πir/s).

Here the denominator s, assuming that r/s is in lowest terms, turns out to determine the relative importance of the singular behaviour of typical f near ζ.
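The fractions in question can be listed directly. The short sketch below is illustrative (the order N = 4 is an arbitrary choice); it enumerates the Farey fractions r/s of order N in lowest terms, together with the corresponding roots of unity:

import cmath
from fractions import Fraction

def farey(N):
    # All fractions r/s in [0, 1) with 1 <= s <= N, in lowest terms, in increasing order.
    return sorted({Fraction(r, s) for s in range(1, N + 1) for r in range(s)})

for frac in farey(4):
    zeta = cmath.exp(2j * cmath.pi * float(frac))   # the root of unity exp(2πi·r/s)
    print(frac, zeta)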

The Hardy–Littlewood circle method, in the complex-analytic formulation, can then be expressed as follows. The contributions to the evaluation of I_n, as r → 1, should be treated in two ways, traditionally called major arcs and minor arcs. We divide the roots of unity ζ into two classes, according to whether s ≤ N or s > N, where N is a function of n that is ours to choose conveniently. The integral I_n is divided up into integrals, each over some arc of the circle adjacent to ζ, of length a function of s (again, at our disposal). The arcs make up the whole circle; the sum of the integrals over the major arcs is to make up 2πi·F(n) (realistically, this will happen up to a manageable remainder term). The sum of the integrals over the minor arcs is to be replaced by an upper bound, smaller in order than F(n).
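In other words, the aim is a decomposition of the shape

I_n = Σ_(major arcs) ∫_arc f(z) z^(−(n + 1)) dz + Σ_(minor arcs) ∫_arc f(z) z^(−(n + 1)) dz,

in which the first sum contributes 2πi·F(n)·(1 + o(1)) and the second is bounded by a quantity of strictly smaller order than F(n).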

Stated as baldly as this, it is not at all clear that the method can be made to work; the insights involved run quite deep. One clear source is the theory of theta functions. In the context of Waring's problem, powers of theta functions are the generating functions for sums of squares. Their analytic behaviour is known in much more accurate detail than for the cubes, for example.
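Concretely, writing θ(z) = Σ_(m ∈ ℤ) z^(m²), the k-th power has the expansion

θ(z)^k = Σ_(n ≥ 0) r_k(n) z^n,

where r_k(n) is the number of representations of n as a sum of k squares (counting signs and the order of the summands).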

[Image: Q-euler.jpeg (false-colour diagram referred to below)]

It is the case, as the false-colour diagram indicates, that for a theta function the 'most important' point on the boundary circle is at z = 1; followed by z = −1, and then the two complex cube roots of unity at 7 o'clock and 11 o'clock. After that it is the fourth roots of unity i and −i that matter most. While nothing in this guarantees that the analytical method will work, it does explain the rationale of using a Farey series-type criterion on roots of unity.
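The ordering by denominator can also be checked numerically. The sketch below assumes the generating function behind the diagram is the Euler product Π (1 − z^k)^(−1) for partitions (an assumption, suggested by the context; only the qualitative ordering matters here), and the radius 0.9 and the truncation of the product are arbitrary choices. It compares |f| at points of fixed radius lying in the directions of roots of unity of increasing denominator s:

import cmath

def partition_gen(z, terms=200):
    # Truncated Euler product for the partition generating function: Π (1 - z^k)^(-1).
    prod = 1 + 0j
    for k in range(1, terms + 1):
        prod /= 1 - z ** k
    return prod

r = 0.9
for num, den in [(0, 1), (1, 2), (1, 3), (1, 4)]:   # roots of unity with denominators s = 1, 2, 3, 4
    z = r * cmath.exp(2j * cmath.pi * num / den)
    print(f"s = {den}: |f(z)| = {abs(partition_gen(z)):.3e}")

The printed magnitudes decrease as s increases, in line with the picture described above.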

In the Waring problem case, one takes a sufficiently high power of the generating function, to force the situation in which the singularities, organised into the so-called singular series, do predominate. The less wasteful the estimates used on the rest, the finer the results. As Bryan Birch has put it, the method is inherently wasteful. That does not apply to the partition function case, which signalled the possibility that in a favourable situation the losses from estimates could be controlled.
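In the Waring case, writing r_(k,s)(n) for the number of representations of n as a sum of s k-th powers, the outcome for s sufficiently large in terms of k is an asymptotic of the shape

r_(k,s)(n) ~ (Γ(1 + 1/k)^s / Γ(s/k)) · 𝔖(n) · n^(s/k − 1),

in which the singular series 𝔖(n) collects the contributions of the major arcs.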

In the later exponential-sum formulation, f(z) is replaced by a finite Fourier series, and the relevant integral I_n becomes a Fourier coefficient. Essentially all this does is to discard the whole 'tail' of the generating function, allowing the radius r in the limiting operation to be set directly to the value 1.
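For Waring-type problems this takes the following shape: with e(x) = exp(2πix) and

S(α) = Σ_(1 ≤ m ≤ P) e(α·m^k),     P = n^(1/k),

the number of representations of n as a sum of s k-th powers of integers in [1, P] is

∫_0^1 S(α)^s e(−nα) dα,

and the dissection into major and minor arcs is now carried out on the interval [0, 1] rather than on the circle.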

Refinements of the method have allowed results to be proved about the solutions of homogeneous Diophantine equations, as long as the number of variables k is large relative to the degree d. This turns out to be a contribution to the Hasse principle, capable of yielding quantitative information. If d is fixed and k is small, other methods are required, and indeed the Hasse principle tends to fail.