Abel's test

In mathematics, Abel's test (also known as Abel's criterion) is a method of testing for the convergence of an infinite series. The test is named after mathematician Niels Henrik Abel, who proved it in 1826.[1] There are two slightly different versions of Abel's test – one is used with series of real numbers, and the other is used with power series in complex analysis. Abel's uniform convergence test is a criterion for the uniform convergence of a series of functions dependent on parameters.

Abel's test in real analysis


Suppose the following statements are true:

  1. $\sum a_n$ is a convergent series,
  2. $(b_n)$ is a monotone sequence, and
  3. $(b_n)$ is bounded.

Then $\sum a_n b_n$ is also convergent.
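As a concrete illustration (a hypothetical example, not part of the original statement), take $\sum a_n$ to be the alternating harmonic series and $b_n = n/(n+1)$, which is monotone increasing and bounded by 1; Abel's test then guarantees that $\sum a_n b_n$ converges. The following sketch checks this numerically:

```python
import math

# Abel's test (real case), illustrated on a hypothetical example:
#   a_n = (-1)**(n+1) / n   -> sum(a_n) is the alternating harmonic series (convergent)
#   b_n = n / (n + 1)       -> monotone increasing and bounded by 1
# Abel's test then guarantees that sum(a_n * b_n) converges.

def partial_sum(N):
    s = 0.0
    for n in range(1, N + 1):
        a_n = (-1) ** (n + 1) / n
        b_n = n / (n + 1)
        s += a_n * b_n
    return s

# In this example a_n * b_n = (-1)**(n+1) / (n + 1), whose sum is known
# to equal 1 - ln 2, so the partial sums should approach that value.
print(partial_sum(100000), 1 - math.log(2))
```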

It is important to understand that this test is mainly pertinent and useful in the context of series that are not absolutely convergent. For absolutely convergent series, this theorem, albeit true, is almost self-evident.[citation needed]

This theorem can be proved directly using summation by parts.
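The summation-by-parts identity underlying the proof can be stated as follows (a standard formulation, not quoted from Abel's paper):

```latex
% Summation by parts (Abel's transformation), with A_n = \sum_{k=1}^{n} a_k:
\sum_{k=1}^{n} a_k b_k \;=\; A_n b_n \;-\; \sum_{k=1}^{n-1} A_k \,\bigl(b_{k+1} - b_k\bigr).
```

Since $A_n$ is convergent (hence bounded) and $\sum_{k} |b_{k+1} - b_k|$ converges because $(b_n)$ is monotone and bounded, the right-hand side converges as $n \to \infty$.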

Abel's test in complex analysis


A closely related convergence test, also known as Abel's test, can often be used to establish the convergence of a power series on the boundary of its circle of convergence. Specifically, Abel's test states that if a sequence $(a_n)$ of positive real numbers is decreasing monotonically (or at least that for all n greater than some natural number m, we have $a_{n+1} \le a_n$) with

$$\lim_{n \to \infty} a_n = 0,$$

then the power series

$$f(z) = \sum_{n=0}^{\infty} a_n z^n$$

converges everywhere on the closed unit circle, except when z = 1. Abel's test cannot be applied when z = 1, so convergence at that single point must be investigated separately. Notice that Abel's test implies in particular that the radius of convergence is at least 1. It can also be applied to a power series with radius of convergence R ≠ 1 by a simple change of variables ζ = z/R.[2] Notice that Abel's test is a generalization of the Leibniz criterion by taking z = −1.
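A classical illustration (an example added here, not part of the statement) is $a_n = 1/n$: the sequence decreases to 0, so $\sum_{n \ge 1} z^n/n$ converges at every point of the unit circle except z = 1, where it is the divergent harmonic series. The sketch below checks this numerically at z = i, using the standard identity $\sum_{n \ge 1} z^n/n = -\log(1-z)$ for $|z| \le 1$, $z \ne 1$ purely as a reference value:

```python
import cmath

# Abel's test (complex case) illustrated with a_n = 1/n, which decreases to 0.
# The test guarantees that sum(z**n / n) converges for every |z| = 1, z != 1;
# at z = 1 the series is the harmonic series and diverges.

def partial_sum(z, N):
    return sum(z ** n / n for n in range(1, N + 1))

z = 1j  # a point on the unit circle with z != 1
# For |z| <= 1, z != 1 the full sum equals -log(1 - z);
# the partial sums approach that value.
print(partial_sum(z, 100000), -cmath.log(1 - z))
```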

Proof of Abel's test: Suppose that z is a point on the unit circle, z ≠ 1. For each $n \ge 1$, we define

$$f_n(z) := \sum_{k=0}^{n} a_k z^k.$$

By multiplying this function by (1 − z), we obtain

$$(1-z) f_n(z) = \sum_{k=0}^{n} a_k (1-z) z^k = \sum_{k=0}^{n} a_k z^k - \sum_{k=0}^{n} a_k z^{k+1} = a_0 - a_n z^{n+1} + \sum_{k=1}^{n} (a_k - a_{k-1}) z^k.$$

The first summand is constant, the second converges uniformly to zero (since by assumption the sequence $(a_n)$ converges to zero). It only remains to show that the series $\sum_{k=1}^{\infty} (a_k - a_{k-1}) z^k$ converges. We will show this by showing that it even converges absolutely:

$$\sum_{k=1}^{\infty} \left| a_k - a_{k-1} \right| \left| z \right|^k \le \sum_{k=1}^{\infty} (a_{k-1} - a_k) = a_0 - \lim_{k \to \infty} a_k = a_0,$$

where the last sum is a convergent telescoping sum. The absolute value disappears because the sequence $(a_n)$ is decreasing by assumption.

Hence, the sequence $\bigl((1-z) f_n(z)\bigr)$ converges (even uniformly) on the closed unit disc. If $z \ne 1$, we may divide by (1 − z) and obtain the result.

Another way to obtain the result is to apply Dirichlet's test. Indeed, for $z \ne 1$, $|z| = 1$, the bound

$$\left| \sum_{k=0}^{n} z^k \right| = \left| \frac{z^{n+1} - 1}{z - 1} \right| \le \frac{2}{|z - 1|}$$

holds, hence the assumptions of Dirichlet's test are fulfilled.

Abel's uniform convergence test


Abel's uniform convergence test is a criterion for the uniform convergence of a series of functions, or an improper integral of functions, dependent on parameters. It is related to Abel's test for the convergence of an ordinary series of real numbers, and the proof relies on the same technique of summation by parts.

The test is as follows. Let $\{g_n\}$ be a uniformly bounded sequence of real-valued continuous functions on a set E such that $g_{n+1}(x) \le g_n(x)$ for all $x \in E$ and positive integers n, and let $\{f_n\}$ be a sequence of real-valued functions such that the series $\sum f_n(x)$ converges uniformly on E. Then $\sum f_n(x) g_n(x)$ converges uniformly on E.
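A standard illustration (a hypothetical example added here) takes the constant functions $f_n(x) = (-1)^{n+1}/n$, whose series converges uniformly trivially, and $g_n(x) = x^n$ on E = [0, 1], which is uniformly bounded by 1 and decreasing in n. The test then gives uniform convergence of $\sum (-1)^{n+1} x^n / n$ on [0, 1], whose sum is $\log(1+x)$. The sketch below checks that the sup-norm error over a grid is small:

```python
import math

# Abel's uniform convergence test, illustrated on a hypothetical example:
#   f_n(x) = (-1)**(n+1) / n   (constant functions; their series converges uniformly)
#   g_n(x) = x**n on [0, 1]    (uniformly bounded by 1, decreasing in n)
# The test guarantees that sum(f_n * g_n) = sum((-1)**(n+1) * x**n / n)
# converges uniformly on [0, 1]; the limit function is log(1 + x).

def partial_sum(x, N):
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, N + 1))

# The sup-norm error over a grid shrinks with N, consistent with uniformity:
# the alternating series estimate bounds it by 1 / (N + 1) for every x in [0, 1].
grid = [i / 100 for i in range(101)]
max_err = max(abs(partial_sum(x, 10000) - math.log(1 + x)) for x in grid)
print(max_err)
```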

Notes

  1. ^ Abel, Niels Henrik (1826). "Untersuchungen über die Reihe u.s.w.". J. Reine Angew. Math. 1: 311–339.
  2. ^ (Moretti, 1964, p. 91)
