Table of nascent delta functions
One often imposes symmetry or positivity on the nascent delta functions. Positivity is important because, if a function has integral 1 and is non-negative (i.e., is a probability distribution), then convolving with it does not result in overshoot or undershoot, as the output is a convex combination of the input values, and thus falls between the maximum and minimum of the input function.
Some nascent delta functions are:
{\displaystyle \eta _{\epsilon }(x)={\frac {1}{\epsilon {\sqrt {\pi }}}}\mathrm {e} ^{-x^{2}/\epsilon ^{2}}}
Limit of a normal distribution
{\displaystyle \eta _{\epsilon }(x)={\frac {1}{\pi }}{\frac {\epsilon }{\epsilon ^{2}+x^{2}}}={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\mathrm {e} ^{\mathrm {i} kx-|\epsilon k|}\;dk}
Limit of a Cauchy distribution
{\displaystyle \eta _{\epsilon }(x)={\frac {e^{-|x/\epsilon |}}{2\epsilon }}={\frac {1}{2\pi }}\int _{-\infty }^{\infty }{\frac {e^{ikx}}{1+\epsilon ^{2}k^{2}}}\,dk}
Cauchy φ (see note below)
{\displaystyle \eta _{\epsilon }(x)={\frac {{\textrm {rect}}(x/\epsilon )}{\epsilon }}={\begin{cases}{\frac {1}{\epsilon }},&-{\frac {\epsilon }{2}}<x<{\frac {\epsilon }{2}}\\0,&{\mbox{otherwise}}\end{cases}}={\frac {1}{2\pi }}\int _{-\infty }^{\infty }{\textrm {sinc}}\left({\frac {\epsilon k}{2\pi }}\right)e^{ikx}\,dk}
Limit of a rectangular function[1]
{\displaystyle \eta _{\epsilon }(x)={\frac {1}{\pi x}}\sin \left({\frac {x}{\epsilon }}\right)={\frac {1}{2\pi }}\int _{-1/\epsilon }^{1/\epsilon }\cos(kx)\;dk}
Limit of the sinc function (or Fourier transform of the rectangular function; see note below)
{\displaystyle \eta _{\epsilon }(x)=\partial _{x}{\frac {1}{1+\mathrm {e} ^{-x/\epsilon }}}=-\partial _{x}{\frac {1}{1+\mathrm {e} ^{x/\epsilon }}}}
Derivative of the sigmoid (or Fermi–Dirac) function
{\displaystyle \eta _{\epsilon }(x)={\frac {\epsilon }{\pi x^{2}}}\sin ^{2}\left({\frac {x}{\epsilon }}\right)}
Limit of the sinc-squared function
{\displaystyle \eta _{\epsilon }(x)={\frac {1}{\epsilon }}\operatorname {Ai} \left({\frac {x}{\epsilon }}\right)}
Limit of the Airy function
{\displaystyle \eta _{\epsilon }(x)={\frac {1}{\epsilon }}J_{1/\epsilon }\left({\frac {x+1}{\epsilon }}\right)}
Limit of a Bessel function
{\displaystyle \eta _{\epsilon }(x)={\begin{cases}{\frac {2}{\pi \epsilon ^{2}}}{\sqrt {\epsilon ^{2}-x^{2}}},&-\epsilon <x<\epsilon \\0,&{\mbox{otherwise}}\end{cases}}}
Limit of the Wigner semicircle distribution. (This nascent delta function has the advantage that, for all nonzero ε, it has compact support and is continuous. It is not smooth, however, and thus not a mollifier.)
{\displaystyle \eta _{\epsilon }(x)={\frac {\Psi (x/\epsilon )}{\int _{-\infty }^{\infty }\Psi (x/\epsilon )\,dx}}\qquad \Psi (x)={\begin{cases}e^{-1/(1-|x|^{2})}&{\text{ if }}|x|<1\\0&{\text{ if }}|x|\geq 1\end{cases}}}
This is a mollifier: Ψ is a bump function (smooth and compactly supported), and the nascent delta function is obtained by rescaling Ψ and normalizing it so that it has integral 1.
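The behavior these definitions describe can be sketched numerically. The following snippet (illustrative code; the function names are my own, not from any source) checks that the Gaussian kernel from the first row of the table has unit mass, and that integrating it against a smooth function picks out the function's value at 0 as ε shrinks:

```python
import math

def eta_gauss(x, eps):
    """Gaussian nascent delta from the first row of the table above."""
    return math.exp(-(x / eps) ** 2) / (eps * math.sqrt(math.pi))

def trapezoid(f, a, b, n=20001):
    """Composite trapezoidal rule on [a, b] with n sample points."""
    h = (b - a) / (n - 1)
    inner = sum(f(a + i * h) for i in range(1, n - 1))
    return h * (0.5 * (f(a) + f(b)) + inner)

# Each eta_eps has unit mass ...
mass = trapezoid(lambda x: eta_gauss(x, 0.1), -5.0, 5.0)
# ... and, as eps shrinks, integrating eta_eps against a smooth f picks out f(0).
val = trapezoid(lambda x: eta_gauss(x, 0.01) * math.cos(x), -5.0, 5.0)
print(mass, val)
```

Both numbers come out close to 1, consistent with the defining property of a nascent delta function.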
Note: If η(ε, x) is a nascent delta function which is a probability density on the whole real line (i.e., is non-negative everywhere and integrates to 1), then another nascent delta function η_φ(ε, x) can be built from its characteristic function as follows:
{\displaystyle \eta _{\varphi }(\epsilon ,x)={\frac {1}{2\pi }}~{\frac {\varphi (1/\epsilon ,x)}{\eta (1/\epsilon ,0)}}}
where
{\displaystyle \varphi (\epsilon ,k)=\int _{-\infty }^{\infty }\eta (\epsilon ,x)e^{-ikx}\,dx}
is the characteristic function of the nascent delta function η(ε, x ). This result is related to the localization property of the continuous Fourier transform .
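For the Gaussian entry in the table this construction can be checked in closed form: its characteristic function is φ(ε, k) = e^{−ε²k²/4} and η(ε, 0) = 1/(ε√π). The sketch below (illustrative code, not from the source) builds η_φ from these formulas and verifies numerically that it again has integral 1:

```python
import math

EPS = 0.2

def eta(eps, x):
    """Gaussian nascent delta from the table above."""
    return math.exp(-(x / eps) ** 2) / (eps * math.sqrt(math.pi))

def phi(eps, k):
    """Characteristic function of the Gaussian eta (computed in closed form)."""
    return math.exp(-(eps * k) ** 2 / 4)

def eta_phi(eps, x):
    """eta_phi(eps, x) = (1/(2*pi)) * phi(1/eps, x) / eta(1/eps, 0)."""
    return phi(1 / eps, x) / (2 * math.pi * eta(1 / eps, 0))

# Trapezoidal check that eta_phi has integral ~1, so it is again a nascent delta.
n, a, b = 20001, -10.0, 10.0
h = (b - a) / (n - 1)
mass = h * (sum(eta_phi(EPS, a + i * h) for i in range(n))
            - 0.5 * (eta_phi(EPS, a) + eta_phi(EPS, b)))
print(mass)
```

For the Gaussian, η_φ is itself a Gaussian, consistent with the self-duality of the Gaussian under the Fourier transform.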
There are also series and integral representations of the Dirac delta function in terms of special functions, such as integrals of products of Airy functions, of Bessel functions, of Coulomb wave functions and of parabolic cylinder functions, and also series of products of orthogonal polynomials.[2]
Jacobi elliptic functions pq[u, m] as functions of {x, y} and {φ, dn}:

p\q | c | s | n | d
c | 1 | x/y = cot(φ) | x/r = cos(φ) | x = cos(φ)/dn
s | y/x = tan(φ) | 1 | y/r = sin(φ) | y = sin(φ)/dn
n | r/x = sec(φ) | r/y = csc(φ) | 1 | r = 1/dn
d | 1/x = dn·sec(φ) | 1/y = dn·csc(φ) | 1/r = dn | 1
Extensions for L = 1
As seen in the previous example, the ratio test may be inconclusive when the limit of the ratio is 1. Extensions to the ratio test, however, sometimes allow one to deal with this case.[3][4][5][6][7][8][9][10][11]
In all the tests below we assume that Σa_n is a sum with positive a_n. These tests may also be applied to any series with a finite number of negative terms. Any such series may be written as:
{\displaystyle \sum _{n=1}^{\infty }a_{n}=\sum _{n=1}^{N}a_{n}+\sum _{n=N+1}^{\infty }a_{n}}
where aN is the highest-indexed negative term. The first expression on the right is a partial sum which will be finite, and so the convergence of the entire series will be determined by the convergence properties of the second expression on the right, which may be re-indexed to form a series of all positive terms beginning at n =1.
Each test defines a test parameter (ρ_n) and specifies the behavior of that parameter needed to establish convergence or divergence. For each test, a weaker form exists that instead places restrictions on lim_{n→∞} ρ_n.
All of the tests have regions in which they fail to describe the convergence properties of Σa_n. In fact, no convergence test can fully describe the convergence properties of the series.[3][9] This is because if Σa_n is convergent, a second convergent series Σb_n can be found which converges more slowly: i.e., it has the property that lim_{n→∞} (b_n/a_n) = ∞. Furthermore, if Σa_n is divergent, a second divergent series Σb_n can be found which diverges more slowly: i.e., it has the property that lim_{n→∞} (b_n/a_n) = 0. Convergence tests essentially use the comparison test on some particular family of a_n, and fail for sequences which converge or diverge more slowly.
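The meaning of "converges more slowly" can be illustrated numerically (an illustration only, not a proof; the two series here are my own examples):

```python
# Both sum(2^-n) and sum(1/n^2) converge, but 1/n^2 converges more slowly:
# the ratio b_n / a_n grows without bound as n increases.
a = lambda n: 2.0 ** (-n)   # terms of a rapidly convergent series
b = lambda n: 1.0 / n ** 2  # terms of a more slowly convergent series
ratios = [b(n) / a(n) for n in (10, 20, 30)]
print(ratios)
```

A ratio-type test calibrated against the geometric family 2^{-n} is therefore blind to the slower family 1/n², which is exactly why the refinements below are needed.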
The De Morgan hierarchy
Augustus De Morgan proposed a hierarchy of ratio-type tests.[8]
The ratio test parameters ({\displaystyle \rho _{n}}) below all generally involve terms of the form {\displaystyle D_{n}a_{n}/a_{n+1}-D_{n+1}}. This term may be multiplied by {\displaystyle a_{n+1}/a_{n}} to yield {\displaystyle D_{n}-D_{n+1}a_{n+1}/a_{n}}. This term can replace the former in the definition of the test parameters, and the conclusions drawn will remain the same. Accordingly, no distinction will be drawn between references which use one or the other form of the test parameter.
1. d'Alembert's ratio test
The first test in the De Morgan hierarchy is the ratio test as described above.
2. Raabe's test
This extension is due to Joseph Ludwig Raabe. Define:
{\displaystyle \rho _{n}\equiv n\left({\frac {a_{n}}{a_{n+1}}}-1\right)}
The series will:[6][9][8]
Converge when there exists a c > 1 such that {\displaystyle \rho _{n}\geq c} for all n > N.
Diverge when {\displaystyle \rho _{n}\leq 1} for all n > N.
Otherwise, the test is inconclusive.
Defining {\displaystyle \rho =\lim _{n\to \infty }\rho _{n}}, the limit version states that the series will:[11][12]
Converge if ρ > 1 (this includes the case ρ = ∞)
Diverge if ρ < 1.
If ρ = 1, the test is inconclusive.
When the above limit does not exist, it may be possible to use limits superior and inferior.[3] The series will:
Converge if {\displaystyle \liminf _{n\to \infty }\rho _{n}>1}
Diverge if {\displaystyle \limsup _{n\to \infty }\rho _{n}<1}
Otherwise, the test is inconclusive.
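A quick numerical sketch of Raabe's parameter on two standard series (illustrative code; the helper name is my own):

```python
def raabe_rho(a, n):
    """Raabe's test parameter rho_n = n * (a_n / a_{n+1} - 1)."""
    return n * (a(n) / a(n + 1) - 1.0)

# For a_n = 1/n^2 (convergent), rho_n = 2 + 1/n -> 2 > 1: converge.
rho_conv = raabe_rho(lambda n: 1.0 / n ** 2, 10 ** 4)
# For a_n = 1/n (divergent harmonic series), rho_n = 1 exactly for every n,
# so the "rho_n <= 1 for all n > N" branch applies: diverge.
rho_div = raabe_rho(lambda n: 1.0 / n, 10 ** 4)
print(rho_conv, rho_div)
```

Note that the harmonic series is caught by the non-limit form of the test (ρ_n ≤ 1 for all n), even though the limit ρ = 1 alone would be inconclusive.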
Proof of Raabe's test
Defining {\displaystyle \rho _{n}\equiv n\left({\frac {a_{n}}{a_{n+1}}}-1\right)}, we need not assume the limit exists; if {\displaystyle \limsup \rho _{n}<1}, then {\displaystyle \sum a_{n}} diverges, while if {\displaystyle \liminf \rho _{n}>1} the sum converges.
The proof proceeds essentially by comparison with {\displaystyle \sum 1/n^{R}}. Suppose first that {\displaystyle \limsup \rho _{n}<1}. Of course, if {\displaystyle \limsup \rho _{n}<0} then {\displaystyle a_{n+1}\geq a_{n}} for large {\displaystyle n}, so the sum diverges; assume then that {\displaystyle 0\leq \limsup \rho _{n}<1}. There exists {\displaystyle R<1} such that {\displaystyle \rho _{n}\leq R} for all {\displaystyle n\geq N}, which is to say that {\displaystyle a_{n}/a_{n+1}\leq \left(1+{\frac {R}{n}}\right)\leq e^{R/n}}. Thus {\displaystyle a_{n+1}\geq a_{n}e^{-R/n}}, which implies that {\displaystyle a_{n+1}\geq a_{N}e^{-R(1/N+\dots +1/n)}\geq ca_{N}e^{-R\log(n)}=ca_{N}/n^{R}} for {\displaystyle n\geq N}; since {\displaystyle R<1} this shows that {\displaystyle \sum a_{n}} diverges.
The proof of the other half is entirely analogous, with most of the inequalities simply reversed. We need a preliminary inequality to use in place of the simple {\displaystyle 1+t<e^{t}} that was used above. Fix {\displaystyle R} and {\displaystyle N}. Note that {\displaystyle \log \left(1+{\frac {R}{n}}\right)={\frac {R}{n}}+O\left({\frac {1}{n^{2}}}\right)}. So {\displaystyle \log \left(\left(1+{\frac {R}{N}}\right)\dots \left(1+{\frac {R}{n}}\right)\right)=R\left({\frac {1}{N}}+\dots +{\frac {1}{n}}\right)+O(1)=R\log(n)+O(1)}; hence {\displaystyle \left(1+{\frac {R}{N}}\right)\dots \left(1+{\frac {R}{n}}\right)\geq cn^{R}}.
Suppose now that {\displaystyle \liminf \rho _{n}>1}. Arguing as in the first paragraph, using the inequality established in the previous paragraph, we see that there exists {\displaystyle R>1} such that {\displaystyle a_{n+1}\leq ca_{N}n^{-R}} for {\displaystyle n\geq N}; since {\displaystyle R>1} this shows that {\displaystyle \sum a_{n}} converges.
3. Bertrand's test
This extension is due to Joseph Bertrand and Augustus De Morgan. Defining:
{\displaystyle \rho _{n}\equiv n\ln n\left({\frac {a_{n}}{a_{n+1}}}-1\right)-\ln n}
Bertrand's test[3][9] asserts that the series will:
Converge when there exists a c > 1 such that {\displaystyle \rho _{n}\geq c} for all n > N.
Diverge when {\displaystyle \rho _{n}\leq 1} for all n > N.
Otherwise, the test is inconclusive.
Defining {\displaystyle \rho =\lim _{n\to \infty }\rho _{n}}, the limit version states that the series will:
Converge if ρ > 1 (this includes the case ρ = ∞)
Diverge if ρ < 1.
If ρ = 1, the test is inconclusive.
When the above limit does not exist, it may be possible to use limits superior and inferior.[3][8][13] The series will:
Converge if {\displaystyle \liminf \rho _{n}>1}
Diverge if {\displaystyle \limsup \rho _{n}<1}
Otherwise, the test is inconclusive.
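Bertrand's test resolves series on the boundary of Raabe's test, such as a_n = 1/(n ln²n), for which Raabe's ρ_n tends to 1. A numerical sketch (illustrative code; the helper name is my own):

```python
import math

def bertrand_rho(a, n):
    """Bertrand's test parameter: n*ln(n)*(a_n/a_{n+1} - 1) - ln(n)."""
    return n * math.log(n) * (a(n) / a(n + 1) - 1.0) - math.log(n)

# For a_n = 1/(n * ln(n)^2), which converges, rho_n -> 2 > 1: converge.
a = lambda n: 1.0 / (n * math.log(n) ** 2)
rho = bertrand_rho(a, 10 ** 5)
print(rho)
```

More generally, for a_n = 1/(n ln^p n) this parameter tends to p, mirroring how Raabe's parameter tends to p for a_n = 1/n^p.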
4. Gauss's test
This extension is due to Carl Friedrich Gauss.
Assuming a_n > 0 and r > 1, if a bounded sequence B_n can be found such that for all n:[3][4][6][8][9]
{\displaystyle {\frac {a_{n}}{a_{n+1}}}=1+{\frac {\rho }{n}}+{\frac {B_{n}}{n^{r}}}}
then the series will:
Converge if {\displaystyle \rho >1}
Diverge if {\displaystyle \rho \leq 1}
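For a_n = 1/n² the Gauss decomposition can be written exactly: a_n/a_{n+1} = (n+1)²/n² = 1 + 2/n + 1/n², i.e. ρ = 2 and B_n = 1 with r = 2, so the series converges. A numerical check of this decomposition (illustrative code):

```python
# Gauss's test sketch for a_n = 1/n^2:
#   a_n/a_{n+1} = (n+1)^2 / n^2 = 1 + 2/n + 1/n^2,
# so rho = 2 and B_n = 1 (bounded) with r = 2; rho > 1 implies convergence.
a = lambda n: 1.0 / n ** 2
n = 1000
ratio = a(n) / a(n + 1)
rho = 2.0
B_n = (ratio - 1.0 - rho / n) * n ** 2  # should equal 1 up to rounding
print(ratio, B_n)
```

The point of the test is that once ρ is extracted, the bounded remainder B_n/n^r can never change the verdict.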
5. Kummer's test
This extension is due to Ernst Kummer.
Let ζ_n be an auxiliary sequence of positive constants. Define:
{\displaystyle \rho _{n}\equiv \left(\zeta _{n}{\frac {a_{n}}{a_{n+1}}}-\zeta _{n+1}\right)}
Kummer's test states that the series will:[4][5][9][10]
Converge if there exists a c > 0 such that {\displaystyle \rho _{n}\geq c} for all n > N.
Diverge if {\displaystyle \rho _{n}\leq 0} for all n > N and {\displaystyle \sum _{n=1}^{\infty }1/\zeta _{n}} diverges.
Otherwise, the test is inconclusive.
Defining {\displaystyle \rho =\lim _{n\to \infty }\rho _{n}}, the limit version states that the series will:[14][6][8]
Converge if ρ > 0
Diverge if ρ < 0 and {\displaystyle \sum _{n=1}^{\infty }1/\zeta _{n}} diverges.
If ρ = 0, the test is inconclusive.
When the above limit does not exist, it may be possible to use limits superior and inferior.[3] The series will:
Converge if {\displaystyle \liminf _{n\to \infty }\rho _{n}>0}
Diverge if {\displaystyle \limsup _{n\to \infty }\rho _{n}<0} and {\displaystyle \sum 1/\zeta _{n}} diverges.
Otherwise, the test is inconclusive.
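The auxiliary sequence ζ_n is what gives Kummer's test its reach. A minimal sketch with ζ_n = n, which as shown below corresponds to Raabe's test (illustrative code; the helper name is my own):

```python
def kummer_rho(a, zeta, n):
    """Kummer's test parameter: zeta_n * a_n/a_{n+1} - zeta_{n+1}."""
    return zeta(n) * a(n) / a(n + 1) - zeta(n + 1)

a = lambda n: 1.0 / n ** 2   # terms of a convergent series
zeta = lambda n: float(n)    # the Raabe choice of auxiliary sequence
# rho_n = (n+1)/n -> 1 > 0, so Kummer's test gives convergence.
rho = kummer_rho(a, zeta, 10 ** 4)
print(rho)
```

Choosing a more slowly growing ζ_n (e.g. n ln n) sharpens the test on series that this choice leaves unresolved.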
All of the tests in De Morgan's hierarchy except Gauss's test can easily be seen as special cases of Kummer's test:[3]
For the ratio test, let ζ_n = 1. Then:
{\displaystyle \rho _{Kummer}=\left({\frac {a_{n}}{a_{n+1}}}-1\right)=1/\rho _{Ratio}-1}
For Raabe's test, let ζ_n = n. Then:
{\displaystyle \rho _{Kummer}=\left(n{\frac {a_{n}}{a_{n+1}}}-(n+1)\right)=\rho _{Raabe}-1}
For Bertrand's test, let ζ_n = n ln(n). Then:
{\displaystyle \rho _{Kummer}=n\ln(n)\left({\frac {a_{n}}{a_{n+1}}}-1\right)-(n+1)\ln(n+1)}
Using {\displaystyle \ln(n+1)=\ln(n)+\ln(1+1/n)} and the approximation {\displaystyle \ln(1+1/n)\rightarrow 1/n} for large n (which is negligible compared to the other terms), ρ_Kummer may be written:
{\displaystyle \rho _{Kummer}=n\ln(n)\left({\frac {a_{n}}{a_{n+1}}}-1\right)-\ln(n)-1=\rho _{Bertrand}-1}
Note that for these three tests, the higher they are in the De Morgan hierarchy, the more slowly the 1/ζ_n series diverges.
Proof of Kummer's test
If {\displaystyle \rho >0}, then fix a positive number {\displaystyle \delta <\rho }. There exists a natural number {\displaystyle N} such that for every {\displaystyle n>N,} {\displaystyle \delta \leq \zeta _{n}{\frac {a_{n}}{a_{n+1}}}-\zeta _{n+1}.}
Since {\displaystyle a_{n+1}>0}, for every {\displaystyle n>N,} {\displaystyle 0\leq \delta a_{n+1}\leq \zeta _{n}a_{n}-\zeta _{n+1}a_{n+1}.}
In particular, {\displaystyle \zeta _{n+1}a_{n+1}\leq \zeta _{n}a_{n}} for all {\displaystyle n\geq N}, which means that starting from the index {\displaystyle N} the sequence {\displaystyle \zeta _{n}a_{n}>0} is monotonically decreasing and positive, which in particular implies that it is bounded below by 0. Therefore the limit {\displaystyle \lim _{n\to \infty }\zeta _{n}a_{n}=L} exists.
This implies that the positive telescoping series {\displaystyle \sum _{n=1}^{\infty }\left(\zeta _{n}a_{n}-\zeta _{n+1}a_{n+1}\right)} is convergent, and since for all {\displaystyle n>N,} {\displaystyle \delta a_{n+1}\leq \zeta _{n}a_{n}-\zeta _{n+1}a_{n+1}}, the direct comparison test for positive series shows that the series {\displaystyle \sum _{n=1}^{\infty }\delta a_{n+1}} is convergent.
On the other hand, if {\displaystyle \rho <0}, then there is an N such that {\displaystyle \zeta _{n}a_{n}} is increasing for {\displaystyle n>N}. In particular, there exists an {\displaystyle \epsilon >0} for which {\displaystyle \zeta _{n}a_{n}>\epsilon } for all {\displaystyle n>N}, and so {\displaystyle \sum _{n}a_{n}=\sum _{n}{\frac {a_{n}\zeta _{n}}{\zeta _{n}}}} diverges by comparison with {\displaystyle \sum _{n}{\frac {\epsilon }{\zeta _{n}}}}.
The second ratio test
A more refined ratio test is the second ratio test.[6][8]
For {\displaystyle a_{n}>0} define:
{\displaystyle L_{0}\equiv \lim _{n\rightarrow \infty }{\frac {a_{2n}}{a_{n}}}}
{\displaystyle L_{1}\equiv \lim _{n\rightarrow \infty }{\frac {a_{2n+1}}{a_{n}}}}
{\displaystyle L\equiv \max(L_{0},L_{1})}
By the second ratio test, the series will:
Converge if {\displaystyle L<{\frac {1}{2}}}
Diverge if {\displaystyle L>{\frac {1}{2}}}
If {\displaystyle L={\frac {1}{2}}}, then the test is inconclusive.
If the above limits do not exist, it may be possible to use the limits superior and inferior. Define:
{\displaystyle L_{0}\equiv \limsup _{n\rightarrow \infty }{\frac {a_{2n}}{a_{n}}}}
{\displaystyle L_{1}\equiv \limsup _{n\rightarrow \infty }{\frac {a_{2n+1}}{a_{n}}}}
{\displaystyle \ell _{0}\equiv \liminf _{n\rightarrow \infty }{\frac {a_{2n}}{a_{n}}}}
{\displaystyle \ell _{1}\equiv \liminf _{n\rightarrow \infty }{\frac {a_{2n+1}}{a_{n}}}}
{\displaystyle L\equiv \max(L_{0},L_{1})}
{\displaystyle \ell \equiv \min(\ell _{0},\ell _{1})}
Then the series will:
Converge if {\displaystyle L<{\frac {1}{2}}}
Diverge if {\displaystyle \ell >{\frac {1}{2}}}
If {\displaystyle \ell \leq {\frac {1}{2}}\leq L}, then the test is inconclusive.
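The second ratio test compares a_{2n} and a_{2n+1} against a_n rather than consecutive terms. A sketch on a_n = 1/n², where the ordinary ratio test is inconclusive (illustrative code):

```python
# Second ratio test sketch for a_n = 1/n^2:
#   a_{2n}/a_n   = n^2/(2n)^2   = 1/4,
#   a_{2n+1}/a_n = n^2/(2n+1)^2 -> 1/4,
# so L = max(L0, L1) = 1/4 < 1/2 and the series converges,
# even though the ordinary ratio a_{n+1}/a_n -> 1 is inconclusive.
a = lambda n: 1.0 / n ** 2
n = 10 ** 5
L0 = a(2 * n) / a(n)
L1 = a(2 * n + 1) / a(n)
L = max(L0, L1)
print(L0, L1, L)
```

The threshold 1/2 plays the role that 1 plays in the ordinary ratio test: for the borderline harmonic series, a_{2n}/a_n = 1/2 exactly.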
The second ratio test can be generalized to an m-th ratio test, but higher orders are not found to be as useful.[6][8]
1. ^ McMahon 2008, p. 108.
2. ^ Li & Wong 2008.
3. ^ Bromwich, T. J. I'A. (1908). An Introduction to the Theory of Infinite Series. Merchant Books.
4. ^ Knopp, Konrad (1954). Theory and Application of Infinite Series. London: Blackie & Son Ltd.
5. ^ Tong, Jingcheng (May 1994). "Kummer's Test Gives Characterizations for Convergence or Divergence of all Positive Series". The American Mathematical Monthly. 101 (5): 450–452. doi:10.2307/2974907.
6. ^ Ali, Sayel A. (2008). "The mth Ratio Test: New Convergence Test for Series". The American Mathematical Monthly. 115 (6): 514–524.
7. ^ Samelson, Hans (November 1995). "More on Kummer's Test". The American Mathematical Monthly. 102 (9): 817–818. doi:10.2307/2974510.
8. ^ Blackburn, Kyle (4 May 2012). "The mth Ratio Convergence Test and Other Unconventional Convergence Tests". University of Washington College of Arts and Sciences.
9. ^ Duris, Frantisek (2009). Infinite Series: Convergence Tests (Bachelor's thesis). Katedra informatiky, Fakulta matematiky, fyziky a informatiky, Univerzita Komenského, Bratislava.
10. ^ Duris, Frantisek (2 February 2018). "On Kummer's test of convergence and its relation to basic comparison tests". arXiv:1612.05167v2 [math.HO].
11. ^ Hammond, Christopher N. B. (20 January 2018). "The Case for Raabe's Test". arXiv:1801.07584v1 [math.HO].
12. ^ Weisstein, Eric W. "Raabe's Test". MathWorld.
13. ^ Weisstein, Eric W. "Bertrand's Test". MathWorld.
14. ^ Weisstein, Eric W. "Kummer's Test". MathWorld.