# Power series solution of differential equations

In mathematics, the power series method is used to seek a power series solution to certain differential equations. The method assumes a solution in the form of a power series with unknown coefficients, substitutes it into the differential equation, and derives a recurrence relation for the coefficients.

## Method

Consider the second-order linear differential equation

$a_2(z)f''(z)+a_1(z)f'(z)+a_0(z)f(z)=0.$

Suppose $a_2$ is nonzero for all $z$. Then we can divide throughout by it to obtain

$f''+{a_1(z)\over a_2(z)}f'+{a_0(z)\over a_2(z)}f=0.$

Suppose further that $a_1/a_2$ and $a_0/a_2$ are analytic functions.

The power series method calls for the construction of a power series solution

$f=\sum_{k=0}^\infty A_kz^k.$

If $a_2$ is zero for some $z$, then the Frobenius method, a variation on this method, is suited to deal with so-called singular points. The method works analogously for higher-order equations as well as for systems.
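As a small illustration of the method (this example is not part of the text), the equation $f''+f=0$ yields, after substituting the series and matching powers of $z$, the recurrence $A_{k+2}=-A_k/\big((k+2)(k+1)\big)$, which a few lines of Python can iterate exactly:

```python
from fractions import Fraction

# Illustrative example (not from the text): f'' + f = 0 gives the
# recurrence A_{k+2} = -A_k / ((k + 2)(k + 1)) after matching powers of z.
def power_series_coeffs(A0, A1, n):
    """Return A_0 .. A_n for f'' + f = 0 with f(0) = A0, f'(0) = A1."""
    A = [Fraction(A0), Fraction(A1)]
    for k in range(n - 1):
        A.append(-A[k] / ((k + 2) * (k + 1)))
    return A

# With A0 = 1, A1 = 0 this reproduces the Maclaurin series of cos z.
cos_coeffs = power_series_coeffs(1, 0, 8)
```

With $A_0=1$, $A_1=0$ the coefficients come out to $1,\,0,\,-\tfrac12,\,0,\,\tfrac1{24},\dots$, i.e. the cosine series, as expected.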

## Example usage

Let us look at the Hermite differential equation,

$f''-2zf'+\lambda f=0;\;\lambda=1$

We can try to construct a series solution

$f=\sum_{k=0}^\infty A_kz^k$
$f'=\sum_{k=0}^\infty kA_kz^{k-1}$
$f''=\sum_{k=0}^\infty k(k-1)A_kz^{k-2}$

Substituting these into the differential equation,

\begin{align} & {} \quad \sum_{k=0}^\infty k(k-1)A_kz^{k-2}-2z\sum_{k=0}^\infty kA_kz^{k-1}+\sum_{k=0}^\infty A_kz^k=0 \\ & =\sum_{k=0}^\infty k(k-1)A_kz^{k-2}-\sum_{k=0}^\infty 2kA_kz^k+\sum_{k=0}^\infty A_kz^k \end{align}

Shifting the index of summation in the first sum,

\begin{align} & = \sum_{k+2=0}^\infty (k+2)((k+2)-1)A_{k+2}z^{(k+2)-2}-\sum_{k=0}^\infty 2kA_kz^k+\sum_{k=0}^\infty A_kz^k \\ & =\sum_{k=-2}^\infty (k+2)(k+1)A_{k+2}z^k-\sum_{k=0}^\infty 2kA_kz^k+\sum_{k=0}^\infty A_kz^k \\ & =(0)(-1)A_0 z^{-2} + (1)(0)A_{1}z^{-1}+\sum_{k=0}^\infty (k+2)(k+1)A_{k+2}z^k-\sum_{k=0}^\infty 2kA_kz^k+\sum_{k=0}^\infty A_kz^k \\ & =\sum_{k=0}^\infty (k+2)(k+1)A_{k+2}z^k-\sum_{k=0}^\infty 2kA_kz^k+\sum_{k=0}^\infty A_kz^k \\ & =\sum_{k=0}^\infty \left((k+2)(k+1)A_{k+2}+(-2k+1)A_k\right)z^k \end{align}

If this series is a solution, then the coefficient of each power of $z$ must be zero, so:

$(k+2)(k+1)A_{k+2}+(-2k+1)A_k=0$

We can rearrange this to get a recurrence relation for $A_{k+2}$.

$(k+2)(k+1)A_{k+2}=-(-2k+1)A_k$
$A_{k+2}={(2k-1)\over (k+2)(k+1)}A_k$

Now, we have

$A_2 = {-1 \over (2)(1)}A_0={-1\over 2}A_0,\, A_3 = {1 \over (3)(2)} A_1={1\over 6}A_1$

We can determine $A_0$ and $A_1$ if there are initial conditions, i.e. if we have an initial value problem.

So we have

\begin{align} A_4 & ={1\over 4}A_2 = \left({1\over 4}\right)\left({-1 \over 2}\right)A_0 = {-1 \over 8}A_0 \\[8pt] A_5 & ={1\over 4}A_3 = \left({1\over 4}\right)\left({1 \over 6}\right)A_1 = {1 \over 24}A_1 \\[8pt] A_6 & = {7\over 30}A_4 = \left({7\over 30}\right)\left({-1 \over 8}\right)A_0 = {-7 \over 240}A_0 \\[8pt] A_7 & = {3\over 14}A_5 = \left({3\over 14}\right)\left({1 \over 24}\right)A_1 = {1 \over 112}A_1 \end{align}

and the series solution is

\begin{align} f & = A_0z^0+A_1z^1+A_2z^2+A_3z^3+A_4z^4+A_5z^5+A_6z^6+A_7z^7+\cdots \\[8pt] & = A_0z^0 + A_1z^1 + {-1\over 2}A_0z^2 + {1\over 6}A_1z^3 + {-1 \over 8}A_0z^4 + {1 \over 24}A_1z^5 + {-7 \over 240}A_0z^6 + {1 \over 112}A_1z^7 + \cdots \\[8pt] & = A_0z^0 + {-1\over 2}A_0z^2 + {-1 \over 8}A_0z^4 + {-7 \over 240}A_0z^6 + A_1z + {1\over 6}A_1z^3 + {1 \over 24}A_1z^5 + {1 \over 112}A_1z^7 + \cdots \end{align}

which we can break up into the sum of two linearly independent series solutions:

$f=A_0 \left(1+{-1\over 2}z^2+{-1 \over 8}z^4+{-7 \over 240}z^6+\cdots\right) + A_1\left(z+{1\over 6}z^3+{1 \over 24}z^5+{1 \over 112}z^7+\cdots\right)$

which can be further simplified by the use of hypergeometric series.
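As a check, the recurrence $A_{k+2}=\frac{2k-1}{(k+2)(k+1)}A_k$ is easy to iterate programmatically; the following Python sketch (exact rational arithmetic via `Fraction`) reproduces the coefficients above:

```python
from fractions import Fraction

def hermite_coeffs(A0, A1, n):
    """Iterate A_{k+2} = (2k - 1) / ((k + 2)(k + 1)) * A_k  (the lambda = 1 case)."""
    A = [Fraction(A0), Fraction(A1)]
    for k in range(n - 1):
        A.append(Fraction(2 * k - 1, (k + 2) * (k + 1)) * A[k])
    return A

even = hermite_coeffs(1, 0, 7)   # A_0 = 1, A_1 = 0: the even solution
odd = hermite_coeffs(0, 1, 7)    # A_0 = 0, A_1 = 1: the odd solution
```

Here `even[2]`, `even[4]`, `even[6]` come out to $-\tfrac12$, $-\tfrac18$, $-\tfrac7{240}$, and `odd[3]`, `odd[5]`, `odd[7]` to $\tfrac16$, $\tfrac1{24}$, $\tfrac1{112}$, matching the two series above.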

## A simpler way using Taylor series

A much simpler way of solving this equation (and of obtaining power series solutions in general) is to use the Taylor series form of the expansion. Here we assume the answer is of the form

$f=\sum_{k=0}^\infty \frac{A_kz^k}{k!}$

If we do this, the general rule for obtaining the recurrence relationship for the coefficients is

$f^{(n)} \to A_{k+n}$

and

$z^m f^{(n)} \to k(k-1)\cdots(k-m+1)\,A_{k+n-m}$

In this case we can solve the Hermite equation in fewer steps:

$f''-2zf'+\lambda f=0;\;\lambda=1$

becomes

$A_{k+2} -2kA_k +\lambda A_k=0$

or

$A_{k+2} = (2k-\lambda) A_k$

in the series

$f=\sum_{k=0}^\infty \frac{A_kz^k}{k!}$
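The two approaches agree: writing $B_k$ for the Taylor-form coefficients (a naming choice made here to distinguish them from the ordinary coefficients $A_k$ of the first method), the relation $A_k = B_k/k!$ lets us verify the simpler recurrence numerically; a short sketch:

```python
from fractions import Fraction
from math import factorial

lam = 1  # the lambda = 1 case, as in the example

# Taylor-form coefficients: B_{k+2} = (2k - lam) * B_k,
# with B_0 = f(0), B_1 = f'(0); here the even solution f(0) = 1, f'(0) = 0.
B = [Fraction(1), Fraction(0)]
for k in range(8):
    B.append((2 * k - lam) * B[k])

# Ordinary power series coefficients: A_k = B_k / k!
A = [b / factorial(k) for k, b in enumerate(B)]
```

The values $A_2=-\tfrac12$, $A_4=-\tfrac18$, $A_6=-\tfrac7{240}$ agree with the even series found by the first method.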

## Nonlinear equations

The power series method can be applied to certain nonlinear differential equations, though with less flexibility. A very large class of nonlinear equations can be solved analytically by using the Parker-Sochacki method. Since the Parker-Sochacki method involves an expansion of the original system of ordinary differential equations through auxiliary equations, it is not simply referred to as the power series method. The Parker-Sochacki method is applied before the power series method to make the power series method possible on many nonlinear problems: the ODE problem is expanded with auxiliary variables that make the power series method trivial for an equivalent, larger system. Expanding the ODE problem with auxiliary variables produces the same coefficients (since the power series for a function is unique), at the cost of also calculating the coefficients of the auxiliary equations. Often, without auxiliary variables, there is no known way to obtain the power series for the solution to a system, so the power series method alone is difficult to apply to most nonlinear equations.

The power series method will give solutions only to initial value problems (as opposed to boundary value problems). This is not an issue when dealing with linear equations, since the method may produce multiple linearly independent solutions that can be combined (by superposition) to solve boundary value problems as well. A further restriction is that the series coefficients will be specified by a nonlinear recurrence (the nonlinearities are inherited from the differential equation).

In order for the solution method to work, as in linear equations, it is necessary to express every term in the nonlinear equation as a power series so that all of the terms may be combined into one power series.

As an example, consider the initial value problem

$F F'' + 2 F'^2 + \eta F' = 0 \quad ; \quad F(1) = 0 \ , \ F'(1) = -\frac{1}{2}$

which describes a solution to capillary-driven flow in a groove. Note the two nonlinearities: the first and second terms involve products. Note also that the initial values are given at $\eta = 1$, which hints that the power series must be set up as:

$F(\eta) = \sum_{i = 0}^{\infty} c_i (\eta - 1)^i$

since in this way

$\frac{d^n F}{d \eta^n} \Bigg|_{\eta = 1} = n! \ c_n$

which makes the initial values very easy to evaluate. It is necessary to rewrite the equation slightly in light of the definition of the power series,

$F F'' + 2 F'^2 + (\eta - 1) F' + F' = 0 \quad ; \quad F(1) = 0 \ , \ F'(1) = -\frac{1}{2}$

so that the third term contains the same form $\eta - 1$ that appears in the power series.

The last consideration is what to do with the products; substituting the power series in would result in products of power series when it's necessary that each term be its own power series. This is where the Cauchy product

$\left(\sum_{i = 0}^{\infty} a_i x^i\right) \left(\sum_{i = 0}^{\infty} b_i x^i\right) = \sum_{i = 0}^{\infty} x^i \sum_{j = 0}^i a_{i - j} b_j$

is useful; substituting the power series into the differential equation and applying this identity leads to an equation where every term is a power series. After much rearrangement, the recurrence

$\sum_{j = 0}^i \left((j + 1) (j + 2) c_{i - j} c_{j + 2} + 2 (i - j + 1) (j + 1) c_{i - j + 1} c_{j + 1}\right) + i c_i + (i + 1) c_{i + 1} = 0$

is obtained, specifying exact values of the series coefficients. The initial values give $c_0 = 0$ and $c_1 = -1/2$; thereafter, the above recurrence is used. For example, the next few coefficients are:

$c_2 = -\frac{1}{6} \quad ; \quad c_3 = -\frac{1}{108} \quad ; \quad c_4 = \frac{7}{3240} \quad ; \quad c_5 = -\frac{19}{48600} \ \dots$
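These values can be reproduced mechanically. In the sketch below (an illustration, not part of the original derivation), the recurrence at index $i$ is linear in the unknown $c_{i+1}$, because $c_{i+2}$ enters only multiplied by $c_0 = 0$; the code solves that linear equation at each step with exact rational arithmetic:

```python
from fractions import Fraction

def lhs(c, i):
    """Left-hand side of the recurrence at index i (c must be defined up to i + 2)."""
    total = Fraction(0)
    for j in range(i + 1):
        total += (j + 1) * (j + 2) * c[i - j] * c[j + 2]
        total += 2 * (i - j + 1) * (j + 1) * c[i - j + 1] * c[j + 1]
    return total + i * c[i] + (i + 1) * c[i + 1]

def groove_coeffs(n):
    """Return c_0 .. c_n for the capillary-groove problem."""
    c = [Fraction(0), Fraction(-1, 2)]           # from the initial values
    for i in range(1, n):
        c += [Fraction(0), Fraction(0)]          # placeholders for c_{i+1}, c_{i+2}
        b = lhs(c, i)                            # value with c_{i+1} = 0
        c[i + 1] = Fraction(1)
        a = lhs(c, i) - b                        # linear coefficient of c_{i+1}
        c[i + 1] = -b / a                        # solve a * c_{i+1} + b = 0
        c.pop()                                  # discard the c_{i+2} placeholder
    return c
```

Calling `groove_coeffs(5)` yields $0,\,-\tfrac12,\,-\tfrac16,\,-\tfrac1{108},\,\tfrac7{3240},\,-\tfrac{19}{48600}$, matching the listed values.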

A limitation of the power series solution shows itself in this example. A numeric solution of the problem shows that the function is smooth and always decreasing to the left of $\eta = 1$, and zero to the right. At $\eta = 1$, a slope discontinuity exists, a feature which the power series is incapable of rendering. For this reason, the series solution continues decreasing to the right of $\eta = 1$ instead of suddenly becoming zero.