Matrix differential equation

From Wikipedia, the free encyclopedia

A differential equation is a mathematical equation for an unknown function of one or several variables that relates the values of the function itself and of its derivatives of various orders. A matrix differential equation contains more than one function stacked into vector form with a matrix relating the functions to their derivatives.

For example, a simple matrix ordinary differential equation is

\mathbf{x'}(t) = \mathbf{Ax}(t)

where x(t) is an n×1 vector of functions of an underlying variable t, x'(t) is the vector of first derivatives of these functions, and A is an n×n matrix of constants.

In the case where A has n distinct eigenvalues, this differential equation has the following general solution,

\mathbf{x}(t) = c_1 e^{\lambda_1 t} \mathbf{u}_1 + c_2 e^{\lambda_2 t} \mathbf{u}_2 + \cdots + c_n e^{\lambda_n t} \mathbf{u}_n  ~,

where λ1, λ2, ..., λn are the eigenvalues of A; u1, u2, ..., un are the respective eigenvectors of A; and c1, c2, ..., cn are constants.

By use of the Cayley-Hamilton theorem and Vandermonde-type matrices, this formal matrix exponential solution may be reduced to a simple form.[1] Below, this solution is displayed in terms of Putzer's algorithm.[2]
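As a numerical sketch of the eigenvalue solution above: given distinct eigenvalues, the constants c_i follow from the initial condition x(0) = c_1 u_1 + ... + c_n u_n. The Python code below (using NumPy/SciPy; the function name `eigen_solution` and the example matrix are illustrative) checks the eigendecomposition solution against the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

def eigen_solution(A, x0, t):
    """Solve x'(t) = A x(t) via eigendecomposition,
    assuming A has n distinct eigenvalues and a real solution."""
    lam, U = np.linalg.eig(A)       # columns of U are the eigenvectors u_i
    c = np.linalg.solve(U, x0)      # constants from x(0) = sum_i c_i u_i
    # x(t) = sum_i c_i e^{lambda_i t} u_i
    return (U * (c * np.exp(lam * t))).sum(axis=1).real

A = np.array([[3.0, -4.0], [4.0, -7.0]])  # the example system solved later
x0 = np.array([1.0, 1.0])
print(eigen_solution(A, x0, 0.5))         # agrees with expm(A * 0.5) @ x0
```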

Stability and steady state of the matrix system

The matrix equation

\mathbf{x}'(t) = \mathbf{Ax}(t) + \mathbf{b}

with n×1 parameter vector b is stable if and only if all eigenvalues of the matrix A have a negative real part. The steady state x* to which it converges if stable is found by setting

\mathbf{x}'(t)=\mathbf{0}~,

thus yielding

\mathbf{x}^* = -\mathbf{A}^{-1}\mathbf{b}~,

assuming A is invertible.

Thus, the original equation can be written in homogeneous form in terms of deviations from the steady state,

 \mathbf{x}'(t)=\mathbf{A}[\mathbf{x}(t)-\mathbf{x}^*]~.

An equivalent way of expressing this is that x* is a particular solution to the non-homogeneous equation, while all solutions are in the form

\mathbf{x}_h+\mathbf{x}^*   ~,

with \mathbf{x}_h a solution to the homogeneous equation (b=0).

Stability of the two-state-variable case

In the n = 2 case (with two state variables), the stability conditions that the two eigenvalues of the transition matrix A each have a negative real part are equivalent to the conditions that the trace of A be negative and its determinant be positive.
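The equivalence of the trace/determinant conditions with the eigenvalue criterion can be checked numerically; the sketch below (Python with NumPy) compares the two tests on random 2×2 matrices.

```python
import numpy as np

def stable_2x2(A):
    """For a real 2x2 matrix A, both eigenvalues have negative real
    parts iff trace(A) < 0 and det(A) > 0."""
    return bool(np.trace(A) < 0 and np.linalg.det(A) > 0)

# Compare against the direct eigenvalue criterion on random matrices.
rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.normal(size=(2, 2))
    assert stable_2x2(A) == bool(np.all(np.linalg.eigvals(A).real < 0))
```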

Solution in matrix form

The formal solution of  \mathbf{x}'(t)=\mathbf{A}[\mathbf{x}(t)-\mathbf{x}^*] is the celebrated matrix exponential,

\mathbf{x}(t)=\mathbf{x}^*+e^{\mathbf{A}t}[\mathbf{x}(0)-\mathbf{x}^*] ~,

which may be evaluated by a variety of techniques.

Main article: matrix exponential
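A minimal sketch of this matrix-exponential solution in Python (using SciPy's `expm`; the function name `solve_affine` and the test matrices are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import expm

def solve_affine(A, b, x0, t):
    """x(t) = x* + e^{At} (x(0) - x*) for x'(t) = A x(t) + b,
    assuming A is invertible."""
    x_star = np.linalg.solve(A, -b)   # steady state x* = -A^{-1} b
    return x_star + expm(A * t) @ (x0 - x_star)

A = np.array([[-2.0, 1.0], [0.0, -3.0]])  # illustrative stable matrix
b = np.array([1.0, 3.0])
x0 = np.array([2.0, 0.0])
print(solve_affine(A, b, x0, 1.0))
```

Since A here is stable, the solution converges to the steady state x* = (1, 1) as t grows.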

Putzer's algorithm for computing e^{At}

Given a matrix A with eigenvalues \lambda_1,\lambda_2,\dots,\lambda_n,

e^{\mathbf{A}t} = \sum_{j=0}^{n-1}r_{j+1}{\left(t\right)}\mathbf{P}_{j}

where

\mathbf{P}_0 = \mathbf{I}
\mathbf{P}_j = \prod_{k=1}^{j}\left(\mathbf{A}-\lambda_k \mathbf{I}\right)= \mathbf{P}_{j-1} \left(\mathbf{A}-\lambda_j \mathbf{I}\right),  \qquad j=1,2,\dots,n-1
\dot{r}_1 = \lambda_1 r_1
r_1{\left(0\right)}=1
\dot{r}_{j} = \lambda_j r_j + r_{j-1}, \qquad  j=2,3,\dots,n
r_j{\left(0\right)}=0,  \qquad  j=2,3,\dots,n

The equations for r_j{\left(t\right)} are simple first-order inhomogeneous ODEs.

Note that the algorithm does not require the matrix A to be diagonalizable, and it bypasses the complexities of the Jordan canonical form normally utilized.
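The recursion above can be sketched in Python (NumPy/SciPy). One detail is an assumption of this sketch rather than part of the algorithm as stated: the scalar ODEs for the r_j, which also admit closed forms, are integrated numerically here with `solve_ivp`.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

def putzer_expm(A, t):
    """Compute e^{At} by Putzer's algorithm (t > 0).
    No diagonalization or Jordan form is needed."""
    n = A.shape[0]
    lam = np.linalg.eigvals(A).astype(complex)  # any ordering works
    # P_0 = I, P_j = P_{j-1} (A - lambda_j I), j = 1, ..., n-1
    P = [np.eye(n, dtype=complex)]
    for j in range(1, n):
        P.append(P[-1] @ (A - lam[j - 1] * np.eye(n)))
    # r_1' = lam_1 r_1, r_1(0) = 1;  r_j' = lam_j r_j + r_{j-1}, r_j(0) = 0
    def rhs(_, r):
        dr = lam * r
        dr[1:] += r[:-1]
        return dr
    r0 = np.zeros(n, dtype=complex)
    r0[0] = 1.0
    r = solve_ivp(rhs, (0.0, t), r0, rtol=1e-10, atol=1e-12).y[:, -1]
    # e^{At} = sum_{j=0}^{n-1} r_{j+1}(t) P_j
    return sum(r[j] * P[j] for j in range(n)).real

A = np.array([[3.0, -4.0], [4.0, -7.0]])  # the example system solved below
print(putzer_expm(A, 0.3))                # agrees with expm(A * 0.3)
```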

Deconstructed example of a matrix ordinary differential equation

A first-order homogeneous matrix ordinary differential equation in two functions x(t) and y(t), when taken out of matrix form, has the following form:

\frac{dx}{dt}=a_1x+b_1y,\quad\frac{dy}{dt}=a_2x+b_2y

where a_1, a_2, b_1 and b_2 may be any arbitrary scalars.

Higher-order matrix ODEs may possess a much more complicated form.

Solving deconstructed matrix ordinary differential equations

The process of solving the above equations and finding the required functions of this particular order and form consists of three main steps:

  1. Finding the eigenvalues of the coefficient matrix.
  2. Finding the eigenvectors corresponding to those eigenvalues.
  3. Plugging the values calculated in the two previous steps, together with the initial values, into a specialized general-form equation, mentioned later in this article.

Solved example of a matrix ODE

To solve a matrix ODE according to the three steps detailed above, using simple matrices in the process, let us find a function x and a function y, both in terms of the single independent variable t, in the following first-order linear differential equation system,

\frac{dx}{dt}=3x-4y,\quad\frac{dy}{dt}=4x-7y~.

To solve this particular ordinary differential equation system, at some point of the solution process we shall need a set of two initial values (corresponding to the two state variables at the starting point). In this case, let us pick x(0)=y(0)=1.

First step

The first step, already mentioned above, is finding the eigenvalues of A in

\begin{pmatrix} x'\\y' \end{pmatrix} = \begin{pmatrix} 3 & -4\\4 & -7 \end{pmatrix}\begin{pmatrix} x\\y \end{pmatrix}~.

The derivative notation x' etc. seen in one of the vectors above is known as Lagrange's notation, first introduced by Joseph-Louis Lagrange. It is equivalent to the notation dx/dt used in the previous equation, known as Leibniz's notation, honouring the name of Gottfried Leibniz.

Once the coefficients of the two variables have been written in the matrix form A displayed above, one may evaluate the eigenvalues. To that end, one finds the determinant of the matrix that is formed when an identity matrix I_n, multiplied by some constant λ, is subtracted from the above coefficient matrix, yielding its characteristic polynomial,

\det\left(\begin{bmatrix} 3 & -4\\4 & -7 \end{bmatrix} - \lambda\begin{bmatrix} 1 & 0\\0 & 1 \end{bmatrix}\right)~,

and solve for its zeroes.

Applying further simplification and basic rules of matrix addition yields

\det\begin{bmatrix} 3-\lambda & -4\\4 & -7-\lambda \end{bmatrix}~.

Applying the rule for the determinant of a 2×2 matrix yields the following elementary quadratic equation,

\det\begin{bmatrix} 3-\lambda & -4\\4 & -7-\lambda \end{bmatrix} = 0
-21 - 3\lambda + 7\lambda + \lambda^2 + 16 = 0

which may be reduced further to get a simpler version of the above,

\lambda^2 + 4\lambda - 5 = 0  ~.

Now finding the two roots \lambda_1 and \lambda_2 of the given quadratic equation by applying the factorization method yields

\lambda^2 + 5\lambda - \lambda - 5 = 0
\lambda (\lambda + 5) - 1 (\lambda + 5) = 0
(\lambda - 1)(\lambda + 5) = 0
\lambda = 1, -5 ~.

The values \lambda_1 = 1 and \lambda_2 = -5 calculated above are the required eigenvalues of A. For some other matrix ODEs the eigenvalues may be complex, in which case the following step of the solving process, as well as the final form and the solution, may change dramatically.
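The eigenvalues found by hand above can be confirmed numerically, for example with NumPy:

```python
import numpy as np

# Coefficient matrix of the worked example
A = np.array([[3.0, -4.0], [4.0, -7.0]])
lam = np.sort(np.linalg.eigvals(A).real)
print(lam)   # approximately [-5., 1.]
```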

Second step

As mentioned above, this step involves finding the eigenvectors of A from the information originally provided.

For each of the eigenvalues calculated we have an individual eigenvector. For the first eigenvalue, which is \lambda_1 = 1, we have

\begin{pmatrix} 3 & -4\\4 & -7 \end{pmatrix}\begin{pmatrix} \alpha\\\beta \end{pmatrix} = 1\begin{pmatrix} \alpha\\\beta \end{pmatrix}.

Simplifying the above expression by applying basic matrix multiplication rules yields

3\alpha - 4\beta = \alpha
\alpha = 2\beta~.

All of these calculations have been done only to obtain the last expression, which in our case is α = 2β. Since an eigenvector is determined only up to a scale factor, we may now pick a convenient value for either α or β (in most cases it does not really matter which) and read off the other from α = 2β; the resulting vector is the required eigenvector for this particular eigenvalue. In our case, we pick α = 2, which in turn determines that β = 1 and, using the standard vector notation, our vector looks like

\mathbf{\hat{v}}_1 = \begin{pmatrix} 2\\1 \end{pmatrix}.

Performing the same operation using the second eigenvalue we calculated, which is \lambda_2 = -5, we obtain our second eigenvector. The process of working out this vector is not shown, but the final result is

\mathbf{\hat{v}}_2 = \begin{pmatrix} 1\\2 \end{pmatrix}.
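Both eigenvectors are easy to verify directly: multiplying each by A should simply scale it by its eigenvalue. A quick check in NumPy:

```python
import numpy as np

A = np.array([[3.0, -4.0], [4.0, -7.0]])
v1 = np.array([2.0, 1.0])   # eigenvector for lambda_1 = 1
v2 = np.array([1.0, 2.0])   # eigenvector for lambda_2 = -5
print(A @ v1)   # equals  1 * v1, i.e. [ 2.  1.]
print(A @ v2)   # equals -5 * v2, i.e. [ -5. -10.]
```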

Third step

This final step actually finds the required functions that are 'hidden' behind the derivatives given to us originally. There are two functions, because our differential equations deal with two variables.

The equation, which combines all the pieces of information that we have previously found, has the following form (here A and B are scalar constants of integration, not to be confused with the coefficient matrix),

\begin{pmatrix} x\\y \end{pmatrix} = Ae^{\lambda_1t}\mathbf{\hat{v}}_1 + Be^{\lambda_2t}\mathbf{\hat{v}}_2.

Substituting the values of eigenvalues and eigenvectors yields

\begin{pmatrix} x\\y \end{pmatrix} = Ae^{t}\begin{pmatrix} 2\\1 \end{pmatrix} + Be^{-5t}\begin{pmatrix} 1\\2 \end{pmatrix}.

Applying further simplification,

\begin{pmatrix} 2 & 1\\1 & 2 \end{pmatrix}\begin{pmatrix} Ae^{t}\\Be^{-5t} \end{pmatrix} = \begin{pmatrix} x\\y \end{pmatrix}.

Simplifying further and writing the equations for the functions x and y separately,

x = 2Ae^{t} + Be^{-5t}
y = Ae^{t} + 2Be^{-5t}.

The above equations are, in fact, the general functions sought, but in their general form; to find their exact forms we must now look back at the information given to us, the so-called initial value problem. We chose x(0) = y(0) = 1, which plays the role of starting point for our ordinary differential equation; applying this condition specifies the constants A and B. Setting t = 0 in the general solution and equating each function to 1 yields the following system of linear equations,

1 = 2A + B
1 = A + 2B~.

Solving these equations, we find that both constants A and B equal 1/3. Therefore, substituting these values into the general form of these two functions specifies their exact forms,

x = \frac{2}{3}e^{t} + \frac{1}{3}e^{-5t}
y = \frac{1}{3}e^{t} + \frac{2}{3}e^{-5t}~,

the two functions sought.
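The closed-form solution just derived can be cross-checked against the matrix-exponential solution e^{At} x(0) from earlier in this article, for example with SciPy:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, -4.0], [4.0, -7.0]])
x0 = np.array([1.0, 1.0])   # the initial condition x(0) = y(0) = 1

def closed_form(t):
    """The solution (x(t), y(t)) derived above."""
    return np.array([2/3 * np.exp(t) + 1/3 * np.exp(-5*t),
                     1/3 * np.exp(t) + 2/3 * np.exp(-5*t)])

# The closed form matches e^{At} x(0) at several sample times.
for t in (0.0, 0.25, 1.0):
    assert np.allclose(closed_form(t), expm(A * t) @ x0)
```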

References

  1. ^ H. Moya-Cessa and F. Soto-Eguibar, Differential Equations: An Operational Approach, Rinton Press, New Jersey, 2011. ISBN 978-1-58949-060-4.
  2. ^ E. J. Putzer, "Avoiding the Jordan Canonical Form in the Discussion of Linear Systems with Constant Coefficients", The American Mathematical Monthly 73 (1), 2–7 (1966).