Lyapunov exponent

From Wikipedia, the free encyclopedia

In mathematics, the Lyapunov exponent or Lyapunov characteristic exponent of a dynamical system is a quantity that characterizes the rate of separation of infinitesimally close trajectories. Quantitatively, two trajectories in phase space with initial separation \delta \mathbf{Z}_0 diverge (provided that the divergence can be treated within the linearized approximation) at a rate given by

 | \delta\mathbf{Z}(t) | \approx e^{\lambda t} | \delta \mathbf{Z}_0 | \,

where \lambda is the Lyapunov exponent.

The rate of separation can be different for different orientations of the initial separation vector. Thus, there is a spectrum of Lyapunov exponents, equal in number to the dimensionality of the phase space. It is common to refer to the largest one as the maximal Lyapunov exponent (MLE), because it determines a notion of predictability for a dynamical system. A positive MLE is usually taken as an indication that the system is chaotic (provided some other conditions are met, e.g., compactness of the phase space). Note that an arbitrary initial separation vector will typically contain some component in the direction associated with the MLE, and because of the exponential growth rate, the effect of the other exponents will be obliterated over time.

The exponent is named after Aleksandr Lyapunov.

Definition of the maximal Lyapunov exponent

The maximal Lyapunov exponent can be defined as follows:

 \lambda = \lim_{t \to \infty} \lim_{\delta \mathbf{Z}_0 \to 0} 
\frac{1}{t} \ln\frac{| \delta\mathbf{Z}(t)|}{|\delta \mathbf{Z}_0|}.

The limit \delta \mathbf{Z}_0 \to 0 ensures the validity of the linear approximation at any time.[1]

For a discrete-time system (maps or fixed-point iterations) x_{n+1} = f(x_n), this translates, for an orbit starting at x_0, into:

 
\lambda (x_0) = \lim_{n \to \infty}  \frac{1}{n} \sum_{i=0}^{n-1}  \ln | f'(x_i)|
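As a concrete check of this formula, the following sketch estimates the exponent of the logistic map x_{n+1} = r x_n (1 - x_n); the map, the starting point, and the iteration counts are illustrative choices, not part of the article. For r = 4 the exact value is known to be ln 2.

```python
import math

def logistic_mle(r=4.0, x0=0.1, n_transient=1000, n=100000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    by averaging ln|f'(x_i)| = ln|r - 2*r*x_i| along an orbit."""
    x = x0
    for _ in range(n_transient):      # discard transient so the orbit settles
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r - 2.0 * r * x))
        x = r * x * (1.0 - x)
    return total / n

lam = logistic_mle()
# For r = 4 the exact value is ln 2, approximately 0.693
```

A positive result here is the numerical signature of chaos for this map at r = 4.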

Definition of the Lyapunov spectrum

For a dynamical system with evolution equation f^t in an n-dimensional phase space, the spectrum of Lyapunov exponents

 \{ \lambda_1, \lambda_2, \ldots , \lambda_n \} \,,

in general, depends on the starting point x_0. (However, we will usually be interested in the attractor (or attractors) of a dynamical system, and there will normally be one set of exponents associated with each attractor. The choice of starting point may determine which attractor the system ends up on, if there is more than one. Note: Hamiltonian systems do not have attractors, so this particular discussion does not apply to them.) The Lyapunov exponents describe the behavior of vectors in the tangent space of the phase space and are defined from the Jacobian matrix

 J^t(x_0) = \left. \frac{ d f^t(x) }{dx} \right|_{x_0}

The J^t matrix describes how a small change at the point x_0 propagates to the final point f^t(x_0). The limit

 L(x_0) = \lim_{t \rightarrow \infty} \left( J^t \, (J^t)^{\mathrm{T}} \right)^{1/2t}

defines a matrix L(x_0) (the conditions for the existence of the limit are given by the Oseledec theorem). If  \Lambda_i(x_0) are the eigenvalues of L(x_0), then the Lyapunov exponents \lambda_i are defined by

 \lambda_i(x_0) = \log \Lambda_i(x_0)

The set of Lyapunov exponents will be the same for almost all starting points of an ergodic component of the dynamical system.
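A minimal numerical sketch of this definition, assuming the simplest possible case of a linear map x_{n+1} = A x (the matrix A and the step count t are illustrative choices): here J^t = A^t, the eigenvalues \Lambda_i of L are the limits of the singular values of J^t raised to the power 1/t, and the exponents should converge to the logs of the moduli of the eigenvalues of A.

```python
import numpy as np

# For the linear map x_{n+1} = A x, the t-step Jacobian is J^t = A^t, so the
# Lyapunov exponents are the limits of (1/t) ln of the singular values of A^t.
A = np.array([[2.0, 0.0],
              [0.5, 0.5]])            # eigenvalues 2 and 0.5

t = 20
Jt = np.linalg.matrix_power(A, t)
sv = np.linalg.svd(Jt, compute_uv=False)   # singular values of J^t, descending
exponents = np.log(sv) / t
# Expected: log|eigenvalues of A| = [log 2, log 0.5], about [0.693, -0.693]
```

Taking t too large here would overflow or lose the small singular value to rounding; the modest t = 20 already agrees with the limit to about 0.3%.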

Lyapunov exponent for time-varying linearization

To introduce the Lyapunov exponent, consider a fundamental matrix X(t) (e.g., for linearization along a stationary solution x_0 of a continuous-time system, the fundamental matrix is \exp\left( \left. \frac{ d f^t(x) }{dx} \right|_{x_0} t\right)), consisting of linearly independent solutions of the first-approximation system. The singular values \{\alpha_j\big(X(t)\big)\}_{1}^{n} of the matrix X(t) are the square roots of the eigenvalues of the matrix X(t)^*X(t). The largest Lyapunov exponent \lambda_{max} is then[2]


   \lambda_{max}= \max\limits_{j}\limsup _{t \rightarrow \infty}\frac{1}{t}\ln\alpha_j\big(X(t)\big).
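As a sketch of this definition in the simplest (constant-coefficient) case, where the fundamental matrix reduces to X(t) = e^{At}: the matrix A below is an illustrative choice, and for constant coefficients the largest exponent equals the largest real part among the eigenvalues of A.

```python
import numpy as np

# Constant-coefficient system x' = A x: X(t) = e^{A t}, and lambda_max is the
# limit of (1/t) ln of the largest singular value alpha_j of X(t).
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])          # damped oscillator; eigenvalues -0.25 +/- 0.97i

t = 100.0
evals, V = np.linalg.eig(A)
X = ((V * np.exp(evals * t)) @ np.linalg.inv(V)).real   # X(t) = e^{A t} via eigendecomposition
alphas = np.linalg.svd(X, compute_uv=False)             # singular values of X(t)
lam_max = np.log(alphas.max()) / t
# lam_max approaches the largest real part of the eigenvalues of A, here -0.25
```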

A. M. Lyapunov proved that if the system of the first approximation is regular (e.g., all systems with constant or periodic coefficients are regular) and its largest Lyapunov exponent is negative, then the solution of the original system is asymptotically Lyapunov stable. O. Perron later showed that the requirement of regularity of the first approximation is essential.

Perron effects of largest Lyapunov exponent sign inversion

In 1930 O. Perron constructed an example of a second-order system whose first approximation has negative Lyapunov exponents along a zero solution of the original system while, at the same time, this zero solution of the original nonlinear system is Lyapunov unstable. Furthermore, in a certain neighborhood of this zero solution almost all solutions of the original system have positive Lyapunov exponents. It is also possible to construct a reverse example, in which the first approximation has positive Lyapunov exponents along a zero solution of the original system while that zero solution of the original nonlinear system is Lyapunov stable.[3][4] The sign inversion of the Lyapunov exponents of solutions of the original system and of the system of first approximation with the same initial data was subsequently called the Perron effect.[3][4]

Perron's counterexample shows that a negative largest Lyapunov exponent does not, in general, indicate stability, and that a positive largest Lyapunov exponent does not, in general, indicate chaos.

Therefore, time-varying linearization requires additional justification.[4]

Basic properties

If the system is conservative (i.e. there is no dissipation), a volume element of the phase space will stay the same along a trajectory. Thus the sum of all Lyapunov exponents must be zero. If the system is dissipative, the sum of Lyapunov exponents is negative.

If the system is a flow and the trajectory does not converge to a single point, one exponent is always zero—the Lyapunov exponent corresponding to the eigenvalue of L with an eigenvector in the direction of the flow.

Significance of the Lyapunov spectrum

The Lyapunov spectrum can be used to estimate the rate of entropy production and the fractal dimension of the considered dynamical system. In particular, from knowledge of the Lyapunov spectrum it is possible to obtain the so-called Kaplan–Yorke dimension  D_{KY} , which is defined as follows:

 D_{KY}= k + \sum_{i=1}^k \frac{\lambda_i}{|\lambda_{k+1}|},

where  k is the maximum integer such that the sum of the  k largest exponents is still non-negative.  D_{KY} represents an upper bound for the information dimension of the system.[5] Moreover, the sum of all the positive Lyapunov exponents gives an estimate of the Kolmogorov–Sinai entropy, according to Pesin's theorem.[6]
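The definition above translates directly into a short routine. As a sketch, it is applied here to the commonly quoted approximate spectrum of the Lorenz system at the standard parameters, roughly (0.906, 0, -14.57); those numerical values are illustrative, not from the article.

```python
def kaplan_yorke(spectrum):
    """Kaplan-Yorke dimension: D = k + (lambda_1 + ... + lambda_k)/|lambda_{k+1}|,
    where k is the maximum integer such that the sum of the k largest
    exponents is still non-negative."""
    exps = sorted(spectrum, reverse=True)
    total, k = 0.0, 0
    for lam in exps:
        if total + lam >= 0:
            total += lam
            k += 1
        else:
            break
    if k == len(exps):              # partial sums never go negative
        return float(k)
    return k + total / abs(exps[k])

# Approximate Lorenz spectrum (standard parameters): k = 2, D about 2.06
d_ky = kaplan_yorke([0.906, 0.0, -14.57])
```

For a stable fixed point (all exponents negative) the routine returns 0, consistent with a zero-dimensional attractor.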

The multiplicative inverse of the largest Lyapunov exponent is sometimes referred to in the literature as the Lyapunov time, and defines the characteristic e-folding time. For chaotic orbits, the Lyapunov time is finite, whereas for regular orbits it is infinite.
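A small arithmetic illustration, again using the approximate Lorenz value lambda_max of about 0.906 (an illustrative number, in inverse time units):

```python
import math

# Lyapunov time = 1/lambda_max: the characteristic e-folding time of
# infinitesimal errors.
lam_max = 0.906
t_lyap = 1.0 / lam_max                     # about 1.10 time units

# After n Lyapunov times an initial error has grown by about e^n,
# e.g. roughly a factor of 1000 after 7 of them.
growth_after_7 = math.exp(7.0)
```

This is why the Lyapunov time is often read as a rough horizon of predictability.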

Numerical calculation

Generally the calculation of Lyapunov exponents, as defined above, cannot be carried out analytically, and in most cases one must resort to numerical techniques. An early example, which also constituted the first demonstration of the exponential divergence of chaotic trajectories, was carried out by R. H. Miller in 1964.[7] Currently, the most commonly used numerical procedure estimates the L matrix based on averaging several finite time approximations of the limit defining L.

One of the most used and effective numerical techniques to calculate the Lyapunov spectrum for a smooth dynamical system relies on periodic Gram–Schmidt orthonormalization of the Lyapunov vectors to avoid a misalignment of all the vectors along the direction of maximal expansion.[8][9][10][11]
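This technique can be sketched with a QR decomposition, which performs the Gram–Schmidt orthonormalization step at every iteration. The example system (the Hénon map at its usual parameters a = 1.4, b = 0.3) and the iteration counts are illustrative choices; the literature value for the largest exponent at these parameters is about 0.42, and the two exponents must sum to ln b because det J = -b at every point.

```python
import numpy as np

def henon_spectrum(a=1.4, b=0.3, n=100000):
    """Estimate both Lyapunov exponents of the Henon map
    (x, y) -> (1 - a*x^2 + y, b*x) by evolving a set of tangent vectors
    and re-orthonormalizing them with a QR decomposition each step."""
    x, y = 0.1, 0.1
    for _ in range(1000):              # discard transient
        x, y = 1.0 - a * x * x + y, b * x
    Q = np.eye(2)                      # orthonormal tangent vectors
    sums = np.zeros(2)
    for _ in range(n):
        J = np.array([[-2.0 * a * x, 1.0],
                      [b, 0.0]])       # Jacobian at the current point
        Q, R = np.linalg.qr(J @ Q)     # Gram-Schmidt via QR
        sums += np.log(np.abs(np.diag(R)))   # accumulate stretching rates
        x, y = 1.0 - a * x * x + y, b * x
    return sums / n

l1, l2 = henon_spectrum()
# l1 is about 0.42; l1 + l2 equals ln(0.3), about -1.20
```

Without the re-orthonormalization, both columns of Q would collapse onto the direction of maximal expansion and only the largest exponent could be recovered.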

For the calculation of Lyapunov exponents from limited experimental data, various methods have been proposed. However, there are many difficulties in applying these methods, and such problems should be approached with care. The main difficulty is that the data do not fully explore the phase space; rather, they are confined to the attractor, which has very limited (if any) extension along certain directions. These thinner or more singular directions within the data set are the ones associated with the more negative exponents. The use of nonlinear mappings to model the evolution of small displacements from the attractor has been shown to dramatically improve the ability to recover the Lyapunov spectrum,[12][13] provided the data has a very low level of noise. The singular nature of the data and its connection to the more negative exponents has also been explored.[14]

Local Lyapunov exponent

Whereas the (global) Lyapunov exponent gives a measure of the total predictability of a system, it is sometimes of interest to estimate the local predictability around a point x_0 in phase space. This may be done through the eigenvalues of the Jacobian matrix J(x_0). These eigenvalues are also called local Lyapunov exponents.[15] (A word of caution: unlike the global exponents, these local exponents are not invariant under a nonlinear change of coordinates.)
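As a sketch, the local exponents at one point of the Hénon map (the sample point and the standard parameters a = 1.4, b = 0.3 are illustrative choices) are the log-magnitudes of the Jacobian's eigenvalues there; since det J = -b everywhere, they must sum to ln b at every point.

```python
import numpy as np

# Local Lyapunov exponents at a single phase-space point: log-magnitudes
# of the eigenvalues of the Jacobian evaluated at that point.
a, b = 1.4, 0.3
x, y = 0.631, 0.189                    # an arbitrary sample point
J = np.array([[-2.0 * a * x, 1.0],
              [b, 0.0]])               # Henon-map Jacobian at (x, y)
local_exps = np.sort(np.log(np.abs(np.linalg.eigvals(J))))[::-1]
# The two values depend on the chosen point, but always sum to ln(b)
```

Averaging such point-wise quantities along an orbit does not in general reproduce the global exponents, which is one reason the caution above matters.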

Conditional Lyapunov exponent

This term is normally used in regard to the synchronization of chaos, in which two systems are coupled, usually in a unidirectional manner, so that there is a drive (or master) system and a response (or slave) system. The conditional exponents are those of the response system, with the drive system treated simply as the source of a (chaotic) drive signal. Synchronization occurs when all of the conditional exponents are negative.[16]

References

  1. ^ Cencini, M. et al. (2010). Chaos: From Simple Models to Complex Systems. World Scientific. ISBN 981-4277-65-7. 
  2. ^ Temam, R. (1988). Infinite Dimensional Dynamical Systems in Mechanics and Physics. Springer-Verlag. 
  3. ^ a b Kuznetsov, N.V.; Leonov, G.A. (2005). "On stability by the first approximation for discrete systems". 2005 International Conference on Physics and Control, PhysCon 2005. Proceedings Volume 2005: 596–599. doi:10.1109/PHYCON.2005.1514053. 
  4. ^ a b c Leonov, G.A.; Kuznetsov, N.V. (2007). "Time-Varying Linearization and the Perron effects". International Journal of Bifurcation and Chaos 17 (4): 1079–1107. Bibcode:2007IJBC...17.1079L. doi:10.1142/S0218127407017732. 
  5. ^ Kaplan, J.; Yorke, J. (1979). "Chaotic behavior of multidimensional difference equations". In Peitgen, H. O.; Walther, H. O. Functional Differential Equations and Approximation of Fixed Points. New York: Springer. ISBN 3-540-09518-7. 
  6. ^ Pesin, Y. B. (1977). "Characteristic Lyapunov Exponents and Smooth Ergodic Theory". Russian Math. Surveys 32 (4): 55–114. Bibcode:1977RuMaS..32...55P. doi:10.1070/RM1977v032n04ABEH001639. 
  7. ^ Miller, R. H. (1964). "Irreversibility in Small Stellar Dynamical Systems". The Astrophysical Journal 140: 250. doi:10.1086/147911. 
  8. ^ Benettin, G.; Galgani, L.; Giorgilli, A.; Strelcyn, J. M. (1980). "Lyapunov Characteristic Exponents for smooth dynamical systems and for Hamiltonian systems; a method for computing all of them. Part 1: Theory". Meccanica 15: 9. doi:10.1007/BF02128236. 
  9. ^ Benettin, G.; Galgani, L.; Giorgilli, A.; Strelcyn, J. M. (1980). "Lyapunov Characteristic Exponents for smooth dynamical systems and for Hamiltonian systems; a method for computing all of them. Part 2: Numerical application". Meccanica 15: 21. doi:10.1007/BF02128237. 
  10. ^ Shimada, I.; Nagashima, T. (1979). "A Numerical Approach to Ergodic Problem of Dissipative Dynamical Systems". Progress of Theoretical Physics 61 (6): 1605. doi:10.1143/PTP.61.1605. 
  11. ^ Eckmann, J.-P.; Ruelle, D. (1985). "Ergodic theory of chaos and strange attractors". Reviews of Modern Physics 57: 617. doi:10.1103/RevModPhys.57.617. 
  12. ^ Bryant, P.; Brown, R.; Abarbanel, H. (1990). "Lyapunov exponents from observed time series". Physical Review Letters 65 (13): 1523. doi:10.1103/PhysRevLett.65.1523. PMID 10042292. 
  13. ^ Brown, R.; Bryant, P.; Abarbanel, H. (1991). "Computing the Lyapunov spectrum of a dynamical system from an observed time series". Physical Review A 43 (6): 2787. doi:10.1103/PhysRevA.43.2787. 
  14. ^ Bryant, P. H. (1993). "Extensional singularity dimensions for strange attractors". Physics Letters A 179 (3): 186. doi:10.1016/0375-9601(93)91136-S. 
  15. ^ Abarbanel, H.D.I.; Brown, R.; Kennel, M.B. (1992). "Local Lyapunov exponents computed from observed data". Journal of Nonlinear Science 2 (3). doi:10.1007/BF01208929. 
  16. ^ See, e.g., Pecora, L. M.; Carroll, T. L.; Johnson, G. A.; Mar, D. J.; Heagy, J. F. (1997). "Fundamentals of synchronization in chaotic systems, concepts, and applications". Chaos: An Interdisciplinary Journal of Nonlinear Science 7 (4): 520. doi:10.1063/1.166278. 

Software

  • R. Hegger, H. Kantz, and T. Schreiber, Nonlinear Time Series Analysis, TISEAN 3.0.1 (March 2007).
  • Scientio's ChaosKit product calculates Lyapunov exponents among other chaotic measures. Access is provided online via a web service and a Silverlight demo.
  • Dr. Ronald Joe Record's mathematical recreations software laboratory includes an X11 graphical client, lyap, for graphically exploring the Lyapunov exponents of a forced logistic map and other maps of the unit interval. The contents and manual pages of the mathrec software laboratory are also available.
  • Software on this page includes LyapOde for cases where the equations of motion are known and Lyap for cases involving time-series data. LyapOde, which includes source code written in "C", can also calculate the conditional Lyapunov exponents for coupled identical systems. Lyap, which includes source code written in Fortran, can also calculate the Lyapunov direction vectors and can characterize the singularity of the attractor, which is the main reason for difficulties in calculating the more negative exponents. In both cases there is extensive documentation and sample input files. The software can be compiled for Windows, Mac, or Linux/Unix systems. It runs in a text window and has no graphics capabilities, but it is efficient and has no inherent limitations on the number of variables.
