Dynamic mode decomposition

From Wikipedia, the free encyclopedia

Physical systems, such as fluid flow or mechanical vibrations, behave in characteristic patterns, known as modes. In a recirculating flow, for example, one may think of a hierarchy of vortices, a big main vortex driving smaller secondary ones and so on. Most of the motion of such a system can be faithfully described using only a few of those patterns. The dynamic mode decomposition (DMD) provides a means of extracting these modes from numerical and experimental pairs of time-shifted snapshots. Each of the modes identified by DMD is associated with a fixed oscillation frequency and growth/decay rate, determined by DMD without requiring knowledge of the governing equations. This is to be contrasted with methods, such as the proper orthogonal decomposition, which produce a set of modes without the associated temporal information.


A time-evolving physical situation may be approximated by the action of a linear operator e^{\Delta t A} on the instantaneous state vector:

{q(t+\Delta t)}  \approx e^{\Delta t A} q(t)
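This one-step linear evolution can be sketched numerically. The 2×2 generator A, the step size, and the initial state below are illustrative assumptions, not taken from the text:

```python
# Sketch: evolving a state vector by the matrix exponential e^{Δt A},
# as in q(t+Δt) ≈ e^{Δt A} q(t). A is an assumed rotation generator.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])   # generator of the continuous dynamics (assumed)
dt = 0.1
q0 = np.array([1.0, 0.0])     # instantaneous state vector (assumed)

propagator = expm(dt * A)     # discrete-time evolution operator e^{Δt A}
q1 = propagator @ q0          # state one time step later
```

For this generator the flow is a rotation, so the propagated state matches the exact ODE solution of q' = Aq.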

The dynamic mode decomposition strives to approximate the evolution operator \tilde A := e^{\Delta t A} from a known sequence of observations, V_{0 \dots n}=\{q_0, q_1, q_2, \dots, q_n  \}. Thus, we require the following matrix equation to hold:

V_{1 \dots n+1}=\tilde A   V_{0\dots n}

Generally, the vectors \{q_0, q_1, q_2, \dots, q_n  \}, and subsequently \tilde A, are very-high-dimensional, and so a strict eigendecomposition of \tilde A is computationally difficult. However, in DMD it is assumed that the set of \{q_0, q_1, q_2, \dots, q_n  \} does not span the entire vector space (a good assumption, especially if there is spatial structure in the signal). Thus, after a given time n, where n is much less than the dimensionality of the system, one can write q_{n+1} as a linear combination of the previous vectors, i.e., q_{n+1} = c_0q_0 + \cdots + c_nq_n =\{q_0, q_1, q_2, \dots, q_n  \}c. In matrix form, we then have:

V_{1 \dots n+1}=   V_{0\dots n} S

where S is the companion matrix

S = \begin{pmatrix}
0 & 0 & \dots & 0 & c_0 \\
1 & 0 & \dots & 0 & c_1 \\
0 & 1 & \dots & 0 & c_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \dots & 1 & c_n
\end{pmatrix}

The eigenvalues of S then approximate some of the eigenvalues of \tilde A. However, since S is small (of size (n + 1) × (n + 1), as compared to \tilde A), its eigenvalues and eigenvectors can be computed with ease.[1]
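The companion-matrix construction above can be sketched as follows; the 2×2 operator and the starting vector are illustrative assumptions, chosen so that the snapshots exactly satisfy the linear recursion:

```python
# Hedged sketch of companion-matrix DMD: express the newest snapshot as a
# linear combination of earlier ones, then eigendecompose the small matrix S.
import numpy as np

Atilde = np.array([[0.9, 0.2],
                   [0.0, 0.8]])          # "true" evolution operator (assumed)
q0 = np.array([1.0, 1.0])
snapshots = [q0, Atilde @ q0, Atilde @ (Atilde @ q0)]   # q_0, q_1, q_2

V0n = np.column_stack(snapshots[:-1])    # V_{0..n}, columns q_0 .. q_n
q_next = snapshots[-1]                   # q_{n+1}

# Solve V0n c = q_{n+1} for the combination coefficients c
c, *_ = np.linalg.lstsq(V0n, q_next, rcond=None)

# Companion matrix S: ones on the subdiagonal, coefficients in the last column
m = V0n.shape[1]
S = np.zeros((m, m))
S[1:, :-1] = np.eye(m - 1)
S[:, -1] = c
```

Because the data here are exactly linear, the eigenvalues of S recover those of the evolution operator (0.8 and 0.9) rather than merely approximating them.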

Theoretical Developments

While the original algorithm was proposed by Peter Schmid in 2010, considerable work has since been done to expand the utility and robustness of DMD. The DMD's dimensionality reduction is predicated on the Koopman operator, which recasts a finite-dimensional, nonlinear system as an infinite-dimensional, linear system. For a limited set of cases, the infinite-dimensional linear expansion may be truncated, revealing sparse linear behavior governing the higher-rank nonlinear dynamics.[2] The utility of Koopman analysis proper has sparked further research deviating from the original DMD algorithm, with a general focus on equation-free modeling for control.

Utility for Future-state Prediction

In addition to decomposing dynamical data into mode structures, the DMD offers the ability to predict future states.

If the state vector for some time is given by

q(t+\Delta t) = e^{\Delta t A}q(t)

as before, then the experimenter may decompose the entire experimental set of data from t_i \to t_f and allow the reconstruction, after truncation, to run past t_f into the future. These predicted future states are generally valid only if the dynamical regime does not change; if the dynamics themselves change after t_f, the reconstructed solution may diverge.
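Future-state prediction can be sketched by fitting a linear propagator to the observed snapshot pairs and iterating it forward; the 2×2 dynamics and initial state below are illustrative assumptions:

```python
# Hedged sketch: fit an evolution operator from observed data, then
# extrapolate past the last observed snapshot by repeated application.
import numpy as np

Atrue = np.array([[0.95, 0.1],
                  [0.0, 0.9]])           # "unknown" dynamics (assumed)
q = [np.array([1.0, 0.5])]
for _ in range(9):
    q.append(Atrue @ q[-1])              # observed snapshots up to t_f
X = np.column_stack(q[:-1])              # states at times 0 .. t_f - Δt
Y = np.column_stack(q[1:])               # states at times Δt .. t_f

# Least-squares fit of the one-step propagator from the snapshot pairs
Atilde = Y @ np.linalg.pinv(X)

# Prediction: apply the fitted operator five steps past t_f
q_future = np.linalg.matrix_power(Atilde, 5) @ q[-1]
```

Since the synthetic data are exactly linear, the extrapolation coincides with the true future state; for real data it degrades as soon as the dynamical regime changes.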

This property has been used in analysis of stock portfolios.[3]

Limitations to Original Algorithm

As originally written, the dynamic mode decomposition requires certain strict conditions to be met for dimensionality reduction to be effective. For example, the amount of time between data snapshots, \Delta t, could not change between snapshots (consistent temporal resolution), nor could the size of the data increase or decrease between snapshots (consistent spatial resolution). Furthermore, the dynamics themselves must be stable in order to extract low-dimensional behavior—when this condition is not met, the experimenter may encounter "mode mixing," i.e. structures that might accurately be detailed by a handful of modes would then be spread out into hundreds of modes, defeating the purpose of the decomposition.

Multiresolution Analysis

In order to resolve the problem of changing dynamical regimes, it may be sufficient to apply an intelligently selected multiresolution analysis to the DMD. In a sense, this acts as a wavelet transform within the DMD algorithm, first extracting high-energy, slow modes from the entire time window, and then proceeding by reconstructing the reduced set and evaluating, with the same technique, smaller sub-sections of time within the remaining data. Modes are extracted for each time "bin", starting with the entire window and narrowing to the smallest window of temporal resolution. The final signal is then reconstructed from these temporally limited dynamical modes. This technique has been applied to data from the National Oceanic and Atmospheric Administration, correctly identifying El Niño events in thirty years of aggregate temperature data.[4]

Exact and Projected Modes

In order to allow decomposition of data sets that are not evenly-resolved in time, theorists have generalized the DMD algorithm to consider not sequential steps forward in time, but rather collected pairs of state vectors. This Exact DMD algorithm has then outperformed the older technique—which some theorists have referred to as Projected DMD—in parsing sparse structures from inconsistently temporally resolved experimental data.[5]

Exact DMD Algorithm

If we allow this generalization, we can consider the spatial measurements of a system at a given time as a column state vector, \hat{z}(t). When these state vectors are collected as columns, they form the total data set

\bar{Z} = \begin{bmatrix} \hat{z}(t_i) & \hat{z}(t_i + 1) & \cdots & \hat{z}(t_f) \end{bmatrix}

We now regard the pairs of state vectors as \langle \hat{x}, \hat{y} \rangle, where \hat{x}(t) = \hat{z}(t-1) and \hat{y}(t) = \hat{z}(t). Thus we are, as before, still considering the mapping \hat{x} \to \hat{y}. As with \bar{Z}, we form two auxiliary matrices X and Y whose columns are state vectors across the relevant temporal range; i.e., X runs from the first to the second-to-last temporal measurement, and Y runs from the second to the last temporal measurement.

We proceed with the algorithm in On Dynamic Mode Decomposition: Theory and Applications, page 5.[5]

Perform a singular value decomposition on our "initial state" matrix:

X = U\Sigma V^*

And use the SVD of X and our "final state" matrix Y to reconstruct our approximating evolution operator:

\tilde{A} = U^*YV\Sigma^{-1}

If the dynamics are sampled perfectly and purely linear, \tilde{A} = A, where A is the exact evolution operator. Otherwise, \tilde{A} is the best-fit approximation in the least-squares (L2) sense. In either case, \tilde{A} is a linear operator, so the dynamical system can be resolved into a superposition of solutions, and we can use an eigenvalue decomposition to arrive at the solution modes:

\tilde{A} \omega = \lambda \omega

Each nonzero eigenvalue \lambda is now a DMD eigenvalue, for which the corresponding mode is:

\hat{\phi} = \frac{1}{\lambda} YV\Sigma^{-1}\omega

As the eigenvalues only represent energy content, not the dynamics themselves, they may be re-scaled in reconstruction.
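The four steps above (SVD of X, projected operator, eigendecomposition, mode formula) can be sketched on synthetic data; the 2×2 dynamics, snapshot count, and random seed are illustrative assumptions:

```python
# Hedged sketch of the Exact DMD algorithm described above, applied to
# synthetic, exactly linear snapshot data.
import numpy as np

rng = np.random.default_rng(0)
Atrue = np.array([[0.9, 1.0],
                  [0.0, 0.8]])           # "true" evolution operator (assumed)
z = [rng.standard_normal(2)]
for _ in range(10):
    z.append(Atrue @ z[-1])
X = np.column_stack(z[:-1])              # first through second-to-last snapshots
Y = np.column_stack(z[1:])               # second through last snapshots

# Step 1: SVD of the "initial state" matrix, X = U Σ V*
U, s, Vh = np.linalg.svd(X, full_matrices=False)

# Step 2: approximating evolution operator, Ã = U* Y V Σ^{-1}
Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)

# Step 3: eigendecomposition, Ã ω = λ ω
lam, W = np.linalg.eig(Atilde)

# Step 4: exact DMD modes, φ = (1/λ) Y V Σ^{-1} ω  (one column per mode)
Phi = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W / lam
```

With perfectly linear data the DMD eigenvalues coincide with those of the true operator, and each column of Phi is an eigenvector of the full dynamics.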


Trailing edge of a profile

Fig. 1: Trailing-edge vortices (entropy)

The wake of an obstacle in the flow may develop a Kármán vortex street. Fig. 1 shows the shedding of a vortex behind the trailing edge of a profile. The DMD analysis was applied to 90 sequential entropy fields and yielded an approximated eigenvalue spectrum as depicted below. The analysis was applied to the numerical results without referring to the governing equations. The profile is seen in white. The white arcs are processor boundaries, since the computation was performed on a parallel computer using different computational blocks.

Fig. 2: DMD spectrum

Roughly a third of the spectrum was highly damped (large, negative \lambda_r) and is not shown. The dominant shedding mode is shown in the following pictures. The image on the left shows the real part, the image on the right the imaginary part of the eigenvector.

[Images: Joukowsky Karman Vortex Street EV real.png / Joukowsky Karman Vortex Street EV imag.png]

Again, the entropy eigenvector is shown in this picture. The acoustic content of the same mode is seen in the bottom half of the next plot; the top half corresponds to the entropy mode as above.

[Image: Joukowsky Karman Vortex Street ps EV real.png]

Synthetic example of a traveling pattern

The DMD analysis assumes a pattern of the form 
q(x_1,x_2,x_3, \ldots)=e^ {c x_1 }\hat q(x_2,x_3,\ldots)
where  x_1 is any of the independent variables of the problem, but has to be selected in advance. Take for example the pattern

q(x,y,t)= e^{-(y/b)^2} \Re \left\{  e^{i (k x - \omega t)}    \right\} + \text{random noise}

with time as the preselected exponential coordinate x_1.

A sample is given in the following figure with  \omega = 2\pi /0.1 ,  b=0.02 and  k = 2\pi/ b . The left picture shows the pattern without, the right with noise added. The amplitude of the random noise is the same as that of the pattern.

[Images: Q periodic.png / Q periodic noise.png]

A DMD analysis is performed with 21 synthetically generated fields using a time interval  \Delta t =1/90\text{ s}, limiting the analysis to  f =45\text{ Hz} (the Nyquist frequency).

[Image: Synth Spec.png]

The spectrum is symmetric and shows three almost undamped modes (small negative real part), whereas the other modes are heavily damped. Their numerical values are  \omega_1=-0.201 and \omega_{2/3}=-0.223 \pm 62.768i, respectively. The real eigenvalue corresponds to the mean of the field, whereas  \omega_{2/3} corresponds to the imposed pattern with  f = 10\text{ Hz}, yielding a relative error of −1/1000. Increasing the noise to 10 times the signal value yields about the same error. The real and imaginary parts of one of the latter two eigenmodes are depicted in the following figure.

[Image: Detected Eigenmode.png]
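This synthetic experiment can be reproduced in sketch form; the grid resolution, noise level, and random seed below are assumptions, with the pattern parameters (b = 0.02, f = 10 Hz, Δt = 1/90 s) taken from the text:

```python
# Hedged sketch: rank-2 exact DMD recovers the 10 Hz frequency of the
# synthetic traveling pattern, despite added random noise.
import numpy as np

rng = np.random.default_rng(1)
b, f = 0.02, 10.0
omega, k, dt = 2 * np.pi * f, 2 * np.pi / b, 1.0 / 90.0

x = np.linspace(0.0, 0.1, 64)            # assumed spatial grid
y = np.linspace(-0.05, 0.05, 32)
X2, Y2 = np.meshgrid(x, y)

# 21 noisy snapshots of the pattern, flattened into columns
snaps = []
for n in range(21):
    t = n * dt
    field = np.exp(-(Y2 / b) ** 2) * np.cos(k * X2 - omega * t)
    field += 0.2 * rng.standard_normal(field.shape)   # assumed noise level
    snaps.append(field.ravel())
Z = np.column_stack(snaps)
X, Y = Z[:, :-1], Z[:, 1:]

# Rank-2 exact DMD (the noise-free pattern has rank two)
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 2
U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
Atilde = U.T @ Y @ Vh.T @ np.diag(1.0 / s)
lam = np.linalg.eigvals(Atilde)

# Continuous-time frequency of the oscillating pair, expected near 10 Hz
freq = np.abs(np.log(lam.astype(complex)).imag) / (2 * np.pi * dt)
```

The two eigenvalues form a complex-conjugate pair on (near) the unit circle, and the recovered frequency sits close to the imposed 10 Hz even with the noise present.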

See also

Several other decompositions of experimental data exist. If the governing equations are available, an eigenvalue decomposition might be feasible.


  1. ^ Schmid, Peter J. "Dynamic mode decomposition of numerical and experimental data." Journal of Fluid Mechanics 656.1 (2010): 5–28.
  2. ^ Brunton, Brunton, Proctor and Kutz (11 October 2015). "Koopman observable subspaces and finite linear representations of nonlinear dynamical systems for control". arXiv:1510.03007.
  3. ^ Mann and Kutz (18 August 2015). "Dynamic Mode Decomposition for Financial Trading Strategies". arXiv:1508.04487.
  4. ^ Kutz, Fu, and Brunton (1 June 2015). "Multi-Resolution Dynamic Mode Decomposition". arXiv:1506.00564.
  5. ^ Tu, Rowley, Luchtenburg, Brunton, and Kutz (December 2014). "On Dynamic Mode Decomposition: Theory and Applications". American Institute of Mathematical Sciences. doi:10.3934/jcd.2014.1.391. 
  • Schmid, P. J. & Sesterhenn, J. L. 2008 Dynamic mode decomposition of numerical and experimental data. In Bull. Amer. Phys. Soc., 61st APS meeting, p. 208. San Antonio.
  • Hasselmann, K., 1988. POPs and PIPs. The reduction of complex dynamical systems using principal oscillation and interaction patterns. J. Geophys. Res., 93(D9): 10975–10988.
