# Observability

In control theory, observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs. The observability and controllability of a system are mathematical duals. The concept of observability was introduced by Hungarian-American engineer Rudolf E. Kálmán for linear dynamic systems.[1][2]

## Definition

Formally, a system is said to be observable if, for any possible sequence of state and control vectors (the latter being variables whose values one can choose), the current state (the values of the underlying dynamically evolving variables) can be determined in finite time using only the outputs. (This definition uses the state space representation.) Less formally, this means that one can determine the behavior of the entire system from the system's outputs. If a system is not observable, this means that the current values of some of its state variables cannot be determined through output sensors. This implies that their value is unknown to the controller (although they can be estimated by various means).

For time-invariant linear systems in the state space representation, there is a convenient test to check whether a system is observable. Consider a SISO system with ${\displaystyle n}$ state variables (see state space for details about MIMO systems). If the row rank of the following observability matrix

${\displaystyle {\mathcal {O}}={\begin{bmatrix}C\\CA\\CA^{2}\\\vdots \\CA^{n-1}\end{bmatrix}}}$

is equal to ${\displaystyle n}$ (where the notation is defined below), then the system is observable. The rationale for this test is that if ${\displaystyle n}$ rows are linearly independent, then each of the ${\displaystyle n}$ state variables is viewable through linear combinations of the output variables ${\displaystyle y(k)}$.
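This rank test is straightforward to carry out numerically. Below is a minimal sketch using NumPy; the example matrices (a double integrator with a position sensor) are illustrative and not from the text:

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, CA^2, ..., CA^(n-1) into the observability matrix."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

def is_observable(A, C):
    """The pair (A, C) is observable iff the observability matrix has rank n."""
    n = A.shape[0]
    return bool(np.linalg.matrix_rank(observability_matrix(A, C)) == n)

# Double integrator: position is measured, velocity is inferred via the dynamics.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
print(is_observable(A, C))  # True
```

Here `O = [C; CA]` is the 2x2 identity, so its rank equals the number of states and the pair is observable.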

A module designed to estimate the state of a system from measurements of the outputs is called a state observer or simply an observer for that system.

### Observability index


The observability index ${\displaystyle v}$ of a linear time-invariant discrete system is the smallest natural number for which the following is satisfied: ${\displaystyle {\text{rank}}{({\mathcal {O}}_{v})}={\text{rank}}{({\mathcal {O}}_{v+1})}}$, where

${\displaystyle {\mathcal {O}}_{v}={\begin{bmatrix}C\\CA\\CA^{2}\\\vdots \\CA^{v-1}\end{bmatrix}}.}$
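The definition above translates directly into a search over growing stacks of blocks. A minimal NumPy sketch (the example pair is illustrative):

```python
import numpy as np

def observability_index(A, C):
    """Smallest v with rank(O_v) == rank(O_{v+1}),
    where O_v stacks C, CA, ..., CA^(v-1)."""
    n = A.shape[0]
    blocks = [C]
    prev_rank = np.linalg.matrix_rank(C)
    for v in range(1, n + 1):
        blocks.append(blocks[-1] @ A)  # append the next block C A^v
        rank = np.linalg.matrix_rank(np.vstack(blocks))
        if rank == prev_rank:          # rank stopped growing
            return v
        prev_rank = rank
    return n  # by Cayley–Hamilton the rank cannot grow past n blocks

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
print(observability_index(A, C))  # 2: both blocks C and CA are needed
```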
### Unobservable subspace

The unobservable subspace ${\displaystyle N}$ of the linear system ${\displaystyle (A,C)}$ is the kernel of the linear map ${\displaystyle G}$ given by[3]

${\displaystyle G\colon R^{n}\rightarrow {\mathcal {C}}(t_{0},t_{1};R^{n})}$
${\displaystyle x_{0}\mapsto C\Phi (t,t_{0})x_{0}}$,

where ${\displaystyle {\mathcal {C}}(t_{0},t_{1};R^{n})}$ is the set of continuous functions ${\displaystyle f:[t_{0},t_{1}]\to R^{n}}$ and ${\displaystyle \Phi (t,t_{0})}$ is the state-transition matrix associated to ${\displaystyle A}$.

If ${\displaystyle (A,C)}$ is an autonomous system, ${\displaystyle N}$ can be written as [4]

${\displaystyle N=\bigcap _{k=0}^{n-1}\ker(CA^{k})=\ker {\mathcal {O}}}$

Example: Consider ${\displaystyle A}$ and ${\displaystyle C}$ given by:

${\displaystyle A={\begin{bmatrix}1&0\\0&1\end{bmatrix}}}$, ${\displaystyle C={\begin{bmatrix}0&1\\\end{bmatrix}}}$.

If the observability matrix is defined by ${\displaystyle {\mathcal {O}}:=(C^{T}|A^{T}C^{T})^{T}}$, it can be calculated as follows:

${\displaystyle {\mathcal {O}}={\begin{bmatrix}0&1\\0&1\end{bmatrix}}}$

Let us now calculate the kernel of the observability matrix.

${\displaystyle {\mathcal {O}}v=0}$

${\displaystyle {\begin{bmatrix}0&1\\0&1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}\to v={\begin{bmatrix}v_{1}\\0\end{bmatrix}}\to v=v_{1}{\begin{bmatrix}1\\0\end{bmatrix}}}$

${\displaystyle \ker({\mathcal {O}})=N=\mathop {\mathrm {span} } \left\{{\begin{bmatrix}1\\0\end{bmatrix}}\right\}}$

The system is observable if ${\displaystyle \mathop {\mathrm {rank} } ({\mathcal {O}})=n}$, where ${\displaystyle n}$ is the number of state variables. In this example ${\displaystyle \mathop {\mathrm {det} } ({\mathcal {O}})=0}$, so ${\displaystyle \mathop {\mathrm {rank} } ({\mathcal {O}})=1<n=2}$ and the system is unobservable.
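The kernel computation in this example can be reproduced numerically; a common approach is to read the null space off the singular value decomposition. A minimal NumPy sketch:

```python
import numpy as np

A = np.eye(2)               # A = I
C = np.array([[0.0, 1.0]])  # C = [0 1]

# Observability matrix O = [C; CA]
O = np.vstack([C, C @ A])

# Null space of O via the SVD: the right singular vectors whose
# singular values are (numerically) zero span ker(O).
_, s, Vt = np.linalg.svd(O)
tol = max(O.shape) * np.finfo(float).eps * s[0]
null_dim = O.shape[1] - int(np.sum(s > tol))
N = Vt[int(np.sum(s > tol)):].T  # columns span the unobservable subspace

print(np.linalg.matrix_rank(O))  # 1 < n = 2, so the system is unobservable
print(N.ravel())                 # spans span{[1, 0]^T} (up to sign)
```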

Since the unobservable subspace is the kernel of a linear map, it is a subspace of ${\displaystyle R^{n}}$. The following properties hold:[5]

• ${\displaystyle N\subset \ker(C)}$
• ${\displaystyle A(N)\subset N}$
• ${\displaystyle N=\bigcup {\{S\subset R^{n}\mid S\subset \ker(C),\,A(S)\subset S\}}}$
### Detectability

A slightly weaker notion than observability is detectability. A system is detectable if every unobservable state is stable.[6] Detectability conditions have also been developed for sensor networks.[7][8]
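One standard way to check detectability is the Popov–Belevitch–Hautus (PBH) test: in continuous time, ${\displaystyle (A,C)}$ is detectable if and only if ${\displaystyle {\begin{bmatrix}\lambda I-A\\C\end{bmatrix}}}$ has full column rank for every eigenvalue ${\displaystyle \lambda }$ of ${\displaystyle A}$ with nonnegative real part. A minimal NumPy sketch (the example matrices are illustrative):

```python
import numpy as np

def is_detectable(A, C, tol=1e-9):
    """PBH test (continuous time): (A, C) is detectable iff
    rank([lam*I - A; C]) = n for every eigenvalue lam with Re(lam) >= 0,
    i.e. every unstable mode is observable."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if lam.real >= -tol:
            pbh = np.vstack([lam * np.eye(n) - A, C])
            if np.linalg.matrix_rank(pbh) < n:
                return False
    return True

# The mode with eigenvalue -1 is unobservable but stable, so the
# pair is detectable even though it is not observable.
A = np.array([[-1.0, 0.0],
              [ 0.0, 1.0]])
C = np.array([[0.0, 1.0]])
print(is_detectable(A, C))  # True
```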

## Continuous time-varying system

Consider the continuous linear time-variant system

${\displaystyle {\dot {\mathbf {x} }}(t)=A(t)\mathbf {x} (t)+B(t)\mathbf {u} (t)\,}$
${\displaystyle \mathbf {y} (t)=C(t)\mathbf {x} (t).\,}$

Suppose that the matrices ${\displaystyle A,B,{\text{ and }}C}$ are given as well as inputs and outputs ${\displaystyle u{\text{ and }}y}$ for all ${\displaystyle t\in [t_{0},t_{1}];}$ then it is possible to determine ${\displaystyle x(t_{0})}$ to within an additive constant vector which lies in the null space of ${\displaystyle M(t_{0},t_{1})}$ defined by

${\displaystyle M(t_{0},t_{1})=\int _{t_{0}}^{t_{1}}\phi (t,t_{0})^{T}C(t)^{T}C(t)\phi (t,t_{0})dt}$

where ${\displaystyle \phi }$ is the state-transition matrix.

It is possible to determine a unique ${\displaystyle x(t_{0})}$ if ${\displaystyle M(t_{0},t_{1})}$ is nonsingular. In fact, it is not possible to distinguish the initial state for ${\displaystyle x_{1}}$ from that of ${\displaystyle x_{2}}$ if ${\displaystyle x_{1}-x_{2}}$ is in the null space of ${\displaystyle M(t_{0},t_{1})}$.
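The Gramian ${\displaystyle M(t_{0},t_{1})}$ can be approximated numerically and its singularity checked by a rank test. A minimal sketch using NumPy and SciPy for a constant ${\displaystyle A}$ (so that ${\displaystyle \phi (t,t_{0})=e^{A(t-t_{0})}}$); the matrices and quadrature scheme are illustrative:

```python
import numpy as np
from scipy.linalg import expm

def observability_gramian(A, C, t0, t1, steps=2001):
    """Trapezoidal approximation of
    M(t0, t1) = integral of Phi(t,t0)^T C^T C Phi(t,t0) dt,
    where Phi(t, t0) = expm(A (t - t0)) for constant A."""
    ts = np.linspace(t0, t1, steps)
    dt = (t1 - t0) / (steps - 1)
    M = np.zeros((A.shape[0], A.shape[0]))
    for i, t in enumerate(ts):
        Phi = expm(A * (t - t0))
        w = 0.5 if i in (0, steps - 1) else 1.0  # trapezoidal end weights
        M += w * dt * Phi.T @ C.T @ C @ Phi
    return M

# Double integrator with a position output (illustrative matrices).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
M = observability_gramian(A, C, 0.0, 1.0)
print(np.linalg.matrix_rank(M))  # 2: M is nonsingular, so x(t0) is recoverable
```

For this example the integral evaluates in closed form to ${\displaystyle {\begin{bmatrix}1&1/2\\1/2&1/3\end{bmatrix}}}$, which the quadrature reproduces.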

Note that the matrix ${\displaystyle M}$ defined as above has the following properties:

• ${\displaystyle M(t_{0},t_{1})}$ is symmetric
• ${\displaystyle M(t_{0},t_{1})}$ is positive semidefinite for ${\displaystyle t_{1}\geq t_{0}}$
• ${\displaystyle M(t_{0},t_{1})}$ satisfies the linear matrix differential equation
${\displaystyle {\frac {d}{dt}}M(t,t_{1})=-A(t)^{T}M(t,t_{1})-M(t,t_{1})A(t)-C(t)^{T}C(t),\;M(t_{1},t_{1})=0}$
• ${\displaystyle M(t_{0},t_{1})}$ satisfies the equation
${\displaystyle M(t_{0},t_{1})=M(t_{0},t)+\phi (t,t_{0})^{T}M(t,t_{1})\phi (t,t_{0})}$[9]

### Observability

The system is observable on [${\displaystyle t_{0}}$,${\displaystyle t_{1}}$] if and only if the matrix ${\displaystyle M(t_{0},t_{1})}$ is nonsingular.

If ${\displaystyle A(t),C(t)}$ are analytic, then the system is observable in the interval [${\displaystyle t_{0}}$,${\displaystyle t_{1}}$] if there exists ${\displaystyle {\bar {t}}\in [t_{0},t_{1}]}$ and a positive integer k such that[10]

${\displaystyle \operatorname {rank} {\begin{bmatrix}N_{0}({\bar {t}})\\N_{1}({\bar {t}})\\\vdots \\N_{k}({\bar {t}})\end{bmatrix}}=n,}$

where ${\displaystyle N_{0}(t):=C(t)}$ and ${\displaystyle N_{i}(t)}$ is defined recursively as

${\displaystyle N_{i+1}(t):=N_{i}(t)A(t)+{\frac {\mathrm {d} }{\mathrm {d} t}}N_{i}(t),\ i=0,\ldots ,k-1}$

### Example

Consider a system varying analytically in ${\displaystyle (-\infty ,\infty )}$ and matrices

${\displaystyle A(t)={\begin{bmatrix}t&1&0\\0&t^{3}&0\\0&0&t^{2}\end{bmatrix}}}$, ${\displaystyle C(t)={\begin{bmatrix}1&0&1\end{bmatrix}}.}$ Then ${\displaystyle {\begin{bmatrix}N_{0}(0)\\N_{1}(0)\\N_{2}(0)\end{bmatrix}}={\begin{bmatrix}1&0&1\\0&1&0\\1&0&0\end{bmatrix}}}$, and since this matrix has rank 3, the system is observable on every nontrivial interval of ${\displaystyle \mathbb {R} }$.
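The recursion ${\displaystyle N_{i+1}=N_{i}A+{\dot {N}}_{i}}$ for this example can be checked symbolically; a minimal sketch using SymPy:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[t, 1, 0],
               [0, t**3, 0],
               [0, 0, t**2]])
C = sp.Matrix([[1, 0, 1]])

# N_0 = C,  N_{i+1} = N_i A + d/dt N_i
N = [C]
for i in range(2):
    N.append(N[-1] * A + N[-1].diff(t))

stacked = sp.Matrix.vstack(*[Ni.subs(t, 0) for Ni in N])
print(stacked)         # Matrix([[1, 0, 1], [0, 1, 0], [1, 0, 0]])
print(stacked.rank())  # 3, so the system is observable
```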

## Nonlinear case

Given the system ${\displaystyle {\dot {x}}=f(x)+\sum _{j=1}^{m}g_{j}(x)u_{j}}$, ${\displaystyle y_{i}=h_{i}(x),\ i\in \{1,\ldots ,p\}}$, where ${\displaystyle x\in \mathbb {R} ^{n}}$ is the state vector, ${\displaystyle u\in \mathbb {R} ^{m}}$ the input vector, and ${\displaystyle y\in \mathbb {R} ^{p}}$ the output vector. The maps ${\displaystyle f}$ and ${\displaystyle g_{j}}$ are assumed to be smooth vector fields and the ${\displaystyle h_{i}}$ smooth functions.

Define the observation space ${\displaystyle {\mathcal {O}}_{s}}$ to be the linear space spanned by the output functions ${\displaystyle h_{i}}$ together with all of their repeated Lie derivatives along ${\displaystyle f}$ and the ${\displaystyle g_{j}}$. The system is then observable at ${\displaystyle x_{0}}$ if and only if ${\displaystyle {\textrm {dim}}(d{\mathcal {O}}_{s}(x_{0}))=n}$.

Note: ${\displaystyle d{\mathcal {O}}_{s}(x_{0})=\mathrm {span} \left(dh_{1}(x_{0}),\ldots ,dh_{p}(x_{0}),\,dL_{v_{k}}L_{v_{k-1}}\cdots L_{v_{1}}h_{j}(x_{0})\right),\quad j\in \{1,\ldots ,p\},\ k=1,2,\ldots ,}$ where each ${\displaystyle v_{i}}$ is one of the vector fields ${\displaystyle f,g_{1},\ldots ,g_{m}}$.[11]
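The Lie-derivative rank condition can be evaluated symbolically. The following SymPy sketch uses a hypothetical pendulum-like system (not from the text) in which only the velocity is measured, and also shows that the condition is local, failing at some points:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])

# Hypothetical pendulum-like drift field; only the velocity x2 is measured.
f = sp.Matrix([x2, -sp.sin(x1)])
h = x2

def lie_derivative(h_expr, field, vars_):
    """L_f h = (dh) f, the derivative of h along the flow of f."""
    return (sp.Matrix([h_expr]).jacobian(vars_) * field)[0, 0]

# dO_s is spanned by the differentials of h and its repeated Lie derivatives.
L0, L1 = h, lie_derivative(h, f, x)
dO = sp.Matrix([L0, L1]).jacobian(x)

print(dO.subs({x1: 0, x2: 0}).rank())          # 2 = n: observable at the origin
print(dO.subs({x1: sp.pi / 2, x2: 0}).rank())  # 1 < n: the rank condition fails here
```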

Early criteria for observability in nonlinear dynamic systems were discovered by Griffith and Kumar,[12] Kou, Elliot and Tarn,[13] and Singh.[14]

## Static systems and general topological spaces

Observability may also be characterized for steady state systems (systems typically defined in terms of algebraic equations and inequalities), or more generally, for sets in ${\displaystyle \mathbb {R} ^{n}}$.[15][16] Just as observability criteria are used to predict the behavior of Kalman filters or other observers in the dynamic system case, observability criteria for sets in ${\displaystyle \mathbb {R} ^{n}}$ are used to predict the behavior of data reconciliation and other static estimators. In the nonlinear case, observability can be characterized for individual variables, and also for local estimator behavior rather than just global behavior.

## References

1. ^ Kalman R. E., "On the General Theory of Control Systems", Proc. 1st Int. Cong. of IFAC, Moscow 1960 1481, Butterworth, London 1961.
2. ^ Kalman R. E., "Mathematical Description of Linear Dynamical Systems", SIAM J. Contr. 1963 1 152
3. ^ Sontag, E.D., "Mathematical Control Theory", Texts in Applied Mathematics, 1998
4. ^ Sontag, E.D., "Mathematical Control Theory", Texts in Applied Mathematics, 1998
5. ^ Sontag, E.D., "Mathematical Control Theory", Texts in Applied Mathematics, 1998
6. ^ http://www.ece.rutgers.edu/~gajic/psfiles/chap5traCO.pdf
7. ^ Li, W.; Wei, G.; Ho, D. W. C.; Ding, D. (November 2018). "A Weightedly Uniform Detectability for Sensor Networks". IEEE Transactions on Neural Networks and Learning Systems. 29 (11): 5790–5796. doi:10.1109/TNNLS.2018.2817244.
8. ^ Li, W.; Wang, Z.; Ho, D. W. C.; Wei, G. (2019). "On Boundedness of Error Covariances for Kalman Consensus Filtering Problems". IEEE Transactions on Automatic Control: 1–1. doi:10.1109/TAC.2019.2942826.
9. ^ Brockett, Roger W. (1970). Finite Dimensional Linear Systems. John Wiley & Sons. ISBN 978-0-471-10585-5.
10. ^ Eduardo D. Sontag, Mathematical Control Theory: Deterministic Finite Dimensional Systems.
11. ^ Lecture notes for Nonlinear Systems Theory by prof. dr. D.Jeltsema, prof dr. J.M.A.Scherpen and prof dr. A.J.van der Schaft.
12. ^ Griffith E. W. and Kumar K. S. P., "On the Observability of Nonlinear Systems I", J. Math. Anal. Appl. 1971 35 135
13. ^ Kou S. R., Elliott D. L. and Tarn T. J., Inf. Contr. 1973 22 89
14. ^ Singh S. N., "Observability in Non-linear Systems with Immeasurable Inputs", Int. J. Syst. Sci., 6 723, 1975
15. ^ Stanley G. M. and Mah R. S. H., "Observability and Redundancy in Process Data Estimation", Chem. Engng. Sci. 36, 259 (1981)
16. ^ Stanley G.M., and Mah R.S.H., "Observability and Redundancy Classification in Process Networks", Chem. Engng. Sci. 36, 1941 (1981)