# Distributed parameter system


A distributed parameter system (as opposed to a lumped parameter system) is a system whose state space is infinite-dimensional. Such systems are therefore also known as infinite-dimensional systems. Typical examples are systems described by partial differential equations or by delay differential equations.

## Linear time-invariant distributed parameter systems

### Abstract evolution equations

#### Discrete-time

With U, X and Y Hilbert spaces and $A\in L(X)$, $B\in L(U,X)$, $C\in L(X,Y)$ and $D\in L(U,Y)$, the following equations determine a discrete-time linear time-invariant system:

$x(k+1)=Ax(k)+Bu(k)\,$
$y(k)=Cx(k)+Du(k)\,$

with $x\,$ (the state) a sequence with values in X, $u\,$ (the input or control) a sequence with values in U and $y\,$ (the output) a sequence with values in Y.
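As a concrete (and intentionally finite-dimensional) sketch of these equations, the following hypothetical two-dimensional example simply iterates the state recursion; the particular matrices are illustrative choices, not taken from the text.

```python
import numpy as np

# Hypothetical finite-dimensional instance of the abstract system
#   x(k+1) = A x(k) + B u(k),   y(k) = C x(k) + D u(k).
A = np.array([[0.5, 1.0], [0.0, 0.5]])   # A in L(X), with X = R^2 here
B = np.array([[0.0], [1.0]])             # B in L(U, X), U = R
C = np.array([[1.0, 0.0]])               # C in L(X, Y), Y = R
D = np.array([[0.0]])                    # D in L(U, Y)

def simulate(u_seq, x0):
    """Iterate the state recursion and collect the outputs."""
    x = x0
    ys = []
    for u in u_seq:
        ys.append(C @ x + D @ u)         # output before the state update
        x = A @ x + B @ u                # state recursion
    return np.array(ys), x

u_seq = [np.array([1.0])] * 3            # constant input for three steps
y, x_final = simulate(u_seq, np.zeros(2))
```

In the infinite-dimensional setting X is replaced by a function space and the matrices by bounded operators, but the recursion itself is unchanged.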

#### Continuous-time

The continuous-time case is similar to the discrete-time case but now one considers differential equations instead of difference equations:

$\dot{x}(t)=Ax(t)+Bu(t)\,$,
$y(t)=Cx(t)+Du(t)\,$.

A complication now arises: to include interesting physical examples such as partial differential equations and delay differential equations in this abstract framework, one is forced to consider unbounded operators. Usually A is assumed to generate a strongly continuous semigroup on the state space X. Assuming B, C and D to be bounded operators already allows for the inclusion of many interesting physical examples,[1] but many other interesting physical examples force B and C to be unbounded as well.
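When A is a bounded operator (for instance a matrix, or here simply a scalar), the semigroup is $e^{At}$ and the solution is given by the variation-of-constants formula $x(t)=e^{At}x_0+\int_0^t e^{A(t-s)}Bu(s)\,ds$. The following scalar sketch, with hypothetical coefficients of my own choosing, evaluates that formula in closed form for a constant input:

```python
import numpy as np

# Scalar sketch: A = -a generates the semigroup T(t) = e^{-a t} on X = R.
a, b, c, d = 1.0, 1.0, 1.0, 0.0          # illustrative values

def x_of_t(t, x0, u_const):
    # variation of constants with constant input:
    # x(t) = e^{At} x0 + \int_0^t e^{A(t-s)} B u ds = e^{-at} x0 + (1 - e^{-at}) b u / a
    eAt = np.exp(-a * t)
    return eAt * x0 + (1.0 - eAt) / a * b * u_const

def y_of_t(t, x0, u_const):
    return c * x_of_t(t, x0, u_const) + d * u_const
```

For unbounded A the exponential no longer makes literal sense, which is exactly why the semigroup framework is needed.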

### Example: a partial differential equation

The partial differential equation with $t>0$ and $\xi\in[0,1]$ given by

$\frac{\partial}{\partial t}w(t,\xi)=-\frac{\partial}{\partial\xi}w(t,\xi)+u(t),$
$w(0,\xi)=w_0(\xi),$
$w(t,0)=0,$
$y(t)=\int_0^1 w(t,\xi)\,d\xi,$

fits into the abstract evolution equation framework described above as follows. The input space U and the output space Y are both chosen to be the set of complex numbers. The state space X is chosen to be L2(0, 1). The operator A is defined as

$Ax=-x',~~~D(A)=\left\{x\in X: x\text{ absolutely continuous }, x'\in L^2(0,1)\text{ and }x(0)=0\right\}.$

It can be shown[2] that A generates a strongly continuous semigroup on X. The bounded operators B, C and D are defined as

$Bu=u,~~~Cx=\int_0^1 x(\xi)\,d\xi,~~~D=0.$
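A simple way to get a feel for this example is to semi-discretize it: an upwind finite-difference grid turns A into a matrix, and the Riemann sum approximates C. This is a numerical sketch of my own (grid size and time-stepping scheme are arbitrary choices), not part of the abstract framework. With the constant input u ≡ 1, the steady state of the PDE is w(ξ) = ξ, so the output should approach 1/2:

```python
import numpy as np

# Upwind semi-discretization of  w_t = -w_ξ + u,  w(t,0) = 0,  y = ∫_0^1 w dξ.
n = 400
h = 1.0 / n
# A_h approximates A x = -x' on the grid (lower bidiagonal, boundary x(0) = 0)
A_h = (np.diag(-np.ones(n)) + np.diag(np.ones(n - 1), -1)) / h
B_h = np.ones(n)                    # B u = u (same forcing at every grid point)
C_h = h * np.ones(n)                # C x = ∫_0^1 x dξ ≈ Riemann sum

# explicit Euler in time; dt < h keeps the upwind scheme stable
dt = 0.5 * h
w = np.zeros(n)
for _ in range(int(3.0 / dt)):      # run to t = 3 with u ≡ 1
    w = w + dt * (A_h @ w + B_h * 1.0)
y = C_h @ w                         # steady state w(ξ) = ξ gives y → 1/2
```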

### Example: a delay differential equation

The delay differential equation

$\dot{w}(t)=w(t)+w(t-\tau)+u(t),$
$y(t)=w(t),$

fits into the abstract evolution equation framework described above as follows. The input space U and the output space Y are both chosen to be the set of complex numbers. The state space X is chosen to be the product of the complex numbers with L2(−τ, 0). The operator A is defined as

$A\begin{pmatrix}r\\f\end{pmatrix}=\begin{pmatrix}r+f(-\tau)\\f'\end{pmatrix},~~~D(A)=\left\{\begin{pmatrix}r\\f\end{pmatrix}\in X: f\text{ absolutely continuous }, f'\in L^2([-\tau,0])\text{ and }r=f(0)\right\}.$

It can be shown[3] that A generates a strongly continuous semigroup on X. The bounded operators B, C and D are defined as

$Bu=\begin{pmatrix}u\\0\end{pmatrix},~~~C\begin{pmatrix}r\\f\end{pmatrix}=r,~~~D=0.$
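The product state space is natural here because solving the delay equation forward requires both the current value w(t) and the recent history of w on [t − τ, t]. The following Euler simulation (a sketch with step size and horizon chosen by me) carries exactly that pair, storing the history in a circular buffer:

```python
import numpy as np

# Euler simulation of  ẇ(t) = w(t) + w(t - τ) + u(t);  the evolving "state"
# is the pair (w(t), w on [t - τ, t]), matching X = C × L²(-τ, 0).
tau = 1.0
dt = 1e-3
m = int(tau / dt)                 # history buffer length

def simulate(t_end, u_const, history=0.0):
    hist = [history] * m          # w on [-τ, 0), here a constant history
    w = history                   # w(0)
    for k in range(int(t_end / dt)):
        w_delayed = hist[k % m]   # ≈ w(t - τ), read before overwriting
        w_new = w + dt * (w + w_delayed + u_const)
        hist[k % m] = w           # store w(t) for use τ seconds later
        w = w_new
    return w

# For t < τ and zero history the delayed term vanishes, so ẇ = w + u and
# w(t) = u (e^t - 1); compare at t = 0.5:
w_half = simulate(0.5, 1.0)
```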

### Transfer functions

As in the finite-dimensional case the transfer function is defined through the Laplace transform (continuous-time) or Z-transform (discrete-time). Whereas in the finite-dimensional case the transfer function is a proper rational function, the infinite-dimensionality of the state space leads to irrational functions (which are however still holomorphic).

#### Discrete-time

In discrete-time the transfer function is given in terms of the state space parameters by $D+\sum_{k=0}^\infty CA^kBz^{k+1}$, and it is holomorphic in a disc centered at the origin.[4] In case 1/z belongs to the resolvent set of A (which is the case on a possibly smaller disc centered at the origin) the transfer function equals $D+Cz(I-zA)^{-1}B$. An interesting fact is that any function that is holomorphic at zero is the transfer function of some discrete-time system.
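In finite dimensions the agreement between the resolvent formula $D+Cz(I-zA)^{-1}B$ and its power series expansion $D+\sum_k CA^kBz^{k+1}$ is easy to check numerically; the matrices below are hypothetical choices with spectral radius small enough that the truncated series converges at the chosen z:

```python
import numpy as np

# Finite-dimensional check (an illustration, not the infinite-dimensional case):
# the truncated series D + Σ_k C A^k B z^{k+1} should agree with the closed
# form D + C z (I - zA)^{-1} B wherever 1/z lies in the resolvent set of A.
A = np.array([[0.5, 0.2], [0.0, 0.3]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, -1.0]])
D = np.array([[0.2]])
z = 0.4 + 0.3j                    # |z| ρ(A) < 1, so the series converges

series = D.astype(complex).copy()
Ak_B = B.astype(complex)
for k in range(200):              # truncate far past numerical convergence
    series = series + C @ Ak_B * z ** (k + 1)
    Ak_B = A @ Ak_B               # advance A^k B -> A^{k+1} B

closed = D + C @ (z * np.linalg.solve(np.eye(2) - z * A, B))
```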

#### Continuous-time

If A generates a strongly continuous semigroup and B, C and D are bounded operators, then[5] the transfer function is given in terms of the state space parameters by $D+C(sI-A)^{-1}B$ for s with real part larger than the exponential growth bound of the semigroup generated by A. In more general situations this formula as it stands may not even make sense, but an appropriate generalization of it still holds.[6] To obtain an easy expression for the transfer function it is often better to take the Laplace transform of the given differential equation than to use the state space formulas, as illustrated below for the two examples given above.

#### Transfer function for the partial differential equation example

Setting the initial condition $w_0$ equal to zero and denoting Laplace transforms with respect to t by capital letters we obtain from the partial differential equation given above

$sW(s,\xi)=-\frac{d}{d\xi}W(s,\xi)+U(s),$
$W(s,0)=0,$
$Y(s)=\int_0^1 W(s,\xi)\,d\xi.$

This is an inhomogeneous linear differential equation with $\xi$ as the variable, s as a parameter and initial condition zero. The solution is $W(s,\xi)=U(s)(1-e^{-s\xi})/s$. Substituting this in the equation for Y and integrating gives $Y(s)=U(s)(e^{-s}+s-1)/s^2$ so that the transfer function is $(e^{-s}+s-1)/s^2$.
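The closed-form transfer function can be cross-checked against the semi-discretized system by evaluating $C_h(sI-A_h)^{-1}B_h$ on an upwind grid (a numerical sketch of my own; grid size and the test point s are arbitrary, and the agreement is only up to the first-order discretization error):

```python
import numpy as np

# Evaluate C_h (sI - A_h)^{-1} B_h on an upwind semi-discretization of the
# transport PDE and compare with the exact transfer function (e^{-s}+s-1)/s².
n = 1000
h = 1.0 / n
A_h = (np.diag(-np.ones(n)) + np.diag(np.ones(n - 1), -1)) / h
B_h = np.ones(n)
C_h = h * np.ones(n)

s = 1.0 + 2.0j                                   # arbitrary test point, Re s > 0
G_num = C_h @ np.linalg.solve(s * np.eye(n) - A_h, B_h)
G_exact = (np.exp(-s) + s - 1) / s ** 2
```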

#### Transfer function for the delay differential equation example

Proceeding similarly as for the partial differential equation example, the transfer function for the delay equation example is[7] $1/(s-1-e^{-s\tau})$.

### Controllability

In the infinite-dimensional case there are several non-equivalent definitions of controllability which for the finite-dimensional case collapse to the one usual notion of controllability. The three most important controllability concepts are:

• Exact controllability,
• Approximate controllability,
• Null controllability.

#### Controllability in discrete-time

An important role is played by the maps $\Phi_n$, which map the set of all U-valued sequences into X and are given by $\Phi_n u=\sum_{k=0}^n A^kBu_k$. The interpretation is that $\Phi_nu$ is the state that is reached by applying the input sequence u when the initial condition is zero. The system is called

• exactly controllable in time n if the range of $\Phi_n$ equals X,
• approximately controllable in time n if the range of $\Phi_n$ is dense in X,
• null controllable in time n if the range of $\Phi_n$ includes the range of $A^n$.
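In finite dimensions the three notions coincide, and the range of $\Phi_n$ is simply the column space of the reachability matrix $[B, AB, \ldots, A^nB]$. A small sketch with a hypothetical pair (A, B) of my own choosing:

```python
import numpy as np

# The range of Φ_n is the column space of [B, AB, ..., A^n B]; full rank
# means every state is reachable, i.e. exact controllability in time n.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

def reachability_matrix(A, B, n):
    blocks = [B]
    for _ in range(n):
        blocks.append(A @ blocks[-1])   # append A^{k+1} B
    return np.hstack(blocks)

R1 = reachability_matrix(A, B, 1)       # columns B and AB
rank = np.linalg.matrix_rank(R1)        # rank 2 = dim X ⇒ controllable in time 1
```

With this pair, time 0 (column B alone) is not enough, but time 1 already reaches all of X.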

#### Controllability in continuous-time

In controllability of continuous-time systems the map $\Phi_t$ given by $\int_0^t {\rm e}^{As}Bu(s)\,ds$ plays the role that $\Phi_n$ plays in discrete-time. However, the space of control functions on which this operator acts now influences the definition. The usual choice is L2(0, ∞;U), the space of (equivalence classes of) U-valued square integrable functions on the interval (0, ∞), but other choices such as L1(0, ∞;U) are possible. The different controllability notions can be defined once the domain of $\Phi_t$ is chosen. The system is called[8]

• exactly controllable in time t if the range of $\Phi_t$ equals X,
• approximately controllable in time t if the range of $\Phi_t$ is dense in X,
• null controllable in time t if the range of $\Phi_t$ includes the range of ${\rm e}^{At}$.
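With the L2 choice of control space, surjectivity of $\Phi_t$ is connected to positivity of the controllability Gramian $W_t=\int_0^t e^{As}BB^*e^{A^*s}\,ds$. The following sketch evaluates this integral by quadrature for the double integrator, a hypothetical example chosen because its A is nilpotent and $e^{As}=I+As$ is exact:

```python
import numpy as np

# Controllability Gramian W_t = ∫_0^t e^{As} B B^T e^{A^T s} ds for the
# double integrator; W_t positive definite ⇒ every state is reachable.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
t = 2.0

# trapezoidal quadrature of the Gramian integrand
ss = np.linspace(0.0, t, 2001)
W = np.zeros((2, 2))
prev = None
for s in ss:
    eAs = np.eye(2) + A * s            # exact, because A² = 0
    integrand = eAs @ B @ B.T @ eAs.T
    if prev is not None:
        W += 0.5 * (s - s_prev) * (integrand + prev)
    prev, s_prev = integrand, s

eigs = np.linalg.eigvalsh(W)           # both eigenvalues positive ⇒ W_t ≻ 0
```

The exact value at t = 2 is $W_2=\begin{pmatrix}8/3&2\\2&2\end{pmatrix}$, against which the quadrature can be checked.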

### Observability

As in the finite-dimensional case, observability is the dual notion of controllability. In the infinite-dimensional case there are several different notions of observability which in the finite-dimensional case coincide. The three most important ones are:

• Exact observability (also known as continuous observability),
• Approximate observability,
• Final state observability.

#### Observability in discrete-time

An important role is played by the maps $\Psi_n$, which map X into the space of all Y-valued sequences and are given by $(\Psi_nx)_k=CA^kx$ if k ≤ n and zero if k > n. The interpretation is that $\Psi_nx$ is the truncated output with initial condition x and control zero. The system is called

• exactly observable in time n if there exists a $k_n > 0$ such that $\|\Psi_nx\|\geq k_n\|x\|$ for all x ∈ X,
• approximately observable in time n if $\Psi_n$ is injective,
• final state observable in time n if there exists a $k_n > 0$ such that $\|\Psi_nx\|\geq k_n\|A^nx\|$ for all x ∈ X.
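In finite dimensions, stacking the rows $CA^k$ for k = 0, …, n gives the observability matrix, and the best constant $k_n$ in the exact observability estimate is its smallest singular value. A sketch with a hypothetical pair (A, C):

```python
import numpy as np

# Stacking (Ψ_n x)_k = C A^k x, k = 0..n, gives the observability matrix O_n;
# the bound ‖Ψ_n x‖ ≥ k_n ‖x‖ holds with k_n = σ_min(O_n).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])

def observability_matrix(A, C, n):
    rows = [C]
    for _ in range(n):
        rows.append(rows[-1] @ A)       # append C A^{k+1}
    return np.vstack(rows)

O1 = observability_matrix(A, C, 1)      # rows C and CA
k_n = np.linalg.svd(O1, compute_uv=False).min()
# k_n > 0 ⇒ exactly observable (hence also approximately observable) in time 1
```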

#### Observability in continuous-time

In observability of continuous-time systems the map $\Psi_t$ given by $(\Psi_tx)(s)=C{\rm e}^{As}x$ for s ∈ [0, t] and zero for s > t plays the role that $\Psi_n$ plays in discrete-time. However, the space of functions into which this operator maps now influences the definition. The usual choice is L2(0, ∞; Y), the space of (equivalence classes of) Y-valued square integrable functions on the interval (0, ∞), but other choices such as L1(0, ∞; Y) are possible. The different observability notions can be defined once the co-domain of $\Psi_t$ is chosen. The system is called[9]

• exactly observable in time t if there exists a $k_t > 0$ such that $\|\Psi_tx\|\geq k_t\|x\|$ for all x ∈ X,
• approximately observable in time t if $\Psi_t$ is injective,
• final state observable in time t if there exists a $k_t > 0$ such that $\|\Psi_tx\|\geq k_t\|{\rm e}^{At}x\|$ for all x ∈ X.

### Duality between controllability and observability

As in the finite-dimensional case, controllability and observability are dual concepts (at least when for the domain of $\Phi$ and the co-domain of $\Psi$ the usual L2 choice is made). The correspondence under duality of the different concepts is:[10]

• Exact controllability ↔ Exact observability,
• Approximate controllability ↔ Approximate observability,
• Null controllability ↔ Final state observability.
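The finite-dimensional shadow of this duality is the classical fact that (A, B) is controllable if and only if $(A^*, B^*)$ is observable, since the reachability matrix of one pair is the transpose of the observability matrix of the other. A sketch with hypothetical matrices:

```python
import numpy as np

# Duality in finite dimensions: the reachability matrix of (A, B) is the
# transpose of the observability matrix of (A^T, B^T).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])

R = np.hstack([B, A @ B])            # reachability matrix of (A, B)
O = np.vstack([B.T, B.T @ A.T])      # observability matrix of (A^T, B^T)

same = np.allclose(R, O.T)           # the two objects coincide up to transpose
rank_R = np.linalg.matrix_rank(R)    # full rank ⇒ both dual properties hold
```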

## Notes

1. ^ Curtain and Zwart
2. ^ Curtain and Zwart Example 2.2.4
3. ^ Curtain and Zwart Theorem 2.4.6
4. ^ This is the mathematical convention; engineers seem to prefer transfer functions to be holomorphic at infinity, which is achieved by replacing z by 1/z
5. ^ Curtain and Zwart Lemma 4.3.6
6. ^ Staffans Theorem 4.6.7
7. ^ Curtain and Zwart Example 4.3.13
8. ^ Tucsnak Definition 11.1.1
9. ^ Tucsnak Definition 6.1.1
10. ^ Tucsnak Theorem 11.2.1

## References

• Curtain, Ruth; Zwart, Hans (1995), An Introduction to Infinite-Dimensional Linear Systems Theory, Springer
• Tucsnak, Marius; Weiss, George (2009), Observation and Control for Operator Semigroups, Birkhäuser
• Staffans, Olof (2005), Well-Posed Linear Systems, Cambridge University Press
• Luo, Zheng-Hua; Guo, Bao-Zhu; Morgül, Ömer (1999), Stability and Stabilization of Infinite Dimensional Systems with Applications, Springer
• Lasiecka, Irena; Triggiani, Roberto (2000), Control Theory for Partial Differential Equations, Cambridge University Press
• Bensoussan, Alain; Da Prato, Giuseppe; Delfour, Michel; Mitter, Sanjoy (2007), Representation and Control of Infinite Dimensional Systems (second ed.), Birkhäuser