# Discretization


In applied mathematics, discretization is the process of transferring continuous functions, models, variables, and equations into discrete counterparts. This process is usually carried out as a first step toward making them suitable for numerical evaluation and implementation on digital computers. Dichotomization is the special case of discretization in which the number of discrete classes is 2, which can approximate a continuous variable as a binary variable (creating a dichotomy for modeling purposes, as in binary classification).

Discretization is also related to discrete mathematics, and is an important component of granular computing. In this context, discretization may also refer to modification of variable or category granularity, as when multiple discrete variables are aggregated or multiple discrete categories fused.

Whenever continuous data is discretized, there is always some amount of discretization error. The goal is to reduce the amount to a level considered negligible for the modeling purposes at hand.
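As a minimal sketch of this point (the function, grid step, and sample points are arbitrary choices for illustration), rounding samples of a continuous function to a fixed grid bounds the discretization error by half the grid step:

```python
# Uniform quantization of samples of a continuous function, and the
# resulting discretization error. The step size `delta` is arbitrary.
delta = 0.1
xs = [i / 100 for i in range(101)]          # samples of t in [0, 1]
f = [t * t for t in xs]                     # the continuous function f(t) = t^2
fq = [round(v / delta) * delta for v in f]  # values snapped to a grid of step delta

max_err = max(abs(a - b) for a, b in zip(f, fq))
# Rounding to the nearest grid point bounds the error by delta / 2.
assert max_err <= delta / 2 + 1e-9
```

Shrinking `delta` reduces the error bound proportionally, which is the sense in which the error can be made "negligible for the modeling purposes at hand".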

The terms discretization and quantization often have the same denotation but not always identical connotations. (Specifically, the two terms share a semantic field.) The same is true of discretization error and quantization error.

Mathematical methods relating to discretization include the Euler–Maruyama method and the zero-order hold.

## Discretization of linear state space models

Discretization is also concerned with the transformation of continuous differential equations into discrete difference equations, suitable for numerical computing.

The following continuous-time state space model

${\dot {\mathbf {x} }}(t)=\mathbf {A} \mathbf {x} (t)+\mathbf {B} \mathbf {u} (t)+\mathbf {w} (t)$

$\mathbf {y} (t)=\mathbf {C} \mathbf {x} (t)+\mathbf {D} \mathbf {u} (t)+\mathbf {v} (t)$

where $\mathbf {w}$ and $\mathbf {v}$ are continuous zero-mean white noise sources with power spectral densities

$\mathbf {w} (t)\sim N(0,\mathbf {Q} )$

$\mathbf {v} (t)\sim N(0,\mathbf {R} )$

can be discretized, assuming zero-order hold for the input $\mathbf {u}$ and continuous integration for the noise $\mathbf {v}$, to

$\mathbf {x} [k+1]=\mathbf {A} _{d}\mathbf {x} [k]+\mathbf {B} _{d}\mathbf {u} [k]+\mathbf {w} [k]$

$\mathbf {y} [k]=\mathbf {C} _{d}\mathbf {x} [k]+\mathbf {D} _{d}\mathbf {u} [k]+\mathbf {v} [k]$

with covariances

$\mathbf {w} [k]\sim N(0,\mathbf {Q} _{d})$

$\mathbf {v} [k]\sim N(0,\mathbf {R} _{d})$

where

$\mathbf {A} _{d}=e^{\mathbf {A} T}={\mathcal {L}}^{-1}\{(s\mathbf {I} -\mathbf {A} )^{-1}\}_{t=T}$

$\mathbf {B} _{d}=\left(\int _{\tau =0}^{T}e^{\mathbf {A} \tau }d\tau \right)\mathbf {B} =\mathbf {A} ^{-1}(\mathbf {A} _{d}-\mathbf {I} )\mathbf {B}$, if $\mathbf {A}$ is nonsingular

$\mathbf {C} _{d}=\mathbf {C}$

$\mathbf {D} _{d}=\mathbf {D}$

$\mathbf {Q} _{d}=\int _{\tau =0}^{T}e^{\mathbf {A} \tau }\mathbf {Q} e^{\mathbf {A} ^{\top }\tau }d\tau$

$\mathbf {R} _{d}=\mathbf {R} {\frac {1}{T}}$

where $T$ is the sample time and $\mathbf {A} ^{\top }$ is the transpose of $\mathbf {A}$. The equation for the discretized measurement noise is a consequence of the continuous measurement noise being defined with a power spectral density.

A clever trick to compute $\mathbf {A} _{d}$ and $\mathbf {B} _{d}$ in one step is to utilize the following property:

$e^{{\begin{bmatrix}\mathbf {A} &\mathbf {B} \\\mathbf {0} &\mathbf {0} \end{bmatrix}}T}={\begin{bmatrix}\mathbf {A_{d}} &\mathbf {B_{d}} \\\mathbf {0} &\mathbf {I} \end{bmatrix}}$

where $\mathbf {A} _{d}$ and $\mathbf {B} _{d}$ are the discretized state-space matrices.
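Both routes can be checked against each other numerically. The following is a sketch, not a reference implementation: the matrices `A`, `B` and the sample time `T` are arbitrary example values, and `scipy.linalg.expm` is assumed to be available for the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Example system (made-up values); A is chosen nonsingular so that the
# closed-form expression for B_d applies.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
T = 0.1  # sample time

# Direct formulas: A_d = e^{AT}, B_d = A^{-1} (A_d - I) B
Ad = expm(A * T)
Bd = np.linalg.inv(A) @ (Ad - np.eye(2)) @ B

# One-step trick: exponentiate the augmented matrix [[A, B], [0, 0]] * T
M = np.block([[A, B], [np.zeros((1, 2)), np.zeros((1, 1))]])
E = expm(M * T)
Ad2, Bd2 = E[:2, :2], E[:2, 2:]

# The two routes agree, and the bottom row of E is [0 0 1] as expected.
assert np.allclose(Ad, Ad2) and np.allclose(Bd, Bd2)
```

The augmented-matrix form is convenient in practice because it avoids inverting $\mathbf {A}$ and therefore also works when $\mathbf {A}$ is singular.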

### Discretization of process noise

Numerical evaluation of $\mathbf {Q} _{d}$ is a bit trickier due to the matrix exponential integral. It can, however, be computed by first constructing a matrix and then computing its matrix exponential:

$\mathbf {F} ={\begin{bmatrix}-\mathbf {A} &\mathbf {Q} \\\mathbf {0} &\mathbf {A} ^{\top }\end{bmatrix}}T$

$\mathbf {G} =e^{\mathbf {F} }={\begin{bmatrix}\dots &\mathbf {A} _{d}^{-1}\mathbf {Q} _{d}\\\mathbf {0} &\mathbf {A} _{d}^{\top }\end{bmatrix}}.$

The discretized process noise is then evaluated by multiplying the transpose of the lower-right partition of $\mathbf {G}$ with the upper-right partition of $\mathbf {G}$.
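As a sketch of this construction (the matrices `A`, `Q` and the sample time `T` are arbitrary example values; `scipy.linalg.expm` is assumed available), the partitions of $\mathbf {G}$ recover $\mathbf {Q} _{d}$, which can be cross-checked against direct numerical integration of the defining integral:

```python
import numpy as np
from scipy.linalg import expm

# Example values (made up for illustration)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = np.array([[0.1, 0.0], [0.0, 0.2]])
T = 0.1

# Van Loan construction: F = [[-A, Q], [0, A^T]] * T, G = e^F
F = np.block([[-A, Q], [np.zeros((2, 2)), A.T]]) * T
G = expm(F)
Ad = G[2:, 2:].T          # lower-right partition is A_d^T
Qd = Ad @ G[:2, 2:]       # A_d times the upper-right partition A_d^{-1} Q_d

# Cross-check: trapezoidal integration of e^{A tau} Q e^{A^T tau} over [0, T]
h = T / 200
vals = [expm(A * (i * h)) @ Q @ expm(A.T * (i * h)) for i in range(201)]
Qd_num = h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
assert np.allclose(Qd, Qd_num, atol=1e-7)
```

The result is a symmetric positive-semidefinite covariance matrix, as a discrete process-noise covariance must be.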

$\mathbf {Q} _{d}=(\mathbf {A} _{d}^{\top })^{\top }(\mathbf {A} _{d}^{-1}\mathbf {Q} _{d})=\mathbf {A} _{d}(\mathbf {A} _{d}^{-1}\mathbf {Q} _{d}).$

### Derivation

Starting with the continuous model

$\mathbf {\dot {x}} (t)=\mathbf {A} \mathbf {x} (t)+\mathbf {B} \mathbf {u} (t)$

we know that the matrix exponential satisfies

${\frac {d}{dt}}e^{\mathbf {A} t}=\mathbf {A} e^{\mathbf {A} t}=e^{\mathbf {A} t}\mathbf {A}$

and premultiplying the model by $e^{-\mathbf {A} t}$ gives

$e^{-\mathbf {A} t}\mathbf {\dot {x}} (t)=e^{-\mathbf {A} t}\mathbf {A} \mathbf {x} (t)+e^{-\mathbf {A} t}\mathbf {B} \mathbf {u} (t)$

which we recognize as

${\frac {d}{dt}}(e^{-\mathbf {A} t}\mathbf {x} (t))=e^{-\mathbf {A} t}\mathbf {B} \mathbf {u} (t)$

and by integrating,

$e^{-\mathbf {A} t}\mathbf {x} (t)-e^{0}\mathbf {x} (0)=\int _{0}^{t}e^{-\mathbf {A} \tau }\mathbf {B} \mathbf {u} (\tau )d\tau$

$\mathbf {x} (t)=e^{\mathbf {A} t}\mathbf {x} (0)+\int _{0}^{t}e^{\mathbf {A} (t-\tau )}\mathbf {B} \mathbf {u} (\tau )d\tau$

which is an analytical solution to the continuous model.

Now we want to discretize the above expression. We assume that $\mathbf {u}$ is constant during each timestep.

$\mathbf {x} [k]\ {\stackrel {\mathrm {def} }{=}}\ \mathbf {x} (kT)$

$\mathbf {x} [k]=e^{\mathbf {A} kT}\mathbf {x} (0)+\int _{0}^{kT}e^{\mathbf {A} (kT-\tau )}\mathbf {B} \mathbf {u} (\tau )d\tau$

$\mathbf {x} [k+1]=e^{\mathbf {A} (k+1)T}\mathbf {x} (0)+\int _{0}^{(k+1)T}e^{\mathbf {A} ((k+1)T-\tau )}\mathbf {B} \mathbf {u} (\tau )d\tau$

$\mathbf {x} [k+1]=e^{\mathbf {A} T}\left[e^{\mathbf {A} kT}\mathbf {x} (0)+\int _{0}^{kT}e^{\mathbf {A} (kT-\tau )}\mathbf {B} \mathbf {u} (\tau )d\tau \right]+\int _{kT}^{(k+1)T}e^{\mathbf {A} (kT+T-\tau )}\mathbf {B} \mathbf {u} (\tau )d\tau$

We recognize the bracketed expression as $\mathbf {x} [k]$, and the second term can be simplified by substituting $v(\tau )=kT+T-\tau$; note that $d\tau =-dv$. We also assume that $\mathbf {u}$ is constant during the integral, which yields

${\begin{matrix}\mathbf {x} [k+1]&=&e^{\mathbf {A} T}\mathbf {x} [k]-\left(\int _{v(kT)}^{v((k+1)T)}e^{\mathbf {A} v}dv\right)\mathbf {B} \mathbf {u} [k]\\&=&e^{\mathbf {A} T}\mathbf {x} [k]-\left(\int _{T}^{0}e^{\mathbf {A} v}dv\right)\mathbf {B} \mathbf {u} [k]\\&=&e^{\mathbf {A} T}\mathbf {x} [k]+\left(\int _{0}^{T}e^{\mathbf {A} v}dv\right)\mathbf {B} \mathbf {u} [k]\\&=&e^{\mathbf {A} T}\mathbf {x} [k]+\mathbf {A} ^{-1}\left(e^{\mathbf {A} T}-\mathbf {I} \right)\mathbf {B} \mathbf {u} [k]\end{matrix}}$

which is an exact solution to the discretization problem.

When $\mathbf {A}$ is singular, the latter expression can still be used by replacing $e^{\mathbf {A} T}$ by its Taylor expansion,

$e^{{\mathbf {A} }T}=\sum _{j=0}^{\infty }{\frac {1}{j!}}({\mathbf {A} }T)^{j}.$

This yields

${\begin{matrix}\mathbf {x} [k+1]&=&e^{{\mathbf {A} }T}\mathbf {x} [k]+\left(\int _{0}^{T}e^{{\mathbf {A} }v}dv\right)\mathbf {B} \mathbf {u} [k]\\&=&\left(\sum _{j=0}^{\infty }{\frac {1}{j!}}({\mathbf {A} }T)^{j}\right)\mathbf {x} [k]+\left(\sum _{j=1}^{\infty }{\frac {1}{j!}}{\mathbf {A} }^{j-1}T^{j}\right)\mathbf {B} \mathbf {u} [k],\end{matrix}}$

which is the form used in practice.
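A sketch of the truncated-series form for a singular $\mathbf {A}$ (the system here, a double integrator with made-up $\mathbf {B}$ and $T$, is an assumption for illustration; since $\mathbf {A} ^{2}=\mathbf {0}$, the series terminates and the result is exact despite $\mathbf {A}$ having no inverse):

```python
import numpy as np
from math import factorial

# Double integrator: A is singular (det A = 0) and nilpotent (A^2 = 0).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
T = 0.5
N = 20  # truncation order of the Taylor series

# A_d = sum_{k=0}^{N} (A T)^k / k!
Ad = sum(np.linalg.matrix_power(A, k) * T**k / factorial(k)
         for k in range(N + 1))
# B_d = (sum_{k=1}^{N} A^{k-1} T^k / k!) B  -- no A^{-1} required
S = sum(np.linalg.matrix_power(A, k - 1) * T**k / factorial(k)
        for k in range(1, N + 1))
Bd = S @ B
```

For this system the series reproduces the familiar zero-order-hold result $\mathbf {A} _{d}={\begin{bmatrix}1&T\\0&1\end{bmatrix}}$, $\mathbf {B} _{d}={\begin{bmatrix}T^{2}/2\\T\end{bmatrix}}$.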

### Approximations

Exact discretization may sometimes be intractable due to the heavy matrix exponential and integral operations involved. It is much easier to calculate an approximate discrete model based on the fact that for small timesteps $e^{\mathbf {A} T}\approx \mathbf {I} +\mathbf {A} T$. The approximate solution then becomes:

$\mathbf {x} [k+1]\approx (\mathbf {I} +\mathbf {A} T)\mathbf {x} [k]+T\mathbf {B} \mathbf {u} [k]$

This is known as the forward Euler method. Other possible approximations are $e^{\mathbf {A} T}\approx \left(\mathbf {I} -\mathbf {A} T\right)^{-1}$, known as the backward Euler method, and $e^{\mathbf {A} T}\approx \left(\mathbf {I} +{\frac {1}{2}}\mathbf {A} T\right)\left(\mathbf {I} -{\frac {1}{2}}\mathbf {A} T\right)^{-1}$, known as the bilinear transform, or Tustin transform. Each of these approximations has different stability properties. The bilinear transform preserves the stability properties of the continuous-time system: the discretized model is stable exactly when the continuous-time model is.
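The three approximations can be compared numerically against the exact matrix exponential. This is a sketch with made-up example values for `A` and `T`, and it assumes `scipy.linalg.expm` is available; since the bilinear transform is second-order accurate while both Euler methods are first-order, its error should be the smallest for a small step:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # example stable system
T = 0.01                                  # small sample time
I = np.eye(2)

exact = expm(A * T)
forward = I + A * T                                          # forward Euler
backward = np.linalg.inv(I - A * T)                          # backward Euler
tustin = (I + 0.5 * A * T) @ np.linalg.inv(I - 0.5 * A * T)  # bilinear (Tustin)

# Maximum elementwise error of each approximation
errs = {name: np.abs(m - exact).max()
        for name, m in [("forward", forward),
                        ("backward", backward),
                        ("tustin", tustin)]}
assert errs["tustin"] < errs["forward"] and errs["tustin"] < errs["backward"]
```

Accuracy is only half the story: the backward Euler and bilinear methods remain stable for any step size when the continuous system is stable, whereas forward Euler can become unstable for large $T$.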

## Discretization of continuous features

In statistics and machine learning, discretization refers to the process of converting continuous features or variables to discretized or nominal features. This can be useful when creating probability mass functions.
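A minimal sketch of this idea, using equal-width binning (the data and the number of bins are made-up example values, and `numpy` is assumed available):

```python
import numpy as np

# A continuous feature (made-up values) discretized into 3 equal-width bins.
values = np.array([0.1, 0.4, 0.35, 0.8, 0.25, 0.95, 0.6])
n_bins = 3
edges = np.linspace(values.min(), values.max(), n_bins + 1)

# np.digitize assigns each value the index of its bin; clip so the maximum
# value lands in the last bin instead of overflowing past it.
labels = np.clip(np.digitize(values, edges) - 1, 0, n_bins - 1)

# The nominal labels induce a probability mass function over the bins.
pmf = np.bincount(labels, minlength=n_bins) / len(values)
```

Equal-width binning is only one choice; equal-frequency (quantile) binning or supervised methods such as entropy-based discretization are common alternatives.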

## Discretization of smooth functions

In generalized function theory, discretization arises as a particular case of the Convolution Theorem on tempered distributions:

${\mathcal {F}}\{f*\operatorname {III} \}={\mathcal {F}}\{f\}\cdot \operatorname {III}$

${\mathcal {F}}\{\alpha \cdot \operatorname {III} \}={\mathcal {F}}\{\alpha \}*\operatorname {III}$

where $\operatorname {III}$ is the Dirac comb, $\cdot \operatorname {III}$ is discretization, $*\operatorname {III}$ is periodization, $f$ is a rapidly decreasing tempered distribution (e.g. a Dirac delta function $\delta$ or any other compactly supported function), $\alpha$ is a smooth, slowly growing ordinary function (e.g. the function that is constantly $1$ or any other band-limited function) and ${\mathcal {F}}$ is the (unitary, ordinary frequency) Fourier transform. Functions $\alpha$ which are not smooth can be made smooth using a mollifier prior to discretization.

As an example, discretization of the function that is constantly $1$ yields the sequence $[\ldots ,1,1,1,\ldots ]$ which, interpreted as the coefficients of a linear combination of Dirac delta functions, forms a Dirac comb. If, in addition, truncation is applied, one obtains finite sequences, e.g. $[1,1,1,1]$. They are discrete in both time and frequency.