# Backpropagation through time

"BPTT" redirects here. For the running events originally known as the Bushy Park Time Trial, see parkrun.

Backpropagation through time (BPTT) is a gradient-based technique for training certain types of recurrent neural networks, such as Elman networks. The algorithm was independently derived by numerous researchers.[1][2][3]

## Algorithm

To train a recurrent neural network using BPTT, some training data is needed. This data should be an ordered sequence of input-output pairs, $\langle \mathbf{a}_0,\mathbf{y}_0 \rangle, \langle\mathbf{a}_1,\mathbf{y}_1 \rangle,\langle\mathbf{a}_2,\mathbf{y}_2\rangle,\ldots,\langle\mathbf{a}_{n-1},\mathbf{y}_{n-1}\rangle$. An initial value must also be specified for the hidden state $\mathbf{x}_0$; typically, a vector of all zeros is used for this purpose.
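
As a concrete illustration of this setup, the following sketch (in Python with NumPy; the dimensions and random placeholder data are arbitrary choices for the example, not prescribed by the algorithm) builds such a sequence of input-output pairs and the zero initial state:

```python
import numpy as np

# Illustrative sizes only; they are not prescribed by the algorithm.
n = 10           # length of the training sequence
input_dim = 4    # size of each input vector a_t
state_dim = 8    # size of the hidden state x_t
output_dim = 2   # size of each target vector y_t

rng = np.random.default_rng(0)

# Ordered sequence of input-output pairs <a_t, y_t> (random placeholders here).
a = [rng.standard_normal(input_dim) for _ in range(n)]
y = [rng.standard_normal(output_dim) for _ in range(n)]

# Initial hidden state: the zero vector, as is typical.
x0 = np.zeros(state_dim)
```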

BPTT begins by unfolding a recurrent neural network through time. The recurrent network contains two feed-forward neural networks, f and g. When the network is unfolded through time, the unfolded network contains k instances of f and one instance of g. For example, the network might be unfolded to a depth of k = 3.
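
One common choice, assumed purely for the sketches in this section, is a tanh recurrence for f and a linear readout for g. Continuing the Python sketch above, a forward pass through the unfolded network (k instances of f followed by one instance of g) might look like this:

```python
def f(x, a_t, Wx, Wa, b):
    # One instance of f: new state from the previous state and the current input.
    return np.tanh(Wx @ x + Wa @ a_t + b)

def g(x, Wo, bo):
    # The single instance of g: prediction from the final state.
    return Wo @ x + bo

def forward_unfolded(x, a_window, params, k):
    # Run k copies of f in sequence (the unfolded network), then apply g once.
    Wx, Wa, b, Wo, bo = params
    states = [x]
    for i in range(k):
        x = f(x, a_window[i], Wx, Wa, b)
        states.append(x)
    return g(x, Wo, bo), states
```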

Training then proceeds in a manner similar to training a feed-forward neural network with backpropagation, except that each epoch must run through the observations, $\mathbf{y}_t$, in sequential order. Each training pattern consists of $\langle\mathbf{x}_t,\mathbf{a}_t,\mathbf{a}_{t+1},\mathbf{a}_{t+2},...,\mathbf{a}_{t+k-1},\mathbf{y}_{t+k}\rangle$. (All of the inputs for k time-steps are needed because the unfolded network takes an input at each unfolded level.) Typically, backpropagation is applied in an online manner to update the weights as each training pattern is presented.
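
Continuing the same sketch, one way to form a single training pattern at time t (the current context, the next k inputs, and the target k steps ahead) is:

```python
def training_pattern(x_t, a, y, t, k):
    # The pattern <x_t, a_t, ..., a_{t+k-1}, y_{t+k}> described above.
    return x_t, a[t:t + k], y[t + k]
```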

After each pattern is presented and the weights have been updated, the weights in each instance of f ($f_1, f_2, ..., f_k$) are averaged together so that they all remain identical. Also, $\mathbf{x}_{t+1}$ is calculated as $\mathbf{x}_{t+1}=f(\mathbf{x}_t, \mathbf{a}_t)$, which provides the context needed for the algorithm to move on to the next time-step, t+1.
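
The sketch below performs one such online update, still under the tanh/linear assumptions above and with a squared-error loss chosen for illustration. Because the k instances of f share their weights, it sums the gradient contributions of all instances and applies a single update to the shared parameters, which keeps every copy identical (matching the effect of the averaging step up to a rescaling of the learning rate):

```python
def bptt_step(x, a_window, target, params, k, lr=0.01):
    # Forward pass through the unfolded network, keeping intermediate values.
    Wx, Wa, b, Wo, bo = params
    states, pre = [x], []
    for i in range(k):
        z = Wx @ states[-1] + Wa @ a_window[i] + b
        pre.append(z)
        states.append(np.tanh(z))
    prediction = Wo @ states[-1] + bo
    e = target - prediction                 # error = target - prediction

    # Backward pass: gradients of the squared error 0.5*||e||^2
    # with respect to the shared weights.
    dWo, dbo = -np.outer(e, states[-1]), -e
    dWx, dWa, db = np.zeros_like(Wx), np.zeros_like(Wa), np.zeros_like(b)
    dx = -Wo.T @ e                          # gradient flowing into the final state
    for i in reversed(range(k)):
        dz = dx * (1.0 - np.tanh(pre[i]) ** 2)
        # Summing over the k instances of f keeps their weights tied.
        dWx += np.outer(dz, states[i])
        dWa += np.outer(dz, a_window[i])
        db += dz
        dx = Wx.T @ dz                      # pass the gradient to the previous instance

    # One online gradient step on the shared parameters (updated in place).
    Wx -= lr * dWx; Wa -= lr * dWa; b -= lr * db
    Wo -= lr * dWo; bo -= lr * dbo
    return e
```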

## Pseudo-code

Pseudo-code for BPTT:

```
Back_Propagation_Through_Time(a, y)   // a[t] is the input at time t. y[t] is the output
    Unfold the network to contain k instances of f
    do until stopping criterion is met:
        x = the zero vector             // x is the current context
        for t from 0 to n - k - 1       // t is time. n is the length of the training sequence
            Set the network inputs to x, a[t], a[t+1], ..., a[t+k-1]
            p = forward-propagate the inputs over the whole unfolded network
            e = y[t+k] - p              // error = target - prediction
            Back-propagate the error, e, back across the whole unfolded network
            Update all the weights in the network
            Average the weights in each instance of f together, so that each f is identical
            x = f(x, a[t])              // compute the context for the next time-step
```
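
Tying the earlier sketches together, a minimal training loop corresponding to this pseudo-code might look like the following; the fixed epoch count stands in for a real stopping criterion, and the initialization scale is an arbitrary choice:

```python
def bptt_train(a, y, k, params, epochs=100, lr=0.01):
    Wx, Wa, b, Wo, bo = params
    for _ in range(epochs):                 # stand-in for "until stopping criterion is met"
        x = np.zeros(Wx.shape[0])           # x is the current context
        for t in range(len(a) - k):         # sweep the sequence in order
            x_t, a_window, target = training_pattern(x, a, y, t, k)
            bptt_step(x_t, a_window, target, params, k, lr)
            x = f(x, a[t], Wx, Wa, b)       # context for the next time-step
    return params

# Illustrative initialization and a call using the data defined earlier.
Wx = 0.1 * rng.standard_normal((state_dim, state_dim))
Wa = 0.1 * rng.standard_normal((state_dim, input_dim))
b = np.zeros(state_dim)
Wo = 0.1 * rng.standard_normal((output_dim, state_dim))
bo = np.zeros(output_dim)
bptt_train(a, y, k=3, params=[Wx, Wa, b, Wo, bo])
```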


BPTT tends to be significantly faster for training recurrent neural networks than general-purpose optimization techniques such as evolutionary optimization.[4]