# Overlap–save method

*Fig 1: A sequence of 4 plots depicts one cycle of the overlap–save convolution algorithm. The 1st plot is a long sequence of data to be processed with a lowpass FIR filter. The 2nd plot is one segment of the data to be processed in piecewise fashion. The 3rd plot is the filtered segment, with the usable portion colored red. The 4th plot shows the filtered segment appended to the output stream.[A] The FIR filter is a boxcar lowpass with M = 16 samples, the length of the segments is L = 100 samples, and the overlap is 15 samples.*

In signal processing, *overlap–save* is the traditional name for an efficient way to evaluate the discrete convolution between a very long signal $x[n]$ and a finite impulse response (FIR) filter $h[n]$:
$y[n]=x[n]*h[n]\ \triangleq \ \sum _{m=-\infty }^{\infty }h[m]\cdot x[n-m]=\sum _{m=1}^{M}h[m]\cdot x[n-m],$ (Eq.1)

where h[m] = 0 for m outside the region [1, M].

The concept is to compute short segments of y[n] of an arbitrary length L, and concatenate the segments together. Consider a segment that begins at n = kL + M, for any integer k, and define:

$x_{k}[n]\ \triangleq {\begin{cases}x[n+kL],&1\leq n\leq L+M-1\\0,&{\textrm {otherwise}}.\end{cases}}$

$y_{k}[n]\ \triangleq \ x_{k}[n]*h[n]=\sum _{m=1}^{M}h[m]\cdot x_{k}[n-m].$

Then, for kL + M  ≤  n  ≤  kL + L + M − 1, and equivalently M  ≤  n − kL  ≤  L + M − 1, we can write:

$y[n]=\sum _{m=1}^{M}h[m]\cdot x_{k}[n-kL-m]\ \ \triangleq \ \ y_{k}[n-kL].$

With the substitution  j ≜ n − kL,  the task is reduced to computing $y_{k}[j]$ for M  ≤  j  ≤  L + M − 1. These steps are illustrated in the first 3 traces of Figure 1, except that the desired portion of the output (third trace) corresponds to 1  ≤  j  ≤  L.[B]

If we periodically extend xk[n] with period N  ≥  L + M − 1, according to:

$x_{k,N}[n]\ \triangleq \ \sum _{\ell =-\infty }^{\infty }x_{k}[n-\ell N],$

the convolutions  $(x_{k,N})*h\,$ and  $x_{k}*h\,$ are equivalent in the region M  ≤  n  ≤  L + M − 1.  It is therefore sufficient to compute the N-point circular (or cyclic) convolution of $x_{k}[n]\,$ with $h[n]\,$ in the region [1, N].  The subregion [M, L + M − 1] is appended to the output stream, and the other values are discarded.  The advantage is that the circular convolution can be computed more efficiently than linear convolution, according to the circular convolution theorem:

$y_{k}[n]\ =\ {\text{IDFT}}_{N}{\big (}\,{\text{DFT}}_{N}(x_{k}[n])\cdot {\text{DFT}}_{N}(h[n])\,{\big )},$ (Eq.2)

where:

• DFTN and IDFTN refer to the Discrete Fourier transform and its inverse, evaluated over N discrete points, and
• L is customarily chosen such that N = L + M − 1 is an integer power of 2, and the transforms are implemented with the FFT algorithm for efficiency.
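The equivalence behind Eq.2 can be checked numerically with NumPy's FFT routines. This sketch uses random data rather than the boxcar example of Fig 1, but the same dimensions (M = 16, L = 100) and N = L + M − 1:

```python
import numpy as np

rng = np.random.default_rng(0)
M, L = 16, 100              # filter length and segment length (Fig 1 values)
N = L + M - 1               # transform length, satisfying N >= L + M - 1
h = rng.standard_normal(M)
xk = rng.standard_normal(N) # one segment: L new samples preceded by M-1 old ones

# N-point circular convolution via the DFT (Eq.2); inputs are real, so take .real
yk = np.fft.ifft(np.fft.fft(xk, N) * np.fft.fft(h, N)).real

# Full linear convolution of the same segment, for comparison
lin = np.convolve(xk, h)

# The first M-1 circular samples are aliased, but the remaining L samples
# agree with the linear convolution -- exactly the "usable portion" kept
# by the overlap-save method.
print(np.allclose(yk[M-1:], lin[M-1:N]))
```

Only the first M − 1 samples of `yk` are corrupted by the circular wrap-around; those are the values the algorithm discards.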

## Pseudocode

```
(Overlap–save algorithm for linear convolution)
h = FIR_impulse_response
M = length(h)
overlap = M − 1
N = 8 × overlap                  (see next section for a better choice)
step_size = N − overlap
H = DFT(h, N)
position = 0

while position + N ≤ length(x)
    yt = IDFT(DFT(x(position+(1:N))) × H)
    y(position+(1:step_size)) = yt(M : N)    (discard M−1 y-values)
    position = position + step_size
end
```
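The pseudocode translates readily to NumPy. The following is a minimal sketch, not a reference implementation: the function name `overlap_save` is ad hoc, and the M − 1 zeros prepended to `x` (so that the discarded start-up samples of the first block carry no real data) are an addition this sketch makes to the pseudocode above:

```python
import numpy as np

def overlap_save(x, h, N=None):
    """Linear convolution of a long signal x with FIR filter h via overlap-save."""
    M = len(h)
    overlap = M - 1
    if N is None:
        N = 8 * overlap              # same heuristic as the pseudocode
    step = N - overlap
    H = np.fft.fft(h, N)             # filter spectrum, computed once
    # Prepend M-1 zeros so the first block's discarded samples are start-up only
    xp = np.concatenate([np.zeros(overlap), x])
    y = np.empty(0)
    pos = 0
    while pos + N <= len(xp):
        # Circular convolution of one block with h, via the DFT (Eq.2)
        yt = np.fft.ifft(np.fft.fft(xp[pos:pos + N]) * H).real
        y = np.concatenate([y, yt[overlap:N]])   # keep the last N-M+1 samples
        pos += step
    return y

rng = np.random.default_rng(1)
x = rng.standard_normal(10000)
h = rng.standard_normal(33)
y = overlap_save(x, h)
print(np.allclose(y, np.convolve(x, h)[:len(y)]))
```

As in the pseudocode, the tail of `x` shorter than one full block is left unprocessed; a production version would pad the final block as well.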


## Efficiency considerations

*Fig 2: A graph of the values of N (an integer power of 2) that minimize the cost function ${\tfrac {N\left(\log _{2}N+1\right)}{N-M+1}}$*

When the DFT and IDFT are implemented by the FFT algorithm, the pseudocode above requires about N (log2(N) + 1) complex multiplications for the FFT, product of arrays, and IFFT.[E] Each iteration produces N − M + 1 output samples, so the number of complex multiplications per output sample is about:

${\frac {N(\log _{2}(N)+1)}{N-M+1}}.\,$ (Eq.3)

For example, when M=201 and N=1024, Eq.3 equals 13.67, whereas direct evaluation of Eq.1 would require up to 201 complex multiplications per output sample, the worst case being when both x and h are complex-valued. Also note that for any given M, Eq.3 has a minimum with respect to N. Figure 2 is a graph of the values of N that minimize Eq.3 for a range of filter lengths (M).
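The worked example and the minimization behind Fig 2 follow directly from Eq.3. In this sketch, the helper names `cost_per_sample` and `best_power_of_2` (and the search bound `max_exp`) are ad hoc choices, not part of the article:

```python
import math

def cost_per_sample(N, M):
    """Eq.3: complex multiplications per output sample."""
    return N * (math.log2(N) + 1) / (N - M + 1)

# The worked example from the text: M = 201, N = 1024
print(round(cost_per_sample(1024, 201), 2))   # -> 13.67

def best_power_of_2(M, max_exp=24):
    """The power-of-2 N that minimizes Eq.3 for a given filter length M (as in Fig 2)."""
    candidates = [2**k for k in range(1, max_exp) if 2**k > M]
    return min(candidates, key=lambda N: cost_per_sample(N, M))
```

For M = 201, the minimizing power of 2 turns out to be N = 2048, slightly cheaper per output sample than the N = 1024 of the worked example.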

Instead of Eq.1, we can also consider applying Eq.2 to a long sequence of length $N_{x}$ samples. The total number of complex multiplications would be:

$N_{x}\cdot (\log _{2}(N_{x})+1).$

Comparatively, the number of complex multiplications required by the pseudocode algorithm is:

$N_{x}\cdot (\log _{2}(N)+1)\cdot {\frac {N}{N-M+1}}.$

Hence the cost of the overlap–save method scales almost as $O\left(N_{x}\log _{2}N\right)$, while the cost of a single, large circular convolution is almost $O\left(N_{x}\log _{2}N_{x}\right)$.
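To make the comparison concrete, the two cost formulas can be evaluated for illustrative values ($N_x = 2^{20}$, M = 201, N = 2048 are choices of this sketch, not values from the text):

```python
import math

Nx, M, N = 2**20, 201, 2048   # illustrative: long signal, filter length, block size

# One large circular convolution of the whole sequence
cost_single = Nx * (math.log2(Nx) + 1)

# Overlap-save with blocks of length N (the pseudocode algorithm)
cost_os = Nx * (math.log2(N) + 1) * N / (N - M + 1)

print(f"single FFT: {cost_single:.3g}  overlap-save: {cost_os:.3g}")
```

For a signal this long, blockwise processing already costs fewer multiplications than one huge transform, and the gap widens as $N_x$ grows while N stays fixed.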