Overlap–save method

Overlap–save is the traditional name for an efficient way to evaluate the discrete convolution between a very long signal $x[n]$ and a finite impulse response (FIR) filter $h[n]$ :

$y[n]=x[n]*h[n]\ \triangleq \ \sum _{m=-\infty }^{\infty }h[m]\cdot x[n-m]=\sum _{m=1}^{M}h[m]\cdot x[n-m],\,$ (Eq.1)

where h[m] = 0 for m outside the region [1, M].

[Figure: a sequence of four plots depicting one cycle of the overlap–save convolution algorithm. The first plot is a long sequence of data to be processed with a lowpass FIR filter; the second is one segment of that data, to be processed piecewise; the third is the filtered segment, with the usable portion colored red; the fourth shows the filtered segment appended to the output stream. The FIR filter is a boxcar lowpass with M = 16 samples, the segment length is L = 100 samples, and the overlap is 15 samples.]
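As a concrete check of Eq.1, the sketch below (Python with NumPy; the random signal and the boxcar filter are illustrative choices, not prescribed by the text) evaluates the sum directly at one output index and compares it with a library convolution. NumPy arrays are 0-based, so h[m] here plays the role of h[m+1] in the text.

```python
import numpy as np

M = 16                            # filter length
h = np.ones(M) / M                # boxcar lowpass FIR (illustrative)
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)     # a long input signal (illustrative)

# Direct evaluation of Eq.1 at one index n (0-based, so the sum runs
# over m = 0 .. M-1 instead of 1 .. M):
n = 500
y_n = sum(h[m] * x[n - m] for m in range(M))

# The full linear convolution computed by NumPy agrees at that index.
y = np.convolve(x, h)
```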

The concept is to compute short segments of y[n] of an arbitrary length L, and concatenate the segments together. Consider a segment that begins at n = kL + M, for any integer k, and define:

$x_{k}[n]\ \triangleq {\begin{cases}x[n+kL]&1\leq n\leq L+M-1\\0&{\textrm {otherwise}},\end{cases}}$

$y_{k}[n]\ \triangleq \ x_{k}[n]*h[n]=\sum _{m=1}^{M}h[m]\cdot x_{k}[n-m].$

Then, for kL + M  ≤  n  ≤  kL + L + M − 1 (equivalently, M  ≤  n − kL  ≤  L + M − 1), we can write:

$y[n]=\sum _{m=1}^{M}h[m]\cdot x_{k}[n-kL-m]\ \ \triangleq \ \ y_{k}[n-kL].$

The task is thereby reduced to computing yk[n], for M  ≤  n  ≤  L + M − 1. The process described above is illustrated in the accompanying figure.
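This reduction can be checked numerically. In the 0-based sketch below (illustrative data; indices shift by one relative to the text), convolving the zero-padded segment x_k with h reproduces L consecutive samples of the full convolution:

```python
import numpy as np

L, M = 100, 16
h = np.ones(M) / M
rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
y = np.convolve(x, h)                  # full linear convolution

k = 3
xk = x[k*L : k*L + L + M - 1]          # segment x_k (0-based slice)
yk = np.convolve(xk, h)                # y_k = x_k * h

# Usable outputs: yk[M-1 .. L+M-2] equal y[kL+M-1 .. kL+L+M-2]
segment_out = yk[M-1 : L+M-1]
reference   = y[k*L + M - 1 : k*L + L + M - 1]
```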

Now note that if we periodically extend xk[n] with period N  ≥  L + M − 1, according to:

$x_{k,N}[n]\ \triangleq \ \sum _{\ell =-\infty }^{\infty }x_{k}[n-\ell N],$

the convolutions  $(x_{k,N})*h\,$ and  $x_{k}*h\,$ are equivalent in the region M  ≤  n  ≤  L + M − 1. It is therefore sufficient to compute the N-point circular (or cyclic) convolution of $x_{k}[n]\,$ with $h[n]\,$ in the region [1, N].  The subregion [M, L + M − 1] is appended to the output stream, and the other values are discarded.
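The equivalence can be verified straight from the definition of circular convolution, using modular indexing rather than DFTs. A sketch (0-based indices; the data and N = 128 are illustrative):

```python
import numpy as np

L, M = 100, 16
N = 128                                # any N >= L + M - 1 = 115 works
h = np.ones(M) / M
rng = np.random.default_rng(2)
xk = rng.standard_normal(L + M - 1)    # the segment x_k; zero elsewhere

# N-point circular convolution, directly from the definition:
xk_N = np.concatenate([xk, np.zeros(N - len(xk))])
circ = np.array([sum(h[m] * xk_N[(n - m) % N] for m in range(M))
                 for n in range(N)])

lin = np.convolve(xk, h)               # ordinary linear convolution

# The two agree on the usable region (0-based indices M-1 .. L+M-2);
# only the samples affected by circular wrap-around differ.
```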

The advantage is that the circular convolution can be computed very efficiently as follows, according to the circular convolution theorem:

$y_{k}[n]={\text{DFT}}^{-1}\displaystyle (\ {\text{DFT}}\displaystyle (x_{k}[n])\cdot {\text{DFT}}\displaystyle (h[n])\ ),$ where:

• DFT and DFT−1 refer to the discrete Fourier transform and its inverse, respectively, evaluated over N discrete points, and
• N is customarily chosen to be an integer power-of-2, which optimizes the efficiency of the FFT algorithm.
• Optimal N is in the range [4M, 8M].
• Unlike the third graph in the figure above, depicting separate leading and trailing edge-effects, this method causes them to be overlapped and added. So they are discarded together. In other words, with circular convolution, the first output value is a weighted average of the last M-1 samples of the input segment (and the first sample of the segment). The next M-2 outputs are weighted averages of both the beginning and the end of the segment. The Mth output value is the first one that combines only samples from the beginning of the segment.
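A sketch of the DFT identity above, using NumPy's FFT routines (0-based indexing; the data and the choice N = 128 are illustrative). Only the last L outputs are kept; the first M − 1 contain the overlapped edge effects and are discarded:

```python
import numpy as np

L, M = 100, 16
N = 128                                 # power of 2, N >= L + M - 1
h = np.ones(M) / M
rng = np.random.default_rng(3)
xk = rng.standard_normal(L + M - 1)

# y_k = DFT^-1( DFT(x_k) . DFT(h) ), all transforms N-point:
yk = np.fft.ifft(np.fft.fft(xk, N) * np.fft.fft(h, N)).real

usable = yk[M-1 : L+M-1]                # appended to the output stream
discarded = yk[:M-1]                    # overlapped edge effects

reference = np.convolve(xk, h)[M-1 : L+M-1]
```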

Pseudocode

 

 (Overlap–save algorithm for linear convolution)
 h = FIR_impulse_response
 M = length(h)
 overlap = M-1
 N = 4*overlap        (or a nearby power-of-2)
 step_size = N-overlap
 H = DFT(h, N)
 position = 0

 while position+N <= length(x)
     yt = IDFT( DFT( x(1+position : N+position), N ) * H, N )
     y(1+position : step_size+position) = yt(M : N)    # discard M-1 y-values
     position = position + step_size
 end
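A runnable rendering of the pseudocode in Python with NumPy (a sketch, not a reference implementation: it uses 0-based indexing, chooses N as a power of 2 near 4(M − 1) when none is given, and zero-pads the input so neither the first nor the last block needs special-casing):

```python
import numpy as np

def overlap_save(x, h, N=None):
    """Full linear convolution of x with FIR filter h, overlap-save style."""
    x = np.asarray(x, dtype=float)
    h = np.asarray(h, dtype=float)
    M = len(h)
    overlap = M - 1
    if N is None:
        N = 1 << max(4 * overlap - 1, 1).bit_length()  # power of 2 near 4*(M-1)
    step = N - overlap
    H = np.fft.fft(h, N)

    out_len = len(x) + M - 1                # length of the full convolution
    # Prepend M-1 zeros so the first block's discarded samples are padding,
    # and append zeros so the final (partial) block is complete.
    xp = np.concatenate([np.zeros(overlap), x, np.zeros(N)])
    y = np.empty(out_len)
    pos = 0
    while pos < out_len:
        yt = np.fft.ifft(np.fft.fft(xp[pos:pos + N]) * H).real
        take = min(step, out_len - pos)
        y[pos:pos + take] = yt[overlap:overlap + take]  # discard first M-1
        pos += step
    return y
```

The result matches an ordinary linear convolution of the same signals to floating-point precision.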

Efficiency

When the DFT and its inverse are implemented by the FFT algorithm, the pseudocode above requires about N log2(N) + N complex multiplications for the FFT, the array product, and the IFFT.[note 1] Each iteration produces N − M + 1 output samples, so the number of complex multiplications per output sample is about:

${\frac {N\log _{2}(N)+N}{N-M+1}}.\,$ (Eq.2)

For example, when M=201 and N=1024, Eq.2 equals 13.67, whereas direct evaluation of Eq.1 would require up to 201 complex multiplications per output sample, the worst case being when both x and h are complex-valued. Also note that for any given M, Eq.2 has a minimum with respect to N. It diverges for both small and large block sizes.
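Eq.2 is easy to tabulate. The sketch below reproduces the 13.67 figure and shows the cost growing again for both smaller and larger power-of-2 block sizes (the scan range is an arbitrary choice for illustration):

```python
import numpy as np

M = 201

def cost(N):
    """Complex multiplications per output sample, per Eq.2."""
    return (N * np.log2(N) + N) / (N - M + 1)

c = cost(1024)                      # the example in the text, about 13.67

# Cost over a range of power-of-2 block sizes: large for small N (few
# usable outputs per block) and rising again for large N (log factor).
costs = {1 << k: cost(1 << k) for k in range(8, 17)}
```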