Phase vocoder

A phase vocoder is a type of vocoder algorithm that can modify the time and frequency characteristics of audio signals by using phase information extracted from a frequency transform.[1] The algorithm allows frequency-domain modifications to a digital sound file (typically time expansion/compression and pitch shifting).

At the heart of the phase vocoder is the short-time Fourier transform (STFT), typically implemented using fast Fourier transforms. The STFT converts a time-domain representation of sound into a time-frequency representation (the "analysis" phase), allowing modifications to the amplitudes or phases of specific frequency components of the sound before the inverse STFT resynthesizes the time-frequency representation back into the time domain. The time evolution of the resynthesized sound can be changed by modifying the time positions of the STFT frames prior to resynthesis, allowing time-scale modification of the original sound file.
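The analysis–modification–resynthesis chain described above can be sketched in code. The following is a minimal NumPy sketch of STFT-based time stretching; the window size, hop size, and simple overlap-add resynthesis are illustrative choices, not a reference implementation:

```python
import numpy as np

def phase_vocoder_stretch(x, rate, n_fft=1024, hop=256):
    """Time-stretch x by a factor of 1/rate without changing pitch (sketch)."""
    window = np.hanning(n_fft)

    # Analysis: STFT frames taken every `hop` samples.
    starts = range(0, len(x) - n_fft + 1, hop)
    stft = np.array([np.fft.rfft(window * x[i:i + n_fft]) for i in starts])

    # Expected phase advance per hop at each bin's centre frequency.
    omega = 2.0 * np.pi * np.arange(n_fft // 2 + 1) * hop / n_fft

    # Horizontal phase propagation: estimate each bin's instantaneous
    # frequency from the inter-frame phase difference, then accumulate
    # phase at the scaled (synthesis) hop.
    phase = np.angle(stft[0])
    frames = [np.abs(stft[0]) * np.exp(1j * phase)]
    for k in range(1, len(stft)):
        dphi = np.angle(stft[k]) - np.angle(stft[k - 1]) - omega
        dphi -= 2.0 * np.pi * np.round(dphi / (2.0 * np.pi))  # wrap to [-pi, pi]
        phase = phase + (omega + dphi) / rate
        frames.append(np.abs(stft[k]) * np.exp(1j * phase))

    # Resynthesis: inverse FFT of each modified frame, overlap-added
    # at the synthesis hop to realise the time-scale change.
    syn_hop = int(round(hop / rate))
    y = np.zeros(syn_hop * (len(frames) - 1) + n_fft)
    for k, frame in enumerate(frames):
        y[k * syn_hop:k * syn_hop + n_fft] += window * np.fft.irfft(frame, n_fft)
    return y
```

Stretching with rate = 0.5 roughly doubles the duration. Because this sketch handles only horizontal coherence, its output exhibits the loss of clarity discussed in the next section.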

Phase coherence problem

The main problem that must be solved for any manipulation of the STFT is that individual signal components (sinusoids, impulses) are spread over multiple frames and multiple STFT frequency locations (bins). This is because the STFT analysis is performed with overlapping, windowed frames. The windowing causes spectral leakage, so the information of an individual sinusoidal component is spread over adjacent STFT bins; and to avoid border effects from the tapering of the analysis windows, the windows overlap in time. This time overlap means that adjacent STFT analyses are strongly correlated: a sinusoid present in an analysis frame at time "t" will be present in subsequent frames as well. Any modification made in the STFT representation must therefore preserve the appropriate correlation between adjacent frequency bins (vertical coherence) and time frames (horizontal coherence). Except in the case of extremely simple synthetic sounds, these correlations can be preserved only approximately, and since the invention of the phase vocoder, research has mainly been concerned with finding algorithms that preserve the vertical and horizontal coherence of the STFT representation after modification. The phase coherence problem was investigated for quite a while before appropriate solutions emerged.
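The horizontal-coherence requirement can be made concrete. In one common formulation (the notation here is assumed, not taken from the cited sources), with analysis hop R_a, synthesis hop R_s, and bin centre frequency Ω_k = 2πk/N, the measured phase advance of bin k between consecutive frames yields an instantaneous-frequency estimate that is then integrated at the synthesis hop:

```latex
% Heterodyned phase increment, wrapped to (-\pi,\pi]:
\Delta\phi_k = \operatorname{princarg}\!\left[\phi_k(t+R_a) - \phi_k(t) - \Omega_k R_a\right]
% Instantaneous-frequency estimate for bin k:
\hat\omega_k = \Omega_k + \frac{\Delta\phi_k}{R_a}
% Synthesis-phase update at the synthesis hop R_s:
\tilde\phi_k(t+R_s) = \tilde\phi_k(t) + \hat\omega_k R_s
```

This keeps each bin's phase consistent from frame to frame (horizontal coherence) but says nothing about the relationships between neighbouring bins (vertical coherence), which is the difficulty the following paragraphs trace through the literature.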

The phase vocoder was introduced in 1966 by Flanagan as an algorithm that preserves horizontal coherence between the phases of bins representing sinusoidal components.[2] This original phase vocoder did not take into account the vertical coherence between adjacent frequency bins, and therefore time stretching with this system produced sound signals that lacked clarity.

The optimal reconstruction of a sound signal from an STFT after amplitude modifications was proposed by Griffin and Lim in 1984.[3] This algorithm does not address the problem of producing a coherent STFT, but it finds the sound signal whose STFT is as close as possible to the modified STFT, even if the modified STFT is not coherent (i.e., does not represent any signal).
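Griffin and Lim's iteration can be sketched as follows. This is a simplified NumPy version: the window, hop, and iteration count are illustrative, and the weighted overlap-add used here only approximates their least-squares derivation:

```python
import numpy as np

N_FFT, HOP = 1024, 256
WINDOW = np.hanning(N_FFT)

def stft(x):
    return np.array([np.fft.rfft(WINDOW * x[i:i + N_FFT])
                     for i in range(0, len(x) - N_FFT + 1, HOP)])

def istft(S):
    # Weighted overlap-add; dividing by the summed squared window gives
    # a least-squares signal estimate even for an inconsistent STFT.
    y = np.zeros(HOP * (len(S) - 1) + N_FFT)
    norm = np.zeros_like(y)
    for k, frame in enumerate(S):
        y[k * HOP:k * HOP + N_FFT] += WINDOW * np.fft.irfft(frame, N_FFT)
        norm[k * HOP:k * HOP + N_FFT] += WINDOW ** 2
    return y / np.maximum(norm, 1e-12)

def griffin_lim(magnitude, n_iter=32):
    """Estimate a signal whose STFT magnitude approximates `magnitude`."""
    rng = np.random.default_rng(0)
    phase = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, magnitude.shape))
    for _ in range(n_iter):
        x = istft(magnitude * phase)            # back to the signal domain
        phase = np.exp(1j * np.angle(stft(x)))  # keep phases, reimpose magnitude
    return istft(magnitude * phase)
```

Each pass projects between the signal domain and the (generally inconsistent) target-magnitude STFT domain, monotonically reducing the distance between the two.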

Vertical coherence remained a major issue for the quality of time-scaling operations until 1999, when Laroche and Dolson[4] proposed a means to preserve phase consistency across spectral bins. Their proposal marks a turning point in phase vocoder history: they showed that by ensuring vertical phase consistency, very high-quality time-scaling transformations can be obtained.
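One way to realise this idea is the "identity phase locking" variant: synthesis phases are propagated only at spectral peaks, and every other bin inherits its peak's synthesis phase plus its own analysis-phase offset from that peak. The sketch below makes simplifying assumptions (a three-point peak test and region boundaries at midpoints between peaks) and is not Laroche and Dolson's exact formulation:

```python
import numpy as np

def identity_phase_lock(mag, ana_phase, syn_phase):
    """Lock each bin's synthesis phase to its nearest spectral peak.

    `mag` and `ana_phase` come from the current analysis frame;
    `syn_phase` holds phases already propagated horizontally (only the
    values at peak bins are used). Returns vertically coherent phases.
    """
    # Local maxima via a simple three-point test (bins 1..N-2).
    peaks = np.flatnonzero((mag[1:-1] >= mag[:-2]) & (mag[1:-1] > mag[2:])) + 1
    if peaks.size == 0:
        return syn_phase
    locked = np.empty_like(syn_phase)
    # Each peak governs a "region of influence"; boundaries are taken
    # at the midpoints between adjacent peaks.
    bounds = np.concatenate(([0], (peaks[:-1] + peaks[1:]) // 2 + 1, [len(mag)]))
    for p, lo, hi in zip(peaks, bounds[:-1], bounds[1:]):
        # Bin b gets the peak's synthesis phase plus its analysis-phase
        # offset from the peak, preserving local phase relationships.
        locked[lo:hi] = syn_phase[p] + (ana_phase[lo:hi] - ana_phase[p])
    return locked
```

Because the bins surrounding a sinusoidal peak keep the same phase relationships they had in the analysis frame, the "phasiness" of the plain phase vocoder is greatly reduced.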

The algorithm of Laroche and Dolson did not, however, preserve vertical phase coherence at sound onsets (note onsets). A solution for this problem was proposed by Roebel.[5]

An example of a software implementation of phase-vocoder-based signal transformation that uses techniques similar to those described here to achieve high-quality signal transformation is Ircam's SuperVP.[6]

Use in music

British composer Trevor Wishart used phase vocoder analyses and transformations of a human voice as the basis for his composition Vox 5 (part of his larger Vox Cycle).[7] Transfigured Wind by American composer Roger Reynolds uses the phase vocoder to perform time-stretching of flute sounds.[8] The music of JoAnn Kuchera-Morin makes some of the earliest and most extensive use of phase vocoder transformations, such as in Dreampaths (1989).[9]

References

  1. ^ Sethares, William. "A Phase Vocoder in Matlab". Retrieved 6 December 2020.
  2. ^ Flanagan J.L. and Golden, R. M. (1966). "Phase vocoder". Bell System Technical Journal. 45 (9): 1493–1509. doi:10.1002/j.1538-7305.1966.tb01706.x.
  3. ^ Griffin D. and Lim J. (1984). "Signal Estimation from Modified Short-Time Fourier Transform". IEEE Transactions on Acoustics, Speech, and Signal Processing. 32 (2): 236–243. doi:10.1109/TASSP.1984.1164317.
  4. ^ J. Laroche and M. Dolson (1999). "Improved Phase Vocoder Time-Scale Modification of Audio". IEEE Transactions on Speech and Audio Processing. 7 (3): 323–332. doi:10.1109/89.759041.
  5. ^ Roebel, A. (2003). "A new approach to transient processing in the phase vocoder". Proc. DAFx. Archived 2004-06-17 at the Wayback Machine.
  6. ^ "SuperVP".
  7. ^ Wishart, T. "The Composition of Vox 5". Computer Music Journal 12/4, 1988
  8. ^ Serra, X. 'A System for Sound Analysis/Transformation/Synthesis based on Deterministic plus Stochastic Decomposition', p.12 (PhD Thesis 1989)
  9. ^ Roads, Curtis (2004). Microsound, p.318. MIT Press. ISBN 9780262681544.

External links