Mel-frequency cepstrum

In sound processing, the mel-frequency cepstrum (MFC) is a representation of the short-term power spectrum of a sound, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency.

Mel-frequency cepstral coefficients (MFCCs) are coefficients that collectively make up an MFC.[1] They are derived from a type of cepstral representation of the audio clip (a nonlinear "spectrum-of-a-spectrum"). The difference between the cepstrum and the mel-frequency cepstrum is that in the MFC, the frequency bands are equally spaced on the mel scale, which approximates the human auditory system's response more closely than the linearly-spaced frequency bands used in the normal spectrum. This frequency warping can allow for better representation of sound, for example, in audio compression that might potentially reduce the transmission bandwidth and the storage requirements of audio signals.

MFCCs are commonly derived as follows:[2]

  1. Take the Fourier transform of (a windowed excerpt of) a signal.
  2. Map the powers of the spectrum obtained above onto the mel scale, using triangular overlapping windows or alternatively, cosine overlapping windows.
  3. Take the logs of the powers at each of the mel frequencies.
  4. Take the discrete cosine transform of the list of mel log powers, as if it were a signal.
  5. The MFCCs are the amplitudes of the resulting spectrum.
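The five steps above can be sketched in Python with NumPy. The sample rate, frame length, filter count, and coefficient count below are illustrative choices, not values fixed by the MFCC definition, and the mel formula used is one common variant.

```python
import numpy as np

def hz_to_mel(f):
    # One common mel formula; other variants exist (see the note on
    # filter spacing below).
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(frame, sample_rate=16000, n_filters=26, n_coeffs=13):
    # 1. Fourier transform of a windowed excerpt of the signal.
    windowed = frame * np.hamming(len(frame))
    power = np.abs(np.fft.rfft(windowed)) ** 2

    # 2. Map the spectrum powers onto the mel scale using triangular
    #    overlapping windows.
    edges = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0),
                        n_filters + 2)
    bins = np.floor((len(frame) + 1) * mel_to_hz(edges) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, len(power)))
    for i in range(n_filters):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, lo:mid] = (np.arange(lo, mid) - lo) / max(mid - lo, 1)
        fbank[i, mid:hi] = (hi - np.arange(mid, hi)) / max(hi - mid, 1)

    # 3. Logs of the powers at each of the mel frequencies.
    log_energies = np.log(fbank @ power + 1e-10)

    # 4.-5. DCT-II of the mel log powers; the MFCCs are the amplitudes.
    i = np.arange(n_coeffs)[:, None]
    n = np.arange(n_filters)[None, :]
    return np.cos(np.pi * i * (n + 0.5) / n_filters) @ log_energies

coeffs = mfcc(np.random.default_rng(0).standard_normal(512))
```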

There can be variations on this process, for example: differences in the shape or spacing of the windows used to map the scale,[3] or addition of dynamics features such as "delta" and "delta-delta" (first- and second-order frame-to-frame difference) coefficients.[4]

The European Telecommunications Standards Institute in the early 2000s defined a standardised MFCC algorithm to be used in mobile phones.

MFCC for speaker recognition

Because the mel-frequency bands in the MFC are distributed in a way that approximates the human auditory system, MFCCs can efficiently characterize speakers; for instance, they can be used to recognize the model of the cell phone on which a recording was made, and from there further details about the speaker.[3]

In using speech recordings to identify mobile phones, the key observation is that the electronic components in a phone are produced with tolerances, so different realizations of the same circuit do not have exactly the same transfer function; the dissimilarities become more prominent when the circuits come from different manufacturers. Hence, each cell phone introduces a convolutional distortion on the input speech that leaves a unique imprint on its recordings. A particular phone can therefore be identified from recorded speech, since the recording's spectrum is the original speech spectrum multiplied by the phone-specific transfer function, and signal-processing techniques can extract this fingerprint. Thus, MFCCs can be used to characterize cell-phone recordings and identify the brand and model of the phone.[4]

Consider the recording section of a cell phone as a linear time-invariant (LTI) filter:

Impulse response: h(n); the recorded speech signal y(n) is the output of the filter in response to the input x(n).

Hence, y(n) = x(n) * h(n) (convolution).

As speech is not a stationary signal, it is divided into overlapping frames, within which the signal is assumed to be stationary. The pth short-term segment (frame) of the recorded input speech is:


ypw(n) = [ x(n) w(pW-n) ] * h(n),


where w(n) is a window function of length W.

Hence, the footprint that the mobile phone leaves on the recorded speech is this convolutional distortion, and it is what allows the recording phone to be identified.
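As a concrete illustration of this convolutional footprint, the sketch below passes the same stand-in "speech" signal through two hypothetical phone impulse responses and frames the result. The impulse responses are invented for the example, not measured from real devices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)              # stand-in for a speech signal x(n)
h_phone_a = np.array([1.0, 0.4, 0.1])      # hypothetical impulse response, phone A
h_phone_b = np.array([1.0, -0.2, 0.05])    # hypothetical impulse response, phone B

# y(n) = x(n) * h(n): each phone leaves its own convolutional footprint
y_a = np.convolve(x, h_phone_a)
y_b = np.convolve(x, h_phone_b)

# Divide the recording into overlapping windowed frames, within which
# the signal is treated as stationary
W, hop = 256, 128
frames = [y_a[p * hop : p * hop + W] * np.hamming(W)
          for p in range((len(y_a) - W) // hop + 1)]
```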

The embedded identity of the cell phone must be converted to a more identifiable form; taking the short-time Fourier transform:


Ypw(f) = Xpw(f) H(f)


H(f) can be treated as part of the concatenated transfer function that produced the recorded speech, so Ypw(f) can be regarded as the original speech as heard through the cell phone.

The equivalent transfer function of the vocal tract and the cell-phone recorder together is thus considered the source of the recorded speech. Therefore,


Xpw(f)= Xepw(f) Xv(f), H'(f) = H(f) Xv(f),


where Xepw(f) is the excitation function, Xv(f) is the vocal-tract transfer function for the speech in the pth frame, and H'(f) is the equivalent transfer function that characterizes the cell phone.


Ypw(f) = Xepw(f) H'(f)


This approach can be useful for speaker recognition, since device identification and speaker identification are closely connected.

To give importance to the envelope of the spectrum, the spectrum is smoothed by a mel-scale filter bank with transfer function U(f); taking the log of the output energies gives:


log [|Ypw(f)|] = log [|U(f)||Xepw(f)||H'(f)|]


Representing Hw(f) = U(f) H'(f)


log [|Ypw(f)|] = log [|Xepw(f)|] + log [|Hw(f)|]


The success of MFCCs rests on this nonlinear (logarithmic) transformation, which turns the multiplicative distortion into an additive one.
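A small numerical check of this additive property, assuming an invented device response h(n): after zero-padding so that the FFT realizes the linear convolution, the log magnitude of the recording equals the sum of the component log magnitudes to numerical precision.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(512)              # stand-in excitation/speech signal
h = np.array([1.0, 0.5, 0.25, 0.125])     # hypothetical device response
y = np.convolve(x, h)                     # y(n) = x(n) * h(n)

N = len(y)                                # pad so the DFT sees the full convolution
log_Y = np.log(np.abs(np.fft.rfft(y)))
log_sum = (np.log(np.abs(np.fft.rfft(x, n=N)))
           + np.log(np.abs(np.fft.rfft(h, n=N))))
# log|Y(f)| = log|X(f)| + log|H(f)|: the distortion is now additive
```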

Transforming back to time domain:


cy(j) = ce(j) + cw(j)


where cy(j) is the cepstrum of the recorded speech, ce(j) is the cepstrum of the excitation, and cw(j) is the cepstrum of the weighted equivalent impulse response that characterizes the cell phone, with j running over the filters in the filter bank.

More precisely, the device-specific information in the recorded speech has been converted to an additive form suitable for identification.

cy(j) can be further processed to identify the recording phone.

Commonly used frame lengths are 20 or 30 ms.

Commonly used window functions are the Hamming and Hann windows.

The mel scale is a commonly used frequency scale that is linear up to 1000 Hz and logarithmic above it.

The center frequencies of the filters on the mel scale are computed as:


fmel = 1000 log10(1 + f/1000) / log10(2), i.e., fmel = 1000 log2(1 + f/1000).
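The center-frequency formula above can be written as a small Python helper; the filter count and upper frequency below are arbitrary example values.

```python
import numpy as np

def mel_from_hz(f):
    # fmel = 1000 * log10(1 + f/1000) / log10(2) = 1000 * log2(1 + f/1000)
    return 1000.0 * np.log2(1.0 + np.asarray(f, dtype=float) / 1000.0)

def hz_from_mel(m):
    # Inverse of the mapping above
    return 1000.0 * (2.0 ** (np.asarray(m, dtype=float) / 1000.0) - 1.0)

# Center frequencies of 20 filters spaced evenly on the mel scale up to 8 kHz
n_filters, f_max = 20, 8000.0
mel_centers = np.linspace(0.0, mel_from_hz(f_max), n_filters + 2)[1:-1]
hz_centers = hz_from_mel(mel_centers)
```

Note that this mapping pins 1000 Hz to 1000 mel, consistent with the scale being linear up to 1000 Hz and logarithmic above it.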


Basic procedure for MFCC calculation:

  1. Logarithmic filter bank outputs are produced and multiplied by 20 to obtain spectral envelopes in decibels.
  2. MFCCs are obtained by taking the Discrete Cosine Transform (DCT) of the spectral envelope.
  3. Cepstrum coefficients are obtained as:

ci = Σn=1..Nf Sn cos[ i (n − 0.5) π / Nf ],   i = 1, 2, ..., L,

where ci = cy(i) is the ith MFCC coefficient, Nf is the number of triangular filters in the filter bank, Sn is the log-energy output of the nth filter, and L is the number of MFCC coefficients to be calculated.
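The cepstrum formula above translates directly to code; the filter-bank log energies below are synthetic example values.

```python
import numpy as np

def cepstral_coeffs(S, L):
    # c_i = sum_{n=1}^{Nf} S_n cos[ i (n - 0.5) pi / Nf ],  i = 1..L,
    # where S[n-1] is the log-energy output of the nth filter.
    Nf = len(S)
    n = np.arange(1, Nf + 1)
    return np.array([np.sum(S * np.cos(i * (n - 0.5) * np.pi / Nf))
                     for i in range(1, L + 1)])

log_energies = np.log(np.arange(1.0, 27.0))  # synthetic 26-filter log energies
c = cepstral_coeffs(log_energies, L=13)
```

Up to scaling, this is the DCT-II from step 4 of the pipeline; a constant filter-bank output yields zero for every coefficient with i ≥ 1, since all of its information sits in the DC term.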

Applications

MFCCs are commonly used as features in speech recognition[5] systems, such as the systems which can automatically recognize numbers spoken into a telephone.

MFCCs are also increasingly finding uses in music information retrieval applications such as genre classification, audio similarity measures, etc.[6]

Noise sensitivity

MFCC values are not very robust in the presence of additive noise, and so it is common to normalise their values in speech recognition systems to lessen the influence of noise. Some researchers propose modifications to the basic MFCC algorithm to improve robustness, such as by raising the log-mel-amplitudes to a suitable power (around 2 or 3) before taking the discrete cosine transform (DCT), which reduces the influence of low-energy components.[7]
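A minimal sketch of the power-law modification described above, assuming synthetic mel-band energies; the exponent and coefficient count are illustrative, and this is a sketch of the idea rather than the exact algorithm from the cited paper.

```python
import numpy as np

def robust_cepstrum(mel_energies, power=3, n_coeffs=13):
    # Raise the log-mel amplitudes to a power (around 2 or 3) before
    # the DCT, reducing the influence of low-energy components.
    boosted = np.log(mel_energies + 1e-10) ** power
    Nf = len(boosted)
    i = np.arange(n_coeffs)[:, None]
    n = np.arange(Nf)[None, :]
    return np.cos(np.pi * i * (n + 0.5) / Nf) @ boosted

energies = np.abs(np.random.default_rng(2).standard_normal(26)) + 0.1
c_robust = robust_cepstrum(energies)
```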

History

Paul Mermelstein[8][9] is typically credited with the development of the MFC. Mermelstein credits Bridle and Brown[10] for the idea:

Bridle and Brown used a set of 19 weighted spectrum-shape coefficients given by the cosine transform of the outputs of a set of nonuniformly spaced bandpass filters. The filter spacing is chosen to be logarithmic above 1 kHz and the filter bandwidths are increased there as well. We will, therefore, call these the mel-based cepstral parameters.[8]

Sometimes both early originators are cited.[11]

Many authors, including Davis and Mermelstein,[9] have commented that the spectral basis functions of the cosine transform in the MFC are very similar to the principal components of the log spectra, which were applied to speech representation and recognition much earlier by Pols and his colleagues.[12][13]

References

  1. ^ Min Xu; et al. (2004). "HMM-based audio keyword generation" (PDF). In Kiyoharu Aizawa; Yuichi Nakamura; Shin'ichi Satoh (eds.). Advances in Multimedia Information Processing – PCM 2004: 5th Pacific Rim Conference on Multimedia. Springer. ISBN 978-3-540-23985-7. Archived from the original (PDF) on 2007-05-10.
  2. ^ Sahidullah, Md.; Saha, Goutam (May 2012). "Design, analysis and experimental evaluation of block based transformation in MFCC computation for speaker recognition". Speech Communication. 54 (4): 543–565. doi:10.1016/j.specom.2011.11.004.
  3. ^ a b Fang Zheng, Guoliang Zhang and Zhanjiang Song (2001), "Comparison of Different Implementations of MFCC," J. Computer Science & Technology, 16(6): 582–589.
  4. ^ a b S. Furui (1986), "Speaker-independent isolated word recognition based on emphasized spectral dynamics"
  5. ^ T. Ganchev, N. Fakotakis, and G. Kokkinakis (2005), "Comparative evaluation of various MFCC implementations on the speaker verification task Archived 2011-07-17 at the Wayback Machine," in 10th International Conference on Speech and Computer (SPECOM 2005), Vol. 1, pp. 191–194.
  6. ^ Meinard Müller (2007). Information Retrieval for Music and Motion. Springer. p. 65. ISBN 978-3-540-74047-6.
  7. ^ V. Tyagi and C. Wellekens (2005), On desensitizing the Mel-Cepstrum to spurious spectral components for Robust Speech Recognition, in Acoustics, Speech, and Signal Processing, 2005. Proceedings. (ICASSP ’05). IEEE International Conference on, vol. 1, pp. 529–532.
  8. ^ a b P. Mermelstein (1976), "Distance measures for speech recognition, psychological and instrumental," in Pattern Recognition and Artificial Intelligence, C. H. Chen, Ed., pp. 374–388. Academic, New York.
  9. ^ a b S.B. Davis, and P. Mermelstein (1980), "Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences," in IEEE Transactions on Acoustics, Speech, and Signal Processing, 28(4), pp. 357–366.
  10. ^ J. S. Bridle and M. D. Brown (1974), "An Experimental Automatic Word-Recognition System", JSRU Report No. 1003, Joint Speech Research Unit, Ruislip, England.
  11. ^ Nelson Morgan; Hervé Bourlard & Hynek Hermansky (2004). "Automatic Speech Recognition: An Auditory Perspective". In Steven Greenberg & William A. Ainsworth (eds.). Speech Processing in the Auditory System. Springer. p. 315. ISBN 978-0-387-00590-4.
  12. ^ L. C. W. Pols (1966), "Spectral Analysis and Identification of Dutch Vowels in Monosyllabic Words," Doctoral dissertation, Free University, Amsterdam, The Netherlands
  13. ^ R. Plomp, L. C. W. Pols, and J. P. van de Geer (1967). "Dimensional analysis of vowel spectra." J. Acoustical Society of America, 41(3):707–712.
