Mel-frequency cepstrum

Signal representation used in automatic speech recognition

In sound processing, the mel-frequency cepstrum (MFC) is a representation of the short-term power spectrum of a sound, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency.

Mel-frequency cepstral coefficients (MFCCs) are coefficients that collectively make up an MFC.[1] They are derived from a type of cepstral representation of the audio clip (a nonlinear "spectrum-of-a-spectrum"). The difference between the cepstrum and the mel-frequency cepstrum is that in the MFC, the frequency bands are equally spaced on the mel scale, which approximates the human auditory system's response more closely than the linearly spaced frequency bands used in the normal spectrum. This frequency warping can allow for better representation of sound, for example, in audio compression that might potentially reduce the transmission bandwidth and the storage requirements of audio signals.

MFCCs are commonly derived as follows:[2][3]

  1. Take the Fourier transform of (a windowed excerpt of) a signal.
  2. Map the powers of the spectrum obtained above onto the mel scale, using triangular overlapping windows or, alternatively, cosine overlapping windows.
  3. Take the logs of the powers at each of the mel frequencies.
  4. Take the discrete cosine transform of the list of mel log powers, as if it were a signal.
  5. The MFCCs are the amplitudes of the resulting spectrum.
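
As a concrete illustration of these steps, the following NumPy sketch computes MFCCs for a single frame. The sampling rate, filter-bank size, FFT length and triangular-filter construction are illustrative assumptions rather than part of the definition, and the mel formula used here (2595 log10(1 + f/700)) is one of several common variants.

    import numpy as np
    from scipy.fft import dct  # type-II DCT, commonly used for MFCCs

    def hz_to_mel(f):
        # widely used mel formula; other variants exist (see the formula later in this article)
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    def mfcc_frame(frame, sr, n_mels=26, n_mfcc=13, n_fft=512):
        """Compute MFCCs for one frame, following the five steps above."""
        # 1. Fourier transform of the windowed excerpt
        power = np.abs(np.fft.rfft(frame * np.hamming(len(frame)), n=n_fft)) ** 2

        # 2. Map the power spectrum onto the mel scale with triangular overlapping windows
        hz_edges = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2))
        bins = np.floor((n_fft + 1) * hz_edges / sr).astype(int)
        fbank = np.zeros((n_mels, n_fft // 2 + 1))
        for m in range(1, n_mels + 1):
            left, center, right = bins[m - 1], bins[m], bins[m + 1]
            for k in range(left, center):
                fbank[m - 1, k] = (k - left) / max(center - left, 1)
            for k in range(center, right):
                fbank[m - 1, k] = (right - k) / max(right - center, 1)
        mel_energies = fbank @ power

        # 3. Logs of the powers at each mel frequency
        log_mel = np.log(mel_energies + 1e-10)

        # 4./5. DCT of the mel log powers; the resulting amplitudes are the MFCCs
        return dct(log_mel, type=2, norm='ortho')[:n_mfcc]

    # usage: a 25 ms frame of a synthetic 440 Hz tone sampled at 16 kHz
    sr = 16000
    t = np.arange(int(0.025 * sr)) / sr
    coeffs = mfcc_frame(np.sin(2 * np.pi * 440.0 * t), sr)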

There can be variations on this process, for example: differences in the shape or spacing of the windows used to map the scale,[4] or addition of dynamics features such as "delta" and "delta-delta" (first- and second-order frame-to-frame difference) coefficients.[5]
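
For the dynamics features mentioned above, a minimal sketch of first-order "delta" coefficients as plain frame-to-frame differences might look as follows; regression-based delta formulas are also common, and setting the first frame's delta to zero is an illustrative convention, not a standard.

    import numpy as np

    def delta(features):
        """First-order frame-to-frame difference along the time axis.
        `features` is an (n_frames, n_coeffs) array of MFCCs."""
        d = np.zeros_like(features)
        d[1:] = features[1:] - features[:-1]
        return d

    # delta-delta coefficients are the same operation applied to the deltas:
    # deltas = delta(mfccs); delta_deltas = delta(deltas)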

The European Telecommunications Standards Institute in the early 2000s defined a standardised MFCC algorithm to be used in mobile phones.[6]

Applications


MFCCs are commonly used as features in speech recognition[7] systems, such as the systems which can automatically recognize numbers spoken into a telephone.

MFCCs are also increasingly finding uses in music information retrieval applications such as genre classification, audio similarity measures, etc.[8]

MFCC for speaker recognition


Since the mel-frequency bands in the MFCC are spaced to approximate the human auditory response, MFCCs can efficiently be used to characterize speakers. For instance, they can be used to recognize characteristics of the speaker's cell phone model and, further, details of the speaker's voice.[4]

This type of mobile device recognition is possible because the manufacture of a phone's electronic components is subject to tolerances: different realizations of the same electronic circuit do not have exactly the same transfer function. The dissimilarities in the transfer function from one realization to another become more prominent if the circuits performing the task come from different manufacturers. Hence, each cell phone introduces a convolutional distortion on the input speech that leaves a unique imprint on its recordings. Because the spectrum of the recorded speech is the original frequency spectrum multiplied by the transfer function specific to the phone, this footprint can be extracted with signal processing techniques; MFCCs can thus be used to characterize cell phone recordings and identify the brand and model of the phone.[5]

Consider the recording section of a cell phone as a linear time-invariant (LTI) filter:

Its impulse response is h(n), and the recorded speech signal y(n) is the output of the filter in response to the input x(n).

Hence, y(n) = x(n) * h(n) (convolution).

As speech is not a stationary signal, it is divided into overlapping frames within which the signal is assumed to be stationary. So the p-th short-term segment (frame) of the recorded input speech is:

y_p^w(n) = [x(n) w(pW - n)] * h(n),

where w(n) is a window function of length W.

Hence, as specified above, the footprint of the mobile phone on the recorded speech is the convolutional distortion, which helps to identify the recording phone.

The embedded identity of the cell phone must be converted to a more identifiable form; hence, taking the short-time Fourier transform:

Y_p^w(f) = X_p^w(f) H(f)

H(f) can be considered as concatenated with the transfer function that produced the input speech, so that the recorded speech Y_p^w(f) can be perceived as original speech coming from the cell phone.

So the equivalent transfer function of the vocal tract and the cell phone recorder is considered the original source of the recorded speech. Therefore,

X_p^w(f) = X_{e,p}^w(f) X_v(f),   H'(f) = H(f) X_v(f),

where X_{e,p}^w(f) is the excitation function, X_v(f) is the vocal tract transfer function for the speech in the p-th frame, and H'(f) is the equivalent transfer function that characterizes the cell phone.

Y_p^w(f) = X_{e,p}^w(f) H'(f)

This approach can be useful for speaker recognition, as device identification and speaker identification are closely connected.

To give importance to the envelope of the spectrum, it is multiplied by a filter bank (a suitable mel-scale filter bank for the cepstrum). After smoothing by the filter bank with transfer function U(f), the logarithm of the output energies is:

log[|Y_p^w(f)|] = log[|U(f)| |X_{e,p}^w(f)| |H'(f)|]

Representing H_w(f) = U(f) H'(f),

log[|Y_p^w(f)|] = log[|X_{e,p}^w(f)|] + log[|H_w(f)|]

MFCCs are successful here because this nonlinear (logarithmic) transformation turns the multiplicative channel distortion into an additive one.

Transforming back to the time domain:

c_y(j) = c_e(j) + c_w(j),

where c_y(j), c_e(j), and c_w(j) are the cepstra of the recorded speech, the excitation, and the weighted equivalent impulse response of the cell phone recorder (which characterizes the cell phone), respectively; j indexes the coefficients, up to the number of filters in the filter bank.

More precisely, the device-specific information in the recorded speech is converted into an additive form suitable for identification.

c_y(j) can be further processed to identify the recording phone.
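
The additive property that this derivation relies on can be checked numerically. The sketch below uses a toy excitation and a short FIR response standing in for the phone's recording chain (both are illustrative assumptions, and the smoothing filter bank U(f) is omitted); it convolves a signal with the channel and verifies that the log-magnitude spectra add.

    import numpy as np

    rng = np.random.default_rng(0)

    # toy excitation and a short FIR response standing in for the phone's
    # recording chain (both illustrative, not real measurements)
    x = rng.standard_normal(512)
    h = np.array([1.0, 0.5, 0.25, 0.125])

    y = np.convolve(x, h)            # y(n) = x(n) * h(n)
    nfft = len(y)                    # FFT long enough to hold the linear convolution

    X = np.abs(np.fft.rfft(x, nfft))
    H = np.abs(np.fft.rfft(h, nfft))
    Y = np.abs(np.fft.rfft(y, nfft))

    # the multiplicative channel distortion becomes additive after the log,
    # which is what the cepstral separation above relies on
    print(np.allclose(np.log(Y), np.log(X) + np.log(H)))   # True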

Commonly used frame lengths are 20 to 30 ms.

Commonly used window functions are the Hamming and Hann (Hanning) windows.

The mel scale is a commonly used frequency scale that is linear up to 1000 Hz and logarithmic above it.

The central frequencies of the filters on the mel scale are computed as:

f_mel = 1000 log10(1 + f / 1000) / log10(2).
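
A direct transcription of this formula follows; note that it differs from the 2595 log10(1 + f/700) variant used in some toolkits.

    import math

    def hz_to_mel_1000(f_hz):
        """Mel value per the formula above (1000 Hz maps to 1000 mel).
        Other mel formulas, e.g. 2595*log10(1 + f/700), are also in common use."""
        return 1000.0 * math.log10(1.0 + f_hz / 1000.0) / math.log10(2.0)

    print(hz_to_mel_1000(1000.0))   # 1000.0, the linear-to-logarithmic knee
    print(hz_to_mel_1000(4000.0))   # about 2322 mel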

Basic procedure for MFCC calculation:

  1. Logarithmic filter bank outputs are produced and multiplied by 20 to obtain spectral envelopes in decibels.
  2. MFCCs are obtained by taking the discrete cosine transform (DCT) of the spectral envelope.
  3. The cepstral coefficients are obtained as:

c_i = Σ_{n=1}^{N_f} S_n cos(i (n - 0.5) π / N_f),   i = 1, …, L,

where c_i = c_y(i) corresponds to the i-th MFCC coefficient, N_f is the number of triangular filters in the filter bank, S_n is the log-energy output of the n-th filter, and L is the number of MFCC coefficients to be calculated.
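
A direct transcription of this sum, assuming the log filter-bank energies S_n are already available (for example from a pipeline like the sketch earlier in this article):

    import numpy as np

    def mfcc_from_log_energies(S, L=13):
        """Evaluate c_i = sum_{n=1}^{N_f} S_n cos(i (n - 0.5) pi / N_f) for i = 1..L."""
        S = np.asarray(S, dtype=float)
        n_f = len(S)                              # number of filters N_f
        n = np.arange(1, n_f + 1)                 # filter index n = 1..N_f
        i = np.arange(1, L + 1)[:, None]          # coefficient index i = 1..L
        return np.sum(S * np.cos(i * (n - 0.5) * np.pi / n_f), axis=1)

    # usage with arbitrary illustrative log energies from a 20-filter bank
    c = mfcc_from_log_energies(np.linspace(1.0, 0.1, 20), L=12)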

Inversion


An MFCC can be approximately inverted to audio in four steps: (a1) inverse DCT to obtain a mel log-power [dB] spectrogram, (a2) mapping from log-power to power to obtain a mel power spectrogram, (b1) rescaling to obtain short-time Fourier transform magnitudes, and finally (b2) phase reconstruction and audio synthesis using Griffin-Lim. Each step corresponds to one step of the MFCC calculation.[9]
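
Using librosa, which implements the inversion cited above, a round trip might look roughly like this; the test tone, sampling rate and coefficient counts are illustrative choices.

    import librosa

    # synthesize a short test tone so no external audio file is needed
    sr = 22050
    y = librosa.tone(440.0, sr=sr, duration=1.0)

    # forward: waveform -> MFCCs (20 coefficients over a 128-band mel spectrogram)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20, n_mels=128)

    # inverse: MFCCs -> approximate waveform, i.e. steps (a1)-(b2) above
    # (inverse DCT, dB-to-power, mel-to-STFT rescaling, Griffin-Lim phase recovery)
    y_hat = librosa.feature.inverse.mfcc_to_audio(mfcc, n_mels=128, sr=sr)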

Noise sensitivity


MFCC values are not very robust in the presence of additive noise, and so it is common to normalise their values in speech recognition systems to lessen the influence of noise. Some researchers propose modifications to the basic MFCC algorithm to improve robustness, such as by raising the log-mel-amplitudes to a suitable power (around 2 or 3) before taking the discrete cosine transform (DCT), which reduces the influence of low-energy components.[10]
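
A minimal sketch of that modification, assuming the log-mel amplitudes for one frame are already available; the shift to a non-negative range and the default exponent are illustrative choices, not details taken from the cited paper.

    import numpy as np
    from scipy.fft import dct

    def power_law_cepstrum(log_mel, gamma=2.0, n_coeffs=13):
        """Raise the log-mel amplitudes to a power (around 2 or 3) before the DCT
        so that low-energy components contribute less to the coefficients."""
        shifted = log_mel - np.min(log_mel)       # keep the amplitudes non-negative
        return dct(np.power(shifted, gamma), type=2, norm='ortho')[:n_coeffs]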

History


Paul Mermelstein[11][12] is typically credited with the development of the MFC. Mermelstein credits Bridle and Brown[13] for the idea:

Bridle and Brown used a set of 19 weighted spectrum-shape coefficients given by the cosine transform of the outputs of a set of nonuniformly spaced bandpass filters. The filter spacing is chosen to be logarithmic above 1 kHz and the filter bandwidths are increased there as well. We will, therefore, call these the mel-based cepstral parameters.[11]

Sometimes both early originators are cited.[14]

Many authors, including Davis and Mermelstein,[12] have commented that the spectral basis functions of the cosine transform in the MFC are very similar to the principal components of the log spectra, which were applied to speech representation and recognition much earlier by Pols and his colleagues.[15][16]


References

  1. ^ Min Xu; et al. (2004). "HMM-based audio keyword generation" (PDF). In Kiyoharu Aizawa; Yuichi Nakamura; Shin'ichi Satoh (eds.). Advances in Multimedia Information Processing – PCM 2004: 5th Pacific Rim Conference on Multimedia. Lecture Notes in Computer Science. Vol. 3333. Springer. pp. 566–574. doi:10.1007/978-3-540-30543-9_71. ISBN 978-3-540-23985-7. Archived from the original (PDF) on 2007-05-10.
  2. ^ Sahidullah, Md.; Saha, Goutam (May 2012). "Design, analysis and experimental evaluation of block based transformation in MFCC computation for speaker recognition". Speech Communication. 54 (4): 543–565. doi:10.1016/j.specom.2011.11.004. S2CID 14985832.
  3. ^ Abdulsatar, Assim Ara; Davydov, V V; Yushkova, V V; Glinushkin, A P; Rud, V Yu (2019-12-01). "Age and gender recognition from speech signals". Journal of Physics: Conference Series. 1410 (1) 012073. Bibcode:2019JPhCS1410a2073A. doi:10.1088/1742-6596/1410/1/012073. ISSN 1742-6588. S2CID 213065622.
  4. ^ a b Zheng, Fang; Zhang, Guoliang; Song, Zhanjiang (2001). "Comparison of different implementations of MFCC". Journal of Computer Science and Technology. 16 (6): 582–589. doi:10.1007/BF02943243.
  5. ^ a b Furui, S. (1986). "Speaker-independent isolated word recognition based on emphasized spectral dynamics". ICASSP '86. IEEE International Conference on Acoustics, Speech, and Signal Processing. Vol. 11. pp. 1991–1994. doi:10.1109/ICASSP.1986.1168654.
  6. ^ European Telecommunications Standards Institute (2003), Speech Processing, Transmission and Quality Aspects (STQ); Distributed speech recognition; Front-end feature extraction algorithm; Compression algorithms. Technical standard ES 201 108, v1.1.3.
  7. ^ T. Ganchev, N. Fakotakis, and G. Kokkinakis (2005), "Comparative evaluation of various MFCC implementations on the speaker verification task" (Archived 2011-07-17 at the Wayback Machine), in 10th International Conference on Speech and Computer (SPECOM 2005), Vol. 1, pp. 191–194.
  8. ^ Meinard Müller (2007). Information Retrieval for Music and Motion. Springer. p. 65. ISBN 978-3-540-74047-6.
  9. ^ "librosa.feature.inverse.mfcc_to_audio — librosa 0.10.0 documentation". librosa.org.
  10. ^ Tyagi, V.; Wellekens, C. (2005). "On desensitizing the Mel-Cepstrum to spurious spectral components for Robust Speech Recognition". Proceedings. (ICASSP '05). IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005. Vol. 1. pp. 529–532. doi:10.1109/ICASSP.2005.1415167. ISBN 0-7803-8874-7.
  11. ^ a b P. Mermelstein (1976), "Distance measures for speech recognition, psychological and instrumental", in Pattern Recognition and Artificial Intelligence, C. H. Chen, Ed., pp. 374–388. Academic, New York.
  12. ^ a b Davis, S.; Mermelstein, P. (1980). "Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences". IEEE Transactions on Acoustics, Speech, and Signal Processing. 28 (4): 357–366. doi:10.1109/TASSP.1980.1163420.
  13. ^ J. S. Bridle and M. D. Brown (1974), "An Experimental Automatic Word-Recognition System", JSRU Report No. 1003, Joint Speech Research Unit, Ruislip, England.
  14. ^ Nelson Morgan; Hervé Bourlard & Hynek Hermansky (2004). "Automatic Speech Recognition: An Auditory Perspective". In Steven Greenberg & William A. Ainsworth (eds.). Speech Processing in the Auditory System. Springer. p. 315. ISBN 978-0-387-00590-4.
  15. ^ L. C. W. Pols (1966), "Spectral Analysis and Identification of Dutch Vowels in Monosyllabic Words", Doctoral dissertation, Free University, Amsterdam, the Netherlands.
  16. ^ Plomp, R.; Pols, L. C. W.; Van De Geer, J. P. (1967). "Dimensional Analysis of Vowel Spectra". The Journal of the Acoustical Society of America. 41 (3): 707–712. Bibcode:1967ASAJ...41..707P. doi:10.1121/1.1910398.
