In sound processing, the mel-frequency cepstrum (MFC) is a representation of the short-term power spectrum of a sound, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency.
Mel-frequency cepstral coefficients (MFCCs) are coefficients that collectively make up an MFC.[1] They are derived from a type of cepstral representation of the audio clip (a nonlinear "spectrum-of-a-spectrum"). The difference between the cepstrum and the mel-frequency cepstrum is that in the MFC, the frequency bands are equally spaced on the mel scale, which approximates the human auditory system's response more closely than the linearly spaced frequency bands used in the normal spectrum. This frequency warping can allow for better representation of sound, for example, in audio compression that might potentially reduce the transmission bandwidth and the storage requirements of audio signals.
MFCCs are commonly derived as follows:[2][3]

1. Take the Fourier transform of (a windowed excerpt of) a signal.
2. Map the powers of the spectrum obtained above onto the mel scale, using triangular overlapping windows.
3. Take the logs of the powers at each of the mel frequencies.
4. Take the discrete cosine transform of the list of mel log powers, as if it were a signal.
5. The MFCCs are the amplitudes of the resulting spectrum.
There can be variations on this process, for example: differences in the shape or spacing of the windows used to map the scale,[4] or addition of dynamics features such as "delta" and "delta-delta" (first- and second-order frame-to-frame difference) coefficients.[5]
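The derivation above can be sketched end to end in NumPy. This is a minimal illustration, not a reference implementation: the filter-bank construction, the mel-mapping constants, and parameter values such as 26 filters and 13 coefficients are common choices, not prescribed by the text.

```python
import numpy as np

def hz_to_mel(f):
    # Common mapping: approximately linear below ~1 kHz, logarithmic above
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters whose centre frequencies are equally spaced on the mel scale
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for j in range(1, n_filters + 1):
        l, c, r = bins[j - 1], bins[j], bins[j + 1]
        for k in range(l, c):
            fb[j - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[j - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(frame, sr, n_filters=26, n_coeffs=13):
    # 1) power spectrum of a windowed excerpt
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    # 2) map powers onto the mel scale, 3) take logs of the band energies
    energies = np.log(mel_filterbank(n_filters, len(frame), sr) @ spectrum + 1e-10)
    # 4) DCT of the mel log powers; 5) keep the first n_coeffs amplitudes
    i = np.arange(n_coeffs)[:, None]
    k = np.arange(n_filters)[None, :]
    dct = np.cos(np.pi * i * (2 * k + 1) / (2 * n_filters))
    return dct @ energies

sr = 16000
t = np.arange(int(0.025 * sr)) / sr            # one 25 ms frame
coeffs = mfcc(np.sin(2 * np.pi * 440 * t), sr)  # MFCCs of a 440 Hz tone
```

In practice, library implementations (e.g. in speech toolkits) differ in filter normalisation, pre-emphasis, and DCT scaling, which is one source of the "variations on this process" noted above.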
The European Telecommunications Standards Institute in the early 2000s defined a standardised MFCC algorithm to be used in mobile phones.[6]
MFCCs are commonly used as features in speech recognition[7] systems, such as systems that automatically recognize numbers spoken into a telephone.
MFCCs are also increasingly finding uses in music information retrieval applications such as genre classification, audio similarity measures, etc.[8]
Since the mel-frequency bands are distributed evenly in MFCC, and they closely approximate the human auditory response, MFCCs can efficiently be used to characterize speakers. For instance, they can be used to recognize the characteristics of the speaker's cell-phone model and, further, details of the speaker's voice.[4]
This type of mobile device recognition is possible because the production of electronic components in a phone has tolerances: different electronic circuit realizations do not have exactly the same transfer functions. The dissimilarities in the transfer function from one realization to another become more prominent if the circuits performing the task come from different manufacturers. Hence, each cell phone introduces a convolutional distortion on input speech that leaves a unique imprint on recordings made with that phone. A particular phone can therefore be identified from recorded speech, because the recording's frequency spectrum is the original spectrum multiplied by the transfer function specific to that phone, which can be recovered with signal processing techniques. Thus, by using MFCCs one can characterize cell-phone recordings to identify the brand and model of the phone.[5]
Consider the recording section of a cell phone as a linear time-invariant (LTI) filter with impulse response h(n), so that the recorded speech signal y(n) is the output of the filter in response to the input x(n). Hence (convolution):

y(n) = x(n) ∗ h(n)
As speech is not a stationary signal, it is divided into overlapped frames within which the signal is assumed to be stationary. The short-term segment (frame) of the recorded input speech is therefore:

yw(n) = y(n) w(n)

where w(n) is a window function of length W.
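The framing and windowing step can be sketched as follows; the frame length, hop size, and choice of Hamming window here are illustrative, not mandated by the text.

```python
import numpy as np

def frame_signal(y, frame_len, hop):
    # Slice y(n) into overlapped frames and apply a window w(n) of length
    # frame_len, giving one short-term segment yw(n) = y(n) w(n) per frame.
    n_frames = 1 + (len(y) - frame_len) // hop
    window = np.hamming(frame_len)
    return np.stack([y[i * hop : i * hop + frame_len] * window
                     for i in range(n_frames)])

sr = 8000
y = np.random.default_rng(0).standard_normal(sr)   # 1 s of noise as a stand-in signal
frames = frame_signal(y, frame_len=int(0.025 * sr), hop=int(0.010 * sr))
```

With 25 ms frames and a 10 ms hop, consecutive frames overlap by 60%, which is within the range commonly used in speech analysis.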
Hence, as specified above, the footprint that the mobile phone leaves on the recorded speech is this convolutional distortion, which helps to identify the recording phone.
The embedded identity of the cell phone requires conversion to a more identifiable form; hence, taking the short-time Fourier transform:

Yw(f) = Xw(f) H(f)

Xw(f) = Xew(f) Hv(f) can be considered as a concatenated transfer function that produced the input speech, and the recorded speech can be perceived as the original speech filtered by the cell phone.

So, the equivalent transfer function of the vocal tract and the cell-phone recorder is considered as the original source of the recorded speech. Therefore,

Yw(f) = Xew(f) Hv(f) H(f) = Xew(f) Heq(f)

where Xew(f) is the excitation function, Hv(f) is the vocal tract transfer function for the speech in the frame, and Heq(f) = Hv(f) H(f) is the equivalent transfer function that characterizes the cell phone.
This approach can be useful for speaker recognition as the device identification and the speaker identification are very much connected.
Giving importance to the envelope of the spectrum, which is multiplied by the filter bank (a suitable cepstrum with a mel-scale filter bank), and smoothing with a filter bank of transfer function U(f), the log of the output energies is:

log Ey(j) = log Ee(j) + log Ew(j)

where Ey(j) is the energy of the recorded speech in the j-th band, Ee(j) the contribution of the excitation, and Ew(j) the contribution of the equivalent transfer function. Representing the multiplicative distortion of the cell phone as an additive term in the log domain is what makes MFCC successful: the nonlinear (log) transformation has this additive property.
Transforming back to the time domain:

cy(j) = ce(j) + cw(j)
where cy(j), ce(j) and cw(j) are, respectively, the recorded speech cepstrum, the excitation cepstrum and the weighted equivalent impulse response cepstrum of the cell-phone recorder that characterizes the cell phone, while j indexes the filters in the filter bank.
More precisely, the device-specific information in the recorded speech is converted to an additive form suitable for identification, and cy(j) can be further processed to identify the recording phone.
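The additive property that the derivation relies on can be checked numerically: convolution in time is multiplication in frequency, so after the log the device term becomes additive. The excitation signal and the toy impulse response below are illustrative stand-ins, not data from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(512)            # stand-in for the input speech x(n)
h = np.array([1.0, 0.5, 0.25, 0.125])   # toy impulse response h(n) of a recorder
y = np.convolve(x, h)                   # recorded signal: y(n) = x(n) * h(n)

def log_spectrum(s, n=1024):
    # log magnitude spectrum; n is large enough to make the circular
    # convolution of the FFT equal to the linear convolution above
    return np.log(np.abs(np.fft.rfft(s, n)) + 1e-12)

# log|Y(f)| = log|X(f)| + log|H(f)| -- the multiplicative device distortion
# becomes an additive term after the log, as in the cepstral derivation
lhs = log_spectrum(y)
rhs = log_spectrum(x) + log_spectrum(h)
```

Taking the inverse transform of these log spectra would give the cepstra, where the same additivity appears as cy(j) = ce(j) + cw(j).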
Often used frame lengths are 20–30 ms. Commonly used window functions are the Hamming and Hann windows. The mel scale is a commonly used frequency scale that is approximately linear up to 1000 Hz and logarithmic above it.
The centre frequencies of the filters are computed on the mel scale, using the common mapping m = 2595 log10(1 + f/700) between frequency f in hertz and mel value m.
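The centre-frequency computation can be sketched as follows; the choice of 10 filters and a 4 kHz upper limit is illustrative.

```python
import numpy as np

def hz_to_mel(f):
    # m = 2595 * log10(1 + f / 700): linear below ~1 kHz, logarithmic above
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    # inverse mapping back to hertz
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Centre frequencies of 10 filters spaced evenly on the mel scale up to 4 kHz;
# the two extra points are the band edges of the first and last filter.
edges = np.linspace(hz_to_mel(0.0), hz_to_mel(4000.0), 10 + 2)
centres = mel_to_hz(edges)[1:-1]
```

Note that 1000 Hz maps to approximately 1000 mels under this formula, which is the anchor point of the scale.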
Basic procedure for MFCC calculation:

ci = Σk=1..M Ek cos[ π i (k − 1/2) / M ],  i = 1, 2, …, L

where ci corresponds to the i-th MFCC coefficient, M is the number of triangular filters in the filter bank, Ek is the log energy output of the k-th filter, and L is the number of MFCC coefficients that we want to calculate.
An MFCC can be approximately inverted to audio in four steps: (a1) inverse DCT to obtain a mel log-power [dB] spectrogram, (a2) mapping from dB to power to obtain a mel power spectrogram, (b1) rescaling to obtain short-time Fourier transform magnitudes, and finally (b2) phase reconstruction and audio synthesis using the Griffin-Lim algorithm. Each step corresponds to one step in MFCC calculation.[9]
MFCC values are not very robust in the presence of additive noise, and so it is common to normalise their values in speech recognition systems to lessen the influence of noise. Some researchers propose modifications to the basic MFCC algorithm to improve robustness, such as by raising the log-mel-amplitudes to a suitable power (around 2 or 3) before taking the discrete cosine transform (DCT), which reduces the influence of low-energy components.[10]
Paul Mermelstein[11][12] is typically credited with the development of the MFC. Mermelstein credits Bridle and Brown[13] for the idea:
Bridle and Brown used a set of 19 weighted spectrum-shape coefficients given by the cosine transform of the outputs of a set of nonuniformly spaced bandpass filters. The filter spacing is chosen to be logarithmic above 1 kHz and the filter bandwidths are increased there as well. We will, therefore, call these the mel-based cepstral parameters.[11]
Sometimes both early originators are cited.[14]
Many authors, including Davis and Mermelstein,[12] have commented that the spectral basis functions of the cosine transform in the MFC are very similar to the principal components of the log spectra, which were applied to speech representation and recognition much earlier by Pols and his colleagues.[15][16]