Time stretching is the process of changing the speed or duration of an audio signal without affecting its pitch. Pitch scaling is the opposite: the process of changing the pitch without affecting the speed. Pitch shift is pitch scaling implemented in an effects unit and intended for live performance. Pitch control is a simpler process which affects pitch and speed simultaneously by slowing down or speeding up a recording.
These processes are often used to match the pitches and tempos of two pre-recorded clips for mixing when the clips cannot be reperformed or resampled. Time stretching is often used to adjust radio commercials[1] and the audio of television advertisements[2] to fit exactly into the 30 or 60 seconds available. It can be used to conform longer material to a designated time slot, such as a 1-hour broadcast.
The simplest way to change the duration or pitch of an audio recording is to change the playback speed. For a digital audio recording, this can be accomplished through sample rate conversion. When using this method, the frequencies in the recording are always scaled at the same ratio as the speed, transposing its pitch up or down in the process. Slowing down the recording to increase duration also lowers the pitch, while speeding it up for a shorter duration raises the pitch, creating the so-called Chipmunk effect. When resampling audio to a notably lower pitch, it may be preferred that the source audio is of a higher sample rate, as slowing down the playback rate will reproduce an audio signal of a lower resolution, and therefore reduce the perceived clarity of the sound. When resampling audio to a notably higher pitch, it may be preferred to incorporate an interpolation filter, as frequencies that surpass the Nyquist frequency (determined by the sampling rate of the audio reproduction software or device) will create sound distortions due to aliasing.
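The coupling of speed and pitch under resampling can be sketched in a few lines of NumPy. The helper name `change_speed` is hypothetical, and linear interpolation stands in for a proper (filtered) sample rate converter:

```python
import numpy as np

def change_speed(x, factor):
    """Resample by linear interpolation. factor > 1 speeds playback up
    (shorter clip, higher pitch); factor < 1 slows it down (longer,
    lower pitch). A real converter would also low-pass filter."""
    positions = np.arange(int(len(x) / factor)) * factor
    return np.interp(positions, np.arange(len(x)), x)

sr = 8000
t = np.arange(sr) / sr                  # one second of audio
tone = np.sin(2 * np.pi * 440 * t)      # 440 Hz sine

fast = change_speed(tone, 2.0)          # half the duration (4000 samples)
# Played back at the same sample rate, `fast` sounds one octave
# higher (880 Hz): pitch and speed change together.
```

Note that both effects follow from the same index arithmetic: reading the source at twice the rate halves the duration and doubles every frequency at once.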
One way of stretching the length of a signal without affecting the pitch is to build a phase vocoder after Flanagan, Golden, and Portnoff.
Basic steps:
1. Compute the short-time Fourier transform (STFT) of the signal over overlapping analysis windows.
2. Adjust the phase of each frequency bin so that it keeps advancing at its estimated instantaneous frequency when the frames are respaced at the new synthesis hop.
3. Resynthesize the signal with an inverse STFT, overlap-adding the modified frames at the synthesis hop.
The phase vocoder handles sinusoid components well, but early implementations introduced considerable smearing on transient ("beat") waveforms at all non-integer compression/expansion rates, which renders the results phasey and diffuse. Recent improvements allow better quality results at all compression/expansion ratios but a residual smearing effect still remains.
The phase vocoder technique can also be used to perform pitch shifting, chorusing, timbre manipulation, harmonizing, and other unusual modifications, all of which can be changed as a function of time.
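The steps above can be sketched as a minimal phase vocoder in NumPy. The function name and parameters are illustrative, not from any particular library, and the sketch omits the transient handling discussed above:

```python
import numpy as np

def phase_vocoder(x, stretch, n_fft=1024, hop=256):
    """Minimal phase-vocoder time stretch: stretch > 1 lengthens the
    signal; pitch is preserved by making each bin's phase advance at
    its estimated instantaneous frequency over the synthesis hop."""
    win = np.hanning(n_fft)
    syn_hop = int(round(hop * stretch))
    bins = np.arange(n_fft // 2 + 1)
    expected = 2 * np.pi * bins * hop / n_fft  # phase advance per analysis hop
    starts = range(0, len(x) - n_fft, hop)
    out = np.zeros(len(starts) * syn_hop + n_fft)
    phase_acc = prev_phase = None
    for i, s in enumerate(starts):
        spec = np.fft.rfft(win * x[s:s + n_fft])
        phase = np.angle(spec)
        if phase_acc is None:                  # first frame: copy phases
            phase_acc = phase.copy()
        else:
            # deviation of the measured advance from the expected one,
            # wrapped to [-pi, pi): yields the instantaneous frequency
            dev = phase - prev_phase - expected
            dev -= 2 * np.pi * np.round(dev / (2 * np.pi))
            phase_acc = phase_acc + (expected + dev) * syn_hop / hop
        prev_phase = phase
        frame = np.fft.irfft(np.abs(spec) * np.exp(1j * phase_acc))
        out[i * syn_hop:i * syn_hop + n_fft] += win * frame
    return out
```

Stretching a pure tone by 1.5 with this sketch yields a signal roughly 1.5 times as long whose dominant frequency is unchanged, which is exactly the behavior resampling cannot provide.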

Another method for time stretching relies on a spectral model of the signal. In this method, peaks are identified in frames using the STFT of the signal, and sinusoidal "tracks" are created by connecting peaks in adjacent frames. The tracks are then re-synthesized at a new time scale. This method can yield good results on both polyphonic and percussive material, especially when the signal is separated into sub-bands. However, this method is more computationally demanding than other methods.[citation needed]
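The analysis half of such a sinusoidal model can be sketched as follows. The helper name, the 50 Hz matching threshold, and the nearest-frequency linking rule are all simplifying assumptions; real trackers handle births, deaths, and crossings of partials more carefully:

```python
import numpy as np

def track_peaks(x, sr, n_fft=1024, hop=256, n_peaks=5):
    """Pick the strongest spectral peaks in each STFT frame and link
    each one to the nearest-frequency peak of the previous frame,
    forming sinusoidal tracks of (frame_index, frequency_hz) pairs."""
    win = np.hanning(n_fft)
    tracks = []   # each track is a list of (frame_index, freq_hz)
    prev = []     # indices into `tracks` active in the previous frame
    for i, s in enumerate(range(0, len(x) - n_fft, hop)):
        mag = np.abs(np.fft.rfft(win * x[s:s + n_fft]))
        peaks = [k for k in range(1, len(mag) - 1)
                 if mag[k] > mag[k - 1] and mag[k] >= mag[k + 1]]
        peaks = sorted(peaks, key=lambda k: mag[k], reverse=True)[:n_peaks]
        current = []
        for f in (k * sr / n_fft for k in peaks):
            best = None
            if prev:   # closest previous track, by last frequency
                best = min(prev, key=lambda ti: abs(tracks[ti][-1][1] - f))
            if best is not None and abs(tracks[best][-1][1] - f) < 50:
                tracks[best].append((i, f))     # continue the track
                current.append(best)
            else:
                tracks.append([(i, f)])         # start a new track
                current.append(len(tracks) - 1)
        prev = current
    return tracks
```

Re-synthesis (not shown) would then generate one sinusoid per track, interpolating its frequency and amplitude along a stretched time axis.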

Rabiner and Schafer in 1978 put forth an alternate solution that works in the time domain: attempt to find the period (or equivalently the fundamental frequency) of a given section of the wave using some pitch detection algorithm (commonly the peak of the signal's autocorrelation, or sometimes cepstral processing), and crossfade one period into another.
This is called time-domain harmonic scaling[5] or the synchronized overlap-add method (SOLA) and performs somewhat faster than the phase vocoder on slower machines but fails when the autocorrelation mis-estimates the period of a signal with complicated harmonics (such as orchestral pieces).
Adobe Audition (formerly Cool Edit Pro) seems to solve this by looking for the period closest to a center period that the user specifies, which should be an integer multiple of the tempo, and between 30 Hz and the lowest bass frequency.
This is much more limited in scope than the phase vocoder based processing, but can be made much less processor-intensive for real-time applications. It provides the most coherent results[citation needed] for single-pitched sounds like voice or musically monophonic instrument recordings.
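The period-detection and crossfade steps can be sketched as follows. This is an illustrative simplification: the grain positions are simply snapped to period boundaries, whereas real SOLA implementations search for the best-aligned splice point, and the search range and frame sizes here are arbitrary:

```python
import numpy as np

def find_period(frame, sr, fmin=50, fmax=500):
    """Period in samples from the autocorrelation peak, searched
    between lags sr/fmax and sr/fmin."""
    corr = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    return lo + int(np.argmax(corr[lo:hi]))

def stretch_tdhs(x, sr, alpha):
    """Time-domain stretch by overlap-adding two-period grains at
    one-period hops; the triangular window supplies the crossfade,
    and snapping read positions to period boundaries keeps
    successive grains roughly in phase."""
    p = find_period(x[:2048], sr)
    win = np.bartlett(2 * p)                 # triangular crossfade window
    n_out = int(len(x) * alpha)
    out = np.zeros(n_out + 2 * p)
    for t in range(0, n_out, p):
        src = int(round(t / alpha / p)) * p  # snap to a period boundary
        if src + 2 * p > len(x):
            break
        out[t:t + 2 * p] += win * x[src:src + 2 * p]
    return out[:n_out]
```

On a clean periodic input this stretches the duration while leaving the fundamental untouched; on material where the autocorrelation picks the wrong period, the splices fall out of phase, which is precisely the failure mode described above.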
High-end commercial audio processing packages either combine the two techniques (for example by separating the signal into sinusoid and transient waveforms), or use other techniques based on the wavelet transform, or artificial neural network processing[citation needed], producing the highest-quality time stretching.

In order to preserve an audio signal's pitch when stretching or compressing its duration, many time-scale modification (TSM) procedures follow a frame-based approach.[6] Given an original discrete-time audio signal, this strategy's first step is to split the signal into short analysis frames of fixed length. The analysis frames are spaced by a fixed number of samples Ha, called the analysis hopsize. To achieve the actual time-scale modification, the analysis frames are then temporally relocated to have a synthesis hopsize Hs. This frame relocation results in a modification of the signal's duration by a stretching factor of α = Hs/Ha. However, simply superimposing the unmodified analysis frames typically results in undesired artifacts such as phase discontinuities or amplitude fluctuations. To prevent these kinds of artifacts, the analysis frames are adapted to form synthesis frames, prior to the reconstruction of the time-scale modified output signal.
The strategy of how to derive the synthesis frames from the analysis frames is a key difference among different TSM procedures.
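The frame bookkeeping shared by these procedures amounts to a few lines of arithmetic. Writing Ha for the analysis hopsize and Hs for the synthesis hopsize (symbols introduced here for the sketch, with assumed example values):

```python
# Assumed example hopsizes, in samples:
Ha, Hs = 256, 384
alpha = Hs / Ha    # stretching factor: output is 1.5x the input duration

# Analysis frame m starts at m*Ha in the input signal and is
# relocated to start at m*Hs in the output signal:
analysis_starts = [m * Ha for m in range(4)]   # [0, 256, 512, 768]
synthesis_starts = [m * Hs for m in range(4)]  # [0, 384, 768, 1152]
```

Everything method-specific (phase adjustment, waveform alignment, crossfading) happens in turning each analysis frame into its synthesis frame before the overlap-add at these relocated positions.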
For the specific case of speech, time stretching can be performed using PSOLA.
Time-compressed speech is the representation of verbal text in compressed time. While one might expect speeding up to reduce comprehension, Herb Friedman says that "Experiments have shown that the brain works most efficiently if the information rate through the ears—via speech—is the 'average' reading rate, which is about 200–300 wpm (words per minute), yet the average rate of speech is in the neighborhood of 100–150 wpm."[7]
Listening to time-compressed speech is seen as the equivalent of speed reading.[by whom?][8][9]
These techniques can also be used to transpose an audio sample while holding speed or duration constant. This may be accomplished by time stretching and then resampling back to the original length. Alternatively, the frequency of the sinusoids in a sinusoidal model may be altered directly, and the signal reconstructed at the appropriate time scale.
Transposing can be called frequency scaling or pitch shifting, depending on perspective.
For example, one could move the pitch of every note up by a perfect fifth, keeping the tempo the same. One can view this transposition as "pitch shifting", "shifting" each note up 7 keys on a piano keyboard, or adding a fixed amount on the Mel scale, or adding a fixed amount in linear pitch space. One can view the same transposition as "frequency scaling", "scaling" (multiplying) the frequency of every note by 3/2.
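Both views, and the reason resampling alone is not enough, can be checked numerically. In equal temperament, 7 semitones correspond to a frequency ratio of 2^(7/12) ≈ 1.498, essentially the just fifth's 3/2; the linear-interpolation resampler below is a simplifying stand-in for a proper sample rate converter:

```python
import numpy as np

ratio = 2 ** (7 / 12)          # +7 semitones: approx. 1.4983, near 3/2

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)      # one second of A3 (220 Hz)

# Resampling alone scales every frequency by `ratio` (220 -> ~330 Hz),
# but also shortens the clip by the same factor (~0.667 s):
idx = np.arange(int(len(tone) / ratio)) * ratio
shifted = np.interp(idx, np.arange(len(tone)), tone)

# Hence the two-step recipe: time-stretch by `ratio` first (with any
# TSM method), so that resampling then restores the original duration
# while leaving the transposition in place.
```

The order of the two steps can be swapped (resample first, then stretch); the combined ratio is the same either way.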
Musical transposition preserves the ratios of the harmonic frequencies that determine the sound's timbre, unlike the frequency shift performed by amplitude modulation, which adds a fixed frequency offset to the frequency of every note. (In theory one could perform a literal pitch scaling in which the musical pitch space location is scaled [a higher note would be shifted at a greater interval in linear pitch space than a lower note], but that is highly unusual, and not musical.[citation needed])
Time domain processing works much better here, as smearing is less noticeable, but scaling vocal samples distorts the formants into a sort of Alvin and the Chipmunks-like effect, which may be desirable or undesirable. A process that preserves the formants and character of a voice involves analyzing the signal with a channel vocoder or LPC vocoder plus any of several pitch detection algorithms and then resynthesizing it at a different fundamental frequency.
A detailed description of older analog recording techniques for pitch shifting can be found at Alvin and the Chipmunks § Recording technique.
Time stretching and pitch scaling are used extensively by DJs, in addition to beatmixing, when playing and creating sets. In order to seamlessly blend two tracks together, the tempo of a track can be adjusted to match another track such that the beats line up. Pitch scaling is commonly used to retain the original pitch of a track whose tempo has been changed. Pitch scaling is also used by DJs for harmonic mixing, to transform tracks into compatible keys so that they sound pleasing when mixed together. Time stretching and pitch scaling are included in modern DJ hardware (CDJs and DJ controllers) and software (such as VirtualDJ, Mixxx, Serato and Rekordbox).
Time stretching and pitch scaling are used in digital audio workstation software for working with music loops, sound clips which can be repeated and transposed to form a song. The pitch and tempo of multiple loops are aligned to create tracks. Notable software includes Acid Pro with its "Acidized" loops feature and FL Studio.
Pitch-corrected audio time stretching is found in every modern web browser as part of the HTML standard for media playback.[10] Similar controls are ubiquitous in media applications and frameworks such as GStreamer and Unity.