AudioBuffer
This feature is well established and works across many devices and browser versions. It’s been available across browsers since July 2015.
The AudioBuffer interface represents a short audio asset residing in memory, created from an audio file using the AudioContext.decodeAudioData() method, or from raw data using AudioContext.createBuffer(). Once put into an AudioBuffer, the audio can then be played by being passed into an AudioBufferSourceNode.
Objects of these types are designed to hold small audio snippets, typically less than 45 s. For longer sounds, objects implementing the MediaElementAudioSourceNode are more suitable. The buffer contains the audio signal waveform encoded as a series of amplitudes in the following format: non-interleaved IEEE754 32-bit linear PCM with a nominal range between -1 and +1, that is, a 32-bit floating point buffer, with each sample between -1.0 and 1.0. If the AudioBuffer has multiple channels, they are stored in separate buffers.
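As a sketch of that layout, each channel is simply its own Float32Array of samples in the range [-1, 1]. The snippet below uses plain typed arrays and an assumed 44100 Hz sample rate (not a real AudioContext) purely to illustrate the non-interleaved format:

```javascript
// Illustration of the AudioBuffer storage format using plain typed arrays:
// non-interleaved 32-bit float PCM, one Float32Array per channel.
// The 44100 Hz rate is an assumption for this sketch, not read from a context.
const sampleRate = 44100;
const length = sampleRate * 1; // one second of sample-frames

const left = new Float32Array(length); // channel 0
const right = new Float32Array(length); // channel 1

// Write a 440 Hz sine tone into the left channel; leave the right silent.
for (let i = 0; i < length; i++) {
  left[i] = Math.sin((2 * Math.PI * 440 * i) / sampleRate);
}
```

Note that the two channels are separate arrays rather than alternating samples in one array; that is what "non-interleaved" means here.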
Constructor
AudioBuffer()
Creates and returns a new AudioBuffer object instance.
Instance properties
AudioBuffer.sampleRate (Read only)
Returns a float representing the sample rate, in samples per second, of the PCM data stored in the buffer.

AudioBuffer.length (Read only)
Returns an integer representing the length, in sample-frames, of the PCM data stored in the buffer.

AudioBuffer.duration (Read only)
Returns a double representing the duration, in seconds, of the PCM data stored in the buffer.

AudioBuffer.numberOfChannels (Read only)
Returns an integer representing the number of discrete audio channels described by the PCM data stored in the buffer.
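These properties are linked: duration is always length divided by sampleRate. A quick sketch of that arithmetic, using an assumed 48000 Hz rate in plain JavaScript (in a browser the values would come from a real buffer):

```javascript
// duration (seconds) = length (sample-frames) / sampleRate (frames per second).
// The values below are assumptions for illustration, not from a real AudioContext.
const sampleRate = 48000; // e.g. what audioCtx.sampleRate might report
const length = sampleRate * 3; // a three-second buffer, as passed to createBuffer()
const duration = length / sampleRate;
console.log(duration); // 3
```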
Instance methods
AudioBuffer.getChannelData()
Returns a Float32Array containing the PCM data associated with the channel, defined by the channel parameter (with 0 representing the first channel).

AudioBuffer.copyFromChannel()
Copies the samples from the specified channel of the AudioBuffer to the destination array.

AudioBuffer.copyToChannel()
Copies the samples to the specified channel of the AudioBuffer, from the source array.
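To illustrate what the two copy methods do, the sketch below mirrors their behavior with plain Float32Arrays so it runs outside a browser; in real code you would call buffer.copyFromChannel(dest, 0, 1) and buffer.copyToChannel(source, 0, 2) on an actual AudioBuffer:

```javascript
// Stand-in for one channel's PCM data (a real AudioBuffer channel is also a
// Float32Array). The sample values here are arbitrary illustration data.
const channelData = Float32Array.from([0.1, 0.2, 0.3, 0.4]);

// copyFromChannel(destination, channelNumber, startInChannel): channel -> dest.
// Equivalent to copyFromChannel(dest, 0, 1) on a real buffer:
const dest = new Float32Array(2);
dest.set(channelData.subarray(1, 1 + dest.length)); // dest gets samples 1..2

// copyToChannel(source, channelNumber, startInChannel): source -> channel.
// Equivalent to copyToChannel(source, 0, 2) on a real buffer:
const source = Float32Array.from([0.9, 0.8]);
channelData.set(source, 2); // overwrites samples 2..3
```

Both methods copy as many samples as fit; they do not resize either array.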
Example
The following simple example shows how to create an AudioBuffer and fill it with random white noise. You can find the full source code at our webaudio-examples repository; a running live version is also available.
const audioCtx = new AudioContext();

// Create an empty three-second stereo buffer at the sample rate of the AudioContext
const myArrayBuffer = audioCtx.createBuffer(
  2,
  audioCtx.sampleRate * 3,
  audioCtx.sampleRate,
);

// Fill the buffer with white noise;
// just random values between -1.0 and 1.0
for (let channel = 0; channel < myArrayBuffer.numberOfChannels; channel++) {
  // This gives us the actual array that contains the data
  const nowBuffering = myArrayBuffer.getChannelData(channel);
  for (let i = 0; i < myArrayBuffer.length; i++) {
    // Math.random() is in [0; 1.0]
    // audio needs to be in [-1.0; 1.0]
    nowBuffering[i] = Math.random() * 2 - 1;
  }
}

// Get an AudioBufferSourceNode.
// This is the AudioNode to use when we want to play an AudioBuffer
const source = audioCtx.createBufferSource();

// set the buffer in the AudioBufferSourceNode
source.buffer = myArrayBuffer;

// connect the AudioBufferSourceNode to the
// destination so we can hear the sound
source.connect(audioCtx.destination);

// start the source playing
source.start();

Specifications
| Specification |
|---|
| Web Audio API # AudioBuffer |