BaseAudioContext: createBuffer() method

Baseline: Widely available

This feature is well established and works across many devices and browser versions. It's been available across browsers since April 2021.

The createBuffer() method of the BaseAudioContext interface is used to create a new, empty AudioBuffer object, which can then be populated by data and played via an AudioBufferSourceNode.

For more details about audio buffers, check out the AudioBuffer reference page.

Note: createBuffer() used to be able to take compressed data and give back decoded samples, but this ability was removed from the specification, because all the decoding was done on the main thread, so createBuffer() was blocking other code execution. The asynchronous method decodeAudioData() does the same thing: it takes compressed audio, such as an MP3 file, and directly gives you back an AudioBuffer that you can then play via an AudioBufferSourceNode. For simple use cases like playing an MP3, decodeAudioData() is what you should be using.
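To illustrate that recommended route, here is a minimal sketch using decodeAudioData(). The playMp3 function name and the "music.mp3" URL are placeholders, and a browser environment with fetch and the Web Audio API is assumed:

```js
// Sketch only: assumes a browser with fetch and the Web Audio API available.
async function playMp3(audioCtx, url) {
  // Fetch the compressed file as raw bytes
  const response = await fetch(url);
  const compressed = await response.arrayBuffer();
  // decodeAudioData() decodes asynchronously, without blocking the main thread
  const audioBuffer = await audioCtx.decodeAudioData(compressed);
  // Play the decoded AudioBuffer via an AudioBufferSourceNode
  const source = audioCtx.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(audioCtx.destination);
  source.start();
}

// In a browser you would call, for example:
// playMp3(new AudioContext(), "music.mp3"); // "music.mp3" is a placeholder URL
```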

For an in-depth explanation of how audio buffers work, including what the parameters do, read Audio buffers: frames, samples and channels from our Basic concepts guide.

Syntax

js
createBuffer(numOfChannels, length, sampleRate)

Parameters

numOfChannels

An integer representing the number of channels this buffer should have. The default value is 1, and all user agents must support at least 32 channels.

length

An integer representing the size of the buffer in sample-frames (where each sample-frame is the size of a sample in bytes multiplied by numOfChannels). To determine the length to use for a specific number of seconds of audio, use numSeconds * sampleRate.

sampleRate

The sample rate of the linear audio data in sample-frames per second. All browsers must support sample rates in at least the range 8,000 Hz to 96,000 Hz.
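As a quick arithmetic check of the numSeconds * sampleRate rule for the length parameter (the numbers below are only illustrative):

```js
// How many sample-frames are needed for 2 seconds of audio at 44100 Hz?
const sampleRate = 44100; // sample-frames per second
const numSeconds = 2;
const length = numSeconds * sampleRate; // 88200 sample-frames

// In a browser, the corresponding buffer would then be created with:
// const buffer = new AudioContext().createBuffer(2, length, sampleRate);
console.log(length); // 88200
```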

Return value

An AudioBuffer configured based on the specified options.

Exceptions

NotSupportedError DOMException

Thrown if one or more of the options are negative or otherwise have an invalid value (such as numOfChannels being higher than supported, or a sampleRate outside the nominal range).

RangeError

Thrown if there isn't enough memory available to allocate the buffer.
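A sketch of guarding against these exceptions; safeCreateBuffer is a hypothetical helper name, and a browser AudioContext is assumed when it is called:

```js
// Sketch: return null instead of throwing when the buffer options are unsupported.
function safeCreateBuffer(ctx, numOfChannels, length, sampleRate) {
  try {
    return ctx.createBuffer(numOfChannels, length, sampleRate);
  } catch (e) {
    if (e.name === "NotSupportedError") {
      // e.g. too many channels, or a sampleRate outside the supported range
      console.warn("Unsupported buffer options:", e.message);
      return null;
    }
    throw e; // RangeError (not enough memory) and anything else propagate
  }
}

// In a browser:
// const buffer = safeCreateBuffer(new AudioContext(), 2, 22050, 44100);
```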

Examples

First, a couple of trivial examples to help explain how the parameters are used:

js
const audioCtx = new AudioContext();
const buffer = audioCtx.createBuffer(2, 22050, 44100);

If you use this call, you will get a stereo buffer (two channels) that, when played back on an AudioContext running at 44100 Hz (very common; most normal sound cards run at this rate), will last for 0.5 seconds: 22050 frames / 44100 Hz = 0.5 seconds.

js
const audioCtx = new AudioContext();
const buffer = audioCtx.createBuffer(1, 22050, 22050);

If you use this call, you will get a mono buffer (one channel) that, when played back on an AudioContext running at 44100 Hz, will be automatically resampled to 44100 Hz (and therefore yield 44100 frames), and will last for 1.0 second: 44100 frames / 44100 Hz = 1 second.
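The duration arithmetic in these two examples can be checked directly: an AudioBuffer's duration property is its length divided by its sampleRate, independent of the context's own rate. A pure-arithmetic sketch using the numbers from the second example:

```js
// Buffer from the second example: 22050 frames at a 22050 Hz sample rate
const length = 22050;
const sampleRate = 22050;
const durationSeconds = length / sampleRate; // 1 second

// In a browser, the equivalent check would be:
// new AudioContext().createBuffer(1, 22050, 22050).duration === 1
console.log(durationSeconds); // 1
```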

Note: Audio resampling is very similar to image resizing: say you've got a 16 x 16 image, but you want it to fill a 32 x 32 area: you resize (resample) it. The result has less quality (it can be blurry or edgy, depending on the resizing algorithm), but it works, and the resized image takes up less space. Resampled audio is exactly the same: you save space, but in practice you will be unable to properly reproduce high-frequency content (treble sound).

Now let's look at a more complex createBuffer() example, in which we create a three-second buffer, fill it with white noise, and then play it via an AudioBufferSourceNode. The comments should clearly explain what is going on.

js
const audioCtx = new AudioContext();

// Create an empty three-second stereo buffer at the sample rate of the AudioContext
const myArrayBuffer = audioCtx.createBuffer(
  2,
  audioCtx.sampleRate * 3,
  audioCtx.sampleRate,
);

// Fill the buffer with white noise;
// just random values between -1.0 and 1.0
for (let channel = 0; channel < myArrayBuffer.numberOfChannels; channel++) {
  // This gives us the actual Float32Array that contains the data
  const nowBuffering = myArrayBuffer.getChannelData(channel);
  for (let i = 0; i < myArrayBuffer.length; i++) {
    // Math.random() is in [0; 1.0]
    // audio needs to be in [-1.0; 1.0]
    nowBuffering[i] = Math.random() * 2 - 1;
  }
}

// Get an AudioBufferSourceNode.
// This is the AudioNode to use when we want to play an AudioBuffer
const source = audioCtx.createBufferSource();
// set the buffer in the AudioBufferSourceNode
source.buffer = myArrayBuffer;
// connect the AudioBufferSourceNode to the
// destination so we can hear the sound
source.connect(audioCtx.destination);
// start the source playing
source.start();

Specifications

Specification: Web Audio API, # dom-baseaudiocontext-createbuffer

Browser compatibility

See also
