OfflineAudioContext
Baseline: Widely available *
This feature is well established and works across many devices and browser versions. It’s been available across browsers since April 2021.
* Some parts of this feature may have varying levels of support.
The OfflineAudioContext interface is an AudioContext interface representing an audio-processing graph built from linked-together AudioNodes. In contrast with a standard AudioContext, an OfflineAudioContext doesn't render the audio to the device hardware; instead, it generates it, as fast as it can, and outputs the result to an AudioBuffer.
Constructor
OfflineAudioContext()
Creates a new OfflineAudioContext instance.
Instance properties
Also inherits properties from its parent interface, BaseAudioContext.
OfflineAudioContext.length Read only
An integer representing the size of the buffer in sample-frames.
Instance methods
Also inherits methods from its parent interface, BaseAudioContext.
OfflineAudioContext.suspend()
Schedules a suspension of the time progression in the audio context at the specified time and returns a promise.
OfflineAudioContext.startRendering()
Starts rendering the audio, taking into account the current connections and the current scheduled changes. This page covers both the event-based version and the promise-based version.
Deprecated methods
OfflineAudioContext.resume()
Resumes the progression of time in an audio context that has previously been suspended.
Note: The resume() method is still available; it is now defined on the BaseAudioContext interface (see AudioContext.resume), so it can be accessed by both the AudioContext and OfflineAudioContext interfaces.
Events
Listen to these events using addEventListener() or by assigning an event listener to the oneventname property of this interface:
complete
Fired when the rendering of an offline audio context is complete.
Examples
Playing audio with an offline audio context
In this example, we declare both an AudioContext and an OfflineAudioContext object. We use the AudioContext to load an audio track via fetch(), then the OfflineAudioContext to render the audio into an AudioBufferSourceNode and play the track through. After the offline audio graph is set up, we render it to an AudioBuffer using OfflineAudioContext.startRendering().
When the startRendering() promise resolves, rendering has completed and the output AudioBuffer is returned out of the promise.
At this point we create another audio context, create an AudioBufferSourceNode inside it, and set its buffer to be equal to the promise AudioBuffer. This is then played as part of a simple standard audio graph.
Note: You can run the full example live, or view the source.
```js
// Define both online and offline audio contexts
let audioCtx; // Must be initialized after a user interaction
const offlineCtx = new OfflineAudioContext(2, 44100 * 40, 44100);

// Define constants for DOM nodes
const play = document.querySelector("#play");

function getData() {
  // Fetch an audio track, decode it and stick it in a buffer.
  // Then we put the buffer into the source and can play it.
  fetch("viper.ogg")
    .then((response) => response.arrayBuffer())
    .then((downloadedBuffer) => audioCtx.decodeAudioData(downloadedBuffer))
    .then((decodedBuffer) => {
      console.log("File downloaded successfully.");
      const source = new AudioBufferSourceNode(offlineCtx, {
        buffer: decodedBuffer,
      });
      source.connect(offlineCtx.destination);
      return source.start();
    })
    .then(() => offlineCtx.startRendering())
    .then((renderedBuffer) => {
      console.log("Rendering completed successfully.");
      play.disabled = false;
      const song = new AudioBufferSourceNode(audioCtx, {
        buffer: renderedBuffer,
      });
      song.connect(audioCtx.destination);

      // Start the song
      song.start();
    })
    .catch((err) => {
      console.error(`Error encountered: ${err}`);
    });
}

// Activate the play button
play.onclick = () => {
  play.disabled = true;
  // We can initialize the context as the user clicked.
  audioCtx = new AudioContext();
  // Fetch the data and start the song
  getData();
};
```

Specifications
| Specification |
|---|
| Web Audio API # OfflineAudioContext |