
BaseAudioContext: createScriptProcessor() method

Deprecated: This feature is no longer recommended. Though some browsers might still support it, it may have already been removed from the relevant web standards, may be in the process of being dropped, or may only be kept for compatibility purposes. Avoid using it, and update existing code if possible; see the compatibility table at the bottom of this page to guide your decision. Be aware that this feature may cease to work at any time.

The createScriptProcessor() method of the BaseAudioContext interface creates a ScriptProcessorNode used for direct audio processing.

Note: This feature was replaced by AudioWorklets and the AudioWorkletNode interface.
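
For orientation only, here is a minimal sketch of what the AudioWorklet-based replacement for the white-noise example further down could look like. The module file name (noise-processor.js), the registered processor name (noise-adder), and the audioCtx and source variables are illustrative assumptions, not part of this API.

js
// noise-processor.js — runs in the AudioWorklet global scope (illustrative)
class NoiseAdderProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    for (let channel = 0; channel < output.length; channel++) {
      const inputData = input[channel];
      const outputData = output[channel];
      for (let i = 0; i < outputData.length; i++) {
        // Copy the input sample (if any) and add a little white noise
        const sample = inputData ? inputData[i] : 0;
        outputData[i] = sample + (Math.random() * 2 - 1) * 0.1;
      }
    }
    return true; // keep the processor alive
  }
}
registerProcessor("noise-adder", NoiseAdderProcessor);

js
// Main thread, inside an async function; assumes an existing
// AudioContext (audioCtx) and a source node (source)
await audioCtx.audioWorklet.addModule("noise-processor.js");
const noiseNode = new AudioWorkletNode(audioCtx, "noise-adder");
source.connect(noiseNode).connect(audioCtx.destination);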

Syntax

js
createScriptProcessor(bufferSize, numberOfInputChannels, numberOfOutputChannels)

Parameters

bufferSize

The buffer size in units of sample-frames. If specified, the bufferSize must be one of the following values: 256, 512, 1024, 2048, 4096, 8192, 16384. If it's not passed in, or if the value is 0, then the implementation will choose the best buffer size for the given environment, which will be a constant power of 2 throughout the lifetime of the node.

This value controls how frequently the audioprocess event is dispatched and how many sample-frames need to be processed each call. Lower values for bufferSize will result in a lower (better) latency. Higher values will be necessary to avoid audio breakup and glitches. It is recommended for authors to not specify this buffer size and allow the implementation to pick a good buffer size to balance between latency and audio quality.

numberOfInputChannels

Integer specifying the number of channels for this node's input, defaults to 2. Values of up to 32 are supported.

numberOfOutputChannels

Integer specifying the number of channels for this node's output, defaults to 2. Values of up to 32 are supported.

Warning: WebKit currently (version 31) requires that a valid bufferSize be passed when calling this method.

Note: It is invalid for both numberOfInputChannels and numberOfOutputChannels to be zero.

Return value

A ScriptProcessorNode.
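
For illustration, a short sketch of how the parameters described above are typically passed; the AudioContext and logging here are only for demonstration.

js
const audioCtx = new AudioContext();

// Explicit 4096-frame buffer, stereo input and output
const explicitNode = audioCtx.createScriptProcessor(4096, 2, 2);

// Passing 0 lets the implementation choose the buffer size;
// the chosen size stays constant for the lifetime of the node
const autoNode = audioCtx.createScriptProcessor(0, 1, 1);

console.log(explicitNode.bufferSize); // 4096
console.log(autoNode.bufferSize); // implementation-chosen power of two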

Examples

Adding white noise using a script processor

The following example shows how to use a ScriptProcessorNode to take a track loaded via AudioContext.decodeAudioData(), process it by adding a bit of white noise to each audio sample of the input track, and play it through the AudioDestinationNode.

The script node's audioprocess event handler uses the associated audioProcessingEvent to loop through each channel of the input buffer and each sample within that channel, adding a small amount of white noise before writing the result to the corresponding output sample.

js
const myScript = document.querySelector("script");
const myPre = document.querySelector("pre");
const playButton = document.querySelector("button");

// Create AudioContext and buffer source
let audioCtx;

async function init() {
  audioCtx = new AudioContext();
  const source = audioCtx.createBufferSource();

  // Create a ScriptProcessorNode with a bufferSize of 4096 and
  // a single input and output channel
  const scriptNode = audioCtx.createScriptProcessor(4096, 1, 1);

  // Load in an audio track using fetch() and decodeAudioData()
  try {
    const response = await fetch("viper.ogg");
    const arrayBuffer = await response.arrayBuffer();
    source.buffer = await audioCtx.decodeAudioData(arrayBuffer);
  } catch (err) {
    console.error(`Unable to fetch the audio file. Error: ${err.message}`);
  }

  // Give the node a function to process audio events
  scriptNode.addEventListener("audioprocess", (audioProcessingEvent) => {
    // The input buffer is the song we loaded earlier
    const inputBuffer = audioProcessingEvent.inputBuffer;

    // The output buffer contains the samples that will be modified and played
    const outputBuffer = audioProcessingEvent.outputBuffer;

    // Loop through the output channels (in this case there is only one)
    for (let channel = 0; channel < outputBuffer.numberOfChannels; channel++) {
      const inputData = inputBuffer.getChannelData(channel);
      const outputData = outputBuffer.getChannelData(channel);

      // Loop through the 4096 samples
      for (let sample = 0; sample < inputBuffer.length; sample++) {
        // Make the output equal to the input
        outputData[sample] = inputData[sample];

        // Add noise to each output sample
        outputData[sample] += (Math.random() * 2 - 1) * 0.1;
      }
    }
  });

  source.connect(scriptNode);
  scriptNode.connect(audioCtx.destination);
  source.start();

  // When the buffer source stops playing, disconnect everything
  source.addEventListener("ended", () => {
    source.disconnect(scriptNode);
    scriptNode.disconnect(audioCtx.destination);
  });
}

// Wire up the play button
playButton.addEventListener("click", () => {
  if (!audioCtx) {
    init();
  }
});

Specifications

Specification: Web Audio API, # dom-baseaudiocontext-createscriptprocessor

Browser compatibility

See also
