AudioProcessingEvent

Deprecated: This feature is no longer recommended. Though some browsers might still support it, it may have already been removed from the relevant web standards, may be in the process of being dropped, or may only be kept for compatibility purposes. Avoid using it, and update existing code if possible; see the compatibility table at the bottom of this page to guide your decision. Be aware that this feature may cease to work at any time.

The AudioProcessingEvent interface of the Web Audio API represents events that occur when a ScriptProcessorNode input buffer is ready to be processed.

An audioprocess event with this interface is fired on a ScriptProcessorNode when audio processing is required. During audio processing, the input buffer is read and processed to produce output audio data, which is then written to the output buffer.

Warning: This feature has been deprecated and should be replaced by an AudioWorklet.
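As a rough sketch of what that replacement can look like, the white-noise processing from the example below can be expressed as an AudioWorkletProcessor subclass. The processor name "white-noise-processor", the module filename, and the empty fallback base class (used only so the snippet can also run outside an audio rendering thread) are illustrative assumptions, not part of this interface:

```js
// Sketch of the AudioWorklet replacement for a ScriptProcessorNode.
// In a real app this class lives in its own module file, loaded with
// audioCtx.audioWorklet.addModule(). The empty fallback base class is
// only there so the sketch can run outside an AudioWorklet scope.
class WhiteNoiseProcessor extends (globalThis.AudioWorkletProcessor ?? class {}) {
  process(inputs, outputs) {
    const input = inputs[0] ?? [];
    const output = outputs[0];
    for (let channel = 0; channel < output.length; channel++) {
      const outputData = output[channel];
      const inputData = input[channel] ?? new Float32Array(outputData.length);
      for (let i = 0; i < outputData.length; i++) {
        // Copy the input sample and add a little white noise
        outputData[i] = inputData[i] + (Math.random() * 2 - 1) * 0.1;
      }
    }
    return true; // keep the processor alive
  }
}

// Inside an actual worklet module this registration call would run:
if (globalThis.registerProcessor) {
  registerProcessor("white-noise-processor", WhiteNoiseProcessor);
}
```

On the main thread, the node would then be created with `new AudioWorkletNode(audioCtx, "white-noise-processor")` after `await audioCtx.audioWorklet.addModule("noise-processor.js")`, and connected like any other audio node.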


Constructor

AudioProcessingEvent() Deprecated

Creates a new AudioProcessingEvent object.

Instance properties

Also implements the properties inherited from its parent, Event.

playbackTime Read only Deprecated

A double representing the time when the audio will be played, as defined by the time of AudioContext.currentTime.

inputBuffer Read only Deprecated

An AudioBuffer that is the buffer containing the input audio data to be processed. The number of channels is defined as a parameter, numberOfInputChannels, of the factory method AudioContext.createScriptProcessor(). Note that the returned AudioBuffer is only valid in the scope of the event handler.

outputBuffer Read only Deprecated

An AudioBuffer that is the buffer where the output audio data should be written. The number of channels is defined as a parameter, numberOfOutputChannels, of the factory method AudioContext.createScriptProcessor(). Note that the returned AudioBuffer is only valid in the scope of the event handler.
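Because both buffers are only valid while the handler runs, any samples needed later must be copied out before the handler returns. The helper below is a minimal sketch of that pattern; the `handleAudioProcess` and `mockBuffer` names are hypothetical, and the mock stands in for the real AudioBuffer objects the browser supplies:

```js
// Sketch: copy each input channel to the matching output channel and
// keep a snapshot of the input, since the event's AudioBuffer objects
// must not be used after the audioprocess handler returns.
function handleAudioProcess(event, snapshots) {
  const { inputBuffer, outputBuffer } = event;
  for (let channel = 0; channel < outputBuffer.numberOfChannels; channel++) {
    const inputData = inputBuffer.getChannelData(channel);
    const outputData = outputBuffer.getChannelData(channel);
    outputData.set(inputData); // pass the audio through unchanged
    snapshots.push(inputData.slice()); // copy: safe to use after the handler
  }
}

// A minimal stand-in for an AudioBuffer, for illustration only.
function mockBuffer(channels) {
  return {
    numberOfChannels: channels.length,
    length: channels[0].length,
    getChannelData: (i) => channels[i],
  };
}
```

With a real ScriptProcessorNode, the same function could be wired up as `scriptNode.addEventListener("audioprocess", (e) => handleAudioProcess(e, snapshots))`.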

Examples

Adding white noise using a script processor

The following example shows how to use a ScriptProcessorNode to take a track loaded via AudioContext.decodeAudioData(), process it by adding a bit of white noise to each audio sample of the input track (buffer), and play it through the AudioDestinationNode. For each channel and each sample frame, the scriptNode.onaudioprocess function takes the associated audioProcessingEvent and uses it to loop through each channel of the input buffer, and each sample in each channel, adding a small amount of white noise before setting that result to be the output sample in each case.

Note: For a full working example, see our script-processor-node GitHub repo. (You can also access the source code.)

```js
const myScript = document.querySelector("script");
const myPre = document.querySelector("pre");
const playButton = document.querySelector("button");

// Create AudioContext and buffer source
let audioCtx;

async function init() {
  audioCtx = new AudioContext();
  const source = audioCtx.createBufferSource();

  // Create a ScriptProcessorNode with a bufferSize of 4096 and
  // a single input and output channel
  const scriptNode = audioCtx.createScriptProcessor(4096, 1, 1);

  // Load in an audio track using fetch() and decodeAudioData()
  try {
    const response = await fetch("viper.ogg");
    const arrayBuffer = await response.arrayBuffer();
    source.buffer = await audioCtx.decodeAudioData(arrayBuffer);
  } catch (err) {
    console.error(`Unable to fetch the audio file. Error: ${err.message}`);
  }

  // Give the node a function to process audio events
  scriptNode.addEventListener("audioprocess", (audioProcessingEvent) => {
    // The input buffer is the song we loaded earlier
    const inputBuffer = audioProcessingEvent.inputBuffer;

    // The output buffer contains the samples that will be modified
    // and played
    const outputBuffer = audioProcessingEvent.outputBuffer;

    // Loop through the output channels (in this case there is only one)
    for (let channel = 0; channel < outputBuffer.numberOfChannels; channel++) {
      const inputData = inputBuffer.getChannelData(channel);
      const outputData = outputBuffer.getChannelData(channel);

      // Loop through the 4096 samples
      for (let sample = 0; sample < inputBuffer.length; sample++) {
        // Make the output sample equal to the input sample
        outputData[sample] = inputData[sample];

        // Add noise to each output sample
        outputData[sample] += (Math.random() * 2 - 1) * 0.1;
      }
    }
  });

  source.connect(scriptNode);
  scriptNode.connect(audioCtx.destination);
  source.start();

  // When the buffer source stops playing, disconnect everything
  source.addEventListener("ended", () => {
    source.disconnect(scriptNode);
    scriptNode.disconnect(audioCtx.destination);
  });
}

// Wire up the play button
playButton.addEventListener("click", () => {
  if (!audioCtx) {
    init();
  }
});
```

Specifications

Specification
- Web Audio API # dom-audioprocessingevent-inputbuffer
- Web Audio API # dom-audioprocessingevent-playbacktime
- Web Audio API # dom-audioprocessingevent-outputbuffer
- Web Audio API # dom-audioprocessingevent-audioprocessingevent

Browser compatibility

See also


This page was last modified by MDN contributors.

