
AudioNode

Baseline Widely available *

This feature is well established and works across many devices and browser versions. It's been available across browsers since July 2015.

* Some parts of this feature may have varying levels of support.

The AudioNode interface is a generic interface for representing an audio processing module.

Examples include:

  • an audio source (e.g. an OscillatorNode or an AudioBufferSourceNode)
  • the audio destination (AudioDestinationNode)
  • an intermediate processing module (e.g. a filter like BiquadFilterNode, or a volume control like GainNode)


Note: An AudioNode can be the target of events, and therefore it implements the EventTarget interface.

Instance properties

AudioNode.context Read only

Returns the associated BaseAudioContext, that is, the object representing the processing graph the node is participating in.

AudioNode.numberOfInputs Read only

Returns the number of inputs feeding the node. Source nodes are defined as nodes having a numberOfInputs property with a value of 0.

AudioNode.numberOfOutputs Read only

Returns the number of outputs coming out of the node. Destination nodes — like AudioDestinationNode — have a value of 0 for this attribute.

AudioNode.channelCount

Represents an integer used to determine how many channels are used when up-mixing and down-mixing connections to any inputs to the node. Its usage and precise definition depend on the value of AudioNode.channelCountMode.

AudioNode.channelCountMode

Represents an enumerated value describing the way channels must be matched between the node's inputs and outputs.

AudioNode.channelInterpretation

Represents an enumerated value describing the meaning of the channels. This interpretation will define how audio up-mixing and down-mixing will happen. The possible values are "speakers" or "discrete".

Instance methods

Also implements methods from the interface EventTarget.

AudioNode.connect()

Allows us to connect the output of this node to be input into another node, either as audio data or as the value of an AudioParam.

AudioNode.disconnect()

Allows us to disconnect the current node from another one it is already connected to.
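As a brief browser-only sketch of both methods (the tremolo setup here — a low-frequency oscillator driving a gain AudioParam — is invented for illustration, not taken from this page):

```js
const audioCtx = new AudioContext();

const oscillator = new OscillatorNode(audioCtx);
const lfo = new OscillatorNode(audioCtx, { frequency: 2 }); // low-frequency modulator
const gainNode = new GainNode(audioCtx);

// Audio path: oscillator -> gain -> speakers
oscillator.connect(gainNode);
gainNode.connect(audioCtx.destination);

// Modulation path: the LFO's output is added to the gain AudioParam's value
lfo.connect(gainNode.gain);

oscillator.start();
lfo.start();

// Later: stop modulating the gain, then detach the oscillator entirely
lfo.disconnect(gainNode.gain);
oscillator.disconnect();
```

Note that connect() can target either another AudioNode or an AudioParam, and disconnect() accepts the same destination to sever just that one connection.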

Description

The audio routing graph

AudioNodes participating in an AudioContext create an audio routing graph.

Each AudioNode has inputs and outputs, and multiple audio nodes are connected to build a processing graph. This graph is contained in an AudioContext, and each audio node can only belong to one audio context.

A source node has zero inputs but one or multiple outputs, and can be used to generate sound. On the other hand, a destination node has no outputs; instead, all its inputs are directly played back on the speakers (or whatever audio output device the audio context uses). In addition, there are processing nodes which have inputs and outputs. The exact processing done varies from one AudioNode to another but, in general, a node reads its inputs, does some audio-related processing, and generates new values for its outputs, or lets the audio pass through (for example in the AnalyserNode, where the result of the processing is accessed separately).

The more nodes in a graph, the higher the latency will be. For example, if your graph has a latency of 500ms, when the source node plays a sound, it will take half a second until that sound can be heard on your speakers (or even longer because of latency in the underlying audio device). Therefore, if you need to have interactive audio, keep the graph as small as possible, and put user-controlled audio nodes at the end of a graph. For example, a volume control (GainNode) should be the last node so that volume changes take immediate effect.

Each input and output has a given number of channels. For example, mono audio has one channel, while stereo audio has two channels. The Web Audio API will up-mix or down-mix the number of channels as required; check the Web Audio spec for details.
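To illustrate the "discrete" interpretation mentioned above, here is a plain-JavaScript sketch of its mixing rule (the function name and the array-of-channels model are invented for illustration; the real mixing happens inside the browser): down-mixing drops the extra input channels, and up-mixing fills the missing output channels with silence.

```js
// Sketch of the "discrete" up/down-mix rule: channels are copied in order;
// extra input channels are dropped, missing output channels are silent.
// Each channel is modeled as an array of samples.
function discreteMix(inputChannels, outputChannelCount) {
  const frameLength = inputChannels[0]?.length ?? 0;
  const output = [];
  for (let ch = 0; ch < outputChannelCount; ch++) {
    // Copy the matching input channel, or fill with zeros (silence)
    output.push(
      ch < inputChannels.length
        ? inputChannels[ch].slice()
        : new Array(frameLength).fill(0),
    );
  }
  return output;
}

// Down-mix stereo to mono: only the first (left) channel survives
const mono = discreteMix([[0.1, 0.2], [0.3, 0.4]], 1);
// Up-mix mono to stereo: the second channel is silence
const stereo = discreteMix([[0.1, 0.2]], 2);
```

The "speakers" interpretation is smarter: it uses the standard mono/stereo/quad/5.1 mixing formulas instead of simply dropping or zero-filling channels.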

For a list of all audio nodes, see the Web Audio API homepage.

Creating an AudioNode

There are two ways to create an AudioNode: via the constructor and via the factory method.

```js
// constructor
const analyserNode = new AnalyserNode(audioCtx, {
  fftSize: 2048,
  maxDecibels: -25,
  minDecibels: -60,
  smoothingTimeConstant: 0.5,
});
```
```js
// factory method
const analyserNode = audioCtx.createAnalyser();
analyserNode.fftSize = 2048;
analyserNode.maxDecibels = -25;
analyserNode.minDecibels = -60;
analyserNode.smoothingTimeConstant = 0.5;
```

You are free to use either constructors or factory methods, or mix both; however, there are advantages to using the constructors:

  • All parameters can be set during construction time and don't need to be set individually.
  • You can sub-class an audio node. While the actual processing is done internally by the browser and cannot be altered, you could write a wrapper around an audio node to provide custom properties and methods.
  • Slightly better performance: In both Chrome and Firefox, the factory methods call the constructors internally.
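As a browser-only sketch of the sub-classing point above (the class name and its fadeTo() helper are invented for illustration):

```js
// A hypothetical wrapper that adds a convenience method to GainNode.
// The audio processing itself is still done internally by the browser.
class FadeableGainNode extends GainNode {
  // Ramp the gain to `value` over `seconds` seconds
  fadeTo(value, seconds) {
    this.gain.linearRampToValueAtTime(value, this.context.currentTime + seconds);
  }
}

const audioCtx = new AudioContext();
const fader = new FadeableGainNode(audioCtx);
fader.connect(audioCtx.destination);
fader.fadeTo(0, 2); // fade out over two seconds
```

This only works with the constructor form; factory methods like createGain() always return the built-in class.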

Brief history: The first version of the Web Audio spec only defined the factory methods. After a design review in October 2013, it was decided to add constructors because they have numerous benefits over factory methods. The constructors were added to the spec from August to October 2016. Factory methods continue to be included in the spec and are not deprecated.

Example

This simple snippet of code shows the creation of some audio nodes, and how the AudioNode properties and methods can be used. You can find examples of such usage on any of the examples linked to on the Web Audio API landing page (for example Violent Theremin).

```js
const audioCtx = new AudioContext();

const oscillator = new OscillatorNode(audioCtx);
const gainNode = new GainNode(audioCtx);

oscillator.connect(gainNode).connect(audioCtx.destination);

oscillator.context; // the AudioContext created above
oscillator.numberOfInputs; // 0 (an oscillator is a source node)
oscillator.numberOfOutputs; // 1
oscillator.channelCount; // 2 (the default)
```

Specifications

Specification
Web Audio API
# AudioNode

Browser compatibility

See also


This page was last modified on by MDN contributors.

