WebAudioAPI
Types
analyserNode
A node able to provide real-time frequency and time-domain analysis information. It is an AudioNode that passes the audio stream unchanged from the input to the output, but allows you to take the generated data, process it, and create audio visualizations. See AnalyserNode on MDN
type analyserNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  mutable fftSize: int,
  frequencyBinCount: int,
  mutable minDecibels: float,
  mutable maxDecibels: float,
  mutable smoothingTimeConstant: float,
}

Record fields
Module
There are methods and helpers defined in AnalyserNode.
analyserOptions
type analyserOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable fftSize?: int,
  mutable maxDecibels?: float,
  mutable minDecibels?: float,
  mutable smoothingTimeConstant?: float,
}

Record fields
channelCount
channelCountMode
channelInterpretation
fftSize
maxDecibels
minDecibels
smoothingTimeConstant
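A minimal sketch of filling in the record above; the WebAPI.WebAudioAPI module path is an assumption, while the field names come straight from analyserOptions:

// Module path assumed; only documented analyserOptions fields are used.
let analyserOpts: WebAPI.WebAudioAPI.analyserOptions = {
  fftSize: 2048,
  smoothingTimeConstant: 0.8,
  minDecibels: -90.0,
  maxDecibels: -10.0,
}

The resulting node's frequencyBinCount is always half of fftSize, so this configuration yields 1024 frequency bins per analysis frame.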
audioBuffer
A short audio asset residing in memory, created from an audio file using the AudioContext.decodeAudioData() method, or from raw data using AudioContext.createBuffer(). Once put into an AudioBuffer, the audio can then be played by being passed into an AudioBufferSourceNode. See AudioBuffer on MDN
type audioBuffer = {
  sampleRate: float,
  length: int,
  duration: float,
  numberOfChannels: int,
}

Record fields
Module
There are methods and helpers defined in AudioBuffer.
audioBufferOptions
type audioBufferOptions = {
  mutable numberOfChannels?: int,
  mutable length: int,
  mutable sampleRate: float,
}

Record fields
numberOfChannels
length
sampleRate
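A sketch of describing a one-second stereo buffer with this record; the WebAPI.WebAudioAPI path is assumed, the fields are the ones listed above:

// length is a frame count, so duration = length / sampleRate (1 s here).
let bufferOpts: WebAPI.WebAudioAPI.audioBufferOptions = {
  numberOfChannels: 2,
  length: 44100,
  sampleRate: 44100.0,
}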
audioBufferSourceNode
An AudioScheduledSourceNode which represents an audio source consisting of in-memory audio data, stored in an AudioBuffer. It's especially useful for playing back audio which has particularly stringent timing accuracy requirements, such as for sounds that must match a specific rhythm and can be kept in memory rather than being played from disk or the network. See AudioBufferSourceNode on MDN
type audioBufferSourceNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  mutable buffer: Null.t<audioBuffer>,
  playbackRate: audioParam,
  detune: audioParam,
  mutable loop: bool,
  mutable loopStart: float,
  mutable loopEnd: float,
}

Record fields
Module
There are methods and helpers defined in AudioBufferSourceNode.
audioBufferSourceOptions
type audioBufferSourceOptions = {
  mutable buffer?: Null.t<audioBuffer>,
  mutable detune?: float,
  mutable loop?: bool,
  mutable loopEnd?: float,
  mutable loopStart?: float,
  mutable playbackRate?: float,
}

Record fields
buffer
detune
loop
loopEnd
loopStart
playbackRate
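For instance, looping a half-second slice of an already decoded buffer could be expressed like this. A sketch only: the module path and the use of the standard library's Null.make to fill the Null.t field are assumptions:

// loopStart/loopEnd are in seconds; a playbackRate of 1.0 means normal speed.
let loopingSliceOptions = (buf: WebAPI.WebAudioAPI.audioBuffer): WebAPI.WebAudioAPI.audioBufferSourceOptions => {
  buffer: Null.make(buf),
  loop: true,
  loopStart: 0.0,
  loopEnd: 0.5,
  playbackRate: 1.0,
}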
audioContext
An audio-processing graph built from audio modules linked together, each represented by an AudioNode. See AudioContext on MDN
type audioContext = {
  destination: audioDestinationNode,
  sampleRate: float,
  currentTime: float,
  listener: audioListener,
  state: audioContextState,
  audioWorklet: audioWorklet,
  baseLatency: float,
  outputLatency: float,
}

Record fields
Module
There are methods and helpers defined in AudioContext.
audioContextOptions
type audioContextOptions = {
  mutable latencyHint?: unknown,
  mutable sampleRate?: float,
}

Record fields
latencyHint
sampleRate
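A sketch of asking for a 48 kHz context (module path assumed; latencyHint is typed as unknown here, so it is simply omitted):

let ctxOptions: WebAPI.WebAudioAPI.audioContextOptions = {
  sampleRate: 48000.0,
}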
audioContextState
type audioContextState =
  | @as("closed") Closed
  | @as("running") Running
  | @as("suspended") Suspended

audioDestinationNode
AudioDestinationNode has no output (as it is the output, no more AudioNode can be linked after it in the audio graph) and one input. The number of channels in the input must be between 0 and the maxChannelCount value or an exception is raised. See AudioDestinationNode on MDN
type audioDestinationNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  maxChannelCount: int,
}

Record fields
Module
There are methods and helpers defined in AudioDestinationNode.
audioListener
The position and orientation of the unique person listening to the audio scene, used in audio spatialization. All PannerNodes spatialize in relation to the AudioListener stored in the BaseAudioContext.listener attribute. See AudioListener on MDN
type audioListener = {
  positionX: audioParam,
  positionY: audioParam,
  positionZ: audioParam,
  forwardX: audioParam,
  forwardY: audioParam,
  forwardZ: audioParam,
  upX: audioParam,
  upY: audioParam,
  upZ: audioParam,
}

Record fields
audioNode
A generic interface for representing an audio processing module, such as an OscillatorNode, a GainNode, or the context's AudioDestinationNode. See AudioNode on MDN
type audioNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
}

Record fields
Module
There are methods and helpers defined in AudioNode.
audioNodeOptions
type audioNodeOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
}

Record fields
channelCount
channelCountMode
channelInterpretation
audioParam
The Web Audio API's AudioParam interface represents an audio-related parameter, usually a parameter of an AudioNode (such as GainNode.gain).See AudioParam on MDN
type audioParam = {
  mutable value: float,
  defaultValue: float,
  minValue: float,
  maxValue: float,
}

Record fields
Module
There are methods and helpers defined in AudioParam.
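Since value is the record's only mutable field, an immediate (unscheduled) change is a plain field assignment; anything time-based goes through the module's helpers. A small sketch, with the module path assumed:

// Applies the new value right away, e.g. to a gain node's gain or an oscillator's frequency.
let setNow = (param: WebAPI.WebAudioAPI.audioParam, v: float) => {
  param.value = v
}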
audioProcessingEvent
The Web Audio API events that occur when a ScriptProcessorNode input buffer is ready to be processed. See AudioProcessingEvent on MDN
type audioProcessingEvent = {
  type_: WebAPI.EventAPI.eventType,
  target: Null.t<WebAPI.EventAPI.eventTarget>,
  currentTarget: Null.t<WebAPI.EventAPI.eventTarget>,
  eventPhase: int,
  bubbles: bool,
  cancelable: bool,
  defaultPrevented: bool,
  composed: bool,
  isTrusted: bool,
  timeStamp: float,
}

Record fields
type_
Returns the type of event, e.g. "click", "hashchange", or "submit". Read more on MDN
target
Returns the object to which event is dispatched (its target). Read more on MDN
currentTarget
Returns the object whose event listener's callback is currently being invoked. Read more on MDN
eventPhase
Returns the event's phase, which is one of NONE, CAPTURING_PHASE, AT_TARGET, and BUBBLING_PHASE. Read more on MDN
bubbles
Returns true or false depending on how event was initialized. True if event goes through its target's ancestors in reverse tree order, and false otherwise. Read more on MDN
cancelable
Returns true or false depending on how event was initialized. Its return value does not always carry meaning, but true can indicate that part of the operation during which event was dispatched can be canceled by invoking the preventDefault() method. Read more on MDN
defaultPrevented
Returns true if preventDefault() was invoked successfully to indicate cancelation, and false otherwise. Read more on MDN
composed
Returns true or false depending on how event was initialized. True if event invokes listeners past a ShadowRoot node that is the root of its target, and false otherwise. Read more on MDN
isTrusted
Returns true if event was dispatched by the user agent, and false otherwise. Read more on MDN
timeStamp
Returns the event's timestamp as the number of milliseconds measured relative to the time origin. Read more on MDN
Module
There are methods and helpers defined in AudioProcessingEvent.
audioProcessingEventInit
type audioProcessingEventInit = {
  mutable bubbles?: bool,
  mutable cancelable?: bool,
  mutable composed?: bool,
  mutable playbackTime: float,
  mutable inputBuffer: audioBuffer,
  mutable outputBuffer: audioBuffer,
}

Record fields
bubbles
cancelable
composed
playbackTime
inputBuffer
outputBuffer
audioScheduledSourceNode
type audioScheduledSourceNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
}

Record fields
Module
There are methods and helpers defined in AudioScheduledSourceNode.
audioTimestamp
type audioTimestamp = {
  mutable contextTime?: float,
  mutable performanceTime?: float,
}

Record fields
contextTime
performanceTime
audioWorkletNode
type audioWorkletNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  parameters: audioParamMap,
  port: WebAPI.ChannelMessagingAPI.messagePort,
}

Record fields
Module
There are methods and helpers defined in AudioWorkletNode.
audioWorkletNodeOptions
type audioWorkletNodeOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable numberOfInputs?: int,
  mutable numberOfOutputs?: int,
  mutable outputChannelCount?: array<int>,
  mutable parameterData?: WebAPI.Prelude.any,
  mutable processorOptions?: Dict.t<string>,
}

Record fields
channelCount
channelCountMode
channelInterpretation
numberOfInputs
numberOfOutputs
outputChannelCount
parameterData
processorOptions
baseAudioContext
type baseAudioContext = {
  destination: audioDestinationNode,
  sampleRate: float,
  currentTime: float,
  listener: audioListener,
  state: audioContextState,
  audioWorklet: audioWorklet,
}

Record fields
Module
There are methods and helpers defined in BaseAudioContext.
biquadFilterNode
A simple low-order filter, created using the AudioContext.createBiquadFilter() method. It is an AudioNode that can represent different kinds of filters, tone control devices, and graphic equalizers. See BiquadFilterNode on MDN
type biquadFilterNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  mutable type_: biquadFilterType,
  frequency: audioParam,
  detune: audioParam,
  q: audioParam,
  gain: audioParam,
}

Record fields
Module
There are methods and helpers defined in BiquadFilterNode.
biquadFilterOptions
type biquadFilterOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable type_?: biquadFilterType,
  mutable q?: float,
  mutable detune?: float,
  mutable frequency?: float,
  mutable gain?: float,
}

Record fields
channelCount
channelCountMode
channelInterpretation
type_
q
detune
frequency
gain
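As an illustration, a low-pass filter at 1 kHz could be configured like this. A sketch only: the open of WebAPI.WebAudioAPI is an assumption, and the Lowpass constructor comes from biquadFilterType below:

open WebAPI.WebAudioAPI

// q controls how sharp the response is around the 1 kHz cutoff.
let lowpassOptions: biquadFilterOptions = {
  type_: Lowpass,
  frequency: 1000.0,
  q: 1.0,
}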
biquadFilterType
type biquadFilterType =
  | @as("allpass") Allpass
  | @as("bandpass") Bandpass
  | @as("highpass") Highpass
  | @as("highshelf") Highshelf
  | @as("lowpass") Lowpass
  | @as("lowshelf") Lowshelf
  | @as("notch") Notch
  | @as("peaking") Peaking

channelCountMode

type channelCountMode =
  | @as("clamped-max") ClampedMax
  | @as("explicit") Explicit
  | @as("max") Max

channelInterpretation

type channelInterpretation =
  | @as("discrete") Discrete
  | @as("speakers") Speakers

channelMergerNode
The ChannelMergerNode interface, often used in conjunction with its opposite, ChannelSplitterNode, reunites different mono inputs into a single output. Each input is used to fill a channel of the output. This is useful for accessing each channel separately, e.g. for performing channel mixing where gain must be separately controlled on each channel. See ChannelMergerNode on MDN
type channelMergerNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
}

Record fields
Module
There are methods and helpers defined in ChannelMergerNode.
channelMergerOptions
type channelMergerOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable numberOfInputs?: int,
}

Record fields
channelCount
channelCountMode
channelInterpretation
numberOfInputs
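A sketch of a merger that combines two mono inputs into one output, matching the description above (module path assumed):

let mergerOptions: WebAPI.WebAudioAPI.channelMergerOptions = {
  numberOfInputs: 2,
}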
channelSplitterNode
The ChannelSplitterNode interface, often used in conjunction with its opposite, ChannelMergerNode, separates the different channels of an audio source into a set of mono outputs. This is useful for accessing each channel separately, e.g. for performing channel mixing where gain must be separately controlled on each channel. See ChannelSplitterNode on MDN
type channelSplitterNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
}

Record fields
Module
There are methods and helpers defined in ChannelSplitterNode.
channelSplitterOptions
type channelSplitterOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable numberOfOutputs?: int,
}

Record fields
channelCount
channelCountMode
channelInterpretation
numberOfOutputs
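And the mirror image for the splitter, a sketch that requests one mono output per channel of a stereo source (module path assumed):

let splitterOptions: WebAPI.WebAudioAPI.channelSplitterOptions = {
  numberOfOutputs: 2,
}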
constantSourceNode
type constantSourceNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  offset: audioParam,
}

Record fields
Module
There are methods and helpers defined in ConstantSourceNode.
constantSourceOptions
type constantSourceOptions = {mutable offset?: float}

Record fields
offset
convolverNode
An AudioNode that performs a Linear Convolution on a given AudioBuffer, often used to achieve a reverb effect. A ConvolverNode always has exactly one input and one output. See ConvolverNode on MDN
type convolverNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  mutable buffer: Null.t<audioBuffer>,
  mutable normalize: bool,
}

Record fields
Module
There are methods and helpers defined in ConvolverNode.
convolverOptions
type convolverOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable buffer?: Null.t<audioBuffer>,
  mutable disableNormalization?: bool,
}

Record fields
channelCount
channelCountMode
channelInterpretation
buffer
disableNormalization
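For example, pairing the node with an impulse-response buffer while keeping the built-in normalization could look like this. A sketch only: the module path and the use of Null.make are assumptions:

let reverbOptionsFor = (impulse: WebAPI.WebAudioAPI.audioBuffer): WebAPI.WebAudioAPI.convolverOptions => {
  buffer: Null.make(impulse),
  disableNormalization: false,
}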
decodeErrorCallback
type decodeErrorCallback = WebAPI.Prelude.domException => unit

decodeSuccessCallback

type decodeSuccessCallback = audioBuffer => unit

delayNode
A delay-line; an AudioNode audio-processing module that causes a delay between the arrival of input data and its propagation to the output. See DelayNode on MDN
type delayNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  delayTime: audioParam,
}

Record fields
Module
There are methods and helpers defined in DelayNode.
delayOptions
type delayOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable maxDelayTime?: float,
  mutable delayTime?: float,
}

Record fields
channelCount
channelCountMode
channelInterpretation
maxDelayTime
delayTime
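A sketch of a quarter-second delay line capped at one second (module path assumed; both fields are in seconds):

let delayOptions: WebAPI.WebAudioAPI.delayOptions = {
  maxDelayTime: 1.0,
  delayTime: 0.25,
}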
distanceModelType
type distanceModelType =
  | @as("exponential") Exponential
  | @as("inverse") Inverse
  | @as("linear") Linear

doubleRange

type doubleRange = {mutable max?: float, mutable min?: float}

Record fields
max
min
dynamicsCompressorNode
A compression effect that lowers the volume of the loudest parts of the signal, helping to prevent clipping and distortion. Inherits properties from its parent, AudioNode. See DynamicsCompressorNode on MDN
type dynamicsCompressorNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  threshold: audioParam,
  knee: audioParam,
  ratio: audioParam,
  reduction: float,
  attack: audioParam,
  release: audioParam,
}

Record fields
Module
There are methods and helpers defined in DynamicsCompressorNode.
dynamicsCompressorOptions
type dynamicsCompressorOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable attack?: float,
  mutable knee?: float,
  mutable ratio?: float,
  mutable release?: float,
  mutable threshold?: float,
}

Record fields
channelCount
channelCountMode
channelInterpretation
attack
knee
ratio
release
threshold
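As an illustration, a fairly gentle compressor setting. A sketch: the module path is assumed and the values are examples only (threshold and knee are in dB, attack and release in seconds):

let compressorOptions: WebAPI.WebAudioAPI.dynamicsCompressorOptions = {
  threshold: -24.0,
  knee: 30.0,
  ratio: 4.0,
  attack: 0.003,
  release: 0.25,
}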
gainNode
A change in volume. It is an AudioNode audio-processing module that causes a given gain to be applied to the input data before its propagation to the output. A GainNode always has exactly one input and one output, both with the same number of channels. See GainNode on MDN
type gainNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  gain: audioParam,
}

Record fields
Module
There are methods and helpers defined in GainNode.
gainOptions
type gainOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable gain?: float,
}

Record fields
channelCount
channelCountMode
channelInterpretation
gain
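A sketch of a node that halves the amplitude of whatever passes through it (module path assumed; gain is a linear multiplier, not decibels):

let halfGainOptions: WebAPI.WebAudioAPI.gainOptions = {
  gain: 0.5,
}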
iirFilterNode
The IIRFilterNode interface of the Web Audio API is an AudioNode processor which implements a general infinite impulse response (IIR) filter; this type of filter can be used to implement tone control devices and graphic equalizers as well. It lets the parameters of the filter response be specified, so that it can be tuned as needed. See IIRFilterNode on MDN
type iirFilterNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
}

Record fields
iirFilterOptions
type iirFilterOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable feedforward: array<float>,
  mutable feedback: array<float>,
}

Record fields
channelCount
channelCountMode
channelInterpretation
feedforward
feedback
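feedforward and feedback are the only required fields. As a sketch, a simple first-order low-pass section could be passed like this (module path assumed; the coefficient values are purely illustrative):

// H(z) = (0.1 + 0.1 z^-1) / (1 - 0.8 z^-1): unity gain at DC, rolling off toward Nyquist.
let iirOptions: WebAPI.WebAudioAPI.iirFilterOptions = {
  feedforward: [0.1, 0.1],
  feedback: [1.0, -0.8],
}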
mediaElementAudioSourceNode
A MediaElementSourceNode has no inputs and exactly one output, and is created using the AudioContext.createMediaElementSource method. The number of channels in the output equals the number of channels of the audio referenced by the HTMLMediaElement used in the creation of the node, or is 1 if the HTMLMediaElement has no audio. See MediaElementAudioSourceNode on MDN
type mediaElementAudioSourceNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  mediaElement: WebAPI.DOMAPI.htmlMediaElement,
}

Record fields
Module
There are methods and helpers defined in MediaElementAudioSourceNode.
mediaElementAudioSourceOptions
type mediaElementAudioSourceOptions = {
  mutable mediaElement: WebAPI.DOMAPI.htmlMediaElement,
}

Record fields
mediaElement
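Since mediaElement is the only field, building the record just wraps an existing element. A sketch, with the element taken as a parameter and the module path assumed:

let elementSourceOptionsFor = (el: WebAPI.DOMAPI.htmlMediaElement): WebAPI.WebAudioAPI.mediaElementAudioSourceOptions => {
  mediaElement: el,
}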
mediaStreamAudioDestinationNode
type mediaStreamAudioDestinationNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  stream: WebAPI.MediaCaptureAndStreamsAPI.mediaStream,
}

Record fields
Module
There are methods and helpers defined in MediaStreamAudioDestinationNode.
mediaStreamAudioSourceNode
A type of AudioNode which operates as an audio source whose media is received from a MediaStream obtained using the WebRTC or Media Capture and Streams APIs. See MediaStreamAudioSourceNode on MDN
type mediaStreamAudioSourceNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  mediaStream: WebAPI.MediaCaptureAndStreamsAPI.mediaStream,
}

Record fields
Module
There are methods and helpers defined in MediaStreamAudioSourceNode.
mediaStreamAudioSourceOptions
type mediaStreamAudioSourceOptions = {
  mutable mediaStream: WebAPI.MediaCaptureAndStreamsAPI.mediaStream,
}

Record fields
mediaStream
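Likewise for a stream source, a sketch that wraps a stream obtained elsewhere (for example from getUserMedia); the module path is assumed:

let streamSourceOptionsFor = (
  stream: WebAPI.MediaCaptureAndStreamsAPI.mediaStream,
): WebAPI.WebAudioAPI.mediaStreamAudioSourceOptions => {
  mediaStream: stream,
}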
mediaTrackCapabilities
type mediaTrackCapabilities = {
  mutable width?: uLongRange,
  mutable height?: uLongRange,
  mutable aspectRatio?: doubleRange,
  mutable frameRate?: doubleRange,
  mutable facingMode?: array<string>,
  mutable sampleRate?: uLongRange,
  mutable sampleSize?: uLongRange,
  mutable echoCancellation?: array<bool>,
  mutable autoGainControl?: array<bool>,
  mutable noiseSuppression?: array<bool>,
  mutable channelCount?: uLongRange,
  mutable deviceId?: string,
  mutable groupId?: string,
  mutable backgroundBlur?: array<bool>,
  mutable displaySurface?: string,
}

Record fields
width
height
aspectRatio
frameRate
facingMode
sampleRate
sampleSize
echoCancellation
autoGainControl
noiseSuppression
channelCount
deviceId
groupId
backgroundBlur
displaySurface
mediaTrackConstraints
type mediaTrackConstraints = {
  mutable width?: int,
  mutable height?: int,
  mutable aspectRatio?: float,
  mutable frameRate?: float,
  mutable facingMode?: string,
  mutable sampleRate?: int,
  mutable sampleSize?: int,
  mutable echoCancellation?: bool,
  mutable autoGainControl?: bool,
  mutable noiseSuppression?: bool,
  mutable channelCount?: int,
  mutable deviceId?: string,
  mutable groupId?: string,
  mutable backgroundBlur?: bool,
  mutable displaySurface?: string,
  mutable advanced?: array<mediaTrackConstraintSet>,
}

Record fields
width
height
aspectRatio
frameRate
facingMode
sampleRate
sampleSize
echoCancellation
autoGainControl
noiseSuppression
channelCount
deviceId
groupId
backgroundBlur
displaySurface
advanced
mediaTrackConstraintSet
type mediaTrackConstraintSet = {
  mutable width?: int,
  mutable height?: int,
  mutable aspectRatio?: float,
  mutable frameRate?: float,
  mutable facingMode?: string,
  mutable sampleRate?: int,
  mutable sampleSize?: int,
  mutable echoCancellation?: bool,
  mutable autoGainControl?: bool,
  mutable noiseSuppression?: bool,
  mutable channelCount?: int,
  mutable deviceId?: string,
  mutable groupId?: string,
  mutable backgroundBlur?: bool,
  mutable displaySurface?: string,
}

Record fields
width
height
aspectRatio
frameRate
facingMode
sampleRate
sampleSize
echoCancellation
autoGainControl
noiseSuppression
channelCount
deviceId
groupId
backgroundBlur
displaySurface
mediaTrackSettings
type mediaTrackSettings = {
  mutable width?: int,
  mutable height?: int,
  mutable aspectRatio?: float,
  mutable frameRate?: float,
  mutable facingMode?: string,
  mutable sampleRate?: int,
  mutable sampleSize?: int,
  mutable echoCancellation?: bool,
  mutable autoGainControl?: bool,
  mutable noiseSuppression?: bool,
  mutable channelCount?: int,
  mutable deviceId?: string,
  mutable groupId?: string,
  mutable backgroundBlur?: bool,
  mutable displaySurface?: string,
}

Record fields
width
height
aspectRatio
frameRate
facingMode
sampleRate
sampleSize
echoCancellation
autoGainControl
noiseSuppression
channelCount
deviceId
groupId
backgroundBlur
displaySurface
offlineAudioCompletionEvent
The Web Audio API OfflineAudioCompletionEvent interface represents events that occur when the processing of an OfflineAudioContext is terminated. The complete event implements this interface. See OfflineAudioCompletionEvent on MDN
type offlineAudioCompletionEvent = {
  type_: WebAPI.EventAPI.eventType,
  target: Null.t<WebAPI.EventAPI.eventTarget>,
  currentTarget: Null.t<WebAPI.EventAPI.eventTarget>,
  eventPhase: int,
  bubbles: bool,
  cancelable: bool,
  defaultPrevented: bool,
  composed: bool,
  isTrusted: bool,
  timeStamp: float,
  renderedBuffer: audioBuffer,
}

Record fields
type_
Returns the type of event, e.g. "click", "hashchange", or "submit". Read more on MDN
target
Returns the object to which event is dispatched (its target). Read more on MDN
currentTarget
Returns the object whose event listener's callback is currently being invoked. Read more on MDN
eventPhase
Returns the event's phase, which is one of NONE, CAPTURING_PHASE, AT_TARGET, and BUBBLING_PHASE. Read more on MDN
bubbles
Returns true or false depending on how event was initialized. True if event goes through its target's ancestors in reverse tree order, and false otherwise. Read more on MDN
cancelable
Returns true or false depending on how event was initialized. Its return value does not always carry meaning, but true can indicate that part of the operation during which event was dispatched can be canceled by invoking the preventDefault() method. Read more on MDN
defaultPrevented
Returns true if preventDefault() was invoked successfully to indicate cancelation, and false otherwise. Read more on MDN
composed
Returns true or false depending on how event was initialized. True if event invokes listeners past a ShadowRoot node that is the root of its target, and false otherwise. Read more on MDN
isTrusted
Returns true if event was dispatched by the user agent, and false otherwise. Read more on MDN
timeStamp
Returns the event's timestamp as the number of milliseconds measured relative to the time origin. Read more on MDN
Module
There are methods and helpers defined in OfflineAudioCompletionEvent.
offlineAudioCompletionEventInit
type offlineAudioCompletionEventInit = {
  mutable bubbles?: bool,
  mutable cancelable?: bool,
  mutable composed?: bool,
  mutable renderedBuffer: audioBuffer,
}

Record fields
bubbles
cancelable
composed
renderedBuffer
offlineAudioContext
An AudioContext interface representing an audio-processing graph built from AudioNodes linked together. In contrast with a standard AudioContext, an OfflineAudioContext doesn't render the audio to the device hardware; instead, it generates it, as fast as it can, and outputs the result to an AudioBuffer. See OfflineAudioContext on MDN
type offlineAudioContext = {
  destination: audioDestinationNode,
  sampleRate: float,
  currentTime: float,
  listener: audioListener,
  state: audioContextState,
  audioWorklet: audioWorklet,
  length: int,
}

Record fields
Module
There are methods and helpers defined in OfflineAudioContext.
offlineAudioContextOptions
type offlineAudioContextOptions = {
  mutable numberOfChannels?: int,
  mutable length: int,
  mutable sampleRate: float,
}

Record fields
numberOfChannels
length
sampleRate
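As a sketch, a 30-second stereo offline render at 44.1 kHz would be described like this (module path assumed; length is a frame count, i.e. duration times sampleRate):

let offlineOptions: WebAPI.WebAudioAPI.offlineAudioContextOptions = {
  numberOfChannels: 2,
  length: 30 * 44100,
  sampleRate: 44100.0,
}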
oscillatorNode
The OscillatorNode interface represents a periodic waveform, such as a sine wave. It is an AudioScheduledSourceNode audio-processing module that causes a specified frequency of a given wave to be created—in effect, a constant tone. See OscillatorNode on MDN
type oscillatorNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  mutable type_: oscillatorType,
  frequency: audioParam,
  detune: audioParam,
}

Record fields
Module
There are methods and helpers defined in OscillatorNode.
oscillatorOptions
type oscillatorOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable type_?: oscillatorType,
  mutable frequency?: float,
  mutable detune?: float,
  mutable periodicWave?: periodicWave,
}

Record fields
channelCount
channelCountMode
channelInterpretation
type_
frequency
detune
periodicWave
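For example, a concert-pitch sine tone. A sketch: the open of WebAPI.WebAudioAPI is assumed; Sine comes from oscillatorType below, frequency is in Hz and detune in cents:

open WebAPI.WebAudioAPI

let sineOptions: oscillatorOptions = {
  type_: Sine,
  frequency: 440.0,
  detune: 0.0,
}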
oscillatorType
type oscillatorType =
  | @as("custom") Custom
  | @as("sawtooth") Sawtooth
  | @as("sine") Sine
  | @as("square") Square
  | @as("triangle") Triangle

overSampleType

type overSampleType =
  | @as("2x") V2x
  | @as("4x") V4x
  | @as("none") None

pannerNode
A PannerNode always has exactly one input and one output: the input can be mono or stereo but the output is always stereo (2 channels); you can't have panning effects without at least two audio channels! See PannerNode on MDN
type pannerNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  mutable panningModel: panningModelType,
  positionX: audioParam,
  positionY: audioParam,
  positionZ: audioParam,
  orientationX: audioParam,
  orientationY: audioParam,
  orientationZ: audioParam,
  mutable distanceModel: distanceModelType,
  mutable refDistance: float,
  mutable maxDistance: float,
  mutable rolloffFactor: float,
  mutable coneInnerAngle: float,
  mutable coneOuterAngle: float,
  mutable coneOuterGain: float,
}

Record fields
Module
There are methods and helpers defined in PannerNode.
pannerOptions
type pannerOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable panningModel?: panningModelType,
  mutable distanceModel?: distanceModelType,
  mutable positionX?: float,
  mutable positionY?: float,
  mutable positionZ?: float,
  mutable orientationX?: float,
  mutable orientationY?: float,
  mutable orientationZ?: float,
  mutable refDistance?: float,
  mutable maxDistance?: float,
  mutable rolloffFactor?: float,
  mutable coneInnerAngle?: float,
  mutable coneOuterAngle?: float,
  mutable coneOuterGain?: float,
}

Record fields
channelCount
channelCountMode
channelInterpretation
panningModel
distanceModel
positionX
positionY
positionZ
orientationX
orientationY
orientationZ
refDistance
maxDistance
rolloffFactor
coneInnerAngle
coneOuterAngle
coneOuterGain
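A sketch that places a source two units to the listener's right with HRTF panning and inverse distance roll-off; the open of WebAPI.WebAudioAPI is assumed, the constructors come from panningModelType and distanceModelType below:

open WebAPI.WebAudioAPI

let rightSideOptions: pannerOptions = {
  panningModel: HRTF,
  distanceModel: Inverse,
  positionX: 2.0,
  positionY: 0.0,
  positionZ: 0.0,
}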
panningModelType
type panningModelType =
  | HRTF
  | @as("equalpower") Equalpower

periodicWave
PeriodicWave has no inputs or outputs; it is used to define custom oscillators when calling OscillatorNode.setPeriodicWave(). The PeriodicWave itself is created/returned by AudioContext.createPeriodicWave(). See PeriodicWave on MDN
type periodicWave = {}

Module
There are methods and helpers defined in PeriodicWave.
periodicWaveConstraints
type periodicWaveConstraints = {mutable disableNormalization?: bool}

Record fields
disableNormalization
periodicWaveOptions
type periodicWaveOptions = {
  mutable disableNormalization?: bool,
  mutable real?: array<float>,
  mutable imag?: array<float>,
}

Record fields
disableNormalization
real
imag
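real and imag hold the cosine and sine Fourier coefficients of one period, with index 0 being the DC term. A sketch of a single sine partial at the fundamental (module path assumed):

let sinePartialOptions: WebAPI.WebAudioAPI.periodicWaveOptions = {
  real: [0.0, 0.0],
  imag: [0.0, 1.0],
}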
requestCredentials
type requestCredentials =
  | @as("include") Include
  | @as("omit") Omit
  | @as("same-origin") SameOrigin

stereoPannerNode
The pan property takes a unitless value between -1 (full left pan) and 1 (full right pan). This interface was introduced as a much simpler way to apply a simple panning effect than having to use a full PannerNode. See StereoPannerNode on MDN
type stereoPannerNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  pan: audioParam,
}

Record fields
Module
There are methods and helpers defined in StereoPannerNode.
stereoPannerOptions
type stereoPannerOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable pan?: float,
}

Record fields
channelCount
channelCountMode
channelInterpretation
pan
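A sketch panning the signal halfway to the left (module path assumed; as noted above, pan runs from -1 for full left to 1 for full right):

let halfLeftOptions: WebAPI.WebAudioAPI.stereoPannerOptions = {
  pan: -0.5,
}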
uLongRange
type uLongRange = {mutable max?: int, mutable min?: int}

Record fields
max
min
waveShaperNode
A WaveShaperNode always has exactly one input and one output. See WaveShaperNode on MDN
type waveShaperNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  mutable curve: Null.t<array<float>>,
  mutable oversample: overSampleType,
}

Record fields
Module
There are methods and helpers defined in WaveShaperNode.
waveShaperOptions
type waveShaperOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable curve?: array<float>,
  mutable oversample?: overSampleType,
}

Record fields
channelCount
channelCountMode
channelInterpretation
curve
oversample
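As a sketch, a mild symmetric shaping curve rendered with 4x oversampling; the open of WebAPI.WebAudioAPI is assumed, V4x comes from overSampleType above, and the three-point curve is illustrative only:

open WebAPI.WebAudioAPI

// Input values of -1, 0 and 1 map onto the curve points; intermediate values are interpolated.
let softClipOptions: waveShaperOptions = {
  curve: [-0.8, 0.0, 0.8],
  oversample: V4x,
}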
workletOptions
type workletOptions = {mutable credentials?: requestCredentials}