BaseAudioContext: createAnalyser() method
Baseline Widely available
This feature is well established and works across many devices and browser versions. It’s been available across browsers since April 2021.
The createAnalyser() method of the BaseAudioContext interface creates an AnalyserNode, which can be used to expose audio time and frequency data and create data visualizations.
Note: The AnalyserNode() constructor is the recommended way to create an AnalyserNode; see Creating an AudioNode.
Note: For more on using this node, see the AnalyserNode page.
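For comparison, here is a minimal sketch of the two equivalent ways to obtain an analyser node; the fftSize value shown is an arbitrary example choice, not a requirement:

```javascript
const audioCtx = new AudioContext();

// Factory method described on this page:
const analyserA = audioCtx.createAnalyser();
analyserA.fftSize = 2048;

// Equivalent, using the recommended AnalyserNode() constructor,
// which accepts an options object for initial configuration:
const analyserB = new AnalyserNode(audioCtx, { fftSize: 2048 });
```

Both produce an AnalyserNode associated with the given context; the constructor form lets you set properties such as fftSize at creation time.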
Syntax
createAnalyser()
Parameters
None.
Return value
An AnalyserNode.
Examples
The following example shows basic usage of an AudioContext to create an AnalyserNode, then uses requestAnimationFrame() to collect time domain data repeatedly and draw an "oscilloscope style" output of the current audio input. For more complete applied examples/information, check out our Voice-change-O-matic demo (see app.js lines 108-193 for relevant code).
```js
// Assumes a <canvas> element is present in the page; these setup
// lines are added here so the snippet is self-contained.
const canvas = document.querySelector("canvas");
const canvasCtx = canvas.getContext("2d");
const WIDTH = canvas.width;
const HEIGHT = canvas.height;
let drawVisual;

const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();

// …

analyser.fftSize = 2048;
const bufferLength = analyser.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);
analyser.getByteTimeDomainData(dataArray);

// draw an oscilloscope of the current audio source
function draw() {
  drawVisual = requestAnimationFrame(draw);

  analyser.getByteTimeDomainData(dataArray);

  canvasCtx.fillStyle = "rgb(200 200 200)";
  canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);

  canvasCtx.lineWidth = 2;
  canvasCtx.strokeStyle = "rgb(0 0 0)";
  canvasCtx.beginPath();

  const sliceWidth = (WIDTH * 1.0) / bufferLength;
  let x = 0;

  for (let i = 0; i < bufferLength; i++) {
    const v = dataArray[i] / 128.0;
    const y = (v * HEIGHT) / 2;

    if (i === 0) {
      canvasCtx.moveTo(x, y);
    } else {
      canvasCtx.lineTo(x, y);
    }

    x += sliceWidth;
  }

  canvasCtx.lineTo(canvas.width, canvas.height / 2);
  canvasCtx.stroke();
}

draw();
```

Specifications
| Specification |
|---|
| Web Audio API # dom-baseaudiocontext-createanalyser |