RTCInboundRtpStreamStats: jitterBufferDelay property
Baseline: Widely available
This feature is well established and works across many devices and browser versions. It has been available across browsers since August 2022.
The jitterBufferDelay property of the RTCInboundRtpStreamStats dictionary indicates the accumulated time that all audio samples and complete video frames have spent in the jitter buffer.
For an audio sample, the time is measured from when the sample is received by the jitter buffer (its "ingest timestamp") until the sample is emitted (its "exit timestamp"). For a video frame, the ingest timestamp is when the first packet of the frame enters the buffer, and the exit timestamp is when the whole frame leaves it. Note that the audio samples in a single RTP packet share the same ingest timestamp but have different exit timestamps, while a single video frame may be split across several RTP packets.
jitterBufferDelay is incremented, along with jitterBufferEmittedCount, when samples or frames exit the buffer. The average jitter buffer delay is therefore jitterBufferDelay / jitterBufferEmittedCount.
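For example, here is a minimal sketch of reading these fields through the standard getStats() API and computing the average delay. It assumes pc is an established RTCPeerConnection; the function name is illustrative:

```js
// Sketch: compute the average jitter buffer delay for each inbound RTP
// stream on an established RTCPeerConnection "pc".
async function logAverageJitterBufferDelay(pc) {
  const stats = await pc.getStats();
  stats.forEach((report) => {
    if (report.type === "inbound-rtp") {
      // Guard against division by zero before anything has been emitted.
      if (report.jitterBufferEmittedCount > 0) {
        const avgDelay =
          report.jitterBufferDelay / report.jitterBufferEmittedCount;
        console.log(
          `${report.kind} average jitter buffer delay: ${avgDelay.toFixed(3)} s`,
        );
      }
    }
  });
}
```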
The jitter buffer may hold samples or frames for a longer (or shorter) time, allowing data to build up in the buffer so that it can provide smoother, more continuous playout. A low and relatively constant jitterBufferDelay is desirable: it indicates that the buffer does not need to hold many frames or samples and that the network is stable. Higher values may indicate a less reliable or less predictable network. Similarly, a steady average delay indicates a stable network, while a rising average delay indicates growing latency. A sketch of how such a trend might be observed follows.
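Because both counters are cumulative, one way to watch the trend is to sample them periodically and compute the average over each interval from their deltas. This sketch assumes an established RTCPeerConnection pc; the helper name and the video-only filter are illustrative choices:

```js
// Sketch: sample the cumulative counters every intervalMs and log the
// average jitter buffer delay over each interval, so a rising trend
// stands out. Watches the inbound video stream as an example.
function monitorJitterBufferTrend(pc, intervalMs = 1000) {
  let lastDelay = 0;
  let lastCount = 0;

  return setInterval(async () => {
    const stats = await pc.getStats();
    stats.forEach((report) => {
      if (report.type === "inbound-rtp" && report.kind === "video") {
        const deltaDelay = report.jitterBufferDelay - lastDelay;
        const deltaCount = report.jitterBufferEmittedCount - lastCount;
        if (deltaCount > 0) {
          console.log(
            `interval average delay: ${(deltaDelay / deltaCount).toFixed(3)} s`,
          );
        }
        lastDelay = report.jitterBufferDelay;
        lastCount = report.jitterBufferEmittedCount;
      }
    });
  }, intervalMs);
}
```

The returned interval ID can be passed to clearInterval() to stop monitoring.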
Value
A positive number, in seconds.
Specifications
| Specification |
| --- |
| Identifiers for WebRTC's Statistics API: #dom-rtcinboundrtpstreamstats-jitterbufferdelay |