
RTCEncodedVideoFrame

Baseline 2023: Newly available

Note: This feature is available in Dedicated Web Workers.

The RTCEncodedVideoFrame interface of the WebRTC API represents an encoded video frame in the WebRTC receiver or sender pipelines, which may be modified using a WebRTC Encoded Transform.

Instance properties

RTCEncodedVideoFrame.type Read only

Returns whether the current frame is a key frame, delta frame, or empty frame.

RTCEncodedVideoFrame.timestamp Read only Deprecated Non-standard

Returns the timestamp at which sampling of the frame started.

RTCEncodedVideoFrame.data

Returns a buffer containing the encoded frame data.

Instance methods

RTCEncodedVideoFrame.getMetadata()

Returns the metadata associated with the frame.

Description

Raw video data is generated as a sequence of frames, where each frame is a two-dimensional array of pixel values. Video encoders transform this raw input into a compressed representation of the original for transmission and storage. A common approach is to send "key frames", which contain enough information to reproduce a whole image, at a relatively low rate, and between key frames to send many much smaller "delta frames" that encode only the changes since the previous frame.
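Because delta frames only encode changes, they are usually much smaller than key frames. The following sketch is a hypothetical helper that records the average size of each kind of frame; the frames themselves would be supplied by a transform like the one shown in the Examples section below:

js
// Hypothetical helper: record the average size of key and delta frames
// passing through an encoded transform, to illustrate the size difference.
const totals = { key: { bytes: 0, count: 0 }, delta: { bytes: 0, count: 0 } };

function recordFrameSize(encodedFrame) {
  const bucket = totals[encodedFrame.type];
  if (!bucket) return; // ignore "empty" frames
  bucket.bytes += encodedFrame.data.byteLength;
  bucket.count += 1;
  console.log(
    `${encodedFrame.type}: average ${Math.round(bucket.bytes / bucket.count)} bytes`,
  );
}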

There are many different codecs, such as H.264, VP8, and VP9, each with its own encoding process and configuration, offering different trade-offs between compression efficiency and video quality.

The RTCEncodedVideoFrame represents a single frame encoded with a particular video encoder. The type property indicates whether the frame is a "key" or "delta" frame, and you can use the getMetadata() method to get other details about the encoding method. The data property provides access to the encoded image data for the frame, which can then be modified ("transformed") when frames are sent or received.
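As a rough sketch of how these members fit together inside a transform() callback, a transform might inspect a frame before passing it on. The specific metadata fields available from getMetadata() vary by codec and implementation, so they are only logged generically here:

js
// Sketch: inspect a frame inside a transform() callback before re-enqueuing it.
// The exact fields returned by getMetadata() vary; treat them as illustrative.
function inspectFrame(encodedFrame, controller) {
  const metadata = encodedFrame.getMetadata();
  console.log(
    `${encodedFrame.type} frame, ${encodedFrame.data.byteLength} bytes`,
    metadata,
  );
  // Pass the (unmodified) frame back into the pipeline.
  controller.enqueue(encodedFrame);
}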

Examples

This code snippet shows a handler for the rtctransform event in a Worker that implements a TransformStream and pipes encoded frames through it, from event.transformer.readable to event.transformer.writable (event.transformer is an RTCRtpScriptTransformer, the worker-side counterpart of RTCRtpScriptTransform).

If the transformer is inserted into a video stream, the transform() method is called with an RTCEncodedVideoFrame whenever a new frame is enqueued on event.transformer.readable. The transform() method shows how this frame might be read, modified by inverting the bits, and then enqueued on the controller (this ultimately pipes it through to event.transformer.writable, and then back into the WebRTC pipeline).

js
addEventListener("rtctransform", (event) => {
  const transform = new TransformStream({
    async transform(encodedFrame, controller) {
      // Read the data of the incoming frame.
      const view = new DataView(encodedFrame.data);
      // Construct a new buffer of the same length.
      const newData = new ArrayBuffer(encodedFrame.data.byteLength);
      const newView = new DataView(newData);
      // Negate all bits in the incoming frame.
      for (let i = 0; i < encodedFrame.data.byteLength; ++i) {
        newView.setInt8(i, ~view.getInt8(i));
      }
      encodedFrame.data = newData;
      controller.enqueue(encodedFrame);
    },
  });
  event.transformer.readable
    .pipeThrough(transform)
    .pipeTo(event.transformer.writable);
});
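The worker only receives the rtctransform event once a transform has been attached from the main thread. A minimal sketch of that step (the worker file name, the "name" option, and the peerConnection variable are assumptions for illustration) creates the worker and assigns an RTCRtpScriptTransform to the sender for an outgoing video track:

js
// Main-thread sketch: the worker file name, the "name" option, and
// `peerConnection` are assumed for illustration.
const worker = new Worker("transform-worker.js");
const sender = peerConnection
  .getSenders()
  .find((s) => s.track?.kind === "video");
sender.transform = new RTCRtpScriptTransform(worker, { name: "invert-bits" });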

Note that more complete examples are provided in Using WebRTC Encoded Transforms.

Specifications

Specification: WebRTC Encoded Transform (#rtcencodedvideoframe)

Browser compatibility

See also


