Video processing with WebCodecs
Manipulating video stream components.
Modern web technologies provide ample ways to work with video. Media Stream API, Media Recording API, Media Source API, and WebRTC API add up to a rich tool set for recording, transferring, and playing video streams. While solving certain high-level tasks, these APIs don't let web programmers work with individual components of a video stream such as frames and unmuxed chunks of encoded video or audio. To get low-level access to these basic components, developers have been using WebAssembly to bring video and audio codecs into the browser. But given that modern browsers already ship with a variety of codecs (which are often accelerated by hardware), repackaging them as WebAssembly seems like a waste of human and computer resources.
WebCodecs API eliminates this inefficiency by giving programmers a way to use media components that are already present in the browser. Specifically:
- Video and audio decoders
- Video and audio encoders
- Raw video frames
- Image decoders
The WebCodecs API is useful for web applications that require full control over the way media content is processed, such as video editors, video conferencing, and video streaming.
Video processing workflow
Frames are the centerpiece in video processing. Thus in WebCodecs most classes either consume or produce frames. Video encoders convert frames into encoded chunks. Video decoders do the opposite.
Also, VideoFrame plays nicely with other Web APIs by being a CanvasImageSource and having a constructor that accepts CanvasImageSource. So it can be used in functions like drawImage() and texImage2D(). It can also be constructed from canvases, bitmaps, video elements, and other video frames.
WebCodecs API works well in tandem with the classes from the Insertable Streams API, which connect WebCodecs to media stream tracks.
- MediaStreamTrackProcessor breaks media tracks into individual frames.
- MediaStreamTrackGenerator creates a media track from a stream of frames.
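As a sketch of how these two classes connect, a TransformStream can sit between a processor's readable side and a generator's writable side. The counting transform below is an illustrative assumption, not part of the API; the browser wiring is shown in comments because it needs a secure context and a real media track.

```js
// makeFrameCounter() is a hypothetical helper: it passes every frame
// through untouched while counting how many went by.
function makeFrameCounter() {
  let count = 0;
  return {
    transform(frame, controller) {
      count++;                   // observe the frame...
      controller.enqueue(frame); // ...and forward it unchanged
    },
    get count() {
      return count;
    },
  };
}

// In a page (requires a secure context and an actual video track):
// const processor = new MediaStreamTrackProcessor(track);
// const generator = new MediaStreamTrackGenerator({ kind: "video" });
// processor.readable
//   .pipeThrough(new TransformStream(makeFrameCounter()))
//   .pipeTo(generator.writable);
```

Any per-frame processing (cropping, watermarking, analysis) would go inside the transform() callback in place of the counter.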
WebCodecs and web workers
By design WebCodecs API does all the heavy lifting asynchronously and off the main thread. But since frame and chunk callbacks can often be called multiple times a second, they might clutter the main thread and thus make the website less responsive. Therefore it is preferable to move handling of individual frames and encoded chunks into a web worker.
To help with that, ReadableStream provides a convenient way to automatically transfer all frames coming from a media track to the worker. For example, MediaStreamTrackProcessor can be used to obtain a ReadableStream for a media stream track coming from the web camera. After that the stream is transferred to a web worker where frames are read one by one and queued into a VideoEncoder.
With HTMLCanvasElement.transferControlToOffscreen even rendering can be done off the main thread. But if all the high-level tools turned out to be inconvenient, VideoFrame itself is transferable and may be moved between workers.
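As a sketch of this worker hand-off (the worker file name and message shape are assumptions), the main thread can transfer the processor's ReadableStream in the postMessage transfer list, and the worker can drain it with a small reading loop:

```js
// drainFrames() reads every value from a ReadableStream reader and hands
// it to a callback, returning how many values were read. It relies only
// on the reader interface, so the same loop works on the main thread or
// in a worker that received a transferred stream.
async function drainFrames(reader, onFrame) {
  let n = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) return n;
    onFrame(value);
    n++;
  }
}

// Main thread (sketch):
//   const processor = new MediaStreamTrackProcessor(track);
//   const worker = new Worker("worker.js");
//   worker.postMessage({ frames: processor.readable }, [processor.readable]);
//
// worker.js (sketch; assumes an already-configured VideoEncoder `encoder`):
//   self.onmessage = async ({ data }) => {
//     await drainFrames(data.frames.getReader(), (frame) => {
//       encoder.encode(frame);
//       frame.close();
//     });
//   };
```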
WebCodecs in action
Encoding

Figure: the path of frames from a Canvas or an ImageBitmap to the network or to storage.

It all starts with a VideoFrame. There are three ways to construct video frames.
From an image source like a canvas, an image bitmap, or a video element:

```js
const canvas = document.createElement("canvas");

// Draw something on the canvas...

const frameFromCanvas = new VideoFrame(canvas, { timestamp: 0 });
```

Use MediaStreamTrackProcessor to pull frames from a MediaStreamTrack:

```js
const stream = await navigator.mediaDevices.getUserMedia({…});
const track = stream.getTracks()[0];

const trackProcessor = new MediaStreamTrackProcessor(track);

const reader = trackProcessor.readable.getReader();
while (true) {
  const result = await reader.read();
  if (result.done) break;
  const frameFromCamera = result.value;
}
```

Create a frame from its binary pixel representation in a BufferSource:

```js
const pixelSize = 4;
const init = {
  timestamp: 0,
  codedWidth: 320,
  codedHeight: 200,
  format: "RGBA",
};
const data = new Uint8Array(init.codedWidth * init.codedHeight * pixelSize);
for (let x = 0; x < init.codedWidth; x++) {
  for (let y = 0; y < init.codedHeight; y++) {
    const offset = (y * init.codedWidth + x) * pixelSize;
    data[offset] = 0x7f;      // Red
    data[offset + 1] = 0xff;  // Green
    data[offset + 2] = 0xd4;  // Blue
    data[offset + 3] = 0xff;  // Alpha
  }
}
const frame = new VideoFrame(data, init);
```
No matter where they are coming from, frames can be encoded into EncodedVideoChunk objects with a VideoEncoder.
Before encoding, VideoEncoder needs to be given two JavaScript objects:
- Init dictionary with two functions for handling encoded chunks and errors. These functions are developer-defined and can't be changed after they're passed to the VideoEncoder constructor.
- Encoder configuration object, which contains parameters for the output video stream. You can change these parameters later by calling configure().
The configure() method will throw NotSupportedError if the config is not supported by the browser. You are encouraged to call the static method VideoEncoder.isConfigSupported() with the config to check beforehand whether the config is supported and wait for its promise.
```js
const init = {
  output: handleChunk,
  error: (e) => {
    console.log(e.message);
  },
};

const config = {
  codec: "vp8",
  width: 640,
  height: 480,
  bitrate: 2_000_000, // 2 Mbps
  framerate: 30,
};

const { supported } = await VideoEncoder.isConfigSupported(config);
if (supported) {
  const encoder = new VideoEncoder(init);
  encoder.configure(config);
} else {
  // Try another config.
}
```

After the encoder has been set up, it's ready to accept frames via the encode() method. Both configure() and encode() return immediately, without waiting for the actual work to complete. This allows several frames to queue for encoding at the same time, while encodeQueueSize shows how many requests are waiting in the queue for previous encodes to finish. Errors are reported either by immediately throwing an exception, in case the arguments or the order of method calls violates the API contract, or by calling the error() callback for problems encountered in the codec implementation. If encoding completes successfully, the output() callback is called with a new encoded chunk as an argument. Another important detail here is that frames need to be told when they are no longer needed by calling close().
```js
let frameCounter = 0;

const track = stream.getVideoTracks()[0];
const trackProcessor = new MediaStreamTrackProcessor(track);

const reader = trackProcessor.readable.getReader();
while (true) {
  const result = await reader.read();
  if (result.done) break;

  const frame = result.value;
  if (encoder.encodeQueueSize > 2) {
    // Too many frames in flight, encoder is overwhelmed,
    // let's drop this frame.
    frame.close();
  } else {
    frameCounter++;
    const keyFrame = frameCounter % 150 == 0;
    encoder.encode(frame, { keyFrame });
    frame.close();
  }
}
```

Finally it's time to finish encoding by writing a function that handles chunks of encoded video as they come out of the encoder. Usually this function would be sending data chunks over the network or muxing them into a media container for storage.
```js
function handleChunk(chunk, metadata) {
  if (metadata.decoderConfig) {
    // Decoder needs to be configured (or reconfigured) with new parameters
    // when metadata has a new decoderConfig.
    // Usually it happens in the beginning or when the encoder has a new
    // codec specific binary configuration (VideoDecoderConfig.description).
    fetch("/upload_extra_data", {
      method: "POST",
      headers: { "Content-Type": "application/octet-stream" },
      body: metadata.decoderConfig.description,
    });
  }

  // actual bytes of encoded data
  const chunkData = new Uint8Array(chunk.byteLength);
  chunk.copyTo(chunkData);

  fetch(`/upload_chunk?timestamp=${chunk.timestamp}&type=${chunk.type}`, {
    method: "POST",
    headers: { "Content-Type": "application/octet-stream" },
    body: chunkData,
  });
}
```

If at some point you need to make sure that all pending encoding requests have been completed, you can call flush() and wait for its promise.
```js
await encoder.flush();
```

Decoding

Figure: the path of frames from the network or storage to a Canvas or an ImageBitmap.

Setting up a VideoDecoder is similar to what's been done for the VideoEncoder: two functions are passed when the decoder is created, and codec parameters are given to configure().
The set of codec parameters varies from codec to codec. For example, the H.264 codec might need a binary blob of AVCC, unless it's encoded in the so-called Annex B format (encoderConfig.avc = { format: "annexb" }).
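Codec-specific parameters are easiest to see in an example. The following is a hedged sketch of what an H.264 decoder config might look like; the codec string, dimensions, and the placeholder bytes are assumptions. In a real application, `description` would hold the AVCC blob saved from the encoder's metadata.decoderConfig.description, and can be omitted when the stream is in Annex B format.

```js
// Illustrative placeholder; real AVCC extradata comes from the encoder's
// metadata.decoderConfig.description.
const avccDescription = new Uint8Array([/* AVCC extradata bytes */]);

const h264Config = {
  codec: "avc1.64001f", // H.264 High profile, level 3.1 (assumed example)
  codedWidth: 1280,
  codedHeight: 720,
  description: avccDescription,
};

// In a page: await VideoDecoder.isConfigSupported(h264Config);
```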
```js
const init = {
  output: handleFrame,
  error: (e) => {
    console.log(e.message);
  },
};

const config = {
  codec: "vp8",
  codedWidth: 640,
  codedHeight: 480,
};

const { supported } = await VideoDecoder.isConfigSupported(config);
if (supported) {
  const decoder = new VideoDecoder(init);
  decoder.configure(config);
} else {
  // Try another config.
}
```

Once the decoder is initialized, you can start feeding it with EncodedVideoChunk objects. To create a chunk, you'll need:
- A BufferSource of encoded video data
- The chunk's start timestamp in microseconds (media time of the first encoded frame in the chunk)
- The chunk's type, one of:
  - key if the chunk can be decoded independently from previous chunks
  - delta if the chunk can only be decoded after one or more previous chunks have been decoded
Also, any chunks emitted by the encoder are ready for the decoder as is. All of the things said above about error reporting and the asynchronous nature of the encoder's methods are equally true for decoders as well.
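As an illustration of the key/delta distinction, a fixed key-frame cadence can be expressed with a small hypothetical helper (not part of the API; the 150-frame interval echoes the cadence used in the encoding example above):

```js
// Hypothetical helper: classify a frame's chunk type given a key-frame
// interval. Frame 0, and every keyFrameInterval-th frame after it, is a
// key frame; every other frame depends on its predecessors and is delta.
function chunkType(frameIndex, keyFrameInterval = 150) {
  return frameIndex % keyFrameInterval === 0 ? "key" : "delta";
}
```

A muxer or server could use a function like this to decide which stored chunks a decoder must receive before it can start decoding at a given position.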
```js
const responses = await downloadVideoChunksFromServer(timestamp);
for (let i = 0; i < responses.length; i++) {
  const chunk = new EncodedVideoChunk({
    timestamp: responses[i].timestamp,
    type: responses[i].key ? "key" : "delta",
    data: new Uint8Array(responses[i].body),
  });
  decoder.decode(chunk);
}
await decoder.flush();
```

Now it's time to show how a freshly decoded frame can be shown on the page. It's better to make sure that the decoder output callback (handleFrame()) quickly returns. In the example below, it only adds a frame to the queue of frames ready for rendering. Rendering happens separately, and consists of two steps:
- Waiting for the right time to show the frame.
- Drawing the frame on the canvas.
Once a frame is no longer needed, call close() to release underlying memory before the garbage collector gets to it; this will reduce the average amount of memory used by the web application.
```js
const canvas = document.getElementById("canvas");
const ctx = canvas.getContext("2d");
let pendingFrames = [];
let underflow = true;
let baseTime = 0;

function handleFrame(frame) {
  pendingFrames.push(frame);
  if (underflow) setTimeout(renderFrame, 0);
}

function calculateTimeUntilNextFrame(timestamp) {
  if (baseTime == 0) baseTime = performance.now();
  let mediaTime = performance.now() - baseTime;
  return Math.max(0, timestamp / 1000 - mediaTime);
}

async function renderFrame() {
  underflow = pendingFrames.length == 0;
  if (underflow) return;

  const frame = pendingFrames.shift();

  // Based on the frame's timestamp calculate how much of real time waiting
  // is needed before showing the next frame.
  const timeUntilNextFrame = calculateTimeUntilNextFrame(frame.timestamp);
  await new Promise((r) => {
    setTimeout(r, timeUntilNextFrame);
  });
  ctx.drawImage(frame, 0, 0);
  frame.close();

  // Immediately schedule rendering of the next frame
  setTimeout(renderFrame, 0);
}
```

Dev Tips
Use the Media Panel in Chrome DevTools to view media logs and debug WebCodecs.

Demo
The demo shows how animation frames from a canvas are:
- captured at 25fps into a ReadableStream by MediaStreamTrackProcessor
- transferred to a web worker
- encoded into H.264 video format
- decoded again into a sequence of video frames
- and rendered on the second canvas using transferControlToOffscreen()
Other demos
Also check out our other demos:
Using the WebCodecs API
Feature detection
To check for WebCodecs support:
```js
if ('VideoEncoder' in window) {
  // WebCodecs API is supported.
}
```

Keep in mind that the WebCodecs API is only available in secure contexts, so detection will fail if self.isSecureContext is false.
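A slightly more complete probe can check each interface and the secure-context requirement together. This is a sketch; the helper name and the returned property names are assumptions. Passing the global object in as a parameter keeps the helper testable; in a page you would call detectWebCodecs(self).

```js
// Report which WebCodecs interfaces a given global scope exposes, along
// with whether the scope is a secure context (a hard requirement for the
// API to be available at all).
function detectWebCodecs(scope) {
  return {
    encoder: "VideoEncoder" in scope,
    decoder: "VideoDecoder" in scope,
    imageDecoder: "ImageDecoder" in scope,
    // WebCodecs is only exposed in secure contexts (https:// or localhost).
    secureContext: scope.isSecureContext === true,
  };
}

// In a page: const caps = detectWebCodecs(self);
```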
Feedback
The Chrome team wants to hear about your experiences with the WebCodecs API.
Tell us about the API design
Is there something about the API that doesn't work like you expected? Or are there missing methods or properties that you need to implement your idea? Have a question or comment on the security model? File a spec issue on the corresponding GitHub repo, or add your thoughts to an existing issue.
Report a problem with the implementation
Did you find a bug with Chrome's implementation? Or is the implementation different from the spec? File a bug at new.crbug.com. Be sure to include as much detail as you can, simple instructions for reproducing, and enter Blink>Media>WebCodecs in the Components box.
Show support for the API
Are you planning to use the WebCodecs API? Your public support helps the Chrome team to prioritize features and shows other browser vendors how critical it is to support them.
Send emails to media-dev@chromium.org or send a tweet to @ChromiumDev using the hashtag #WebCodecs and let us know where and how you're using it.
Last updated 2020-10-13 UTC.

