Perform efficient per-video-frame operations on video with requestVideoFrameCallback()
Learn how to use requestVideoFrameCallback() to work more efficiently with videos in the browser.
There's a new Web API on the block, defined in the HTMLVideoElement.requestVideoFrameCallback() specification. The requestVideoFrameCallback() method allows web authors to register a callback that runs in the rendering steps when a new video frame is sent to the compositor. This is intended to allow developers to perform efficient per-video-frame operations on video, such as video processing and painting to a canvas, video analysis, or synchronization with external audio sources.
Difference from requestAnimationFrame() #
Operations like drawing a video frame to a canvas via drawImage() made through this API are synchronized as a best effort with the frame rate of the video playing on screen. Unlike window.requestAnimationFrame(), which usually fires about 60 times per second, requestVideoFrameCallback() is bound to the actual video frame rate, with one important exception:
The effective rate at which callbacks are run is the lesser rate between the video's rate and the browser's rate. This means a 25fps video playing in a browser that paints at 60Hz would fire callbacks at 25Hz. A 120fps video in that same 60Hz browser would fire callbacks at 60Hz.
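To see this in practice, here is a minimal sketch that measures the effective callback rate; it assumes a playing HTMLVideoElement named video and is illustrative rather than part of the API.
let callbackCount = 0;
let firstTimestamp = 0;

const measureRate = (now, metadata) => {
  if (firstTimestamp === 0) {
    firstTimestamp = now;
  }
  callbackCount++;
  const elapsedSeconds = (now - firstTimestamp) / 1000;
  if (elapsedSeconds >= 1) {
    // For a 25fps video in a 60Hz browser, this logs roughly 25.
    console.log(`Effective rate: ${(callbackCount / elapsedSeconds).toFixed(1)}Hz`);
  }
  video.requestVideoFrameCallback(measureRate);
};
video.requestVideoFrameCallback(measureRate);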
What's in a name? #
Due to its similarity to window.requestAnimationFrame(), the method was initially proposed as video.requestAnimationFrame(), but I'm happy with the new name, requestVideoFrameCallback(), which was agreed on after a lengthy discussion. Yay, bikeshedding for the win!
Feature detection #
if ('requestVideoFrameCallback' in HTMLVideoElement.prototype) {
// The API is supported!
}
Browser support #
- Chrome: supported since 83
- Firefox: not supported
- Edge: supported since 83
- Safari: supported since 15.4
Polyfill #
A polyfill for the requestVideoFrameCallback() method based on Window.requestAnimationFrame() and HTMLVideoElement.getVideoPlaybackQuality() is available. Before using it, be aware of the limitations mentioned in the README.
Using the requestVideoFrameCallback() method #
If you have ever used the requestAnimationFrame() method, you will immediately feel at home with requestVideoFrameCallback(). You register an initial callback once, and then re-register whenever the callback fires.
const doSomethingWithTheFrame = (now, metadata) => {
// Do something with the frame.
console.log(now, metadata);
// Re-register the callback to be notified about the next frame.
video.requestVideoFrameCallback(doSomethingWithTheFrame);
};
// Initially register the callback to be notified about the first frame.
video.requestVideoFrameCallback(doSomethingWithTheFrame);
In the callback, now is a DOMHighResTimeStamp and metadata is a VideoFrameMetadata dictionary with the following properties:
- presentationTime, of type DOMHighResTimeStamp: The time at which the user agent submitted the frame for composition.
- expectedDisplayTime, of type DOMHighResTimeStamp: The time at which the user agent expects the frame to be visible.
- width, of type unsigned long: The width of the video frame, in media pixels.
- height, of type unsigned long: The height of the video frame, in media pixels.
- mediaTime, of type double: The media presentation timestamp (PTS) in seconds of the frame presented (e.g., its timestamp on the video.currentTime timeline).
- presentedFrames, of type unsigned long: A count of the number of frames submitted for composition. Allows clients to determine if frames were missed between instances of VideoFrameRequestCallback; see the sketch after this list.
- processingDuration, of type double: The elapsed duration in seconds from submission of the encoded packet with the same presentation timestamp (PTS) as this frame (i.e., the same as the mediaTime) to the decoder until the decoded frame was ready for presentation.
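As a sketch of how presentedFrames can be used to detect missed frames, the following callback (variable names are illustrative) flags gaps between consecutive invocations:
let lastPresentedFrames = 0;

const detectMissedFrames = (now, metadata) => {
  // Consecutive callbacks should see presentedFrames grow by exactly 1;
  // a larger jump means frames were composited without a callback.
  if (lastPresentedFrames !== 0 &&
      metadata.presentedFrames - lastPresentedFrames > 1) {
    const missed = metadata.presentedFrames - lastPresentedFrames - 1;
    console.warn(`Missed ${missed} frame(s).`);
  }
  lastPresentedFrames = metadata.presentedFrames;
  video.requestVideoFrameCallback(detectMissedFrames);
};
video.requestVideoFrameCallback(detectMissedFrames);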
For WebRTC applications, additional properties may appear:
- captureTime, of type DOMHighResTimeStamp: For video frames coming from either a local or remote source, this is the time at which the frame was captured by the camera. For a remote source, the capture time is estimated using clock synchronization and RTCP sender reports to convert RTP timestamps to capture time; see the latency sketch after this list.
- receiveTime, of type DOMHighResTimeStamp: For video frames coming from a remote source, this is the time the encoded frame was received by the platform, i.e., the time at which the last packet belonging to this frame was received over the network.
- rtpTimestamp, of type unsigned long: The RTP timestamp associated with this video frame.
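As one illustrative use of these properties (an assumption on my part, not something the spec prescribes), captureTime and expectedDisplayTime can be combined to estimate the capture-to-display latency of a remote stream:
// Minimal sketch: estimate capture-to-display latency for a remote
// WebRTC stream attached to `video`. captureTime only appears for
// WebRTC sources, so check for it before using it.
const logLatency = (now, metadata) => {
  if (metadata.captureTime !== undefined) {
    const latencyMs = metadata.expectedDisplayTime - metadata.captureTime;
    console.log(`Estimated latency: ${latencyMs.toFixed(1)}ms`);
  }
  video.requestVideoFrameCallback(logLatency);
};
video.requestVideoFrameCallback(logLatency);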
Of special interest in this list is mediaTime. In Chromium's implementation, we use the audio clock as the time source that backs video.currentTime, whereas the mediaTime is directly populated by the presentationTimestamp of the frame. The mediaTime is what you should use if you want to identify frames exactly and reproducibly, including to determine which frames you missed.
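For example, if the video's frame rate is known ahead of time (the constant below is an assumption, not something the API reports), a reproducible frame number can be derived from mediaTime:
// Minimal sketch: derive a stable frame number from mediaTime.
// Assumes the video has a known, constant frame rate.
const assumedFps = 25;

const identifyFrame = (now, metadata) => {
  // mediaTime is the frame's presentation timestamp in seconds, so
  // rounding mediaTime * fps yields the same index for the same frame.
  const frameNumber = Math.round(metadata.mediaTime * assumedFps);
  console.log(`Frame #${frameNumber}`);
  video.requestVideoFrameCallback(identifyFrame);
};
video.requestVideoFrameCallback(identifyFrame);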
If things seem one frame off… #
Vertical synchronization (or just vsync) is a graphics technology that synchronizes the frame rate of a video with the refresh rate of a monitor. Since requestVideoFrameCallback() runs on the main thread, while video compositing happens on the compositor thread under the hood, everything from this API is a best effort, and we do not offer any strict guarantees. What may be happening is that the API is one vsync late relative to when a video frame is rendered. It takes one vsync for changes made to the web page through the API to appear on screen (the same as with window.requestAnimationFrame()). So if you keep updating the mediaTime or frame number on your web page and compare that against the numbered video frames, eventually the video will look like it is one frame ahead.
What is really happening is that the frame is ready at vsync x, the callback fires and the frame is rendered at vsync x+1, and changes made in the callback are rendered at vsync x+2. You can check whether the callback is a vsync late (and the frame is already rendered on screen) by checking whether metadata.expectedDisplayTime is roughly now or one vsync in the future. If it is within about five to ten microseconds of now, the frame is already rendered; if the expectedDisplayTime is approximately sixteen milliseconds in the future (assuming your browser/screen refreshes at 60Hz), then you are in sync with the frame.
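A sketch of this check could look as follows; the thresholds are illustrative and assume a 60Hz display:
// Minimal sketch: check whether the callback fired one vsync late.
// Both timestamps are in milliseconds, so ten microseconds is 0.01.
const checkVsync = (now, metadata) => {
  if (metadata.expectedDisplayTime - now < 0.01) {
    // expectedDisplayTime is roughly now: the frame is already
    // rendered, i.e., the callback is one vsync late.
    console.log('One vsync late; frame already on screen.');
  } else {
    // expectedDisplayTime is about one vsync (~16ms at 60Hz) out:
    // we are in sync with the frame.
    console.log('In sync with the frame.');
  }
  video.requestVideoFrameCallback(checkVsync);
};
video.requestVideoFrameCallback(checkVsync);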
Demo #
I have created a small demo on Glitch that shows how frames are drawn on a canvas at exactly the frame rate of the video and where the frame metadata is logged for debugging purposes. The core logic is just a couple of lines of JavaScript.
let paintCount = 0;
let startTime = 0.0;

const updateCanvas = (now, metadata) => {
  if (startTime === 0.0) {
    startTime = now;
  }
  // Draw the current video frame onto the canvas.
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  // Compute the running average frame rate since the first callback.
  const elapsed = (now - startTime) / 1000.0;
  const fps = (++paintCount / elapsed).toFixed(3);
  fpsInfo.innerText = `video fps: ${fps}`;
  // Show the raw frame metadata for debugging purposes.
  metadataInfo.innerText = JSON.stringify(metadata, null, 2);
  // Re-register the callback to be notified about the next frame.
  video.requestVideoFrameCallback(updateCanvas);
};
video.requestVideoFrameCallback(updateCanvas);
Conclusions #
I have done frame-level processing for a long time, without having access to the actual frames, based only on video.currentTime. I implemented video shot segmentation in JavaScript in a rough-and-ready manner; you can still read the accompanying research paper. Had requestVideoFrameCallback() existed back then, my life would have been much simpler…
Acknowledgements #
The requestVideoFrameCallback API was specified and implemented by Thomas Guilbert. This article was reviewed by Joe Medley and Kayce Basques. Hero image by Denise Jans on Unsplash.