WebXR Layers
WebXR Layers extend the Timewarp functionality provided by the Meta Horizon platform. In web experiences, these layers:
- Increase performance
- Provide higher-quality rendering of imagery and text
- Make it easier to display immersive video
There are several samples that demonstrate the benefits of using WebXR Layers. You can find them in the WebXR Samples on GitHub.
WebXR usually requires you to render at the device refresh rate. With Layers, you only need to submit rendered content when the layer updates.
For instance, if you have a skybox that is static, you only render it once and then the OS will take care of the rest. This leaves more headroom to render the dynamic parts of your experience.
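For example, a static layer only needs to be repainted when the compositor signals that its contents must be resubmitted. The following is a minimal sketch of that pattern; it assumes gl, session, refSpace, a framebuffer fb, and a drawStaticContent() helper from your own setup:
const glBinding = new XRWebGLBinding(session, gl);
const staticLayer = glBinding.createQuadLayer({
  space: refSpace,
  viewPixelWidth: 1024,
  viewPixelHeight: 1024
});

function onXRFrame(time, frame) {
  session.requestAnimationFrame(onXRFrame);
  // needsRedraw is true only when the layer's contents must be (re)submitted,
  // e.g. right after creation or if the underlying texture was lost. Otherwise
  // the compositor keeps redisplaying the last submitted image with Timewarp.
  if (staticLayer.needsRedraw) {
    const subImage = glBinding.getSubImage(staticLayer, frame);
    gl.bindFramebuffer(gl.FRAMEBUFFER, fb); // fb: an assumed framebuffer object
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, subImage.colorTexture, 0);
    drawStaticContent(); // hypothetical helper that paints the static imagery
  }
  // ...render the dynamic parts of the scene every frame as usual...
}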
Because WebXR Layers allow you to render directly to the final buffer, you can avoid double sampling and distortions.
Higher Quality Images
[Screenshots: the same scene rendered in regular WebXR and in a WebXR equirect layer]
With WebXR Media Layers, it becomes much easier to display a video without using third-party frameworks. To set up an immersive session with a media layer, you can use something similar to the following:
… // Create an immersive session with layers support.
let xrMediaFactory = new XRMediaBinding(session);
let video = document.createElement('video');
video.src = '...'; // URL to your video
let layer = xrMediaFactory.createCylinderLayer(video, {space: refSpace, layout: "stereo-top-bottom"});
session.updateRenderState({ layers: [ layer ] });
The browser will take care of sizing the layer and will draw it in the most efficient way possible, giving you the best quality video with low system overhead. Note that any video will work, including cross-origin and streaming video.
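The layer does not start playback for you, however; the video is still controlled through the element itself, so call video.play() (typically from a user gesture, since browser autoplay policies apply) once the session is running.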
The WebXR Video Sample on GitHub demonstrates how easy it is to play high-quality video.
Emulation for other browsers
Not all browsers support WebXR Layers. To support development on a range of browsers and devices, the WebXR Layers polyfill hosted on the Immersive Web GitHub emulates the native layers implementation. The documentation on the GitHub page describes how to integrate it into your experience.
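As a minimal sketch (assuming the package name webxr-layers-polyfill and its default export match the repository README), you instantiate the polyfill once, before requesting an XR session:
import WebXRLayersPolyfill from 'webxr-layers-polyfill';

// Create the polyfill before any session is requested. On browsers with
// native WebXR Layers support it leaves the built-in implementation alone.
const layersPolyfill = new WebXRLayersPolyfill();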
WebXR Layers - Technical Overview
In a traditional WebXR experience, JavaScript renders the entire scene to a framebuffer (also called the “eye buffer”) at the refresh rate of the headset. This framebuffer is then sent to the system compositor, which displays it to the user with Timewarp. Timewarp adjusts for minor offsets in head position that happened while the browser was rendering the scene.
In practice, the browser provides the JavaScript code a head position predicted for the moment the frame is expected to be displayed, and the scene is rendered from that pose. If the user’s head moves unexpectedly, the actual head position at display time can differ significantly from the prediction. Timewarp adjusts for this discrepancy, providing higher-quality, more comfortable VR. Timewarp also helps if the browser can’t keep up with rendering, by synthesizing new frames. This smooths the experience, but it is generally not desired because the user can still observe that the scene is not updating smoothly.
In a scene with a skybox, some video, and your interactive content, WebXR will render every single pixel with WebGL for each frame. Each pixel of the video and the skybox is sampled by the browser to create the eye buffer. The eye buffer is then sent to the compositor, which Timewarps it and adjusts to compensate for lens distortion. As a result, the video and skybox pixels are sampled twice, which shows up as slight blurriness, particularly when the video or skybox content is high resolution.
With WebXR Layers, you draw your skybox or video to a texture which is handled separately from the eye buffer. This texture is then directly processed by the compositor.
For our previous example, you would upload a cubemap once at the start of your scene and from then on, the compositor will take care of drawing the skybox. This increases the quality of the skybox and leaves the browser more headroom to draw the interactive foreground content. Likewise for video, you can associate it with a media layer and a video element. The system will take care of rendering it at the appropriate refresh rate.
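A rough sketch of that skybox flow (assuming session, gl, refSpace, a projectionLayer for your dynamic content, and six preloaded faceImages from your own code):
const glBinding = new XRWebGLBinding(session, gl);
const skyboxLayer = glBinding.createCubeLayer({
  space: refSpace,
  viewPixelWidth: 2048,
  viewPixelHeight: 2048
});
// Layers are composited in order, so the skybox goes behind the projection layer.
session.updateRenderState({ layers: [ skyboxLayer, projectionLayer ] });

function maybeUploadSkybox(frame) {
  if (!skyboxLayer.needsRedraw) return; // already uploaded; the compositor redraws it
  const subImage = glBinding.getSubImage(skyboxLayer, frame);
  gl.bindTexture(gl.TEXTURE_CUBE_MAP, subImage.colorTexture);
  for (let i = 0; i < 6; i++) {
    // Upload each cubemap face; faceImages is assumed to hold six equally
    // sized images in +X, -X, +Y, -Y, +Z, -Z order.
    gl.texSubImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, 0, 0,
                     gl.RGBA, gl.UNSIGNED_BYTE, faceImages[i]);
  }
}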
Using Layers, the system compositor will draw your eye buffer with Timewarp as before, but it will also directly render and Timewarp your skybox and video. Because the compositor can sample directly from the skybox and video, quality is preserved since each pixel only needs to be sampled once, saving GPU time. Layers are built on top of Meta Quest Timewarp layers, and the API mirrors that of OpenXR Composition Layers.
See the Immersive Web Layers Explainer on GitHub if you want a more in-depth explanation of the WebXR Layers API.
Unlike WebXR Media Layers, regular WebXR Layers are drawn with WebGL. Those layers offer a high degree of customization and control, but require a lot of setup, knowledge of shaders, and other complex logic.
To display a video, it is often easier to create a media layer with the size and position you want and associate it with a standard <video> element. The layer will then display the contents of the video, and playback is controlled by calling the usual methods on the video element, as in the sketch below.
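For instance (a sketch; session and refSpace come from your own setup, and the transform, width, and height fields are the init options defined by the layers spec):
const mediaBinding = new XRMediaBinding(session);
const video = document.querySelector('video'); // a standard <video> element
const videoLayer = mediaBinding.createQuadLayer(video, {
  space: refSpace,
  // Place the quad 2 meters in front of the reference space origin, at eye height.
  transform: new XRRigidTransform({ x: 0, y: 1.6, z: -2 }),
  width: 1 // in meters; when height is omitted it follows the video’s aspect ratio
});
session.updateRenderState({ layers: [ videoLayer ] });
video.play(); // playback is driven through the usual element methods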
For a complete example, see the WebXR Media Layer Sample on GitHub.
To see the difference in image quality, open the Cube Layer Sample on GitHub and enter immersive mode; you will see an immersive 360 photo. Initially, the photo is displayed with regular WebXR, but if you pull the trigger on the controller, it switches to using Layers. Note how much sharper the image appears when Layers are used.
The Higher Quality Images section above contains screenshots of the differences demonstrated by the sample.
The version that uses Layers is sharper and has less distortion, especially at the top and bottom of the scene. This is because sampling from the image is only done once. Additionally, the compositor can do a better job reprojecting the image because it knows it’s an equirect.
You can measure the improvement in GPU usage with ovrgpuprofiler.
First, put the profiler into detailed profiling mode by running ‘ovrgpuprofiler -e’. (You may need to restart the browser after enabling this mode, which you can do with ‘adb shell am force-stop com.oculus.browser’.) You can then capture a trace with ‘ovrgpuprofiler -t’.
This is what the tool reports with the -t option for the regular WebXR experience:
... | 3.14 ms | 75 stages : Binning : 0.085ms Render : 1.623ms StoreColor : 0.362ms Blit : 0.002ms Preempt : 0.819ms
... | 3.15 ms | 75 stages : Binning : 0.084ms Render : 1.618ms StoreColor : 0.363ms Blit : 0.003ms Preempt : 0.819ms
... | 3.21 ms | 75 stages : Binning : 0.085ms Render : 1.663ms StoreColor : 0.364ms Blit : 0.003ms Preempt : 0.837ms
Each frame is taking around 3.15 milliseconds to render.
This is what the tool reports with the -t option for the WebXR Layers experience:
... | 0.72 ms | 74 stages : Binning : 0.072ms Render : 0.266ms StoreColor : 0.215ms Blit : 0.002ms
... | 0.72 ms | 74 stages : Binning : 0.072ms Render : 0.263ms StoreColor : 0.219ms Blit : 0.003ms
... | 0.72 ms | 74 stages : Binning : 0.073ms Render : 0.265ms StoreColor : 0.221ms Blit : 0.003ms
- “Render” is significantly faster because only the controllers are drawn.
- “StoreColor” is also faster because much of the scene is empty, so those tiles don’t need to be copied.
- There is no “Preempt” stage. The scene renders so quickly that it was never interrupted by a system compositor event.
The ovrgpuprofiler tool supports many other metrics, which are explained in the ovrgpuprofiler guide.
Overall GPU usage between the two modes:
- Regular WebXR: GPU % Bus Busy : 50.210
- WebXR Layers: GPU % Bus Busy : 23.904
So, the same experience now runs at higher quality and at less than half the GPU usage.