Depth API sample
Updated: May 11, 2026
This sample demonstrates how to use the Depth API to create realistic occlusion of virtual objects behind physical surfaces in mixed reality applications. You explore four techniques: hard and soft occlusion modes, depth bias adjustment to prevent z-fighting, hand removal for high-fidelity tracking, and selective geometry masking.
- Configure and toggle between hard occlusion (jagged edges) and soft occlusion (smooth edges) using EnvironmentDepthManager
- Prevent z-fighting artifacts when placing virtual content on walls using _EnvironmentDepthBias
- Replace low-resolution depth sensor hands with tracked hand meshes for sharper occlusion boundaries
- Selectively exclude scene geometry (like walls) from occlusion using mesh masking
- Integrate depth occlusion macros into custom shaders for both Built-in Render Pipeline and Universal Render Pipeline
- A Meta Quest 3 or Quest 3S headset
- A Unity development environment configured for Meta Quest development
For Unity version requirements, SDK dependencies, and build configuration, see the sample’s README. For initial environment setup, see the Unity Set Up guide.
Clone the repository from https://github.com/oculus-samples/Unity-DepthAPI. The repo contains two parallel projects: DepthAPI-BiRP for the Built-in Render Pipeline and DepthAPI-URP for the Universal Render Pipeline. Both projects contain the same four scenes and sample features; they differ only in shaders and materials.

Open the project matching your render pipeline in Unity: choose BiRP if your project uses Unity's default renderer, or URP if you have configured the Universal Render Pipeline. All four sample scenes are located in Assets/DepthAPISample/Scenes/.

Build and deploy to your Quest 3 or Quest 3S using Unity's standard build workflow (File > Build Settings > Build and Run). See the sample's README for additional configuration and troubleshooting.
The sample includes four scenes, each demonstrating a different aspect of the Depth API.
| Scene | Key scripts | Demonstrates |
|---|---|---|
| OcclusionToggler | OcclusionToggler.cs, SceneSwitcher.cs | Toggling between no occlusion, hard occlusion, and soft occlusion modes using EnvironmentDepthManager.OcclusionShadersMode |
| SceneAPIPlacement | PosterPlacer.cs, Poster.cs, OcclusionDepthBias.cs | Preventing z-fighting when placing virtual objects flush with physical walls by adjusting _EnvironmentDepthBias at runtime |
| HandsRemoval | HandRemovalToggler.cs, HandsStyleUiListener.cs | Replacing low-resolution depth map hands with tracked hand meshes via EnvironmentDepthManager.RemoveHands |
| DepthMask | SceneMeshDepthMask.cs | Excluding specific scene geometry from occlusion using EnvironmentDepthManager.MaskMeshFilters and MRUK wall detection |
In OcclusionToggler, virtual objects (a mug, guitar, and torch) appear in your physical space. A button cycles through three occlusion modes: none (objects always visible), hard occlusion (jagged edges where objects pass behind surfaces), and soft occlusion (smooth, feathered edges). A text label displays the current mode.
In SceneAPIPlacement, a ray extends from your controller and a ghost poster appears when aimed at a wall. Pull the index trigger to place the poster. Without depth bias, the poster flickers due to z-fighting at the wall boundary. Use the thumbstick to adjust bias (default 0.06) until flickering resolves.
In HandsRemoval, hand tracking must be enabled. Toggle the removal feature off to see your hands occlude with the rough resolution of the depth sensor. Toggle on to replace the depth map hands with tracked hand meshes, providing sharp occlusion boundaries.
In DepthMask, press the B button to toggle masking. When enabled, walls detected by MRUK no longer contribute to occlusion — virtual objects behind walls remain visible. Use the thumbstick to adjust MaskBias.
The sample cycles occlusion modes by setting the OcclusionShadersMode property on EnvironmentDepthManager. Hard occlusion uses binary visibility (fully visible or fully occluded), while soft occlusion blends edges based on depth confidence. The mode setting applies globally to all materials using occlusion shaders. See OcclusionToggler.cs for the implementation.
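A minimal sketch of the mode-cycling pattern, assuming a scene with an `EnvironmentDepthManager` component and an A-button binding (the class name and input mapping here are illustrative, not taken from OcclusionToggler.cs):

```csharp
using Meta.XR.EnvironmentDepth;
using UnityEngine;

// Cycles None -> HardOcclusion -> SoftOcclusion on the A button.
public class OcclusionModeCycler : MonoBehaviour
{
    private EnvironmentDepthManager _depthManager;

    private void Awake()
    {
        _depthManager = FindAnyObjectByType<EnvironmentDepthManager>();
    }

    private void Update()
    {
        if (OVRInput.GetDown(OVRInput.Button.One))
        {
            // Setting the property updates the global shader keywords
            // (HARD_OCCLUSION / SOFT_OCCLUSION) for all occlusion materials.
            _depthManager.OcclusionShadersMode = _depthManager.OcclusionShadersMode switch
            {
                OcclusionShadersMode.None => OcclusionShadersMode.HardOcclusion,
                OcclusionShadersMode.HardOcclusion => OcclusionShadersMode.SoftOcclusion,
                _ => OcclusionShadersMode.None,
            };
        }
    }
}
```

Before enabling any mode, production code should also check the static `EnvironmentDepthManager.IsSupported` flag, since depth is only available on supported headsets.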
Note: The mode is controlled via shader keywords HARD_OCCLUSION and SOFT_OCCLUSION defined in the occlusion shaders.
Integrating occlusion into custom shaders
The sample provides annotated shader templates showing a three-step integration pattern. Declare world position output with META_DEPTH_VERTEX_OUTPUT(n), store it in the vertex shader using META_DEPTH_INITIALIZE_VERTEX_OUTPUT(output, vertex), and apply occlusion in the fragment shader with META_DEPTH_OCCLUDE_OUTPUT_PREMULTIPLY(input, color, bias). These macros are defined in the Meta XR Core SDK and work with both BiRP and URP. Examine ExampleUnlitShader.shader (BiRP) or DepthUnlit.shader (URP) for step-by-step implementation comments.
Depth bias to prevent z-fighting
When virtual content sits flush with a physical surface, depth buffer precision limits cause z-fighting (flickering pixels). The _EnvironmentDepthBias shader property offsets the virtual object slightly forward to resolve this. The OcclusionDepthBias utility class provides SetDepthBias(float) and AdjustDepthBias(float) methods that modify this property on assigned materials. The sample uses a default bias of 0.06 meters. See Poster.cs and OcclusionDepthBias.cs for the runtime adjustment pattern.
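A minimal sketch of runtime bias adjustment under the same pattern. The `_EnvironmentDepthBias` property name and the 0.06 default come from the sample; the class, field names, and thumbstick scaling below are illustrative:

```csharp
using UnityEngine;

// Adjusts the occlusion depth bias on a set of renderers at runtime.
public class DepthBiasAdjuster : MonoBehaviour
{
    private static readonly int DepthBiasId = Shader.PropertyToID("_EnvironmentDepthBias");

    [SerializeField] private Renderer[] _renderers;
    [SerializeField] private float _bias = 0.06f;   // sample default, in meters

    private void Update()
    {
        // Nudge the bias with the right thumbstick, as in SceneAPIPlacement.
        float delta = OVRInput.Get(OVRInput.Axis2D.SecondaryThumbstick).y * 0.1f * Time.deltaTime;
        if (!Mathf.Approximately(delta, 0f))
        {
            _bias = Mathf.Max(0f, _bias + delta);
            foreach (var r in _renderers)
            {
                r.material.SetFloat(DepthBiasId, _bias);
            }
        }
    }
}
```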
Hand removal for high-fidelity occlusion
The depth sensor captures hands at lower resolution than tracked hand meshes. Setting EnvironmentDepthManager.RemoveHands = true removes hands from the depth map, allowing the application to render tracked OVRHand meshes instead. This produces sharper occlusion boundaries at fingertips and between fingers. The feature requires hand tracking to be enabled. See HandRemovalToggler.cs for the toggle implementation.
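The toggle itself is a one-line property write; a hedged sketch (class and method names are illustrative, not taken from HandRemovalToggler.cs):

```csharp
using Meta.XR.EnvironmentDepth;
using UnityEngine;

// Switches between depth-sensor hands and tracked hand meshes for occlusion.
public class HandRemovalToggle : MonoBehaviour
{
    [SerializeField] private EnvironmentDepthManager _depthManager;

    public void SetHandRemoval(bool removeHands)
    {
        // When true, hands are cut out of the depth map so tracked
        // OVRHand meshes can provide the occlusion silhouette instead.
        _depthManager.RemoveHands = removeHands;
    }
}
```

Remember that the visible hand meshes must still be rendered by the application; removing hands from the depth map without rendering tracked meshes leaves no hand occlusion at all.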
Selective geometry masking
Some mixed reality scenarios require virtual content to ignore occlusion from specific surfaces (for example, UI panels that should remain visible even when behind walls). The sample demonstrates this by collecting wall MeshFilter components from MRUK and assigning them to EnvironmentDepthManager.MaskMeshFilters. Masked geometry no longer contributes to the depth map. The MaskBias property fine-tunes the masking boundary. See SceneMeshDepthMask.cs for MRUK integration.
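A minimal sketch of the wall-masking flow, assuming MRUK has finished loading the room (for example, after `MRUK.Instance.SceneLoadedEvent` fires). The class name and the `MaskBias` starting value are illustrative:

```csharp
using System.Collections.Generic;
using System.Linq;
using Meta.XR.EnvironmentDepth;
using Meta.XR.MRUtilityKit;
using UnityEngine;

// Excludes MRUK-detected walls from depth occlusion.
public class WallOcclusionMask : MonoBehaviour
{
    [SerializeField] private EnvironmentDepthManager _depthManager;

    public void MaskWalls()
    {
        MRUKRoom room = MRUK.Instance.GetCurrentRoom();

        // Collect the MeshFilters of every wall anchor; masked geometry
        // stops contributing to the depth map used for occlusion.
        List<MeshFilter> wallFilters = room.WallAnchors
            .SelectMany(anchor => anchor.GetComponentsInChildren<MeshFilter>())
            .ToList();

        _depthManager.MaskMeshFilters = wallFilters;
        _depthManager.MaskBias = 0.1f;   // illustrative starting value; tune at runtime
    }
}
```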
- Modify the SkyboxCubemapOccluded.shader to create custom skybox effects that respond to depth occlusion values, such as dimming or color-shifting occluded regions
- Integrate the depth bias adjustment pattern from PosterPlacer.cs into a content placement system that automatically calibrates bias based on surface type or user preference
- Experiment with the URP ShaderGraph examples (LitOccluded, StylizedOcclusionEffectON/OFF) to learn how to combine depth occlusion with stylized rendering effects like toon shading or custom edge treatments