This blog post has been updated to reflect the public release of Depth API with the v60 SDK.
Mesh API (Unity | Unreal) and Depth API (Unity, Unreal, Native) are now publicly available with the v60 SDK. These two capabilities can enrich the interactions in your MR experiences and generate more of the details, layering, and masking that make people’s interactions in mixed reality feel naturally believable and real.
But what do we mean when we talk about app experiences feeling believable?
When virtual objects move or interact in a way that feels unnatural, this breaks the sense of realness by creating a visual “hiccup” that sticks out and disrupts the user experience. With MR apps, the seamless and physically accurate blending of virtual and physical objects is paramount for preventing these types of user experience breaks.
Whether you’re bouncing a virtual ball off a physical wall or watching a virtual character walk behind a physical coffee table, the goal should be for interactions and virtual objects in MR to feel natural and behave as they would in everyday life.
Now with Mesh API and Depth API, you have the tools to ground your audience in the magical, blended world around them with more dynamic and seamless interactions. Read on to learn how.
Build Dynamic Experiences with Mesh API
Mesh API gives you access to the Scene Mesh, a geometric representation of the environment that reconstructs the physical world into a single triangle-based mesh. Meta Quest 3’s Space Setup feature automatically generates the Scene Mesh by capturing room elements like walls, ceilings, and floors, and your app can query this data using Scene API.
Scene Mesh on Quest 3 delivers a far more granular representation of a physical environment than what was possible on Quest 2. When you get access to the Scene Mesh of your users’ environments, you can engage audiences and be more creative with these key use cases:
- Generating accurate physics: Fast collisions, like bouncing virtual balls, lasers, projectiles, and other objects across a physical environment, benefit greatly from a detailed representation of the room. Mesh API enables collision effects that adapt to the various objects in the room, creating a more realistic experience (see the sketch after this list).
- Navigation and obstacle avoidance: Querying the Scene Mesh enables your app to know where obstacles are in a physical space, which can inform AI navigation and prevent virtual objects from intersecting with physical ones.
- Precise content placement: Being able to access users’ environments with higher fidelity enables experiences where your audience has freedom to place and attach virtual objects in more precise locations.
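To make the physics use case concrete, here’s a minimal Unity C# sketch. It assumes the Scene Mesh has already been loaded into the scene with a MeshCollider attached (the SDK can add one when loading the mesh); the bouncy-ball setup itself is plain Unity physics, not Meta-specific API.

```csharp
using UnityEngine;

// Minimal sketch: bounce a virtual ball off the physical room.
// Assumes the Scene Mesh is already loaded with a MeshCollider
// attached; everything below is plain Unity physics.
public class BouncyBallSpawner : MonoBehaviour
{
    [SerializeField] private float launchSpeed = 4f;

    public void LaunchBall()
    {
        var ball = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        ball.transform.localScale = Vector3.one * 0.1f; // 10 cm ball
        ball.transform.position = transform.position;

        // A bouncy physic material so hits on the room mesh feel lively.
        var bouncy = new PhysicMaterial
        {
            bounciness = 0.9f,
            bounceCombine = PhysicMaterialCombine.Maximum
        };
        ball.GetComponent<SphereCollider>().material = bouncy;

        // The rigidbody lets Unity resolve collisions against the
        // room's triangle mesh for us.
        var body = ball.AddComponent<Rigidbody>();
        body.velocity = transform.forward * launchSpeed;
    }
}
```

Because the Scene Mesh is a single triangle mesh, a standard MeshCollider is enough for fast one-way collisions like this.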
To get started with Mesh API and learn how to get the Scene Mesh of a physical environment, reference the Scene Mesh documentation (Unity | Unreal). To see best practices for using the Scene Mesh, you can also check out the code for the reference app Phanto on GitHub. The app includes a variety of features such as AI navigation, collisions, and physics-based interactions, all of which are powered by the Scene Mesh and Scene API.
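AI navigation of the kind Phanto demonstrates boils down to pathfinding over the Scene Mesh. As a hedged illustration, using Unity’s own AI Navigation package rather than anything Meta-specific, here’s one way to bake a runtime NavMesh over the loaded Scene Mesh so a virtual character can route around real furniture; the serialized fields are illustrative wiring, not SDK API.

```csharp
using Unity.AI.Navigation;  // Unity's AI Navigation package
using UnityEngine;
using UnityEngine.AI;

// Sketch: once the Scene Mesh is loaded, bake a NavMesh over it at
// runtime so a virtual character can path around real furniture.
public class RoomNavigation : MonoBehaviour
{
    [SerializeField] private NavMeshSurface surface;   // on the Scene Mesh root
    [SerializeField] private NavMeshAgent agentPrefab; // a virtual character

    // Call this after the Scene Mesh GameObject has been created.
    public void OnSceneMeshLoaded()
    {
        // Runtime bake: the floor becomes walkable area, furniture
        // captured in the mesh becomes obstacles.
        surface.BuildNavMesh();

        var agent = Instantiate(agentPrefab, surface.transform.position, Quaternion.identity);
        agent.SetDestination(Camera.main.transform.position); // walk toward the user
    }
}
```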
Blend Virtual Content Realistically with Depth API
When your furry four-legged friend walks to the other side of a couch, you know without consciously thinking about it that it’s behind the couch, because the furniture is blocking your view. In MR, when a virtual pet is expected to hide behind a physical object and the app has no way to render it appropriately, the believability instantly suffers and immersion breaks.
Now with Depth API, you have a solution for these types of depth-related interaction issues thanks to support for occlusion, or the ability to accurately hide or mask virtual elements behind physical objects.
Depth API on Quest 3 provides you with a real-time depth map that represents the physical environment’s depth as seen from the user’s point of view. By using Depth API, you can now support dynamic occlusion for fast-moving objects like virtual characters, pets, limbs, and much more. Here are just a few examples of what this can help unlock in your MR experiences:
- Immersive social and gaming experiences where physical objects and people can move freely without disrupting 3D virtual objects around the room.
- Believable hand interactions and experiences that encourage people to manipulate virtual objects with their hands.
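In practice the occlusion test runs per pixel in a fragment shader, but the core idea is a simple depth comparison. Here’s a conceptual C# sketch of that test; `SampleEnvironmentDepth` is a made-up stand-in for the real GPU-side sampling of the depth map, so treat this as an illustration of the logic rather than the Depth API’s actual interface.

```csharp
using UnityEngine;

// Conceptual sketch of the per-pixel test behind occlusion. In a real
// app this comparison runs in a fragment shader against the depth map
// Depth API provides; SampleEnvironmentDepth is a made-up stand-in
// used only to show the logic.
public static class OcclusionSketch
{
    // Hypothetical helper: linear depth (meters) of the physical
    // environment at a screen-space coordinate, read from the depth map.
    static float SampleEnvironmentDepth(Vector2 screenUV) => 1.5f; // placeholder value

    // True if a virtual fragment at 'virtualDepth' meters should be
    // hidden because a physical surface is closer to the viewer.
    public static bool IsOccluded(Vector2 screenUV, float virtualDepth)
    {
        float physicalDepth = SampleEnvironmentDepth(screenUV);
        const float bias = 0.02f; // small bias avoids flicker where depths nearly match
        return physicalDepth + bias < virtualDepth;
    }
}
```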
Demo from Meta Connect showing an experience using Depth API.
To build realistic mixed reality applications, use Depth API and Mesh API together: combined, they cover a broader set of use cases than either capability can address in isolation. We recommend the following flow:
- Step 1: Prompt users to initiate Space Setup to build a representation of the scene.
- Step 2: Use depth maps from Depth API to render occlusions based on the per-frame sensed depth, augmented with data from Scene.
- Step 3: If needed, use Scene to implement other effects, such as collisions.
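Here’s a hedged Unity C# sketch of that flow, assuming the v60-era OVRSceneManager component from the Meta XR Core SDK; the method and event names below are my best understanding, so check the SDK documentation for the exact, current names.

```csharp
using UnityEngine;

// Hedged sketch of the recommended flow, assuming the v60-era
// OVRSceneManager component from the Meta XR Core SDK (check the
// SDK docs for the exact, current names and events).
public class MixedRealityBootstrap : MonoBehaviour
{
    [SerializeField] private OVRSceneManager sceneManager;

    void Start()
    {
        // Step 1: try to load an existing Scene Model; if the user
        // hasn't run Space Setup yet, prompt them to capture one.
        sceneManager.SceneModelLoadedSuccessfully += OnSceneReady;
        sceneManager.NoSceneModelToLoad += () => sceneManager.RequestSceneCapture();
        sceneManager.LoadSceneModel();
    }

    void OnSceneReady()
    {
        // Step 2: with a Scene Model available, enable depth-based
        // occlusion for your renderers (e.g., via the Depth API
        // occlusion samples on GitHub).
        // Step 3: if needed, use Scene geometry for other effects,
        // such as collisions.
    }
}
```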
To get started using Depth API, please visit the documentation (Unity | Unreal). If you're building on Unity, visit our GitHub repository to find examples of using Depth API for real-time, dynamic occlusions.
Key Considerations for Getting Started with Mesh API and Depth API
Mesh API and Depth API were designed to leverage existing Presence Platform MR capabilities to support dynamic interactions involving users’ physical environments. Depth API uses annotations from the Scene Model to augment the stereo depth data; to get that data, we recommend initiating Space Setup. While it’s possible to use Depth API without Scene Understanding, we recommend getting familiar with Scene (Unity | Unreal | OpenXR | WebXR) before building with these new capabilities.
If you’re new to building for MR and want to incorporate rich interactions with users’ physical environments, please visit our MR Design Guidelines to learn more about how to design engaging MR experiences.
For the latest news and updates for VR developers, follow us on X/Twitter and Facebook.