Enhance Realism and Expedite Project Setup with the Instant Content Placement Mixed Reality Motif
With the launch of v71, we introduced the MRUK Raycast Beta API, making it even easier for developers to start creating mixed reality experiences almost instantly. In the current mixed reality landscape, content placement can be tedious because of a setup flow that requires users to scan their room when launching a mixed reality experience. With so many use cases for virtual content and panels—from menus to objects and everything in between—the ability to quickly place a variety of interactable content is more important than ever for creating seamless, enjoyable experiences.
That’s where MRUK Raycasting comes in. This capability is designed for apps that want to place a 2D panel or 3D object somewhere in front of a user with minimal setup and effort. With MRUK Raycasting, a user can simply look in the direction of the placement and start interacting with both 2D and 3D content—even in a physical space they’ve never visited in-headset.
This type of instant placement can enable large-scale, friction-free experiences that support natural and realistic interactions between users and their virtual environment—for example, our recent Shared Activities in Mixed Reality Motif, where we created a sample virtual chessboard experience and a movie co-watching experience. In that scenario, MRUK Raycasting can be leveraged to easily place the virtual chessboard and cast realistic shadows below it, without the need for expensive light sources.
In this Instant Placement Mixed Reality Motif, we will focus on the basics of how to use the Depth API—the core technology behind the Raycast API—and achieve visual effects by leveraging depth information directly in your shaders. This will build the necessary knowledge to understand how the EnvironmentRaycastManager class is supported by EnvironmentDepthManager.
Additionally, you’ll learn how EnvironmentRaycastManager uses its environment information to perform raycasts, return hit point information and allow you to place objects effortlessly. The EnvironmentRaycastManager is part of the MRUK package, so before you dive in, make sure MRUK v71 or later is installed in your project. Keep reading below to learn more, and don’t forget to watch the video tutorial and download the project from GitHub to accelerate development with the Depth API.
Depth API: The basics
The Depth API provides real-time depth maps that apps can use to sense the surrounding environment. Primarily, it enhances mixed reality experiences by allowing virtual objects to be occluded by real-world objects and surfaces, making them appear integrated into the actual environment. Occlusion is crucial as it prevents virtual content from appearing as a layer over the real world, which can disrupt immersion.
Starting with v67, we've updated the mechanism for retrieving depth textures, resulting in improved quality and performance. The EnvironmentDepthManager class, which is now part of the Meta XR Core SDK, is responsible for providing depth information to shaders and allows you to create visual effects and gameplay that were previously near impossible.
When leveraging Depth API, the EnvironmentDepthManager class needs to be present in your scene in order to make use of environment depth information in shaders and scripts. This class checks for support, initializes depth providers and retrieves depth textures from the depth sensors in each frame. Key properties like _EnvironmentDepthTexture (real-world depth map), _EnvironmentDepthReprojectionMatrices (view-projection matrices of the depth cameras) and _EnvironmentDepthZBufferParams (parameters used to linearize the depth values) are set globally for use in shaders. EnvironmentDepthManager supports both hard and soft occlusion modes and can apply object-specific depth masking. This setup allows shaders to blend virtual objects with real-world depth for realistic occlusion and visual effects.
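As a minimal sketch of that setup (assuming the Meta.XR.EnvironmentDepth namespace and the member names below; verify them against your installed SDK version), enabling depth occlusion from script can look like this:
using Meta.XR.EnvironmentDepth;
using UnityEngine;

public class DepthOcclusionSetupSketch : MonoBehaviour
{
    // Reference to the EnvironmentDepthManager that lives in the scene
    [SerializeField] private EnvironmentDepthManager depthManager;

    private void Start()
    {
        // Environment depth is only available on supported devices
        if (!EnvironmentDepthManager.IsSupported)
        {
            Debug.LogWarning("Environment depth is not supported on this device.");
            return;
        }

        // Pick hard or soft occlusion for shaders that sample the global depth texture
        depthManager.OcclusionShadersMode = OcclusionShadersMode.SoftOcclusion;
        depthManager.enabled = true;
    }
}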
Accessing Depth Variables in Shaders
The DepthLookingGlassMotif shader visualizes the real-world depth provided by the Depth API using a gradient color effect. It also displays the depth of virtual objects seen by the camera. To see virtual scene objects, you’ll need to enable the Depth Texture, either in your camera’s rendering settings or on your Universal Render Pipeline Asset.
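Enabling the depth texture is usually done in the editor, but as a minimal sketch (assuming your project uses URP and the standard URP APIs) it can also be toggled from script:
using UnityEngine;
using UnityEngine.Rendering.Universal;

public class EnableDepthTextureSketch : MonoBehaviour
{
    private void Awake()
    {
        // Option A: enable the depth texture globally on the active URP asset
        if (UniversalRenderPipeline.asset != null)
        {
            UniversalRenderPipeline.asset.supportsCameraDepthTexture = true;
        }

        // Option B: request it per camera via the Universal Additional Camera Data
        if (Camera.main != null)
        {
            Camera.main.GetUniversalAdditionalCameraData().requiresDepthTexture = true;
        }
    }
}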
To make use of already declared variables such as _EnvironmentDepthTexture, _EnvironmentDepthReprojectionMatrices and _EnvironmentDepthZBufferParams, we can import a shader include file that comes with the Meta XR SDK Core package. The shader include file contains utility functions and predefined macros for handling environment depth sampling and reprojection, allowing us to easily access depth data and integrate it into the shader. The same include file works for both BiRP and URP.
The DepthLookingGlassMotif shader shows how to sample depth and convert the data from normalized device coordinates (NDC) to linear depth using _EnvironmentDepthZBufferParams, allowing us to accurately measure the distance from the headset to the surrounding environment.
_EnvironmentDepthReprojectionMatrices is then used to transform world positions into the correct depth space for comparing scene depth with the environment depth texture. _EnvironmentDepthTexture itself is the actual depth map captured by the system, representing the depth of real-world surfaces relative to the camera. The sampled depth can easily be normalized to linear depth using _EnvironmentDepthZBufferParams, making it possible to seamlessly compare and blend virtual objects with real-world depth.
This functionality is what makes it possible to occlude an object based on its position relative to the environment depth. If you want to visualize virtual objects in your shader as well, keep in mind that this only works for opaque objects, since transparent objects in Unity do not write to the depth buffer. If you’d still like to have invisible objects that write to the depth buffer, we included a custom shader in the project called DepthMaskMotif that you can apply to an object to make it invisible while still visible through your Depth Looking Glass. This mechanism can be helpful for creating experiences like an escape room or a ghost hunting adventure.
Accessing the Depth Map in Shaders
With the Instant Placement Motif, you can learn how to apply effects directly to the depth map, which you can see in the sample through the addition of the orb spawner and shock wave example. Additionally, we apply a light effect to the depth map as the orb is flying through the air.
The orb itself has a shader that is responsible for occluding itself behind objects. To add occlusion to your shadergraph shader, all you have to do is add an OcclusionSubGraph and connect it to the alpha output of your shader. In our sample, we added a float property called Environment Depth Bias as an input. Increasing this value slightly will solve z-fighting, or in other words, prevent our object from flickering when it is too close to a wall.
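If you want to adjust that bias from script at runtime, a small hypothetical sketch could look like this (the reference name _EnvironmentDepthBias is an assumption for the Shader Graph property called Environment Depth Bias; check the property’s actual reference name in your graph):
using UnityEngine;

public class OrbDepthBiasSketch : MonoBehaviour
{
    [SerializeField] private Renderer orbRenderer;

    // A small positive bias pushes the depth comparison slightly away from real surfaces
    [SerializeField] private float environmentDepthBias = 0.06f;

    private void Start()
    {
        // "_EnvironmentDepthBias" is an assumed reference name for the
        // "Environment Depth Bias" property exposed by the orb's Shader Graph
        orbRenderer.material.SetFloat("_EnvironmentDepthBias", environmentDepthBias);
    }
}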
For the shockwave effect in the sample, we use a simple sphere that increases in size over time; wherever it intersects with the environment depth, we apply coloring to create the shock wave effect. You can also find this DepthScanEffectMotif shader in the project. It contains a _Color property that determines the base color of the visualization and a _Girth property that controls the blending and cutoff thresholds for depth comparisons, or in other words, the thickness of the shock wave.
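The script that grows the sphere is not part of the shader, but a hedged sketch of the idea (the property names _Color and _Girth come from the DepthScanEffectMotif shader described above, while the driver component and its growth parameters are illustrative) might look like this:
using UnityEngine;

public class ShockwaveDriverSketch : MonoBehaviour
{
    [SerializeField] private Renderer sphereRenderer;
    [SerializeField] private Color waveColor = Color.cyan;
    [SerializeField] private float girth = 0.05f;      // thickness of the shock wave band
    [SerializeField] private float growthSpeed = 2f;   // radius gained per second (illustrative)
    [SerializeField] private float maxRadius = 5f;

    private float _radius;

    private void Start()
    {
        // _Color and _Girth are the properties of the DepthScanEffectMotif shader
        sphereRenderer.material.SetColor("_Color", waveColor);
        sphereRenderer.material.SetFloat("_Girth", girth);
    }

    private void Update()
    {
        // Grow the sphere over time and restart once it reaches its maximum radius
        _radius += growthSpeed * Time.deltaTime;
        if (_radius > maxRadius)
        {
            _radius = 0f;
        }
        transform.localScale = Vector3.one * (_radius * 2f);
    }
}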
And this is how we use the Depth API to create different visual effects. Next, let’s dive into why this is relevant for instant content placement.
Instant Content Placement
If you’ve taken a look at the project, you may have noticed that in order to place the orbs in our room, we did not use any meshes at all—instead, we leverage the new EnvironmentRaycastManager, which heavily relies on the EnvironmentDepthManager. The basic concept of placing an object using the EnvironmentRaycastManager looks like the following:
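(A condensed, illustrative sketch: the hit fields point and normal follow MRUK’s EnvironmentRaycastHit, while the placement helper itself is an assumption rather than the motif’s source code.)
using Meta.XR.MRUtilityKit;
using UnityEngine;

public class PlaceOnSurfaceSketch : MonoBehaviour
{
    public EnvironmentRaycastManager RaycastManager;
    public Transform ContentToPlace;

    // Call this with a ray, e.g. derived from the controller pose or the user's gaze
    public void TryPlace(Ray ray)
    {
        // Raycast against the real environment reconstructed from depth
        if (RaycastManager.Raycast(ray, out var hit))
        {
            // Snap the content to the detected surface and align its up axis with the surface normal
            ContentToPlace.position = hit.point;
            ContentToPlace.rotation = Quaternion.FromToRotation(Vector3.up, hit.normal);
        }
    }
}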
The important part here is the EnvironmentRaycastManager.Raycast method. It performs a raycast against the environment and returns a hit result to us with information such as the hit point, normal of the surface we hit and normal confidence. This is all the information we need to detect a surface and place any object on that surface. The Raycast API is as easy to work with as Unity’s Physics.Raycast, which many Unity developers will already be familiar with.
To deepen your understanding of this concept, we prepared an additional scene called InstantContentPlacement. In this scene, we show you how to grab an object and detect any surface that is suitable for placement. Additionally, using the same concept, we place a grounding shadow below the object, which sits tightly on the detected surface to give our object and experience more realism.
In the SurfacePlacementMotif class, we check if the object has been grabbed. While grabbing, we want to update our placement indicator, which tells us if we are close enough to the surface to place our object. You will see that it casts a ray from the object downwards towards the environment. If it hits a surface, we want to measure the distance between the object and our hitpoint. Once the distance is close enough, we enable the indicator and move it to the hitpoint. You can see a common theme here—it is the same concept we follow when we subsequently unselect the object.
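A simplified, hypothetical sketch of that indicator update while the object is grabbed (field names and the distance threshold below are assumptions, not the SurfacePlacementMotif source) could look like this:
using Meta.XR.MRUtilityKit;
using UnityEngine;

public class PlacementIndicatorSketch : MonoBehaviour
{
    [SerializeField] private EnvironmentRaycastManager raycastManager;
    [SerializeField] private Transform grabbedObject;
    [SerializeField] private GameObject placementIndicator;
    [SerializeField] private float maxPlacementDistance = 0.3f; // assumed threshold in meters

    private void Update()
    {
        // Cast a ray from the grabbed object straight down towards the environment
        var ray = new Ray(grabbedObject.position, Vector3.down);
        if (raycastManager.Raycast(ray, out var hit) &&
            Vector3.Distance(grabbedObject.position, hit.point) <= maxPlacementDistance)
        {
            // Close enough to a surface: show the indicator at the hit point
            placementIndicator.SetActive(true);
            placementIndicator.transform.position = hit.point;
        }
        else
        {
            placementIndicator.SetActive(false);
        }
    }
}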
When unselecting, we first check if we hit a surface. If we did, we measure the distance to check whether the object is still close enough to that surface. If it is, we smoothly move the object onto the surface with a small offset for better visualization and so that we can still see the shadow. The simplest bare-bones instant placement would look something like this:
using Meta.XR.MRUtilityKit;
using UnityEngine;

public class BasicInstantPlacement : MonoBehaviour
{
    public Transform Object;
    public EnvironmentRaycastManager RaycastManager;

    private void Update()
    {
        // Check if we hit a surface below our object
        if (RaycastManager.Raycast(new Ray(Object.position, Vector3.down), out var hitInfo))
        {
            // Position the object on the hitpoint/detected surface
            Object.position = hitInfo.point;
        }
    }
}
Start Placing Virtual Content in Mixed Reality
Using the insights in this blog, the source code and project on GitHub and the tips provided in our tutorial video, you have the knowledge and skills to start placing any virtual object on nearly any surface in your physical environment. Leveraging the Raycast API and Depth API to their full potential unlocks unique visual effects that deliver more delight, realism, and authenticity, and we can’t wait to see how you use them to bring your mixed reality experiences to life.
For more developer news, product updates and tutorials, be sure to follow us on X and Facebook and subscribe to our monthly newsletter in your Developer Dashboard settings.