Surface-Projected Passthrough
Surface-projected passthrough allows apps to specify the geometry onto which the passthrough images are projected, instead of relying on automatic environment depth reconstruction. With surface-projected passthrough layers, passthrough is visible only within the specified surface geometries; the rest of the layer is transparent.
Surface-projected passthrough can be used when the exact locations of real-world features are known (for example, a desk marked by the user using controllers) to avoid visual artifacts that may arise from the dynamic environment reconstruction used by regular passthrough layers. The surface geometries provided by the app should match real-world surfaces as closely as possible. If they differ significantly, users receive conflicting depth cues, and objects may appear too small or too large. On Quest Pro, such mismatches also cause a shift between the color and luminance images, making colored objects appear in the wrong location.
There is no depth testing between the passthrough projection surface and the objects rendered in VR. As a result, a surface-projected passthrough layer renders either as an underlay (always occluded by virtual objects) or as an overlay (always occluding virtual objects).
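Because there is no depth testing, the underlay-versus-overlay choice is made on the layer itself rather than per object. The snippet below is a sketch only: it assumes the Oculus Integration SDK's OVRPassthroughLayer component exposes OVROverlay-style placement through an overlayType field, which you should verify against your SDK version.

```csharp
using UnityEngine;

// Sketch: selecting underlay compositing so that virtual objects always
// occlude the surface-projected passthrough. The 'overlayType' field is
// assumed from the OVROverlay-style settings on OVRPassthroughLayer.
public class PassthroughPlacement : MonoBehaviour
{
    void Start()
    {
        var layer = GetComponent<OVRPassthroughLayer>();
        layer.overlayType = OVROverlay.OverlayType.Underlay;
    }
}
```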
You can add the following example script to GameObjects to render them as passthrough.
The script adds the GameObject's mesh to the passthrough layer by calling the passthrough layer's AddSurfaceGeometry method.
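A minimal sketch of such a script, assuming the Oculus Integration SDK's OVRPassthroughLayer component and its AddSurfaceGeometry/RemoveSurfaceGeometry methods (check the names against your SDK version):

```csharp
using UnityEngine;

// Attach to a GameObject with a MeshFilter to project passthrough onto its mesh.
public class PassthroughSurface : MonoBehaviour
{
    // Assign a passthrough layer configured for projection surfaces in the Inspector.
    public OVRPassthroughLayer passthroughLayer;

    void Start()
    {
        // The mesh should not be rendered as a regular object;
        // the compositor projects passthrough onto it instead.
        Destroy(GetComponent<MeshRenderer>());

        // Register this GameObject's geometry as a passthrough projection surface.
        // Pass 'true' as the second argument if the object moves at runtime.
        passthroughLayer.AddSurfaceGeometry(gameObject, updateTransform: false);
    }

    void OnDestroy()
    {
        // Unregister the surface when the object is destroyed.
        passthroughLayer.RemoveSurfaceGeometry(gameObject);
    }
}
```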
If updateTransform is false, the GameObject is assumed to be static: updates to either its geometry or its transformation are not reflected after it is added. If updateTransform is true, the transformation is updated every frame. Since this has a small per-frame overhead, enable it only when needed. Changes to the mesh geometry itself are not reflected in either mode; the GameObject must be removed and re-added for such changes to take effect.
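Since mesh edits are not picked up in either mode, a geometry change has to go through a remove/re-add cycle. A minimal sketch, assuming the AddSurfaceGeometry/RemoveSurfaceGeometry pair on OVRPassthroughLayer as above:

```csharp
using UnityEngine;

// Sketch: re-registering a GameObject after its mesh has been modified,
// so the passthrough projection surface picks up the new geometry.
public static class PassthroughSurfaceUtils
{
    public static void ReapplyGeometry(
        OVRPassthroughLayer layer, GameObject surface, bool updateTransform = false)
    {
        layer.RemoveSurfaceGeometry(surface);               // drop the stale mesh
        layer.AddSurfaceGeometry(surface, updateTransform); // re-add with the current mesh
    }
}
```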