This sample demonstrates the foundational use case of the Passthrough Camera API: accessing a live camera texture from the Quest headset and displaying it in your Unity scene. It covers requesting camera permissions, initializing the camera feed, and rendering it to a world-space UI Canvas, making it a strong starting point for developers new to the Passthrough Camera API.
- Request and check camera access permissions at runtime using OVRPermissionsRequester
- Initialize camera access with PassthroughCameraAccess and detect when the feed is ready
- Retrieve a GPU texture from the headset camera using GetTexture()
- Display the camera feed on a world-space Canvas UI element in XR
- Configure camera position and resolution through prefab-based settings
## Requirements
- Hardware: Meta Quest 3 or Quest 3S running Horizon OS 74 or newer
- Development environment: Unity with the Mixed Reality Utility Kit (MRUK) package installed
For version requirements, SDK installation, and project configuration, see the sample README.
## Get started
Clone or download the Unity-PassthroughCameraApiSamples repository from GitHub. Open the project in Unity and load the scene at Assets/PassthroughCameraApiSamples/CameraViewer/CameraViewer.unity. Build and deploy to a Quest 3 or Quest 3S device — the app prompts for camera access permission on first launch. For detailed build configuration, see the sample README.
## Explore the sample
The CameraViewer scene contains the essential components for camera access and display.
| File / Scene | What it demonstrates | Key concepts |
| --- | --- | --- |
| CameraViewer.unity | Complete scene setup with XR camera rig, passthrough rendering, and camera feed UI | Building Blocks integration (Camera Rig, Passthrough); world-space Canvas positioning |
| Scripts/CameraViewerManager.cs | Camera initialization and permission status display | Coroutine-based polling for camera readiness; permission checking in the Update loop |
| Prefabs/CameraViewerManagerPrefab.prefab | World-space Canvas with RawImage for camera display and debug text | World-space UI in XR (RenderMode=WorldSpace, 0.001 scale, 1.2 m forward offset); runtime permission flow for Scene and PassthroughCameraAccess |
## Runtime behavior
When you run the sample, you see a world-space canvas floating in front of you displaying the live feed from the Quest headset’s left camera. Below the camera feed, a debug text field displays whether camera access has been granted. The camera feed appears as a 1280x960 texture rendered on the canvas, positioned at a local z-offset of 1.2 meters from the center eye anchor.
## Key concepts

### Coroutine-based camera initialization
The sample uses a coroutine pattern in CameraViewerManager.Start() to handle asynchronous camera startup. The coroutine polls PassthroughCameraAccess.IsPlaying each frame using yield return null until the camera feed is ready, then assigns the texture once via GetTexture(). This pattern avoids blocking the main thread while waiting for hardware initialization. See CameraViewerManager.cs for the complete implementation.
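A hedged sketch of this pattern follows. The component and method names (PassthroughCameraAccess, IsPlaying, GetTexture()) are taken from the description above, and the field names are hypothetical; consult CameraViewerManager.cs in the sample for the actual implementation.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.UI;

public class CameraFeedBinder : MonoBehaviour
{
    // Hypothetical serialized references; wire these up in the Inspector.
    [SerializeField] private PassthroughCameraAccess m_cameraAccess;
    [SerializeField] private RawImage m_image;

    // Declaring Start() as IEnumerator makes Unity run it as a coroutine.
    private IEnumerator Start()
    {
        // Poll once per frame until the hardware feed is up. Yielding null
        // returns control to the engine, so the main thread never blocks
        // while the camera initializes.
        while (!m_cameraAccess.IsPlaying)
        {
            yield return null;
        }

        // Assign the GPU texture once; it updates in place each frame,
        // so no per-frame reassignment is needed.
        m_image.texture = m_cameraAccess.GetTexture();
    }
}
```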
### Permission flow
Camera access requires three steps: (1) enabling camera permissions in OculusProjectConfig at build time, (2) triggering the OS permission dialog at runtime via RequestPermissionsOnce.cs, and (3) verifying permission status using OVRPermissionsRequester.IsPermissionGranted(Permission.PassthroughCameraAccess). The sample displays the current permission state in a debug text field updated each frame in Update().
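The runtime check in step (3) could be sketched as follows. The debug-text field name is hypothetical, and the OVRPermissionsRequester call mirrors the one named above; verify the enum value against the SDK you are using.

```csharp
using UnityEngine;
using UnityEngine.UI;

public class PermissionStatusLabel : MonoBehaviour
{
    // Hypothetical reference to the on-canvas debug text field.
    [SerializeField] private Text m_debugText;

    private void Update()
    {
        // OVRPermissionsRequester wraps the Android runtime permission
        // check for the headset-specific camera permission.
        bool granted = OVRPermissionsRequester.IsPermissionGranted(
            OVRPermissionsRequester.Permission.PassthroughCameraAccess);

        m_debugText.text = granted
            ? "Camera permission: granted"
            : "Camera permission: not granted";
    }
}
```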
### Prefab separation for reusability
The sample separates camera lifecycle management (PassthroughCameraAccessPrefab) from UI presentation (CameraViewerManagerPrefab). The shared PassthroughCameraAccessPrefab configures camera position (CameraPositionType.Left) and resolution (1280x960) through serialized fields, allowing other samples to reuse the same camera instance without duplicating configuration. This prefab-based approach appears in all five samples in the project.
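As an illustration of the serialized-field configuration described above (the class and field names here are assumptions, not the shipped source), the shared prefab's component might expose its settings like this:

```csharp
using UnityEngine;

// Assumed shape of the eye-selection enum named in the sample docs.
public enum CameraPositionType { Left, Right }

// Hypothetical sketch of the shared camera-access component's config surface.
public class PassthroughCameraAccessConfig : MonoBehaviour
{
    // Serialized so every consuming sample inherits the same settings
    // from the one shared prefab instead of duplicating configuration.
    [SerializeField] private CameraPositionType m_cameraPosition = CameraPositionType.Left;
    [SerializeField] private Vector2Int m_requestedResolution = new Vector2Int(1280, 960);

    public CameraPositionType CameraPosition => m_cameraPosition;
    public Vector2Int RequestedResolution => m_requestedResolution;
}
```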
## Extend the sample
- Add camera switching: Modify the sample to toggle between the left and right cameras by changing CameraPositionType and re-initializing the camera feed
- Apply custom shaders: Use the camera texture with Unity shader graphs or custom materials to create visual effects, color grading, or image processing. See the ShaderSample in this project for examples
- Integrate spatial anchors: Combine the camera feed with MRUK's scene understanding to overlay the camera texture on detected surfaces or at specific anchor points. See the CameraToWorld sample for spatial alignment patterns
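The first extension idea could be sketched as below. This is an assumption-heavy sketch: a writable CameraPosition property and the disable/re-enable restart mechanism are hypothetical, so check the actual PassthroughCameraAccess API before adopting it.

```csharp
using System.Collections;
using UnityEngine;

public class CameraSwitcher : MonoBehaviour
{
    [SerializeField] private PassthroughCameraAccess m_cameraAccess;

    // Call this from a UI button or controller input binding.
    public void ToggleCamera()
    {
        StartCoroutine(SwitchRoutine());
    }

    private IEnumerator SwitchRoutine()
    {
        // Flip between the left and right camera positions.
        var next = m_cameraAccess.CameraPosition == CameraPositionType.Left
            ? CameraPositionType.Right
            : CameraPositionType.Left;

        // Assumed restart mechanism: disabling the component stops the feed,
        // and re-enabling it re-initializes with the new position.
        m_cameraAccess.enabled = false;
        m_cameraAccess.CameraPosition = next;
        yield return null; // let the teardown complete for one frame
        m_cameraAccess.enabled = true;
    }
}
```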