CameraToWorld sample overview
Updated: May 7, 2026
This sample demonstrates camera pose alignment and 2D-to-3D coordinate transformation using the Passthrough Camera API. It shows how to position virtual content in world space relative to the RGB camera and convert normalized viewport coordinates into world-space rays for spatial calculations.
- Retrieve the RGB camera's world-space position and rotation using GetCameraPose()
- Convert 2D viewport coordinates to 3D world-space rays with ViewportPointToRay()
- Calculate horizontal field of view from viewport edge rays to size virtual content
- Capture freeze-frame snapshots of the camera feed using GetColors()
- Visualize debug markers and rays aligned with camera space
- Quest 3 or Quest 3S running Horizon OS v74 or higher
- Unity 6000.0.38f1 or newer
- Passthrough enabled in your project
For complete build prerequisites and package dependencies, see the sample README.
Clone the repository from GitHub, open the project in Unity, and navigate to the CameraToWorld scene. Build and deploy to your Quest device. Grant camera permissions when prompted, and the live camera feed appears on a floating canvas with debug rays extending to the canvas corners.
The sample includes five scripts that demonstrate pose tracking, coordinate transformation, and debug visualization:
| File / Scene | What it demonstrates | Key concepts |
|---|---|---|
| CameraToWorldManager.cs | Main orchestrator for pose tracking, ray transformation, and canvas positioning | GetCameraPose(), ViewportPointToRay(), FOV calculation, snapshot mode, debug offset |
| CameraToWorldCameraCanvas.cs | Camera texture display with live streaming and snapshot support | GetTexture() for GPU streaming, GetColors() for CPU snapshot |
| CameraToWorldRayRenderer.cs | Ray segment visibility data holder | Line renderer management for debug visualization |
| InputManager.cs (shared) | Controller and hand tracking input abstraction | Button A/B mapping, index/middle finger pinch detection |
| RequestPermissionsOnce.cs (shared) | One-time permission request at scene load | RuntimeInitializeOnLoadMethod, permission state polling |
When you run the sample, a permission prompt appears requesting camera access. After granting permissions, the live camera feed displays on a floating canvas positioned 1 meter in front of the RGB camera. Four rays extend from the camera origin to the canvas corners, labeled with normalized viewport coordinates. By default, only the ray endpoints are visible.
Press Button A on your controller or perform an index finger pinch to toggle snapshot mode, which freezes the frame and reveals the full rays with position markers. Press Button B or perform a middle finger pinch to toggle debug mode, which shifts all visualization objects 15 centimeters down and 40 centimeters forward in head space for comfortable viewing.
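The input handling described above can be sketched as follows. This is an illustrative outline, not the sample's actual InputManager: the class, field, and `ToggleSnapshotMode` hook are hypothetical, but `OVRInput.GetDown` and `OVRHand.GetFingerIsPinching` are the standard Meta XR SDK calls for controller buttons and pinch detection.

```csharp
using UnityEngine;

// Minimal sketch (assumed names, not the sample's implementation):
// map Button A or an index-finger pinch to the snapshot toggle.
public class SnapshotToggleSketch : MonoBehaviour
{
    [SerializeField] private OVRHand m_rightHand; // assigned in the Inspector
    private bool m_wasPinching;

    private void Update()
    {
        // Button A on the right Touch controller
        bool buttonA = OVRInput.GetDown(OVRInput.Button.One);

        // Rising edge of an index-finger pinch on the tracked hand
        bool pinching = m_rightHand != null
            && m_rightHand.GetFingerIsPinching(OVRHand.HandFinger.Index);
        bool pinchStarted = pinching && !m_wasPinching;
        m_wasPinching = pinching;

        if (buttonA || pinchStarted)
        {
            ToggleSnapshotMode(); // hypothetical hook; the sample routes input through InputManager
        }
    }

    private void ToggleSnapshotMode() { /* freeze or resume the camera feed */ }
}
```

Polling the pinch state each frame and detecting its rising edge keeps the toggle from firing repeatedly while the fingers stay pinched.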
The sample obtains the RGB camera’s world-space pose by calling GetCameraPose() every frame and positions the canvas at a fixed distance in front of the camera:
```csharp
var cameraPose = m_cameraAccess.GetCameraPose();
m_cameraCanvas.transform.position = cameraPose.position
    + cameraPose.rotation * Vector3.forward * m_canvasDistance;
m_cameraCanvas.transform.rotation = cameraPose.rotation;
```
The canvas is placed 1 meter forward from the camera origin (m_canvasDistance = 1f) and tracks the camera’s rotation as you move your head. See CameraToWorldManager.cs for the complete implementation.
The sample converts normalized 2D viewport coordinates to world-space rays using ViewportPointToRay(Vector2). This enables calculating the horizontal field of view by measuring the angle between rays cast from the left and right viewport edges:
```csharp
Ray leftRay = m_cameraAccess.ViewportPointToRay(new Vector2(0f, 0.5f));
Ray rightRay = m_cameraAccess.ViewportPointToRay(new Vector2(1f, 0.5f));
float hFOV = Vector3.Angle(leftRay.direction, rightRay.direction);
```
The sample uses this angle to size the canvas width at the configured distance (1 meter default). Four rays extend from the camera origin to the viewport corners (0,0), (1,0), (0,1), (1,1) to visualize the transformation. See CameraToWorldManager.cs for the FOV calculation and ray visualization logic.
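The sizing step can be written out explicitly. At distance d, a horizontal FOV of hFOV degrees spans a width of 2 · d · tan(hFOV / 2); the variable names below are illustrative rather than the sample's exact fields.

```csharp
// Sketch (assumed names): derive canvas dimensions from the measured FOV.
// hFOV comes from the edge-ray angle above; m_canvasDistance is 1 m by default.
float canvasWidth = 2f * m_canvasDistance * Mathf.Tan(hFOV * 0.5f * Mathf.Deg2Rad);

// Keep the canvas aspect ratio matched to the camera resolution so the
// feed is not stretched (resolution is assumed to come from the camera API).
float aspect = (float)resolution.x / resolution.y;
float canvasHeight = canvasWidth / aspect;
```

Because the width is derived from the measured rays rather than a hard-coded FOV constant, the canvas stays correctly sized if the camera intrinsics differ between devices.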
Freeze-frame snapshot capture
The sample captures a freeze-frame snapshot by reading raw pixel data via GetColors() and loading it into a pre-created Texture2D:
```csharp
var pixels = m_cameraAccess.GetColors();
m_cameraSnapshot.LoadRawTextureData(pixels);
m_cameraSnapshot.Apply();
```
The texture is created once at CurrentResolution and reused across snapshot calls. Toggling snapshot mode disables the camera component to stop streaming, then re-enables it to resume GetTexture() GPU texture updates. See CameraToWorldCameraCanvas.cs for the snapshot implementation.
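The full freeze/resume cycle described above can be sketched as a pair of methods. The member names here are assumptions for illustration; the real flow lives in CameraToWorldCameraCanvas.cs.

```csharp
// Sketch (assumed member names): freeze the current frame, then pause streaming.
public void MakeCameraSnapshot()
{
    var pixels = m_cameraAccess.GetColors();      // CPU-side pixel read
    m_cameraSnapshot.LoadRawTextureData(pixels);  // reuse the pre-created Texture2D
    m_cameraSnapshot.Apply();                     // upload the pixel data to the GPU
    m_image.texture = m_cameraSnapshot;           // display the frozen frame
    m_cameraAccess.enabled = false;               // stop live streaming while frozen
}

public void ResumeStreamingFromCamera()
{
    m_cameraAccess.enabled = true;                // GetTexture() GPU updates resume
}
```

Allocating the Texture2D once and refilling it on each snapshot avoids per-capture allocations and the garbage-collection spikes they would cause.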
- Modify the canvas distance and observe how the FOV calculation automatically adjusts the canvas width to maintain correct viewport projection
- Add raycasting from arbitrary viewport points to detect 3D objects in the camera’s field of view, enabling spatial interaction with camera-aligned content
- Combine with the MultiObjectDetection sample to project detected object bounding boxes into world space using viewport-to-ray transformation
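As a starting point for the raycasting extension above, a viewport-to-ray result can be fed directly into Unity's physics query. This sketch assumes the scene contains colliders to hit; the 10-meter range is an arbitrary choice.

```csharp
// Sketch: cast a ray from the camera's viewport center into the scene and
// report the first collider hit (requires colliders in the scene).
Ray ray = m_cameraAccess.ViewportPointToRay(new Vector2(0.5f, 0.5f));
if (Physics.Raycast(ray, out RaycastHit hit, 10f))
{
    Debug.Log($"Camera center sees '{hit.collider.name}' at {hit.point}");
}
```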