First Hand sample overview
Updated: May 11, 2026
This sample demonstrates how to build a complete mixed reality game using the Interaction SDK (ISDK), hand tracking, controller input, haptics, voice commands, and scene understanding. First Hand showcases production-ready patterns for interaction types, locomotion, state management, and seamless hand-to-controller switching across multiple gameplay scenarios. Highlights include:
- All major ISDK interaction types (HandGrab, Poke, TouchHandGrab, DistanceGrab, Snap, Pose Detection) applied in gameplay contexts
- Custom ISDK extensions and transformers for physics-based manipulation and seamless hand-to-controller switching
- Multi-step fabrication sequences using snap zones, pose detection, and timeline-driven animations
- Voice SDK command integration with gameplay systems through decoupled event patterns
- Procedural mixed reality content placement using passthrough, scene understanding, and MRUK room queries
Prerequisites
- Meta Quest 3 or Quest 3S
- Unity 2022.3.22f1 or newer (Unity 6+ recommended)
- Development environment with Android build support enabled
For platform setup, see Set up your development environment.
Clone the repository from GitHub and open the project in Unity. Load the entry scene at Assets/Project/Scenes/Level/FirstLoad.unity. Build and deploy to your Quest device (File > Build Settings > Build And Run) — the project includes pre-configured build settings. For detailed build instructions and dependency versions, see the sample README.
First Hand uses additive scene loading, where each gameplay area loads a Level scene (logic and interactables) alongside a separate Art scene (environment geometry). This separation lets you iterate on gameplay scripts without reloading heavy art assets. The FirstLoad scene initializes the experience and loads the main menu.
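A minimal sketch of this additive-loading pattern follows, using Unity's standard SceneManager API. The LevelLoader class and scene-name parameters are illustrative, not the sample's actual loader:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// Minimal sketch of the Level/Art additive-loading pattern described above.
// The class and method names here are illustrative, not the sample's actual API.
public class LevelLoader : MonoBehaviour
{
    public IEnumerator LoadLevelPair(string levelScene, string artScene)
    {
        // Load gameplay logic and environment art as separate additive scenes,
        // so either can be unloaded and reloaded independently.
        yield return SceneManager.LoadSceneAsync(levelScene, LoadSceneMode.Additive);
        yield return SceneManager.LoadSceneAsync(artScene, LoadSceneMode.Additive);
    }

    public IEnumerator ReloadGameplayOnly(string levelScene)
    {
        // Iterating on gameplay: swap the Level scene while the Art scene stays resident.
        yield return SceneManager.UnloadSceneAsync(levelScene);
        yield return SceneManager.LoadSceneAsync(levelScene, LoadSceneMode.Additive);
    }
}
```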
| Scene pair | What it demonstrates | Key concepts |
|---|---|---|
| DevClocktower / ClockTower | Glove fabrication, superpowers (palm blast, shield), drone combat | Multi-step assembly, pose detection, object pooling, snap zones, PlayableDirector timelines |
| DevStreet / Street | Guided locomotion tutorial | Teleport (hand gesture + controller), snap turn, custom yield instructions |
| DevMixedReality / MRPortals | Passthrough and scene understanding integration | Voice SDK commands, MRUK room queries, procedural wall placement, Android permissions |
| DevHaptics / MountainPeak | Kinesis modules, kite controller, spray can | Runtime haptic modulation, IHandGrabUseDelegate, custom force-based transformers |
| DevHub / Hub | Central navigation hub | Scene transitions, state management |
Key script directories:
- Assets/Project/Scripts/ISDK/ — Custom ISDK extensions (CompoundHandRef, InteractionTracker, AutoInteractableSetup, custom transformers)
- Assets/Project/Scripts/Gameplay/ — Gameplay systems (GloveBuildBehaviour, Blaster, PlayerShield, DroneMiniGame)
- Assets/Project/Scripts/MRPlacement/ — Mixed reality scene understanding and wall placement (SceneUnderstandingLoader, MRUKLoader, MRWallPlacement)
- Assets/Project/Scripts/Voice/ — Voice SDK integration (DroneVoiceController, AppVoiceUI)
What you'll experience
When you run the Clock Tower scene, you see a multi-step glove fabrication sequence. You poke UI buttons to select a glove part and watch a 3D printer fabricate it using a timeline animation. You grab the printed part with HandGrab and drop it into a snap zone on the glove. After assembling all four parts and adding a crystal, you wear the gloves by placing your hands inside them — the sample detects position and rotation alignment. Once equipped, you trigger a palm blast with an open-hand pose or activate a two-handed shield. Drones spawn and you destroy them in timed combat rounds.
In the Street scene, a guided tutorial activates with visual prompts. An animated ghost hand shows you how to teleport using either hand gestures or a controller. After three successful teleports, the tutorial teaches left and right snap turns, waiting for each locomotion event before proceeding.
In the Mixed Reality scene, passthrough activates and the sample queries your room’s physical layout using MRUK. You use voice commands (“drone, fly to the target”) to control a drone, which responds by navigating to procedurally placed wall targets. The sample handles Android permission requests and falls back to a pre-built test room if your device hasn’t scanned the physical space.
In the Haptics scene, you grab kinesis modules that vibrate with amplitude and frequency modulated by your hand’s distance from the object. You launch a kite using a custom force-based transformer that triggers haptics proportional to throw velocity, and you spray paint with a can that uses trigger pressure to control haptic intensity.
Seamless hand and controller switching
The sample handles input switching through CompoundHandRef, which implements IHand and wraps a list of hand references (tracked hand and controller hand). Each frame, it selects the first connected hand and delegates all IHand calls to it. This pattern appears throughout the project so all interactions work with either input type without conditional logic in gameplay code.
// From CompoundHandRef.cs — selects the first connected hand each frame
var newBest = Hands.Find(x => x.IsConnected) ?? NullHand.instance;
View source: CompoundHandRef.cs
For ISDK hand reference patterns, see Hand Tracking in ISDK.
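To make the delegation pattern concrete, here is a trimmed-down sketch. ISDK's real IHand interface has many more members; the two-member ISimpleHand interface and the CompoundHand class below are simplified illustrations, not the sample's actual types:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Simplified sketch of the CompoundHandRef pattern: wrap several hand sources
// and forward every call to the first connected one.
public interface ISimpleHand
{
    bool IsConnected { get; }
    bool GetRootPose(out Pose pose);
}

public class CompoundHand : ISimpleHand
{
    private readonly List<ISimpleHand> _hands;
    public CompoundHand(List<ISimpleHand> hands) => _hands = hands;

    // The first connected hand (tracked hand or controller hand) wins.
    private ISimpleHand Best => _hands.Find(h => h.IsConnected);

    public bool IsConnected => Best != null;

    public bool GetRootPose(out Pose pose)
    {
        var best = Best;
        if (best != null) return best.GetRootPose(out pose);
        pose = Pose.identity;
        return false;
    }
}
```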
Simplified interaction state tracking
InteractionTracker wraps ISDK’s interactor/interactable event model and exposes simple queries like IsGrabbed() and TryGetHand(). The sample uses this to check interaction state without subscribing to events.
// From InteractionTracker.cs — checks if any grab-type interactor is selecting
public bool IsGrabbed() =>
_selectingInteractors.FindIndex(IsGrabInteractor) != -1;
View source: InteractionTracker.cs
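The idea generalizes to a small cache updated by events and queried synchronously. In this sketch the event hookup methods are hypothetical; the real InteractionTracker wires into ISDK's interactor/interactable events:

```csharp
using System;
using System.Collections.Generic;

// Sketch of the InteractionTracker idea: cache selecting interactors as events
// fire, then answer state queries without further event subscriptions.
public class SimpleInteractionTracker
{
    private readonly List<object> _selectingInteractors = new();
    private readonly Predicate<object> _isGrabInteractor;

    public SimpleInteractionTracker(Predicate<object> isGrabInteractor)
        => _isGrabInteractor = isGrabInteractor;

    // Call these from the interactable's selected/unselected events.
    public void OnSelectingInteractorAdded(object interactor) => _selectingInteractors.Add(interactor);
    public void OnSelectingInteractorRemoved(object interactor) => _selectingInteractors.Remove(interactor);

    // Mirrors the IsGrabbed() query shown above: is any grab-type interactor selecting?
    public bool IsGrabbed() => _selectingInteractors.FindIndex(_isGrabInteractor) != -1;
}
```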
Multi-step fabrication with snap zones
The glove assembly system chains together four steps: poke selection, timeline-driven printing, HandGrab retrieval, and snap zone placement. GloveBuildBehaviour coordinates these steps using IActiveState checks. Each snap zone emits events when items are placed, advancing the fabrication state machine.
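A hypothetical sketch of such a snap-driven state machine follows; the enum, handler names, and part count are illustrative of the flow described above, not GloveBuildBehaviour's actual implementation:

```csharp
using UnityEngine;

// Hypothetical sketch of a snap-driven fabrication state machine in the spirit
// of GloveBuildBehaviour.
public class GloveBuildSketch : MonoBehaviour
{
    private enum BuildStep { SelectPart, Printing, Retrieve, SnapToGlove, Done }

    private BuildStep _step = BuildStep.SelectPart;
    private int _partsPlaced;
    private const int PartsNeeded = 4;

    public void OnPartSelected()  { if (_step == BuildStep.SelectPart) _step = BuildStep.Printing; }
    public void OnPrintFinished() { if (_step == BuildStep.Printing)   _step = BuildStep.Retrieve; }
    public void OnPartGrabbed()   { if (_step == BuildStep.Retrieve)   _step = BuildStep.SnapToGlove; }

    // Called by a snap zone's "item placed" event.
    public void OnPartSnapped()
    {
        if (_step != BuildStep.SnapToGlove) return;
        _partsPlaced++;
        // Loop back for the next part until all four are on the glove.
        _step = _partsPlaced < PartsNeeded ? BuildStep.SelectPart : BuildStep.Done;
    }
}
```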
View source: GloveBuildBehaviour.cs
For snap interactables, see ISDK Snap Interactions.
Pose detection for superpowers
The palm blast detects an open-hand pose and fires projectiles from an object pool. The shield detects a two-hand pose and positions a shield GameObject at the midpoint between hands using IHand.GetRootPose(). Both systems use ISDK’s ActiveStateSelector to trigger on specific hand shapes.
// From PlayerShield.cs
Vector3 midpoint = Vector3.Lerp(leftPose.position, rightPose.position, 0.5f);
_shield.position = midpoint;
View source: PlayerShield.cs
For pose detection setup, see ISDK Pose Detection.
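Wiring a pose to an action can look like the sketch below, assuming ISDK's ISelector contract (WhenSelected / WhenUnselected events) that ActiveStateSelector exposes; the PalmBlastTrigger class and FireBlast hook are hypothetical:

```csharp
using UnityEngine;
using Oculus.Interaction;

// Sketch of triggering a power from a hand pose via ActiveStateSelector.
public class PalmBlastTrigger : MonoBehaviour
{
    // Assign an ActiveStateSelector configured with the open-hand shape.
    [SerializeField] private ActiveStateSelector _openPalmSelector;

    private void OnEnable()  => _openPalmSelector.WhenSelected += FireBlast;
    private void OnDisable() => _openPalmSelector.WhenSelected -= FireBlast;

    private void FireBlast()
    {
        // Pull a projectile from the pool and launch it from the palm (not shown).
        Debug.Log("Palm blast triggered by open-hand pose");
    }
}
```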
Voice command integration
DroneVoiceController uses AppVoiceExperience (wit.ai) to recognize four commands: fly to, pick up, drop, and scan. Each command maps to a DroneCommandPreset ScriptableObject, and DroneCommandHandler dispatches commands to the drone’s AI system. This decouples voice recognition from gameplay logic.
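The preset-based dispatch might be shaped like the sketch below; the field names and classes are hypothetical stand-ins for the sample's DroneCommandPreset and DroneCommandHandler:

```csharp
using UnityEngine;

// Hypothetical shape of the preset-based voice dispatch described above.
[CreateAssetMenu(menuName = "FirstHand/DroneCommandPreset")]
public class DroneCommandPresetSketch : ScriptableObject
{
    public string intentName;   // wit.ai intent, e.g. "fly_to"
    public string displayLabel; // UI feedback shown when recognized
}

public class DroneCommandHandlerSketch : MonoBehaviour
{
    [SerializeField] private DroneCommandPresetSketch[] _presets;

    // Called with the intent name resolved by voice recognition.
    public void Dispatch(string intentName)
    {
        foreach (var preset in _presets)
        {
            if (preset.intentName == intentName)
            {
                // Hand off to the drone AI; voice recognition never touches it directly.
                Debug.Log($"Dispatching drone command: {preset.displayLabel}");
                return;
            }
        }
    }
}
```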
View source: DroneVoiceController.cs
For Voice SDK integration, see Voice SDK Overview.
Runtime haptic modulation
The sample modulates haptic clip playback using HapticClipPlayer. KiteInputHaptics adjusts amplitude and frequency based on hand distance from the kite. KiteThrow (a custom ITransformer) triggers haptics proportional to throw force. SprayCan uses IHandGrabUseDelegate to vary intensity with trigger pressure.
// From KiteInputHaptics.cs
_player.amplitude = Mathf.Lerp(0.2f, 1f, normalizedDistance);
_player.frequencyShift = Mathf.Lerp(-0.5f, 0.5f, normalizedDistance);
View source: KiteInputHaptics.cs
For haptics authoring and playback, see Haptics SDK for Unity.
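For the spray can case, a pressure-to-haptics sketch could look like this, assuming ISDK's IHandGrabUseDelegate contract (BeginUse / EndUse / ComputeUseStrength) and the Haptics SDK's HapticClipPlayer shown above; the SprayCanHapticsSketch class is illustrative:

```csharp
using UnityEngine;
using Oculus.Interaction.HandGrab;
using Oculus.Haptics;

// Sketch of trigger-pressure-driven haptic intensity for a grabbed spray can.
public class SprayCanHapticsSketch : MonoBehaviour, IHandGrabUseDelegate
{
    [SerializeField] private HapticClip _sprayClip;
    private HapticClipPlayer _player;

    private void Awake() => _player = new HapticClipPlayer(_sprayClip);

    public void BeginUse() => _player.Play(Controller.Right);
    public void EndUse()   => _player.Stop();

    public float ComputeUseStrength(float strength)
    {
        // Trigger pressure (0..1) maps directly onto haptic amplitude.
        _player.amplitude = strength;
        return strength;
    }
}
```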
Procedural mixed reality content placement
MRWallPlacement queries the current room using MRUK.GetCurrentRoom() and generates random positions on wall surfaces with GenerateRandomPositionOnSurface(). It prioritizes positions facing the player by sorting candidates based on their alignment with the player’s forward direction.
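The "prefer walls the player is facing" ordering reduces to a dot-product sort. The sketch below elides the MRUK candidate gathering and shows only the alignment comparison; the WallTargetSorter class is illustrative:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of sorting candidate wall positions by how directly the player faces them.
public static class WallTargetSorter
{
    public static void SortByPlayerAlignment(List<Vector3> candidates, Transform playerHead)
    {
        candidates.Sort((a, b) =>
        {
            // Higher dot product = more directly in front of the player.
            float alignA = Vector3.Dot(playerHead.forward, (a - playerHead.position).normalized);
            float alignB = Vector3.Dot(playerHead.forward, (b - playerHead.position).normalized);
            return alignB.CompareTo(alignA); // descending: best-aligned first
        });
    }
}
```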
View source: MRWallPlacement.cs
For scene understanding APIs, see Scene Understanding (MRUK).
Composable state management with IActiveState
The sample uses IActiveState throughout to represent boolean conditions (button pressed, item grabbed, scene loaded). Implementations include ReferenceActiveState, ActiveStateExpectation, ActiveStateObserver, and ConfigurableActiveState (which supports AND, OR, NOT logic). This keeps systems decoupled and testable.
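Composition over IActiveState can be sketched with small combinators in the spirit of ConfigurableActiveState; the AllActive and NotActive classes below are illustrative, not the sample's own types:

```csharp
using System.Linq;
using Oculus.Interaction;

// Sketch of composing IActiveState conditions (interface: bool Active { get; }).
public class AllActive : IActiveState
{
    private readonly IActiveState[] _states;
    public AllActive(params IActiveState[] states) => _states = states;
    public bool Active => _states.All(s => s.Active);   // logical AND
}

public class NotActive : IActiveState
{
    private readonly IActiveState _state;
    public NotActive(IActiveState state) => _state = state;
    public bool Active => !_state.Active;               // logical NOT
}

// Usage: gate the shield on "both gloves worn AND not currently blasting".
// var shieldReady = new AllActive(leftGloveWorn, rightGloveWorn, new NotActive(blasting));
```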
View source: IActiveState implementations
Ideas for extending the sample:
- Add a new glove part to the fabrication sequence by creating a snap zone, updating GloveBuildBehaviour state transitions, and authoring a print timeline
- Implement a new superpower using pose detection and IActiveState triggers, following the palm blast or shield patterns
- Create a new voice command by adding a wit.ai intent, defining a DroneCommandPreset, and extending DroneCommandHandler
- Design a custom haptic clip for a new interaction and integrate it with HapticClipPlayer runtime modulation
For related samples, see Unity Interaction SDK Samples.