Develop

Hand Pose Showcase sample

Updated: May 11, 2026

Overview

This sample demonstrates hand pose recognition, gesture detection, far-field selection, and force grab mechanics using the OculusHandTools plugin for Unreal Engine on Meta Quest. It showcases how to encode hand poses with weighted bone matching, build gesture state machines, implement cone-based raycasting for far-field selection, and apply distance-based velocity scaling to force grab interactions.

What you will learn

  • Encode hand poses using bone rotation data with per-bone weight multipliers
  • Build gesture state machines that recognize multi-pose sequences
  • Implement cone-based raycasting for far-field object selection with stickiness
  • Create force grab mechanics with distance-based exponential velocity scaling
  • Configure custom collision channels for interactable objects

Requirements

  • Meta Quest 2, Quest 3, or Quest 3S
  • Unreal Engine configured for Meta Quest development
For setup instructions, see the Meta Quest Developer Hub documentation.

Get started

Clone the repository from GitHub, open the project in Unreal Engine, and build for Android. The project uses the OculusHandTools plugin, which contains three modules: OculusHandPoseRecognition, OculusInteractable, and OculusUtils. For detailed build and configuration steps, see the project README.

Explore the sample

Feature | What it demonstrates | Key concepts
--- | --- | ---
Pose encoding | Storing hand poses as bone rotation data | Pitch/yaw/roll per bone, compact string encoding
Weighted bone matching | Emphasizing specific bones for pose recognition | ×3 multipliers on key bones, omitting irrelevant bones
Gesture state machine | Detecting sequential pose transitions | Point/200 states, Flick gesture
Cone-based raycasting | Far-field object selection with acquisition/stickiness | AInteractableSelector, 10° acquisition, 20° stickiness
Force grab | Pulling distant objects with velocity scaling | Distance-based velocity (1× at 20 cm, 4× at 40 cm)
Custom collision | Dedicated channel for interactables | ECC_GameTraceChannel1, “Interactable” channel

Runtime behavior

When running on a Meta Quest device, the sample recognizes hand poses in real time by comparing bone rotations against stored pose data. The Gun pose uses ×3 weight multipliers on critical bones and omits the middle finger entirely from matching. Gestures like Point/200 and Flick are detected through a state machine that tracks transitions between recognized poses. AInteractableSelector uses a cone-based raycast with a 10° acquisition angle for initial selection and a 20° stickiness angle to prevent accidental deselection. Force grab pulls objects toward the hand with exponential velocity scaling: 1× speed at 20 cm, ramping to 4× speed at 40 cm.

Key concepts

Pose encoding with weighted bones

Hand poses are encoded by storing bone rotation values (pitch, yaw, roll) for each bone. Different poses can weight specific bones more heavily:
// Gun pose example: weight index finger bones by 3x, omit middle finger
// This makes the recognizer more sensitive to index finger position
// while ignoring middle finger state entirely
BoneWeight[IndexFingerBone] = 3.0f;
BoneWeight[MiddleFingerBone] = 0.0f; // Omitted from matching
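
To see how those weights are used, a recognizer along these lines can score the current hand against a stored pose by summing per-bone angular error scaled by each bone's weight. This is a minimal sketch with illustrative types and names, not the plugin's actual API:

#include "Math/Rotator.h"
#include "Containers/Array.h"

// Illustrative pose representation; the plugin's actual types differ.
struct FWeightedBonePose
{
    TArray<FRotator> BoneRotations; // Stored pitch/yaw/roll per bone
    TArray<float> BoneWeights;      // 3.0f emphasizes a bone, 0.0f omits it
};

// Lower score = closer match. A zero-weight bone contributes nothing,
// which is how the Gun pose ignores the middle finger entirely.
float ScorePoseMatch(const FWeightedBonePose& Reference, const TArray<FRotator>& CurrentBones)
{
    float Score = 0.0f;
    for (int32 i = 0; i < Reference.BoneRotations.Num(); ++i)
    {
        const float AngularError = Reference.BoneRotations[i].GetManhattanDistance(CurrentBones[i]);
        Score += Reference.BoneWeights[i] * AngularError;
    }
    return Score;
}

With this scoring, a 3.0 weight makes index finger error dominate the total, while a zero weight drops the middle finger from consideration, matching the Gun pose behavior described above.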

Gesture state machine

The gesture recognizer tracks sequences of poses over time. A Flick gesture, for example, is detected when the hand moves from the Point state to the 200 state within the timing window:
// Gesture states: Point -> 200 = Flick
// Each state transition requires the target pose confidence
// to exceed threshold within the timing window
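
As a rough illustration of this pattern, the tracker below fires when the Point state is followed by the 200 state inside the window. The struct, the per-frame pose string, and the 0.2-second window are assumptions for the example, not the plugin's actual API:

#include "Containers/UnrealString.h"

// Illustrative two-pose sequence tracker; not the plugin's actual API.
struct FPoseSequenceTracker
{
    FString FirstPose = TEXT("Point");
    FString SecondPose = TEXT("200");
    double TimingWindowSeconds = 0.2; // Assumed window for the example
    bool bFirstPoseSeen = false;
    double FirstPoseTime = 0.0;

    // Call once per frame with the best-matching pose name and current time.
    // Returns true on the frame the full sequence (a Flick) completes.
    bool Update(const FString& CurrentPose, double Now)
    {
        if (CurrentPose == FirstPose)
        {
            bFirstPoseSeen = true;
            FirstPoseTime = Now;
            return false;
        }
        if (bFirstPoseSeen && Now - FirstPoseTime > TimingWindowSeconds)
        {
            bFirstPoseSeen = false; // Window expired; start over
        }
        if (bFirstPoseSeen && CurrentPose == SecondPose)
        {
            bFirstPoseSeen = false;
            return true; // Point then 200 within the window: Flick
        }
        return false;
    }
};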

Cone-based raycasting

AInteractableSelector uses a cone-shaped raycast for far-field selection. The acquisition cone (10°) is narrower than the stickiness cone (20°), creating hysteresis that prevents flickering between targets:
// Acquisition: object must be within 10° of ray center to select
// Stickiness: selected object stays active within 20° of ray center
// This prevents rapid selection switching during hand movement
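
One way to implement that hysteresis is to keep the current target while it remains inside the wider cone and only acquire a new target inside the narrower one. A minimal sketch, assuming a normalized RayDir, a precomputed candidate list, and hypothetical names; only the two angles come from the sample:

#include "GameFramework/Actor.h"
#include "Math/UnrealMathUtility.h"

// Illustrative selection update with acquisition/stickiness hysteresis.
AActor* UpdateSelection(AActor* CurrentSelection, const FVector& RayOrigin,
                        const FVector& RayDir, const TArray<AActor*>& Candidates)
{
    const float AcquisitionAngleDeg = 10.0f; // From the sample
    const float StickinessAngleDeg = 20.0f;  // From the sample

    auto AngleToActor = [&](const AActor* Actor) -> float
    {
        const FVector ToActor = (Actor->GetActorLocation() - RayOrigin).GetSafeNormal();
        const float Dot = FMath::Clamp<float>(FVector::DotProduct(RayDir, ToActor), -1.0f, 1.0f);
        return FMath::RadiansToDegrees(FMath::Acos(Dot));
    };

    // Stickiness: the current selection survives anywhere inside the wider cone.
    if (CurrentSelection && AngleToActor(CurrentSelection) <= StickinessAngleDeg)
    {
        return CurrentSelection;
    }

    // Acquisition: pick the candidate nearest the ray center, but only
    // if it falls inside the narrower 10° cone.
    AActor* Best = nullptr;
    float BestAngle = AcquisitionAngleDeg;
    for (AActor* Candidate : Candidates)
    {
        const float Angle = AngleToActor(Candidate);
        if (Angle <= BestAngle)
        {
            BestAngle = Angle;
            Best = Candidate;
        }
    }
    return Best;
}

Because the stickiness cone is twice as wide as the acquisition cone, small hand tremors near the 10° boundary cannot cause rapid select/deselect flips.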

Force grab with distance scaling

Force grab velocity scales exponentially with distance to create natural-feeling pulls:
// Velocity scaling: exponential curve from 1x at 20cm to 4x at 40cm
// Closer objects move slowly for precision
// Farther objects accelerate to reduce wait time
float VelocityScale = FMath::InterpExpoOut(1.0f, 4.0f, DistanceFactor);
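
Putting that curve into a pull update might look like the sketch below. The 20 cm and 40 cm constants and the InterpExpoOut call come from the sample; the function shape, BasePullSpeed, and the physics call are assumptions for illustration:

#include "Components/PrimitiveComponent.h"
#include "Math/UnrealMathUtility.h"

// Illustrative per-tick pull toward the hand; names are assumed.
void ApplyForceGrabPull(UPrimitiveComponent* GrabbedComp,
                        const FVector& HandLocation, float BasePullSpeed)
{
    const float MinDist = 20.0f; // cm: 1x pull speed at or below this distance
    const float MaxDist = 40.0f; // cm: 4x pull speed at or beyond this distance

    const FVector ToHand = HandLocation - GrabbedComp->GetComponentLocation();
    const float Distance = ToHand.Size();

    // Normalize the distance into [0, 1] across the 20 cm..40 cm ramp.
    const float DistanceFactor =
        FMath::Clamp((Distance - MinDist) / (MaxDist - MinDist), 0.0f, 1.0f);

    // Exponential ease-out from 1x to 4x, as in the sample's scaling.
    const float VelocityScale = FMath::InterpExpoOut(1.0f, 4.0f, DistanceFactor);

    GrabbedComp->SetPhysicsLinearVelocity(ToHand.GetSafeNormal() * BasePullSpeed * VelocityScale);
}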

Custom collision channel

The sample uses a custom “Interactable” collision channel (ECC_GameTraceChannel1) to separate interactable object traces from standard physics collision:
// Project Settings > Collision > New Trace Channel: "Interactable"
// Maps to ECC_GameTraceChannel1
// Used by AInteractableSelector for cone-based raycasting
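
For reference, a trace against that channel might look like the following sketch; the channel constant comes from the sample, while the helper name and parameters are assumptions:

#include "Engine/World.h"
#include "Engine/HitResult.h"

// The sample's custom channel; helper name and parameters are assumed.
static constexpr ECollisionChannel InteractableChannel = ECC_GameTraceChannel1;

bool TraceForInteractable(UWorld* World, const FVector& Start,
                          const FVector& Dir, float Range, FHitResult& OutHit)
{
    FCollisionQueryParams Params;
    Params.bTraceComplex = false;

    // Only objects that respond to the Interactable channel can be hit,
    // so ordinary world geometry never intercepts the selection ray.
    return World->LineTraceSingleByChannel(
        OutHit, Start, Start + Dir * Range, InteractableChannel, Params);
}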

Extend the sample

  • Add new hand poses by recording bone rotations and adjusting per-bone weights.
  • Create complex gesture sequences by chaining additional pose states in the state machine.
  • Tune the acquisition and stickiness cone angles for different interaction distances (see the sketch after this list).
  • Adjust the force grab velocity curve for heavier or lighter objects.
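
Exposing those tuning values as editable properties keeps iteration fast. A minimal sketch, assuming a hypothetical selector actor of your own; only the default angle and scale values come from the sample:

#include "GameFramework/Actor.h"
#include "TunableSelector.generated.h"

// Hypothetical actor exposing the sample's tuning values in the editor.
UCLASS()
class ATunableSelector : public AActor
{
    GENERATED_BODY()

public:
    UPROPERTY(EditAnywhere, Category = "Selection")
    float AcquisitionAngleDeg = 10.0f; // Narrow cone for initial selection

    UPROPERTY(EditAnywhere, Category = "Selection")
    float StickinessAngleDeg = 20.0f;  // Wider cone that holds the selection

    UPROPERTY(EditAnywhere, Category = "Force Grab")
    float MinVelocityScale = 1.0f;     // Pull speed multiplier at 20 cm

    UPROPERTY(EditAnywhere, Category = "Force Grab")
    float MaxVelocityScale = 4.0f;     // Pull speed multiplier at 40 cm
};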