Movement SDK for Unreal

Updated: Apr 15, 2026

Movement Overview

An illustration of how the Movement SDK takes real-world movements and mirrors them in virtual avatars.
Movement SDK for Unreal uses body tracking, face tracking, and eye tracking to bring a user’s physical movements into virtual environments and enhance social experiences. By using the abstracted signals that tracking provides, you can animate characters with social presence and provide features beyond character embodiment.
You can find the samples for body tracking, face tracking, and eye tracking in the Movement SDK Sample for Unreal GitHub repository.

Feature overview

Body tracking

Body tracking for Meta Quest devices is an API that uses hand and/or controller movements, together with headset movement, to infer the body pose of the user. These body poses are represented as transforms in 3D space and are composed into a body tracking skeleton. Much as a video is composed of many still frames per second, repeatedly calling the body tracking API lets you infer the movements of the person wearing the headset.
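As a minimal sketch of this per-frame polling pattern, the snippet below queries the body state once per tick and reads the joint transforms. The function and type names (OculusXRMovement::GetBodyState, FOculusXRBodyState, FOculusXRBodyJoint) and header paths are assumptions based on the OculusXR Movement plugin and may differ in your plugin version; treat this as an illustration of the polling pattern rather than a definitive integration.

```cpp
// Sketch only: poll the body tracking API once per frame and read joint transforms.
// Type, function, and header names below are assumptions; check your OculusXR plugin headers.
#include "CoreMinimal.h"
#include "Engine/World.h"
#include "GameFramework/WorldSettings.h"
#include "OculusXRMovementTypes.h"      // assumed header for FOculusXRBodyState
#include "OculusXRMovementFunctions.h"  // assumed header for OculusXRMovement::GetBodyState

// Call this from an actor's or component's Tick to read the latest inferred pose.
void PollBodyPose(UWorld* World)
{
    if (!World)
    {
        return;
    }

    FOculusXRBodyState BodyState;
    const float WorldToMeters = World->GetWorldSettings()->WorldToMeters;

    // Each call returns the latest inferred pose; polling every frame yields a
    // stream of poses, like the individual frames of a video.
    if (OculusXRMovement::GetBodyState(BodyState, WorldToMeters))
    {
        for (const FOculusXRBodyJoint& Joint : BodyState.Joints)
        {
            if (!Joint.bIsValid)
            {
                continue;
            }

            // Joint poses are in tracking space; compose them with your tracking-origin
            // transform before driving a rig or gameplay logic.
            const FTransform JointTransform(Joint.Orientation, Joint.Position);
            // ... use JointTransform here.
        }
    }
}
```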

Use cases

  • You can use the body tracking joints to analyze a person's movement and determine body posture or compliance with proper exercise form.
  • By mapping the joints of the skeleton onto a character rig, you can animate the character to reflect human motions for gameplay or for production animation.
  • Likewise, you can use body joint data in your gameplay to detect whether the user has hit a target or dodged a projectile (see the sketch after this list).
  • While body poses are typically mapped to a humanoid rig, you can also map them to non-playable characters.
  • For research and usability studies, you can collect data about users' body movements while they interact with your apps or games, provided you give them appropriate notice.
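As a minimal illustration of the gameplay use case above, the helpers below test a tracked joint position against a target sphere and against a projectile's flight path. The joint positions are assumed to already be converted to world space (for example, from the polling sketch above); the function names and thresholds are illustrative.

```cpp
// Sketch: simple gameplay checks driven by body joint positions (assumed to be in world space).
#include "CoreMinimal.h"
#include "Math/UnrealMathUtility.h"
#include "Math/Vector.h"

// Returns true if a tracked joint (for example a hand) is inside a spherical target.
bool IsJointInsideTarget(const FVector& JointWorldPosition,
                         const FVector& TargetCenter,
                         float TargetRadius)
{
    return FVector::DistSquared(JointWorldPosition, TargetCenter) <= FMath::Square(TargetRadius);
}

// Returns true if the head joint stays far enough from a projectile's path to count as a dodge.
bool HasDodgedProjectile(const FVector& HeadWorldPosition,
                         const FVector& ProjectileStart,
                         const FVector& ProjectileDirection,
                         float DodgeDistance)
{
    const FVector PathDir = ProjectileDirection.GetSafeNormal();
    const FVector ClosestPointOnPath =
        ProjectileStart + PathDir * FVector::DotProduct(HeadWorldPosition - ProjectileStart, PathDir);
    return FVector::Dist(HeadWorldPosition, ClosestPointOnPath) > DodgeDistance;
}
```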

Known issues and limitations

  1. When using Inside-Out Body Tracking (IOBT) with controllers, you may see jitter when the controllers are in positions where tracking is difficult (for example, hands above the head).
  2. For Generative Legs (which provide a full-body skeleton), the approximate height from the headset to the floor is measured when body tracking starts, so each session should begin with the user standing. If the body scale does not look correct, press the Meta Quest button twice to reset calibration. The Guardian must be initialized for this calibration.

Environmental restrictions

Inside-Out Body Tracking (IOBT) is based on visibility from the cameras in the headset, so there are some limitations. These limitations reflect current tracking capabilities and may improve in future SDK releases.
Occlusion: Tracking may be lost when one body part is occluded by an object (for example, a table), by another body part (such as moving your arms behind your back), or when the body is close to a wall. For this reason, and also for safety, we recommend keeping the area around you clear of clutter and obstructions, and avoiding dependence on motions in which the arms or torso become obstructed.
Lighting: Strong background lighting can cause shadows or other visual artifacts that may impact the headset's ability to detect arms and hands. Very dim lighting can also cause issues. For best performance, use a well-lit room that is not close to large windows with direct sunlight.
Rig requirement: To support body tracking, your skeleton needs to:
  • Have a spine starting at the pelvis / hips and ending with the neck / head
  • Have two arms connected to the spine at the chest, and typically 5 fingers on each hand
  • For full body tracking: Have two legs, connected to the pelvis / hips
  • For best results, your character should include all bones listed in the Body Joint Visual Reference.

Supported devices

Body tracking is supported on Meta Quest 2, Meta Quest 3, Meta Quest 3S, and Meta Quest Pro. Inside-Out Body Tracking (IOBT), which enables full-body tracking, is available on Meta Quest 3 and Meta Quest 3S.

Face tracking

Face tracking detects facial movements and maps them to expressions. On Meta Quest Pro, it uses inward-facing cameras for visual detection. On devices without inward-facing cameras, such as Meta Quest 3, it relies on microphone audio to estimate facial movements. These movements are categorized into expressions based on the Facial Action Coding System (FACS), which breaks facial movements down into expressions corresponding to common facial muscle movements, such as raising an eyebrow or wrinkling your nose, or combinations of these movements. For example, a smile can combine the right and left lip corner pullers around the mouth with the cheeks moving and the eyes slightly closing, so it is common for multiple motions to blend together at the same time.
To represent this in immersive or blended apps (VR or AR/MR), the common practice is to use morph targets (also known as blendshapes), each with a strength value that indicates how strongly the face is expressing that action. The face tracking API conveys each facial expression as a defined morph target with a strength that indicates that morph target's activation.
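A minimal sketch of applying those strengths to a character is shown below. USkeletalMeshComponent::SetMorphTarget is standard Unreal API; the name-to-weight map, the expression names, and where the weights come from (for example, the face tracking state exposed by the OculusXR plugin) are assumptions you would replace with your plugin's actual types and your character's morph target names.

```cpp
// Sketch: drive a character's morph targets from face expression weights.
// The TMap of expression name -> weight is assumed to be filled from the face
// tracking API each frame; names must match morph targets on your character.
#include "CoreMinimal.h"
#include "Components/SkeletalMeshComponent.h"
#include "Containers/Map.h"
#include "UObject/NameTypes.h"

void ApplyFaceExpressionWeights(USkeletalMeshComponent* FaceMesh,
                                const TMap<FName, float>& ExpressionWeights)
{
    if (!FaceMesh)
    {
        return;
    }

    // Each expression weight is an activation strength in [0, 1]; setting several
    // morph targets per frame blends multiple facial actions together.
    for (const TPair<FName, float>& Expression : ExpressionWeights)
    {
        FaceMesh->SetMorphTarget(Expression.Key, Expression.Value);
    }
}
```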

Use cases

  • You can directly interpret morph target weights to determine whether the user's eyes are open, or whether they are blinking or smiling (see the sketch after this list).
  • You can also combine morph targets together and retarget them to a character to provide Natural Facial Expressions.
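A minimal sketch of such direct interpretation follows. The expression names ("EyesClosedL", "EyesClosedR") and the threshold are illustrative assumptions; substitute the expression identifiers and a threshold appropriate to your face tracking plugin.

```cpp
// Sketch: read expression weights directly for app logic. Expression names and the
// threshold below are illustrative; use the identifiers your plugin actually exposes.
#include "CoreMinimal.h"
#include "Containers/Map.h"
#include "UObject/NameTypes.h"

bool AreEyesClosed(const TMap<FName, float>& ExpressionWeights, float Threshold = 0.7f)
{
    const float* LeftEye  = ExpressionWeights.Find(FName(TEXT("EyesClosedL")));
    const float* RightEye = ExpressionWeights.Find(FName(TEXT("EyesClosedR")));
    return LeftEye && RightEye && *LeftEye > Threshold && *RightEye > Threshold;
}
```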

Known issues and limitations

  1. Facial hair, or clothing such as a mask that obscures the face, may prevent the device from correctly detecting facial movements.
Rig requirement: Face tracking is based on morph targets, so if you use it for character embodiment, the tracked expressions must be correctly mapped to the character's morph targets.

Supported devices

  • Meta Quest Pro supports visual-based face tracking.
  • Audio-based face tracking is provided on all Meta Quest 2 and later devices.

Eye tracking

Eye tracking uses the headset’s internal sensors to determine gaze direction for each eye. The API provides an abstracted eye gaze representation (gaze state per eye) that you can use to drive social presence and gaze-based interactions in your application.
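As a rough sketch of a gaze-based interaction, the snippet below traces along the averaged gaze direction to find what the user is looking at. OculusXRMovement::GetEyeGazesState, FOculusXREyeGazesState, and the header paths are assumptions based on the OculusXR Movement plugin and may differ in your version; the head-to-world transform would come from your camera or HMD component.

```cpp
// Sketch: trace along the user's gaze to find what they are looking at.
// Type, function, and header names for the eye tracking state are assumptions;
// verify them against your OculusXR plugin headers.
#include "CoreMinimal.h"
#include "Engine/World.h"
#include "Engine/HitResult.h"
#include "GameFramework/WorldSettings.h"
#include "OculusXRMovementTypes.h"      // assumed header
#include "OculusXRMovementFunctions.h"  // assumed header

bool TraceGazeTarget(UWorld* World, const FTransform& HeadToWorld,
                     FHitResult& OutHit, float MaxDistance = 1000.f)
{
    if (!World)
    {
        return false;
    }

    FOculusXREyeGazesState EyeGazesState;
    const float WorldToMeters = World->GetWorldSettings()->WorldToMeters;
    if (!OculusXRMovement::GetEyeGazesState(EyeGazesState, WorldToMeters))
    {
        return false;
    }

    // Average the two per-eye gaze directions (assumed to be in tracking space),
    // then convert the result to world space through the head transform.
    const FVector LeftDir  = EyeGazesState.EyeGazes[0].Orientation.RotateVector(FVector::ForwardVector);
    const FVector RightDir = EyeGazesState.EyeGazes[1].Orientation.RotateVector(FVector::ForwardVector);
    const FVector GazeDirWorld = HeadToWorld.TransformVectorNoScale((LeftDir + RightDir).GetSafeNormal());
    const FVector Start = HeadToWorld.GetLocation();

    return World->LineTraceSingleByChannel(OutHit, Start, Start + GazeDirWorld * MaxDistance, ECC_Visibility);
}
```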

Use cases

  • The abstracted eye gaze representation that the API provides (gaze state per eye) allows a user’s character representation to make eye contact with other users. This can significantly improve your users’ social presence.
  • You can also use eye tracking to determine where in 3D space the person is looking. This can give you a good understanding of regions of interest, or serve as a targeting mechanism in games.
Eye Tracking Rig

Known issues and limitations

  1. The eye tracking API is intended for social presence, so it applies limits and smoothing to the tracked gaze to avoid unnatural eye movements.
Rig requirement: For eye tracking to work, your character must have eye meshes that are rigged to bones, one for each eye.

Supported devices

  • Meta Quest Pro
  • Meta Quest 3S

LiveLink support

The Movement SDK provides LiveLink support through ULiveLinkOculusXRMovementSourceFactory, which exposes body, face, eye, and face viseme data as LiveLink subjects. This offers an alternative workflow for retargeting body and face tracking data using Unreal’s LiveLink system. Retarget assets are available for body and face tracking. For working examples of LiveLink-based movement tracking, see the Movement SDK Sample for Unreal.
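In most projects you add the source in the editor's LiveLink window, but as a sketch of doing it programmatically, the snippet below uses the factory's class default object with standard LiveLink APIs. ULiveLinkOculusXRMovementSourceFactory is named on this page; the factory header name and the empty connection string are assumptions about this particular plugin.

```cpp
// Sketch: programmatically add the Movement LiveLink source so that body, face, eye,
// and viseme subjects become available to LiveLink. Standard LiveLink APIs are used;
// the factory header name and connection string are assumptions.
#include "CoreMinimal.h"
#include "Features/IModularFeatures.h"
#include "ILiveLinkClient.h"
#include "ILiveLinkSource.h"
#include "LiveLinkSourceFactory.h"
#include "LiveLinkOculusXRMovementSourceFactory.h"  // assumed header

void AddMovementLiveLinkSource()
{
    IModularFeatures& ModularFeatures = IModularFeatures::Get();
    if (!ModularFeatures.IsModularFeatureAvailable(ILiveLinkClient::ModularFeatureName))
    {
        return;
    }

    ILiveLinkClient& LiveLinkClient =
        ModularFeatures.GetModularFeature<ILiveLinkClient>(ILiveLinkClient::ModularFeatureName);

    // Create a source from the factory's class default object; an empty connection
    // string is assumed to be sufficient for a local (on-device) source.
    const ULiveLinkSourceFactory* Factory = GetDefault<ULiveLinkOculusXRMovementSourceFactory>();
    if (TSharedPtr<ILiveLinkSource> Source = Factory->CreateSource(FString()))
    {
        LiveLinkClient.AddSource(Source);
    }
}
```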

What’s next

To start integrating movement tracking in your project, follow the guides below:

Design guidelines

Design guidelines are Meta’s human interface standards and design frameworks that help you create safe, user-oriented, and retainable immersive and passthrough user experiences.

User considerations

Inputs

  • Input modalities: Explore the different input modalities.
  • Head: Design and UX best practices for head input.
  • Hands: Design and UX best practices for using hands.
  • Controllers: Design and UX best practices for using controllers.
  • Voice: Design and UX best practices for using voice.
  • Peripherals: Design and UX best practices for using peripherals.