An illustration of how the Movement SDK takes real-world movements and mirrors them in virtual avatars.
Movement SDK for Unreal uses body tracking, face tracking, and eye tracking to bring a user’s physical movements into the metaverse and enhance social experiences. By using the abstracted signals that tracking provides, you can animate characters with social presence and provide features beyond character embodiment.
You can find the samples for body tracking, face tracking, and eye tracking in the Oculus Samples GitHub repo and in the OculusVR Unreal Engine fork.
Feature overview
Body tracking
After completing this section, you should:
Understand the use cases for body tracking.
Understand the restrictions for the end users of this feature.
Understand the known problems to determine if there are blockers that would prevent their use case.
Body Tracking for Meta Quest devices is an API that uses hand, controller, and headset movements to infer the user's body poses. These poses are represented as transforms in 3D space and are composed into a body tracking skeleton.
This works like a video composed of many still frames per second: by repeatedly calling the Body Tracking API, you can infer the movements of the person wearing the headset.
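The per-frame update described above can be sketched in plain C++. This is a minimal illustration, not SDK code: the types and the joint count are assumptions standing in for the transforms the Body Tracking API returns each frame (an Unreal app would use `FVector`/`FTransform` and the SDK's own joint enumeration).

```cpp
#include <array>
#include <cstddef>

// Hypothetical, simplified stand-ins for the SDK's types: a real app receives
// joint transforms from the Body Tracking API each frame.
struct Vec3 { float x, y, z; };
struct JointPose { Vec3 position; bool isTracked; };

constexpr std::size_t kNumJoints = 70; // illustrative full-body joint count

using BodyFrame = std::array<JointPose, kNumJoints>;

// Each polled frame overwrites the avatar's joint poses, so successive frames
// play back like the still shots that compose a video.
void ApplyBodyFrame(const BodyFrame& frame, BodyFrame& avatarJoints) {
    for (std::size_t i = 0; i < kNumJoints; ++i) {
        if (frame[i].isTracked) {  // keep the last good pose if tracking dropped
            avatarJoints[i] = frame[i];
        }
    }
}
```

Keeping the last tracked pose for untracked joints avoids the avatar snapping to a default pose during brief tracking loss.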
Use cases
You can use the body tracking joints to analyze a person's movement and determine body posture or compliance with exercise form.
By mapping the joints of the skeleton onto a character rig, you can animate the character to reflect human motions for game play or for production animation.
Likewise, you can use body joint data in your gameplay, for example to register hits on targets or to detect whether the user has dodged a projectile.
While body poses are typically mapped to a humanoid rig, you can also map them to non-playable characters.
For research and usability study purposes, you can collect data about user body movement while interacting with your apps or games, but appropriate notice should be given.
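The projectile-dodge use case above can be sketched as a simple proximity test between a tracked joint and the projectile. This is a hedged illustration: the `Point3` type, function names, and hit radius are assumptions; in a real app the head position would come from the body tracking skeleton.

```cpp
#include <cmath>

// Illustrative stand-in for a tracked joint position from the body skeleton.
struct Point3 { float x, y, z; };

float Distance(const Point3& a, const Point3& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// The user dodged if the tracked head joint stays outside the projectile's
// hit radius (a gameplay tuning value, not an SDK constant).
bool Dodged(const Point3& headJoint, const Point3& projectile, float hitRadius) {
    return Distance(headJoint, projectile) > hitRadius;
}
```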
Known issues and limitations
When using Inside-Out Body Tracking (IOBT) with controllers, you may see jitter when the controllers are in positions where tracking is difficult (for example, hands above the head).
For Generative Legs (which provide a full-body skeleton), the approximate height from headset to floor is measured when body tracking starts. Each session should be started with the user standing. If the body scale does not look correct, press the Oculus button twice to reset calibration. The Guardian must be initialized for this calibration.
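One common use of the measured headset-to-floor height is scaling the avatar rig to match the user. The sketch below is an assumption about how such a scale factor could be derived, not SDK behavior; the function name and the unscaled fallback are illustrative.

```cpp
// Hypothetical sketch: derive a uniform avatar scale from the calibrated user
// height and the rig's authored height (both in meters).
float ComputeAvatarScale(float measuredUserHeight, float rigHeight) {
    if (measuredUserHeight <= 0.0f || rigHeight <= 0.0f) {
        return 1.0f; // calibration not available yet: keep the rig unscaled
    }
    return measuredUserHeight / rigHeight;
}
```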
Environmental restrictions
Inside-Out Body Tracking (IOBT) is based on visibility from the cameras in the headset, so there are some limitations. While these limitations may be reduced or even eliminated over time, they are currently part of the expected behavior.
Occlusion:
Tracking may be lost when one body part is occluded by an object (for example, a table), another body part (like moving your arms behind your back), or when the body is close to a wall. For this reason, and also for safety reasons, we recommend making sure the area around you is clear from clutter or obstructions, and to avoid depending on motions in which the arms or torso become obstructed.
Lighting:
Strong background lighting can cause shadows or other visual artifacts that may impact the headset's ability to detect arms and hands. Very dim lighting can also cause issues. For best performance, use a well-lit room, but stay away from large windows with direct sunlight.
Rig requirement:
To support body tracking, your skeleton needs to:
Have a spine starting at the pelvis / hips and ending with the neck / head
Have two arms connected to the spine at the chest, and typically 5 fingers on each hand
For full body tracking: Have two legs, connected to the pelvis / hips
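The rig requirements above can be expressed as a structural check over a bone hierarchy. This is a minimal sketch under assumptions: the bone names, the `Rig` representation (bone to parent), and the validation function are all illustrative; real rigs vary and need a proper retargeting mapping.

```cpp
#include <string>
#include <unordered_map>

// Hypothetical rig representation: each bone maps to its parent bone name.
using Rig = std::unordered_map<std::string, std::string>;

// Walk parent links from `from` until we reach `to` (bounded to avoid cycles).
bool HasChain(const Rig& rig, const std::string& from, const std::string& to) {
    std::string current = from;
    for (int depth = 0; depth < 64; ++depth) {
        auto it = rig.find(current);
        if (it == rig.end()) return false;
        if (it->second == to) return true;
        current = it->second;
    }
    return false;
}

bool SupportsBodyTracking(const Rig& rig, bool fullBody) {
    bool ok = HasChain(rig, "head", "pelvis")        // spine: pelvis up to head
           && HasChain(rig, "left_hand", "chest")    // arms attach at the chest
           && HasChain(rig, "right_hand", "chest");
    if (fullBody) {
        ok = ok && HasChain(rig, "left_foot", "pelvis")  // legs attach at pelvis
               && HasChain(rig, "right_foot", "pelvis");
    }
    return ok;
}
```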
Supported devices
All Meta Quest 2 and later devices are supported, although Inside-Out Body Tracking (IOBT) is only supported on Meta Quest 3.
Face tracking
After completing this section, you should:
Understand the use cases for face tracking.
Understand the restrictions for the end users of this feature.
Understand the known problems to determine if there are blockers that would prevent their use case.
Face tracking for Meta Quest Pro is an API that relies on inward-facing cameras to detect expressive facial movements. For devices without inward-facing cameras, such as Meta Quest 3, face tracking relies on audio from the microphone to estimate facial movements. These movements are categorized into expressions based on the Facial Action Coding System (FACS).
FACS breaks facial movements down into expressions that map to common facial muscle movements, such as raising an eyebrow or wrinkling your nose, or to a combination of several of these movements. For example, a smile could combine the right and left lip corner pullers around the mouth with the cheeks raising and the eyes slightly closing. For that reason, it is common to blend multiple movements together at the same time.
To achieve this in immersive or blended apps (VR or AR/MR), the common practice is to represent these as morph targets (also known as blendshapes) with a strength that indicates how strong the face is expressing this action.
The Face Tracking API conveys each of the facial expressions as a defined morph target with a strength that indicates the activation of that morph target.
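The blending described above can be sketched by combining several morph target strengths (each in [0, 1]) into one composite expression score. This is an illustration under assumptions: the FACS-style morph target names and the blend weights are hypothetical, not the SDK's identifiers or values.

```cpp
#include <algorithm>
#include <string>
#include <unordered_map>

// Hypothetical map of morph target name to activation strength in [0, 1].
using MorphWeights = std::unordered_map<std::string, float>;

float Weight(const MorphWeights& w, const std::string& name) {
    auto it = w.find(name);
    return it == w.end() ? 0.0f : it->second;
}

// A smile blends both lip-corner pullers with cheek raising, as described
// above. The 0.7/0.3 split is an illustrative tuning choice.
float SmileScore(const MorphWeights& w) {
    const float mouth = 0.5f * (Weight(w, "lip_corner_puller_l") +
                                Weight(w, "lip_corner_puller_r"));
    const float cheeks = 0.5f * (Weight(w, "cheek_raiser_l") +
                                 Weight(w, "cheek_raiser_r"));
    return std::clamp(0.7f * mouth + 0.3f * cheeks, 0.0f, 1.0f);
}
```

In an engine, scores like this would typically drive morph targets on the character mesh rather than be read back as a single number.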
Use cases
You can directly interpret morph targets to determine whether the user's eyes are open and whether they are blinking or smiling.
You can also combine morph targets together and retarget them to a character to provide Natural Facial Expressions.
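Directly interpreting a morph target can be as simple as a threshold test. The sketch below treats a strong "eyes closed" activation on both eyes as a blink; the parameter names and the 0.75 threshold are assumptions for illustration, not SDK values.

```cpp
// Hypothetical blink check: both per-eye "eyes closed" morph target strengths
// (in [0, 1]) must exceed a tunable threshold.
bool IsBlinking(float eyesClosedLeft, float eyesClosedRight,
                float threshold = 0.75f) {
    return eyesClosedLeft >= threshold && eyesClosedRight >= threshold;
}
```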
Known issues and limitations
Facial hair, or clothing such as masks, that obscures the face may prevent the device from correctly detecting facial movements.
Rig requirement:
Face tracking is based on morph targets and requires a correct mapping to the character if used for character embodiment.
Supported devices
Meta Quest Pro supports visual-based face tracking.
Audio-based face tracking is provided on all Meta Quest 2 and later devices.
Eye tracking
After completing this section, you should:
Understand the use cases for eye tracking.
Understand the restrictions for the end users of this feature.
Understand the known problems to determine if there are blockers that would prevent their use case.
Use cases
The abstracted eye gaze representation that the API provides (gaze state per eye) allows a user’s character representation to make eye contact with other users. This can significantly improve your users’ social presence.
You can also use eye tracking to determine where in 3D space the person is looking. This can provide a good understanding of regions of interest, or support targeting in games.
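Gaze targeting can be sketched as a ray test from the eye. The types below are simplified stand-ins for the per-eye gaze state (origin plus direction) the API provides; the function names and math are an illustrative ray-sphere check, not SDK code.

```cpp
// Simplified stand-in for a 3D vector (an Unreal app would use FVector).
struct V3 { float x, y, z; };

V3 Sub(const V3& a, const V3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float Dot(const V3& a, const V3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Ray-sphere test: project the target center onto the gaze ray and compare
// the perpendicular distance against the target radius.
bool GazeHitsTarget(const V3& origin, const V3& dir, const V3& center,
                    float radius) {
    const V3 toCenter = Sub(center, origin);
    const float t = Dot(toCenter, dir);  // assumes dir is normalized
    if (t < 0.0f) return false;          // target is behind the gaze origin
    const float distSq = Dot(toCenter, toCenter) - t * t;
    return distSq <= radius * radius;
}
```

In practice you would run this test against each candidate region of interest, or use the engine's own ray-cast facilities instead.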
Known issues and limitations
This eye tracking API is designed for social presence and, as such, applies limits and smoothing to the tracking data to avoid unnatural eye movements.
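The kind of smoothing mentioned above can be illustrated with a simple exponential moving average over successive gaze angles. This is an assumption about one plausible technique, not the SDK's actual filter; the function name and alpha value are illustrative.

```cpp
// Hypothetical exponential smoothing of a gaze yaw angle (degrees or radians).
// alpha in (0, 1]: smaller values smooth more, at the cost of added latency.
float SmoothGazeYaw(float previousYaw, float rawYaw, float alpha = 0.3f) {
    return previousYaw + alpha * (rawYaw - previousYaw);
}
```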
Rig requirement:
For eye tracking to work, your character must have eye meshes that are rigged to bones, one for each eye.