After configuring your project for VR, follow these steps:
Make sure you have an OVRCameraRig prefab in your scene. The prefab is located at Packages/com.meta.xr.sdk.core/Prefabs/OVRCameraRig.prefab.
From the OVRCameraRig object, navigate to the OVRManager component.
Select Target Devices.
Scroll down to Quest Features > General.
If you want hand tracking, select Controllers and Hands for Hand Tracking Support.
Under General, make sure Body Tracking Support is selected. Click General if that view isn’t showing.
If you want to use IOBT, select High for Body Tracking Fidelity in the Movement Tracking section. IOBT is a suggested fidelity mode and not a requirement of the API. For more details, see Troubleshooting body tracking.
If you want to use Full Body, select Full Body for the Body Tracking Joint Set in the Movement Tracking section.
If you want Eye and Face Tracking, enable Eye Tracking Support and Face Tracking Support in the same way as in the previous steps.
Note: OVRManager has permission requests that run on startup; make sure the requests for the tracking technologies you require are selected.
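If you prefer to verify permissions yourself at runtime rather than rely solely on OVRManager's startup request, a minimal sketch might look like the following. The permission string is an assumption based on the Android manifest entry Meta uses for body tracking; confirm it against your SDK version.

```csharp
using UnityEngine;

// Minimal sketch: check and request the body tracking runtime permission.
// The permission string is an assumption; confirm it against your SDK version.
public class BodyTrackingPermissionCheck : MonoBehaviour
{
    private const string BodyTrackingPermission = "com.oculus.permission.BODY_TRACKING";

    private void Start()
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        if (!UnityEngine.Android.Permission.HasUserAuthorizedPermission(BodyTrackingPermission))
        {
            UnityEngine.Android.Permission.RequestUserPermission(BodyTrackingPermission);
        }
#endif
    }
}
```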
If your project depends on Face Tracking, Eye Tracking, or Hand Tracking, ensure that these are enabled on your HMD. This is typically part of the device setup, but you can verify or change the settings in Settings > Movement Tracking.
In the Unity Editor, go to Edit > Project Settings > Meta XR to access the Project Setup Tool.
After completing this section, the developer should:
Understand the relationship between the body tracking skeleton and Unity's Mecanim rig.
Be able to apply body tracking to a character model.
Be able to apply deformation logic to adjust the skeleton to the character mesh so that movement looks natural.
In this section, you will learn how to import a character asset, ensure it is rigged with the Unity Mecanim Humanoid, and then enable it for body tracking.
Meta uses a proprietary Body Tracking Skeleton consisting of 84 bones (See Appendix A). This skeleton corresponds to the human skeleton as opposed to a character rig. Within Unity, the Body Tracking API is accessed through a series of scripts running as components attached to a Unity Game Object.
If Body Tracking Support is enabled, the OVRBody script polls for updated body pose data (position and rotation) in tracking space. This pose data can be interpreted and used directly, or it can be mapped to the Unity Mecanim Humanoid as shown in the figure. The naming convention that Unity uses for each bone in the skeleton is not identical to the Meta body tracking skeleton, so it is necessary to map from the tracking skeleton to the Unity Mecanim Humanoid.
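As a rough illustration of using the pose data directly, the sketch below reads the per-bone transforms exposed by an OVRSkeleton component driven by OVRBody. Member names such as Bones, IsDataValid, Id, and Transform reflect the commonly exposed OVRSkeleton API; verify them against your installed Core SDK version.

```csharp
using UnityEngine;

// Sketch: log the tracked pose of every bone each frame from an OVRSkeleton
// driven by OVRBody. Member names are assumed from the Core SDK's OVRSkeleton.
public class BodyPoseLogger : MonoBehaviour
{
    [SerializeField] private OVRSkeleton _skeleton;

    private void Update()
    {
        if (_skeleton == null || !_skeleton.IsDataValid)
        {
            return;
        }

        foreach (var bone in _skeleton.Bones)
        {
            // Each bone exposes an id and a transform updated from tracking data.
            Debug.Log($"{bone.Id}: pos={bone.Transform.position} rot={bone.Transform.rotation.eulerAngles}");
        }
    }
}
```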
Import a character and rig with Mecanim Humanoid
To import a character and rig as a Mecanim Humanoid, follow these steps:
Import the custom character into the Unity Project as an asset.
Click on the third-party model asset (target rig).
In the Unity Inspector tab, navigate to the Rig tab.
Next to Animation Type, click on Humanoid.
Click Apply.
A Configure button will now show. Click it to make sure most of the character's bones have been mapped to Unity's HumanBodyBones. Certain bones, such as the jaw or eyes, are less important for animating the body than the bones in the legs, spine, and arms.
(Optional) If your character looks too small or too large when imported into a scene, modify the Scale Factor under the asset’s Model tab and then click Apply. Changing the scale of the character instance in the scene instead will likely cause unsatisfactory results when body tracking modifies its joint transforms.
Go to Muscles & Settings, scroll to the end, and make sure that Translation DoF is enabled. If it is not enabled, then enable it and then click on the Apply button below it.
The custom target character has been imported as a humanoid, and each bone has been associated with a HumanBodyBone enum value.
(Optional) Unity uses a mesh’s precomputed bounds to determine visibility relative to a camera, and skinned meshes that are not considered visible will not be skinned. Therefore, if the mesh’s bounds are too small relative to the expected height or wingspan, it might not animate properly when the bounds fall outside the view. You can use the “Edit Bounds” button on the Skinned Mesh Renderer to enlarge the bounds and mitigate this problem. For instance, you can increase the depth of the bounds of the character rendered below to accommodate the case where the user reaches forward beyond the depth of the bounds. Additionally, you can enable the “Update When Offscreen” option so that Unity recalculates the bounds every frame; however, that consumes extra CPU cycles.
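The same bounds adjustments can also be made from a script. The sketch below uses only standard Unity SkinnedMeshRenderer APIs; the extra-extent values are illustrative placeholders.

```csharp
using UnityEngine;

// Sketch: enlarge a skinned mesh's local bounds so the character keeps
// animating when the user reaches beyond the precomputed bounds, or opt into
// per-frame bounds updates at extra CPU cost.
public class ExpandSkinnedBounds : MonoBehaviour
{
    [SerializeField] private SkinnedMeshRenderer _renderer;
    [SerializeField] private Vector3 _extraExtents = new Vector3(0.5f, 0.5f, 1.0f); // meters, illustrative
    [SerializeField] private bool _updateWhenOffscreen = false;

    private void Start()
    {
        Bounds bounds = _renderer.localBounds;
        bounds.Expand(_extraExtents * 2f); // Expand grows the total size, so extents grow by _extraExtents.
        _renderer.localBounds = bounds;

        // Alternative: recalculate bounds every frame (costs extra CPU cycles).
        _renderer.updateWhenOffscreen = _updateWhenOffscreen;
    }
}
```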
Enable tracking for the character
The next step is to add script components to the character that read the body tracking movement from the service and apply it to the character.
Drag the model or prefab of your character into the scene. Note that the model must have an animator component with an avatar assigned. If it doesn’t, follow the steps in the previous section and change the animation type to Humanoid.
Right click on the model, then select Movement Samples > Body Tracking > Animation Rigging Retargeting (upper body) (constraints). For full-body characters, use Movement Samples > Body Tracking > Animation Rigging Retargeting (full body) (constraints). The option with constraints improves the look of the character by adding the following:
FullBodyDeformationConstraint (Animation Rigging) under a Rig transform to preserve character proportions.
The following options should be verified in the components added in the previous step:
For Generative Legs, make sure the character’s Provided Skeleton Type is set to Full Body in OVRBody (added in the previous step). Furthermore, OVRManager’s Body Tracking Joint Set must be set to Full Body in the Movement Tracking section for full body joints to be represented in any character. For IOBT, ensure that Body Tracking Fidelity is set to High in OVRManager’s Movement Tracking section. IOBT is a suggested fidelity mode and not a requirement of the API – for more details see Troubleshooting Body Tracking.
Ensure the RetargetingLayer (also added in the previous step) has the following:
Skeleton Type set to Full Body
Enable Tracking By Proxy checked
Blend Left Hand, Blend Right Hand, Correct Bones, Correct Left Hand, and Correct Right Hand retargeting processors located and assigned in the Retargeting Processors array. These can be found in Packages/com.meta.movement/Shared/RetargetingProcessors.
The blend hand processors strive to keep accurate position when the hands are in front of the body by biasing towards the tracked position. However, when the hands move above the head or to the side or back of the body, the processor biases toward preserving the character humanoid constraints.
Improve character appearance
While the Unity Mecanim Humanoid provides a naming convention that allows movement to be transferred bone to bone, problems can arise when the character moves, for one of several reasons:
The actual rig may be positioned toward the front or back of the mesh.
The proportions of the rig may not align with typical human proportions.
The person being tracked may be of significantly different proportions than the character (e.g. broader shoulders).
As such, it is often necessary to adjust the character to make the movement appear natural. We have provided deformation constraints that should make the character appear more natural during movement. The following gif shows the sequence that we followed to make the blue robot example appear natural. The exact steps you may need to follow will vary based on the creative intent. For instance, you might want the character to walk with a hunch, or you may want it to have good posture. The gif represents how to retarget to the blue robot while trying to maintain its original proportions.
Basic Retargeting: This step applies body tracking information to the retargeted character without taking into account proportion differences between the character and the data received from body tracking. This involves modifying the rotations and positions of the Mecanim Humanoid bone transforms that are skinned to the character’s mesh.
Adjustments: Characters may require additional rotational adjustments applied to the tracking result, depending on how the character is rigged. These adjustments are stored in the “Adjustments” array, which is part of the Retargeting Layer script added to your character by the 1-click setup described in Step 2 of Enable Tracking for the Character. By default, the 1-click setup adds automatically calculated adjustments for the spine and shoulders. If the axes or values of these rotations make the model look incorrect when applied, modify or remove the adjustments.
Base Deformation: In this step we add some default constraints that are common across many skeletons. The Deformation Constraint script was added to the Rig structure of the character when the 1-click setup was applied. You should be able to see it in the Hierarchy underneath the Rig GameObject of the character to which you applied the 1-click setup. In the following image, the FullBodyDeformationConstraint was added by the 1-click setup.
Adjust deformation settings
The deformation settings allow you to make a trade-off between the skeleton of the character being driven by body tracking and the body tracking skeleton described earlier. This requires an understanding of the creative intent: for example, your character might represent an old wizard that is hunched over while your tracked skeleton is tracking a person standing up straight. In this case, you would want to bias towards the character skeleton and away from the tracked person, so that a third person would see a wizard walking instead of a person with good posture.
Spine Translation Correction Type
This setting is necessary because the relationship between the hip and head positions might be very different from that of the tracked person. Think of a character whose torso is large compared to its legs. Would you prefer the head position to be accurate and the character’s proportions adjusted based on the head position? If so, select “Accurate Head”; the character is then scaled based on the head position, so its hips might not be in the correct position. If, on the other hand, it is more important for your character to have accurate hip positions, select “Accurate Hips”, which may result in the head position being higher than your actual head. If you want both, some of the torso proportions are compromised to match the tracked positions. One caveat to this description is that when we use full body, we always use the correct feet position and then adjust based on the head/hips position.
Spine Alignment
These settings allow you to balance between the actual tracked positions of the spine and aligning the spine vertically above the hips. Many characters are built assuming a perfectly straight spine aligned above the hips. A weight of 1 aligns the SpineLower/SpineUpper/Chest bone with the Hips, while a weight of 0 aligns the SpineLower/SpineUpper/Chest bone with the tracked spine position. Note that this applies to the starting position of the Rest Pose. Even if the values are all set to 1, the character will still be able to bend; it will simply bend with a spine that resembles the character’s rest pose (which could be straight). Note that the Unity Mecanim Humanoid automatically interpolates bones that aren’t mapped when the Unity skeleton has more bones; with fewer bones, the result might be less accurate, but it should still work if at least the lower spine is mapped. If you wish to enforce the original spine bone local positions at the expense of lower tracking quality, increase the “Original Spine Positions Weight”.
Shoulder Interpolation
If the tracked person has significantly different shoulder widths than the character, it is necessary to make a tradeoff in preserving accurate shoulder position versus the look and feel of the character. For instance, if the person being tracked has broad shoulders and you are stylistically trying to represent a graceful ballerina, you might want to err on the side of preserving the character compared to accurate positioning of the shoulders.
A weight of 1 will place the shoulders at their local position relative to the original chest, ignoring the tracked shoulder positions. A weight of 0 will place the shoulders at their tracked positions. On RetargetingLayer, the ArmatureCorrectBones retargeting processor has a Shoulder Correction Weight Late Update that can be updated if the restricted muscle space rotations are incorrect. A weight of 1 will use the retargeted shoulder rotation without being limited by muscle space, while a weight of 0 will use the shoulder rotation limited by muscle space.
Arms and Hands Interpolation
Similar tradeoffs are available with positioning the arms. A weight of 1 will place the arms (upper and lower arm bones) using the character’s proportions. A weight of 0 will place the arms at their tracked positions. The same goes for the hand weight. For instance, if the character has really short arms compared to the tracked person, then you can use the following weights to adjust.
Legs Alignment
Using the leg weight, align the legs between the tracked positions (weight of 0) and their positions after proportions are enforced and aligned (weight of 1).
Feet Alignment
The tracking skeleton has a slight foot arch, which may cause the toes of the character to lift upwards. A weight of 1 for the feet alignment weight will rotate the feet toward the toes, ignoring the tracked vertical rotation. A weight of 0 will make no changes to the tracked feet rotation. If the toe positions look incorrect when proportions are enforced, adjust the toes weight. A weight of 1 for the toes weight will place the toes at the local position relative to the feet. A weight of 0 for the toes weight will place the toes at the tracked position.
Retargeting Processors
Retargeting processors are ScriptableObjects that store data and execute instructions during the LateUpdate phase to make adjustments to the final retargeting output that go beyond the constraints of Unity’s Humanoid muscle space. This encompasses both positional and rotational modifications to the fingers, and potentially other bones of the character. These processors inherit from the base RetargetingProcessor.cs script.
Blend Hands Processor: This processor determines the optimal placement of the hand based on tracked hand data when it falls within the head’s field of view. By setting the IK target accurately for visible hands and maintaining arm rotations for occluded hands, this approach ensures both realistic hand positions and preserved arm movements with retargeted body tracking. A weight of 1 means that this processor will actively modify the IK processor weights, while a weight of 0 will leave the IK processor weights unmodified.
Correct Bones Processor: This processor updates positions and rotations of all bones on the humanoid so that they are not restricted by muscle space, using information from the Pose capture constraints located in the animation rigging stack. A weight of 1 will apply the position and rotational offsets from animation rigging that aren’t restricted by muscle space, while a weight of 0 will apply no updates.
Correct Hands Processor: This processor uses an IK algorithm to manipulate the arm’s rotations so that the hand reaches the specified IK target’s position. A weight of 1 will run the selected IK algorithm completely, while a weight of 0 results in no modifications being applied to the arm and hand’s position.
Remaining Touch-ups: If the model still doesn’t look right, you can try the more advanced tweaks described in Appendix C - Advanced Body Deformation. These might also be useful if you want to pin the character to a seat, or override tracking when the VR environment isn’t flat (e.g., stairs).
Modifying Character Height to Match User
Depending on the options that you chose in this section, your character might deform based on the height of the user. As such, it is necessary to ensure that your character can adapt to different heights by ensuring that the areas around the joints influence enough of the mesh to allow for users of different heights. See Modifying Character Height to Match Varying User Heights in Appendix C.
Adding animation to a character
After completing this section, the developer should:
Understand that body tracking is compatible with keyframe animations.
Be able to apply this knowledge to import a keyframe animation and apply it to the entire rig or only part of the rig (e.g., legs).
This section details how you can import a keyframe animation and apply it, together with body tracking, to the entire rig or only part of the rig (e.g., legs).
There are many different scenarios in which you would want to add a keyframe animation to a character. For instance, these animations can be used to change from a body tracking state to an “animated running” state when using controllers to drive the character. (See the next section for more detail on using a locomotion controller with body tracking.)
The MovementRetargeting.unity scene contains an example of using a wave animation to drive part of the body.
To add an animation to a character:
Download or create the animation that you want to add.
Ensure your character has an animator component with an assigned avatar, as mentioned in Enable Tracking for the Character.
Add an Animator Controller that will control when to play the animation and assign it to the Animator’s Controller field. In the example scene that we provide, this exists in WaveAnimController.
Define the three masks that should be used for the animation to work.
Animation Mask: This is the mask to which the animation is applied. In our example, this is JustRightArm. It is configured as a property of the WaveAnimation layer of the Animation Controller.
Retargeting Mask: This is the mask to which the retargeting is applied. In the example, this is ExcludeRightArm and it is applied to the RetargetingConstraint’s AvatarMaskComp field.
Full Body Mask: When the animation is toggled off, this mask indicates that retargeting can affect the entire character’s body again. It should be applied to the RetargetingConstraint’s AvatarMaskComp on the character’s rig.
The 1-click scripts that set up body tracking require an Animator component, and can be used with humanoid animations. If importing to an existing project, it is possible that your character already has animations and an animation controller. Below are notes regarding how they are integrated with retargeting:
Animation masks: Define animation masks for parts of the body that retargeting should not affect while animations play on those same joints. When animations stop playing, a different mask is used to allow retargeting to affect all joints. We do this in our retargeting sample where we implement a wave animation and use masks that correspond to the animation playing or not. See Adding an Animation to a Character.
Animator states: You could define states in your animation controller that control when animations play. For instance, in an Idle state the controller doesn’t play animations on the character and body tracking drives the character instead. When you transition to an animated state (e.g., walking, running), the animation is allowed to play, and masking prevents retargeting on the joints relevant to the animation, as in the sketch below.
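As a simple, hypothetical illustration of the animator-state approach, the sketch below drives Animator parameters from locomotion speed so that body tracking applies while idle and a masked animation plays while moving. The parameter names (“Speed”, “IsMoving”) and the threshold are assumptions for an example Animator Controller, not part of the Movement SDK.

```csharp
using UnityEngine;

// Sketch: feed locomotion speed into Animator parameters that an example
// controller uses to switch between an Idle state (body tracking drives the
// character) and a masked locomotion state. Parameter names are assumptions.
public class LocomotionAnimatorDriver : MonoBehaviour
{
    [SerializeField] private Animator _animator;
    [SerializeField] private Rigidbody _playerRigidbody;
    [SerializeField] private float _movingSpeedThreshold = 0.1f;

    private void Update()
    {
        Vector3 velocity = _playerRigidbody.velocity;
        float speed = new Vector3(velocity.x, 0f, velocity.z).magnitude;

        _animator.SetFloat("Speed", speed);
        _animator.SetBool("IsMoving", speed > _movingSpeedThreshold);
    }
}
```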
After completing this section, the developer should:
Understand the need for controller-based animation vs body tracking.
Be able to implement a locomotion controller to allow controller-based animation in conjunction with body tracking
In this section, you will learn when you can implement a locomotion controller to allow controller-based animation in conjunction with body tracking.
A useful design pattern for locomotion with body tracking is to use controller-based input to navigate in a virtual world and then use body tracking within a specific context or location. During controller-based navigation, the character will typically be animated corresponding to the velocity of the character in the virtual world (e.g. walking, jogging, running). There are many different options for implementing player controllers and tutorials are readily available online. However, for the purpose of mixing controller-based animation and body tracking the solution must solve the following problems:
Placement of colliders so that ground and object interactions work well.
Implementing the PlayerController to locomote correctly in the virtual world and keep the camera aligned with the first-person view.
Determining when to allow body tracking to override the controller-driven animations.
Before diving into the details, it is important to understand a few key concepts:
World/Absolute space vs Local/Tracked space: When the user jumps or moves in Absolute space (using the left joystick in this sample), the PlayerController moves. This movement shifts the position of the local body-tracked space (OVRCameraRig) through the game world. When the user moves in real life, XR hardware detects that movement, and the user’s body moves within the tracked space. The tracked space can rotate (with left and right on the right-controller joystick in this sample), which pivots around the player’s position in tracked space, rather than around the origin of tracked space.
PlayerController is the game object controlled by the user to move their character. The PlayerController moves (or locomotes) the animated character when its UserInput vector is manipulated. The motion is managed by Unity’s physics system via a Rigidbody component’s velocity. Collision with the ground while moving also uses Unity’s physics system, implemented using colliders; a sphere collider moves according to an estimate of the user’s real-life ground position (see the sketch after this list).
Unity Input has a legacy input API for listening to input (e.g. Input.GetAxis("Horizontal")). This is utilized in MovementLocomotion sample’s UnityInputBinding component (in the PlayerController → Controls object) to listen to Horizontal and Vertical input, but other forms of input could easily be used as well.
Meta Quest Controller Input signals can be read from Meta’s OVRInput API.
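To make these concepts concrete, here is a minimal, illustrative sketch (not the sample’s MovementSDKLocomotion) that reads the left thumbstick with OVRInput and drives a Rigidbody’s horizontal velocity, moving the controller through Absolute space while gravity stays under the physics system’s control. The speed value and the use of the head transform for heading are assumptions.

```csharp
using UnityEngine;

// Sketch: thumbstick-driven locomotion using a Rigidbody velocity, moving
// relative to where the user is looking. Not the sample implementation.
public class SimpleThumbstickLocomotion : MonoBehaviour
{
    [SerializeField] private Rigidbody _rigidbody;
    [SerializeField] private Transform _headTransform; // e.g., the CenterEyeAnchor
    [SerializeField] private float _speed = 2f;        // meters per second, illustrative

    private void FixedUpdate()
    {
        Vector2 input = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick);

        // Flatten the head direction onto the ground plane for heading.
        Vector3 forward = Vector3.ProjectOnPlane(_headTransform.forward, Vector3.up).normalized;
        Vector3 right = Vector3.ProjectOnPlane(_headTransform.right, Vector3.up).normalized;
        Vector3 move = (forward * input.y + right * input.x) * _speed;

        // Keep the vertical velocity (gravity, jumps) managed by physics.
        _rigidbody.velocity = new Vector3(move.x, _rigidbody.velocity.y, move.z);
    }
}
```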
We have provided a sample that implements a basic XR locomotion controller to illustrate the principles. The basic steps for setting up a locomotion controller are:
Create a game object which will act as the controller’s transform (position and rotation). It should have a script that handles input and physics to move the character. Our script is called MovementSDKLocomotion and it requires a Rigidbody for physics.
Add your character’s skinned mesh (the body that the user sees), which should follow the controller. The simplest way to connect the mesh body to the controller is to set the mesh object as a child to the controller (object created in step 1). Previous sections describe how to rig the mesh’s animator with body tracking. Our examples use a script called TransformsFollowMe that will move the character mesh to the controller even if it is not in the controller’s hierarchy.
Add functionality that will trigger movement animations when the character controller moves. Our sample uses an AnimationHooks script to trigger animations, which is designed to work with our expected locomotion and jump animations.
Note: Our example script will pass animation triggers to any mesh animator that is a child of the object with an AnimationHooks component as long as Auto Assign From Children is set to true.
As part of applying locomotion animations, the RetargetingAnimationConstraint component (which applies body tracked poses to a character animator) will need to be at least partially overridden during animations; otherwise body tracking will override your intended locomotion animations. To do this correctly, you need to add special rig constraint objects to the Rig child object of your RetargetingLayer:

* Add a `CaptureAnimationConstraint` as the first element of the Rig. You may need to unpack a prefab to do this. This RigConstraint will remember the animated body pose (triggered by `AnimationHooks`) before body tracking is applied by the `RetargetingConstraint`.
* Add a `PlaybackAnimationConstraint` component to a new last child object of the Rig object. This script will partially re-apply the body pose stored by the `CaptureAnimationConstraint` at the front of the RigConstraint stack. To apply these animations to the legs, provide an AvatarMask marking the lower body (see “LowerBodyMask” in the Meta Movement package). The strength of this body pose application is controlled by the Weight property of the `PlaybackAnimationConstraint`.

Setting the weight to 0 will fully hide any locomotion animations, leaving the body tracking applied to the animator. Setting the weight to 1 will apply the locomotion animation to the masked region.
Our scripts automatically trigger changes to the PlaybackAnimationConstraint’s weight property with a StateTransition component, which feeds a progress value into the weight property:

This transition is triggered in our scripts by the MovementSDKLocomotion’s `OnStartMove` and `OnStopMove` callbacks, in the PlayerController object:


It is also triggered by our JumpingRigidbody’s `OnJump` and `OnJumpFinished` callbacks, in the PlayerController’s Jump object:


These callbacks identify the source of the animation event with a string because “Locomotion” and “Jump” animations can happen separately or together, and either one should be able to maintain the animation mask without being canceled by the other.
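As a simplified stand-in for the StateTransition component, the sketch below blends an Animation Rigging weight toward a target value over a short duration; the sample feeds an equivalent progress value into the PlaybackAnimationConstraint’s Weight property.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Animations.Rigging;

// Sketch: smoothly blend an Animation Rigging weight between 0 (body tracking
// only) and 1 (masked locomotion animation applied). Simplified stand-in for
// the sample's StateTransition component.
public class RigWeightBlender : MonoBehaviour
{
    [SerializeField] private Rig _rig;             // e.g., the rig hosting the playback constraint
    [SerializeField] private float _blendSeconds = 0.25f;

    private Coroutine _blend;

    // Call with 1 from OnStartMove/OnJump, and with 0 from OnStopMove/OnJumpFinished.
    public void BlendTo(float targetWeight)
    {
        if (_blend != null)
        {
            StopCoroutine(_blend);
        }
        _blend = StartCoroutine(BlendRoutine(targetWeight));
    }

    private IEnumerator BlendRoutine(float targetWeight)
    {
        float start = _rig.weight;
        for (float t = 0f; t < _blendSeconds; t += Time.deltaTime)
        {
            _rig.weight = Mathf.Lerp(start, targetWeight, t / _blendSeconds);
            yield return null;
        }
        _rig.weight = targetWeight;
        _blend = null;
    }
}
```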
Add a collider (or possibly multiple colliders) to constrain locomotion and collide with physics objects. Our solution has:
an upper body collider that encompasses most of the body, and
a collider for the feet to keep them above the ground plane and inform the controller of the character’s collision with the ground.
Add a script that registers real-life motion and moves the player controller’s colliders accordingly. The user can move when wearing an XR device, which should move the character in the app. Our example has SphereColliderStaysBelowHips, a script that updates collider positions to match real life motion.
Note: Our example script updates the collider positions at Update instead of FixedUpdate. This means the user notices movement almost instantly (better for preventing nausea), but those changes are out of sync with the physics system, which can cause collision problems in corner cases.
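For reference, a stripped-down version of this idea (not the sample’s SphereColliderStaysBelowHips) could look like the following; the choice of the head anchor as the reference point and the field names are assumptions.

```csharp
using UnityEngine;

// Sketch: every Update, slide a foot collider so it stays under the tracked
// head on the horizontal plane, keeping the collider's current height.
public class FootColliderFollowsHead : MonoBehaviour
{
    [SerializeField] private Transform _centerEyeAnchor;   // e.g., OVRCameraRig's CenterEyeAnchor
    [SerializeField] private SphereCollider _footCollider;

    private void Update()
    {
        Vector3 head = _centerEyeAnchor.position;
        Vector3 worldCenter = new Vector3(head.x, _footCollider.bounds.center.y, head.z);

        // SphereCollider.center is expressed in the collider's local space.
        _footCollider.center = _footCollider.transform.InverseTransformPoint(worldCenter);
    }
}
```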
Add a script that routes specific input into the PlayerController script. Our example has scripts called OVRInputBinding and UnityInputBinding that read input from OVRInput and UnityInput, respectively. These scripts have UnityEvent callbacks that pass input to methods and properties in MovementSDKLocomotion.
Ensure that the OVRCameraRig follows the position of the character controller as it moves. The OVRCameraRig is the actual ‘embodiment’ of the user, and wherever it goes, that is where the user will feel like they are. Our examples have a TransformsFollowMe script that will move the OVRCameraRig to follow the controller even if it is not in the controller’s hierarchy. It should theoretically be possible to simply put the OVRCameraRig in the controller’s hierarchy (as a child of the controller object from step 1), but that behavior is not always acceptable for different tools and algorithms that expect OVRCameraRig to be a root object. Also, having OVRCameraRig separate from the controller allows the controller to more easily move without the user, in a ‘disembodied’ kind of way, which may have benefits for locomotion or for tutorials.
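A minimal follower, analogous in spirit to the sample’s TransformsFollowMe script (but not the actual implementation), could be as simple as copying the controller’s pose onto its followers in LateUpdate:

```csharp
using UnityEngine;

// Sketch: keep transforms (e.g., the character mesh or the OVRCameraRig)
// aligned with this object without parenting them to it. Illustrative only;
// the sample ships its own TransformsFollowMe script.
public class FollowThisTransform : MonoBehaviour
{
    [SerializeField] private Transform[] _followers;

    private void LateUpdate()
    {
        foreach (Transform follower in _followers)
        {
            follower.SetPositionAndRotation(transform.position, transform.rotation);
        }
    }
}
```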
Using body tracking for fitness
Movement SDK has a sample called MovementBodyTrackingForFitness that shows how body poses can be recorded and compared using a component called BodyPoseAlignmentDetection.
After completing this section, the developer should be able to implement a detector that determines whether the user is matching a pose required by a desired exercise (for example, squats).
Comparing body poses in your scene
In this section, we will show you how to:
Visualize body poses in the Unity Editor
View body poses at runtime
Export body poses during Unity Preview
Import body poses
View how closely two body poses are aligned
Set up custom actions to invoke based on comparisons
Adjust body poses with the Editor
Visualize body poses in the Unity Editor
Create a body pose in the editor with a new GameObject that has a BodyPoseController component. Name it “BodyPose.”
Add BodyPoseBoneTransforms component to “BodyPose”.
In the Inspector, select Refresh Transforms.
View body poses at runtime
A body pose (from BodyPoseController) consists of an array of bone Poses, not visible at runtime. The transforms created by BodyPoseBoneTransforms are also not visible without some extra work:
Add a BodyPoseBoneVisuals component to the “BodyPose” object.
Select the Refresh Bone Visuals button to automatically generate bone visuals for each bone; these persist at runtime. This will also generate a default Bone Visual Prefab. The body will now be visible even at runtime.
You can change the default Bone Visual Prefab to something else and refresh the skeleton. For example, try changing the Bone Visual Prefab reference to a prefab in Assets/Samples/Meta Movement/<version>/Advanced samples/BodyTrackingForFitness/Prefabs, then click Clear Bone Visuals and then Refresh Bone Visuals.
Note: When editing the skeleton pose using the Unity Editor, make sure you are changing the transform of a bone transform and not the clone of the Bone Visual Prefab generated to show the bone.
Export body poses during Unity Preview
Add the OVRBodyPose component to the “BodyPose” object. This component reads body tracking data from the headset and converts it for BodyPoseController. OVRBodyPose is set to Full Body by default, but can also be set to Upper Body only (no legs).
Note: Different Quest devices may read body tracked data differently. For example, Quest 3 may automatically augment Full Body body tracking data with Inside Out Body Tracking (IOBT), which is not a feature available for Quest 2.
Drag the OVRBodyPose to the BodyPoseController → Source Data Object field.
This enables body tracking to drive the bone Poses at runtime or during the Unity preview when using Link. BodyPoseController’s “Export Asset” button works during Unity’s Preview mode. This means body poses can be exported from real body-tracked data from OVRBodyPose.
Select the “BodyPose” object, and then start a Unity preview by pressing the play button, with a headset plugged in and connected to Link. Use BodyPoseController’s “Export Asset” button to export the current bone pose. The exported data appears in a time-stamped generated asset file in the /Assets/BodyPoses/ folder.
Note: “Export Asset” is a Unity Editor function that is not accessible in an APK at runtime. Body poses can be exported during Unity Preview, not APK runtime.
It’s recommended that you change the default time-stamped name of the body pose asset to something describing the pose.
For body tracking to work over Link, your Meta Quest Link application needs to have Developer Runtime Features enabled. This can be verified by checking Settings → Beta → Developer Runtime Features. Additional toggles in that user interface may be required for other Quest features to work properly over Link.
Import body poses
In the Unity Editor (not during a preview), drag a generated body pose asset file into the Source Data Object field of BodyPoseController. Click “Refresh T-Pose” and then “Refresh Source Data” to see the skeleton swap between the two poses.
Source Data Object can be filled by any IBodyPose object. That means a BodyPoseController can reference data from another BodyPoseController (to duplicate a pose), or a BodyPoseBoneTransforms (though this is not recommended), or OVRBodyPose.
View how closely two body poses are aligned
Add a BodyPoseAlignmentDetector component to “BodyPose”
For the Pose A field, drag and drop a generated body pose file from the /Assets/BodyPoses/ folder. For Pose B, drag and drop this object itself, and select the BodyPoseController component in the disambiguation popup.
Drag and drop the BodyPose object into “Bone Visuals To Color”; this will cause the skeleton visuals to be recolored based on how closely the bone poses match.
The BodyTrackingForFitness scene is a proof of concept for an app that counts exercise poses using this BodyPoseAlignmentDetector. This app can be used, for example, to detect if the user is in a squat or not.
Set up custom actions to invoke based on comparisons
The BodyPoseAlignmentDetector provides a flexible interface that can be extended for different purposes. The Pose Events callbacks can automatically trigger your customized actions based on how well “Pose A” aligns with “Pose B.” These actions can trigger logic or effects, like any Unity Event; see the sketch after the list below.
- `OnCompliance` triggers once each time all the bone poses comply.
- `OnDeficiency` triggers once each time at least one bone no longer complies.
- `OnCompliantBoneCount` triggers once each time the number of complying bones changes.
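For example, a hypothetical rep counter could expose public methods that you wire to these events in the Inspector; the component below is illustrative and not part of the SDK.

```csharp
using UnityEngine;

// Sketch: count exercise reps by hooking these methods up to the
// BodyPoseAlignmentDetector's Pose Events in the Inspector. Illustrative only.
public class SquatRepCounter : MonoBehaviour
{
    private int _repCount;

    // Wire to OnCompliance: all tracked bones match the reference pose.
    public void HandlePoseMatched()
    {
        _repCount++;
        Debug.Log($"Reps completed: {_repCount}");
    }

    // Wire to OnDeficiency: at least one bone no longer complies.
    public void HandlePoseLost()
    {
        Debug.Log("Pose lost; waiting for the next rep.");
    }

    // Wire to OnCompliantBoneCount (assumed to pass the current count) to show progress.
    public void HandleCompliantBoneCount(int compliantBones)
    {
        Debug.Log($"Bones currently in compliance: {compliantBones}");
    }
}
```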
The alignment, or compliance, is defined by the Alignment Wiggle Room configuration entries.
- The `Maximum Angle Delta` field determines how closely two bones need to align to count as compliant.
- The `Width` field determines how much margin there is between compliant and deficient. For example, if the `Left Arm Upper` must be 30 degrees aligned with a width of 4 degrees, then compliance will trigger when the `Left Upper Arm` is aligned within 28 degrees, and fall out of compliance when it exceeds 32 degrees. This overlap prevents flickering compliance.

The compliance angles can be measured in one of two ways:
- `Joint Rotation`: compares the local rotation, including the roll of the bone. This alignment detection is mathematically quite simple, but has the drawback of requiring bones to be perfectly rolled in addition to being perfectly angled.

- `Bone Direction`: compares the direction of the bone from the perspective of a specific bone. This method ignores bone roll, requiring a specific bone to be chosen as the arbiter of direction. This method of comparing angles felt better while testing for specific fitness application use cases, but it requires slightly more processing to test.

Adjust body poses with the Editor
Changing positions/rotations in the BodyPoseController’s Bone Poses array will change the skeleton. Modifying individual values in this list is possible, but not the recommended way to create body poses. Modify transforms created by BodyPoseBoneTransforms or read body-tracked data from the Quest in preview mode, and then press Export Asset to create a new pose.
The body-tracked bone positions and orientations (and lengths) are determined at runtime. However, a static copy of the T-Pose data is saved in FullBodySkeletonTPose.cs and applied to the BodyPoseController in the editor.
The recommended way to change specific bone poses with the editor is to go to BodyPoseBoneTransforms → BoneTransforms, expand the list, and click on the desired bone. Clicking on the object exposes it in the hierarchy. Click on that transform in the Hierarchy to select it, then change the transform’s rotation. Note that this changes the blue lines of the skeleton because it is changing the bone pose data in BodyPoseController → Bone Poses.

BodyPoseAlignmentDetector should adjust bone colors while changing a body pose during editor time, as long as Bone Visuals To Color is set up correctly.
Calibration API
After completing this section, the developer should:
Understand when the calibration API might be useful.
Be able to apply the calibration API to override the auto-detected height with an explicit height.
This section details when you can use the calibration API, and how you can use it to override the auto-detected height with a value from app-specific calibration.
Auto-calibration is a process by which the system tries to determine the height of the person wearing the headset. The height is necessary to ensure that we can detect if the user is standing, squatting, or sitting. The auto-calibration routines run within the first 10 seconds of the service being created, so the initial state when requesting the service is important. Ideally, the application should ensure that the user is standing when the service is launched. If the person is in a sitting position, they can also extend their arm and draw a circle of around 0.3 meters (1 foot) in diameter with their arm fully extended. In this case, height can be estimated from wingspan.
However, there are some cases in which the auto-calibration might not work sufficiently (e.g., the person remains sitting and doesn’t stretch out their arm). There are other situations in which the app might already have gone through an initialization process to determine the person’s height and would like to just use that value. For both these use cases, we provide the SuggestBodyTrackingCalibrationOverride() and ResetBodyTrackingCalibration() functions, which can be used to override the auto-calibration.
OVRBody.cs allows overriding the user height via SuggestBodyTrackingCalibrationOverride(float height) where height is specified in meters.
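A minimal sketch of applying an app-provided height is shown below; it assumes the override and reset calls are exposed statically on OVRBody as described above, so verify the exact signatures in OVRBody.cs for your SDK version.

```csharp
using UnityEngine;

// Sketch: override the auto-calibrated user height with an app-provided value,
// and optionally fall back to auto-calibration. Assumes static methods on
// OVRBody; check OVRBody.cs in your SDK version.
public class HeightCalibrationOverride : MonoBehaviour
{
    [Tooltip("User height in meters, e.g., gathered from an in-app calibration flow.")]
    [SerializeField] private float _userHeightMeters = 1.75f;

    public void ApplyOverride()
    {
        OVRBody.SuggestBodyTrackingCalibrationOverride(_userHeightMeters);
    }

    public void ClearOverride()
    {
        // Return to the system's auto-calibration.
        OVRBody.ResetBodyTrackingCalibration();
    }
}
```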
Troubleshooting Body Tracking
This section details how you can troubleshoot common issues with body tracking. After completing this section, you should understand:
How to check for common project-related errors
How to tell if the tracking services are running
Some of the most common symptoms and their causes
To start troubleshooting, look at the warnings under Edit > Project Settings > Meta XR and resolve any issues found there. For more information, see Troubleshooting Movement SDK.
You can debug many issues with the command adb logcat -s Unity. Otherwise, use the resolutions highlighted below.
See which Body Tracking services are active
You may encounter a problem during development where body tracking doesn’t seem to work. This can be caused by several issues, including not having permissions enabled for body tracking. However, if you think all permissions are set up and you have tried to start the service but it still isn’t working, you can use ADB to see which services are active using the following command:
adb shell dumpsys activity service com.oculus.bodyapiservice.BodyAPIService
If body tracking is not active at all, this command will return:
The fbs line indicates if Generative Legs are active.
The IOBT line shows whether IOBT is active.
Being active means that at least one character is currently using the service. If no characters are currently running with the appropriate body tracking service, then the boolean will indicate false.
IOBT is a suggested fidelity mode and not a requirement of the API; this behavior exists to manage the performance of the entire system. As such, the developer is advised to test the modes where IOBT will be running in combination with other features to ensure that IOBT is supported in conjunction with the other services requested. Specifically, if the call to enable IOBT for body tracking is made while the system is highly utilized (e.g., passthrough and Fast Motion Mode (FMM) or controllers are enabled, or the system is under high CPU load generally), body tracking will be enabled in low fidelity mode (without IOBT). The user will see the same skeletal output, but certain tracking features provided by IOBT will not be available (e.g., elbow tracking, correct spine position in a lean).
Body Tracking works with controllers, but not with hands when running on the Quest device (not PC)
Check if hand tracking permissions are enabled for the device in Settings > Movement Tracking > Hand Tracking.
Ensure that Hand Tracking and Controllers are enabled in your project config.
Ensure that the app requests Hand Tracking on start-up.
Body Tracking works when running directly on the HMD, but fails to run over Link
Ensure you are connected with a USB cable that supports data. This can be tested from the Oculus App on the PC under Device Settings: Link Cable > Connect Your Headset > Continue.
Ensure you have Developer Runtime features enabled on the Oculus App on the PC by going to Settings > Beta > Developer Runtime Features.
Body tracking is not working. Verify that controllers are being tracked if you are using controllers (or hands if you are using hand tracking). You can do this outside of the game context.
Additionally, use the debug command, adb shell dumpsys activity service com.oculus.bodyapiservice.BodyAPIService to determine if body tracking is running.
If you notice that body tracking errors occur after you remove and then don the headset again, you can use HMDRemountRestartTracking to restart body tracking after the headset is donned. This script will re-enable the project’s OVRRuntimeSettings joint set and tracking fidelity settings after the headset is removed and donned.
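If you want to react to remount events yourself, a sketch using OVRManager’s mount events might look like the following; the restart call is a placeholder for your own logic (or for the behavior HMDRemountRestartTracking already provides), and the event names should be verified against your SDK version.

```csharp
using UnityEngine;

// Sketch: listen for the headset being donned again so body tracking can be
// restarted or re-validated. Event names assumed from OVRManager.
public class RemountTrackingListener : MonoBehaviour
{
    private void OnEnable()
    {
        OVRManager.HMDMounted += OnHmdMounted;
    }

    private void OnDisable()
    {
        OVRManager.HMDMounted -= OnHmdMounted;
    }

    private void OnHmdMounted()
    {
        Debug.Log("Headset donned; restart or re-validate body tracking here.");
        // Placeholder: toggle your tracking components or rely on HMDRemountRestartTracking.
    }
}
```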
Body is in a motorcycle pose with the upper body correct, but knees bent
If you are using IOBT with a character that has legs, you can use two-bone IK to straighten the legs. In the Movement package, you can see an example of this in Samples/Prefabs/Retargeting/ArmatureSkinningUpdateGreen.
Appendix A: OpenXR changes
The developer should gain an understanding of the naming convention and OpenXR
calls sufficient to help in debugging issues such as mismatches between bone
names or blendshapes and their retargeted characters.
The information in this section reflects the changes at the OpenXR interface to
provide the reader with context. However, if you are using Unity, you will not
deal directly with these APIs and can skip to the following section. See
Movement SDK OpenXR Documentation
for API specifics. At a high level, the existing OpenXR body tracking extension
exposes four functions:
xrCreateBodyTrackerFB - Creates the body tracker.
xrGetBodySkeletonFB - Gets the set of joints in a reference T-shaped pose.
xrLocateBodyJointsFB - Gets the current location of joints.
xrDestroyBodyTrackerFB - Destroys the tracker.
Generative Legs
The new body tracking full body extension reuses these functions, but adds
support for creating a tracker with a new joint set. Specifically, the
XrBodyJointSetFB enum is expanded with a new value
XR_BODY_JOINT_SET_FULL_BODY_META, allowing users to specify that they want
joints for the full body (i.e., the current upper body plus the new lower
body joints).
To create the tracker with lower-body tracking, specify the new joint set when
creating the tracker:
Locating joints (or finding the default position of joints) is identical to the
existing body tracking extension. The set of joints corresponds to the enum
XrFullBodyJointMETA, with the same 70 upper body joints, and an additional 14
lower body joints. The FullBodyJoint enums are:
After this code runs and assuming locations.isActive is true,
jointLocations[XR_FULL_BODY_JOINT_RIGHT_UPPER_LEG_META] will represent the
pose of the upper leg joint.
Body tracking calibration
The new body tracking calibration extension allows applications to override the
auto-calibration performed by the system by providing an overridden value for
the user’s height.
The calibration override height is specified in meters.
Applications may also query the current calibration status, and decide whether
to use the body tracking result based on whether the body is currently
calibrated correctly. While calibrating, the returned skeleton may change scale.
In order to determine the current calibration status, an application may pass in
a XrBodyTrackingCalibrationStatusMETA through the next pointer of the
XrBodyJointLocationsFB struct when querying the body pose through
xrLocateBodyJointsFB.
A valid calibration result means that the pose is safe to use. A calibrating
body pose means that calibration is still running and the pose may be incorrect.
An invalid calibration result means that the pose is not safe to use.
Appendix B: ISDK integration
The developer should:
Understand how to set up ISDK and MSDK to be compatible.
Be able to rig a scene with body tracking using the Movement SDK to allow
manipulation of sample virtual objects using the Interaction SDK.
In the Unity Movement package you will find a
MovementISDKIntegration scene
by navigating to Unity-Movement > Samples~ > AdvancedSamples >
Scenes. This sample shows how to apply Interaction SDK (ISDK) hand movements
to the retargeted Movement SDK (MSDK) body.
This scene is of interest to developers wishing to retarget body movements to a
character with the MSDK, and also have the character interact with virtual
objects using the interactions provided by ISDK. In particular, when using
grabbing or touch limiting, ISDK repositions the finger positions from the
tracked positions so that they are visually correct when grabbing the virtual
object (e.g., a cup) or pressing a virtual screen. Since these are different
from the actual positions of the fingers, it is necessary to adjust the
retargeted character’s finger or elbow positions to their new virtual
counterparts. This sample shows how to do this.
Documentation for the Interaction SDK and a quickstart guide can be found
here. You should use
Capsense with ISDK,
and incorporate the OVRHands prefab as discussed below.
To enable Capsense with ISDK, set Controller Driven Hand Poses (found on
the OVRManager) to Natural.
In addition, find the OVRHandPrefab objects located in the OVRCameraRig
hierarchy, and set the Show State option to Always. This will enable
controllers to work with hands.
Key scripts
The following scripts implement the basic functionality and can be ported to
your projects:
SkeletonHandAdjustment is a script that can apply the ISDK hand’s pose to a
retargeted body. Several instances of it are contained in the OVRInteraction
component’s children. OVRInteraction is used to provide hand interactions so
that the user can manipulate virtual objects. These hand interactions are
represented in the scene hierarchy as the left and right hands, namely
LeftHandSynthetic and RightHandSynthetic. With Capsense enabled, these
hand meshes will work whether one uses controller or hand tracking.
Each synthetic hand must also have a SkeletonHandAdjustment component as seen
in the
MovementISDKIntegration sample.
This serves as a data source for the SkeletonProcessAggregator script
described below.
SkeletonProcessAggregator is a script that references multiple
IOVRSkeletonProcessor scripts, such as SkeletonHandAdjustment, and applies
them in a defined order to the input skeleton before retargeting occurs. It is
added as a component to the RetargetingLayer character that you want
animated. In the example, it is under
ArmatureSkinningUpdateRetargetSkeletonProcessor.
Interaction SDK and Movement SDK Component Integration
It is recommended that you directly copy the exact OVRInteraction object used
in the MovementISDKIntegration sample scene, either with a cut-and-paste
operation, or by making a prefab. To manually set up these scripts in your
project without a direct copy:
Read the
Getting Started with ISDK tutorial
or a similar introduction to ISDK. You should have an OVRCameraRig object,
as well as the OVRInteraction object as seen in the image above, except it
will be missing the OVRHands and Synthetic hands. Search for OVRHands
(which has hand objects that can act as a source for hand poses) and add it
to OVRInteraction.
Search for OVRLeftHandSynthetic and OVRRightHandSynthetic prefabs and add
those to OVRHands if those items do not exist already. These scripts were
renamed in the MovementISDKIntegration scene for clarity. Continue the
setup process per synthetic hand:
If a synthetic hand does not set itself up, drag and drop the corresponding
data source into the Modify Data Source from Source Mono field. For
instance, the LeftHand should be dragged into the LeftHandSynthetic as
shown in the image:
Add SkeletonHandAdjustment to the Left and Right Synthetic Hands.
Add grab interactors to OVRHands as described
here. For
instance, for the left hand you would add HandGrabInteractor to
OVRHands > LeftHand > HandInteractorsLeft.
Once ISDK hand interactors are set up, SkeletonHandAdjustment components need
to be referenced by the SkeletonProcessAggregator, which should be attached to
the RetargetingLayer’s GameObject. The SkeletonProcessAggregator applies
special processing to the input skeleton before it is used by the
RigConstraint components expected by RetargetingLayer.
Make sure the GameObject with RetargetingLayer has a
SkeletonProcessAggregator component.
Adding the SkeletonProcessAggregator component will automatically populate
its Auto Add To field.
Add each of the Synthetic hand objects to the SkeletonProcessAggregator’s
Skeleton Processors listing and make sure Enabled is checked.
Remove any null values in the SkeletonProcessAggregator’s Skeleton
Processors listing.
At this point you have added the basic components necessary for ISDK interaction
and can move onto one of the tutorials like the one that discusses
hand grab interactions
or continue to utilize the sample that we have provided.
Note that for each HandGrabInteractor you must enable the HandGrabVisual in
order to get the position of the adjusted hands so that they appear to wrap
around some objects.
Navigate to Visuals->HandGrabVisual underneath the interactor object
hierarchy.
Enable HandGrabVisual.
If it is not assigned, assign the proper synthetic hand to the Synthetic
Hand field. This corresponds to the synthetic hand object related to the
hand based on handedness.
Visuals
ISDK has some visuals that are useful for debugging, but might be seen as
distracting since they will overlap with the character’s hands. Here are the
objects to disable if you wish to remove all of ISDK’s hand visuals:
For each synthetic hand object, disable the OVRLeftHandVisual or
OVRRightHandVisual underneath it.
MovementISDKIntegration has hierarchy objects that should be understood as
being for demonstration purposes only, and would likely be replaced or removed
in your project:
UI → Skeleton Rendering Menu: This menu enables/disables runtime
visualizations:
Input skeleton visualization: Renders the input body tracking skeleton.
Animator skeleton visualization: draws the retargeted Animator’s skeleton.
Skeleton Visuals: This has BoneVisualizer objects, modified by the menu
buttons.
Environment: This is an object containing the floor and lighting of the
scene.
MirroredObjects: This contains the mirrored avatar, and related lighting.
InteractableObjects: This contains the grabbable object used to
demonstrate the Interaction SDK.
Appendix C - Advanced Body Deformation
If there are visual issues with the character even after the adjustments to the
constraints, other animation rigging constraints or retargeting processors
should be applied to resolve the remaining issues.
The developer should understand the key components that can be modified to
fine-tune the deformation logic to fit their needs.
Animation Rigging Constraints: For more information about animation
rigging constraints, please refer to the Unity documentation located
here
Retargeting Processors: Retargeting processors are scriptable objects
that are designed to update bone transforms using the original tracking data
and the retargeting data while being unrestricted by muscle space. The
ArmatureCorrectLeftHand retargeting processor is an example of this in
action, where if using CCD IK, it will rotate each retargeted bone from the
shoulder onwards to the hand to try to reach the original tracked hand
position. A template ScriptableObject is provided if you want to add
your own retargeting processor, located
here
Retargeted Bone Targets: If you require the tracking output (e.g., the
SpineLower position and rotation), it can be exposed via the
ExternalBoneTargets located on the RetargetingLayer, which will update a
specified target transform with the tracked transform data (position and
rotation).
Modifying Character Height to Match Varying User Heights
A character’s vertices will usually affect limited parts of its mesh when
character animations are used. However, body tracking might scale a mesh to
accommodate different user heights, and this can lead to several visual
artifacts. These include the hand meshes disconnecting from the forearm meshes
(stretching), or the lower leg meshes compressing into the knee (squishing).
In the figure below, the character’s skin influence is currently not shared
along limb joints to allow for some joint translation. The picture on the left
shows the original character with no mesh weight from the ankle joint, while the
version on the right shows the weight influence from the ankle painted to shin.
With the adjusted weighted influence, the fixed ankle can now be translated with
deformation affecting the lower shin.
Another example
below shows how weight influence can affect the area from the thigh to the knee
joint. The left picture shows no weight in that region, while the right
counterpart shows some weight influence from the knee painted to the thigh.
The fixed
knee can now be translated with deformation result on thigh and knee.
The
following gif shows the result in motion. The left side shows the unmodified
weight while the right shows the adjusted weight.
In general, we
recommend that you extend the impact area of the bones to at least 6 inches
above the joint in question so that more mesh vertices are affected during
stretching or squishing. The video below shows how to adjust weighting to allow
for different heights or body proportions.