Use Capsense

Updated: Jul 15, 2024
Capsense generates realistic hand animations from controller input. When a user holds a Meta Quest controller, Capsense reads the controller’s capacitive touch sensors and button states to determine finger positions, then produces matching hand poses. These are the same hand visuals you see in the Horizon Home environment.
Capsense supports two styles of hand rendering:
  • Natural hand poses: The hand is rendered without a visible controller, as if the user is interacting bare-handed. Finger positions are inferred from the controller’s touch sensors.
  • Controller hand poses: The hand is rendered alongside a visible controller model. Capsense adjusts the hand shape to match each controller type. Supported controllers include Quest 2, Quest 3, Quest Pro, and all future device controllers.

Benefits of Capsense

  • Benefit from a best-in-class hand-pose implementation, and from future improvements to it, instead of investing in a custom implementation.

Known limitations

  • When using Link on PC, controller pose data is unavailable while the controllers are not actively in use (for example, when they’re lying on a table).

Compatibility

Hardware compatibility

  • Supported devices: Quest 2, Quest Pro, Quest 3 and all future devices.

Software compatibility

  • Meta XR Core SDK v62+

Feature compatibility

  • Fully compatible with Wide Motion Mode (WMM).
  • Capsense hands and body tracking through MSDK can run simultaneously, but each converts controller data to hand poses differently, so the resulting joint positions and orientations will differ slightly.

Setup

The SDK package includes a native sample for this feature, titled XrHandDataSource.
XR_EXT_hand_tracking_data_source allows the application to create a hand tracker that can receive controller-generated hand poses in addition to the standard camera-tracked hands path. To opt in, create an XrHandTrackingDataSourceInfoEXT structure and chain it onto the next pointer of the create info passed to the xrCreateHandTrackerEXT call. When querying poses with xrLocateHandJointsEXT, the application can chain an XrHandTrackingDataSourceStateEXT structure into the call’s output, which receives information about which data source was used. The available data sources are:
  • XR_HAND_TRACKING_DATA_SOURCE_UNOBSTRUCTED_EXT: This means that the tracker should use the hands tracked via the cameras for hand poses.
  • XR_HAND_TRACKING_DATA_SOURCE_CONTROLLER_EXT: This means that the tracker should use the controller data to fill in the hand poses.
If both sources are provided to the hand tracker, then the runtime will use the camera tracked poses if available. Otherwise, it will try to use controller data to fill in the poses.
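The flow above can be sketched in C. This is a minimal sketch, not the sample’s actual code: it assumes the XR_EXT_hand_tracking and XR_EXT_hand_tracking_data_source extensions are enabled, that the extension functions have been loaded via xrGetInstanceProcAddr, and that `session`, `appSpace`, and `predictedDisplayTime` come from existing application state; error handling is omitted.

```c
#include <openxr/openxr.h>

// Request both data sources; the runtime prefers camera-tracked poses
// and falls back to controller-generated poses.
XrHandTrackingDataSourceEXT dataSources[] = {
    XR_HAND_TRACKING_DATA_SOURCE_UNOBSTRUCTED_EXT,
    XR_HAND_TRACKING_DATA_SOURCE_CONTROLLER_EXT,
};

XrHandTrackingDataSourceInfoEXT dataSourceInfo = {
    .type = XR_TYPE_HAND_TRACKING_DATA_SOURCE_INFO_EXT,
    .requestedDataSourceCount = 2,
    .requestedDataSources = dataSources,
};

XrHandTrackerCreateInfoEXT createInfo = {
    .type = XR_TYPE_HAND_TRACKER_CREATE_INFO_EXT,
    .next = &dataSourceInfo,  // chain the data source request
    .hand = XR_HAND_LEFT_EXT,
    .handJointSet = XR_HAND_JOINT_SET_DEFAULT_EXT,
};

XrHandTrackerEXT handTracker = XR_NULL_HANDLE;
xrCreateHandTrackerEXT(session, &createInfo, &handTracker);

// When locating joints, chain a state struct onto the output
// to learn which data source produced the poses.
XrHandTrackingDataSourceStateEXT dataSourceState = {
    .type = XR_TYPE_HAND_TRACKING_DATA_SOURCE_STATE_EXT,
};
XrHandJointLocationEXT jointLocations[XR_HAND_JOINT_COUNT_EXT];
XrHandJointLocationsEXT locations = {
    .type = XR_TYPE_HAND_JOINT_LOCATIONS_EXT,
    .next = &dataSourceState,
    .jointCount = XR_HAND_JOINT_COUNT_EXT,
    .jointLocations = jointLocations,
};
XrHandJointsLocateInfoEXT locateInfo = {
    .type = XR_TYPE_HAND_JOINTS_LOCATE_INFO_EXT,
    .baseSpace = appSpace,
    .time = predictedDisplayTime,
};
xrLocateHandJointsEXT(handTracker, &locateInfo, &locations);

if (dataSourceState.isActive &&
    dataSourceState.dataSource == XR_HAND_TRACKING_DATA_SOURCE_CONTROLLER_EXT) {
    // The poses were generated from controller input (Capsense).
}
```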
XR_EXT_hand_joints_motion_range lets the application specify the constraints placed on the joint positions generated from controller data. The developer places an XrHandJointsMotionRangeInfoEXT structure on the next chain of the XrHandJointsLocateInfoEXT struct provided to the xrLocateHandJointsEXT call. The available constraints are:
  • XR_HAND_JOINTS_MOTION_RANGE_UNOBSTRUCTED_EXT: Requests hand poses for natural/social usage. Poses on this path may intersect the controller geometry, so the hands shouldn’t be rendered at the same time as a controller model.
  • XR_HAND_JOINTS_MOTION_RANGE_CONFORMING_TO_CONTROLLER_EXT: Requests hand poses wrapped around the controller geometry. When hand poses from this path are rendered together with the controller model, the user sees their hands where they are actually placed on the controller.
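As a minimal sketch, the motion-range request chains onto the locate call like this (assuming `handTracker`, `appSpace`, and `predictedDisplayTime` come from existing application state and the XR_EXT_hand_joints_motion_range extension is enabled; error handling omitted):

```c
#include <openxr/openxr.h>

// Ask for hand poses that conform to (wrap around) the controller,
// suitable for rendering together with the controller model.
XrHandJointsMotionRangeInfoEXT motionRangeInfo = {
    .type = XR_TYPE_HAND_JOINTS_MOTION_RANGE_INFO_EXT,
    .handJointsMotionRange =
        XR_HAND_JOINTS_MOTION_RANGE_CONFORMING_TO_CONTROLLER_EXT,
};

XrHandJointsLocateInfoEXT locateInfo = {
    .type = XR_TYPE_HAND_JOINTS_LOCATE_INFO_EXT,
    .next = &motionRangeInfo,  // constrain the generated joint poses
    .baseSpace = appSpace,
    .time = predictedDisplayTime,
};

XrHandJointLocationEXT jointLocations[XR_HAND_JOINT_COUNT_EXT];
XrHandJointLocationsEXT locations = {
    .type = XR_TYPE_HAND_JOINT_LOCATIONS_EXT,
    .jointCount = XR_HAND_JOINT_COUNT_EXT,
    .jointLocations = jointLocations,
};
xrLocateHandJointsEXT(handTracker, &locateInfo, &locations);
```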

Troubleshooting

  • How can I confirm Capsense is running on my headset?
    In your headset, you should see either hands instead of controllers, or hands holding controllers. In addition, hand joint pose data should be available while you’re holding the controllers.
  • Can I evaluate the feature on my headset without changing my code?
    No, using Capsense requires some code changes.