Use Capsense
Updated: Jul 15, 2024
Capsense provides logical hand poses while controllers are in use. It converts tracked controller data into a standard set of hand animations and poses for supported Oculus controllers. For consistency, we provide the Oculus Home hand and controller visuals. Capsense supports two styles of hand poses:
- Natural hand poses: These are designed to look as if the user is not using a controller and is interacting naturally with their hand.
- Controller hand poses: These are designed to be rendered alongside the controller, with different shapes depending on the controller type. Capsense currently supports the Quest 2, Quest 3, and Quest Pro controllers.
- Benefit from a best-in-class logical hand implementation and future improvements instead of investing in a custom implementation.
- In Unity, due to limitations in the current plugin implementation, if hand tracking is enabled and the hand is not on the controller, the hand poses will use the hand tracking data.
- When using Link on PC, pose data for controllers is unavailable when you’re not actively using them (such as when they’re lying on a table).
- In Unity, due to the object hierarchy on the OVRCameraRig Tracking Space, it is non-trivial to provide the hand data and the controller data simultaneously with the legacy anchors. This has required us to create multiple new anchors on the Tracking Space and to add gating logic to the controller and hand prefabs that determines whether they should render.
Prior to v65, hand scale was ignored for hand tracking whenever Capsense was enabled. To fix this, rebuild your project with Core SDK v65 or higher.
- Supported devices: Quest 2, Quest Pro, Quest 3 and all future devices.
- Unity 2022.3.15f1+ (Unity 6+ is recommended)
- Meta XR Core SDK v62+
- Fully compatible with Wide Motion Mode (WMM).
- Capsense hands and body tracking through MSDK work simultaneously, but they use different implementations for converting controller data to hand poses, so joint positions and orientations will differ slightly.
A native sample, titled XrHandDataSource, is provided in the SDK package to demonstrate this feature.
XR_EXT_hand_tracking_data_source allows the application to create a hand tracker that can receive controller-generated hand pose data as well as data from the standard camera-tracked hands path. This is done by creating an XrHandTrackingDataSourceInfoEXT structure and passing it on the next pointer of the data provided to the xrCreateHandTrackerEXT call. When querying poses with xrLocateHandJointsEXT, the application can pass an XrHandTrackingDataSourceStateEXT structure into the function to receive information about which data source was used. The available data sources are:
- XR_HAND_TRACKING_DATA_SOURCE_UNOBSTRUCTED_EXT: The tracker should use the hands tracked via the cameras for hand poses.
- XR_HAND_TRACKING_DATA_SOURCE_CONTROLLER_EXT: The tracker should use the controller data to fill in the hand poses.
If both sources are provided to the hand tracker, then the runtime will use the camera tracked poses if available. Otherwise, it will try to use controller data to fill in the poses.
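The following is a minimal C sketch (not taken verbatim from the XrHandDataSource sample) of how an application might request both data sources when creating a hand tracker and then check which source produced the returned joints. It assumes the session, base space, predicted display time, and the extension function pointers (loaded via xrGetInstanceProcAddr) already exist.

```c
#include <openxr/openxr.h>

// Assumed to be loaded elsewhere via xrGetInstanceProcAddr.
PFN_xrCreateHandTrackerEXT pfnCreateHandTracker;
PFN_xrLocateHandJointsEXT  pfnLocateHandJoints;

XrHandTrackerEXT CreateHandTrackerWithBothSources(XrSession session) {
    // Request both camera-tracked ("unobstructed") and controller-generated hand data.
    XrHandTrackingDataSourceEXT sources[] = {
        XR_HAND_TRACKING_DATA_SOURCE_UNOBSTRUCTED_EXT,
        XR_HAND_TRACKING_DATA_SOURCE_CONTROLLER_EXT,
    };
    XrHandTrackingDataSourceInfoEXT dataSourceInfo = {XR_TYPE_HAND_TRACKING_DATA_SOURCE_INFO_EXT};
    dataSourceInfo.requestedDataSourceCount = 2;
    dataSourceInfo.requestedDataSources = sources;

    XrHandTrackerCreateInfoEXT createInfo = {XR_TYPE_HAND_TRACKER_CREATE_INFO_EXT};
    createInfo.next = &dataSourceInfo;  // chain the data-source request
    createInfo.hand = XR_HAND_LEFT_EXT;
    createInfo.handJointSet = XR_HAND_JOINT_SET_DEFAULT_EXT;

    XrHandTrackerEXT handTracker = XR_NULL_HANDLE;
    pfnCreateHandTracker(session, &createInfo, &handTracker);
    return handTracker;
}

void LocateAndCheckSource(XrHandTrackerEXT handTracker, XrSpace baseSpace, XrTime time) {
    // Chain the state struct on the output so the runtime can report
    // which data source produced the returned joints.
    XrHandTrackingDataSourceStateEXT sourceState = {XR_TYPE_HAND_TRACKING_DATA_SOURCE_STATE_EXT};

    XrHandJointLocationEXT joints[XR_HAND_JOINT_COUNT_EXT];
    XrHandJointLocationsEXT locations = {XR_TYPE_HAND_JOINT_LOCATIONS_EXT};
    locations.next = &sourceState;
    locations.jointCount = XR_HAND_JOINT_COUNT_EXT;
    locations.jointLocations = joints;

    XrHandJointsLocateInfoEXT locateInfo = {XR_TYPE_HAND_JOINTS_LOCATE_INFO_EXT};
    locateInfo.baseSpace = baseSpace;
    locateInfo.time = time;

    pfnLocateHandJoints(handTracker, &locateInfo, &locations);

    if (locations.isActive && sourceState.isActive &&
        sourceState.dataSource == XR_HAND_TRACKING_DATA_SOURCE_CONTROLLER_EXT) {
        // Joints were synthesized from controller data (the Capsense path).
    }
}
```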
XR_EXT_hand_joints_motion_range lets the application specify the constraints placed on the joint positions that are generated from the controller data. The developer places an XrHandJointsMotionRangeInfoEXT structure on the next chain of the XrHandJointsLocateInfoEXT struct provided to the xrLocateHandJointsEXT call. The available constraint options are:
- XR_HAND_JOINTS_MOTION_RANGE_UNOBSTRUCTED_EXT: This is interpreted as providing the hands for natural/social usage. The poses provided on this path will intersect the controller geometry, so the hands shouldn't be rendered at the same time as a controller.
- XR_HAND_JOINTS_MOTION_RANGE_CONFORMING_TO_CONTROLLER_EXT: This is a request for the hands to be wrapped around the controller geometry. When hand poses from this path are rendered together with the controller model, the result should provide the immersive effect of seeing the user's hands as they are actually placed.
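Below is a minimal C sketch of chaining the motion-range constraint onto the locate call; it assumes the hand tracker was created as shown earlier and that the xrLocateHandJointsEXT function pointer has already been loaded.

```c
#include <openxr/openxr.h>

// Assumed to be loaded elsewhere via xrGetInstanceProcAddr.
extern PFN_xrLocateHandJointsEXT pfnLocateHandJoints;

void LocateControllerConformingJoints(XrHandTrackerEXT handTracker, XrSpace baseSpace, XrTime time) {
    // Ask the runtime to constrain the joints so the hand wraps around the controller;
    // use XR_HAND_JOINTS_MOTION_RANGE_UNOBSTRUCTED_EXT instead for natural/social hands.
    XrHandJointsMotionRangeInfoEXT motionRangeInfo = {XR_TYPE_HAND_JOINTS_MOTION_RANGE_INFO_EXT};
    motionRangeInfo.handJointsMotionRange = XR_HAND_JOINTS_MOTION_RANGE_CONFORMING_TO_CONTROLLER_EXT;

    XrHandJointsLocateInfoEXT locateInfo = {XR_TYPE_HAND_JOINTS_LOCATE_INFO_EXT};
    locateInfo.next = &motionRangeInfo;  // chain the motion-range constraint
    locateInfo.baseSpace = baseSpace;
    locateInfo.time = time;

    XrHandJointLocationEXT joints[XR_HAND_JOINT_COUNT_EXT];
    XrHandJointLocationsEXT locations = {XR_TYPE_HAND_JOINT_LOCATIONS_EXT};
    locations.jointCount = XR_HAND_JOINT_COUNT_EXT;
    locations.jointLocations = joints;

    pfnLocateHandJoints(handTracker, &locateInfo, &locations);
    // These joints are intended to be rendered together with the controller model.
}
```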
How can I confirm Capsense is running on my headset?
In your headset, you should see either hands instead of controllers or hands holding controllers. Hand pose data should also be provided while the controllers are in use.
Can I evaluate the feature on my headset without changing my code?
No, using Capsense requires some code changes.