At last year’s Connect, we introduced Presence Platform—our suite of machine perception and AI capabilities that empowers developers to build compelling mixed reality and natural interaction-based experiences. Today, with Meta Quest Pro, we’re unlocking full-color mixed reality and introducing a new pillar to Presence Platform’s core stack: Social Presence.
Social Presence and More with Movement SDK
Today, we introduced Movement SDK, our newest addition to Presence Platform. Movement SDK is composed of our eye, face, and three-point body tracking capabilities—all of which help unlock a new dimension of presence in VR. Movement SDK lets avatars mimic facial movements in real time using Meta Quest Pro’s inward-facing sensors. Now, developers can bring characters to life with eye and face tracking. With more expressivity, characters can look and feel more like you. This is the same underlying technology that drives our avatars in Meta Horizon Worlds and Meta Horizon Workrooms.
Eye and Face Tracking Capabilities
Movement SDK includes eye and face tracking capabilities opened up by new technology in Meta Quest Pro. Inside the headset, there are five IR sensors directed towards the face: three sensors pointed towards the eyes and upper face and two pointed towards the lower face. Face tracking is driven by a machine learning model that lets Meta Quest Pro detect a wide range of facial movements.
While people want their avatars to appear expressive, it’s also important that they feel natural. That’s why the abstracted facial movement data that we output through the Movement SDK is represented by linear blend shapes based on the Facial Action Coding System (FACS)—a series of zero-to-one values that correspond with a set of generic facial movements, like when you scrunch your nose or furrow your eyebrows. These signals make it easy for a developer to preserve the semantic meaning of their users’ original movement when mapping signals from the Face Tracking API to their own character rig, whether their character is humanoid or even something fantastical, like a penguin or alien.
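To make the mapping concrete, here is a minimal C++ sketch of the retargeting idea. The FaceWeights channels, the penguin rig, and the blending logic are all illustrative placeholders rather than the actual Face Tracking API surface; the point is that each zero-to-one FACS-style value can be remapped to whatever expressive control a custom character exposes.

```cpp
// Minimal retargeting sketch. FaceWeights and the channel names below are
// illustrative placeholders, not the actual Face Tracking API surface.
#include <algorithm>
#include <cstdio>

// A few FACS-style signals, each normalized to the [0, 1] range.
struct FaceWeights {
    float browLowererL = 0.0f;   // furrowed left eyebrow
    float browLowererR = 0.0f;   // furrowed right eyebrow
    float noseWrinklerL = 0.0f;  // scrunched nose, left side
    float noseWrinklerR = 0.0f;  // scrunched nose, right side
    float jawDrop = 0.0f;        // open mouth
};

// Hypothetical morph-target weights on a stylized penguin rig.
struct PenguinRig {
    float angryBrow = 0.0f;
    float beakOpen = 0.0f;
    float cheekPuff = 0.0f;
};

// Retargeting preserves the semantic meaning of each movement: "furrowed
// brows" still reads as "angry brows" even though the penguin has no eyebrows.
PenguinRig Retarget(const FaceWeights& face) {
    PenguinRig rig;
    rig.angryBrow = std::clamp((face.browLowererL + face.browLowererR) * 0.5f, 0.0f, 1.0f);
    rig.beakOpen  = std::clamp(face.jawDrop, 0.0f, 1.0f);
    // Map nose scrunching onto puffed cheeks, an expressive stand-in.
    rig.cheekPuff = std::clamp((face.noseWrinklerL + face.noseWrinklerR) * 0.5f, 0.0f, 1.0f);
    return rig;
}

int main() {
    FaceWeights frame{0.8f, 0.7f, 0.1f, 0.2f, 0.4f};
    PenguinRig rig = Retarget(frame);
    std::printf("angryBrow=%.2f beakOpen=%.2f cheekPuff=%.2f\n",
                rig.angryBrow, rig.beakOpen, rig.cheekPuff);
    return 0;
}
```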
With the introduction of Movement SDK, we’re able to support social presence for third-party character embodiment. Working with an internal team of artists, we created an alien character that we call Aura. She can wink, move her mouth from side to side, and more.
As you can see above, Aura’s facial and eye movements are used to make her socially engaging—and she shows the potential for high-resolution characters driven by machine learning. Aura will be available as a sample developers can download from GitHub starting later this month. You’re free to look behind the scenes and see how we implemented the retargeting and rig mapping to get this embodied character to work. Look out for our documentation, which will provide guidance and best practices for mapping to different embodied characters.
In addition to social presence, developers can also build completely new interactions and experiences using estimates of where someone is looking in VR as an input, similar to hand tracking. That means people can interact with virtual content based on where they’re looking. For example, developers can use the player’s eye gaze to target objects or pop a balloon in a carnival game.
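As a rough illustration of gaze-based input, the self-contained sketch below casts a ray from a hard-coded gaze origin and direction and tests it against a balloon modeled as a sphere. In a real app, the origin and direction would come from the eye tracking data each frame; everything else here is made up for the example.

```cpp
// Minimal gaze-targeting sketch. The gaze pose here is hard-coded for the
// example; in practice it would come from eye tracking data each frame.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true if a ray (origin + t * dir, t >= 0) hits a sphere.
bool GazeHitsBalloon(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
    Vec3 oc = sub(center, origin);
    float t = dot(oc, dir);                 // closest approach along the ray
    if (t < 0.0f) return false;             // balloon is behind the player
    float distSq = dot(oc, oc) - t * t;     // squared distance from ray to center
    return distSq <= radius * radius;
}

int main() {
    Vec3 gazeOrigin{0.0f, 1.6f, 0.0f};      // roughly eye height
    Vec3 gazeDir{0.0f, 0.0f, -1.0f};        // looking straight ahead (unit length)
    Vec3 balloon{0.1f, 1.7f, -3.0f};
    if (GazeHitsBalloon(gazeOrigin, gazeDir, balloon, 0.25f)) {
        std::printf("Pop!\n");
    }
    return 0;
}
```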
We’ve baked privacy in from the start. Apps never get access to raw image data (or pictures) of people’s eyes and face from these features—they can only access a set of numbers that estimate where someone is looking in VR and their facial movement. Images of people’s eyes and face are inaccessible to developers and Meta because they’re deleted after processing and never leave the headset. Learn more about
how we built privacy into these features.
We can’t wait to see the creative ways that developers will use eye and face tracking to improve social presence in VR.
Body Tracking API
Movement SDK also includes our Body Tracking API, which uses three-point tracking based on the position of the controllers or hands relative to the headset. Our goal was to give you an easy API that works well right out of the box for rich social applications. Body language is a big part of a social experience, and seeing a character’s body move in a way that a human wouldn’t can completely break the sense of immersion. So we used a large dataset of real human motions to learn and correct the errors commonly seen with simple inverse kinematics (IK) approaches.
Our Body Tracking API works whether the player is using controllers or just their hands, and we provide the full simulated upper-body skeleton, including hands, in both cases. We also automatically handle the case where a player puts down their controllers and starts using hands, with no additional logic required from the developer.
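For intuition about why the learned corrections matter, here is the kind of naive two-bone IK solve that simple approaches start from: given only a shoulder position and a tracked wrist position, the elbow bend falls out of the law of cosines. This is not the Body Tracking API itself, and the positions and bone lengths are invented for the example; the SDK layers data-driven corrections on top of this sort of baseline to avoid unnatural poses.

```cpp
// Sketch of the naive two-bone IK baseline that simple approaches rely on.
// Positions and bone lengths are made-up inputs, not values returned by the SDK.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float Distance(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Law-of-cosines solve for the elbow bend angle (radians) given the shoulder
// position and the wrist position inferred from the tracked controller or hand.
float SolveElbowAngle(Vec3 shoulder, Vec3 wrist, float upperArmLen, float forearmLen) {
    float reach = std::clamp(Distance(shoulder, wrist),
                             std::fabs(upperArmLen - forearmLen),
                             upperArmLen + forearmLen);
    float cosElbow = (upperArmLen * upperArmLen + forearmLen * forearmLen - reach * reach)
                     / (2.0f * upperArmLen * forearmLen);
    return std::acos(std::clamp(cosElbow, -1.0f, 1.0f));
}

int main() {
    Vec3 shoulder{0.2f, 1.4f, 0.0f};
    Vec3 wrist{0.4f, 1.1f, -0.3f};           // from the tracked controller or hand
    float elbow = SolveElbowAngle(shoulder, wrist, 0.28f, 0.26f);
    std::printf("elbow bend: %.1f degrees\n", elbow * 180.0f / 3.14159265f);
    return 0;
}
```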
Our Body Tracking API works across both Meta Quest Pro and Meta Quest 2, and it will work on future Meta Quest VR devices. New improvements to body tracking in the coming years will be available through the same API, so you’ll continue getting the best body tracking tech from Meta without having to switch to a different interface.
Movement SDK is available now, so you can start building apps that take advantage of these new features today.
Mixed Reality with Scene Understanding, Full-Color Passthrough, and Shared Spatial Anchors
Using Presence Platform’s mixed reality capabilities, Arkio is unlocking a much more immersive and impactful experience, far beyond what is possible using flat screens.
Our Scene understanding capabilities let developers create mixed reality experiences that adapt to the user’s personal space for increased immersion. Digital content can interact with the physical world with support for occlusion, collision, nav meshes, and blob shadows—all of which improve the sense of immersion and realism in mixed reality.
Take the scenes below from Sir Aphino, which show a basic example of how Scene understanding enables a virtual character to interact with the physical walls and ceiling, as well as other objects within the player’s room.
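As a simplified illustration of how scene data can drive placement, the sketch below treats the scanned room as an axis-aligned box (a hypothetical stand-in for the walls, floor, and ceiling a scene query would return) and clamps a virtual character’s position so it stays inside the room and rests on the floor, which is also where a blob shadow would land.

```cpp
// Minimal sketch of using scene data for placement. The Room struct is a
// hypothetical stand-in for the walls/floor/ceiling a scene query would return.
#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Axis-aligned simplification of a scanned room: floor/ceiling heights plus
// wall extents on x and z.
struct Room {
    float floorY = 0.0f, ceilingY = 2.4f;
    float minX = -2.0f, maxX = 2.0f;
    float minZ = -3.0f, maxZ = 1.0f;
};

// Keep a virtual character inside the physical room and rest it on the floor;
// the returned y is also where a blob shadow would be drawn.
Vec3 ClampToRoom(const Room& room, Vec3 desired, float margin) {
    Vec3 p;
    p.x = std::clamp(desired.x, room.minX + margin, room.maxX - margin);
    p.z = std::clamp(desired.z, room.minZ + margin, room.maxZ - margin);
    p.y = room.floorY;  // snap to the floor plane
    return p;
}

int main() {
    Room room;
    Vec3 wanted{3.5f, 0.8f, -1.0f};                  // outside the far wall
    Vec3 placed = ClampToRoom(room, wanted, 0.3f);
    std::printf("placed at (%.2f, %.2f, %.2f)\n", placed.x, placed.y, placed.z);
    return 0;
}
```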
Here’s a look at the new mixed reality capabilities we unveiled today, which will help developers combine the physical and virtual worlds in entirely new ways.
Full-Color Passthrough
Passthrough gives people in VR a real-time representation of the physical world around them, and it lets developers build experiences that blend virtual content with the physical world. Since the launch of Passthrough, developers have built some very cool early experiences that incorporate a black-and-white view of your physical surroundings. And the obvious next step is color Passthrough. While Meta Quest 2 enables mixed reality, Meta Quest Pro was designed for it. With Meta Quest Pro’s advanced sensors and 4X the number of pixels, developers can now build experiences that let people engage with the virtual world while maintaining presence in their physical space in full, rich color. Our investments in advanced stereoscopic vision algorithms deliver a higher-quality, more comfortable Passthrough experience with better depth perception and fewer visual distortions for both close-up and room-scale mixed reality scenarios.
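Conceptually, blending virtual content with the Passthrough feed is standard "over" alpha compositing: the virtual layer’s alpha decides how much of the physical world shows through at each pixel. The sketch below works through one pixel with invented values; in practice the system compositor performs this step.

```cpp
// Conceptual sketch of how a virtual layer blends over the Passthrough feed:
// standard "over" alpha compositing. In a real app the compositor does this;
// the pixel values here are invented for illustration.
#include <cstdio>

struct RGBA { float r, g, b, a; };

// Composite a virtual pixel over a (fully opaque) passthrough pixel.
RGBA Over(RGBA virt, RGBA pass) {
    RGBA out;
    out.r = virt.r * virt.a + pass.r * (1.0f - virt.a);
    out.g = virt.g * virt.a + pass.g * (1.0f - virt.a);
    out.b = virt.b * virt.a + pass.b * (1.0f - virt.a);
    out.a = 1.0f;
    return out;
}

int main() {
    RGBA passthrough{0.45f, 0.40f, 0.35f, 1.0f};  // a patch of the user's desk
    RGBA hologram{0.10f, 0.60f, 0.95f, 0.5f};     // half-transparent virtual panel
    RGBA result = Over(hologram, passthrough);
    std::printf("blended pixel: %.2f %.2f %.2f\n", result.r, result.g, result.b);
    return 0;
}
```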
Productivity apps can combine huge virtual monitors with physical tools like keyboards and mice. With Immersed, users can spawn up to five virtual screens and work without distractions—or collaborate with others on a virtual desktop.
With color Passthrough, there are more possibilities than ever to let your audience explore, experiment, and engage with mixed reality. And the great thing is that Meta Quest 2 apps built with Passthrough will work on Meta Quest Pro.
Shared Spatial Anchors
Last year, we shipped Spatial Anchors, which let developers anchor virtual content in the physical world. Later this year, we’re rolling out an exciting update: Shared Spatial Anchors, which enable developers to build local multiplayer (co-located) experiences by creating a shared, world-locked frame of reference for multiple users. For example, developers can build games and apps where two or more people collaborate or play together in the same physical space while interacting with the same virtual content.
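The core idea is easy to show with a little coordinate math. In the translation-only sketch below (real anchor poses also carry orientation, and all the numbers are invented), user A expresses a game board’s position relative to a shared anchor, and user B reconstructs it in their own tracking space so it lands on the same physical spot.

```cpp
// Sketch of the shared frame-of-reference idea behind Shared Spatial Anchors.
// Each headset has its own world origin; expressing content relative to a
// shared anchor makes it land on the same physical spot for everyone.
// Translation-only for brevity; real anchor poses also carry orientation.
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

int main() {
    // The same physical anchor as seen from each user's own tracking origin.
    Vec3 anchorInUserA{1.0f, 0.0f, -2.0f};
    Vec3 anchorInUserB{-0.5f, 0.0f, -1.0f};

    // User A places a game board near the anchor and shares only the
    // anchor-relative offset with user B.
    Vec3 boardInUserA{1.0f, 0.9f, -2.3f};
    Vec3 offsetFromAnchor = sub(boardInUserA, anchorInUserA);

    // User B reconstructs the board position in their own coordinate system.
    Vec3 boardInUserB = add(anchorInUserB, offsetFromAnchor);
    std::printf("board for user B: (%.2f, %.2f, %.2f)\n",
                boardInUserB.x, boardInUserB.y, boardInUserB.z);
    return 0;
}
```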
At Connect, we showed a sneak peek of Magic Room, a mixed reality experience we’re building for Meta Horizon Workrooms that leverages Shared Spatial Anchors. Magic Room lets any mix of people, some together in a physical room and some remote, collaborate as if they were in the same room together. We’re excited to bring this feature to our developers.
Just the Beginning
As our hardware continues to advance, Presence Platform will create more opportunities to bring mixed reality, natural interactions, and social presence into VR—bringing us one step closer to delivering on our vision of the metaverse.
We can’t wait to see what developers build with Meta Quest Pro and Presence Platform.