Today at Connect, we shared our vision for the metaverse, a more connected digital experience that allows you to move seamlessly from one place to another—spending time with people who may be physically distant from you—while maintaining your unique virtual identity and digital goods from one world to the next.
A vast and interoperable metaverse cannot be built by one company alone. It will require the contributions of many companies, developers, and creators—and it will take time.
Some building blocks for the metaverse already exist, but in order to create virtual environments that feel natural and authentic, we’ll need to improve the way that motion and space are represented in-app. During the keynote, we announced Presence Platform, a broad range of machine perception and AI capabilities, including Passthrough, Spatial Anchors, and Scene Understanding, that allow you to build more realistic mixed reality, interaction, and voice experiences that seamlessly blend virtual content into a user’s physical world.
We also provided a sneak peek at our next-generation all-in-one VR hardware, Project Cambria, launching next year: an advanced device at a higher price point that will be packed with the latest VR technologies. As our hardware leaps forward and the capabilities of Presence Platform open up groundbreaking opportunities to bring mixed reality experiences and natural interactions to VR today, we come closer to delivering on the promise of the metaverse and the potential to bring people together in new, more immersive ways.
With Presence Platform, our goal is to unlock a wide range of mixed reality experiences guided by our Responsible Innovation Principles. This starts with being responsible stewards when it comes to collecting the information needed to deliver experiences that are amazing, safe, and seamless.
Below, explore the new capabilities available in Insight SDK, Interaction SDK, and Voice SDK.
Capabilities Overview
Insight SDK
Today, we’re announcing Insight SDK, enabling you to build mixed reality experiences that create a realistic sense of presence.
Earlier this year, we introduced Passthrough API Experimental, enabling you to build experiences that blend virtual content with the physical world. Today, we’re announcing general availability of Passthrough in our next release, which means you’ll be able to build, test, and ship experiences with Passthrough capabilities.
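To give a sense of the developer surface, here’s a minimal Unity (C#) sketch of driving a Passthrough layer. It assumes the OVRPassthroughLayer component from the Oculus Integration; treat the specific properties as illustrative, since the exact surface may shift as the API moves from experimental to general availability.

```csharp
using UnityEngine;

// A minimal sketch of controlling Passthrough at runtime. Assumes an
// OVRPassthroughLayer component (Oculus Integration) on the camera rig;
// property names are illustrative, not a guaranteed contract.
public class PassthroughToggle : MonoBehaviour
{
    [SerializeField] private OVRPassthroughLayer passthroughLayer;

    private void Start()
    {
        // Show the camera feed at full opacity behind rendered content.
        passthroughLayer.textureOpacity = 1.0f;

        // Optional stylization: highlight edges in the passthrough feed.
        passthroughLayer.edgeRenderingEnabled = true;
        passthroughLayer.edgeColor = Color.white;
    }

    private void Update()
    {
        // Toggle passthrough at runtime, e.g. with the A button.
        if (OVRInput.GetDown(OVRInput.Button.One))
        {
            passthroughLayer.hidden = !passthroughLayer.hidden;
        }
    }
}
```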
We’re also announcing Spatial Anchors: world-locked frames of reference that let you place virtual content at a fixed pose in physical space and persist it across sessions. With Spatial Anchors Experimental, available soon, you will be able to create Spatial Anchors at specific 6DoF poses, track their 6DoF poses relative to the headset, persist Spatial Anchors on-device, and retrieve a list of currently tracked Spatial Anchors.
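To illustrate that workflow, here’s a hedged Unity (C#) sketch of creating an anchor at a 6DoF pose and persisting it on-device. Since Spatial Anchors Experimental hasn’t shipped yet, the OVRSpatialAnchor component, its Created flag, and the Save callback shown here are illustrative assumptions rather than a final API.

```csharp
using UnityEngine;

// A sketch of the Spatial Anchors workflow described above. Assumes a
// component-style OVRSpatialAnchor API in the Oculus Integration; names
// are illustrative and may differ in the experimental release.
public class AnchorPlacer : MonoBehaviour
{
    [SerializeField] private GameObject anchorPrefab;

    public void PlaceAnchor(Vector3 position, Quaternion rotation)
    {
        // Instantiating at a world pose defines the anchor's 6DoF pose;
        // the component keeps the GameObject world-locked from then on.
        var go = Instantiate(anchorPrefab, position, rotation);
        var anchor = go.AddComponent<OVRSpatialAnchor>();

        // Persist the anchor on-device so it can be recovered in a
        // later session via its UUID.
        StartCoroutine(SaveWhenReady(anchor));
    }

    private System.Collections.IEnumerator SaveWhenReady(OVRSpatialAnchor anchor)
    {
        // Anchor creation is asynchronous; wait until the runtime has
        // localized the anchor before saving it.
        while (!anchor.Created)
            yield return null;

        anchor.Save((savedAnchor, success) =>
        {
            if (success)
                Debug.Log($"Persisted anchor {savedAnchor.Uuid}");
        });
    }
}
```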
We’re also announcing a new Scene Understanding capability. Together with Passthrough and Spatial Anchors, Scene Understanding allows you to quickly build complex, scene-aware experiences that have rich interactions with the user’s environment. As part of Scene Understanding, Scene Model provides a geometric and semantic representation of the user’s space, so you can build room-scale mixed reality experiences. Scene Model is a single, comprehensive, up-to-date representation of the physical world that is indexable and queryable. For example, you can attach a virtual screen to the user’s wall or have a virtual character navigate the floor with realistic occlusion. You can even bring real-world physical objects into VR. To create this Scene Model, we provide a system-guided Scene Capture flow that lets users walk around and capture their scene. We’re excited to make Scene Understanding available as an experimental capability early next year.
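As a sketch of what querying the Scene Model could look like in Unity (C#), the example below loads the model if the user has already captured their room and otherwise launches the system-guided Scene Capture flow. Because Scene Understanding won’t be available until early next year, the OVRSceneManager and OVRSemanticClassification names here are illustrative assumptions, not a final API.

```csharp
using UnityEngine;

// A sketch of consuming the Scene Model: load it if a capture exists,
// otherwise kick off the system-guided Scene Capture flow. Component
// and event names are illustrative of a pre-release API.
public class SceneModelLoader : MonoBehaviour
{
    [SerializeField] private OVRSceneManager sceneManager;

    private void Start()
    {
        sceneManager.SceneModelLoaded += OnSceneModelLoaded;
        sceneManager.NoSceneModelToLoad += OnNoSceneModel;
        sceneManager.LoadSceneModel();
    }

    private void OnSceneModelLoaded()
    {
        // Query the semantic representation, e.g. find the walls so a
        // virtual screen can be attached to one of them.
        foreach (var c in FindObjectsOfType<OVRSemanticClassification>())
        {
            if (c.Contains(OVRSceneManager.Classification.WallFace))
                Debug.Log($"Found a wall at {c.transform.position}");
        }
    }

    private void OnNoSceneModel()
    {
        // No capture yet: launch the system-guided Scene Capture flow.
        sceneManager.RequestSceneCapture();
    }
}
```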
With the new Passthrough, Spatial Anchors, and Scene Understanding capabilities in Insight SDK, you’ll be able to build mixed reality experiences that blend virtual content with the physical world, creating new possibilities for social connection, entertainment, productivity, and more.
Protecting the privacy of people’s physical space is important to us. We designed Passthrough, Spatial Anchors, and Scene Understanding so that developers can create experiences that blend the physical and virtual surroundings without needing access to the raw images or videos from your Quest sensors.
Interaction SDK
With Interaction SDK, we’re making it easier for you to integrate hand- and controller-centric interactions. The Unity library, available early next year, will come with a set of ready-to-use, robust interaction components like grab, poke, target, and select. All components can be used together, independently, or even integrated into other interaction frameworks. Interaction SDK solves many of the tough interaction challenges linked to computer-vision-based Hand Tracking, offers standardized interaction patterns, and prevents regressions as the technology evolves. Last but not least, it provides tooling to help you build your own custom gestures as well.
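Interaction SDK itself isn’t available yet, but the existing OVRHand component in the Oculus Integration shows the kind of low-level hand-tracking plumbing it will standardize. Here’s a minimal C# sketch that treats an index-finger pinch as a select gesture, the sort of pattern Interaction SDK will package as a ready-made component.

```csharp
using UnityEngine;

// A minimal pinch-to-select sketch using today's OVRHand component
// from the Oculus Integration. Interaction SDK will wrap this kind of
// gesture handling in reusable, standardized components.
public class PinchSelect : MonoBehaviour
{
    [SerializeField] private OVRHand hand;

    private bool wasPinching;

    private void Update()
    {
        if (!hand.IsTracked)
            return;

        bool isPinching = hand.GetFingerIsPinching(OVRHand.HandFinger.Index);

        // Fire once on the pinch-down edge rather than every frame.
        if (isPinching && !wasPinching)
        {
            float strength = hand.GetFingerPinchStrength(OVRHand.HandFinger.Index);
            Debug.Log($"Select gesture, pinch strength {strength:F2}");
        }
        wasPinching = isPinching;
    }
}
```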
The data protections we’ve always offered around Hand Tracking apply here. The images and estimated points specific to your hands are deleted after processing and are not stored on our servers.
Tracked Keyboard SDK
Last year, we announced a Tracked Keyboard capability for developers. We’re hard at work to launch a Tracked Keyboard SDK, and we’re on track to release it early next year as part of Presence Platform.
Voice SDK Experimental
We’re also announcing Voice SDK Experimental, available in our next release so you can start to build and experiment. Voice SDK is a set of natural language capabilities that let you create hands-free navigation and new voice-driven gameplay. With Voice SDK, you can create Voice Navigation & Search, or enable Voice FAQ so users can ask for help or request a reminder. We’re also enabling new Voice-Driven Gameplay, like winning a battle with a voice-activated magic spell or talking with a character or avatar. Voice SDK is powered by Facebook’s Wit.ai natural language platform, and it’s free to sign up and get started.
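As a sketch of what voice-driven gameplay could look like in Unity (C#), the example below activates listening and reacts to a Wit.ai intent. The AppVoiceExperience component and event wiring reflect the Wit.ai Unity integration, but the namespaces and signatures may differ in the shipped Voice SDK, and the cast_spell intent is a hypothetical app-specific intent you’d define in your own Wit.ai app.

```csharp
using UnityEngine;
using Oculus.Voice;          // assumed namespace for AppVoiceExperience
using Facebook.WitAi.Lib;    // assumed namespace for WitResponseNode

// A hedged sketch of voice-driven gameplay backed by Wit.ai. The
// "cast_spell" intent is hypothetical; define your own in Wit.ai.
public class SpellCaster : MonoBehaviour
{
    [SerializeField] private AppVoiceExperience voice;

    private void OnEnable()
    {
        voice.VoiceEvents.OnResponse.AddListener(OnWitResponse);
    }

    private void OnDisable()
    {
        voice.VoiceEvents.OnResponse.RemoveListener(OnWitResponse);
    }

    private void Update()
    {
        // Start listening when the user presses the A button.
        if (OVRInput.GetDown(OVRInput.Button.One))
            voice.Activate();
    }

    private void OnWitResponse(WitResponseNode response)
    {
        // Wit.ai returns ranked intents; act on the top one.
        string intent = response["intents"][0]["name"].Value;
        if (intent == "cast_spell")
            Debug.Log("Casting a voice-activated magic spell!");
    }
}
```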
Imagine What’s Possible
We’ve also built a sample experience called The World Beyond to spark your imagination about what you can build with Presence Platform. We’ll make it available early next year as a sample project that you can use as a starting point for your own experiences.
Availability and Documentation Coming Soon
Presence Platform and the Oculus Platform SDK are designed to work together, so you can use both in tandem to start building mixed reality experiences and natural interactions on Oculus Quest devices. Developer documentation will be available in our upcoming releases so you can start prototyping.
With our next software release, we’ll be deprecating Passthrough API Experimental. Keep an eye out next month for the production launch of Passthrough API so you can start building.
We’re excited to see the mixed reality experiences you build with Presence Platform on the Quest platform. With these new capabilities, we can begin to explore what the metaverse might look like, and we’re committed to helping you create the connected, interoperable worlds that lie ahead.