GDC 2025: Strategies and Considerations for Designing Incredible MR/VR Experiences
Earlier this year we shared how Meta Quest 3 and Meta Quest 3S are redefining how users engage with our devices by enabling a wider audience to explore a myriad of high-quality mixed reality and VR experiences that simply can’t be recreated on traditional 2D screens and entertainment platforms.
Of course, VR experiences are still a core part of our content ecosystem and drive countless users to continue engaging with our devices to be transported to and immersed in entirely new realms of wonder and excitement. However, even with tools like Building Blocks and Mixed Reality Utility Kit, creating fully immersive experiences and blending the physical and virtual worlds can still pose unique design challenges.
Whether you’re building for mixed reality or VR, we want to help you bring your vision to life and succeed in creating high-quality experiences that are accessible to as many users as possible. So today, we’re bringing you some strategies and considerations from experienced developer teams that can guide your decision making as you design your next project. Dive in below to get started.
Approaching mixed reality design
Creative Director Doug North Cook from Creature joined us at GDC to provide some tips that can help you design and develop mixed reality experiences with an approach that leaves users satisfied and engaged. Here’s a summary of some pointers he shared:
Build meaningful interactions
Start at the per-object level and go deeper: Taking a layered approach helps ensure that interactable objects are optimized for their environment. Start by creating high-quality, identifiable objects, then integrate them into the broader gameplay experience, and finish with scene-level optimizations.
Leverage a product design approach: Regardless of your app genre, ensuring that users can intuitively understand, move and interact within your experience increases satisfaction and reduces frustration. Making interactions comfortable and accessible can help users stay engaged throughout your experience.
Ensure cohesion and interaction fidelity: User experiences that feel seamless maintain immersion and believability. Heightening the fidelity of interactions with virtual objects helps users feel present in your blended environment and sustains their suspension of disbelief.
Make it feel real
Focus on intelligible affordances: Clearly communicating what actions are possible and how to interact with an object or panel reduces user frustration and enables more seamless interactions, especially for users new to mixed reality.
All interactions demand audio, visual and haptic feedback: Delivering these forms of feedback serves as a confirmation mechanism that enhances immersion and provides clarity that an interaction has occurred (see the sketch after this list).
Create objects worth interacting with: Ensuring that interactable objects serve a purpose or are important in the context of your experience reduces confusion and propels progression.
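To make the feedback point concrete, here's a minimal Unity sketch that pairs audio and haptic confirmation on a grab interaction. It assumes the Meta XR Core SDK's OVRInput API for controller vibration; the OnGrabbed callback and audio clip wiring are hypothetical placeholders you'd hook up to your own interaction system.

```csharp
using System.Collections;
using UnityEngine;

// Minimal sketch: pair audio and haptic confirmation when the player grabs
// an object. OVRInput comes from the Meta XR Core SDK; OnGrabbed is a
// hypothetical callback wired to your own interaction system.
public class GrabFeedback : MonoBehaviour
{
    [SerializeField] private AudioSource audioSource;
    [SerializeField] private AudioClip grabClip;

    public void OnGrabbed()
    {
        audioSource.PlayOneShot(grabClip);                        // audio confirmation
        StartCoroutine(Pulse(OVRInput.Controller.RTouch, 0.1f));  // haptic confirmation
    }

    private IEnumerator Pulse(OVRInput.Controller hand, float seconds)
    {
        OVRInput.SetControllerVibration(1f, 0.6f, hand); // frequency, amplitude
        yield return new WaitForSeconds(seconds);
        OVRInput.SetControllerVibration(0f, 0f, hand);   // end the pulse
    }
}
```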
Ideas on leveraging the new Passthrough Camera API
We’re excited to share that all developers can start leveraging the public experimental release of the Passthrough Camera API to access the front-facing RGB cameras on Quest 3 and Quest 3S. This unlocks the ability to use room images to apply textures, provide input for AI models and calculate real-world data like brightness level, ambient color and more. You can see a quick example video below of this API being leveraged to apply an image to a texture to create a reflecting pool effect.
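As a rough illustration of the texture use case, the sketch below streams a headset camera feed onto a material. It assumes the Passthrough Camera API exposes the RGB cameras through Unity's WebCamTexture (the approach used in Meta's Unity samples) and that the required headset camera permission has already been requested and granted.

```csharp
using UnityEngine;

// Sketch: stream a passthrough camera feed onto a material's main texture,
// e.g. for a reflecting pool quad. Assumes camera permission is granted and
// that the Passthrough Camera API surfaces the cameras via WebCamTexture.
public class PassthroughToTexture : MonoBehaviour
{
    [SerializeField] private Renderer targetRenderer;

    private WebCamTexture cameraTexture;

    private void Start()
    {
        if (WebCamTexture.devices.Length == 0)
        {
            Debug.LogWarning("No camera devices found; check camera permissions.");
            return;
        }

        // On Quest 3/3S the reported devices map to the front-facing RGB cameras.
        cameraTexture = new WebCamTexture(WebCamTexture.devices[0].name);
        targetRenderer.material.mainTexture = cameraTexture;
        cameraTexture.Play();
    }

    private void OnDestroy()
    {
        if (cameraTexture != null) cameraTexture.Stop();
    }
}
```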
Perhaps most exciting, these images can be used to drive machine learning and computer vision pipelines. Being able to understand the environment and react in real time should enable interesting games as well as coaching and lifestyle applications.
We believe that this capability can be applied to a wide variety of experiences, from entertainment to business and industrial apps, but use cases are still ripe for innovation. To explore some potential use cases, we brought in a couple of teams to share how they’ve been using the Passthrough Camera API to help with content creation, player customization, object generation and wayfinding.
Tommy Palm, CEO of Resolution Games, shared some of his studio’s experiences incorporating mixed reality. With the Passthrough Camera API, Resolution Games is experimenting with capturing and leveraging real-world images to deliver compelling gameplay.
Palm’s team has played with fun ideas like capturing a friend’s face and applying it to a rag doll, which you can see an example of below.
Of course, this is just an early concept, but you can imagine how bringing anything from your room—from pictures of your cat to virtual versions of your favorite guitar—into a game and sharing with friends can enable an experience that feels both novel and authentic.
Alicia Berry from Peridot and Niantic shared a couple of use cases for the API that include object and spatial recognition. In the first example, you can see how the API enables a virtual character to recognize a tree, with generative AI enabling an appropriate subsequent reaction. The power to recognize objects and alter gameplay based on them opens up a world of possibilities.
Berry also shared an example that leverages the API with visual positioning systems to aid in navigation through densely populated or large-scale physical areas where GPS can be ineffective, like the Moscone Center at GDC. Below, you can see how this application can help guide users to a designated area.
These are just a few examples of how this new tool can be used across a variety of apps, and we can’t wait to see how you leverage it to create more complex and aware mixed reality experiences. To get started with Passthrough Camera API, visit the documentation.
Designing and rendering hit VR games
We had the pleasure of hosting Design Director Ryan Darcey and Rendering Engineer Kevin Call from Camouflaj, the studio behind the hit VR game Batman: Arkham Shadow, as they shared strategies on design and rendering for VR.
Designed exclusively for VR, Batman: Arkham Shadow lets players experience Gotham City like never before.
Design tips
Don’t reinvent the wheel: Whether you’re building for VR or traditional gaming consoles, there are many established best practices for input, interactions and movement. To ensure a friction-free experience, consider sticking to common mechanics unless you need a unique solution.
Strike a balance between new, better, and the same: If you’re pulling from an existing IP, leverage previously popular features and examine how you may be able to enhance them using the latest capabilities—don’t change things for the sake of changing them. Given that VR presents opportunities that aren’t available on traditional platforms, determine if and when features like hand tracking can be used to provide a new and unique experience.
Ship fewer features at higher quality: Focusing on quality over quantity ensures that you don’t overwhelm users and can perfect core mechanics and functionality. Ideally, you want to aim for consistently high performance across every interaction your user makes.
Easy to learn, difficult to master: Accessibility plays a major role in mixed reality and VR games as audiences continue to onboard with the technology and hardware. Delivering an intuitive experience that can be easily picked up but challenges players the longer they play can be an effective strategy for creating a game with broad appeal.
Be generous with interactions: Relaxing precision requirements can make VR experiences feel smoother and more comfortable when users interact with virtual elements. For example, if a user needs to open a door in VR, you could design the interaction to require only a half turn of the knob before the game auto-completes the motion (see the sketch after this list).
Consider baking locomotion into combat: In VR, combat actions can double as locomotion, propelling users through your virtual environment. This keeps action sequences from feeling stale and prevents users from breaking immersion by thinking about their physical surroundings outside the headset.
Focus on visible body areas: Consider which parts of the character’s body a user will most frequently see. For example, if your experience uses a first-person perspective, it may be more important to create high-fidelity arms and hands than the character’s head.
Consider how game features correlate and interact with the environment: Features like character gadgets or weapons can heighten user expectations with regard to interactability. For example, in Batman: Arkham Shadow, players may expect to be able to use the grapple hook to reach higher areas freely.
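Here's a minimal sketch of the "generous interaction" idea from the door example above. All names are hypothetical placeholders, and the knob angle would be driven by your hand tracking or controller input.

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical "generous" door knob: once the player turns it past a halfway
// threshold, the interaction auto-completes so a precise grip isn't needed.
public class GenerousDoorKnob : MonoBehaviour
{
    [SerializeField] private float fullTurnDegrees = 90f;
    [SerializeField, Range(0f, 1f)] private float autoCompleteThreshold = 0.5f;

    private bool autoCompleting;

    // Call each frame with the knob's current angle, driven by hand/controller input.
    public void OnKnobTurned(float currentDegrees)
    {
        if (!autoCompleting && currentDegrees >= fullTurnDegrees * autoCompleteThreshold)
        {
            autoCompleting = true;
            StartCoroutine(FinishTurn(currentDegrees));
        }
    }

    private IEnumerator FinishTurn(float fromDegrees)
    {
        // Animate the remaining rotation on the player's behalf.
        float angle = fromDegrees;
        while (angle < fullTurnDegrees)
        {
            angle = Mathf.MoveTowards(angle, fullTurnDegrees, 180f * Time.deltaTime);
            transform.localRotation = Quaternion.Euler(0f, 0f, angle);
            yield return null;
        }
        // A door controller could now swing the door the rest of the way open.
    }
}
```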
Rendering tips
Mitigate constraints with baked lighting: To reduce performance costs, Camouflaj leveraged baked lighting throughout the Arkham Shadow experience and worked to optimize lightmaps for a more authentic look and feel.
Static object lighting: To achieve higher quality lighting for static objects at a lower performance cost, leverage Unity’s built-in lightmaps. Next, achieve direct specular lighting by evaluating your specular BRDF using the dominant direction when applying lightmap lighting. Finally, to achieve a more realistic contrast between light and shadows, construct a mask from the diffuse luminance by specifying a minimum threshold along with attenuation and multiply it with the specular in the shader.
Dynamic object lighting: For dynamic objects, Camouflaj leveraged Unity’s built-in light probes, which store spherical harmonics. To achieve balance between placing baked lights for both static environments and dynamic characters, the team deployed a system called “custom probes” that allowed them to override spherical harmonics for characters using a MaterialPropertyBlock. Custom probe components could be placed as needed to override lighting, allowing artists to add light using a three-point light setup. Unity’s SphericalHarmonicsL2 API was used to accumulate lighting from the three sources.
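The sketch below illustrates the general idea in Unity's built-in pipeline: a three-point light rig is accumulated into a SphericalHarmonicsL2 and pushed to a character's renderer through a MaterialPropertyBlock. The unity_SHAr..unity_SHC coefficient packing is a common community reconstruction of what Unity's shaders expect, not Camouflaj's shipped code.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch of a "custom probe" override: accumulate a three-point light rig
// into SphericalHarmonicsL2, then push the coefficients to a character's
// renderer via a MaterialPropertyBlock so they override the baked probes.
public class CustomProbe : MonoBehaviour
{
    [SerializeField] private Renderer characterRenderer;
    [SerializeField] private Light[] rigLights = new Light[3]; // key, fill, rim

    private void LateUpdate()
    {
        var sh = new SphericalHarmonicsL2();
        foreach (var l in rigLights)
        {
            if (l != null)
                sh.AddDirectionalLight(-l.transform.forward, l.color, l.intensity);
        }

        var mpb = new MaterialPropertyBlock();
        characterRenderer.GetPropertyBlock(mpb);
        SetSHCoefficients(sh, mpb);
        characterRenderer.SetPropertyBlock(mpb);
    }

    private static readonly string[] Names =
        { "unity_SHAr", "unity_SHAg", "unity_SHAb", "unity_SHBr", "unity_SHBg", "unity_SHBb" };

    // Pack L2 spherical harmonics into the vectors Unity's shaders sample.
    private static void SetSHCoefficients(SphericalHarmonicsL2 sh, MaterialPropertyBlock mpb)
    {
        for (int c = 0; c < 3; c++) // r, g, b channels
        {
            mpb.SetVector(Names[c], new Vector4(sh[c, 3], sh[c, 1], sh[c, 2], sh[c, 0] - sh[c, 6]));
            mpb.SetVector(Names[c + 3], new Vector4(sh[c, 4], sh[c, 5], sh[c, 6] * 3f, sh[c, 7]));
        }
        mpb.SetVector("unity_SHC", new Vector4(sh[0, 8], sh[1, 8], sh[2, 8], 1f));
    }
}
```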
Realtime lighting: To lower the performance cost of mixed lighting (baked plus realtime) when necessary, Camouflaj used Unity’s Subtractive lighting mode. Direct light can be added to static objects in the lightmap, while realtime lighting is added to dynamic objects. Indirect light can then be added to static objects in the lightmap and to dynamic objects with light probes.
The use of baked lighting and subsequent lightmap optimizations reduced performance costs and delivered an authentic Gotham City experience.
Realtime shadows: To add shadows cast from characters onto the static environment, the team used “fake shadows”: they rendered shadow maps, then rendered opaque objects with forward lighting. Next, they rendered a projection volume for each fake shadow light, sampling depth to reconstruct world position and return an attenuation value that was blended with the frame buffer. Lastly, transparent objects were rendered.
Reflections: Camouflaj laid the foundation for reflections by using Unity’s built-in reflection probes to store a capture of the environment in a cubemap. To overcome hurdles in scenes with complex geometry and lighting conditions, they implemented a technique called “cubemap normalization”: the spherical harmonics are stored at the point of the reflection probe capture, and the reflection is then scaled in the fragment shader by the ratio of the diffuse lighting at the fragment to the diffuse lighting at the reflection probe, sampled with the vertex normal. To augment reflections from probes, the team used reflection proxies, which are images captured of a specific object in the scene. At runtime, proxies were rendered and reflected over a known reflection plane into a screenspace buffer that objects can sample.
To learn more details about these design and rendering tips, be on the lookout for the full session with Camouflaj coming soon to our YouTube channel.
Building a tabletop MR game with Popup Asylum
Creative Director Martin Ashford and Director Mark Hogan from Popup Asylum, the studio behind Battlenauts, took attendees on a journey through the creation of the hit tabletop mixed reality game. Walking through key stages from ideation to testing, Hogan and Ashford shared some key insights that reveal how you can approach and accelerate development.
Battlenauts delivers a unique tabletop MR experience with both multiplayer and single player modes.
Getting started
The team at Popup Asylum had created a multiplayer VR prototype based on the classic board game Battleship before coming across Meta’s Spirit Sling showcase. Upon playing it, they realized the potential of mixed reality to elevate immersive board game experiences by mirroring the social presence of an in-person game night, without the hassle of coordinating around people’s schedules.
Spirit Sling is a social mixed reality showcase that demonstrates how to build exciting and social tabletop games in blended environments.
Meta offers several showcases that provide developers with best practices on implementing mixed reality and social capabilities. By exploring the showcase, Popup Asylum was able to lean on Spirit Sling’s tabletop framework for logic that handles game board and user avatar positioning. This framework also provided wrappers for Photon Fusion, which the team integrated with the Destination API to support multiplayer game invites between users. Whether it’s combining mixed reality APIs, setting up hand tracking or anything in between, looking through showcases can provide the inspiration for your next project and the steps to help you achieve it.
Mitigating differences between VR and MR environments
Immersive VR environments are often much larger than the space people can safely navigate in their physical environment. When adjusting their original VR prototype to support Passthrough-based gameplay, Popup Asylum found they needed to scale down the play space to fit a more realistic tabletop size.
To ensure that rendered virtual objects respect the dimensions of a user’s physical space, the team implemented a solution that fades out any object rendered beyond the physical walls. This was achieved by first using the Mixed Reality Utility Kit to generate a 3D mesh of the room. The mesh was then rendered with a multiply blend mode to darken objects drawn beyond it. When composited with the camera feed, the darkened areas let the physical world show through, effectively applying transparency to objects beyond the walls.
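As a rough sketch of the blend setup (not Popup Asylum's actual code), the snippet below configures a material for multiply blending so a dark room mesh darkens anything drawn beyond the walls. It assumes the room mesh comes from MRUK's scene mesh and that the material's shader exposes _SrcBlend/_DstBlend properties.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: configure a multiply-blend material for the MRUK-generated room
// mesh. Pixels covered by the mesh are multiplied by its (dark) color, so
// objects rendered beyond the physical walls fade toward the passthrough
// feed. Assumes a shader exposing _SrcBlend/_DstBlend properties.
public class RoomFadeMaterialSetup : MonoBehaviour
{
    [SerializeField] private Material roomDarkenMaterial;

    private void Awake()
    {
        // Multiply blend: frameBuffer * srcColor.
        roomDarkenMaterial.SetInt("_SrcBlend", (int)BlendMode.DstColor);
        roomDarkenMaterial.SetInt("_DstBlend", (int)BlendMode.Zero);
        // Draw after opaque geometry; with default depth testing, the wall
        // mesh only passes (and darkens) where objects lie beyond it.
        roomDarkenMaterial.renderQueue = (int)RenderQueue.Transparent;
    }
}
```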
Avatar positioning
The use of Meta Avatars gave Popup Asylum an opportunity to provide users with high-quality ways to represent themselves in Battlenauts. However, because Meta Avatars now support rendered legs, the team had to reconcile a seated tabletop experience with the avatars’ default standing position. As a solution, they decided to pin seats to the avatars’ hips using the Avatar SDK’s attachables system. This enables a familiar, seated board game experience while maintaining eyeline consistency between users.
Keep an eye out for the full GDC session with more insights from Martin Ashford and Mark Hogan coming soon to our YouTube channel.
Dive deeper with on-demand sessions
While GDC is coming to a close, the learnings don’t end here. Soon, we’ll be publishing our sessions on YouTube so you can watch (or rewatch) to discover a trove of insights for accelerating development, building incredible experiences and growing a business with Horizon OS.
Stay tuned on the developer blog, X and Facebook to see when these sessions drop, and don’t forget to subscribe to our newsletter in your Developer Dashboard settings to unlock monthly recaps of the biggest news impacting developers.