Creating audio experiences for virtual, augmented, and mixed reality applications can be a challenge. The built-in audio systems in popular game engines were designed primarily for devices such as consoles and PCs, and they carry significant downsides when translated to XR devices like VR headsets. For example, with these systems, users can’t tell whether sounds are coming from in front of or behind them, or from above or below them. Sound sources directly to the side can also produce hard-panning, a phenomenon that sounds and feels unnatural.
Today, we’re rolling out immersive audio capabilities within
Presence Platform that will help developers overcome these challenges. The new
Audio SDK contains everything needed to create audio experiences for XR applications that properly localize sounds in 3D space and create a sense of space in the virtual environment, allowing users to be fully immersed in the auditory scene. The spatial audio rendering and room acoustics functionality included in the SDK improves and expands upon the
legacy Oculus Spatializer, and this feature set will continue to grow in future releases.
Audio SDK was designed specifically to solve the challenges of developing audio for XR in an approachable yet flexible way. Developers, even those with no audio experience, can quickly get up and running with the core functionality fundamental to creating a sense of immersion and presence in XR experiences. Apps built with the Audio SDK can run on almost any standalone mobile VR device, as well as PCVR (SteamVR, etc.) and devices from other manufacturers.
At Connect, we provided an overview of spatial audio and room acoustics and explained why they’re so important for building immersive audio experiences in VR, MR, and AR. This developer session includes a demonstration of these audio features in
Meta Horizon Workrooms. The spatial audio, room acoustics, and voice directivity features that help create such an immersive experience in Workrooms are now available to developers via the Audio SDK.
Spatial Audio Sources
Audio sources in a scene can be made to render with HRTF-based spatial audio by attaching the corresponding component from the Audio SDK. This allows users to localize and differentiate between sounds coming from above them, below them, behind them, and in front of them, in addition to the basic left/right separation that game engines apply by default. The Audio SDK supports point sources as well as ambisonic sources.
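For illustration, a minimal Unity sketch of this setup might look like the following. It relies on Unity’s standard AudioSource spatializer hooks, which route audio through whichever spatializer plugin is selected in the project settings; the MetaXRAudioSource component name is an assumption here, so consult the Audio SDK documentation for the shipped type.

```csharp
using UnityEngine;

// Minimal sketch: configure a GameObject as an HRTF-spatialized point source.
// Assumes the Audio SDK's spatializer plugin is selected under
// Project Settings > Audio > Spatializer Plugin, and that the SDK ships a
// source component (named MetaXRAudioSource here as an assumption).
public class SpatializedChime : MonoBehaviour
{
    [SerializeField] private AudioClip chimeClip;

    void Start()
    {
        // Standard Unity audio source; the spatializer plugin hooks into it.
        var source = gameObject.AddComponent<AudioSource>();
        source.clip = chimeClip;
        source.loop = true;
        source.spatialBlend = 1.0f;  // fully 3D
        source.spatialize = true;    // route through the selected spatializer plugin

        // Attach the Audio SDK's component so this source renders with HRTFs
        // (hypothetical name -- see the SDK docs for the shipped component).
        gameObject.AddComponent<MetaXRAudioSource>();

        source.Play();
    }
}
```

With spatialization enabled, moving the GameObject around the listener is what lets users localize the sound above, below, in front of, or behind them.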
Room Acoustics Simulation
The room acoustics functionality in the Audio SDK creates early reflections and late reverberation that accurately model a room based on its size, shape, and surface materials. When used in XR applications, this gives users a sense of the environment they’re in and helps make objects sound like they all belong in the same space.
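As a rough sketch of how a room might be described in Unity: the component and property names below (MetaXRAudioRoomAcousticProperties, its dimension fields, and the material presets) are assumptions for illustration, and the Audio SDK documentation has the actual API. Matching the simulated room to the visible geometry is what anchors sources in the space.

```csharp
using UnityEngine;

// Minimal sketch: describe the listening space so the acoustics simulation
// can generate matching early reflections and late reverberation.
// NOTE: the component and property names below are assumptions, not the
// confirmed Audio SDK API -- consult the SDK documentation.
public class MeetingRoomAcoustics : MonoBehaviour
{
    void Start()
    {
        // Hypothetical scene-level room description component.
        var room = gameObject.AddComponent<MetaXRAudioRoomAcousticProperties>();

        // Shoebox room dimensions in meters.
        room.width = 6.0f;   // left/right
        room.height = 3.0f;  // floor/ceiling
        room.depth = 8.0f;   // front/back

        // Surface materials control absorption, which shapes reflection
        // strength and reverb decay time. Preset names are illustrative.
        room.floorMaterial = MetaXRAudioRoomAcousticProperties.Material.Carpet;
        room.ceilingMaterial = MetaXRAudioRoomAcousticProperties.Material.AcousticTile;
        room.wallMaterial = MetaXRAudioRoomAcousticProperties.Material.Plasterboard;
    }
}
```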
Oculus Spatializer
The existing Oculus Spatializer SDK will continue to be supported and will remain available to developers working in Unreal, FMOD, or Wwise, or to those who want a native API solution. For new Unity projects, developers should use the new
Audio SDK to leverage its quality-of-life upgrades and try out experimental features. Starting with
v49, any new functionality introduced will be included exclusively in the new Audio SDK.
Get Started with Audio SDK
Audio SDK launches today for Unity, with support for Unreal, Wwise, and FMOD planned for the future. Get started by referencing the
documentation and the samples made available for developers’ reference. At this time, we recommend adopting the Audio SDK only if you’re starting a new Unity project; we don’t recommend migrating existing projects yet.
Visit the
Developer Center to learn more about the broader set of Presence Platform capabilities.