Today we hear from Oculus Software Engineers Xiang Wei and Kevin Xiao. This team works with game engines like Unity and UE4 to ensure the best developer experience, while also helping partners improve their game engine workflows. In this post, they share more about recent updates to the Oculus Mixed Reality Capture Tools and the integrations that bring this essential feature to life.
As VR developers, we often find that it's not easy to communicate the virtual reality experience to people who aren't wearing a headset. Mixed Reality Capture (MRC) composites a person's real-world image with the virtual world, so viewers can easily understand what that person is doing in VR. It also opens up new possibilities for sharing the VR experience, whether you're marketing your game through high-impact trailers or elevating your booth at events by giving attendees a preview of the experience as they walk by. It can also expand the reach of your games and applications by giving users the ability to share their experience in new ways, both with family and friends at home and with the world.
The hardware + software you need to get started
Here is a list of everything you'll need for mixed reality capture. We won't explain much about the Quest headset, since you already know what it is.
Personal Computer
We'll need a Windows 10 PC to run OBS and connect the camera. It doesn't have to be a particularly high-performance computer; a gaming laptop will usually be sufficient.
Wireless Router
This is important: because the Quest connects to the PC over WiFi, the connection quality matters. It's recommended to use a high-quality wireless router for this task.
Green Screen + Lighting
We'll need a green screen for the background removal. If your capture environment doesn't have great lighting conditions, it's helpful to set up a few photographic lights to add brightness and remove shadows.
Camera
Any high-quality USB or HDMI camera will do the job. It's recommended to use a camera that can output 1080p at 60 fps.
How to integrate with your preferred game engine
If you're familiar with using our Unity and Unreal Engine 4 integrations, you're already most of the way there. All the functionality has been included in our integration packages since the 1.38 release. It's recommended to update your application to our latest release, as it includes important features and bug fixes. You can download our latest integration release for up-to-date mixed reality capture support:
Unity Integration,
UE4 Integration.
MRC with Unity
For Unity, MRC is included in the
Oculus Integration package. It's enabled through OVRManager and other Oculus classes when the Mixed Reality Capture for Quest setting is set to automatic (which it is by default). The Unity Integration reads the camera configuration file, then adds an additional in-game camera matching that configuration to capture the MRC view. The integration also controls this camera so that it follows the OVRCameraRig it's attached to.
If your Unity project does not use the OVRCameraRig and/or the OVRManager script, you will need to make some small changes to your application, but you can continue using your own scripts and components for the rest of your game. Simply add the code snippets found on the
MRC with Unity documentation page into your system component's Awake event.
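To make that concrete, here is a minimal sketch of what such a bootstrap component could look like, assuming a scene with no OVRCameraRig. The exact snippets to use are the ones on the documentation page; the class name here is just illustrative:

```csharp
using UnityEngine;

// Hypothetical bootstrap for projects that don't use OVRCameraRig.
// Attach it to a persistent "system" object in your first scene.
public class MrcBootstrap : MonoBehaviour
{
    void Awake()
    {
        // OVRManager drives the Oculus Integration, including MRC when the
        // Mixed Reality Capture for Quest setting is left on automatic.
        if (FindObjectOfType<OVRManager>() == null)
        {
            var go = new GameObject("OVRManager");
            go.AddComponent<OVRManager>();
            // Keeping a single persistent manager also avoids tearing down
            // the MRC session on level transitions (see the note below).
            DontDestroyOnLoad(go);
        }
    }
}
```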
Older versions of the Unity Integration would stop MRC when OVRManager was destroyed, so if your application creates and destroys OVRManager in each level, the MRC session would not persist between levels. We fixed this in 1.41 by keeping the MRC session alive even after OVRManager is destroyed. If you encounter this issue, please upgrade to our latest integration.
MRC with UE4
For UE4, MRC is included in the OculusVR plugin that ships with the engine, so if you are using a compatible plugin, you have MRC available out of the box. The MRC part of the plugin automatically reads the camera configuration and handles the network connection. When connected, it automatically adds a SceneCapture2D to render the MRC view, which by default follows the first VR player pawn in the game. The position and configuration of the SceneCapture2D are controlled by the plugin according to the camera configuration from OVRPlugin.
Player characters aren't always set up the same way, and in some cases the auto-selected tracking reference (the first player pawn) may not be the correct component; depending on the player setup, it may be offset from the actual tracking origin. For this case, we provide a Set Tracking Reference Component function, which lets you point the MRC tracking reference at the correct component for your setup.
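If you prefer setting this up in code rather than as the Blueprint node, a rough C++ sketch follows. Note that the owning class, the component reference, and the function library name (UOculusMRFunctionLibrary) are assumptions that may vary by plugin version, so check the OculusMR module headers in your engine:

```cpp
#include "OculusMRFunctionLibrary.h"

// Hypothetical pawn method: point MRC at the scene component that represents
// the real tracking origin instead of the auto-selected first player pawn.
void AMyVRPawn::BeginPlay()
{
    Super::BeginPlay();

    // VROrigin is assumed to be the USceneComponent at your tracking origin.
    UOculusMRFunctionLibrary::SetTrackingReferenceComponent(VROrigin);
}
```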
Things to remember
While we've tried to make the implementation as simple as possible, there are some things to keep in mind when developing your app for Mixed Reality Capture.
Prep your environment
Make your environment compatible with a third-person camera view. Leave some empty space around the player so nearby objects don't obstruct the MRC camera, or make those objects visible only to the VR camera. Also, the camera will sometimes show what's behind the player's head, so make sure the environment behind the player can be rendered at high quality, even if players rarely look in that direction in-game.
Visual effect considerations
Make sure your app is flexible when rendering visual effects. As you saw in the optimizations that we made, there are a number of differences between the eye capture camera and the MRC camera. Design your visual effects to be compatible with different resolutions and aspect ratios. Make any interfering HUD, UI components, or body models visible only to the VR camera so they don't conflict with the camera capture.
Hide unnecessary game objects and components (Unity)
When MRC is activated, it constructs new third-person MRC cameras by cloning the main camera, inheriting all the components and child objects attached to it. Although that's the expected behavior most of the time, it can cause problems if some of those components or objects don't work well when attached to the MRC cameras. There are a few methods to hide unnecessary components and objects from the MRC camera while keeping them visible in VR; one possible approach is sketched below, and more information can be found on the MRC with Unity documentation page.
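As one possible sketch (the documentation page describes the supported methods), you can exploit the cloning behavior itself: a script attached to a VR-only child of the main camera gets cloned along with it, and the cloned copy can detect that it is not the original and hide itself. All names here are illustrative:

```csharp
using UnityEngine;

// Attach to a VR-only child of the main camera, e.g. an in-helmet HUD.
// The first instance to wake up is treated as the real VR object; any copy
// created later by MRC's camera cloning disables its own renderers, so the
// object stays visible in VR but not in the third-person capture.
public class VrCameraOnly : MonoBehaviour
{
    static VrCameraOnly original;

    void Awake()
    {
        if (original == null)
        {
            original = this; // scene-load instance on the player's camera
            return;
        }
        // This instance lives on an MRC camera clone: hide it.
        foreach (var r in GetComponentsInChildren<Renderer>())
            r.enabled = false;
        foreach (var c in GetComponentsInChildren<Canvas>())
            c.enabled = false;
    }
}
```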
Find the right tracking origin (UE4)
Even though we do our best to automatically select the correct tracking origin based on your game's movement setup, the default can be incorrect. If you make changes to the camera rig in Unity, or use a different player component setup in UE4, the default tracking origin for the MRC camera may be wrong, causing the game capture and the real-world camera to be misaligned. Make sure you find the correct tracking origin if the default doesn't work for you.
Debug MRC on Rift S through game editors
MRC shares the same main components and camera calibration workflow on both Rift and Quest, so when you enable MRC for your Quest application, you also enable it on Rift. It's usually much easier to debug MRC functionality on Rift S through the "VR Preview" of the game editors than by testing the application on Quest. See the Oculus Rift: Mixed Reality Capture page for further details about MRC on Rift.
Capture workflow overview
See below for a diagram that shows the entire capture workflow. The camera calibration tool is on the left; it receives the tracking space information from the Quest for calibration. On the right we have
Open Broadcaster Software (OBS). Many streamers use OBS since it's an all-in-one solution for capture, composition, and recording. The MRC plugin runs inside OBS; it communicates with the Quest to control the capture and obtains the game frames for the final composition.
Camera calibration overview
The purpose of camera calibration is to determine all the parameters the virtual camera needs to perfectly match the physical camera. The calibration tool runs on your PC, which is connected to the camera. During calibration, the tool connects to the Quest to obtain its tracking space, then calculates the camera configuration by matching a Touch controller's poses between physical space and the Quest tracking space. Finally, it pushes the camera configuration back to the Quest so that applications know the required parameters of the physical camera and can construct a matching virtual camera when mixed reality capture starts.
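Conceptually (a simplified view, not necessarily the tool's exact algorithm), the extrinsic part of this calibration can be thought of as follows: sample the controller's position at several instants, express it both in the Quest tracking space and relative to the physical camera, and solve for the rigid transform that best aligns the two point sets, with intrinsics such as field of view estimated alongside:

\[
(R^{*}, t^{*}) \;=\; \arg\min_{R,\,t} \;\sum_{i=1}^{N} \bigl\| R\,p_i^{\mathrm{track}} + t - p_i^{\mathrm{cam}} \bigr\|^{2}
\]

Here \(p_i^{\mathrm{track}}\) is the controller position in tracking space at sample \(i\), \(p_i^{\mathrm{cam}}\) is the corresponding position in the camera's frame, and \((R, t)\) maps tracking space into camera space.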
Capturing with OBS (Open Broadcaster Software)
After calibrating the camera, we can launch OBS and connect it to a Quest app through the MRC plugin we built for OBS. When MRC is activated, the Quest application renders the virtual world from the perspective of the virtual camera and sends the audio and video frames to our OBS plugin.
Each video frame contains both the background and the foreground of the virtual world. OBS then takes the video of the real world and combines the layers to generate the composited mixed reality footage. We can then record and save the final video, or start a live streaming session.
The following documentation page provides more info on
compositing your scene using OBS, including a detailed outline of how to set up the capture and synchronize the audio/video sources during composition.
Conclusion
Mixed Reality Capture is a helpful tool for sharing the VR experience with your growing community. What we've shared above is just the beginning; stay tuned, as more updates are around the corner!
- Xiang Wei and Kevin Xiao