Unity’s UI system makes it easy to create user interfaces, but can we use it for VR applications? Fortunately, the answer is yes.
At the end of this post, you'll find a link to an example project whose scripts and resources contain everything you need to convert your existing Unity UI to a VR-enabled UI.
The components in the project support two input schemes commonly used in VR applications.
The first is the gaze-pointer, where your user controls a pointer with their head orientation and interacts with objects or UI elements as they would using a mouse pointer. The “clicking” event can come from a gamepad button or tap on the Gear VR touchpad.
The second is a pointer similar to a conventional mouse pointer, which moves across a world-space plane such as a UI panel hovering in space, or a computer monitor in the virtual world.
Let’s begin with a brief look at how the Unity UI system works.
The Unity GUI System
Unity’s UI system consists of a few key components:
- EventSystem
- InputModules
- RayCasters
- Graphic components: Button, Toggle, Slider, etc.
The EventSystem is the core through which all events flow. It works closely with a few other components, including the InputModule component, which is the main source of events handled by the EventSystem. In the Unity UI system, only one InputModule is active in a scene at a time. Built-in implementations of a mouse input module and a touch input module manage state with respect to the source of the pointing (i.e., the mouse or touches), and they are responsible for detecting the intersection of these pointer events with graphic UI components. The actual detection is implemented in a ray caster class such as GraphicRaycaster.
A ray caster is responsible for some set of interactive components. When an InputModule processes a pointer movement or touch, it polls all the ray casters it knows about, and each one detects whether or not that event hit any of its components. There are two types of built-in ray caster in Unity’s UI system: the GraphicRaycaster (for Canvases) and the PhysicsRaycaster (for physics objects).
In a mouse-driven or touch-driven application, the user touches or clicks on some point on the screen which corresponds to a point on the viewport of the application. A point in a viewport corresponds to a ray in the world, from the origin of the camera. Because the user may have intended to click on any object along that line, the InputModule is responsible for choosing the closest result from all the ray intersections found by the various ray casters in the scene.
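To make that relationship concrete, here is a minimal sketch (not part of the sample project) of how a screen point becomes a world-space ray with Unity's built-in API, with the closest hit along that ray winning:

using UnityEngine;

// A rough sketch, not from the sample project: a screen point from a mouse
// click corresponds to a ray into the world that starts at the camera.
public class ScreenPointToRayExample : MonoBehaviour
{
    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Convert the 2D mouse position into a world-space ray.
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);

            // Ray casters test this ray against their objects; the input
            // module keeps only the closest hit.
            RaycastHit hit;
            if (Physics.Raycast(ray, out hit))
            {
                Debug.Log("Closest hit: " + hit.collider.name);
            }
        }
    }
}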
Why doesn’t this work in VR?
The short answer is that there’s no screen in VR, and therefore no visible surface for a mouse to move across.
One way to provide GUI control in VR is to create a virtual screen in world space for the mouse pointer to traverse. In this approach, head movement doesn't control the pointer; instead, the pointer is moved across the virtual screen according to mouse movement. Note that this differs from Unity's world-space UI system, in which clicks and touches still originate from the origin of the camera which shows the image the user is clicking on, even though the UI is in world space.
Another common tool for interacting with VR applications is the gaze pointer, which is always in the middle of the user’s field of view and is controlled by the user’s head movement. A gaze pointer also works as a ray cast, but from a point between your eyes, not from the origin of a camera as Unity’s UI system expects. The ray could even originate from a pointer in your hand if you had a tracked input device.
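In code, this kind of pointer is most naturally expressed as a ray built from a transform rather than from a screen position. A small hypothetical sketch (the rayTransform field is ours, not from the project):

using UnityEngine;

// Hypothetical sketch: a VR pointer expressed as a ray built from a transform.
public class PointerRayExample : MonoBehaviour
{
    // Could be the CenterEyeAnchor for a gaze pointer, or a tracked
    // controller transform for a hand pointer.
    public Transform rayTransform;

    public Ray GetPointerRay()
    {
        // The ray starts at the pointer's position and points where it is
        // facing; no camera or screen position is involved.
        return new Ray(rayTransform.position, rayTransform.forward);
    }
}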
Unlike mouse pointing and touching, these pointers can't be represented as a ray originating from the origin of a camera, and Unity's UI system is very much oriented around screen positions.
A ray of hope
Luckily, it’s not too hard to modify the UI system to make the event system work with rays in world space rather than using a screen position associated with a camera. In the future, Unity’s UI system may work on a deeper level with rays, but for now we take the approach of using rays in our raycasting code, and then converting this back to a screen position for compatibility with the rest of the Unity UI system.
If you open the demo project attached to this blog post, you will notice that we added the following classes, each of which extends or replaces a built-in Unity UI class:
- OVRInputModule, which derives from PointerInputModule and handles ray-based pointing
- OVRRaycaster, a ray-based replacement for GraphicRaycaster on Canvases
- OVRPhysicsRaycaster, a ray-based counterpart to PhysicsRaycaster for physics objects
We will examine the code in each of these classes later. Before going any further, now would be a good time to open the sample project.
Running the Sample Project
The project attached to this blog post is a Unity 5.2 project, so we recommend you use this version of Unity to open it. However, all of the code included will also work with Unity 4.6 and Unity 5.1.
You will also need the latest version of the Oculus Unity Utilities, available for download here. Once you have downloaded the Integration, import it into the project as described here.
In the project you'll find two scenes in the Scenes directory: Pointers and VRPointers. Pointers is a scene using normal Unity UI canvases and a normal camera. VRPointers is the same scene, but with an OVRCameraRig and the UI set up to work in VR. Feel free to try these scenes out before we continue, but remember that you will need to set the “Virtual Reality Supported” option off or on respectively to run these scenes. You'll find this option in Player Settings.
Now let’s walk through how to use OVRInputModule, OVRRaycaster, and OVRPhysicsRaycaster (and a few other helper classes) to go from the non-VR version to the VR version. Once we’ve stepped through the process we’ll dig into how these classes work.
Step by Step UI Conversion
Open up the Pointers scene and press play to run it in the editor. Notice that it works like a conventional non-VR application: you can move your mouse around on the screen and use it to move sliders and click on check boxes.
Now let’s look at how to convert this scene to VR. Make sure you’ve left Play Mode before continuing.
Step 1: Replace Camera with an OVRCameraRig
Delete the Camera from the scene and replace it with the OVRCameraRig prefab from the directory OVR->Prefabs. (If you don’t see OVRCameraRig listed in your Project view, make sure you imported the Oculus Integration package). You may choose to place it where the camera previously was, or anywhere else you feel provides a good vantage point of the UI.
If you are using Unity 5.1 or later then at this point you should also make sure that “Virtual Reality Supported” is turned on in the Standalone Player Settings.
Step 2: Change the InputModule
Select the EventSystem in the Hierarchy view. In the Inspector, notice it has a StandaloneInputModule component. This is the input module for normal mouse input. Remove this component (right-click and select Remove Component) and add the new OVRInputModule from the directory Assets->Scripts. The OVRInputModule handles ray-based pointing, so we need to set the Ray Transform property of this component. Do this by dragging the CenterEyeAnchor from the OVRCameraRig onto this slot, which means you'll be pointing with your head.
In order to enable gamepad support you should also add the OVRGamepadController component to the EventSystem object.
Step 3: Add a gaze pointer
We'd like to add a visual pointer in the world that moves around with your gaze. Find the GazePointerRing prefab in the Assets->Prefabs directory and drop it into the scene. The OVRInputModule will automatically find this and move it around the scene as you look around. Notice that this prefab has some other scripts on it for particle effects. These are all optional: the only part required to work with the OVRInputModule is the OVRGazePointer component.
Drag the OVRCameraRig object on to the CameraRig slot so the OVRGazePointer component knows about the CameraRig.
Step 4: Set up the Canvas
Any world-space Canvas object can be manipulated in VR with a few changes. There are actually three canvases in this scene, so you’ll need to repeat this step for each of those. First, let’s find and select the JointsCanvas object under the Computer object.
4a: The first thing you’ll notice is that the Canvas component doesn’t have an Event Camera reference anymore. That’s because we deleted that camera, so now let’s add a reference to one of the cameras in the OVRCameraRig. In Unity 4.6 you can choose the LeftEyeAnchor or RightEyeAnchor camera. In 5.1+ the only choice is CenterEyeAnchor.
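If you prefer to wire this up from code, for example when spawning canvases at runtime, the same assignment can be made through the Canvas.worldCamera property. A small sketch with hypothetical field names:

using UnityEngine;

// Sketch: assigning the Event Camera from code instead of the Inspector.
public class AssignEventCamera : MonoBehaviour
{
    public Canvas worldSpaceCanvas;  // the Canvas being converted
    public Camera eyeCamera;         // e.g. the camera on CenterEyeAnchor

    void Start()
    {
        // Equivalent to setting the Event Camera field in the Inspector.
        worldSpaceCanvas.worldCamera = eyeCamera;
    }
}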
4b: In the Inspector, you'll notice the canvas has a GraphicRaycaster component. This is used to detect when your mouse intersects with GUI components. Let's remove that and replace it with an OVRRaycaster component (in the Scripts directory), which does the same thing but works with rays instead of mouse positions.
4c: On the new OVRRaycaster object, change the Blocking Objects drop down box selection to All. This will make sure that our gaze is blocked by objects such as the lever in the scene.
GUI Gaze Pointing Ready!
At this point in the process, you should be able to run the scene and use your gaze and the space bar to interact with the GUI elements. You can change the gaze “click” key from the space bar to anything else in the Inspector panel for the OVRInputModule. You can also configure a gamepad button to act as gaze-“click”. Remember: if you only completed step 4 for the JointsCanvas then at the moment you will only be able to gaze-click on that canvas (the pink one with vertical sliders).
Step 5: Interacting with Physics objects
Add the OVRPhysicsRaycaster component to the OVRCameraRig. This new component looks very similar to Unity's built-in PhysicsRaycaster. You'll notice in the Inspector that it has an Event Mask property. This filter specifies which objects in the scene this ray caster will detect. Set this to just the “Gazable” layer. The scene has been set up so that all interactive components are in the “Gazable” layer. Run the scene again and now try gaze-clicking on the lever in the middle of the scene.
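If you want to do the equivalent layer setup from code rather than the Inspector, the Event Mask boils down to a LayerMask. A small sketch, assuming a layer named “Gazable” exists in your project:

using UnityEngine;

// Sketch: putting an object on the "Gazable" layer and building the
// corresponding mask from code. Assumes the layer exists in your project.
public class GazableSetup : MonoBehaviour
{
    void Start()
    {
        // Put this interactive object on the "Gazable" layer so a ray caster
        // whose Event Mask includes that layer will detect it.
        gameObject.layer = LayerMask.NameToLayer("Gazable");

        // The Event Mask itself is just a LayerMask like this one.
        int gazableMask = LayerMask.GetMask("Gazable");
        Debug.Log("Gazable mask value: " + gazableMask);
    }
}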
Step 6: World-space mouse pointers
Let's add a world-space pointer to the scene. This pointer will act as if it were a mouse on a virtual monitor, and will only be activated when you look at the Canvas with your gaze pointer. This is a good way to provide a familiar input device in VR.
These steps could be followed for any Canvas you want to add a mouse pointer to. For now, let’s choose JointsCanvas.
6a: Find the CanvasPointer prefab and instantiate it as a child of the Canvas. This object has no fancy scripts on it, existing purely as a visual representation of the pointer. You could replace this with any kind of 2D or 3D pointer representation you like.
6b: Drag the newly-added pointer to the Pointer reference slot on the OVRRaycaster for the Canvas. This lets it know it should use this as a pointer.
6c: Add the OVRMousePointer component to the Canvas object. This takes responsibility for moving the pointer around the canvas.
That’s it! Now if you run the scene, you’ll find that you can still interact with UI and physics elements using the gaze pointer, and you can now use your mouse to control the virtual mouse pointer on a Canvas when you look at it.
The important thing to note here is that the initial scene only contained normal Unity UI components. So you can perform the exact same process to VR-enable any existing Unity UI scene.
So how does this all work?
In this section we'll take a look at how the scripts we used in the previous section allow you to convert an existing Unity UI to a VR UI. The scripts have been written so that they can be used in your own projects simply by following the steps above; it's not strictly necessary to understand their inner workings, but this section is included for the technically curious.
To make the magic happen, we made extensions to some of the core Unity UI classes. Let’s start by taking a look at the core class handling our input extensions….
OVRInputModule
Unity's StandaloneInputModule has a lot of pointer interaction and GUI element navigation code and would have been a great class to derive ours from, but unfortunately most of its core functionality is private rather than protected, so we couldn't use all the good stuff in our new class. Given this situation, we had three options:
- Branch Unity's UI system and make our own version of the UI. In a private project, this would have been the best choice, but since we want you to be able to use it with as little fuss as possible, we didn't want to ask you to install a new UI DLL, so this was out.
- Ask Unity to change these functions to protected for us. This will take time, but we did pursue this, and thanks to cooperation from Unity, these changes will happen. In the future the extensions discussed in this tutorial will be simpler.
- Inherit from the base class one step further up the chain instead, and just copy and paste the code we need from StandaloneInputModule into our class. Because we wanted you to be able to run this example as soon as possible in as many versions of Unity as possible, we went with this option for this project.
Organization
If you look at OVRInputModule.cs, you’ll see that the class inherits from PointerInputModule, and there are two regions in the code where we’ve placed the StandaloneInputModule code:
#region StandaloneInputModule code
#region Modified StandaloneInputModule methods
The first is code moved verbatim, and the second contains functions we would have overridden if they hadn't been private. Overall, the changes in OVRInputModule are simple extensions of StandaloneInputModule. The best way to understand them is to take a look at the code, but the following is a summary of the key changes:
Processing Gaze and World Space Pointers
The two new functions GetGazePointerData() and GetCanvasPointerData() do what GetMousePointerEventData() does in PointerInputModule, but for our new types of pointer. These are our extensions that handle pointer input state, e.g., treating space as the “click” key for the gaze pointer, and using the assigned ray transform for the pointer direction. These functions also call out to OVRRaycaster/OVRPhysicsRaycaster to find the GUI/Physics intersections. But we've changed the way we talk to the ray casters slightly…
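To make the flow concrete, here is a heavily simplified, hypothetical sketch of what a gaze-pointer processing path might look like inside an input module. It is not the project's actual code (OVRInputModule.cs does considerably more, including gamepad support and click-state management); OVRRayPointerEventData is the ray-carrying event data class described in the next section, and rayTransform stands in for the module's Ray Transform property.

using UnityEngine;
using UnityEngine.EventSystems;

// Hypothetical, heavily simplified sketch -- see OVRInputModule.cs for the real code.
public class GazeInputModuleSketch : PointerInputModule
{
    public Transform rayTransform;   // e.g. the CenterEyeAnchor

    protected PointerEventData GetGazePointerData()
    {
        // Build event data that carries a world-space ray instead of a screen position.
        var data = new OVRRayPointerEventData(eventSystem);
        data.worldSpaceRay = new Ray(rayTransform.position, rayTransform.forward);

        // Ask every registered ray caster to test the ray and keep the closest hit.
        eventSystem.RaycastAll(data, m_RaycastResultCache);
        data.pointerCurrentRaycast = FindFirstRaycast(m_RaycastResultCache);
        m_RaycastResultCache.Clear();
        return data;
    }

    public override void Process()
    {
        PointerEventData data = GetGazePointerData();

        // Treat space as the gaze "click"; the real module makes this configurable
        // and also handles gamepad buttons, drags, and enter/exit events.
        if (Input.GetKeyDown(KeyCode.Space) && data.pointerCurrentRaycast.gameObject != null)
        {
            Debug.Log("Gaze click on " + data.pointerCurrentRaycast.gameObject.name);
        }
    }
}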
Pointing with Rays
An important change we've made is to subclass PointerEventData to make OVRRayPointerEventData. This new class has an extra member:

public Ray worldSpaceRay;

Because this inherits from PointerEventData, the entire existing UI system, including the EventSystem, can treat it like any other kind of PointerEventData. The important thing is that our new ray caster objects know about this new member and use it to do correct world-space ray intersections.
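A minimal sketch of roughly what this subclass looks like (the real OVRRayPointerEventData lives in the sample project and may carry additional state):

using UnityEngine;
using UnityEngine.EventSystems;

// Minimal sketch of the idea: pointer event data that carries a world-space ray.
public class OVRRayPointerEventData : PointerEventData
{
    public OVRRayPointerEventData(EventSystem eventSystem) : base(eventSystem) { }

    // The world-space ray this pointer event represents.
    public Ray worldSpaceRay;
}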
Pointing with World-Space Pointers
OVRInputModule has the following member:
public OVRRaycaster activeGraphicRaycaster;
This keeps track of what it considers the “active” raycaster. You could adopt various schemes to decide which raycaster is active (and there’s no reason you need to have only one), but in this example an OVRRaycaster component declares itself active when the gaze pointer enters it. Which OVRRaycaster is currently active is important, because this is the one the OVRInputModule allows to detect intersections between that canvas’s world-space pointer and its GUI elements. In the example you can see this behaviour in the fact that you can only move the mouse pointer for a Canvas when you’ve made that Canvas active by looking at it with the gaze pointer.
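One way such an activation scheme might look, sketched with hypothetical names rather than the project's exact code: a canvas-level component registers itself as active when the gaze pointer enters it, and the input module only processes the world-space mouse pointer for that raycaster.

using UnityEngine;
using UnityEngine.EventSystems;

// Hypothetical sketch of the hand-off: the canvas the gaze pointer last entered
// declares its raycaster "active" so the input module knows whose world-space
// mouse pointer to process.
public class ActiveRaycasterSketch : MonoBehaviour, IPointerEnterHandler
{
    // Whichever raycaster the gaze pointer entered most recently.
    public static ActiveRaycasterSketch activeRaycaster;

    public void OnPointerEnter(PointerEventData eventData)
    {
        activeRaycaster = this;
    }
}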
Bringing it back to Screen Space
Probably the most important job of OVRInputModule is to hide the fact that we're working with VR pointers from the bulk of the GUI system, e.g. the buttons, toggles, sliders, scroll bars, and edit fields. A lot of the logic in these elements relies on the screen position of pointer events. Our pointers are based in world space, but luckily we can easily convert these world-space positions to screen positions relative to one of the VR cameras (in the example, we arbitrarily chose the left eye). The restriction this tactic imposes is that you can't interact with UI elements that aren't in front of the camera, which doesn't seem unreasonable.
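The conversion itself is a one-liner using Unity's camera API. A small sketch, assuming you have a reference to the eye camera you want to use:

using UnityEngine;

// Sketch: converting a world-space hit position back into the screen-space
// position that PointerEventData and the rest of the UI system expect.
public class WorldToScreenExample : MonoBehaviour
{
    public Camera eyeCamera;   // e.g. the left-eye camera used in the example

    public Vector2 WorldToScreenPosition(Vector3 worldHitPosition)
    {
        Vector3 screen = eyeCamera.WorldToScreenPoint(worldHitPosition);
        return new Vector2(screen.x, screen.y);
    }
}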
Because of this conversion, and the fact that our OVRRayPointerEventData class is just a subclass of PointerEventData, the rest of the Unity UI system can interact with the PointerEventData objects without needing to know whether they came from a mouse pointer, a gaze pointer or anything else.
Updating the Gaze Pointer
The changes described above are technically enough to make gaze pointing work in VR. However, it wouldn’t be very intuitive to use if there were no visual representation of your gaze in the scene. OVRGazePointer is the singleton component that takes care of this, and OVRInputModule has the responsibility of keeping it in the right place and right orientation.
Keeping it in the right place is simple enough: the world position that comes back from the ray casters is forwarded on to OVRGazePointer. Orientation is slightly more involved; a naive approach is to orient the gaze pointer so that it always faces the user (by orienting it towards the CenterEyeAnchor of the camera rig). This is in fact what OVRGazePointer does when no intersections are detected. But when there are intersections, i.e., your gaze pointer is actually looking at an object or UI Canvas, then OVRInputModule will find the normal of the GUI component or physics object and use this to align the gaze pointer with the surface. This makes the gaze pointer feel much more attached to the surface, as illustrated in the images below:
The gaze cursor faces the user
The gaze cursor is aligned with the UI surface
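A hedged sketch of the two orientation modes described above (the field and method names are ours, not the project's, and the exact sign of the look direction depends on how your pointer mesh is modelled):

using UnityEngine;

// Hypothetical sketch of the two gaze-pointer orientation modes.
public class GazePointerOrientationSketch : MonoBehaviour
{
    public Transform centerEyeAnchor;   // roughly where the user's head is

    // No intersection detected: simply face the user.
    public void FaceUser()
    {
        // The sign may need flipping depending on how the pointer mesh faces.
        transform.rotation = Quaternion.LookRotation(transform.position - centerEyeAnchor.position);
    }

    // Intersection detected: snap to the hit point and align with the surface.
    public void AlignWithSurface(Vector3 hitPosition, Vector3 surfaceNormal)
    {
        transform.position = hitPosition;
        transform.rotation = Quaternion.LookRotation(-surfaceNormal);
    }
}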
In conclusion…
It turns out that with a few new classes extending Unity's UI system, it's possible to get UI working 100% in VR. The example shown here is just one way to get started; there are many ways you could extend it to implement new forms of interaction with UIs in VR. The gaze pointer could be replaced with a tracked controller pointer; a world-space pointer could be moved with a gamepad thumbstick; the world-space pointer could even be made to follow the position of a tracked input controller directly; the list goes on and on.
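As one example, here is a hypothetical sketch of driving a world-space pointer with a gamepad thumbstick; the axis names assume Unity's default input settings and would need to be mapped to your gamepad:

using UnityEngine;

// Hypothetical sketch: move a world-space pointer with a gamepad thumbstick.
public class ThumbstickPointerSketch : MonoBehaviour
{
    public RectTransform pointer;   // the pointer object parented to the Canvas
    public float speed = 500f;      // movement speed in canvas units per second

    void Update()
    {
        // "Horizontal" and "Vertical" are Unity's default input axes;
        // map them to your gamepad's thumbstick in the Input settings.
        Vector2 move = new Vector2(Input.GetAxis("Horizontal"), Input.GetAxis("Vertical"));
        pointer.anchoredPosition += move * speed * Time.deltaTime;
    }
}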
As Unity adopts more and more VR-specific features, some of the code in this example may become redundant, but there’s no time like the present. We hope this blog post helps you get kickstarted using Unity’s great UI system in VR right now.