Controlling Artificial Locomotion
Updated: Dec 17, 2024
To enable a user to move through the virtual world using artificial locomotion, you must process their input. In this section, you’ll learn techniques that let the user control their movement within the virtual space, even while seated. These techniques include
thumbstick-driven locomotion,
teleport controls, and
motion-tracked locomotion.
Thumbstick-Driven Locomotion
Using the controller thumbsticks to control locomotion is usually more complicated than just moving the avatar in the direction of the stick. While there aren’t any rules for how thumbstick controls should behave, people expect your VR applications to follow certain design patterns, including
direction mapping and
turning controls.
Direction mapping determines how thumbstick directions translate into directions in the virtual space, and therefore which way the user moves. Directions in VR can be defined in multiple ways, such as by controller or headset orientation, so allow the user to choose the direction mapping type that is most comfortable for them. To maximize usability and prevent motion sickness, users should be able to easily anticipate how the camera will move through the virtual environment in response to their head movements and controller input.
Some common types of direction mapping that you should support if possible include:
Head-Relative: With head-relative logic, thumbstick input is interpreted relative to the direction in which the user’s head is facing. Pushing the thumbstick upward causes the avatar to move in whichever direction the user’s head is facing. Turning the head while the thumbstick is pushed in any direction will continuously change the direction of movement so that it is always relative to wherever the headset is facing.
Initial Head-Relative: With initial head-relative logic, pushing the thumbstick forward causes the avatar to move in the direction the headset is facing when the user initiated the movement. Unlike head-relative controls, if the user turns their headset while pushing on the thumbstick, the direction of movement won’t change. For instance, if a user facing north pushes the thumbstick upward, they will move north, and until they release the thumbstick, they will continue moving north, even if they turn their head to look sideways.
Controller-Relative: Controller-relative logic is perhaps the most common approach today. With controller-relative logic, pushing the thumbstick upward causes the avatar to move in the direction the hand controllers are pointed. The headset’s orientation has no impact on the direction of user movement; movement is always relative to the orientation of the controller. This allows someone to steer by simply turning the controller itself while leaving the thumbstick pushed in the same direction.
Initial Controller-Relative: With initial controller-relative logic, pushing the thumbstick upward causes the avatar to move in the direction the hand controllers are pointed when movement is initiated. Unlike controller-relative logic, turning the controller while pushing the thumbstick will not change the direction of movement, which is similar to how initial head-relative movement behaves.
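The four mappings above differ only in which yaw angle the thumbstick vector is rotated by, and whether that yaw is sampled every frame or latched when movement begins. The Python sketch below illustrates that difference; the deadzone value, the 2D yaw convention, and the class and parameter names are illustrative assumptions rather than any particular engine’s API.

```python
import math
from enum import Enum, auto

class DirectionMode(Enum):
    HEAD_RELATIVE = auto()
    INITIAL_HEAD_RELATIVE = auto()
    CONTROLLER_RELATIVE = auto()
    INITIAL_CONTROLLER_RELATIVE = auto()

class DirectionMapper:
    """Maps a 2D thumbstick vector to a world-space movement direction."""

    DEADZONE = 0.1  # ignore tiny deflections near the center

    def __init__(self, mode):
        self.mode = mode
        self._latched_yaw = None  # yaw captured when the stick first leaves center

    def move_direction(self, stick_x, stick_y, head_yaw, controller_yaw):
        if math.hypot(stick_x, stick_y) < self.DEADZONE:
            self._latched_yaw = None          # movement ended; forget the latched yaw
            return (0.0, 0.0)

        # Choose the reference yaw for this frame.
        if self.mode is DirectionMode.HEAD_RELATIVE:
            yaw = head_yaw                    # follows the head continuously
        elif self.mode is DirectionMode.CONTROLLER_RELATIVE:
            yaw = controller_yaw              # follows the controller continuously
        else:
            # "Initial" modes latch the yaw at the moment movement begins.
            if self._latched_yaw is None:
                self._latched_yaw = (head_yaw
                                     if self.mode is DirectionMode.INITIAL_HEAD_RELATIVE
                                     else controller_yaw)
            yaw = self._latched_yaw

        # Rotate the stick vector into world space (yaw measured clockwise from "north").
        forward = (math.sin(yaw), math.cos(yaw))
        right = (math.cos(yaw), -math.sin(yaw))
        return (stick_y * forward[0] + stick_x * right[0],
                stick_y * forward[1] + stick_x * right[1])
```

Calling move_direction every frame with the current yaws reproduces the behaviors described above: the continuous modes change course as the head or controller turns, while the initial modes keep the latched direction until the stick returns to center.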
You should support artificial turning based on thumbstick input whenever it’s compatible with your application design. Doing so makes your app more accessible for users who are in chairs that don’t spin, use wheelchairs, are tethered to a PC, or simply prefer not to spin around. Three of the most common artificial turning control schemes are
quick turns,
snap turns, and
smooth turns.
Quick Turns
With quick turns, the camera turns a fixed angle for each tap on the thumbstick. The turn is usually 30 or 45 degrees per tap and completes over 100 milliseconds or less. Users generally have strong preferences for how this should be tuned, so you should provide the option to tune both of these values. The goal is for the turn to be slow enough for users to keep track of their surroundings, but fast enough that it doesn’t trigger discomfort.
You should also register every tap on the thumbstick so that users can continue to provide turning input even while a turn is already in progress. The system should not disregard taps that occur while the camera is turning, and it should immediately reverse direction if the accumulated taps change the target orientation. This lets people begin a turn and immediately turn back, or tap a few more times if they know exactly how far they want to turn.
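As a concrete illustration, here is a minimal quick-turn sketch in Python. The thresholds, the 0.1-second step duration, and the class name are assumptions for the sketch, not values from any particular SDK; the important behavior is that taps accumulate into a signed pending angle so input is never dropped mid-turn.

```python
class QuickTurnController:
    """Fixed-angle turns that animate briefly; taps accumulate even mid-turn."""

    def __init__(self, step_degrees=45.0, turn_duration=0.1):
        self.step_degrees = step_degrees      # user-tunable angle per tap
        self.turn_duration = turn_duration    # user-tunable seconds per step
        self.pending_degrees = 0.0            # signed total still to be applied
        self._stick_was_centered = True

    def register_input(self, stick_x):
        """Call every frame; a tap is an edge from centered to fully deflected."""
        if self._stick_was_centered and abs(stick_x) > 0.7:
            self.pending_degrees += self.step_degrees * (1 if stick_x > 0 else -1)
        self._stick_was_centered = abs(stick_x) < 0.3   # hysteresis for release

    def update(self, dt):
        """Returns the yaw delta (degrees) to apply to the camera this frame."""
        if self.pending_degrees == 0.0:
            return 0.0
        rate = self.step_degrees / self.turn_duration   # degrees per second
        step = min(rate * dt, abs(self.pending_degrees))
        step *= 1 if self.pending_degrees > 0 else -1
        self.pending_degrees -= step
        return step
```

Because pending_degrees is signed, a tap in the opposite direction immediately reduces or reverses the remaining rotation instead of being ignored.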
Snap Turns
Snap turning accumulates thumbstick taps to set the desired direction in the same way quick turning does, but instead of rotating smoothly over time, the camera immediately faces the final direction. If a transition effect, such as a brief fade, delays the player’s reorientation, accumulating taps ensures that multiple taps still rotate the player by a fixed amount per tap.
A common problem is for applications to disregard thumbstick taps once a time-consuming turn has begun. This leads to an interface that responds unpredictably to directional input, because people must wait until the current turn completes before triggering another turn; any tap made before then is silently dropped.
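The snap-turn variant of the earlier sketch applies the whole accumulated angle at once and simply queues any taps that arrive during an optional fade; the fade duration and thresholds are again illustrative assumptions.

```python
class SnapTurnController:
    """Instant fixed-angle turns; taps queue up during an optional fade transition."""

    def __init__(self, step_degrees=45.0, fade_duration=0.1):
        self.step_degrees = step_degrees
        self.fade_duration = fade_duration    # 0.0 means no transition effect
        self.pending_degrees = 0.0
        self._fade_timer = 0.0
        self._stick_was_centered = True

    def register_input(self, stick_x):
        """Taps are accepted at any time, even while a fade is still playing."""
        if self._stick_was_centered and abs(stick_x) > 0.7:
            self.pending_degrees += self.step_degrees * (1 if stick_x > 0 else -1)
        self._stick_was_centered = abs(stick_x) < 0.3

    def update(self, dt):
        """Returns the yaw delta to apply this frame, all of it at once."""
        if self._fade_timer > 0.0:
            self._fade_timer -= dt            # wait out the fade; pending taps persist
            return 0.0
        if self.pending_degrees != 0.0:
            delta, self.pending_degrees = self.pending_degrees, 0.0
            self._fade_timer = self.fade_duration
            return delta
        return 0.0
```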
Smooth Turns
With smooth turns, the camera turns at a speed relative to how far the thumbstick is pushed left or right. This can be uncomfortable because the movement in the field of view causes users to expect a corresponding sense of physical acceleration, which never comes because they are not physically turning.
Because turning starts as soon as the thumbstick leaves the center position, the angular velocity will vary based on precisely how far from center the thumbstick happens to be, which often leads to further discomfort. While smooth turning is considered uncomfortable by many, it’s possible to improve this behavior as described in the Improved Smooth Turning section of Techniques and Best Practices.
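One common mitigation is a deadzone plus a non-linear response curve so that small, unintentional deflections don’t produce a slow drift. The sketch below assumes a squared response curve and illustrative default values; it is not the specific technique described in Improved Smooth Turning.

```python
def smooth_turn_delta(stick_x, dt, max_speed_deg=120.0, deadzone=0.2):
    """Yaw delta (degrees) for this frame; turn speed scales with stick deflection."""
    magnitude = abs(stick_x)
    if magnitude < deadzone:
        return 0.0                                # ignore small, unintentional input
    # Remap the usable range [deadzone, 1] to [0, 1], then square it so the
    # turn ramps up gradually instead of starting the instant the stick moves.
    t = (magnitude - deadzone) / (1.0 - deadzone)
    speed = max_speed_deg * t * t
    return speed * dt * (1 if stick_x > 0 else -1)
```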
Teleport Controls
The process of performing a teleport is a sequence of events that generally includes activation, aiming, potentially controlling the landing orientation, and finally triggering the actual teleport. While it’s common for teleports to be triggered by a simple button press or thumbstick movement, some designs integrate teleport controls into the gameplay, which makes it possible to choose the destination by throwing a projectile or through some other mechanic unique to the application.
Thumbstick-Triggered Teleports
A popular approach to initiating a teleport is to activate the process when the thumbstick is moved out of its center position. An aiming beam appears, which the user then points at the desired destination. Aiming continues for as long as the thumbstick is pushed. When the thumbstick is released, the player is teleported to the destination.
Using a thumbstick to control this process also makes it possible to control the direction the user faces after teleportation. While aiming at the destination, the thumbstick controls the direction of an indicator shown at the end of the teleport beam. When the teleport completes, the player perspective faces the direction of the indicator. It is generally useful to provide a way to cancel the teleport by pressing another button or clicking the thumbstick, since releasing the thumbstick usually triggers the teleport.
The tricky part of this technique is to ensure that the act of releasing the stick and triggering the teleport does not change the landing orientation. One way to approach this problem is to carefully monitor the thumbstick input so that when it moves a small distance back towards the center position, the indicator direction will remain where it was last set.
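A minimal sketch of that approach follows; the three thresholds, the state handling, and the string returned on release are placeholders chosen for the example. The landing direction only updates while the stick is clearly deflected, so the inward motion made while releasing the stick never changes the chosen orientation.

```python
import math

class TeleportAim:
    """Thumbstick-driven teleport aiming with a latched landing orientation."""

    AIM_START = 0.5    # stick magnitude that begins aiming
    AIM_UPDATE = 0.7   # magnitude required to re-aim the landing direction
    AIM_RELEASE = 0.3  # magnitude below which the stick counts as released

    def __init__(self):
        self.aiming = False
        self.landing_yaw = 0.0   # radians, relative to the aiming controller

    def update(self, stick_x, stick_y, cancel_pressed=False):
        """Call every frame; returns "teleport" on release, otherwise None."""
        magnitude = math.hypot(stick_x, stick_y)

        if not self.aiming:
            if magnitude > self.AIM_START:
                self.aiming = True
                self.landing_yaw = math.atan2(stick_x, stick_y)
            return None

        if cancel_pressed:
            self.aiming = False
            return None

        # Only re-aim while the stick is clearly deflected; otherwise keep
        # whatever landing direction was last set.
        if magnitude > self.AIM_UPDATE:
            self.landing_yaw = math.atan2(stick_x, stick_y)

        if magnitude < self.AIM_RELEASE:
            self.aiming = False
            return "teleport"
        return None
```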
In some cases, the thumbstick on the dominant hand may not be preferred or available for triggering teleportation. In that case, you can use one of the standard buttons on the controller to initiate the teleport instead. The thumbstick in the user’s non-dominant hand can then be used to control landing orientation while teleport aiming is active, or to cancel the teleport by clicking down on that thumbstick before releasing the button that activated the teleport.
Button-triggered teleportation also frees the dominant-hand thumbstick for other locomotion needs, such as controlling snap turns, although you should disable snap turns while the user is holding the controller button to teleport.
Motion-Tracked Locomotion
It’s possible to enable locomotion functionality using motion controllers without using controller buttons or thumbsticks for input. These techniques often consider posture, hand poses, and other physical movements as signals for controlling movement.
If you are using hand tracking in your application, you may find the examples below useful for controlling artificial locomotion.
Simulated activities map the user’s physical movements to a real-world or imaginary activity, increasing immersion. The intent is to mimic the physical movements you would make if you performed these activities in a real-world setting. From swimming and running to flying like a superhero, pretty much anything you can imagine is possible.
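As one example of this category, an arm-swing running scheme might derive forward speed from how vigorously the hands move. The gain, speed cap, and class name below are illustrative assumptions for the sketch, not a prescribed implementation.

```python
class ArmSwingLocomotion:
    """Forward speed scales with average vertical hand speed, up to a cap."""

    def __init__(self, max_speed=3.0, gain=1.5):
        self.max_speed = max_speed     # meters per second
        self.gain = gain               # how strongly swings translate to speed
        self._prev_left_y = None
        self._prev_right_y = None

    def forward_speed(self, left_hand_y, right_hand_y, dt):
        """Call every frame with hand heights (meters); returns speed in m/s."""
        if dt <= 0.0 or self._prev_left_y is None:
            self._prev_left_y, self._prev_right_y = left_hand_y, right_hand_y
            return 0.0
        # Average vertical hand speed approximates how hard the user is swinging.
        swing = (abs(left_hand_y - self._prev_left_y) +
                 abs(right_hand_y - self._prev_right_y)) / (2.0 * dt)
        self._prev_left_y, self._prev_right_y = left_hand_y, right_hand_y
        return min(self.gain * swing, self.max_speed)
```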
While simulated activities require users to perform a movement in the physical world in order to perform it in VR, pose-driven controls use learned poses and abstract gestures to perform a movement. Pose-driven controls are more abstracted from the intended activity and usually require some training or guidance so people know which motions cause each effect. These poses and gestures can take many forms; one example is to point with one hand and initiate the movement with a gesture on the other hand.
If you’d like to implement abstract gestures:
- Ensure that your app detects the pose reliably every time. This is critical.
- Provide consistent and reliable feedback for when a gesture is detected, so the user knows their intent has been understood by the system.
- Provide clear guidance for how to use the gestures that are available.
- Keep gestures simple enough to memorize so users can perform them with minimal cognitive effort.
- Minimize the number of supported gestures, making them easier to use as a reflex.
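Reliable detection, in particular, usually comes down to hysteresis and a short hold time so the gesture doesn’t flicker on and off right at the threshold. Below is a minimal pinch-detection sketch; the distance thresholds and hold time are illustrative assumptions, and the moment `active` flips to True is a natural place to trigger detection feedback such as haptics or a visual highlight.

```python
class PinchGestureDetector:
    """Hysteresis plus a short hold time keep pinch detection stable."""

    ENTER_DISTANCE = 0.015   # meters between fingertips to start the pinch
    EXIT_DISTANCE = 0.030    # wider distance to end it, so it doesn't flicker
    HOLD_TIME = 0.05         # seconds the pose must persist before it counts

    def __init__(self):
        self.active = False
        self._candidate_time = 0.0

    def update(self, thumb_index_distance, dt):
        """Call every frame; returns True while the pinch is considered active."""
        if self.active:
            if thumb_index_distance > self.EXIT_DISTANCE:
                self.active = False
                self._candidate_time = 0.0
        elif thumb_index_distance < self.ENTER_DISTANCE:
            self._candidate_time += dt
            if self._candidate_time >= self.HOLD_TIME:
                self.active = True   # good moment to trigger feedback
        else:
            self._candidate_time = 0.0
        return self.active
```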
Seated Mode Considerations
The goal of supporting a seated mode is to simulate a standing experience while seated. This makes your app more accessible and helps reduce fatigue if the user has been standing.
If you implement a seated mode, it should include the following:
- A prompt early in the experience to choose between seated and standing modes.
- An avatar height and camera height that default to the player height reported by the system but can also be customized in the app options (see the sketch after this list). If the height were based on the headset’s elevation from the physical floor, avatars would be shorter and the seated user’s perspective would not match what they see when standing.
- Support for artificial turning since users won’t always be in a chair that spins.
- Controls to toggle poses, such as crouching, that would otherwise require the user to physically move around the play space. To avoid vection, make camera transitions during these poses as brief as possible without being instantaneous.
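Here is a minimal sketch of the height adjustment described above, assuming a configured standing eye height and a tracked headset height above the physical floor; the function name and numbers are placeholders for the example.

```python
def seated_camera_offset(standing_eye_height, headset_height_above_floor):
    """Vertical offset (meters) added to the camera rig so a seated user
    views the world from their configured standing eye height."""
    return max(0.0, standing_eye_height - headset_height_above_floor)

# Example: a 1.62 m standing eye height with the headset 1.15 m above the
# floor yields a +0.47 m offset applied to the camera rig while seated.
offset = seated_camera_offset(1.62, 1.15)
```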