Sheet B - Hand Tracking 2
In these guidelines, you’ll find interactions, components, and best practices we’ve validated through
researching, testing, and designing with hands. We also included the principles that guided our process. This
information is by no means exhaustive but should provide a good starting point so you can build on what
we’ve learned so far. We hope this helps you design experiences that push the boundaries of what hands
can do in virtual reality.
The Benefits
People have been looking forward to hand tracking for a long time, and for good reason. A number
of things make hands a preferred input modality for end users.
Hands are a highly approachable, low-friction input that requires no additional hardware.
Unlike other input devices, they are automatically present as soon as you put on a headset.
Self and social presence are richer in experiences where you're able to use your real hands.
Your hands aren't holding anything, leaving them free to make adjustments to physical objects like
your headset.
The Challenges
There are some complications that come up when designing experiences for hands. Thanks to sci-fi movies
and TV shows, people have exaggerated expectations of what hands can do in VR. But even expecting your
virtual hands to work the same way your real hands do is currently unrealistic for a few reasons.
There are inherent technological limitations, like limited tracking volume and issues with occlusion.
Virtual objects don’t provide the tactile feedback that we rely on when interacting with real-life
objects.
Choosing hand gestures that activate the system without accidental triggers can be difficult, since
hands form all sorts of poses throughout the course of regular conversation.
The Capabilities
To be an effective input modality, hands need to allow for a few interaction primitives, or basic tasks:
selecting, moving, rotating, and resizing objects.
These interactions can be performed directly, using your hands as you might in real life to poke and pinch at
items, or they can be performed through raycasting, which points a ray from your hand at objects or
two-dimensional panels.
Interactions
There are several factors we’ve experimented with when it comes to designing interactions. The Interaction
Options section breaks down some of the different considerations you might face, depending on the kind of
experience you’re designing. Then, the Interaction Primitives section breaks down the options that work best
for specific tasks, based on what we’ve experimented with.
Interaction Options
The best experiences incorporate multiple interaction methods, to provide the right method for any given
task or activity.
Here we outline the different options to consider when designing your experience.
Target Distance
Near-Field Components
The components are within arm's reach. When using direct interactions, this space
should be reserved for the most important components, or the ones you interact with most frequently.
Far-Field Components
The components are beyond arm's reach. To interact with them, you would need to
use a raycast or locomote closer to the component to bring it into the near-field.
Interaction Methods
Direct
With direct interactions, your hands interact with components, so you'd reach a finger out to "poke"
at a button, or reach out a hand to "grasp" an object by pinching it. This method is easy to learn, but it limits
you to interactions that are within arm's reach.
Raycasting
Raycasting is similar to the interaction method you may be familiar with from the Touch
controllers. This method can be used for both near- and far-field components, since it keeps users in a
neutral body position regardless of target distance.
Selection Methods
Poking is a direct interaction where you extend and move your finger toward an object or component until
you "collide" with it in space. However, this can only be used on near-field objects, and the lack of tactile
feedback may mean relying on signifiers and other feedback to compensate.
Pinching can be used with both the direct and raycasting methods. Aim your raycast or move your hand
toward your target, then pinch to select or grasp it. Feeling your thumb and index finger touch can help
compensate for lack of tactile feedback from the object or component.
Note: Using this method also makes for a more seamless transition between near- and far-field components,
so the user can pinch to interact with targets at any distance.
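The pinch itself can be detected from the distance between the thumb and index fingertips. Below is a minimal sketch (the threshold values and names are assumptions, not from any SDK); it uses hysteresis, with separate thresholds for starting and ending a pinch, so the state doesn't flicker when the fingers hover near a single cutoff:

```python
# Hypothetical sketch: pinch detection from thumb-index fingertip distance.
# Thresholds are illustrative assumptions, tuned per experience in practice.
import math

PINCH_START = 0.015  # meters: fingers closer than this begin a pinch
PINCH_END = 0.025    # meters: fingers farther apart than this end a pinch

def distance(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

class PinchDetector:
    def __init__(self):
        self.pinching = False

    def update(self, thumb_tip, index_tip):
        """Call once per tracking frame; returns whether a pinch is active."""
        d = distance(thumb_tip, index_tip)
        if self.pinching:
            if d > PINCH_END:     # release only once fingers clearly separate
                self.pinching = False
        elif d < PINCH_START:     # start only once fingers clearly touch
            self.pinching = True
        return self.pinching
```

The gap between the two thresholds is what keeps a selection stable while the user holds an imprecise pinch.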
Relative
With relative movements, you can adjust the ratio between how far your hand moves and
how far the output moves. For example, for every 1° your hand moves, the cursor will move 3° in that
direction. You can make the ratio smaller for more precision, or increase the ratio for more efficiency when
moving objects across broad distances.
Note: For even more efficiency, you can use a variable ratio, where the output moves exponentially faster
the more quickly you move your hand. Another option is an acceleration-based ratio, which is similar to
using a joystick. If a user keeps their hand in a far-left position and holds it there, the object will continue
moving in that direction. However, this makes it more difficult to place an object where you want it, so it’s
not recommended for experiences where precise placement is the goal.
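The constant, variable, and acceleration-based ratios described above can be sketched as small gain functions. These are illustrative assumptions, not an implementation from any SDK:

```python
# Hypothetical sketch: three hand-to-output gain strategies for relative movement.

def constant_gain(hand_delta, ratio=3.0):
    """Every degree of hand movement moves the output `ratio` degrees."""
    return hand_delta * ratio

def variable_gain(hand_delta, dt, base=1.0, boost=0.5):
    """Gain grows with hand speed: slow motion is precise, fast motion covers distance."""
    speed = abs(hand_delta) / dt
    return hand_delta * (base + boost * speed)

def acceleration_gain(hand_offset, dt, speed=0.5):
    """Joystick-like: holding the hand off-center keeps the output drifting that way."""
    return hand_offset * speed * dt
```

With the acceleration-based version, the output keeps moving for as long as the offset is held, which is why it trades away precise placement.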
Analog Control
When using analog control, your hand's motion has an observable effect on the output. This
makes interactions feel more responsive. It's also easy to understand: when you move your hand to the
right, the cursor, object, or element moves to the right. Raycasting and direct interactions are both
examples of analog control.
We recommend using abstract gestures sparingly; instead, use analog control to manipulate and
interact with most virtual interfaces and objects.
These interaction options work together in different ways, depending on the circumstances. For example, if
your target objects are in the far-field, your only available interaction method is raycasting (unless you bring
the object into the near-field). And for direct poking interactions with near-field objects, your only option for
the hand-output relationship is absolute, where the output moves exactly as far as your hand does.
This chart helps lay out the available options for different circumstances.
Interaction Primitives
As we said in the Introduction, hands need to allow for a set of interaction primitives, or basic tasks, to be an
effective input modality.
As you’ll see, some of the above interaction options may work better than others for your specific
experience.
Select Something
There are two kinds of things you might select: 2D panel elements, and 3D objects. Poking works well for
buttons or panel selections. But if you’re trying to pick up a virtual object, we’ve found that the thumb-index
pinch works well, since it helps compensate for the fact that virtual objects don’t provide tactile feedback.
This can be performed both directly and with raycasting.
Move Something
If the target is within arm’s reach, you can move it with a direct interaction. Otherwise, raycasting can help
maintain a neutral body position throughout the movement.
Absolute movements can feel more natural and easy, since this is similar to how you move and place items in
real life. For more efficiency, you can use relative movements to move the object easily across any distance
or to place it in precise locations.
Rotate Something
If you’re looking for an intuitive rotation method and aren’t too worried about precision, you can make
objects follow the rotation of a user’s hand when grasped.
A more precise method of rotation is to snap the object to a 2D surface, like a table or a wall. This can limit
the object’s degrees of freedom so that it can only rotate on one axis, which makes it easier to manipulate. If
it’s a 2D object, you can similarly limit its degrees of freedom by having the object automatically rotate to
face the user.
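Constraining rotation to a single axis can be sketched as follows: rather than copying the hand's full orientation, the object follows only the hand's rotation about the constrained axis (here, yaw on a tabletop), optionally snapped to discrete increments for extra precision. The names and snap step are illustrative assumptions:

```python
# Hypothetical sketch: single-axis rotation for an object snapped to a surface.
# Only the hand's yaw (rotation about the surface normal) drives the object.

def constrained_yaw(hand_yaw_deg, snap_step=None):
    """Follow the hand's yaw; optionally snap to discrete increments for precision."""
    yaw = hand_yaw_deg % 360.0          # drop pitch/roll; normalize to [0, 360)
    if snap_step:
        yaw = round(yaw / snap_step) * snap_step
    return yaw
```

Snapping (e.g. to 15° increments) is one way to make the reduced degrees of freedom feel deliberate rather than restrictive.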
Resize Something
Uniform scaling is the easiest way to resize an object, rather than trying to stretch it vertically or horizontally.
Similar to rotation, an easy method for resizing is to snap an object to a 2-dimensional surface and allow it
to align and scale itself. However, this limits user freedom, since the size of the object is then automatically
determined by the size of the surface.
To define specific sizes, you can also resize objects using both hands. While your primary hand pinches to
grasp the object, the second hand pulls on another corner to stretch or shrink the object. We found this to
be problematic for accessibility reasons, as people may have difficulty with hand dexterity, or their second
hand might be occupied. Plus, this method increases the likelihood of your hands crossing over each other,
which leads to occlusion.
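The two-handed resize described above can be computed as a ratio of hand separations: the object's scale factor is the current distance between the hands divided by their distance when the grab began. This is a sketch with assumed names and clamp values, not a real SDK's API:

```python
# Hypothetical sketch: two-handed uniform scaling from hand separation.
import math

def distance(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def uniform_scale(initial_size, grab_a, grab_b, hand_a, hand_b,
                  min_s=0.25, max_s=4.0):
    """Scale the object by (current hand separation / separation at grab time)."""
    factor = distance(hand_a, hand_b) / distance(grab_a, grab_b)
    # Clamp so the object can't collapse to nothing or grow without bound.
    return initial_size * max(min_s, min(max_s, factor))
```

Pulling the hands to twice their initial separation doubles the object's size, and the clamp bounds keep accidental motions from producing extreme scales.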
Handles
Another method for manipulation is to attach handles to your object. This provides separate handles that
control movement, rotation, and resizing, respectively. Users pinch (either directly or with a raycast) to
select the control they want, then perform the interaction.
This allows users to manipulate objects easily regardless of the object’s size or distance. Separating
movement, rotation, and scale also enables precise control over each aspect, and allows users to perfect the
object’s positioning and change its rotation without the object moving around in space. However, having to
perform each manipulation task separately can become tedious, particularly in contexts where this level of
precision is not necessary.