
VIRTUAL REALITY

GROUP 1
3D User Interface Hardware
(input and output)

SWARAJ BHOSLE - IIT2019024


SMITESH HADAPE - IIT2019090
ARYAN GUPTA - IIT2019101
AKASH DAHANE - IIT2019176
Input Devices
Input devices can be described by how much physical interaction is required to use them.

1) Purely Active
a) These devices require the user to perform some physical action before any data is generated.
2) Purely Passive
a) These devices do not require any physical action in order to function.
b) For example, a tracker will continually output position and orientation records even if the device is not moving (see the sketch below).
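To make the distinction concrete, here is a minimal sketch of how the two classes of device typically deliver data. The Tracker and Button classes are hypothetical stand-ins, not a real device driver API:

```python
import time

class PassiveTracker:
    """Purely passive: emits a pose record every cycle, moving or not."""
    def read_pose(self):
        # A real driver would query the hardware; this returns a stub record.
        return {"position": (0.0, 1.2, 0.5), "orientation": (0.0, 0.0, 0.0, 1.0)}

class ActiveButton:
    """Purely active: produces data only when the user physically acts."""
    def __init__(self):
        self.pressed = False
    def poll_event(self):
        return "BUTTON_DOWN" if self.pressed else None  # no action, no data

tracker, button = PassiveTracker(), ActiveButton()
for _ in range(3):
    print(tracker.read_pose())   # always yields a record
    event = button.poll_event()  # None until the user presses the button
    if event:
        print(event)
    time.sleep(0.01)
```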
Desktop Input Devices
● Keyboards
● 2D mice and trackballs
● Pen-based tablets
● Joysticks
● 6-DOF (degree-of-freedom) devices for the desktop
Keyboards
The keyboard is a traditional desktop input device consisting of a set of discrete components (buttons). For example, the arrow keys are often used to navigate in first-person shooter games. A standard keyboard is impractical in more immersive 3D environments, so chord keyboards are used there instead.

2D Mice and Trackballs

A trackball is basically an upside-down mouse. Instead of moving the whole device to move the pointer, the user manipulates a rotatable ball embedded in the device.
Pen-Based Tablets
● Pen-based tablets and handheld personal digital assistants generate the same types of input that mice do, but they have a different form factor.
● These devices have a manually operated continuous component for controlling a cursor and generating 2D pixel coordinate values when the stylus is moving on or hovering over the tablet surface.
● Larger pen-based tablets can be used in 3D applications where the user is sitting, such as with desktop 3D displays, some workbench and single-wall displays, and small hemispherical displays.

Joysticks
● Joysticks are another example of input devices traditionally used on the desktop, with a long history as a computer input peripheral.
● In rate-controlled joysticks, the cursor moves at a velocity proportional to the stick's deflection, so to stop the cursor, the joystick's handle must be returned to the neutral position. This type of joystick is commonly called an isometric joystick, and the technique is called rate control (see the sketch below).
● They are frequently used in driving and flight simulation games, and when integrated into game controllers, they are the input device of choice with console video game systems.
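A minimal sketch of the rate-control mapping, contrasted with position control; the gains and frame rate are illustrative values, not taken from any particular device:

```python
def rate_control(cursor_pos, deflection, gain=200.0, dt=1/60):
    """Rate control: deflection sets cursor *velocity* (pixels/second),
    so the cursor keeps drifting until the stick returns to neutral."""
    return (cursor_pos[0] + gain * deflection[0] * dt,
            cursor_pos[1] + gain * deflection[1] * dt)

def position_control(deflection, gain=300.0, center=(400.0, 300.0)):
    """Position control (for contrast): deflection maps directly to an
    offset from a fixed center, as an isotonic stick or mouse would."""
    return (center[0] + gain * deflection[0],
            center[1] + gain * deflection[1])

pos = (400.0, 300.0)
for _ in range(60):                 # hold full-right deflection for 1 second
    pos = rate_control(pos, (1.0, 0.0))
print(pos)                          # cursor has drifted ~200 px to the right
```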
Six-DOF Input Devices for the Desktop
● These are 6-DOF input devices that were developed specifically for 3D interaction on the desktop.

● Slight push and pull pressure of the fingers on the cap of the device generates small deflections in x, y, and z, which move objects dynamically along the corresponding three axes. Slight twisting and tilting of the cap generates rotational motion about the three axes.

● This type of device is commonly used in desktop 3D applications for manipulating virtual objects.

● They were originally developed for telerobotic manipulation and are commonly used today by 3D designers and artists with CAD/CAM and animation applications.

● They do not replace the mouse; rather, they are used in conjunction with it. One hand on the motion controller positions the object in 3D space, while the other hand can simultaneously select menu items and edit the object with the mouse (see the sketch below).
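A minimal sketch of how per-frame cap readings might be integrated into an object's pose. The gains and axis conventions here are assumptions; real drivers expose calibrated values:

```python
import numpy as np

def apply_6dof_input(position, rotation, deflection, twist,
                     t_gain=0.01, r_gain=0.02):
    """Integrate one frame of 6-DOF cap input into an object's pose.
    deflection: (dx, dy, dz) push/pull on the cap -> translation
    twist:      (rx, ry, rz) tilt/twist of the cap -> rotation
    rotation is a 3x3 matrix; small per-axis rotations are composed onto it."""
    position = position + t_gain * np.asarray(deflection, dtype=float)
    for axis, angle in enumerate(r_gain * np.asarray(twist, dtype=float)):
        c, s = np.cos(angle), np.sin(angle)
        R = np.eye(3)
        i, j = [(1, 2), (0, 2), (0, 1)][axis]   # plane rotated by this axis
        R[i, i], R[i, j], R[j, i], R[j, j] = c, -s, s, c
        rotation = R @ rotation
    return position, rotation

pos, rot = np.zeros(3), np.eye(3)
pos, rot = apply_6dof_input(pos, rot, deflection=(0.2, 0.0, -0.1),
                            twist=(0.0, 0.5, 0.0))
print(pos)   # object nudged +x and -z by the push on the cap
```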
Tracking Devices
● Motion trackers
● Eye trackers
● Data gloves
Motion Trackers
One of the most important aspects of 3D interaction in virtual worlds is providing a correspondence between the physical and virtual environments. Currently, there are a number of different motion-tracking technologies in use, which include:

● magnetic tracking

● mechanical tracking

● acoustic tracking

● inertial tracking

● optical tracking

● hybrid tracking
Eye Trackers
● Eye trackers are purely passive input devices used to determine where the user is looking.
● Eye-tracking technology is primarily based on computer vision techniques: the device tracks the user's pupils using corneal reflections detected by a camera.
● In the context of 3D interface design, active eye-tracking systems have the potential to improve upon many existing 3D interaction techniques.
● For example, there are numerous techniques based on gaze direction that use the user's head tracker as an approximation of where they are looking (see the sketch below).
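A minimal sketch of that approximation: cast a gaze ray from the tracked head pose, looking straight ahead unless an eye tracker supplies a measured direction. The local -z "forward" convention is an assumption:

```python
import numpy as np

def gaze_ray(head_position, head_rotation, eye_dir_local=None):
    """Approximate the user's gaze as a ray from the tracked head pose.
    Without an eye tracker, assume the user looks straight ahead along
    local -z (a common camera convention); with one, eye_dir_local
    carries the measured gaze direction in head coordinates."""
    local = np.array([0.0, 0.0, -1.0]) if eye_dir_local is None \
            else np.asarray(eye_dir_local, dtype=float)
    world = head_rotation @ local
    return np.asarray(head_position, dtype=float), world / np.linalg.norm(world)

origin, direction = gaze_ray([0.0, 1.7, 0.0], np.eye(3))
print(origin, direction)   # head-only approximation: straight ahead at eye height
```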
Data Gloves
● It is useful to have detailed tracking information about the user's hands, such as how the fingers are bending or whether two fingers have made contact with each other.
● Data gloves are input devices that provide this information.
● Data gloves come in two basic varieties: bend-sensing gloves and pinch gloves.
● Bend-sensing data gloves are purely passive input devices used to detect postures (static configurations) of the hand.
● The Pinch Glove system is an input device that determines whether a user is touching two or more fingertips together. These gloves have a conductive material at each of the fingertips so that when the user pinches two fingers together, an electrical contact is made (see the sketch below).
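A minimal sketch of how an application might map pinch-contact combinations to commands; the fingertip labels and command names are invented for illustration:

```python
# Hypothetical fingertip labels and bindings; a real pinch glove driver
# reports contacts in its own format.
PINCH_COMMANDS = {
    frozenset({"thumb", "index"}):           "SELECT",
    frozenset({"thumb", "middle"}):          "DELETE",
    frozenset({"thumb", "index", "middle"}): "OPEN_MENU",
}

def handle_pinch(contacts):
    """Map the set of fingertips currently in electrical contact to an
    interface command, or None if the gesture is unbound."""
    return PINCH_COMMANDS.get(frozenset(contacts))

print(handle_pinch({"thumb", "index"}))    # -> SELECT
print(handle_pinch({"index", "middle"}))   # -> None (unbound gesture)
```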
3D User Interface Input Hardware
3D Mice
● In many cases, tracking devices are combined with other physical device components like buttons, sliders, knobs, and dials to create more functionally powerful input devices called 3D mice, which may be handheld or worn.
● The distinguishing characteristic of 3D mice, as opposed to regular 2D mice, is that the user physically moves them in 3D space to obtain position or orientation information instead of just moving the device along a flat surface.
Handheld 3D Mice
● A handheld 3D mouse houses a motion tracker in a structure that looks like a simple remote controller. It is commonly used in conjunction with surround-screen displays for both navigation and selection of 3D objects.
● The physical structure that houses the motion tracker is often a replica of an input device used in the real world. For example, one such mouse is modeled after an Air Force pilot's flight stick.
Handheld 3D Mice
● The Cubic Mouse is a 3D mouse designed primarily as an interactive prop for handling 3D objects. It is ideally suited for examining volumetric data because of its ability to map intuitively to the volume's coordinates and act as a physical proxy for manipulating it.
● The device consists of a box with three
perpendicular rods passing through the center, an
embedded tracker, and buttons for additional input.
● The Cubic Mouse does have a disadvantage in that
the three orthogonal rods can get in the way when
the user is holding the device in certain
configurations.
User-Worn 3D Mice
● Assuming the device is light enough, having it worn on the user's finger, for example, makes the device an extension of the hand.
● The Ring Mouse is an example of such a device. It is a small, two-button, ringlike device that uses ultrasonic tracking and generates only position information. One of the issues with this device is that it has a limited number of buttons because of its small form factor.
User-Worn 3D Mice
● The FingerSleeve is a finger-worn 3D mouse that is similar to the Ring Mouse in that it is small and lightweight, but it adds more button functionality in the same physical space by using pop-through buttons.
● Pop-through buttons have two clearly distinguished activation states corresponding to light and firm finger pressure (see the sketch below).
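A minimal sketch of handling the two activation states in application code. The thresholds and the continuous pressure reading are illustrative; actual pop-through buttons report two discrete switch closures:

```python
# Illustrative thresholds on a normalized pressure reading; real pop-through
# buttons are mechanical switches with two distinct click points.
LIGHT_THRESHOLD, FIRM_THRESHOLD = 0.3, 0.8

def button_state(pressure):
    """Map a normalized pressure value to one of three states."""
    if pressure >= FIRM_THRESHOLD:
        return "FIRM"       # second state: e.g., confirm the action
    if pressure >= LIGHT_THRESHOLD:
        return "LIGHT"      # first state: e.g., highlight the target
    return "RELEASED"

for p in (0.1, 0.5, 0.9):
    print(p, button_state(p))   # RELEASED, LIGHT, FIRM
```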
Special-Purpose Input Devices
● Some devices used in 3D interfaces are
often designed for specific applications or
used in specific interfaces.
● ShapeTape is a flexible, ribbonlike tape of fiber-optic curvature sensors that comes in various lengths and sensor spacings. Because the sensors provide bend and twist information along the tape's length, it can be easily flexed and twisted in the hand, making it an ideal input device for creating, editing, and manipulating 3D curves.
Direct Human Input
● A powerful approach to interacting with 3D applications is
to obtain data directly from signals generated by the
human body. With this approach, the user actually
becomes the input device.
Bioelectric Input
● NASA has developed a bioelectric input
device that reads muscle nerve signals
emanating from the forearm.
● These nerve signals are captured by a dry
electrode array on the arm. The nerve
signals are analyzed using pattern
recognition software and then routed
through a computer to issue relevant
interface commands.
● In one demonstration, a user controls a virtual aircraft with this device. This type of device could also be used to mimic a real keyboard in a VE (a sketch of the recognition stage follows).
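A minimal sketch of the pattern-recognition stage, assuming RMS features per electrode channel and a nearest-centroid classifier calibrated per gesture. All names and values here are invented for illustration; the document does not describe NASA's actual software:

```python
import numpy as np

def rms_features(emg_window):
    """Root-mean-square energy per electrode channel: a simple,
    standard feature for windows of muscle (EMG) signal."""
    return np.sqrt(np.mean(np.square(emg_window), axis=0))

def classify(features, centroids):
    """Nearest-centroid classification against per-gesture templates
    learned during a calibration session."""
    return min(centroids, key=lambda g: np.linalg.norm(features - centroids[g]))

# Toy calibration for a 4-electrode array and two stick gestures.
centroids = {"stick_left":  np.array([0.8, 0.1, 0.1, 0.1]),
             "stick_right": np.array([0.1, 0.1, 0.8, 0.2])}
window = np.random.randn(200, 4) * np.array([0.9, 0.1, 0.1, 0.1])
print(classify(rms_features(window), centroids))   # -> likely "stick_left"
```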
Brain Input
● The goal of brain–computer interfaces is to have a user
directly input commands to the computer using signals
generated by the brain. A brain–computer interface can use a
simple, noninvasive approach by monitoring brainwave activity
through EEG signals.
● The user simply wears a headband or a cap with integrated
electrodes.
● A future, more invasive approach would be to surgically
implant microelectrodes in the motor cortex. Of course, this
approach is still not practical for common use but might be
appropriate for severely disabled people who cannot interact
with a computer in any other way.
Choosing Input Devices for 3D User Interfaces
● Many factors must be considered when choosing an appropriate input
device for a particular 3D UI. Device ergonomics, the number and type
of input modes, and the types of tasks involved all play a role. Devices
should be lightweight, require little training, and provide a significant
transfer of information with minimal effort.
● An input device can handle a variety of interaction techniques
depending on the logical mapping of the technique to the device. The
major issue is whether that mapping makes the device and the
subsequent interaction techniques usable.
3D User Interface Output Hardware
INTRO

● A necessary component of any 3D UI is the hardware that presents information to the user.
These hardware devices, called display devices (or output devices), present information to one
or more of the user’s senses through the human perceptual system; the majority of them are
focused on stimulating the visual, auditory, or haptic (i.e., force and touch) senses.
● Of course, these output devices require a computer to generate the information through
techniques such as rendering, modeling, and sampling. The devices then translate this
information into perceptible human form. Therefore, displays actually consist of the physical
devices and the computer systems used in generating the content the physical devices
present.
● Display devices need to be considered when designing, developing, and using various
interaction techniques in 3D UIs, because some interaction techniques are more appropriate
than others for certain displays.
Visual Display Characteristics
● A number of important characteristics must be considered when describing visual display devices, including the following:
● Field of regard and field of view
● Spatial resolution
● Screen geometry
● Light transfer mechanism
● Refresh rate
● Ergonomics
Depth Cues
Because 3D UIs are used primarily in 3D applications, the user must have an understanding of
the 3D structure of the scene; in particular, understanding visual depth is crucial. Depth
information will help the user to interact with the application, especially when performing 3D
selection, manipulation, and navigation tasks.

Visual depth cues can be broken up into four categories (a worked disparity example follows the list):

● monocular, static cues
● oculomotor cues
● motion parallax
● binocular disparity and stereopsis
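Binocular disparity is the cue that stereoscopic displays reproduce directly. As a minimal worked example (assuming a typical ~6.5 cm interpupillary distance), the on-screen disparity needed to place a point at a given virtual depth follows from similar triangles:

```python
def screen_disparity(ipd_m, screen_dist_m, point_depth_m):
    """Horizontal on-screen disparity (meters) that places a point at
    point_depth_m when viewed on a screen screen_dist_m away.
    From similar triangles: disparity = IPD * (z - d) / z.
    Positive (uncrossed) disparity puts the point behind the screen."""
    return ipd_m * (point_depth_m - screen_dist_m) / point_depth_m

# 6.5 cm IPD, screen 1 m away, virtual point 3 m away:
print(screen_disparity(0.065, 1.0, 3.0))   # ~0.043 m of uncrossed disparity
```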
Visual Display Device Types
We now examine many different types of visual displays used in 3D UIs, which include the following:

● monitors
● surround-screen displays
● workbenches
● hemispherical displays
● head-mounted displays
● arm-mounted displays
● virtual retinal displays
● autostereoscopic displays
Auditory Displays
One of the major goals of auditory displays in VEs is the generation and display of spatialized 3D sound, enabling the human participant to take advantage of their auditory localization capabilities.

As with the visual system, the auditory system provides listeners with a number of different
localization cues that allow them to determine the direction and distance of a sound source. Although
there are many different localization cues, the main ones that apply to 3D UIs (Shilling and
Shinn-Cunningham 2002) are

● binaural cues
● spectral and dynamic cues
● head-related transfer functions (HRTFs)
● reverberation
● sound intensity
● vision and environment familiarity
3D Sound Generation: 3D Sound Sampling and Synthesis
● The basic idea behind 3D sound sampling and synthesis is to record sound that the listener would
hear in the 3D application by taking samples from a real environment. For example, with binaural audio
recording, two small microphones are placed inside the user’s ears (or in the ears of an
anthropomorphic dummy head) to separately record the sounds heard by left and right ears in the
natural environment.
● However, the main problem with this type of sound generation is that it is specific to the environmental
settings in which the recordings were made. Therefore, any change in the sound source’s location,
introduction of new objects into the environment, or significant movement of the user would require
new recordings.
● An alternative approach, which is one of the most common 3D sound generation techniques used in 3D applications today, is to imitate the binaural recording process by processing a monaural sound source with a pair of left- and right-ear HRTFs corresponding to a desired position within the 3D environment. With these empirically defined HRTFs, real-time interactivity becomes much more feasible, because particular sound sources can be placed anywhere in the environment and the HRTFs will filter them accordingly to produce 3D spatial audio for the listener (see the sketch below).
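In the time domain, the HRTF pair becomes a pair of head-related impulse responses (HRIRs), and the filtering is a convolution. A minimal sketch; the 3-tap HRIRs are toy stand-ins for measured ones:

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono source at the position its HRIR pair was measured
    for, by convolving it with the left- and right-ear impulse responses
    (HRIRs are the time-domain form of HRTFs)."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)], axis=-1)

# Toy HRIRs: the right ear gets a delayed, attenuated copy, crudely
# mimicking a source off to the listener's left.
mono = np.random.randn(1000)
stereo = spatialize(mono, hrir_left=np.array([1.0, 0.2, 0.0]),
                          hrir_right=np.array([0.0, 0.5, 0.1]))
print(stereo.shape)   # (1002, 2): a stereo signal ready for headphones
```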
Auralization
● Auralization is the process of rendering the sound field of a source in space in such a way as to simulate the binaural listening experience through the use of physical and mathematical models.

● The goal of auralization is to recreate a listening environment by determining the reflection patterns of sound waves coming from a sound source as they move through the environment. This process is therefore very useful for creating reverberation effects.

● The two main computer-based approaches to creating these sound fields are wave-based modeling and ray-based
modeling. With wave-based modeling techniques, the goal is to solve the wave equation so as to completely re-create
a particular sound field. In many cases, there is no analytical solution to this equation, which means that numerical
solutions are required.

● In the ray-based approach, the paths taken by the sound waves as they travel from source to listener are found by following rays emitted from the source (see the sketch below). The problem with the ray-based approach is that the rays ignore the wavelengths of the sound waves and any phenomena associated with them, such as diffraction. This means the technique is appropriate only when sound wavelengths are smaller than the objects in the environment but larger than the surface roughness of those objects.
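A minimal sketch of a first-order ray-based computation using the image-source method, one common ray technique: mirroring the source across a wall turns the reflected path into a straight line, which gives the reflection's arrival delay. The geometry and positions are invented for illustration:

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def first_order_reflection(source, listener, wall_x):
    """Image-source method for a single wall at x = wall_x: mirror the
    source across the wall, then treat the mirrored 'image' as a direct
    path to the listener to get the reflection's arrival time."""
    source, listener = np.asarray(source, float), np.asarray(listener, float)
    image = source.copy()
    image[0] = 2 * wall_x - source[0]
    direct_t  = np.linalg.norm(listener - source) / SPEED_OF_SOUND
    reflect_t = np.linalg.norm(listener - image) / SPEED_OF_SOUND
    return direct_t, reflect_t

direct_t, reflect_t = first_order_reflection(
    source=(1.0, 1.5, 2.0), listener=(4.0, 1.5, 2.0), wall_x=0.0)
print(direct_t, reflect_t)   # ~8.7 ms direct, ~14.6 ms for the reflection
```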
Sound System Configurations
Headphones
Headphones have many distinct advantages in a 3D UI. They provide a high level of channel separation, which
helps to avoid crosstalk, a phenomenon that occurs when the left ear hears sound intended for the right ear, and
vice versa. They also isolate the user from external sounds in the physical environment, which helps to ensure that
these sounds do not affect the listener’s perception. They are often combined with visual displays that block out the
real world, such as HMDs, helping to create fully immersive experiences. Additionally, headphones allow multiple
users to receive 3D sound (assuming that they are all head-tracked) simultaneously, and they are somewhat easier
to deal with because there are only two sound channels to control.

The main disadvantage of headphones is a phenomenon called inside-the-head localization (IHL). IHL is the lack of externalization of a sound source, which results in the false impression that a sound is emanating from inside the user's head. IHL occurs mainly because of the lack of correct environmental information, that is, a lack of reverberation and HRTF information. The best way to minimize IHL is to ensure that the sounds delivered to the listener are as natural as possible. Of course, this naturalness is difficult to achieve, given the complexity of 3D sound generation discussed above. At a minimum, having accurate HRTF information will go a long way toward reducing IHL, and including reverberation can essentially eliminate IHL, at the cost of reduced user localization accuracy.
External Speakers
● The second approach to displaying 3D sound is to use external speakers placed at strategic locations in the environment. This approach is often used with projection-based visual displays. With external speakers, the user does not have to wear any additional devices.

● The main limitation of this approach is that it makes it difficult to present 3D sound to more than one head-tracked user (external speakers work very well for non-spatialized sound with multiple users).

● The major challenge with using external speakers for displaying 3D sound is avoiding crosstalk and making sure the listener's left and right ears receive the appropriate signals.

● The two main approaches for presenting 3D sound over external speakers are transaural audio and amplitude panning. Transaural audio allows for the presentation of the left and right binaural audio signals to the corresponding left and right ears using external speakers. Amplitude panning adjusts the intensity of the sound to simulate its directional properties: by systematically varying each external speaker's intensity, a phantom source is produced in a given location (see the sketch below).

● A final issue when using external speakers is speaker placement because sounds emanating from external
speakers can bounce or be filtered through real-world objects, hampering sound quality. For example, in a
surround-screen system, placing the speakers in front of the visual display could obstruct the graphics, while
placing them behind the display could muffle the sound.
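A minimal sketch of amplitude panning for the simplest two-speaker case, using the constant-power law. The [-1, 1] pan convention is an assumption; multi-speaker setups generalize the same idea (e.g., vector base amplitude panning):

```python
import numpy as np

def constant_power_gains(pan):
    """Two-speaker amplitude panning with the constant-power law.
    pan in [-1, 1]: -1 = hard left, 0 = center, +1 = hard right.
    The gains satisfy gL^2 + gR^2 = 1, keeping loudness even as the
    phantom source moves between the speakers."""
    angle = (pan + 1.0) * np.pi / 4.0     # map [-1, 1] onto [0, pi/2]
    return np.cos(angle), np.sin(angle)

gl, gr = constant_power_gains(0.0)        # phantom source dead center
print(gl, gr, gl**2 + gr**2)              # ~0.707, ~0.707, 1.0
```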
Audio in 3D Interfaces
There are several different ways 3D interfaces can use audio displays, including

● localization
● sonification
● ambient effects
● sensory substitution
● annotation and help
Haptic Displays
● Haptic displays try to provide the user with the sense of touch by simulating the physical interaction between
virtual objects and the user.

● Therefore, depending on the haptic display, these devices provide the user with a sense of force, a sense of
touch, or a combination of the two. Haptic displays also can be considered, in many cases, to be both input
and output devices because of their physical connection to the user.

An important component of a haptic display system (besides the actual physical device) is the software used to synthesize the forces and tactile sensations that the display device presents to the user. The software used in these interfaces is called haptic rendering software and includes many different algorithmic techniques taken from physics-based modeling, simulation, and other fields (see the sketch below).
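A minimal sketch of one classic haptic rendering technique, penalty-based force rendering against a rigid sphere. The stiffness and geometry are illustrative; real systems run this loop at roughly 1 kHz:

```python
import numpy as np

def penalty_force(probe_pos, sphere_center, sphere_radius, stiffness=800.0):
    """Penalty-based haptic rendering for a rigid sphere: when the probe
    penetrates the surface, apply a spring force along the surface normal,
    F = k * penetration_depth (Hooke's law)."""
    offset = np.asarray(probe_pos, float) - np.asarray(sphere_center, float)
    dist = np.linalg.norm(offset)
    penetration = sphere_radius - dist
    if penetration <= 0.0 or dist == 0.0:
        return np.zeros(3)                  # probe outside: no contact force
    normal = offset / dist                  # outward surface normal
    return stiffness * penetration * normal

print(penalty_force((0.0, 0.95, 0.0), (0.0, 0.0, 0.0), 1.0))
# probe 5 cm inside a unit sphere -> ~[0, 40, 0] N pushing it back out
```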
Haptic Cues
Tactile Cues
Tactile cues are perceived by a variety of cutaneous receptors located under the surface of the skin that produce information about surface texture and temperature as well as pressure and pain. Mechanoreceptors detect mechanical action (e.g., force, vibration, slip), and thermoreceptors detect changes in skin temperature. If a stimulus exceeds a particular receptor's threshold, a response is triggered as an electrical discharge and subsequently sent to the brain.

Kinesthetic Cues

Kinesthetic cues are perceived by receptors in the muscles, joints, and tendons of the body to produce information
about joint angles and muscular length and tension. Kinesthetic cues help to determine the movement, position,
and torque of different parts of the body, such as the limbs, as well as the relationship between the body and
physical objects, through muscular tension. Kinesthetic cues can be both active and passive. Active kinesthetic
cues are perceived when movement is self-induced, and passive kinesthetic cues occur when the limbs are being
moved by an external force (Stuart 1996).
Haptic Devices
Haptic Display Characteristics
Haptic displays have many different characteristics we can use to describe them. These characteristics help to determine a haptic device's quality and provide information on how it can be utilized in 3D interfaces. In this section, we discuss three of the most common characteristics:

● Resolution
● Haptic presentation capability
● Ergonomics
Haptic Display Types
Haptic displays are often categorized based on the types of actuators they use (i.e., the components of the haptic display that generate the force or tactile sensations). For the purposes of our discussion, haptic display devices can be placed into one of five categories:

● ground-referenced
● body-referenced
● tactile
● combination
● passive
