Group 1 3D User Interface Hardware (Input and Output) : Virtual Reality
GROUP 1
3D User Interface Hardware
(input and output)
1) Purely Active
a) These devices require the user to perform some physical action before any data is generated.
2) Purely Passive
a) These devices do not require any physical action from the user in order to generate data.
b) Ex. a tracker will continually output position and orientation records even if the device is not
moving.
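The distinction can be sketched in code (the classes below are hypothetical, not a real device API): an active device yields data only after a user action, while a passive device reports on every poll, moving or not.

```python
# Sketch only: hypothetical device classes illustrating active vs. passive input.

class ActiveButton:
    """Purely active: generates a record only when the user presses the button."""
    def __init__(self):
        self.pressed = False

    def poll(self):
        if self.pressed:
            self.pressed = False          # consume the press
            return {"event": "button_down"}
        return None                       # no user action -> no data

class PassiveTracker:
    """Purely passive: continually outputs position/orientation records."""
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)
        self.orientation = (0.0, 0.0, 0.0)  # yaw, pitch, roll

    def poll(self):
        # Reports every poll, even if the device has not moved.
        return {"position": self.position, "orientation": self.orientation}
```

An application polling both devices each frame would see tracker records on every frame but button records only on frames where a press occurred.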
Desktop Input Devices
● Keyboards
● 2D mice and trackballs
● pen-based tablets
● joysticks
● 6-DOF (degree-of-freedom) devices for the desktop
Keyboards
The keyboard is a traditional desktop input device consisting of a set of discrete components (buttons). Ex.
the arrow keys are often used to navigate in first-person shooter games. A standard keyboard is not practical
in more immersive 3D environments, so chord keyboards are used there instead.
Joysticks
● Joysticks are another example of input devices traditionally used on the desktop and
with a long history as a computer input peripheral.
● To stop the cursor, the joystick’s handle must be returned to the neutral position: the
handle’s deflection controls the cursor’s velocity rather than its position, a technique
called rate control. Self-centering (elastic or isometric) joysticks are best suited to this
mapping, whereas freely moving isotonic joysticks are better suited to position control.
● They are frequently used in driving and flight simulation games, and when integrated
into game controllers, they are the input device of choice with console video game
systems.
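The rate-control mapping described above can be sketched as follows (the gain and frame-time values are illustrative assumptions, not from any particular device):

```python
# Sketch of rate control: handle deflection sets the cursor's *velocity*,
# so the cursor stops only when the handle returns to neutral (0, 0).

def rate_control_step(cursor, deflection, gain=100.0, dt=1/60):
    """Advance a 2D cursor by one frame.

    cursor     -- current (x, y) position
    deflection -- joystick deflection per axis, each in [-1, 1]
    gain       -- velocity per unit deflection (assumed value)
    dt         -- frame time in seconds (assumed value)
    """
    vx = gain * deflection[0]
    vy = gain * deflection[1]
    return (cursor[0] + vx * dt, cursor[1] + vy * dt)
```

Holding the handle to the right moves the cursor steadily rightward each frame; releasing it to neutral leaves the cursor where it is.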
Six-DOF input devices for desktop
● Several 6-DOF input devices have been developed specifically for 3D interaction on the desktop.
● Slight push and pull pressure of the fingers on the cap of the device generates small deflections in x, y, and
z, which moves objects dynamically in the corresponding 3 axes. With slight twisting and tilting of the cap,
rotational motions are generated along the 3 axes.
● This type of device is commonly used in desktop 3D applications for manipulating virtual objects.
● They were originally developed for telerobotic manipulation and are commonly used today by 3D
designers and artists with CAD/CAM and animation applications.
● They do not replace the mouse; rather, they are used in conjunction with it. One hand on the motion
controller positions the objects in 3D space, while the other hand with the mouse can simultaneously select
menu items and edit the object.
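As a rough sketch of this mapping (the axis order and gain values are assumptions, not any vendor's API), each frame the six cap deflections nudge an object's position and Euler orientation:

```python
import math

# Sketch: map the six cap deflections of a desktop 6-DOF controller --
# translation (tx, ty, tz) and rotation (rx, ry, rz) -- onto an object's
# position and Euler-angle orientation. Gains are arbitrary assumptions.

def apply_6dof(position, orientation, deflection,
               t_gain=0.01, r_gain=math.radians(1.0)):
    """deflection = (tx, ty, tz, rx, ry, rz), each in [-1, 1]."""
    tx, ty, tz, rx, ry, rz = deflection
    new_pos = (position[0] + t_gain * tx,
               position[1] + t_gain * ty,
               position[2] + t_gain * tz)
    new_ori = (orientation[0] + r_gain * rx,
               orientation[1] + r_gain * ry,
               orientation[2] + r_gain * rz)
    return new_pos, new_ori
```

Calling this once per frame while the user holds a slight push on the cap produces the smooth, continuous object motion these devices are known for.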
Tracking Devices
● Motion trackers
● Eye trackers
● Data gloves
MOTION TRACKERS:
One of the most important aspects of 3D interaction in virtual worlds is providing a correspondence between the
physical and virtual environments. Currently, there are a number of different motion-tracking technologies in use,
including
● magnetic tracking
● mechanical tracking
● acoustic tracking
● inertial tracking
● optical tracking
● hybrid tracking
EYE TRACKER:
● Eye trackers are purely passive input devices used to determine where the user is
looking.
Display Devices (Output)
● A necessary component of any 3D UI is the hardware that presents information to the user.
These hardware devices, called display devices (or output devices), present information to one
or more of the user’s senses through the human perceptual system; the majority of them are
focused on stimulating the visual, auditory, or haptic (i.e., force and touch) senses.
● Of course, these output devices require a computer to generate the information through
techniques such as rendering, modeling, and sampling. The devices then translate this
information into perceptible human form. Therefore, displays actually consist of the physical
devices and the computer systems used in generating the content the physical devices
present.
● Display devices need to be considered when designing, developing, and using various
interaction techniques in 3D UIs, because some interaction techniques are more appropriate
than others for certain displays.
Visual Display Characteristics
● A number of important characteristics must be considered when describing visual display
devices, as listed below:
● field of regard and field of view
● spatial resolution
● screen geometry
● light transfer mechanism
● refresh rate
● ergonomics
Depth Cues
Because 3D UIs are used primarily in 3D applications, the user must have an understanding of
the 3D structure of the scene; in particular, understanding visual depth is crucial. Depth
information will help the user to interact with the application, especially when performing 3D
selection, manipulation, and navigation tasks.
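One such depth cue, relative size, can be sketched with a pinhole projection (the focal length here is an arbitrary assumption): an object's projected size falls off as 1/distance, which the visual system reads as depth.

```python
# Sketch of the relative-size depth cue under a pinhole perspective projection.

def projected_size(object_size, distance, focal_length=1.0):
    """Size of the object's image on the projection plane.

    Halving the distance doubles the projected size, and vice versa --
    the inverse relationship the visual system uses as a depth cue.
    """
    return focal_length * object_size / distance
```

So two identical objects rendered at different projected sizes are perceived as lying at different depths.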
Visual Display Device Types
● monitors
● surround-screen displays
● workbenches
● hemispherical displays
● head-mounted displays
● arm-mounted displays
● virtual retinal displays
● autostereoscopic displays
Auditory Displays
One of the major goals of auditory displays in VEs is the generation and display of spatialized 3D
sound, enabling the human participant to take advantage of their auditory localization capabilities.
As with the visual system, the auditory system provides listeners with a number of different
localization cues that allow them to determine the direction and distance of a sound source. Although
there are many different localization cues, the main ones that apply to 3D UIs (Shilling and
Shinn-Cunningham 2002) are
● binaural cues
● spectral and dynamic cues
● head-related transfer functions (HRTFs)
● reverberation
● sound intensity
● vision and environment familiarity
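The first of these, binaural cues, includes the interaural time difference (ITD): sound reaches the farther ear slightly later than the nearer one. A sketch using Woodworth's classic spherical-head approximation, ITD = (r/c)(θ + sin θ), with assumed typical values for head radius and the speed of sound:

```python
import math

# Sketch of the interaural time difference (ITD) binaural cue, using
# Woodworth's spherical-head approximation. Head radius (m) and speed of
# sound (m/s) are assumed typical values.

def itd_seconds(azimuth_deg, head_radius=0.0875, speed_of_sound=343.0):
    """ITD for a source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (head_radius / speed_of_sound) * (theta + math.sin(theta))
```

A source directly ahead produces zero ITD; one off to the side produces a delay on the order of a fraction of a millisecond, which the auditory system uses to judge direction.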
3D Sound Generation: 3D Sound Sampling and Synthesis
● The basic idea behind 3D sound sampling and synthesis is to record sound that the listener would
hear in the 3D application by taking samples from a real environment. For example, with binaural audio
recording, two small microphones are placed inside the user’s ears (or in the ears of an
anthropomorphic dummy head) to separately record the sounds heard by the left and right ears in the
natural environment.
● However, the main problem with this type of sound generation is that it is specific to the environmental
settings in which the recordings were made. Therefore, any change in the sound source’s location,
introduction of new objects into the environment, or significant movement of the user would require
new recordings.
● An alternative approach, which is one of the most common 3D sound generation techniques used in
3D applications today, is to imitate the binaural recording process by processing a monaural sound
source with a pair of left- and right-ear HRTFs corresponding to a desired position within the 3D
environment. With these empirically defined HRTFs, real-time interactivity becomes much more
feasible because particular sound sources can be placed anywhere in the environment and the HRTFs
will filter them accordingly to produce 3D spatial audio for the listener.
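The HRTF filtering step described above can be sketched as a pair of convolutions (the impulse responses in the test below are toy placeholders, not measured HRTFs):

```python
# Sketch of HRTF-based spatialization: a monaural signal is convolved with a
# left-ear and a right-ear impulse response for the desired source position,
# yielding a binaural (left/right) pair for headphone playback.

def convolve(signal, impulse_response):
    """Plain direct-form convolution (no external libraries)."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

def spatialize(mono, hrir_left, hrir_right):
    """Filter one mono source into left- and right-ear signals."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)
```

In a real system the two impulse responses would be looked up (or interpolated) from a measured HRTF set for the source's current direction, so moving the source simply means switching filters.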
Auralization
● Auralization is the process of rendering the sound field of a source in space in such a way as to simulate the binaural
listening experience through the use of physical and mathematical models.
● The goal of auralization is to recreate a listening environment by determining the reflection patterns of sound waves
coming from a sound source as they move through the environment. Therefore, this process is very useful for creating
reverberation effects.
● The two main computer-based approaches to creating these sound fields are wave-based modeling and ray-based
modeling. With wave-based modeling techniques, the goal is to solve the wave equation so as to completely re-create
a particular sound field. In many cases, there is no analytical solution to this equation, which means that numerical
solutions are required.
● In the ray-based approach, the paths taken by the sound waves as they travel from source to listener are found by
following rays emitted from the source. The problem with the ray-based approach is that these rays ignore the
wavelengths of the sound waves and any phenomena associated with them, such as diffraction. This means this
technique is appropriate only when sound wavelengths are smaller than the objects in the environment but larger than
the surface roughness of those objects.
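As a toy illustration of the ray-based idea, first-order image sources (a standard geometric-acoustics construction) mirror the source across each wall; each image's path length gives one reflection's delay (d / c) and a 1/d distance attenuation. A 1-D "room" keeps the sketch short; real auralization works in 3D and traces higher-order reflections as well.

```python
# Sketch of the image-source method in one dimension: a room spanning
# x = 0 .. room_length, a source, and a listener on that axis.

def first_order_reflections(src_x, lst_x, room_length, c=343.0):
    """Direct path plus the two first-order wall reflections.

    Returns a list of (delay_seconds, gain) pairs, one per path.
    Gains use a simple 1/distance attenuation (an assumption; real
    models also include wall absorption).
    """
    paths = [
        abs(lst_x - src_x),                     # direct path
        abs(lst_x - (-src_x)),                  # image in the wall at x = 0
        abs(lst_x - (2 * room_length - src_x)), # image in the wall at x = L
    ]
    return [(d / c, 1.0 / d) for d in paths]
```

Summing delayed, attenuated copies of the source signal over these paths produces a crude early-reflection pattern, the beginning of a reverberation effect.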
Sound System Configurations
Headphones
Headphones have many distinct advantages in a 3D UI. They provide a high level of channel separation, which
helps to avoid crosstalk, a phenomenon that occurs when the left ear hears sound intended for the right ear, and
vice versa. They also isolate the user from external sounds in the physical environment, which helps to ensure that
these sounds do not affect the listener’s perception. They are often combined with visual displays that block out the
real world, such as HMDs, helping to create fully immersive experiences. Additionally, headphones allow multiple
users to receive 3D sound (assuming that they are all head-tracked) simultaneously, and they are somewhat easier
to deal with because there are only two sound channels to control.
The main disadvantage of headphones is a phenomenon called inside-the-head localization (IHL). IHL is the lack of
externalization of a sound source, which results in the false impression that a sound is emanating from inside the
user’s head. IHL occurs mainly because of the lack of correct environmental information, that is, a lack of reverberation
and HRTF information. The best way to minimize IHL is to ensure that the sounds delivered to the listener are as
natural as possible. Of course, this naturalness is difficult to achieve, given the complexity of 3D sound generation
discussed above. At a minimum, having accurate HRTF information will go a long way toward reducing IHL, and
including reverberation can essentially eliminate it, at the cost of reduced localization accuracy.
External Speakers
● The second approach to displaying 3D sound is to use external speakers placed at strategic locations in the
environment. This approach is often used with projection-based visual displays. With external speakers, the
user does not have to wear any additional devices.
● The main limitation of this approach is that it is difficult to present 3D sound to more than one
head-tracked user (external speakers work very well for non-spatialized sound with multiple users).
● The major challenge with using external speakers for displaying 3D sound is how to avoid
crosstalk and make sure the listener’s left and right ears receive the appropriate signals.
● The two main approaches for presenting 3D sound over external speakers are with transaural audio and
amplitude panning. Transaural audio allows for the presentation of the left and right binaural audio signals to
the corresponding left and right ears using external speakers. Amplitude panning adjusts the intensity of the
sound in some way to simulate the directional properties. By systematically varying each external speaker’s
intensity, a phantom source is produced in a given location.
● A final issue when using external speakers is speaker placement because sounds emanating from external
speakers can bounce or be filtered through real-world objects, hampering sound quality. For example, in a
surround-screen system, placing the speakers in front of the visual display could obstruct the graphics, while
placing them behind the display could muffle the sound.
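The amplitude-panning idea can be sketched for the simplest case, a stereo speaker pair, using the standard constant-power pan law (the [-1, 1] pan convention is an assumption):

```python
import math

# Sketch of amplitude panning for two speakers: varying the pair's gains
# places a phantom source between them. pan = -1 (hard left) .. +1 (hard right).

def constant_power_pan(pan):
    """Return (left_gain, right_gain) with left^2 + right^2 == 1.

    Keeping the squared gains summing to 1 holds perceived loudness
    roughly constant as the phantom source moves across the pair.
    """
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] -> [0, pi/2]
    return math.cos(angle), math.sin(angle)
```

Generalizations of this idea to arbitrary speaker layouts (e.g., vector base amplitude panning) pick the speakers nearest the desired direction and distribute gain among them the same way.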
Audio in 3D Interfaces
There are several different ways 3D interfaces can use audio displays, including
● localization
● sonification
● ambient effects
● sensory substitution
● annotation and help
Haptic Displays
● Haptic displays try to provide the user with the sense of touch by simulating the physical interaction between
virtual objects and the user.
● Therefore, depending on the haptic display, these devices provide the user with a sense of force, a sense of
touch, or a combination of the two. Haptic displays also can be considered, in many cases, to be both input
and output devices because of their physical connection to the user.
Kinesthetic Cues
Kinesthetic cues are perceived by receptors in the muscles, joints, and tendons of the body to produce information
about joint angles and muscular length and tension. Kinesthetic cues help to determine the movement, position,
and torque of different parts of the body, such as the limbs, as well as the relationship between the body and
physical objects, through muscular tension. Kinesthetic cues can be both active and passive. Active kinesthetic
cues are perceived when movement is self-induced, and passive kinesthetic cues occur when the limbs are being
moved by an external force (Stuart 1996).
Haptic Devices
Haptic Display Characteristics
Haptic displays have many different characteristics we can use to describe them. These characteristics help to
determine a haptic device’s quality and provide information on how it can be utilized in 3D interfaces. In this section,
we discuss three of the most common characteristics:
● resolution
● haptic presentation capability
● ergonomics
Haptic Display Types
Haptic displays are often categorized based on the types of actuators (i.e., the components of the haptic display that
generate the force or tactile sensations) they use. For the purposes of our discussion, haptic display
devices can be placed into one of five categories:
● ground-referenced
● body-referenced
● tactile
● combination
● passive