CG Lesson 12 (En)
Virtual reality
■ Coates (1992):
□ Virtual Reality is electronic simulations of environments experienced via head mounted eye goggles and wired clothing enabling the end user to interact in realistic three-dimensional situations.
■ Greenbaum (1992):
□ Virtual Reality is an alternate world filled with computer-generated images that respond to human movements. These simulated environments are usually visited with the aid of an expensive data suit which features stereophonic video goggles and fiber-optic gloves.
Definition
■ Isdale (1998)
□ VR is a way for humans to visualize, manipulate and
interact with computers and extremely complex data
■ Other concepts
□ Virtual Worlds, Virtual Environments, Immersive VR,
Cyberspace ...
History
■ 1962 : Sensorama (from the movie industry: Morton Heilig)
■ 1970s : visualisation of virtual worlds on the screen
■ 1970 : first head-mounted display: Daniel Vickers from the University of Utah (based on an idea by Ivan Sutherland/MIT)
■ 1982 : Dataglove
■ 1980-85 : first commercial VR products
■ 1987 : Virtual Cockpit (British Aerospace): head and hand tracking, eye tracking, 3D visuals, 3D audio, speech recognition, vibro-tactile feedback
■ 1990-95 : popularisation of VR (films, books...)
□ The US Army spent close to $1,000 million on VR research in 1998.
VR – a new medium
■ Archeology: reconstruction of historic sites that have been destroyed or no longer exist
■ Training and coaching: allows learners to be trained before using the real device, especially when using the device is difficult or too expensive
Flight simulator of the US Air Force
Application
■ Fiction: creating non-existent scenes
Types of VR
■ Desktop VR
□ Non-immersive: 3D mouse, window system
□ 3D world displayed on the screen; mouse and keyboard as interactive devices
□ VRML, games
Types of VR
■ Video mapping
□ A variation of the Window-on-World (WoW) approach: combines a video of the user with 2D graphics, so the user sees an interactive display containing his own image.
Other types of VR (?)
■ User
□ Interact with the environment, view the environment through a camera
□ Has a representation in the virtual world (avatar)
□ Able to listen, speak (voice recognition)
□ Able to move (gesture recognition)
Architectural components
■ Multiuser
■ Network connection
■ Simulation
□ Control the application
□ Event-based control (collision detection; see the sketch below)
□ Simulate the objects' physical attributes
■ World
□ Combines the other components into a virtual world with which the users interact.
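As a toy illustration of event-based control, here is a minimal sphere-vs-sphere collision check of the kind a simulation component might run each tick; the names and values are illustrative, not from any particular engine:

```python
# Minimal sphere-vs-sphere overlap test; a simulation component could run
# checks like this each tick and fire an event when a collision occurs.
def spheres_collide(p1, r1, p2, r2):
    d2 = sum((a - b) ** 2 for a, b in zip(p1, p2))  # squared distance
    return d2 <= (r1 + r2) ** 2

if spheres_collide((0.0, 0.0, 0.0), 1.0, (1.5, 0.0, 0.0), 1.0):
    print("collision event")  # stand-in for the event-based control hook
```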
Perceive
■ hearing (stimulus: sound; organ: ears, body): the ability to hear; the auditory faculty; SYN. audition, auditory sense, sense of hearing, auditory modality
■ sight (stimulus: image; organ: eyes): the ability to see; the faculty of vision; SYN. vision, visual sense, visual modality
■ touch (stimulus: surface / temperature; organ: skin): the faculty of touch; SYN. sense of touch, skin senses, touch modality, cutaneous senses
■ smell (stimulus: odour; organ: nose): the faculty of smell; SYN. sense of smell, olfaction, olfactory modality
■ taste (stimulus: savour, flavour; organ: tongue, nose): the faculty of taste; SYN. gustation, sense of taste, gustatory modality
■ kinesthesia (stimulus: position, movement, muscular tension; organ: muscles): the perception of body position, movement and muscular tension; SYN. kinaesthesia, feeling of movement
■ proprioception (stimulus: balance, acceleration, position, location, orientation, movement of the body; organ: ear): the ability to sense the position, location, orientation and movement of the body and its parts
Perceive
■ monoscopic cues
□ relative size
□ interposition and occlusion
□ perspective distortion
□ lighting and shadows
□ texture gradient
□ motion parallax
■ binocular (stereoscopic) cues
□ stereo disparity
□ convergence
Recreate sensation
1. monoscopic cues
□ realistic rendering / lighting simulation
2. stereoscopic cues → stereo disparity: presentation of the appropriate view to each eye
□ time multiplexing of images
□ multiplexing with chromatic filters (anaglyph; see the sketch below)
□ multiplexing with polarizer filters
□ providing two views simultaneously
Color-encoded stereo image pair
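For the anaglyph (chromatic filter) option, a minimal sketch, assuming two pre-rendered eye views stored as left.png and right.png (hypothetical file names):

```python
# Red/cyan anaglyph: take the red channel from the left-eye view and the
# green + blue (cyan) channels from the right-eye view.
import numpy as np
from PIL import Image

left = np.asarray(Image.open("left.png").convert("RGB"))
right = np.asarray(Image.open("right.png").convert("RGB"))

anaglyph = np.empty_like(left)
anaglyph[..., 0] = left[..., 0]     # R from the left eye
anaglyph[..., 1:] = right[..., 1:]  # G and B from the right eye
Image.fromarray(anaglyph).save("anaglyph.png")
```

Viewed through red/cyan glasses, each eye then receives (approximately) only its own view.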
Presence
VR system model
(Diagram: INPUT (head position, hand position) feeds a world model of objects with geometry, surface, dynamic and acoustic properties, physical constraints and lights; physical simulation and animation with collision detection produce the OUTPUT: visual, audio and haptic displays.)
(Diagram: a head tracker with transmitter and receiver, a head-mounted display (HMD), headphones and a microphone connect the user to the VE machine with its intelligence and database.)
Display and interaction devices
Oculus Rift
3-DOF vs. 6-DOF
■ HTC Vive Pro 2: 800-1400 €, not standalone, 2440×2440 per eye, 120 Hz
(Tracking hardware: cameras, IR LEDs)
■ Lost tracking?
□ Predict from received sensor data
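A minimal dead-reckoning sketch of such prediction, assuming we kept the last tracked position and an estimated velocity (the Pose class and all numbers are invented for illustration; real headsets fuse IMU data with far more sophisticated filters):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Pose:
    position: np.ndarray  # metres
    velocity: np.ndarray  # metres per second

def predict(last: Pose, dt: float) -> Pose:
    # Constant-velocity extrapolation while optical tracking is lost.
    return Pose(last.position + last.velocity * dt, last.velocity)

last_good = Pose(np.array([0.0, 1.7, 0.0]), np.array([0.1, 0.0, 0.0]))
print(predict(last_good, 0.025).position)  # ~3 dropped frames at 120 Hz
```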
CAVE Automatic Virtual
Environment
Christie's CAVE
Haptic Displays
- Adam Savage, Inside Valve: Making Half-Life: Alyx for Virtual Reality
Computer Graphics
issues in VR
General graphics issues in VR
● Quest 2 has
○ 97° horizontal, 93° vertical visible FoV
○ 104° horizontal, 98° vertical rendered FoV
● Valve Index
○ 108° horizontal, 104° vertical visible FoV
○ 108° horizontal, 109° vertical rendered FoV
Lens Distortion
● Barrel distortion
● Pincushion distortion
● Mustache distortion (a combination of the above)
● Many ways of correcting this, including the Brown–Conrady model
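As a sketch, the radial part of the Brown–Conrady model scales a lens-centred point by a polynomial in its squared radius; the k coefficients come from lens calibration, and the values below are made up for illustration:

```python
import numpy as np

def brown_conrady_radial(xy, k1, k2, k3):
    # xy: (..., 2) normalized coordinates with the lens centre at the origin.
    r2 = np.sum(np.square(xy), axis=-1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3)

# A point halfway out is pushed outward -> pincushion-style distortion.
print(brown_conrady_radial(np.array([0.5, 0.0]), 0.22, 0.24, 0.0))
```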
Correcting lens distortion in real-time
There are at least 3 options:
● Doing it in the fragment shader
○ Slowest, but very accurate
● Mesh based: calculating it in the vertex shader
○ Using a separate pass utilizing a tessellated grid
○ Can be combined with chromatic aberration correction!
● Doing it using vertex displacement on the whole scene
○ Doable, but can be a hassle
● The squishy and moving nature of eyes makes these kinds of corrections tricky
Correcting chromatic aberration
● The index of refraction of light depends on wavelength
● Displays have red, green and blue subpixels, so you only need to figure out the offsets for those 3 wavelengths
○ Though the light they emit might not be fully monochromatic
● Achieving this with a vertex grid: have 3 sets of UVs with different distortion on them, based on whether they affect the R, G or B buffers
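A sketch of the vertex-grid idea, reusing the radial model above with the coefficient nudged per colour channel (all numbers are invented):

```python
import numpy as np

def grid_uvs(n, k1):
    # Tessellated full-screen grid with radially pre-distorted UVs.
    u, v = np.meshgrid(np.linspace(-1.0, 1.0, n), np.linspace(-1.0, 1.0, n))
    xy = np.stack([u, v], axis=-1)
    r2 = np.sum(np.square(xy), axis=-1, keepdims=True)
    return xy * (1.0 + k1 * r2)

uv_r = grid_uvs(64, 0.215)  # red is bent slightly less
uv_g = grid_uvs(64, 0.220)
uv_b = grid_uvs(64, 0.226)  # blue slightly more
# A vertex shader would interpolate these three UV sets and the fragment
# shader would sample the eye buffer once per colour channel.
```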
Stencil Mesh / HiddenAreaMask
Reduces the number of pixels that need to be shaded
● VR.HiddenAreaMask variable in UE
● Reduced shading from 457 million pixels/sec to 378 million pixels/sec (1512 × 1680 per eye render resolution) on the original Vive
● Note: nowadays the Quest 2 pushes 844 million display pixels/sec
○ Uncompressed this would be ≈20 Gbit/s (844 Mpx/s × 3 × 8 bit; the pixel rate already includes 120 FPS)
● SteamVR
○ Interleaved reprojection
■ If frames are dropped, render at half rate and estimate
other frames
○ Motion Smoothing
■ Asynchronous SpaceWarp type technology (not depth
aware?)
The two views rendered for the eyes are relatively close together. This can be used to optimize the rendering process!
Instanced Stereo Rendering
Instead of sending the draw calls for the two viewpoints separately to the GPU, do it once for both eyes!
● This is a CPU optimization, but a GPU net negative. Choose based on
content!
● You should still be doing draw call batching!
● Called Multiview for mobile VR in UE
● Things are changing with UE5 and Nanite (*Nanite does not work with
stereo rendering yet)
Other stereoscopic view
optimizations
● Round Robin Occlusion: do the occlusion culling calculation only for one eye per frame, use that data for both eyes when rendering objects, and then switch the eye that does the calculation for the next frame (see the sketch below)
● Monoscopic Far Field Rendering: render the far-field geometry from a central view, as parallax becomes less noticeable at longer distances (deprecated in UE)
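A toy sketch of the round-robin idea; the distance test stands in for a real occlusion query, and all values are invented:

```python
objects = [(0.0, 0.0, 10.0), (0.0, 0.0, 80.0)]           # object positions
left_eye, right_eye = (-0.032, 1.7, 0.0), (0.032, 1.7, 0.0)

def visible_set(cam, max_dist=50.0):
    # Stand-in visibility test; a real engine would issue occlusion queries.
    return [o for o in objects
            if sum((a - b) ** 2 for a, b in zip(o, cam)) < max_dist ** 2]

for frame in range(4):
    culling_eye = left_eye if frame % 2 == 0 else right_eye
    drawlist = visible_set(culling_eye)  # reused when rendering BOTH eyes
    print(frame, len(drawlist))
```

The eye views are close enough together that the one-frame-stale visibility data is usually an acceptable approximation.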
Anti-aliasing
● MSAA is the standard
○ UE supports 4×MSAA, though other engines and apps go for 8×
● TAA is used in some applications, but its inherent softness is not desirable
● DLSS works relatively well
○ Though it can have artifacting with small repeating details, e.g. cloth with visible weaving
○ Waiting for a non vendor-specific solution that works as well
● SSAA is the desired result
○ Increasing the resolution scale to 2× can give very perceptible image quality improvements
■ The original resolution scale is likely already ~1.4× the pixel resolution of the display
Specular Aliasing from Normal Maps
Whaaa…?
□ A mismatch between seen and felt motion can cause motion sickness. Reset plz! RESET! (vomit)
→ Avoided by 'vignetting' (see the sketch below)
"We are moving!" / "No we are not!"
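One hedged way to drive such a comfort vignette: fade it in as artificial (non-physical) motion gets faster. The thresholds below are invented for illustration:

```python
def vignette_strength(speed_ms, turn_deg_s, max_speed=3.0, max_turn=90.0):
    # 0 = no vignette, 1 = fully closed-in view; driven by artificial motion only.
    s = min(speed_ms / max_speed, 1.0)
    t = min(abs(turn_deg_s) / max_turn, 1.0)
    return max(s, t)

print(vignette_strength(1.5, 0.0))    # gentle walk -> 0.5
print(vignette_strength(0.0, 120.0))  # fast smooth turn -> 1.0
```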
Locomotion
■ No locomotion! The game takes place around the player in a single location (only the player's physical movement results in locomotion)
■ Fixed locations: movement to these locations can be controlled by the player or be event-based
Locomotion
■ Projectile moving
■ Blink teleportation: instantly teleport to the desired location
Locomotion
Other
■ Simulate running: forward motion by moving the controllers up and down, simulating the movement of the hands as if running (see the sketch below)
■ Grappling hook: you shoot a grappling hook and it pulls you forward (or the player pulls themselves along it)
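A minimal arm-swing sketch for the "simulate running" idea: per-frame vertical controller motion is turned into forward speed. The gain and input values are invented:

```python
def forward_speed(prev_heights, cur_heights, dt, gain=1.5):
    # Sum of vertical hand movement (metres) this frame drives speed (m/s),
    # applied along the head's forward direction by the caller.
    swing = sum(abs(c - p) for p, c in zip(prev_heights, cur_heights))
    return gain * swing / dt

# Two controllers each moved ~2 cm during an 11 ms frame:
print(forward_speed((1.00, 1.20), (1.02, 1.18), 0.011))  # ~5.5 m/s sprint
```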
Locomotion
Standard door height: 2.1 m
Average human height: 1.6 - 1.7 m
My height: 1.88 m
■ Use real-world dimensions: 1.7 m is ~81% (⅘) of 2.1 m
Half-Life 2 (2004)
https://learnleveldesign.com/tutorials/first-twenty-minutes-of-half-life-2/
GUI
(Diagram: the four GUI types (Diegetic, Non-Diegetic, Spatial, Meta) in a quadrant, with Diegetic marked as good for VR)
GUI
■ Diegetic
□ Part of the environment
Tribocalypse VR (2017)
Dead Space (2008) Half-Life: Alyx (2020)
GUI
■ Non-Diegetic
□ Not a part of the environment in any way (e.g. a usual GUI)
Neverwinter (2013) Totem Games (2016)
GUI
■ Spatial
□ In the 3D environment
■ Meta
□ Not in the environment, but something the player should be aware of
Doom Eternal (2020) Need for Speed 2 (1997)
Turning regular games into VR
Regular games to VR
Others
● Not only perception: also study what it means to be in a new world
● Travel methods need changing
● Interactions are more important
● Movement methods need agreement
● Testing is important (including for motion sickness)
● These are not hard rules
● Iterate quickly, fail fast, learn what works best
Tips and suggestions