Unit 1


Unit-I: VIRTUAL REALITY AND VIRTUAL ENVIRONMENTS: The historical development of VR: scientific landmarks (computer graphics, real-time computer graphics, flight simulation, virtual environments); requirements for VR; benefits of virtual reality. HARDWARE TECHNOLOGIES FOR 3D USER INTERFACES: visual displays, auditory displays, haptic displays, choosing output devices for 3D user interfaces.

Virtual Reality, Augmented Reality, and Mixed Reality
Virtual Reality (VR): The word ‘virtual’ means something that is conceptual and does not exist physically, and the word ‘reality’ means the state of being real. So the term ‘virtual reality’ is itself a contradiction: it means something that is almost real.

We will probably never stand on top of Mount Everest, dive deep into the Mariana Trench, or step on the Moon, but we might be able to do all these things without even stepping out of our homes. This is where Virtual Reality comes to the rescue.

In simple words, Virtual Reality lets us experience things that never actually happen. It tricks our brain into thinking that we are in a different place using three of our senses: sight, hearing, and touch. It creates a different world, and we feel we are part of it, both physically and mentally.

Virtual Reality (VR) has many applications, including:

1. Entertainment: used for gaming, 3D cinema, and theme parks.
2. Medicine: used for surgery training and exposure therapy for people with phobias or anxiety disorders.
3. Skill training: used for astronaut training, flight training, military training, etc.
Augmented Reality (AR): The word ‘augmented’ means to add. AR might not sound as exciting as VR, but it has impacted our lives deeply. Augmented reality uses different tools to enhance the real, existing environment, providing an improved version of reality.

Snapchat filters, Instagram filters, and Pokémon Go are all examples of AR. AR apps allow a customer to place virtual furniture in their house before buying it. Projection-based AR applications allow human interaction by projecting light onto a real surface and then sensing human touch. Learning has become far more interesting and easy with AR. Markerless AR provides data based on the user's location. AR is also a powerful marketing tool, since it allows users to try products before buying.

Mixed Reality (MR): It brings real-world and digital elements together. But wait, this is what AR does, so what is the difference?
MR integrates digital objects with the real world in such a way that the objects look as if they really belong there.
Mixed Reality works by scanning our physical environment and creating a map of our surroundings, so that the device knows exactly how to place digital content into that space realistically, allowing us to interact with it.

A few Examples of MR apps are:


1. An app that allows users to place notes around their environment.
2. A television app placed in comfortable spots for viewing.
3. A cooking app placed on the kitchen wall.
4. Microsoft’s HoloLens is also a famous example of MR.
One thing common to all the above technologies is that they change the way we perceive real-world objects. All of them are trying to connect the real world and virtual tools, helping us improve our productivity.
History of Virtual Reality

Virtual reality refers to a computer-generated simulation of a three-dimensional environment that allows individuals to engage with and explore the simulated surroundings in a manner that closely imitates reality as it is perceived through their senses.

The stereoscope can be thought of as a static-image predecessor of today's stereoscopic 3D TVs. Sir Charles Wheatstone invented the stereoscope in 1832, even before the advent of photography. The device used mirrors placed at a 45° angle to reflect images into the viewer's eyes from both the left and right sides.

Stereoscopes are optical devices that allow the two images to be viewed separately and with relaxed eyes. Lens or pocket stereoscopes permit viewing the full overlap area of prints up to ~9 cm × 13 cm, or parts of larger prints.

In the case of modern phone-based virtual reality systems like Google Cardboard, a mobile
phone is used to display images instead of physical images.

Edwin Link invented the first mechanical flight simulator in the 1930s, a device that
mimicked the movements and feelings of flying in a cockpit-like structure. The Army Air
Corps purchased six of these systems in 1935 and by the end of World War II, more than
10,000 had been sold by Link.
Computer graphics:
Topics:
• Basics
• Output Primitives
• 2-Dimensional Viewing
• Visible Surface Detection
• 3-Dimensional Object Representation
• OpenGL
• Graphics Functions in C
• Misc

Computer Graphics Tutorial

Displaying an image of any size on a computer screen is a difficult task, and computer graphics simplifies it. Graphics on a computer are produced using various algorithms and techniques. This tutorial describes how a rich visual experience is provided to the user by explaining how all of this is processed by the computer.

In computer graphics, two- or three-dimensional pictures can be created for use in research. Over time, many hardware devices and algorithms have been developed to improve the speed of picture generation. Computer graphics includes the creation and storage of models and images of objects. These models are used in various fields, such as engineering and mathematics.
Definition of Computer Graphics:
It is the use of computers to create and manipulate pictures on a display device. It comprises software techniques to create, store, modify, and represent pictures.

Why is computer graphics used?

Suppose a shoe manufacturing company wants to show its shoe sales over five years. A vast amount of information would need to be stored, requiring a lot of time and memory, and the raw data would be hard for a common person to understand. In this situation, graphics is a better alternative. Graphics tools include charts and graphs. Using graphs, data can be represented in pictorial form, and a picture can be understood easily with a single look.

Interactive computer graphics uses two-way communication between the computer and the user. The computer receives signals from the input device, and the picture is modified accordingly; the picture changes quickly when a command is applied.
Computer Graphics Tutorial Index

Computer Graphics Tutorial

o Computer Graphics Tutorial


o Application of Computer Graphics
o Interactive and Passive Graphics

Graphic Systems

o Display Processor
o Cathode Ray Tube (CRT)
o Random Scan vs Raster Scan
o Color CRT Monitors
o Direct View Storage Tubes
o Flat Panel Display

Input-Output Devices

o Input Devices
o Trackball
o Light Pen
o Image Scanner
o Output Devices
o Plotters

Scan Converting a Line

o Scan Conversion Definition


o Scan Converting a Point
o Scan Converting a Straight Line
o DDA Algorithm
o Bresenham's Line Algorithm

Scan Converting a Circle

o Defining a Circle
o Defining a Circle using Polynomial Method
o Defining a Circle using Polar Coordinates Method
o Bresenham's Circle Algorithm
o Midpoint Circle Algorithm

Scan Converting Ellipse

o Scan Converting an Ellipse


o Polynomial Method
o Trigonometric Method
o Midpoint Ellipse Algorithm

Filled Area Primitives

o Boundary Fill Algorithm


o Flood Fill Algorithm
o Scan Line Polygon Fill Algorithm

2D Transformations

o Introduction of Transformation
o Translation
o Scaling
o Rotation
o Reflection
o Shearing
o Matrix Representation
o Homogeneous Coordinates
o Composite Transformation
o Pivot Point Rotation
2D-Viewing

o Window
o Window to Viewport Co-ordinate Transformation
o Zooming
o Panning

Clipping Techniques

o Clipping
o Point Clipping
o Line Clipping
o Midpoint Subdivision Algorithm
o Text Clipping
o Polygon
o Sutherland-Hodgman Polygon Clipping
o Weiler-Atherton Polygon Clipping

Pointing & Positioning

o Pointing & Positioning Techniques


o Elastic or Rubber Band Techniques
o Dragging

Shading

o Introduction of Shading
o Constant Intensity Shading
o Gouraud shading
o Phong Shading

Animation

o Animation
o Application Areas of Animation
o Animation Functions

3D Computer Graphics
o Three Dimensional Graphics
o Three Dimensional Transformations
o Scaling
o Rotation
o Rotation about Arbitrary Axis
o Inverse Transformations
o Reflection
o Shearing

Hidden Surfaces

o Hidden Surface Removal


o Back Face Removal Algorithm
o Z-Buffer Algorithm
o Painter's Algorithm
o Scan Line Algorithm
o Subdivision Algorithm
o 3D Modelling System

Projection

o Projection
o Perspective Projection
o Parallel Projection

Programs

o Computer Graphics Programs


“Stereoscopic"
“Stereoscopic" refers to a method of creating the illusion of depth in an image by presenting
two offset images separately to the left and right eye of the viewer.
These two-dimensional images are then combined in the brain to give the perception of 3D
depth.
This technique relies on the fact that our two eyes view the world from slightly different angles.
When each eye receives its own image, the brain fuses the two images together, allowing us to
perceive spatial depth and three-dimensionality.
1. Parallax
Parallax in the context of virtual reality (VR) is a phenomenon that helps to create a sense of
depth and realism in a virtual environment. It refers to the apparent displacement or difference
in the position of an object when it is viewed from two different lines of sight, and it is a key
cue that human vision uses to perceive depth.
In VR, parallax is simulated to give users a 3D experience. When a VR headset tracks the
movement of the user's head and adjusts the images seen by each eye accordingly, it creates a
sense of depth through parallax.
This means that as the user's head moves, objects in the virtual world will appear to move in
relation to one another, just as they do in the real world. Near objects will seem to move faster
than far objects, creating a sense of spatial relationships and depth.
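To make the near/far relationship concrete, here is a minimal sketch in C of screen parallax under a simple pinhole model; the interocular distance, screen distance, and function names are illustrative assumptions, not taken from any VR SDK.

#include <stdio.h>

/* Screen parallax for a point at depth z_obj, using a simple pinhole
 * model: parallax = ipd * (1 - z_screen / z_obj).
 * Positive parallax: the object appears behind the screen plane;
 * negative: in front of it. Nearer objects give larger-magnitude
 * parallax, which is why they appear to shift more. */
double parallax(double ipd, double z_screen, double z_obj) {
    return ipd * (1.0 - z_screen / z_obj);
}

int main(void) {
    double ipd = 0.065;    /* assumed interocular distance, metres */
    double z_screen = 2.0; /* assumed virtual screen distance, metres */
    printf("object at 1 m:  %+.4f m\n", parallax(ipd, z_screen, 1.0));
    printf("object at 2 m:  %+.4f m\n", parallax(ipd, z_screen, 2.0));
    printf("object at 10 m: %+.4f m\n", parallax(ipd, z_screen, 10.0));
    return 0;
}

An object exactly at the screen distance produces zero parallax, while nearer and farther objects shift in opposite directions, which is the depth cue described above.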

2. Freeviewing
Free viewing in virtual reality (VR) refers to the ability to look around a virtual environment
in any direction without restrictions. This means the user can naturally move their head to look
up, down, left, right, and even behind, and the VR system will update the visual display in real
time to match the user's perspective.

360-degree Exploration: Users can explore a full 360-degree environment, experiencing content that is rendered in all directions.
Intuitive Interaction: Free viewing often comes with the expectation of intuitive interaction within the environment, allowing users to navigate and interact with the virtual world as naturally as they would in the physical world.

Head Tracking: VR headsets track the user's head movements with gyroscopes, accelerometers, and sometimes external sensors to ensure that the display responds accurately to the user's looking direction (a minimal sensor-fusion sketch follows this section).
Immersive Experience: This unbound exploration capability is key to creating an immersive
VR experience, making the user feel as if they are truly present in the virtual environment.
No Handheld Devices Required: Unlike controlled viewing, where the user might use a
handheld device or a keyboard/mouse to change the viewpoint, free viewing in VR relies on
the natural movement of the user's head.

Free viewing is a fundamental aspect of high-quality VR experiences, crucial for applications such as virtual tours, simulations, gaming, education, and training, where unrestricted visual exploration enhances the sense of presence and engagement.
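As referenced under Head Tracking above, here is a minimal sensor-fusion sketch in C: a complementary filter that combines gyroscope and accelerometer readings to estimate head pitch. The sample rate, ALPHA value, and simulated readings are assumptions for illustration, not from any headset SDK.

#include <stdio.h>

/* Complementary filter: fuse a gyroscope rate (deg/s) with an
 * accelerometer-derived angle (deg) to estimate head pitch.
 * ALPHA close to 1 trusts the smooth but drift-prone gyro;
 * the small remainder uses the noisy but drift-free accelerometer. */
#define ALPHA 0.98

double fuse(double pitch, double gyro_rate, double accel_pitch, double dt) {
    return ALPHA * (pitch + gyro_rate * dt) + (1.0 - ALPHA) * accel_pitch;
}

int main(void) {
    double pitch = 0.0, dt = 0.01;  /* assumed 100 Hz sample rate */
    /* Simulated readings: the user tilts their head up at 10 deg/s. */
    for (int i = 0; i < 100; i++) {
        double gyro_rate = 10.0;            /* deg/s, from gyroscope */
        double accel_pitch = 10.0 * i * dt; /* deg, from accelerometer */
        pitch = fuse(pitch, gyro_rate, accel_pitch, dt);
    }
    printf("estimated pitch after 1 s: %.2f deg\n", pitch);
    return 0;
}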

3. Shutter system
A shutter system synchronizes shutter glasses with a display to create a stereoscopic 3D effect.
The computer generates two images, one for the left and one for the right eye; the user wears
the wireless glasses, which alternately block and pass images to create the stereo effect.
The computer system generates left and right eye images sequentially. An infrared signal
synchronizes the glasses to the computer images, so that the right image is shown when the
right lens is transparent and the left image is shown when the left lens is transparent.
Infrared emitters are placed throughout and around the VR display environment, so that
regardless of where the user “looks” the glasses are fully functional.
The tracker, attached to the stereo glasses, finds the position of the user's head, enabling the CAVE software to calculate and update the images from the user's perspective.
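A minimal C sketch of the frame-sequencing logic just described; in a real system the glasses are synchronized by the IR emitter rather than by application code, and the 120 Hz refresh rate is a typical assumption.

#include <stdio.h>

/* Active-shutter sequencing: the display alternates left/right images
 * every refresh, and the glasses open the matching lens. At 120 Hz,
 * each eye effectively sees 60 frames per second. */
int main(void) {
    const int refresh_hz = 120;
    printf("per-eye rate: %d Hz\n", refresh_hz / 2);
    for (int frame = 0; frame < 6; frame++) {
        const char *eye = (frame % 2 == 0) ? "LEFT" : "RIGHT";
        printf("frame %d: show %s image, open %s lens\n", frame, eye, eye);
    }
    return 0;
}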
4. Polarization systems

To present stereoscopic pictures, two images are projected superimposed onto the same screen
through polarizing filters or presented on a display with polarized filters. For projection, a silver
screen is used so that polarization is preserved. On most passive displays every other row of
pixels is polarized for one eye or the other. This method is also known as being interlaced. The
viewer wears low-cost eyeglasses which also contain a pair of opposite polarizing filters. As
each filter only passes light which is similarly polarized and blocks the opposite polarized light,
each eye only sees one of the images, and the effect is achieved.
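A small C sketch of the row-interlacing idea described above: even rows are taken from the left-eye image and odd rows from the right-eye image, and the polarizing film plus the glasses route each set of rows to one eye. The tiny 4 × 4 "images" are placeholders for the example.

#include <stdio.h>

#define W 4
#define H 4

int main(void) {
    unsigned char left[H][W], right[H][W], out[H][W];
    /* Fill placeholder left/right eye images. */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            left[y][x]  = 'L';
            right[y][x] = 'R';
        }
    /* Interlace: even rows -> left eye, odd rows -> right eye. */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            out[y][x] = (y % 2 == 0) ? left[y][x] : right[y][x];
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) putchar(out[y][x]);
        putchar('\n');
    }
    return 0;
}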
5. Interference filter systems
This technique uses specific wavelengths of red, green, and blue for the right eye, and different
wavelengths of red, green, and blue for the left eye. Eyeglasses which filter out the very specific
wavelengths allow the wearer to see a full color 3D image. It is also known as spectral comb
filtering or wavelength multiplex visualization or super-anaglyph. Dolby 3D uses this principle.
The Omega 3D/Panavision 3D system has also used an improved version of this technology.
In June 2012 the Omega 3D/Panavision 3D system was discontinued by DPVO Theatrical,
who marketed it on behalf of Panavision, citing ″challenging global economic and 3D market
conditions″.

6. Holography
A hologram is a recording of an interference pattern that can reproduce a 3D light field using
diffraction. In general usage, a hologram is a recording of any type of wavefront in the form of
an interference pattern.
Holography is a technique that enables a wavefront to be recorded and later reconstructed. It is
best known as a method of generating three-dimensional images, and has a wide range of other
uses, including data storage, microscopy, and interferometry. In principle, it is possible to make
a hologram for any type of wave.

Some popular stereoscopic software solutions include Autodesk 3ds Max, Blender, Unity 3D, Bino, and Unreal Engine. These software tools offer a range of features for creating and displaying stereoscopic content, such as 3D modeling, animation, lighting, texture mapping, and stereoscopic rendering.

Bino software:

Example content: the Rolling Marbles scene, rendered with WurblPT, is available as 360° and 180° videos in both 3D and 2D (rolling-marbles-360-tb.mp4, rolling-marbles-360.mp4, rolling-marbles-180-tb.mp4, rolling-marbles-180.mp4).

Bino is a video player with a focus on 3D and Virtual Reality:

Support for 3D videos in various formats

Support for 360° and 180° videos, with and without 3D

Support for 3D displays with various modes

Support for Virtual Reality environments, including SteamVR, CAVEs, powerwalls, and other multi-display / multi-GPU / multi-host systems

Bino is based on Qt. The optional Virtual Reality support is based on QVR. No other
libraries are required.

Bino is free software; you can redistribute it and/or modify it under the terms of the
GNU General Public License as published by the Free Software Foundation; either
version 3 of the License, or (at your option) any later version.
Simulator:
Simulation is useful when experimentation with the real system is expensive, dangerous, or likely to cause significant disruption (e.g. transport systems, nuclear reactors, and airline systems). It might also be an option when mathematical modelling of the system is impossible.
The Need for Simulation
A simulator is a collection of hardware and software systems which are used to mimic the behaviour of some entity or phenomenon. Typically, the entity or phenomenon being simulated is from the domain of the tangible, ranging from the operation of integrated circuits to the behaviour of a light aircraft during wind shear. Simulators may also be used to analyze and verify theoretical models which may be too difficult to grasp at a purely conceptual level. Such phenomena range from the examination of black holes to the study of highly abstract models of computation. As such, simulators play a crucial role in both industry and academia.
Despite the increasing recognition of simulators as a viable and necessary research tool, one must constantly be aware of the potential problems which simulators may introduce. Many of these problems are related to the computational limitations of existing hardware platforms, and they are quickly being overcome as more powerful platforms are introduced. Other problems, unfortunately, are inherent in simulators and relate to the complexity of the systems being simulated. This section highlights some of the major advantages and disadvantages posed by modern-day simulators.
Flight simulation:

What is a native database?

Native database or data APIs: these APIs access specific data or database sources whose providers may feel these APIs provide better functionality than industry-standard APIs, or that they cannot conform to those standards.
API stands for Application Programming Interface. In the context of APIs, the word
Application refers to any software with a distinct function. Interface can be thought of as a
contract of service between two applications. This contract defines how the two communicate
with each other using requests and responses.

What are BIM projects?

Building Information Modelling (BIM) is a process that encourages collaborative working between all the disciplines involved in the design, construction, maintenance, and use of buildings. All parties share the same information simultaneously, in the same format.
3D user interfaces

➢ User interfaces are the means for communication between users and systems. 3D
interfaces include media for 3D representation of system state, and media for 3D user
input or manipulation. Using 3D representations is not enough to create 3D
interaction. The users must have a way of performing actions in 3D as well. To that
effect, special input and output devices have been developed to support this type of
interaction. Some, such as the 3D mouse, were developed based on existing devices
for 2D interaction.
3D user interfaces are user interfaces where 3D interaction takes place; this means that the user's tasks occur directly within a three-dimensional space. The user must communicate commands, requests, questions, intent, and goals to the system, and in turn the system has to provide feedback, requests for input, information about its status, and so on.
The user and the system do not share the same type of language; therefore, to make the communication process possible, the interfaces must serve as intermediaries or translators between them.
The way the user transforms perceptions into actions is called the human transfer function, and the way the system transforms signals into display information is called the system transfer function. 3D user interfaces rely on physical devices that connect the user and the system with minimal delay; these fall into two types: 3D user interface output hardware and 3D user interface input hardware.

3D user interface output hardware


Output devices, also called display devices, allow the machine to provide information or feedback to one or more users through the human perceptual system. Most of them focus on stimulating the visual, auditory, or haptic senses. However, in some unusual cases they can also stimulate the user's olfactory system.
3D visual displays
These devices are the most popular, and their goal is to present the information produced by the system through the human visual system in a three-dimensional way. The main features that distinguish these devices are: field of regard and field of view, spatial resolution, screen geometry, light transfer mechanism, refresh rate, and ergonomics.
Another way to characterize these devices is by the categories of depth-perception cues they use to help the user understand three-dimensional information.
The main types of displays used in 3D user interfaces are: monitors, surround-screen displays, workbenches, hemispherical displays, head-mounted displays, arm-mounted displays, and autostereoscopic displays. Virtual reality headsets and CAVEs (Cave Automatic Virtual Environments) are examples of fully immersive visual displays, where the user can see only the virtual world and not the real world. Semi-immersive displays allow users to see both; monitors and workbenches are examples of semi-immersive displays.
Rendering pipeline
✓ A rendering is a particular view of a 3D model that has been converted into a realistic image.

✓ The rendering pipeline, also known as the graphics pipeline, is a conceptual model in computer graphics that describes the series of steps used to transform 3D models into a 2D image on a screen. This pipeline is the foundation of rendering in most graphics systems, including video games, simulations, and computer-aided design (CAD) applications.

✓ The Application Stage is the first phase in the rendering pipeline, where most of the preparatory work is done before the actual rendering process begins. This stage is primarily handled by the CPU (Central Processing Unit) and involves various tasks executed by the application software, such as scene setup, 3D model loading, camera setup, and light setup.

✓ The CPU then issues drawing commands to the GPU through a graphics API such as OpenGL or DirectX.

✓ Vertex Specification
✓ The process of vertex specification is where the application sets up an
ordered list of vertices to send to the pipeline. These vertices define the
boundaries of a primitive.

✓ Primitives are basic drawing shapes, like triangles, lines, and points.
Exactly how the list of vertices is interpreted as primitives is handled via a
later stage.
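As a small illustration (not tied to any particular graphics API), an ordered vertex list for a single triangle primitive might look like this in C; the Vertex struct and coordinate values are assumed for the example.

#include <stdio.h>

/* Vertex specification: an ordered list of vertices that a later
 * pipeline stage will interpret as one triangle primitive. */
typedef struct { float x, y, z; } Vertex;

int main(void) {
    Vertex triangle[3] = {
        { -0.5f, -0.5f, 0.0f },  /* bottom-left  */
        {  0.5f, -0.5f, 0.0f },  /* bottom-right */
        {  0.0f,  0.5f, 0.0f }   /* top          */
    };
    for (int i = 0; i < 3; i++)
        printf("v%d = (%.1f, %.1f, %.1f)\n",
               i, triangle[i].x, triangle[i].y, triangle[i].z);
    return 0;
}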

✓ A vertex shader is a graphics processing function used to add special effects to objects in a 3D environment by performing mathematical operations on the objects' vertex data.
Vertex Transformation: This is the initial step where the vertices of the 3D
models are transformed from their local object coordinates to camera (view)
coordinates. This involves various transformations to position, rotate, and scale
the objects within the 3D world.
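The core operation here is multiplying each vertex, in homogeneous coordinates, by a 4x4 transformation matrix. A minimal C sketch, assuming a simple translation as the combined model-view transform; the matrix values and names are illustrative.

#include <stdio.h>

/* Transform a vertex by a 4x4 matrix in homogeneous coordinates. */
typedef struct { float v[4]; } Vec4;

Vec4 mat4_mul_vec4(const float m[4][4], Vec4 p) {
    Vec4 r = {{0, 0, 0, 0}};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            r.v[i] += m[i][j] * p.v[j];
    return r;
}

int main(void) {
    /* Assumed model-view matrix: translate by (2, 0, -5), moving the
     * object in front of a camera that looks down the -z axis. */
    float model_view[4][4] = {
        { 1, 0, 0,  2 },
        { 0, 1, 0,  0 },
        { 0, 0, 1, -5 },
        { 0, 0, 0,  1 }
    };
    Vec4 p = {{ 1, 1, 0, 1 }};  /* object-space vertex */
    Vec4 q = mat4_mul_vec4(model_view, p);
    printf("view-space vertex: (%.1f, %.1f, %.1f)\n",
           q.v[0], q.v[1], q.v[2]);
    return 0;
}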
Lighting: Calculations to determine the lighting for each vertex are performed.
This takes into account the lights present in the scene, the material properties of
the object, and the camera perspective. The normals of the vertices are used to
calculate how light interacts with the surfaces.
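As an illustration of the most common per-vertex lighting term, here is a Lambertian diffuse calculation in C: intensity is proportional to the cosine of the angle between the surface normal and the light direction. The normal and light vectors are made up for the example.

#include <stdio.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 normalize(Vec3 a) {
    double len = sqrt(dot(a, a));
    Vec3 r = { a.x / len, a.y / len, a.z / len };
    return r;
}

int main(void) {
    Vec3 normal = normalize((Vec3){ 0.0, 1.0, 0.0 }); /* vertex normal */
    Vec3 light  = normalize((Vec3){ 1.0, 1.0, 0.0 }); /* toward light */
    double n_dot_l = dot(normal, light);
    /* Clamp at zero: surfaces facing away receive no diffuse light. */
    double diffuse = n_dot_l > 0.0 ? n_dot_l : 0.0;
    printf("diffuse intensity: %.3f\n", diffuse); /* cos 45 deg = 0.707 */
    return 0;
}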
Clipping: Vertices outside the visible area are clipped out of the pipeline because
they won't be visible on the screen.
Culling: Triangles facing away from the camera can be removed to optimize the
rendering process since they are not visible to the viewer.
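One common back-face culling test checks the triangle's winding order in screen space via the sign of the cross product of two edges. A small C sketch, assuming counter-clockwise front faces (a convention, not a universal rule).

#include <stdio.h>

typedef struct { double x, y; } Pt;

/* Positive cross-product z component => counter-clockwise winding,
 * which we treat as front-facing; back faces can be skipped. */
int is_front_facing(Pt a, Pt b, Pt c) {
    double cross_z = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    return cross_z > 0.0;
}

int main(void) {
    Pt a = {0, 0}, b = {1, 0}, c = {0, 1};
    printf("CCW triangle front-facing: %d\n", is_front_facing(a, b, c));
    printf("CW  triangle front-facing: %d\n", is_front_facing(a, c, b));
    return 0;
}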
Rasterization
The rasterization stage is a key phase in the rendering pipeline where vector
graphics (in the form of geometric primitives such as points, lines, and polygons)
are converted into raster images (pixels or dots) for display on the screen.
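As one classic example of this conversion (the DDA line algorithm listed in the index above), here is a small C sketch; printing pixel coordinates stands in for writing into a framebuffer.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* DDA line rasterization: step along the major axis one pixel at a
 * time and round the minor-axis coordinate, converting a geometric
 * line into discrete pixels. */
void dda_line(int x0, int y0, int x1, int y1) {
    int dx = x1 - x0, dy = y1 - y0;
    int steps = abs(dx) > abs(dy) ? abs(dx) : abs(dy);
    if (steps == 0) { printf("set pixel (%d, %d)\n", x0, y0); return; }
    double x = x0, y = y0;
    double x_inc = dx / (double)steps, y_inc = dy / (double)steps;
    for (int i = 0; i <= steps; i++) {
        printf("set pixel (%d, %d)\n", (int)lround(x), (int)lround(y));
        x += x_inc;
        y += y_inc;
    }
}

int main(void) {
    dda_line(0, 0, 5, 3);  /* rasterize a line from (0,0) to (5,3) */
    return 0;
}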
Framebuffer
The framebuffer is a portion of RAM containing a bitmap that drives a video
display. It is a memory buffer dedicated to storing the intensity values of pixels
that are displayed on the screen. Its primary purpose is to hold the final image
data that has been processed by the graphics pipeline until it can be displayed on
the monitor.
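A minimal C sketch of a software framebuffer: it fills a block of memory with per-pixel intensity values and writes it out as a PPM image file in place of scan-out to a real monitor. The gradient pattern and file name are just for illustration.

#include <stdio.h>

#define W 64
#define H 64

/* RGB framebuffer: one intensity triple per pixel. */
static unsigned char fb[H][W][3];

int main(void) {
    /* Fill the framebuffer with a red/green gradient. */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            fb[y][x][0] = (unsigned char)(x * 255 / (W - 1)); /* red   */
            fb[y][x][1] = (unsigned char)(y * 255 / (H - 1)); /* green */
            fb[y][x][2] = 64;                                 /* blue  */
        }
    /* "Display" the framebuffer by dumping it as a binary PPM file. */
    FILE *f = fopen("framebuffer.ppm", "wb");
    if (!f) return 1;
    fprintf(f, "P6\n%d %d\n255\n", W, H);
    fwrite(fb, 1, sizeof fb, f);
    fclose(f);
    return 0;
}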
