Unit 1
We will probably never stand on the top of Mount Everest, dive deep into the Mariana
Trench, or step on the Moon, but we might be able to do all these things without even
stepping out of our homes. This is where Virtual Reality comes to the rescue.
In simple words, Virtual Reality means we can experience things that never actually
happen. It tricks our brain into thinking that we are in a different place using three
of our senses: seeing, hearing, and touching. It creates a different world, and we feel we
are a part of it, both physically and mentally.
Snapchat filters, Instagram filters, and Pokémon Go are all examples of AR. It is AR
apps of this kind that allow a customer to place virtual furniture in their house before buying it.
Projection-based AR applications allow human interaction by projecting light onto a real
surface and then sensing human touch. Learning has become far more interesting and
easy with AR. Markerless AR provides data based on our location. AR is also a
powerful tool for marketing, as it allows users to try products before buying them.
Mixed Reality (MR): It brings real-world and digital elements together. But wait, this
is what AR does, so what is the difference?
MR integrates digital objects and the real world in such a way that the objects look
like they really belong there.
Mixed Reality works by scanning our physical environment and creating a map of our
surroundings, so that the device knows exactly how to place digital content into
that space realistically, allowing us to interact with it.
Because it presents only a static image, the stereoscope can be regarded as an early
forerunner of today's stereoscopic 3D TVs. Sir Charles Wheatstone invented the stereoscope
in 1832, even before the advent of photography. The device used mirrors placed at a 45°
angle to reflect images into the viewer's eyes from the left and right sides.
Stereoscopes are optical devices that help the viewer see the two images separately and with
relaxed eyes. Lens or pocket stereoscopes permit viewing the full overlap area of prints up to
∼9 cm × 13 cm, or parts of larger prints.
In modern phone-based virtual reality systems such as Google Cardboard, a mobile
phone is used to display the images instead of physical prints.
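As a sketch of how such phone-based systems render, the screen is simply split into two halves, one per eye. The following C fragment (plain OpenGL calls; the function name is illustrative, and lens-distortion correction and the per-eye cameras are omitted) shows the idea:

```c
/* Side-by-side stereo rendering as used by phone-based viewers such as
 * Google Cardboard: one viewport per eye on a single screen. */
#include <GL/gl.h>

void draw_side_by_side(int screen_w, int screen_h)
{
    /* Left eye: left half of the screen. */
    glViewport(0, 0, screen_w / 2, screen_h);
    /* ... draw the scene from the left-eye camera ... */

    /* Right eye: right half of the screen. */
    glViewport(screen_w / 2, 0, screen_w / 2, screen_h);
    /* ... draw the scene from the right-eye camera ... */
}
```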
Edwin Link invented the first mechanical flight simulator in the 1930s, a device that
mimicked the movements and sensations of flying in a cockpit-like structure. The Army Air
Corps purchased six of these systems in 1935, and by the end of World War II more than
10,000 had been sold by Link.
Computer graphics:
Topics:
• Basics
• Output Primitives
• 2-Dimensional Viewing
• Visible Surface Detection
• 3-Dimensional Object Representation
• OpenGL
• Graphics Functions in C
• Miscellaneous
Displaying an image of arbitrary size on a computer screen is a difficult task; computer
graphics simplifies it. Graphics on the computer are produced using various algorithms and
techniques. This tutorial describes how a rich visual experience is provided to the user by
explaining how all of this is processed by the computer.
In computer graphics, two- or three-dimensional pictures can be created for use in research.
Over time, many hardware devices and algorithms have been developed to improve the speed
of picture generation. Computer graphics includes the creation and storage of models and
images of objects. These models are used in various fields such as engineering and mathematics.
Definition of Computer Graphics:
It is the use of computers to create and manipulate pictures on a display device. It comprises
software techniques to create, store, modify, and represent pictures.
Suppose a shoe manufacturing company wants to show its shoe sales over five years. A vast
amount of information would have to be stored, requiring a lot of time and memory, and the
result would be hard for a common person to understand. In this situation, graphics are a better
alternative. Graphics tools include charts and graphs. Using graphs, data can be represented in
pictorial form, and a picture can be understood easily with a single look.
Interactive computer graphics works on the concept of two-way communication between
the computer and the user. The computer receives signals from the input device, and the
picture is modified accordingly. The picture changes quickly when a command is applied.
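A minimal sketch of this loop in C, using the GLFW windowing library (an assumption; the text names no toolkit), shows the receive-input, modify-picture, redraw cycle:

```c
/* Minimal interactive-graphics loop: read input, modify the picture
 * state, redraw. Any event-driven toolkit works the same way. */
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return 1;
    GLFWwindow *win = glfwCreateWindow(640, 480, "Interactive", NULL, NULL);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);

    float x = 0.0f;                       /* state that the user modifies */
    while (!glfwWindowShouldClose(win)) {
        glfwPollEvents();                 /* 1. receive signals from input devices */
        if (glfwGetKey(win, GLFW_KEY_RIGHT) == GLFW_PRESS)
            x += 0.01f;                   /* 2. modify the picture state */

        glClear(GL_COLOR_BUFFER_BIT);     /* 3. redraw immediately */
        /* ... draw the scene at the new position x ... */
        glfwSwapBuffers(win);
    }
    glfwTerminate();
    return 0;
}
```

Because the picture is redrawn on every iteration, any change to the state becomes visible almost immediately, which is exactly the quick response described above.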
Computer Graphics Tutorial Index
Graphic Systems
o Display Processor
o Cathode Ray Tube (CRT)
o Random Scan vs Raster Scan
o Color CRT Monitors
o Direct View Storage Tubes
o Flat Panel Display
Input-Output Devices
o Input Devices
o Trackball
o Light Pen
o Image Scanner
o Output Devices
o Plotters
Output Primitives
o Defining a Circle
o Defining a Circle using Polynomial Method
o Defining a Circle using Polar Coordinates Method
o Bresenham's Circle Algorithm
o Midpoint Circle Algorithm
2D Transformations
o Introduction of Transformation
o Translation
o Scaling
o Rotation
o Reflection
o Shearing
o Matrix Representation
o Homogeneous Coordinates
o Composite Transformation
o Pivot Point Rotation
2D-Viewing
o Window
o Window to Viewport Co-ordinate Transformation
o Zooming
o Panning
Clipping Techniques
o Clipping
o Point Clipping
o Line Clipping
o Midpoint Subdivision Algorithm
o Text Clipping
o Polygon
o Sutherland-Hodgman Polygon Clipping
o Weiler-Atherton Polygon Clipping
Shading
o Introduction of Shading
o Constant Intensity Shading
o Gouraud shading
o Phong Shading
Animation
o Animation
o Application Areas of Animation
o Animation Functions
3D Computer Graphics
o Three Dimensional Graphics
o Three Dimensional Transformations
o Scaling
o Rotation
o Rotation about Arbitrary Axis
o Inverse Transformations
o Reflection
o Shearing
Hidden Surfaces
Projection
o Projection
o Perspective Projection
o Parallel Projection
Programs
Freeviewing
Free viewing in virtual reality (VR) refers to the ability to look around a virtual environment
in any direction without restrictions. This means the user can naturally move their head to look
up, down, left, right, and even behind, and the VR system will update the visual display in real
time to match the user's perspective.
Head Tracking: VR headsets track the user's head movements with gyroscopes,
accelerometers, and sometimes external sensors to ensure that the display responds accurately
to the user's looking direction.
Immersive Experience: This unbound exploration capability is key to creating an immersive
VR experience, making the user feel as if they are truly present in the virtual environment.
No Handheld Devices Required: Unlike controlled viewing, where the user might use a
handheld device or a keyboard/mouse to change the viewpoint, free viewing in VR relies on
the natural movement of the user's head.
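To make the head-tracking update concrete, here is a small C sketch (type and function names are illustrative, not from any particular headset API): each frame, the tracker's yaw and pitch are turned into a rotation matrix, whose inverse (the transpose, for a pure rotation) serves as the view rotation so the display follows the head.

```c
#include <math.h>

typedef struct { float m[3][3]; } Mat3;   /* row-major 3x3 rotation */

/* Head orientation R = Rx(pitch) * Ry(yaw), rebuilt every frame from
 * the tracked angles (in radians). */
static Mat3 head_orientation(float yaw, float pitch)
{
    float cy = cosf(yaw),   sy = sinf(yaw);
    float cp = cosf(pitch), sp = sinf(pitch);
    Mat3 r = {{
        {  cy,      0.0f,  sy      },
        {  sp * sy, cp,   -sp * cy },
        { -cp * sy, sp,    cp * cy },
    }};
    return r;
}
```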
Shutter system
Shutter glasses synchronize with a display to create a stereoscopic 3D effect.
The computer generates two images, one for the left eye and one for the right; the user wears
wireless glasses, which alternately block and pass the images to create the stereo effect.
The computer system generates left and right eye images sequentially. An infrared signal
synchronizes the glasses to the computer images, so that the right image is shown when the
right lens is transparent and the left image is shown when the left lens is transparent.
Infrared emitters are placed throughout and around the VR display environment so that,
regardless of where the user “looks”, the glasses remain fully functional.
The tracker, attached to the stereo glasses, finds the position of the user's head, enabling the
CAVE software to calculate and update the images from the user's perspective.
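A sketch of how one frame might be produced in this scheme, using OpenGL's quad-buffered stereo (this assumes a stereo-capable display and driver; the helper functions and the EYE_OFFSET constant are hypothetical, and the infrared synchronization itself is handled by the display hardware):

```c
#include <GL/gl.h>

#define EYE_OFFSET 0.03f  /* half the interocular distance, metres (assumed) */

/* Hypothetical application hooks. */
void set_camera_for_eye(float lateral_offset);
void draw_scene(void);

/* One frame of frame-sequential stereo: the left and right images go to
 * separate back buffers, and the display alternates them in step with
 * the shutter glasses. */
void draw_stereo_frame(void)
{
    glDrawBuffer(GL_BACK_LEFT);                        /* left-eye buffer */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    set_camera_for_eye(-EYE_OFFSET);
    draw_scene();

    glDrawBuffer(GL_BACK_RIGHT);                       /* right-eye buffer */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    set_camera_for_eye(+EYE_OFFSET);
    draw_scene();
    /* A buffer swap then presents both images; the emitter keeps the
     * glasses' shutters synchronized with the alternation. */
}
```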
Polarization systems
To present stereoscopic pictures, two images are projected superimposed onto the same screen
through polarizing filters or presented on a display with polarized filters. For projection, a silver
screen is used so that polarization is preserved. On most passive displays, every other row of
pixels is polarized for one eye or the other; this method is known as interlacing. The
viewer wears low-cost eyeglasses that contain a pair of opposite polarizing filters. As
each filter passes only light which is similarly polarized and blocks the oppositely polarized
light, each eye sees only one of the images, and the effect is achieved.
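The row interleaving for such a passive display can be sketched in a few lines of C (a simplified model: both source images are assumed to be full resolution and tightly packed):

```c
#include <stddef.h>
#include <string.h>

/* Compose the interlaced frame for a passive polarized display:
 * even pixel rows carry the left-eye image, odd rows the right-eye
 * image, matching the display's alternating polarizing filter rows. */
void interlace_rows(const unsigned char *left, const unsigned char *right,
                    unsigned char *out, size_t width, size_t height,
                    size_t bytes_per_pixel)
{
    size_t stride = width * bytes_per_pixel;
    for (size_t y = 0; y < height; y++) {
        const unsigned char *src = (y % 2 == 0) ? left : right;
        memcpy(out + y * stride, src + y * stride, stride);
    }
}
```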
5. Interference filter systems
This technique uses specific wavelengths of red, green, and blue for the right eye, and different
wavelengths of red, green, and blue for the left eye. Eyeglasses which filter out the very specific
wavelengths allow the wearer to see a full color 3D image. It is also known as spectral comb
filtering or wavelength multiplex visualization or super-anaglyph. Dolby 3D uses this principle.
The Omega 3D/Panavision 3D system has also used an improved version of this technology.
In June 2012, the Omega 3D/Panavision 3D system was discontinued by DPVO Theatrical,
who marketed it on behalf of Panavision, citing "challenging global economic and 3D market
conditions".
6. Holography
A hologram is a recording of an interference pattern that can reproduce a 3D light field using
diffraction. In general usage, a hologram is a recording of any type of wavefront in the form of
an interference pattern.
Holography is a technique that enables a wavefront to be recorded and later reconstructed. It is
best known as a method of generating three-dimensional images, and has a wide range of other
uses, including data storage, microscopy, and interferometry. In principle, it is possible to make
a hologram for any type of wave.
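The recording and reconstruction can be summarized in two lines. A sketch using the conventional symbols O for the object wave and R for the reference wave (these symbols are not in the text above):

```latex
% Recorded intensity (the interference pattern):
I = |O + R|^2 = |O|^2 + |R|^2 + O R^* + O^* R
% Re-illuminating the recording with the reference wave R:
I \cdot R = \left(|O|^2 + |R|^2\right) R + |R|^2\, O + R^2 O^*
% The |R|^2 O term is the reconstructed object wavefront, produced by diffraction.
```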
Some popular stereoscopic software solutions include Autodesk 3ds Max, Blender,
Unity 3D, Bino, and Unreal Engine. These software tools offer a range of features for
creating and displaying stereoscopic content, such as 3D modeling, animation, lighting,
texture mapping, and stereoscopic rendering.
Bino software:
Bino is based on Qt. The optional Virtual Reality support is based on QVR. No other
libraries are required.
Bino is free software; you can redistribute it and/or modify it under the terms of the
GNU General Public License as published by the Free Software Foundation; either
version 3 of the License, or (at your option) any later version.
Simulator:
It is useful when experimentation with the real system is expensive, dangerous, or likely to
cause significant disruption (e.g., transport systems, nuclear reactors, and airline systems). It
might also be an option when mathematical modelling of a system is impossible.
The Need for Simulation
A simulator is a collection of hardware and software systems which are used to mimic the
behaviour of some entity or phenomenon. Typically, the entity or phenomenon being simulated
is from the domain of the tangible, ranging from the operation of integrated circuits to the
behaviour of a light aircraft during wind shear. Simulators may also be used to analyze and
verify theoretical models which may be too difficult to grasp from a purely conceptual level.
Such phenomena range from the examination of black holes to the study of highly abstract
models of computation. As such, simulators play a crucial role in both industry and academia.
Despite the increasing recognition of simulators as a viable and necessary research tool,
one must constantly be aware of the potential problems which simulators may introduce. Many
of the problems are related to the computational limitations of existing hardware platforms but
are quickly being overcome as more powerful platforms are introduced. Other problems,
unfortunately, are inherent within simulators and are related to the complexity associated with
the systems being simulated. This section highlights some of the major advantages and
disadvantages posed by modern-day simulators.
Flight simulation:
➢ User interfaces are the means for communication between users and systems. 3D
interfaces include media for 3D representation of system state, and media for 3D user
input or manipulation. Using 3D representations is not enough to create 3D
interaction. The users must have a way of performing actions in 3D as well. To that
effect, special input and output devices have been developed to support this type of
interaction. Some, such as the 3D mouse, were developed based on existing devices
for 2D interaction.
3D user interfaces are user interfaces in which 3D interaction takes place; this means that the
user's tasks occur directly within a three-dimensional space. The user communicates
commands, requests, questions, intent, and goals to the system, and in turn the system has to
provide feedback, requests for input, information about its status, and so on.
The user and the system do not share the same type of language; therefore, to make
the communication process possible, the interfaces must serve as intermediaries or translators
between them.
The way the user transforms perceptions into actions is called the human transfer function, and
the way the system transforms signals into display information is called the system transfer
function. 3D user interfaces rely on physical devices that let the user and the
system communicate with minimal delay; these fall into two types: 3D User Interface Output
Hardware and 3D User Interface Input Hardware.
✓ The Application Stage is the first phase in the rendering pipeline, where
most of the preparatory work is done before the actual rendering process
begins. This stage is primarily handled by the CPU (Central Processing
Unit) and involves various tasks executed by the application software such
as Scene Setup, 3D Model Loading, Camera Setup, and Light Setup.
✓ Vertex Specification
✓ The process of vertex specification is where the application sets up an
ordered list of vertices to send to the pipeline. These vertices define the
boundaries of a primitive.
✓ Primitives are basic drawing shapes, like triangles, lines, and points.
Exactly how the list of vertices is interpreted as primitives is handled via a
later stage.
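As a sketch of vertex specification in OpenGL (the API named in the topics above; a GL 3+ context and an extension loader are assumed), the application might hand three vertices to the pipeline like this:

```c
#include <GL/gl.h>  /* plus a loader such as GLEW or glad for GL 3+ entry points */

/* Build the ordered list of vertices the pipeline will consume. */
void specify_triangle(GLuint *vao, GLuint *vbo)
{
    /* Three 2D vertices; by themselves they say nothing about shape. */
    const GLfloat verts[] = {
        -0.5f, -0.5f,
         0.5f, -0.5f,
         0.0f,  0.5f,
    };

    glGenVertexArrays(1, vao);
    glBindVertexArray(*vao);

    glGenBuffers(1, vbo);
    glBindBuffer(GL_ARRAY_BUFFER, *vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

    /* Attribute 0: two floats per vertex, tightly packed. */
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (const void *)0);
    glEnableVertexAttribArray(0);

    /* Only a later draw call, e.g. glDrawArrays(GL_TRIANGLES, 0, 3),
     * tells the pipeline to interpret these vertices as one triangle. */
}
```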