UNIT 4 Notes
Uploaded by Harsh Dewangan

Unit: 04

Syllabus
Unit – IV (8 hours)
Illumination Models: Basic Models, Displaying Light Intensities. Surface
Rendering Methods: Polygon Rendering Methods: Gouraud Shading, Phong
Shading. Computer Animation: Types of Animation, Key Frame vs. Procedural
Animation, Methods of Controlling Animation, Morphing. Introduction to
Virtual Reality and Augmented Reality.
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, GIET (AUTONOMOUS), GUNUPUR
COMPUTER GRAPHICS - BCSPC7010 - 3-0-1 4

ILLUMINATION AND SHADING


Light Sources: The light sources that illuminate an object are of two types:
 Light-emitting sources: e.g. a bulb, the Sun.
 Light-reflecting sources: e.g. the walls of a room.
i. Point Source: The dimensions of the light source are much smaller than the size of the object.
ii. Distributed Light: The dimensions of the light source and the object are approximately the same.
iii. Light Source Attenuation: A basic property of light is that it loses intensity the farther it
travels from its source. For a point source, the intensity falls off with the square of the
distance from the source. The technical name for this is light attenuation.
A commonly used radial attenuation function is
f_att(d) = min(1, 1 / (a0 + a1·d + a2·d²))
where d is the distance from the source and a0, a1, a2 are user-assigned coefficients.
iv. Ambient Light: If, instead of self-luminosity, there is a diffuse, non-directional source of light,
then the product of the multiple reflections of light from the many surfaces present in the
environment is called ambient light.
Diffuse Reflection
Light coming from all directions is assumed to be background light, and it is assumed to be
uniform: the same amount of light goes up as goes down. The ratio of the light reflected
from a surface to the total incoming light is called the coefficient of reflection, or
reflectivity. A white surface has a coefficient of 1, while a black surface has a coefficient of 0.
Considering the effect of ambient light when it is reflected from a surface, it
produces a uniform illumination of the surface at any viewing position from which the
surface is visible.

Specular Reflection
When we illuminate a shiny surface such as polished metal, we observe a highlight, or
bright spot, on the surface. This phenomenon of reflection of the incident light in a
concentrated region around the specular reflection angle is called specular reflection.
N: unit normal vector to the surface.
R: unit vector in the direction of ideal specular reflection.
L: unit vector towards the point light source.
V: unit vector pointing to the viewer from the surface.
Phong Illumination Model
Here the maximum specular reflection occurs when the viewing direction V coincides with
the reflection direction R (viewing angle φ = 0), and the reflection falls off sharply as φ
increases. The rapid fall-off is approximated by cos^n φ = (V·R)^n, giving the specular term
I_spec = k_s · I_l · (V·R)^n
where k_s is the specular reflection coefficient and I_l is the intensity of the light source.

n is the specular reflection parameter, determined by the type of surface. The value of n
typically varies from 1 to 100, depending on the surface material.
 For a perfect reflector, n is infinity.
 For a rough surface, n is near 1.
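As an illustrative sketch (not part of the original notes; the function name and parameter values are assumptions), the Phong specular term k_s (V·R)^n can be computed from unit vectors as follows:

```python
def phong_specular(L, N, V, ks, n):
    """Phong specular term ks * (R.V)^n, where R is L mirrored about N.
    L, N, V are unit vectors given as 3-tuples."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    nl = dot(N, L)
    # Reflection vector: R = 2(N.L)N - L
    R = tuple(2 * nl * ni - li for ni, li in zip(N, L))
    rv = max(dot(R, V), 0.0)  # clamp: no highlight behind the surface
    return ks * rv ** n

# Viewer aligned with the reflection direction gives the maximum highlight:
print(phong_specular((0, 0, 1), (0, 0, 1), (0, 0, 1), ks=0.8, n=50))  # -> 0.8
```

A larger n narrows the highlight: once the viewer tilts away from R, (V·R)^n falls off rapidly, matching the fall-off described above.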

Halfway Vector / Blinn-Phong Reflection Model

This model uses the additional vector H = (L + V) / |L + V|. It is also called the modified
Phong reflection model because H points halfway between the direction of the light source
and that of the viewer; the specular term becomes k_s · I_l · (N·H)^n.
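A minimal sketch of the halfway-vector computation (names are illustrative, not from the notes):

```python
def halfway(L, V):
    """Blinn-Phong halfway vector H = (L + V) / |L + V| (L, V unit 3-tuples)."""
    s = tuple(l + v for l, v in zip(L, V))
    norm = sum(c * c for c in s) ** 0.5
    return tuple(c / norm for c in s)

def blinn_phong_specular(L, N, V, ks, n):
    """Specular term ks * (N.H)^n using the halfway vector."""
    H = halfway(L, V)
    nh = max(sum(a * b for a, b in zip(N, H)), 0.0)
    return ks * nh ** n

print(halfway((0, 0, 1), (0, 0, 1)))  # -> (0.0, 0.0, 1.0)
```

Because H avoids computing the reflection vector R per point, Blinn-Phong is cheaper than Phong when the light and viewer are distant (H is then constant over the surface).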

SHADING:
We can shade any surface by calculating the surface normal at each visible point and applying
the illumination model described above at that point.
Different Types of Shading
I. Constant intensity shading
II. Gouraud Shading
III. Phong Shading
IV. Halftone Shading

CONSTANT INTENSITY SHADING:

It is also called flat shading. The illumination
model is applied only once per polygon face to
determine a single intensity value, which is used
for the whole polygon.
This method is valid under the following
assumptions:
1. The light source is at an infinite distance, so
N·L is constant across the polygon face.
2. The viewer is at an infinite distance, so V·R is
constant over the surface.
3. The polygon represents the actual surface
being modeled and is not an approximation to a
curved surface.
Gouraud Shading:
1. It is also called interpolated shading.
2. The polygon surface is displayed by linearly
interpolating intensity values across the surface.
3. The intensity values of each polygon are matched with the
values of adjacent polygons along the common
edge, removing intensity discontinuities at the edges.
It needs the following calculations:
1. Determine the average normal vector at each polygon vertex.
2. Apply an illumination model at each polygon vertex to determine the vertex intensity.
3. Linearly interpolate the vertex intensities over the surface of the polygon.

Let N1, N2 and N3 be the normals of three surfaces sharing a vertex V. Then the average
vertex normal is Nv = (N1 + N2 + N3) / |N1 + N2 + N3|.
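The vertex-normal averaging and intensity interpolation steps can be sketched as follows (a minimal illustration with 3D vectors as tuples; the helper names are not from the notes):

```python
def average_normal(normals):
    """Vertex normal: normalized sum of the normals of the polygons
    sharing the vertex, Nv = (N1 + ... + Nk) / |N1 + ... + Nk|."""
    s = [sum(n[i] for n in normals) for i in range(3)]
    mag = sum(c * c for c in s) ** 0.5
    return tuple(c / mag for c in s)

def lerp_intensity(Ia, Ib, t):
    """Linear interpolation of vertex intensities along an edge or scan line."""
    return Ia + t * (Ib - Ia)

print(average_normal([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))
print(lerp_intensity(0.0, 1.0, 0.25))  # -> 0.25
```

In a scan-line implementation the edge intensities are interpolated first, then each span between edges is interpolated pixel by pixel with the same lerp.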

Phong Shading:
It is also called normal-vector interpolation shading.
Here we interpolate the normal vector rather than the intensity. It
proceeds as follows:
1. Determine the average unit normal vector at each
polygon vertex.
2. Linearly interpolate the vertex normals over the
surface of the polygon.
3. Apply an illumination model along each scan
line to determine the projected pixel intensities for the
surface points.
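The difference from Gouraud shading can be sketched in one function: interpolate the normal, renormalize it, and only then evaluate the illumination model (a diffuse term is used here for brevity; names are illustrative):

```python
def phong_shade_point(Na, Nb, t, light_dir, kd):
    """Interpolate vertex normals Na, Nb at parameter t, renormalize,
    then apply a diffuse illumination term kd * max(N.L, 0)."""
    N = [na + t * (nb - na) for na, nb in zip(Na, Nb)]
    mag = sum(c * c for c in N) ** 0.5
    N = [c / mag for c in N]          # renormalize: lerp shortens the vector
    ndl = max(sum(n * l for n, l in zip(N, light_dir)), 0.0)
    return kd * ndl

# Halfway between an upward and a sideways normal, lit from above:
print(phong_shade_point((0, 0, 1), (1, 0, 0), 0.5, (0, 0, 1), 1.0))
```

Because the illumination model is evaluated per pixel, Phong shading captures highlights that fall inside a polygon, which Gouraud shading smears or misses.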

Halftone Shading:
Many display devices can produce only two intensity levels.
 In such cases we can create an apparent increase in the number of available intensities.
This is achieved by incorporating multiple pixel positions into the display for each intensity
value.
 When we view a small area from a large distance, our eyes average the fine details
within the small area and record only the overall intensity of the area.
This phenomenon of apparently increasing the number of available intensities by considering
the combined intensity of multiple pixels is known as halftoning.
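A small sketch of the idea: with 2x2 pixel patterns, a two-level display can show five apparent intensity levels (0 to 4 pixels turned on). The particular patterns below are illustrative, not from the notes:

```python
# 2x2 pixel patterns giving five apparent intensity levels (0..4 pixels on)
PATTERNS = {
    0: [[0, 0], [0, 0]],
    1: [[0, 0], [1, 0]],
    2: [[1, 0], [0, 1]],
    3: [[1, 1], [0, 1]],
    4: [[1, 1], [1, 1]],
}

def halftone(level):
    """Map an intensity level 0..4 to a 2x2 block of on/off pixels."""
    return PATTERNS[level]

print(halftone(2))  # -> [[1, 0], [0, 1]] : a checkerboard, half the pixels on
```

In general, an n x n pattern gives n² + 1 apparent intensity levels at the cost of reducing the effective spatial resolution by a factor of n in each direction.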
ANIMATION:
Animation literally means "giving life to". It generally refers to any time sequence of visual
changes in a scene. The changes in the scene are made by transformations (translation,
scaling, rotation). One application of animation is entertainment, such as cartoons.
We can also produce animation by changing lighting effects or other parameters and procedures
associated with illumination and rendering.
PRODUCTION TECHNIQUE:
The overall animation of an entire project is referred to as a production. A production is broken
into major parts referred to as sequences. A sequence is usually identified by an associated
staging area. A production usually consists of one to a dozen sequences.


Each sequence is broken down into one or more shots. A shot is a continuous camera
recording. Each shot is broken down into the individual frames of film. A frame is a single
recorded image.

Animation is a trial-and-error process that involves feedback from one step to
previous steps.
Straight Ahead: proceeding from a starting point and developing the motion continuously.
Pose to Pose: key frames are identified and the intermediate frames are interpolated.
Story Board: consists of the key frames; these outline frames are produced by the master
animator.
Inbetweening: producing the intermediate frames between key frames. It is done by assistant
animators.
Model Sheet: consists of a number of drawings of each figure in various poses.
Exposure Sheet: records information for each frame, such as the sound track and camera moves.
Story Reel: may be produced, in which the story-board frames are recorded.
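The inbetweening step above can be sketched as linear interpolation between two key-frame poses. The pose representation here (a list of 2D points) is illustrative, not from the notes:

```python
def inbetween(key_a, key_b, num_frames):
    """Generate num_frames intermediate frames between two key frames by
    linearly interpolating each corresponding point."""
    frames = []
    for f in range(1, num_frames + 1):
        t = f / (num_frames + 1)       # parameter strictly between 0 and 1
        frames.append([(ax + t * (bx - ax), ay + t * (by - ay))
                       for (ax, ay), (bx, by) in zip(key_a, key_b)])
    return frames

mids = inbetween([(0, 0), (10, 0)], [(4, 4), (10, 8)], num_frames=3)
print(mids[1])  # middle frame, halfway between the two key poses
```

Production systems replace the linear ramp with easing curves (slow-in/slow-out splines) so that motion accelerates and decelerates naturally, but the interpolation structure is the same.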
Types of Animation:
1. Conventional Animation:
First, a script of the story is written. Then a series of pictures of the important moments
of the story is drawn; this is called the story board. Once the story board is created, the
actual animation starts. The final animation is achieved by filling the gaps between adjacent
key frames.
2. Computer-Assisted Animation:
Many stages of conventional animation seem ideally suited to computer assistance,
especially inbetweening, and coloring, which can be done by a seed-filling algorithm. Before
the computer can be used, the drawings must be digitized. This can be done by optical
scanning, by tracing, or by drawing directly with a data tablet.
By placing several low-resolution frames of an animation in a rectangular array, the
equivalent of a pencil test can be generated using the pan-zoom feature available in some
frame buffers.

Types of Animation Systems


1. Scripting Systems:
These were the earliest type of motion-control systems and are not interactive: the animator
writes a script in an animation language, which the user must learn. One scripting system is
ASAS (Actor Script Animation Language), which has a syntax similar to LISP. ASAS
introduced the concept of an actor, i.e., a complex object which has its own animation rules.
For example, in animating a bicycle, the wheels will rotate in their own coordinate system
and the animator doesn't

have to worry about this detail. Actors can communicate with other actors by sending
messages and so can synchronize their movements. This is similar to the behavior of
objects in object-oriented languages.
2. Procedural Animation:
Procedures are used that define movement over time. These might be procedures that use
the laws of physics (physically based modeling) or animator-generated methods. An
example is a motion that is the result of some other action (this is called a "secondary
action"), for example throwing a ball which hits another object and causes the second
object to move.
3. Representational Animation:
This technique allows an object to change its shape during the animation. There are three
subcategories to this. The first is the animation of articulated objects, i.e., complex objects
composed of connected rigid segments. The second is soft object animation used for
deforming and animating the deformation of objects, e.g. skin over a body or facial
muscles. The third is morphing which is the changing of one shape into another quite
different shape. This can be done in two or three dimensions.
4. Stochastic Animation:
This uses stochastic processes to control groups of objects, such as in particle systems.
Examples are fireworks, fire, waterfalls, etc.
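A minimal particle-system sketch of stochastic control, assuming random initial velocities and simple ballistic motion under gravity (all names and parameter values here are illustrative):

```python
import random

def simulate_fireworks(n_particles, steps, dt=0.1, gravity=-9.8, seed=1):
    """Stochastic particle burst: each particle gets a random initial
    velocity, then follows ballistic motion under gravity."""
    rng = random.Random(seed)
    particles = [{"pos": [0.0, 0.0],
                  "vel": [rng.uniform(-5, 5), rng.uniform(5, 15)]}
                 for _ in range(n_particles)]
    for _ in range(steps):
        for p in particles:
            p["vel"][1] += gravity * dt          # gravity acts on y
            p["pos"][0] += p["vel"][0] * dt      # Euler integration step
            p["pos"][1] += p["vel"][1] * dt
    return particles

burst = simulate_fireworks(n_particles=50, steps=20)
print(len(burst))  # -> 50
```

The animator controls only the distributions (velocity range, particle count, lifetime), not individual particles; the stochastic process generates the detailed motion.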
5. Behavioral Animation:
Objects or "actors" are given rules about how they react to their environment. Examples
are schools of fish or flocks of birds where each individual behaves according to a set of
rules defined by the animator.

MORPHING
Morphing is a familiar technique for producing special effects in images and videos, and it is
common in the entertainment industry. Morphing is widely used in movies, animations,
games, etc. Beyond entertainment, morphing can be used in computer-based training,
electronic book illustrations, presentations, education, etc. Morphing software is widely
available on the internet.
The animation industry is looking for advanced technology to produce special effects in its
movies; a growing audience is no longer satisfied with movies that use only simple
animation. Here lies the significance of morphing.
The word "morphing" comes from "metamorphosis", which means a change of shape,
appearance or form. Morphing is done by coupling image warping with color interpolation.
Morphing is the process in which the source image is gradually distorted and vanishes while
the target image is produced. So the earlier images in the sequence are similar to the source image and the last


images are similar to the target image. The middle image of the sequence is the average of the
source image and the target image.
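The color-interpolation half of morphing is a cross-dissolve, I(t) = (1 - t)·source + t·target per pixel. A minimal sketch (the geometric warping step is omitted; grayscale images as lists of lists are an illustrative representation):

```python
def cross_dissolve(src, dst, t):
    """Blend two equal-sized grayscale images: (1-t)*src + t*dst.
    Real morphing also warps the geometry; only the color
    interpolation is shown here."""
    return [[(1 - t) * s + t * d for s, d in zip(srow, drow)]
            for srow, drow in zip(src, dst)]

src = [[0, 0], [0, 0]]
dst = [[100, 100], [100, 100]]
print(cross_dissolve(src, dst, 0.5))  # -> [[50.0, 50.0], [50.0, 50.0]]
```

At t = 0.5 this produces exactly the "average of the source image and the target image" described above; a full morph warps both images toward a common shape before blending, so features line up instead of ghosting.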
Introduction to Virtual Reality and Augmented Reality:
Virtual Reality: Today, virtual reality (VR) technology is applied in advanced fields of medicine,
engineering, education, design, training, and entertainment. VR is a computer interface
which tries to mimic the real world beyond the flat monitor to give an immersive 3D (three-
dimensional) visual experience. It is often hard to judge the scales and distances
between objects in static 2D images, so the third dimension helps bring depth to
objects.
Virtual reality (VR) is a computer-generated scenario that simulates experience
through senses and perception. The immersive environment can be similar to the real world
or it can be fantastical, creating an experience not possible in our physical
reality. Augmented reality systems may also be considered a form of VR that layers virtual
information over a live camera feed into a headset or through a smartphone or tablet device
giving the user the ability to view three-dimensional images.
Current VR technology most commonly uses virtual reality headsets or multi-
projected environments, sometimes in combination with physical environments or props, to
generate realistic images, sounds and other sensations that simulate a user's physical
presence in a virtual or imaginary environment. A person using virtual reality equipment is
able to "look around" the artificial world, move around in it, and interact with virtual
features or items. The effect is commonly created by VR headsets consisting of a head-
mounted display with a small screen in front of the eyes, but can also be created through
specially designed rooms with multiple large screens.
VR systems that include transmission of vibrations and other sensations to the user
through a game controller or other devices are known as haptic systems. This tactile
information is generally known as force feedback in medical, video gaming and military
training applications.
Augmented Reality: Augmented Reality (AR) is a general term for a collection of technologies
used to blend computer generated information with the viewer’s natural senses. A simple
example of AR is using a spatial display (digital projector) to augment a real world object (a
wall) for a presentation. As you can see, it’s not a new idea, but a real revolution has come
with advances in mobile personal computing such as tablets and smartphones.
Since mobile ‘smart’ devices have become ubiquitous, ‘Augmented Reality
Browsers’ have been developed to run on them. AR browsers utilise the device’s sensors
(camera input, GPS, compass, et al) and superimpose useful information in a layer on top of
the image from the camera which, in turn, is viewed on the device’s screen.
AR Browsers can retrieve and display graphics, 3D objects, text, audio, video, etc.,
and use geospatial or visual ‘triggers’ (typically images, QR codes, point cloud data) in the
environment to initiate the display.
AR is being used in an increasing variety of ways, from providing point-of-sale
information to shoppers, tourist information on landmarks, computer enhancement of
traditional printed media, service information for on-site engineers; the number of
applications is huge.
There are a number of different development platforms on the market. The main
mobile device platforms are Junaio (now withdrawn), Aurasma and Layar. There’s also a

plethora of different apps with novel ideas for AR applications if you search for them with
your app provider, from interactive museum displays to overlaying medical
information over a patient.
There are some exciting emergent display technologies which are nearing
commercialization. A series of ‘Head Up Display’ (HUD) devices are coming to market
which will provide a ‘hands-free’ projection of AR information via devices integrated in
spectacle-like screens – examples include Google’s Project Glass, which is now in beta
release to selected users in the States.
Time will tell the level of adoption such hardware will reach, since the wearer looks
somewhat unusual wearing the device, but it won’t be long before they become more
discreet – the ‘bionic contact lens’ is already in development.

Virtual Reality vs. Augmented Reality

What is Virtual Reality?


Virtual reality (VR) is an artificial, computer-generated simulation or recreation of a real
life environment or situation. It immerses the user by making them feel like they are experiencing
the simulated reality firsthand, primarily by stimulating their vision and hearing.
VR is typically achieved by wearing a headset like Facebook’s Oculus equipped with the
technology, and is used prominently in two different ways:
 To create and enhance an imaginary reality for gaming, entertainment, and play (Such as
video and computer games, or 3D movies, head mounted display).
 To enhance training for real life environments by creating a simulation of reality where
people can practice beforehand (Such as flight simulators for pilots).
Virtual reality is possible through a coding language known as VRML (Virtual Reality
Modeling Language) which can be used to create a series of images, and specify what types of
interactions are possible for them.
What is Augmented Reality?
Augmented reality (AR) is a technology that layers computer-generated enhancements atop
an existing reality in order to make it more meaningful through the ability to interact with it. AR is
developed into apps and used on mobile devices to blend digital components into the real world in
such a way that they enhance one another, but can also be told apart easily.
AR technology is quickly coming into the mainstream. It is used to display score overlays
on telecasted sports games and pop out 3D emails, photos or text messages on mobile devices.
Leaders of the tech industry are also using AR to do amazing and revolutionary things with
holograms and motion activated commands.
Augmented Reality vs. Virtual Reality


Augmented reality and virtual reality are inverse reflections of one another in what
each technology seeks to accomplish and deliver for the user. Virtual reality offers a digital
recreation of a real-life setting, while augmented reality delivers virtual elements as an overlay to
the real world.
How are Virtual Reality and Augmented Reality Similar?
Technology
Augmented and virtual realities both leverage some of the same types of technology, and
they each exist to serve the user with an enhanced or enriched experience.
Entertainment
Both technologies enable experiences that are becoming more commonly expected and
sought after for entertainment purposes. While in the past they seemed merely a figment of a
science fiction imagination, new artificial worlds come to life under the user’s control, and deeper
layers of interaction with the real world are also achievable. Leading tech moguls are investing and
developing new adaptations, improvements, and releasing more and more products and apps that
support these technologies for the increasingly savvy users.
Science and Medicine
Additionally, both virtual and augmented realities have great potential to change the
landscape of the medical field by making things such as remote surgery a real possibility. These
technologies have already been used to treat and heal psychological conditions such as Post
Traumatic Stress Disorder (PTSD).
How do Augmented and Virtual Realities Differ?
Purpose
Augmented reality enhances experiences by adding virtual components such as digital
images, graphics, or sensations as a new layer of interaction with the real world. Contrastingly,
virtual reality creates its own reality that is completely computer generated and driven.
Delivery Method
Virtual reality is usually delivered to the user through a head-mounted display or a hand-held
controller. This equipment connects people to the virtual reality and allows them to control and
navigate their actions in an environment meant to simulate the real world.

Augmented reality is being used more and more in mobile devices such as laptops, smartphones,
and tablets to change how the real world and digital images and graphics intersect and interact.
How do they work together?
It is not always virtual reality vs. augmented reality; the two do not always operate
independently of one another, and in fact are often blended together to generate an even more
immersive experience. For example, haptic feedback (the vibration and sensation added to
interaction with graphics) is considered an augmentation. However, it is commonly used within a
virtual reality setting in order to make the experience more lifelike through touch.
Virtual reality and augmented reality are great examples of experiences and interactions
fueled by the desire to become immersed in a simulated world for entertainment and play, or to
add a new dimension of interaction between digital devices and the real world. Alone or blended
together, they are undoubtedly opening up worlds, both real and virtual alike.
