
Graphics 2018 part 2

1. Explain translation:
In computer graphics, transformation of the coordinates consists of three major processes:

 Translation
 Rotation
 Scaling
A translation process moves every point a constant distance in a specified direction. It can be
described as a rigid motion. A translation can also be interpreted as the addition of a constant
vector to every point, or as shifting the origin of the coordinate system.
Suppose a point (X, Y) is to be translated by amounts Dx and Dy to a new location (X', Y'). The new
coordinates are obtained by adding Dx to X and Dy to Y:
X' = X + Dx
Y' = Y + Dy

or P' = T + P, where

P' = (X', Y'), T = (Dx, Dy), P = (X, Y)

Here, P(X, Y) is the original point, T(Dx, Dy) is the translation factor, i.e. the amount by which the
point will be translated, and P'(X', Y') gives the coordinates of point P after translation.
Examples:

Input : P[] = {5, 6}, T = {1, 1}
Output : P'[] = {6, 7}

Input : P[] = {8, 6}, T = {-1, -1}
Output : P'[] = {7, 5}
To translate an object, we simply translate each of its points. Translations of some basic objects can
be drawn as follows:

1. Point Translation P(X, Y): Here we translate the x and y coordinates of the given point by the
given translation factors dx and dy respectively.
2. Line Translation: To translate a line, translate both of its endpoints by the given translation
factor (dx, dy), then draw a new line with an inbuilt graphics function.
3. Rectangle Translation: Here we translate the x and y coordinates of both given points, A (top
left) and B (bottom right), by the given translation factors dx and dy respectively, then draw the
rectangle with an inbuilt graphics function.
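The point, line, and rectangle translations above can be sketched in a few lines of Python (the function names are illustrative, not part of any standard graphics package):

```python
def translate_point(p, t):
    """Translate point p = (x, y) by translation factor t = (dx, dy)."""
    return (p[0] + t[0], p[1] + t[1])

def translate_line(p1, p2, t):
    """Translate a line by translating both of its endpoints."""
    return translate_point(p1, t), translate_point(p2, t)

def translate_rectangle(top_left, bottom_right, t):
    """Translate a rectangle by translating its two defining corners."""
    return translate_point(top_left, t), translate_point(bottom_right, t)

print(translate_point((5, 6), (1, 1)))    # (6, 7), matching the example above
print(translate_point((8, 6), (-1, -1)))  # (7, 5)
```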

2. What is grid?
A grid is a structure of intersecting lines or bars. It is used as a guide to divide and organize a
space. Usually, a grid is two-dimensional, and the lines are evenly spaced, intersecting at right
angles.
Grids are useful for organizing and ordering information. When information is organized in a
grid structure, any unit of information is located by specifying two of the intersecting lines.
In maps
Often, maps are overlaid with a grid to help define locations. On a world map, latitude grid lines
are oriented parallel to the equator, and longitude grid lines are oriented perpendicular to the
equator. By specifying a latitude and a longitude, you can uniquely identify any point on the
map.
In spreadsheets
In a spreadsheet, information is organized in rows and columns, forming a grid. Individual
locations on the grid are called cells. Any cell in the spreadsheet is uniquely identified by its
column id (usually a letter) and row id (usually a number). For example, the cell located at the
intersection of column C and row 13 is identified as cell "C13."
In graphic design
Many graphic design applications provide a grid displayed over or under the contents of an
image. The grid helps the artist to visually align graphic elements. Some programs provide the
option to snap elements to points on the grid. When an artist using the snap feature moves an
item near a point on the grid, the element "snaps" (moves automatically) to that precise
location.
In web design, CSS Grid is a technology for defining the layout of a web
page with CSS (cascading style sheets). It provides attributes and logic for organizing visual
elements in two dimensions on the web page.

3. Clipping and its different types:


Clipping: Any procedure that identifies those portions of a picture that are either inside or
outside of a specified region of space is referred to as a clipping algorithm or simply clipping.
The region against which an object is to be clipped is called a clip window. Clipping is the
process to identify the picture either inside or outside of the displaying area.
Applications of clipping include extracting part of a defined scene for viewing, identifying visible
surfaces in three-dimensional views, clipping object boundaries, creating objects using solid-modelling
procedures, displaying a multi-window environment, and drawing and painting operations that allow
parts of a picture to be selected for copying, moving, or erasing. Depending on the application, the clip
window can be a general polygon or can have curved boundaries.

For the viewing transformation, we want to display only those picture parts that are within the
window area; everything outside the window is discarded. Clipping algorithms can be applied in
world coordinates, so that only the contents of the window interior are mapped to device
coordinates.
Types of Clipping :
These are the main types of clipping

1. Point clipping.
2. Line clipping (straight line segment).
3. Area clipping (polygons) or curve clipping.
4. Text clipping.

Line and polygon clipping routines are standard components of the graphics package, but
many packages accommodate curved objects.

1. Point Clipping: Assuming that the clip window is a rectangle in standard position, we save a
point P = (x, y) for display if the following conditions are satisfied:

Xwmin ≤ x ≤ Xwmax , Ywmin ≤ y ≤ Ywmax

where the edges of the clip window (Xwmin, Xwmax, Ywmin, Ywmax) can be either the
world-coordinate window boundaries or viewport boundaries. If any one of these four
conditions is not satisfied, the point is clipped.
For example, point clipping can be applied to scenes involving explosions that are modelled
with particles (points) distributed in some region of the scene.
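The point-clipping condition translates directly into code:

```python
def clip_point(x, y, xw_min, xw_max, yw_min, yw_max):
    """Return True if point (x, y) lies inside the rectangular clip window,
    i.e. satisfies Xwmin <= x <= Xwmax and Ywmin <= y <= Ywmax."""
    return xw_min <= x <= xw_max and yw_min <= y <= yw_max

print(clip_point(5, 5, 0, 10, 0, 10))   # True: point is saved
print(clip_point(11, 5, 0, 10, 0, 10))  # False: point is clipped
```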

2. Line Clipping: A line-clipping procedure involves several parts. First, we test a given line
segment to determine whether it lies completely inside the clipping window; if it does not, we try
to determine whether it lies completely outside the window.

Finally, if we cannot identify a line segment as completely inside or completely outside we must
perform intersection calculations with one or more clipping boundaries.

A line with both endpoints inside all clipping boundaries, such as the line P1 to P2, is saved. A
line with both endpoints outside any one of the clip boundaries (line P3 P4) is outside the
window.
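One common way to implement these inside/outside tests is with region outcodes, as in the Cohen–Sutherland approach; a minimal sketch of the trivial accept/reject logic (the intersection calculations for the "clip" case are omitted):

```python
# Each bit of an outcode flags one window boundary the point lies beyond.
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y, xmin, xmax, ymin, ymax):
    code = 0
    if x < xmin:   code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin:   code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def trivial_test(p1, p2, xmin, xmax, ymin, ymax):
    """Return 'accept', 'reject', or 'clip' for a line segment."""
    c1 = outcode(*p1, xmin, xmax, ymin, ymax)
    c2 = outcode(*p2, xmin, xmax, ymin, ymax)
    if c1 == 0 and c2 == 0:
        return "accept"   # both endpoints inside: save the line
    if c1 & c2:
        return "reject"   # both outside the same boundary: discard the line
    return "clip"         # intersection calculations are needed
```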

3. Area Clipping (Polygons) or Curve Clipping: Areas with curved boundaries can be clipped
with methods similar to point clipping and line clipping. Curve-clipping procedures involve
non-linear equations, and this requires more processing than for objects with linear boundaries.

The bounding rectangle for a circle or other curved object can be used first to test for overlap
with a rectangular clip window. If the bounding rectangle for the object is completely inside the
window, we save the object. If the rectangle is determined to be completely outside the
window, we discard the object.

If the bounding-rectangle test fails, we can look for other computation-saving approaches. For a
circle, we can use the coordinate extents of individual quadrants, and then octants, for testing before
calculating curve–window intersections.

4. Text Clipping: There are several techniques that can be used to provide text clipping in a
graphics package. The clipping technique used depends on the method used to generate
characters and the requirements of a particular application.

The simplest method for processing a character string relative to a window boundary is the
all-or-none string-clipping strategy. If all of the string is inside the clip window, we keep it;
otherwise, the string is discarded. This procedure is implemented by considering a bounding
rectangle around the text pattern.
The boundary positions of the rectangle are then compared to the window boundaries, and the string
is rejected if there is any overlap.

An alternative to rejecting an entire character string that overlaps a window boundary is the
all-or-none character-clipping strategy. Here we discard only those characters that are not
completely inside the window.

A final method for handling text clipping is to clip the components of individual characters. We
then treat each character in the same way as a line.

4. Why perspective projection appears more realistic


Creating realistic 3D scenes with perspective projection is a fundamental skill in computer
graphics. Perspective projection is a technique that simulates how the human eye perceives
depth and distance in the real world. It makes objects appear smaller and closer together as
they get farther away from the viewer.
In perspective projection, the center of projection is at a finite distance from the projection
plane. This projection produces realistic views but does not preserve the relative proportions of an
object's dimensions.

Perspective projection is a type of 3D projection where three-dimensional objects are projected
onto a two-dimensional surface, such as a screen or a canvas. This projection simulates the
view from a real-world camera and makes distant objects appear smaller than nearer objects.
The lines of projection in perspective projection are not parallel; instead, they all converge at a
single point called the center of projection or projection reference point.

In contrast, an oblique projection of objects is considered less realistic than a perspective
projection, but oblique projections are useful in technical applications since the parallelism of
an object's lines and faces is preserved.
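The shrinking of distant objects can be seen in a minimal sketch, assuming the center of projection at the origin and the projection plane at z = d (a common textbook setup, not the only convention):

```python
def perspective_project(x, y, z, d):
    """Project a 3D point onto the plane z = d, with the center of
    projection at the origin. Dividing by z makes points farther from
    the viewer map closer to the center of the image."""
    return (x * d / z, y * d / z)

# Two points of the same size, one twice as far away:
print(perspective_project(2.0, 2.0, 2.0, 1.0))  # (1.0, 1.0)
print(perspective_project(2.0, 2.0, 4.0, 1.0))  # (0.5, 0.5): appears smaller
```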

5. Boundary Representations:
Objects are represented as a collection of surfaces. 3D object representation is divided into
two categories:
Boundary representations (B-reps) − describe a 3D object as a set of surfaces that separates
the object interior from the environment.
Space-partitioning representations − describe interior properties by partitioning the
spatial region containing an object into a set of small, non-overlapping, contiguous
solids, usually cubes.
The most commonly used boundary representation for a 3D graphics object is a set of surface
polygons that enclose the object interior. Many graphics systems use this method.

6. Raster animation:
In computer animation, raster graphics refers to animation frames made of pixels. Raster
animations can be generated using raster operations on raster systems. A sequence of raster
operations can be executed to produce real-time animation of either 2D or 3D objects. Objects
can be animated along 2D motion paths using color-table transformations.
The term "raster image" refers to the way the image is stored rather than how it is displayed.

7. Parameterized systems: allow object-motion characteristics (such as degrees of freedom,
motion limitations, and allowable shape changes) to be specified as part of the object
definitions.
8. Key-frame systems: are specialized animation languages designed simply to generate the
in-betweens from the user-specified key frames.

9. Scripting systems: allow object specifications and animation sequences to be defined with
a user-input script.

10. Explain the working of a) Data Glove b) Digitizer c) Touch Pen d) Light Pen

A data glove is an input device that captures physical data such as bending of the fingers and hand
movements. It is used to interface those movements with a computer, and is commonly used
in virtual reality environments, where the user sees an image of the data glove and can manipulate
the movements of the virtual environment using the glove. The user puts on the glove, then
sensors detect movement and transmit it to the computer. The glove is mostly used in combination
with a headset. As well as navigation and orientation in virtual space, gloves can also provide tactile
force feedback. The glove features various sensor technologies that capture physical
data such as the bending of fingers. Often a motion tracker, such as a magnetic or
inertial tracking device, is attached to capture the global position and rotation data of the glove. The
glove is calibrated by adjusting each sensor's zero position and amplitude: in the adjustment process,
the user makes a specific gesture in advance and gets a value matching the user's hand, to
ensure the performance stability of the data glove. Expensive high-end wired gloves can also
provide haptic feedback, which is a simulation of the sense of touch.

A digitizer is a machine that converts an analog object, image, or signal into a digital (i.e., computer-
readable) format. The process of transforming information into a digital (i.e., computer-readable)
form is digitization. The result represents an image, object, sound, signal, or document by generating
a series of numbers that describe a discrete set of points or samples.

Examples of digitizers

One example of a digitizer is a digital camera. Some other examples are:
1. Audio digitizer

Most computers have a microphone jack, where an analog microphone can be attached. The analog
input (audio signal) is processed either by a separate sound card or by audio hardware on the
motherboard itself. Code running on the machine can then use this data. Some audio digitizers are
inexpensive peripherals, while others are small hand-held devices that provide professional-
quality conversion. The microphone in a smartphone is another audio digitizer.
2. Tablet Computer
A tablet is a device operated with a finger or a digital pen, which is a type of stylus. Typically, a
tablet is bigger than a smartphone but smaller than a desktop display. Some tablets have a
touch-sensitive screen, while others are peripheral devices with no screen that connect to a
monitor.

By pressing the tablet, the user can paint, draw, or write. The analog touch input is translated by
software into lines or pressure-sensitive brush strokes. To convert handwritten text into
typewritten words, the software can also perform handwriting recognition. When used for graphics
work, these tablets are commonly referred to as graphics tablets.
3. Accelerometer and gyroscope

Digitizers in smartphones and tablets may detect how quickly the device is moving (an
accelerometer) as well as the angle at which it is kept (a gyroscope). Motion and angular rotation
data is transformed into data that your apps can use in real-time.

A smartphone, for example, may be held up to the sky to gain information about the location of
stars and planets using the gyroscope. When taking a photo, the accelerometer helps eliminate motion
blur, and it can also enable safety features if the unit is dropped.
4. Scanner

A scanner is a photographic device that gradually collects image data, usually from a stationary
original. A flatbed scanner takes a picture of a document or photograph by moving a sensor across
the document.

A motion picture film scanner digitizes motion picture film frames by advancing the film one frame at
a time, photographing the frames, and storing them as a digital image sequence. By scanning a
laser over a printed barcode, a barcode scanner collects binary data.

A touchscreen pen works by using a conductive tip that mimics the touch of a finger. When you
touch the screen with the pen, it creates an electrical connection, allowing the device to detect the
input. Touchscreen pens are commonly used for tasks that require precision, such as taking notes,
drawing, or navigating small on-screen elements.
There are different types of touchscreen pens available. Some pens have a capacitive tip, which
works on most modern touchscreens, while others use active technology and require a battery to
function. There are also pens with additional features like pressure sensitivity, customizable buttons,
and palm rejection.
While a touchscreen pen is not essential for using a touchscreen device, it can enhance your
experience, especially if you frequently engage in activities like drawing, note-taking, or precise
selections. It provides an alternative input method that can be more comfortable and precise than
using your fingers.

A light pen is a light-sensitive computer input device that can be used to select or draw on the
screen. It works by detecting changes in brightness of nearby screen pixels when scanned by a
cathode ray tube electron beam. It communicates the timing of this event to the computer, which
then calculates the position of the pen on the screen. A light pen consists of a photocell mounted on
a pen-shaped tube.
The term is also sometimes applied loosely to a stylus, a data-input peripheral for devices that use
touch screens, such as tablets, smartphones, and digitizing tablets.
A stylus looks similar to a traditional pen or pencil but is tipped with a rounded rubber
piece that glides easily across touch screens.
It is a tool preferred by many users to perform different tasks with great precision, such as taking
notes, signing documents, and making drawings or designs in digital format.
Types of Light Pen
There are two types of stylus, according to the surface for which they were created: passive stylus
and active stylus. Depending on their attributes, they can be used for navigation or graphical
purposes.

Passive stylus: Passive stylus pens work with capacitive screens, so they are usually used
exclusively for navigation and selection of items or applications.

Active stylus: An active stylus, also known as a "digital pen," is a tool that includes a built-in
battery and a circuit board attached to a tip with special contacts.

11. DDA Algorithm

The DDA (Digital Differential Analyzer) algorithm is a line-drawing algorithm used in computer
graphics to generate a line segment between two specified endpoints. It is an incremental
method that uses the incremental differences between the x-coordinates and y-coordinates
of the two endpoints to plot the line. Here are the steps to implement the DDA algorithm:

1. Declare x1, y1, x2, y2 as integer variables and dx, dy, x, y as real (floating-point) variables.
2. Enter the values of x1, y1, x2, y2.
3. Calculate dx = x2 - x1.
4. Calculate dy = y2 - y1.
5. If |dx| > |dy|, then step = |dx|, else step = |dy|.
6. Calculate dx = dx / step.
7. Calculate dy = dy / step.
8. Assign x = x1.
9. Assign y = y1.
10. Set pixel (x, y).
11. Repeat the following `step` times:
o x = x + dx.
o y = y + dy.
o Set pixel (round(x), round(y)).
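The steps above can be sketched directly in Python. Iterating a fixed number of steps is safer than comparing floating-point x against x2; here the plotted pixels are collected in a list instead of being drawn:

```python
def dda_line(x1, y1, x2, y2):
    """Rasterize a line with the DDA algorithm; returns the pixels plotted."""
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))        # the larger coordinate difference
    if steps == 0:
        return [(x1, y1)]                # degenerate case: a single point
    x_inc, y_inc = dx / steps, dy / steps
    x, y = float(x1), float(y1)
    pixels = [(round(x), round(y))]
    for _ in range(steps):               # one unit step along the major axis
        x += x_inc
        y += y_inc
        pixels.append((round(x), round(y)))
    return pixels

print(dda_line(0, 0, 3, 3))  # [(0, 0), (1, 1), (2, 2), (3, 3)]
```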

The DDA algorithm is faster than directly using the line equation and does not require
multiplication. It allows us to detect changes in the values of x and y, so the same point is not
plotted twice. It gives an overflow indication when a point is repositioned. However, rounding-off
operations and floating-point operations consume a lot of time. It is more suitable for generating
lines in software, but less suited for hardware implementation.

Advantage:

1. It is faster than the method of directly using the line equation.
2. This method does not use multiplication.
3. It allows us to detect changes in the values of x and y, so plotting the same point twice is not
possible.
4. This method gives an overflow indication when a point is repositioned.
5. It is an easy method because each step involves just two additions.

Disadvantage:

1. It involves floating-point additions and rounding off. Accumulation of round-off errors causes
an accumulation of error.
2. Rounding-off and floating-point operations consume a lot of time.
3. It is more suitable for generating lines in software, but less suited for hardware
implementation.
4. The resulting lines are not smooth because of the round() function.

15. Matrix representations for the basic transformations:

The basic transformations in computer graphics are translation, rotation, scaling, and reflection.
Using homogeneous coordinates, each can be expressed as a 3x3 matrix:

 Translation:
o The translation matrix is a 3x3 matrix whose upper-left 2x2 block is the identity
matrix and whose last column holds the translation vector.
 Rotation:
o The rotation matrix is a 3x3 matrix that rotates a point around the origin by a given
angle. The matrix is constructed using the cosine and sine of the angle of rotation.
 Scaling:
o The scaling matrix is a 3x3 matrix that scales a point by given factors along the x
and y axes. The matrix is constructed by placing the scaling factors along the diagonal.
 Reflection:
o The reflection matrix is a 3x3 matrix that reflects a point across a given axis. It is
constructed by negating the coordinate perpendicular to that axis.

The following table shows the matrix representation of each transformation:

Transformation Matrix Representation

Translation [1 0 tx; 0 1 ty; 0 0 1]

Rotation [cosθ -sinθ 0; sinθ cosθ 0; 0 0 1]

Scaling [sx 0 0; 0 sy 0; 0 0 1]

Reflection (about the x-axis) [1 0 0; 0 -1 0; 0 0 1]
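These 2D homogeneous-coordinate matrices can be checked with a small sketch in plain Python (the helper names are illustrative):

```python
import math

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a homogeneous 2D point [x, y, 1]."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def scaling(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def reflection_x():
    # Reflection about the x-axis: the y coordinate changes sign.
    return scaling(1, -1)

print(mat_vec(translation(5, -1), [2, 3, 1]))   # [7, 2, 1]
print(mat_vec(reflection_x(), [2, 3, 1]))       # [2, -3, 1]
```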

16. Window to view port transformation


Window to Viewport Transformation is the process of transforming 2D world-coordinate objects
to device coordinates. Objects inside the world or clipping window are mapped to the viewport
which is the area on the screen where world coordinates are mapped to be displayed.
 World coordinates – the Cartesian coordinates with respect to which we define the diagram, e.g.
Xwmin, Xwmax, Ywmin, Ywmax.
 Device coordinates – the screen coordinates where the objects are to be displayed, e.g.
Xvmin, Xvmax, Yvmin, Yvmax.
 Window – the area in world coordinates selected for display.
 Viewport – the area in device coordinates where the graphics are to be displayed.
Mathematical Calculation of Window to Viewport:
It may be possible that the size of the Viewport is much smaller or greater than the Window. In
these cases, we have to increase or decrease the size of the Window according to the Viewport
and for this, we need some mathematical calculations.
(xw, yw): A point on Window
(xv, yv): Corresponding point on Viewport
We have to calculate the point (xv, yv)

The relative position of the object must be the same in the window and in the viewport.

For the x coordinate:
(xv - xvmin) / (xvmax - xvmin) = (xw - xwmin) / (xwmax - xwmin)

For the y coordinate:
(yv - yvmin) / (yvmax - yvmin) = (yw - ywmin) / (ywmax - ywmin)

So, solving for the x and y coordinates, we get:

xv = xvmin + (xw - xwmin) * sx
yv = yvmin + (yw - ywmin) * sy

where sx = (xvmax - xvmin) / (xwmax - xwmin) is the scaling factor of the x coordinate and
sy = (yvmax - yvmin) / (ywmax - ywmin) is the scaling factor of the y coordinate.

Example: Let us assume,


 for window, Xwmin = 20, Xwmax = 80, Ywmin = 40, Ywmax = 80.
 for viewport, Xvmin = 30, Xvmax = 60, Yvmin = 40, Yvmax = 60.
 Now let a point (Xw, Yw) = (30, 80) on the window. We have to calculate that point on the
viewport, i.e. (Xv, Yv).
 First of all, calculate the scaling factor of the x coordinate Sx and the scaling factor of the y
coordinate Sy using the above-mentioned formulas:
Sx = (60 - 30) / (80 - 20) = 30 / 60
Sy = (60 - 40) / (80 - 40) = 20 / 40
 Now calculate the point on the viewport (Xv, Yv):
Xv = 30 + (30 - 20) * (30 / 60) = 35
Yv = 40 + (80 - 40) * (20 / 40) = 60
 So, the point (Xw, Yw) = (30, 80) on the window will be (Xv, Yv) = (35, 60) on the viewport.
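The mapping can be written as one short function; running it on the example's values reproduces (35, 60):

```python
def window_to_viewport(xw, yw, win, view):
    """Map a window point (xw, yw) to viewport coordinates.
    win and view are (xmin, xmax, ymin, ymax) tuples."""
    xw_min, xw_max, yw_min, yw_max = win
    xv_min, xv_max, yv_min, yv_max = view
    sx = (xv_max - xv_min) / (xw_max - xw_min)  # x scaling factor
    sy = (yv_max - yv_min) / (yw_max - yw_min)  # y scaling factor
    return (xv_min + (xw - xw_min) * sx,
            yv_min + (yw - yw_min) * sy)

# Window (20, 80, 40, 80), viewport (30, 60, 40, 60), point (30, 80):
print(window_to_viewport(30, 80, (20, 80, 40, 80), (30, 60, 40, 60)))
# (35.0, 60.0)
```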

17. Sweep representation

In computer graphics, a sweep representation is a technique used to construct 3D objects from 2D
shapes that have some kind of symmetry. It is created by specifying a 2D shape and a sweep that
moves the shape through a region of space. The sweep can be translational or rotational, and can
be used to create curved surfaces like an ellipsoid or a torus.

Sweep representations are used to construct three-dimensional objects from two-dimensional
shapes. There are two ways to achieve a sweep: translational sweep and rotational sweep. In a
translational sweep, the 2D shape is swept along a linear path normal to the plane of the area to
construct the three-dimensional object. To obtain the wireframe representation, we replicate the
2D shape and draw a set of connecting lines in the direction of the sweep.
In general, we can specify sweep constructions using any path. For translations, we can vary the
shape or size of the original 2D shape along the sweep path. For rotational sweeps, we can move
along a circular path through any angular distance from 0° to 360°. Sweeps whose generating
area or volume changes in size, shape, or orientation as they are swept, and that follow an arbitrary
curved trajectory, are called general sweeps. General sweeps are difficult to model efficiently; for
example, the trajectory and object shape may make the swept object intersect itself, making volume
calculations complicated. Furthermore, general sweeps do not always generate solids. For example,
sweeping a 2D shape in its own plane generates another 2D shape.
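A rotational sweep can be illustrated by sampling the rotation: each profile point traces a circle about the sweep axis. This is an illustrative point sampler under simple assumptions (profile in the xz-plane, sweep about the z-axis), not a full solid-modelling implementation:

```python
import math

def rotational_sweep(profile, n_steps):
    """Sweep a 2D profile (a list of (x, z) points in the xz-plane) around
    the z-axis, producing n_steps rings of 3D points on the swept surface."""
    surface = []
    for i in range(n_steps):
        theta = 2 * math.pi * i / n_steps
        c, s = math.cos(theta), math.sin(theta)
        # Rotate every profile point about the z-axis by theta.
        surface.append([(x * c, x * s, z) for (x, z) in profile])
    return surface

# A vertical segment at radius 1 sweeps into (samples of) a cylinder:
rings = rotational_sweep([(1.0, 0.0), (1.0, 1.0)], n_steps=8)
```

Connecting corresponding points of adjacent rings gives the wireframe described above.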

18. Compare CSG and Ray casting methods

In computer graphics, Constructive Solid Geometry (CSG) and Ray Casting are two different
methods used to create 3D models.

CSG is a technique used in solid modeling that allows a modeler to create a complex surface or
object by using Boolean operators to combine simpler objects. CSG can be performed on
polygonal meshes, and may or may not be procedural and/or parametric. The simplest solid objects
used for the representation are called geometric primitives, such as cuboids, cylinders, prisms,
pyramids, spheres, and cones. These primitives can be combined into compound objects using
operations like union, difference, and intersection. CSG is often used in procedural modeling.

Ray casting is a rendering technique that involves casting rays from the eye of the viewer into the
scene and calculating the color of each pixel based on the object that the ray hits first. Ray casting is
faster than ray tracing, as it is limited by one or more geometric constraints. Ray casting was the
most popular rendering tool in early 3D video games.

In summary, CSG is a technique used to create complex 3D models by combining simpler objects
using Boolean operators, while Ray Casting is a rendering technique that involves casting rays from
the viewer’s eye into the scene and calculating the color of the pixel based on the object that the ray
hits first.
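One simple way to illustrate CSG's Boolean operations is with signed distance functions, where union, intersection, and difference reduce to min/max combinations. This implicit-surface formulation is just one possible sketch, not the only way CSG is implemented:

```python
def sphere_sdf(cx, cy, cz, r):
    """Signed distance to a sphere: negative inside, positive outside."""
    def sdf(x, y, z):
        return ((x - cx)**2 + (y - cy)**2 + (z - cz)**2) ** 0.5 - r
    return sdf

# Boolean operators on signed distance functions:
def union(a, b):        return lambda x, y, z: min(a(x, y, z), b(x, y, z))
def intersection(a, b): return lambda x, y, z: max(a(x, y, z), b(x, y, z))
def difference(a, b):   return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

s1 = sphere_sdf(0.0, 0.0, 0.0, 1.0)
s2 = sphere_sdf(1.0, 0.0, 0.0, 1.0)
carved = difference(s1, s2)
# The point (0.5, 0, 0) lies inside both spheres, so s2 carves it away:
print(carved(0.5, 0.0, 0.0) > 0)   # True: outside the compound object
```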

19. Morphing

Morphing is a special effect in computer graphics and animation that transforms one image or shape
into another through a seamless transition. It was first used by Hollywood directors and special-
effects teams in the late 1980s and early 1990s, using powerful computers and pioneering software
such as Gryphon Software Morph and Image Master.

Morphing is mostly known as a visual-effects technique in the film industry. It is also used in other
mediums such as animation and 3D modelling. There are different approaches to morphing,
including direct morphing, morphing at max speed, and 3D morphing.

Direct morphing is used for simpler animations, such as transforming a simple line-drawn shape
into another shape. Morphing at max speed is often seen in films, where objects are morphed while
in motion. 3D morphing is used with 3D modelling to animate objects that don't have a skeletal
structure.

20. Different motion specifications:

In computer graphics, motion specifications refer to the parameters that define the movement of
objects in an animation system. There are several ways to specify motion, ranging from explicit to
abstract approaches. One of the most straightforward methods is to directly specify the motion
parameters.

Direct motion specification:

The most straightforward method for defining a motion sequence is direct specification of the motion
parameters. Here, we explicitly give the rotation angles and translation vectors, and the geometric
transformation matrices are then applied to transform coordinate positions. Alternatively, we could
use an approximating equation to specify certain kinds of motion. These methods can be used for
simple user-programmed animation sequences.

Goal-directed systems:

At the opposite extreme, we can specify the motions that are to take place in general terms that
abstractly describe the actions. These systems are referred to as goal-directed because they
determine specific motion parameters given the goals of the animation. For example, we could
specify that we want an object to "walk" or to "run" to a particular destination, or we could state that
we want an object to "pick up" some other specified object. The input directives are then interpreted
in terms of component motions that will accomplish the selected task. Human motion, for instance,
can be defined as a hierarchical structure of submotions for the torso, limbs, and so forth.

Kinematics and dynamics:

We can also construct animation sequences using kinematic or dynamic descriptions. With a
kinematic description, we specify the animation by giving motion parameters (position, velocity, and
acceleration) without reference to the forces that cause the motion. For constant velocity (zero
acceleration), we designate the motions of rigid bodies in a scene by giving an initial position and
velocity vector for each object.

An alternative approach is to use inverse kinematics. Here, we specify the initial and final positions of
objects at specified times, and the motion parameters are computed by the system. For example,
assuming zero acceleration, we can determine the constant velocity that will accomplish the
movement of an object from the initial position to the final position.
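For the zero-acceleration case just described, the required constant velocity is simply the displacement divided by the elapsed time; a minimal sketch with hypothetical positions and times:

```python
def constant_velocity(p0, p1, t0, t1):
    """Velocity vector that moves an object from position p0 at time t0
    to position p1 at time t1, assuming zero acceleration."""
    dt = t1 - t0
    return tuple((b - a) / dt for a, b in zip(p0, p1))

# Move from (0, 0, 0) to (10, 5, 0) over 2 seconds:
print(constant_velocity((0, 0, 0), (10, 5, 0), 0.0, 2.0))  # (5.0, 2.5, 0.0)
```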

Dynamic descriptions, on the other hand, require the specification of the forces that produce the
velocities and accelerations. Descriptions of object behavior under the influence of forces are
generally referred to as physically based modeling. Examples of forces affecting object motion
include electromagnetic, gravitational, friction, and other mechanical forces.

Object motions are obtained from the force equations describing physical laws, such as Newton's
laws of motion for gravitational and friction processes, the Euler or Navier–Stokes equations
describing fluid flow, and Maxwell's equations for electromagnetic forces. For example, the general
form of Newton's second law for a particle of mass m is
F = d(mv)/dt

with F as the force vector and v as the velocity vector. If mass is constant, we solve the equation
F = ma, where a is the acceleration vector. Otherwise, mass is a function of time, as in relativistic
motions of space vehicles that consume measurable amounts of fuel per unit time. We can also use
inverse dynamics to obtain the forces, given the initial and final positions of objects and the type of
motion.

Applications of physically based modeling include complex rigid-body systems and such non-rigid
systems as cloth and plastic materials. Typically, numerical methods are used to obtain the motion
parameters incrementally from the dynamical equations, using initial conditions or boundary values.

21. Working of a Cathode Ray Tube

The working of a cathode ray tube (CRT) is based on the following principle: a beam of electrons,
emitted by an electron gun, strikes a phosphorescent surface and creates images on the screen. The
electron beam is modulated, accelerated, and deflected by focusing and deflection systems that
direct the beam towards a specific position on the screen. CRT is the technology used in traditional
computer monitors and televisions.
The electron gun generates an electron beam of high intensity when it is connected to a high voltage.
When the electron beam emerges from the electron gun, it passes through a pair of electrostatic and
magnetic deflection coils on the neck of the tube. The electron beam is then directed towards a
specific position on the screen by the deflection system. The beam is modulated by the video signal,
which varies the intensity of the beam, and is accelerated by the high voltage applied to the tube.
The phosphorescent surface on the screen glows when the electron beam strikes it, creating images
on the screen.

CRT stands for Cathode Ray Tube. The image on a CRT display is created by firing electrons from the
back of the tube toward phosphors coated on the front of the screen.

When the electrons strike the phosphors, they light up, and the glowing image appears on the screen.
The color you see on the screen is produced by a blend of red, green, and blue light.

Components of CRT:

Main Components of CRT are:

1. Electron Gun: The electron gun consists of a series of elements, primarily a heating filament (heater)
and a cathode. It creates a source of electrons that are focused into a narrow beam
directed at the face of the CRT.

2. Control Electrode: It is used to turn the electron beam on and off.

3. Focusing system: It is used to create a clear picture by focusing the electrons into a narrow beam.

4. Deflection Yoke: It is used to control the direction of the electron beam. It creates an electric or
magnetic field that bends the electron beam as it passes through the area. In a conventional CRT,
the yoke is linked to a sweep or scan generator; driven by the sweep generator, the yoke produces a
fluctuating electric or magnetic field.

5. Phosphor-coated screen: The inside front surface of every CRT is coated with phosphors, which
glow when a high-energy electron beam hits them. Phosphorescence is the term used to
characterize the light given off by a phosphor after it has been exposed to an electron beam.

23. Sutherland-Hodgman polygon clipping

The algorithm processes the boundary of the polygon against each window edge in turn. First, the
entire polygon is clipped against one edge; the resulting polygon is then clipped against the second
edge, and so on for all four edges.
Four possible situations arise while processing each polygon edge (from the first vertex to the second):

1. If the first vertex is outside the window and the second vertex is inside, the point of intersection
of the polygon edge with the window boundary is added to the output list, followed by the
second vertex.
2. If both vertices are inside the window boundary, only the second vertex is added to the output
list.
3. If the first vertex is inside the window and the second is outside, only the point of intersection
of the edge with the window boundary is added to the output list.
4. If both vertices are outside the window, nothing is added to the output list.

Disadvantage of the Sutherland-Hodgman algorithm:

This method requires a considerable amount of memory. First, the polygon is stored in its original
form. It is then clipped against the left edge and the output is stored; clipping against the right, top,
and bottom edges follows in turn. The results of all these operations are stored in memory, so
memory is wasted on storing the intermediate polygons.
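The four cases above can be sketched as follows. This is a minimal Python implementation that clips a polygon against each edge of an axis-aligned window in turn; the function names are illustrative.

```python
def clip_polygon(polygon, xmin, ymin, xmax, ymax):
    """Sutherland-Hodgman: clip a polygon (list of (x, y) vertices)
    against an axis-aligned window, one window edge at a time."""

    def inside(p, edge):
        x, y = p
        return {"left":   x >= xmin, "right": x <= xmax,
                "bottom": y >= ymin, "top":   y <= ymax}[edge]

    def intersect(p1, p2, edge):
        # Intersection of segment p1-p2 with the given window edge.
        (x1, y1), (x2, y2) = p1, p2
        if edge in ("left", "right"):
            x = xmin if edge == "left" else xmax
            t = (x - x1) / (x2 - x1)
            return (x, y1 + t * (y2 - y1))
        y = ymin if edge == "bottom" else ymax
        t = (y - y1) / (y2 - y1)
        return (x1 + t * (x2 - x1), y)

    output = list(polygon)
    for edge in ("left", "right", "bottom", "top"):
        input_list, output = output, []
        if not input_list:
            break                          # polygon fully clipped away
        s = input_list[-1]                 # previous vertex
        for p in input_list:               # current vertex
            if inside(p, edge):
                if not inside(s, edge):    # case 1: out -> in
                    output.append(intersect(s, p, edge))
                output.append(p)           # case 2 (and end of case 1)
            elif inside(s, edge):          # case 3: in -> out
                output.append(intersect(s, p, edge))
            # case 4: out -> out, add nothing
            s = p
    return output

# Clip a triangle that straddles the unit window [0,1] x [0,1]
print(clip_polygon([(-0.5, 0.5), (0.5, 0.5), (0.5, 1.5)], 0, 0, 1, 1))
```

Note how the memory cost described above shows up directly: `input_list` holds the intermediate polygon from the previous edge while `output` accumulates the next one.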
24. Octree: An octree is a tree data structure in which each internal node can have at most 8 children.
Just as a binary tree divides space into two segments, an octree divides space into at most
eight parts, called octants. It is used to store 3-D points, which take a large amount
of space. If every internal node of the octree contains exactly 8 children, it is called a full
octree. Octrees are also useful for high-resolution graphics, such as 3-D computer graphics.
An octree can be formed from a 3-D volume by the following steps:
1. Divide the current 3-D volume into eight boxes
2. If any box contains more than one point, divide it further into eight boxes
3. Do not divide a box that contains one or zero points
4. Repeat this process until every box contains one or zero points
The above steps are shown in the figure.

Three types of nodes are used in an octree:


1. Point node: Used to represent a point. It is always a leaf node.
2. Empty node: Used as a leaf node to represent that no point exists in the region it covers.
3. Region node: This is always an internal node. It is used to represent a 3-D region (a cuboid).
A region node always has 8 child nodes, each of which can be a point node, an empty node,
or another region node.

Insertion in an octree:

To insert a point into an octree, we first check whether the point already exists; if it does, we return,
otherwise we proceed recursively:
 Start with the root node and mark it as the current node
 Find the child node in whose region the point can be stored
 If that node is an empty node, replace it with a point node holding the point, making it a leaf
node
 If that node is a point node, convert it into an internal (region) node and reinsert both points;
if it is an internal node, descend into the child node. This process is performed recursively
until an empty node is found

Search in an octree:

This function checks whether a point exists in the tree:
 Start with the root node and search recursively; if a node with the given point is found, return
true; if an empty node is encountered, or the point lies outside the tree's boundary, return false
 If an internal node is found, descend into it. The time complexity of this function is O(log N),
where N is the number of nodes
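The insertion and search procedures above can be sketched as follows. This is a minimal point-octree in Python; the class layout, the representation of empty nodes as `None`, and the fixed cube bounds are illustrative assumptions.

```python
# Minimal point-octree sketch. Each region node covers a cube and has 8
# child slots; a slot is None (empty node), an (x, y, z) tuple (point
# node), or another Octree (region node).

class Octree:
    def __init__(self, x0, y0, z0, x1, y1, z1):
        self.bounds = (x0, y0, z0, x1, y1, z1)
        self.children = [None] * 8        # all eight octants start empty

    def _octant(self, p):
        """Octant index of point p, plus that octant's bounds."""
        x0, y0, z0, x1, y1, z1 = self.bounds
        mx, my, mz = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
        i = (p[0] >= mx) | ((p[1] >= my) << 1) | ((p[2] >= mz) << 2)
        bx = (mx, x1) if p[0] >= mx else (x0, mx)
        by = (my, y1) if p[1] >= my else (y0, my)
        bz = (mz, z1) if p[2] >= mz else (z0, mz)
        return i, (bx[0], by[0], bz[0], bx[1], by[1], bz[1])

    def insert(self, p):
        i, sub = self._octant(p)
        child = self.children[i]
        if child is None:                 # empty node: store the point
            self.children[i] = p
        elif isinstance(child, Octree):   # region node: descend
            child.insert(p)
        elif child != p:                  # point node: split into a region
            node = Octree(*sub)           # and reinsert both points
            node.insert(child)
            node.insert(p)
            self.children[i] = node

    def search(self, p):
        i, _ = self._octant(p)
        child = self.children[i]
        if child is None:                 # empty node: not found
            return False
        if isinstance(child, Octree):     # region node: descend
            return child.search(p)
        return child == p                 # point node: compare

tree = Octree(0, 0, 0, 8, 8, 8)
for pt in [(1, 1, 1), (6, 6, 6), (1, 2, 1)]:
    tree.insert(pt)
print(tree.search((6, 6, 6)), tree.search((3, 3, 3)))  # True False
```

Inserting (1, 2, 1) here triggers the split case: octant 0 already holds (1, 1, 1), so it becomes a region node and both points are reinserted one level deeper.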

25. The steps involved in the design of an animation sequence are:


Storyboard layout: The storyboard is an outline of the action. It defines the motion sequence as a
set of basic events that are to take place. Depending on the type of animation to be produced, the
storyboard could consist of a set of rough sketches or it could be a list of the basic ideas for the
motion.

Object definitions: This step involves defining the objects that will be used in the animation
sequence. An object definition is given for each participant in the action. Objects can be defined in
terms of basic shapes, such as polygons or splines. In addition, the associated movements for each
object are specified along with the shape

Key-frame specifications: Key frames are the most important frames in the animation sequence.
They define the position, rotation, and scale of the objects in the scene. Within each key frame, each
object is positioned according to the time for that frame. Some key frames are chosen at extreme
positions in the action; others are spaced so that the time interval between key frames is not too
great. More key frames are specified for intricate motions than for simple, slowly varying motions.

Generation of in-between frames: In-between frames are the frames that are generated between
the key frames. They help to create a smooth transition between the key frames.
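In-between generation can be sketched as simple linear interpolation between two key frames. This is a minimal Python sketch; the frame representation (a dict of parameters) and the parameter names are assumptions for illustration.

```python
# Sketch: generating in-between frames by linear interpolation between
# two key frames. A "frame" here is a dict of motion parameters.

def inbetween(key_a, key_b, n):
    """Return n evenly spaced frames strictly between key_a and key_b."""
    frames = []
    for i in range(1, n + 1):
        t = i / (n + 1)                  # interpolation parameter in (0, 1)
        frames.append({k: key_a[k] + t * (key_b[k] - key_a[k])
                       for k in key_a})
    return frames

key1 = {"x": 0.0, "y": 0.0, "angle": 0.0}
key2 = {"x": 100.0, "y": 50.0, "angle": 90.0}
for f in inbetween(key1, key2, 3):       # three tweened frames
    print(f)
```

Production systems typically replace the linear parameter t with an easing curve (slow-in/slow-out) so motion accelerates and decelerates naturally, but the frame-filling idea is the same.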

Line testing: This step involves previewing the motion as simple line drawings to verify that the
animation sequence works as expected before final rendering.

Recording: The final step is to record the animation sequence.
