Exam Questions by Subject
Uploaded by Ergo Proxy

9/23/24, 7:41 AM Exam Questions by Subject

Graphics Pipeline:

1. Describe the purpose of the following groups of functions:


a. Primitive – Define low level objects or atomic entities that the system can display
b. Attribute – Governs the way primitives appear on the display
c. Viewing – Allows us to specify various views
d. Transformation – Allows us to carry out transformations of objects
e. Query – Allows us to obtain information about the operating environment, camera parameters, values in the frame buffer, etc.

2. At which stage is each of the following performed:


a. Projection normalisation B
b. Hidden surface removal D
c. Antialiasing D
d. Primitive assembly B
e. Perspective division B
f. Usual results of this process are sets of vertices specifying a group of geometric objects supported by
the rest of the system A
g. Inside Outside testing C
h. Converts vertices in normalised device coordinates to fragments whose locations are in window
coordinates C
i. Interpolation of per-vertex coordinates takes place, and the texture parameters determine how to
combine texture colour and fragment colours to determine final colours in colour buffer
C

3. Explain difference between immediate and retained graphics modes


Immediate: As vertices are generated, they are sent to the graphics processor to be displayed. However, there is no memory of the geometric data, so it must be generated again whenever it needs to be redisplayed.

Retained: All geometric data is computed and stored in some storage structure. The scene is then displayed by
sending all data to the graphics processor at once.

4. Give 2 advantages and a disadvantage of using the pipeline approach to form CGI.
Advantages:
- Increases performance when the same sequence of concurrent operations is carried out on many, or large, datasets
- Process on each primitive can be done independently

Disadvantage:
- Latency of the system must be balanced against increased throughput.
- Global effects may not be handled correctly.


5. Name the frames in the usual order in which they occur in the WebGL pipeline
Object coordinates → World coordinates → Eye coordinates → Clip coordinates → Normalised device coordinates → Window coordinates → Screen coordinates

6. What are the main advantages of programmable shaders?


Programmable shaders make it possible to not only incorporate more realistic lighting models in real time, but
to also create interesting non-photorealistic effects.

7. Explain differences between RGB, RGBA, and indexed colour systems


RGB: Uses the three primary colours (red, green, and blue) at a 24-bit colour depth, with 8 bits assigned to each colour. A number between 0.0 and 1.0 denotes the intensity of each colour.
RGBA: Same as RGB, but with a 4th channel, the alpha channel. This denotes the translucency level assigned. A
value of 0 is transparent, whereas a value of 1 is opaque.
Indexed colour system: The frame buffer has limited colour depth (e.g. 8 bits per pixel) and is not subdivided into colour groups. Each limited-depth pixel is interpreted as an integer value indexing into a colour lookup table.

8.


From Geometry to pixels:

1. Besides the transformation to 2D, explain what other transformations must be done before we can show an
image on a computer screen?

2. When is clipping normally performed in the graphics pipeline?


Clipping happens after vertex processing but before rasterization.

3. Why is clipping performed at this point?


Clipping is carried out on primitives. If it is done before this point, the primitives would still be vertices. If done
after this point, the primitives would have already been converted to fragments.

4. Describe, with the use of diagrams, how lines are clipped using the Cohen-Sutherland line clipping algorithm
The algorithm divides 2D space into 9 regions (the centre being the inside region), then efficiently determines the lines and portions of lines that lie inside the given rectangular area.
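The outcode idea behind the algorithm can be sketched as follows. This is a minimal illustration, not WebGL's implementation; the window bounds and function names are assumptions for the example:

```python
# Each endpoint gets a 4-bit outcode describing where it lies relative to the
# clip window [xmin, xmax] x [ymin, ymax].
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    code = INSIDE
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

def clip_line(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
    c2 = outcode(x2, y2, xmin, ymin, xmax, ymax)
    while True:
        if not (c1 | c2):        # both endpoints inside: trivially accept
            return (x1, y1, x2, y2)
        if c1 & c2:              # both outside on the same side: trivially reject
            return None
        c = c1 or c2             # pick an endpoint that is outside the window
        if c & TOP:
            x, y = x1 + (x2 - x1) * (ymax - y1) / (y2 - y1), ymax
        elif c & BOTTOM:
            x, y = x1 + (x2 - x1) * (ymin - y1) / (y2 - y1), ymin
        elif c & RIGHT:
            x, y = xmax, y1 + (y2 - y1) * (xmax - x1) / (x2 - x1)
        else:                    # LEFT
            x, y = xmin, y1 + (y2 - y1) * (xmin - x1) / (x2 - x1)
        if c == c1:              # move that endpoint onto the window boundary
            x1, y1, c1 = x, y, outcode(x, y, xmin, ymin, xmax, ymax)
        else:
            x2, y2, c2 = x, y, outcode(x, y, xmin, ymin, xmax, ymax)
```

A line crossing the whole window is shortened to the window edges; a line entirely in one outside region is rejected without any intersection arithmetic.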

5. Give one advantage and one disadvantage of using the Cohen-Sutherland line clipping algorithm?
Advantage: Works best when there are many line segments but few are actually visible.
Disadvantage: The algorithm has to be used recursively, repeating the test until the segment is trivially accepted or rejected.

6. What is meant by the term “double buffering” and for what purpose is it used?
Double buffering solves the distortion that occurs when the frame buffer is redisplayed while only partially drawn. Two frame buffers are used: a front buffer that is displayed, and a back buffer that is available for constructing the next display. The buffers are then swapped, and the new back buffer is cleared so that drawing of the next display can begin.

7. What information is stored in:


a. Frame buffer – stores collective pixels of an image.
b. Z-buffer – Stores depth information as primitives are rasterized

8. Describe briefly the Liang-Barsky clipping algorithm


The line is written in parametric form between its endpoints p1 and p2, and four parameter values are computed where it intersects the extended sides of the window. These values are then ordered to find the intersections needed for clipping: negative values correspond to points on the line before p1, and values greater than 1 correspond to points past p2. The segment between the entering and leaving intersections that lies inside the window is kept for display.
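The parametric update described above can be sketched as follows; this is an illustrative version with assumed names, not production code:

```python
def liang_barsky(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    # Parametric line p(t) = p1 + t*(p2 - p1), with t in [0, 1].
    dx, dy = x2 - x1, y2 - y1
    t0, t1 = 0.0, 1.0
    # For each window edge: p < 0 means the line enters through that edge,
    # p > 0 means it leaves through it.
    for p, q in ((-dx, x1 - xmin), (dx, xmax - x1),
                 (-dy, y1 - ymin), (dy, ymax - y1)):
        if p == 0:
            if q < 0:            # parallel to the edge and outside it
                return None
        else:
            t = q / p
            if p < 0:
                t0 = max(t0, t)  # latest entering intersection
            else:
                t1 = min(t1, t)  # earliest leaving intersection
    if t0 > t1:                  # the line misses the window
        return None
    return (x1 + t0 * dx, y1 + t0 * dy, x1 + t1 * dx, y1 + t1 * dy)
```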


9. In WebGL graphics pipeline, when a triangle is processed, the (x,y,z) coordinates of the vertices are
interpolated across whole triangle to give coordinates of each fragment. Name 2 other things that may
commonly be specified at the vertices and then interpolated across the triangle to give a value for each
fragment
Colours, normals, and texture coordinates.

10. Explain crossing, or odd-even test, with respect to a point p inside a polygon.
Any ray emanating from a point p inside the polygon will cross the polygon's edges an odd number of times before reaching infinity.
Any ray emanating from a point p outside the polygon will cross the edges an even number of times before reaching infinity.
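The crossing test can be sketched in a few lines; this is a minimal illustration with assumed names, casting the ray toward +x:

```python
def inside(px, py, poly):
    """Odd-even test: cast a ray from (px, py) toward +x, count crossings."""
    n, crossings = len(poly), 0
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        # does this edge straddle the horizontal line through p?
        if (y1 > py) != (y2 > py):
            # x coordinate where the edge crosses that line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:     # crossing lies on the ray, not behind p
                crossings += 1
    return crossings % 2 == 1    # odd number of crossings: p is inside
```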

11. Briefly describe the Winding test


The winding test treats the polygon as a loop wrapped around a point. It starts by traversing the edges of the polygon from any starting vertex, going around the edges in a particular direction until the starting vertex is reached. For an arbitrary point, the winding number is the number of times this traversal encircles the point, counting clockwise as positive and counter-clockwise as negative; a nonzero winding number means the point is inside.

12.


Transformations and viewing:

1. Differentiate between parallel and perspective projection


Parallel: These views have a COP at infinity. Parallel views do not form realistic views of objects
Perspective: These views have a finite COP. Perspective views are characterised by the diminution of size as the
object is moved farther from the viewer. They form a realistic picture of the object

2. Why are projections produced by parallel and perspective viewing known as planar geometric projections?
Both parallel and perspective views are known as planar projections because the projection surface is a plane
and the projectors are lines.

3. Explain briefly the projection normalisation technique


This technique converts all projections into simple orthogonal projections by distorting objects such that the
orthogonal projection of distorted objects is the same as desired projection of original objects.
The vertices are then transformed such that vertices within the specified view volume are transformed to
vertices within the canonical view volume.

4. What are the advantages of the normalization transformation process?


- both perspective and parallel views can be supported by the same pipeline.
- The clipping process is simplified because sides of canonical view volume are aligned with coordinate axes

5. Shape of the viewing volume for an orthogonal projection is right parallelepiped. Discuss steps involved in
the projection normalisation process for an orthographical projection
1. Perform translation to move centre of specified view volume to centre of canonical view volume
2. Scale sides of specified view volume such that they have a length of 2.
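The two steps above combine into a single 4×4 matrix. The sketch below is illustrative (names assumed), following the usual WebGL convention of a camera looking down the negative z axis:

```python
def matmul(A, B):
    # 4x4 matrix product, row-major nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def ortho(left, right, bottom, top, near, far):
    # Step 1: translate the centre of the specified view volume to the origin.
    # In eye coordinates the volume spans z in [-far, -near].
    T = [[1, 0, 0, -(left + right) / 2],
         [0, 1, 0, -(bottom + top) / 2],
         [0, 0, 1, (near + far) / 2],
         [0, 0, 0, 1]]
    # Step 2: scale each side to length 2 (z is also negated, so the
    # canonical volume is the cube [-1, 1]^3).
    S = [[2 / (right - left), 0, 0, 0],
         [0, 2 / (top - bottom), 0, 0],
         [0, 0, -2 / (far - near), 0],
         [0, 0, 0, 1]]
    return matmul(S, T)
```

For the symmetric volume `ortho(-1, 1, -1, 1, 1, 3)`, a point on the near plane (z = -1) maps to z = -1 and a point on the far plane (z = -3) maps to z = +1, the faces of the canonical cube.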

6. Draw a view frustum – 3 important rectangular planes at their correct positions.

7. Transformations are often carried out using a homogenous coordinate representation. Why is this
representation used?
- All affine transformations can be represented using matrix multiplications
- The uniform representation of all affine transformations makes carrying out successive transformations far easier than in 3D space.
- Less arithmetic is involved
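The first two points can be illustrated in 2D, where homogeneous coordinates are 3×3 matrices: translation (not a linear map in plain 2D coordinates) becomes a matrix product, so a rotation followed by a translation concatenates into one matrix. A minimal sketch with assumed names:

```python
import math

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(M, x, y):
    # the point (x, y) is represented as the homogeneous column (x, y, 1)
    return (M[0][0] * x + M[0][1] * y + M[0][2],
            M[1][0] * x + M[1][1] * y + M[1][2])
```

Concatenating once and reusing the product on every vertex is where the arithmetic saving comes from.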


8. Explain what is meant by non-uniform foreshortening


In a perspective projection, the farther an object is from the viewer, the smaller it appears. Because this diminution of size depends on the distance from the COP, equal-sized objects at different depths project to different sizes, so the foreshortening is non-uniform.

9. Since clipping to a clip region that is a cube is so easy, graphics systems transform any scene with its clip
window to make the clip window a cube
a. Name this transformation technique
b. Give one other advantage of using this transformation technique
a) Projection normalisation
b) Both perspective and parallel views can be supported by the same pipeline.

10. What is unique about affine transformations


Affine transformations preserve lines: the image of a line under an affine transformation is still a line, and parallel lines remain parallel.

11. Give 2 examples of affine transformations


Rotation and translation

12. Consider the following transformations:


A. Translation
B. Rotation
C. Uniform scaling

For each, state if statements are True. May have none, one, or more than one.
a. Position of an arbitrary vertex is always the same after transformation None
b. Length of an arbitrary line segment is the same after transformation A and B
c. Angle between 2 arbitrary vectors is the same after transformation A, B, and C
d. Arbitrary parallel lines are still parallel after transformation A, B, and C

13. Synthetic coordinate reference frame is given by a VRP, VPN, and VUP. Using a diagram show how these
quantities describe the location and orientation of the synthetic camera

14. Explain the term transformation as used in CG


A transformation is a function that maps points (or vectors) to other points (or vectors); transformations are applied to objects to change their position, orientation, or size.

15. Explain why translation and rotation are known as rigid body transformations
Translation and rotation cannot alter the volume or the shape of an object


16. Model-view transformation is the concatenation of 2 transformations. Name and describe these 2
Modelling transformation: Takes instances of objects in object coordinates and brings them into the world
frame
Viewing transformation: Transforms world coordinates into camera coordinates.

17.


Hidden surface removal:

1. Hidden surface removal can be divided into 2 broad classes, state and explain each of these classes
Object space: Attempts to order surfaces of objects in the scene such that rendering surfaces in a particular
order provide the correct image
Image space: Works as part of projection process and seeks to determine the relationship among object points
on each projector

2. Compare and contrast the depth sort and Z-buffer hidden surface removal algorithms with respect to the
following:
a. Rasterization process
b. Type of algorithm
c. Hardware implementation
d. Scenes that are difficult to render

a) Depth Sort: Polygons are sorted by depth first, then each polygon is rasterized whole, painted back to front so nearer polygons overwrite farther ones
Z-Buffer: As primitives are rasterized, it keeps track of the distance from the COP to the closest point on each projector already rendered

b) Depth Sort: Object space

Z-Buffer: Image space

c) Depth Sort: Harder to implement in hardware, since it requires a global sort of all polygons
Z-Buffer: Easy to implement.

d) Depth Sort: Scenes where one polygon pierces another, or where three or more polygons overlap cyclically (such polygons must be split first)
Z-Buffer: Scenes containing translucent polygons rendered in arbitrary order

3. Draw a picture of a set of simple polygons that the depth sort algorithm cannot render without splitting the
polygons

4. Differentiate between depth sort and Z-buffer algorithms for hidden surface removal
The Z-buffer stores the depth of the closest object so far along each projector from the COP, whereas depth sort orders the polygons by depth, checking whether any polygons' z-extents overlap, and paints them back to front.

5. State whether the following about depth sort and Z-buffer are true or false, correct if false.
a. Depth sort is image space algorithm False, object space algorithm
b. Z-buffer is image space algorithm True
c. Z-buffer does rasterization polygon by polygon True
d. Depth sort considers depth of each fragment in a polygon corresponding to intersection of a polygon
with a ray from COP False, this describes the Z-buffer; depth sort orders whole polygons
e. Z-buffer find scenes where polygon pierces another difficult to render False, depth sort
f. Depth-sort find scenes where 3 or more polygons operate cyclically difficult to render True
g. Z-buffer can only be implemented in software False, hardware or software
h. Depth sort orders polygons True


6. Briefly describe the algorithm for removing back facing polygons, assume normal points out from visible side
of the polygon
A polygon is back facing if its outward normal points away from the viewer, i.e. if n · v < 0, where n is the polygon's normal and v is the direction toward the viewer. Since such faces cannot be seen, culling eliminates all back facing polygons before the other hidden surface removal algorithms are applied, reducing the work they must do.

7. WebGL makes use of a Z-buffer


a. What info is stored in the Z-buffer
b. How does WebGL use this info for hidden surface removal?
c. Give 2 advantages of using this approach to hidden surface removal

a) Depth information
b) WebGL uses this to determine if a fragment rasterized will have a greater depth than that in the Z-buffer. If
so, it is discarded.
c) Easy to implement in either hardware or software, and is compatible with pipeline architectures.
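The per-fragment test in (b) can be sketched in a few lines. This is an illustrative model with assumed names, not WebGL code; fragments are (x, y, z, colour) tuples with smaller z meaning closer to the COP:

```python
def render(fragments, width, height, far=float("inf")):
    # Z-buffer starts at "infinitely far"; colour buffer starts empty.
    zbuf = [[far] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:      # fragments arrive in arbitrary order
        if z < zbuf[y][x]:            # closer than anything drawn so far?
            zbuf[y][x] = z            # keep it, remember its depth
            color[y][x] = c
        # otherwise the fragment is hidden and discarded
    return color
```

Note the result is the same whichever order the fragments arrive in, which is why the Z-buffer fits the pipeline so well.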


Lighting and shading:

1. Phong reflection model


a. Describe the 4 vectors used to calculate a colour for an arbitrary point p, illustrate with a figure
b. In the specular term, there is a factor of (r.v)p. What does p refer to? What effect does varying the
power p have?
c. What is the term kaLa? What does ka refer to? How will decreasing ka affect the rendering of a
surface?
d. Is ka a property of the light or the surface?

a) n – normal at p
v – in direction from p to viewer or COP
l – In direction of a line from p to arbitrary point on light source
r – in the direction a perfectly reflected ray from l would take
b) p is the shininess coefficient. Reflected light is concentrated in a narrower region centred on the angle
of a perfect reflector.
c) kaLa is the ambient term; ka is the ambient reflection coefficient. Decreasing ka decreases the amount of ambient light reflected by the surface.
d) ka is a property of the surface.

2. Consider Gouraud and Phong shading models:


a. What information about the object or polygon to be shaded is needed by both models?
b. Explain how this information is used in the 2 shading models
c. Which of the 2 models is more realistic, especially for highly curved surfaces? Explain

3. Can the standard WebGL pipeline easily handle light interactions from object to object?
Explain

4. Explain what diffuse reflection is in the real world


Diffuse reflection occurs when light strikes a rough surface and is scattered in all directions, so the perceived brightness does not depend on the viewer's position. A tar road is an example of a diffuse reflector.

5. State and explain Lambert’s law using a diagram

Lambert’s law states that only the component of the incoming light perpendicular to the surface contributes to the reflected intensity: the diffuse reflection is proportional to cos θ, where θ is the angle between the surface normal n and the light direction l.

6. Using Lambert’s law, derive the equation for calculating approximations to diffuse reflection on a computer
Id = kd Ld (l · n)
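The diffuse term can be computed directly from the dot product; a minimal sketch, assuming l and n are unit vectors and clamping so that light behind the surface contributes nothing:

```python
def diffuse(kd, Ld, l, n):
    # Id = kd * Ld * max(l . n, 0); l and n assumed normalised
    ln = sum(a * b for a, b in zip(l, n))
    return kd * Ld * max(ln, 0.0)
```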


7. Name 2 artifacts in computer graphics that may commonly be specified at vertices of a polygon and then
interpolated across the polygon to give a value for each fragment within the polygon
Colours, normals, and texture coordinates

8. Shading intensity at any given point p on a surface is, in general comprised of 3 contributions, each
corresponding to a distinct physical phenomenon. List and describe these 3 phenomenon
Ambient, specular, and diffuse light interactions

9. We have 3 choices where we do lighting calculations. In the application, vertex shader, or fragment shader
a. Describe 3 steps required to implement lighting in a shader
b. Explain difference between doing lighting calculation on a per-fragment as opposed to per-vertex
basis

a. Choose lighting model, Write the shader to implement the model, and finally transfer necessary data to
shader

b. When lighting is computed on a per-fragment basis rather than per-vertex, we obtain smoother and more realistic looking shading, at a higher computational cost.

10. In CG, we have local and global lighting models


a. Differentiate between the 2 lighting models
b. Give one limitation of the local lighting model
c. Give one limitation of the global lighting model
d. Name one global lighting model used in CG

a) Local – lighting is done independently on objects. Global – Lighting is universal throughout the scene
b) Cannot manage shadows or reflections on the objects
c) Global models are incompatible with the pipeline architecture
d) Ray tracing

11. Interactions between light and material can be classified into 3 groups, state and explain each
Diffuse surface: Reflected light is scattered in all directions
Specular surface: Appear shiny because most of the light that is reflected or scattered is in a narrow range of
angles
Translucent surface: Some light penetrates the surface and emerges from another location on the object

12. Name and describe the 4 basic light sources used in CG


Ambient: Lights provide uniform illumination throughout the room or area
Spot: Narrow range of angles through which light is emitted.
Point: Light emits equally in all directions.
Distant: All rays are parallel and replace the location of the light source with direction of the light

13. Shading:
a. Name the 3 major shading techniques used in computer graphics
b. Describe the computation process for each of these
c. Discuss how objects shaded by different methods differ in appearance.


a) Flat shading, Gouraud shading, and Phong shading


b) Flat: Shading calculation only done once for each polygon and each point is assigned same shade
Gouraud: Light calculation is done at each vertex using material properties and vectors n, v, and l
Phong: Instead of interpolating vertex intensities, we interpolate normal across each polygon and an
independent lighting calculation for each fragment is made.
c) Flat: Each point is assigned the same shade
Gouraud: The rasterizer interpolates a shade for each fragment
Phong: Independent lighting calculation is made for each fragment.

14. In a simple CG lighting model, we assume the specular reflection component Is = ks Ls cos^α φ


a. What lighting effect does specular reflection component approximate
b. What does the term ks represent
c. What effect does increasing the angle φ have?
d. What effect does increasing the exponent α have?

a) the intensity of the specular light


b) the specular reflection coefficient
c) The specular contribution decreases: as φ (the angle between the viewer direction and the direction of perfect reflection) grows, cos^α φ falls off.
d) Reflected light is concentrated in a narrower region centred on the angle of a perfect reflector.
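Both effects fall out of the formula directly; a minimal sketch, with r and v assumed to be unit vectors and the 30° viewing direction purely illustrative:

```python
import math

def specular(ks, Ls, r, v, alpha):
    # Is = ks * Ls * max(r . v, 0) ** alpha; cos(phi) = r . v for unit vectors
    rv = max(sum(a * b for a, b in zip(r, v)), 0.0)
    return ks * Ls * rv ** alpha

# a viewer 30 degrees off the perfect-reflection direction r
r = (0.0, 0.0, 1.0)
v = (math.sin(math.radians(30)), 0.0, math.cos(math.radians(30)))
```

Evaluating `specular` at this v for growing alpha shows the highlight narrowing: the same 30° offset contributes far less light at alpha = 100 than at alpha = 5.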


Discrete Techniques:

1. Explain what is meant by reflection mapping and discuss briefly how this is implemented in WebGL
Reflection (environment) mapping allows us to create images that have the appearance of reflective materials without having to trace reflected rays: an image of the environment is painted onto the surface as that surface is being rendered.

Done in 2 passes:
1) Render the scene without the reflecting object, with the camera placed at the centre of the mirror and pointed in the direction of the mirror's normal
2) Use the resulting image to obtain the shades to place on the mirror for the second rendering

2. Explain what is meant by texture mapping and discuss briefly how this is implemented in WebGL
Uses an image to influence the colour of a fragment.
The texture can be a digitised image or generated by a procedural texture generation method.

Done by:
1) Form texture image and place in texture memory on GPU
2) Assign texture coordinates to each fragment
3) Apply texture to each fragment

3. Explain the problem of rendering translucent objects using the Z-buffer algorithm, and describe how the
algorithm can be adapted to deal with this problem
If a translucent polygon is rendered first, its depth goes into the Z-buffer, and an opaque polygon behind it that is rendered later is discarded rather than blended; the composite therefore depends on rendering order.
To deal with this, the Z-buffer can be made read-only while the translucent polygons are rendered, so that their depth information does not overwrite the buffer.

4. Explain what is meant by bump (or normal) mapping and discuss briefly how this is implemented in
computer graphics
Bump mapping distorts the normal vectors during the shading process to make the surface appear to have small variations in shape. The technique varies the apparent shape of the surface by perturbing the normal vectors as the surface is rendered.

5. We would like to create a realistic looking 3D CG scene of room using WebGL. There is a mirror and a
window, garden is visible through window.
a. Briefly describe a fairly simple and cheap way for us to create the window through which the garden
can be seen
b. Briefly describe a fairly simple and cheap way for to create the mirror

a. Use texture mapping: map an image of the garden onto the window polygon to create the scene visible through the window
b. Use environment mapping: render an image of the room from the mirror's position along its normal, and map that image onto the mirror


6. Alpha channel is the 4th colour in RGBA colour mode


a. Explain main purpose for the alpha channel
b. Explain how alpha channel is used for antialiasing
c. Explain how the alpha channel is used in WebGL
d. Explain how the alpha channel is used for blending

a) The alpha channel gives the object the ability to be transparent, translucent, or opaque
b) When a line is rendered, a pixel the line only partially covers is given an alpha value proportional to the covered fraction; blending these partial contributions adjusts the colour intensity and avoids sharp, jagged edges.
c) Enable blending, setup the desired source and destination factors, and the application program must use
RGBA colours.
d) The alpha value controls how much RGB is written into frame buffer
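The blending in (d) with the common source-over factors (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) amounts to a per-channel weighted sum; a minimal sketch with assumed names:

```python
def blend(src_rgb, src_alpha, dst_rgb):
    # new_dst = alpha_s * src + (1 - alpha_s) * dst, per colour channel;
    # dst is what is already in the frame buffer
    return tuple(src_alpha * s + (1 - src_alpha) * d
                 for s, d in zip(src_rgb, dst_rgb))
```

A half-transparent red fragment over a blue background yields an equal mix of the two, which is why translucent polygons must be blended rather than simply depth-tested.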

7. What mapping technique computes the surroundings visible as a reflected image on a shiny object?
Environment mapping.

8. Explain 2 methods of implementing the mapping technique described above


You could project the environment onto a sphere centred at the COP; or compute six projections that correspond to the six sides of a cube, using six virtual cameras, each pointing in a different direction.

9. When texture mapping, aliasing errors occur when mapping texture coordinates to a texel. Name and
describe 2 strategies used in CG to deal with this.
1. Point sampling: Use value of texel closest to the texture coordinate output
2. Linear filtering: Use a weighted average of a group of texels in the neighbourhood of the texel found by point sampling.
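The two strategies can be contrasted on a tiny texture; this sketch assumes a row-major grid of scalar intensities and (s, t) coordinates in [0, 1] (names illustrative):

```python
import math

def point_sample(tex, s, t):
    # nearest texel to (s, t)
    h, w = len(tex), len(tex[0])
    return tex[min(int(t * h), h - 1)][min(int(s * w), w - 1)]

def linear_filter(tex, s, t):
    # bilinear: weighted average of the 2x2 texel neighbourhood
    h, w = len(tex), len(tex[0])
    x, y = s * w - 0.5, t * h - 0.5
    j0, i0 = math.floor(x), math.floor(y)
    fx, fy = x - j0, y - i0
    def texel(i, j):             # clamp indices at the texture edges
        return tex[max(0, min(i, h - 1))][max(0, min(j, w - 1))]
    top = (1 - fx) * texel(i0, j0) + fx * texel(i0, j0 + 1)
    bot = (1 - fx) * texel(i0 + 1, j0) + fx * texel(i0 + 1, j0 + 1)
    return (1 - fy) * top + fy * bot
```

On a hard black/white edge, point sampling snaps to one texel while linear filtering returns intermediate values, which is how it softens aliasing.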


From vertices to fragments

1. Provide the pseudocode for the DDA line rasterization algorithm, and explain how it is derived

The line y = mx + c has slope m = Δy/Δx; for |m| ≤ 1 we step one pixel at a time in x and increment y by m at each step:

m = (y2 - y1) / (x2 - x1);
y = y1;
for (x = x1; x <= x2; x++)
{
    write_pixel(x, round(y), line_colour);
    y += m;
}
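A runnable version of the same loop, collecting the pixels instead of writing them (a sketch, assuming integer endpoints and |slope| ≤ 1):

```python
def dda(x1, y1, x2, y2):
    # |m| <= 1 assumed, so stepping in x never skips a scanline
    m = (y2 - y1) / (x2 - x1)
    y, pixels = y1, []
    for x in range(x1, x2 + 1):
        pixels.append((x, round(y)))   # round to the nearest scanline
        y += m
    return pixels
```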

2. Explain the 2 techniques to determine the process of filling the inside of a polygon with a colour or pattern
Odd -even testing & Winding test – see previous explanations

3. Describe briefly, with the use of diagrams, the Cohen-Sutherland line clipping algorithm
The algorithm divides 2D space into 9 regions, then efficiently determines the lines and portions of lines that lie inside the given rectangular area.

Possible cases of the line or line segment:


o Completely inside given rectangle
o Completely outside given rectangle
o Partially inside the window.
