2D Graphics 3D Graphics
2. Let us assume a display with pixel resolution 1920 x 1080 displays images at a
frequency of 80 Hz. Calculate the time needed to display a single pixel.
imageDisplayTime = 1 / 80 Hz = 0.0125 s = 12.5 ms;
pixelCount = 1920 * 1080 = 2 073 600 pixels;
singlePixelTime = imageDisplayTime / pixelCount = 12.5 ms / 2 073 600 pixels
= 6.028 * 10⁻⁶ ms ≈ 6 ns;
1 sec = 1 000 (10³) ms
1 sec = 1 000 000 (10⁶) µs
1 sec = 1 000 000 000 (10⁹) ns
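The same calculation written as a small C sketch (the variable names simply mirror the ones above):

#include <stdio.h>

int main(void)
{
    double imageDisplayTime = 1.0 / 80.0;       /* seconds per refresh: 0.0125 s */
    long   pixelCount       = 1920L * 1080L;    /* 2 073 600 pixels              */
    double singlePixelTime  = imageDisplayTime / pixelCount;
    printf("%.2f ns\n", singlePixelTime * 1e9); /* prints about 6.03 ns          */
    return 0;
}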
5. How much memory is needed to store 100 circles and 100 triangles in a simple
vector representation? (Assume all numbers are single-precision floating-point
values, 4 bytes each.)
struct point {
    float x, y;              // coordinates
};
struct circle {
    struct point centre;     // coordinate of the centre
    float r;                 // radius
};
struct triangle {
    struct point p1, p2, p3; // coordinates of the corners
};
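A quick way to check the byte count is to let the compiler do it. A minimal C sketch using the structures above (these all-float structures normally contain no padding, so sizeof matches the hand count):

#include <stdio.h>

struct point    { float x, y; };                   /* 2 * 4 bytes =  8 bytes */
struct circle   { struct point centre; float r; }; /* 8 + 4 bytes = 12 bytes */
struct triangle { struct point p1, p2, p3; };      /* 3 * 8 bytes = 24 bytes */

int main(void)
{
    size_t total = 100 * sizeof(struct circle)     /* 100 * 12 = 1 200 bytes */
                 + 100 * sizeof(struct triangle);  /* 100 * 24 = 2 400 bytes */
    printf("%zu bytes\n", total);                  /* prints: 3600 bytes     */
    return 0;
}

So the simple vector representation needs 100 * 12 B + 100 * 24 B = 3 600 bytes (about 3.5 KB).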
Vertex shader
Vertex shaders perform basic processing of each individual vertex. Vertex
shaders receive the attribute inputs from the vertex rendering and convert
each incoming vertex into a single outgoing vertex based on an
arbitrary, user-defined program.
Vertex shaders can have user-defined outputs, but there is also a special
output that represents the final position of the vertex. If there are no
subsequent vertex processing stages, vertex shaders are expected to fill in
this position with the clip-space position of the vertex, for rendering
purposes.
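As an illustration, a minimal sketch of such a user-defined program written in GLSL and stored as a C string, the way it would be handed to glShaderSource(); the attribute and uniform names (position, mvp) are assumptions, not something taken from the text above:

/* Minimal GLSL vertex shader kept as a C string. It transforms the
 * incoming vertex and writes the clip-space result to gl_Position,
 * the special output mentioned above. */
const char *vertex_shader_src =
    "#version 330 core\n"
    "layout(location = 0) in vec3 position;  // attribute input\n"
    "uniform mat4 mvp;                       // user-defined transform\n"
    "void main() {\n"
    "    gl_Position = mvp * vec4(position, 1.0);\n"
    "}\n";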
Triangle assembly
Primitive assembly is the process of collecting a run of vertex data output
from the prior stages and composing it into a sequence of primitives. The type
of primitive the user rendered with determines how this process works.
The output of this process is an ordered sequence of simple primitives (lines,
points, or triangles). If the input is a triangle strip primitive containing
12 vertices, for example, the output of this process will be 10 triangles.
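The vertex-to-primitive counts can be written down directly; a small C sketch of the rule (the function names are illustrative):

/* Number of triangles produced from n vertices; a strip of 12 vertices
 * yields 12 - 2 = 10 triangles, matching the example above. */
int triangles_from_strip(int n) { return n < 3 ? 0 : n - 2; }
int triangles_from_fan(int n)   { return n < 3 ? 0 : n - 2; }
int triangles_from_list(int n)  { return n / 3; }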
Rasterization
Primitives that reach this stage are then rasterized in the order in which
they were given. The result of rasterizing a primitive is a sequence
of Fragments.
A fragment is a set of state that is used to compute the final data for a
pixel (or sample if multisampling is enabled) in the output framebuffer. The
state for a fragment includes its position in screen-space, the sample
coverage if multisampling is enabled, and a list of arbitrary data that was
output from the previous vertex or geometry shader.
This last set of data is computed by interpolating between the data values in
the vertices for the fragment. The style of interpolation is defined by the
shader that output those values.
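For a triangle, that interpolation is a barycentric weighted sum of the three vertex values. A minimal C sketch, leaving out perspective correction and the flat/noperspective qualifiers:

/* Interpolate one per-vertex attribute (a0, a1, a2) at a fragment whose
 * barycentric coordinates inside the triangle are (w0, w1, w2),
 * with w0 + w1 + w2 = 1. */
float interpolate_attribute(float a0, float a1, float a2,
                            float w0, float w1, float w2)
{
    return w0 * a0 + w1 * a1 + w2 * a2;
}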
Fragment shader
The data for each fragment from the rasterization stage is processed by a
fragment shader. The output from a fragment shader is a list of colours for
each of the colour buffers being written to, a depth value, and a stencil
value. Fragment shaders are not able to set the stencil data for a fragment,
but they do have control over the colour and depth values.
Fragment shaders are optional. If you render without a fragment shader, the
depth (and stencil) values of the fragment get their usual values. But the
values of all the colours that a fragment could have are undefined.
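Again as an illustration, a minimal GLSL fragment shader kept as a C string; the input and output names (colour, fragColour) are illustrative assumptions:

/* Minimal GLSL fragment shader. It writes one colour to the first colour
 * buffer and leaves the depth value at its usual (interpolated) value. */
const char *fragment_shader_src =
    "#version 330 core\n"
    "in vec3 colour;       // interpolated output of the vertex shader\n"
    "out vec4 fragColour;  // colour written to the framebuffer\n"
    "void main() {\n"
    "    fragColour = vec4(colour, 1.0);\n"
    "}\n";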
Testing and blending
The fragment data output from the fragment processor is then passed through a
sequence of steps.
The first step is a sequence of culling tests; if a test is active and the
fragment fails the test, the underlying pixels/samples are not updated
(usually).
After this, colour blending happens. For each fragment colour value, there is
a specific blending operation between it and the colour already in the
framebuffer at that location. Logical Operations may also take place in lieu
of blending, which perform bitwise operations between the fragment colours and
framebuffer colours.
Lastly, the fragment data is written to the framebuffer. Masking
operations allow the user to prevent writes to certain values. Colour, depth,
and stencil writes can be masked on and off; individual colour channels can be
masked as well.
From: https://www.khronos.org/opengl/wiki/Rendering_Pipeline_Overview
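In OpenGL these tests, the blending operation and the write masks are configured through state calls rather than shaders. A short C sketch of one common configuration (the chosen test and blend factors are just an example; the GL header path may differ per platform):

#include <GL/gl.h>

void configure_fragment_tests(void)
{
    glEnable(GL_DEPTH_TEST);                           /* depth (culling) test     */
    glDepthFunc(GL_LESS);                              /* keep the nearer fragment */
    glEnable(GL_BLEND);                                /* enable colour blending   */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); /* standard alpha blending  */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);  /* mask writes to alpha     */
    glDepthMask(GL_TRUE);                              /* allow depth writes       */
}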
13. Calculate the memory capacity required to store an image with pixel resolution
1024 x 768 x 24 (24 bits per pixel). The result should be given in megabytes (MB).
height = 768 pixels;
width = 1024 pixels;
memoryPerPixel = 24 bits = 3 bytes;
imageMemory = width * height * memoryPerPixel = 1024 * 768 * 3 bytes = 2 359 296 bytes;
imageMemory = 2 359 296 bytes / (1024 * 1024) = 2.25 MB;
14. Pixel resolution of a 42-inch (diagonal) display is 1920 x 1080 pixels. The ratio
of width to height of the display is 16:9. Calculate the linear resolution of this
display.
Linear image resolution (ppi) defines the number of pixels displayed along a
one-inch line.
displayDiagonal = 42 inches;
height = 1080 pixels;
width = 1920 pixels;
heightRatio = 9;
widthRatio = 16;
x; // - length coefficient: width = 16x, height = 9x
displayDiagonal = sqrt((16x)² + (9x)²) = x * sqrt(337) ≈ 18.36x = 42 inches;
x = 42 / 18.36 ≈ 2.29 inches;
displayWidth = widthRatio * x = 16 * 2.29 ≈ 36.6 inches;
linearResolution = width / displayWidth = 1920 px / 36.6 in ≈ 52.5 ppi;
// OR another way
diagonalPixels = sqrt(1920² + 1080²) ≈ 2203 px;
linearResolution = diagonalPixels / displayDiagonal = 2203 px / 42 in ≈ 52.5 ppi;
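The second approach written as a small C sketch (the names are illustrative):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double widthPx = 1920.0, heightPx = 1080.0, diagonalIn = 42.0;
    /* pixels along the diagonal, then pixels per inch of that diagonal */
    double diagonalPx = sqrt(widthPx * widthPx + heightPx * heightPx);
    printf("%.1f ppi\n", diagonalPx / diagonalIn);  /* prints about 52.5 ppi */
    return 0;
}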
20. Is it possible, using three primary colours R, G and B, to obtain any given visible
colour?
No. The concept of three primary colours is a simplification with respect to
real human colour vision. The reality is more complex, and in practice it is
impossible to choose three primaries that enable reproduction of all colours
existing in nature.
b) what is the coordinate system describing the colours? (please draw and describe the axes)
c) in the drawn coordinate system, please mark the following colours: white, black and yellow
24. Describe subsequent steps of the Digital Differential Analyzer algorithm and
mark selected pixels. Coordinates of:
• starting point (0,0)
• end point (5,3)

void LineDDA (int x0, int y0, int x1, int y1)
{ // x0 < x1
    float dy, dx, y, m; // -1 <= m <= 1
    dy = y1 - y0;
    dx = x1 - x0;
    m = dy / dx;
    y = y0;
    for (int x = x0; x <= x1; x++)
    {
        WritePixel (x, round(y));
        y += m;
    }
}

// LineDDA(0, 0, 5, 3);
// initial:
dy = 3 - 0 = 3;
dx = 5 - 0 = 5;
m = 3 / 5 = 0.6;
y = 0;
// loop 1:
x = 0;
WritePixel( 0, 0 );
y = y + m = 0 + 0.6 = 0.6;
// loop 2:
x = 1;
WritePixel( 1, 1 );
y += m = 1.2;
// loop 3:
x = 2;
WritePixel( 2, 1 );
y += m = 1.8;
// loop 4:
x = 3;
WritePixel( 3, 2 );
y += m = 2.4;
// loop 5:
x = 4;
WritePixel( 4, 2 );
y += m = 3.0;
// loop 6:
x = 5;
WritePixel( 5, 3 );
y += m = 3.6;
// end
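A directly runnable version of the same routine, as a sketch: it assumes WritePixel simply prints the selected pixel, whereas a real implementation would set a pixel in the framebuffer.

#include <math.h>
#include <stdio.h>

/* Stub: print the pixel instead of drawing it. */
static void WritePixel(int x, int y) { printf("(%d, %d)\n", x, y); }

/* Basic DDA, valid for x0 < x1 and slope -1 <= m <= 1. */
void LineDDA(int x0, int y0, int x1, int y1)
{
    float dy = y1 - y0, dx = x1 - x0;
    float m  = dy / dx;
    float y  = y0;
    for (int x = x0; x <= x1; x++) {
        WritePixel(x, (int)roundf(y));
        y += m;
    }
}

int main(void)
{
    LineDDA(0, 0, 5, 3);  /* prints (0,0) (1,1) (2,1) (3,2) (4,2) (5,3) */
    return 0;
}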
25. Why is the assumption made that the slope of the line is not greater than
45 degrees when using the basic DDA algorithm?
For angles greater than 45° (and less than 90°) the slope m exceeds 1, so stepping
x by one pixel moves y by more than one pixel and the line would have gaps. In that
case the algorithm must select pixels in subsequent rows instead of columns, i.e.
step through y and interpolate x, as sketched below.
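For completeness, a sketch of that steep-line variant (|m| > 1), stepping through rows and adding the reciprocal slope to x; the WritePixel stub and the function name are illustrative.

#include <math.h>
#include <stdio.h>

static void WritePixel(int x, int y) { printf("(%d, %d)\n", x, y); }

/* Steep-line DDA, for y0 < y1 and |slope| > 1:
 * step through rows (y) and interpolate columns (x). */
void LineDDASteep(int x0, int y0, int x1, int y1)
{
    float dx = x1 - x0, dy = y1 - y0;
    float mInv = dx / dy;          /* reciprocal slope, |mInv| <= 1 */
    float x = x0;
    for (int y = y0; y <= y1; y++) {
        WritePixel((int)roundf(x), y);
        x += mInv;
    }
}

int main(void)
{
    LineDDASteep(0, 0, 3, 5);  /* prints (0,0) (1,1) (1,2) (2,3) (2,4) (3,5) */
    return 0;
}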