
1. What is the data rate (bytes/s) needed to display a 4K resolution image?

(Assume refresh rate of 60 Hz)


pixelCount = 4K = 4 096 * 2 160 = 8 847 360 pixels;
pixel = RGB value = 3 bytes;
refreshRate = 60 Hz;
dataRate = pixelCount * pixel * refreshRate = 8 847 360 pixels * 3 bytes *
60 frames/s = 1 592 524 800 B/s = 1 592.5 MB/s ≈ 1.59 GB/s
1 B/s:
= 0.001 (10^-3) KB/s
= 0.000 001 (10^-6) MB/s
= 0.000 000 001 (10^-9) GB/s
= 0.000 000 000 001 (10^-12) TB/s
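
A minimal sketch of this calculation in Java (the 4096 x 2160 frame size, 3 bytes per pixel and 60 Hz refresh rate are the assumptions stated in the task):

public class DataRate {
    public static void main(String[] args) {
        long pixelCount = 4096L * 2160;      // 4K (DCI) frame: 8 847 360 pixels
        int bytesPerPixel = 3;               // one byte each for R, G, B
        int refreshRate = 60;                // frames per second

        long bytesPerSecond = pixelCount * bytesPerPixel * refreshRate;
        System.out.printf("%d B/s = %.2f GB/s%n",
                bytesPerSecond, bytesPerSecond / 1e9);  // 1592524800 B/s ≈ 1.59 GB/s
    }
}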

2. Let us assume a display with pixel resolution 1920 x 1080 displays images at a
frequency of 80 Hz. Calculate the time needed to display a single pixel.
imageDisplayTime = 1/80 Hz = 0.0125 s = 12.5 ms;
pixelCount = 1920 * 1080 = 2 073 600 pixels;
singlePixelTime = imageDisplayTime / pixelCount = 12.5 ms / 2 073 600
pixels = 6.028 * 10^-6 ms ≈ 6 ns;

1 s:
= 1 000 (10^3) ms
= 1 000 000 (10^6) µs
= 1 000 000 000 (10^9) ns
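
The same arithmetic as a short sketch (display parameters as assumed in the task):

public class PixelTime {
    public static void main(String[] args) {
        double frameTime = 1.0 / 80;                // seconds per frame at 80 Hz
        long pixels = 1920L * 1080;                 // 2 073 600 pixels per frame
        double pixelTime = frameTime / pixels;      // seconds per pixel
        System.out.printf("%.3f ns%n", pixelTime * 1e9);  // ≈ 6.028 ns
    }
}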

3. What is the difference between 2D graphics and 3D graphics?


2D Graphics:
• 2D computer graphics operates on objects located on a plane, for example straight lines, curves, planar geometric figures, hand drawings, texts etc.
• Typical operations include drawing, filling figure interiors, geometrical transformations (translations, rotations, scaling), object dimensioning etc.
• A characteristic feature is the lack of illumination. Many application programs for 2D graphics are available; they contain many tools that make image generation easier.
• Professional editors such as CorelDRAW and Adobe Illustrator are highly regarded. There is also the free Inkscape (inkscape.org).

3D Graphics:
• In 3D graphics, scenes with 3D objects are created.
• Important elements of each scene are the light sources that provide proper object illumination.
• In consequence we must calculate object surface colours (shading) and define the shadows cast by these objects (shadowing).
• After the scene modelling process is finished, the camera is placed in the scene. The camera placement defines the reference point of view for creating a two-dimensional view of the three-dimensional scene, which can be displayed on a flat screen.
• The process of transferring from the 3D scene to the 2D view is referred to as rendering. Rendering may be realized in different manners; the two most frequently used methods are the object method (rasterization concept) and the image method.
• In the object approach, rendering involves operations such as projections, determination of object visibility/occlusion, shading, rasterization etc. For each object, its influence on the corresponding pixels (fragments) is calculated. Once the fragments associated with a pixel are known, its final colour can be defined.
• In the image approach, for each pixel of the image all objects that influence this pixel are found, and the pixel colour is calculated directly. This method is used in ray tracing, which employs different algorithms than the object approach; the rasterization problem does not occur.
• Editors for 3D graphics: 3D Studio Max and the free program Blender.

4. Compare vector graphics and raster graphics.


Vector Graphics:
• In both 2D and 3D graphics, during the image generation phase, so-called vector graphics is used, in which objects are described mathematically.
• For each object a set of data is given, on the basis of which appropriate algorithms can generate a digital image of the object, which may then be displayed.
• Vector graphics allows an object to be defined and edited at any desired resolution.

Raster Graphics:
• The memory capacity required to store the final image is usually much greater than in vector graphics, where only the data needed to create the final digital image are stored.
• Once the image exists in digital (raster) form, the information describing the displayed objects is no longer available. It is then impossible to perform operations on the objects; we may only operate on the whole pixel image, on individual pixels, or on groups of pixels.
• Any change of the image resolution must reckon with some loss of image quality.

5. How much memory is needed to store 100 circles and 100 triangles in a simple
vector representation? (Assume all numbers are of single-precision floating point
type, 4 bytes.)
point {
    float x, y;            // coordinates: 2 * 4 bytes = 8 bytes
}
circle {
    point centre;          // coordinates of centre: 8 bytes
    float r;               // radius: 4 bytes
}
triangle {
    point p1, p2, p3;      // coordinates of corners: 3 * 8 bytes = 24 bytes
}

triangles[triangle1 ... triangle100] = 3 points * 8 bytes * 100
= 24 bytes * 100 = 2 400 bytes;

circles[circle1 ... circle100] = (8 bytes + 4 bytes) * 100
= 12 bytes * 100 = 1 200 bytes;

total = 2 400 bytes + 1 200 bytes = 3 600 bytes;
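
A sketch of the same count in code (field sizes as assumed above: 4-byte floats, no padding or object headers):

public class VectorStorage {
    static final int FLOAT_BYTES = 4;
    static final int POINT_BYTES = 2 * FLOAT_BYTES;             // x, y
    static final int CIRCLE_BYTES = POINT_BYTES + FLOAT_BYTES;  // centre + radius
    static final int TRIANGLE_BYTES = 3 * POINT_BYTES;          // three corners

    public static void main(String[] args) {
        int total = 100 * CIRCLE_BYTES + 100 * TRIANGLE_BYTES;
        System.out.println(total + " bytes");                   // 3600 bytes
    }
}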

6. Explain the principle of raster displaying.


In contemporary displays the raster method of displaying is used. Pixels
are displayed on the screen sequentially: first the pixels of the first
row, one after another, then the second row, and so on until the last row.
The information about pixel colours is taken from the image memory. As a
rule, a pixel colour is defined by the values of the three primary colours
R, G, B. The general rule of displaying a colour pixel relies on displaying
three small colour dots, respectively R, G and B, in close vicinity. Such a
triad of dots, observed from some distance, converges into a single
coloured pixel.
The detailed way of displaying the triplets of dots depends on the type of
display:
• CRT displays - three dots of luminophores R, G, B (triads) emit light
after excitation by an electron beam.
• LCD displays - three neighbouring LCD cells are equipped with R, G and B
filters.
• OLED displays - three colour LEDs form a pixel.

7. Explain the operation of double image memory buffer.


While a given image is being displayed, the next image is calculated. The
calculations should be finished before displaying of the current image
ends; hence the calculation time should be no longer than a dozen or so
milliseconds (about 16.7 ms at 60 Hz).
It is advisable to store the pixels of the new image in a block of memory
different from the block from which the presently displayed image is being
read.
In practice the concept of a double image memory buffer is used, which
works as follows: while the image from the first buffer is displayed, the
new image is written into the second. After the calculations of the new
image and the display of the current frame are both finished, the roles of
the buffers are switched. This method is frequently referred to as
ping-pong.
Sometimes, with complex scenes, the system cannot keep up with the
calculations and produces fewer images per second than the required frame
rate. Then some frames may be displayed twice.
This is the problem of synchronization between the capability of the
display, given by its frame rate, and the capability of the calculation
system, given by the number of frames it can calculate in one second,
referred to as Frames Per Second (FPS).
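
A minimal sketch of the ping-pong scheme in Java (the renderInto and waitForVerticalBlank calls are hypothetical placeholders, not a real display API):

public class DoubleBuffer {
    int[][] front = new int[1080][1920];  // buffer currently being displayed
    int[][] back  = new int[1080][1920];  // buffer receiving the next image

    void frame() {
        renderInto(back);        // calculate the next image into the back buffer
        waitForVerticalBlank();  // wait until the current frame has been fully displayed
        int[][] tmp = front;     // swap roles: the freshly rendered image
        front = back;            // becomes the displayed one ("ping-pong")
        back = tmp;
    }

    void renderInto(int[][] buffer) { /* hypothetical rendering */ }
    void waitForVerticalBlank() { /* hypothetical display synchronization */ }
}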
8. Describe the stages of the graphics pipeline.

Vertex shader
Vertex shaders perform basic processing of each individual vertex. They
receive the attribute inputs from vertex specification and convert each
incoming vertex into a single outgoing vertex based on an arbitrary,
user-defined program.
Vertex shaders can have user-defined outputs, but there is also a special
output that represents the final position of the vertex. If there are no
subsequent vertex processing stages, vertex shaders are expected to fill in
this position with the clip-space position of the vertex, for rendering
purposes.
Triangle assembly
Primitive assembly is the process of collecting a run of vertex data output
from the prior stages and composing it into a sequence of primitives. The type
of primitive the user rendered with determines how this process works.
The output of this process is an ordered sequence of simple primitives (lines,
points, or triangles). If the input is a triangle strip primitive containing
12 vertices, for example, the output of this process will be 10 triangles.
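
As an illustration of that count, a sketch of assembling a triangle strip into individual triangles (integer vertex indices stand in for real vertex data; the alternating winding order of real strips is omitted for brevity):

// Each consecutive window of three vertices forms one triangle;
// a strip of n vertices therefore yields n - 2 triangles.
static int[][] assembleTriangleStrip(int[] vertexIndices) {
    int n = vertexIndices.length;
    int[][] triangles = new int[n - 2][3];
    for (int i = 0; i < n - 2; i++) {
        triangles[i][0] = vertexIndices[i];
        triangles[i][1] = vertexIndices[i + 1];
        triangles[i][2] = vertexIndices[i + 2];
    }
    return triangles;   // 12 vertices in -> 10 triangles out
}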
Rasterization
Primitives that reach this stage are then rasterized in the order in which
they were given. The result of rasterizing a primitive is a sequence
of Fragments.
A fragment is a set of state that is used to compute the final data for a
pixel (or sample if multisampling is enabled) in the output framebuffer. The
state for a fragment includes its position in screen-space, the sample
coverage if multisampling is enabled, and a list of arbitrary data that was
output from the previous vertex or geometry shader.
This last set of data is computed by interpolating between the data values in
the vertices for the fragment. The style of interpolation is defined by the
shader that output those values.
Fragment shader
The data for each fragment from the rasterization stage is processed by a
fragment shader. The output from a fragment shader is a list of colours for
each of the colour buffers being written to, a depth value, and a stencil
value. Fragment shaders are not able to set the stencil data for a fragment,
but they do have control over the colour and depth values.
Fragment shaders are optional. If you render without a fragment shader, the
depth (and stencil) values of the fragment get their usual values, but the
values of all the colours that a fragment could have are undefined.
Testing and blending
The fragment data output from the fragment processor is then passed through a
sequence of steps.
The first step is a sequence of culling tests; if a test is active and the
fragment fails the test, the underlying pixels/samples are not updated
(usually).
After this, colour blending happens. For each fragment colour value, there is
a specific blending operation between it and the colour already in the
framebuffer at that location. Logical Operations may also take place in lieu
of blending, which perform bitwise operations between the fragment colours and
framebuffer colours.
Lastly, the fragment data is written to the framebuffer. Masking
operations allow the user to prevent writes to certain values. Colour, depth,
and stencil writes can be masked on and off; individual colour channels can be
masked as well.

From: https://www.khronos.org/opengl/wiki/Rendering_Pipeline_Overview

9. What is the input & output data of the fragment shader?


See the fragment shader stage in question 8. Input: the data of a single
fragment produced by rasterization - its screen-space position, the sample
coverage if multisampling is enabled, and the values interpolated from the
outputs of the previous vertex or geometry shader. Output: a list of
colours for each of the colour buffers being written to and a depth value
(the stencil value cannot be set by the shader).

10. Justify the need for GPU processing.


• The architecture of a single GPU core is Single Instruction Multiple
Data – there are multiple ALUs executing the same program, but
processing different data (e.g., colour of several pixels).
• Execution context contains processing state (data, variables,
conditions) used by ALUs.
• GPU core contains memory shared by the ALUs storing e.g., textures.
• This architecture allows processing many vertices or pixels at the same
time.
• Modern graphics cards can contain multiple cores. Different programs
can run on the cores simultaneously.

11. Explain the notion of pixel.

A pixel (picture element) is the digital representation of a single image
sample: the smallest addressable element of a digital raster image, whose
colour is taken from the admissible set of colours defined by the length of
the binary word representing colour in the graphic system (see question 12).
12. Explain the notions: image sampling and image quantization.
Image sampling
A rectangular mesh is put on the image. Each node of the mesh corresponds
to a single image sample. The set of samples specified in this way is an
approximation of the image: the larger the number of samples, the better
the approximation.
Sample representation:
A. mesh nodes - the colour at the node is taken as the sample;
B. mesh eyelets (cells) - the colour averaged over the whole surface of the
mesh cell is assigned to the sample.
Image quantization
Each image sample has some real colour Creal. It is replaced by the closest
colour Cpixel taken from the admissible set of colours; the cardinality of
this set is defined by the length of the binary word that represents colour
in the graphic system. The digital representation of an image sample is
referred to as a pixel.
Conversion of a real image to digital form requires image sampling and
quantization of the samples. Conversion of the digital form back to a real
image does not require any operations: the human visual system perceives
discrete information, such as a digital image, as continuous.
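
A sketch of uniform quantization of one colour channel (the choice of a [0, 1] input range and the number of levels are assumptions for illustration):

// Map a real sample value in [0, 1] to the nearest of 'levels' admissible values.
static int quantize(double cReal, int levels) {
    return (int) Math.round(cReal * (levels - 1));  // Cpixel as a level index
}

// Example: with an 8-bit word per channel there are 2^8 = 256 levels,
// so quantize(0.5, 256) returns 128 (approximately mid-grey).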

13. Calculate the memory capacity required to store an image with pixel resolution
1024 x 768 x 24. The result should be given in megabytes (MB)
height = 768 pixels;
width = 1024 pixels;
memoryPerPixel = 24 bits = 3 bytes;

totalMemory = height * width * memoryPerPixel
= 768 * 1024 * 24 = 18 874 368 bits;
= 768 * 1024 * 3 = 2 359 296 bytes = 2.359 MB (2.25 MiB if 1 MB = 2^20 bytes);

14. Pixel resolution of a 42-inch (diagonal) display is 1920 x 1080 pixels. The ratio
of width to height of the display is 16:9. Calculate the linear resolution of this
display.
Linear image resolution (ppi) defines the number of pixels displayed on a
one-inch line.

displayDiagonal = 42 inches;
height = 1080 pixels;
width = 1920 pixels;

heightRatio = 9;
widthRatio = 16;
x; // - length coefficient;

displayDiagonal^2 = (widthRatio * x)^2 + (heightRatio * x)^2;

42^2 = (16x)^2 + (9x)^2;
1764 = 256x^2 + 81x^2 = 337x^2;
x = sqrt(1764 / 337) = 2.29;
widthInInches = widthRatio * x = 16 * 2.29 = 36.6 inches;
heightInInches = heightRatio * x = 9 * 2.29 = 20.61 inches;
linearImageResolution
= width / widthInInches = 1920 / 36.6 = 52.4 ppi;
= height / heightInInches = 1080 / 20.61 = 52.4 ppi;
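
The same computation as a sketch:

public class LinearResolution {
    public static void main(String[] args) {
        double diagonal = 42;                       // inches
        int widthPx = 1920;
        double wRatio = 16, hRatio = 9;

        // diagonal^2 = (wRatio*x)^2 + (hRatio*x)^2  =>  solve for x
        double x = diagonal / Math.sqrt(wRatio * wRatio + hRatio * hRatio);
        double widthIn = wRatio * x;                // ≈ 36.6 inches
        System.out.printf("%.1f ppi%n", widthPx / widthIn);  // ≈ 52.4 ppi
    }
}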

15. What is the maximum horizontal pixel resolution of a display,
considering human eye limitations? The width of the display is 30 cm, the
distance between the observer and the display is 50 cm.

eyeResolution = 1 arcmin; // - the angular resolution of the human eye
// (1 degree = 60 arcmin); should be given in the task
distance = 50 cm;
width = 30 cm;
alpha; // - horizontal angle of view

tan(alpha/2) = (width/2) / distance = 15 / 50 = 0.3;

alpha/2 = atan(0.3) ≈ 17 degrees;
alpha ≈ 34 degrees;

maxHorResolution = alpha * 60 arcmin/degree * 1 pixel/arcmin
= 34 * 60 = 2040 px;
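
A sketch of the computation (the 1-arcminute eye resolution is the assumption stated above; without rounding alpha the result is slightly lower):

public class MaxResolution {
    public static void main(String[] args) {
        double width = 30, distance = 50;            // cm
        double alphaDeg = 2 * Math.toDegrees(Math.atan((width / 2) / distance));
        double eyeResArcmin = 1;                     // eye resolves ~1 arcmin
        double maxPixels = alphaDeg * 60 / eyeResArcmin;
        System.out.printf("%.0f px%n", maxPixels);   // ≈ 2004 px (2040 px with alpha rounded to 34°)
    }
}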

16. Write the pseudo-code of a function for decreasing/increasing the
resolution by a 2:1 ratio. The image is given as a two-dimensional array of
pixels: Color[][] image = new Color[100][100].
Color[][] DecreaseResolutionBy2(Color[][] image)
{
    Color[][] smallerImage = new Color[image.length/2][image[0].length/2];
    for(int i = 0; i < smallerImage.length; i++)
    {
        for(int j = 0; j < smallerImage[i].length; j++)
        {
            // average of the four source pixels (component-wise for R, G, B)
            smallerImage[i][j] = (image[i*2][j*2]
                    + image[i*2 + 1][j*2]
                    + image[i*2][j*2 + 1]
                    + image[i*2 + 1][j*2 + 1]) / 4;
        }
    }
    return smallerImage;
}

Color[][] IncreaseResolutionBy2(Color[][] image)
{
    Color[][] biggerImage = new Color[image.length*2][image[0].length*2];
    for(int i = 0; i < image.length; i++)
    {
        for(int j = 0; j < image[i].length; j++)
        {
            // replicate each source pixel into a 2x2 block
            biggerImage[i*2][j*2] = image[i][j];
            biggerImage[i*2 + 1][j*2] = image[i][j];
            biggerImage[i*2][j*2 + 1] = image[i][j];
            biggerImage[i*2 + 1][j*2 + 1] = image[i][j];
        }
    }
    return biggerImage;
}

17. The resolution of an image was changed (see figure below; grey grid -
original image, black grid - new image). Calculate the intensity of the
pixel denoted by N in the new image. The pixel intensities in the original
image: A = 30, B = 50, E = 45, F = 60.
A = 30, B = 50, E = 45, F = 60, N; // - intensities of pixels

P_AB = 0.75 * B + 0.25 * A = 0.75 * 50 + 0.25 * 30 = 45;


P_EF = 0.75 * F + 0.25 * E = 0.75 * 60 + 0.25 * 45 = 56.25;
N = 0.75 * P_EF + 0.25 * P_AB = 0.75 * 56.25 + 0.25 * 45 = 53.4375 ≈ 53.44;

// OR another way

P_AE = 0.75 * E + 0.25 * A = 0.75 * 45 + 0.25 * 30 = 41.25;


P_BF = 0.75 * F + 0.25 * B = 0.75 * 60 + 0.25 * 50 = 57.5;
N = 0.75 * P_BF + 0.25 * P_AE = 0.75 * 57.5 + 0.25 * 41.25 = 53.4375 ≈ 53.44;
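
This two-step computation is just bilinear interpolation; a sketch (the 0.75 weights come from this task's geometry and are passed in as parameters):

// Linear interpolation between two intensities, with weight t toward b.
static double lerp(double a, double b, double t) {
    return (1 - t) * a + t * b;
}

// Bilinear interpolation: interpolate along one axis, then the other.
static double bilinear(double a, double b, double e, double f,
                       double tx, double ty) {
    double pAB = lerp(a, b, tx);   // between A and B
    double pEF = lerp(e, f, tx);   // between E and F
    return lerp(pAB, pEF, ty);     // between the two intermediate values
}

// bilinear(30, 50, 45, 60, 0.75, 0.75) == 53.4375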
18. What are monochromatic colours?
The visible spectrum ranges from 380 nm to 780 nm. From the physical point
of view, colour is the notion that corresponds to wavelength: different
wavelengths in the visible spectrum correspond to different colours.
Colours that are associated with a single wavelength are called
monochromatic; pure red is an example of a monochromatic colour. However,
there are also colours that cannot be assigned to a single wavelength:
purples are defined as a combination of two wavelengths from opposite ends
of the spectrum, corresponding to blue and red.

19. Describe the notion „metamers”?


A specified colour may be evoked by different combinations of spectral
components, so-called metamers. For example, a combination of waves of
610 nm and 540 nm with suitably chosen intensities will evoke the same
colour sensation as a single wavelength of 575 nm.

20. Is it possible using three primary colours R, G and B to obtain any given visible
colour?
It is worth noting that the concept of three primary colours is a
simplification with respect to real human colour vision. The reality is
more complex, and in practice it is impossible to choose three primaries
that enable reproduction of all colours existing in nature. So no: with
R, G and B alone, not every visible colour can be obtained (see the
chromaticity diagram in question 21).

21. Discuss chromaticity diagram.


• All saturated spectral colours are placed on the curved part of the
perimeter of the chromaticity diagram.
• The purples are placed on the straight line in the lower part of the
diagram.
• The point W, representing white, is placed in the centre of the diagram.
• Non-saturated colours are placed inside.
• The diagram represents all visible colours (not taking into account the
value of luminance).
• The chromaticity diagram has the following feature: if we mark the
placement of two colours on the diagram (for example A and C), then any
colour resulting from mixing the initial colours will lie on the line
segment connecting them (for example colour B).
• If we mix three colours, the resultant colour will lie inside the
triangle that has the initial colours at its vertices.
• On the chromaticity diagram we may illustrate different colour models
based on primary colours - for example the RGB model.
• All colours represented in the RGB model lie inside the triangle that has
the R, G and B colours at its vertices.
• Outside the RGB colour triangle there exist many other visible colours.
• This means that if we identify the colours of the vertices with the RGB
colours of a real display, we will never see the colours from outside the
triangle on that display.
• No set of three visible colours enables obtaining all visible colours.
22. Explain the role of Colour Management Systems.
Colour Management Systems (CMS) store each image in a single common colour
space which is independent of the devices used. Such a space may be one of
the spaces defined by the CIE (XYZ or L*a*b*). For each device a so-called
device profile is defined; it enables transformation between the colour
model of the given device and the accepted common colour space.
Principle of operation:
• the image generated on a device is sent to the system along with the
device profile;
• it is converted to the common colour space and stored in this form;
• the device on which the image will be reproduced sends its profile to
the system;
• this enables the system to convert the stored image to the colour model
of the reproducing device.
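
As an illustration of a device-independent space, a sketch converting a linear RGB colour to CIE XYZ (the matrix shown is the standard sRGB/D65 one; a real CMS would take the coefficients from the device profile instead):

// Convert a linear sRGB colour (components in [0, 1]) to CIE XYZ.
static double[] rgbToXyz(double r, double g, double b) {
    return new double[] {
        0.4124 * r + 0.3576 * g + 0.1805 * b,  // X
        0.2126 * r + 0.7152 * g + 0.0722 * b,  // Y (luminance)
        0.0193 * r + 0.1192 * g + 0.9505 * b   // Z
    };
}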
23. Please describe the RGB colour model. The description should include following
topics:
a) what components describe the colour
i. RGB model is based on assumption that three primary
colours R, G and B are available.
ii. All colours that result from mixing these primaries are
represented as points belonging to unitary cube defined
in R, G, B coordinate system.

b) what is the coordinate system describing the colours (please draw and describe the axes)
c) in the drawn coordinate system, please mark following colours white, black and yellow

d) write the components of colours from c) in numerical form


Black = (0, 0, 0);
White = (255, 255, 255);
Yellow = (255, 255, 0);
(with 8-bit components; in the unit cube White = (1, 1, 1) and Yellow = (1, 1, 0))
e) what are the applications of this model
From the computer graphics application point of view, the RGB model is a
discrete model:
• each primary is represented by a word of a certain length;
• if each primary is represented by an 8-bit word, then a colour point is
represented on 24 bits;
• the number of possible colours is then 2^24 (approx. 16.7 million);
• an average observer can distinguish about 10 million colours.
The RGB model describes colours which may be obtained by additive mixing of
the three primary colours R, G and B. Note that there are colours that
cannot be obtained in this way; they are not included in the RGB model.
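
A sketch of the 24-bit representation mentioned above (8 bits per primary, packed into one integer word):

// Pack three 8-bit primaries into one 24-bit colour word.
static int packRgb(int r, int g, int b) {
    return (r << 16) | (g << 8) | b;     // 2^24 ≈ 16.7 million distinct values
}

// packRgb(255, 255, 0) == 0xFFFF00 (yellow)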

24. Describe subsequent steps of the Digital Differential Analyzer
algorithm and mark the selected pixels. Coordinates of:
• starting point (0,0)
• end point (5,3)

void LineDDA (int x0, int y0, int x1, int y1)
{ // assumes x0 < x1
    float dy, dx, y, m;   // -1 <= m <= 1
    dy = y1 - y0;
    dx = x1 - x0;
    m = dy / dx;
    y = y0;
    for (int x = x0; x <= x1; x++)
    {
        WritePixel (x, round(y));
        y += m;
    }
}

Trace of LineDDA(0, 0, 5, 3):
// initial: dy = 3 - 0 = 3; dx = 5 - 0 = 5; m = 3 / 5 = 0.6; y = 0;
// loop 1: x = 0; WritePixel( 0, 0 ); y = 0 + 0.6 = 0.6;
// loop 2: x = 1; WritePixel( 1, 1 ); y = 1.2;
// loop 3: x = 2; WritePixel( 2, 1 ); y = 1.8;
// loop 4: x = 3; WritePixel( 3, 2 ); y = 2.4;
// loop 5: x = 4; WritePixel( 4, 2 ); y = 3.0;
// loop 6: x = 5; WritePixel( 5, 3 ); y = 3.6;
// end
25. Why is the assumption being made that the slope of the line is not greater than
45 degrees, when using the basic DDA algorithm?
For slopes greater than 45° (i.e. |m| > 1) the basic DDA, which steps once
per column, would advance y by more than one pixel per step and leave gaps
in the line. For angles greater than 45° and less than 90° the roles of the
axes are therefore swapped: the algorithm iterates over subsequent rows (y)
and computes x, defining one pixel per row instead of per column.
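
A sketch of that steep-slope variant, in the same pseudocode style as LineDDA above (assumes y0 < y1 and slope greater than 1):

void LineDDASteep (int x0, int y0, int x1, int y1)
{ // step along y, compute x
    float dy, dx, x, mInv;
    dy = y1 - y0;
    dx = x1 - x0;
    mInv = dx / dy;   // 1/m, here with magnitude <= 1
    x = x0;
    for (int y = y0; y <= y1; y++)
    {
        WritePixel (round(x), y);
        x += mInv;
    }
}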
