Chapter 3

Chapter 3 introduces the rendering process using OpenGL, focusing on coordinate systems, transformations, and projection matrices. It explains the differences between local, world, view, clip, and screen coordinates, as well as the importance of model, view, and projection matrices in transforming vertex coordinates. The chapter also covers orthographic and perspective projections, detailing how they affect the rendering of 3D objects on a 2D screen.

Uploaded by

Betelhem
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PPTX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
8 views39 pages

Chapter 3

Chapter 3 introduces the rendering process using OpenGL, focusing on coordinate systems, transformations, and projection matrices. It explains the differences between local, world, view, clip, and screen coordinates, as well as the importance of model, view, and projection matrices in transforming vertex coordinates. The chapter also covers orthographic and perspective projections, detailing how they affect the rendering of 3D objects on a 2D screen.

Uploaded by

Betelhem
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PPTX, PDF, TXT or read online on Scribd
You are on page 1/ 39

Chapter - 3

Introduction to the rendering process with OpenGL


Coordinate System

• In a 2-D coordinate system the X axis generally points from left to right, and the Y axis
generally points from bottom to top.
• When we add the third coordinate, Z, we have a choice as to whether the Z-axis points into
the screen or out of the screen.

Right Hand Coordinate System (RHS)
Z is coming out of the page
Counterclockwise rotations are positive
if we rotate about the X axis: the rotation Y->Z is positive
if we rotate about the Y axis: the rotation Z->X is positive
if we rotate about the Z axis: the rotation X->Y is positive

Left Hand Coordinate System (LHS)


Z is going into the page
Clockwise rotations are positive
if we rotate about the X axis : the rotation Y->Z is positive
if we rotate about the Y axis : the rotation Z->X is positive
if we rotate about the Z axis : the rotation X->Y is positive

OpenGL generally uses a right-hand coordinate system.
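The rotation convention above can be checked with a few lines of code. This is a minimal sketch (the function name `rotateZ` is illustrative, not an OpenGL call): in a right-handed system a positive rotation about the Z axis is counterclockwise, so rotating the +X axis by 90 degrees carries it onto the +Y axis (X->Y positive).

```cpp
#include <cmath>

// Rotate the point (x, y) about the Z axis by `angle` radians, in place.
// Positive angles are counterclockwise, matching the RHS convention.
void rotateZ(double angle, double& x, double& y) {
    double c = std::cos(angle), s = std::sin(angle);
    double nx = c * x - s * y;
    double ny = s * x + c * y;
    x = nx;
    y = ny;
}
```

Rotating (1, 0) by +90 degrees yields (0, 1), confirming that X->Y is the positive direction about Z.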


Cont’d

• Transforming coordinates to NDC (normalized device coordinates) and then to screen
coordinates is usually accomplished in a step-by-step fashion.
• We transform an object's vertices through several coordinate systems before finally
transforming them to screen coordinates.
• There are a total of 5 different coordinate systems that are of importance to us:
• Local space (or object space)
• World space
• View space (or eye space)
• Clip space
• Screen space
• Each of these is a different state our vertices pass through before finally
ending up as fragments.
Cont.…

• Modeling coordinates are used to construct individual object shapes.
• World coordinates are computed for specifying the placement of individual objects in
appropriate positions.
• Normalized coordinates are converted from world coordinates, such that x, y values
range from 0 to 1.
• Device coordinates are the final locations on the output devices.
Cont’d (The global picture)
• To transform the coordinates in one space to the next coordinate space we'll use several
transformation matrices.
• Most important are the model, view and projection matrix.
• Our vertex coordinates first start in local space as local coordinates and are then further
processed to world coordinates, view coordinates, clip coordinates and eventually end up
as screen coordinates.
• The following image displays the process and shows what each transformation does:
Cont’d (The global picture)
• Local coordinates are the coordinates of your object relative to its local origin;
 They're the coordinates your object begins in.
• The next step is to transform the local coordinates to world-space coordinates
 Which are coordinates in respect of a larger world.
 These coordinates are relative to a global origin of the world,
 Many other objects also placed relative to the world's origin.
• Next we transform the world coordinates to view-space coordinates
 Each coordinate is as seen from the camera or viewer's point of view.
• After the coordinates are in view space we want to project them to clip coordinates.
 Clip coordinates are processed to the -1.0 to 1.0 range and determine which vertices will
end up on the screen.
• And lastly we transform the clip coordinates to screen coordinates in a process we call the
viewport transform, which maps the coordinates from the -1.0 to 1.0 range to the coordinate range
defined by glViewport.
 The resulting coordinates are then sent to the rasterizer to turn them into fragments.
Cont’d (Local space)
• Local space is the coordinate space that is local to your object, i.e., the space your object begins in.
• Probably all the models you've created have (0,0,0) as their initial position.
• All the vertices of your model are therefore in local space: they are all local to your object.
• The vertices may be specified as coordinates between -0.5 and 0.5 with 0.0 as its origin. These
are local coordinates.
Cont’d (World space)
• The coordinates in world space are exactly what they sound like
• The coordinates of all your vertices relative to a (game) world.
• This is the coordinate space you want your objects transformed to, placed relative to one
another in a realistic fashion.
• The coordinates of your object are transformed from local to world space; this is accomplished
with the model matrix.
• The model matrix is a transformation matrix that translates, scales and/or rotates your object
to place it in the world at a location/orientation they belong to.
Cont’d (View space)
• The view space is what people usually refer to as the camera of OpenGL (it is sometimes also
known as the camera space or eye space).
• The view space is the result of transforming your world-space coordinates to coordinates that
are in front of the user's view.
• The view space is thus the space as seen from the camera's point of view.
• This is usually accomplished with a combination of translations and rotations to
translate/rotate the scene so that certain items are transformed to the front of the camera.
• These combined transformations are generally stored inside a view matrix that transforms
world coordinates to view space.
Cont’d (Clip space)
• At the end of each vertex shader run, OpenGL expects the coordinates to be within a specific range and
any coordinate that falls outside this range is clipped.
• Coordinates that are clipped are discarded, so the remaining coordinates will end up as fragments
visible on your screen.
• This is also where clip space gets its name from.
• Because specifying all the visible coordinates to be within the range -1.0 and 1.0 isn't really intuitive,
 We specify our own coordinate set to work in and convert those back to NDC as OpenGL expects them.
• To transform vertex coordinates from view to clip space we define a so-called projection matrix that
specifies a range of coordinates, e.g. -1000 and 1000 in each dimension.
• The projection matrix then transforms coordinates within this specified range to normalized device
coordinates (-1.0, 1.0).
• All coordinates outside this range will not be mapped between -1.0 and 1.0 and therefore be clipped.
• This viewing box a projection matrix creates is called a frustum.
 Each coordinate that ends up inside this frustum will end up on the user's screen.
• The total process to convert coordinates within a specified range to NDC that can easily be mapped to 2D
view-space coordinates is called projection.
 The projection matrix projects 3D coordinates to the easy-to-map-to-2D normalized device coordinates.
Cont’d (Clip space)
• Once all the vertices are transformed to clip space a final operation called perspective division
is performed where we divide the x, y and z components of the position vectors by the vector's
homogeneous w component;
• Perspective division is what transforms the 4D clip space coordinates to 3D normalized device
coordinates.
• This step is performed automatically at the end of each vertex shader run.
• It is after this stage that the resulting coordinates are mapped to screen coordinates (using
the settings of glViewport) and turned into fragments.
• The projection matrix to transform view coordinates to clip coordinates can take two different
forms, where each form defines its own unique frustum.
• We can either create an orthographic projection matrix or a perspective projection matrix.
Cont’d----- Orthographic projection
• An orthographic projection matrix defines a cube-like frustum box that defines the clipping space where each
vertex outside this box is clipped.
• When creating an orthographic projection matrix we specify the width, height and length of the visible frustum.
• All the coordinates that end up inside this frustum after transforming them to clip space with the orthographic
projection matrix won't be clipped.
• The frustum looks a bit like a container:
• The frustum defines the visible coordinates and is specified by a width, a
height and a near and far plane.
• Any coordinate in front of the near plane is clipped and the same applies
to coordinates behind the far plane.
• The orthographic frustum directly maps all coordinates inside the frustum
to normalized device coordinates since the w component of each vector is
untouched;
• If the w component is equal to 1.0 perspective division doesn't change the
coordinates.
• To create an orthographic projection matrix we make use of GLM's built-in function glm::ortho:

glm::ortho(0.0f, 800.0f, 0.0f, 600.0f, 0.1f, 100.0f);


Cont’d----- Orthographic projection
• The first two parameters specify the left and right coordinate of the frustum and the third and
fourth parameter specify the bottom and top part of the frustum.
• The 5th and 6th parameter then define the distances between the near and far plane.
• This specific projection matrix transforms all coordinates between these x, y and z range values
to normalized device coordinates.
• An orthographic projection matrix directly maps coordinates to the 2D plane that is your
screen, but in reality a direct projection produces unrealistic results since the projection
doesn't take perspective into account.
• That is something the perspective projection matrix fixes for us.
Cont’d ---- Perspective projection
• Perspective is especially noticeable when looking down the end of an infinite motorway or
railway as seen in the following image:
• As you can see, due to perspective the lines seem to converge the farther away they are.
• This is exactly the effect perspective projection tries to mimic and it
does so using a perspective projection matrix.
• The projection matrix maps a given frustum range to clip space, but also
manipulates the w value of each vertex coordinate in such a way that
the further away a vertex coordinate is from the viewer, the higher this
w component becomes.
• Once the coordinates are transformed to clip space they are in the range
-w to w (anything outside this range is clipped).
• OpenGL requires that the visible coordinates fall between the range -1.0
and 1.0 as the final vertex shader output,
• Thus once the coordinates are in clip space, perspective division is
applied to the clip space coordinates:
Cont’d ---- Perspective projection
• Each component of the vertex coordinate is divided by its w component giving smaller vertex
coordinates the further away a vertex is from the viewer.
• This is another reason why the w component is important, since it helps us with perspective
projection.
• The resulting coordinates are then in normalized device space.
• A perspective projection matrix can be created in GLM as follows:
glm::mat4 proj = glm::perspective(glm::radians(45.0f), (float)width / (float)height, 0.1f, 100.0f);
Cont’d ---- Perspective projection
• What glm::perspective does is again create a large frustum that defines the visible
space,
• An image of a perspective frustum is seen below:
• Its first parameter defines the fov value, which
stands for field of view and sets how large the
view space is.
• For a realistic view it is usually set to 45 degrees,
but for more Doom-style results you could set it to
a higher value.
• The second parameter sets the aspect ratio which
is calculated by dividing the viewport's width by
its height.
• The third and fourth parameter set the near and
far plane of the frustum. We usually set the near
distance to 0.1f and the far distance to 100.0f.
• All the vertices between the near and far plane and inside the frustum will be rendered.
Cont’d ---- Perspective projection
• Whenever the near value of your perspective matrix is set a bit too high (like 10.0f), OpenGL
will clip all coordinates close to the camera (between 0.0f and 10.0f), which gives a familiar
visual result in videogames in that you can see through certain objects if you move too close to
them.
• When using orthographic projection, each of the vertex coordinates are directly mapped to clip
space without any fancy perspective division (it still does perspective division, but the w
component is not manipulated (it stays 1) and thus has no effect).
• Because the orthographic projection doesn't use perspective projection, objects farther away
do not seem smaller, which produces a weird visual output.
• For this reason the orthographic projection is mainly used for 2D renderings and for some
architectural or engineering applications where we'd rather not have vertices distorted by
perspective.
• Applications like Blender that are used for 3D modelling sometimes use orthographic
projection for modelling, because it more accurately depicts each object's dimensions.
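As a sketch of what glm::perspective computes, the matrix can be written out by hand. As with the orthographic example, this uses row-major math and assumes OpenGL's default -1..1 NDC depth range, so it illustrates the mapping rather than reproducing GLM's column-major internals; note the -1 in the bottom row, which copies the view-space depth into w so that farther vertices get a larger divisor:

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Row-major equivalent of glm::perspective(fovy, aspect, near, far),
// with fovy in radians and the default -1..1 NDC depth range.
Mat4 perspective(double fovy, double aspect, double n, double f) {
    double t = 1.0 / std::tan(fovy / 2.0); // cot(fovy / 2)
    Mat4 m{};
    m[0][0] = t / aspect;
    m[1][1] = t;
    m[2][2] = (f + n) / (n - f);
    m[2][3] = 2.0 * f * n / (n - f);
    m[3][2] = -1.0; // w_clip = -z_view: farther vertices get a larger w
    return m;
}

// View space -> NDC: multiply by the matrix, then perspective-divide.
std::array<double, 3> toNDC(const Mat4& m, double x, double y, double z) {
    std::array<double, 4> v{x, y, z, 1.0};
    std::array<double, 4> r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return {r[0] / r[3], r[1] / r[3], r[2] / r[3]};
}
```

With a 90-degree field of view, aspect 1, near 0.1 and far 100, a point on the near plane maps to NDC z = -1 and a point on the far plane to NDC z = +1, as expected.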
Cont’d ---- Putting it all together
• We create a transformation matrix for each of the aforementioned steps: model, view and projection
matrix.
• A vertex coordinate is then transformed to clip coordinates as follows:

V_clip = M_projection · M_view · M_model · V_local

• Note that the order of matrix multiplication is reversed (remember that we need to read matrix
multiplication from right to left).
• The resulting vertex should then be assigned to gl_Position in the vertex shader and OpenGL will then
automatically perform perspective division and clipping.
• And then:
• The output of the vertex shader requires the coordinates to be in clip-space which is what we just
did with the transformation matrices.
• OpenGL then performs perspective division on the clip-space coordinates to transform them
to normalized-device coordinates.
• OpenGL then uses the parameters from glViewPort to map the normalized-device coordinates
to screen coordinates where each coordinate corresponds to a point on your screen.
• This process is called the viewport transform.
Viewing using a synthetic camera
• How do we describe 3D viewing and projection to the computer?
• A paradigm that treats creating a computer-generated image as similar to forming an
image using an optical system:
• Synthetic camera paradigm
• Position of camera (Center of Projection)
• Area of interest (direction camera lens is pointed in) (Projector lines)
• Orientation (which way is up)
• Field of view (wide angle, normal...)
• Depth of field (clipping planes, sort of)
• Tilt of view/film plane (if not normal to view direction)
• Perspective or parallel projection? (camera near objects or an infinite distance away)
• In case of image formation using optical systems, the image is flipped relative to the object.
• In synthetic camera model this is avoided by introducing a plane in front of the lens which is called
the image plane.
• The angle of view of the camera poses a restriction on the part of the object which can be viewed.
• This limitation is moved to the front of the camera by placing a Clipping Window in the projection plane.
Cont’d (Position)
• Determining the Position is analogous to a photographer deciding the vantage point from
which to shoot a photo
• Three degrees of freedom: x, y, and z coordinates in 3-space
• This x, y, z coordinate system is right-handed: if you open your right hand, align your palm
and fingers with the +x axis, and curl your fingers towards the +y axis, your thumb will point
along the +z axis
Cont’d (Orientation)
• Orientation is specified by a point in 3D space to look at (or a direction to look in) and an angle
of rotation about this direction
• Default (canonical) orientation is looking down the negative z-axis and up direction pointing
straight up the y-axis
• In general the camera is located at the origin and is looking at an arbitrary point with an
arbitrary up direction
Cont’d (Orientation)

• Computer-generated image based on an optical system – synthetic camera model


• Viewer behind the camera can move the back of the camera – change of the distance d,
i.e., additional flexibility
• Objects and viewer specifications are independent – different functions within a graphics
library
Cont’d (Look and Up Vectors)
• More concrete way to say the same thing as orientation
 Soon you’ll learn how to express orientation in terms of look and up vectors
• Look vector
 The direction the camera is pointing
 Three degrees of freedom; can be any vector in 3-space
• Up vector
o Determines how the camera is rotated around the look vector
o For example, whether you’re holding the camera horizontally or vertically (or in between)
o Projection of up vector must be in the plane perpendicular to the look vector (this allows
up vector to be specified at an arbitrary angle to its look vector)
Cont’d (Aspect Ratio)
• Analogous to the size of film used in a camera
• Determines proportion of width to height of image displayed on screen
• Square viewing window has aspect ratio of 1:1
• Movie theater “letterbox” format has aspect ratio of 2:1
• NTSC television has an aspect ratio of 4:3, and HDTV is 16:9
Cont’d (View Angle)
• Determines amount of perspective distortion in picture, from none (parallel projection) to a
lot (wide angle lens)
• In a frustum, two viewing angles: width and height angles
• Choosing View angle analogous to photographer choosing a specific type of lens (e.g., a
wide-angle or telephoto lens)
• Lenses made for distance shots often have a nearly parallel viewing angle and cause little
perspective distortion, though they foreshorten depth
• Wide-angle lenses cause a lot of perspective distortion
Cont’d (Front and Back Clipping Planes)
• Volume of space between Front and Back clipping planes defines what camera can see
• Position of planes defined by distance along Look vector
• Objects appearing outside of view volume don’t get drawn
• Objects intersecting view volume get clipped
Cont’d (Front and Back Clipping Planes)
• Reasons for back (far) clipping plane:
 Don't want to draw objects too far away from camera
 Distant objects may appear too small to be visually significant, but still take a long
time to render
 By discarding them we lose a small amount of detail but reclaim a lot of rendering time
 Alternately, the scene may be filled with many significant objects; for visual clarity, we
may wish to declutter the scene by rendering those nearest the camera and discarding the rest
• Reasons for front (near) clipping plane:
 Don't want to draw things too close to the camera
 Would block view of rest of scene
 Objects would be prone to distortion
 Don't want to draw things behind camera
 Wouldn't expect to see things behind the camera
 In the case of the perspective camera, if we were to draw things behind the camera,
they would appear upside-down and inside-out because of the perspective transformation
Cont’d (Focal Length)
• Some camera models take a Focal length
• Focal Length is a measure of ideal focusing range; approximates behavior of real camera lens
• Objects at distance of Focal length from camera are rendered in focus; objects closer or
farther away than Focal length get blurred
• Focal length used in conjunction with clipping planes
• Only objects within view volume are rendered, whether blurred or not.
• Objects outside of view volume still get discarded
Cont’d (View Volume Specification)
• Position, Look vector, Up vector, Aspect ratio, Height angle, Clipping planes, and
(optionally) Focal length together specify a truncated view volume
• Truncated view volume is a specification of bounded space that camera can “see”
• 2D view of 3D scene can be computed from truncated view volume and projected onto film
plane
• Truncated view volumes come in two flavors: parallel and perspective

Truncated view volume means we only need to render what the camera can see
Cont’d (Truncated View Volume for Orthographic
Parallel Projection)
• Limiting view volume useful for eliminating extraneous objects
• Orthographic parallel projection has width and height view angles of zero
Cont’d (Truncated View Volume (Frustum) for
Perspective Projection)
• Removes objects too far from Position, which otherwise would merge into “blobs”
• Removes objects too close to Position (would be excessively distorted)
3-D APIs (OpenGL - basics)
• To follow the synthetic camera model discussed earlier, the API should support:
• Objects, viewers, light sources, material properties.
• OpenGL defines primitives through a list of vertices.
• Primitives: simple geometric objects having a simple relation between a list of vertices
• A simple program to draw a triangular polygon:
glBegin(GL_POLYGON);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.0, 1.0, 0.0);
glVertex3f(0.0, 0.0, 1.0);
glEnd();
• Specifying viewer or camera:
• Position - position of the COP
• Orientation – rotation of the camera along 3 axes
• Focal length – determines the size of image
• Film Plane – has a height & width & can be adjusted independent of orientation of lens.
Cont’d
• Function call for camera orientation :
gluLookAt(cop_x,cop_y,cop_z,at_x,at_y,at_z,up_x,up_y,up_z);
gluPerspective(field_of_view,aspect_ratio,near,far);
• Lights and materials :
• Types of lights
 Point sources vs distributed sources
 Spot lights
 Near and far sources
 Color properties
• Material properties
 Absorption: color properties
 Scattering
A small set of geometric primitives
Cont’d (OpenGL Primitives)
Cont’d (Specifying geometric primitives)
• Each geometric object is described by:
• A set of vertices
• Type of the primitive
Cont’d (Attributes)
• Attributes are part of the OpenGL state and determine the appearance of objects
• Color (points, lines, polygons)
• Size and width (points, lines)
• Stipple pattern (lines, polygons)
• Polygon mode
• Display as filled: solid color or stipple pattern
• Display edges
Question

1. All the vertices of your model are therefore in local space.


2. The projection matrix is a transformation matrix that projects 2D coordinates into
the 3D normalized device coordinates.
