
Chapter 6

Representing 3D Objects & Viewing Transformation
Introduction
Graphics scenes contain many different kinds of objects: trees, flowers,
glass, rock, water, etc.
No single method of description can capture the characteristics of all
of these different materials.
A 3D object can be represented in many ways in a graphics application:
A surface can be generated analytically from a function of its
coordinates.
An object can be represented in terms of its vertices, edges and
polygons. This scheme of object modeling using polygon elements is the
most general method, and is also convenient for use in various graphics
algorithms.
Curves and surfaces can also be designed using splines by specifying a
set of control points.
Most objects in 3D graphics are represented using a polygonal mesh,
but this is not the only possibility.
Object Representation Classification:
• Boundary Representations (B-reps) – only the surface/boundary of the
object is stored, normally using a set of smaller primitives
e.g. polygon faces/meshes
• Space-partitioning representations – the volume of the object is
hierarchically broken down into smaller sub-regions, until we reach
basic 3D primitives such as cubes, spheres, etc.
e.g. octree representation
Two common boundary primitives:
Polygonal meshes – in which the primitives are normally convex polygons
Spline patches – small pieces of curved surface represented by
polynomial equations
3D modeling – the process of developing a mathematical representation
of the 3D surface of an object via specialized software
It can be displayed as a 2D image through a process called
3D rendering or used in a computer simulation of physical
phenomena.
3D models represent a 3D object using a collection of
points in 3D space, connected by various geometric
entities such as triangles, lines, curved surfaces, etc.
3D models are widely used anywhere in 3D graphics
Many computer games used pre-rendered images of 3D
models as sprites before computers could render them in
real-time.
Overview of 3D Rendering
Modeling:
• Define object in local coordinates
• Place object in world coordinates (modeling transformation)
Viewing:
• Define camera parameters
• Find object location in camera coordinates (viewing transformation)
Projection: project object to the viewplane
Clipping: clip object to the view volume
Viewport transformation
Rasterization: rasterize object
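The sequence above can be sketched as a legacy OpenGL/GLUT display
callback. This is only a minimal sketch, not the only arrangement;
drawObject() is a hypothetical placeholder for your own geometry, and
the parameter values are illustrative:

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* Projection: how the view volume maps to the image plane */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 1.0, 1.0, 100.0);
    /* Viewing: position the virtual camera */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,   /* eye position   */
              0.0, 0.0, 0.0,   /* look-at point  */
              0.0, 1.0, 0.0);  /* up vector      */
    /* Modeling: place the object in world coordinates */
    glTranslatef(1.0f, 0.0f, 0.0f);
    drawObject(); /* clipping, the viewport mapping and rasterization
                     are then performed automatically by OpenGL */
    glutSwapBuffers();
}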
Viewing Transformation
The operations needed to produce views of a 3D scene.
Generating a view of a 3D scene is similar to taking a photograph of a
scene using a camera.
To take a photo:
- first we place the camera at a particular position
- then change the direction of the camera
- point the camera at a particular scene; we can rotate the camera
around the 3D object
- finally, when we press the shutter, the scene is adjusted to the size
of the camera's window and light from the scene is projected onto the
camera film
Viewing in 3D involves the following
considerations:
- We can view an object from any spatial position, e.g.:
In front of an object,
Behind the object,
In the middle of a group of objects,
Inside an object, etc.
- 3D descriptions of objects must be projected
onto the flat viewing surface of the output
device.
- The clipping boundaries enclose a volume of
space.
3D Graphics Pipeline
Viewing pipeline → the sequence of transformations that every primitive
has to undergo before it is displayed
- e.g. the operation of the OpenGL viewing pipeline for 3-D graphics
Note: different coordinate system after each
transformation
Every transformation can be thought of as defining a new
coordinate system
Steps involved in 3D viewing
- Specify view volume
- Projection onto a projection plane
- Mapping of this projected scene onto the view surface
- Objects in 3D are clipped against the view volume and are then
projected onto a plane
Coordinate Transformation
Coordinate Systems
Several coordinate systems are used in graphics systems:
- World coordinates are the units of the scene to be modelled; we
select a “window” of this world for viewing
- Viewing coordinates are where we wish to present the view on the
output device; we can achieve zooming by mapping different-sized
clipping windows (via gluOrtho2D()) to a fixed-size viewport; we can
have multiple viewports on screen simultaneously
The clipping window selects what to display; the viewport indicates
where it appears on the output device and at what size
Normalized coordinates are introduced to make
viewing process independent of any output device
(paper vs. mobile phone); clipping normally and
more efficiently done in these coordinates
Device coordinates are specific to output device:
printer page, screen display
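As a small sketch of the zooming idea (the fixed window size 400x400
and the variable halfWidth are assumptions for illustration, not part
of any fixed API):

/* Map a variable-sized clipping window onto a fixed-size viewport.
   Shrinking halfWidth makes the window contents appear larger. */
void setClipWindow(double halfWidth)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(-halfWidth, halfWidth, -halfWidth, halfWidth);
    glViewport(0, 0, 400, 400);  /* viewport stays fixed */
    glMatrixMode(GL_MODELVIEW);
}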
Modeling Transformation
All primitives start off in their own private coordinate system → the
modelling coordinate system
Modelling → the process of building our 3-D scene from a number of
basic primitives
these primitives must be transformed to get them in the
correct position, scale & orientation to construct our scene
E.g., a wheel primitive being transformed in four different ways to
form the wheels of a car (see the sketch at the end of this section)
modeling transformation – used to transform
primitives so that they construct the 3-D scene we
want
- are normally different for each primitive
Once we have transformed all primitives to their
correct positions we say that our scene is in world
coordinates
- This coordinate system will have an origin somewhere
in the scene, and its axes will have a specific orientation
- All primitives are now defined in world coordinate
system
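A minimal sketch of the car-wheel example above: one wheel primitive is
transformed four times into world coordinates. drawWheel() and the
positions are hypothetical:

void drawCarWheels(void)
{
    static const float pos[4][2] = {
        {  1.5f,  1.0f }, { -1.5f,  1.0f },  /* front wheels */
        {  1.5f, -1.0f }, { -1.5f, -1.0f }   /* rear wheels  */
    };
    int i;
    for (i = 0; i < 4; i++) {
        glPushMatrix();                  /* save the current matrix */
        glTranslatef(pos[i][0], 0.0f, pos[i][1]);
        glScalef(0.5f, 0.5f, 0.5f);      /* per-wheel scaling       */
        drawWheel();                     /* primitive defined in its
                                            own modelling coordinates */
        glPopMatrix();                   /* restore the matrix      */
    }
}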
Viewing Transformation
In order to be able to render an image of a scene we need
to define a virtual camera to take the picture
Just like cameras in real life, virtual cameras must have a
position in space, a direction (which way is it pointing?)
and an orientation (which direction is ‘up’?)
- camera position, a ‘look’ vector and an ‘up’ vector
viewing transformation – represents the positioning,
direction and orientation of the virtual camera
defines a new coordinate system – viewing
coordinates
- origin is normally at the position of the virtual camera,
the camera ‘looks’ along the z-axis, and the y-axis is ‘up’
Projection Transformation
Now that we know where our primitives are in relation to the virtual
camera, we are in a position to 'take' our picture → this involves
'projecting' 3-D primitives onto a 2-D image plane
Projection from 3D to 2D – defined by straight
projection rays (projectors) emanating from the
‘center of projection’, passing through each point of
the object, and intersecting the ‘projection plane’ to
form a projection
Two different types of projection transformation:
• parallel and
• perspective projections
A. Parallel Projection
For all projection transformations we can imagine the 2D
image plane as being positioned somewhere in 3-D space
Consider a line primitive in 3-D viewing coordinates being projected
onto the image plane:
the projection lines of the two end-points are parallel: after they
have passed through the image plane, they continue to infinity without
ever meeting
Eye at infinity
Parallel projections can be divided into two subtypes:
• orthographic (or orthogonal) projections and
• oblique projections
Orthographic (or orthogonal) projections
• projection lines intersect the image plane at right-angles (they are
orthogonal)
• often used to obtain front, side, & top views of an object
• commonly employed in engineering & architectural drawings, because
lengths and angles are accurately depicted and can be measured from the
drawings
Oblique projections
• projection lines intersect the image plane at an angle (they are
oblique)
Orthographic projection of an object
B. Perspective Projection
probably more familiar to you, even if you don’t know it
all projection lines converge to a point  centre of projection
Visual effect is similar to human visual system
Property:
- objects that are far away from the camera do appear smaller
- Projectors are rays (i.e., non-parallel)
- Single point centre of projection (i.e. projection lines converge at
a point)
- Difficult to determine exact size and shape of object
Normalization and Clipping
Before the final image can be generated, we need to decide which
primitives (or parts of primitives) are 'inside' the picture and which
are 'outside' → clipping.
- performed by defining a number of clipping planes
4 clipping planes: any point whose projection line intersects image
plane outside the image bounds (left, right, bottom or top) will not
be visible by the camera
2 extra clipping planes: near and far
Total 6: near, far, left, right, bottom and top
These 6 planes together form a bounded volume, inside which points will
be visible to the virtual camera, and outside which points will not be
seen → the view volume
Different types of projection transformation lead to different shapes
of view volume
For example:
• perspective – forms a view volume in the shape of a frustum, whereas
• parallel (orthographic) – leads to a view volume that is a cuboid (a
3-D rectangular box)
Contd…
[Figures: view volume of a parallel projection; view volume of a
perspective projection]
To simplify clipping, the view volume is normalised – each coordinate
is scaled to a standard range.
For example, if we scale to the range [0,1] then
- points at the near clipping plane will have a z-coordinate of 0 whilst
points at far clipping plane will have 1
- same applies to x-coordinate (left and right clipping planes) and y-
coordinate (bottom and top clipping planes)
Once the view volume has been normalised, and clipping has taken
place, our remaining points are in normalised coordinates
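As a sketch, the normalisation of each coordinate is a simple linear
mapping; the helper below is illustrative, not an OpenGL call:

/* Map a viewing-coordinate value v in [vmin, vmax] (e.g. z between
   the near and far planes) to the normalised range [0, 1]. */
float normalizeCoord(float v, float vmin, float vmax)
{
    return (v - vmin) / (vmax - vmin);
}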
Viewport Transformation
If we wish, we can draw in the whole of the screen window, but we can
also specify a sub-region of it for drawing
- whatever region we specify for drawing → the viewport
- we may want to draw several different (or identical) images in the
same display
viewport transformation defines the mapping from our normalised
coordinate system onto the viewport in the display window
After undergoing this transformation, our final rendered image is in
device coordinates (i.e. the coordinate system of the display device)
OpenGL Viewing Transformations
How to position the camera – i.e. transform the view (camera)
- Method I. Use transform functions to position all objects correctly:
gluLookAt(eye_x, eye_y, eye_z, center_x, center_y, center_z,
up_x, up_y, up_z)
• If gluLookAt() is not called, the camera has a default position (the
origin) and orientation (pointing down the negative z-axis, with an
up-vector of (0, 1, 0))
• Should be used after
glMatrixMode(GL_MODELVIEW); //combines viewing matrix and modeling
matrix into one matrix
glLoadIdentity();
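Putting these together, a typical camera setup looks like the
following; the eye and look-at values are illustrative:

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(4.0, 3.0, 5.0,   /* eye position              */
          0.0, 0.0, 0.0,   /* point the camera looks at */
          0.0, 1.0, 0.0);  /* up vector                 */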
OpenGL Projection Transformations
GL_PROJECTION matrix – to define viewing volume
OpenGL and the GLU library provide the following functions:
glOrtho – produces an orthographic (parallel) projection
- glOrtho(left, right, bottom, top, near, far)
- specifies clipping coordinates in six directions
gluPerspective(fovy, aspect, near, far)
- field of view is an angle (the bigger the angle, the smaller objects
appear)
- aspect ratio is usually set to width/height
- near clipping plane must be > 0
glFrustum – requires 6 parameters to specify the 6 clipping planes
- glFrustum(left, right, bottom, top, near, far)
- more general than gluPerspective
OpenGL Viewport Transformations
glViewport (GLint x, GLint y, GLsizei width, GLsizei
height);
- Defines a pixel rectangle in the window into which the
final image is mapped
- x, y specifies lower-left corner of the viewport, and width
and height are the size of the viewport rectangle
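A common place to combine the projection and viewport transformations
is a GLUT reshape callback. The following is a sketch with illustrative
parameter values:

void reshape(int width, int height)
{
    glViewport(0, 0, (GLsizei)width, (GLsizei)height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* keep the aspect ratio in step with the window */
    gluPerspective(45.0, (double)width / (double)height, 1.0, 200.0);
    glMatrixMode(GL_MODELVIEW);
}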
OpenGL Viewing Pipeline
OpenGL defines 3 matrices in its viewing pipeline:
• model view matrix: combines the effects of the modelling and viewing
transformations
- before this transformation, each primitive is defined in its own
coordinate system
- after it, all primitives are in the viewing coordinate system
• projection matrix: projection of 3-D viewing coordinates onto the 2D
image plane
- clipping is performed automatically based on the bounds we specify
for the view volume
• viewport matrix: defines the part of the display that will be used
for drawing
Matrix Modes
• Matrix Mode (glMatrixMode)
• ModelView Matrix (GL_MODELVIEW)
- model-related operations: glBegin, glEnd, glTranslate, glRotate,
glScale, gluLookAt, …
• Projection Matrix (GL_PROJECTION)
- set up the projection matrix: glViewport, gluPerspective/glOrtho/
glFrustum, …
Contd…
OpenGL functions that modify the current modelview matrix:
glTranslate*
glRotate*
glScale*
glLoadMatrix*
glMultMatrix*
gluLookAt
These are typically used to define the modelling part of the modelview
matrix.
gluLookAt – used to define the viewing part of the transformation
- defines the position, direction and orientation of a virtual camera
- defines the position, direction and orientation of a virtual camera
The basic format:
gluLookAt(x0, y0, z0, xref, yref, zref, Vx, Vy, Vz);
OpenGL 3-D Object Functions
The OpenGL library on its own does not provide any routines for drawing
3-D objects – it provides routines for drawing basic primitives such as
polygons only and relies on the programmer to construct 3-D objects in
the form of polygonal meshes
However, the glut library does provide a number of routines for
displaying a range of pre-defined 3-D objects, such as cubes, cones and
cylinders
We can divide these routines into those that draw regular polyhedra and
those that draw quadric or cubic surfaces
Regular Polyhedra
regular polyhedron / Platonic solid – a polyhedron in which all of the
faces are regular polygons
regular polygon – one in which all sides and angles are equal
There are five regular polyhedra in all: the cube, tetrahedron (4
faces), octahedron (8 faces), dodecahedron (12 faces) and icosahedron
(20 faces)
All can be drawn in either wireframe or solid using glut routines
Polyhedron    Wireframe Rendering          Solid Rendering
Cube          glutWireCube(GLdouble size)  glutSolidCube(GLdouble size)
Tetrahedron   glutWireTetrahedron()        glutSolidTetrahedron()
Octahedron    glutWireOctahedron()         glutSolidOctahedron()
Dodecahedron  glutWireDodecahedron()       glutSolidDodecahedron()
Icosahedron   glutWireIcosahedron()        glutSolidIcosahedron()

Quadric/Cubic Surfaces
The glut library provides routines for drawing a number of
quadric/cubic surfaces:
Sphere: glutWireSphere(GLdouble radius, GLint nSlices, GLint nStacks)
        glutSolidSphere(GLdouble radius, GLint nSlices, GLint nStacks)
Torus:  glutWireTorus(GLdouble inRad, GLdouble outRad, GLint nSlices,
        GLint nStacks)
        glutSolidTorus(GLdouble inRad, GLdouble outRad, GLint nSlices,
        GLint nStacks)
Cone:   glutWireCone(GLdouble baseRad, GLdouble height, GLint nSlices,
        GLint nStacks)
        glutSolidCone(GLdouble baseRad, GLdouble height, GLint nSlices,
        GLint nStacks)
Teapot: glutWireTeapot(GLdouble size)
        glutSolidTeapot(GLdouble size)
Both cone routines take 4 arguments: the radius of the cone at its
base, the height of the cone, and the number of slices and stacks that
make up the cone
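For example, a display routine might draw two of these objects as
follows; the argument values are illustrative:

glutWireTeapot(1.0);             /* size                */
glTranslatef(2.5f, 0.0f, 0.0f);
glutSolidSphere(0.8,             /* radius              */
                24,              /* nSlices (longitude) */
                16);             /* nStacks (latitude)  */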

Wire-Frame Model - Teapot


Lighting in OpenGL
One way we can make our scenes look cooler is by adding light to them:
call glEnable(GL_LIGHTING), which enables Phong-style lighting, and
glDisable(GL_LIGHTING) if you want to turn it back off
Then call glEnable(GL_LIGHT0) & glEnable(GL_LIGHT1) to enable two light
sources, numbered 0 and 1
You can disable individual light sources by calling
glDisable(GL_LIGHT0) and glDisable(GL_LIGHT1)
More lights are available: GL_LIGHT0, GL_LIGHT1, …, up to GL_LIGHT7
Types of lighting
Ambient light:
- scattered so much by the environment that its direction is impossible
to determine
- when it strikes a surface, it is scattered equally in all directions
Diffuse:
- light comes from one direction, so it is brighter when it hits a
surface head-on than when it barely glances off
- once it hits a surface, however, it is scattered equally in all
directions
Specular:
- shininess
- light comes from a particular direction
- it tends to bounce off the surface in a preferred direction.
A well-collimated laser beam bouncing off a high-quality
mirror produces almost 100 percent specular reflection
GLfloat ambientColor[] = {0.2f, 0.2f, 0.2f, 1.0f}; //Color(0.2, 0.2, 0.2)
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientColor); //Add
ambient light
ambientColor – an array of 4 GLfloats
the first 3 floats represent the RGB intensity of the light
Note: the values don't exactly represent a color; they represent an
intensity of light
Lights can be directional (infinitely far away) or positional
Positional lights can be either point lights or spotlights
Directional lights have the w component set to 0, and positional
lights have w set to 1
Light properties are specified with the glLight functions:
Add positioned light
GLfloat lightColor0[] = {0.5f, 0.5f, 0.5f, 1.0f}; //Color (0.5, 0.5, 0.5)
GLfloat lightPos0[] = {4.0f, 0.0f, 8.0f, 1.0f}; //Positioned at (4, 0, 8)
glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor0); //set color/ intensity of light
glLightfv(GL_LIGHT0, GL_POSITION, lightPos0);
The first 3 elements of array are position, and last element is 1
Add directed light
GLfloat lightColor1[] = {0.5f, 0.2f, 0.2f, 1.0f}; //Color (0.5, 0.2, 0.2)
//Coming from the direction (-1, 0.5, 0.5)
GLfloat lightPos1[] = {-1.0f, 0.5f, 0.5f, 0.0f};
glLightfv(GL_LIGHT1, GL_DIFFUSE, lightColor1);
glLightfv(GL_LIGHT1, GL_POSITION, lightPos1);
set up our second light source - make it red, with an intensity of
(0.5, 0.2, 0.2)
Instead of giving it a fixed position, we want to make it directional,
so that it shines the same amount across our whole scene in a
fixed direction
To do that, we need to use 0 as the last element in lightPos1.
When we do that, instead of the first three elements' representing
the light's position, they represent the direction from which the
light is shining, relative to the current transformation state.
Note that glLightfv cannot be called inside a glBegin-glEnd block.
Contd…
Steps required to add lighting to your scene:
1. Define normal vectors for each vertex of every object. These normals
determine the orientation of the object relative to the light sources.
2. Create, select, and position one or more light sources.
3. Create and select a lighting model, which defines the level of global
ambient light and the effective location of the viewpoint (for the
purposes of lighting calculations).
4. Define material properties for the objects in the scene.
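A consolidated sketch of these four steps; the names and values are
illustrative, not a fixed recipe:

void initLighting(void)
{
    GLfloat lightPos[]   = { 4.0f, 4.0f, 4.0f, 1.0f }; /* w=1: positional */
    GLfloat white[]      = { 1.0f, 1.0f, 1.0f, 1.0f };
    GLfloat matDiffuse[] = { 0.8f, 0.1f, 0.1f, 1.0f };
    glEnable(GL_LIGHTING);                        /* lighting model (3) */
    glEnable(GL_LIGHT0);                          /* light source (2)   */
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, white);
    glMaterialfv(GL_FRONT, GL_DIFFUSE, matDiffuse);  /* material (4) */
    /* step (1), the normals, is done per vertex while drawing:
       call glNormal3f(...) before each glVertex3f(...) */
}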
Normal Vectors
• An object's normals determine its orientation relative to the light
sources.
• OpenGL uses the assigned normal to determine how much light that
particular vertex receives from each light source.
• Set by
- glNormal3f(x, y, z);
- glNormal3fv(p);
- glEnable(GL_NORMALIZE) enables automatic normalization
Normal Vector
1. A normal vector (or normal) points in a direction perpendicular to a
surface (in general).
2. An object's normal vectors define its orientation relative to light
sources. These vectors are used by OpenGL to determine how much light
the object receives at its vertices.
3. Lighting for vertices is calculated after the MODELVIEW
transformation and before the PROJECTION transformation (vertex
shading):
glBegin(GL_POLYGON);
glNormal3fv(n0);
glVertex3fv(v0);
glNormal3fv(n1);
glVertex3fv(v1);
glNormal3fv(n2);
glVertex3fv(v2);
glNormal3fv(n3);
glVertex3fv(v3);
glEnd();
glMaterial functions are used to specify material properties, for
example:
GLfloat diffuseRGBA[] = {1.0f, 0.0f, 0.0f, 1.0f};
GLfloat specularRGBA[] = {1.0f, 1.0f, 1.0f, 1.0f};
glMaterialfv(GL_FRONT, GL_DIFFUSE, diffuseRGBA);
glMaterialfv(GL_FRONT, GL_SPECULAR, specularRGBA);

// A directional light
glLightfv(GL_LIGHT0, GL_POSITION, direction);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuseRGBA);
glLightfv(GL_LIGHT0, GL_SPECULAR, specularRGBA);
// A spotlight
glLightfv(GL_LIGHT1, GL_POSITION, position);
glLightfv(GL_LIGHT1, GL_DIFFUSE, diffuseRGBA);
glLightfv(GL_LIGHT1, GL_SPOT_DIRECTION, spotDirection);
glLightf(GL_LIGHT1, GL_SPOT_CUTOFF, 45.0f);

When rendering an object, normals should be provided for each face or for
each vertex so that lighting can be computed:
glNormal3f(nx, ny, nz);
glVertex3f(x, y, z);
glRotatef(_angle, 0.0f, 1.0f, 0.0f);
glColor3f(1.0f, 1.0f, 0.0f);
glBegin(GL_QUADS);
//Front
glNormal3f(0.0f, 0.0f, 1.0f);
glVertex3f(-1.5f, -1.0f, 1.5f);
glVertex3f(1.5f, -1.0f, 1.5f);
glVertex3f(1.5f, 1.0f, 1.5f);
glVertex3f(-1.5f, 1.0f, 1.5f);
//Right
glNormal3f(1.0f, 0.0f, 0.0f);
glVertex3f(1.5f, -1.0f, -1.5f);
glVertex3f(1.5f, 1.0f, -1.5f);
glVertex3f(1.5f, 1.0f, 1.5f);
glVertex3f(1.5f, -1.0f, 1.5f);
//Back
glNormal3f(0.0f, 0.0f, -1.0f);
glVertex3f(-1.5f, -1.0f, -1.5f);
glVertex3f(-1.5f, 1.0f, -1.5f);
glVertex3f(1.5f, 1.0f, -1.5f);
glVertex3f(1.5f, -1.0f, -1.5f);
//Left
glNormal3f(-1.0f, 0.0f, 0.0f);
glVertex3f(-1.5f, -1.0f, -1.5f);
glVertex3f(-1.5f, -1.0f, 1.5f);
glVertex3f(-1.5f, 1.0f, 1.5f);
glVertex3f(-1.5f, 1.0f, -1.5f);
glEnd();
Selecting an OpenGL Lighting Model
The lighting model is specified with:
void glLightModel{if}(GLenum pname, TYPE param);
void glLightModel{if}v(GLenum pname, const TYPE *param);
Contd…
• Global Ambient Light (not from any particular source)
GLfloat lmodel_ambient[] = { 0.2, 0.2, 0.2, 1.0 };
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, lmodel_ambient);
• Local or Infinite Viewpoint
A local viewpoint tends to yield more realistic results (specular
highlights), but overall performance is decreased:
glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_TRUE);
This call places the viewpoint at (0, 0, 0) in eye coordinates.
• Two-sided Lighting
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
Both front-facing polygons and back-facing polygons are lit.
OpenGL reverses the normals to calculate the lighting for back-facing
polygons.
Defining Material Properties
void glMaterial{if}(GLenum face, GLenum pname, TYPE param);
void glMaterial{if}v(GLenum face, GLenum pname, const TYPE *param);
• face: GL_FRONT, GL_BACK, GL_FRONT_AND_BACK
Color Material Mode
Another technique for minimizing the performance costs associated with
changing material properties is to use glColorMaterial():
void glColorMaterial(GLenum face, GLenum mode);
• face: GL_FRONT, GL_BACK, GL_FRONT_AND_BACK
• mode: GL_AMBIENT, GL_DIFFUSE, GL_EMISSION, etc.
You need to call glEnable(GL_COLOR_MATERIAL) first; then you can change
the material properties by changing the current color with glColor*().
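A short sketch of color material mode in use; the color values are
illustrative:

glEnable(GL_COLOR_MATERIAL);
glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);
glColor3f(0.2f, 0.5f, 0.8f);  /* now updates the material color */
/* ... draw geometry ... */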
Shading in OpenGL
• A line or a filled polygon primitive can be drawn with a single color (flat
shading) or with many different colors (smooth shading, also called Gouraud
shading).
void glShadeModel(GLenum mode);
The mode parameter can be either GL_SMOOTH (the default) or GL_FLAT.
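For example, a triangle with a different color at each vertex shows the
difference (the coordinates are illustrative): with GL_SMOOTH the
colors are interpolated across the face, while with GL_FLAT the whole
face takes the color of the last vertex.

glShadeModel(GL_SMOOTH);  /* try GL_FLAT to compare */
glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
glEnd();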
Texture Mapping
We would like to give objects a more varied and realistic appearance
through complex variations in reflectance that convey textures.
There are two main sources of natural texture:
• Surface markings – variations in albedo (i.e. the total light
reflected from the ambient and diffuse components of reflection), and
• Surface relief – variations in 3D shape which introduce local
variability in shading.
We will focus only on surface markings.
[Figure: examples of surface markings and surface relief]
Texture Sources
Texture Procedures
- Textures may be defined procedurally. As input, a procedure requires
a point on the surface of an object, and it outputs the surface albedo
at that point. Examples of procedural textures include checkerboards,
fractals, and noise.
[Figure: a procedural checkerboard pattern applied to a teapot. The
checkerboard texture comes from the OpenGL Programming Guide chapter on
texture mapping.]
Texturing in OpenGL
Steps in Texture Mapping
1. Create a texture object and specify a texture for that object.
2. Indicate how the texture is to be applied to each pixel.
3. Enable texture mapping.
4. Draw the scene, supplying both texture and geometric coordinates.
Texturing in OpenGL
To use texturing in OpenGL, a texturing mode must be enabled:
glEnable(GL_TEXTURE_2D); //to display 2D textures on polygons
Texture coordinates are normalized, so that (0, 0) is always the lower
left corner, and (1, 1) is always the upper right corner.
Since multiple textures can be present at any time, the texture to
render with must be selected.
Use glGenTextures to create texture handles and glBindTexture to select
the texture with a given handle.
A texture can then be loaded from main memory with glTexImage2D. For
example:
GLuint handles[2];
glGenTextures(2, handles);
glBindTexture(GL_TEXTURE_2D, handles[0]); // Initialize texture
parameters and load a texture with glTexImage2D
glBindTexture(GL_TEXTURE_2D, handles[1]); // Initialize texture
parameters and load another texture
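As a sketch of the steps the comments above leave out (width, height
and pixels are assumed to describe an RGB image already in memory):

glBindTexture(GL_TEXTURE_2D, handles[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);
/* Draw a textured quad: give glTexCoord2f before each vertex */
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();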
