
Computer Graphics Laboratory

Lab – 01
2D Object Creation and Applying Transformations
(Basic OpenGL Programming)
Version-1.0

Objective of graphics programming


 To create synthetic images in such a way that they are close to reality.
 To create artificial animation that emulates virtual reality.
 But why? We already have tools like the camera to create images of
natural scenes and the video camera for creating natural movies!

OpenGL – Open Graphics Library


 What it is:
 a software interface to graphics hardware
 a graphics programming library
 Dominant industry "standard" for 3D graphics
 Consists of about 150 basic commands
 Platform independent
 What it is not:
o a windowing system (no window creation)
o a UI system (no keyboard and mouse routines)
o a 3D modeling system (Open Inventor, VRML, Java3D)

OpenGL Related Libraries


 GL – gl.h – Opengl32.lib
 Provides basic commands for graphics drawing
 GLU (OpenGL Utility Library) – glu.h – glu32.lib
 Uses GL commands to perform compound graphics operations, such as
 viewing orientation and projection specification
 polygon tessellation, surface rendering, etc.
 GLUT (OpenGL Utility Toolkit) – glut.h – glut.lib
 A window system-independent toolkit for user interaction

OpenGL Library Organization


[Fig: OpenGL library organization. The application program calls GL (the generic graphics API), GLU, and GLUT; GLUT handles user interface through the host OS; on Windows the stack goes through WGL and DirectDraw, on X through Xlib/Xt and GLX (either will exist in any system); rendered output goes to the frame buffer.]



Convention of function naming

glColor3f(...)
 gl – GL library prefix
 Color – root command
 3 – number of arguments
 f – type of arguments (f = float)
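For example, glVertex2i takes two integer arguments, glColor3f takes three floats, and a trailing v (as in glVertex3fv) marks the vector form, which takes a pointer to an array instead of separate arguments.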

OpenGL APIs
 Primitive functions
 glBegin(type);
 glVertex(…);
 glVertex(…);
 …
 glVertex(…);
 glEnd();
 Attribute functions
 glColor3f(…);
 Transformation functions
 glRotate(…); glTranslate(…);
 Viewing functions
 gluLookAt(…);
 Input functions
 glutKeyboardFunc(…);
 Control functions
 Inquiry functions

Primitives
 Primitives: Points, Lines & Polygons
 Each object is specified by a set of Vertices

 Grouped together by glBegin & glEnd


glBegin(type)
glVertex*( )
glVertex*( )

glEnd( );
 type can have 10 possible values
 To specify attributes of last Vertex drawn:
 glColor*()/ glIndex*() current vertex color
 glNormal*() current vertex normal (lighting)
 glMaterial*() current material property (lighting)
 glTexCoord*() current texture coordinate
 glEdgeFlag*() edge status (surface primitives)

Primitive Types

[Fig: the ten OpenGL primitive types (images omitted).]


Build your First openGL Program
Developing programs using the openGL library is very simple. openGL
has a rich set of library functions that help you develop your own project.

Follow the GLUT setup file for detailed information about creating
an openGL project in the CodeBlocks IDE.
After creating a project, there will be a default program. Run the
project and you will see some graphics objects in your output. You
don't need to analyze the program. Simply delete all the code from
the main.cpp file.
Now let's try to develop our first simple openGL program. What you
need to do first is to add the following header files at the top of your
main.cpp file.
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>
#include <stdlib.h>
#include <stdio.h>
#include <windows.h>
You already know about the last three header files. The first three are
the required header files for the GL, GLU and GLUT library functions.

Suppose we want to create a simple triangle and apply some
animation to it. To create an object, it is better practice to develop a
user-defined function for that object. We will just call that function
whenever the object is required. So let's create a function named
triangle in your main.cpp file like the one below:
void triangle()
{
    glBegin(GL_TRIANGLES); // Denotes the beginning of a group of vertices that define one or more primitives.
    glColor3f(1.0, 1.0, 1.0);
    glVertex2f(2.0, 2.0);
    glColor3f(0.0, 1.0, 0.0);
    glVertex2f(2.0, 0.0);
    glColor3f(0.0, 1.0, 0.0);
    glVertex2f(0.0, 0.0);
    glEnd(); // Terminates a list of vertices that specify a primitive initiated by glBegin.
}
Here we create the triangle polygon in its own object space, as shown below:



(2, 2)

(0, 0) (2, 0)

Fig: sample object drawing in object space.

Its primitive type is GL_TRIANGLES, which forms filled triangles
from the given set of vertex points. The glVertex2f(x, y) function is
used to define a vertex point in 2D object space, and the attribute
function glColor3f(R, G, B) is used to apply a particular color
to that vertex. Each of the three parameters varies between 0 and 1:
0 means 0% of that color and 1 means 100% of that color. The combination
of the three components gives the color applied to that vertex. Note that
the vertex colors are interpolated across the triangle depending on the
shading model (discussed later).

Now our task is to display the object. To do this we develop another
user-defined function called display(), which includes some openGL
library functions along with our triangle function, like below:
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);

glMatrixMode( GL_PROJECTION );
glLoadIdentity();
gluOrtho2D(-3, 3, -3, 3);

glMatrixMode( GL_MODELVIEW );
glLoadIdentity();

glViewport(0, 0 ,windowWidth ,windowHeight);

triangle();

glFlush();
glutSwapBuffers();
}
Let's introduce the above library functions.
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT) -
takes the bitwise OR of all the buffers to be cleared. The color
buffer stores the color of each pixel of the projected scene, and the depth
buffer stores the depth information of each pixel for multiple
overlapping objects. Both need to be cleared before creating a new scene.

glViewport(0, 0, windowWidth, windowHeight) - sets
the region within a window that is used for mapping the clipping
volume coordinates to physical window coordinates. The first two
parameters are the x and y coordinates of the lower-left corner
of the region, and the last two are its width and height, which
must be less than or equal to the original window width and
height. Remember to declare two global variables called
windowWidth and windowHeight and set their initial values. The figure
below illustrates the function.
[Fig: an 800 × 600 window with glViewport(400, 0, 400, 300) selecting the lower-right quadrant; (0, 0) is the lower-left corner of the window and (400, 300) its center.]

Here we can see that, if we want, we can create more than one
viewport in one window; this will be demonstrated later. But for now we
create a viewport equal to the original window size.
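
A minimal sketch of those globals (the initial values here are an assumption; any size works):

int windowWidth = 800;    // initial window width in pixels
int windowHeight = 600;   // initial window height in pixels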

glMatrixMode() - This function determines which matrix stack
(GL_MODELVIEW, GL_PROJECTION, or GL_TEXTURE) is used
for matrix operations.

GL_MODELVIEW – Matrix operations affect the modelview matrix stack. (Used to move objects around the scene.)

GL_PROJECTION – Matrix operations affect the projection matrix stack. (Used to define the clipping volume.)

GL_TEXTURE – Matrix operations affect the texture matrix stack. (Manipulates texture coordinates.)

Basically, openGL works by representing objects, their
transformations, projections, etc. in matrix form. Everything is done
using one of the three matrix stacks above; mainly the modelview and
projection matrix stacks are used. Each operation in a category is
accumulated by multiplying the current operation's matrix with the
stored matrix.

The projection matrix stack stores the clipping volume defined
by the gluOrtho2D(-3, 3, -3, 3) function. The
glMatrixMode() function selects the required matrix stack, and
glLoadIdentity() then replaces the current transformation
matrix of that stack with the identity matrix.

gluOrtho2D(Xmin, Xmax, Ymin, Ymax) defines a rectangular
clipping volume in world space specified by the parameters Xmin,
Xmax and Ymin, Ymax. This actually specifies which portion of the
world coordinate space will be displayed in the scene; it
defines a kind of eye window for the viewer. Compared with actual
world scenes, a viewer can only see the objects that fall inside
the eye's boundary. So the part of the scene outside the boundary of the
window is clipped and will not be displayed. You can examine this by
changing the window size in the gluOrtho2D() function. Finally, this
clipping window is mapped onto the viewport and the scene is displayed
in the specified viewport.
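
For example, doubling the bounds with gluOrtho2D(-6, 6, -6, 6) maps a larger region of the world into the same viewport, so the same triangle appears half as large on screen.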

glMatrixMode(GL_MODELVIEW); and glLoadIdentity();
are likewise used to select the modelview matrix stack and replace its
current matrix with the identity matrix. The modelview matrix stores,
as a product of matrices, the multiple transformations that will be
applied to particular object(s). Therefore, creating an object and
applying transformations on that object must be done
after selecting the modelview matrix stack.

Then we draw the triangle by calling the triangle() function. Its
vertices are transformed by the current matrix on the modelview
matrix stack (here the identity matrix).

Finally, glFlush() is used to ensure the drawing commands are
actually executed rather than stored in a buffer awaiting additional
OpenGL commands.
glutSwapBuffers()performs a buffer swap on the layer in use for
the current window. Specifically, glutSwapBuffers promotes the
contents of the BACK BUFFER of the layer in use of the current
window to become the contents of the FRONT BUFFER. The
contents of the back buffer then become undefined. The update
typically takes place during the vertical retrace of the monitor, rather
than immediately after glutSwapBuffers is called.

By now, our display function is ready. At this stage all we need is a
main function to execute the defined functions. Here is the main()
function for this project:
int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);

    glutInitWindowPosition(100, 100);
    glutInitWindowSize(windowWidth, windowHeight);
    glutCreateWindow("Triangle-Demo");

    glShadeModel(GL_SMOOTH);
    glEnable(GL_DEPTH_TEST);

    glutDisplayFunc(display);

    glutMainLoop();

    return 0;
}
Now let's learn about the new functions used here:
glutInit(&argc, argv) - This initializes the GLUT library,
passing command line parameters (this has an effect mostly on
Linux/Unix).
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH) -
Initial parameters for the display. In this case, we
are specifying an RGB display (GLUT_RGB) along with double-
buffering (GLUT_DOUBLE), so the screen won't flicker when we
redraw it. GLUT_DEPTH requests a depth buffer.
glutInitWindowPosition(100,100) – sets the display window's
initial position on your display device. Remember, display
screen coordinates start from the upper-left corner of the screen.
glutInitWindowSize(windowWidth, windowHeight) - Initial
window size: width, height.
glutCreateWindow("Triangle-Demo") - Creates the window and
sets its title.
glShadeModel( GL_SMOOTH ) - Sets the default shading model.
OpenGL primitives are always shaded, but the shading
model can be flat (GL_FLAT) or smooth (GL_SMOOTH).
glEnable( GL_DEPTH_TEST ) - glEnable enables an OpenGL
drawing feature. Here it enables the depth test for the depth
buffer.
glutDisplayFunc(display) - This function tells GLUT which
function to call whenever the window's contents must be drawn. This
can occur when the window is resized or uncovered or when GLUT
is specifically asked to refresh with a call to the glutPostRedisplay
function (discussed later).
glutMainLoop() - This function begins the main GLUT event-
handling loop. The event loop is the place where all keyboard,
mouse, timer, redraw, and other window messages are handled. This
function does not return until program termination.

At last we are ready to execute our program. This simple program


will display a triangle in the display window.

Here in this program, let's verify and observe some effects:

1. Change the viewport's position and size and try to understand
the effect of the viewport transformation.
2. Now change the clipping window size and try to understand its
effect on the objects' size.

Till now, we are able to draw and display a particular object. The next
task is to apply some transformations on that object to place it
in world space in an appropriate fashion. In openGL, there
are three basic transformation functions:
1. glTranslatef(Xval,Yval,Zval) – this function forms a
translation matrix that translates (simply adds to) the current
object's x, y, z coordinate values by Xval, Yval, Zval
respectively. This matrix is multiplied with the modelview matrix
stored on the modelview matrix stack.
2. glScalef(Xval,Yval,Zval) – this function forms a scale
matrix that scales (simply multiplies) the current object's x, y, z
coordinate values by Xval, Yval, Zval respectively. This matrix
is multiplied with the modelview matrix stored on the modelview
matrix stack.
3. glRotatef(theta,Xval,Yval,Zval) – this function forms
a rotation matrix that rotates the current object by theta degrees
in the anti-clockwise direction about the vector formed by Xval,
Yval, Zval, centered at the origin. This matrix is multiplied with
the modelview matrix stored on the modelview matrix stack.

Now let's apply these functions to our created triangle.

Suppose we want to translate the triangle 1 unit right and 1 unit
down from its current position. Then we need to call
glTranslatef(1,-1,0) just before the triangle() call in our
display() function. In openGL, the transformations you want to
apply must be defined before the object: all the
transformation functions defined before the object will affect the
object.
If you want more than one transformation on an object, for example
a translation first and then a rotation of the triangle, then
both transformation functions must be called before the triangle:
glTranslatef(1,-1,0) needs to be called immediately before
triangle(), and then glRotatef(45,0,0,1) before the
glTranslatef(1,-1,0). That is, in the following order:
glRotatef(45,0,0,1);
glTranslatef(1,-1,0);
triangle();
This will translate the triangle first and then rotate it 45 degrees
anti-clockwise about the Z axis.
Remember, the order in which you place the transformation functions is
important, because the output of first translating, then rotating is not
the same as the output of first rotating, then translating. To verify
this, replace the above calls with the opposite order:
glTranslatef(1,-1,0);
glRotatef(45,0,0,1);
triangle();

Solve class work 01 at this stage.

Now we want to display the same triangle twice. Very simple: call
the triangle() function again just after the first one. But in the output
you will see only one triangle (same as the previous output). Why?
Actually the second triangle is also drawn along with the first one, and
they completely overlap. But why do they overlap? We did not apply
any transformations to the second triangle.
Remember, all the transformations defined before an object will
affect that object (mentioned above).
But we don't want to apply the transformations to the second
triangle. What could be the solution? You can save the current
modelview matrix with glPushMatrix() before applying the first
triangle's transformations and restore it with glPopMatrix()
afterwards, like below:
glPushMatrix();
glTranslatef(1,-1,0);
glRotatef(45,0,0,1);
triangle();
glPopMatrix();

glPushMatrix();
triangle();
glPopMatrix();
Now see the output. There will be two triangles in different positions
and orientations; the second triangle is now free of the effect of the
transformations.
Here
glPushMatrix() - pushes the current matrix onto the current
matrix stack. This function is most often used to save the current
transformation matrix so that it can be restored later with a call to
glPopMatrix().
glPopMatrix() - Pops the current matrix off the matrix stack.

Now try some different transformations for the second triangle, and
you will see that they affect only the second one.

Now we will introduce the GLUT keyboard and idle functions. First,
the purpose of the keyboard function is to listen for a key press and
perform an action in response to it. We need to define a user-defined
function in which we specify the actions for each key press.
Say we want to translate the first triangle left, right, up and down
using the keys L, R, U and D respectively. Then the function should
look like below:
void myKeyboardFunc(unsigned char key, int x, int y)
{
    switch (key)
    {
    case 'r':
        Txval += 0.2;
        break;
    case 'l':
        Txval -= 0.2;
        break;
    case 'u':
        Tyval += 0.2;
        break;
    case 'd':
        Tyval -= 0.2;
        break;
    case 27: // Escape key
        exit(1);
    }
    glutPostRedisplay();
}
Here the function's parameter list is predefined and cannot be changed.
The parameter key receives the pressed key's value, and x and y give
the mouse position in case mouse actions are to be handled. We also
use two global variables named Txval and Tyval; we simply change
their values in response to key presses.
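
A minimal sketch of those globals (the initial values are an assumption):

GLfloat Txval = 0, Tyval = 0;  // current x/y translation of the first triangle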
Our task is to translate the first triangle left, right, up and down using
the keys L, R, U and D respectively. So we must use these global
variables as parameters of the glTranslatef() function, like below:
glPushMatrix();
glTranslatef(Txval,Tyval,0);
glRotatef(45,0,0,1);
triangle();
glPopMatrix();
This is not done yet. You must add one more function call in your
keyboard function: glutPostRedisplay().
This function informs the GLUT library that the current window
needs to be refreshed. Multiple calls to this function before the next
refresh result in only one repainting of the window. Therefore,
without this function you cannot see the effect of the keyboard actions.

Now you need to register your keyboard function (myKeyboardFunc)
in the main() function using
glutKeyboardFunc(myKeyboardFunc), which sets the keyboard
callback function for the current window.
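
For instance, this one line would go in main(), say right after glutDisplayFunc(display) (the exact placement is an assumption; any point before glutMainLoop() works):

glutKeyboardFunc(myKeyboardFunc);  // register the keyboard callback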

Finally, the last function for this tutorial is the idle function
(optional). Suppose you want to apply some transformation continuously
until you stop it. Say we want to scale the second triangle continuously
until a key is pressed. For this we add some cases to our keyboard
function and define a function called animate() like below:
case 'S':
    flagScale = true;
    break;
case 's':
    flagScale = false;
    break;
Add the above cases to the switch statement.



void animate()
{
    if (flagScale == true)
    {
        sval += 0.005;
        if (sval > 3)
            sval = 0.005;
    }
    glutPostRedisplay();
}
Here the global variable sval is used in the display function like below:
glPushMatrix();
glScalef(sval,sval,1);
triangle();
glPopMatrix();

Now you need to register your animate function in the main()
function using glutIdleFunc(animate), which is
particularly useful for continuous animation or other background
processing.
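
A sketch of the remaining pieces implied above (the names come from the text; the initial values and placement are assumptions):

GLfloat sval = 0.005;     // current scale factor of the second triangle
bool flagScale = false;   // toggled by the 'S'/'s' keys

// in main(), before glutMainLoop():
glutIdleFunc(animate);    // register the idle callback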

At this stage, let's play with more than one viewport. Modify your
display function with the code segments below:
glPushMatrix();
glViewport(400, 300 ,400 ,300);
glTranslatef(Txval,Tyval,0);
glRotatef(45,0,0,1);
triangle();
glPopMatrix();

glPushMatrix();
glViewport(0, 0 ,400,300);
glScalef(sval, sval,1);
triangle();
glPopMatrix();
Here we set a specific viewport for each object, omitting
the single general viewport call. In this way you can create
more than one viewport in your output window and display
specific objects in each viewport.

Solve class work 02 at this stage.



Computer Graphics Laboratory
Lab – 02
3D Object Creation and Applying Transformations
Version-1.0

Today we will discuss the creation of 3-dimensional objects
and their transformations in the model world. We will also discuss
the viewing coordinate system and the clipping of a 3D scene.
In the last tutorial we performed modelview transformations and
projections of a 2D scene. As a 2D scene has only one plane (the XY
plane), the viewer has no need to move around; the viewer is
fixed and placed at the origin of the modeling world. Therefore, in
2D we do not need to explicitly define the viewing coordinate
system; we only defined the clipping volume. But for a 3D scene,
we also need to define the viewing coordinate system, as the
viewer can be placed anywhere in the modeling world.

At first, let's create a 3D object in object space. This is similar
to 2D object creation. Here we are going to create a pyramid like
the one below:

[Fig: pyramid with apex (1, 4, 1) at index 4, and front base corners (0, 0, 2) at index 1 and (2, 0, 2) at index 2.]
Today we create our desired pyramid in a little different way than we
did in 2D object creation. Here we see that the pyramid is a
combination of four triangular planes and one rectangular plane.
Creating four triangles and one rectangle separately would require 16
points, but we actually need only five points, which we will reuse to
create the four triangular planes and one rectangular plane.
Therefore, we first store all the required points (coordinate values)
in an array. We also store the colors for the points in another
array, defined in the global section of our program, like below:
static GLfloat v_pyramid[5][3] = {
{0.0, 0.0, 0.0}, //point index 0
{0.0, 0.0, 2.0}, //point index 1
{2.0, 0.0, 2.0}, //point index 2
{2.0, 0.0, 0.0}, //point index 3
{1.0, 4.0, 1.0} //point index 4
};

static GLubyte p_Indices[4][3] = {
    {4, 1, 2}, // indices for drawing triangle plane 1
    {4, 2, 3}, // indices for drawing triangle plane 2
    {4, 3, 0}, // indices for drawing triangle plane 3
    {4, 0, 1}  // indices for drawing triangle plane 4
};



static GLubyte quadIndices[1][4] = {
    {0, 3, 2, 1} }; // indices for drawing the quad plane

static GLfloat colors[5][3] = {
    {0.0, 0.0, 1.0}, // color for point index 0
    {0.5, 0.0, 1.0}, // color for point index 1
    {0.0, 1.0, 0.0}, // color for point index 2
    {0.0, 1.0, 1.0}, // color for point index 3
    {0.8, 0.0, 0.0}  // color for point index 4
};
Above, the arrays p_Indices and quadIndices are important for
creating the triangular and rectangular planes. In p_Indices, each row
of three values gives the indices into the array v_pyramid of the three
coordinate points that form one triangular plane. But be careful when
adding the index values to the p_Indices array, because their ordering
is vital.
The ordering of the indices must follow the anti-clockwise
rotational sequence for drawing a plane, as shown in the figure above.
That is, the points must be drawn in either (4,1,2) or (1,2,4) or
(2,4,1) order. Ordering is important for specifying the surface or
plane normal, which is used for applying material properties and
lighting effects (we will see this in the next tutorial). Joining the
points in the anti-clockwise direction when creating a plane surface
makes its normal point in the outward direction.

Therefore, in this tutorial we will draw the pyramid by drawing its five
planes, and we also calculate the normal of each plane as we draw it.
The code segment is given below:
static void getNormal3p(GLfloat x1, GLfloat y1, GLfloat z1,
                        GLfloat x2, GLfloat y2, GLfloat z2,
                        GLfloat x3, GLfloat y3, GLfloat z3)
{
    GLfloat Ux, Uy, Uz, Vx, Vy, Vz, Nx, Ny, Nz;

    // U = P2 - P1 (first edge vector of the plane)
    Ux = x2 - x1;
    Uy = y2 - y1;
    Uz = z2 - z1;

    // V = P3 - P1 (second edge vector of the plane)
    Vx = x3 - x1;
    Vy = y3 - y1;
    Vz = z3 - z1;

    // N = U x V (the cross product gives the plane normal)
    Nx = Uy*Vz - Uz*Vy;
    Ny = Uz*Vx - Ux*Vz;
    Nz = Ux*Vy - Uy*Vx;

    glNormal3f(Nx, Ny, Nz);
}

void drawpyramid()
{
    glBegin(GL_TRIANGLES);
    for (GLint i = 0; i < 4; i++) {
        // compute and set the normal from the plane's three vertices
        getNormal3p(v_pyramid[p_Indices[i][0]][0], v_pyramid[p_Indices[i][0]][1], v_pyramid[p_Indices[i][0]][2],
                    v_pyramid[p_Indices[i][1]][0], v_pyramid[p_Indices[i][1]][1], v_pyramid[p_Indices[i][1]][2],
                    v_pyramid[p_Indices[i][2]][0], v_pyramid[p_Indices[i][2]][1], v_pyramid[p_Indices[i][2]][2]);

        glVertex3fv(&v_pyramid[p_Indices[i][0]][0]);
        glVertex3fv(&v_pyramid[p_Indices[i][1]][0]);
        glVertex3fv(&v_pyramid[p_Indices[i][2]][0]);
    }
    glEnd();

    glBegin(GL_QUADS);
    for (GLint i = 0; i < 1; i++) {
        getNormal3p(v_pyramid[quadIndices[i][0]][0], v_pyramid[quadIndices[i][0]][1], v_pyramid[quadIndices[i][0]][2],
                    v_pyramid[quadIndices[i][1]][0], v_pyramid[quadIndices[i][1]][1], v_pyramid[quadIndices[i][1]][2],
                    v_pyramid[quadIndices[i][2]][0], v_pyramid[quadIndices[i][2]][1], v_pyramid[quadIndices[i][2]][2]);

        glVertex3fv(&v_pyramid[quadIndices[i][0]][0]);
        glVertex3fv(&v_pyramid[quadIndices[i][1]][0]);
        glVertex3fv(&v_pyramid[quadIndices[i][2]][0]);
        glVertex3fv(&v_pyramid[quadIndices[i][3]][0]);
    }
    glEnd();
}
Here the getNormal3p function is used to compute the normal of each
plane. Enable glEnable(GL_NORMALIZE) in the main() function,
which will check and, if necessary, renormalize all your surface
normals. glVertex3fv is a vector function that accepts a pointer to an
array as its parameter, as shown above. Use a common color for the
pyramid by calling glColor3f(1,0,0) in the display() function.
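
A brief sketch of where those calls could go (the placement is an assumption consistent with the text):

// in main():
glEnable(GL_NORMALIZE);   // renormalize surface normals if needed

// in display(), before drawing:
glColor3f(1, 0, 0);       // one common color for the whole pyramid
drawpyramid();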

At this stage, our 3D object creation is complete. Now we need to
define the viewing coordinate system, as the viewer can be
placed anywhere in the modeling world.
We will define the viewing coordinate system using the viewer's
eye position, a look-at point (where the viewer is looking) and the
head-up direction. We use a GLU function,
gluLookAt(2,3,10, 2,0,0, 0,1,0), that defines the viewing
matrix and multiplies it with the modelview matrix.
Here:
i) the first three parameters are the x, y, z coordinates of
the viewer's eye position;
ii) the second three parameters are the x, y, z coordinates
of the look-at point, where the viewer is looking;
iii) the third three parameters are the x, y, z components
of the viewer's head-up direction. This vector defines
the up direction of the viewer's head.
You can use this function in your display function after selecting the
modelview matrix.
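
A minimal sketch of that ordering in display(), using the values from the text:

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(2, 3, 10,   // eye position
          2, 0, 0,    // look-at point
          0, 1, 0);   // head-up direction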
Finally, we need to define the clipping or view volume for the scene.

This volume depends on the viewer's position and look-at direction,
and defines which portion of the model world the viewer will see. In a
2D scene, the clipping area is defined by the gluOrtho2D() function.
But in a 3D scene, we can define the view volume using one of three
functions, depending on our requirements, as described below:
1. glFrustum(Xmin, Xmax, Ymin, Ymax, near, far) - This function
creates a perspective matrix that produces a (possibly asymmetric)
perspective projection. The eye is assumed to be located at
(0,0,0) if not defined.
Parameters are:
1 & 2: Coordinates for the left and right clipping planes.
3 & 4: Coordinates for the bottom and top clipping planes.
5 & 6: Distances to the near and far clipping planes. Both of
these values must be positive.

2. gluPerspective(fovy, aspect ratio, near, far) - This function
creates a matrix that describes a symmetric viewing frustum in
world coordinates. The aspect ratio should match the aspect
ratio of the viewport (specified with glViewport).
Parameters are:
1: The field of view in degrees, in the y direction.
2: The aspect ratio, used to determine the field of view
in the x direction. The aspect ratio is x/y.
3 & 4: The distances from the viewer to the near and far
clipping planes. These values are always positive.

3. glOrtho(Xmin, Xmax, Ymin, Ymax, near, far) - This function
describes a parallel (orthographic) clipping volume. With this
projection, objects far from the viewer do not appear smaller.
Parameters are:
1: The leftmost coordinate of the clipping volume.
2: The rightmost coordinate of the clipping volume.
3: The bottommost coordinate of the clipping volume.
4: The topmost coordinate of the clipping volume.
5 & 6: The distances to the near and far clipping planes
(negative if the plane is behind the viewer).
You can see the effects of these projection functions by executing
the projection.exe file in the tutor folder. Press P for perspective,
F for frustum and O for ortho.
You have to use one of these functions in your display function
after selecting the projection matrix.
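
A sketch of that projection setup in display(); the particular field-of-view and near/far values here are assumptions:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, (GLdouble)windowWidth / (GLdouble)windowHeight, 1.0, 100.0);
// glFrustum(...) or glOrtho(...) can be used here in the same way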

Remember, you still need all the other functions used in
display() and main() from the last tutorial.

Finally, we are going to develop our own transformation functions.
Recall the general forms of the transformation matrices:

Translation matrix     Scale matrix          Rotation matrix (about Z axis)
| 1 0 0 dx |           | sx 0  0  0 |        | cosθ  -sinθ  0  0 |
| 0 1 0 dy |           | 0  sy 0  0 |        | sinθ   cosθ  0  0 |
| 0 0 1 dz |           | 0  0  sz 0 |        | 0      0     1  0 |
| 0 0 0 1  |           | 0  0  0  1 |        | 0      0     0  1 |

We just need to develop a function for a specific transformation
matrix, build the matrix from the parameter values, store it in a
one-dimensional array, and then multiply it with the modelview
matrix. A sample translation function is given below.
void ownTranslatef(GLfloat dx, GLfloat dy, GLfloat dz)
{
    GLfloat m[16];

    // column-major: m[0..3] is column 0, m[4..7] column 1, etc.
    m[0] = 1; m[4] = 0; m[8]  = 0; m[12] = dx;
    m[1] = 0; m[5] = 1; m[9]  = 0; m[13] = dy;
    m[2] = 0; m[6] = 0; m[10] = 1; m[14] = dz;
    m[3] = 0; m[7] = 0; m[11] = 0; m[15] = 1;

    glMatrixMode(GL_MODELVIEW);
    glMultMatrixf(m);
}
Here the translation matrix is stored in a one-dimensional array in
column-major order: each sequential group of 4 indices represents one
column of the matrix. The glMultMatrixf(m) function multiplies the
translation matrix with the modelview matrix.
Now try this ownTranslatef() function to translate your object(s).
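
Following the same pattern, here is a sketch of a rotation about the Z axis built from the rotation matrix above (the function name is our own, and theta is taken in degrees to match glRotatef):

#include <math.h>

void ownRotateZf(GLfloat theta)
{
    GLfloat rad = theta * (GLfloat)(M_PI / 180.0); // degrees to radians
    GLfloat c = cosf(rad), s = sinf(rad);
    GLfloat m[16];

    // column-major layout, rotation about Z centered at the origin
    m[0] = c;  m[4] = -s; m[8]  = 0; m[12] = 0;
    m[1] = s;  m[5] = c;  m[9]  = 0; m[13] = 0;
    m[2] = 0;  m[6] = 0;  m[10] = 1; m[14] = 0;
    m[3] = 0;  m[7] = 0;  m[11] = 0; m[15] = 1;

    glMatrixMode(GL_MODELVIEW);
    glMultMatrixf(m);
}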

Solve class work 01 and 02 at this stage.



Computer Graphics Laboratory
Lab – 03
Applying Material Properties on Objects and Lights in a Scene
Version-1.0

Continuing from our previous laboratory, we created a 3D pyramid
and placed a viewer with a 3D projection function. The demo program
from the last lab is in the Lab 03 demo_main.cpp file. Open a
GLUT project and run the given file. We will see a white pyramid, as
we did not apply any color to it. Use the keys R and S to rotate
the pyramid.

Today we apply material properties to the pyramid rather than
applying a color to each vertex, and we will also add a light to the
scene. Before applying material properties in practice, we need to
learn some basics of material properties, lighting attributes, etc.

Real-World and OpenGL Lighting [Ref: Redbook]


When you look at a physical surface, your eye's perception of the
color depends on the distribution of photon energies that arrive and
trigger your cone cells. Those photons come from a light source or
combination of sources, some of which are absorbed and some of
which are reflected by the surface. In addition, different surfaces may
have very different properties: some are shiny and preferentially
reflect light in certain directions, while others scatter incoming light
equally in all directions. Most surfaces are somewhere in between.

OpenGL approximates light and lighting as if light can be broken


into red, green, and blue components. Thus, the color of light
sources is characterized by the amount of red, green, and blue light
they emit, and the material of surfaces is characterized by the
percentage of the incoming red, green, and blue components that is
reflected in various directions.

In the OpenGL model, the light sources have an effect only when
there are surfaces that absorb and reflect light. Each surface is
assumed to be composed of a material with various properties. A
material might emit its own light (like headlights on an automobile),
it might scatter some incoming light in all directions, and it might
reflect some portion of the incoming light in a preferential direction
like a mirror or other shiny surface.

The OpenGL lighting model considers the lighting to be divided into


four independent components: emissive, ambient, diffuse, and
specular. All four components are computed independently and then
added together.

Ambient, Diffuse, and Specular Light [Ref: Redbook]


Ambient illumination is light that's been scattered so much by the
environment that its direction is impossible to determine - it seems to
come from all directions. When ambient light strikes a surface, it's
scattered equally in all directions.

The diffuse component is the light that comes from one direction, so
it's brighter if it comes squarely down on a surface than if it barely
glances off the surface. Once it hits a surface, however, it's scattered
equally in all directions, so it appears equally bright, no matter where
the eye is located. Any light coming from a particular position or
direction probably has a diffuse component.

Finally, specular light comes from a particular direction, and it tends


to bounce off the surface in a preferred direction. A well-collimated
laser beam bouncing off a high-quality mirror produces almost 100
percent specular reflection. Shiny metal or plastic has a high specular
component, and chalk or carpet has almost none. You can think of
specularity as shininess.

Although a light source delivers a single distribution of frequencies,


the ambient, diffuse, and specular components might be different. For
example, if you have a white light in a room with red walls, the
scattered light tends to be red, although the light directly striking
objects is white. OpenGL allows you to set the red, green, and blue
values for each component of light independently.

Material Colors [Ref: Redbook]


The OpenGL lighting model makes the approximation that a
material's color depends on the percentages of the incoming red,
green, and blue light it reflects. For example, a perfectly red ball
reflects all the incoming red light and absorbs all the green and blue
light that strikes it. If you view such a ball in white light (composed
of equal amounts of red, green, and blue light), all the red is reflected,
and you see a red ball. If the ball is viewed in pure red light, it also
appears to be red. If, however, the red ball is viewed in pure green
light, it appears black (all the green is absorbed, and there's no
incoming red, so no light is reflected).

Like lights, materials have different ambient, diffuse, and specular


colors, which determine the ambient, diffuse, and specular
reflectances of the material. A material's ambient reflectance is
combined with the ambient component of each incoming light source,
the diffuse reflectance with the light's diffuse component, and
similarly for the specular reflectance and component. Ambient and
diffuse reflectances define the color of the material and are typically
similar if not identical. Specular reflectance is usually white or gray,
so that specular highlights end up being the color of the light source's
specular intensity. If you think of a white light shining on a shiny red
plastic sphere, most of the sphere appears red, but the shiny highlight
is white.

In addition to ambient, diffuse, and specular colors, materials have an


emissive color, which simulates light originating from an object. In
the OpenGL lighting model, the emissive color of a surface adds
intensity to the object, but is unaffected by any light sources. Also,
the emissive color does not introduce any additional light into the
overall scene.

Now let's try to apply material properties and light to the scene. First
we will apply material properties to our created pyramid. Let's see the
following code segment:

GLfloat no_mat[] = { 0.0, 0.0, 0.0, 1.0 };
GLfloat mat_ambient[] = { 0.5, 0.0, 0.0, 1.0 };
GLfloat mat_diffuse[] = { 1.0, 0.0, 0.0, 1.0 };
GLfloat mat_specular[] = { 1.0, 1.0, 1.0, 1.0 };
GLfloat mat_shininess[] = { 10 };

glMaterialfv(GL_FRONT, GL_AMBIENT, mat_ambient);
glMaterialfv(GL_FRONT, GL_DIFFUSE, mat_diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess);

Here we first define the reflectance properties of the material's color
components. To apply these reflectance properties as material
properties we use the function glMaterialfv(), which has three
parameters:
1. Specifies whether the front, back, or both material properties of
the polygons are being set by this function. May be GL_FRONT,
GL_BACK, or GL_FRONT_AND_BACK.
2. Specifies the material parameter being set. The array variations
can set the following material properties: GL_AMBIENT, GL_DIFFUSE,
GL_SPECULAR, GL_EMISSION, GL_SHININESS, etc. (The only
single-valued material parameter, used by the non-array variations,
is GL_SHININESS.)
3. Specifies the value to which the parameter named in parameter 2
is set.

Before using this code segment, we need to enable the lighting feature
with the glEnable(GL_LIGHTING) function in the main function. Now
place the above code segment in drawpyramid(), where you created the
pyramid, as these properties apply only to the pyramid. Now run the
program. You will see a dim red pyramid, because we still have not
added any light to the scene; the dim color comes from the ambient
property.

At this stage we need to add one or more light sources to the scene.
We can add up to eight lights simultaneously (GL_LIGHT0 through
GL_LIGHT7). The code segment below adds one light to the scene.

GLfloat no_light[] = { 0.0, 0.0, 0.0, 1.0 };
GLfloat light_ambient[] = { 1.0, 1.0, 1.0, 1.0 };
GLfloat light_diffuse[] = { 1.0, 1.0, 1.0, 1.0 };
GLfloat light_specular[] = { 1.0, 1.0, 1.0, 1.0 };
GLfloat light_position[] = { 2.0, 25.0, 3.0, 1.0 };

glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_AMBIENT, light_ambient);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
glLightfv(GL_LIGHT0, GL_SPECULAR, light_specular);
glLightfv(GL_LIGHT0, GL_POSITION, light_position);

Here we first define the color components for light No. 0 (zero).
We then define the light position using x, y, z values; the fourth
parameter of the light position, if 1.0, specifies that the light is at
this position. Otherwise, the light source is directional and all its
rays are parallel. Then we enable light 0 with the glEnable(GL_LIGHT0)
function. Finally we apply the color components to light 0 using the
glLightfv() function. It also has three parameters like glMaterialfv(),
but the first parameter is the number of the light to which we apply
the color component. Place this code segment globally (in the main
function) in your program. Now run the program. You will see a bright
red pyramid.

Now try to analyze the above material and lighting properties.
Omit one or more of the color component properties and try to
understand the effects. Also change the color component values for
both material and light, observe the effects, and justify them with
your basic knowledge about lighting.

Remember, the specular effect depends on both the light position
and the viewer's position.

We can also add spot lights to our scene. This is very simple: we just
need to add some more properties to the light to make it a spot light.



Append the following code segment along with your light property
code segment.

GLfloat spot_direction[] = { 0.0, -1.0, 0.0 };


glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, spot_direction);
glLightf( GL_LIGHT0, GL_SPOT_CUTOFF, 25.0);

Here, first we need to define the direction of the spot light.

Then set GL_SPOT_CUTOFF to specify the angle between the axis of the
cone and a ray along the edge of the cone. [Ref: Redbook]

You must set the direction of the spot light properly to focus on
your desired object.

Solve class work 01 and 02 at this stage.



Texture Mapping
Kazi Saeed Alam
Frame Buffer
• Figure 7.2 shows the OpenGL frame buffer and some of
its constituent parts. When we work with the frame
buffer, we usually work with one constituent buffer at a
time. Thus, we shall use the term buffer in what follows
to mean a particular buffer within the frame buffer. Each
of these buffers is n × m and is k bits deep. However, k
can be different for each buffer. For a color buffer, its k is
determined by how many colors the system can display,
usually 24 for RGB displays and 32 for RGBA displays.
Color Palette
• Framebuffers have traditionally supported a wide variety of color
modes. Due to the expense of memory, most early framebuffers used
1-bit (2-color), 2-bit (4-color), 4-bit (16-color) or 8-bit (256-color) color
depths.
• Here is a typical indexed 256-color image and its own palette
Digital Images
• if we are working with RGB images, we usually represent each of the
color components with 1 byte whose values range from 0 to 255.
Thus, we might declare a 512 × 512 image in our application program
as
• GLubyte myimage[512][512][3];
• or, if we are using a floating-point representation,
• typedef vec3 color3;
• color3 myimage[512][512];
Digital Images
• For example, suppose that we want to create a 512 × 512 image that
consists of an 8 × 8 checkerboard of alternating red and black squares,
such as we might use for a game. The following code will work:
class mRGB
{
public:
    uchar r, g, b, a;  // assumes a typedef: unsigned char uchar
    mRGB() { r = g = b = 0; a = 255; }
};
Digital Images
nRows = nCols = 64;                 // image size (ints declared elsewhere)
pixel = new mRGB[nRows * nCols];    // one mRGB per pixel
long count = 0;
for (int i = 0; i < nRows; i++)
    for (int j = 0; j < nCols; j++)
    {
        int c = (((i / 8) + (j / 8)) % 2) * 255;  // alternate blocks of 8 pixels
        pixel[count].r = c;      // red
        pixel[count].g = 0;      // green: 0 so the squares are red/black
        pixel[count++].b = 0;    // blue
    }
Digital Images
Mapping Methods
• Consider, for example, the task of creating a virtual orange by computer. Our
first attempt might be to start with a sphere. Although it might have the
correct overall properties, such as shape and color, it would lack the fine
surface detail of the real orange. If we attempt to add this detail by adding
more polygons to our model, even with hardware capable of rendering tens
of millions of polygons per second, we can still overwhelm the pipeline. As
the implementation renders a surface—be it a polygon or a curved surface—
it generates sets of fragments, each of which corresponds to a pixel in the
frame buffer. Fragments carry color, depth, and other information that can
be used to determine how they contribute to the pixels to which they
correspond. As part of the rasterization process, we must assign a shade or
color to each fragment.
Mapping Methods
• An alternative is not to attempt to build increasingly more complex
models, but rather to build a simple model and to add detail as part
of the rendering process. There are three major techniques:
• Texture mapping
• Bump mapping
• Environment mapping
Texture Mapping
• Texture mapping uses an image (or texture) to influence the color of a
fragment. Textures can be specified using a fixed pattern, such as the
regular patterns often used to fill polygons; by a procedural texture-
generation method; or through a digitized image. In all cases, we can
characterize the resulting image as the mapping of a texture to a
surface, as shown in Figure 7.8, as part of the rendering of the
surface.
Bump Mapping & Environment Mapping
• Bump maps distort the normal vectors during the shading process to
make the surface appear to have small variations in shape, such as the
bumps on a real orange.
• Reflection maps, or environment maps, allow us to create images
that have the appearance of reflected materials without our having to
trace reflected rays. In this technique, an image of the environment is
painted onto the surface as that surface is being rendered.
• Color Plate 15 uses a texture map for the surface of the table; Color
Plate 10 uses texture mapping to create a brick pattern.
[Slides: examples of texture mapping, bump mapping, and environment mapping (images omitted).]
Texture Mapping
• Textures are patterns. They can range from regular patterns, such as
stripes and checkerboards, to the complex patterns that characterize
natural materials. In the real world, we can distinguish among objects
of similar size and shape by their textures.
Two Dimensional Texture Mapping
• Although there are multiple approaches to texture mapping, all require
a sequence of steps that involve mappings among three or four
different coordinate systems. At various stages in the process, we shall
be working with:
• screen coordinates, where the final image is produced;
• object coordinates, where we describe the objects upon which the
textures will be mapped;
• texture coordinates, which we use to locate positions in the texture;
• and parametric coordinates, which we use to help us define curved
surfaces.
What is a texture map?
• Practical: “A way to slap an image on a model.”
• Better: “A mapping from any function onto a surface in three
dimensions.”
• Most general: “The mapping of any image into multidimensional
space.”
[Slides: texture mapping illustrations (images omitted).]
Two Dimensional Texture Mapping
• In most applications, textures start out as two-dimensional images of
the sort we introduced earlier in these slides. Thus, they might be
formed by application programs or scanned in from a photograph,
but, regardless of their origin, they are eventually brought into
processor memory as arrays. We call the elements of these arrays
texels, or texture elements, rather than pixels to emphasize how they
will be used. However, at this point, we prefer to think of this array as
a continuous rectangular two-dimensional texture pattern T(s, t). The
independent variables s and t are known as texture coordinates. With
no loss of generality, we can scale our texture coordinates to vary over
the interval [0,1].
Two Dimensional Texture Mapping
• A texture map associates a texel with each point on a geometric
object that is itself mapped to screen coordinates for display. If the
object is represented in homogeneous or (x, y, z, w) coordinates, then
there are functions such that
• x = x(s, t),
• y = y(s, t),
• z = z(s, t),
• w = w(s, t).
Two Dimensional Texture Mapping
• One of the difficulties we must confront is that although these
functions exist conceptually, finding them may not be possible in
practice. In addition, we are worried about the inverse problem:
Having been given a point (x, y, z) or (x, y, z , w) on an object, how do
we find the corresponding texture coordinates, or equivalently, how
do we find the “inverse” functions
• s = s(x, y, z , w),
• t = t(x, y, z , w)
• to use to find the texel T(s, t)?
Two Dimensional Texture Mapping
• If we define the geometric object using parametric (u, v) surfaces,
there is an additional mapping function that gives object coordinate
values, (x, y, z) or (x, y, z , w) in terms of u and v.
• we also need the mapping from parametric coordinates (u, v) to
texture coordinates and sometimes the inverse mapping from texture
coordinates to parametric coordinates.
Two Dimensional Texture Mapping
• We also have to consider the projection process that take us from
object coordinates to screen coordinates, going through eye
coordinates, clip coordinates, and window coordinates along the way.
We can abstract this process through a function that takes a texture
coordinate pair (s, t) and tells us where in the color buffer the
corresponding value of T(s, t) will make its contribution to the final
image. Thus, there is a mapping of the form
• x_s = x_s(s, t),
• y_s = y_s(s, t)
into screen coordinates, where (x_s, y_s) is a location in the color buffer.
Two Dimensional Texture Mapping
• One way to think about texture mapping is in terms of two concurrent
mappings: the first from texture coordinates to parametric
coordinates, and the second from parametric coordinates to object
coordinates, as shown in Figure 7.9. A third mapping takes us from
object coordinates to screen coordinates.
Two Dimensional Texture Mapping
• If we assume that the values of T are RGB color values, we can use
these values either to modify the color of the surface that might have
been determined by a lighting model or to assign a color to the
surface based on only the texture value. This color assignment is
carried out as part of the assignment of fragment colors.
Visualization of texture coordinates
• Texture coordinates linearly interpolated over triangle
Linear Texture Mapping
• Do a direct mapping of a block of texture to a surface patch:
Linear Texture Mapping
• A point p on the surface is a function of two parameters u and v. For
each pair of values, we generate the point p = p(u, v).

• In Figure 7.12, if the patch determined by the corners (smin, tmin) and
(smax, tmax) corresponds to the surface patch with corners (umin, vmin)
and (umax, vmax), then the mapping is
• u = umin + ((s - smin) / (smax - smin)) (umax - umin),
• v = vmin + ((t - tmin) / (tmax - tmin)) (vmax - vmin).
Cube Mapping
• “Unwrap” cube and map texture over the cube.
Cylinder Mapping
• Wrap texture along outside of cylinder, not top and bottom
• This stops texture from being distorted
Cylinder Mapping
• In Figure 7.13, points on the cylinder are given by the parametric
equations
• x = r cos(2πu),
• y = r sin(2πu),
• z = v/h,
• as u and v vary over (0,1). Hence, we can use the mapping
• s = u,
• t = v.
Two-part Mapping
• To simplify the problem of mapping from an image to an arbitrary
model, use an object we already have a map for as an intermediary!
• Texture -> Intermediate object -> Final model
• Common intermediate objects:
• Cylinder
• Cube
• Sphere
Intermediate Object to Model
• This step can be done in many ways:
• Normal from intermediate surface
• Normal from object surface
• Use center of object
Difficulties in Texture Mapping
• First, we must determine the map from texture coordinates to object
coordinates. A two-dimensional texture usually is defined over a
rectangular region in texture space. The mapping from this rectangle
to an arbitrary region in three-dimensional space may be a complex
function or may have undesirable properties. For example, if we wish
to map a rectangle to a sphere, we cannot do so without distortion of
shapes and distances.
• Second, owing to the nature of the rendering process, which works on
a pixel-by-pixel basis, we are more interested in the inverse map from
screen coordinates to texture coordinates.
Difficulties in Texture Mapping
• Third, because each pixel corresponds to a small rectangle on the
display, we are interested in mapping not points to points, but rather
areas to areas. Here again is a potential aliasing problem that we must
treat carefully if we are to avoid artifacts, such as wavy sinusoidal or
moiré patterns.
What is aliasing?
• An on-screen pixel does not always map neatly to a texel. Particularly
severe problems in regular textures.
[Slides: an aliased texture and a moiré pattern (images omitted).]
Anti-Aliasing
• Pre-calculate how the texture should look at various distances, then
use the appropriate texture at each distance. This is called
mipmapping.
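In legacy OpenGL this can be requested by building the mip levels and choosing a mipmap-aware minification filter; a minimal sketch (the image array and its size are assumptions):

gluBuild2DMipmaps(GL_TEXTURE_2D, 3, 64, 64,
                  GL_RGB, GL_UNSIGNED_BYTE, checkImage); // builds all mip levels
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                GL_LINEAR_MIPMAP_LINEAR);                // trilinear filtering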
Texture magnification
• a pixel in texture image ('texel') maps to an area larger than one pixel
in image
Texture minification
• a pixel in texture image ('texel') maps to an area smaller than a pixel in
image:
Mipmapping
Anti-Aliasing
• Another approach: filtering
1. Bilinear filtering
2. Trilinear filtering
3. Anisotropic filtering
Aliasing and Anti-aliasing
OpenGL Texture Mapping
• OpenGL’s texture maps rely on its pipeline architecture. We have seen
that there are actually two parallel pipelines: the geometric pipeline
and the pixel pipeline. For texture mapping, the pixel pipeline merges
with fragment processing after rasterization, as shown in Figure 7.16.
This architecture determines the type of texture mapping that is
supported. In particular, texture mapping is done as part of fragment
processing. Each fragment that is generated is then tested for visibility
with the z-buffer. We can think of texture mapping as a part of the
shading process, but a part that is done on a fragment-by-fragment
basis.
Texture Mapping Pipeline
OpenGL Texture Mapping
• Texture mapping requires interaction among the application program,
the vertex shader, and the fragment shader. There are three basic
steps. First, we must form a texture image and place it in texture
memory on the GPU. Second, we must assign texture coordinates to
each fragment. Finally, we must apply the texture to each fragment.
glBindTexture
• glBindTexture(GL_TEXTURE_2D,textureName);
• GL_TEXTURE_2D: Specify that it is a 2D texture
• textureName: Name of the texture
glTexImage2D
• glTexImage2D(GL_TEXTURE_2D, level, components, width, height,
border, format, type, tarray)
• GL_TEXTURE_2D: Specify that it is a 2D texture
• Level: Used for specifying levels of detail for mipmapping
• Components: The number of color components (or an internal format
such as GL_RGB); 3 is typical for RGB data
• Width, Height: The size of the texture; must be powers of 2
• Border: The width of the border (usually 0)
• Format: Specify what the data is (GL_RGB, GL_RGBA, …)
• Type: Specify the data type (GL_UNSIGNED_BYTE, GL_BYTE, …)
glTexParameteri
• glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
• This function sets several texture mapping parameters. These parameters are
bound to the current texture state that can be made current with glBindTexture.
• parameters:
• P1: GLenum: The texture target for which this parameter applies. Must be one of
GL_TEXTURE_1D, GL_TEXTURE_2D, GL_TEXTURE_3D, or GL_TEXTURE_CUBE_MAP.
• P2: GLenum: The texturing parameter to set, e.g. GL_TEXTURE_MAG_FILTER
sets the texture magnification filter
• P3: GLfloat or GLfloat* or GLint or GLint*: Value of the parameter specified by
pname.
[Slides: effects of GL_REPEAT vs. GL_CLAMP wrap modes and GL_NEAREST vs. GL_LINEAR filtering (images omitted).]
glTexGen
• void glTexGeni( GLenum coord, GLenum pname, GLint param);
• glTexGen selects a texture-coordinate generation function or supplies
coefficients for one of the functions. coord names one of the (s, t, r, q)
texture coordinates; it must be one of the symbols GL_S, GL_T, GL_R,
or GL_Q.
• Coord: Specifies a texture coordinate. Must be one of GL_S, GL_T,
GL_R, or GL_Q.
• Pname: Specifies the symbolic name of the texture-coordinate
generation function or function parameters. Must be
GL_TEXTURE_GEN_MODE, GL_OBJECT_PLANE, or GL_EYE_PLANE.
glTexGen
• Params: Specifies a pointer to an array of texture generation
parameters. If pname is GL_TEXTURE_GEN_MODE, then the array
must contain a single symbolic constant, one of GL_OBJECT_LINEAR,
GL_EYE_LINEAR, GL_SPHERE_MAP, GL_NORMAL_MAP, or
GL_REFLECTION_MAP. Otherwise, params holds the coefficients for
the texture-coordinate generation function specified by pname.
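
Putting the calls above together, here is a minimal fixed-function setup sketch using the 64 × 64 checkerboard from the Digital Images slides (the texture name, image array, and parameter choices are assumptions, not code from the slides):

GLuint textureName;
GLubyte checkImage[64][64][3];   // assumed to be filled with the checkerboard pattern

void initTexture(void)
{
    glGenTextures(1, &textureName);            // create one texture object
    glBindTexture(GL_TEXTURE_2D, textureName); // make it the current 2D texture

    // level 0, 3 components (RGB), 64 x 64, no border, RGB unsigned bytes
    glTexImage2D(GL_TEXTURE_2D, 0, 3, 64, 64, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, checkImage);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

    glEnable(GL_TEXTURE_2D);                   // turn 2D texturing on
}

// while drawing, give each vertex a texture coordinate:
void texturedQuad(void)
{
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex3f(0, 0, 0);
    glTexCoord2f(1, 0); glVertex3f(2, 0, 0);
    glTexCoord2f(1, 1); glVertex3f(2, 2, 0);
    glTexCoord2f(0, 1); glVertex3f(0, 2, 0);
    glEnd();
}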
