CG Unit-2
UNIT - 2
Dept. of CSE,NGIT 1
Computer Graphics
2.1 Interaction
Project Sketchpad :
Ivan Sutherland (MIT 1963) established the basic interactive paradigm that
characterizes interactive computer graphics:
– User sees an object on the display
– User points to (picks) the object with an input device (light pen,
mouse, trackball)
– Object changes (moves, rotates, morphs)
– Repeat
2.2 Input devices
Input devices can return to the program:
o A position
o An object identifier
Modes
o How and when input is obtained (request or event mode)
Devices such as the data tablet return a position directly to the operating system.
Devices such as the mouse, trackball, and joystick return incremental inputs
(or velocities) to the operating system
o These inputs must be integrated to obtain an absolute position
Rotation of the cylinders in a mouse
Roll of a trackball
Logical Devices
Input Modes
Input devices contain a trigger which can be used to send a signal to the
operating system
o Button on mouse
o Pressing or releasing a key
When triggered, input devices return information (their measure) to the system
o Mouse returns position information
o Keyboard returns ASCII code
Request Mode
Event Mode
Most systems have more than one input device, each of which can be triggered at
an arbitrary time by a user
Each trigger generates an event whose measure is put in an event queue which can be
examined by the user program
Event Types
Idle: nonevent
o Defines what should be done if no other event is in the queue
Retained mode - the host compiles the graphics program, and this compiled set
is maintained on the server as a display list.
Redisplay happens via a simple function call issued from the client to the server.
GL_COMPILE – Tells the system to send the list to the server but not to display
the contents
GL_COMPILE_AND_EXECUTE – Immediate display of the contents while the
list is being constructed.
void OurFont(char c)
{
   int i;
   float angle;

   switch(c)
   {
   case 'O':
      glTranslatef(0.5, 0.5, 0.0); /* move to the center */
      glBegin(GL_QUAD_STRIP);
      for (i = 0; i <= 12; i++) /* 12 quads close the ring */
      {
         angle = 3.14159 / 6.0 * i; /* 30 degrees in radians */
         glVertex2f(0.4 * cos(angle) + 0.5, 0.4 * sin(angle) + 0.5);
         glVertex2f(0.5 * cos(angle) + 0.5, 0.5 * sin(angle) + 0.5);
      }
      glEnd();
      break;
   }
}
Fonts in GLUT
GLUT provides a few raster and stroke fonts
#define FACE 2
glNewList(FACE, GL_COMPILE);
   /* Draw outline */
   glTranslatef(…..);
   glCallList(EYE);
glEndList();
2.6 Programming Event-Driven Input
Pointing Devices :
A mouse event occurs when one of the buttons of the mouse is pressed or released.

void myMouse(int button, int state, int x, int y)
{
if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
exit(0);
}
The callback from the main function would be :
glutMouseFunc(myMouse);
Window Events
A window (reshape) event is generated when the window is resized.
E.g.
void myReshape(int w, int h)
{
/* adjust viewport */
glViewport(0,0,w,h);
}
Keyboard Events
When a keyboard event occurs, the ASCII code for the key that generated the event and
the mouse location are returned.
E.g.
void myKey(unsigned char key, int x, int y)
{
if (key=='q' || key=='Q')
exit(0);
}
Callback : glutKeyboardFunc(myKey);
At the end of each execution of the main loop, GLUT uses this flag (set by
glutPostRedisplay) to determine whether the display function will be executed.
This ensures that the display is drawn only once each time the program goes
through the event loop.
Idle Callback is invoked when there are no other events to be performed.
Its typical use is to continuously generate graphical primitives when nothing else
is happening.
Idle callback : glutIdleFunc(function name)
Window Management
id = glutCreateWindow("second window");
To set a particular window as the current window where the image has to be
rendered:
glutSetWindow(id);
2.7 Menus
GLUT supports pop-up menus
o A menu can have submenus
Three steps
o Define entries for the menu
o Define the action for each menu item
Action carried out if the entry is selected
o Attach the menu to a mouse button
Defining a simple menu
menu_id = glutCreateMenu(mymenu);
glutAddMenuEntry("clear screen", 1);
glutAddMenuEntry("exit", 2);
glutAttachMenu(GLUT_RIGHT_BUTTON);
Menu callback
void mymenu(int id)
{
if(id == 1) glClear(GL_COLOR_BUFFER_BIT);
if(id == 2) exit(0);
}
Picking
In principle, picking should be simple: the mouse gives a position, and we should
be able to determine to which object(s) that position corresponds.
Practical difficulties
o The pipeline architecture is feed-forward; it is hard to go from the screen back to the world
o Complicated by the screen being 2D while the world is 3D
o How close do we have to come to an object to say we selected it?
Rendering Modes
As we just described it, selection mode won’t work for picking because every
primitive in the view volume will generate a hit
Change the viewing parameters so that only those primitives near the cursor are in
the altered view volume
– Use gluPickMatrix (see text for details)
2.8 A Sample CAD Program
draw_objects(GL_SELECT);
glMatrixMode(GL_PROJECTION);
In order to add code for deleting an object, we include some extra information in
the object structure:
float bb[2][2];
bb[0][0] = x0-1.0;
bb[0][1] = y0-1.0;….
2.10 Animating interactive programs
The point (x, y) = (cos θ, sin θ) always lies on the unit circle regardless of the value of θ.
We have two color buffers at our disposal, called the front and the back
buffers. The front buffer is the one that is always displayed;
the back buffer is the one into which we draw.
glutSwapBuffers();
glDrawBuffer(GL_BACK);
glDrawBuffer(GL_FRONT_AND_BACK);
Rubberbanding
Draw object
– For a line, use the first mouse click to fix one endpoint, then use the
motion callback to continuously update the second endpoint
– Each time the mouse is moved, redraw the line (which erases it) and then draw
the line from the fixed first position to the new second position
– At the end, switch back to normal drawing mode and draw the line
– Works for other objects: rectangles, circles
XOR in OpenGL
The basic geometric objects and the relationships among them can be described using
three fundamental types: scalars, points, and vectors.
Geometric Objects.
Points:
One of the fundamental geometric objects is a point.
Fig: Two points, P and Q
Because a vector does not have a fixed position, the directed line segments shown in
the figure below are identical: they have the same direction and magnitude.
Vector lengths can be altered by scalar multiplication, so the line segment A shown
in the figure below is twice the length of line segment B.
Fig: A = 2B
We can also combine directed line segments, as shown in the figure below, using the
head-to-tail rule.
Fig: D = A + B
We obtain a new vector D from two vectors A and B by connecting the head of A to the
tail of B. The magnitude and direction of D are determined from the tail of A to the
head of B. We call D the sum of A and B and write D = A + B.
Consider the two directed line segments A and E shown in the figure below, with the
same length but opposite directions. We can define E in terms of A as E = -A, so E is
called the inverse vector of A. The sum of A and E is the zero vector, denoted 0,
which has zero magnitude and an undefined orientation.
In a three-dimensional world, we can have a far greater variety of geometric objects than
we can in two dimensions. When we worked in a two-dimensional plane, we considered objects
that were simple curves, such as line segments, and flat objects with well-defined interiors,
such as simple polygons. In three dimensions, we retain these objects, but they are no longer
restricted to lie in the same plane. Hence, curves become curves in space, and objects with
interiors can become surfaces in space. In addition, we can have objects with volumes, such as
parallelepipeds and ellipsoids.
1. The objects are described by their surfaces and can be thought of as being hollow.
2. The objects can be specified through a set of vertices.
3. The objects either are composed of or can be approximated by flat, convex polygons.
The scalars α1, α2, and α3 are the components of w with respect to the basis v1, v2, and v3,
so that w = α1 v1 + α2 v2 + α3 v3. These relationships are shown in the figure below. We can
write the representation of w with respect to this basis as the column matrix
a = [α1, α2, α3]^T.
We usually think of the basis vectors, v1, v2, v3, as defining a coordinate system.
However, for dealing with problems using points, vectors, and scalars, we need a more general
method. The figures below show one aspect of the problem. The three vectors form a coordinate
system that is shown in Figure(a) as we would usually draw it, with the three vectors emerging
from a single point. We could use these three basis vectors as a basis to represent any vector in
three dimensions. Vectors, however, have direction and magnitude, but lack a position attribute.
Hence, Figure(b) is equivalent, because we have moved the basis vectors, leaving their
magnitudes and directions unchanged. Most people find this second figure confusing, even
though mathematically it expresses the same information as the first figure. We are still left with
the problem of how to represent points—entities that have fixed positions. Because an affine
space contains points, once we fix a particular reference point—the origin—in such a space, we
can represent all points unambiguously. The usual convention for drawing coordinate axes as
emerging from the origin, as shown in Figure(a), makes sense in the affine space where both
points and vectors have representations. However, this representation requires us to know both
the reference point and the basis vectors. The origin and the basis vectors determine a frame.
Loosely, this extension fixes the origin of the vector coordinate system at some point P0. Within
a given frame, every vector can be written uniquely as

v = α1 v1 + α2 v2 + α3 v3,

and every point can be written uniquely as

P = P0 + β1 v1 + β2 v2 + β3 v3.
Thus, the representation of a particular vector in a frame requires three scalars; the representation
of a point requires three scalars and the knowledge of where the origin is located.
Suppose that vectors e1, e2, and e3 form a basis. The representation of any vector v is given by
its components (α1, α2, α3), where

v = α1 e1 + α2 e2 + α3 e3.
The basis vectors must themselves have representations, which we can denote e1, e2, and e3,
given by

e1 = (1, 0, 0)^T, e2 = (0, 1, 0)^T, e3 = (0, 0, 1)^T.
In other words, the 3-tuple (1, 0, 0) is the representation of the first basis vector. Consequently,
rather than thinking in terms of abstract vectors, we can work with 3-tuples and write the
representation of any vector v as a column matrix a or the 3-tuple (α1, α2, α3), where

a = [α1, α2, α3]^T.
Frequently, we are required to find how the representation of a vector changes when we
change the basis vectors. For example, in OpenGL, we define our geometry using the coordinate
system or frame that is natural for the model, which is known as the object or model frame.
Models are then brought into the world frame. At some point, we want to know how these
objects appear to the camera. It is natural at that point to convert from the world frame to the
camera or eye frame. The conversion from the object frame to the eye frame is done by the
model-view matrix.
Let's consider changing representations for vectors first. Suppose that {v1, v2, v3} and
{u1, u2, u3} are two bases. Each basis vector in the second set can be represented in terms of the
first basis (and vice versa). Hence, there exist nine scalar components {γij} such that

ui = γi1 v1 + γi2 v2 + γi3 v3,  i = 1, 2, 3,

and the 3 × 3 matrix M = [γij] collects these scalars.
The matrix M contains the information to go from a representation of a vector in one basis to its
representation in the second basis. The inverse of M gives the matrix representation of the change
from {u1, u2, u3} to {v1, v2, v3}. Consider a vector w that has the representation (α1, α2, α3)
with respect to {v1, v2, v3}.
Thus, rather than working with our original vectors, typically directed line segments, we can
work instead with their representations, which are 3-tuples. These changes in basis leave the
origin unchanged. We can use them to represent rotation and scaling of a set of basis vectors to
derive another basis set, as shown in the figure.
However, a simple translation of the origin, or change of frame as shown in the figure below,
cannot be represented in this way. After we complete a simple example, we introduce
homogeneous coordinates, which allow us to change frames yet still use matrices to represent the
change.
We can denote the three basis vectors as v1, v2, and v3. Hence,
w = v1 + 2v2 + 3v3.
Now suppose that we want to make a new basis from the three vectors v1, v2, and v3 where
u1 = v1,
u2 = v1 + v2,
u3 = v1 + v2 + v3.
The matrix M is

M = | 1 0 0 |
    | 1 1 0 |
    | 1 1 1 |
The matrix that converts a representation in v1, v2, and v3 to one in which the basis vectors are
u1, u2, and u3 is

A = (M^T)^-1 = | 1 -1  0 |
               | 0  1 -1 |
               | 0  0  1 |

so w = v1 + 2v2 + 3v3, with representation a = (1, 2, 3) in the v basis, has representation
b = A a = (-1, -1, 3) in the u basis; that is, w = -u1 - u2 + 3u3.
As we have seen, OpenGL is based on a pipeline model, the first part of which is a sequence of
operations on vertices, many of which are geometric. We can characterize such operations by a
sequence of transformations or, equivalently, as a sequence of changes of frames for the objects
specified by an application program.
The following is the usual order in which the frames occur in the pipeline:
1. Object or model coordinates
2. World coordinates
3. Eye (or camera) coordinates
4. Clip coordinates
5. Normalized device coordinates
6. Window (or screen) coordinates
Object or model coordinates: In most applications, we tend to specify or use an object with a
convenient size, orientation, and location in its own frame called the model or object frame.
World coordinates: The origin of application coordinates might be a location in the center of
the bottom floor of the building. This application frame is called the world frame and the values
are in world coordinates.
Eye (or camera) coordinates: Object and world coordinates are the natural frames for the
application program. However, the image that is produced depends on what the camera or viewer
sees. Virtually all graphics systems use a frame whose origin is the center of the camera’s lens
and whose axes are aligned with the sides of the camera. This frame is called the camera frame
or eye frame.
Clip coordinates: The last three representations are used primarily in the implementation of the
pipeline, but, for completeness, we introduce them here. Once objects are in eye coordinates,
OpenGL must check whether they lie within the view volume. If an object does not, it is clipped
from the scene prior to rasterization. OpenGL can carry out this process most efficiently if it first
carries out a projection transformation that brings all potentially visible objects into a cube
centered at the origin in clip coordinates.
Normalized device coordinates: After this transformation, vertices are still represented in
homogeneous coordinates. The division by the w component, called perspective division, yields
three-dimensional representations in normalized device coordinates.
Window (or screen) coordinates: The final transformation takes a position in normalized
device coordinates and, taking into account the viewport, creates a three-dimensional
representation in window coordinates. Window coordinates are measured in units of pixels on
the display but retain depth information. If we remove the depth coordinate, we are working
with two-dimensional screen coordinates.
Fig: Camera and object frames: (a) in the default frames; (b) after applying the model-view matrix
We start by assuming that the vertices of the cube are available through an array of vertices;
for example, we use homogeneous coordinates, so each vertex is a 4-tuple (x, y, z, 1.0).
We can then use the list of points to specify the faces of the cube. For example, one face is given
by the sequence of vertices (0, 3, 2, 1). We can specify the other five faces similarly.
We could now describe our cube through a set of vertex specifications. For example,
we could use a two-dimensional array of positions.
We can use the vertex list to define a color cube. We use a function quad that takes as input the
indices of four vertices in outward-pointing order and adds data to two arrays that store the
vertex positions and the corresponding colors for each face.
Note that because we can only display triangles, the quad function must generate two triangles
for each face and thus six vertices. If we want each vertex to have its own color, then we need 24
vertices and 24 colors for our data. Using this quad function, we can specify our cube through
the function
We will assign the colors to the vertices using the colors of the corners of the color solid
(black, white, red, green, blue, cyan, magenta, yellow). We assign a color to each vertex
using the index of the vertex. Alternately, we could use the first index of the first vertex
specified by quad to fix the color for the entire face. Here are the RGBA colors.
Here is the quad function that uses the first three vertices to specify one triangle and the
first, third, and fourth to specify the second:
However, the display of the cube is not very informative. Because the sides of the cube are
aligned with the clipping volume, we see only the front face. The display also occupies the entire
window. We could get a more interesting display by changing the data so that it corresponds to a
rotated cube. We could scale the data to get a smaller cube. For example, we could scale the cube
by half by changing the vertex data to
but that would not be a very flexible solution. We could put the scale factor in the quad function.
A better solution might be to change the vertex shader to