CG Unit-2

The document outlines key concepts in computer graphics, focusing on input and interaction, including input devices, event-driven programming, and the client-server model. It discusses display lists for efficient graphics rendering, programming techniques for interactive models, and methods for handling user input through various events. Additionally, it covers the creation of interactive programs and animation techniques using OpenGL, emphasizing the importance of managing graphical objects and user interactions.

Computer Graphics

UNIT - 2

Input and Interaction:
 Input devices
 Clients and servers
 Display lists
 Display lists and modeling
 Programming event-driven input
 Picking
 Building interactive models
 Animating interactive programs
 Logic operations

Geometric Objects:
 Three-dimensional primitives
 Coordinate systems and frames
 Frames in OpenGL
 Modeling a colored cube

Dept. of CSE,NGIT 1

INPUT AND INTERACTION

2.1 Interaction
Project Sketchpad :


Ivan Sutherland (MIT 1963) established the basic interactive paradigm that
characterizes interactive computer graphics:
– User sees an object on the display
– User points to (picks) the object with an input device (light pen,
mouse, trackball)
– Object changes (moves, rotates, morphs)
– Repeat
2.2 Input devices

Devices can be described either by
o Physical properties
 Mouse
 Keyboard
 Trackball
o Logical properties
 What is returned to the program via the API
 A position
 An object identifier

Modes
o How and when input is obtained
 Request or event

Incremental (Relative) Devices

Devices such as the data tablet return a position directly to the operating system.

Devices such as the mouse, trackball, and joystick return incremental inputs (or velocities) to the operating system.
o Must integrate these inputs to obtain an absolute position
 Rotation of cylinders in a mouse
 Roll of a trackball
 Difficult to obtain absolute position
 Can get variable sensitivity


Logical Devices

Consider the C and C++ code


o C++: cin >> x;
o C: scanf (“%d”, &x);

What is the input device?


o Can’t tell from the code
o Could be keyboard, file, output from another program

The code provides logical input


o A number (an int) is returned to the program regardless of the physical device

Graphical Logical Devices


Graphical input is more varied than input to standard programs, which is usually numbers, characters, or bits.
Two older APIs (GKS, PHIGS) defined six types of logical input
– Locator: return a position
– Pick: return ID of an object
– Keyboard: return strings of characters
– Stroke: return array of positions
– Valuator: return floating point number
– Choice: return one of n items

Input Modes

Input devices contain a trigger which can be used to send a signal to the
operating system
o Button on mouse
o Pressing or releasing a key

When triggered, input devices return information (their measure) to the system
o Mouse returns position information
o Keyboard returns ASCII code

Request Mode

Input provided to program only when user triggers the device


Typical of keyboard input


– Can erase (backspace), edit, correct until enter (return) key (the trigger)
is depressed

Event Mode

Most systems have more than one input device, each of which can be triggered at
an arbitrary time by a user
Each trigger generates an event whose measure is put in an event queue which can be
examined by the user program

Event Types

Window: resize, expose, iconify


Mouse: click one or more buttons
Motion: move mouse
Keyboard: press or release a key

Idle: nonevent
o Define what should be done if no other event is in
queue


2.3 Clients And Servers

The X Window System introduced a client-server model for a network


of workstations
– Client: OpenGL program
– Graphics Server: bitmap display with a pointing device and a keyboard

2.4 Display Lists

The display processor in modern graphics systems can be considered a graphics server.

Retained mode - the host compiles the graphics program, and this compiled set is maintained in the server within a display list.
The redisplay happens by a simple function call issued from the client to the server.

 It avoids network clogging
 It avoids the client executing the commands time and again

Definition and Execution of display lists:


#define PNT 1
glNewList(PNT, GL_COMPILE);
glBegin(GL_POINTS);
glVertex2f(1.0,1.0);
glEnd();
glEndList();


GL_COMPILE – Tells the system to send the list to the server but not to display
the contents
GL_COMPILE_AND_EXECUTE – Immediate display of the contents while the
list is being constructed.

Each time the point is to be displayed on the server, the client executes the list with:

glCallList(PNT);

The glCallLists function executes multiple lists with a single function call.

Text and Display Lists


 The most efficient way of defining text is to define the font once, using a display
list for each char, and then store the font on the server using these display lists
 A function to draw ASCII characters

void OurFont(char c)
{
   int i;
   float angle;
   switch (c)
   {
   case 'O':
      glTranslatef(0.5, 0.5, 0.0); /* move to the center */
      glBegin(GL_QUAD_STRIP);
      for (i = 0; i <= 12; i++) /* 13 vertices close the ring */
      {
         angle = 3.14159 / 6.0 * i; /* 30 degrees in radians */
         glVertex2f(0.4 * cos(angle) + 0.5, 0.4 * sin(angle) + 0.5);
         glVertex2f(0.5 * cos(angle) + 0.5, 0.5 * sin(angle) + 0.5);
      }
      glEnd();
      break;
   }
}


Fonts in GLUT
 GLUT provides a few raster and stroke fonts

 Function call for stroke text :


glutStrokeCharacter(GLUT_STROKE_MONO_ROMAN, int character)
 Function call for bitmap text :
glutBitmapCharacter(GLUT_BITMAP_8_BY_13, int character)

2.5 Display Lists And Modeling


Building hierarchical models involves incorporating relationships between
various parts of a model

#define EYE 1
glNewList(EYE, GL_COMPILE);
/* code to draw eye */
glEndList();

#define FACE 2
glNewList(FACE, GL_COMPILE);
/* draw outline */
glTranslatef(...);
glCallList(EYE);
glTranslatef(...);
glCallList(EYE);
glEndList();
2.6 Programming Event-Driven Input

Pointing Devices :
A mouse event occurs when one of the buttons of the mouse is pressed or released.

void myMouse(int button, int state, int x, int y)
{
   if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
      exit(0);
}

The callback is registered in the main function:

glutMouseFunc(myMouse);


Window Events

Most window systems allow the user to resize the window. This is a window event, and it poses several problems:

– Do we redraw all the objects?
– What happens to the aspect ratio?
– Do we change the size or attributes of the primitives to suit the new window?

void myReshape(GLsizei w, GLsizei h)


{
/* first adjust clipping box */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0,(GLdouble)w, 0.0, (GLdouble)h);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

/* adjust viewport */
glViewport(0,0,w,h);
}

Keyboard Events
When a keyboard event occurs, the ASCII code for the key that generated the event and
the mouse location are returned.
E.g.
void myKey(unsigned char key, int x, int y)
{
   if (key == 'q' || key == 'Q')
      exit(0);
}

Callback: glutKeyboardFunc(myKey);


GLUT provides the function glutGetModifiers, which enables us to define functionality for the modifier (meta) keys.

The Display and Idle callbacks


Interactive and animation programs might contain many calls for the reexecution of
the display function.
glutPostRedisplay() – Calling this function sets a flag inside GLUT’s main loop
indicating that the display needs to be redrawn.

At the end of each execution of the main loop, GLUT uses this flag to determine if
the display function will be executed.
The function ensures that the display will be drawn only once each time the
program goes through the event loop.
Idle Callback is invoked when there are no other events to be performed.

Its typical use is to continuously generate graphical primitives when nothing else
is happening.
Idle callback : glutIdleFunc(function name)

Window Management

GLUT supports creation of multiple windows:

id = glutCreateWindow("second window");

To set a particular window as the current window where the image is to be rendered:

glutSetWindow(id);

2.7 Menus
GLUT supports pop-up menus
o A menu can have submenus

Three steps
o Define entries for the menu
o Define action for each menu item
 Action carried out if entry selected
– Attach menu to a mouse button
Defining a simple menu:

menu_id = glutCreateMenu(mymenu);
glutAddMenuEntry("clear screen", 1);
glutAddMenuEntry("exit", 2);
glutAttachMenu(GLUT_RIGHT_BUTTON);
Menu callback

void mymenu(int id)
{
   if (id == 1) glClear(GL_COLOR_BUFFER_BIT);
   if (id == 2) exit(0);
}

– Note each menu has an id that is returned when it is created


Add submenus by
glutAddSubMenu(char *submenu_name, int submenu_id)
Picking

Identify a user-defined object on the display

In principle, it should be simple because the mouse gives the position and we should
be able to determine to which object(s) a position corresponds
Practical difficulties
o Pipeline architecture is feed forward, hard to go from screen back to world
o Complicated by screen being 2D, world is 3D
o How close do we have to come to object to say we selected it?

Rendering Modes

OpenGL can render in one of three modes selected by glRenderMode(mode):

– GL_RENDER: normal rendering to the frame buffer (default)
– GL_FEEDBACK: provides a list of the primitives rendered, but no output to the frame buffer
– GL_SELECT: each primitive in the view volume generates a hit record that is placed in a name buffer, which can be examined later
Selection Mode Functions

glSelectBuffer(GLsizei n, GLuint *buff): specifies the name buffer
glInitNames(): initializes the name stack
glPushName(GLuint name): pushes an id onto the name stack
glPopName(): pops the top of the name stack
glLoadName(GLuint name): replaces the top name on the stack

The id is set by the application program to identify objects.


Using Selection Mode


 Initialize the name buffer
 Enter selection mode (using the mouse)
 Render the scene with user-defined identifiers
 Re-enter normal render mode
o This operation returns the number of hits
 Examine the contents of the name buffer (hit records)
– Hit records include id and depth information

Selection Mode and Picking

As we just described it, selection mode won’t work for picking because every
primitive in the view volume will generate a hit

Change the viewing parameters so that only those primitives near the cursor are in
the altered view volume
– Use gluPickMatrix (see text for details)
2.8 A Sample CAD Program

void mouse(int button, int state, int x, int y)
{
   GLuint nameBuffer[SIZE];
   GLint hits;
   GLint viewport[4];
   if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
   {
      /* initialize the name stack */
      glInitNames();
      glPushName(0);
      glSelectBuffer(SIZE, nameBuffer);
      /* set up viewing for selection mode */
      glGetIntegerv(GL_VIEWPORT, viewport); /* get the current viewport */
      glMatrixMode(GL_PROJECTION);
      /* save original viewing matrix */
      glPushMatrix();
      glLoadIdentity();
      /* N x N pick area around cursor */
      gluPickMatrix((GLdouble) x, (GLdouble)(viewport[3] - y), N, N, viewport);
      /* same clipping window as in reshape callback */
      gluOrtho2D(xmin, xmax, ymin, ymax);
      /* enter selection mode and render with identifiers */
      glRenderMode(GL_SELECT);
      draw_objects(GL_SELECT);
      glMatrixMode(GL_PROJECTION);
      /* restore viewing matrix */
      glPopMatrix();
      glFlush();
      /* return to normal render mode; returns the number of hits */
      hits = glRenderMode(GL_RENDER);
      /* process hits from selection-mode rendering */
      processHits(hits, nameBuffer);
      /* normal render */
      glutPostRedisplay();
   }
}

void draw_objects(GLenum mode)
{
   if (mode == GL_SELECT) glLoadName(1);
   glColor3f(1.0, 0.0, 0.0);
   glRectf(-0.5, -0.5, 1.0, 1.0);
   if (mode == GL_SELECT) glLoadName(2);
   glColor3f(0.0, 0.0, 1.0);
   glRectf(-1.0, -1.0, 0.5, 0.5);
}

void processHits(GLint hits, GLuint buffer[])
{
   /* walk the hit records in buffer; each record contains the number of
      names, the min and max depths, and the object ids */
}

2.9 Building Interactive Programs


 Building blocks: equilateral triangle, square, horizontal and vertical line segments.

typedef struct object {
   int type;
   float x, y;
   float color[3];
} object;

Define an array of 100 objects and an index to the last object in the list:

object table[100];
int last_object;

Entering info into the object:

table[last_object].type = SQUARE;
table[last_object].x = x0;
table[last_object].y = y0;
table[last_object].color[0] = red;
…..
last_object++;
 To display all the objects, the code looks like this:

for (i = 0; i < last_object; i++)
{
   switch (table[i].type)
   {
   case 0:
      break;
   case 1:
      glColor3fv(table[i].color);
      triangle(table[i].x, table[i].y);
      break;
   …..
   }
}

 In order to add code for deleting an object, we include some extra information in
the object structure:
float bb[2][2];
bb[0][0] = x0-1.0;
bb[0][1] = y0-1.0;….
2.10 Animating interactive programs
The point (x, y) = (cos θ, sin θ) always lies on a unit circle regardless of the value of θ.

 In order to increase θ by a fixed amount whenever nothing is happening, we use the idle function:

void idle(void)
{
   theta += 2.0;
   if (theta > 360.0) theta -= 360.0;
   glutPostRedisplay();
}
 In order to turn the rotation feature on and off, we can include a mouse function as
follows :
void mouse(int button, int state, int x, int y)
{
if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
glutIdleFunc(idle);
if (button == GLUT_RIGHT_BUTTON && state == GLUT_DOWN)
glutIdleFunc(NULL);
}
Double Buffering

We have two color buffers at our disposal, called the front and the back buffers.
The front buffer is the one that is always displayed.
The back buffer is the one on which we draw.

Function call to swap buffers:

glutSwapBuffers();

By default, OpenGL writes to the back buffer, but this can be controlled using:

glDrawBuffer(GL_BACK);
glDrawBuffer(GL_FRONT_AND_BACK);

2.11 Logical Operations


XOR write

In the usual (default) mode, the source replaces the destination (d′ = s).
o We cannot draw temporary lines this way because we cannot recover what was "under" the line in a fast, simple way.

In exclusive-OR (XOR) mode, d′ = d ⊕ s.
o Since x ⊕ y ⊕ x = y, if we use XOR mode to write a line, we can draw it a second time and the line is erased!

Rubberbanding

Switch to XOR write mode

Draw object
– For a line, we can use the first mouse click to fix one endpoint and then use the motion callback to continuously update the second endpoint
– Each time the mouse is moved, redraw the line (which erases it), then draw the line from the fixed first position to the new second position
– At the end, switch back to normal drawing mode and draw the line
– Works for other objects: rectangles, circles
XOR in OpenGL


There are 16 possible logical operations between two bits


All are supported by OpenGL
o Must first enable logical operations
 glEnable(GL_COLOR_LOGIC_OP)
o Choose logical operation
 glLogicOp(GL_XOR)
 glLogicOp(GL_COPY) (default)

2.12 Scalars, points and vectors

The basic geometric objects and the relationships among them can be described using three fundamental types: scalars, points, and vectors.

Geometric Objects.

Points:
 One of the fundamental geometric objects is a point.
 In a 3D geometric system, a point is a location in space. A point possesses only the location property; mathematically, a point has neither size nor shape.
 Points are useful in specifying objects but are not sufficient by themselves.
Scalars:
 Scalars are objects that obey a set of rules that are abstractions of the operations of ordinary arithmetic.
 Thus, addition and multiplication are defined and obey the usual rules, such as commutativity and associativity; every scalar also has multiplicative and additive inverses.
Vector:
 A vector is another basic object, which has both direction and magnitude; however, a vector does not have a fixed location in space.
 A directed line segment, shown in the figure below connecting two points, has both direction (orientation) and magnitude (length), so it can be called a vector.

Fig.: A directed line segment from P to Q

Because vectors do not have a fixed position, the directed line segments shown in the figure below are identical: they have the same direction and magnitude.

Vector lengths can be altered by scalar multiplication, so the line segment A shown in the figure below is twice the length of line segment B.

Fig.: Line segment A = 2B, twice the length of B

We can also combine directed line segments as shown in figure below by using the head
and tail rule

Fig.: Head-to-tail addition, D = A + B

We obtain a new vector D from two vectors A and B by connecting the head of A to the tail of B. The magnitude and direction of D are determined from the tail of A to the head of B; we call D the sum of A and B and write D = A + B.
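The head-to-tail rule has a simple numeric counterpart: representing each vector by its components, the sum D = A + B is computed componentwise. A minimal sketch (the `vec3` type and function name are illustrative):

```c
/* A vector represented by its three components. */
typedef struct { float x, y, z; } vec3;

/* Componentwise sum: the numeric form of the head-to-tail rule. */
vec3 vec3_add(vec3 a, vec3 b)
{
    vec3 d = { a.x + b.x, a.y + b.y, a.z + b.z };
    return d;
}
```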

Consider the two directed line segments A and E shown in the figure below, with the same length but opposite directions. We can define the vector E in terms of A as E = -A, so E is called the inverse vector of A. The sum of the vectors A and E is the zero vector, denoted 0, which has zero magnitude and undefined orientation.


2.13 THREE-DIMENSIONAL PRIMITIVES

In a three-dimensional world, we can have a far greater variety of geometric objects than we can in two dimensions. When we worked in a two-dimensional plane, we considered objects that were simple curves, such as line segments, and flat objects with well-defined interiors, such as simple polygons. In three dimensions, we retain these objects, but they are no longer restricted to lie in the same plane. Hence, curves become curves in space, and objects with interiors can become surfaces in space. In addition, we can have objects with volumes, such as parallelepipeds and ellipsoids.

Curves in three dimensions. Surfaces in three dimensions. Volumetric objects.


Three features characterize three-dimensional objects that fit well with existing graphics
hardware and software:

1. The objects are described by their surfaces and can be thought of as being hollow.

2. The objects can be specified through a set of vertices in three dimensions.

3. The objects either are composed of or can be approximated by flat, convex polygons.

2.14 COORDINATE SYSTEMS AND FRAMES


So far, we have considered vectors and points as abstract objects, without representing them in an underlying reference system. In a three-dimensional vector space, we can represent any vector w uniquely in terms of any three linearly independent vectors, v1, v2, and v3 (see Appendix B), as

w = α1v1 + α2v2 + α3v3.

The scalars α1, α2, and α3 are the components of w with respect to the basis v1, v2, and v3. These relationships are shown in the figure below. We can write the representation of w with respect to this basis as the column matrix

a = [α1 α2 α3]^T,

where boldface letters denote a representation in a particular basis, as opposed to the original abstract vector w. We can also write this relationship as w = a^T v, where v is the column matrix of the basis vectors.


Fig. Vector derived from three basis vectors.

We usually think of the basis vectors, v1, v2, v3, as defining a coordinate system.
However, for dealing with problems using points, vectors, and scalars, we need a more general
method. Below Figures shows one aspect of the problem. The three vectors form a coordinate
system that is shown in Figure(a) as we would usually draw it, with the three vectors emerging
from a single point. We could use these three basis vectors as a basis to represent any vector in
three dimensions. Vectors, however, have direction and magnitude, but lack a position attribute.
Hence, Figure(b) is equivalent, because we have moved the basis vectors, leaving their
magnitudes and directions unchanged. Most people find this second figure confusing, even
though mathematically it expresses the same information as the first figure. We are still left with
the problem of how to represent points—entities that have fixed positions. Because an affine

space contains points, once we fix a particular reference point—the origin—in such a space, we
can represent all points unambiguously. The usual convention for drawing coordinate axes as
emerging from the origin, as shown in Figure(a), makes sense in the affine space where both
points and vectors have representations. However, this representation requires us to know both
the reference point and the basis vectors. The origin and the basis vectors determine a frame.
Loosely, this extension fixes the origin of the vector coordinate system at some point P0. Within
a given frame, every vector can be written uniquely as

Fig. (a)vectors emerging from a common point Fig.(b) vectors moved

w = α1v1 + α2v2 + α3v3 = a^T v,

just as in a vector space; in addition, every point can be written uniquely as

P = P0 + β1v1 + β2v2 + β3v3.

Thus, the representation of a particular vector in a frame requires three scalars; the representation of a point requires three scalars and the knowledge of where the origin is located.

2.14.1 Representations and N-Tuples

Suppose that vectors e1, e2, and e3 form a basis. The representation of any vector, v, is given by the components (α1, α2, α3) of a vector a, where

v = α1e1 + α2e2 + α3e3.

The basis vectors must themselves have representations, given by

e1 = (1, 0, 0)^T, e2 = (0, 1, 0)^T, e3 = (0, 0, 1)^T.

In other words, the 3-tuple (1, 0, 0) is the representation of the first basis vector. Consequently, rather than thinking in terms of abstract vectors, we can work with 3-tuples, and we can write the representation of any vector v as a column matrix a or the 3-tuple (α1, α2, α3).

2.14.2 Change of Coordinate Systems

Frequently, we are required to find how the representation of a vector changes when we
change the basis vectors. For example, in OpenGL, we define our geometry using the coordinate
system or frame that is natural for the model, which is known as the object or model frame.
Models are then brought into the world frame. At some point, we want to know how these
objects appear to the camera. It is natural at that point to convert from the world frame to the
camera or eye frame. The conversion from the object frame to the eye frame is done by the
model-view matrix.
Let's consider changing representations for vectors first. Suppose that {v1, v2, v3} and {u1, u2, u3} are two bases. Each basis vector in the second set can be represented in terms of the first basis (and vice versa). Hence, there exist nine scalar components, {γij}, such that

u1 = γ11 v1 + γ12 v2 + γ13 v3,
u2 = γ21 v1 + γ22 v2 + γ23 v3,
u3 = γ31 v1 + γ32 v2 + γ33 v3.

These nine scalars define the 3 × 3 matrix M = [γij].

The matrix M contains the information to go from a representation of a vector in one basis to its representation in the second basis. The inverse of M gives the matrix representation of the change from {u1, u2, u3} to {v1, v2, v3}. Consider a vector w that has the representation {α1, α2, α3} with respect to {v1, v2, v3} and the representation {β1, β2, β3} with respect to {u1, u2, u3}. Writing a and b for the corresponding column matrices, we have w = a^T v = b^T u; substituting u = Mv gives a = M^T b, or equivalently b = (M^T)^{-1} a.

Thus, rather than working with our original vectors, typically directed line segments, we can work instead with their representations, which are 3-tuples. These changes in basis leave the origin unchanged. We can use them to represent rotation and scaling of a set of basis vectors to derive another basis set, as shown in the figure below.

Fig. Rotation and scaling of a basis

However, a simple translation of the origin, or change of frame, as shown in the figure below, cannot be represented in this way. After we complete a simple example, we introduce homogeneous coordinates, which allow us to change frames yet still use matrices to represent the change.


Fig. Translation on Basis

2.14.3 Example Change of Representation

Suppose that we have a vector w whose representation in some basis is a = (1, 2, 3)^T. We can denote the three basis vectors as v1, v2, and v3. Hence,

w = v1 + 2v2 + 3v3.

Now suppose that we want to make a new basis from the three vectors v1, v2, and v3 where

u1 = v1,
u2 = v1 + v2,
u3 = v1 + v2 + v3.

The matrix M is


The matrix that converts a representation in v1, v2, and v3 to one in which the basis vectors are
u1, u2, and u3 is
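The two matrices, whose images are missing from these notes, can be reconstructed from the definitions above (this is the standard textbook example; the verification at the end is added for illustration):

```latex
M = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix},
\qquad
(M^{T})^{-1} = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix}.
```

Applying (M^T)^{-1} to a = (1, 2, 3)^T gives b = (-1, -1, 3)^T, and indeed w = -u1 - u2 + 3u3 = -v1 - (v1 + v2) + 3(v1 + v2 + v3) = v1 + 2v2 + 3v3, as required.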

2.15 Frames in OPENGL

As we have seen, OpenGL is based on a pipeline model, the first part of which is a sequence of
operations on vertices, many of which are geometric. We can characterize such operations by a
sequence of transformations or, equivalently, as a sequence of changes of frames for the objects
specified by an application program.

The following is the usual order that the frames occur in the pipeline:
1. Object or model coordinates
2. World coordinates
3. Eye (or camera) coordinates
4. Clip coordinates
5. Normalized device coordinates

6. Window (or screen) coordinates

Object or model coordinates: In most applications, we tend to specify or use an object with a convenient size, orientation, and location in its own frame, called the model or object frame.
World coordinates: The origin of application coordinates might be a location such as the center of the bottom floor of a building. This application frame is called the world frame, and the values are in world coordinates.
Eye (or camera) coordinates: Object and world coordinates are the natural frames for the
application program. However, the image that is produced depends on what the camera or viewer
sees. Virtually all graphics systems use a frame whose origin is the center of the camera’s lens
and whose axes are aligned with the sides of the camera. This frame is called the camera frame
or eye frame.
Clip coordinates: The last three representations are used primarily in the implementation of the
pipeline, but, for completeness, we introduce them here. Once objects are in eye coordinates,
OpenGL must check whether they lie within the view volume. If an object does not, it is clipped
from the scene prior to rasterization. OpenGL can carry out this process most efficiently if it first
carries out a projection transformation that brings all potentially visible objects into a cube
centered at the origin in clip coordinates.
Normalized device coordinates: After this transformation, vertices are still represented in
homogeneous coordinates. The division by the w component, called perspective division, yields
three-dimensional representations in normalized device coordinates.
Window (or screen) coordinates: The final transformation takes a position in normalized device coordinates and, taking into account the viewport, creates a three-dimensional representation in window coordinates. Window coordinates are measured in units of pixels on the display but retain depth information. If we remove the depth coordinate, we are working with two-dimensional screen coordinates.

Fig: Camera and object frames (a) in default frames (b) After applying model-view matrix


2.16 Modelling a colored Cube:

2.16.1 Modelling the faces:

We start by assuming that the vertices of the cube are available through an array of vertices, specified in homogeneous coordinates. We can then use the list of points to specify the faces of the cube. For example, one face is given by the sequence of vertices (0, 3, 2, 1). We can specify the other five faces similarly.
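The vertex array itself was an image omitted from these notes; a plausible reconstruction (assuming a unit cube centered at the origin, each vertex in homogeneous coordinates) is:

```c
/* Eight cube corners in homogeneous coordinates (x, y, z, w).
 * The coordinates are an assumption: a unit cube centered at the origin. */
static const float vertices[8][4] = {
    {-0.5f, -0.5f,  0.5f, 1.0f},   /* 0 */
    {-0.5f,  0.5f,  0.5f, 1.0f},   /* 1 */
    { 0.5f,  0.5f,  0.5f, 1.0f},   /* 2 */
    { 0.5f, -0.5f,  0.5f, 1.0f},   /* 3 */
    {-0.5f, -0.5f, -0.5f, 1.0f},   /* 4 */
    {-0.5f,  0.5f, -0.5f, 1.0f},   /* 5 */
    { 0.5f,  0.5f, -0.5f, 1.0f},   /* 6 */
    { 0.5f, -0.5f, -0.5f, 1.0f}    /* 7 */
};

/* One face of the cube, in the vertex order used in the text. */
static const int front_face[4] = { 0, 3, 2, 1 };
```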

2.16.2 Inward- and Outward-Pointing Faces:

Fig. Traversal of edges of polygon


We have to be careful about the order in which we specify our vertices when we are defining a three-dimensional polygon. We used the order 0, 3, 2, 1 for the first face. The order 1, 0, 3, 2 would be the same, because the final vertex in a polygon definition is always linked back to the first. However, the order 0, 1, 2, 3 is different. Although it describes the same boundary, the edges of the polygon are traversed in the reverse order, 0, 3, 2, 1, as shown in the figure above. The order is important because each polygon has two sides, and our graphics systems can display either or both of them. From the camera's perspective, we need a consistent way to distinguish between the two faces of a polygon, and the order in which the vertices are specified provides this information. We call a face outward-facing if the vertices are traversed in a counter-clockwise order when the face is viewed from the outside. This method is also known as the right-hand rule: if you orient the fingers of your right hand in the direction the vertices are traversed, the thumb points outward.

2.16.3 Data Structures for Object Representation:


We could now describe our cube through a set of vertex specifications. For example, we could use a two-dimensional array of positions.

Fig : Vertex Representation of a cube

2.16.4 The Color Cube:

We can use the vertex list to define a color cube. We use a function quad that takes as input the
indices of four vertices in outward pointing order and adds data to two arrays, to store the vertex
positions and the corresponding colors for each face in the arrays

Note that because we can only display triangles, the quad function must generate two triangles
for each face and thus six vertices. If we want each vertex to have its own color, then we need 24
vertices and 24 colors for our data. Using this quad function, we can specify our cube through
the function


We will assign the colors to the vertices using the colors of the corners of the color solid (black, white, red, green, blue, cyan, magenta, yellow). We assign a color for each vertex using the index of the vertex. Alternately, we could use the first index of the first vertex specified by quad to fix the color for the entire face. Here are the RGBA colors:

Here is the quad function that uses the first three vertices to specify one triangle and the
first, third and fourth to specify the second:
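The quad function itself was an omitted image; a sketch consistent with the description (the array names, `add_vertex` helper, and the assumption that the position and color tables are filled elsewhere are all hypothetical) is:

```c
#define MAX_VERTICES 36            /* 6 faces x 2 triangles x 3 vertices */

static float points[MAX_VERTICES][4];   /* output vertex positions */
static float colors[MAX_VERTICES][4];   /* output vertex colors */
static int   index_count = 0;           /* vertices emitted so far */

static float vertex_positions[8][4];    /* assumed filled elsewhere: cube corners */
static float vertex_colors[8][4];       /* assumed filled elsewhere: corner colors */

/* Copy one cube vertex (position and color) into the output arrays. */
static void add_vertex(int v)
{
    int k;
    for (k = 0; k < 4; k++) {
        points[index_count][k] = vertex_positions[v][k];
        colors[index_count][k] = vertex_colors[v][k];
    }
    index_count++;
}

/* Split the quad (a, b, c, d) into the triangles (a, b, c) and (a, c, d). */
void quad(int a, int b, int c, int d)
{
    add_vertex(a); add_vertex(b); add_vertex(c);   /* first triangle */
    add_vertex(a); add_vertex(c); add_vertex(d);   /* first, third, fourth */
}
```

Each call to quad therefore emits six vertices, so the six calls that build the cube fill all 36 slots.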


2.16.5 Bilinear Interpolation


Although we have specified colors for the vertices of the cube, the graphics system must decide how to use this information to assign colors to points inside the polygon. There are many ways to use the colors of the vertices to fill in, or interpolate, colors across a polygon. Probably the most common methods, ones that we use in other contexts as well, are based on bilinear interpolation.
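A minimal numeric sketch of bilinear interpolation for a single color channel: interpolate along two opposite edges of the quad, then interpolate between those two results. The function name and parameterization are illustrative, not from the original notes.

```c
/* c00..c11 are the channel values at the four corners;
 * (u, v) are parameters in [0, 1] across the quad. */
float bilerp(float c00, float c10, float c01, float c11, float u, float v)
{
    float bottom = (1.0f - u) * c00 + u * c10;   /* along the bottom edge */
    float top    = (1.0f - u) * c01 + u * c11;   /* along the top edge */
    return (1.0f - v) * bottom + v * top;        /* between the two edges */
}
```

Applied per channel (R, G, B), this is how smoothly shaded interiors arise from per-vertex colors.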


2.16.6 Displaying the Cube:

However, the display of the cube is not very informative. Because the sides of the cube are
aligned with the clipping volume, we see only the front face. The display also occupies the entire
window. We could get a more interesting display by changing the data so that it corresponds to a
rotated cube. We could scale the data to get a smaller cube. For example, we could scale the cube
by half by changing the vertex data to

but that would not be a very flexible solution. We could put the scale factor in the quad function.
A better solution might be to change the vertex shader to

attribute vec4 vPosition;
attribute vec4 vColor;
varying vec4 color;

void main()
{
   gl_Position = 0.5 * vPosition;
   color = vColor;
}
Note that we also changed the vertex shader to use input data in four-dimensional homogeneous coordinates. We can simplify the fragment shader to

varying vec4 color;

void main()
{
   gl_FragColor = color;
}

