
Debre Markos University

Faculty of Technology
Department of Computer Science
CoSc3072 – Computer Graphics
Chapter 5 Handout – Window-to-Viewport Transformation, Clipping, Projections and 3D Objects

THE VIEWING PIPELINE

A world-coordinate area selected for display is called a window. An area on a display device to
which a window is mapped is called a viewport. The window defines what is to be viewed; the
viewport defines where it is to be displayed.

Often, windows and viewports are rectangles in standard position, with the rectangle edges
parallel to the coordinate axes. Other window or viewport geometries, such as general polygon
shapes and circles, are used in some applications, but these shapes take longer to process.

In general, the mapping of a part of a world-coordinate scene to device coordinates is referred
to as a viewing transformation. Sometimes the two-dimensional viewing transformation is
simply referred to as the window-to-viewport transformation or the windowing transformation.

Window to Viewport Mapping

A point at position (xw,yw) in the window is mapped into position (xv, yv) in the associated
viewport.
To maintain the same relative placement in the viewport as in the window, we require that

(xv - xvmin) / (xvmax - xvmin) = (xw - xwmin) / (xwmax - xwmin)
(yv - yvmin) / (yvmax - yvmin) = (yw - ywmin) / (ywmax - ywmin)

Solving these expressions for the viewport position (xv, yv), we have

xv = xvmin + (xw - xwmin) sx
yv = yvmin + (yw - ywmin) sy

Computer Graphics Page 1


where the scaling factors are

sx = (xvmax - xvmin) / (xwmax - xwmin)
sy = (yvmax - yvmin) / (ywmax - ywmin)


Suggested Exercises/Activities

Try to reproduce the following picture as closely as possible:

Clipping
Generally, any procedure that identifies those portions of a picture that are either inside or
outside of a specified region of space is referred to as a clipping algorithm, or simply clipping.
The region against which an object is to be clipped is called a clip window.

Applications of clipping:
• extracting part of a defined scene for viewing;
• identifying visible surfaces in three-dimensional views;
• antialiasing line segments or object boundaries;
• creating objects using solid-modeling procedures;
• displaying a multi-window environment;
• drawing and painting operations that allow parts of a picture to be selected for copying,
moving, erasing, or duplicating.

Depending on the application, the clip window can be a general polygon or it can even have
curved boundaries. We first consider clipping methods using rectangular clip regions.

Why Clip?
• Rasterization is very expensive
• Approximately linear with number of fragments created
• Math and logic per pixel
• If we only rasterize what is actually viewable, we can save a lot
• A few operations now can save many later



Clipping Primitives

• Different primitives can be handled in different ways


• Points
• Lines
• Polygons
• Text

POINT CLIPPING

Assuming that the clip window is a rectangle in standard position, we save a point P = (x, y) for
display if the following inequalities are satisfied:

xwmin <= x <= xwmax
ywmin <= y <= ywmax

where the edges of the clip window (xwmin, xwmax, ywmin, ywmax) can be either the
world-coordinate window boundaries or viewport boundaries. If any one of these four
inequalities is not satisfied, the point is clipped (not saved for display).

Although point clipping is applied less often than line or polygon clipping, some applications
may require a point clipping procedure. For example, point clipping can be applied to scenes
involving explosions or sea foam that is modeled with particles (points) distributed in some
region of the scene.

LINE CLIPPING

Figure illustrates possible relationships between line positions and a standard rectangular
clipping region. A line clipping procedure involves several parts. First, we can test a given line
segment to determine whether it lies completely inside the clipping window. If it does not, we
try to determine whether it lies completely outside the window.

Finally, if we cannot identify a line as completely inside or completely outside, we must perform
intersection calculations with one or more clipping boundaries. We process lines through the
"inside-outside'' tests by checking the line endpoints. A line with both endpoints inside all
clipping boundaries, such as the line from P1 to P2 is saved. A line with both endpoints outside
any one of the clip boundaries (line P3, P4 in Fig.) is outside the window.
All other lines cross one or more clipping boundaries, and may require calculation of multiple
intersection points. To minimize calculations, we try to devise clipping algorithms that can
efficiently identify outside lines and reduce intersection calculations.



For a line segment with endpoints (x1, y1) and (x2, y2) and one or both endpoints outside the
clipping rectangle, the parametric representation

x = x1 + u (x2 - x1)
y = y1 + u (y2 - y1),    0 <= u <= 1

could be used to determine values of parameter u for
intersections with the clipping boundary coordinates. If the value of u for an intersection with a
rectangle boundary edge is outside the range 0 to 1, the line does not enter the interior of the
window at that boundary. If the value u is within the range from 0 to 1, the line segment does
indeed cross into the clipping area.

This method can be applied to each clipping boundary edge in turn to determine whether any
part of the line segment is to be displayed. Line segments that are parallel to window edges can
be handled as special cases.

Clipping line segments with these parametric tests requires a good deal of computation, and
faster approaches to clipping are possible. A number of efficient line clippers have been
developed, and we survey the major algorithms in the next section. Some algorithms are
designed explicitly for two-dimensional pictures, and some are easily adapted to three-
dimensional applications.

Cohen-Sutherland Line Clipping

This is one of the oldest and most popular line-clipping procedures. Generally, the method
speeds up the processing of line segments by performing initial tests that reduce the number of
intersections that must be calculated. Every line endpoint in a picture is assigned a four-digit
binary code, called a region code, which identifies the location of the point relative to the
boundaries of the clipping rectangle.

Regions are set up in reference to the boundaries as shown in Fig. Each bit position in the
region code is used to indicate one of the four relative coordinate positions of the point with
respect to the clip window: to the left, right, top, or bottom. By numbering the bit positions in
the region code as 1 through 4 from right to left, the coordinate regions can be correlated with
the bit positions as

bit 1: left
bit 2: right
bit 3: below
bit 4: above



A value of 1 in any bit position indicates that the point is in that relative position; otherwise, the
bit position is set to 0. If a point is within the clipping rectangle, the region code is 0000. A point
that is below and to the left of the rectangle has a region code of 0101.

Bit values in the region code are determined by comparing endpoint coordinate values (x, y) to
the clip boundaries. Bit 1 is set to 1 if x < xwmin. The other three bit values can be determined
using similar comparisons. For languages in which bit manipulation is possible, region-code bit
values can be determined with the following two steps:

(1) Calculate differences between endpoint coordinates and clipping boundaries.


(2) Use the resultant sign bit of each difference calculation to set the corresponding value in the
region code. Bit 1 is the sign bit of x –xwmin; bit 2 is the sign bit of xwmax - x; bit 3 is the sign bit
of y - ywmin; and bit 4 is the sign bit of ywmax - y.

Once we have established region codes for all line endpoints, we can quickly determine which
lines are completely inside the clip window and which are clearly outside. Any lines that are
completely contained within the window boundaries have a region code of 0000 for both
endpoints, and we trivially accept these lines. Any lines that have a 1 in the same bit position in
the region codes for each endpoint are completely outside the clipping rectangle, and we
trivially reject these lines. We would discard the line that has a region code of 1001 for one
endpoint and a code of 0101 for the other endpoint. Both endpoints of this line are left of the
clipping rectangle, as indicated by the 1 in the first bit position of each region code.
A method that can be used to test lines for total clipping is to perform the logical and operation
on both region codes. If the result is not 0000, the line is completely outside the clipping
region.

Lines that cannot be identified as completely inside or completely outside a clip window by
these tests are checked for intersection with the window boundaries. As shown in Fig. such
lines may or may not cross into the window interior. We begin the clipping process for a line by
comparing an outside endpoint to a clipping boundary to determine how much of the line can
be discarded.

Then the remaining part of the Line is checked against the other boundaries, and we continue
until either the line is totally discarded or a section is found inside the window. We set up our
algorithm to check line endpoints against clipping boundaries in the order left, right, bottom,
top.
To illustrate the specific steps in clipping lines against rectangular boundaries using the Cohen-
Sutherland algorithm, we show how the lines in the figure could be processed. Starting with the
bottom endpoint of the line from P1 to P2, we check P1 against the left, right, and bottom
boundaries in turn and find that this point is below the clipping rectangle. We then find the
intersection point P1' with the bottom boundary and discard the line section from P1 to P1'.
The line now has been reduced to the section from P1' to P2. Since P2 is outside the clip
window, we check this endpoint against the boundaries and find that it is to the left of the
window. Intersection point P2' is calculated, but this point is above the window. So the final
intersection calculation yields P2'', and the line from P1' to P2'' is saved.

This completes processing for this line, so we save this part and go on to the next line. Point P3
in the next line is to the left of the clipping rectangle, so we determine the intersection P3' and
eliminate the line section from P3 to P3'. By checking region codes for the line section from P3'
to P4, we find that the remainder of the line is below the clip window and can be discarded also.

Intersection points with a clipping boundary can be calculated using the slope-intercept form of
the line equation. For a line with endpoint coordinates (x1, y1) and (x2, y2), the y coordinate of
the intersection point with a vertical boundary can be obtained with the calculation

y = y1 + m (x - x1)

where the x value is set either to xwmin or to xwmax, and the slope of the line is calculated as
m = (y2 - y1) / (x2 - x1). Similarly, if we are looking for the intersection with a horizontal
boundary, the x coordinate can be calculated as

x = x1 + (y - y1) / m

with y set either to ywmin or to ywmax.

  Algorithm: 
1. if ((regioncode1 | regioncode2) == 0000)
accept the line
2. if ((regioncode1 & regioncode2) != 0)
reject the line
3. Otherwise, clip the line against a window edge for which one endpoint's region-code bit is 1
4. Assign the new vertex a 4-bit region code
5. Return to step 1

What Are Projections?



Our 3-D scenes are all specified in 3-D world coordinates
To display these we need to generate a 2-D image - project objects onto a picture plane

Converting From 3-D to 2-D

Projection is just one part of the process of converting from 3-D world coordinates to a 2-D
image

Types of Projections

There are two broad classes of projection:


– Parallel: Typically used for architectural and engineering drawings
– Perspective: Realistic looking and used in computer graphics

Parallel Projection / Orthographic Projection

One method for generating a view of a solid object is to project points on the object surface
along parallel lines onto the display plane. By selecting different viewing positions, we can
project visible points on the object onto the display plane to obtain different two-dimensional
views of the object, as in Figure. In a parallel projection, parallel lines in the world-coordinate
scene are projected into parallel lines on the two-dimensional display plane. This technique is used



in engineering and architectural drawings to represent an object with a set of views that
maintain relative proportions of the object. The appearance of the solid object can then be
reconstructed from the mapped views.

Arguably the simplest projection


o Image plane is perpendicular to one of the coordinate axes;
o Project onto plane by dropping that coordinate;
o All rays are parallel.

Advantages and Disadvantages

• Preserves both distances and angles


– Shapes preserved
– Can be used for measurements
• Building plans
• Manuals
• Cannot see what object really looks like because many surfaces hidden from view

Perspective Projection

Perspective projections are much more realistic than parallel projections

Another method for generating a view of a three-dimensional scene is to project points to the
display plane along converging paths. This causes objects farther from the viewing position to
be displayed smaller than objects of the same size that are nearer to the viewing position. In a
perspective projection, parallel lines in a scene that are not parallel to the display plane are
projected into converging lines. Scenes displayed using perspective projections appear more



realistic, since this is the way that our eyes and a camera lens form images. In the perspective
projection view shown in Figure, parallel lines appear to converge to a distant point in the
background, and distant objects appear smaller than objects closer to the viewing position.

There are a number of different kinds of perspective views


The most common are one-point and two-point perspectives

Projectors converge at center of projection

Naturally we see things in perspective


o Objects appear smaller the farther away they are;
o Rays from view point are not parallel.

Vanishing Points

• Parallel lines (not parallel to the projection plane) on the object converge at a single point
in the projection (the vanishing point)
• Drawing simple perspectives by hand uses these vanishing point(s)



Advantages and Disadvantages
• Objects further from viewer are projected smaller than the same sized objects closer to
the viewer (diminution)
– Looks realistic
• Equal distances along a line are not projected into equal distances (nonuniform
foreshortening)
• Angles preserved only in planes parallel to the projection plane
• More difficult to construct by hand than parallel projections (but not more difficult by
computer)

Elements of a Perspective Projection

The Up And Look Vectors

The look vector indicates the direction in which the camera is pointing
The up vector determines how the camera is rotated
For example, is the camera held vertically or horizontally?



Projections in OpenGL

So far we have not taken into account properties of the camera:

– angle of view
– view volume

The perspective view volume is a frustum (a truncated pyramid); objects not within the view
volume are said to be clipped out.

Typical sequence:

glMatrixMode (GL_PROJECTION);
glLoadIdentity ( );
glFrustum (xmin, xmax, ymin, ymax, near, far);

The near and far distances must be positive and are measured from the COP (center of
projection); be careful about the signs!
Perspective viewing in OpenGL

Typical sequence:

glMatrixMode (GL_PROJECTION);
glLoadIdentity ( );
gluPerspective (fovy, aspect, near, far);

fovy – view angle in the y direction
aspect – aspect ratio (width/height)



Parallel viewing in OpenGL

Typical sequence:
glMatrixMode (GL_PROJECTION);
glLoadIdentity ( );
glOrtho (xmin, xmax, ymin, ymax, near, far);

Typical sequence for enabling depth buffering:

glutInitDisplayMode (GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
glEnable(GL_DEPTH_TEST);

Clear the depth buffer before each new rendering:

glClear(GL_DEPTH_BUFFER_BIT);

Computer Viewing

• There are three aspects of the viewing process, all of which are implemented in the
pipeline,
– Positioning the camera
• Setting the model-view matrix
– Selecting a lens
• Setting the projection matrix
– Clipping
• Setting the view volume
The OpenGL Camera

• In OpenGL, initially the object and camera frames are the same
– Default model-view matrix is an identity
• The camera is located at origin and points in the negative z direction
• OpenGL also specifies a default view volume that is a cube with sides of length 2
centered at the origin
– Default projection matrix is an identity

The LookAt Function

gluLookAt(eyex, eyey, eyez, atx, aty, atz, upx, upy, upz)



3D- Graphics
Setting the Camera

When you take a photograph of a 3-D scene in real life you use a camera. The camera has a
position and orientation that determine what parts of the world are included in the
photograph and what parts are 'clipped' out. OpenGL is no different. To define a camera in
OpenGL we must do two things - set the camera position and orientation, and set a view
volume.

We position and orient the camera by using the function gluLookAt.

gluLookAt takes nine arguments. The order and meaning of these arguments is:

gluLookAt(camera.x, camera.y, camera.z, look.x, look.y, look.z, up.x, up.y, up.z);

We can see that the nine arguments make up three separate vectors/points. The first, camera,
specifies the position in world coordinates of the camera that is looking at the scene. The
second, look, is the point in world coordinates that the camera is looking at. The third, up,
specifies an 'upwards' direction for the camera, i.e. it defines the vertical axis of the captured
image. In the gluLookAt() definition below, the camera is positioned at the point (8,8,8), looking
at the origin (0,0,0), and the z-axis is 'up'.

gluLookAt(8,8,8,0,0,0,0,0,1);

Therefore, as programmers, we know that any objects we want to appear in the final image
should be located close to the origin; otherwise the camera won't see them.
Notice also that the call to gluLookAt comes after a call to glMatrixMode specifying
GL_MODELVIEW as the current matrix. This is necessary because the camera position and
orientation are included in the modelview matrix. In fact, you may often hear about the
modelview matrix consisting of two separate matrices - M (the modeling matrix) and V (the
viewing matrix) - hence the name 'modelview' matrix. The modeling matrix positions the
objects in the world, and the viewing matrix transforms them from world coordinates into a
coordinate system defined by the camera position and orientation.
The gluLookAt routine defines the V (view) part of the modelview matrix.



In the previous chapters we saw how the gluOrtho2D routine was used to specify the bounds of
the world window. This effectively specifies what parts of the world would be included in the
image. In fact, this was a special case of the more general function glOrtho. Whereas
gluOrtho2D specifies a 2-D viewing window, glOrtho specifies a 3-D view volume. In other
words, as well as having left, right, bottom and top bounds, we also have near and far bounds.
Consider the following code fragment:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
float l, r, b, t, n, f;
l = -12; r = 12; b = -12; t = 12; n = 0; f = 30;
glOrtho(l, r, b, t, n, f);

Here we are specifying a view volume that is defined by a 24 by 24 window (from -12 to +12 in
both x and y directions), a near plane of 0 (the camera) and a far plane of 30. Note that these
are all in camera coordinates, i.e. the camera itself is at the origin, the z-axis is the axis along
which the camera is looking, the y-axis is 'up', and the x-axis is perpendicular to both the y and z
axes.

In terms of the OpenGL pipeline we introduced in chapter four, both gluOrtho2D and glOrtho
specify the projection matrix. Both are orthographic or parallel projection matrices. This
basically means that objects that are further away from the camera do not appear smaller. To
achieve a better 3-D effect we need to use a perspective projection matrix. With orthographic
projections the view volume is a cuboid, but with perspective projections it is a frustum. A
frustum is like a cuboid except that one end is narrower than the other. This is illustrated in the
picture to the right. The view volume for both projection types is the volume enclosed between
the near and far planes. With orthographic projection the lines that project each point in 3-D
world coordinates into the 2-D image plane are parallel (hence the term parallel projection),
whereas with perspective projection they converge to a point.

In OpenGL we can define perspective projections using the routines gluPerspective or
glFrustum instead of glOrtho.

gluPerspective(viewAngle, aspectRatio, near, far);

glFrustum(left, right, bottom, top, near, far);



Here the viewAngle is the angle subtended at the camera between the top and bottom bounds
of the view volume, aspectRatio is the aspect ratio of the projected image, and near and far are
the near and far planes of the view volume. You can try experimenting with these arguments to
see the difference between gluPerspective and glOrtho.

Drawing 3-D Objects

OpenGL comes with a number of built-in routines for generating common 3-D objects, such as
cubes, cones and cylinders.

Example

The following function will be new to you:

glutWireCone(0.12,0.6,12,9);

This function draws a wireframe cone - you can also draw a solid cone with the similar function
glutSolidCone. Both take 4 arguments - the radius of the cone at its base, the height of the
cone, and the number of slices and stacks that make up the cone.
glPushMatrix();
...
glPopMatrix();

When we only want the transformation to apply to a specific object, we need to 'save' the
current modelview matrix before applying it, and then 'restore' it afterwards. OpenGL provides
a stack for this purpose. Before applying any transformation that is specific to an object or
objects, we push the current matrix onto the stack, and after we have finished with the
transformations, we pop it back again, so that it will not be applied to all subsequent objects.

The following is a summary of the built-in routines for generating standard 3-D objects in
OpenGL/glut:

Cube: glutWireCube(GLdouble size), glutSolidCube(GLdouble size)
Sphere: glutWireSphere(GLdouble radius, GLint nSlices, GLint nStacks),
glutSolidSphere(GLdouble radius, GLint nSlices, GLint nStacks)
Torus: glutWireTorus(GLdouble inRad, GLdouble outRad, GLint nSlices, GLint nStacks),
glutSolidTorus(GLdouble inRad, GLdouble outRad, GLint nSlices, GLint nStacks)
Cone: glutWireCone(GLdouble baseRad, GLdouble height, GLint nSlices, GLint nStacks),
glutSolidCone(GLdouble baseRad, GLdouble height, GLint nSlices, GLint nStacks)
Teapot: glutWireTeapot(GLdouble size), glutSolidTeapot(GLdouble size)
Tetrahedron: glutWireTetrahedron(), glutSolidTetrahedron()
Octahedron: glutWireOctahedron(), glutSolidOctahedron()
Dodecahedron: glutWireDodecahedron(), glutSolidDodecahedron()
Icosahedron: glutWireIcosahedron(), glutSolidIcosahedron()



In summary, OpenGL has many built-in routines for generating 3-D objects. By combining these
objects and varying the arguments used to create them, it is possible to build a wide range of
complex objects. But remember that if you want to apply modelview transformations that are
specific to an object or objects, you should set the current matrix to be the modelview matrix
(using glMatrixMode), push the current matrix onto the stack, and pop it back again afterwards.

Hidden Surface Removal

When we are dealing with 3-D objects there will normally be some parts of the objects that will
be hidden from the camera. For example, the back of the 3-D teapot is hidden from the camera
by the front of the teapot. It is normally not necessary or desirable to display these 'hidden
surfaces'. In computer graphics there are many different algorithms for hidden surface removal.
The simplest of these is the depth-buffer, or z-buffer approach.

Depth-buffering works by storing a corresponding 'depth' for each pixel in the image. This
depth represents the distance from the camera of the part of an object that led to the pixel
being set. Therefore before setting the value of any pixel we check the current depth of the
pixel and compare it with the depth of the object we are currently drawing. If our current
object is in front of the existing pixel, we overwrite it. Otherwise we leave the current pixel
value as it is. That way each pixel's value is always set by the nearest of any primitives that lie
on the same straight line from the camera.

OpenGL makes hidden surface removal using depth-buffering easy for us. First, when we call
glutInitDisplayMode in the main function we have to specify that we want extra memory
allocated for each pixel to store the depth (the depth buffer).

glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);

Next, we switch on depth-buffering with the following command:

glEnable(GL_DEPTH_TEST);

Finally, when we clear the screen we also need to clear the depth buffer by using:

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

Try disabling depth-buffering in your 3-D program (you can just comment out the
glEnable(GL_DEPTH_TEST) line) and see the effect it has on the final image.



// Program for Cohen-Sutherland Line Clipping Algorithm
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <GL/glut.h>

#define outcode int

int z;
double xvmin = 100, yvmin = 100, xvmax = 200, yvmax = 200; // viewport (clip window) boundaries
double x0 = 40, y0 = 10, x1 = 220, y1 = 130;               // endpoints of the first line

const int RIGHT = 2;   // bit code for the right
const int LEFT = 1;    // bit code for the left
const int TOP = 8;     // bit code for the top
const int BOTTOM = 4;  // bit code for the bottom

outcode ComputeOutCode(double x, double y); // computes the bit code of a point

void drawtext()
{
    char a[100] = "Press B/b for before clipping and A/a for after clipping";
    glRasterPos2i(200, 300);
    for (int i = 0; i < strlen(a); i++)
        glutBitmapCharacter(GLUT_BITMAP_9_BY_15, a[i]);
}

void CohenSutherlandLineClipAnddraw(double x0, double y0, double x1, double y1)
{
    // outcodes for P0, P1 and whatever point lies outside the clip rectangle
    outcode outcode0, outcode1, outcodeOut;
    int accept = 0, done = 0;

    // compute outcodes
    outcode0 = ComputeOutCode(x0, y0);
    outcode1 = ComputeOutCode(x1, y1);

    do
    {
        if (!(outcode0 | outcode1)) // logical OR is 0: trivially accept and exit
        {
            accept = 1;
            done = 1;
        }
        else if (outcode0 & outcode1) // logical AND is not 0: trivially reject and exit
            done = 1;
        else
        {
            // failed both tests, so calculate the line segment to clip:
            // from an outside point to an intersection with a clip edge
            double x, y;
            // at least one endpoint is outside the clip rectangle; pick it
            outcodeOut = outcode0 ? outcode0 : outcode1;
            // now find the intersection point; slope m = (y1-y0)/(x1-x0)
            // use the formulas y = y0 + m*(x - x0), x = x0 + (1/m)*(y - y0)
            if (outcodeOut & TOP) // point is above the clip rectangle
            {
                x = x0 + (x1 - x0) * (yvmax - y0) / (y1 - y0);
                y = yvmax;
            }
            else if (outcodeOut & BOTTOM) // point is below the clip rectangle
            {
                x = x0 + (x1 - x0) * (yvmin - y0) / (y1 - y0);
                y = yvmin;
            }
            else if (outcodeOut & RIGHT) // point is to the right of the clip rectangle
            {
                y = y0 + (y1 - y0) * (xvmax - x0) / (x1 - x0);
                x = xvmax;
            }
            else // point is to the left of the clip rectangle
            {
                y = y0 + (y1 - y0) * (xvmin - x0) / (x1 - x0);
                x = xvmin;
            }
            // now we move the outside point to the intersection point
            // and get ready for the next pass
            if (outcodeOut == outcode0) // the outside point was P0: update x0,y0 to x,y
            {
                x0 = x;
                y0 = y;
                outcode0 = ComputeOutCode(x0, y0);
            }
            else // the outside point was P1: update x1,y1 to x,y
            {
                x1 = x;
                y1 = y;
                outcode1 = ComputeOutCode(x1, y1);
            }
        }
    } while (!done);

    if (accept)
    {
        glColor3f(1.0, 0.0, 0.0);   // clip window in red
        glBegin(GL_LINE_LOOP);
        glVertex2d(xvmin, yvmin);
        glVertex2d(xvmax, yvmin);
        glVertex2d(xvmax, yvmax);
        glVertex2d(xvmin, yvmax);
        glEnd();
        glColor3f(0.0, 0.0, 1.0);   // clipped line in blue
        glBegin(GL_LINES);
        glVertex2d(x0, y0);
        glVertex2d(x1, y1);
        glEnd();
    }
}

outcode ComputeOutCode(double x, double y)
{
    outcode code = 0;
    if (y > yvmax)   // above the clip window
        code |= TOP;
    if (y < yvmin)   // below the clip window
        code |= BOTTOM;
    if (x > xvmax)   // to the right of the clip window
        code |= RIGHT;
    if (x < xvmin)   // to the left of the clip window
        code |= LEFT;
    return code;
}

void display()
{
    if (z == 1)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        drawtext();
        glColor3f(1.0, 0.0, 0.0);   // draw the unclipped lines in red
        glBegin(GL_LINES);
        glVertex2d(x0, y0);
        glVertex2d(x1, y1);
        glVertex2d(160, 120);
        glVertex2d(380, 420);
        glEnd();
        glColor3f(1.0, 0.0, 0.0);
        glBegin(GL_LINE_LOOP);
        glVertex2d(xvmin, yvmin);
        glVertex2d(xvmax, yvmin);
        glVertex2d(xvmax, yvmax);
        glVertex2d(xvmin, yvmax);
        glEnd();
    }
    if (z == 2)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        drawtext();
        CohenSutherlandLineClipAnddraw(x0, y0, x1, y1);
        CohenSutherlandLineClipAnddraw(160, 120, 380, 420);
    }
    glFlush();
}

void keyboard(unsigned char key, int x, int y)
{
    switch (key)
    {
    case 'B':
    case 'b':
        z = 1;
        break;
    case 'A':
    case 'a':
        z = 2;
        break;
    case 'Q':
    case 'q':
        exit(0);
        break;
    }
    glutPostRedisplay();
}

void myinit()
{
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glColor3f(1.0, 0.0, 0.0);
    glPointSize(1.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, 640.0, 0.0, 480.0);
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(640, 480);
    glutInitWindowPosition(10, 10);
    glutCreateWindow("Cohen-Sutherland line clipping algorithm");
    glutDisplayFunc(display);
    myinit();
    glutKeyboardFunc(keyboard);
    glutMainLoop();
    return 0;
}



//Program to Display 3D object (e.g.: Wire Cube)
#include <GL/glut.h>
void init(void)
{
glClearColor (1.0, 0.0, 0.0, 0.0);
glShadeModel (GL_FLAT);
}
void display(void)
{
glClear (GL_COLOR_BUFFER_BIT);
glColor3f (0.0, 1.0, 1.0);
glLoadIdentity (); /* clear the matrix */
/* viewing transformation */
gluLookAt (0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
glScalef (1.0, 2.0, 1.0); /* modeling transformation */
glutWireCube (0.5);
glFlush ();
}
void reshape (int w, int h)
{
glViewport (0, 0, (GLsizei) w, (GLsizei) h);
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
glFrustum (-1.0, 1.0, -1.0, 1.0, 1.5, 20.0);
glMatrixMode (GL_MODELVIEW);
}

int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize (500, 500);
glutInitWindowPosition (100, 100);
glutCreateWindow (argv[0]);
init ();
glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutMainLoop();
return 0;
}

