
Indira Gandhi National Open University
School of Sciences

MMTE-004
Computer Graphics

STUDY GUIDE
UNIT 1
An Overview of Hardware Primitives 7
UNIT 2
2D Shape Primitives 17
UNIT 3
More Output Primitives and Geometric Transformations 41
UNIT 4
Clipping and 3D Primitives 65
UNIT 5
Three Dimensional Transformations 89

APPENDIX 111
Curriculum Design Committee
Dr. B.D. Acharya, Dept. of Science & Technology, New Delhi
Dr. R.K. Khanna, Scientific Analysis Group, DRDO, Delhi
Prof. Adimurthi, School of Mathematics, TIFR, Bangalore
Prof. Susheel Kumar, Dept. of Management Studies, IIT, Delhi
Prof. Archana Aggarwal, CESP, School of Social Sciences, JNU, New Delhi
Prof. Veni Madhavan, Scientific Analysis Group, DRDO, Delhi
Prof. R.B. Bapat, Indian Statistical Institute, New Delhi
Prof. J.C. Misra, Dept. of Mathematics, IIT, Kharagpur
Prof. M.C. Bhandari, Dept. of Mathematics, IIT, Kanpur
Prof. C. Musili, Dept. of Mathematics and Statistics, University of Hyderabad
Prof. R. Bhatia, Indian Statistical Institute, New Delhi
Prof. Sankar Pal, ISI, Kolkata
Prof. A.D. Dharmadhikari, Dept. of Statistics, University of Pune
Prof. A.P. Singh, PG Dept. of Mathematics, University of Jammu
Prof. O.P. Gupta, Dept. of Financial Studies, University of Delhi
Prof. S.D. Joshi, Dept. of Electrical Engineering, IIT Delhi

Faculty Members, School of Sciences, IGNOU:
Prof. Poornima Mital
Dr. Atul Razdan
Prof. Parvin Sinclair
Prof. Sujatha Varma
Dr. S. Venkataraman

Course Design Committee
Prof. S.D. Joshi, IIT, Delhi
Prof. S. Ponnusamy, IIT, Madras
Dr. Vijay Kumar, Cochin University, Kerala
Prof. S. Kumaresan, Bombay University, Mumbai
Prof. (Mrs.) R.D. Mehta, Sardar Patel University, Gujarat
Prof. A. Ojha, PDPM IIIT D&M, GEC Campus, Ranjhi, Jabalpur

Faculty Members, School of Sciences, IGNOU:
Dr. Deepika
Prof. Poornima Mital
Dr. Atul Razdan
Prof. Parvin Sinclair
Prof. Sujatha Varma
Dr. S. Venkataraman

Block Preparation Team
Prof. Subodh Kumar (Editor), Dept. of Computer Science & Engineering, I.I.T. Delhi
Prof. A. Ojha, PDPM IIIT D&M, GEC Campus, Ranjhi, Jabalpur
Prof. Poornima Mital, School of Sciences, IGNOU

Course Coordinator: Prof. Poornima Mital
Acknowledgements: Mr. Surender Singh Chauhan for preparing the CRC.

June, 2010
© Indira Gandhi National Open University, 2010
ISBN-81-
All rights reserved. No part of this work may be reproduced in any form, by mimeograph or any other
means, without permission in writing from the Indira Gandhi National Open University.
Further information on the Indira Gandhi National Open University courses may be obtained from the
University’s Office at Maidan Garhi, New Delhi – 110 068.
Printed and published on behalf of the Indira Gandhi National Open University, New Delhi by the
Director, School of Sciences.
USING THE STUDY GUIDE
This study guide, developed around the textbook “Computer Graphics”,
C-Version, 2nd Edition, by Donald Hearn and M. Pauline Baker, will help
you in studying and understanding the prescribed chapters of the book. The
guide indicates when, what and how much to study from the book, and
consists of five units. Each unit is further divided into sections and sub-
sections. In each section of a unit we have clearly indicated, in a box in bold
letters, the particular chapter/section of the book to be studied at that time. The
exercises to be done by you are also indicated in a box. Typically a box would
be of the form:

Read Secs. 1-1 to 1-8, Chapter 1, pages 24-54 of the book.

or

Do the exercises No. 2-4, 2-6, 2-9, 2-11, 2-12, Chapter 2 on page 101
of the book.

Remember that the instructions in the second box above do not restrict you from
trying other exercises given in the book.

Wherever we feel that more explanation is needed for understanding the text
given in the book, we have provided it in the unit just below the respective box,
clearly indicating the section number/equation number/page number to which
we are referring. In addition, we have given a number of solved
examples/exercises in the units in order to enhance your understanding of
various concepts.

At the end of each unit we have recapitulated the learning points of the unit in
the form of a summary. This is followed by solutions/answers to the exercises
from the book which we have indicated in the boxes, and also to the exercises
given in the unit itself. We hope that you look at the answers only after you
have sincerely tried the exercises.

While studying this guide, please keep the textbook with you.
We advise you to go section by section and follow the instructions given there.
Apart from the exercises indicated in a box, please try to work out as many
exercises as you can from the book.
COMPUTER GRAPHICS
Computer Graphics is a fast developing field of computer science which
primarily deals with modelling, representation and synthesis of real life objects.
This is done using geometric objects and shapes and certain basic
transformations. Due to tremendous advancements in speed and quality of
hardware and software, it has rapidly gained popularity with applications in
areas such as industrial design, computer games, virtual reality, synthetic
content, media and entertainment, interactive TV, graphical user interfaces,
information visualization, interactive art, education and the internet. Computer
graphics plays an increasingly important role in popularizing applications of
computers in a variety of other areas also. Fundamentals of computer graphics
are based on mathematical formulations of curves, surfaces and various
transformations for object representation, digital imaging, colouring, rendering
and animation based on basic principles of optics and kinematics. Thus one of
the important aims of studies in computer graphics is to achieve as much
realism as possible in representation and rendering of real life objects, their
images and animation.

This course on “Computer Graphics” is aimed at providing you an introduction
to mathematical tools and algorithms for generating graphics primitives for
applications in 2D and 3D graphics. The course is developed as wrap-around
material for the textbook ‘Computer Graphics’, C-Version by
Donald Hearn and M. Pauline Baker (second edition). This study guide
will help you in reading the book and in understanding the prescribed
chapters. In the textbook by Hearn and Baker, you will find that the C
language has been used to implement graphics algorithms with an API
(application programming interface) called PHIGS. PHIGS (Programmer’s
Hierarchical Interactive Graphics System) is a very powerful API standard
for rendering 3D computer graphics, and at one time, during the 1990s, it was
considered to be the 3D graphics standard. PHIGS was designed in the 1980s,
inheriting many of its ideas from the Graphical Kernel System of the late
1970s. Due to its early gestation, the standard supports only the most basic 3D
graphics, and because of this PHIGS is no longer in use.

In this study guide, we have used the C language to implement graphics
algorithms with a different API called OpenGL (Open Graphics Library). OpenGL is a
powerful low-level graphics rendering and imaging library available on all
major platforms. An appendix is given at the end of the study guide to make
you acquainted with basic structure of a graphics program using OpenGL. We
advise you to go through the appendix carefully before starting any
programming. Apart from the appendix there are five units in this study guide.

Unit 1 illustrates the diversity of applications of computer graphics: for
designing buildings, automobiles, aircraft and spacecraft, for generating
illustrations and slides for use with projectors, for making motion pictures,
music videos, television shows, etc. Various graphic display devices and
technologies available for displaying and printing graphics objects are
discussed here. Unit 2 introduces you to input and hardcopy devices.
Algorithms for plotting points on a straight line segment, for plotting a circle,
an ellipse and few more important output primitive curves are also discussed in
this unit. Basic methods to fill polygonal or arbitrary shaped closed regions are
discussed in Unit 3. We have also discussed here some primitive two-
dimensional geometric transformations and their applications to build more
useful transformations. In Unit 4, you will learn how a two-dimensional object
is clipped against a rectangular boundary and how a three-dimensional object
can be modeled and rendered using simple three-dimensional primitives. After
exploring Bézier curves you will also learn the construction of splines in this
unit. Finally, Unit 5 introduces you to three-dimensional transformations.
You will see here how all the two-dimensional transformations, except for
rotation, are generalized in a natural way to three dimensions. The unit
concludes with the projection transformations, which transform a 3D scene
for display on a 2D device.

In this course you are required to do practical exercises using
C programming for seven sessions. At the end of each of the Units 2, 3, 4
and 5, we have given a list of practical exercises to be done on the computer at
your programme study centre.

Instructions regarding the practical work

For each of the problems given in the assignment you are expected to:

• Compile and run the program and show it to the counsellor at your centre.
• Get a flow chart and the source code of the program signed by the
counsellor.
• Maintain a file of these signed programs along with the output of your
programs. This file will be a part of your internal assessment and you will
have to produce it at the time of the final practical exam. So please keep it
neat, systematic, updated and handy.

You may like to refer to some books on the subject and try to solve some
exercises given there to get a better grasp of the techniques discussed in this
course. We are giving a list of titles for your reference purposes.

1. James D. Foley, Andries van Dam, Steven K. Feiner and John F. Hughes,
Computer Graphics: Principles and Practice, Addison Wesley, 1997.
2. F. S. Hill, Jr., Computer Graphics using OpenGL, Pearson Education
(second edition), 2006.
3. David F. Rogers, Procedural Elements for Computer Graphics, Tata
McGraw Hill (second edition), 2006.
4. Edward Angel, Interactive Computer Graphics: A Top Down Approach
Using OpenGL (fifth edition), Pearson Education, 2009.
UNIT 1 AN OVERVIEW OF HARDWARE
PRIMITIVES
Structure Page No
1.1 Introduction 7
Objectives
1.2 Applications of Computer Graphics 7
1.3 Display Devices 8
1.4 Raster and Random Scan Display Technologies 10
1.5 Summary 13
1.6 Solutions/Answers 14

1.1 INTRODUCTION
With the advancement of computer technology, computer graphics has become
a practical tool used in real life applications in diverse areas such as science,
engineering, medicine, business, industry, art, entertainment, education, etc.
Due to such widespread applications and utility of computer graphics,
nowadays a broad range of graphics hardware and software systems are available.
You make use of a wide variety of input devices and graphic software packages
while using your personal computers for your day-to-day work. In this unit, we
shall familiarise you with the basic features of graphics hardware components
and graphics software packages.

We shall start the unit by illustrating, in Sec. 1.2, the diversity of applications
of computer graphics: for designing buildings, automobiles, aircraft,
spacecraft etc., for generating illustrations and slides for use with projectors,
for making motion pictures, music videos, television shows, models used as
educational aids, and many more. Sec. 1.3 introduces you to various graphic display devices.
Technologies available for displaying and printing graphics objects are
discussed in Sec. 1.4.

Objectives
After studying this unit, you should be able to
• describe the widespread applications of computer graphics in different
fields;
• describe the functioning of various display devices;
• distinguish between raster scan and random scan display techniques.

1.2 APPLICATIONS OF COMPUTER GRAPHICS


When you begin using a computer and open an application, you simply click
on a graphics object, called an icon. When you watch TV, you often find in
advertisements that one entity changes its shape to become another entity,
for example, a running cheetah turning into a car tyre. This is
called morphing in computer graphics. The successful Hollywood film
Jurassic Park and many more science fiction films use techniques that are
heavily dependent on computer graphics. For planning, designing and
construction of buildings, nowadays many software packages show the entire plan, front
elevation, top elevation and a walkthrough inside and around the building. All
these constitute only a glimpse of what we mean by applications of computer
graphics.

To have a look at a gallery of graphics applications read the following.

Read Secs. 1-1 to 1-8, Chapter 1, pages 24-54 of the book.

After reading this section you must have been motivated to learn computer
graphics. Before we proceed further, you may try the following exercises.
(The book mentioned everywhere refers to ‘Computer Graphics’, C-Version
by Donald Hearn and M. Pauline Baker, second edition.)

E1) List five different areas of applications of computer graphics.

E2) Explain how computer graphics would play a useful role in education and
training.

Let us now discuss some of the output devices used in a graphic system.

1.3 DISPLAY DEVICES


This section gives you an introduction to graphics display devices. Details of
architecture and functioning of some of the important display technologies
have been discussed here.

Read Sec. 2-1, Chapter 2, pages 56-72 of the book.

This section introduces you to cathode ray tubes (CRT), CRT monitors, raster
scan displays, random scan display, flat panel display, direct view storage tubes
and video controllers. We are now listing some of the terms you must
remember after reading this section.

Persistence (of phosphor): The time it takes for the light emitted from the
screen to decay to one-tenth of its original intensity.

The point where the electron gun strikes the phosphor-coated screen starts
glowing, and maximum brightness is observed at the focus (strike) point of the
electron beam. The brightness falls off radially following a Gaussian
distribution (refer to Figure 2-5 of the book on page 59). At the strike point
the energy level of the phosphor atoms is highest, and the energy gradually
spreads over the nearby phosphor atoms. The ‘original intensity’ is the maximum
brightness, achieved when the electron beam strikes the phosphor atoms
and the corresponding electrons reach the maximum possible ‘excited state’.

In order to keep a pixel glowing, the electron beam must strike the same
point again before the screen point fades away, that is, before the energy
level of the phosphor atoms falls below the limit defined by the
persistence. This is known as refreshing of the pixel. In fact, the rate of
refreshing depends on the type of phosphor used and also on the limits of human
perception. Before the human eye can notice that a pixel is beginning
to lose intensity, the pixel must be refreshed. In CRT monitors, the entire
screen frame is normally refreshed 30-60 times in a second. This is called the
refresh rate.

Lower persistence phosphors require a faster refresh rate, and higher persistence
phosphors allow lower refresh rates. Lower persistence phosphors are
used in monitors specially designed for animation, whereas higher
persistence phosphors are mostly used for complex, static representations of
objects, such as in CAD/CAM (Computer Aided Design/Manufacturing).

Resolution: On a display device, the resolution means the maximum number
of points that can be displayed simultaneously without overlap in a row, and
the number of such rows, per square centimetre (or per square inch). In
order to understand what the non-overlap condition is, refer to Fig. 1.

[Fig. 1: Intensity profile of a pixel spot — highest intensity at the point of
electron strike in the centre, with rings of decreasing intensity around it,
negligible at the outermost ring; non-overlapping rings make distinct pixels.]

The outermost ring has almost negligible intensity; the second outer circle
specifies the diameter which becomes visible to human eyes, marking out a
non-overlapping range of spots; and the innermost spot shows maximum intensity.
So the non-overlapping points are those points which can be clearly
distinguished. As given in the book, two adjacent spots are distinct if the
distance between them is greater than the diameter at which each spot has
intensity of about 60% of the maximum intensity.

The number of points is put in matrix form as the number of points in each
horizontal line and the total number of horizontal lines that can be displayed at
a time. In layman's terms, if a CRT monitor can display 1024 points in a
horizontal line, and the total number of horizontal lines that can be displayed
simultaneously is 768, then the resolution of the monitor is 1024 × 768. Each
single point of display, which forms one unit of display, is called a pixel
(short for picture element). So a graphics display device can be treated as a
graph paper with the pixel as the smallest unit of display. The larger the
resolution, the smaller the size of each pixel.

Aspect ratio: The ratio of the number of vertical points to horizontal points
necessary to produce equal-length lines in both directions on the screen. For
example, in a CRT monitor with resolution 1024 × 768, the aspect ratio is
given by 768/1024 = 0.75 or 3/4.

Horizontal retrace: In a refresh CRT monitor, the time it takes for the electron
beam to return to the leftmost point of the next horizontal line after refreshing
one horizontal line is called the horizontal retrace time. The beam is shut off
(blanked) during the period of retrace. Generally, about 83% of the total line
time is spent tracing the line; the remaining 17% is spent bringing the
beam back to the left side before starting the next line.

Vertical retrace: In a refresh CRT monitor, the time it takes for the electron
beam to return to the top leftmost point on the monitor after refreshing all
horizontal lines from top to bottom is called the vertical retrace time. The set
of all horizontal lines that is displayed together is called a frame.

Let us now consider an example to illustrate the use of all these terms for
solving problems.

Example 1: Consider a non-interlaced raster monitor with a resolution of
1024 × 768, a refresh rate of 30 frames per second, a horizontal retrace time of
3 microseconds and a vertical retrace time of 300 microseconds. What fraction
of the total refresh time per frame is spent in retrace of the electron beam?

Solution: The refresh rate is 30 frames per second, so one frame is refreshed
in 1/30 ≈ 0.033333 seconds. Each of the 768 scan lines requires a horizontal
retrace of 3 microseconds, so the horizontal retraces take a total of
768 × 3 microseconds = 0.002304 seconds. The vertical retrace takes
300 microseconds = 0.0003 seconds. Therefore the total retrace time per
frame is 0.002304 + 0.0003 = 0.002604 seconds, and the fraction of the
refresh time spent in retrace of the electron beam is

0.002604 / 0.033333 ≈ 0.07812,

that is, about 7.8% of each frame.
***

You may now test your understanding of what you have learnt by solving the
following exercises.

Do the exercises No. 2-4, 2-9, 2-11, 2-12, Chapter 2 on page 101 of the
book.

The next section introduces you to some of the technologies used in graphics
systems.

1.4 RASTER AND RANDOM SCAN DISPLAY TECHNOLOGIES
For display or printing of graphics objects, various technologies have been
invented. Research and development continues, aimed at more refined and
advanced display technologies that achieve as much realism as possible in
graphics display. Early graphics devices were line-oriented, meaning that the
primitive operation was line drawing. For example, a plotter was based on
sketching the figure through a pen using line segments approximating the
graphics object. For complex objects, thousands of line segments were used to
plot the object. Such devices were called vector displays or random scan
displays. At present, raster graphics is the standard technology
used in most graphics devices. Raster graphics uses a raster,
which is a 2-dimensional grid of pixels (picture elements). Each pixel may be
addressed and illuminated independently, just as in a graph paper we are able
to address every cell independently. Therefore the primitive operation is to
draw a point, that is, to assign a color to a pixel. Everything else is built
upon that. There are a variety of hardcopy and display devices based on raster
graphics technology. Inkjet and laser printers are examples of hardcopy
devices, whereas the CRT (cathode ray tube) and LCD (liquid crystal display) are
well known examples of display devices based on raster graphics. For more
details of these technologies, read the following.

Read Secs. 2-2 and 2-3, Chapter 2, pages 73-77 of the book.

Secs. 2-2 and 2-3 of the book introduce you to the design and functioning of
raster scan displays and random scan displays. This also includes a brief
description of design and functioning of video controllers.

We now sum up the functioning of the two most common display techniques
namely, CRT and LCD.

CRT

• An electron gun is used to send an electron beam aimed at a particular
point on the screen. A deflection system is used to make the beam strike
the screen at a precise point.
• When the electron beam strikes the screen, the coated phosphor emits light
due to the natural phenomenon called phosphorescence. The output decays
rapidly, within approximately 10 to 60 microseconds.
• As a result of this decay, the entire screen must be refreshed at least
30-60 times per second. This is called the refresh rate. If the refresh rate
is low, the screen will tend to flicker.
• CFF (Critical Fusion Frequency) is the minimum refresh rate needed to
avoid flicker. It depends on the persistence of the phosphors and the
human vision system.
• The electron beam traces out a path on the screen, hitting each pixel once
per cycle; tracing out the entire picture once is called a cycle. In the case
of a raster scan display we call the horizontal path of the beam a scan
line. In the case of a random scan display, the path is a directed set of line
segments, which are called vector lines.
• On a raster scan display, the refresh rate is fixed as mentioned above. On
a vector display, a scan is in the form of a list of directed line segments to
be drawn, so the time to refresh depends on the length of the display
list. For small and simple object displays, some delay is introduced to
avoid unnecessary refreshing of the screen.
• In a color CRT, the shadow-mask method is most common since it
produces a wide range of colors. In this method, each pixel consists of a
group of three phosphor dots (one each for red, green and blue), arranged
in a triangular form called a triad. The shadow mask is a grid structure
with one hole per pixel. To excite one pixel, three electron guns are used,
one for each of red, green and blue. Each gun fires its electron stream
through the hole in the mask to hit that pixel.

LCD

• A liquid crystal display consists of two glass plates, each containing a
light polarizer: one glass plate contains a vertical polarizer and the other
a horizontal polarizer.
• The horizontal polarizer acts as a filter, allowing only the horizontal
component of light to pass through, and the vertical polarizer allows only
the vertical component of light to pass through.
• Rows of horizontal conductors on one glass plate and columns of vertical
conductors on the other glass plate are built in to address individual
pixels. The intersections of these rows and columns define the pixel grid.
The conductors are transparent.
• A liquid crystal layer exists between the two glass plates. A liquid
crystal is a compound whose molecules have a crystalline structure, yet
it flows like a liquid.
• Ambient light is captured at one glass plate, vertically polarized, and then
rotated (the phase of the light wave is changed) to horizontal polarity as
it passes through the liquid crystal layer. It then passes
through the horizontal filter. The light is then reflected back through all
the layers to the viewer, giving an appearance of lightness.
• However, if the liquid crystal molecules are charged, they become
aligned and no longer allow the light to change its polarity when it
passes through them. If this occurs, no light can pass through the
horizontal filter, so the screen appears dark. To do this we simply apply a
voltage between two intersecting conductors residing in the two glass
plates.
• The crystals can be dyed to provide color.
• The screen is refreshed at a rate of 60 frames per second using a refresh
buffer. This design is known as a passive matrix LCD. An LCD may be
backlit, so as not to be dependent on ambient light; in fact, ambient light
alone is rarely used nowadays.
• Most LCDs are manufactured as active matrix LCDs. In an active
matrix LCD, a transistor is placed at every pixel location to control the
voltage and to prevent charge from gradually leaking out of the liquid
crystal cells.
• TFT (thin film transistor) is the most popular LCD technology today.

You may now try the following exercises.

Do the exercises No. 2-17 and 2-18, Chapter 2 on page 102 of the book.

The following exercises will also be useful to you.

E3) Suppose we have a video monitor with a display area measuring 12.8
inches across and 9.6 inches high. If the resolution is 1024 by 768 and
the aspect ratio is 1, what is the diameter of each screen point?
E4) Compute the following:

a) Size of a 420 × 300 image at 240 pixels per inch.

b) Resolution (per square inch) of a 3 × 2 inch image that has 768 × 512
pixels.

c) Height of the image obtained by resizing a 1024 × 768 image to one
that is 640 pixels wide, keeping the same aspect ratio.

d) Width of an image having a height of 4 inches and an aspect ratio
of 1.5.

We now end the unit by giving a summary of what we have covered here.

1.5 SUMMARY
In this unit, we have covered the following:

1. Application areas of Computer Graphics, which include Computer-Aided
Design, Computer Art, Presentation Graphics, Entertainment, and
Education and Training.
2. Design and functioning of a refresh cathode ray tube. The primary
components of a refresh cathode ray tube are (i) the electron gun, used to
produce the electron beam, (ii) the heating filament, used to heat up the
cathode for emission of electrons, (iii) the focusing system, used to make
the electrons converge to one point on the screen, (iv) the magnetic
deflection coils, for making a specific point on the screen the focus of the
electron beam, (v) the phosphor-coated screen, where the electron beam
strikes and transfers its kinetic energy to phosphor atoms, which in turn
convert the energy to optical energy so that a picture cell (pixel) glows,
and (vi) a control grid to control the intensity of the electron beam, which
is governed by the number of electrons passing through the control grid.
3. Persistence of phosphor – the time it takes for the emitted light to decay
to one tenth of its original intensity.
4. Resolution means the number of points per inch that can be plotted
simultaneously without overlap, horizontally and vertically.
5. Aspect ratio – the ratio of the number of points plotted vertically to the
number plotted horizontally in order to produce line segments of the same
length in both directions.
6. Raster and random scan displays. In raster scan displays, the whole screen
is refreshed a number of times in a second to keep the picture visible on
the screen; this is called the refresh rate of the system. In random scan
displays, line drawing commands are used to plot a graphic, that is, every
graphics object is saved as a sequence of line segments. This is also
refreshed a number of times in a second, but the refresh rate depends on
the number of lines to be displayed: if the drawing is complex, the refresh
rate differs from when the drawing is small and simple.
7. Memory area where the picture definition is stored is called frame buffer
in a raster scan system.
8. Horizontal retrace refers to the time the electron beam takes to return to
the left end of the next scan line after tracing a scan line. Vertical retrace
means the time taken by the electron beam to come back to the top left
corner of the screen after tracing a frame (all the scan lines of the screen).
9. Two methods exist for displaying colors in a CRT monitor – beam
penetration method and shadow mask method. Beam penetration method
is a simple inexpensive way to produce colors in random scan displays,
but has a limited scope of colors. Shadow mask method is more
commonly used in color CRT monitors having raster scan display
systems.
10. There are two types of shadow masks available: the delta-delta
arrangement and the in-line arrangement. These refer to the positioning of
the colored phosphor dots on the display screen.
11. Flat panel displays have now become more common. These include
liquid crystal displays (LCD) and thin film electroluminescent displays.
12. In raster or random scan display systems, a special purpose processor for
graphics is used; it is called the video (or display) controller.

1.6 SOLUTIONS/ANSWERS
Exercises on page 8 of this unit.

E1) Five major areas of applications of computer graphics are:
i) Study of molecular structures.
ii) Modelling and simulation of automobile parts.
iii) Building design.
iv) Visual effects in movies.
v) Making professional presentations more effective by the inclusion of
graphics and animation.

E2) Computer graphics has occupied an important role in education and


training. One of the important areas where computer graphics plays a
crucial role is computer based learning. For example, in order to teach a
course on engineering drawing, it is better to make use of software like
AutoCAD or SolidWorks that give very convenient and efficient ways of
generating 2D and 3D geometric objects, used in composition of various
objects. For study of molecular structures, visualization of these
structures would help understand better the concept of bonds etc. If one
wants to study electronic circuits and component design, it is always
better to first make a circuit in a supporting software and test its validity,
then make a final electronic circuit. There are a variety of different areas
in education and training where use of graphics enhances the
effectiveness of lectures/training sessions. For example, presentation of
lectures with the help of graphics and animation leads to more effective
styles of teaching and has been adopted worldwide in higher education and
also in elementary education in a number of areas. Especially for making
learning fun for kids, graphics has been used extensively in many such
learning programmes.
Exercises on page 101 of the book.

2-4 i) 640 × 480 = 307200 pixels,
ii) 1280 × 1024 = 1310720 pixels,
iii) 2560 × 2048 = 5242880 pixels.

Since one pixel needs 12 bits to store the information, the number of bytes in
the frame buffer of each of the above raster systems is:
i) 307200 × 12/8 = 460800 bytes = 450 Kbytes,
ii) 1310720 × 12/8 = 1966080 bytes = 1920 Kbytes,
iii) 5242880 × 12/8 = 7864320 bytes = 7680 Kbytes.

2-9 The monitor is 12 inches wide and accommodates 1280 pixels in a scan
line. Similarly, it is 9.6 inches high and accommodates 1024 pixels.
Therefore the size of one pixel should be
(12/1280) × (9.6/1024) = 0.009375 × 0.009375 sq. inch. Therefore the pixel
diameter is 0.009375 inch.

2-11 Refresh rate is r frames per second, so one frame is refreshed in
1/r seconds. One horizontal retrace takes t_horiz and one vertical retrace
takes t_vert time. Since there are m scan lines, the total retrace time for one
frame will be m × t_horiz + t_vert. So the fraction of the frame time 1/r
spent in retrace will be

(m × t_horiz + t_vert) / (1/r) = r (m × t_horiz + t_vert).

2-12 Substitute m = 1280, t_horiz = 5 microseconds, t_vert = 500 microseconds
and r = 60 Hz in the solution to exercise 2-11 to get the desired solution.

Exercises on page 102 of the book.

2-17 Large screen displays have found immense applications in a variety of


different areas including advertisement, entertainment venues, public
information, menu boards, remote education and training, corporate
communications, and government. Nowadays large screen displays have
become so popular that people are using them even for home TVs.

2-18 General purpose systems for programmers do not require high resolution
displays with special graphics processors. But for applications such as
architectural design, CAD systems etc., you need a dedicated graphics
processor with sufficient memory. However for a general purpose
computer, you may not be interested in increasing the cost with
practically no such significance.

Exercises given on pages 12-13 of this unit.

E3) Since the aspect ratio is 1, the diameter can be obtained as:
12.8/1024 = 9.6/768 = 0.0125 inch.
E4) a) Size of a 420 × 300 image at 240 pixels per inch = 1.75 × 1.25 sq.
inches.
b) Resolution of a 3 × 2 inch image that has 768 × 512 pixels = 256
pixels per inch.
c) Height of the image resized from 1024 × 768 to one that is 640 pixels
wide with the same aspect ratio = 640 × 768/1024 = 480 pixels.
d) Width of an image having height of 4 inches and an aspect ratio of
1.5 = 6 inches.
―x―

UNIT 2 2D SHAPE PRIMITIVES
Structure Page No
2.1 Introduction 17
Objectives
2.2 Input and Hardcopy Devices 18
2.3 2D Line Segment Generation 18
2.4 Midpoint Circle Generation Algorithm 22
2.5 Ellipse Generation Algorithm 25
2.6 Other Curves 26
2.7 Summary 30
2.8 Solutions/Answers 31
2.9 Practical Exercises 39

2.1 INTRODUCTION
In Unit 1, you have already learnt to plot a point or a pixel on any display
device. In this unit, we shall introduce you to various devices available for data
input on graphics workstations and also study some of the basic algorithms that
help create graphics output primitives. By graphics output primitives we mean
the basic building blocks that are required to represent and model a real life
object. For example, a four-leg table can be modelled using a few rectangular
shapes and some curved 3D shapes (if required). Hence an output primitive is a
simple geometric object that can be easily constructed and for which nice
constructive algorithms exist. Some of the simpler shapes are lines, circles,
ellipses, etc.

Sec. 2.2 introduces you to input and hardcopy devices. We shall study an
algorithm to plot points on a straight line segment in Sec. 2.3. Algorithms for
plotting a circle is discussed in Sec. 2.4 and that for an ellipse in Sec. 2.5. We
shall also discuss a few more important output primitive curves in Sec. 2.6,
which have been extensively used in graphics. Practical exercises related to
this unit are given at the end of the unit in Sec. 2.9. The aim of this unit is to
make you acquainted with the algorithms and their implementation in C
language using OpenGL. OpenGL is an industry standard graphics library that
gives enormous facilities for object modelling and rendering. In order to know
more about OpenGL, you are advised to refer to the Appendix given at the end
of the study guide.

Objectives
After reading this unit, you should be able to:
 describe the functioning of some of the common input and hardcopy
devices such as mouse, keyboard, scanners and printers;
 generate a line segment using one of the two important line drawing
algorithms – DDA and Bresenham algorithm;
 draw a circle with specified radius and centre using midpoint circle
generation algorithm;
 draw an ellipse if the centre, major and minor axes are specified;
 draw other curves including Bézier curves.
Let us start with understanding the functioning of various input and output
devices.

2.2 INPUT AND HARDCOPY DEVICES


This section gives a brief introduction to the functioning of some well-known
input and hardcopy devices. Input devices include keyboards, mice, scanners,
joysticks, digitizers, data gloves, touch panels and a few more. The hardcopy
devices discussed are mainly printers. You can start with reading the
following.

Read Secs. 2-5 and 2-6, Chapter 2, pages 80-94 of the book.

You can look at an input device in two ways: physically, what it is, and
logically, what it does. Physically, each of the devices like a mouse,
keyboard, joystick or trackball is a small gadget that is manipulated by hand.
The device measures these manipulations and sends the corresponding
numerical information back to the graphics program.

Let us now discuss some algorithms for generating geometric objects.

2.3 2D LINE SEGMENT GENERATION


A digitally plotted line is basically an approximation of the infinitely many
points of an abstract line segment by only a finite number of points on a
computer display. This is needed because display devices can only plot a
finite number of points, however large the resolution of the device may be. So,
the key concept of any line drawing algorithm is to provide an efficient way of
mapping a continuous abstract line onto the discrete plane of a computer display.
This process is called rasterization or scan conversion. These algorithms
basically approximate a real valued line by calculating pixel coordinates to
provide an illusion of line segment. Since the pixels are sufficiently small, the
approximation gives a good illusion of a continuous line segment to the human
eyes. To understand what is meant by rasterization, we plot a line segment on
a pixel grid as shown in Fig. 1(a). The segment points are scan converted,
each approximated by a single shaded pixel, as shown in Fig. 1(b). Here we
have shown a pixel by a square, but you know that a pixel actually has a disc
shape, with the boundary marked as the visible portion of the dot formed by
the striking electron beam. The pixel shown here is the bounding rectangle of
that dot.

(a) (b)
Fig. 1

To know the details about the line drawing algorithm read the following.

Read Secs. 3-1 and 3-2, Chapter 3, pages 104-114 of the book.

You may note that this section explains two fundamental algorithms for
generation of a line segment whose end points are given. The first algorithm
that was proposed was based on the principle used in measuring voltage
deflection in a voltmeter. Hence it was named Digital Differential Analyzer
(DDA) Algorithm. Its improvement was later proposed by Bresenham, which
is known as Bresenham's line drawing algorithm. In order to better
understand the two algorithms, we rasterize a line segment using both the
algorithms as follows.

Let L be a line segment with end points P = (8, 10), Q = (16, 13). Following
the notations given in the book, m = 3/8 and b = 7. Further, Δx = 8, Δy = 3.
(Remember that the same notations Δy and Δx are used both for the differences
in the x and y coordinates of the end points of the line segment and for the
increment values.) We initialize (x0, y0) = (8, 10). We begin scan
converting the line segment using the DDA algorithm. The first pixel to be
plotted is given by (x0, y0). Since m < 1, we use x_{k+1} = x_k + 1 and
y_{k+1} = y_k + m. This gives x1 = 9, y1 = 10 + 3/8 ≈ 10; x2 = 10,
y2 = y1 + 3/8 = 10 + 3/4 ≈ 11; x3 = 11, y3 = y2 + 3/8 = 11 + 1/8 ≈ 11, and
so on. After scan conversion the continuous line reduces to nine pixels
starting from (x0, y0) = (8, 10) to (x8, y8) = (16, 13) with the intermediate
pixels as derived above. The pixels are plotted as per Table-1.
Table-1

k    xk    yk    yk + m
0    8     10    10 + 3/8 ≈ 10
1    9     10    10 + 3/4 ≈ 11
2    10    11    11 + 1/8 ≈ 11
3    11    11    11 + 1/2 ≈ 12
4    12    12    11 + 7/8 ≈ 12
5    13    12    12 + 1/4 ≈ 12
6    14    12    12 + 5/8 ≈ 13
7    15    13    13
8    16    13    13

Let us now use Bresenham's algorithm. Again the first pixel to be plotted
would be (x0, y0). We calculate p0 = 2Δy − Δx = −2, which is negative, so
the next pixel is (x1, y1) = (x0 + 1, y0) = (9, 10) and p1 = p0 + 2Δy = 4. This
time the decision parameter p1 is positive, so (x2, y2) = (x1 + 1, y1 + 1) = (10, 11)
and p2 = p1 + 2Δy − 2Δx = −6 < 0. In this way, repetitive use of the steps
will lead to a nine pixel rasterization of the line segment. For this example, you
will find that the pixel values obtained using both the algorithms are the same.
The pixels in this case are plotted as per the computation shown in Table-2.
Table-2

k    xk    yk    pk
0    8     10    –2
1    9     10     4
2    10    11    –6
3    11    11     0
4    12    12   –10
5    13    12    –4
6    14    12     2
7    15    13    –8
8    16    13     –

The rasterized line segment is shown in Fig. 2 below.

(16,13)

(8,10)
Fig. 2

Remark: The main advantage of using Bresenham's Algorithm is that the


algorithm not only efficiently picks up approximating pixels, it does so with
integer calculations only, which results in speed improvement to a great extent.
As you draw line segments of large lengths, you will also notice that the
pixels deviate significantly from the original line segment in the case of the
DDA algorithm. The reasons behind this are (i) round-off errors in floating
point operations and (ii) the rounding function used in selecting the next
pixel values.
Following code implements both the algorithms for line generation. The C
function dda(int, int, int, int) plots a line segment using DDA algorithm while
the function bresenham(int, int, int, int) plots the same line segment using
Bresenham's algorithm.
//DDA Line algorithm implementation
void dda(int x1,int y1,int x2,int y2)
{ int dx,dy,i;
float delta, x=x1,y=y1;
glColor3f(0.0f, 1.0f, 1.0f); //Set Line Colour
dx=x2-x1;
dy=y2-y1;
if(abs(dx)>=abs(dy))
delta=abs(dx);
else delta=abs(dy);
glBegin(GL_POINTS);
glVertex2f(x,y);
for(i=1;i<=delta;i++)
{ x=x+(float)dx/delta;
y=y+(float)dy/delta;
glVertex2f(x,y);
}
glEnd();
}

//Bresenham line algorithm implementation


void bresenham(int x1,int y1,int x2,int y2)
{ int x,y,end,inc=0,p,dx=abs(x2-x1),dy=abs(y2-y1);
glColor3f(1.0f, 0.0f, 0.0f); //Set Line Colour
if(dx>dy)
{ p=2*dy-dx;
if(x1<x2)
{ x=x1;y=y1;end=x2;
if(y1<=y2)inc=1;
else inc=-1;
}
else
{ x=x2;y=y2;end=x1;
if(y2<=y1)inc=1;
else inc=-1;
}
glBegin(GL_POINTS);
while(x<end)
{ glVertex2i(x,y);
if(p<0) p=p+2*dy;
else
{y=y+inc;p=p+2*(dy-dx);
}
x++;}
glEnd(); //Plot pixels
}
else
{
p=2*dx-dy;
if(y1<y2)
{ y=y1;x=x1;end=y2;
if(x1<=x2)inc=1;
else inc=-1;
}
else
{ y=y2;x=x2;end=y1;
if(x2<x1)inc=1;
else inc=-1;
}
glBegin(GL_POINTS);
while(y<end)
{
glVertex2i(x,y);
if(p<0)p=p+2*dx;
else {
x=x+inc;
p=p+2*(dx-dy);
}
y++; }
glEnd(); //Plot pixels
}
}

For plotting line segments using the above C-functions, you need to call it in
main function of your program. A sample main program is given in the
Appendix. Remember that for m = 1 (or −1), the algorithm is straightforward,
with (x_{k+1}, y_{k+1}) = (x_k + 1, y_k + 1) (or (x_{k+1}, y_{k+1}) = (x_k + 1, y_k − 1)).

You may now try the following exercises.

Do the exercises no. 3-1, 3-3 and 3-5, Chapter 3, on pages 160 and 161
of the book.

Also try the following exercises.

E1) Use DDA algorithm to get the output of your program as shown in Fig. 3.

Fig. 3

E2) Draw a long line segment using (i) DDA line drawing algorithm (ii)
Bresenham line drawing algorithm (iii) OpenGL function using
GL_LINES. Observe if DDA line segment deflects from Bresenham line
segment or the graphics library line segment. (Please refer to the remark
given in paragraph 3, page 108 of the book on performance issues of
DDA algorithm.)

E3) Modify the dda( ) and bresenham( ) function codes to draw line segments
which are (i) dotted (ii) dashed.

We now discuss the algorithm for the generation of a circle.

2.4 MIDPOINT CIRCLE GENERATION ALGORITHM

Just as a line drawing algorithm approximates the infinite number of points on a


line segment by only a finite number of pixels, a circle drawing algorithm
approximates continuous circle by appropriately choosing a discrete set of
points close to the circular arc. One of the well known algorithms for circle
generation is midpoint circle generation algorithm. The algorithm is a variant
of Bresenham's line generation algorithm and uses a decision parameter to find
out the closest pixel with the actual circle at each incremental step. Implicit
equation of the circle given by
(x − x_c)^2 + (y − y_c)^2 = r^2    (1)

is used to define the decision parameter. Here r is the radius and (x_c, y_c) is
the centre of the circle. Symmetry of the circle is fully exploited for its scan
conversion. Actually, we need to approximate the circle only in one octant,
points in other seven octants similarly get approximated due to symmetry, as
shown in Fig. 4 on the next page. The decision parameter helps in deciding
which pixels are closest to the circle boundary. The algorithm is iterative in
nature. Next pixel position and decision parameter are computed iteratively.
Just as in Bresenham line drawing algorithm, one only needs to check the sign
of the decision parameter to find out next pixel position. This makes the
implementation of the algorithm very easy.


(–x, y)    (x, y)

(–y, x)    (y, x)

(–y, –x)   (y, –x)

(–x, –y)   (x, –y)

Fig. 4

You can start with reading the following:

Read Sec. 3-5, Chapter 3 on pages 117-122 of the book.

Before proceeding further, we summarize the algorithm for your reference.

• Define a circle function f_circle(x, y) = x^2 + y^2 − r^2. For any point
(x, y), f_circle(x, y) < 0 tells that (x, y) is inside the circle,
f_circle(x, y) = 0 means the point is on the circle boundary, and
f_circle(x, y) > 0 tells that (x, y) is outside the circle boundary.

• Set up the algorithm for a circle of radius r with centre at the origin
(0, 0). Calculate the closest pixels to the circle boundary and then
translate the pixels by (x_c, y_c) to finally plot a circle with centre at
(x_c, y_c).

• Determine the closest pixel to the circle boundary only in one octant.
Pixels in the other seven octants are determined by symmetry, as shown in
Fig. 4 and also mentioned in our previous discussion.

• Start with the point corresponding to (0, r). For each x value, determine
the y value and increment x until it becomes equal to y. To determine the
y value of the pixel, proceed as follows:

  • Suppose (x_k, y_k) has been determined. Then take x_{k+1} = x_k + 1.
  Either (x_{k+1}, y_k) or (x_{k+1}, y_k − 1) will be closer to the circle
  boundary. Define the decision parameter as the value of the circle
  function at the midpoint of y_k and y_k − 1. That is, define

    p_k = f_circle(x_k + 1, y_k − 1/2) = (x_k + 1)^2 + (y_k − 1/2)^2 − r^2.

  • Obtain the recurrence relation

    p_{k+1} = p_k + 2(x_k + 1) + (y_{k+1}^2 − y_k^2) − (y_{k+1} − y_k) + 1.

  • If p_k < 0, (x_{k+1}, y_k) is closer to the circle boundary, else
  (x_{k+1}, y_k − 1) is closer to the circle boundary.

  • Therefore x_{k+1} = x_k + 1, with y_{k+1} = y_k if p_k < 0, else
  y_{k+1} = y_k − 1. Also, p_{k+1} = p_k + 2x_{k+1} + 1 for p_k < 0, else
  p_{k+1} = p_k + 2x_{k+1} + 1 − 2y_{k+1}.

• Plot all eight symmetric pixels (x_c ± x_{k+1}, y_c ± y_{k+1}) and
(x_c ± y_{k+1}, y_c ± x_{k+1}). (Round off the pixel coordinates to the
nearest integer value whenever required.)

It is always a good practice to get a feel for the algorithm by first
rasterizing a circle manually. Let us choose a circle with radius 8 and centre
at (10, 10). We shall scan convert a circle of radius 8 with centre at the
origin and then translate the points by (10, 10) to get the actual pixel values
approximating the circle.

We begin by initializing the first point (x0, y0) = (0, 8). By symmetry, the
other three axis points will be (0, −8), (8, 0) and (−8, 0). We shift these
points by (10, 10) to get the first four pixels as (10, 18), (10, 2), (18, 10),
(2, 10). The decision parameter's initial value will be p0 = 5/4 − 8 ≈ −7. As
per the algorithm, the next pixel value will be (x1, y1) = (x0 + 1, y0) = (1, 8),
since p0 is negative. Next we calculate p1 = p0 + 2x1 + 1 = −4 (again
negative). After this we calculate the symmetric pixels in the other seven
octants as (±x1, ±y1) and (±y1, ±x1). Move each pixel (x, y) by
(x + 10, y + 10) to get (10 ± x1, 10 ± y1) and (10 ± y1, 10 ± x1). We repeat
these steps until x ≥ y.

You may now try the following exercises.

E4) Draw the object as shown in Fig. 5 using midpoint circle generation
algorithm. All the four boundary curves are circular arcs.

Fig. 5

E5) Write a C code for generating concentric circles.

E6) Fig. 6 on the next page uses three dashed arcs and one small circle.
Dashed line or arc is a style attribute that can be attached with a line or a
curve. In OpenGL you can use the line stipple functions to produce such
output. Use the midpoint circle algorithm to get the output of your
program as in Fig. 6.

Fig. 6

An ellipse is an elongated circle. Therefore, elliptical curves can be generated


by modifying circle-drawing procedures to take into account the different
dimensions of an ellipse along the major and minor axes. Let us now see how
this can be done.

2.5 ELLIPSE GENERATION ALGORITHM


You know that a circle is symmetric in all the octants, while ellipse is
symmetric with respect to four quadrants. Therefore, to draw an ellipse, we
need to determine approximating pixels in one quadrant. We proceed exactly
the same way as we did in the case of a circle. Define a decision parameter and
choose the next pixel position depending on the sign of the decision parameter.
The decision parameter is defined in terms of the equation of an ellipse.

As in the case of a circle, let us assume that the ellipse has its centre at the
origin. Consider its perimeter segment in the positive quadrant (see Fig. 7).
Identify the point on the perimeter where the ellipse has a tangent line with
slope −1. Then divide the positive quadrant into two regions: Region I, where
the slope dy/dx > −1, and Region II, where the slope dy/dx < −1. You may
notice that in Region I, the y values of points on the perimeter decrease more
slowly with respect to the x values than in Region II. This means that for
determining the approximating pixels, we have to do the following. In Region
I, for each increment in the x value, y will either remain the same or be
decremented, depending on the sign of a decision parameter. But in Region II,
for each decremented y value, x will either be incremented or will remain the
same (compare with the two cases of the Bresenham line drawing algorithm
for slopes |m| < 1 and |m| > 1).

Region I
b dy/dx = –1

Region II

Fig. 7

Start with reading the following.

Read Sec. 3-6, Chapter 3, pages 122-130 of the book.

The algorithm generates an ellipse with major and minor axes parallel to the
coordinate axes. In order to draw an ellipse with axes not parallel to the
coordinates axes, all we need to do is shift and rotate the ellipse thus
constructed to get the desired ellipse. We shall discuss rotation and some more
transformations in detail in the next unit.

You may now try the following exercises.

Do the exercises no. 3-13, 3-15 to 3-19, Chapter 3, on page 161 of the
book.

Geometric modelling requires curves which are flexible on one hand and
smooth enough on the other to represent a variety of different real life object
shapes. For this we need to have curves that are easy to handle in design
problems. In the next section we describe two of the most important types of
curves that are used in object modelling, namely conic sections and spline
curves.

2.6 OTHER CURVES


As you know conic sections are implicitly defined curves obtained as the
intersection of a plane with an infinite cone. In many applications, however,
you require to use parametrically defined curves. In general, it has been
observed that parametric curves are more suitable for geometric modelling and
are extensively used in graphics. Spline curves are one such family of
parametric curves and have been the first choice for many design applications.
Recall that a curve given in the form F(x, y) = 0 is said to be in implicit
form, since, in general, neither y nor x can be explicitly expressed in terms of
the other variable. For example, the circle given in the form x^2 + y^2 = r^2
is an implicit curve. A curve is said to be in parametric form if it is expressed
as a continuous function F : [a, b] → R^3 (or R^2) from a closed interval of
the real line to three (or two) dimensional space. As an example, we can
consider the parametric form of an ellipse as follows:

r(θ) = (a cos θ, b sin θ),  0 ≤ θ ≤ 2π.

Such a curve can be easily plotted by incrementing the parameter θ and
successively obtaining the new points on the curve. You only need to scan
convert the points by approximating the points by the corresponding pixel
values with appropriate nearest integer values. Following simple function code
will help you understand how such curves are plotted on the screen.

void param_ellipse( float a, float b, int x, int y)


{
float theta;
glColor3f(0.0f, 1.0f, 1.0f); //Set drawing colour
glBegin(GL_POINTS);
for(theta = 0;theta <=2*22.0/7.0; theta+=0.01)
glVertex2f(floor(a*cos(theta))+x, floor(b*sin(theta))+y);
glEnd(); //Plot pixels for ellipse generation
}
Bézier curves are one of the most fascinating and extensively used curves in
geometric modelling and graphics applications. These are simple polynomial
curves with the additional properties that their shape can be controlled by a
finite set of points called control points or control vertices. These curves were
introduced by two people independently, deCasteljau and Bézier, as solutions
to certain geometric modelling and design problems in mechanical engineering.
The work of Bézier got published whereas, that of deCasteljau remained un-
noticed due to copyright agreement of the company in which deCasteljau was
working. Excellent iterative algorithm exists for drawing Bézier curves and is
known as deCasteljau algorithm. We shall be learning this algorithm later in
Unit 4. Here we are simply giving you an idea of Bézier curves.

Let n be a positive integer and b_0, b_1, b_2, ..., b_n be n + 1 points in the
plane (or in the space). A Bézier curve P of degree n is defined as follows:

P(t) = Σ_{i=0}^{n} b_i B_i^n(t),  t ∈ [0, 1],

where B_i^n(t) are the Bernstein (polynomial) basis functions of degree n
given by

B_i^n(t) = C(n, i) (1 − t)^{n−i} t^i,  0 ≤ i ≤ n,  t ∈ [0, 1],
B_i^n = 0 for i < 0 or i > n.

Here C(n, i) is the standard binomial coefficient given by

C(n, i) = n! / (i! (n − i)!).

For example, a linear Bézier curve is simply the straight line segment
(1 − t) b_0 + t b_1, and a quadratic Bézier curve is a parabola given by
(1 − t)^2 b_0 + 2(1 − t) t b_1 + t^2 b_2.

Fig. 8 gives two examples of Bézier curves.

Control Point

Fig. 8

A complete C-code for plotting a Bézier curve of degree n is given below.


You need to input the control points using mouse clicks. The program opens a
window with background colour as black and draws a Bezier curve for every
set of n+1 control points selected using mouse clicks.

/* Create multiple Bezier curves by selecting control points
with mouse clicks*/

#include <windows.h>
#include <math.h>
#include <gl/glut.h>

#define n 4 //Degree of Bezier curves


int ww=640,wh = 480; //Window Size
int MOUSE_CLICKS = 0; // Number of mouse clicks, initially
zero.
int Point_x[n+1],Point_y[n+1];
//To store coordinates of points selected through mouse clicks

void myInit(){
glClearColor(0.0,0.0,0.0,0.0);
glColor3f(0.0,1.0,0.0);
glPointSize(4.0); //Select point size
gluOrtho2D(0.0,640.0,0.0,480.0);
//For setting the clipping areas

} //Initialize

//Point plotting
void drawPoint(int x, int y) {
glBegin(GL_POINTS);
glVertex2i(x,y);
glEnd();
glFlush();
}

//Computing factorial of a number k


int factorial(int k) {
int fact=1,i;
for(i=1;i<=k;i++)
fact=fact*i;
return fact;
}

/* Draw a bezier curve with control points (x[i],y[i]),


i=0,..., n */
void drawBezier(int x[n+1], int y[n+1]) {
double P_x,P_y;
glColor3f(1.0,1.0,1.0); //Set drawing colour for curve
glBegin(GL_POINTS); //Draw point (P_x,P_y) on the curve
for( double t=0.0;t<=1.0;t+=0.01){
P_x=0.0;
P_y=0.0;
for( int i=0;i<=n;i++) {
int cni=factorial(n)/(factorial(n-i)*factorial(i));
P_x = P_x+(double)(x[i]*cni)*pow(1 - t,n-i)*pow(t,i);
P_y = P_y+(double)(y[i]*cni)*pow(1 - t,n-i)*pow(t,i);
}
glVertex2f(P_x,wh -P_y);
}
glEnd();
}

void myMouse(int button, int state, int x, int y) {


//Draw Bezier curve if #Mouse Clicks =n+1
if(MOUSE_CLICKS==n+1){
drawBezier(Point_x, Point_y);
glColor3f(0.0,1.0,0.0); //Set Colour for Control Polygon

for(int i=0;i<n;i++)
{
glBegin(GL_LINES);
glVertex2i(Point_x[i],wh-Point_y[i]);
glVertex2i(Point_x[i+1],wh-Point_y[i+1]);
glEnd();
}
MOUSE_CLICKS=0; //Reset the click count once the curve is drawn
glFlush();
}
//Draw Control Polygon
if(button == GLUT_LEFT_BUTTON && state == GLUT_DOWN) {
// Store where the user clicked, note y is backwards.
Point_x[MOUSE_CLICKS]=x;
Point_y[MOUSE_CLICKS]=y;
drawPoint(x, wh - y);
MOUSE_CLICKS++;
}
}

void myDisplay() {
glClear(GL_COLOR_BUFFER_BIT);
glFlush();
}

int main(int argc, char** argv) {
glutInit(&argc, argv); //Initialize GLUT before any other GLUT call
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB);
glutInitWindowSize(ww,wh);
glutInitWindowPosition(200,200);
glutCreateWindow("Bezier Curves");
glutMouseFunc(myMouse);
glutDisplayFunc(myDisplay);
myInit();
glutMainLoop();
return 0;}

Rational extensions of Bézier curves have also been defined. These curves are
used extensively in computer aided design and geometric modelling. We shall
not discuss them here.

Spline curves are simple extensions of Bézier curves. They are piecewise
polynomial curves with one or more polynomial pieces joined in such a manner
that the resulting curve is continuous and may satisfy certain degree of
smoothness. By this we mean that if we join two polynomial segments, at the
joints the curve should be satisfying the requirements of continuity of
derivatives upto certain order, depending upon the requirement of the problem.
If higher order derivatives are matched, the composite curve becomes
smoother. For example, we might be interested in designing a path of an object
which has a continuous velocity along the path. In this case the path should be
designed with a spline having continuous derivatives at all points, since the
velocity is precisely the derivative vector at every point on the curve. This
means that if we have more than one polynomial segment, then at the joints
one should ensure that the derivatives of the two meeting polynomial segments
match. At all other points the curve is, of course, continuously differentiable,
being a polynomial curve.

Read Sec. 3-7, Chapter 3, pages 130-132 of the book.

We shall discuss spline curves in detail later in Unit 4.

You may now try the following exercises.

E7) Suggest a method to draw an object similar to the one given in E4) using
four Bézier curves of suitable degree.

E8) Draw the letters S, P, R or U of English alphabet using multiple Bézier


curves.

We now conclude this unit by giving a summary of what we have covered in it.

2.7 SUMMARY
In this unit, we have covered the following.

1. Functioning of input devices–Keyboard, mouse, trackball and spaceball,


Joystick, data glove, digitizer, scanner, touch panel, light pen, voice
system.
2. Functioning of hardcopy devices–laser printers and dot matrix printers,
plotters.

3. Output primitives: Graphics programming packages provide basic


geometric shapes such as line segments, circles, ellipses and other simple
curves as building blocks for defining more complex structures. These
are known as output primitives.

4. Scan conversion: For display of a continuously defined object, a method


to map a continuous abstract object or shape into a discrete plane of
computer display is needed. This process is called rasterization or scan
conversion.

5. There are two important algorithms for basic line segment plotting: the
DDA algorithm and the Bresenham algorithm. Both algorithms use the
form y = mx + c of the line segment.

6. Whereas DDA uses floating point calculations to determine iteratively


the pixel positions, Bresenham algorithm is based purely on integer
calculation. Therefore, Bresenham algorithm is more efficient and faster.
However, in the literature, you may find modified DDA algorithm that
uses integer calculation only.

7. Midpoint circle generation algorithm: This makes use of a circle


function. Based on this circle function, a decision parameter is created
which is used to decide successive pixel positions. Pixel positions are
computed only in one octant because of the symmetry of the circle. In the
seven other octants, pixels are generated by reflections about the lines
y = ±x and the lines parallel to the axes, all passing through the centre
of the circle.

8. Ellipse generating algorithm: Algorithm is similar to circle algorithm.


We divide the ellipse on the positive quadrant into two regions: Region 1,
where the magnitude of the slope is less than 1, and Region 2, where it is
greater than 1. Different decision parameters are defined for these two
regions.
9. Other curves: Conic sections such as parabola and hyperbola are used in
many instances such as in motion planning along a trajectory or in
modelling the collision of charged particles. Circle generation algorithm
could again be modified to generate such curves. For each such curve a
decision parameter is defined and is used to determine the closest pixel
from the actual path on the curve.
10. Bézier curves: These are special polynomial curves expressed using
Bernstein polynomials. Spline curves are simple extensions of Bézier
curves, composed of two or more polynomial curves joined in a sequence
in such a way that the resulting curve is at least continuous. The formula
to compute points on a Bézier curve P is as follows:

P(t) = Σ_{i=0}^{n} b_i B_i^n(t),  t ∈ [0, 1],

where {b_i} are the control points of the curve and B_i^n(t) are Bernstein
polynomials.

2.8 SOLUTIONS/ANSWERS
Exercises given on pages 160-161 of the book

3-1 The code is implemented as a function named polyline() which uses the
function dda(int, int, int, int). The code is given below.
void polyline()
{  // Number of vertices of the polyline
   int n = 4;
   // x, y coordinates of the input vertices
   int x[10] = {100,140,200,200,100,100,200,200,100,100};
   int y[10] = {200,200,100,150,200,150,200,150,100,100};

   if (n == 1) {            // a single vertex: plot one point
      glBegin(GL_POINTS);
      glVertex2i(x[0], y[0]);
      glEnd();
      return;
   }
   for (int i = 0; i < n - 1; i++)   // n vertices give n-1 segments
      dda(x[i], y[i], x[i+1], y[i+1]);
}

You need to add the above function in the code given for dda line
drawing algorithm (c.f. page 20 of this unit). Further, you can drop the
code for the function bresenham(int, int, int, int) from the program as that
is not required.

3-3 The bresenham( ) function given on page 20-21 of this unit has to be
modified so that the last point is not plotted. To be precise, write the code
using while(x<end) or while(y<end) in place of while(x<=end) or
while(y<=end).

3-5 Use the code for line segments with any slope given on pages 20-21 of
this unit.

Exercises given on page 22 of this unit.

E1) Use the dda() function to plot line segments that have end points at
diametrically opposite points on the circumference of a circle. Take
these points at uniform angular spacing. For example, you may use the
following function Points() to get line segment endpoints
(x1[i], y1[i]) and (x2[i], y2[i]), i = 0, 1, ..., 5, and use the arrays
x1, y1, x2, y2 as input to your dda() function. Call the dda() function
six times, as in the plotLines() code below.

// Compute endpoints of diametrically opposite points on a circle.
void Points(int r, int xc, int yc, int x1[6], int y1[6], int
x2[6], int y2[6])
{
   float pi = 3.1416;
   int i = 0;
   for(float theta = 0; theta < pi; theta += pi/6.0)
   {
      x1[i] = xc + r*cos(theta);
      y1[i] = yc + r*sin(theta);
      x2[i] = xc + r*cos(pi + theta);
      y2[i] = yc + r*sin(pi + theta);
      i++;
   }
}
//Plot line segments using DDA algorithm
void plotLines()
{
int xx1[6],yy1[6],xx2[6],yy2[6];
Points(100, 200, 200, xx1, yy1, xx2,yy2);
for(int i=0;i<=5;i++)
dda(xx1[i], yy1[i], xx2[i], yy2[i]);
}

Add the above code to your program for line segment generation using the
dda() function, and call the function plotLines() in your RenderScene()
function.

E2) Implementation of the Bresenham and DDA algorithms is already given on
pages 20-21 of this unit. Use the code to draw a line segment from
(10, 20) to (1020, 700) using both algorithms. Further, we draw the
same line segment by adding the following code in the RenderScene()
function, using the OpenGL library functions shown below.

glBegin(GL_LINES);
glVertex2i(1020,700);
glVertex2i(10,20);
glEnd();

Use a different colour for each of the three codes. Accordingly code of
RenderScene( ) will be modified as follows:

void RenderScene(void)
{
   glClear(GL_COLOR_BUFFER_BIT);
   glColor3f(1.0f, 0.0f, 0.0f);
   bresenham(10,20,1020,700); // Bresenham line segment in RED
   glColor3f(0.0f, 1.0f, 0.0f);
   dda(10,20,1020,700);       // DDA line segment in GREEN
   glColor3f(1.0f, 1.0f, 1.0f);
   glBegin(GL_LINES);
   glVertex2i(1020,700);
   glVertex2i(10,20);
   glEnd();                   // OpenGL line segment in WHITE
   glFlush();
}

On present day systems with very high accuracy in floating point
calculations, all three pieces of code plot almost the same line segment.
However, if you run the same program on an older system, you would find
that the pixels generated by the DDA algorithm are slightly away from the
two line segments generated using the Bresenham algorithm and the OpenGL
function. The OpenGL function and the Bresenham algorithm produce almost
the same line segments.

E3) Case (i): For the dotted line segment modify the code so that it skips
pixels at regular intervals. For this modification the portion of the code
containing the line segment plotting statements is as follows:

if(i%5 ==0){
glBegin(GL_POINTS);
glVertex2f(x,y);
glEnd(); }
This plots only those pixels for which the value of i is divisible by 5,
skipping all other pixels on the line segment. The full code of the
modified dda() is as follows:

void dotted_dda(int x1,int y1,int x2,int y2)


{ int dx,dy,i;
float delta, x=x1,y=y1;
dx=x2-x1;
dy=y2-y1;
if(abs(dx)>=abs(dy))
delta=abs(dx);
else delta=abs(dy);
glBegin(GL_POINTS);
for(i=1;i<=delta;i++)
{
x=x+(float)dx/delta;
y=y+(float)dy/delta;
if(i%5 ==0){
glVertex2f(x,y); }
}
glEnd();
}

Case (ii): For a dashed line segment, again skip pixel plotting at regular
intervals. But this time plot more pixels and skip fewer, as reflected in
the following code.
glBegin(GL_POINTS);
for(i=1;i<=delta;i++)
{ x=x+(float)dx/delta;
  y=y+(float)dy/delta;
  if(i%10 <= 3) continue; // skip these pixels to leave gaps between dashes
  /* You may choose other numbers in place of 10 and 3, depending
     on the spacing of dashes that you want */
  glVertex2f(x,y);
}
glEnd();

Exercises given on pages 24-25 of this unit

E4) Obtain the figure by using midpoint circle algorithm with the following
modifications.

Assume (x_c, y_c) to be the centre of the circle and r its radius. In the
midpoint circle algorithm, starting with (x, y) = (0, r), one plots the
pixels (x_c ± x, y_c ± y) and (x_c ± y, y_c ± x), where (x, y) is obtained
using the midpoint circle algorithm. The algorithm computes the points
(x, y) iteratively until y = x. For the present figure, first plot only
the four pixels (x_c ± x, y_c ± y). This traces the horizontal arcs, i.e.
the top and bottom arcs. Now consider the points on the vertical arcs.
Fig. 9 shows that the points to be plotted on the vertical arcs are simply
those points which are symmetrically opposite to the points
(x_c ± y, y_c ± x), where the symmetry is taken about two vertical lines
passing through the ends of the octant arcs.

Fig. 9: The line y = x and the two vertical lines of symmetry for the
vertical arcs.

Therefore, instead of plotting the pixels (x_c ± y, y_c ± x), plot the
following four pixels (their images under the above symmetries):

(xc + sqrt(2.0)*radius - y, yc + x);
(xc - sqrt(2.0)*radius + y, yc + x);
(xc + sqrt(2.0)*radius - y, yc - x);
(xc - sqrt(2.0)*radius + y, yc - x);

Accordingly, the C code circleMidpoint() of the midpoint circle algorithm
has to be modified.

E5) Put the circle function circleMidpoint() in a for loop as follows:

for(int radius = MinRadius; radius <= MaxRadius; radius += distance)
    circleMidpoint(xc, yc, radius);

where MinRadius = radius of the smallest circle, MaxRadius = radius of the
largest circle, and distance = difference between the radii of successive
circles. Choose these values as you like.
E6) Again this is a concentric circular arc drawing problem. Draw the
circle with the smallest radius first, and then for the remaining three
circles plot only the pixels (x_c ± x, y_c ± y) until y = x. Implement
the dashed arcs as done in the solution to E3). Modify your circle
drawing code circleMidpoint() accordingly.

Exercises given on page 161 of the book

3-13 The decision parameters are obtained by interchanging the roles of
the x and y coordinates. For Region 1 (ref. Fig. 7),

p1_0 = rx² − ry²rx + ry²/4.

Starting with the above value, keep calculating the (x, y) positions and
the next value p1_{k+1} of the decision parameter. If p1_k < 0, the next
value is p1_{k+1} = p1_k + rx²(2y_{k+1} + 1), else
p1_{k+1} = p1_k + rx²(2y_{k+1} + 1) − 2ry²x_{k+1}.
Similarly, for Region 2 the decision parameter is calculated as
p2_{k+1} = p2_k + ry²(1 − 2x_{k+1}) for p2_k > 0, else
p2_{k+1} = p2_k + ry²(1 − 2x_{k+1}) + 2rx²y_{k+1}.

3-15 Write the general equation of a sine curve in the form

y = A sin(ωx),

where A is the amplitude and ω is the angular frequency. Exploit the
periodicity and symmetry properties of the sine curve to compute points
on the other parts of the cycle. For example, suppose you have computed
a point (x, y) with x in the interval [0, π/(2ω)]; then you can also
identify the three points (π/ω − x, y), (π/ω + x, −y) and (2π/ω − x, −y).
The algorithm can be implemented as given in the following code, where
omega plays the role of the quarter period (in pixels) of the plotted
curve.

/* Code for plotting sine curve with amplitude A and angular


frequency omega */
void sinecurve(float A, float omega)
{
float pi=3.1416,y;
for(float x=0;x<=omega;x+=1)
{
y=A*sin(pi*x/(2*omega));//A=amplitude, omega = dilation.
glBegin(GL_POINTS);
glVertex2f(x,y);
glVertex2f(2*omega-x,y);//Symmetric points
glVertex2f(x+2*omega,-y);
glVertex2f(4*omega-x,-y);
glEnd();
//Plot x-axis
glBegin(GL_LINES);
glVertex2f(0,0);
glVertex2f(4*omega,0);
glEnd();
//Plot y-axis
glBegin(GL_LINES);
glVertex2f(omega,-100);
glVertex2f(omega,200);
glEnd();
glFlush(); }
}

3-16 Use the method given in 3-15 to compute points on the curve
y = A e^(−kx) sin(ωx + θ). For each point x in the interval
[−θ/ω, (π/2 − θ)/ω], compute the corresponding y as given by the formula.
Plot the points on the other parts of the cycle by exploiting the
symmetry and periodicity of the sine factor (because of the decay factor
e^(−kx), this symmetry is only approximate). Accordingly modify the code
used in 3-15 above.

3-17 Define the curve function f(x, y) = (1/12)x³ − y. For this curve
dy/dx = (1/4)x². The magnitude of the slope is greater than 1 for x > 2.
Divide the first quadrant into two regions: Region 1, where the slope is
≤ 1, and Region 2, where the slope is > 1. In Region 1 the initial point
(0, 0) is clearly a point on the curve. In this region keep incrementing
the x values by 1 and determine the corresponding y value for the pixels
to be plotted. Initialize the point (0, 0) and send it to the frame
buffer for plotting. Suppose the k-th pixel (x_k, y_k) has been
identified and sent to the frame buffer. Determine the next pixel as
follows:

Set x_{k+1} = x_k + 1. The curve is monotonically increasing. Therefore
the next y value can be either y_k or y_k + 1. Find the value of the
curve function at the midpoint of y_k and y_k + 1:

p1_k = f(x_k + 1, y_k + 1/2) = (1/12)(x_k + 1)³ − (y_k + 1/2).

If p1_k < 0, then (x_{k+1}, y_k) is closer to the curve than the pixel
(x_{k+1}, y_k + 1). Therefore, set y_{k+1} = y_k if p1_k < 0, else
y_{k+1} = y_k + 1. The decision parameter can also be computed
iteratively as follows:

p1_{k+1} = f(x_{k+1} + 1, y_{k+1} + 1/2)
         = (1/12)(x_{k+1} + 1)³ − (y_{k+1} + 1/2).

Therefore,
p1_{k+1} = p1_k + (1/4)(x_{k+1}² + x_{k+1}) − (y_{k+1} − y_k) + 1/12.

If p1_k < 0, then p1_{k+1} = p1_k + (1/4)(x_{k+1}² + x_{k+1}) + 1/12,
else p1_{k+1} = p1_k + (1/4)(x_{k+1}² + x_{k+1}) − 11/12.

In Region 2, the y values increase more rapidly and the increment in the
x values depends on the sign of the decision parameter. Therefore, for
each k set y_{k+1} = y_k + 1 and define

p2_k = f(x_k + 1/2, y_k + 1) = (1/12)(x_k + 1/2)³ − (y_k + 1).

If p2_k > 0, then x_{k+1} = x_k, else x_{k+1} = x_k + 1. For the
iterative computation of p2_k:

p2_{k+1} = p2_k − 1 when p2_k > 0, else
p2_{k+1} = p2_k + (1/4)(x_k + 1/2)(x_{k+1} + 1/2) − (y_{k+1} − y_k) + 1/12.

3-18 As in the solution to Ex. 3-17, define the curve function as

f(x, y) = 100 − x² − y.

The function is evaluated at each midpoint between the two candidate
pixels. The magnitude of the slope of the curve is greater than 1 for all
values of x > 1/2 or x < −1/2. Therefore, it suffices to consider only
one region. Increment the y values and determine the x values for these
y values to get the best approximating pixel. The curve is symmetric
about the y-axis, so it is enough to trace the branch x ≥ 0, say from
(10, 0) upwards. Consider the decision parameter

p_k = f(x_k − 1/2, y_k + 1) = 100 − (x_k − 1/2)² − (y_k + 1).

For p_k > 0, choose (x_{k+1}, y_{k+1}) = (x_k, y_k + 1); otherwise,
(x_{k+1}, y_{k+1}) = (x_k − 1, y_k + 1). Further,

p_{k+1} = p_k + (x_k − 1/2)² − (x_{k+1} − 1/2)² − 1.

Thus, p_{k+1} = p_k − 1 for p_k > 0, and p_{k+1} = p_k + 2x_{k+1} − 1
otherwise.

3-19 The curve function is defined by f(x, y) = y² − x. The analysis goes
exactly the same way as in the solution to Ex. 3-18. The curve in this
case is symmetric about the x-axis.

Exercises given on page 30 of this unit

E7) Employ four quartic Bézier curves, with the endpoint of one as the
initial point of the next Bézier curve and the endpoint of the fourth as
the initial point of the first Bézier curve, as shown using black dots in
Fig. 10. The other white dots are internal control points of the
individual Bézier curves.

Fig. 10

To get the coordinates of the control points, place the figure on a graph
paper and find the coordinates approximately. Remember that you are only
approximating the arcs and this approximation is not exact: a circular
arc cannot be reproduced exactly by a polynomial Bézier curve.

E8) A complete code for plotting Bézier curves is given on pages 28-29 of
this unit. In that code, the control points for the Bézier curves are
taken from mouse input. Plot Bézier curves by first identifying the
control points of the curve and then storing them in an array. Always
employ curves of the same degree for plotting different parts of the
letter. In the following code, the character P is generated using only
quadratic Bézier curves. Straight line segments are also generated using
a quadratic Bézier curve by choosing the control points on a straight
line.

/* Create the character P using multiple Bezier curves*/


#include <windows.h>
#include <math.h>
#include <gl/glut.h>

#define n 2 //Uniform degree of all Bezier curves


int ww=640,wh = 480; //Window Size
int OUT_CURVES = 18; //Last starting control-point index, outer boundary
int IN_CURVES=12;    //Last starting control-point index, inner boundary
//Coordinates of control points for outer curves
int Px_Out[21]={90, 120, 160, 180, 180, 180, 180, 180, 160,
140, 120, 120,120, 120, 140, 140,140, 120, 90, 90, 90};
int Py_Out[21]={100, 100, 100, 100, 120, 140, 160, 180, 180,
180, 180, 230,230,240,240, 250, 260, 260, 260, 200,100};
//Coordinates of control points for inner curves
int Px_In[15]={120, 130, 140, 160, 160, 160, 160,
160,160,160,130,120,120,120,120};
int Py_In[15]={120, 120, 120, 120, 130, 140,
140,145,150,160,160,160,160,130,120};

void myInit(){
glClearColor(0.0,0.0,0.0,0.0);
glColor3f(0.0,1.0,0.0);
glPointSize(4.0); //Select point size
gluOrtho2D(0.0,640.0,0.0,480.0);
//For setting the clipping areas
} //Initialize


//Computing factorial of a number k


int factorial(int k) {
int fact=1,i;
for(i=1;i<=k;i++)
fact=fact*i;
return fact; }

/* Draw a bezier curve with control points (x[i],y[i]),


i=0,..., n */
void drawBezier(int x[n+1], int y[n+1]) {
  double P_x,P_y;
  glColor3f(1.0,1.0,1.0); //Set drawing colour for the curve
  glBegin(GL_POINTS);     //Plot the points (P_x,P_y) on the curve
  for( double t=0.0;t<=1.0;t+=0.01){
    P_x=0.0;
    P_y=0.0;
    for( int i=0;i<=n;i++) {
      int cni=factorial(n)/(factorial(n-i)*factorial(i));
      P_x = P_x+(double)(x[i]*cni)*pow(1 - t,n-i)*pow(t,i);
      P_y = P_y+(double)(y[i]*cni)*pow(1 - t,n-i)*pow(t,i);
    }
    glVertex2f(P_x,wh -P_y);
  }
  glEnd();
  glFlush(); }

//Draw character P using Bezier curves


void Bezier()
{
int Control_x[3], Control_y[3];
//Outer Boundary curves
for (int j=0;j<=OUT_CURVES; j+=2){
Control_x[0]=Px_Out[j];
Control_y[0]=Py_Out[j];
Control_x[1]=Px_Out[j+1];
Control_y[1]=Py_Out[j+1];
Control_x[2]=Px_Out[j+2];
Control_y[2]=Py_Out[j+2];
drawBezier(Control_x, Control_y);
}
//Inner Boundary curves
for (int j=0;j<=IN_CURVES; j+=2){
Control_x[0]=Px_In[j];
Control_y[0]=Py_In[j];
Control_x[1]=Px_In[j+1];
Control_y[1]=Py_In[j+1];
Control_x[2]=Px_In[j+2];
Control_y[2]=Py_In[j+2];
drawBezier(Control_x, Control_y);
}
glFlush();
}

//Draw character P on a mouse click


void myMouse(int button, int state, int x, int y) {
if(button == GLUT_LEFT_BUTTON && state == GLUT_DOWN) {
Bezier();
glFlush();}
}

void myDisplay() {
glClear(GL_COLOR_BUFFER_BIT);
//Bezier();
glFlush();
}

int main(int argc, char** argv) {
  glutInit(&argc, argv);  //Initialize GLUT before any other GLUT call
  glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB);
  glutInitWindowSize(ww,wh);
  glutInitWindowPosition(200,200);
  glutCreateWindow("Bezier curves");
  glutMouseFunc(myMouse);
  glutDisplayFunc(myDisplay);
  myInit();
  glutMainLoop();
  return 0;
}

Other characters such as R, S, U can be plotted similarly using
appropriate coordinates of the control points.

2.9 PRACTICAL EXERCISES


Sessions 1 and 2

1. Before you start programming for graphics output primitives, you should
be familiar with some of the basics in programming using OpenGL
functions. This first exercise is aimed at learning the basic structure of a
simple graphics program using OpenGL API.

Write a C program to create a window of specified size and position and


draw the following objects with dimensions of your choice, to fit within
the window.

(a) A point (b) A rectangle (c) A line segment


(d) A triangle (e) A filled rectangle.

Use OpenGL to generate these objects.

2. Write a C program which takes points as input from mouse clicks (left
button) and then performs an action. Apply your program to generate a
closed polygon as follows: every time the left button is pressed, a point
on the screen is selected at the mouse position and the corresponding
edge is drawn. When the right button is clicked, the polygon is closed.
Use one of the line drawing algorithms to plot your line segments.

3. Write a simple C-code to generate a circular arc between two angle


values. Use this to draw Fig. 11.

Fig. 11

4. Write a C code which generates a font interactively. This means that
after every n mouse clicks, a Bézier curve is generated, and then the
terminal point of the last drawn Bézier curve is taken as the initial
point of the next Bézier curve, and so on, until the user right-clicks
the mouse. In this case the curve of the desired degree is drawn and the
multi-piece font boundary is generated. If the font generation requires
further drawing, the program allows starting again in the same window
with the previously drawn multi-piece curve. But this time the new curve
will start from a freshly selected new point.
―x―

UNIT 3 MORE OUTPUT PRIMITIVES AND
       GEOMETRIC TRANSFORMATIONS
Structure Page No
3.1 Introduction 41
Objectives
3.2 Filled-Area Primitives 42
3.3 Character Generation 50
3.4 Two-Dimensional Geometric Transformations 52
3.5 Summary 55
3.6 Solutions/Answers 56
3.7 Practical Exercises 64

3.1 INTRODUCTION
In Unit 2, you learnt how some basic geometric objects such as line segments,
circles, ellipses and other curves are processed for plotting on a display unit. In
most of the graphics packages you will find that polygons are also used as
standard output primitives. Filled polygons with single solid color (or other
alternatives for filling the interior) are provided as shape primitives.

In this unit, we shall discuss two basic methods to fill polygonal or arbitrary
shaped closed regions in Sec. 3.2. We consider here methods for solid fill of
specified areas. While creating or editing a document in a word processing
software, you might have used letters, numbers and other characters in various
font types and styles. We shall discuss in Sec. 3.3 how different fonts are
generated using 2D drawing primitives. In Sec. 3.4 we shall discuss tools and
techniques from yet another interesting area of Computer Graphics: two-
dimensional geometric transformations. These transformations help translate,
rotate or deform the basic shapes to build complex structures and environments
such as those used in computer games, movie production or real time 3D
technologies. Practical exercises related to this unit are given at the end of
the unit in Sec. 3.7.

Objectives
After reading this unit, you should be able to
 describe how a closed area can be filled using the two methods namely,
scan line method and seed fill method;
 use these methods for fill area applications;
 generate a character or a font from the alphabet of any language;
 describe how basic 2D geometric transformations such as translation,
rotation, scaling, reflection, etc. are applied on geometric objects;
 combine basic 2D transformations to obtain more useful composite
transformations for applications;
 implement these transformations using a C code with OpenGL.

3.2 FILLED-AREA PRIMITIVES
Filled-area primitives are one of the most important types of primitives used in
Computer Graphics. Basically filled-area primitives are meant to fill the given
bounded and closed region with a single color/multiple colors or with some fill
pattern. You will find later that almost all types of curves/surfaces are
approximated by polylines/polygonal surfaces for the purposes of scan
conversion. Therefore, polygon fill algorithms are extensively used in
applications involved in 2D geometry processing or, in scan converting surface
patches with polygonal surface approximation. This section deals with some of
the important and most common algorithms being used for defining and
implementing filled area primitives.

Two basic approaches are followed in area filling on raster systems. In the
first approach, overlap intervals for scan lines that cross the area are
determined per scan line (see Fig. 1 (a)). Remember that a scan line is a
horizontal line of pixels that is plotted on the display device as the
electron beam traverses the display area in one horizontal sweep. The
second approach begins with an interior point and fills the area moving
outward from this point until the boundary is reached (see Fig. 1 (b)). An
algorithm following the first approach is classified as a scan line
algorithm, and one falling in the second class is called a seed fill
algorithm. Simple objects such as polygons, circles etc. are efficiently
filled with scan line fill algorithms, while more complex regions use the
seed fill method. The scan line algorithm is the one mostly used in
general graphics packages.

Fig. 1: (a) Scan line fill: pixels between two intersections of the scan
line with the boundary are filled with colour. (b) Seed fill: the fill
grows outward from a seed point.

Let us begin with scan line polygon fill algorithm. Notice that polygons can be
as simple as a triangle and could be as complicated as the one shown in Fig. 2
below.

Fig. 2: A self intersecting polygon


These are self intersecting polygons. We broadly classify polygons into
one of three categories: (i) convex, (ii) concave, (iii) self intersecting.
Mathematically, a self intersecting polygon is concave. You will deal with
such polygons in greater detail for the purpose of area filling.

Before we go ahead with area filling algorithms, a word about pixel addressing
and object geometry. You know that line segments are discretized into finite
number of pixels and each pixel has its height and width depending upon the
aspect ratio of the display unit. Therefore, if you plot a single pixel it will
occupy some area on display unit. This contributes to the object geometry
resulting in aberrations from the actual size. For example, if you have to plot a
line segment starting from (10, 10) to (15, 10), then the length of the
segment is 5, whereas in plotting this line segment six pixel areas will
contribute to it, resulting in the length of the line segment increasing
by one unit.
Sec. 3-10 of the book discusses this issue and suggests some solutions to
the problem. Read that section carefully before you study Sec. 3-11, to
understand and implement the scan line polygon fill algorithm.

Read Secs. 3-10 and 3-11, Chapter 3, pages 134-144 of the book.

Before proceeding further, we summarize the main steps used in scan line
polygon fill algorithm for your reference.

Let P be a given polygon with N vertices V_i = (x_i, y_i),
i = 0, 1, ..., N − 1, in counter clockwise orientation. For the sake of
convenience, it will be assumed that the vertices have non-negative
integer coordinates.

Step 1: Input the polygon vertices V_i = (x_i, y_i), i = 0, 1, ..., N − 1,
with V_N = V_0. Further, define the edges E_i = V_i V_{i+1},
i = 0, 1, ..., N − 1.
Step 2: Find y_min = min_i {y_i} and y_max = max_i {y_i}.
Step 3: Create a sorted edge table while traversing the edges along the
boundary of the polygon in either clockwise or anti-clockwise direction.
Exclude horizontal edges while creating the sorted edge table (horizontal
edges do not intersect scan lines: they may overlap a scan line, but do
not cross it, and here an intersection is found only where two lines
cross each other). You may use bucket sort to create the sorted edge
table, based on the smallest y value of each edge. If two edges have the
same smallest y value, the one having the smaller x-intercept value at
the other vertex comes first. Each entry contains
(a) the maximum y value of the edge,
(b) the x-intercept value at the lower vertex of the edge,
(c) the inverse of the slope of the edge.

Step 4: Starting with the scan line y = y_min, find the intersection
points of each scan line with the edges E_i stored in the sorted edge
table, until you reach y = y_max. For this you need to
(a) Create an 'Active Edge List' (AEL) for each scan line. This
contains all edges that are crossed by the current scan line. To
do this you need to compare the y value of the current scan line
with the minimum and maximum y values of the end points of
each edge. If, for an edge E_k, the current scan line y value
satisfies min{y_k, y_{k+1}} ≤ y < max{y_k, y_{k+1}}, then E_k
intersects the current scan line. More precisely:
i) Move into the AEL, from the edge table, all those edges E_k
whose lowest y value equals the current scan line y value.
ii) Remove from the AEL those edges E_k whose highest y value
equals the current scan line y value.
(b) Find intersections with only those edges which are in the
AEL. Use coherence to find successive intersections for
successive scan lines. Coherence here means that if an edge
has been intersected by the current scan line, the next scan line
will also most probably intersect the edge. This allows fast
computation of successive intersection points using the equation
of the line containing the edge. More precisely, if the current
scan line is y_k and the intersection of the scan line with an
edge is x_k, then using the equation y = mx + b of the edge, the
intersection x_{k+1} of the edge with the next scan line
y_{k+1} = y_k + 1 can be computed simply as
x_{k+1} = x_k + 1/m (refer to the DDA line algorithm).
(c) Sort intersection points with increasing x-coordinate values.
(d) Make pairs of the intersection points. For example, if for the
scan line y = 3 there are four intersection points
(12, 3), (17, 3), (19, 3), (23, 3), make pairs as follows:
{(12, 3), (17, 3)}, {(19, 3), (23, 3)}.
(e) Check if an intersection point appears twice in the list. If yes,
then it could be a vertex (x_m, y_m) of the polygon. For such a
vertex,
• count the intersection point twice if
y_m = min{y_{m−1}, y_m, y_{m+1}}, otherwise
• count the intersection point only once.
Step 5: For each pair (x_p, x_{p+1}) of intersection abscissae on the
current scan line y, every pixel (x, y) for which
x_p < x + 1/2 < x_{p+1} lies in the polygon. Assign the specified fill
colour to all such (x, y).

Notice that by performing Step 4(a)(ii) you effectively shorten edges by
one pixel, which helps in resolving the vertex intersection problem.

The above algorithm gives a solid fill of the given polygon. One can also
include pattern fills, factors for transparency, etc. We now illustrate
the algorithm for a simple polygon.

Example 1: Let us consider a polygon with vertices V_0 = (10, 10),
V_1 = (15, 15), V_2 = (16, 13), V_3 = (16, 15), V_4 = (20, 15),
V_5 = (20, 10), V_6 = V_0, and edges E_j = V_j V_{j+1},
j = 0, 1, 2, 3, 4, 5.

Step 1: Store the vertices and edges in the order given.


Step 2: y_min = min_i {y_i} = 10 and y_max = max_i {y_i} = 15.
Step 3: The sorted edge table is created as follows. Start with y = 10
and continue till y = 15. Clearly, for y = 10 there are only two edges
whose lowest y value equals 10, namely E_0 and E_4. The slope of E_0 is
1, while E_4 is vertical. As described in Step 3, parts (a) - (c) above,
the edge table is formed as shown in Fig. 3.

Bucket y = 13: E_1 -> (y_max = 15, x = 16, 1/m = -1/2) and
               E_2 -> (y_max = 15, x = 16, 1/m = 0)
Bucket y = 10: E_0 -> (y_max = 15, x = 10, 1/m = 1) and
               E_4 -> (y_max = 15, x = 20, 1/m = 0)

Fig. 3: Sorted edge table for the polygon of Example 1 (the buckets for
y = 11, 12, 14, 15 are empty and omitted).

Notice that the edges E_3 and E_5 do not appear in the edge table, being
horizontal line segments.

Step 4: (a) The active edge lists for all scan lines are:

y = 10: AEL = {E_0, E_4}
y = 11: AEL = {E_0, E_4}
y = 12: AEL = {E_0, E_4}
y = 13: AEL = {E_0, E_1, E_2, E_4}
y = 14: AEL = {E_0, E_1, E_2, E_4}
y = 15: AEL = {}, the empty set

(b) The intersection points for each scan line are as follows:

y = 10: (10, 10), (20, 10)
y = 11: (11, 11), (20, 11)  {use the formula x_{k+1} = x_k + 1/m, m being the slope}
y = 12: (12, 12), (20, 12)
y = 13: (13, 13), (16, 13), (16, 13), (20, 13)
y = 14: (14, 14), (15.5, 14), (16, 14), (20, 14)
In practice you need not perform the intersection calculation for the
scan line y = 15. You only need to calculate the intersection points for
scan lines y such that y_min < y + 1/2 < y_max.

(c) You need not perform this step since intersection points are already in
sorted order with increasing x-values.

(d) You can now make pairs easily.

Step 5: For y = 10, all pixels from (10, 10) to (19, 10) will be plotted.
Similarly, you can find the pixels to be plotted for scan lines y = 11
and y = 12. For y = 13, the two pairs of intersection points are
{(13, 13), (16, 13)} and {(16, 13), (20, 13)}. The pixels to be plotted
are from (13, 13) to (15, 13) and then from (16, 13) to (19, 13). You
may proceed in the same way for the other two scan lines. The resulting
filled polygon is shown in Fig. 4.

Fig. 4: Polygon of Example 1, with vertices (10, 10), (15, 15), (20, 15)
and (20, 10) on its boundary, filled using the scan line polygon fill
algorithm.

Note that the polygon filling scheme will not fill the pixels on the
horizontal edge E_3 joining (16, 15) and (20, 15). But the boundary of
the polygon will display the edge. Similarly, the vertex (15, 15) is
plotted by virtue of it being a boundary point. Further, the pixels on
the vertical edge E_4 = (20, 15) - (20, 10) are not filled, since one
needs to conform to the conditions of Step 5.
***
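The per-scan-line work of Steps 3-5 can be checked without any rendering. The following plain-C sketch (the function name is ours) returns the sorted x-intersections of one scan line with a polygon. Treating each non-horizontal edge as half-open in y, i.e. [y_min, y_max), reproduces the vertex handling of Step 4(e), as the values for the polygon of Example 1 confirm.

```c
/* Sorted x-intersections of scan line y with a polygon whose vertices
   are (vx[i], vy[i]), i = 0..nv-1 (closed implicitly).  Each
   non-horizontal edge is treated as half-open, [ymin, ymax), which
   resolves the vertex-counting problem; consecutive values
   (xs[0], xs[1]), (xs[2], xs[3]), ... then form the fill spans.
   Returns the number of intersections written into xs[]. */
int scanline_hits(int nv, const int vx[], const int vy[],
                  int y, double xs[])
{
    int n = 0;
    for (int i = 0; i < nv; i++) {
        int j = (i + 1) % nv;
        int y0 = vy[i], y1 = vy[j];
        if (y0 == y1) continue;              /* skip horizontal edges */
        int ymin = y0 < y1 ? y0 : y1;
        int ymax = y0 > y1 ? y0 : y1;
        if (y < ymin || y >= ymax) continue; /* half-open edge in y   */
        /* x-intercept of the edge with the scan line */
        xs[n++] = vx[i] + (double)(y - y0) * (vx[j] - vx[i]) / (y1 - y0);
    }
    /* insertion sort into increasing x order */
    for (int i = 1; i < n; i++) {
        double t = xs[i]; int k = i;
        while (k > 0 && xs[k-1] > t) { xs[k] = xs[k-1]; k--; }
        xs[k] = t;
    }
    return n;
}
```

For the polygon of Example 1, the scan line y = 12 gives the pair 12, 20, while y = 13 gives 13, 16, 16, 20 and y = 14 gives 14, 15.5, 16, 20, exactly as computed by hand above.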

You may now try to do the following exercises on your own.

Do the exercises no. 3-21, 3-23, 3-24, Chapter 3 on page 162 of the book.

As you saw, the implementation of scan line polygon fill requires the
boundaries to be straight line segments. The seed fill algorithms do not
impose any such constraint. You only need to know an interior point of the
closed boundary object to fill it. This interior point is called a seed
point. However, determining an interior point of a complex polygon such
as the one shown in Fig. 2 is itself a challenging task. The following
methods are used to determine an interior point of the area to be filled:
1. Odd-even rule,
2. Nonzero winding number rule.
Once an interior point of the object is determined, the boundary-fill
algorithm or the flood-fill algorithm may be applied to fill the given
area.

To know about these methods for determining an interior point and algorithms
to fill the given area, particularly for curved boundary areas, you may read the
following.

Read Sec. 3-11, Chapter 3, pages 145-150 of the book.
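As a concrete illustration of the seed fill idea, here is a minimal 4-connected flood fill sketch operating on a small in-memory character grid that stands in for the frame buffer. All names are ours; a real implementation would read and write pixel colours (and, for large regions, replace the recursion with an explicit stack).

```c
#define W 8
#define H 8

/* 4-connected flood fill on a character grid: starting from the seed
   (x, y), replace every cell equal to `old` with `fill`, stopping at
   anything else (the boundary). */
void flood_fill(char g[H][W], int x, int y, char old, char fill)
{
    if (x < 0 || x >= W || y < 0 || y >= H) return;
    if (g[y][x] != old) return;   /* boundary, or already filled */
    g[y][x] = fill;
    flood_fill(g, x + 1, y, old, fill);
    flood_fill(g, x - 1, y, old, fill);
    flood_fill(g, x, y + 1, old, fill);
    flood_fill(g, x, y - 1, old, fill);
}
```

Seeding inside a closed '#' boundary fills only the interior cells and leaves the outside untouched, which is exactly the behaviour expected of a seed fill.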

Here it is important to mention that in OpenGL a fill area must be
specified as a convex polygon. In fact, OpenGL places some strong
restrictions on the definition of a primitive polygon. It requires that
all interior angles of a polygon be less than 180 degrees, that there be
no crossing edges, and that a single polygon fill area be defined with
only one vertex list, which eliminates the possibility of holes in the
polygon interior.

Fig. 5 shows some examples of valid and invalid polygons in terms of OpenGL
definitions. However, there is no restriction on the number of line segments
making up the boundary of a convex polygon.

Fig. 5: Valid and invalid polygons in OpenGL (left panel: valid polygons;
right panel: invalid polygons).

The main reason for the OpenGL restrictions on valid polygon types is
that simple convex polygons can be rendered quickly, and hardware for
fast polygon rendering can be built for such a restricted class of
polygons. Therefore, to maximize performance on the given hardware,
OpenGL only processes convex polygons correctly. Of course, concave
polygons are processed by splitting them into convex sub-polygons, but
the result may not be correct. Many graphics processors triangulate the
polygons to apply the filling procedure.

However, as you know, many real-world surfaces consist of polygons which
may be non-convex, self-intersecting or may even have holes. Since all such
polygons can be constructed from union of simple convex polygons or more
precisely triangles, some routines to build more complex objects are provided
in the GLU library. These routines take complex objects and tessellate them,
or decompose them into simpler OpenGL polygons that can then be rendered.
Later in Unit 4 you will study certain methods of splitting a concave polygon
along some of its edges for the purpose of making it a union of convex
polygons. Here we give you some of the basic OpenGL polygon fill-area
functions that are commonly used in rendering polygons.

OpenGL procedures for fill polygons are similar to those for describing a point
or a polyline with only one exception for filling a rectangular shape. As you
have already seen for rectangles and polylines, glVertex function is used to
input the coordinates for a single polygon vertex, and a complete polygon is

described with a list of vertices placed between a glBegin/glEnd pair.
However, for displaying a rectangle, the function has an entirely different
format. This is because rectangles are simple and one of the most frequently
used polygons. The procedure directly accepts vertex specifications in the
xy-plane and is given by
glRect*(x1, y1, x2, y2);
Here (x1, y1) is one corner and (x2, y2) is the diagonally opposite corner of
the rectangle. The * after glRect indicates that one of the following suffix
codes is used to specify the coordinate data type. These codes are as follows:
i – integer, s – short, f – float and d – double. For example, the
following statement defines a rectangle with the four corners
(200, 100), (350, 100), (350, 250), (200, 250), given as integer data.
glRecti(200, 100, 350, 250);
The rectangle is displayed with edges parallel to the coordinate axes. Further,
if you want to express the coordinates as array elements, you can also use the
suffix code v for vector. The following piece of code shows how to use
array elements as the corners of the rectangle.
int vertex1[ ] = {200, 100};
int vertex2[ ] = {350, 250};
glRectiv(vertex1, vertex2);

For a complete code for generating a rectangle, you may refer to Appendix A
or the OpenGL reference manual given online at the url
http://www.opengl.org/documentation/

Coming back to a general polygon, you will find that by default a polygon is
filled in a solid color, determined by the current color settings. One can also
use a pattern to fill a polygon. Each polygon has two faces: a back face and a
front face. You can set the fill color and other attributes for each face
separately. Vertices are specified in counterclockwise order in OpenGL for
the front face of the polygon. The following piece of code uses the OpenGL
primitive constant GL_POLYGON to fill the polygon area specified by the six
vertices shown below.
glBegin(GL_POLYGON);
glVertex2i(100,100);
glVertex2i(150,100);
glVertex2i(200,150);
glVertex2i(200,200);
glVertex2i(150,250);
glVertex2i(100,250);
glEnd( );

The following code will help you understand how to use scalar and vector
vertex specifications when drawing polygons.
void filledPoly (void) // Give two different specs.
{
glClear (GL_COLOR_BUFFER_BIT); // Clear display window.
glColor3f (0.0, 0.0, 1.0); // Set current colour to blue.
int vertex[][2] = {{150, 100}, {150, 120}, {160, 130},
{170, 120}, {170, 100}};
glBegin(GL_POLYGON); // Scalar vertex specification.
glVertex2i(50,100);
glVertex2i(50,120);
glVertex2i(60,130);
glVertex2i(70,120);
glVertex2i(70,100);
glEnd( );

glBegin(GL_POLYGON);
glVertex2iv(vertex[0]);
glVertex2iv(vertex[1]);
glVertex2iv(vertex[2]);
glVertex2iv(vertex[3]);
glVertex2iv(vertex[4]);
glEnd( );

}
void myDisplay() {
filledPoly(); // Fill polygon function call
glFlush(); // Process the OpenGL routines as quickly as possible.
}

You may now try the following exercises.

E1) For the following polygon, prepare an initial sorted edge list and then
make the active edge list for scan lines y = 5, 20, 30, 35. Coordinates of
the vertices are as shown in Fig. 6.

[Figure: the polygon, drawn on a grid with x ranging from 0 to 50 and y from 0 to 40]

Fig. 6

E2) For the polygon shown in Fig. 7 on the next page, how many times will
the vertex V1 appear in the set of intersection points for the scan line
passing through that point? How many times will you count it when you
form the pairs of intersection points? Justify your answer.

[Figure: a polygon with vertices V0, V1, V2, V3, V4, V5, where V1 = V4]

Fig. 7

E3) Use appropriate OpenGL primitives to draw the shapes given in Fig. 8.

(a) (b) (c)

Fig. 8

You may also try the following exercises from the book.

Do the exercises no. 3-27 to 3-28, Chapter 3 on page 162 of the book.

Let us now discuss the character generation techniques.

3.3 CHARACTER GENERATION


You know that graphics displays often contain components which are text
based, for example, graph labels, annotations, descriptions on data flow
diagrams, fill patterns containing text strings, and information for simulation
and visualization applications. Routines for generating characters are available
in most graphics systems and graphics application software. Here you
will learn some simple techniques used in creating text.

There are two main approaches followed in character or font generation (i)
Bitmap font method (ii) Outline font method. In the bitmap method, a matrix of
bits is formed which approximates the shape of the font. Every entry in the
matrix corresponds to a pixel and those pixels are plotted for which the matrix
entry is 1.

In the outline method the font boundary shape is approximated by spline
curves. You have already done some practice on character generation using
multiple Bezier curve segments (see the solution to E8, Unit 2). To know the
details about the two methods of character generation read the following.

Read Sec. 3-14, pages 151-153 and do the exercises 3-29, 3-30, Chapter
3 on page 162 of the book.

In order to understand how fonts are converted to bitmaps, you are advised to
begin with a letter and draw it on a graph paper sheet. Once you plot it on the
graph paper, check its span in terms of the maximum number of x and y units it
covers (see Fig. 9), and consider a matrix of that order. For example, in Fig. 9(a),
the letter R spans an area corresponding to a matrix of order 9 × 14. In
your drawing, all those graph square units which fall inside the font boundary
will have the corresponding matrix value 1, and those falling outside will
have bit 0.

Fig. 9: (a) the letter R drawn on a graph-paper grid; (b) its bitmap plot

In the outline method we simply identify a few shape control points and draw
the corresponding Bezier curves for different segments of the outline.

Some predefined character sets are available in the OpenGL Utility Toolkit
(GLUT). So you need not create fonts as bitmap shapes unless you want to
display a font that is not available in GLUT. For example, you may want to
generate a font of your own mother tongue. The GLUT library contains
routines for displaying both bitmapped and outline fonts. Bitmapped GLUT
fonts are rendered using the glutBitmapCharacter function, and the outline
fonts are generated with polyline (GL_LINE_STRIP) boundaries. We can
display a bitmap GLUT character with
glutBitmapCharacter (font, character);

where parameter font is assigned a symbolic GLUT constant identifying a
particular set of typefaces, and parameter character is assigned either the
ASCII code or the specific character we wish to display. Thus, to display the
upper-case letter 'A', we can use either the ASCII value 65 or the designation
'A'. Similarly, the code 97 corresponds to the lower-case letter 'a', and so on.
You can use both fixed-width fonts and proportionally spaced fonts. For
example, to select a fixed-width font, you assign GLUT_BITMAP_8_BY_13
(8 × 13 pixels) or GLUT_BITMAP_9_BY_15 (9 × 15 pixels). For a
proportionally spaced font of Times Roman type with size 10 point, you assign
GLUT_BITMAP_TIMES_ROMAN_10 to the parameter font; the choice
GLUT_BITMAP_TIMES_ROMAN_12 is also available. For example, to draw
the bitmap font Times Roman of size 10 at the raster position (20, 20), you use
the following statements.
glRasterPos2i(20, 20);
glutBitmapCharacter(GLUT_BITMAP_TIMES_ROMAN_10,'B');

An outline character is displayed with the following function call.

glutStrokeCharacter(font, character);

You can assign the parameter font either the value GLUT_STROKE_ROMAN,
which displays a proportionally spaced font, or the value
GLUT_STROKE_MONO_ROMAN, which displays a font with constant

spacing. You can specify and control the size and position of these characters
by using certain geometric transformations that you will study in the next
section. You may refer to the OpenGL reference manual for more options.

The following code demonstrates displaying the string 'OpenGL' using
glutStrokeCharacter( ).
char text[] = {'O', 'p', 'e', 'n','G', 'L'};
for(int k = 0; k < 6 ; k++)
glutStrokeCharacter(GLUT_STROKE_ROMAN, text[k]);

You may now test your understanding by trying out the following exercises.

E4) Use the outline method to plot the following font boundaries (style should
remain the same). Implement your method in C language using OpenGL.

[Figure: the letters G and t shown in outline style]
E5) Design a bitmap for the English vowels A, E, I, O, U for two different
sizes and then implement the bitmaps to plot these vowels on the display.
Keep in mind that the baseline of all the bitmaps should remain the same
when plotting, just as in printed English.

In the next section we shall study tools needed to gain more flexible control
over the size, orientation and position of the objects of interest.

3.4 TWO-DIMENSIONAL GEOMETRIC TRANSFORMATIONS
When a real life object is modelled using shape primitives, there are several
possible applications. You may be required to do further processing with the
objects. For example, suppose you have created a chair model. You may then
like to view it from different angles, or you may like to create another chair
model with a slight variation in its shape or size. Similarly, you may want to
show an object moving from one position to another along a path, or rotate it
about a given pivot point. All this can be achieved by using simple
mathematical transformations called affine transformations. Since these
transformations help change the geometry of the object in terms of shape, size
or position, we call them geometric transformations. Present section deals with
two-dimensional (2D) geometric transformations. 2D geometric
transformations can be broadly classified as (i) Rigid body transformations
(ii) Non-rigid body transformations. Rigid body transformations do not change
the object dimensions, while non-rigid body transformations modify the
dimensions of the object. For example, when you resize a rectangle, the
transformation is non-rigid body, but when you rotate an object its shape or
size does not change, hence it is a rigid body transformation. To know about
these transformations in detail, you may read the following.

Read Secs. 5-1 and 5-2, Chapter 5, pages 204-210 of the book.

You have seen that a 2D geometric transformation can be represented by a
3 × 3 matrix. Since matrix multiplication is not commutative in general, it
follows that the composite of two primitive transformations has to be carefully
constructed. For example, an object scaled after shifting, or shifted to a
position after scaling, may give different results, as illustrated in the
following example.

Example 2: Consider a triangle in the 2D plane with vertices
(0, 0), (1, 0) and (0, 1) (called the unit triangle). Translate it by (1, 1) and then
scale by 1/2 in each coordinate direction. You get a triangle with vertices
(1/2, 1/2), (1, 1/2) and (1/2, 1). Now if you first scale by 1/2 and then
translate by (1, 1), you will get a different triangle, with vertices
(1, 1), (3/2, 1) and (1, 3/2).
***

Let us consider one more example to obtain the composite of translation and
rotation matrix.

Example 3: Let Δ be a triangle with vertices (0, 0), (2, 0) and (1, 1). You
want to shift it to the position (3, 2) (i.e., (0, 0) transformed to (3, 2)) and then
rotate it about the point (4, 3) by an angle of π/4. The composite of the
translation and rotation matrices is given by

[ 1/√2  -1/√2  4(1 - 1/√2) + 3/√2 ] [ 1  0  3 ]   [ 1/√2  -1/√2  4      ]
[ 1/√2   1/√2  3(1 - 1/√2) - 4/√2 ] [ 0  1  2 ] = [ 1/√2   1/√2  3 - √2 ]
[ 0      0     1                  ] [ 0  0  1 ]   [ 0      0     1      ]
***

You may now try the following exercise.

E6) Magnify a triangle with vertices A = (1, 1), B = (3, 1) and C = (2, 2) to
twice its size in such a way that A remains in its original position.

There are a few important transformations which are not primitive in the sense
that they can be expressed as compositions of translation, rotation and scaling.
These transformations are used extensively in many application software using
graphics. Two such transformations are reflection and shear. When you watch
a cartoon animation movie, for example, you will often find that shapes are
deformed, or that a cartoon character standing in one place is swaying like a
pendulum. This is basically an application of a continuous shear transformation,
giving the appearance of a swaying body. Similarly, you will find plenty of
applications of reflection in many ad films, cartoon films and also in
engineering design software. For the details about composite and other
transformations read the following.

Read Secs. 5-3 to 5-6, pages 211-228 of Chapter 5 and Secs 6-1 to 6-3,
pages 237-242 of Chapter 6 of the book.

The following OpenGL routines are available for applying the 2D primitive
transformations.
• glScaled(sx,sy,1.0): Scale by sx in the x coordinate, by sy in the
y coordinate, with no scaling in the z coordinate.
• glTranslated(dx,dy,0.0): Translate by dx in the x direction, by dy
in the y direction, with no translation in the z direction.
• glRotated(angle,0,0,1): Rotate by 'angle' degrees in the
anticlockwise direction about the origin. The vector (0,0,1) specifies
rotation about the z-axis.

The following simple code demonstrates how these primitives can be applied in
a C program. As in the case of character generation, you only need to change
your display function as follows.

//Translation, rotation, scaling demo

void myDisplay ( void ) {
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(0.0, 1.0, 0.0);
for (float angle = 0; angle < 30; angle = angle + 5)
{
// OpenGL applies the last specified transform first, so this
// sequence scales and rotates with (-0.5, -0.5) as the pivot point.
glTranslated(-0.5f, -0.5f, 0.0f);
glScaled(0.7, 0.7, 1.0);
glRotated(angle, 0.0, 0.0, 1.0);
glTranslated(0.5f, 0.5f, 0.0f);
glBegin(GL_LINE_LOOP);
glVertex3f(-0.5, -0.5, 0.0);
glVertex3f(1.0, -0.5, 0.0);
glVertex3f(1.0, 1.0, 0.0);
glVertex3f(-0.5, 1.0, 0.0);
glEnd();
}
glFlush();
}

You may now check your understanding of 2D geometric transformations by
trying out the following exercises.

E7) Write a code to continuously rotate a square about a pivot point.

E8) A 2D geometric shape is rotated about a point with coordinates (1, 2) by
90° in a clockwise direction. Then, the shape is scaled about the same
point by 2 times in the x coordinate and 3 times in the y coordinate.
Finally, the shape is reflected about the origin. Derive the single
combined transformation matrix for these operations.
E9) If you perform an x-direction shear transformation and then a y-direction
shear transformation, will the result be the same as that obtained by a
simultaneous shear in both directions? Justify your answer with
appropriate reasoning.

Also, you may try the following exercises from the book.

Do the exercises no. 5-6, 5-9 and 5-16, Chapter 5 on pages 233-234 of the
book.

We now end this unit by giving a summary of what we have covered in it.

3.5 SUMMARY
In this unit, we have covered the following:

1. Algorithms for filled-area primitives. These algorithms are classified
into two categories: (i) scan line algorithms and (ii) seed fill algorithms.
2. A scan line algorithm determines the overlap intervals of the polygon
with each scan line to obtain interior points of the polygon for assigning
those points the desired color.
3. A seed fill algorithm starts with a known initial interior point of the
polygon and spreads out to determine other interior points to fill the given
closed area with specified color. Four connected and eight connected
pixels are used to determine other interior points for painting with
specified color.
4. Some other approaches to area filling are
• Scan line polygon fill algorithm
• Boundary fill algorithm
• Flood fill algorithm.
5. The boundary fill algorithm is suitable when the boundary has a single
color, while the flood fill algorithm is more suitable for filling regions
whose boundary has more than one color, for example, the map of a
country surrounded by other countries.
6. These basic algorithms provide methods for filling areas with a single
(solid) color. Modifications can be made for multi color filling or pattern
filling.
7. OpenGL supports filling a polygon with a solid color or with a pattern.
The restriction is that the polygon must be convex. To draw a filled
convex polygon with vertices (x0, y0), (x1, y1), …, (xN, yN) having
floating point values in the x and y coordinates, use the following piece
of code:

glBegin(GL_POLYGON);
glVertex2f(x0,y0);
glVertex2f(x1,y1);
....
....
glVertex2f(xN,yN);
glEnd( );

8. Two basic approaches used in character generation are the bitmap method
and the outline method.
9. All 2D geometric transformations can be obtained by using appropriate
combinations of three primitive transformations (i) translation
(ii) rotation (iii) scaling. Shear and reflection are two important
transformations which can also be derived from the primitive
transformations. But in view of their wide applications, they are
discussed separately and are treated as primitive transformations in many
graphics packages.
10. 2D transformations can be classified as rigid body or non-rigid body
transformations. Rigid body transformations keep the shape and size of
the objects intact whereas non-rigid body transformations do affect the
shape and/or size of the object. Translation and rotation are examples of
rigid body transformations. Other transformations come under non-rigid
body transformations.
11. All 2D geometric transformations can be expressed as 3 × 3 matrices,
with points in 2D plane represented in homogeneous coordinates.
12. While constructing a composite transformation, one should be clear
about the sequence in which transformations have to be applied, as these
are not commutative in general.

3.6 SOLUTIONS/ANSWERS
Exercises given on pages 49-50 of the unit.

E1) First label the vertices and edges as shown in Fig. 10.

[Figure: the polygon of Fig. 6 with vertices labelled V0 to V7 and edges labelled E0 to E7]

Fig. 10

Start with y = 0 and continue till y = 40. For y = 0 there are four
edges, namely E0, E1, E2 and E3. The sorted edge table (each entry lists
y_max, x at y_min, and 1/m) is created as follows:
y = 30 : (40, 30, -1), (40, 30, 1)
y = 20 : (40, 20, -1/2) (horizontal edge ignored)
y = 0  : (20, 20, -1/2), (10, 20, 1), (10, 40, -1), (40, 40, 0)

Active edge lists (AEL) for the scan lines y = 5, 20, 30, 35 are as follows.

y = 5:  AEL = {E0, E1, E2, E3}
y = 20: AEL = {E6, E3}
y = 30: AEL = {E6, E5, E4, E3}
y = 35: AEL = {E6, E5, E4, E3}

E2) The vertex V1 = V4 will be counted only once, since it is neither a
maximum nor a minimum of the y coordinates for the pairs of edges
{V0V1, V1V2} and {V3V4, V4V5}.

E3) Fig. 8(a) is made up of quadrilaterals, Fig. 8(b) is a strip of triangles,
while Fig. 8(c) is a fan of triangles. You need to make use of a
glBegin()/glEnd() pair with argument GL_QUADS for Fig. 8(a),
GL_TRIANGLE_STRIP for Fig. 8(b) and GL_TRIANGLE_FAN for
Fig. 8(c). Suppose in Fig. 8(a) the vertices V0 through V8 have the
following coordinates: V0 = (10, 50), V1 = (20, 30), V2 = (10, 10),
V3 = (50, 50), V4 = (50, 30), V5 = (50, 10), V6 = (100, 50),
V7 = (80, 30), V8 = (100, 10).

V0 V3 V6

V1 V7
V4

V2 V5 V8

A sample code for Fig. 8(a) is as follows.

glBegin(GL_QUADS);
glVertex2f(20,30); // Quad1
glVertex2f(10,10);
glVertex2f(50,10);
glVertex2f(50,30);
glVertex2f(50,30); //Quad2
glVertex2f(50,10);
glVertex2f(100,10);
glVertex2f(80,30);
glVertex2f(50,50); //Quad3
glVertex2f(50,30);
glVertex2f(80,30);

glVertex2f(100,50);
glVertex2f(10,50); //Quad4
glVertex2f(20,30);
glVertex2f(50,30);
glVertex2f(50,50);
glEnd();

You may also use vector form of vertices to draw the figure. Use similar
code for the other two cases.

Exercises given on page 162 of the book.

3-21 Using the midpoint algorithm, you can find the curve positions on each
scan line y. The two positions help you find the intersection points and
the interior region. Suppose you have identified a pixel
(x_c + x, y_c + y) as the next point to be plotted in the positive quadrant
with respect to the centre of the ellipse; then the scan line intersection
pixel pair will be {(x_c - x, y_c + y), (x_c + x, y_c + y)}. Assign the fill
color to all pixels lying between these two pixels on the scan line.
Modify the function ellipsePlotPoints(int xCenter, int yCenter, int
x, int y) of the midpoint code given in the book on pages 129-130 as
follows.

void ellipsePlotPoints(int xCenter, int yCenter, int x, int y)
{
glBegin(GL_POINTS);
for (int xx = xCenter - x; xx <= xCenter + x; xx++)
{
glVertex2i(xx, yCenter + y);
glVertex2i(xx, yCenter - y);
}
glEnd();
}

3-23 Let P be a given polygon with N vertices Vi = (x_i, y_i),
i = 0, 1, …, N - 1, in counterclockwise orientation. Find
x_min = min_i {x_i}, x_max = max_i {x_i} and
y_min = min_i {y_i}, y_max = max_i {y_i}.

Let Q = (x, y) be the point to be tested. Initialize the winding number
wn = 0. If any of the following conditions is satisfied, Q cannot be
inside the polygon.

(i) x < x_min  (ii) x > x_max  (iii) y < y_min  (iv) y > y_max

In case none of the conditions is satisfied, you take a ray starting
from Q and extending to a point distant from the polygon, and then
construct a vector in the direction of this ray. To simplify the
computations, you can choose a vector u from Q in the direction (1, 0);
this vector lies on the scan line containing the point Q. If no
vertex is on this scan line, your choice of vector is correct; else choose a
vector with a slightly modified direction so that no vertex falls on the ray
from Q in the chosen direction. Create an active edge list (AEL) for this
ray: each edge with at least one x-extent greater than x is included
in the AEL. Next make a vector v perpendicular to u: if u = (u_x, u_y), v
can be defined as v = (-u_y, u_x). For each edge ViVi+1 in the AEL, take
the dot product of the vector Vi+1 - Vi with v, and let d_i = (Vi+1 - Vi)·v.
If d_i > 0, the edge ViVi+1 crosses the ray from right to left; update
wn = wn + 1. Otherwise the edge crosses the ray from left to right, and
wn = wn - 1. When all the edges are processed, if wn ≠ 0 the
point Q is inside the polygon, else it is outside.

3-24 For each scan line, use the winding number rule to obtain interior points
on the scan line by counting and updating the winding number at each
edge crossing on the AEL. For example, suppose the AEL contains E0 as
its first edge. Then, as the pixel position on the scan line is incremented
past E0, the winding number is updated: it is either decremented by 1 or
incremented by 1, depending on whether the edge crosses the scan line
from left to right or vice versa. Keep updating the winding number at
each edge crossing. This way, on each scan line, you will identify the
inside/outside pixels.

3-27 An ellipse can be properly filled using the 4-connected boundary fill
method. With the 8-connected boundary fill method, some of the pixels
that you set in the frame buffer will lie outside the boundary and hence
be redundant; as a result, you will find some leakage in the filling.

3-28 A code for flood fill algorithm with 4-connected cells is given on page
150 of the book. Implement the same code using OpenGL as follows.

void floodFill4 (int x, int y, int fillColour, int interiorColour)
{
int colour;
/* Set current colour to fillColour, then perform the
following operations. */
getPixel (x, y, colour);
if (colour == interiorColour)
{
// Set colour of pixel to fillColour.
setPixel (x, y);
floodFill4(x + 1, y, fillColour, interiorColour);
floodFill4(x - 1, y, fillColour, interiorColour);
floodFill4(x, y + 1, fillColour, interiorColour);
floodFill4(x, y - 1, fillColour, interiorColour);
}
}

Exercises given on page 52 of this unit

E4) Here the letter G is produced using linear segments for the outline method. You
may also use Bezier curves of higher degree to approximate the outline of
the character. The outline will be drawn using the code given here. For
filling the interior, you need to employ one of the fill area methods.
Vertices of the line segments are given as follows.

GLint vt[][2] = {{315,245}, {315,195}, {316,190}, {333,183},
{ 251,183}, {263,186}, { 270, 190},{272,195}, {272,225}, {
253, 238}, {242,236}, {235,229}, {231,219}, {227,198}, {
227,169}, { 228,152}, {233,133}, {242,124}, {248,120}, {
253,120}, {269,119}, { 273,121}, { 280,126}, {292,134}, {
299,141}, {305,149}, {312,162}, {312,111}, {307,110}, {307,
120}, {304, 130}, {301, 133}, {298, 131}, { 289, 125},
{279, 120}, {266, 113}, {251, 109}, {239, 110},{229,110},
{219,114}, {208,120}, {196,129}, {190,138}, {185,148},
{181, 158}, {180,172},{180,184}, {180,196}, {184, 203},
{189, 214}, {198, 226}, {211, 235}, {220,242}, {230,245},
{245,247}, {260, 247}, {271, 241}, {284, 236}, {293, 231},
{299, 231}, {306, 235}, {315,245}};

The code uses simple line loop to produce the character.

void lineG(void)
{
glBegin(GL_LINE_LOOP);
for (int i = 0; i < 62; i++) {
vt[i][1] = 480 - vt[i][1];
glVertex2iv(vt[i]);
}
glEnd();
}

Call the function lineG in your display function. The code for the
letter 't' is similar.

E5) Here we discuss the bitmap for letter 'E'. You need to place the character
'E' on a square grid of sufficiently large size. If the character's area is
overlapping more than 50% of a cell of the square grid, assign that cell a
bit 1, otherwise assign the cell bit 0. This way you will have a
rectangular grid having cells either assigned 0 or 1. Map this onto a
rectangular matrix and plot the character 'E' using the bitmap method as
shown in Fig. 11.

1 1 1 1 1 1 1
 
1 1 1 1 1 1 1

1 1 0 0 0 0 1
1 1 0 0 0 0 0
1 1 0 0 0 0 0

 
1 1 0 0 0 0 1
1 1 1 1 1 1 1
1 1 0 0 0 0 1
 
1 1 0 0 0 0 0
1 1 0 0 0 0 0
1 1 0 0 0 0 1
 
 1 1 1 1 1 1 1
1 1 1 1 1 1 1 

Fig. 11

Other characters can also be modelled using the same technique. Repeat
the process for a larger font size. Alternatively, you can double the size
of the bitmap by mapping each cell to four cells with the same bit value
as the original cell, as shown in Fig. 12.
[Figure: a single bitmap cell, whether 1 or 0, mapped to a 2 × 2 block of cells with the same bit value]

Fig. 12

Exercise given on page 53 of this unit

E6) You need to apply scaling keeping the point (1, 1) fixed. For this, you
require the following sequence of operations: (i) translate the triangle so
that (1, 1) moves to the origin, (ii) scale the triangle by a factor of 2 in
both coordinates, (iii) translate the triangle back so that A returns to
(1, 1). The sequence of transformations is as follows.

(x, y) → (x - 1, y - 1) → (2(x - 1), 2(y - 1)) → (2(x - 1) + 1, 2(y - 1) + 1) = (2x - 1, 2y - 1)

The triangle is transformed to a triangle with vertices (1, 1), (5, 1), (3, 3).

Exercises given on pages 54-55 of this unit

E7) #include <GL/glut.h>

static GLfloat rotat=0.0;

void init(void);
void display(void);
void reshape(int w, int h);
void rotate(void);

int main(int argc, char** argv)
{
glutInit(&argc, argv); // Initialize the GLUT library
glutInitDisplayMode(GLUT_RGB|GLUT_DOUBLE);
glutInitWindowSize(500,500);
glutInitWindowPosition(100,100);
glutCreateWindow("Moving squares");
init();
glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutIdleFunc(rotate);
glutMainLoop();
return 0;
}

void init(void){
glClearColor(0.0,0.0,0.0,0.0);
}

void display(void)
{ glClear(GL_COLOR_BUFFER_BIT);
glPushMatrix(); // Push the transformation matrix onto the stack
glTranslated(-50.0,-50.0,0.0);
// Applied last: translate the pivot point (-50,-50) back to its position
glRotatef(rotat,0.0,0.0,1.0); // Rotate about the origin
glTranslated(50.0,50.0,0.0);
// Applied first: translate the pivot point (-50,-50) to the origin
glColor3f(0.0,0.0,1.0); // Set colour of square
glRectf(-50.0,-50.0,50.0,50.0); // Draw square
glPopMatrix(); // Pop the matrix from the stack
glutSwapBuffers(); // Swap buffers
}

void reshape(int w, int h)
{ glViewport(0,0,(GLsizei)w,(GLsizei)h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-250.0,250.0,-250.0,250.0,-1.0,1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}

void rotate(void)
{ rotat += 0.1; // Continuously increase the rotation angle by 0.1
if (rotat > 360.0)
rotat -= 360.0;
glutPostRedisplay(); // Send the current window for redisplay
}

E8) The sequence of transformations is as follows.

[ -1  0  0 ] [ 1  0  1 ] [ 2  0  0 ] [  0  1  0 ] [ 1  0  -1 ]
[  0 -1  0 ] [ 0  1  2 ] [ 0  3  0 ] [ -1  0  0 ] [ 0  1  -2 ]
[  0  0  1 ] [ 0  0  1 ] [ 0  0  1 ] [  0  0  1 ] [ 0  0   1 ]

From right to left, the sequence is: (i) translate so that (1, 2)
moves to the origin, (ii) rotate about the origin by -90°, (iii) scale by
(2, 3), (iv) translate back so that (1, 2) is at its previous position,
(v) flip about the origin. The single combined transformation matrix is
obtained by multiplying all these matrices.

E9) No. An x-direction shear followed by a y-direction shear leads to the
matrix

[ 1  0  0 ] [ 1  a  0 ]   [ 1  a       0 ]
[ b  1  0 ] [ 0  1  0 ] = [ b  ab + 1  0 ]
[ 0  0  1 ] [ 0  0  1 ]   [ 0  0       1 ]

while a simultaneous shear in both directions is represented by the
matrix

[ 1  a  0 ]
[ b  1  0 ]
[ 0  0  1 ]

Exercises given on pages 233-234 of the book.

5-6 (a) For rotation: let R(θ) denote the matrix for rotation about the origin
by an angle θ. You know that R(θ1)R(θ2) = R(θ1 + θ2) = R(θ2 + θ1) =
R(θ2)R(θ1). Therefore two successive rotations are commutative.

(b) For translation:

[ 1  0  tx ] [ 1  0  ux ]   [ 1  0  tx + ux ]   [ 1  0  ux ] [ 1  0  tx ]
[ 0  1  ty ] [ 0  1  uy ] = [ 0  1  ty + uy ] = [ 0  1  uy ] [ 0  1  ty ]
[ 0  0  1  ] [ 0  0  1  ]   [ 0  0  1       ]   [ 0  0  1  ] [ 0  0  1  ]

(c) For scaling:

[ sx  0  0 ] [ vx  0  0 ]   [ sx·vx  0      0 ]   [ vx  0  0 ] [ sx  0  0 ]
[ 0  sy  0 ] [ 0  vy  0 ] = [ 0      sy·vy  0 ] = [ 0  vy  0 ] [ 0  sy  0 ]
[ 0   0  1 ] [ 0   0  1 ]   [ 0      0      1 ]   [ 0   0  1 ] [ 0   0  1 ]

5-9 The matrix for reflection about the x-axis is

[ 1   0  0 ]
[ 0  -1  0 ]
[ 0   0  1 ]

A counterclockwise rotation of 90 degrees has the matrix

[ 0  -1  0 ]
[ 1   0  0 ]
[ 0   0  1 ]

The composite matrix is then the same as (5-51) of the book.

5-16 You need to set the base line of the font as the line relative to which the shear operation is performed. Then apply the shear transformation to the font outlines. The vector font definition provides an array of vertices that you will use to make the font outline. Out of those vertices, choose the one that has the lowest y-coordinate value. Imagine a horizontal line passing through that vertex and take it as the line about which the shear operation is applied. Define your shear matrix and then apply it to the vertex array. Experience shows that a shear factor of 1/3 or 1/4 works well relative to the base line of the font.
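A minimal C sketch of this idea is given below (the names Vertex and italicize are our own; a real vector font would supply the outline vertices). It finds the base line as the horizontal line through the lowest vertex and shears every vertex relative to it:

```c
#include <stddef.h>

typedef struct { double x, y; } Vertex;

/* Shear the outline vertices of a character relative to its base line to
 * obtain an italic-looking glyph:  x' = x + shx * (y - y_base),  y' = y.
 * y_base is the y-coordinate of the lowest vertex (the base line); a shear
 * factor of about 1/3 or 1/4 gives a typical italic slant. */
static void italicize(Vertex *v, size_t n, double shx)
{
    if (n == 0) return;
    double y_base = v[0].y;
    for (size_t i = 1; i < n; i++)          /* find the base line */
        if (v[i].y < y_base) y_base = v[i].y;
    for (size_t i = 0; i < n; i++)          /* shear about the base line */
        v[i].x += shx * (v[i].y - y_base);
}
```

Vertices on the base line stay fixed, so the character does not drift sideways.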

Computer Graphics
3.7 PRACTICAL EXERCISES

Session 3
1. Implement the Scan line polygon fill algorithm for any arbitrary polygon
in C-language and then use your code to fill each of the following type of
polygon.
i) Convex polygon
ii) Concave polygon
iii) Self intersecting polygon.

Session 4
1. Implement the boundary fill algorithm and flood fill algorithm in
C-language and use your code to fill two different types of closed areas
such as
i) A Circle
ii) A self intersecting polygon
Compare the results of two algorithms for the self intersecting polygon.

2. Write a code using C-language to generate an arbitrary character by using (i) the bitmap method and (ii) the outline method. Use this code to generate three different characters of your own mother tongue.

3. Write a code in C-language for character generation using the standard OpenGL functions (both bitmap and outline style).

Session 5
1. Write a C-code for an interactive program which allows a user to draw a
polygon object in a window and then gives various choices of geometric
transformations on the polygon. Once the user selects a choice, the
transformed polygon is also plotted on the window. To begin with, you
can use a triangle in place of a polygon.

OR

2. Write a C-code that plots an object on the window and on the user’s click
of mouse on the window, the object starts rotating continuously until the
user presses the mouse again.

3. Use your C-code of character generation to draw the same character with
regular font type and italic font type by making use of shear
transformation.

Clipping and 3D Primitives
UNIT 4 CLIPPING AND 3D PRIMITIVES
Structure Page No
4.1 Introduction 65
Objectives
4.2 2D Clipping Algorithms 66
4.3 Three Dimensional Concepts and Display Methods 72
4.4 Bézier and B-Spline Curves 75
4.5 Summary 79
4.6 Solutions/Answers 81
4.7 Practical Exercise 87

4.1 INTRODUCTION
This unit introduces you to three important concepts of Computer Graphics. Sec. 4.2 discusses the concept of clipping, which means the removal of objects not in the window frame to be plotted on a display device. This simple concept has far-reaching consequences in computer animation and especially in video games, a fast-growing industry. A good clipping strategy is critically important for speeding up the rendering of the current scene and hence plays a crucial role in maximizing a game's performance and visual quality.

In Sec. 4.3, we have discussed methods related to modelling and display of three dimensional objects on a 2D display device. This gives an overview of various techniques that are involved in achieving realism in 2D display of a 3D scene. These techniques include, inter alia, projection methods, depth calculation and surface rendering.

For surface rendering, the most suitable and common methods are based on Bézier and B-spline representations, for two simple reasons: ease of use and efficiency. Sec. 4.4 discusses the Bézier and B-spline curves.

Objectives
After reading this unit you should be able to:
 clip 2D line segments against a rectangular clipping boundary using one
of the two line clipping algorithms–Cohen Sutherland line clipping
algorithm and Liang Barsky line clipping algorithm;
 explain how a curve and a text string are clipped against a window boundary;
 explain the concepts used in rendering and projecting a 3D scene onto a
2D display;
 use basic techniques and methods employed in 3D object modelling and
representation, which includes
 polygonal surfaces and their representation,
 Bézier and B-spline curves.

4.2 2D CLIPPING ALGORITHMS
Clipping is an operation that eliminates invisible objects from the view
window. To understand clipping, recall that when we take a snapshot of a
scene, we adjust our aperture and focus to bring only desirable objects within
the window and the final shot is taken once the window contains only those
objects within the frame. From the point of view of mathematics, the problem of clipping looks simple. All that we have to do is find the extremities of the object and check their intersection with the window. This, however, is a critical problem from the point of view of processing in a computer. Especially in the gaming industry, when animation has to be shown at high speed, the time complexity of these clipping methods is crucial. You will observe that for this reason, in most of the clipping algorithms intersection calculations are avoided as far as possible. The basic idea behind these algorithms is to maximize performance and efficiency by deciding the visibility/partial visibility/invisibility of an object using the minimum number of intersection calculations. In this section you will study clipping algorithms for the following primitive objects:
 Point
 Line
 Area
 Curve
 Text

Point clipping is very simple. All you need to check is whether a point is
inside the window extremes in x- and y-directions. For line clipping several
algorithms have been introduced over the years and many refinements have
also been suggested by people in order to improve the performance of the
algorithms. One of the first algorithms that is still being used in applications is
Cohen Sutherland algorithm. It categorizes line segments into three
categories (i) trivially rejected (ii) trivially selected (iii) may be partially visible
or totally invisible. For category (i) and (ii) no intersection calculations are
required whereas category (iii) contains all those line segments that need
further processing for determining if they are partially visible or totally
invisible. For the details about point clipping and line clipping read the
following sections of the book carefully.

Read Secs. 6-5 to 6-7, Chapter 6, pages 244-250 of the book upto
Cohen Sutherland line clipping.

Before proceeding further, we summarize the main steps of the Cohen-Sutherland line clipping algorithm.

Recall that the rectangular window W has window boundaries parallel to the coordinate axes, falling on the lines x = xw_min, x = xw_max, y = yw_min and y = yw_max. The four window boundaries are labelled in the given order by the numbers 1, 2, 3, 4. Let L be a line segment with end points P1 = (x1, y1) and P2 = (x2, y2). The algorithm proceeds as follows.
66
1. Divide the entire plane into nine disjoint regions by extending the four window boundaries indefinitely in both directions.
2. Assign a 4-bit region code to each region and initialize it to zero (0000) for all points. The region code helps you in identifying line segments that can be (i) trivially rejected (line segment AB in Fig. 1 below), (ii) trivially selected (line segment CD), (iii) possibly partially visible (line segment EF). In case the line segment is neither trivially rejected nor trivially selected, it is further processed to check its possible intersections with the window boundaries. In this case too, the region code helps in identifying which window boundary can potentially intersect the line segment (see Fig. 1).

            1001 | 1000 | 1010
            -----+------+-----
            0001 | 0000 | 0010
            -----+------+-----
            0101 | 0100 | 0110

Fig. 1: The nine regions and their 4-bit codes. Line segment AB runs from region 1001 to region 0101 (trivially rejected), CD lies entirely in region 0000 (trivially selected), and EF runs from region 0000 to region 0110 (possibly partially visible).

3. For each end-point P in the plane, assign it a code as per the following rule:
i) If P is to the left of the line x = xw_min, the first (least significant) bit is set to 1.
ii) If P is to the right of the line x = xw_max, the second bit is set to 1.
iii) If P is below the line y = yw_min, the third bit is set to 1.
iv) If P is above the line y = yw_max, the fourth (most significant) bit is set to 1.
(This way we have a unique identification code for each region. For example, if the point P is in the top right region of the window then it has the code 1010.)
4. For each line segment L, determine the region codes code(P1) and code(P2) of its end points P1 and P2.
5. If code(P1) = 0 and code(P2) = 0, then the line is completely visible.
Select the line and store it in the frame buffer. Else
6. Find out bitwise intersection of code(P1) and code(P2). If intersection is
a non-zero code, then line segment L is completely outside the window
and is rejected. Else there are two possibilities
i) Line may be partially visible.
ii) Line may be completely invisible.
For example, if the two endpoints are such that code(P1) = 0101
and code(P2) = 1010, then the line segment may or may not
intersect the window. See the two line segments AB and CD in
Fig. 2 below. Both satisfy the conditions given above. However, CD
is partially visible while AB is completely invisible.
            1001 | 1000 | 1010
            -----+------+-----
            0001 | 0000 | 0010
            -----+------+-----
            0101 | 0100 | 0110

Fig. 2: Both AB and CD have end points with region codes 0101 and 1010, yet CD is partially visible while AB is completely invisible.

Now proceed as follows:


7. Check if code(P1) or code(P2) is zero. In this case the line is partially visible. If code(P1) = 0, then find the non-zero bits present in code(P2) and compute the intersection of L with the corresponding window boundaries. For example, if code(P2) = 1010, then find the intersection point Int1 with the right boundary. If Int1 is inside the window boundary, replace P2 with Int1; else find the intersection point Int2 with the top window boundary and replace P2 with Int2. Select the updated line segment L and send it to the frame buffer.
8. If code(P2) = 0, swap P1 and P2 and repeat step 7.
9. If both code(P1) and code(P2) are non-zero, then for i = 1, 2, 3, 4 find the intersection of L with the i-th window boundary. One way to implement this is given below.
10. For end point P1, find out if it is beyond the i-th window boundary. If yes, find the intersection point and replace P1 with that point. Repeat the same for P2.
11. If after step 10, updated L is inside the window boundary, send it to the
frame buffer, else L is completely invisible and is rejected.

You may observe here that for totally or partially visible lines, the algorithm works quite efficiently. However, for some invisible lines one has to calculate intersection points with all four window boundaries to finally decide about rejection. For example, for an invisible line having one end point in the region 0110 and the other end point in the region 1001, intersections with all four window boundaries have to be calculated in order to assert that the line is invisible.
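Steps 1-6 above (region coding and the two trivial tests) can be sketched in C as follows; the identifiers are our own and the window is assumed axis-aligned:

```c
/* 4-bit region codes for the Cohen-Sutherland algorithm:
 * bit 1 (LSB) = left of x = xw_min, bit 2 = right of x = xw_max,
 * bit 3 = below y = yw_min,         bit 4 = above y = yw_max.    */
enum { LEFT = 1, RIGHT = 2, BELOW = 4, ABOVE = 8 };

typedef struct { double xmin, ymin, xmax, ymax; } Window;

static int region_code(const Window *w, double x, double y)
{
    int code = 0;
    if (x < w->xmin) code |= LEFT;
    else if (x > w->xmax) code |= RIGHT;
    if (y < w->ymin) code |= BELOW;
    else if (y > w->ymax) code |= ABOVE;
    return code;
}

/* Trivial accept: both codes zero.  Trivial reject: bitwise AND non-zero.
 * Any other combination needs the intersection tests of steps 7-11. */
static int trivially_visible(int c1, int c2)   { return (c1 | c2) == 0; }
static int trivially_invisible(int c1, int c2) { return (c1 & c2) != 0; }
```

With the window of E1), the end point (0, 0) gets code 0101 and (4, 5) gets code 1000, so the segment is neither trivially selected nor trivially rejected.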

You may now test your understanding by trying the following exercises.

E1) Let W be a window having two diagonally opposite corners at (1, 1) and
(5, 4) . Trace Cohen Sutherland line clipping algorithm for the line
segment having two end points (0, 0) and (4, 5) .

E2) Let W be the window as given in E1). Clip a triangle with vertices (0, 0), (4, 5) and (6, 1) against the window by tracing the Cohen-Sutherland line clipping algorithm for each of the edges.

Another important algorithm, known as the Liang-Barsky algorithm, is based on the parametric form of the line segments and is in general faster than the Cohen-Sutherland line clipping algorithm. For the details of this algorithm, you may read the following.

Read Sec. 6-7, Chapter 6, pages 250-252 of the book.

Before you apply the Liang Barsky algorithm to a particular problem, let us
recapitulate the main steps involved in Liang-Barsky algorithm.

Recall that x  x 2  x1, y  y2  y1 . Parametric equation of the line


segment L is given by x  x1  u x ; y  y1  u y, 0  u  1 . The
algorithm proceeds as follows:

1. Consider the infinite extension of the k-th window boundary (k = 1, 2, 3, 4), which divides the entire plane into two parts: the inside region that contains the window, and the outside region corresponding to that window boundary. For each line segment L proceed as follows:
2. Define the constants pk, qk (k = 1, 2, 3, 4) as follows:
i) p1 = −Δx, p2 = Δx, p3 = −Δy, p4 = Δy
ii) q1 = x1 − xw_min, q2 = xw_max − x1, q3 = y1 − yw_min, q4 = yw_max − y1.

The above constants are computed initially and are used in most of the computations, from deciding the visibility of the line segment to calculating the intersections with the window boundaries. Note that p1 = −p2; still, the authors of the algorithm have used them as independent constants. This helps, inter alia, in efficiently defining the loops and in managing the intersection points with the corresponding window boundary.

3. Initialize the parameters u1 = 0 and u2 = 1 that define the part of the line segment L lying within the window, namely u1 ≤ u ≤ u2. These parameter values will be updated as and when required.
4. For k = 1, …, 4:
i) If pk = 0, the line is parallel to the k-th window boundary and there are two possibilities:
a) qk < 0. In this case the line segment is completely outside the window boundary and is rejected.
b) qk ≥ 0. In this case the line segment is inside the k-th window boundary and is kept for further processing.

Fig. 3 below gives you an idea of such line segments. For lines AB and CD, p1 = p2 = 0. For k = 1, q1 < 0 for AB while q1 > 0 for CD; hence AB is rejected at k = 1 while CD is kept for further processing. When k takes the value 2, q2 < 0 for CD and it is rejected.

Fig. 3: Two vertical line segments AB and CD (p1 = p2 = 0). AB lies to the left of the window (q1 < 0) and is rejected at k = 1; CD lies to the right (q1 > 0 but q2 < 0) and is rejected at k = 2. The window boundaries are labelled k = 1 (left), k = 2 (right), k = 3 (bottom), k = 4 (top).

ii) If pk < 0, the infinite extension of the directed line segment L proceeds from the outside to the inside region of the k-th window boundary. If pk > 0, the infinite extension of L proceeds from inside to outside of the k-th window boundary. For any such non-zero value of pk, the intersection point with the boundary has parameter value u = rk, where rk = qk / pk. Thus, refresh u1 and u2 as follows:
a) if pk < 0, refresh the value of u1 by u1 = max{u1, rk};
b) if pk > 0, refresh the value of u2 by u2 = min{u2, rk}.
5. If u1 > u2, L is invisible; reject it. Else store L in the frame buffer for plotting for parameter values u such that u1 ≤ u ≤ u2.

Fig. 4 and Fig. 5 describe various categories of line segments as specified in the algorithm.

Fig. 4: Line L1 (vertical, left of the window): p1 = 0, q1 < 0 — rejected. Line L2 (horizontal, above the window): p4 = 0, q4 < 0 — rejected. Line L3 crosses the window: p1 < 0, so the line proceeds from outside to inside for the first window boundary, and p2 > 0, so it proceeds from inside to outside for the second window boundary.


u 1
P2

u1
u2
u0
P1 u1  u 2 , reject line

Fig. 5.

You may now test your understanding of the algorithm by trying the following
exercise.

E3) Repeat E2) using Liang Barsky line clipping algorithm.

After attempting E3) you must have felt that the Liang-Barsky algorithm
reduces intersection calculations and is more efficient than the Cohen-
Sutherland algorithm.
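The steps above can be collected into a compact C sketch (function names are our own). Each call to clip_test handles one window boundary using the constants pk and qk:

```c
#include <stdbool.h>

/* One Liang-Barsky boundary test: updates *u1/*u2 for the boundary with
 * constants (p, q); returns false as soon as the segment is invisible. */
static bool clip_test(double p, double q, double *u1, double *u2)
{
    if (p == 0.0) return q >= 0.0;       /* parallel: keep iff inside */
    double r = q / p;
    if (p < 0.0) {                       /* outside -> inside */
        if (r > *u2) return false;
        if (r > *u1) *u1 = r;
    } else {                             /* inside -> outside */
        if (r < *u1) return false;
        if (r < *u2) *u2 = r;
    }
    return true;
}

/* Clip (x1,y1)-(x2,y2) against [xmin,xmax] x [ymin,ymax].  On success,
 * *u1 and *u2 delimit the visible part of x = x1 + u*dx, y = y1 + u*dy. */
static bool liang_barsky(double x1, double y1, double x2, double y2,
                         double xmin, double ymin, double xmax, double ymax,
                         double *u1, double *u2)
{
    double dx = x2 - x1, dy = y2 - y1;
    *u1 = 0.0; *u2 = 1.0;
    return clip_test(-dx, x1 - xmin, u1, u2)   /* k = 1, left   */
        && clip_test( dx, xmax - x1, u1, u2)   /* k = 2, right  */
        && clip_test(-dy, y1 - ymin, u1, u2)   /* k = 3, bottom */
        && clip_test( dy, ymax - y1, u1, u2);  /* k = 4, top    */
}
```

For the segment (0, 0)-(4, 5) of E1) against the window (1, 1)-(5, 4), this yields u1 = 0.25 and u2 = 0.8, i.e. the visible part from (1, 5/4) to (3.2, 4).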

As you proceed further, you would see that the methods for curve clipping and
character clipping have also been developed based on bounding rectangles.
This means for a given 2D object, you can find out the bounding rectangle with
edges parallel to the coordinate axes (parallel to the sides of clipping window).
In case the bounding rectangle does not intersect the clipping window, reject
the object. Otherwise, determine the partially/totally visible objects. Intersection calculations with the window boundary are performed for partially visible objects, whenever essential. You may now read the following.

Read Secs. 6-9 and 6-10, Chapter 6, page 264 of the book.
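The bounding-rectangle pretest described above amounts to a simple interval-overlap check, sketched below in C (type and function names are our own):

```c
typedef struct { double xmin, ymin, xmax, ymax; } Rect;

/* Curve and text clipping often start with this cheap test: if the
 * object's axis-aligned bounding rectangle misses the clipping window
 * entirely, the object is rejected without any intersection calculations;
 * otherwise finer (per-segment or per-character) tests are run. */
static int rects_overlap(const Rect *a, const Rect *b)
{
    return a->xmin <= b->xmax && b->xmin <= a->xmax &&
           a->ymin <= b->ymax && b->ymin <= a->ymax;
}
```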

You may try the following exercises before moving to the next section.

Do the exercises no. 6-10, 6-21, 6-22, Chapter 6 on pages 268-269 of the
book.

The real challenge in Computer Graphics is to transform a 3D scene into a 2D image with as much realism as possible. This transformation involves various steps before an object model is ready for display. The methods used in this processing are called 3D display methods. In the next section we shall discuss some of the aspects covered under 3D display methods for modelling, rendering and animation of 3D real life objects.

4.3 THREE DIMENSIONAL CONCEPTS AND
DISPLAY METHODS
Imagine yourself taking a picture by a camera. What do you normally do?
You specify a viewpoint and view direction and then set up a view window.
Once that is done you take a snap and the image of 3D scene/object is captured
in a 2D film. Basic idea behind this is projection from 3D to 2D. Projections
are broadly classified under two names (i) Parallel (ii) Perspective. When
objects are projected onto the display plane along parallel lines, we call it
parallel projection. In this case a view direction is specified and images of points on the object are obtained by taking lines parallel to the view direction and finding their intersection with the display plane. If you have seen an architectural drawing of a building plan, you would have noticed that it normally shows three different plots: top view, front view and side view. This
actually is an example of a parallel projection of the building to be constructed.
On the other hand, perspective projection corresponds more closely to the way
a human eye perceives a 3D scene. A viewpoint is specified and points from
the object are projected to the view plane by considering lines emanating from
the points on the object and converging to the viewpoint. Intersections of these
converging lines with the display plane make the image of the object under
perspective projection.
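A minimal C sketch of the two kinds of projection is given below (our own simplified setup: orthographic parallel projection onto the plane z = 0, and a perspective projection with the viewpoint at the origin and the view plane at z = d):

```c
typedef struct { double x, y, z; } P3;
typedef struct { double x, y; } P2D;

/* Orthographic parallel projection onto the plane z = 0: simply drop z. */
static P2D project_parallel(P3 p)
{
    P2D q = { p.x, p.y };
    return q;
}

/* Simple perspective projection with the viewpoint at the origin and the
 * view plane at z = d (assuming d > 0 and p.z > 0): similar triangles give
 * x' = d*x/z, y' = d*y/z, so distant points shrink toward the centre. */
static P2D project_perspective(P3 p, double d)
{
    P2D q = { d * p.x / p.z, d * p.y / p.z };
    return q;
}
```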

To have a better understanding of projections from 3D to 2D, read the following.

Read Sec. 9-1, Chapter 9, pages 317-323 of the book.

Common practice in modelling and rendering a surface is to approximate it by a polygonal surface. By a polygonal surface we mean a surface which is made up of polygons. The main reason for converting a surface to a polygonal surface is to simplify the representation and so speed up the surface rendering. This is possible as all the polygons are described by linear equations.

You may now read the following.

Read Sec. 10-1, Chapter 10, pages 325-330 of the book.

Remember that in polygonalization of the surface, the following rules must be followed. Any two polygons either
(i) share a common edge,
(ii) share only a common vertex, or
(iii) are disjoint.

Fig. 6(a) on the next page gives an example of an admissible approximation of a surface using triangles, called a triangulation. Further, Figs. 6(b) and 6(c) give examples of some cases that are not admissible as per the rules given above. This means an edge of a polygon cannot be a part of an edge of another polygon (see Fig. 6(b)).

Similarly, no two polygons can overlap leading to a common non-empty interior (see Fig. 6(c)).

Fig. 6: (a) A triangulation of a surface, showing polygons with a common edge, a common vertex, and disjoint polygons; (b), (c) inadmissible configurations.

After reading Sec. 10-1 of the book, you would have acquired sufficient
theoretical background of how a polygonal surface is stored and rendered in
graphics. To understand and implement the ideas about the polygonal surfaces,
consider a simple example of the standard cube. The cube consists of eight
vertices (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1) .
So the vertex table is formed as shown in Table-1.

Table-1

Vertex Table
V1 0 0 0
V2 1 0 0
V3 1 1 0
V4 0 1 0
V5 0 0 1
V6 1 0 1
V7 1 1 1
V8 0 1 1

There are twelve edges in the cube. Once the vertices are labeled, we can form
the edge table as given in Table-2.
Table-2

Edge Table
E1 V1 V2
E2 V2 V3
E3 V3 V4
E4 V4 V1
E5 V5 V6
E6 V6 V7
E7 V7 V8
E8 V8 V5
E9 V2 V6
E10 V3 V7
E11 V4 V8
E12 V1 V5
Finally, the Polygon-Surface table formed is given in Table-3.

Table-3

Polygon-Surface Table
S1 E1 E2 E3 E4
S2 E1 E9 E5 E12
S3 E2 E10 E6 E9
S4 E3 E11 E7 E10
S5 E4 E12 E8 E11
S6 E5 E6 E7 E8

The cube conceived as a polygonal surface is shown in Fig. 7 below.


E8

E5 V8= (0,1,1)
V5 = (0,0,1) E7

V6= (1,0,1) V7= (1,1,1)


E12
E11 E4
E9 E6 E10

V1= (0,0,0) V4 = (0,1,0)


E1

V2= (1,0,0)1 V3= (1,1,0)


E3

E2
Fig. 7: A cube as a polygonal surface.

In many graphics packages and APIs like OpenGL, one can skip forming the
edge table and can straightaway form polygons using vertices.
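The three tables above can be stored directly as index arrays. The C sketch below (0-based indices and our own array layout, so V1..V8 become 0..7) encodes the cube's vertex, edge and polygon-surface tables:

```c
/* Vertex table: coordinates of V1..V8 (rows 0..7). */
static const double vertex[8][3] = {
    {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},   /* V1..V4, bottom face */
    {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}    /* V5..V8, top face    */
};

/* Edge table: each edge E1..E12 (rows 0..11) is a pair of vertex indices. */
static const int edge[12][2] = {
    {0,1}, {1,2}, {2,3}, {3,0},           /* E1..E4  bottom face    */
    {4,5}, {5,6}, {6,7}, {7,4},           /* E5..E8  top face       */
    {1,5}, {2,6}, {3,7}, {0,4}            /* E9..E12 vertical edges */
};

/* Polygon-surface table: each face S1..S6 (rows 0..5) lists 4 edge
 * indices.  Every edge of a closed polygonal surface such as the cube
 * is shared by exactly two faces. */
static const int surface[6][4] = {
    {0,1,2,3}, {0,8,4,11}, {1,9,5,8},
    {2,10,6,9}, {3,11,7,10}, {4,5,6,7}
};
```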

Apart from polygonal surfaces, polygonal meshes are also used extensively in
3D geometric modelling. A mesh is essentially a polygonal approximation,
where the polygons are hollow, that means you only consider the boundary of
polygons and not the filled polygons. Once a polygonal mesh is formed,
different surface filling patterns within the meshes are used as per the
requirement and/or suitability of the application in which the resulting surface
is being used. You will study a little later a variety of surface primitives that
can be used for such filling applications. Triangular or quadrilateral meshes
are most common. A triangular mesh (or a triangulation) is a polygonal mesh
in which all (or almost all) the polygons are triangles. Similarly, for
quadrilateral meshes all the polygons have to be quadrilaterals.

While constructing a polygonal mesh you must again follow the rules as
described on page 72 for construction of polygonal surfaces.

It is now time to check your understanding of what you have studied above.

Do the exercises no. 10-1, 10-7, 10-8 of Chapter 10 on pages 424-425 of the book.

We next turn to study some of the important curve and surface generation techniques. More precisely, we shall study Bezier and B-spline curves and surfaces with emphasis on their properties suitable for geometric modelling.

4.4 BEZIER AND B-SPLINE CURVES


You may start with reading the following.

Read Sec. 10-8, Chapter 10, pages 347-354 of the book.

We are giving you here another very popular iterative algorithm for generating Bezier curves, known as the de Casteljau algorithm. The algorithm uses repeated linear interpolation and works as follows:

Given n + 1 control points Pk, k = 0, 1, …, n, define

P_k^r(u) = (1 − u) P_k^{r−1}(u) + u P_{k+1}^{r−1}(u),  k = 0, 1, …, n − r and r = 1, 2, …, n,

where P_k^0(u) = Pk, and the curve at the parameter u is evaluated as P(u) = P_0^n(u). Fig. 8 demonstrates the algorithm for n = 3.

Fig. 8: The de Casteljau algorithm for n = 3: successive linear interpolations P_k^1(u) and P_k^2(u) of the control points P0, P1, P2, P3 lead to the curve point P_0^3(u).
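The de Casteljau recurrence translates directly into a short C routine; the sketch below (names are our own) overwrites the control point array level by level:

```c
typedef struct { double x, y; } Pt;

/* Evaluate a Bezier curve of degree n at parameter u by the de Casteljau
 * algorithm (repeated linear interpolation).  ctrl[] holds the n+1
 * control points and is used as scratch space; ctrl[0] ends up holding
 * the curve point P(u) = P_0^n(u). */
static Pt de_casteljau(Pt *ctrl, int n, double u)
{
    for (int r = 1; r <= n; r++)
        for (int k = 0; k <= n - r; k++) {
            ctrl[k].x = (1.0 - u) * ctrl[k].x + u * ctrl[k + 1].x;
            ctrl[k].y = (1.0 - u) * ctrl[k].y + u * ctrl[k + 1].y;
        }
    return ctrl[0];
}
```

For a cubic at u = 1/2 the result agrees with the Bernstein form (P0 + 3P1 + 3P2 + P3)/8.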

Lower degree curves have found tremendous applications in geometric modelling and graphics because of the ease of implementation of such curves in various applications. Before proceeding further you may try the following exercises.

Do the exercise no. 10-14, 10-17, Chapter 10 on page 425 of the book.

Bezier surfaces are the natural extension of Bezier curves. There are two types
of Bezier surfaces (i) Tensor product surfaces (ii) Triangular surfaces. Let us
understand here the construction of tensor product Bézier surfaces.

A parametric surface is expressed in the form P(u, v) = (x(u, v), y(u, v), z(u, v)), where u, v are two independent parameters such that u ∈ [a, b] and v ∈ [c, d]. As an example we express a sphere of radius r with centre at the origin as

S(θ, φ) = r (cos φ cos θ, cos φ sin θ, sin φ), where θ ∈ [0, 2π] and φ ∈ [−π/2, π/2].

Tensor product Bezier surface and composite surface: Consider the square [0, 1]^2 = [0, 1] × [0, 1]. The tensor product Bezier surface of degree (n, m) is defined by the map P : [0, 1]^2 → R^3 given by

P(u, v) = Σ_{j=0}^{m} Σ_{k=0}^{n} p_{j,k} BEZ_{j,m}(v) BEZ_{k,n}(u),

where p_{j,k} ∈ R^3 and BEZ_{k,n}(u) are as defined on page 347 of the book. The points p_{j,k} are called control vertices or control points of the tensor product Bezier surface, and the quadrilateral mesh formed by these points is called the control net of the surface. Notice that for each fixed u the surface reduces to a Bezier curve in v, and vice versa. For example, on the boundary u = 0 of the square, the surface reduces to the Bezier curve

P(0, v) = Σ_{j=0}^{m} p_{j,0} BEZ_{j,m}(v),

because BEZ_{k,n}(0) = 0 for k ≠ 0 and BEZ_{0,n}(0) = 1. Similarly for the other three boundaries.

In order to construct a tensor product composite Bezier surface, all you need to do is join two Bezier surfaces in such a way that the boundary curves match identically. This makes the composite surface continuous. For higher order smoothness, notice that if the boundary curves match then their derivatives (up to any order) in the same direction will also match. So all that you need to ensure for C1 continuity is the continuity of the normal derivative. The normal derivative of a surface at a point means the directional derivative in the direction normal to the tangent plane. To make this point clear, let us consider a composite Bezier surface S(u, v) defined on [0, 2] × [0, 1] with two tensor product Bezier pieces P1 and P2. While P1 is defined on [0, 1] × [0, 1], P2 is defined on [1, 2] × [0, 1]. Let p_{j,k} and q_{j,k} be the control points for P1 and P2, respectively. Notice that we have defined the Bezier surface only for the domain [0, 1]^2. Therefore, to define the Bezier surface corresponding to the square [1, 2] × [0, 1], you need to establish a local parameterization by mapping [1, 2] to [0, 1] using t = u − 1. Then define the piece P2 using the local parameter t as follows:

P2(t, v) = Σ_{j=0}^{m} Σ_{k=0}^{n} q_{j,k} BEZ_{j,m}(v) BEZ_{k,n}(t),  t = u − 1.

Actually, here you have performed a change of domain parameter transformation. The patch P2 remains the same but is now viewed as a map from [0, 1]^2 to R^3, as shown in Fig. 9 on the next page.

Fig. 9: Domain reparametrization of a tensor product Bezier surface: the square [1, 2] × [0, 1] is mapped to [0, 1] × [0, 1] by t = u − 1.

On the common boundary of the two squares at u = 1, you have to match the boundary curves of P1 and P2 as follows:

P1(1, v) = Σ_{j=0}^{m} p_{j,n} BEZ_{j,m}(v) = P2(0, v) = Σ_{j=0}^{m} q_{j,0} BEZ_{j,m}(v).

Fig. 10: A composite tensor product Bezier surface: the pieces P1 and P2, with their control nets, share the control points along the common boundary.

The two boundary curves will match identically if and only if the boundary control points satisfy p_{j,n} = q_{j,0} for j = 0, 1, …, m. This follows because the Bernstein basis polynomials are linearly independent. So all you need to do for continuity of the composite patch is to keep the same control points for the two Bezier surfaces on the common boundary (see Fig. 10). Next, for the continuity of the first derivatives, recall that a Bezier surface is a polynomial surface and hence possesses continuous derivatives of all orders. Therefore we only need to
check for the C1 continuity of the composite derivatives at the joining curve of
the two pieces. This requires matching the partial derivatives of P1 and P2 across the common boundary. In the direction v, since the two pieces share the common boundary curve, their derivatives of all orders in that direction automatically match along it. Therefore you only need to find the conditions under which the partial derivatives of P1 and P2 in the transversal direction u match along the boundary. Using the formula for the first derivative of a Bezier curve, you can write down the u-directional partial derivatives of P1 and P2 as follows:

∂P1/∂u (u, v) = n Σ_{j=0}^{m} Σ_{k=0}^{n−1} (p_{j,k+1} − p_{j,k}) BEZ_{j,m}(v) BEZ_{k,n−1}(u),

∂P2/∂t (t, v) = n Σ_{j=0}^{m} Σ_{k=0}^{n−1} (q_{j,k+1} − q_{j,k}) BEZ_{j,m}(v) BEZ_{k,n−1}(t),  t = u − 1.

Along the common boundary the two derivatives reduce to

∂P1/∂u (1, v) = n Σ_{j=0}^{m} (p_{j,n} − p_{j,n−1}) BEZ_{j,m}(v),

∂P2/∂t (0, v) = n Σ_{j=0}^{m} (q_{j,1} − q_{j,0}) BEZ_{j,m}(v).

Matching the two derivatives, we get the condition

p_{j,n} − p_{j,n−1} = q_{j,1} − q_{j,0},  j = 0, 1, …, m.

The geometric interpretation of this condition (together with p_{j,n} = q_{j,0}) is that the control points p_{j,n−1}, p_{j,n} and q_{j,1} are collinear, and the ratio in which p_{j,n} divides the line segment joining p_{j,n−1} and q_{j,1} is 1 : 1. Fig. 11 below gives a clear picture of the control point positioning near the boundary control polygon for the C1 continuity requirement.

Fig. 11: Red, Green and Blue control points near the boundary fall on a line.
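In code, one common way to enforce this collinearity is to compute the interior control point of the second patch from the other two points, as in the C sketch below (the type and function names are our own; p_in is the last interior control point of the first patch and p_b the shared boundary point):

```c
typedef struct { double x, y, z; } CPt;

/* Enforce the C1 condition across the common boundary of two Bezier
 * patches: place the first interior control point of the second patch so
 * that the shared boundary point p_b is the midpoint of the segment,
 * i.e. q_in = 2*p_b - p_in.  The three points are then collinear and
 * p_b divides the segment in the ratio 1:1. */
static CPt c1_partner(CPt p_in, CPt p_b)
{
    CPt q = { 2*p_b.x - p_in.x, 2*p_b.y - p_in.y, 2*p_b.z - p_in.z };
    return q;
}
```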

There is another notion of continuity, called geometric continuity. Although the idea existed in differential geometry, the concept was introduced into geometric modelling and graphics applications in the mid-1980s. In one variable it essentially means matching of the directions of the derivatives of the adjacent curves. In other words, this means matching the derivatives when the two curves have been defined with respect to arc length as the parameter. This provides flexibility in shape modelling. A lot of research has been done in this direction. Please refer to pages 338-339 of the book for information on geometric continuity.

B-spline curves are piecewise polynomial curves with one or more polynomial pieces and a minimum smoothness requirement. For example, a C2 cubic B-spline curve has one or more cubic pieces which are joined in such a way that the resulting composite curve is twice continuously differentiable.

B-spline curves have a very elegant expression similar to the Bezier
representation with basis function as B-spline functions in place of Bernstein
basis polynomials. B-spline basis functions can be computed iteratively using
the de Boor algorithm. Sec. 10-9 of the book discusses B-spline curves and surfaces in detail, which you may read now.

Read Sec. 10-9, Chapter 10, pages 354-365 of the book.
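As a complement to the reading, the Cox-de Boor recursion for the B-spline basis functions can be sketched in C as follows (a direct recursive form; the book's de Boor algorithm evaluates the same functions iteratively; names are our own):

```c
/* B-spline basis function N_{i,p}(u) over the knot vector t[] via the
 * Cox-de Boor recursion.  Degree-0 case: N_{i,0}(u) = 1 on [t_i, t_{i+1}),
 * else 0.  Zero-length knot spans contribute nothing (0/0 read as 0). */
static double bspline_basis(int i, int p, const double *t, double u)
{
    if (p == 0)
        return (t[i] <= u && u < t[i + 1]) ? 1.0 : 0.0;
    double left = 0.0, right = 0.0;
    if (t[i + p] != t[i])
        left = (u - t[i]) / (t[i + p] - t[i])
               * bspline_basis(i, p - 1, t, u);
    if (t[i + p + 1] != t[i + 1])
        right = (t[i + p + 1] - u) / (t[i + p + 1] - t[i + 1])
                * bspline_basis(i + 1, p - 1, t, u);
    return left + right;
}
```

With uniform knots the basis functions are translated copies of one another and sum to 1 on the valid parameter range (partition of unity, which gives the convex hull property).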

You may try the following exercises.

Do the exercise no. 10-20, 10-21, Chapter 10 on page 425 of the book.

The following exercises will further help you to test your understanding of the
concepts learnt above.

E4) A curve shape has three quadratic Bézier curve segments. The curves
have been joined sequentially so that continuity of the first derivative of
the resulting curve shape is maintained. What is the minimum number of
control points that will be required as input to produce the curve? Justify
your answer.

E5) If the spacing between the knot sequence is uniformly doubled, will the
shape of the resulting B-spline curve change? Justify your answer.

E6) A B-spline curve of degree 5 having all control points on a straight line
segment will lie on the same straight line segment. True or false? Give
reasons in support of your answer.

We now end this unit by giving a summary of what we have covered in it.

4.5 SUMMARY
In this unit, we have covered the following:
1. The concept of clipping 2D objects.
2. Clipping of line segments against a 2D rectangular window.
3. Cohen Sutherland line clipping algorithm: The algorithm uses the
following main steps
 Divide the entire plane into nine disjoint regions using the four
window boundaries of the window.
 Give a unique four bit region code to each region.
 Find out the code of two end points.
 Use the code to check (i) If the line segment can be trivially selected
(ii) trivially rejected (iii) needs further processing.
 Find out the intersection with the window boundaries for line
segments which need further processing.

4. Liang-Barsky line clipping algorithm: The algorithm uses the parametric
form of the line segment. Four inequalities are created using the
parametric form of the line segments. These inequalities are used for
processing the line segment in an efficient way.
5. Three dimensional display methods: Among the simplest three
dimensional surface representations are the polygonal surfaces.
 A polygonal surface is described by vertices, edges and polygons.
 Vertex, edge and polygon tables are formed for the convenience of
organization of the polygonal surfaces. This is done because the
polygonal surface representation of most of the real life surfaces
require thousands of polygons to make a good approximation.
6. For computation of Bézier curves an iterative algorithm known as de
Casteljau algorithm is used. The algorithm uses repeated linear
interpolation.
7. Bézier surfaces are simple extensions of Bézier curves.
8. The tensor product Bézier surface has the parametric form

P(u, v) = Σ_{j=0}^{m} Σ_{k=0}^{n} p_{j,k} BEZ_{j,m}(v) BEZ_{k,n}(u),

where (u, v) ∈ [0, 1]^2, the p_{j,k} ∈ R^3 are the Bézier control points, and BEZ_{k,n}(u) is the standard k-th Bernstein polynomial of degree n.
9. Conditions for smoothness require that the Bézier control points of the
adjacent surfaces satisfy certain collinearity constraints.
10. B-spline curves are piecewise smooth polynomial curves.
• B-spline curves are defined over an interval which has been
partitioned into sub-intervals. On each sub-interval the B-spline curve
reduces to a polynomial curve. The curve pieces are joined in such
a way that the composite curve satisfies certain smoothness
conditions specified in terms of matching of derivatives of certain
orders. The points defining the partition of the interval are called knots
or knot points, because at each such common domain point two
polynomial curve segments are joined to make the composite curve.
 B-spline curves as well as blending functions are computed using
the iterative de Boor algorithm.
• B-splines satisfy important properties that make them suitable for
geometric modelling in computer graphics. Some of these are:
(i) local control, (ii) smoothness, (iii) the degree of the spline curve
does not depend on the number of control points, (iv) the convex hull
property, (v) convenient blending functions.
 Uniform B-splines are B-spline curves with uniform spacing
between the knots.
 Uniform B-splines give periodic blending functions. This means all
blending functions are translated versions of a single B-spline.
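Several of the items summarised above can be made concrete in code. As one illustration of item 6, here is a small sketch of the de Casteljau algorithm in C for a single coordinate of the control points; the function name and the fixed scratch-buffer size are our own choices for illustration, not anything fixed by the book.

```c
#include <assert.h>

/* de Casteljau: evaluate one coordinate of a Bezier curve of degree n at
   parameter u by repeated linear interpolation of the control values.
   Assumes n < 16 so the scratch buffer is large enough. */
double de_casteljau(const double *ctrl, int n, double u)
{
    double p[16];
    for (int i = 0; i <= n; i++)
        p[i] = ctrl[i];
    for (int r = 1; r <= n; r++)          /* n rounds of interpolation */
        for (int i = 0; i <= n - r; i++)
            p[i] = (1.0 - u) * p[i] + u * p[i + 1];
    return p[0];                          /* the point on the curve */
}
```

For the quadratic control values 0, 1, 0 this evaluates 2u(1 − u), the middle quadratic Bernstein blending function.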

Clipping and 3D Primitives
4.6 SOLUTIONS/ANSWERS
Exercises on pages 68-69 of this unit.

E1) Proceed step by step through the algorithm. Let P1 = (0, 0) and
P2 = (4, 5).
i) Find code(P1) = 0101 and code(P2) = 1000, where code(P) means the
region code of point P.
ii) code(P1) ≠ 0 and code(P2) ≠ 0, but code(P1) AND code(P2) = 0.
Therefore the line segment requires further processing.
iii) For the left end point P1, first find the intersection with the left
boundary. The intersection point is (1, 5/4), which is on the
window boundary. Accept the point and update P1 to this point, so that
now P1 = (1, 5/4). You need not process it further for an intersection
with the bottom window boundary.
iv) For the right end point, find the intersection with the top
window boundary and obtain the point (16/5, 4). Update P2 to this
point on the top boundary of the window.
v) Now draw the clipped line segment from P1 to P2.

E2) Label the edges as follows:

e1: (0, 0) to (4, 5), e2: (4, 5) to (6, 1), e3: (6, 1) to (0, 0). Proceed as in E1).
i) The edge e1 is already processed in the previous solution E1).
ii) The edge e2 has two end points, both with non-zero codes.
Processing in the same way as in E1), you need to calculate the
intersections with the top and right window boundaries, and finally
obtain the updated and clipped edge e2 as follows:
e2: (9/2, 4) to (5, 3).
iii) e3: totally clipped, since none of the intersection points lies on
the rectangle boundary.
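The region codes used in E1) and E2) can be computed with a handful of comparisons. The following C sketch uses the window of these exercises, x in [1, 5] and y in [1, 4], passed as parameters; the bit assignment (left, right, below, above) matches the 4-bit codes of the unit, and the function name is our own.

```c
#include <assert.h>

enum { LEFT = 1, RIGHT = 2, BELOW = 4, ABOVE = 8 };

/* Cohen-Sutherland 4-bit region code of the point (x, y) with respect to
   the clipping window [xwmin, xwmax] x [ywmin, ywmax]. */
int region_code(double x, double y,
                double xwmin, double xwmax, double ywmin, double ywmax)
{
    int code = 0;
    if (x < xwmin)      code |= LEFT;
    else if (x > xwmax) code |= RIGHT;
    if (y < ywmin)      code |= BELOW;
    else if (y > ywmax) code |= ABOVE;
    return code;
}
```

For P1 = (0, 0) this gives LEFT|BELOW, i.e. the binary code 0101, and for P2 = (4, 5) it gives ABOVE, i.e. 1000, exactly as in E1); the AND of the two codes is 0, so the segment needs further processing.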

Exercises on page 71 of this unit.

E3) The parametric form of the line segment is x = 4u, y = 5u, 0 ≤ u ≤ 1.

i) The four inequalities are obtained as follows:
−4u ≤ −1  (k = 1)
4u ≤ 5   (k = 2)
−5u ≤ −1  (k = 3)
5u ≤ 4   (k = 4).
ii) Initialize u1 = 0 and u2 = 1.
iii) Process the line against the first window boundary (k = 1): p1 = −4 < 0, so
the infinite extension of the line segment proceeds from outside to
inside. Further, r1 = q1/p1 = 1/4. Hence you need to update u1 to
r1 = 1/4.
iv) Second window boundary (k = 2): p2 = 4 > 0, so the infinite
extension of the line segment proceeds from inside to outside.
Calculate r2 = q2/p2 = 5/4 > 1. No need to update u2.
v) Third window boundary (k = 3): p3 = −5 < 0. Further,
r3 = q3/p3 = 1/5. Since the previous value u1 = 1/4 exceeds r3, there is no
need to update u1. (Recall that u1 = max{0, ri : pi < 0}.)
vi) Fourth window boundary (k = 4): p4 = 5 > 0. Further,
r4 = q4/p4 = 4/5. Update u2 = 4/5. (Recall that
u2 = min{1, ri : pi > 0}.)
vii) The updated values u1 = 1/4 and u2 = 4/5 give you the
section of the line segment that lies inside the window. Plot
the line segment for values of u such that 1/4 ≤ u ≤ 4/5.
viii) Proceed in a similar way for the other two edges.
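The steps of E3) can be collected into a single routine. Below is a hedged C sketch of the Liang Barsky computation of u1 and u2; the function name and in/out parameter style are our own choices, not the book's.

```c
#include <assert.h>
#include <math.h>

/* Liang-Barsky: clip the segment (x1,y1)-(x2,y2) against the window.
   On success returns 1 and stores the parameter range [*u1, *u2] of the
   visible part; returns 0 when the segment is completely outside. */
int liang_barsky(double x1, double y1, double x2, double y2,
                 double xwmin, double xwmax, double ywmin, double ywmax,
                 double *u1, double *u2)
{
    double dx = x2 - x1, dy = y2 - y1;
    double p[4] = { -dx, dx, -dy, dy };                        /* p1..p4 */
    double q[4] = { x1 - xwmin, xwmax - x1, y1 - ywmin, ywmax - y1 };
    *u1 = 0.0;
    *u2 = 1.0;
    for (int k = 0; k < 4; k++) {
        if (p[k] == 0.0) {            /* segment parallel to this boundary */
            if (q[k] < 0.0) return 0; /* and completely outside it */
        } else {
            double r = q[k] / p[k];
            if (p[k] < 0.0) {         /* outside -> inside: u1 = max{0, r} */
                if (r > *u1) *u1 = r;
            } else {                  /* inside -> outside: u2 = min{1, r} */
                if (r < *u2) *u2 = r;
            }
        }
    }
    return *u1 <= *u2;
}
```

For the edge (0, 0) to (4, 5) against the window [1, 5] x [1, 4], this yields u1 = 1/4 and u2 = 4/5.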

Exercises given on pages 268-269 of the book.

6-10 Consider line segments with three different slopes. Let the line
segments be labelled L1, L2, L3, as shown in Fig. 12. Denote an
addition/subtraction operation by A and a multiplication/division operation
by M. Use the notations given in the book for the subsequent discussion.

[Fig. 12: The nine clipping regions formed by the boundaries x = xwmin,
x = xwmax, y = ywmin and y = ywmax, with 4-bit codes 0000 (inside),
0001 (left), 0010 (right), 0100 (below) and 1000 (above). L1 lies inside
the window, L2 in the region below it and L3 in the region above it.]

Fig. 12.

A comparison for each line segment is given below.

Line 1: Trivial Reject Case

Cohen Sutherland Algorithm: 8 comparisons and 4 AND operations. The
comparisons compute the code of each end point from the point's location
with respect to the boundaries; no arithmetic operation is needed. You only
need the AND operation of the endpoint codes to reject the line segment.

Liang Barsky Algorithm: 4A + 2M. 4A to compute the parameters
pk, qk, k = 1, 2, 3, 4, then 2M to compute qk/pk, k = 1, 2. This is in
addition to the comparison operations.

Line 2: AND operation of endpoint codes = 0

Cohen Sutherland Algorithm: 4 comparisons and 4 AND operations, then
2A + 2M to calculate the intersection point x = x1 + (Δx/Δy)(y − y1).

Liang Barsky Algorithm: 4A + 4M. 4A to compute the parameters
pk, qk, k = 1, 2, 3, 4, then 4M to compute qk/pk for each k. This is in
addition to the comparison operations.

Line 3: Line to be rejected after intersection calculations

Cohen Sutherland Algorithm: 6A + 6M, to calculate one intersection point
for each non-zero bit of the codes of the endpoints. After this the line
will be rejected.

Liang Barsky Algorithm: 4A + 4M. 4A to compute the parameters
pk, qk, k = 1, 2, 3, 4, then 4M to compute qk/pk for each k. This is in
addition to the comparison operations.

Compare the arithmetic operations for the rest of the line segments in a
similar way.

6-21 For the sake of simplicity, assume that both the ellipse and the window
have axes/edges parallel to the coordinate axes. Suppose the ellipse E
has centre ( xc, yc) with major and minor axes of lengths 2rx and 2ry
respectively. Further, let the window W have two extreme corners
( xw min, yw min) and ( xw max, yw max) as given in the book. Step by
step algorithm, based on which you may write a routine for clipping an
ellipse is as follows:

Define the bounding box R of E as the smallest rectangle that contains
E. It is the rectangle with two extreme corners (xc − rx, yc − ry) and
(xc + rx, yc + ry).

[Fig. 13: Ellipses A to G in various positions relative to the window:
A completely outside, B completely inside, and the others crossing or
near the window boundaries.]

Fig. 13.

Step 1: Trivial Reject Case: (Ellipse A in Fig. 13)


Check if R is disjoint from W. This amounts to performing the
following test: if (xc + rx < xwmin or xc − rx > xwmax), E does not
overlap with W; reject E, as it is invisible, and stop. Else if
(yc + ry < ywmin or yc − ry > ywmax), E does not overlap with W;
reject E and stop. Else go to the next step.
Step 2: Trivial Select Case: (Ellipse B in Fig. 13) This is when R is
inside W. To check this, you need to perform the following test:

if (xc − rx ≥ xwmin and xc + rx ≤ xwmax and yc − ry ≥ ywmin and
yc + ry ≤ ywmax), E is completely inside W. Send E to the frame
buffer for plotting. Else, go to the next step.

Step 3: E may be partially visible (all ellipses except A, B and C in
Fig. 13). You need to calculate intersections with the boundaries.
Process each window boundary in turn.

Intersection with left boundary: if xc − rx < xwmin, E may cross the
left boundary. In this case, find the intersection with the left boundary
by solving the following pair of equations:

ry²(x − xc)² + rx²(y − yc)² = rx²ry²;  x = xwmin.

Obtain the two solutions of this pair of equations.


i) If both the points are inside (Ellipse D in Fig. 13), send to the frame
buffer the portion of the ellipse that has these two points on the
boundary. Remember that here you need to apply a test to find out
which portion of the ellipse is inside and which is outside. There
are several ways of finding the selected arc of the ellipse. One
is to find whether the intersection point (x, y) lies in Region 1 or
Region 2 of the ellipse: measuring x and y from the ellipse centre,
(x, y) is in Region 1 if ry²x < rx²y. This will enable you to decide
which points on the ellipse are sent to the frame buffer.
ii) If one point is in W (Ellipse E in Fig. 13), select this point on the
ellipse as the first point of intersection. Otherwise the ellipse may be
outside the window boundary (Ellipses F, G in Fig. 13); keep the ellipse
for the next boundary.

Process in a similar way for other window boundaries. If you get two
intersection points of the ellipse with window boundaries, plot your
ellipse accordingly. Else, reject it.

Important Note: Make sure before sending the selected segment of the
ellipse to the frame buffer, that you are sending the right portion and not
the clipped away portion. Using the above steps, write your routine.
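Steps 1 and 2 of the routine can be written down directly. The following C sketch performs only the bounding-box trivial reject/select tests; the function name and return convention are our own illustration.

```c
#include <assert.h>

/* Bounding-box tests for an axis-aligned ellipse with centre (xc, yc) and
   semi-axes rx, ry against the window [xwmin, xwmax] x [ywmin, ywmax].
   Returns -1: trivially rejected, 1: trivially selected, 0: needs
   intersection calculations. */
int ellipse_trivial_test(double xc, double yc, double rx, double ry,
                         double xwmin, double xwmax,
                         double ywmin, double ywmax)
{
    if (xc + rx < xwmin || xc - rx > xwmax ||
        yc + ry < ywmin || yc - ry > ywmax)
        return -1;                       /* Step 1: invisible */
    if (xc - rx >= xwmin && xc + rx <= xwmax &&
        yc - ry >= ywmin && yc + ry <= ywmax)
        return 1;                        /* Step 2: completely inside */
    return 0;                            /* Step 3 required */
}
```

Only when the result is 0 do the ellipse-boundary intersection equations of Step 3 need to be solved.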

6-22 All or none character clipping

i) Define the rectangular window boundaries.
ii) Define N, the number of characters in the given string.
iii) Define the width w of each character in the string.
iv) Define the bounding range for each character in the string using its
lower left corner. For example, if the character 'G' in the string
OPENGL is at raster position (100, 50), then its bounding range
would be (100, 50) to (100 + w, 50) (see Fig. 14).
v) Generate the 4-bit code (above, below, right, left) for the lower left
corner LL(i) of the i-th character, i = 1, 2, …, N.
vi) Initialize i = 1;
vii) while (i <= N)
        if (code(LL(i)) != 0)
            skip the character;
        else
            display the character;
        i = i + 1;
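A C sketch of the all-or-none strategy follows. Character cells are modelled as w by h rectangles starting at (x0, y0); all names and the cell model are our own illustration, not taken from the book.

```c
#include <assert.h>
#include <string.h>

/* All-or-none character clipping: copy to out only those characters whose
   whole bounding cell lies inside the window. Returns the number kept. */
int clip_string(const char *s, double x0, double y0, double w, double h,
                double xwmin, double xwmax, double ywmin, double ywmax,
                char *out)
{
    int n = 0;
    for (int i = 0; s[i] != '\0'; i++) {
        double left = x0 + i * w, right = left + w;
        if (left >= xwmin && right <= xwmax &&
            y0 >= ywmin && y0 + h <= ywmax)
            out[n++] = s[i];             /* cell fully inside: keep */
    }
    out[n] = '\0';
    return n;
}
```

With unit-sized cells and a window whose left boundary cuts through the letter P, the string OPENGL of Fig. 14 reduces to ENGL.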

[Fig. 14: All-or-none character clipping of the string OPENGL: before
clipping, the characters O and P cross the left window boundary; after
clipping, only E N G L are displayed.]

Fig. 14.

Exercises given on pages 424-425 of the book.

10-1 Let the cube be labelled as shown in Fig. 15.

[Fig. 15: Unit cube with vertices V1 = (0, 0, 0), V2 = (1, 0, 0),
V3 = (1, 1, 0), V4 = (0, 1, 0), V5 = (0, 0, 1), V6 = (1, 0, 1),
V7 = (1, 1, 1), V8 = (0, 1, 1).]

Fig. 15.

Vertex Table      Edge Table        Polygon Surface Table
V1: 0,0,0         E1: V1,V2         S1: E4,E3,E2,E1
V2: 1,0,0         E2: V2,V3         S2: E9,E10,E11,E12
V3: 1,1,0         E3: V3,V4         S3: E1,E6,E9,E5
V4: 0,1,0         E4: V4,V1         S4: E2,E7,E10,E6
V5: 0,0,1         E5: V1,V5         S5: E3,E8,E11,E7
V6: 1,0,1         E6: V2,V6         S6: E4,E5,E12,E8
V7: 1,1,1         E7: V3,V7
V8: 0,1,1         E8: V4,V8
                  E9: V5,V6
                  E10: V6,V7
                  E11: V7,V8
                  E12: V8,V5
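The three tables can be stored as parallel arrays indexed from 0. The following C sketch encodes the cube of Fig. 15; indices are shifted down by one from the labels V1..V8, E1..E12, S1..S6, and the struct names are our own.

```c
#include <assert.h>

typedef struct { double x, y, z; } Vertex;
typedef struct { int v[2]; } Edge;       /* indices into the vertex table */
typedef struct { int e[4]; } Surface;    /* indices into the edge table   */

static const Vertex vtab[8] = {
    {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},  /* V1..V4 */
    {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}   /* V5..V8 */
};
static const Edge etab[12] = {
    {{0,1}}, {{1,2}}, {{2,3}}, {{3,0}},  /* E1..E4  */
    {{0,4}}, {{1,5}}, {{2,6}}, {{3,7}},  /* E5..E8  */
    {{4,5}}, {{5,6}}, {{6,7}}, {{7,4}}   /* E9..E12 */
};
static const Surface stab[6] = {
    {{3,2,1,0}}, {{8,9,10,11}}, {{0,5,8,4}},
    {{1,6,9,5}}, {{2,7,10,6}},  {{3,4,11,7}}
};
```

Following an edge index of a surface into the edge table, and then a vertex index into the vertex table, recovers the coordinates of that surface's corners.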

10-7 Assume that the coordinate system is a right handed coordinate system.
Let O be an object having N polygonal surfaces Si, i = 1 to N, in its
definition. Each polygonal surface lies on a plane Pi with equation

Ai x + Bi y + Ci z + Di = 0,  i = 1 to N.

Let P = (x, y, z) be the given point.

A pseudo code is given below.

i) Boolean IN = true;
ii) int i = 1;
iii) while (i <= N)
        if (Ai*x + Bi*y + Ci*z + Di > 0)
        { IN = false; break; }
        i = i + 1;
iv) if (IN) print "point is inside";
    else print "point is outside";
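The pseudo code translates almost directly into C. The sketch below tests a point against N plane equations; the unit-cube planes used in the check are an illustrative assumption (outward normals, so inside points give A x + B y + C z + D ≤ 0).

```c
#include <assert.h>

typedef struct { double A, B, C, D; } Plane;

/* Returns 1 if (x, y, z) satisfies A*x + B*y + C*z + D <= 0 for every
   plane, i.e. lies inside (or on) the polyhedron; 0 otherwise. */
int point_inside(const Plane *pl, int n, double x, double y, double z)
{
    for (int i = 0; i < n; i++)
        if (pl[i].A * x + pl[i].B * y + pl[i].C * z + pl[i].D > 0.0)
            return 0;                 /* on the outside of plane i */
    return 1;
}

/* The six faces of the unit cube with outward normals (illustration). */
static const Plane cube[6] = {
    {-1, 0, 0, 0}, {1, 0, 0, -1},
    {0, -1, 0, 0}, {0, 1, 0, -1},
    {0, 0, -1, 0}, {0, 0, 1, -1}
};
```

Points on a boundary plane give exactly 0 and so count as inside; flip the comparison to >= if boundary points should be excluded.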

10-8 In the left handed coordinate system, z will be replaced by −z in
Exercise 10-7 above.

10-14 Quadratic Bézier blending functions (n = 2). The recursion is

BEZ_{k,n}(u) = (1 − u) BEZ_{k,n−1}(u) + u BEZ_{k−1,n−1}(u),

with BEZ_{0,1}(u) = 1 − u, BEZ_{1,1}(u) = u, and BEZ_{k,n}(u) = 0
whenever k < 0 or k > n.

k = 0: BEZ_{0,2}(u) = (1 − u) BEZ_{0,1}(u) = (1 − u)².

k = 1: BEZ_{1,2}(u) = (1 − u) BEZ_{1,1}(u) + u BEZ_{0,1}(u)
     = (1 − u)u + u(1 − u) = 2u(1 − u).

k = 2: BEZ_{2,2}(u) = u BEZ_{1,1}(u) = u².

The maximum of BEZ_{0,2} is at u = 0, while that of BEZ_{2,2} is at u = 1.
Further, the maximum of BEZ_{1,2} is at u = 1/2. Write a program to plot
these functions.
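The recursion above can be coded directly for any degree. A small C sketch (the function name is our own):

```c
#include <assert.h>

/* Bernstein polynomial BEZ_{k,n}(u) via the Bezier blending recursion
   BEZ_{k,n} = (1-u) BEZ_{k,n-1} + u BEZ_{k-1,n-1}, with BEZ_{0,0} = 1
   and BEZ_{k,n} = 0 for k < 0 or k > n. */
double bez(int k, int n, double u)
{
    if (k < 0 || k > n) return 0.0;
    if (n == 0) return 1.0;
    return (1.0 - u) * bez(k, n - 1, u) + u * bez(k - 1, n - 1, u);
}
```

For n = 2 this reproduces (1 − u)², 2u(1 − u) and u², with bez(1, 2, u) peaking at u = 1/2.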

10-17 The cubic Bézier curve segments must meet at the joints in such a way
that first order continuity is maintained. Suppose you need to plot N
segments, each of which is a cubic Bézier curve. Then each segment
has 4 control points. Let the i-th curve segment s_i have control points
P_{3i}, P_{3i+1}, P_{3i+2}, P_{3i+3}, i = 0, 1, …, N − 1. To ensure
continuity of the first derivative, use formula (10-47) on page 350 of the
book with n = 3, applied between consecutive pieces. When the first
piece finishes, its local parameter value is 1, while the same point is the
starting point of the second piece, where the local parameter is 0. Hence
the continuity of the derivative between the first and second pieces is
guaranteed if s′_0(1) = s′_1(0). This essentially means

(P3 − P2) = (P4 − P3),
or
P3 = (P2 + P4)/2.

Similarly, continuity of the first derivative is guaranteed between pieces
i and i + 1, provided
P_{3i+3} = (P_{3i+2} + P_{3i+4})/2.

Write your program using the program for Bézier curves.

10-20 Use formula (10-55) on page 355 of the book recursively to compute
higher degree blending functions. For d = 2, it goes as follows:

B_{k,2}(u) = ((u − u_k)/(u_{k+1} − u_k)) B_{k,1}(u)
           + ((u_{k+2} − u)/(u_{k+2} − u_{k+1})) B_{k+1,1}(u).

Since B_{k,1}(u) = 1 on u ∈ [u_k, u_{k+1}) and is 0 otherwise, the above
formula gives the following form of B_{k,2}(u):

B_{k,2}(u) = (u − u_k)/(u_{k+1} − u_k)         for u ∈ [u_k, u_{k+1}),
B_{k,2}(u) = (u_{k+2} − u)/(u_{k+2} − u_{k+1})  for u ∈ [u_{k+1}, u_{k+2}),
B_{k,2}(u) = 0                                  otherwise.

These are the well known hat functions. Proceeding in this way, calculate
for d = 3 and d = 4.
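The same recursion can be evaluated numerically. Below is a C sketch of the Cox-de Boor recursion with order d as in formula (10-55); an empty knot interval's term is treated as 0, and the function name is our own.

```c
#include <assert.h>

/* B-spline blending function B_{k,d}(u) of order d over knot vector t. */
double bspline(int k, int d, double u, const double *t)
{
    if (d == 1)
        return (t[k] <= u && u < t[k + 1]) ? 1.0 : 0.0;
    double a = 0.0, b = 0.0;
    if (t[k + d - 1] != t[k])           /* skip degenerate 0/0 terms */
        a = (u - t[k]) / (t[k + d - 1] - t[k]) * bspline(k, d - 1, u, t);
    if (t[k + d] != t[k + 1])
        b = (t[k + d] - u) / (t[k + d] - t[k + 1]) * bspline(k + 1, d - 1, u, t);
    return a + b;
}
```

With uniform knots 0, 1, 2, ... this reproduces the hat function above: B_{0,2} rises on [0, 1), falls on [1, 2) and is 0 elsewhere.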

10-21 Use the result of Exercise 10-20 and formula (10-55) of the book
again.

Exercises on page 79 of this unit.

E4) You need 5 control points to completely define the curve (the two joints
are then fixed by the continuity conditions). Assume that the
curve C has three segments C1, C2, C3. Each segment is a
quadratic Bézier curve and requires three control points. Since the first
derivative of C is continuous, consecutive segments must join, so the
initial point of the second piece is the last point of the first. Let the
control points for the first piece be P0, P1, P2, let C2 have
control points P2, P3, P4 and let C3 have P4, P5, P6. Then,
by virtue of the continuity of the derivative and formula (10-47) on page
350 of the book, you get P2 = (P1 + P3)/2 and P4 = (P3 + P5)/2.

E5) No change in the curve. The computation of B-spline blending functions
uses ratios only, and these remain unchanged.

E6) True. Curve points are convex combinations of the control points,
because the blending functions are non-negative and sum to 1 (see
Eqn. (10-56) on page 356 of the book). A convex combination of
collinear points lies on the same straight line segment.

4.7 PRACTICAL EXERCISE


Session 6

1. Implement the Cohen Sutherland and Liang Barsky line clipping
algorithms in the C language. Test your code on line segments with end
points falling in various regions.

***

UNIT 5 THREE DIMENSIONAL TRANSFORMATIONS
Structure Page No
5.1 Introduction 89
Objectives
5.2 3D Primitive and Composite Transformations 89
5.3 Three-Dimensional Viewing 93
5.4 Projections 96
5.5 View Volumes and General Projection Transformations 99
5.6 Summary 102
5.7 Solutions/Answers 104
5.8 Practical Exercises 109

5.1 INTRODUCTION
3D geometric transformations are used extensively in object modelling and
rendering. 2D transformations extend naturally to 3D in most cases. Except
for rotation, where you need to specify the axis of rotation, all other
geometric transformations carry over easily.

For displaying a 3D scene on a 2D display, one requires a sequence of


transformations. First you need to fix a position from where you would like to
view the scene. This is called camera or eye position. From that particular
orientation, you can then plan what to display and where to display. To model
the view angle and orientation, you need to apply a sequence of such 3D
primitive geometric transformations. Further, you require certain projection
transformation to display the scene on a 2D viewport. These geometric and
projective transformations are discussed in Chapters 11 and 12 of the book.

Objectives
After reading this unit you should be able to
 explain basic 3D transformations–translation, rotation, scaling, shear and
reflections–applied to objects in space;
 explain how to transform between two coordinate systems;
 explain how to apply basic geometric transformations on objects;
 explain the concepts used in rendering and projecting a 3D scene onto a
2D display;
 build model view matrix capturing the scene from camera position;
 implement projection transformations;
 distinguish between parallel and perspective projections.

5.2 3D PRIMITIVE AND COMPOSITE TRANSFORMATIONS
In Unit 3, you have studied and implemented 2D geometric transformations for
object definitions in two dimensions. These transformations can be extended to
3D objects by including considerations for the z coordinate. In this unit, you
will study certain methods to implement such transformations. These methods
are discussed in Chapters 11 and 12 of the book. To know the details about
these methods you may start with reading the following:

Read Secs 11-1 to 11-7, Chapter 11, pages 428-449 of the book.

You must have noticed here that translation is as simple as in 2D. Rotation,
however, requires a little more effort in 3D. Standard rotations about the
three coordinate axes are simple to perform. To apply rotation about an
arbitrary axis, you need a composite transformation, while scaling,
shear and reflections generalize to 3D in a natural way.

For further understanding of the general 3D transformation about an arbitrary


axis, let us consider the following examples.

Example 1: Rotate an object through an angle of 45 degrees, with the centre
of the object at (1, 2, 3). The axis of rotation is given by the direction
vector (1, 1, 1) passing through (1, 0, 0).

Solution: Refer to the book for the notations. Here P = (1, 2, 3) and
V = (1, 1, 1), so the unit axis vector is

u = (1/√3)(1, 1, 1).

Its projection on the yz-plane is u′ = (1/√3)(0, 1, 1), and

cos α = (u′ · u_z)/(|u′| |u_z|) = 1/√2,
u′ × u_z = |u′| |u_z| sin α u_x, which gives sin α = 1/√2.

(Remember that the value of cos α can only provide the magnitude of sin α
if you use the trigonometric formula sin α = ±√(1 − cos²α). Hence you need
the cross product computation above to know the sign of sin α as well.)

Therefore

           | 1    0      0    0 |
R_x(α) =   | 0  1/√2  −1/√2  0 |
           | 0  1/√2   1/√2  0 |
           | 0    0      0    1 |

Further, rotating u about the x-axis through α brings it into the xz-plane:

u″ = (1/√3)(1, 0, √2),

cos β = (u″ · u_z)/(|u″| |u_z|) = √2/√3,  sin β = 1/√3.

Thus

           | √2/√3   0  −1/√3  0 |
R_y(β) =   |   0     1    0    0 |
           | 1/√3    0  √2/√3  0 |
           |   0     0    0    1 |

So, the object will be rotated with its centre P now at the position P′,
where P′ is given by the following concatenation of matrices:

P′ = T(1, 0, 0) R_x(−α) R_y(−β) R_z(45°) R_y(β) R_x(α) T(−1, 0, 0) P.

Now you can calculate the coordinates of the changed position of P by


multiplying by the matrices in the above order.

To explain this sequence order, we begin with the rightmost matrix. Since
the axis of rotation passes through (1, 0, 0), you first translate by the
vector (−1, 0, 0) to make the axis pass through the origin (T(−1, 0, 0)).
Then, to bring the rotation axis into the xz-plane, you rotate the vector u
about the x-axis through the angle α (R_x(α)). Next, to align the vector
with the z-axis, you rotate it about the y-axis through the angle β
(R_y(β)). Then you rotate the object, with its centre at P, about the
z-axis (which is now the axis of rotation) by an angle of 45°. Finally,
apply the inverse transformations one by one in the reverse order to bring
the scene back to its original position and orientation.

***

Example 2: An object has to be rotated about an axis passing through the


points (1, 0, 1), (1, 3, 1) . What will be the resulting rotation matrix?

Solution: The axis is parallel to y axis. All that you need to do is to translate
the axis so that it passes through the origin, apply rotation about y axis and then
translate the axis back to the original position. The sequence of matrix
transformations is given by

1 0 0 1  cos  0 sin  0 1 0 0  1
0 1 0 0  0 1 0 0 0 1 0 0 
R  
0 0 1 1  sin  0 cos  0 0 0 1  1
     
0 0 0 1  0 0 0 1 0 0 0 1

***

Example 3: Scale a sphere centered on the point (1, 2, 3) with radius 1, so that
the new sphere has the same centre with radius 2.

Solution: Translate the sphere so that its centre is at the origin. Scale
uniformly in all the coordinates by a factor of 2, and then translate the sphere
back to its original position. The transformation will be given by taking the
concatenation of following matrices.

1 0 0 1 2 0 0 0 1 0 0  1
0 1 0 2 0 2 0 0 0 1 0  2
R  
0 0 1 3 0 0 2 0 0 0 1  3
     
0 0 0 1 0 0 0 1 0 0 0 1 

***
You may now try the following exercises.

E1) Transform the vector (1, 1, 1) so that it aligns with the vector
(√2, 0, −1).

E2) A scene is modelled in 3D. Write the sequence of matrices that will
scale the scene by a factor of 2 in the y direction while leaving the
position of (2, −1, 1) unchanged.

E3) Rotate a scene modelled in 3D by an angle of θ degrees such that the
axis of rotation is along the vector (1, 1, 0) and the origin remains
unchanged. Define the direction of positive rotation as clockwise when
one is looking towards the origin from the direction opposite to the
vector (i.e., from the direction (−1, −1, 0)). Assume a right handed
coordinate system. Write the sequence of transformations to implement
the rotation.

You may also try the following exercises from the book.

Do the exercises no. 11-1, 11-4, 11-9, Chapter 11 on page 450 of the book.

You have studied and implemented OpenGL functions for 2D transformations.


As you know, in OpenGL every entity is treated as a 3D entity, and the
transformation functions are built with this philosophy. Hence the
OpenGL functions that you have used for 2D transformations are also used for
3D geometric transformations. The following OpenGL routines can be used for
applying 3D primitive transformations.
 glScaled(sx,sy,sz): Scale by sx in x coordinate, by sy in
y-coordinate and by sz in z-coordinate direction.

 glTranslated(dx,dy,dz): Translate by dx in x-direction, by dy


in y-direction and by dz in z direction.

 glRotated(angle,vx,vy,vz): Rotate by 'angle' degrees
about the vector (vx, vy, vz). If the vector is not a unit vector,
the routine automatically normalizes it.
OpenGL transformation matrices are post-multiplied and you have to use
column major matrices. Note that post-multiplying with column-major
matrices produces the same result as pre-multiplying with row-major matrices.
You can use any notation, as long as it is clearly stated.
OpenGL matrices are 16-value arrays with base vectors laid out contiguously
in memory and the indices are numbered from 1 to 16. The translation
components occupy the 13th, 14th and 15th elements of the array. To explain
further the difference between row-major and column-major order, we give the
following example. Consider the matrix A:

    | 1 2 3 4 |
A = | 2 3 4 1 |
    | 3 4 1 2 |
    | 0 0 0 1 |

In row-major storage A is accessed in linear memory in such a way that rows


are stored one after the other. This means A will be accessed as follows –

[1 2 3 4 2 3 4 1 3 4 1 2 0 0 0 1]

The same matrix viewed in column major order will be accessed as follows –

[1 2 3 0 2 3 4 0 3 4 1 0 4 1 2 1]
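The column-major indexing rule can be captured in one small helper. The following C sketch (0-based, names our own) fetches element (row, column) of a 4×4 matrix stored the way OpenGL stores it:

```c
#include <assert.h>

/* Column-major access: element (row r, column c) of a 4x4 matrix stored
   in a 16-element array sits at index c*4 + r (0-based). */
double mat_get(const double m[16], int r, int c)
{
    return m[c * 4 + r];
}
```

On the column-major array of A above, mat_get recovers the row-major entries; in particular, the last column occupies indices 12 to 15 (0-based), which is why a translation's components live in the 13th, 14th and 15th elements (1-based).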

Essentially this means that you have to be careful while applying OpenGL
matrix transformations. For example, if you call the following functions in
the order given below:
glScaled(1.0, 1.0, 1.0); // S

glRotatef(45.0, 0.0, 1.0, 0.0); // R

glTranslatef(3.0, 4.0, 1.0); // T

the transformations occur in the reverse order


S * R * T * P = S * (R * (T * P)),
where P is the point on which you are applying the transformation.
But before using these transformations, you need to know about the concept of
viewing and related operations. So, we move to the next section where you will
learn about setting up a view direction and transforming your coordinate
system accordingly.

5.3 THREE-DIMENSIONAL VIEWING


Three dimensional objects are created using modelling coordinate system. The
modelled objects are then placed in locations specified in the scene with
respect to the world coordinate system. You can view the scene from any
orientation and this is what you call as the view of the scene. Viewing
coordinates are the coordinates generated with respect to the frame of reference
of the viewer, or you may call it the frame of reference created by the position
of camera. Viewing transformation is a composite transformation, which
transforms the scene from world coordinate system to viewing coordinate
system. To understand what viewing coordinate system and the viewing
transformation are, let us consider a simple example. Suppose you have a
vacant room and you want to organize its interior to make it an office. Chairs,
tables, shelves, etc. are created somewhere else (in some factory) using some
coordinate system. Let that be treated as a modelling coordinate system. Then
these items will be brought in the room (Transform to locations in the room).
Finally every object will be placed in appropriate position. This means the
modelled objects are now placed in the world coordinate system. Now you
view it from various angles and then become satisfied with the arrangements.
When you are watching it from various angles, each time you change your
position, the relative positions of the objects in the room also change. Objects
closer to you become farther when you go on the other side of the room. This is
viewing coordinate system and your eyes make an automatic transformation of
objects from world coordinate system to viewing coordinate system.

How to transform from world coordinate system to viewing coordinate system?


To get an answer to this question read the following.

Read Secs. 12-1 to 12-2, Chapter 12, pages 452-458 of the book.

Let us now consider an example.

Example 4: Transform the scene in the world coordinate system to the
viewing coordinate system with the viewpoint at (2, 2, 2). The view plane
normal vector is (1, −1, −1) and the view up vector is (0, 1, 0).

Solution: Refer to page 458 of the book. Here you are given
N = (1, −1, −1) and V = (0, 1, 0). Therefore you can construct the rows of
the orthogonal matrix as follows:

n = N/|N| = (1/√3)(1, −1, −1),
u = (V × n)/|V × n| = (1/√2)(−1, 0, −1),
v = n × u = (1/√6)(1, 2, −1).

The resulting sequence of matrix operations is therefore given by

| −1/√2    0    −1/√2  0 |   | 1 0 0 −2 |
|  1/√6   2/√6  −1/√6  0 | · | 0 1 0 −2 |
|  1/√3  −1/√3  −1/√3  0 |   | 0 0 1 −2 |
|    0      0      0   1 |   | 0 0 0  1 |

***

You may now try the following exercise.

E4) Your camera is located at (2, 3, 3) in the world coordinate system. You
are looking at (0, 0, 0) . How will you transform the scene from the world
coordinate system to viewing coordinate system?

OpenGL Model View and related Matrix Operations

Since modelling is constructing the 3D model, modelling transformations are


concerned with positioning (translation), orientation (rotation) and scaling of
the objects in the virtual world and with their relative positions, orientations
and sizes (scale). The following functions facilitate the operations
related to modelling the scene.

• glMatrixMode(GL_MODELVIEW );
 Specifies that we require space for a 4×4 matrix to be used for
setting up the view of the model, together with all the derived
matrices. The function glMatrixMode( ) is used for setting up both
the modelling-viewing and the projection transformations. When you
wish to set up the modelling and viewing transformation, use the
symbolic constant GL_MODELVIEW.
 GL_MODELVIEW matrix combines viewing matrix and
modelling matrix into one matrix.
 The matrix mode is one of the states OpenGL remembers.
Basically it means that all subsequent matrix operations will be
performed on whatever matrix you specify in glMatrixMode( )
until you change to a different mode. So if you specify the
modelling and viewing matrix GL_MODELVIEW, all subsequent
matrix operations treat it as the current matrix. In the case of
GL_MODELVIEW the matrix specifies the following:
x-axis vector
y-axis vector
z-axis vector

| m11 m12 m13 m14 |
| m21 m22 m23 m24 |
| m31 m32 m33 m34 |
| m41 m42 m43 m44 |

 The x-axis, y-axis and z-axis vectors are actually the view left
vector, the view up vector and the view forward vector.

• glLoadIdentity( );
 Initializes the current matrix to the identity matrix.
 Hence, in the case of the GL_MODELVIEW matrix, the view left
vector is set to (1, 0, 0), the view up vector to (0, 1, 0) and the
view forward vector to (0, 0, −1).

• gluLookAt (eyeX, eyeY, eyeZ, atX, atY, atZ, upX,


upY, upZ);
 Used for positioning, pointing and orienting the virtual camera.
Here
• eyeX, eyeY, eyeZ give the location of the camera / eye.
• atX, atY, atZ specify the pointing direction of the camera.
• upX, upY, upZ give the orientation.

 The default settings are


• (eyeX, eyeY, eyeZ) = (0, 0, 0). This is when you do not call
gluLookAt. Using gluLookAt, you can change the position
of viewer or camera.
• (atX, atY, atZ) = (0, 0, −1). The view is along the negative z-axis.
• (upX, upY, upZ) = (0, 1, 0) means camera is oriented with
up vector along the y-axis.

Now that you have studied how to transform from world coordinate system to
viewing coordinate system, you would like to know how to project the scene
from three dimensional world to two-dimensional display. Standard projection
transformations help you achieve this. In the next section, you will learn
about projection transformations.

5.4 PROJECTIONS
Since display devices are 2D, you need to devise methods that give a
realistic view of a 3D scene on a 2D display. With more and more devices
coming in the market with high definition resolution and many advanced
features, it becomes even more relevant to find out ways in which you can
effectively project your scene on a 2D device. A camera uses a transformation
method that captures a 3D scene in a 2D film or screen. The principle that a
camera uses is essentially the same as that used in a human eye system. Such
transformations that transform 3D scenes to 2D view plane are called
projections. More precisely, a point P = (x, y, z) in the 3D world is
transformed to a point P′ = (x′, y′, z′) that lies on the projection plane
or view plane, as shown in Fig. 1.
[Fig. 1: A point P = (x, y, z) and its image P′ = (x′, y′, z′) on the
projection plane.]

Fig. 1.

Projections can be classified into two types – parallel projections and


perspective projections. Depending on the type of application, one of the
projection types is used. While parallel projections are extensively used in
architectural drawing, perspective projections are used to bring reality to the
projected image, e.g., in games and virtual reality problems.

The taxonomy of projections is shown in Fig. 2.

Projection
  Parallel
    Oblique
    Orthographic
      Axonometric
        Isometric
      Multiview
  Perspective
    One point perspective
    Two point perspective
    Three point perspective

Fig. 2.

To know about projections, their types and applications in detail you may read
the following section of the book.

Read Secs. 12-3, Chapter 12, pages 458-466 of the book.

For a better understanding of different types of projections we are giving here


few examples.

Example 5: Imagine a square with vertices (0, 0, 1), (1, 0, 1), (1, 0, 2)
and (0, 0, 2). Find its image on the view plane aligned with the xy-plane
under the following projections: (i) orthographic parallel projection;
(ii) oblique parallel projection with projection direction vector (1, 1, −1).
Solution: For an orthographic projection the projection direction is
perpendicular to the view plane, so each vertex simply loses its z-coordinate,
giving the points in (i) below. For the oblique parallel projection, a point
(x, y, z) moves along the direction (1, 1, −1) until it reaches the plane
z = 0, i.e., it is projected to (x + z, y + z, 0). You can now easily compute
the projection of each vertex. The transformed points, in the same sequence,
are as follows.

(i) orthographic projection: (0, 0), (1, 0), (1, 0), (0, 0).
(ii) oblique parallel projection: (1, 1), (2, 1), (3, 2), (2, 2).

Try drawing the result on paper. Notice that while the square is sheared into
a parallelogram by the oblique parallel projection, under the orthographic
projection it collapses to a straight line segment.

***
Example 6: Let the origin be the centre of projection. Find the perspective
projection when the projection plane passes through the point (2, 3, −1) and
has normal vector (1, 1, −1).
Solution: Refer to Fig. 3.

Fig. 3: The point P = (x, y, z) is projected from the centre of projection O
(the origin) to the point P′ = (x′, y′, z′) on the projection plane.
Let P be the projection of P on the projection plane. The vectors OP and


OP satisfy OP   c OP . Thus

x '  cx, y'  cy, z'  cz . (1)

Since P falls on the projection plane, it must satisfy the equation


 x ' y'  z'  d where d  1  2  1  3  (1)  (1)  2 (ref. Eqn.(10-6) page 329
of the book). Substituting the values of x , y , z  from Eqn. (1), we get
2
 cx  cy  cz  2  c 
xyz
Hence the projection is given by
2x 2y 2z
x'  , y'  , z'  .
xyz xyz xyz

***

You may now try the following exercises.

E5) Distinguish between parallel and perspective projection.

E6) Find out the matrix of projection for cavalier projection.

E7) Draw an approximate (i) front view, (ii) top view and (iii) view from the
left, of the objects shown in Fig. 4. (Hint: Remember that the terms top
view, front view and view from the left are normally used in the case of
parallel projections only.)

(a) (b)

Fig. 4.

When you look out of a window, you can see more if you are close to the
window (a wide-angle view) than you can when you are away from it.
Equivalently, you capture a larger volume of the scene when you have a
wide-angle lens on your camera. A view volume, which we discuss in the next
section, is the spatial extent that is visible through a window in the view
plane. Given the view window specification, you can set up a view volume
using the window boundary. The view volume depends on the window shape and
size and on the projection specifications.
5.5 VIEW VOLUMES AND GENERAL PROJECTION TRANSFORMATIONS

You can start with reading the following.

Read Sec. 12-4, Chapter 12, pages 467-476 of the book.
You have to remember that the view volume is the volume that sets up the near
and far extents, the top and bottom extents, and the left and right extents.
These extents are chosen so that you can eliminate from further processing
those objects that are not visible within the view volume. Any object whose
defining boundaries lie beyond these extents is immediately removed from the
display list. For example, a scene could contain some very complex objects
that are very far away and do not contribute much to the image. In the
absence of a far extent plane, such objects would remain in the display list
and consume a lot of processing time. With these view extents well defined at
the beginning, the performance of display processing is much improved.

The view volume depends on the type of projection and on the shape and size
of the window. OpenGL provides the following functions for performing
parallel and perspective projections.
For orthographic parallel projection:

glOrtho(left, right, bottom, top, near, far);
gluOrtho2D(left, right, bottom, top);

Here left, right define the x-direction extents, bottom, top define the
y-direction extents and near, far define the z-direction extents of the view
volume. gluOrtho2D is exactly like glOrtho, but with near = −1 and far = 1.
That is, gluOrtho2D views points that have z-values between −1 and 1.
Usually, gluOrtho2D is used when drawing two-dimensional figures that lie in
the xy-plane, with z = 0.
For perspective projection:

glFrustum(float l, float r, float b, float t, float n, float f);
The quantities are as specified in Fig. 5.

Fig. 5: The view volume frustum, with near plane z = −n, far plane z = −f,
and near-plane corners (l, b, −n) and (r, t, −n).
gluPerspective(float θ, float aspectRatio, float n, float f);

Here θ is an angle (measured in degrees) specifying the vertical field of
view; that is, θ is the angle between the top bounding plane and the bottom
bounding plane of the frustum. The parameter aspectRatio specifies the ratio
of the width of the frustum to its height. A call to gluPerspective is
equivalent to calling glFrustum with

t = n tan(θ/2); b = −n tan(θ/2); r = t × aspectRatio; l = b × aspectRatio;

The following simple program will give you an idea of how to use the
foregoing functions in a C program for displaying images of 3D objects. The
program displays a cube using an orthographic projection.
//Program to draw a cube using orthographic projection

#include <GL/glut.h>
GLint w=500,h=500;

void init(void)
{
glClearColor(0.0,0.0,0.0,0.0);
// Setup the modelview matrix
glMatrixMode(GL_MODELVIEW);
// initialize your modelview matrix by identity
glLoadIdentity();
gluLookAt(0,0,0,5.0, 5.0,5.0,0.0,1.0,0.0);
// Setup the projection matrix
glMatrixMode(GL_PROJECTION);
//Initialize the projection matrix by identity
glLoadIdentity();
// Use orthogonal parallel projection
glOrtho(-5.0,5.0,-5.0,5.0,5.0,-5.0);
// Hide backfaces from the view point
glEnable(GL_CULL_FACE);
//Material variables declaration
//Setting Specular reflection
GLfloat mat_specular[] = {0.0, 0.0, 1.0, 1.0};
//Material's shine factor
GLfloat mat_shininess[] = {50.0};
//Light source variables declaration
//Position of light source 0
GLfloat light_position0[] = { 5.0, 5.0, 5.0, 3.0 };
// Light color
GLfloat light_color[] ={1.0,1.0,1.0,1.0};
//Shading to be applied as flat shading
glShadeModel(GL_FLAT);
//Material Specular reflection settings
glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
//Material shine-ness setting
glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess);
//Light 0 position setting
glLightfv(GL_LIGHT0, GL_POSITION, light_position0);

glEnable(GL_LIGHTING);//Power on
glEnable(GL_LIGHT0); //Light switch on

glEnable(GL_DEPTH_TEST);

glFlush();
}

void display(void)
{

glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
typedef GLint vertex[3];
typedef GLfloat color[3];
color c[8] = {{1,0,0},{1,1,0},{0,1,0},{0,1,1},
              {0,0,1},{1,0,1},{1,0,0},{1,0,0}};
vertex v[8] = {{0,0,0},{0,1,0},{1,0,0},{1,1,0},
               {0,0,1},{0,1,1},{1,0,1},{1,1,1}};

glFrontFace(GL_CW);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_INT, 0, v);
glColorPointer(3, GL_FLOAT, 0, c);
GLubyte vertexIndex[] = { 6,2,3,7, 7,3,1,5, 5,1,0,4,
                          4,0,2,6, 2,0,1,3, 7,5,4,6 };
glDrawElements(GL_QUADS,24,GL_UNSIGNED_BYTE,vertexIndex);
glutSwapBuffers();
}

void reShape(GLint newWidth, GLint newHeight)
{
glViewport(0,0,newWidth, newHeight);
w=newWidth;
h=newHeight;
}

int main(int argc, char** argv)
{
glutInit(&argc, argv); // must be called before any other glut* function
glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGB|GLUT_DEPTH);
glutInitWindowPosition(50,50);
glutInitWindowSize(w,h);
glutCreateWindow("3D View");
init();
glutDisplayFunc(display);
glutReshapeFunc(reShape); // reShape sets the viewport once a context exists
glutMainLoop();           // enters the event loop; never returns
return 0;
}

The output will be as shown in Fig. 6.

Fig. 6.

A 3D program requires much more than these settings. For more information on
3D graphics programming you may refer to the coding resources and tutorials
available on the official OpenGL website, www.opengl.org.
You may now try the following exercises.

E8) You have a scene specified in the world coordinate system. The scene has
a cube whose base square lies on the xz-plane with vertices (1, 0, 1),
(2, 0, 1), (2, 0, 2) and (1, 0, 2), and whose top square has vertices
(1, 1, 1), (2, 1, 1), (2, 1, 2) and (1, 1, 2). The look up vector is
(0, 1, 0) and the viewpoint is at (0, 0, 5). Find and draw the image of
the cube in a view plane placed at the xy-plane of the world coordinate
system. Also, find how many vanishing points there will be in the
projected scene and determine the vanishing point(s).

E9) Draw a projected cube with (i) 1-point perspective image (ii) 2-point
perspective image (iii) 3-point perspective image.

E10) State whether the following statements are true or false. Give
reasons/examples to justify your answer.
a) Cavalier projection is a parallel projection.
b) Angles are not preserved in perspective projection in general.
c) Perspective projection is an affine transformation.
d) There can be more than one vanishing point in a projected image but
only one principal vanishing point in a projected image.

We now end the unit by giving a summary of what we have covered in it.

5.6 SUMMARY
In this unit, we have covered the following:

1) A three-dimensional translation shifts the object by a prescribed
parameter vector. For example, to show an object moving along a spiral
path C(t) = (cos t, sin t, t), 0 ≤ t ≤ 2π, all you need to do is to keep
translating the object along the spiral by the parameter vector
(cos t, sin t, t), for t ∈ [0, 2π].

2) Rotations about the coordinate axes are simple extensions of 2D rotation.
The rotation matrices Rx, Ry, Rz about the coordinate axes x, y, z
respectively are given as follows:

        | 1    0       0     0 |        |  cos θ   0   sin θ   0 |
   Rx = | 0  cos θ  −sin θ   0 |   Ry = |    0     1     0     0 |
        | 0  sin θ   cos θ   0 |        | −sin θ   0   cos θ   0 |
        | 0    0       0     1 |        |    0     0     0     1 |

        | cos θ  −sin θ   0   0 |
   Rz = | sin θ   cos θ   0   0 |
        |   0       0     1   0 |
        |   0       0     0   1 |
3) For performing rotation about an axis parallel to one of the coordinate
axes (say z-axis), you first need to translate the axis (and hence the
object) so that it aligns with that coordinate axis (z-axis) and then perform
the rotation. Finally, translate the axis back to its original position.

4) Rotation about an arbitrary axis is a composition of several rotation and
translation operations. What you need to do is the following:
a) If the axis passes through the origin, perform a sequence of rotations
so that it aligns with one of the coordinate axes. Then perform the
required rotation of the object. Finally, apply the reverse sequence of
inverse rotations so that the axis regains its original orientation.
b) If the axis does not pass through the origin, translate the axis to
make it pass through the origin, perform the operations as given in
(a) above, and then translate the axis back to its original position.

5) Scaling, shear and reflection operations have natural extensions to 3D.

6) Viewing coordinates are the coordinates with reference to the eyes of the
viewer or the coordinates with reference to camera position and
orientation.

7) To transform from the world coordinate system to the viewing coordinate
system you need to perform the following operations.
a) Translate the viewing coordinate origin to the world coordinate
origin (and hence the scene).
b) Rotate the axes so that your viewing coordinate system has the
same orientation as the world coordinate system.

While performing such an operation, you are actually performing a change of
basis transformation. This again results in another orientation of three
mutually perpendicular (linearly independent) unit vectors.
8) Simple techniques exist which help you transform your scene from the
world coordinate system to the viewing coordinate system. For this you need
to define a look at vector and a look up vector. These are used in defining
a Cartesian coordinate system for your view/camera coordinates.

9) For transforming a 3D scene to a 2D display, you need to use a
transformation called projection. In computer graphics, a projection is a
transformation that maps a 3D point (x, y, z) to a 2D point (x′, y′) on a
projection plane. Remember that this does not mean we simply omit the
z-coordinate: both x′ and y′ are functions of x, y and z, in general.

10) There are two types of projections – parallel projection and perspective
projection.

11) In parallel projection, objects in scene are projected onto the 2D view
plane along rays parallel to a projection vector.

12) A parallel projection is orthographic if the projection vector is
orthogonal to the view plane; otherwise, it is called an oblique parallel
projection. Different parallel projections, such as isometric, cavalier and
cabinet projections, are in extensive use.

13) Perspective projection gives more realistic appearance and uses the same
principle as used in camera.

14) Perspective projection is not an affine transformation.

15) OpenGL functions for modelling and viewing transformations:
a) glMatrixMode(GL_MODELVIEW);
b) gluLookAt(eyeX, eyeY, eyeZ, atX, atY, atZ, upX, upY, upZ);

16) OpenGL functions for parallel and perspective projections:
a) glFrustum(float l, float r, float b, float t, float n, float f);
b) gluPerspective(float θ, float aspectRatio, float n, float f);

5.7 SOLUTIONS/ANSWERS
Exercises given on page 92 of the unit

E1) Hint: Apply the transformations to align (1, 1, 1) with a coordinate
axis, say the z-axis. Then apply a transformation to reorient the z-axis unit
vector so that it aligns with (2, 0, −1).

E2) Translate by (2, −1, 1), scale by a factor of 2 in the y-direction and
then translate by (−2, 1, −1). The sequence will be as follows:

       | 1 0 0 −2 |   | 1 0 0 0 |   | 1 0 0  2 |
   C = | 0 1 0  1 | · | 0 2 0 0 | · | 0 1 0 −1 |
       | 0 0 1 −1 |   | 0 0 1 0 |   | 0 0 1  1 |
       | 0 0 0  1 |   | 0 0 0 1 |   | 0 0 0  1 |

E3) The axis of rotation lies in the xy-plane. Rotate the axis so that it
aligns with one of the coordinate axes (say the x-axis); the required
rotation in this case is about the z-axis by an angle of −45°. Now apply the
rotation by the specified angle, −θ, since it is given that clockwise
rotation is positive when one looks towards the origin in the direction
opposite to the axis vector. Then apply the inverse rotation (rotation by
+45°) so that the axis is back in its original position. The following
sequence of transformations will be applied:

       | 1/√2  −1/√2  0  0 |   | 1    0      0    0 |   |  1/√2  1/√2  0  0 |
   C = | 1/√2   1/√2  0  0 | · | 0  cos θ  sin θ  0 | · | −1/√2  1/√2  0  0 |
       |  0      0    1  0 |   | 0 −sin θ  cos θ  0 |   |   0     0    1  0 |
       |  0      0    0  1 |   | 0    0      0    1 |   |   0     0    0  1 |

Exercises given on page 450 of the book.

11-1. The solution is similar to that of Exercise 5-6 on page 234 of the
book, which is done in Unit 3. Use the 4×4 matrix representation of 3D
transformations.

11-4. Please refer to Eqn. (11-35) on page 439 of the book. Since the matrix
aligns the vectors u′x, u′y, u′z with the x, y and z-coordinate axes, it
follows that it is equivalent to the matrix product Ry(β)Rx(α):

                  | d  0  −a  0 |   | 1   0     0    0 |
   Ry(β)Rx(α)  =  | 0  1   0  0 | · | 0  c/d  −b/d   0 |
                  | a  0   d  0 |   | 0  b/d   c/d   0 |
                  | 0  0   0  1 |   | 0   0     0    1 |

                  | d  −ab/d  −ac/d  0 |
               =  | 0   c/d    −b/d  0 |
                  | a    b       c   0 |
                  | 0    0       0   1 |

Observe that the third row vector is the vector about which the rotation has
to be performed. The two other row vectors are perpendicular to this vector
and to each other; hence the rows form a mutually orthogonal system of
vectors. When one of the vectors is exactly the same as the rotation axis
vector, the matrix (11-35) must be the same as that obtained by taking the
product Ry(β)Rx(α).

11-9. Since the direction angles are given as α, β, γ (refer to Section A-2,
page 625 of the book), the unit vector along which you need to scale is
u = (ux, uy, uz), where

cos α = ux, cos β = uy, cos γ = uz.

Apply rotations to align the vector u with one of the coordinate axes, say
the z-axis; you need to apply the transformations as given in the book. Then
scale the object with respect to the z-axis, and finally apply the inverse
rotations to bring things back to their original orientation. Hence, the
following sequence of operations will be applied.

1 0 0 0  u2  u2 0  ux 0
0  y z

1 0 0  0
S1  
0 1 0
0 0 s 0  u 0 u y  u 2z
2
0
   x 
0 0 0 1  0 0 0 1

1 0 0 0
 uz  uy 
0 0
 u 2y  u 2z u 2y  u 2z 
 uy uz 
0 0
 u 2y  u 2z u 2y  u 2z 
 
0 0 0 1

1 0 0 0
 uy 
0  u 2y  u 2z 0
uz
0 
0 ux

 
u 2y u 2z u 2y  u 2z  0 1 0 0
S2    uy 
0
uz
0   u x 0 u y  u 2z
2
0

 u 2y  u 2z u 2y  u 2z  1

0 0 0

0 0 0 1

S  S 2  S1

Note: The matrices in the transformation S2 are inverse rotation


transformations to matrices in S1. Remember that inverse rotation matrix
for rotation matrix R() is nothing but R () .

Exercise given on page 94 of the unit.

E4) You are given a look at vector, which can be taken as the vector normal
to the view plane, pointing in the opposite direction. You have to select a
view up vector yourself. Based on these vectors, you can proceed as in E1).

Exercises given on page 98 of the unit.

E5) The two projection types can be contrasted as follows.

Parallel projection:
- Coordinate positions are transformed to the view plane along parallel
  lines; the vector defining the direction of these lines is called the
  projection vector.
- The object's distance from the view plane is not significant.
- Parallel lines remain parallel.
- Ratios are preserved in general.
- Useful in civil and architectural design, machine design, sectional view
  design, etc.
- Less realistic.

Perspective projection:
- Coordinate positions are transformed to the view plane along lines that
  converge at a point called the projection reference point (PRP).
- The object's distance from the PRP and the view plane is important: it
  determines the size of the image. The farther the object, the smaller its
  image.
- Parallel lines may not remain parallel.
- Ratios are not preserved in general.
- Useful in virtual reality, gaming and other such applications.
- More realistic.

E6) The cavalier projection corresponds to tan α = 1, leading to the
following matrix:

                | 1  0  cos φ  0 |
   M_parallel = | 0  1  sin φ  0 |
                | 0  0    0    0 |
                | 0  0    0    1 |

E7) Figs. 7 and 8 show the top view, front view and the view from the left
of the two objects.

(a)

Top view Front view View from left

Fig. 7.

(b)

Top view Front view View from left

Fig. 8.

Exercises given on page 102 of the unit

E8) Refer to the formula (12-16) of the book on page 465. Here
z_prp = 5, z_vp = 0, d_p = 5. Therefore

x_p = 5x/(5 − z),  y_p = 5y/(5 − z).

The images of the vertices are as follows:

(1, 0, 1) → (5/4, 0), (2, 0, 1) → (5/2, 0), (2, 0, 2) → (10/3, 0), (1, 0, 2) → (5/3, 0)
(1, 1, 1) → (5/4, 5/4), (2, 1, 1) → (5/2, 5/4), (2, 1, 2) → (10/3, 5/3), (1, 1, 2) → (5/3, 5/3)

The image of the projected cube will be as shown in Fig. 9.

Fig. 9: The projected cube; the near face has corners (5/4, 0), (5/2, 0),
(5/2, 5/4), (5/4, 5/4), and the far face has corners (5/3, 0), (10/3, 0),
(10/3, 5/3), (5/3, 5/3).

There will be only one vanishing point. Since the view plane is the
xy-plane, all lines parallel to the xy-plane remain parallel even after the
projection; only the edges parallel to the z-axis converge. The vanishing
point in this projection is (0, 0), as shown.

E9) Projected cubes with (i) a 1-point perspective image and (ii) a 2-point
perspective image are shown in Fig. 10, and (iii) a 3-point perspective
image is shown in Fig. 11.

(i) One-point perspective image   (ii) Two-point perspective image

Fig. 10.

(iii) Three-point perspective image

Fig. 11.

E10) a) True. It is the type of oblique parallel projection obtained when
tan α = 1, where α is the angle between the two line segments A and B
described as follows (see Figure 12-21 of the book on page 462):
A – the line segment joining the oblique parallel projected image of a
point and the orthographic parallel projected image of the same point;
B – the line segment joining the point to be projected and its image
under the oblique projection.
b) True. Parallel lines appear converging to a point.
c) False. The perspective projection contains z coordinate in the
denominator. Hence it cannot be an affine transformation.
d) False. A vanishing point for any set of lines that are parallel to one of
the principal coordinate axes of object is called a principal vanishing
point. So there can be 1, 2 or 3 principal vanishing points.

5.8 PRACTICAL EXERCISES

Session 7
1. Write code to generate the composite matrix for a general 3D rotation.
Test your code by continuously rotating a cube about an axis.

2. Write code for (i) a general parallel projection, where the view vector
is at the choice of the user and the projection plane is the xy-plane;
(ii) a perspective projection, where the projection reference point is at
the user's choice and the projection plane is again the xy-plane.

***

APPENDIX: OPENGL - A QUICK START

Structure Page No
1.1 Introduction 111
Objectives
1.2 A Brief Overview 112
1.3 How to Install OpenGL? 114
1.4 Your First Program Using OpenGL 115
1.5 OpenGL Functions, Constants and Data Types 119
1.6 OpenGL Transformation Functions 122
1.7 Animation 123
1.8 Mouse and Keyboard Functions 125

1.1 INTRODUCTION
OpenGL stands for Open Graphics Library. By using it, you can create
interactive applications that render high-quality color images composed of 3D
geometric objects and images. Here the term 'Open' indicates open standards,
which means that many companies are able to contribute to the development of
OpenGL.

OpenGL is a powerful low-level graphics rendering and imaging library
available on all major platforms. It provides programmers an interface to
the system's graphics hardware. OpenGL is designed to be window-system and
operating-system independent, allowing portability between various computer
platforms (refer to Sec. 1.2 about the portability of your programs). It is
referred to as an API (application programming interface) that insulates the
programmer from device differences and from how devices vary from one system
to another.

OpenGL was originally developed by Silicon Graphics Inc. (SGI) as a
multipurpose, platform-independent graphics API. Since 1992, the OpenGL
Architecture Review Board (ARB) has looked after the development of OpenGL.
This board consists of some of the major graphics vendors and other industry
leaders, such as Hewlett-Packard, IBM, NVIDIA, Sun Microsystems and Silicon
Graphics. In view of the fast development of graphics hardware, the ARB is
committed to annual updates, and the present version of OpenGL is 3.0.
OpenGL is now widely used in CAD, virtual reality, scientific visualization,
flight simulation and video games.

The purpose of this Appendix is to give you a quick exposure to graphics
programming in C using OpenGL and to acquaint you with the basic structure
of a graphics program using OpenGL. We do not intend to give a complete
introduction to OpenGL here, as we have already incorporated the required
details in the units at various places.

Objectives

After reading this supplement, you should be able to:
- outline the basics of OpenGL;
- draw 2D objects with specified fill colors;
- use OpenGL transformation functions for 2D and 3D geometric
  transformations;
- apply simple animation to objects displayed in the scene;
- write interactive programs using the mouse and keyboard functions of
  OpenGL.

1.2 A BRIEF OVERVIEW

As you know, OpenGL is a 3D graphics application programming interface (API)
used to produce high-quality color images of 3D objects (groups of geometric
primitives) and images (bitmaps and raster rectangles). Remember that it is
not a programming language like C or Java. To give you an idea of what is
possible with OpenGL, we list here a few basic operations that you can
perform with its help.
- Creating interactive applications which render high-quality color images
  composed of 3D geometric objects and images.
- Specification (modelling) of an arbitrarily complex set of objects in 3D
  space, i.e., creation of a 3D virtual world. This includes:
  - object specification in terms of drawing primitives;
  - relative positions of multiple objects defined by transformations;
  - coloring objects, specifying light sources and light direction.
- Specification of a virtual camera with which to view the 3D virtual world.
- Projection to a 2D display.

You will explore many more features of OpenGL later in this appendix.

When a program is executed, OpenGL does the following:
- assembles the virtual world;
- points the virtual camera at the world, i.e., sets up a view angle or
  direction and the volume (frustum) you would like to keep in your scene;
- projects the scene onto a 2D projection plane;
- performs the equivalent of spatial sampling and rasterization.

To be more specific about the portability of your program: the part of your
application which does rendering is platform independent. However, in order
for OpenGL to render, it needs a window to draw into, and OpenGL itself is
not capable of opening windows or responding to interrupts from a mouse or
keyboard. Generally, this is controlled by the windowing system of whatever
platform you are working on. Since OpenGL is platform independent, we need
some way to integrate it into each windowing system. Every windowing system
where OpenGL is supported has additional API calls for managing OpenGL
windows, colormaps, and other features; these additional APIs are platform
dependent. For the sake of simplicity, we will use an additional freeware
library, called GLUT (GL Utility Toolkit), for simplifying interaction with
windowing systems. GLUT is a library which makes writing OpenGL programs,
regardless of windowing system, much easier. The GLUT libraries provide
facilities to define and open windows and to respond to mouse and keyboard
events.

Finally, we conclude this section by telling you something about the
rendering pipeline of OpenGL. The OpenGL pipeline has a series of processing
stages in order (see Fig. 1). Two kinds of graphical information,
vertex-based data and pixel-based data, are processed through the pipeline,
combined together and then written into the frame buffer. Notice that OpenGL
can also send the processed data back to your application.

Fig. 1: The OpenGL rendering pipeline. Vertex data pass through evaluators,
per-vertex operations and primitive assembly; pixel data pass through pixel
operations and texture assembly. Both can be stored in a display list, and
both feed into rasterization, per-fragment operations and, finally, the
frame buffer.

Display List: Because of its structure, OpenGL requires lots of procedure
calls to render a complex object. To improve efficiency, OpenGL allows you
to generate an object, called a display list, into which OpenGL commands are
stored in an efficient internal format, so that they can be called back
again in the future.

Vertex Operations: Vertex and normal coordinates are transformed by the
GL_MODELVIEW matrix (from object coordinates to eye coordinates). Also, if
lighting is enabled, a per-vertex lighting calculation is performed using
the transformed vertex and normal data. This lighting calculation updates
the color of the vertex.

Pixel Transfer Operations: After the pixels from the client's memory are
unpacked (read), scaling, bias, mapping and clamping are performed on the
data. These operations are called pixel transfer operations. The transferred
data are either stored in texture memory or rasterized directly into
fragments.

Primitive Assembly: After the vertex operations, the primitives (points,
lines, and polygons) are transformed once again, by the projection matrix.
These primitives are clipped against the viewing volume clipping planes,
changing from eye coordinates to clip coordinates. After that, perspective
division by w occurs, and the viewport transform is applied in order to map
the 3D scene to window space coordinates. The last thing done in primitive
assembly is the culling test, if culling is enabled.

Texture Memory: Texture images are loaded into texture memory to be


applied onto geometric objects.

Rasterization: Rasterization is the conversion of both geometric and pixel
data into fragments. Fragments form a rectangular array containing color,
depth, line width, point size and antialiasing calculations.

Fragment Operations: This is the last stage, converting fragments to pixels
in the frame buffer. The first process in this stage is texture generation:
a texture element is generated from texture memory and applied to each
fragment. Then fog calculations are applied, after which a variety of
fragment tests are performed. Finally, blending, dithering, logical
operations and a few more operations are performed, and the actual pixel
data are stored in the frame buffer.

OpenGL as a State Machine

OpenGL is a state machine. You put it into various states (or modes) that
remain in effect until you change them. For example, the current color is a state
variable. Suppose you set the current color to white. Every object is then
drawn with white color until you set the current color to something else. There
are many such state variables that OpenGL maintains. Another example that
you might be using in your codes quite often would be point size. You can take
it to be one pixel thick or it can be thicker with more pixels.

1.3 HOW TO INSTALL OPENGL?

If you are using Windows XP, NT or a later version, OpenGL is automatically
installed with your video driver. Since you also want to program in OpenGL,
you need to get some extra OpenGL libraries so that your programs will
compile. You can download information for the gl and glut libraries from the
official website of OpenGL:

http://www.opengl.org/resources/libraries/glut.html
or,
http://www.xmission.com/~nate/glut.html

Your library files in MS Windows:

If you are working in the Visual Studio 6.0 IDE, then:
1. Copy the glut.h header file to c:\program files\microsoft visual
   studio\vc\include\gl
2. Copy the glut.lib file to c:\program files\microsoft visual studio\vc\lib
3. Copy the glut32.lib file to c:\program files\microsoft visual
   studio\vc\lib
4. Copy the glut32.dll file to c:\windows\system32

If your system has Visual Studio 2005, then you may try this:
1. Copy the glut.h header file to C:\Program Files\Microsoft Visual Studio
   8\VC\PlatformSDK\Include
2. Copy the glut.lib file to C:\Program Files\Microsoft Visual Studio
   8\VC\PlatformSDK\Lib
3. Copy the glut32.lib file to C:\Program Files\Microsoft Visual Studio
   8\VC\PlatformSDK\Lib
4. Copy the glut32.dll file to c:\windows\system32
OpenGL Installation on Linux

Installations of all kinds on Linux depend very much on the distribution. If
you use your distribution's packages, installation may become a bit easier.
For example, on Debian-based distributions (with apt-get or an equivalent
installed), the following commands can be used to install the header files
of the OpenGL libraries.

sudo apt-get update
sudo apt-get install libgl1-mesa-dev

Headers compatible with OpenGL are available from the Mesa3D project. If
your distribution does not contain development files for Mesa3D, you can
build Mesa3D from source with the usual installation procedure:

./configure
make
make install

However, take care of conflicting OpenGL libraries. It may happen that Mesa's
software implementation overrides your distribution's libraries or manually
installed libraries.

The headers will be installed to [install_root]/include/GL; on Debian
systems, this is /usr/local/include/GL when compiled from source, or
/usr/include/GL when installed from a pre-built package. Official OpenGL
headers are available from SGI; however, they have been superseded by their
upgrades.

Finally, the following url is useful for setting up OpenGL on any platform.

http://www.polytech.unice.fr/~buffa/cours/synthese_image/DOCS/Tutoriaux/Nehe/lessons.htm

1.4 YOUR FIRST PROGRAM USING OPENGL


Let us begin with a simple C program which illustrates how you open a window and specify its position and size. You need to structure your program as follows:

 Initializing the window: Call the following function to initialize the GLUT library.

glutInit(&argc, argv);

Its arguments should be the same as those of your main() function.

Now choose the type of window that you need for your application and
initialize it.

glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB);

This helps in initializing the graphics display mode (of the GLUT library) to use a window with RGB buffers for color and with a single buffer for display. Other options are also available. For example, you may use GLUT_DOUBLE for a double buffer; in animation programs, it is advisable to use a double buffer. For complete information on the various options available, you may visit http://glprogramming.com/red/appendixd.html or refer to the book [1], mentioned at the end of this Study Guide, which is also called the Red Book.

glutInitWindowPosition (x, y);

specifies the window position at the (x, y) pixel location. This means the window is x pixels to the right of and y pixels down from the top left corner of the display screen.
glutInitWindowSize (w, h);

specifies the window size to be h units high and w units wide. You may
choose height and width as per your program's specifications and object
size.
glutCreateWindow(“A Rectangle”);

creates an OpenGL enabled window.

 State Initialization: Initialize any OpenGL state that you don’t want to
change in every frame of your program. This might include many states
such as background color, positions of light sources, texture maps etc.
For example, you would need to specify the RGB component of
background color to be used when clearing the color buffer using the
following function.

glClearColor(0.0f, 0.0f, 1.0f,1.0f);

The last argument is the alpha (transparency) component.

glClear(GL_COLOR_BUFFER_BIT);

clears the window with current clearing color.

 Register Callback Functions: A callback function is executable code that is passed as an argument to other code. You need to register the callback functions that you will need in your main program. GLUT then calls these functions when certain events occur, such as user input through the mouse or the window needing to be refreshed.

The most important callback function is the following which renders your
scene.
glutDisplayFunc(display);

Here display() is the function you will write to create your objects. The program Blank.c displays nothing except a blank window, because its display function does not issue any drawing commands. Use of

glFlush();

flushes every OpenGL command and buffer for display.


 Enter the main event processing loop: This is where your application receives events, and starts the GLUT main processing loop.

glutMainLoop();

Let us now have a look at the complete C-code.

//Blank.c
#include <gl\glut.h>
/* Includes the OpenGL core header file. This file is required
   by all OpenGL applications. */

//Initialize whatever you want to initialize here.
void init(void)
{
    glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
}

//Called to draw scene
void display(void)
{
    //clear window with current clearing color
    glClear(GL_COLOR_BUFFER_BIT);
    // Flush drawing commands
    glFlush();
}

//Main Loop
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    //No window size or position chosen; the program will use default values.
    glutCreateWindow("A Blank Window");
    init();
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
} //Output will be a blank window

The output is an empty window cleared to the blue background color.

Let us now modify our program to draw a square in the window. This time set the background color to black. By default, the solid square will be drawn in white. Note that there is very little change in your display function and in the initialization.

//Square.c
#include <gl\glut.h>

void init(void)
{
    //Reset the coordinate system
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0f, 500.0f, 0.0f, 500.0f, 1.0f, -1.0f);
}

//Called to draw scene
void display(void)
{
    //clear window with current clearing color
    glClear(GL_COLOR_BUFFER_BIT);
    glRectf(150.0f, 150.0f, 350.0f, 350.0f);
    // Flush drawing commands
    glFlush();
}

//Main Loop
void main(void)
{
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutCreateWindow("A Rectangle");
    init();
    glutDisplayFunc(display);
    glutMainLoop();
}

The output is a white square on a black background.

Note: You might have noticed that in the program we did not use some
functions and still the program could be executed. For example, we have not
used any arguments in the main function main( ). We have also not used
glutInit( ) here. The idea is to tell you that the program runs without these
functions and arguments also. However, for more efficient use of memory and
for a flawless performance of your program, it is advisable to make use of these
functions and arguments.

For plotting points you can use the following code and add it to your display
function.

glBegin(GL_POINTS);
    glVertex3f(100.0f, 150.0f, 0.0f);
    glVertex3f(200.0f, 150.0f, 0.0f);
glEnd();

118
There are a number of primitives for object modelling and rendering the scene. Appendix: OpenGL -
A Quick Start
You are advised to visit the official website of OpenGL to see the reference
manual for more information on drawing and other primitives.

Window Reshape Function: Every program displaying its output in a window uses a reshape function that indicates what action is to be taken when the window is resized. This function has the following syntax.

glutReshapeFunc(void (* func)(int w, int h));

The function allows registration of a callback function that is called when the window is resized. As an example of a callback function, we give the following function reshape( ).

void reshape(GLsizei w, GLsizei h)
{   //Prevent divide by zero; w = width, h = height of window
    if (h == 0) h = 1;
    //Set viewport to window dimension
    glViewport(0, 0, w, h);
    //Reset coordinate system
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    //Adjust clipping volume with appropriate proportions
    if (w <= h)
        glOrtho(0.0f, 250.0f, 0.0f, 250.0f*h/w, 1.0, -1.0);
    else
        glOrtho(0.0f, 250.0f*w/h, 0.0f, 250.0f, 1.0, -1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

Then register the callback function

glutReshapeFunc(reshape);

in your main function to get the desired effect.

1.5 OPENGL FUNCTIONS, CONSTANTS, AND DATA TYPES

Naming Conventions

Function names in the basic OpenGL library begin with the letters gl, and each component word within a name has its first letter capitalized, for example, glClear() and glBegin(). Fig. 2, which dissects the glVertex*( ) function, may help you understand better the general naming conventions in the gl and glut libraries.

Fig. 2 takes glVertex3fv( . . . ) apart: gl is the library prefix, Vertex is the root command, the digit gives the number of arguments (2 for (x, y), 3 for (x, y, z), 4 for (x, y, z, w)), the next letter gives the data type (b - byte, ub - unsigned byte, s - short, i - int, ui - unsigned int, f - float, d - double), and the suffix v marks the vector form, used only when the data is passed as a vector; omit the v for the scalar forms.

Fig. 2.

There are certain functions which require arguments that are constant
specifying a parameter name, a value for a parameter, a particular mode, etc.
All such constants begin with the letters GL, component words in the constant
are written in capital letters, and the underscore " _ " is used as a separator
between the words. For example, we have used GL_RGB and
GL_COLOR_BUFFER_BIT in our earlier program.

For making the program platform independent and for fast processing of
various routines, OpenGL has defined its own data types which are as given in
Table-1 below.
Table-1

Suffix   Data Type               C/C++ type                       OpenGL type name
b        8-bit int               signed char                      GLbyte
s        16-bit int              short                            GLshort
i        32-bit int              int or long                      GLint, GLsizei
f        32-bit float            float                            GLfloat, GLclampf
d        64-bit float            double                           GLdouble, GLclampd
ub       8-bit unsigned no.      unsigned char                    GLubyte, GLboolean
us       16-bit unsigned no.     unsigned short                   GLushort
ui       32-bit unsigned no.     unsigned int or unsigned long    GLuint, GLenum, GLbitfield

In addition to the basic, or core, library of functions, a set of "macro" routines that use core functions is available in the OpenGL Utility Library (GLU). These routines provide methods for setting up viewing and projection matrices, describing complex objects with line and polygon approximations, surface rendering, and other complex tasks. In particular, GLU provides methods for displaying quadrics using linear-equation approximations.

OpenGL Colors and Display Modes

OpenGL colors are typically defined using RGB (red, green, blue) components and a special A (or alpha) component. There are various ways in which you could interpret A depending on the context (especially with reference to transparency and color blending). Usually A = 1 when no special effects are desired. We have used the function glutInitDisplayMode(unsigned int mode) for setting up the display mode. The mode is the logical OR, denoted by "x | y", of various choices of x and y from the following:

GLUT_RGB for RGB colors, GLUT_RGBA for RGBA colors, GLUT_INDEX for color-mapped colors (not recommended), GLUT_SINGLE for single-buffering, and GLUT_DOUBLE for double-buffering (for smooth animation). Further, you can use GLUT_DEPTH to enable depth buffering for hidden surface removal.

Finally, a brief discussion of the projection matrices. Fig. 3 gives you an idea of the pipeline of transformations that you need to apply to render a 3D scene. Remember that OpenGL is a 3D graphics standard and treats every object as a 3D object. If the object is 2D, you can simply define it as a 3D object sitting in the plane z = 0.

Fig. 3 shows this pipeline: a point in standard (object) coordinates is transformed by the ModelView matrix into eye coordinates, then by the Projection matrix; after perspective normalization and clipping it lies in normalized device coordinates, and the viewport transformation finally maps it to window coordinates.

Fig. 3.

When you study 3D geometric and projection transformations in your Computer Graphics course, you will be able to appreciate this pipeline better. Because of this pipeline, you need to introduce the following functions in your program.

glMatrixMode(GL_PROJECTION);
//Specifies that the current matrix is the projection matrix
glLoadIdentity();
//Replace the current matrix with the identity matrix
glOrtho(0.0f, 500.0f, 0.0f, 500.0f, 1.0f, -1.0f);
/* Sets/modifies the clipping volume extents for orthographic
   projection. Parameters specify the left, right, bottom, top,
   near, far extents, in that order. */

For more details, you are advised to refer to Unit 5, Secs. 5.3-5.5 of this Study
Guide.

Some Output Primitives

You can use some of the following output primitives of OpenGL to practice
more on OpenGL programming.

GL_POINTS: set of points

GL_LINES: set of line segments (not connected)

GL_LINE_STRIP: chain of connected line segments

GL_LINE_LOOP: closed polygon (may self intersect)

GL_TRIANGLES: set of triangles (not connected)

GL_TRIANGLE_STRIP: linear chain of triangles

GL_TRIANGLE_FAN: fan of triangles (joined to one point)

GL_QUADS: set of quadrilaterals (not connected)

GL_QUAD_STRIP: linear chain of quadrilaterals

GL_POLYGON: a convex polygon

1.6 OPENGL TRANSFORMATION FUNCTIONS


As already discussed, OpenGL treats everything as 3D. Hence all the transformations defined in OpenGL are 3D in nature. To use them in 2D applications, you simply ignore the third dimension parameter. For example, the following function translates the given object by (1, 1) in 2D, since we have chosen the third parameter to be zero.

glTranslatef(1.0f, 1.0f, 0.0f);

Similarly, rotation about the origin in 2D is essentially the same as 3D rotation about the z-axis. Hence

glRotatef(45, 0.0f, 0.0f, 1.0f);

will rotate the object about the z-axis by an angle of 45 degrees around the origin. Similarly, you can use the OpenGL scaling function glScale*( ) for 2D transformations. The following example of a display function will help you understand how rotation and scaling of a square are performed with respect to a pivot point (0.5, 0.5).

void myDisplay(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    // Set display color as green
    glColor3f(0.0, 1.0, 0.0);
    //scaling and rotation with (0.5, 0.5) as the pivot point
    //rotate up to 30 degrees with an increment of 5
    for (float angle = 0; angle < 30; angle = angle + 5)
    {
        glTranslated(-0.5f, -0.5f, 0.0f);
        glScaled(0.7, 0.7, 1.0);
        glRotated(angle, 0.0, 0.0, 1.0);
        glTranslated(0.5f, 0.5f, 0.0f);
        glBegin(GL_LINE_LOOP);
            glVertex3f(-0.5, -0.5, 0.0);
            glVertex3f(1.0, -0.5, 0.0);
            glVertex3f(1.0, 1.0, 0.0);
            glVertex3f(-0.5, 1.0, 0.0);
        glEnd();
    }
    glFlush();
}

To see the effect, replace the display function of one of your previous C programs with the one above; the display window will show a sequence of progressively rotated and scaled squares.

Unit 5 gives an introduction to 3D transformations with a brief description of OpenGL 3D transformations.

1.7 ANIMATION
OpenGL provides good support for animation design. Here we give a simple
example code showing the animation for two triangles. The executed code
displays falling triangles.
#include <gl/glut.h>
#include <math.h>

//Initialize triangles' position
GLfloat xx = 100.0f;
GLfloat yy = 100.0f;

//y direction decrement
GLfloat ystep = -5.0f;

//Keep track of window's changing width and height
GLfloat w;
GLfloat h;

//Display the scene
void myDisplay(void)
{
    //Clear the window with current clearing color
    glClear(GL_COLOR_BUFFER_BIT);

    // Set current drawing color to white
    glColor3f(1.0f, 1.0f, 1.0f);

    //Draw a filled triangle with current color
    glBegin(GL_TRIANGLES);
        glVertex2f(xx, yy);
        glVertex2f(xx+50, yy);
        glVertex2f(xx+25, yy+25);
    glEnd();

    // Change color
    glColor3f(0.0f, 1.0f, 0.0f);

    // Draw another triangle 50 units away in x direction
    // and 10 units down in y direction
    glBegin(GL_TRIANGLES);
        glVertex2f(xx+50, yy-10);
        glVertex2f(xx+100, yy-10);
        glVertex2f(xx+75, yy+15);
    glEnd();

    //Flush drawing commands and swap
    glutSwapBuffers();
}

//Timer function - called each time the registered interval elapses
void TimerFunction(int value)
{
    //If triangles go down the window, lift them up
    if (yy < 0)
        yy = 250 + yy;

    //Slide the triangles down
    yy = yy + ystep;

    //Redraw the scene with changed coordinates
    glutPostRedisplay();
    glutTimerFunc(100, TimerFunction, 1);
}

void reshape(GLsizei w, GLsizei h)
{
    if (h == 0) h = 1;
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    //Preserve the proportions in changing window
    if (w <= h)
    {
        h = 250.0f*h/w;
        w = 250.0f;
    }
    else
    {
        w = 250.0f*w/h;
        h = 250.0f;
    }
    glOrtho(0.0f, w, 0.0f, h, 1.0f, -1.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

void myInit(void)
{
    glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
}

void main(void)
{
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("Rotate");
    glutDisplayFunc(myDisplay);
    glutReshapeFunc(reshape);
    glutTimerFunc(100, TimerFunction, 1);
    myInit();
    glutMainLoop();
}

When run, the program shows the two triangles repeatedly falling down the window.

A few words about the functions that we have used in making this animation.

void glutPostRedisplay(void) ;

This function tells GLUT that the window needs to be repainted with current
changes. So if you need to re-draw the scene with changes, call this function.

void glutSwapBuffers(void) ;

This function flushes the OpenGL commands and swaps the buffers, provided you have requested a double buffer (GLUT_DOUBLE) in your code. Even if double buffering is not used, the OpenGL commands are still flushed.

void glutTimerFunc(unsigned int m, void (*func)(int value), int value);

This function registers a callback function func that is called after m milliseconds have elapsed. The integer value is a user-specified value that is passed to the callback function. Most OpenGL implementations provide double-buffering: while one buffer is being displayed, the other is being prepared for display. When the drawing of a frame is complete, the two buffers are swapped, so that the one that was being viewed becomes available for drawing, and vice versa.

1.8 MOUSE AND KEYBOARD FUNCTIONS


In order to make your program interactive, you can effectively use the mouse
and keyboard with the help of OpenGL mouse and keyboard functions. Some
of the functions that are frequently used in interactive programs are as follows:

glutMouseFunc(mouseFunc);
This function allows you to link a mouse button with a
function that is invoked when a mouse button is pressed or
released. Here mouseFunc is the name of the function that is
invoked on mouse event.
glutKeyboardFunc(keyFunc);
This function specifies a function that is to be executed when a particular key character is pressed. The registered function also receives the current mouse position in window coordinates.

glutMotionFunc(motionFunc);
This function specifies a function that is to be executed when a
mouse moves within the window while one or more buttons
are pressed.

You can try the following piece of code to understand how these functions
work.

//button = mouse button, state can be pressed or released
//x, y indicate mouse position

void mouseFunc(GLint button, GLint state, GLint x, GLint y)
{
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
        glRectf(x, h-y, x+50, h-(y+50)); // h is window height
    else if (button == GLUT_RIGHT_BUTTON && state == GLUT_DOWN)
    {
        glBegin(GL_TRIANGLES);
            glVertex2i(x, 250 - y);
            glVertex2i(x+50, 250 - y);
            glVertex2i(x+25, 250 - (y+25));
        glEnd();
    }
    glFlush();
}

The above function is required to be registered in your main program using the
following command.

glutMouseFunc(mouseFunc);

Similarly, a keyboard function example is given below.

void keyFunc(GLubyte key, GLint x, GLint y)
{
    GLint mouse_x = x;
    GLint mouse_y = h - y; //h is the window height
    switch(key)
    {
        //Plot rectangle
        case 'r':
            glRectf(mouse_x, mouse_y, mouse_x+50, mouse_y+50);
            break;
        // Plot triangle
        case 't':
            glBegin(GL_TRIANGLES);
                glVertex2i(mouse_x, mouse_y);
                glVertex2i(mouse_x+50, mouse_y);
                glVertex2i(mouse_x+25, mouse_y + 25);
            glEnd();
            break;
        default:
            break;
    }
    glFlush();
}

The above callback needs to be registered in your main function as follows.

glutKeyboardFunc(keyFunc);

This gives a quick introduction to OpenGL primitives for graphics support. OpenGL has a vast range of functions for modelling, rendering, animation and image processing. For a detailed study of OpenGL, you are advised to refer to the following books/URL.

1. Dave Shreiner, Mason Woo, Jackie Neider and Tom Davis, OpenGL Programming Guide: The Official Guide to Learning OpenGL, Fifth Edition, Addison Wesley, 2006.

2. Richard S. Wright, Jr., OpenGL Super Bible, Second Edition, Techmedia, 2007.

3. www.opengl.org/ (Official site of OpenGL).
