
Computer Graphics (CST307)

Lect 3: Introduction
Dr. Satyendra Singh Chouhan
MNIT Jaipur
Color image generation
 Each pixel contains more than one type of element
 E.g., 3 phosphor dots representing the 3 primary colors (R, G, B)
 When excited, the 3 colors combine to produce the desired output
Color Image generation
 Each element is capable of generating different shades of its color
 The 3 shades combined together give the sensation of the desired color
How to implement
 Direct Coding – the individual color information for the R, G, and B components of each pixel is stored directly in the corresponding frame buffer location
 Requires a large frame buffer to store all possible color values
 The set of all possible combinations of RGB values is also called the color gamut
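As a rough illustration (the display size and bit depth here are assumed for concreteness): a 1024 × 768 display with 8 bits per primary (24 bits per pixel) needs 1024 × 768 × 3 bytes ≈ 2.25 MB of frame buffer, and each pixel can take 2^24 ≈ 16.7 million distinct color values.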
Another way
Color Look Up Table
 A separate look-up table (a portion of memory)
 Each entry (row) of the table contains a specific RGB combination
 Each frame buffer location contains a pointer to an entry in the table
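The saving comes from storing a short index per pixel instead of a full RGB triple. A minimal C++ sketch of the idea (the names clut and frameBuffer, and the 256-entry, 8-bit-index sizes, are illustrative assumptions, not from the slides):

#include <cstdint>
#include <vector>

// One RGB entry of the look-up table (8 bits per primary assumed)
struct RGB { std::uint8_t r, g, b; };

std::vector<RGB>          clut(256);                 // 256-entry table -> 8-bit indices
std::vector<std::uint8_t> frameBuffer(1024 * 768);   // one index per pixel

// Displayed color of pixel p: one indirection through the table
RGB colorOfPixel (std::size_t p) { return clut[frameBuffer[p]]; }

Here each pixel costs 1 byte instead of 3, and the table itself adds only 256 × 3 bytes.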
Color Image generation
Color look-up table (CLUT) assumption
 Only a small fraction of the whole color gamut is required
 We know this fraction a priori
 If this assumption does not hold, the method is of little use
Recap
 In the previous lecture, we got an introduction to CG
 Basic understanding of a graphics-based system
 Today we will introduce graphics software (basics)
Generic Architecture Revisited
 We discussed the color values stored in the frame buffer
 How are these values obtained?
 The Display Processor computes these values in stages
 These stages together are known as the graphics pipeline
Object Representation (1st Stage)
 Defining the objects that will be part of the image on the display (1st stage)
 Several representation techniques are available for efficient creation and manipulation of images
 Objects are passed through the subsequent pipeline stages to render images on the screen
Modeling Transformation (2nd stage)
 Objects are defined in their own (local) coordinate systems
 We need to put them together to construct the image in its own coordinate system (the world coordinate system)
 This process of putting individual objects into a scene is known as modeling/geometric transformation (2nd stage)
Lighting (Illumination) (3rd stage)
 Once the (3D) scene is created, the objects need to be assigned colors
 Color is a psychological phenomenon linked to the way light behaves (i.e., the laws of optics)
 We need methods that mimic the optical laws (3rd stage)
Viewing Transformation (4th stage)
 Next task (4th stage) – map the (colored) 3D scene to 2D device coordinates
 Similar to taking a snapshot of a scene with a camera
Viewing Transformation (4th stage)
 Mathematically, taking the snapshot involves several intermediate operations (which form a pipeline in themselves) – the Viewing Pipeline
 First, we set up a camera (aka view) coordinate system
 Then, the world coordinate scene is transferred to the view coordinate system (viewing transformation)
 From there, we transfer the scene to the 2D view plane (projection transformation)
Viewing Transformation (4th stage)
 For projection, we define a region in the viewing coordinate space (called the view volume)
 Objects inside the volume are projected
 Objects outside it are not projected
Viewing Transformation (4th stage)
 The process of removing objects outside the view volume is called clipping
 When we project, we consider the viewer's position
 Some objects will be fully visible, some partially visible, and some invisible, even though all are within the same volume
Viewing Transformation (4th stage)
 In order to capture this viewing effect, the process of hidden surface removal (aka visible surface detection) is performed
 After clipping and hidden surface removal are performed, the scene is projected onto the view plane
Viewing Transformation (4th stage)
 From the view plane, the 2D projected scene (the window) is transferred to a region of the device coordinate system (called the viewport)
 Window-to-viewport transformation
Viewing Transformation (4th stage)
 Thus the 4th stage involves
 3 transformations (world coordinates to camera coordinates, camera coordinates to view plane, view plane to viewport)
 + 2 operations (clipping and hidden surface removal)
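As a concrete instance of the last transformation, the standard window-to-viewport mapping scales and translates window coordinates (xw, yw) into viewport coordinates (the symbol names below are the conventional ones, not from the slides):

xv = xvmin + (xw − xwmin) · (xvmax − xvmin) / (xwmax − xwmin)
yv = yvmin + (yw − ywmin) · (yvmax − yvmin) / (ywmax − ywmin)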
Scan Conversion (5th stage)
 The device coordinate space is continuous, whereas the display is a discrete space (a pixel grid)
 We need to transform the viewport on the (continuous) device coordinates to the (discrete) screen coordinate system – Scan Conversion (also called Rasterization)
Scan Conversion (5th stage)
 Problem – how do we minimize the distortions (called the aliasing effect) that result from the transformation from continuous to discrete space?
 Solution – anti-aliasing techniques (used in this stage)


Summary
Graphic pipeline implementation
 We discussed the theoretical background of CG
 A CG programmer need not implement all the stages of the pipeline
 Instead, one can use the APIs (Application Programming Interfaces) provided by graphics libraries
 Examples of graphics libraries
 OpenGL (Open Graphics Library)
 DirectX (by Microsoft)
 Many more
Graphic pipeline implementation
 APIs are predefined sets of functions which, when invoked with appropriate arguments, perform specific tasks
 They eliminate the need to know every detail (processor, memory, and OS) to build a graphics application
 For example, the function glColor3f(r, g, b) in OpenGL assigns a color to a 3D point
 Thus, the programmer need not know
 How color is defined in the system
 How it is stored and accessed
 How the system calls are managed by the CPU to perform these tasks, etc.
Ø Graphics applications such as paint programs, CAD tools, and video games are developed using these functions
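To make the API idea concrete, here is a minimal sketch in classic (fixed-function) OpenGL that colors and plots a single 3D point; the window/context setup that must surround it (e.g., via GLUT) is omitted and assumed:

#include <GL/gl.h>   // classic fixed-function OpenGL

void drawColoredPoint ()
{
    glBegin (GL_POINTS);
        glColor3f (1.0f, 0.0f, 0.0f);    // red: r, g, b each in [0, 1]
        glVertex3f (0.5f, 0.5f, 0.0f);   // the 3D point the color applies to
    glEnd ();
}

The programmer states only the color and the point; how the pipeline stages turn this into frame buffer values stays hidden behind the API.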
Thank you
Output Primitives
 Output Primitives: Basic geometric structures (points, straight line segments, circles and other conic sections, quadric surfaces, spline curves and surfaces, polygon color areas, and character strings)
 These picture components are often defined in a continuous space.
Output Primitives
 In order to draw the primitive objects, one has to first scan convert the object.
 Scan convert: Refers to the operation of finding out the locations of the pixels to be intensified and then setting the values of the corresponding bits, in the graphics memory, to the desired intensity code.
Output Primitives
 Each pixel on the display surface has a finite size depending on the screen resolution, and hence a pixel cannot represent a single mathematical point.
Scan Converting A Point
 A mathematical point (x,y) needs to be scan
converted to a pixel at location (x´, y´).
Scan Converting A Point
 x′ = Round(x) and y′ = Round(y)
 All points that satisfy
x′ ≤ x < x′ + 1
y′ ≤ y < y′ + 1
are mapped to pixel (x′, y′)
Scan Converting A Point
 x′ = Round(x + 0.5) and y′ = Round(y + 0.5)
 All points that satisfy
x′ − 0.5 ≤ x < x′ + 0.5
y′ − 0.5 ≤ y < y′ + 0.5
are mapped to pixel (x′, y′)
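Reading Round here as truncation for non-negative coordinates (which is what the two inequality ranges imply), the two conventions can be sketched in C++ as follows (the function names are illustrative):

#include <cmath>   // std::floor

// Convention 1: x' = Round(x) — keep the containing unit cell
int toPixelTruncate (float v) { return int (std::floor (v)); }

// Convention 2: x' = Round(x + 0.5) — map to the nearest pixel center
int toPixelNearest (float v) { return int (std::floor (v + 0.5f)); }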
Scan Converting A Line
 The Cartesian slope-intercept equation for a straight line is:
y = m·x + b
m = (y2 − y1) / (x2 − x1)
b = y1 − m·x1
Δy = m·Δx    Δx = Δy / m
Scan Converting A Line
 These equations form the basis for determining the deflection voltages in analog devices.
Δy = m·Δx    Δx = Δy / m
[Figure: line segments illustrating the cases |m| < 1 and |m| > 1]
Scan Converting A Line
 On raster systems, lines are plotted with pixels, and the step sizes (in the horizontal and vertical directions) are constrained by the pixel separation.
Scan Converting A Line
 We must sample a line at discrete positions
and determine the nearest pixel to the line
at each sampled position.
DDA (Digital Differential Analyzer) Algorithm
 The algorithm is an incremental scan conversion method.
 Based on calculating either Δx or Δy
 If |m| < 1, sample at unit x intervals (Δx = 1):
Δy = m·Δx
yk+1 = yk + m
DDA Algorithm
 If |m| > 1, sample at unit y intervals (Δy = 1):
Δx = Δy / m
xk+1 = xk + 1/m
#include <cstdlib>   // abs

void setPixel (int x, int y);   // device-dependent: sets the pixel in the frame buffer

inline int round (const float a) { return int (a + 0.5); }

void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
    int dx = xEnd - x0, dy = yEnd - y0, steps, k;
    float xIncrement, yIncrement, x = x0, y = y0;

    // Sample along the axis of greater change, so the larger increment is exactly 1
    if (abs (dx) > abs (dy))
        steps = abs (dx);
    else
        steps = abs (dy);
    xIncrement = float (dx) / float (steps);
    yIncrement = float (dy) / float (steps);

    setPixel (round (x), round (y));
    for (k = 0; k < steps; k++) {
        x += xIncrement;
        y += yIncrement;
        setPixel (round (x), round (y));
    }
}
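For instance, lineDDA(20, 10, 30, 18) gives dx = 10 and dy = 8, hence steps = 10, xIncrement = 1.0, and yIncrement = 0.8; the loop takes one unit step in x per pixel and rounds the accumulated y each time.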
DDA algorithm
• Needs a lot of floating point arithmetic.
– 2 ‘round’s and 2 adds per pixel.
• Is there a simpler way?
• Can we use only integer arithmetic?
– Easier to implement in hardware.
Bresenham’s Line Algorithm
 A highly efficient incremental method for scan converting lines.
 Uses only incremental integer calculations.
 Works by testing the sign of an integer parameter whose value is proportional to the difference between the separations of the two candidate pixel positions from the actual line path.
Bresenham’s Line Algorithm
[Figure: at sampling column xk + 1 the line passes at height y = m(xk + 1) + c; the vertical separations from that point down to pixel row yk and up to pixel row yk + 1 are d_lower and d_upper]
Now compute d_lower and d_upper. Substituting xk + 1 for x in the line equation y = m·x + c gives the y value at the sampling column:

y = m(xk + 1) + c

so that

d_lower = y − yk = m(xk + 1) + c − yk
d_upper = (yk + 1) − y = yk + 1 − m(xk + 1) − c

d_lower − d_upper = 2m(xk + 1) − 2yk + 2c − 1

Remove m (= Δy/Δx) by multiplying both sides by Δx:

pk = Δx(d_lower − d_upper) = 2Δy·xk+1 − 2Δx·yk + Δx(2c − 1)

And since xk+1 = xk + 1, the above equation reduces to

pk = 2Δy·xk − 2Δx·yk + 2Δy + Δx(2c − 1)
 Because Δx(2c − 1) is a constant for a given line (Δx and c are fixed by the endpoints), we can replace Δx(2c − 1) with a constant, say b.

pk = 2Δy·xk − 2Δx·yk + b

 Let pk+1 be the next decision parameter. To find a recursive relation between pk and pk+1, replace k with k + 1 in the previous equation to obtain

pk+1 = 2Δy·xk+1 − 2Δx·yk+1 + b


 Now taking the difference between pk+1 and pk, we get

pk+1 − pk = 2Δy(xk+1 − xk) − 2Δx(yk+1 − yk)

pk+1 = pk + 2Δy(xk+1 − xk) − 2Δx(yk+1 − yk)

 Since xk+1 = xk + 1, this simplifies to

pk+1 = pk + 2Δy − 2Δx(yk+1 − yk)
Now the sign of pk is to be checked.

Case (i):
 If pk ≤ 0, the lower pixel (xk + 1, yk) is to be plotted, i.e., the next x is xk + 1 and the next y is yk.
 Therefore yk+1 = yk, and the previous equation, pk+1 = pk + 2Δy − 2Δx(yk+1 − yk), reduces to

pk+1 = pk + 2Δy
Case (ii):
 If pk > 0, the upper pixel (xk + 1, yk + 1) is to be plotted, so the next y is yk + 1. Substituting yk+1 = yk + 1 into the previous equation, pk+1 = pk + 2Δy − 2Δx(yk+1 − yk), we get

pk+1 = pk + 2Δy − 2Δx
The initial decision parameter p0 is computed from the equation

pk = 2Δy·xk − 2Δx·yk + 2Δy + Δx(2c − 1)

by replacing k with 0 and substituting c = y0 − (Δy/Δx)·x0:

p0 = 2Δy·x0 − 2Δx·y0 + 2Δy + Δx(2y0 − 2(Δy/Δx)·x0 − 1)

p0 = 2Δy·x0 − 2Δx·y0 + 2Δy + 2Δx·y0 − 2Δy·x0 − Δx

p0 = 2Δy − Δx
If the point on the theoretical line is equidistant from the two candidate pixel locations, then either pixel may be plotted.
Now the algorithm steps can be summarized as follows for the case |m| ≤ 1:
 Input the two endpoints
 Compute p0
 At each step, choose the pixel according to the sign of pk
 Continue with the next decision parameters till the end of the line is reached
Bresenham’s Line Algorithm
 Example: Digitize the line with endpoints (20, 10) and (30, 18)
p0 = 2Δy − Δx
pk+1 = pk + 2Δy
or
pk+1 = pk + 2(Δy − Δx)
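A compact C++ sketch of the |m| ≤ 1, left-to-right case built directly from the relations above (setPixel is the same device-dependent routine assumed in the DDA listing):

void setPixel (int x, int y);   // device-dependent, as in the DDA listing

// Bresenham line drawing, assuming 0 <= m <= 1 and x0 < xEnd
void lineBres (int x0, int y0, int xEnd, int yEnd)
{
    int dx = xEnd - x0, dy = yEnd - y0;
    int p = 2 * dy - dx;               // p0 = 2Δy − Δx
    int x = x0, y = y0;

    setPixel (x, y);
    while (x < xEnd) {
        ++x;
        if (p <= 0)
            p += 2 * dy;               // case (i): keep y
        else {
            ++y;
            p += 2 * (dy - dx);        // case (ii): step y up
        }
        setPixel (x, y);
    }
}

For the example, Δx = 10 and Δy = 8, so p0 = 6; the successive decision parameters are 6, 2, −2, 14, 10, 6, 2, −2, 14, 10, and the pixels plotted after (20, 10) are (21, 11), (22, 12), (23, 12), (24, 13), (25, 14), (26, 15), (27, 16), (28, 16), (29, 17), (30, 18).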
Trivial Situations: Do not need Bresenham
 m = 0 → horizontal line
 |m| = 1 → diagonal line (y = ±x + b)
 m = ∞ → vertical line
