
Computer Graphics

UNIT - 1
 Graphics Systems and Models: Graphics system,
 Images,
 Physical and Synthetic,
 Imaging system,
 Synthetic camera model,
 Programming interface,
 Graphics architectures,
 Programmable pipelines,
 Performance characteristics.
 Graphics Programming: Programming two-dimensional applications,
 OpenGL API,
 Primitives and attributes,
 Color,
 Viewing and Control functions.

Dept. of CSE, NGIT 1


1.1 Applications of computer graphics:
The applications of computer graphics are many and varied; we can, however, divide them
into four major areas:

 Display of Information
 Design
 Simulation & Animation
 User Interfaces
1.2 Graphics systems
A computer graphics system is a computer system; as such, it must have all the components
of a general-purpose computer system.

A Graphics system has 5 main elements:


 Input Devices
 Processor
 Memory
 Frame Buffer
 Output Devices

Pixels and the Frame Buffer
 A picture is produced as an array (raster) of picture elements (pixels).
 These pixels are collectively stored in the Frame Buffer.
Properties of frame buffer:
Resolution – number of pixels in the frame buffer
Depth or Precision – number of bits used for each pixel
E.g.: a 1-bit-deep frame buffer allows 2 colors;
an 8-bit-deep frame buffer allows 256 colors.
In full-color systems, there are 24 (or more) bits per pixel. Such systems can display sufficient
colors to represent most images realistically. They are also called true-color systems, or RGB-
color systems, because individual groups of bits in each pixel are assigned to each of the three
primary colors—red, green, and blue—used in most displays.

A Frame buffer is implemented either with special types of memory chips or it can be a
part of system memory.
In simple systems, the CPU does both normal and graphics processing.
Graphics processing – taking specifications of graphical primitives from the application
program and assigning values to the pixels in the frame buffer. It is also known as
rasterization or scan conversion.
Today, virtually all graphics systems are characterized by special-purpose graphics processing
units (GPUs), custom-tailored to carry out specific graphics functions. The GPU can be either
on the mother board of the system or on a graphics card. The frame buffer is accessed through
the graphics processing unit and usually is on the same circuit board as the GPU.

Output Devices
The most predominant type of display has been the Cathode Ray Tube
(CRT).


Various parts of a CRT :


 Electron Gun – emits electron beam which strikes the phosphor coating to emit light.
 Deflection Plates – control the direction of the beam. The output of the computer
is converted by digital-to-analog converters to voltages across the x & y deflection plates.
 Refresh Rate – In order to view a flicker free image, the image on the screen has to be
retraced by the beam at a high rate (modern systems operate at 85Hz)
2 types of refresh:
 Noninterlaced display: Pixels are displayed row by row at the refresh rate.
 Interlaced display: Odd rows and even rows are refreshed alternately.

1.3 Images: Physical and synthetic


Elements of image formation:
 Objects
 Viewer
 Light source (s)

Image formation models


Ray tracing :
One way to form an image is to follow rays of light from a point source, finding which
rays enter the lens of the camera. However, each ray of light may have multiple
interactions with objects before being absorbed or going to infinity.


1.4 Imaging systems


It is important to study the methods of image formation in the real world so that this could
be utilized in image formation in the graphics systems as well.
1. Pinhole camera:

Use trigonometry to find the projection of a point at (x,y,z). With the pinhole at the
origin and the film plane at z = -d:

xp = -x/(z/d)    yp = -y/(z/d)    zp = -d

These are the equations of simple perspective.


2. Human visual system

 Rods are used for : monochromatic, night vision


 Cones

 Color sensitive
 Three types of cones
 Only three values (the tristimulus values) are sent to the brain
 Need only match these three values
– Need only three primary colors

1.5 The Synthetic camera model

The paradigm that treats creating a computer-generated image as being similar
to forming an image using an optical system.
Various notions in the model :
Center of Projection
Projector lines
Image plane
Clipping window

 In case of image formation using optical systems, the image is flipped relative to
the object.
 In synthetic camera model this is avoided by introducing a plane in front of the
lens which is called the image plane.
The angle of view of the camera poses a restriction on the part of the object which can
be viewed.
This limitation is moved to the front of the camera by placing a Clipping Window in
the projection plane.


1.6 Programmer’s interface :


A user interacts with the graphics system with self-contained packages and input devices.
E.g. A paint editor.
This package or interface enables the user to create or modify images without having to write
programs. The interface consists of a set of functions (API) that resides in a graphics library.

The application programmer uses the API functions and is shielded from the details of its
implementation.
The device driver is responsible for interpreting the output of the API and converting it into a
form understood by the particular hardware.
The pen-plotter model
This is a 2-D system which moves a pen to draw images in 2 orthogonal directions.
E.g. : LOGO language implements this system.
moveto(x,y) – moves the pen to (x,y) without tracing a line.
lineto(x,y) – moves the pen to (x,y) by tracing a line.
Alternate raster based 2-D model :
Writes pixels directly to frame buffer
E.g. : write_pixel(x,y,color)
In order to obtain images of objects close to the real world, we need 3-D object model.
3-D APIs (OpenGL - basics)
To follow the synthetic camera model discussed earlier, the API should
support: Objects, viewers, light sources, material properties.
OpenGL defines primitives through a list of vertices.
Primitives: simple geometric objects defined by a list of vertices
Simple prog to draw a triangular polygon :

glBegin(GL_POLYGON);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.0, 1.0, 0.0);
glVertex3f(0.0, 0.0, 1.0);
glEnd();
Specifying viewer or camera:
Position - position of the COP
Orientation – rotation of the camera along 3 axes
Focal length – determines the size of image
Film Plane – has a height & width & can be adjusted independent of the orientation of the
lens.
Function calls for specifying the camera :
gluLookAt(cop_x,cop_y,cop_z,at_x,at_y,at_z,up_x,up_y,up_z);
gluPerspective(field_of_view,aspect_ratio,near,far);
Lights and materials :
 Types of lights
– Point sources vs distributed sources
– Spot lights
– Near and far sources
– Color properties

 Material properties
– Absorption: color properties
– Scattering
Modeling Rendering Paradigm :
Viewing image formation as a 2 step process

Modeling → Rendering

E.g. Producing a single frame in an animation:

1st step : Designing and positioning objects

2nd step : Adding effects, light sources and other details

The interface can be a file with the model and additional info for final rendering.

1.7 Graphics Architectures


Combination of hardware and software that implements the functionality of the API.

 Early Graphics system :

Host → DAC → Output Device

Here the host system runs the application and generates the vertices of the image.
Display processor architecture :

 Relieves the CPU from doing the refreshing action

 Display processor assembles instructions to generate image once & stores it in the
Display List. This is executed repeatedly to avoid flicker.
 The whole process is independent of the host system.

1.8 Programmable Pipelines


E.g. An arithmetic pipeline
Terminologies :
Latency – time taken from the first stage until the end result is produced.
Throughput – number of outputs per given time.
Graphics Pipeline :

 Process objects one at a time in the order they are generated by the application

 All steps can be implemented in hardware on the graphics card
Vertex Processor
 Much of the work in the pipeline is in converting object representations from
one coordinate system to another
– Object coordinates
– Camera (eye) coordinates
– Screen coordinates
 Every change of coordinates is equivalent to a matrix transformation
 Vertex processor also computes vertex colors
Primitive Assembly
Vertices must be collected into geometric objects before clipping and rasterization can
take place
– Line segments
– Polygons
– Curves and surfaces


Clipping:
Just as a real camera cannot “see” the whole world, the virtual camera can only see part of
the world or object space

– Objects that are not within this volume are said to be clipped out of the scene

Rasterization :
 If an object is not clipped out, the appropriate pixels in the frame buffer must
be assigned colors
 Rasterizer produces a set of fragments for each object
 Fragments are “potential pixels”
– Have a location in the frame buffer
– Color and depth attributes
 Vertex attributes are interpolated over objects by the rasterizer
Fragment Processor :
 Fragments are processed to determine the color of the corresponding pixel in the
frame buffer
 Colors can be determined by texture mapping or interpolation of vertex colors
 Fragments may be blocked by other fragments closer to the camera
– Hidden-surface removal

1.9 Graphics Programming


The Sierpinski Gasket :
It is an object that can be defined recursively & randomly
Basic Algorithm :
Start with 3 non-collinear points in space. Let the plane be z=0.
1. Pick an initial point (x,y,z) at random inside the triangle.
2. Select 1 of the 3 vertices at random.

3. Find the location halfway between the initial point & the randomly selected vertex.
4. Display the new point.
5. Replace the point (x,y,z) with the new point
6. Return to step 2.
Assumption : we view the 2-D space or surface as a subset of the 3-D space.
A point can be represented as p = (x,y,z). In the plane z=0, p = (x,y,0).
Vertex function general form – glVertex*() – where * is of the form ntv:
n – number of dimensions (2, 3, or 4)
t – data type (i, f, d)
v – if present, indicates a pointer to an array.
Programming 2-D applications :
Definition of basic OpenGL types :
 E.g. – glVertex2i(GLint xi, GLint yi)
or
#define GLfloat float
GLfloat vertex[3];
glVertex3fv(vertex);
E.g. prog :
glBegin(GL_LINES);
glVertex3f(x1,y1,z1);
glVertex3f(x2,y2,z2);
glEnd();
The sierpinski gasket display() function :
void display()
{
GLfloat vertices[3][3] = {{0.0,0.0,0.0},{25.0,50.0,0.0},{50.0,0.0,0.0}};
/* an arbitrary triangle in the plane z=0 */
GLfloat p[3] = {7.5,5.0,0.0}; /* initial point inside the triangle */
int j, k;
/* rand() is declared in <stdlib.h> */

glBegin(GL_POINTS);
for (k=0;k<5000;k++){

j=rand()%3;
p[0] = (p[0] + vertices[j][0])/2; /* compute new location */
p[1] = (p[1] + vertices[j][1])/2;
/* display new point */
glVertex3fv(p);
}

glEnd();
glFlush();
}
Coordinate Systems :

 One of the major advances in the graphics systems allows the users to work on
any coordinate systems that they desire.
 The user’s coordinate system is known as the “world coordinate system”
 The actual coordinate system on the output device is known as the screen coordinates.
 The graphics system is responsible for mapping the user’s coordinates to the screen
coordinates.

THE OPENGL

1.10 The OpenGL API


OpenGL is a software interface to graphics hardware.


This interface consists of about 150 distinct commands that you use to specify the objects
and operations needed to produce interactive three-dimensional applications.
OpenGL is designed as a streamlined, hardware-independent interface to be implemented
on many different hardware platforms.
To achieve these qualities, no commands for performing windowing tasks or obtaining
user input are included in OpenGL; instead, you must work through whatever windowing
system controls the particular hardware you’re using.
The following list briefly describes the major graphics operations which OpenGL performs to
render an image on the screen.
1. Construct shapes from geometric primitives, thereby creating mathematical descriptions of
objects.
(OpenGL considers points, lines, polygons, images, and bitmaps to be primitives.)

2. Arrange the objects in three-dimensional space and select the desired vantage point for
viewing the composed scene.

3. Calculate the color of all the objects. The color might be explicitly assigned by the
application, determined from specified lighting conditions, obtained by pasting a texture onto
the objects, or some combination of these three actions.

4. Convert the mathematical description of objects and their associated color information to
pixels on the screen. This process is called rasterization.

OpenGL functions

 Primitive functions : Defines low level objects such as points, line segments, polygons
etc.
 Attribute functions : Attributes determine the appearance of objects
– Color (points, lines, polygons)

– Size and width (points, lines)


– Polygon mode

 Display as filled
 Display edges

 Display vertices

 Viewing functions : Allows us to specify various views by describing the camera’s


position and orientation.

 Transformation functions : Provides user to carry out transformation of objects


like rotation, scaling etc.

 Input functions : Allows us to deal with a diverse set of input devices like keyboard,
mouse etc.

 Control functions : Enables us to initialize our programs, helps in dealing with


any errors during execution of the program.

 Query functions : Helps query information about the properties of the


particular implementation.

The entire graphics system can be considered as a state machine getting inputs from
the application program.


– inputs may change the state of a machine


– inputs may cause the machine to produce a
visible output.
2 types of graphics functions :
– functions defining primitives
– functions that change the state of the machine.

1.11 Primitives and attributes


OpenGL supports 2 types of primitives :
 Geometric primitives (vertices, line segments..) – they pass through the
geometric pipeline

 Raster primitives (arrays of pixels) – passes through a separate pipeline to the


frame buffer.

Line segments

GL_LINES

GL_LINE_STRIP
GL_LINE_LOOP
Polygons :
Objects that have a border that can be described by a line loop & also have a
well-defined interior.
Properties a polygon must have for it to be rendered correctly :
 Simple – No 2 edges of a polygon cross each other
 Convex – All points on the line segment between any 2 points inside the object, or on


its boundary, are inside the object.


 Flat – All the vertices forming the polygon lie in the same plane . E.g. a triangle.
Polygon Issues
 The user program can check if the above conditions are true
– OpenGL will produce output if these conditions are violated but it may not
be what is desired
 Triangles satisfy all conditions

Approximating a sphere

 Fans and strips allow us to approximate curved surfaces in a simple way.


 E.g. – a unit sphere can be described by the following set of equations :
 x(Θ,Φ) = sin Θ cos Φ,
 y(Θ,Φ) = sin Θ sin Φ,
 z(Θ,Φ) = cos Θ
The sphere is constructed using quad strips: each band between two adjacent circles of
constant latitude is approximated by a quad strip.
The poles of the sphere are constructed using triangle fans, as can be seen in the diagram.


Graphics Text :
A graphics application should also be able to provide textual display.
 There are 2 forms of text :
– Stroke text – Like any other geometric object, vertices are used to define
line segments & curves that form the outline of each character.
– Raster text – Characters are defined as rectangles of bits called bit blocks.

bit-block-transfer : the entire block of bits can be moved to the frame buffer using a
single function call.
1.12 Color

A visible color can be characterized by the function C(λ)


Tristimulus values – responses of the 3 types of cones to the colors.

Three-color theory – “If 2 colors produce the same tristimulus values, then they are
visually indistinguishable.”

Additive color model – Adding together the primary colors to get the perceived colors.
E.g. CRT.

Subtractive color model – Colored pigments remove color components from light that
is striking the surface. Here the primaries are the complementary colors : cyan, magenta
and yellow.
RGB color
Each color component is stored separately in the frame buffer
Usually 8 bits per component in buffer
Note in glColor3f the color values range from 0.0 (none) to 1.0 (all), whereas in
glColor3ub the values range from 0 to 255


The color as set by glColor becomes part of the state and will be used until changed
– Colors and other attributes are not part of the object but are assigned when
the object is rendered
 We can create conceptual vertex colors by code such as

glColor
glVertex
glColor
glVertex
RGBA color system :
 This has 4 arguments – RGB and alpha
alpha – Opacity.
glClearColor(1.0, 1.0, 1.0, 1.0);
This sets the clear color to white, since all RGB components are equal to 1.0; the color
is opaque since alpha is also set to 1.0.
Indexed color
Colors are indices into tables of RGB values
Requires less memory
o indices usually 8 bits
o not as important now
 Memory inexpensive
 Need more colors for shading


Viewing

The default viewing conditions in computer image formation are similar to the settings on
a basic camera with a fixed lens
The Orthographic view
Direction of Projection : When image plane is fixed and the camera is moved far
from the plane, the projectors become parallel and the COP becomes “direction of
projection”
OpenGL Camera

OpenGL places a camera at the origin in object space, pointing in the negative z
direction.
The default viewing volume is a box centered at the origin with a side of length 2.

Orthographic view
In the default orthographic view, points are projected forward along the z axis onto the plane
z = 0.
Transformations and Viewing
The pipeline architecture depends on multiplying together a number of
transformation matrices to achieve the desired image of a primitive.
Two important matrices :
 Model-view
 Projection
The values of these matrices are part of the state of the system.
In OpenGL, projection is carried out by a projection matrix (transformation)
There is only one set of transformation functions so we must set the matrix mode first

glMatrixMode(GL_PROJECTION);
Transformation functions are incremental so we start with an identity matrix and alter it
with a projection matrix that gives the view volume
glLoadIdentity();
glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);

1.13 Control Functions (interaction with windows)


Window – A rectangular area of our display.
Modern systems allow many windows to be displayed on the screen
(multiwindow environment).
The position of the window is with reference to the origin. The origin (0,0) is the
top left corner of the screen.


glutInit allows the application to get command line arguments and initializes the system
glutInitDisplayMode requests properties for the window (the rendering context)
o RGB color
o Single buffering
o Properties logically ORed together
glutInitWindowSize – size in pixels
glutInitWindowPosition – position from top-left corner of display
glutCreateWindow – create window with a particular title

Aspect ratio and viewports


Aspect ratio is the ratio of width to height of a particular object.

We may obtain undesirable output if the aspect ratio of the viewing rectangle
(specified by glOrtho) is not the same as the aspect ratio of the window (specified
by glutInitWindowSize).

Viewport – A rectangular area of the display window, whose height and width can
be adjusted to match that of the clipping window, to avoid distortion of the images.
void glViewport(GLint x, GLint y, GLsizei w, GLsizei h);

The main, display and myinit functions


In our application, once the primitive is rendered onto the display and the
application program ends, the window may disappear from the display.
Event processing loop :
void glutMainLoop();
Graphics is sent to the screen through a function called the display callback :
void glutDisplayFunc(function_name);
The function myinit() is used to set the OpenGL state variables dealing with viewing and
attributes.

Control Functions

glutInit(int *argc, char **argv) initializes GLUT and processes any command line
arguments (for X, this would be options like -display and -geometry). glutInit() should
be called before any other GLUT routine.
glutInitDisplayMode(unsigned int mode) specifies whether to use an RGBA or
color-index color model. You can also specify whether you want a single- or
double-buffered window. (If you’re working in color-index mode, you’ll want to load certain
colors into the color map; use glutSetColor() to do this.)
If you want a window with double buffering, the RGBA color model, and a depth
buffer, you might call
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH).
glutInitWindowPosition(int x, int y) specifies the screen location for the upper-left
corner of your window.
glutInitWindowSize(int width, int height) specifies the size, in pixels, of your window.
int glutCreateWindow(char *string) creates a window with an OpenGL context.
It returns a unique identifier for the new window. Be warned: until glutMainLoop() is
called, the window is not yet displayed.

You might also like