CG Unit-1
Graphics Systems and Models: Graphics system,
Images: Physical and Synthetic,
Imaging system,
Synthetic camera model,
Programming interface,
Graphics architectures,
Programmable pipelines,
Performance characteristics.
Graphics Programming: Programming two-dimensional applications,
OpenGL API,
Primitives and attributes,
Color,
Viewing and Control functions.
1.1 Applications of Computer Graphics
Display of Information
Design
Simulation & Animation
User Interfaces
1.2 Graphics systems
A computer graphics system is a computer system; as such, it must have all the components
of a general-purpose computer system.
The frame buffer is implemented either with special types of memory chips or as part of system memory.
In simple systems the CPU does both normal and graphical
processing.
Graphics processing takes specifications of graphical primitives from the application program and assigns values to the pixels in the frame buffer. This step is also known as rasterization or scan conversion.
Today, virtually all graphics systems are characterized by special-purpose graphics processing
units (GPUs), custom-tailored to carry out specific graphics functions. The GPU can be either
on the motherboard of the system or on a graphics card. The frame buffer is accessed through
the graphics processing unit and usually is on the same circuit board as the GPU.
Output Devices
The most predominant type of display has been the Cathode Ray Tube
(CRT).
The synthetic camera model is the paradigm that treats creating a computer-generated image as analogous to forming an image with an optical system.
Various notions in the model:
Center of Projection
Projector lines
Image plane
Clipping window
In case of image formation using optical systems, the image is flipped relative to
the object.
In synthetic camera model this is avoided by introducing a plane in front of the
lens which is called the image plane.
The angle of view of the camera poses a restriction on the part of the object which can
be viewed.
This limitation is moved to the front of the camera by placing a Clipping Window in
the projection plane.
The application programmer uses the API functions and is shielded from the details of its
implementation.
The device driver is responsible for interpreting the output of the API and converting it into a form understood by the particular hardware.
The pen-plotter model
This is a 2-D system which moves a pen to draw images in 2 orthogonal directions.
E.g. : LOGO language implements this system.
moveto(x,y) – moves the pen to (x,y) without tracing a line.
lineto(x,y) – moves the pen to (x,y) by tracing a line.
Alternate raster based 2-D model :
Writes pixels directly to frame buffer
E.g. : write_pixel(x,y,color)
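The two 2-D models above can be sketched together in C: write_pixel is the raster primitive, and moveto/lineto are built on top of it with a simple DDA stepper. The framebuffer size, pen state, and function bodies here are illustrative assumptions, not code from any particular API.

```c
#include <stdlib.h>

#define WIDTH  64
#define HEIGHT 64

static unsigned char frame[HEIGHT][WIDTH]; /* 0 = blank, 1 = inked */
static int pen_x = 0, pen_y = 0;           /* current pen position */

/* Raster primitive: set one pixel, ignoring out-of-range requests */
static void write_pixel(int x, int y, unsigned char color)
{
    if (x >= 0 && x < WIDTH && y >= 0 && y < HEIGHT)
        frame[y][x] = color;
}

/* moveto: reposition the pen without tracing a line */
static void moveto(int x, int y) { pen_x = x; pen_y = y; }

/* lineto: trace a line from the pen position to (x,y) with a DDA stepper */
static void lineto(int x, int y)
{
    int dx = x - pen_x, dy = y - pen_y;
    int steps = abs(dx) > abs(dy) ? abs(dx) : abs(dy);
    for (int i = 0; i <= steps; i++) {
        write_pixel(pen_x + (steps ? dx * i / steps : 0),
                    pen_y + (steps ? dy * i / steps : 0), 1);
    }
    pen_x = x; pen_y = y;
}
```

The pen position is state carried between calls, which is exactly what distinguishes the pen-plotter model from the stateless write_pixel raster model.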
In order to obtain images of objects close to the real world, we need a 3-D object model.
3-D APIs (OpenGL - basics)
To follow the synthetic camera model discussed earlier, the API should support: objects, viewers, light sources, and material properties.
OpenGL defines primitives through a list of vertices.
Primitives: simple geometric objects that can be specified by a list of vertices
A simple program to draw a triangular polygon:
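The program itself is not reproduced in these notes. A minimal legacy-OpenGL/GLUT version might look like the following sketch; it needs a GLUT installation and a display to run, and the window title and vertex values are arbitrary choices.

```c
#include <GL/glut.h>

/* Display callback: draw one filled triangle */
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_POLYGON);            /* a triangle is a 3-vertex polygon */
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.0f,  0.5f);
    glEnd();
    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutCreateWindow("triangle");
    glutDisplayFunc(display);
    glutMainLoop();                 /* enter the event loop; never returns */
    return 0;
}
```

The triangle is drawn in the default view volume (a cube of side 2 centered at the origin), so no projection setup is needed.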
Material properties
– Absorption: color properties
– Scattering
Modeling-Rendering Paradigm:
Image formation is viewed as a 2-step process: modeling, then rendering.
Here the host system runs the application and generates vertices of the image.
Display processor architecture:
Display processor assembles instructions to generate image once & stores it in the
Display List. This is executed repeatedly to avoid flicker.
The whole process is independent of the host system.
Process objects one at a time in the order they are generated by the application
Clipping:
Just as a real camera cannot “see” the whole world, the virtual camera can only see part of
the world or object space
– Objects that are not within this volume are said to be clipped out of the scene
Rasterization :
If an object is not clipped out, the appropriate pixels in the frame buffer must
be assigned colors
Rasterizer produces a set of fragments for each object
Fragments are “potential pixels”
– Have a location in frame buffer
– Color and depth attributes
Vertex attributes are interpolated over objects by the rasterizer
Fragment Processor :
Fragments are processed to determine the color of the corresponding pixel in the
frame buffer
Colors can be determined by texture mapping or interpolation of vertex colors
Fragments may be blocked by other fragments closer to the camera
– Hidden-surface removal
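Hidden-surface removal via a depth (z) buffer can be sketched in plain C: a fragment is written only if it is nearer to the camera than whatever already occupies its pixel. The buffer sizes and the Fragment type here are illustrative assumptions.

```c
#include <float.h>

/* A fragment: a potential pixel with location, depth, and color */
typedef struct { int x, y; float depth; unsigned color; } Fragment;

#define W 4
#define H 4

static float    depth_buf[H][W];
static unsigned color_buf[H][W];

/* Reset depths to "infinitely far" and colors to background (0) */
static void clear_buffers(void)
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            depth_buf[y][x] = FLT_MAX;
            color_buf[y][x] = 0;
        }
}

/* Returns 1 if the fragment wins the depth test and is written,
   0 if it is hidden behind a nearer fragment */
static int process_fragment(Fragment f)
{
    if (f.depth < depth_buf[f.y][f.x]) {
        depth_buf[f.y][f.x] = f.depth;
        color_buf[f.y][f.x] = f.color;
        return 1;
    }
    return 0;
}
```

Because each fragment carries its own depth, objects can be processed one at a time in any order and the nearest surface still wins.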
For example, a loop that plots 5000 points:
glBegin(GL_POINTS);
for (k = 0; k < 5000; k++) {
    /* compute a point p and plot it, e.g. glVertex2fv(p); */
}
glEnd();   /* end the primitive after the loop */
glFlush();
Coordinate Systems :
One of the major advances in graphics systems is that users can work in any coordinate system they desire.
The user’s coordinate system is known as the “world coordinate system”
The actual coordinate system on the output device is known as the screen coordinates.
The graphics system is responsible for mapping the user's coordinates to the screen coordinates.
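A minimal sketch of that mapping, assuming a rectangular world-coordinate window and a raster whose y axis grows downward (the bounds and raster size are illustrative):

```c
/* Map a world coordinate in [wl,wr] x [wb,wt] to integer screen
   coordinates in a width x height raster. */
typedef struct { int x, y; } ScreenPt;

static ScreenPt world_to_screen(double wx, double wy,
                                double wl, double wr, double wb, double wt,
                                int width, int height)
{
    ScreenPt s;
    s.x = (int)((wx - wl) / (wr - wl) * (width  - 1) + 0.5);
    /* screen y usually grows downward, so flip the vertical axis */
    s.y = (int)((wt - wy) / (wt - wb) * (height - 1) + 0.5);
    return s;
}
```

For a world window of [-1, 1] x [-1, 1] and a 100 x 100 raster, the world's top-left corner (-1, 1) lands on screen pixel (0, 0) and the bottom-right corner (1, -1) on (99, 99).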
THE OPENGL
A typical OpenGL program forms an image in four steps:
1. Construct shapes from geometric primitives, thereby creating mathematical descriptions of objects.
2. Arrange the objects in three-dimensional space and select the desired vantage point for viewing the composed scene.
3. Calculate the color of all the objects. The color might be explicitly assigned by the
application, determined from specified lighting conditions, obtained by pasting a texture onto
the objects, or some combination of these three actions.
4. Convert the mathematical description of objects and their associated color information to
pixels on the screen. This process is called rasterization.
OpenGL functions
Primitive functions : Defines low level objects such as points, line segments, polygons
etc.
Attribute functions : Attributes determine the appearance of objects
– Color (points, lines, polygons)
Display vertices
Input functions : Allow us to deal with a diverse set of input devices such as the keyboard and mouse.
The entire graphics system can be considered a state machine whose inputs come from the application program.
Line segments:
GL_LINES – successive pairs of vertices define disjoint segments
GL_LINE_STRIP – successive vertices define a connected polyline
GL_LINE_LOOP – a line strip whose last vertex is joined back to the first
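How many segments each mode produces from n vertices can be modeled in a few lines of C (this mirrors the OpenGL specification's rules; it is not a GL call):

```c
/* Segment counts for OpenGL's line primitives, given n vertices */
enum LineMode { LINES, LINE_STRIP, LINE_LOOP };

static int segment_count(enum LineMode mode, int n)
{
    switch (mode) {
    case LINES:      return n / 2;             /* disjoint vertex pairs */
    case LINE_STRIP: return n > 1 ? n - 1 : 0; /* open polyline         */
    case LINE_LOOP:  return n > 1 ? n : 0;     /* strip + closing edge  */
    }
    return 0;
}
```

Note that GL_LINES silently ignores a trailing odd vertex, while the loop's extra segment is what closes the polyline back to its first vertex.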
Polygons: objects that have a border that can be described by a line loop and also have a well-defined interior.
Properties a polygon must have for it to be rendered correctly:
Simple – no two edges of the polygon cross each other
Convex – all points on the line segment between any 2 points inside the object, or on its boundary, are also inside the object or on its boundary
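In 2-D, convexity can be tested by checking that consecutive edges always turn the same way; a sketch, assuming a simple polygon given as vertex arrays in order:

```c
/* Convexity test: the z-component of the cross product of consecutive
   edges must never change sign all the way around the polygon. */
static int is_convex(const double *x, const double *y, int n)
{
    int pos = 0, neg = 0;
    for (int i = 0; i < n; i++) {
        double ax = x[(i + 1) % n] - x[i];             /* edge i    */
        double ay = y[(i + 1) % n] - y[i];
        double bx = x[(i + 2) % n] - x[(i + 1) % n];   /* edge i+1  */
        double by = y[(i + 2) % n] - y[(i + 1) % n];
        double cross = ax * by - ay * bx;
        if (cross > 0) pos = 1;
        if (cross < 0) neg = 1;
    }
    return !(pos && neg);   /* convex iff turns never switch direction */
}
```

A renderer can triangulate a convex polygon trivially (a fan from any vertex), which is why the property matters for correct rendering.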
Approximating a sphere
Graphics Text :
A graphics application should also be able to provide textual display.
There are 2 forms of text :
– Stroke text – Like any other geometric object, vertices are used to define
line segments & curves that form the outline of each character.
– Raster text – Characters are defined as rectangles of bits called bit blocks.
bit-block-transfer : the entire block of bits can be moved to the frame buffer using a
single function call.
1.12 Color
Three-color theory – "If two colors produce the same tristimulus values, then they are visually indistinguishable."
Additive color model – the primary colors are added together to give the perceived color. E.g. a CRT.
Subtractive color model – colored pigments remove color components from the light striking the surface. Here the primaries are the complementary colors: cyan, magenta and yellow.
RGB color
Each color component is stored separately in the frame buffer
Usually 8 bits per component in buffer
Note in glColor3f the color values range from 0.0 (none) to 1.0 (all), whereas in
glColor3ub the values range from 0 to 255
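The correspondence between the two ranges is a simple scaling; a sketch (the rounding choice is an assumption, though it matches what implementations typically do):

```c
/* Convert a [0.0, 1.0] color component (glColor3f style) to a
   [0, 255] unsigned byte (glColor3ub style), with clamping. */
static unsigned char color_f_to_ub(float c)
{
    if (c < 0.0f) c = 0.0f;
    if (c > 1.0f) c = 1.0f;
    return (unsigned char)(c * 255.0f + 0.5f);  /* round to nearest */
}

/* Convert back: a byte component to a float in [0.0, 1.0] */
static float color_ub_to_f(unsigned char c)
{
    return c / 255.0f;
}
```

So glColor3f(1.0, 0.0, 0.0) and glColor3ub(255, 0, 0) specify the same red.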
The color as set by glColor becomes part of the state and will be used until changed
– Colors and other attributes are not part of the object but are assigned when
the object is rendered
We can create conceptual vertex colors by code such as:
glColor(...); glVertex(...);
glColor(...); glVertex(...);
RGBA color system :
This has 4 arguments – RGB and alpha
alpha – Opacity.
glClearColor(1.0,1.0,1.0,1.0)
This would render the window white since all components are equal to 1.0, and is
opaque as alpha is also set to 1.0
Indexed color
Colors are indices into tables of RGB values
Requires less memory
o indices are usually 8 bits
o less important now, since memory is inexpensive and shading needs more colors than a small table provides
Viewing
The default viewing conditions in computer image formation are similar to the settings on
a basic camera with a fixed lens
The Orthographic view
Direction of Projection : When image plane is fixed and the camera is moved far
from the plane, the projectors become parallel and the COP becomes “direction of
projection”
OpenGL Camera
OpenGL places a camera at the origin in object space pointing in the negative z direction
The default viewing volume is a box centered at the origin with a side of length 2
Orthographic view
In the default orthographic view, points are projected forward along the z axis onto the plane z = 0
Transformations and Viewing
The pipeline architecture depends on multiplying together a number of
transformation matrices to achieve the desired image of a primitive.
Two important matrices :
Model-view
Projection
The values of these matrices are part of the state of the system.
In OpenGL, projection is carried out by a projection matrix (transformation)
There is only one set of transformation functions so we must set the matrix mode first
glMatrixMode (GL_PROJECTION)
Transformation functions are incremental so we start with an identity matrix and alter it
with a projection matrix that gives the view volume
glLoadIdentity();
glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
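The matrix glOrtho multiplies onto the projection matrix is given in the OpenGL specification; a sketch that builds it in OpenGL's column-major layout:

```c
/* Build the matrix that glOrtho(l, r, b, t, n, f) multiplies onto the
   current projection matrix, in OpenGL's column-major layout. */
static void ortho_matrix(double m[16],
                         double l, double r, double b, double t,
                         double n, double f)
{
    for (int i = 0; i < 16; i++) m[i] = 0.0;
    m[0]  =  2.0 / (r - l);          /* scale x into [-1, 1] */
    m[5]  =  2.0 / (t - b);          /* scale y into [-1, 1] */
    m[10] = -2.0 / (f - n);          /* scale and flip z     */
    m[12] = -(r + l) / (r - l);      /* translate x          */
    m[13] = -(t + b) / (t - b);      /* translate y          */
    m[14] = -(f + n) / (f - n);      /* translate z          */
    m[15] =  1.0;
}
```

For glOrtho(-1, 1, -1, 1, -1, 1), the default-like view volume above, the result is the identity except for m[10] = -1, which flips z so the camera looks down the negative z axis.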
glutInit allows the application to get command-line arguments and initializes the system
glutInitDisplayMode requests properties for the window (the rendering context)
o RGB color
o Single buffering
o Properties logically ORed together
glutInitWindowSize in pixels
glutInitWindowPosition from top-left corner of display
glutCreateWindow create window with a particular title
We may obtain undesirable output if the aspect ratio of the viewing rectangle (specified by glOrtho) is not the same as the aspect ratio of the window (specified by glutInitWindowSize).
Viewport – A rectangular area of the display window, whose height and width can
be adjusted to match that of the clipping window, to avoid distortion of the images.
void glViewport(GLint x, GLint y, GLsizei w, GLsizei h);
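Matching the viewport's aspect ratio to the clipping window's can be done by fitting the largest viewport of the desired aspect ratio inside the window; a sketch (the centering policy is an assumption) whose result could be passed to glViewport:

```c
/* Largest centered viewport of a given aspect ratio (width/height)
   that fits in a win_w x win_h window. */
typedef struct { int x, y, w, h; } Viewport;

static Viewport fit_viewport(int win_w, int win_h, double aspect)
{
    Viewport v;
    if ((double)win_w / win_h > aspect) {   /* window too wide: pillarbox */
        v.h = win_h;
        v.w = (int)(win_h * aspect + 0.5);
    } else {                                /* window too tall: letterbox */
        v.w = win_w;
        v.h = (int)(win_w / aspect + 0.5);
    }
    v.x = (win_w - v.w) / 2;                /* center the viewport */
    v.y = (win_h - v.h) / 2;
    return v;
}
```

For a square clipping window (aspect 1.0) in an 800 x 400 display window, this yields a 400 x 400 viewport centered horizontally, so the image is not stretched.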
In our application, once the primitive is rendered onto the display and the
application program ends, the window may disappear from the display.
Event processing loop :
void glutMainLoop();
Graphics is sent to the screen through a function called the display callback.
void glutDisplayFunc(function name)
The function myinit() is used to set the OpenGL state variables dealing with viewing and
attributes.
Control Functions
glutInit(int *argc, char **argv) initializes GLUT and processes any command line
arguments (for X, this would be options like -display and -geometry). glutInit() should
be called before any other GLUT routine.
glutInitDisplayMode(unsigned int mode) specifies whether to use an RGBA or
color- index color model. You can also specify whether you want a single- or double-
buffered window. (If you’re working in color-index mode, you’ll want to load certain
colors into the color map; use glutSetColor() to do this.)
If you want a window with double buffering, the RGBA color model, and a depth buffer, you might call
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH).
glutInitWindowPosition(int x, int y) specifies the screen location for the upper-left corner of your window.
glutInitWindowSize(int width, int height) specifies the size, in pixels, of your window.
int glutCreateWindow(char *string) creates a window with an OpenGL context.
It returns a unique identifier for the new window. Be warned: until glutMainLoop() is called, the window is not yet displayed.