Computer Graphics Course Introduction
Unit-1 Introduction
Date : 15.12.2013
INTRODUCTION
• Applications of computer graphics
• A graphics system
• Images
– Physical and synthetic
• Imaging systems
• The synthetic camera model
• The programmer’s interface
• Graphics architectures
• Programmable pipelines
• Performance characteristics
• Graphics Programming
– The Sierpinski gasket
– Programming two-dimensional applications.
• Computer Graphics
– What is it?
• Overview of what we will cover
– A Graphics Overview
– Graphics Theory
– A Graphics Software System: OpenGL
• Our approach will be top-down
– We want you to start writing application programs
that generate graphical output as quickly as
possible
Computer Graphics
• Computer Graphics deals with all aspects of
creating images with a computer
– Hardware
• CPUs
• GPUs
– Software
• OpenGL
• DirectX
– Applications
Computer Graphics
• Using a computer as a rendering tool for the
generation (from models) and manipulation of
images is called computer graphics
• More precisely: image synthesis
Applications of computer graphics
The development of Computer Graphics has
been driven by the needs of the user community
and by the advances in hardware and software.
• Display of Information
• Design
• Simulation & Animation
• User Interfaces
Applications of computer graphics
• Computer-Aided Design: for engineering and architectural systems, etc. Objects
may be displayed in wireframe outline form. A multi-window environment is also
favored for producing various zooming scales and views. Animations are useful for
testing performance.
• Presentation Graphics: to produce illustrations which summarize various kinds of
data. Besides 2D graphics, 3D graphics are good tools for reporting more complex data.
• Computer Art: painting packages are available. With a cordless, pressure-sensitive
stylus, artists can produce electronic paintings which simulate different brush
strokes, brush widths, and colors. Photorealistic techniques, morphing and
animation are very useful in commercial art. For film, 24 frames per second are
required. For a video monitor, 30 frames per second are required.
Applications of computer graphics
• Entertainment: Motion pictures, Music videos, and TV shows,
Computer games
• Education and Training: Training with computer-generated models
of specialized systems such as the training of ship captains and
aircraft pilots.
• Visualization: for analyzing scientific, engineering, medical and
business data or behavior. Converting data to visual form can help us
understand large volumes of data very efficiently.
• Image Processing: applying techniques to modify or interpret existing
pictures. It is widely used in medical applications.
• Graphical User Interfaces: multiple windows, icons, and menus allow a
computer system to be used more efficiently.
A graphics system
• A graphics system has five main elements:
– Input Devices
– Processor
– Memory
– Frame Buffer
– Output Devices
Pixels and the Frame Buffer
• A picture is produced as an array (raster) of picture elements (pixels).
• These pixels are collectively stored in the Frame Buffer.
• 2 types of refresh:
– Noninterlaced display: Pixels are displayed row by row at the refresh
rate.
– Interlaced display: Odd rows and even rows are refreshed alternately.
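As an illustration only (not part of the slides), the frame buffer can be pictured as a two-dimensional array of color values, one entry per pixel. A minimal C sketch, with an assumed 640x480 resolution and 8 bits per color channel:
/* Illustrative sketch of a frame buffer: a raster of RGB pixels.
   The resolution and the 24-bit color depth are assumptions for the example. */
#define FB_WIDTH  640
#define FB_HEIGHT 480

typedef struct { unsigned char r, g, b; } Pixel;

static Pixel frame_buffer[FB_HEIGHT][FB_WIDTH];   /* one entry per pixel */

/* Write one pixel; the display hardware scans this array out at the refresh
   rate (every row for a noninterlaced display, odd and even rows alternately
   for an interlaced display). */
void set_pixel(int x, int y, Pixel p)
{
    if (x >= 0 && x < FB_WIDTH && y >= 0 && y < FB_HEIGHT)
        frame_buffer[y][x] = p;
}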
The cathode-ray tube (CRT)
Shadow-Mask CRT
• Here, just behind the phosphor-coated face of the CRT, there is a metal plate.
• The shadow-mask is pierced with small round holes in a triangular pattern.
• The shadow-mask tube uses three guns, grouped in a triangle or delta responsible
for red, green and blue components of the light output of the CRT.
• The deflection system of the CRT operates on all three electron beams
simultaneously, bringing all three to the same point of focus on the shadow-mask.
• Where the three beams encounter holes in the mask, they pass through and strike the
phosphor.
• The phosphor in the tube is laid down very carefully in groups of three spots (one red,
one green and one blue) under each hole in the mask, so that each spot is struck only
by electrons from the appropriate gun.
• The effect of the mask is thus to “shadow” the spots of red phosphor from all but
the red beam, and likewise for the green and blue phosphor spots.
• We can therefore control the light output in each of the three component colors by
modulating the beam current of the corresponding gun.
Images: Physical and Synthetic
Computer graphics generates pictures with the aim of:
– creating realistic images
– creating images very close to “traditional” imaging methods
[Diagram: a 3D object viewed through a synthetic camera produces the output image]
What Now!
• Both the object and the viewer exist in a 3D world.
However, the image they define is 2D
• Image-Formation
– The Object + the Viewer’s Specifications
• An Image
• Future
– Chapter 2
• OpenGL
– Build Simple Objects.
– Chapter 9
• Interactive Objects
– Objects Relations w/ Each Other
Objects, Viewers & Camera
Camera system
– object and viewer exist in E3
– image is formed
• in the Human Visual System
(HVS) – on the retina
• In the film plane if a camera
is used
– Object(s) & Viewer(s) in E3
– Pictures in E2
Transformation from E3 to E2: projection
Light and Images
• Much information was missing from the
preceding picture:
– We have yet to mention light!
• If there were no light sources the objects would be
dark, and there would be nothing visible in our image.
– We have not mentioned how color enters the
picture.
– Or, what effects different kinds of
surfaces have on the objects.
Lights & Images
Light Sources:
– light sources
• position
• monochromatic / color
– if not used, the scene would be
very dark and flat
– shadows and reflections - very
important for realistic
perception
– geometric optics used for light
modeling
Imitating real life
• Taking a more physical approach, we can start with the
following arrangement:
Courtesy of http://www.webvision.med.utah.edu/into.html
Human Visual System
• different HVS response for single
frequency light – red/green/blue
• relative brightness response at
different frequencies
• this curve is known as the
Commission Internationale de
l’Éclairage (CIE) standard
observer curve
• the curve matches the sensitivity
of the monochromatic sensors used
in black & white film and video
cameras
• most sensitive to GREEN colors
Human Visual System
• three different types of cones in the HVS
• sensitive to blue, green & yellow light –
the yellow-sensitive cone is often
reported as red for compatibility
with camera & film
Synthetic Camera Model
computer-generated image based
on an optical system –
Synthetic Camera Model
viewer behind the camera can
move the back of the camera –
change of the distance d
i.e. additional flexibility
objects and viewer specifications
are independent – different
functions within a graphics
library
Synthetic Camera Model
• The object specification is independent of the viewer
specification.
• In a graphics library we would expect separate functions for
specifying objects and the viewer.
• We can compute the image using simple trigonometric calculations
a – situation with a camera
b – mathematical model – image plane moved in front of the camera
center of projection – center of the lens
projection plane – film plane
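As a sketch of that calculation (assuming, as in the model above, that the center of projection sits at the origin and the projection plane is the plane z = d): by similar triangles, a point (x, y, z) projects to
\[
x_p = \frac{x}{z/d}, \qquad y_p = \frac{y}{z/d}, \qquad z_p = d .
\]
The division by z/d is what makes objects farther from the camera appear smaller in the image.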
Synthetic Camera Model
Not all objects can be seen –
the limit is due to the viewing angle
Solution:
a clipping rectangle or clipping
window placed in front of
the camera
a and b show the case when the
clipping rectangle is shifted
aside – only part of the
scene is projected
Some Adjustments
• Symmetry in projections
– Move the image plane in front of the lens
Constraints
• Clipping
– We must also consider the limited size of the
image.
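In OpenGL this clipping window for 2D drawing is typically specified with gluOrtho2D; a minimal sketch, where the coordinate limits are example values only:
/* Example values: make 0 <= x <= 1, 0 <= y <= 1 the clipping window.
   Primitives outside this rectangle are clipped away. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, 1.0, 0.0, 1.0);   /* left, right, bottom, top */
glMatrixMode(GL_MODELVIEW);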
The Programmer’s Interface
• What is an API?
• Why do you want one?
Typical example of “sequential access”:
moveto(0, 0);
lineto(1, 0); lineto(1, 1); lineto(0, 1); lineto(0, 0);
{ draws a rectangle }
moveto(0, 1);
lineto(0.5, 1.866); lineto(1.5, 1.866);
lineto(1.5, 0.866);
lineto(1, 0); moveto(0, 0);
lineto(1.5, 1.866);
{ draws a cube using an oblique projection }
Three-Dimensional APIs
-Synthetic Camera Model
• If we are to follow the synthetic camera
model, we need functions in the API to
specify:
– Objects
– The Viewer
– Light Sources
– Material Properties
Objects
• Objects are defined from primitives such as points (vertices), line segments, and
polygons, which are combined to represent complex objects
• API primitives are displayed rapidly on the hardware
• usual API primitives:
– points
– line segments
– polygons
– text
• The following code fragment defines a triangle in OpenGL:
glBegin(GL_POLYGON);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.0, 1.0, 0.0);
glVertex3f(0.0, 0.0, 1.0);
glEnd();
The Viewer
Camera specification in APIs:
• position – usually the center of the lens
• orientation – the camera coordinate
system in the center of the lens;
the camera can rotate around those
three axes
• the focal length of the lens determines
the size of the image on the film –
effectively the viewing angle
• film plane – the camera has a height
and a width
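A hedged sketch of how these camera parameters map onto fixed-function OpenGL/GLU calls (all numeric values are placeholders, not taken from the slides):
/* Position and orientation of the camera: eye point, point looked at, up vector */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,    /* eye (center of lens)      */
          0.0, 0.0, 0.0,    /* point the camera looks at */
          0.0, 1.0, 0.0);   /* up direction              */

/* Viewing angle (plays the role of focal length) and film-plane aspect ratio */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0,        /* vertical field of view in degrees */
               4.0 / 3.0,   /* width/height of the image plane   */
               1.0, 100.0); /* near and far clipping distances   */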
Application Programmer’s Interface
main()
{
   initialize_the_system();
   for (some_number_of_points)
   {
      pt = generate_a_point();
      display_the_point(pt);
   }
   cleanup();
}
The final program in OpenGL will be almost that
simple, but slightly different
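For illustration only, a minimal GLUT version of that skeleton (the point coordinates and the window title are placeholders):
#include <GL/glut.h>

/* plays the role of "display_the_point": the drawing callback */
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_POINTS);
        glVertex2f(0.25, 0.25);   /* placeholder points */
        glVertex2f(0.75, 0.75);
    glEnd();
    glFlush();
}

/* plays the role of "initialize_the_system" plus the loop over points */
int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("points");
    glutDisplayFunc(display);     /* register the drawing callback */
    glutMainLoop();               /* GLUT runs the event loop; cleanup happens on exit */
    return 0;
}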
GL specifications
OpenGL types
• GLfloat, GLint – instead of float, int (integer)
data types used in C or Pascal
Examples:
#define GLfloat float /* a simple change of data types */
glBegin(GL_LINES);
glVertex2f(x1,y1); glVertex2f(x2,y2);
glEnd( ); /* specifies a line segment */
glBegin(GL_POINTS);
glVertex2f(x1,y1); glVertex2f(x2,y2);
glEnd( ); /* specifies two points */
It is necessary:
• to write the CORE program
• to specify
– color of drawing
– where on the screen the picture will appear
– how large the image will be
– how to create an area of the screen for our
image – a window
– how much of our infinite drawing pad will appear
on the screen
– how long the image will remain on the screen
Those are important issues! – they will be solved
later
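A hedged sketch of the GLUT/OpenGL calls covering those choices (window size, position, and colors are example values; the clipping window would be set with gluOrtho2D as in the earlier sketch):
/* where on the screen the picture appears, and how large it is */
glutInitWindowSize(500, 500);
glutInitWindowPosition(0, 0);
glutCreateWindow("simple");

/* background color and drawing color */
glClearColor(1.0, 1.0, 1.0, 1.0);   /* clear to white */
glColor3f(0.0, 0.0, 0.0);           /* draw in black  */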
Coordinate System