Module 1
Angel: Interactive Computer Graphics 5E © Addison-Wesley 2009
Chapter 1: Graphics System and Models
1. Applications of Computer Graphics
2. A Graphics System
3. Images: Physical and Synthetic
4. Imaging Systems
5. The Synthetic Camera Model
6. The Programmer’s Interface
7. Graphics Architecture
8. Programmable Pipeline
9. Performance Characteristics.
Computer Graphics
• Computer graphics is concerned with all aspects of producing pictures or
images using a computer
• Hardware
• Software
• Applications
• The field began humbly almost 50 years ago, with the display of a few lines on a
cathode-ray tube (CRT); now, we can create images by computer that are
indistinguishable from photographs of real objects.
• We routinely train pilots with simulated airplanes, generating graphical displays
of a virtual environment in real time.
• Feature-length movies made entirely by computer have been successful, both
critically and financially.
• Massive multiplayer games can involve tens of thousands of concurrent
participants.
1.1 Applications of Computer Graphics
1. Display of Information
2. Design
3. Simulation
• Flight Simulation for training pilots
• Computer Games
• Television and Computer-Animated Films: Toy Story (Pixar), Ice Age
• Virtual Reality
4. User interfaces
• Window-Based Operating Systems: Microsoft Windows, Macintosh, X
Windows
• Internet Browsers: Netscape, Explorer
1.2 A Graphics System
• A computer graphics system is a computer system; as such, it must have
all the components of a general-purpose computer system.
• There are five major elements in our system:
1. Input devices
2. Processor
3. Memory
4. Frame buffer
5. Output devices
(Figure: block diagram of a graphics system; the image is formed in the frame buffer (FB) and passed to the output devices.)
1.2.1 Pixels and the Frame Buffer
• Almost all graphics systems are raster based.
• A picture is produced as an array—the raster—of picture elements, or pixels,
within the graphics system.
• As we can see from Figure 1.2, each pixel corresponds to a location, or
small area, in the image.
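To make the raster idea concrete, here is a minimal C sketch (our own illustration; the names FrameBuffer and set_pixel are hypothetical, not from the text) of a frame buffer as a width × height array of RGB pixels:
#include <stdlib.h>

typedef struct { unsigned char r, g, b; } Pixel;  /* one picture element */

typedef struct {
    int width, height;
    Pixel *data;    /* row-major: data[y * width + x] */
} FrameBuffer;

/* Write one pixel, ignoring out-of-range locations. */
void set_pixel (FrameBuffer *fb, int x, int y, Pixel c)
{
    if (x >= 0 && x < fb->width && y >= 0 && y < fb->height)
        fb->data[y * fb->width + x] = c;
}

int main (void)
{
    FrameBuffer fb = { 640, 480, NULL };
    fb.data = calloc ((size_t) fb.width * fb.height, sizeof (Pixel));
    Pixel red = { 255, 0, 0 };
    set_pixel (&fb, 320, 240, red);  /* light the center pixel */
    free (fb.data);
    return 0;
}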
CRT Display Principles
• Raster-Scan Displays
• Picture element: a point on the screen, referred to as a “pixel”
• Picture information is stored in a refresh (frame) buffer
• When electrons strike the phosphor coating on the tube, light is
emitted.
• The direction of the beam is controlled by two pairs of deflection
plates.
• The output of the computer is converted, by digital-to-analog
converters, to voltages across the x and y deflection plates.
• Light appears on the surface of the CRT when a sufficiently intense
beam of electrons is directed at the phosphor.
• Such a device is known as a random-scan, calligraphic, or vector CRT,
because the beam can be moved directly from any position to any
other position.
• If the intensity of the beam is turned off, the beam can be moved
to a new position without changing the visible display.
Refresh Rates
• A typical CRT will emit light for only a short time—usually, a
few milliseconds— after the phosphor is excited by the
electron beam.
• For a human to see a steady, flicker-free image on most CRT
displays, the same path must be retraced, or refreshed, by the
beam at a sufficiently high rate, the refresh rate.
• The frequency at which a picture is redrawn on the screen is
referred to as the “refresh rate”
• As each screen refresh takes place, we tend to see each frame as a
smooth continuation of the patterns in the previous frame, so long as
the refresh rate is not too low.
• Below 24 fps (frames per second), the picture appears to flicker.
• Old silent films, for example, show this effect because they were
photographed at a rate of 16 fps.
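As a quick arithmetic check (not in the slides): a display refreshed at 60 Hz redraws every 1/60 s ≈ 16.7 ms, well above the roughly 24 fps flicker threshold, whereas the 16 fps of old silent films falls below it.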
• Although CRTs are still the most common display device, they are rapidly
being replaced by flat-screen technologies.
• Flat-panel monitors are inherently raster.
• Although there are multiple technologies available, including light-
emitting diodes (LEDs), liquid-crystal displays (LCDs), and plasma panels,
all use a two-dimensional grid to address individual light-emitting
elements.
• The middle plate in an LED panel contains light-emitting diodes that can
be turned on and off by the electrical signals sent to the grid.
Aspect Ratio
• Until recently, most displays had a 4:3 width-to-height ratio
(or aspect ratio) that corresponded to commercial
television.
• Computer displays moved up to the popular resolutions of
1024 × 768 (XGA) and 1280 × 1024 (SXGA).
• The newer High Definition Television (HDTV) standard uses
a 16:9 aspect ratio
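As a worked check: 1024 × 768 reduces to 4:3 (divide both numbers by 256), 1280 × 1024 is actually the slightly taller 5:4 ratio, and the HDTV resolution of 1920 × 1080 reduces to 16:9 (divide both by 120).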
1.2.3 Input Devices
• Most graphics systems provide a keyboard and at least one other input
device.
• The most common input devices are the mouse, the joystick, and the data
tablet.
• Each provides positional information to the system, and each usually is
equipped with one or more buttons to provide signals to the processor.
• Often called pointing devices, these devices allow a user to indicate a
particular location on the display.
• Game consoles lack keyboards but include a greater variety of
input devices than a standard workstation.
• A typical console might have multiple buttons, a joystick, and dials.
• Devices such as the Nintendo Wii are wireless and can sense
accelerations in three dimensions.
• Games and virtual reality applications have all generated the need
for input devices that provide more than two-dimensional data.
• Higher-dimensional data can be obtained by devices such as data
gloves, which include many sensors, and computer vision systems.
1.3 Images: Physical and Synthetic
• Computer-generated images are synthetic or
artificial, in the sense that the objects being
imaged may not exist physically.
• The preferred method to form computer-
generated images is similar to traditional
imaging methods, such as cameras and the
human visual system.
• Hence, before we discuss the mechanics of
writing programs to generate images, we
discuss the way images are formed by optical
systems.
1.3.1 Objects and Viewers
Image Formation
• In computer graphics, we form images, which are generally two-dimensional,
using a process analogous to the way images are formed by physical imaging
systems:
• Cameras
• Microscopes
• Telescopes
• Human visual system
1.3.1 Objects and Viewers (contd.)
Elements of Image Formation
• Objects
• Viewer
Objects
• The object exists in space independent of any image-formation
process and of any viewer.
• In computer graphics, where we deal with synthetic objects, we
form objects by specifying the positions in space of various
geometric primitives, such as points, lines, and polygons
• In most graphics systems, a set of locations in space, or
vertices, is sufficient to define, or approximate, most
objects.
• For example, a line can be specified by two vertices;
• A polygon can be specified by an ordered list of vertices;
• A sphere can be specified by two vertices that give its
center and any point on its circumference.
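As an illustrative (hypothetical) data layout in C, these specifications might be stored as plain vertex records; the sphere's radius can be recovered from its two defining vertices:
#include <math.h>

typedef struct { float x, y, z; } Vertex;

typedef struct { Vertex a, b; } Line;          /* two vertices */
typedef struct { int n; Vertex *v; } Polygon;  /* ordered vertex list */
typedef struct { Vertex center, surface; } Sphere;

/* Radius = distance from the center to the point on the circumference. */
float sphere_radius (Sphere s)
{
    float dx = s.surface.x - s.center.x;
    float dy = s.surface.y - s.center.y;
    float dz = s.surface.z - s.center.z;
    return sqrtf (dx * dx + dy * dy + dz * dz);
}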
Viewers
• Every imaging system must provide a means of forming
images from objects.
• To form an image, we must have someone or something
that is viewing our objects, be it a person, a camera, or a
digitizer.
• It is the viewer that forms the image of our objects.
• In the human visual system, the image is formed on the
back of the eye.
• In a camera, the image is formed in the film plane.
• Figure 1.7 shows a camera system viewing a building.
• Here we can observe that both the object and the viewer exist in a
three-dimensional world.
• However, the image that they define—what we find on the film
plane—is two dimensional.
1.3.2 Light and Images
• The major components of the visual system are shown in Figure 1.15.
• Light enters the eye through the lens and cornea, a transparent structure that
protects the eye.
• The iris opens and closes to adjust the amount of light entering the eye.
• The lens forms an image on a two-dimensional structure called the retina at the
back of the eye.
• The rods and cones (so named because of their appearance when magnified)
are light sensors and are located on the retina.
• They are excited by electromagnetic energy in the range of 350 to 780 nm.
• The rods are low-level-light sensors that account for our night
vision and are not color sensitive; the cones are responsible for
our color vision.
• Whereas intensity is a physical measure of light energy,
brightness is a measure of how intense we perceive the light
emitted from an object to be.
• Brightness is an overall measure of how we react to the
intensity of light.
• The initial processing of light in the human visual system is
based on the same principles used by most optical systems.
1.5 The Synthetic-Camera Model
• We look at creating a computer-generated image as being similar to forming
an image using an optical system.
• This paradigm has become known as the synthetic-camera model.
• Consider the imaging system shown in Figure 1.16.
• Again we see objects and a viewer.
• First, the specification of the objects is independent of the specification
of the viewer.
• Hence, we should expect that, within a graphics library, there will be separate
functions for specifying the objects and the viewer.
• Second, we can compute the image using simple geometric calculations,
just as we did with the pinhole camera.
• In Figure 1.17, the view in part (a) is similar to that of the pinhole camera.
• Whereas with a real camera, we would simply flip the film to regain the
original orientation of the object, with our synthetic camera we can avoid
the flipping by a simple trick.
• We draw another plane in front of the lens (Figure 1.17(b)), and work in
three dimensions, as shown in Figure 1.18.
• We find the image of a point on the object on the virtual image
plane by drawing a line, called a projector, from the point to the
center of the lens, or the center of projection (COP).
• In our synthetic camera, the virtual image plane that we have
moved in front of the lens is called the projection plane.
• The image of the point is located where the projector passes
through the projection plane.
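Concretely (a standard pinhole-style derivation, stated here for completeness): if the COP is placed at the origin and the projection plane at z = d, then similar triangles give the image of a point (x, y, z) as
x_p = x d / z,   y_p = y d / z,   z_p = d.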
• As we saw, not all objects can be imaged onto the pinhole camera’s
film plane.
• The angle of view expresses this limitation. In the synthetic camera,
we can move this limitation to the front by placing a clipping
rectangle, or clipping window, in the projection plane (Figure 1.19)
Given the location of the center of projection, the location and orientation of the
projection plane, and the size of the clipping rectangle, we can determine which objects
will appear in the image.
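In legacy OpenGL, these same quantities can be handed directly to the fixed-function pipeline. A minimal sketch, with illustrative values that are our own choices, not the text's:
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
/* The clipping rectangle (left, right, bottom, top) lies on the near
   plane; the COP sits at the origin of eye coordinates. */
glFrustum (-1.0, 1.0, -1.0, 1.0, 2.0, 20.0);
glMatrixMode (GL_MODELVIEW);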
1.6 The Programmer’s Interface
• A user can interact with a graphics system through completely self-contained
packages, such as those used in the CAD community, in which the user develops
images through interactions with the display, using input devices such as a
mouse and a keyboard.
• Ex: In a typical application, such as the painting program shown in Figure 1.20,
the user sees menus and icons that represent possible actions.
(Figure: a window on the display, bounded by xmin, xmax, ymin, and ymax.)
OpenGL Point Functions
• OpenGL primitives are displayed with a default size and color.
• The default color for primitives is white, and the default point size is
equal to the size of a single screen pixel.
• We use the following OpenGL function to state the coordinate values
for a single position:
glVertex* ( );
where the asterisk (*) indicates that suffix codes are required for this function
• Coordinate positions in OpenGL can be given in two, three, or four
dimensions.
• We use a suffix value of 2, 3, or 4 on the glVertex function to
indicate the dimensionality of a coordinate position.
• A four-dimensional specification indicates a homogeneous-
coordinate representation.
• The second suffix code on the glVertex function specifies the
numerical data type: i (integer), s (short), f (float), or d
(double).
• If we use an array specification for a coordinate position, we need
to append v (for “vector”) as a third suffix code.
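For illustration, each call below names the same operation with a different suffix combination (in a program these calls would appear between glBegin and glEnd):
int p[2] = {50, 100};         /* a coordinate position stored in an array */
glVertex2i (50, 100);         /* two dimensions, integer arguments */
glVertex3f (1.0, 2.0, 3.0);   /* three dimensions, float arguments */
glVertex2iv (p);              /* two dimensions, integer, array ("vector") form */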
• Calls to glVertex functions must be placed between a glBegin
function and a glEnd function.
• The argument of the glBegin function is used to identify the kind of
output primitive that is to be displayed, and glEnd takes no
arguments.
• For point plotting, the argument of the glBegin function is the
symbolic constant GL_POINTS.
• Thus, the form for an OpenGL specification of a point position is
glBegin (GL_POINTS);
glVertex* ( );
glEnd ( );
Example: Three equally spaced points are plotted along a two-dimensional straight-line path with a slope of 2.
glBegin (GL_POINTS);
glVertex2i (50, 100);
glVertex2i (75, 150);
glVertex2i (100, 200);
glEnd ( );
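To run this example as a complete program, one possible wrapper is sketched below, assuming the GLUT library; the window size, colors, and world-coordinate extent are our own choices, not the text's:
#include <GL/glut.h>

void display (void)
{
    glClear (GL_COLOR_BUFFER_BIT);
    glBegin (GL_POINTS);
        glVertex2i (50, 100);
        glVertex2i (75, 150);
        glVertex2i (100, 200);
    glEnd ( );
    glFlush ( );
}

int main (int argc, char **argv)
{
    glutInit (&argc, argv);
    glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize (400, 400);
    glutCreateWindow ("Three points");

    glClearColor (1.0, 1.0, 1.0, 0.0);    /* white background */
    glColor3f (0.0, 0.0, 0.0);            /* draw in black */
    glMatrixMode (GL_PROJECTION);
    gluOrtho2D (0.0, 200.0, 0.0, 250.0);  /* world-coordinate extent */

    glutDisplayFunc (display);
    glutMainLoop ( );
    return 0;
}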
OpenGL Line Functions
• In OpenGL, we select each endpoint coordinate position using the
glVertex function, just as we did for a point position.
• With GL_LINES, successive pairs of endpoints define separate line
segments; an unpaired final endpoint is ignored. The five endpoints below
therefore display two segments, one from p1 to p2 and one from p3 to p4,
with p5 ignored.
/* p1 through p5 are two-element int arrays, e.g., int p1[] = {50, 100}; */
glBegin (GL_LINES);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
GL_LINE_STRIP
• With the OpenGL primitive constant GL_LINE_STRIP, we obtain a
polyline.
• The display is a sequence of connected line segments from the first
endpoint in the list to the last endpoint.
glBegin (GL_LINE_STRIP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
GL_LINE_LOOP
• The third OpenGL line primitive is GL_LINE_LOOP, which produces a
closed polyline.
• Lines are drawn as with GL_LINE_STRIP, but an additional line is drawn to
connect the last coordinate position back to the first.
glBegin (GL_LINE_LOOP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
• Here is an example of specifying three vertex positions in a
three-dimensional world reference frame, joined into a polygon (a triangle).
glBegin (GL_POLYGON);
glVertex3f(0.0, 0.0, 0.0); /* vertex A */
glVertex3f(0.0, 1.0, 0.0); /* vertex B */
glVertex3f(0.0, 0.0, 1.0); /* vertex C */
glEnd ( );
OpenGL Point-Attribute Functions
• Color is specified with the glColor function.
• We set the size for an OpenGL point with
glPointSize (size);
• Point is then displayed as a square block of pixels.
• Thus, a point size of 1.0 displays a single pixel, and a
point size of 2.0 displays a 2 × 2 pixel array.
• Color attribute functions such as glColor may be listed inside or outside a
glBegin/glEnd pair, but glPointSize may not be called between glBegin and
glEnd. To give each point its own size, we place each point in its own
glBegin/glEnd block:
glColor3f (1.0, 0.0, 0.0);  /* red, default size: one pixel */
glBegin (GL_POINTS);
glVertex2i (50, 100);
glEnd ( );
glPointSize (2.0);          /* green, 2 x 2 pixel block */
glColor3f (0.0, 1.0, 0.0);
glBegin (GL_POINTS);
glVertex2i (75, 150);
glEnd ( );
glPointSize (3.0);          /* blue, 3 x 3 pixel block */
glColor3f (0.0, 0.0, 1.0);
glBegin (GL_POINTS);
glVertex2i (100, 200);
glEnd ( );
• We can specify a viewer or camera in a variety of ways
• If we look at the camera shown in Figure 1.25, we can
identify four types of necessary specifications:
• Position: The camera location usually is given by the
position of the center of the lens, which is the center of
projection (COP).
• Orientation: Once we have positioned the camera, we can
place a camera coordinate system with its origin at the
center of projection. We can then rotate the camera
independently around the three axes of this system.
• Focal length: The focal length of the lens determines the
size of the image on the film plane or, equivalently, the
portion of the world the camera sees
• Film plane: The back of the camera has a height and a width.
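In legacy OpenGL with the GLU library, these four specifications map naturally onto two calls. A minimal sketch (the numeric values are illustrative choices, not from the text):
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();
gluLookAt (0.0, 0.0, 5.0,   /* eye position: the center of projection */
           0.0, 0.0, 0.0,   /* point the camera is aimed at */
           0.0, 1.0, 0.0);  /* up direction fixes the camera's rotation */

glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
gluPerspective (60.0,         /* vertical field of view, the focal-length analog */
                4.0 / 3.0,    /* aspect ratio: width/height of the "film plane" */
                1.0, 100.0);  /* near and far clipping distances */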
• The classical two-point perspective of a cube shown in Figure 1.26 arises
from a particular relationship between the viewer and the planes of the cube.