
Graphics - Lecture 1 Part 2

This lecture introduces OpenGL. It defines OpenGL as a software interface for graphics hardware that consists of over 700 commands. The lecture describes the major graphics operations OpenGL performs, including constructing shapes, arranging objects, calculating colors, and rasterizing to pixels. It also covers key terms like rendering, pixels, bitplanes, and the framebuffer. The lecture provides an example of OpenGL code and explains each part. It discusses OpenGL's use of state variables and modes and how OpenGL acts as a state machine.

Uploaded by

Zohaib Kiyani
Copyright
© Attribution Non-Commercial (BY-NC)

Computer Graphics

Lecture 2
Tooba Nasir

February 20th, 2010


This Lecture
• What is OpenGL?

• About the Slides:

February 20, 2010 Tooba Nasir 2


• About the Slides:
– Bland slides like this one are home-made
– Cheerful, colored ones are from
• Shreiner, D., The Khronos OpenGL ARB Working Group, OpenGL Programming Guide, 7th Edition, Addison-Wesley



What is OpenGL?
• A software interface to graphics hardware
• Consists of more than 700 distinct commands
– used to specify the objects and operations needed to produce interactive 3D applications
– about 670 commands as specified for OpenGL Version 3.0 and another 50 in the OpenGL Utility Library

• Designed as a streamlined, hardware-independent interface
– to be implemented on many different hardware platforms
– to achieve these qualities, no commands for performing windowing tasks or obtaining user input are included in OpenGL
– instead, you must work through whatever windowing system controls the particular hardware being used
• OpenGL doesn't provide high-level commands for describing models of three-dimensional objects
– such commands might allow you to specify relatively complicated shapes such as automobiles, parts of the body, airplanes, or molecules



What is OpenGL?
• With OpenGL, a model is built from a small set of
geometric primitives—points, lines, and polygons.

• A sophisticated library that provides these features could certainly be built on top of OpenGL

• GLU:
– The OpenGL Utility Library (GLU) provides many of the
modeling features, such as quadric surfaces and NURBS
curves and surfaces
– a standard part of every OpenGL implementation



OpenGL Graphics Operations
• the major graphics operations that OpenGL performs to render
an image on the screen

1. Construct shapes from geometric primitives, thereby creating mathematical descriptions of objects. (OpenGL considers points, lines, polygons, images, and bitmaps to be primitives.)
2. Arrange the objects in three-dimensional space and select the
desired vantage point for viewing the composed scene.
3. Calculate the colors of all the objects. The colors might be explicitly
assigned by the application, determined from specified lighting
conditions, obtained by pasting textures onto the objects, or some
combination of these operations. These actions may be carried out
using shaders, where you explicitly control all the color computations,
or they may be performed internally in OpenGL using its
preprogrammed algorithms (by what is commonly termed the fixed-function pipeline).
4. Convert the mathematical description of objects and their associated
color information to pixels on the screen. This process is called
rasterization.



• During these stages, OpenGL might perform other operations, such as hidden-part removal
• After the scene is rasterized but before it's drawn on the screen, you can perform some operations on the pixel data if desired



Key Terms
• Rendering
– is the process by which a computer creates
images from models.
– These models, or objects, are constructed from
geometric primitives—points, lines, and
polygons—that are specified by their vertices.

– The final rendered image consists of pixels drawn on the screen
• a pixel is the smallest visible element the display hardware can put on the screen
• Information about the pixels (for instance, what
color they’re supposed to be) is organized in
memory into bitplanes.

• A bitplane is an area of memory that holds one bit of information for every pixel on the screen
– the bit might indicate how red a particular pixel is supposed to be, for example
• The bitplanes are themselves organized into a
framebuffer
– holds all the information that the graphics display needs
to control the color and intensity of all the pixels on the
screen.



A Chunk of OpenGL Code
#include <whateverYouNeed.h>

main() {
   InitializeAWindowPlease();

   glClearColor(0.0, 0.0, 0.0, 0.0);
   glClear(GL_COLOR_BUFFER_BIT);
   glColor3f(1.0, 1.0, 1.0);
   glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);

   glBegin(GL_POLYGON);
      glVertex3f(0.25, 0.25, 0.0);
      glVertex3f(0.75, 0.25, 0.0);
      glVertex3f(0.75, 0.75, 0.0);
      glVertex3f(0.25, 0.75, 0.0);
   glEnd();

   glFlush();

   UpdateTheWindowAndCheckForEvents();
}



The Code Explained
• The first line of the main() routine initializes a window on the screen:

– InitializeAWindowPlease() is meant as a placeholder for window-system-specific routines, which are generally not OpenGL calls

glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT);

– OpenGL commands that clear the window to black

– glClearColor( ) establishes what color the window will be cleared to


– glClear( ) actually clears the window
– Once the clearing color is set, the window is cleared to that color whenever
glClear( ) is called.
– The clearing color can be changed with another call to glClearColor( )

• glColor3f() establishes what color to use for drawing objects; in the example, the color is white
• All objects drawn after this point use this color, until it's changed with another call to set the color



• glOrtho()
– specifies the coordinate system OpenGL assumes as
it draws the final image and how the image is
mapped to the screen
• glBegin() and glEnd()
– define the object to be drawn
• The polygon’s “corners” are defined by the
glVertex3f() commands

• glFlush() ensures that the drawing commands are actually executed, rather than stored in a buffer awaiting additional OpenGL commands



OpenGL Data Types



• glVertex2i(1, 3);
• glVertex2f(1.0, 3.0);
– are equivalent, except that the first specifies the vertex's coordinates as 32-bit integers, and the second specifies them as single-precision floating-point numbers



• a final letter v
– indicates that the command takes a pointer to a
vector (or array) of values, rather than a series of
individual arguments
• Non-Vector
– glColor3f(1.0, 0.0, 0.0);
• Vector
– GLfloat color_array[] = {1.0, 0.0, 0.0};
– glColor3fv(color_array);



OpenGL as a State Machine
• OpenGL is a state machine, particularly when using the
fixed-function pipeline
• put it into various states (or modes) that then remain in
effect until changed
• For instance, the current color is a state variable
– can set the current color to white, red, or any other color
– thereafter, every object is drawn with that color until the
current color is set to something else
– the current color is only one of many state variables that
OpenGL maintains
• Others control such things as the current viewing and
projection transformations, line and polygon stipple
patterns, polygon drawing modes, pixel-packing
conventions, positions and characteristics of lights, and
material properties of the objects being drawn



State Variables
• Many state variables refer to modes that are enabled or disabled
with the command
– glEnable() or glDisable()

• Each state variable or mode has a default value, and at any point the
system can be queried for each variable’s current value
• use one of the six following commands to do this:
– glGetBooleanv()
– glGetDoublev()
– glGetFloatv()
– glGetIntegerv()
– glGetPointerv()
– glIsEnabled()
• The choice of command depends on the data type in which the answer should be returned
• Some state variables have a more specific query command
– (such as glGetLight*(), glGetError(), or glGetPolygonStipple()).



State Variables
• can save a collection of state variables on an
attribute stack
– with glPushAttrib() or glPushClientAttrib()

• temporarily modify them, and later restore the values
– with glPopAttrib() or glPopClientAttrib()
• For temporary state changes
– use these commands rather than any of the query
commands, as they’re likely to be more efficient



OpenGL Rendering Pipeline
• Geometric data (vertices, lines, and polygons)
follow the path through the row of boxes that
includes evaluators and per-vertex operations
• pixel data (pixels, images, and bitmaps) are
treated differently for part of the process
• Both types of data undergo the same final
steps (rasterization and per-fragment
operations) before the final pixel data is written
into the framebuffer.



The Henry Ford assembly-line approach (pipeline figure)



• Display Lists
– All data, whether it describes geometry or pixels, can be
saved in a display list for current or later use.
– The alternative to retaining data in a display list is processing
the data immediately—also known as immediate mode.
– When a display list is executed, the retained data is sent
from the display list just as if it were sent by the application
in immediate mode.
– See Chapter 7 for more information about display lists



• Evaluators

– All geometric primitives are eventually described by vertices. Parametric curves and surfaces may be initially described by control points and polynomial functions called basis functions.
– Evaluators provide a method for deriving the vertices
used to represent the surface from the control points.
– The method is a polynomial mapping
• which can produce surface normals, texture coordinates, colors, and spatial coordinate values from the control points
– See Chapter 12 for more about evaluators



• Per-Vertex Operations

– For vertex data, next is the "per-vertex operations" stage, which converts the vertices into primitives
– Some types of vertex data (for example, spatial coordinates) are transformed by 4 × 4 floating-point matrices
– Spatial coordinates are projected from a position in the 3D world to a position on your screen
– See Chapter 3 for details about the transformation matrices

• If advanced features are enabled, this stage is even busier:
• If texturing is used, texture coordinates may be generated and
transformed here
• If lighting is enabled, the lighting calculations are performed using
the transformed vertex, surface normal, light source position,
material properties, and other lighting information to produce a
color value.



• Per-Vertex Operations continued …

– Since OpenGL Version 2.0, you've had the option of using fixed-function vertex processing as just described
– or completely controlling the per-vertex operations by using vertex shaders
– If shaders are employed, all of the operations in the per-vertex operations stage are replaced by the shader
– In Version 3.1, all of the fixed-function vertex operations
are removed and using a vertex shader is mandatory
– unless the implementation supports the
GL_ARB_compatibility extension



A Little Break?!



• Primitive Assembly

– Clipping, a major part of primitive assembly, is the elimination of portions of geometry that fall outside a half-space, defined by a plane.
– Point clipping simply passes or rejects vertices; line or polygon clipping can
add additional vertices depending on how the line or polygon is clipped.

– In some cases, this is followed by perspective division, which makes distant geometric objects appear smaller than closer objects.
– Then viewport and depth (z-coordinate) operations are applied. If culling is
enabled and the primitive is a polygon, it then may be rejected by a culling
test. Depending on the polygon mode, a polygon may be drawn as points or
lines.
– See "Polygon Details" in Chapter 2

• The results of this stage are complete geometric primitives, which are
the transformed and clipped vertices with related color, depth, and
sometimes texture-coordinate values and guidelines for the
rasterization step.



Pixel Operations
• While geometric data takes one path through the OpenGL rendering
pipeline, pixel data takes a different route. Pixels from an array in
system memory are first unpacked from one of a variety of formats
into the proper number of components. Next the data is scaled, biased,
and processed by a pixel map. The results are clamped and then either
written into texture memory or sent to the rasterization step.
• See "Imaging Pipeline" in Chapter 8

• If pixel data is read from the framebuffer, pixel-transfer operations (scale, bias, mapping, and clamping) are performed. Then these results are packed into an appropriate format and returned to an array in system memory.

• There are special pixel copy operations for copying data in the framebuffer to other parts of the framebuffer or to the texture memory. A single pass is made through the pixel-transfer operations before the data is written to the texture memory or back to the framebuffer.



• Many of the pixel operations described are part of the fixed-function pixel pipeline and often move large amounts of data around the system.
• Modern graphics implementations tend to optimize performance
by trying to localize graphics operations to the memory local to
the graphics hardware (this description is a generalization, of
course, but it is how most systems are currently implemented).

• OpenGL Version 3.0, which supports all of these operations, also introduces framebuffer objects that help optimize these data movements; in particular, these objects can eliminate some of these transfers entirely.
• Framebuffer objects, combined with programmable fragment shaders, replace many of these operations (most notably, those classified as pixel transfers) and provide significantly more flexibility.

