08-3D Graphics and Rendering System

3D Rendering Pipeline

Fundamentals of Game Development


Instructor: Dr. Behrouz Minaei ([email protected])
Teaching Assistant: Morteza Rajabi ([email protected])
Goals

• Understand the difference between inverse-mapping and forward-mapping approaches to computer graphics rendering
• Be familiar with the graphics pipeline
  • From a transformation perspective
  • From an operation perspective
Approaches to graphics rendering

• Ray-tracing approach
  • Inverse-mapping approach: starts from pixels
  • A ray is traced from the camera through each pixel
  • Takes into account reflection, refraction, and diffraction in a multi-resolution fashion
  • High-quality graphics, but computationally expensive
  • Not for real-time applications
• Pipeline approach
  • Forward-mapping approach
  • Used by OpenGL and DirectX
  • State-based approach:
    • Input is 2D or 3D data
    • Output is the frame buffer
    • Modify state to modify functionality
  • For real-time and interactive applications, especially games
Ray-tracing – Inverse mapping

1. For every pixel, construct a ray from the eye
2. For every object in the scene, intersect the ray with the object
3. Find the closest intersection with the ray
4. Compute the normal at the point of intersection
5. Compute the color for the pixel
6. Shoot secondary rays
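The steps above can be sketched as a minimal ray caster in Python. This is an illustrative toy, not code from the lecture: the one-sphere scene, pinhole camera, and light direction are invented for the example, and step 6 (secondary rays) is omitted.

```python
import math

# Hypothetical scene: one sphere and one directional light (values are illustrative).
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, -5.0), 1.0
LIGHT_DIR = (0.577, 0.577, 0.577)  # roughly unit-length direction toward the light

def intersect_sphere(origin, direction):
    """Return the smallest positive ray parameter t of a ray-sphere hit, or None."""
    oc = tuple(o - c for o, c in zip(origin, SPHERE_CENTER))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c  # direction is assumed unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def trace_pixel(x, y, width, height):
    """Steps 1-5: build a ray through the pixel, intersect, shade with a diffuse term."""
    # 1. construct a ray from the eye (at the origin) through the pixel
    px = 2.0 * (x + 0.5) / width - 1.0
    py = 1.0 - 2.0 * (y + 0.5) / height
    length = math.sqrt(px * px + py * py + 1.0)
    direction = (px / length, py / length, -1.0 / length)
    # 2-3. closest intersection (only one object in this scene)
    t = intersect_sphere((0.0, 0.0, 0.0), direction)
    if t is None:
        return 0.0  # background shade
    # 4. normal at the hit point
    hit = tuple(t * d for d in direction)
    normal = tuple((h - c) / SPHERE_RADIUS for h, c in zip(hit, SPHERE_CENTER))
    # 5. simple diffuse shade (step 6, secondary rays, is omitted)
    return max(0.0, sum(n * l for n, l in zip(normal, LIGHT_DIR)))

center_shade = trace_pixel(50, 50, 100, 100)  # the central ray hits the sphere
corner_shade = trace_pixel(0, 0, 100, 100)    # the corner ray misses it
```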
Pipeline – Forward mapping

• Start from the geometric primitives to find the values of the pixels
The general view (Transformations)

The graphics pipeline takes a 3D scene, camera parameters, and light sources as input and produces a framebuffer for display. Its stages, in order:

1. Modeling Transformation
2. Lighting
3. Viewing Transformation
4. Projection Transformation
5. Clipping
6. Viewport Transformation
7. Rasterization


Input and output of the graphics pipeline

• Input:
  • Geometric model
    • Objects and light sources: geometry and transformations
  • Lighting model
    • Description of light and object properties
  • Camera model
    • Eye position, viewing volume
  • Viewport model
    • Pixel grid onto which the view window is mapped
• Output:
  • Colors suitable for framebuffer display
Graphics pipeline

• What is it?
  The processing steps needed to display a computer graphic, and the order in which they must occur.
• Primitives are processed in a series of stages
• Each stage forwards its result to the next stage
• The pipeline can be drawn and implemented in different ways
• Some stages may be in hardware, others in software
• Optimizations and additional programmability are available at some stages
• Two ways of viewing the pipeline:
  • Transformation perspective
  • Operation perspective
Modeling transformation

• 3D models are defined in their own coordinate system (object space)
• Modeling transforms orient the models within a common coordinate frame (world space)
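A minimal sketch of a modeling transform as a 4x4 matrix, assuming the usual row-major, column-vector convention; the particular rotation and translation are invented for the example.

```python
import math

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def mat_mul(a, b):
    """4x4 matrix product a * b."""
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)] for r in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotation_y(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]

# Place an object at world position (10, 0, 0), rotated 90 degrees about y.
model = mat_mul(translation(10, 0, 0), rotation_y(math.pi / 2))

# The object-space point (0, 0, 1) ends up near (11, 0, 0) in world space.
world = mat_vec(model, [0, 0, 1, 1])
```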
Lighting (shading)

• Vertices are lit (shaded) according to material properties, surface properties (normal), and light sources
• Local lighting model (Diffuse, Ambient, Phong, etc.)
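A per-vertex Lambert diffuse term with a constant ambient contribution, as one instance of the local lighting models named above. Colors are single scalars for brevity (a real shader works per RGB channel), and the example values are invented.

```python
def diffuse_ambient(normal, light_dir, light_color, albedo, ambient):
    """Local lighting: ambient + albedo * light * max(0, N . L).

    `normal` and `light_dir` are assumed unit-length 3-tuples;
    `light_color`, `albedo`, and `ambient` are scalars in [0, 1].
    """
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return min(1.0, ambient + albedo * light_color * n_dot_l)

# A vertex facing the light gets full diffuse; one facing away gets only ambient.
lit = diffuse_ambient((0, 0, 1), (0, 0, 1), 1.0, 0.8, 0.1)
unlit = diffuse_ambient((0, 0, -1), (0, 0, 1), 1.0, 0.8, 0.1)
```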
Lighting Simulation

• Direct illumination
  • Ray casting
  • Polygon shading
• Global illumination
  • Ray tracing
  • Monte Carlo methods
  • Radiosity methods
Viewing transformation

• Maps world space to eye (camera) space
• The viewing position is transformed to the origin, and the viewing direction is oriented along some axis (usually z)
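A viewing transform is commonly built as a "look-at" matrix. The sketch below follows the convention stated above (eye at the origin, looking down -z); the camera placement is an invented example.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def look_at(eye, target, up):
    """World-to-eye matrix: eye maps to the origin, forward maps to -z."""
    f = normalize(tuple(t - e for t, e in zip(target, eye)))  # forward
    s = normalize(cross(f, up))                               # right
    u = cross(s, f)                                           # true up
    return [
        [s[0], s[1], s[2], -sum(a * b for a, b in zip(s, eye))],
        [u[0], u[1], u[2], -sum(a * b for a, b in zip(u, eye))],
        [-f[0], -f[1], -f[2], sum(a * b for a, b in zip(f, eye))],
        [0, 0, 0, 1],
    ]

def transform(m, v):
    """Apply a 4x4 matrix to a 3D point (implicit w = 1), return the 3D result."""
    v4 = (v[0], v[1], v[2], 1.0)
    return tuple(sum(m[r][c] * v4[c] for c in range(4)) for r in range(3))

view = look_at(eye=(0, 0, 5), target=(0, 0, 0), up=(0, 1, 0))
eye_in_view = transform(view, (0, 0, 5))     # the eye maps to the origin
target_in_view = transform(view, (0, 0, 0))  # the target lies on the -z axis
```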
Projection transformation (Perspective/Orthogonal)

• Specifies the view volume that will ultimately be visible to the camera
• Two clipping planes are used: the near plane and the far plane
• Usually perspective or orthogonal
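A sketch of an OpenGL-style perspective projection, assuming the common convention that the frustum between the near and far planes maps to z in [-1, 1] after the perspective divide. The field of view and plane distances are example values.

```python
import math

def perspective(fov_y, aspect, near, far):
    """OpenGL-style perspective matrix mapping the view frustum to clip space."""
    f = 1.0 / math.tan(fov_y / 2.0)
    return [
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0, 0, -1, 0],
    ]

def project(m, v):
    """Apply the matrix and the perspective divide, yielding NDC coordinates."""
    x, y, z = v
    clip = [m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3] for r in range(4)]
    w = clip[3]
    return tuple(c / w for c in clip[:3])

p = perspective(math.pi / 2, 1.0, near=1.0, far=100.0)
# A point on the near plane maps to z = -1, one on the far plane to z = +1.
near_ndc = project(p, (0.0, 0.0, -1.0))
far_ndc = project(p, (0.0, 0.0, -100.0))
```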
Clipping

• The view volume is transformed into a standard cube that extends from -1 to 1, producing Normalized Device Coordinates (NDC)
• Portions of objects outside the NDC cube are removed (clipped)
Why clip?

• We don’t want to waste time rendering objects that are outside the viewing window (or clipping window)
• It is a bad idea to rasterize outside of framebuffer bounds
What is clipping?

• Analytically calculating the portions of primitives within the view window
Clipping

• The naive approach to clipping lines:

  for each line segment
    for each edge of view_window
      find intersection point
      pick “nearest” point
    if anything is left, draw it

• What do we mean by “nearest”?
• How can we optimize this?
Trivial Accepts

• Big optimization: trivial accepts/rejects
• How can we quickly determine whether a line segment is entirely inside the view window?
  • A: test both endpoints.
Trivial Rejects

• How can we know a line is outside the view window?
  • A: if both endpoints are on the wrong side of the same edge, we can trivially reject the line
Clipping Lines To Viewport

• Combining trivial accepts/rejects:
  • Trivially accept lines with both endpoints inside all edges of the view window
  • Trivially reject lines with both endpoints outside the same edge of the view window
  • Otherwise, reduce to the trivial cases by splitting the line into two segments
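The accept/reject/split strategy above is exactly the Cohen–Sutherland algorithm (listed on the next slide). A minimal sketch, assuming a [0, 1] x [0, 1] view window chosen for the example:

```python
# Outcode bits for a [0, 1] x [0, 1] view window (the bounds are illustrative).
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y):
    """Bitmask of the window edges the point (x, y) lies outside of."""
    code = 0
    if x < 0.0: code |= LEFT
    elif x > 1.0: code |= RIGHT
    if y < 0.0: code |= BOTTOM
    elif y > 1.0: code |= TOP
    return code

def clip_line(p0, p1):
    """Cohen-Sutherland: trivial accept, trivial reject, else split at a window edge."""
    (x0, y0), (x1, y1) = p0, p1
    while True:
        c0, c1 = outcode(x0, y0), outcode(x1, y1)
        if c0 | c1 == 0:
            return (x0, y0), (x1, y1)  # both endpoints inside: trivial accept
        if c0 & c1 != 0:
            return None                # both outside the same edge: trivial reject
        c = c0 or c1                   # pick an endpoint that is outside
        if c & TOP:
            x, y = x0 + (x1 - x0) * (1.0 - y0) / (y1 - y0), 1.0
        elif c & BOTTOM:
            x, y = x0 + (x1 - x0) * (0.0 - y0) / (y1 - y0), 0.0
        elif c & RIGHT:
            x, y = 1.0, y0 + (y1 - y0) * (1.0 - x0) / (x1 - x0)
        else:  # LEFT
            x, y = 0.0, y0 + (y1 - y0) * (0.0 - x0) / (x1 - x0)
        if c == c0:
            x0, y0 = x, y              # clip the segment at the window edge
        else:
            x1, y1 = x, y
```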
Other line-clipping algorithms

• Cohen–Sutherland
• Liang–Barsky
• Cyrus–Beck
• Nicholl–Lee–Nicholl
• Fast clipping
• O(lg N) algorithm
• Skala
• Which is best? What are the differences?
Clipping Polygons

• Clipping polygons is more complex than clipping the individual lines
• Input: polygon
• Output: original polygon, new polygon, or nothing
• The biggest optimization we had was trivial accept/reject…
• When can we trivially accept/reject a polygon, as opposed to the line segments that make up the polygon?
Why Is Clipping Hard?

• What happens to a triangle during clipping?
• Possible outcomes:
• How many sides can a clipped triangle have?
How many sides?

• Seven: each of the view window’s four edges can add one side to the triangle’s original three.
Why Is Clipping Hard?

• A really tough case: a concave polygon can clip into multiple polygons
Sutherland-Hodgman Clipping

• Basic idea:
  • Consider each edge of the view window individually
  • Clip the polygon against the view window edge’s equation
Sutherland-Hodgman Clipping

• Input/output for the algorithm:
  • Input: list of polygon vertices in order
  • Output: list of clipped polygon vertices, consisting of old vertices (maybe) and new vertices (maybe)
• Note: this is exactly what we expect from the clipping operation against each edge
Sutherland-Hodgman Clipping

• The Sutherland-Hodgman basic routine:
  • Go around the polygon one vertex at a time
  • The current vertex has position p
  • The previous vertex had position s, and it has been added to the output if appropriate
Sutherland-Hodgman Clipping

• The edge from s to p takes one of four cases:
Sutherland-Hodgman Clipping

• Four cases:
  • s inside plane and p inside plane
    • Add p to output
    • Note: s has already been added
  • s inside plane and p outside plane
    • Find intersection point i
    • Add i to output
  • s outside plane and p outside plane
    • Add nothing
  • s outside plane and p inside plane
    • Find intersection point i
    • Add i to output, followed by p
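The four cases above translate directly into a single clipping pass. A sketch: the `inside` and `intersect` callbacks stand in for the edge test and intersection computation of the preceding slides, and the half-plane x <= 0.5 is an edge chosen purely for illustration.

```python
def clip_polygon(vertices, inside, intersect):
    """One Sutherland-Hodgman pass against a single clip edge.

    `inside(v)` tests a vertex against the edge; `intersect(s, p)` returns
    the crossing point. Call once per view-window edge to clip fully.
    """
    output = []
    s = vertices[-1]                        # previous vertex (wraps around)
    for p in vertices:
        if inside(p):
            if not inside(s):
                output.append(intersect(s, p))  # entering: add intersection i
            output.append(p)                    # p inside: add p
        elif inside(s):
            output.append(intersect(s, p))      # leaving: add intersection i only
        # both outside: add nothing
        s = p
    return output

# Clip a unit square against the half-plane x <= 0.5.
def inside_x(v):
    return v[0] <= 0.5

def isect_x(s, p):
    t = (0.5 - s[0]) / (p[0] - s[0])
    return (0.5, s[1] + t * (p[1] - s[1]))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
clipped = clip_polygon(square, inside_x, isect_x)  # the left half of the square
```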
Point-to-Plane test

• A very general test to determine if a point p is “inside” a plane P, defined by a point q on the plane and a normal n: p is inside when (p - q) • n ≥ 0
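The test above is a one-liner; the plane and sample points below are example values.

```python
def inside_plane(p, q, n):
    """True when (p - q) . n >= 0, i.e. p lies on the side of plane P
    (through point q with normal n) that the normal points toward."""
    return sum((pi - qi) * ni for pi, qi, ni in zip(p, q, n)) >= 0.0

# Plane z = 0 with normal +z: points above it are "inside", points below are not.
above = inside_plane((0, 0, 2), (0, 0, 0), (0, 0, 1))
below = inside_plane((0, 0, -2), (0, 0, 0), (0, 0, 1))
```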
Finding Line-Plane Intersections

• The edge intersects plane P where L(t) is on P
  • q is a point on P
  • n is normal to P

  (L(t) - q) • n = 0
  (L0 + (L1 - L0) t - q) • n = 0
  t = [(q - L0) • n] / [(L1 - L0) • n]

• The intersection point is i = L(t) for this value of t
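The formula for t above, implemented directly; the segment and plane are example values.

```python
def line_plane_intersection(l0, l1, q, n):
    """Point where the segment L(t) = L0 + (L1 - L0) t crosses plane (q, n),
    using t = [(q - L0) . n] / [(L1 - L0) . n] from the slide."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    d = tuple(b - a for a, b in zip(l0, l1))            # L1 - L0
    t = dot(tuple(qi - ai for qi, ai in zip(q, l0)), n) / dot(d, n)
    return tuple(a + t * di for a, di in zip(l0, d))

# A segment from z = -1 to z = 3 crosses the plane z = 1 at its midpoint.
i = line_plane_intersection((0, 0, -1), (0, 0, 3), (0, 0, 1), (0, 0, 1))
```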
Viewport Transformation

• Maps NDC to the 3D viewport:
  • x, y give the screen window
  • z gives the depth of each point
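A sketch of the NDC-to-window mapping, assuming NDC in [-1, 1]^3 and a top-left screen origin (the y flip and depth range are common conventions, not stated on the slide).

```python
def viewport_transform(ndc, width, height, depth_near=0.0, depth_far=1.0):
    """Map NDC in [-1, 1]^3 to window coordinates:
    x, y in pixels (origin at top-left), z in [depth_near, depth_far]."""
    x, y, z = ndc
    sx = (x + 1.0) * 0.5 * width
    sy = (1.0 - (y + 1.0) * 0.5) * height   # flip y for a top-left origin
    sz = depth_near + (z + 1.0) * 0.5 * (depth_far - depth_near)
    return (sx, sy, sz)

# The NDC origin lands in the middle of an 800x600 window at depth 0.5.
center = viewport_transform((0.0, 0.0, 0.0), 800, 600)
```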
Rasterization

• Rasterizes objects into pixels
• Interpolates values as we go (color, depth, etc.)
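One common way to rasterize a triangle and interpolate per-vertex values is the edge-function (barycentric) approach sketched below. This is an illustrative method choice, not necessarily the one the lecture has in mind; it assumes counter-clockwise winding in screen space.

```python
def edge(a, b, p):
    """Signed area test: positive when p is left of the edge a -> b (CCW winding)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(v0, v1, v2, width, height):
    """Yield (x, y, w0, w1, w2): covered pixels with barycentric weights,
    which can interpolate any per-vertex value (color, depth, ...)."""
    area = edge(v0, v1, v2)
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)           # sample at the pixel center
            w0 = edge(v1, v2, p)
            w1 = edge(v2, v0, p)
            w2 = edge(v0, v1, p)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                yield x, y, w0 / area, w1 / area, w2 / area

# A right triangle covering the lower-left half of a 4x4 pixel grid.
pixels = list(rasterize((0, 0), (4, 0), (0, 4), 4, 4))
```

Interpolating a value v is then just `w0 * v0 + w1 * v1 + w2 * v2`, which is exactly the "interpolate values as we go" step above.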
Summary of transformations
Recap: Rendering Pipeline

• Modeling transformations
• Viewing transformations
• Projection transformations
• Clipping
• Scan conversion
• We now know everything about how to draw a polygon on the screen, except visible surface determination
Invisible Primitives

• Why might a polygon be invisible?
  • The polygon is outside the field of view
  • The polygon is backfacing
  • The polygon is occluded by object(s) nearer the viewpoint
• For efficiency reasons, we want to avoid spending work on polygons outside the field of view or backfacing
• For efficiency and correctness reasons, we need to know when polygons are occluded
View Frustum Clipping

• Remove polygons entirely outside the frustum
  • Note that this includes polygons “behind” the eye (actually behind the near plane)
• Pass through polygons entirely inside the frustum
• Modify remaining polygons to include only the portions intersecting the view frustum
Back-Face Culling

• Most objects in a scene are typically “solid”
• More rigorously: closed, orientable manifolds
  • Must not cut through itself
  • Must have two distinct sides
• A sphere is orientable since it has two sides, 'inside' and 'outside'
• A Möbius strip or a Klein bottle is not orientable
  • You cannot “walk” from one side to the other
• A sphere is a closed manifold, whereas a plane is not
48/30
Back-Face Culling 49

• On the surface of a closed manifold, polygons


whose normals point away from the camera
are always occluded:

49/30
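The "points away from the camera" test is a single dot product; the sign convention below (a view direction pointing from the eye toward the face) is one common choice.

```python
def is_backfacing(normal, view_dir):
    """Cull when the face normal points away from the camera:
    dot(normal, view_dir) > 0, where view_dir points from the eye toward the face."""
    return sum(n * v for n, v in zip(normal, view_dir)) > 0.0

# Camera looking down -z: a +z normal faces the camera, a -z normal is culled.
front = is_backfacing((0, 0, 1), (0, 0, -1))
back = is_backfacing((0, 0, -1), (0, 0, -1))
```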
Back-Face Culling

• Not rendering backfacing polygons improves performance
  • By how much?
  • It reduces by about half the number of polygons to be considered for each pixel
  • Every front-facing polygon must have a corresponding rear-facing one
Occlusion

• For most interesting scenes, some polygons will overlap
• To render the correct image, we need to determine which polygons occlude which
• We don’t focus on this here.

You might also like