
Report 1

Graphical pipeline in
computer graphics

Miriam de Bengoa Aletta


INDEX

1. Introduction

2. Rasterization

3. Operations before and after rasterization

4. Simple antialiasing

5. Culling primitives for efficiency

6. References
Introduction
The graphics pipeline is the sequence of operations that considers each geometric object in turn and finds the pixels it could affect; in other words, it maps the points that define the geometry from object space to screen space. This process involves a series of sequential stages, each responsible for specific tasks. These are the most important ones, with a short explanation of each (a sketch after the list shows how they fit together):

1. Vertex Processing: This stage manipulates the vertices of the 3D models. The primary task here is to apply matrix transformations that convert the coordinates of these vertices from object space to screen space, so that the data sent to the rasterizer is in pixel coordinates.
2. Rasterization: After the vertices are processed, rasterization
converts the transformed vertex data into fragments (or
pixels). This step determines which pixels on the screen
correspond to the 3D geometry. The rasterizer generates a 2D
representation of the model based on the vertices’ screen
positions.
3. Fragment Processing: Once rasterization is complete, each fragment is processed, which typically includes shading and texture application. During this stage, various effects can be applied to each pixel, enhancing the visual quality by simulating lighting, colors, and surface properties.
4. Blending: This stage involves compositing the processed
fragments to create the final image. Blending combines pixel
values, taking into account transparency and depth to ensure
that surfaces closer to the viewer correctly obscure those that
are farther away.
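
To make the flow between these stages concrete, here is a minimal sketch in Python. The helpers transform and rasterize are hypothetical placeholders rather than a real graphics API, and flat shading keeps the fragment stage trivial.

def draw_triangle(tri, color, mvp, framebuffer, zbuffer):
    # 1. Vertex processing: map each vertex from object space to screen space.
    verts = [transform(mvp, v) for v in tri]
    # 2. Rasterization: enumerate the covered pixels with interpolated depth.
    for x, y, depth in rasterize(verts):
        # 3. Fragment processing: compute this fragment's color (flat here).
        frag_color = color
        # 4. Blending: the depth test decides which fragment survives.
        if depth < zbuffer[y][x]:
            zbuffer[y][x] = depth
            framebuffer[y][x] = frag_color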

Rasterization

As I mentioned before, the stage prior to rasterization is vertex processing, so each vertex arrives with coordinates in screen space. The rasterizer is in charge of identifying the pixels that the primitive covers and of interpolating values (attributes) across the primitive. The output consists of fragments, one for each covered pixel, each containing its own attribute values.
Line drawing
First of all, it is necessary to define the endpoints of the line using their coordinates in screen
space (let's call them (x0, y0) and (x1, y1)) and a slope given by:

m = (y1 - y0) / (x1 - x0)

This data creates a set of pixels that approximates the line, based on a line equation (which can be parametric or implicit). For slopes between -1 and 1, the midpoint algorithm produces exactly one pixel in each column of pixels between the endpoints; a sketch of it follows.
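
Here is a Python sketch of the midpoint algorithm for the case 0 < m <= 1 with x0 < x1 (other slopes are handled by symmetric cases). It evaluates the implicit line equation f(x, y) = (y0 - y1)x + (x1 - x0)y + x0*y1 - x1*y0 at the midpoint between candidate pixels to decide whether to step up.

def midpoint_line(x0, y0, x1, y1):
    # Assumes integer endpoints with x0 < x1 and slope m in (0, 1].
    pixels = []
    y = y0
    # f evaluated at the first midpoint (x0 + 1, y0 + 0.5).
    d = (y0 - y1) * (x0 + 1) + (x1 - x0) * (y0 + 0.5) + x0 * y1 - x1 * y0
    for x in range(x0, x1 + 1):
        pixels.append((x, y))
        if d < 0:                        # line passes above the midpoint:
            y += 1                       # step diagonally (up and right)
            d += (x1 - x0) + (y0 - y1)
        else:                            # line passes below (or through) it:
            d += (y0 - y1)               # step right only
    return pixels

For example, midpoint_line(0, 0, 4, 2) returns [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]: exactly one pixel per column.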

Triangle rasterization

The most common way to rasterize triangles, one that avoids the order problem and eliminates holes, is to use the convention that a pixel is drawn if and only if its center is inside the triangle. The vertices of each triangle connect with the vertices of other triangles, forming 3D models of objects. These models are converted into 2D pixel representations, and each covered pixel gets its final attribute values by interpolating the vertex attributes across the triangle (for example, with barycentric coordinates), as in the sketch below.
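
A minimal Python sketch of this center-inside convention, assuming counterclockwise vertex order: an edge function gives signed areas, the barycentric coordinates are ratios of those areas, and a pixel is kept when all three are non-negative.

def rasterize_triangle(p0, p1, p2):
    def edge(a, b, p):
        # Twice the signed area of triangle (a, b, p).
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    area = edge(p0, p1, p2)              # positive for counterclockwise order
    xmin = int(min(p0[0], p1[0], p2[0]))
    xmax = int(max(p0[0], p1[0], p2[0])) + 1
    ymin = int(min(p0[1], p1[1], p2[1]))
    ymax = int(max(p0[1], p1[1], p2[1])) + 1
    for y in range(ymin, ymax):
        for x in range(xmin, xmax):
            c = (x + 0.5, y + 0.5)       # the pixel's center
            a = edge(p1, p2, c) / area   # barycentric coordinates of c
            b = edge(p2, p0, c) / area
            g = edge(p0, p1, c) / area
            if a >= 0 and b >= 0 and g >= 0:
                yield x, y, (a, b, g)    # (a, b, g) interpolate attributes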

Clipping
Clipping is a fundamental operation in graphics that occurs
whenever one geometric shape intersects another. For example,
when you clip a triangle against the plane x=0, the plane divides
the triangle into two sections. Typically, in clipping applications,
the portion of the triangle that lies on the "wrong" side of the plane is removed.
In the context of preparing for rasterization, the "wrong" side refers to the area outside the view volume, and it is generally safe to eliminate all geometry that lies outside this volume.
Clipping can be performed at three points in the pipeline: before the transformation, in homogeneous coordinates, or after the transformation. A sketch of the operation itself follows.
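
As an illustration, here is a small Python sketch that clips a convex polygon against the plane x = 0 in the style of the Sutherland-Hodgman algorithm, keeping the part with x >= 0.

def clip_against_x0(poly):
    # poly is a list of (x, y, z) vertices in order; returns the clipped list.
    out = []
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        a_in, b_in = a[0] >= 0, b[0] >= 0
        if a_in:
            out.append(a)                # keep vertices on the visible side
        if a_in != b_in:                 # the edge a->b crosses the plane:
            t = a[0] / (a[0] - b[0])     # parameter where x(t) = 0
            out.append(tuple(a[k] + t * (b[k] - a[k]) for k in range(3)))
    return out

Clipping a triangle this way yields either a triangle or a quadrilateral, which matches the two-section split described above.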
Operations before and after rasterization
Before a primitive can be rasterized, its defining vertices must be converted to screen
coordinates, and the colors or other attributes for interpolation must be determined. This
preparation occurs during the vertex-processing stage of the pipeline, where incoming vertices
undergo modeling, viewing, and projection transformations to map them into screen space.
Alongside the vertices, other information, such as colors, surface normals, and texture
coordinates, is also transformed as needed.
After rasterization, further processing calculates the color and depth for complex shading
operations. The final blending phase combines the fragments from overlapping primitives at
each pixel to determine the final color, typically by selecting the fragment with the smallest
depth (closest to the viewer). The functions of these different stages are best illustrated
through examples.

Simple 2D drawing
It is the simplest graphics pipeline. It bypasses vertex and fragment processing, and the blending stage simply overwrites the previous fragment's color. Applications provide primitives directly in pixel coordinates, and the rasterizer handles all the work.

A minimal 3D pipeline
To draw objects in 3D, the only modification required
in the 2D drawing pipeline is a matrix
transformation. The vertex-processing stage
multiplies incoming vertex positions by the combined
modeling, camera, projection, and viewport matrices,
producing screen-space triangles that can be drawn
just like 2D shapes.
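
A minimal sketch of that vertex-processing step, using NumPy and assuming the four 4x4 matrices (M_model, M_cam, M_proj, M_vp) have been built elsewhere:

import numpy as np

def to_screen(vertex, M_model, M_cam, M_proj, M_vp):
    M = M_vp @ M_proj @ M_cam @ M_model   # combine the matrices once
    p = M @ np.append(vertex, 1.0)        # homogeneous vertex position
    return p[:3] / p[3]                   # perspective divide -> screen space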

Using a z-buffer for hidden surfaces

In this method, the distance to the closest drawn surface at each pixel is tracked, and any fragments that are farther away are discarded. This closest distance is stored as an additional value, known as the depth or z-value, alongside the red, green, and blue color values. The grid of these depth values is referred to as the depth buffer or z-buffer.
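
A minimal z-buffer sketch in Python: every pixel's depth starts at infinity, and a fragment is written only if it is closer than everything drawn there so far.

import math

WIDTH, HEIGHT = 640, 480
zbuffer = [[math.inf] * WIDTH for _ in range(HEIGHT)]
framebuffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def write_fragment(x, y, depth, color):
    # The depth test: smaller z means closer to the viewer here.
    if depth < zbuffer[y][x]:
        zbuffer[y][x] = depth
        framebuffer[y][x] = color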
Per-vertex shading
The application currently sets the color for triangles,
while the rasterizer interpolates these colors. For 3D
objects, shading computations can be performed in the
vertex stage, using light direction, eye direction, and
surface normals. Normal vectors are supplied at the
vertices, and the shading equation is evaluated for each
vertex to determine its color.
This method, known as Gouraud shading, requires a suitable coordinate system, preferably
orthonormal. However, it has limitations, as it cannot capture details smaller than the primitives
used. For example, large triangles may result in overly dark shading in the center, and curved
surfaces need smaller primitives to resolve specular highlights accurately.
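
As a sketch of evaluating the shading equation once per vertex, here is a simple Lambertian (diffuse) term in Python; the rasterizer then interpolates the resulting vertex colors across the triangle, which is what Gouraud shading amounts to.

def shade_vertex(normal, light_dir, diffuse_color):
    # normal and light_dir are unit vectors in the same (e.g., eye-space)
    # coordinate system, which is why an orthonormal basis is preferred.
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(c * n_dot_l for c in diffuse_color)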

Per-fragment shading
In per-fragment shading, the same shading equations are evaluated for each fragment using
interpolated vectors rather than at each vertex. This requires the vertex stage to prepare
geometric information as attributes for the rasterizer, allowing for interpolation of the eye-
space surface normal and vertex position to be used in the shading calculations.
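
A sketch of the same Lambertian term evaluated per fragment instead: the vertex normals are interpolated with the barycentric weights and must be re-normalized, because linear interpolation shrinks unit vectors.

def shade_fragment(bary, normals, light_dir, diffuse_color):
    a, b, g = bary                        # barycentric weights from the rasterizer
    # Interpolate the three vertex normals component by component.
    n = [a * n0 + b * n1 + g * n2 for n0, n1, n2 in zip(*normals)]
    length = sum(v * v for v in n) ** 0.5
    n = [v / length for v in n]           # re-normalize before shading
    n_dot_l = max(0.0, sum(v * l for v, l in zip(n, light_dir)))
    return tuple(c * n_dot_l for c in diffuse_color)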

Texture mapping
Textures are images used to add detail to surface shading, preventing them from appearing too
uniform. Instead of using the geometry's attribute values, the shading computation retrieves
values like diffuse color from a texture through a process called texture lookup. This involves
specifying a texture coordinate, which the texture-mapping system uses to find the
corresponding value in the texture image. Texture coordinates are typically defined as an
additional vertex attribute, allowing each primitive to know its position within the texture.
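
A sketch of the lookup itself, assuming the simplest (nearest-neighbor) filtering: the rasterizer interpolates the per-vertex (u, v) coordinates across the primitive, and the shader fetches the matching texel.

def texture_lookup(texture, u, v):
    # texture is a 2D list of colors; (u, v) are in [0, 1].
    height, width = len(texture), len(texture[0])
    x = min(int(u * width), width - 1)    # clamp to the image bounds
    y = min(int(v * height), height - 1)
    return texture[y][x]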

Shading frequency
Large-scale shading, like diffuse shading, can be computed infrequently and interpolated, while small-scale features, such as sharp highlights, require a higher frequency, ideally one sample per pixel. Low-frequency effects can therefore be computed in the vertex stage, whereas high-frequency effects should be evaluated at the fragment stage, or at the vertex stage if the vertices are spaced closely enough. For example, gaming hardware usually performs shading per fragment for efficiency, while the PhotoRealistic RenderMan system computes shading per vertex on very small quadrilaterals for detailed rendering.
Simple antialiasing
Rasterization, like ray tracing, can produce jagged lines if pixels are determined to be either fully
inside or outside a primitive. To improve visual quality, especially in animations, allowing partial
coverage of pixels helps blur edges. Various antialiasing methods exist, with one common
approach being box filtering, which averages pixel colors over an area. A practical
implementation is supersampling, where images are created at high resolutions and then
downsampled. A more efficient optimization is to sample visibility more frequently than shading, enabling effective antialiasing with minimal extra color computation. Techniques vary between systems: plain high-resolution rasterization pairs naturally with per-vertex shading, while per-fragment shading systems use multisample antialiasing, which shades each pixel only once but tests visibility at several sample points. A sketch of the box-filter downsampling step follows.
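
A sketch of the supersampling approach with a box filter, assuming a grayscale image rendered at n times the target resolution:

def downsample(image, n):
    # Average each n x n block of samples into one output pixel.
    out_h, out_w = len(image) // n, len(image[0]) // n
    result = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            block = [image[y * n + j][x * n + i]
                     for j in range(n) for i in range(n)]
            result[y][x] = sum(block) / (n * n)   # the box filter
    return result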

Culling primitives for efficiency


Object-order rendering's requirement of a single pass over all scene geometry becomes a drawback in complex scenes, because much of the processing is wasted on invisible objects. Culling is a technique that eliminates this wasted effort by removing invisible geometry early. Three common culling strategies are:
1. View volume culling: removes geometry outside the visible area (the view volume).
2. Occlusion culling: eliminates geometry that is obscured by closer objects.
3. Backface culling: removes primitives facing away from the camera, as in the sketch below.
While view volume and backface culling are straightforward, occlusion culling in high-performance systems is complex.
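
As an illustration of the simplest of the three, here is a backface test sketch for a screen-space triangle whose front faces are wound counterclockwise (with the y axis pointing up); a non-positive signed area means the triangle faces away and can be skipped before rasterization.

def is_backface(p0, p1, p2):
    # Twice the signed area of the screen-space triangle (p0, p1, p2).
    signed_area = ((p1[0] - p0[0]) * (p2[1] - p0[1])
                   - (p1[1] - p0[1]) * (p2[0] - p0[0]))
    return signed_area <= 0               # facing away (or degenerate): cull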

References
Fundamentals of Computer Graphics, Fourth Edition:
http://repo.darmajaya.ac.id/5422/1/Fundamentals%20of%20Computer%20Graphics%2C%20Fourth%20Edition%20%28%20PDFDrive%20%29.pdf

Graphics pipeline course material (Computer Graphics, Bauhaus-Universität Weimar):
https://www.uni-weimar.de/fileadmin/user/fak/medien/professuren/Computer_Graphics/course_material/Graphics___Animation/5-GFX-pipeline.pdf
