Report 1
Graphical pipeline in
computer graphics
1. Introduction
2. Rasterization
3. Operations before and after rasterization
4. Simple antialiasing
5. References
Introduction
The graphics pipeline is the sequence of operations that considers each geometric object in turn
and finds the pixels it can affect; in other words, it maps the points that define the geometry
from object space to screen space. The process is organized as a series of sequential stages,
each responsible for a specific task. The most important stages are described briefly below:
Rasterization
Triangle rasterization
The most common way to rasterize triangles, which avoids the ordering problem and eliminates
holes, is the convention that a pixel is drawn if and only if its center lies inside the
triangle. The vertices of each triangle are shared with the vertices of neighboring triangles,
and together they form 3D models of objects. During rasterization these models are converted
into 2D pixel fragments, and each fragment carries its own attributes, which determine the final
pixel values.
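The pixel-center rule above can be sketched with edge functions: a pixel is covered exactly when its center lies on the same side of all three triangle edges. The function names and the small grid are illustrative, not from any particular API.

```python
# Minimal sketch of triangle rasterization using the pixel-center rule.
# A pixel is drawn iff its center lies inside the triangle; edge
# functions (signed areas) make the inside test simple.

def edge(ax, ay, bx, by, px, py):
    # Signed area of the parallelogram spanned by (a->b, a->p); the
    # sign tells which side of edge ab the point p falls on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    covered = []
    for y in range(height):
        for x in range(width):
            cx, cy = x + 0.5, y + 0.5   # pixel center
            w0 = edge(*v1, *v2, cx, cy)
            w1 = edge(*v2, *v0, cx, cy)
            w2 = edge(*v0, *v1, cx, cy)
            # The center is inside when all edge functions share a sign.
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.append((x, y))
    return covered
```

Because the test uses pixel centers rather than any overlap, two triangles sharing an edge never both claim the same pixel's interior, which is what eliminates the holes and double-draws mentioned above.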
Clipping
Clipping is a fundamental operation in graphics that occurs
whenever one geometric shape intersects another. For example,
when you clip a triangle against the plane x=0, the plane divides
the triangle into two sections. Typically, in clipping applications,
the portion of the triangle that lies on the "wrong" side of the
plane is removed, as shown in the figure.
In the context of preparing for rasterization, the "wrong" side
refers to the area outside the view volume. It is generally safe
to eliminate all geometry that lies outside this volume.
There are three types of clipping: clipping before the
transformation, clipping in homogeneous coordinates,
and clipping after the transformation.
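Clipping a polygon against the plane x = 0, as in the example above, can be sketched as one step of the classic Sutherland-Hodgman algorithm. The function name and the convention that x >= 0 is the kept side are assumptions for illustration.

```python
# Sketch of clipping a polygon against the plane x = 0, keeping the
# part with x >= 0. Each polygon edge either stays, leaves, enters,
# or crosses the plane; crossings insert an intersection vertex.

def clip_against_x0(poly):
    out = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        in1, in2 = x1 >= 0, x2 >= 0
        if in1:
            out.append((x1, y1))
        if in1 != in2:
            # The edge crosses the plane: add the intersection point.
            t = x1 / (x1 - x2)          # parameter where x = 0
            out.append((0.0, y1 + t * (y2 - y1)))
    return out
```

Note that clipping a triangle can produce a quadrilateral, which is why pipelines that clip before rasterization may need to re-triangulate the result.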
Operations before and after rasterization
Before a primitive can be rasterized, its defining vertices must be converted to screen
coordinates, and the colors or other attributes for interpolation must be determined. This
preparation occurs during the vertex-processing stage of the pipeline, where incoming vertices
undergo modeling, viewing, and projection transformations to map them into screen space.
Alongside the vertices, other information, such as colors, surface normals, and texture
coordinates, is also transformed as needed.
After rasterization, further processing calculates the color and depth for complex shading
operations. The final blending phase combines the fragments from overlapping primitives at
each pixel to determine the final color, typically by selecting the fragment with the smallest
depth (closest to the viewer). The functions of these different stages are best illustrated
through examples.
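The smallest-depth blending rule described above is the z-buffer. A minimal sketch, assuming fragments arrive as (x, y, depth, color) tuples (a made-up format for illustration):

```python
# Sketch of the simplest blending rule: at each pixel, keep the
# fragment with the smallest depth (closest to the viewer).

def resolve_fragments(fragments, width, height, background=(0, 0, 0)):
    depth = [[float("inf")] * width for _ in range(height)]
    color = [[background] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:       # a closer fragment wins
            depth[y][x] = z
            color[y][x] = c
    return color
```

Because each fragment is compared only against the stored depth at its pixel, the result is independent of the order in which primitives are rasterized.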
Simple 2D drawing
This is the simplest graphics pipeline. It bypasses vertex and fragment processing, and the
blending stage simply overwrites the previous fragment's color. Applications provide primitives
directly in pixel coordinates, and the rasterizer handles all the remaining work.
A minimal 3D pipeline
To draw objects in 3D, the only modification required
in the 2D drawing pipeline is a matrix
transformation. The vertex-processing stage
multiplies incoming vertex positions by the combined
modeling, camera, projection, and viewport matrices,
producing screen-space triangles that can be drawn
just like 2D shapes.
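The vertex-processing step above amounts to one matrix multiply per vertex followed by a divide by w. A minimal sketch; the translation matrix stands in for a real combined modeling-camera-projection-viewport matrix, which is an assumption for illustration.

```python
# Sketch of minimal 3D vertex processing: multiply a homogeneous
# vertex by a combined 4x4 matrix, then perform the perspective
# divide to obtain screen-space coordinates.

def mat_vec(m, v):
    # 4x4 matrix times 4-vector (row-major nested lists).
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def to_screen(vertex, mvp):
    x, y, z, w = mat_vec(mvp, [*vertex, 1.0])
    return (x / w, y / w, z / w)    # perspective divide

# Stand-in combined matrix: a pure translation by (2, 3, 0).
translate = [
    [1, 0, 0, 2],
    [0, 1, 0, 3],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]
```

With a real perspective projection the w component would vary per vertex, and the divide is what produces foreshortening; for this translation w stays 1.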
Per-fragment shading
In per-fragment shading, the same shading equations are evaluated for each fragment using
interpolated vectors rather than at each vertex. This requires the vertex stage to prepare
geometric information as attributes for the rasterizer, allowing for interpolation of the eye-
space surface normal and vertex position to be used in the shading calculations.
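The shading equation evaluated per fragment can be as simple as Lambertian diffuse: dot the interpolated normal with the light direction. The vectors and function names below are illustrative assumptions.

```python
# Sketch of per-fragment diffuse (Lambertian) shading: the rasterizer
# supplies an interpolated normal, and n . l is evaluated per fragment.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def lambert(normal, light_dir, albedo=1.0):
    n, l = normalize(normal), normalize(light_dir)
    # Clamp to zero so surfaces facing away from the light go dark.
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    return albedo * ndotl
```

Re-normalizing the interpolated normal matters: linear interpolation between unit vectors generally yields a vector shorter than unit length.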
Texture mapping
Textures are images used to add detail to surface shading, preventing them from appearing too
uniform. Instead of using the geometry's attribute values, the shading computation retrieves
values like diffuse color from a texture through a process called texture lookup. This involves
specifying a texture coordinate, which the texture-mapping system uses to find the
corresponding value in the texture image. Texture coordinates are typically defined as an
additional vertex attribute, allowing each primitive to know its position within the texture.
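A texture lookup at its simplest maps a (u, v) coordinate in [0, 1] to the nearest texel. The tiny checkerboard "texture" and the coordinate convention below are assumptions for illustration.

```python
# Sketch of a nearest-neighbor texture lookup: scale (u, v) in [0, 1]
# to texel indices in a small image stored as rows of values.

def texture_lookup(texture, u, v):
    h, w = len(texture), len(texture[0])
    # Scale to texel indices; min() clamps the top edge (u or v == 1).
    ix = min(int(u * w), w - 1)
    iy = min(int(v * h), h - 1)
    return texture[iy][ix]

checker = [["black", "white"],
           ["white", "black"]]
```

Real systems add filtering (bilinear, mipmapped) on top of this basic index computation, but the coordinate-to-texel mapping is the same idea.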
Shading frequency
Large-scale shading, like diffuse shading, can be computed infrequently and interpolated, while
small-scale features, such as sharp highlights, require a higher frequency—ideally one sample
per pixel. Low-frequency effects can be computed in the vertex stage, whereas high-frequency
effects should be evaluated at the fragment stage, or at the vertex stage only when the vertices
are very closely spaced. For
example, gaming hardware usually performs shading per fragment for efficiency, while the
PhotoRealistic RenderMan system computes shading per vertex on small quadrilaterals for
detailed rendering.
Simple antialiasing
Rasterization, like ray tracing, can produce jagged lines if pixels are determined to be either fully
inside or outside a primitive. To improve visual quality, especially in animations, allowing partial
coverage of pixels helps blur edges. Various antialiasing methods exist, with one common
approach being box filtering, which averages pixel colors over an area. A practical
implementation is supersampling, where images are created at high resolutions and then
downsampled. A more efficient optimization involves sampling visibility more frequently than
shading, enabling effective antialiasing with minimal color computation. Techniques vary between
systems, with high-resolution rasterization benefiting per-vertex shading and multisample
antialiasing used in per-fragment shading systems.
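The supersampling-with-box-filter approach described above can be sketched directly: render at n times the target resolution, then average each n x n block into one pixel. The grayscale list-of-rows image format is an assumption.

```python
# Sketch of supersampled antialiasing: downsample a high-resolution
# image by averaging each n x n block (a box filter) into one pixel.

def downsample(hi_res, n):
    h, w = len(hi_res) // n, len(hi_res[0]) // n
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            block = [hi_res[y * n + j][x * n + i]
                     for j in range(n) for i in range(n)]
            row.append(sum(block) / (n * n))   # box-filter average
        out.append(row)
    return out
```

A pixel that straddles a primitive's edge ends up with a fractional coverage value instead of being fully on or off, which is exactly the partial-coverage blurring the text describes.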
References
http://repo.darmajaya.ac.id/5422/1/Fundamentals%20of%20Computer%20Graphics%2C%20Fourth%20Edition%20%28%20PDFDrive%20%29.pdf
https://www.uni-weimar.de/fileadmin/user/fak/medien/professuren/Computer_Graphics/course_material/Graphics___Animation/5-GFX-pipeline.pdf