Lec 3
APIs
► The Graphics API preference order can be overridden in the Player Settings (Edit >
Project Settings > Player).
► Select the platform from the tabs at the top (active build target is selected by default).
► You can then deselect Auto Graphics API and choose which Graphics APIs should be
supported for your game.
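► The same override can be applied from an editor script. Below is a minimal sketch, assuming a Windows standalone build target and a Vulkan-then-Direct3D11 preference order (the class name, menu path, and chosen APIs are illustrative assumptions):

// Editor-only sketch: sets the graphics API preference order via script
// instead of the Player Settings UI. Place the file in an "Editor" folder.
using UnityEditor;
using UnityEngine.Rendering;

public static class GraphicsApiOverride
{
    [MenuItem("Tools/Prefer Vulkan on Windows")]
    public static void PreferVulkan()
    {
        // Equivalent to deselecting "Auto Graphics API" in the UI.
        PlayerSettings.SetUseDefaultGraphicsAPIs(BuildTarget.StandaloneWindows64, false);

        // The array order is the preference order; Unity falls back down the list.
        PlayerSettings.SetGraphicsAPIs(
            BuildTarget.StandaloneWindows64,
            new[] { GraphicsDeviceType.Vulkan, GraphicsDeviceType.Direct3D11 });
    }
}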
APIs in Computer Graphics
► In computer graphics,
► a computer graphics pipeline,
► rendering pipeline (in OpenGL, for example, the rendering pipeline is the sequence of steps OpenGL takes when rendering objects),
► or simply graphics pipeline,
► is a conceptual model that describes the steps a graphics system needs to perform to render a 3D scene to a 2D screen.
Computer Graphics Pipeline
► Once a 3D model has been created, for instance in a video game or any other 3D computer
animation, the graphics pipeline is the process of turning that 3D model into what the computer
displays.
► Because the steps required for this operation depend on the software and hardware used and the
desired display characteristics, there is no universal graphics pipeline suitable for all cases.
► However, graphics application programming interfaces (APIs) such as Direct3D and OpenGL were
created to unify similar steps and to control the graphics pipeline of a given hardware accelerator.
Structure
► A graphics pipeline can be divided into three main parts: Application, Geometry and Rasterization.
Application
► The application step is executed by the software on the main processor (CPU).
► In the application step, changes are made to the scene as required, for example by user interaction via input devices or during an animation.
► In a modern Game Engine such as Unity, the programmer deals almost exclusively with the application step,
and uses a high-level language such as C#, as opposed to C or C++.
► The new scene with all its primitives, usually triangles, lines and points, is then passed on to the next step in
the pipeline.
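► As a small illustration of application-stage work, here is a minimal Unity C# sketch that updates the scene each frame from user input (the class name, the speed field, and the use of the legacy Input axes are illustrative assumptions):

// Per-frame scene update on the CPU, driven by input devices; the engine
// then hands the updated primitives to the geometry stage of the pipeline.
using UnityEngine;

public class MoveWithInput : MonoBehaviour
{
    public float speed = 5f;

    void Update()
    {
        float h = Input.GetAxis("Horizontal");
        float v = Input.GetAxis("Vertical");
        transform.Translate(new Vector3(h, 0f, v) * speed * Time.deltaTime);
    }
}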
Geometry
► The geometry step is responsible for the majority of the operations on polygons and their vertices, and can be divided into the following five tasks. (Geometric manipulation of modelling primitives, such as that performed by a geometry pipeline, is the first stage in computer graphics systems that generate images from geometric models.)
► How these tasks are organized as actual parallel pipeline steps depends on the particular implementation.
Definitions
► The 3D projection step transforms the view volume into a cube with the corner point coordinates (-1, -1, 0) and (1, 1, 1); occasionally other target volumes are also used.
► This step is called projection, even though it transforms a volume into another volume, because the resulting Z coordinates are not stored in the image but are only used for Z-buffering in the later rasterization step (see the matrix sketch after this list).
► To limit the number of displayed objects, two additional clipping planes (near and far) are used; the view volume is therefore a truncated pyramid (frustum).
► Parallel or orthogonal projection is used, for example, for technical drawings, because it has the advantage that all parallel lines in object space remain parallel in image space, and surfaces and volumes keep the same size regardless of the distance from the viewer.
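► The perspective case can be made concrete with a hedged C# sketch of a projection matrix that maps the view frustum to exactly the cube with corners (-1, -1, 0) and (1, 1, 1) described above (column-vector convention, left-handed view space; the class and method names are illustrative):

using UnityEngine;

public static class ProjectionExample
{
    // Maps x, y to [-1, 1] and z from [zNear, zFar] to [0, 1].
    public static Matrix4x4 Perspective01(float fovYDegrees, float aspect,
                                          float zNear, float zFar)
    {
        float h = 1f / Mathf.Tan(fovYDegrees * Mathf.Deg2Rad * 0.5f);
        float w = h / aspect;

        var m = Matrix4x4.zero;
        m[0, 0] = w;                              // x scale from field of view
        m[1, 1] = h;                              // y scale from field of view
        m[2, 2] = zFar / (zFar - zNear);          // maps z = zNear to 0 ...
        m[2, 3] = -zNear * zFar / (zFar - zNear); // ... and z = zFar to 1
        m[3, 2] = 1f;                             // clip-space w = view-space z
        return m;                                 // divide by w after multiplying
    }
}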
Frustum
► Maps, for example, use an orthogonal projection (a so-called orthophoto), but oblique images of a landscape cannot be used this way: although they can technically be rendered, they appear so distorted that they are of little practical use.
3D projection
► A depth buffer, also known as a z-buffer, is a type of data buffer used in computer graphics to
represent depth information of objects in 3D space from a particular perspective.
► Depth buffers are an aid to rendering a scene to ensure that the correct polygons properly occlude
other polygons.
► Z-buffering was first described in 1974 by Wolfgang Straßer in his PhD thesis on fast algorithms for
rendering occluded objects.
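► The core z-buffer idea fits in a few lines of plain C# (a software sketch, not a Unity API; the class and method names are illustrative): keep the nearest depth seen per pixel and discard fragments that are farther away.

public class DepthBuffer
{
    private readonly float[] depth;
    private readonly int width;

    public DepthBuffer(int width, int height)
    {
        this.width = width;
        depth = new float[width * height];
        // 1.0 is the far plane in the [0, 1] depth range produced by projection.
        for (int i = 0; i < depth.Length; i++) depth[i] = 1f;
    }

    // Returns true if the fragment is nearer than the stored depth and is drawn.
    public bool TestAndWrite(int x, int y, float z)
    {
        int i = y * width + x;
        if (z >= depth[i]) return false; // occluded by a nearer fragment
        depth[i] = z;                    // record the new nearest depth
        return true;
    }
}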
Lighting
► Often a scene contains light sources placed at different positions to make the lighting of the objects appear
more realistic.
► In this case, a gain factor for the texture is calculated for each vertex based on the light sources and the
material properties associated with the corresponding triangle.
► In the later rasterization step, the vertex values of a triangle are interpolated over its surface.
► A general (ambient) lighting is applied to all surfaces; it is the diffuse and thus direction-independent brightness of the scene.
► The sun is a directed light source, which can be assumed to be infinitely far away.
► The illumination effected by the sun on a surface is determined by forming the scalar product of the
directional vector from the sun and the normal vector of the surface.
► If the value is negative, the surface is facing the sun.
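► A short C# sketch of this per-vertex sun lighting, following the convention above that a negative dot product means the surface faces the sun (the function name and the ambient default are illustrative assumptions):

using UnityEngine;

public static class SunLighting
{
    // sunDirection points from the sun into the scene.
    public static float DiffuseGain(Vector3 sunDirection, Vector3 surfaceNormal,
                                    float ambient = 0.1f)
    {
        float d = Vector3.Dot(sunDirection.normalized, surfaceNormal.normalized);
        float diffuse = Mathf.Max(0f, -d); // negative dot: facing the sun
        return ambient + diffuse;          // ambient part is direction-independent
    }
}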
Window-Viewport transformation
► Raster Images
► These are the types of images that are produced
when scanning or photographing an object.
► Raster images are compiled using pixels, or tiny dots,
containing unique color and tonal information that
come together to create the image.
► Since raster images are pixel-based, they are resolution-dependent.
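► To connect the heading with the pixel grid, here is a hedged C# sketch of the window-viewport transformation that maps x, y from the [-1, 1] range of the projection cube to pixel coordinates of a raster image (names are illustrative; y is flipped so row 0 is the top of the image):

using UnityEngine;

public static class ViewportTransform
{
    public static Vector2 ToPixels(Vector2 ndc, int width, int height)
    {
        float px = (ndc.x + 1f) * 0.5f * width;  // [-1, 1] -> [0, width]
        float py = (1f - ndc.y) * 0.5f * height; // flip y for raster row order
        return new Vector2(px, py);
    }
}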
Rasterization using top left rule