Lecture 8
A texel (texture element, or texture pixel) is the fundamental unit of a texture map. Textures are represented by arrays of texels representing the texture space, just as other images are represented by arrays of pixels. When texturing a 3D surface (a process known as texture mapping), the renderer maps texels to appropriate pixels in the output picture. On modern computers, this operation is accomplished on the graphics processing unit.
A texel represents the smallest graphical element in two-dimensional (2-D) texture mapping used to "wallpaper" the rendition of a three-dimensional (3-D) object to create the impression of a textured surface. A texel is similar to a pixel (picture element) because it represents an elementary unit in a graphic. But there are differences between the texels in a texture map and the pixels in an image display. In special instances, there might be a one-to-one correspondence between texels and pixels in some parts of the rendition of a 3-D object. But for most, if not all, of a 3-D rendition, the texels and pixels cannot be paired off in such a simple way.
The texturing process starts with a location in space. The location can be in world space, but typically it is in model space, so that the texture moves with the model.
A projector function is applied to the location to change it from a three-element vector to a two-element vector with values ranging from zero to one.
These values are multiplied by the resolution of the texture to obtain the location of the texel. When a texel is requested that is not at an integer position, texture filtering is applied.
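The lookup-and-filter step above can be sketched in pure Python (hypothetical helper names, not an actual OpenGL API): the (u, v) pair in [0, 1] is scaled by the texture resolution, and bilinear filtering blends the four surrounding texels when the request falls between integer positions.

```python
def nearest_filter(texture, u, v):
    """Round the scaled coordinates down to the closest texel."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

def bilinear_filter(texture, u, v):
    """Weighted average of the four texels surrounding (u*w, v*h)."""
    h, w = len(texture), len(texture[0])
    x = max(u * w - 0.5, 0.0)   # shift so texel centers sit at integers
    y = max(v * h - 0.5, 0.0)
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    def tex(ix, iy):            # clamp indices at the edges
        return texture[max(0, min(iy, h - 1))][max(0, min(ix, w - 1))]
    top = tex(x0, y0) * (1 - fx) + tex(x0 + 1, y0) * fx
    bot = tex(x0, y0 + 1) * (1 - fx) + tex(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy

# 2x2 grayscale texture: sampling at the exact center blends all four texels.
tex2 = [[0.0, 1.0],
        [1.0, 0.0]]
print(nearest_filter(tex2, 0.0, 0.0))   # 0.0
print(bilinear_filter(tex2, 0.5, 0.5))  # 0.5
```

This mirrors what GL_NEAREST and GL_LINEAR filtering do conceptually; real hardware also handles mipmap selection, which this sketch omits.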
When a texel is requested that lies outside the texture, one of two techniques is used: clamping or wrapping.
Clamping limits the texel to the texture size, moving the request to the nearest edge if it falls outside.
Wrapping moves the texel in increments of the texture's size to bring it back into the texture. Wrapping causes a texture to be repeated; clamping causes it to appear in one spot only.
glGenTextures — generate texture names
Array Texture
An Array Texture is a Texture where each mipmap level contains an array of images of the same size. Array textures may have Mipmaps, but each mipmap level in the texture has the same number of layers. Array textures come in 1D and 2D types.
Creation and Management
1D array textures are created by binding a newly-created texture object to GL_TEXTURE_1D_ARRAY, then creating storage for one or more mipmaps of the texture. This is done by using the "2D" image functions; every row of pixel data in the "2D" array of pixels is considered a separate 1D layer.
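The row-to-layer rule above can be modeled in a few lines of plain Python (a hypothetical illustration of the data layout, not a GL call): each row of the uploaded "2D" pixel block becomes a separate 1D layer.

```python
def rows_to_layers(pixels_2d):
    """Interpret row i of a '2D' pixel block as 1D layer i,
    the way a 1D array texture treats uploaded 2D data."""
    return {layer: row for layer, row in enumerate(pixels_2d)}

# A 3x4 block of pixel data -> three 1D layers, each of width 4.
block = [[10, 11, 12, 13],
         [20, 21, 22, 23],
         [30, 31, 32, 33]]
layers = rows_to_layers(block)
print(len(layers))  # 3 layers
print(layers[1])    # [20, 21, 22, 23]
```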
Depth testing
Depth testing is done in screen space after the fragment shader has run.
A Fragment Shader is the Shader stage that will process a Fragment generated by the Rasterization into a set of
colors and a single depth value.
The fragment shader is the OpenGL pipeline stage after a primitive is rasterized. For each sample of the pixels covered by a primitive, a "fragment" is generated. Each fragment has a Window Space position, a few other values, and all of the interpolated per-vertex output values from the last Vertex Processing stage.
The output of a fragment shader is a depth value, a possible stencil value (unmodified by the fragment shader), and zero or more color values to be potentially written to the buffers in the current framebuffer.
gl_FragCoord — the fragment's window-space position; its z component is the depth value used by the depth test.
Depth testing is disabled by default, so to enable it we use the GL_DEPTH_TEST option: glEnable(GL_DEPTH_TEST);
Depth test function
OpenGL allows us to modify the comparison operators it uses for the depth test. This allows us to control when OpenGL should pass or discard fragments and when to update the depth buffer. We can set the comparison operator (or depth function) by calling glDepthFunc.
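The configurable comparison can be sketched in Python (the mode names mirror the glDepthFunc constants; this is a conceptual model, not the GL implementation): the test passes when the incoming fragment's depth compares true against the depth already stored in the buffer.

```python
DEPTH_FUNCS = {
    "GL_LESS":    lambda new, stored: new < stored,   # the default
    "GL_LEQUAL":  lambda new, stored: new <= stored,
    "GL_GREATER": lambda new, stored: new > stored,
    "GL_EQUAL":   lambda new, stored: new == stored,
    "GL_ALWAYS":  lambda new, stored: True,
    "GL_NEVER":   lambda new, stored: False,
}

def depth_test(func, new_depth, stored_depth):
    """Return True if the fragment passes the depth test."""
    return DEPTH_FUNCS[func](new_depth, stored_depth)

print(depth_test("GL_LESS", 0.3, 0.5))    # True: nearer fragment passes
print(depth_test("GL_LESS", 0.7, 0.5))    # False: farther fragment is discarded
print(depth_test("GL_ALWAYS", 0.7, 0.5))  # True: always passes
```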
glClearDepth — specify the clear value for the depth buffer
glClearBufferData, glClearNamedBufferData — fill a buffer object's data store with a fixed value
The transparency for a color is called the _____ and its value is a number between 0.0 and 1.0.
In an object with an alpha level of 1.0, can you see the objects that are behind it?
The difference between WebGL and OpenGL is that WebGL only supports 2D graphics and OpenGL supports
both 2D and 3D graphics.
The position that you are viewing a scene from is called the:
1. eye point
2. view reference point
3. viewport
4. view frustum
Clipping defines parts of the scene that you do not want to display.
Modeling an object about its local origin involves defining it in terms of:
1. modeling coordinates
2. eye coordinates
3. Euclidean coordinates
4. modeling transformations
What is a 'depth buffer' and what does it accumulate?
Generally - what primitive polygon is used for creating a mesh to represent a complex object?
Triangle
Rectangle
Square
Circle