Shader Fundamentals

OpenGL Versions and Publication Dates


Modern OpenGL
• Performance is achieved by using the GPU rather
than the CPU
• Control GPU through programs called shaders
• Application’s job is to send data to GPU
• GPU does all rendering
Think parallel
• Shaders are compiled from within your code
– They used to be written in assembly language
– Today they’re written in high-level languages
• They execute on the GPU
• GPUs typically have multiple processing units
• That means that multiple shaders execute in parallel!
• At last, we’re moving away from the purely-linear flow
of early “C” programming models…
Visual Pipeline
Vertex shader
• Vertex shader executed once for each vertex
• Vertex position transformation usually using
the modelview and projection matrices
• Normal transformation, normalization
• Texture coordinate generation and
transformation
• Lighting per vertex
• Color computation
Fragment shader
• Fragment - per pixel data
• Fragment shader executed once for each
fragment
• Computing colors and texture coordinates per
pixel
• Texture application
• Fog computation
• Computing normals for lighting per pixel
• Can discard the fragment or compute color
OpenGL 1.x Fixed-Function Pipeline
• Vertex Data
– The vertex data represents the 2D or 3D objects
that are to be rendered. In addition to the
position of the vertex in 2D or 3D space, each
vertex can carry several other attributes such as
vertex color, vertex normals, texture coordinates,
and a fog coordinate. The vertex data processed
by the GPU is often referred to as the vertex stream.
• Primitive Processing
– OpenGL has support for several types of
primitives. These include points, lines, triangles,
quads, and polygons (but quads and polygons are
deprecated as of OpenGL 3.1).
– In addition to the classic primitive types, the patch
primitive was introduced in OpenGL 4.0 to provide
support for tessellation shaders.
• Transform and Lighting
– In this stage the vertex is transformed into view space
by transforming the vertex position and vertex normal
by the current model-view matrix (GL_MODELVIEW).
– After transforming the vertex position and vertex
normal, lighting information is computed for each
individual vertex.
– Since this stage happens for each vertex it is not
capable of performing per-pixel lighting using the
fixed-function pipeline. To achieve per-pixel lighting,
you must use a fragment shader
• Primitive Assembly
– The Primitive Assembly is the grouping of vertices
into lines and triangles. Primitive assembly still occurs
for points, but it is trivial in that case.
• Rasterizer
– The Rasterizer is responsible for converting the
primitives from the primitive assembly into fragments
(which eventually become the individual screen
pixels).
– Vertex attributes are interpolated across the face of
the primitive for attributes such as color, texture
coordinates, and fog coordinates.
– Fragments are “potential pixels”
• Have a location in frame buffer
• Color and depth attributes
• Texture Environment
• Color Sum
– The Color Sum stage is used to add-in a secondary
color to the geometry after the textures have
been applied.
• Fog
• Alpha Test
– Fragments can be discarded if the alpha value is
below a certain threshold.
– This stage only functions if the GL_ALPHA_TEST is
enabled and the framebuffer uses a color mode
that stores the alpha value (such as RGBA).
• Depth & Stencil
– During the Depth test stage, a fragment can be
discarded if the current framebuffer has a depth
buffer, and the depth test stage (GL_DEPTH_TEST)
is enabled and it fails the depth comparison
function.
– The stencil test can be used to discard fragments
that fail a stencil comparison operation based on
the content of the stencil buffer
• Color Buffer Blend
– If color blending is enabled (glEnable(GL_BLEND))
then blending will be performed based on the
blend function and the blend equations.
• Dither
– If you are using a color palette with few colors,
you can enable the GL_DITHER state to inform
OpenGL to try to simulate a larger color palette by
mixing colors in close proximity.
OpenGL 2.0 Programmable Pipeline
Vertex Shader
• The vertex Shader is a programmable unit that
operates on incoming vertex attributes, such as
position, color, texture coordinates, and so on.
• The vertex Shader is intended to perform
traditional graphics operations such as vertex
transformation, normal
transformation/normalization, texture coordinate
generation, and texture coordinate
transformation.
• The vertex Shader only has one vertex as input
and only writes one vertex as output.
• The previously defined fixed-function transformation
and lighting stage are now replaced by the
programmable vertex shader program.
• This gives the graphics programmer more flexibility
regarding how the vertices are transformed (we can
even decide not to transform the vertices at all) and we
can even perform the lighting computations in the
fragment shader to achieve per-pixel lighting (as
opposed to per-vertex which was a limitation of the
fixed-function pipeline).
• Fragment Shader
– The fragment Shader is intended to perform
traditional graphics operations such as operations
on interpolated values, texture access, texture
application, fog, and color sum.
– The fragment shader program replaces all of the
complicated texture blending, color sum, and fog
operations from the fixed-function pipeline.
OpenGL 3.2 Programmable Pipeline
• Geometry Shader
– The geometry shader comes after the vertex shader in
the programmable shader pipeline and therefore the
output of the vertex shader becomes the input to the
geometry shader.
– The geometry shader runs once per primitive and has
access to all of the input vertex data for all of the
vertices that make up the primitive being processed.
– The geometry shader can be used to process
primitives of one type and generate primitives of
another type.
• For example, you could pass a stream of points (a single
vertex) that represents the 3D positions of particles in a
particle system and the geometry shader can produce a set
of triangles that can be used to render the texture mapped,
oriented particles.
OpenGL 4.0 Programmable Pipeline
Tessellation
• Tessellation is the process of breaking a high-
order primitive (which is known as a patch in
OpenGL) into many smaller, simpler primitives
such as triangles.
• Configurable tessellation engine that is able to
break up quadrilaterals, triangles, and lines into a
potentially large number of smaller points, lines,
or triangles that can be directly consumed by the
normal rasterization hardware further down the
pipeline.
OpenGL Shading Language (GLSL)
• Similar to Nvidia’s Cg and Microsoft’s HLSL
• Code sent to shaders as source code
• New OpenGL functions to compile, link and
get information to shaders
Data Types for Shaders
• C types: int, float, bool, uint, double
• Vectors:
– float vec2, vec3, vec4
– Also int (ivec), boolean (bvec), uvec, dvec
• Matrices: mat2, mat3, mat4
– Stored by columns
– Referenced as m[column][row]: the first index selects a column
Vector and Matrix Types in GLSL
• GLSL supports C-style structs, which can be returned
from functions
• Because matrices and vectors are basic types they can
be passed into and out from GLSL functions, e.g.
– mat3 func(mat3 a)
– vec4[4] functionThatReturnsArray() {
vec4[4] foo = ...
return foo;
}
– float[6] var = float[6](1.0, 2.0, 3.0, 4.0, 5.0, 6.0);
– Recent versions of GLSL also allow the traditional, C-style
array initializer syntax to be used:
• float var[6] = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0 };
• There are a few built-in variables such as gl_Position
float a[10]; // Declare an array of 10 elements
float b[a.length()]; // Declare an array of the same size
mat4 c;
float d = float(c.length()); // d is now 4
int i;
// This loop iterates 10 times
for (i = 0; i < a.length(); i++)
{
b[i] = a[i];
}
• Some differences from C/C++:
– No pointers, strings, chars; no unions, enums; no bytes, shorts, longs;
no switch() statements (switch was added later, in GLSL 1.30).
– There is no implicit casting (type promotion) in early GLSL:
float foo = 1;
fails there because you can’t implicitly cast int to float
(GLSL 1.20 and later do allow this particular conversion).
– Explicit type casts are done by constructor:
vec3 foo = vec3(1.0, 2.0, 3.0);
vec2 bar = vec2(foo); // Drops foo.z
• Function parameters are labeled as in (default), out, or inout.
– Functions are called by value-return, meaning that values are copied
into and out of parameters at the start and end of calls.
Operators and Functions
• Standard C functions
– Trigonometric
– Arithmetic
• matrixCompMult()
• transpose()
• inverse()
• determinant()
• ….
– Normalize…etc.

• Overloading of vector and matrix types


mat4 a;
vec4 b, c, d;
c = b*a; // b is treated as a row vector (vector * matrix)
d = a*b; // b is treated as a column vector (matrix * vector)
Swizzling and Selection
• Can refer to array elements by element using [] or
selection (.) operator with
– x, y, z, w
– r, g, b, a
– s, t, p, q
– a[2], a.b, a.z, a.p are the same
• Swizzling operator lets us manipulate
components
– vec4 a, b;
– a.yz = vec2(1.0, 2.0);
– a.xw = b.yy;
Swizzle Examples
• vec4 TestVec = vec4(1.0, 2.0, 3.0, 4.0);

• vec4 a = TestVec.xyzw; // (1.0, 2.0, 3.0, 4.0)


• vec4 b = TestVec.wxyz; // (4.0, 1.0, 2.0, 3.0)
• vec4 c = TestVec.xxyy; // (1.0, 1.0, 2.0, 2.0)
• vec2 d = TestVec.zx; // (3.0, 1.0)
How do the shaders communicate?
There are three types of shader parameter in GLSL:
• Uniform parameters
– Set throughout execution
– Ex: surface color
• Attribute parameters
– Set per vertex
– Ex: local tangent
• Varying parameters
– Passed from vertex processor to fragment processor
– Ex: transformed normal
(Diagram: uniform parameters feed both the vertex and fragment
processors; varying parameters flow from the vertex processor to
the fragment processor.)
Uniform Qualifier
• Variables that are constant for an entire
primitive
• Can be changed in application and sent to
shaders
• Cannot be changed in shader
• Used to pass information to shader
• Access to uniform variables is available after
linking the program.
• With glGetUniformLocation you can retrieve the
location of the uniform variable within the
specified program object.
• Once you have that location you can set the
value.
• If the variable is not found, -1 is returned.
• With glUniform you can set the value of the
uniform variable.
GLint loc = glGetUniformLocation(ProgramObject, "Scale");
if (loc != -1)
{
glUniform1f(loc, 0.432);
}

• The uniform location remains valid until you link the program
again, so there is no need to call glGetUniformLocation every
frame.
• GLfloat fValue = 45.2f;
• glUniform1fv(iLocation, 1, &fValue);
• layout (location = 0) uniform float fTime;
• layout (location = 1) uniform int iIndex;
• layout (location = 2) uniform vec4 vColorValue;
• layout (location = 3) uniform bool bSomeFlag;

• glUseProgram(myShader);
• glUniform1f(0, 45.2f);
• glUniform1i(1, 42);
• glUniform4f(2, 1.0f, 0.0f, 0.0f, 1.0f);
• glUniform1i(3, GL_FALSE);
• uniform vec4 vColor;

• GLfloat vColor[4] = { 1.0f, 1.0f, 1.0f, 1.0f };


• glUniform4fv(iColorLocation, 1, vColor);
• uniform vec4 vColors[2];

• GLfloat vColors[2][4] = { { 1.0f, 1.0f, 1.0f, 1.0f },
{ 1.0f, 0.0f, 0.0f, 1.0f } };
• ...
• glUniform4fv(iColorLocation, 2, vColors);
• void glUniformMatrix4fv(GLint location, GLsizei count,
GLboolean transpose, const GLfloat *value);
• The Boolean flag transpose is set to GL_FALSE
if the matrix is already stored in column-major
ordering (the way OpenGL prefers). Setting
this value to GL_TRUE causes the matrix to be
transposed when it is copied into the shader.
• Uniform Blocks
• OpenGL allows you to combine a group of uniforms into a uniform block.
• This functionality is called the uniform buffer object (UBO).

uniform TransformBlock
{
float scale; // Global scale to apply to everything
vec3 translation; // Translation in X, Y, and Z
float rotation[3]; // Rotation around X, Y, and Z axes
mat4 projection_matrix; // A generalized projection matrix to apply
// after scale and rotate
} transform;

• Inside the shader, you can refer to the members of the block using its
instance name, transform (for example, transform.scale or
transform.projection_matrix).
Varying Qualifier
• Variables that are passed from vertex shader
to fragment shader
• Automatically interpolated by the rasterizer
• Old style used the varying qualifier
varying vec4 color;
• Now use out in vertex shader and in in the
fragment shader
out vec4 color;
Vertex Attributes
• Vertex attributes are used to communicate from
"outside" to the vertex shader. Unlike uniform
variables, values are provided per vertex (and not
globally for all vertices).
• There are built-in vertex attributes like the
normal or the position, or you can specify your
own vertex attribute like a tangent or another
custom value.
• Attributes can't be defined in the fragment
shader.
Built-in Vertex Attributes
• In the C++ program you can use the regular OpenGL function to set
vertex attribute values, for example glVertex3f for the position.
• gl_Vertex Position (vec4)
• gl_Normal Normal (vec3)
• gl_Color Primary color of vertex (vec4)
• gl_MultiTexCoord0 Texture coordinate of texture unit 0 (vec4)
• gl_MultiTexCoord1 Texture coordinate of texture unit 1 (vec4)
• ….
• gl_MultiTexCoord7 Texture coordinate of texture unit 7 (vec4)
• gl_FogCoord Fog Coord (float)
Example: Built-in Vertex Attributes

glBegin(GL_TRIANGLES);
glColor3f(0.1f, 0.0f, 0.0f);
glVertex3f(0.0f, 0.0f, 0.0f);
glColor3f(0.0f, 0.1f, 0.0f);
glVertex3f(1.0f, 0.0f, 0.0f);
glColor3f(0.1f, 0.1f, 0.0f);
glVertex3f(1.0f, 1.0f, 0.0f);
glEnd();
// In immediate mode, glColor3f sets the current color for the
// glVertex3f calls that follow it.
• Vertex Shader Source Code
void main(void) {
vec4 a = gl_Vertex + gl_Color;
gl_Position = gl_ModelViewProjectionMatrix * a;
}
• Fragment Shader Source Code
void main (void) {
gl_FragColor = vec4(0.0,0.0,1.0,1.0);
}
// gl_FragColor is deprecated in GLSL 1.3 (OpenGL 3.0)
Custom Vertex Attributes
• A custom, user-defined attribute can also be defined.
• The OpenGL function glBindAttribLocation associates
the name of the variable with an index.
• For example, glBindAttribLocation(ProgramObject, 10,
"myAttrib") would bind the attribute "myAttrib" to
index 10.
• The maximum number of attribute locations is limited
by the graphics hardware. You can retrieve the
maximum supported number of vertex attributes with
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &n).
• Setting attribute values can be done
using glVertexAttrib function.
• Unfortunately there are certain limitations when using
this on NVidia hardware, where some attribute indices
are reserved for the built-in attributes:
• gl_Vertex 0
• gl_Normal 2
• gl_Color 3
• gl_SecondaryColor 4
• gl_FogCoord 5
• gl_MultiTexCoord0 8
• …..
• gl_MultiTexCoord7 15
Process Overview
Our first vertex shader
#version 450 core
void main(void)
{
gl_Position = vec4(0.0, 0.0, 0.5, 1.0);
}
• Inside our main function, we assign a value to
gl_Position. All variables that start with gl_ are
part of OpenGL and connect shaders to each
other or to the various parts of fixed functionality
in OpenGL.
• In the vertex shader, gl_Position represents the
output position of the vertex. The value we assign
(vec4(0.0, 0.0, 0.5, 1.0)) places the vertex right in
the middle of OpenGL’s clip space, which is the
coordinate system expected by the next stage of
the OpenGL pipeline.
Fragment shader
#version 450 core
out vec4 color;
void main(void) {
color = vec4(0.0, 0.8, 1.0, 1.0);
}
• It starts with a #version 450 core declaration.
• Next, it declares color as an output variable
using the out keyword.
• In fragment shaders, the value of output
variables will be sent to the window or screen.
GLuint compile_shaders(void) {
GLuint vertex_shader;
GLuint fragment_shader;
GLuint program;
// Source code for vertex shader
static const GLchar * vertex_shader_source[] =
{
"#version 450 core \n"
" \n"
"void main(void) \n"
"{ \n"
" gl_Position = vec4(0.0, 0.0, 0.5, 1.0); \n"
"} \n"
};
static const GLchar * fragment_shader_source[] =
{
"#version 450 core \n"
" \n"
"out vec4 color; \n"
" \n"
"void main(void) \n"
"{ \n"
" color = vec4(0.0, 0.8, 1.0, 1.0); \n"
"} \n"
};
vertex_shader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertex_shader, 1, vertex_shader_source, NULL);
glCompileShader(vertex_shader);
// Create and compile fragment shader
fragment_shader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragment_shader, 1, fragment_shader_source, NULL);
glCompileShader(fragment_shader);
// Create program, attach shaders to it, and link it
program = glCreateProgram();
glAttachShader(program, vertex_shader);
glAttachShader(program, fragment_shader);
glLinkProgram(program);
// Delete the shaders as the program has them now
glDeleteShader(vertex_shader);
glDeleteShader(fragment_shader);
return program;
}
• glCreateShader() creates an empty shader object, ready to
accept source code and be compiled.
• glShaderSource() hands shader source code to the shader
object so that it can keep a copy of it.
• glCompileShader() compiles whatever source code is
contained in the shader object.
• glCreateProgram() creates a program object to which you
can attach shader objects.
• glAttachShader() attaches a shader object to a program
object.
• glLinkProgram() links all of the shader objects attached to a
program object together.
• glDeleteShader() deletes a shader object. Once a shader
has been linked into a program object, the program
contains the binary code and the shader is no longer
needed.
void startup() {
rendering_program = compile_shaders();
glCreateVertexArrays(1, &vertex_array_object);
glBindVertexArray(vertex_array_object);
}

void shutdown() {
glDeleteVertexArrays(1, &vertex_array_object);
glDeleteProgram(rendering_program);
}
• The compile_shaders function returns the newly
created program object. When we call this function, we
need to keep the returned program object somewhere
so that we can use it to draw things.
• Also, we really don’t want to recompile the whole
program every time we want to use it. So, we need a
function that is called once when the program starts
up.
• The sb7 application framework provides just such a
function: application::startup(), which we can override
in our sample application and use to perform any one-
time setup work.
• One final thing that we need to do before we can draw
anything is to create a vertex array object (VAO), which
is an object that represents the vertex fetch stage of
the OpenGL pipeline and is used to supply input to the
vertex shader.
• As our vertex shader doesn’t have any inputs right now,
we don’t need to do much with the VAO.
• Nevertheless, we still need to create the VAO so that
OpenGL will let us draw.
• To create the VAO, we call the OpenGL function
glCreateVertexArrays(); to attach it to our context, we
call glBindVertexArray().
• void glCreateVertexArrays(GLsizei n, GLuint * arrays);
• void glBindVertexArray(GLuint array);
• We modify our render() function to call glUseProgram() to
tell OpenGL to use our program object for rendering and
then call our first drawing command, glDrawArrays().

void render(double currentTime) {


const GLfloat color[] = { (float)sin(currentTime) * 0.5f + 0.5f,
(float)cos(currentTime) * 0.5f + 0.5f,
0.0f, 1.0f };
glClearBufferfv(GL_COLOR, 0, color);
// Use the program object we created earlier for rendering
glUseProgram(rendering_program);
// Draw one point
glDrawArrays(GL_POINTS, 0, 1);
}
• The glDrawArrays() function sends vertices into the
OpenGL pipeline.

void glDrawArrays(GLenum mode, GLint first, GLsizei count);

glDrawArrays(GL_POINTS, 0, 1);

• The last parameter is the number of vertices to render.
Each point is represented by a single vertex, so we tell
OpenGL to render only one vertex, resulting in just one
point being rendered.
Rendering our first point
• To make our point a little more visible, we can
ask OpenGL to draw it a little larger than a
single pixel. To do this, we’ll call the
glPointSize() function, whose prototype is
• void glPointSize(GLfloat size);
• This function sets the diameter of the point in
pixels to the value you specify in size.
• glPointSize(40.0f);
Making our first point bigger
Drawing Our First Triangle
void main(void) {
// Declare a hard-coded array of positions
const vec4 vertices[3] = vec4[3](vec4(0.25, -0.25, 0.5, 1.0),
vec4(-0.25, -0.25, 0.5, 1.0),
vec4(0.25, 0.25, 0.5, 1.0));
// Index into our array using gl_VertexID
gl_Position = vertices[gl_VertexID];
}

Producing multiple vertices in a vertex shader


• GLSL includes a special input to the vertex shader
called gl_VertexID, which is the index of the
vertex that is being processed at the time.
• The gl_VertexID input starts counting from the
value given by the first parameter of
glDrawArrays() and counts upward one vertex at
a time for count vertices (the third parameter of
glDrawArrays()).
• We can use this index to assign a different
position to each vertex
void render(double currentTime) {
const GLfloat color[] = { 0.0f, 0.2f, 0.0f, 1.0f };
glClearBufferfv(GL_COLOR, 0, color);
// Use the program object we created earlier for rendering
glUseProgram(rendering_program);
// Draw one triangle
glDrawArrays(GL_TRIANGLES, 0, 3);
}
Vertex Attributes
#version 450 core
// 'offset' is an input vertex attribute
layout (location = 0) in vec4 offset;
void main(void) {
const vec4 vertices[3] = vec4[3](vec4(0.25, -0.25, 0.5, 1.0),
vec4(-0.25, -0.25, 0.5, 1.0),
vec4(0.25, 0.25, 0.5, 1.0));
// Add 'offset' to our hard-coded vertex position
gl_Position = vertices[gl_VertexID] + offset;
}
• Declaring a variable with the in storage qualifier marks
the variable as an input to the vertex shader.
– The variable becomes known as a vertex attribute.
• We can tell this stage what to fill the variable with by using
one of the many variants of the vertex attribute functions,
glVertexAttrib*().
• The prototype for glVertexAttrib4fv(), which we use in this
example, is
• void glVertexAttrib4fv(GLuint index, const GLfloat * v);
– The parameter index is used to reference the attribute.

• You may have noticed the layout (location =0) code in the
declaration of the offset attribute. This is a layout qualifier,
which we have used to set the location of the vertex
attribute to zero. This location is the value we’ll pass in
index to refer to the attribute.
• Each time we call one of the glVertexAttrib*()
functions, it will update the value of the vertex
attribute that is passed to the vertex shader.
virtual void render(double currentTime) {
const GLfloat color[] = { (float)sin(currentTime) * 0.5f + 0.5f,
(float)cos(currentTime) * 0.5f + 0.5f,
0.0f, 1.0f };
glClearBufferfv(GL_COLOR, 0, color);
// Use the program object we created earlier for rendering
glUseProgram(rendering_program);
GLfloat attrib[] = { (float)sin(currentTime) * 0.5f,
(float)cos(currentTime) * 0.6f,
0.0f, 0.0f };
// Update the value of input attribute 0
glVertexAttrib4fv(0, attrib);
// Draw one triangle
glDrawArrays(GL_TRIANGLES, 0, 3); }
Passing Data from Stage to Stage
• Anything you write to an output variable in one
shader is sent to a similarly named variable
declared with the in keyword in the subsequent
stage.
• For example, if your vertex shader declares a
variable called vs_color using the out keyword, it
would match up with a variable named vs_color
declared with the in keyword in the fragment
shader stage (assuming no other stages were
active in between)
Vertex shader with an output
#version 450 core
// 'offset' and 'color' are input vertex attributes
layout (location = 0) in vec4 offset;
layout (location = 1) in vec4 color;
// 'vs_color' is an output that will be sent to the next shader stage
out vec4 vs_color;
void main(void) {
const vec4 vertices[3] = vec4[3](vec4(0.25, -0.25, 0.5, 1.0),
vec4(-0.25, -0.25, 0.5, 1.0),
vec4(0.25, 0.25, 0.5, 1.0));
// Add 'offset' to our hard-coded vertex position
gl_Position = vertices[gl_VertexID] + offset;
// Output a fixed value for vs_color
vs_color = color;
}
Fragment shader with an input
#version 450 core
// Input from the vertex shader
in vec4 vs_color;
// Output to the framebuffer
out vec4 color;
void main(void) {
// Simply assign the color we were given by the vertex shader to our output
color = vs_color;
}
Interface Blocks
• In most nontrivial applications, you will likely
want to communicate a number of different
pieces of data between stages; these may include
arrays, structures, and other complex
arrangements of variables.
• To achieve this, we can group together a number
of variables into an interface block. The
declaration of an interface block looks a lot like a
structure declaration, except that it is declared
using the in or out keyword.
………
// Declare VS_OUT as an output interface block
out VS_OUT {
vec4 color; // Send color to the next stage
} vs_out;

void main(void) {
…..
vs_out.color = color;
}
• Note that the interface block has both a block name
(VS_OUT, uppercase) and an instance name (vs_out,
lowercase).
• Interface blocks are matched between stages using the
block name (VS_OUT in this case), but are referenced in
shaders using the instance name.
#version 450 core
// Declare VS_OUT as an input interface block
in VS_OUT {
vec4 color;
} fs_in;
// Output to the framebuffer
out vec4 color;
void main(void) {
// Simply assign the color we were given by the vertex shader to our output
color = fs_in.color;
}
• Note that interface blocks are only for moving
data from shader stage to shader stage—you
can’t use them to group together inputs to the
vertex shader or outputs from the fragment
shader.
Our first geometry shader
#version 450 core
layout (triangles) in;
layout (points, max_vertices = 3) out;

void main(void) {
int i;
for (i = 0; i < gl_in.length(); i++) {
gl_Position = gl_in[i].gl_Position;
EmitVertex();
}
}
• The shader converts triangles into points so that we
can see their vertices.
• The first layout qualifier indicates that the geometry
shader is expecting to see triangles as its input.
• The second layout qualifier tells OpenGL that the
geometry shader will produce points and that the
maximum number of points that each shader will
produce will be three.
• In the main function, a loop runs through all of the
members of the gl_in array, which is determined by
calling its .length() function.
• We actually know that the length of the array will be three because
we are processing triangles and every triangle has three vertices.
• The outputs of the geometry shader are again similar to those of a
vertex shader.
• In particular, we write to gl_Position to set the position of the
resulting vertex.
• Next, we call EmitVertex(), which produces a vertex at the output of
the geometry shader.
• Geometry shaders automatically call EndPrimitive() at the end of
your shader, so calling this function explicitly is not necessary in this
example.
• As a result of running this shader, three vertices will be produced
and rendered as points.
Deriving a fragment’s color from its position

#version 450 core


out vec4 color;
void main(void) {
color = vec4(sin(gl_FragCoord.x * 0.25) * 0.5 + 0.5,
cos(gl_FragCoord.y * 0.25) * 0.5 + 0.5,
sin(gl_FragCoord.x * 0.15) * cos(gl_FragCoord.y * 0.15),
1.0);
}
• Available as input to the fragment shader are
several built-in variables such as
gl_FragCoord, which contains the position of
the fragment within the window. It is possible
to use these variables to produce a unique
color for each fragment.
Vertex shader with an output
#version 450 core
// 'vs_color' is an output that will be sent to the next shader stage
out vec4 vs_color;
void main(void) {
const vec4 vertices[3] = vec4[3](vec4(0.25, -0.25, 0.5, 1.0),
vec4(-0.25, -0.25, 0.5, 1.0),
vec4(0.25, 0.25, 0.5, 1.0));
const vec4 colors[] = vec4[3](vec4(1.0, 0.0, 0.0, 1.0),
vec4(0.0, 1.0, 0.0, 1.0),
vec4(0.0, 0.0, 1.0, 1.0));
gl_Position = vertices[gl_VertexID];
// Output a different color for each vertex
vs_color = colors[gl_VertexID];
}
Deriving a fragment’s color from its position
#version 450 core
// 'vs_color' is the color produced by the vertex shader
in vec4 vs_color;
out vec4 color;
void main(void) {
color = vs_color;
}
The inputs to the fragment shader are somewhat unlike inputs to other shader
stages, in that OpenGL interpolates their values across the primitive that’s being
rendered.
• The first programmable stage in the OpenGL
pipeline (i.e., one that you can write a shader for)
is the vertex shader. Before the shader runs,
OpenGL will fetch the inputs to the vertex shader
in the vertex fetch stage.
• Your vertex shader’s responsibility is to set the
position of the vertex that will be fed to the next
stage in the pipeline. It can also set a number of
other user-defined and built-in outputs that
further describe the vertex to OpenGL.
1. void glVertexAttribFormat(GLuint attribindex,
GLint size, GLenum type, GLboolean normalized,
GLuint relativeoffset);
2. void glVertexAttribBinding(GLuint attribindex,
GLuint bindingindex);
3. void glBindVertexBuffer(GLuint bindingindex,
GLuint buffer, GLintptr offset, GLsizei stride);
Declaration of multiple vertex attributes
#version 450 core
// Declare a number of vertex attributes
layout (location = 0) in vec4 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 tex_coord;
// Note that we intentionally skip location 3 here
layout (location = 4) in vec4 color;
layout (location = 5) in int material_id;
• Consider that we are using a data structure to
represent our vertices, which is defined in C as follows:
typedef struct VERTEX_t
{
vmath::vec4 position;
vmath::vec3 normal;
vmath::vec2 tex_coord;
GLubyte color[3];
int material_id;
} VERTEX;
// position
glVertexAttribFormat(0, 4, GL_FLOAT, GL_FALSE, offsetof(VERTEX, position));
// normal
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, offsetof(VERTEX, normal));
// tex_coord
glVertexAttribFormat(2, 2, GL_FLOAT, GL_FALSE, offsetof(VERTEX, tex_coord));
// color[3]
glVertexAttribFormat(4, 3, GL_UNSIGNED_BYTE, GL_TRUE, offsetof(VERTEX, color));
// material_id
glVertexAttribIFormat(5, 1, GL_INT, offsetof(VERTEX, material_id));
• Now that you’ve set up the vertex attribute
format, you need to tell OpenGL which buffers
to read the data from.
• Each vertex shader can have any number of
input attributes (up to an implementation-
defined limit), and OpenGL can provide data
for them by reading from any number of
buffers (again, up to a limit).
• Some vertex attributes can share space in a
buffer; others may reside in different buffer
objects. Rather than individually specifying which
buffer objects are used for each vertex shader
input, we can instead group inputs together and
associate groups of them with a set of buffer
binding points.
• Then, when you change the buffer bound to one
of these binding points, it will change the buffer
used to supply data for all of the attributes that
are mapped to that binding point.
• In our example, we’re going to store all of the
vertex attributes in a single buffer
• void glVertexAttribBinding(GLuint attribindex,
GLuint bindingindex);

glVertexAttribBinding(0, 0); // position
glVertexAttribBinding(1, 0); // normal
glVertexAttribBinding(2, 0); // tex_coord
glVertexAttribBinding(4, 0); // color
glVertexAttribBinding(5, 0); // material_id
• Finally, we need to bind a buffer object to
each of the binding points that is used by our
mapping. To do this, we call
glBindVertexBuffer().
• void glBindVertexBuffer(GLuint bindingindex,
GLuint buffer, GLintptr offset, GLsizei stride);
– stride = sizeof(VERTEX)
• glVertexAttribPointer() is a handy way to set
up virtually everything about a vertex
attribute. However, it can actually be
considered more of a helper function that sits
on top of a few lower-level functions:
glVertexAttribFormat()
glVertexAttribBinding(), and
glBindVertexBuffer().
• void glVertexAttribPointer(GLuint index, GLint
size, GLenum type, GLboolean normalized,
GLsizei stride, const GLvoid * pointer);
– If stride is 0, the generic vertex attributes are
understood to be tightly packed in the array.
Data
Creating and initializing a buffer
// The type used for names in OpenGL is GLuint
GLuint buffer;
// Create a buffer
glCreateBuffers(1, &buffer);
// Specify the data store parameters for the buffer
glNamedBufferStorage(
buffer, // Name of the buffer
1024 * 1024, // 1 MB of space
NULL, // No initial data
GL_MAP_WRITE_BIT); // Allow map for writing
// Now bind it to the context using the GL_ARRAY_BUFFER binding
point
glBindBuffer(GL_ARRAY_BUFFER, buffer);
Updating the content of a buffer with
glBufferSubData()
// This is the data that we will place into the buffer object
static const float data[] =
{
0.25, -0.25, 0.5, 1.0,
-0.25, -0.25, 0.5, 1.0,
0.25, 0.25, 0.5, 1.0
};
// Put the data into the buffer at offset zero
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(data), data);
Mapping a buffer’s data store with
glMapNamedBuffer()
// This is the data that we will place into the buffer object
static const float data[] = {
0.25, -0.25, 0.5, 1.0,
-0.25, -0.25, 0.5, 1.0,
0.25, 0.25, 0.5, 1.0
};
// Get a pointer to the buffer's data store
void * ptr = glMapNamedBuffer(buffer, GL_WRITE_ONLY);
// Copy our data into it...
memcpy(ptr, data, sizeof(data));
// Tell OpenGL that we're done with the pointer
glUnmapNamedBuffer(buffer);
Setting up a vertex attribute
// First, bind a vertex buffer to the VAO
glVertexArrayVertexBuffer(vao, // Vertex array object
0, // First vertex buffer binding
buffer, // Buffer object
0, // Start from the beginning
sizeof(vmath::vec4)); // Each vertex is one vec4
// Now, describe the data to OpenGL, tell it where it is, and turn on automatic
// vertex fetching for the specified attribute
glVertexArrayAttribFormat(vao, // Vertex array object
0, // First attribute
4, // Four components
GL_FLOAT, // Floating-point data
GL_FALSE, // Normalized – ignored for floats
0); // First element of the vertex
glEnableVertexArrayAttrib(vao, 0);
Using an attribute in a vertex shader
#version 450 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 color;

void main(void)
{
……
}
Multiple separate vertex attributes
GLuint buffer[2];
GLuint vao;
static const GLfloat positions[] = { ... };
static const GLfloat colors[] = { ... };
// Create the vertex array object
glCreateVertexArrays(1, &vao);
// Create two buffers
glCreateBuffers(2, &buffer[0]);
// Initialize the first buffer
glNamedBufferStorage(buffer[0], sizeof(positions), positions, 0);
// Bind it to the vertex array - offset zero, stride = sizeof(vec3)
glVertexArrayVertexBuffer(vao, 0, buffer[0], 0, sizeof(vmath::vec3));
// Tell OpenGL what the format of the attribute is
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);
// Tell OpenGL which vertex buffer binding to use for this attribute
glVertexArrayAttribBinding(vao, 0, 0);
// Enable the attribute
glEnableVertexArrayAttrib(vao, 0);
// Perform similar initialization for the second buffer
glNamedBufferStorage(buffer[1], sizeof(colors), colors, 0);
glVertexArrayVertexBuffer(vao, 1, buffer[1], 0,
sizeof(vmath::vec3));
glVertexArrayAttribFormat(vao, 1, 3, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(vao, 1, 1);
glEnableVertexArrayAttrib(vao, 1);
Multiple interleaved vertex attributes
struct vertex
{
// Position
float x;
float y;
float z;
// Color
float r;
float g;
float b;
};
GLuint vao;
GLuint buffer;
static const vertex vertices[] = { ... };
// Create the vertex array object
glCreateVertexArrays(1, &vao);
// Allocate and initialize a buffer object
glCreateBuffers(1, &buffer);
glNamedBufferStorage(buffer, sizeof(vertices),
vertices, 0);
// Set up two vertex attributes - first positions
glVertexArrayAttribBinding(vao, 0, 0);
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE,
offsetof(vertex, x));
glEnableVertexArrayAttrib(vao, 0);
// Now colors
glVertexArrayAttribBinding(vao, 1, 0);
glVertexArrayAttribFormat(vao, 1, 3, GL_FLOAT, GL_FALSE,
offsetof(vertex, r));
glEnableVertexArrayAttrib(vao, 1);
// Finally, bind our one and only buffer to the vertex array object
glVertexArrayVertexBuffer(vao, 0, buffer, 0, sizeof(vertex));
Spinning cube
// First create and bind a vertex array object
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
static const GLfloat vertex_positions[] = {
-0.25f, 0.25f, -0.25f,
-0.25f, -0.25f, -0.25f,
0.25f, -0.25f, -0.25f,
0.25f, -0.25f, -0.25f,
0.25f, 0.25f, -0.25f,
-0.25f, 0.25f, -0.25f,
….
….
-0.25f, 0.25f, -0.25f,
0.25f, 0.25f, -0.25f,
0.25f, 0.25f, 0.25f,
0.25f, 0.25f, 0.25f,
-0.25f, 0.25f, 0.25f,
-0.25f, 0.25f, -0.25f
};
// Now generate some data and put it in a buffer object
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER,
sizeof(vertex_positions), vertex_positions,
GL_STATIC_DRAW);
// Set up our vertex attribute
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
Building the model–view matrix for a
spinning cube
float f = (float)currentTime * (float)M_PI * 0.1f;
vmath::mat4 mv_matrix =
vmath::translate(0.0f, 0.0f, -4.0f) *
vmath::translate(sinf(2.1f * f) * 0.5f,
cosf(1.7f * f) * 0.5f,
sinf(1.3f * f) * cosf(1.5f * f) * 2.0f) *
vmath::rotate((float)currentTime * 45.0f, 0.0f, 1.0f,
0.0f) *
vmath::rotate((float)currentTime * 81.0f, 1.0f, 0.0f,
0.0f);
Updating the projection matrix for the
spinning cube
void onResize(int w, int h)
{
sb7::application::onResize(w, h);
aspect = (float)info.windowWidth /
(float)info.windowHeight;
proj_matrix = vmath::perspective(50.0f,
aspect,
0.1f,
1000.0f);
}
Rendering loop for the spinning cube
// Clear the framebuffer with dark green
glClearBufferfv(GL_COLOR, 0, sb7::color::Green);
// Activate our program
glUseProgram(program);
// Set the model-view and projection matrices
glUniformMatrix4fv(mv_location, 1, GL_FALSE, mv_matrix);
glUniformMatrix4fv(proj_location, 1, GL_FALSE, proj_matrix);
// Draw 6 faces of 2 triangles of 3 vertices each = 36 vertices
glDrawArrays(GL_TRIANGLES, 0, 36);
Spinning cube vertex shader
#version 450 core
in vec4 position;
out VS_OUT {
vec4 color;
} vs_out;
uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
void main(void) {
gl_Position = proj_matrix * mv_matrix * position;
vs_out.color = position * 2.0 + vec4(0.5, 0.5, 0.5, 0.0);
}
Spinning cube fragment shader
#version 450 core
out vec4 color;
in VS_OUT {
vec4 color;
} fs_in;
void main(void) {
color = fs_in.color;
}
A few frames from the spinning cube
application
Rendering loop for 24 spinning cubes
// Clear the framebuffer with dark green and clear
// the depth buffer to 1.0
glClearBufferfv(GL_COLOR, 0, sb7::color::Green);
glClearBufferfi(GL_DEPTH_STENCIL, 0, 1.0f, 0);
// Activate our program
glUseProgram(program);
// Set the model-view and projection matrices
glUniformMatrix4fv(proj_location, 1, GL_FALSE,
proj_matrix);
for (i = 0; i < 24; i++) {
// Calculate a new model-view matrix for each one
float f = (float)i + (float)currentTime * 0.3f;
vmath::mat4 mv_matrix =
vmath::translate(0.0f, 0.0f, -20.0f) *
vmath::rotate((float)currentTime * 45.0f, 0.0f, 1.0f, 0.0f) *
vmath::rotate((float)currentTime * 21.0f, 1.0f, 0.0f, 0.0f) *
vmath::translate(sinf(2.1f * f) * 2.0f,
cosf(1.7f * f) * 2.0f,
sinf(1.3f * f) * cosf(1.5f * f) * 2.0f);
// Update the uniform
glUniformMatrix4fv(mv_location, 1, GL_FALSE, mv_matrix);
// Draw - notice that we haven't updated the projection matrix
glDrawArrays(GL_TRIANGLES, 0, 36);
}
Setting up indexed cube geometry
static const GLfloat vertex_positions[] = {
-0.25f, -0.25f, -0.25f,
-0.25f, 0.25f, -0.25f,
0.25f, -0.25f, -0.25f,
0.25f, 0.25f, -0.25f,
0.25f, -0.25f, 0.25f,
0.25f, 0.25f, 0.25f,
-0.25f, -0.25f, 0.25f,
-0.25f, 0.25f, 0.25f, };
static const GLushort vertex_indices[] = {
0, 1, 2,
2, 1, 3,
2, 3, 4,
4, 3, 5,
4, 5, 6,
6, 5, 7,
6, 7, 0,
0, 7, 1,
6, 0, 2,
2, 4, 6,
7, 5, 3,
7, 3, 1 };
glGenBuffers(1, &position_buffer);
glBindBuffer(GL_ARRAY_BUFFER, position_buffer);
glBufferData(GL_ARRAY_BUFFER,
sizeof(vertex_positions),
vertex_positions,
GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
glGenBuffers(1, &index_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
sizeof(vertex_indices),
vertex_indices,
GL_STATIC_DRAW);
Drawing indexed cube geometry
// Clear the framebuffer with dark green
static const GLfloat green[] = { 0.0f, 0.25f, 0.0f, 1.0f };
glClearBufferfv(GL_COLOR, 0, green);
// Activate our program
glUseProgram(program);
// Set the model-view and projection matrices
glUniformMatrix4fv(mv_location, 1, GL_FALSE, mv_matrix);
glUniformMatrix4fv(proj_location, 1, GL_FALSE, proj_matrix);
// Draw 6 faces of 2 triangles of 3 vertices each = 36 vertices
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, 0);
The Base Vertex
• The first advanced version of glDrawElements()
that takes an extra parameter is
glDrawElementsBaseVertex(), whose prototype
is:
void glDrawElementsBaseVertex(GLenum mode,
                              GLsizei count,
                              GLenum type,
                              GLvoid * indices,
                              GLint basevertex);
• When you call glDrawElementsBaseVertex(),
OpenGL will fetch the vertex index from the
buffer bound to the
GL_ELEMENT_ARRAY_BUFFER and then add
basevertex to it before it is used to index into
the array of vertices.
– This allows you to store a number of different
pieces of geometry in the same buffer and then
offset into it using basevertex.
Shader Storage Blocks
• In addition to the read-only access to buffer
objects that is provided by uniform blocks, buffer
objects can be used for general storage from
shaders using shader storage blocks.
• The biggest difference between a uniform block
and a shader storage block is that your shader
can write into the shader storage block;
• Furthermore, it can even perform atomic
operations on members of a shader storage
block.
• Shader storage blocks also have a much higher
upper size limit.
Example shader storage block
declaration
#version 450 core
struct my_structure {
int pea;
int carrot;
vec4 potato;
};
layout (binding = 0, std430) buffer my_storage_block {
vec4 foo;
vec3 bar;
int baz[24];
my_structure veggies;
};
Using a shader storage block in place
of vertex attributes
#version 450 core
struct vertex {
vec4 position;
vec3 color;
};
layout (binding = 0, std430) buffer my_vertices {
vertex vertices[];
};
uniform mat4 transform_matrix;
out VS_OUT {
vec3 color;
} vs_out;
void main(void) {
gl_Position = transform_matrix * vertices[gl_VertexID].position;
vs_out.color = vertices[gl_VertexID].color;
}
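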
• You can place data into the buffer using
functions like glBufferData() just as you would
with a uniform block. Because the buffer is
writable by the shader, if you call
glMapBufferRange() with GL_MAP_READ_BIT
(or GL_MAP_WRITE_BIT) as the access mode,
you will be able to read the data produced by
your shader.
Atomic Memory Operations
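In GLSL, members of a shader storage block can be updated with atomic built-ins such as atomicAdd(). A sketch (the block name and layout are illustrative, not from the slides): each invocation atomically reserves a unique slot in an append-style buffer.

```glsl
#version 450 core

layout (binding = 0, std430) buffer append_block
{
    uint counter;      // shared by every shader invocation
    uint items[];
};

void main(void)
{
    // atomicAdd returns the value *before* the addition, so each
    // invocation receives a unique slot even when many run in parallel
    uint slot = atomicAdd(counter, 1);
    items[slot] = uint(gl_VertexID);
}
```

A plain `counter++` here would be a read-modify-write race between invocations; the atomic form makes the whole update indivisible.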
Synchronizing Access to Memory
• When you are only reading from a buffer, data is
almost always going to be available when you
think it should be and you don’t need to worry
about the order in which your shaders read from
it.
• However, when your shader starts writing data
into buffer objects, either through writes to
variables in shader storage blocks or through
explicit calls to the atomic operation functions
that might write to memory, there are cases
where you need to avoid hazards.
• Memory hazards fall roughly into three
categories:
– A read-after-write (RAW) hazard can occur when
your program attempts to read from a memory
location right after it has written to it. Depending
on the system architecture, the read and write
may be reordered such that the read actually ends
up being executed before the write is complete,
resulting in the old data being returned to the
application.
– A write-after-write (WAW) hazard can occur when
a program performs a write to the same memory
location twice in a row. You might expect that
whatever data was written last would overwrite
the data written first and be the value that ends
up staying in memory. Again, on some
architectures this is not guaranteed; in some
circumstances the first data written by the
program might actually be the data that ends up
in memory.
– A write-after-read (WAR) hazard normally occurs
only in parallel processing systems (such as
graphics processors) and may happen when one
thread of execution (such as a shader invocation)
performs a write to memory after another thread
believes that it has read from memory. If these
operations are reordered, the thread that
performed the read may end up getting the data
that was written by the second thread without
expecting it.
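OpenGL's tools for avoiding these hazards are memory barriers: the GLSL memoryBarrier()/memoryBarrierBuffer() built-ins order memory operations issued by shaders, and glMemoryBarrier() orders them between API commands. A hedged sketch of the shader side (names illustrative):

```glsl
#version 450 core

layout (binding = 0, std430) buffer data_block
{
    uint data[];
};

void main(void)
{
    // Write first...
    data[gl_VertexID] = uint(gl_VertexID);

    // ...then ensure the write is ordered before subsequent buffer
    // accesses, avoiding a read-after-write (RAW) hazard
    memoryBarrierBuffer();

    uint check = data[gl_VertexID]; // now guaranteed to see the write
}
```

Between two draws where the second reads what the first wrote, the application side would also call glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT) before issuing the second draw.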
Texture
• Application steps:
– Create/initialize/bind texture to a “texture unit”
• //make #0 the active texture unit
glActiveTexture(GL_TEXTURE0);
• //make “texID” the active texture (bind it to Texture Unit 0)
glBindTexture(GL_TEXTURE_2D, texID);
– Create/load shader program (vertex and fragment
shaders)
– Determine location of shader uniform sampler2D
int texLoc = glGetUniformLocation(progObj, "theTexture");
– Pass texture unit ID to shader via sampler2D
// set the shader's sampler2D ("theTexture") to Texture Unit 0
glUniform1i(texLoc, 0);
Texture Mapping Vertex Shader
in vec2 texCoord0; //the (s,t) texture coordinates of
the vertex
out vec2 varyingTexCoord0; //the vertex texture
coordinates
void main(void) {
……
varyingTexCoord0 = texCoord0;
……
}
Texture Mapping Fragment Shader
in vec4 varyingColor;
in vec2 varyingTexCoord0; //the interpolated texture coordinate
uniform sampler2D theTexture; // texture from application
uniform bool texturingEnabled; // switch indicating whether to apply texture mapping
out vec4 fragColor; // the color assigned to the fragment
void main(void) {
vec4 color;
if (texturingEnabled) { // get the texel color at (s,t) in the texture
vec3 texColor = vec3( texture2D( theTexture, varyingTexCoord0.st ) );
// assign texel color to fragment
color = vec4(texColor, 1);
} else { // no texturing; just use input color for fragment
color = varyingColor;
}
// output fragment color
fragColor = color; }
• Shader steps:
– Access texture via texture2D function:
• texel = texture2D(sampler, texCoords)
Generating, initializing, and binding a
texture
// The type used for names in OpenGL is GLuint
GLuint texture;
// Create a new 2D texture object
glCreateTextures(GL_TEXTURE_2D, 1, &texture);
// Specify the amount of storage we want to use for the texture
glTextureStorage2D(texture, // Texture object
1, // 1 mipmap level
GL_RGBA32F, // 32-bit floating-point RGBA data
256, 256); // 256 x 256 texels
// Now bind it to the context using the GL_TEXTURE_2D binding point
glBindTexture(GL_TEXTURE_2D, texture);
Updating texture data with
glTexSubImage2D()
// Define some data to upload into the texture
float * data = new float[256 * 256 * 4];
// generate_texture() is a function that fills memory with image data
generate_texture(data, 256, 256);
// Assume that "texture" is a 2D texture that we created earlier
glTextureSubImage2D(texture, // Texture object
0, // Level 0
0, 0, // Offset 0, 0
256, 256, // 256 x 256 texels, replace entire image
GL_RGBA, // Four-channel data
GL_FLOAT, // Floating-point data
data); // Pointer to data
// Free the memory we allocated before - OpenGL now has our data
delete [] data;
Reading from a texture in GLSL
#version 450 core
uniform sampler2D s;
out vec4 color;
void main(void) {
color = texelFetch(s, ivec2(gl_FragCoord.xy), 0);
}
• Once you’ve created a texture object and placed some
data in it, you can read that data in your shaders and
use it to color fragments, for example. Textures are
represented in shaders as sampler variables and are
hooked up to the outside world by declaring uniforms
with sampler types.
• The sampler type that represents two-dimensional
textures is sampler2D. To access our texture in a
shader, we can create a uniform variable with the
sampler2D type, and then use the texelFetch built-in
function with that uniform and a set of texture
coordinates at which to read from the texture
Vertex shader with a single texture coordinate
#version 450 core
uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
layout (location = 0) in vec4 position;
layout (location = 4) in vec2 tc;
out VS_OUT {
vec2 tc;
} vs_out;
void main(void) {
// Calculate the position of each vertex
vec4 pos_vs = mv_matrix * position;
// Pass the texture coordinate through unmodified
vs_out.tc = tc;
gl_Position = proj_matrix * pos_vs;
}
Fragment shader with a single texture
coordinate
#version 450 core
layout (binding = 0) uniform sampler2D tex_object;
// Input from vertex shader
in VS_OUT {
vec2 tc;
} fs_in;
// Output to framebuffer
out vec4 color;
void main(void) {
// Simply read from the texture at the (scaled) coordinates and
// assign the result to the shader's output.
color = texture(tex_object, fs_in.tc * vec2(3.0, 1.0));
}