OpenGL Tutorial (in Qt)
Release 1.0
Nokia, Qt Learning
CONTENTS
1   Introduction
    1.1  What's OpenGL
    1.2  Drawing in 3D Space
    1.3  A Short Recapitulation of Linear Algebra
    1.4  Coordinate Systems & Frame Concept
    1.5  The OpenGL Rendering Pipeline
    1.6  OpenGL API
    1.7  The OpenGL Shading Language

2   Using OpenGL in your Qt Application
    2.1  Hello OpenGL
    2.2  Rendering in 3D
    2.3  Coloring
    2.4  Texture Mapping
    2.5  Lighting
    2.6  Buffer Object

3   Conclusion & Further Reading
CHAPTER
ONE
INTRODUCTION
This tutorial provides a basic introduction to OpenGL and 3D computer graphics in general. It shows how to make use of Qt and its OpenGL-related classes to create 3D graphics. We will use the core features of OpenGL 3.0 / 2.0 ES and all following versions, which means that we will be utilizing OpenGL's programmable rendering pipeline to write our own shaders with the OpenGL Shading Language (GLSL) / OpenGL ES Shading Language (GLSL / ES).

Chapter one gives an introduction to 3D computer graphics and the OpenGL API, including the OpenGL Shading Language (GLSL) / OpenGL ES Shading Language (GLSL / ES). If you are already familiar with this topic and only want to see how to use OpenGL in your Qt programs, you can skip this introductory chapter and continue on to chapter two. In chapter two, we present examples which utilize the information covered in chapter one and show how to use OpenGL together with Qt's OpenGL-related functionality. At the end of this tutorial you find some references and links which may come in handy, especially when working through the examples.

Please note that this tutorial is meant to get you started with this topic and cannot go into the same depth as a dedicated OpenGL book. Also note that Qt's OpenGL-related classes do a lot of work for you by hiding some of the details which you would encounter if you wrote your programs using only OpenGL's API. In the example part, we will use Qt's high-level functionality whenever possible and only briefly name the differences. So if you intend to get a complete understanding of how to use native OpenGL, you should additionally consult a tutorial or book dedicated to this topic.
It also gives you a lot of freedom to invent your own algorithms and create a variety of new graphics effects. The sole purpose of OpenGL is to render computer graphics. It does not provide any functionality for window management or for handling events like user input. This is what we use Qt for.
To define the spatial properties of your objects, you set up a list of points, lines, and/or triangles. Each primitive in turn is specified by the position of its corners (a vertex / vertices). Thus it is necessary to have a basic understanding of how to define points in space and how to manipulate them efficiently. But we will brush up our linear algebra knowledge in a moment.

To see those objects, you need to apply coloring to your primitives. Color values are often defined for each primitive (for each vertex, to be precise) and used to paint or fill them in with color. For more realistic applications, images (called textures) are placed on top of them. The appearance can be further adapted according to material properties or lighting.

So how do we actually get our scene displayed on the screen? Since the computer screen is a two-dimensional device, we need to project our objects onto a plane. This plane is then mapped to a region on our screen called the viewport. To understand this process illustratively, imagine that you are standing in front of a window and sketching the outlines of the objects you see outside onto the glass without moving your head. The drawing on the glass then represents a two-dimensional projection of your environment. Though the technique is different, this kind of projection also occurs in a camera when taking a picture.
The clipped pyramid is called the viewing volume. Everything that is located inside this volume is projected onto its near side towards the imaginary viewer. Everything else is not drawn.

There are two major types of projections: perspective projections and orthographic projections. What we just introduced is called perspective projection. It has a viewing volume in the form of a frustum and adds the illusion that distant objects appear smaller than closer objects of the same size. This greatly contributes to realism and is therefore used for simulations, games, and VR (virtual reality) applications. The other type is called orthographic projection. Orthographic projections are specified by a rectangular viewing volume. Any two objects that have the same size also have the same size in the projection, regardless of their distance from the viewer. This is often used in CAD (computer-aided design) tools or when dealing with 2D graphics.
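As a brief illustration (our example, not from the original text; the numeric values are arbitrary), Qt's QMatrix4x4 class, which we will use in chapter two, can produce both kinds of projection matrices:

#include <QMatrix4x4>

QMatrix4x4 perspectiveProjection;
perspectiveProjection.perspective(60.0, 4.0 / 3.0, 0.1, 100.0);   // vertical field of view, aspect ratio, near plane, far plane

QMatrix4x4 orthographicProjection;
orthographicProjection.ortho(-2.0, 2.0, -1.5, 1.5, 0.1, 100.0);   // left, right, bottom, top, near plane, far plane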
An object is then represented by a list of vertices.
One thing you will often want to do is change the position, size, or orientation of your object. Translating an object is as easy as adding a constant vector (here named d), which specifies the displacement, to all of your object's vertices (here named v).
Scaling means multiplying the vertices by the desired ratio (here named s).
Rotating, stretching, shearing, or reflecting is more complicated and is achieved by multiplying the vertices by a transformation matrix (here named T). A matrix is basically a table of coefficients that, when multiplied by a vector, yields a new vector, with each element being a linear combination of the multiplied vector's elements.
As an example, these are the matrices that rotate a vector around the coordinate system's x, y, and z axes. Arbitrary rotations can be composed by a combination of these.
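For reference (the original figures are not reproduced here; these are the standard rotation matrices), a rotation by an angle θ around the x, y, and z axes is given by:

R_x(\theta) =
\begin{pmatrix}
1 & 0 & 0 \\
0 & \cos\theta & -\sin\theta \\
0 & \sin\theta & \cos\theta
\end{pmatrix},
\qquad
R_y(\theta) =
\begin{pmatrix}
\cos\theta & 0 & \sin\theta \\
0 & 1 & 0 \\
-\sin\theta & 0 & \cos\theta
\end{pmatrix},
\qquad
R_z(\theta) =
\begin{pmatrix}
\cos\theta & -\sin\theta & 0 \\
\sin\theta & \cos\theta & 0 \\
0 & 0 & 1
\end{pmatrix}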
There is also one matrix that does not change a vector at all. It is called the identity matrix and consists of ones on the main diagonal and zeros elsewhere.
If you use a matrix to transform a vector, it is important that the matrix is written on the left side of the multiplication sign and the vector on the right side. Also, the number of the matrix's columns needs to match the number of the vector's components. Otherwise the multiplication is mathematically invalid and math libraries may return unexpected results. Keep in mind that transformations are not commutative, i.e. the result of a concatenation of transformations depends on their order. For example, it makes a difference whether you first rotate an object and then translate it, or whether you do it the other way around.

As it is more convenient (and even faster for OpenGL) to express all these operations as a single matrix-vector multiplication, we extend our formalism to so-called homogeneous coordinates. This also enables us to easily apply all kinds of affine transformations, such as the projections we discussed in chapter 1.2. We basically add a fourth dimension, called a scaling factor, to our vertices. This might seem to complicate things, but you do not really have to pay attention to that factor, as it is set to 1 by default and you will rarely change it yourself. All you need to do is declare your vertices with an additional element set to 1 (which is often even implied by default). (In this chapter, we denote homogeneous coordinates by a hat on the variable names.)
A series of transformations can be written as a series of matrix multiplications, and the resulting transformation can be stored in a single matrix. In homogeneous coordinates, two vertices whose components differ only by a common factor denote the same location; for example, (x, y, z, 1) and (2x, 2y, 2z, 2) represent the same point in space but are only noted down differently.
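To illustrate the benefit (our example, using the notation above): in homogeneous coordinates, a translation by d = (d_x, d_y, d_z) becomes a single matrix-vector product,

\hat{v}' =
\begin{pmatrix}
1 & 0 & 0 & d_x \\
0 & 1 & 0 & d_y \\
0 & 0 & 1 & d_z \\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
=
\begin{pmatrix} x + d_x \\ y + d_y \\ z + d_z \\ 1 \end{pmatrix}

which can be concatenated with rotations, scalings, and projections into one matrix.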
As for the frame concept, every instance of an object is bound to its own coordinate system (also referred to as its frame). The position and orientation of each object is then defined by placing the object's frame inside the world's frame. The same applies to the viewer (or camera), with one difference: for simplicity, we actually do not place the viewer's frame inside the world's frame, but instead do it the other way around (i.e. we place the world's frame inside the viewer's frame).
This means we define the position and rotation of every instance of an object in relation to the world's coordinate system. The matrix defined by these parameters, which allows us to calculate an object's vertices inside the world's coordinate system, is commonly called the model matrix. Subsequently, we move from world coordinates to viewer coordinates (commonly called eye coordinates) using a matrix called the view matrix in just the same way. After that, we apply the projection, which transforms the object's vertices from viewer coordinates to the projection plane. This is done by a matrix called the projection matrix, which yields normalized device coordinates with x, y, and z values ranging from -1 to +1 (the -1 and +1 values correspond to positions on the viewing volume's borders). OpenGL then maps all the object's points on this projection plane to the viewport that is shown on the screen.

Another matrix that is often used is the model-view-projection matrix. It is the concatenation of the aforementioned three matrices. The model-view-projection matrix is generally passed to the vertex shader, which multiplies this matrix by the object's vertices in order to calculate the projected form. You will learn about shaders in a later chapter.

The definition of these matrices has various advantages:
- In the design phase, every object's model (i.e. its set of vertices) can be specified in relation to an arbitrary coordinate system (for example, its center point).
- The transformation process is divided into small steps, which as such are quite illustrative.
- All the transformation matrices used can be calculated, stored, and combined efficiently.
The figure above illustrates the steps that are required to yield proper screen coordinates from object vertices. Different kinds of transformations are applied in a certain order. You throw in some object vertices and, after some number crunching, you get the appropriate screen coordinates. In this figure, you can also easily see why this part of 3D programming is called the transformation pipeline.
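In symbols (our shorthand, not taken from the original figure), a vertex \hat{v} given in model coordinates reaches normalized device coordinates via

\hat{p} = P \cdot V \cdot M \cdot \hat{v}

where M is the model matrix, V the view matrix, and P the projection matrix.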
The graphics card has its own memory and a GPU, just like a small but powerful computer that is highly specialized in processing 3D data. Programs that run on the GPU are called shaders. Both the host computer and the graphics card can work independently, and to take full advantage of hardware acceleration, you should keep both of them busy at the same time.

During vertex specification, the ordered list of vertices that gets streamed to the next step is set up. This data can either be sent by the program that is executed on the CPU, one vertex after the other, or read directly from GPU memory using buffer objects (a short sketch follows at the end of this section). However, repeatedly transferring data via the system bus should be avoided whenever possible, since it is faster for the graphics card to access its own memory.

The vertex shader processes data on a per-vertex basis. It receives this stream of vertices along with additional attributes, such as associated texture coordinates or color values, and data such as the model-view-projection matrix. Its typical task is to transform vertices and to apply the projection matrix. Besides its interface to the immediately following stage, the vertex shader can also pass data to the fragment shader directly.

During the primitive assembly stage, the projected vertices are composed into primitives. These primitives can be triangles, lines, point sprites, or more complex entities like quadrics. The user decides which kind of primitive should be used when calling the draw function. For example, if the user wants to draw triangles, OpenGL takes groups of three vertices and converts them all into triangles.

During the clipping and culling stage, primitives that lie beyond the viewing volume, and therefore are not visible anyway, are removed. Also, if face culling is enabled, every primitive that does not show its front side (but its reverse side instead) is removed. This step effectively contributes to performance.
The rasterisation stage yields so-called fragments. These fragments correspond to pixels on the screen. Depending on the user's choice, for each primitive, a set of fragments may be created. You may either fill the whole primitive with (usually colored) fragments, or only generate its outlines (e.g. to render a wireframe model).

Each fragment is then processed by the fragment shader. The most important output of the fragment shader is the fragment's color value. Texture mapping and lighting are usually applied during this step. Both the program running on the CPU and the vertex shader can pass data to it. Obviously it also has access to the texture buffer. Because there are usually a lot of fragments in between a few vertices, values sent by the vertex shader are generally interpolated. Whenever possible, computationally intensive calculations should be implemented in the vertex shader instead of in the fragment shader, as there are usually many more fragments to compute than vertices.

The final stage, per-sample operations, applies several tests to decide which fragments should actually be written to the framebuffer (depth test, masking, etc.). After this, blending occurs and the final image is stored in the framebuffer.
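As mentioned during the vertex specification stage, vertex data can be kept in GPU memory as a buffer object instead of being streamed from the CPU each frame. The following is only a minimal sketch using Qt's QGLBuffer class (buffer objects are covered in detail in a later section); the vertex data itself is a placeholder:

#include <QGLBuffer>
#include <QVector>
#include <QVector3D>

// Placeholder vertex data; in a real program this would hold your object's vertices.
QVector<QVector3D> vertices;

QGLBuffer vertexBuffer(QGLBuffer::VertexBuffer);
vertexBuffer.create();                 // create the buffer object on the graphics card
vertexBuffer.bind();                   // make it the active vertex buffer
vertexBuffer.allocate(vertices.constData(),
                      vertices.size() * sizeof(QVector3D));   // upload the data once
vertexBuffer.release();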
OpenGL's various preprocessor definitions are prefixed with GL_. Its functions begin with gl. A function that triggers the rendering process, for example, is declared as void glDrawArrays(GLenum mode, GLint first, GLsizei count).
All these types may be combined using a C-like structure or array. To access the elements of a vector or a matrix, square brackets [] can be used (e.g. vector[index] = value and matrix[column][row] = value;). In addition to this, the vector's named components are accessible by using the field selector operator . (e.g. vector.x = xValue and vector.xy = vec2(xValue, yValue)). The names (x, y, z, w) are used for positions; (r, g, b, a) and (s, t, p, q) are used to address color values and texture coordinates respectively. To define the linkage between different shaders, as well as between shaders and the application, GLSL provides variables with extra functionality by using storage qualifiers. These storage qualifiers need to be written before the type name during declaration.
The storage qualifiers are:

- none: (default) A normal variable.
- const: A compile-time constant.
- attribute: Linkage between a vertex shader and OpenGL for per-vertex data. Since the vertex shader is executed once for every vertex, this read-only value holds a new value every time it runs. It is used, for example, to pass vertices to the vertex shader.
- uniform: Linkage between a shader and OpenGL for per-rendering data. This read-only value does not change across the whole rendering process. It is used, for example, to pass the model-view-projection matrix, since this parameter does not change for one object.
- varying: Linkage between the vertex shader and the fragment shader for interpolated data. This variable is used to pass values calculated in the vertex shader to the fragment shader. For this to work, the variables need to share the same name in both shaders. Since there are usually a lot of fragments in between a few vertices, the data calculated by the vertex shader is (by default) interpolated. Such variables are often used for texture coordinates or lighting calculations.
To send data from the vertex shader to the fragment shader, the out variable of the vertex shader and the in variable of the fragment shader need to share the same name. Since there are usually a lot of fragments in between a few vertices, the data calculated by the vertex shader is by default interpolated in a perspective-correct manner. To enforce this behavior, the additional qualifier smooth can be written before in. To use linear interpolation, the noperspective qualifier can be set. Interpolation can be completely disabled by using flat; then, for all the fragments of a primitive, the value output by the first vertex of that primitive is used. These kinds of variables are commonly called varyings, due to this interpolation and because in earlier versions of OpenGL this shader-to-shader linkage was achieved using a variable qualifier called varying instead of in and out.

Several built-in variables are used for communication with the pipeline. We will use the following:

- vec4 gl_Position: The rasterization step needs to know the position of the transformed vertex. Therefore, the vertex shader needs to set this variable to the calculated value.
- vec4 gl_FragColor: This variable defines the fragment's RGBA color that will eventually be written to the frame buffer. This value can be set by the fragment shader.

When using multiple variable qualifiers, the order is <storage qualifier> <precision qualifier> <type> <name>.

Just like in C, every GLSL program's entry point is the main() function, but you are also allowed to declare your own functions. Functions in GLSL work quite differently than those in C. They do not have a return value. Instead, values are returned using a calling convention called value-return. For this purpose, GLSL uses parameter qualifiers, which need to be written before the variable type during function declaration. These qualifiers specify if and when values are exchanged between a function and its caller.
The parameter qualifiers are:

- in: (default) On entry, the variable is initialized to the value passed by the caller.
- out: On return, the value of this variable is written into the variable passed by the caller. The variable is not initialized.
- inout: A combination of in and out. The variable is both initialized and returned.
There are actually many more qualifiers, but listing all of them goes beyond the scope of this tutorial. The language also offers control structures like if, switch, for, while, and do-while, including break and return. Additionally, in the fragment shader, you can call discard to exit the fragment shader and have that fragment ignored by the rest of the pipeline. GLSL also uses several preprocessor directives. The most notable one, which you should use in all of your programs, is #version followed by the three digits of the language version you want to use (e.g. #version 330 for version 3.3). By default, OpenGL assumes version 1.1, which might not always be what you want.

Although GLSL is very similar to C, there are still some restrictions you should be aware of:
- Functions may not be recursive.
- For-loops must have an iteration count that is known at compile time.
- There are no pointers.
- Array indexing is only possible with constant indices.
- Type casting is only possible using constructors (e.g. myFloat = float(myInt);).
Note: The scene you want to render may be so complex that it has thousands of vertices and millions of fragments. This is why modern graphics cards are equipped with several stream processing units, each of which executes one vertex shader or fragment shader at a time. Because all vertices and fragments are processed in parallel, there is no way for the shader to query the properties of another vertex or fragment.
CHAPTER
TWO
USING OPENGL IN YOUR QT APPLICATION
Qt also offers a cross-platform abstraction for shader programs called QGLShaderProgram. This class facilitates the process of compiling and linking shader programs, as well as switching between different shaders.

Note: You might need to adapt the versions set in the examples' source code to those supported by your system.
2.1 Hello OpenGL

Note: The source code related to this section is located in the examples/hello-opengl/ directory.

First of all, we need to tell qmake to use the QtOpenGL module, so we add:
QT += opengl
The main() function only serves the purpose of instantiating and showing our QGLWidget subclass.
#include <QApplication>
#include "glwidget.h"

int main(int argc, char **argv)
{
    QApplication a(argc, argv);

    GlWidget w;
    w.show();

    return a.exec();
}
Our OpenGL widget class is defined as follows: We want the widget to be a subclass of QGLWidget. Because we might later be using signals and slots, we invoke the Q_OBJECT macro. Additionally, we reimplement QWidget::minimumSizeHint() and QWidget::sizeHint() to set reasonable default sizes. To call the usual OpenGL rendering commands, we reimplement the three virtual functions QGLWidget::initializeGL(), QGLWidget::resizeGL(), and QGLWidget::paintGL().

We also need some member variables. pMatrix is a QMatrix4x4 that keeps the projection part of the transformation pipeline. To manage the shaders, we will use a QGLShaderProgram we named shaderProgram. vertices is a QVector made of QVector3Ds that stores the triangle's vertices. Although the vertex shader will expect us to send homogeneous coordinates, we can use 3D vectors, because the OpenGL pipeline will automatically set the fourth coordinate to the default value of 1.
class GlWidget : public QGLWidget
{
    Q_OBJECT

public:
    GlWidget(QWidget *parent = 0);
    ~GlWidget();
    QSize sizeHint() const;

protected:
    void initializeGL();
    void resizeGL(int width, int height);
    void paintGL();

private:
    QMatrix4x4 pMatrix;
    QGLShaderProgram shaderProgram;
    QVector<QVector3D> vertices;
};
Now that we have defined our widget, we can finally talk about the implementation. The constructor's initializer list calls QGLWidget's constructor, passing a QGLFormat object. This can be used to set the capabilities of the OpenGL rendering context, such as double buffering or multisampling. We are fine with the default values, so we could just as well have omitted the QGLFormat. Qt will try to acquire a rendering context as close as possible to what we want. Then we reimplement QWidget::sizeHint() to set a reasonable default size for the widget.
GlWidget::GlWidget(QWidget *parent)
    : QGLWidget(QGLFormat(/* Additional format options */), parent)
{
}

GlWidget::~GlWidget()
{
}

QSize GlWidget::sizeHint() const
{
    return QSize(640, 480);
}
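If you do want to request specific capabilities, the format options could be passed like this (a hypothetical variant, not part of the example's code):

GlWidget::GlWidget(QWidget *parent)
    : QGLWidget(QGLFormat(QGL::DoubleBuffer | QGL::SampleBuffers), parent)   // request double buffering and multisampling
{
}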
The QGLWidget::initializeGL() method gets called once when the OpenGL context is created. We use this function to set the behavior of the rendering context and to build the shader programs.

If we want to render 3D images, we need to enable depth testing. This is one of the tests that can be performed during the per-sample operations stage. It will cause OpenGL to only display the fragments nearest to the camera when primitives overlap. Although we do not need this capability since we only want to show a plain triangle, we will need this setting in our other examples. If you omit this statement, you might see objects in the back popping through objects in the front, depending on the order in which the primitives are rendered. Deactivating this capability is useful if you want to draw an overlay image on top of the screen.

As an easy way to significantly improve the performance of a 3D application, we also enable face culling. This tells OpenGL to only render primitives that show their front side. The front side is defined by the order of the triangle's vertices. You can tell which side of a triangle you are seeing by looking at its corners: if the triangle's corners are specified in a counterclockwise order, the front of the triangle is the side facing you. For all triangles that are not facing the camera, the fragment processing stage can be omitted.

Then we set the background color using QGLWidget::qglClearColor(). It is a function that calls OpenGL's glClearColor(GLclampf red, GLclampf green, GLclampf blue, GLclampf alpha), but has the advantage of allowing any color Qt understands to be passed. The specified color will then be used in all subsequent calls to glClear(GLbitfield mask).

In the following section we set up the shaders. We pass the source code of the shaders to the QGLShaderProgram, compile and link them, and bind the program to the current OpenGL rendering context. Shader programs need to be supplied as source code. We can use QGLShaderProgram::addShaderFromSourceFile() to have Qt handle the compilation. This function compiles the source code as the specified shader type and adds it to the shader program. If an error occurs, the function returns false, and we can access the compilation errors and warnings using QGLShaderProgram::log(). Errors will be automatically printed to the standard error output if we run the program in debug mode. After the compilation, we still need to link the programs using QGLShaderProgram::link(). We can again check for errors and access the errors and warnings using QGLShaderProgram::log(). The shaders are then ready to be bound to the rendering context using QGLShaderProgram::bind(). Binding the program to the context means enabling it in the graphics pipeline. After this is done, every vertex that is passed to the graphics pipeline will be processed by these shaders until we call QGLShaderProgram::release() to disable them or a different shader program is bound. Binding and releasing a program can be done several times during the rendering process, which means several vertex and fragment shaders can be used for different objects in the scene. We will therefore use these functions in the QGLWidget::paintGL() function.

Last but not least, we set up the triangle's vertices. Note that we've defined the triangle with the front side pointing in the positive z direction. With face culling enabled, we can then see this object if we look at it from viewer positions with a z value greater than this object's z value.
void GlWidget::initializeGL()
{
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);

    qglClearColor(QColor(Qt::black));

    shaderProgram.addShaderFromSourceFile(QGLShader::Vertex, ":/vertexShader.vsh");
    shaderProgram.addShaderFromSourceFile(QGLShader::Fragment, ":/fragmentShader.fsh");
    shaderProgram.link();

    vertices << QVector3D(1, 0, -2) << QVector3D(0, 1, -2) << QVector3D(-1, 0, -2);
}
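The listing above omits the error handling described in the text; a possible sketch (assuming #include <QDebug>) of checking the compilation and linking results could look like this:

if (!shaderProgram.addShaderFromSourceFile(QGLShader::Vertex, ":/vertexShader.vsh")) {
    qWarning() << shaderProgram.log();   // compilation errors and warnings of the vertex shader
}
if (!shaderProgram.addShaderFromSourceFile(QGLShader::Fragment, ":/fragmentShader.fsh")) {
    qWarning() << shaderProgram.log();   // compilation errors and warnings of the fragment shader
}
if (!shaderProgram.link()) {
    qWarning() << shaderProgram.log();   // linking errors and warnings
}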
Now let's take a look at the shaders we will use in this example.
The vertex shader only calculates the final projection of each vertex by multiplying the vertex with the model-view-projection matrix. It needs to read two input variables. The first input is the model-view-projection matrix. It is a 4x4 matrix that only changes once per object and is therefore declared as a uniform mat4. We've named it mvpMatrix. The second variable is the actual vertex that the shader is processing. As the shader reads a new value every time it is executed, the vertex variable needs to be declared as a per-vertex input (in vec4). We've named this variable vertex. In the main() function, we simply calculate the resulting position that is sent to the rasterization stage using the built-in matrix-vector multiplication.
uniform mat4 mvpMatrix;

in vec4 vertex;

void main(void)
{
    gl_Position = mvpMatrix * vertex;
}
The fragment shader simply displays a colored pixel for each fragment it is executed on. The output of the fragment shader is the value written to the frame buffer. We called this variable fragColor. It is an instance of vec4, with one element each for the red, green, and blue color values, and one element for the alpha value. We want to use the same plain color for each pixel, therefore we declare an input variable called color, which is a uniform vec4. The main() function then sets the fragColor output variable to this value.
uniform vec4 color;

out vec4 fragColor;

void main(void)
{
    fragColor = color;
}
The reimplemented QGLWidget::resizeGL() method is called whenever the widget is resized. This is why we use this function to set up the projection matrix and the viewport. After checking the widget's height to prevent a division by zero, we set the projection matrix to one that performs a perspective projection. Luckily, we do not have to calculate it ourselves. We can use one of the many useful methods of QMatrix4x4, namely QMatrix4x4::perspective(), which does exactly what we need. This method multiplies its QMatrix4x4 instance with a projection matrix that is specified by the angle of the field of view, its aspect ratio, and the clipping regions of the near and far planes. The matrix we get using this function resembles the projection of a camera that is sitting at the origin of the world coordinate system, looking towards the world's negative z direction, with the world's x axis pointing to the right side and the y axis pointing upwards. The fact that this function alters its instance explains the need to first initialize it to an identity matrix (a matrix that, if it is applied as a transformation, does not change a vector at all).
Next we set up the OpenGL viewport. The viewport defines the region of the widget that the result of the projection is mapped to. This mapping transforms the normalized coordinates on the aforementioned camera's film to pixel coordinates within the QGLWidget. To avoid distortion, the aspect ratio of the viewport should match the aspect ratio of the projection.
void GlWidget::resizeGL(int width, int height)
{
    if (height == 0) {
        height = 1;
    }

    pMatrix.setToIdentity();
    pMatrix.perspective(60.0, (float) width / (float) height, 0.001, 1000);

    glViewport(0, 0, width, height);
}
Finally, we have OpenGL draw the triangle in the QGLWidget::paintGL() method. The first thing we do is clear the screen using glClear(GLbitfield mask). If this OpenGL function is called with the GL_COLOR_BUFFER_BIT set, it fills the color buffer with the color set by glClearColor(GLclampf red, GLclampf green, GLclampf blue, GLclampf alpha). Setting the GL_DEPTH_BUFFER_BIT tells OpenGL to clear the depth buffer, which is used for the depth test and stores the distance of rendered pixels. We usually need to clear both buffers, and therefore we set both bits.

As we already know, the model-view-projection matrix that is used by the vertex shader is a concatenation of the model matrix, the view matrix, and the projection matrix. Just like for the projection matrix, we also use the QMatrix4x4 class to handle the other two transformations. Although we do not want to use them in this basic example, we already introduce them here to clarify their use. We use them to calculate the model-view-projection matrix, but leave them initialized to the identity matrix. This means we do not move or rotate the triangle's frame and also leave the camera unchanged, located at the origin of the world coordinate system.

The rendering can now be triggered by calling the OpenGL function glDrawArrays(GLenum mode, GLint first, GLsizei count). But before we can do that, we need to bind the shaders and hand over all the uniforms and attributes they need. In native OpenGL, the programmer would first have to query the ID (called the location) of each input variable using the verbatim variable name as it is typed in the shader source code, and then set its value using this ID and a type OpenGL understands. QGLShaderProgram instead offers a huge set of overloaded functions for this purpose, which allow you to address an input variable using either its location or its name. These functions can also automatically convert the variable type from Qt types to OpenGL types.

We set the uniform values for both shaders using QGLShaderProgram::setUniformValue() by passing its name. The vertex shader's uniform mvpMatrix is calculated by multiplying its three components. The color of the triangle is set using a QColor instance that will automatically be converted to a vec4 for us. To tell OpenGL where to find the stream of vertices, we call QGLShaderProgram::setAttributeArray() and pass the QVector::constData() pointer. Setting attribute arrays works in the same way as setting uniform values, but there is one difference: we additionally need to explicitly enable the attribute array using QGLShaderProgram::enableAttributeArray(). If we did not do this, OpenGL would assume that we had assigned a single value instead of an array.

Finally, we call glDrawArrays(GLenum mode, GLint first, GLsizei count) to do the rendering. It is used to start rendering a sequence of geometry primitives using the current configuration. We pass GL_TRIANGLES as the first parameter to tell OpenGL that each set of three vertices forms a triangle. The second parameter specifies the starting index within the attribute arrays, and the third parameter is the number of indices to be rendered.

Note that if you later want to draw more than one object, you only need to repeat all of the steps (except for clearing the screen, of course) you took in this method for each new object.
void GlWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    QMatrix4x4 mMatrix;
    QMatrix4x4 vMatrix;

    shaderProgram.bind();

    shaderProgram.setUniformValue("mvpMatrix", pMatrix * vMatrix * mMatrix);
    shaderProgram.setUniformValue("color", QColor(Qt::white));

    shaderProgram.setAttributeArray("vertex", vertices.constData());
    shaderProgram.enableAttributeArray("vertex");

    glDrawArrays(GL_TRIANGLES, 0, vertices.size());

    shaderProgram.disableAttributeArray("vertex");

    shaderProgram.release();
}
You should now see a white triangle on a black background after compiling and running this program.
2.2 Rendering in 3D
A white triangle on a black background is not very interesting and also not 3D, but now that we have a working basis, we can extend it to create a real 3D application. In this example, we will render a more complex object and implement the functionality for interactively exploring our scene.
Note: The source code related to this section is located in the examples/rendering-in-3d/ directory.

Just as with any QWidget subclass, we can use Qt's event system to handle user input. We want to be able to view the scene in the same way we would explore a globe: by dragging the mouse across the widget, we change the angle that we look from, and the distance to the scene changes if we turn the mouse's scroll wheel. For this functionality, we reimplement QWidget::mousePressEvent(), QWidget::mouseMoveEvent(), and QWidget::wheelEvent(). The new member variables alpha, beta, and distance hold the parameters of the view point, and lastMousePosition helps us track mouse movement.
class GlWidget : public QGLWidget
{
    ...

protected:
    ...
    void mousePressEvent(QMouseEvent *event);
    void mouseMoveEvent(QMouseEvent *event);
    void wheelEvent(QWheelEvent *event);

private:
    ...
    double alpha;
    double beta;
    double distance;
    QPoint lastMousePosition;
};
The most important new thing in this example is the employment of the view matrix. Again, we do not calculate this matrix ourselves but use the QMatrix4x4::lookAt() function to obtain it. This function takes the position of the viewer, the point the viewer is looking at, and a vector that defines the up direction. We want the viewer to look at the world's origin and start with a position that is located at a certain distance (distance) along the z axis, with the up direction being the y axis. We then rotate these two vectors using a transformation matrix: first by the alpha angle around the world's y axis, and then by the beta angle around the new, rotated x axis, which tilts the camera. Note that you can also illustrate the transformation the other way around: first we rotate the vertices by the beta angle around the world's x axis and then we rotate them by the alpha angle around the world's y axis.
void GlWidget::paintGL()
{
    ...
    QMatrix4x4 cameraTransformation;
    cameraTransformation.rotate(alpha, 0, 1, 0);
    cameraTransformation.rotate(beta, 1, 0, 0);

    QVector3D cameraPosition = cameraTransformation * QVector3D(0, 0, distance);
    QVector3D cameraUpDirection = cameraTransformation * QVector3D(0, 1, 0);

    vMatrix.lookAt(cameraPosition, QVector3D(0, 0, 0), cameraUpDirection);
    ...
}
These three parameters need to be initialized in the constructor and, to account for the user's input, we then change them in the corresponding event handlers.
GlWidget::GlWidget(QWidget *parent)
    : QGLWidget(QGLFormat(/* Additional format options */), parent)
{
    alpha = 25;
    beta = -25;
    distance = 2.5;
}
In QWidget::mousePressEvent(), we store the mouse pointer's initial position to be able to track the movement. In QWidget::mouseMoveEvent(), we calculate the pointer's change and adapt the angles alpha and beta accordingly. Since the view point's parameters have changed, we then call QGLWidget::updateGL() to trigger an update of the rendering context.
void GlWidget::mousePressEvent(QMouseEvent *event)
{
    lastMousePosition = event->pos();

    event->accept();
}

void GlWidget::mouseMoveEvent(QMouseEvent *event)
{
    int deltaX = event->x() - lastMousePosition.x();
    int deltaY = event->y() - lastMousePosition.y();

    if (event->buttons() & Qt::LeftButton) {
        alpha -= deltaX;
        while (alpha < 0) {
            alpha += 360;
        }
        while (alpha >= 360) {
            alpha -= 360;
        }

        beta -= deltaY;
        if (beta < -90) {
            beta = -90;
        }
        if (beta > 90) {
            beta = 90;
        }

        updateGL();
    }

    lastMousePosition = event->pos();

    event->accept();
}
In QWidget::wheelEvent(), we either increase or decrease the viewer's distance by 10% and update the rendering again.
void GlWidget::wheelEvent(QWheelEvent *event)
{
    int delta = event->delta();

    if (event->orientation() == Qt::Vertical) {
        if (delta < 0) {
            distance *= 1.1;
        } else if (delta > 0) {
            distance *= 0.9;
        }

        updateGL();
    }

    event->accept();
}
In order to finish this example, we only need to change our list of vertices to form a cube.
void GlWidget::initializeGL()
{
    ...
    vertices << QVector3D(-0.5, -0.5,  0.5) << QVector3D( 0.5, -0.5,  0.5) << QVector3D( 0.5,  0.5,  0.5) // Front
             << QVector3D( 0.5,  0.5,  0.5) << QVector3D(-0.5,  0.5,  0.5) << QVector3D(-0.5, -0.5,  0.5)
             << QVector3D( 0.5, -0.5, -0.5) << QVector3D(-0.5, -0.5, -0.5) << QVector3D(-0.5,  0.5, -0.5) // Back
             << QVector3D(-0.5,  0.5, -0.5) << QVector3D( 0.5,  0.5, -0.5) << QVector3D( 0.5, -0.5, -0.5)
             << QVector3D(-0.5, -0.5, -0.5) << QVector3D(-0.5, -0.5,  0.5) << QVector3D(-0.5,  0.5,  0.5) // Left
             << QVector3D(-0.5,  0.5,  0.5) << QVector3D(-0.5,  0.5, -0.5) << QVector3D(-0.5, -0.5, -0.5)
             << QVector3D( 0.5, -0.5,  0.5) << QVector3D( 0.5, -0.5, -0.5) << QVector3D( 0.5,  0.5, -0.5) // Right
             << QVector3D( 0.5,  0.5, -0.5) << QVector3D( 0.5,  0.5,  0.5) << QVector3D( 0.5, -0.5,  0.5)
             << QVector3D(-0.5,  0.5,  0.5) << QVector3D( 0.5,  0.5,  0.5) << QVector3D( 0.5,  0.5, -0.5) // Top
             << QVector3D( 0.5,  0.5, -0.5) << QVector3D(-0.5,  0.5, -0.5) << QVector3D(-0.5,  0.5,  0.5)
             << QVector3D(-0.5, -0.5, -0.5) << QVector3D( 0.5, -0.5, -0.5) << QVector3D( 0.5, -0.5,  0.5) // Bottom
             << QVector3D( 0.5, -0.5,  0.5) << QVector3D(-0.5, -0.5,  0.5) << QVector3D(-0.5, -0.5, -0.5);
}
If you now compile and run this program, you will see a white cube that can be rotated using the mouse. Since each of its six sides is painted in the same plain color, depth is not very visible. We will work on this in the next example.
2.3 Coloring
In this example, we want to color each side of the cube in a different color to enhance the illusion of three-dimensionality. To achieve this, we will extend our shaders in a way that allows us to specify a single color for each vertex, and use the interpolation of varyings to generate the fragments' colors. This example will show you how to communicate data from the vertex shader to the fragment shader.
Note: The source code related to this section is located in the examples/coloring/ directory.

To tell the shaders about the colors, we specify a color value for each vertex as an attribute array for the vertex shader. So on each run of the shader, it will read a new value for both the vertex attribute and the color attribute. As the fragment's color eventually has to be set in the fragment shader and not in the vertex shader, we pass the color value over to it. To do this, we need to declare an equally named varying in both shaders. We called this varying varyingColor. When the fragment shader is run for each fragment between the three vertices of a triangle, the value read by the shader is calculated by an interpolation of the three corners' values. This means that if we specify the same color for the three vertices of a triangle, OpenGL will paint the triangle in one plain color. If we specify different colors, OpenGL will smoothly blend between those values. In the vertex shader's main function, we only need to set the varying to the color value.
uniform mat4 mvpMatrix;

in vec4 vertex;
in vec4 color;

out vec4 varyingColor;

void main(void)
{
    varyingColor = color;
    gl_Position = mvpMatrix * vertex;
}
In the fragment shader's main function, we set the fragColor output variable to the color value received from the vertex shader.
in vec4 varyingColor;

out vec4 fragColor;

void main(void)
{
    fragColor = varyingColor;
}
Of course we still need to employ a new structure to store the color values and send them to the shaders in the QGLWidget::paintGL() method. But this should be very straightforward, as we have already done all of this for the vertex attribute array in just the same manner.
// glwidget.h
class GlWidget : public QGLWidget
{
    ...
private:
    ...
    QVector<QVector3D> colors;
};

// glwidget.cpp
void GlWidget::paintGL()
{
    ...
    shaderProgram.bind();

    shaderProgram.setUniformValue("mvpMatrix", pMatrix * vMatrix * mMatrix);

    shaderProgram.setAttributeArray("vertex", vertices.constData());
    shaderProgram.enableAttributeArray("vertex");

    shaderProgram.setAttributeArray("color", colors.constData());
    shaderProgram.enableAttributeArray("color");

    glDrawArrays(GL_TRIANGLES, 0, vertices.size());
There is only one little inconvenience when switching from a color uniform to a color attribute array. Unfortunately, QGLShaderProgram::setAttributeArray() does not support the QColor type, so we need to store the colors as QVector3Ds (or QVector4Ds, if you want to set the alpha value to change the opacity). Valid color values range from 0 to 1. As we want to color each of the cube's faces in a plain color, we set the color value of each face's vertices to the same value.
void GlWidget::initializeGL()
{
    ...
    colors << QVector3D(1, 0, 0) << QVector3D(1, 0, 0) << QVector3D(1, 0, 0) // Front
           << QVector3D(1, 0, 0) << QVector3D(1, 0, 0) << QVector3D(1, 0, 0)
           << QVector3D(1, 0, 0) << QVector3D(1, 0, 0) << QVector3D(1, 0, 0) // Back
           << QVector3D(1, 0, 0) << QVector3D(1, 0, 0) << QVector3D(1, 0, 0)
           << QVector3D(0, 1, 0) << QVector3D(0, 1, 0) << QVector3D(0, 1, 0) // Left
           << QVector3D(0, 1, 0) << QVector3D(0, 1, 0) << QVector3D(0, 1, 0)
           << QVector3D(0, 1, 0) << QVector3D(0, 1, 0) << QVector3D(0, 1, 0) // Right
           << QVector3D(0, 1, 0) << QVector3D(0, 1, 0) << QVector3D(0, 1, 0)
           << QVector3D(0, 0, 1) << QVector3D(0, 0, 1) << QVector3D(0, 0, 1) // Top
           << QVector3D(0, 0, 1) << QVector3D(0, 0, 1) << QVector3D(0, 0, 1)
           << QVector3D(0, 0, 1) << QVector3D(0, 0, 1) << QVector3D(0, 0, 1) // Bottom
           << QVector3D(0, 0, 1) << QVector3D(0, 0, 1) << QVector3D(0, 0, 1);
}
2.4 Texture Mapping
Note: The source code related to this section is located in the examples/texture-mapping/ directory.

In order to map a texture to a primitive, we have to specify so-called texture coordinates that tell OpenGL which image coordinate is to be pinned to which vertex. Texture coordinates are instances of vec2 that are normalized to a range between 0 and 1. The origin of the texture coordinate system is in the lower left of an image, with the first axis pointing to the right side and the second axis pointing upwards (i.e. the lower left corner of an image is at (0, 0) and the upper right corner is at (1, 1)). Coordinate values higher than 1 are also allowed, causing the texture to wrap around by default.

The textures themselves are OpenGL objects stored in the graphics card's memory. They are created using glGenTextures(GLsizei n, GLuint *texture) and deleted again with a call to glDeleteTextures(GLsizei n, const GLuint *texture). To identify textures, each texture is assigned a texture ID during its creation. As with shader programs, they need to be bound using glBindTexture(GLenum target, GLuint texture) before they can be configured and filled with data. We can use Qt's QGLWidget::bindTexture() to create the texture object. Normally we would have to make sure that the image data is in a particular format, according to the configuration of the texture object, but luckily QGLWidget::bindTexture() also takes care of that.

OpenGL allows us to have several textures accessible to the shaders at the same time. For this purpose, OpenGL uses so-called texture units. So before we can use a texture, we need to bind it to one of the texture units identified by the enum GL_TEXTUREi (with i ranging from 0 to GL_MAX_COMBINED_TEXTURE_UNITS - 1). To do this, we call glActiveTexture(GLenum texture) and bind the texture using glBindTexture(GLenum target, GLuint texture). Because we also need to call glBindTexture(GLenum target, GLuint texture) if we want to add new textures or modify them, and binding a texture overwrites the binding of the currently active texture unit, you should set the active texture unit to an invalid unit after setting it up by calling glActiveTexture(0). This way several texture units can be configured at the same time. Note that texture units need to be used in ascending order, beginning with GL_TEXTURE0.

To access a texture in a shader to actually render it, we use the texture2D(sampler2D sampler, vec2 coord) function to query the color value at a certain texture coordinate. This function takes two parameters. The first parameter is of the type sampler2D and refers to a texture unit. The second parameter is the texture coordinate that we want to access. To read from the texture unit i, denoted by the enum GL_TEXTUREi, we have to pass the GLuint i as the uniform value. With all of this theory, we are now able to make our cube textured. We replace the vec4 color attribute and the corresponding varying with a vec2 variable for the texture coordinates and forward this value to the fragment shader.
uniform mat4 mvpMatrix;

in vec4 vertex;
in vec2 textureCoordinate;

out vec2 varyingTextureCoordinate;

void main(void)
{
    varyingTextureCoordinate = textureCoordinate;
    gl_Position = mvpMatrix * vertex;
}
In the fragment shader, we use texture2D(sampler2D sampler, vec2 coord) to look up the right color value. The uniform texture of the type sampler2D chooses the texture unit and we use the interpolated values coming from the vertex shader for the texture coordinates.
uniform sampler2D texture;

in vec2 varyingTextureCoordinate;

out vec4 fragColor;

void main(void)
{
    fragColor = texture2D(texture, varyingTextureCoordinate);
}
In the GlWidget class declaration, we replace the previously used colors member with a QVector made of QVector2Ds for the texture coordinates and add a member variable to hold the texture object ID.
class GlWidget : public QGLWidget
{
    ...
private:
    ...
    QVector<QVector2D> textureCoordinates;
    GLuint texture;
    ...
};
In the QGLWidget::initializeGL() reimplementation, we set up the texture coordinates and also create the texture object. Each side will be covered with the whole square image that is contained in our resource file.
void GlWidget::initializeGL()
{
    ...
    textureCoordinates << QVector2D(0, 0) << QVector2D(1, 0) << QVector2D(1, 1) // Front
                       << QVector2D(1, 1) << QVector2D(0, 1) << QVector2D(0, 0)
                       << QVector2D(0, 0) << QVector2D(1, 0) << QVector2D(1, 1) // Back
                       << QVector2D(1, 1) << QVector2D(0, 1) << QVector2D(0, 0)
                       << QVector2D(0, 0) << QVector2D(1, 0) << QVector2D(1, 1) // Left
                       << QVector2D(1, 1) << QVector2D(0, 1) << QVector2D(0, 0)
                       << QVector2D(0, 0) << QVector2D(1, 0) << QVector2D(1, 1) // Right
                       << QVector2D(1, 1) << QVector2D(0, 1) << QVector2D(0, 0)
                       << QVector2D(0, 0) << QVector2D(1, 0) << QVector2D(1, 1) // Top
                       << QVector2D(1, 1) << QVector2D(0, 1) << QVector2D(0, 0)
                       << QVector2D(0, 0) << QVector2D(1, 0) << QVector2D(1, 1) // Bottom
                       << QVector2D(1, 1) << QVector2D(0, 1) << QVector2D(0, 0);

    texture = bindTexture(QPixmap(":/texture.png"));
}
In the QGLWidget::paintGL() method, we set the fragment shader's sampler2D uniform to the first texture unit. Then we activate that unit, bind our texture object to it, and after that deactivate it again to prevent us from accidentally overwriting this setting. Instead of passing the color attribute array, we now pass the array containing the texture coordinates.
void GlWidget::paintGL()
{
    ...
    shaderProgram.bind();

    shaderProgram.setUniformValue("mvpMatrix", pMatrix * vMatrix * mMatrix);

    shaderProgram.setUniformValue("texture", 0);
    //glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texture);
    //glActiveTexture(0);

    shaderProgram.setAttributeArray("vertex", vertices.constData());
    shaderProgram.enableAttributeArray("vertex");
Note: The Windows OpenGL header file only includes functionality up to OpenGL version 1.1 and assumes that the programmer will obtain additional functionality on his own. This includes OpenGL API function calls as well as enums. The reason is that, because different OpenGL libraries exist, the programmer should request the library's function entry points at runtime. Qt only defines the functionality required by its own OpenGL-related classes. glActiveTexture(GLenum texture) as well as the GL_TEXTUREi enums do not belong to this subset. Several utility libraries exist to ease the definition of these functions (e.g. GLEW, GLEE, etc.). We will define glActiveTexture(GLenum texture) and GL_TEXTUREi manually. First we include the glext.h header file to get the missing enums and a few typedefs which will help us keep the code readable (since the version shipped with your compiler might be outdated, you may need to get the latest version from the OpenGL homepage: http://www.opengl.org). Next we declare the function pointer, which we will use to call glActiveTexture(GLenum texture), using the included typedefs. To avoid confusing the linker, we use a different name than glActiveTexture and define a preprocessor macro which replaces calls to glActiveTexture(GLenum texture) with our own function:
#ifdef WIN32
#include <GL/glext.h>
PFNGLACTIVETEXTUREPROC pGlActiveTexture = NULL;
#define glActiveTexture pGlActiveTexture
#endif //WIN32
In the GlWidget::initializeGL() function, we request this pointer using PROC WINAPI wglGetProcAddress(LPCSTR lpszProc). This function takes the OpenGL API function's name and returns a pointer which we need to cast to the right type:
void GlWidget::initializeGL()
{
    ...
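    // The listing is truncated here. Based on the description above, the missing
    // assignment presumably resembles the following sketch (an assumption, not the
    // original code): the macro defined earlier redirects glActiveTexture to our
    // function pointer pGlActiveTexture.
#ifdef WIN32
    glActiveTexture = (PFNGLACTIVETEXTUREPROC) wglGetProcAddress("glActiveTexture");
#endif
    ...
}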
2.5 Lighting
The ability to write your own shader programs gives you the power to set up the kind of lighting effect that best suits your needs. This may range from very basic and time-saving approaches to high-quality ray tracing algorithms.
In this chapter, we will implement a technique called Phong shading, which is a popular baseline shading method for many rendering applications. For each pixel on the surface of an object, we will calculate the color intensity based on the position and color of the light source, as well as the object's texture and its material properties. To show the results, we will display the cube with a light source circling above it. The light source will be marked by a pyramid, which we will render using the per-vertex color shader from one of the previous examples. So in this example, you will also see how to render a scene with multiple objects and different shader programs.
Note: The source code related to this section is located in the examples/lighting/ directory.

Because we use two different objects and two different shader programs, we added prefixes to the names. The cube is rendered using the lightingShaderProgram, for which we need additional storage that keeps the surface normal of each vertex (i.e. the vector that is perpendicular to the surface and has length 1). The spotlight, on the other hand, is rendered using the coloringShaderProgram, which consists of the shaders we developed earlier in this tutorial. To track the position of the light source, we introduced a new member variable that holds its rotation. This value is periodically increased in the timeout() slot.
class GlWidget : public QGLWidget
{
    ...
private:
    QGLShaderProgram lightingShaderProgram;
    QVector<QVector3D> cubeVertices;
    QVector<QVector3D> cubeNormals;
    QVector<QVector2D> cubeTextureCoordinates;
    GLuint cubeTexture;

    QGLShaderProgram coloringShaderProgram;
    QVector<QVector3D> spotlightVertices;
The Phong reflection model assumes that the light reflected off an object (i.e. what you actually see) consists of three components: the diffuse reflection of rough surfaces, the specular highlights of glossy surfaces, and an ambient term that sums up the small amounts of light that get scattered about the entire scene. For each light source in the scene, we define i_d and i_s as the intensities (RGB values) of the diffuse and the specular components; i_a is defined as the ambient lighting component. For each kind of surface (whether glossy, flat, etc.), we define the following parameters: k_d and k_s set the ratio of reflection of the diffuse and specular components, k_a sets the ratio of reflection of the ambient term respectively, and α is a shininess constant that controls the size of the specular highlights. The equation for computing the illumination of each surface point (fragment) is:
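(The original rendering of the formula is not reproduced here; this is the standard Phong reflection equation, written in LaTeX using the symbols defined above.)

I_p = k_a \, i_a + k_d \, (\hat{L}_m \cdot \hat{N}) \, i_d + k_s \, (\hat{R}_m \cdot \hat{V})^{\alpha} \, i_s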
Here L_m is the normalized direction vector pointing from the fragment to the light source, N is the surface normal of this fragment, R_m is the direction of the light reflected at this point, and V points from the fragment towards the viewer of the scene.

To obtain the vectors mentioned above, we calculate them for each vertex in the vertex shader and tell OpenGL to pass them as interpolated values to the fragment shader. In the fragment shader, we finally set the illumination of each point and combine it with the color value of the texture. So in addition to passing vertex positions and the model-view-projection matrix to get the fragment's position, we also need to pass the surface normal of each vertex. To calculate the transformed cube's L_m and V, we need to know the model-view part of the transformation. To calculate the transformed N, we need to apply a matrix that transforms the surface normals. This extra matrix is needed because we only want the normals to be rotated according to the model-view matrix, but not to be translated.

This is the vertex shader's source code:
uniform mat4 mvpMatrix;
uniform mat4 mvMatrix;
uniform mat3 normalMatrix;
uniform vec3 lightPosition;

in vec4 vertex;
in vec3 normal;
in vec2 textureCoordinate;

out vec3 varyingNormal;
out vec3 varyingLightDirection;
out vec3 varyingViewerDirection;
out vec2 varyingTextureCoordinate;

void main(void)
{
    vec4 eyeVertex = mvMatrix * vertex;
    eyeVertex /= eyeVertex.w;

    varyingNormal = normalMatrix * normal;
    varyingLightDirection = lightPosition - eyeVertex.xyz;
    varyingViewerDirection = -eyeVertex.xyz;
    varyingTextureCoordinate = textureCoordinate;

    gl_Position = mvpMatrix * vertex;
}
The fragment shader is supplied with the light source's and the material's properties and with the geometry data calculated by the vertex shader. It then sets the fragment's color value according to the above formula. This is the fragment shader's source code:
uniform vec4 ambientColor;
uniform vec4 diffuseColor;
uniform vec4 specularColor;
uniform float ambientReflection;
uniform float diffuseReflection;
uniform float specularReflection;
uniform float shininess;
uniform sampler2D texture;

in vec3 varyingNormal;
in vec3 varyingLightDirection;
in vec3 varyingViewerDirection;
in vec2 varyingTextureCoordinate;

out vec4 fragColor;

void main(void)
{
    vec3 normal = normalize(varyingNormal);
    vec3 lightDirection = normalize(varyingLightDirection);
    vec3 viewerDirection = normalize(varyingViewerDirection);

    vec4 ambientIllumination = ambientReflection * ambientColor;
    vec4 diffuseIllumination = diffuseReflection * max(0.0, dot(lightDirection, normal)) * diffuseColor;
    vec4 specularIllumination = specularReflection * pow(max(0.0, dot(-reflect(lightDirection, normal), viewerDirection)), shininess) * specularColor;

    fragColor = texture2D(texture, varyingTextureCoordinate) * (ambientIllumination + diffuseIllumination) + specularIllumination;
}
In the GlWidget::initializeGL() method, we set up both shaders and prepare the attribute arrays of the cube and the spotlight. The only new thing here is the QVector made of QVector3Ds that stores the surface normal of each of the cube's vertices.
cubeVertices << QVector3D(-0.5, -0.5,  0.5) << QVector3D( 0.5, -0.5,  0.5) << QVector3D( 0.5,  0.5,  0.5) // Front
             << QVector3D( 0.5,  0.5,  0.5) << QVector3D(-0.5,  0.5,  0.5) << QVector3D(-0.5, -0.5,  0.5)
             << QVector3D( 0.5, -0.5, -0.5) << QVector3D(-0.5, -0.5, -0.5) << QVector3D(-0.5,  0.5, -0.5) // Back
             << QVector3D(-0.5,  0.5, -0.5) << QVector3D( 0.5,  0.5, -0.5) << QVector3D( 0.5, -0.5, -0.5)
             << QVector3D(-0.5, -0.5, -0.5) << QVector3D(-0.5, -0.5,  0.5) << QVector3D(-0.5,  0.5,  0.5) // Left
             << QVector3D(-0.5,  0.5,  0.5) << QVector3D(-0.5,  0.5, -0.5) << QVector3D(-0.5, -0.5, -0.5)
             << QVector3D( 0.5, -0.5,  0.5) << QVector3D( 0.5, -0.5, -0.5) << QVector3D( 0.5,  0.5, -0.5) // Right
             << QVector3D( 0.5,  0.5, -0.5) << QVector3D( 0.5,  0.5,  0.5) << QVector3D( 0.5, -0.5,  0.5)
             << QVector3D(-0.5,  0.5,  0.5) << QVector3D( 0.5,  0.5,  0.5) << QVector3D( 0.5,  0.5, -0.5) // Top
             << QVector3D( 0.5,  0.5, -0.5) << QVector3D(-0.5,  0.5, -0.5) << QVector3D(-0.5,  0.5,  0.5)
             << QVector3D(-0.5, -0.5, -0.5) << QVector3D( 0.5, -0.5, -0.5) << QVector3D( 0.5, -0.5,  0.5) // Bottom
             << QVector3D( 0.5, -0.5,  0.5) << QVector3D(-0.5, -0.5,  0.5) << QVector3D(-0.5, -0.5, -0.5);

cubeNormals << QVector3D( 0,  0,  1) << QVector3D( 0,  0,  1) << QVector3D( 0,  0,  1) // Front
            << QVector3D( 0,  0,  1) << QVector3D( 0,  0,  1) << QVector3D( 0,  0,  1)
            << QVector3D( 0,  0, -1) << QVector3D( 0,  0, -1) << QVector3D( 0,  0, -1) // Back
            << QVector3D( 0,  0, -1) << QVector3D( 0,  0, -1) << QVector3D( 0,  0, -1)
            << QVector3D(-1,  0,  0) << QVector3D(-1,  0,  0) << QVector3D(-1,  0,  0) // Left
            << QVector3D(-1,  0,  0) << QVector3D(-1,  0,  0) << QVector3D(-1,  0,  0)
            << QVector3D( 1,  0,  0) << QVector3D( 1,  0,  0) << QVector3D( 1,  0,  0) // Right
            << QVector3D( 1,  0,  0) << QVector3D( 1,  0,  0) << QVector3D( 1,  0,  0)
            << QVector3D( 0,  1,  0) << QVector3D( 0,  1,  0) << QVector3D( 0,  1,  0) // Top
            << QVector3D( 0,  1,  0) << QVector3D( 0,  1,  0) << QVector3D( 0,  1,  0)
            << QVector3D( 0, -1,  0) << QVector3D( 0, -1,  0) << QVector3D( 0, -1,  0) // Bottom
            << QVector3D( 0, -1,  0) << QVector3D( 0, -1,  0) << QVector3D( 0, -1,  0);

cubeTextureCoordinates << QVector2D(0, 0) << QVector2D(1, 0) << QVector2D(1, 1) // Front
                       << QVector2D(1, 1) << QVector2D(0, 1) << QVector2D(0, 0)
                       << QVector2D(0, 0) << QVector2D(1, 0) << QVector2D(1, 1) // Back
                       << QVector2D(1, 1) << QVector2D(0, 1) << QVector2D(0, 0)
                       << QVector2D(0, 0) << QVector2D(1, 0) << QVector2D(1, 1) // Left
                       << QVector2D(1, 1) << QVector2D(0, 1) << QVector2D(0, 0)
                       << QVector2D(0, 0) << QVector2D(1, 0) << QVector2D(1, 1) // Right
                       << QVector2D(1, 1) << QVector2D(0, 1) << QVector2D(0, 0)
                       << QVector2D(0, 0) << QVector2D(1, 0) << QVector2D(1, 1) // Top
                       << QVector2D(1, 1) << QVector2D(0, 1) << QVector2D(0, 0)
                       << QVector2D(0, 0) << QVector2D(1, 0) << QVector2D(1, 1) // Bottom
                       << QVector2D(1, 1) << QVector2D(0, 1) << QVector2D(0, 0);

cubeTexture = bindTexture(QPixmap(":/cubeTexture.png"));
coloringShaderProgram.addShaderFromSourceFile(QGLShader::Vertex, ":/coloringVertexShader.vsh");
coloringShaderProgram.addShaderFromSourceFile(QGLShader::Fragment, ":/coloringFragmentShader.fsh");
coloringShaderProgram.link();

spotlightVertices << QVector3D(   0,    1,    0) << QVector3D(-0.5,    0,  0.5) << QVector3D( 0.5,    0,  0.5) // Front
                  << QVector3D(   0,    1,    0) << QVector3D( 0.5,    0, -0.5) << QVector3D(-0.5,    0, -0.5) // Back
                  << QVector3D(   0,    1,    0) << QVector3D(-0.5,    0, -0.5) << QVector3D(-0.5,    0,  0.5) // Left
                  << QVector3D(   0,    1,    0) << QVector3D( 0.5,    0,  0.5) << QVector3D( 0.5,    0, -0.5) // Right
                  << QVector3D(-0.5,    0, -0.5) << QVector3D( 0.5,    0, -0.5) << QVector3D( 0.5,    0,  0.5) // Bottom
                  << QVector3D( 0.5,    0,  0.5) << QVector3D(-0.5,    0,  0.5) << QVector3D(-0.5,    0, -0.5);
After clearing the screen and calculating the view matrix (which is the same for both objects) in the GlWidget::paintGL() method, we first render the cube using the lighting shaders and then render the spotlight using the coloring shader.

Because we want to keep the cube's origin aligned with the world's origin, we leave the model matrix mMatrix set to an identity matrix. Then we calculate the model-view matrix, which we also need to send to the lighting vertex shader, and extract the normal matrix with Qt's QMatrix4x4::normalMatrix() method. As we have already stated, this matrix will transform the surface normals of our cube from model coordinates into viewer coordinates. After that, we calculate the position of the light source in world coordinates according to the angle.

We can now render the cube. We bind the lighting shader program, set the uniforms and texture units, set and enable the attribute arrays, trigger the rendering, and afterwards disable the attribute arrays and release the program. For the light source's and the material's properties, we set values that give us a glossy looking surface.

Next we render the spotlight. Because we want to move the spotlight to the same place as the light source, we need to modify its model matrix. First we restore the identity matrix (actually we did not modify the model matrix before, so it is still set to the identity matrix anyway). Then we move the spotlight to the light source's position. We also rotate it, since it looks nicer if it faces our cube, by applying two rotation matrices on top. Because the pyramid that represents our light spot is still too big to fit into our scene nicely, we scale it down to a tenth of its original size. Now we follow the usual rendering procedure again, this time using the coloringShaderProgram and the spotlight data. Thanks to depth testing, the new object will be integrated seamlessly into our existing scene.
void GlWidget::paintGL()
{
    ...
    mMatrix.setToIdentity();

    QMatrix4x4 mvMatrix;
    mvMatrix = vMatrix * mMatrix;

    QMatrix3x3 normalMatrix;
    normalMatrix = mvMatrix.normalMatrix();

    QMatrix4x4 lightTransformation;
    lightTransformation.rotate(lightAngle, 0, 1, 0);
    QVector3D lightPosition = lightTransformation * QVector3D(0, 1, 1);

    lightingShaderProgram.bind();
lightingShaderProgram.setUniformValue("mvpMatrix", pMatrix * mvMatrix); lightingShaderProgram.setUniformValue("mvMatrix", mvMatrix); lightingShaderProgram.setUniformValue("normalMatrix", normalMatrix); lightingShaderProgram.setUniformValue("lightPosition", vMatrix * lightPosition); lightingShaderProgram.setUniformValue("ambientColor", QColor(32, 32, 32)); lightingShaderProgram.setUniformValue("diffuseColor", QColor(128, 128, 128)); lightingShaderProgram.setUniformValue("specularColor", QColor(255, 255, 255)); lightingShaderProgram.setUniformValue("ambientReflection", (GLfloat) 1.0); lightingShaderProgram.setUniformValue("diffuseReflection", (GLfloat) 1.0); lightingShaderProgram.setUniformValue("specularReflection", (GLfloat) 1.0); lightingShaderProgram.setUniformValue("shininess", (GLfloat) 100.0); lightingShaderProgram.setUniformValue("texture", 0); glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, cubeTexture); glActiveTexture(0);
lightingShaderProgram.setAttributeArray("vertex", cubeVertices.constData()); lightingShaderProgram.enableAttributeArray("vertex"); lightingShaderProgram.setAttributeArray("normal", cubeNormals.constData()); lightingShaderProgram.enableAttributeArray("normal"); lightingShaderProgram.setAttributeArray("textureCoordinate", cubeTextureCoordinates. lightingShaderProgram.enableAttributeArray("textureCoordinate"); glDrawArrays(GL_TRIANGLES, 0, cubeVertices.size()); lightingShaderProgram.disableAttributeArray("vertex"); lightingShaderProgram.disableAttributeArray("normal"); lightingShaderProgram.disableAttributeArray("textureCoordinate"); lightingShaderProgram.release(); mMatrix.setToIdentity(); mMatrix.translate(lightPosition); mMatrix.rotate(lightAngle, 0, 1, 0); mMatrix.rotate(45, 1, 0, 0); mMatrix.scale(0.1); coloringShaderProgram.bind(); coloringShaderProgram.setUniformValue("mvpMatrix", pMatrix * vMatrix * mMatrix); coloringShaderProgram.setAttributeArray("vertex", spotlightVertices.constData()); coloringShaderProgram.enableAttributeArray("vertex"); coloringShaderProgram.setAttributeArray("color", spotlightColors.constData()); coloringShaderProgram.enableAttributeArray("color"); glDrawArrays(GL_TRIANGLES, 0, spotlightVertices.size()); coloringShaderProgram.disableAttributeArray("vertex"); coloringShaderProgram.disableAttributeArray("color"); coloringShaderProgram.release(); }
The last thing left to do is to initialize the light source's position and set up the timer. We tell the timer to periodically invoke the timeout() slot.
GlWidget::GlWidget(QWidget *parent)
    : QGLWidget(QGLFormat(/* Additional format options */), parent)
{
    ...
    lightAngle = 0;

    QTimer *timer = new QTimer(this);
    connect(timer, SIGNAL(timeout()), this, SLOT(timeout()));
    timer->start(20);
}
In this slot, we update the angle of the light source's circulation and update the screen. We also remove the calls to QGLWidget::updateGL() from the event handlers, because the timer now triggers a redraw periodically anyway.
void GlWidget::timeout()
{
    lightAngle += 1;
    while (lightAngle >= 360) {
        lightAngle -= 360;
    }

    updateGL();
}
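For the timer connection to work, the slot also has to be declared in the widget's class definition. The tutorial does not repeat the header at this point, so the following is only a sketch of the relevant members, assuming the slot is private and lightAngle is stored as a double:

class GlWidget : public QGLWidget
{
    Q_OBJECT   // required so that the timeout() slot can be connected to the timer's signal

    ...

private slots:
    void timeout();   // advances the light angle and repaints the scene

private:
    ...
    double lightAngle;   // current angle of the light source's circulation
    ...
};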
Now we are finished with the implementation. If you build and run the program, you will see a lit, textured cube.
each other in memory. We are not limited to using one single vertex buffer for all the attributes; alternatively, we could use one vertex buffer per attribute, or a combination of both. Note that we can also mix the use of vertex arrays and vertex buffers in one rendering.

Instead of using OpenGL API calls, we use the QGLBuffer class to manage the vertex buffers of the cube and the spotlight. The type of the buffer object can be set in the constructor; it defaults to being a vertex buffer. We add a QGLBuffer member for each object, remove the vertex arrays we used in the previous version of the lighting example, and add variables to hold the number of vertices, which we will need in order to tell OpenGL how many vertices to render in the GlWidget::paintGL() method.
class GlWidget : public QGLWidget
{
    ...
private:
    ...
    int numCubeVertices;
    QGLBuffer cubeBuffer;
    ...
    int numSpotlightVertices;
    QGLBuffer spotlightBuffer;
    ...
};
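As a side note on the buffer type mentioned above: it is passed to the QGLBuffer constructor. The tutorial's code relies on the default, but a hypothetical snippet (not part of the example project) would look like this:

QGLBuffer vertexBuffer;                           // defaults to QGLBuffer::VertexBuffer
QGLBuffer indexBuffer(QGLBuffer::IndexBuffer);    // explicitly request an index buffer instead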
Buffer objects are OpenGL objects, just like the shader programs and textures which we have already used in the preceding examples, so the syntax for handling them is quite similar.

In the GlWidget::initializeGL() method, we first need to create the buffer object. This is done by calling QGLBuffer::create(). It will request a free buffer object id (similar to a variable name) from the graphics card. Then, as with textures, we need to bind the buffer to the rendering context to make it active, using QGLBuffer::bind(). After this we call QGLBuffer::allocate() to allocate the amount of memory we will need to store our vertices, normals, and texture coordinates. This function expects the number of bytes to reserve as a parameter. Using this method, we could also directly specify a pointer to the data which we want to be copied, but since we want to arrange several datasets one after the other, we do the copying in the next few lines. Allocating memory also makes us responsible for freeing this space when it is not needed anymore, by calling QGLBuffer::destroy(); Qt will do this for us when the QGLBuffer object is destroyed.

Uploading data to the graphics card is done with QGLBuffer::write(). It takes an offset (in bytes) from the beginning of the buffer object, a pointer to the data in system memory that is to be read, and the number of bytes to copy. First we copy the cube's vertices, then we append its surface normals and the texture coordinates. Note that because OpenGL uses GLfloats for its computations, we need to take the size of the GLfloat type into account when specifying memory offsets and sizes. Then we unbind the buffer object using QGLBuffer::release(). We do the same for the spotlight object.
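The first lines of this snippet (setting the vertex count and creating, binding, and sizing the cube's buffer) fall just before the excerpt printed below. A plausible sketch, assuming 36 cube vertices with three position floats, three normal floats, and two texture-coordinate floats each, would be:

    numCubeVertices = 36;   // 6 faces * 2 triangles * 3 vertices

    cubeBuffer.create();    // request a buffer object id from the graphics card
    cubeBuffer.bind();      // make it the active buffer of the rendering context
    // reserve room for positions (3 floats), normals (3 floats), and texture coordinates (2 floats) per vertex
    cubeBuffer.allocate(numCubeVertices * (3 + 3 + 2) * sizeof(GLfloat));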
    int offset = 0;
    cubeBuffer.write(offset, cubeVertices.constData(), numCubeVertices * 3 * sizeof(GLfloat));
    offset += numCubeVertices * 3 * sizeof(GLfloat);
    cubeBuffer.write(offset, cubeNormals.constData(), numCubeVertices * 3 * sizeof(GLfloat));
    offset += numCubeVertices * 3 * sizeof(GLfloat);
    cubeBuffer.write(offset, cubeTextureCoordinates.constData(), numCubeVertices * 2 * sizeof(GLfloat));
    cubeBuffer.release();

    ...

    numSpotlightVertices = 18;
    spotlightBuffer.create();
    spotlightBuffer.bind();
    spotlightBuffer.allocate(numSpotlightVertices * (3 + 3) * sizeof(GLfloat));
    offset = 0;
    spotlightBuffer.write(offset, spotlightVertices.constData(), numSpotlightVertices * 3 * sizeof(GLfloat));
    offset += numSpotlightVertices * 3 * sizeof(GLfloat);
    spotlightBuffer.write(offset, spotlightColors.constData(), numSpotlightVertices * 3 * sizeof(GLfloat));
    spotlightBuffer.release();
}
Just in case you're interested, this is how the creation of buffer objects would work if we did not use Qt's QGLBuffer class for this purpose (a short sketch follows after the next listing): We would call void glGenBuffers(GLsizei n, GLuint *buffers) to request n buffer objects, with their ids stored in buffers. Next we would bind a buffer using void glBindBuffer(GLenum target, GLuint buffer), where we would also specify the buffer's type. Then we would use void glBufferData(GLenum target, GLsizeiptr size, const GLvoid *data, GLenum usage) to upload the data. The usage parameter specifies the way the buffer is used by the main program running on the CPU (e.g. write-only, read-only, copy-only) as well as how frequently the buffer will be accessed, in order to support optimizations. void glDeleteBuffers(GLsizei n, const GLuint *buffers) is the OpenGL API function to delete buffers and free their memory.

To have OpenGL use our vertex buffer objects as the source of its vertex attributes, we need to set them up differently in the GlWidget::paintGL() method. Instead of calling QGLShaderProgram::setAttributeArray(), we call QGLShaderProgram::setAttributeBuffer() with the corresponding QGLBuffer instance bound to the rendering context. The parameters of QGLShaderProgram::setAttributeBuffer() are the same as those of QGLShaderProgram::setAttributeArray(). We only need to adapt the offset parameter to identify the location of each dataset, because we now use one big chunk of memory for all the attributes instead of a separate array for each of them.
    cubeBuffer.bind();
    int offset = 0;
    lightingShaderProgram.setAttributeBuffer("vertex", GL_FLOAT, offset, 3, 0);
    lightingShaderProgram.enableAttributeArray("vertex");
    offset += numCubeVertices * 3 * sizeof(GLfloat);
    lightingShaderProgram.setAttributeBuffer("normal", GL_FLOAT, offset, 3, 0);
    lightingShaderProgram.enableAttributeArray("normal");
    offset += numCubeVertices * 3 * sizeof(GLfloat);
    lightingShaderProgram.setAttributeBuffer("textureCoordinate", GL_FLOAT, offset, 2, 0);
    lightingShaderProgram.enableAttributeArray("textureCoordinate");
    cubeBuffer.release();

    ...

    glDrawArrays(GL_TRIANGLES, 0, numCubeVertices);

    spotlightBuffer.bind();
    offset = 0;
    coloringShaderProgram.setAttributeBuffer("vertex", GL_FLOAT, offset, 3, 0);
    coloringShaderProgram.enableAttributeArray("vertex");
    offset += numSpotlightVertices * 3 * sizeof(GLfloat);
    coloringShaderProgram.setAttributeBuffer("color", GL_FLOAT, offset, 3, 0);
    coloringShaderProgram.enableAttributeArray("color");
    spotlightBuffer.release();

    glDrawArrays(GL_TRIANGLES, 0, numSpotlightVertices);

    ...
}
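Coming back to the raw OpenGL route mentioned above, here is a minimal sketch of those calls. The vertices array and the GL_STATIC_DRAW usage hint are illustrative assumptions and do not appear in the tutorial's code:

// Hypothetical raw OpenGL equivalent of QGLBuffer::create(), bind(), allocate()/write(), and destroy().
GLfloat vertices[] = { -0.5f, -0.5f, 0.0f,   0.5f, -0.5f, 0.0f,   0.0f, 0.5f, 0.0f }; // one triangle, for illustration
GLuint bufferId;                                   // will receive the buffer object's id
glGenBuffers(1, &bufferId);                        // request one buffer object id
glBindBuffer(GL_ARRAY_BUFFER, bufferId);           // bind it as a vertex buffer
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices),    // upload the data in a single call
             vertices, GL_STATIC_DRAW);            // usage hint: written once, drawn many times
glBindBuffer(GL_ARRAY_BUFFER, 0);                  // release the binding

// ... later, when the buffer is no longer needed:
glDeleteBuffers(1, &bufferId);                     // delete the buffer and free its graphics memory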
Rendering the scene now involves less CPU usage, and the attribute data is no longer transferred from system memory to the graphics card on every frame. Although this might not be noticeable in this small example, it certainly speeds up programs that involve larger amounts of geometry data.
CHAPTER
THREE
CONCLUSION & FURTHER READING