Water Simulation on WebGL and Three.js
The University of Southern Mississippi
Water Simulation on WebGL and Three.js
by
Kerim Pereira
A Thesis
Submitted to the Honors College
of The University of Southern Mississippi
in Partial Fulfillment
of the Requirements for the Degree of
Bachelor of Science
in the Department of Computer Science
March 2013
________________________________________
Beddhu Murali, Associate Professor,
Advisor
Department of Computer Science
________________________________________
Chaoyang (Joe) Zhang, Chair,
Department of Computer Science
________________________________________
David R. Davies, Dean
Honors College
Abstract
Technology is constantly moving forward. Computers are getting better every single day. Processors, memory, hard drives, and video cards are becoming more powerful and more accessible to users. With all of this hardware progressing, it is also important for software, and for computer graphics in particular, to take advantage of it.
Since this new hardware is available to the user, the easiest way to make graphics even more accessible to everyone is through the web browser. This move to the browser simplifies the life of the end user, who does not have to install any additional software. Web browser graphics is a field that has been growing since the launch of social media platforms that allowed their users to play 2D games for free.
As a result of the increase in demand for better graphics on the web, major internet browser companies have decided to start implementing WebGL. WebGL is a 3D drawing context that became available on the web after the addition of the "<canvas>" tag in HTML5.
This thesis creates a water simulation using WebGL and Three.js. The simulation depends on a graphics component called a shader that produces moving water and applies a water texture. The final simulation should look realistic and be performance-oriented.
Chapter 1 - Introduction
Technology is constantly moving forward. Computers are getting better every single day. Processors, memory, hard drives, and video cards are becoming more powerful and more accessible to users. With all of this hardware progressing, it is also important for software to take advantage of it.
Computer graphics is a field that is progressing at a rapid rate. Users always want bigger and better graphics that look more and more realistic. They also expect a smooth experience; to deliver smooth renderings, the developer must create software that adapts to the capabilities of widely varying computers, smart phones, tablets, and game consoles.
With all of this new hardware available to the user, the easiest way to make graphics more accessible to everyone is through the web browser. This move to the browser simplifies the life of the end user, who does not have to install any additional software. Web browser graphics is a field that has been growing since the launch of social media platforms that allowed their users to play 2D games for free.
As a result of the increase in demand for better graphics on the web, major internet browser companies have decided to start implementing WebGL. WebGL is a 3D drawing context that became available on the web after the addition of the "<canvas>" tag in HTML5. With this new technology we are now able to access the video card directly without having to install a third-party plug-in. This allows programmers to create simulations or video games that can use all the power of the video card and its processing unit. The example in Figure 1.1 is a small pool simulation that runs directly in the browser.
Although browser companies are improving at a fast pace, there are still several roadblocks along the way. A programmer cannot simply render any scene they want. The developer needs to take into consideration the limitations of the browsers in comparison to the actual hardware. The browser has a specific amount of cache memory that the user cannot exceed, and after a certain amount of RAM is used the browser can slow down or crash.
Another limitation is that not every web browser acts in the same way. The manufacturers can decide what they want to implement and what they do not. For example, WebGL has been adopted by the majority of browsers, but Internet Explorer does not support it.
This thesis will create a water simulation using WebGL and Three.js. The simulation will depend on a shader that renders moving water and applies a water texture. The final simulation should look realistic and be performance-oriented so it can run smoothly even in a large scene.
Chapter 2 - Literature Review
Section 2.1: The Rendering Pipeline
The main function of the rendering pipeline is to generate a two-dimensional image when given a virtual camera, three-dimensional objects, light sources, and more.
The pipeline has been evolving over many years. In the beginning, the hardware rendering pipeline was fixed; this means that the developers did not have any means to make their code interact with the pipeline. With the evolution of the hardware, parts of the pipeline became programmable, giving developers control over algorithms and coding methods that have a significant effect on the performance of the software.
There are various steps in the rendering pipeline and they can be represented in various ways, so in order to describe it better we will divide the rendering pipeline into three conceptual stages: the application stage, the geometry stage, and the rasterizer stage. These conceptual stages are often pipelines by themselves, which means that inside each of them there can be further functional substages.
Figure 2.1 is a good illustration of the conceptual stages and of how inside each one of them there can be an internal pipeline, as in the geometry stage, or how a stage can instead be parallelized.
Section 2.1.1: The Application Stage
The application stage is where it all begins. Here, the developer has full control, since this stage runs as software on the CPU. This is the stage where the developer allows the user to provide all the inputs to the program and, based on this input, generates the geometry to be rendered, the material properties, and the virtual cameras. This data is then passed on to the geometry stage.
Section 2.1.2: The Geometry Stage
The application stage ends by handing rendering primitives such as points, triangles, and lines to the geometry stage, which is responsible for the majority of the per-vertex operations. This stage is divided into the following functional stages: model and view transform, vertex shading, projection, clipping, and screen mapping.
Section 2.1.3: Model and View Transform
During the model and view transform step, the input that is coming from the application stage has to be transformed into the supported coordinate system. This process takes the vertices and normals of an object and locates them in world space. The world space is a unique entity, so after this process every object is in the same world space. Only models that can be seen by the virtual camera, which is also set during the application stage, are rendered to the screen. The camera is set in world space with a position and a viewing direction.
The models and the camera use the view transform to relocate the objects and the camera. The purpose of the view transform is to reposition the camera at the origin of the coordinate system, which simplifies the projection and clipping steps that follow.
Section 2.1.4: Vertex Shading
The vertex shading stage is one of the most programmable parts of the rendering pipeline. In order to create realistic graphics, the developer needs to give an appearance to the objects. During this phase, the rendering pipeline is focused on the vertices of the objects. Each vertex in an object stores different data such as position, normal, color, and texture coordinates.
Vertex shaders can be used to perform traditional vertex-based operations such as transforming the position by a matrix, computing lighting equations to generate a color, and generating texture coordinates. The output of a vertex shader, stored in "varying" variables, is passed to the rasterizer stage to be used as input for the fragment shader.
Section 2.1.5: Projection
The next stage in the rendering pipeline is projection. The main point of the projection stage is to convert 3D models into 2D projections with respect to the camera. Here, the view volume is transformed into the canonical view volume, which is a unit cube with extreme points (-1,-1,-1) and (1,1,1). Figure 2.4 shows an example of a unit cube. There are two main types of projections: orthographic and perspective.
Orthographic projection is a form of parallel projection, which means that all projection lines are orthogonal to the projection plane, so parallel lines stay parallel after the transform. The second type is perspective. Objects that are farther away from the camera are rendered smaller, and this mimics the way we perceive the world with our own eyes.
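In Three.js, which is used later in this thesis, the two projection types correspond to two camera classes. A small sketch follows; the field of view, aspect ratio, and clipping distances are chosen arbitrarily for illustration:

    // Perspective projection: distant objects appear smaller.
    var perspectiveCamera = new THREE.PerspectiveCamera(
      45,                                      // vertical field of view in degrees
      window.innerWidth / window.innerHeight,  // aspect ratio
      0.1,                                     // near clipping plane
      10000                                    // far clipping plane
    );

    // Orthographic projection: projection lines stay parallel, so objects
    // keep their size on screen regardless of their distance to the camera.
    var orthoCamera = new THREE.OrthographicCamera(
      -500, 500,    // left, right
       500, -500,   // top, bottom
       0.1, 10000   // near, far
    );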
Section 2.1.6: Clipping
During the clipping stage, only the primitives that are inside the view frustum are passed on to the next stage. Objects that are partly inside and partly outside the view will be clipped, and only the parts that are inside the unit cube will remain.
The clipping stage is not programmable by the developer and is limited to how it is implemented in the hardware.
Section 2.1.7: Screen Mapping
During the screen mapping stage, the models that were not clipped out are converted into screen coordinates. A screen coordinate is the position of the object within the drawing context. Since the drawing context can only be represented in 2D, the screen coordinates are also represented in 2D with an x-coordinate and a y-coordinate. The new screen coordinates are passed to the rasterizer stage along with the z-coordinate that indicates depth.
Section 2.1.8: The Rasterizer Stage
After an object passes through the geometry stage, the next stop is the rasterizer stage. The purpose of this stage is to compute and set the color for each pixel that is part of the object; this process is called scan conversion or rasterization. Just like the previous stages, the rasterizer stage can be divided into the following functional stages: triangle setup, triangle traversal, pixel shading, and merging. A graphical representation of the rasterizer stage is shown in Figure 2.5.
Figure 2.5 – The Rasterizer Stage (Akenine-Moller et al., 2008)
Section 2.1.9: Triangle Setup
The first step in the rasterizer stage is the triangle setup. The triangle setup is a fixed operation whose main purpose is to compute the data that describes each triangle's surface.
Section 2.1.10: Triangle Traversal
Triangle traversal is the process where a line is drawn on a raster screen between two points; this process is also called scan conversion. With this process, each property of the vertices is interpolated across the pixels covered by the triangle.
Section 2.1.11: Pixel Shading
Just like the vertex shading stage, pixel shading, also called fragment shading, is a highly programmable step of the rendering pipeline. At this point the developer can change the appearance of an object one pixel at a time. The pixel shader is extremely powerful; it is where per-pixel effects such as texturing are computed.
Section 2.1.12: Merging
The merging stage is the last step of the rendering pipeline. This stage depends on several buffers to store information about the individual pixels. The common buffers that we can find during the merging stage are the color buffer, the depth buffer, and the frame buffer.
The color buffer is where the color information that comes from the fragment shader is stored. The color buffer also contains an alpha channel, which is used to store the opacity value of each pixel.
The depth buffer, also called the z-buffer, is where the visibility problem is solved. The depth buffer keeps only the objects that are closest to the camera; this is done with a simple comparison of z values, with the pixel closest to the camera being the one that is drawn.
The frame buffer is just a name for all the buffers on the system taken together. In addition to the color and depth buffers, there may be an accumulation buffer. The accumulation buffer is used to create other effects like depth of field, motion blur, antialiasing, etc.
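In raw WebGL these buffers are managed through the rendering context. A minimal sketch of enabling the depth test and clearing the color and depth buffers (the canvas id is an assumption):

    var canvas = document.getElementById("webgl-canvas");
    var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");

    gl.clearColor(0.0, 0.0, 0.0, 1.0);   // color (RGB) plus the alpha channel
    gl.enable(gl.DEPTH_TEST);            // resolve visibility with the depth buffer
    gl.depthFunc(gl.LESS);               // keep the fragment closest to the camera

    // Both buffers are cleared before drawing each frame.
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);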
Section 2.2: OpenGL ES
OpenGL ES is an application programming interface (API) for advanced 3D graphics that is targeted at embedded devices like video game consoles, cell phones, tablets, handheld devices, etc. OpenGL ES was created by the Khronos Group, which is a consortium of companies from the graphics industry, and it is a well-defined subset of OpenGL. It provides a low-level API between applications and hardware or software graphics engines. This API addresses some of the constraints of embedded devices, such as limited processing power, limited memory, and low power consumption (Munshi et al., 2008).
Section 2.3: WebGL
WebGL is a JavaScript API for rendering 3D graphics in the web browser, based on OpenGL ES 2.0. Its shaders are written in the OpenGL Shading Language, abbreviated as GLSL.
WebGL runs in the HTML5 <canvas> element and has full integration with all the Document Object Model (DOM) interfaces. WebGL is a cross-platform API, since it can run on many different devices, and it is intended for the creation of dynamic web applications (Parisi, 2012).
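A minimal sketch of this integration (the element id and the fallback context name are assumptions):

    <canvas id="webgl-canvas" width="800" height="600"></canvas>
    <script>
      // The drawing context is requested directly from the DOM element.
      var canvas = document.getElementById("webgl-canvas");
      var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
      if (!gl) {
        alert("This browser does not support WebGL.");
      }
    </script>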
Just like OpenGL, WebGL is a low-level API that requires a lot of code in order to get something created and working. You have to load, compile, and link the shaders, set up the variables that are going to be passed to the shaders, and perform complex matrix math to animate objects. At the same time, it gives the developer the freedom to control almost every detail of the rendering process.
Now that HTML5 is the new standard in the industry, every major browser supports WebGL except Internet Explorer. Firefox 4 or higher, Google Chrome 10 or higher (Chrome keeps itself up to date automatically), and Safari on OS X 10.7 are some examples of current browsers that support this technology. Browsers on tablets and smart phones are also starting to support it.
Section 2.4: Three.js
As said in the last section, WebGL is a low-level API in which even drawing a simple triangle to the screen takes a lot of work. Three.js is a framework that simplifies the creation of 3D scenes. With this framework, it is not necessary to explicitly compile and apply shaders with WebGL commands; all of that is done for you in the background. Three.js is open source and can be downloaded from the following link:
https://github.com/mrdoob/three.js/
Three.js renders scenes through the WebGL drawing context and can fall back to the 2D rendering context if WebGL is not supported. In this project, Three.js is the library that is going to be used to create the scene and the water simulation.
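To illustrate the difference in effort, a complete Three.js scene fits in a handful of lines. This is a sketch only; the sizes and colors are arbitrary, and the API shown follows the Three.js documentation rather than the exact revision used in this project:

    var scene = new THREE.Scene();
    var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 1000);
    camera.position.z = 5;

    var renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    // A red sphere with a basic (unlit) material; no shader code is written by hand.
    var sphere = new THREE.Mesh(
      new THREE.SphereGeometry(1, 32, 32),
      new THREE.MeshBasicMaterial({ color: 0xff0000 })
    );
    scene.add(sphere);

    function render() {
      requestAnimationFrame(render);
      sphere.rotation.y += 0.01;
      renderer.render(scene, camera);
    }
    render();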
Section 2.5: Shaders
A shader is a small program that runs on the graphics card and is in charge of determining the effect of light on a material. Shaders are responsible for the materials and the final appearance of the objects in a scene. Shaders are so important that they even have their own programming language, the OpenGL Shading Language (GLSL). GLSL is a C-like language that has its own types, which are shown in the tables below.
Figure 2.7 – GLSL Qualifiers (Danchilla, 2012).
Figure 2.7 shows the GLSL qualifiers. These qualifiers are used in different ways throughout the shaders. Attributes and uniforms are input values for the shaders, and varying types are output values because they can change during the lifetime of the shader program. Varying variables are written by the vertex shader and are its output; after the vertex shader outputs a varying variable, the fragment shader uses it as an input.
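As a small illustration of the three qualifiers in a vertex shader (the variable names here are arbitrary):

    // "attribute" is per-vertex input, "uniform" is set from JavaScript and stays
    // constant for the whole draw call, and "varying" is interpolated output that
    // the fragment shader later receives as input.
    attribute vec3 aPosition;
    uniform mat4 uMvpMatrix;
    varying vec3 vColor;

    void main(void) {
      vColor = aPosition * 0.5 + 0.5;
      gl_Position = uMvpMatrix * vec4(aPosition, 1.0);
    }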
Figure 2.8 shows the built-in variables that GLSL gives the developer. They are
already implemented and they can be used without any prior declaration.
There is plenty more built-in functionality that will not be discussed in this chapter. GLSL has many built-in functions, such as mathematical functions, texture functions, and geometric functions. More information about these can be found at the following link on the OpenGL website:
http://www.opengl.org/sdk/docs/manglsl/
As mentioned earlier in this chapter, there are two main types of shaders: vertex and fragment. Figure 2.9 shows an example of a simple vertex and fragment shader.
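A sketch of the kind of code the figure contains is given below; the attribute name is an assumption, and the original figure may differ slightly, but it is laid out so that the line references in the following paragraph roughly match:

    <script id="shader-vs" type="x-shader/x-vertex">
      attribute vec3 aVertexPosition;

      void main(void) {
        gl_Position = vec4(aVertexPosition, 1.0);
      }
    </script>

    <script id="shader-fs" type="x-shader/x-fragment">
      precision mediump float;
      void main(void) {
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // opaque red
      }
    </script>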
In the above example, we have a very simple vertex and fragment shader within "<script>" tags. This is one of the ways that shaders can be used in WebGL and Three.js. In line 1, we declare the script tag with an id of "shader-vs" and a type that says that it is a vertex shader. Lines 4-5 are the main body of any shader, since every shader is required to have a main function; in this case we just assign our vertex position to the GL variable "gl_Position". The fragment shader starts in line 9 with the same declaration of the script tag, but this time we make the distinction that it is a fragment shader; then we declare the main function, build a four-dimensional vector with the value of red, and store it in the GL variable "gl_FragColor".
Chapter 3 - Methodology
Section 3.1: Setting Up the Project
The first step is to set up the project files and create the index.html. Figure 3.1 shows the code necessary to create the html file.
Figure 3.1 is a simple HTML page with basic Cascading Style Sheets (CSS) code. Next, it is necessary to load every library that is needed to create the scene. At this point the most important one is in line 14, where Three.js is actually loaded so that it can be used in our webpage. During initialization, it is a good idea to check whether the browser supports WebGL, which is done in lines 22-25. Finally, in line 28 there is a call to the function "init()" that comes from the js/app.js file. In this function a WebGL renderer is created, whose purpose is to get the drawing context and then render the scene using WebGL. Figure 3.2 shows how the renderer is set up from init().
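A sketch of what such a page might look like is shown below. The file names, the styling, and the use of the Detector.js helper that ships with the Three.js examples for the WebGL check are assumptions, and line positions will not match the numbers cited from the original figure:

    <!DOCTYPE html>
    <html>
      <head>
        <title>Water Simulation</title>
        <style>
          body { margin: 0; overflow: hidden; background-color: #000; }
        </style>
      </head>
      <body>
        <!-- Load Three.js and the helpers used by the project. -->
        <script src="js/Three.js"></script>
        <script src="js/Detector.js"></script>
        <script src="js/Stats.js"></script>
        <script src="js/app.js"></script>
        <script>
          // Warn the user if the browser cannot create a WebGL context.
          if (!Detector.webgl) {
            Detector.addGetWebGLMessage();
          }
          // Entry point defined in js/app.js.
          init();
        </script>
      </body>
    </html>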
Figure 3.2 shows how the WebGL renderer is created for the project. Everything starts from the init() function in app.js. From there, setupRenderer() is called, which creates a new renderer with the size of the drawing area; the renderer is then attached to the body tag of the html file using JavaScript. At this point the renderer is set, but it has to be refreshed several times a second, and that is the job of the render() function.
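A sketch of init(), setupRenderer(), and render() as described above; the camera parameters and internal variable names are assumptions:

    var renderer, scene, camera;

    function init() {
      setupRenderer();
      scene = new THREE.Scene();
      camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 100000);
      camera.position.set(0, 100, 400);
      render();
    }

    function setupRenderer() {
      // Create the WebGL renderer with the size of the drawing area
      // and attach its canvas to the <body> of the html file.
      renderer = new THREE.WebGLRenderer({ antialias: true });
      renderer.setSize(window.innerWidth, window.innerHeight);
      document.body.appendChild(renderer.domElement);
    }

    function render() {
      // Ask the browser to call render() again for the next frame,
      // so the scene is refreshed several times a second.
      requestAnimationFrame(render);
      renderer.render(scene, camera);
    }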
Section 3.2: Creating a Scene
Now it is time to create the scene where the water simulation is going to take place. The first things that are going to be implemented in the scene are the camera and a simple object (a sphere). Figure 3.3 shows how to set up the camera, controls, and lights, and how to add the sphere to the scene.
Figure 3.3 shows the functions setupSkyBox(), setupScene() and setupLights(). All of these functions are called in init(). The skybox is a method of creating a background to make a scene look bigger. It is created by adding a cube to the scene and applying a texture to the inside of the cube. In order to create a skybox in Three.js, it is necessary to create a camera for the skybox; this is implemented this way because the skybox needs its own renderer. In line 36, the skybox camera is created as a perspective camera, and in line 37 the new scene for the skybox is created. Next, several images are loaded; these images are going to be attached to the inside of the cube. After loading the texture, it is time to use shaders to make the skybox look like a background, and it is convenient that Three.js already has those shaders created for us (for the skybox and other simple effects). Then the material is created and the geometry is created. The last step is to join them into a mesh and add it to the skybox scene.
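A sketch of what setupSkyBox() might look like; the texture file names and cube size are assumptions, and the helpers shown (THREE.ImageUtils.loadTextureCube, THREE.ShaderLib["cube"], THREE.CubeGeometry) follow Three.js revisions from around 2013 and have since been renamed:

    var skyCamera, skyScene;

    function setupSkyBox() {
      // A separate camera and scene so the skybox can be rendered in its own pass.
      skyCamera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 100000);
      skyScene = new THREE.Scene();

      // Six images, one for each inside face of the cube.
      var urls = ["px.jpg", "nx.jpg", "py.jpg", "ny.jpg", "pz.jpg", "nz.jpg"];
      var textureCube = THREE.ImageUtils.loadTextureCube(urls);

      // Three.js ships a ready-made cube-map shader, so none has to be written by hand.
      var shader = THREE.ShaderLib["cube"];
      shader.uniforms["tCube"].value = textureCube;

      var skyMaterial = new THREE.ShaderMaterial({
        vertexShader: shader.vertexShader,
        fragmentShader: shader.fragmentShader,
        uniforms: shader.uniforms,
        side: THREE.BackSide   // texture the inside of the cube
      });

      // Join the geometry and the material into a mesh and add it to the skybox scene.
      skyScene.add(new THREE.Mesh(new THREE.CubeGeometry(10000, 10000, 10000), skyMaterial));
    }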
The setupScene() function simply creates a new scene with the new THREE.Scene() constructor; then some fog is added just for looks. Next, an object (the sphere) is added to the scene. The setupLights() function then creates the lights, among them a directional light. In lines 78-80 the lights are set in the desired position and added to the scene. Figure 3.4 shows the result of the code in Figure 3.3.
Figure 3.4 – Result from Figure 3.3
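A corresponding sketch of setupScene() and setupLights(); the sphere size, fog settings, colors, and light positions are assumptions based on the description above, and the scene variable is the global created in init():

    function setupScene() {
      scene = new THREE.Scene();
      scene.fog = new THREE.FogExp2(0xaabbcc, 0.0005);   // fog added just for looks

      // A simple sphere in the middle of the scene for reference.
      var ball = new THREE.Mesh(
        new THREE.SphereGeometry(50, 32, 32),
        new THREE.MeshPhongMaterial({ color: 0x888888 })
      );
      ball.position.set(0, 50, 0);
      scene.add(ball);
    }

    function setupLights() {
      var ambient = new THREE.AmbientLight(0x555555);
      var directional = new THREE.DirectionalLight(0xffffff, 1.0);
      directional.position.set(500, 1000, 500);   // set the light in the desired position
      scene.add(ambient);
      scene.add(directional);
    }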
Section 3.3: Creating the Plane
In order to create the water simulation, a plane is needed. The plane is a simple object in Three.js that will later have the water shader applied to it. The code to add a plane is not long; Figure 3.5 shows how to create a plane in Three.js.
The code in the figure above is pretty much the same as that for creating the ball in the previous section: just create the geometry and the material, join them in a mesh, and add it to the scene.
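A sketch of those steps; the plane size and the number of segments (needed later so the vertex shader has enough vertices to displace) are assumptions:

    // Create the plane that will later receive the water shader.
    var planeGeometry = new THREE.PlaneGeometry(10000, 10000, 100, 100);
    var planeMaterial = new THREE.MeshBasicMaterial({ color: 0x3366aa });
    var waterPlane = new THREE.Mesh(planeGeometry, planeMaterial);
    waterPlane.rotation.x = -Math.PI / 2;   // lay the plane flat
    scene.add(waterPlane);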
Section 3.4: Creating the Vertex Shader
Now that the plane is set, it is time to actually start creating the simulation. Since
what the project is looking for is to create a water simulation, it is necessary to know how
water behaves. Water moves in waves; these waves move just like the sine function.
Knowing that waves behave in the same way as the sine or cosine function tells us
that in order to make the waves move, the sine function will be needed. Since the vertex
shader is the one that will give the effect of movement, the sine function will be applied
here. Now, it is also important to make the shader as easy to configure as possible; the
solution for this is to create uniforms whose values are easy to change. Figure 3.7 shows how uniforms can be configured to change the appearance of the shader.
Figure 3.7 – Uniform Declaration
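A sketch of such a uniform declaration; the names and default values are assumptions, and the "type" strings follow the Three.js revisions of that time:

    // Uniform values that can be tweaked without touching the shader code.
    var waterUniforms = {
      uTime:       { type: "f", value: 0.0 },   // advanced every frame to animate the waves
      uWaveHeight: { type: "f", value: 15.0 },  // amplitude of the sine waves
      uWaveLength: { type: "f", value: 300.0 }, // distance between wave crests
      uTexture:    { type: "t", value: THREE.ImageUtils.loadTexture("textures/water.jpg") }
    };
    // In the render loop, uTime is increased a little each frame, for example:
    //   waterUniforms.uTime.value += 0.01;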
The next step is actually writing the vertex shader. The method that is going to be used is to implement the shader as a string inside the app.js file. Figure 3.8 shows the vertex shader, which is based on the work of Concord (2010). The shader starts with a call to the main() function, and from there waveHeight() and waveNormal() are called. The waveHeight function returns a float value that is used as the z value of the position. The waveNormal function returns a vec3 with the normalized normal of each vertex, which is later used by the fragment shader.
Figure 3.8 – Vertex Shader
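A sketch of a vertex shader of this shape, written as a string in app.js and using the uniform names from the earlier sketch; the actual shader in Figure 3.8, based on Concord (2010), is more elaborate:

    var waterVertexShader = [
      "uniform float uTime;",
      "uniform float uWaveHeight;",
      "uniform float uWaveLength;",
      "varying vec2 vUv;",
      "varying vec3 vNormal;",
      "",
      "float waveHeight(vec2 p) {",
      "  // Sine waves travelling along x and y, scaled by the uniforms.",
      "  return uWaveHeight * (sin(p.x / uWaveLength + uTime) +",
      "                        sin(p.y / uWaveLength + uTime));",
      "}",
      "",
      "vec3 waveNormal(vec2 p) {",
      "  // Approximate the normal from nearby heights and normalize it.",
      "  float delta = 1.0;",
      "  float hx = waveHeight(p + vec2(delta, 0.0)) - waveHeight(p - vec2(delta, 0.0));",
      "  float hy = waveHeight(p + vec2(0.0, delta)) - waveHeight(p - vec2(0.0, delta));",
      "  return normalize(vec3(-hx, -hy, 2.0 * delta));",
      "}",
      "",
      "void main() {",
      "  vUv = uv;",
      "  vNormal = waveNormal(position.xy);",
      "  vec3 displaced = vec3(position.xy, waveHeight(position.xy));",
      "  gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);",
      "}"
    ].join("\n");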
Section 3.5: Creating the Fragment Shader
After the vertex shader has calculated its output, it passes all the needed values to the fragment shader. The fragment shader is where the texture is going to be applied. The main goal, after having the shader move like actual water, is to make it look realistic. In order to make the water texture that has been loaded look better, the built-in function "noise" will be used to give a sense of randomness to the water. Then the color of the texture will be passed to gl_FragColor. Figure 3.9 shows the complete fragment shader.
Figure 3.9 – Fragment Shader
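A sketch of a fragment shader of this shape, continuing the names used above. Because the GLSL noise built-ins are not reliably available in WebGL implementations, the sketch uses a small hash-based substitute; the actual shader in Figure 3.9 may differ:

    var waterFragmentShader = [
      "uniform sampler2D uTexture;",
      "uniform float uTime;",
      "varying vec2 vUv;",
      "varying vec3 vNormal;",
      "",
      "// Small pseudo-random substitute for the noise built-in.",
      "float pseudoNoise(vec2 p) {",
      "  return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);",
      "}",
      "",
      "void main() {",
      "  // Jitter the texture coordinates slightly to add randomness to the water.",
      "  vec2 uvOffset = vUv + 0.01 * vec2(pseudoNoise(vUv + uTime), pseudoNoise(vUv - uTime));",
      "  vec4 texColor = texture2D(uTexture, uvOffset * 10.0);",
      "  // Simple shading based on the normal computed in the vertex shader.",
      "  float light = 0.5 + 0.5 * max(dot(normalize(vNormal), vec3(0.0, 0.0, 1.0)), 0.0);",
      "  gl_FragColor = vec4(texColor.rgb * light, 1.0);",
      "}"
    ].join("\n");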
Section 3.6: Applying the Shaders to the Material
After both the vertex and fragment shaders are written, it is time to apply them to the material of the plane. Three.js makes this extremely simple. Figure 3.10 shows the four lines of code that are needed to create the material; the mesh is then created with the new material as one of its parameters.
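Using the names from the sketches above, those lines might look like this (a sketch, not the exact code of Figure 3.10):

    var waterMaterial = new THREE.ShaderMaterial({
      uniforms:       waterUniforms,
      vertexShader:   waterVertexShader,
      fragmentShader: waterFragmentShader
    });
    // The mesh is created with the new material as one of its parameters.
    var water = new THREE.Mesh(new THREE.PlaneGeometry(10000, 10000, 100, 100), waterMaterial);
    water.rotation.x = -Math.PI / 2;
    scene.add(water);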
Figure 3.11 shows the plane with the water shader applied to it.
Figure 3.11 – Plane with Shader
Section 3.7: Adding the Terrain
In order to prove that the shader can be used in large scenes, a terrain is created. The terrain function is taken from a Three.js example that is available when Three.js is downloaded. Figure 3.12 shows the function that loads the terrain.
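The actual loader comes from the Three.js examples; as a simplified stand-in, a terrain can be faked by displacing a plane with random heights (this sketch uses the old Geometry API of the Three.js revisions of that time and is not the example function itself):

    function setupTerrain() {
      var geometry = new THREE.PlaneGeometry(20000, 20000, 128, 128);
      // Push each vertex up by a random amount to get a rough landscape.
      for (var i = 0; i < geometry.vertices.length; i++) {
        geometry.vertices[i].z = Math.random() * 200;
      }
      geometry.computeVertexNormals();
      var terrain = new THREE.Mesh(geometry, new THREE.MeshLambertMaterial({ color: 0x886644 }));
      terrain.rotation.x = -Math.PI / 2;
      terrain.position.y = -50;   // keep the terrain slightly below the water plane
      scene.add(terrain);
    }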
Section 3.8: Adding Stats.js
The last step in the project is to add a way to record the frames per second (FPS) at which the scene runs. Three.js comes with a file called stats.js that can be placed on top of our canvas and will show the FPS. In order to make stats.js work, it is necessary to create a new container and append it to the body of the html file. After that is done, it is as simple as calling the new Stats() function and positioning the counter wherever it is wanted on the screen.
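A sketch of that setup; the styling is an assumption, and the domElement property follows the stats.js versions of that time:

    var container = document.createElement("div");
    document.body.appendChild(container);

    var stats = new Stats();
    stats.domElement.style.position = "absolute";   // place the counter on top of the canvas
    stats.domElement.style.top = "0px";
    container.appendChild(stats.domElement);

    // Inside the render loop, the counter is refreshed once per frame:
    //   stats.update();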
Chapter 4 - Results
After all the programming and over three hundred lines of code, there is a final
scene that can be viewed in Figure 4.1. This scene has a skybox, a sphere in the middle
for reference, the terrain, and the water shader. Also, the scene is highly programmable
and changing its parameters is very simple. The scene can be found at the following url:
http://bigcat.cs.usm.edu/~kpereira/thesis/
The performance of the final scene is very stable. The scene has been tested on three different configurations.
Framerate Performance:
Memory Performance:
After testing the final scene with Google Chrome's built-in task manager, leaving the scene running for hours produced little to no change in memory usage. This means that the final scene does not have any memory leaks and it performs really well. Figure 4.3 shows the amount of desktop memory and GPU memory that the scene is using.
Figure 4.3 – Google Chrome's Task Manager
The final scene is highly programmable and its appearance can be altered with simple changes. The texture in the shader, the skybox, and the terrain can all be changed in less than a minute. The way the shader behaves can also be changed with little effort. The following figures show some of these variations.
Figure 4.4 – Different Texture and Higher Waves
Figure 4.6 – Dirt Texture in the Water Shader
Chapter 5 - Analysis and Evaluation
The water simulation looks realistic and has very good performance on a large space with a terrain, in comparison to other water simulations in WebGL such as the example shown in Figure 1.1. The project includes all the advantages that Three.js provides, and it is highly programmable. The thesis accomplished all of its objectives and even a little more.
Section 5.2: Future Work
There is more work that can be done on the water simulation. For future work, the simulation could include collision detection with objects. For example, when the waves move up there is clipping between the terrain and the water; if collision detection were implemented, this would not happen. Also, the simulation could be made more interactive; this means that objects in the water, like the sphere, would move, and that any interaction with the water would produce a visible response such as ripples.
References
Akenine-Moller, Tomas, Eric Haines, and Naty Hoffman. "Real-Time Rendering, Third Edition", CRC Press, pp. 11-24, July 2008.
Danchilla, Brian. "Beginning WebGL for HTML5", Apress, pp. 42-49, Aug 2012.
Munshi, Aaftab, Dan Ginsburg, and Dave Shreiner. "OpenGL ES 2.0 Programming Guide", Addison-Wesley Pearson Education, pp. 1-5, 77-90, Aug 2008.