OGRE Manual
v1.7 ('Cthugha')
Steve Streeting
Copyright © Torus Knot Software Ltd
Permission is granted to make and distribute verbatim copies of this manual provided the
copyright notice and this permission notice are preserved on all copies.
Permission is granted to copy and distribute modified versions of this manual under the con-
ditions for verbatim copying, provided that the entire resulting derived work is distributed
under the terms of a permission notice identical to this one.
Table of Contents

OGRE Manual ... 1
1 Introduction ... 2
  1.1 Object Orientation - more than just a buzzword ... 2
  1.2 Multi-everything ... 3
3 Scripts ... 16
  3.1 Material Scripts ... 16
    3.1.1 Techniques ... 21
    3.1.2 Passes ... 24
    3.1.3 Texture Units ... 45
    3.1.4 Declaring Vertex/Geometry/Fragment Programs ... 63
    3.1.5 Cg programs ... 69
    3.1.6 DirectX9 HLSL ... 70
    3.1.7 OpenGL GLSL ... 71
    3.1.8 Unified High-level Programs ... 76
    3.1.9 Using Vertex/Geometry/Fragment Programs in a Pass ... 79
    3.1.10 Vertex Texture Fetch ... 95
    3.1.11 Script Inheritance ... 96
    3.1.12 Texture Aliases ... 100
    3.1.13 Script Variables ... 104
    3.1.14 Script Import Directive ... 105
  3.2 Compositor Scripts ... 106
    3.2.1 Techniques ... 108
    3.2.2 Target Passes ... 112
    3.2.3 Compositor Passes ... 114
    3.2.4 Applying a Compositor ... 120
  3.3 Particle Scripts ... 121
    3.3.1 Particle System Attributes ... 123
    3.3.2 Particle Emitters ... 130
    3.3.3 Particle Emitter Attributes ... 131
    3.3.4 Standard Particle Emitters ... 135
7 Shadows ... 180
  7.1 Stencil Shadows ... 181
  7.2 Texture-based Shadows ... 185
  7.3 Modulative Shadows ... 190
  7.4 Additive Light Masking ... 191
8 Animation ... 197
  8.1 Skeletal Animation ... 197
  8.2 Animation State ... 198
  8.3 Vertex Animation ... 198
    8.3.1 Morph Animation ... 201
    8.3.2 Pose Animation ... 201
    8.3.3 Combining Skeletal and Vertex Animation ... 202
  8.4 SceneNode Animation ... 203
  8.5 Numeric Value Animation ... 203
OGRE Manual
Copyright © The OGRE Team
This work is licensed under the Creative Commons Attribution-ShareAlike 2.5 License.
To view a copy of this licence, visit http://creativecommons.org/licenses/by-sa/2.5/
or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305,
USA.
1 Introduction
This chapter is intended to give you an overview of the main components of OGRE and
why they have been put together that way.
1.1 Object Orientation - more than just a buzzword
Well, nowadays graphics engines are like any other large software system. They start
small, but soon they balloon into monstrously complex beasts which can't all be understood
at once. It's pretty hard to manage systems of this size, and even harder to make
changes to them reliably, and that’s pretty important in a field where new techniques and
approaches seem to appear every other week. Designing systems around huge files full of C
function calls just doesn’t cut it anymore - even if the whole thing is written by one person
(not likely) they will find it hard to locate that elusive bit of code after a few months and
even harder to work out how it all fits together.
I’m not going to teach you OO here, that’s a subject for many other books, but suffice to
say I’d seen enough benefits of OO in business systems that I was surprised most graphics
code seemed to be written in C function style. I was interested to see whether I could apply
my design experience in other types of software to an area which has long held a place in
my heart - 3D graphics engines. Some people I spoke to were of the opinion that using
full C++ wouldn’t be fast enough for a real-time graphics engine, but others (including me)
were of the opinion that, with care, an object-oriented framework can be performant. We
were right.
In summary, here are the benefits an object-oriented approach brings to OGRE:
Abstraction
Common interfaces hide the nuances between different 3D API implementations
and operating systems.
Encapsulation
There are a lot of state management tasks and context-specific actions to be done in
a graphics engine - encapsulation allows me to put the code and data nearest
to where it is used which makes the code cleaner and easier to understand, and
more reliable because duplication is avoided.
Polymorphism
The behaviour of methods changes depending on the type of object you are
using, even if you only learn one interface, e.g. a class specialised for managing
indoor levels behaves completely differently from the standard scene manager,
but looks identical to other classes in the system and has the same methods
called on it.
1.2 Multi-everything
I wanted to do more than create a 3D engine that ran on one 3D API, on one platform,
with one type of scene (indoor levels are most popular). I wanted OGRE to be able to
extend to any kind of scene (yet still implement scene-specific optimisations under the
surface), any platform and any 3D API.
Therefore all the ’visible’ parts of OGRE are completely independent of platform, 3D
API and scene type. There are no dependencies on Windows types, no assumptions about
the type of scene you are creating, and the principles of the 3D aspects are based on core
maths texts rather than one particular API implementation.
Now of course somewhere OGRE has to get down to the nitty-gritty of the specifics
of the platform, API and scene, but it does this in subclasses specially designed for the
environment in question, which still expose the same interface as the abstract versions.
For example, there is a ’Win32Window’ class which handles all the details about render-
ing windows on a Win32 platform - however the application designer only has to manipulate
it via the superclass interface ’RenderWindow’, which will be the same across all platforms.
Similarly the ’SceneManager’ class looks after the arrangement of objects in the scene
and their rendering sequence. Applications only have to use this interface, but there is a
’BspSceneManager’ class which optimises the scene management for indoor levels, meaning
you get both performance and an easy to learn interface. All applications have to do is hint
about the kind of scene they will be creating and let OGRE choose the most appropriate
implementation - this is covered in a later tutorial.
OGRE’s object-oriented nature makes all this possible. Currently OGRE runs on Win-
dows, Linux and Mac OSX using plugins to drive the underlying rendering API (currently
Direct3D or OpenGL). Applications use OGRE at the abstract level, thus ensuring that
they automatically operate on all platforms and rendering subsystems that OGRE provides.
2 The Core Objects
Introduction
This tutorial gives you a quick summary of the core objects that you will use in OGRE and
what they are used for.
All of OGRE's classes live inside a C++ namespace called 'Ogre'. This means every class,
type etc should be prefixed with 'Ogre::', e.g. 'Ogre::Camera', 'Ogre::Vector3' etc, which
means that if elsewhere in your application you have used a Vector3 type you won't get
name clashes. To avoid lots of extra typing you can add a 'using namespace Ogre;'
statement to your code, which means you don't have to type the 'Ogre::' prefix unless
there is ambiguity (in the situation where you have another definition with the same name).
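For example (a minimal sketch; it assumes the OGRE headers are available and that a
SceneManager pointer called sceneMgr has already been obtained):

#include <Ogre.h>

void namespaceExample(Ogre::SceneManager* sceneMgr)
{
    // Fully qualified names:
    Ogre::Vector3 position(0.0f, 10.0f, 0.0f);
    Ogre::Camera* cam = sceneMgr->createCamera("MainCamera");
    cam->setPosition(position);

    // With a using declaration in scope, the prefix can be dropped:
    using namespace Ogre;
    Vector3 offset(1.0f, 0.0f, 0.0f);
    cam->move(offset);
}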
OGRE contains a large number of classes; the class diagram in the full manual shows some
of the more significant ones to give you an idea of how it all slots together.
At the very top of that diagram is the Root object. This is your 'way in' to the OGRE
system, and it’s where you tend to create the top-level objects that you need to deal with, like
scene managers, rendering systems and render windows, loading plugins, all the fundamental
stuff. If you don’t know where to start, Root is it for almost everything, although often it
will just give you another object which will actually do the detail work, since Root itself is
more of an organiser and facilitator object.
The majority of the rest of OGRE's classes fall into one of three roles:
Scene Management
This is about the contents of your scene, how it’s structured, how it’s viewed
from cameras, etc. Objects in this area are responsible for giving you a natural
declarative interface to the world you’re building; i.e. you don’t tell OGRE "set
these render states and then render 3 polygons", you tell it "I want an object
here, here and here, with these materials on them, rendered from this view",
and let it get on with it.
Resource Management
All rendering needs resources, whether it’s geometry, textures, fonts, whatever.
It’s important to manage the loading, re-use and unloading of these things
carefully, so that’s what classes in this area do.
Rendering
Finally, there's getting the visuals on the screen - this is about the lower-level
end of the rendering pipeline, the specific rendering system API objects like
buffers, render states and the like and pushing it all down the pipeline. Classes
in the Scene Management subsystem use this to get their higher-level scene
information onto the screen.
You’ll notice that scattered around the edge are a number of plugins. OGRE is designed
to be extended, and plugins are the usual way to go about it. Many of the classes in OGRE
can be subclassed and extended, whether it’s changing the scene organisation through a
custom SceneManager, adding a new render system implementation (e.g. Direct3D or
OpenGL), or providing a way to load resources from another source (say from a web location
or a database). Again this is just a small smattering of the kinds of things plugins can do,
but as you can see they can plug in to almost any aspect of the system. This way, OGRE
isn’t just a solution for one narrowly defined problem, it can extend to pretty much anything
you need it to do.
The root object lets you configure the system, for example through the showConfigDialog()
method, an extremely handy method which performs all render system options
detection and shows a dialog for the user to customise resolution, colour depth, full screen
options etc. It also sets the options the user selects so that you can initialise the system
directly afterwards.
The root object is also your method for obtaining pointers to other objects in the system,
such as the SceneManager, RenderSystem and various other resource managers. See below
for details.
Finally, if you run OGRE in continuous rendering mode, i.e. you want to always refresh
all the rendering targets as fast as possible (the norm for games and demos, but not for
windowed utilities), the root object has a method called startRendering, which when called
will enter a continuous rendering loop which will only end when all rendering windows are
closed, or any FrameListener objects indicate that they want to stop the cycle (see below
for details of FrameListener objects).
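Putting that together, a minimal start-up sequence looks something like the following sketch
(plugin/resource configuration, scene setup and error handling are omitted; the file and
window names are illustrative):

#include <Ogre.h>

int main()
{
    Ogre::Root* root = new Ogre::Root("plugins.cfg");

    // Detect render system options and let the user pick settings;
    // returns false if the user cancels the dialog
    if (!root->showConfigDialog())
    {
        delete root;
        return 1;
    }

    // Initialise the system and auto-create a render window from those settings
    Ogre::RenderWindow* window = root->initialise(true, "My OGRE Application");
    (void)window; // a SceneManager, Camera and Viewport would be created here

    // Continuous rendering loop; returns when all render windows are closed
    // or a FrameListener asks for the loop to stop
    root->startRendering();

    delete root;
    return 0;
}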
The RenderSystem object is an abstraction of the underlying 3D rendering API; it is
responsible for sending rendering operations to that API and setting all the various
rendering options. This class is abstract because all the implementation is rendering API
specific - there are API-specific subclasses for each rendering
API (e.g. D3DRenderSystem for Direct3D). After the system has been initialised through
Root::initialise, the RenderSystem object for the selected rendering API is available via the
Root::getRenderSystem() method.
However, a typical application should not normally need to manipulate the RenderSys-
tem object directly - everything you need for rendering objects and customising settings
should be available on the SceneManager, Material and other scene-oriented classes. It’s
only if you want to create multiple rendering windows (completely separate windows in this
case, not multiple viewports like a split-screen effect which is done via the RenderWindow
class) or access other advanced features that you need access to the RenderSystem object.
For this reason I will not discuss the RenderSystem object further in these tutorials. You
can assume the SceneManager handles the calls to the RenderSystem at the appropriate
times.
It is to the SceneManager that you go when you want to create a camera for the scene.
It’s also where you go to retrieve or to remove a light from the scene. There is no need for
your application to keep lists of objects, the SceneManager keeps a named set of all of the
scene objects for you to access, should you need them. Look in the main documentation
under the getCamera, getLight, getEntity etc methods.
The SceneManager also sends the scene to the RenderSystem object when it is time to
render the scene. You never have to call the SceneManager::_renderScene method directly
though - it is called automatically whenever a rendering target is asked to update.
So most of your interaction with the SceneManager is during scene setup. You’re likely to
call a great number of methods (perhaps driven by some input file containing the scene data)
in order to set up your scene. You can also modify the contents of the scene dynamically
during the rendering cycle if you create your own FrameListener object (see later).
Because different scene types require very different algorithmic approaches to deciding
which objects get sent to the RenderSystem in order to attain good rendering performance,
the SceneManager class is designed to be subclassed for different scene types. The default
SceneManager object will render a scene, but it does little or no scene organisation and
you should not expect the results to be high performance in the case of large scenes. The
intention is that specialisations will be created for each type of scene such that under
the surface the subclass will optimise the scene organisation for best performance given
assumptions which can be made for that scene type. An example is the BspSceneManager
which optimises rendering for large indoor levels based on a Binary Space Partition (BSP)
tree.
The application using OGRE does not have to know which subclasses are available.
The application simply calls Root::createSceneManager(..) passing as a parameter one of a
number of scene types (e.g. ST_GENERIC, ST_INTERIOR etc). OGRE will automatically
use the best SceneManager subclass available for that scene type, or default to the basic
SceneManager if a specialist one is not available. This allows the developers of OGRE to
add new scene specialisations later and thus optimise previously unoptimised scene types
without the user applications having to change any code.
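For example (a sketch; it assumes the Root pointer from the start-up example, and the
instance name is arbitrary):

// Ask for the most appropriate SceneManager for a generic scene; OGRE will
// substitute a specialised implementation if a suitable plugin is registered
Ogre::SceneManager* sceneMgr = root->createSceneManager(Ogre::ST_GENERIC, "MyScene");

// An indoor-optimised manager (e.g. the BSP plugin) would be requested with
// Ogre::ST_INTERIOR instead.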
ResourceManagers ensure that resources are only loaded once and shared throughout
the OGRE engine. They also manage the memory requirements of the resources they look
after. They can also search in a number of locations for the resources they need, including
multiple search paths and compressed archives (ZIP files).
Most of the time you won’t interact with resource managers directly. Resource managers
will be called by other parts of the OGRE system as required, for example when you request
for a texture to be added to a Material, the TextureManager will be called for you. If you
like, you can call the appropriate resource manager directly to preload resources (if for
example you want to prevent disk access later on) but most of the time it’s ok to let OGRE
decide when to do it.
One thing you will want to do is to tell the resource managers where to look for re-
sources. You do this via Root::getSingleton().addResourceLocation, which actually passes
the information on to ResourceGroupManager.
Because there is only ever 1 instance of each resource manager in the engine, if you do
want to get a reference to a resource manager use the following syntax:
TextureManager::getSingleton().someMethod()
MeshManager::getSingleton().someMethod()
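For example (a sketch; the paths and the 'General' group name are illustrative):

// Register places to search for resources; forwarded to ResourceGroupManager
Ogre::Root::getSingleton().addResourceLocation("media/materials", "FileSystem", "General");
Ogre::Root::getSingleton().addResourceLocation("media/models.zip", "Zip", "General");

// Parse scripts and create (but do not fully load) resource definitions
Ogre::ResourceGroupManager::getSingleton().initialiseAllResourceGroups();

// Direct use of a specific manager, should you need it
Ogre::TextureManager::getSingleton().setDefaultNumMipmaps(5);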
Mesh objects are a type of resource, and are managed by the MeshManager resource
manager. They are typically loaded from OGRE's custom object format, the '.mesh' format.
Mesh files are typically created by exporting from a modelling tool (see Section 4.1
[Exporters], page 158) and can be manipulated with various tools (see Chapter 4
[Mesh Tools], page 158).
You can also create Mesh objects manually by calling the MeshManager::createManual
method. This way you can define the geometry yourself, but this is outside the scope of
this manual.
Mesh objects are the basis for the individual movable objects in the world, which are
called entities (see Section 2.6 [Entities], page 11).
Mesh objects can also be animated using skeletal animation (see Section 8.1 [Skeletal
Animation], page 197).
2.6 Entities
An entity is an instance of a movable object in the scene. It could be a car, a person, a
dog, a shuriken, whatever. The only assumption is that it does not necessarily have a fixed
position in the world.
Entities are based on discrete meshes, i.e. collections of geometry which are self-contained
and typically fairly small on a world scale, which are represented by the Mesh object.
Multiple entities can be based on the same mesh, since often you want to create multiple
copies of the same type of object in a scene.
Entities are not deemed to be a part of the scene until you attach them to a SceneNode
(see the section below). By attaching entities to SceneNodes, you can create complex hier-
archical relationships between the positions and orientations of entities. You then modify
the positions of the nodes to indirectly affect the entity positions.
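For example (a sketch; 'robot.mesh' stands in for any exported mesh, and sceneMgr is the
SceneManager obtained earlier):

// Create an entity from a mesh and attach it to a child of the root scene node
Ogre::Entity* robot = sceneMgr->createEntity("Robot1", "robot.mesh");
Ogre::SceneNode* node = sceneMgr->getRootSceneNode()->createChildSceneNode("RobotNode");
node->attachObject(robot);

// The entity has no position of its own; move and rotate it via its node
node->setPosition(0.0f, 0.0f, -300.0f);
node->yaw(Ogre::Degree(45));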
To understand how this works, you have to know that all Mesh objects are actually
composed of SubMesh objects, each of which represents a part of the mesh using one
Material. If a Mesh uses only one Material, it will only have one SubMesh.
2.7 Materials
The Material object controls how objects in the scene are rendered. It specifies what basic
surface properties objects have such as reflectance of colours, shininess etc, how many
texture layers are present, what images are on them and how they are blended together,
what special effects are applied such as environment mapping, what culling mode is used,
how the textures are filtered etc.
Basically everything about the appearance of an object apart from its shape is controlled
by the Material class.
The SceneManager class manages the master list of materials available to the scene.
The list can be added to by the application by calling SceneManager::createMaterial, or
by loading a Mesh (which will in turn load material properties). Whenever materials are
added to the SceneManager, they start off with a default set of properties, as defined
by OGRE.
Entities automatically have Materials associated with them if they use a Mesh object,
since the Mesh object typically sets up its required materials on loading. You can also
customise the material used by an entity as described in Section 2.6 [Entities], page 11.
Just create a new Material, set it up how you like (you can copy an existing material into
it if you like using a standard assignment statement) and point the SubEntity entries at it
using SubEntity::setMaterialName().
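A sketch of that workflow (the material names are illustrative, the 'robot' entity comes
from the Entities example above, and the source material is assumed to define at least one
technique and pass):

// Copy an existing material, tweak it, then point a SubEntity at the copy
Ogre::MaterialPtr base = Ogre::MaterialManager::getSingleton().getByName("Examples/BaseWall");
Ogre::MaterialPtr mine = base->clone("MyRedWall");
mine->getTechnique(0)->getPass(0)->setDiffuse(1.0f, 0.2f, 0.2f, 1.0f);

robot->getSubEntity(0)->setMaterialName("MyRedWall");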
2.8 Overlays
Overlays allow you to render 2D and 3D elements on top of the normal scene contents to
create effects like heads-up displays (HUDs), menu systems, status panels etc. The frame
rate statistics panel which comes as standard with OGRE is an example of an overlay.
Overlays can contain 2D or 3D elements. 2D elements are used for HUDs, and 3D elements
can be used to create cockpits or any other 3D object which you wish to be rendered on
top of the rest of the scene.
You can create overlays either through the SceneManager::createOverlay method, or you
can define them in an .overlay script. In reality the latter is likely to be the most practical
because it is easier to tweak (without the need to recompile the code). Note that you can
define as many overlays as you like: they all start off life hidden, and you display them by
calling their ’show()’ method. You can also show multiple overlays at once, and their Z
order is determined by the Overlay::setZOrder() method.
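For example (a sketch; it assumes an overlay called 'MyGame/HUD' has been defined in an
.overlay script):

Ogre::Overlay* hud = Ogre::OverlayManager::getSingleton().getByName("MyGame/HUD");
if (hud)
{
    hud->setZOrder(500); // higher Z order overlays render on top of lower ones
    hud->show();         // overlays start off hidden until shown
}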
Creating 2D Elements
The OverlayElement class abstracts the details of 2D elements which are added to overlays.
All items which can be added to overlays are derived from this class. It is possible (and
encouraged) for users of OGRE to define their own custom subclasses of OverlayElement in
order to provide their own user controls. The key common features of all OverlayElements
are things like size, position, basic material name etc. Subclasses extend this behaviour to
include more complex properties and behaviour.
OverlayElements are created through the OverlayManager, naming the type of element you
want. For example, to create a panel (a plain rectangular area which can contain other
OverlayElements) you would
call OverlayManager::getSingleton().createOverlayElement("Panel", "myNewPanel");
Note that when using relative dimensions the screen is treated as a 1 x 1 square even though
it is not physically square, so to get square-looking areas you will have to compensate using
the typical aspect ratio, e.g. use (0.1875, 0.25) instead of (0.25, 0.25).
Transforming Overlays
Another nice feature of overlays is being able to rotate, scroll and scale them as a whole.
You can use this for zooming in / out menu systems, dropping them in from off screen and
other nice effects. See the Overlay::scroll, Overlay::rotate and Overlay::scale methods for
more information.
Scripting overlays
Overlays can also be defined in scripts. See Section 3.4 [Overlay Scripts], page 144 for
details.
GUI systems
Overlays are only really designed for non-interactive screen elements, although you can
use them as a crude GUI. For a far more complete GUI solution, we recommend CEGui
(http://www.cegui.org.uk), as demonstrated in the sample Demo Gui.
3 Scripts
OGRE drives many of its features through scripts in order to make it easier to set up.
The scripts are simply plain text files which can be edited in any standard text editor, and
modifying them immediately takes effect on your OGRE-based applications, without any
need to recompile. This makes prototyping a lot faster. Here are the items that OGRE lets
you script:
• Section 3.1 [Material Scripts], page 16
• Section 3.2 [Compositor Scripts], page 106
• Section 3.3 [Particle Scripts], page 121
• Section 3.4 [Overlay Scripts], page 144
• Section 3.5 [Font Definition Scripts], page 155
Loading scripts
Material scripts are loaded when resource groups are initialised: OGRE looks in all re-
source locations associated with the group (see Root::addResourceLocation) for files with
the '.material' extension and parses them. If you want to parse files manually, use
MaterialSerializer::parseScript.
It’s important to realise that materials are not loaded completely by this parsing process:
only the definition is loaded, no textures or other resources are loaded. This is because it is
common to have a large library of materials, but only use a relatively small subset of them
in any one scene. To load every material completely in every script would therefore cause
unnecessary memory overhead. You can access a ’deferred load’ Material in the normal
way (MaterialManager::getSingleton().getByName()), but you must call the ’load’ method
before trying to use it. Ogre does this for you when using the normal material assignment
methods of entities etc.
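For example, using the material defined a little further below (a sketch; normally the load
happens for you when the material is assigned to something):

// The definition exists after parsing, but its textures are not yet loaded
Ogre::MaterialPtr mat = Ogre::MaterialManager::getSingleton().getByName("walls/funkywall1");
if (!mat.isNull())
    mat->load(); // explicit 'deferred load'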
Another important factor is that material names must be unique throughout ALL scripts
loaded by the system, since materials are always identified by name.
Format
Several materials may be defined in a single script. The script format is pseudo-C++, with
sections delimited by curly braces ('{', '}'), and comments indicated by starting a line with
'//' (note, there is no nested or block comment form). The general format is shown in the
example below (note that to start with, we only consider fixed-function materials which
don’t use vertex, geometry or fragment programs, these are covered later):
// This is a comment
material walls/funkywall1
{
    // first, preferred technique
    technique
    {
        // first pass
        pass
        {
            ambient 0.5 0.5 0.5
            diffuse 1.0 1.0 1.0
            // Texture unit 0
            texture_unit
            {
                texture wibbly.jpg
                scroll_anim 0.1 0.0
                wave_xform scale sine 0.0 0.7 0.0 1.0
            }
            // Texture unit 1 (this is a multitexture pass)
            texture_unit
            {
                texture wobbly.png
                rotate_anim 0.25
                colour_op add
            }
        }
    }
}
Every material in the script must be given a name, which is the line ’material <blah>’
before the first opening '{'. This name must be globally unique. It can include path characters
(as in the example) to logically divide up your materials, and also to avoid duplicate names,
but the engine does not treat the name as hierarchical, just as a string. If you include
spaces in the name, it must be enclosed in double quotes.
NOTE: ’:’ is the delimiter for specifying material copy in the script so it can’t be used
as part of the material name.
A material can inherit from a previously defined material by using a colon : after the
material name followed by the name of the reference material to inherit from. You can in
fact even inherit just parts of a material from others; all this is covered in Section 3.1.11
[Script Inheritance], page 96. You can also use variables in your script which can be
replaced in inheriting versions; see Section 3.1.13 [Script Variables], page 104.
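For instance, a variant that inherits everything from the example material above and
overrides only one texture could be written like this (illustrative; see Section 3.1.11 [Script
Inheritance], page 96 for the full rules):

material walls/funkywall2 : walls/funkywall1
{
    technique
    {
        pass
        {
            texture_unit
            {
                // override just this texture; all other settings are inherited
                texture another_wall.jpg
            }
        }
    }
}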
A material can be made up of many techniques (See Section 3.1.1 [Techniques], page 21)-
a technique is one way of achieving the effect you are looking for. You can supply more than
one technique in order to provide fallback approaches where a card does not have the ability
to render the preferred technique, or where you wish to define lower level of detail versions
of the material in order to conserve rendering power when objects are more distant.
Each technique can be made up of many passes (See Section 3.1.2 [Passes], page 24), that
is a complete render of the object can be performed multiple times with different settings
in order to produce composite effects. Ogre may also split the passes you have defined
into many passes at runtime, if you define a pass which uses too many texture units for
the card you are currently running on (note that it can only do this if you are not using a
fragment program). Each pass has a number of top-level attributes such as ’ambient’ to set
the amount & colour of the ambient light reflected by the material. Some of these options
do not apply if you are using vertex programs, See Section 3.1.2 [Passes], page 24 for more
details.
Within each pass, there can be zero or many texture units in use (See Section 3.1.3
[Texture Units], page 45). These define the texture to be used, and optionally some blending
operations (which use multitexturing) and texture effects.
You can also reference vertex and fragment programs (or vertex and pixel shaders, if
you want to use that terminology) in a pass with a given set of parameters. Programs
themselves are declared in separate .program scripts (See Section 3.1.4 [Declaring Ver-
tex/Geometry/Fragment Programs], page 63) and are used as described in Section 3.1.9
[Using Vertex/Geometry/Fragment Programs in a Pass], page 79.
lod_strategy
Sets the name of the LOD strategy to use. Defaults to ’Distance’ which means LOD changes
based on distance from the camera. Also supported is ’PixelCount’ which changes LOD
based on an estimate of the screen-space pixels affected.
lod_values
This attribute defines the values used to control the LOD transition for this material. By
setting this attribute, you indicate that you want this material to alter the Technique that
it uses based on some metric, such as the distance from the camera, or the approximate
screen space coverage. The exact meaning of these values is determined by the option you
select for [lod_strategy], page 19 - it is a list of distances for the 'Distance' strategy, and
a list of pixel counts for the ’PixelCount’ strategy, for example. You must give it a list of
values, in order from highest LOD value to lowest LOD value, each one indicating the point
at which the material will switch to the next LOD. Implicitly, all materials activate LOD
index 0 for values less than the first entry, so you do not have to specify ’0’ at the start of
the list. You must ensure that there is at least one Technique with a [lod_index], page 22
value for each value in the list (so if you specify 3 values, you must have techniques for LOD
indexes 0, 1, 2 and 3). Note you must always have at least one Technique at lod_index 0.
Example:
lod_strategy Distance
lod_values 300.0 600.5 1200
The above example would cause the material to use the best Technique at lod_index 0
up to a distance of 300 world units, the best from lod_index 1 from 300 up to 600, lod_index
2 from 600 to 1200, and lod_index 3 from 1200 upwards.
receive_shadows
This attribute controls whether objects using this material can have shadows cast upon
them.
Whether or not an object casts a shadow is the combination of a number of factors, See
Chapter 7 [Shadows], page 180 for full details; however this allows you to make a transparent
material cast shadows, when it would otherwise not. For example, when using texture
shadows, transparent materials are normally not rendered into the shadow texture because
they should not block light. This flag overrides that.
set_texture_alias
This attribute can be used to set the textures used in texture unit states that were
inherited from another material (see Section 3.1.12 [Texture Aliases], page 100).
3.1.1 Techniques
A "technique" section in your material script encapsulates a single method of rendering an
object. The simplest of material definitions only contains a single technique; however, since
PC hardware varies quite greatly in its capabilities, you can only do this if you are sure
that every card for which you intend to target your application will support the capabilities
which your technique requires. In addition, it can be useful to define simpler ways to render
a material if you wish to use material LOD, such that more distant objects use a simpler,
less performance-hungry technique.
When a material is used for the first time, it is ’compiled’. That involves scanning the
techniques which have been defined, and marking which of them are supportable using the
current rendering API and graphics card. If no techniques are supportable, your material
will render as blank white. The compilation examines a number of things, such as:
• The number of texture unit entries in each pass
Note that if the number of texture unit entries exceeds the number of texture units in
the current graphics card, the technique may still be supportable so long as a fragment
program is not being used. In this case, Ogre will split the pass which has too many
entries into multiple passes for the less capable card, and the multitexture blend will
be turned into a multipass blend (See [colour_op_multipass_fallback], page 58).
• Whether vertex, geometry or fragment programs are used, and if so which syntax they
use (e.g. vs_1_1, ps_2_x, arbfp1 etc.)
• Other effects like cube mapping and dot3 blending
• Whether the vendor or device name of the current graphics card matches some user-
specified rules
In a material script, techniques must be listed in order of preference, i.e. the earlier tech-
niques are preferred over the later techniques. This normally means you will list your most
advanced, most demanding techniques first in the script, and list fallbacks afterwards.
To help clearly identify what each technique is used for, the technique can be named
but this is optional. Techniques not named within the script will take on a name that is the
technique index number. For example: the first technique in a material is index 0, its name
would be "0" if it was not given a name in the script. The technique name must be unique
within the material or else the final technique is the resulting merge of all techniques with
the same name in the material. A warning message is posted in the Ogre.log if this occurs.
Named techniques can help when inheriting a material and modifying an existing technique:
(see Section 3.1.11 [Script Inheritance], page 96). Techniques have only a small number of
attributes of their own:
• [scheme], page 22
• [lod_index], page 22 (and also see [lod_values], page 19 in the parent material)
• [shadow_caster_material], page 23
• [shadow_receiver_material], page 23
• [gpu_vendor_rule], page 23
• [gpu_device_rule], page 23
scheme
Sets the ’scheme’ this Technique belongs to. Material schemes are used to control top-
level switching from one set of techniques to another. For example, you might use this
to define ’high’, ’medium’ and ’low’ complexity levels on materials to allow a user to pick
a performance / quality ratio. Another possibility is that you have a fully HDR-enabled
pipeline for top machines, rendering all objects using unclamped shaders, and a simpler
pipeline for others; this can be implemented using schemes. The active scheme is typically
controlled at a viewport level, and the active one defaults to ’Default’.
lod_index
Sets the level-of-detail (LOD) index this Technique belongs to.
All techniques must belong to a LOD index, by default they all belong to index 0, i.e.
the highest LOD. Increasing indexes denote lower levels of detail. You can (and often will)
assign more than one technique to the same LOD index, what this means is that OGRE
will pick the best technique of the ones listed at the same LOD index. For readability, it is
advised that you list your techniques in order of LOD, then in order of preference, although
the latter is the only prerequisite (OGRE determines which one is ’best’ by which one is
listed first). You must always have at least one Technique at lod_index 0.
The distance at which a LOD level is applied is determined by the lod_values attribute
of the containing material; see [lod_values], page 19 for details.
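For example, a material with a cheaper technique for distant objects might be structured
like this (names and values are illustrative):

material Examples/TwoDetailLevels
{
    lod_values 500

    technique
    {
        lod_index 0
        pass
        {
            diffuse 1.0 1.0 1.0
        }
    }

    technique
    {
        lod_index 1
        pass
        {
            // cheaper rendering used beyond 500 world units
            lighting off
        }
    }
}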
Techniques also contain one or more passes (and there must be at least one), See
Section 3.1.2 [Passes], page 24.
gpu_vendor_rule and gpu_device_rule
An 'include' rule means that the technique will only be supported if one of the include rules
is matched (if no include rules are provided, anything will pass). An 'exclude' rule means
that the technique is considered unsupported if any of the exclude rules are matched. You
can provide as many rules as you like, although <vendor_name> and <device_pattern> must
obviously be unique. The valid list of <vendor_name> values is currently 'nvidia', 'ati',
'intel', 's3', 'matrox' and '3dlabs'. <device_pattern> can be any string, and you can use
wildcards ('*') if you need to match variants. Here's an example:
gpu_vendor_rule include nvidia
gpu_vendor_rule include intel
gpu_device_rule exclude *950*
These rules, if all included in one technique, will mean that the technique will only be
considered supported on graphics cards made by NVIDIA and Intel, and so long as the
device name doesn't have '950' in it.
Note that these rules can only mark a technique ’unsupported’ when it would otherwise
be considered ’supported’ judging by the hardware capabilities. Even if a technique passes
these rules, it is still subject to the usual hardware support tests.
3.1.2 Passes
A pass is a single render of the geometry in question; a single call to the rendering API
with a certain set of rendering properties. A technique can have between one and 16 passes,
although clearly the more passes you use, the more expensive the technique will be to render.
To help clearly identify what each pass is used for, the pass can be named, but this is optional.
Passes not named within the script will take on a name that is the pass index number. For
example: the first pass in a technique is index 0 so its name would be "0" if it was not given
a name in the script. The pass name must be unique within the technique or else the final
pass is the resulting merge of all passes with the same name in the technique. A warning
message is posted in the Ogre.log if this occurs. Named passes can help when inheriting a
material and modifying an existing pass: (See Section 3.1.11 [Script Inheritance], page 96)
Passes have a set of global attributes (described below), zero or more nested texture unit
entries (See Section 3.1.3 [Texture Units], page 45), and optionally a reference to a vertex and
/ or a fragment program (See Section 3.1.9 [Using Vertex/Geometry/Fragment Programs
in a Pass], page 79).
Here are the attributes you can use in a ’pass’ section of a .material script:
• [ambient], page 25
• [diffuse], page 26
• [specular], page 26
• [emissive], page 27
• [scene_blend], page 28
• [separate_scene_blend], page 29
• [scene_blend_op], page 30
Attribute Descriptions
ambient
Sets the ambient colour reflectance properties of this pass. This attribute has no effect if an
asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL
material state.
The base colour of a pass is determined by how much red, green and blue light it reflects
at each vertex. This property determines how much ambient light (directionless global light)
is reflected. It is also possible to make the ambient reflectance track the vertex colour as
defined in the mesh by using the keyword vertexcolour instead of the colour values. The
default is full white, meaning objects are completely globally illuminated. Reduce this if
you want to see diffuse or specular light effects, or change the blend of colours to make the
object have a base colour other than white. This setting has no effect if dynamic lighting is
disabled using the 'lighting off' attribute, or if any texture layer has a 'colour_op replace'
attribute.
diffuse
Sets the diffuse colour reflectance properties of this pass. This attribute has no effect if an
asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL
material state.
The base colour of a pass is determined by how much red, green and blue light it reflects
at each vertex. This property determines how much diffuse light (light from instances of
the Light class in the scene) is reflected. It is also possible to make the diffuse reflectance
track the vertex colour as defined in the mesh by using the keyword vertexcolour instead
of the colour values. The default is full white, meaning objects reflect the maximum white
light they can from Light objects. This setting has no effect if dynamic lighting is disabled
using the 'lighting off' attribute, or if any texture layer has a 'colour_op replace' attribute.
specular
Sets the specular colour reflectance properties of this pass. This attribute has no effect if an
asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL
material state.
The base colour of a pass is determined by how much red, green and blue light it reflects
at each vertex. This property determines how much specular light (highlights from instances
of the Light class in the scene) is reflected. It is also possible to make the specular reflectance
track the vertex colour as defined in the mesh by using the keyword vertexcolour instead
of the colour values. The default is to reflect no specular light. The colour of the specular
highlights is determined by the colour parameters, and the size of the highlights by the
separate shininess parameter. The higher the value of the shininess parameter, the sharper
the highlight, i.e. the radius is smaller. Beware of using shininess values in the range of 0 to
1 since this causes the specular colour to be applied to the whole surface that has the
material applied to it. When the viewing angle to the surface changes, ugly flickering will
also occur when shininess is in the range of 0 to 1. Shininess values between 1 and 128 work
best in both DirectX and OpenGL renderers. This setting has no effect if dynamic lighting
is disabled using the 'lighting off' attribute, or if any texture layer has a 'colour_op replace'
attribute.
emissive
Sets the amount of self-illumination an object has. This attribute has no effect if an asm, CG,
or HLSL shader program is used. With GLSL, the shader can read the OpenGL material
state.
If an object is self-illuminating, it does not need external sources to light it, ambient
or otherwise. It's like the object has its own personal ambient light. Contrary to what the
name suggests, this object doesn't act as a light source for other objects in the scene (if you want
it to, you have to create a light which is centered on the object). It is also possible to make
the emissive colour track the vertex colour as defined in the mesh by using the keyword
vertexcolour instead of the colour values. This setting has no effect if dynamic lighting is
disabled using the 'lighting off' attribute, or if any texture layer has a 'colour_op replace'
attribute.
scene_blend
Sets the kind of blending this pass has with the existing contents of the scene. Whereas the
texture blending operations seen in the texture unit entries are concerned with blending
between texture layers, this blending is about combining the output of this pass as a whole
with the existing contents of the rendering target. This blending therefore allows object
transparency and other special effects. There are 2 formats, one using predefined blend
types, the other allowing a roll-your-own approach using source and destination factors.
Format: scene_blend <blend_type>
This is the simpler form, where the most commonly used blending modes are enumerated
using a single parameter. Valid <blend_type> parameters are:
add          The colour of the rendering output is added to the scene. Good for explosions,
             flares, lights, ghosts etc. Equivalent to 'scene_blend one one'.
modulate     The colour of the rendering output is multiplied with the scene contents.
             Generally colours and darkens the scene, good for smoked glass, semi-transparent
             objects etc. Equivalent to 'scene_blend dest_colour zero'.
colour_blend Colour the scene based on the brightness of the input colours, but don't darken.
             Equivalent to 'scene_blend src_colour one_minus_src_colour'.
alpha_blend  The alpha value of the rendering output is used as a mask. Equivalent to
             'scene_blend src_alpha one_minus_src_alpha'.
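For example, a pass that should be drawn with standard alpha transparency can simply
declare:

scene_blend alpha_blend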
Format: scene_blend <src_factor> <dest_factor>
This version of the method allows complete control over the blending operation, by
specifying the source and destination blending factors. The resulting colour which is written
to the rendering target is (texture * sourceFactor) + (scene pixel * destFactor). Valid values
for both parameters are:
one                    Constant value of 1.0
zero                   Constant value of 0.0
dest_colour            The existing pixel colour
src_colour             The texture pixel (texel) colour
one_minus_dest_colour  1 - (dest_colour)
one_minus_src_colour   1 - (src_colour)
dest_alpha             The existing pixel alpha value
src_alpha              The texel alpha value
one_minus_dest_alpha   1 - (dest_alpha)
one_minus_src_alpha    1 - (src_alpha)
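Written out with explicit factors, the same conventional alpha blending is equivalent to:

scene_blend src_alpha one_minus_src_alpha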
separate_scene_blend
This option operates like scene_blend (see above), but allows you to specify the blending
modes or factors separately for the colour and alpha channels.
Format1: separate_scene_blend <simple_colour_blend> <simple_alpha_blend>
Example: separate_scene_blend add modulate
This example would add colour components but multiply alpha components. The blend
modes available are as in [scene_blend], page 28. The more advanced form is also available:
Format2: separate_scene_blend <colour_src_factor> <colour_dest_factor>
<alpha_src_factor> <alpha_dest_factor>
Example: separate_scene_blend one one_minus_dest_alpha one one
Again the options available in the second format are the same as those in the second
format of [scene_blend], page 28.
scene_blend_op
This directive changes the operation which is applied between the two components of the
scene blending equation, which by default is ’add’ (sourceFactor * source + destFactor *
dest). You may change this to 'add', 'subtract', 'reverse_subtract', 'min' or 'max'.
depth_check
Sets whether or not this pass renders with depth-buffer checking on or not.
depth_write
Sets whether or not this pass renders with depth-buffer writing on or not.
If depth-buffer writing is on, whenever a pixel is written to the frame buffer the depth
buffer is updated with the depth value of that new pixel, thus affecting future rendering
operations if future pixels are behind this one. If depth writing is off, pixels are written
without updating the depth buffer. Depth writing should normally be on but can be turned
off when rendering static backgrounds or when rendering a collection of transparent objects
at the end of a scene so that they overlap each other correctly.
depth_func
Sets the function used to compare depth values when depth checking is on.
If depth checking is enabled (see depth_check) a comparison occurs between the depth
value of the pixel to be written and the current contents of the buffer. This comparison is
normally less_equal, i.e. the pixel is written if it is closer (or at the same distance) than the
current contents. The possible functions are:
always_fail
           Never writes a pixel to the render target
always_pass
           Always writes a pixel to the render target
less       Write if (new Z < existing Z)
greater_equal
Write if (new Z >= existing Z)
depth_bias
Sets the bias applied to the depth value of this pass. Can be used to make coplanar polygons
appear on top of others e.g. for decals.
alpha_rejection
Sets the way the pass will use alpha to totally reject pixels from the pipeline.
The function parameter can be any of the options listed in the depth_func attribute above.
The value parameter can theoretically be any value between 0 and 255, but is
best limited to 0 or 128 for hardware compatibility.
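For example, to discard any pixel whose alpha value is below 128 (the function names are
the same as those of depth_func):

alpha_rejection greater_equal 128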
alpha_to_coverage
Sets whether this pass will use ’alpha to coverage’, a way to multisample alpha texture
edges so they blend more seamlessly with the background. This facility is typically only
available on cards from around 2006 onwards, but it is safe to enable it anyway - Ogre will
just ignore it if the hardware does not support it. The common use for alpha to coverage
is foliage rendering and chain-link fence style textures.
light_scissor
Sets whether when rendering this pass, rendering will be limited to a screen-space scissor
rectangle representing the coverage of the light(s) being used in this pass, derived from their
attenuation ranges.
This option is usually only useful if this pass is an additive lighting pass, and is at least
the second one in the technique, i.e. areas which are not affected by the current light(s) will
never need to be rendered. If there is more than one light being passed to the pass, then
the scissor is defined to be the rectangle which covers all lights in screen-space. Directional
lights are ignored since they are infinite.
This option does not need to be specified if you are using a standard additive
shadow mode, i.e. SHADOWTYPE_STENCIL_ADDITIVE or
SHADOWTYPE_TEXTURE_ADDITIVE, since it is the default behaviour to use a scissor for each
additive shadow pass. However, if you’re not using shadows, or you’re using [Integrated
Texture Shadows], page 189 where passes are specified in a custom manner, then this
could be of use to you.
light_clip_planes
Sets whether, when rendering this pass, geometry will be clipped by clip planes bounding
the area affected by the light(s) used in the pass.
This option will only function if there is a single non-directional light being used in this
pass. If there is more than one light, or only directional lights, then no clipping will occur.
If there are no lights at all then the objects won’t be rendered at all.
A specific note about OpenGL: user clip planes are completely ignored when you use
an ARB vertex program. This means light clip planes won’t help much if you use ARB
vertex programs on GL, although OGRE will perform some optimisation of its own, in
that if it sees that the clip volume is completely off-screen, it won’t perform a render at
all. When using GLSL, user clipping can be used but you have to use glClipVertex in your
shader, see the GLSL documentation for more information. In Direct3D user clip planes
are always respected.
illumination_stage
When using an additive lighting mode (SHADOWTYPE_STENCIL_ADDITIVE or
SHADOWTYPE_TEXTURE_ADDITIVE), the scene is rendered in 3 discrete stages, ambient (or
pre-lighting), per-light (once per light, with shadowing) and decal (or post-lighting). Usually
OGRE figures out how to categorise your passes automatically, but there are some effects
you cannot achieve without manually controlling the illumination. For example specular
effects are muted by the typical sequence because all textures are saved until the ’decal’
stage which mutes the specular effect. Instead, you could do texturing within the per-light
stage if it’s possible for your material and thus add the specular on after the decal texturing,
and have no post-light rendering.
If you assign an illumination stage to a pass you have to assign it to all passes in the
technique otherwise it will be ignored. Also note that whilst you can have more than one
pass in each group, they cannot alternate, i.e. all ambient passes will be before all per-light
passes, which will also be before all decal passes. Within their categories the passes will
retain their ordering though.
normalise_normals
Sets whether or not this pass renders with all vertex normals being automatically re-
normalised.
Scaling objects causes normals to also change magnitude, which can throw off your
lighting calculations. By default, the SceneManager detects this and will automatically re-
normalise normals for any scaled object, but this has a cost. If you’d prefer to control this
manually, call SceneManager::setNormaliseNormalsOnScale(false) and then use this option
on materials which are sensitive to normals being resized.
transparent_sorting
Sets if transparent textures should be sorted by depth or not.
By default all transparent materials are sorted such that renderables furthest away from
the camera are rendered first. This is usually the desired behaviour but in certain cases this
depth sorting may be unnecessary and undesirable - if, for example, it is necessary to ensure
the rendering order does not change from one frame to the next. In this case you could set
the value to ’off’ to prevent sorting.
You can also use the keyword ’force’ to force transparent sorting on, regardless of other
circumstances. Usually sorting is only used when the pass is also transparent, and has a
depth write or read which indicates it cannot reliably render without sorting. By using
’force’, you tell OGRE to sort this pass no matter what other circumstances are present.
cull_hardware
Sets the hardware culling mode for this pass.
A typical way for the hardware rendering engine to cull triangles is based on the ’vertex
winding’ of triangles. Vertex winding refers to the direction in which the vertices are
passed or indexed to in the rendering operation as viewed from the camera, and will wither
be clockwise or anticlockwise (that’s ’counterclockwise’ for you Americans out there ;).
If the option ’cull hardware clockwise’ is set, all triangles whose vertices are viewed in
clockwise order from the camera will be culled by the hardware. ’anticlockwise’ is the
reverse (obviously), and ’none’ turns off hardware culling so all triagles are rendered (useful
for creating 2-sided passes).
cull_software
Sets the software culling mode for this pass.
In some situations the engine will also cull geometry in software before sending it to the
hardware renderer. This setting only takes effect on SceneManager’s that use it (since it is
best used on large groups of planar world geometry rather than on movable geometry since
this would be expensive), but if used can cull geometry before it is sent to the hardware.
In this case the culling is based on whether the ’back’ or ’front’ of the triangle is facing
the camera - this definition is based on the face normal (a vector which sticks out of the
front side of the polygon perpendicular to the face). Since Ogre expects face normals
to be on the anticlockwise side of the face, 'cull_software back' is the software equivalent of
the 'cull_hardware clockwise' setting, which is why they are both the default. The naming is
different to reflect the way the culling is done though, since most of the time face normals are
pre-calculated and they don't have to be the way Ogre expects - you could set 'cull_hardware
none’ and completely cull in software based on your own face normals, if you have the right
SceneManager which uses them.
lighting
Sets whether or not dynamic lighting is turned on for this pass or not. If lighting is turned
off, all objects rendered using the pass will be fully lit. This attribute has no effect if a
vertex program is used.
Turning dynamic lighting off makes any ambient, diffuse, specular, emissive and shading
properties for this pass redundant. When lighting is turned on, objects are lit according to
their vertex normals for diffuse and specular light, and globally for ambient and emissive.
Default: lighting on
shading
Sets the kind of shading which should be used for representing dynamic lighting for this
pass.
When dynamic lighting is turned on, the effect is to generate colour values at each vertex.
Whether these values are interpolated across the face (and how) depends on this setting.
flat No interpolation takes place. Each face is shaded with a single colour deter-
mined from the first vertex in the face.
gouraud Colour at each vertex is linearly interpolated across the face.
phong Vertex normals are interpolated across the face, and these are used to determine
colour at each pixel. Gives a more natural lighting effect but is more expensive
and works better at high levels of tessellation. Not supported on all hardware.
Default: shading gouraud
polygon_mode
Sets how polygons should be rasterised, i.e. whether they should be filled in, or just drawn
as lines or points.
fog_override
Tells the pass whether it should override the scene fog settings, and enforce its own. Very
useful for things that you don’t want to be affected by fog when the rest of the scene is
fogged, or vice versa. Note that this only affects fixed-function fog - the original scene
fog parameters are still sent to shaders which use the fog_params parameter binding (this
allows you to turn off fixed function fog and calculate it in the shader instead; if you want
to disable shader fog you can do that through shader parameters anyway).
If you specify ’true’ for the first parameter and you supply the rest of the parameters,
you are telling the pass to use these fog settings in preference to the scene settings, whatever
they might be. If you specify 'true' but provide no further parameters, you are telling this
pass to never use fogging no matter what the scene says. The remaining parameters work
like the scene-level fog settings: the fog type (none, linear, exp or exp2), the fog colour, and
the density, start and end distances.
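A sketch of what this can look like in a pass (the values here are arbitrary):
pass
{
    // replace the scene fog with white exponential fog for this pass only
    fog_override true exp 1 1 1 0.002 100 10000

    // or: ignore fog entirely for this pass
    // fog_override true
}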
colour write
Sets whether or not this pass writes colour output.
If colour writing is off no visible pixels are written to the screen during this pass. You
might think this is useless, but if you render with colour writing off, and with very minimal
other settings, you can use this pass to initialise the depth buffer before subsequently ren-
dering other passes which fill in the colour data. This can give you significant performance
boosts on some newer cards, especially when using complex fragment programs, because if
the depth check fails then the fragment program is never run.
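As a rough sketch of the depth-priming idea (the pass names and the contents of the second
pass are hypothetical):
pass depth_prime
{
    // lay down depth only; nothing visible is written
    colour_write off
    depth_write on
}

pass expensive_shading
{
    // only fragments that exactly match the primed depth are shaded
    depth_write off
    depth_func equal
    // expensive fragment program reference would go here
}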
start light
Sets the first light which will be considered for use with this pass.
You can use this attribute to offset the starting point of the lights for this pass. In other
words, if you set start light to 2 then the first light to be processed in that pass will be the
third actual light in the applicable list. You could use this option to use different passes to
process the first couple of lights versus the second couple of lights for example, or use it in
conjunction with the [iteration], page 41 option to start the iteration from a given point in
the list (e.g. doing the first 2 lights in the first pass, and then iterating every 2 lights from
then on perhaps).
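A minimal sketch of that idea (the attribute values are illustrative):
pass first_lights
{
    // handle only the first two lights
    max_lights 2
}

pass remaining_lights
{
    // skip the first two lights and iterate over the rest, one light per iteration
    start_light 2
    iteration once_per_light
    scene_blend add
}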
max lights
Sets the maximum number of lights which will be considered for use with this pass.
The maximum number of lights which can be used when rendering fixed-function ma-
terials is set by the rendering system, and is typically set at 8. When you are using the
programmable pipeline (See Section 3.1.9 [Using Vertex/Geometry/Fragment Programs in
a Pass], page 79) this limit is dependent on the program you are running, or, if you use
'iteration once per light' or a variant (See [iteration], page 41), it is effectively only bounded
by the number of passes you are willing to use. If you are not using pass iteration, the light
limit applies once for this pass. If you are using pass iteration, the light limit applies across
all iterations of this pass - for example if you have 12 lights in range with an ’iteration
once per light’ setup but your max lights is set to 4 for that pass, the pass will only iterate
4 times.
iteration
Sets whether or not this pass is iterated, i.e. issued more than once.
Examples:
iteration once
The pass is only executed once which is the default behaviour.
iteration once per light point
The pass is executed once for each point light.
iteration 5 The render state for the pass will be set up and then the draw call will execute
5 times.
iteration 5 per light point
The render state for the pass will be set up and then the draw call will execute
5 times. This will be done for each point light.
iteration 1 per n lights 2 point
The render state for the pass will be set up and the draw call executed once for
every 2 lights.
By default, passes are only issued once. However, if you use the programmable pipeline,
or you wish to exceed the normal limits on the number of lights which are supported, you
might want to use the once per light option. In this case, only light index 0 is ever used, and
the pass is issued multiple times, each time with a different light in light index 0. Clearly
this will make the pass more expensive, but it may be the only way to achieve certain effects
such as per-pixel lighting effects which take into account 1..n lights.
Using a number instead of "once" instructs the pass to iterate more than once after the
render state is setup. The render state is not changed after the initial setup so repeated
draw calls are very fast and ideal for passes using programmable shaders that must iterate
more than once with the same render state i.e. shaders that do fur, motion blur, special
filtering.
If you use once per light, you should also add an ambient pass to the technique before
this pass, otherwise when no lights are in range of this object it will not get rendered at
all; this is important even when you have no ambient light in the scene, because you would
still want the object's silhouette to appear (see the sketch below).
The lightType parameter to the attribute only applies if you use once per light,
per light, or per n lights and restricts the pass to being run for lights of a single type
(either ’point’, ’directional’ or ’spot’). In the example, the pass will be run once per point
light. This can be useful because when you’re writing a vertex / fragment program it is
a lot easier if you can assume the kind of lights you’ll be dealing with. However at least
point and directional lights can be dealt with in one way.
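Here is a minimal sketch of the ambient-base-plus-per-light arrangement described above
(the pass names are hypothetical and the colour values are arbitrary):
technique
{
    pass ambient_base
    {
        // renders even when no lights are in range
        ambient 1 1 1
        diffuse 0 0 0
        specular 0 0 0 0
    }

    pass per_light
    {
        // executed once for every light in range, added onto the base pass
        iteration once_per_light
        scene_blend add
        ambient 0 0 0
        diffuse 1 1 1
    }
}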
Example: Simple Fur shader material script that uses a second pass with 10 iterations
to grow the fur:
// GLSL simple Fur
vertex_program GLSLDemo/FurVS glsl
{
source fur.vert
default_params
{
param_named_auto lightPosition light_position_object_space 0
param_named_auto eyePosition camera_position_object_space
param_named_auto passNumber pass_number
param_named_auto multiPassNumber pass_iteration_number
param_named furLength float 0.15
}
}
material Fur
{
technique GLSL
{
pass base_coat
{
ambient 0.7 0.7 0.7
diffuse 0.5 0.8 0.5
specular 1.0 1.0 1.0 1.5
vertex_program_ref GLSLDemo/FurVS
{
}
fragment_program_ref GLSLDemo/FurFS
{
}
texture_unit
{
texture Fur.tga
tex_coord_set 0
filtering trilinear
}
}
pass grow_fur
{
ambient 0.7 0.7 0.7
diffuse 0.8 1.0 0.8
specular 1.0 1.0 1.0 64
depth_write off
vertex_program_ref GLSLDemo/FurVS
{
}
fragment_program_ref GLSLDemo/FurFS
{
}
texture_unit
{
texture Fur.tga
tex_coord_set 0
filtering trilinear
}
}
}
}
Note: use gpu program auto parameters [pass number], page 91 and
[pass iteration number], page 92 to tell the vertex, geometry or fragment pro-
gram the pass number and iteration number.
point size
This setting allows you to change the size of points when rendering a point list, or a list of
point sprites. The interpretation of this command depends on the [point size attenuation],
page 45 option - if it is off (the default), the point size is in screen pixels; if it is on, it is
expressed as normalised screen coordinates (1.0 is the height of the screen) when the point
is at the origin.
NOTE: Some drivers have an upper limit on the size of points they support - this can
even vary between APIs on the same card! Don’t rely on point sizes that cause the points
to get very large on screen, since they may get clamped on some cards. Upper sizes can
range from 64 to 256 pixels.
point sprites
This setting specifies whether or not hardware point sprite rendering is enabled for this
pass. Enabling it means that a point list is rendered as a list of quads rather than a list of
dots. It is very useful to use this option if you’re using a BillboardSet and only need to use
point oriented billboards which are all of the same size. You can also use it for any other
point list render.
point size attenuation
Sets whether the size of points is attenuated with distance from the camera.
You only have to provide the final 3 parameters if you turn attenuation on. The formula
for attenuation is that the size of the point is multiplied by 1 / (constant + linear * dist +
quadratic * dist^2); therefore turning it off is equivalent to (constant = 1, linear = 0, quadratic
= 0) and standard perspective attenuation is (constant = 0, linear = 1, quadratic = 0).
The latter is assumed if you leave out the final 3 parameters when you specify ’on’.
Note that the resulting attenuated size is clamped to the minimum and maximum point
size, see the next section.
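A sketch of a point-sprite pass pulling these attributes together (the sizes and attenuation
constants are arbitrary):
pass
{
    point_sprites on
    point_size 2
    // standard perspective attenuation (constant = 0, linear = 1, quadratic = 0)
    point_size_attenuation on 0 1 0
    point_size_min 1
    point_size_max 64
}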
Attribute Descriptions
texture alias
Sets the alias name for this texture unit.
Setting the texture alias name is useful if this material is to be inherited by other
materials and only the textures will be changed in the new material (See Section 3.1.12
[Texture Aliases], page 100).
Default: If a texture unit has a name then the texture alias defaults to the texture unit
name.
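A short sketch of how this is typically used together with script inheritance (all material
and file names are hypothetical):
material BaseDetail
{
    technique
    {
        pass
        {
            texture_unit
            {
                texture_alias DiffuseTex
                texture base_diffuse.png
            }
        }
    }
}

// derived material: same passes, different texture
material RedDetail : BaseDetail
{
    set_texture_alias DiffuseTex red_diffuse.png
}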
texture
Sets the name of the static texture image this layer will use.
This setting is mutually exclusive with the anim texture attribute. Note that the texture
file cannot include spaces. Those of you Windows users who like spaces in filenames, please
get over it and use underscores instead.
The 'type' parameter allows you to specify the type of texture to create - the default is
’2d’, but you can override this; here’s the full list:
1d A 1-dimensional texture; that is, a texture which is only 1 pixel high. These
kinds of textures can be useful when you need to encode a function in a texture
and use it as a simple lookup, perhaps in a fragment program. It is impor-
tant that you use this setting when you use a fragment program which uses
1-dimensional texture coordinates, since GL requires you to use a texture type
that matches (D3D will let you get away with it, but you ought to plan for
cross-compatibility). Your texture widths should still be a power of 2 for best
compatibility and performance.
2d The default type, which is assumed if you omit it. Your texture has a width and
a height, both of which should preferably be powers of 2, and if you can, make
them square because this will look best on most hardware. These can be
addressed with 2D texture coordinates.
3d A 3-dimensional texture, i.e. a volume texture. Your texture has a width, a height
and a depth, the first two of which should preferably be powers of 2. These can be
addressed with 3D texture coordinates, i.e. through a pixel shader.
cubic This texture is made up of 6 2D textures which are pasted around the inside of
a cube. Can be addressed with 3D texture coordinates and are useful for cubic
reflection maps and normal maps.
The ’numMipMaps’ option allows you to specify the number of mipmaps to generate for
this texture. The default is ’unlimited’ which means mips down to 1x1 size are generated.
You can specify a fixed number (even 0) if you like instead. Note that if you use the same
texture in many material scripts, the number of mipmaps generated will conform to the
number specified in the first texture unit used to load the texture - so be consistent with
your usage.
The ’alpha’ option allows you to specify that a single channel (luminance) texture should
be loaded as alpha, rather than the default which is to load it into the red channel. This
can be helpful if you want to use alpha-only textures in the fixed function pipeline.
Default: none
The <PixelFormat> option allows you to specify the desired pixel format of the texture
to create, which may be different to the pixel format of the texture file being loaded. Bear
in mind that the final pixel format will be constrained by hardware capabilities so you may
not get exactly what you ask for. The available options are:
PF L8 8-bit pixel format, all bits luminance.
PF L16 16-bit pixel format, all bits luminance.
PF A8 8-bit pixel format, all bits alpha.
PF A4L4 8-bit pixel format, 4 bits alpha, 4 bits luminance.
PF BYTE LA
2 byte pixel format, 1 byte luminance, 1 byte alpha
PF R5G6B5
16-bit pixel format, 5 bits red, 6 bits green, 5 bits blue.
PF B5G6R5
16-bit pixel format, 5 bits blue, 6 bits green, 5 bits red.
PF R3G3B2
8-bit pixel format, 3 bits red, 3 bits green, 2 bits blue.
PF A4R4G4B4
16-bit pixel format, 4 bits for alpha, red, green and blue.
PF A1R5G5B5
16-bit pixel format, 1 bit for alpha, 5 bits for red, green and blue.
PF R8G8B8
24-bit pixel format, 8 bits for red, green and blue.
PF B8G8R8
24-bit pixel format, 8 bits for blue, green and red.
PF A8R8G8B8
32-bit pixel format, 8 bits for alpha, red, green and blue.
PF A8B8G8R8
32-bit pixel format, 8 bits for alpha, blue, green and red.
PF B8G8R8A8
32-bit pixel format, 8 bits for blue, green, red and alpha.
PF R8G8B8A8
32-bit pixel format, 8 bits for red, green, blue and alpha.
PF X8R8G8B8
32-bit pixel format, 8 bits for red, 8 bits for green, 8 bits for blue like
PF A8R8G8B8, but alpha will get discarded
PF X8B8G8R8
32-bit pixel format, 8 bits for blue, 8 bits for green, 8 bits for red like
PF A8B8G8R8, but alpha will get discarded
PF A2R10G10B10
32-bit pixel format, 2 bits for alpha, 10 bits for red, green and blue.
PF A2B10G10R10
32-bit pixel format, 2 bits for alpha, 10 bits for blue, green and red.
PF FLOAT16 R
16-bit pixel format, 16 bits (float) for red
PF FLOAT16 RGB
48-bit pixel format, 16 bits (float) for red, 16 bits (float) for green, 16 bits
(float) for blue
PF FLOAT16 RGBA
64-bit pixel format, 16 bits (float) for red, 16 bits (float) for green, 16 bits
(float) for blue, 16 bits (float) for alpha
PF FLOAT32 R
16-bit pixel format, 16 bits (float) for red
PF FLOAT32 RGB
96-bit pixel format, 32 bits (float) for red, 32 bits (float) for green, 32 bits
(float) for blue
PF FLOAT32 RGBA
128-bit pixel format, 32 bits (float) for red, 32 bits (float) for green, 32 bits
(float) for blue, 32 bits (float) for alpha
PF SHORT RGBA
64-bit pixel format, 16 bits for red, green, blue and alpha
The ’gamma’ option informs the renderer that you want the graphics hardware to per-
form gamma correction on the texture values as they are sampled for rendering. This is
only applicable for textures which have 8-bit colour channels (e.g. PF R8G8B8). Often,
8-bit per channel textures will be stored in gamma space in order to increase the precision
of the darker colours (http://en.wikipedia.org/wiki/Gamma_correction) but this can
throw out blending and filtering calculations since they assume linear space colour values.
For the best quality shading, you may want to enable gamma correction so that the hard-
ware converts the texture values to linear space for you automatically when sampling the
texture, then the calculations in the pipeline can be done in a reliable linear colour space.
When rendering to a final 8-bit per channel display, you’ll also want to convert back to
gamma space which can be done in your shader (by raising to the power 1/2.2) or you can
enable gamma correction on the texture being rendered to or the render window. Note
that the ’gamma’ option on textures is applied on loading the texture so must be specified
consistently if you use this texture in multiple places.
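As a sketch combining several of the options above (the file name is hypothetical):
texture_unit
{
    // 2D texture, 5 mipmaps, loaded as A8R8G8B8, gamma-corrected on load
    texture rockwall_diffuse.jpg 2d 5 PF_A8R8G8B8 gamma
}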
anim texture
Sets the images to be used in an animated texture layer. In this case an animated texture
layer means one which has multiple frames, each of which is a separate image file. There
are 2 formats, one for implicitly determined image names, one for explicitly named images.
Format1 (short): anim texture <base name> <num frames> <duration>
Example: anim_texture flame.jpg 5 2.5
This sets up an animated texture layer made up of 5 frames named flame_0.jpg,
flame_1.jpg, flame_2.jpg etc, with an animation length of 2.5 seconds (2fps). If duration is
set to 0, then no automatic transition takes place and frames must be changed manually
in code.
Format2 (long): anim texture <frame1> <frame2> ... <duration>
Example: anim_texture flame_a.jpg flame_b.jpg flame_c.jpg flame_d.jpg flame_e.jpg 2.5
This sets up the same duration animation but from 5 separately named image files. The
first format is more concise, but the second is provided if you cannot make your images
conform to the naming standard required for it.
Default: none
cubic texture
Sets the images used in a cubic texture, i.e. one made up of 6 individual images making
up the faces of a cube. These kinds of textures are used for reflection maps (if hardware
supports cubic reflection maps) or skyboxes. There are 2 formats, a brief format expecting
image names of a particular format and a more flexible but longer format for arbitrarily
named textures.
Format1 (short): cubic texture <base name> <combinedUVW|separateUV>
The base name in this format is something like 'skybox.jpg', and the system will expect
you to provide skybox_fr.jpg, skybox_bk.jpg, skybox_up.jpg, skybox_dn.jpg, skybox_lf.jpg,
and skybox_rt.jpg for the individual faces.
Format2 (long): cubic texture <front> <back> <left> <right> <up> <down> separateUV
In this case each face is specified explicitly, in case you don't want to conform to the
image naming standards above. You can only use this for the separateUV version since the
combinedUVW version requires a single texture name to be assigned to the combined 3D
texture (see below).
Default: none
binding type
Tells this texture unit to bind to either the fragment processing unit or the vertex processing
unit (for Section 3.1.10 [Vertex Texture Fetch], page 95).
content type
Tells this texture unit where it should get its content from. The default is to get tex-
ture content from a named texture, as defined with the [texture], page 47, [cubic texture],
page 50, [anim texture], page 50 attributes. However you can also pull texture information
from other automated sources. The options are:
named The default option, this derives texture content from a texture name, loaded
by ordinary means from a file or having been manually created with a given
name.
shadow This option allows you to pull in a shadow texture, and is only valid when
you use texture shadows and one of the ’custom sequence’ shadowing types
(See Chapter 7 [Shadows], page 180). The shadow texture in question will be
from the ’n’th closest light that casts shadows, unless you use light-based pass
iteration or the light start option which may start the light index higher. When
you use this option in multiple texture units within the same pass, each one
references the next shadow texture. The shadow texture index is reset in the
next pass, in case you want to take into account the same shadow textures again
in another pass (e.g. a separate specular / gloss pass). By using this option,
the correct light frustum projection is set up for you for use in fixed-function;
if you use shaders, just reference the texture viewproj matrix auto parameter
in your shader.
compositor
This option allows you to reference a texture from a compositor, and is only
valid when the pass is rendered within a compositor sequence. This can be ei-
ther in a render scene directive inside a compositor script, or in a general pass
in a viewport that has a compositor attached. Note that this is a reference only,
meaning that it does not change the render order. You must make sure that
the order is reasonable for what you are trying to achieve (for example, texture
pooling might cause the referenced texture to be overwritten by something else
by the time it is referenced).
The extra parameters for the content type are only required for this type:
The first is the name of the compositor. (Required)
The second is the name of the texture to reference in the compositor. (Required)
The third is the index of the texture to take, in case of an MRT. (Optional)
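For instance, a texture unit pulling in a render texture from a compositor might look like
this (the compositor and texture names are hypothetical):
texture_unit
{
    // reference texture 'rt_scene' defined in compositor 'MyCompositor', MRT surface 0
    content_type compositor MyCompositor rt_scene 0
}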
filtering
Sets the type of texture filtering used when magnifying or minifying a texture. There are
2 formats to this attribute, the simple format where you simply specify the name of a
predefined set of filtering options, and the complex format, where you individually set the
minification, magnification, and mip filters yourself.
Simple Format
Format: filtering <none|bilinear|trilinear|anisotropic>
Default: filtering bilinear
With this format, you only need to provide a single parameter which is one of the following:
none No filtering or mipmapping is used. This is equivalent to the complex format
’filtering point point none’.
bilinear 2x2 box filtering is performed when magnifying or reducing a texture, and a
mipmap is picked from the list but no filtering is done between the levels of
the mipmaps. This is equivalent to the complex format ’filtering linear linear
point’.
trilinear 2x2 box filtering is performed when magnifying and reducing a texture, and
the closest 2 mipmaps are filtered together. This is equivalent to the complex
format ’filtering linear linear linear’.
anisotropic
This is the same as ’trilinear’, except the filtering algorithm takes account of
the slope of the triangle in relation to the camera rather than simply doing
a 2x2 pixel filter in all cases. This makes triangles at acute angles look less
fuzzy. Equivalent to the complex format ’filtering anisotropic anisotropic lin-
ear’. Note that in order for this to make any difference, you must also set the
[max anisotropy], page 55 attribute too.
Complex Format
Format: filtering <minification> <magnification> <mip>
Default: filtering linear linear point
This format gives you complete control over the minification, magnification, and mip filters.
Each parameter can be one of the following:
none Nothing - only a valid option for the 'mip' filter, since this turns mipmapping
off completely. The lowest setting for min and mag is 'point'.
point Pick the closest pixel in min or mag modes. In mip mode, this picks the closest
matching mipmap.
linear Filter a 2x2 box of pixels around the closest one. In the ’mip’ filter this enables
filtering between mipmap levels.
anisotropic
Only valid for min and mag modes, makes the filter compensate for camera-
space slope of the triangles. Note that in order for this to make any difference,
you must also set the [max anisotropy], page 55 attribute too.
max anisotropy
Sets the maximum degree of anisotropy that the renderer will try to compensate for when
filtering textures. The degree of anisotropy is the ratio between the height of the texture
segment visible in a screen space region versus the width - so for example a floor plane,
which stretches on into the distance and thus the vertical texture coordinates change much
faster than the horizontal ones, has a higher anisotropy than a wall which is facing you head
on (which has an anisotropy of 1 if your line of sight is perfectly perpendicular to it). You
should set the max anisotropy value to something greater than 1 to begin compensating;
higher values can compensate for more acute angles. The maximum value is determined by
the hardware, but it is usually 8 or 16.
In order for this to be used, you have to set the minification and/or the magnification
[filtering], page 54 option on this texture to anisotropic.
Format: max anisotropy <value>
Default: max anisotropy 1
mipmap bias
Sets the bias value applied to the mipmapping calculation, thus allowing you to alter the
decision of which level of detail of the texture to use at any distance. The bias value is
applied after the regular distance calculation, and adjusts the mipmap level by 1 level for
each unit of bias. Negative bias values force larger mip levels to be used, positive bias values
force smaller mip levels to be used. The bias is a floating point value so you can use values
in between whole numbers for fine tuning.
In order for this option to be used, your hardware has to support mipmap biasing (exposed
through the render system capabilities), and your minification [filtering], page 54 has to be
set to point or linear.
Format: mipmap bias <value>
Default: mipmap bias 0
colour op
Determines how the colour of this texture layer is combined with the one below it (or the
lighting effect on the geometry if this is the first layer).
This method is the simplest way to blend texture layers, because it requires only one
parameter, gives you the most common blending types, and automatically sets up 2 blending
methods: one for if single-pass multitexturing hardware is available, and another for if
it is not and the blending must be achieved through multiple rendering passes. It is,
however, quite limited and does not expose the more flexible multitexturing operations,
simply because these can't be automatically supported in multipass fallback mode. If you want
to use the fancier options, use [colour op ex], page 56, but you’ll either have to be sure that
enough multitexturing units will be available, or you should explicitly set a fallback using
[colour op multipass fallback], page 58.
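A minimal sketch of the simple form (the texture name is hypothetical):
texture_unit
{
    texture detail.png
    // multiply this layer with the one below; a multipass fallback is set up automatically
    colour_op modulate
}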
colour op ex
This is an extended version of the [colour op], page 55 attribute which allows extremely
detailed control over the blending applied between this and earlier layers. Multitexturing
hardware can apply more complex blending operations than multipass blending, but you
are limited to the number of texture units which are available in hardware.
See the IMPORTANT note below about the issues between multipass and multitexturing
that using this method can create. Texture colour operations determine how the final colour
of the surface appears when rendered. Texture units are used to combine colour values from
various sources (e.g. the diffuse colour of the surface from lighting calculations, combined
with the colour of the texture). This method allows you to specify the ’operation’ to be
used, i.e. the calculation such as adds or multiplies, and which values to use as arguments.
Operation options
source1 Use source1 without modification
source2 Use source2 without modification
modulate Multiply source1 and source2 together.
modulate x2
Multiply source1 and source2 together, then by 2 (brightening).
modulate x4
Multiply source1 and source2 together, then by 4 (brightening).
add Add source1 and source2 together.
add signed
Add source1 and source2 then subtract 0.5.
add smooth
Add source1 and source2, subtract the product
subtract Subtract source2 from source1
blend diffuse alpha
Use interpolated alpha value from vertices to scale source1, then
add source2 scaled by (1-alpha).
blend texture alpha
As blend diffuse alpha but use alpha from texture
blend current alpha
As blend diffuse alpha but use current alpha from previous stages
(same as blend diffuse alpha for first layer)
blend manual
As blend diffuse alpha but use a constant manual alpha value spec-
ified in <manual>
dotproduct
The dot product of source1 and source2
blend diffuse colour
Use interpolated colour value from vertices to scale source1, then
add source2 scaled by (1-colour).
Source1 and source2 options
src current
The colour as built up from previous stages.
src texture
The colour derived from the texture assigned to this layer.
src diffuse The interpolated diffuse colour from the vertices (same as
’src current’ for first layer).
src specular
The interpolated specular colour from the vertices.
src manual
The manual colour specified at the end of the command.
For example ’modulate’ takes the colour results of the previous layer, and multiplies them
with the new texture being applied. Bear in mind that colours are RGB values from 0.0-1.0
so multiplying them together will result in values in the same range, ’tinted’ by the multiply.
Note however that a straight multiply normally has the effect of darkening the textures -
for this reason there are brightening operations like modulate x2. Note that because of the
limitations on some underlying APIs (Direct3D included) the ’texture’ argument can only
be used as the first argument, not the second.
Note that the last parameter is only required if you decide to pass a value manually into
the operation. Hence you only need to fill these in if you use the ’blend manual’ operation.
IMPORTANT: Ogre tries to use multitexturing hardware to blend texture layers to-
gether. However, if it runs out of texturing units (e.g. 2 on a GeForce2, 4 on a GeForce3) it
has to fall back on multipass rendering, i.e. rendering the same object multiple times with
different textures. This is both less efficient and there is a smaller range of blending oper-
ations which can be performed. For this reason, if you use this method you really should
set the colour op multipass fallback attribute to specify which effect you want to fall back
on if sufficient hardware is not available (the default is just ’modulate’ which is unlikely to
be what you want if you’re doing swanky blending here). If you wish to avoid having to do
this, use the simpler colour op attribute which allows less flexible blending options but sets
up the multipass fallback automatically, since it only allows operations which have direct
multipass equivalents.
colour op multipass fallback
Sets the multipass fallback operation for this layer, for use when colour op ex is specified
but not enough multitexturing hardware is available.
Because some of the effects you can create using colour op ex are only supported under
multitexturing hardware, if the hardware is lacking the system must fallback on multipass
rendering, which unfortunately doesn’t support as many effects. This attribute is for you
to specify the fallback operation which most suits you.
The parameters are the same as in the scene blend attribute; this is because multipass
rendering IS effectively scene blending, since each layer is rendered on top of the last using
the same mechanism as making an object transparent, it’s just being rendered in the same
place repeatedly to get the multitexture effect. If you use the simpler (and less flexible)
colour op attribute you don’t need to call this as the system sets up the fallback for you.
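A sketch showing the extended form together with an explicit fallback (the texture name is
hypothetical and the chosen blend is illustrative):
texture_unit
{
    texture glow.png
    // add this texture to the result built up so far
    colour_op_ex add src_texture src_current
    // if there aren't enough texture units, fall back to an additive multipass blend
    colour_op_multipass_fallback one one
}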
alpha op ex
Behaves in exactly the same way as [colour op ex], page 56 except that it determines
how alpha values are combined between texture layers rather than colour values. The only
difference is that the 2 manual colours at the end of colour op ex are just single floating-
point values in alpha op ex.
env map
Turns on/off texture coordinate effect that makes this layer an environment map.
Environment maps make an object look reflective by using automatic texture coordinate
generation depending on the relationship between the object's vertices or normals and the
eye.
spherical A spherical environment map. Requires a single texture which is either a fish-
eye lens view of the reflected scene, or some other texture which looks good as a
spherical map (a texture of glossy highlights is popular especially in car sims).
This effect is based on the relationship between the eye direction and the vertex
normals of the object, so works best when there are a lot of gradually changing
normals, i.e. curved objects.
planar Similar to the spherical environment map, but the effect is based on the po-
sition of the vertices in the viewport rather than vertex normals. This effect
is therefore useful for planar geometry (where a spherical env map would not
look good because the normals are all the same) or objects without normals.
cubic reflection
A more advanced form of reflection mapping which uses a group of 6 textures
making up the inside of a cube, each of which is a view of the scene down each
axis. Works extremely well in all cases but has a higher technical requirement
from the card than spherical mapping. Requires that you bind a [cubic texture],
page 50 to this texture unit and use the ’combinedUVW’ option.
cubic normal
Generates 3D texture coordinates containing the camera space normal vector
from the normal information held in the vertex data. Again, full use of this
feature requires a [cubic texture], page 50 with the ’combinedUVW’ option.
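A small sketch of the spherical case (the texture name is hypothetical):
texture_unit
{
    texture glossy_highlights.png
    // generate texture coordinates from the eye direction and vertex normals
    env_map spherical
}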
scroll
Sets a fixed scroll offset for the texture.
This method offsets the texture in this layer by a fixed amount. Useful for small ad-
justments without altering texture coordinates in models. However if you wish to have an
animated scroll effect, see the [scroll anim], page 60 attribute.
scroll anim
Sets up an animated scroll for the texture layer. Useful for creating fixed-speed scrolling
effects on a texture layer (for varying scroll speeds, see [wave xform], page 61).
rotate
Rotates a texture to a fixed angle. This attribute changes the rotational orientation of a
texture to a fixed angle, useful for fixed adjustments. If you wish to animate the rotation,
see [rotate anim], page 61.
rotate anim
Sets up an animated rotation effect of this layer. Useful for creating fixed-speed rotation
animations (for varying speeds, see [wave xform], page 61).
scale
Adjusts the scaling factor applied to this texture layer. Useful for adjusting the size of
textures without making changes to geometry. This is a fixed scaling factor, if you wish to
animate this see [wave xform], page 61.
Valid scale values are greater than 0, with a scale factor of 2 making the texture twice
as big in that dimension etc.
wave xform
Sets up a transformation animation based on a wave function. Useful for more advanced
texture layer transform effects. You can add multiple instances of this attribute to a single
texture layer if you wish.
Format: wave xform <xform type> <wave type> <base> <frequency> <phase> <ampli-
tude>
xform type
scroll x Animate the x scroll value
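For example, to animate the horizontal scale of the texture along a sine wave (a sketch; see
the explanation below):
wave_xform scale_x sine 1.0 0.2 0.0 4.0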
The range of the output of the wave will be base to base+amplitude. So the example above
scales the texture in the x direction between 1 (normal size) and 5 along a sine wave at one
cycle every 5 seconds (0.2 waves per second).
transform
This attribute allows you to specify a static 4x4 transformation matrix for the texture unit,
thus replacing the individual scroll, rotate and scale attributes mentioned above.
Format: transform m00 m01 m02 m03 m10 m11 m12 m13 m20 m21 m22 m23 m30 m31
m32 m33
The indexes of the 4x4 matrix value above are expressed as m<row><col>.
The definition of a program can either be embedded in the .material script itself (in
which case it must precede any references to it in the script), or if you wish to use the same
program across multiple .material files, you can define it in an external .program script.
You define the program in exactly the same way whether you use a .program script or a
.material script, the only difference is that all .program scripts are guaranteed to have been
parsed before all .material scripts, so you can guarantee that your program has been defined
before any .material script that might use it. Just like .material scripts, .program scripts
will be read from any location which is on your resource path, and you can define many
programs in a single script.
Vertex, geometry and fragment programs can be low-level (i.e. assembler code written
to the specification of a given low level syntax such as vs 1 1 or arbfp1) or high-level such
as DirectX9 HLSL, OpenGL Shader Language, or nVidia's Cg language (See [High-level
Programs], page 67). High level languages give you a number of advantages, such as being
able to write more intuitive code, and possibly being able to target multiple architectures in
a single program (for example, the same Cg program might be able to be used in both D3D
and GL, whilst the equivalent low-level programs would require separate techniques, each
targeting a different API). High-level programs also allow you to use named parameters
instead of simply indexed ones, although parameters are not defined here, they are used in
the Pass.
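A minimal sketch of a low-level program declaration (the program name and source file are
hypothetical):
vertex_program myLowLevelVP asm
{
    source myLowLevelVP.asm
    syntax vs_1_1
}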
You must specify the syntax the program is in before reading it, because during compilation
of the material we want to skip programs which use an unsupportable syntax quickly, without
loading the program first.
ps 2 x DirectX pixel shader (ie fragment program) assembler syntax. This is basically
ps 2 0 with a higher number of instructions.
Supported cards: ATI Radeon X series, nVidia GeForce 6 series
arbfp1 This is the OpenGL standard assembler format for fragment programs. It’s
roughly equivalent to ps 2 0, which means that not all cards that support basic
pixel shaders under DirectX support arbfp1 (for example neither the GeForce3
or GeForce4 support arbfp1, but they do support ps 1 1).
fp20 This is an nVidia-specific OpenGL fragment syntax which is a superset
of ps 1.3. It allows you to use the ’nvparse’ format for basic fragment
programs. It actually uses NV texture shader and NV register combiners
to provide functionality equivalent to DirectX’s ps 1 1 under GL, but only
for nVidia cards. However, since ATI cards adopted arbfp1 a little earlier
than nVidia, it is mainly nVidia cards like the GeForce3 and GeForce4 that
this will be useful for. You can find more information about nvparse at
http://developer.nvidia.com/object/nvparse.html.
fp30 Another nVidia-specific OpenGL fragment shader syntax. It is a superset of ps
2.0, which is supported on nVidia GeForce FX 5 series and higher. ATI Radeon
HD 2000+ also supports it.
fp40 Another nVidia-specific OpenGL fragment shader syntax. It is a superset of ps
3.0, which is supported on nVidia GeForce 6 series and higher.
gpu gp, gp4 gp
An nVidia-specific OpenGL geometry shader syntax.
Supported cards: nVidia GeForce 8 series
You can get a definitive list of the syntaxes supported by the current card by calling
GpuProgramManager::getSingleton().getSupportedSyntax().
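From code, a quick check might look like this (a sketch assuming an initialised render system;
the syntax code queried is arbitrary):
// query whether a particular assembler syntax is usable on this card
if (Ogre::GpuProgramManager::getSingleton().isSyntaxSupported("ps_2_0"))
{
    Ogre::LogManager::getSingleton().logMessage("ps_2_0 fragment programs are supported");
}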
Default parameters can also be given in the program definition itself. For example (the
program name and source file here are illustrative):
vertex_program myShadedVP cg
{
    source myShadedVP.cg
    entry_point main_vp
    profiles vs_1_1 arbvp1

    default_params
    {
        param_named_auto lightPosition light_position_object_space 0
        param_named_auto eyePosition camera_position_object_space
        param_named_auto worldViewProj worldviewproj_matrix
        param_named shininess float 10
    }
}
The syntax of the parameter definition is exactly the same as when you define parameters
when using programs, See [Program Parameter Specification], page 80. Defining default
parameters allows you to avoid rebinding common parameters repeatedly (clearly in the
above example, all but ’shininess’ are unlikely to change between uses of the program)
which makes your material declarations shorter.
Shared parameter sets can also be declared at script level and referenced from many program
definitions, for example:
shared_params YourSharedParamsName
{
    shared_param_named mySharedParam1 float4 0.1 0.2 0.3 0.4
    ...
}
As you can see, you need to use the keyword ’shared params’ and follow it with the
name that you will use to identify these shared parameters. Inside the curly braces, you
can define one parameter per line, in a way which is very similar to the [param named],
page 92 syntax. The definition of these lines is:
Format: shared param named <param name> <param type> [<[array size]>] [<ini-
tial values>]
The param name must be unique within the set, and the param type can be any one of
float, float2, float3, float4, int, int2, int3, int4, matrix2x2, matrix2x3, matrix2x4, matrix3x2,
matrix3x3, matrix3x4, matrix4x2, matrix4x3 and matrix4x4. The array size option allows
you to define arrays of param type should you wish, and if present must be a number
enclosed in square brackets (and note, must be separated from the param type with white-
space). If you wish, you can also initialise the parameters by providing a list of values.
Once you have defined the shared parameters, you can reference them inside
default params and params blocks using [shared params ref], page 93. You can also
obtain a reference to them in your code via GpuProgramManager::getSharedParameters,
and update the values for all instances using them.
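A sketch of what that can look like in code (the parameter set and parameter names match
the hypothetical example above):
// fetch the shared parameter set declared in script and update one of its values
Ogre::GpuSharedParametersPtr shared =
    Ogre::GpuProgramManager::getSingleton().getSharedParameters("YourSharedParamsName");
shared->setNamedConstant("mySharedParam1", Ogre::Vector4(0.1f, 0.2f, 0.3f, 0.4f));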
High-level Programs
Support for high level vertex and fragment programs is provided through plugins; this is to
make sure that an application using OGRE can use as little or as much of the high-level
program functionality as they like. OGRE currently supports 3 high-level program types,
Cg (Section 3.1.5 [Cg], page 69) (an API- and card-independent, high-level language which
lets you write programs for both OpenGL and DirectX for lots of cards), DirectX 9 High-
Level Shader Language (Section 3.1.6 [HLSL], page 70), and OpenGL Shader Language
(Section 3.1.7 [GLSL], page 71). HLSL can only be used with the DirectX rendersystem,
and GLSL can only be used with the GL rendersystem. Cg can be used with both, although
experience has shown that more advanced programs, particularly fragment programs which
perform a lot of texture fetches, can produce better code in the rendersystem-specific shader
language.
One way to support both HLSL and GLSL is to include separate techniques in the
material script, each one referencing separate programs. However, if the programs are
basically the same, with the same parameters, and the techniques are complex this can bloat
your material scripts with duplication fairly quickly. Instead, if the only difference is the
language of the vertex & fragment program you can use OGRE’s Section 3.1.8 [Unified High-
level Programs], page 76 to automatically pick a program suitable for your rendersystem
whilst using a single technique.
If you use stencil shadows, then any vertex programs which do vertex deformation can
be a problem, because stencil shadows are calculated on the CPU, which does not have
access to the modified vertices. If the vertex program is doing standard skeletal animation,
this is ok (see section above) because Ogre knows how to replicate the effect in software,
but any other vertex deformation cannot be replicated, and you will either have to accept
that the shadow will not reflect this deformation, or you should turn off shadows for that
object.
If you use texture shadows, then vertex deformation is acceptable; however, when ren-
dering the object into a shadow texture (the shadow caster pass), the shadow has to be
rendered in a solid colour (linked to the ambient colour for modulative shadows, black for
additive shadows). You must therefore provide an alternative vertex program, so Ogre pro-
vides you with a way of specifying one to use when rendering the caster, See [Shadows and
Vertex Programs], page 93.
3.1.5 Cg programs
In order to define Cg programs, you have to load Plugin CgProgramManager.so/.dll
at startup, either through plugins.cfg or through your own plugin loading code. They are
very easy to define:
fragment_program myCgFragmentProgram cg
{
source myCgFragmentProgram.cg
entry_point main
profiles ps_2_0 arbfp1
}
There are a few differences between this and the assembler program - to begin with, we
declare that the fragment program is of type ’cg’ rather than ’asm’, which indicates that
it’s a high-level program using Cg. The ’source’ parameter is the same, except this time
it’s referencing a Cg source file instead of a file of assembler.
Here is where things start to change. Firstly, we need to define an ’entry point’, which is
the name of a function in the Cg program which will be the first one called as part of the
fragment program. Unlike assembler programs, which just run top-to-bottom, Cg programs
can include multiple functions and as such you must specify the one which starts the ball
rolling.
Next, instead of a fixed ’syntax’ parameter, you specify one or more ’profiles’; profiles are
how Cg compiles a program down to the low-level assembler. The profiles have the same
names as the assembler syntax codes mentioned above; the main difference is that you
can list more than one, thus allowing the program to be compiled down to more low-level
syntaxes so you can write a single high-level program which runs on both D3D and GL. You
are advised to just enter the simplest profiles under which your programs can be compiled
in order to give it the maximum compatibility. The ordering also matters; if a card supports
more than one syntax then the one listed first will be used.
Lastly, there is a final option called ’compile arguments’, where you can specify argu-
ments exactly as you would to the cgc command-line compiler, should you wish to.
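For instance (a sketch; the define passed to cgc is hypothetical):
fragment_program myCgFragmentProgram cg
{
    source myCgFragmentProgram.cg
    entry_point main
    profiles ps_2_0 arbfp1
    // passed straight through to the cgc command line
    compile_arguments -DLIGHT_COUNT=2
}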
Important Matrix Ordering Note: One thing to bear in mind is that HLSL allows you
to use 2 different ways to multiply a vector by a matrix - mul(v,m) or mul(m,v). The
only difference between them is that the matrix is effectively transposed. You should use
mul(m,v) with the matrices passed in from Ogre - this agrees with the shaders produced
from tools like RenderMonkey, and is consistent with Cg too, but disagrees with the Dx9
SDK and FX Composer which use mul(v,m) - you will have to switch the parameters to
mul() in those shaders.
Note that if you use the float3x4 / matrix3x4 type in your shader, bound to an OGRE
auto-definition (such as bone matrices) you should use the column major matrices = false
option (discussed below) in your program definition. This is because OGRE passes float3x4
as row-major to save constant space (3 float4’s rather than 4 float4’s with only the top 3
values used) and this tells OGRE to pass all matrices like this, so that you can use mul(m,v)
consistently for all calculations. OGRE will also tell the shader to compile in row-major
form (you don’t have to set the /Zpr compile option or #pragma pack(row-major) option,
OGRE does this for you). Note that passing bones in float4x3 form is not supported by
OGRE, but you don’t need it given the above.
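As a small HLSL sketch of the recommended ordering (the variable names are hypothetical):
// matrix first, vector second when using matrices supplied by OGRE
float4 clipPos = mul(worldViewProj, localPos);    // correct for OGRE-supplied parameters
// float4 clipPos = mul(localPos, worldViewProj); // what the DX9 SDK samples use instead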
Advanced options
GLSL supports the use of modular shaders. This means you can write GLSL external
functions that can be used in multiple shaders.
vertex_program myExternalGLSLFunction1 glsl
{
source myExternalGLSLfunction1.txt
}
For example, a simple fragment shader source file (example.frag) might contain:
// example.frag - the uniform and varying declarations shown here are assumed
uniform sampler2D diffuseMap;
varying vec2 UV;

void main(void)
{
    gl_FragColor = texture2D(diffuseMap, UV);
}
In material script:
fragment_program myFragmentShader glsl
{
source example.frag
}
material exampleGLSLTexturing
{
technique
{
pass
{
fragment_program_ref myFragmentShader
{
param_named diffuseMap int 0
}
texture_unit
{
texture myTexture.jpg 2d
}
}
}
}
An index value of 0 refers to the first texture unit in the pass, an index value of 1 refers
to the second unit in the pass and so on.
Matrix parameters
Here are some examples of passing matrices to GLSL mat2, mat3, mat4 uniforms:
material exampleGLSLmatrixUniforms
{
technique matrix_passing
{
pass examples
{
vertex_program_ref myVertexShader
{
// mat4 uniform
param_named OcclusionMatrix matrix4x4 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0
// or
param_named ViewMatrix float16 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0
// mat3
param_named TextRotMatrix float9 1 0 0 0 1 0 0 0 1
}
fragment_program_ref myFragmentShader
{
// mat2 uniform
param_named skewMatrix float4 0.5 0 -0.5 1.0
}
}
}
}
In addition to the built in attributes described in section 7.3 of the GLSL manual,
Ogre supports a number of automatically bound custom vertex attributes. There are some
drivers that do not behave correctly when mixing built-in vertex attributes like gl Normal
and custom vertex attributes, so for maximum compatibility you may well wish to use all
custom attributes in shaders where you need at least one (e.g. for skeletal animation).
vertex Binds VES POSITION, declare as ’attribute vec4 vertex;’.
normal Binds VES NORMAL, declare as ’attribute vec3 normal;’.
colour Binds VES DIFFUSE, declare as ’attribute vec4 colour;’.
secondary colour
Binds VES SPECULAR, declare as ’attribute vec4 secondary colour;’.
uv0 - uv7 Binds VES TEXTURE COORDINATES, declare as ’attribute vec4 uv0;’. Note
that uv6 and uv7 share attributes with tangent and binormal respectively so
cannot both be present.
tangent Binds VES TANGENT, declare as ’attribute vec3 tangent;’.
Preprocessor definitions
GLSL supports using preprocessor definitions in your code - some are defined by the imple-
mentation, but you can also define your own, say in order to use the same source code for
a few different variants of the same technique. In order to use this feature, include prepro-
cessor conditions in your GLSL code, of the kind #ifdef SYMBOL, #if SYMBOL==2 etc.
Then in your program definition, use the 'preprocessor defines' option, following it with a
string of definitions. Definitions are separated by ';' or ',' and may optionally have a '='
operator within them to specify a definition value. Those without an ’=’ will implicitly
have a definition of 1. For example:
// in your GLSL
#ifdef CLEVERTECHNIQUE
// some clever stuff here
#else
// normal technique
#endif
#if NUM_THINGS==2
// Some specific code
#else
// something else
#endif
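The corresponding program definition might then look something like this (a sketch; the
program name and source file are hypothetical):
fragment_program myVariantShader glsl
{
    source variant.frag
    // CLEVERTECHNIQUE is implicitly defined as 1, NUM_THINGS as 2
    preprocessor_defines CLEVERTECHNIQUE,NUM_THINGS=2
}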
For example, to support both HLSL and GLSL without unified programs, you might define
programs like these and then reference them from separate techniques:
fragment_program myFragmentProgramHLSL hlsl
{
    source prog.hlsl
    entry_point main_fp
    target ps_2_0
}
vertex_program myVertexProgramGLSL glsl
{
source prog.vert
}
fragment_program myFragmentProgramGLSL glsl
{
source prog.frag
default_params
{
param_named tex int 0
}
}
material SupportHLSLandGLSLwithoutUnified
{
// HLSL technique
technique
{
pass
{
vertex_program_ref myVertexProgramHLSL
{
param_named_auto worldViewProj world_view_proj_matrix
param_named_auto lightColour light_diffuse_colour 0
param_named_auto lightSpecular light_specular_colour 0
param_named_auto lightAtten light_attenuation 0
}
fragment_program_ref myFragmentProgramHLSL
{
}
}
}
// GLSL technique
technique
{
pass
{
vertex_program_ref myVertexProgramGLSL
{
param_named_auto worldViewProj world_view_proj_matrix
param_named_auto lightColour light_diffuse_colour 0
param_named_auto lightSpecular light_specular_colour 0
param_named_auto lightAtten light_attenuation 0
}
fragment_program_ref myFragmentProgramGLSL
{
}
}
}
}
And that’s a really small example. Everything you added to the HLSL technique, you’d
have to duplicate in the GLSL technique too. So instead, here’s how you’d do it with unified
program definitions:
vertex_program myVertexProgramHLSL hlsl
{
source prog.hlsl
entry_point main_vp
target vs_2_0
}
fragment_program myFragmentProgramHLSL hlsl
{
source prog.hlsl
entry_point main_fp
target ps_2_0
}
vertex_program myVertexProgramGLSL glsl
{
source prog.vert
}
fragment_program myFragmentProgramGLSL glsl
{
source prog.frag
default_params
{
param_named tex int 0
}
}
// Unified definition
vertex_program myVertexProgram unified
{
delegate myVertexProgramGLSL
delegate myVertexProgramHLSL
}
fragment_program myFragmentProgram unified
{
delegate myFragmentProgramGLSL
delegate myFragmentProgramHLSL
}
material SupportHLSLandGLSLwithUnified
{
// single technique, works with either rendersystem via the unified programs
technique
{
pass
{
vertex_program_ref myVertexProgram
{
param_named_auto worldViewProj world_view_proj_matrix
param_named_auto lightColour light_diffuse_colour 0
param_named_auto lightSpecular light_specular_colour 0
param_named_auto lightAtten light_attenuation 0
}
fragment_program_ref myFragmentProgram
{
}
}
}
}
At runtime, when myVertexProgram or myFragmentProgram are used, OGRE automat-
ically picks a real program to delegate to based on what’s supported on the current hardware
/ rendersystem. If none of the delegates are supported, the entire technique referencing the
unified program is marked as unsupported and the next technique in the material is checked
for fallback, just like normal. As your materials get larger, and you find you need to support
HLSL and GLSL specifically (or need to write multiple interface-compatible versions of a
program for whatever other reason), unified programs can really help reduce duplication.
As well as naming the program in question, you can also provide parameters to it. Here’s
a simple example:
vertex_program_ref myVertexProgram
{
param_indexed_auto 0 worldviewproj_matrix
param_indexed 4 float4 10.0 0 0 0
}
In this example, we bind a vertex program called ’myVertexProgram’ (which will be
defined elsewhere) to the pass, and give it 2 parameters, one is an ’auto’ parameter, meaning
we do not have to supply a value as such, just a recognised code (in this case the
world/view/projection matrix).
The syntax of the link to a vertex program and a fragment or geometry program are
identical, the only difference is that ’fragment program ref’ and ’geometry program ref’
are used respectively instead of ’vertex program ref’.
For many situations vertex, geometry and fragment programs are associated with each
other in a pass but this is not cast in stone. You could have a vertex program that can
be used by several different fragment programs. Another situation that arises is that you
can mix fixed pipeline and programmable pipeline (shaders) together. You could use the
non-programmable vertex fixed function pipeline and then provide a fragment program ref
in a pass i.e. there would be no vertex program ref section in the pass. The fragment
program referenced in the pass must meet the requirements as defined in the related API
in order to read from the outputs of the vertex fixed pipeline. You could also just have a
vertex program that outputs to the fragment fixed function pipeline.
The requirements to read from or write to the fixed function pipeline are similar between
rendering APIs (DirectX and OpenGL), but how it's actually done in each type of shader
(vertex, geometry or fragment) depends on the shader language. For HLSL (DirectX
API) and associated asm consult MSDN at http://msdn.microsoft.com/library/.
For GLSL (OpenGL), consult section 7.6 of the GLSL spec 1.1 available at
http://developer.3dlabs.com/documents/index.htm. The built in varying variables
provided in GLSL allow your program to read/write to the fixed function pipeline varyings.
For Cg consult the Language Profiles section in CgUsersManual.pdf that comes with the
Cg Toolkit available at http://developer.nvidia.com/object/cg_toolkit.html. For
HLSL and Cg its the varying bindings that allow your shader programs to read/write to
the fixed function pipeline varyings.
Parameter specification
Parameters can be specified using one of the commands shown below. The same syntax is
used whether you are defining a parameter just for this particular use of the program, or
when specifying the [Default Program Parameters], page 66. Parameters set in the specific
use of the program override the defaults.
• [param indexed], page 80
• [param indexed auto], page 81
• [param named], page 92
• [param named auto], page 93
• [shared params ref], page 93
param indexed
This command sets the value of an indexed parameter.
The ’index’ is simply a number representing the position in the parameter list which
the value should be written, and you should derive this from your program definition. The
index is relative to the way constants are stored on the card, which is in 4-element blocks.
For example if you defined a float4 parameter at index 0, the next index would be 1. If you
defined a matrix4x4 at index 0, the next usable index would be 4, since a 4x4 matrix takes
up 4 indexes.
The value of ’type’ can be float4, matrix4x4, float<n>, int4, int<n>. Note that ’int’
parameters are only available on some more advanced program syntaxes, check the D3D or
GL vertex / fragment program documentation for full details. Typically the most useful
ones will be float4 and matrix4x4. Note that if you use a type which is not a multiple of
4, then the remaining values up to the multiple of 4 will be filled with zeroes for you (since
GPUs always use banks of 4 floats per constant even if only one is used).
’value’ is simply a space or tab-delimited list of values which can be converted into the
type you have specified.
param indexed auto
This command sets up an indexed parameter which is automatically updated by the engine
from one of the derived values listed below.
'index' has the same meaning as [param indexed], page 80; note this time you do not
have to specify the size of the parameter because the engine knows this already. In the
example, the world/view/projection matrix is being used so this is implicitly a matrix4x4.
world matrix
The current world matrix.
inverse world matrix
The inverse of the current world matrix.
transpose world matrix
The transpose of the world matrix
inverse transpose world matrix
The inverse transpose of the world matrix
world matrix array 3x4
An array of world matrices, each represented as only a 3x4 matrix (3 rows
of 4 columns) usually for doing hardware skinning. You should make enough
entries available in your vertex program for the number of bones in use, i.e. an
array of numBones*3 float4's.
view matrix
The current view matrix.
inverse view matrix
The inverse of the current view matrix.
transpose view matrix
The transpose of the view matrix
inverse transpose view matrix
The inverse transpose of the view matrix
projection matrix
The current projection matrix.
inverse projection matrix
The inverse of the projection matrix
transpose projection matrix
The transpose of the projection matrix
inverse transpose projection matrix
The inverse transpose of the projection matrix
worldview matrix
The current world and view matrices concatenated.
inverse worldview matrix
The inverse of the current concatenated world and view matrices.
transpose worldview matrix
The transpose of the world and view matrices
inverse transpose worldview matrix
The inverse transpose of the current concatenated world and view matrices.
viewproj matrix
The current view and projection matrices concatenated.
inverse viewproj matrix
The inverse of the view & projection matrices
transpose viewproj matrix
The transpose of the view & projection matrices
inverse transpose viewproj matrix
The inverse transpose of the view & projection matrices
worldviewproj matrix
The current world, view and projection matrices concatenated.
inverse worldviewproj matrix
The inverse of the world, view and projection matrices
transpose worldviewproj matrix
The transpose of the world, view and projection matrices
inverse transpose worldviewproj matrix
The inverse transpose of the world, view and projection matrices
texture matrix
The transform matrix of a given texture unit, as it would usually be seen in the
fixed-function pipeline. This requires an index in the ’extra params’ field, and
relates to the ’nth’ texture unit of the pass in question. NB if the given index
exceeds the number of texture units available for this pass, then the parameter
will be set to Matrix4::IDENTITY.
render target flipping
The value used to adjust the transformed y position if you bypass the projection matrix
transform. It's -1 if the render target requires texture flipping, +1 otherwise.
vertex winding
Indicates what vertex winding mode the render state is in at this point; +1 for
standard, -1 for inverted (e.g. when processing reflections).
light diffuse colour
The diffuse colour of a given light; this requires an index in the ’extra params’
field, and relates to the ’nth’ closest light which could affect this object (i.e. 0
refers to the closest light - note that directional lights are always first in the list
and always present). NB if there are no lights this close, then the parameter
will be set to black.
light specular colour
The specular colour of a given light; this requires an index in the ’extra params’
field, and relates to the ’nth’ closest light which could affect this object (i.e.
0 refers to the closest light). NB if there are no lights this close, then the
parameter will be set to black.
light attenuation
A float4 containing the 4 light attenuation variables for a given light. This
requires an index in the ’extra params’ field, and relates to the ’nth’ closest
light which could affect this object (i.e. 0 refers to the closest light). NB
if there are no lights this close, then the parameter will be set to all zeroes.
The order of the parameters is range, constant attenuation, linear attenuation,
quadratic attenuation.
spotlight params
A float4 containing the 3 spotlight parameters and a control value. The order of
the parameters is cos(inner angle / 2), cos(outer angle / 2), falloff, and the final
w value is 1.0f. For non-spotlights the value is float4(1,0,0,1). This requires
an index in the 'extra params' field, and relates to the 'nth' closest light which
could affect this object (i.e. 0 refers to the closest light). If there are fewer lights
than this, the parameter is set as for a non-spotlight.
light position
The position of a given light in world space. This requires an index in the
’extra params’ field, and relates to the ’nth’ closest light which could affect this
object (i.e. 0 refers to the closest light). NB if there are no lights this close,
then the parameter will be set to all zeroes. Note that this property will work
with all kinds of lights, even directional lights, since the parameter is set as
a 4D vector. Point lights will be (pos.x, pos.y, pos.z, 1.0f) whilst directional
lights will be (-dir.x, -dir.y, -dir.z, 0.0f). Operations like dot products will work
consistently on both.
light direction
The direction of a given light in world space. This requires an index in the
’extra params’ field, and relates to the ’nth’ closest light which could affect this
object (i.e. 0 refers to the closest light). NB if there are no lights this close,
then the parameter will be set to all zeroes. DEPRECATED - this property
only works on directional lights, and we recommend that you use light position
instead since that returns a generic 4D vector.
light position object space
The position of a given light in object space (i.e. when the object is at (0,0,0)).
This requires an index in the ’extra params’ field, and relates to the ’nth’ closest
light which could affect this object (i.e. 0 refers to the closest light). NB if there
are no lights this close, then the parameter will be set to all zeroes. Note that
this property will work with all kinds of lights, even directional lights, since the
parameter is set as a 4D vector. Point lights will be (pos.x, pos.y, pos.z, 1.0f)
whilst directional lights will be (-dir.x, -dir.y, -dir.z, 0.0f). Operations like dot
products will work consistently on both.
light direction object space
The direction of a given light in object space (i.e. when the object is at (0,0,0)).
This requires an index in the ’extra params’ field, and relates to the ’nth’ closest
light which could affect this object (i.e. 0 refers to the closest light). NB
if there are no lights this close, then the parameter will be set to all zeroes.
DEPRECATED, except for spotlights - for directional lights we recommend
that you use light position object space instead since that returns a generic
4D vector.
light power
The ’power’ scaling for a given light, useful in HDR rendering. This requires
an index in the ’extra params’ field, and relates to the ’nth’ closest light which
could affect this object (i.e. 0 refers to the closest light).
light number
When rendering, there is generally a list of lights available for use by all of the
passes for a given object, and those lights may or may not be referenced in
one or more passes. Sometimes it can be useful to know where in that overall
list a given light light (as seen from a pass) is. For example if you use iterate
once per light, the pass always sees the light as index 0, but in each iteration
the actual light referenced is different. This binding lets you pass through the
actual index of the light in that overall list. You just need to give it a parameter
of the pass-relative light number and it will map it to the overall list index.
frame time
The current frame time, factored by the optional parameter (or 1.0f if not
supplied).
fps The current frames per second
viewport width
The current viewport width in pixels
viewport height
The current viewport height in pixels
inverse viewport width
1.0/the current viewport width in pixels
inverse viewport height
1.0/the current viewport height in pixels
viewport size
4-element vector of viewport width, viewport height, inverse viewport width,
inverse viewport height
texel offsets
Provides details of the rendersystem-specific texture coordinate offsets required
to map texels onto pixels. float4(horizontalOffset, verticalOffset, horizontalOff-
set / viewport width, verticalOffset / viewport height).
view direction
View direction vector in object space
view side vector
View local X axis
view up vector
View local Y axis
fov Vertical field of view, in radians
near clip distance
Near clip distance, in world units
far clip distance
Far clip distance, in world units (may be 0 for infinite view projection)
texture viewproj matrix
Applicable to vertex programs which have been specified as the 'shadow
receiver' vertex program alternative, or where a texture unit is marked as
content_type shadow; this provides details of the view/projection matrix for the
current shadow projector. The optional ’extra params’ entry specifies which
light the projector refers to (for the case of content type shadow where more
than one shadow texture may be present in a single pass), where 0 is the default
and refers to the first light referenced in this pass.
texture viewproj matrix array
As texture viewproj matrix, except an array of matrices is passed, up to the
number that you specify as the ’extra params’ value.
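For illustration, a vertex program reference might bind several of the auto parameters above by name (the program and parameter names here are hypothetical):
vertex_program_ref myAutoParamVertexProgram
{
param_named_auto worldViewProj worldviewproj_matrix
param_named_auto lightPos light_position_object_space 0
param_named_auto vpSize viewport_size
}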
param named
This is the same as param indexed, but uses a named parameter instead of an index. This
can only be used with high-level programs which include parameter names; if you’re using
an assembler program then you have no choice but to use indexes. Note that you can use
indexed parameters for high-level programs too, but it is less portable since if you reorder
your parameters in the high-level program the indexes will change.
The type is required because the program is not compiled and loaded when the material
script is parsed, so at this stage we have no idea what types the parameters are. Programs
are only loaded and compiled when they are used, to save memory.
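For example, named parameters might be set like this (the parameter names and values are purely illustrative):
fragment_program_ref myFragmentProgram
{
param_named ambientColour float4 0.2 0.2 0.2 1.0
param_named shininess float 32
}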
The allowed value codes and the meaning of extra params are detailed in
[param indexed auto], page 81.
The only required parameter is a name, which must be the name of an already defined
shared parameter set. All named parameters which are present in the program that are
also present in the shared parameter set will be linked, and the shared parameters used as
if you had defined them locally. This is dependent on the definitions (type and array size)
matching between the shared set and the program.
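A minimal sketch of how this might look, assuming the shared_params / shared_params_ref syntax and using hypothetical names:
shared_params MySharedParams
{
shared_param_named globalAmbient float4 0.1 0.1 0.1 1.0
}
fragment_program_ref myFragmentProgram
{
shared_params_ref MySharedParams
}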
If you use stencil shadows, then any vertex programs which do vertex deformation can
be a problem, because stencil shadows are calculated on the CPU, which does not have
access to the modified vertices. If the vertex program is doing standard skeletal animation,
this is ok (see section above) because Ogre knows how to replicate the effect in software,
but any other vertex deformation cannot be replicated, and you will either have to accept
that the shadow will not reflect this deformation, or you should turn off shadows for that
object.
If you use texture shadows, then vertex deformation is acceptable; however, when ren-
dering the object into the shadow texture (the shadow caster pass), the shadow has to be
rendered in a solid colour (linked to the ambient colour). You must therefore provide an
alternative vertex program, so Ogre provides you with a way of specifying one to use when
rendering the caster. Basically you link an alternative vertex program, using exactly the
same syntax as the original vertex program link:
shadow_caster_vertex_program_ref myShadowCasterVertexProgram
{
param_indexed_auto 0 worldviewproj_matrix
param_indexed_auto 4 ambient_light_colour
}
When rendering a shadow caster, Ogre will automatically use the alternate program.
You can bind the same or different parameters to the program - the most important thing
is that you bind ambient_light_colour, since this determines the colour of the shadow in
modulative texture shadows. If you don’t supply an alternate program, Ogre will fall back
on a fixed-function material which will not reflect any vertex deformation you do in your
vertex program.
In addition, when rendering the shadow receivers with shadow textures, Ogre needs to
project the shadow texture. It does this automatically in fixed function mode, but if the
receivers use vertex programs, they need to have a shadow receiver program which does the
usual vertex deformation, but also generates projective texture coordinates. The additional
program is linked into the pass like this:
shadow_receiver_vertex_program_ref myShadowReceiverVertexProgram
{
param_indexed_auto 0 worldviewproj_matrix
param_indexed_auto 4 texture_viewproj_matrix
}
For the purposes of writing this alternate program, there is an automatic parameter
binding of ’texture viewproj matrix’ which provides the program with texture projection
parameters. The vertex program should do its normal vertex processing, and generate
texture coordinates using this matrix and place them in texture coord sets 0 and 1, since
some shadow techniques use 2 texture units. The colour of the vertices output by this vertex
program must always be white, so as not to affect the final colour of the rendered shadow.
When using additive texture shadows, the shadow pass render is actually the lighting
render, so if you perform any fragment program lighting you also need to pull in a custom
fragment program. You use the shadow receiver fragment program ref for this:
shadow_receiver_fragment_program_ref myShadowReceiverFragmentProgram
{
param_named_auto lightDiffuse light_diffuse_colour 0
}
You should pass the projected shadow coordinates from the custom vertex program. As
for textures, texture unit 0 will always be the shadow texture. Any other textures which
you bind in your pass will be carried across too, but will be moved up by 1 unit to make
room for the shadow texture. Therefore your shadow receiver fragment program is likely
to be the same as the bare lighting pass of your normal material, except that you insert an
extra texture sampler at index 0, which you will use to adjust the result by (modulating
diffuse and specular components).
To reflect this, you should use the [binding type], page 51 attribute in a texture unit
to indicate which unit you are targeting with your texture - ’fragment’ (the default) or
’vertex’. For render systems that don’t have separate bindings, this actually does nothing.
But for those that do, it will ensure your texture gets bound to the right processing unit.
Note that whilst DirectX9 has separate bindings for the vertex and fragment pipelines,
binding a texture to the vertex processing unit still uses up a ’slot’ which is then not available
for use in the fragment pipeline. I didn’t manage to find this documented anywhere, but
the nVidia samples certainly avoid binding a texture to the same index on both vertex
and fragment units, and when I tried to do it, the texture did not appear correctly in the
fragment unit, whilst it did as soon as I moved it into the next unit.
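For example, a texture unit intended for use by the vertex program might be declared like this (the texture name is hypothetical):
texture_unit
{
texture displacementMap.png
binding_type vertex
}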
Hardware limitations
As at the time of writing (early Q3 2006), ATI do not support texture fetch in their current
crop of cards (Radeon X1n00). nVidia do support it in both their 6n00 and 7n00 range.
ATI support an alternative called ’Render to Vertex Buffer’, but this is not standardised
at this time and is very much different in its implementation, so cannot be considered to
be a drop-in replacement. This is the case even though the Radeon X1n00 cards claim to
support vs_3_0 (which requires vertex texture fetch).
3.1.11 Script Inheritance
To make a new material that is based on one previously defined, add a
colon ':' after the new material name, followed by the name of the material that is to be
copied.
The only caveat is that a parent material must have been defined/parsed prior to the
child material script being parsed. The easiest way to achieve this is to either place parents
at the beginning of the material script file, or to use the ’import’ directive (See Section 3.1.14
[Script Import Directive], page 105). Note that inheritance is actually a copy - after scripts
are loaded into Ogre, objects no longer maintain their copy inheritance structure. If a
parent material is modified through code at runtime, the changes have no effect on child
materials that were copied from it in the script.
Material copying within the script alleviates some of the drudgery of copy/paste, but
being able to identify specific techniques, passes, and texture units to modify makes material
copying easier still. Techniques, passes, and texture units can be identified directly in the child
material, without having to lay out all of the preceding techniques, passes, and texture units,
by associating a name with them: techniques and passes can take a name, and texture units
can be numbered within the material script. You can also use variables, see Section 3.1.13
[Script Variables], page 104.
Names become very useful in materials that copy from other materials. In order to
override values they must be in the correct technique, pass, texture unit etc. The script
could be laid out using the sequence of techniques, passes and texture units in the child material,
but if only one parameter needs to change in, say, the 5th pass, then the first four passes
prior to the fifth would have to be placed in the script:
Here is an example:
material test2 : test1
{
technique
{
pass
{
}
pass
{
}
pass
{
}
pass
{
}
pass
{
ambient 0.5 0.7 0.3 1.0
}
}
}
This method is tedious for materials that only have slight variations to their parent. An
easier way is to name the pass directly without listing the previous passes:
Note: if passes or techniques aren’t given a name, they will take on a default name based
on their index. For example the first pass has index 0 so its name will be 0.
material test2 : test1
{
technique 0
{
pass 4
{
ambient 0.5 0.7 0.3 1.0
}
}
}
The parent pass can also be given an explicit name in the parent material, and the child
can then inherit from it directly:
material Test
{
technique
{
pass : ParentPass
{
}
}
}
Notice that the pass inherits from ParentPass. This allows for the creation of more
fine-grained inheritance hierarchies.
Along with the more generalized inheritance system comes an important new keyword:
"abstract." This keyword is used at a top-level object declaration (not inside any other
object) to denote that it is not something that the compiler should actually attempt to
compile, but rather that it is only for the purpose of inheritance. For example, a material
declared with the abstract keyword will never be turned into an actual usable material in
the material framework. Objects which cannot be at a top level in the document (like a
pass) but that you would like to declare as such for inheritance purposes must be declared
with the abstract keyword.
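For instance, a pass intended purely as a parent (such as the ParentPass used above) could be declared at the top level like this (a minimal sketch; the pass contents are illustrative):
abstract pass ParentPass
{
diffuse 1 1 1 1
specular 0.5 0.5 0.5 1.0 32
}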
The final matching option is based on wildcards. Using the '*' character, you can make
a powerful matching scheme and override multiple objects at once, even if you don't know
the exact names of the objects in the parent.
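A minimal sketch of this, assuming '*' may be used as the name to match every technique and pass in a hypothetical parent material:
material DarkVariant : BaseMaterial
{
technique *
{
pass *
{
diffuse 0.2 0.2 0.2 1.0
}
}
}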
3.1.12 Texture Aliases
material TSNormalSpecMapping
{
technique GLSL
{
pass
{
ambient 0.1 0.1 0.1
diffuse 0.7 0.7 0.7
specular 0.7 0.7 0.7 128
vertex_program_ref GLSLDemo/OffsetMappingVS
{
param_named_auto lightPosition light_position_object_space 0
param_named_auto eyePosition camera_position_object_space
param_named textureScale float 1.0
}
fragment_program_ref GLSLDemo/TSNormalSpecMappingFS
{
param_named normalMap int 0
param_named diffuseMap int 1
param_named fxMap int 2
}
// Normal map
texture_unit NormalMap
{
texture defaultNM.png
tex_coord_set 0
filtering trilinear
}
}
}
technique HLSL_DX9
{
pass
{
vertex_program_ref FxMap_HLSL_VS
{
param_named_auto worldViewProj_matrix worldviewproj_matrix
param_named_auto lightPosition light_position_object_space 0
param_named_auto eyePosition camera_position_object_space
}
fragment_program_ref FxMap_HLSL_PS
{
param_named ambientColor float4 0.2 0.2 0.2 0.2
}
// Normal map
texture_unit
{
texture_alias NormalMap
texture defaultNM.png
tex_coord_set 0
filtering trilinear
}
}
}
}
Note that the GLSL and HLSL techniques use the same textures. For each texture usage
type a texture alias is given that describes what the texture is used for. So the first texture
unit in the GLSL technique has the same alias as the TUS in the HLSL technique, since it's
the same texture being used. The same goes for the second and third texture units.
For demonstration purposes, the GLSL technique makes use of texture unit naming and
therefore the texture alias name does not have to be set, since it defaults to the texture unit
name. So why not use the default all the time, since it's less typing? For most situations you
can. It's when you clone a material and then want to change the alias that you must
use the texture_alias command in the script. You cannot change the name of a texture unit
in a cloned material, so texture_alias provides a facility to assign an alias name.
Now we want to clone the material but only want to change the textures used. We could
copy and paste the whole material but if we decide to change the base material later then
we also have to update the copied material in the script. With set_texture_alias, copying a
material is very easy. set_texture_alias is specified at the top of the material definition.
All techniques using the specified texture alias will be affected by set_texture_alias.
Format:
set_texture_alias <alias_name> <texture_name>
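For example, a clone of the material above might override textures through their aliases (the texture file names here are hypothetical):
material fxTest : TSNormalSpecMapping
{
set_texture_alias NormalMap fxTestNormal.png
set_texture_alias DiffuseMap fxTestDiffuse.png
}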
The same process can be done in code, as long as you set up the texture alias names;
then there is no need to traverse technique/pass/TUS to change a texture. You just
call myMaterialPtr->applyTextureAliases(myAliasTextureNameList), which will update all
textures in all texture units that match the alias names in the map container reference you
passed as a parameter.
You don’t have to supply all the textures in the copied material.
Another example:
material fxTest3 : TSNormalSpecMapping
{
set_texture_alias DiffuseMap fxTest2Diff.png
}
fxTest3 will end up with the default textures for the normal map and spec map setup
in TSNormalSpecMapping material but will have a different diffuse map. So your base
material can define the default textures to use and then the child materials can override
specific textures.
3.1.13 Script Variables
Scripts can also use variables, which are assigned with the 'set' keyword and substituted
into inherited objects. For example:
material Test
{
technique
{
pass : ParentPass
{
set $diffuse_colour "1 0 0 1"
}
}
}
The ParentPass object declares a variable called "diffuse_colour" which is then overridden
in the Test material's pass. The "set" keyword is used to set the value of that variable.
The variable assignment follows lexical scoping rules, which means that the value of "1 0 0
1" is only valid inside that pass definition. Variable assignments in outer scopes carry over
into inner scopes.
material Test
{
set $diffuse_colour "1 0 0 1"
technique
{
pass : ParentPass
{
}
}
}
The $diffuse_colour assignment carries down through the technique and into the pass.
3.1.14 Script Import Directive
Note, however, that importing does not actually cause objects in the imported script to
be fully parsed and created; it just makes the definitions available for inheritance. This has a
specific ramification for vertex / fragment program definitions, which must be loaded before
any parameters can be specified. You should continue to put common program definitions
in .program files to ensure they are fully parsed before being referenced in multiple .material
files. The 'import' command just makes sure you can resolve dependencies between
equivalent script definitions (e.g. material to material).
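As a sketch, assuming the import directive takes the object to import (or '*' for everything) and the file to import it from (file name hypothetical):
import * from "parent.material"
import ParentMaterial from "parent.material"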
3.2 Compositor Scripts
The compositor framework allows you to define full-screen post-processing effects in scripts, which can then be applied to viewports.
Compositor Fundamentals
Performing post-processing effects generally involves first rendering the scene to a texture,
either in addition to or instead of the main window. Once the scene is in a texture, you
can then pull the scene image into a fragment program and perform operations on it by
rendering it through a full-screen quad. The target of this post-processing render can be the
main result (e.g. a window), or it can be another render texture so that you can perform
multi-stage convolutions on the image. You can even ’ping-pong’ the render back and forth
between a couple of render textures to perform convolutions which require many iterations,
without using a separate texture for each stage. Eventually you’ll want to render the result
to the final output, which you do with a full screen quad. This might replace the whole
window (thus the main window doesn’t need to render the scene itself), or it might be a
combinational effect.
So that we can discuss how to implement these techniques efficiently, a number of defi-
nitions are required:
Compositor
Definition of a fullscreen effect that can be applied to a user viewport. This is
what you’re defining when writing compositor scripts as detailed in this section.
Compositor Instance
An instance of a compositor as applied to a single viewport. You create these
based on compositor definitions, See Section 3.2.4 [Applying a Compositor],
page 120.
Compositor Chain
It is possible to enable more than one compositor instance on a viewport at the
same time, with one compositor taking the results of the previous one as input.
This is known as a compositor chain. Every viewport which has at least one
compositor attached to it has a compositor chain. See Section 3.2.4 [Applying
a Compositor], page 120
Target This is a RenderTarget, i.e. the place where the result of a series of render
operations is sent. A target may be the final output (and this is implicit, you
don’t have to declare it), or it may be an intermediate render texture, which
you declare in your script with the [compositor texture], page 109. A target
which is not the output target has a defined size and pixel format which you
can control.
Output Target
As Target, but this is the single final result of all operations. The size and pixel
format of this target cannot be controlled by the compositor since it is defined
by the application using it, thus you don’t declare it in your script. However,
you do declare a Target Pass for it, see below.
Target Pass
A Target may be rendered to many times in the course of a composition effect.
In particular, if you 'ping pong' a convolution between a couple of textures, you
will have more than one Target Pass per Target. Target passes are declared in
the script using a 'target' or 'target output' section (see Section 3.2.2 [Compositor
Target Passes], page 112), the latter being the final output target pass, of which
there can be only one.
Pass Within a Target Pass, there are one or more individual Section 3.2.3 [Compos-
itor Passes], page 114, which perform a very specific action, such as rendering
the original scene (or pulling the result from the previous compositor in the
chain), rendering a fullscreen quad, or clearing one or more buffers. Typically
within a single target pass you will use either a 'render scene' pass or a
'render quad' pass, not both. Clear can be used with either type.
Loading scripts
Compositor scripts are loaded when resource groups are initialised: OGRE looks in all
resource locations associated with the group (see Root::addResourceLocation) for files with
the ’.compositor’ extension and parses them. If you want to parse files manually, use
CompositorSerializer::parseScript.
Format
Several compositors may be defined in a single script. The script format is pseudo-C++,
with sections delimited by curly braces ('{', '}'), and comments indicated by starting a line
with '//' (nested comments are not allowed). The general format is shown in the example
below:
// This is a comment
// Black and white effect
compositor B&W
{
technique
{
// Temporary textures
texture rt0 target_width target_height PF_A8R8G8B8
target rt0
{
// Render output from previous compositor (or original scene)
input previous
}
target_output
{
// Start with clear output
input none
// Draw a fullscreen quad with the black and white image
pass render_quad
{
// Renders a fullscreen quad with a material
material Ogre/Compositor/BlackAndWhite
input 0 rt0
}
}
}
}
Every compositor in the script must be given a name, which is the line 'compositor
<name>' before the first opening '{'. This name must be globally unique. It can include path
characters (as in the example) to logically divide up your compositors, and also to avoid
duplicate names, but the engine does not treat the name as hierarchical, just as a string.
Names can include spaces but must be surrounded by double quotes, i.e. compositor "My
Name".
The major components of a compositor are the Section 3.2.1 [Compositor Techniques],
page 108, the Section 3.2.2 [Compositor Target Passes], page 112 and the Section 3.2.3
[Compositor Passes], page 114, which are covered in detail in the following sections.
3.2.1 Techniques
A compositor technique is much like a Section 3.1.1 [Techniques], page 21 in that it describes
one approach to achieving the effect you’re looking for. A compositor definition can have
more than one technique if you wish to provide some fallback should the hardware not
support the technique you’d prefer to use. Techniques are evaluated for hardware support
based on 2 things:
Material support
All Section 3.2.3 [Compositor Passes], page 114 that render a fullscreen quad
use a material; for the technique to be supported, all of the materials refer-
enced must have at least one supported material technique. If they don’t, the
compositor technique is marked as unsupported and won’t be used.
As with material techniques, compositor techniques are evaluated in the order you define
them in the script, so techniques declared first are preferred over those declared later.
Format: technique
texture
This declares a render texture for use in subsequent Section 3.2.2 [Compositor Target
Passes], page 112.
Format: texture <Name> <Width> <Height> <Pixel Format> [<MRT Pixel Format2>]
[<MRT Pixel FormatN>] [pooled] [gamma] [no_fsaa] [<scope>]
Name A name to give the render texture, which must be unique within this compos-
itor. This name is used to reference the texture in Section 3.2.2 [Compositor
Target Passes], page 112, when the texture is rendered to, and in Section 3.2.3
[Compositor Passes], page 114, when the texture is used as input to a material
rendering a fullscreen quad.
Width, Height
The dimensions of the render texture. You can either specify a fixed width
and height, or you can request that the texture is based on the physical di-
mensions of the viewport to which the compositor is attached. The options
for the latter are 'target_width', 'target_height', 'target_width_scaled <factor>'
and 'target_height_scaled <factor>' - where 'factor' is the amount by which you
wish to multiply the size of the main target to derive the dimensions.
Pixel Format
The pixel format of the render texture. This affects how much memory it
will take, what colour channels will be available, and what precision you
will have within those channels. The available options are PF_A8R8G8B8,
PF_R8G8B8A8, PF_R8G8B8, PF_FLOAT16_RGBA, PF_FLOAT16_RGB,
PF_FLOAT16_R, PF_FLOAT32_RGBA, PF_FLOAT32_RGB, and
PF_FLOAT32_R.
pooled If present, this directive makes this texture ’pooled’ among compositor in-
stances, which can save some memory.
gamma If present, this directive means that sRGB gamma correction will be enabled
on writes to this texture. You should remember to include the opposite sRGB
conversion when you read this texture back in another material, such as a quad.
This option will automatically be enabled if you use a render scene pass on this
texture and the viewport on which the compositor is based has sRGB write
support enabled.
no fsaa If present, this directive disables the use of anti-aliasing on this texture. FSAA
is only used if this texture is subject to a render scene pass and FSAA was
enabled on the original viewport on which this compositor is based; this option
allows you to override it and disable the FSAA if you wish.
scope If present, this directive sets the scope for the texture for being accessed by other
compositors using the [compositor texture ref], page 111 directive. There are
three options: 'local_scope' (which is also the default) means that only the
compositor defining the texture can access it, 'chain_scope' means that the
compositors after this compositor in the chain can reference its textures, and
'global_scope' means that the entire application can access the texture. This
directive also affects the creation of the textures (global textures are created
once and thus can’t be used with the pooled directive, and can’t rely on viewport
size).
Example: texture rt0 512 512 PF_R8G8B8A8
Example: texture rt1 target_width target_height PF_FLOAT32_RGB
You can in fact repeat this element if you wish. If you do so, that means that this
render texture becomes a Multiple Render Target (MRT), where the GPU writes to multiple
textures at once. It is imperative that, if you use MRT, the shaders that render to it
render to ALL the targets. Not doing so can cause undefined results. It is also important
to note that although you can use different pixel formats for each target in a MRT, each
one should have the same total bit depth since most cards do not support independent bit
depths. If you try to use this feature on cards that do not support the number of MRTs
you’ve asked for, the technique will be skipped (so you ought to write a fallback technique).
Example: texture mrt_output target_width target_height PF_FLOAT16_RGBA
PF_FLOAT16_RGBA chain_scope
texture ref
This declares a reference of a texture from another compositor to be used in this compositor.
Format: texture_ref <Local Name> <Reference Compositor> <Reference Texture Name>
Here is a description of the parameters:
Local Name
A name to give the referenced texture, which must be unique within this com-
positor. This name is used to reference the texture in Section 3.2.2 [Compositor
Target Passes], page 112, when the texture is rendered to, and in Section 3.2.3
[Compositor Passes], page 114, when the texture is used as input to a material
rendering a fullscreen quad.
Reference Compositor
The name of the compositor that we are referencing a texture from
Reference Texture Name
The name of the texture in the compositor that we are referencing
Make sure that the texture being referenced is scoped accordingly (either chain or global
scope) and placed accordingly during chain creation (if referencing a chain-scoped texture,
the compositor must be present in the chain and placed before the compositor referencing
it).
Example: texture_ref GBuffer GBufferCompositor mrt_output
scheme
This gives a compositor technique a scheme name, allowing you to manually switch be-
tween different techniques for this compositor when instantiated on a viewport by calling
CompositorInstance::setScheme.
compositor logic
This connects between a compositor and code that it requires in order to function correctly.
When an instance of this compositor will be created, the compositor logic will be notified
and will have the chance to prepare the compositor’s operation (for example, adding a
listener).
3.2.2 Target Passes
There are two types of target pass: the sort that updates a render texture:
Format: target <Name> { }
... and the sort that defines the final output render:
Format: target_output { }
The contents of both are identical, the only real difference is that you can only have a
single target output entry, whilst you can have many target entries. Here are the attributes
you can use in a ’target’ or ’target output’ section of a .compositor script:
• [compositor target input], page 112
• [only initial], page 113
• [visibility mask], page 113
• [compositor lod bias], page 113
• [material scheme], page 114
• [compositor shadows], page 113
• Section 3.2.3 [Compositor Passes], page 114
Attribute Descriptions
input
Sets input mode of the target, which tells the target pass what is pulled in before any of its
own passes are rendered.
none The target will have nothing as input, all the contents of the target must be
generated using its own passes. Note this does not mean the target will be
empty, just no data will be pulled in. For it to truly be blank you’d need a
’clear’ pass within this target.
previous The target will pull in the previous contents of the viewport. This will be either
the original scene if this is the first compositor in the chain, or it will be the
output from the previous compositor in the chain if the viewport has multiple
compositors enabled.
only initial
If set to on, this target pass will only execute once initially after the effect has been enabled.
This could be useful to perform once-off renders, after which the static contents are used
by the rest of the compositor.
visibility mask
Sets the visibility mask for any render scene passes performed in this target pass. This
is a bitmask (although it must be specified as decimal, not hex) and maps to
SceneManager::setVisibilityMask. Format: visibility_mask <mask>
lod bias
Set the scene LOD bias for any render scene passes performed in this target pass. The
default is 1.0, everything below that means lower quality, higher means higher quality.
shadows
Sets whether shadows should be rendered during any render scene pass performed in this
target pass. The default is ’on’.
Default: shadows on
material scheme
If set, indicates the material scheme to use for any render scene pass. Useful for performing
special-case rendering effects.
Default: None
3.2.3 Compositor Passes
A target pass is made up of one or more individual passes.
Format: 'pass' (render_quad | clear | stencil | render_scene | render_custom) [custom
name]
material
For passes of type ’render quad’, sets the material used to render the quad. You
will want to use shaders in this material to perform fullscreen effects, and use the
[compositor pass input], page 115 attribute to map other texture targets into the texture
bindings needed by this material.
input
For passes of type ’render quad’, this is how you map one or more local render textures
(See [compositor texture], page 109) into the material you’re using to render the fullscreen
quad. To bind more than one texture, repeat this attribute with different sampler indexes.
Format: input <sampler> <Name> [<MRTIndex>]
sampler The texture sampler to set, must be a number in the range [0,
OGRE_MAX_TEXTURE_LAYERS - 1].
Name The name of the local render texture to bind, as declared in
[compositor texture], page 109 and rendered to in one or more
Section 3.2.2 [Compositor Target Passes], page 112.
MRTIndex
If the local texture that you’re referencing is a Multiple Render Target (MRT),
this identifies the surface from the MRT that you wish to reference (0 is the
first surface, 1 the second etc).
Example: input 0 rt0
identifier
Associates a numeric identifier with the pass. This is useful for registering a listener with
the compositor (CompositorInstance::addListener), and being able to identify which pass it
is that’s being processed when you get events regarding it. Numbers between 0 and 2^32
are allowed.
Default: identifier 0
material scheme
If set, indicates the material scheme to use for this pass only. Useful for performing special-
case rendering effects.
This will overwrite the scheme if set at the target scope as well.
Default: None
Clear Section
For passes of type ’clear’, this section defines the buffer clearing parameters.
Format: clear
Here are the attributes you can use in a ’clear’ section of a .compositor script:
• [compositor clear buffers], page 117
• [compositor clear colour value], page 117
• [compositor clear depth value], page 117
• [compositor clear stencil value], page 117
buffers
Sets the buffers cleared by this pass.
colour value
Set the colour used to fill the colour buffer by this pass, if the colour buffer is being
cleared ([compositor clear buffers], page 117).
depth value
Set the depth value used to fill the depth buffer by this pass, if the depth buffer is being
cleared ([compositor clear buffers], page 117).
stencil value
Set the stencil value used to fill the stencil buffer by this pass, if the stencil buffer is
being cleared ([compositor clear buffers], page 117).
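Putting these attributes together, a clear pass inside a target pass might look like this (a sketch; the buffer list and values are illustrative):
target rt0
{
input none
pass clear
{
buffers colour depth
colour_value 0 0 0 1
depth_value 1.0
}
}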
Stencil Section
For passes of type ’stencil’, this section defines the stencil operation parameters.
Format: stencil
Here are the attributes you can use in a ’stencil’ section of a .compositor script:
• [compositor stencil check], page 118
• [compositor stencil comp func], page 118
• [compositor stencil ref value], page 118
• [compositor stencil mask], page 119
• [compositor stencil fail op], page 119
• [compositor stencil depth fail op], page 119
• [compositor stencil pass op], page 120
• [compositor stencil two sided], page 120
check
Enables or disables the stencil check, thus enabling the use of the rest of the features
in this section. The rest of the options in this section do nothing if the stencil check is
off. Format: check (on | off)
comp func
Sets the function used to perform the following comparison:
(ref value & mask) comp func (Stencil Buffer Value & mask)
What happens as a result of this comparison will be one of 3 actions on the stencil
buffer, depending on whether the test fails, succeeds but with the depth buffer check
still failing, or succeeds with the depth buffer check passing too. You set the actions
in the [compositor stencil fail op], page 119, [compositor stencil depth fail op],
page 119 and [compositor stencil pass op], page 120 respectively. If the stencil check
fails, no colour or depth are written to the frame buffer.
Format: comp_func (always_fail | always_pass | less | less_equal | not_equal |
greater_equal | greater)
ref value
Sets the reference value used to compare with the stencil buffer as described in
[compositor stencil comp func], page 118.
mask
Sets the mask used to compare with the stencil buffer as described in
[compositor stencil comp func], page 118.
fail op
Sets what to do with the stencil buffer value if the result of the stencil comparison
([compositor stencil comp func], page 118) and depth comparison is that both fail.
depth fail op
Sets what to do with the stencil buffer value if the result of the stencil comparison
([compositor stencil comp func], page 118) passes but the depth comparison fails.
Format: depth_fail_op (keep | zero | replace | increment | decrement | increment_wrap
| decrement_wrap | invert)
pass op
Sets what to do with the stencil buffer value if the result of the stencil comparison
([compositor stencil comp func], page 118) and the depth comparison pass.
two sided
Enables or disables two-sided stencil operations, which means the inverse of the oper-
ations applies to back-facing polygons.
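A sketch of a stencil pass using these attributes (the values are illustrative):
pass stencil
{
check on
comp_func always_pass
ref_value 1
mask 255
fail_op keep
depth_fail_op keep
pass_op replace
two_sided off
}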
3.2.4 Applying a Compositor
To attach a compositor instance to a viewport in code, call:
CompositorManager::getSingleton().addCompositor(viewport, compositorName);
Where viewport is a pointer to your viewport, and compositorName is the name of the
compositor to create an instance of. By doing this, a new instance of a compositor will
be added to a new compositor chain on that viewport. You can call the method multiple
times to add further compositors to the chain on this viewport. By default, each compositor
which is added is disabled, but you can change this state by calling:
CompositorManager::getSingleton().setCompositorEnabled(viewport, compositorName, true);
For more information on defining and using compositors, see Demo Compositor in the
Samples area, together with the Examples.compositor script in the media area.
3.3 Particle Scripts
Particle system scripts allow you to define particle system templates which can later be
instantiated from your code.
Loading scripts
Particle system scripts are loaded at initialisation time by the system: by default it looks in
all common resource locations (see Root::addResourceLocation) for files with the ’.particle’
extension and parses them. If you want to parse files with a different extension, use the Par-
ticleSystemManager::getSingleton().parseAllSources method with your own extension, or if
you want to parse an individual file, use ParticleSystemManager::getSingleton().parseScript.
Once scripts have been parsed, your code is free to instantiate systems based on them
using the SceneManager::createParticleSystem() method which can take both a name for
the new system, and the name of the template to base it on (this template name is in the
script).
Format
Several particle systems may be defined in a single script. The script format is pseudo-C++,
with sections delimited by curly braces ('{', '}'), and comments indicated by starting a line with
'//' (nested comments are not allowed). The general format is shown below in a
typical example:
// A sparkly purple fountain
particle_system Examples/PurpleFountain
{
material Examples/Flare2
particle_width 20
particle_height 20
cull_each false
quota 10000
billboard_type oriented_self
// Area emitter
emitter Point
{
angle 15
emission_rate 75
time_to_live 3
direction 0 1 0
velocity_min 250
velocity_max 300
colour_range_start 1 0 0
colour_range_end 0 0 1
}
// Gravity
affector LinearForce
{
force_vector 0 -100 0
force_application add
}
// Fader
affector ColourFader
{
red -0.25
green -0.25
blue -0.25
}
}
Every particle system in the script must be given a name, which is the line before the
first opening '{'; in the example this is 'Examples/PurpleFountain'. This name must be
globally unique. It can include path characters (as in the example) to logically divide up
your particle systems, and also to avoid duplicate names, but the engine does not treat the
name as hierarchical, just as a string.
A system can have top-level attributes set using the scripting commands available, such
as ’quota’ to set the maximum number of particles allowed in the system. Emitters (which
create particles) and affectors (which modify particles) are added as nested definitions within
the script. The parameters available in the emitter and affector sections are entirely depen-
dent on the type of emitter / affector.
For a detailed description of the core particle system attributes, see the list below:
quota
Sets the maximum number of particles this system is allowed to contain at one time. When
this limit is exhausted, the emitters will not be allowed to emit any more particles until
some are destroyed (e.g. through their time to live running out). Note that you will almost
always want to change this, since it defaults to a very low value (particle pools are only
ever increased in size, never decreased).
material
Sets the name of the material which all particles in this system will use. All particles in a
system use the same material, although each particle can tint this material through the use
of its colour property.
particle width
Sets the width of particles in world coordinates. Note that this property is absolute when
billboard type (see below) is set to ’point’ or ’perpendicular self’, but is scaled by the
length of the direction vector when billboard type is ’oriented common’, ’oriented self’ or
’perpendicular common’.
particle height
Sets the height of particles in world coordinates. Note that this property is absolute when
billboard type (see below) is set to ’point’ or ’perpendicular self’, but is scaled by the
length of the direction vector when billboard type is ’oriented common’, ’oriented self’ or
’perpendicular common’.
cull each
All particle systems are culled by the bounding box which contains all the particles in the
system. This is normally sufficient for fairly locally constrained particle systems where
most particles are either visible or not visible together. However, for those that spread
particles over a wider area (e.g. a rain system), you may want to actually cull each particle
individually to save on time, since it is far more likely that only a subset of the particles
will be visible. You do this by setting the cull each parameter to true.
renderer
Particle systems do not render themselves, they do it through ParticleRenderer classes.
Those classes are registered with a manager in order to provide particle systems with a
particular ’look’. OGRE comes configured with a default billboard-based renderer, but
more can be added through plugins. Particle renderers are registered with a unique name,
and you can use that name in this attribute to determine the renderer to use. The default
is ’billboard’.
Particle renderers can have attributes, which can be passed by setting them on the root
particle system.
sorted
By default, particles are not sorted. By setting this attribute to ’true’, the particles will be
sorted with respect to the camera, furthest first. This can make certain rendering effects
look better at a small sorting expense.
local space
By default, particles are emitted into world space, such that if you transform the node to
which the system is attached, it will not affect the particles (only the emitters). This tends
to give the normal expected behaviour, which is to model how real world particles travel
independently of the objects they are emitted from. However, to create some effects you
may want the particles to remain attached to the local space the emitter is in and to follow
it directly. This option allows you to do that.
billboard type
This is actually an attribute of the ’billboard’ particle renderer (the default), and is an
example of passing attributes to a particle renderer by declaring them directly within the
system declaration. Particles using the default renderer are rendered using billboards, which
are rectangles formed by 2 triangles which rotate to face the given direction. However, there
is more than 1 way to orient a billboard. The classic approach is for the billboard to directly
face the camera: this is the default behaviour. However this arrangement only looks good
for particles which are representing something vaguely spherical like a light flare. For more
linear effects like laser fire, you actually want the particle to have an orientation of its own.
format: billboard_type <point|oriented_common|oriented_self|perpendicular_common|perpendicular_self>
default: point
billboard origin
Specifies the point which acts as the origin for all billboard particles, i.e. it fine-tunes where
a billboard particle appears in relation to its position.
format: billboard_origin <top_left|top_center|top_right|center_left|center|center_right|bottom_left|bottom_center|bottom_right>
default: center
billboard rotation type
This is an attribute of the 'billboard' particle renderer which sets whether rotating particles
rotate their vertices or only their texture coordinates. The options are:
vertex Billboard particles will rotate their vertices around the facing direction according
to the particle rotation. Rotating the vertices guarantees that texture corners exactly
match billboard corners, so no part of the texture is cut off, but it takes more time
to generate the vertices.
texcoord Billboard particles will rotate their texture coordinates according to the particle
rotation. Rotating texture coordinates is faster than rotating vertices, but parts of
the texture may be cut off as the particle rotates.
format: billboard_rotation_type <vertex|texcoord>
default: texcoord
common direction
Only required if [billboard type], page 125 is set to oriented common or
perpendicular common, this vector is the common direction vector used to orient all particles in the
system.
See also: Section 3.3.2 [Particle Emitters], page 130, Section 3.3.5 [Particle Affectors],
page 137
common up vector
Only required if [billboard type], page 125 is set to perpendicular self or
perpendicular common, this vector is the common up vector used to orient all particles in the system.
See also: Section 3.3.2 [Particle Emitters], page 130, Section 3.3.5 [Particle Affectors],
page 137
point rendering
This is actually an attribute of the ’billboard’ particle renderer (the default), and sets
whether or not the BillboardSet will use point rendering rather than manually generated
quads.
Using point rendering is faster than generating quads manually, but is more restrictive.
The following restrictions apply:
• Only the ’point’ orientation type is supported
• Size and appearance of each particle is controlled by the material pass ([point size],
page 44, [point size attenuation], page 45, [point sprites], page 44)
You will almost certainly want to enable both point attenuation
and point sprites in your material pass if you use this option.
accurate facing
This is actually an attribute of the ’billboard’ particle renderer (the default), and sets
whether or not the BillboardSet will use a slower but more accurate calculation for facing
the billboard to the camera. By default it uses the camera direction, which is faster but
means the billboards don’t stay in the same orientation as you rotate the camera. The
’accurate facing true’ option makes the calculation based on a vector from each billboard
to the camera, which means the orientation is constant even whilst the camera rotates.
iteration interval
Usually particle systems are updated based on the frame rate; however this can give variable
results with more extreme frame rate ranges, particularly at lower frame rates. You can use
this option to make the update frequency a fixed interval, whereby at lower frame rates,
the particle update will be repeated at the fixed interval until the frame time is used up. A
value of 0 means the default frame time iteration.
nonvisible update timeout
This option lets you set a 'timeout' on the particle system, so that if it isn't visible for
this amount of time, it will stop updating until it is next visible. A value of 0 disables the
timeout and always updates.
It is also possible to ’emit emitters’ - that is, have new emitters spawned based on the
position of particles. See [Emitting Emitters], page 137
See also: Section 3.3 [Particle Scripts], page 121, Section 3.3.5 [Particle Affectors], page 137
3.3.3 Particle Emitter Attributes
This section describes the attributes common to all particle emitters.
angle
Sets the maximum angle (in degrees) which emitted particles may deviate from the direction
of the emitter (see direction). Setting this to 10 allows particles to deviate up to 10 degrees
in any direction away from the emitter’s direction. A value of 180 means emit in any
direction, whilst 0 means emit always exactly in the direction of the emitter.
colour
Sets a static colour for all particle emitted. Also see the colour range start and
colour range end attributes for setting a range of colours. The format of the colour
parameter is "r g b a", where each component is a value from 0 to 1, and the alpha value
is optional (assumes 1 if not specified).
colour range start & colour range end
As 'colour', but these attributes set a range; each particle is emitted with a random colour
anywhere between the two values.
format: as colour
example (generates random colours between red and blue):
colour_range_start 1 0 0
colour_range_end 0 0 1
default: both 1 1 1 1
direction
Sets the direction of the emitter. This is relative to the SceneNode which the particle system
is attached to, meaning that as with other movable objects changing the orientation of the
node will also move the emitter.
emission rate
Sets how many particles per second should be emitted. The specific emitter does not have
to emit these in a continuous burst - this is a relative parameter and the emitter may choose
to emit all of the second’s worth of particles every half-second for example, the behaviour
depends on the emitter. The emission rate will also be limited by the particle system’s
’quota’ setting.
position
Sets the position of the emitter relative to the SceneNode the particle system is attached
to.
velocity
Sets a constant velocity for all particles at emission time. See also the velocity min and
velocity max attributes which allow you to set a range of velocities instead of a fixed one.
velocity min & velocity max
As 'velocity', but these attributes set a range; each particle is emitted with a random velocity
between the two values.
format: as velocity
example:
velocity_min 50
velocity_max 100
default: both 1
time to live
Sets the number of seconds each particle will ’live’ for before being destroyed. NB it is
possible for particle affectors to alter this in flight, but this is the value given to particles
on emission. See also the time to live min and time to live max attributes which let you
set a lifetime range instead of a fixed one.
duration
Sets the number of seconds the emitter is active. The emitter can be started again, see
[repeat delay], page 134. A value of 0 means infinite duration. See also the duration min
and duration max attributes which let you set a duration range instead of a fixed one.
duration min & duration max
As 'duration', but these attributes set a range; the emitter picks a random duration between
the two values.
format: as duration
example:
duration_min 2
duration_max 5
default: both 0
repeat delay
Sets the number of seconds to wait before the emission is repeated when stopped by a limited
[duration], page 133. See also the repeat delay min and repeat delay max attributes which
allow you to set a range of repeat delays instead of a fixed one.
repeat delay min & repeat delay max
As 'repeat delay', but these attributes set a range; the emitter picks a random delay between
the two values.
format: as repeat delay
example:
repeat_delay_min 2
repeat_delay_max 5
default: both 0
See also: Section 3.3.4 [Standard Particle Emitters], page 135, Section 3.3 [Particle
Scripts], page 121, Section 3.3.5 [Particle Affectors], page 137
3.3.4 Standard Particle Emitters
Point Emitter
This emitter emits particles from a single point, which is its position. This emitter has no
additional attributes over and above the standard emitter attributes.
To create a point emitter, include a section like this within your particle system script:
emitter Point
{
// Settings go here
}
Box Emitter
This emitter emits particles from a random location within a 3-dimensional box. Its extra
attributes are:
width Sets the width of the box (this is the size of the box along its local X axis,
which is dependent on the 'direction' attribute which forms the box's local Z).
format: width <units>
example: width 250
default: 100
height Sets the height of the box (this is the size of the box along its local Y axis,
which is dependent on the 'direction' attribute which forms the box's local Z).
format: height <units>
example: height 250
default: 100
depth Sets the depth of the box (this is the size of the box along its local Z axis,
which is the same as the 'direction' attribute).
format: depth <units>
example: depth 250
default: 100
To create a box emitter, include a section like this within your particle system script:
emitter Box
{
// Settings go here
}
Cylinder Emitter
This emitter emits particles in a random direction from within a cylinder area, where the
cylinder is oriented along the Z-axis. This emitter has exactly the same parameters as the
[Box Emitter], page 135, so there are no additional parameters to consider here - the width
and height determine the shape of the cylinder along its axis (if they are different it is an
elliptical cylinder), and the depth determines the length of the cylinder.
Ellipsoid Emitter
This emitter emits particles from within an ellipsoid shaped area, i.e. a sphere or squashed-
sphere area. The parameters are again identical to the [Box Emitter], page 135, except that
the dimensions describe the widest points along each of the axes.
Hollow Ellipsoid Emitter
This emitter is just like the Ellipsoid Emitter, except that there is a hollow area in the centre
of the ellipsoid from which no particles are emitted, so it takes three extra parameters:
inner width
The width of the inner area which does not emit any particles.
inner height
The height of the inner area which does not emit any particles.
inner depth
The depth of the inner area which does not emit any particles.
Ring Emitter
This emitter emits particles from a ring-shaped area, i.e. a little like [Hollow Ellipsoid
Emitter], page 136 except only in 2 dimensions.
inner width
The width of the inner area which does not emit any particles.
inner height
The height of the inner area which does not emit any particles.
See also: Section 3.3 [Particle Scripts], page 121, Section 3.3.2 [Particle Emitters],
page 130
Emitting Emitters
It is possible to spawn new emitters on the expiry of particles, for example to produce
’firework’ style effects. This is controlled via the following directives:
emit emitter quota
This parameter is a system-level parameter telling the system how many emitted
emitters may be in use at any one time. This is just to allow for the space
allocation process.
name This parameter is an emitter-level parameter, giving a name to an emitter. This
can then be referred to in another emitter as the new emitter type to spawn
when an emitted particle dies.
emit emitter
This is an emitter-level parameter, and if specified, it means that when particles
emitted by this emitter die, they spawn a new emitter of the named type.
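A sketch of how these directives fit together in a script (the emitter name and the specific settings are purely illustrative):
particle_system Examples/Fireworks
{
material Examples/Flare2
quota 1000
emit_emitter_quota 10
// The launcher: when its particles die, they spawn 'explosion' emitters
emitter Point
{
emit_emitter explosion
direction 0 1 0
velocity 200
time_to_live 2
emission_rate 2
}
// The emitted emitter, referenced by name above
emitter Point
{
name explosion
angle 180
velocity 50
time_to_live 1
emission_rate 100
duration 0.5
}
}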
3.3.5 Particle Affectors
Particle affectors modify particles in flight. They actually have no universal attributes; they
are all specific to the type of affector.
See also: Section 3.3.6 [Standard Particle Affectors], page 138, Section 3.3 [Particle
Scripts], page 121, Section 3.3.2 [Particle Emitters], page 130
3.3.6 Standard Particle Affectors
LinearForce Affector
This affector applies a force vector to all particles to modify their trajectory. Its extra
attributes are:
force vector
Sets the vector for the force to be applied to every particle. The magnitude of
this vector determines how strong the force is.
format: force_vector <x> <y> <z>
example: force_vector 50 0 -50
default: 0 -100 0 (a fair gravity effect)
force application
Sets the way in which the force vector is applied to particle momentum.
format: force_application <add|average>
example: force_application average
default: add
The options are:
average The resulting momentum is the average of the force vector and the
particle’s current motion. Is self-stabilising but the speed at which
the particle changes direction is non-linear.
add The resulting momentum is the particle’s current motion plus the
force vector. This is traditional force acceleration but can poten-
tially result in unlimited velocity.
To create a linear force affector, include a section like this within your particle system script:
affector LinearForce
{
// Settings go here
}
Please note that the name of the affector type (’LinearForce’) is case-sensitive.
ColourFader Affector
This affector modifies the colour of particles in flight. Its extra attributes are:
red Sets the adjustment to be made to the red component of the particle colour per
second.
format: red <delta value>
example: red -0.1
default: 0
green Sets the adjustment to be made to the green component of the particle colour
per second.
format: green <delta value>
example: green -0.1
default: 0
blue Sets the adjustment to be made to the blue component of the particle colour
per second.
format: blue <delta value>
example: blue -0.1
default: 0
alpha Sets the adjustment to be made to the alpha component of the particle colour
per second.
format: alpha <delta value>
example: alpha -0.1
default: 0
To create a colour fader affector, include a section like this within your particle system
script:
affector ColourFader
{
// Settings go here
}
ColourFader2 Affector
This affector is similar to the [ColourFader Affector], page 139, except it introduces two
states of colour changes as opposed to just one. The second colour change state is activated
once a specified amount of time remains in the particles life.
red1 Sets the adjustment to be made to the red component of the particle colour per
second for the first state.
format: red1 <delta value>
example: red1 -0.1
default: 0
green1 Sets the adjustment to be made to the green component of the particle colour
per second for the first state.
format: green1 <delta value>
example: green1 -0.1
default: 0
blue1 Sets the adjustment to be made to the blue component of the particle colour
per second for the first state.
format: blue1 <delta value>
example: blue1 -0.1
default: 0
alpha1 Sets the adjustment to be made to the alpha component of the particle colour
per second for the first state.
format: alpha1 <delta value>
example: alpha1 -0.1
default: 0
red2 Sets the adjustment to be made to the red component of the particle colour per
second for the second state.
format: red2 <delta value>
example: red2 -0.1
default: 0
green2 Sets the adjustment to be made to the green component of the particle colour
per second for the second state.
format: green2 <delta value>
example: green2 -0.1
default: 0
blue2 Sets the adjustment to be made to the blue component of the particle colour
per second for the second state.
format: blue2 <delta value>
example: blue2 -0.1
default: 0
alpha2 Sets the adjustment to be made to the alpha component of the particle colour
per second for the second state.
format: alpha2 <delta value>
example: alpha2 -0.1
default: 0
state_change
When a particle has this much time left to live, it will switch to state 2.
format: state_change <seconds>
example: state_change 2
default: 1
To create a ColourFader2 affector, include a section like this within your particle system
script:
affector ColourFader2
{
// Settings go here
}
Scaler Affector
This affector scales particles in flight. Its extra attributes are:
rate The amount by which to scale the particles in both the x and y direction per
second.
To create a scale affector, include a section like this within your particle system script:
affector Scaler
{
// Settings go here
}
Rotator Affector
This affector rotates particles in flight. This is done by rotating the texture. Its extra
attributes are:
rotation_speed_range_start
The start of a range of rotation speeds to be assigned to emitted particles.
format: rotation_speed_range_start <degrees per second>
example: rotation_speed_range_start 90
default: 0
To create a rotate affector, include a section like this within your particle system script:
affector Rotator
{
// Settings go here
}
ColourInterpolator Affector
Similar to the ColourFader and ColourFader2 affectors, this affector modifies the colour
of particles in flight, except it has a variable number of defined stages. It swaps the particle
colour for several stages in the life of a particle and interpolates between them. Its extra
attributes are:
time0 The point in time of stage 0.
format: time0 <0-1 based on lifetime>
example: time0 0
default: 1
[...]
The number of stages is variable. The maximal number of stages is 6; where time5 and
colour5 are the last possible parameters. To create a colour interpolation affector, include
a section like this within your particle system script:
affector ColourInterpolator
{
// Settings go here
}
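For instance, a three-stage fade from red through yellow to fully transparent could look like
the following (a sketch; it assumes the companion colour<n> attributes, elided above, take
<r> <g> <b> <a> values):
affector ColourInterpolator
{
    time0 0
    colour0 1 0 0 1
    time1 0.5
    colour1 1 1 0 1
    time2 1
    colour2 1 1 0 0
}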
ColourImage Affector
This is another affector that modifies the colour of particles in flight, but instead of pro-
grammatically defining colours, the colours are taken from a specified image file. The range
of colour values begins from the left side of the image and move to the right over the life-
time of the particle, therefore only the horizontal dimension of the image is used. Its extra
attributes are:
image The name of the image file whose (horizontal) colour range is applied to the
particles over their lifetime.
format: image <image name>
example: image rainbow.png
default: none
To create a ColourImage affector, include a section like this within your particle system
script:
affector ColourImage
{
// Settings go here
}
DeflectorPlane Affector
This affector defines a plane which deflects particles which collide with it. The attributes
are:
plane_point
A point on the deflector plane. Together with the normal vector it defines the
plane.
default: plane_point 0 0 0
plane_normal
The normal vector of the deflector plane. Together with the point it defines the
plane.
default: plane_normal 0 1 0
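To create a deflector plane affector, include a section like this within your particle system
script (a sketch: the values are illustrative, and the bounce attribute - assumed here from
the affector's implementation - controls how much velocity is retained on deflection):
affector DeflectorPlane
{
    // a horizontal 'ground' plane through the origin
    plane_point 0 0 0
    plane_normal 0 1 0
    bounce 0.5
}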
DirectionRandomiser Affector
This affector applies randomness to the movement of the particles. Its extra attributes are:
randomness
The amount of randomness to introduce in each axial direction.
example: randomness 5
default: randomness 1
keep_velocity
Determines whether the velocity of particles is unchanged.
example: keep_velocity true
default: keep_velocity false
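To create a direction randomiser affector, include a section like this within your particle
system script (values are illustrative):
affector DirectionRandomiser
{
    randomness 10
    keep_velocity true
}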
Loading scripts
Overlay scripts are loaded at initialisation time by the system: by default it looks in all
common resource locations (see Root::addResourceLocation) for files with the ’.overlay’
extension and parses them. If you want to parse files with a different extension, use the
OverlayManager::getSingleton().parseAllSources method with your own extension, or if you
want to parse an individual file, use OverlayManager::getSingleton().parseScript.
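For instance, parsing a single overlay script that uses a non-standard extension might look
like this (a minimal sketch; the file name and resource group are assumptions, and the
parseScript call shown is the ScriptLoader-style stream/group overload):
// Open the script through the resource system and hand it to the OverlayManager
Ogre::DataStreamPtr stream =
    Ogre::ResourceGroupManager::getSingleton().openResource(
        "MyGui.hud", "General");   // hypothetical file and group
Ogre::OverlayManager::getSingleton().parseScript(stream, "General");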
Format
Several overlays may be defined in a single script. The script format is pseudo-C++, with
sections delimited by curly braces ('{}'), comments indicated by starting a line with '//' (note:
nested comments are not allowed), and inheritance through the use of templates. The
general format is shown below in a typical example:
// The name of the overlay comes first
MyOverlays/ANewOverlay
{
zorder 200
container Panel(MyOverlayElements/TestPanel)
{
// Center it horizontally, put it at the top
left 0.25
top 0
width 0.5
height 0.1
material MyMaterials/APanelMaterial
// (a second Panel, nested inside this one, is omitted here for brevity)
}
}
The above example defines a single overlay called ’MyOverlays/ANewOverlay’, with
2 panels in it, one nested under the other. It uses relative metrics (the default if no
metrics mode option is found).
Every overlay in the script must be given a name, which is the line before the first
opening '{'. This name must be globally unique. It can include path characters (as in the
example) to logically divide up your overlays, and also to avoid duplicate names, but the
engine does not treat the name as hierarchical, just as a string. Within the braces are the
properties of the overlay, and any nested elements. The overlay itself only has a single
property 'zorder' which determines how 'high' it is in the stack of overlays if more than one
is displayed at the same time. Overlays with higher zorder values are displayed on top.
The element and container blocks are largely identical, apart from the container's ability to
store nested blocks.
type name
Must resolve to the name of an OverlayElement type which has been registered
with the OverlayManager. Plugins register with the OverlayManager to ad-
vertise their ability to create elements, and at this time advertise the name of
the type. OGRE comes preconfigured with types ’Panel’, ’BorderPanel’ and
’TextArea’.
instance name
Must be a name unique among all other elements / containers by which to
identify the element. Note that you can obtain a pointer to any named element
by calling OverlayManager::getSingleton().getOverlayElement(name).
template name
Optional template on which to base this item. See templates.
The properties which can be included within the braces depend on the custom type.
However the following are always valid:
• [metrics_mode], page 149
• [horz_align], page 149
• [vert_align], page 150
• [left], page 150
• [top], page 151
• [width], page 151
• [height], page 151
• [overlay material], page 152
• [caption], page 152
Templates
You can use templates to create numerous elements with the same properties. A template is
an abstract element and it is not added to an overlay. It acts as a base class that elements can
inherit and get its default properties. To create a template, the keyword ’template’ must be
the first word in the element definition (before container or element). The template element
is created in the topmost scope - it is NOT specified in an Overlay. It is recommended that
you define templates in a separate overlay though this is not essential. Having templates
defined in a separate file will allow different look & feels to be easily substituted.
Elements can inherit a template in a similar way to C++ inheritance - by using the :
operator on the element definition. The : operator is placed after the closing bracket of the
name (separated by a space). The name of the template to inherit is then placed after the
: operator (also separated by a space).
A template can contain template children which are created when the template is sub-
classed and instantiated. Using the template keyword for the children of a template is
optional but recommended for clarity, as the children of a template are always going to be
templates themselves.
left 0.82
top 0.45
width 0.16
height 0.13
material Core/StatsBlockCenter
border_up_material Core/StatsBlockBorder/Up
border_down_material Core/StatsBlockBorder/Down
}
template element TextArea(MyTemplates/BasicText)
{
font_name Ogre
char_height 0.08
colour_top 1 1 0
colour_bottom 1 0.2 0.2
left 0.03
top 0.02
width 0.12
height 0.09
}
MyOverlays/AnotherOverlay
{
zorder 490
container BorderPanel(MyElements/BackPanel) : MyTemplates/BasicBorderPanel
{
left 0
top 0
width 1
height 1
The above example uses templates to define a button. Note that the button template
inherits from the BorderPanel template. This reduces the number of attributes needed to
instantiate a button.
Also note that instantiating a Button requires a template name for the caption attribute.
So templates can also be used by elements that need dynamic creation of child elements
(the button creates a TextAreaElement in this case for its caption).
See Section 3.4.1 [OverlayElement Attributes], page 149, Section 3.4.2 [Standard Over-
layElements], page 153
metrics_mode
Sets the units which will be used to size and position this element.
This can be used to change the way that all measurement attributes in the rest of this
element are interpreted. In relative mode, they are interpreted as being a parametric value
from 0 to 1, as a proportion of the width / height of the screen. In pixels mode, they are
simply pixel offsets.
horz_align
Sets the horizontal alignment of this element, in terms of where the horizontal origin is.
This can be used to change where the origin is deemed to be for the purposes of any
horizontal positioning attributes of this element. By default the origin is deemed to be the
left edge of the screen, but if you change this you can center or right-align your elements.
Note that setting the alignment to center or right does not automatically force your elements
to appear in the center or the right edge, you just have to treat that point as the origin
and adjust your coordinates appropriately. This is more flexible because you can choose to
position your element anywhere relative to that origin. For example, if your element was
10 pixels wide, you would use a ’left’ property of -10 to align it exactly to the right edge,
or -20 to leave a gap but still make it stick to the right edge.
Note that you can use this property in both relative and pixel modes, but it is most
useful in pixel mode.
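For example, a 10-pixel wide element hugged to the right edge of the screen could be
declared like this (a sketch; the element and material names are illustrative):
container Panel(Examples/RightEdgePanel)
{
    metrics_mode pixels
    horz_align right
    // the element is 10 pixels wide, so left -10 places it exactly on the right edge
    left -10
    top 0
    width 10
    height 64
    material Examples/PanelMaterial
}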
vert_align
Sets the vertical alignment of this element, in terms of where the vertical origin is.
This can be used to change where the origin is deemed to be for the purposes of any
vertical positioning attributes of this element. By default the origin is deemed to be the
top edge of the screen, but if you change this you can center or bottom-align your elements.
Note that setting the alignment to center or bottom does not automatically force your
elements to appear in the center or the bottom edge, you just have to treat that point as
the origin and adjust your coordinates appropriately. This is more flexible because you
can choose to position your element anywhere relative to that origin. For example, if your
element was 50 pixels high, you would use a ’top’ property of -50 to align it exactly to the
bottom edge, or -70 to leave a gap but still make it stick to the bottom edge.
Note that you can use this property in both relative and pixel modes, but it is most
useful in pixel mode.
left
Sets the horizontal position of the element relative to its parent.
Positions are relative to the parent (the top-left of the screen if the parent is an overlay,
the top-left of the parent otherwise) and are expressed in terms of a proportion of screen
size. Therefore 0.5 is half-way across the screen.
Default: left 0
top
Sets the vertical position of the element relative to its parent.
Positions are relative to the parent (the top-left of the screen if the parent is an overlay,
the top-left of the parent otherwise) and are expressed in terms of a proportion of screen
size. Therefore 0.5 is half-way down the screen.
Default: top 0
width
Sets the width of the element as a proportion of the size of the screen.
Sizes are relative to the size of the screen, so 0.25 is a quarter of the screen. Sizes are
not relative to the parent; this is common in windowing systems where the top and left are
relative but the size is absolute.
Default: width 1
height
Sets the height of the element as a proportion of the size of the screen.
Sizes are relative to the size of the screen, so 0.25 is a quarter of the screen. Sizes are
not relative to the parent; this is common in windowing systems where the top and left are
relative but the size is absolute.
Default: height 1
material
Sets the name of the material to use for this element.
This sets the base material which this element will use. Each type of element may inter-
pret this differently; for example the OGRE element ’Panel’ treats this as the background
of the panel, whilst ’BorderPanel’ interprets this as the material for the center area only.
Materials should be defined in .material scripts.
Note that using a material in an overlay element automatically disables lighting and depth
checking on this material. Therefore you should not use the same material as is used for
real 3D objects for an overlay.
Default: none
caption
Sets a text caption for the element.
Not all elements support captions, so each element is free to disregard this if it wants.
However, a general text caption is so common to many elements that it is included in the
generic interface to make it simpler to use. This is a common feature in GUI systems.
Default: blank
rotation
Sets the rotation of the element.
Format: rotation <angle in degrees> <axis x> <axis y> <axis z>
Example: rotation 30 0 0 1
Default: none
This section describes how you define an element's custom attributes in an .overlay script, but
you can also change these custom properties in code if you wish. You do this by calling
setParameter(paramname, value). You may wish to use the StringConverter class to convert
your types to and from strings.
Panel (container)
This is the most bog-standard container you can use. It is a rectangular area which can
contain other elements (or containers) and may or may not have a background, which can
be tiled however you like. The background material is determined by the material attribute,
but is only displayed if transparency is off.
Attributes:
transparent <true | false>
If set to 'true' the panel is transparent and is not rendered itself, it is just used
as a grouping level for its children.
tiling <layer> <x tile> <y tile>
Sets the number of times the texture(s) of the material are tiled across the panel
in the x and y direction. <layer> is the texture layer, from 0 to the number of
texture layers in the material minus one. By setting tiling per layer you can
create some nice multitextured backdrops for your panels, this works especially
well when you animate one of the layers.
uv_coords <topleft u> <topleft v> <bottomright u> <bottomright v>
Sets the texture coordinates to use for this panel. An example panel definition
is shown below.
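A typical panel definition (shown here outside its enclosing overlay for brevity; the names
and values are illustrative) might look like:
container Panel(Examples/TiledBackdrop)
{
    left 0
    top 0
    width 1
    height 1
    material Examples/BackdropMaterial
    // tile texture layer 0 four times in each direction
    tiling 0 4 4
    uv_coords 0 0 1 1
    transparent false
}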
BorderPanel (container)
This is a slightly more advanced version of Panel, where instead of just a single flat panel, the
panel has a separate border which resizes with the panel. It does this by taking an approach
very similar to the use of HTML tables for bordered content: the panel is rendered as 9
square areas, with the center area being rendered with the main material (as with Panel)
and the outer 8 areas (the 4 corners and the 4 edges) rendered with a separate border
material. The advantage of rendering the corners separately from the edges is that the edge
textures can be designed so that they can be stretched without distorting them, meaning
the single texture can serve any size panel.
Attributes:
border_size <left> <right> <top> <bottom>
The size of the border at each edge, as a proportion of the size of the screen.
This lets you have different size borders at each edge if you like, or you can use
the same value 4 times to create a constant size border.
border_material <name>
The name of the material to use for the border. This is normally a different
material to the one used for the center area, because the center area is often
tiled which means you can’t put border areas in there. You must put all the
images you need for all the corners and the sides into a single texture.
border_topleft_uv <u1> <v1> <u2> <v2>
[also border_topright_uv, border_bottomleft_uv, border_bottomright_uv]; The
texture coordinates to be used for the corner areas of the border. 4 coordinates
are required, 2 for the top-left corner of the square, 2 for the bottom-right of
the square.
border_left_uv <u1> <v1> <u2> <v2>
[also border_right_uv, border_top_uv, border_bottom_uv]; The texture coordi-
nates to be used for the edge areas of the border. 4 coordinates are required,
2 for the top-left corner, 2 for the bottom-right. Note that you should design
the texture so that the left & right edges can be stretched / squashed vertically
and the top and bottom edges can be stretched / squashed horizontally without
detrimental effects.
TextArea (element)
This is a generic element that you can use to render text. It uses fonts which can be defined
in code using the FontManager and Font classes, or which have been predefined in .fontdef
files. See the font definitions section for more information.
Attributes:
font_name <name>
The name of the font to use. This font must be defined in a .fontdef file to
ensure it is available at scripting time.
char_height <height>
The height of the letters as a proportion of the screen height. Character widths
may vary because OGRE supports proportional fonts, but will be based on this
constant height.
colour <red> <green> <blue>
A solid colour to render the text in. Often fonts are defined in monochrome, so
this allows you to colour them in nicely and use the same texture for multiple
different coloured text areas. The colour elements should all be expressed as
values between 0 and 1. If you use predrawn fonts which are already full colour
then you don’t need this.
colour_bottom <red> <green> <blue> / colour_top <red> <green> <blue>
As an alternative to a solid colour, you can colour the text differently at the
top and bottom to create a gradient colour effect which can be very effective.
alignment <left | center | right>
Sets the horizontal alignment of the text. This is different from the horz_align
parameter.
space_width <width>
Sets the width of a space in relation to the screen. An example TextArea
definition is shown below.
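A simple TextArea definition (again shown outside its enclosing overlay, with illustrative
names and values) might look like:
element TextArea(Examples/StatusText)
{
    metrics_mode pixels
    left 10
    top 10
    width 200
    height 30
    font_name Ogre
    char_height 24
    colour_top 1 1 0
    colour_bottom 1 0.5 0
    caption Some status text
}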
All font definitions are held in .fontdef files, which are parsed by the system at startup
time. Each .fontdef file can contain multiple font definitions. The basic format of an entry
in the .fontdef file is:
<font_name>
{
type <image | truetype>
source <image file | truetype font file>
...
... custom attributes depending on type
}
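For instance, a hypothetical TrueType font definition might look like this (the font file name
is illustrative; size and resolution are the usual custom attributes for the truetype type):
MyFonts/BlueHighway
{
    type truetype
    source bluehighway.ttf
    size 16
    resolution 96
}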
You can also create new fonts at runtime by using the FontManager if you wish.
4 Mesh Tools
There are a number of mesh tools available with OGRE to help you manipulate your meshes.
4.1 Exporters
Exporters are plugins to 3D modelling tools which write meshes and skeletal animation to
file formats which OGRE can use for realtime rendering. The files the exporters write end
in .mesh and .skeleton respectively.
Each exporter has to be written specifically for the modeller in question, although they all
use a common set of facilities provided by the classes MeshSerializer and SkeletonSerializer.
They also normally require you to own the modelling tool.
All the exporters here can be built from the source code, or you can download precom-
piled versions from the OGRE web site.
If you're creating unanimated meshes, then you do not need to be concerned with skeletal
animation or .skeleton files.
Full documentation for each exporter is provided along with the exporter itself,
and there is a list of the currently supported modelling tools in the OGRE Wiki at
http://www.ogre3d.org/wiki/index.php/Exporters.
4.2 XmlConverter
The OgreXmlConverter tool can convert binary .mesh and .skeleton files to XML and back
again - this is a very useful tool for debugging the contents of meshes, or for exchanging
mesh data easily - many of the modeller mesh exporters export to XML because it is simpler
to do, and OgreXmlConverter can then produce a binary from it. Other than simplicity,
the other advantage is that OgreXmlConverter can generate additional information for the
mesh, like bounding regions and level-of-detail reduction.
Syntax:
Usage: OgreXMLConverter sourcefile [destfile]
sourcefile = name of file to convert
destfile = optional name of file to write to. If you don’t
specify this OGRE works it out through the extension
and the XML contents if the source is XML. For example
test.mesh becomes test.xml, test.xml becomes test.mesh
if the XML document root is <mesh> etc.
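For example, assuming a mesh called robot.mesh in the current directory (an illustrative
file name), the round trip looks like this:
OgreXMLConverter robot.mesh            (writes robot.xml)
OgreXMLConverter robot.xml robot.mesh  (writes the binary mesh back out)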
When converting XML to .mesh, you will be prompted to (re)generate level-of-detail
(LOD) information for the mesh - you can choose to skip this part if you wish, but
doing it will allow you to make your mesh reduce in detail automatically when it is loaded
into the engine. The engine uses a complex algorithm to determine the best parts of the
mesh to reduce in detail depending on many factors such as the curvature of the surface,
the edges of the mesh and seams at the edges of textures and smoothing groups - taking
advantage of it is advised to make your meshes more scalable in real scenes.
4.3 MeshUpgrader
This tool is provided to allow you to upgrade your meshes when the binary format changes
- sometimes we alter it to add new features and as such you need to keep your own assets
up to date. This tool has a very simple syntax:
OgreMeshUpgrade <oldmesh> <newmesh>
The OGRE release notes will notify you when this is necessary with a new release.
5 Hardware Buffers
Vertex buffers, index buffers and pixel buffers inherit most of their features from the Hard-
wareBuffer class. The general premise with a hardware buffer is that it is an area of memory
with which you can do whatever you like; there is no format (vertex or otherwise) associated
with the buffer itself - that is entirely up to interpretation by the methods that use it - in
that way, a HardwareBuffer is just like an area of memory you might allocate using ’malloc’
- the difference being that this memory is likely to be located in GPU or AGP memory.
For example:
VertexDeclaration* decl = HardwareBufferManager::getSingleton().createVertexDeclaration();
HardwareVertexBufferSharedPtr vbuf =
HardwareBufferManager::getSingleton().createVertexBuffer(
3*sizeof(Real), // size of one whole vertex
numVertices, // number of vertices
HardwareBuffer::HBU_STATIC_WRITE_ONLY, // usage
false); // no shadow buffer
Don’t worry about the details of the above, we’ll cover that in the later sections. The im-
portant thing to remember is to always create objects through the HardwareBufferManager,
don’t use ’new’ (it won’t work anyway in most cases).
The most optimal type of hardware buffer is one which is not updated often, and is never
read from. The usage parameter of createVertexBuffer or createIndexBuffer can be one of
the following:
HBU_STATIC
This means you do not need to update the buffer very often, but you might
occasionally want to read from it.
HBU_STATIC_WRITE_ONLY
This means you do not need to update the buffer very often, and you do not
need to read from it. However, you may read from its shadow buffer if you set
one up (See Section 5.3 [Shadow Buffers], page 161). This is the optimal buffer
usage setting.
HBU_DYNAMIC
This means you expect to update the buffer often, and that you may wish to
read from it. This is the least optimal buffer setting.
HBU_DYNAMIC_WRITE_ONLY
This means you expect to update the buffer often, but that you never want
to read from it. However, you may read from its shadow buffer if you set
one up (See Section 5.3 [Shadow Buffers], page 161). If you use this option,
and replace the entire contents of the buffer every frame, then you should
use HBU_DYNAMIC_WRITE_ONLY_DISCARDABLE instead, since that has
better performance characteristics on some platforms.
HBU_DYNAMIC_WRITE_ONLY_DISCARDABLE
This means that you expect to replace the entire contents of the buffer on an ex-
tremely regular basis, most likely every frame. By selecting this option, you free
the system up from having to be concerned about losing the existing contents of
the buffer at any time, because if it does lose them, you will be replacing them
next frame anyway. On some platforms this can make a significant performance
difference, so you should try to use this whenever you have a buffer you need
to update regularly. Note that if you create a buffer this way, you should use
the HBL_DISCARD flag when locking the contents of it for writing.
Choosing the usage of your buffers carefully is important to getting optimal performance
out of your geometry. If you have a situation where you need to update a vertex buffer
often, consider whether you actually need to update all the parts of it, or just some. If it’s
the latter, consider using more than one buffer, with only the data you need to modify in
the HBU_DYNAMIC buffer.
Always try to use the WRITE_ONLY forms. This just means that you cannot read directly
from the hardware buffer, which is good practice because reading from hardware buffers is
very slow. If you really need to read data back, use a shadow buffer, described in the next
section.
When you write data into a buffer that has a shadow buffer, Ogre will first update the
system memory copy, then update the hardware buffer as a separate copying process -
therefore this technique has an additional overhead when writing data. Don't use it unless
you really need it.
Lock parameters
When you lock a buffer, you call one of the following methods:
// Lock the entire buffer
pBuffer->lock(lockType);
// Lock only part of the buffer
pBuffer->lock(start, length, lockType);
The first call locks the entire buffer, the second locks only the section from ’start’ (as
a byte offset), for ’length’ bytes. This could be faster than locking the entire buffer since
less is transferred, but not if you later update the rest of the buffer too, because doing it in
small chunks like this means you cannot use HBL_DISCARD (see below).
The lockType parameter can have a large effect on the performance of your application,
especially if you are not using a shadow buffer.
HBL_NORMAL
This kind of lock allows reading and writing from the buffer - it’s also the least
optimal because basically you’re telling the card you could be doing anything at
all. If you’re not using a shadow buffer, it requires the buffer to be transferred
from the card and back again. If you’re using a shadow buffer the effect is
minimal.
HBL_READ_ONLY
This means you only want to read the contents of the buffer. Best used when
you created the buffer with a shadow buffer because in that case the data does
not have to be downloaded from the card.
HBL_DISCARD
This means you are happy for the card to discard the entire current contents
of the buffer. Implicitly this means you are not going to read the data - it
also means that the card can avoid any stalls if the buffer is currently being
rendered from, because it will actually give you an entirely different one. Use
this wherever possible when you are locking a buffer which was not created with
a shadow buffer. If you are using a shadow buffer it matters less, although with
a shadow buffer it’s preferable to lock the entire buffer at once, because that
allows the shadow buffer to use HBL DISCARD when it uploads the updated
contents to the real buffer.
HBL_NO_OVERWRITE
This is useful if you are locking just part of the buffer and thus cannot use
HBL DISCARD. It tells the card that you promise not to modify any section
of the buffer which has already been used in a rendering operation this frame.
Again this is only useful on buffers with no shadow buffer.
Once you have locked a buffer, you can use the pointer returned however you wish (just
don't bother trying to read the data that's there if you've used HBL_DISCARD, or write
the data if you've used HBL_READ_ONLY). Modifying the contents depends on the type of
buffer, See Section 5.6 [Hardware Vertex Buffers], page 163 and See Section 5.7 [Hardware
Index Buffers], page 168
It's worth noting that you don't necessarily have to use VertexData to store your
application's geometry; all that is required is that you can build a VertexData structure when it
comes to rendering. This is pretty easy since all of VertexData’s members are pointers, so
you could maintain your vertex buffers and declarations in alternative structures if you like,
so long as you can convert them for rendering.
To add an element to a VertexDeclaration, you call its addElement method. The parameters
to this method are:
source This tells the declaration which buffer the element is to be pulled from. Note
that this is just an index, which may range from 0 to one less than the number
of buffers which are being bound as sources of vertex data. See Section 5.6.3
[Vertex Buffer Bindings], page 166 for information on how a real buffer is bound
to a source index. Storing the source of the vertex element this way (rather
than using a buffer pointer) allows you to rebind the source of a vertex very
easily, without changing the declaration of the vertex format itself.
offset Tells the declaration how far in bytes the element is offset from the start of
each whole vertex in this buffer. This will be 0 if this is the only element being
sourced from this buffer, but if other elements are there then it may be higher.
A good way of thinking of this is the size of all vertex elements which precede
this element in the buffer.
type This defines the data type of the vertex input, including its size. This is
an important element because as GPUs become more advanced, we can no
longer assume that position input will always require 3 floating point numbers,
because programmable vertex pipelines allow full control over the inputs and
outputs. This part of the element definition covers the basic type and size,
e.g. VET_FLOAT3 is 3 floating point numbers - the meaning of the data is
dealt with in the next parameter.
semantic This defines the meaning of the element - the GPU will use this to determine
what to use this input for, and programmable vertex pipelines will use this to
identify which semantic to map the input to. This can identify the element
as positional data, normal data, texture coordinate data, etc. See the API
reference for full details of all the options.
index This parameter is only required when you supply more than one element of
the same semantic in one vertex declaration. For example, if you supply more
than one set of texture coordinates, you would set the first set's index to 0, and
the second set's index to 1.
You can repeat the call to addElement for as many elements as you have in your vertex
input structures. There are also useful methods on VertexDeclaration for locating elements
within a declaration - see the API reference for full details.
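As a minimal sketch, a declaration holding a position and a single set of 2D texture
coordinates, both sourced from buffer 0, could be built like this:
size_t offset = 0;
// Positions: 3 floats at the start of each vertex in buffer 0
decl->addElement(0, offset, VET_FLOAT3, VES_POSITION);
offset += VertexElement::getTypeSize(VET_FLOAT3);
// First (index 0) set of 2D texture coordinates, following the position
decl->addElement(0, offset, VET_FLOAT2, VES_TEXTURE_COORDINATES, 0);
offset += VertexElement::getTypeSize(VET_FLOAT2);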
Important Considerations
Whilst in theory you have completely free rein over the format of your vertices, in reality
there are some restrictions. Older DirectX hardware imposes a fixed ordering on the ele-
ments which are pulled from each buffer; specifically any hardware prior to DirectX 9 may
impose the following restrictions:
• VertexElements should be added in the following order, and the order of the elements
within any shared buffer should be as follows:
1. Positions
2. Blending weights
3. Normals
4. Diffuse colours
5. Specular colours
6. Texture coordinates (starting at 0, listed in order, with no gaps)
• You must not have unused gaps in your buffers which are not referenced by any Ver-
texElement
• You must not cause the buffer & offset settings of 2 VertexElements to overlap
OpenGL and DirectX 9 compatible hardware are not required to follow these strict
limitations, so you might find, for example that if you broke these rules your application
would run under OpenGL and under DirectX on recent cards, but it is not guaranteed to
run on older hardware under DirectX unless you stick to the above rules. For this reason
you’re advised to abide by them!
usage This tells the system how you intend to use the buffer. See Section 5.2 [Buffer
Usage], page 160
useShadowBuffer
Tells the system whether you want this buffer backed by a system-memory copy.
See Section 5.3 [Shadow Buffers], page 161
There are also methods for retrieving buffers from the binding data - see the API reference
for full details.
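As a brief sketch (the vertexData pointer is assumed to come from whatever mesh or render
operation you are setting up), binding the buffer created earlier to source index 0 looks like
this:
VertexBufferBinding* bind = vertexData->vertexBufferBinding;
bind->setBinding(0, vbuf);   // source 0 now pulls its data from vbuf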
Let's start with a very simple example. Let's say you have a buffer which only contains vertex
positions, so it only contains sets of 3 floating point numbers per vertex. In this case, all
you need to do to write data into it is:
Real* pReal = static_cast<Real*>(vbuf->lock(HardwareBuffer::HBL_DISCARD));
... then you just write positions in chunks of 3 reals. If you have other floating point
data in there, it’s a little more complex but the principle is largely the same, you just need
to write alternate elements. But what if you have elements of different types, or you need
to derive how to write the vertex data from the elements themselves? Well, there are some
useful methods on the VertexElement class to help you out.
Firstly, you lock the buffer but assign the result to an unsigned char* rather than
a specific type. Then, for each element which is sourcing from this buffer (which
you can find out by calling VertexDeclaration::findElementsBySource) you call
VertexElement::baseVertexPointerToElement. This offsets a pointer which points at the
base of a vertex in a buffer to the beginning of the element in question, and allows you to
use a pointer of the right type to boot. Here’s a full example:
// Get base pointer (lock for both reading and writing since we update positions)
unsigned char* pVert = static_cast<unsigned char*>(vbuf->lock(HardwareBuffer::HBL_NORMAL));
Real* pReal;
// Get the elements which are sourced from this buffer
VertexDeclaration::VertexElementList elems = decl->findElementsBySource(bufferIdx);
VertexDeclaration::VertexElementList::iterator i, iend;
for (size_t v = 0; v < vertexCount; ++v)
{
    for (i = elems.begin(); i != elems.end(); ++i)
    {
        VertexElement& elem = *i;
        if (elem.getSemantic() == VES_POSITION)
        {
            elem.baseVertexPointerToElement(pVert, &pReal);
            // write position using pReal
            ...
        }
    }
    // advance to the start of the next whole vertex
    pVert += vbuf->getVertexSize();
}
vbuf->unlock();
See the API docs for full details of all the helper methods on VertexDeclaration and
VertexElement to assist you in manipulating vertex buffer data pointers.
5.8.1 Textures
A texture is an image that can be applied onto the surface of a three dimensional model.
In Ogre, textures are represented by the Texture resource class.
Creating a texture
Textures are created through the TextureManager. In most cases they are created from
image files directly by the Ogre resource system. If you are reading this, you most probably
want to create a texture manually so that you can provide it with image data yourself. This
is done through TextureManager::createManual:
ptex = TextureManager::getSingleton().createManual(
"MyManualTexture", // Name of texture
"General", // Name of resource group in which the texture should be created
TEX_TYPE_2D, // Texture type
256, // Width
256, // Height
1, // Depth (Must be 1 for two dimensional textures)
0, // Number of mipmaps
PF_A8R8G8B8, // Pixel format
TU_DYNAMIC_WRITE_ONLY // usage
);
This example creates a texture named MyManualTexture in resource group General. It
is a square two dimensional texture, with width 256 and height 256. It has no mipmaps,
internal format PF_A8R8G8B8 and usage TU_DYNAMIC_WRITE_ONLY.
The different texture types will be discussed in Section 5.8.3 [Texture Types], page 172.
Pixel formats are summarised in Section 5.8.4 [Pixel Formats], page 173.
Texture usages
In addition to the hardware buffer usages as described in See Section 5.2 [Buffer Usage],
page 160 there are some usage flags specific to textures:
TU_AUTOMIPMAP
Mipmaps for this texture will be automatically generated by the graphics hard-
ware. The exact algorithm used is not defined, but you can assume it to be a
2x2 box filter.
TU_RENDERTARGET
This texture will be a render target, i.e. used as a target for render to texture.
Setting this flag will ignore all other texture usages except TU_AUTOMIPMAP.
TU_DEFAULT
This is actually a combination of usage flags, and is equivalent to
TU_AUTOMIPMAP | TU_STATIC_WRITE_ONLY. The resource system
uses these flags for textures that are loaded from images.
Getting a PixelBuffer
A Texture can consist of multiple PixelBuffers, one for each combination of mipmap level and
face number. To get a PixelBuffer from a Texture object the method Texture::getBuffer(face,
mipmap) is used:
face should be zero for non-cubemap textures. For cubemap textures it identifies the
face to use, which is one of the cube faces described in See Section 5.8.3 [Texture Types],
page 172.
mipmap is zero for the zeroth mipmap level, one for the first mipmap level, and so on.
On textures that have automatic mipmap generation (TU_AUTOMIPMAP) only level 0
should be accessed, the rest will be taken care of by the rendering API.
A simple example of using getBuffer is
// Get the PixelBuffer for face 0, mipmap 0.
HardwarePixelBufferSharedPtr ptr = tex->getBuffer(0,0);
blitFromMemory
The easy method to get an image into a PixelBuffer is by using HardwarePixel-
Buffer::blitFromMemory. This takes a PixelBox object and does all necessary pixel format
conversion and scaling for you. For example, to create a manual texture and load an image
into it, all you have to do is
// Manually loads an image and puts the contents in a manually created texture
Image img;
img.load("elephant.png", "General");
// Create RGB texture with 5 mipmaps
TexturePtr tex = TextureManager::getSingleton().createManual(
"elephant",
"General",
TEX_TYPE_2D,
img.getWidth(), img.getHeight(),
5, PF_X8R8G8B8);
// Copy face 0 mipmap 0 of the image to face 0 mipmap 0 of the texture.
tex->getBuffer(0,0)->blitFromMemory(img.getPixelBox(0,0));
/// Lock the buffer so we can write to it (we are replacing the full contents)
buffer->lock(HardwareBuffer::HBL_DISCARD);
const PixelBox &pb = buffer->getCurrentLock();
uint32 *data = static_cast<uint32*>(pb.data); // assumes format PF_X8R8G8B8
size_t height = pb.getHeight();
size_t width = pb.getWidth();
size_t pitch = pb.rowPitch; // elements to skip between one row and the next
for(size_t y=0; y<height; ++y)
{
    for(size_t x=0; x<width; ++x)
    {
        // 0xRRGGBB -> fill the buffer with yellow pixels
        data[pitch*y + x] = 0x00FFFF00;
    }
}
/// Unlock the buffer again (frees it for use by the GPU)
buffer->unlock();
Colour channels
The meaning of the channels R,G,B,A,L and X is defined as
R Red colour component, usually ranging from 0.0 (no red) to 1.0 (full red).
G Green colour component, usually ranging from 0.0 (no green) to 1.0 (full green).
B Blue colour component, usually ranging from 0.0 (no blue) to 1.0 (full blue).
A Alpha component, usually ranging from 0.0 (fully transparent) to 1.0 (opaque).
L Luminance component, usually ranging from 0.0 (black) to 1.0 (white). The
luminance component is duplicated in the R, G, and B channels to achieve a
greyscale image.
X This component is completely ignored.
If none of red, green and blue components, or luminance is defined in a format, these
default to 0. For the alpha channel this is different; if no alpha is defined, it defaults to 1.
data The pointer to the first component of the image data in memory.
format The pixel format (See Section 5.8.4 [Pixel Formats], page 173) of the image
data.
rowPitch The number of elements between the leftmost pixel of one row and the left pixel
of the next. This value must always be equal to getWidth() (consecutive) for
compressed formats.
slicePitch The number of elements between the top left pixel of one (depth) slice and
the top left pixel of the next. Must be a multiple of rowPitch. This value
must always be equal to getWidth()*getHeight() (consecutive) for compressed
formats.
left, top, right, bottom, front, back
Extents of the box in three dimensional integer space. Note that the left, top,
and front edges are included but the right, bottom and back ones are not. left
must always be smaller or equal to right, top must always be smaller or equal
to bottom, and front must always be smaller or equal to back.
It also has some useful methods:
getWidth()
Get the width of this box
getHeight()
Get the height of this box. This is 1 for one dimensional images.
getDepth()
Get the depth of this box. This is 1 for one and two dimensional images.
setConsecutive()
Set the rowPitch and slicePitch so that the buffer is laid out consecutive in
memory.
getRowSkip()
Get the number of elements between one past the rightmost pixel of one row
and the leftmost pixel of the next row. This is zero if rows are consecutive.
getSliceSkip()
Get the number of elements between one past the right bottom pixel of one slice
and the left top pixel of the next slice. This is zero if slices are consecutive.
isConsecutive()
Return whether this buffer is laid out consecutive in memory (ie the pitches are
equal to the dimensions)
getConsecutiveSize()
Return the size (in bytes) this image would take if it was laid out consecutive
in memory
getSubVolume(const Box &def)
Return a subvolume of this PixelBox, as a PixelBox.
For more information about these methods consult the API documentation.
6 External Texture Sources
Introduction
This tutorial will provide a brief introduction of ExternalTextureSource and ExternalTex-
tureSourceManager classes, their relationship, and how the PlugIns work. For those inter-
ested in developing a Texture Source Plugin or maybe just wanting to know more about
this system, take a look at the ffmpegVideoSystem plugin, which you can find more about on
the OGRE forums.
How do external texture source plugins benefit OGRE? Well, the main answer is: adding
support for any type of texture source does not require changing OGRE to support it...
all that is involved is writing a new plugin. Additionally, because the manager uses the
StringInterface class to issue commands/params, no change to the material script reader
needs to be made. As a result, if a plugin needs a special parameter set, it just creates
a new command in its Parameter Dictionary - see the ffmpegVideoSystem plugin for an
example. To make this work, two classes have been added to OGRE: ExternalTextureSource
& ExternalTextureSourceManager.
ExternalTextureSource Class
The ExternalTextureSource class is the base class that Texture Source PlugIns must be de-
rived from. It provides a generic framework (via StringInterface class) with a very limited
amount of functionality. The most common of parameters can be set through the Tex-
turePlugInSource class interface or via the StringInterface commands contained within this
class. While this may seem like duplication of code, it is not. By using the string command
interface, it becomes extremely easy for derived plugins to add any new types of parameters
that it may need.
• Parameter Name: "frames_per_second" Argument Type: Ogre::String Set a frames
per second update speed. (Integer values only)
ExternalTextureSourceManager Class
ExternalTextureSourceManager is responsible for keeping track of loaded Texture Source
PlugIns. It also aids in the creation of texture source textures from scripts, and it is the
interface you should use when dealing with texture source plugins.
Note: The function prototypes shown below are mockups - param names are simplified
to better illustrate purpose here... Steps needed to create a new texture via ExternalTex-
tureSourceManager:
• Obviously, the first step is to have the desired plugin included in plugin.cfg for it to be
loaded.
• Set the desired PlugIn as Active via AdvancedTextureManager::getSingleton().SetCurrentPlugIn(
String Type ); – type is whatever the plugin registers as handling (e.g. "video",
"flash", "whatever", etc).
• Note: Consult Desired PlugIn to see what params it needs/expects. Set
params/value pairs via AdvancedTextureManager::getSingleton().getCurrentPlugIn()-
>setParameter( String Param, String Value );
• After required params are set, a simple call to AdvancedTextureManager::getSingleton().getCurrentPlugIn()-
>createDefinedTexture( sMaterialName ); will create a texture for the material name
given.
The manager also provides a method for deleting a texture source material: Advanced-
TextureManager::DestroyAdvancedTexture( String sTextureName ); The destroy method
works by broadcasting the material name to all loaded TextureSourcePlugIns, and the Plu-
gIn who actually created the material is responsible for the deletion, while other PlugIns
will just ignore the request. What this means is that you do not need to worry about
which PlugIn created the material, or activating the PlugIn yourself. Just call the manager
method to remove the material. Also, all texture plugins should handle cleanup when they
are shutdown.
material Example/MyVideoMaterial   // the material name is illustrative
{
    technique
    {
        pass
        {
            texture_unit
            {
                texture_source video
                {
                    filename mymovie.mpeg
                    play_mode play
                    sound_mode on
                }
            }
        }
    }
}
Notice that the first two param/value pairs are defined in the ExternalTextureSource
base class and that the third parameter/value pair is not defined in the base class... That
parameter is added to the param dictionary by the ffmpegVideoPlugin... This shows that
extending the functionality with the plugins is extremely easy. Also, pay particular attention
to the line 'texture_source video'. This line identifies that this texture unit will come from a
texture source plugin. It requires one parameter that determines which texture plugin will
be used. In the example shown, the plugin requested is one that registered with "video"
name.
7 Shadows
Shadows are clearly an important part of rendering a believable scene - they provide a more
tangible feel to the objects in the scene, and aid the viewer in understanding the spatial
relationship between objects. Unfortunately, shadows are also one of the most challenging
aspects of 3D rendering, and they are still very much an active area of research. Whilst there
are many techniques to render shadows, none is perfect and they all come with advantages
and disadvantages. For this reason, Ogre provides multiple shadow implementations, with
plenty of configuration settings, so you can choose which technique is most appropriate for
your scene.
Shadow implementations fall into basically 2 broad categories: Section 7.1 [Stencil Shad-
ows], page 181 and Section 7.2 [Texture-based Shadows], page 185. This describes the
method by which the shape of the shadow is generated. In addition, there is more than one
way to render the shadow into the scene: Section 7.3 [Modulative Shadows], page 190, which
darkens the scene in areas of shadow, and Section 7.4 [Additive Light Masking], page 191
which by contrast builds up light contribution in areas which are not in shadow. You also
have the option of [Integrated Texture Shadows], page 189 which gives you complete control
over texture shadow application, allowing for complex single-pass shadowing shaders. Ogre
supports all these combinations.
Enabling shadows
Shadows are disabled by default, here’s how you turn them on and configure them in the
general sense:
1. Enable a shadow technique on the SceneManager as the first thing you do in your
scene setup. It is important that this is done first because the shadow technique can
alter the way meshes are loaded. Here's an example:
mSceneMgr->setShadowTechnique(SHADOWTYPE_STENCIL_ADDITIVE);
2. Create one or more lights. Note that not all light types are necessarily supported by
all shadow techniques; check the section on each technique for details.
Note that if certain lights should not cast shadows, you can turn that off by calling
setCastShadows(false) on the light, the default is true.
3. Disable shadow casting on objects which should not cast shadows. Call setCastShad-
ows(false) on objects you don’t want to cast shadows, the default for all objects is to
cast shadows.
4. Configure shadow far distance. You can limit the distance at which shadows are con-
sidered for performance reasons, by calling SceneManager::setShadowFarDistance.
5. Turn off the receipt of shadows on materials that should not receive them. You can
turn off the receipt of shadows (note, not the casting of shadows - that is done per-
object) by calling Material::setReceiveShadows or using the receive shadows material
attribute. This is useful for materials which should be considered self-illuminated for
example. Note that transparent materials are typically excluded from receiving and
casting shadows, although see the [transparency casts shadows], page 20 option for
exceptions. A brief code sketch of these steps follows.
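The following sketch puts the steps together (the entity, mesh and numeric values are
illustrative, not prescribed by OGRE):
// 1. Choose a shadow technique before loading any meshes
mSceneMgr->setShadowTechnique(SHADOWTYPE_STENCIL_ADDITIVE);

// 2. Create a light (casts shadows by default)
Ogre::Light* sun = mSceneMgr->createLight("Sun");
sun->setType(Ogre::Light::LT_DIRECTIONAL);
sun->setDirection(0, -1, 1);

// 3. Stop objects that should not cast shadows from doing so
Ogre::Entity* ground = mSceneMgr->createEntity("Ground", "ground.mesh");
ground->setCastShadows(false);

// 4. Limit how far away shadows are considered
mSceneMgr->setShadowFarDistance(500);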
In order to generate the stencil, ’shadow volumes’ are rendered by extruding the silhou-
ette of the shadow caster away from the light. Where these shadow volumes intersect other
objects (or the caster, since self-shadowing is supported using this technique), the stencil
is updated, allowing subsequent operations to differentiate between light and shadow. How
exactly this is used to render the shadows depends on whether Section 7.3 [Modulative
Shadows], page 190 or Section 7.4 [Additive Light Masking], page 191 is being used. Ob-
jects can both cast and receive stencil shadows, so self-shadowing is inbuilt.
The advantage of stencil shadows is that they can do self-shadowing simply on low-
end hardware, provided you keep your poly count under control. In contrast doing self-
shadowing with texture shadows requires a fairly modern machine (See Section 7.2 [Texture-
based Shadows], page 185). For this reason, you’re likely to pick stencil shadows if you need
an accurate shadowing solution for an application aimed at older or lower-spec machines.
The disadvantages of stencil shadows are numerous though, especially on more modern
hardware. Because stencil shadows are a geometric technique, they are inherently more
costly the higher the number of polygons you use, meaning you are penalized the more
detailed you make your meshes. The fillrate cost, which comes from having to render
shadow volumes, also escalates the same way. Since more modern applications are likely to
use higher polygon counts, stencil shadows can start to become a bottleneck. In addition,
the visual aspects of stencil shadows are pretty primitive - your shadows will always be
hard-edged, and you have no possibility of doing clever things with shaders since the stencil
is not available for manipulation there. Therefore, if your application is aimed at higher-
end machines you should definitely consider switching to texture shadows (See Section 7.2
[Texture-based Shadows], page 185).
There are a number of issues to consider which are specific to stencil shadows:
• [CPU Overhead], page 182
• [Extrusion distance], page 183
• [Camera far plane positioning], page 183
• [Mesh edge lists], page 183
• [The Silhouette Edge], page 183
• [Be realistic], page 184
• [Stencil Optimisations Performed By Ogre], page 184
CPU Overhead
Calculating the shadow volume for a mesh can be expensive, and it has to be done on the
CPU, it is not a hardware accelerated feature. Therefore, you can find that if you overuse
this feature, you can create a CPU bottleneck for your application. Ogre quite aggressively
eliminates objects which cannot be casting shadows on the frustum, but there are limits
to how much it can do, and large, elongated shadows (e.g. representing a very low sun
position) are very difficult to cull efficiently. Try to avoid having too many shadow casters
around at once, and avoid long shadows if you can. Also, make use of the ’shadow far
distance’ parameter on the SceneManager, this can eliminate distant shadow casters from
the shadow volume construction and save you some time, at the expense of only having
shadows for closer objects. Lastly, make use of Ogre’s Level-Of-Detail (LOD) features; you
can generate automatically calculated LODs for your meshes in code (see the Mesh API
docs) or when using the mesh tools such as OgreXmlConverter and OgreMeshUpgrader.
Alternatively, you can assign your own manual LODs by providing alternative mesh files at
lower detail levels. Both methods will cause the shadow volume complexity to decrease as
the object gets further away, which saves you valuable volume calculation time.
Extrusion distance
When vertex programs are not available, Ogre can only extrude shadow volumes a finite
distance from the object. If an object gets too close to a light, any finite extrusion dis-
tance will be inadequate to guarantee all objects will be shadowed properly by this object.
Therefore, you are advised not to let shadow casters pass too close to light sources if you
can avoid it, unless you can guarantee that your target audience will have vertex program
capable hardware (in this case, Ogre extrudes the volume to infinity using a vertex program
so the problem does not occur).
When infinite extrusion is not possible, Ogre uses finite extrusion, either derived from the
attenuation range of a light (in the case of a point light or spotlight), or a fixed extrusion
distance set in the application in the case of directional lights. To change the directional
light extrusion distance, use SceneManager::setShadowDirectionalLightExtrusionDistance.
Using two or more lights in a scene with modulative stencil shadows is not advisable; the
silhouette edges will be very marked. Additive lights do not suffer from this as badly because
each light is masked individually, meaning that it is only ambient light which can show up
the silhouette edges.
Be realistic
Don’t expect to be able to throw any scene using any hardware at the stencil shadow
algorithm and expect to get perfect, optimum speed results. Shadows are a complex and
expensive technique, so you should impose some reasonable limitations on your placing of
lights and objects; they’re not really that restricting, but you should be aware that this is
not a complete free-for-all.
• Try to avoid letting objects pass very close (or even through) lights - it might look nice
but it’s one of the cases where artifacts can occur on machines not capable of running
vertex programs.
• Be aware that shadow volumes do not respect the ’solidity’ of the objects they pass
through, and if those objects do not themselves cast shadows (which would hide the
effect) then the result will be that you can see shadows on the other side of what should
be an occluding object.
• Make use of SceneManager::setShadowFarDistance to limit the number of shadow vol-
umes constructed
• Make use of LOD to reduce shadow volume complexity at distance
• Avoid very long (dusk and dawn) shadows - they exacerbate other issues such as volume
clipping, fillrate, and cause many more objects at a greater distance to require volume
construction.
The main disadvantage to texture shadows is that, because they are simply a texture,
they have a fixed resolution which means if stretched, the pixellation of the texture can
become obvious. There are ways to combat this though:
Directional Lights
Directional lights in theory shadow the entire scene from an infinitely distant light. Now,
since we only have a finite texture which will look very poor quality if stretched over the
entire scene, clearly a simplification is required. Ogre places a shadow texture over the area
immediately in front of the camera, and moves it as the camera moves (although it rounds
this movement to multiples of texels so that the slight ’swimming shadow’ effect caused
by moving the texture is minimised). The range to which this shadow extends, and the
offset used to move it in front of the camera, are configurable (See [Configuring Texture
Shadows], page 187). At the far edge of the shadow, Ogre fades out the shadow based on
other configurable parameters so that the termination of the shadow is softened.
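A sketch of this configuration using the corresponding SceneManager methods (the values are arbitrary; check the API reference for exact semantics):

#include <Ogre.h>

// Tune the moving shadow texture used for directional lights
void configureDirectionalTextureShadows(Ogre::SceneManager* sceneMgr)
{
    // How far from the camera shadows are generated at all
    sceneMgr->setShadowFarDistance(300.0f);
    // How far in front of the camera the texture is centred
    // (as a proportion of the shadow far distance)
    sceneMgr->setShadowDirLightTextureOffset(0.6f);
    // Fade the shadow out towards the edge of its range
    sceneMgr->setShadowTextureFadeStart(0.7f);
    sceneMgr->setShadowTextureFadeEnd(0.9f);
}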
Spotlights
Spotlights are much easier to represent as renderable shadow textures than directional
lights, since they are naturally a frustum. Ogre represents spotlights directly by rendering
the shadow from the light position, in the direction of the light cone; the field-of-view of
the texture camera is adjusted based on the spotlight falloff angles. In addition, to hide the
fact that the shadow texture is square and has definite edges which could show up outside
the spotlight, Ogre uses a second texture unit when projecting the shadow onto the scene
which fades out the shadow gradually in a projected circle around the spotlight.
Point Lights
As mentioned above, to support point lights properly would require multiple renders (either
6 for a cubic render or perhaps 2 for a less precise parabolic mapping), so rather than do
that we approximate point lights as spotlights, where the configuration is changed on the
fly to make the light shine from its position over the whole of the viewing frustum. This is
not an ideal setup since it means it can only really work if the point light’s position is out
of view, and in addition the changing parameterisation can cause some ’swimming’ of the
texture. Generally we recommend avoiding making point lights cast texture shadows.
If you’re not using depth shadow mapping, OGRE divides shadow casters and receivers into
2 disjoint groups. Simply by turning off shadow casting on an object, you automatically
make it a shadow receiver (although this can be disabled by setting the ’receive shadows’
option to ’false’ in a material script). Similarly, if an object is set as a shadow caster, it
cannot receive shadows.
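A minimal sketch (ent is an assumed Entity pointer; the material-side equivalent is the ’receive shadows’ option mentioned above):

#include <Ogre.h>

// With the disjoint caster/receiver model, turning casting off on an
// object makes it a receiver only (unless its material also disables
// shadow receipt with 'receive_shadows off')
void makeReceiverOnly(Ogre::Entity* ent)
{
    ent->setCastShadows(false);
}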
You can adjust this manually by simply turning off shadow casting for lights you do not
wish to cast shadows. In addition, you can set a maximum limit on the number of shadow
textures Ogre is allowed to use by calling SceneManager::setShadowTextureCount. Each
frame, Ogre determines the lights which could be affecting the frustum, and then allocates
the number of shadow textures it is allowed to use to the lights on a first-come-first-served
basis. Any additional lights will not cast shadows that frame.
Note that you can set the number of shadow textures and their size at the same time by
using the SceneManager::setShadowTextureSettings method; this is useful because otherwise
each of the individual calls can trigger the creation / destruction of texture resources.
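For example (a sketch; the size and count are arbitrary, and sceneMgr / fillLight are assumed names):

#include <Ogre.h>

void configureShadowTextures(Ogre::SceneManager* sceneMgr, Ogre::Light* fillLight)
{
    // Set the shadow texture size and the maximum number of shadow
    // textures in one call, avoiding two rounds of texture re-creation
    sceneMgr->setShadowTextureSettings(1024, 2);

    // Lights which should never be allocated one of the shadow textures
    fillLight->setCastShadows(false);
}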
Important: if you use the GL render system your shadow texture size can only be larger
(in either dimension) than the size of your primary window surface if the hardware
supports the Frame Buffer Object (FBO) or Pixel Buffer Object (PBO) extensions. Most
modern cards support this now, but be careful of older cards - you can check the ability of
the hardware to manage this through ogreRoot->getRenderSystem()->getCapabilities()-
>hasCapability(RSC_HWRENDER_TO_TEXTURE). If this returns false and you create a
shadow texture larger in any dimension than the primary surface, the rest of the shadow
texture will be blank.
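As a sketch, that capability check might be wrapped like this (ogreRoot being your Ogre::Root instance):

#include <Ogre.h>

bool canUseLargeShadowTextures(Ogre::Root* ogreRoot)
{
    // Hardware render-to-texture support means shadow textures may safely
    // exceed the primary window size on the GL render system
    return ogreRoot->getRenderSystem()->getCapabilities()
        ->hasCapability(Ogre::RSC_HWRENDER_TO_TEXTURE);
}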
By reducing the shadow far distance value, or increasing the texture size, you can improve the quality of shadows from directional
lights at the expense of closer shadow termination or increased memory usage, respectively.
Here is where ’integrated’ texture shadows step in. Both of the texture shadow types
above have alternative versions called SHADOWTYPE_TEXTURE_MODULATIVE_INTEGRATED
and SHADOWTYPE_TEXTURE_ADDITIVE_INTEGRATED, where instead of rendering
the shadows for you, it just creates the texture shadow and then expects you to use that
shadow texture as you see fit when rendering receiver objects in the scene. The downside
is that you have to take into account shadow receipt in every one of your materials if you
use this option - the upside is that you have total control over how the shadow textures
are used. The big advantage here is that you can perform more complex shading,
taking into account shadowing, than is possible using the generalised bolt-on approaches,
AND you can probably write them in a smaller number of passes, since you know precisely
what you need and can combine passes where possible. When you use one of these
shadowing approaches, the only difference between additive and modulative is the colour
of the casters in the shadow texture (the shadow colour for modulative, black for additive)
- the actual calculation of how the texture affects the receivers is of course up to you.
No separate modulative pass will be performed, and no splitting of your materials into
ambient / per-light / decal etc will occur - absolutely everything is determined by your
original material (which may have modulative passes or per-light iteration if you want of
course, but it’s not required).
You reference a shadow texture in a material which implements this approach by using
the ’content_type shadow’ directive (See [content type], page 51) in your texture_unit. It
implicitly references a shadow texture based on the number of times you’ve used this directive in the same pass,
and the light start option or light-based pass iteration, which might start the light index
higher than 0.
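For example, switching the scene to the integrated additive variant might look like this sketch (sceneMgr is an assumed SceneManager pointer); your receiver materials are then responsible for sampling the shadow texture via the ’content_type shadow’ directive:

#include <Ogre.h>

void enableIntegratedTextureShadows(Ogre::SceneManager* sceneMgr)
{
    // Ogre still renders the shadow textures, but performs no extra
    // receiver passes; your materials must use the textures themselves
    sceneMgr->setShadowTechnique(Ogre::SHADOWTYPE_TEXTURE_ADDITIVE_INTEGRATED);
}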
Shadow Colour
The colour which is used to darken the areas in shadow is set by SceneMan-
ager::setShadowColour; it defaults to a dark grey (so that the underlying colour still shows
through a bit).
Note that if you’re using texture shadows you have the additional option of using
[Integrated Texture Shadows], page 189 rather than being forced to have a separate pass of
the scene to render shadows. In this case the ’modulative’ aspect of the shadow technique
just affects the colour of the shadow texture.
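A minimal sketch (sceneMgr assumed; the grey value is arbitrary):

#include <Ogre.h>

void setModulativeShadowColour(Ogre::SceneManager* sceneMgr)
{
    // Dark grey rather than black, so some of the receiver's colour
    // still shows through in shadowed areas
    sceneMgr->setShadowColour(Ogre::ColourValue(0.25f, 0.25f, 0.25f));
}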
As many technical papers (and game marketing) will tell you, rendering realistic lighting
like this requires multiple passes. Being a friendly sort of engine, Ogre frees you from most
of the hard work though, and will let you use the exact same material definitions whether
you use this lighting technique or not (for the most part, see [Pass Classification and Vertex
Programs], page 193). In order to do this technique, Ogre automatically categorises the
Section 3.1.2 [Passes], page 24 you define in your materials into 3 types:
1. ambient Passes categorised as ’ambient’ include any base pass which is not lit by any
particular light, i.e. it occurs even if there is no ambient light in the scene. The
ambient pass always happens first, and sets up the initial depth value of the fragments,
and the ambient colour if applicable. It also includes any emissive / self illumination
contribution. Only textures which affect ambient light (e.g. ambient occlusion maps)
should be rendered in this pass.
2. diffuse/specular Passes categorised as ’diffuse/specular’ (or ’per-light’) are rendered
once per light, and each pass contributes the diffuse and specular colour from that
single light as reflected by the diffuse / specular terms in the pass. Areas in shadow
from that light are masked and are thus not updated. The resulting masked colour
is added to the existing colour in the scene. Again, no textures are used in this pass
(except for textures used for lighting calculations such as normal maps).
3. decal Passes categorised as ’decal’ add the final texture colour to the scene, which is
modulated by the accumulated light built up from all the ambient and diffuse/specular
passes.
In practice, Section 3.1.2 [Passes], page 24 rarely fall nicely into just one of these cate-
gories. For each Technique, Ogre compiles a list of ’Illumination Passes’, which are derived
from the user defined passes, but can be split, to ensure that the divisions between illumi-
nation pass categories can be maintained. For example, if we take a very simple material
definition:
material TestIllumination
{
technique
{
pass
{
ambient 0.5 0.2 0.2
diffuse 1 0 0
specular 1 0.8 0.8 15
texture_unit
{
texture grass.png
}
}
}
}
Ogre will split this into 3 illumination passes, which will be the equivalent of this:
material TestIlluminationSplitIllumination
{
technique
{
// Ambient pass
pass
{
ambient 0.5 0.2 0.2
diffuse 0 0 0
specular 0 0 0
}
// Diffuse/specular (per-light) pass
pass
{
scene_blend add
iteration once_per_light
diffuse 1 0 0
specular 1 0.8 0.8 15
}
// Decal pass
pass
{
scene_blend modulate
lighting off
texture_unit
{
texture grass.png
}
}
}
}
So as you can see, even a simple material requires a minimum of 3 passes when using
this shadow technique, and in fact it requires (num lights + 2) passes in the general sense.
You can use more passes in your original material and Ogre will cope with that too, but
be aware that each pass may turn into multiple ones if it uses more than one type of light
contribution (ambient vs diffuse/specular) and / or has texture units. The main nice thing
is that you get the full multipass lighting behaviour even if you don’t define your materials
in terms of it, meaning that your material definitions can remain the same no matter what
lighting approach you decide to use.
Pass Classification and Vertex Programs
In practice, classifying passes which use vertex programs is very easy. Even though your vertex program could be doing a lot
of complex, highly customised processing, it can still be classified into one of the 3 types
listed above. All you need to do to tell Ogre what you’re doing is to use the pass attributes
ambient, diffuse, specular and self illumination, just as if you were not using a vertex
program. Sure, these attributes do nothing (as far as rendering is concerned) when you’re
using vertex programs, but it’s the easiest way to indicate to Ogre which light components
you’re using in your vertex program. Ogre will then classify and potentially split your
programmable pass based on this information - it will leave the vertex program as-is (so
that any split passes will respect any vertex modification that is being done).
Note that when classifying a diffuse/specular programmable pass, Ogre checks to see
whether you have indicated the pass can be run once per light (iteration once per light).
If so, the pass is left intact, including its vertex and fragment programs. However, if this
attribute is not included in the pass, Ogre tries to split off the per-light part, and in doing
so it will disable the fragment program, since in the absence of the ’iteration once per light’
attribute it can only assume that the fragment program is performing decal work and hence
must not be used per light.
So clearly, when you use additive light masking as a shadow technique, you need to make
sure that programmable passes you use are properly set up so that they can be classified
correctly. However, also note that the changes you have to make to ensure the classification
is correct do not affect the way the material renders when you choose not to use additive
lighting, so the principle that you should be able to use the same material definitions for
all lighting scenarios still holds. Here is an example of a programmable material which will
be classified correctly by the illumination pass classifier:
// Per-pixel normal mapping
// Any number of lights, diffuse and specular
material Examples/BumpMapping/MultiLightSpecular
{
technique
{
// Base ambient pass
pass
{
// ambient only, not needed for rendering, but as information
// to lighting pass categorisation routine
ambient 1 1 1
diffuse 0 0 0
specular 0 0 0 0
// Really basic vertex program
vertex_program_ref Ogre/BasicVertexPrograms/AmbientOneTexture
{
}
}
// Now do the lighting pass
// NB we don’t do decal texture here because this is repeated per light
pass
{
// set ambient off, not needed for rendering, but as information
// to lighting pass categorisation routine
ambient 0 0 0
// do this for each light
iteration once_per_light
scene_blend add
// Fragment program
fragment_program_ref Examples/BumpMapFPSpecular
{
param_named_auto lightDiffuse light_diffuse_colour 0
param_named_auto lightSpecular light_specular_colour 0
}
texture_unit
{
cubic_texture nm.png combinedUVW
tex_coord_set 1
tex_address_mode clamp
}
}
// Decal pass
pass
{
lighting off
// Really basic vertex program
vertex_program_ref Ogre/BasicVertexPrograms/AmbientOneTexture
{
param_named_auto worldViewProj worldviewproj_matrix
param_named ambient float4 1 1 1 1
}
scene_blend dest_colour zero
texture_unit
{
texture RustedMetal.jpg
}
}
}
}
Note that if you’re using texture shadows you have the additional option of using
[Integrated Texture Shadows], page 189 rather than being forced to use this explicit se-
quence - allowing you to compress the number of passes into a much smaller number at the
expense of defining an upper limit on the number of shadow casting lights. In this case the ’additive’
aspect of the shadow technique just affects the colour of the shadow texture and it’s up to
you to combine the shadow textures in your receivers however you like.
Static Lighting
Despite their power, additive lighting techniques have an additional limitation; they do not
combine well with pre-calculated static lighting in the scene. This is because they are based
on the principle that shadow is an absence of light, but since static lighting in the scene
already includes areas of light and shadow, additive lighting cannot remove light to create
new shadows. Therefore, if you use the additive lighting technique you must either use
it exclusively as your lighting solution (and you can combine it with per-pixel lighting to
create a very impressive dynamic lighting solution), or you must use [Integrated Texture
Shadows], page 189 to combine the static lighting according to your chosen approach.
8 Animation
OGRE supports a pretty flexible animation system that allows you to script animation for
several different purposes:
Section 8.1 [Skeletal Animation], page 197
Mesh animation using a skeletal structure to determine how the mesh deforms.
There are many grades of skeletal animation, and not all engines (or modellers for that
matter) support all of them. OGRE supports the following features:
• Each mesh can be linked to a single skeleton
• Unlimited bones per skeleton
• Hierarchical forward-kinematics on bones
• Multiple named animations per skeleton (e.g. ’Walk’, ’Run’, ’Jump’, ’Shoot’ etc)
• Unlimited keyframes per animation
• Linear or spline-based interpolation between keyframes
• A vertex can be assigned to multiple bones and assigned weightings for smoother skin-
ning
• Multiple animations can be applied to a mesh at the same time, again with a blend
weighting
Skeletons and the animations which go with them are held in .skeleton files, which are
produced by the OGRE exporters. These files are loaded automatically when you create an
Entity based on a Mesh which is linked to the skeleton in question. You then use Section 8.2
[Animation State], page 198 to set the use of animation on the entity in question.
Skeletal animation can be performed in software, or implemented in shaders (hardware
skinning). Clearly the latter is preferable, since it takes some of the work away from the
CPU and gives it to the graphics card, and also means that the vertex data does not need
to be re-uploaded every frame. This is especially important for large, detailed models. You
should try to use hardware skinning wherever possible; this basically means assigning a ma-
terial which has a vertex program powered technique. See [Skeletal Animation in Vertex
Programs] for more details. Skeletal animation can be com-
bined with vertex animation, See Section 8.3.3 [Combining Skeletal and Vertex Animation],
page 202.
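As a sketch of driving a skeletal animation from code (the entity, animation name and frame time are all invented for illustration):

#include <Ogre.h>

// Enable the 'Walk' animation on an entity, then advance it every frame
void startWalk(Ogre::Entity* ent)
{
    Ogre::AnimationState* walk = ent->getAnimationState("Walk");
    walk->setEnabled(true);
    walk->setLoop(true);
}

void updateWalk(Ogre::Entity* ent, Ogre::Real frameTime)
{
    // Typically called from a FrameListener with the elapsed time
    ent->getAnimationState("Walk")->addTime(frameTime);
}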
Pose animation is more complex. Like morph animation each track references a single
unique set of vertex data, but unlike morph animation, each keyframe references 1 or more
’poses’, each with an influence level. A pose is a series of offsets to the base vertex data,
and may be sparse - i.e. it may not reference every vertex. Because they’re offsets, they
can be blended - both within a track and between animations. This set of features is very
well suited to facial animation.
For example, let’s say you modelled a face (one set of vertex data), and defined a set of
poses which represented the various phonetic positions of the face. You could then define
an animation called ’SayHello’, containing a single track which referenced the face vertex
data, and which included a series of keyframes, each of which referenced one or more of the
facial positions at different influence levels - the combination of which over time made the
face form the shapes required to say the word ’hello’. Since the poses are only stored once,
but can be referenced many times in many animations, this is a very powerful way to build
up a speech system.
The downside of pose animation is that it can be more difficult to set up, requiring
poses to be separately defined and then referenced in the keyframes. Also, since it uses
more buffers (one for the base data, and one for each active pose), if you’re animating in
hardware using vertex shaders you need to keep an eye on how many poses you’re blending
at once. You define a maximum supported number in your vertex program definition,
via the includes_pose_animation material script entry, See [Pose Animation in Vertex
Programs].
So, by partitioning the vertex animation approaches into 2, we keep the simple morph
technique easy to use, whilst still allowing all the powerful techniques to be used. Note
that morph animation cannot be blended with other types of vertex animation on the same
vertex data (pose animation or other morph animation); pose animation can be blended
with other pose animation though, and both types can be combined with skeletal animation.
This combination limitation applies per set of vertex data though, not globally across the
mesh (see below). Also note that all morph animation can be expressed (in a more complex
fashion) as pose animation, but not vice versa.
For example, a common set-up for a complex character which needs both skeletal and
facial animation might be to split the head into a separate SubMesh with its own geometry,
then apply skeletal animation to both submeshes, and pose animation to just the head.
To see how to apply vertex animation, See Section 8.2 [Animation State], page 198.
To do this, you have a set of helper functions in Ogre::Mesh. See API Reference entries for
Ogre::VertexData::reorganiseBuffers() and Ogre::VertexDeclaration::getAutoOrganisedDeclaration().
The latter will turn a vertex declaration into one which is recommended for the usage
you’ve indicated, and the former will reorganise the contents of a set of buffers to conform
to that layout.
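A sketch of that process (the exact parameter lists of these helpers vary slightly between OGRE 1.x releases, so treat this as illustrative):

#include <Ogre.h>

// Reorganise a mesh's shared vertex data into the layout recommended
// for the animation types it uses
void prepareForAnimation(Ogre::Mesh* mesh)
{
    Ogre::VertexData* vdata = mesh->sharedVertexData;
    Ogre::VertexDeclaration* newDecl =
        vdata->vertexDeclaration->getAutoOrganisedDeclaration(
            true,   // uses skeletal animation
            true);  // uses vertex (morph/pose) animation
    vdata->reorganiseBuffers(newDecl);
}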
Because absolute positions are used, it is not possible to blend more than one morph
animation on the same vertex data; you should use skeletal animation if you want to include
animation blending since it is much more efficient. If you activate more than one animation
which includes morph tracks for the same vertex data, only the last one will actually take
effect. This also means that the ’weight’ option on the animation state is not used for
morph animation.
Morph animation can be combined with skeletal animation if required, See Section 8.3.3
[Combining Skeletal and Vertex Animation], page 202. Morph animation can also be im-
plemented in hardware using vertex shaders, See [Morph Animation in Vertex Programs].
In order to do this, pose animation uses a set of reference poses defined in the mesh,
expressed as offsets to the original vertex data. It does not require that every vertex has
an offset - those that don’t are left alone. When blending in software these vertices are
completely skipped - when blending in hardware (which requires a vertex entry for every
vertex), zero offsets for vertices which are not mentioned are automatically created for you.
Once you’ve defined the poses, you can refer to them in animations. Each pose animation
track refers to a single set of geometry (either the shared geometry of the mesh, or dedicated
geometry on a submesh), and each keyframe in the track can refer to one or more poses,
each with its own influence level. The weight applied to the entire animation scales these
influence levels too. You can define many keyframes which cause the blend of poses to
change over time. The absence of a pose reference in a keyframe when it is present in a
neighbouring one causes it to be treated as an influence of 0 for interpolation.
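A sketch of defining a pose and referencing it from keyframes (the pose index, vertex indices, names and timings are invented; see the Mesh and Animation API references for the exact semantics):

#include <Ogre.h>

void buildSmilePose(Ogre::Mesh* mesh)
{
    // Target 0 means the shared geometry; a SubMesh uses its index + 1
    Ogre::Pose* smile = mesh->createPose(0, "Smile");
    // Sparse offsets: only the vertices which actually move are listed
    smile->addVertex(112, Ogre::Vector3(0.0f, 0.05f, 0.0f));
    smile->addVertex(113, Ogre::Vector3(0.0f, 0.04f, 0.01f));

    // A 2-second animation whose single track drives the shared geometry
    Ogre::Animation* anim = mesh->createAnimation("SayHello", 2.0f);
    Ogre::VertexAnimationTrack* track = anim->createVertexTrack(0, Ogre::VAT_POSE);

    // Blend the pose in and back out over the course of the animation
    track->createVertexPoseKeyFrame(0.0f)->addPoseReference(0, 0.0f);
    track->createVertexPoseKeyFrame(1.0f)->addPoseReference(0, 1.0f);
    track->createVertexPoseKeyFrame(2.0f)->addPoseReference(0, 0.0f);
}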
You should be careful how many poses you apply at once. When performing pose
animation in hardware (See [Pose Animation in Vertex Programs]), every active pose
requires another vertex buffer to be added to the shader, and
when animating in software it will also take longer the more active poses you have. Bear in
mind that if you have 2 poses in one keyframe, and a different 2 in the next, that actually
means there are 4 active poses when interpolating between them.
You can combine pose animation with skeletal animation, See Section 8.3.3 [Combin-
ing Skeletal and Vertex Animation], page 202, and you can also hardware accelerate the
application of the blend with a vertex shader, See [Pose Animation in Vertex
Programs].
Combining the two is, from a user perspective, as simple as just enabling both animations
at the same time. When it comes to using this feature efficiently though, there are a few
points to bear in mind:
• [Combined Hardware Skinning], page 202
• [Submesh Splits], page 203
Combined Hardware Skinning
When combining animation types, your vertex programs must support both types of
animation that the combined mesh needs, otherwise hardware skinning will be disabled.
You should implement the animation in the same way that OGRE does, i.e. perform vertex
animation first, then apply skeletal animation to the result of that. Remember that the
implementation of morph animation passes 2 absolute snapshot buffers of the from & to
keyframes, along with a single parametric value, which you have to linearly interpolate, whilst
pose animation passes the base vertex data plus ’n’ pose offset buffers, and ’n’ parametric
weight values.
Submesh Splits
If you only need to combine vertex and skeletal animation for a small part of your mesh,
e.g. the face, you could split your mesh into 2 parts, one which needs the combination and
one which does not, to reduce the calculation overhead. Note that it will also reduce vertex
buffer usage since vertex keyframe / pose buffers will also be smaller. Note that if you use
hardware skinning you should then implement 2 separate vertex programs, one which does
only skeletal animation, and the other which does skeletal and vertex animation.
At its heart, scene node animation is mostly the same code which animates the under-
lying skeleton in skeletal animation. After creating the main Animation using SceneMan-
ager::createAnimation you can create a NodeAnimationTrack per SceneNode that you want
to animate, and create keyframes which control its position, orientation and scale which can
be interpolated linearly or via splines. You use Section 8.2 [Animation State], page 198 in the
same way as you do for skeletal/vertex animation, except you obtain the state from Scene-
Manager instead of from an individual Entity. Animations are applied automatically every
frame, or the state can be applied manually in advance using the applySceneAnimations()
method on SceneManager. See the API reference for full details of the interface for config-
uring scene animations.
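A sketch of that sequence (the animation name, track handle, times and positions are invented for illustration):

#include <Ogre.h>

// Create a 10-second spline animation which moves a node, then obtain an
// AnimationState from the SceneManager to drive it
void createCameraPath(Ogre::SceneManager* sceneMgr, Ogre::SceneNode* node)
{
    Ogre::Animation* anim = sceneMgr->createAnimation("CameraPath", 10.0f);
    anim->setInterpolationMode(Ogre::Animation::IM_SPLINE);

    Ogre::NodeAnimationTrack* track = anim->createNodeTrack(0, node);
    track->createNodeKeyFrame(0.0f)->setTranslate(Ogre::Vector3(0, 0, 0));
    track->createNodeKeyFrame(5.0f)->setTranslate(Ogre::Vector3(50, 0, 0));
    track->createNodeKeyFrame(10.0f)->setTranslate(Ogre::Vector3(50, 0, 50));

    Ogre::AnimationState* state = sceneMgr->createAnimationState("CameraPath");
    state->setEnabled(true);
    state->setLoop(true);
    // Advance with state->addTime(elapsed) each frame; the enabled scene
    // animation is then applied automatically
}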
AnimableObject
AnimableObject is an abstract interface that any class can extend in order to provide access
to a number of [AnimableValue], page 204, objects. It holds a ’dictionary’ of the available animable
properties which can be enumerated via the getAnimableValueNames method, and when
its createAnimableValue method is called, it returns a reference to a value object which
forms a bridge between the generic animation interfaces, and the underlying specific object
property.
One example of this is the Light class. It extends AnimableObject and provides Ani-
mableValues for properties such as "diffuseColour" and "attenuation". Animation tracks
can be created for these values and thus properties of the light can be scripted to change.
Other objects, including your custom objects, can extend this interface in the same way to
provide animation support to their properties.
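A sketch of animating one of those Light properties over time (the track handle, times and colours are arbitrary, and the keyframe value type should be checked against the API reference; "diffuseColour" is the property named above):

#include <Ogre.h>

// Fade a light's diffuse colour from red to blue over 5 seconds
void animateLightColour(Ogre::SceneManager* sceneMgr, Ogre::Light* light)
{
    Ogre::AnimableValuePtr diffuse = light->createAnimableValue("diffuseColour");

    Ogre::Animation* anim = sceneMgr->createAnimation("LightFade", 5.0f);
    Ogre::NumericAnimationTrack* track = anim->createNumericTrack(0, diffuse);

    track->createNumericKeyFrame(0.0f)->setValue(Ogre::AnyNumeric(Ogre::ColourValue::Red));
    track->createNumericKeyFrame(5.0f)->setValue(Ogre::AnyNumeric(Ogre::ColourValue::Blue));

    Ogre::AnimationState* state = sceneMgr->createAnimationState("LightFade");
    state->setEnabled(true);
}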
AnimableValue
When implementing custom animable properties, you have to also implement a number
of methods on the AnimableValue interface - basically anything which has been marked
as unimplemented. These are not pure virtual methods simply because you only have to
implement the methods required for the type of value you’re animating. Again, see the
examples in Light to see how this is done.