Game Engine Notes

Game engines are comprehensive software development kits that enable the creation of diverse video games, featuring core components like rendering, physics, and AI systems. Game development teams typically consist of engineers, artists, designers, producers, and support staff, each playing a crucial role in the game's creation. The architecture of game engines varies across genres, with specific technologies tailored to meet the unique demands of different game types, while also evolving to allow for greater cross-genre compatibility as hardware improves.

Game Engine Architecture

Introduction:
● The software that drives these now-ubiquitous three-dimensional worlds—game engines such as Unity, the Quake and Doom engines, Unreal Engine and Valve's Source engine—has become a fully featured, reusable software development kit that can be licensed and used to build almost any game imaginable.
● Virtually all game engines contain a familiar set of core
components, including the rendering engine, the collision and
physics engine, the animation system, the audio system, the
game world object model, the artificial intelligence system,
and so on.

Structure of a Typical Game team:


Game studios are usually composed of five basic disciplines
1. Engineers:
● The engineers design and implement the software that
makes the game, and the tools, work. Engineers are often
categorized into two basic groups: runtime programmers
and tools programmers.
● Some engineers focus their careers on a single engine system, while others focus on gameplay programming and scripting.
● Some engineers are generalists—jacks of all trades who
can jump around and tackle whatever problems might
arise during development.
● The highest engineering-related position at a game
studio is the chief technical officer (CTO).
2. Artists:
● The artists produce all of the visual and audio content in
the game, and the quality of their work can literally make
or break a game.
● There are various types of artists such as Concept
artists, 3d Modelers, Texture Artists, Lighting Artists,
Animators, motion capture actors, sound designers,
voice actors and composers.
● Senior artists are often called upon to be team leaders.
Some teams have one or more art directors, very senior
artists who manage the look of the entire game and
ensure consistency across the work of all team members.
3. Game Designers:
● The game designer’s job is to design the interactive
portion of the player’s experience, typically known as
gameplay.
● Some work on determining the story arc, the overall
sequence of chapters or levels, and the high-level goals
and objectives of the player, and some work on individual
levels or geographical areas within the virtual game
world, laying out the static background geometry,
determining where and when enemies will emerge,
placing supplies like weapons and health packs,
designing puzzle elements, and so on.
● Some teams employ one or more writers. A game writer's job can range from collaborating with the senior game
designers to construct the story arc of the entire game,
to writing individual lines of dialogue.
● Senior designers play management roles. Many game
teams have a game director, whose job it is to oversee all
aspects of a game’s design, help manage schedules, and
ensure that the work of individual designers is consistent
across the entire product.
4. Producers:
● A producer’s role is to manage the schedule and serve as
a human resources manager.
● In some companies producers serve in a senior game design capacity. In other companies they are asked to
serve as liaisons between the team and business unit of
the company.
5. Other Staff:
● The team of people who directly construct the game is
typically supported by a crucial team of support staff.
This includes the studio’s executive management team,
the marketing department (or a team that liaises with an
external marketing group), administrative staff, and the
IT department, whose job is to purchase, install, and
configure hardware and software for the team and to
provide technical support.
6. Publishers and Studios:
● The marketing, manufacture, and distribution of a game
title are usually handled by a publisher. A publisher is
typically a large corporation, like Electronic Arts, Sony,
Nintendo, etc.

What is a game?
● In terms of Game theory, a game is where multiple agents
select strategies and tactics in order to maximize their gains
within the framework of a well-defined set of game rules.
● Raph Koster defines a “game” to be an interactive experience
that provides the player with an increasingly challenging
sequence of patterns which he or she learns and eventually
masters.
● Others suggest that these activities of learning and mastering are at the heart of what we call "fun".
Video Games as Soft Real-Time Simulations:
● As game developers we model real-world (or imaginary) objects and phenomena only by approximation and simplification, because it is totally impractical to include every detail down to the level of atoms and quarks. These approximate, time-constrained models are known as soft real-time simulations: the game must meet its frame deadlines, but missing one occasionally is not catastrophic.
● An agent-based simulation is one in which a number of
distinct entities known as “agents” interact. These agents can
be vehicles, characters, fireballs, animals and so on. Most games these days are implemented in object-oriented, or at least loosely object-based, programming languages.
● All interactive video games are temporal simulations, meaning
the virtual game world model is dynamic, the state of the
game world changes over time as the game progresses and
story unfolds. A game must also respond to unpredictable
inputs from its human player, therefore known as interactive
temporal simulations.
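A minimal sketch of the update loop such an interactive temporal simulation implies (C++, hypothetical function names; a real engine spreads this work across many subsystems):

```cpp
#include <chrono>

// Hypothetical per-frame hooks; a real engine has many more subsystems.
void pollPlayerInput()              { /* read joypad/keyboard/mouse state */ }
void updateGameWorld(float dtSecs)  { /* advance physics, AI, animation by dtSecs */ }
void renderFrame()                  { /* draw the current state of the world */ }

void runGameLoop(const bool& quitRequested)
{
    using clock = std::chrono::steady_clock;
    auto previous = clock::now();

    while (!quitRequested)
    {
        auto now = clock::now();
        float dt = std::chrono::duration<float>(now - previous).count();
        previous = now;

        pollPlayerInput();     // respond to unpredictable human input
        updateGameWorld(dt);   // the game world state changes over time
        renderFrame();         // present the new state to the player
    }
}
```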
Game Engine:
● A game engine is conceptually the core software necessary for a game program to run properly. The core functionality it provides typically includes a rendering engine, a physics/collision-detection engine, sound, scripting, animation, artificial intelligence, networking, streaming, memory management, threading, localization support, a scene graph, and video support for cinematics.
● One of the earliest engines was id Tech 1, built for DOOM, which was the first game to be architected with a reasonably well-defined separation between its core components and the art assets, game worlds, and rules of play that comprised the player's gaming experience.
● The value of this separation became evident as developers began licensing games and retooling them into new products by creating new art, world layouts, weapons, characters, vehicles and game rules with minimal changes to the engine software. This marked the birth of the "mod community"—a group of individual gamers and small independent studios that built new games by modifying existing games using free toolkits provided by the original developers.
● Arguably, a data-driven architecture is what differentiates a game engine from a piece of software that is a game but not an engine. We should probably reserve the term "game engine" for software that is extensible and can be used as the foundation for many different games without major modification.
● The more general purpose a game engine or middleware
component is, the less optimal it is for running a particular
game on a particular platform.

Level of Detail:
● Level-of-detail (LOD) techniques are used to ensure that
distant objects are rendered with a minimum number of
triangles, while using high resolution triangle meshes for
geometry that is close to the camera.
● This technique is widely used in fast-paced or graphically heavy games in order to keep frame rates high and the experience smooth.
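A minimal sketch of distance-based LOD selection (hypothetical types; real engines often use screen-space error rather than raw distance):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical: one renderable mesh stored at several triangle counts.
struct LodMesh
{
    std::vector<int>   lodTriangleCounts;  // index 0 = highest detail
    std::vector<float> switchDistances;    // distance at which each LOD kicks in
};

float distance(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Pick the coarsest LOD whose switch distance the object has passed.
int selectLod(const LodMesh& mesh, const Vec3& objectPos, const Vec3& cameraPos)
{
    float d = distance(objectPos, cameraPos);
    int lod = 0;
    for (std::size_t i = 0; i < mesh.switchDistances.size(); ++i)
        if (d >= mesh.switchDistances[i])
            lod = static_cast<int>(i);
    return lod;
}
```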
Occlusion Culling:
● Occlusion culling is a process which prevents Unity from
performing rendering calculations for GameObjects that are
completely hidden from view (occluded) by other GameObjects.

Engine Differences across Genres:


● Game engines are typically somewhat genre specific. An
engine designed for a two-person fighting game in a boxing
ring will be very different from a massively multiplayer online
game (MMOG) engine or a first-person shooter (FPS) engine or
a real-time strategy (RTS) engine.
● All 3D games, regardless of genre, require some form of
low-level user input from the joypad, keyboard, and/or mouse,
some form of 3D mesh rendering, some form of heads-up
display (HUD) including text rendering in a variety of fonts, a
powerful audio system etc.
● Unreal Engine, for example, was designed for first-person shooter games, but it has been used successfully to construct games in a number of other genres as well.
1. First Person Shooter(FPS):
● This genre originally involved relatively slow on-foot roaming of a potentially large but primarily corridor-based world. Modern first-person shooters can take place in a wide variety of virtual environments, including vast open outdoor areas and confined indoor areas.
● Modern FPS traversal mechanics can include on-foot locomotion, rail-confined or free-roaming ground vehicles, hovercraft, boats, and aircraft.
● First-person shooters aim to provide their players with the illusion of being immersed in a detailed, hyperrealistic world. FPS games typically focus on:
● Efficient rendering of large 3D virtual worlds
● A responsive camera control/aiming mechanic
● High-fidelity animations of the player's virtual arms and weapons
● A wide range of powerful hand-held weaponry
● A forgiving player character motion and collision model, which often gives these games a "floaty" feel
● High-fidelity simulations and artificial intelligence for the non-player characters (the player's enemies and allies)
● Small-scale online multiplayer capabilities (typically supporting up to 64 simultaneous players), and the ubiquitous "deathmatch" gameplay mode.
2. Platformer and other Third-Person Games:
● Platformer is the term applied to third-person
character-based action games where jumping from
platform to platform is the primary gameplay
mechanic.
● Platformers can usually be lumped together with
third person and third person action/adventure
games.
● Third-person character-based games have a lot in common with first-person shooters, but a great deal more emphasis is placed on the main character's abilities and locomotion modes. In addition, high-fidelity full-body character animations are required for the player's avatar.
● In a platformer, the main character is often
cartoon-like and not particularly realistic or
high-resolution. However, third-person shooters
often feature a highly realistic humanoid player
character. In both cases, the player character
typically has a very rich set of actions and
animations.

This genre focuses on technology such as


● Moving platforms, ladders, ropes, trellises, and
other interesting locomotion modes.
● Puzzle-like environmental elements.
● A third-person "follow camera" which stays focused on the player character and whose rotation is typically controlled by the human player via the right joypad stick (on a console) or the mouse (on a PC—note that while there are a number of popular third-person shooters on PC, the platformer genre exists almost exclusively on consoles).
● A complex camera collision system for
ensuring that the view point never “clips”
through background geometry or dynamic
foreground objects.
3. Fighting Games:
● Fighting games are typically two-player games
involving humanoid characters pummeling each
other in a ring of some sort.
Traditionally games in the fighting genre have focused their
technology efforts on
● A rich set of fighting animations.
● Accurate hit detection.
● A user input system capable of detecting complex
button and joystick combinations.
● Crowds, but otherwise relatively static backgrounds.
Modern fighting games include features such as:
● High-definition character graphics, including realistic skin shaders with subsurface scattering and sweat effects.
● High-fidelity character animations.
● Physics-based cloth and hair simulations for the characters.
4. Racing Games:
● This genre encompasses all games whose primary
task is driving a car or other vehicle on some kind
of track.
● A racing game is often very linear, much like older
FPS games. However, travel speed is generally much
faster than in an FPS. Therefore, more focus is placed
on very long corridor-based tracks, or looped
tracks.
● Racing games usually focus all their graphic detail
on the vehicles, track, and immediate surroundings.
This genre focuses on technology such as:
● Various “tricks” are used when rendering distant background
elements, such as employing two-dimensional cards for trees,
hills, and mountains.
● The track is often broken down into relatively simple
two-dimensional regions called “sectors.” These data
structures are used to optimize rendering and visibility
determination, to aid in artificial intelligence and path finding
for non-human-controlled vehicles, and to solve many other
technical problems.
● The camera typically follows behind the vehicle for a
third-person perspective,or is sometimes situated inside the
cockpit first-person style.
● When the track involves tunnels and other “tight” spaces, a
good deal of effort is often put into ensuring that the camera
does not collide with background geometry.

5. Real Time Strategy (RTS):


● In this genre, the player deploys the battle units in
his or her arsenal strategically across a large
playing field in an attempt to overwhelm his or her
opponent. The game world is typically displayed at
an oblique top-down viewing angle.
● The RTS player is usually prevented from
significantly changing the viewing angle in order to
see across large distances. This restriction permits
developers to employ various optimizations in the
rendering engine of an RTS game.
● Older games in the genre employed a grid-based
(cell-based) world construction, and an
orthographic projection was used to greatly
simplify the renderer.
● Modern RTS games sometimes use perspective
projection and a true 3D world, but they may still
employ a grid layout system to ensure that units
and background elements, such as buildings, align
with one another properly
RTS uses the following techniques:
● Each unit is relatively low-res, so that the game can support
large numbers of them on-screen at once.
● Height-field terrain is usually the canvas upon which the game
is designed and played.
● The player is often allowed to build new structures on the
terrain in addition to deploying his or her forces.
● User interaction is typically via single-click and area-based
selection of units, plus menus or toolbars containing
commands, equipment, unit types, building types, etc.

6. Massively Multiplayer Online Games(MMOG)


● An MMOG is defined as any game that supports a
huge number of simultaneous players, usually all
playing in one very large persistent virtual world.
● At the heart of every MMOG is a very powerful battery of servers. These servers maintain the authoritative state of the game world, manage users signing in and out of the game, and provide inter-user chat or voice-over-IP services. Almost all MMOGs require users to pay some kind of subscription fee in order to play.
● Graphics fidelity in an MMOG is almost always lower
than its non-massively multiplayer counterparts, as
a result of the huge world sizes and extremely large
numbers of users supported by these kinds of
games.

7. Other Genres:
● Each game genre has its own particular
technological requirements. This explains why game
engines have traditionally differed quite a bit from
genre to genre.
● With the advent of more and more powerful hardware, differences between genres that arose because of optimization concerns are beginning to evaporate. So it is becoming increasingly possible
to reuse the same engine technology across
disparate genres, and even across disparate
hardware platforms.

Runtime Engine Architecture:


● A game engine generally consists of a tool suite and a runtime component. The sections below describe the major runtime components that make up a typical 3D game engine. Game engines are definitely large software systems, and they are built in layers. Normally upper layers depend on lower layers, but not vice versa. When a lower layer depends upon a higher layer, we call this a circular dependency.
● Dependency cycles are to be avoided in any software system.

1. Target Hardware:
● This represents the computer system or console on
which the game will run. These platforms include
Microsoft Windows and Linux-based PCs, Apple's iPhone and Macintosh, the Xbox One and Xbox 360, the PlayStation family, etc.

2. Device Drivers:
● Device drivers are low-level software components
provided by the operating system or hardware
vendor.
● Drivers manage hardware resources and shield the
operating system and upper engine layers from the
details of communicating with the myriad variants
of hardware devices available.

3. Operating System:
● It orchestrates the execution of multiple programs
on a single computer, one of which is your game.
● Windows employs a time-sliced approach to
sharing the hardware with multiple running
programs, known as preemptive multitasking.
● A PC game can never assume it has full control of the hardware—it must "play nice" with other programs in the system.
● On a console, the operating system is often just a
thin library layer that is compiled directly into your
game executable. The game typically “owns” the
entire machine.

4. Third Party SDKs and Middleware:


● Most game engines leverage a number of
third-party software development kits (SDKs) and
middleware. The functional or class based
interface provided by an SDK is often called an
application programming interface (API).
● Data Structures and Algorithms: games depend heavily on container data structures and the algorithms used to manipulate them.
● Graphics: Game rendering engines are built on top
of a hardware interface library.
● Collision and Physics: Collision detection and rigid body dynamics (known simply as "physics" in the game development community) are often provided by well-known third-party SDKs.
● Character Animation: A number of commercial character animation packages exist.
● Artificial Intelligence: AI middleware provides low-level AI building blocks such as path finding, static and dynamic object avoidance, identification of vulnerabilities within a space (e.g., an open window from which an ambush could come), and a reasonably good interface between AI and animation.
● Biomechanical Character Models: These are animation packages that produce character motion using advanced biomechanical models of realistic human movement.

5. Platform Independent Layer:


● Most game engines are required to be capable of
running on more than one hardware platform.
● The only game studios that do not target at least two different platforms per game are first-party studios, like Sony's Naughty Dog and Insomniac. Therefore, most game engines are architected with a platform independence layer.
● The platform independence layer ensures
consistent behavior across all hardware platforms.

6. Core Systems:
● Assertions: are lines of error-checking code that
are inserted to catch logical mistakes and
violations of the programmer’s original
assumptions.
● Memory management: Virtually every game engine
implements its own custom memory allocation
system(s) to ensure high-speed allocations and
deallocations and to limit the negative effects of
memory fragmentation.
● Math library: provides facilities for vector and matrix math, quaternion rotations, trigonometry, geometric operations with lines, rays, spheres, frusta, etc., spline manipulation, numerical integration, solving systems of equations, and whatever other facilities the game programmers require.
● Custom data structures and Algorithms: are often
hand-coded to minimize or eliminate dynamic
memory allocation and to ensure optimal runtime
performance on the target platform.
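As an illustration of the assertion facility described above, here is a minimal sketch of a custom assertion macro (hypothetical names; real engines typically trigger a debugger breakpoint and strip assertions from shipping builds):

```cpp
#include <cstdio>
#include <cstdlib>

// Strip assertions from final (shipping) builds by defining GAME_FINAL.
#if defined(GAME_FINAL)
    #define GAME_ASSERT(expr) ((void)0)
#else
    #define GAME_ASSERT(expr)                                           \
        do {                                                            \
            if (!(expr)) {                                              \
                std::fprintf(stderr, "Assertion failed: %s (%s:%d)\n",  \
                             #expr, __FILE__, __LINE__);                \
                std::abort();                                           \
            }                                                           \
        } while (0)
#endif

float invert(float x)
{
    GAME_ASSERT(x != 0.0f);   // catch a violated assumption early
    return 1.0f / x;
}
```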

7. Resource Manager:
■ The resource manager provides a unified interface
(or suite of interfaces) for accessing any and all
types of game assets and other engine input data.

8. Rendering Engine:
The rendering engine is one of the largest and most
complex components of a game engine. Renderers
can be architected in many different ways.
➔ Low Level Renderer:
● It encompasses all of the raw rendering facilities of the engine. The design is focused on rendering a collection of geometric primitives as quickly and richly as possible, without much regard for which portions of a scene may be visible.
● Graphics Device Interface: Graphics SDKs, such as DirectX and OpenGL, require a reasonable amount of code to be written just to enumerate the available graphics devices, initialize them, set up render surfaces (back-buffer, stencil buffer, etc.), and so on.
● Other Renderer Components: The other components in the low-level renderer cooperate in order to collect submissions of geometric primitives (sometimes called render packets), such as meshes, line lists, point lists, particles, terrain patches, text strings, and whatever else you want to draw, and render them as quickly as possible.
➔ Scene Graph/Culling Optimizations :
● The low-level renderer draws all of the geometry
submitted to it, without much regard for whether or
not that geometry is actually visible (other than
back-face culling and clipping triangles to the
camera frustum). A higher-level component is
usually needed in order to limit the number of
primitives submitted for rendering, based on some
form of visibility determination.
➔ Visual Effects :
Modern game engines support a wide range of visual
effects such as:
● Particle systems (for smoke, fire, water splashes, etc.).
● Decal systems (for bullet holes, foot prints, etc.).
● Light mapping and environment mapping.
● Dynamic shadows.
● Full-screen post effects, applied after the 3D scene has been rendered to an offscreen buffer.
Some Examples of full-screen Post effects Include:
● High dynamic range (HDR) lighting and bloom.
● Full-screen anti-aliasing (FSAA).
● Color correction and color-shift effects, including bleach bypass, saturation and desaturation effects, etc.
It is common for a game engine to have an effects system
component that manages the specialised rendering needs of
particles, decals and other visual effects.
➔ Front end
Games employ some kind of 2D graphics overlaid on the 3D
scene for various purposes which include-
● The game’s heads-up display (HUD).
● In-game menus, a console, and/or other development tools, which may or may not be shipped with the final product.
● Possibly an in-game graphical user interface (GUI),
allowing the player to manipulate his or her character’s
inventory, configure units for battle, or perform other
complex in-game tasks.

9. Profiling and Debugging Tools:


● Game engineers often need to profile the
performance of their games in order to
optimize performance.
● Developers make heavy use of memory
analysis tools as well. The profiling and
debugging layer, encompasses these tools
and also includes in-game debugging
facilities, such as debug drawing, an in-game
menu system or console, and the ability to
record and play back gameplay for testing
and debugging purposes.
● Most game engines also incorporate a suite of custom profiling and debugging tools, such as:
❖ A mechanism for manually instrumenting the code, so that specific sections of code can be timed (a minimal sketch follows this list).
❖ A facility for displaying the profiling
statistics on-screen while the game is
running.
❖ A facility for dumping performance stats
to a text file or to an Excel spreadsheet.
❖ A facility for determining how much
memory is being used by the engine, and
by each subsystem, including various
on-screen displays.
❖ The ability to dump memory usage,
high-water mark, and leakage stats when
the game terminates and/or during
gameplay.
❖ Tools that allow debug print statements
to be peppered throughout the code
along with an ability to turn on or off
different categories of debug output and
control the level of verbosity of the
output.
❖ The ability to record game events and
play them back.
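Following on from the instrumentation bullet above, a minimal sketch of a scoped timer for manually timing sections of code (hypothetical class; real engines route results to an on-screen display or a dump file):

```cpp
#include <chrono>
#include <cstdio>

// Time a named block of code and print the result when the block exits.
class ScopedTimer
{
public:
    explicit ScopedTimer(const char* label)
        : m_label(label), m_start(std::chrono::steady_clock::now()) {}

    ~ScopedTimer()
    {
        auto end = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(end - m_start).count();
        std::printf("[profile] %s: %.3f ms\n", m_label, ms);
    }

private:
    const char* m_label;
    std::chrono::steady_clock::time_point m_start;
};

void updateAnimation()
{
    ScopedTimer timer("updateAnimation");   // this whole function is timed
    // ... animation work ...
}
```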

10.Collision and Physics:


● Collision detection is important for every
game. Without it, objects would interpenetrate,
and it would be impossible to interact with the
virtual world in any reasonable way. Some
games also include a realistic or semi-realistic
dynamics simulation, which is known as a "physics system". The term rigid body
dynamics is really more appropriate, because
we are usually only concerned with the motion
(kinematics) of rigid bodies and the forces and
torques (dynamics) that cause this motion to
occur.
● Collision and physics are usually quite tightly
coupled. This is because when collisions are
detected, they are almost always resolved as
part of the physics integration and constraint
satisfaction logic.

11. Animation:
● Any game that has organic or semi-organic characters (humans, animals, cartoon characters, or even robots) needs an animation system.
There are five basic types of animation used in games-
➢ Sprite/texture animation
➢ Rigid body hierarchy animation,
➢ Skeletal animation
➢ Vertex animation
➢ Morph targets.

● Skeletal animation permits a detailed 3D character mesh to be posed by an animator using a relatively simple system of bones. As the bones move, the vertices of the 3D mesh move with them.
● There is also a tight coupling between the animation and physics systems when rag dolls are employed. A rag doll is a limp (often dead) animated character, whose bodily motion is simulated by the physics system. The physics system determines the positions and orientations of the various parts of the body by treating them as a constrained system of rigid bodies. The animation system calculates the palette of matrices required by the rendering engine in order to draw the character on-screen.

12. Human Interface Devices (HID)


● Every game needs to process input from the player, obtained from various human interface devices (HIDs), including:
➢ The keyboard and mouse,
➢ A joypad
➢ Other specialized game controllers,
like steering wheels, fishing rods,
dance pads, the WiiMote, etc.
● The HID system massages the raw data coming from the hardware, introducing a dead zone around the center point of each joypad stick, de-bouncing button-press inputs, detecting button-down and button-up events, interpreting and smoothing accelerometer inputs (e.g., from the PLAYSTATION 3 Sixaxis controller), and more.
● It provides a mechanism allowing the
player to customize the mapping between
physical controls and logical game
functions. It also includes a system for
detecting chords, sequences and gestures.

13. Audio:
● It is just as important as graphics in any
game.
● No great game is completed without a
stunning audio engine.
● Audio engines vary greatly in sophistication.
➢ Quake's and Unreal's audio engines are pretty basic, and game teams usually augment them with custom functionality or replace them with an in-house solution.
➢ For DirectX platforms (PC and Xbox 360),
Microsoft provides an excellent audio
tool suite called XACT .
➢ Electronic Arts has developed an
advanced, high-powered audio engine
internally called SoundR!OT.
➢ In conjunction with first-party studios
like Naughty Dog, Sony Computer
Entertainment America (SCEA) provides
a powerful 3D audio engine called
Scream.
14. Online Multiplayer/ Networking:
● Many games permit multiple human players to
play within a single virtual world. There are
different types such as:
➢ Single-Screen Multiplayer
➢ Split Screen Multiplayer
➢ Networked Multiplayer
➢ Massively Multiplayer online games (MMOG)
● Multiplayer games are quite similar in many
ways to their single-player counterparts.
However, support for multiple players can
have a profound impact on the design of
certain game engine components.
● Many game engines treat single-player mode
as a special case of a multiplayer game.
● The Quake engine is well known for its
client-on-top-of-server mode, in which a single
executable, running on a single PC, acts both
as the client and the server in single-player
campaigns.

15. Gameplay Foundation Systems:


➢ The term gameplay refers to the action
that takes place in the game, the rules that
govern the virtual world in which the game
takes place, the abilities of the player
character(s) (known as player mechanics) and
of the other characters and objects in the
world, and the goals and objectives of the
players.
➢ It provides a suite of core facilities, upon
which game specific logic can be implemented
conveniently.
Gameworld and object models:
● The gameplay foundations layer introduces
the notion of a game world, containing both
static and dynamic elements. The collection of
object types that make up a game is called the
Game Object Model. The game object model
provides a real-time simulation of a heterogeneous collection of objects in the virtual game world, which includes:
➢ Static background geometry, like buildings,
roads, terrain (often a special case), etc.
➢ Dynamic rigid bodies, such as rocks, soda
cans, chairs, etc.
➢ Player characters (PC).
➢ Non-player characters (NPC).
➢ Weapons.
➢ Projectiles.
➢ Vehicles.
➢ Lights (which may be present in the dynamic
scene at run time, or only used for static
lighting offline).
➢ Cameras.
● The term software object model refers to the
set of language features, policies, and
conventions used to implement a piece of
object-oriented software.

Event System:
● An event driven architecture is a common approach to
inter-object communication.
● In an event-driven system, the sender creates a little data
structure called an event or message, containing the
message’s type and any argument data that are to be
sent. The event is passed to the receiver object by calling
its event handler function.
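A minimal sketch of this sender/receiver idea (hypothetical types; real engines usually add event queues, broadcasting and data-driven argument lists):

```cpp
#include <string>
#include <vector>

// A little data structure carrying the message's type and its arguments.
struct Event
{
    std::string        type;        // e.g. "Explosion"
    std::vector<float> arguments;   // e.g. damage radius, damage amount
};

// Receivers expose an event handler; the sender simply calls it.
class GameObject
{
public:
    virtual ~GameObject() = default;

    virtual void onEvent(const Event& e)
    {
        (void)e;   // default behavior: ignore unknown events
    }
};

void sendEvent(GameObject& receiver, const Event& e)
{
    receiver.onEvent(e);   // pass the event to the receiver's handler
}
```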

Scripting System:
● Many game engines employ a scripting language in order to
make development of game-specific gameplay rules and
content easier and more rapid.
● Without a scripting language, you must recompile and relink
your game executable every time a change is made to the
logic or data structures used in the engine.
● Some engines allow scripts to be reloaded while the game
continues to run. Other engines require the game to be shut
down prior to script recompilation.

Artificial Intelligence Foundations:


● The AI system governs the behavior of non-player characters and other autonomous entities, driving them with game-specific logic and constraints. Game-specific AI logic can be developed quite easily on top of reusable AI middleware such as Kynapse.
● Kynapse provides a powerful suite of features, which includes:
➢ A network of path nodes or roaming volumes.
➢ Collision information around the edges
➢ Knowledge of the entrances and exits from a region
➢ A pathfinding engine.
➢ Custom world model which tells the AI system where all the
entities of interest are, permits dynamic avoidance of moving
objects and so on.
● Kynapse also provides an architecture for the AI decision layer
including the concept of brains, agents and actions.

16. Game-Specific Subsystems:


● Gameplay systems are usually numerous, highly varied and
specific to the game being developed.
● If a clear line could be drawn between the engine and the game, it would lie between the game-specific subsystems and the gameplay foundations layer.

Tools and Asset pipeline :


● Any game engine must be fed a great deal of data in the form of game assets, configuration files, scripts and so on.
● In the corresponding architecture diagram, thick dark grey arrows show how data flows from the tools used to create the original source assets all the way through to the game engine itself, while thinner light grey arrows show how the various types of assets refer to or use other assets.
Digital Content Creation Tools:
● A game engine's input data comes in a wide variety of forms, from 3D mesh data to texture bitmaps to animation data to audio files. All of this source data must be created and manipulated by artists. The tools that the artists use are called digital content creation (DCC) applications.
● A DCC application is usually targeted at the creation of one particular type of data, although some tools can produce multiple data types.
● Some types of game data cannot be created using an off-the-shelf DCC app. Most game engines provide a custom editor for laying out game worlds.
● The tools must be relatively easy to use, and they absolutely must be reliable, if a game team is going to be able to develop a highly polished product in a timely manner.

Asset Conditioning Pipeline:


● The data formats used by DCC applications are rarely suitable for direct use in-game.
● The DCC app's in-memory model of the data is usually much more complex than what the game engine requires.
● A DCC application's file format is often too slow to read at runtime, and in some cases it is a closed proprietary format.
● Once data has been exported from the DCC app, it often must be further processed before being sent to the game engine. If a game studio is shipping its game on more than one platform, the intermediate files might be processed differently for each target platform.
● The pipeline from the DCC app to the game engine is sometimes called the asset conditioning pipeline.

3D Models / Mesh Data:


● The visible geometry which we see in a game is typically made of two kinds of data:
a. Brush Geometry:
● It is defined as a collection of convex hulls, each of which is defined by multiple planes. Brushes are typically created and edited directly in the game world editor.
Pros:
➢ Fast and easy to create
➢ Accessible to game designers often used to block out a game
level for prototyping purposes.
➢ Can serve both as collision volumes and as renderable geometry.
Cons:
➢ Low resolution; difficult to create complex shapes.
➢ Cannot support articulated objects or animated characters.

b. 3D Models (Meshes):
● 3D models (also referred to as meshes) are superior to
brush geometry. A mesh is a complex shape composed
of triangles and vertices.
● On modern graphics hardware all shapes must eventually be translated into triangles prior to rendering. A mesh typically has one or more materials applied to it in order to define visual surface properties.
● Meshes are typically created in a 3D modelling package
such as 3DS Max, Maya etc.
● Exporters must be written to extract the data from the
digital content creation tool and store it on disk in a
form that is digestible by the engine.
● Game teams often create custom file formats and custom
exporters to go with them.

Skeletal Animation Data:


● A skeletal mesh is a special kind of mesh that is bound to a skeletal hierarchy for the purposes of articulated animation. Such a mesh is sometimes called a skin, because it forms the skin that surrounds the invisible underlying skeleton.
● Each vertex of a skeletal mesh contains a list of indices indicating to which joints in the skeleton it is bound. A vertex usually also includes a set of joint weights, specifying the amount of influence each joint has on the vertex.
● To render a skeletal mesh, the game engine requires three distinct kinds of data:
➢ The mesh itself
➢ The skeletal hierarchy
➢ One or more animation clips
● If multiple meshes are bound to a single skeleton, then it is better to export the skeleton as a distinct file.
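A minimal sketch of the per-vertex skinning data described above (four joint influences per vertex is a common, but not universal, budget):

```cpp
#include <cstdint>

struct Vec3 { float x, y, z; };

// One vertex of a skeletal (skinned) mesh.
struct SkinnedVertex
{
    Vec3         position;        // model-space position
    Vec3         normal;          // used for lighting
    std::uint8_t jointIndex[4];   // which joints in the skeleton influence this vertex
    float        jointWeight[4];  // how strongly each joint influences it (sums to 1)
};
```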

Audio Data:
● Audio clips are usually exported from Sound Forge or some
other audio production tool in a variety of formats and at a
number of different data sampling rates.
● They may be in mono, stereo, 5.1, 7.1 or other multi-channel
configurations. Wave files are common but other file formats
such as PlayStation ADPCM are also commonplace.

Particle Systems Data:


● Modern games make use of complex particle effects. They are created by artists who specialise in the creation of visual effects.
● Houdini permits film quality effects to be created
however most game engines are not capable of
rendering the full gamut of effects that can be
created with Houdini.
● Many game companies create a custom particle
effect editing tool, which exposes only the effects that
the engine actually supports. A custom tool might
also let the artist see the effect exactly as it will
appear in-game.

Game World data and the World editor:


● The game world is where everything in a game engine comes together. A number of commercially available game engines provide good world editors.
● Writing a good world editor is difficult, but it is an extremely important part of any good game engine.

3D Math For Games


● A game is a mathematical model of a virtual world
simulated in real time on a computer.
● Game programmers make use of virtually all branches of mathematics, from trigonometry to algebra to statistics to calculus.

Solving 3D problems in 2D:


● Sometimes we can solve a 3D vector problem by thinking and drawing pictures in 2D. Sadly, this equivalence between 2D and 3D does not always hold; for example, the cross product is only defined in 3D.
● Once you understand the solution in 2D, you can think about how the problem extends into three dimensions.

Points and vectors:


The majority of modern 3D games are made up of
three-dimensional objects in a virtual world. A game needs to
keep track of the positions, orientations and scales of all
these objects, animate them in the game world and transform
them into screen space so they can be rendered on screen.
In games 3D objects are almost always made up of triangles,
the vertices of which are represented by points.
Points and Cartesian coordinates:
● A point is a location in n-dimensional space. The
cartesian coordinate system is the most common system
employed by game programmers.
● It uses two or three mutually perpendicular axes to
specify a position in 2D or 3D space.
● Some other common systems include:
➢ cylindrical coordinates
➢ spherical coordinates
Left handed v/s Right-handed Coordinate system:
● We have two choices when arranging the three mutually perpendicular axes: right-handed (RH) and left-handed (LH).
● If the y-axis points upward and x points to the right, then z comes toward us (out of the page) in a right-handed system, and away from us (into the page) in a left-handed system.

● It is easy to convert from LH to RH coordinates and vice versa. Left-handed and right-handed conventions apply to visualization only and not to the underlying mathematics.
● 3D graphics programs typically use a left-handed coordinate system, with the y-axis pointing up, x to the right and positive z pointing away from the viewer.
● When 3D graphics are rendered onto a 2D screen using
this particular coordinate system, increasing
z-coordinates correspond to increasing depth into the
scene.
Vectors:
● A vector is a quantity that has both magnitude and direction in n-dimensional space. A vector can be visualized as a directed line segment extending from a point called the tail to a point called the head.
● A 3D vector can be represented by a triple of scalars (x, y, z), just as a point can be. A vector is just an offset relative to some known point; a vector can be moved anywhere in 3D space, and as long as its magnitude and direction don't change, it is the same vector.
● A vector can be used to represent a point, provided that we fix the tail of the vector to the origin of our coordinate system.
● Cartesian basis vectors: It is often useful to define three mutually orthogonal unit vectors corresponding to the three principal Cartesian axes. The unit vector along the x-axis is typically called i, the y-axis unit vector is called j, and the z-axis unit vector is called k.

Vector operations:
Most of the mathematical operations that we perform on scalars can be applied to vectors as well (a minimal code sketch of these operations follows this list).

a. Multiplication by a scalar: It is accomplished by multiplying each individual component of a vector a by the scalar s.
● Multiplication by a scalar has the effect of scaling the magnitude of the vector while leaving its direction unchanged. Multiplication by -1 flips the direction of a vector.
● The scalar factor can be different along each axis. We call
this non-uniform scale.

b. Addition and subtraction: Addition of two vectors a and b is defined as the vector whose components are the sums of the components of a and b. Vector subtraction a - b is nothing more than addition of a and -b.

c. Magnitude: The magnitude of a vector is a scalar representing the length of the vector as it would be measured in 2D or 3D space.
d. Normalization and Unit Vectors: A unit vector is a vector with a magnitude (length) of 1. Given an arbitrary vector v of length |v|, we can convert it into a unit vector u that points in the same direction as v but has unit length. To do this, we simply multiply v by the reciprocal of its magnitude. We call this normalization.

e. Normal Vectors: A vector is said to be normal to a surface if it is perpendicular to the surface.
● In 3D graphics, lighting calculations make heavy use of normal vectors to define the direction of a surface relative to the direction of the light rays impinging upon it.
● A normalized vector is any vector of unit length, while a normal vector is any vector that is perpendicular to a surface, whether or not it is of unit length.
f. Dot Product and Projection: The dot product of two vectors yields a scalar.
● The dot product is good for testing whether two vectors are collinear or perpendicular, or whether they point in roughly the same or roughly opposite directions.
g. Cross Product: The cross product of two vectors yields another vector that is perpendicular to the two vectors being multiplied.
● The magnitude of the cross product is equal to the area of the parallelogram whose sides are a and b.
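A minimal sketch of these vector operations on a simple 3D vector type (hypothetical Vec3; production math libraries add operator overloading, SIMD support and much more):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3  scale(const Vec3& a, float s)      { return { a.x * s, a.y * s, a.z * s }; }
Vec3  add(const Vec3& a, const Vec3& b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3  sub(const Vec3& a, const Vec3& b)  { return add(a, scale(b, -1.0f)); } // a + (-b)

float dot(const Vec3& a, const Vec3& b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
float magnitude(const Vec3& a)           { return std::sqrt(dot(a, a)); }

// Normalization: multiply by the reciprocal of the magnitude to get a unit vector.
Vec3  normalize(const Vec3& a)           { return scale(a, 1.0f / magnitude(a)); }

// Cross product: perpendicular to both inputs; its magnitude equals the area of
// the parallelogram whose sides are a and b.
Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
```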

Linear Interpolation of Points and Vectors:


● In games we often need to find a vector that is midway between two known vectors.
● Linear interpolation is a simple mathematical operation that finds an intermediate point between two known points. The name of this operation is often shortened to LERP.
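In formula form, with β between 0 and 1 selecting how far along the segment from A to B the result lies:

```latex
L = \mathrm{LERP}(A, B, \beta) = (1 - \beta)\,A + \beta\,B, \qquad 0 \le \beta \le 1
```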

Matrices:
● A matrix is a rectangular array of m × n scalars. Matrices are a convenient way of representing linear transformations such as translation, rotation and scale.
● A matrix M is usually written as a grid of scalars enclosed in square brackets, where the subscripts r and c represent the row and column indices of an entry, respectively.
● When all of the row and column vectors of a 3 × 3 matrix are of unit magnitude and mutually perpendicular, we call it a special orthogonal matrix. This is also known as an isotropic matrix, or an orthonormal matrix.
● Under certain constraints, a 4 × 4 matrix can represent arbitrary 3D transformations, including translations, rotations, and changes in scale. These are called transformation matrices, and they are the kinds of matrices that will be most useful to us as game engineers. The transformations represented by a matrix are applied to a point or vector via matrix multiplication.

Matrix Multiplication:
● The product P of two matrices A and B is written P = AB. If
A and B are transformation matrices, then the product P
is another transformation matrix that performs both of
the original transformations. For example, if A is a scale
matrix and B is a rotation, the matrix P would both scale
and rotate the points or vectors to which it is applied.

Representing Points and Vectors as Matrices:


Points and vectors can be represented as row matrices (1 × n)
or column matrices (n × 1), where n is the dimension of the
space we’re working with (usually 2 or 3).
Identity Matrix:
● The identity matrix is a matrix that, when multiplied by
any other matrix, yields the very same matrix. It is usually
represented by the symbol I. The identity matrix is always
a square matrix with 1’s along the diagonal and 0’s
everywhere else.

Matrix Inversion:
● The inverse of a matrix A is another matrix (denoted A⁻¹) that undoes the effects of matrix A.
● When a matrix is multiplied by its own inverse, the result is always the identity matrix.

Transposition:
● The transpose of a matrix M, is obtained by reflecting the
entries of the original matrix across its diagonal. In other
words, the rows of the original matrix become the
columns of the transposed matrix, and vice-versa.
● The inverse of an orthonormal (pure rotation) matrix is
exactly equal to its transpose.
● Transposition can also be important when moving data from one math library to another, because some libraries use column vectors while others expect row vectors. The matrices used by a row-vector-based library will be transposed relative to those used by a library that employs the column-vector convention.

Homogeneous Coordinates:
● A 2 × 2 matrix can represent a rotation in two dimensions, rotating a vector r through an angle of φ degrees (where positive rotations are counter-clockwise).
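Written out, assuming a row-vector convention (the vector is multiplied on the left of the matrix), the rotation is:

```latex
r' = r\,R =
\begin{bmatrix} r_x & r_y \end{bmatrix}
\begin{bmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{bmatrix}
```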

Atomic Transformation Matrices:


● Any affine transformation matrix can be created by simply concatenating a sequence of 4 × 4 matrices representing pure translations, pure rotations, pure scale operations, and/or pure shears. Such a 4 × 4 matrix is composed of:

➢ the upper 3 × 3 matrix U, which represents the rotation and/or scale,
➢ a 1 × 3 translation vector t,
➢ a 3 × 1 vector of zeros 0 = [ 0 0 0 ]T, and
➢ a scalar 1 in the bottom-right corner of the matrix
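Putting those four pieces together (again assuming a row-vector convention), the 4 × 4 affine matrix and its application to a point p look like this:

```latex
M = \begin{bmatrix} U & \mathbf{0} \\ \mathbf{t} & 1 \end{bmatrix},
\qquad
\begin{bmatrix} \mathbf{p}' & 1 \end{bmatrix} = \begin{bmatrix} \mathbf{p} & 1 \end{bmatrix} M
```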

Coordinate Spaces:
● Applying a transformation to a rigid object is like applying that same transformation to every point within the object. A point (position vector) is always expressed relative to a set of coordinate axes.
● A set of coordinate axes represents a frame of reference,
so we sometimes refer to a set of axes as a coordinate
frame (or just a frame). People in the game industry also
use the term coordinate space (or simply space) to refer
to a set of coordinate axes.
➢ Model Space: The positions of the triangles’ vertices
are specified relative to a Cartesian coordinate
system which we call model space. The model space
origin is usually at a central location within the
object.
➢ World Space: It is a fixed coordinate space, in which
the positions, orientations, and scales of all objects
in the game world are expressed. This coordinate space ties all the individual objects together into a cohesive virtual world. The location of the world-space origin is arbitrary.
➢ View Space: It is a coordinate frame fixed to the camera. The view-space origin is placed at the focal point of the camera.

Change of Basis:
● It is often quite useful to convert an object’s position,
orientation, and scale from one coordinate system into
another. We call this operation a change of basis.
● Coordinate Space Hierarchies:
Coordinate frames are relative. That is, if you want to
quantify the position, orientation, and scale of a set of
axes in three-dimensional space, you must specify these
quantities relative to some other set of axes. This implies that coordinate spaces form a hierarchy: every coordinate space is a child of some other coordinate space, and the
parent; it is at the root of the coordinate-space tree, and
all other coordinate systems are ultimately specified
relative to it, either as direct children or more-distant
relatives.

Quaternions:
● A matrix is not always an ideal representation of a rotation. We use quaternions instead for a number of reasons:
➢ A matrix needs nine floating-point values to represent a rotation, which seems excessive considering that we only have three degrees of freedom—pitch, yaw, and roll.
➢ We want a rotational representation that is less expensive.
➢ We want to be able to find lots of intermediate rotations between A and B over the course of an animation.
● There is a rotational representation that overcomes these
three problems. It is a mathematical object known as a
quaternion. A quaternion looks a lot like a
four-dimensional vector, but it behaves quite differently.
We usually write quaternions using non-italic,
non-boldface type, like this: q = [ qx qy qz qw ].
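For reference, a unit quaternion representing a rotation of angle θ about a unit axis a can be written in this [qx qy qz qw] layout as:

```latex
q = \begin{bmatrix} a_x \sin\tfrac{\theta}{2} & a_y \sin\tfrac{\theta}{2} & a_z \sin\tfrac{\theta}{2} & \cos\tfrac{\theta}{2} \end{bmatrix}
```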

Gimbal Lock:
● Gimbal lock is the loss of one degree of freedom in a three-dimensional, three-gimbal mechanism that occurs when the axes of two of the three gimbals are driven into a parallel configuration, "locking" the system into rotation in a degenerate two-dimensional space.
Comparison Of Rotational Representations:
Euler Angles:
● A rotation represented via Euler angles consists of three
scalar values: yaw, pitch, and roll. These quantities are
sometimes represented by a 3D vector
● Euler angles cannot be interpolated easily when the
rotation is about an arbitrarily-oriented axis.
● Euler angles are prone to a condition known as gimbal lock. This occurs when a 90-degree rotation causes one of the three principal axes to "collapse" onto another principal axis.
● The order in which the rotations are performed around
each axis matters. Each ordering may produce a different
composite rotation. No one standard rotation order exists
for Euler angles across all disciplines.
● Euler angles depend upon the mapping from the x,y and z
axes onto the natural front, left, right and up directions
for the object being rotated.

3x3 Matrices:
● A 3x3 matrix is a convenient and effective rotational
representation. It does not suffer from gimbal lock , and it
can represent arbitrary rotations uniquely.
● It can be applied to points and vectors in a
straightforward manner via matrix multiplication.
● Most CPUs and all GPUs now have built-in support for
hardware-accelerated dot products and matrix
multiplication. Rotations can also be reversed by finding
an inverse matrix, which for a pure rotation matrix is the
same thing as finding the transpose—a trivial operation.
And 4 × 4 matrices offer a way to represent arbitrary affine transformations—rotations, translations, and scaling—in a totally consistent way.
● However, rotation matrices are not easily interpolated, and they take up a lot of storage.

Axis+Angle:
● A rotation can be represented as a unit vector a defining the axis of rotation, plus a scalar θ for the angle of rotation in radians.
● The benefits of the axis+angle representation are that it is
reasonably intuitive and also compact.
● However, rotations in this format cannot be easily interpolated, and they cannot be applied to points and vectors in a straightforward way—one needs to convert the axis+angle representation into a matrix or quaternion first.

SQT Transformations:
● A quaternion can be combined with a translation vector and a scale factor to represent a complete transform. We call this an SQT transform, because it contains a scale factor, a quaternion for rotation, and a translation vector.
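As a data structure, an SQT transform might look like the following minimal C++ sketch (hypothetical Vec3 and Quaternion types; some engines store a non-uniform scale vector instead of a single float):

```cpp
struct Vec3       { float x, y, z; };
struct Quaternion { float x, y, z, w; };   // rotation stored as [qx qy qz qw]

// Scale + Quaternion + Translation: a compact alternative to a full 4x4 matrix.
struct SQT
{
    float      scale;        // uniform scale factor
    Quaternion rotation;     // orientation as a unit quaternion
    Vec3       translation;  // position
};
```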

Dual Quaternions:
● Complete transformations involving rotation, translation,
and scale can be represented using a mathematical
object known as a dual quaternion. A dual quaternion is
like an ordinary quaternion, except that its four
components are dual numbers instead of regular
real-valued numbers.

● ε is a magical number called the dual unit, defined by ε² = 0 (even though ε itself is not zero).
● It works analogously to the imaginary number i = √-1, which is used when writing a complex number as the sum of a real and an imaginary part.

Rotations and Degrees of Freedom:


● The term “degrees of freedom ” (or DOF for short) refers to
the number of mutually-independent ways in which an
object’s physical state (position and orientation) can
change.
● A three-dimensional object has three degrees of freedom
in its translation (along the x-, y-, and z-axes) and three
degrees of freedom in its rotation (about the x-, y-, and
z-axes), for a total of six degrees of freedom.
● The constraints indicate that the parameters are not
independent—a change to one parameter induces
changes to the other parameters in order to maintain the
validity of the constraint(s).

Real World Mechanics


What is it?
An understanding of motion and the forces that drive it is crucial in understanding games. Most of the objects in games move; that is what makes them dynamic, whether they are 2-dimensional or 3-dimensional characters. They and their game environments are in constant motion.

Principle of Vectors:
A vector is a line segment with magnitude (length) and direction. It is used to represent measurements such as displacement, velocity and acceleration. In 2D, a vector has 2 coordinates (x and y) and in 3D, a vector has 3 coordinates (x, y and z). (In pure mathematics, a vector is a set of changing coordinate instructions.)
The length of a vector v is called its magnitude and is written |v|.

● Sometimes it is necessary to scale a vector so that it has a length equal to 1. This process is called normalizing, and the resultant vector, which still points in the same direction, is called a unit vector (a vector whose magnitude is 1).
● To find the unit vector, each coordinate of the vector is
divided by the vector’s length.
Two further important calculations can be performed with
vectors:-
Dot Product:
The dot product can be used to calculate the angle between two vectors. This is done by taking two vectors v and w, multiplying their respective coordinates together and then adding the results. The dot product results in a single value.

The most useful application of the dot product is working out the angle between two vectors.
If the dot product is greater than zero, the vectors are less
than 90° apart. If the dot product equals zero, then they are at
right angles (perpendicular), and if the dot product is less than zero, then they are more than 90° apart.
In computer graphics, a positive value always indicates an anticlockwise turn. The angle calculated between two vectors using the dot product is always positive.
For the dot product, if the order of the operands is reversed, the resulting value remains the same.
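In symbols, for 3D vectors v and w:

```latex
\mathbf{v}\cdot\mathbf{w} = v_x w_x + v_y w_y + v_z w_z = |\mathbf{v}|\,|\mathbf{w}|\cos\theta,
\qquad
\theta = \cos^{-1}\!\left(\frac{\mathbf{v}\cdot\mathbf{w}}{|\mathbf{v}|\,|\mathbf{w}|}\right)
```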
Cross Product:
The cross product of two vectors results in another vector. The
resulting vector is perpendicular (90°) to both the initial
vectors.
Cross product is only defined for 3D.
The cross product of two vectors v and w is denoted by v × w.
The equation is defined in terms of the standard 3D unit vectors (written out after this section). These vectors are three unit-length vectors orientated in the directions of the x, y, and z axes.
In the equation, the first part determines the value of the x coordinate of the resulting vector, as the unit vector (1,0,0) only has a value for the x coordinate. The same occurs in the other two parts for the y and z coordinates.
To find the cross product of two 2D vectors, the vectors first
need to be converted into 3D coordinates. This is as easy as
adding a zero z value.
For the cross product, if the order of the operands is reversed (w × v), the resulting vector is different: it will be a vector of the same length but pointing in the exact opposite direction.
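The cross product equation referred to above, written in terms of the standard unit vectors i, j and k:

```latex
\mathbf{v}\times\mathbf{w} = (v_y w_z - v_z w_y)\,\mathbf{i} + (v_z w_x - v_x w_z)\,\mathbf{j} + (v_x w_y - v_y w_x)\,\mathbf{k}
```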
Defining 2d and 3d Space:
In 2D or 3D space, the principles of vectors are applied in the
same way. The difference between a 2D coordinate and
a 3D coordinate is just another value. In 3D game engines such as Unity, 2D games are created by ignoring one of the axes (usually the z axis). All movements thereafter only move and rotate the objects in the x and y axes.

Cameras:
The camera in a game defines the visible area on the screen. In
addition to defining the height and width of the view, the
camera also sets the depth of what can be seen. The entire
space visible by a camera is called the view volume. If an object is not inside the view volume, it is not drawn on the screen. The shape of the view volume can be set to orthographic or perspective. Both views are constructed from an eye position (representing the viewer's location), a near clipping plane, the screen, and a far clipping plane.

Orthographic Camera:
An orthographic camera projects all points of 3D objects between the clipping planes in parallel onto a screen plane. The screen plane is the view the player ends up seeing. The viewing volume of an orthographic camera is the shape of a rectangular prism.

Perspective Camera:
A perspective camera projects all points of 3D objects between
the clipping planes back to the eye. The near clipping plane
becomes the screen. The viewing volume of a perspective
camera is called the frustum, as it takes on the volume of a
pyramid with the top cut off. The eye is located at the apex
of the pyramid.
● The camera in a game is a critical component as it
presents the action to the player. It is literally the lens
through which the game world is perceived.
● Cameras are also important for optimizing game
performance. All objects inside the view volume get drawn
to the screen. The more objects to be drawn, the slower
the frames per second. Even objects behind other objects
and not noticeably visible will be considered by the game
engine as something to be drawn. So even though an
object does not appear on the screen, if it is inside the
camera’s view volume it will be processed.
● Whether the camera is looking at an orthographic or a
perspective view, the coordinate system within the game
environment remains the same.
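A minimal Unity C# sketch of configuring a camera's view
volume; the clipping-plane distances, field of view, and
orthographic size are arbitrary example values:

using UnityEngine;

public class CameraSetup : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();

        cam.nearClipPlane = 0.3f;    // near clipping plane
        cam.farClipPlane = 500f;     // far clipping plane: nothing beyond this is drawn

        // Perspective view volume (a frustum): set the vertical field of view in degrees.
        cam.orthographic = false;
        cam.fieldOfView = 60f;

        // Orthographic view volume (a rectangular prism): set half the vertical view height.
        // cam.orthographic = true;
        // cam.orthographicSize = 5f;
    }
}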

Local and world Coordinate systems:


● There are two coordinate systems at work in game
environments: local and world. The local system is relative
to a single game object, and the world system specifies
the orientation and coordinates for the entire world.
● A game object can move and rotate relative to its local
coordinate system or the world. How it moves locally
depends on the position of the origin,the (0,0,0) point.
● When a model is selected, its origin is evident by the
location of the translation handles used for moving the
model around in the Scene.
● Any object at the world origin when rotated will orientate
in the same way around local and world axes. However,
when the model is not at the world origin, a rotation in
world coordinates will move as well as reorient the model.
Local rotations are not affected by the model’s location in
the world.

Translation, Rotation and Scaling:


Three transformations can be performed on an object whether
it be in 2D or 3D: translation, rotation, and scaling.

● Translation refers to moving an object and is specified by


a vector. A translation occurs whenever the x,y and z
values are modified all at once with a vector or one at a
time.
● Rotation turns an object about a given axis by a specified
number of degrees.An object can rotate about its x, y, or z
axes or the world x, y, or z axes. Combined rotations are
also possible. These cause an object to rotate around an
arbitrary axis defined by a vector value.
● Scaling changes the size of an object. An object can be
scaled along the x,y or z axis. Values for the scale are
always multiplied against the original size of the object.
Therefore, a scale of zero is illegal. If a negative scaling
value is used, the object is flipped. For example, setting
the y axis scale to -1 will turn the object upside down.
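A minimal Unity C# sketch showing the three transformations
applied to a game object; the numbers are arbitrary example
values:

using UnityEngine;

public class TransformDemo : MonoBehaviour
{
    void Start()
    {
        transform.Translate(new Vector3(1f, 0f, 2f));    // translation: move by a vector
        transform.Rotate(0f, 45f, 0f);                   // rotation: 45 degrees about the y axis
        transform.localScale = new Vector3(2f, 1f, 2f);  // scaling: twice as wide and deep
        // transform.localScale = new Vector3(1f, -1f, 1f); // a negative scale flips the object
    }
}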
Polygons and Normals:
● A polygon is a small shape, usually a triangle and
sometimes a square, that makes up 2D and 3D meshes. A
polygon in a mesh also represents a plane. A plane is a 3D
object that has width and height but no depth. It is
completely flat and can be orientated in any direction,
but not twisted.
● The sides of planes are defined by straight edges between
vertices. Each vertex has an associated point in space. In
addition, planes only have one visible side, which means that
they can only be seen when viewed from that side.
● In order to define the visible side of a plane, it has an
associated vector called a normal. A normal is a vector
that is orthogonal (90°) to the plane.
● Knowing the normal to a plane is critical in determining
how textures and lighting affect a model. It is the side the
normal comes out of that is visible and therefore
textured and lit.
● The angle that a normal makes with any rays from light
sources is used to calculate the lighting effect on the
plane.
● The closer the normal becomes to being parallel with the
vector to the light source, the brighter the plane will be
drawn. This lighting model is called Lambert shading and
is used in computer graphics for diffuse lighting.
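A minimal sketch of the Lambert idea in Unity C#: the brightness
of a surface comes from the dot product of its normal and the
direction to the light. Both vectors are assumed to be supplied
by the caller:

using UnityEngine;

public class LambertDemo : MonoBehaviour
{
    // Returns 0..1: the closer the normal is to pointing at the light, the brighter the plane.
    float Brightness(Vector3 normal, Vector3 toLight)
    {
        // The dot product of two unit vectors is the cosine of the angle between them.
        return Mathf.Max(0f, Vector3.Dot(normal.normalized, toLight.normalized));
    }
}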

Two-Dimensional Games in a 3D Game Engine:


Although 3D game engines are not often written to support 2D
game creation, many developers use them for this purpose
because:
● They are familiar with them,
● They do not have to purchase software licenses for other
development platforms, and
● The engines provide support for multiple platforms and
game types
When Unity added the ability to port to Android and iOS
platforms in its earlier versions, it became attractive to
developers for the creation of 2D content. At the time, Unity
was still strictly 3D, and 2D had to be faked by ignoring one of
the axes in the 3D world.
Nowadays Unity supports 2D game creation by putting the
camera into orthographic mode and looking straight down the
z axis. This setup happens by default when a new project is
created in 2D mode. The scene is still essentially 3D, but all the
action appears and operates in the X-Y plane.

Quaternions:
Quaternions are mathematical constructs that allow for
rotations around the three axes to be calculated all at once in
contrast to Euler angles that are calculated one after the
other. A quaternion has an x, y, and z component as well as a
rotation value.
In games, using Euler rotations can cause erratic orientations
of objects. Quaternions are used throughout modern game
engines including Unity, as they do not suffer from gimbal
lock.
Unity uses quaternions for storing orientations. The most used
quaternion functions are:
LookRotation():
Given a vector, LookRotation() will calculate the equivalent
quaternion to turn an object to look along that vector.
Angle():
The Angle() function calculates the angle between two
rotations. It might be used to determine if an enemy is directly
facing toward or away from the player.

Slerp():
Slerp() takes a starting rotation and an ending rotation and
cuts the rotation between them into small arcs. It is extremely
effective in making rotating objects change from one direction
to another in a smooth motion.
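A minimal Unity C# sketch using the three functions together
to turn an object smoothly toward a target; the target field and
the one-degree threshold are assumptions for illustration:

using UnityEngine;

public class QuaternionDemo : MonoBehaviour
{
    public Transform target;

    void Update()
    {
        // LookRotation: the rotation needed to look along the vector toward the target.
        Vector3 direction = target.position - transform.position;
        Quaternion lookAtTarget = Quaternion.LookRotation(direction);

        // Angle: how far (in degrees) the current orientation is from facing the target.
        if (Quaternion.Angle(transform.rotation, lookAtTarget) > 1f)
        {
            // Slerp: turn a little further toward the target each frame for a smooth motion.
            transform.rotation = Quaternion.Slerp(transform.rotation, lookAtTarget, Time.deltaTime);
        }
    }
}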

The Laws Of Physics:


The laws of physics are a set of complex rules that describe
the physical nature of the universe. Adhering to these rules
when creating a game is key to creating a believable
environment in which the player interacts.
Physics is a fundamental element in games as it controls the
way in which objects interact with the environment and how
they move.
The Key laws which are used in game environments are
Newton’s three laws of motion and Law of Gravity.

The Law of Gravity:


In game environments, applying a downward velocity to an
object simulates gravity. The y coordinate of the object’s
position is updated with each game loop to make it move in a
downward direction.
transform.position.y = transform.position.y - 1;
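The line above is pseudocode; in Unity's C# API the position has
to be copied, modified, and reassigned. A minimal sketch, with
the fall speed an arbitrary example value:

using UnityEngine;

public class SimpleGravity : MonoBehaviour
{
    public float fallSpeed = 1f;   // units per second (example value)

    void Update()
    {
        Vector3 p = transform.position;
        p.y -= fallSpeed * Time.deltaTime;  // move down a little every game loop
        transform.position = p;
    }
}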

Newton’s First Law:


“Every body continues in its state of rest, or of uniform motion
in a straight line, unless it is compelled to change that state by
forces impressed upon it.”
This means that an object will remain stationary and a moving
object will keep moving in the same direction unless pushed or
pulled. In the real world, a number of different forces act to
move or slow objects. These include gravity and friction. In
addition, objects colliding with each other will also act to
change their movement.
Newton’s Second Law:
“The acceleration produced by a particular force acting on a
body is directly proportional to the magnitude of the force
and inversely proportional to the mass of the body.”
This means that a larger force is required to move a heavier
object (force = mass × acceleration). In addition, a heavier
object with the same acceleration as a lighter object will cause
more destruction when it hits something, as the force will be
greater.

Newton’s Third Law:


“To every action there is always opposed an equal reaction; or,
the mutual actions of two bodies upon each other are always
equal, and directed to contrary parts.”
It simply means, for every action there is an equal and
opposite reaction.

Physics and Principles of Animation:


Squash and stretch:
The deformation of objects in reaction to the laws of physics.
We can take the example of the game World of Goo, which
features many moving balls of Goo. Each time a Goo ball
accelerates it becomes elongated along the direction of
movement, and it squashes when it decelerates or collides with
another object. Such movement occurs in the real world and is
explained by Newton's laws.

Anticipation:
Presenting short actions or hints to a viewer of what is about
to happen. A simple implementation of this is seen in racing
games. At the beginning of a race, a traffic light or countdown
will display, giving the player a heads up to when the race is
about to start. Another way to add anticipation is to have
explosive devices with countdown timers.
Follow-through and overlapping action:
This is the way in which momentum acts on a moving object
to cause extra motion even after the initial force has stopped.
For example in racing games, a common follow-through is
when one car clips another car and it goes spinning out of
control. In most games where you have to blow something up
there is bound to be a follow-through action that removes
obstacles from the player’s game progression.

Secondary actions:
These animations support the principal animation. They give
a scene more realism. Secondary motion brings the game
environment to life. The simplest of movements can hint at a
dynamic realistic environment with a life of its own. For
example, a swaying tree or moving grass provides the illusion
of a light breeze while also suggesting to the player that these
are living things.

Staging:
Presenting an idea such that no mistake can be made as to
what is happening.

Straight-ahead action and pose to pose:


These are animation drawing methods. Straight-ahead action
refers to drawing out a scene frame by frame. Pose to pose
refers to drawing key frames or key moments in a scene and
filling in the gaps later.

Slow in and out:


Natural movement in which there is a change in direction
decelerates into the change and accelerates out.
Arcs:
Motion in animals and humans occurs along curved paths.
This includes the rotation of limbs and the rise and fall of a
body when walking. The same curved movement is also found
in the trajectory of thrown objects.

Timing:
This refers to the speed of actions. It is essential for
establishing mood and realism.

Exaggeration:
Perfect imitations of the real world in animation can appear
dull and static. Often it is necessary to make things bigger,
faster, and brighter to present them in an acceptable manner
to a viewer. Exaggeration is also used in the physical features
of characters and in the effects of physics.

Solid drawing:
This is the term given to an animator’s ability to consider and
draw a character with respect to anatomy, weight, balance,
and shading in a 3D context. A character must have a
presence in the environment, and being able to establish
volume and weight in an animation is crucial to believing the
character is actually in and part of the environment.

Appeal:
This relates to an animator’s ability to bring a character to life.
It must be able to appeal to an audience through physical
form, personality, and actions.

All but a couple of the preceding principles of animation can


be conveyed in a game environment through the physics
system. They are consequences of physics acting in the real
world. We subconsciously see and experience them every day,
although not with as much exaggeration as in a game, and come
to expect them in the virtual environment.

2D And 3D tricks for Optimizing Game Space:


When designing a game environment it is critical to keep in
mind how many things will need to be drawn in a scene and
reduce this to an absolute minimum while keeping quality high.
Some ways are:
Reducing Polygons:
● Each polygon adds to the processing of a scene. On
high-end gaming machines the polycount must increase
dramatically before it affects the frame rate. Here are some
ways to reduce polycount.
● Use Only What You Need: When creating a model for a
game, consider the number of superfluous polygons in the
mesh and reduce them.
● Backface Culling: Unity does not show the reverse side of
surfaces. This is called backface culling and is a common
technique in computer graphics for not drawing the
reverse side of a polygon. If both sides of a surface need
to be seen, one option is to duplicate the polygons facing
the other way; a better way is to turn backface culling off.
To do this in Unity requires the writing of a shader. A
shader is a piece of code the game engine interprets as a
texturing treatment for the surface of a polygon. When
you set a material to Diffuse or Transparent/Specular, you
are using a prewritten shader.
● Level Of Detail: Level of detail (LOD) is a technique for
providing multiple models and textures for a single object
with reducing levels of detail. This method not only mimics
human vision, making objects in the distance less defined,
but also allows for the drawing of more objects in a scene,
as the ones farther away take up less memory.
Textures:
Fine-detailed, high-quality textures are the best defense
against high polycounts. There is far more detail in a
photorealistic image of a real-world item than could possibly
fit into the polycount restrictions of any real-time engine. Two
popular tricks are discussed here:
● Moving Textures: When creating materials in Unity you
may have seen properties for x and y offsets. These
values are used to adjust the alignment and location of a
texture on a polygon's surface. If these values are
adjusted constantly with each game loop, the texture will
appear animated. This is an effective way of creating an
animation that does not involve modeling or extra
polygons. It is often used in creating sky and water
effects (see the sketch after this list).
● Blob Shadows: Shadows give a scene an extra dimension
of depth and add to visual realism. However, generating
shadows is processor intensive. A quick and easy method
for generating processor-light shadows is called blob
shadows.
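A minimal Unity C# sketch of the moving-texture trick mentioned
above; the scroll speed is an arbitrary example value:

using UnityEngine;

public class ScrollingTexture : MonoBehaviour
{
    public float scrollSpeed = 0.1f;   // how fast the texture slides, in UV units per second

    void Update()
    {
        // Shift the texture's x offset a little every game loop so it appears to move.
        float offset = Time.time * scrollSpeed;
        GetComponent<Renderer>().material.mainTextureOffset = new Vector2(offset, 0f);
    }
}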
Billboards:
Billboarding is a technique that uses planes to fake a lot of
background scenery. A billboard is a plane usually having a
partially transparent texture applied to give it the appearance
of being a shape other than a square. Common uses for
billboards are grass, clouds, and distant trees.To give the
illusion that the billboard is viewable from all angles, the plane
orientates itself constantly so that it is always facing the
player.
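A minimal Unity C# sketch of billboarding, where the plane
reorients itself toward the camera every frame:

using UnityEngine;

public class Billboard : MonoBehaviour
{
    void LateUpdate()
    {
        // Point the plane's forward vector away from the camera so its
        // textured face is always turned toward the player.
        Transform cam = Camera.main.transform;
        transform.forward = transform.position - cam.position;
    }
}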
Game Rules and Mechanics
Introduction:
● The underlying activity in games is play. Play can be
found in all human cultures and most animal populations.
At the most instinctual level, play provides teaching and
learning opportunities.
● The key element of play is the cause-and-effect nature
that reinforces certain behaviors. To practice is to
attempt to get better at some skill set. Without feedback
on one’s actions, knowing how to improve is impossible.
● When play becomes structured and when goals, rules,
and actions are applied, they turn into games. These
fundamental actions of human behavior found in the play
and games of children are found at the very heart of
computer games and make up the set of core game
mechanics presented herein
Game Mechanics:
● Game mechanics define and constrain this behaviour in
a system with rules and rewards. Once goals and/or
restrictions are placed on the activity, it becomes a game.
● The word mechanic here refers to the actions taking
place in games from the internal workings of animations
and programming to the interactions between the
environment and the player.
● The term game mechanic, in game studies, is used to
refer to designed game–player relationships that facilitate
and define the game's challenges. The player is presented
with a challenge. To complete this challenge, they have
tools they can use to perform actions and rules that
define the scope of these actions. The tools include
peripheral computing objects such as keyboards and
game controllers, as well as virtual in-game tools such as
vehicles, weapons, and keys. The rules dictate how the
player can act in the environment. In a board game, rules
are written in an instruction booklet and are monitored by
players.
● In a computer game, the player is made aware of these
rules and the game’s programming code ensures that the
player follows them. The program also provides feedback
to the players based on their actions to assist them in
learning how to better play the game and achieve the
challenge. Part of the feedback mechanism is to also
inform the players when they succeed or fail.

Primary Mechanics:
● Primary mechanics can be understood as core mechanics
that can be directly applied to solving challenges that
lead to the desired end state. Primary mechanics are
readily available, explained in the early stages of the
game, and consistent throughout the game experience.
1. Searching:
● Searching is a basic human cognitive process that
involves perception and scanning of an environment.
● In computer games, this ability is leveraged to make the
player look for a specific piece of information, item,
location, or character in an environment.
● The objective of the player’s search may be to find an item
needed to proceed in the game, for example, a key to
open a door, or to navigate a maze to get from one place
to another.

2. Matching:
● Matching is an activity that is part of the searching
process.
● In a game, matching is used to get players to put one or
more things together because they are smaller pieces of
a whole, or are the same color or shape, or have other
similar characteristics. This common action may find
players putting together parts of a machine to make it
work as a whole or placing the same colored items next to
each other on a grid.

3. Sorting:
● Sorting entails moving objects around to position them in
a specific order, for example, by size, colour, or value.
● Sorting is usually found together with matching. For
example, in the card game solitaire the player sorts
through the deck of cards to arrange them into suits in
order.

4. Chancing:
● Chance devices, such as dice and the drawing of straws,
have long been used for purposes of sorting, selecting,
and division.
● Chance in games is used to determine the probability of
future outcomes. This is one of the oldest actions used in
games and involves rolling dice or tossing coins to
determine an outcome based on chance. Without some
element of probability, players would know what the
outcome of their actions would be before they did them,
and there would not be any need to take a risk.

5. Mixing:
● Mixing actions involves the combining of objects or
actions to produce an outcome unachievable otherwise.
● In computer games, actions can be combined to allow
characters to perform tasks they could not do with single
actions, for example, jumping while running to leap across
a crevasse in the game world or combining multiple
keystrokes to perform special moves

6. Timing:
● The use of time in a computer game can be applied as a
game mechanic. It could involve completing a task
within an allotted time, timing an action, or waiting for
some event to occur. This mechanic is used to instigate
urgency in situations such as racing, whether against
the clock or an opponent; to generate anticipation
when waiting for something to occur; or to force patience
upon a player who has to wait for the game environment
to change.

7. Progressing:
● Games employ a progression scheme in which the player
begins as a noob and progresses to the level of expert at
the end. Along this journey, progression schemes are put
in place that give players a feeling of achievement for
their effort.

8. Capturing:
● To capture is to take something that belongs to someone
else through force or your own effort.
● Some games embed this mechanic as the primary
objective of the game. For example, Civilization requires
players to take over other cities and countries. The Dutch
East India Company challenges players to take cities
along the spice route in order to be able to build a more
profitable trading company between European and East
Asian cities.
● Capturing can also be used in a game in not such a literal
sense. For example, it could involve knocking out another
game character in order to steal his weapon or stealing a
car to make a quick getaway.

9. Conquering:
● In a similar vein to capturing is the action of conquering.
Conquering is about outdoing or annihilating the
competition.
● Outdoing an opponent is a classic game play goal. For
example, in chess, the aim is to get your opponent into
checkmate while taking pieces along the way or to make
them surrender.

10.Avoidance:
● Numerous games require the player to avoid items and
situations that are harmful to their character.
● Instead of telling players what they can do, avoidance is
all about showing them what they cannot. The inability to
avoid whatever it is they should be avoiding penalizes
players through reduced points or health given the
situation.
● Avoidance places constraints on the actions of players
such that they must keep in mind what they cannot do
while trying to progress through the game environment.
11. Collecting:
● Collecting is another natural human behavior
● Some items can be collected and placed in an inventory
to be used at a later time. When used, these might
disappear or go back into the inventory. The collecting
mechanic is often used for searching. Other collection
activities can happen almost by mistake.

Developing with Some Simple Game Mechanics:


More than one mechanic is required in order to make
a playable game prototype or to make an interactive
application slightly gamelike.
1. Matching and sorting:
● Matching is a simple yet compelling game mechanic that
sees the player scanning a number of items to find ones
that are similar.
● The matching mechanic is found across a wide range of
popular games such as solitaire where card suits are
matched, monopoly in which property colors are
matched, memory where images must be matched, and to
even kinect games where body poses are matched.
● Sorting is a game mechanic that is usually found with
matching. It entails moving objects around to position
them in a specific order or to match them. The game of
memory does not include sorting, but solitaire does as
the player sorts through the deck of cards to arrange
them into suits.
● For either mechanic, there must be visual clues as to how
the player should be sorting or matching game
objects. For this level of simplistic matching and sorting,
iconic art is used throughout.
● It is important to engage the player in these games with
logical icons. If items need to be matched or sorted, they
should look similar in appearance or have very clear
shared characteristics that make them part of a
particular group. If leaving a player to guess what
matches with what or what goes where is not the objective
of your game, do not make it one.

2. Shooting, Hitting, Bouncing and Stacking:


● The shooting, hitting, bouncing, and stacking mechanics
used in computer games are synonymous with similar
mechanics that make real-world games with a ball so
popular. In order to play, players must understand the
laws of physics and how they can achieve their goals by
using the physical properties of the game environment to
their best ability.
● Many computer games implement these trial-and-error
environmental impact practices.
● Game engines that include physics systems such as Unity
take much of the work out of creating these types of
games as developers can rely heavily on the physics
calculations doing most of the work for them

3. Racing:
● Racing involves one or more players attempting to get
from one location to another in order to beat the clock or
beat one another. Racing is a common sport involving
unaided human participants (in many Olympic
events), motor vehicles (such as Formula 1 or drag racing),
and animals (including horses, dogs, etc.).
● Many computer games include racing in a variety of
forms such as Project Gotham, Grand Theft Auto V etc.
● Racing need not involve people or vehicles moving
around a track. Performing a task within a certain time is
racing against the clock.

4. Avoidance and Collecting:


● Avoidance is a game action that involves players having
to go out of their way not to interact with another game
object or to make an effort not to perform some action.
● Collecting is the opposite action to avoidance. The player
must make all attempts to gather items.
● An important part of these mechanics is their visual cues.
Items to be avoided should look like they should be avoided.
● In nature, humans are attuned to the warnings of red
colors. Red represents hot. Fire is hot, the sun is hot, and
lava is hot. The game player already has a built-in instinct
for avoidance. The same goes for sharp prickly objects.
From cactus to underwater mines, the spikes relay a
message of “keep away”.
● The same principle works for collecting.

5. Searching:
● Searching is a common human activity. Naturally, the
searching mechanic goes hand in hand with matching.
● The ways in which searching is implemented as a game
mechanic are as varied as game genres.
● Searching can be performed by moving the mouse
around a scene to find and pick up objects. It can also
involve moving the main character around a game level.

Rewards and Penalties:


● We play games for the rewards—whether they be in the
form of points, unlocked levels, kudos, virtual clothing,
virtual food, virtual health, more votes, more friends, or
more money. It is the rewards that provide players with
motivation to perform any of the actions.
● In some texts, rewards are listed as a mechanic
themselves; however, they are really the motivation or
reason for performing the mechanic in the first place.
● Rewards are not just given at the end of the game but
throughout to influence the player’s behavior. They teach
the player how to play and how to play better by
providing continued feedback from the game
environment. Sometimes this feedback can also be
perceived negatively by the player as a penalty for
incorrect game play.
● Feedback can be either positive or negative and involves
the addition or subtraction of something to/from the
game environment. The feedback can be given for the
player's actions to indicate success or failure. Together
with action without feedback and feedback without
action, these combinations make six distinct categories.

● With the exception of confusion, these classifications


come from the domain of behavior management called
operant conditioning. They are applicable in games, as
operant conditioning is a behavioral management
technique used for teaching the most relevant behaviors
to participants under voluntary circumstances; that is,
people wanting to be conditioned are open to behavior
management techniques that will teach them to behave
in the most appropriate way to succeed. These
techniques include the following.
1. Positive Reinforcement: This is a situation in which
positive behavior is followed by positive
consequences.
2. Negative Reinforcement: This occurs when positive
behavior is followed by the elimination of negative
consequences.
3. Positive Punishment: This occurs when negative
behavior is followed by the addition of a negative
consequence. In a game, this is a situation in which a
handicap is placed on the player’s progress after he
has done something wrong or performed
inadequately.
4. Negative Punishment: This is the condition where
negative behavior is followed by the removal of
positive consequences. It is like taking away a child’s
toy when she is naughty.
5. Extinction: This occurs when neither positive nor
negative behavior attracts any feedback. This could
happen when a player shoots at a tree or picks up a
rock and tosses it in the ocean. As far as the game
environment and the player’s progress through the
game are concerned, the action performed requires
no feedback as it neither interrupts nor enhances
the game play.
6. Confusion: Confusion is not a technique in operant
conditioning. It has the exact opposite effect as it
provides feedback when no action has occurred.
When this occurs in a game, the player becomes
confused as to the reason behind any reward or
penalty.

● There is a fine line in game play between reinforcement


and Punishment. Some rewards and penalties inherently
exist in a game environment without the need for
onscreen advice or information. In the case of 3D
environments, designers can force players in certain
directions by physically restricting their access to
out-of-bound areas of a level map.
● When using feedback to mould a player’s behavior, the
effectiveness of the feedback will be increased and
decreased according to a variety of factors.
1. Saturation: The more a player is rewarded with or
receives the same feedback, the less he will be
motivated by it.
2. Immediacy: The time between the player’s action and
the feedback is critical. Rewarding a player minutes
after he has performed a task successfully will make
it difficult for the player to attribute the reward with
the action.Haptic feedback mechanisms in games
such as vibrating controllers would not make sense if
the actions in the game did not meet exactly with the
vibrations. The same applies to sound effects.
3. Consistency: The feedback given to players needs to
align with their beliefs about the environment and
how consistently it reacts to their interaction. In an
FPS, players would expect to get killed 100% of the
time they step on a landmine. However, if this does
not turn out to be the case, and instead they only die
20% of the time, the feedback will not be an effective
way to curb player behavior.
4. Cost versus Benefit: Players will evaluate the effort
they need to spend on an action based on the
reward. This fits with the greater the risk, the greater
the reward philosophy. The evaluation will differ from
person to person based on their attitudes toward
risk aversion.
Character Mechanics
● It makes sense that algorithms used for making artificial brains
are modeled on the same ones that compel fundamental human
behavior and make game playing so much fun.

● Of all the forms and applications of AI, games use a very small
subset, the majority of which is to develop the behavior of
nonplayer characters (NPCs). AI is used primarily for decision
making in NPCs that allow them to find their way around maps
and interact intelligently with other players

● Artificial intelligence algorithms require a lot of computational


processing. In the past, after all the animation and special effects
were placed in a game, only about 10% of the computer’s
processing capabilities remained for AI.

● AI is a theoretically intensive field grounded in applied


mathematics, numerical computing, and psychology.

Line Of Sight
● The simplest method for programming an NPC to follow the player
and thus provide it with modest believable behavior is using the
line of sight. The NPC sees the player, turns to face the player, and
travels in a straight line forward until it reaches the player. This
straightforward approach can make the player feel under attack
and that the NPC is a threat and has bad intentions.

● The NPC has no intentions whatsoever. It is just a computer


algorithm. This illustrates how the simplest of programming feats
can create a believable character.

● In an open game environment, the easiest way to determine if an


NPC has seen the player is to use simple vector calculations. A
field of vision is defined for an NPC based on the direction it is
facing, its position, visible range, and the angle of its vision.
● If a player is inside this range, the NPC can be said to have
detected the presence of the player; if not, the player is still
hidden.

● The NPC detects the player within its field of vision using the
vector between its position and the player's. It is calculated as:

Direction = player.position - NPC.position

● The magnitude (length) of the direction vector represents the


distance the player is from the NPC. If the angle between the
facing vector and direction vector is less than the angle of vision
and if the magnitude of the direction vector is less than the visible
range, then the player will be detected by the NPC.
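A minimal Unity C# sketch of this field-of-vision test; the player
reference, range, and angle values are assumptions for
illustration:

using UnityEngine;

public class NpcVision : MonoBehaviour
{
    public Transform player;          // assumed reference to the player
    public float visibleRange = 10f;  // how far the NPC can see
    public float visionAngle = 60f;   // angle of vision (degrees) measured from the facing direction

    bool CanSeePlayer()
    {
        Vector3 direction = player.position - transform.position;   // Direction = player.position - NPC.position
        bool closeEnough = direction.magnitude < visibleRange;      // within visible range?
        bool insideAngle = Vector3.Angle(transform.forward, direction) < visionAngle; // within angle of vision?
        return closeEnough && insideAngle;
    }
}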

Graph Theory
● Almost all AI techniques used in games rely on the programmers
having an understanding of graphs

● A graph in this context refers to a collection of nodes and edges.


Nodes are represented graphically as circles, and edges are the
lines that connect them. A graph can be visualized as nodes
representing locations and edges as paths connecting them. A
graph can be undirected, which means that the paths between
nodes can be traversed in both directions, or directed in which
case the paths are one way.
● Both nodes and edges can have associated values. These values
could be the distance between nodes.

Waypoints
● It is a means of marking a route on a map that NPCs can follow. A
waypoint is simply a remembered location on a map. Waypoints
are placed in a circuit over the map’s surface and are connected
by straight line paths. Paths and waypoints are connected in such
a way that an NPC moving along the paths is assured not to
collide with any fixed obstacles. Waypoints and their connecting
paths create a graph. Moving from one waypoint to another
waypoint along a path requires an algorithm to search through
and find all the nodes and how they are connected to each other.

● Usually you will want an NPC to move from one waypoint to
another via the shortest path. Most often, shortest refers to
the Euclidean distance between points, but not always. In
real-time strategy (RTS) games where maps are divided up into
grids of differing terrain, the shortest path from one point to
another may not be based on the actual distance, but on the
time taken to traverse each location.

● In order to implement waypoints effectively in a game, there needs


to be an efficient way to search through them to find the most
appropriate paths.

Searching through Waypoints


● There are many methods to find the shortest path from one node
to another in a graph. These include algorithms such as
breadth-first search (BFS) and depth-first search (DFS).

● The BFS takes the given starting node and examines all adjacent
nodes. Nodes that are adjacent to a starting node are the ones
that are connected directly to the starting node by an edge. From
each of the adjacent nodes, nodes adjacent to these are
examined. This process continues until the end node is found or
the search for adjacent nodes has been exhausted. (A minimal
BFS sketch is given at the end of this section.)

● The most popular algorithm used in games for searching graphs
is called A*. Instead of picking the next adjacent node blindly, the
algorithm looks for the one that appears to be the most
promising. From the starting node, the projected cost of all
adjacent nodes is calculated, and the best node is chosen to be
the next on the path. From this next node, the same calculations
occur again and the next best node is chosen. This algorithm
ensures that all the best nodes are examined first. If one path of
nodes does not work out, the algorithm can return to the next
best in line and continue the search down a different path.

● The algorithm determines the projected cost of taking paths


based on the cost of getting to the next node and an estimate of
getting from that node to the goal. The estimation is performed by
a heuristic function. A heuristic, in contrast to an algorithm,
provides an estimate rather than a guaranteed exact answer.

● A heuristic can be defined as any technique that can be used to
improve the average performance of solving a problem that may
not necessarily improve the worst-case performance. In the case of
path finding, if the heuristic offers a perfect prediction, that is, if it
can calculate the cost from the current node to the destination
accurately, then the best path will be found. However, in reality, the
heuristic is very rarely perfect and can offer only an
approximation.
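A minimal C# sketch of a breadth-first search over a waypoint
graph stored as an adjacency list. The data layout (waypoints
identified by integer indices) is an assumption for illustration,
and edge costs are ignored; A* would additionally order the
frontier by projected cost:

using System.Collections.Generic;

public static class WaypointSearch
{
    // Returns the path (as waypoint indices) from start to goal, or null if unreachable.
    public static List<int> FindPath(Dictionary<int, List<int>> graph, int start, int goal)
    {
        var cameFrom = new Dictionary<int, int> { { start, start } };
        var frontier = new Queue<int>();
        frontier.Enqueue(start);

        while (frontier.Count > 0)
        {
            int current = frontier.Dequeue();
            if (current == goal) break;
            foreach (int next in graph[current])       // examine all adjacent nodes
            {
                if (cameFrom.ContainsKey(next)) continue;  // already visited
                cameFrom[next] = current;
                frontier.Enqueue(next);
            }
        }

        if (!cameFrom.ContainsKey(goal)) return null;

        // Walk back from the goal to the start to reconstruct the path.
        var path = new List<int> { goal };
        while (path[path.Count - 1] != start)
            path.Add(cameFrom[path[path.Count - 1]]);
        path.Reverse();
        return path;
    }
}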

Finite State Machines


● The most popular form of AI in games and NPCs is
nondeterministic automata, commonly known as finite state
machines (FSM). An FSM can be represented by a directed graph
(digraph) where the nodes symbolize states and the directed
connecting edges correspond to state transitions. States
represent an NPC’s current behavior or state of mind. Formally, an
FSM consists of a set of states, S, and a set of state transitions, T.
In short, the state transitions define how to get from one state to
another.

● Programming an FSM is a case of providing an NPC with a state


value that represents its current behavior. Each state is
accompanied by an event value representing the status of that
state. The event tells us if the NPC has just entered that state, has
been in the state for a while, or is exiting the state. Keeping track
of this is critical for coordinating NPC movements around the
environment and its animations. The FSM runs each game loop;
therefore, knowing where the NPC is within a state is important for
updating its status, the status of other game characters, and the
environment.
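A minimal Unity C# sketch of a guard NPC's FSM; the states,
ranges, and transition conditions are assumptions for
illustration, and for brevity it tracks only the current state
rather than the entered/updating/exiting event value
described above:

using UnityEngine;

public class GuardFSM : MonoBehaviour
{
    enum State { Patrol, Chase, Attack }
    State state = State.Patrol;

    public Transform player;
    public float sightRange = 10f;
    public float attackRange = 2f;

    void Update()   // the FSM runs every game loop
    {
        float distance = Vector3.Distance(transform.position, player.position);
        switch (state)
        {
            case State.Patrol:
                if (distance < sightRange) state = State.Chase;        // transition: player spotted
                break;
            case State.Chase:
                if (distance < attackRange) state = State.Attack;      // transition: close enough to attack
                else if (distance > sightRange) state = State.Patrol;  // transition: lost the player
                break;
            case State.Attack:
                if (distance > attackRange) state = State.Chase;
                break;
        }
    }
}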

Flocking
● Applying flocking principles to NPCs in a game can add extra
realism to the environment as secondary animations.

● Craig Reynolds developed an unparalleled yet simple algorithm
for producing flocking behavior in groups of computer
characters. Through the application of three rules, Reynolds
produced coordinated motions such as those seen in flocks of
birds and schools of fish. The rules are applied to each individual
character in a group, with the result being very convincing
flocking behaviour.

● The rules include:


➔ Moving toward the average position of the group
➔ Aligning with the average heading of the group
➔ Avoiding crowding other group members

● Flocking creates a moving group with no actual leader. The rules
can be extended to take into consideration moving toward a goal
position or the effect of wind.
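A minimal Unity C# sketch applying the three rules to a single
flock member; the field names and tuning values are
assumptions for illustration:

using UnityEngine;

public class Boid : MonoBehaviour
{
    public Boid[] neighbours;        // the other members of the flock (assumed assigned elsewhere)
    public float speed = 2f;
    public float crowdDistance = 1.5f;

    void Update()
    {
        if (neighbours == null || neighbours.Length == 0) return;

        Vector3 centre = Vector3.zero, heading = Vector3.zero, separation = Vector3.zero;
        foreach (Boid other in neighbours)
        {
            centre += other.transform.position;                      // rule 1: move toward the average position
            heading += other.transform.forward;                      // rule 2: align with the average heading
            Vector3 away = transform.position - other.transform.position;
            if (away.magnitude < crowdDistance) separation += away;  // rule 3: avoid crowding
        }
        centre /= neighbours.Length;

        Vector3 direction = (centre - transform.position) + heading + separation;
        if (direction.sqrMagnitude > 0.0001f)
            transform.forward = Vector3.Slerp(transform.forward, direction.normalized, Time.deltaTime);
        transform.position += transform.forward * speed * Time.deltaTime;
    }
}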

Decision Trees
● Decision trees are hierarchical graphs that structure complex
Boolean functions and use them to reason about situations. A
decision tree is constructed from a set of properties that describe
the situation being reasoned about. Each node in the tree
represents a single Boolean decision along a path of decisions that
leads to the terminal or leaf nodes of the tree.

● A decision tree is constructed from a list of previously made


decisions. These could be from experts, other players or AI game
experience.

● To construct a decision tree from some given data, each attribute


must be examined to determine which one is the most influential.
To do this we examine which attributes split the final decision most
evenly.

Fuzzy Logic
● Fuzzy logic provides a way to make a decision based on vague,
ambiguous, inaccurate, and incomplete information. Fuzzy logic
works by applying the theory of sets to describe the range of
values that exist in vague terminologies.

Player Mechanics
● Players require feedback from the game in order to determine
their status and progress. This could be in the form of a heads-up
display (HUD) that informs them of their health, money, enemy
location, and much more.
● A visual representation of the player’s status is a common way to
relay information from the game to the player. There are all sorts
of numeric values stored in the game code for keeping track of
the player’s status

Principles of Game Interface Design


● For the most part, HUDs are displayed as 2D screen overlays
whether the game is 2D or 3D. Sometimes extra essential
information is displayed directly on or near a character in the
game environment.

● The best-designed HUDs are ones that show the most amount of
information without cluttering the screen or distracting the player
from the game itself.

User Profiling
● We need to know for whom we are designing a HUD before
designing one. We should consider the player's existing skills and
experience, goals, and needs.

● If we are creating a serious game, chances are the players will be


nongamers. In this case, the interface needs to be ultra intuitive,
to the point of holding the player's hand through every step in the
initial game play.

● If we are developing an Xbox game that is a sequel to an existing


franchise, we would want to keep the interface similar to the other
versions, as we might expect a great deal of the players to be
hard-core gamers who have grown up playing the previous
versions.

● The objective of the game will also be central to your design


considerations. The player’s goals and needs should be clearly
visible.

Metaphor
● Metaphors are the perfect example of how we love to liken one
thing to another and how we make sense of new imagery from our
experiences. The perfect example of this with respect to game
mechanics and interfaces is the play button first appearing on
original analogue sound-playing devices.

● It is used in games to indicate play with the meaning of either


starting the game or, if in the game, unpausing it to make the
game time run forward. Play and its counterparts (stop, pause,
rewind, and fast forward) appear in all manner of games. This
recognition of well-known icons extends to other imagery in your
interface.

● Two popular metaphors that have arisen purely within the game
domain are the spacebar for jump and holding down the shift key
to run. Although they are not visual metaphors, they are still part
of the user interface and clearly illustrate how metaphors can be
leveraged. Metaphors also extend to any menu systems in the
game.

● Colors are also metaphoric. The use of the correct color


immediately conveys messages to the player without words or
other imagery. For example, red represents something bad, such
as an error, low health, or danger; yellow represents a warning or
caution that is needed; and green is good. These three colors are
consistent globally.

● Your use of color in a game interface can represent the state of a


player, object, or character. It also assists in the differentiation of
game elements. For example, enemy characters may have different
colored uniforms. Color coding will also group items, making it
easier for the player to distinguish among them.

● If you do not stick with well-used and well-known metaphors, you


will just end up frustrating the player. The interface is not
something you want to get in the players’ way and stop them from
your ultimate objective: having them play it! Navigating the
interface should not be the game.
Feature Exposure
● Unless we are developing a very simple game, displaying all player
options, statuses, and inventory items on the screen at the same
time is impossible. Such information and commands should not
be buried to the point that it becomes frustrating to find. The
interface is not the game.

● Another common metaphor linked to feature exposure is the ESC


key. Many FPS games have this linked to the display of the main
menu, making feature exposure one key press away.

● In games that have toolbars that are commonly used, they always
appear on the screen. Sometimes not all features can be exposed
at the top toolbar level as there may be too many, and the
interface would be over complicated. In this case, the toolbar is
given a hierarchical structure in which if one option is clicked, it
opens up a second lot of options.

● How you decide to expose information and options to the player


should be based on what the player needs to know and do,
followed by various depths of what the player may want to know
and do. What the player needs to know and do will be directly
related to the objectives of the game.

● The following game interface elements should be considered for


exposure depth:
➔ Primary player data, including health, money, armor level, current
remaining ammunition, and current weapon, should always
appear on the screen.

➔ Primary player actions, including building, moving units, pickup


item, drop item, and swap guns, should always appear on the
screen.
➔ The toolbar (if any) should always appear on the screen. It may
have a roll-out feature that extends it into a bigger toolbar
➔ The main menu should be one click, key press, or button press
away. In some cases, it can appear inconspicuously across the top
of the game window as a fully exposed feature.
➔ Submenu items should be one click, key press, or button press
away from their associated main menu item.
➔ Dialog boxes are only for displaying or gathering information as it
becomes available or is required. They involve textual exchange
with the player and are best kept hidden until required.
➔ The help screen should be accessible directly from the main
screen via the use of a metaphoric button with a “?” or one click
away from the main menu.

Coherence
● Elements of your interface design should fit together both
functionally and visually. In addition, the game interface should
match the functionality of the platform for which it has been built.

● Artwork, borders, colors and fonts of toolbars, menus, buttons,


and other interface items should follow a common theme.

● Functionality should be coherent in that the same commands in


different areas of the game perform the same actions. If you can
click an X in the top right of a window to close it, all windows
should allow for it.

Shortcuts
● Learning keyboard shortcuts is synonymous with learning what
the buttons on a game controller control. These shortcuts allow
players to get to regularly used functionality or to information
embedded in the menu system or toolbar and usually require
some clicking action to achieve. Players will use shortcuts to
perform game actions more quickly.

● In regular software applications, shortcut keystrokes reflect their


name and location in the menu system. If possible, shortcuts
should meet with any metaphors already in the space: for
example, ESC for the main menu.
Layout
● The display of the player interface on the screen should follow
graphic design principles. This assists the player in developing a spatial
sense of the layout, which enhances his performance. Things to
consider in your design include the presentation of key
information in prominent locations, using sizing and contrast to
distinguish elements, grouping related actions and data, and
using fixed locations for certain types of information and actions.
In addition, the layout should be well organized. Graphic
designers use a grid system of evenly spaced horizontal and
vertical lines to group and align related components.

● This key principle is known as CRAP (contrast, repetition,


alignment, and proximity). Contrast makes some elements more
dominant than others and makes more important items easier
to find on the screen. Repetition refers to the repeated use of
design elements, such as using the same font, sizing, and colors.
Alignment connects the elements and creates a visual flow for the
eye to follow. Proximity refers to grouping similar and related
elements of the interface in the same place on the screen. To aid
with alignment, the screen is ruled into equal-sized parts with
smaller gutters dividing each area.

Focus
● The HUD displays the player status and other information but
should also draw the eye to any critical changes made to the
interface. Color changes and animations work best for attracting
the player’s attention

● A blatant way to attract the player’s attention is to present a


pop-up window, which stops him from playing.

Help
● It goes without saying that a game should present the user with
help on how to play. There are several types of help an interface
should be able to provide the user: goal oriented, descriptive,
procedural, interpretive, and navigational.

Inventories
● Inventories are a game mechanic that span numerous game
genres. Basically they are a list of items that players have in their
possession. Inventories can be of a fixed size, allowing only a finite
number of items, or unlimited. The inventory size restriction
adds a new mechanic all of its own, where players must prioritize
which items they keep.
● One example of a very complex inventory is the one that players
have in EVE Online. Players can have multitudes of ships, minerals,
ship parts, and blueprints in their possession. And these all are
not in the same location. Minerals and parts can be on ships and
ships can be docked on different planets. It is quite a feat of
logistics to manage the location of items and ships to facilitate
the most efficient game play.

Teleportation
● Teleportation is a player mechanic often used to move the player’s
character very quickly to different locations in a map. It can
happen explicitly via actual teleportation devices placed in a
game environment, which make it obvious that the player’s
location has changed, or inexplicitly, by which teleportation is the
developer’s trick for loading another game level or another part of
a map. Inexplicit teleportation can occur when very large game
environments need to be divided up into workable chunks the
computer’s processor can handle. If you play the original Halo,
you will experience this transition. When the player reaches the
physical boundary of a game mesh, a shimmer falls over the
screen and the next part of the map is loaded. The same type of
teleportation can occur when a character reaches the exterior
door of a building and the game needs to load the inside map of
the building so the player can enter it and continue playing.

Environmental Mechanics
Map Design Fundamentals
Map design for game levels is a sizable area of discussion and rules,
and opinions on the best approach differ between genre and game
developers.

Provide a Focal Point


● In a vast map, it is a good idea to provide a focal point. This will
act as the player's goal location. If the terrain is rather desolate,
the player's eye will be drawn to the dominating tower. In addition,
because there are no other structures or objects in the area, the
player will have no choice but to go toward the tower.

● In a rather busy map, a large focal object that can be seen from
most places in the level will assist the player with orientation

Guide and Restrict the Player’s Movement


● Although many game maps, whether they are outdoor terrains or
inner city streetscapes, may seem endless, they are not. They are
designed cleverly to draw a player along a certain path and
restrict their access to parts of the map that are not actually
there.

● What can be seen in the distance are billboards and 3D objects


with very low polycounts. As the player can never get anywhere
near these objects, having them in high definition is a pointless
waste of computer memory.

● Terrain maps subtly guide the player along a path where the sides
are defined by steep inclines, vast drops, or endless water.

● Enemy and reward placement also push the player into moving in
a certain direction. Players of first person shooters know that they
are moving toward their goal as the number of enemies increases.

Scaling
● Do not be afraid to resize an original asset. Look at the real world
and the proportions of objects within.
● For example, barrels with unnatural looking proportions stand
out until they are better matched to the size of the player's
character. If there is a door in the scene, make sure it has
correct proportions with respect to the player's character. The
same goes for stairs, furniture, trees, cars, and anything else in
the scene.

Detail
● The same observational prowess used for proportions should be
applied to map details. The details that make the real world seem
real.

● If you are creating a map based on a real-world landscape or


building, go to that building. Sit and observe. Take photographs. If
you are mapping the inside of a hospital, go and have a look at all
the things in a hospital that make it look and feel like a hospital.
The colors, charts on the walls, trolleys, exit signs, and markings
on the floor, to name a few, are all symbolic things that will keep
the player immersed in your game environment.

Map Layout
● Physical map layout differs dramatically according to the
restrictions of perspective, narrative, and genre. Sometimes the
logical structure of the story and the game play paths from start
to end match with the analytical structure of the game
environment and sometimes they do not.

Open
● A truly open game map will have multiple starting positions and
ending positions with numerous unordered challenges in between.
From a narrative point of view, you can start with any character
you like, end the game when you like, and take on whatever
challenges take your fancy along the way.

● The game environment of The Sims is also quite open in the way
by which players can decide on the look, personality, and goals of
their avatars and the layout of their homes.

Linear
● Most games based around a story are linear; that is, they have a
start, a journey, and an end. Players typically accompany their
character, the hero, through a series of carefully crafted
challenges to the end. The start is always the same, the journey is
always the same, and the end is always the same. In terms of a
map, it means a game environment in which there is only one path
to travel to get from the starting position to the end. Along the
way there may be puzzles, barriers and enemies to battle, but the
path is always the same.

● To add some interest to the linear map, sidetracks can be added


that can take players down a dead end path but provide extra
bonuses or clues to aid in their progression along the main path.

● Even more complexity is created when circuits are added off the
main path. These provide players with opportunities to bypass
parts of the main path. They can also disorient players and send
them back to the beginning.

● Racing games are another example of a linear path with a circuit.


The circuit takes the player from the start, around a track, and
back to the start. The player’s journey is one that evolves
according to the player's racing skills; however, in the end, the
game remains linear, as the start position, end position, and goal
always remain the same.

● Any game that has a linear storyline similar to that in books and
movies will require a linear map to ensure that players play
through the story and live out the full life of their character. If the
physical map has sidetracks and circuits, players will require a
motivational story by which to choose to explore them.

● It can be achieved by placing side rooms or corridors off the main


path that reward players’ exploration with extra ammunition,
health, or other items that can assist their game progress

Branching
● A level with branching has the one starting position and multiple
ending positions. Narratively speaking, this means a story that
starts in the same place but has multiple outcomes depending on
the player’s game choices. These are difficult narratives to write,
as a new story needs to be written for each branch. In addition,
extra branches require extra artwork with respect to map designs.

● If branching is used in physical terms in a game map, it will mean


that players can skip over a lot of game play and be content on
their way to the end.

Spoke and Hub


● This provides a progressive set of challenges by which the player
must achieve challenge 1 to unlock challenge 2, complete
challenge 2 to unlock challenge 3, and so forth. After each
challenge, the player returns to the same central hub state.

● This level design structure requires numerous maps or map areas


in which the challenges take place. Because of the progressive
unlocking nature of the design, the player will eventually explore
and experience all the game play and maps, unlike in a branching
scenario
Other Considerations
Player Starting Positions
● The first thing players want to do when entering a game
environment is to start the game. If they are facing a strange
unexpected direction, they will have little idea where to go next. It
might be that they need only to turn around to see the door they
need to go through or the corridor they need to walk down, but
facing them toward it from the start is a neater way of introducing
the player to your level.

Flow
● Flow in level design refers to the way in which players move from
the beginning of the level to their goal. It is the level designer’s job
to make the environment flow as best he can to challenge players,
keep them moving toward their goal, and keep them engaged.

● Although in the end we all know that the designer is herding the
player down a certain path, this need not be revealed to the player
immediately. Providing players with numerous paths to take allows
them to make decisions in their game play about the way they
traverse the map.

● Game developers such as Valve use a variety of methods to control the flow through the game. In some areas, you will want the player to run, in others to walk. Breaking the map into narrow areas that make the player feel confined creates narrow flow, thus increasing tension in the game play.

Trapping
● Blocking the exit of a dead end after the player has entered is another way to create tension and panic. You could use this partway through a map or at the goal location. If you do trap a player in part of the level, it should be obvious how he is able to get out.

Use the 3rd Dimension


● Three-dimensional environments have height as well as depth and width. A map with various height levels allows the player to get from one place to another via alternate routes. In addition, if players can see that there are multiple heights to a building or terrain, they will expect to be able to reach these heights, which could be used as resting or attacking positions.

Vantage
● Ensure that your map has multiple vantage points that the player can hide in or use as attacking positions. Some might be in better locations than others and will provide players with choice and variety in the way they choose to approach the game play.

● If you are designing for a multiplayer environment, then multiple vantage points will ensure that play does not become predictable and ultimately boring. It is unavoidable that players will eventually explore and use up all the vantage points in a multiplayer map; however, the more variety provided, the longer the game play potential.

Terrain
● Terrains are meshes that make up the ground in an outdoor scene. They usually undulate in height and sport a variety of surface textures to make them look like real outdoor desert, mountain, or even alien world scenes. The more detailed a terrain, the more polygons it is made from, as additional vertices are required to give it a real-world smooth appearance. Because large, elaborate terrains require a lot of processing, numerous tricks are employed to make the terrain look more detailed.

● While there are many available tools for creating terrains, making
them look real can be quite a challenge. The best examples come
from nature.

Drawing a Terrain
As you will find in the next section, although a terrain can be computer
generated, the most detailed and realistic terrains are best created by
hand.
Procedural Terrain
● Procedural terrain is terrain generated by a computer algorithm. Most of the methods are based on fractals. A program is written that defines the vertices of a mesh and recursively raises and lowers the points to create altitude. One simple terrain-producing method is the midpoint displacement algorithm.

● This algorithm starts with a flat surface, divides it in half, and then raises or lowers the middle point. Each half is then halved again and the midpoint of each is raised or lowered. Another popular procedural terrain generation algorithm is the Diamond-Square method.

● The Diamond-Square method continues creating diamonds to find midpoints and squares for lowering and raising until all points in the grid have height values associated with them. Perlin noise is another popular algorithmic way to generate landscapes.
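
A minimal C++ sketch of the 1D midpoint displacement idea described above (the function name, parameters, and random offsets are illustrative, not prescribed by the text):

    // Minimal 1D midpoint displacement sketch (illustrative only).
    // 'heights' should have (2^n)+1 entries; 'roughness' controls how quickly
    // the random offsets shrink at each level of recursion.
    #include <cstdlib>
    #include <vector>

    void midpointDisplace(std::vector<float>& heights, int lo, int hi,
                          float offset, float roughness)
    {
        int mid = (lo + hi) / 2;
        if (mid == lo || mid == hi) return;   // segment can no longer be halved

        // Raise or lower the midpoint by a random amount in [-offset, +offset].
        float r = (static_cast<float>(std::rand()) / RAND_MAX) * 2.0f - 1.0f;
        heights[mid] = 0.5f * (heights[lo] + heights[hi]) + r * offset;

        // Recurse into each half with a smaller offset.
        midpointDisplace(heights, lo,  mid, offset * roughness, roughness);
        midpointDisplace(heights, mid, hi,  offset * roughness, roughness);
    }

Calling this on a heights array of size 2^n + 1 (for example, midpointDisplace(h, 0, 256, 10.0f, 0.5f)) fills it with a jagged height profile; a 2D variant such as Diamond-Square applies the same idea over a grid.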

Procedural Cities
● Generating realistic cities with code is more challenging than
landscapes. While terrain can reuse trees, rocks, grass, and dirt
textures, reuse of the same building or random street structures
can look unrealistic. In the same way that fractals can be used for
generating terrain and layout, they too can be used to create
objects on the terrain such as trees and buildings.

● A Lindenmayer system, or L-system, is a fractal algorithm commonly used for generating trees, as it has a natural branching structure.

Infinite Terrain
● Infinite terrain or endless terrain is a form of procedurally
generated landscapes. It uses a mathematical equation and the
player’s current position to create the landscape as the player
moves. Small parts of the map are created as needed based on
the player’s visible distance. Any map outside the visible range is
not generated, and therefore not a load on computer memory.
● To create undulation in the terrain, a mathematical formula determines the height (y) from the x and z positions; for any x and z values there will always be a corresponding y value.

● The beauty of using a formula is that the result of y for any x and
z values is always going to be the same, thus assuring us that if we
return to a previous location on the map that was destroyed after
we left but recreated on our return, the height remains the same.
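
As a rough illustration, such a deterministic height function might look like the following C++ sketch; the sine-based formula is purely a placeholder, as real engines typically sample a noise function such as Perlin noise instead:

    #include <cmath>

    // Deterministic height function: the same (x, z) always yields the same y,
    // so terrain chunks can be discarded and later rebuilt identically.
    // The formula below is only a placeholder for a proper noise function.
    float terrainHeight(float x, float z)
    {
        return 4.0f * std::sin(x * 0.1f) * std::cos(z * 0.1f)
             + 1.5f * std::sin(x * 0.37f + z * 0.21f);
    }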

Camera Tricks
● Cameras are the player’s eyes into the game environment. What
the camera sees is projected onto the computer screen. Before
each pixel is drawn, the game developer can add special
treatments to the properties and colors of the pixels by attaching
scripts to the camera. This allows for the creation of many
different visual effects.

● Many of the following effects come from the domain of image processing and photographic post-production and can be found readily as filters in Adobe Photoshop. They have found their way into gaming as a way of enhancing the player's visual perception of the environment.

Depth Of Field
● Depth of field is the term used to describe how an optical lens
focuses on the surrounding environment. It describes the distance
between the closest and the farthest objects in view that appear
in sharp focus.

● As the human eye is a lens, it also projects an image with a depth of field. Unlike a straightforward virtual terrain where all objects are in focus, for humans the real world tends to become fuzzy in the distance and sometimes close up. Adding a depth of field effect to the game view camera gives the scene a greater feeling of realism, quality, and 3D depth.
Blur
● Blur is an effect that makes the entire image look out of focus. It
can be used effectively to simulate the look of being underwater.

● Setting a pixel's color to the average color of its neighbors creates a blur effect. Within a certain radius, all pixel colors are sampled, added together, and then divided by the number of pixels; the pixel is then assigned this average value. Another familiar blur is the Gaussian blur. It also adds the pixel colors together to determine an average; however, pixel colors in the center of the selection are given more weight, and therefore the blur appears to radiate outward.
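
A naive box blur over a grayscale image could be sketched in C++ as follows (function and parameter names are illustrative):

    // Naive box blur over an 8-bit grayscale image (illustrative sketch).
    // Each output pixel is the average of all pixels within 'radius' of it.
    #include <cstdint>
    #include <vector>

    void boxBlur(const std::vector<std::uint8_t>& src, std::vector<std::uint8_t>& dst,
                 int width, int height, int radius)
    {
        dst.resize(src.size());
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
            {
                int sum = 0, count = 0;
                for (int dy = -radius; dy <= radius; ++dy)
                    for (int dx = -radius; dx <= radius; ++dx)
                    {
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                        sum += src[ny * width + nx];
                        ++count;
                    }
                dst[y * width + x] = static_cast<std::uint8_t>(sum / count);
            }
    }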

Grayscale
● Grayscale reduces colored images down to variations of gray so that they look like black and white photographs. The simplest method for creating grayscale from color is the average method, which adds the red, green, and blue components of a pixel and divides the sum by three. This new value is reassigned to the original pixel. The lightness method sets a pixel's value to the average of its highest and lowest color components.
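
Both methods can be expressed in a few lines of C++ (a minimal sketch; the helper names are made up for illustration):

    #include <algorithm>
    #include <cstdint>

    // Average method: mean of the three color channels.
    std::uint8_t grayAverage(std::uint8_t r, std::uint8_t g, std::uint8_t b)
    {
        return static_cast<std::uint8_t>((r + g + b) / 3);
    }

    // Lightness method: midpoint of the strongest and weakest channels.
    std::uint8_t grayLightness(std::uint8_t r, std::uint8_t g, std::uint8_t b)
    {
        std::uint8_t hi = std::max({r, g, b});
        std::uint8_t lo = std::min({r, g, b});
        return static_cast<std::uint8_t>((hi + lo) / 2);
    }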

Motion Blur
● Motion blur is an effect used to make something look like it is
moving rapidly or to give the player a dreamy view of the
environment. The pixels appear to streak across the screen in the
direction of the motion. In addition, objects can have ghosting
effects surrounding them. The effect is achieved by partially
leaving the previous frame on the screen while rendering the next
one. This effect is also useful in persuading players that their
character is disorientated as a result of being drugged or in a
dream state.

Sepia Tone
● Sepia tone is the brownish tinge seen on old photographs and
film. It is achieved using an algorithm similar to that used for
grayscale. The red, green, and blue values of a pixel are reset
based on the addition of percentages of their original color
components.
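
A possible C++ sketch of a sepia filter; the channel weights below are a commonly quoted set, not values taken from the text:

    #include <algorithm>
    #include <cstdint>

    // Sepia tone: each output channel is a weighted sum of the original
    // R, G, and B values, clamped to the 0-255 range.
    void toSepia(std::uint8_t& r, std::uint8_t& g, std::uint8_t& b)
    {
        float fr = r, fg = g, fb = b;
        float nr = 0.393f * fr + 0.769f * fg + 0.189f * fb;
        float ng = 0.349f * fr + 0.686f * fg + 0.168f * fb;
        float nb = 0.272f * fr + 0.534f * fg + 0.131f * fb;
        r = static_cast<std::uint8_t>(std::min(nr, 255.0f));
        g = static_cast<std::uint8_t>(std::min(ng, 255.0f));
        b = static_cast<std::uint8_t>(std::min(nb, 255.0f));
    }
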
Twirl
● The twirl effect takes a portion of the screen pixels and warps them around in a whirlpool-type motion. This effect is usually seen in film footage or cut scenes when one scene is crossing into another.

Bloom
● Bloom is the glow effect seen around light sources that extend
into the environment. It makes an image look overexposed and the
colors washed out. It is more prominent the stronger the light
source and the dustier the environment.

Flares
● Flares are bursts of reflected and refracted light. The most
commonly known flare is a lens flare, which causes a series of
translucent, rainbow, bright circles on an image from light passing
through a camera lens. It is often used in games to give the player
the illusion that they are looking into the sun when facing skyward.

Color Correction
● It takes certain colors occurring in an image and changes them to
another color. For example, a hill covered in green grass could be
made to look like a hill of dead brown grass by replacing the color
green with brown.

● In games, color correction can be used for rendering night-vision effects by changing most of the colors to green. Each original R, G, B value is matched with a replacement R, G, B value. At any time, color correction, as with all these effects, can be turned off.

Edge Detection
● An edge detection algorithm scans an image for areas where
pixels in close proximity contrast in color. Because the difference
between individual pixels would produce erratic effects, the
algorithm must determine if a pixel is on the edge of an area of
similar color by looking at its surrounding pixels. This creates a
picture that looks like a sketched outline. The optimal approach to edge detection is the Canny edge detection algorithm.
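
The following C++ sketch shows the simple neighborhood-contrast idea described above; it is far cruder than Canny, and the threshold parameter is illustrative:

    #include <cstdint>
    #include <vector>

    // Very simple edge detector over a grayscale image: a pixel is marked as an
    // edge when the differences between its horizontal and vertical neighbors
    // exceed a threshold.
    void detectEdges(const std::vector<std::uint8_t>& gray, std::vector<std::uint8_t>& edges,
                     int width, int height, int threshold)
    {
        edges.assign(gray.size(), 0);
        for (int y = 1; y < height - 1; ++y)
            for (int x = 1; x < width - 1; ++x)
            {
                int gx = gray[y * width + (x + 1)] - gray[y * width + (x - 1)];
                int gy = gray[(y + 1) * width + x] - gray[(y - 1) * width + x];
                if (gx * gx + gy * gy > threshold * threshold)
                    edges[y * width + x] = 255;   // white line on black background
            }
    }
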
Crease
● Creasing is a non-photorealistic effect that increases the visibility
of game world objects by drawing a line around the silhouette,
much like in comic book images. In a busy game environment with
many objects, such as buildings and trees, drawn from similar
colors and tones, objects can become difficult to distinguish. By
applying creasing at differing depths, the line can distinguish
between near and far objects.

Fish Eye
● The fish eye effect produces an image as seen in a spherical
mirror or through a wide-angle lens. The image produced is
hemispherical in nature.

● When used in a less exaggerated manner, the effect widens the field of view, showing more of the scene than is in the camera's normal range. However, to fit in the extra parts of the image, the scene starts to bend at the outer edges, with the effect becoming more exaggerated the farther it is from the center.

Sun Shafts
● Sun shafts are produced by a bright light source being partially
occluded and viewed when passing through atmospheric
particles.

Vignette
● Vignetting is an effect used to focus the viewer’s attention on an
object in an image by darkening and/or blurring the peripheries.
In a game view, a vignette can focus the player’s attention on an
area of the game environment or be used to restrict the player’s
view.

Screen Space Ambient Occlusion


● The screen space ambient occlusion (SSAO) effect approximates shadowing from ambient light based purely on the image produced by the camera. As it is a post-production effect, it does not rely on a light source or on information about an object's materials to calculate shadows. It works with the image to emphasize holes, creases, and areas where objects meet.
● The use of SSAO brings more depth and natural effect to the
image, providing shadowing in and under the rocks and grass
and between the leaves of the tree.

● This section examined a variety of camera effects. These are added during post-production rendering, after the camera has determined the scene. As such, they can require a great deal of computer processing and are not recommended for use on platforms without sufficient graphics processing capability.

Skies
● There are a number of ways to create skies for a 3D game
environment. The easiest way is to set the background color of the
camera to blue. Alternatively, if you want to include fog, setting
the fog color to the background color gives the illusion of a heavy
mist and provides for landscape optimization.

● The daytime sky appears blue due to a phenomenon known as Rayleigh scattering. As sunlight enters the earth's atmosphere, it interacts with the air and dust particles, which bend the light and scatter the different colors across the sky. The color that is scattered the most is blue. At sunset, the sun rays enter the atmosphere at different angles relative to the viewer and cause more yellow and red light to be scattered. Rayleigh scattering is caused by particles smaller than the wavelength of light.

● Particles that are comparable to or larger than the wavelength of light also scatter it. This effect is described by Mie theory.
● The theory explains how particles such as water droplets affect the scattering of light. Mie theory elucidates why clouds are different shades of gray and white.

● Another factor influencing the look of the sky is turbidity. Turbidity describes the amount of suspended solid particles in a fluid. In the sky, turbidity relates to the number of dust, ash, water, or smoke particles in the air. The density of these particles affects both Rayleigh and Mie scattering, as it adds to the total amount of light scattering.

Skyboxes
● The most common method of creating a sky with cloud textures is to use a skybox. This is essentially an inside-out cube placed over the camera with seamless images of the sky rendered on it. Because it only requires six planes and six textures, it is a relatively cost-effective way to create a convincing-looking sky.

Sky Domes
● It is a dome mesh placed over the environment with a sky texture
stretched across the inside surface.

● The sky dome is attached to the player's camera, like an oversized helmet, such that it moves everywhere with the player. Unlike a real helmet, it does not rotate as the camera looks around—it just translates, ensuring that it is always projecting a sky with the player at the center. This ensures that the player never sees the edges of the dome. If the dome is not big enough, as terrain and scenery come into view, they will pop through the dome edges. Strategic sizing and positioning of the dome are critical to ensure that this does not occur.

● Because the UVs of the sky dome are aligned in arcs across the sky mesh, it is simple to scroll textures across the mesh in the same way that textures are scrolled across a plane. This makes it easy to add cloud textures to the dome and move them across the sky.
Clouds
● The previous two sections examined skies with clouds. Clouds on
the skybox were fixed and did not change position or color. Clouds
on the sky dome moved across the sky and exhibited turbulence.
While the sky dome method includes layers of clouds, giving the
perception of depth, the player can never get above or in among
them. When you want to portray clouds in a 3D fashion such that
you can walk around or through them, then you need to explore
techniques for generating volumetric fog.

● Volumetric fog is a more complex technique than the fog used previously. Whereas the stock-standard fog used in computer graphics applies a faded-out effect over all 3D assets in a scene, volumetric fog is contained within a 3D space.

● The types of natural effects that can be achieved with volumetric fog include low-lying clouds, mist, and dust.

● One method for producing volumetric clouds is to use mass instances of billboards with cloud textures on them.

Weather
● The weather affects the look and feel of our environment dramatically. The correct lighting, coloring, and special effects will make players feel like they are really there and add an extra dimension to your scene.

Wind
● Wind is one of these elements. Although you cannot see it, it
moves environmental objects around. It looks especially good in a
3D world when acting on cloth, trees, and grass.

Precipitation
● Rain and snow add yet another dimension to the game
environment. Not only do they look good, but they add
atmosphere and can be used to impair the vision of the player
strategically. Precipitation falls into a category of computer
graphics special effects called particle systems. They rightly
deserve a category of their own and as such will be dealt with in
the next section.

Mechanics for External Forces


Gestures and Motion
● In 2003, Sony released the EyeToy for the PlayStation 2. This is
essentially a webcam, which attaches to the gaming console.
Through the processing of the image taken by the camera,
computer vision and gesture recognition algorithms can estimate
the movements of a player.

● In 2010, Microsoft released the Kinect. The Kinect provides full-body 3D motion capture by continually projecting an infrared laser pattern out in front of the screen in a pixel grid. Depth data are gathered by a monochrome sensor that records the reflections of the laser. Currently, the Kinect is capable of tracking six people simultaneously. The Kinect is also able to perform facial and voice recognition tasks.

● The Kinect differs from the PlayStation Move, a game controller wand with built-in accelerometers and a light on top, by sensing actual 3D data.

● The Move, also released in 2010, uses a combination of motion sensing and visual recognition from the PlayStation Eye (the successor of the EyeToy) to recognize player movements.

● OpenKinect is a community of developers bringing free programming interfaces for the Kinect to Windows, Linux, and Mac so that independent application developers and researchers can implement their own Kinect-based games and applications.
● Succeeding the technology of the Kinect are systems such as the Stage System by Organic Motion, used for real-time motion capture. This system has the potential to enable games that are projected onto all the walls of a room, with the player positioned right in the center of the action. The Stage System can capture true full-body 3D surround motion and structure with the use of its six cameras.

● It differs from the Kinect in that the Kinect only captures distances
from the device itself to whatever is in front of it and tracks key human features such as the hands, feet, and head.

● Besides the full-body detection systems, simple mouse and finger swiping gestures have been used in games. Matching the swiping action to a specific gesture is an exercise in machine learning. There are numerous methods for recognizing gestures, including Bayesian and neural networks and more complex artificial intelligence techniques.

3D Viewing
● Viewing virtual objects in three dimensions, otherwise known as
stereoscopy or stereoscopics, is a technology that has been
around since the beginning of the twentieth century.

Side-By-Side
● The earliest displays worked by showing a pair of 2D images in a
stereogram. By providing each eye with a different image taken
from a slightly different point of view, the brain can be fooled into
perceiving depth. This technique for stereoscopy is called
side-by-side.

● The individual images of a stereogram are taken at slightly different angles and distances, replicating the natural positioning of our eyes, which are roughly 6.5 centimeters apart.

● To assist with the viewing process of side-by-side images, a stereoscope is employed to keep the individual eyes focused on the appropriate image.
Anaglyphs
● The anaglyph is an image constructed by overlaying two offset
photographs: one filtered with red and one filtered with cyan.
When viewed with red/cyan glasses, each eye perceives a different
photograph, thus producing a 3D effect.

Head-Mounted Displays
● The first commercially available HMDs were the Forte VFX-1 and Sony Glasstron, used in the game MechWarrior 2, allowing players to see the game world through their own eyes from inside their craft's cockpit.

● Today, the hottest HMD gaining traction in gaming is the Oculus Rift. The Rift (like other HMDs) works by delivering slightly offset virtual images to each eye. The brain processes them as it does with stereograms and anaglyphs and provides the viewer with the perception of depth. The images are delivered via active screens placed in front of the viewer's eyes.

Augmented Reality
● Although augmented reality (AR) technology has been available since 1968, it is only in the past five or so years that applications have become increasingly popular. Technology from the domain of three-dimensional gaming is particularly key with respect to AR, as it allows efficient and seamless integration of high-quality animated virtual objects into AR applications.

● Some applications of note include the Sony PlayStation EyeToy and EyePet, and the Nintendo 3DS Nintendogs.

● In these games, players see their real world streamed via a camera
onto the screen and superimposed with virtual game characters.
The characters are positioned such that they appear to exist in
the real world. The player almost feels like they can reach out and
touch them.
● AR is a multidisciplinary field based on computer science with the
goal of providing a viewer with an environment containing both
real and virtual objects. With the use of mobile devices with
cameras, computer-generated images are projected on top of the
physical environment. To ensure correct positioning of virtual
objects in the real world, hardware and software are required to
determine accurately the viewer's location and orientation.

● The point of view (POV) can be matched with the location of a camera in a virtual model of the physical location, and augmented objects and information can then be projected onto the real-world image.

● Wearable devices that allow virtual content to be projected atop the wearer's vision of the real world have been prototyped since the late 1960s. However, the earliest devices required reprocessing video of the real world with the virtual content layered on top.

● This same technique is still used for AR on smartphones. A big breakthrough has come about with see-through lenses that only need to render the virtual content. Such devices include Google Glass and the Microsoft HoloLens.

● Without access to such technology, AR development can still be experienced on desktop machines and smartphones.

Engine Support Systems


● Every game engine requires some low-level support systems that manage mundane but crucial tasks, such as starting up and shutting down the engine, configuring engine and game features, managing the engine's memory usage, handling access to file system(s), providing access to the wide range of heterogeneous asset types used by the game (meshes, textures, animations, audio, etc.), and providing debugging tools for use by the game development team.

Subsystem Start-up and Shut-down


● A game engine is a complex piece of software consisting of many interacting subsystems. When the engine first starts up, each subsystem must be configured and initialized in a specific order. Interdependencies between subsystems implicitly define the order in which they must be started (i.e., if subsystem B depends on subsystem A, then A will need to be started up before B can be initialized; shut-down typically occurs in the reverse order, so B would shut down first, followed by A).

C++ Static Initialization Order


● Since the programming language used in most modern game engines is C++, we should briefly consider whether C++'s native start-up and shut-down semantics can be leveraged in order to start up and shut down our engine's subsystems.

● In C++, global and static objects are constructed before the program's entry point (main(), or WinMain() under Windows) is called. However, these constructors are called in a totally unpredictable order. The destructors of global and static class instances are called after main() (or WinMain()) returns, and once again they are called in an unpredictable order.

● Clearly this behavior is not desirable for initializing and shutting down the subsystems of a game engine, or indeed any software system that has interdependencies between its global objects.

● A common design pattern for implementing major subsystems, such as the ones that make up a game engine, is to define a singleton class for each subsystem.

A Simple Approach that Works


● The simplest “brute-force” approach is to define explicit start-up
and shut-down functions for each singleton manager class.
● These functions take the place of the constructor and destructor,
and in fact we should arrange for the constructor and destructor
to do absolutely nothing. That way, the start-up and shut-down
functions can be explicitly called in the required order from
within main().
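
A minimal C++ sketch of this pattern, in the spirit of the approach described above (the manager class names are illustrative):

    // Each manager's constructor and destructor deliberately do nothing;
    // startUp() and shutDown() are called in a well-defined order from main().
    class RenderManager
    {
    public:
        RenderManager()  {}   // do nothing
        ~RenderManager() {}   // do nothing
        void startUp()   { /* acquire resources, initialize the renderer */ }
        void shutDown()  { /* release resources */ }
    };

    class PhysicsManager
    {
    public:
        void startUp()  { /* ... */ }
        void shutDown() { /* ... */ }
    };

    static RenderManager  gRenderManager;
    static PhysicsManager gPhysicsManager;

    int main()
    {
        // Start up subsystems in the required order...
        gRenderManager.startUp();
        gPhysicsManager.startUp();

        // ... run the game loop here ...

        // ... and shut them down in the reverse order.
        gPhysicsManager.shutDown();
        gRenderManager.shutDown();
        return 0;
    }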

Memory Management
● As game developers, we are always trying to make our code run more quickly. The performance of any piece of software is dictated not only by the algorithms it employs and the efficiency with which those algorithms are coded, but also by how the program utilizes memory (RAM).

Memory affects performance in two ways:

1. Dynamic memory allocation via malloc() or C++'s global operator new is a very slow operation. We can improve the performance of our code either by avoiding dynamic allocation altogether or by making use of custom memory allocators that greatly reduce allocation costs.

2. On modern CPUs, the performance of a piece of software is often dominated by its memory access patterns. Data that is located in small, contiguous blocks of memory can be operated on much more efficiently by the CPU than if that same data were spread out across a wide range of memory addresses. Even the most efficient algorithm, coded with the utmost care, can be brought to its knees if the data upon which it operates is not laid out efficiently in memory.

Optimizing Dynamic Memory Allocation


● Dynamic memory allocation, also known as heap allocation, is
typically very slow. The high cost can be attributed to two main
factors.

● First, a heap allocator is a general-purpose facility, so it must be written to handle any allocation size, from one byte to one gigabyte. This requires a lot of management overhead, making the malloc() and free() functions inherently slow.
● Second, on most operating systems a call to malloc() or free() must first context-switch from user mode to kernel mode, process the request, and then context-switch back to the program. These context switches can be extraordinarily expensive.

● No game engine can entirely avoid dynamic memory allocation, so most game engines implement one or more custom allocators. A custom allocator can have better performance characteristics than the operating system's heap allocator for two reasons.

● First, a custom allocator can satisfy requests from a preallocated memory block. This allows it to run in user mode and entirely avoid the cost of context-switching into the operating system. Second, by making various assumptions about its usage patterns, a custom allocator can be much more efficient than a general-purpose heap allocator.

Stack Based Allocator


● Many games allocate memory in a stack-like fashion. Whenever a
new game level is loaded, memory is allocated for it. Once the
level has been loaded, little or no dynamic memory allocation
takes place. At the conclusion of the level, its data is unloaded
and all of its memory can be freed. It makes a lot of sense to use a
stack-like data structure for these kinds of memory allocations.
● A stack allocator is very easy to implement. We simply allocate a
large contiguous block of memory using malloc() or global new or
by declaring a global array of bytes.

● It is important to realize that with a stack allocator, memory cannot be freed in an arbitrary order. All frees must be performed in an order opposite to that in which they were allocated. One simple way to enforce this restriction is to disallow individual blocks from being freed at all. Instead, we can provide a function that rolls the stack top back to a previously marked location, thereby freeing all blocks between the current top and the roll-back point.
● A stack allocator often provides a function that returns a marker representing the current top of the stack. The roll-back function then takes one of these markers as its argument.
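
A minimal C++ sketch of a stack allocator with markers and roll-back (alignment handling is omitted for brevity; the names are illustrative):

    #include <cstddef>
    #include <cstdint>
    #include <cstdlib>

    // Allocations advance a "top" offset within one preallocated block;
    // freeing is done by rolling the top back to a previously obtained marker.
    class StackAllocator
    {
    public:
        typedef std::size_t Marker;   // represents the current top of the stack

        explicit StackAllocator(std::size_t sizeBytes)
            : m_base(static_cast<std::uint8_t*>(std::malloc(sizeBytes))),
              m_size(sizeBytes), m_top(0) {}

        ~StackAllocator() { std::free(m_base); }

        void* alloc(std::size_t sizeBytes)
        {
            if (m_top + sizeBytes > m_size) return nullptr;  // out of space
            void* p = m_base + m_top;
            m_top += sizeBytes;
            return p;
        }

        Marker getMarker() const      { return m_top; }   // remember current top
        void   freeToMarker(Marker m) { m_top = m; }      // frees everything above m
        void   clear()                { m_top = 0; }      // free the whole stack

    private:
        std::uint8_t* m_base;
        std::size_t   m_size;
        std::size_t   m_top;
    };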

Double Ended Stack Allocators


● A double-ended stack allocator is useful because it uses memory
more efficiently by allowing a trade-off to occur between the
memory usage of the bottom stack and the memory usage of the
top stack. In some situations, both stacks may use roughly the
same amount of memory and meet in the middle of the block. In
other situations, one of the two stacks may eat up a lot more
memory than the other stack, but all allocation requests can still
be satisfied as long as the total amount of memory requested is
not larger than the block shared by the two stacks.

Pool Allocators
● A pool allocator works by preallocating a large block of memory
whose size is an exact multiple of the size of the elements that will
be allocated.
● Each element within the pool is added to a linked list of free
elements; when the pool is first initialized, the free list contains all
of the elements. Whenever an allocation request is made, we
simply grab the next free element off the free list and return it.
When an element is freed, we simply tack it back onto the free list.
Both allocations and frees are O(1) operations, since each
involves only a couple of pointer manipulations, no matter how
many elements are currently free.
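
A minimal C++ sketch of such a pool, with the free list threaded through the unused elements themselves (alignment handling is omitted; the names are illustrative, and elementSize must be at least sizeof(void*)):

    #include <cstddef>
    #include <vector>

    class PoolAllocator
    {
    public:
        PoolAllocator(std::size_t elementSize, std::size_t elementCount)
            : m_storage(elementSize * elementCount), m_freeList(nullptr)
        {
            // Push every element onto the free list.
            for (std::size_t i = 0; i < elementCount; ++i)
            {
                void* element = &m_storage[i * elementSize];
                *static_cast<void**>(element) = m_freeList;
                m_freeList = element;
            }
        }

        void* alloc()
        {
            if (!m_freeList) return nullptr;              // pool exhausted
            void* element = m_freeList;
            m_freeList = *static_cast<void**>(element);   // pop the head: O(1)
            return element;
        }

        void free(void* element)
        {
            *static_cast<void**>(element) = m_freeList;   // push back: O(1)
            m_freeList = element;
        }

    private:
        std::vector<unsigned char> m_storage;   // the preallocated block
        void* m_freeList;                       // head of the intrusive free list
    };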

Memory Fragmentation
● When a program first runs, its heap memory is entirely free. When
a block is allocated, a contiguous region of heap memory of the
appropriate size is marked as “in use,” and the remainder of the
heap remains free. When a block is freed, it is marked as such,
and adjacent free blocks are merged into a single, larger free
block. Over time, as allocations and deallocations of various sizes
occur in random order, the heap memory begins to look like a
patchwork of free and used blocks. We can think of the free
regions as “holes” in the fabric of used memory. When the number
of holes becomes large, and/or the holes are all relatively small,
we say the memory has become fragmented.

● The problem with memory fragmentation is that allocations may fail even when there are enough free bytes to satisfy the request. The crux of the problem is that allocated memory blocks must always be contiguous.
● Memory fragmentation is not as much of a problem on operating
systems that support virtual memory. A virtual memory system
maps discontiguous blocks of physical memory known as pages
into a virtual address space, in which the pages appear to the
application to be contiguous. Stale pages can be swapped to the
hard disk when physical memory is in short supply and reloaded
from disk when they are needed.

Avoiding Fragmentation with Stack and Pool Allocators


● The detrimental effects of memory fragmentation can be avoided
by using stack and/or pool allocators.
➔ A stack allocator is impervious to fragmentation because
allocations are always contiguous, and blocks must be freed in an
order opposite to that in which they were allocated.
➔ A pool allocator is also free from fragmentation problems. Pools
do become fragmented, but the fragmentation never causes
premature out-of-memory conditions as it does in a
general-purpose heap. Pool allocation requests can never fail
due to a lack of a large enough contiguous free block, because
all of the blocks are exactly the same size.

Defragmentation and Relocation


● Defragmentation involves coalescing all of the free “holes” in the
heap by shifting allocated blocks from higher memory addresses
down to lower addresses. One simple algorithm is to search for
the first “hole” and then take the allocated block immediately
above the hole and shift it down to the start of the hole.

● If this process is repeated, eventually all the allocated blocks will occupy a contiguous region of memory at the low end of the heap's address space, and all the holes will have bubbled up into one big hole at the high end of the heap.

Cache Coherency
● Accessing main system RAM is always a slow operation, often
taking thousands of processor cycles to complete. Contrast this
with a register access on the CPU itself, which takes on the order
of tens of cycles or sometimes even a single cycle. To reduce the
average cost of reading and writing to main RAM, modern
processors utilize a high-speed memory cache.

● A cache is a special type of memory that can be read from and written to by the CPU much more quickly than main RAM. The basic idea of memory caching is to load a small chunk of memory into the high-speed cache whenever a given region of main RAM is first read. Such a memory chunk is called a cache line and is usually between 8 and 512 bytes.
Level 1 and Level 2 Caches
● When caching techniques were first developed, the cache
memory was located on the motherboard, constructed from a
faster and more expensive type of memory module than main
RAM in order to give it the required boost in speed.

● This gave rise to two distinct types of cache memory: an on-die level 1 (L1) cache and an on-motherboard level 2 (L2) cache. More recently, the L2 cache has also migrated onto the CPU die.

Containers
Game programmers employ a wide variety of collection-oriented data structures, also known as containers or collections. The job of a container is always the same—to house and manage zero or more data elements; however, the details of how they do this vary greatly, and each type of container has its pros and cons.

Common container data types include:


➔ Array
➔ Dynamic Array
➔ Linked List
➔ Stack
➔ Queue
➔ Deque
➔ Priority queue
➔ Tree
➔ Binary search tree
➔ Dictionary
➔ Set
➔ Graph
➔ Directed Acyclic Graph

Container Operations
Game engines that make use of container classes inevitably make use of
various commonplace algorithms as well.
Some examples include:
➔ Insert
➔ Remove
➔ Sequential access
➔ Random access
➔ Find
➔ Sort

Iterators
● An iterator is a little class that “knows” how to efficiently visit the
elements in a particular kind of container. It acts like an array
index or pointer—it refers to one element in the container at a
time, it can be advanced to the next element, and it provides
some sort of mechanism for testing whether or not all elements in
the container have been visited.
● The key benefits to using an iterator over attempting to access the container's elements directly are:
➔ Direct access would break the container class' encapsulation. An iterator, on the other hand, is typically a friend of the container class, and as such it can iterate efficiently without exposing any implementation details to the outside world.
➔ An iterator can simplify the process of iterating. Most iterators act like array indices or pointers, so a simple loop can be written in which the iterator is incremented and compared against a terminating condition—even when the underlying data structure is arbitrarily complex.
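
For example, iterating over a C++ std::list looks the same as iterating over any other standard container, even though a linked list is traversed very differently from an array under the hood:

    #include <cstdio>
    #include <list>

    int main()
    {
        std::list<int> scores = { 10, 25, 40 };

        // The iterator hides the linked-list plumbing: the loop looks the same
        // as it would for any other container type.
        for (std::list<int>::const_iterator it = scores.begin(); it != scores.end(); ++it)
            std::printf("%d\n", *it);

        return 0;
    }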

Strings
The Problem with Strings
● How should strings be stored and managed in your program? In C and C++, strings are implemented as arrays of characters. C++ programmers often prefer to use a string class rather than deal directly with character arrays.

● Another string-related problem is localization. Any string that you display to the user in English must be translated into whatever languages you plan to support. This not only involves making sure that you can represent all the character glyphs of all the languages you plan to support (via an appropriate set of fonts), but it also means ensuring that your game can handle different text orientations.

● Comparing or copying ints or floats can be accomplished via simple machine language instructions. On the other hand, comparing strings requires an O(n) scan of the character arrays using a function like strcmp().

String Classes
● String classes can make working with strings much more convenient for the programmer. For example, passing a string to a function using a C-style character array is fast because the address of the first character is typically passed in a hardware register. On the other hand, passing a string object might incur the overhead of one or more copy constructors if the function is not declared or used properly. Copying strings might involve dynamic memory allocation.
● As a rule of thumb, always pass string objects by reference, never by value.
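
For example, in C++:

    #include <cstdio>
    #include <string>

    // Preferred: pass by const reference -- no copy is made.
    void printName(const std::string& name)
    {
        std::printf("%s\n", name.c_str());
    }

    // Passing by value copies the string, which may allocate memory.
    void printNameByValue(std::string name)
    {
        std::printf("%s\n", name.c_str());
    }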

Unique Identifiers
● Unique object identifiers allow game designers to keep track of the myriad objects that make up their game worlds and also permit those objects to be found and operated on at runtime by the engine. Strings seem like a natural choice for such identifiers.
● We need a way to get all the descriptiveness and flexibility of a string, but with the speed of an integer.

Hashed String Ids


● One good solution is to hash our strings. As we’ve seen, a hash
function maps a string onto a semi-unique integer. String hash
codes can be compared just like any other integers, so
comparisons are fast.
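
One possible hash is the well-known 32-bit FNV-1a function; the text does not prescribe a particular hash, so this is only an illustrative choice:

    #include <cstdint>

    // 32-bit FNV-1a hash: turns a string into a semi-unique integer id.
    std::uint32_t hashString(const char* str)
    {
        std::uint32_t hash = 2166136261u;           // FNV offset basis
        while (*str)
        {
            hash ^= static_cast<std::uint8_t>(*str++);
            hash *= 16777619u;                      // FNV prime
        }
        return hash;
    }

Ids produced this way can be computed once (for example, at asset-build time or on first use) and thereafter compared with a single integer comparison.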
Localization
● Localization is the process of customizing your application for a given culture and locale.

Unicode
● ANSI strings work great for a language with a simple alphabet,
like English. But they just don’t cut it for languages with complex
alphabets containing a great many more characters, sometimes
totally different glyphs than English's 26 letters. To address the
limitations of the ANSI standard, the Unicode character set
system was devised.

UTF-8
● In UTF-8, the character codes are 8 bits each, but certain
characters occupy more than one byte. Hence the number of
bytes occupied by a UTF-8 character string is not necessarily the
length of the string in characters. This is known as a multibyte
character set (MBCS), because each character may take one or
more bytes of storage.

● Since the standard ANSI character codes are all less than 128, a
plain old ANSI string is a valid and unambiguous UTF-8 string as
well.

UTF-16
● Each character takes up exactly 16 bits. As a result, dividing the
number of bytes occupied by the string by two yields the number
of characters. This is known as a wide character set (WCS),
because each character is 16 bits wide instead of the 8 bits used
by “regular” ANSI chars.

Other Localization Concerns


● Audio clips including recorded voices must be translated.
Textures may have English words painted into them that require
translation. Many symbols have different meanings in different
cultures. In addition, some markets draw the boundaries between
the various game-rating levels differently.

● For strings, there are other details to worry about as well. You will
need to manage a database of all human-readable strings in
your game, so that they can all be reliably translated. The
software must display the proper language given the user’s
installation settings.

● The most crucial components in your localization system will be the central database of human-readable strings and an in-game system for looking up those strings by id.

Engine Configuration
● Game engines are complex beasts, and they invariably end up
having a large number of configurable options. Some of these
options are exposed to the player via one or more options menus
in-game.

● Other options are created for the benefit of the game development team only and are either hidden or stripped out of the game completely before it ships.

Loading and Saving Options


● Configurable options are not particularly useful unless their
values can be configured, stored on a hard disk, memory card, or
other storage medium and later retrieved by the game.

Different ways are:

➔ Text configuration files: By far the most common method of saving and loading configuration options is by placing them into one or more text files. The XML format is another common choice for configurable game options files.
➔ Compressed binary files: All game consoles since the Super
Nintendo Entertainment System (SNES) have come equipped with
proprietary removable memory cards that permit both reading
and writing of data. Game options are sometimes stored on these
cards, along with saved games. Compressed binary files are the
format of choice on a memory card, because the storage space
available on these cards is often very limited.
➔ The Windows registry: The Microsoft Windows operating system
provides a global options database known as the registry. It is
stored as a tree, where the interior nodes (known as registry keys)
act like file folders, and the leaf nodes store the individual
options as key-value pairs.
➔ Command line options: The command line can be scanned for
option settings. The engine might provide a mechanism for
controlling any option in the game via the command line, or it
might expose only a small subset of the game’s options here.
➔ Environment variables : On personal computers running
Windows, Linux, or MacOS, environment variables are sometimes
used to store configuration options as well.
➔ Online user profiles: With the advent of online gaming
communities like Xbox Live , each user can create a profile and
use it to save achievements, purchased and unlockable game
features, game options, and other information. The data is stored
on a central server and can be accessed by the player wherever
an Internet connection is available.

Per User Options


● Most game engines differentiate between global options and
per-user options . This is necessary because most games allow
each player to configure the game to his or her liking. It is also a
useful concept during development of the game, because it
allows each programmer, artist, and designer to customize his or
her work environment without affecting other team members.

● On Windows, a hidden subfolder named Application Data is used to store per-user information on a per-application basis; each application creates a folder under Application Data and can use it to store whatever per-user information it requires.

● Windows games sometimes store per-user configuration data in the registry. The registry is arranged as a tree, and one of the top-level children of the root node, called HKEY_CURRENT_USER, stores settings for whichever user happens to be logged on.
