Automation and procedural techniques
Speeding up development with robot butlers
Directions
Automation, and delegating more work to the computer in general, can be interpreted in a few ways. Mainly, this presentation will cover three things:
Straight-up automation
As in: systems that actively perform work previously done manually by developers.
Procedural elements and systems
These don't necessarily eliminate work in some area altogether, but they let us reduce artist workload or otherwise get more bang for the development hours spent.
Interesting workflows
Built around those time-saving
systems
For a start
We hope you'll find the
tricks we'll present here
interesting and
applicable to your own
games.

Let's begin with a very short video for a brief overview of the game.
About us
We’re a small indie game
development studio located in
Seattle, WA

TETRAGON WORKS
We’re a small team
But we're building a fairly
ambitious game about
giant robots stomping
through cities.
The game we want to make needs a lot of content
Just like anyone else, we want to offer enough value to the players:

● Lots of varied environments
● Big set of character assets
● Large number of levels
● Many scenarios, items, abilities
● Wide tactical options
● Other strategy/RPG goodies
● Modding support
Robot buddies to the rescue
We like to focus on the fun, creative parts of game development - so we found ways to automate some of the boring parts.
Levels first *
The boards on which tactical games take place. Some stats first:

● Average grid sizes between 50x50 and 100x100 cells, with height of up to 20 cells
● Locations we build can reach 10,000+ objects
* literally, that was the first system we built


Let's start with a spec
What do we want from
a level system?
Predictable art production
First of all, it would be nice to have a finite set of possible shapes for modular assets, making tileset production straightforward and fixed in scope.

No endless possibilities for new modular pieces, no back-and-forth with level design requesting forgotten bits.
Learning from modern terrain systems
Some modern terrain systems do a lot of things right, and we want the same things they accomplish:

● Automated shape generation or object placement
● Reasonably flat performance cost per surface area
● Potential for granular LoD fading and frustum culling
● Granular customization of points (tilesets, materials)
On top of that
Of course, terrain systems alone are unfit for our needs - you can't shape a city with something like that. We also need:

● To represent varied manmade locations: cities, factories, roads
● Full destructibility - it's not a proper game about walking tanks if you can't level buildings
● Use of traditional modular assets, no procedural geometry - we want to leverage the modeling skills we have
Foundation
A voxel grid is a natural foundation - there's no better way to represent a continuous 3D volume snapped to a grid.
The problem
Except voxel-based volumes are usually rendered with a skin of procedural geometry, which is not desirable for us.
The problem
Whether you go with a blocky aesthetic or complex smooth mesh generation, you're nowhere close to reproducing architecture, roads and other manmade objects.
Manually authored
art is key
But how do we use it while reaping the benefits of a voxel grid?
Now that's a harder problem
But it's already solved in 2D games, like old Zelda installments. Let's take a closer look at 2D tilesets - if we look at roads, cliff faces, grass-to-rock edges and such, what is the underlying idea?
Tile
What is a tile in a tileset
depicting, for instance, grass,
sand, and transitions between
them? I think it's reasonable
to say that every tile can be
looked at as a set of four
points.
Simple case
In case of simple, fully filled
tiles, all 4 points have the
same value.
Permutations
Given four points and two
possible values a point can
take, we get 2^4, or 16
possible configurations.
Permutations
If we exclude rotations, we
are left with just six
configurations.
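
A quick way to sanity-check those counts is to enumerate all 16 corner masks in code and deduplicate them under rotation. A minimal C# sketch (the names are ours, purely illustrative):

```csharp
using System.Collections.Generic;

public static class TileConfigs
{
    // Corners stored as 4 bits in clockwise order (TL, TR, BR, BL),
    // so rotating the tile 90 degrees is a cyclic shift of the bits.
    static int Rotate90(int config) => ((config << 1) | (config >> 3)) & 0xF;

    // Canonical form: the smallest value among the four rotations.
    static int Canonical(int config)
    {
        int best = config;
        for (int i = 0; i < 3; i++)
        {
            config = Rotate90(config);
            if (config < best) best = config;
        }
        return best;
    }

    public static int CountUnique()
    {
        var unique = new HashSet<int>();
        for (int config = 0; config < 16; config++)
            unique.Add(Canonical(config));
        return unique.Count; // 6, matching the count above
    }
}
```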
Interpretations
Those configurations can be interpreted in a number of ways.
Limit
Those configurations cover all possible shapes in 2D space, though, and we want a system that covers all possible shapes arising in a 3D voxel volume.

So let's look for an equivalent structure in 3D space.
Cell
Normally, voxel volumes are visualized by blobs built around full points.

But let's shift our attention away from individual points, to the spaces between them.
Cell
Notice how 8-point groups form a cube with points in its corners.

We'll call that group a cell.
Configurations
Since each voxel point can be either full or empty, there are many possible permutations that an 8-point group can assume.
Configurations
We call them configurations. Making a model that covers a configuration and placing it at the cell's midpoint on the voxel grid covers part of a 3D volume.
Permutations
Since we have eight corners
with two possible values in
each, we have 2^8, or 256
configurations on our hands.
Permutations
That's... not encouraging. Who would want to author 256 unique meshes per tileset, at the very minimum?
Reducing the number
First of all, there are configurations that match rotations of other configurations. Excluding those leaves us with 128 configurations.

And eliminating horizontally mirrored matches drops the count to a more reasonable number: 53.
Simplifying art production
It's hard to keep track of all the configurations even after they are trimmed to 53. So, let's give the artist a helping hand:

We automatically generate a foundation template: a scene which contains the base shape of each configuration.
Modeling on top
With that foundation, we can
start interpreting the shapes
into natural-looking tilesets -
pieces of hills, buildings,
factories.
A bit of variety
We also author multiple versions of some configurations to scroll through later: a wall might have a variant with a window, a fire exit and so on.
Export
At this point we're sweating
about the prospect of
exporting and managing
hundreds of strictly named
block files.
Export
It's a potential production nightmare, where one typo can cause a long-unnoticed error in level rendering, and the export of each new tileset is a long, tiresome slog. Let's avoid it.
To export, simply
save the 3D scene
We automated the rest.
Processing magic
In Unity, custom processing scripts perform a lot of operations on the imported file (a sketch follows the lists below).
Processing magic
● Split the scene into
blocks
● Figure out which blocks
belong to which
configuration
● Merge and clean up
internal structure of the
meshes
● Collect all materials in
the tileset
Processing magic
● Swap all materials for ones
in reference library
● Bake material IDs to
secondary UV channels of
all meshes
● Generate texture arrays /
property arrays by packing
every input from precursor
materials together (more
on that later)
● Save the blocks and
material assets
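
As a rough illustration, this is the kind of import hook Unity makes possible - a heavily simplified AssetPostprocessor sketch, where the folder convention, block naming scheme and material library path are placeholder assumptions rather than our actual pipeline:

```csharp
using UnityEngine;
using UnityEditor;

public class TilesetPostprocessor : AssetPostprocessor
{
    void OnPostprocessModel(GameObject root)
    {
        // Only touch models imported into the tileset folder (assumed convention).
        if (!assetPath.Contains("/Tilesets/"))
            return;

        foreach (var renderer in root.GetComponentsInChildren<MeshRenderer>())
        {
            // Each child object is one block; assume its name encodes the
            // configuration it covers, e.g. "block_042_variantB".
            int config = ParseConfigurationIndex(renderer.gameObject.name);
            Debug.Assert(config >= 0, renderer.gameObject.name);

            // Swap imported materials for shared ones from a reference
            // library, so every block ends up on the same set of materials.
            var materials = renderer.sharedMaterials;
            for (int i = 0; i < materials.Length; i++)
                materials[i] = ResolveLibraryMaterial(materials[i]);
            renderer.sharedMaterials = materials;
        }
        // Mesh merging, material-ID baking into UV2 and texture array
        // packing would follow here.
    }

    static int ParseConfigurationIndex(string name)
    {
        var parts = name.Split('_');
        return parts.Length > 1 && int.TryParse(parts[1], out var index) ? index : -1;
    }

    static Material ResolveLibraryMaterial(Material imported)
    {
        // Placeholder lookup by name; a real version would use a proper library asset.
        return AssetDatabase.LoadAssetAtPath<Material>(
            $"Assets/MaterialLibrary/{imported.name}.mat") ?? imported;
    }
}
```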
Supporting multi-cell models
We want to support big designs like road turns, hero facade elements for buildings, large gates, natural formations and so on - things that can't fit into the 3x3x3m bounds we use for cells.

So, a solution for processing those into the same format had to be implemented.
Multi-block processing
We simply author big models
with edges adhering to the
grid, and export all those
models for a tileset into
another lone file.
Multi-block processing
In Unity, a custom processing script does the following:

● Cut the big design into standard single-cell blocks
● Figure out the shape of the resulting level chunk built from the newly separated submeshes
● Generate the metadata for the level system, allowing insertion of that multi-block chunk into levels
● Process the single-cell blocks like normal tileset assets
To sum level asset production up
● Clearly defined scope for
tileset art, fixed set of models
you need to cover all possible
shapes
● Freedom to author varieties
without cluttering the system
● No need to handle export of
hundreds of separate files
● No need to handle any asset
preparation tasks manually
Finally, how we
use the assets
We don't place anything manually. We just fly around the scene view and click on nice colored cubes.
Level system updates
configurations
Then, it automatically fills all cells with meshes, effectively covering the volume with a skin made from modular tileset assets.

It's an abstraction of what level designers do when they manually place similar modular blocks.
I want a building
Which 40 pieces do I have to place next, and what position/rotation/scale should they have?

The system answers that for you - no need to drag individual pieces.
The system is tightly
integrated into the editor
We took care to make it feel like a native
Unity tool.
Perks of the system
Going with that approach to level geometry also gives us some nice features and performance benefits.

To touch on a few of them...
Cheap rendering
A small set of meshes instantiated thousands of times on a strict grid with fixed scale?

Forget demos with asteroid fields - this is the perfect use case for GPU instancing right here.
Instancing
This whole approach is prohibitively expensive with traditionally rendered geometry - think 15,000+ draw calls per level.

Instancing lets us drop the draw call count to just hundreds, making the approach viable.
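
For illustration, a minimal sketch of what the instanced submission can look like, assuming matrices have been gathered per block type elsewhere (Graphics.DrawMeshInstanced caps each call at 1023 instances, and the material needs GPU instancing enabled):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class BlockRenderer : MonoBehaviour
{
    public Mesh blockMesh;
    public Material tilesetMaterial; // must have "Enable GPU Instancing" ticked
    public List<Matrix4x4> instances = new List<Matrix4x4>();

    void Update()
    {
        const int batchSize = 1023; // hard limit per DrawMeshInstanced call
        var batch = new Matrix4x4[batchSize];
        for (int i = 0; i < instances.Count; i += batchSize)
        {
            int count = Mathf.Min(batchSize, instances.Count - i);
            instances.CopyTo(i, batch, 0, count);
            Graphics.DrawMeshInstanced(blockMesh, 0, tilesetMaterial, batch, count);
        }
    }
}
```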
Damage
Thanks to levels
being built from
thousands of
separate meshes, we
can leverage
instancing-friendly
per-object shader
properties.
Damage
One example of such properties: vectors with the state of the 8 corner points, which allow the surface shader to tear off parts of the level.
Better damage
visualization
Instead of sending just the empty states of the volume grid, we can also send 0-1 integrities for the 8 corner points. This allows us to visualize partial damage and make cell destruction smoothly animated.
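
A sketch of how such per-instance corner data can be fed to the shader through a MaterialPropertyBlock - 8 corner integrities packed into two Vector4s per instance; the property names are illustrative:

```csharp
using UnityEngine;

public static class DamageDraw
{
    static readonly int CornersA = Shader.PropertyToID("_CornerIntegrityA");
    static readonly int CornersB = Shader.PropertyToID("_CornerIntegrityB");

    public static void Draw(Mesh mesh, Material material, Matrix4x4[] matrices,
                            Vector4[] cornersA, Vector4[] cornersB)
    {
        var props = new MaterialPropertyBlock();
        props.SetVectorArray(CornersA, cornersA); // corners 0-3, one Vector4 per instance
        props.SetVectorArray(CornersB, cornersB); // corners 4-7, one Vector4 per instance
        Graphics.DrawMeshInstanced(mesh, 0, material, matrices, matrices.Length, props);
    }
}
```

The shader then reads those values per instance and peels away geometry around the destroyed corners.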
Damage edges
Just as it's possible to skin an arbitrary 3D volume with a finite number of 8-point configurations (shapes), damage edges on cells can be filled the same way.
Damage edges
All you need is a small
number of decorative
meshes depicting
chunks of concrete
and other appropriate
bits.
Damage edges
With an algorithm fitting those edges to various partially damaged block configurations, we get better, layered transitions to demolished cells.
Interiors
Storing damage as another state of a voxel grid point has another benefit - it's possible to place interior tileset blocks to plug any hole caused by damage.
Collisions
Thanks to grid-aligned level
geometry, we can drive
collisions from a simple
pool of box colliders
attached to full voxel points
One more thing
Speaking of which - what is
the next obvious step when
you have a destructible grid
with proper, grid-aligned
colliders?
Physics-driven destruction
How about making all these
damaged buildings fall
down?
Physics-driven destruction
The idea is pretty simple: we build a graph of filled points through the volume starting from the bottom level (using neighbourhood search), find the remaining full points not included in the resulting graph, and make those isolated chunks fall down, inflicting point damage on the voxel grid. Still to do: stress analysis to prevent single-column supports.
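
A minimal sketch of that connectivity pass - a plain BFS over a boolean occupancy grid, which is a simplification of the real grid data:

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class SupportCheck
{
    // Returns every filled point not connected to the ground level.
    public static List<Vector3Int> FindUnsupported(bool[,,] filled)
    {
        int sx = filled.GetLength(0), sy = filled.GetLength(1), sz = filled.GetLength(2);
        var reached = new bool[sx, sy, sz];
        var queue = new Queue<Vector3Int>();

        // Seed the search with every filled point on the bottom level.
        for (int x = 0; x < sx; x++)
            for (int z = 0; z < sz; z++)
                if (filled[x, 0, z])
                {
                    reached[x, 0, z] = true;
                    queue.Enqueue(new Vector3Int(x, 0, z));
                }

        var offsets = new[]
        {
            new Vector3Int(1, 0, 0), new Vector3Int(-1, 0, 0),
            new Vector3Int(0, 1, 0), new Vector3Int(0, -1, 0),
            new Vector3Int(0, 0, 1), new Vector3Int(0, 0, -1)
        };

        // Flood-fill through filled neighbours (the "graph of filled points").
        while (queue.Count > 0)
        {
            var p = queue.Dequeue();
            foreach (var o in offsets)
            {
                var n = p + o;
                if (n.x < 0 || n.y < 0 || n.z < 0 || n.x >= sx || n.y >= sy || n.z >= sz)
                    continue;
                if (!filled[n.x, n.y, n.z] || reached[n.x, n.y, n.z])
                    continue;
                reached[n.x, n.y, n.z] = true;
                queue.Enqueue(n);
            }
        }

        // Anything filled but never reached belongs to an isolated chunk.
        var isolated = new List<Vector3Int>();
        for (int x = 0; x < sx; x++)
            for (int y = 0; y < sy; y++)
                for (int z = 0; z < sz; z++)
                    if (filled[x, y, z] && !reached[x, y, z])
                        isolated.Add(new Vector3Int(x, y, z));
        return isolated;
    }
}
```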
Navigation grid
Again, thanks to the shape of the level being so ordered, it's incredibly simple to solve navgrid generation for pathfinding:

All you need is to find flat floor configurations, create a node for each, and link them to their neighbours (see the sketch below).
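
A simplified sketch of that step - here "flat floor" is approximated as "point below is full, point at the cell is empty", standing in for the real configuration check:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class NavNode
{
    public Vector3Int cell;
    public List<NavNode> neighbours = new List<NavNode>();
}

public static class NavGridBuilder
{
    public static Dictionary<Vector3Int, NavNode> Build(bool[,,] filled)
    {
        var nodes = new Dictionary<Vector3Int, NavNode>();
        int sx = filled.GetLength(0), sy = filled.GetLength(1), sz = filled.GetLength(2);

        // One node per walkable cell: support below, free space at the cell.
        for (int x = 0; x < sx; x++)
            for (int y = 1; y < sy; y++)
                for (int z = 0; z < sz; z++)
                    if (filled[x, y - 1, z] && !filled[x, y, z])
                    {
                        var cell = new Vector3Int(x, y, z);
                        nodes[cell] = new NavNode { cell = cell };
                    }

        // Link each node to its four horizontal neighbours, where present.
        foreach (var node in nodes.Values)
        {
            TryLink(nodes, node, new Vector3Int(1, 0, 0));
            TryLink(nodes, node, new Vector3Int(-1, 0, 0));
            TryLink(nodes, node, new Vector3Int(0, 0, 1));
            TryLink(nodes, node, new Vector3Int(0, 0, -1));
        }
        return nodes;
    }

    static void TryLink(Dictionary<Vector3Int, NavNode> nodes, NavNode node, Vector3Int offset)
    {
        if (nodes.TryGetValue(node.cell + offset, out var other))
            node.neighbours.Add(other);
    }
}
```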
Advanced navigation
You can also generate more elaborate pathfinding grids with different link types for jumping, dropping, climbing and so on:

Simply traverse the grid starting from flat nodes, searching for a suitable chain of configurations.
Props
Having a grid of shapes which can be traversed in code is also beneficial for decorating levels with smaller objects.

We implement a prop system that detects whether a prop is compatible with its location, auto-rotates wall props, drops props attached to destroyed cells and so on.
Block shading
Speaking of cheap rendering, we briefly mentioned generating texture/property arrays during the tileset processing stage.

All tileset blocks use a single material with a shader that works similarly to modern terrain shaders.
Block shading
The shader picks from an array of textures and PBR properties using a material index (which was baked into secondary UVs).

This allows artists to author tileset assets without any care for atlasing, enabling fast workflows like strip texturing from multiple maps.
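
A sketch of the packing side of that, assuming all precursor textures share one size and format; baking the material index into UV2 is shown alongside:

```csharp
using UnityEngine;

public static class TilesetPacking
{
    // Pack one input channel (e.g. albedo) from every precursor material
    // into a single Texture2DArray, one slice per material.
    public static Texture2DArray Pack(Texture2D[] maps)
    {
        var array = new Texture2DArray(maps[0].width, maps[0].height,
                                       maps.Length, maps[0].format, true);
        for (int slice = 0; slice < maps.Length; slice++)
            Graphics.CopyTexture(maps[slice], 0, array, slice);
        return array;
    }

    // Bake the material index into UV2 so the shader can pick the
    // matching array slice per vertex.
    public static void BakeMaterialIndex(Mesh mesh, int materialIndex)
    {
        var uv2 = new Vector2[mesh.vertexCount];
        for (int i = 0; i < uv2.Length; i++)
            uv2[i] = new Vector2(materialIndex, 0f);
        mesh.uv2 = uv2;
    }
}
```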
Block shading
No matter the number of materials in the source scene of the tileset, blocks always cost 1 draw call, thanks to the lone array-utilizing material used ingame.

Well, actually, we use far fewer than 1 draw call per cell - remember, everything is instanced too.
Other stuff
Of course, support for per-block colors and emission values is in too - implementing that through instancing-friendly shader properties lets us tweak individual blocks without breaking up batches.
Mech assets
The level system isn't the only area benefiting from automation, asset processing and related trickery.
The core
The heart of any game about mechs is customization.

We need a deep customization system, allowing you to tinker with various bits, swap parts and so on - and we want to leverage it to produce varied enemies and offer interesting progression.
Constraints
Since we're a small team, we want to produce as much content as possible with as few work hours as possible.

To maximize the possibilities for customization with minimum art cost per permutation, we decided to go with a single platform.
Platform
The platform, or skeleton, is shared
by every unit existing in the game
right now. It's the underlying body to
which all parts are attached, be it
armor, weapons or even pilot
capsule.
Platform
Since the body had to work with a huge number of assets - potentially hundreds of different items fitted to it - making it had to be a deliberate, iterative process.
Iterating
Except iterating on a complex skeleton built from multiple files is a bit of a pain.

Deciding how joints should look to interfere with armor as little as possible, how limbs should be proportioned to bear different items equally well - it all involves reexporting the assets multiple times, which creates a bottleneck from manually reassembling the character hierarchy in Unity.
First thing to automate
So, the first thing we automated was the character hierarchy building itself.

We wrote a script that examines a description of the desired structure (this joint goes under that joint, etc.) and fetches the meshes from a set of files, instantly updating the playable character prototype whenever we change one of the models it's composed from.
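
A simplified sketch of such a rebuild - the description format and the two-pass parenting are illustrative, not our exact implementation:

```csharp
using System.Collections.Generic;
using UnityEngine;

[System.Serializable]
public class JointDescription
{
    public string name;   // e.g. "upper_arm_l"
    public string parent; // e.g. "shoulder_l"; empty for the root
    public Mesh mesh;     // fetched from the imported model files
}

public static class HierarchyBuilder
{
    public static GameObject Rebuild(List<JointDescription> joints)
    {
        var byName = new Dictionary<string, GameObject>();
        GameObject root = null;

        // Pass 1: create a GameObject per joint, attach its mesh if any.
        foreach (var joint in joints)
        {
            var go = new GameObject(joint.name);
            if (joint.mesh != null)
            {
                go.AddComponent<MeshFilter>().sharedMesh = joint.mesh;
                go.AddComponent<MeshRenderer>();
            }
            byName[joint.name] = go;
        }

        // Pass 2: parent every joint under the joint its description names.
        foreach (var joint in joints)
        {
            if (string.IsNullOrEmpty(joint.parent))
                root = byName[joint.name];
            else
                byName[joint.name].transform.SetParent(byName[joint.parent].transform, false);
        }
        return root;
    }
}
```

Rerunning this whenever a source model changes is what keeps the prototype permanently up to date.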
More requirements
All character meshes had to include vertex colors baked from special materials to drive the masking of 4 user-controlled customization zones.

They also had to have curvature/AO maps, AO baked into vertex data, body piece index baked into one of the secondary UV channels and so on.

We automated all that too.
The workflow
Nobody wants to manually export, bake
and assemble 30 body parts. So, the
workflow allows us to iterate quickly,
and goes like this:
The workflow
● Artist exports just one FBX file
● The scripts find the correct sub-objects in it, merge them, and bake every input required by our shaders
● The whole hierarchy used for animation is automatically rebuilt every time one of the sub-objects changes
● We immediately hit "Play" and see it in action to check on, say, an updated leg joint
Animation-unfriendly bits
Mechs are complex, and some parts of them are pretty unfriendly to animation.

All those mechanical joints rotating on multiple axes in a strict order to match a certain 3D orientation, pistons pushing an arm into a certain pose and so on.
Procedural
animation
Some parts of the hierarchy are animated with scripts, making clips lightweight and allowing us to iterate on the joint design without breaking any animations. The animator creates a keyframe with an arm pointing forward; a script figures out how to do that.
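
A minimal sketch of one such script-driven joint - a hypothetical two-axis mount that decomposes "aim at this target" into a yaw rotation and a pitch rotation, each on its single allowed axis:

```csharp
using UnityEngine;

public class TwoAxisJoint : MonoBehaviour
{
    public Transform yawJoint;   // rotates around its local Y only
    public Transform pitchJoint; // child of yawJoint, rotates around its local X only
    public Transform target;     // posed by the animator

    void LateUpdate()
    {
        // Direction to the target in the yaw joint's parent space.
        Vector3 local = yawJoint.parent.InverseTransformPoint(target.position)
                        - yawJoint.localPosition;

        float yaw = Mathf.Atan2(local.x, local.z) * Mathf.Rad2Deg;
        float pitch = -Mathf.Atan2(local.y,
                          new Vector2(local.x, local.z).magnitude) * Mathf.Rad2Deg;

        yawJoint.localRotation = Quaternion.Euler(0f, yaw, 0f);
        pitchJoint.localRotation = Quaternion.Euler(pitch, 0f, 0f);
    }
}
```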
Side benefits to
auto-animated joints
Excluding those exotic bits from the main hierarchy chains also makes it possible to set up the skeleton identically to a standard biped rig, reaping the benefits of complex human-targeted IK, FK, ragdolls and other interesting systems.
Reducing scope of
artistic work further
With the underlying body and animations out of the way, we turn to the items themselves.

We need to produce hundreds of pieces of armor and weapons with as little effort as possible to maximize the output of our artist.
Textures
One of the biggest time sinks in 3D art is texturing.

Unwrapping, wrangling bake files, using them to author proper input maps for the PBR shaders and so on - it's all pretty time consuming.
Customization concerns
Also, we can't even use traditional PBR maps, because we want every PBR property to be customizable - pre-painted smoothness or albedo maps are useless in this case.

So, we took a bit of a shortcut.
Shortcuts to texturing
Instead of authoring a full set of PBR maps for every part, we bake curvature and AO (which can be thought of as abstract shape descriptions).

We interpret those inputs directly in the surface shader to allow full customization of the material.
Raw automated bakes
drive shading
Essentially, users control the settings of a tiny Substance Designer-like system that isn't driven by any manually authored maps.

Which means we don't spend a single second on texturing. Did I mention that unwrapping is automated too?
The workflow
So, the workflow for armor pieces and weapons is simple: the 3D artist just creates basic shapes and marks up customizable areas with a few special materials, then exports the resulting untextured bit and gets to work on the next design.
Well, almost
Except the result of that workflow looks a bit plain - no small details, just large shapes with neat shading.

And we can't really just paint rivets/seams/hatches in the textures - it's time consuming and incompatible with the automated system. But we solved it in a different way.
Details, details, details
Come to think of it, most hardsurface details are repetitive.

Dozens of seams, rivets, panels, vents, protrusions, depressions, hatches, ports and joints are present in most hardsurface designs.

All of those are usually baked to a unified texture using so-called "floaters", or modeled right into the mesh with great effort.
Removing redundant
work
We don't want to paint 27 vents or 76 rivets manually, nor do we want to bake them using floaters.

Instead, we author every bit of detail just once, and pack it into a single atlas.
Decal
workflow
Next, we map that atlas to some quads
or other simple geometry, and start
slapping that detail onto our parts like
stickers.
The crucial bit
But since we can't touch the underlying surface texture (it's automated and should contain only bakes of the plain shape), we leave all detail floating in the part model.

In the game, the decal shader selectively overwrites parts of the G-buffer under those floating decals, smoothly blending the detail into the underlying surface.
The benefit
The benefits from this are obvious - for one, we never waste any texture space on repetitive detail.

Also, if we want to edit some detail (like the shape of a rivet), changes to the decal atlas immediately influence the look of every single model in the project using that detail.

That's all well and good, but where is the automation in this area?
Automation in decals
Even in that efficient workflow, there is work worth getting rid of. For instance, the decal shader needs to know the customization ID of the underlying surface to blend with its albedo correctly, and all decal vertices need AO baked in to match the shadows on the underlying surfaces.

We completely automate that work, feeding the surface mesh and decal mesh to a script which performs it for us.
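
A sketch of that bake - a brute-force nearest-vertex transfer of baked AO from the surface mesh into decal vertex colors; it assumes the surface already has AO in its vertex colors, and a real version would use a spatial hash and transfer the customization ID the same way:

```csharp
using UnityEngine;

public static class DecalBaker
{
    public static void TransferAO(Mesh surface, Mesh decal)
    {
        Vector3[] surfVerts = surface.vertices;
        Color[] surfColors = surface.colors; // AO assumed baked in here
        Vector3[] decalVerts = decal.vertices;
        var decalColors = new Color[decalVerts.Length];

        for (int i = 0; i < decalVerts.Length; i++)
        {
            // Find the closest surface vertex (O(n*m) for clarity).
            int nearest = 0;
            float best = float.MaxValue;
            for (int j = 0; j < surfVerts.Length; j++)
            {
                float d = (surfVerts[j] - decalVerts[i]).sqrMagnitude;
                if (d < best) { best = d; nearest = j; }
            }
            decalColors[i] = surfColors[nearest];
        }
        decal.colors = decalColors;
    }
}
```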
Damage
We don't author any "damaged" states for the armor, even though mech damage is a very important part of the game.

You should be able to punch a hole in the armor or tear an entire arm off the enemy mech - that's part of the fun of a game about walking tanks.
Damage
implementation
Instead of spending time on special damage assets, we render it similarly to buildings, peeling off layers based on local damage maps updated on hits.

Add some fancy FX and deferred decals, and we're in pretty good shape even without bespoke assets.
Dynamic animations
We also don't author any flinching/staggering animations for different hit directions and intensities. Instead, we use a ragdoll system which gracefully blends in whenever a hit occurs.

It's also used for similar needs like weapon recoil animation.
Saving time
Likewise, some movements work off an IK rig, removing the need to animate heads, arm poses during aiming and so on.

All this, again, reduces the workload of the artists, allowing them to focus on more creative tasks.
Weapons
Since we need a lot of them and would prefer to reduce per-weapon workload as much as possible, we use an old trick à la Borderlands - modular components.
Weapons
Every new component increases the number of possible permutations.
Props
Props are all the small decorative
objects placed on top of main level
geometry - street lights, vehicles,
boxes, vegetation and so on.
Basic requirements
Props need to be:

● Very cheap to render (1 draw call)
● Destructible (from mechs running through levels, stray shots, destruction of the level underneath them)
● Customizable (lights, colors, locations)

But before all that, we have to get them into the levels.
Prop placement
We wanted to completely avoid the need to position props manually.

Dragging transforms around is time consuming, imprecise, and leaves us with no information about the relation of props to the voxel grid, which is important for integrating them with the main level destruction.
Done quick
So, our level editing tools treat props as another way of modifying grid data.

They can have some small offsets and independent rotation, but fundamentally, they "inhabit" the cells of the level.

This makes placement a one-click affair and greatly simplifies prop-related logic like collisions against mechs.
Collision setup
Speaking of collisions and
destruction in general - we automate
collider setup, detection of renderers
to use in active/destroyed state,
setup for fellable props like trees and
street lights (which involves setting
up force application points, various
physical properties, FX points) and
some other prop properties.
Processing, rendering
Just like mech parts and level blocks,
props go through a dedicated 3D
model processor performing some
time-saving tasks like material
merging, vertex AO bakes and so on.
Processing, rendering
Rendering works similarly to the level system, with instancing and injected properties allowing per-prop customization without breaking batches.
Special needs
Speaking of reducing artist work hours per prop: there are a few vegetation-specific angles to it.

While you can't really invent many novel ways of modeling boxes and street lights fast, vegetation work can be significantly simplified beyond the tricks described above.
Vegetation
For one, all the vegetation we use is made in the Tree Creator system that ships with Unity, which relieves us of the need to model and unwrap all the repetitive branch and foliage geometry.
Vegetation
But that's not exactly novel - many Unity games use it. What's interesting is the optimization of vegetation produced that way.
Level of detail
Since we're making a strategy game with relatively low geometry complexity required from props, we're mostly concerned with producing just one LoD level for our props.

For vegetation, that would be a billboard - a small piece of geometry fading in when a tree is seen from a distance.
Authoring billboards
Usually, LoD billboards are a bit of a
pain - you have to take a shot of a
tree in neutral lighting conditions,
integrate it with normal lighting,
batch them into groups and so on.
Authoring billboards
They rarely look right and do not
blend well with higher LoDs. We
solved that and automated their
production.
Basic automation
The first step to optimizing this part of vegetation production is generating the billboard texture.

After an artist clicks a button, we set up a neutral lighting environment, frame the prop with a camera, switch on some nice supersampling, save a shot with alpha, then downsample and save a texture.
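
A sketch of the capture itself - rendering the prop with a dedicated camera into a supersampled RenderTexture with an alpha-preserving clear, then reading it back (downsampling and asset saving omitted):

```csharp
using UnityEngine;

public static class BillboardCapture
{
    public static Texture2D Capture(Camera captureCamera, int size, int supersample = 4)
    {
        int big = size * supersample;
        var rt = RenderTexture.GetTemporary(big, big, 24, RenderTextureFormat.ARGB32);

        captureCamera.targetTexture = rt;
        captureCamera.clearFlags = CameraClearFlags.SolidColor;
        captureCamera.backgroundColor = new Color(0f, 0f, 0f, 0f); // keep alpha
        captureCamera.Render();

        var previous = RenderTexture.active;
        RenderTexture.active = rt;
        var tex = new Texture2D(big, big, TextureFormat.RGBA32, false);
        tex.ReadPixels(new Rect(0, 0, big, big), 0, 0);
        tex.Apply();

        RenderTexture.active = previous;
        captureCamera.targetTexture = null;
        RenderTexture.ReleaseTemporary(rt);
        return tex;
    }
}
```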
The usual problem
To make the billboard tree look like a real tree in the distance, it needs to write values very similar to the original tree into the deferred buffers.

Same albedo, same normals, same shadows. Which is a bit hard to achieve if you're using a cutout of a final buffer capture.
The solution
To solve this, we capture the full content of the deferred rendering buffers, as well as a shot of the final buffer with special replacement shaders leaving only the shadow mapping visible.

From them, we create textures that, when used with a custom shader, give us a tree pretty similar to the real one in terms of its PBR output.
Other stuff
Of course, all the small tasks like setting up a quad mesh, creating a material, customizing it to match the full tree model's values, compositing textures with the appropriate resolution and channel packing, and saving the result are also automated.

It's a one-click system.
Performance
By the way, since we're using instancing, every single billboard is rendered with the same material, in just one draw call total.

So, 500 trees of the same type in the distance past the LoD threshold will require just 1 draw call.
More performance
But wait, there's more. Since billboards always use the same mesh, we can generate a per-level texture array with all billboard textures and pass a slice index to each billboard using an instancing-friendly property.

That way, a whole billboard forest with any number of tree types can be rendered in... 1 draw call.
Data-driven systems
It might be mundane stuff for some
enterprise developers, but it's a
surprisingly rare and effective approach
in game development
We offload as much as possible
No point limiting yourself to save games and settings configs. For us, external files store:

● Items
● Scenarios
● Levels
● Props
● Localization
● Units
● Constants
● Characters
● Even unit behaviors/action types!
How it works
We use a YAML serializer to generate human-readable configuration files containing as much data as we can pull out of the game.

We load those configs at runtime to drive the aforementioned systems, instead of hardcoding them or using internal assets in the Unity project.
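
A sketch of what the loading side can look like, assuming a YamlDotNet-style deserializer (the config class and file layout are illustrative, not our real schema):

```csharp
using System.IO;
using YamlDotNet.Serialization;

// A config file like "Configs/Items/rifle_mk2.yaml" might contain:
//   Id: rifle_mk2
//   DisplayName: Rifle Mk.2
//   Mass: 120
//   Cost: 4000
public class ItemConfig
{
    public string Id { get; set; }
    public string DisplayName { get; set; }
    public float Mass { get; set; }
    public int Cost { get; set; }
}

public static class ConfigLoader
{
    static readonly IDeserializer Deserializer = new DeserializerBuilder().Build();

    public static ItemConfig LoadItem(string path)
    {
        return Deserializer.Deserialize<ItemConfig>(File.ReadAllText(path));
    }
}
```

Hot loading is then just re-reading the file and swapping the object in place.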
Benefit A
For one, this makes the game extremely friendly to modding. It's very easy to add new items, change characters and even implement new unit abilities with their own data-driven logic.
Benefit B
This approach also speeds up iteration during development. Hot loading data and using tweakable scalar values to adjust most of the behavior in the game makes fixing lots of issues quick and painless.
We sat down
with a playtester once
And rebalanced the game on the spot as he watched and pointed us to issues that stood out to him.

Data-driven systems can be a bit of a slog to implement, but it pays to go in that direction.
Spicy hotfixes
Having the ability to fix issues on the spot without making a new build is, well, priceless. Any developer on a deadline can probably relate :)
To summarize
Through automation and procedural assistance, we've greatly simplified:

● Level art production
● Level design
● Character art production
● Prop art production
● Balancing the game
● Filling it with items/scenarios/other RPG goodies

That's not an exhaustive list - we have more systems (pilots, AI, panoramic art behind the levels and so on) - but it's probably time to wrap up.
Questions?
