
Linköping University | Institution for technology and science

Master’s thesis, 30 ECTS | Medieteknik


2023 | LiTH-ISY-EX–23/5546–SE

Procedural Worlds
– A proposition for a tool to assist in creation of landscapes by
procedural means in Unreal Engine 5

Viktor Sjögren
William Malteskog

Supervisor : Harald Nautsch


Examiner : Ingemar Ragnemalm

Linköpings universitet
SE–581 83 Linköping
+46 13 28 10 00 , www.liu.se
Abstract

This thesis explores the possibilities of creating landscapes through procedural means within the game engine Unreal Engine 5. The aim is to provide a flexible procedural landscape tool that does not limit the user and that is compatible with existing systems in the engine. The research questions focus on comparisons to other work regarding landscape generation and the generation of procedural roads.
The process to achieve this was an extensive implementation adding modules that both build upon and add to the engine's source code. The implementation was divided into five major components: noise generation for terrain, biotope interpolation, asset distribution, road generation and a user interface.
Perlin noise, combined with Fractal Brownian Motion, was a vital part of generating terrain with varying features. For interpolation, a modified version of lowpass Gaussian filtering was implemented in order to blend biotope edges together. Asset distribution and road generation were implemented using pseudo-randomness combined with heuristics. The user interface was created to tie everything together for testing.
The results show potential for assisting in procedural landscape creation with a large amount of freedom in customization. There are, however, flaws in some aspects; in particular, the interpolation methods suffer from clear visual artifacts. Whether the tool is suitable for professional standards remains to be fully proven objectively, as the testing in this thesis work was limited.
Acknowledgments

Thank you Ingemar Ragnemalm and Harald Nautsch for assisting us in our work.
Special thanks to Sofie Våglund for creating assets which allowed us to test our implementa-
tion of asset distribution.

Contents

Abstract ii

Acknowledgments iii

Contents iv

List of Figures vi

List of Tables viii

1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Aim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Research questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Delimitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2 Theory 3
2.1 Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.2 Noise modifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Pseudo-random value generators . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.4 Interpolation/Smoothing Methods . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.5 Catmull-Rom splines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.6 Unreal Engine Slate Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

3 Related work 10
3.1 Unreal Engine plugins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.2 Procedural biomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.3 Procedural roads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

4 Method 17
4.1 Pre-study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.2 Generating a landscape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.3 Interpolation between tiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.4 Procedural asset spawning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.5 Procedural road generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.6 User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

5 Results 36
5.1 Landscape generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.2 Tile grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5.3 Interpolation between biome terrain data . . . . . . . . . . . . . . . . . . . . . . 41
5.4 Asset Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.5 Procedural roads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

5.6 User interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

6 Comparisons with other work 52


6.1 Comparison to SeamScape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
6.2 Comparison to Errant Worlds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
6.3 Comparison to Fischer et al . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.4 Comparison to Slowroads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

7 Discussion 54
7.1 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
7.2 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

8 Conclusion 61
8.1 Research questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
8.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

Bibliography 63

A Code snippets 65
A.1 generate() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
A.2 spawnAssets() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
A.3 CRSpline.cpp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

List of Figures

2.1 Perlin Noise and the Principle of Fractal Brownian Motion . . . . . . . . . . . . . . 5


2.2 Feature points in a grid, where cells are to be classified using the fundamentals of
Cellular Noise. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Gaussian distribution in 2D with mean (0,0) and σ = 1. . . . . . . . . . . . . . . . . 7
2.4 Gaussian kernel where σ = 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.5 Example result from slate code snippet. . . . . . . . . . . . . . . . . . . . . . . . . . 9

3.1 Screenshot taken of the workflow in Errant Worlds Biomes, different biomes are
painted onto the landscape using a brush. . . . . . . . . . . . . . . . . . . . . . . . . 12
3.2 Whittaker diagram used to classify biome type. . . . . . . . . . . . . . . . . . . . . . 13
3.3 A screenshot of the game slowroads.io. . . . . . . . . . . . . . . . . . . . . . . . . . 15

4.1 First iterations of landscape generation . . . . . . . . . . . . . . . . . . . . . . . . . . 19


4.2 Example of a smoothed area of the terrain mesh between two adjacent biotopes,
where tiles of one biotope are painted in gray and tiles of the other biotope are painted
in green. The red area visualizes the smoothed area between them. . . . . . . . . . 20
4.3 Interpolation comparison from a side view. . . . . . . . . . . . . . . . . . . . . . . . 21
4.4 Visualisation of culled objects based on intersection check between previously
placed objects. The brown cubes are objects that have been placed and the red
cubes are objects that have been culled. . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.5 Visualisation of the road spline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.6 Visualisation of the random road algorithm . . . . . . . . . . . . . . . . . . . . . . . 26
4.7 Visualisation of height criterion of the road algorithm v1 in its most simple form . 27
4.8 Visualisation of height criterion of the road algorithm v2 in its most simple form . 29
4.9 Visualisation of candidate selection from the start point; orange indicates too much
distance to the end point and purple indicates too much slope, meaning that these
points will not be considered as candidates, while yellow segments are considered
as candidates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.10 Visualisation of road after candidate selection and end point is reached, success-
fully creating a road. In comparison to Figure 4.9 it can be seen that the first can-
didate chosen was the yellow segment to the right. . . . . . . . . . . . . . . . . . . . 31
4.11 Smart road generation with and without goBack control logic . . . . . . . . . . . . . 32
4.12 UI noise settings, where slate widgets are highlighted with a white border. . . . . . 33
4.13 The 2D preview together with road placement, settings and listing. . . . . . . . . . 34
4.14 UI listing assets to be distributed in a biotope together with a display of the hier-
archy of the slate widgets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

5.1 Landscape generated as ULandscape’s using the plugin with different settings,
5.1(a) shows a "plains" type biotope while 5.1(b) shows a "mountain" type biotope. 36
5.2 Landscape generated using 4 different biotopes placed both manually and with
manual Voronoi. The biotopes have similar noise settings but differ in cutoff
and inverted cutoff. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

5.3 Landscape generated using 3 different biotopes, using cutoff, turbulence and pick-
ing seeds which matches adjacent biomes reasonably well. . . . . . . . . . . . . . . 38
5.4 Landscape generated using 4 different biotopes, comparing different results based
on the set Lacunarity, which can be seen in table 5.5. The settings are listed top to bottom,
relating to the colors in the 2D preview in clockwise rotation starting with the
yellow marked biome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.5 3 preset landscape sizes segmented into the tile grid using landscape streaming
proxies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5.6 Interpolation comparison from a top down view. . . . . . . . . . . . . . . . . . . . . 41
5.7 Interpolation comparison between two similar biotopes . . . . . . . . . . . . . . . . 41
5.8 Visual artifacts caused by the Gaussian lowpass filtering used to blend biotope
edges. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.9 Jagged corners being fixed which was caused by a bug in the interpolation imple-
mentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.10 Image taken of four different biotopes placed in each corner of the map each as-
signed with an unique asset to be distributed. . . . . . . . . . . . . . . . . . . . . . . 43
5.11 An image showing the functionality of asset distribution avoiding a road. . . . . . 43
5.12 If no regard to collision is taken, assets might intersect. . . . . . . . . . . . . . . . . 44
5.13 The difference of not considering collision vs using collision with a high sparse-
ness value. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
5.14 Result of setting a low angle of which the terrain can have where assets are placed. 44
5.15 Assets rotation and scale can be randomized. . . . . . . . . . . . . . . . . . . . . . . 45
5.16 The result of populating a biotope tile with a large amount of grass assets. . . . . . 45
5.17 Manual road generation in perspective view . . . . . . . . . . . . . . . . . . . . . . 46
5.18 Manual road generation topdown view . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.19 Smart road generation using the same terrain and start/end points as in Figure
5.17(b). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.20 Smart road generation in the same map layout seen in Figure 5.18. . . . . . . . . . 48
5.21 Smart road generation with different road settings . . . . . . . . . . . . . . . . . . . 49
5.22 Smart road generation and road mask material applied to the landscape . . . . . . 49
5.23 UI biotope creating and terrain generation. . . . . . . . . . . . . . . . . . . . . . . . 50
5.24 UI Asset distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.25 UI Biotope creation tab in use, with biotopes and roads placed for generation. . . . 51

List of Tables

4.1 Kernel settings for the interpolation used on the landscape seen in Figure 4.3. . . . 21

5.1 Noise settings for the "plains" biome used in generation for the landscape seen in
Figure 5.1(a). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.2 Noise settings for the "mountains" biome used in generation for the landscape
seen in Figure 5.1(b). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.3 Noise settings for multiple biotopes . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.4 Noise settings for multiple biotopes . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.5 Noise settings for multiple biotopes with different Lacunarity . . . . . . . . . . . . 39
5.6 Noise settings for the two adjacent biotopes used in Figure 5.7; the first row corre-
sponds to the middle biotope and the second row to the surrounding biotope. . . . 41

1 Introduction

This master's thesis work was carried out internally at Linköping University during the autumn of 2022.

1.1 Motivation
The game industry is a growing market where technology constantly evolves to allow for new and interesting experiences, which in turn attract users. One technique that is by no means new, but is still in use today, is procedural generation. Being able to procedurally generate terrain is an effective way to reduce repetitiveness in level design and in turn increase replayability for players of the application.
Unreal Engine 5 by Epic Games [1] is the latest iteration of Unreal Engine 4, one of the largest game engines on the market. Since the fifth version of the engine was only released in the spring of 2022, there is a lack of tools that allow users to create procedural environments. This is where we want to offer a tool that reduces the workload through procedural methods and allows developers to create diverse environments without the need for expert level design experience.

1.2 Aim
The purpose of this thesis work is to create, and to analyze through implementation and comparison, a flexible procedural landscape generation tool integrated into Unreal Engine 5. The idea behind making the tool flexible is to ensure that it is not limited by thematic choices or predetermined objects and rules, but rather that full control is given to the developer, capturing creative freedom and versatility. It is also important that the tool operates seamlessly with the systems already laid out in the engine.
The motivation for this work does not come from a general problem description, but rather from our own observations of the procedural tool market, which at the beginning of this work seemed lackluster, both in terms of the number of tools available for the new version of the engine and in terms of quality; the limited feature sets offered can often force developers into the narrow use case of a given tool.


1.3 Research questions


1. Can a flexible procedural tool for landscape creation be made in such a way that it matches the visual fidelity and features of other existing tools?

2. Is it possible to generate roads that vary through pseudo-random means and still
retain logical pathing in relation to the procedural terrain?

1.4 Delimitations
AI/deep learning will not be under consideration for this project, but could be a subject of comparison when evaluating results.
Sources for related work will primarily be Unreal Engine compatible projects, but if such sources are too few, work unrelated to Unreal Engine will also be considered.
"Landscape" generation here refers to natural environments, in the sense that the algorithms for asset distribution and road generation will not target urban environments.
The goal is to populate the terrain with foliage such as trees, bushes and stones. Creating assets will not be a priority; assets will only be created where necessary.
Graphical techniques such as lighting algorithms and shadow techniques will not be implemented as our own solutions; instead the built-in methods provided by Unreal Engine 5 will be used.
The tool will only be compatible with Unreal Engine 5 and is suited for users with prior knowledge of the engine.
This thesis will not bring up ethical or moral questions, as the work is focused solely on the procedural generation tool described in the aim.

2 Theory

In this chapter the theoretical methods used in the thesis are presented. These are noise
methods for landscape terrain generation, pseudo-random generators for asset distribution,
smoothing methods for interpolation between biotopes, splines for path creation and lastly
Unreal Engine’s Slate framework for the User Interface.

2.1 Noise
In order to generate irregular procedural textures, an irregular function is needed, often called noise. This function is necessary to prevent monotonous patterns from repeating. The irregular function appears to be stochastic, but since true randomness is unusual in computer science, it is most often pseudo-random, which is not necessarily an unwanted property. The common procedure is to generate pseudo-random values, typically between -1 and 1, that are uniformly distributed in texture space at even integers, forming the integer lattice. There are many ways to construct irregular functions. A Pseudo-random binary sequence (PRBS) is one such function; its downside is that it is sequential, which makes it unsuited for parallel fragment shader calculations. Another is truncated trigonometric numbers, which is not sequential. Which irregular function to use is a question of quality, performance and the ability to perform parallel calculations [2].

Value Noise
By generating a pseudo-random number between -1 and 1 at every lattice point, a noise function can be constructed by interpolating between these points. This method is called Value noise. The deciding factor for value noise is the choice of interpolation. The choices are many; linear interpolation is a poor choice if the aspiration is smooth-looking noise, as the result tends to appear "boxy". A better suited interpolation method is a cubic one, due to its continuous first and second derivatives [2].

Gradient noise (Perlin Noise)


The first implementation of Gradient Noise was done by Ken Perlin in 1985, and it earned him an Academy Award for Technical Achievement from the Academy of Motion Picture Arts and Sciences in 1996. Instead of generating a number at each lattice point as in Value Noise, his method generates a pseudo-random gradient vector at each lattice point. The stochastic function is then built from these gradients, hence the name Gradient Noise. The interpolation between lattice points is based on the pseudo-randomly generated gradients at the four corners of the cell when working in 2D, or the eight corners in 3D. Important to note is that the noise value is 0 at the lattice points. For interpolation, the fifth-degree polynomial seen in equation 2.5 was suggested by Perlin, since it has continuous second derivatives and its second derivative is zero at the endpoints, making the noise second-derivative continuous everywhere, which translates into smoother looking noise [2].
For two-dimensional gradient noise, given a 2D point p, the indices of the four surrounding lattice points are first computed [3]:

p = (x, y), \qquad x_{i0} = \lfloor x \rfloor, \quad y_{i0} = \lfloor y \rfloor, \qquad x_{i1} = x_{i0} + 1, \quad y_{i1} = y_{i0} + 1 \tag{2.1}

Given these indices, the gradients can be accessed as g_{00} = gradients(x_{i0}, y_{i0}), g_{10} = gradients(x_{i1}, y_{i0}), g_{01} = gradients(x_{i0}, y_{i1}) and g_{11} = gradients(x_{i1}, y_{i1}). The four vectors from the lattice points to the point p are then created:

t_x = x - \lfloor x \rfloor, \quad t_y = y - \lfloor y \rfloor, \qquad x_0 = t_x, \quad x_1 = t_x - 1, \qquad y_0 = t_y, \quad y_1 = t_y - 1 \tag{2.2}

where x_0, x_1, y_0, y_1 are the vector coordinates. The vectors are then constructed and their scalar products with the gradients are computed and blended:

a = g_{00} \cdot \begin{pmatrix} x_0 \\ y_0 \end{pmatrix} \bigl(1 - f(u)\bigr) + g_{10} \cdot \begin{pmatrix} x_1 \\ y_0 \end{pmatrix} f(u), \qquad
b = g_{01} \cdot \begin{pmatrix} x_0 \\ y_1 \end{pmatrix} \bigl(1 - f(u)\bigr) + g_{11} \cdot \begin{pmatrix} x_1 \\ y_1 \end{pmatrix} f(u) \tag{2.3}

where f, used in equations 2.3, 2.4 and 2.6, is the Smoothstep function seen in equation 2.5. The variables u and v are calculated as:

u = f(t_x), \quad v = f(t_y) \tag{2.4}

f(t) = 6t^5 - 15t^4 + 10t^3 \tag{2.5}

Lastly, the noise value n is calculated by interpolating the two scalars a and b with Smoothstep:

n = a \bigl(1 - f(v)\bigr) + b f(v) \tag{2.6}

For gradient noise in 3D the gradients are three-dimensional and the interpolation must be performed along three axes; otherwise the method is generally the same.
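
To make the procedure concrete, the following is a minimal C++ sketch that transcribes equations 2.1-2.6 directly. The helper gradients(xi, yi) is assumed to return the pre-generated pseudo-random gradient for a lattice point; how the gradients are generated and stored is left out, and the names used here are illustrative rather than taken from the thesis implementation.

#include <cmath>

struct Vec2 { float x, y; };

// Assumed helper: returns the pseudo-random gradient stored at lattice point (xi, yi).
Vec2 gradients(int xi, int yi);

// Quintic smoothstep, equation 2.5.
float smoothstep5(float t) { return t * t * t * (t * (6.0f * t - 15.0f) + 10.0f); }

float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// 2D gradient (Perlin) noise at the point p = (x, y), following equations 2.1-2.6.
float gradientNoise2D(float x, float y)
{
    const int xi0 = (int)std::floor(x);
    const int yi0 = (int)std::floor(y);
    const int xi1 = xi0 + 1, yi1 = yi0 + 1;                    // eq. 2.1

    const float tx = x - std::floor(x), ty = y - std::floor(y);
    const float x0 = tx, x1 = tx - 1.0f;                       // eq. 2.2
    const float y0 = ty, y1 = ty - 1.0f;

    const float u = smoothstep5(tx), v = smoothstep5(ty);      // eq. 2.4

    // Dot products against the four corner gradients, blended along x (eq. 2.3).
    const float a = dot(gradients(xi0, yi0), {x0, y0}) * (1.0f - u)
                  + dot(gradients(xi1, yi0), {x1, y0}) * u;
    const float b = dot(gradients(xi0, yi1), {x0, y1}) * (1.0f - u)
                  + dot(gradients(xi1, yi1), {x1, y1}) * u;

    return a * (1.0f - v) + b * v;                             // eq. 2.6
}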

Simplex Noise
Worth mentioning is that in 2001, Ken Perlin presented an improved version of his famous noise algorithm, Simplex Noise. It has lower computational complexity, supports more than three dimensions and also has less noticeable artifacts. It is however not used for this work, since Perlin Noise was sufficient and to save implementation time [3].


Fractal Brownian Motion


Perlin noise and Simplex noise as-is have their uses, but for the purpose of terrain generation they are lacking; for example, smooth looking noise does not always replicate realistic mountains well. A fractal-like noise would be an improvement, which leads us to Fractal Brownian Motion. This can be achieved by summing the values of multiple iterations (octaves) of noise with varying characteristics: for each octave the frequency is multiplied by a factor (lacunarity) and the amplitude is decreased by a factor (persistence), producing a fractal-like noise. In listing 2.1 a pseudo-code example of how this could be implemented is shown.
const int octaves = 5;
float amplitude = 1.0f;
float persistence = 0.5f;
float frequency = 0.0015f;
float lacunarity = 2.0f;

float sum = 0.0f;
for (int i = 0; i < octaves; i++)
{
    // samplePoint is the position being evaluated; sample the base noise at the
    // current frequency and accumulate it.
    sum += noise(samplePoint * frequency) * amplitude;
    frequency *= lacunarity;   // each octave: higher frequency
    amplitude *= persistence;  // each octave: lower amplitude
}
Listing 2.1: Pseudocode of Fractal Brownian Motion

Depending on the values of octaves, amplitude, persistence, frequency and lacunarity the resulting noise can differ greatly. Figure 2.1 displays the result of Fractal Brownian Motion, which can be better suited for mountain-type noise. By letting lacunarity be 2.0 we match the 1/f rule, giving a fractal-like result. These settings allow countless customizable characteristics to be created and then generated.

(a) Perlin (b) FBM

Figure 2.1: Perlin Noise and the Principle of Fractal Brownian Motion

Cellular Noise (Voronoi Noise)


Voronoi Noise can be applied at different levels of "difficulty". The core idea is to pseudo-randomly scatter feature points throughout 2D or 3D space and then define a scalar function based on these feature points. The function most often takes into account the distances from the sample point to the surrounding feature points. As a simple example, this could be used to classify sample points in relation to the closest feature point, as seen in figure 2.2(a), where three feature points have been pseudo-randomly placed in a grid. By iterating through each sampling point P, which in this case is a grid cell, and calculating the distances (s1, s2, s3) to the nearby feature points, the characteristics of the sampling point can be set. In this example each cell is given the same color as its closest feature point, as can be seen in figure 2.2(b).


Figure 2.2: Feature points in a grid, where cells are to be classified using the fundamentals of
Cellular Noise.
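
As a minimal sketch of the classification step described above, the following assigns every cell of a small grid the index of its nearest feature point; the grid size, names and the squared-distance shortcut are our own illustrative choices.

#include <limits>
#include <vector>

struct Point { float x, y; };

// Classify each cell of a gridSize x gridSize grid by its nearest feature point,
// i.e. the basic building block of cellular (Voronoi) noise.
std::vector<int> classifyCells(const std::vector<Point>& featurePoints, int gridSize)
{
    std::vector<int> cellOwner(gridSize * gridSize, 0);

    for (int y = 0; y < gridSize; ++y)
    {
        for (int x = 0; x < gridSize; ++x)
        {
            const Point p{ x + 0.5f, y + 0.5f };   // sample at the cell centre
            float bestDist = std::numeric_limits<float>::max();

            for (int i = 0; i < (int)featurePoints.size(); ++i)
            {
                const float dx = featurePoints[i].x - p.x;
                const float dy = featurePoints[i].y - p.y;
                const float dist = dx * dx + dy * dy;   // squared distance is enough for comparison
                if (dist < bestDist)
                {
                    bestDist = dist;
                    cellOwner[y * gridSize + x] = i;    // cell takes the index (colour) of the closest point
                }
            }
        }
    }
    return cellOwner;
}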

2.2 Noise modifiers


There are common modifiers that can be applied to noise to give certain results. Turbulence is one, where the absolute value of the generated noise is taken for each octave; this often results in a more "rounded" appearance.
Cutoff is another common modifier, where the noise value is simply thresholded, either from below or from above. The result captures the noise characteristics in either the lower or the upper values, while the rest is clamped to a flat threshold value [2].
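
A sketch of where these modifiers fit into the FBM loop from listing 2.1; the function gradientNoise2D stands in for the underlying noise and the parameter names are illustrative.

#include <algorithm>
#include <cmath>

float gradientNoise2D(float x, float y); // underlying noise from section 2.1

float fbmWithModifiers(float x, float y, bool useTurbulence,
                       bool useCutoff, float cutoffThreshold)
{
    float sum = 0.0f, amplitude = 1.0f, frequency = 0.0015f;
    const float lacunarity = 2.0f, persistence = 0.5f;

    for (int octave = 0; octave < 5; ++octave)
    {
        float n = gradientNoise2D(x * frequency, y * frequency);
        if (useTurbulence)
            n = std::fabs(n);            // turbulence: absolute value of each octave
        sum += n * amplitude;
        frequency *= lacunarity;
        amplitude *= persistence;
    }

    if (useCutoff)
        sum = std::min(sum, cutoffThreshold); // upper cutoff: flatten everything above the threshold
    return sum;
}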

2.3 Pseudo-random value generators

Mersenne Twister
The standard library C++ pseudo-random generator std::mt19937 is known to be efficient and uses the Mersenne Twister algorithm. The name Mersenne comes from the period length of the algorithm, which is chosen to be a Mersenne prime and in most language implementations is set to 2^19937 - 1 [4]. The period length of a pseudo-random generator is the length of the random sequence before it repeats itself. By modern computational standards this is close enough to infinity that the repetition never occurs in normal use cases [5].
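
A minimal example of how std::mt19937 can be seeded and used to draw uniformly distributed values in [-1, 1], as needed for value noise; the seed value is arbitrary.

#include <random>

int main()
{
    std::mt19937 generator(1337u);                            // fixed seed gives a reproducible sequence
    std::uniform_real_distribution<float> dist(-1.0f, 1.0f);  // uniform values in [-1, 1]

    const float sample = dist(generator);                     // one pseudo-random value
    (void)sample;
    return 0;
}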

FRandRange
Inside the framework of UE, FRandRange is a pseudo-random generator function specifically used for generating values of type float. The function does not take a seed as an argument, only the range within which values should be generated; a minimum and a maximum value are therefore sufficient as arguments [6].
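
For example, randomizing the yaw rotation of an asset could look like this (assuming the UE FMath header is available):

// Pseudo-random float in [0, 360], e.g. for randomizing an asset's yaw rotation.
const float RandomYaw = FMath::FRandRange(0.0f, 360.0f);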


2.4 Interpolation/Smoothing Methods

Linear Interpolation
A method for approximating a value given two known points, based on a straight line drawn between said points. Most commonly the interpolation is based on a parameter in the interval [0, 1]. A common C++ implementation can be seen in listing 2.2, where x0 and x1 are the given points and t the parameter, locked to the interval, deciding the interpolation [7].

float lerp(float t, float x0, float x1) {
    return (1 - t) * x0 + t * x1;
}
Listing 2.2: Common C++ implementation of linear interpolation between two floating point values.

Lowpass Gaussian filtering


Lowpass Gaussian filtering is an operation often used to remove noise in images, at the cost of a blurry result. There are two versions of the operation, one-dimensional and two-dimensional, with 2D being the relevant one for this project. The operation is a regular convolution with a kernel constructed using equation 2.7, where x and y are the coordinates within the kernel and σ is the standard deviation of the distribution. The deviation σ is set depending on the desired outcome; the distribution for σ = 1 can be seen in figure 2.3.

G(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}} \tag{2.7}

Figure 2.3: Gaussian distribution in 2D with mean (0,0) and σ = 1.

An example of a kernel of size 5 can be seen in figure 2.4, where σ = 1. Worth noting is that the weights are often normalized by dividing each weight by the total sum, which is 273 in this example [8].


Figure 2.4: Gaussian kernel where σ = 1.
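
A small sketch of how such a kernel can be built and normalized directly from equation 2.7; kernel size and σ are parameters, and the normalization to a sum of 1 plays the same role as dividing the integer kernel in figure 2.4 by 273.

#include <cmath>
#include <vector>

// Build a normalized kernelSize x kernelSize Gaussian kernel from equation 2.7.
std::vector<float> makeGaussianKernel(int kernelSize, float sigma)
{
    std::vector<float> kernel(kernelSize * kernelSize);
    const int half = kernelSize / 2;
    const float twoSigmaSq = 2.0f * sigma * sigma;
    float sum = 0.0f;

    for (int y = -half; y <= half; ++y)
    {
        for (int x = -half; x <= half; ++x)
        {
            const float w = std::exp(-(x * x + y * y) / twoSigmaSq)
                          / (3.14159265f * twoSigmaSq);        // G(x, y) from eq. 2.7
            kernel[(y + half) * kernelSize + (x + half)] = w;
            sum += w;
        }
    }

    for (float& w : kernel)   // normalize so the weights sum to 1
        w /= sum;
    return kernel;
}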

2.5 Catmull-Rom splines


One variant of splines is the Catmull-Rom spline, named after the computer scientists Edwin Catmull and Raphael Rom. The Catmull-Rom spline is a special case of the cardinal spline type.
The matrix form of the Catmull-Rom spline is:

  
P(t) = \tau
\begin{bmatrix} 1 & t & t^2 & t^3 \end{bmatrix}
\begin{bmatrix}
0 & 2 & 0 & 0 \\
-1 & 0 & 1 & 0 \\
2 & -5 & 4 & -1 \\
-1 & 3 & -3 & 1
\end{bmatrix}
\begin{bmatrix} P_0 \\ P_1 \\ P_2 \\ P_3 \end{bmatrix} \tag{2.8}

The value of τ corresponds to the "tension", affecting how much the curve bends between the interpolated control points; τ is typically set to 0.5, which makes the curve appear more relaxed. The tangent at each control point of a Catmull-Rom spline is computed using the previous and next control points on the spline, which gives the spline C1 continuity. There is however a problem with this approach: because it needs both a previous control point P_{i-1} and a next control point P_{i+1} in order to compute the tangent for the current control point P_i, the first and last points of the spline curve cannot be interpolated. It is therefore common to assume or approximate the first and last tangents as the tangent between the point itself and the next or previous point respectively. There is however no correct way to do this, and it is up to the developer to find the best fit. Another attribute of Catmull-Rom splines is that they are capable of forming loops and can naturally self-intersect [9].
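
Below is a minimal sketch of evaluating a point on a Catmull-Rom segment using equation 2.8 expanded into scalar coefficients; FVector is Unreal Engine's vector type, the function name is illustrative, and the full spline class used in this work can be found in appendix A.3.

// Evaluate a point on the Catmull-Rom segment between P1 and P2 (equation 2.8).
// t is in [0, 1] and Tau is the tension, typically 0.5.
FVector EvaluateCatmullRom(const FVector& P0, const FVector& P1,
                           const FVector& P2, const FVector& P3,
                           float t, float Tau = 0.5f)
{
    const float t2 = t * t;
    const float t3 = t2 * t;

    // Coefficients from multiplying [1 t t^2 t^3] with the basis matrix in eq. 2.8.
    const float c0 = -t + 2.0f * t2 - t3;
    const float c1 = 2.0f - 5.0f * t2 + 3.0f * t3;
    const float c2 = t + 4.0f * t2 - 3.0f * t3;
    const float c3 = -t2 + t3;

    return Tau * (c0 * P0 + c1 * P1 + c2 * P2 + c3 * P3);
}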

2.6 Unreal Engine Slate Framework


When creating a UI (short for User Interface), the Unreal Engine Slate UI framework is the provided solution. It can be used to create interfaces for tools and applications within the Unreal Editor, but also for in-game interfaces. For this project the Slate framework was used to create a user interface for a plugin [10].
For those with experience in HTML, the fundamental structure of Slate is similar: a UI is built by nesting multiple elements (widgets) of different functionality. Most widgets can store nested widgets in so-called "child slots"; the number of slots depends on the type of widget.
The typical widgets are for displaying and/or editing data. As can be seen in the code snippet in listing 2.3, SNew is called to construct a new slate widget, and SVerticalBox is a type of widget which only arranges its children and does not display any data of its own. By calling +SVerticalBox::Slot() a new child slot is added, which stores a widget, in this case an SHorizontalBox. Each widget can also be passed individual settings, referred to as FArguments in the documentation. These settings vary from adjusting the padding of the widget to adding a reference to a function to be called when the widget is clicked. The result of the code snippet can be seen in figure 2.5, where the placement of an SComboButton has been defined along with its contents.
ContextualEditingWidget->AddSlot()
.Padding( 2.0f )
[
    SNew( SDetailSection )
    .SectionName("StaticMeshSection")
    .SectionTitle( LOCTEXT("StaticMeshSection", "Static Mesh").ToString() )
    .Content()
    [
        SNew( SVerticalBox )
        + SVerticalBox::Slot()
        .Padding( 3.0f, 1.0f )
        [
            SNew( SHorizontalBox )
            + SHorizontalBox::Slot()
            .Padding( 2.0f )
            [
                SNew( SComboButton )
                .ButtonContent()
                [
                    SNew( STextBlock )
                    .Text( LOCTEXT("BlockingVolumeMenu", "Create Blocking Volume") )
                    .Font( FontInfo )
                ]
                .MenuContent()
                [
                    BlockingVolumeBuilder.MakeWidget()
                ]
            ]
        ]
    ]
];
Listing 2.3: Code snippet of Unreal Engine Slate Framework

Figure 2.5: Example result from slate code snippet.

3 Related work

In order to be able to conduct a comparison analysis, related projects are presented in this chapter. Some of the work is directly implemented in Unreal Engine, making for a fairer comparison, while some of the sources are unrelated to Unreal Engine but highlight interesting and relevant methods for specific applications such as road generation or biotope blending.

3.1 Unreal Engine plugins

SeamScape
A bachelor thesis project from 2016 called SeamScape built a procedural landscape system for Unreal Engine 4 using C++ [11]. The method presented by the authors Ekman et al. uses a combination of Perlin and Worley noise to generate terrain, stored in the traditional heightmap structure. The noise itself is segmented into independent regions, similar to tile partitioning. The reason for combining two noise generation methods is motivated by the difference in characteristics of the resulting noise: according to Ekman et al., Perlin noise is best suited for smoother surface-type terrain, while Worley noise can generate more jagged terrain. To have continuity between tiles when independently computing noise for each tile, a hash function called XXHash was used.
The terrain generation presented by Ekman et al. does not target the dynamic landscape module that exists within Unreal Engine and supports the landscape tools in UE. Instead their application builds on Unreal Engine's interface called Procedural Mesh Component. It is also described that in order to boost performance, a level of detail algorithm is in place that regenerates tiles with higher complexity whenever higher detail is needed for a tile. Level of detail will be abbreviated as LOD.
SeamScape supports river networks, with methods based on the previous work of Kelley et al. and Génevaux et al. [12] [13]. The approach by Ekman et al. works by having points uniformly scattered in the world. Iteratively, two points are randomly selected and a line is created between them, representing the basis of a river segment. To ensure realistic looking river shapes, quadratic Bézier splines are used, guaranteeing continuity between line segments. There are also checks that make sure the rivers do not intersect each other. Once a river network is formed in the shape of quadratic Bézier splines, it is stored in a heightmap separate from the terrain.


The method used in SeamScape supports not only procedural terrain and river networks, but also the vegetation itself. The effect is achieved by using L-systems, which have proven to be effective at producing vegetation assets such as shrubs, flowers, trees and much more [14]. For rocks, however, another approach was taken, as L-systems are not suited for shapes that do not share the fractal behaviour of typical vegetation.
Ekman et al. present their way of distributing the vegetation and rocks, which is dependent on their LOD algorithm. The tiles that divide the world into segments have different LOD states, so that tiles further away from the camera have significantly fewer vertices. Their implementation is based on a previous method by Hamnes which focuses on ecosystems for real-time purposes [15]. Based on four variables the method generates and places different types of species of vegetation or rocks. Ekman et al. use a slightly modified version of Hamnes' variables, which are height, relative height, slope and skew.
Alongside the procedural generation of the world itself, SeamScape also features an interface which allows for some basic alterations to how the landscape is generated. The options are to change the size and the seed of the noise generator, and a button to regenerate the terrain once the previous settings have been modified.

Errant Worlds
A studio named Errant Photon is currently developing a group of three plugins named Errant Worlds, two of which are currently under closed beta testing. The plugins consist of tools designed to help artists create big open worlds by procedural means in Unreal Engine.
The currently available plugin, named Biomes, is responsible for "Procedural placement of foliage and gameplay elements". The second part is Landscape, which brings support for procedural brushes to sculpt the terrain of a landscape. The third part is Paths, which allows for creation of spline-based networks such as roads, tunnels, pipelines and so on.
The Biomes plugin was tested, and its workflow is an integrated system similar to workflows already existing in Unreal Engine: the artist uses a brush tool to paint different biomes onto an existing landscape terrain. This can be seen in figure 3.1, where each color represents a unique biotope. These biomes can then be populated with Species, which are descriptions of how an asset should populate a biome. Multiple settings can be customized to achieve a desired population, for example the minimum spacing between each asset. A numeric Priority can also be entered, which determines the priority order of species in a biome; the artist can thereby for example decide which species will be placed the most. This setting can be expanded further by setting a Priority Radius, which prevents species of lower priority from growing inside an area of the given radius.
How the species are placed is based on an unspecified kind of noise in conjunction with the mentioned species settings. The result can be seen in figure 3.1, where one biome is populated by species with a tree mesh.
The biomes themselves have a couple of settings: the color in which the biome is painted can be changed according to preference, and species can be entered into an array, which will then place them into the painted areas of the terrain. Lastly, Sub Biomes can be created and added; these are similar to regular biomes but instead contest for generation inside a parent biome.
The second part of the plugin group is Landscape, where the beta unfortunately is locked. The described functionality is that it uses Landscape Blueprint Brushes to allow modification of landscapes. The plugin provides a number of ready-to-use brushes: a Noise brush with a configurable noise function, an Erosion brush which simulates erosion and a Stamp brush that allows placement of hills or valleys.
The last part of the group is Paths, which is also locked. It is described as a tool where the artist can define the appearance of an asset, for example a road segment, and together with the spline-based network create a road based on said asset.


Figure 3.1: Screenshot taken of the workflow in Errant Worlds Biomes, different biomes are
painted onto the landscape using a brush.

The network will also be able to (depending on settings) alter the terrain by raising or lowering it, but also remove foliage near the path. A method for pathfinding is supported, where the artist can declare a start and end point, for which the path will find the most cost-effective route depending on customizable parameters [16].

3.2 Procedural biomes


In a paper by Fischer et al. a procedural landscape generation method is presented which focuses on fast, realistic, biome-based landscapes. Their method is divided into a pipeline of four major processes: rough terrain generation, climate simulation, biome refinement and lastly asset placement (which will not be presented here).
In brief, the first step generates rough terrain using Simplex noise with multiple octaves to create typical fractal terrain. It is also possible for the user to customize the noise parameters and assign a height value for the sea level, creating bodies of water wherever the terrain goes below this level. The second step decides where different biome types should be located. Each biome is physically based, meaning wind, temperature, moisture and precipitation all factor into classifying the biome type. Fischer et al. go into detail on how each of the physical properties is calculated. For instance, the temperature can be computed through interpolation methods such as bilinear or sine interpolation, both having pros and cons. The wind calculations are described as a simplified iterative semi-Lagrangian scheme, removing the diffusion and pressure components. The temperature and wind calculations are combined to compute and simulate the precipitation model over the whole terrain. Bodies of water also impact the model, as they are seen as moisture sources; the wind affects how much moisture reaches the surrounding dry land above sea level, based on its direction and strength, and the temperature data directly affects how much evaporation occurs from the moisture. This in turn shapes the precipitation model for the landscape, making it possible to classify biome regions based on the physical attributes.
The approach Fischer et al. have taken to classify biomes is to use a classification diagram introduced in the book Communities and Ecosystems by Robert Whittaker in 1975 [17]. The diagram uses the average temperature relative to the precipitation rate per year to identify the specific biome type, see Figure 3.2. This is modeled as a lookup table within the method presented by Fischer et al.
Once the whole landscape has classified biome regions, the refinement process based on each classified biome region is started. The main concept used by Fischer et al. for achieving a realistic looking result is Digital elevation models, DEM for short. DEMs can be described as a form of height data, most commonly based on topographic maps of locations on Earth [18]. Each classified biome is blended with the specific DEM that correlates to the biome.


This is referred to as an example-based method, meaning the DEM is the example that the base terrain blends itself with to obtain some of the typical characteristics of that biome. According to Fischer et al. this approach has the advantage of obtaining realistic details without the extensive user tuning that is often needed in other traditional methods.
The next step in order to maintain a realistic multi-biome landscape in Fischer et al.'s approach is to blend the biome borders together. The main concept used in their method is simplex-based fractal noise, which breaks up patterns and straight edges between the biome borders.
The last step of the refinement process further improves the biome border blending. This is done by computing what Fischer et al. refer to as biome-based DEM weighting. Essentially the concept is to use a convolution kernel which weighs the DEM data around biome borders, interpolating the differences to create more seamless transitions between borders [19].

Figure 3.2: Whittaker diagram used to classify biome type.

3.3 Procedural roads


The concept of generating roads in a procedurally generated world is not new. However, the methods for creating believable roads and paths can be designed in multiple interesting ways. It is also important to consider that roads can mean different things based on what the end application is: something like a city, where streets and intersections are the goal, typically requires a different approach compared to something more similar to a gravel road in the countryside.

Non-urban road generation


One paper by Marechal et al. presents an approach in which they generate roads in a landscape using a weighted anisotropic shortest path algorithm [20]. This algorithm, in short as explained by Marechal et al., uses cost functions that take into account environmental factors such as terrain slant and obstacles like rivers and mountains. The cost function is meant to be minimized for each road segment, so that the generation ideally produces roads with the shortest distance from a start point to an end point. For the shortest path computation they use the A* algorithm in a discretized grid region. As explained by Marechal et al., this reduces the problem to a shortest-path problem on a finite graph instead of a continuous shortest-path problem. This method is also used for road generation in the paper by Emilien et al., where the goal is to have roads follow a network created by small village-type regions [21]. Because the method relies on the ideally shortest path, it also has to traverse obstacles cleverly; this is handled by incorporating procedurally generated bridges and tunnels when needed.
As presented by Marechal et al., the decision of when to generate bridges and tunnels is based on characteristic functions that consider factors such as vegetation density, water depth and curvature. The characteristic functions can in turn be controlled by transfer functions which threshold the different factors, meaning a bridge or tunnel is generated when it is suitable according to the transfer function threshold.
The authors Mizdal et al. describe in their paper a solution which builds upon the A* shortest-path algorithm that Marechal et al. discussed in their paper. However, Mizdal et al. suggest another cost function, based on the distance in the XY-plane between the two points of interest of where the current road segment can go, combined with the slope of the terrain at the first point and a constant:

\mathrm{Cost} = \mathrm{Dist}_{XY}(p_1, p_2) + \mathrm{Dist}_{XY}(p_1, p_2) \cdot (1 - \alpha) \cdot \beta \tag{3.1}


The constant β determines how much impact the slope should have on the cost. If β = 0 the slope would not be considered at all and the road generation would simply follow the shortest path to the end goal. This is however usually not the way paths and roads would be built if realism is the goal, as the steepness of the terrain in many cases would be very unnatural for a road or path to climb. Something that gives more reasonable results, as seen in the paper by Mizdal et al., is letting β move up towards the value of 10 or higher. This causes the road to prefer going around the steepest terrain points and to bend and curve more naturally around them [22].
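
As a small illustration of equation 3.1, the cost for a candidate segment could be computed as below; here alpha is assumed to be a slope-derived factor in [0, 1] for the first point (the exact definition used by Mizdal et al. is not reproduced here) and beta is the weight discussed above.

#include <cmath>

struct Point3 { float x, y, z; };

// Horizontal (XY-plane) distance between two points.
float DistXY(const Point3& p1, const Point3& p2)
{
    const float dx = p2.x - p1.x;
    const float dy = p2.y - p1.y;
    return std::sqrt(dx * dx + dy * dy);
}

// Equation 3.1: cost of extending the road from p1 to p2.
// alpha: slope-derived factor at p1 (assumption), beta: slope impact weight.
float SegmentCost(const Point3& p1, const Point3& p2, float alpha, float beta)
{
    const float dist = DistXY(p1, p2);
    return dist + dist * (1.0f - alpha) * beta;
}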
A recent and popular example of procedural roads is the web browser application slowroads.io. The project is made by a software engineer named anslo, and the application is capable of generating infinitely large worlds alongside an infinitely long road, shaped and intertwined seamlessly with the terrain. An example of how the roads can generate can be seen in Figure 3.3. The project is still a work in progress and is fully accessible and playable from the web browser, running on JavaScript. The developer has briefly presented the general concepts and key implementation designs of the application in a dev blog. Focusing on the road algorithm, the first step is to choose a starting point, which is only chosen in a region where the terrain is not steep, essentially favoring planar regions. This point marks the first point in the midline of the road, where the midline can be seen as the spine of the road. Then a direction needs to be decided, which is done in an assessment process in which each possible direction is tested by computing the gradient from the heightmap. This allows the road to extend in the best suited direction with minimal steepness. The road extends by 10 metres, and the points between the starting point and end point are stored with information such as gradient, width and curvature. The direction assessment and road extension are then repeated for as long as the road is within the render distance of the vehicle, and started up again as soon as the player moves the vehicle forward and more road needs to be generated. In order to also ensure that the road does not have unnatural and abrupt bumps or edges, a smoothing window with 9 influence points interpolates the surface throughout the road generation. It is also described that the points binding together the midline of the road are constructed as a quadratic Bézier curve, but this only occurs once the road segment is within a shorter distance [23].


Figure 3.3: A screenshot of the game slowroads.io.


Urban road generation


Other approaches to road generation, such as the one presented by Gang et al., focus more on urban environments [24]. Their method differs heavily from the previous examples, as the focus is to simulate streets in modern urban environments. Their approach starts with defining the road elements' center curve and width. The center curve is modelled as a Bézier spline, but the information could be obtained as raw real-life road data, aerial image data, manual marking or completely procedurally. As explained by Gang et al., there are common obstacles when extracting information from data such as aerial images, as objects can occlude parts of the information or shadows can mask important features, to name a few examples. Their method is therefore focused on manual marking to improve the accuracy and visuals of the road generation.
As described by Gang et al., their generation process uses Bézier splines as center curves, and road segments are then created in a 5-step process. This process first takes two adjacent points on the spline and creates a plane between them with a given width. Such a plane is created for each pair of adjacent points on the spline until all points have been iterated through. Then, for each pair of adjacent planes, the intersection point in the XY-plane is computed. The plane vertices are moved to the locations of the intersection points, and lastly all the overlapping vertices are attached, forming the road element.
The method presented by Gang et al. can also handle intersections and generation of sidewalks alongside the roads. To achieve this a 3-step process is presented, in which the general idea is to first find the vertices of intersection between the roads that will form the intersection. Secondly, a smoothing pass makes the roads that connect to the intersection bend and curve more naturally, so that the intersection more closely resembles realistic real-life intersections. The presented smoothing methods are the Nearest polygon expansion strategy and the Control stick intersection strategy. It is not clear which method is used in their paper, but it is hinted that both are used in specific cases, as the methods have flaws and strengths in certain situations. The final step is to generate the sidewalks, which are categorized into two types: one for the road elements and one for intersections. The method for generating sidewalks for the road elements is, as they describe, just a copy of the road element generation algorithm, but with a different visual style, generating next to the road segments. The same procedure applies to the sidewalks in the intersections, as their method essentially uses the same process as when creating the road intersection, applying interpolation to connect the sidewalk in the intersection with the connecting road sidewalk.

4 Method

This chapter details how the plugin was implemented during a 20-week period. The chapter is divided into the five major components of the plugin: landscape generation in the form of noise, biotope interpolation, asset distribution, road generation and a UI to control all the features of the plugin.

4.1 Pre-study
The initial work process started with a literature study of sources with similar goals. The baseline for the literature study was set at a minimum of 10 sources in order to gain information from a broader scope, with some of the sources having more focus on certain aspects than others, giving unique points of view and approaches.
The main divider between the sources was whether their approach was primarily meant for procedural landscapes in the sense of natural landscapes, such as forests, mountains and fields, or focused on generating city landscapes with structured streets and buildings, where the method changes drastically. Having sources for both approaches gives insight into which aspects are more important specifically for generating procedural nature landscapes and which are not, which is more relevant for this thesis, focused on natural landscapes and less on strict urban-type generation.

Unreal Engine 5.0 Source code


The game engine Unreal Engine (UE) underwent a massive update in 2022, affecting many of its core systems. In order to get familiar with the engine as a whole, general research was done before any actual implementation. A decision that had to be made was whether the engine plugin should be implemented against the normal UE official release branch or against the experimental source branch. Using the source branch allows fixes from the official developers to be available instantly instead of being bundled with future official releases that could take months or more to come out. However, the decision was made to use the official release branch in order to avoid potential problems related to experimental features released on source.
Another detail which helped in understanding the source code of the engine was that classes always have a prefix character, most commonly "A" or "U". Whenever a class has the prefix "A" it indicates that the class is supposed to be instantiated as an actor; an actor in UE is a type of object that has a physical representation in the game world. If a class instead is prefixed with "U" it indicates the opposite, something that is not physical in the game world. These types of classes are typically more system oriented, like a point system or a type of storage structure for variables.
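
As a purely illustrative (hypothetical) example of this convention, an actor class and a non-physical settings class might be declared as follows; the usual includes and .generated.h headers are omitted for brevity.

// "A" prefix: an actor, i.e. an object with a physical presence in the world.
UCLASS()
class AProceduralTerrainActor : public AActor
{
    GENERATED_BODY()
};

// "U" prefix: a non-physical object, here a simple settings container.
UCLASS()
class UNoiseSettings : public UObject
{
    GENERATED_BODY()

public:
    UPROPERTY(EditAnywhere)
    float Frequency = 0.0015f;
};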
As the goal was to have a plugin for UE that allows for procedural landscape generation through a user interface, the starting point was to look at the source code of the engine components related to landscapes. In UE there is a dedicated landscape system which allows for features such as deformation of the terrain and world partitioning, ensuring playable performance by segmenting the landscape into tiles which can be loaded in and out.

4.2 Generating a landscape


The first code implementation of the plugin was a class called CreateLandscape containing a function generate(), responsible for creating and importing the landscape into the world. For this the UE landscape class ALandscape was used, which allows a landscape actor to be spawned from an array of height values, i.e. a heightmap. In order to use the ALandscape class the <pluginname>Build.cs file had to be modified to include the Landscape module. The ALandscape class had to be instantiated through another UE class, UWorld, which is the object that holds the information about the game world. Through the UWorld object the function SpawnActor() is called with an ALandscape object, a fixed location and a rotation as input. The landscape will however not show up in the world view just by calling SpawnActor(). The landscape's Import() function must also be called, which takes information about the landscape size, the number of components and their sizes, the height data and the material data. Initially the height data is simply a 505x505 array filled with the value 32768. The 505x505 size follows one of the recommended sizes from UE's documentation for landscapes. The number of components for this recommended size is 63, which was also used and is needed for the world partitioning. Filling the array with the value 32768 places the landscape terrain at the vertical "middle" of the world view in UE, since the height data is stored as uint16 and 32768 is the midpoint of that value range. The Import() function physically spawns the landscape terrain into the game world. At this point the landscape is a completely flat plane consisting of one large tile with 255025 (505x505) vertices. With the landscape being a single large tile, world partitioning was not going to give any performance improvements.
The first iteration of the generate() function can be seen in appendix A.1.
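As a rough illustration of the structure described above, a minimal sketch is shown below. The function signature is an assumption, and the arguments to Import() are deliberately omitted since the exact signature differs between engine versions; the authoritative implementation is the one in appendix A.1.

// Simplified sketch of generate(): spawn an ALandscape and prepare a flat
// 505x505 heightmap at the uint16 midpoint.
void CreateLandscape::generate(UWorld* World)
{
    // Heightmap filled with 32768 so the terrain sits at the vertical "middle".
    TArray<uint16> HeightData;
    HeightData.Init(32768, 505 * 505);

    // Spawn the landscape actor at a fixed location and rotation.
    ALandscape* Landscape = World->SpawnActor<ALandscape>(
        FVector::ZeroVector, FRotator::ZeroRotator);

    // Landscape->Import(...) is then called with the landscape size, the
    // component layout (63 quads per component), HeightData and material data
    // to make the terrain appear in the game world.
}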

Modifying the heightmap


The heightmap is stored in UE's array data structure TArray, which functions similarly to a standard C++ array but has certain optimisations for UE.

Using value noise


In order to see and test how the data could be manipulated, the simplest type of noise, value noise with linear interpolation, was implemented in a separate class ValueNoiseGenerator. The implementation was divided into two functions: GenerateNoiseValues for generating the random values and processCord for interpolating each point. The random values were generated using the C++ standard library pseudo-random generator std::mt19937, which implements the Mersenne twister algorithm.


With this it was possible to fill the height data with value noise and use it for the terrain. The first resulting terrains had the appearance seen in Figure 4.1(a).
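A minimal sketch of this kind of value noise, assuming a square heightmap and a fixed lattice cell size, is given below; the names are illustrative and do not correspond exactly to the ValueNoiseGenerator implementation.

// Value noise sketch: random values on a coarse lattice, bilinearly
// interpolated to every heightmap point.
#include <random>
#include <vector>

std::vector<float> GenerateValueNoise(int size, int cellSize, unsigned seed)
{
    std::mt19937 rng(seed);                                   // Mersenne twister
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);

    // Random values on the lattice (one extra row/column for interpolation).
    const int latticeSize = size / cellSize + 2;
    std::vector<float> lattice(latticeSize * latticeSize);
    for (float& v : lattice) v = dist(rng);

    // Interpolate the lattice for every point in the heightmap.
    std::vector<float> noise(size * size);
    for (int y = 0; y < size; ++y)
        for (int x = 0; x < size; ++x)
        {
            const int x0 = x / cellSize, y0 = y / cellSize;
            const float tx = float(x % cellSize) / cellSize;
            const float ty = float(y % cellSize) / cellSize;
            const float a = lattice[y0 * latticeSize + x0];
            const float b = lattice[y0 * latticeSize + x0 + 1];
            const float c = lattice[(y0 + 1) * latticeSize + x0];
            const float d = lattice[(y0 + 1) * latticeSize + x0 + 1];
            noise[y * size + x] = (a * (1 - tx) + b * tx) * (1 - ty)
                                + (c * (1 - tx) + d * tx) * ty;
        }
    return noise;
}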

Using Perlin noise


Due to the unnatural features value noise produced when applied to a landscape, it was decided that Perlin noise should be implemented as a better alternative. A new class, PerlinNoiseGenerator, was created containing the function generateNoiseVal, which is responsible for bilinearly interpolating the input lattice point P based on the pseudo-randomly generated gradients. The gradient generation was done in another function, generateGradients, which uses the FMath library function FRandRange to randomize the gradient directions. For the Perlin noise, amplitude, frequency, persistence and lacunarity were all implemented as variables that can be changed to affect the resulting noise. The resulting values are then stored in the heightmap and used in the landscape generation. The resulting landscapes from this implementation, with reasonable settings for the noise, looked characteristically like what can be seen in Figure 4.1(b).
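The way these parameters interact can be sketched as a standard fractal Brownian motion loop over Perlin octaves. Here perlin2D() stands in for the gradient interpolation done by generateNoiseVal and is assumed to return values in roughly [-1, 1]; the sketch is not the plugin's exact implementation.

// fBm sketch: sum several octaves of Perlin noise. Each octave's contribution
// is scaled by persistence and its detail frequency by lacunarity.
float perlin2D(float x, float y); // assumed gradient-noise function, ~[-1, 1]

float fractalNoise(float x, float y, int octaves, float amplitude,
                   float frequency, float persistence, float lacunarity)
{
    float value = 0.0f;
    for (int i = 0; i < octaves; ++i)
    {
        value += amplitude * perlin2D(x * frequency, y * frequency);
        amplitude *= persistence; // later octaves contribute less height
        frequency *= lacunarity;  // and add finer detail
    }
    return value;
}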

(a) Value noise generation (b) Perlin noise generation

Figure 4.1: First iterations of landscape generation

Landscape streaming proxies


The tile system in UE does not use the name "tiles"; instead they are called landscape streaming proxies. These streaming proxies could be created through the class ULandscapeSubSystem, which generates the landscape in a similar way as before. It does however use the component variables QuadsPerComponent, set to 63, which indicates the number of quads each component of a tile should have, and ComponentsPerProxy, which tells how many components each landscape streaming proxy (tile) should contain. Following the documentation this was recommended to be set to either 1 or 2, and in this case it was set to 1. This meant that the landscape would be divided into 8x8 streaming proxies that the world partitioning system can utilize. It was also discovered that in order for world partitioning to be used, the world setting "Enable Streaming" for the level in UE had to be set to true. This setting is however only available if World Partitioning is enabled for the project. It is on by default when using the Game template project; if this template is not used it is also possible to do a manual conversion of the project level in order to enable the world partitioning system.
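As a concrete example of how these values relate, assuming the 505x505 preset used above:

// 505x505 vertices -> 504 quads per side. With 63 quads per component and
// 1 component per proxy this gives 504 / 63 = 8 proxies per side, i.e. an
// 8x8 grid of landscape streaming proxies.
const int32 QuadsPerSide       = 505 - 1;
const int32 QuadsPerComponent  = 63;
const int32 ComponentsPerProxy = 1;
const int32 ProxiesPerSide     = QuadsPerSide / (QuadsPerComponent * ComponentsPerProxy); // = 8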

The landscape tile class


While UE already had its own implementation of landscape tiles, the previously mentioned landscape streaming proxies, it was decided that a new overhead class was to be implemented. This was motivated by the added control it allows for future features such as biotopes, roads, cities etc. The class was named UTile.

Each instance of UTile contains a pointer to a landscape streaming proxy, a material for the tile, the biotope type and its corresponding tile index.
It was decided that the overhead tile class should also contain information about the tiles adjacent to it. The main motivation was that this information could be used when interpolating the heightmap between tiles. The adjacent tiles are stored as pointers in a TArray, using 8-adjacency instead of 4-adjacency in order to also store information about the corner neighbours.
Later in the workflow the tile class was also expanded with information about which assets exist within the tile as well as the specific biotope noise that the tile should contain. This information is also stored in TArrays.
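A rough outline of what the UTile class could look like, based on the description above, is sketched below; the member names are illustrative and not taken from the actual implementation.

// Overhead tile class wrapping a landscape streaming proxy (sketch).
UCLASS()
class UTile : public UObject
{
    GENERATED_BODY()
public:
    ALandscapeStreamingProxy* StreamingProxy = nullptr; // underlying engine tile
    UMaterialInterface*       TileMaterial   = nullptr;
    int32                     BiotopeType    = 0;
    int32                     TileIndex      = 0;

    TArray<UTile*>  AdjacentTiles; // 8-adjacency, includes corner neighbours
    TArray<AActor*> TileAssets;    // assets spawned within this tile
    TArray<uint16>  BiotopeNoise;  // biotope-specific noise for this tile
};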

4.3 Interpolation between tiles


When tiles of different biotopes, with noise of different characteristics, meet, they often create an abrupt transition in the terrain mesh. This is seldom a desired feature, but a good solution for the current implementation was hard to find. The chosen direction was to apply a lowpass Gaussian filter on the edges that are adjacent to a tile of another biotope. By applying the Gaussian kernel over an area of the terrain height data overlapping the adjacent tiles' border, the drastic difference between them can be smoothed out. An example of the area that is smoothed can be seen in Figure 4.2. By doing this over multiple passes and varying the size of the area, the difference becomes less apparent, with fewer visible artifacts as a result of the smoothing.

Figure 4.2: Example of the smoothing area of the terrain mesh between two adjacent biotopes. Tiles of one biotope are painted in gray and tiles of another biotope in green. The red area visualizes the smoothed area between them.

The number of passes, the size of the kernel and the value of σ were tweaked to suitable settings by trial and error. The size of the smoothed area was also a question, and the same procedure was applied there: the setting was determined by testing. Some of the tested settings can be seen visually in Figure 4.3, which corresponds to the settings in Table 4.1.


Figure 4.3: Interpolation comparison from a side view. (a) Raw landscape data, no interpolation has been applied; (b)–(f) interpolation applied with the settings listed in Table 4.1.

Figure Size σ Range Passes


(b) 3x3 1.0 30 20
(c) 9x9 1.0 30 20
(d) 3x3 20.0 30 20
(e) 3x3 1.0 3 20
(f) 3x3 1.0 30 10
Table 4.1: Kernel settings for the interpolation used on the landscape seen in Figure 4.3.


The interpolation is done in two steps: one where the edges between different biotopes are smoothed and one where the corners that have been left out are smoothed. When iterating through every other tile in the grid and smoothing the edges between adjacent tiles of different biotopes, corners can be left unsmoothed, as seen in Figure 4.2. This is why a second interpolation step must be performed in which these corners are identified and smoothed.
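A minimal sketch of one such smoothing pass over a vertical biotope border is given below, assuming row-major height data. For brevity only the horizontal pass of a separable Gaussian is shown (a vertical pass follows in the same way), and the kernel size, σ and range correspond to the kinds of parameters listed in Table 4.1; names and signature are assumptions rather than the plugin's actual code.

// One horizontal Gaussian pass over a strip of +-Range columns around a
// vertical biotope border at column BorderX.
void SmoothBorder(TArray<uint16>& HeightData, int32 Width, int32 Rows,
                  int32 BorderX, int32 Range, int32 KernelSize, float Sigma)
{
    const int32 Half = KernelSize / 2;

    // Build a normalised 1D Gaussian kernel.
    TArray<float> Kernel;
    float Sum = 0.f;
    for (int32 i = -Half; i <= Half; ++i)
    {
        const float W = FMath::Exp(-(i * i) / (2.f * Sigma * Sigma));
        Kernel.Add(W);
        Sum += W;
    }
    for (float& W : Kernel) W /= Sum;

    // Filter from a copy so earlier writes do not influence later reads.
    const TArray<uint16> Source = HeightData;
    const int32 MinX = FMath::Max(BorderX - Range, Half);
    const int32 MaxX = FMath::Min(BorderX + Range, Width - Half - 1);

    for (int32 Y = 0; Y < Rows; ++Y)
        for (int32 X = MinX; X <= MaxX; ++X)
        {
            float Filtered = 0.f;
            for (int32 K = -Half; K <= Half; ++K)
                Filtered += Kernel[K + Half] * Source[Y * Width + (X + K)];
            HeightData[Y * Width + X] = static_cast<uint16>(Filtered);
        }
}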

4.4 Procedural asset spawning


To fulfill the idea of a procedurally generated world, a way of procedurally spawning objects based on the biotope type was implemented. The implementation started as a limited approach with three specific predetermined biomes, but was later expanded to be completely dynamic and customizable by the user.

Specific-biome asset algorithms


The class responsible for handling the asset spawning logic was named ProceduralAssetDistribution. The class has a struct member Triangle which contains three vertex points, a centroid point and a triangle normal. The idea with the triangle struct was to be able to identify a triangle region in the heightmap and approximate the ground slope from the centroid point. This could in theory be used to adjust the angle of objects so that their placement follows the terrain contour naturally.
The first iteration of ProceduralAssetDistribution worked by randomizing a single arbitrary point in every tile, meaning only one asset could spawn per tile. The random selection of the position was done through the FRandRange function with the tile bounds as limits. With this method it was possible to test the Triangle struct to see if the placement of the objects could be manipulated to correctly align with the ground slope using the slope approximation.
The slope approximation was done by averaging the Z-values of the three corner vertex points of the triangle and computing the resulting normal vector. The computed normal vector could then be used to create an FQuat structure, which is the UE equivalent of a quaternion for 3D rotation. By using the FQuat structure with the computed angle it was possible to spawn objects with the correct ground slope.
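A sketch of this idea, assuming the three vertices of the Triangle struct are known, could look like the following; it is an illustration of the technique rather than the exact implementation.

// Align a spawned object to the ground slope: compute the face normal of the
// terrain triangle and build a rotation from the world up-vector to it.
FQuat AlignToSlope(const FVector& V0, const FVector& V1, const FVector& V2)
{
    const FVector Normal = FVector::CrossProduct(V1 - V0, V2 - V0).GetSafeNormal();
    // Make sure the normal points upwards before building the rotation.
    const FVector Up = Normal.Z < 0.f ? -Normal : Normal;
    return FQuat::FindBetweenNormals(FVector::UpVector, Up);
}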
Having objects correctly follow the terrain angle did improve the natural look of the landscape, but having only one object spawn per tile leaves a lot to be desired. Therefore, instead of spawning one asset per tile, the next step was to allow a variable number of assets to spawn per tile. It did however become clear that when spawning a large number of assets randomly in a tile, the chance of assets spawning too close to or within each other became high and problematic. It was also considered that for some types of objects, such as foliage and vegetation assets, this effect could be desired. Because of this it was decided that different functions for each biome should be implemented. At this point three biomes were chosen as defaults: mountains, plains and city.
The function responsible for asset spawning in plains was constructed in a way that allowed assets to spawn without considering intersection. The idea was that plains would be built up by vegetation such as grass, bushes, flowers and the occasional tree, which are all object types for which intersection was not deemed an issue. The plains function was also constructed to only allow objects to spawn as long as the slope is less than 30°, using the Triangle struct computation. Alongside this, a random rotation between 0° and 360° and a random scaling between 50 % and 150 % of the original object scale were also applied to the object. The motivation was that this would add more variety to the environment through randomness. It was determined that the plains biotope should contain at least a tree type object, a rock object and a grass type object. These objects are spawned based on simple dice logic where a random number between 1 and 6 is drawn. If 5 or 6 is drawn a tree or a rock is spawned, while any other number results in grass being spawned at that position.


Each object that is spawned is added to the UTile member array tileAssets.
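A sketch of the dice logic described above is shown below. The mesh variables are placeholders, and how the choice between the tree and the rock is made on a high roll is an assumption.

// Dice roll deciding which plains asset to spawn at the current position:
// a roll of 5 or 6 gives a tree or a rock, anything else gives grass.
const int32 Roll = FMath::RandRange(1, 6);
UStaticMesh* ChosenMesh = nullptr;
if (Roll >= 5)
{
    ChosenMesh = FMath::RandBool() ? TreeMesh : RockMesh; // placeholder meshes
}
else
{
    ChosenMesh = GrassMesh;
}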
The function related to the mountain biotope was implemented with essentially the same structure as the plains function, but with only a dead tree object and a rock object. The slope tolerance, which decides the angle at which objects can be spawned, was instead set to 60°.
Spawning assets in the city biotope was however done with more modifications; specifically, intersection between objects was deemed a problem since houses were going to be the main object type. In order to achieve separation between the objects, each object's size in box format, after the scaling has been applied, is saved to a vector. The term box format refers to the simplified collision box that UE creates when an object is created. This vector is then used to compute two corner points, the top left and bottom right of the box. An auxiliary function called intersecting was also created which checks whether two collision boxes overlap. This function is repeatedly called inside a loop which iterates through all previously placed houses in the tile. For each previous house its top left and bottom right corner points are computed in the same manner as for the current house and passed as arguments to the intersecting function. In order for the house to spawn without colliding with other buildings, the intersecting function has to return false. If the intersection check returns true, the house is not added to the tileAssets array. Instead the object is stored in a member array of the ProceduralAssetDistribution class called culledAssets, which stores all objects that are deemed to collide with other objects. The main reason for storing the objects that are not meant to spawn in the world was debugging. With this array it was possible to still spawn the culled objects and differentiate them by color, visualising the effect the intersection check has on the placement. An example of this is shown in Figure 4.4.
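The core of such an intersecting() check can be sketched as a standard axis-aligned box overlap test in the XY plane. The corner convention (top left holding the minimum X and Y) is an assumption here.

// Two axis-aligned boxes overlap unless one lies entirely to one side of the other.
bool Intersecting(const FVector2D& TopLeftA, const FVector2D& BottomRightA,
                  const FVector2D& TopLeftB, const FVector2D& BottomRightB)
{
    if (BottomRightA.X < TopLeftB.X || BottomRightB.X < TopLeftA.X) return false; // separated in X
    if (BottomRightA.Y < TopLeftB.Y || BottomRightB.Y < TopLeftA.Y) return false; // separated in Y
    return true;
}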

Figure 4.4: Visualisation of culled objects based on intersection check between previously
placed objects. The brown cubes are objects that have been placed and the red cubes are
objects that have been culled.

Dynamic-biome asset algorithm


The logic for spawning assets through random positioning and random parameters with some control thresholds was now finished in its simplest form. However, it was realised that the potential of this approach was very limited by the fixed number of biomes. It was therefore decided that a new function was to be implemented, intended to replace the existing three biome-specific ones and to be functional for any number of biomes. The function was named spawnAssets and takes a struct called biomeAssets belonging to the ProceduralAssetDistribution class as an argument. The BiomeAssets struct contains the biotope type; alongside this it contains another struct, biomeAssetSettings, which associates the objects with all the settings and parameters for spawning said objects.


The main structure of the function spawnAssets starts by iterating through all the tiles in the landscape. For each tile the type of biotope is identified and its corresponding object settings are fetched through the biomeAssetSettings struct. Each object type within the biotope is then spawned as many times as the variable assetCount in the biomeAssetSettings specifies, which acts as the outer loop for each object. The function then shares much of the logic of the older three biome-specific functions, such as the random position, random rotation and scaling, and using the Triangle struct to align the object correctly to the terrain.
The spawnAssets function also checks the booleans noCollide and considerRoad from the biomeAssetSettings. These booleans play a big part in the function as they drastically change the behaviour of the asset placement. If noCollide is set to true, the existing function intersecting is called and culls objects that intersect existing ones. If considerRoad is true, another function roadConsiderCheck is called which only allows objects to be spawned at least a customizable distance away from potential roads in the landscape. Naturally, if both are true, both functions are called, checking intersection first followed by road consideration. Both booleans can of course also be set to false, which means the only criterion for the object is its allowed slope threshold.
Full implementation details can be seen in appendix A.2.
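A rough outline of the per-asset settings that spawnAssets consumes, with field names following the description above, could look as follows; the authoritative definition is the one in appendix A.2.

// Hypothetical sketch of the settings associated with each asset in a biotope.
struct FBiomeAssetSettings
{
    UStaticMesh* Mesh = nullptr;
    int32 AssetCount = 0;        // how many instances to attempt per tile
    float SlopeThreshold = 30.f; // maximum allowed ground slope in degrees
    float ScaleVariance = 0.5f;  // random scale range around the original size
    bool bNoCollide = false;     // cull instances that intersect earlier ones
    bool bConsiderRoad = false;  // keep a minimum distance to generated roads
};

struct FBiomeAssets
{
    int32 BiotopeType = 0;
    TArray<FBiomeAssetSettings> AssetSettings;
};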

4.5 Procedural road generation


An interesting procedural aspect of generating landscapes is binding different locations to one another. Typically this is done through roads or paths. In the road implementation in this work, Catmull-Rom splines were used to construct continuous road segments between control points, which in turn shape roads in the environment.

The Catmull-Rom spline implementation


One of the first steps was to implement the Catmull-Rom spline class, named CRSpline. The class defines the control points of the spline as a struct which stores the position and the length of the spline segment extending from the current control point to the next. The CRSpline class contains a handful of functions to set up and create the spline, as well as the tension value and the total length of the whole spline; the control points are stored in an array points. For full details see Appendix A.3.
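The segment evaluation that GetSplinePoint performs can be sketched with the standard (uniform) Catmull-Rom basis; the tension handling of the actual CRSpline class is omitted here.

// Point on a Catmull-Rom segment between P1 and P2 for t in [0, 1];
// P0 and P3 shape the tangents at the segment end points.
FVector CatmullRom(const FVector& P0, const FVector& P1,
                   const FVector& P2, const FVector& P3, float t)
{
    const float t2 = t * t;
    const float t3 = t2 * t;
    return 0.5f * ((2.f * P1)
                 + (-P0 + P2) * t
                 + (2.f * P0 - 5.f * P1 + 4.f * P2 - P3) * t2
                 + (-P0 + 3.f * P1 - 3.f * P2 + P3) * t3);
}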
Apart from the computational functions to create the spline, a function visualizeSpline was implemented to graphically represent the spline. Without it, only the computations would be done and gauging the result would have been tedious before any actual road mesh or material was applied to the landscape. The function visualizeSpline spawns objects, in this case cube meshes, to represent both the control points and the intermediate segment points of the spline. These were differentiated by color and size: the control points were larger red cubes and the intermediate points were smaller blue cubes. The placement of the red cubes simply uses each computed control point stored in points, while the blue cubes are placed using the location from the GetSplinePoint function, which returns an intermediate point along the spline segment. One detail specific to UE when placing the cubes to visualize the spline is that the Z-coordinate has to be scaled by a factor of 100/128. The reason is how UE calculates coordinates, where this factor is present in the computations for the Z-coordinate in the source code. An example of how the visualizeSpline debugging looked can be seen in Figure 4.5.


Figure 4.5: Visualisation of the road spline

The road class


Similar to the logic of the UTile class which acts as an overhead class for the Landscape stream-
ing proxies, a road class was implemented to contain the information for each road. The infor-
mation that each instance of a road holds is an array splinePaths which stores all the Catmull-
Rom splines that the road contains, as well as the road width. Within the road class, the first
attempt at a road generation function generateRoad was implemented.

Road algorithm: Random roads


The algorithm implemented in generateRoad was intended to test the functionality of the Catmull-Rom splines as well as the visualization done through visualizeSpline. This led to the algorithm relying on complete randomness rather than heuristics, in favour of ease of implementation. The algorithm starts by randomly selecting a tile somewhere in the landscape, and within that tile two 2D coordinates (XY) are randomly selected using FRandRange. Two coordinates rather than one have to be selected since the first one acts as a tangent for the whole Catmull-Rom spline. Once these two control points are added to the spline, one of the 8-adjacent tiles is selected at random. In the selected adjacent tile one random point is selected and used as a control point. This keeps happening until a maximum road length, measured in tiles, is reached. Once the maximum length is reached one random point in the last visited tile is selected and used as the end tangent of the spline. The spline is then added to the splinePaths array so that it can be visualized through visualizeSpline. An example of how this random generation can look is seen in Figure 4.6. The code structure of the algorithm can be seen in Listing 4.1.
Figure 4.6: Visualisation of the random road algorithm

Road::generateRoad() {
    tileIndex = // Select a landscape tile through RandRange
    XYcoords = // Get XY coords of the tile's origin point

    Spline.addControlPoint = // Add a random tangent point within the selected tile
    Spline.addControlPoint = // Add a random control point within the selected tile

    while (maxRoadTiles > 0) {
        // Select a random adjacent tile
        if (adjacent tile exists)
            tileIndex = // update tile index to the selected adjacent tile
            XYcoords = // update XY coords to the new adjacent tile
            Spline.addControlPoint = // Add a random control point in this tile
            // Decrement maxRoadTiles
    }

    Spline.addControlPoint = // Add a random tangent point in the last tile
    // Add the full spline to the splinePaths array
}
Listing 4.1: Pseudo code for the first road generation algorithm

Road algorithm: Heuristics based roads v.1


Since having no logic for avoiding steep terrain and obstacles generally produces purposeless and unnatural roads, it was decided that a more robust road algorithm was needed. To make the algorithm more robust and natural, the idea was to add conditions such as identifying sudden rises or dips in the terrain to avoid unnaturally steep roads, as well as other criteria such as not visiting the same tile more than once to avoid back and forth behaviour.
The algorithm was implemented as a member function generateRoadSmart in the CreateLandscape class rather than in the road class. The motivation was that the CreateLandscape class has access to the heightdata array, which is needed for identifying terrain elevation changes.
The structure of generateRoadSmart starts out similarly to the previous iteration, by randomly selecting any tile in the landscape and two random control points within that tile, with the first point acting as the start tangent. Before selecting any more tiles, the height (Z-coordinate) of the landscape at the first control point is saved. The next step is to find a suitable adjacent tile that avoids too much steepness as well as going back to a tile already visited. To achieve this an array visitedTiles holds the indexes of all tiles within which control points have already been selected, and this array is checked each time a random adjacent tile is selected. If the random adjacent tile already exists within the array, the attempt counts as a failure and a new adjacent tile is selected. In order to not create situations where the algorithm gets stuck in an infinite loop, a variable called adjTries is initiated to a maximum number of attempts and decremented for each attempt. If this variable is reduced to zero no further attempts are made and the road generation is deemed a failure.
The same principle is also used for the selection of a control point in each new tile, where a variable randomPointTries is initiated to a maximum value and decremented each time a control point does not fulfill the steepness criterion. For the steepness condition, the algorithm uses the CRSpline member function GetSplinePoint to compute the intermediate spline point coordinates between the control points. For each spline point the terrain height at that coordinate is fetched from the heightdata array and used to compute the height difference relative to the current control point's height value. By having a variable threshold initiated to a maximum allowed slope value, the algorithm can compare it to the computed height difference and evaluate whether the path is becoming too steep. If the computed height difference surpasses the threshold value, the selected control point is removed and another attempt is made. If the path created by the spline points does not create slopes that exceed the threshold value, the new control point's Z-value is kept and used as the current height value. Both conditional control variables randomPointTries and AdjTries are reset to their initial values for the next road segment, while the maximum road length variable maxRoadTiles is decremented. The algorithm is then repeated until either one of the conditional control variables reaches zero, meaning the road generation failed, or the maximum number of road tiles has been reached. If the maximum number of road tiles is reached the road generation is classified as a success and the road is added to the roads array. The general code structure of the algorithm can be seen in Listing 4.2.
A visual illustration can be seen in Figure 4.7, where the red circle represents the current control point, which stores its height value. The blue circles represent control points that can be randomly chosen in the adjacent tiles, as long as the adjacent tile does not exist in the visitedTiles array. The black dots represent the intermediate spline points against which the current control point compares its height. In this illustration the control point with maximum height h2 is tested first, but the value of h2 surpasses the threshold value and the point is discarded. Another control point, with maximum height h1, is then chosen, which in this theoretical example is under the threshold value, and the road path iterates to the point with height h1. This is repeated until the generation either succeeds or fails.

Figure 4.7: Visualisation of height criterion of the road algorithm v1 in its most simple form

Road::generateRoadSmart() {
    tileIndex = // Select a landscape tile through RandRange
    XYcoords = // Get XY coords of the tile's origin point

    Spline.addControlPoint = // Add a random tangent point within the selected tile
    Spline.addControlPoint = // Add a random control point within the selected tile
    // Store the height of the first control point

    // Initiate all control variables [maxRoadTiles, AdjTries, randomPointTries, threshold]
    // Initiate control array [visitedTiles(currentTileIndex)]

    while (maxRoadTiles && AdjTries != zero) {
        // Select a random adjacent tile
        if (adjacent tile exists && randomPointTries != zero)
            // Decrement randomPointTries
            // Save the old tile index
            // Update new tile index to the selected adjacent tile
            XYcoords = // update XY coords to the new adjacent tile
            Spline.addControlPoint = // Add a random control point in this tile
            // Compute new length after adding control point

            for (all intermediate spline points) {
                // Compute height difference from control point to next intermediate spline point
                if (height difference > threshold) {
                    // Part of spline segment classified as too steep
                    // Remove the last added control point
                    // Assign tile index back to the old index
                    // Break the loop
                }
            }

            if (no part of spline is classified as too steep) {
                // Add tile index to visitedTiles array
                // Store height of the last added control point
                // Reset randomPointTries and AdjTries
                // Decrement maxRoadTiles
            }
    }

    // Add the full spline to the road array
}
Listing 4.2: Pseudo code for version 1 of a heuristics based road generation algorithm

Road algorithm: Heuristics based roads v.2


While the first iteration of generateRoadSmart improved the road algorithm by avoiding steepness and some situations where the road would ping-pong back and forth, it still lacked in many aspects. By analyzing the road generation results it was clear that the main issues had to do with the steepness condition as well as having no control over where the road starts and ends. One major flaw of the steepness condition was that it only looks at one control point at a time, comparing it to all the intermediate points in the segment. Another issue was that the condition only checks the slope in one direction, which led to cases where a road could generate on mountainsides as long as the slope, checked only in the forward direction, is under the slope threshold. It was therefore decided that a new function, generateRoadSmartV2, was to be implemented with the same skeleton as the previous iteration but with additions and improvements to make the generation more controllable and natural.
Instead of randomizing a tile where the road starts, the function takes both a start point and an end point, which replaces the fully random method using FRandRange. From the start tile a random tangent is still selected, while the first control point uses the input values from the user as its position. A new boolean control variable regardDist was also added, intended as a way of ensuring that the road prioritizes selecting new control points that get closer to the end point. Another new addition in this algorithm is a TMap, which is the UE equivalent of a hash map. The map was named candidates, as its purpose is to contain potential control points that the road could lead to, based on factors such as slope and how much closer the new points get to the end point.
The main loop of the algorithm keeps going as long as a user-initiated variable Tries is greater than zero and the last added control point is not yet at the end position. The next step of the algorithm is to iterate through all adjacent tiles and randomize a position for the potential new control point within the current adjacent tile. This logic was done in an auxiliary function GetCandidates, which also manages all control logic such as checking vertical and horizontal slope relative to the road direction. In comparison to the previous algorithm generateRoadSmart, the height comparison uses both the black comparison points for vertical height and the yellow dots as horizontal height comparison points, as illustrated in Figure 4.8. In theory this eliminates the case where the road could generate on a surface that slopes horizontally relative to the road direction.

Figure 4.8: Visualisation of height criterion of the road algorithm v2 in its most simple form

If a control point generates a spline segment that fulfills all the control checks, the point is saved to the candidates map and then removed from the spline segment. This process is done for all adjacent tiles of the current tile, ideally filling up the candidates map with multiple potential control points to choose from. If however no candidates can be found and the number of attempts reaches half the value of Tries, the regardDist boolean is toggled to false. This changes the logic of the GetCandidates control checks, which when regardDist is false removes the condition that only accepts points closer to the end point. This in theory allows more control points to be considered as a valid path when the road generation struggles, and more importantly allows the road to curve around obstacles even if it means the road does not get closer to the end point for a number of spline segments. An illustration of the candidate logic can be seen in Figure 4.9.


Figure 4.9: Visualisation of candidate selection from the start point. Orange indicates too much distance to the end point and purple indicates too much slope, meaning that these points will not be considered as candidates, while yellow segments are considered as candidates.


Whenever one or more candidates have been selected, a loop runs which iterates through all the candidates and compares their distance to the end point. Whichever candidate has the lowest distance to the end point is re-added as the new control point in the spline. After this the tile index is updated to the tile the control point was added in, the candidates map is wiped clean and the attempts variable Tries is reset to its initial value for the next segment. A road generated using this candidate approach is seen in Figure 4.10, using the same structure as seen in Figure 4.9.
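This selection step can be sketched as a simple minimum-distance search over the candidates map; the types and names below are assumptions made for illustration.

// Pick the candidate control point closest to the road's end point.
FVector BestPoint = FVector::ZeroVector;
float   BestDist  = TNumericLimits<float>::Max();
int32   BestTile  = INDEX_NONE;

for (const TPair<int32, FVector>& Candidate : Candidates) // tile index -> point
{
    const float Dist = FVector::Dist(Candidate.Value, EndPoint);
    if (Dist < BestDist)
    {
        BestDist  = Dist;
        BestPoint = Candidate.Value;
        BestTile  = Candidate.Key;
    }
}
// BestPoint is re-added as the next control point, the tile index is set to
// BestTile, Candidates is cleared and Tries is reset for the next segment.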

Figure 4.10: Visualisation of road after candidate selection and end point is reached, success-
fully creating a road. In comparison to Figure 4.9 it can be seen that the first candidate chosen
was the yellow segment to the right.


Another check was also added to the algorithm, which uses a control boolean goBack to identify when the road has potentially entered an impasse and can only go back to an already visited tile. If this occurs the algorithm simply removes control points and traverses back until a new tile that has not been visited is available as an adjacent tile. In theory this makes the road generation able to handle U-shaped enclosures, which would be guaranteed to fail if the end point is behind such an enclosure. Testing of this can be seen in Figure 4.11.

Figure 4.11: Smart road generation with and without the goBack control logic. (a) Without impasse handling the road never reaches the end point; (b) with impasse handling the road removes segments that lead to an impasse and finds an alternative route.

Road algorithm: Full manual


The simplest road algorithm to implement was the one that simply follows the control points inputted by the user. This algorithm takes as input all the user-selected control points from the UI preview window and uses the CRSpline structure to bind the control points with spline segments. The algorithm purposefully does not have any conditional checks, as the intention is for the user to have extensive control over where the road is generated. The only aspect of this algorithm the user cannot control is that the tangents are randomized, which creates slight differences in curvature in each segment. In theory this makes the resulting road still have a hint of procedural variation, with small curvature differences each time the same road path is generated.

4.6 User Interface


The focal point of the UI was to make it as functional as possible, allowing for a high degree of customization and versatility. The UI was implemented using the Unreal Engine Slate framework, which has good coverage of the desired functionality. The documentation is however lacking in some areas, which forced us to look at how the engine UI itself is implemented using Slate.
The UI was structured in two parts, one for terrain creation and one for asset distribution. These parts were separated into two different tabs (SDockTab), which divides the functionality and only exposes the user to one of the parts if desired.
For terrain creation, the user should be able to create noise with properties that can be customized as desired. Perlin noise is the only type of noise that is currently supported, but it has quite a few common properties that can be tweaked to regulate the characteristics of the noise. Since these settings define the entire terrain, they are listed vertically in an SVerticalBox as SNumericEntryBox widgets on the left side of the tab, together with descriptive STextBlock text. Figure 4.12 displays the settings, where each slate element is highlighted with a white border.

Figure 4.12: UI noise settings, where slate widgets are highlighted with a white border.

It is also necessary for the user to be able to change said settings individually for each biotope. The slate widget SComboBox suits this well because it creates a button which, when clicked, generates a drop-down menu. By binding a function to the FArgument responsible for selection changes, the display of the selected biotope's settings can be switched. Most widgets have different ways to bind a function to their properties; for this implementation .OnSelectionChanged_Lambda() was used, whose input parameter is a lambda function. This is handy when the function to be called is only used for this specific widget, so there is no need to define it elsewhere.
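A minimal sketch of this pattern is shown below. The option source, the captured members and their names are assumptions made for illustration, not the plugin's actual widget code.

// Combo box whose selection change is handled by a lambda bound via
// OnSelectionChanged_Lambda; BiotopeNames and SelectedBiotope are assumed
// members of the surrounding tab widget.
SNew(SComboBox<TSharedPtr<FString>>)
    .OptionsSource(&BiotopeNames)
    .OnGenerateWidget_Lambda([](TSharedPtr<FString> Item)
    {
        return SNew(STextBlock).Text(FText::FromString(*Item));
    })
    .OnSelectionChanged_Lambda([this](TSharedPtr<FString> NewSelection, ESelectInfo::Type)
    {
        SelectedBiotope = *NewSelection; // swap the displayed noise settings
    })
    [
        SNew(STextBlock).Text_Lambda([this]()
        {
            return FText::FromString(SelectedBiotope);
        })
    ];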
To further increase customization, a 2D preview of the landscape to be generated is displayed in the center of the UI. With the Slate widget SOverlay, which allows widgets to be stacked on top of each other, SImage widgets containing textures can be stacked, forming the preview. This can be seen in figure 4.13(a). By creating a UE UTexture2D of the heightmap and assigning it to an SImage, a preview of the noise is displayed. This method is extended by layering more textures: one to draw out the tiles, one to display biotopes in different colors and one for road control points. By layering textures on top of each other they can be individually toggled on and off.
The preview window was then nested inside a Slate SBorder widget, which, among other features, allows mouse input to be read inside the window. The hierarchy of the slate widgets can be seen in figure 4.13(b), where the SOverlay widget contains multiple SImage widgets. This was utilized to allow the user to "paint" tiles with desired biotopes, making the placement of biotopes fully customizable if desired.
The same technique was also utilized to allow the user to place coordinates for roads. These coordinates are then drawn onto an additional texture layer as described previously, where each individual road's coordinates are drawn in an assigned color.


(a) The visual product (b) The slate hierarchy

Figure 4.13: The 2D preview together with road placement, settings and listing.

By storing each 2D visualization in a separate texture, one for noise, one for the grid, one for biotopes and one for road points, each can easily be rendered or not without the need to create new textures. By simply excluding a visualization through the SCheckBox widget and calling for re-rendering of the widget, the user can toggle what to display.
The roads are then displayed in a list using the slate widget SListView[25], as can be seen in figure 4.13. The SListView widget has functionality which allows selection of list elements to be made and read. By limiting the number of selected elements to one and adding a delete button in each row element, a selected object can be deleted by pressing said button.
The other part of the UI is the tab for asset distribution. This section is responsible for allowing the user to pick assets to be distributed into the landscape with the desired settings. Since biotopes are an essential part of this application, it is necessary to allow the user to adjust assets specifically for each biotope. This was solved by using a Slate widget SComboBox to set a state for which biotope the asset settings belong to. The tab also contains an SListView widget, seen in figure 4.14, which lists all assets that are to be distributed onto the selected biotope. In figure 4.14(b) the hierarchy of the slate widgets is displayed; it can be seen that the SListView generates an STableRow for each element to be displayed.
In order to add an asset to be distributed, the user can pick a pre-existing asset in an SObjectPropertyEntryBox widget, which, depending on implementation settings, lists all available assets fitting the chosen filter. For this application it is set to show objects of class UStaticMesh::StaticClass().
When an asset has been selected, the user should be able to tweak its distribution settings according to preference. These settings are modifiable using the previously mentioned Slate widgets SNumericEntryBox and SCheckBox.


Figure 4.14: UI listing of assets to be distributed in a biotope, together with a display of the hierarchy of the slate widgets. (a) The listing of assets to be distributed in a biotope; (b) the slate hierarchy.

Worth mentioning is that these, like most widgets suited for user input, have parameters that can limit the input values, a useful feature that was used for all widgets in this implementation.
Another rather natural feature is the possibility to modify or delete an already added asset. This was done by creating an intermediate distribution asset when an asset is selected in the SListView, copying the selected asset's settings. These settings are in turn connected to the settings section, making the values editable.

5 Results

The aim of this work was to give a proposition for a tool to assist in landscape creation. In this section the results of the solution are presented, highlighting its versatility and its flaws.

5.1 Landscape generation


In this section the landscape generation module is presented by a showcase of generated
landscapes using the plugin.

Singular biotope landscapes


The landscape generation module supports landscapes consisting of only one biotope. This can be achieved using the manual voronoi or full manual biotope mode and only placing one type of biotope when creating the terrain layout in the preview window. In Figure 5.1 two landscapes with different biotope noise settings, using the manual voronoi biotope mode placed in the center of the grid, can be seen. The exact noise settings used to create the landscapes can be seen in Table 5.1 and Table 5.2.

(a) "Plains" terrain generation (b) "Mountains" terrain generation

Figure 5.1: Landscapes generated as ALandscape actors using the plugin with different settings; 5.1(a) shows a "plains" type biotope while 5.1(b) shows a "mountain" type biotope.


Seed Octaves Amplitude Persistence Frequency Lacunarity Modifier


0 3 0,25 0,5 0,015 1,0 None
Table 5.1: Noise settings for the "plains" biome used in generation for the landscape seen in
Figure 5.1(a).

Seed Octaves Amplitude Persistence Frequency Lacunarity Modifier


0 12 7,0 0,5 0,0015 2,0 Turbulence
Table 5.2: Noise settings for the "mountains" biome used in generation for the landscape seen
in Figure 5.1(b).

Multiple biotopes landscapes


The landscape generation module supports landscapes consisting of multiple biotopes. This can be done by using random voronoi, manual voronoi or full manual mode in the UI preview window.
In figure 5.2 the UI preview together with the final result is displayed. The landscape consists of 4 different biotopes with different settings. A large portion of the terrain consists of a mountain-like biotope, visualized in pink as can be seen in figure 5.2(a). By using the Cutoff modifier and picking a suitable seed, a mountain can be generated, shown in both the preview and the 3D image. The settings for each biome are listed in table 5.3, where the first row is the settings for the mountain biotope, the second row is the settings for the valley, the third is the settings for the "hills" marked in blue in figure 5.2(a) and the last row is the settings for the plains biotope, displayed in the upper left corner in light green.

(a) 2D preview (b) 3D Terrain

Figure 5.2: Landscape generated using 4 different biotopes placed both manually and with manual voronoi. The biotopes have similar noise settings but differ in cutoff and inverted cutoff.

The possibility to pick the seed for the noise generation individually for each biotope makes it possible to match the noise of adjacent biomes. It also helps capture a specific characteristic part of a noise. The result of picking seeds that complement each other can be seen in figure 5.3, together with the corresponding settings in table 5.4, where the first row is the mountain, the second row is the intense middle part and the third row is the settings for the valley.


Seed Octaves Amplitude Persistence Frequency Lacunarity Modifier


1 12 7,0 0,5 0,0015 2,0 Cutoff
-461531 12 7,0 0,5 0,0015 2,0 inv Cutoff
-661366 4 1,700001 0,5 0,0019 2,0 none
0 3 0,25 0,5 0,015 1,0 none
Table 5.3: Noise settings for multiple biotopes

(a) 2D preview (b) 3D Terrain

Figure 5.3: Landscape generated using 3 different biotopes, using cutoff, turbulence and pick-
ing seeds which matches adjacent biomes reasonably well.

Seed Octaves Amplitude Persistence Frequency Lacunarity Modifier


1 12 7,0 0,5 0,0015 2,0 Cutoff
-461531 12 7,0 0,5 0,0015 2,0 inv Cutoff
-661366 14 5,0 0,71 0,0009 2,0 none
Table 5.4: Noise settings for multiple biotopes


In figure 5.4 an example of how different settings of lacunarity alter the characteristics of the terrain is shown; the settings can be seen in table 5.5. A lower value results in a smoother-looking terrain while higher values give a more coarse-looking terrain.

(a) 2D preview (b) 3D Terrain

Figure 5.4: Landscape generated using 4 different biotopes, comparing different results based on the set lacunarity, which can be seen in table 5.5. The settings are listed top to bottom, relating to the colors in the 2D preview in clockwise rotation starting with the yellow-marked biome.

Seed Octaves Amplitude Persistence Frequency Lacunarity Modifier


0 5 1,0 0,5 0,015 1,0 none
0 5 1,0 0,5 0,015 0,5 none
0 5 1,0 0,5 0,015 1,5 none
0 5 1,0 0,4 0,015 2,0 none
Table 5.5: Noise settings for multiple biotopes with different Lacunarity


5.2 Tile grid


Each generated landscape uses UE's landscape streaming proxy structure to partition the landscape into a tile grid. In Figure 5.5 three of the seven landscape size presets that the plugin supports can be seen with the tile grid highlighted. The red box is placed manually and serves only as a reference for scale.

(a) 505x505, 1 component per proxy (b) 1009x1009, 1 component per proxy

(c) 2017x2017, 2 components per proxy

Figure 5.5: 3 preset landscape sizes segmented into the tile grid using landscape streaming
proxies


5.3 Interpolation between biome terrain data


The resulting effects of the interpolation have already been partially presented in section 5.1; in this section the interpolation results are looked at in more detail.
In Figure 5.6 a landscape with two biotopes is shown with and without filtering applied to the biome edges. The biotope settings are the same as the ones listed in tables 5.1 and 5.2, using the kernel settings seen in table 4.1, row (b).

Figure 5.6: Interpolation comparison from a top-down view. (a) Landscape with two biotopes, no interpolation applied between the two; (b) interpolation applied between the two.

Another example can be seen in Figure 5.7, where the biotopes that are adjacent to each other are more similar than in the extreme case seen in Figure 4.3. The biotope settings for this example can be seen in table 5.6.

Figure 5.7: Interpolation comparison between two similar biotopes. (a) Landscape with two biotopes, no interpolation applied between the two; (b) interpolation applied between the two.

Seed Octaves Amplitude Persistence Frequency Lacunarity Modifier


-996032 5 0,8 0,5 0,0095 1,0 None
0 3 0,17 0,5 0,015 1,0 None
Table 5.6: Noise settings for the two adjacent biotopes used in Figure 5.7, first row correspond
to the middle biotope and second row the surrounding biotope.


The interpolation does suffer from visual artifacts, some of which can be seen in Figure
5.8.

Figure 5.8: Visual artifacts caused by the Gaussian lowpass filtering used to blend biotope edges. (a) The filtering results in a smoothing effect which eliminates the biotope's coarseness characteristic at the biome edge; (b) the filtering causes unwanted line edges due to the maximum kernel operation range.

At the last stage of the implementation of the plugin, the issue with corner tiles having a jagged appearance when interpolation was applied, as seen in Figure 4.3, was fixed. The improved interpolation results can be seen in Figure 5.9.

(a) Corner interpolation bug (b) Corner interpolation fixed

Figure 5.9: Jagged corners, caused by a bug in the interpolation implementation, being fixed.


5.4 Asset Distribution


Figure 5.10 is an example of how assets can be distributed depending on biotope. The figure displays four different biotopes that have been painted onto the landscape, each with a unique asset placed with no regard to terrain characteristics, only dependent on the biotope type.

Figure 5.10: Image of four different biotopes placed in each corner of the map, each assigned a unique asset to be distributed.

The result of using the avoid road setting for asset distribution can be seen in figure 5.11,
where one biome is populated by a red box asset set to avoid two placed roads.

Figure 5.11: An image showing the functionality of asset distribution avoiding a road.

Another setting for asset distribution is whether or not to consider collision between assets. In figure 5.12 trees populate a biome with no regard to collision, which leaves open the possibility of intersecting assets. Using the avoid collision feature and its corresponding sparseness variable prevents intersection, but also allows the user to alter the sparseness of the asset, as seen in figure 5.13, which shows the difference between not considering collision and taking it into account in conjunction with a high sparseness value of 10.


Figure 5.12: If no regard to collision is taken, assets might intersect.

(a) Collision Off (b) Collision On

Figure 5.13: The difference between not considering collision and using collision with a high sparseness value.

The angle tolerance limits assets to only be placed at positions in the terrain where the slope is below the set threshold; the result can be seen in figure 5.14. By setting a low threshold, the trees are only placed on the peaks and in the valleys where the slope is smallest.

Figure 5.14: Result of setting a low threshold for the terrain angle at which assets can be placed.

The distribution allows assets to be randomly scaled depending on user preferences. In figure 5.15 multiple trees populate a biome; by setting the scale variance to 0.5 the asset is allowed to vary from its original size down to half its size, giving the result displayed in the figure.

Assets are also rotated by varying amounts, as can be seen in figure 5.15 where the trees face different directions.

Figure 5.15: Asset rotation and scale can be randomized.

Figure 5.16 shows the result of a relatively high-density placement of grass assets. As seen, the shape and outline of the tile become apparent.

Figure 5.16: The result of populating a biotope tile with a large amount of grass assets.


5.5 Procedural roads


There are two modes for road generation in the plugin, manual and smart generation. Both
can be selected via the UI by the user.

Manual road generation


To use the manual generation the user selects the "manual road" toggle button in the UI and places as many control points as desired inside the preview window. The road will form and bend between the user-inputted control points and deform the terrain based on the road settings. It is possible for the user to adjust these road settings in the form of road width, slope threshold and deformation strength in the UI. The threshold value is however not utilized in manual road generation. Manual road generation can be seen in practice in Figures 5.17 and 5.18. In Figure 5.17(a) the manual road has been plotted out by user input, where the road starts at the bottom-left-most control point and ends at the topmost control point.

Figure 5.17: Manual road generation in perspective view. (a) Preview window with manual road control points; (b) landscape with the manual road generated within it, width = 10, strength = 3; (c) the same manual road but with two "mountain" biotope tiles added as obstacles to showcase the manual road behaviour.

As can be seen in Figure 5.17(c), the manual road generation does not consider slope as a criterion, giving the user both control and responsibility.
In Figure 5.18 another terrain layout is used to showcase the possibility for the user to manually select control points in a way that avoids obstacles.


Figure 5.18: Manual road generation, top-down view. (a) Preview window where the manual road control points have been mapped out by user input; (b) landscape with the manual road generated within it, width = 10, strength = 3.

Smart road generation


With the smart road mode the user only marks the start and the end of the road; the rest of the control points are procedurally placed and shape the road according to the environment.
In Figures 5.19(a) and 5.19(b) the same layout and start and end points as seen in Figure 5.17(a) are used with the smart road generation to highlight the differences between the methods.
If obstacles are present in the direct trajectory between the start and end point, the results of the smart mode become varied due to the procedural algorithm, as seen in Figure 5.20.
In Figure 5.21 the same start and end points are used but with different road settings, showcasing how they impact the road generation.
It is also possible to generate a road mask texture which can be manually applied to the material of the landscape. This can be seen in Figure 5.22(b).
If the smart road's start and end points are placed in such a way that the end is unreachable with a given threshold, the algorithm runs until a set number of attempts has been made and then stops, and the attempt is declared a failed road generation. This means that the road does not generate in the world at all, even if parts of it could have been generated.


Figure 5.19: Smart road generation using the same terrain and start/end points as in Figure 5.17(b). (a) Smart road generation 1, width = 10, threshold = 600, strength = 3; (b) smart road generation 2, width = 10, threshold = 600, strength = 3; (c) smart road generation with start and end point mapped out by the user.

Figure 5.20: Smart road generation in the same map layout seen in Figure 5.18. (a) Smart road generation 1, width = 10, threshold = 600, strength = 3; (b) smart road generation 2, width = 10, threshold = 600, strength = 3.


Figure 5.21: Smart road generation with different road settings. (a) Smart road generation, width = 10, threshold = 600, strength = 3; (b) smart road generation, width = 10, threshold = 3500, strength = 3; (c) landscape with road generated within it, width = 10, threshold = 600, strength = 6; (d) landscape with road generated within it, width = 20, threshold = 600, strength = 3.

Figure 5.22: Smart road generation and the road mask material applied to the landscape. (a) Top-down view of smart road generation; (b) top-down view of the smart road mask.


5.6 User interface


The user interface consists of two tabs, the first for creating biotopes, roads and related settings. In figure 5.23 the final result can be seen. The layout is split into three columns, which are numbered in the figure. Column 1 contains all settings associated with biotope creation: biotope selection, creation, deletion, noise settings and terrain size. The second column contains the 2D preview together with the biotope placement features. Under the preview are the road creation settings as well as the road listing. The third column contains actions such as refreshing the preview, creating or deleting the landscape and populating the terrain with assets or removing them.
The second tab, seen in figure 5.24, is for setting up asset distribution for biotopes; its layout consists of two major columns. The first option in column 1 lets the user set which biotope all other settings apply to. Assets to be distributed are listed in this column, where they are selectable and removable. Column 2 lists all possible distribution settings for an asset to be added or modified.

Figure 5.23: UI tab for biotope creation and terrain generation.

In figure 5.25 the first tab can be seen in use, where the 2D preview of the biome placement is displayed together with placed road points in the preview window, as well as the listing of roads underneath.


Figure 5.24: UI Asset distribution.

Figure 5.25: UI Biotope creation tab in use, with biotopes and roads placed for generation.

6 Comparisons with other work

In order to evaluate the presented solution, a comparison to similar applications presented in chapter 3, Related Work, is made. The comparison covers multiple areas: terrain generation, asset distribution and path generation.

6.1 Comparison to SeamScape


A major difference between their solution and ours is that Procedural Worlds is aimed at being a tool for artists, while theirs leans more towards creating a finished environment. Something to take away is SeamScape's method of combining different noises to generate terrain. This method has given them good-looking results; how well it would translate into an exposed UI for an artist remains to be tested. It is something that we considered but did not explore due to priorities.
The solutions differ in how a terrain is created in Unreal Engine. SeamScape's solution is to use a Procedural Mesh Component, while Procedural Worlds uses UE landscape proxies. By using landscape proxies the internal terrain building tools of UE can be used to further modify the terrain, making the solution seamless in terms of an artist's workflow. Another advantage of using landscape proxies is that the engine's built-in partitioning system is supported. SeamScape's approach made it necessary to implement a custom partitioning system in order to preserve performance. They do however create procedural foliage such as trees, rocks and grass; if they had added support for artists to customize such assets according to preference, it would have made their solution more applicable to a variety of styles.

6.2 Comparison to Errant Worlds


Not all parts of the plugin have been open for testing, which makes a fair comparison on all aspects difficult. However, their method for asset placement was open, which can be discussed a bit more in detail.
A major difference between their method and ours is that they use UE's intended way of placing foliage, converting static meshes to static mesh foliage. The main difference is that foliage supports instancing, which is essential for performance when many instances of the same asset populate a terrain.


In regard to placement of the assets, our solution also differs a bit; their solution is to "paint" their biotopes onto the world with a brush and then populate said area with selected assets. Our method is a tile system, painted using a 2D preview. Their hand-painted method gives the artist more flexibility than ours, but does take more time. We do support a setting that limits where an asset can be placed based on terrain inclination, which if using their tool must be painted as two different biotopes.
When it comes to terrain generation, their tool could not be tested, but they mention a set of brushes which enables the artist to "stamp" modifications onto an already created terrain. How this translates into practice is hard to tell, but it seems to be a complementary tool and not necessarily a method to create a base landscape. Our solution on the other hand does create an entire landscape, but does not give the artist any tools for after-processing in the form of brushes.
Last is their method of creating paths, which could also not be tested. Both their tool and ours modify terrain based on user parameters, and they also mention support for automatic generation of roads based on a start and end point, as our tool does. Based on previews, their method is a post process, where a path is placed onto an already created landscape by the artist. Our tool modifies terrain data before generating it. Their tool, on the other hand, besides modifying terrain also supports asset placement in such a way that, given a path, a road can be generated from a mesh together with desired assets.

6.3 Comparison to Fischer et al


They present a method for generating terrain consisting of different biotopes, similar to ours. Their solution is heavily based on climate simulation in order to create biomes for terrain generation. The characteristics of these biotopes are dependent on wind, temperature and moisture, which differs from our solution quite a bit, where biotopes are purely based on user-determined noise characteristics. By classifying the type of biotope each region has, these regions are then blended with a respective DEM to achieve different characteristics. As we understand it, this would cause repetition unless a different DEM is used for each biome to be generated. Our solution then has the advantage of not being based on predefined data and therefore does not limit variation to a fixed set of DEMs.

6.4 Comparison to Slowroads


Both our roads and the roads of Slowroads are tailored for the countryside, making them fair to compare. A difference worth mentioning is that for Slowroads, the noise characteristic used for the terrain is known, meaning that the road path algorithm can be adapted to work specifically for that type of terrain, while our algorithm has to take into account that the terrain can differ greatly depending on user settings. Another big design difference is the fact that our plugin is focused on generating roads within a landscape with a predetermined size. For Slowroads the algorithm runs during runtime and procedurally generates a continuous road in parallel with the vehicle moving forward in the world.
One of the aspects that share the same strategy in both our plugin and the Slowroads application is how the slope criterion is checked. Slowroads checks a radial area around the origin point and chooses the best path to take according to slope, both longitudinally and laterally. The method used in our plugin is a simplified version of this, the main difference being that the number of potential paths is limited to the eight adjacent tiles instead of the unspecified number of potential paths Slowroads checks for every point.
Regarding the results of Slowroads compared to our plugin, what can be said objectively is that the fact that the Slowroads algorithm can run at runtime says a lot about its optimization being better in comparison to the algorithm in our plugin.

7 Discussion

The produced results of the work are discussed and compared to similar applications; due to the limited number of similar applications the evaluation is somewhat incomplete. Potential improvements are discussed, and decisions made in regard to methods are considered and evaluated.

7.1 Results
The results from the implementation of the plugin have been shown to be on par with expectations set by the theory in some areas, and worse in others, where the theory chosen to apply to the problem should most likely have been reconsidered.

Unreal Engine integration


Going in the order of implementation, the system created for this plugin to generate landscapes fully utilizes the UE modules in the form of the ALandscape and Landscape streaming proxy structures. This is something that was deemed important to have in the plugin, as it makes it possible for the tools to be integrated with the engine instead of existing as a separate component which cannot interact with other systems. In our opinion this makes the plugin more attractive and usable for developers that are willing to use UE as their platform. Also, when choosing a game engine to implement the plugin in, it was motivated that the plugin should at a bare minimum be able to interact with existing systems in UE, as otherwise the choice to use a complex game engine would be redundant in many ways.
The area in which the tool does not support the workflow and integration of Unreal is the foliage structure which UE uses for large amounts of vegetation assets. The resulting tool can only place static meshes, which are not compatible with the foliage tool or the optimisation that comes with it.

Landscape Generation
In terms of artistic freedom the plugin delivers fairly well in our opinion. Different landscapes can be generated with varying features, where the user is given the possibility to tweak settings freely. Figure 5.1 is a good example of the range of features that can be reached.


The possibility to add multiple biotopes to a landscape expands the ability to create more interesting environments. In figure 5.3 the combination of biotopes created by a wide array of features gives, in our opinion, a good foundation to create a credible landscape.
Based on our own tests, a feature that we feel is missing is the possibility to offset noise. By allowing this, the user could move biotope terrain features as desired. The current solution is to try different seeds until a sufficient placement of characteristics is found.

Tiles
One of the major problem areas highlighted in the results section is the tiling and interpolation of the landscape. The tiles are objectively huge when considering them as biotope separators, making the edges easily visible between biotopes. This is not as apparent when two similar biotopes are adjacent, and one could even argue that in those cases the method produces visibly natural results, as seen in Figure 5.7. What becomes an apparent problem is when the biotopes adjacent to each other are of vastly different character in terms of noise, as seen in for example Figure 5.6 in the results section.

Biotope Interpolation
Regarding the interpolation method, which ended up being a version of lowpass Gaussian filtering, it can be argued to be both fitting and unfitting for the problem of biome blending. Through our testing the method can handle certain situations reasonably well, but in others the result is, in subjective terms, simply unsatisfactory from a visual standpoint. An argument could be made that it is not only subjectively worse, but in a way also objectively worse when compared to other software's results. Looking at the example of Errant Worlds, which offers seamless visual biotope blending when looking specifically at the ground terrain: in comparison to Procedural Worlds, unless a blocky terrain layout is desired, the results produced by Errant Worlds could be considered objectively better at generating ground terrain for natural landscapes trying to mimic realistic characteristics of earthly terrain.
It is also clear from the results that the interpolation method suffers from visual artifacts. These artifacts appear more or less depending on factors such as kernel settings and the characteristics of the area to be interpolated. If a landscape has been procedurally generated with multiple biotopes all having noise data characterised as smooth, the interpolation results can usually be deemed visually satisfactory for blending the biotope edge. It is also worth mentioning that some of the images shown in the results section have been taken with material applied to the biotope tiles, exacerbating the biotope edges, like the example in Figure 5.10. Another way in which our interpolation method prefers biotopes with smooth characteristics is that it avoids eliminating coarseness features in biotopes that have that type of noise, like in the example seen in Figure 5.8(a) in the results section. But this is more a limiting factor than a feature of the interpolation method, as in order to interpolate well the biotopes should be smooth, which in a lot of cases is not wanted. Therefore it is a compromise that contradicts the intention of the plugin's goal, which was to not limit the user's freedom.
Among the artifacts, one of the more typical is the one displayed in Figure 5.8(b) in the results section. This is an unwanted effect caused specifically by the kernel used when interpolating. Because the kernel interpolates an area, the edges of that area can become visible in many situations, and how apparent this visual artifact is also depends on the σ value.
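To make the roles of the kernel and σ concrete, the following is a minimal sketch of the kind of lowpass Gaussian filtering applied to a strip of heightmap values across a biotope edge. It is a simplified, one-dimensional illustration written against Unreal's math utilities, not the exact implementation used in the plugin.

// Sketch: 1D Gaussian lowpass filtering of heightmap values across a biotope edge.
// A larger Sigma widens the kernel and blends the edge more aggressively, but also
// makes the border of the filtered region easier to spot, as discussed above.
TArray<float> GaussianBlendEdge(const TArray<float>& Heights, float Sigma)
{
	const int32 Radius = FMath::CeilToInt(3.0f * Sigma);
	TArray<float> Result;
	Result.SetNum(Heights.Num());

	for (int32 i = 0; i < Heights.Num(); ++i)
	{
		float Sum = 0.0f;
		float WeightSum = 0.0f;
		for (int32 k = -Radius; k <= Radius; ++k)
		{
			const int32 j = i + k;
			if (j < 0 || j >= Heights.Num())
			{
				continue; // samples outside the filtered area are skipped
			}
			const float Weight = FMath::Exp(-(k * k) / (2.0f * Sigma * Sigma));
			Sum += Weight * Heights[j];
			WeightSum += Weight;
		}
		Result[i] = Sum / WeightSum;
	}
	return Result;
}

In two dimensions the same weighting is applied over a square kernel, which is where the visible edges of the interpolated area come from: samples just outside the area never contribute.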
It is also worth acknowledging another visible issue, which occurs when a tile acts as a biotope corner. The result should be a rounded corner after the interpolation, but instead the result looks jagged, like what can be seen in Figure 4.3 when interpolation is applied. This was caused by a bug which wasn't fixed until the very end of the thesis work, which is why all the result images showcasing interpolation have this defect except for Figure 5.9(b).


Asset Distribution
In terms of artistic freedom, our method for populating biotopes with assets was satisfactory for a handful of purposes in our opinion. One feature that we feel is missing, however, is the ability to set an asset's priority for being placed in a biotope. The current implementation lets the user set whether or not an asset should check collision with other assets, but the order in which assets are placed in the biotope is based on the order in which they are added to the list in the UI. This means that if the first asset to be added is set to have many instances, the second asset added has by default a smaller chance to be placed, provided the assets are set to consider collision.
Noteworthy is that the placement of assets is locked to the tile size of the biotopes. As long as the density of placed assets is fairly low this is not as apparent, but for dense foliage such as grass it clearly reveals the grid system, as can be seen in figure 5.16. Smaller tiles would improve this, but in comparison to placing grass manually with the UE foliage paint tool, our implementation cannot beat it aesthetically.

Procedural Roads
The two road generation algorithms in the final plugin are capable of generating both roads that adapt to the environment, in the form of the "smart road" algorithm, and roads that follow user input for more control, in the form of "manual roads". Looking at the results that the manual road can produce, it could be argued that the need for any other method might be redundant, as it is capable of avoiding and bending around obstacles and slopes, given that the user places control points in a way that complements the terrain. However, one major research area of this thesis work was to identify whether semi-automatic procedural road generation could reduce the thought process and developer input while still producing results that could rival a manually inputted road. This is not a simple task to analyze objectively, as no user evaluation has been performed in this thesis work. However, it is possible to say, to an extent and based on our own testing, that the procedure to create a manual road shaped exactly the way the user wants and sees fit for the terrain is fairly simple and doesn't require more than a few mouse inputs and some basic thought. The difference in comparison to the smart road generation is that the latter only requires two mouse inputs, in the form of a start and end point, and the algorithm takes care of the rest procedurally by avoiding steep terrain to reach the end point. However, the control over how the road traverses the intermediate terrain is no longer in the user's hands, which can in most cases be thought of as a disadvantage.
However, one of the positive aspects of the smart road generation is that it fulfills some of the procedural fantasy by having different results generated with the same start and end point in the same terrain. This could be argued to make the outcome more varied, which could be utilized for automatic level generation, for example in a game where this effect is desired.
One point worth acknowledging regarding the smart road results, seen in for example Figure 5.21, is that the algorithm does not find the fastest or most efficient way to the end point. The algorithm was purposefully constructed in a way that in some ways mimics "brute force" behaviour, by simply trying all possible adjacent tiles and preferring the most ideal out of all candidates. However, the candidates, as explained in the method chapter, are all randomized and only the terrain slope is checked to choose the best candidate, so the ideal point is almost never going to be randomly selected as a candidate point. This is in turn what makes the smart road generation appear procedural and random while still always reaching the end point without traversing over obstacles, which was exactly the intention of the algorithm.
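As an illustration of this kind of step, the following is a simplified sketch of a slope-aware, pseudo-random walker that scores adjacent candidate tiles by slope and by how much closer they bring the road to the goal, with a random jitter so the same start and end points can yield different roads. GetHeight is a hypothetical heightmap accessor, the scoring weights are arbitrary, and the sketch is not the implementation used in the plugin.

// Sketch: one step of a slope-aware, pseudo-random road walker.
// GetHeight(X, Y) is a hypothetical accessor into the terrain heightmap.
FIntPoint PickNextRoadTile(const FIntPoint& Current, const FIntPoint& Goal,
	float SlopeThreshold, FRandomStream& Stream)
{
	// The eight adjacent tiles around the current position
	const FIntPoint Offsets[8] = {
		{1, 0}, {1, 1}, {0, 1}, {-1, 1}, {-1, 0}, {-1, -1}, {0, -1}, {1, -1}
	};

	FIntPoint Best = Current;
	float BestScore = TNumericLimits<float>::Max();

	for (const FIntPoint& Offset : Offsets)
	{
		const FIntPoint Candidate = Current + Offset;
		const float Slope = FMath::Abs(GetHeight(Candidate.X, Candidate.Y) -
			GetHeight(Current.X, Current.Y));
		if (Slope > SlopeThreshold)
		{
			continue; // too steep, treat the tile as an obstacle
		}
		// Prefer flat candidates that move towards the goal, plus a random jitter
		// so the same start/end points can still produce different roads.
		const float DistanceToGoal = FVector2D(Goal - Candidate).Size();
		const float Score = Slope + DistanceToGoal + Stream.FRandRange(0.0f, 1.0f);
		if (Score < BestScore)
		{
			BestScore = Score;
			Best = Candidate;
		}
	}
	return Best;
}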
Regarding the road strength parameter in the road settings, it can be seen in Figure 5.21 that changing the road strength parameter has little impact on the visual result. This is unfortunately a side effect of how the parameter was poorly integrated with the algorithm and not fixed before the implementation phase stopped. The intended effect of increasing the road strength parameter was that the road becomes more aggressively interpolated, making the terrain flatter where the road is being generated. This in turn could have been used to differentiate roads that try to mimic, for example, country roads where people, wagons or cars have pressed down the ground over years of use, from small rarely used trails, beyond just changing the road width.
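The intended behaviour can be illustrated with a small sketch: the terrain height under the road is pulled towards the road's own height, with the strength parameter controlling how far it is pulled. This describes how the parameter was meant to work, not how it currently behaves in the plugin, and the mapping from strength to blend factor is an arbitrary choice for the example.

// Sketch: intended effect of the road strength parameter.
// Strength = 0 leaves the terrain untouched, larger values flatten it towards the road height.
float FlattenUnderRoad(float TerrainHeight, float RoadHeight, float Strength)
{
	// Map the strength setting to a blend factor in [0, 1]; the divisor is arbitrary here
	const float Alpha = FMath::Clamp(Strength / 10.0f, 0.0f, 1.0f);
	return FMath::Lerp(TerrainHeight, RoadHeight, Alpha);
}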

User Interface
The interface was not a central part of this thesis work; it was however a necessity for testing our implementations. Implementing our own UI allowed us to save time that would otherwise have been spent on compiling small changes. And since the purpose of the work was aimed at being a tool, many user input variations needed to be tested, which the UI simplifies. It is functional, but whether it is user-friendly is debatable.

7.2 Method

Landscape Generation
As is, the tool for creating a biotope has many settings that can be altered by the user. However, something that we chose not to include is the possibility to set parameters on a per-octave basis. This creates a limitation on the range of characteristics that can be achieved. That is something that could be added, but it is a question of how many features a user should be exposed to without cluttering the interface.
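As a rough sketch of what per-octave control could look like, the snippet below sums a set of octaves whose frequency and amplitude are set individually, instead of being derived from single gain and lacunarity values. perlin2D is a hypothetical 2D Perlin noise function and the struct layout is not taken from the plugin.

// Sketch: fractal Brownian motion where amplitude and frequency are set per octave.
// perlin2D() is a hypothetical 2D Perlin noise function returning values in [-1, 1].
struct OctaveSettings
{
	float Frequency;
	float Amplitude;
};

float FbmPerOctave(float X, float Y, const TArray<OctaveSettings>& Octaves)
{
	float Sum = 0.0f;
	float AmplitudeSum = 0.0f;
	for (const OctaveSettings& Octave : Octaves)
	{
		Sum += Octave.Amplitude * perlin2D(X * Octave.Frequency, Y * Octave.Frequency);
		AmplitudeSum += Octave.Amplitude;
	}
	// Normalize so the result stays in [-1, 1] regardless of the octave count
	return AmplitudeSum > 0.0f ? Sum / AmplitudeSum : 0.0f;
}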
Something that could be explored more is other types of noise. Different noise could be better suited for different types of biotopes, and supporting more types could broaden the artistic freedom further. The thought of combining different types of noise has also been brought up, but how this could be brought into a tool that is useful for an artist remains to be tested.

Tiles
Whether the tile size or the interpolation method used is the main cause of the unsatisfactory visual appeal can be argued. If an alternative to the current tile size is considered, then simply reducing the tile size used for the biotopes would minimize the visual effect of hard edges, however not remove them. It is also worth mentioning that the idea of making the tiles much smaller was discussed late in the implementation phase. However, most of the tile and landscape foundation was already laid, built upon the structure that each tile contains biotope noise specific to that proxy, and that the placement area for assets belonging to a specific biotope tile is the same size as a landscape streaming proxy. This in turn limited the options, as the only way to reduce the tile size would have been to redo and modify many aspects of the implementation up to that point, reducing the ambition for other features. That was in essence the motivation for not attempting to fix the problem by reducing the tile size. Another motivation was the hope that a dedicated interpolation method would be able to visually fix the most glaring problems even with such large tiles.
In hindsight it could be argued that time should have been allocated to refurbish the tile system, preferably in such a way that the tile size is even modifiable by the user. This would have opened up the plugin quite a bit, making many of the visual issues that are seen in the plugin now much less of a problem. However, our opinion still leans towards the decision that was made to leave it be, the reason being that it in essence has nothing to do with the research questions directly, but more with the visual appeal.


Biotope Interpolation
The strategy taken when implementing interpolation between biotopes was mostly founded on our own intuition, not so much by choice but rather due to the lack of high-quality sources regarding noise interpolation in this context. Due to the limited sources, it was decided that the simplest method, linear interpolation, was to be implemented first. It might be debatable whether this interpolation method even should have been implemented, as the result was bound to be mediocre at best due to its simplicity. However, at the time one aspect of the thesis work was to hopefully have a number of interpolation strategies available to compare and analyze in terms of biotope interpolation. This would have allowed more research regarding something that was quite unique, based on the limited sources that we found on the topic of biotope blending.
Once linear interpolation was implemented, the result was, as expected, very much mediocre. The next interpolation method, Gaussian lowpass filtering, required more time and fine-tuning to implement than expected. Most of this was due to the fact that the source used [8] only considers the main usage of Gaussian lowpass filtering, which is for images, where some of the logic could be applied directly to the heightmap data that follows the same 2D structure. However, many decisions, such as how the kernel should be applied in corners versus sides and what kernel settings to use in order to get the best results, took more trial and error than expected. It was also during this implementation process that it was realised that the settings used for the Gaussian lowpass filtering cannot be ideal for every situation. With the procedural nature of the landscapes it becomes impossible with this approach to handle all the cases of varying biotope noise which are customized by the user. All this led to the decision that no more interpolation methods were going to be implemented, in favor of time for other features, but also from the acknowledgement that it would require much more than a common interpolation strategy to generate good results due to the extensive randomness of procedural biotope noise.
In hindsight it is an interesting question whether this issue could have been looked at in much more detail, essentially shifting the thesis focus towards identifying possible solutions to the problem of visual biotope interpolation. The method could have covered more interpolation methods and attempted to combine different methods to adapt to different situations, or taken aspects from the physically based approach seen in the paper by Fischer et al. to enhance the interpolation results [19]. There could even be consideration of taking strategies from other fields such as AI/deep learning and seeing if the problem could be applied to the methods there.
One of the approaches that was discussed involved a way to circumvent the issue while still using the Gaussian lowpass filtering method: when a landscape with multiple biotopes is created, the adjacent tiles would be converted to "intermediate tiles". In this case every intermediate tile would be converted to a combination of the biotope noises that meet. In theory this could have set up the interpolation with more favourable situations. The clearest example which this method could have avoided, to an extent, is where abrupt edges between biomes such as mountain regions and plains meet, in which the interpolation strategy clearly fails visually, as seen in Figure 4.3.
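A minimal sketch of the idea, where an intermediate tile samples both neighbouring biotope noises and blends them with a weight that runs across the tile, is shown below. The noise functions are hypothetical placeholders and this was never part of the implemented plugin.

// Sketch: height of an intermediate tile as a weighted blend of the two biotope noises
// that meet there. BlendAxis is the normalized position across the tile, from the side
// facing biotope A (0) to the side facing biotope B (1).
float IntermediateTileHeight(float X, float Y, float BlendAxis,
	TFunctionRef<float(float, float)> BiotopeNoiseA,
	TFunctionRef<float(float, float)> BiotopeNoiseB)
{
	// Smoothstep gives a softer transition than a plain linear blend
	const float Weight = FMath::SmoothStep(0.0f, 1.0f, BlendAxis);
	return FMath::Lerp(BiotopeNoiseA(X, Y), BiotopeNoiseB(X, Y), Weight);
}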
Unfortunately this strategy was never implemented, for multiple reasons, the main issue being that it would require the landscape to be quite large in order to fit in intermediate tiles, tying back to the problem of having large biotope tiles. Take for example the smallest landscape the plugin supports, 505x505, segmented into a 4x4 grid: if the user has more than two biotopes that meet, then there really wouldn't be room for intermediate tiles to be generated. This would require a more sophisticated biotope noise blending algorithm which blends the noise without creating intermediate tiles. By no means is this a very hard task to attempt to solve, but due to the ambition for other features such as road generation and asset placement it was simply left as a theoretical solution rather than a tried and tested method.

Asset Distribution
The problem described earlier in 7.1 could be solved by adding further functionality to the listing of asset settings in the UI (see figure 5.24), so that their order can be changed and so that it is apparent that the placement works in that way. Another solution would be to add a priority as a setting, where the user can set a numeric parameter, as sketched below. Either way it is a feature that could help create a more natural environment if set correctly.
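The sketch below shows what such a numeric priority could look like: a hypothetical Priority field on the per-asset settings, sorted before placement so that high-priority assets claim space first when collision is considered. The struct and field names are assumptions, not taken from the plugin.

// Sketch: sort asset settings by a hypothetical Priority field before placement.
struct FAssetPlacementSettings
{
	int32 Priority = 0;      // higher value = placed earlier (hypothetical parameter)
	int32 AssetCount = 0;
	bool bNoCollide = false;
};

void SortByPriority(TArray<FAssetPlacementSettings>& AssetSettings)
{
	AssetSettings.Sort([](const FAssetPlacementSettings& A, const FAssetPlacementSettings& B)
	{
		return A.Priority > B.Priority;
	});
}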
An oversight that we made is to lock asset distribution to only placing UE static mesh actors. For foliage the engine has its own class, Static Mesh Foliage, which among other features supports instancing by default. By not using this object type, performance deteriorates unnecessarily. This must be changed in order to make our tool suitable for placing high quantities of assets, and should be solved by supporting Static Mesh Foliage.
A strong point of our solution is that it reduces manual editing; there is no need to "paint" assets onto the landscape by hand, as the biotopes are already placed using the 2D preview. If the artist is unhappy with the result, all assets can be removed with the click of a button, modifications can then be made and the assets can just as easily be placed again. Since our method of distribution is based on pseudo-random placement, each iteration of placements can differ from the last. This can be a desired feature, giving the user more variety in how assets are distributed. But it should be improved so that the user has the ability to lock a seed, as is done for biotope noise, so that once a satisfying distribution has been found the seed can be kept and the distribution repeated between iterations.
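A minimal sketch of such seed locking, assuming the placement code draws its random values from an FRandomStream instead of calling FMath::FRandRange directly, is shown below; the function and parameter names are hypothetical.

// Sketch: seeded placement so the same seed reproduces the same asset distribution.
// FRandomStream is Unreal's seedable random number generator.
void PlaceAssetsSeeded(int32 Seed, int32 AssetCount, TArray<FVector2D>& OutPositions)
{
	FRandomStream Stream(Seed);
	OutPositions.Reset(AssetCount);
	for (int32 i = 0; i < AssetCount; ++i)
	{
		// Same normalized tile coordinates every time the same seed is used
		const float X = Stream.FRandRange(0.05f, 0.95f);
		const float Y = Stream.FRandRange(0.05f, 0.95f);
		OutPositions.Add(FVector2D(X, Y));
	}
}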

Procedural Roads
The methodology behind implementing the road algorithms was not the most thought-out process from the start. A somewhat convoluted implementation phase therefore became the reality for the roads, where three separate functions for the smart road algorithm were implemented but in the end only one of them was used in the plugin. However, there was some motivation for this: it was still quite early in the implementation phase and most of the components of UE's system were still unclear to us. For that reason, starting by implementing a simpler function responsible for combining the road/spline functionality we had implemented with the landscape modules in UE's system became quite natural. It allowed the foundation to be laid and built upon once the first algorithm was able to generate a path using the CRSpline class to create road segments.
But one aspect that undoubtedly could have been done differently would have been to fully plan out all the essential features that the road generation has to fulfill before starting to build the second algorithm. This could at least have reduced the number of function implementations to two, as the main reason there even had to be a third iteration of the smart road algorithm was that many features had simply not been realised as essential before starting implementation of the second iteration. This included features such as checking slope not only in one direction but also sideways relative to the road, as well as the most basic feature of having the user be able to decide start and end point, which was completely overlooked at that point.
Another aspect of the approach taken when implementing the road algorithms, which also ties in to why a third iteration was needed, was that inspiration from other related work, such as the roads from the slowroads.io project or SeamScape's rivers, was not really considered until the second iteration had already been implemented. All this led to what has been discussed: not having all the features and criteria for the road algorithm planned out in advance, leading to more time than necessary having to be put into the road generation module.


User Interface
Although the UI was not supposed to be a central part of this thesis work, due to the necessity we felt for it, a considerable amount of time was still allocated to its implementation. One can argue that the time spent might not have saved us time in the long run. But it can also be argued that by having a UI, more desired functionality and flaws have been discovered by us.
The current state of the UI is that it is more functional than optimized for user experience. In order to construct a more user-friendly UI, a field survey would be fundamental for coming up with a good design. To further narrow it down, the survey should if possible focus on people in the field with experience working with the engine. This is because prior knowledge can help improve the integration of the tool with already existing workflows for terrain creation.

Evaluation method
In terms of landscape creation in Unreal Engine, the quantity of adequate tools is low, making an objective evaluation of the presented solution difficult to conduct. This caused the evaluation to partially rest on our personal perception of what is considered good, both visually and practically. Our perception of what is considered "good" stems largely from prior experience in the game genre, both as consumers and developers. In no capacity should we be considered fully capable of conducting an entirely objective evaluation of our solution. But in conjunction with the comparisons to the few similar tools, an evaluation with some foundation could be performed. It can however not be excluded that it is a partially subjective evaluation, especially since it is conducted without a third party.
As mentioned earlier, user feedback would bring great value in determining the state of the solution, preferably from users with experience in the field who are familiar with the workflow of the engine. This could bring much needed criticism and highlight oversights that we as developers have been blind to.

Source criticism
Finding similar work or applications regarding procedural generation was not in itself a hard task, as the usage is widespread. However, sources with well-documented methods, especially within the Unreal Engine context, were very scarce. This led to a limited number of sources, and even some that were not related to Unreal Engine at all. However, when choosing sources, consideration was always taken regarding the validity and reliability of what the source contains and who stands behind it.

8 Conclusion

The aim of this thesis work was to create a system capable of not limiting the user's creative freedom in creating landscapes, and that seamlessly works together with Unreal Engine systems. This could be argued to have been achieved in its simplest form, as the tool allows for adjusting most things when considering terrain generation, asset distribution and road generation. However, some components, such as the visual tiling and the non-usage of the foliage type that Unreal Engine supports for its vegetation modules, do leave a lot of room for improvement.

8.1 Research questions


Can a flexible procedural tool for landscape creation be made in such a way that it matches the visual fidelity and features of existing tools?
The solution for the tool presented in this paper has the potential to be a valid alternative to other tools in assisting in the creation of landscapes. In regard to terrain creation, the limitations are few; the user has full control over biotope characteristics and placement, which supports the flexibility of the tool. What needs to be improved is the method by which borders between different biotopes are interpolated; Gaussian lowpass filtering is not suitable for all kinds of noise characteristics in a flexible tool. Tile size should also be addressed; smaller tiles, or the option to vary the size, in conjunction with a better interpolation method would complement the current solution well. This would affect both the visual fidelity and the flexibility of the tool.
Part of creating landscapes is populating the terrain with assets, and here the presented solution is not sufficient to replace manual labor entirely. The arguments for this come both from our own testing and from comparison to other tools such as Errant Worlds, which does offer a more dedicated system in most aspects. However, if the discussed problem of not using the UE foliage object type is solved, the distribution method could at the very least be used for creating a base population of assets.
Due to the seamless approach of the implementation, the user always has the option to use other tools to improve what Procedural Worlds generates. The terrain and assets have the potential to be tweaked through most tools and systems provided by the engine.


Is it possible to generate roads that vary through pseudo-random means and still retain logical pathing in relation to the procedural terrain?
Due to the limited amount of other related work explaining their methods in detail, and the lack of user tests, it is hard to say definitively whether the road generation through the pseudo-random method in our tool retains logical pathing. However, based on what the tool is capable of generating, across a multitude of different noise characteristics that shape the terrain, the resulting roads do start and end at the points designated by the user, all while avoiding obstacles and steep areas that would otherwise result in roads following non-logical paths that are seemingly never desired.
Based on these empirical results, the question can be answered by confirming that it is possible, as long as the pseudo-randomness is combined with heuristics that make sure the algorithm has a goal and knows, to an extent, what to avoid. Nonetheless, more dedicated evaluation tests and comparisons would have to be explored to fully reinforce the validity of this statement.

8.2 Future Work


The most lacking aspect of this thesis work is the objective quality assurance that did not take place. In order to fully evaluate the implementation, user testing could bring valuable information about workflow and perceived quality. Conducting a broader comparison to tools within other engines could bring more information about what qualities are desired in a tool for landscape creation.
The greatest obstacle that was discovered during this work is interpolation between biotopes of different noise characteristics. More research into a solution for interpolating between different noise without creating apparent artifacts would be useful for our implementation.

Bibliography

[1] Epic Games. Unreal Engine. URL: https://www.unrealengine.com/en-US/. (accessed: 30.08.2022).
[2] David S. Ebert, F. Kenton Musgrave, Darwyn Peachey, Ken Perlin, and Steven Worley. Texturing and Modeling: A Procedural Approach. Morgan Kaufmann Publishers, 2003.
[3] Stefan Gustavson. "Simplex noise demystified". In: (2005). URL: https://github.com/stegu/perlin-noise/blob/master/simplexnoise.pdf.
[4] Chris K. Caldwell. Mersenne Primes: History, Theorems and Lists. URL: https://primes.utm.edu/mersenne/index.html. (accessed: 14.12.2022).
[5] Makoto Matsumoto and Takuji Nishimura. "Mersenne Twister: A 623-Dimensionally Equidistributed Uniform Pseudo-Random Number Generator". In: ACM Trans. Model. Comput. Simul. 8.1 (Jan. 1998), pp. 3–30. ISSN: 1049-3301. DOI: 10.1145/272991.272995. URL: https://doi.org/10.1145/272991.272995.
[6] Epic Games. FMath::FRandRange. URL: https://docs.unrealengine.com/4.27/en-US/API/Runtime/Core/Math/FMath/FRandRange/. (accessed: 30.01.2023).
[7] Thomas Bloom. Linear interpolation. URL: https://encyclopediaofmath.org/index.php?title=Linear_interpolation. (accessed: 18.01.2023).
[8] R. Fisher, S. Perkins, A. Walker, and E. Wolfart. Gaussian Smoothing. URL: https://homepages.inf.ed.ac.uk/rbf/HIPR2/gsmooth.htm. (accessed: 29.01.2022).
[9] Edwin Catmull and Raphael Rom. "A Class of Local Interpolating Splines". In: Computer Aided Geometric Design. Ed. by Robert E. Barnhill and Richard F. Riesenfeld. Academic Press, 1974, pp. 317–326. ISBN: 978-0-12-079050-0. DOI: https://doi.org/10.1016/B978-0-12-079050-0.50020-5. URL: https://www.sciencedirect.com/science/article/pii/B9780120790500500205.
[10] Epic Games. Slate Architecture. URL: https://docs.unrealengine.com/5.1/en-US/understanding-the-slate-ui-architecture-in-unreal-engine/. (accessed: 24.11.2022).
[11] Sebastian Ekman, Anders A. Hansson, Thomas Högberg, Anna Nylander, Alma Ottedag, and Joakim Thorén. "SeamScape: A procedural generation system for accelerated creation of 3D landscapes". In: 2016.
[12] Alex D. Kelley, Michael C. Malin, and Gregory M. Nielson. "Terrain Simulation Using a Model of Stream Erosion". In: Proceedings of the 15th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '88. New York, NY, USA: Association for Computing Machinery, 1988, pp. 263–268. ISBN: 0897912756. DOI: 10.1145/54852.378519. URL: https://doi.org/10.1145/54852.378519.
[13] Jean-David Génevaux, Éric Galin, Eric Guérin, Adrien Peytavie, and Bedrich Benes. "Terrain Generation Using Procedural Models Based on Hydrology". In: ACM Trans. Graph. 32.4 (July 2013). ISSN: 0730-0301. DOI: 10.1145/2461912.2461996. URL: https://doi.org/10.1145/2461912.2461996.
[14] P. Prusinkiewicz and Aristid Lindenmayer. The Algorithmic Beauty of Plants. Berlin, Heidelberg: Springer-Verlag, 1990. ISBN: 0387972978.
[15] Johan Hammes. "Modeling of Ecosystems as a Data Source for Real-Time Terrain Rendering". In: vol. 2181. Jan. 2001, pp. 98–111. ISBN: 978-3-540-42586-1. DOI: 10.1007/3-540-44818-7_14.
[16] Errant Photon. Errant Worlds. URL: https://www.errantphoton.com/. (accessed: 16.01.2023).
[17] Robert H. Whittaker. Communities and Ecosystems. MacMillan Publishing Co., 1975.
[18] USGS. What is a digital elevation model (DEM)? URL: https://www.usgs.gov/faqs/what-digital-elevation-model-dem. (accessed: 01.12.2022).
[19] Roland Fischer, Philipp Dittmann, Rene Weller, and Gabriel Zachmann. "AutoBiomes: procedural generation of multi-biome landscapes". In: The Visual Computer 36 (Oct. 2020). DOI: 10.1007/s00371-020-01920-7.
[20] Nicolas Marechal, Eric Galin, Adrien Peytavie, N. Maréchal, and Eric Guérin. "Procedural Generation of Roads". In: Computer Graphics Forum. 2nd ser. 29 (2010), pp. 429–438. DOI: 10.1111/j.1467-8659.2009.01612.x. URL: https://hal.archives-ouvertes.fr/hal-01381447.
[21] Arnaud Emilien, Adrien Bernhardt, Adrien Peytavie, Marie-Paule Cani, and Eric Galin. "Procedural Generation of Villages on Arbitrary Terrains". In: Vis. Comput. 28.6–8 (June 2012), pp. 809–818. ISSN: 0178-2789. DOI: 10.1007/s00371-012-0699-7. URL: https://doi.org/10.1007/s00371-012-0699-7.
[22] Tiago Boelter Mizdal and Cesar Tadeu Pozzer. "Procedural Content Generation of Villages and Road System on Arbitrary Terrains". In: 2018 17th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames). 2018, pp. 205–2056. DOI: 10.1109/SBGAMES.2018.00032.
[23] anslo. Slow Roads: tl;dr. URL: https://anslo.medium.com/slow-roads-tl-dr-a664ac6bce40. (accessed: 25.11.2022).
[24] Li Gang and Shi Guangshun. "Procedural Modeling of Urban Road Network". In: 2010 International Forum on Information Technology and Applications. Vol. 1. 2010, pp. 75–79. DOI: 10.1109/IFITA.2010.122.
[25] Epic Games. SListView Widget. URL: https://docs.unrealengine.com/5.0/en-US/API/Runtime/Slate/Widgets/Views/SListView/. (accessed: 13.01.2023).

A Code snippets

A.1 generate()

ALandscape* CreateLandscape::generate(){
	FTransform LandscapeTransform{ InRotation, InTranslation, LandscapeScale };
	int32 QuadsPerComponent{ 63 };
	int32 SizeX{ 505 };
	int32 SizeY{ 505 };
	int32 ComponentsPerProxy{ 1 };

	TArray<FLandscapeImportLayerInfo> MaterialImportLayers;
	TMap<FGuid, TArray<uint16>> HeightDataPerLayers;
	TMap<FGuid, TArray<FLandscapeImportLayerInfo>> MaterialLayerDataPerLayers;

	HeightDataPerLayers.Add(FGuid(), MoveTemp(heightData));
	MaterialLayerDataPerLayers.Add(FGuid(), MoveTemp(MaterialImportLayers));

	UWorld* World = nullptr;

	// We want to create the landscape in the landscape editor mode's world
	FWorldContext& EditorWorldContext = GEditor->GetEditorWorldContext();
	World = EditorWorldContext.World();

	ALandscape* Landscape = World->SpawnActor<ALandscape>(FVector(0.0f, 0.0f, 0.0f),
		FRotator(0.0f, 0.0f, 0.0f));

	Landscape->bCanHaveLayersContent = true;
	Landscape->LandscapeMaterial = nullptr;

	Landscape->SetActorTransform(LandscapeTransform);
	Landscape->Import(FGuid::NewGuid(), 0, 0, SizeX - 1, SizeY - 1,
		SectionsPerComponent, QuadsPerComponent,
		HeightDataPerLayers, nullptr, MaterialLayerDataPerLayers,
		ELandscapeImportAlphamapType::Additive);

	// Register all the landscape components and set LOD lighting
	Landscape->StaticLightingLOD = FMath::DivideAndRoundUp(FMath::CeilLogTwo(
		(SizeX * SizeY) / (2048 * 2048) + 1), (uint32)2);

	ULandscapeInfo* LandscapeInfo = Landscape->GetLandscapeInfo();
	LandscapeInfo->UpdateLayerInfoMap(Landscape);
	Landscape->RegisterAllComponents();

	// Need to explicitly call PostEditChange on the LandscapeMaterial property
	// or the landscape proxy won't update its material
	FPropertyChangedEvent MaterialPropertyChangedEvent(FindFieldChecked<FProperty>
		(Landscape->GetClass(), FName("LandscapeMaterial")));
	Landscape->PostEditChangeProperty(MaterialPropertyChangedEvent);
	Landscape->PostEditChange();

	// Changing Gridsize, which will create LandscapeStreamingProxies
	EditorWorldContext.World()->GetSubsystem<ULandscapeSubsystem>()->
		ChangeGridSize(LandscapeInfo, ComponentsPerProxy);

	gridSizeOfProxies = (SizeX - 1) / ((QuadsPerComponent * QuadsPerComponent)
		* ComponentsPerProxy);

	return Landscape;
}

A.2 spawnAssets()

void ProceduralAssetDistribution::spawnAssets(TArray<TSharedPtr<biomeAssets>>
biomeSettings, TArray<UTile*> tiles, const int32 ComponentSizeQuads, const int32
ComponentsPerProxy,
const int32 GridSizeOfProxies, const TArray<ControlPoint>& inRoadCoords, const
TArray<Road>& roads, const int32& landscapeScale)
{
UWorld* World = nullptr;
FWorldContext& EditorWorldContext = GEditor->GetEditorWorldContext();
World = EditorWorldContext.World();

FVector proxyLocation;
FVector proxyScale;
FVector Location;
FVector assetScale;
FMath mathInstance;

float minPos = 0.05f;


float maxPos = 0.95f;
float minRot = 0.0f;
float maxRot = 2.0f * PI;
float minScale;
float maxScale;
float RotationAngle;
int AssetCount = 0; //used to count per tile assets

//Iterate through all tiles


for (size_t i = 0; i < tiles.Num(); i++)
{
//Check which type the tile is in terms of biotope
for (size_t j = 0; j < biomeSettings.Num(); j++) {
if (tiles[i]->biotope == biomeSettings[j]->biotopeIndex) {

//Make sure the tile has properly set up a streaming proxy for world partitioning
if (tiles[i]->streamingProxy != nullptr)
{
//Fetch a location within the proxy
proxyLocation = tiles[i]->streamingProxy->GetActorLocation();
proxyScale = tiles[i]->streamingProxy->GetActorScale();

}
else {

UE_LOG(LogTemp, Warning, TEXT("Something went wrong, tiles are missing streamingproxies"));
break;
}

//Iterate through the number of asset types this tile should contain
//UE_LOG(LogTemp, Warning, TEXT("Number of assetstypes : %d"), biomeSettings[j]->AssetSettings.Num());
for (size_t k = 0; k < biomeSettings[j]->AssetSettings.Num(); k++)
{
//Iterate through the number of instances of this specific asset the tile should contain
AssetCount = 0;
while (AssetCount < biomeSettings[j]->AssetSettings[k]->assetCount) {

//Random coordinates for X,Y within the bounds of the tiles


float randomValX = mathInstance.FRandRange(minPos, maxPos);
float randomValY = mathInstance.FRandRange(minPos, maxPos);

//Find position in tile, scales based on perProxy


Location = proxyLocation + (ComponentSizeQuads * proxyScale) * (
ComponentsPerProxy / 2.0);

//Set position for X,Y


Location.X = proxyLocation.X + (ComponentSizeQuads * proxyScale.X) * (
ComponentsPerProxy * randomValX);
Location.Y = proxyLocation.Y + (ComponentSizeQuads * proxyScale.Y) * (
ComponentsPerProxy * randomValY);

//Create triangle for normal calculations to match ground tilt


Triangle tri(tiles[i], Location.X, Location.Y);

Location.Z = tiles[i]->streamingProxy->GetHeightAtLocation(Location).
GetValue();

FVector UpVector = FVector(0, 0, 1);

RotationAngle = acosf(FVector::DotProduct(UpVector, tri.normal));

//Check if angle is acceptable to spawn the object within


if (RotationAngle > biomeSettings[j]->AssetSettings[k]->angleThreshold)
{
AssetCount++;
continue;
}

FVector RotationAxis = FVector::CrossProduct(UpVector, tri.normal);


RotationAxis.Normalize();

float randomZRotation = mathInstance.FRandRange(minRot, maxRot);

FQuat Quat = FQuat(RotationAxis, RotationAngle);


FQuat quatRotZ = FQuat(tri.normal, randomZRotation);
Quat = quatRotZ * Quat;

FRotator Rotation(tri.normal.Rotation());
FActorSpawnParameters SpawnInfo;

//Specify where in the world it will spawn, using ground tilt


AStaticMeshActor* MyNewActor = World->SpawnActor<AStaticMeshActor>(
Location, Quat.Rotator(), SpawnInfo);

//For scale variance (assumes uniform scale on all axes, which "should" be true)
FVector defaultScale = MyNewActor->GetActorScale3D();

minScale = defaultScale.X - biomeSettings[j]->AssetSettings[k]->scaleVar;
maxScale = defaultScale.X + biomeSettings[j]->AssetSettings[k]->scaleVar;

float scaleValue = mathInstance.FRandRange(minScale, maxScale);

assetScale = { scaleValue ,scaleValue ,scaleValue };

MyNewActor->SetActorScale3D(assetScale);

UStaticMesh* Mesh = LoadObject<UStaticMesh>(nullptr, *biomeSettings[j]->


AssetSettings[k]->ObjectPath);

//If noCollide is true, we have to check bounding boxes against all previously spawned objects
if (biomeSettings[j]->AssetSettings[k]->noCollide && !biomeSettings[j]->
AssetSettings[k]->considerRoad) {
spawnWithNoCollide(tiles[i], Location, scaleValue, biomeSettings[j]->
AssetSettings[k]->density, MyNewActor, Mesh, AssetCount);
}
//If considerRoad is true, we have to check the distance to road spline points and see if it is above the threshold
else if (biomeSettings[j]->AssetSettings[k]->considerRoad && !
biomeSettings[j]->AssetSettings[k]->noCollide) {
if (roadConsiderCheck(inRoadCoords, roads, landscapeScale, Location))
{
AssetCount++;
culledAssets.Add(MyNewActor);
}
else {
UStaticMeshComponent* MeshComponent = MyNewActor->
GetStaticMeshComponent();
if (MeshComponent)
{
MeshComponent->SetStaticMesh(Mesh);
}
tiles[i]->tileAssets.Add(MyNewActor);
AssetCount++;
}
}
//If both are true, first check road distance, then check if it collides with other objects
else if (biomeSettings[j]->AssetSettings[k]->considerRoad &&
biomeSettings[j]->AssetSettings[k]->noCollide) {
if (roadConsiderCheck(inRoadCoords, roads, landscapeScale, Location))
{
AssetCount++;
culledAssets.Add(MyNewActor);
}
else {
spawnWithNoCollide(tiles[i], Location, scaleValue, biomeSettings[j
]->AssetSettings[k]->density, MyNewActor, Mesh, AssetCount);
}
}
//Both false, spawn without any criterion
else if (!biomeSettings[j]->AssetSettings[k]->considerRoad && !
biomeSettings[j]->AssetSettings[k]->noCollide) {
UStaticMeshComponent* MeshComponent = MyNewActor->
GetStaticMeshComponent();
if (MeshComponent)
{
MeshComponent->SetStaticMesh(Mesh);
}
tiles[i]->tileAssets.Add(MyNewActor);
AssetCount++;


}
else {
UE_LOG(LogTemp, Warning, TEXT("This should never happen PA.cpp 163"));
}
}
for (auto& t : culledAssets)
{
if (t.IsValid())
{
t->Destroy();
}
}
culledAssets.Empty();
}

}
else {
/*UE_LOG(LogTemp, Warning, TEXT("Something went wrong, no biome type of this
tile exists (!)"));*/
}
}

A.3 CRSpline.cpp

#include "CRSpline.h"

CRSpline::CRSpline(ControlPoint p0, ControlPoint p1, ControlPoint p2, ControlPoint p3)
{
points.Add(p0);
points.Add(p1);
points.Add(p2);
points.Add(p3);
}

CRSpline::CRSpline()
{
}

CRSpline::~CRSpline()
{
}

ControlPoint CRSpline::GetSplinePoint(float t) const
{
check(t >= 1);
check(t < points.Num() - 1);

int i0, i1, i2, i3;

//t = 1,4
i0 = floor(t) - 1; // 0
i1 = floor(t); //1
i2 = i1 + 1; //2


i3 = i2 + 1; //3

t = t - floor(t); //0.4
float tt = t * t;
float ttt = tt * t;

float q1 = -ttt + 2.0f * tt - t;


float q2 = 3.0f * ttt - 5.0f * tt + 2.0f;
float q3 = -3.0f * ttt + 4.0f * tt + t;
float q4 = ttt - tt;

ControlPoint res;

res.pos.X = tension * (points[i0].pos.X * q1 + points[i1].pos.X * q2 + points[i2].pos.X * q3 + points[i3].pos.X * q4);
res.pos.Y = tension * (points[i0].pos.Y * q1 + points[i1].pos.Y * q2 + points[i2].pos.Y * q3 + points[i3].pos.Y * q4);
res.pos.Z = tension * (points[i0].pos.Z * q1 + points[i1].pos.Z * q2 + points[i2].pos.Z * q3 + points[i3].pos.Z * q4);

return res;
}

ControlPoint CRSpline::GetSplineGradient(float t)
{
int i0, i1, i2, i3;

i1 = (int)t + 1;
i2 = i1 + 1;
i3 = i2 + 1;
i0 = i1 - 1;

t = t - (int)t;
float tt = t * t;
float ttt = tt * t;

float q1 = -3.0f * tt + 4.0f * t - 1;


float q2 = 9.0f * tt - 10.0f * t;
float q3 = -9.0f * tt + 8.0f * t + 1.0f;
float q4 = 3.0f * tt - 2.0f * t;

ControlPoint res;

res.pos.X = tension * (points[i0].pos.X * q1 + points[i1].pos.X * q2 + points[i2].pos.X * q3 + points[i3].pos.X * q4);
res.pos.Y = tension * (points[i0].pos.Y * q1 + points[i1].pos.Y * q2 + points[i2].pos.Y * q3 + points[i3].pos.Y * q4);
res.pos.Z = tension * (points[i0].pos.Z * q1 + points[i1].pos.Z * q2 + points[i2].pos.Z * q3 + points[i3].pos.Z * q4);

return res;
}

float CRSpline::calcSegmentLength(int cp_index, float stepSize = 0.0001f)


{
float resLength = 0.0f;

ControlPoint old_point, new_point;


old_point = GetSplinePoint((float)cp_index);

for (float t = 0; t < 1.0f; t += stepSize)


{
new_point = GetSplinePoint((float)cp_index + t);
resLength += sqrtf((new_point.pos.X - old_point.pos.X) * (new_point.pos.X - old_point.pos.X)
+ (new_point.pos.Y - old_point.pos.Y) * (new_point.pos.Y - old_point.pos.Y)
+ ((new_point.pos.Z - old_point.pos.Z) * (new_point.pos.Z - old_point.pos.Z)));
old_point = new_point;
}

return resLength;
}

float CRSpline::GetNormalisedOffset(float p) const


{
int i = 1;
while (p > points[i].length)
{
p -= points[i].length;
i++;
}
//The fraction is the offset
return floor(i) + (p / points[i].length);
}

void CRSpline::calcLengths()
{
TotalLength = 0;
//UE_LOG(LogTemp, Warning, TEXT("Number of control points: %d"), points.Num());
for (size_t i = 1; i < points.Num()-2; i++)
{
points[i].length = calcSegmentLength(i);
TotalLength += points[i].length;
}
}

void CRSpline::addControlPoint(const ControlPoint& cp)


{
points.Add(cp);
}

void CRSpline::visualizeSpline(const FVector &inLandscapeScale)


{
UWorld* World = nullptr;
FWorldContext& EditorWorldContext = GEditor->GetEditorWorldContext();
World = EditorWorldContext.World();

FVector Location;
FRotator Rotation;
FVector assetScale;
FActorSpawnParameters SpawnInfo;

float scaleValue = 0.9;

for (int i = 0; i < points.Num(); i++) //Control points, changed from i = 0 and i++ to i += 2
{

Location = points[i].pos;
float temp = Location.X;
Location.X = Location.Y * inLandscapeScale.Y;
Location.Y = temp * inLandscapeScale.X;
Location.Z = (Location.Z - 32768) * (100.0f / 128.0f);

AStaticMeshActor* CP_cube = World->SpawnActor<AStaticMeshActor>(Location,


Rotation, SpawnInfo);
assetScale = { scaleValue ,scaleValue ,scaleValue };
CP_cube->SetActorScale3D(assetScale);


UStaticMesh* Mesh = LoadObject<UStaticMesh>(nullptr, TEXT("StaticMesh'/Game/Test_assets/Control_Point.Control_Point'"));

UStaticMeshComponent* MeshComponent = CP_cube->GetStaticMeshComponent();


if (MeshComponent)
{
MeshComponent->SetStaticMesh(Mesh);
}

splineActors.Add(CP_cube);

}
scaleValue = 0.7;
float steplength = TotalLength / 250.0f;

int counter = 0;
for (float i = 0; i < TotalLength; i += steplength) //On line points
{
counter++;
Location = GetSplinePoint(GetNormalisedOffset(i)).pos;
float temp = Location.X;
Location.X = Location.Y * inLandscapeScale.Y;
Location.Y = temp * inLandscapeScale.X;
Location.Z = (Location.Z - 32768) * (100.0f/128.0f);
AStaticMeshActor* SP_cube = World->SpawnActor<AStaticMeshActor>(Location,
Rotation, SpawnInfo);
assetScale = { scaleValue ,scaleValue ,scaleValue };
SP_cube->SetActorScale3D(assetScale);

UStaticMesh* Mesh = LoadObject<UStaticMesh>(nullptr, TEXT("StaticMesh'/Game/Test_assets/Cube_Ghost.Cube_Ghost'"));

UStaticMeshComponent* MeshComponent = SP_cube->GetStaticMeshComponent();


if (MeshComponent)
{
MeshComponent->SetStaticMesh(Mesh);
}
splineActors.Add(SP_cube);
}
}