
Smooth Signed Distance Field Textures

by

Ehren Choy,

A thesis submitted to the


Faculty of Graduate and Postdoctoral Affairs
in partial fulfillment of the requirements for the degree of

Master of Computer Science

Ottawa-Carleton Institute for Computer Science


The School of Computer Science
Carleton University
Ottawa, Ontario
April, 2016

©Copyright
Ehren Choy, 2016
The undersigned hereby recommends to the
Faculty of Graduate and Postdoctoral Affairs
acceptance of the thesis

Smooth Signed Distance Field Textures

submitted by Ehren Choy,

in partial fulfillment of the requirements for the degree of

Master of Computer Science

Professor David Mould, Thesis Supervisor

Professor Oliver van Kaick, School of Computer Science

Professor Eric Dubois,


School of Electrical Engineering and Computer Science

Professor Anil Maheshwari, Chair,


School of Computer Science

Ottawa-Carleton Institute for Computer Science


The School of Computer Science
Carleton University
April, 2016

Abstract

We present a procedural method to produce different types of organic texture. We
create a texture by defining a mesh over the texture domain, with each point of the
mesh being assigned a label. We group the points of the mesh, with each group
consisting of points which all share the same label, and are all connected to each
other in the mesh. An implicit region is created for every group.
To define a region, we derive the interior and exterior contour. The interior contour
connects outer points of a group together, while the exterior contour connects points
that are neighbours of that group. The region is defined to be between the interior and
exterior contour. We construct a smooth signed distance field for each contour, and
blend the fields of the contours together to form the region. Wherever one distance
field is discontinuous, we use the distance field of the opposing contour to define the
region. A texture is created by blending together the regions from all groups.
Our work makes the following contributions. We propose the idea of defining
an implicit field by using the smooth signed distance fields of two sets of boundary
line segments. By using two sets of line segments, we are able to produce a single
implicit field that is everywhere smooth. We apply this idea to the generation of
organic texture. Our method of texture generation is distinguished from previous
techniques by being able to create resolution-independent texture in which texture
elements can be smooth or irregularly shaped. We have found our method suitable
for modeling many types of patterns found in the natural world, including patterns
commonly found in frogs and lizards.

This thesis is dedicated to my parents, Robin & Marie Choy, for their endless love
and encouragement.

Acknowledgments

My deepest gratitude must first be given to my advisor Professor David Mould.
Throughout all the challenges and setbacks, his guidance and encouragement have
made my academic journey a memorable one. From his lighthearted humour to his
inspirational ideas, he has played a central role in the development of my research.
Not often do I have the opportunity to show my appreciation, but I am glad to do so
here.
I would also like to thank my defense committee, Professor Oliver van Kaick and
Professor Eric Dubois, for providing their comments and suggestions. They have
helped greatly in improving the presentation of my work.
Lastly, I must thank all my lab members in HP 5317. I am glad to have shared
so many long conversations, both serious and mundane, with them. Their kindness
was a consistent source of support, and I will remember the time spent with them
fondly.

Table of Contents

Abstract iii

Acknowledgments v

Table of Contents vi

List of Tables x

List of Figures xi

1 Introduction 1
1.1 Goal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Previous Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2 Procedural Texture Functions 6


2.1 Describing Texture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Evaluating Procedural Texture . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Procedural Noise Functions . . . . . . . . . . . . . . . . . . . . . . . 9
2.3.1 Perlin Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3.2 Sparse Convolution and Gabor Noise . . . . . . . . . . . . . . 11
2.4 Texture Basis Functions . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4.1 Worley Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4.2 Partition of Unity Textures . . . . . . . . . . . . . . . . . . . 13
2.5 Reaction Diffusion Textures . . . . . . . . . . . . . . . . . . . . . . . 15
2.6 Exemplar-Based Textures . . . . . . . . . . . . . . . . . . . . . . . . 17
2.6.1 Pixel-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

2.6.2 Patch-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.6.3 Optimization-Based . . . . . . . . . . . . . . . . . . . . . . . . 20
2.6.4 Surface Texture Synthesis . . . . . . . . . . . . . . . . . . . . 20
2.6.5 Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.7 Implicit Modeling Methods . . . . . . . . . . . . . . . . . . . . . . . . 22
2.8 Smooth Distance Field Textures . . . . . . . . . . . . . . . . . . . . . 23

3 Smooth Signed Distance Functions 25


3.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.2 Normalized Approximation . . . . . . . . . . . . . . . . . . . . . . . . 27
3.3 Interpolation-Based Methods . . . . . . . . . . . . . . . . . . . . . . . 27
3.3.1 Inverse-Distance Interpolation . . . . . . . . . . . . . . . . . . 28
3.3.2 Interpolation Framework . . . . . . . . . . . . . . . . . . . . . 28
3.3.3 Radial Basis Functions . . . . . . . . . . . . . . . . . . . . . . 30
3.4 Potential-Based Methods . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.5 Discrete Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.6 Constructive Methods . . . . . . . . . . . . . . . . . . . . . . . . . . 34

4 Smooth Signed Distance Field Textures 38


4.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.1.1 Mesh Generation . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.1.2 Label Assignment . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.1.3 Point Grouping . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.1.4 Contour Definition . . . . . . . . . . . . . . . . . . . . . . . . 42
4.1.5 Field Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.1.6 Region Merging . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.2 Blending . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.3 Offset Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.4 Labeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.4.1 Stripes and Spots . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.4.2 Graph Traversal . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.5 Point Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.6 Color Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

5 Results 61
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.2 Method Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
5.3 Irregular Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.3.1 Irregular Spot Texture . . . . . . . . . . . . . . . . . . . . . . 64
5.3.2 Irregular Stripe Texture . . . . . . . . . . . . . . . . . . . . . 65
5.4 Smooth Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
5.4.1 Rounded Spot Texture . . . . . . . . . . . . . . . . . . . . . . 66
5.4.2 Smooth Stripe Texture . . . . . . . . . . . . . . . . . . . . . . 68
5.5 Voronoi Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.5.1 Voronoi Definition . . . . . . . . . . . . . . . . . . . . . . . . 70
5.5.2 Irregular Voronoi Patterns . . . . . . . . . . . . . . . . . . . . 71
5.5.3 Smooth Voronoi Patterns . . . . . . . . . . . . . . . . . . . . . 72
5.6 Complex Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.6.1 Image Texture . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.6.2 Non-Stationary Texture . . . . . . . . . . . . . . . . . . . . . 74
5.6.3 Boundary Constraints . . . . . . . . . . . . . . . . . . . . . . 77
5.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

6 Discussion 79
6.1 Rationale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.1.1 Mesh Texture . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.1.2 Field Definition . . . . . . . . . . . . . . . . . . . . . . . . . . 80
6.1.3 Texture Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . 82
6.1.4 Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.2 Storage and Performance . . . . . . . . . . . . . . . . . . . . . . . . . 85
6.2.1 Random Access . . . . . . . . . . . . . . . . . . . . . . . . . . 87
6.2.2 Fixed Resolution . . . . . . . . . . . . . . . . . . . . . . . . . 87
6.2.3 Field Value Computation . . . . . . . . . . . . . . . . . . . . . 88
6.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

7 Conclusion 91
7.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
7.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

List of References 94

List of Tables

6.1 Timing results for our algorithm for a binary texture (400 × 400). . . 89
6.2 Timing results for our algorithm for different resolutions. . . . . . . . 90

List of Figures

1.1 Examples of inspirational biological patterns. (a) toad [24], (b) ze-
bra [20], (c) blue poison arrow frog [66], (d) leopard [56], (e) lion
fish [73], (f) lizard [61] . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Texture creation using our algorithm. Given a labeled mesh (a), we
derive two sets of line segments (b). The created texture is shown in
(c). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.1 Examples of patterns with different characteristics. The pattern found
in the leopard gecko [81] (a) is stationary and appears similar un-
der translation, while the butterfly [51] (b) is an example of a non-
stationary pattern. The zebra pattern [21] (c) is an example of an
anisotropic pattern. . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Examples of Worley noise . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3 Behaviour of partition-of-unity textures. (a) texture created by assign-
ing a random color to each point, and in (b) the support of a feature
point is shown as a red outline, and the underlying graph is shown in
blue. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4 Examples of reaction-diffusion . . . . . . . . . . . . . . . . . . . . . . 17
3.1 A signed distance field of an infinite line (a) is intersected with the
signed distance field of a circular field (b) producing a smoothed field
(c). Isocontours of the produced fields are shown on the bottom row. 35
3.2 Joining implicit fields. (a) The smooth distance field of a polygon, and
(b) isocontours of the produced field. . . . . . . . . . . . . . . . . . . 37
4.1 From left to right, top to bottom, the six major steps of our algorithm:
(a) mesh generation, (b) label assignment, (c) point grouping, (d) con-
tour definition, (e) field creation, and (f) region merging. Each field
created in step (e) is combined together to form the texture in step (f). 39

4.2 Initial steps of our method. (a) Mesh generation (b) Label Assignment
(c) Point Grouping . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.3 Interior and exterior contours for a set of points. The interior contour
is shown in black, and the exterior contour is shown in gray. The
exterior contour consists of two sets of connected line segments, and a
single point. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.4 Polygonal regions of the interior points of a mesh, with regions high-
lighted in gray. (a) Labeled Mesh (b) Region ΩI (c) Region ΩE . . . 44
4.5 Graphical representation of dI (p) and dE (p) for the interior points of
a simple mesh, with the corresponding exact distance fields shown as
dashed lines. The cross-section area is shown as a dashed line on the
mesh image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.6 Smooth signed distance field of the interior contour for the interior
points of a mesh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.7 Smooth signed distance field of the exterior contour for the interior
points of a mesh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.8 Cross-section of dM (p) and dN (p) for the interior points of a simple
mesh, with the exact distance field midway between contours shown as
a dashed line. The cross-section area is shown as a dashed line on the
mesh image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.9 Distance field dM (p) with gradient discontinuities. . . . . . . . . . . . 50
4.10 Distance field dN (p) that is C 1 continuous. . . . . . . . . . . . . . . . 50
4.11 Merging implicit regions to form a textures. (a) simple texture, and
(b) random binary texture. . . . . . . . . . . . . . . . . . . . . . . . . 51
4.12 Blending of implicit regions with different blending radius r. . . . . . 53
4.13 Cross-section of wi (p) with different blending radii. The cross-section
area is shown as a dashed line on the mesh image. . . . . . . . . . . 53
4.14 Different offset distances K to expand or contract a region. . . . . . . 55
4.15 Cross-section of dN (p) showing different offset distances for a simple
mesh. The cross-section area is shown as a dashed line on the mesh
image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.16 Stripe and spot patterns created with a graph-based traversal. (a) spot
patterns, and (b) stripe patterns. . . . . . . . . . . . . . . . . . . . . 57

4.17 Branching structures created using a graph-based traversal. (a) texture
with initial set of labeled points, and (b) texture with labeled points
connected to form a minimum spanning tree. . . . . . . . . . . . . . 57
4.18 Random binary textures with varying point densities . . . . . . . . . 59
4.19 Different textures generated by varying the point distribution. . . . . 59
4.20 Textures produced with a color spline to produce an outline. (a) initial
texture image, (b) discontinuous color transition, and (c) smooth color
transition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.1 From left to right, top to bottom, examples of patterns found in the
natural world: (a) spotted marsh frog [80], (b) strawberry poison-
dart frog [34], (c) clown triggerfish [28], (d) emperor angelfish [39], (e)
pufferfish [19], and (f) common giraffe [53]. . . . . . . . . . . . . . . . 62
5.2 Irregular spot and stripe patterns. (a) irregular spot pattern created
with d = 0.10 and k = 4 (b) irregular stripe pattern created with
d = 0.05 and k = 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.3 Irregular spot patterns each with varying point densities d and cluster
sizes k. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5.4 Irregular stripe patterns each with varying maximum stripe length k. 66
5.5 Bend in an irregular stripe pattern. (a) stripe pattern created with
k = 8 and d = 0.10, and (b) region in the texture with sharp bend. . 67
5.6 (a) example of a rounded spot texture, and (b) example of smooth
stripe pattern. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.7 Parameters dependent on smooth distance field. (a) smooth distance
field of a single line segment, and (b) implicit field of texture with
varying smooth distance fields. . . . . . . . . . . . . . . . . . . . . . 69
5.8 Examples of Voronoi patterns. (a) example of an irregular Voronoi
pattern, and (b) example of a smooth Voronoi pattern. . . . . . . . . 70
5.9 Irregular Voronoi patterns each with varying point densities d and
boundary widths w. . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.10 Examples of different smooth Voronoi patterns. . . . . . . . . . . . . 72
5.11 Examples of complex patterns found in nature: (a) maze coral [10], (b)
blue poison arrow frog [40], and (c) lizard [7] . . . . . . . . . . . . . 73
5.12 Example of maze texture. (a) maze texture, and (b) corresponding
binary image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

5.13 Examples of textures created from images [5,45,67]. (a) original image,
and (b) texture produced using probabilistic assignment of labels from
image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.14 Example of non-stationary texture. . . . . . . . . . . . . . . . . . . . 76
5.15 Example of non-stationary labeling. . . . . . . . . . . . . . . . . . . . 76
5.16 Texture with single set of boundary points. . . . . . . . . . . . . . . . 78
5.17 Texture with two sets of boundary points. . . . . . . . . . . . . . . . 78
6.1 Bulging in a smooth distance field. (a) Single line segment smooth
distance field. (b) Bulge formed by joining two separate line segments. 80
6.2 Analysis of bulging problem. (a) Implicit field of two joined line seg-
ments. (b) Gradient magnitude of joined line segments. . . . . . . . . 81
6.3 Implicit field created using an interior and exterior contour. . . . . . 83
6.4 Implicit field created using a single set of line segments. . . . . . . . . 83
6.5 Texture with rotated interior contour. . . . . . . . . . . . . . . . . . . 84
6.6 Examples of patterns created using partition of unity textures. . . . . 86
6.7 Examples of patterns created using reaction diffusion. . . . . . . . . . 86
6.8 Examples of patterns created using our method. . . . . . . . . . . . . 86

Chapter 1

Introduction

Textures are used extensively in computer graphics to add visual details onto synthetic
images. A texture, or texture map, is an image that is mapped onto the surface
of an object to modify surface properties such as color, reflection, transparency, or
displacement. Although any arbitrary image can be referred to as a texture, we use
a narrower definition, adapted from Wei [87] and Lewis [49]:

Textures are defined as images, either visual or tactile, consisting of
repeating elements forming patterns. Texture can be described through its
aggregate characteristics, rather than explicit specification of its elements.

Given this refined definition, a major challenge is how to efficiently create high-quality
textures for use in objects of a generated scene.
Procedural texture functions are automatic methods for generating texture that
can be solely described through algorithms or mathematical functions. These func-
tions do not rely on artists providing digital photographs or samples, but instead
create textures through modifiable parameters. Different procedural texture func-
tions are used depending on the type of texture to be created.
Our work focuses on generating textures that resemble organic patterns found in
the natural world. Textures we wish to generate include the various spots and stripes
found in the animal kingdom. Within this collection of patterns, there is a tremendous
variety in color, shape, and size. Patterns range from the black-and-white stripes of a
zebra, to the complex skin patterns found in certain toads. We show several examples
of organic patterns in Figure 1.1.
There are several advantages to creating texture using procedural texture func-
tions. We can describe texture as a whole, rather than having to specify a particular


Figure 1.1: Examples of inspirational biological patterns. (a) toad [24], (b) ze-
bra [20], (c) blue poison arrow frog [66], (d) leopard [56], (e) lion fish [73], (f)
lizard [61]

spot or stripe in an image. Modifiable parameters allow artists to gradually change
texture appearance, a benefit known as parametric control [25].
Procedural texture functions also provide the ability to discover and synthesize
new textures which may not necessarily exist in photographs. Although not neces-
sarily true for all methods, other potential benefits of procedural texture functions
include being compact, efficient, and continuous. We discuss the possible benefits of
procedural texture functions in detail in Section 2.1.

1.1 Goal
Our goal is to create a procedural texture function capable of automatically generating
a large class of organic patterns, including certain types of amphibian skin and animal
coat markings. The control parameters should be intuitive for a designer, and it
should be possible to gradually change a texture once produced. Our method should
also provide the traditional benefits of procedural texture functions, including being
efficient and having a compact and continuous representation.

1.2 Previous Work


Texture synthesis is a topic that has been well studied. There are many methods,
both procedural and non-procedural, that can be used to produce texture. Methods
are distinguished from one another by how they are used, and by the types of textures
they generate.
Noise-based techniques (Section 2.3), such as the popular Perlin noise, have pa-
rameters which modify the spectral properties of the texture. These techniques gen-
erate band-limited random patterns, which can be used to add additional detail to
the surface of an object. Reaction-diffusion textures (Section 2.5) operate through
a biological simulation on a discrete grid, and are specified through non-linear par-
tial differential equations. The chemical concentrations of reactants are mapped onto
colors to produce texture.
Other procedural texture functions use point primitives to specify texture. Worley
noise (Section 2.4.1) operates by taking linear combinations of distances from points,
and produces Voronoi-like patterns. Partition-of-unity textures (Section 2.4.2) use
points in a mesh to define a set of smooth convex fields.
Many non-procedural methods can also be used to create texture. Exemplar-
based methods (Section 2.6) use a given sample image to produce an arbitrarily
large texture. Artists can also manually design a texture by using types of implicit
surfaces, such as using skeletal implicit surfaces or variational surfaces. Although
time-consuming to manually specify, an artist would have complete control over the
output texture.
Our method is distinguished from previous approaches by being able to generate
texture that is continuous, and that can contain smooth and non-convex texture
elements. A comparison between our method, Worley noise, and partition of unity
textures is given in Section 6.1.4.

(a) Mesh generation (b) Contour definition (c) Output texture

Figure 1.2: Texture creation using our algorithm. Given a labeled mesh (a), we
derive two sets of line segments (b). The created texture is shown in (c).

1.3 Overview
Our method creates a texture by first generating a mesh over the texture domain.
Each point of the mesh is then labeled with a color value. Once labels are assigned
to the points of the mesh, we divide the points into different groups. Each group
consists of points which all share the same label, and are all connected to each other
in the mesh. An implicit region will be created for every group.
To define an implicit region, we derive two different sets of mesh edges from each
group of points. The two sets of edges define the interior and exterior contour. The
interior contour connects outer points of the group together, while the exterior contour
connects points that are neighbours of the group. The implicit region is defined to
be between the interior and exterior contour, as illustrated in Figure 1.2.
We construct a smooth signed distance field for each contour, and blend the fields
of the contours together to form the implicit region. By using a pair of contours,
rather than using a single set of line segments, we are able to create an implicit field
that is smooth everywhere. Wherever one distance field is discontinuous, we use the
distance field of the opposing contour to define the region. A texture is created by
blending together the implicit fields from all connected components.
Because regions are defined using smooth implicit fields, we can blend different
regions together, and can also expand or contract each texture region. We demon-
strate the effectiveness of our algorithm by generating a variety of textures evident
in the natural world.

1.4 Contributions
Our work makes the following two contributions:

• We propose the idea of defining a smooth implicit region by using the smooth
signed distance fields of two sets of boundary line segments. By using such an
approach, we are able to produce regions whose implicit fields are everywhere
smooth.

• We present a procedural method for texture synthesis, where texture elements
can be rounded or irregularly shaped. Our method is suitable for modeling many
types of patterns found in the natural world, including patterns commonly found
in frogs and lizards.
Chapter 2

Procedural Texture Functions


Figure 2.1: Examples of patterns with different characteristics. The pattern found
in the leopard gecko [81] (a) is stationary and appears similar under translation,
while the butterfly [51] (b) is an example of a non-stationary pattern. The zebra
pattern [21] (c) is an example of an anisotropic pattern.

2.1 Describing Texture


We have previously described textures as images containing repeating patterns. This
definition is simple, but includes many different types of phenomena. Images of brick
walls, marble tiles, and animal markings are all examples of textures. To further
distinguish between textures, we use the following list of texture properties.

Stationarity - We say that a texture is stationary if statistical properties of the
texture do not vary under translation. That is, given any subset of a texture of
a particular size, the subset appears to come from the same underlying process.
Repeating spot patterns, where spots are distributed uniformly and possess the
same size, are stationary patterns. The wing patterns of many butterflies are
non-stationary (Figure 2.1), as the size of a butterfly spot varies depending on
distance to the boundary of the wing.

Isotropy - We say that a texture is isotropic if rotating a texture does not change
the statistical qualities of the texture. Many stripe patterns are not isotropic,
but are anisotropic because all stripes would follow a particular direction in the
texture. For example, zebra stripes (Figure 2.1) typically run vertically along
the zebra body.

Periodicity - We say that a texture is periodic if there exists a pattern which re-
peats at a regular interval, and where the interval is known as the period. A
checkerboard pattern is an example of a pattern which is periodic. Lattice-
based noise functions (Section 2.3) exhibit periodic artifacts which are caused
by interpolation from an underlying grid structure.

Regularity - Closely related to the concept of periodicity is regularity, referring to
the perceived randomness of a texture. Stochastic textures are generated by a
purely random process, while regular patterns have high degrees of symmetry
or periodicity.

Smoothness - Smoothness refers to the continuity of a texture across its domain,
where continuity can be described by the existence of derivatives at a point. We
say a texture is C^k continuous if all derivatives up to the kth order exist and are
continuous for any point in the domain. For many amphibian patterns, smooth
textures are desirable because there is a gradual transition between colors which
should be reflected by the continuity of the texture. Our work produces textures
that are C^1 continuous. Smoothness may also refer to continuity of derivatives
along a boundary curve; the path of a circle is smooth, while for a square it is
not.

Spectral Density - An alternative description of texture can be found in the
frequency domain, where an image is described in terms of a phase and magnitude
over a range of frequencies. Procedural noise functions (Section 2.3) operate
by generating band-limited random patterns. Because the phase of these functions
is random, noise functions can be fully characterized by the frequency
magnitude or spectral density.

We will later use these definitions in describing procedurally generated texture.


We will next consider how to evaluate these procedural methods.

2.2 Evaluating Procedural Texture


Different procedural texture functions are used depending on the type of texture that
must be generated. Because of the differing goals of these automatic methods, there
is no general rubric for evaluation. However, there are a set of criteria that are uni-
versally desirable, and which we describe below. This list is based on characteristics
described by Ebert et al. [25].

Compact - The texture function should use a small amount of memory. In the ideal
case, the memory usage of a texture function should be independent of the size
of the texture. In many cases, storage is dependent on the complexity and size
of the generated texture.

Efficient - The texture should be generated quickly and should not be computa-
tionally expensive. This is especially important for procedural textures, as the
texture function must be evaluated many times at different locations in a render.

Continuous - The texture function should be continuous and not dependent on a
grid discretization. Simulation-based methods like reaction-diffusion, and many
exemplar-based texture methods produce a discrete texture which must later
be interpolated to produce a continuous function.

General - The texture function can be parameterized to generate a wide class of
patterns and phenomena. Ideally, the parameterization should be continuous
to allow an artist to gradually change the texture. Worley noise (Section 2.4.1)
is parameterized by the choice of distance functions used for computation, and
Gabor noise (Section 2.3.2) provides control over the orientation, principal fre-
quency, and bandwidth of the produced noise.

Random Access - The value of a texture at a point can be evaluated in constant
time, independent of previous evaluations. Simulation-based methods
like reaction-diffusion, and many exemplar-based texture methods, do not al-
low random access of the texture; the entire texture must be evaluated at once.

Surface Parameterized - If a texture is created as a rectangular image in R^2, it
must be mapped onto the surface of the object for rendering. For some surfaces,
there is a natural mapping from texture space to model space. However, textures
generally undergo distortion in this mapping process. Rather than having to
perform this mapping, it would be desirable if a texture function is defined
directly on the surface of the rendered object.

The remainder of this section reviews existing methods for generating texture.
We first discuss procedural noise functions, and describe in detail the popular Perlin
noise and Gabor noise functions. We next explore existing methods for creating
organic patterns including reaction-diffusion textures, Worley noise, and partition of
unity textures. We then deviate from procedural methods to discuss exemplar-based
texture and implicit modeling methods. Exemplar-based methods are not procedural
texture functions, but are still widely used in producing texture. For each method,
we describe the attributes that generated textures possess, and how well it matches
the above desirable properties.
We conclude this section by framing our method in the context of existing proce-
dural texture functions. This section assumes that the textures are generated in R^2,
although many of the described methods directly generalize to three dimensions.

2.3 Procedural Noise Functions


We start our discussion on existing texture methods with procedural noise functions.
Procedural noise functions operate by generating random sequences of numbers, and
this randomness is used to perturb a regular pattern (such as a solid color image) to
give the final texture. The best-known procedural noise function is Perlin Noise [63],
which can be used to generate several different types of patterns such as marble,
clouds, and fire.
To effectively perturb a regular pattern, procedural noise functions aim to produce
texture which is stationary, isotropic, non-periodic, and stochastic. To control the
range of perturbation, the range of the noise function should be bounded to some
fixed interval (such as −1 to 1).
A possible, but insufficient, method of perturbation is to use a pseudorandom
number to modify every sample of the original pattern. This approach is insufficient
when we consider rendering the pattern onto a surface; slight changes in the scene re-
sult in different samples to be chosen in the pattern, leading to a completely different
texture. A sequence of random uncorrelated numbers is known as white noise; per-
turbing images using white noise creates textures with arbitrary and uncontrollable
detail.
Described in the frequency domain, white noise has an expected constant power
spectrum, whereas we would like procedural noise functions to produce noise only
within a prescribed range of frequencies. Noise functions are distinguished by their
spectral properties; Perlin noise has been described as “an approximation to white
noise band-limited to a single octave” [62].

2.3.1 Perlin Noise


We now describe the implementation of Perlin noise [63]. The method is initialized
by creating an integer grid of fixed resolution, and assigning a random unit vector to
each grid point. The noise value at some location p is obtained using the four corners
of the grid cell that contains p.
Let xi be a grid point belonging to the cell that contains p, and let g⃗i be the unit
vector associated with xi . The contribution fi (p) of the grid point xi is defined as

$$ f_i(p) = (p - x_i) \cdot \vec{g}_i. \qquad (2.1) $$

The final noise value is obtained using an interpolation of the contribution from
the four corners of the cell. A spline interpolation, rather than a direct bilinear
interpolation, is used to create an everywhere smooth texture.
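
To make the lattice construction concrete, the sketch below evaluates a two-dimensional Perlin-style noise value at a point. It is a minimal illustration rather than the reference implementation from [63]: the lattice size, the wrap-around indexing, and the cubic fade curve are assumptions of the sketch.

```python
# A minimal 2D Perlin-style noise sketch. The lattice size, tiling behaviour,
# and cubic fade curve are illustrative assumptions, not the reference code.
import numpy as np

rng = np.random.default_rng(0)
GRID = 16                                   # lattice resolution (assumed)
angles = rng.uniform(0.0, 2.0 * np.pi, (GRID, GRID))
gradients = np.stack([np.cos(angles), np.sin(angles)], axis=-1)

def fade(t):
    # spline falloff so contributions blend smoothly across cell boundaries
    return t * t * (3.0 - 2.0 * t)

def perlin(px, py):
    # integer corner of the cell containing p, and the offset within the cell
    x0, y0 = int(np.floor(px)), int(np.floor(py))
    fx, fy = px - x0, py - y0
    value = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            # f_i(p) = (p - x_i) . g_i for each of the four cell corners
            g = gradients[(x0 + dx) % GRID, (y0 + dy) % GRID]
            contrib = (fx - dx) * g[0] + (fy - dy) * g[1]
            wx = fade(fx) if dx == 1 else 1.0 - fade(fx)
            wy = fade(fy) if dy == 1 else 1.0 - fade(fy)
            value += wx * wy * contrib
    return value

print(perlin(3.37, 8.12))
```

The improved variant discussed next would replace the cubic fade with a higher-order polynomial so that the second derivative is also continuous.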
The original Perlin noise function has since been improved in a later work by Per-
lin [64]. The original spline produced texture continuous only to its first derivative,
and visible discontinuities existed on a noise-displaced surface when shaded (as the
surface normal is a derivative operator). Perlin improved the noise function by in-
creasing the order of the polynomial spline to be smooth up to its second derivative.
A second improvement was to modify the way the random unit vector is assigned to
a grid point to prevent axis-aligned clumping of noise values.


Both Perlin noise and its improved variant are very fast, simple, and produce
noise which is roughly bandlimited. For these reasons, techniques based on Perlin
noise remain extremely popular for creating procedural texture. However there are
several drawbacks not addressed by either variant, and we discuss two of the major
problems here. A first problem is that Perlin noise exhibits lattice artifacts, where
it is evident that the texture was generated from an underlying grid (lattice in R^3)
structure. To see this, we note that the value of Perlin noise, and its gradient, at each
grid point is zero. A second problem is that it provides only a general approximation
to bandlimited white noise. As described by Cook et al. [22], Perlin noise contains
frequencies which create both loss of detail and aliasing artifacts.

2.3.2 Sparse Convolution and Gabor Noise


In the previous section, we discussed Perlin noise, a well-known lattice-based proce-
dural noise function. One of the major limitations of Perlin noise was that it exhibits
lattice artifacts due to noise values being sampled from an underlying grid. We can
consider interpolating from a random set of points, rather than a regular lattice. A
well-known method for doing so is sparse convolution noise [48].
Sparse convolution noise is implemented as follows. First, a set of randomly
located, variable-magnitude impulses is scattered across the domain. The
value of the noise function at a point is obtained as the convolution of the impulses
with an appropriate low-pass filter. The width of the low pass filter roughly controls
the power spectrum of the resulting noise. This approach removes the lattice artifacts
of Perlin noise, but because the contribution of each impulse from the kernel must
be calculated, it is also much more computationally expensive. Lewis used a smooth
cosine kernel as the low-pass filter.
Lagae et al. [43] used the Gabor kernel in a random pulse process to generate noise.
Like sparse convolution noise, a set of pulses are distributed across the domain, but
the Gabor kernel is used instead of the smooth cosine kernel. The noise is defined as
the summation of the randomly distributed Gabor kernels. The choice of the Gabor
kernel allows for precise spectral control with controllable bandlimits, and can be used
for both anisotropic and isotropic noise generation. The approach of Lagae also has
other desirable characteristics. The method is continuous, uses memory independent
of the complexity of the texture, and can be evaluated at interactive speeds.
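
As a rough illustration of the sparse convolution idea with a Gabor kernel (not the implementation of Lagae et al. [43]), the sketch below sums truncated Gabor kernels centred at randomly scattered impulses; the kernel parameters, impulse count, and the brute-force evaluation loop are assumptions made for readability.

```python
# A rough sketch of sparse Gabor noise in the spirit of Lagae et al. [43]:
# random impulses, each contributing a truncated Gabor kernel. The kernel
# parameters, impulse count, and brute-force loop are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N_IMPULSES = 400
positions = rng.uniform(0.0, 1.0, (N_IMPULSES, 2))   # impulse locations
weights = rng.choice([-1.0, 1.0], N_IMPULSES)         # random impulse weights

A = 24.0          # Gaussian falloff of the kernel (bandwidth)
F0 = 16.0         # principal frequency
OMEGA0 = 0.3      # orientation in radians (gives anisotropic noise)
RADIUS = 3.0 / A  # truncation radius of the kernel support

def gabor(dx, dy):
    gauss = np.exp(-np.pi * A * A * (dx * dx + dy * dy))
    wave = np.cos(2.0 * np.pi * F0 * (dx * np.cos(OMEGA0) + dy * np.sin(OMEGA0)))
    return gauss * wave

def gabor_noise(px, py):
    # sum the contribution of every impulse whose truncated support covers p
    value = 0.0
    for (ix, iy), w in zip(positions, weights):
        dx, dy = px - ix, py - iy
        if dx * dx + dy * dy < RADIUS * RADIUS:
            value += w * gabor(dx, dy)
    return value

print(gabor_noise(0.5, 0.5))
```

The published method instead generates its impulses on the fly, cell by cell, from a seeded pseudorandom stream, which is what gives it random access and memory use independent of the texture; the loop above is only meant to show the summation.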

Lagae later improved upon Gabor noise [44] by introducing an isotropic kernel for
Gabor noise, an error analysis of Gabor noise from kernel truncation, and spatially
varying Gabor noise. Other researchers have also developed Gabor noise for other
contexts. Galerne et al. consider automatically estimating Gabor noise parameters
from exemplar textures [31]. Zanni et al. considered Gabor noise for the texturing of
implicit surfaces [95].

2.4 Texture Basis Functions


2.4.1 Worley Noise
Procedural noise functions generate random sequences of numbers over the texture
domain, and noise functions are distinguished from one another by the spectral char-
acteristics of the produced noise. Gabor noise provided a precise way of controlling
the magnitude, orientation, principal frequency and bandwidth of the output noise.
There are alternative formulations which do not seek to mimic particular spectral
characteristics, but are nevertheless useful in generating interesting texture. These
texture basis functions [42] are used to directly generate interesting patterns, and are
considered as complementary to the procedural noise functions discussed earlier.
A well-known example of a texture basis function is the Worley Noise function [91].
To implement Worley Noise, a set of feature points are distributed across the domain,
and the value at a location is determined by some linear combination of the distances
from these points.
We denote F1 (p) as the distance from p to the nearest feature point, F2 (p) as
the distance to the second nearest feature point, and more generally Fn (p) as the
distance to the nth nearest feature point. The distance function f (p) of the texture
is given by

$$ f(p) = \sum_i A_i F_i(p), \qquad (2.2) $$

where Ai is the scalar coefficient for Fi (p). Values of the distance function are mapped
onto colors to produce a texture. A simple texture can be created by using only the
distance to the nearest feature point F1 (p). Distance values will increase radially
around a feature point, and mapping this distance onto a color produces a simple
Voronoi pattern.
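
A brute-force sketch of this construction follows; the feature point count, the Euclidean metric, and the coefficient choices are illustrative assumptions, and a practical implementation would search only the grid cells near the query point rather than every feature point.

```python
# A brute-force sketch of Worley-style cellular texture: F_n(p) is the distance
# to the n-th nearest feature point, and the value is sum_i A_i F_i(p).
import numpy as np

rng = np.random.default_rng(2)
features = rng.uniform(0.0, 1.0, (64, 2))   # scattered feature points

def worley(px, py, coeffs=(-1.0, 1.0)):
    # coeffs = (A1, A2, ...): (-1, 1) gives the classic F2 - F1 cell edges,
    # while (1,) alone gives the simple radial Voronoi pattern described above
    d = np.sqrt(np.sum((features - np.array([px, py])) ** 2, axis=1))
    d.sort()
    return sum(a * d[i] for i, a in enumerate(coeffs))

# map the field values onto gray levels to produce a small texture image
size = 128
img = np.array([[worley(x / size, y / size) for x in range(size)]
                for y in range(size)])
img = (img - img.min()) / (img.max() - img.min())
print(img.shape)
```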

More interesting patterns can be generated by considering different linear
combinations of distances from points, as shown in Figure 2.2. Worley noise has also been
generalized by varying the metric used for distance computation [25].

(a) F1 (b) F2 − F1 (c) F3 − F1

Figure 2.2: Examples of Worley noise.

The distance functions Fi (p) are in non-decreasing order, where 0 ≤ F1 (p) ≤
F2 (p) ≤ ... ≤ Fn (p), and are everywhere continuous. To give an intuition of the behaviour
of Worley noise, we consider only the distance to closest feature point F1 (p). If we
continuously vary p, the function F1 (p) remains continuous. However, the gradient
of F1 (p) will be discontinuous whenever the distance calculation switches from one
feature point to another. More specifically, gradient discontinuities exist along line
segments which are exactly equidistant from two or more feature points, and for the
case of F1 (p) are the Voronoi boundaries of the feature points.
Similar gradient discontinuities occur for a general Fn (p) as feature points are re-
ordered depending on their distance to p. These discontinuities are what give
Worley noise its cellular Voronoi-like appearance, but an alternative formulation
must be used to create a smooth texture.

2.4.2 Partition of Unity Textures


Worley noise produces textures that are continuous, but contain gradient discontinu-
ities. An alternative formulation by Caron and Mould [17], referred to as partition
of unity textures, creates an everywhere smooth texture continuous in its first and
second derivatives.

Like sparse convolution noise, partition of unity textures first distribute a set of
feature points across the domain, and assign each feature point a random value.
From this set of points, a triangulation is constructed. Each feature point xi has a set
of axes, where each axis is an edge originating at xi and terminating at its neighbor
in the triangulation. The value at a location p in the texture is the weighted average
of the values from each of the feature points.
The weight of a feature point at a location p is calculated as follows. The location
p is projected onto each axis of the feature point. Each projection is then normalized
as a fraction of the axis length, and is clamped to be between zero (exactly on the
feature point) and one (beyond the axis length). The normalized distance is then
transformed with a spline function to have a derivative of zero at both ends. The
weight is computed as the product of these transformed distances.
The noise value f (p) at location p is calculated as follows.

$$ f(p) = \frac{\sum_i w_i(p)\, v_i}{\sum_i w_i(p)} \qquad (2.3) $$

The function wi (p) gives the weight of the feature point xi , and vi is a random
variable associated with xi . Partition of unity textures can be considered an interpo-
lation method; the texture smoothly interpolates the feature points where f (p) = vi
when p = xi .
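
The sketch below follows the weighting scheme described above, under the assumption that a Delaunay triangulation (here obtained through scipy.spatial) supplies each feature point's neighbours; the particular cubic spline and the point count are our own choices rather than values taken from [17].

```python
# A compact sketch of partition-of-unity texturing following the description
# above, assuming a Delaunay triangulation (via scipy.spatial) supplies each
# feature point's neighbours; the cubic spline and point count are our choices.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)
points = rng.uniform(0.0, 1.0, (40, 2))
values = rng.uniform(0.0, 1.0, 40)          # random value v_i per feature point
tri = Delaunay(points)

# neighbour lists from the triangulation (one "axis" per neighbouring point)
indptr, indices = tri.vertex_neighbor_vertices
neighbours = [indices[indptr[i]:indptr[i + 1]] for i in range(len(points))]

def spline(t):
    # falloff with zero derivative at both ends: 1 at the feature point,
    # 0 at the end of the axis
    return 2.0 * t ** 3 - 3.0 * t ** 2 + 1.0

def weight(i, p):
    w = 1.0
    for j in neighbours[i]:
        axis = points[j] - points[i]
        # projection of p onto the axis, as a fraction of the axis length
        t = np.dot(p - points[i], axis) / np.dot(axis, axis)
        w *= spline(np.clip(t, 0.0, 1.0))
    return w

def texture(p):
    # normalized weighted average of the feature-point values, Equation (2.3)
    w = np.array([weight(i, p) for i in range(len(points))])
    return np.dot(w, values) / np.sum(w)

print(texture(np.array([0.4, 0.6])))
```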
This method of interpolation was proposed by Runions [70], and Caron [17] used
this method for texture. Smooth organic shapes were generated by varying the point
distribution, and mapping the resulting values onto colors. The usefulness of this
formulation can be seen by considering the region of influence, or support, of the
feature point xi . The boundary of support of xi can be considered as the intersection
of half-planes, where each half-plane is perpendicular to its associated axis and passing
through xi ’s immediate neighbor. If the Delaunay triangulation is used, this boundary
can be considered as an extended Voronoi region; dividing all axes lengths by two
exactly reproduces the Voronoi diagram. Figure 2.3(b) illustrates the support of a
single point of the texture.
If the underlying triangulation is the Delaunay triangulation, we can loosely de-
scribe partition of unity textures as a blending of smooth Voronoi regions. The
produced texture would be similar to a texture produced by Worley noise when F1 (p)
(the distance to the nearest feature point) is used. We show an example texture

Figure 2.3: Behaviour of partition-of-unity textures. (a) texture created by assigning
a random color to each point, and in (b) the support of a feature point is
shown as a red outline, and the underlying graph is shown in blue.

created using partition of unity in Figure 2.3(a); the triangulation of the points is
shown in blue, and the Voronoi diagram is shown in red on the texture.

2.5 Reaction Diffusion Textures


Reaction-diffusion was first described by Alan Turing [83] as a chemical mechanism
for pattern formation. Several researchers have since described how certain simple
patterns could be generated with such a mechanism [6, 55, 58]. Reaction-diffusion
systems generate organic and smooth patterns, and have been made popular for use in
procedural texture by the work of Witkin and Kass [90], and Turk [84]. We first give
a simple overview of the reaction diffusion method, and we then discuss its extensions
to generate more elaborate textures.
The method generates texture by simulating two or more chemicals interacting
in a system. Chemical concentrations are mapped to colors to generate the texture.
In this model, the chemical concentrations within the system are governed by two
separate processes of reaction and diffusion. Diffusion is the process of chemicals
moving from a point of higher concentration to a lower concentration, while reaction
describes the interactions between chemicals. If we consider a simple two chemical
system of chemicals a and b, reaction-diffusion can be described with the following
equations:

$$ \frac{\partial a}{\partial t} = F(a, b) + K_a \nabla^2 a \qquad (2.4) $$

$$ \frac{\partial b}{\partial t} = G(a, b) + K_b \nabla^2 b. \qquad (2.5) $$

The first equation describes the change in concentration of chemical a over time
at a fixed position. This change is dependent on the function F (a, b) of the local
concentrations of a and b, and the difference in neighbor concentrations of chemical
a scaled by some constant Ka . The operator ∇2 is the Laplacian operator, and ∇2 a
will be positive if chemical a diffuses toward this position and negative if chemical a
diffuses away from this position. Similarly, the second equation describes the change
in concentrations of chemical b.
With reaction-diffusion, only a small initial variation of chemicals is required to
drive a complex simulation. The system would converge to a stable state where
chemicals are distributed across the domain based on the reaction-diffusion equations.
A limitation of using reaction-diffusion is the difficulty in selecting a set of equations
to generate a particular texture. Furthermore, only an approximate solution can be
obtained through numerical simulation. Turing gave the following simple discrete
simulation in one dimension:

$$ \Delta a_i = K_s (16 - a_i b_i) + K_a (a_{i+1} + a_{i-1} - 2a_i) \qquad (2.6) $$

$$ \Delta b_i = K_s (a_i b_i - b_i - \beta_i) + K_b (b_{i+1} + b_{i-1} - 2b_i). \qquad (2.7) $$
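
The sketch below is a direct transcription of this one-dimensional system, iterated on a ring of cells; the rate constants, initial concentrations, iteration count, and the clamping of b to non-negative values are assumptions chosen only so that the sketch runs and produces a visible variation in b.

```python
# A direct transcription of the one-dimensional discrete system above on a
# ring of cells. The constants, initial concentrations, iteration count, and
# the clamping of b are assumptions chosen only to make the sketch run.
import numpy as np

rng = np.random.default_rng(4)
n = 200
a = np.full(n, 4.0)                          # initial chemical concentrations
b = np.full(n, 4.0)
beta = 12.0 + rng.uniform(-0.05, 0.05, n)    # small random variation in beta_i

Ks, Ka, Kb = 0.02, 0.25, 0.0625              # reaction and diffusion constants

for _ in range(10000):
    # discrete Laplacians a[i+1] + a[i-1] - 2*a[i] with wrap-around ends
    lap_a = np.roll(a, 1) + np.roll(a, -1) - 2.0 * a
    lap_b = np.roll(b, 1) + np.roll(b, -1) - 2.0 * b
    da = Ks * (16.0 - a * b) + Ka * lap_a
    db = Ks * (a * b - b - beta) + Kb * lap_b
    a = a + da
    b = np.maximum(b + db, 0.0)              # keep concentrations non-negative

# the peaks and valleys of b relative to a form the one-dimensional pattern
print(b.min(), b.max())
```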

In a single dimension, the simulation produces irregular peaks and valleys in the
chemical concentrations of b relative to a. In two dimensions, a similar procedure
gives spot patterns. A five-chemical system by Meinhardt [55] can be used to create
stripe patterns (Figure 2.4). Witkin and Kass proposed several modifications to
traditional reaction-diffusion to vary the types of patterns generated [90]. Their
work proposed a method of anisotropic diffusion which varied the diffusion rates of
chemicals depending on direction. They also described space-varying diffusion where
parameters of the system varied across the domain, for use in non-stationary patterns.
Turk [84] used a series of reaction-diffusion systems, described as a cascade process,
to create more complex patterns. In this approach an initial system is run, and based
on its chemical concentrations, certain regions of the system are marked as fixed.

Figure 2.4: Examples of reaction-diffusion. (a) spots in simple two chemical system,
(b) stripes in five chemical system by Meinhardt, and (c) leopard spots created
using cascade system.

A different reaction-diffusion system is then run, but chemicals are only allowed to
change outside the fixed regions. The idea is that the fixed regions act as a set
of constraints for a secondary system. This process of applying constraints on a
simulation can be repeated iteratively with varying fixed regions. Leopard spots
can easily be created with this method: an initial system lays down large spots in
the texture, and a secondary system refines each spot by adding surrounding details
(Figure 2.4).
A standard reaction-diffusion system is solved using a discrete simulation on a grid,
but it is often desirable for a texture to be rendered on a general surface. Witkin
and Kass proposed a method using anisotropic diffusion as a correction mechanism.
In this method, the output texture would appear distorted on a grid, but would
then appear correctly on a parametric surface. Turk proposed an alternative method
that distributed points on the surface of the mesh and ran reaction-diffusion
directly on this surface. This problem of rendering on a surface is a common problem
in texturing, and is discussed further in Section 2.6.4.

2.6 Exemplar-Based Textures


Exemplar-based methods rely on a sample exemplar image to synthesize a texture.
The aim is to produce an arbitrarily large output image visually similar to the sample,
but without artifacts or repetition. These methods are not procedural, but are still
widely used and well-studied.


These methods are usually limited in replicating textures which are based on a
local and stationary process. In other words, a pixel from the texture is dependent on
a fixed neighbourhood of nearby pixels, and this neighbourhood size is the same
everywhere in the image. This is referred to as the Markov random field model for
texture [87].
Another limitation is that exemplar-based methods have traditionally been slow,
which is a consequence of having to search the exemplar for each synthesized pixel or patch to
ensure similarity. Furthermore, exemplar-based methods do not typically produce a
resolution-independent texture. We consider extensions of traditional exemplar-based
methods which address these limitations in Section 2.6.5.
We discuss three major approaches for implementing exemplar-based textures.
First, we discuss pixel-based methods, where textures are generated one pixel at a
time. We next discuss patch-based methods, where textures are generated by synthe-
sizing groups of pixels at a time, but require additional care to handle when patches
overlap. We lastly discuss optimization-based approaches, where the output texture
is formulated as an energy minimization problem. We conclude with a discussion on
later works which address some of the original limitations of these methods. For a
more thorough discussion, the reader should consult the complete survey on exemplar-
based textures by Wei et al. [87].

2.6.1 Pixel-Based
Pixel-based methods operate by iteratively copying a single pixel from an exemplar
image to an output image. A well known pixel-based method is due to the work of
Efros and Leung [27], which later became the foundation of many other exemplar-
based algorithms. Its approach is simple, but still allows for the generation of many
different types of patterns.
The algorithm of Efros and Leung is initialized by first copying a random region
from the exemplar image to the output image. This region is then grown outward one
pixel at a time through a neighborhood search process. That is, for a user specified
neighbourhood (say 5×5) centered at an output pixel, a set of similar neighbourhoods
are found in the exemplar image. The output pixel is set to be the center value of an
exemplar neighbourhood randomly selected from this set. The neighbourhood size is
a parameter of the method and should be set to be the size of the biggest regular
feature [27].
To measure the similarity between output and exemplar neighbourhoods, each
known pixel value in the output is compared with the corresponding pixel in the
exemplar. The squared differences between corresponding pixels are taken, and then
multiplied with a weight depending on the spatial distance from the output pixel.
Efros and Leung used a Gaussian function so that pixels further away from the output
pixel are weighted less. The sum of the weighted values gives the distance measure.
Because only some of the pixels in an output neighbourhood may be known (the
rest have not yet been processed by the algorithm), a normalization process is
used where only the known subset of pixels is used for matching.
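
The sketch below shows this matching step; the window size, the Gaussian width, and the tolerance used to keep candidate neighbourhoods are illustrative assumptions rather than the exact settings of Efros and Leung [27].

```python
# A sketch of the Efros-Leung neighbourhood match: Gaussian-weighted sum of
# squared differences over only the already-synthesized pixels. Window size,
# Gaussian width, and candidate tolerance are illustrative assumptions.
import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))

def neighbourhood_distance(out_patch, known_mask, ex_patch, kernel):
    # compare only known output pixels, normalizing by the total weight used
    w = kernel * known_mask                   # assumes at least one known pixel
    return np.sum(w * (out_patch - ex_patch) ** 2) / np.sum(w)

def best_matches(out_patch, known_mask, exemplar, size=5, tol=0.1):
    # scan every exemplar neighbourhood and keep those within (1 + tol) of the
    # best distance; the synthesized pixel is the centre of a random candidate
    kernel = gaussian_kernel(size, sigma=size / 6.4)
    h, w = exemplar.shape
    centres, dists = [], []
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            ex_patch = exemplar[y:y + size, x:x + size]
            dists.append(neighbourhood_distance(out_patch, known_mask,
                                                ex_patch, kernel))
            centres.append(exemplar[y + size // 2, x + size // 2])
    dists = np.array(dists)
    return np.array(centres)[dists <= dists.min() * (1.0 + tol)]

# toy usage: a 5x5 output window with one known pixel, random exemplar
exemplar = np.random.default_rng(5).random((32, 32))
out_patch = np.zeros((5, 5))
known = np.zeros((5, 5))
out_patch[2, 1], known[2, 1] = 0.7, 1.0
print(len(best_matches(out_patch, known, exemplar)))
```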
The search process of this method is very expensive: because an output neighbourhood
has a varying number of known pixels for matching, a linear search through all
exemplar neighbourhoods is required. A later method by Wei and
Levoy [88] uses a fixed size output neighbourhood. This allows techniques such as
tree-structured vector quantization [88] or k-coherence [82] to accelerate matching.

2.6.2 Patch-Based
Patch-based methods operate by iteratively copying groups of pixels (patches) from
an exemplar image to an output image. The motivation for patch-based over pixel-
based methods is that there should be an improvement in quality, as pixels that are
together in the exemplar should already be correctly related to one another. The
major issue is to correctly handle joining seams of copied patches.
The approach taken by Praun et al. [65] is to have patches overlap and simply
overwrite previously synthesized pixels. By varying the shape of patches, this ap-
proach is simple but works surprisingly well for stochastic textures [88]. Liang
et al. [50] blended between overlapping patches, but this may cause blurry output in
some instances.
Instead of attempting to blend overlapping patches, alternative approaches con-
sider cutting patches to fit coherently with each other. Efros and Freeman considered
finding an optimal cut of a patch using dynamic programming [26], and this work
was later improved using graph cuts by Kwatra et al. [41]. Another approach is to
warp patches so that a pattern is continuous across patch boundaries [79, 92].

2.6.3 Optimization-Based
Optimization-based methods operate by minimizing some global energy function. The
energy function is based on the differences between exemplar and output neighbour-
hoods, and a lower energy implies a more similar texture. The energy function can
be expressed as the following equation:

$$ E(X, Z) = \sum_p |x_p - z_p|^2 \qquad (2.8) $$

In this equation, X is the output image where {x_0, x_1, ..., x_n} is a set of overlapping
output neighbourhoods, and Z is the exemplar image where {z_0, z_1, ..., z_n} is a
corresponding set of neighbourhoods in the exemplar which are most similar to each x_i.
Kwatra et al. [41] used an Expectation-Maximization (EM) like algorithm to mini-
mize E(X, Z). In the expectation step, the set of similar neighbourhoods {z0 z1 . . . zn }
remain fixed and the output texture X is solved through a least squares method. In
the maximization step, the image X is fixed and the set of similar neighbourhoods
are recomputed using tree search. The method alternates between these two steps
until convergence, or until a fixed number of iterations have occurred.
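
As an illustration of the expectation step, the sketch below recomputes each output pixel as the average of the values proposed by all matched neighbourhoods that overlap it, which is the least-squares minimizer of E(X, Z) when the z_p are held fixed; the data layout and helper names are assumptions of the sketch.

```python
# A schematic sketch of one pass of the expectation step: with the matched
# exemplar neighbourhoods held fixed, each output pixel is re-solved as the
# average of all values proposed by overlapping neighbourhoods, which is the
# least-squares minimizer of E(X, Z). Layout and helper names are assumptions.
import numpy as np

def solve_output(shape, size, matches):
    # matches: list of ((y, x), patch) pairs, where patch is the exemplar
    # neighbourhood z_p currently matched to the output window at (y, x)
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for (y, x), patch in matches:
        acc[y:y + size, x:x + size] += patch
        cnt[y:y + size, x:x + size] += 1.0
    return acc / np.maximum(cnt, 1.0)

def energy(output, size, matches):
    # E(X, Z) = sum_p |x_p - z_p|^2 over all output neighbourhoods
    return sum(np.sum((output[y:y + size, x:x + size] - patch) ** 2)
               for (y, x), patch in matches)

# toy usage on a 32x32 output with half-overlapping 8x8 neighbourhoods
rng = np.random.default_rng(6)
size = 8
matches = [((y, x), rng.random((size, size)))
           for y in range(0, 32 - size + 1, size // 2)
           for x in range(0, 32 - size + 1, size // 2)]
output = solve_output((32, 32), size, matches)
print(energy(output, size, matches))
```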
The method by Kwatra has several drawbacks. The first issue is that neighbour-
hood search for each zi is computationally expensive; each search is O(log N ) where
N is the number of neighbourhoods in the exemplar. The second issue is that blurring
occurs where output neighbourhoods overlap. These drawbacks were later addressed
by Han et al. [35] through the use of a k-coherence algorithm in both the expectation
and maximization steps.
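A compact sketch of this alternating scheme is given below, with a brute-force nearest-neighbour scan standing in for the tree search and k-coherence acceleration discussed above; the patch size, stride, and iteration count are arbitrary illustrative parameters, and the sampled coordinates are assumed to cover the output image.

```python
# Compact sketch of the alternating minimization of E(X, Z).
import numpy as np

def texture_optimization(exemplar_patches, out, patch, stride, iters=5):
    """exemplar_patches: (M, patch*patch) flattened exemplar neighbourhoods;
    out: 2-D float output image, modified in place."""
    h, w = out.shape
    coords = [(y, x) for y in range(0, h - patch + 1, stride)
                     for x in range(0, w - patch + 1, stride)]
    for _ in range(iters):
        # Expectation-like step: most similar exemplar neighbourhood per x_p.
        matches = []
        for (y, x) in coords:
            xp = out[y:y + patch, x:x + patch].ravel()
            d = ((exemplar_patches - xp) ** 2).sum(axis=1)
            matches.append(exemplar_patches[int(np.argmin(d))])
        # Maximization-like step: the least-squares solution averages the
        # overlapping matched neighbourhoods at every pixel.
        acc = np.zeros_like(out)
        cnt = np.zeros_like(out)
        for (y, x), zp in zip(coords, matches):
            acc[y:y + patch, x:x + patch] += zp.reshape(patch, patch)
            cnt[y:y + patch, x:x + patch] += 1
        out[:] = acc / np.maximum(cnt, 1)
    return out
```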

2.6.4 Surface Texture Synthesis


The exemplar-based methods we have described assume that the output is generated
on a discretized grid, but it is often desirable to render to a general surface. A
parametric mapping can be done between the discretized grid and the surface, but
this would introduce undesirable distortion and texture seams. We instead consider
methods which generate output texture specific to a particular surface. We describe
the work of Turk, but other methods such as those by Ying [94] or Soler [79] can also
be used.

Turk first distributes a set of points regularly along the mesh surface. This is
done by randomly placing points on each triangle in the mesh, and using a point
relaxation scheme where neighbouring points repel each other. Once points are evenly
distributed, a new mesh is created from this point set. This secondary mesh
ensures that points are evenly spaced for texturing.
After creating the secondary mesh, a vector field is defined on the mesh surface,
and is used to control the orientation of the texture. The field is defined by having a
user specify directions at several locations on the mesh, and using a diffusion method
to propagate the field everywhere on the surface. A second vector field is then created
by rotating this field 90°. Taken together, these two fields are used to define a local
coordinate system.
Once a local coordinate system is defined everywhere along the surface, texture
synthesis can proceed similar to the pixel and patch methods described earlier. Neigh-
bourhoods are defined along a local coordinate system, instead of the canonical Eu-
clidean axes. Turk used this method in a hierarchical manner to improve performance
and texture quality.

2.6.5 Extensions
Traditional exemplar-based methods are costly in several ways. The methods we have
described require the entire output image to be generated at once, which implies a non-
compact representation. This also impacts performance because often only a subset of
a texture needs to be used for rendering. These limitations led to the development
of texture-on-the-fly methods, which provide random access to synthesized texture.
Later methods by Wei and Levoy [89] and Lefebvre and Hoppe [47] support random
access synthesis of texture.
Exemplar-based methods are also usually done at particular sampled resolutions
where a separate interpolation process is done during rendering. Work by Wang
et al. [86] considered directly synthesizing a vector representation of an exemplar
texture. This was done by using a signed distance function to define region boundaries,
and by representing the colors within regions using compact radial basis functions.
The methods we described so far have only generated stationary patterns, but
other methods can generate globally-varying textures. This can be done with the
introduction of a control map, which describes the global behaviour of the texture.
The work of Ashikhmin [4] synthesized non-stationary textures by allowing a user to

provide a rough color map.

2.7 Implicit Modeling Methods


Instead of using automatic methods to generate texture, detail on surfaces can be gen-
erated directly using traditional modeling techniques. Implicit modeling techniques
are particularly useful because they provide a resolution-independent representation
where blending is defined in a natural way.
An implicit function f (x, y) in R2 can be described as the composition of a distance
function d(x, y) with a fall-off filter function g(r) where r is the distance to some
primitive.
f (x, y) = g ◦ d(x, y) (2.9)

As an example, let us consider a point primitive with fall-off filter g(r) = 1/r2 .
The field is maximal at the point, and rapidly diminishes as the distance from the
point increases. If we place several points randomly in the plane and sum their field
functions, we can create a texture by mapping the field value to a color.
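For illustration, the following sketch splats point primitives with the fall-off g(r) = 1/r² over a pixel grid and maps the summed field to a grayscale value; the clamping constant used to avoid division by zero at the points themselves is our own choice.

```python
# Illustrative splatting of point primitives with fall-off g(r) = 1/r^2.
import numpy as np

def point_splat_texture(points, size=256, eps=1e-3):
    """points: (n, 2) array of positions in [0,1]^2; returns a size x size image."""
    ys, xs = np.mgrid[0:size, 0:size] / float(size)
    field = np.zeros((size, size))
    for px, py in points:
        r2 = (xs - px) ** 2 + (ys - py) ** 2
        field += 1.0 / np.maximum(r2, eps)       # g(r) = 1/r^2, clamped near the point
    return field / field.max()                   # map the summed field to [0, 1]

rng = np.random.default_rng(0)
texture = point_splat_texture(rng.random((20, 2)))
```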
General splatting of point primitives is an early method of generating texture,
and a thorough discussion is given by Schachter [72]. Partition of unity textures can
be considered as splatting point primitives over the domain with asymmetric field
functions.
We can generalize splatting to include other types of primitives. Other primitives
such as line-segments, triangles or curves can be used, so long as distance to the
primitive is well defined.
We can also generalize how fields are composed. A simple example is to take the
minimum or maximum of the fields. Ricci [69] proposed summing fields exponentiated
to the nth power and then taking the nth root. Another method combines fields of
primitives in a hierarchical manner by combining two primitive fields at a time [93].
To obtain more complex texture, an artist can specify the exact implicit primitives
and blending relations to use.
The subject of implicit modeling covers many techniques, and include skeletal
implicit modeling, offset surfaces, level sets, variational surfaces, and algebraic sur-
faces [77]. Implicit modeling is usually discussed in the context of surface modeling,
where the zero set of the implicit field defines the boundaries of a three-dimensional
model surface. For example, Galbraith et al. [30] previously used implicit modeling

techniques to model the Murex cabritii sea shell. Several examples using implicit
modeling are described by Gomes [32]. These examples show that implicit methods
are able to generate very complex surfaces, but require detail to be explicitly specified.
The traditional strength of implicit surfaces is that they naturally define the entire
interior of a surface; field values greater than some threshold are said to belong to
the surface. Point classification (determining whether a point is inside, outside or on
the boundary of a surface) is also straightforward. Bloomenthal et al. [14] gives a
thorough discussion of implicit modeling in comparison with alternative parametric
representations.
Using similar approaches to Galbraith et al. and Gomes, complex texture can be
created. Instead of using the zero set of a field as a boundary surface, field values
of the primitives are mapped to colors. Smooth color transitions are obtained by
blending of field values. A key difference is that the whole implicit field, not only the
zero set, must be well behaved. This concern is revisited for smooth distance fields
in Chapter 3.
Our approach, described next, uses many of the techniques found in implicit mod-
eling. We generate complex texture, but without explicit specification of primitives.

2.8 Smooth Distance Field Textures


We described several methods for procedurally generating texture. Noise-based func-
tions, such as Perlin Noise and Gabor Noise, produce random patterns approximately
bandlimited to a range of frequencies. Worley noise and partition of unity textures
generate Voronoi-like cellular patterns using distances from randomly distributed
points. Reaction-diffusion methods generate organic and smooth patterns through
a simulation of chemical processes.
We also described exemplar-based methods for generating texture. These methods
are not procedural, but use a user supplied exemplar image. These methods can
generate a large variety of texture, but usually require a discretization of the surface,
and are typically slower than procedural approaches.
Lastly, we have considered modeling texture directly using implicit surfaces. Im-
plicit surfaces provide a resolution independent representation, and provide a natural
method for combining primitives. These methods have been used successfully to
model organic surfaces, but require an artist to explicitly specify geometry. Implicit

surfaces can be used for texture synthesis by mapping field values to colors.
Our method combines the advantages of several approaches: we are able to generate
smooth, continuous, and organic patterns. Like partition of unity textures, our
method is parameterized only by a set of labeled points on a mesh, but is capable of
generating more general curvilinear shapes. Our method is procedural, smooth, and
does not require a grid discretization.
The solution we provide relies on the blending of smooth normalized distance
fields between primitives, and is similar to the previously described implicit modeling
methods. We review existing methods for creating smooth distance fields in the next
section.
Chapter 3

Smooth Signed Distance Functions

3.1 Definition
Computing the distance from an object is a fundamental operation in implicit model-
ing. Section 2.7 described how an implicit field is constructed by composing a distance
function with a fall-off filter function, with the boundary of a surface defined as some
isocontour of the created field. For some applications, including our method for cre-
ating texture, distance fields are required to be smooth and everywhere differentiable.
This chapter reviews different methods for defining a smooth signed distance function
to create such a field.
A smooth signed distance function can be described as a smooth approximation
of the exact signed distance function. A formal definition of the exact signed distance
function f (p, Ω) of a region Ω can be expressed by the following equation:

f (p, Ω) = sign(p, Ω) · dist(p, ∂Ω), (3.1)

where p is a point in the domain, Ω is the defined region, and ∂Ω is the boundary of
that region. The function dist(p, ∂Ω) is the Euclidean distance to the region boundary
and is given by

    dist(p, ∂Ω) = inf_{x ∈ ∂Ω} ∥x − p∥.    (3.2)


The function sign(p, Ω) is the region’s sign function, and is given by





                 ⎧ −1   if p ∈ Ω
    sign(p, Ω) = ⎨  0   if p ∈ ∂Ω          (3.3)
                 ⎩ +1   otherwise.

That is, the sign of the distance function is negative in the interior of a region,
zero at the boundary, and positive otherwise. The zero set of a distance field is the set
of points that lie on the boundary of the surface. The exact signed distance function
is defined everywhere in the domain for one or more regions.
A limitation of using an exact signed distance function is that the produced field
may contain gradient discontinuities, even if the boundary of the surface is smooth.
For example, if the boundary is a circle, there exists a single point of gradient discon-
tinuity at the circle’s center. If the boundary is an ellipse, this gradient discontinuity
stretches into a line.
More generally, gradient discontinuities exist along the medial axis of the region.
The medial axis is where a point is equidistant to two or more points of the region
boundary. Described in terms of the smoothness of a function, an exact distance
function is C 0 continuous everywhere, but has C 1 discontinuities at the medial axis
of its associated region. For any location where the gradient is well-defined, the
gradient magnitude is unity, and the gradient orientation is in the same direction as
the normal at the nearest point of the region boundary.
A smooth signed distance function is an approximation of the exact distance
function that is smooth everywhere, except possibly at the region boundaries. There
are several methods for constructing smooth signed distance functions, and different
methods are used depending on the application. For our purposes, we desire distance
functions that are continuous and fast to evaluate. We also desire our distance func-
tion to be well-behaved, in the sense that the field does not contain widely varying
values some fixed distance away from the region boundary. In the next section, we
will describe how to characterize a signed distance function by its behaviour at the
zero set.
CHAPTER 3. SMOOTH SIGNED DISTANCE FUNCTIONS 27

3.2 Normalized Approximation


The work of Biswas and Shapiro [11] characterized a smooth distance function by its
derivative values at the zero set. Recall that for an exact Euclidean distance function,
for every point where the gradient is well defined, the magnitude of the gradient is
unity and all higher derivatives in the direction of the gradient vanish. For a smooth
distance function, the gradient may be unity at the zero set, but there must be some
non-zero higher derivatives to create a smoothed field.
Specifically, let w be a smooth approximation to an exact distance function, and
let v̄ be the vector of the gradient field at the zero set. As described by Shapiro [75],
the smooth distance function w is said to be normalized to the mth order if higher
order derivatives of degree up to m vanish. In other words, if w is normalized to order
m, then
    ∂w/∂v̄ = 1;    ∂^k w / ∂v̄^k = 0,    k = 2, 3, . . . , m.    (3.4)
We can describe how well a smooth distance field approximates a corresponding exact
distance field by considering the degree of normalization. The degree of normalization
is, however, a limited description of the behaviour of the function; normalization is a
local property of the zero set and does not describe behaviour away from the boundary.
For our method, we desire distance fields to have a gradient of unity at the boundary,
but also to be well-behaved away from that boundary.
In the remainder of this chapter, we will discuss different methods for constructing
smooth signed distance fields. We will describe different interpolation, potential,
discrete, and constructive methods, and we frame our discussion on whether a method
can produce a continuous, smooth, and well-behaved distance field.

3.3 Interpolation-Based Methods


Interpolation-based methods construct a smooth distance field by sampling an exact
distance field at a set of points, and then smoothly interpolating that point set.
Interpolation of distance values from points is closely related to the scattered data
interpolation problem [2]. Given a set of unstructured value-location pairs, scattered
data interpolation obtains a function over the entire domain which interpolates the
given data points.
More formally, given a set of N-dimensional data points {x1 x2 . . . xn } associated

with scalar data values {v1 v2 . . . vn }, scattered data interpolation obtains a function
f (p) : RN → R for all p such that f (xi ) = vi for i = 1 . . . n.
Note there can be infinitely many such solutions for this interpolation so long
as the function passes through the prescribed data points, and therefore the quality
of the interpolation depends on the application. There are many scattered data
interpolation techniques, but we focus on the well-known inverse-distance, natural-
neighbour, and radial basis interpolation methods. Because our focus is on generating
texture, we restrict ourselves to interpolating over the image plane.

3.3.1 Inverse-Distance Interpolation


A well-known and simple method of scattered data interpolation is Shepard’s
method [76], where the interpolant is defined by
           ⎧ ∑_k ( ∥p − xk∥^(−M) / ∑_j ∥p − xj∥^(−M) ) vk    if ∥p − xi∥ ≠ 0 for all i
    f(p) = ⎨                                                                             (3.5)
           ⎩ vi                                               if ∥p − xi∥ = 0 for some i,

where vi is the value associated with data point xi, ∥p − xi∥ denotes the Euclidean
distance between the query point p and the data point xi, and M is a positive
constant.
This method is simple and extends to arbitrary dimensions, but has several lim-
itations as mentioned by Shepard. A major drawback of this method is that it is
a global interpolant; that is, the interpolated value depends on all data points. A
global interpolation has performance implications because the interpolation complex-
ity becomes linear in the number of data points. A modified Shepard’s method was
later proposed by Renka [68], and interpolates using only a local set of data points.
In the method by Renka, each data point contributes to the interpolation calculation
only within a given radius of the datapoint. Both methods can be described within
a framework for interpolation, which we next describe.
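A direct transcription of Shepard's inverse-distance interpolant (Equation 3.5) might look as follows; the choice M = 2 and the tolerance used to detect a coincident data point are illustrative.

```python
# Shepard's inverse-distance interpolation (Equation 3.5).
import numpy as np

def shepard(p, xs, vs, M=2):
    """p: query point (2,); xs: data points (n, 2); vs: data values (n,)."""
    d = np.linalg.norm(xs - p, axis=1)
    hit = np.isclose(d, 0.0)
    if hit.any():                       # query coincides with a data point
        return vs[hit][0]
    w = d ** (-M)                       # inverse-distance weights
    return np.sum(w * vs) / np.sum(w)

xs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vs = np.array([0.0, 1.0, 2.0])
print(shepard(np.array([0.25, 0.25]), xs, vs))
```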

3.3.2 Interpolation Framework


Shepard’s method, as well as its related methods, can be described in a more general
partition of unity framework for interpolation [2]. In this framework, an interpolated

value is obtained as a weighted combination of data points given by



    f(p) = ∑_i wi(p) vi,    (3.6)

where wi is a non-negative function which weights the contribution of its associated


data point vi . In addition, the sum of the weights must be equal to unity for any
point in the domain, and this requirement can be expressed as

    ∑_k wk(p) = 1.    (3.7)

The set of weights are said to form a partition of unity. Shepard’s method is a
special case of this framework where the weight functions are

    wi(p) = ∥p − xi∥^(−M) / ∑_j ∥p − xj∥^(−M).    (3.8)

Different methods of interpolation are obtained by using different weight functions.


Another well-known interpolation method is natural neighbor interpolation [78],
which relies on the computation of a Voronoi tessellation. The weight for a datapoint
is calculated by finding how much of the data point’s Voronoi area is taken away when
a query point is added to the Voronoi tessellation. Unlike Shepard’s method, natural
neighbour interpolation provides a local interpolation without needing to specify a
radius for each data point. Natural neighbor interpolation is useful when data points
are unevenly spaced [46].
In partition of unity textures, the weight function of each data point was ob-
tained as the product of axes (Section 2.4.2), and is another example of an interpola-
tion method. The procedure of interpolating using mesh axes was first described by
Runions and Samavati [70], and was considered in the context of generalizing spline
curves and surfaces. The fields of each data point in partition of unity interpola-
tion produces smooth and rounded shapes, which Caron [17] used to produce organic
texture by interpolating between color values.

3.3.3 Radial Basis Functions


Several interpolation methods, including inverse-distance and natural neighbor inter-
polation, use weight functions with the Kronecker delta property:

               ⎧ 1   if i = j
    wi(xj) =   ⎨                     (3.9)
               ⎩ 0   otherwise.

This property provides exact interpolation at the data point, but also results in the
interpolation function having gradient zero at that point and appearing flat. Radial
basis interpolation is an alternative method of interpolation that does not appear flat
at the sampled data points.
In radial basis interpolation [15], a function ψ(r) is selected as a basis to describe
a surface. Every data point is associated with a basis function, and the surface is
described as the sum of these basis functions over all points. The basis functions are
smooth and radially symmetric. Following the earlier notation, let xi be the data
point with value vi and let ψ(r) = exp(−r2 /k 2 ) be the basis function where k is a
constant. With the constraint that the function must interpolate the data points, the
interpolation function is given by

    f(p) = ∑_i ci ψ(∥p − xi∥),    subject to f(xi) = vi.    (3.10)

The set of constraints f(xi) = vi can be written in matrix form as Ac = v, where

          ⎡ ψ(∥x1 − x1∥)   · · ·   ψ(∥x1 − xn∥) ⎤
      A = ⎢      ...        ...         ...     ⎥                (3.11)
          ⎣ ψ(∥xn − x1∥)   · · ·   ψ(∥xn − xn∥) ⎦

      c = [c1 . . . cn]ᵀ                                          (3.12)
      v = [v1 . . . vn]ᵀ.                                         (3.13)

Solving the above system gives the particular radial basis interpolation.
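As a small worked example, the sketch below assembles and solves the dense system Ac = v of Equations 3.11-3.13 for the Gaussian kernel ψ(r) = exp(−r²/k²); the kernel width k and the sample data are arbitrary.

```python
# Gaussian radial basis interpolation: fit coefficients, then evaluate.
import numpy as np

def rbf_fit(xs, vs, k=0.5):
    """xs: data points (n, d); vs: values (n,).  Returns the coefficients c."""
    r = np.linalg.norm(xs[:, None, :] - xs[None, :, :], axis=-1)
    A = np.exp(-(r ** 2) / k ** 2)        # psi(r) = exp(-r^2 / k^2)
    return np.linalg.solve(A, vs)

def rbf_eval(p, xs, c, k=0.5):
    r = np.linalg.norm(xs - p, axis=1)
    return float(np.sum(c * np.exp(-(r ** 2) / k ** 2)))

xs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vs = np.array([0.0, 1.0, 1.0, 0.0])
c = rbf_fit(xs, vs)
print(rbf_eval(np.array([0.5, 0.5]), xs, c))
```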
The choice of using the Gaussian function ensures that this system is positive
definite, and therefore has a unique solution. The Gaussian radial basis interpolation

we described produces a smooth surface which interpolates the control points, and can
reproduce any surface that can be represented as scaled and shifted Gaussians. Simple
surfaces, such as a plane, cannot be represented in this manner. However, radial basis
interpolation can be extended so that it is possible to reproduce an underlying surface
of fixed polynomial degree [37].
Other kernels besides the Gaussian kernel can be considered. Thin plate splines
have the property of minimizing the bending energy of an interpolated surface, and
are defined using the radial basis kernel ψ(r) = r2 log(r).
Radial basis functions are a very common data interpolation technique, but have
also been used for implicit modeling. Variational implicit surfaces [85] use radial basis
functions to define the surface boundaries in implicit objects. In a variational implicit
surface, values assigned to data points are used as constraints to an implicit model.
Positive values assigned to data points are used as constraints for the interior of the
surface, zero value data points are used for the boundary of the surface, and negative
values are used to define the exterior of the surface. Normal constraints are defined
using a zero value and a positive value data point. A three dimensional interpolation
of all data points defines the surface. Variational implicit surfaces have been used in
many sketch-based algorithms [3, 38] to define a three-dimensional model.
A major limitation of using radial basis functions is that determining the weights
for interpolation requires solving a dense matrix, which would be computationally
expensive for a large number of data points. A faster alternative is to use compactly-
supported radial basis functions, where only a sparse matrix needs to be solved [57].
Other approaches divide the points into separate groups and solve each group sepa-
rately, referred to as partition of unity techniques [59].
Depending on the choice of kernel, another limitation is that radial basis functions
are not well-behaved; field values can take on a wide range of values away from the
interpolated data points. This behaviour is undesirable for applications that use the
magnitude, and not only the sign, of the distance field. Our method for creating
texture does not use radial basis functions because it relies on the signed magnitude of
distance values to blend between texture elements.
In implicit surface modeling, if an insufficient number of constraints are specified,
the variations in field behaviour may create unintended, spurious surfaces. The cre-
ation of spurious surfaces from variational implicit surfaces was discussed in detail
by Mann [52]. Their work found that using normal constraints produced reasonably

smooth implicit surfaces, but still found instances of spurious surfaces being unex-
pectedly created.
A common concern of all interpolation-based methods is that the accuracy and
smoothness of the distance field will depend on the sampling density of the data
points. The next method we discuss does not create a distance field from sampled
points, but instead derives the distance field from the boundary of the associated
surface.

3.4 Potential-Based Methods


Potential-based methods [11] treat the region boundary as a charged surface. In this
model, the field is maximal at the boundary and dissipates away from the surface. A
single point charge with an inverse distance kernel would contribute 1/rN to the field
at a query point, where r is the distance from the query point to the point charge,
and N is some constant. The total field value at any point p is given by the line
integral

    f(p) = ∫_S (1 / r(p)^N) dS,    (3.14)

where S is the boundary of the surface. This function produces a value of infinity at
the boundary, and the field value dissipates further away. We can use the reciprocal
of this function as a general distance field. A potential-based distance field has been
previously used to quickly compute an approximate medial axis [1].
An alternative potential-based method uses a double-layer potential model. In
this alternative model, the boundary of a surface is defined as having a layer of
positive point charges on one side, and an opposing layer of negative point charges
on the other. For the particular case of N = 1, the potential value corresponds to
the normalization function used in transfinite mean-value interpolation [8]. Smooth
signed distance fields using double-layered potentials were later generalized by Belyaev
et al. [8].
Potential field methods are attractive for several reasons. Field values can be
solved analytically for simple boundaries, such as line segments or circular arcs. Fur-
thermore, because potentials are defined as an integral over a point set, the field does
not suffer from the bulging [12] common in many implicit modeling techniques. How-
ever, these fields are flat on the boundary (the derivatives vanish on S) and cannot be

normalized. Furthermore, numerical instabilities occur because field potentials ap-


proach infinity towards the zero set. Despite these limitations, further investigation
by Belyaev [8] suggests that there is promise in applying potential-based methods for
implicit modeling.

3.5 Discrete Methods


Previously described methods provided an analytic solution for the field value ev-
erywhere in R2 . Higher quality smooth distance fields can be obtained at a cost of
discretizing the domain onto a fixed grid.
Discrete methods can be used to obtain a smooth distance field. A fast and
simple method is to use convolution of the exact signed distance field. To create a
smooth distance field using convolution, the exact signed distance field is computed
at a particular resolution, say n × m, on a fixed grid. This is done by rasterizing
the region boundary onto the grid, and then computing the distance transform. The
distance transform will give the Euclidean distance to the nearest boundary pixel
for every pixel in the image. We can compute the distance transform in O(nm) time
using the approach by Felzenszwalb [29]. The resulting distance field is then smoothed
using a convolution given by
    g(x) = (w ∗ f)(x) = ∫_{R²} w(x − p) f(p) dp,    (3.15)

where f (x) is the original exact distance field, p is a displacement vector from x,
and w is a smooth kernel such as a Gaussian. The smoothness of the distance field
is parameterized by the choice and width of the kernel. A notable drawback of this
approach is that a general convolution does not preserve the zero set of the distance
field.
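A minimal version of this discrete pipeline, assuming a boolean raster mask of the region and SciPy's Euclidean distance transform, is sketched below; as noted above, the Gaussian convolution will in general shift the zero set.

```python
# Discrete smooth distance field: signed distance transform plus Gaussian blur.
import numpy as np
from scipy import ndimage

def smooth_signed_distance(mask, sigma=3.0):
    """mask: boolean array, True inside the region."""
    inside = ndimage.distance_transform_edt(mask)      # distance to the exterior
    outside = ndimage.distance_transform_edt(~mask)    # distance to the interior
    sdf = outside - inside                             # negative inside, positive outside
    return ndimage.gaussian_filter(sdf, sigma)         # smoothing (does not preserve the zero set)
```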
Preservation of the zero set can be done with a variable radius convolution where
the width of the kernel decreases based on the distance to the zero set. Sanchez et
al. [71] provided a method of variable radius convolution of a signed distance field,
given by

    g(x) = ∫_{R²} f(x − p·h(x)) w(p) dp,    (3.16)

where f (x) is the exact distance field, p is a displacement vector from x, and w is
a smooth kernel. The function h(x) is used to control the kernel size, and smoothly

varies from 0 to 1 away from the zero set. One way to define h(x) is to use the
smoothstep function such that

    h(x) = 3r(x)² − 2r(x)³    (3.17)

    r(x) = min(|f(x)|/fc, 1),    (3.18)

where fc is some constant. More sophisticated methods of constructing smooth dis-


crete distance fields rely on the solution of particular partial differential equations.
Crane considered the use of heat flow to compute the signed distance to a mesh [23].
Belyaev et al. considered several variations of computing smooth distance fields based
on the Poisson, screened Poisson, and p-Poisson equations [9]. Although we desire
a continuous representation of a smooth signed distance field, high quality discrete
distance fields can be obtained using the aforementioned approaches.

3.6 Constructive Methods


The approach that we use follows from the theory of R-functions [74], which is a
method of constructing implicit functions for given geometric objects. An R-function
can be informally defined as a function whose sign (as positive or negative) is com-
pletely determined by the sign of its arguments. A change of sign of an R-function
can only happen if a change of sign occurs in one or more of its arguments. A simple
example of an R-function is the function

    f(x, y) = x + y + √(x² + y²).    (3.19)

We see that the function is negative only if both arguments are negative. If we treat
the sign of the function as a binary variable, this R-function would correspond to
logical disjunction (x ∨ y). In general, R-functions allow applying logical operators
(such as conjunction, disjunction and negation) onto the set of real numbers. Complex
shapes can then be composed by applying R-functions onto a set of simple primitives.
We now describe the method by Biswas and Shapiro [11] to construct a smooth
distance field using R-functions. The method assumes that the region boundary is
composed of a set of connected line segments.
A smooth and continuous distance field is formed by creating an implicit field
from each line segment of the boundary, and then applying a separate R-function

operation to join the implicit fields together. The implicit field for a single line
segment is created using the R-function intersection between the implicit fields of an
infinite line and a circular disk.


Figure 3.1: The signed distance field of an infinite line (a) is intersected with the signed
    distance field of a circular disk (b), producing a smoothed field (c). Isocontours
    of the produced fields are shown on the bottom row.

The implicit field for a single line segment is constructed as follows. For a line segment
ℓ = ((x1, y1), (x2, y2)), with length d = √((x2 − x1)² + (y2 − y1)²), we construct
the scalar field

    f(x, y) = (1/d) ((x − x1)(y2 − y1) − (y − y1)(x2 − x1)).    (3.20)

The field f (x, y) is the signed distance function for an infinite line passing through
the endpoints of ℓ, and has a unit gradient everywhere. We next define the circular

implicit field

    t(x, y) = (1/d) [ (d/2)² − (x − (x1 + x2)/2)² − (y − (y1 + y2)/2)² ].    (3.21)

The field t(x, y) is centered at the midpoint of ℓ, and has its zero set passing through
ℓ’s endpoints. The circular field can be derived from the general equation of a circle
(x − x0 )2 + (y − y0 )2 = r2 . The factor 1/d is used so that the boundary is normalized
to the first order.
We intersect the circular field t(x, y) with the infinite line field f (x, y) to produce
the implicit field for the line segment. Shapiro described a method to intersect a field
w1 with another field w2 to give a combined field w that is normalized to the first
order, given by

    w(x, y) = √( w1(x, y)² + (|w2(x, y)| − w2(x, y))² / 4 ).    (3.22)

Making the substitution w1 = f and w2 = t gives the normalized field function for
the line segment h, and we have

    h(x, y) = √( f(x, y)² + (|t(x, y)| − t(x, y))² / 4 ).    (3.23)

We show the procedure of creating the implicit field for a line segment in Figure 3.1.
For every line segment of the boundary, we apply the above procedure. Let L =
{h1 h2 . . . hn } be the field values obtained for the set of boundary line segments. To join
the implicit fields of the line segments together, we use a generalized R-equivalence
operation. The R-equivalence operation of order m is defined as

1
d(h1 , ..., hn ) := √ . (3.24)
1 1 1
m
hm
+ hm
+ ... + hm
1 2 n

The result of applying the R-equivalence operation is an unsigned distance field.


To obtain a signed distance function, a point-containment test must be used on the
original region to determine the sign of the field. The sign is negative in the interior
of a region, zero at the boundary, and positive everywhere else. An example of a
smooth distance field of a polygon is shown in Figure 3.2.


Figure 3.2: Joining implicit fields. (a) The smooth distance field of a polygon, and
(b) isocontours of the produced field.

The distance function of Figure 3.2 is reasonably well-behaved, although the gra-
dient magnitude varies away from the zero set. In the next chapter, we will use smooth
signed distance functions created from R-functions to generate texture.
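To make the construction of Section 3.6 concrete before we apply it, the following sketch evaluates the normalized segment field of Equations 3.20-3.23 on a grid and joins the segment fields of a polygon with the R-equivalence operation of Equation 3.24; the small clamping constant guards against division by zero on the boundary, and the final point-containment sign test is omitted.

```python
# Constructive smooth (unsigned) distance field from boundary line segments.
import numpy as np

def segment_field(x, y, p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    d = np.hypot(x2 - x1, y2 - y1)
    f = ((x - x1) * (y2 - y1) - (y - y1) * (x2 - x1)) / d      # infinite line, Eq. 3.20
    t = ((d / 2.0) ** 2 - (x - (x1 + x2) / 2.0) ** 2
                        - (y - (y1 + y2) / 2.0) ** 2) / d      # trimming disk, Eq. 3.21
    return np.sqrt(f ** 2 + (np.abs(t) - t) ** 2 / 4.0)        # normalized segment, Eq. 3.23

def join_fields(fields, m=2):
    s = sum(1.0 / np.maximum(h, 1e-12) ** m for h in fields)
    return 1.0 / s ** (1.0 / m)                                # R-equivalence, Eq. 3.24

# Unsigned smooth distance to the boundary of the unit square.
xs, ys = np.meshgrid(np.linspace(-1, 2, 256), np.linspace(-1, 2, 256))
corners = [(0, 0), (1, 0), (1, 1), (0, 1)]
segments = list(zip(corners, corners[1:] + corners[:1]))
distance = join_fields([segment_field(xs, ys, a, b) for a, b in segments])
```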
Chapter 4

Smooth Signed Distance Field Textures

Chapter 2 described several different procedural methods for creating texture. In


particular, we described Worley noise and partition of unity textures as methods for
creating different cellular patterns.
Worley noise operates by taking linear combinations of distances from points, and
different textures are obtained by changing how point distances are weighted in the
texture function. Textures produced using Worley noise are everywhere continuous,
but have gradient discontinuities wherever the distance calculation switches from
using one feature point to another. Partition of unity textures is another method to
create cellular patterns. A texture is created by blending a set of convex fields based
on the points of a mesh. Unlike Worley noise, textures produced with partition of
unity are everywhere smooth.
Worley noise and partition of unity textures both use a set of points to create
texture. Different textures are created by either modifying the point distribution, or
by changing a small set of input parameters. While many different kinds of textures
can be created, neither Worley noise nor partition of unity textures can create smooth
non-convex shapes, such as those found in certain fish and lizards. Reaction-diffusion
textures are capable of generating various spot and stripe patterns, but require dis-
crete simulations.
In this chapter, we present our procedural method for creating organic patterns.
Like partition of unity textures, our method uses a mesh to specify the texture.
Instead of defining an implicit field for every point, we use the edges of a mesh to
define implicit fields based on distances to line segments. Unlike previous methods,
we are able to generate smooth, non-convex shapes in an output texture. Our method
has a continuous and compact representation.


4.1 Algorithm
Our method can be broken down into six steps: mesh generation, label assignment,
point grouping, contour definition, field creation, and region merging. Each step of
our method is shown in Figure 4.1.


Figure 4.1: From left to right, top to bottom, the six major steps of our algorithm:
(a) mesh generation, (b) label assignment, (c) point grouping, (d) contour def-
inition, (e) field creation, and (f) region merging. Each field created in step (e)
is combined together to form the texture in step (f).

We start by constructing a mesh over the domain of our desired texture. We create
the mesh by distributing a set of points on the plane, and applying a triangulation
to the produced point set.
We next assign a label to each triangulated point of our mesh. A label is defined
as a unique identifier with an associated color. The color of the labels are used to
determine the colors of the texture.

Once labels are assigned to the points of the mesh, we divide the points into
different groups. Each group consists of points which all share the same label, and
are all connected to each other in the mesh. An implicit region will be created for
every group.
To define an implicit region, we derive two different sets of mesh edges from each
group of points. The two sets of edges define the interior and exterior contour. The
interior contour connects outer points of the group together, while the exterior contour
connects points that are neighbours of the group. The implicit region is defined to
be between the interior and exterior contour.
We then construct a smooth signed distance field for each contour. We define an
implicit region by blending together the fields of both contours. By using a pair of
distance fields to define a region, we are able to create an implicit field that is C 1
continuous.
The implicit regions of all groups are combined together to form a texture. A
simple binary texture can be created by separating regions into either background or
foreground. Pixels within any foreground region are assigned the same color, and all
remaining pixels are assigned an alternate color.
Because each region is defined using an implicit field, we can also create more
sophisticated textures by blending regions together, or by expanding or contracting
the boundaries of different regions. Sections 4.1.1 to 4.1.6 give a detailed explanation
of each step of our algorithm, and Sections 4.2 to 4.6 describe the effects of varying
the different algorithm parameters.

4.1.1 Mesh Generation


The first step of our algorithm distributes a set of points with labels on the plane,
and triangulates those points to form a mesh. The edges of the mesh will later be
used to define region boundaries in the texture.
A simple method for creating a mesh is to distribute points in a Poisson disk
distribution, and then to triangulate those points using a Delaunay triangulation. A
Poisson disk distribution is defined as the limit of a uniform sampling process with a
minimum-distance rejection criterion [36], and the Delaunay triangulation maximizes
the minimum angle of the triangles in the mesh.
A mesh produced using a Poisson disk distribution and a Delaunay triangulation
will have triangles that are all roughly the same size. Triangles with small angles will not be

produced. Our method uses this type of mesh to generate many different unstructured
patterns, such as the patterns found in certain toads and frogs. Alternative mesh
construction methods may be more suitable depending on the type of texture we
want to generate. Section 5.4 discusses how to create smooth patterns using more
regular point distributions.
To simplify the discussion of our method and the various parameters, we assume
that points are distributed in [0,1]2 in R2 with the origin in the top-left corner. We
will create a square texture by regularly sampling from within this unit square.
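As one possible realization of this step, the sketch below uses simple dart-throwing rejection sampling to approximate a Poisson disk distribution in [0,1]² and triangulates the points with SciPy's Delaunay routine; the minimum distance d and the number of attempts are illustrative, and a faster sampler such as Bridson's algorithm could be substituted.

```python
# Illustrative mesh generation: Poisson-disk-like sampling plus Delaunay triangulation.
import numpy as np
from scipy.spatial import Delaunay

def poisson_disk(d=0.05, attempts=20000, seed=0):
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(attempts):
        p = rng.random(2)
        if all(np.hypot(*(p - q)) >= d for q in pts):   # reject points closer than d
            pts.append(p)
    return np.array(pts)

points = poisson_disk()
mesh = Delaunay(points)        # mesh.simplices lists the triangles of the mesh
```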

4.1.2 Label Assignment


After we have created our mesh, every point is assigned a label. A label is a unique
identifier with an associated color. The final texture will be colored depending on the
colors of the labels.
We can produce different textures by varying the labeling strategy. A simple
labeling strategy is to randomly assign a point one of two different labels, which will
give a simple organic pattern. We can produce different textures by changing how
labels are assigned to points, and we discuss different labeling strategies in Section 4.4.
More sophisticated labeling strategies traverse the edges of the mesh to produce
various spot and stripe patterns.

4.1.3 Point Grouping


Once labels are assigned to the points on the mesh, we divide the points into different
groups. Each group of points consists of points which all share the same label, and
are all connected to each other in the mesh.
More formally, let G be the graph of the original triangulation, and let G′ ⊆ G
be a subgraph obtained by deleting all edges whose endpoints are assigned different
labels. The groups consist of the connected components in G′ .
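A small sketch of this grouping step is shown below: edges whose endpoints carry different labels are discarded, and the connected components of the remaining graph G′ are collected with a breadth-first traversal. The edge list and label array are assumed to come from the triangulated, labeled mesh.

```python
# Grouping labeled mesh points into connected components of G'.
from collections import defaultdict, deque

def group_points(num_points, edges, labels):
    adj = defaultdict(list)
    for a, b in edges:
        if labels[a] == labels[b]:          # keep only same-label edges (graph G')
            adj[a].append(b)
            adj[b].append(a)
    seen, groups = set(), []
    for start in range(num_points):
        if start in seen:
            continue
        component, queue = [], deque([start])
        seen.add(start)
        while queue:
            v = queue.popleft()
            component.append(v)
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    queue.append(u)
        groups.append(component)            # one group per connected component
    return groups
```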
Every point in a group shares the same label, and we define the label of a group
as the label of its underlying points. Each group of points will be used to create a
region in the texture.
An illustration of the first three steps of our algorithm is shown in Figure 4.2. The
example mesh consists of eight points that have been triangulated using the Delaunay
triangulation. The four points that lie on the boundary of the mesh are given the


Figure 4.2: Initial steps of our method. (a) Mesh generation (b) Label Assignment
(c) Point Grouping

same label, and the four points that lie on the interior of the mesh are given a different
label. The labeled points are then grouped into two different connected components.

4.1.4 Contour Definition


For each group of points defined in the last step, we derive two subsets of edges from
the mesh G. We refer to the two subsets of edges as the interior and the exterior
contour.
The interior and exterior contour are used together to define an implicit region in
the texture, with the contours defining an inner and outer boundary for that region.
The interior contour connects points in a group together to form an inner boundary,
while the exterior contour connects points adjacent to the group to form an outer
boundary. Any point within the interior contour belongs to the region, and any point
outside the exterior contour does not.
Let B be the set of points belonging to a group, and let C contain the neighbours
of B not belonging to B itself. Assuming all groups consist of at least two points, the
exterior contour is the set of all edges whose endpoints are in C.
Let D be a set of points, such that each point is both in B and adjacent to a
point in C. The interior contour is the set of all edges whose endpoints are in D.
The interior and exterior contours of a simple mesh are shown as a solid line in the
middle and right images of Figure 4.4.
To extend the definition of the interior and exterior contour to include connected

components of a single point, we make the following modifications. We include in the


exterior contour every point that does not itself belong to B but whose neighbours are all in B.
If the set B consists of a single point, the interior contour is defined as
that point. Figure 4.3 shows the interior and exterior contours for a set of points on
a mesh (points are indicated as circles).
We next compute a smooth signed distance field for each interior and exterior
contour. To compute the smooth signed distance field for a contour, we first compute
the smooth unsigned distance field using the method of Biswas [11]. A detailed
description of this procedure is found in Section 3.6.
We obtain the sign of the distance field by using the faces of G. Let ΩE be the
polygonal region defined as the union of all faces in G incident to any point in C.
Let ΩI be the polygonal region defined as the union of all faces in G that have every
point in the face belonging to C. The boundaries of ΩI and ΩE will lie on the interior
and exterior contours respectively for a group of points.
For the exterior contour, the field value at a location p is positive if p ∈ ΩE and
negative otherwise. For the interior contour, the field value at a location p is negative
if p ∈ ΩI and positive otherwise. We denote dE (p) and dI (p) as the smooth signed
distance functions for the exterior and interior contours of C.
Figure 4.6 shows a graphical representation of dI (p) for the interior points of a
simple mesh. Figure 4.6(a) maps values of dI (p) onto colors. Positive values of dI (p)
are indicated in light yellow, and negative values are indicated in dark blue. The
isocontours of dI (p) are shown in Figure 4.6(b). A corresponding visualization of
dE (p) for the same set of points is shown in Figure 4.7.
Figure 4.5 shows a specific cross-section of the visualized field. The cross-section
area is shown as a dashed line on the mesh image. The x-coordinate of the texture
is shown on the horizontal axis of the graph, and the distance value is shown on
the vertical axis. The distance field of the interior contour is shown in black, and
the distance field of the exterior contour is shown in gray. The corresponding exact
distance fields are shown as a dashed line. In the next step, we use the distance fields
of both contours to define a single smooth signed distance field for an implicit region
in the texture.

Figure 4.3: Interior and exterior contours for a set of points. The interior contour is
shown in black, and the exterior contour is shown in gray. The exterior contour
consists of two sets of connected line segments, and a single point.


Figure 4.4: Polygonal regions of the interior points of a mesh, with regions high-
lighted in gray. (a) Labeled Mesh (b) Region ΩI (c) Region ΩE


Figure 4.5: Graphical representation of dI (p) and dE (p) for the interior points of
a simple mesh, with the corresponding exact distance fields shown as dashed
lines. The cross-section area is shown as a dashed line on the mesh image.

4.1.5 Field Creation


There are several different methods for defining an implicit region. Skeletal implicit
surfaces define an implicit region by specifying a particular offset distance from a
set of modeling primitives. The set of primitives is known as the skeleton, and the
skeleton lies within the defined region. An example of skeleton-based modeling was
given in Section 2.7, which described using implicit point primitives to form a texture.
Another approach for defining an implicit region is to specify the boundaries of a
region with an arbitrary polygon. The specified polygon is referred to as the cage of
the implicit surface. Section 3.4 described using double-layered potentials to define
the cage of an implicit surface. Points on the interior of the cage would have a negative
charge, and points on the exterior would have a positive charge. The implicit surface
is defined as some offset from the zero set of the produced distance field.
Unfortunately, neither skeletal implicit surfaces nor double-layered potentials pro-
duce a field that is smooth everywhere. Skeletal implicit surfaces have gradient discon-
tinuities occurring along the medial axis of the skeleton. As described in Section 3.1,
the gradient discontinuities are a consequence of using exact distance fields to define
the surface. For surfaces created using double-layered potentials, the gradient field is
discontinuous at the zero set where two line segments meet.


Figure 4.6: Smooth signed distance field of the interior contour (highlighted in
white) for the interior points of a mesh with (a) color representation of distance,
and (b) isocontours of the produced field.


Figure 4.7: Smooth signed distance field of the exterior contour (highlighted in
white) for the set of interior points of a mesh with (a) color representation of
distance, and (b) isocontours of the produced field.

Our approach for defining an implicit region is a combination of both skeleton-


based and cage-based approaches. We interpret the interior and exterior contours as
defining the inner and outer boundaries of a defined region. We blend the distance
fields of each contour together to produce a distance field for the region.
Both interior and exterior contours are used because we desire a combined distance
field that is smooth everywhere. A single contour is insufficient because the gradient
field of a single contour is discontinuous at the zero set where two line segments meet.
For example, Figure 4.6 has gradient discontinuities occurring at each interior point
of the mesh.
The combined distance field of a region is based on computing the signed distance
away from a curve approximately midway between the interior and exterior contour.
We define this midway curve as:

    ∂Ω = { p ∈ R² | dI(p) − dE(p) = 0 }.    (4.1)

A possible signed distance function that estimates the distance from the midway curve
is given by
    dM(p) = (dI(p) − dE(p)) / 2.    (4.2)
This distance function is positive near the exterior contour, and becomes negative
near the interior contour. The zero set of this distance function also matches that of
the midway curve. Unfortunately, the combined distance field dM (p) is not smooth
everywhere, and discontinuities exist wherever the interior or exterior contours are
discontinuous. Specifically, dM (p) is discontinuous at the intersection point of any
two edges of a contour.
We next describe an alternative distance function to produce an everywhere
smooth field, also based on estimating the distance from the midway curve. To
avoid gradient discontinuities, we do not use the distance field of a contour near its
zero set. Instead, we use the distance field of the opposing contour to estimate the
distance from the midway curve.
At the exterior contour, we approximate the distance to the midway curve as
dI (p)/2. At the interior contour, we approximate the distance to the midway curve
as −dE (p)/2. We use a smoothstep function s(x) to blend between approximations.


Figure 4.8: Cross-section of dM (p) and dN (p) for the interior points of a simple mesh,
with the exact distance field midway between contours shown as a dashed line.
The cross-section area is shown as a dashed line on the mesh image.

The combined distance function is defined as

    dN(p) = wE · (−dE(p)/2) + wI · (dI(p)/2),    (4.3)

where

wE = s(1 − dI /(dI + dE )) (4.4)


wI = s(dI /(dI + dE )). (4.5)

The smoothstep function we use is a fifth order polynomial originally proposed by


Perlin [64], given by



           ⎧ 6x⁵ − 15x⁴ + 10x³   if 0 < x < 1
    s(x) = ⎨ 0                   if x ≤ 0              (4.6)
           ⎩ 1                   if x ≥ 1.

This smoothstep function gradually transitions between unity and zero, and has zero
first and second derivatives at both ends of the transition.
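A minimal sketch of this blend, written directly from Equations 4.3-4.6, is given below; dI and dE are assumed to be sampled on the same grid and to be positive between the two contours, following the sign convention of Section 4.1.4.

```python
# Combined field d_N from the interior and exterior contour fields.
import numpy as np

def smoothstep(x):
    x = np.clip(x, 0.0, 1.0)
    return 6 * x**5 - 15 * x**4 + 10 * x**3        # Perlin's quintic smoothstep (Eq. 4.6)

def combined_field(d_I, d_E):
    frac = d_I / (d_I + d_E)                       # 0 at the interior contour, 1 at the exterior
    w_E = smoothstep(1.0 - frac)                   # Eq. 4.4
    w_I = smoothstep(frac)                         # Eq. 4.5
    return w_E * (-d_E / 2.0) + w_I * (d_I / 2.0)  # Eq. 4.3
```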
The function dN (p) has the same zero set as dM (p), and also behaves as a smooth

signed distance field for the midway curve. We show a comparison between the two
methods we have presented in Figure 4.9 and Figure 4.10 for the interior points of a
simple mesh. Notice that there are gradient discontinuities present in dM (p) at each
vertex in the mesh, but that these discontinuities are not present in dN (p).
Figure 4.8 shows a specific cross-section of the visualized field. The x-coordinate
of the texture is shown on the horizontal axis of the graph, and the distance value
is shown on the vertical axis. The function dM (p) is shown as a solid gray line, and
dN (p) is shown as a solid black line. The exact distance field for the interior contour
is shown as a dashed line on the graph.

4.1.6 Region Merging


After we have created a smooth distance field for every group of points, we combine
the distance fields together to create a texture. Different textures can be created
depending on how regions are combined. As an example, we now describe a simple
method of merging regions together to produce a binary texture.
We assume that each point in the original graph is assigned either a foreground
label or a background label. Let {F0 , F1 . . . Fn } be the set of connected components
which are all assigned a foreground label.
For every Fi with combined distance function dN(p), we create a foreground region
Ri . This region is defined as the union of all locations p such that dN (p) < 0, and
represents the interior of the midway curve. Every point within a foreground region
is assigned the foreground color, and all other points are assigned the background
color.
Let cf be the color associated with the foreground label, and let cb be the color
associated with the background label. The color of the output texture at some location
p is given by

           ⎧ cf   if ∃i : p ∈ Ri
    f(p) = ⎨                                  (4.7)
           ⎩ cb   otherwise.

Examples of textures produced by this method are shown in Figure 4.11, with the
texture for our example mesh shown in Figure 4.11(a). The texture in Figure 4.11(b)
was created by distributing points in a Poisson disk distribution, and randomly as-
signing a point to have either a foreground or background label.
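For completeness, a minimal sketch of this binary merge (Equation 4.7) is shown below; the foreground and background colors are represented here as scalar gray values, and the fields are assumed to be sampled on a common grid.

```python
# Binary texture from the combined distance fields of the foreground groups.
import numpy as np

def binary_texture(foreground_fields, c_f=1.0, c_b=0.0):
    """foreground_fields: list of arrays holding d_N(p) for each foreground group."""
    inside_any = np.zeros(foreground_fields[0].shape, dtype=bool)
    for d in foreground_fields:
        inside_any |= (d < 0)                 # interior of the midway curve
    return np.where(inside_any, c_f, c_b)
```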


Figure 4.9: Distance field dM (p) with gradient discontinuities. (a) color representa-
tion of distance, and (b) isocontours of the produced field.


Figure 4.10: Distance field dN (p) that is C 1 continuous. (a) color representation of
distance, and (b) isocontours of the produced field.


Figure 4.11: Merging implicit regions to form textures. (a) simple texture, and
(b) random binary texture.

We can use the distance fields of each region to create more elaborate effects. In
the next section, we show how to blend regions together by deriving a weight function
from each distance field.

4.2 Blending
To produce a smooth transition between regions in our texture, we use the distance
function of the region to define a weight function. The weight functions of all regions
are blended together to produce the texture.
Let C = {C0 , C1 . . . Cn } be the set of connected components in our mesh. For
each connected component Ci with associated color ai , we define an associated weight
function wi (p) : R2 → R. The color of the blended output texture at some location p
is given by

    f(p) = ( ∑_{i=1}^{n} wi(p) ai ) / ( ∑_{i=1}^{n} wi(p) ).    (4.8)

Our texture function f (p) then becomes a weighted average of color values from each
connected component. Other possibilities of combining the different weight functions
can be considered, such as selecting the color with the largest total weight.
To define our weight function wi(p), we use the distance function dN(p) of the

connected component Ci . We introduce a parameter r representing a blending radius


away from the midway curve dN(p) = 0. The fraction dN(p)/r is exactly −1 if p is a
distance r from the midway curve in the direction of the interior contour, and exactly +1
if p is a distance r away in the direction of the exterior contour. We next scale this fraction to the range [0, 1]
to give the function:
    hN(p) = (dN(p)/r + 1) / 2.    (4.9)
The weight function wN (p) is defined as the composition of hN (p) with a smoothstep
function s(x), and is given by

wN (p) = s(hN (p)), (4.10)

where s(x) is the smoothstep function defined in Equation 4.6. The smoothstep function is used
to smoothly transition between the weights of the different connected components.
Figure 4.12 shows a set of textures where each texture has a different specified
blending radius. Figure 4.13 shows a cross-section of the different blended textures
in Figure 4.12. The cross-sections for blending radii r = 0, 0.05, and 0.10 are
shown as solid, dashed, and dotted lines, respectively.

4.3 Offset Distance


We can use the distance function dN (p) to expand or contract an implicitly defined
region. For a connected component C with distance function dN (p), we define an
implicit region R as
R = {p|dN (p) < K}. (4.11)

The variable K can be interpreted as an offset distance, with the region being defined
as a shifted offset from the midway contour. Inflation of the region R occurs if K is
positive, and deflation occurs if K is negative.
The behaviour of changing the offset distance can be described based on the inte-
rior and exterior contours of a region. When a region is expanded, the boundary more
closely follows the distance field of the interior contour. When a region is contracted,
the boundary more closely follows the distance field of the exterior contour.
In Figure 4.14, we show a set of implicit regions created with different offset
distances for the interior points of a simple mesh. A cross-section of the distance field

(a) r = 0 (b) r = 0.05 (c) r = 0.10

Figure 4.12: Blending of implicit regions with different blending radius r.


Figure 4.13: Cross-section of wi (p) with different blending radii. The cross-section
area is shown as a dashed line on the mesh image.

dN (p) of the mesh is shown in Figure 4.15, with different offset distances highlighted
as a dashed line. The boundary of the implicit region is where the distance field
intersects the offset distance, and is shown as a solid red line.

4.4 Labeling
4.4.1 Stripes and Spots
This section discusses different methods for assigning labels to points in the graph. We
have shown that a random binary labeling of points will produce an appealing organic
image, but more sophisticated patterns can be generated by taking into account the
underlying graph. Caron and Mould [18] considered different methods of reasoning
over a graph in their work on partition of unity textures, and these methods can be
directly applied to our algorithm. We reproduce their method of using a depth-first
and breadth-first traversal of the graph to create stripe and spot patterns.
We generate stripe patterns as follows. We select a random point p in the graph
and a fixed number k representing the length of the stripe. We mark p with a
foreground label, and add its unlabeled neighbors to the front of an exploration list.
To grow the stripe, we iteratively repeat the previous step by removing a point from
the front of the exploration list, marking it as the foreground label, and adding
unlabeled neighbors to the front of the exploration list. The stripe stops growing
when it is of size k, or when the exploration list is empty. Once the stripe ceases
growing, we mark all remaining points in the exploration list with the background
label. We repeat this process until all points are labeled.
We generate spot patterns by modifying how stripe patterns are generated. Specif-
ically, when neighbors are added to the exploration list, we add to the back rather than
the front. The labeling procedure has the effect of forming clusters resembling spot
patterns. Figure 4.16 shows a stripe and spot pattern created using our algorithm.
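A sketch of this labeling procedure is given below. It assumes the mesh is available as an adjacency-list dictionary mapping each point to its neighbours, with the front of the exploration list taken to be the left end of a deque; the labels FG and BG are placeholders.

from collections import deque
import random

FG, BG = 1, 0   # assumed foreground/background labels

def label_clusters(graph, k, spots=False, seed=None):
    """Label mesh points with the stripe (depth-first) or spot (breadth-first)
    strategy described above. `graph` maps each point to a list of neighbours."""
    rng = random.Random(seed)
    labels = {}
    points = list(graph)
    rng.shuffle(points)
    for start in points:
        if start in labels:
            continue
        labels[start] = FG
        frontier = deque(n for n in graph[start] if n not in labels)
        size = 1
        while frontier and size < k:
            p = frontier.popleft()              # always take from the front
            if p in labels:
                continue
            labels[p] = FG
            size += 1
            for n in graph[p]:
                if n not in labels:
                    if spots:
                        frontier.append(n)      # back of the list: breadth-first growth
                    else:
                        frontier.appendleft(n)  # front of the list: depth-first growth
        for p in frontier:                      # leftover frontier becomes background
            if p not in labels:
                labels[p] = BG
    return labels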

4.4.2 Graph Traversal


We briefly mention other graph-based labeling strategies. Round-robin clustering [18]
was previously explored to generate organic but irregular structures. In this method,
a set of center points sequentially take turns applying a label to a point. Expansion

(a) K = 0 (b) K = −0.05 (c) K = 0.05

Figure 4.14: Different offset distances K to expand or contract a region.


Figure 4.15: Cross-section of dN (p) showing different offset distances for a simple
mesh. The cross-section area is shown as a dashed line on the mesh image.

stops when there are no neighboring points to claim. The result is a connected but
non-Voronoi like growth pattern.
Branching structures can be created by labeling paths between points using the
same label. In this method, a set of initial points are selected, and an approximate
minimum spanning tree is constructed using the method of Mehlhorn [54]. Every
point belonging to the spanning tree is labeled with a foreground label, and all other
points are assigned the background label. Figure 4.17 shows an example texture
created by connecting a set of foreground points to form an approximate minimum
spanning tree. We expect other graph labeling methods to produce their own inter-
esting organic phenomena.

4.5 Point Distribution


The textures we created in the previous sections used the Poisson disk distribution. In
a Poisson disk distribution, points are tightly packed, but are at least some specified
distance apart. This distribution is convenient because it provides a relatively uniform
distance to blend between regions, as well as a uniform level of detail in the texture,
while still being able to generate irregular patterns without lattice artifacts.
Roughly speaking, the density of points in the texture controls the level of detail.
Changing the point density of our algorithm can be done by varying the minimum
distance parameter d of the Poisson disk distribution. In Figure 4.18, we generate a
set of random binary patterns with different minimum point densities d. Varying the
point density of our algorithm has the effect of scaling the texture, with a smaller
Poisson disk spacing creating a more detailed texture.
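As a point of reference, the sketch below generates such a distribution by naive dart throwing on the unit square. It is far less efficient than a grid-accelerated sampler and is only meant to illustrate the minimum-distance parameter d.

import math
import random

def poisson_disk_naive(d, attempts=20000, seed=0):
    """Naive dart-throwing Poisson disk sampling on [0, 1]^2: a candidate point is
    kept only if it is at least distance d from every previously accepted point."""
    rng = random.Random(seed)
    points = []
    for _ in range(attempts):
        p = (rng.random(), rng.random())
        if all(math.hypot(p[0] - q[0], p[1] - q[1]) >= d for q in points):
            points.append(p)
    return points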
We can use other point distributions, and our method will still produce smooth
boundary curves. A texture generated with a purely random uniform point distribu-
tion is shown on the left in Figure 4.19. While there are sharper changes in curvature,
the boundary curves of the texture remain continuous.
To produce varying levels of detail in a texture, we vary the feature point distribu-
tion across the domain. We use a density map, which is an intensity image specifying
point density, to vary the detail arbitrarily. We take the density map and apply
an adaptive sampling algorithm [60] to generate the point locations. An example of
varying the point density is shown on the right image of Figure 4.19. Using a density
map allows us to easily generate non-stationary patterns, such as having a denser

Figure 4.16: Stripe and spot patterns created with a graph-based traversal. (a)
spot patterns, and (b) stripe patterns.


Figure 4.17: Branching structures created using a graph-based traversal. (a) texture
with initial set of labeled points, and (b) texture with labeled points connected
to form a minimum spanning tree.

spot pattern along edge boundaries.

4.6 Color Values


We previously assumed labels were associated with color values, and an entire region
was composed of a single color. More interesting patterns can be generated if the
color varies across a region. We define ai (p) to be the color of a region at location p in
the texture.
There are several different possibilities for how to vary the color within a region.
One approach is to supply an image I(p) to the texture function, and have the color
of the region be the color of the image at each location such that ai (p) = I(p).
Another approach is to have the color depend on the distance function dN (p) or
the weight function wi (p) of the region. We can use a color spline to map field values
to the corresponding colors. A color spline is defined as an arbitrary mapping of
an input value to an output color [25], and often uses continuous curves to provide
a smooth transition between colors. Using a color spline with a distance function
allows us to create more sophisticated textures.
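A minimal sketch of such a mapping is given below. It interpolates linearly between user-supplied (value, color) control points, which is a simplification of the continuous curves mentioned above; the control-point format is an assumption of the example.

def color_spline(value, controls):
    """Map a scalar field value (e.g. d_N(p) or w_i(p)) to an RGB color.
    `controls` is a list of (value, (r, g, b)) pairs sorted by value; the
    interpolation here is piecewise linear."""
    controls = sorted(controls)
    if value <= controls[0][0]:
        return controls[0][1]
    for (v0, c0), (v1, c1) in zip(controls, controls[1:]):
        if value <= v1:
            t = (value - v0) / (v1 - v0)
            return tuple((1 - t) * a + t * b for a, b in zip(c0, c1))
    return controls[-1][1]

For instance, control points placed just inside and outside an offset distance K could reproduce an outline similar to the one in Figure 4.20.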
Figure 4.20 shows examples of using a color spline to add an outline to a spot
texture. The color of the spot is the outline color if dN (p) < K, and is the interior
spot color otherwise. Varying the constant K produces outlines of different thick-
ness. A discontinuous color transition is shown in Figure 4.20(b), and a smooth color
transition is shown in Figure 4.20(c).

4.7 Summary
In this chapter we described a method for constructing organic textures using smooth
signed distance fields. In our method, a collection of points are grouped based on their
labels and connectivity within a mesh. For each group of points, two smooth distance
fields were used to define an implicit region. Implicit regions were then combined
together to produce the output texture.
We showed how blending between regions can be controlled by modification of
defined implicit fields, and how we could expand or contract an implicitly defined
region. We also demonstrated how different textures can be generated by changing

(a) d = 0.10 (b) d = 0.05 (c) d = 0.025

Figure 4.18: Random binary textures with varying point densities d.

(a) (b)

Figure 4.19: Different textures generated by varying the point distribution. (a)
random uniform point distribution, and (b) varying density point distribution.

Figure 4.20: Textures produced with a color spline to add an outline. (a) initial texture image, (b) discontinuous color transition, and (c) smooth color transition.

how points are labeled and distributed across the domain. In the next section, we
apply these different techniques to produce the organic patterns we desire.

Chapter 5

Results

5.1 Introduction


Figure 5.1: From left to right, top to bottom, examples of patterns found in the
natural world: (a) spotted marsh frog [80], (b) strawberry poison-dart frog [34],
(c) clown triggerfish [28], (d) emperor angelfish [39], (e) pufferfish [19], and (f)
common giraffe [53].

In the previous chapter, we presented our method for generating organic textures
using smooth signed distance fields. We showed how to produce a simple binary
texture by distributing points in a Poisson disk distribution, randomly assigning a
point one of two different labels, and having the texture depend on whether or not a
pixel was in a foreground region.
This chapter will show how other kinds of textures can be created. Specifically,
we will show how to produce different irregular patterns, smooth patterns, Voronoi
patterns, and complex patterns. In the next section, we give a summary of the
different parameters of our algorithm. The remainder of this chapter describes how to
vary these parameters to produce specific kinds of textures. The patterns we generate
are motivated by examples found in the natural world, as shown in Figure 5.1.

5.2 Method Inputs


The different inputs to our algorithm are listed below.

G(V, E): Mesh of the texture. The mesh consists of a set of points V = {p0, p1, . . . , pn} and a set of edges E = {e0, e1, . . . , en}.

A: Set of labels A = {a0, a1, . . . , an} in the texture.

L: A labeling function L : V → A. Each point pi in the mesh is associated with a label ai.

d: Minimum distance between any two points in the mesh, when the points of the mesh have been distributed in a Poisson disk distribution.

r: Blending radius of the texture.

K: Offset distance of a region in the texture. Inflation of a region occurs if K > 0, and deflation occurs if K < 0.

k: Maximum cluster size of a group of points in the texture. This parameter is used when a spot or stripe graph traversal strategy is used for labeling points.

Figure 5.2: Irregular spot and stripe patterns. (a) irregular spot pattern created with d = 0.10 and k = 4, and (b) irregular stripe pattern created with d = 0.05 and k = 8.

5.3 Irregular Patterns


Irregular patterns are characterized by regions that vary in both size and shape, and
have boundaries that appear uneven and bumpy. We can produce different irregular
patterns by using a graph-based labeling strategy, as described in Section 4.4.1.

5.3.1 Irregular Spot Texture


We created the spot texture in Figure 5.2(a) as follows. We formed a mesh by
distributing points in a Poisson disk distribution with d = 0.10, followed by applying
a spot labeling strategy with k = 4. We used a color spline for the foreground and
background labels, and used a small blending radius to blend the spots with the rest
of the texture.
The spots we have created are smooth, but irregularly shaped, resembling the
spots found in the spotted marsh frog in Figure 5.1(a). The shape of the spots depends
on the underlying mesh, as the edges of the mesh define the region boundaries. Using
a Poisson disk distribution with a Delaunay triangulation creates a mesh consisting
of triangles which are similar in size and shape, while still providing a source of
irregularity for the spots in the texture.

(a) d = 0.10, k = 4 (b) d = 0.05, k = 8 (c) d = 0.025, k = 35

Figure 5.3: Irregular spot patterns each with varying point densities d and cluster
sizes k.

To create spots that appear more irregular, we can increase the cluster size and
point density of the texture. As the number of points belonging to a spot increases,
the boundaries of the region become more irregular. This irregularity is caused by
the introduction of additional edges used to define the region boundaries, as each
introduced edge creates a bend in the distance field of a contour. We show irregular
spot textures with increasing cluster sizes and point densities in Figure 5.3.
Using color splines for the background and foreground labels provides smooth color
variations in the texture. Because the distance fields for each spot do not vary widely
away from the zero set, we are able to produce a consistent appearance for each spot.

5.3.2 Irregular Stripe Texture


We create a stripe texture similarly to how the spot texture was created. The irregular stripe texture in Figure 5.2(b) was produced by distributing points in a Poisson disk distribution with d = 0.05, and then labeling them using the stripe labeling strategy of Section 4.4.1 with k = 15. We show stripe textures with varying stripe lengths in
Figure 5.4. A color spline is used to vary the color in the interior of the stripe.
The procedurally generated texture is intended to be similar to the skin patterns
found in the poison dart frog (Figure 5.1(b)). Unfortunately, there are several in-
stances in the texture where a stripe appears rigid and unnatural, such as the bend
shown in Figure 5.5. These texture artifacts exist because of how the mesh is defined,

(a) k = 8 (b) k = 15 (c) k = 25

Figure 5.4: Irregular stripe patterns each with varying maximum stripe length k.

and occur when two connected edges of a contour form a large bend.
Creating smoother stripes would require modifying the mesh creation process, such
as adding additional points along the path of a contour. Section 5.4.2 will describe
how to produce smooth stripes by changing how the mesh is constructed.

5.4 Smooth Patterns


In the previous section, we discussed how to produce different irregular patterns, and
we gave examples of patterns that resembled certain types of frogs. However, many
patterns in nature are more smooth and rounded. This section will describe how
smoother spot and stripe patterns can be produced.

5.4.1 Rounded Spot Texture


We previously created an irregular spot pattern by grouping neighbouring points
together to form a single region. Alternatively, a rounded spot pattern can be created
by having each point in the mesh be its own region. An example of such a texture
is shown in Figure 5.6(a). The texture was created by distributing points with a
minimum spacing of d = 0.10 in a Poisson disk distribution. Each point is specified
with a unique label, and a color spline was used to create an outline for each spot.
A pixel that is not within the boundaries of any region is set to be the background
color. The spot pattern we have generated shows resemblance to the spotted trunk

Figure 5.5: Bend in an irregular stripe pattern. (a) stripe pattern created with
k = 8 and d = 0.10, and (b) region in the texture with sharp bend.


Figure 5.6: (a) example of a rounded spot texture, and (b) example of smooth stripe
pattern.

fish in Figure 5.1(e).


The behaviour of each spot can be described by the underlying mesh of the texture.
The interior contour of a spot consists of a single point, and the exterior contour
defines a polygon that connects the neighbours of that point together. Each spot in
the texture is a smooth region inscribed within the defined polygon.

5.4.2 Smooth Stripe Texture


To create regular stripe patterns, we modify the distribution of points in the texture.
By replacing the Poisson disk distribution with other point distributions, we can
produce textures with smoother boundaries. Figure 5.6(b) shows an example of a
smooth stripe pattern produced by distributing points in a sine wave pattern. Points
in the texture are specified by the following set of equations

x = x′ + sin(2πkx′)    (5.1)
y = y′    (5.2)

where k is a parameter used to specify the frequency of the stripe pattern. The
variables x′ and y ′ are points that are sampled from a rectangular grid pattern. For
Figure 5.6(b), the grid pattern has a constant horizontal spacing of w = 0.05 and a
variable vertical spacing of h = 0.05 + 0.01n, where n is the index of a particular row
of points.
A Delaunay triangulation is used to generate a mesh from the set of points we
distributed. Each row of points in the mesh is given the same label, and a
color spline is used to color each individual stripe. The stripe pattern we produced
is smooth, and resembles the stripes found on the clown triggerfish, as shown in
Figure 5.1(c).
A major limitation of our method is that it is not immediately clear how to produce
more complex stripe patterns, such as producing stripe bifurcations. The mesh in
Figure 5.6(b) was deliberately generated to produce a particular stripe pattern, and
a different labeling of the mesh points would not produce a desirable texture.
A second limitation of our method is evident when we compare between stripes
of different thicknesses in Figure 5.6(b): thinner stripes in the texture have a thicker
yellow outline. The behaviour of the color spline associated with each stripe is not

Figure 5.7: Parameters dependent on smooth distance field. (a) smooth distance
field of a single line segment, and (b) implicit field of texture with varying
smooth distance fields.

consistent throughout the texture.


To explain the variations in the behaviour of the color spline, we consider the
smooth distance field of a single line segment. The isocontours of a field for a single
line segment are shown in Figure 5.7(a). We observe that the isocontours of the field
are closer together further away from the line segment, which implies that distance
values increase faster away from the zero set. This behaviour is a consequence of
needing to create a smooth field for the line segment.
The distance field of a stripe in the texture behaves similarly to the distance field
of a single line segment, with the gradient magnitude increasing away from the zero
set. More generally, the smooth distance field of a region diverges from the exact
distance field away from the region boundary. Parameters of our algorithm that
depend on the distance field of region, such as the color spline used for the stripe
texture, will therefore vary in behaviour depending on the underlying mesh.
Figure 5.7(b) shows another example of a texture parameter being dependent on
the underlying mesh. Each stripe was assigned the same blending radius parameter,
but the thinner stripes appear to have a much larger actual blending radius. To obtain
a uniform blend, a different blending radius parameter would need to be specified
for each individual region.

Figure 5.8: Examples of Voronoi patterns. (a) example of an irregular Voronoi pattern, and (b) example of a smooth Voronoi pattern.

Despite the previously mentioned limitations, we have shown that our method can
generate different smooth stripe patterns by distributing points in a systematic way.
Stripe patterns similar to Figure 5.6(b) can be found in the emperor angelfish (shown
in Figure 5.1(d)).

5.5 Voronoi Patterns


We now describe how to produce different cellular patterns based on the construction
of the Voronoi diagram. There are many examples of cellular patterns found in the
natural world: the spots of a giraffe, the wings of a dragonfly, and the patterns in
turtle shells are all examples of Voronoi diagrams found in nature. We will show how
to produce both irregular and smooth Voronoi patterns. Examples of both types of
patterns are shown in Figure 5.8.

5.5.1 Voronoi Definition


We first define several terms related to the Voronoi diagram. A Voronoi diagram is a
partitioning of the plane using a set of points. The initial set of points are known as

(a) d = 0.030, w = 1 (b) d = 0.015, w = 2 (c) d = 0.0075, w = 4

Figure 5.9: Irregular Voronoi patterns each with varying point densities d and
boundary widths w.

Voronoi sites, and the partitioned regions are known as Voronoi cells. A Voronoi cell
consists of all points that are closer to a fixed site than any other site.
The line segments that form the boundary between Voronoi cells are known as
Voronoi edges. A Voronoi vertex is defined as the intersection of three or more Voronoi
edges, with each Voronoi vertex being equidistant to three or more sites. We will use
the above definitions in the remainder of this chapter.

5.5.2 Irregular Voronoi Patterns


Irregular Voronoi patterns are characterized by regions that resemble cells in a Voronoi
diagram, but have bumpy and uneven boundaries. An example of such a pattern is
found in the common giraffe, shown in Figure 5.1(f).
We can produce a similar giraffe pattern by labeling points on a Voronoi diagram.
We distribute points P = {p0 , p1 . . . pn } in a Poisson disk distribution, and we dis-
tribute a second set of points S = {s0 , s1 . . . sn } that are the sites of the Voronoi
diagram. All points p that have the same nearest site s are given the same label. All
points on the boundary of a group of points are then relabeled to form an outline
around each Voronoi cell.
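A sketch of this nearest-site labeling is given below; points and sites are assumed to be coordinate tuples, and the boundary relabeling step is omitted.

import math

def label_by_nearest_site(points, sites):
    """Assign each mesh point the index of its nearest Voronoi site."""
    def nearest_site(p):
        return min(range(len(sites)), key=lambda i: math.dist(p, sites[i]))
    return {p: nearest_site(p) for p in points}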
Figure 5.8(a) shows a giraffe pattern created in the above manner, and Figure 5.9
shows irregular Voronoi patterns with different point densities d and boundary widths
w between Voronoi cells.

Figure 5.10: Examples of different smooth Voronoi patterns.

5.5.3 Smooth Voronoi Patterns


We now describe how to produce a Voronoi texture with smooth region boundaries.
Instead of using a Delaunay triangulation on an initial set of points, we use the
edges of a Voronoi diagram along with an alternative mesh construction method to
define region boundaries.
The alternative mesh is constructed as follows. We take the initial set of dis-
tributed points P and compute its Voronoi diagram V (P ). The vertices of the graph
consist of the initially distributed points (which are the sites of the Voronoi diagram),
and the set of Voronoi vertices in V (P ). The edges of the graph include the Voronoi
edges of V (P ), and the edges formed by connecting points in P with their respective
Voronoi vertices.
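The construction can be sketched using SciPy's Voronoi diagram, as below. Only finite Voronoi vertices and edges are kept, which is a simplification, and the vertex and edge indexing scheme is an assumption of the example.

from scipy.spatial import Voronoi

def voronoi_mesh(sites):
    """Build the alternative mesh: vertices are the sites plus the (finite)
    Voronoi vertices; edges are the finite Voronoi edges plus edges joining
    each site to the Voronoi vertices of its own cell."""
    vor = Voronoi(sites)
    offset = len(sites)                       # Voronoi vertices are indexed after the sites
    vertices = [tuple(p) for p in sites] + [tuple(v) for v in vor.vertices]
    edges = set()
    for a, b in vor.ridge_vertices:           # Voronoi edges between cells
        if a != -1 and b != -1:
            edges.add((offset + a, offset + b))
    for i, ridx in enumerate(vor.point_region):
        for v in vor.regions[ridx]:           # connect site i to its cell's vertices
            if v != -1:
                edges.add((i, offset + v))
    return vertices, edges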
A Voronoi texture can be produced by assigning all points in P a foreground label,
and all other points the background label. The texture will consist of smooth spots
inscribed within Voronoi boundaries. We can control the smoothness of each spot by
replacing points in P with offset polygons of the Voronoi boundaries, with an example
shown in Figure 5.8(b).
In Section 2.4, we saw how to construct different Voronoi patterns using Worley
noise and partition of unity textures. However, those approaches produced fields
which were discontinuous at the Voronoi boundaries. Our approach is distinguished from these by being able to produce a Voronoi pattern that is
everywhere smooth, with the smoothness controlled by specifying a particular offset
polygon. We show Voronoi patterns with different offset polygons in Figure 5.10.

5.6 Complex Patterns


Figure 5.11: Examples of complex patterns found in nature: (a) maze coral [10],
(b) blue poison arrow frog [40], and (c) lizard [7]

We now describe approaches to obtain more complex textures. Examples of the complex textures we wish to produce are shown in Figure 5.11. We will show how to
use a secondary image to guide point labeling, how to produce different non-stationary
textures, and how boundary constraints can be added to a texture.

5.6.1 Image Texture


One way of obtaining more complex patterns is to change how points are labeled. Rather than use a graph traversal method (Section 4.4.1), we use a binary image to define whether a point is assigned a background or foreground label. For a binary image B(p) that maps every location p to either 0 or 1, a point is assigned the foreground label if B(p) = 1, and is assigned the background label otherwise. Figure 5.12
shows a complex pattern created using a procedurally generated maze image.
Instead of using a binary image, we can use a grayscale image to give a probabilistic
assignment of point labels. We let F (p) be a grayscale image such that 0 ≤ F (p) ≤ 1,
and we let α be a continuous uniform variable in the range [0, 1].
A point p is assigned the foreground label if F (p) > α, and is assigned the back-
ground label otherwise. Figure 5.13 shows several examples of textures created using

Figure 5.12: Example of maze texture. (a) maze texture, and (b) corresponding
binary image.

a probabilistic assignment of labels from an image. The binary textures consist of smooth regions but are non-stationary, and resemble the supplied image.
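Both labeling schemes can be sketched as follows. The helper sample_image(p), returning an intensity in [0, 1] at a point p, and the 0.5 threshold for the binary case are assumptions of the example.

import random

def label_from_image(points, sample_image, probabilistic=False, seed=0):
    """Assign foreground (1) or background (0) labels from an image lookup.
    Binary case: foreground where the image value is treated as 1.
    Probabilistic case: foreground where the grayscale value exceeds a
    uniform random draw."""
    rng = random.Random(seed)
    labels = {}
    for p in points:
        v = sample_image(p)                      # assumed lookup in [0, 1]
        if probabilistic:
            labels[p] = 1 if v > rng.random() else 0
        else:
            labels[p] = 1 if v >= 0.5 else 0     # values >= 0.5 treated as B(p) = 1
    return labels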

5.6.2 Non-Stationary Texture


We now describe how to produce different non-stationary textures that vary the size
and shape of the texture elements across the image. Many examples of non-stationary
patterns can be found in the natural world. One such example is the blue poison arrow
frog, which is shown in Figure 5.11(b).
One approach to creating non-stationary texture is to vary the point distribution
across the domain, as described in Section 4.5. The texture shown in Figure 5.14 was
produced with a variable density point distribution, with spot sizes being larger or
smaller depending on the local point density.
Another approach to creating non-stationary texture is to vary how points are
labeled. Figure 5.15 shows a texture produced by varying the cluster size of the
different spots. We have used a grayscale image to determine the spot size, with areas
of higher intensity in the image corresponding to larger spot sizes in the texture.

Figure 5.13: Examples of textures created from images [5,45,67]. (a) original image,
and (b) texture produced using probabilistic assignment of labels from image.

Figure 5.14: Example of non-stationary texture. (a) non-stationary spot texture, and (b) corresponding density map.


Figure 5.15: Example of non-stationary labeling. (a) texture with varying cluster
sizes for each spot, and (b) image used to determine cluster size.

5.6.3 Boundary Constraints


We have shown how to produce several different types of irregular patterns by dis-
tributing points in a Poisson disk distribution, and then forming a mesh using a
Delaunay triangulation. The boundaries of those patterns were irregular because of
how mesh edges were used to define region boundaries, but we can construct smoother
regions by directly modifying the underlying mesh of the texture.
Figure 5.16 shows a random binary texture constrained to be within a circular
region. Points were added to the boundary of the region, so that no spot would
extend beyond that boundary. By adding a second set of boundary points, shown
in Figure 5.17, we ensure spots near the boundary are curved to follow the circular
outline. Manipulation of the underlying mesh allows us to create both smooth and
irregular regions in the texture.

5.7 Summary
In this chapter, we discussed how different textures can be created by varying the
parameters of our algorithm. We showed how to create different irregular stripe
and spot textures by using a color spline with a graph-based labeling strategy. We
then showed how to create smooth patterns and Voronoi patterns by changing the
point labeling, point distribution, and graph construction of our method. Lastly, we
described several techniques to create more complex patterns.

Figure 5.16: Texture with single set of boundary points. (a) complete texture, and
(b) section of the texture, with boundary edges shown in yellow.


Figure 5.17: Texture with two sets of boundary points. (a) complete texture, and
(b) section of the texture, with boundary edges shown in yellow.
Chapter 6

Discussion

6.1 Rationale
We started with the goal of creating smooth organic patterns, and we wanted these
patterns to be created procedurally with a minimal set of parameters. The user would
not be required to specify individual elements of the texture, such as spline control
points. The solution we presented uses smooth signed distance fields to define an
organic texture. In this section, we discuss the rationale, benefits, and limitations of
our approach.

6.1.1 Mesh Texture


We chose to use the points of a mesh to define our texture, with the edges of the mesh
defining the boundaries of regions in our texture. We used a mesh for the simplicity
of the texture definition: a suitable mesh is constructed by distributing points in
the plane, and triangulating. We can exploit many existing methods for both point
distribution and mesh creation. For example, Section 5.6.2 described how to use a
mesh with a varying point density to create non-stationary texture.
Another benefit of using a mesh is that the mesh provides a natural way to group
points together, and provides a way to form regions from the grouped points. Specif-
ically, points are grouped together if they share the same label and are connected
to one another in the mesh, and the boundary for a group of points is formed from
neighboring edges. We also use the mesh for point labeling; Section 4.4 described
several different methods of reasoning over a graph. By having a mesh we were able
to create various kinds of spot and stripe patterns. Furthermore, using a mesh avoids
the lattice artifacts present in other techniques such as Perlin noise.

6.1.2 Field Definition


After points of the mesh are grouped together, we defined for each point group the
interior and exterior contour. We computed a smooth signed distance field for each
contour, and blended the distance fields together to create a signed distance field for
a region.
The rationale for using a pair of smooth distance fields, rather than the distance
field of a single contour, was to create a combined field that was smooth everywhere.
As described in Section 4.1.5, wherever one distance field was discontinuous, we would
use the distance field of the opposing contour.
In addition to producing an everywhere smooth field, using a pair of distance fields
limits bulging in the distance field of the region. Bulging of a smooth distance field
occurs when two line segments are joined together using the R-equivalence operation.
Figure 6.1(a) shows the isocontours of a smooth distance field from a single line
segment. Figure 6.1(b) shows, with a bulge present, a smooth distance field of two
line segments joined using R-equivalence.


Figure 6.1: Bulging in a smooth distance field. (a) Single line segment smooth
distance field. (b) Bulge formed by joining two separate line segments.

Biswas and Shapiro [11] provided an analysis of the problem based on the gradient
magnitude at the zero isocontour. Recall that in the case of an exact distance field,
for any point not on the medial axis, the gradient magnitude is unity and all higher
derivatives vanish.
In the case of the smooth distance field, the gradient magnitude is not unity every-
where, and specifically is less than unity at a joining point where two line segments
meet. Figure 6.2(a) shows the field of two line segments joined at 90 degrees. Fig-
ure 6.2(b) shows regions, marked in gray, where the gradient magnitude is less than
1. For an R-equivalence operation of two line segments, a further analysis [74] shows
that the minimum value of the gradient occurs at the bisector of the joined segments.


Figure 6.2: Analysis of bulging problem. (a) Implicit field of two joined line seg-
ments. (b) Gradient magnitude of joined line segments.

This bulge problem has been discussed in the context of other methods. In the
case of binary blending, Gourmel [33] proposed the solution of blending two implicit
primitives based on the angle between their respective gradient fields. An improved
method for general composition was later considered by Canezin et al. [16]. These
methods produce smooth bulge-free joins between implicit primitives, but have the
limitation of not being associative: the field depends on the blending order of the
primitives.
In the case of implicit modeling, convolution surfaces [13] were proposed as a
generalization of point potential surfaces to reduce bulging. In this method, the field
of a primitive (such as a line segment or polygon) is calculated as the convolution over the primitive with a specific kernel function. A choice of an appropriate kernel can
then reduce visible bulges, although the field itself will not be bulge-free. Variational
implicit surfaces [85] use radial basis functions to define constraints to form bulge free
surfaces. The fields produced with radial basis functions are smooth everywhere, but
are generally not well behaved away from the zero set.
In our method, bulging from a single contour is limited because we blend between
two smooth distance fields. The weights used in blending a distance field increase as
the distance grows. Since bulges are most prominent at the zero isovalue of a smooth
distance field, the bulge that is present in one contour is replaced by the distance
field of the opposing contour.
If the interior and exterior contours run parallel to each other, the result is a
near bulge-free field. Figure 6.3 shows a smooth field produced using an interior and
exterior contour, while Figure 6.4 shows a field produced using only a single set of line
segments. Bulging occurs at the corners of Figure 6.4, but is not evident in Figure 6.3.
When the interior and exterior contours are not parallel, field irregularities may
occur from the interaction between the two distance fields. Figure 6.5 shows a field
that is produced when contours are not aligned. The combined field is more complex,
because each contour introduces a different bend into the distance field.
The mesh definition is therefore important in controlling irregularities in the tex-
ture. While this is a significant drawback, our method visibly reduces bulging for the
entire implicit field, rather than only at a particular isovalue. Furthermore, unlike
the method of Gourmel [33], our method does not depend on the blending order of
line segments.

6.1.3 Texture Synthesis


The implicit fields of each region are combined to generate a texture. In Section 4.1.6,
we showed how to produce a simple binary texture by assigning all pixels in foreground
regions a single color, and assigning a different color to the remaining pixels.
Instead of defining a region using an implicit field, we could have defined a region
by only specifying a boundary. For example, we could have defined a closed spline
curve over a particular group of points in the mesh. The texture would be created by

Figure 6.3: Implicit field created using an interior and exterior contour. (a) implicit
texture, and (b) isocontours of the produced field.


Figure 6.4: Implicit field created using a single set of line segments. (a) implicit
texture, and (b) isocontours of the produced field.

Figure 6.5: Texture with rotated interior contour. (a) implicit texture, and (b)
isocontours of the produced field.

determining whether a point was inside or outside the defined curve.


The benefit of using an implicit field is that we are able to vary the behaviour
within a region in the texture. Specifically, having an implicit definition of the texture
allows us to blend regions together, and allows for the color of a region to vary in
its interior. For textures created with a binary labeling, there is a natural symmetry
in the field definition: the interior contour of one region is the exterior contour of
another region. The generated fields of each region will therefore blend smoothly
with one another.
Because of how distance fields are used, our approach requires that field values do
not vary widely away from the region boundary. We have chosen to use R-functions
to produce the distance fields of each contour of a region.
Chapter 5 showed several examples of using the smooth distance field of each
region to create more complex textures. The textures we produced are both smooth
and continuous, and can contain different non-convex shapes.

6.1.4 Comparison
Chapter 2 previously mentioned partition of unity textures and reaction-diffusion
textures as other methods that can be used to produce different types of smooth
organic texture. Our method is distinguished from previous approaches by being able
to generate texture that is continuous, and that can contain smooth and non-convex
texture elements. This section gives a comparison between our method, Worley noise,
and partition of unity textures.
The approach we take is similar to partition of unity textures in several ways: we
use a mesh to define our texture, and we use a weighted average of field values to
determine the color of a pixel. A major drawback of partition of unity textures is the
difficulty in creating smooth non-convex shapes. To see this drawback, we observe
that the weight functions possess the Kronecker delta property (Section 3.3.3), and
will therefore have a texture with unit gradient at each feature point. When a region
is comprised of multiple points, the boundary of the region becomes bumpy and
irregular. Figure 6.6 shows several textures generated using partition of unity. In our
method, the smoothness of a boundary is governed by the edges of the mesh. We are
able to produce both irregular and smooth patterns by varying how a mesh is defined
and constructed, as shown in Figure 6.8.
Reaction-diffusion textures use chemical simulation systems to generate a variety
of organic patterns, including the smooth and non-convex regions we desire. Reaction-
diffusion textures can also generate additional patterns that cannot be easily produced
with our method. For example, leopard spots were created using reaction-diffusion
by using a cascade system of chemical reactions, as described in Section 2.5. We show
several patterns generated with reaction-diffusion in Figure 6.7.
The difficulty with using reaction-diffusion textures is that the method requires
numerically solving a set of non-linear partial differential equations. The texture is
discretized to a particular resolution, and the parameters of the texture may be diffi-
cult to control. In contrast, our approach gives a continuous texture representation,
with region boundaries defined using the mesh edges. The parameters of our method
are more closely related to the final output texture, and we expect to create different
textures by varying how a mesh is defined and labeled.

6.2 Storage and Performance


In addition to producing smooth organic textures, we also desire our procedural tex-
ture function to use as little memory as possible, and to be able to quickly generate
a texture. This section describes the storage and computational complexity of our

(a) Random binary (b) Breadth-first (c) Depth-first

Figure 6.6: Examples of patterns created using partition of unity textures.

(a) Stripe pattern (b) Spot pattern (c) Leopard pattern

Figure 6.7: Examples of patterns created using reaction diffusion.

(a) Random binary (b) Spot pattern (c) Stripe pattern

Figure 6.8: Examples of patterns created using our method.



algorithm.
Because our texture is defined using a mesh, storage of our texture is linear in the
number of mesh vertices. The entire mesh, as well as the labels associated with each
point, are required to determine the pixel value at a specific location.

6.2.1 Random Access


To evaluate the texture at a location, we first find a set of candidate regions that may
contribute to the pixel value. We determine the set of candidate regions as follows.
For each region, we determine the maximal area of the texture over which the region can contribute to a pixel value; we refer to this area as the support of the region. We form a bounding box around each region based on its support, and place the
regions into a spatial partitioning structure. The set of candidate regions can be
obtained in O(log n) time in the number of regions.
Once we have obtained the set of candidate regions, we evaluate the contribution
of each region to determine the pixel value. The field value of all regions are combined
together to determine the pixel value. Because we use a spatial partitioning structure
to determine the set of candidate regions, random access evaluation of the texture is
dependent on the number of regions in the texture.

6.2.2 Fixed Resolution


Instead of determining the value of the texture one pixel at a time, a faster method
can be used to obtain the entire texture at once for a given n × n resolution. Rather than using a spatial partitioning structure to determine which regions contribute to a
pixel, we iterate through the support of each region and add the region contribution to
the corresponding pixel. Such an approach was previously proposed by Runions [70]
for the computation of a generalized spline.
A texture at a fixed n×n resolution is generated as follows. Let {u0 , u1 , . . . , un×n }
be the set of pixel locations we wish to compute a pixel value for, and let wi (p) be
the weight function of a particular region with associated color ai .
We allocate an array A : {0, 1, . . . , n × n} → Rn to accumulate weighted color values, and another array B : {0, 1, . . . , n × n} → R to store the sum of weight values at each pixel
location.
For each region, we iterate through pixel locations that are within the region
support. For a pixel location uj within a region support we add wi (uj ) · ai to A[j].
At the same time, we also add wi (uj ) to B[j]. After we have iterated through all
regions, we obtain

A[j] = Σ_{i=0}^{N} wi (uj ) · ai    (6.1)
B[j] = Σ_{i=0}^{N} wi (uj )    (6.2)

where N is the number of regions in the texture. The color value of a pixel at location
uj is obtained by dividing A[j] by B[j].
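A sketch of this accumulation is given below. Representing a region as a tuple of its support pixels, weight function, and color is an assumption made for the example rather than a description of the actual implementation.

import numpy as np

def render_fixed_resolution(regions, n):
    """Accumulate region contributions per pixel (Eqs. (6.1) and (6.2)) and
    normalize. Each region is assumed to be (support, w, color), where support
    is an iterable of (pixel_index, pixel_location) pairs inside its support."""
    A = np.zeros((n * n, 3))                # accumulated weighted colors
    B = np.zeros(n * n)                     # accumulated weights
    for support, w, color in regions:
        c = np.asarray(color, dtype=float)
        for j, u in support:
            wij = w(u)
            A[j] += wij * c
            B[j] += wij
    B = np.maximum(B, 1e-12)                # avoid dividing by zero outside all supports
    return (A / B[:, None]).reshape(n, n, 3)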

6.2.3 Field Value Computation


Another concern related to the performance of our algorithm is the efficient evaluation
of the weight function wi (p) for a region. Evaluating wi (p) involves finding the smooth
signed distance field for both the interior and exterior contour. For simple spot
patterns, where the number of edges in the interior and exterior contours is small, we can directly compute the field value as described in Section 3.6. However, for larger
regions where a contour can be composed of many segments, direct computation
of the smooth distance field can create both performance problems and numerical
instabilities.
To improve performance, a line segment is included in the field calculation only
within a local area of a region. Because closer line segments are weighted exponentially
more in the distance calculation, we can disregard certain segments far enough away.
The difficulty with this approach is that a cutoff distance would need to be defined
relative to the distance of other line segments. Using a distance field approach allowed
us to define texture values for all of R2 , but can result in a large variation in the sizes
of produced regions and thus large variations of influences from contour line segments.
A simple approach to restrict the area of a line segment contribution is to use the
edges of the texture mesh. Let (v1 , v2 ) be a line segment that is part of a contour,
and let a be the length of the longest edge from either v1 or v2 . We restrict the field
calculation for the line segment to a square region with center (v1 + v2 )/2 with side
length s = K · a where K is a constant used to control the smoothness of the resulting

Table 6.1: Timing results for our algorithm for a binary texture (400 × 400).

Parameter K   Point Spacing d   Points   Edges   Time
K = 1         0.025             1109     3304    1.078s
              0.050              310      911    0.875s
              0.100               92      263    0.689s
K = 2         0.025             1109     3304    2.922s
              0.050              310      911    2.469s
              0.100               92      263    2.016s
K = 4         0.025             1109     3304    7.406s
              0.050              310      911    6.406s
              0.100               92      263    4.922s

field. Larger values of K increase the computation time but generate a smoother field.
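The restriction can be sketched as follows, with the support represented as an axis-aligned square and the longest incident edge length assumed to be precomputed from the mesh.

def segment_support(v1, v2, longest_incident_edge, K):
    """Axis-aligned square of side K * a centred on the segment midpoint,
    where a is the longest mesh edge incident to v1 or v2."""
    cx, cy = (v1[0] + v2[0]) / 2.0, (v1[1] + v2[1]) / 2.0
    half = K * longest_incident_edge / 2.0
    return (cx - half, cy - half, cx + half, cy + half)

def segment_contributes(box, p):
    """True if point p lies inside the segment's support box."""
    x0, y0, x1, y1 = box
    return x0 <= p[0] <= x1 and y0 <= p[1] <= y1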
Table 6.1 shows the timing results of our method for a texture created with a
random binary labeling in a Poisson disk distribution, with blending radius parameter
r = d/4, and with different values for the parameter K. The timing results were
generated on an Intel Core i5-5200U processor using a single-threaded unoptimized
implementation. We note that the performance of our algorithm is linear in the size
of the image. Table 6.2 shows timing results for a random binary texture generated
at different resolutions.
The choice of K generally depends on the parameters used to generate the texture. For the binary texture used in our timing results, we do not notice discontinuity artifacts beyond K = 2.

6.3 Summary
This chapter discussed the rationale of our algorithm. We described how using a mesh
allows us to group points together, and provides us a way to define region boundaries.
Because each region is defined using a pair of contours, we can create smooth fields
that limit bulge artifacts. Creating a field for each region then allows us to create a

Table 6.2: Timing results for our algorithm for different resolutions.

Resolution   Point Spacing d   Points   Edges   Time
200 × 200    0.025             1109     3304    1.078s
             0.050              310      911    0.875s
             0.100               92      263    0.689s
400 × 400    0.025             1109     3304    2.922s
             0.050              310      911    2.469s
             0.100               92      263    2.016s
800 × 800    0.025             1109     3304    11.156s
             0.050              310      911    9.890s
             0.100               92      263    8.109s

variety of effects in the texture.


We then described different performance concerns related to our algorithm. We
showed how a pixel value could be evaluated at a random location in the texture, and
how to generate a texture at a fixed resolution. We concluded by presenting the
timing results of our algorithm for different resolutions and performance parameters.
Chapter 7

Conclusion

7.1 Summary
We presented a method for generating smooth organic patterns using labeled points
on a mesh. We used the edges of a mesh to group points together, and to define
region boundaries in the texture. Each region is defined by blending together the
distance fields of two different sets of boundary edges, and produces a field for the
region that is everywhere smooth. We created a texture by composing the fields of
different regions together.
Our method is distinguished from existing techniques by being able to produce
a continuous, implicit representation of texture that can contain both smooth and
irregularly shaped regions. Regions in the texture do not have to be convex, and
can be smoothly blended with each other. Furthermore, the inputs to our method
are closely related to the output texture, and different textures can be generated by
varying how a mesh is constructed.
Partition of unity textures produce continuous textures, but region boundaries
would appear bumpy and irregular. Reaction-diffusion textures produce a variety of
organic texture, but each texture is discretized, and its input parameters are a set of partial differential equations.
There are several limitations to our method as well. In our approach, the gradient
magnitude of a region distance field is not constant, which may cause variations in how
regions are blended in the texture. Section 5.4.2 showed how the blending behaviour
of a stripe varied depending on the stripe width. In Section 5.3.2 we discussed how
texture artifacts could be created depending on how the underlying mesh was defined.
Despite the above limitations, we have shown several types of smooth and irregular
patterns. We were able to produce different spot, stripe, and Voronoi textures that
resembled patterns found in the natural world.

7.2 Future Work


We believe there are several promising directions for future work. We have not yet
considered applying our algorithm to generate surface texture, which is a necessary
requirement for texturing surfaces without warping and seam artifacts. Other possi-
bilities for future work involve varying how the texture mesh is constructed, changing
how a smooth distance field is defined, and producing complex multi-scale patterns.
Many of the patterns we generated were produced by distributing points in a
Poisson disk distribution, and applying a Delaunay triangulation. An area of further
exploration is to consider alternative mesh construction techniques. We have previ-
ously described how varying the distribution of points could produce non-stationary
texture, but more interesting textures could be generated by using more sophisticated
techniques. For example, we can consider using different mesh subdivision schemes
to control the level of detail of the texture.
Another possibility for future work is to examine alternative approaches to con-
structing smooth distance fields. Our method blended smooth distance fields created
using R-functions. Each smooth distance field was constructed by joining together a
set of clipped implicit lines using R-equivalence. We can consider using other varia-
tions of constructing smooth distance fields by changing how implicit fields are created
and combined together.
The more general problem we wish to solve is how to obtain smooth surfaces,
given that surface boundaries are specified using discontinuous line segments. Ideally,
a method would create surfaces that could be easily parameterized to provide con-
trol over the desired smoothness. Further work could involve investigating applying
potential-based or convolution-based methods to construct smooth regions. Instead
of blending between two sets of distance fields, one could investigate producing an
everywhere smooth field using only a single set of line primitives.
We would also like to explore how to generate more complex patterns by combining
multiple layers of texture together. We could consider generating textures at multiple
levels of detail using different point densities, and combining these textures to form
a complex pattern. Another approach is to have the point set and mesh in a layer be
dependent on layers previously generated. Many patterns found in the natural world
consist of features that have a large variety of size and shape.
Although we can extend our method in several ways, we have shown the usefulness
of using smooth signed distance fields to produce texture. Our approach gave us a
flexible and resolution-independent method to construct several different types of
smooth and organic patterns. We believe further study of our work may lead to the
generation of many other complex organic phenomena.
List of References

[1] N. Ahuja and J.-H. Chuang, “Shape representation using a generalized potential
field model,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 2, pp. 169–
176, Feb. 1997.
[2] K. Anjyo, J. P. Lewis, and F. Pighin, “Scattered data interpolation for computer
graphics,” in ACM SIGGRAPH 2014 Courses, ser. SIGGRAPH ’14. New York,
NY, USA: ACM, 2014, pp. 27:1–27:69.
[3] B. R. D. Araújo, R. A. Redol, J. Armando, and P. Jorge, "Blobmaker: Free-form modelling with variational implicit surfaces," in Proc. of the 12th Portuguese
Computer Graphics Meeting, 2003, pp. 17–26.
[4] M. Ashikhmin, “Synthesizing natural textures,” in Proceedings of the 2001 Sym-
posium on Interactive 3D Graphics, ser. I3D ’01. New York, NY, USA: ACM,
2001, pp. 217–226.
[5] T. Baert, “Eye,” https://www.flickr.com/photos/60588229@N05/5725913600,
2011, [Online; accessed 03-09-16]. Licensed under CC BY-NC 2.0.
[6] J. B. Bard, “A model for generating aspects of zebra and other mammalian coat
patterns,” Journal of Theoretical Biology, vol. 93, no. 2, pp. 363 – 385, 1981.
[7] P. Batty, “Lizard,” https://www.flickr.com/photos/ebatty/17453720, 2003, [On-
line; accessed 03-09-16] Licensed under CC BY-NC-SA 2.0.
[8] A. Belyaev, P.-A. Fayolle, and A. Pasko, “Technical note: Signed Lp-distance
fields,” Comput. Aided Des., vol. 45, no. 2, pp. 523–528, Feb. 2013.
[9] A. G. Belyaev and P.-A. Fayolle, “On variational and PDE-based distance func-
tion approximations,” Computer Graphics Forum, 2015.
[10] Bemap, “Christmas tree worm on brain coral,” https://www.flickr.com/photos/
40626436@N00/4502224625, 2010, [Online; accessed 03-09-16]. Licensed under
CC BY-NC 2.0.
[11] A. Biswas and V. Shapiro, “Approximate distance fields with non-vanishing gra-
dients,” Graph. Models, vol. 66, no. 3, pp. 133–159, May 2004.
[12] J. Bloomenthal, “Bulge elimination in convolution surfaces,” Computer Graphics
Forum, vol. 16, no. 1, pp. 31–41, 1997.


[13] J. Bloomenthal and K. Shoemake, “Convolution surfaces,” Computer Graphics


(Proc. ACM SIGGRAPH 91), vol. 25, no. 4, pp. 251–256, Jul. 1991.
[14] J. Bloomenthal and B. Wyvill, Eds., Introduction to Implicit Surfaces. San
Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 1997.
[15] D. S. Broomhead and D. Lowe, “Multivariable functional interpolation and adap-
tive networks,” Complex Systems 2, pp. 321–355, 1988.
[16] F. Canezin, G. Guennebaud, and L. Barthe, “SMI 2013: Adequate inner bound
for geometric modeling with compact field functions,” Computers & Graphics,
vol. 37, no. 6, pp. 565–573, Oct. 2013.
[17] J. Caron and D. Mould, “Partition of unity parametrics for texture synthesis,”
in Proceedings of Graphics Interface 2013, ser. GI ’13. Toronto, Ont., Canada,
Canada: Canadian Information Processing Society, 2013, pp. 173–179.
[18] ——, “Texture synthesis using label assignment over a graph,” Computers &
Graphics, vol. 39, pp. 24 – 36, 2014.
[19] J. Cathey-Roberts, “Puffer fish,” https://www.flickr.com/photos/
catheyroberts/2913111527, 2008, [Online; accessed 03-09-16]. Licensed un-
der CC BY-NC 2.0.
[20] J. F. Clay, “Zebra,” https://www.flickr.com/photos/jamesclay/1380022122,
2006, [Online; accessed 03-09-16] Licensed under CC BY-NC 2.0.
[21] ——, “Zebra,” https://www.flickr.com/photos/jamesclay/1379943696, 2006,
[Online; accessed 03-09-16]. Licensed under CC BY-NC 2.0.
[22] R. L. Cook and T. DeRose, “Wavelet noise,” ACM Trans. Graph., vol. 24, no. 3,
pp. 803–811, Jul. 2005.
[23] K. Crane, C. Weischedel, and M. Wardetzky, “Geodesics in heat: A new approach
to computing distance based on heat flow,” ACM Trans. Graph., vol. 32, no. 5,
pp. 152:1–152:11, Oct. 2013.
[24] I. Dawned, “Toad,” https://www.flickr.com/photos/ivydawned/8734243781,
2013, [Online; accessed 03-09-16]. Licensed under CC BY-NC-SA 2.0.
[25] D. S. Ebert, F. K. Musgrave, D. Peachey, K. Perlin, and S. Worley, Texturing and
Modeling: A Procedural Approach, 3rd ed. San Francisco, CA, USA: Morgan
Kaufmann Publishers Inc., 2002.
[26] A. A. Efros and W. T. Freeman, “Image quilting for texture synthesis and trans-
fer,” in Proceedings of the 28th Annual Conference on Computer Graphics and
Interactive Techniques, ser. SIGGRAPH ’01. New York, NY, USA: ACM, 2001,
pp. 341–346.
[27] A. A. Efros and T. K. Leung, “Texture synthesis by non-parametric sampling,”
in Proceedings of the International Conference on Computer Vision-Volume 2
- Volume 2, ser. ICCV ’99. Washington, DC, USA: IEEE Computer Society,
1999, pp. 1033–.

[28] S. Epstein, “clown triggerfish,” https://www.flickr.com/photos/serenae/


9732242870, 2013, [Online; accessed 03-09-16]. Licensed under CC BY-NC-SA
2.0.
[29] P. F. Felzenszwalb and D. P. Huttenlocher, “Distance transforms of sampled
functions,” Theory of Computing, vol. 8, no. 1, pp. 415–428, 2012.
[30] C. Galbraith, P. Prusinkiewicz, and B. Wyvill, “Modeling a murex cabritii sea
shell with a structured implicit surface modeler,” The Visual Computer, vol. 18,
no. 2, pp. 70–80, 2002.
[31] B. Galerne, A. Lagae, S. Lefebvre, and G. Drettakis, “Gabor noise by example,”
ACM Trans. Graph., vol. 31, no. 4, pp. 73:1–73:9, Jul. 2012.
[32] A. Gomes, I. Voiculescu, J. Jorge, B. Wyvill, and C. Galbraith, Implicit Curves
and Surfaces: Mathematics, Data Structures and Algorithms, 1st ed. Springer
Publishing Company, Incorporated, 2009.
[33] O. Gourmel, L. Barthe, M.-P. Cani, B. Wyvill, A. Bernhardt, M. Paulin, and
H. Grasberger, “A gradient-based implicit blend,” ACM Trans. Graph., vol. 32,
no. 2, pp. 12:1–12:12, Apr. 2013.
[34] B. Gratwicke, “Oophaga pumilio,” https://www.flickr.com/photos/
briangratwicke/14823348718, 2014, [Online; accessed 03-09-16]. Licensed
under CC BY 2.0.
[35] J. Han, K. Zhou, L.-Y. Wei, M. Gong, H. Bao, X. Zhang, and B. Guo, “Fast
example-based surface texture synthesis via discrete optimization,” The Visual
Computer, vol. 22, no. 9-11, pp. 918–925, 2006.
[36] T. R. Jones, “Efficient generation of poisson-disk sampling patterns,” Journal of
Graphics, GPU, and Game Tools, vol. 11, no. 2, pp. 27–36, 2006.
[37] T. Ju, “Scattered-data interpolation,” in A Sampler of Useful Computational
Tools for Applied Geometry, Computer Graphics, and Image Processing, 1st ed.,
D. Cohen-Or, Ed. Natick, MA, USA: A. K. Peters, Ltd., 2015, vol. 21, pp.
147–162.
[38] O. Karpenko, J. F. Hughes, and R. Raskar, “Free-form sketching with variational
implicit surfaces,” Computer Graphics Forum, vol. 21, no. 3, pp. 585–594, 2002.
[39] D. Keats, “Emperor angelfish,” https://www.flickr.com/photos/dkeats/
6443170735, 2011, [Online; accessed 03-09-16]. Licensed under CC BY 2.0.
[40] J. Kirkland, “Ebv-4531.jpg,” https://www.flickr.com/photos/27145142@N00/
4223358735, 2009, [Online; accessed 03-09-16]. Licensed under CC BY-NC-SA
2.0.
[41] V. Kwatra, A. Schödl, I. Essa, G. Turk, and A. Bobick, “Graphcut textures:
Image and video synthesis using graph cuts,” ACM Trans. Graph., vol. 22, no. 3,
pp. 277–286, Jul. 2003.
[42] A. Lagae, S. Lefebvre, R. Cook, T. DeRose, G. Drettakis, D. S. Ebert, J. P. Lewis,
K. Perlin, and M. Zwicker, “State of the art in procedural noise functions,” in EG
2010 - State of the Art Reports. Norrköping, Sweden: Eurographics Association,
May 2010.
[43] A. Lagae, S. Lefebvre, G. Drettakis, and P. Dutré, “Procedural noise using sparse
Gabor convolution,” ACM Trans. Graph., vol. 28, no. 3, pp. 54:1–54:10, Jul.
2009.
[44] A. Lagae, S. Lefebvre, and P. Dutré, “Improving Gabor noise,” IEEE Transac-
tions on Visualization and Computer Graphics, vol. 17, no. 8, pp. 1096–1107,
August 2011.
[45] X. Latta, “Cutie patootie,” https://www.flickr.com/photos/xanboozled/
446681285, 2007, [Online; accessed 03-09-16]. Licensed under CC BY-NC-SA
2.0.
[46] H. Ledoux and C. Gold, Developments in Spatial Data Handling: 11th Interna-
tional Symposium on Spatial Data Handling. Berlin, Heidelberg: Springer Berlin
Heidelberg, 2005, ch. An Efficient Natural Neighbour Interpolation Algorithm for
Geoscientific Modelling, pp. 97–108.
[47] S. Lefebvre and H. Hoppe, “Parallel controllable texture synthesis,” ACM Trans.
Graph., vol. 24, no. 3, pp. 777–786, Jul. 2005.
[48] J. P. Lewis, “Algorithms for solid noise synthesis,” Computer Graphics (Proc.
ACM SIGGRAPH 89), vol. 23, no. 3, pp. 263–270, Jul. 1989.
[49] J.-P. Lewis, “Texture synthesis for digital painting,” Computer Graphics (Proc.
ACM SIGGRAPH 84), vol. 18, no. 3, pp. 245–252, Jan. 1984.
[50] L. Liang, C. Liu, Y.-Q. Xu, B. Guo, and H.-Y. Shum, “Real-time texture synthe-
sis by patch-based sampling,” ACM Trans. Graph., vol. 20, no. 3, pp. 127–150,
Jul. 2001.
[51] Lydia, “Butterfly,” https://www.flickr.com/photos/lydz/171355654, 2006,
[Online; accessed 03-09-16]. Licensed under CC BY-NC-SA 2.0.
[52] S. Mann, “A study of two implicit data interpolation schemes,” Department of
Computer Science, University of Waterloo, Tech. Rep. CS-2013-09, 2013.
[53] Marc, “Sleeping giraffe,” https://www.flickr.com/photos/sumofmarc/
9617372714, 2013, [Online; accessed 03-09-16]. Licensed under CC BY-NC-ND
2.0.
[54] K. Mehlhorn, “A faster approximation algorithm for the Steiner problem in
graphs,” Information Processing Letters, vol. 27, no. 3, pp. 125–128, 1988.
[55] H. Meinhardt, Models of Biological Pattern Formation. Academic Press, Lon-
don, 1982.
[56] A. Meintjes, “Leopard,” https://www.flickr.com/photos/arnolouise/3195871485,
2009, [Online; accessed 03-09-16]. Licensed under CC BY-NC-SA 2.0.
[57] B. S. Morse, T. S. Yoo, P. Rheingans, D. T. Chen, and K. R. Subramanian, “In-
terpolating implicit surfaces from scattered surface data using compactly sup-
ported radial basis functions,” in ACM SIGGRAPH 2005 Courses, ser. SIG-
GRAPH ’05. New York, NY, USA: ACM, 2005.
[58] J. D. Murray, “On pattern formation mechanisms for lepidopteran wing patterns
and mammalian coat markings,” Philosophical Transactions of the Royal Society
of London B: Biological Sciences, vol. 295, no. 1078, pp. 473–496, 1981.
[59] Y. Ohtake, A. Belyaev, M. Alexa, G. Turk, and H.-P. Seidel, “Multi-level parti-
tion of unity implicits,” in ACM SIGGRAPH 2003 Papers, ser. SIGGRAPH ’03.
New York, NY, USA: ACM, 2003, pp. 463–470.
[60] V. Ostromoukhov, C. Donohue, and P.-M. Jodoin, “Fast hierarchical importance
sampling with blue noise properties,” ACM Trans. Graph., vol. 23, no. 3, pp.
488–495, Aug. 2004.
[61] R. Palazzani, “Lizard,” https://www.flickr.com/photos/veridiano3/
13959669025, 2014, [Online; accessed 03-09-16]. Licensed under CC BY-NC-SA
2.0.
[62] K. Perlin and E. M. Hoffert, “Hypertexture,” Computer Graphics (Proc. ACM
SIGGRAPH 89), pp. 253–262, 1989.
[63] K. Perlin, “An image synthesizer,” Computer Graphics (Proc. ACM SIGGRAPH
85), pp. 287–296, 1985.
[64] ——, “Improving noise,” in Proceedings of the 29th Annual Conference on Com-
puter Graphics and Interactive Techniques, ser. SIGGRAPH ’02. New York,
NY, USA: ACM, 2002, pp. 681–682.
[65] E. Praun, A. Finkelstein, and H. Hoppe, “Lapped textures,” in Proceedings of the
27th Annual Conference on Computer Graphics and Interactive Techniques, ser.
SIGGRAPH ’00. New York, NY, USA: ACM Press/Addison-Wesley Publishing
Co., 2000, pp. 465–470.
[66] Rastoney, “Blue poison dart frog,” https://www.flickr.com/photos/
planmygreen/2690548176, 2008, [Online; accessed 03-09-16]. Licensed
under CC BY-NC 2.0.
[67] M. Reitter, “smile,” https://www.flickr.com/photos/m_reitter/2280508128,
2008, [Online; accessed 03-09-16]. Licensed under CC BY-NC-SA 2.0.
[68] R. J. Renka, “Multivariate interpolation of large sets of scattered data,” ACM
Trans. Math. Softw., vol. 14, no. 2, pp. 139–148, Jun. 1988.
[69] A. Ricci, “A constructive geometry for computer graphics.” Comput. J., vol. 16,
no. 2, pp. 157–160, 1973.
[70] A. Runions and F. Samavati, “Partition of unity parametrics: a framework for
meta-modeling,” The Visual Computer, vol. 27, no. 6-8, pp. 495–505, 2011.
[71] M. Sanchez, O. Fryazinov, P.-A. Fayolle, and A. Pasko, “Convolution filtering
of continuous signed distance fields for polygonal meshes,” Computer Graphics
Forum, vol. 34, no. 6, pp. 277–288, 2015.
[72] B. Schachter and N. Ahuja, “Random pattern generation processes,” Computer
Graphics and Image Processing, vol. 10, no. 2, pp. 95–114, Jun. 1979.
[73] U. Schwarzbach, “Lionfish,” https://www.flickr.com/photos/uwebkk/
13773065613, 2014, [Online; accessed 03-09-16]. Licensed under CC BY-NC-SA
2.0.
[74] V. Shapiro, “Semi-analytic geometry with R-functions,” Acta Numerica, vol. 16,
pp. 239–303, May 2007.
[75] V. Shapiro and I. Tsukanov, “Implicit functions with guaranteed differential
properties,” in Proceedings of the Fifth ACM Symposium on Solid Modeling and
Applications, ser. SMA ’99. New York, NY, USA: ACM, 1999, pp. 258–269.
[76] D. Shepard, “A two-dimensional interpolation function for irregularly-spaced
data,” in Proceedings of the 1968 23rd ACM National Conference, ser. ACM ’68.
New York, NY, USA: ACM, 1968, pp. 517–524.
[77] P. Shirley and S. Marschner, Fundamentals of Computer Graphics, 3rd ed. Nat-
ick, MA, USA: A. K. Peters, Ltd., 2009.
[78] R. Sibson, “A brief description of natural neighbour interpolation,” in Interpret-
ing multivariate data, V. Barnett, Ed. John Wiley & Sons, 1981, vol. 21, pp.
21–36.
[79] C. Soler, M.-P. Cani, and A. Angelidis, “Hierarchical pattern mapping,” ACM
Trans. Graph., vol. 21, no. 3, pp. 673–680, Jul. 2002.
[80] I. Sutton, “Spotted marsh frog,” https://www.flickr.com/photos/22616984@
N07/5644930368, 2011, [Online; accessed 03-09-16]. Licensed under CC BY 2.0.
[81] J. Swick, “Al the leopard gecko,” https://www.flickr.com/photos/simplyjessi/
6328338235, 2010, [Online; accessed 03-09-16]. Licensed under CC BY 2.0.
[82] X. Tong, J. Zhang, L. Liu, X. Wang, B. Guo, and H.-Y. Shum, “Synthesis
of bidirectional texture functions on arbitrary surfaces,” in Proceedings of the
29th Annual Conference on Computer Graphics and Interactive Techniques, ser.
SIGGRAPH ’02. New York, NY, USA: ACM, 2002, pp. 665–672.
[83] A. M. Turing, “The chemical basis of morphogenesis,” Philosophical Transactions
of the Royal Society of London B: Biological Sciences, vol. 237, no. 641, pp. 37–
72, 1952.
[84] G. Turk, “Generating textures on arbitrary surfaces using reaction-diffusion,”
Computer Graphics (Proc. ACM SIGGRAPH 91), vol. 25, no. 4, pp. 289–298,
Jul. 1991.
[85] G. Turk and J. F. O’Brien, “Variational implicit surfaces,” Georgia Institute of
Technology, Tech. Rep. GIT-GVU-99-15, May 1999.
[86] L. Wang, K. Zhou, Y. Yu, and B. Guo, “Vector solid textures,” ACM Trans.
Graph., vol. 29, no. 4, pp. 86:1–86:8, Jul. 2010.
[87] L.-Y. Wei, S. Lefebvre, V. Kwatra, and G. Turk, “State of the art in example-
based texture synthesis,” in Eurographics 2009, State of the Art Report, EG-
STAR. Eurographics Association, 2009.
[88] L.-Y. Wei and M. Levoy, “Fast texture synthesis using tree-structured vec-
tor quantization,” in Proceedings of the 27th Annual Conference on Computer
Graphics and Interactive Techniques, ser. SIGGRAPH ’00. New York, NY,
USA: ACM Press/Addison-Wesley Publishing Co., 2000, pp. 479–488.
[89] L.-Y. Wei and M. Levoy, “Order-independent texture synthesis,” Computer Science
Department, Stanford University, Tech. Rep. TR-2002-01, 2002.
[90] A. Witkin and M. Kass, “Reaction-diffusion textures,” Computer Graphics
(Proc. ACM SIGGRAPH 91), vol. 25, no. 4, pp. 299–308, Jul. 1991.
[91] S. Worley, “A cellular texture basis function,” in Proceedings of the 23rd Annual
Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH
’96. New York, NY, USA: ACM, 1996, pp. 291–294.
[92] Q. Wu and Y. Yu, “Feature matching and deformation for texture synthesis,”
ACM Trans. Graph., vol. 23, no. 3, pp. 364–367, Aug. 2004.
[93] B. Wyvill, A. Guy, and E. Galin, “Extending the CSG tree. Warping, blend-
ing and boolean operations in an implicit surface modeling system,” Computer
Graphics Forum, vol. 18, no. 2, pp. 149–158, 1999.
[94] L. Ying, A. Hertzmann, H. Biermann, and D. Zorin, “Texture and shape syn-
thesis on surfaces,” in Rendering Techniques 2001, ser. Eurographics, S. Gortler
and K. Myszkowski, Eds. Springer Vienna, 2001, pp. 301–312.
[95] C. Zanni, P. Bares, A. Lagae, M. Quiblier, and M.-P. Cani, “Geometric details on
skeleton-based implicit surfaces,” in Eurographics 2012 - 33rd Annual Conference
of the European Association for Computer Graphics, ser. Eurographics 2012 :
Short Paper. Cagliari, Italy: Eurographics, May 2012, pp. 49–52.
