
Generating Functions: Powerful Tools for Recurrence Relations.
Hermite Polynomials Generating Function

Christoffer Rydén
Department of Mathematics, Linköping University

LiTH-MAT-EX–2023/04–SE

Credits: 16 hp
Level: G2
Supervisor: Milagros Izquierdo,
Department of Mathematics, Linköping University
Examiner: Göran Bergqvist,
Department of Mathematics, Linköping University
Linköping: June 2023
Abstract

In this report we plunge into the fascinating world of generating functions. Generating functions showcase the "power of power series", giving more depth to the word "power" in power series. We start off small, to get a good understanding of the generating function and what it does, and, of course, explain why it works and why we can do some of the things we do with them. Throughout the text there are many examples that help the reader grasp the mathematical object that is the generating function.

We look at several kinds of generating functions. While establishing our understanding of these, the main focus is the "ordinary power series" generating function ("ops"), which we discuss before moving on to the "exponential generating function" ("egf"). During our discussion of the ops we see a first-time-in-the-literature derivation of the generating function for a recurrence relation regarding "branched coverings". After finishing the discussion of the egf we move on to the Hermite polynomials and show how to derive their generating function, a generating function that generates functions. Lastly we take a quick look at the "moment generating function".

Keywords:
Ordinary power series generating function, Exponential generating function, Moment generating function, Hermite polynomials.

URL for electronic version:


The url to the thesis

Rydén, 2023. iii


Acknowledgements

I would like to thank my supervisor, Milagros Izquierdo, for helping me throughout this work. Her knowledge and skill alone make her a mathematical "role model", and she is someone that I look up to.

Big thanks to Ludwig for supplying me with this template; without it I would not have gotten far.

Last, but surely not least, thanks to Kalle for helping me out with my figures and for being a constant "ballpark" with whom I can always bounce ideas.

Nomenclature
G(x)             Ordinary power series, or exponential, generating function
M_X(s)           Moment generating function
[x^n]G(x)        The coefficient in front of x^n in the power series expansion of G(x)
[x^n/n!]G(x)     The coefficient in front of x^n/n! in the power series expansion of G(x)
D                Differentiation operator
xD               Differentiate first, then multiply by x
(1 + D)          Differentiate, then add the identity operator
{a_n}_{n=0}^∞    A sequence of real numbers
F_n              The n:th Fibonacci number
S(n, k)          Stirling numbers of the second kind
C_n              The n:th Catalan number
Q_n              Number of branched coverings with dihedral monodromy of degree 2n
H_n(x)           Hermite polynomial of degree n
H_{2n}           Hermite numbers
X                Random variable/probability distribution
E(X)             Expectation or mean of the random variable X
V(X)             Variance of the random variable X
X ∼ Ber(p)       The random variable X is Bernoulli distributed
r.h.s.           Right hand side
l.h.s.           Left hand side



Contents

1 Introduction

2 Generating functions
  2.1 Generating functions and power series
  2.2 Recurrence relations
      2.2.1 The method of generating functions

3 Ordinary Power Series Generating Function
  3.1 Stirling numbers of the second kind
  3.2 Catalan numbers

4 Exponential Generating Functions

5 Generating Function for Hermite Polynomials

6 Moment Generating Functions

7 Conclusions
Chapter 1

Introduction

Generating functions have been around for quite some time. In 1718 Abraham de Moivre solved the Fibonacci recurrence relation using a generating function. Leonhard Euler extended the technique in his study of partitions of integers in 1748. Later the generating functions developed further when they were used together with probability theory, and the moment generating function arose, presented by Pierre-Simon de Laplace in 1812. These historical examples are taken from [6].
Generating functions are sort of an infinitely long measuring tape that, instead of keeping track of units of length, keeps track of the numbers in a sequence. They bookkeep this information by coding the terms of a sequence as coefficients in a formal power series, which is a very efficient way to represent a sequence. They are also useful when solving many types of counting problems [16], for instance: how many integer solutions are there to the equation

x_1 + x_2 + · · · + x_n = C, for some C ∈ N,

with various constraints on the x_i [6]? This is often what we first learn when reading literature that introduces generating functions [6], [16]. Apart from solving counting problems, generating functions can be used to solve recurrence relations, which is something that we focus a lot on in this work.
Some generating functions generate functions instead of a sequence. These generating functions are often more interesting from a physical point of view, since physicists are often interested in solving differential equations, and such solutions come in the form of functions.

A different kind of generating function is the "moment generating function", used in probability and statistics, where it offers a good way to represent probability distributions. As the name implies, the moment generating function generates moments, and to possess the moments of a random variable is to possess all the magnitudes that characterize how it is distributed. This has applications in data analysis and artificial intelligence.
Chapter 2

Generating functions

A generating function keeps track of the numbers in a sequence. It is sort of an infinitely long measuring tape that, instead of telling us 0, 1, 2, . . . units of length, tells us a_0, a_1, a_2, · · · = {a_n}_{n=0}^∞, a sequence of numbers. The generating function comes in the form of a formal power series (here we follow [16]) where the n:th power of x acts as a "placeholder" for the number a_n.

Generating functions have a wide variety of uses and applications; for instance, they can be used to solve many different types of counting problems, such as the number of non-negative integer solutions to

x_1 + x_2 + · · · + x_n = C, for some C ∈ N. (2.1)

In this text, however, we will focus more on another use of generating functions, namely their ability to solve recurrence relations in an effective way.

Throughout the report we will see many different examples and applications to different areas when we use generating functions to solve different types of recurrence relations. However, we start off with some definitions that can be found in [6] and [16].

2.1 Generating functions and power series


Definition 1 Let a_0, a_1, a_2, . . . be a sequence of real numbers. Then the function

G(x) = a_0 + a_1 x + a_2 x^2 + · · · = Σ_{n=0}^∞ a_n x^n (2.2)

is called the ordinary power series generating function ("ops") to the given sequence {a_n}_{n=0}^∞.



Definition 2 Let a_0, a_1, a_2, . . . be a sequence of real numbers. Then the function

G(x) = a_0 + a_1 x + a_2 x^2/2! + a_3 x^3/3! + · · · = Σ_{n=0}^∞ a_n x^n/n! (2.3)

is called the exponential generating function ("egf") to the given sequence {a_n}_{n=0}^∞.

Some clarification of the notation used in this report:

G ←→ops {a_n}_{n=0}^∞ means that the power series G is the ordinary power series ("ops") generating function for the given sequence {a_n}_{n=0}^∞, i.e. G(x) = Σ_{n=0}^∞ a_n x^n.

G ←→egf {a_n}_{n=0}^∞ means that the power series G is the exponential generating function for the given sequence {a_n}_{n=0}^∞, i.e. G(x) = Σ_{n=0}^∞ (a_n/n!) x^n.

This notation is also used in [19].


Example 1 The series 1 + x + x^2 + · · · = Σ_{n=0}^∞ x^n is a geometric series for |x| < 1, and we know the formula for such a series to be

Σ_{n=0}^∞ x^n = 1/(1 − x). (2.4)

Therefore the generating function for the sequence {1}_{n=0}^∞ is G(x) = 1/(1 − x), and we denote it by

1/(1 − x) ←→ops {1}_{n=0}^∞. (2.5)

In addition, since

e^x = 1 + x + x^2/2! + x^3/3! + · · · = Σ_{n=0}^∞ x^n/n! (2.6)

for every real number x, we have that e^x is the exponential generating function for the same sequence {1}_{n=0}^∞, instead denoted by

e^x ←→egf {1}_{n=0}^∞. (2.7)
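The two expansions in Example 1 are easy to sanity-check numerically. The following small sketch (my own illustration, not part of the thesis) compares truncated partial sums against the closed forms 1/(1 − x) and e^x:

```python
import math

# Partial sums of the ops and egf for the sequence {1}: sum x^n and sum x^n/n!.
x = 0.3  # any |x| < 1 works for the geometric series
ops_partial = sum(x**n for n in range(50))
egf_partial = sum(x**n / math.factorial(n) for n in range(50))

# Both partial sums agree with the closed forms to machine precision.
print(abs(ops_partial - 1 / (1 - x)) < 1e-12)  # True
print(abs(egf_partial - math.exp(x)) < 1e-12)  # True
```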

Now that we have some basic understanding of what a generating function is and how it may appear, let us continue and talk about when and in what ways we can use it.

A very important property of generating functions is that they are considered to be formal power series, and because of that we can view them as algebraic objects, meaning we do not have to worry about their radius of convergence [16]. An introduction to the theory of formal power series as algebraic objects can be found in [13].

We started this chapter with a metaphor, describing a generating function as a measuring tape, in the sense that for a fixed n the x^n in the power series Σ_{n=0}^∞ a_n x^n just acts as a way of keeping track of a_n. Clearly the value of x does not affect the sequence {a_n}_{n=0}^∞, meaning we can consider any x such that the power series converges, and when a series converges we know that it in fact represents an analytic function [2], and analytic functions are easy to work with. So all we need is that the series converges at least somewhere, and then we just assume that x is within that radius.
For example, for z ∈ C such that |z| < 1,

Σ_{n=0}^∞ z^n = 1/(1 − z) (2.8)

is an analytic function. Likewise,

Σ_{n=0}^∞ (az)^n = 1/(1 − az) (2.9)

is analytic for |z| < 1/a. But since we are considering formal power series that converge somewhere, we are free to "forget" about |z| and can just focus on the result instead of on when these results are valid. In this manner we can, for instance, take the derivative of a power series. Consider 2.8 and take the derivative on both sides; we get

Σ_{n=1}^∞ n z^{n−1} = Σ_{n=0}^∞ (n + 1) z^n = 1/(1 − z)^2. (2.10)

This "freedom" gives us many useful identities for our generating functions. Some of them are listed here [19], [6]:

1/(1 − x) = Σ_{n=0}^∞ x^n = 1 + x + x^2 + x^3 + · · · (2.11)

1/(1 − x)^2 = Σ_{n=0}^∞ (n + 1) x^n = 1 + 2x + 3x^2 + 4x^3 + · · · (2.12)

1/(1 − x)^{k+1} = Σ_{n=0}^∞ \binom{n+k}{n} x^n = 1 + \binom{1+k}{1} x + \binom{2+k}{2} x^2 + \binom{3+k}{3} x^3 + · · · (2.13)

1/(1 − ax) = Σ_{n=0}^∞ a^n x^n = 1 + ax + a^2 x^2 + a^3 x^3 + · · · (2.14)

1/(1 − ax)^n = Σ_{k=0}^∞ \binom{n+k−1}{k} a^k x^k = 1 + \binom{n}{1} ax + \binom{n+1}{2} a^2 x^2 + \binom{n+2}{3} a^3 x^3 + · · · (2.15)

e^x = Σ_{n=0}^∞ x^n/n! = 1 + x + x^2/2! + x^3/3! + · · · (2.16)
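The identities above can be checked mechanically. The helper below (my own sketch, not from the thesis) expands a rational function P(x)/Q(x) into power-series coefficients by formal long division, i.e. by solving Q·C = P coefficient by coefficient, and verifies 2.12 and 2.14:

```python
from fractions import Fraction

def series_coeffs(p, q, n_terms):
    """Coefficients c_0, ..., c_{n_terms-1} of P(x)/Q(x); p and q are
    coefficient lists in increasing powers of x, with q[0] != 0."""
    c = []
    for n in range(n_terms):
        s = Fraction(p[n] if n < len(p) else 0)
        for k in range(1, min(n, len(q) - 1) + 1):
            s -= q[k] * c[n - k]  # subtract contributions of known coefficients
        c.append(s / q[0])
    return c

# 2.12: 1/(1-x)^2 = 1/(1 - 2x + x^2) has coefficients n + 1
print([int(c) for c in series_coeffs([1], [1, -2, 1], 6)])  # [1, 2, 3, 4, 5, 6]
# 2.14 with a = 3: 1/(1 - 3x) has coefficients 3^n
print([int(c) for c in series_coeffs([1], [1, -3], 5)])     # [1, 3, 9, 27, 81]
```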
Adding, and also subtracting, power series, as well as multiplying by a constant c ∈ C, is quite trivial [2]:

Σ_{n=0}^∞ a_n z^n ± Σ_{n=0}^∞ b_n z^n = Σ_{n=0}^∞ (a_n ± b_n) z^n and c Σ_{n=0}^∞ a_n z^n = Σ_{n=0}^∞ (c a_n) z^n.

Multiplying two power series also behaves as expected; it follows the Cauchy product rule [2]:

(Σ_{n=0}^∞ a_n z^n)(Σ_{n=0}^∞ b_n z^n) = Σ_{n=0}^∞ c_n z^n, where c_n = Σ_{k=0}^n a_k b_{n−k}.
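The Cauchy product can be sketched directly on truncated coefficient lists (the helper name is my own):

```python
def cauchy_product(a, b):
    """c_n = sum_{k=0}^{n} a_k * b_{n-k} for truncated coefficient lists."""
    n_terms = min(len(a), len(b))
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(n_terms)]

# (1/(1-x)) * (1/(1-x)) = 1/(1-x)^2: ones convolved with ones give n + 1,
# matching identity 2.12.
ones = [1] * 6
print(cauchy_product(ones, ones))  # [1, 2, 3, 4, 5, 6]
```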

So far, so good; we can breathe out knowing that as long as we keep a formal view of our power series we can manipulate them without thinking twice about whether the series converges. That does not mean, however, that we never have to take convergence into account. If we need to draw analytical conclusions we will have to have convergence. When we have convergence our formal power series is just like any convergent power series, and we can use known results regarding such series to draw our conclusions.

We will run into a situation like that later, when we need to consider the limit of a generating function. When doing that we must respect the radius of convergence and not let z → a if a is not contained within that radius.

2.2 Recurrence relations


It is a common property of many sequences used in algorithms that the n:th number depends in some way on previous numbers in the sequence. When a sequence has this property, that the number a_n depends on a_{n−1}, or on several a_{n−i} for i < n, we call this relation a "recurrence relation", and the sequence may be expressed recursively using the previous numbers [16]. For example,

a_n = a_{n−1} (2.17)
defines the constant sequence {a_0}_{n=0}^∞. But here a_0 could be any number, meaning we also need some initial condition for the recurrence to be meaningful. Initial conditions are the values of the startup terms, satisfying the recurrence relation, before the relation takes effect [16]. So if we want to describe {1}_{n=0}^∞ with a recurrence relation we would write

a_n = a_{n−1}, n ≥ 1, a_0 = 1. (2.18)

Recurrence relations are central in the field of combinatorics, where they arise naturally when counting "stuff". For instance, consider the following problem: You have 3 vertical rods on a table and n discs with different diameters and with holes in the center so that they can "slide" down on the rods. The n discs are placed on rod nr. 1 in order of their diameter, the largest one being at the bottom and the smallest one being on top. Now you have to move all these discs from rod nr. 1 to one of the other rods. You may only move one disc at a time, and you may never place a larger disc on top of a smaller one.

The question that seeks an answer is: What is the smallest number of moves needed to achieve this?

This problem is called "The towers of Hanoi" and it was made famous by E. Lucas, a French mathematician of the nineteenth century [3]. Here a recurrence relation is obtained by the following way of reasoning.

Let a_n denote the smallest number of moves needed to move n discs from one rod to another. It is easy to see that a_0 = 0 and a_1 = 1; also, a_2 = 3 is not very hard to see. But what about a_n for any n ≥ 3?

In order to move disc nr. n (the largest disc) it clearly must be that we have all the other n − 1 discs stacked on one of the rods, say rod nr. 2, leaving rod nr. 3 empty. Now the smallest number of moves required to move those n − 1 discs to rod nr. 2 is, by definition, a_{n−1}. Since rod nr. 3 is empty we can move disc nr. n there, yielding one extra move. After moving that disc we will never touch it again, and now all that is left is to move the n − 1 discs from rod nr. 2 to rod nr. 3, which we know can be done in a_{n−1} moves. Adding all the moves together we get

a_n = 2a_{n−1} + 1, n ≥ 1, a_0 = 0, (2.19)

which is the sought-after recurrence relation.
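The recurrence 2.19 is easy to run directly. This throwaway snippet (mine, not the thesis author's) lists the minimum number of moves for small n:

```python
def hanoi_moves(n):
    """Iterate a_n = 2*a_{n-1} + 1 with a_0 = 0."""
    a = 0
    for _ in range(n):
        a = 2 * a + 1
    return a

print([hanoi_moves(n) for n in range(6)])  # [0, 1, 3, 7, 15, 31]
```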


Even though 2.19 is a nice formula in all its simplicity, it does have the downside of requiring one to calculate every number a_i, i < n, in the sequence to get the value of a_n. It is at this stage that the generating function will prove useful,

especially when it comes to more "complicated" recurrence relations [19]. But before we look at how we use generating functions to solve recurrence relations, we do a small recap on how to solve them using the method of homogeneous and particular solutions. The material regarding this method can be found in [16].

In short, the method of homogeneous and particular solutions builds upon the fact that a linear and homogeneous recurrence relation

a_n = c_1 a_{n−1} + c_2 a_{n−2} + · · · + c_k a_{n−k} (2.20)

has solutions of the form a_n = r^n. Note that the coefficients c_i are just constants. This is quite easy to see, since if we plug r^n into 2.20 we get

r^n = c_1 r^{n−1} + c_2 r^{n−2} + · · · + c_k r^{n−k}. (2.21)

With the condition r ≠ 0, we can divide 2.21 by r^{n−k}, which yields

r^k − c_1 r^{k−1} − c_2 r^{k−2} − · · · − c_k = 0. (2.22)

Of course, for r ≠ 0, a_n = r^n solves 2.20 if and only if r is a root of 2.22. Equation 2.22 is called the "characteristic equation" of 2.20, and r is called a "characteristic root".

In addition, a linear combination of two, or more, solutions to a linear and homogeneous recurrence relation is also a solution [16]. Let us keep this in mind while looking at a more general recurrence relation

a_n + c_1 a_{n−1} + c_2 a_{n−2} + · · · + c_k a_{n−k} = f(n). (2.23)

Here f(n) is some function depending only on n, possibly f(n) ≡ 0, in which case 2.23 is homogeneous. Assume that f(n) ≢ 0, let a_n^{(p)} be a particular solution to 2.23, and let a_n^{(h)} be the solution to the associated homogeneous recurrence relation, which is obtained by putting f(n) ≡ 0 in 2.23. Then every solution to 2.23 is given by

a_n = a_n^{(p)} + a_n^{(h)}, (2.24)

since adding a_n^{(h)} in 2.24 is basically just adding zero.

Now, there is considerably more "depth" to this theory than I have described here, and I suggest that the reader who wants a more rigorous explanation of the theory behind all this consults [16]. We will now finish this section by solving 2.19 using this method.

Example 2 We can rewrite 2.19 as

a_n − 2a_{n−1} = 1, n ≥ 1, a_0 = 0. (2.25)

This form agrees very well with 2.23, with k = 1, c_1 = −2 and f(n) = 1. First we get a_n^{(h)} by solving the characteristic equation; in this case it is trivial to see that r = 2, and the homogeneous solution falls out as

a_n^{(h)} = A · 2^n, A ∈ R. (2.26)

Next we search for a particular solution a_n^{(p)}. Since the r.h.s. in 2.25 is a constant, it is reasonable to believe that a particular solution here will also be a constant, say C. If we plug in C as the solution in 2.25 we get

C − 2C = 1. (2.27)

We see that C = −1 is in fact a particular solution, and

a_n = a_n^{(p)} + a_n^{(h)} = A · 2^n − 1. (2.28)

Using our initial condition a_0 = 0 we find that A = 1, and the solution to 2.19 is

a_n = 2^n − 1. (2.29)

So, we have glanced at the method of homogeneous and particular solutions, and as we saw in the example above it does the job of finding an explicit formula very well. We now leave this topic and will instead look at an alternative way to solve 2.19, one that focuses on the essence of this paper: the generating functions.

2.2.1 The method of generating functions


In this section we first discuss the idea behind the method of generating functions and explain how to execute it in general, followed by an example where we solve, again, 2.19, this time using the generating function.

To solve a recurrence relation using the method of generating functions, one first finds the generating function as a power series for the recurrence relation. This involves multiplying the recurrence relation by x^n for each n and then adding the resulting equations together [19]. Doing this will yield something that looks very much like a generating function. With some manipulations we get the generating function expressed as an analytic function, which we can expand in a power series to find an explicit solution to the recurrence relation, namely the coefficient of x^n. As we discussed in the first section of this chapter, we will not bother

ourselves with questions about convergence when doing all this; we will just assume x is sufficiently small. Expanding our generating functions in a power series can be done in several ways, for instance by partial fraction expansion or by Maclaurin expansion [19].

To illustrate this, consider the recurrence relation

a_n = c a_{n−1}, n ≥ 1, a_0 = a, a, c ∈ R. (2.30)

Now, this is a simple equation; actually it is the equation for exponential growth and has the solution a_n = c^n a, but nevertheless it works fine to demonstrate the method on. For each n ≥ 1 we have an equation; if we multiply each of those equations by x^n we get

a_1 x = c a_0 x (n = 1)
a_2 x^2 = c a_1 x^2 (n = 2)
a_3 x^3 = c a_2 x^3 (n = 3)
...

Now add all these equations and we get

Σ_{n=1}^∞ a_n x^n = c Σ_{n=1}^∞ a_{n−1} x^n. (2.31)

Here we clearly see something that resembles 2.2; it is not quite the same, but it is easy to rewrite in terms of a generating function: let G(x) = Σ_{n=0}^∞ a_n x^n. Then 2.31 can be expressed as

G(x) − a = cxG(x). (2.32)

From 2.32 we get an expression for G(x), namely

G(x) = a/(1 − cx). (2.33)

Here we have our generating function expressed as an analytic function, and if we want an explicit formula for a_n we expand the right hand side (from now on we write r.h.s. for right hand side and l.h.s. for left hand side) of 2.33 in a power series, since the formula for a_n is the same as the formula for the coefficient in front of x^n, which we from now on denote by [x^n]G(x). This is shown in the example that follows, where we solve 2.19 using this method.

Example 3 First of all, let G(x) = Σ_{n=0}^∞ a_n x^n, multiply both sides of 2.19 by x^n, and sum over the n:s for which the recurrence relation is defined, in this case n ≥ 1. This yields

Σ_{n=1}^∞ a_n x^n = 2 Σ_{n=1}^∞ a_{n−1} x^n + Σ_{n=1}^∞ x^n. (2.34)

Looking at the l.h.s. of 2.34 we see

Σ_{n=1}^∞ a_n x^n = a_1 x + a_2 x^2 + a_3 x^3 + · · · = G(x) − a_0 = G(x) (2.35)

since a_0 = 0. Now look at the r.h.s. of 2.34:

2 Σ_{n=1}^∞ a_{n−1} x^n + Σ_{n=1}^∞ x^n = 2xG(x) + x/(1 − x), (2.36)

since

2 Σ_{n=1}^∞ a_{n−1} x^n = 2(a_0 x + a_1 x^2 + a_2 x^3 + · · ·) = 2x(a_0 + a_1 x + a_2 x^2 + · · ·) = 2xG(x) (2.37)

and

Σ_{n=1}^∞ x^n = x + x^2 + x^3 + · · · = x(1 + x + x^2 + · · ·) = x/(1 − x). (2.38)

Taking what we just learned, we can rewrite 2.34 in terms of G(x):

G(x) = 2xG(x) + x/(1 − x)
⇔ G(x) − 2xG(x) = x/(1 − x) (2.39)
⇔ G(x) = x/((1 − x)(1 − 2x)).

This is our analytic form of G(x), and the solution we seek is [x^n]G(x). To find this expression we expand G(x) in a power series using partial fraction expansion:

x/((1 − x)(1 − 2x)) = x(2/(1 − 2x) − 1/(1 − x))
= x(2 + 2^2 x + 2^3 x^2 + · · ·) − x(1 + x + x^2 + · · ·)
= (2 − 1)x + (2^2 − 1)x^2 + (2^3 − 1)x^3 + · · · (2.40)
= Σ_{n=0}^∞ (2^n − 1) x^n.

From 2.40 it is easy to see that [x^n]G(x) = 2^n − 1, which is the explicit formula for a_n that we were looking for.
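As a quick check of Example 3 (my own snippet, not part of the thesis), the values produced by iterating the original recurrence 2.19 match the extracted coefficients [x^n]G(x) = 2^n − 1:

```python
# Iterate the Hanoi recurrence a_n = 2 a_{n-1} + 1, a_0 = 0, and compare
# with the formula read off from the generating function.
values, a = [], 0
for n in range(12):
    values.append(a)
    a = 2 * a + 1

print(values == [2**n - 1 for n in range(12)])  # True
```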

With this example we finish this section. Throughout the text we will consider many important sequences of different kinds: homogeneous, linear, non-linear, and of more than one variable. All of these will be solved using this method, and by doing so we illustrate the power of the generating functions when it comes to solving recurrence relations.

We treat the different generating functions separately: in Chapter 3 we focus on the ordinary power series generating function, and in Chapter 4 we instead look at the exponential generating function.
Chapter 3

Ordinary Power Series Generating Function

In this chapter we will look at some recursively defined sequences of great importance and solve their recurrence relations with the method described in Chapter 2. Along the way we will establish some "rules of computation" that will come in handy as we progress through the examples.

One of the most famous sequences we have is the Fibonacci numbers. This sequence has been around for 800 years or so, and it was first discovered by Fibonacci when he presented a counting problem involving a population of rabbits [16]. However, to say that the Fibonacci numbers model the population of rabbits may be to exaggerate a bit, since in Fibonacci's problem no rabbits ever die but just keep on breeding. The sequence does, however, model things that can be interesting from a combinatorial point of view. For example, let F_n denote the n:th Fibonacci number and let S = {1, 2, 3, . . . , n}; then the number of subsets of S containing no consecutive integers is given by F_{n+2}. Or, in how many ways can one tile a 2 × n units path using tiles of dimension 2 × 1 and 1 × 2 units? The answer is F_{n+1} ways [6].

The Fibonacci sequence is recursively defined as

F_{n+1} = F_n + F_{n−1}, n ≥ 1, F_0 = 0, F_1 = 1, (3.1)

a second order, linear and homogeneous recurrence relation. In the following example we will first find the generating function for the Fibonacci sequence, that is F(x) = Σ_{n=0}^∞ F_n x^n, and then solve for an explicit formula for F_n.


Example 4 (Fibonacci) First we multiply 3.1 by x^n and sum over n ≥ 1, giving us the l.h.s.

Σ_{n=1}^∞ F_{n+1} x^n = (F(x) − F_0 − F_1 x)/x = (F(x) − x)/x (3.2)

since F_0 = 0 and F_1 = 1. On the r.h.s. we obtain

Σ_{n=1}^∞ (F_n + F_{n−1}) x^n = Σ_{n=1}^∞ F_n x^n + Σ_{n=1}^∞ F_{n−1} x^n = F(x) + xF(x). (3.3)

Again, since F_0 = 0 we can write Σ_{n=1}^∞ F_n x^n = Σ_{n=0}^∞ F_n x^n = F(x). By 3.2 and 3.3 we get

(F(x) − x)/x = F(x) + xF(x)
⇔ F(x) = F(x)(x + x^2) + x (3.4)
⇔ F(x)(1 − x − x^2) = x.

From here we find the closed form of the generating function

F(x) = x/(1 − x − x^2) (3.5)

and our first step is done. Now, to find an explicit formula we use partial fraction expansion, just as we did with the Hanoi towers; only this one is a little trickier in terms of "nice numbers". But with no fancier tools than completing the square, and with the use of some symbols instead of root expressions, we get

x/(1 − x − x^2) = x/((1 − αx)(1 − βx)) = (1/(α − β)) (1/(1 − αx) − 1/(1 − βx)), (3.6)

where α = (1 + √5)/2 and β = (1 − √5)/2, meaning we can write 3.6 as

(1/√5) (1/(1 − αx) − 1/(1 − βx)). (3.7)

Expanding 3.7 in a power series using the same technique as we did in 2.40, we soon realise that

[x^n]F(x) = F_n = (1/√5)(α^n − β^n), α = (1 + √5)/2, β = (1 − √5)/2. (3.8)
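Formula 3.8 (Binet's formula) can be checked against the recurrence 3.1. This quick sketch (mine) uses floats for α and β and exact integers for the recurrence:

```python
import math

alpha = (1 + math.sqrt(5)) / 2
beta = (1 - math.sqrt(5)) / 2

# Fibonacci numbers from the recurrence F_{n+1} = F_n + F_{n-1}
fib = [0, 1]
for _ in range(18):
    fib.append(fib[-1] + fib[-2])

# Binet's formula, rounded to the nearest integer to absorb float error
binet = [round((alpha**n - beta**n) / math.sqrt(5)) for n in range(20)]
print(fib == binet)  # True
```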

We have now illustrated how the method of generating functions can be used to solve the Fibonacci recurrence relation. Was it any better than the method of homogeneous and particular solutions? In this case perhaps not, but at the same time it did not make the computations harder, just a little bit different, and it shows that generating functions have what it takes to solve such equations. Later we will show how generating functions possess the power to solve, quite easily, other kinds of recurrence relations that would be hard to solve otherwise [19].

But first let us establish a few rules that will make computations faster. The rules that we formulate here can also be found in [19], without proof. Here we also provide proofs for them.
Recall that G ←→ops {a_n}_{n=0}^∞ means that G(x) is the ordinary power series generating function for the sequence {a_n}_{n=0}^∞, i.e. G(x) = Σ_{n=0}^∞ a_n x^n.

Theorem 1 (Rules of computation (ops)) Let G ←→ops {a_n}_{n=0}^∞ and T ←→ops {b_n}_{n=0}^∞. Then

1. (G(x) − a_0 − a_1 x − · · · − a_{k−1} x^{k−1}) / x^k ←→ops {a_{n+k}}_{n=0}^∞

2. (xD)^k G(x) ←→ops {n^k a_n}_{n=0}^∞
   (where xD means first differentiate, then multiply by x)

3. GT ←→ops {Σ_{r=0}^n a_r b_{n−r}}_{n=0}^∞

Proof.

1. Let G ←→ops {a_n}_{n=0}^∞ and let H ←→ops {a_{n+k}}_{n=0}^∞. This implies

G(x) = Σ_{n=0}^∞ a_n x^n = a_0 + a_1 x + a_2 x^2 + · · · (3.9)

and

H(x) = Σ_{n=0}^∞ a_{n+k} x^n = a_k + a_{k+1} x + a_{k+2} x^2 + · · · (3.10)

That is,

H(x) = (G(x) − a_0 − a_1 x − · · · − a_{k−1} x^{k−1}) / x^k. (3.11)

2. Let G ←→ops {a_n}_{n=0}^∞ and let H ←→ops {n^k a_n}_{n=0}^∞. Then for H(x) we have

Σ_{n=0}^∞ n^k a_n x^n = a_1 x + 2^k a_2 x^2 + 3^k a_3 x^3 + · · ·
= xD(a_0 + a_1 x + 2^{k−1} a_2 x^2 + 3^{k−1} a_3 x^3 + · · ·)
= (xD)(xD)(a_0 + a_1 x + 2^{k−2} a_2 x^2 + 3^{k−2} a_3 x^3 + · · ·)
= (xD)^2 (a_0 + a_1 x + 2^{k−2} a_2 x^2 + 3^{k−2} a_3 x^3 + · · ·) (3.12)
...
= (xD)^k (a_0 + a_1 x + 2^{k−k} a_2 x^2 + 3^{k−k} a_3 x^3 + · · ·)
= (xD)^k (a_0 + a_1 x + a_2 x^2 + a_3 x^3 + · · ·)
= (xD)^k G(x).

3. Let G ←→ops {a_n}_{n=0}^∞ and T ←→ops {b_n}_{n=0}^∞. Assume that both of them converge in some domain O ⊆ C. Then by Theorem 4.28 of [2],

GT = Σ_{n=0}^∞ c_n x^n, where c_n = Σ_{r=0}^n a_r b_{n−r}. (3.13)

Soon we are ready to look at a slightly more complicated sequence, one depending on two independent variables, and we will do so in the example to follow. But before we do that we should point out some properties of this notation (see [19]). Recall that [x^n]G(x) denotes the coefficient in front of x^n in the power series G(x). It follows that

[x^n](x^a G(x)) = [x^{n−a}]G(x). (3.14)

It also follows that

[αx^n]G(x) = (1/α)[x^n]G(x). (3.15)

3.1 Stirling numbers of the second kind


Let A be a finite set such that |A| = n. In how many ways can one partition A into a fixed number k of parts?

The answer to that question is the Stirling numbers of the second kind (see [3]); we denote this number S(n, k). It is quite clear that for all n ≥ 1 it holds that

S(n, 1) = S(n, n) = 1, (3.16)

since there is only one way to partition A into one part, namely A itself, and likewise, since no part of the partition can be empty, there is only one way to partition A into n parts.

Also, it is not possible to partition A into zero parts, nor is it possible to partition A into k parts if n < k, since no part can be empty. From this we get

S(n, 0) = 0, ∀n ≥ 1 (3.17)
S(n, k) = 0, if n < k. (3.18)

Note as well the special case S(0, 0). This number is a bit strange when thinking of partitioning an empty set into zero parts, but similar to the convention 0! = 1 it is a useful convention to define S(0, 0) = 1 [3].

The Stirling numbers of the second kind are then defined, recursively, through

S(n, k) = S(n − 1, k − 1) + kS(n − 1, k), 1 ≤ k < n. (3.19)

For a derivation of the formula one can read [3] or [19]. Now let us solve this with the help of generating functions.
Example 5 Consider, for every k ≥ 0, the sum

A_k(x) = Σ_{n=0}^∞ S(n, k) x^n. (3.20)

Let this be our unknown generating function. Note that 3.20 is the l.h.s. of 3.19 multiplied by x^n and summed over n ≥ 0. For k = 0 we get

A_0 = S(0, 0) + S(1, 0)x + S(2, 0)x^2 + · · · = S(0, 0) + 0 + 0 + · · · = 1. (3.21)

If we now express 3.19 in terms of A_k(x) we get, for k ≥ 1,

A_k(x) = Σ_{n=1}^∞ S(n − 1, k − 1) x^n + k Σ_{n=1}^∞ S(n − 1, k) x^n, A_0 = 1. (3.22)

Let us look at the terms on the r.h.s. one by one:

Σ_{n=1}^∞ S(n − 1, k − 1) x^n = S(0, k − 1)x + S(1, k − 1)x^2 + S(2, k − 1)x^3 + · · ·
= x Σ_{n=0}^∞ S(n, k − 1) x^n = xA_{k−1}(x) (3.23)

k Σ_{n=1}^∞ S(n − 1, k) x^n = k(S(0, k)x + S(1, k)x^2 + S(2, k)x^3 + · · ·)
= kx Σ_{n=0}^∞ S(n, k) x^n = kxA_k(x). (3.24)

That is, 3.22 can be written as

A_k(x) = xA_{k−1}(x) + kxA_k(x), k ≥ 1, A_0 = 1, (3.25)

which yields

A_k(x) = A_{k−1}(x) · x/(1 − kx). (3.26)

Iterating 3.26 from k ≥ 1, our previously unknown generating function falls out:

A_k(x) = x^k / ((1 − x)(1 − 2x) · · · (1 − kx)), k ≥ 0. (3.27)

Note that 3.27 works for k = 0 as well, if in that case we take the denominator to be 1 − 0x. Again we use partial fraction decomposition to find an explicit formula for S(n, k):

1 / ((1 − x)(1 − 2x) · · · (1 − kx)) = Σ_{i=1}^k α_i/(1 − ix), α_i ∈ R. (3.28)

Here we can find each α_r simply by multiplying both sides of 3.28 by 1 − rx and simultaneously letting x = 1/r, for every 1 ≤ r ≤ k. Computing this we get

α_r = (−1)^{k−r} r^{k−1} / ((r − 1)!(k − r)!), 1 ≤ r ≤ k. (3.29)

Now we will take advantage of the notation we use; remember that we are looking for

S(n, k) = [x^n] (x^k / ((1 − x) · · · (1 − kx))) = [x^n] x^k (1 / ((1 − x) · · · (1 − kx))). (3.30)

By 3.14 and 3.28,

[x^n] x^k (1 / ((1 − x) · · · (1 − kx))) = [x^{n−k}] Σ_{r=1}^k α_r/(1 − rx). (3.31)

From before we know that 1/(1 − rx) is easy to expand in a power series, and [x^{n−k}] 1/(1 − rx) = r^{n−k}, so we get

[x^{n−k}] Σ_{r=1}^k α_r/(1 − rx) = Σ_{r=1}^k α_r r^{n−k}
= Σ_{r=1}^k (−1)^{k−r} r^{k−1}/((r − 1)!(k − r)!) · r^{n−k} (3.32)
= Σ_{r=1}^k (−1)^{k−r} r^n/(r!(k − r)!).

To convince yourself that 3.32 truly is an explicit formula, one could take a look at S(n, 2) for every n ≥ 2. According to [3], S(n, 2) = 2^{n−1} − 1 for every n ≥ 2. Let us evaluate S(n, 2) using our formula to see that it gives the correct answer:

S(n, 2) = (−1) · 1^n/(1! 1!) + 2^n/(2! 0!) = 2^{n−1} − 1. (3.33)
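Formula 3.32 can be compared with the recurrence 3.19 for small n and k. The sketch below is my own; it uses exact rational arithmetic because the individual terms of 3.32 need not be integers, even though the sum always is:

```python
from fractions import Fraction
from math import factorial

def stirling2_rec(n, k):
    """S(n, k) via the recurrence 3.19 with the boundary values 3.16-3.18."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return stirling2_rec(n - 1, k - 1) + k * stirling2_rec(n - 1, k)

def stirling2_formula(n, k):
    """S(n, k) = sum_{r=1}^{k} (-1)^(k-r) r^n / (r! (k-r)!), formula 3.32."""
    s = sum(Fraction((-1) ** (k - r) * r**n, factorial(r) * factorial(k - r))
            for r in range(1, k + 1))
    return int(s)

print(all(stirling2_rec(n, k) == stirling2_formula(n, k)
          for n in range(1, 9) for k in range(1, n + 1)))  # True
```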

Now, in the example above we considered the generating function A_k(x) = ∑_{n=0}^∞ S(n, k)x^n, where we fix k, multiply with x^n and sum over n. What if we had chosen to fix n, multiply by x^k and then sum over k instead, so that we got B_n(x) = ∑_{k=0}^∞ S(n, k)x^k? It is absolutely possible to do this, but it will not yield an explicit formula for S(n, k) [19]. Let us look at what we get when considering B_n(x) instead:

B_n(x) = ∑_{k=0}^∞ S(n − 1, k − 1)x^k + ∑_{k=0}^∞ kS(n − 1, k)x^k
= xB_{n−1}(x) + (x d/dx)B_{n−1}(x)    (3.34)
= [x(1 + D)]B_{n−1}(x), n > 0, B_0(x) = 1.

Here we used Theorem 1. From 3.34 we see that we get B_n(x) by applying the operator x(1 + D) to B_{n−1}(x). This yields

B_1(x) = x(1 + D)1 = x
B_2(x) = x(1 + D)x = x^2 + x
B_3(x) = x(1 + D)(x^2 + x) = x^3 + 3x^2 + x
⋮

We see that this generating function generates a polynomial of degree n and that [x^k]B_n(x) = S(n, k), but we cannot derive an explicit formula for S(n, k) from this. We can, however, use this function to prove that the sequence {S(n, k)}_{k=0}^n is "unimodal" [19], meaning that if n is fixed and k varies from 0 to n, the numbers S(n, k) first increase, up to a maximum, then they decrease. This is an example of another valuable application of generating functions.

3.2 Catalan numbers


Let's say we have a circle; on the circumference of this circle there are 2n dots enumerated 1, 2, 3, . . . , 2n. Now, using n lines, connect all the dots in pairs such that no line intersects another line. In how many ways can one do that?

To start, let C_n denote the number of ways to pair up 2n dots. Note that we always have an even number of dots, so we will never have a dot left on its own after pairing. Also, if n = 0 we say that there is one way to pair up zero dots, namely to do nothing at all, so C_0 = 1. For n = 1 there is one way to connect the two dots on the circle; if n = 2 there are two ways to pair them, see Figure 3.1.

Figure 3.1: The two ways of pairing up four dots (1, 2, 3, 4) without crossings.

Now consider the circle with 2n dots, see Figure 3.2. For simplicity, start in dot no. 1 and pick a dot to pair with. It is clear we cannot pick an odd-numbered dot, since that would leave an odd number of dots on both sides of the line, meaning we would be forced to intersect that line to pair up all dots. Instead pick an even-numbered dot, say dot no. 2r for some 1 ≤ r ≤ n. This results in a line splitting the circle in two; on one side of the line we have 2r − 2 = 2(r − 1) dots and on the other side we have 2n − 2r = 2(n − r) dots. This means we can view the two halves as two separate circles with 2(r − 1) and 2(n − r) dots respectively (see Figure 3.3).

Figure 3.2: A circle with 2n dots, where dot 1 is paired with dot 2r.

Figure 3.3: The two halves viewed as separate circles with 2(r − 1) and 2(n − r) dots.

So for this choice of r there are, by the multiplication principle, C_{r−1}C_{n−r} different ways to pair up the dots, and we can pick any r such that 1 ≤ r ≤ n. Hence the addition principle yields

C_n = ∑_{r=1}^n C_{r−1}C_{n−r} = ∑_{r=0}^{n−1} C_r C_{n−r−1}, n ≥ 1, C_0 = 1.    (3.35)

This nonlinear recurrence relation describes the Catalan numbers [3], named after E. C. Catalan, a Belgian mathematician from the 19th century [3], though the sequence was also studied prior to Catalan, by Euler for instance [13].

The Catalan numbers model a great deal of different structures; here we list some of the most useful ones [7].
1. C_n is the number of binary sequences of length 2n containing exactly n zeros and n ones, such that at each stage of the sequence the number of ones does not exceed the number of zeros.

2. C_n is the number of ways to parenthesize the product of n + 1 variables in a computer operational system, to specify the order of operation.

3. (Euler's Triangulations of Polygons) C_n is the number of ways to divide a convex (n + 2)-gon into triangles by drawing n − 1 non-intersecting diagonals.

4. C_n is the number of "up-right paths" on a square grid from the origin O(0, 0) to the point A(n, n) which never cross the diagonal.

5. C_n is the number of rooted binary trees with n nodes.

It is hard to argue against the fact that the Catalan numbers are the solution to many counting problems. But their recurrence formula is nonlinear, meaning it seems hard to solve for an explicit formula for C_n. No need to worry: generating functions will take on this task with ease.
Example 6 Let G(x) = ∑_{n=0}^∞ C_n x^n. Without loss of generality we can rewrite 3.35 as

C_{n+1} = ∑_{r=0}^n C_r C_{n−r}, n ≥ 0, C_0 = 1.    (3.36)

Multiplying 3.36 by x^n, summing over n ≥ 0 and using Theorem 1, we get, with hardly any effort at all,

(G(x) − 1)/x = G^2(x).    (3.37)
We see that G(x) is a solution to the quadratic equation

xG^2(x) − G(x) + 1 = 0,    (3.38)

which has the following solutions:

G_1(x) = (1 − √(1 − 4x))/(2x),
G_2(x) = (1 + √(1 − 4x))/(2x).

Only one of the solutions works, and that is G_1(x). To see this, consider the limits of G_1(x) and G_2(x) respectively as x → 0. For G_1(x) we have, with the help of L'Hospital's Rule [1],

lim_{x→0} (1 − √(1 − 4x))/(2x) = 1 = C_0,    (3.39)

while for G_2(x) we clearly have

lim_{x→0} (1 + √(1 − 4x))/(2x) = ∞.    (3.40)
So we pick G(x) = G_1(x) as our generating function. To find an explicit formula for C_n, first consider the Maclaurin expansion of √(1 − 4x):

(1 − 4x)^{1/2} = ∑_{k=0}^∞ binom(1/2, k)(−4)^k x^k = 1 − (1/2)·4x − (1/2)·(1/2)·(4^2 x^2)/2 − (1/2)·(1/2)·(3/2)·(4^3 x^3)/3! − ···.    (3.41)

Hence

G(x) = (1/(2x))[1 − (1 − (1/2)·4x − (1/2)·(1/2)·(4^2 x^2)/2 − (1/2)·(1/2)·(3/2)·(4^3 x^3)/3! − ···)]
= (1/(2x))[(1/2)·4x + (1/2)·(1/2)·(4^2 x^2)/2 + (1/2)·(1/2)·(3/2)·(4^3 x^3)/3! + ···]    (3.42)
= 1 + (1/2)·(4x)/2 + (1/2)·(3/2)·(4^2 x^2)/3! + (1/2)·(3/2)·(5/2)·(4^3 x^3)/4! + ···.
Here we see [x^n]G(x) = (1·3·5···(2n−1))/(2^n (n+1)!) · 4^n. Now, to rewrite this more compactly:

(1·3·5···(2n−1))/(2^n(n+1)!) · 4^n = (2^n/(n+1)!) · 1·3·5···(2n−1)
= (2^n/(n+1)!) · (1·3·5···(2n−1) · 2·4·6···2n)/(2·4·6···2n)
= (2^n/(n+1)!) · (2n)!/(2^n n!)    (3.43)
= (1/(n+1)) · (2n)!/(n! n!) = (1/(n+1)) binom(2n, n).
Which is a neat formula for Cn .
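The convolution recurrence 3.36 and the closed form just derived can be cross-checked with a short Python sketch (an illustration of mine, not part of the original text):

```python
from math import comb

def catalan_recurrence(N):
    """First N+1 Catalan numbers built from the convolution recurrence (3.36)."""
    C = [1]  # C_0 = 1
    for n in range(N):
        # C_{n+1} = sum_{r=0}^{n} C_r * C_{n-r}
        C.append(sum(C[r] * C[n - r] for r in range(n + 1)))
    return C

def catalan_formula(n):
    """Closed form (3.43): C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)
```

The two routes agree, producing 1, 1, 2, 5, 14, 42, 132, . . .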

Before closing this chapter I would like to give one more example, and this is a rather special one. It is a recurrence relation used by M. Izquierdo in her article On Klein Surfaces and Dihedral Groups [8]. The following goes far beyond the scope of this work and includes things like complex geometry.

How many non-biconformally equivalent Klein surfaces are coverings of the hyperbolic quotient space whose fundamental group admits the dihedral group D_n as a group of automorphisms?

The interested reader finds a very good account of Klein surfaces, branched coverings and complex geometry in Jones and Singerman [10], Stillwell [18], Lages Lima [12] and Fulton [5].

What is special about this recurrence is that its generating function has not previously appeared in the literature, until now; thus, no one has solved it in the literature using the method of generating functions. So let us do just that.
The recurrence relation is as follows [8]:

Q_n = (p − 2)Q_{n−1} + (p − 1)Q_{n−2}, n ≥ 5,    (3.44)

with initial conditions

Q_3 = p(p − 1)(p − 2) and Q_4 = p(p − 1)[(p − 1) + (p − 2)^2],    (3.45)

where p is an odd prime.


An interesting observation one can make is that 3.44 looks very much like the Fibonacci equation; there is just some scaling involved. This resemblance will persist also in the generating function, as we will soon see.
Example 7 Theorem 2 The generating function of

Q_n = (p − 2)Q_{n−1} + (p − 1)Q_{n−2}, n ≥ 5,

is

G(x) = (p − xp(p − 2))/(1 − (p − 2)x − (p − 1)x^2).
Proof. Equation 3.44 can, equivalently, be rewritten as

(p − 1)Q_{n−2} = Q_n − (p − 2)Q_{n−1}.    (3.46)

From this we can derive values for Q_i, 0 ≤ i ≤ 2, so that we can start the sequence from Q_0 instead of Q_3. Computing these we find Q_0 = p, Q_1 = 0 and Q_2 = p(p − 1). Also we rewrite the indices in 3.44 so that we can use Theorem 1. This yields

Q_{n+2} = (p − 2)Q_{n+1} + (p − 1)Q_n, n ≥ 0, Q_0 = p, Q_1 = 0.    (3.47)



Now let G(x) = ∑_{n=0}^∞ Q_n x^n and execute the method. Then by Theorem 1 we get

(G(x) − p)/x^2 = (p − 2)(G(x) − p)/x + (p − 1)G(x),    (3.48)

from which we get

G(x) = (p − xp(p − 2))/(1 − (p − 2)x − (p − 1)x^2).    (3.49)

Note that we can see some resemblance to the generating function of the Fibonacci sequence, at least in the sense that it is a first degree polynomial divided by a second degree polynomial. If we rewrite a little bit we can actually take a shortcut and use a result from the Fibonacci example when determining the explicit formula for Q_n. First consider this second degree polynomial:

1 − ax − bx^2 = (1 − αx)(1 − βx) where α = (a + √(4b + a^2))/2, β = (a − √(4b + a^2))/2.    (3.50)
If we plug in a = (p − 2) and b = (p − 1) we get the denominator of 3.49; hence we can rewrite 3.49 as

G(x) = (p − xp(p − 2))/((1 − αx)(1 − βx)) = p/((1 − αx)(1 − βx)) − p(p − 2) · x/((1 − αx)(1 − βx)).    (3.51)

From Example 4 we know that

x/((1 − αx)(1 − βx)) = (1/(α − β))[1/(1 − αx) − 1/(1 − βx)].    (3.52)

Partial fraction decomposition yields

p/((1 − αx)(1 − βx)) = (p/(α − β))[α/(1 − αx) − β/(1 − βx)].    (3.53)
Hence

G(x) = (p/(α − β))[α/(1 − αx) − β/(1 − βx)] − (p(p − 2)/(α − β))[1/(1 − αx) − 1/(1 − βx)]
= ((pα − p(p − 2))/(α − β)) · 1/(1 − αx) + ((p(p − 2) − pβ)/(α − β)) · 1/(1 − βx).    (3.54)

With the use of 2.14 we get the following formula for Q_n:

Q_n = ((pα − p(p − 2))/(α − β)) α^n + ((p(p − 2) − pβ)/(α − β)) β^n,    (3.55)

where

α = ((p − 2) + √(4(p − 1) + (p − 2)^2))/2 and β = ((p − 2) − √(4(p − 1) + (p − 2)^2))/2.    (3.56)
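As a numerical sanity check of Theorem 2 and formula 3.55, the following Python sketch (my own, not part of the original article or proof) compares the closed form with the recurrence; note that 4(p − 1) + (p − 2)^2 = p^2, so for p = 5 one gets α = 4 and β = −1:

```python
from math import sqrt

def q_recurrence(p, N):
    """Q_0..Q_N from Q_n = (p-2)Q_{n-1} + (p-1)Q_{n-2}, Q_0 = p, Q_1 = 0."""
    Q = [p, 0]
    for n in range(2, N + 1):
        Q.append((p - 2) * Q[-1] + (p - 1) * Q[-2])
    return Q

def q_closed_form(p, n):
    """Formula (3.55) with alpha and beta from (3.56)."""
    d = sqrt(4 * (p - 1) + (p - 2) ** 2)  # equals p
    alpha = ((p - 2) + d) / 2
    beta = ((p - 2) - d) / 2
    a = (p * alpha - p * (p - 2)) / (alpha - beta)
    b = (p * (p - 2) - p * beta) / (alpha - beta)
    return a * alpha ** n + b * beta ** n
```

For p = 5 the recurrence gives Q_3 = 5 · 4 · 3 = 60, matching the stated initial condition, and the closed form reproduces every recurrence value after rounding.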
Chapter 4

Exponential Generating Functions

In this chapter we will talk about the exponential generating function. It is basically the ordinary power series generating function with each coefficient multiplied by 1/n!. We will study a few examples and discuss whether there are times where it is a better idea to use this generating function instead of the ordinary one, or if it is just a matter of taste. As far as the method goes, we do not do anything different when we solve recurrence relations, except that in the end we are interested in [x^n/n!]G(x) as opposed to [x^n]G(x). In this chapter we follow [19].

Recall that G ←egf→ {a_n}_{n=0}^∞ means that G(x) = ∑_{n=0}^∞ a_n x^n/n!, and that [x^n/n!]G(x) is the coefficient in front of x^n/n! in that power series, in this case a_n.

Example 8 Consider the recurrence relation

a_{n+2} = 2a_{n+1} − a_n, n ≥ 0, a_0 = 0, a_1 = 1.    (4.1)

Define our unknown generating function G(x) = ∑_{n=0}^∞ a_n x^n/n!. Multiplying 4.1 with x^n/n! and summing over n ≥ 0 yields

∑_{n=0}^∞ a_{n+2} x^n/n! = 2 ∑_{n=0}^∞ a_{n+1} x^n/n! − ∑_{n=0}^∞ a_n x^n/n!.    (4.2)

Rydén, 2023.

Note that

∑_{n=0}^∞ a_{n+1} x^n/n! = a_1 + a_2 x + a_3 x^2/2 + a_4 x^3/3! + ··· = d/dx (a_0 + a_1 x + a_2 x^2/2 + a_3 x^3/3! + ···) = (d/dx) G(x).    (4.3)

In addition

∑_{n=0}^∞ a_{n+2} x^n/n! = a_2 + a_3 x + a_4 x^2/2 + a_5 x^3/3! + ··· = d/dx (a_1 + a_2 x + a_3 x^2/2 + a_4 x^3/3! + ···) = (d^2/dx^2) G(x).    (4.4)
So 4.1, expressed in terms of its generating function G(x), turns out to be

G′′(x) = 2G′(x) − G(x),    (4.5)

a homogeneous, second order differential equation. This we can easily solve; it has the solution

G(x) = (c_1 x + c_2)e^x, for some constants c_1, c_2 ∈ ℂ.    (4.6)

We get the initial conditions we need from the initial conditions of 4.1. Since G(0) = a_0 = 0 and G′(0) = a_1 = 1, it follows that

G(x) = xe^x.    (4.7)

Using the Maclaurin expansion for xe^x we find an explicit formula for a_n:

xe^x = x + x^2 + x^3/2 + x^4/3! + ··· = ∑_{n=1}^∞ x^n/(n − 1)! = ∑_{n=0}^∞ n · x^n/n!.    (4.8)

Clearly [x^n/n!]G(x) = n and we are finished.
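The conclusion of Example 8 is easy to replay numerically; a minimal Python sketch (my own, not part of the original text):

```python
def sequence(N):
    """a_0..a_N from the recurrence a_{n+2} = 2 a_{n+1} - a_n of Example 8,
    with a_0 = 0 and a_1 = 1."""
    a = [0, 1]
    for _ in range(N - 1):
        a.append(2 * a[-1] - a[-2])
    return a
```

Iterating the recurrence indeed produces a_n = n, the coefficient sequence of xe^x, term for term.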


This example demonstrates the method using an exponential generating function. We can see that it is quite similar to what we did in the previous chapter. One difference we can note is that when we had terms of the form a_{n+1}, it was the same as taking the derivative of the generating function, as opposed to subtracting a_0 and dividing by x as we did with the ops. Just as with the ops we can make this into a rule of computation. Actually, we can translate Theorem 1 so that it is true for the exponential generating function. These rules can be found in [19].

Theorem 3 (Rules of computation (egf)) Let G ←egf→ {a_n}_{n=0}^∞ and T ←egf→ {b_n}_{n=0}^∞. Then

1. D^k G(x) ←egf→ {a_{n+k}}_{n=0}^∞ (where D is the differentiation operator)

2. (xD)^k G(x) ←egf→ {n^k a_n}_{n=0}^∞ (where xD means first differentiate, then multiply by x)

3. GT ←egf→ {∑_{k=0}^n binom(n, k) a_k b_{n−k}}_{n=0}^∞

Proof.
1. Let G ←egf→ {a_n}_{n=0}^∞ and for 1 ≤ r ≤ k calculate D^r G(x):

DG(x) = D(a_0 + a_1 x + a_2 x^2/2 + a_3 x^3/3! + ···) = a_1 + a_2 x + a_3 x^2/2 + a_4 x^3/3! + ···
D^2 G(x) = a_2 + a_3 x + a_4 x^2/2 + a_5 x^3/3! + ···
⋮
D^k G(x) = a_k + a_{k+1} x + a_{k+2} x^2/2 + a_{k+3} x^3/3! + ··· = ∑_{n=0}^∞ a_{n+k} x^n/n!.    (4.9)

2. Same proof as for its counterpart from Theorem 1.

3. Let G ←egf→ {a_n}_{n=0}^∞ and T ←egf→ {b_n}_{n=0}^∞; then their product is

GT = ∑_{n=0}^∞ c_n x^n where c_n = ∑_{k=0}^n a_k b_{n−k}/(k!(n − k)!).    (4.10)

So

GT = ∑_{n=0}^∞ (∑_{k=0}^n a_k b_{n−k}/(k!(n − k)!)) x^n    (4.11)

and

[x^n/n!]GT = n! (∑_{k=0}^n a_k b_{n−k}/(k!(n − k)!)) = ∑_{k=0}^n n!/(k!(n − k)!) a_k b_{n−k} = ∑_{k=0}^n binom(n, k) a_k b_{n−k}.    (4.12)

∴ GT ←egf→ {∑_{k=0}^n binom(n, k) a_k b_{n−k}}_{n=0}^∞.
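Rule 3 of Theorem 3, the binomial convolution, can be sanity-checked numerically. A small Python sketch of mine (not part of the original text), using the fact that e^x · e^x = e^{2x}, i.e. the sequence {1, 1, 1, . . .} convolved with itself should give {2^n}:

```python
from math import comb

def egf_product(a, b):
    """Coefficient sequence of the product of two egfs (Theorem 3, rule 3):
    c_n = sum_{k=0}^{n} binom(n, k) a_k b_{n-k}."""
    n_terms = min(len(a), len(b))
    return [sum(comb(n, k) * a[k] * b[n - k] for k in range(n + 1))
            for n in range(n_terms)]
```

Here the lists hold the egf coefficient sequences {a_n} and {b_n}, lowest index first.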

When we discussed the ordinary power series generating function, one of the first recurrence relations we solved was that of Fibonacci. Let's solve it again, this time using the exponential generating function, and then look at how the two examples differ from each other.

Example 9 We previously stated that the Fibonacci sequence was defined through 3.1. But it is also possible to change the indices to make it a better fit for us to use Theorem 3. Equivalent to 3.1 we have the following equation:

F_{n+2} = F_{n+1} + F_n, n ≥ 0, F_0 = 0, F_1 = 1.    (4.13)

From 4.13 and with the use of Theorem 3 we immediately get the homogeneous differential equation

G′′ − G′ − G = 0, G(0) = 0, G′(0) = 1.    (4.14)

It has solutions

G(x) = c_1 e^{αx} + c_2 e^{βx} where α = (1 + √5)/2, β = (1 − √5)/2 and c_1, c_2 ∈ ℂ.    (4.15)

The initial conditions G(0) = 0 and G′(0) = 1 give us the unique solution

G(x) = (1/√5) e^{αx} − (1/√5) e^{βx} = (1/√5)(e^{αx} − e^{βx}),    (4.16)

from which it is easy to get the formula for F_n:

[x^n/n!] (1/√5)(e^{αx} − e^{βx}) = (1/√5)(α^n − β^n),    (4.17)

where α = (1 + √5)/2 and β = (1 − √5)/2.
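Formula 4.17 is the familiar Binet formula; a quick numerical confirmation (my own sketch, not part of the original example):

```python
from math import sqrt

def fib(N):
    """F_0..F_N from the recurrence (4.13)."""
    F = [0, 1]
    for _ in range(N - 1):
        F.append(F[-1] + F[-2])
    return F

def binet(n):
    """Formula (4.17): F_n = (alpha^n - beta^n) / sqrt(5)."""
    alpha = (1 + sqrt(5)) / 2
    beta = (1 - sqrt(5)) / 2
    return (alpha ** n - beta ** n) / sqrt(5)
```

Rounding the floating point result of `binet` recovers every Fibonacci number in the tested range.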

So after solving the same problem twice, using two different kinds of generating functions, what conclusions can we draw? First of all, they both work and yield the same solution; that is what is most important here. Another thing we can notice is that in Example 4 we only needed to solve an ordinary, algebraic equation, while in Example 9 we were forced to solve a differential equation. So the ops version seems, in this case, to give an easier way to obtain the generating function. At the same time, in Example 4 we had to do some partial fraction decomposition to obtain the explicit formula, while in Example 9 we just applied [x^n/n!]G(x) and we were done. So the egf version seems, in this case, to have an easier way of obtaining the explicit formula once the generating function is known [19].
Chapter 5

Generating Function for Hermite Polynomials

The following is taken from [17].

The quantum harmonic oscillator equation, which in itself is a one-dimensional version of the Schrödinger equation, is a partial differential equation of great importance in physics. The solutions to this equation come in the form of a wave function that consists of a very special family of polynomials called the Hermite polynomials. These polynomials possess a lot of nice properties, some of which we will discuss in this chapter. The main focus, though, and what we strive for, will be to derive their generating function. In order to do that we first must introduce them.

Let's say we want to know d^n/dx^n e^{-x^2}. The first few derivatives are quite easy to just calculate. The first derivative is
d/dx e^{-x^2} = −2x e^{-x^2},    (5.1)

the second is

d^2/dx^2 e^{-x^2} = d/dx (−2x e^{-x^2}) = (4x^2 − 2)e^{-x^2}    (5.2)

and the third is

d^3/dx^3 e^{-x^2} = d/dx ((4x^2 − 2)e^{-x^2}) = −(8x^3 − 12x)e^{-x^2}.    (5.3)
But this soon gets pretty tedious to calculate. Instead, note that when taking the n:th derivative of e^{-x^2} we get a polynomial of degree n times e^{-x^2}. Since


we see this clear pattern, we can say

D^n e^{-x^2} = p_n(x) e^{-x^2},    (5.4)

where p_n(x) is a polynomial in x of degree n. This is where the Hermite polynomials come in. They are almost these p_n(x), except that they have their highest degree term positive; thus

D^n e^{-x^2} = (−1)^n H_n(x) e^{-x^2},    (5.5)

where H_n(x) is the n:th Hermite polynomial. From 5.5 we get the following definition of the Hermite polynomials:

H_n(x) = (−1)^n e^{x^2} D^n e^{-x^2}.    (5.6)

Actually there are other ways of writing these polynomials; one may multiply the exponent in 5.6 with some constant in order to make them more suitable for whatever purpose the user has. The ones that we consider here are sometimes called the physicists' Hermite polynomials and they have their leading coefficient equal to 2^n; another version of the polynomials uses e^{-x^2/2} instead and therefore has its leading coefficient equal to one. These are usually referred to as the probabilists' Hermite polynomials [17].
Before we take on the challenge to derive the generating function for H_n(x), we will look at some of the properties of these polynomials. For instance, what happens if we take the derivative of H_n(x)? With the product rule we get

H_n′(x) = (−1)^n (2x e^{x^2} d^n/dx^n e^{-x^2} + e^{x^2} d^{n+1}/dx^{n+1} e^{-x^2}).    (5.7)

Note how the r.h.s. of 5.7 almost looks like the sum of two Hermite polynomials. In fact

2x(−1)^n e^{x^2} d^n/dx^n e^{-x^2} = 2x(−1)^n e^{x^2} D^n e^{-x^2} = 2xH_n(x)    (5.8)

and

(−1)^n e^{x^2} d^{n+1}/dx^{n+1} e^{-x^2} = (−1)(−1)^{n+1} e^{x^2} D^{n+1} e^{-x^2} = −H_{n+1}(x).    (5.9)

Thus 5.7 can be rewritten as

H_n′(x) = 2xH_n(x) − H_{n+1}(x).    (5.10)
So by taking the derivative of H_n(x) we obtain a quite nice recurrence relation. But we can make it even better with the help of the following generalization of the product rule; see [14].

Theorem 4 (The general Leibniz rule) Let f and g be n-times differentiable functions; then the product fg is also n-times differentiable and its n:th derivative is given by

(fg)^{(n)} = ∑_{k=0}^n binom(n, k) f^{(n−k)} g^{(k)},    (5.11)

where f^{(j)} denotes the j:th derivative of f. In particular f^{(0)} = f.


This can be shown using induction.
Now consider the l.h.s. of 5.9:

(−1)^n e^{x^2} d^{n+1}/dx^{n+1} e^{-x^2} = (−1)^n e^{x^2} d^n/dx^n (−2x e^{-x^2}).    (5.12)

The only thing we did was to take one derivative. Now with the use of Theorem 4 we get:

(−1)^n e^{x^2} d^n/dx^n (−2x e^{-x^2}) = (−1)^n e^{x^2} (−2) ∑_{k=0}^n binom(n, k) x^{(k)} (e^{-x^2})^{(n−k)}
= (−1)^n e^{x^2} (−2) (binom(n, 0) x (e^{-x^2})^{(n)} + binom(n, 1)(1)(e^{-x^2})^{(n−1)} + binom(n, 2)(0)(e^{-x^2})^{(n−2)} + 0 + 0 + ···)
= (−1)^n e^{x^2} (−2) (x d^n/dx^n e^{-x^2} + n d^{n−1}/dx^{n−1} e^{-x^2})    (5.13)
= (−2x)(−1)^n e^{x^2} d^n/dx^n e^{-x^2} − 2n(−1)^n e^{x^2} d^{n−1}/dx^{n−1} e^{-x^2}.
Plug this expression back into 5.7 and we get

H_n′(x) = (−1)^n (2x e^{x^2} d^n/dx^n e^{-x^2} − 2x e^{x^2} d^n/dx^n e^{-x^2} − 2n e^{x^2} d^{n−1}/dx^{n−1} e^{-x^2})
= (−1)^n (−2n e^{x^2} d^{n−1}/dx^{n−1} e^{-x^2})    (5.14)
= 2n(−1)^{n−1} e^{x^2} d^{n−1}/dx^{n−1} e^{-x^2}
= 2n(−1)^{n−1} e^{x^2} D^{n−1} e^{-x^2} = 2nH_{n−1}(x).

Equations 5.10 and 5.14 give us the very nice recurrence relation, for n ≥ 1,

H_{n+1}(x) = 2xH_n(x) − 2nH_{n−1}(x),    (5.15)

with initial conditions

H_0(x) = 1, H_1(x) = 2x.    (5.16)
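As a quick sanity check on 5.15, a Python sketch of mine (not part of the original text; the helper name is hypothetical) builds the coefficient lists of the first few Hermite polynomials and lets us compare with the polynomials appearing in 5.2 and 5.3:

```python
def hermite_coeffs(n):
    """Coefficient lists (lowest degree first) of H_0..H_n, built from (5.15):
    H_{m+1}(x) = 2x H_m(x) - 2m H_{m-1}(x), with H_0 = 1 and H_1 = 2x."""
    polys = [[1], [0, 2]]
    for m in range(1, n):
        h_m, h_prev = polys[m], polys[m - 1]
        nxt = [0] + [2 * c for c in h_m]   # multiply H_m by 2x
        for i, c in enumerate(h_prev):     # subtract 2m * H_{m-1}
            nxt[i] -= 2 * m * c
        polys.append(nxt)
    return polys[: n + 1]
```

The recurrence reproduces H_2(x) = 4x^2 − 2 and H_3(x) = 8x^3 − 12x, matching the derivatives computed at the start of the chapter.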

Another property of the Hermite polynomials worth mentioning, since it is a very important one even though we do not use it here, is that they are orthogonal with respect to the weight function w(x) = e^{-x^2} [11], meaning that

∫_{−∞}^∞ H_n(x) H_m(x) e^{-x^2} dx = 0, whenever n ≠ m.    (5.17)

This means the Hermite polynomials can be used as an orthogonal basis.

The Hermite polynomials also have the property of being odd functions whenever n is an odd number and even functions whenever n is an even number [11]. Recall that a function f is odd if it has the property

f(−x) = −f(x), for every x for which f is defined;    (5.18)

in addition, f is an even function if

f(−x) = f(x), for every x for which f is defined.    (5.19)
This property is something we will have use of later so let us convince ourselves
that this is truly the case.
H_n(−x) = (−1)^n e^{(−x)^2} d^n/d(−x)^n e^{−(−x)^2} = (−1)^n e^{x^2} (−1)^n d^n/dx^n e^{-x^2} = (−1)^n H_n(x).    (5.20)

Here we used the fact that taking a derivative with respect to −x yields

d/d(−x) e^{−(−x)^2} = e^{−(−x)^2} (−2(−x)) = −(−2x e^{-x^2}) = −(d/dx) e^{-x^2} = −D e^{-x^2}.    (5.21)
We clearly see in 5.20 that H_n(x) is an odd function whenever n is an odd number and, respectively, an even function whenever n is an even number.

It might also be interesting, at least it will be for us, to look at what happens with H_n(x) at the origin. The values H_n(0) are called the Hermite numbers and are denoted H_n [11]. Since H_n(x) is a polynomial of degree n, we know that H_n(0) is well defined. Combining this with 5.20, we have that H_n(0) = 0 whenever n is an odd number. We write this

H_{2n+1}(0) = 0, n = 0, 1, 2, . . .    (5.22)

But what about when n is even? Let us write that case as H_{2n}(0) (n = 0, 1, 2, . . . ). This one is a bit trickier and we will approach it in a similar way as we did when we derived the recurrence relation for H_n(x):

H_{2n}(0) = (−1)^{2n} e^{x^2} D^{2n} e^{-x^2} |_{x=0} = D^{2n} e^{-x^2} |_{x=0};    (5.23)

take one derivative and use Theorem 4:

D^{2n} e^{-x^2} |_{x=0} = d^{2n}/dx^{2n} e^{-x^2} |_{x=0}
= d^{2n−1}/dx^{2n−1} (−2x e^{-x^2}) |_{x=0}
= (−2) ∑_{k=0}^{2n−1} binom(2n − 1, k) x^{(k)} (e^{-x^2})^{(2n−1−k)} |_{x=0}
= (−2) (x d^{2n−1}/dx^{2n−1} e^{-x^2} + (2n − 1) d^{2(n−1)}/dx^{2(n−1)} e^{-x^2}) |_{x=0}    (5.24)
= (−2)(2n − 1) d^{2(n−1)}/dx^{2(n−1)} e^{-x^2} |_{x=0}
= (2 − 4n) d^{2(n−1)}/dx^{2(n−1)} e^{-x^2} |_{x=0} = (2 − 4n) H_{2(n−1)}(0).

So to summarize, we obtain the following recurrence relation:

H_{2n} = (2 − 4n)H_{2(n−1)}, n ≥ 1, H_0 = 1.    (5.25)

Here we are using the notation for the Hermite numbers. From this we get

H_2 = (2 − 4)H_0 = (2 − 4)
H_4 = (2 − 8)H_2 = (2 − 8)(2 − 4)
H_6 = (2 − 12)H_4 = (2 − 12)(2 − 8)(2 − 4)
⋮
H_{2n} = (2 − 4n)(2 − 4(n − 1))(2 − 4(n − 2)) ··· (2 − 8)(2 − 4)    (5.26)
= 2^n (1 − 2n)(1 − 2(n − 1))(1 − 2(n − 2)) ··· (1 − 4)(1 − 2)
= 2^n (1 − 2n)(3 − 2n)(5 − 2n) ··· (−3)(−1)
= (−2)^n (2n − 1)(2n − 3)(2n − 5) ··· 3 · 1.

This expression can be rewritten as follows:

(−2)^n (2n − 1)(2n − 3)(2n − 5) ··· 3 · 1 = (−2)^n (2n)!/(2n(2n − 2)(2n − 4) ··· 4 · 2)
= (−2)^n (2n)!/(2n · 2(n − 1) · 2(n − 2) ··· 2(2) · 2(1))
= (−2)^n (2n)!/(2^n n!)    (5.27)
= (−1)^n (2n)!/n!.

This shows that

H_{2n}(0) = (−1)^n (2n)!/n!, n = 0, 1, 2, . . .    (5.28)
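The recurrence 5.25 and the closed form 5.28 can be compared directly; a short Python sketch of my own (not part of the original derivation):

```python
from math import factorial

def hermite_number_recurrence(n):
    """H_{2n}(0) via (5.25): H_{2n} = (2 - 4n) H_{2(n-1)}, H_0 = 1."""
    h = 1
    for m in range(1, n + 1):
        h *= (2 - 4 * m)
    return h

def hermite_number_formula(n):
    """Closed form (5.28): H_{2n}(0) = (-1)^n (2n)! / n!."""
    return (-1) ** n * factorial(2 * n) // factorial(n)
```

For instance H_2(0) = −2 and H_4(0) = 12 come out of both routes.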
With these results we are now ready to approach the end goal of this work, that is, to derive the generating function for the Hermite polynomials. We will approach this one a bit differently than we have done with previous recurrence relations.

First of all, let us define the generating function. Since H_n(x) is a polynomial in x, we need to pick another variable as our "bookkeeping" variable, and we define the generating function as

G(x, t) = ∑_{n=0}^∞ H_n(x) t^n/n!,    (5.29)
which is a multi-variable generating function. Take the derivative with respect to x and we get

∂/∂x G(x, t) = d/dx ∑_{n=0}^∞ H_n(x) t^n/n! = ∑_{n=0}^∞ H_n′(x) t^n/n!.    (5.30)

Now, since H_0′(x) = 0 we can instead choose to let n start from 1; this together with 5.14 yields

∑_{n=0}^∞ H_n′(x) t^n/n! = ∑_{n=1}^∞ 2nH_{n−1}(x) t^n/n!.    (5.31)

With the change of variables m = n − 1 we get

∑_{m=0}^∞ 2(m + 1)H_m(x) t^{m+1}/(m + 1)! = ∑_{m=0}^∞ 2H_m(x) t^{m+1}/m! = 2tG(x, t).    (5.32)

So what did we learn? We learned that

∂/∂x G(x, t) = 2tG(x, t),    (5.33)

which is a differential equation; this is a very strong result, since differential equations model the world. Looking at 5.33 we see that we have a function that, when we take its derivative with respect to x, returns the same function along with the factor 2t. This implies

G(x, t) = e^{2tx} f(t),    (5.34)

where f(t) is a function depending only on t. To determine f(t), consider

G(0, t) = e^{2t·0} f(t) = f(t).    (5.35)

So if we know G(0, t) we will know f(t).



G(0, t) = ∑_{n=0}^∞ H_n(0) t^n/n!.    (5.36)

And since we know from before what H_n(0) is, we can just plug the results from 5.22 and 5.28 into 5.36, and we get

f(t) = ∑_{n=0}^∞ H_{2n}(0) t^{2n}/(2n)! = ∑_{n=0}^∞ (−1)^n (2n)!/n! · t^{2n}/(2n)! = ∑_{n=0}^∞ (−t^2)^n/n! = e^{−t^2}.    (5.37)

This gives us the generating function

G(x, t) = e^{2tx} e^{−t^2} = e^{2tx−t^2}.    (5.38)

Now that is a neat and compact function in comparison to the polynomials it is generating. Many of the polynomials' good properties can be read off from the generating function, and good properties of the generating function translate to the polynomials.

We have now derived the generating function for the Hermite polynomials defined as 5.6, which is what I wanted us to do. It is not uncommon that

you instead define the Hermite polynomials by the generating function 5.38 and derive identities from the generating function instead [11]. For instance, when we were interested in H_n(0): if we had known the generating function we could have written

e^{2t·0−t^2} = ∑_{n=0}^∞ H_n(0) t^n/n!.    (5.39)

Expand the l.h.s. in a power series and get

∑_{k=0}^∞ (−t^2)^k/k! = ∑_{n=0}^∞ H_n(0) t^n/n!,    (5.40)

at once realising that H_{2n+1}(0) = 0, since the l.h.s. of 5.40 only contains even powers of t. To find H_{2n}(0) we match the terms with n = 2k and get

∑_{k=0}^∞ (−1)^k t^{2k}/k! = ∑_{k=0}^∞ H_{2k}(0) t^{2k}/(2k)!,    (5.41)

from which it is pretty clear that H_{2n}(0) = (−1)^n (2n)!/n!, again illustrating the strength of generating functions.
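The generating function 5.38 can itself be checked numerically: multiplying the known t-series of e^{2xt} and e^{−t^2} gives the coefficient of t^n/n!, which should equal H_n(x) from the recurrence 5.15. A Python sketch of mine (the function names are my own, not from the text):

```python
from math import factorial

def hermite_recurrence(n, x):
    """H_n(x) from (5.15): H_{n+1} = 2x H_n - 2n H_{n-1}, H_0 = 1, H_1 = 2x."""
    if n == 0:
        return 1.0
    h_prev, h = 1.0, 2.0 * x
    for m in range(1, n):
        h_prev, h = h, 2 * x * h - 2 * m * h_prev
    return h

def hermite_from_gf(n, x):
    """[t^n/n!] of e^{2xt} * e^{-t^2}, via the Cauchy product of the two series:
    coefficient of t^n is sum over b of (2x)^{n-2b} (-1)^b / ((n-2b)! b!)."""
    return factorial(n) * sum(
        (2 * x) ** (n - 2 * b) * (-1) ** b / (factorial(n - 2 * b) * factorial(b))
        for b in range(n // 2 + 1))
```

Both routes give, for example, H_3(1) = 8 − 12 = −4, and they agree for all tested n at a generic x.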
Chapter 6

Moment Generating Functions

When it comes to random variables and probability distributions, the magnitudes called moments are of great importance. The first moment is the mean of a random variable. The second moment is a measure of spread, of how a random variable varies, and is connected to the variance of a random variable. The third moment is a measure of asymmetry and is connected to the skewness of the distribution's graph. The fourth moment is a measure of "tailedness" and is connected to kurtosis; it gives information on how the graph looks at its tails.

The following is taken from Moment Generating Functions, MIT, Lecture 13, Fall 2018, [15].

As you may have already guessed, the moment generating function is used in probability theory and statistics, and it provides a good way of representing probability distributions through a function of a single variable. Moment generating functions do not have a connection to recursion as the other generating functions we have studied do; instead they are connected to derivatives, as they generate moments through derivation.

Moment generating functions are useful in several ways, to name a few:

1. It is easy to calculate the moments of a distribution using the moment generating function.

2. Characterizing the distribution of a sum of independent random variables is easy using moment generating functions.

3. When dealing with the distribution of the sum of a random number of independent random variables, they provide some powerful tools.

Also, according to [20], the moment generating function seems to be effective when it comes to probabilistic inference algorithms with latent count variables, that is, where one does not have complete control of all variables.

Below is the formal definition of a moment generating function, taken from [15]; elementary probabilistic notions, such as random variable and expectation, can be found in [4].
Definition 2 The moment generating function associated with a random variable X is a function M_X(s) : ℝ → [0, ∞] defined by

M_X(s) = E(e^{sX}),    (6.1)

where E(X) is the expectation, or mean, of X. This means that

M_X(s) = ∑_x e^{sx} p_X(x)    (6.2)

if X is a discrete random variable with Probability Mass Function (PMF) p_X(x), and

M_X(s) = ∫_{−∞}^∞ e^{sx} f_X(x) dx    (6.3)

if X is a continuous random variable with Probability Density Function (PDF) f_X(x).
Example 10 Let X be a random variable taking the values 1 and 0 with probability p and 1 − p respectively; this is the same as saying X is Bernoulli distributed, and we denote it X ∼ Ber(p). In this case we get the moment generating function

M_X(s) = e^{s·0} p_X(0) + e^s p_X(1) = pe^s − p + 1.    (6.4)
Now, an important property of the moment generating function is its ability to generate moments [15]. Recall that the first moment of a random variable X is E(X) and the second moment is E(X^2); naturally, the k:th moment is E(X^k).

Let M_X(s) be a moment generating function for a discrete random variable X. Then by Definition 2 and the Maclaurin expansion of e^x we get

M_X(s) = E(1 + sX + (s^2/2)X^2 + (s^3/3!)X^3 + ···) = 1 + sE(X) + (s^2/2)E(X^2) + (s^3/3!)E(X^3) + ···.    (6.5)

Look at what happens when we take the derivative with respect to s and evaluate it at s = 0:

d/ds (1 + sE(X) + (s^2/2)E(X^2) + (s^3/3!)E(X^3) + ···) |_{s=0} = E(X).    (6.6)

So we obtain the mean (first moment) of X by just taking the first derivative of M_X(s) and evaluating at s = 0. If we wanted to know the second moment E(X^2), we would just take the second derivative of M_X(s) and evaluate at s = 0. It is not too hard to see that the moment generating function "generates" the moment E(X^k) if we apply the operator D^k to it and evaluate at s = 0 [15]:

D^k M_X(s) |_{s=0} = D^k (1 + sE(X) + (s^2/2)E(X^2) + (s^3/3!)E(X^3) + ···) |_{s=0}
= D^{k−1} (0 + E(X) + sE(X^2) + (s^2/2)E(X^3) + ···) |_{s=0}
= D^{k−2} (0 + 0 + E(X^2) + sE(X^3) + (s^2/2)E(X^4) + ···) |_{s=0}
⋮
= (0 + 0 + 0 + ··· + E(X^k) + sE(X^{k+1}) + ···) |_{s=0}
= E(X^k).    (6.7)

This is a very useful property when it comes to calculating things like the variance of a random variable. Remember that the variance of a random variable X is defined as

V(X) = E[(X − E(X))^2] = E(X^2) − E(X)^2.    (6.8)

So if we were to calculate the variance of the random variable from Example 10, we just need the first and second derivative of 6.4, evaluated at s = 0:

E(X) = d/ds M_X(s) |_{s=0} = d/ds (pe^s − p + 1) |_{s=0} = pe^s |_{s=0} = p,    (6.9)

E(X^2) = d^2/ds^2 M_X(s) |_{s=0} = d/ds (pe^s) |_{s=0} = pe^s |_{s=0} = p.    (6.10)

So we get

V(X) = p − p^2 = p(1 − p).    (6.11)
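The moment-extraction idea can be demonstrated numerically as well, by differentiating the Bernoulli mgf 6.4 at s = 0 with finite differences instead of by hand. A sketch of mine (not part of the original lecture material; names and the choice p = 0.3 are illustrative):

```python
from math import exp

def mgf_bernoulli(p):
    """The mgf of X ~ Ber(p) from (6.4): M_X(s) = p e^s - p + 1."""
    return lambda s: p * exp(s) - p + 1

def moment(M, k, h=1e-3):
    """Approximate the k-th derivative of M at s = 0 (k = 1 or 2 only here),
    i.e. the k-th moment, via central finite differences."""
    if k == 1:
        return (M(h) - M(-h)) / (2 * h)
    if k == 2:
        return (M(h) - 2 * M(0) + M(-h)) / (h * h)
    raise ValueError("only k = 1 and k = 2 are sketched here")

p = 0.3
M = mgf_bernoulli(p)
EX, EX2 = moment(M, 1), moment(M, 2)
variance = EX2 - EX ** 2  # (6.8)
```

Both moments come out close to p = 0.3, and the variance close to p(1 − p) = 0.21, as 6.9 to 6.11 predict.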

We began this chapter with a little talk regarding the first four moments of a distribution and how powerful these moments are when it comes to characterising a distribution. Now we have illustrated that calculations with moment generating functions are very efficient. It is not hard to imagine this having a lot of applications. For instance in Artificial Intelligence (A.I.), or any type of data analysis, we analyse a lot of data, and this data can be viewed as a random variable; with the moment generating function we can then calculate the moments of that variable, characterize the data and see what distribution it follows.
Chapter 7

Conclusions

We have, throughout this report, illustrated that generating functions provide a solid method of solving recurrence relations, one that simplifies this work especially when it comes to recurrence relations that are a bit tougher than the linear ones. The different kinds of generating functions have different pros. When solving recurrence relations using ops we had an easier way of determining the generating function, since it just involved an ordinary algebraic equation that we solved for the generating function, while in the egf case we often need to solve a differential equation to determine the generating function. On the other hand, once the generating function is known, we had an easier time determining an explicit formula for the sequence when working with the egf, as opposed to the ops, where we had to do some partial fraction decomposition to derive an explicit formula.
When we worked with the recurrence relation regarding branched coverings in Chapter 3, we were able to recognise that the recurrence relation had similarities with the Fibonacci recurrence, and this translated to the generating function, which also had strong similarities with the generating function of the Fibonacci sequence, making it possible to simplify our calculations by using results from the Fibonacci generating function. From this we conclude that it might be a good idea to look for similarities with generating functions that we already know, to help us solve something new.
I consider the derivation of the Hermite polynomials' generating function
to be, in a sense, the main result of this work. This is because that generating
function generates an interesting family of functions, namely the Hermite
polynomials. They are interesting since they arise as solutions to differential
equations, and differential equations are what we use to model the world.
I also wanted to show how that generating function can be obtained, since it is

Rydén, 2023. 45

common in the literature to take the generating function for the Hermite
polynomials as already known and use it for calculations, since those calculations are simpler.
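The generating function itself can also be checked term by term. The sketch below (sympy assumed; sp.hermite uses the physicists' convention H_n, for which the generating function is e^{2xt − t²}) expands the exponential and compares the coefficient of t^n/n! with H_n(x):

```python
# Checking e^{2xt - t^2} = sum_{n>=0} H_n(x) t^n / n!  term by term
# (physicists' convention, which is what sympy's sp.hermite implements).
import sympy as sp

x, t = sp.symbols('x t')
G = sp.exp(2*x*t - t**2)

# Expand G as a power series in t and read off the coefficients.
expansion = sp.expand(sp.series(G, t, 0, 6).removeO())
for n in range(6):
    Hn = sp.expand(expansion.coeff(t, n) * sp.factorial(n))
    assert Hn == sp.expand(sp.hermite(n, x)), n
print("t^n/n! coefficients match H_0(x), ..., H_5(x)")
```

This is only a verification of finitely many terms, of course; the derivation in the text is what establishes the identity for all n.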
Lastly I would like to mention another type of generating function, a generali-
sation of the moment generating function called the Universal Generating
Function (ugf). It is a powerful tool when working with random distributions.
When we looked at moment generating functions we considered discrete random
variables; the ugf, in a sense, "translates" this to a non-discrete case. This
makes the ugf a powerful tool for simulations, for instance; the interested
reader should see Universal Generating Function Based Probabilistic Production
Simulation for Wind Power Integrated Power Systems [9]. In that article the
authors present a model that simulates the electricity demand, output, and costs
of a power system. Power systems are quite complex and consist of multiple kinds
of generators with varying outputs and capacities. Their model produces very
accurate simulations using the ugf, at a far lower computational cost than other
methods.
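A minimal illustration of the ugf idea (the unit data below are invented for the example, not taken from [9]): a unit's ugf u(z) = Σ p_k z^{g_k} pairs each performance level g_k with its probability p_k, and independent units are combined by a composition operator over states:

```python
# Minimal ugf sketch: a unit is represented as {performance level: probability};
# composing independent units under a structure operator (here: total output)
# multiplies probabilities and combines levels, collecting like states.
from collections import defaultdict

def compose(u1, u2, op=lambda a, b: a + b):
    out = defaultdict(float)
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            out[op(g1, g2)] += p1 * p2
    return dict(out)

# Two hypothetical wind generators, each either off (0 MW) or producing 50 MW.
gen1 = {0: 0.3, 50: 0.7}
gen2 = {0: 0.4, 50: 0.6}
system = compose(gen1, gen2)  # distribution of the system's total output
print(system)
```

Changing the operator (minimum for series structures, sum for parallel ones) is what lets the same machinery handle many system topologies, which is part of what makes the ugf computationally cheap.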
Bibliography

[1] S. Abbott, Understanding Analysis, Springer-Verlag, New York, 2015.
[2] L. Alexandersson, Lecture Notes for Complex Analysis, Linköping University, Linköping, 2021.
[3] I. Andersson, A First Course in Discrete Mathematics, Springer-Verlag, London, 2001.
[4] G. Blom, J. Enger, G. Englund, J. Grandell, L. Holst, Sannolikhetsteori och Statistikteori med Tillämpningar, Studentlitteratur AB, Lund, 2005.
[5] W. Fulton, Algebraic Topology, GTM 153, Springer-Verlag, New York, 1995.
[6] R. P. Grimaldi, Discrete and Combinatorial Mathematics, An Applied Introduction, Pearson Education Inc., USA, 2004.
[7] M. Izquierdo, Lecture Notes for Discrete Mathematics, Linköping University, https://courses.mai.liu.se/GU/TATA82/Dokument/Lectures4-5.pdf.
[8] M. Izquierdo, On Klein Surfaces and Dihedral Groups, Math. Scand. 76 (1995), 221-232.
[9] T. Jin, M. Zhou, G. Li, Universal Generating Function Based Probabilistic Production Simulation for Wind Power Integrated Power Systems, J. Mod. Power Syst. Clean Energy 5 (2017), 134-141.
[10] G. A. Jones, D. Singerman, Complex Functions, Cambridge University Press, Cambridge, 1987.
[11] D. S. Kim, T. Kim, S-H. Rim, S. H. Lee, Hermite Polynomials and their Applications Associated with Bernoulli and Euler Numbers, Discrete Dynamics in Nature and Society, 2012, doi:10.1150/2012/974632.


[12] E. Lages Lima, Fundamental Groups and Covering Spaces, A. K. Peters, Natick, MA, 2003.
[13] J. H. van Lint, R. M. Wilson, A Course in Combinatorics, Cambridge University Press, Cambridge, 2001.
[14] J. E. Marsden, M. J. Hoffman, Elementary Classical Analysis, W. H. Freeman & Company, New York, 1993.
[15] Moment Generating Functions, Massachusetts Institute of Technology, Lecture 13, Fall 2018, https://ocw.mit.edu/courses/6-436j-fundamentals-of-probability-fall-2018/1a592ed184fb4c444547f67c9bcdd8ec_MIT6_436JF18_lec13.pdf.
[16] K. H. Rosen, Discrete Mathematics and its Applications, McGraw-Hill Education, New York, 2019.
[17] D. Rule, Lecture Notes for Partial Differential Equations, Linköping University, spring 2019.
[18] J. Stillwell, Geometry of Surfaces, Springer-Verlag, New York, 1992.
[19] H. S. Wilf, Generatingfunctionology, Academic Press Inc., 1994.
[20] K. Winner, D. Sheldon, Probabilistic Inference with Generating Functions for Poisson Latent Variable Models, Proceedings of the 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Advances in Neural Information Processing Systems 29.
Linköping University Electronic Press

Copyright
The publishers will keep this document online on the Internet – or its possible
replacement – from the date of publication barring exceptional circumstances.
The online availability of the document implies permanent permission for
anyone to read, to download, or to print out single copies for his/her own use
and to use it unchanged for non-commercial research and educational purpose.
Subsequent transfers of copyright cannot revoke this permission. All other uses
of the document are conditional upon the consent of the copyright owner. The
publisher has taken technical and administrative measures to assure authentic-
ity, security and accessibility.
According to intellectual property law the author has the right to be men-
tioned when his/her work is accessed as described above and to be protected
against infringement.
For additional information about the Linköping University Electronic Press
and its procedures for publication and for assurance of document integrity,
please refer to its www home page: http://www.ep.liu.se/.

Upphovsrätt
This document is made available on the Internet – or its future replacement –
from the date of publication, provided that no exceptional circumstances arise.
Access to the document implies permission for anyone to read, download, and
print single copies for personal use, and to use it unchanged for non-commercial
research and for teaching. Transfer of the copyright at a later date cannot
revoke this permission. All other use of the document requires the consent of
the copyright holder. Solutions of a technical and administrative nature are in
place to guarantee authenticity, security, and accessibility.
The author's moral rights include the right to be named as the author, to the
extent required by good practice, when the document is used as described above,
as well as protection against the document being altered or presented in a form
or context that is offensive to the author's literary or artistic reputation or
character.
For additional information about Linköping University Electronic Press, see the
publisher's home page: http://www.ep.liu.se/.

© 2023, Christoffer Rydén
