LN ML Rug
Lecture Notes
V 1.13, Jan 19, 2022
A note on the mathematical background that is required. Neural networks process
“training data”, and these data often originally come in the format of Excel files
(yes! empirical scientists who actually generate those valuable “raw data” often
use Excel!), which are just matrices if seen with the eye of a machine learner.
Furthermore, neural networks are shaped by connecting neurons with weighted
“synaptic links”, and these weights are again naturally sorted in matrices. And
the main operation that a neural network actually does is formalized by a matrix-
vector multiplication. So it’s matrices and vectors all over the place, no escape
possible. You will need at least a basic, robust understanding of linear algebra to
survive or even enjoy this course. We will arrange a linear algebra crash refresher
early in the course. A good free online resource is the book “Mathematics for
Machine Learning” (Deisenroth, Faisal, and Ong 2019).
Furthermore, to a lesser degree, some familiarity with statistics and prob-
ability is also needed. You will find a summary of the must-knows in the appendix of these
lecture notes, and again a tutorial exposition in Deisenroth, Faisal, and Ong 2019.
Finally, a little (not much) calculus is needed to top it off. If you are familiar
with the notion of partial derivatives, that should do it. In case of doubt - again
it’s all (and more) in Deisenroth, Faisal, and Ong 2019.
1 Introduction
1.1 Human Versus Machine Learning
Humans learn. Animals learn. Societies learn. Machines learn. It looks as if
“learning” were a universal phenomenon and all we had to do were to develop a solid
scientific theory of “learning”, turn that into algorithms and then let “learning”
happen on computers. Wrong, wrong, wrong. Human learning is very different
from animal learning (and amoebas learn different things in different ways than
chimpanzees do), societal learning is quite another thing than human or animal learning,
and machine learning is as different from any of the former as cars are from horses.
Human learning is incredibly scintillating and elusive. It is as complex and
impossible to understand as you are yourself — look into a mirror and think of
all the things you can do, all of your body motions from tying your shoes to
playing the guitar; thoughts you can think from “aaagrhhh!” to “I think therefore
I am”; achievements personal, social, academic; all the things you can remember
including your first kiss and what you did 20 seconds ago (you started reading this
paragraph, in case you forgot); your plans for tomorrow and the next 40 years;
well, just everything about you — and almost everything in that wild collection
is the result of a fabulous mixing of learning of some kind with other miracles and
wonders of life. To fully understand human learning, a scientist would have to
integrate at least the following fields and phenomena:
Recent spectacular advances in machine learning may have nurtured the im-
pression that machines come already somewhat close. Specifically, neural net-
works with many cascaded internal processing stages (so-called deep networks)
have been trained to solve problems that were considered close to impossible
only a few years back. A showcase example (one that got me hooked) is au-
tomated image captioning (technical report: Kiros, Salakhutdinov, and Zemel 2014).
At http://www.cs.toronto.edu/~nitish/nips2014demo you can find stunning
examples of caption phrases that have been automatically generated by a neural
network based system which was given photographic images as input. Figure 1
shows some screenshots. This is a demo from 2014. Since deep learning is evolv-
ing incredibly fast, it’s already rather outdated today and current image caption
generators come much closer to perfection. But back in 2014 this was a revelation.
Other fascinating examples of deep learning are face recognition (Parkhi, Vedaldi,
and Zisserman 2015), online text translation (Bahdanau, Cho, and Bengio 2015),
inferring a Turing machine (almost) from input-output examples (Graves et al.
2016), or playing the game of Go at and beyond the level of human grand-masters
(Silver et al. 2016). I listed these examples when I wrote the first edition of these
lecture notes and I am too lazy to update this list every year (though I should),
— instead this year (2021) I just point out the current best deep learning system,
a text generation system called GPT-3 (read more about it on Wikipedia and in
this instructively critical blog).
So, apparently machine learning algorithms come close to human performance
in several tasks or even surpass it, and these performance achievements have
been learnt by the algorithms — thus, machines today can learn like humans??!?
The answer is NO. ML researchers (the really good ones, not the average Tensor-
Flow user) are highly aware of this. Outside ML however, naive spectators (from
the popular press, politics, or other sciences) often conclude that since learning
machines can perform similar feats as humans, they also learn like humans. It
takes some effort to argue why this is not so (read Edelman 2015 for a refutation
from the perspective of cognitive psychology). I cannot embark on this fascinat-
ing discussion at this point. Very roughly speaking, it’s the same story again as
with chess-playing algorithms: the best chess programs win against the best hu-
man chess players, but not by fair means — chess programs are based on larger
amounts of data (recorded chess matches) than humans can memorize, and chess
programs can do vastly more computational operations per second than a human
can do. Brute force wins over human brains at some point when there is enough
data and processing bandwidth. Progress has accelerated in the last years be-
cause increasingly large training datasets have become available and fast enough
computing systems have become cheap enough.
This is not to say that powerful “deep learning” just means large datasets and
fast machines. These conditions are necessary but not sufficient. In addition,
numerous algorithmic refinements and theoretical insights in the area of
statistical modeling had to be developed. Some of these algorithmic/theoretical
concepts will be presented in this course.
Take-home message: The astonishing learning achievements of today’s ML are
based on statistical modeling techniques, raw processing power and a lot of a
researcher’s personal experience and trial-and-error optimization. It’s technology
and maths, not neurobiology or psychology. Dismiss any romantic ideas about
ML that you may have had. ML is data technology stuff for sober engineers. But
you are allowed to become very excited about that stuff, and that stuff can move
mountains.
Figure 1: Three screenshots from the image caption demo at http://www.cs.
toronto.edu/~nitish/nips2014demo. A “deep learning” system was trained on
some tens of thousands of photos showing everyday scenes. Each photo in the
training set came with a few short captions provided by humans. From these
training data, the system learnt to generate tags and captions for new photos.
The tags and captions on the left were produced by the trained system upon
input of the photos at the right.
1.2 The two super challenges of ML - from an eagle’s eye
In this section I want to explain, on an introductory, pre-mathematical level,
that large parts of ML can be understood as the art of estimating probability
distributions from data. And that this art faces a double super challenge: the
unimaginably complex geometry of real-world data distributions, and the extreme
scarcity of information provided by real-world data. I hope that after reading this
section you will be convinced that machine learning is impossible. Since, however,
GPT-3 exists, where is the trick? ... A nice cliffhanger, you see — I want to make
you read on.
ten 300-dimensional vectors, which boils down to a single 3000-dimensional vector,
that is, a point c ∈ R3000 .
Similarly, an input picture sized 600 × 800 pixels with 3 color channels is
represented to TICS as a vector u of dimension 3 × 600 × 800 = 1,440,000.
Now TICS generates, upon a presentation of an input picture vector u, a list
of what it believes to be the five most probable captions for this input. That is,
TICS must have an idea of the probability ratios of different caption candidates.
Formally, if C denotes the random variable (RV) that returns captions c, and if
U denotes the RV which produces sample input pictures u, TICS must compute
ratios of conditional probabilities of the form P (C = ci |U = u)/P (C = cj |U = u).
In the semantic word vector space, these ratios become ratios of the values of
probability density functions (pdf) over the caption vector space R3000 . For every
input image u, TICS must have some representation of the 3000-dimensional pdf
describing the probabilities of caption candidates for image u.
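To make the vector formats concrete, here is a minimal Python/NumPy sketch (my own illustration, not part of the original TICS system) of how a 10-word caption and an input picture would be flattened into the vectors c and u just described. The `word_vectors` lookup table is a hypothetical stand-in for whatever 300-dimensional semantic word embedding TICS actually uses.

```python
import numpy as np

# Hypothetical 300-dimensional semantic word embeddings (random stand-ins).
rng = np.random.default_rng(0)
vocabulary = ["a", "group", "of", "people", "on", "bridge", "beside", "boat", "ship"]
word_vectors = {w: rng.normal(size=300) for w in vocabulary}

def encode_caption(words):
    """Concatenate the ten 300-dim word vectors into one 3000-dim caption vector c."""
    assert len(words) == 10
    return np.concatenate([word_vectors[w] for w in words])   # shape (3000,)

def encode_image(pixels):
    """Flatten a 600 x 800 RGB image into a 1,440,000-dim input vector u."""
    assert pixels.shape == (600, 800, 3)
    return pixels.reshape(-1)                                  # shape (1440000,)

caption = ["a", "group", "of", "people", "on", "a", "bridge", "beside", "a", "boat"]
c = encode_caption(caption)
u = encode_image(rng.random((600, 800, 3)))
print(c.shape, u.shape)   # (3000,) (1440000,)
```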
Now follow me on a little excursion into geometric imagination.
Consider some specific vector c ∈ R3000 which represents a plausible 10-word
caption for image u, that is, the pdf value p(c) is relatively large. What happens
to the pdf value if we move away from c by taking a little step δ ∈ R3000 of length,
say, ‖δ‖ = 0.1, that is, how does p(c + δ) compare to p(c)? This depends on
the direction in which δ points. In a few directions, p(c + δ) will be about as
large as p(c). This happens when δ points toward another caption vector c′ which
has one word replaced by another word that is semantically close. For example,
consider the caption “A group of people on a bridge beside a boat”. The last 300
elements in the 3000-dimensional vector c coding this caption stand for the word
boat. Replacing this caption by “a group of people on a bridge beside a ship”
gives another codevector c′ ∈ R3000 which is the same as c except in the last
300 components, which have been replaced by the semantic word vector for ship.
Then, if δ points from c toward c′ (that is, δ is a fraction of c′ − c), p(c + δ) will
not differ from p(c) in a major way.
If you think a little about it, you will come to the conclusion that such δ which
leave p(c + δ) roughly at the same level are always connected with replacing words
by semantically related words. Other change directions δ ∗ will either make no
semantical sense or destroy the grammatical structure connected with c. The pdf
value p(c + δ ∗ ) will drop dramatically compared to p(c) in those cases.
Now, in a 10-word caption, how many replacements of some word with a related
one exist? Some words will be grammatical function words (“a”, “of” etc.) which
admit only a small number of replacements, or none at all. The words that carry
semantic meaning (“group”, “people” etc.) typically allow for a few sense-making
replacements. Let us be generous and assume that a word in a 10-word caption, on
average, can be replaced by 5 alternative words such that after the replacement,
the new caption still is a reasonable description of the input image.
This means that around c there will be 5 · 10 = 50 directions in which the
relatively large value of p(c) stays large. Assuming these 50 directions are given
by linearly independent δ vectors, we find that around c there is a 50-dimensional
affine linear subspace S of R3000 in which we can find high p values in the vicinity
of c, while in the 2950 directions orthogonal to S, the value of p will drop fast if
one moves away from c.
OK, this was a long journey to a single geometric finding: locally around
some point c where the pdf is relatively large, the pdf will stay relatively large
in only a small fraction of the directions - these directions span a low-dimensional
hyperplane around c. If you move a little further away from c on these low-
dimensional “sheets”, following the lead of high pdf values, you will find that this
high-probability surface will take you on a curved path - these high-probability
sheets in R3000 are not flat but curved.
The mathematical abstraction of such relatively high-probability, low-dimensional,
curved sheets embedded in R3000 is the concept of a manifold. Machine learning
professionals often speak of the “data manifold” in a general way, in order to
indicate that the geometry of high-probability areas of real-world pdfs consists
of “thin” (low-dimensional), curved sheet-like domains curled into the embedding
data space. It is good for ML students to know the clean mathematical definition
of a manifold. Although the geometry of real-world pdfs will be less clean than
this mathematical concept, it provides a useful set of intuitions, and advanced
research in DL algorithms frequently starts off from considering manifold models.
So, here is a brief intro to the mathematical concept of a manifold. Consider
an n-dimensional real vector space Rn (for the TICS output space of caption
encodings this would be n = 3000). Let m ≤ n be a positive integer not larger
than n. An m-dimensional manifold M is a subset of Rn which locally can be
smoothly mapped to Rm , that is, at each point of M one can smoothly map a
neighborhood of that point to a neighborhood of the origin in the m-dimensional
Euclidean coordinate system (Figure 2A).
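For readers who like to see this in symbols, here is one common way to phrase it (my paraphrase of the standard textbook definition of an embedded smooth manifold, not a quotation):

```latex
% M \subseteq \mathbb{R}^n is an m-dimensional manifold if every point of M
% admits a "chart" onto an open piece of \mathbb{R}^m:
\[
  \forall\, x \in M:\;\; \exists\, U \subseteq \mathbb{R}^n \text{ open with } x \in U,
  \;\; \exists\, V \subseteq \mathbb{R}^m \text{ open},
  \;\; \exists\, \varphi : U \cap M \to V
\]
\[
  \text{such that } \varphi \text{ is bijective and both } \varphi \text{ and }
  \varphi^{-1} \text{ are smooth.}
\]
```

Locally around any of its points, M thus “looks like” a piece of the m-dimensional Euclidean space, even though it may be curled into the much higher-dimensional embedding space.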
1-dimensional manifolds are just lines embedded in some higher-dimensional
Rn (Figure 2B), 2-dimensional manifolds are surfaces, etc. Manifolds can be wildly
curved, knotted (as in Figure 2C), or fragmented (as in Figure 2B). Humans
cannot visually imagine manifolds of dimension greater than 2.
The “data manifold” of a real-world data source is not a uniquely or even
well-defined thing. For instance, returning to the TICS example, the dimension
of the manifold around a caption c would depend on an arbitrary threshold fixed
by the researcher – a direction δ would / would not lead out of the manifold if
the pdf decreases in that direction faster than this threshold. Also (again using
the caption scenario) around some good captions c the number of “good” local
change directions will differ from the number around another, equally good, but
differently structured caption c′ . For these and other reasons, claiming that data
distributions are shaped like manifolds is a strongly simplifying abstraction.
Despite the fact that this abstraction does not completely capture the geometric-
statistical complexity of real-world data points, the geometric intuitions behind
the manifold concept have led to substantial insight into the (mal-)functioning of
Figure 2: Illustrations of manifolds (panels A, B, C). Image sources: kahrstrom.com/mathematics/illustrations.php, en.wikipedia.org/wiki/Manifold, www.math.utah.edu/carlson60/.
Figure 3: Manifold magic: turning a panda (classified as “panda” with 58% confidence)
into a gibbon (classified as “gibbon” with 99% confidence). For explanation see
text. Picture taken from I. J. Goodfellow, Shlens, and Szegedy 2014.
Figure 4: Two images and their annotations from the training dataset. Taken
from Young et al. 2014.
A frequently found keyword that points to this bundle of ideas and challenges is
“data manifold”.
thus represent the information contained in the training data.
[Figure 5 graphic: photo dataspace axes x1 , x2 , caption dataspace axis y; training images u1 , u2 with caption values y1 , y2 ; test image u∗ ; example caption shown in the figure: “A green-shirted man with a butcher‘s apron uses a knife to carve out the hanging carcass of a cow.”]
Figure 5: Training data for TICS (highly simplified). The photo dataspace (light
blue square spanned by pixel intensities x1 , x2 ) is here reduced to 2 dimensions,
and the caption dataspace to a single one only (y-axis). The training dataset is
assumed to contain two photos only (blue diamonds u1 , u2 ), each with one caption
(blue crosses with y-values y1 , y2 ). A test image u∗ (orange diamond) must be
associated by TICS, after it has been trained, with a suitable caption which lies
somewhere in the y-direction above u∗ (dashed orange ??-line).
Now consider a new test image u∗ (orange diamond in the figure). The TICS
system must determine a suitable caption for u∗ , that is, an appropriate value y ∗
– a point somewhere on the orange broken ??-line in the figure.
But all that TICS knows about captions is contained in the two training data
points (blue crosses). If you think about it, there seems to be no way for TICS to
infer from these two points where y ∗ should lie. Any placement along the ??-line
is logically possible!
In order to determine a caption position along the ??-line, TICS must add
some optimization criterion to the scenario. For instance, one could require one
of the following conditions:
1. Make y ∗ that point on the ??-line which has the smallest total distance to
all the training points.
2. One might wish to grant closer-by training images a bigger impact on the
caption determination than further-away training images. Thus, if d(ui , u∗ )
denotes the distance between training image ui and test image u∗ , set y ∗
to the weighted mean of the training captions yi , where the weights are
inversely proportional to the distance of the respective images:
y∗ = (1 / Σ_i d(ui , u∗ )^−a ) Σ_i d(ui , u∗ )^−a yi .
• All of these optimization criteria make some intuitive sense, but they would
lead to different results. The generated captions will differ depending on
which criterion is chosen.
• The criteria rest on quite different intuitions. It is unclear which one would
be better than another one, and on what grounds one would compare their
relative merits. A note in passing: finding comparison criteria to assess
relative merits of different statistical estimation procedures is a task that
constitutes an entire, important branch of statistics. For an introduction you
might take a look at my lecture notes on “Principles of statistical modeling”,
Section 19.3.1 “Comparing statistical procedures” (https://www.ai.rug.
nl/minds/uploads/LN_PSM.pdf).
• Criteria 2 and 3 require the choice of design parameters (a, σ 2 ). The other
criteria could be upgraded to include reasonable design parameters too. It
is absolutely typical for machine learning algorithms to have such hyperpa-
rameters which need to be set by the experimenter. The effects of these
hyperparameters can be very strong, determining whether the final solution
is brilliant or useless.
• Casting an optimization criterion into a working algorithmic procedure may
be challenging. For criteria 1, 2 and 3 in the list, this would be relatively
easy. Turning criterion 4 into an algorithm looks fearsome.
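As a concrete illustration of criterion 2 (the inverse-distance weighting formula above), here is a minimal Python sketch. The training images, captions, and the exponent a are toy stand-ins chosen for this example only; the real TICS data would of course be the high-dimensional vectors discussed earlier.

```python
import numpy as np

def idw_caption(u_star, train_images, train_captions, a=2.0):
    """Criterion 2: inverse-distance-weighted mean of the training caption values.

    y* = (sum_i d(u_i, u*)^-a * y_i) / (sum_i d(u_i, u*)^-a)
    """
    u_star = np.asarray(u_star, dtype=float)
    dists = np.array([np.linalg.norm(u_star - np.asarray(u)) for u in train_images])
    # If u* coincides with a training image, d = 0 and the weights degenerate;
    # a real implementation would handle that special case separately.
    weights = dists ** (-a)
    return float(np.sum(weights * np.asarray(train_captions)) / np.sum(weights))

# Toy version of Figure 5: two 2-dim training images u1, u2 with 1-dim captions y1, y2.
u_train = [[0.2, 0.3], [0.8, 0.7]]
y_train = [0.1, 0.9]
print(idw_caption([0.5, 0.5], u_train, y_train, a=2.0))  # lies between 0.1 and 0.9
```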
Some methods in ML are cleanly derived from optimization criteria, with math-
ematical theory backing up the design of algorithms. An example of such well-
understood methods is decision trees, which will be presented first in our course.
Other branches employ learning algorithms that are so complex that one loses
control over what is actually optimized, and one has only little insight in the
geometrical and statistical properties of the models that are delivered by these
methods. This is particularly true for deep learning methods.
There is something like an irreducible arbitrariness of choosing an optimiza-
tion condition, or, if one gives up on fully understanding what is happening, an
arbitrariness in the design of complex learning algorithms. This turns machine
learning into a quite personal preference thing, even into an art.
To put this all in a nutshell: the available training data do not include the
information of how the information contained in them should be “optimally” ex-
tracted, or what kind of information should be extracted. This could be called a
lack of epistemic information (my private terminology; epistemology is the branch
of philosophy that is concerned with the question by which methods of reasoning
humans acquire knowledge, and with the question what “knowledge” is in the
first place - check out https://en.wikipedia.org/wiki/Epistemology if you
are interested in these deep, ancient, unsolved questions).
The discussion of lacking epistemic information is hardly pursued in today’s
ML, although there have been periods in the last decades when fierce debates
were fought over such issues.
But there is also another lack of information which is a standard theme in
today’s ML textbooks. This issue is called the curse of dimensionality. I will
highlight this curse with our TICS demo again.
In mathematical terminology: after training, our super-simplified 3-dimensional
TICS system from Figure 5 is able to compute a function f : [0, 1]2 → [0, 1]
which computes a caption f (u∗ ) for every input test image u∗ ∈ [0, 1]2 . The only
information TICS has at learning time is contained in the training data points
(blue crosses in our figure).
Looking at Figure 5, estimating a function f : [0, 1]2 → [0, 1] from just the
information contained in the two blue crosses is clearly a dramatically underde-
termined task. You may argue that the version of the TICS learning task which I
gave in Figure 5 has been simplified to an unfair degree, and that the real TICS
system had not just 2, but 30,000 training images to learn on.
But, in fact, for the full-size TICS the situation is even much, much worse than
what appears in Figure 5:
• In the caricature demo there were 2 training images scattered in a 2-dimensional
(pixel intensity) space. That is, as many data points as dimensions. In con-
trast, in TICS there are 30,000 images scattered in a 1,440,000-dimensional
pixel intensity space: that is, about 50 times more dimensions than data
points! How should it be possible at all to estimate a function from [0, 1]1440000
to [0, 1]3000 when one has so many fewer training points than dimensions!?
The two conditions together – fewer data points than dimensions, and large
distances between the points – make it appear impossible to estimate a function
on [0, 1]1440000 from just those 30,000 points.
This is the dreaded “curse of dimensionality”.
In plain English, the curse of dimensionality says that in high-dimensional data
spaces, the available training data points will be spread exceedingly thinly in the
embedding data space, and they will lie far away from each other. Metaphorically
speaking, training data points are a few flickering stars in a vast and awfully
empty universe. But, most learning tasks require one to fill the empty spaces with
information.
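A quick numerical experiment (my own illustration, not taken from the TICS data) makes the “flickering stars in an empty universe” picture tangible: with a fixed number of random points in the unit cube, the typical distance to the nearest neighbour grows rapidly with the dimension.

```python
import numpy as np

rng = np.random.default_rng(42)
n_points = 500

for dim in [2, 10, 100, 1000]:
    X = rng.random((n_points, dim))                    # points in [0,1]^dim
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T       # squared pairwise distances
    np.fill_diagonal(d2, np.inf)                       # ignore distance of a point to itself
    nn = np.sqrt(np.maximum(d2, 0).min(axis=1))        # nearest-neighbour distances
    print(f"dim={dim:5d}  mean nearest-neighbour distance = {nn.mean():.2f}")
```

With the number of points held constant, the nearest neighbours drift further and further apart as the dimension grows, which is exactly the thinning-out described above.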
it is hard to say what is the dimension of this vector. A lower bound could be
found in the number of sensory neural fibers reaching the brain. The optical
nerve has about 1,000,000 fibres. Having two of them, plus all sorts of other
sensory fibers reaching the brain (from the nose, ears, body), let us boldly declare
that the dimension of sensor inputs to the brain is 3 million. Now, the learning
process for a human differs from TICS learning in that sensory input arrives in
a continuous stream, not in isolated training images. In order to make the two
scenarios comparable, assume furthermore that a human organizes the continuous
input stream into “snapshots” at a rate of 1 snapshot “picture” of the current
sensory input per second (cognitive psychologists will tell us that the rate is likely
less). In the course of 25 years, with 12 wake hours per day, this makes about 25
× 360 × 12 × 3600 ≈ 390 million “training snapshot images”. This gives a ratio
of number of training points over data dimension of 390 million / 3 million = 130,
which looks so much better than the ratio of 30,000 / 1,443,000 ≈ 0.02 in TICS’s
case.
But even the ratio of 130 data points per dimension is still hopeless, and the
curse of dimensionality strikes just as well. Why?
For simplicity we again assume that the 3-million dimensional sensor image
vectors are normalized to a range of [0, 1], making a sensor impression a point in
[0, 1]^3000000 . Statisticians, machine learners, and many cognitive scientists would
tell us that the “world model” of a human can be considered to be a probabil-
ity distribution over the sensory image space [0, 1]^3000000 . This is a hypercube
with 2^3000000 corners. In order to estimate a probability distribution over an n-
dimensional hypercube, a statistician would demand to have many more data-
points than the cube has corners (to see why, think about estimating a probability
distribution over the one-dimensional unit hypercube [0, 1] from data points on
that interval, then extrapolate to higher dimensions). That is, a brain equipped
with ordinary statistical methods would demand to have many more than 2^3000000
training data points to distil a world model from that collection of experiences.
But there are only 390 million such datapoints collected in 25 years. The ratio
390 million / 2^3000000 is about 2^−2999972 . That is, a human would have to have a
youth lifetime of about 25 × 2^2999972 ≈ 2^2999977 years in order to learn a statistically
defendable world model.
Still, the human brain (and the human around it, with the world around the
human around that brain) somehow can do it in 25 years. Cognitive scientists
believe that the key to this magic lies in the evolutionary history of man. Through
millions of years of incremental, evolutionary brain structure optimization, starting
from worm brains or earlier, the human brain is pre-structured in exactly such ways
that it comes with a built-in-by-birth data manifold geometry which reduces the 3
million-dimensional raw sensor data format to a much lower-dimensional manifold
surface. Then, 390 million data points may be enough to cover this manifold
densely enough for meaningful distribution estimates.
The question which built-in experience pre-shapings a human brings to the
table at birth time has a long tradition in philosophy and psychology. A recent
line of work that brings this tradition to bear on machine learning is in the research
of Joshua Tenenbaum – check out, for instance, Tenenbaum, Griffiths, and Kemp
2006 if you are interested.
Figure 6: How blackbox models work. For explanation see text.
An interesting and very relevant modeling task is to model the earth’s atmo-
sphere for weather forecasting. This modeling problem has been intensely worked
on for many decades, and an interesting mix of analytical and blackbox methods
marks the state of the art. If there is time and interest I will expand on this in
the tutorial session.
I conclude this section with a tale from my professional life which nicely illus-
trates the difference between analytical and blackbox modeling. I was once called
to consult a company in the chemical industry. They wanted a machine learning
solution for the following problem. In one of their factories they had built a pro-
duction line whose output was a certain artificial resin. Imagine a large hall full
of vessels, heaters, pumps and valves and tubes. The production process of that
resin was prone to a particularly nasty possible failure: if the process was not
controlled correctly, some intermediate products might solidify. The affected
vessels, tubes and valves would be blocked and could not be cleared – making it
necessary to disassemble the facility and replace the congested parts with new ones. Very
expensive! The chemical engineers had heard of the magic of neural networks
and wanted one of these, which should give them early warnings if the production
process was in danger of drifting into the danger zone. I told them that this was
(maybe) possible if they could provide training data. What training data, please?
Well, in order to predict failure, a neural network needs examples of this failure.
So, could the engineers please run the facility through a reasonably large number
of solidification accidents, a few hundreds maybe for good statistics? Obviously,
that was that. Only analytical modeling would do here. A good analytical model
would be able to predict any kind of imminent solidification situation. But that
wasn’t an option either because the entire production process was too complex for
an accurate enough analytical model. Now put yourself into the skin of the respon-
sible chief engineer. What should he/she do to prevent the dreaded solidification
to happen, ever? Another nice discussion item for our tutorial session.
I have a brother, Manfred Jaeger, who is a machine learning professor at Aal-
borg university (http://people.cs.aau.dk/~jaeger/). We naturally often talk
with each other, but never about ML because I wouldn’t understand what he is
doing and vice versa. We have never met at scientific conferences because we
attend different ones, and we publish in different journals.
The leading metaphors of ML have changed over the few decades of the field’s
existence. The main shift, as I see it, was from “cognitive modeling” to “statistical
modeling”. In the 1970-1980s, a main research motif/metaphor in ML (which was
hardly named like that then) was to mimic human learning on computers, which
connected ML to AI, cognitive science and neuroscience. While these connections
persist to the day, the mainstream self-perception of the field today is to view
it very soberly as the craft of estimating complex probability distributions with
efficient algorithms and powerful computers.
My personal map of the ML landscape divides it into four main segments with
distinct academic communities, research goals and methods:
Segment 1: Theoretical ML. Here one asks what are the fundamental possi-
bilities and limitations of inferring knowledge from observation data. This
is the most abstract and “pure maths” strand of ML. There are cross-
connections to the theory of computational complexity. Practical applicabil-
ity of results and efficient algorithms are secondary. Check out https://en.
wikipedia.org/wiki/Computational_learning_theory and https://en.
wikipedia.org/wiki/Statistical_learning_theory for an impression of
this line of research.
to a particular species – first, because one usually doesn’t dig out a com-
plete skeleton, and second because extinct species are not known in the first
place. The field is plagued by misclassifications and terminological uncer-
tainties – often a newly found set of bones is believed to belong to a newly
discovered species, for which a new name is created, although in reality other
fossil findings already named differently belong to the same species. In the
PaleoDeepDive project, the web was crawled to retrieve virtually all scien-
tific pdf documents relating to paleontology – including documents that had
been published in pre-digital times and were just image scans. Using optical
character recognition and image analysis methods at the front end, these
documents were made machine readable, including information contained in
tables and images. Then, unsupervised, logic-based methods were used to
identify suspects for double naming of the same species, and also the oppo-
site: single names for distinct species – an important contribution to purge
the evolutionary tree of the animal kingdom.
Segment 3: Signal and pattern modeling. This is the most diverse sector in
my private partition of ML and it is difficult to characterize globally. The
basic attitude here is one of quantitative-numerical blackbox modeling. Our
TICS demo would go here. The raw data are mostly numerical (like physical
measurement timeseries, audio signals, images and video). When they are
symbolic (texts in particular), one of the first processing steps typically en-
codes symbols to some numerical vector format. Neural networks are widely
used and there are some connections to computational neuroscience. The
general goal is to distil from raw data a numerical representation (often
implicit) of the data distribution which lends itself to efficient application
purposes, like pattern classification, time series prediction, or motor control, to
name a few. Human-user interpretability of the distribution representation
is not easy to attain, but has lately become an important subject of research.
Like Segment 2, this field is decidedly application-driven. Under the catch-
word “deep learning” a subfield of this area has recently received a lot of
attention.
tive science, the cognitive neurosciences, AI, robotics, artificial life, ethology,
and philosophy.
It is hard to judge how “big” these four segments are in mutual comparison.
Surely Segment 1 receives much less funding and is pursued by substantially fewer
researchers than segments 2 and 3. In this material sense, segments 2 and 3 are
both “big”. Segment 4 is bigger than Segment 1 but smaller than 2 or 3. My own
research lies in 3 and 4. In this course I focus on the third segment — you should
be aware that you only get a partial glimpse of ML.
A common subdivision of ML, partly orthogonal to my private 4-section par-
tition, is based on three fundamental kinds of learning tasks:
Unsupervised learning. Training data are just data points xn . The task is to
discover some kind of “structure” (regularities, symmetries, redundancies...)
in the data distribution which can be used to create a compressed repre-
sentation of the data. Unsupervised learning can become very challenging
when data points are high-dimensional and/or when the distribution has a
complex shape. Unsupervised learning is often used for dimension reduction.
The result of an unsupervised learning process is a dimension-reducing, en-
coding function e which takes high-dimensional data points x as inputs and
returns low-dimensional encodings e(x). This encoding should preserve most
of the information contained in the original inputs x. That is, there should
also exist a decoding function d which takes encodings e(x) as inputs and
transforms them back to the high-dimensional format of x. The overall loss
in the encoding-decoding process should be small, that is, one wishes to
obtain x ≈ d(e(x)). A discovery of underlying rules and regularities is the
typical goal for data mining applications, hence unsupervised learning is the
main mode for Segment 2 from my private dissection of ML. Unsupervised
methods are often used for data preprocessing in other ML scenarios, be-
cause most ML techniques suffer from curse of dimensionality effects and
work better with dimension-reduced input data.
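A very small concrete instance of such an encoder/decoder pair is linear dimension reduction by principal component analysis (PCA). The sketch below is my own minimal illustration, not the method of any particular system mentioned in these notes: e(x) projects onto the top m principal directions, d maps the code back, and one checks that x ≈ d(e(x)).

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "high-dimensional" data: 500 points in R^20 that lie close to a 3-dim subspace.
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 20))
X = latent @ mixing + 0.01 * rng.normal(size=(500, 20))

mean = X.mean(axis=0)
# Principal directions from the SVD of the centered data matrix.
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
m = 3
W = Vt[:m]                       # top-m principal directions, shape (3, 20)

def e(x):                        # encoder: R^20 -> R^3
    return (x - mean) @ W.T

def d(code):                     # decoder: R^3 -> R^20
    return code @ W + mean

x = X[0]
print(np.linalg.norm(x - d(e(x))))   # small reconstruction error: x is close to d(e(x))
```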
Reinforcement learning. The set-up for reinforcement learning (RL) is quite
distinct from the above two. It is always related to an agent that can choose
between different actions which in turn change the state of the environment
the agent is in, and furthermore the agent may or may not receive rewards
in certain environment states. RL thus involves at least the following three
types of random variables:
• action random variables A,
• world state random variables S,
• reward random variables R.
In most cases the agent is modeled as a stochastic process: a temporal
sequence of actions A1 , A2 , . . . leads to a sequence of world states S1 , S2 , . . .,
which are associated with rewards R1 , R2 , . . .. The objective of RL is to
learn a strategy (called policy in RL) for choosing actions that maximize
the reward accumulated over time. Mathematically, a policy is a conditional
distribution of the kind

P (An+1 | A1 , S1 , . . . , An , Sn ),

that is, the next action is chosen on the basis of the “lifetime experience” of
previous actions and the resulting world states. RL is naturally connected to
my Segment 4. Furthermore there are strong ties to neuroscience, because
neuroscientists have reason to believe that individual neurons in a brain can
adapt their functioning on the basis of neural or hormonal reward signals.
Last but not least, RL has intimate mathematical connections to a classical
subfield of control engineering called optimal control, where the (engineer-
ing) objective is to steer some system in a way that some long-term objective
is optimized. An advanced textbook example is to steer an interplanetary
missile from earth to some other planet such that fuel consumption is min-
imized. Actions here are navigation manoeuvres, the (negative) reward is
fuel consumption, the world state is the missile’s position and velocity in
interplanetary space.
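To make the three kinds of random variables concrete, here is a toy rollout loop in Python (my own illustration, not a training algorithm): a stochastic policy picks actions An, the environment responds with states Sn and rewards Rn, and an RL method would then adjust the policy so as to increase the accumulated reward. The environment dynamics and the reward rule are arbitrary made-up choices.

```python
import random

random.seed(0)
actions = ["left", "right"]

def environment(state, action):
    """Toy world: 'right' tends to move the state up; the reward is the new state value."""
    step = 1 if action == "right" else -1
    new_state = max(0, min(10, state + step + random.choice([-1, 0, 1])))
    return new_state, float(new_state)

def policy(history):
    """A conditional distribution over the next action given the lifetime experience
    of previous actions and states (here: simply uniform over the two actions)."""
    return random.choice(actions)

state, history, total_reward = 5, [], 0.0
for n in range(20):
    a = policy(history)
    state, r = environment(state, a)
    history.append((a, state, r))          # the triple (A_n, S_n, R_n)
    total_reward += r
print(total_reward)   # RL would tune `policy` to make this accumulated reward large
```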
advanced ML applications often make use of semi-supervised training schemes. In
such approaches, the original task is supervised: learn some input-output model
from labelled data (xn , yn ). This learning task may strongly benefit from including
additional unlabelled input training points x̃m , helping to distil a more detailed
model of the input distribution PX than would be possible on the basis of only
the labelled xn data. Again, TICS is an example: the data engineers who trained
TICS used 70K un-captioned images in addition to the 30K captioned images to
identify that 4096-dimensional manifold more accurately.
Also, reinforcement learning is not independent of supervised and unsuper-
vised learning. A good RL scheme often involves supervised or unsupervised
learning subroutines. For instance, an agent trying to find a good policy will ben-
efit from data compression (= unsupervised learning) when the world states are
high-dimensional; and an agent will be more capable of choosing good actions if
it possesses an input-output model (= supervised learning) of the environment —
inputs are actions, outputs are next states.
2 Decision trees and random forests
This section describes a class of machine learning models which is classical and
simple and intuitive and useful. Decision trees and random forests are not super
“fashionable” in these deep learning times, but practitioners in data analysis use
them on a daily basis. The main inventions in this field have been made around
1980-2000. In this chapter I rely heavily on the decision tree chapters in the
classical ML textbooks of Mitchell 1997 and Duda, P. E. Hart, and Stork 2001,
and for the random forest part my source is the landmark paper by Breiman 2001
which, as per today (Nov 15, 2021), has been cited 81000 times (Google Scholar).
Good stuff to know, apparently.
Note: In this chapter of the lecture notes I will largely adhere to the notation
used in Breiman 2001 in order to make it easier for you to read that key paper
if you want to dig deeper. If in your professional future you want to use decision
tree methods, you will invariably use them in random forests, and in order to
understand what you are actually doing, you will have to read Breiman 2001
(like hundreds of thousands before you). Unfortunately, the notation used by
Breiman is inconsistent, which makes the mathematical part of that paper hard
to understand. My hunch is that most readers skipped the two mathy sections and
read only the experimental sections with the concrete helpful hints for algorithm
design and gorgeous results. Inconsistent or even plainly incorrect mathematical
notation (and mathematical thinking) happens a lot in “engineering applied math”
papers and makes it difficult to really understand what the author wants to say (see
my ramblings in my lecture notes on “Principles of Statistical Modeling”, Chapter
14, “A note on what you find in textbooks”, online at https://www.ai.rug.nl/
minds/uploads/LN_PSM.pdf). Therefore I will not use Breiman’s notation 1-1,
but modify it a little to make it mathematically more consistent.
• Start at the root node of the tree. The root node is labeled with a property
that fruit have (color? is the root property used in the figure).
• Underneath the root node you find child nodes, one for each of the three
possible color attributes green, yellow, red. Decide which color your fruit
has and proceed to that child node. Let us assume that your fruit was
yellow. Then you are now at the child node labelled shape?.
• Continue this game of moving downwards in the tree, taking at each node the
direction that corresponds to the attribute of the fruit in your hand. If you
reach a leaf node of this tree, it will be labeled with the type of your fruit. If
your fruit is yellow, round and small, it’s a lemon!
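In code, the finished classifier is nothing more than a nested lookup. The tree below is a hypothetical Python fragment consistent with the walk-through above (yellow, round and small leads to “lemon”); the actual tree in Figure 7 may branch differently, and the other leaf labels are illustrative guesses.

```python
# Hypothetical fragment of the fruit decision tree from Figure 7.
tree = {
    "property": "color?",
    "children": {
        "yellow": {
            "property": "shape?",
            "children": {
                "round": {
                    "property": "size?",
                    "children": {"small": {"label": "lemon"},        # as in the text
                                 "big":   {"label": "grapefruit"}},  # illustrative guess
                },
                "elongated": {"label": "banana"},                    # illustrative guess
            },
        },
        # ... branches for "green" and "red" omitted ...
    },
}

def classify(node, fruit):
    """Walk from the root to a leaf, following the fruit's attribute at each node."""
    while "label" not in node:
        attribute = fruit[node["property"]]
        node = node["children"][attribute]
    return node["label"]

print(classify(tree, {"color?": "yellow", "shape?": "round", "size?": "small"}))  # lemon
```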
Figure 8: A binary version of the decision tree shown in Figure 7. A tree with
arbitrary branching factor at different nodes can always be represented by a
functionally equivalent binary tree, that is, one having branching factor B = 2
throughout; by convention the “yes” branch is on the left, the “no” branch on the
right. This binary tree contains the same information and implements the same
classification as the tree in Figure 7. Taken from Duda, P. E. Hart, and Stork 2001.
Here we get a first inkling that decision tree learning might not be as trivial as
the final result in Figure 7 makes it appear. The above fruit training data table has
missing values (marked by “?”); not all properties from the training data are used
in the decision tree (property “Weight” is ignored); some examples of the same
class have different attribute vectors (first two rows give different characterizations
of watermelons); some identical attribute vectors have different classes (rows 3
and 4). In summary: real-world training data will be partly redundant, partly
inconsistent, and will contain errors and gaps. All of this points in the same
direction: statistical analyses will be needed to learn decision trees.
Figure 9: Another decision tree. The right image shows a page from this botany
field guide. What you see on this page is, in fact, a small section of a large binary
classification tree. Image source: iberlibro.com, booklooker.de.
this course: in a rigorous “true math” version, and in an intuitive “makes some
sense” version.
I called the fruit attribute data table above a sample. Samples are mathemat-
ical objects of key importance in statistics and machine learning (where they are
also called “training data”). Samples are always connected with random variables
(RVs). Here is how.
First, the intuitive version. As an empirical fruit scientist, you would obtain a
“random draw” to get the training data table in a very concrete way: you would
go to the fruit market, collect 3000 fruit “at random”, observe and note down
their color, size, taste, weight and shape attributes in an Excel table, and for each
of the 3000 fruit you also ask the fruit vendor for the name of the fruit to get an
almost but not quite surely correct class label which you also note down in the
last column of the table.
The mathematical representation of observing and noting down the attributes
of the i-th fruit that you have picked (where i = 1, . . . , 3000) is Xi (ω). Xi is the
random variable which is the mathematical model of the observation procedure
– Xi could be described as the procedure “pick a random fruit and observe and
report the attribute vector”. The ω in Xi (ω) is the occasion when you actually
executed the procedure Xi – say, ω stands for the concrete data collection event
when you went to the Vismarkt in Groningen last Tuesday and visited the fruit
stands. If the entire procedure of collecting 3000 fruit specimen is executed on
another occasion – for instance, a week earlier, or by your friend on the same day –
this would be mathematically represented by another ω. For instance, Xi (ω) might
be the attribute vector that you observed for the i-th fruit last Tuesday, Xi (ω ′ )
would be the attribute vector that you observed for the i-th fruit one week earlier,
Xi (ω ′′ ) would be the attribute vector that your friend observed for the i-th last
Tuesday when he did the whole thing in parallel with you, etc. In mathematical
terminology, these “observation occasions” ω are called elementary events.
Similarly, Yi (ω) is the fruit class name that you were told by the vendor when
you did the data sampling last Tuesday; Yi (ω ′ ) would be the ith fruit name you
were told when you did the whole exercise a week earlier already, etc.
The three thousand pairs (Xi (ω), Yi (ω)) correspond to the rows in the data
table. I will call each of these (Xi (ω), Yi (ω)) a data point. To get the entire
table as a single mathematical object – namely, the sample – one combines these
single fruit data points by a product operation, obtaining (X(ω), Y(ω)), where
X = ⊗i=1,...,N Xi and Y = ⊗i=1,...,N Yi .
And here is the rigorous probability theory account of the (X(ω), Y(ω)) no-
tation. As always in probability theory and statistics, we have an underlying
probability space (Ω, A, P ). In this structure, Ω is the set of all possible elemen-
tary events ω ∈ Ω. A is a subset structure imposed on the set Ω called a σ-field,
and P is a probability measure on (Ω, A). The random variable X is a function
which returns samples, that is collections of N training data points. We assume
that all samples which could be drawn have N data points; this assumption is a
matter of convenience and simplifies the ensuing notation and mathematical anal-
ysis a lot. – It is a good exercise to formalize the structure of a sample in more
detail. A single data point consists of an attribute vector x and a class label y.
In formal notation, we have m properties Q1 , . . . , Qm , and property Qj has a set
Aj of possible attribute values. Thus, x ∈ A1 × . . . × Am . We denote the set of
possible class labels by C. A data point (Xi (ω), Yi (ω)) is thus an element of the
set (A1 × . . . × Am × C). See Appendix A for the mathematical notation used
in creating product data structures. We denote the sample space of any random
variable Z by SZ . The sample space for Xi is thus SXi = A1 × . . . × Am and for
Yi it is SYi = C. Because SXi = SXj , SYi = SYj for all 1 ≤ i, j ≤ N , for simplicity
we also write SX for A1 × . . . × Am and SY for C.
The entire training data table is an element of the sample space SX⊗Y of the
product RV X ⊗ Y = (⊗i=1,...,N Xi ) ⊗ (⊗i=1,...,N Yi ). Just to exercise formalization
skills: do you see that SX⊗Y = (A1 × . . . × Am )^N × C^N ?
To round off our rehearsal of elementary concepts and notation, I repeat the
basic connections between random variables, probability spaces and sample spaces:
a random variable Z always comes with an underlying probability space (Ω, A, P )
and a sample space SZ . The RV is a function Z : Ω → SZ , and induces a probability
distribution on SZ . This distribution is denoted by PZ .
I torture you with these exercises in notation like a Russian piano teacher will
torture his students with technical finger-exercising études. It is technical and
mechanical and the suffering student may not happily appreciate the necessity of
it all, – and yet this torture is a precondition for becoming a virtuoso. The music
of machine learning is played in tunes of probability, no escape.
I am aware that, if you did not know these probability concepts before, this
condensed rehearsal cannot possibly be understandable. Probability theory is
one of the most difficult to learn sectors of mathematics and it takes weeks of
digesting and exercising to embrace the concepts of probability spaces, random
variables and sample spaces (not to speak of σ-fields). I will give a condensed
probability basics tutorial in the tutorial sessions accompanying this course if
there is a demand. Besides that I want to recommend my lecture notes “Principles
of Statistical Modeling” which I mentioned before. They come from a graduate
course whose purpose was to give a slow, detailed, understandable, yet fully correct
introduction to the concepts of probability theory and statistics.
In statistics, such functions h : SX → SY are generally called decision functions
(also in other supervised learning settings that do not involve decision trees), a
term that I will also sometimes use.
Decision tree learning is an instance of the general case of supervised learning:
the training data are labeled pairs (xi , yi )i=1,...,N , with xi ∈ SX , yi ∈ SY , and the
learning aims at finding a decision function h : SX → SY which is “optimal” in
some sense.
In what sense can such a function h : SX → SY (in this section: such a decision
tree) be “optimal”? How can one quantify the “goodness” of functions of the type
h : SX → SY ?
This is a very non-trivial question as we will learn to appreciate as the course
goes on. For supervised learning tasks, the key to optimizing a learning procedure
is to declare a loss function at the very outset of a learning project. A loss function
is a function
L : SY × SY → R≥0 , (1)
which assigns a nonnegative real number to a pair of class labels. The loss function
is used to compare the classification h(x), returned by a decision function h on
input x, with the correct value. After having learnt a decision function h from a
training sample (xi , yi )i=1,...,N , one can quantify the performance of h averaged over
the training data, obtaining a quantity Remp that is called the empirical risk:

Remp (h) = (1/N) Σ_{i=1,...,N} L(h(xi ), yi ).    (2)
Most supervised machine learning procedures are built around some optimiza-
tion procedure which searches for decision functions which minimize the empirical
risk, that is, which try to give a small average loss on the training data.
I emphasize that minimizing the empirical loss (or in another common word-
ing, minimizing the “training error”) by a clever learning algorithm will usually
not lead to a very useful decision function. This is due to the problem of overfit-
ting. “Overfitting” means, intuitively speaking, that if the learning algorithm tries
everything to be good (= small loss) on the training data, it will attempt to min-
imize the loss individually for each training data point. This means, in some way,
to learn the training data “by heart” – encode an exact memory of the training
data points in the learnt decision function h. This is not a good idea because later
“test” data points will be different, and the decision function obtained by “rote
learning” will not know how to generalize to the new data points. The ultimate
goal of supervised machine learning is not to minimize the empirical loss (that
is, to have small loss on the training data), but to minimize the loss on future
“test” data which were not available for training. Thus, the central optimization
criterion for supervised learning methods is to find decision functions h which have
a small risk
R(h) = E[L(h(X), Y )], (3)
where E is the statistical expectation operator (see Appendix D). That is, a good
decision function h should incur a small expected loss – it should have small “test-
ing error”. At a later point in this course we will analyse the problem of avoiding
overfitting in more detail. For now it is enough to be aware that overfitting is a
very serious threat, and that in order to avoid it one should not allow the models
h to dress themselves too closely around the training data points.
For a classification task with a finite number of class labels, like in our fruit
example, a natural loss function is one that simply counts misclassifications:
Lcount (h(x), y) = 0 if h(x) = y, and 1 if h(x) ≠ y.    (4)
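The counting loss and the empirical risk of Equation (2) are only a few lines of code. This sketch uses a trivial made-up training sample and a trivial stand-in decision function h, just to show how Remp is computed on labelled pairs.

```python
def counting_loss(predicted, correct):
    """Equation (4): 0 for a correct classification, 1 for a misclassification."""
    return 0 if predicted == correct else 1

def empirical_risk(h, samples, loss=counting_loss):
    """Equation (2): average loss of decision function h over the training sample."""
    return sum(loss(h(x), y) for x, y in samples) / len(samples)

# Toy training sample (x_i, y_i) and a trivial stand-in decision function.
train = [({"color?": "yellow"}, "lemon"),
         ({"color?": "green"},  "watermelon"),
         ({"color?": "red"},    "apple")]
h = lambda x: "lemon" if x["color?"] == "yellow" else "apple"
print(empirical_risk(h, train))   # 1/3: one of the three training points is misclassified
```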
This counting loss is often a natural choice, but in many situations it is not
appropriate. Consider, for instance, medical diagnostic decision making. Assume
you visit a doctor with some vague complaints. Now compare two scenarios.
• Scenario 1: after doing his diagnostics the doctor says, “sorry to tell you but
you have cancer and you should think about making your will” – and this is
a wrong diagnosis; in fact you are quite healthy.
• Scenario 2: after doing his diagnostics the doctor says, “good news, old boy:
there’s nothing wrong with you except you ate too much yesterday” – and
this is a wrong diagnosis; in fact you have intestinal cancer.
These two errors are called the “false positive” and the “false negative” deci-
sions. A simple counting loss would optimize medical decision making such that
the average number of any sort of error is minimized, regardless whether it is a false
negative or a false positive. But in medicine, false negatives should be avoided as
much as possible because their consequences can be lethal, whereas false positives
will only cause a passing anxiety and inconvenience. Accordingly, a loss function
used to optimize medical decision making should put a higher penalty (= larger
L values) on false negatives than on false positives.
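A weighted variant of the counting loss expresses this asymmetry directly; the penalty factor 20 below is an arbitrary illustrative choice, not a medically calibrated number.

```python
def medical_loss(predicted, correct):
    """Penalize false negatives (saying 'healthy' when the patient is ill) more heavily."""
    if predicted == correct:
        return 0.0
    if predicted == "healthy" and correct == "ill":
        return 20.0   # false negative: potentially lethal, large penalty (arbitrary factor)
    return 1.0        # false positive: anxiety and inconvenience, small penalty
```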
Generally, and specifically so in operations research and decision support sys-
tems, loss functions can become very involved, including a careful balancing of
conflicting ethical, financial or other factors. However, we will not consider com-
plex loss functions here and stick to the simple counting loss. This loss is the
one that guided the design of the classical decision tree learning algorithms. And
now, finally, we have set the stage for actually discussing learning algorithms for
decision trees!
optimal (more realistically: a rather good) decision tree hopt which will minimize
misclassifications on new “test” data.
In order to distil this hopt from the training data, we need to set up a learning
algorithm which, on input (xi , yi )i=1,...,N , outputs hopt .
All known learning algorithms for decision trees incrementally build hopt from
the root node downwards. The first thing a decision tree learning algorithm
(DTLA) must do is therefore to decide which property is queried first, making
it the root node.
To understand the following procedures, notice that a decision tree iteratively
splits the training dataset in increasingly smaller, disjoint subsets. The root node
can be associated with the entire training dataset – call it D. If the root node νroot
queries the property Qj and this property has kj attributes aj1 , . . . , ajkj , there will
be kj child nodes ν1 , . . . , νkj , where node νl covers all training datapoints
that have attribute ajl ; and so forth down the tree. In detail: if νl1 l2 ···lr is a tree
node at level r (counting from the root), and this node is associated with the
subset Dl1 l2 ···lr of the training data, and this node queries property Qu which has
attributes au1 , . . . , auku , then the child node νl1 l2 ···lr ls will be that subset of Dl1 l2 ···lr
which contains those training data points that have attribute aus of property Qu .
The classical solution to this problem of selecting a property Q for the root
node is to choose that property which leads to a “maximally informative” split of
the training dataset in the first child node level. Intuitively, the data point subsets
associated with the child nodes should be as “pure” as possible with respect to
the classes c ∈ C.
In the best of all cases, if there are q different classes (that is, |C| = q), there
would be a property Qsupermagic with q attributes which already uniquely identify
classes, such that each first-level child node is already associated with a “pure”
set of training examples all from the same class – say, the first child node covers
only apples, the second only bananas, etc.
This will usually not be possible, among other reasons because normally there
is no property with exactly as many attributes as there are classes. Thus, “purity”
of data point sets associated with child nodes needs to be measured in a way that
tolerates class mixtures in each node. The measure that is traditionally invoked
in this situation comes from information theory. It is the entropy of the class
distribution within a child node. If Dl is the set of training points associated with
a child node νl , and n_i^l is the number of data points in Dl that are from class i
(where i = 1, . . . , q), and the total size of Dl is n^l , then the entropy Sl of the “class
mixture” in Dl is given by

Sl = − Σ_{i=1,...,q} (n_i^l / n^l) log2 (n_i^l / n^l).    (5)

If in this sum a term n_i^l / n^l happens to be zero, by convention the product
(n_i^l / n^l) log2 (n_i^l / n^l) is set to zero, too. I quickly rehearse two properties of entropy which
are relevant here:
• The more “mixed” the set Dl , the higher the entropy Sl . In one extreme
case, Dl is 100% clean, that is, it contains only data points of a single class.
Then Sl = 0. The other extreme is the greatest possible degree of mixing,
which occurs when there is an equal number of data points from each class in
Dl . Then Sl attains its maximal possible value of −q · (1/q) log2 (1/q) = log2 q.
When Sl is zero, the set Dl is “pure” — it contains examples of only one class.
The larger Sl , the less pure Dl . This has led to the terminology to call the entropy
measure Sl an impurity measure.
The entropy of the root node is

Sroot = − Σ_{i=1,...,q} (n_i^root / N) log2 (n_i^root / N),

where n_i^root is the number of examples of class i in the total training data set D
and N is the number of training data points. Following the terminology of Duda,
P. E. Hart, and Stork 2001, I will denote the impurity measure of the root by
ientropy (νroot ) := Sroot .
More generally, if ν is a child node of the root, and this node is associated with
the training data point set Dν , and |Dν | = n, and the q classes are represented in
Dν by subsets of size n1 , . . . , nq , the entropy impurity of ν is given by

ientropy (ν) = − Σ_{i=1,...,q} (ni / n) log2 (ni / n).    (6)
If the root node ν_root queries the property Q_j and this property has k attributes, then the mixing of classes averaged over all child nodes ν_1, . . . , ν_k is given by

\sum_{l=1,\ldots,k} \frac{|D_l|}{N} \, i_entropy(ν_l) ,

and the information gain achieved at the root by querying Q_j is the resulting reduction of impurity,

Δi_entropy(ν_root, Q_j) = i_entropy(ν_root) - \sum_{l=1,\ldots,k} \frac{|D_l|}{N} \, i_entropy(ν_l) . \qquad (7)
For any other node ν labeled with property Q, where Q has k attributes, and
the size of the set associated with ν is n and the sizes of the sets associated with
the k child nodes ν1 , . . . , νk are n1 , . . . , nk , the information gain for node ν is
Δi_entropy(ν, Q) = i_entropy(ν) - \sum_{l=1,\ldots,k} \frac{n_l}{n} \, i_entropy(ν_l) . \qquad (8)
The procedure to choose a property for the root node is to compute the infor-
mation gain for all properties and select the one which maximizes the information
gain.
This procedure of choosing that query property which leads to the greatest
information gain is repeated tree-downwards as the tree is grown by the DTLA.
A node is not further expanded and thus becomes a leaf node if (i) either the
training dataset associated with this node is 100% pure (contains only examples
from a single class), or if (ii) one has reached the level q, that is, all available
properties have been queried on the path from the root to that node. In case (i)
the leaf is labeled with the unique class of its associated data point set. In case
(ii) it is labeled with the class that has the largest number of representatives in
the associated data point set.
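To make the greedy growing procedure concrete, here is a minimal Python sketch (my own illustration, not taken from any of the cited textbooks or toolboxes; all function names and the toy data are made up). It grows a tree on categorical data using the entropy impurity and the information gain (8):

from collections import Counter
from math import log2

def entropy(labels):
    """Entropy of the class distribution in a list of class labels, cf. (5)/(6)."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values() if c > 0)

def info_gain(data, labels, prop):
    """Information gain, cf. (8), of splitting (data, labels) on property `prop`."""
    n = len(labels)
    gain = entropy(labels)
    for attr in set(d[prop] for d in data):
        sub = [y for d, y in zip(data, labels) if d[prop] == attr]
        gain -= len(sub) / n * entropy(sub)
    return gain

def grow_tree(data, labels, properties):
    """Greedy DTLA: returns a class label (leaf) or a pair (property, {attribute: subtree})."""
    if len(set(labels)) == 1:                       # case (i): node is 100% pure
        return labels[0]
    if not properties:                              # case (ii): all properties queried
        return Counter(labels).most_common(1)[0][0]
    best = max(properties, key=lambda p: info_gain(data, labels, p))
    children = {}
    for attr in set(d[best] for d in data):
        idx = [i for i, d in enumerate(data) if d[best] == attr]
        children[attr] = grow_tree([data[i] for i in idx],
                                   [labels[i] for i in idx],
                                   [p for p in properties if p != best])
    return (best, children)

# toy usage: fruit described by two categorical properties
data = [{"color": "red", "size": "small"}, {"color": "yellow", "size": "long"},
        {"color": "red", "size": "small"}, {"color": "yellow", "size": "long"}]
labels = ["apple", "banana", "apple", "banana"]
print(grow_tree(data, labels, ["color", "size"]))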
Information gain is the most popular and historically first criterion used for
determining the query property of a node. But other criteria are used too. I
mention three.
The first is a normalized version of the information gain. The motivation is
that the information gain criterion (8) favours properties that have more attributes
over properties with fewer attributes. This requires a little explanation. Generally
speaking, properties with many attributes statistically tend to lead to purer data
point sets in the children nodes than properties with only few attributes. To see
this, compare the extreme cases of a property with only a single attribute, which
will lead to a zero information gain, with the extreme case of a property that has
many more attributes than there are data points in the training dataset, which
will (statistically) lead to many children node data point sets which contain only
a single example, that is, they are 100% pure. A split of the data point set Dν
into a large number of very small subsets is undesirable because it is a door-
opener for overfitting. This preference for properties with more attributes can be
compensated if the information gain is normalized by the entropy of the data set
splitting, leading to the information gain ratio criterion, which here is given for
the root node:
Δ_{ratio} i_entropy = \frac{Δi_entropy(ν_root, Q_j)}{-\sum_{l=1,\ldots,k} \frac{|D_l|}{N} \log_2 \frac{|D_l|}{N}} . \qquad (9)
The second alternative that I mention measures the impurity of a node by its
Gini impurity, which for a node ν associated with set Dν , where |Dν | = n and the
subsets of points in Dν of classes 1, . . . , q have sizes n1 , . . . , nq , is
i_Gini(ν) = \sum_{1 \le i, j \le q; \, i \ne j} \frac{n_i}{n} \frac{n_j}{n} = 1 - \sum_{1 \le i \le q} \left( \frac{n_i}{n} \right)^2 ,
which is the error rate when a category decision for a randomly picked point in
Dν is made randomly according to the class distribution within Dν .
The third alternative is called the misclassification impurity in Duda, P. E.
Hart, and Stork 2001 and is given by
i_misclass(ν) = 1 - \max \{ \, n_l / n \mid l = 1, \ldots, q \, \} .
This impurity measure is the minimum (taken over all classes c) probability
that a point from Dν which is of class c is misclassified if the classification is
randomly done according to the class distribution in Dν .
Like the entropy impurity, the Gini and misclassification impurities are nonnegative and equal to zero if and only if the set D_ν is 100% pure. As with the entropy impurity, these other impurity measures can be used to choose a property for a node ν through the gain formula (8), plugging in the respective impurity measure for i_entropy.
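For completeness, here is a small sketch (again my own code, not from a library) of the two alternative impurity measures, written so that they can be plugged into the same gain computation in place of the entropy impurity:

from collections import Counter

def gini_impurity(labels):
    """Gini impurity of the class distribution in a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def misclass_impurity(labels):
    """Misclassification impurity: 1 minus the relative frequency of the majority class."""
    n = len(labels)
    return 1.0 - max(Counter(labels).values()) / n

def gain(data, labels, prop, impurity):
    """Gain of splitting on `prop`, i.e. formula (8) with an arbitrary impurity measure."""
    n = len(labels)
    g = impurity(labels)
    for attr in set(d[prop] for d in data):
        sub = [y for d, y in zip(data, labels) if d[prop] == attr]
        g -= len(sub) / n * impurity(sub)
    return g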
This concludes the presentation of the core DTLA. It is a greedy procedure
which incrementally constructs the tree, starting from its root, by always choos-
ing that property for the node currently being constructed which maximizes the
information gain (7) or one of the alternative impurity measures.
You will notice some unfortunate facts:
• We started from claiming that our goal is to learn a tree which minimizes
the count loss (or any other loss we might opt for). However, none of the
impurity measures and the associated local gain optimization is connected
in a mathematically transparent way with that loss. In fact, no computa-
tionally tractable method for learning an empirical-loss-minimal tree exists:
the problem is NP-complete (Hyafil and Rivest 1976).
• It is not clear which of the impurity measures is best for the goal of mini-
mizing the empirical loss.
• The core DTLA invites overfitting, because the tree is grown until the training data point sets associated with the leaf nodes are rather pure. A zero empirical loss is easily attained if the number of properties
and attributes is large. Then it will be the case that every training example has
a unique combination of attributes, which leads to 100% pure leaf nodes, and
every leaf having only a small set of training examples associated with it, maybe
singleton sets. Then one has zero misclassifications of the training examples. But,
intuitively speaking, the tree has just memorized the training set. If there is some
“noise” in the attributes of data points, new examples (not in the training set)
will likely have attribute combinations that lead the learnt decision tree on wrong
tracks.
We will learn to analyze and fight overfitting later in the course. At this point
I only point out that if a learnt tree T overfits, one can typically obtain from it
another tree T ′ which overfits less, by pruning T . Pruning a tree means to select
some internal nodes and delete all of their children. This makes intuitive sense:
the deeper one goes down in a tree learnt by the core algorithm, the more does
the branch reflect “individual” attribute combinations found in the training data.
To make this point particularly clear, consider a case where there are many
properties, but only one of them carries information relevant for classification,
while all others are purely random. For instance, for a classification of patients into
the two classes “healthy” and “has_cancer”, the binary property “has_increased_
leukocyte_count” carries classification relevant information, while the binary prop-
erty “given_name_starts_with_A” is entirely unconnected to the clinical status.
If there are enough such irrelevant properties to uniquely identify each patient in
the training sample, the core algorithm will (i) most likely find that the relevant
property “has_increased_leukocyte_count” leads to the greatest information gain
at the root and thus use it for the root decision, and (ii) subsequently generate
tree branches that lead to 100% pure leaf nodes. Zero training error and poor
generalization to new patients results. The best tree here would be the one that
only expands the root node once, exploiting the only relevant property.
With this insight in mind, there are two strategies to end up with trees that
are not too deep.
The first strategy is early stopping. One does not carry the core algorithm to
its end but at each node which one has created, one decides whether a further
expansion would lead to overfitting. A number of statistical criteria are known to
decide when it is time to stop expanding the tree; or one can use cross-validation
(explained in later chapters in these lecture notes). We do not discuss this further
here – the textbook by Duda explains a few of these statistical criteria. A problem
with early stopping is that it suffers from the horizon effect: if one stops early at
some node, a more fully expanded tree might exploit further properties that are,
in fact, relevant for classification refinement underneath the stopped node.
The second strategy is pruning. The tree is first fully built with the core
algorithm, then it is incrementally shortened by cutting away end sections of
branches. An advantage of pruning over early stopping is that it avoids the horizon
effect. Duda says (I am not a decision tree expert and can’t judge) that pruning
should be preferred over early stopping “in small problems” (whatever “small”
means).
Missing values. Real-world datasets will typically have missing values, that is,
not all training or test examples will have attribute values filled in for all
properties. Missing values require adjustments both in training (an adapted
version of impurity measures which accounts for missing values) and in test-
ing (if a test example leads to a node with property Q and the example
misses the required attribute for Q, the normal classification procedure would
abort). The Duda book dedicates a subsection to recovery algorithms.
All in all, there is a large number of design decisions to be made when setting
up a decision tree learning algorithm, and a whole universe of design criteria and
algorithmic sub-procedures is available in the literature. Some combinations of
such design decisions have led to final algorithms which have become branded with
names and are widely used. Two algorithms which are invariably cited (and which
are available in professional toolboxes) are the ID3 algorithm and its more complex
and higher-performing successor, the C4.5 algorithm. They were introduced
by the decision tree pioneer Ross Quinlan in 1986 and 1993, respectively. The
Duda book gives brief descriptions of these canonical algorithms.
Given the many heuristic design decisions involved, the tree that one actually obtains will most likely not
be “the optimal” tree which one might obtain by some other choice of options.
Furthermore, if one uses pruning or early stopping to fight overfitting, one will
likely end up with trees that, while not overfitting, are underfitting – that is,
they do not exploit all the information that is in the training data. In summary,
whatever one does, one is likely to obtain a decision tree that is significantly
sub-optimal.
This is a common situation in machine learning. There are only very few
machine learning techniques where one has full mathematical control over getting
the best possible model from a given training dataset, and decision tree learning
is not among them. Fortunately, there is also a common escape strategy for
minimizing the quality deficit inherent in most learning designs: ensemble methods.
The idea is to train a whole collection (called “ensemble”) of models (here: trees),
each of which is likely suboptimal (jargon: each model is obtained from a “weak
learner”), but if their results on a test example are diligently combined, the merged
result is much better than what one gets from each of the models in the ensemble.
It’s the idea of crowd intelligence.
“Ensemble methods” is an umbrella term for a wide range of techniques. They
differ, obviously, in the kind of the individual models, like for instance decision
trees vs. neural networks. Second, they vary in the methods of how one generates
diversity in the ensemble. Ensemble methods work well only to the extent that the
individual models in the ensemble probe different aspects in the training data –
they should look at the data from different angles, so to speak. Thirdly, ensemble
methods differ in the way how the results of the individual models are combined.
The most common way is majority voting — the final classification decision is the
one favoured by the majority of the individual models. A general theory for setting up an
ensemble learning scheme is not available – we are again thrown back to heuristics.
The Wikipedia articles on “Ensemble learning” and “Ensemble averaging (machine
learning)” give a condensed overview.
Obviously, ensemble methods can only be used when the computational cost
of training a single model is rather low. This is the case for decision tree learning.
Because training an individual tree will likely not give a competitive result, but
is cheap, it is common practice to train not a single decision tree but an entire
ensemble – which in the case of tree learning is called a random forest. In fact,
random forests can yield competitive results (for instance, compared to neural
networks) at moderate cost, and are therefore often used in practical applications.
The definitive reference on random forests is Breiman 2001. The paper has two
parts. In the first part, Breiman gives a mathematical analysis of why combin-
ing decision trees does not lead to overfitting, and derives an instructive upper
bound on the generalization error (= risk, see Section 2.3). These results were
in synchrony with mainstream theory work in other areas of machine learning at
the time and made this paper the theory anchor for random forests. However, I
personally find the math notation used by Breiman opaque and hard to penetrate,
and I am not sure how many readers could understand it. The second, larger part
of this paper describes several variants and extensions of random forest algorithms,
discusses their properties and benchmarks some of them against the leading clas-
sification learning algorithms of the time, with favourable outcomes. This part
is an easy read and the presented algorithms are not difficult to implement. My
hunch is that it is the second rather than the first part which led to the immense
impact of this paper.
Here I give a summary of the paper, in reverse order, starting with the practical
algorithm recommended by Breiman. After that I give an account of the most
conspicuous theoretic results in transparent standard probability notation.
In ensemble learning one must construct many different models in an auto-
mated fashion. One way to achieve this is to employ a stochastic learning algo-
rithm. A stochastic learning algorithm can be seen as a learning algorithm which
takes two arguments. The first argument is the training data D, the same as in
ordinary, non-stochastic learning algorithms. The second argument is a random
vector, denoted by Θ in Breiman’s paper, which is set to different random values
in each run of the algorithm. Θ can be seen as a vector of control parameters
in the algorithm; different random settings of these control parameters lead to
different outcomes of the learning algorithm although the training data are always
the same, namely D.
Breiman proposes two ways in order to make the tree learning stochastic:
Bagging. In each run of the learning algorithm, the training dataset is resampled
with replacement. That is, from D one creates a new training dataset D′
of the same size as D by randomly copying elements from D into D′ . This
is a general approach for ensemble learning, called bagging. The Wikipedia
article on “Bootstrap aggregating” gives an easy introduction if you are
interested in learning more.
Random feature selection is the term used by Breiman for a randomization
technique where, at each node whose query property has to be chosen dur-
ing tree growing, a small subset of all still unqueried properties is randomly
drawn as candidates for the query property of this node. The winning prop-
erty among them is determined by the information gain criterion.
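As a rough illustration of these two randomization mechanisms, here is a sketch of a random forest trainer. I use scikit-learn's DecisionTreeClassifier for the individual trees purely for convenience (an assumption on my part — any tree learner that supports per-node random feature subsets would do); its option max_features="sqrt" approximates Breiman's random feature selection, and the ensemble decision is taken by majority vote:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_forest(X, y, n_trees=100, seed=0):
    """X: numpy array (N, n_features), y: length-N label array."""
    rng = np.random.default_rng(seed)
    N = len(y)
    forest = []
    for _ in range(n_trees):
        idx = rng.integers(0, N, size=N)          # bagging: resample D with replacement
        tree = DecisionTreeClassifier(
            max_features="sqrt",                  # random feature subset at every node
            random_state=int(rng.integers(1_000_000)))
        tree.fit(X[idx], y[idx])
        forest.append(tree)
    return forest

def predict_forest(forest, X):
    votes = np.array([t.predict(X) for t in forest])   # shape (n_trees, n_points)
    maj = []
    for col in votes.T:                                  # majority vote per test point
        vals, counts = np.unique(col, return_counts=True)
        maj.append(vals[np.argmax(counts)])
    return np.array(maj)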
The theoretical centerpiece of the paper is an upper bound on the generalization error PE∗ of an (asymptotically infinitely large) random forest in terms of two quantities, the strength s of the trees and the mean correlation ϱ̄ between them (both made precise below):

PE∗ ≤ ϱ̄ (1 − s²)/s².
This theorem certainly added much to the impact of the paper, because it gives
an intuitive guidance for the proper design of random forests, and also because it
connected random decision tree forests to other machine learning methods which
were being mathematically explored at the time when the paper appeared. I will
now try to give a purely intuitive explanation, and then conclude this section with
a clean-math explanation of this theorem.
Note that the factor (1 − s²)/s² ranges between zero and infinity and increases
monotonically as s decreases. It is zero for maximal strength
s = 1, and it grows without bound as the strength approaches zero.
The theorem gives an upper bound on the generalization error observed in
(asymptotically infinitely large) random forests. The main message is that this
bound is a product of a factor ϱ̄ which is smaller when the different trees in a
forest vary more in their response to test inputs, with a factor (1 − s2 )/s2 which is
smaller when the trees in the forest place a larger probability gap between correct
and the second most common decision across the forest. The suggestive message
of all of this is that in designing a stochastic decision tree learning algorithm one
should
• aim at maximizing the response variability across the forest, while
• attempting to ensure that trees mostly come out with correct decisions.
If one succeeds in generating trees that always give correct decisions, one has
maximal strength s = 1 and the generalization error is obviously zero. This will
usually be impossible. Instead, the stochastic tree learning algorithm will produce
trees that have a residual error probability, that is, s < 1. Then the first factor
implies that one should aim for a stochastic tree generation mechanism whose
trees (at a given strength) show great variability in their response behavior.
The remainder of this section is only for those of you who are familiar with
probability theory, and this material will not be required in the final exam. I will
give a mathematically transparent account of Breiman’s Theorem 2.3. This boils
down to an exercise in clean mathematical formalism. We start by formulating the
ensemble learning scenario in rigorous probability terms. There are two underlying
probability spaces involved, one for generating data points, and the other for
generating the random vectors Θ. I will denote these two spaces by (Ω, A, P ) and
(Ω∗ , A∗ , P ∗ ) respectively, with the second one used for generating Θ. The elements
of Ω, Ω∗ will be denoted by ω, ω ∗ , respectively.
Let X be the RV which generates attribute vectors,

X : Ω → A_1 × . . . × A_m ,

and let Y be the RV which generates the class labels,

Y : Ω → C .

Furthermore, for j = 1, . . . , K let

Z_j : Ω∗ → T

be the RV which generates the j-th parameter vector, that is, Z_j(ω∗) = Θ_j, where T denotes the space of possible parameter vectors. The RVs Z_j (j = 1, . . . , K) are independent and identically distributed (iid).
In the remainder of this section, we fix some training dataset D. Let h(·, Θ)
denote the tree that is obtained by running the stochastic tree learning algorithm
with parameter vector Θ. This tree is a function which outputs a class decision if
the input is an attribute vector, that is,
h(x, Θ) ∈ C .

Composed with the data-generating RV X, the classification outcome

h(X, Θ) : Ω → C

is a random variable.
Now the stage is prepared to re-state Breiman’s central result (Theorem 2.3)
in a transparent notation.
Breiman starts by introducing a function mr, called margin function for a
random forest,
mr : (A_1 × . . . × A_m) × C → R ,
(x, y) ↦ P∗(h(x, Z) = y) − \max_{ỹ ≠ y} P∗(h(x, Z) = ỹ) ,

and from it derives the scalar quantity

s = E[mr(X, Y)] .
s is a measure of the extent to which, averaged over data point examples, the correct
answer probability (over all possible decision trees) exceeds the highest
probability of deciding for any other answer. Breiman calls s the strength of the
parametrized stochastic learning algorithm. The stronger the stochastic learning
algorithm, the greater the probability margin between correct and wrong classifi-
cations on average over data point examples.
Breiman furthermore introduces a raw margin function rmgΘ , which is a func-
tion of X and Y parametrized by Θ, through
rmg_Θ(ω) = \begin{cases} 1, & \text{if } h(X(ω), Θ) = Y(ω), \\ −1, & \text{if } h(X(ω), Θ) \text{ gives the maximally probable among the wrong answers in an asymptotically infinitely large forest,} \\ 0, & \text{else.} \end{cases} \qquad (10)
Define ϱ(Θ, Θ′) to be the correlation (= covariance normalized by division with standard deviations) between rmg_Θ and rmg_Θ′. For given Θ, Θ′, this is a number in [−1, 1]. Seen as a function of Θ, Θ′, ϱ maps each pair Θ, Θ′ into [−1, 1], which gives a RV which we denote with the same symbol ϱ for convenience,

ϱ(Z, Z′) : Ω∗ × Ω∗ → [−1, 1] ,

where Z, Z′ are two independent RVs with the same distribution as the Z_j.
Let
ϱ̄ = E[ϱ(Z, Z ′ )]
be the expected value of ϱ(Z, Z′). It measures to what extent, on average across
random choices of Θ, Θ′, the resulting two trees h(·, Θ) and h(·, Θ′) agree in their
correct or wrong decisions, averaged over data examples.
And here is, finally, Breiman’s Theorem 2.3:
PE∗ ≤ ϱ̄ (1 − s²)/s² , \qquad (11)
whose intuitive message I discussed earlier.
3 Elementary supervised temporal learning
In this section I give an introduction to a set of temporal data modeling meth-
ods which combine simplicity with broad practical usefulness: learning temporal
tasks by training a linear regression map that transforms input signal windows
to output data. In many scenarios, this simple technique is all one needs. It
can be programmed and executed in a few minutes (really!) and you should run
this technique as a first baseline whenever you start a serious learning task that
involves time series data.
This section deals with numerical timeseries where the data format for each
time point is a real-valued scalar or vector. This includes the majority of all
learning tasks that arise in the natural sciences, in engineering, robotics, speech
and in image processing.
Methods for dealing with symbolic timeseries (in particular texts, but also dis-
crete action sequences of intelligent agents / robots, DNA sequences and more) can
be obtained by encoding symbols into numerical vectors and then applying numeri-
cal methods. Often however one uses methods that operate on symbol sequences
directly (all kinds of discrete-state dynamical systems, deterministic or stochastic,
like finite automata, Markov chains, hidden Markov models, dynamical Bayesian
networks, and more). I will not consider such methods in this section.
My secret educational agenda in this section is that this will force you to
rehearse linear regression - which is such a basic, simple yet widely useful method
that everybody should have a totally 100% absolute unshakeable secure firm hold
on it.
In its basic form, linear regression assumes training data (x_i, y_i)_{i=1,...,N}, where x_i ∈ R^n and y_i ∈ R, and asks for a weight (row) vector w ∈ R^n which minimizes the summed squared error,

w = \operatorname{argmin}_{w^*} \sum_{i=1}^{N} (w^* x_i − y_i)^2 . \qquad (12)
In machine learning terminology, this is a supervised learning task, because
the training data points include “teacher” outputs yi . Note that the abstract
data format (xi , yi )i=1,...,N is the same as for decision trees, but here the data are
numerical and there they were symbolic.
By a small modification, one can make linear regression much more versatile.
Note that any weight vector w∗ in (12) will map the zero vector 0 ∈ Rn on zero.
This is often not what one wants to happen. If one enriches (12) by also training
a bias b ∈ R, via
(w, b) = \operatorname{argmin}_{w^*, b^*} \sum_{i=1}^{N} (w^* x_i + b^* − y_i)^2 ,
one obtains an affine linear function (linear plus constant offset). The common
way to set up linear regression such that affine linear solutions become possible
is to pad the original input vectors x_i with a last component of constant size 1,
that is, in (12) one uses n + 1-dimensional vectors [x; 1] (using Matlab notation
for the vertical concatenation of vectors). Then a solution of (12) on the basis
of the padded input vectors will be a regression weight vector [w, b] ∈ R^{n+1}, with
the last component b giving the offset.
We now derive a solution formula for the minimization problem (12). Most
textbooks start from the observation that the objective function \sum_{i=1}^{N} (w x_i − y_i)^2
is a quadratic function in the weights w, and then use calculus to find the
minimum of this quadratic function by setting its partial derivatives to zero. I
will present another derivation which does not need calculus and better reveals
the underlying geometry of the problem.
Let x_i = (x_i^1, . . . , x_i^n)′ be the i-th input vector. The key to understanding linear
regression is to realize that the N values x_1^j, . . . , x_N^j of the j-th component
(j = 1, . . . , n), collected across all input vectors, form an N-dimensional vector
φ_j = (x_1^j, . . . , x_N^j)′ ∈ R^N. Similarly, the N target values y_1, . . . , y_N can be com-
bined into an N-dimensional vector y. Figure 10 Top shows a case where there
are N = 10 input vectors of dimension n = 4.
Using these N -dimensional vectors as a point of departure, geometric insight
gives us a nice clue how w should be computed. To admit a visualization, we
consider a case where we have only N = 3 input vectors which each have n = 2
components. This gives two N = 3 dimensional vectors φ1 , φ2 (Figure 10 Bot-
tom). The target values y1 , y2 , y3 are combined in a 3-dimensional vector y.
Notice that in machine learning, one should best have more input vectors than
the input vectors have components, that is, N > n. In fact, a very coarse rule of
thumb – with many exceptions – says that one should aim at N > 10 n (if this is
not warranted, use unsupervised dimension reduction methods to reduce n). We
will thus assume that we have fewer vectors φj than training data points. The
vectors φj thus span an n-dimensional subspace in RN (greenish shaded area in
Figure 10 Bottom).
Notice (easy exercise, do it!) that the minimization problem (12) is equivalent
Figure 10: Two visualizations of linear regression. Top. This visualization shows
a case where there are N = 10 input vectors xi , each one having n = 4 vector
components x1i , . . . , x4i (green circles). The fourth component is a constant-1 bias.
The ten values xj1 , . . . , xj10 of the j-th component (where j = 1, . . . , 4) form a
10-dimensional (row) vector φj , indicated by a connecting line. Similarly, the
ten target values yi give a 10-dimensional vector y (shown in red). The linear
combination yopt = w[φ1 ; φ2 ; φ3 ; φ4 ] which gives the best approximation to y in
the least mean square error sense is shown in orange. Bottom. The diagram
shows a case where the input vector dimension is n = 2 and there are N = 3
input vectors x1 , x2 , x3 in the training set. The three values x11 , x12 , x13 of the first
component give a three dimensional vector φ1 , and the three values of the second
component give φ2 (green). These two vectors span a 2-dimensional subspace F
in RN = R3 , shown in green shading. The three target values y1 , y2 , y3 similarly
make for a vector y (red). The linear combination yopt = w1 φ1 + w2 φ2 which has
the smallest distance to y is given by the projection of y on this plane F (orange).
The vectors u_1, u_2 shown in blue are a pair of orthonormal basis vectors which span
the same subspace F.
to

w = \operatorname{argmin}_{w^*} \left\| \left( \sum_{j=1}^{n} w_j^* φ_j \right) − y \right\|^2 , \qquad (13)
Let u_1, . . . , u_n be an orthonormal basis of the subspace F spanned by the vectors φ_1, . . . , φ_n, and collect these basis vectors as columns in the N × n matrix U = [u_1, . . . , u_n]. Since every φ_j lies in F, it can be written as

φ_j = \sum_{l=1}^{n} (u_l′ φ_j) \, u_l = U U′ φ_j . \qquad (15)

The linear combination of the φ_j which comes closest to y in the squared error sense is the orthogonal projection y_opt of y on F (see Figure 10 Bottom),

y_opt = \sum_{l=1}^{n} (u_l′ y) \, u_l = U U′ y . \qquad (16)

On the other hand, y_opt is a linear combination of the φ_j, that is, y_opt = \sum_j w_j φ_j = X′ w′, where X = [x_1, . . . , x_N] is the n × N matrix which contains the input vectors as columns (so that its rows are the φ_j′). Since X′ w′ lies in F, projecting it on F changes nothing, and equating the two expressions for y_opt gives

U U′ y = U U′ X′ w′ .

Multiplying both sides from the left with U′ and using U′ U = I_{n×n} yields

U′ y = U′ X′ w′ . \qquad (17)
It remains to find a weight vector w which satisfies (17). I claim that w′ =
(X X ′ )−1 X y does the trick, that is, U ′ y = U ′ X ′ (X X ′ )−1 X y holds.
To see this, first observe that XX ′ is nonsingular, thus (XX ′ )−1 is defined.
Furthermore, observe that U ′ y and U ′ X ′ (XX ′ )−1 X y are n-dimensional vectors,
and that the N × n matrix U Σ has rank n. Therefore, a short linear algebra computation (which I omit here) shows that these two vectors indeed coincide, which establishes the claim. Summing up, the regression weights can be computed by the following procedure.
Step 1. Sort the input vectors as columns into an n × N matrix X and the
targets into an N-dimensional vector y.

Step 2. Compute the regression weight vector as

w_opt′ = (X X′)^{−1} X y . \qquad (19)
• What we have derived here generalizes easily to cases where the data are of
the form (xi , yi )i=1,...,N where xi ∈ Rn , yi ∈ Rk . That is, the output data
are vectors, not scalars. The objective is to find a k × n regression weight
matrix W which solves
W = \operatorname{argmin}_{W^*} \sum_{i=1}^{N} \| W^* x_i − y_i \|^2 . \qquad (20)
The solution is obtained in analogy to (19) as

W′ = (X X′)^{−1} X Y ,

where Y is the N × k matrix whose i-th row is the target vector y_i′.
• For an a×b sized matrix A, where a ≥ b and A has rank b, the matrix A+ :=
(A′ A)−1 A′ is called the (left) pseudo-inverse of A. It satisfies A+ A = Ib×b .
It is often also written as A† .
• Computing the inversion (XX ′ )−1 may suffer from numerical instability
when XX ′ is close to singular. Remark: this happens more often than
you would think - in fact, XX ′ matrices obtained from real-world, high-
dimensional data are often ill-conditioned (= close to singular). You should
always feel uneasy when your program code contains a matrix inverse! A
quick fix is to always add a small multiple of the n × n identity matrix before
inverting, that is, replace (19) by
w_opt′ = (X X′ + α² I_{n×n})^{−1} X y . \qquad (21)

This is called ridge regression. We will see later in this course that ridge
regression not only helps to circumvent numerical issues, but also offers a
solution to the problem of overfitting. (A small code sketch of (19) and (21)
follows after this list.)
• A note on terminology. Here we have described linear regression. The word
“regression” is used in much more general scenarios. The general setting
goes like this:
Given: Training data (xi , yi )i=1,...,N , where xi ∈ Rn , yi ∈ Rk .
Also given: a search space H containing candidate functions h : Rn → Rk .
Also given: a loss function L : Rk × Rk → R≥0 .
Wanted: A solution to the optimization problem
h_opt = \operatorname{argmin}_{h ∈ H} \sum_{i=1}^{N} L(h(x_i), y_i)
In the case of linear regression, the search space H consists of all linear
functions from Rn to Rk , that is, it consists of all k × n matrices. The loss
function is the quadratic loss which you see in (20). When one speaks of
linear regression, the use of the quadratic loss is implied.
Search spaces H can be arbitrarily large and rich in modeling options –
for instance, H might be the space of all deep neural networks of a given
structure and size.
Classification tasks look similar to regression tasks at first sight: training
data there have the format (x_i, c_i)_{i=1,...,N}. The difference is that the
target values c_i are not numerical but symbolic — they are class labels.
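As announced in the remark on ridge regression above, here is a minimal numpy sketch of (19) and its ridge variant (21), including the constant-1 padding trick for the bias (my own toy example, not a reference implementation):

import numpy as np

def ridge_regression(X, y, alpha=0.0):
    """X: n x N matrix (data points as columns), y: length-N target vector.
    Returns the weight vector of (19)/(21); alpha = 0 gives plain linear regression.
    np.linalg.solve is used instead of forming the matrix inverse explicitly."""
    n = X.shape[0]
    return np.linalg.solve(X @ X.T + alpha**2 * np.eye(n), X @ y)

# usage: noisy affine-linear toy data; the bias is handled by padding with a 1-row
rng = np.random.default_rng(1)
x = rng.normal(size=(3, 200))                      # n = 3 raw input dimensions, N = 200
y = 2.0 * x[0] - 1.5 * x[2] + 0.7 + 0.01 * rng.normal(size=200)
X = np.vstack([x, np.ones((1, 200))])              # constant-1 padding for the bias
w = ridge_regression(X, y, alpha=0.1)
print(w)                                           # roughly [2.0, 0.0, -1.5, 0.7]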
The supervised temporal learning tasks considered in this section are specified by two paired signals in the training data:

an input signal (u(t))_{t∈T}, where T is an ordered set of time points and for every
t ∈ T, u(t) ∈ R^k;

a “teacher” output signal (y(t))_{t∈T}, where y(t) ∈ R^m.
For simplicity we will only consider discrete time with equidistant unit timesteps,
that is T = N (unbounded time) or T = {0, 1, . . . , T } (finite time).
The learning task consists in training a system which operates in time and, if
it is fed with the input signal u(t), produces an output signal ŷ(t) which approx-
imates the teacher signal y(t) (in the sense of minimizing a loss L(ŷ(t) − y(t)) in
time average over the training data). A few examples for illustration:
• Input signal: a noisy radio signal with lots of statics and echos. Desired
output signal: the input signal in a version which has been de-noised and
where echos have been cancelled.
These are quite different sorts of tasks. There are many names in use for
quick references to temporal tasks. The ECG monitoring task would be called
a temporal classification or fault monitoring task; the 0-1 switching of the badly
designed thermostat is too simplistic to have a respectable name; radio engineers
would speak of de-noising and equalizing, and speech translation is too royally
complex to have a name besides “online speech translation”. Input-output signal
transformation tasks are as many as there are wonders under the sun.
In many cases, one may assume that the current output data point y(t) only
depends on inputs up to that time point, that is, on the input history . . . , u(t −
2), u(t − 1), u(t). Specifically, y(t) does not depend on future inputs. Input-
output systems where the output does not depend on future inputs are called
causal systems in the signal processing world.
A causal system can be said to have memory if the current output y(t) is not
fully determined by the current input u(t), but is influenced by earlier inputs as
well. I must leave the meaning of “influenced by” vague at this point; we will make
it precise in a later section when we investigate stochastic processes in more detail.
All examples except the (poorly designed) thermostat example have memory.
Often, the output y(t) is influenced by long-ago input only to a negligible ex-
tent, and it can be explained very well from only the input history extending back
to a certain limited duration. All examples in the list above except the English
translation one have such limited relevant memory spans. In causal systems with
bounded memory span, the current output y(t) thus depends on an input window
u(t − d + 1), u(t − d + 2), . . . , u(t − 1), u(t) of d steps duration. Figure 11 (top)
gives an impression.
[Figure 11: Window-based input-output transformation (schematic). Top: the output y(t) is computed from the window u(t − 4), . . . , u(t) of recent inputs. Bottom: the same windowing applied to timeseries prediction, where the target is the next input value u(t + 1).]
Applying linear regression to such input windows is not
a problem; all one has to do is to flatten the collection of d k-dimensional input
vectors which lie in a window into a single d · k dimensional vector, and then apply
(20) as before.
Linear regression is often surprisingly accurate, especially when one uses large
windows and a careful regularization (to be discussed later in this course) through
ridge regression. When confronted with a new supervised temporal learning task,
the first thing one should do as a seasoned pro is to run it through the machinery of
window-based linear regression. This takes a few minutes of programming and gives,
at the very least, a baseline for comparing more sophisticated methods against —
and often it already gives a very good solution without more effort.
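Here is what such a window-based baseline might look like in numpy (a sketch under the assumption that input and teacher signals are given as arrays of shape (T, k) and (T, m); all names and the toy task are invented for illustration):

import numpy as np

def make_windows(u, y, d):
    """u: (T, k) input signal, y: (T, m) teacher signal, d: window length.
    Returns design matrix X of shape (d*k+1, N) and target matrix Y of shape (N, m)."""
    T = len(u)
    rows = [np.concatenate([u[t - d + 1:t + 1].ravel(), [1.0]])   # flattened window + bias
            for t in range(d - 1, T)]
    return np.array(rows).T, y[d - 1:]

def fit_ridge(X, Y, alpha=1e-2):
    """Ridge regression (21) on the windowed design matrix."""
    n = X.shape[0]
    return np.linalg.solve(X @ X.T + alpha**2 * np.eye(n), X @ Y)

# toy task: the output depends on the current input and the input two steps back
rng = np.random.default_rng(0)
u = rng.normal(size=(1000, 1))
y = 0.5 * u[:, :1]
y[2:] += 0.3 * u[:-2, :1]
X, Y = make_windows(u, y, d=5)
W = fit_ridge(X, Y)
y_hat = (W.T @ X).T
print(np.mean((y_hat - Y) ** 2))                   # small training error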
But, linear regression only can give linear regression functions. This is not good
enough if the dynamical input-output system behavior has significant nonlinear
components. Then one must find a nonlinear regression function f .
If that occurs, one can take resort to a simple method which yields nonlinear
regression functions while not renouncing the conveniences of the basic linear re-
gression learning formula (19). I discuss this for the case of scalar inputs u(t) ∈ R.
The trick is to add fixed nonlinear transforms to the collection of input arguments
u(t−d+1), u(t−d+2), . . . , u(t). A common choice is to add polynomials. To make
notation easier, let us rename u(t − d + 1), u(t − d + 2), . . . , u(t) to u1 , u2 , . . . , ud .
If one adds all polynomials of degree 2, one obtains a collection of d + d(d + 1)/2
input components for the regression, namely u_1, . . . , u_d together with all products
u_i u_j (where 1 ≤ i ≤ j ≤ d).
If one wants “even more nonlinearity”, one can venture to add higher-order poly-
nomials. The idea to approximate nonlinear regression functions by linear combi-
nations of polynomial terms is a classical technique in signal processing, where it
is treated under the name of Volterra expansion or Volterra series. Very general
classes of nonlinear regression functions can be approximated to arbitrary degrees
of precision with Volterra expansions.
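A sketch of the degree-2 feature construction described above (my own helper function, not a standard Volterra toolbox routine):

import numpy as np

def volterra2_features(window):
    """From a window u1, ..., ud, build the d + d(d+1)/2 regression inputs
    u1, ..., ud together with all products ui*uj for i <= j."""
    u = np.asarray(window, dtype=float)
    d = len(u)
    quad = [u[i] * u[j] for i in range(d) for j in range(i, d)]
    return np.concatenate([u, quad])

print(len(volterra2_features([0.1, -0.4, 0.7])))   # 3 + 6 = 9 components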
Adding increasingly higher-order terms to a Volterra expansion obviously leads
to a combinatorial explosion. Thus one will have to use some pruning scheme to
keep only those polynomial terms which lead to an increase of accuracy. There
is a substantial literature in signal processing dealing with pruning strategies for
Volterra series (google “Volterra pruning”). I personally would never try to use
polynomials of degree higher than 2. If that doesn’t give satisfactory results, I
would switch to other modeling techniques, using neural networks for instance.
Time series prediction tasks come in many variants and I will not attempt
to draw the large picture but restrict this treatment to timeseries with integer
timesteps and vector-valued observations, as in the previous subsections.
Re-using terminology and notation, the input signal in the training data is, as
before, (u(t))t∈T . If one wants to predict this sequence of observations h timesteps
ahead (h is called prediction horizon), the desired output y(t) is just the input,
shifted by h timesteps:
(y(t))t∈T = (u(t + h))t∈T .
A little technical glitch is that due to the timeshift h, the last h data points
in the input signal (u(t))t∈T cannot be used as training data because their h-step
future would lie beyond the maximal time T .
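In code, setting up the training pairs for an h-step prediction task is a one-liner (again just an illustrative sketch):

import numpy as np

def prediction_pairs(u, h):
    """u: (T, k) observed timeseries. Returns inputs u(t) and targets u(t+h);
    the last h points cannot serve as inputs because their targets lie beyond T."""
    return u[:-h], u[h:]

u = np.arange(10.0).reshape(-1, 1)
X_sig, Y_sig = prediction_pairs(u, h=3)
print(X_sig.ravel())    # 0 ... 6
print(Y_sig.ravel())    # 3 ... 9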
Framed as an u(t)–to–y(t) input-output signal transformation task, all meth-
ods that can be applied for the latter can be used for timeseries prediction too.
Specifically, simple window-based linear regression (as in Figure 11 bottom) is
again a highly recommendable first choice for getting a baseline predictor when-
ever you face a new timeseries prediction problem with numerical timeseries.
a highly corrupted version of the cleanly transmitted one. Signal processing
engineers call this part of the world-in-the-middle the channel. Does it
not seem hopeless to model the input-output transformation of such super-
complex physical channels? – There can hardly be a more classical problem
than this one; analysing signal transmission channels gave rise to Shannon’s
information theory (Shannon 1948).
Abstracting from these examples, consider how natural scientists and math-
ematicians describe dynamical systems with input and output. We stay in line
with earlier parts of this section and consider only discrete-time models with a time
index set T = N or T = Z. Three timeseries are considered simultaneously: the
input signal u(t) ∈ R^k, the internal state x(t) ∈ R^n of the system that transforms
the input, and the output signal y(t) ∈ R^m.
In the soccer reporting example, x(t) would refer to some kind of brain state —
for instance, the vector of activations of all the reporter’s brain’s neurons. In the
signal transmission example, x(t) would be the state vector of some model of the
physical world stretched out between the sending and receiving antenna.
A note in passing: variable naming conventions are a little confusing. In the
machine learning literature, x’s and y’s in (x, y) pairs usually mean the arguments
and values (or inputs and outputs) of classification/regression functions. Both x
and y are observable. In the dynamical systems and signal processing literature
(mathematics, physics and engineering), the variable name x typically is reserved
for the state of a physical system that generates or “channels” (= “transduces”,
“filters”) signals. The internal physical state of these systems is normally not
fully observable and not part of training data. Only the input and output signals
u(t), y(t) are observable data which are available for training models.
In a discrete-time setting, the temporal evolution of u(t), x(t), y(t) is governed
by two functions, the state update map

x(t + 1) = f(x(t), u(t + 1)),

which describes how the internal states x(t) develop over time under the influence
of input, and the observation function

y(t) = g(x(t)),

which describes which outputs y(t) can be observed when the physical system is
in state x(t).
The input signal u(t) is not specified by some equation, it is just “given”.
Figure 12 visualizes the structural difference between the signal-based and the
state-based input-output transformation models.
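The following toy sketch illustrates the state-based scheme; the concrete leaky-integrator update and tanh read-out are arbitrary choices of mine, meant only to show how a state update map and an observation function are iterated:

import numpy as np

def f(x, u, leak=0.1):
    """State update map: the new state keeps a trace of all past inputs."""
    return (1 - leak) * x + leak * u

def g(x):
    """Observation function: here simply a squashed read-out of the state."""
    return np.tanh(2 * x)

def run_system(u_signal, x0=0.0):
    x, ys = x0, []
    for u in u_signal:
        x = f(x, u)              # x(t) is co-determined by the entire input history
        ys.append(g(x))
    return np.array(ys)

print(run_system(np.sin(np.linspace(0, 6, 20)))[:5])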
Figure 12: Contrasting signal-based with state-based timeseries transformations.
Top: Window-based determination of output signal through a regression function
h (here the window size is 3). Bottom: Output signal generation by driving an
intermediary dynamical system with an input signal. The state x(t) is updated
by f .
There are many other types of state update maps and observation functions,
for instance ODEs and PDEs for continuous-time systems, automata models for
discrete-state systems, or a host of probabilistic formalisms for random dynamical
systems. For our present discussion, considering only discrete-time state update
maps is good enough.
A core difference between signal-based and state-based timeseries transfor-
mations is the achievable memory timespans. In windowed signal transformations
through regression functions, the memory depth is bounded by the window length.
In contrast, the dynamical system state x(t) of the intermediary system is poten-
tially co-determined by input that was fed to the dynamical system in an arbitrary
deep past – the memory span can be unbounded! This may seem counterintuitive
if one looks at Figure 12 because at each time point t, only the input data point
u(t) from that same timepoint is fed to the dynamical system. But u(t) leaves
some trace on the state x(t), and this effect is forwarded to the next timestep
through f , thus x(t + 1) is affected by u(t), too; and so forth. Thus, if one expects
long-range or even unbounded memory effects, using state-based transformation
models is often the best way to go.
Machine learning offers a variety of state-based models for timeseries transfor-
mations, together with learning algorithms. The most powerful ones are hidden
Markov models (which we’ll get to know in this course) and other dynamical
graphical models (which we will not touch), and recurrent neural networks (which
we’ll briefly meet I hope).
In some applications it is important that the input-output transformation can
be learnt in an online adaptive fashion. The input-output transformation is not
trained just once, on the basis on a given, fixed training dataset. Instead, training
never ends; while the system is being used, it continues to adapt to changing
input-output relationships in the data that it processes. My favourite example
is the online adaptive filtering (denoising, echo cancellation) of the radio signal
received by a mobile phone. When the phone is physically moving while it is
being used (phonecall in a train, or just while walking up and down in a room),
the signal channel from the transmitter antenna to the phone’s antenna keeps
changing its characteristics because the radiowaves will take different mixes of
reflection pathways all the time. The denoising, echo-cancelling filter has to be re-
learnt every few milliseconds. This is done with a window-based linear regression
(window size several tens of thousands) and ingeniously simplified/accelerated
algorithms. Because this is powerful stuff, we machine learners should not leave
these powerful methods only to the electrical engineers (who invented them) but
learn to use them ourselves. I will devote a session later in this course to these
online adaptive signal processing methods.
dynamics governed by differential equations. Many variants and extensions of
Takens theorem are known today. To stay in tune with earlier parts of this section
I present a discrete-time version. Consider the input-free dynamical system

x(t + 1) = f(x(t)), \qquad y(t) = g(x(t)), \qquad (25)

where x(t) ∈ R^n is the system state and y(t) ∈ R is a scalar observable.
Figure 13: Takens theorem visualized. Left plot shows the Lorenz attractor, a
chaotic attractor with a state sequence x(t) defined in an n = 3 dimensional
state space. The center plot shows a 1-dimensional observation y(t) thereof. The
right plot (orange) shows the state sequence y(t) obtained from y(t) by three-
dimensional delay embedding (I forget which delay δ I used to generate this plot).
The deep, nontrivial message behind Takens-like theorems is that one can trade
time for space in certain dynamical systems. If by “space” one means dimensions
of state vectors, and by “time” lengths of observation windows of a scalar ob-
servable, Takens-like theorems typically state that if the dimension of the original
attractor set (manifold or fractal) is m, then using a delay embedding of an in-
teger dimension larger or equal than 2m + 1 will always recuperate the original
geometry; and sometimes smaller delay embedding dimensions suffice. Just for
completeness: the fractal dimension of the Lorenz attractor is about m = 2.06,
thus any delay embedding dimension exceeding 2m + 1 = 5.12 would be certainly
sufficient for reconstruction from a scalar observation; it turns out in Figure 13
that an embedding dimension of 3 is already working.
In timeseries prediction tasks in machine learning, Takens theorem gives a
justification why in deterministic dynamical systems without input it should be
possible to predict the future of a scalar timeseries y(t) from its past. If the
timeseries in question has been observed from a system like (25), Takens theorem
establishes that the information contained in the last 2m + 1 timesteps before the
prediction point t comprises the full information about the state of the system
that generated y(t) and thus knowledge of the last 2m + 1 timesteps is enough to
predict y(t + 1) with absolute accuracy.
Even if you forget about the Takens theory background, it is good to be fa-
miliar with delay-embeddings of scalar timeseries, because they help creating in-
structive graphics. If you transform a one-dimensional scalar timeseries y(t) into
a 2-dimensional one by a 2-element delay embedding, plotting the 2-dimensional
trajectory (y(t − δ), y(t))′ in its delay coordinates will often
give revealing insight into the structure of the timeseries – our visual system is
optimized for seeing patterns in 2-D images, not in 1-dimensional timeseries plots.
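Creating such a delay-embedding plot takes only a few lines (a sketch assuming matplotlib is available; the quasi-periodic toy signal is made up, not the Mackey-Glass series from Figure 14):

import numpy as np
import matplotlib.pyplot as plt

def delay_embed(y, delay):
    """Return the 2-dimensional trajectory (y(t - delay), y(t))."""
    return np.column_stack([y[:-delay], y[delay:]])

t = np.arange(0, 60, 0.05)
y = np.sin(t) + 0.5 * np.sin(2.1 * t + 1.0)        # some quasi-periodic scalar signal
traj = delay_embed(y, delay=20)

plt.plot(traj[:, 0], traj[:, 1], lw=0.5)
plt.xlabel("y(t - delta)")
plt.ylabel("y(t)")
plt.show()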
Figure 14: Getting nice graphics from delay embeddings. Left: a timeseries
recorded from the “Mackey-Glass” chaotic attractor. Right: plotting the tra-
jectory of a delay-embedded version of the left signal.
4 Basic methods for dimension reduction
One way to take up the fight with the “curse of dimensionality” which I briefly
highlighted in Section 1.2.2 is to reduce the dimensionality of the raw input data
before they are fed to subsequent learning algorithms. The dimension reduction
ratio can be enormous.
In this Section I will introduce three standard methods for dimension reduc-
tion: K-means clustering, principal component analysis, and self-organizing fea-
ture maps. They all operate on raw vector data which come in the form of points
x ∈ Rn , that is, this section is only about dimension reduction methods for nu-
merical data. Dimension reduction is the archetypical unsupervised learning task.
The goal is to find a dimension-reducing map f : R^n → R^m from the raw data vectors to feature vectors such that

• m < n, that is, we indeed reduce the number of dimensions — maybe even
dramatically;
• the low-dimensional vectors f (xi ) should preserve from the raw data the
specific information that is needed to solve the learning task that comes
after this dimension-reducing data “preprocessing”.
The component functions f_1, . . . , f_m of such a dimension-reducing map are called features. Good features should satisfy a number of desiderata:

• The number m of features should be small — after all, one of the reasons
for using features is dimension reduction.
• Each feature fi should be relevant for the task at hand. For example, when
the task is to distinguish helicopter images from winged aircraft photos (a
2-class classification task), the brightness of the background sky would be
an irrelevant feature; but the binary feature “has wings” would be extremely
relevant.
• A general intuition about features is that they should be rather cheap and
easy to compute at the front end where the ML system meets the raw data.
The “has wings” feature for helicopter vs. winged aircraft classification more
or less amounts to actually solving the classification task and presumably
is neither cheap nor easy to compute. Such highly informative, complex
features are sometimes called high-level features; they are usually computed
on the basis of more elementary, low-level features. Often features are com-
puted stage-wise, low-level features first (directly from data), then stage by
stage more complex, more directly task-solving, more “high-level cognition”
features are built by combining the lower-level ones. Feature hierarchies
are often found in ML systems. Example: in face recognition from photos,
low-level features might extract coordinates of isolated black dots from the
photo (candidates for the pupils of the person’s eyes); intermediate features
might give distance ratios between eyes, nose-tip, center-of-mouth; high-level
features might indicate gender or age.
Mean brightness. f1 (x) = 1′n x / n (1n is the vector of n ones). This is just the
mean brightness of all pixels. Might be useful e.g. for distinguishing “1”
images from “8” images because we might suspect that for drawing an “8”
one needs more black ink than for drawing a “1”. Cheap to compute but not
very class-differentiating.
Figure 15: Some examples from the Digits dataset.
A binary “hole” feature. Set f_2(x) = 1 if (i) the center horizontal pixel line, scanned from left to
right, has a sequence of pixels that changes from black to white to black, and (ii)
the same holds for the center vertical pixel line. If this double condition is not met, the
image is assigned a feature value of f_2(x) = 0. f_2 thus has only two possible
values; it is called a binary feature. We might suspect that only the “0”
images have this property. This would be a slightly less cheap-to-compute
feature compared to f_1 but more informative about classes.
Class prototype correlations. For each of the ten digit classes j, take a prototype vector π_j (for instance the mean of the training images of class j) and define f_3j(x) = π_j′ x. We might hope that f_3j has a high value for patterns of class j and low
values for other patterns.
However, since hand-designing good features means good insight on the side of
the engineer, and good engineers are rare and have little time, the practice of ML
today relies much more on features that are obtained from learning algorithms.
Numerous methods exist. In the following subsections we will inspect three such
methods.
Given: a training data set (xi )i=1,...,N ∈ Rn , and a number K of clusters that one
maximally wishes to obtain.
Initialization: randomly assign the training points to K sets S_j (j = 1, . . . , K).

Repeat: For each set S_j, compute the mean µ_j = |S_j|^{−1} \sum_{x ∈ S_j} x. This mean
vector µ_j is the “center of gravity” of the vector cluster S_j. Create new sets S_j′ by
putting each data point x_i into that set S_j′ for which ‖x_i − µ_j‖ is minimal. If some
S_j′ remains empty, dismiss it and reduce K to K′ by subtracting the number of
dismissed empty sets (this happens rarely). Put S_j = S_j′ (for the nonempty sets)
and K = K′.
Termination: Stop when in one iteration the sets remain unchanged.
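Here is a compact numpy rendering of this algorithm (my own sketch; production code would add safeguards, for instance several restarts from different random initializations):

import numpy as np

def k_means(X, K, seed=0):
    """X: (N, n) data points. Returns the codebook vectors (cluster means) and assignments."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    assign = rng.integers(0, K, size=N)                  # random initial partition
    while True:
        means = np.array([X[assign == j].mean(axis=0)    # drop empty clusters silently
                          for j in range(K) if np.any(assign == j)])
        dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        new_assign = dists.argmin(axis=1)                # reassign each point to closest mean
        if np.array_equal(new_assign, assign):           # stop when the sets stay unchanged
            return means, assign
        assign = new_assign

# usage on three synthetic point clouds
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, size=(50, 2)) for c in ([0, 0], [4, 4], [0, 5])])
means, assign = k_means(X, K=3)
print(means)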
Figure 16: Clusters obtained from K-means clustering (schematic): For a training
set of data points (light blue dots), a spatial grouping into clusters Cj is determined
by the K-means algorithm. Each cluster becomes represented by a codebook vector
(dark blue crosses). The figure shows three clusters. The light blue straight lines
mark the cluster boundaries. A test data point xtest (red cross) may then become
coded in terms of the distances αj of that point to the codebook vectors. Since
this xtest falls into the second cluster C2 , it could also be compressed into the
codebook index “2” of this cluster.
J = \sum_{j=1}^{K} \sum_{x ∈ S_j} \| x − µ_j \|^2 \qquad (26)
will not increase. The algorithm typically converges quickly and works well in
practice. It finds a local minimum or saddle point of J. The final clusters Sj
may depend on the random initialization. The clusters are bounded by straight-
line boundaries; each cluster forms a Voronoi cell. K-means cannot find clusters
defined by curved boundaries. Figure 17 shows an example of a clustering run
using K-means.
K-means clustering and other clustering methods have many uses besides di-
mension reduction. Clustering can also be seen as a stand-alone technique of
unsupervised learning. The detected clusters and their corresponding codebook
vectors are of interest in their own right. They reveal a basic structuring of a
set of patterns {xi } into subsets of mutually similar patterns. These clusters may
be further analyzed individually, given meaningful names and helping a human
data analyst to make useful sense of the original unstructured data cloud. For
instance, when the patterns {xi } are customer profiles, finding a good grouping
into subgroups may help to design targeted marketing strategies.
Figure 17: Running K-means with K = 3 on two-dimensional training points.
Thick dots mark cluster means µj , lines mark cluster boundaries. The algorithm
terminates after three iterations, whose boundaries are shown in light gray, dark
gray, red. (Picture taken from Chapter 10 of the textbook Duda, P. E. Hart, and
Stork 2001).
for the history). Because of its simplicity, analytical transparency, modest compu-
tational cost, and numerical robustness PCA is widely used — it is the first-choice
default method for dimension reduction that is tried almost by reflex, before more
elaborate methods are maybe considered.
PCA is best explained alongside with a visualization (Figure 18). Assume the
patterns are 3-dimensional vectors, and assume we are given a sample of N = 200
raw patterns x1 , . . . , x200 . We will go through the steps of computing a PCA for
this demo dataset.
Figure 18: Visualization of PCA. A. Centered data points and the first principal
component vector u1 (blue). The origin of R3 is marked by a red cross. B. Pro-
jecting all points to the orthogonal subspace of u1 and computing the second PC
u2 (green). C. Situation after all three PCs have been determined. D. Summary
visualization: the original data cloud with the three PCs and an ellipsoid aligned
with the PCs whose main axes are scaled to the standard deviations of the data
points in the respective axis direction. E. A new dimension-reduced coordinate
system obtained by the projection of data on the subspace Um spanned by the m
first PCs (here: the first two).
The first step of PCA is to center the data: compute the sample mean µ = 1/N \sum_i x_i and subtract it from all patterns, giving centered patterns x̄_i = x_i − µ. The first PC u_1 is then defined as the unit-length vector u for which the variance of the centered patterns in the direction of u is maximal, where the amount by which a point x̄_i extends
in the direction of u is given by the projection of x̄_i on u, that is, the inner product
u′ x̄_i (see Figure 19).
Figure 19: Projecting a point x̄i on a direction vector u: the inner product u′ x̄i
(length of the green vector) is the distance of x̄i from the origin along the direction
given by u.
Next, all centered patterns are projected to the orthogonal subspace of u_1 (Figure 18 B), and the unit-length vector pointing in the longest direction of this projected point distribution is determined; call it the second PC u_2 of the centered pattern sample. From this procedure it
is clear that u1 and u2 are orthogonal, because u2 lies in the orthogonal subspace
of u1 .
Now repeat this procedure: In iteration k, the k-th PC uk is constructed by
projecting pattern points to the linear subspace that is orthogonal to the already
computed PCs u1 , . . . , uk−1 , and uk is obtained as the unit-length vector pointing
in the “longest” direction of the current (n−k+1)-dimensional pattern point distri-
bution. This can be repeated until n PCs u1 , . . . , un have been determined. They
form an orthonormal coordinate system of Rn . Figure 18C shows this situation,
and Figure 18D visualizes the PCs plotted into the original data cloud.
Now define features f_k (where 1 ≤ k ≤ n) by

f_k(x) = u_k′ x̄ = u_k′ (x − µ),

that is, f_k(x̄) is the projection component of x̄ on u_k. Since the n PCs form an
orthonormal coordinate system, any point x ∈ R^n can be perfectly reconstructed
from its feature values by

x = µ + \sum_{k=1,\ldots,n} f_k(x) \, u_k . \qquad (29)
The PCs and the corresponding features f_k can be used for dimension reduction
as follows. We select the first (“leading”) PCs u_1, . . . , u_m up to some index m.
Then we obtain a feature map

f : R^n → R^m, \quad x ↦ (f_1(x), \ldots, f_m(x))′ = U_m′ (x − µ), \qquad (30)

where U_m = [u_1, . . . , u_m], and a corresponding decoding (reconstruction) map

d : R^m → R^n, \quad z ↦ µ + U_m z . \qquad (31)

How “good” is this dimension reduction, that is, how similar are the original
patterns x_i to their reconstructions d ◦ f(x_i)?
If dissimilarity of two patterns x_1, x_2 ∈ R^n is measured in the square error
sense by

δ(x_1, x_2) := ‖x_1 − x_2‖² ,

a full answer can be given. Let

σ_k² = \frac{1}{N} \sum_i f_k(x_i)²
denote the variance of the feature values fk (xi ) (notice that the mean of the fk (xi ),
taken over all patterns, is zero, so σk2 is indeed their variance). Then the mean
square distance between patterns and their reconstructions is
\frac{1}{N} \sum_i ‖x_i − d ◦ f(x_i)‖² = \sum_{k=m+1}^{n} σ_k² . \qquad (32)
Equation (32) gives an absolute value for dissimilarity. For applications how-
ever the relative amount of dissimilarity compared to the mean variance of patterns
is more instructive. It is given by
\frac{\frac{1}{N} \sum_i ‖x_i − d ◦ f(x_i)‖²}{\frac{1}{N} \sum_i ‖x̄_i‖²} = \frac{\sum_{k=m+1}^{n} σ_k²}{\sum_{k=1}^{n} σ_k²} . \qquad (33)
There is a much more convenient way to obtain the PCs than the iterated projection procedure described above. Let C = 1/N X̄ X̄′ be the covariance matrix of the centered patterns, where X̄ = [x̄_1, . . . , x̄_N] is the n × N matrix containing the centered patterns as columns. Then:

1. The PC vectors u_1, . . . , u_n are unit-norm eigenvectors of C.

2. The feature variances σ_1², . . . , σ_n² are the corresponding eigenvalues.
A derivation of these facts can be found in my legacy ML lecture notes, Section
4.4.1 (https://www.ai.rug.nl/minds/uploads/LN_ML_Fall11.pdf). Thus, the
principal component vectors uk and their associated data variances σk2 can be
directly gleaned from C.
Computing a set of unit-norm eigenvectors and eigenvalues from C can be
most conveniently done by computing the singular value decomposition (SVD) of
C. Algorithms for computing SVDs of arbitrary matrices are shipped with all
numerical or statistical mathematics software packages, like Matlab, R, or Python
with numpy. At this point let it suffice to say that every covariance matrix C is a
so-called positive semi-definite matrix. These matrices have many nice properties.
Specifically, their eigenvectors are orthogonal and real, and their eigenvalues are
real and nonnegative.
In general, when an SVD algorithm is run on an n-dimensional positive semi-
definite matrix C, it returns a factorization
C = U Σ U ′,
where U is an n×n matrix whose columns are the normed orthogonal eigenvectors
u1 , . . . , un of C and where Σ is an n × n diagonal matrix which has the eigenvalues
λ1 , . . . , λn on its diagonal. They are usually arranged in descending order. Thus,
computing the SVD of C = U Σ U ′ directly gives us the desired PC vectors uk ,
lined up in U , and the variances σk2 , which appear as the eigenvalues of C, collected
in Σ.
This enables a convenient control of the goodness of similarity that one wants
to ensure. For example, if one wishes to preserve 98% of the variance information
from the original patterns, one can use the r.h.s. of (33) to determine the “cutoff”
m such that the ratio in this equation is about 0.02.
Procedure.

Step 1. Compute the pattern mean µ and center the patterns to obtain
a centered pattern matrix X̄ = [x̄_1, . . . , x̄_N].

Step 2. Compute the SVD U Σ U′ of C = 1/N X̄ X̄′ and keep from U
only the first m columns, making for an n × m sized matrix U_m.

Step 3. The m-dimensional feature vector of a pattern x is then obtained as
f(x) = U_m′ (x − µ), and an approximate reconstruction as d(f(x)) = µ + U_m f(x).
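The whole procedure fits into a few lines of numpy (a sketch of my own; the choice of the cutoff m follows the variance-ratio criterion (33)):

import numpy as np

def pca(X, keep_variance=0.98):
    """X: n x N matrix with patterns as columns.
    Returns the mean, the n x m matrix Um of leading PCs, and all variances."""
    mu = X.mean(axis=1, keepdims=True)
    Xc = X - mu                                     # centered patterns
    C = Xc @ Xc.T / X.shape[1]                      # covariance matrix
    U, S, _ = np.linalg.svd(C)                      # C = U diag(S) U', S descending
    ratio = np.cumsum(S) / np.sum(S)                # retained-variance ratio per cutoff
    m = int(np.searchsorted(ratio, keep_variance)) + 1
    return mu, U[:, :m], S

def encode(x, mu, Um):                              # feature map f (projection on leading PCs)
    return Um.T @ (x - mu)

def decode(z, mu, Um):                              # reconstruction d
    return mu + Um @ z

# usage on toy data with one dominant direction of variance
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 200)) * np.array([[3.0], [1.0], [0.1]])
mu, Um, var = pca(X, keep_variance=0.9)
x_hat = decode(encode(X, mu, Um), mu, Um)
print(Um.shape, np.mean(np.sum((X - x_hat) ** 2, axis=0)))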
4.6 Eigendigits
For a demonstration of dimension reduction by PCA, consider the “3” digit images.
After reshaping the images into 240-dimensional grayscale vectors and centering
and computing the PCA on the basis of N = 100 training examples, we obtain
240 PCs uk associated with variances σk2 . Only the first 99 of these variances are
nonzero (because the 100 image vectors xi span a 100-dimensional subspace in
R240 ; after centering the x̄i however span only a 99-dimensional subspace – why?
homework exercise! – thus the matrix C = 1/N X̄ X̄ ′ has rank at most the rank of
X̄, which is 99), thus only the first 99 PCs are usable. Figure 20 A shows some
of these eigenvectors ui rendered as 15 × 16 grayscale images. It is customary
to call such PC re-visualizations eigenimages, in our case “eigendigits”. (If you
have some spare time, do a Google image search for “eigenfaces” and you will find
weird-looking visualizations of PC vectors obtained from PCA carried out on face
pictures.)
Figure 20 B shows the variances σi2 of the first 99 PCs. You can see the rapid
(roughly exponential) decay. Aiming for a dissimilarity ratio (Equation 33) of
0.1 gives a value of m = 32. Figure 20 C shows the reconstructions of some “3”
patterns from the first m PC features using (31).
Figure 20: A. Visualization of a PCA computed from the “3” training images.
Top left panel shows the mean µ, the next 7 panels (row-wise) show the first 7
PCs. Third row shows PCs 20–23, last row PCs 96-99. Grayscale values have
been automatically scaled per panel such that they spread from pure white to
pure black; they do not indicate absolute values of the components of PC vectors.
B. The (log10 of) variances of the PC features on the “3” training examples. C.
Reconstructions of digit “3” images from the first m = 32 features, corresponding
to a re-constitution of 90% of the original image dataset variance. First row: 4
original images from the training set. Second row: their reconstructions. Third
row: 4 original images from the test set. Last row: their reconstruction.
Also given: a 2-dimensional grid of “neurons” V = (vkl )1≤k≤K, 1≤l≤L , where k, l
are the grid coordinates of neuron vkl .
Three comments: (i) Here I use a grid with a rectangular neighborhood struc-
ture. Classical SOM papers and many applications of SOMs use a hexagonal
neighborhood structure instead, where each neuron has 6 neighbors, all at the
same distance. (ii) The “learning objective” sounds vague. It is. At the time when
Kohonen introduced SOMs, tailoring learning algorithms along well-defined loss
functions was not standard. Kohonen's modeling attitude was oriented toward biological modeling, and a mathematical analysis of the SOM algorithm was not part of his agenda. In fact, the problem of finding a loss function which is minimized by the original SOM algorithm was still unsolved in the year 2008 (Yin 2008). I don't know
what the current state of research in this respect is — I would guess not much has
happened since. (iii) The learning task is impossible to solve. One cannot map the
n-dimensional pattern space Rn to the lower-dimensional (even just 2-dimensional)
space Rm while preserving neighborhood relations. For a graphical illustration of
this impossibility, consider the case where the patterns are 3-dimensional and are
uniformly spread in the unit cube [0, 1] × [0, 1] × [0, 1]. Then, in order to let every
grid neuron have about the same number of pattern points which are mapped on it
(condition 2), the SOM learning task would require that the 2-dimensional neuron
grid — think of it as a large square sheet of paper with the gridlines printed on
— becomes “wrinkled up” in the 3-dimensional cube such that in every place in
the cube the surrounding “neural sheet density” is about the same (Figure 21).
When this condition 2 is met (as in the figure), condition 1 is necessarily violated:
there will be points in the high-dimensional space which are close to each other
but which will become mapped to grid neurons that are far from each other on
the grid. Thus, SOM training is always finding a compromise.
In a trained SOM, each grid neuron vkl is “located” in the input pattern space
Rn by an n-dimensional weight vector w(vkl ), which just gives the n coordinates
of the “location” of vkl in Rn (in Figure 21, think of every gridline crossing point
as a location of a grid neuron; its weight vector w(vkl ) gives the 3-dimensional
position in the cube volume).
If a new test pattern x ∈ Rn arrives, its κ-image is computed by
Figure 21: Trying to uniformly fill a cube volume with a 2-dimensional grid sheet
will invariably lead to some points in the cube which are close to two (or more)
“folds” of the sheet. That is, points that are far away from each other in the
2-dimensional sheet (here for instance: points on the red gridline vs. points on the
blue gridline segment) will be close to each other in the 3-dimensional cube space;
or stated the other way round: some points that are close in the 3D cube will
become separated to distant spots on the neuron grid. “Crumpled grid” graphic
taken from Obermayer, Ritter, and Schulten 1990.
In words, the neuron v whose weight vector w(v) best matches the input
pattern x is chosen. In the SOM literature this neuron is called the best matching
unit (BMU). Clearly the map κ is determined by the weight vectors w(v). In
order to train them on the training data set P, a basic SOM learning algorithm
works as follows:
is a non-negative, monotonically decreasing function defined on the nonnegative
reals which satisfies fr (0) = 1 and which goes to zero as its argument grows. The
function is parametrized by a radius parameter r > 0, where greater values of r
spread out the range of fr . A common choice is to use a Gaussian-like function
fr (x) = exp(−x2 /r2 ) (see Figure 22).
Figure 22: Plot of the Gaussian-like function f_r(x) = exp(−x²/r²) over the range −5 ≤ x ≤ 5.
patterns has grown from the singleton population {v0 } to a larger one. This
growth will continue until the population of grid neurons responding to cluster
patterns has become so large that each member's BMU-response-rate has become
too low to further drive this population’s expansion.
The radius r is set to large values initially in order to let all (or most) patterns in
P compete with each other, leading to a coarse global organization of the emerging
map κ. In later iterations, increasingly smaller r leads to a fine-balancing of the
BMU responses of patterns that are similar to each other.
SOM learning algorithms come in many variations. I sketched one representative exemplar. The core idea is always the same. Setting up a SOM learning algorithm
and tuning it is not always easy – the weight initialization, the decrease schedule
for r, the learning rate, the random sampling strategy of training patterns from
P, the grid dimension (2 or 3 or even more... 2 is the most common choice),
or the pattern preprocessing (for instance, normalizing all patterns to the same
norm) are all design decisions that can have a strong impact on the convergence
properties and final result quality of the learning process.
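To illustrate the core idea (a minimal sketch of my own, not any particular published variant), here is one way to write the basic SOM training loop in numpy, with a Gaussian neighborhood function and a shrinking radius. The BMU is found here by Euclidean distance, and the initialization, learning rate and radius schedule are arbitrary design choices of the kind just discussed.

```python
import numpy as np

def train_som(P, K=8, L=8, n_iter=5000, lr=0.5, r_start=4.0, r_end=0.5, seed=0):
    """P: N x n array of training patterns. Returns a K x L x n array of weight vectors."""
    rng = np.random.default_rng(seed)
    N, n = P.shape
    W = rng.standard_normal((K, L, n)) * 0.1 + P.mean(axis=0)    # init near the data mean
    # grid coordinates of the neurons, used for the grid distance d(v, BMU)
    grid = np.stack(np.meshgrid(np.arange(K), np.arange(L), indexing="ij"), axis=-1)
    for t in range(n_iter):
        r = r_start * (r_end / r_start) ** (t / n_iter)           # shrinking radius
        x = P[rng.integers(N)]                                    # random training pattern
        dists = np.linalg.norm(W - x, axis=2)                     # distance of x to all weights
        bmu = np.unravel_index(np.argmin(dists), (K, L))          # best matching unit
        # other variants rate responses by the inner product w(v)'x instead of distance
        grid_dist = np.linalg.norm(grid - np.array(bmu), axis=2)  # distance on the neuron grid
        f = np.exp(-grid_dist**2 / r**2)                          # neighborhood efficacy f_r
        W += lr * f[:, :, None] * (x - W)                         # pull weights toward x
    return W
```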
Yin 2008 includes a brief survey of SOM algorithm variants. I mention in
passing an algorithm which has its roots in SOMs but is quite significantly differ-
ent: The Neural Gas algorithm (good brief intro in https://en.wikipedia.org/
wiki/Neural_gas), like SOMs, leads to a collection of neurons which respond to
patterns from a training set through trainable weight vectors. The main difference
is that the neurons are not spatially arranged on a grid but are spatially uncoupled
from each other (hence, neural “gas”). The spatially defined distance d appearing
in the adaptation efficacy term fr (d(vkl , vBMU )) is replaced by a rank ordering: the
neuron with the best response to training pattern x (i.e., the BMU) is adapted
most, the unit v with the second best response (i.e., second largest value of w(v)′ x)
is adapted second most strongly, etc.
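For comparison, a minimal sketch of the rank-based adaptation step of Neural Gas (a single iteration); the decay constant lam is my own illustrative choice, and responses are measured here by distance.

```python
import numpy as np

def neural_gas_step(W, x, lr=0.3, lam=2.0):
    """W: k x n array of neuron weight vectors, x: one training pattern of length n."""
    dists = np.linalg.norm(W - x, axis=1)      # responses measured by distance to x
    ranks = np.argsort(np.argsort(dists))      # 0 = best matching, 1 = second best, ...
    efficacy = np.exp(-ranks / lam)            # adaptation strength decays with the rank
    return W + lr * efficacy[:, None] * (x - W)
```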
For a quick SOM demo I used a version of a legacy Matlab toolbox published
by Kohonen’s own research group (I downloaded it almost 20 years ago). As a
pattern dataset P I used the Digits dataset that I also used before in this section.
I used 100 examples from each digit class. Figure 23 shows the result of training
an 8 × 8 neuron grid on this dataset. As expected, the ten digit classes become
represented each by approximately the same number of grid neurons, reflecting
the fact that the classes were represented in P in equal shares.
Just for the fun of it, I also generated an unbalanced pattern set that had 180
examples of class “5” and 20 examples of every other class. The result is shown
in Figure 24. As it should be, roughly half of the SOM neurons are covering the
“5” class examples, while the other SOM neurons reach out their weight vectors
into the remaining classes. This is a desirable effect: SOM training on the basis
of unbalanced datasets will lead to a higher resolution for the pattern sorts that
occur more often in the dataset (more grid neurons covering the more often found
sorts of data points).
Practical uses of SOMs appear to have been mostly 2-dimensional visualizations of high-dimensional datasets.
Figure 23: Result of training an 8 × 8 SOM grid on the balanced Digits dataset. Each grid position is labeled with the digit class to which its neuron's weight vector responds best; the ten classes occupy roughly equal shares of the grid.
Figure 24: Similar to previous figure, but with an unbalanced pattern set where
the number of “5” examples was the same as the total number of all other classes.
Figure 25: The SOM algorithm reproducing biological cortical response patterns.
The scenario: an anaesthetized but eyes-open ferret is shown moving images of a
bar with varying orientation and direction of movement, while response activity
from neurons on a patch (about 10 square mm) of visual cortex is recorded. A.
A color-coded recording of neural response activity depending on the orientation
of the visually presented bar. For instance, if the bar was shown in horizontal
orientation, the neurons who responded to this orientation with maximal activity
are rendered in red. B. Like in panel A., but for the motion direction of the bar
(same cortical patch). C., D. Artificial counterparts of A., B. generated with a SOM algorithm. Figures taken from Swindale and Bauer 1998, who in turn took panels A. and B. from Weliky, Bosking, and Fitzpatrick 1996.
In data compression, a real-valued vector will become encoded in a natural number, the
index of its associated codebook vector. Again there is a close connection
with probability, established through Shannon information theory.
Model reduction is a term used when it comes to trimming down not just static
“data points” but entire dynamical “system models”. All branches of sci-
ence and engineering today deal with models of complex physical dynamical
systems which are instantiated as systems of coupled differential equations
— often millions, sometimes billions ... or more ... of them. One obtains
such gargantuan systems of coupled ODEs almost immediately when one
discretizes system models expressed in partial differential equations (PDEs).
Such systems cannot be numerically solved on today's computing hardware
and need to be dramatically shrunk before a simulation can be attempted.
I am not familiar with this field. The mathematical tools and leading intu-
itions are different from the ones that guide dimension reduction in machine
learning. I mention this field for completeness and because the name “model
reduction” invites misleading analogies with dimension reduction. Antoulas
and Sorensen 2001 give a tutorial overview with instructive examples.
In this section we took a look at three methods for dimension reduction of high-
dimensional “raw” data vectors, namely K-means clustering, PCA, and SOMs.
While at first sight these methods appear quite different from each other, there is
a unifying view which connects them. In all three of them, the reduction was done
by introducing a comparatively simple kind of geometric object in the original
high-dimensional pattern space Rn , which was then used to re-express raw data
points in a lightweight encoding:
1. In K-means-clustering, this object is the set of codebook vectors, which can
be used to compress a test data point to the mere natural number index
of its associated codebook vector; or which can be used to give a reduced
K-dimensional vector comprised of the distances αj to the codebook vectors.
2. In PCA, this object is the m-dimensional linear (affine) hyperplane spanned
by the first m eigenvectors of the data covariance matrix. An n-dimensional
test point is represented by its m coordinates in this hyperplane.
3. In SOMs, this object is the “crumpled-up” grid of SOM neurons v, including
their associated weight vectors w(v). A new test data point can be com-
pressed to the natural number index of its BMU. This is entirely analogous
to how the codebook vectors in K-means clustering can be used. Further-
more, if an m-dimensional neuron grid is used (we considered only m = 2
above), the grid plane is nonlinearly “folded” into the original dataspace Rn ,
in that every grid point v becomes projected to w(v). It thus becomes an m-
dimensional manifold embedded in Rn , which is characterized by codebook
vectors. This can be seen as a nonlinear generalization of the m-dimensional
linear affine hyperplanes embedded in Rn in PCA.
Thus, the SOM shares properties with both K-means clustering and PCA. In
fact, one can systematically explore a whole spectrum of dimension reduction /
data compression algorithms which are located between K-means clustering and
PCA, in the sense that they describe m-dimensional manifolds of different degrees
of nonlinearity through codebook vectors. K-means clustering is the extreme case
that uses only codebook vectors and no manifolds; PCA is the other extreme with
only manifolds and no codebook vectors. The extensive Preface to the collec-
tion volume Principal Manifolds for Data Visualization and Dimension Reduction
(Gorban et al. 2008) gives a readable intro to this interesting field.
In today’s deep learning practice one often ignores the traditional methods
treated in this section. Instead one immediately fires the big cannon, training a
deep neural network wired up in an auto-encoder architecture. An autoencoder
network is a multilayer feedforward network whose output layer has the same large
dimension n as the input layer. It is trained in a supervised way, using training
data (xi , yi ) to approximate the identity function: the training output data yi are
identical to the input patterns xi (possibly up to some noise added to the inputs).
The trick is to insert a “bottleneck” layer with only m ≪ n neurons into the
layer sequence of the network. In order to achieve a good approximation of the
n-dimensional identity map on the training data, the network has to discover an
n → m-dimensional compression mapping which preserves most of the information
that is needed to describe the training data points. I will not give an introduction
to autoencoder networks in this course (it’s a topic for the “Deep Learning” course
given by Matthia Sabatelli). The Deep Learning standard reference I. Goodfellow,
Bengio, and Courville 2016 has an entire section on autoencoders.
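Just to show the shape of such a network, here is a minimal autoencoder sketch in PyTorch (my own choice of framework, layer sizes and training loop; it is not material from that course):

```python
import torch
import torch.nn as nn

n, m = 240, 32                                  # input dimension and bottleneck size
autoencoder = nn.Sequential(
    nn.Linear(n, 120), nn.ReLU(),               # encoder: n -> ... -> m
    nn.Linear(120, m), nn.ReLU(),
    nn.Linear(m, 120), nn.ReLU(),               # decoder: m -> ... -> n
    nn.Linear(120, n),
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.rand(1000, n)                         # stand-in for the training patterns x_i
for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(autoencoder(X), X)           # targets y_i are the inputs themselves
    loss.backward()
    optimizer.step()
```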
5 Discrete symbolic versus continuous real-valued
I hope this section will be as useful as it will be short and simple. Underneath
it, however, lurks a mysterious riddle of mathematics, philosophy and the neuro-
sciences.
Some data are given in symbolic form, for instance
• financial data,
fields of mathematics arise from crossover formalisms between the Discrete and
the Continuous. The fundamental difference between the two is not dissolved in
these theories, but the tension between the Discrete and the Continuous sets free
new forms of mathematical energy. Sadly, these lines of research are beyond what
I understand and what I can explain (or even name), and certainly beyond what
is currently used in machine learning.
The hiatus (an educated word of Latin origin, meaning “dividing gap”) between
the Discrete and the Continuous is also the source of one of the great unresolved
riddles in the neurosciences, cognitive science and AI: how can symbolic reasoning
(utterly discrete) emerge from the continuous matter and signal processing in
our material brains (very physical, very continuous)? This question has kept AI
researchers and philosophers busy (and sometimes aggressively angry with one
another) for 5 decades now and is not resolved; if you are interested, you can get
a first flavor in the Wikipedia article on “Physical symbol system” or by reading
up on the overview articles listed in http://www.neural-symbolic.org/.
Back to our down-to-earth business. Machine learning formalisms and al-
gorithms likewise are often either discrete-flavored or continuous-flavored. The
former feed on symbolic data and create symbolic results, using tools like de-
cision trees, Bayesian networks and graphical models (including hidden Markov
models), inductive logic, and certain sorts of neural networks where neurons have
0-1-valued activations (Hopfield networks, Boltzmann machines). The latter di-
gest vector data and generate vector output, with deep neural network methods
currently being the dominating workhorse which overshadow more traditional ones
like support vector machines and various sorts of regression learning “machines”.
The great built-in advantage of discrete formalisms is that they lend themselves
well to explainable, human-understandable solutions. Their typical disadvantage
is that learning or inference algorithms are often based on combinatorial search,
which quickly lets computing times explode. In contrast, continuous formalisms
typically lead to results that cannot be intuitively interpreted – vectors don’t talk
– but lend themselves to nicely, smoothly converging optimization algorithms.
When one speaks of “machine learning” today, one mostly has vector processing
methods in mind. Also this RUG course is focussing on vector data. The discrete
strands of machine learning are more associated with what one often calls “data
mining”. This terminology is however not clearly defined. I might mention that I
have a brother, Manfred Jaeger (http://people.cs.aau.dk/~jaeger/), who is
also a “machine learning” prof, but he is from the discrete quarters while I am
more on the continuous side. We often meet and talk, but not about science,
because neither of us understands what the other one researches; we publish in
different journals and go to different conferences.
Sometimes one has vector data but wants to exploit benefits that come with
discrete methods, or conversely, one has symbolic data and wants to use a neural
network (because everybody else seems to be using them, or because one doesn’t
want to fight with combinatorial explosions). Furthermore, many an interesting
dataset comes as a mix of symbolic-discrete and numerical-continuous data – for
instance, data originating from questionnaires or financial/business/admin data
often are mixed-sort.
Then, one way to go is to convert discrete data to vectors or vice versa. It is a
highly empowering professional skill to know about basic methods of discrete ↔
continuous conversions.
Here are some discrete-to-continuous transformations:
One-hot encodings. Given: data points aν that are symbols from a finite “al-
phabet” A = {a1 , . . . , ak }. Examples: yes/no answers in a questionnaire;
words from a vocabulary; nucleic acids A, C, G, T occurring in DNA. Turn
each aν into the k-dimensional binary vector vν ∈ {0, 1}k which is zero every-
where except at position ν. This is a very common way to present symbolic
input to neural networks. On the output side of a neural network (or any
other regression learning machine), one-hot encodings are also often used to
give vector teacher data in classification tasks: if (xi , ci ) is a classification-
task training dataset, where ci is a symbolic class label from a class set
C = {c1 . . . , ck }, transform each ci to its k-dimensional one-hot vector vi
and get a purely vector-type training dataset (xi, vi). (A minimal code sketch of one-hot encoding follows after this list.)
Binary pattern encodings. If the symbol alphabet A is of a large size k, one
might shy away from one-hot encoding because it gives large vectors. In-
stead, encode each aν into a binary vector of length dlog2 ke, where dlog2 ke
is the smallest integer larger or equal to log2 k. For example, the alphabet
{a, b, c, d} would be encoded in the vectors {[0, 0]′, [0, 1]′, [1, 0]′, [1, 1]′}. Advan-
tage: small vectors. Disadvantage: subsequent vector-processing algorithms
must invest substantial nonlinear effort into decoding this essentially arbi-
trary encoding. This will often be crippling, and binary pattern encodings
should only be considered if there is some intuitive logic in the encoding.
Linear scale encoding. If there is some natural ordering in the symbols aν ∈ A,
encode every aν by the index number ν. Makes sense, for instance, when
the aν come from a Likert-scale questionnaire item, as in
A = {certainly not, rather not, don’t know, rather yes, certainly yes}.
lie at a great distance to both v and v ′ . So the goal is to find vectors v, v ′ , v ′′
in this example that have small distance kv−v ′ k and large distances kv−v ′′ k,
kv ′ − v ′′ k. This is achieved by measuring the similarity of two words w, w′
through counting how often they occur in similar locations in training texts.
A large collection of English texts is processed, collecting statistics about
similar sub-phrases in those texts that differed only in the two words whose
similarity one wished to assess (plus, there was another trick: can you think
of an important improvement of this basic idea?). – Mikolov’s paper has
been (Google-scholar) cited 30,000 times in merely eight years!
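Here is the promised minimal sketch of one-hot encoding in numpy (the DNA alphabet is just an example and all names are mine):

```python
import numpy as np

alphabet = ["A", "C", "G", "T"]                       # symbol alphabet of size k
index = {a: nu for nu, a in enumerate(alphabet)}

def one_hot(symbol, k=len(alphabet)):
    """Return the k-dimensional binary vector that is 1 only at the symbol's position."""
    v = np.zeros(k)
    v[index[symbol]] = 1.0
    return v

sequence = ["G", "A", "T", "T"]
V = np.stack([one_hot(s) for s in sequence])          # one encoded pattern per row
```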
• Discretizing a continuous value range into bins with adaptive bin bound-
aries is the key for using decision trees (which are discrete models) for
continuous attribute ranges.
Multi-dimensional discretization by hierarchical refinement. If one wants
to discretize a set {xi } of n-dimensional vectors, one has to split the n-
dimensional volume which contains the points {xi } into a finite set of discrete
regions Rν . A common approach is to let these regions be n-dimensional
hypercubes. By a process of hierarchical refinement one constructs these re-
gions Rν such that in areas where there is a higher point density, or in areas
where there is much fine-grained information encoded in the local point dis-
tribution, one uses smaller hypercubes to increase resolution. This leads to
a tree structure, which in the case n = 2 is called a quadtree (because every
non-leaf node has four children) and in the case n = 3 an octree of hierar-
chically nested hypercubes. This tree structure enables a computationally
efficient indexing of the hypercubes. The left panel in Figure 26 shows an
example. The Wikipedia article on quadtrees is quite instructive. One may
also opt for regions Rν of other polygonal shape (see right panel in Figure
26). There are many such mesh refinement methods, with task-specific opti-
mization criteria. They are not typically used in machine learning but rather
in methods for simulating spatiotemporal (fluid) dynamics by numerically
solving PDEs. Still, it is good to know that such methods exist.
Vector quantization. Using K-means or other clustering algorithms, the vector
set {xi } is partitioned into k cells whose center of gravity vectors are indexed
and the indices are used as symbolic encodings of the {xi}. We have seen this in Section 4.2. This is a typical machine learning method for discretization (a minimal code sketch follows after this list).
Turning neural dynamics into symbol sequences. When we (I really mean
“we”, = us humans!) speak or write, the continuous-time, continuous-valued
neural brain dynamics leads to a discrete sequence of words. Somehow, this
symbolic sequence is encoded in the “subsymbolic”, continuous brain dy-
namics. It is unknown how, exactly, this encoding is realized. Numerous
proposals based on nonlinear dynamical systems theory have been made.
This is an area of research in which I am personally engaged. If you are in-
terested: some approaches are listed in Durstewitz, Seamans, and Sejnowski
2000, Pascanu and Jaeger 2011, Fusi and Wang 2016. In machine learning,
the problem of transforming continuous-valued neural state sequences to se-
quences of words (or letters) arises in applications like speech recognition
(“speech-to-text”) or gesture recognition. Here the most common solution
(which is not necessarily biologically plausible) is to use the core neural net-
work to generate a sequence of continuous-valued hypothesis output vectors
with as many components as there are possible target symbols. At each
point in time, the numerical value of each vector component reflects a cur-
rent “degree of belief” about which symbol should be generated. With some
postprocessing mechanism (not easy to set up), this hypothesis stream is
denoised and turned into a symbol sequence, for instance by selecting at
each point in time the symbol that has the largest degree of belief.
6 The bias-variance dilemma and how to cope
with it
Reality is rich in detail and surprises. Measuring reality gives a finite number
of data points – and the reality between and beyond these data points remains
unmeasured, a source of unforeseeable surprises. Furthermore, measuring is almost
always “noisy” – imprecise or prone to errors. And finally, each space-time event
point of reality can be measured in innumerable ways, of which only a small
choice is actually measured. For instance, the textbook measurement of “tossing
a coin” events is just to record “heads” or “tail”. But one could also measure
the weight and color of the coin, the wind speed, the falling altitude, the surface
structure of the ground where the coin falls, and ... everything else. In summary,
any dataset will capture only a supremely scarce coverage of space-time events
whose unfathomable qualitative richness has been reduced to a ridiculously small
number of “observables”. From this impoverished, punctuated information basis,
called “training data”, a machine learning algorithm will generate a “model” of a
part of reality. This model will then be used to predict properties of new space-
time events, called “test inputs”. How should it be ever possible for a model to
“know” anything about those parts of reality that lie between the pinpoints hit
by the training data?
This would be a good point to give up.
Figure 27: He gave the reason why machine learning works (from https:
//commons.wikimedia.org/wiki/File:Albert_Einstein_-_Colorized.jpg)
But ... quoting Einstein: “Subtle is the Lord, but malicious He is not” (https:
//en.wikiquote.org/wiki/Albert_Einstein). Reality is co-operative. Between
measurement points, reality changes not arbitrarily but in a lawful manner. The
question is, which law. In machine learning, this question is known as the problem
of overfitting, or in more educated terms, the bias-variance dilemma. Welcome
to this chapter which is all about this question. For all your practical exploits
of machine learning in your professional future, this is the most important and
enabling chapter in this course. If you feel the menace of overfitting staring at you, this
chapter shows you how to speak to it and tame it, and your final trained model
will be so much better in generalizing from your training data to the data that
your customers will have.
First stage: dimension reduction by PCA:
1. Center the set of image vectors (x_i^train)_{i=1,…,1000} by subtracting the mean vector µ, obtaining (x̄_i^train)_{i=1,…,1000}.
2. Assemble the centered image vectors column-wise in a 240 × 1000 matrix X̄. Compute the covariance matrix C = 1/1000 X̄ X̄′ and factorize C into its SVD C = U Σ U′. The columns of U are the 240 principal component vectors of C.
3. Decide how strongly you want to reduce the dimension, shrinking it from n = 240 to m < n. Let Um be the matrix made from the first m columns of U.
4. Project the centered patterns x̄_i^train on the first m principal components, obtaining m-dimensional feature vectors f_i^train = Um′ x̄_i^train.
Vectorize the class labels by one-hot encoding: Each c_i^train is re-written as a binary 10-dimensional vector v_i^train which has a 1 entry at the position corresponding to class c_i. Assemble these vectors column-wise in a 10 × 1000 matrix V.
Compute a linear regression classifier: Assemble the 1000 m-dimensional feature vectors f_i^train into an m × 1000 matrix F and obtain a 10 × m regression weight matrix W by

W′ = (F F′)^{−1} F V′ .
Compute the training MSE and training error rate: The training mean square error (MSE) is given by

MSE^train = 1/1000 Σ_{i=1}^{1000} ‖v_i^train − W f_i^train‖²

and the training misclassification rate by

ϱ^train = 1/1000 |{i | maxInd(W f_i^train) − 1 ≠ c_i^train}|,

where maxInd picks the index of the maximal element in a vector. (Note: the “−1” is owed to the fact that the vector indices range from 1 to 10, while the class labels go from 0 to 9.)
Compute the testing MSE and testing error rate: Analogously, for the test data compute

MSE^test = 1/1000 Σ_{i=1}^{1000} ‖v_i^test − W f_i^test‖²

and

ϱ^test = 1/1000 |{i | maxInd(W f_i^test) − 1 ≠ c_i^test}|,

where f_i^test = Um′ (x_i^test − µ). Note: for centering the test data, use the mean µ obtained from the training data!
This is a procedure made from basic linear operations that you should com-
mand even when sleepwalking; with some practice the entire thing should not take
you more than 30 minutes for programming and running. Altogether a handy
quick-and-not-very-dirty routine that you should consider carrying out in every
classification task, in order to get a baseline before you start exploring more so-
phisticated methods.
And now - let us draw what is probably the most helpful graphic in these lecture notes, worth being burnt into your subconscious. Figure 28 shows these
diagnostics for all possible choices of the number m = 1, . . . , 240 of PC features
used.
This plot visualizes one of the most important issues in (supervised) machine
learning and deserves a number of comments.
Figure 28: Dashed: Train (blue) and test (red) MSE obtained for m = 1, . . . , 240
PCs. Solid: Train (blue) and test (red) misclassification rates. The y-axis is
logarithmic base 10.
• The analogous performance curves for the testing MSE and misclassification
first exhibit a decrease, followed by an increasing tail. The testing misclas-
sification rate is minimal for m = 34.
• This “first decrease, then increase” behavior of testing MSE (or classifica-
tion rate) is always observed in supervised learning tasks when models are
compared which have growing degrees of data fitting flexibility. In our digit
example, this increase in flexibility was afforded by growing numbers of PC
features, which in turn gave the final linear regression a richer repertoire of
feature values to combine into the hypothesis vectors.
• The increasing tail of testing MSE (or classification rate) is the hallmark
of overfitting. When the learning algorithm admits too much flexibility, the
resulting model can fit itself not only to what is “lawful” in the training data,
but also to the random fluctuations in the training data. Intuitively and
geometrically speaking, a learning algorithm that can shape many degrees of
freedom in its learnt models allows the models to “fold in curls and wiggles to
accommodate the random whims of the training data”. But then, the random
curls and wiggles of the learnt model will be at odds with fresh testing data.
Figure 29: An example of training data (red squares) obtained from a noisy ob-
servation of an underlying “correct” function sin(2 π x) (dashed blue line).
We now want to solve the task of learning a good approximation for h from the
training data (xi , yi ) by applying polynomial curve fitting, an elementary technique
you might be surprised to meet here as a case of machine learning. Consider an
m-th order polynomial
p(x) = w_0 + w_1 x + · · · + w_m x^m .   (36)
Figure 30: Fitting polynomials (green lines) for polynomial orders 1, 3, 10 (from
left to right).
If we compute the MSE’s for the three orders m = 1, 3, 10, we get MSEtrain =
0.4852, 0.0703, 0.0000 respectively. Some observations:
• For m = 3, we get a polynomial that hits our target sine apparently quite
well.
• For m = 10, we get a polynomial that perfectly matches the training data,
but apparently misses the target sine function (overfitting).
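This little experiment is easy to reproduce with numpy's polynomial fitting (my own sketch; the noisy sine data are generated on the spot, and numpy may warn that the order-10 fit is poorly conditioned):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 11)                        # 11 sample points
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(x.size)   # noisy observations

for m in (1, 3, 10):
    w = np.polyfit(x, y, deg=m)                  # least-squares fit of an order-m polynomial
    mse_train = np.mean((np.polyval(w, x) - y) ** 2)
    print(m, round(mse_train, 4))                # training MSE shrinks to ~0 for m = 10
```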
However, please switch now into your most critical thinking mode and recon-
sider what I have just said. Why, indeed, should we judge the linear fit “under-
fitting”, the order-3 fit “seems ok”, and the order-10 fit “overfitting”? There is
no other ground for justifying these judgements than our visual intuition! In fact,
the order-10 fit might be the right one if the data contain no noise! The order-1
fit might be the best one if the data contain a lot of noise! We don’t know!
Figure 31: Estimating a pdf from 6 data points. Model flexibility grows from left
to right. Note the different scalings of the z-axis: the integral of the pdf is 1 in
each of the three cases.
presumably underfitting. When the flexibility is too large, each individual training
point can be “lasso-ed” by a sling of the decision boundary, presumably overfitting.
Do I need to repeat that while these graphics seem to indicate under- or over-
fitting, we do not actually know?
Figure 33: A nightmare case of overfitting. Picture spotted by Yasin Cibuk (2018
ML course participant) on http://dominicwilcox.com/portfolio/bed/ – now
no longer accessible –, designed and crafted by artist Dominic Wilcox, 1999. Quote
from the artist’s description of this object: “I used my own body as a template for
the mattress”. From a ML point of view, this means a size N = 1 training data
set.
methods to measure and tune the flexibility of a learning algorithm have their
specific names, and it is these names that you will find in the literature. The
most famous among them is model capacity. This concept has been developed in a
field now called statistical learning theory, and (not only) I consider it a highlight
of modern ML theory. We will however not treat the concept of model capacity
in this course, since it is not an easy concept, and it needs to be spelled out in
different versions for different types of learning tasks and algorithms. Check out
en.wikipedia.org/wiki/Vapnik-Chervonenkis_theory if you want to get an
impression. Instead, in Sections 6.4.1 and 6.4.2 I will present two simpler methods
for handling modeling flexibility which, while they lack the analytical beauty and
depth of the model capacity concept, are immensely useful in practice.
I emphasize that finding the right flexibility for a learning algorithm is ab-so-lu-
te-ly crucial for good performance of ML algorithms. Our little visual examples do
not do justice to the dismal effects that overfitting may have in real-life learning
tasks where a high dimension of patterns is combined with a small number of
training examples — which is a situation faced very often by ML engineers in
practical applications.
reading this section, make sure that you understand the difference between the
expectation of a random variable and the sample mean (explained in Appendix D).
In supervised learning scenarios, one starts from a training sample of the form
(xi , yi )i=1,...,N , which is drawn from a joint distribution PX,Y of two random vari-
ables X and Y . So far, we have focussed on tasks where the xi were vectors and
the yi were class labels or numbers or vectors, but supervised learning tasks can
be defined for any kind of variables. In this subsection we will take an abstract
view and just consider any kind of supervised learning based on a training sam-
ple (xi , yi )i=1,...,N . We then call the xi the arguments and the yi the targets of
the learning task. The target RV Y is also sometimes referred to as the teacher
variable.
The argument and target variables each come with their specific data value
space – a set of possible values that the RVs X and Y may take. For instance,
in our digit classification example, the data value space for X was R240 (or more
constrained, [0, 1]240 ) and the data value space for Y was the set of class labels
{0, 1, . . . , 9}. After the extraction of m features and turning the class labels to
class indicator vectors, the original picture–label pairs (xi , yi ) turned into pairs
(fi , vi ) with data value spaces Rm , {0, 1}k respectively. In English-French sentence
translation tasks, the data value spaces of the random variables X and Y would
be a (mathematical representation of a) set of English and French sentences. We
now abstract away from such concrete data value spaces and write EX , EY for the
data value spaces of X and Y .
Generally speaking, the aim of a supervised learning task is to derive from
the training sample a function h : EX → EY . We called this function a decision
function earlier in these lecture notes, and indeed that is the term which is used
in abstract statistical learning theory.
Some of the following I already explained before (Section 2.3) but it will be
helpful if I repeat this here, with a little more detail.
The decision function h obtained by a learning algorithm should be optimized
toward some objective. One introduces the concept of a loss function. A loss
function is a function
L : EY × EY → R≥0 . (37)
The idea is that a loss function measures the “cost” of a mismatch between
the target values y and the values h(x) returned by a decision function. Higher
cost means lower quality of h. We have met two concrete loss functions so far:
• A loss that counts misclassifications in pattern classification: when the de-
cision function returns a class label, define
L(h(x), y) = 0 if h(x) = y, and L(h(x), y) = 1 if h(x) ≠ y .   (38)
• A quadratic loss for numerical targets, L(h(x), y) = ‖h(x) − y‖². This loss is often just called “quadratic loss”. We used it as a basis for deriving the algorithm for linear regression in Section 3.1.
The risk of a decision function h is its expected loss R(h) := E[L(h(X), Y)] under the true joint distribution P_{X,Y}. Since this distribution is unknown, the risk cannot be minimized directly; instead one falls back on the empirical risk

1/N Σ_{i=1}^{N} L(h(x_i), y_i),

which is just the mean loss calculated over the training examples. Minimizing this empirical risk is an achievable goal, and a host of optimization algorithms for all kinds of supervised learning tasks exist which do exactly this, that is, they find

h_opt = argmin_{h ∈ H} 1/N Σ_{i=1}^{N} L(h(x_i), y_i) .   (42)
The set H is the hypothesis space – the search space within which a learning algorithm may look for an optimal h.
It is important to realize that every learning algorithm comes with a specific
hypothesis space. For instance, in decision tree learning H is the set of all decision
trees that use a given set of properties and attributes. Or, in linear regression, H
is the set of all affine linear functions from Rn to Rk . Or, if one sets up a neural
network learning algorithm, H is typically the set of all neural networks that have
a specific connection structure (number of neuron layers, number of neurons per
layer); the networks in H then differ from each other by the weights associated
with the synaptic connections.
The empirical risk is often – especially in numerical function approximation
tasks – also referred to as training error.
While minimizing the empirical loss is a natural way of coping with the im-
possibility of minimizing the risk, it may lead to decision functions which combine
a low empirical risk with a high risk. This is the ugly face of overfitting which I
highlighted in the previous subsection. In extreme cases, one may learn a decision
function which has zero empirical risk and yet has an extremely large expected
testing error which makes it absolutely useless.
There is no easy or general solution for this conundrum. It has spurred statis-
ticians and mathematicians to develop a rich body of theories which analyze the
relationships between risk and empirical risk, and suggest insightful strategies to
manage as well as one can in order to keep the risk within provable bounds. These
theories, sometimes referred to as statistical learning theory (or better, theories),
are beyond the scope of this lecture.
If you are in a hardcore mood and if you have some background in proba-
bility theory, you can inspect Section 18 of lecture notes of my legacy “Princi-
ples of Statistical Modeling” course (online at https://www.ai.rug.nl/minds/
uploads/LN_PSM.pdf). You will find that the definitions of loss and risk given
there in the spirit of mathematical statistics are a bit more involved than what I
presented above, but the definitions of loss and risk that I gave here are used in
textbooks on machine learning.
Figure 34: The generic, universal, core challenge of machine learning: finding the
right model flexibility which gives the minimal risk.
There are many, quite different, ways of tuning flexibility of a learning algo-
rithm (that is, an algorithm that solves the optimization problem (42)). Note that
the word “flexibility” is only intuitive; how this is concretely formalized, imple-
mented and measured differs between the methods of adjusting this “flexibility”.
I proceed to outline the most common ones.
• In the Digits example (Figure 28) we found that the number m of principal
component features that were extracted from the raw image vectors was de-
cisive for the testing error. When m was too small, the resulting models were
too simple to distinguish properly between different digit image classes (un-
derfitting). When m was too large, overfitting resulted. Fixing a particular
m determines the class H of candidate decision functions within which the
empirical risk (42) is minimized. Specifically, using a fixed m meant that the
optimal decision function hopt was selected from the set Hm which contains
all decision functions which first extract m principal component features be-
fore carrying out the linear regression. It is clear that Hm−1 is contained in
Hm , because decision functions that only combine the first m − 1 principal
component features into the hypothesis vector can be regarded as special
cases of decision functions that combine m principal component features
into the hypothesis vector, namely those whose linear combination weight
for the m-th feature is zero.
• In the polynomial curve fitting example from Section 6.2.1, the model parameters were the monomial coefficients w_0, …, w_m (compare Equation 36). After fixing the polynomial order m, the optimal decision function p(x) was selected from the set H_m = {p : R → R | p(x) = Σ_{j=0}^{m} w_j x^j}. Again it is clear that H_{m−1} is contained in H_m.

In both examples, for each fixed flexibility level m the learning algorithm returns

h_opt_m = argmin_{h ∈ H_m} 1/N Σ_{i=1}^{N} L(h(x_i), y_i) .   (43)
So... how can we find the best model class mopt which gives us the best risk
– note: not the best empirical risk? Or stated in more basic terms, which model
class will give us the smallest expected test error? Expressed formally, how can
we find
m_opt = argmin_m R(h_opt_m) ?   (44)
The basic geometric intuition behind modeling flexibility is that low-flexibility
models should be “smooth”, “more linear”, “flatter”, admitting only “soft curva-
tures” in fitting data; whereas high-flexibility models can yield “peaky”, “rugged”,
“sharply twisted” curves (see again Figures 30, 31, 32).
When one uses model regularization, one fixes a single model structure and size
with a fixed number of trainable parameters, that is, one fixes H. Structure and
size of the considered model should be rich and large enough to be able to overfit
(!) the available training data. Thus one can be sure that the “right” model is
contained in the search space H. For instance, in our evergreen digit classification
task one would altogether dismiss the dimension reduction through PCA (which
has the same effect as using the maximum number m = 240 of PC components)
and directly use the raw picture vectors (padded by a constant 1 component to
enable affine linear maps) as argument vectors for training a linear regression
decision function. Or, in polynomial curve-fitting, one would fix a polynomial
order that clearly is too large for the expected kind of true curve.
The models in H are typically characterized by a set of trainable parameters.
In our example of digit classification through linear regression from raw images,
these trainable parameters are the elements of the regression weight matrix; in
polynomial curve fitting these parameters are the monomial coefficients. Following
the traditional notation in the machine learning literature we denote this collection
of trainable parameters by θ. This is a vector that has as many components as
there are trainable parameters in the chosen kind of model. We assume that we
have M tuneable parameters, that is θ ∈ RM .
Such a high-flexibility model type would inevitably lead to overfitting when an
“optimal” model would be learnt using the basic learning equation (42) which I
repeat here for convenience:
h_opt = argmin_{h ∈ H} 1/N Σ_{i=1}^{N} L(h(x_i), y_i) .

Overfitting is counteracted by adding a penalty term (the regularizer) R : R^M → R_{≥0}, weighted by a regularization coefficient α² ≥ 0, to this empirical risk, and computing instead

h_opt = argmin_{h ∈ H} 1/N Σ_{i=1}^{N} L(h(x_i), y_i) + α² R(θ_h) .   (45)
The design of a useful penalty term is up to your ingenuity. A good penalty
term should, of course, assign high penalty values to parameter vectors θ which
represent “wiggly” models; but furthermore it should be easy to compute and
blend well with the algorithm used for empirical risk minimization.
Two examples of such regularizers:
1. In the polynomial fit task from Section 6.2.1 one might consider for H all 10th
order polynomials, but penalize the “oscillations” seen in the right panel of
Figure 30, that is, penalize such 10th order polynomials that exhibit strong
oscillations. The degree of “oscillativity” can be measured, for instance, by
the integral over the (square of the) second derivative of the polynomial p,
R(θ) = R((w_0, …, w_10)) = ∫_0^1 (d²p(x)/dx²)² dx .
Investing a little calculus (good exercise! not too difficult), it can be seen
that this integral resolves to a quadratic form R(θ) = θ′ C θ where C is an
11 × 11 sized positive semi-definite matrix. That format is more convenient
to use than the original integral version.
2. A popular regularizer that often works well is simply the sum of the squared model parameters,

R(θ) = Σ_{w ∈ θ} w² .
This regularizer favors models with small absolute parameters, which often
amounts to “geometrically soft” models. This regularizer is popular among
other reasons because it supports simple algorithmic solutions for minimizing
risk functions that contain it. It is called the L2 -norm regularizer because
it measures the (squared) L2 -norm of the parameter vector θ.
A practical advantage of this regularization approach is that one need not experiment with differently structured models. One selects a model type with a very large
(unregularized) flexibility, which typically means to select a big model with many
parameters (maybe hundreds of thousands). Then all one has to do (and one
must do it – it is a most crucial part of any professionally done machine learning
project) is to find the optimal degree of flexibility which minimizes the risk. At
this moment we don’t know how to do that – the secret will be revealed later in
this section.
I introduced the word ”regularization” here for the specific flexibility-tuning
method of adding a penalty term to the loss function. The same word is however
often used in a generalized fashion to denote any method used for steering model
flexibility.
w_opt = argmin_{w ∈ R^n} Σ_{i=1}^{N} (w x_i − y_i)² ,   (46)
where (xi , yi )i=1,...,N is a set of training data with xi ∈ Rn , yi ∈ R. Like any other
supervised learning algorithm, linear regression may lead to overfitting solutions
wopt . It is always advisable to control the flexibility of linear regression with an
L2 norm regularizer, that is, instead of solving (46) go for
w_opt = argmin_{w ∈ R^n} 1/N Σ_{i=1}^{N} (w x_i − y_i)² + α² ‖w‖²   (47)
and find the best regularization coefficient α2 . The optimization problem (47)
admits a closed-form solution, namely the ridge regression formula that we have
already met in Equation 21. Rewriting it a little to make it match with the current
general scenario, here it is again:
w_opt′ = (1/N X X′ + α² I_{n×n})^{−1} 1/N X Y = (X X′ + N α² I_{n×n})^{−1} X Y,   (48)
where X = [x1 , . . . , xN ] and Y = (y1 , . . . , yN )′ .
In Section 3.1 I motivated the use of the ridge regression formula by its numerical stability. Now we see that a more fundamental reason to prefer
ridge regression over the basic kind of regression (46) is that it implements L2 norm
regularization. The usefulness of ridge regression as an allround simple baseline
tool for supervised learning tasks can hardly be overrated.
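A minimal numpy sketch of the ridge regression formula (48), with a small scan over candidate values of the regularization coefficient α² (the toy data and all names are mine):

```python
import numpy as np

def ridge_regression(X, Y, alpha_sq):
    """X: n x N argument matrix, Y: N-dimensional target vector (Eq. 48)."""
    n, N = X.shape
    return np.linalg.inv(X @ X.T / N + alpha_sq * np.eye(n)) @ (X @ Y) / N

# toy data: targets depend linearly on the arguments, plus noise
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 200))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
Y = w_true @ X + 0.3 * rng.standard_normal(200)

for alpha_sq in (0.0, 0.01, 0.1, 1.0):
    w = ridge_regression(X, Y, alpha_sq)
    print(alpha_sq, np.round(w, 2))              # larger alpha_sq shrinks the weights
```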
6.4.4 Tuning model flexibility through adding noise
Another way to tune model flexibility is to add noise. Noise can be added at
different places. Here I discuss three scenarios.
Use stochastic ensemble learning. We have seen this strategy in the presen-
tation of random forests. A stochastic learning algorithm is repeatedly ex-
ecuted with different random seeds (called Θ in Section 2.7). The stronger
the randomness of the stochastic learning algorithm, and the more members
are included in the ensemble, the stronger the regularization effect.
In order to determine whether a given model is under- or overfitting, one would
need to run it on test data that are “new” and not contained in the training data.
This would allow one to get a hold on the red curve in Figure 34.
However, at training time only the training data are available.
The idea of cross-validation is to artificially split the training data set D =
(xi , yi )i=1,...,N into two subsets T = (xi , yi )i∈I and V = (x′i , yi′ )i∈I ′ . These two
subsets are then pretended to be a ”training” and a ”testing” dataset. In the
context of cross-validation, the second set is called a validation set.
A bit more formally, let the flexibility axis in Figure 34 be parametrized by m,
where small m means “strong regularization”, that is, “go left” on the flexibility
axis. Let the range of m be m = 1, . . . , L.
For each setting of regularization strength m, the data in T is used to train an
optimal model hopt m . The test generalization performance on “new” data is then
tested on the validation set, for each m = 1, . . . , L. It is determined which model
hopt m performs best on the validation data. Its regularization strength m is then
taken to be the sought solution mopt to (44). After this screening of model classes
for the best test performance, a model within the found optimal regularization
strength is then finally trained on the original complete training data set D.
This whole procedure is called cross-validation. Notice that nothing has been
said so far about how to split D into T and V . This is not a trivial question: how
should D be best partitioned?
A clever way to answer this question is to split D into K subsets Dj of equal
size (j = 1, ..., K). Then carry out K complete screening runs via cross validation,
where in the j-th run the subset Dj is withheld as a validation set, and the
remaining K − 1 sets joined together make for a training set. After these K
runs, average the validation errors in order to find mopt . This is called K-fold
cross-validation. Here is the procedure in detail (where I use class inclusion as the
method for navigating through the flexibilities m):
Given: A set (xi , yi )i=1,...,N of training data, and a loss function L.
Also given: Some method which allows one to steer the model flexibility
along a regularization strength parameter m. The weakest regularization
should be weak enough to allow overfitting.
Step 1. Split the training data into K disjoint subsets Dj = (xi , yi )i∈Ij of
roughly equal size N ′ = N/K.
Step 2. Repeat for m = 1, . . . , L:
Step 2.1 Repeat for j = 1, . . . , K:
Step 2.1.1 Designate Dj as the validation set Vj and the union of the other Dj′ as the training set Tj.
Step 2.1.2 Compute the model with minimal training error on Tj,

h_opt_m_j = argmin_{h ∈ H_m} 1/|Tj| Σ_{(x_i, y_i) ∈ Tj} L(h(x_i), y_i).

Step 2.1.3 Compute its validation risk on the withheld set,

R^val_{m j} = 1/|Vj| Σ_{(x_i, y_i) ∈ Vj} L(h_opt_m_j(x_i), y_i).

Step 2.2 Average the K validation risks R^val_{m j} obtained from the “folds” carried out for this m, obtaining

R^val_m = 1/K Σ_{j=1,…,K} R^val_{m j}.

Step 3. Find the optimal class by looking for that m which minimizes the averaged validation risk:

m_opt = argmin_m R^val_m.

Step 4. Compute h_{m_opt} using the complete original training data set:

h_{m_opt} = argmin_{h ∈ H_{m_opt}} 1/N Σ_{i=1,…,N} L(h(x_i), y_i).
This procedure contains two nested loops and looks expensive. For economy,
one starts with the low-end m and increases it stepwise, assessing the generaliza-
tion quality through cross-validation for each regularization strength m, until the
validation risk starts to rise. The strength mopt reached at that point is likely to
be about the right one.
The best assessment of the optimal class is achieved when the original training
data set is split into singleton subsets – that is, each Dj contains just a single
training example. This is called leave-one-out cross-validation. It looks like a
horribly expensive procedure, but yet it may be advisable when one has only a
small training data set, which incurs a particularly large danger of ending up with
poorly generalizing models when a wrong model flexibility was used.
K-fold cross-validation is widely used – it is a de facto standard procedure
in supervised learning tasks when the computational cost of learning a model is
affordable.
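A minimal numpy sketch of K-fold cross-validation, using the polynomial order from Section 6.2.1 as the flexibility parameter m (the data generation, fold count and all names are my own choices):

```python
import numpy as np

def kfold_cv_poly(x, y, orders, K=5, seed=0):
    """Return the polynomial order with the smallest average validation MSE."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, K)                      # K disjoint index subsets D_j
    val_risk = []
    for m in orders:
        risks = []
        for j in range(K):
            val = folds[j]                              # D_j plays the validation set
            train = np.concatenate([folds[i] for i in range(K) if i != j])
            w = np.polyfit(x[train], y[train], deg=m)   # minimal training error on T_j
            risks.append(np.mean((np.polyval(w, x[val]) - y[val]) ** 2))
        val_risk.append(np.mean(risks))                 # average over the K folds
    return orders[int(np.argmin(val_risk))]

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(x.size)
print(kfold_cv_poly(x, y, orders=range(1, 11)))         # typically returns a small order
```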
statistician could give us a formal derivation of the distribution of ĥ(x) from PX,Y
and knowledge of A, but we don’t need that for our purposes here. The only insight
that we need to take home at this point is that for a fixed x, ĥ(x) is a random
variable whose value is determined by the drawn training sample (xi , yi )i=1,...,N ,
and which has an expectation which we write as Eretrain [ĥ(x)] to indicate that the
expectation is taken over all possible training runs with freshly drawn training
data.
Understanding this point is the key to understanding the inner nature of un-
der/overfitting.
If you feel that you have made friends with this Eretrain [ĥ(x)] object, we can
proceed. The rest is easy compared to this first conceptual clarification.
Without proof I note the following, intuitively plausible fact. Among all deci-
sion functions (from any candidate space H), the quadratic risk (49) is minimized
by the function
∆(x) = EY |X=x [Y ], (50)
that is, by the expectation of Y given x. This function ∆ : Rn → R, x 7→ E[Y |X =
x] is the gold standard for minimizing the quadratic risk; no learning algorithm
can give a better result than this. Unfortunately, of course, ∆ remains unknown
because the underlying true distribution PX,Y cannot be exactly known.
Now fix some x and ask by how much ĥ(x) deviates, on average and in the
squared error sense, from the optimal value ∆(x). This expected squared error is
Eretrain [(ĥ(x) − ∆(x))2 ].
We can learn more about this error if we re-write (ĥ(x) − ∆(x))2 as follows:
(ĥ(x) − ∆(x))2 = (ĥ(x) − Eretrain [ĥ(x)] + Eretrain [ĥ(x)] − ∆(x))2
= (ĥ(x) − Eretrain [ĥ(x)])2 + (Eretrain [ĥ(x)] − ∆(x))2
+2 (ĥ(x) − Eretrain [ĥ(x)]) (Eretrain [ĥ(x)] − ∆(x)). (51)
Now observe that

Eretrain [(ĥ(x) − Eretrain [ĥ(x)]) (Eretrain [ĥ(x)] − ∆(x))] = 0,   (52)

which holds because the second factor is a constant with respect to Eretrain and

Eretrain [ĥ(x) − Eretrain [ĥ(x)]] = Eretrain [ĥ(x)] − Eretrain [Eretrain [ĥ(x)]]
= Eretrain [ĥ(x)] − Eretrain [ĥ(x)]
= 0.
Inserting (52) into (51) and taking the expectation Eretrain on both sides of (51) finally gives us

Eretrain [(ĥ(x) − ∆(x))²] = (Eretrain [ĥ(x)] − ∆(x))² + Eretrain [(ĥ(x) − Eretrain [ĥ(x)])²],

that is, the expected squared deviation from the optimal prediction ∆(x) decomposes into a squared bias term (how far the average learnt model lies from ∆(x)) and a variance term (how much the learnt model scatters around its own average over retraining runs).
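The decomposition can be made tangible with a small simulation (entirely my own construction): re-draw training samples many times, refit a polynomial of fixed order, and estimate variance and squared bias of ĥ(x) at one fixed test point x.

```python
import numpy as np

rng = np.random.default_rng(0)
x_test, order, N, runs = 0.3, 3, 11, 2000
delta = np.sin(2 * np.pi * x_test)                 # optimal prediction ∆(x) at x_test

h_vals = np.empty(runs)
for r in range(runs):                              # "retrain" on a fresh training sample
    x = rng.uniform(0, 1, N)
    y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(N)
    w = np.polyfit(x, y, deg=order)
    h_vals[r] = np.polyval(w, x_test)              # learnt model evaluated at x_test

variance = np.var(h_vals)                          # Eretrain[(h - Eretrain[h])^2]
bias_sq = (np.mean(h_vals) - delta) ** 2           # (Eretrain[h] - ∆(x))^2
total = np.mean((h_vals - delta) ** 2)             # expected squared deviation from ∆(x)
print(variance, bias_sq, variance + bias_sq, total)   # last two agree up to sampling error
```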
7 Representing and learning distributions
Almost all machine learning tasks are based on training data that have some
random component. Having completely noise-free data from deterministic sources
observed with high-precision measurements is a rare exception. Thus, machine
learning algorithms are almost all designed to cope with stochastic data. Their
ultimate functionality (classification, prediction, control, ...) will be served well
or poorly to the extent that the probability distribution of the training data has
been properly accounted for. We have seen this in the previous section. If one
(wrongly) believes that there is little randomness in the training data, one will
take the given training points as almost correct – and end up with an overfitted
model. Conversely, if one (wrongly) thinks the training data are almost completely
“white noise randomness”, the learnt model will under-exploit the information in
the training data and underfit. Altogether it is fair to say that machine learning
is the art of probability distribution modeling (plus subsequent use of that learnt
distribution for classification etc.).
In many machine learning algorithms, the distribution model remains implicit
– there is no place or data structure in the algorithm which explicitly models the
data distribution. In some algorithms however, and for some tasks, the probability
distribution of the training data is explicitly modeled. We will encounter some of
these algorithms later in this course (hidden Markov models, Bayesian networks,
Boltzmann machines are of that kind). At any rate, it is part of the standard
knowledge of a machine learning professional to know some ways to represent
probability distributions and to estimate these representations from data.
data? Or, expressing the same question in terms of a loss function, which decision
function minimizes the risk connected to the counting loss
L(y, c) = 1 if y ≠ c,  and  L(y, c) = 0 if y = c?
It turns out that the minimal-risk decision function is in fact well-defined and
unique, and it can (and must) be expressed in terms of the distribution of our
data-generating RVs X and Y .
Our starting point is the true joint distribution PX,Y of patterns and labels.
This joint distribution is given by all the probabilities of the kind
P (X ∈ A, Y = c), (54)
where A is some subvolume of P and c ∈ C. The subvolumes A can be n-
dimensional hypercubes within P, but they also can be arbitrarily shaped “volume
bodies”, for instance balls or donuts or whatever. Note that the probabilities
P (X ∈ A, Y = c) are numbers between 0 and 1, while the distribution PX,Y is
the function which assigns to every choice of A ⊆ P, c ∈ C the number P (X ∈
A, Y = c) (probability theory gives us a rigorous formal way to define and handle
this strange object, PX,Y — it is explained with loving care in my lecture notes
for “Principles of Statistical Modeling”).
The joint distribution PX,Y is the “ground truth” – it is the real-world statistical
distribution of pattern-label pairs of the kind we are interested in. In the Digits
example, it would be the distribution of pairs made of (i) a handwritten digit and
(ii) a human-expert provided class label. Test digit images and their class labels
would be randomly “drawn” from this distribution.
A decision function h : P → {c1 , . . . , ck } partitions the pattern space P into k
disjoint decision regions R1 , . . . , Rk by
Ri = {x ∈ P | h(x) = ci }. (55)
A test pattern xtest is classified by h as class i if and only if it falls into the decision
region Ri .
Now we are prepared to analyze and answer our ambitious question, namely
which decision functions yield the lowest possible rate of misclassifications. Since
two decision functions yield identical classifications if and only if their decision
regions are the same, we will focus our attention on these regions and reformulate
our question: which decision regions yield the lowest rate of misclassifications, or
expressed in its mirror version, which decision regions give the highest probability
of correct classifications?
Let fi be the pdf for the conditional distribution PX | Y =ci . It is called the
class-conditional distribution.
The probability to obtain a correct classification for a random test pattern, when the decision regions are R_i, is equal to ∑_{i=1}^k P(X ∈ R_i, Y = c_i). Rewriting
this expression using the pdfs of the class conditional distributions gives
∑_{i=1}^k P(X ∈ R_i, Y = c_i) = ∑_{i=1}^k P(X ∈ R_i | Y = c_i) P(Y = c_i)
                            = ∑_{i=1}^k P(Y = c_i) ∫_{R_i} f_i(x) dx.    (56)
Note that the integral is taken over a region that possibly has curved boundaries,
and the integration variable x is a vector. The boundaries between the decision
regions are called decision boundaries. For patterns x that lie exactly on such
boundaries, two or more classifications are equally probable. For instance, the
digit pattern shown in the last but third column in the second row in Figure 15
would likely be classified by humans as a “1” or “4” class pattern with roughly the
same probability; this pattern would lie close to a decision boundary.
The expression (56) obviously becomes maximal if the decision regions are given by

R_i = {x ∈ P | P(Y = c_i) f_i(x) ≥ P(Y = c_j) f_j(x) for all j = 1, ..., k}.

Thus we have found the decision function which is optimal in the sense that it maximizes the probability of correct classifications: namely

h_opt(x) = argmax_{i=1,...,k} P(Y = c_i) f_i(x).

A learning algorithm that finds the optimal decision function (or some function approximating it) must learn (implicitly or explicitly) estimates of the class-conditional distributions P_{X | Y=c_i} and the class probabilities P(Y = c_i).
The class probabilities are also called the class priors. Figure 35 visualizes
optimal decision regions and decision boundaries. In higher dimensions, the ge-
ometric shapes of decision regions can become exceedingly complex, fragmented
and “folded into one another” — disentangling them during a learning process is
one of the eternal challenges of ML.
Figure 35: Optimal decision regions Ri . A case with a one-dimensional pattern
space P and k = 3 classes is shown. Broken lines indicate decision boundaries.
Decision regions need not be connected!
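To illustrate the optimal decision rule, here is a minimal Python sketch under assumptions of my own making (k = 2 classes, one-dimensional patterns, Gaussian class-conditional pdfs f_i with made-up means, standard deviations and class priors). It classifies a pattern x by the class which maximizes P(Y = c_i) f_i(x) and locates the decision boundary numerically.

import numpy as np

# made-up class priors and class-conditional Gaussians (assumed known here;
# in a real application they would have to be learned from data)
priors = np.array([0.3, 0.7])     # P(Y = c_1), P(Y = c_2)
means  = np.array([-1.0, 2.0])
sigmas = np.array([1.0, 1.5])

def class_conditional_pdf(x, i):
    return np.exp(-(x - means[i])**2 / (2 * sigmas[i]**2)) / (np.sqrt(2 * np.pi) * sigmas[i])

def h_opt(x):
    # optimal decision function: argmax_i P(Y = c_i) f_i(x)
    scores = [priors[i] * class_conditional_pdf(x, i) for i in range(2)]
    return int(np.argmax(scores))

xs = np.linspace(-5.0, 6.0, 1101)
labels = np.array([h_opt(x) for x in xs])
boundary = xs[np.where(np.diff(labels) != 0)]   # approximate location(s) of the decision boundary
print(boundary)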
Quite generally, any machine learning task can be solved “optimally” (in terms
of minimizing some risk) only if the solution takes the true distribution of all
task-relevant RVs into account. As I mentioned before, many learning algorithms
estimate a model of the underlying data distribution only implicitly. But some
ML algorithms generate explicit models of probability distributions, and in the
wider fields of statistical modeling, explicit models of probability distributions are
often the final modeling target.
Representing a probability distribution in mathematical formalism or by an
algorithm is not always easy. Real-world probability distributions can be utterly
complex and high-dimensional objects which one cannot just “write down” in a for-
mula. Over the last 60 or so years, ML and statistics research has developed a wide
range of formalisms and algorithms for representing and estimating (“learning”)
probability distributions. A machine learning professional should know about
some basic kinds of such formalisms and algorithms. In this section I present a
choice, ranging from elementary to supremely complex, powerful — and compu-
tationally costly.
Definition 7.1 Given a discrete sample space S (finite or countably infinite), a probability mass function on S is a function p : S → [0, 1] whose total mass is 1, that is, which satisfies ∑_{s∈S} p(s) = 1.
Bernoulli distribution. The Bernoulli distribution arises when one deals with
observations that have only two possible outcomes, like tail – head, female – male,
pass – fail, 0 – 1. This means a two-element sample space S = {s1 , s2 } equipped
with the power set σ-field, on which a Bernoulli distribution is defined by its pmf,
which in this case has its own standard terminology: one of the two outcomes is declared the "success" and the pmf is written as p(s_1) = q, p(s_2) = 1 − q, where q ∈ [0, 1] is the success probability. The binomial distribution Bi(n, q) builds on this: it is the distribution of the number of successes in n independent Bernoulli trials with success probability q. For instance, the number of successes in 10 trials with q = 0.25 is distributed as X ∼ Bi(10, 0.25).
Figure 36: The pmf’s of Bi(20, 0.1) (blue), Bi(20, 0.5) (red), and Bi(20, 0.9)
(green). Figure taken from www.boost.org.
The expected number of events E[X] is called the rate of the Poisson distribution,
and is commonly denoted by λ. The pmf of a Poisson distribution with rate λ is
given by
p(k) = λ^k e^{−λ} / k!    (59)
Figure 37 depicts the pmf’s for three different rates.
Figure 37: The pmf of the Poisson distribution for various values of the param-
eter λ. The connecting lines bewteen the dots are drawn only for better vi-
sual appearance (image source: https://commons.wikimedia.org/wiki/File:
Poisson_pmf.svg).
A point distribution concentrates all of its probability mass on a single point in R^n. The simplest point distribution is defined on the real line S = R and has
the defining property that for any subset A ⊆ R it holds that
P(X ∈ A) = 1 if 0 ∈ A,  and  P(X ∈ A) = 0 if 0 ∉ A.
There are several ways to write down such a distribution in mathematical for-
malism. In the machine learning literature (and throughout the natural sciences,
especially physics), the above point distribution would be represented by a weird
kind of pdf-like function, called the Dirac delta function δ (the Wikipedia article
https://en.wikipedia.org/wiki/Dirac_delta_function is recommendable if
you want to understand this function better). The Dirac delta function is used
inside an integral just like a normal pdf is used in (60). Thus, for a subset A ⊆ R one has

P(X ∈ A) = ∫_A δ(x) dx.

The Dirac delta is also used in R^n, where likewise it is a "pdf" which describes a point distribution that puts all probability mass on the origin.
The uniform distribution. We don't need to make a big fuss about this. If
I = [a1 , b1 ]×. . .×[an , bn ] is a n-dimensional interval in Rn , the uniform distribution
on I is given by the pdf
p(x) = 1 / ((b_1 − a_1) · ... · (b_n − a_n))  if x ∈ I,   and   p(x) = 0  if x ∉ I.    (61)
The exponential distribution. This distribution is defined for S = [0, ∞)
and could be paraphrased as “the distribution of waiting times until the next of
these things happens”. Consider any of the kinds of temporal events listed for the
Poisson distribution, for instance the event “meteorite hits earth”. The exponential
distribution characterizes how long you have to wait for the next impact, given that
one impact has just happened. Like in the Poisson distribution, such random event
processes have an average rate events / unit reference time interval. For instance,
meteorites of a certain minimum size hit the earth with a rate of 2.34 per year
(just guessing). This rate is again denoted by λ. The pdf of the exponential
distribution is given by
p(x) = λ e−λx (note that x ≥ 0). (62)
It is (almost) self-explaining that the expectation of an exponential distribution
is the reciprocal of the rate, E(X) = 1/λ. Figure 38 shows pdf’s for some rates λ.
Figure 38: The pdf of the exponential distribution for various values of
the parameter λ (image source: https://commons.wikimedia.org/wiki/File:
Exponential_pdf.svg).
The exponential distribution plays a big role in spiking neural networks (SNNs).
Biological neurons communicate with each other by sending short “point-like”
electrical pulses, called spikes. Many people believe that Nature invented commu-
nication by spikes to let the brain save energy – your head shouldn’t perceptibly
warm up (like your PC does) when you do a hefty thinking job! For the same
reason, microchip engineers have teamed up with deep learning researchers to de-
sign novel kinds of microchips for deep learning applications. These microchips
contain neuron-like processing elements which communicate with each other by
spikes. IBM and Intel have actually built such chips (check out “IBM TrueNorth”
and “Intel Loihi” if you want to learn more). Research about artificial SNNs for
machine learning applications goes hand in hand with research in neuroscience.
In both domains, one often uses models based on the assumption that the tem-
poral pattern in a spike sequence sent by a neuron is a stochastic process called a
Poisson process. In a Poisson process, the waiting times between two consecutive
spikes are exponentially distributed. Recordings from real biological neurons often
show a temporal randomness of spikes that can be almost perfectly modeled by a
Poisson process.
Learning / estimation: The rate λ is the only parameter characterizing an exponential distribution. Training data: a sequence t_1, t_2, ..., t_N of time points where the event in question was observed. Since the expected waiting time is 1/λ, the rate can be estimated as the reciprocal of the average waiting time,

λ̂ = (N − 1) / ∑_{i=1,...,N−1} (t_{i+1} − t_i).
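A minimal Python sketch of this estimator (the event times are simulated here; in a real application they would be the observed t_1, ..., t_N):

import numpy as np

rng = np.random.default_rng(1)
true_rate = 2.34
waiting_times = rng.exponential(scale=1/true_rate, size=1000)   # scale = 1/lambda
t = np.cumsum(waiting_times)             # simulated event time points t_1, ..., t_N

mean_wait = np.mean(np.diff(t))          # average waiting time between consecutive events
rate_hat = 1.0 / mean_wait               # lambda_hat = reciprocal of the mean waiting time
print(rate_hat)                          # should come out close to 2.34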
Figure 39: pdf of a normal distribution with mean 2 and standard deviation 1.
The majestic power of the normal distribution, which makes her reign almost
universally over almost all natural phenomena, comes from one of the most central
theorems of probability theory, the central limit theorem. It is stated in textbooks
in a variety of (not always exactly equivalent) versions. It says, in brief, that one
gets the normal distribution whenever random effects of many independent small-
sized causes sum up to large-scale observable effects. The following definition —
which I do not expect you to fully understand (which would need a substantial
training in probability theory), but you should at least have seen it — makes this
precise:
Definition 7.5 Let (Xi )i∈N be a sequence of independent, real-valued, square in-
tegrable random variables with nonzero variances Var(Xi ) = E[(Xi − E[Xi ])2 ].
Then we say that the central limit theorem holds for (X_i)_{i∈N} if the distributions P_{S_n} of the standardized sum variables

S_n = (∑_{i=1}^n (X_i − E[X_i])) / σ(∑_{i=1}^n X_i)    (64)

converge weakly to the standard normal distribution N(0, 1).
Explanations: Weak convergence of the distributions P_{S_n} to N(0, 1) means that

lim_{n→∞} ∫ f dP_{S_n} = ∫ f dN(0, 1)

for all continuous, bounded functions f : R → R. You will find the notation of these integrals unfamiliar, and indeed you see here cases of Lebesgue
integrals – a far-reaching generalization of the Riemann integrals that you
know. Lebesgue integrals can deal with a far greater range of functions
than the Riemann integral. Mathematical probability theory is formulated
exclusively with the Lebesgue integral. We cannot give an introduction
to Lebesgue integration theory in this course. Therefore, simply ignore the
precise meaning of “weak convergence” and take home that sequences of dis-
tributions are required to converge to a target distribution in some subtly
defined way.
A sequence (Xi )i∈N of random variables (or, equivalently, its associated se-
quence of distribution (PXi )i∈N ) obeys the central limit theorem under rather weak
conditions – or in other words, for many such sequences the central limit theorem
holds.
A simple, important class of (Xi ) for which the central limit theorem holds is
obtained when the Xi are identically distributed (and, of course, are independent,
square integrable and have nonzero variance). Notice that regardless of the shape
of the distribution of each Xi , the distribution of the normalized sums converges
to N (0, 1)!
The classical demonstration of the central limit theorem is the Galton board,
named after Sir Francis Galton (1822–1911), an English multi-scientist. The idea
is to let little balls (or beans, hence this device is sometimes called “bean machine”)
trickle down a grid of obstacles which randomly deflect the ball left or right (Figure
40). It does not matter how, exactly, these deflections act — in the simplest case,
the ball is just kicked right or left by one space grid unit with equal probability.
The deeper the trickling grid, the closer will the resulting distribution be to a
normal distribution. A nice video can be watched at https://www.youtube.com/
watch?v=PM7z_03o_kk.
Figure 40: The Galton board. See text for explanation. Figure taken from
https://janav.wordpress.com/2013/09/26/power-law/.
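If you want to play with the bean machine without building one, here is a small Python sketch (my own toy code, not from the lecture notes): each of many balls receives a fixed number of independent left/right kicks, and the standardized final positions approach a N(0, 1) histogram as the depth of the board grows.

import numpy as np

rng = np.random.default_rng(2)
depth, n_balls = 50, 100000

# each ball is kicked left (-1) or right (+1) with equal probability at every level
kicks = rng.choice([-1, 1], size=(n_balls, depth))
final_positions = kicks.sum(axis=1)

# standardize the sums; by the central limit theorem their distribution approaches N(0, 1)
standardized = final_positions / np.sqrt(depth)
print(standardized.mean(), standardized.std())   # approximately 0 and 1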
However, this simple case does not explain the far-reaching, general importance
of the central limit theorem (rather, property). In textbooks one often finds state-
ments like, “if the outcomes of some measurement procedure can be conceived to
be the combined effect of many independent causal effects, then the outcomes will
be approximately normal distributed”. The “many independent causal effects”
that are here referred to are the random variables (Xi ); they will typically not
be identically distributed at all. Still the central limit theorem holds under mild
assumptions. Intuitively, all that one has to require is that none of the individual
random variables Xi dominates all the others – the effects of any single Xi must
asymptotically be “washed out” if an increasing number of other X_i's are entered
into the sum variable Sn . In mathematical textbooks on probability you may find
numerous mathematical conditions which amount to this “washing out”. A special
case that captures many real-life cases is the condition that the Xi are uniformly
bounded, that is, there exists some b > 0 such that |Xi (ω)| < b for all i and ω.
However, there exist much more general (nontrivial to state) conditions that like-
wise imply the central limit theorem. For our purposes, a good enough take-home message is: whenever an observed quantity arises as the sum of many independent random effects, none of which dominates the others, its distribution will be approximately normal.
In practice one often has to compute probabilities of the form P(a ≤ X ≤ b) for a normally distributed X ∼ N(µ, σ²). This is done in two steps:
1. Transform the problem from its original version N (µ, σ 2 ) to the standard
normal distribution N (0, 1), by using
∫_a^b (1/(√(2π) σ)) e^{−(x−µ)²/(2σ²)} dx = ∫_{(a−µ)/σ}^{(b−µ)/σ} (1/√(2π)) e^{−x²/2} dx.    (66)
2. Compute the numerical value of the r.h.s. in (66) by using the cumulative
density function of N (0, 1), which is commonly denoted by Φ:
∫_{(a−µ)/σ}^{(b−µ)/σ} (1/√(2π)) e^{−x²/2} dx = Φ((b − µ)/σ) − Φ((a − µ)/σ).
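A small numerical sketch of this two-step recipe (my own code; the values of µ, σ, a, b are made up, and Φ is evaluated via the error function):

from math import erf, sqrt

def Phi(z):
    # cumulative distribution function of N(0, 1)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def normal_prob(a, b, mu, sigma):
    # P(a <= X <= b) for X ~ N(mu, sigma^2), via steps 1 and 2 above
    return Phi((b - mu) / sigma) - Phi((a - mu) / sigma)

print(normal_prob(1.0, 3.0, mu=2.0, sigma=1.0))   # about 0.683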
Learning / estimation: A one-dimensional normal distribution is characterized by its mean µ and its variance σ². Given training data x_1, ..., x_N, the standard estimation formulas for these two parameters are
µ̂ = (1/N) ∑_i x_i,
σ̂² = (1/(N − 1)) ∑_i (x_i − µ̂)².
The n–dimensional normal distribution. If data points are not just real
numbers but vectors x = (x1 , . . . , xn )′ ∈ Rn , whose component RVs Xi which give
the vector components xi fulfill the central limit theorem, the joint distribution
of the RVs X1 , . . . , Xn is the multidimensional normal distribution which has the
pdf
p(x) = (1 / ((2π)^{n/2} det(Σ)^{1/2})) exp(−(1/2) (x − µ)′ Σ^{−1} (x − µ)).    (67)
Here µ is the expectation E[(X1 , . . . , Xn )′ ] and Σ is the covariance matrix of the
n component variables, that is Σ(i, j) = E[(Xi − E[Xi ])(Xj − E[Xj ])]. Figure
41 shows the pdf of a 2-dimensional normal distribution. In geometrical terms, a
multidimensional normal distribution is shaped as an ellipsoid, whose main axes
coincide with the eigenvectors ui of the covariance matrix Σ. Like in PCA one
can obtain them from the SVD: if U DU ′ = Σ is the singular value decomposition
of Σ, the eigenvectors ui are the columns of U .
Figure 41: The pdf of a 2-dimensional normal distribution; its elliptic contours in the (x1, x2) plane have main axes u1 and u2, the eigenvectors of Σ.
At the elementary end, one uses weighted combinations of Gaussians to approximate complex distributions - we will see this
later in today’s lecture. At the advanced end, there is an entire modern branch
of ML, Gaussian processes, where complex distributions (and hyperdistributions)
are modeled by infinitely many, interacting Gaussians. This is quite beyond the
scope of our course.
Learning / estimation: Given a set of n-dimensional training data points
(xi )i=1,...,N , the expectation µ and the covariance matrix Σ can be estimated from
the training data in the obvious way:
µ̂ = (1/N) ∑_i x_i   and   Σ̂ = (1/(N − 1)) ∑_i (x_i − µ̂)(x_i − µ̂)′.
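Here is a brief Python sketch of these estimators (with simulated training data standing in for real observations; note that for the symmetric matrix Σ̂ the eigendecomposition used below coincides with the SVD mentioned above):

import numpy as np

rng = np.random.default_rng(3)
# simulated training data: N points in R^2 (in practice these are the given x_i)
X = rng.multivariate_normal(mean=[1.0, -2.0], cov=[[2.0, 0.8], [0.8, 1.0]], size=500)

N = X.shape[0]
mu_hat = X.mean(axis=0)                        # (1/N) sum_i x_i
centered = X - mu_hat
Sigma_hat = centered.T @ centered / (N - 1)    # 1/(N-1) sum_i (x_i - mu_hat)(x_i - mu_hat)'

# main axes of the ellipsoid: eigenvectors of the symmetric covariance matrix
eigvals, U = np.linalg.eigh(Sigma_hat)
print(mu_hat, Sigma_hat, U, sep="\n")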
... and many more! The few common, named distributions that I displayed
in this section are only meant to be illustrative picks from a much, much larger
reservoir of well-known, completely analyzed, tabulated, pre-computed, and indi-
vidually named distributions. The online book “Field Guide to Continuous Prob-
ability Distributions” Crooks 2017 attempts a systematic overview. You should
take home the following message:
5. finally allows you to state something like, “given the observation data,
with a probability of 0.95, the true distribution θtrue differs from the
estimate θ̂ by less than 0.01 percent.”
• Professionally documented statistical analyses will always state not only the
estimated model, but also in addition quantify how accurate the model esti-
mate is. This can take many forms, like error bars drawn around estimated
parameters or stating “significance levels”.
• If you read a report that only reports a model estimate, without any such
quantification of accuracy levels, then this report has not been written by a
professional scientist. There are two kinds of such reports. Either the author
was ignorant about how to carry out a statistical analysis – then trash the
report and forget it. Or the report was written by a machine learner in
a task context where accuracy levels are not important and/or cannot be
technically obtained because it is not possible to identify the distribution
kind of the data generating system.
Figure 42: A. A richly structured 2-dimensional pdf (darker: larger values of pdf; white: zero value of pdf). B. Same, discretized to a pmf. C. A MoG coverage of the pmf with 20 Gaussians, showing the ellipsoids corresponding to standard deviations. D. Contour plot of the pdf represented by the mixture of Gaussians. (The picture I used here is an iconic photograph from the history of aviation, July 1909. It shows Latham departing from Calais, attempting to cross the Channel in an aeroplane for the first time in history. The motor of his Antoinette IV monoplane gave up shortly before he reached England and he had to ditch in the water. A few days later his rival Blériot succeeded. https://en.wikipedia.org/wiki/Hubert_Latham)
Let us first fix notation for MoGs. We consider probability distributions in Rn
— like the 2-dimensional distributions shown in Figure 42. A Gaussian (multi-
dimensional normal distribution) in Rn is characterized by its mean µ and its
n-dimensional covariance matrix Σ. We lump all the parameters inside µ and Σ
together and denote the resulting parameter vector by θ. We denote the pdf of
the Gaussian with parameters θ by p(·|θ). Now we can write down the pdf p of a
MoG that is assembled from m different Gaussians as
p : R^n → R^{≥0},  x ↦ ∑_{j=1}^m p(x | θ_j) P(j),    (68)

where θ_j = (µ_j, Σ_j) are the parameters of the jth Gaussian, and where the mixture coefficients P(j) satisfy

0 ≤ P(j)  and  ∑_{j=1}^m P(j) = 1.

For notational simplification we will mostly write ∑_{j=1}^m p(x | j) P(j) instead of ∑_{j=1}^m p(x | θ_j) P(j).
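A small Python sketch evaluating a MoG pdf according to (67) and (68) — the component parameters below are made-up illustration values, not estimated from data:

import numpy as np

def gaussian_pdf(x, mu, Sigma):
    # pdf of a multidimensional normal distribution, Equation (67)
    n = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)
    norm_const = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm_const

def mog_pdf(x, mus, Sigmas, P):
    # Equation (68): sum_j p(x | theta_j) P(j)
    return sum(P[j] * gaussian_pdf(x, mus[j], Sigmas[j]) for j in range(len(P)))

# a made-up MoG with m = 2 components in R^2
mus    = [np.array([0.0, 0.0]), np.array([3.0, 1.0])]
Sigmas = [np.eye(2), np.array([[1.0, 0.3], [0.3, 0.5]])]
P      = [0.4, 0.6]
print(mog_pdf(np.array([1.0, 0.5]), mus, Sigmas, P))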
MoG’s are used in unsupervised learning situations: given a data set (xi )i=1,...,N
of points in Rn , one wants to find a “good” MoG approximation of the distribution
from which the xi have been sampled. But... what does “good” mean?
This is the point to introduce the loss function which is invariably used when
the task is to find a “good” distribution model for a set of training data points.
Because this kind of loss function and this way of thinking about optimizing distri-
bution models is not limited to MoG models but is the universal way of handling
distribution modeling throughout machine learning, I devote a separate subsection
to it and discuss it in general terms.
To make this precise, one considers the probability that the dataset D was
drawn given that the true distribution has the pdf p (which is characterized by
parameters θ).
That exactly this sample D was drawn from p is obviously a random event —
if one would repeat the entire data sampling procedure, one would have obtained
another sample D′ . It thus makes sense to ask, “what is the probability to draw
just exactly this sample D?” The probability of drawing D is described by a pdf
f (D|θ) over (Rn )N (don’t read on before you fully understand this). At this point
one assumes that the datapoints in D have been drawn independently (each of
them from a distribution with pdf p), which leads to
f(D | θ) = ∏_{i=1}^N p(x_i | θ).    (69)
This pdf f (D|θ) value gives the probability density of getting the sample D
if the underlying distribution is the one modeled by θ. Conversely, one says that
this value f (D|θ) is the likelihood of the distribution θ given the data D, and
writes L(θ | D). The words “probability” and “likelihood” are two different words which refer to the same mathematical object, namely f(D | θ), but this object is read in two directions:

• “the probability of the data D given the model θ”, written f(D | θ);

• “the likelihood of the model θ given a dataset D”, written L(θ | D).
If you meet someone who is able to correctly use the words “probability” vs.
“likelihood”, you can trust that person — s/he has received a good education in
machine learning.
The likelihood pdf (69) is not suitable for doing numerical computations, because a product of very many small numbers (the p(x_i | θ)) will lead to numerical underflow and be squashed to zero on digital computers. To avoid this, one always works with the log likelihood

L(θ | D) = log ∏_{i=1}^N p(x_i | θ) = ∑_{i=1}^N log p(x_i | θ).    (70)

The maximum-likelihood estimate of θ is then the model from the candidate space H which maximizes this log likelihood:

θ_ML = argmax_{θ∈H} L(θ | D).    (71)
Machine learners like to think of their learning procedures as minimizing some
cost or loss. Thus one also finds in the literature the sign-inverted version of (71)
which obviously is equivalent. If one writes out the L(θ | D) one sees the structural
similarity of this unsupervised learning task with the supervised loss-minimization
task which we met earlier in this course (e.g. (42)):
θ_ML = argmin_{θ∈H} ∑_{i=1}^N − log p(x_i | θ).
Typical examples are missing data points in time series measurements, or measurements of physical systems where
the available sensors give only a partial account of the system state. The general
idea is that the observed data have been caused or influenced by some “hidden”
mechanism which by itself could not be observed. A solid and reliable model of
the distribution of the observed measurable data should somehow include a model
of these unobserved hidden mechanisms. This is mathematically done by creating
models with two sorts of random variables: the observables (lumped together in a
random variable X) and the hiddens (collectively written as Y ). If one manages
to estimate a good model of the joint distribution of both X and Y , one can use
this “complete” model for many purposes, for instance
• for filling in “best guesses” for missing values in time series data where
observations are unavailable for certain time points (like handwritten texts
with unreadable words in them);
• for estimating models of complex systems that are described by many inter-
acting random variables, some of which are not observable — an example
from human-machine interfacing: modeling the decisions expected from a
user assuming that these decisions are influenced by emotional states that
are not directly observable; such models are called Bayesian networks or
more generally, graphical models;
• for a host of other practical modeling tasks, like our MoG optimization task.
In the MoG setting, the hidden RV Y collects, for each training point, the random variable that decides which of the m Gaussians is used to generate x_i; thus S_Y = {1, ..., m}^N. The RV Y is a product Y = ⊗_i Y_i of i.i.d. RVs Y_i, where Y_i takes values j ∈ {1, ..., m}. Concretely, Y_i selects one of the m Gaussians by a random choice weighted by the probability vector (P(1), ..., P(m)). Thus the value of Y_i is j_i, an element of {1, ..., m}.
The RV X is a product X = ⊗_i X_i of i.i.d. RVs X_i, where X_i generates the i-th training data point x_i by a random draw from the j_i-th Gaussian, that is from the normal distribution N(µ_{j_i}, Σ_{j_i}).
Figure 43 illustrates what it means not to know the values of the hidden variable Y.
Figure 43: A point set sampled from a mixture of Gaussians. Left: what we
(don’t) know if we don’t know from which Gaussian a point was drawn. Right:
the “hidden” variable made visible. (Figure copied from an online presentation of
Christopher M. Bishop, no longer accessible)
For the general treatment, assume that the joint distribution of X and Y is given by a parametrized pdf p_{X,Y}(x, y | θ) on S_X × S_Y. The marginal distributions of X and Y are then given by the parametrized pdf's p_X(x | θ) = ∫_{S_Y} p_{X,Y}(x, y | θ) dy and p_Y(y | θ) = ∫_{S_X} p_{X,Y}(x, y | θ) dx. We can then switch to a description of distributions using only pdfs, that is, the distribution of samples D is described by the pdf p_X(D | θ).
In the case of all-continuous distributions, the log likelihood of θ becomes

L(θ | D) = log p_X(D | θ).
The MoG case has a mix of types: X gives values in the continuous space (Rn )N
and Y gives values in the discrete space {1, . . . , m}N . This makes it necessary to
use a mix of pdf’s and pmf’s when working out the MoG case in detail.
The difficulty we are facing is that the pdf value p_X(D | θ) depends on the values that the hidden variables take, that is,

L(θ | D) = log p_X(D | θ) = log ∫_{S_Y} p_{X,Y}(D, y | θ) dy.
where pX|Y =(j1 ,...,jN ) is the pdf of the conditional distribution of X given that
the hidden variables take the value (j1 , . . . , jN ) and pXi |Yi =ji is the pdf of the
conditional distribution of Xi given that the hidden variable Yi takes the value ji .
Now let q be any pdf for the hidden variables Y . Then we can obtain a lower
bound on log pX (D|θ) by
L(θ | D) = log ∫_{S_Y} p_{X,Y}(D, y | θ) dy
         = log ∫_{S_Y} q(y) (p_{X,Y}(D, y | θ) / q(y)) dy
         ≥ ∫_{S_Y} q(y) log (p_{X,Y}(D, y | θ) / q(y)) dy
         = ∫_{S_Y} q(y) log p_{X,Y}(D, y | θ) dy − ∫_{S_Y} q(y) log q(y) dy    (75)
         =: F(q, θ),    (76)

where the inequality follows from a version of Jensen's inequality which states that E[f ◦ Y] ≤ f(E[Y]) for any concave function f (the log is concave).
EM algorithms maximize the lower bound F(q, θ) by alternatingly and itera-
tively maximizing F(q, θ) first with respect to q, then with respect to θ, starting
from an initial guess q (0) , θ(0) which is then updated by:
E-step:  q^{(k+1)} = argmax_q F(q, θ^{(k)}) = p_{Y | X=D, θ^{(k)}},

for which the lower bound becomes tight:

F(q^{(k+1)}, θ^{(k)}) = log p_X(D | θ^{(k)}) = L(θ^{(k)} | D).

M-step:  θ^{(k+1)} = argmax_θ ∫_{S_Y} q^{(k+1)}(y) log p_{X,Y}(D, y | θ) dy
                   = argmax_θ ∫_{S_Y} p_{Y | X=D, θ^{(k)}}(y) log p_{X,Y}(D, y | θ) dy.    (80)
How the M-step is concretely computed (i.e. how the argmax problem (80) is solved) depends on the particular kind of model. It can be tricky, and worth a publication, to find an M-step algorithm for your new kind of model.
Because we have F = L(θ(k) | D) before each M-step, and the E-step does
not change θ(k) , and F cannot decrease in an EM-double-step, the sequence
L(θ^{(0)} | D), L(θ^{(1)} | D), . . . monotonically grows toward a supremum. The itera-
tions are stopped when L(θ(k) | D) = L(θ(k+1) | D), or more realistically, when a
predefined number of iterations is reached or when the growth rate falls below a
predetermined threshold. The last parameter set θ(k) that was computed is taken
as the outcome of the EM algorithm.
It must be emphasized that EM algorithms steer toward a local maximum of
the likelihood. If started from another initial guess, another final parameter set
may be found. Here is a summary of the EM principle:
E-step: Estimate the distribution p_{Y | X=D, θ^{(k)}}(y) of the hidden variables, given data D and the parameters θ^{(k)} of a preliminary model. This can be intuitively understood as inferring knowledge about the hidden variables, thereby completing the data.

M-step: Using this completed data, re-estimate the model parameters by maximum likelihood, which gives θ^{(k+1)}.
There are other ways to compute maximum likelihood solutions in tasks that
involve visible and hidden variables. Specifically, one can often invoke a gradient
descent optimization. A big advantage of EM over gradient descent is that EM
does not need a careful tuning of algorithm control parameters (like learning rates,
an eternal bummer in gradient descent methods), simply because there are no
tuning parameters. Furthermore, EM algorithms are typically numerically robust,
which cannot be said about gradient descent algorithms.
Now let us put EM to practice for the MoG estimation. For better didactic
transparency, I will restrict my treatment to a special case where we require all
Gaussians to be spheric, that is, their covariance matrices are of the form Σ =
σ 2 In where σ 2 is the variance of the Gaussian in every direction and In is the n-
dimensional identity matrix. The general case of mixtures of Gaussians composed
of member Gaussians with arbitrary Σ is described in textbooks, for instance Duda,
P. E. Hart, and Stork 2001. And anyway, you would probably use a ready-made
online tool for MoG estimation... Here we go.
A MoG pdf with m spherical components is given by the vector of parameters θ = (µ_1, ..., µ_m, σ_1², ..., σ_m², P(1), ..., P(m))′. This gives the following concrete optimization problem: find the θ which maximizes the log likelihood ∑_{i=1}^N log ∑_{j=1}^m p(x_i | θ_j) P(j).
Assume that we are after iteration k and want to estimate θ^{(k+1)}. In the E-step we have to compute the conditional distribution of the hidden variable Y, given data D and the preliminary model θ^{(k)}.
Unlike in the treatment that I gave for the general case, where we assumed Y
to be continuous, now Y is discrete. Its conditional distribution is given by the
probabilities P(Y_i = j | X_i = x_i, θ^{(k)}). These probabilities are obtained via the Bayes formula

P(Y_i = j | X_i = x_i, θ^{(k)}) = p(x_i | θ_j^{(k)}) P^{(k)}(j)  /  ∑_{l=1}^m p(x_i | θ_l^{(k)}) P^{(k)}(l).
In the M-step we have to find maximum likelihood estimates for all parameters
in θ. I do not give a derivation here but just report the results, which are intuitive
enough:
µ_j^{(k+1)} = ∑_{i=1}^N P(Y_i = j | X_i = x_i, θ^{(k)}) x_i  /  ∑_{i=1}^N P(Y_i = j | X_i = x_i, θ^{(k)}),

σ_j^{2 (k+1)} = (1/n) ∑_{i=1}^N P(Y_i = j | X_i = x_i, θ^{(k)}) ‖µ_j^{(k+1)} − x_i‖²  /  ∑_{i=1}^N P(Y_i = j | X_i = x_i, θ^{(k)}),   and

P^{(k+1)}(j) = (1/N) ∑_{i=1}^N P(Y_i = j | X_i = x_i, θ^{(k)}).
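Here is a compact Python sketch of the complete EM loop for spherical MoGs, implementing the E-step and M-step formulas above (the initialization, the fixed iteration count and the toy data are simplistic choices of mine, not prescriptions from the text):

import numpy as np

def em_spherical_mog(X, m, iters=100, seed=0):
    # EM for a mixture of m spherical Gaussians; X has shape (N, n)
    rng = np.random.default_rng(seed)
    N, n = X.shape
    mu = X[rng.choice(N, size=m, replace=False)]     # initial means: m random data points
    sigma2 = np.full(m, X.var())                     # initial variances sigma_j^2
    P = np.full(m, 1.0 / m)                          # initial mixture coefficients P(j)
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(Y_i = j | X_i = x_i, theta^(k))
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        log_r = -0.5 * d2 / sigma2 - 0.5 * n * np.log(2 * np.pi * sigma2) + np.log(P)
        log_r -= log_r.max(axis=1, keepdims=True)    # stabilization against underflow
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, variances and mixture coefficients
        w = r.sum(axis=0)                            # effective number of points per component
        mu = (r.T @ X) / w[:, None]
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        sigma2 = (r * d2).sum(axis=0) / (n * w)
        P = w / N
    return mu, sigma2, P

# usage with made-up data: two clusters in R^2
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.0, 0.0], 0.5, size=(200, 2)),
               rng.normal([3.0, 2.0], 1.0, size=(300, 2))])
print(em_spherical_mog(X, m=2))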
Figure 44: A two-dimensional dataset.
Figure 46: Rectangular Parzen window representation of a distribution given by a
sample of 5 real numbers. The sample points are marked by colored circles. Each
data point lies in the middle of a square ”Parzen window”, that is, a rectangular
pdf centered on the point. Weighted by 1/5 (colored rectangles) and summed
(solid black staircase line) they give a pdf.
which makes H the indicator function of a unit hypercube centered at the origin.
Using H, we get the n-dimensional analog of the staircase pdf in Figure 46 for a
sample D = (xi )i=1,...,N by
p^{(D)}(x) = (1/N) ∑_{i=1}^N (1/d^n) H((x − x_i)/d),    (84)
observing that the volume of such a cube is dn . The superscript (D) in p(D) is
meant to indicate that the pdf depends on the sample D.
Clearly, given some sample (xi )i=1,...,N , we do not believe that such a rugged
staircase reflects the true probability distribution the sample was drawn from. We
would rather prefer a smoother version. This can be easily done if we use smoother
kernel functions. A standard choice is to use multivariate Gaussians with diagonal
covariance matrix and uniform standard deviations σ =: d for H. This turns (84)
into
p^{(D)}(x) = (1/N) ∑_{i=1}^N (1/(2πd²)^{n/2}) exp(−‖x − x_i‖² / (2d²))
           = (1/N) ∑_{i=1}^N (1/d^n) (1/(2π)^{n/2}) exp(−(1/2) ‖(x − x_i)/d‖²),    (85)

where the second line brings the expression to a format analogous to (84).
It is clear that any nonnegative kernel function H which integrates to unity
can be used in an equation of this sort such that the resulting p(D) will be a pdf.
The scaling factor d determines the width of the Parzen window and thereby the
amount of smoothing. Figure 47 illustrates the effect of varying d.
Figure 47: The effect of choosing different widths d in representing a 5-point, 2-dimensional sample by Gaussian windows. (Taken from the online set of figures of the book by Duda, Hart & Stork, ftp://ftp.wiley.com/public/sci_tech_med/pattern/)
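A minimal Python sketch of the Gaussian Parzen window estimate (85); the five sample points and the query point are made up, and varying d shows the smoothing effect discussed above:

import numpy as np

def parzen_pdf(x, data, d):
    # Gaussian Parzen window estimate, Equation (85); data has shape (N, n)
    N, n = data.shape
    sq_dists = ((x - data) ** 2).sum(axis=1)
    kernels = np.exp(-sq_dists / (2 * d**2)) / (2 * np.pi * d**2) ** (n / 2)
    return kernels.mean()

# made-up sample of 5 two-dimensional points, evaluated at one query point
data = np.array([[0.0, 0.0], [1.0, 0.5], [0.2, 1.1], [2.0, 2.0], [1.5, 0.3]])
for d in (0.1, 0.5, 2.0):                 # small d: rugged estimate, large d: oversmoothed
    print(d, parzen_pdf(np.array([1.0, 1.0]), data, d))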
Comments:
• Parzen window representations of pdfs are ”non-parametric” in the sense
that the shape of such a pdf is determined by a sample (plus, of course, by
the shape of the kernel function, which however mainly serves a smoothing
purpose). This fact also can render Parzen window representations compu-
tationally expensive, because if the sample size is large, a large number of
data points have to be stored (and accessed if the pdf is going to be used).
• The basic Parzen windowing scheme, as introduced here, can be refined in
many ways. A natural way to improve on it is to use different widths d
for different sample points xi . One then makes d narrow in regions of the
sample set which are densely populated, and wide in regions that are only
thinly covered by sample points. One way of doing that (which I invented
while I was writing this — there are many ways to go) would be to (i) choose
a reasonably small integer K; (ii) for each sample point xi determine its K
nearest neighbors xi1 , ..., xiK ; (iii) compute the mean squared distance δ of
xi from these neighbors, (iv) set d proportional to this δ for this sample
point xi .
• When pdf values computed via (85) are needed in high dimensions, numerical underflow becomes a danger, so one works with the logarithm of the Parzen pdf,

log p^{(D)}(x) = log ( (1/N) ∑_{i=1}^N (1/(2πd²)^{n/2}) exp(−‖x − x_i‖² / (2d²)) ).    (86)
Still this will not usually solve all underflow problems, because the sum-exp
terms within the log still may underflow. Here is a trick to circumvent this
problem, known as the ”log-sum-exp” trick. Exploit the following:
log(exp(−A) + exp(−B)) = log (exp(−A + C) exp(−C) + exp(−B + C) exp(−C))
                       = −C + log (exp(−A + C) + exp(−B + C)).
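In code, the trick is usually applied with C chosen as the smallest of the exponents A, B, ..., so that the largest term inside the log becomes exp(0) = 1. A minimal Python sketch (my own illustration):

import numpy as np

def log_sum_exp(neg_exponents):
    # computes log(sum_i exp(-A_i)) for large positive A_i without underflow,
    # by shifting with C = min_i A_i so that the largest term becomes exp(0) = 1
    A = np.asarray(neg_exponents, dtype=float)
    C = A.min()
    return -C + np.log(np.sum(np.exp(-(A - C))))

A = np.array([1000.0, 1001.0, 1005.0])
print(log_sum_exp(A))               # about -999.69
print(np.log(np.sum(np.exp(-A))))   # the naive version underflows to log(0) = -inf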
8 Bayesian model estimation
Machine learning is based on probability, and probability is ... ? — We don’t know
what probability is! After centuries of thinking, philosophers and mathematicians
have not arrived at a general consensus on how to best understand randomness.
The proposals can be broadly grouped into objectivistic and subjectivistic inter-
pretations of “probability”.
According to the objectivistic view, probability resides physically in the real
world — it is a phenomenon of nature, as fundamentally a property of physical
reality as for instance “time” or “energy”.
According to the subjectivistic view, probability describes an observer’s sub-
jective opinion on something observed — a degree of belief, of uncertainty, of
plausibility of judgement, or missing information etc.
Calling the two views “objectivistic” versus “subjectivistic” is the philosophers’
terminology. Statisticians and machine learners rather call it frequentist statistics
versus Bayesian statistics.
The duality of understandings of “probability” has left a strong mark on ma-
chine learning. The two ways of thinking about probability lead to two different
ways of designing learning algorithms. Both are important and both are in daily
use. A machine learning professional should be aware of the fundamental intuitions
behind the two sorts of algorithms and when to use which. In this section I explain
the fundamental ideas behind the two understandings of probability, outline the
general principles of constructing Bayesian algorithms, and demonstrate them in
a case study of great importance in bioinformatics, namely the classification of
proteins.
6) = 1/5 would then become “measurable in principle” by
If one looks at this “definition” with a critical mind one will find that it is
laden with difficulties.
First, the limiting process in Equation (88) can never be carried out in reality: only a finite number of trials can ever be executed, so the probability can only be measured up to a finite precision.
Second, it does not inform us about how, exactly, the “repeated trials” of exe-
cuting X should be done in order to be “unbiased”. What does that mean
in terms of experimental procedures? This is a very critical issue. To appre-
ciate its impact, consider the example of a bank that wishes to estimate the probability that a customer will fail to pay back a loan. In order to get an estimate of this probability, the bank can only use customer data collected in the past, but wants to base creditworthiness decisions for future customers
on those data. Picking only past data points to base probability estimates on
hardly can qualify as an “absolutely unbiased” sampling of data, and in fact
the bank may grossly miscalculate credit risks when the general customer
body or their economic conditions change over time. These difficulties
have of course been recognized in practical applications of statistics. Text-
books and statistics courses for psychologists and economists contain entire
chapters with instructions on how to collect “unbiased samples”.
Third, if one repeats the repeated measurement, say by carrying out one mea-
surement sequence giving x1 , x2 , . . . and then (or in parallel) another one
giving x′1 , x′2 , . . ., the values P̂N from Equation (88) are bound to differ be-
tween the two series. The limit indicated in Equation (88) must somehow
be robust against different versions of the P̂N . Mathematical probability
theory offers several ways to rigorously define limits of series of probability
quantities which we do not present here. Equation (88) is suggestive only
and I marked it with a (∗ ) to indicate that it is not technically complete.
Among these three difficulties, only the second one is really problematic. The
first one is just a warning that in order to measure a probability with increasing
precision we need to invest an increasingly large effort, — but that is the same
for other measurables in the sciences. The third difficulty can be fully solved
by a careful definition of suitable limit concepts in mathematical probability the-
ory. But the second difficulty is fundamental and raises its ugly head whenever
statistical assertions about reality are made.
In spite of these difficulties, the objectivist view on probability in general and
the frequentist account of how to measure it in particular is widely shared among
empirical scientists. It is also the view of probability which is commonly taught
in courses of statistics for mathematicians. A student of mathematics may not
even have heard of Bayesian statistics at the end of his/her studies. In machine
learning, the frequentist view often leads to learning algorithms which are based
on the principle of maximum likelihood estimation of distributions — as we have
seen in Section 7.3.1. In fact, all what I wrote in these lecture notes up to now
was implicitly based on a frequentist understanding of probability.
The formalisms developed by subjectivists can by and large be seen as generalizations of classical
logic. Classical logic only knows two truth values: true or false. In subjectivist ver-
sions of logic formalisms, a proposition can be assigned graded degrees of “belief”,
“plausibility”, etc. For a very first impression, contrast a classical-logic syllogism
like

if A is true, then B is true
A is true
therefore, B is true

with a plausibility rule like

if A is true, then B becomes more plausible
B is true
therefore, A becomes more plausible
This example is taken from Jaynes 2003, where a situation is described in which
a policeman sees a masked man running away from a jeweler's shop whose window
was just smashed in. The plausibility rule captures the policeman’s inference that
the runner is a thief (A) because if a person is a thief, it is more likely that the
person will run away from a smashed-in shop window (B) than when the person
isn’t a thief. From starting points like this, a number of logic formalisms have been
devised which enrich/modify classical two-valued logic in various ways. If you want
to explore these areas a little further, the Wikipedia articles probabilistic logic,
Dempster-Shafer theory, fuzzy logic, or Bayesian probability are good entry points.
In some of these formalisms the Kolmogorov axioms of frequentist probability re-
appear as part of the respective mathematical apparatus. Applications of such
formalisms arise in artificial intelligence (modeling reasoning under uncertainty),
human-machine interfaces (supporting discourse generation), game theory and
elsewhere.
The discipline of statistics has almost entirely been developed in an objectivist
spirit, firmly rooted in the frequentist interpretation of probability. Machine learn-
ing also in large parts roots in this view. However, a certain subset of machine
learning models and computational procedures have a subjectivist component.
These techniques are referred to as Bayesian model estimation methods. Bayesian
modeling is particularly effective and important when training datasets are small. I will explain the principle of Bayesian model estimation with a super-simple synthetic example. A more realistic but also more complex example will be given in Subsection 8.3.
Consider the general statistical modeling task, here discussed for real-valued
random variables only to simplify the notation. A measurement which yields
real-valued outcomes (like measuring the speed of a diving falcon) is repeated N
times, giving a measurement sequence x1 , . . . , xN . The ith measurement value is
obtained from a RV Xi . These RVs Xi which model the individual measurements
are i.i.d. We assume that the distribution of each Xi can be represented by a pdf
pXi : R → R≥0 . The i.i.d. property of the family (Xi )i=1,...,N implies that all these
pXi are the same, and we call that pdf pX . We furthermore assume that pX is
a parametric pdf, that is, it is a function which is parametrized by a parameter
vector θ, for which we often write pX (θ). Then, the statistical modeling / machine
learning task is to estimate θ from the sample data x1 , . . . , xN . We have seen that
this set-up naturally leads to maximum-likelihood estimation algorithms which
need to be balanced with regards to bias-variance using some regularization and
cross-validation scheme.
For concreteness let us consider a case where N = 2, that is two observa-
tions (only two!) have been collected, forming a training sample D = (x1 , x2 ) =
(0.9, 1.0). We assume that the pdf p_X is a normal distribution with unit standard deviation, that is, the pdf has the form p_X(x) = (1/√(2π)) exp(−(x − µ)²/2). This
leaves the expectation µ as the only parameter that has to be estimated, thus
θ = µ. The learning task is to estimate µ from D = (x1 , x2 ) = (0.9, 1.0).
The classical frequentist answer to this question is to estimate µ by the sample
mean (which, by the way, is the maximum-likelihood estimator for the expectation
µ). That is, one computes
µ̂ = (1/N) ∑_{i=1}^N x_i,    (89)
which in our example gives µ̂ = (0.9 + 1.0)/2 = 0.95.
This is the best a classical-frequentist modeler can do. In a certain well-defined
sense which we will not investigate, the sample mean is the optimal estimator for
the true mean of a real-valued distribution. But “best” is not “good”: with only
two data points, this estimate is quite shaky. It has a high variance: if one would
repeat the observation experiment, getting a new sample x′1 , x′2 , very likely one
would obtain a quite different sample and thus an estimate µ̂′ that is quite different
from µ̂.
Bayesian model estimation shows a way how to do better. It is a systematic
method to make use of prior knowledge that the modeler may have beforehand.
This prior knowledge takes the form of “beliefs” which parameter vectors θ are
more or less plausible. This belief is cast in the form of a probability distribution
over the space Θ of possible parameter vectors θ. In our super-simple example let
us assume that the modeler knows or believes that
That’s all as far as prior knowledge goes in our example. In the absence of more
detailed insight, this belief that the range of µ is limited to [0, 1] is cast in the
Bayesian prior distribution h:
• the degrees of plausibility for µ are described by the uniform distribution h
on [0, 1] (see Figure 48).
This kind of prior knowledge is often available, and it can be quite weak (as in
our example). Abstracting from our example, this kind of knowledge means to fix
a “belief profile” (in the mathematical format of a probability distribution) over
the space Θ of possible parameters θ for a model family. In our mini-example the
modeler felt confident to restrict the possible range of the single parameter θ1 = µ
to the interval [0, 1], with a uniform (= non-committing) distribution of “belief”
over this interval.
Before I finish the treatment of our baby example, I will present the general
schema of Bayesian model estimation.
To start the Bayesian model estimation machinery, available prior knowledge
about parameters θ is cast into the form of a distribution over parameter space.
For a K-parametric pdf pX , the parameter space is RK . Two comments:
In order to proceed with our discussion of general principles, we need to lift our
view from the pdf’s pXi (θ), which model the distribution of single data points, to
the N-dimensional pdf p_{⊗_i X_i} : R^N → R^{≥0} for the distribution of the product RV ⊗_i X_i. It can be written as

p_{⊗_i X_i}((x_1, ..., x_N)) = ∏_i p_{X_i}(x_i),    (90)

or in another notation (observing that all pdfs p_{X_i} are identical, so we can use the generic RV X with p_X = p_{X_i} for all i), as

p_{⊗_i X}((x_1, ..., x_N)) = ∏_i p_X(x_i).    (91)

We write p_{⊗_i X}(D | θ) to denote the pdf value p_{⊗_i X}(θ)((x_1, ..., x_N)) = p_{⊗_i X}(θ)(D) of p_{⊗_i X}(θ) on a particular training data sample D.
Summarizing:
• The pdf h(θ) encodes the modeler’s prior beliefs about how the parametrized
distribution pX (θ) should look like. Parameters θ where h(θ) is large corre-
spond to data distributions that the modeler a priori finds more plausible.
The distribution represented by h(θ) is a hyperdistribution and it is often
called the (Bayesian) prior.
• If θ is fixed, p⊗i X (D | θ) can be seen as a function of data vectors D. This
function is a pdf over the training sample data space. For each possible
training sample D = (x1 , . . . , xN ) it describes how probable this particular
outcome is, assuming the true distribution of X is pX (θ).
• If, conversely, D is fixed, then p⊗i X (D | θ) can be seen as a function of
θ. Seen as a function of θ, p⊗i X (D | θ) is not something like a pdf over
θ-space. Its integral over θ will not usually be one. Seen as a function of θ,
p⊗i X (D | θ) is called a likelihood function — given data D, it reveals certain
models θ as being more likely than others. A model (characterized by θ)
“explains” given data D better if p⊗i X (D | θ) is higher. We have met the
concept of a likelihood function before in Section 7.3.1.
We thus have two sources of information about the sought-after, unknown
true distribution pX (θ): the likelihood p⊗i X (D | θ) of θ given data, and the prior
plausibility encoded in h(θ). These two sources of information are independent
of each other: the prior plausibility is settled by the modeler before data have
been observed, and should not be informed by data. Because the two sources of
information come from “independent” sources of information (belief and data), it
makes sense to combine them by multiplication and consider the product
p⊗i X (D | θ) h(θ).
This product combines the two available sources of information about the
sought-after true distribution pX (θ). When data D are given, this product is a
function of model candidates θ. High values of this product mean that a candidate
model θ is a good estimate, low values mean it’s bad — in the light of both observed
data and prior assumptions.
With fixed D, the product p⊗i X (D | θ) h(θ) is a non-negative function on the
K-dimensional parameter space θ ∈ RK . It will not in general integrate to unity
and thus is not a pdf. Dividing this product by its integral however gives a pdf,
which we denote by h(θ | D):
h(θ | D) = p_{⊗_i X}(D | θ) h(θ)  /  ∫_{R^K} p_{⊗_i X}(D | θ) h(θ) dθ.    (92)
• The posterior distribution h(θ | D) is the final result of a Bayesian model
estimation procedure. It is a probability distribution over candidate models,
which is a richer and often a more useful thing than the single model that is
the result of a classical frequentist model estimation (like the sample mean
from Equation 89).
• Here I have considered real-valued distributions that can be represented
by pdfs throughout. If some of the concerned distributions are discrete or
cannot be represented by pdfs for some reason, one gets different versions of
(92).
• If one wishes to obtain a single, definite model estimate from a Bayesian
modeling exercise, a typical procedure is to compute the mean value of the
posterior. The resulting model θ̂ is called the posterior mean estimate
θ̂ = θ^{PME} = ∫_{R^K} θ h(θ | D) dθ.
Figure 48: The prior h(µ) (uniform on [0, 1]), the likelihood p_{⊗_i X}(D | µ), the posterior mean estimate ≈ 0.565, and the sample mean 0.95.
The green line in Figure 48 gives a plot of p_{⊗_i X}(D | µ), and the red broken line a plot of the posterior distribution h(µ | D). A numerical integration of ∫ µ h(µ | D) dµ yields a posterior mean estimate of θ̂ ≈ 0.565. This is quite different from the sample mean 0.95, revealing the strong influence of the prior distribution on possible models.
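The posterior mean for this two-point example can be reproduced with a few lines of numerical integration on a grid (a sketch of my own; the exact value depends slightly on the discretization):

import numpy as np

D = np.array([0.9, 1.0])                   # the two observations
mu_grid = np.linspace(0.0, 1.0, 10001)     # the prior h is uniform on [0, 1]
dmu = mu_grid[1] - mu_grid[0]

# likelihood of mu given D under the unit-variance normal observation model
log_lik = np.array([-0.5 * np.sum((D - mu) ** 2) for mu in mu_grid])
posterior = np.exp(log_lik)                # the uniform prior is constant on [0,1] and cancels
posterior /= posterior.sum() * dmu         # normalize as in Equation (92)

post_mean = (mu_grid * posterior).sum() * dmu
print(post_mean)   # roughly 0.57, close to the value of about 0.565 reported in the text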
Protein sequences from the same family can be aligned, that is, corresponding sites in the amino acid string can be detected, put side by side,
and compared. For instance, consider the following short section of an alignment
of 7 amino acid sequences, all from the class of “globulin” proteins:
...GHGK...
...AHGK...
...KHGV...
...THAN...
...WHAE...
...AHAG...
...ALGA...
The following lengthy derivations show how the general formula (93) for Bayesian model estimation is worked out in the case of amino acid distribution estimation.
The 20-dimensional pmf for the amino acid distribution at a given site is characterized by a 20-dimensional parameter vector θ = (θ_1, ..., θ_20)′, where θ_j is the probability of observing the j-th amino acid at that site; the parameter space H is the set of all such probability vectors. If the site has been observed N times and amino acid j occurred n_j times (so that the data are the counts D = (n_1, ..., n_20)′ with N = n_1 + ··· + n_20), the likelihood of θ is the multinomial

P(D | θ) = (N! / (n_1! ··· n_20!)) ∏_{j=1}^{20} θ_j^{n_j}.    (94)

As a prior over H one uses a Dirichlet distribution with parameter vector α = (α_1, ..., α_20)′, whose pdf is

h(θ | α) = (1/Z(α)) ∏_{j=1}^{20} θ_j^{α_j − 1},    (95)

where Z(α) = ∫_H ∏_{j=1}^{20} θ_j^{α_j − 1} dθ is the normalization constant which ensures that the integral of h over H is one.
Fortunately the normalization denominator p(D) in (93) need not be analyzed
in more detail because it will later cancel out.
Now we have everything together to calculate the Bayesian posterior distribu-
tion on H:
p(θ | D, α) = P(D | θ) h(θ | α) / p(D)
            = (1/p(D)) (N! / (n_1! ··· n_20!)) ∏_{j=1}^{20} θ_j^{n_j} (1/Z(α)) ∏_{j=1}^{20} θ_j^{α_j − 1}
            = (1/p(D)) (N! / (n_1! ··· n_20!)) (1/Z(α)) ∏_{j=1}^{20} θ_j^{n_j + α_j − 1}
            = (1/p(D)) (N! / (n_1! ··· n_20!)) (Z(D + α)/Z(α)) h(θ | D + α),    (96)
where D + α = (n_1 + α_1, ..., n_20 + α_20)′. Because both p(θ | D, α) and h(θ | D + α) are pdfs, the product of the first three factors in (96) must be one, hence the posterior distribution of models θ is

p(θ | D, α) = h(θ | D + α).    (97)
In order to get the posterior mean estimate, we integrate over the model can-
didates θ with the posterior distribution. I omit the derivation (can be found in
Durbin et al. 2000) and only report the result:
θ_j^{PME} = ∫_H θ_j h(θ | D + α) dθ = (n_j + α_j) / (N + A),    (98)

where N = n_1 + ··· + n_20 and A = α_1 + ··· + α_20. If one compares this to the maximum-likelihood estimates

θ_j^{ML} = n_j / N,    (99)
we can see that the αj parameters of the Dirichlet distribution can be understood
as “pseudo-counts” that are added to the actually observed counts. These pseudo-
counts reflect the subjective intuitions of the biologist, and there is no formal rule
of how to set them correctly.
Adding the αj pseudocounts in (98) can also be considered just as a regular-
ization tool in a frequentist setting. One would skip the Bayesian computations
that give rise to the Bayesian posterior (97) and directly use (98), optimizing α
to navigate on the model flexibility scale in a cross-validation scheme. With α set
to all zero, one gets the maximum-likelihood estimate (99) which will usually be
overfitting; with large values for α one smoothes out the information contained in
the data — underfitting.
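A tiny Python sketch contrasting (98) and (99) on made-up counts (the particular counts and pseudo-counts are illustrative choices of mine):

import numpy as np

# made-up amino acid counts at one alignment site (20 entries, mostly zero
# because only a handful of sequences were observed)
n = np.zeros(20)
n[[0, 6, 7, 10]] = [2, 1, 3, 1]          # N = 7 observations in total

alpha = np.full(20, 0.5)                 # pseudo-counts chosen by the modeler

theta_ml  = n / n.sum()                              # Equation (99): zero for unseen amino acids
theta_pme = (n + alpha) / (n.sum() + alpha.sum())    # Equation (98): smoothed by the prior
print(theta_ml[:8])
print(theta_pme[:8])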
You may ask, why go through all this complex Bayesian thinking and calcu-
lating, and not just do a regularized maximum-likelihood estimation with a solid
cross-validation scheme to get the regularizers αj ’s right? The reason is that the
two methods will not give the same results, although both ultimately use the same
formula θ̂_j = (n_j + α_j)/(N + A). In the frequentist, regularized, maximum-likelihood approach,
the only information that is made use of is the data D, which is scarce and the
careful cross-validation will merely make sure that the estimated model θ̂ = θML
will be the best (with regards to not over- or underfitting) among the ones that
can be estimated from D. In contrast, a Bayesian modeler inserts additional in-
formation by fixing the αj ’s beforehand. If the Bayesian modeler has the right
intuitions about these αj ’s, the found model θ̂ = θPME will generalize better than
θML , possibly much better.
The textbook of Durbin et al., from which this example is taken, shares some thoughts on how a biosequence modeler should select the pseudo-counts. The
proper investment of such soft knowledge makes all the difference in real-life ma-
chine learning problems when data are not abundant.
8.4 Terminology
I conclude this section with a brief summary:
heritage has more or less been lost from sight. The link to the subjec-
tivist philosophical attitude is the way how the modeler’s private beliefs are
inserted into the model estimation via hyperdistributions — the “Bayesian
priors”. Furthermore, prominent current modeling approaches in the cogni-
tive and neurosciences posit that human cognitive processing (Clark 2013;
Tenenbaum, Griffiths, and Kemp 2006) and even the brain “hardware” (Fris-
ton 2003) are organized in hierarchies of Bayesian priors.
9 Sampling algorithms
Many tasks that arise in machine learning (and in mathematical modeling of com-
plex systems in general) require one to “draw” random samples from a probability
distribution. Algorithms which compute “random” samples from a distribution
are needed, for instance (very incomplete listing),
• when noise is needed, e.g. for regularization of learning algorithms,
• when ensembles of models are computed for better accuracy and generaliza-
tion (as in random forests),
• generally, when natural or social or economical systems are simulated,
• when certain stochastic neural network models are trained and exploited
(Hopfield networks, Boltzmann machines),
• when training and evaluating many sorts of graphical models, like Bayesian
networks (next chapter in these LNs).
Sampling algorithms are an essential “fuel” which powers many modern com-
putational techniques. Entire branches of physics, chemistry, biology (and meteo-
rology and economics and ...) could only grow after efficient sampling algorithms
became available.
Algorithms which produce (pseudo-) random numbers from a given distribution are not easy to design. Even something that looks as elementary as
sampling from the uniform distribution — with pseudo-random number generating
algorithms — becomes mathematically and algorithmically involved if one wants
to do it well.
In this section I describe a number of design principles for sampling algorithms
that you should know about.
A sampler for a distribution P_X is a sequence of random variables X_1, X_2, ... whose empirical distribution converges to P_X, that is, for every event A,

P_X(A) = lim_{N→∞} (1/N) ∑_{i=1}^N 1_A ◦ X_i    P-almost surely.
The notion of a sample must be clearly distinguished from the notion of a
sampler. A “sample” is more often than not implied to be an “i.i.d. sample”,
that is, it results from independent random variables X1 , . . . , XN that all have the
same distribution as X, the random variable whose distribution PX the sample
is drawn from. This i.i.d. assumption is standardly made for real-world training
data.
In contrast, the random variables X1 , X2 , . . . of a “sampler” for PX need not
individually have the same distribution as X, and they need not be independent.
For instance, let P_X be the uniform distribution on the unit interval E = [0, 1]. Here are some samplers:
• All Xi are i.i.d. with each of them being uniformly distributed on [0, 1]. This
is a dream of a sampler. But nobody knows how to build such a sampler
without using quantum computers.
• The subset X2i of RVs with even indices are identically and uniformly dis-
tributed on [0, 1]. The RVs X2i+1 repeat the value of X2i . Here the RVs Xi
are identically but not independently distributed.
• The subset X2i of RVs with even indices are identically and uniformly dis-
tributed on [0, 1/2] and the subset of X2i+1 are identically and uniformly
distributed on [1/2, 1]. Here the RVs Xi are independently but not identi-
cally distributed.
• Let Yi be an i.i.d. sequence where each Yi draws a value yi from N(0, 1), and let ε be a small positive real number. X1 always evaluates to 0, and the following Xi are inductively and stochastically defined by xi = xi−1 + ε yi (mod 1). This gives a random walk (more specifically, a Brownian motion process) whose paths slowly and randomly migrate across [0, 1]. This kind of sampler will turn out to be the most important type when it comes to sampling from complex distributions — this is a Markov chain Monte Carlo (MCMC) sampler (a small code sketch follows after this list).
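To make the last construction concrete, here is a minimal Python/NumPy sketch (my own illustration, not part of the original notes) of that random-walk sampler; the step size ε = 0.01 and the histogram check are illustrative choices only.

```python
import numpy as np

def random_walk_sampler(n_steps, eps=0.01, seed=0):
    """The random-walk sampler from the last bullet point:
    x_1 = 0, x_i = (x_{i-1} + eps * y_i) mod 1 with y_i ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = 0.0
    for i in range(1, n_steps):
        x[i] = (x[i - 1] + eps * rng.standard_normal()) % 1.0
    return x

# The empirical distribution of a long path approaches the uniform distribution on [0, 1]:
path = random_walk_sampler(100_000)
print(np.histogram(path, bins=10, range=(0.0, 1.0))[0] / len(path))  # roughly equal bin frequencies
```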
which draws a random double-precision number from the continuous, uniform dis-
tribution over the interval [0, 1]; in C it’s also called rand but here generates a
random integer from the discrete uniform distribution between 0 and some max-
imal value. At any rate, a pseudo-random number generator of almost uniformly
distributed numbers over some interval is offered by every programming language
that I know, including Microsoft Word. And that’s about it; the only sampler
you can directly call in most programming environments is just that, a uniform
sampler.
By the way, it is by no means easy to program a good pseudo-random number
generator – in fact, designing such generators is an active field of research. If you
are interested – the practical guide to using pseudorandom number generators by
Jones 2010 is fun to read and very illuminating.
Assume you have a sampler Ui for the uniform distribution on [0, 1], but you
want to sample from another distribution PX on the measure space E = R, which
has a pdf f (x). Then you can use the sampler Ui indirectly to sample from PX by
a coordinate transformation, as follows.
First, compute the cumulative distribution function $\varphi : \mathbb{R} \to [0, 1]$, which is defined by $\varphi(x) = \int_{-\infty}^{x} f(u)\, du$, and its inverse $\varphi^{-1}$. The latter may be tricky or impossible to obtain analytically – then numerical approximations must be used. Now obtain a sampler $X_i$ for $P_X$ from the uniform sampler $U_i$ by
$$X_i = \varphi^{-1} \circ U_i.$$
This indeed yields a sampler for $P_X$: for every (measurable) event $A$,
$$P_X(A) = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}_A \circ X_i = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}_A \circ \varphi^{-1} \circ U_i,$$
and this last limit evaluates to the required probability, because
$$\lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}_A \circ \varphi^{-1} \circ U_i = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}_{\varphi(A)} \circ U_i = P_U(\varphi(A)) = P_X(A).$$
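As an illustration (a sketch of my own, not from the original text), here is sampling by coordinate transformation in Python/NumPy for a case where $\varphi^{-1}$ is known in closed form, the exponential distribution; the rate value lam = 2.0 is an arbitrary illustrative choice.

```python
import numpy as np

def sample_exponential(n, lam=2.0, seed=0):
    """Sampling by coordinate transformation: X_i = phi^{-1}(U_i), where
    phi(x) = 1 - exp(-lam * x) is the cdf, so phi^{-1}(u) = -ln(1 - u) / lam."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 1.0, size=n)   # the uniform sampler U_i
    return -np.log(1.0 - u) / lam       # the transformed sampler X_i

x = sample_exponential(100_000)
print(x.mean())  # close to the true mean 1 / lam = 0.5
```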
[Figure: sampling by coordinate transformation — a uniform random value $U_i(\omega)$ is mapped through the inverse cumulative distribution function, $X_i(\omega) = \varphi^{-1}(U_i(\omega))$, shown together with the pdf $f(x)$ and the cdf $\varphi(x)$.]
One method (see https://en.wikipedia.org/wiki/Normal_distribution#Generating_values_for_normal_random_variables) which I found very elegant is the Box-Muller algorithm, which produces a N(0, 1)-distributed RV C from two independent uniform-[0,1]-distributed RVs A and B:
$$C = \sqrt{-2 \ln A}\; \cos(2 \pi B).$$
The computational cost is to compute a logarithm and a cosine. An even
(much) faster, but more involved (and very tricky) method is the Ziggurat al-
gorithm (check Wikipedia for “Ziggurat_algorithm” for an article written with
tender care). There exist wonderful things under the sun.
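A minimal NumPy sketch of the Box-Muller transform (illustrative only; in practice one would simply call a library routine such as NumPy's own normal sampler):

```python
import numpy as np

def box_muller(n, seed=0):
    """Box-Muller: two independent uniform-[0,1] RVs A, B give the
    standard normal RV C = sqrt(-2 ln A) * cos(2 pi B)."""
    rng = np.random.default_rng(seed)
    a = 1.0 - rng.uniform(0.0, 1.0, size=n)   # values in (0, 1], avoids log(0)
    b = rng.uniform(0.0, 1.0, size=n)
    return np.sqrt(-2.0 * np.log(a)) * np.cos(2.0 * np.pi * b)

c = box_muller(100_000)
print(c.mean(), c.std())  # approximately 0 and 1
```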
Sampling by coordinate transformation can be generalized to higher-dimensional
distributions. Here is the case of a 2-dimensional pdf f (x1 , x2 ), from which the
general pattern should become clear. First define the cumulative density function
in the first dimension as the cumulative density function of the marginal distribu-
tion of the first coordinate x1
Z x1 Z ∞
φ1 (x1 ) = f (u, v) dv du
−∞ −∞
A widely used method for drawing a random vector x from the n-dimensional multivariate normal distribution with mean vector µ and covariance matrix Σ works as follows (a short code sketch follows after the three steps):
1. Compute the Cholesky decomposition (matrix square root) of Σ, that is,
find the unique lower triangular matrix A such that A A′ = Σ.
2. Sample n numbers z1 , . . . , zn from the standard normal distribution.
3. Output x = µ + A(z1 , . . . , zn )′ .
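A small NumPy sketch of this three-step recipe (the concrete values of mu and Sigma are arbitrary illustrative choices):

```python
import numpy as np

def sample_mvn(mu, Sigma, n, seed=0):
    """Draw n samples from N(mu, Sigma) via the recipe above:
    x = mu + A z with A A' = Sigma (Cholesky factor) and z ~ N(0, I)."""
    rng = np.random.default_rng(seed)
    A = np.linalg.cholesky(Sigma)              # lower triangular, A @ A.T == Sigma
    z = rng.standard_normal((n, len(mu)))      # rows are i.i.d. standard normal vectors
    return mu + z @ A.T                        # each row is one sample mu + A z

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
x = sample_mvn(mu, Sigma, 100_000)
print(x.mean(axis=0))                  # approximately mu
print(np.cov(x, rowvar=False))         # approximately Sigma
```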
9.3 Rejection sampling
Sampling by transformation from the uniform distribution can be difficult or impossible if the cumulative distribution function of the pdf f one wishes to sample from has no simple-to-compute inverse. If this happens, it is sometimes possible to sample from a simpler distribution with pdf g that is loosely related to the target pdf f and for which sampling by transformation works, and with a simple trick get from that a sampler for the pdf of interest, f. To understand this rejection sampling we first need to generalize the notion of a pdf: call any function $g_0 : \mathbb{R}^n \to \mathbb{R}_{\ge 0}$ with a finite, positive integral a proto-pdf.
If one divides a proto-pdf g0 by its integral, one obtains a pdf g. Now consider
a pdf f on Rn that you want to sample from, but sampling by transformation
doesn’t work. However, you find a proto-pdf g0 ≥ f for whose associated pdf g
you know how to sample from, that is, you have a sampler for g. With that in
your hands you can construct a sampler Xi for f as follows. In order to generate a random value x for Xi, carry out the following procedure (compare Figure 50):

Set found_x = 0.
While found_x == 0 do
    draw a candidate x̃ from g;
    with probability f(x̃)/g0(x̃), accept the candidate: set x = x̃ and found_x = 1.
Figure 50: The principle of rejection sampling. Candidates x̃ are first sampled
from g, then accepted (“painted orange”) with probability f (x̃)/g0 (x̃).
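The following Python sketch (my own illustration, not from the original notes) runs this acceptance/rejection loop for a concrete toy case: the target pdf f is the Beta(2,2) density 6x(1−x) on [0,1], and the proto-pdf g0 is the constant 1.5 ≥ f, whose normalized version g is simply the uniform distribution on [0,1].

```python
import numpy as np

def rejection_sample(n, seed=0):
    """Rejection sampling for the target pdf f(x) = 6 x (1 - x) on [0, 1] (a Beta(2,2)
    density), using the constant proto-pdf g0(x) = 1.5 >= f(x); the normalized
    version g of g0 is the uniform distribution on [0, 1]."""
    rng = np.random.default_rng(seed)
    f = lambda x: 6.0 * x * (1.0 - x)
    g0 = lambda x: 1.5
    samples = []
    while len(samples) < n:
        x_cand = rng.uniform(0.0, 1.0)                    # candidate drawn from g
        if rng.uniform(0.0, 1.0) < f(x_cand) / g0(x_cand):
            samples.append(x_cand)                        # accepted with probability f/g0
    return np.array(samples)

x = rejection_sample(50_000)
print(x.mean())  # approximately 0.5, the mean of Beta(2, 2)
```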
9.4 Proto-distributions
The pdf's of parametrized continuous distributions over $\mathbb{R}^k$ are typically represented by a formula of the kind
$$p(x \,|\, \theta) = \frac{1}{\int_{\mathbb{R}^k} p_0(x \,|\, \theta)\, dx}\; p_0(x \,|\, \theta), \qquad (100)$$
where $p_0 : \mathbb{R}^k \to \mathbb{R}_{\ge 0}$ defines the shape of the pdf and the factor $1 / \int_{\mathbb{R}^k} p_0(x \,|\, \theta)\, dx$ normalizes $p_0$ to integrate to 1. For example, in the pdf
$$p(x \,|\, \mu, \Sigma) = \frac{1}{(2\pi)^{k/2} \det(\Sigma)^{1/2}}\, \exp\left(-\frac{1}{2} (x - \mu)' \Sigma^{-1} (x - \mu)\right)$$
for the multidimensional normal distribution, the shape is given by $p_0(x \,|\, \mu, \Sigma) = \exp\left(-\frac{1}{2} (x - \mu)' \Sigma^{-1} (x - \mu)\right)$ and the normalization prefactor $1/\left((2\pi)^{k/2} \det(\Sigma)^{1/2}\right)$ is equal to $1 / \int_{\mathbb{R}^k} p_0(x \,|\, \mu, \Sigma)\, dx$.
It is a quite general situation in machine learning that a complex model of a
distribution is only known up to an undetermined scaling factor — that is, one has
a proto-distribution but not a distribution proper. This is often good enough. For
instance, in our initial TICS demo (Section 1.2.1), we saw that the TICS system
generated a list of captions, ordered by plausibility. In order to produce this list,
the TICS system did not need a proper probability distribution on the space of possible captions — a proto-distribution is good enough to determine the best, second-best, etc. caption.
Proto-distributions are not confined to continuous distributions on Rk , where
they appear as proto-pdf’s. In discrete sample spaces S = {s1 , . . . , sN } or S =
{s1 , s2 , . . . , } where the distribution is given by a parametrized pmf p(θ), the nor-
malization factor becomes a sum over S:
$$p(x \,|\, \theta) = \frac{1}{\sum_{s \in S} p_0(s)}\; p_0(x),$$
When S is very large, computing this normalization sum becomes infeasible, and one then has to work with the proto-pmf p0 only. Very large S appear standardly in systems which are modeled by many interacting random variables. We will meet such proto-pmf's in the next section on graphical models.
This is also a good moment to mention the softmax function. This is a ubiqui-
tously used machine learning trick to transform any vector y = (y1 , . . . , yn )′ ∈ Rn
into an n-dimensional probability vector p by
$$p = \frac{1}{\sum_{i=1,\ldots,n} \exp(\alpha\, y_i)}\; \left(\exp(\alpha\, y_1), \ldots, \exp(\alpha\, y_n)\right)'.$$
The softmax is standardly used, for instance, to transform the output vector
of a neural network into a probability vector, which is needed when one wants to
interpret the network output probabilistically.
The factor α ≥ 0 determines the entropy of the resulting probability vector. If α = 0, p is the uniform distribution on {1, . . . , n}, and in the limit α → ∞, p becomes the binary vector which is zero everywhere except at the position i where yi is maximal.
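A minimal NumPy sketch of the softmax (the max-subtraction inside is a standard numerical safeguard, not part of the formula above; the test vector y is an arbitrary illustrative choice):

```python
import numpy as np

def softmax(y, alpha=1.0):
    """Softmax as in the formula above. Subtracting max(y) before exponentiating
    does not change the result (numerator and denominator are rescaled by the
    same factor) but avoids numerical overflow for large alpha * y."""
    z = alpha * (np.asarray(y, dtype=float) - np.max(y))
    e = np.exp(z)
    return e / e.sum()

y = np.array([2.0, 1.0, -0.5])
print(softmax(y, alpha=0.0))    # uniform distribution [1/3, 1/3, 1/3]
print(softmax(y, alpha=1.0))    # graded probability vector
print(softmax(y, alpha=50.0))   # nearly one-hot at the position of the maximal y_i
```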
Notes:
1. The word “proto-distribution” is not in general use and can be found in the
literature only rarely.
other distributions given by a shape-defining proto-pdf. Boltzmann distri-
butions are imported from statistical physics and thermodynamics, where
they play a leading role. In machine learning, Boltzmann distributions oc-
cur, for instance, in Markov random fields which are a generalization of
Markov chains to spatial patterns that are used (among other applications)
in image analysis. They are a core ingredient for Boltzmann machines (Ack-
ley, Hinton, and Sejnowski 1985), a type of neural network which I consider
one of the most elegant and fundamental models for general learning sys-
tems. A computationally simplified version, restricted Boltzmann machines,
became the starting point for what today is known as deep learning (Hinton and Salakhutdinov 2006). Furthermore, Boltzmann distributions are also
instrumental for free-energy models of intelligent agents, a class of models
of brain dynamics popularized by Karl Friston (for example, Friston 2005)
which explain the emergence of adaptive, hierarchical information repre-
sentations in biological brains and artificial intelligent “agents”, and which
today are a mainstream approach in the cognitive neurosciences and cog-
nitive science to understanding adaptive cognition. And last but not least,
the Boltzmann distribution is the root mechanism in simulated annealing
(Kirkpatrick, Gelatt, and Vecchi 1983), a major general-purpose strategy
for finding minimal-cost solutions in complex search spaces.
Sadly in this course there is no time for treating energy-based models of
information processing. I will present this material in my course on neural
networks in the Summer semester (lecture notes remain to be written).
And if you want to inform yourself about the use of MCMC in deep learning (advanced stuff), a talk by Radford Neal, delivered recently in honor of Geoffrey Hinton (Neal 2019), gives you an orientation. And if you think of entering a spectacular academic career, it is not a bad idea to write a PhD thesis which “only” gives the first readable, coherent, comprehensive tutorial introduction to some already existing kind of mathematical modeling method which, until you write that tutorial, has been documented only in disparate technical papers written from different perspectives and using different notations, making that knowledge inaccessible to a wider audience.
• right-infinite: n = 1, 2, 3, . . ., or
• left-right infinite: n ∈ Z.
For defining samplers, one has to use n ∈ N: a sampler must, by definition, be able to be run for arbitrarily long times.
A Markov chain on S is fully characterized by the initial distribution PX1
(needed for starting finite and right-infinite chains but not needed for left-right
infinite chains which have no start) and the conditional transition distributions
for which we also write (following Neal’s notation)
Tn (x | y). (104)
2. If xn has been generated, generate xn+1 by a random draw from the distri-
bution PXn+1 |Xn =xn which is specified by the transition kernel T (x|xn ).
A note on notation: a mathematically correct and general definition and no-
tation for transition kernels on arbitrary observation spaces S requires tools from
measure theory which we haven’t introduced. Consider the notation T (x|y) as a
somewhat sloppy shorthand. When dealing with discrete distributions where S is
finite, say, S = {s1 , . . . , sk }, then consider T (x|y) as a k × k Markov transition
matrix M where M(i, j) = P(Xn+1 = sj | Xn = si). The i-th row of M holds the vector of probabilities by which the process transits from state si to the states s1, . . . , sk indexing the columns.
An example: consider the two-state observation space S = {x1 , x2 }. Then
the initial distribution PX1 is given by a 2-dimensional probability vector, for
instance (0.3, 0.7)′ would mean that the process starts with an observation x1
with probability 0.3. A transition matrix might be
$$M = \begin{pmatrix} 0.1 & 0.9 \\ 0.4 & 0.6 \end{pmatrix},$$
whose first row means that if at time n the process has generated state x1 , at time
n + 1 one will observe x1 again with probability 0.1 and x2 with probability 0.9.
When dealing with continuous distributions of next states x (given y) which
have a pdf, regard T (x|y) as denoting the conditional pdf pX|Y =y of x. Note that
when we write T (x|y), we refer not to a single pdf but to a family of pdf’s; for
every y we have another conditional pdf T (x|y).
If a Markov chain with finite state set S = {s1 , . . . , sk } and Markov transition
matrix M is executed m times, the transition probabilities to transit from state
si to state sj after m steps can be found in the m-step transition matrix $M^m$:
$$P(X_{n+m} = s_j \,|\, X_n = s_i) = M^m(i, j),$$
where $M^m = M \cdot M \cdot \ldots \cdot M$ ($m$ times).
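A short numerical sketch of these formulas, using the two-state example from above (the 50-step iteration count is an arbitrary illustrative choice):

```python
import numpy as np

# The two-state example from above: row i holds the transition probabilities from state s_i.
M = np.array([[0.1, 0.9],
              [0.4, 0.6]])
p1 = np.array([0.3, 0.7])                 # initial distribution P_{X_1}

# m-step transition probabilities are the entries of the matrix power M^m.
print(np.linalg.matrix_power(M, 5))

# Distribution of X_{n+1} from the distribution of X_n (row-vector convention): p_{n+1} = p_n M.
p = p1.copy()
for _ in range(50):
    p = p @ M
print(p)   # converges to the stationary distribution, here about [0.308, 0.692]
```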
We now consider the sample space S = Rk — this being the state space needed
for most practical uses of MCMC sampling — and assume that all distributions
of interest are specified by pdf’s. We consider a homogeneous Markov chain. Its
transition kernel T (x|y) can be identified with the pdf pXn+1 |Xn =y , and in the
remainder of this section, we will write T (x|y) for this pdf. Such a Markov chain
with continuous sample space Rk is specified by an initial distribution on Rk which
we denote by its pdf g (1) . The pdf’s g (n+1) of distributions of subsequent RVs Xn+1
can be calculated from the pdf’s g (n) of the preceding RV Xn by
$$g^{(n+1)}(x) = \int_{\mathbb{R}^k} T(x \,|\, y)\, g^{(n)}(y)\, dy. \qquad (106)$$
Please make sure you understand this equation. It is the key to everything
which follows.
For the theory of MCMC sampling, a core concept is an invariant distribution
of a homogenous Markov chain.
Definition 9.4 Let g be the pdf of some distribution on Rk , and let T (x|y) be the
(pdf of the) transition kernel of a homogeneous Markov chain with values in Rk .
Then g is the pdf of an invariant distribution of T (x|y) if
$$g(x) = \int_{\mathbb{R}^k} T(x \,|\, y)\, g(y)\, dy. \qquad (107)$$
Except for certain pathological cases, a transition kernel generically has at least one invariant distribution.
Furthermore, it is often the case that there exists exactly one invariant distri-
bution g of T (x|y), and the sequence of distributions g (n) converges to g from any
initial distribution. We will call the transition kernel T (x|y) ergodic if it has this
property. The (unique) invariant distribution g of an ergodic Markov chain is also
called its asymptotic distribution or its stationary distribution or its equilibrium
distribution.
first, the amount of computation required to simulate each transition; second, the
time for the chain to converge to the equilibrium distribution, which gives the
number of states that must be discarded from the beginning of the chain; third, the
number of transitions needed to move from one state drawn from the equilibrium
distribution to another state that is almost independent, which determines the
number of states taken from the chain at equilibrium that are needed to produce
an estimate of a given accuracy. The latter two factors are related...”
A brief explanation: The best possible sampler for g would be one where each
new sampled value is independent from the previous ones — in other words, one
would like to have an i.i.d. sampler. If observations xn obtained from a sampler
depend on previous observations xn−1 , xn−2 , . . ., there is redundant information in
the sample path. Typically paths (xn )n=1,2,... obtained from running an MCMC
sampler will have more or less strong dependencies between values observed at
nearby times. The first values x2, x3, . . . will depend, to a decreasing degree, on the arbitrary initial value x1 and should be discarded. After this initial
“washout” phase, one usually keeps only those values xn , xn+d , xn+2d , . . . whose
distance d from each other is large enough to warrant that the dependency of
xn+d on xn has washed out to a negligible amount, that is, P (Xn+d = x | Xn =
y) ≈ P (Xn+d = x).
A standard way to construct an MCMC transition kernel T (x|y) which leads
to a Markov chain that has the target distribution g as its invariant distribution is
to ensure that the Markov chain (Xn )n=1,2,... has the property of detailed balance
with respect to g. Detailed balance connects X1 , X2 , . . . to g in a strong way.
It says that if we pick some state x ∈ Rk with the probability given by g and
multiply its probability g(x) with the transition probability density T (y|x) —
that is, we consider the probability density of transiting from x to y weighted
with the probability density of x — then this is the same as the reverse weighted
transiting probability density from y to x:
Definition 9.5 Let g be a pdf on Rk and let T (y|x) be the pdf of a transition
kernel of a homogeneous Markov chain on Rk . Then T (y|x) has the detailed
balance property with respect to g if
$$T(y \,|\, x)\; g(x) = T(x \,|\, y)\; g(y) \quad \text{for all } x, y \in \mathbb{R}^k.$$
9.5.3 The Gibbs sampler
Let $g$ be a pdf on $\mathbb{R}^k$. For $i = 1, \ldots, k$ and $x \in \mathbb{R}^k$, where $x = (x_1, \ldots, x_k)'$, let $g_i(\cdot \,|\, x) : \mathbb{R} \to \mathbb{R}_{\ge 0}$,
$$g_i(y \,|\, x) = \frac{g\left((x_1, \ldots, x_{i-1}, y, x_{i+1}, \ldots, x_k)'\right)}{\int_{\mathbb{R}} g\left((x_1, \ldots, x_{i-1}, z, x_{i+1}, \ldots, x_k)'\right) dz}$$
be the conditional density function of coordinate $i$ given the values of $x$ on the other coordinates. Let $g^{(1)}$ be the pdf of an initial distribution on $\mathbb{R}^k$, and let an initial value $x^{(1)} = (x_1^{(1)}, \ldots, x_k^{(1)})'$ be drawn from $g^{(1)}$. We define a Markov chain in which, cycling through the coordinates $i = 1, \ldots, k$, the next state is equal to the previous state except in coordinate $i$, where the value $y$ is freshly sampled from $g_i$.
This method is known as the Gibbs sampler. It uses k different transition
kernels T1 , . . . , Tk , where Ti is employed at times νk + i and updates only the i-th
coordinate.
This Markov chain is not homogeneous because we cycle through different
transition kernels. However, we can condense a sequence of k successive updates
into a single update that affects all coordinates by putting T = Tk ◦ · · · ◦ T1 , which
yields a homogeneous Markov chain (Yn )n=1,2,... with transition kernel T whose
path is derived from a path (xn )n=1,2,... of the “cycling” Markov chain by
$$y^{(1)} = x^{(1)}, \quad y^{(2)} = x^{(k+1)}, \quad y^{(3)} = x^{(2k+1)}, \quad \ldots$$
There is no general guarantee that the condensed kernel T is ergodic; checking whether it is needs to be done on a case-by-case basis. For instance, if g is a distribution on R2 whose support lies exclusively in the first and third orthant, T
would not be ergodic, because the Gibbs sampler, when started from a point in
the third orthant, would be unable to jump into the first orthant. This situation
is depicted in Figure 51.
Figure 51: A bipartite pdf g where the Gibbs sampler would fail.
The Gibbs sampler is practically applicable only if one can easily sample from
the 1-dimensional conditional distributions gi . Therefore, the Gibbs sampler is
mostly employed in cases where these gi are parametric, analytical distributions,
or in cases where S is finite and the gi thus become simple probability vectors (and would be represented by pmfs, not pdfs). The Gibbs sampler is attractive for its simplicity. A number of extensions and refinements of the basic idea are presented in Neal 1993.
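For illustration (a sketch of my own, not from the original notes), here is a Gibbs sampler for a case where the one-dimensional conditionals are analytically available: a 2-dimensional standard normal distribution with correlation ρ = 0.8, for which x1 | x2 ~ N(ρ x2, 1 − ρ²) and symmetrically for the other coordinate. The value of ρ and the chain length are illustrative choices.

```python
import numpy as np

def gibbs_bivariate_gaussian(n_steps, rho=0.8, seed=0):
    """Gibbs sampler for a 2-dimensional standard normal with correlation rho.
    Both one-dimensional conditionals g_i are Gaussian here:
    x1 | x2 ~ N(rho * x2, 1 - rho**2), and symmetrically for x2 | x1."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)                               # arbitrary initial state
    path = np.empty((n_steps, 2))
    cond_std = np.sqrt(1.0 - rho ** 2)
    for n in range(n_steps):
        x[0] = rng.normal(rho * x[1], cond_std)   # update coordinate 1 from g_1(. | x)
        x[1] = rng.normal(rho * x[0], cond_std)   # update coordinate 2 from g_2(. | x)
        path[n] = x                               # one full cycle = one step of the condensed chain
    return path

path = gibbs_bivariate_gaussian(100_000)
print(np.corrcoef(path, rowvar=False))   # off-diagonal entries approximately 0.8
```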
in two substeps, which together ensure detailed balance w.r.t. the conditional
distribution gi :
symmetric in the sense that
$$S_i\!\left(y \,\big|\, (x_1^{(\nu k+i-1)}, \ldots, x_{i-1}^{(\nu k+i-1)}, y', x_{i+1}^{(\nu k+i-1)}, \ldots, x_k^{(\nu k+i-1)})'\right) = S_i\!\left(y' \,\big|\, (x_1^{(\nu k+i-1)}, \ldots, x_{i-1}^{(\nu k+i-1)}, y, x_{i+1}^{(\nu k+i-1)}, \ldots, x_k^{(\nu k+i-1)})'\right)$$
for all $y, y', x$.
$$\forall x = (x_1, \ldots, x_k)' \in \mathbb{R}^k,\ \forall y, y' \in \mathbb{R}:$$
$$g_i(y' \,|\, x)\; T_i\!\left(y \,|\, (x_1, \ldots, x_{i-1}, y', x_{i+1}, \ldots, x_k)'\right) = g_i(y \,|\, x)\; T_i\!\left(y' \,|\, (x_1, \ldots, x_{i-1}, y, x_{i+1}, \ldots, x_k)'\right). \qquad (109)$$
There are several ways to ensure this. For instance, it is not difficult to see
(exercise!) that if Si (y ∗ | x(νk+i−1) ) is symmetric, using
$$A_i(y^* \,|\, (x_1, \ldots, x_k)') = \frac{g_i(y^* \,|\, (x_1, \ldots, x_k)')}{g_i(y^* \,|\, (x_1, \ldots, x_k)') + g_i(x_i \,|\, (x_1, \ldots, x_k)')} \qquad (110)$$
yields detailed balance. This is called the Boltzmann acceptance function. How-
ever, the Metropolis algorithm standardly uses another acceptance distribution,
namely the Metropolis acceptance distribution
$$A_i(y^* \,|\, (x_1, \ldots, x_k)') = \min\left(1,\; \frac{g_i(y^* \,|\, (x_1, \ldots, x_k)')}{g_i(x_i \,|\, (x_1, \ldots, x_k)')}\right). \qquad (111)$$
The following chain of equalities verifies the detailed balance condition (109) for the Metropolis acceptance function, using the symmetry of $S_i$:
$$\begin{aligned}
g_i(y^* \,|\, x)\; T_i(x_i \,|\, (x_1, \ldots, y^*, \ldots, x_k)')
&= g_i(y^* \,|\, x)\; S_i(x_i \,|\, (x_1, \ldots, y^*, \ldots, x_k)')\; A_i(x_i \,|\, x) \\
&= S_i(x_i \,|\, (x_1, \ldots, y^*, \ldots, x_k)')\; \min\left(g_i(y^* \,|\, x),\, g_i(x_i \,|\, x)\right) \\
&= S_i(y^* \,|\, (x_1, \ldots, x_i, \ldots, x_k)')\; \min\left(g_i(y^* \,|\, x),\, g_i(x_i \,|\, x)\right) \\
&= g_i(x_i \,|\, x)\; S_i(y^* \,|\, (x_1, \ldots, x_i, \ldots, x_k)')\; A_i(y^* \,|\, x) \\
&= g_i(x_i \,|\, x)\; T_i(y^* \,|\, (x_1, \ldots, x_i, \ldots, x_k)').
\end{aligned}$$
Notes:
1. Both the Boltzmann and the Metropolis acceptance functions also work with
proto-pdf’s gi . This is an invaluable asset because often it is infeasible to
compute the normalization factor 1/Z needed to turn a proto-pdf into a pdf.
2. Like the Gibbs sampler, this local Metropolis algorithm can be turned into a
global one, then having a homogeneous Markov chain with transition kernel
T , by condensing one k-cycle through the component updates into a single
observation update.
4. The Metropolis algorithm is more widely applicable than the Gibbs sampler
because it obviates the need to sample from conditional distributions. The
price one has to pay is a higher computational cost, because the rejection
events lead to duplicate sample points which obviously leads to an undesir-
able “repetition redundancy” in the sample that has to be compensated by
a larger sample size.
6. The quality of Metropolis sampling depends very much on the used proposal
distribution. Specifically, the variance of the proposal distribution should
neither be too small (then exploration of new states is confined to a narrow
neighborhood of the current state, implying that the Markov chain traverses
the distribution very slowly) nor too large (then one will often be propelled
far out in the regions of g where it almost vanishes, leading to numerous
rejection events). A standard choice is a normal distribution centered on
the current state.
7. The Metropolis algorithm (and the Gibbs sampler, too) works best if the k state components are statistically as independent as possible. Then the state space exploration in each dimension proceeds independently from the exploration in the other dimensions. If the components are statistically coupled, fast exploration in one dimension is hindered by the fact that moving in this dimension entails a synchronized move in the other dimensions; larger changes in one dimension have to wait for the correlated changes in the others. Thus, if possible, one should coordinate-transform the distribution before sampling with the aim of decorrelating the dimensions (which in principle does not make them independent, but in practice is the best one can do).
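To make the above concrete, here is a minimal Metropolis sampler sketch in Python/NumPy (a global one-dimensional version, so the coordinate index i plays no role). The target proto-pdf, a mixture of two Gaussian bumps, and the proposal standard deviation are arbitrary illustrative choices.

```python
import numpy as np

def metropolis(g0, n_steps, step_std=1.0, x0=0.0, seed=0):
    """Metropolis sampler with a symmetric Gaussian proposal centered on the current
    state and the acceptance function (111). Only a proto-pdf g0 is needed: the
    unknown normalization constant cancels in the ratio g0(y) / g0(x)."""
    rng = np.random.default_rng(seed)
    x = x0
    path = np.empty(n_steps)
    for n in range(n_steps):
        y = x + step_std * rng.standard_normal()       # symmetric proposal S(y | x)
        if rng.uniform() < min(1.0, g0(y) / g0(x)):    # Metropolis acceptance
            x = y                                      # accepted: move to the proposal
        path[n] = x                                    # rejected: the current state is repeated
    return path

# An (unnormalized) target: a mixture of two Gaussian bumps.
g0 = lambda x: np.exp(-0.5 * (x + 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x - 3.0) ** 2)
path = metropolis(g0, 100_000)
print(path.mean())   # roughly the mixture mean, about -1/3 for these bumps
```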
I also mention that the reconstruction of evolutionary trees from the currently observed DNA outfits of different species is a task that has recently acquired some unforeseen relevance: if a new coronavirus variant appears, it is exactly this sort of analysis that generates epidemiological hypotheses about where and when this new variant may have evolved away from previously known variants.
Data. DNA strings from l related living species, each string of length N, have been aligned without gaps. In the reported paper, l = 32 African fish species
(cichlids from central African lakes except one cichlid species from America that
served as a control) were represented by DNA sequences of length 1044. 567 of
these 1044 sites were identical across all considered species and thus carried no
information about phylogeny. The remaining N = 477 sites represented the data
D that was entered into the analysis. Reminder: the DNA symbol alphabet is
Σ = {A, C, G, T }.
Task. Infer from data D the most likely phylogenetic tree, assuming that the
considered living species have a common ancestor from which they all descended.
Modeling assumptions. Mutations act on all sites independently. Muta-
tions occur randomly according to a “molecular clock”, i.e. a probability distribu-
tion of the form
$$P^{\mathrm{clock}}(y \,|\, x, t, \theta) \qquad (112)$$
specifying the probability that the symbol y ∈ {A, C, G, T} occurs at a given site
where t years earlier the symbol x occurred. θ is a set of parameters specifying
further modeling assumptions about the clock mechanism. Mau et al. used the
molecular clock model proposed in Hasegawa, Kishino, and Yano 1985 which uses
two parameters θ = (ϕ, κ), the first quantifying an overall rate of mutation, and
the second a difference of rates between the more frequent mutations that leave the type of base (purine or pyrimidine) unchanged (“transitions”) versus those that change the type (“transversions”). All that we need to know here, not pretending to be
biologists, is that (112) can be efficiently computed. Note that θ is not known
beforehand but has to be estimated/optimized in the modeling process, based on
the data D.
Representing phylogenetic trees. A phylogenetic tree is a binary tree
Ψ. The nodes represent taxa (species); leaves are living taxa, internal nodes
are extinct taxa, the root node is the assumed common ancestor. Mau et al.
plot their trees bottom-up, root node at the bottom. Vertical distances between
nodes metrically represent the evolutionary timespans t between nodes. Clades
are subsets of the leaves that are children of a shared internal node. Figure 52
shows a schematic phylogenetic tree and some clades.
A given evolutionary history can be represented by trees in $2^{n-1}$ different but equivalent ways (where n is the number of living species), through permuting the two child branches of the internal nodes.
convenient representation than a tree graph is given by
1. a specification of a left-to-right order σ of leaves (in Figure 52, σ = (1, 4, 7, 2, 3, 6, 5)),
plus
2. a numerical vector a of length l − 1 which fixes the heights (evolutionary time depths) of the l − 1 internal nodes.
Figure 52: An exemplary phylogenetic tree (from the Mau et al. paper). {4, 7}, {1, 4, 7}, {2, 3, 6} are examples of clades in this tree.
where par(ν) is the parent node of ν and tν is the timespan between par(ν) and
ν. From (115) we could obtain (114) by summing over all possible assignments of
symbols to internal nodes, which is clearly infeasible. Fortunately there is a cheap
recursive way to obtain (114), which works top-down from the leaves, inductively
assigning conditional likelihoods Lν(y) = P(Di ↾ ν | Ψ, θ, node ν = y) to nodes ν, where y ∈ Σ and Di ↾ ν is the subset of the data Di observed at the leaves that descend from node ν, as follows:
Case 1 ($\nu \notin I$): $\quad L_\nu(y) = \begin{cases} 1, & \text{if } y = y_\nu \\ 0, & \text{else} \end{cases}$

Case 2 ($\nu \in I$): $\quad L_\nu(y) = \left(\sum_{z \in \Sigma} L_\lambda(z)\, P^{\mathrm{clock}}(z \,|\, y, t_\lambda, \theta)\right) \left(\sum_{z \in \Sigma} L_\mu(z)\, P^{\mathrm{clock}}(z \,|\, y, t_\mu, \theta)\right),$
where λ, µ are the two children of ν, tλ is the timespan from ν to λ, and tµ is the
timespan from ν to µ. Then (114) is obtained from
$$L_i(\Psi, \theta) = \sum_{z \in \Sigma} \pi_0(z)\, L_\varrho(z),$$
where $\varrho$ denotes the root node of Ψ and $\pi_0(z)$ the assumed prior probability of symbol z at the root.
O(N |Σ|l) flops are needed to compute L(Ψ, θ) – in our example, N = 477, |Σ| = 4,
l = 32.
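The following Python sketch (my own illustration) implements the Case 1 / Case 2 recursion for a single site and a tiny hand-made tree. As a stand-in for the Hasegawa-Kishino-Yano clock used by Mau et al., it uses a simple one-parameter Jukes-Cantor-style clock; this substitution, the branch lengths, the rate phi, and the uniform root distribution π0 are assumptions made only to keep the example short.

```python
import numpy as np

ALPHABET = "ACGT"

def p_clock(t, phi=0.01):
    """Hypothetical stand-in for the molecular clock (112): a Jukes-Cantor-style
    4x4 matrix whose entry [i, j] is P(symbol j after time t | symbol i now)."""
    same = 0.25 + 0.75 * np.exp(-4.0 * phi * t / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * phi * t / 3.0)
    return np.where(np.eye(4, dtype=bool), same, diff)

def L(node):
    """Case 1 / Case 2 recursion: node is either a leaf symbol (a string) or a tuple
    (left_child, t_left, right_child, t_right). Returns the vector (L_nu(A), ..., L_nu(T))."""
    if isinstance(node, str):                          # Case 1: leaf with observed symbol
        return np.eye(4)[ALPHABET.index(node)]
    left, t_l, right, t_r = node                       # Case 2: internal node
    return (p_clock(t_l) @ L(left)) * (p_clock(t_r) @ L(right))

# A tiny hand-made tree for one site: two "A" leaves joined 10 time units back,
# and their ancestor joined with a "G" leaf further back.
tree = (("A", 10.0, "A", 10.0), 5.0, "G", 15.0)
pi0 = np.full(4, 0.25)                                 # assumed uniform root distribution
print(pi0 @ L(tree))                                   # the site likelihood L_i(Psi, theta)
```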
The posterior distribution of trees and mutation parameters. We are actually not interested in the likelihoods L(Ψ, θ) but rather in the distribution of Ψ, θ (a Bayesian hyperdistribution!) given D. Bayes' theorem informs us that this desired distribution is proportional to the likelihood times the prior (hyper-)distribution P(Ψ, θ) of Ψ, θ:
$$P(\Psi, \theta \,|\, D) \;\propto\; L(\Psi, \theta)\; P(\Psi, \theta).$$
Lacking a profound theoretical insight, Mau et al. assume for P (Ψ, θ) a very simple,
uniform-like distribution (such uninformedness is perfectly compatible with the
Bayesian approach!). Specifically:
1. They bound the total height of trees by some arbitrary maximum value, that is, all trees Ψ with a greater height are assigned P(Ψ, θ) = 0.
2. All trees of lesser height are assigned the same probability. Note that this
does not imply that all topologies are assigned equal prior probabilities.
Figure 53 shows two topologies, where the one shown in a. will get a prior
twice as large as the one shown in b. Reason: the two internal nodes of
the first topology can be shifted up- and downwards independently, whereas
this is not the case in tree b.; thus there are twice as many trees of topology a. as of b. Note that for evolutionary biologists it is the tree topology (which species derives from which) rather than the tree metrics (how long each evolutionary step took) which is of prime interest!
3. The mutation parameters θ are assigned a prior distribution that is uniform
on a range interval chosen generously large to make sure that all biologically
plausible possibilities are contained in it.
Figure 53: Two tree topologies that get different prior probabilities. Topologies
are defined by the parenthesis patterns needed to describe a tree. For instance,
the tree a. would be characterized by a pattern ((x x)(x x)) and the tree b. by
((x (x x)) x).
The structure of P (Ψ, θ | D). Before we set forth to compute the posterior
hyperdistribution P (Ψ, θ | D), let us take a closer look at the structure of the
mathematical “space” in which the parameter pairs (Ψ, θ) lie.
Remember that a tree Ψ is specified by (σ, a), where σ is a permutation vector of (1, . . . , l) and a is a numerical vector of length l − 1. Noticing that there are l! permutations, a pair (σ, a) reflects a point in a product space $\{1, \ldots, l!\} \times \mathbb{R}^{l-1}$; together with the two real-valued parameters comprised in θ this brings us to a space $\{1, \ldots, l!\} \times \mathbb{R}^{l+1}$. Specifically, this space is a product of a discrete space (the finite but large set $\{1, \ldots, l!\}$) with a continuous space ($\mathbb{R}^{l+1}$). As a consequence,
one cannot mathematically describe a probability measure on this space with a
pmf, nor with a pdf! And thus, one cannot “compute” a pmf or pdf for P (Ψ, θ | D).
So — what can we compute for, about, with, or on P (Ψ, θ | D)? The answer is:
we can get arbitrarily precise and exhaustive information about P (Ψ, θ | D) by ...
you guess right: sampling.
The Metropolis algorithm at work. Mau et al. use the Metropolis algo-
rithm (in a global version) to sample trees Ψ and evolutionary clock parameters
θ from P (Ψ, θ | D). A crucial design task is to find a good proposal distribution
S((Ψ∗ , θ∗ )|(Ψ, θ)). It should lead from any plausible (Ψ, θ) [“plausible” means that
g((Ψ, θ)) is not very small] to another plausible (Ψ∗ , θ∗ ) which should be however
as distinct from (Ψ, θ) as possible. The way Mau et al. go about this task is one of the core contributions of their work.
The authors alternate between updating only θ and only Ψ. Proposing θ∗ from θ is done in a straightforward way: the new parameters θ∗ are randomly drawn from a rectangular distribution centered on the current settings θ.
The tricky part is to propose an as different as possible, yet “plausibility-
preserving” new tree Ψ∗ from Ψ. Mau et al. transform Ψ = (σ, a) into Ψ∗ =
(σ ∗ , a∗ ) in two steps:
1. The current tree Ψ is transformed into one of its $2^{n-1}$ equivalent topological versions by randomly reversing, with probability 0.5, each of its internal branches, getting Ψ′ = (σ′, a′).
2. In Ψ′ the evolutionary inter-species time spans t are varied by changing the old values by a random increment drawn from the uniform distribution over [−δ, δ], where δ is a fixed bound (see Figure 54). This gives Ψ∗ = (σ′, a∗).
Figure 54: Proposal candidate trees, attainable from the current tree, are found
within timeshift intervals of size 2δ, centered at the current internal nodes, that
constrain the repositioning of the internal nodes. Note that if the two rightmost
internal nodes are shifted such that their relative heights become reversed (dashed
blue circles), the topology of the tree would change (dashed blue lines). Figure
adapted from the Mau et al paper.
Mau et al. show that this method yields a symmetric proposal distribution, and
that every tree Ψ′ can be reached from every other tree Ψ in a bounded number
of such transformations – the Markov chain is thus ergodic.
Concretely, the Metropolis algorithm was run for 1,100,000 steps (20 hours
CPU time on a 1998 Pentium 200 PC), the first 100,000 steps were discarded
to wash out possible distortions resulting from the arbitrary starting tree, the
remaining 1,000,000 trees were subsampled by 200 (reflecting the finding that
after every 200 steps, trees were empirically independent [zero empirical cross-
correlation at 200 step distance]), resulting in a final tree sample of size 5000.
Final findings. Recall that the overall goal of this entire effort was to determine which tree topology best explains the current genetic data D, under the assumptions of the Bayesian prior P(Ψ, θ). After the sampling was done, 5000
trees had been collected whose sample frequencies reflect their Bayesian posterior
probabilities. The rest is easy: sort these sampled trees into different subsets, each
of which is defined by a specific tree topology, then interpret the relative sizes of
these sets as probability estimates.
The 600 (!) most frequent topologies account for 90% of the total probability mass. This high variability, however, is almost entirely due to minute variations within 6 clades (labelled A, ..., F) that are stable across different topologies. Figure
55 shows the two most frequently found topologies resolved at the clade level, with
a very strong posterior probability indication for the first of the two.
Figure 55: The two clade tree structures of highest posterior probability found by
Mau et al.
For quality control, this was repeated 10 times, with no change of the final
outcome, which makes the authors confident of their work’s statistical reliability.
10 Graphical models
My favourite motivational example for introducing graphical models is the Space
Shuttle. In the launch phase, the engineering staff in the control center and the
astronauts on board have to make many critical decisions under severe time pres-
sure. If anything on board the roaring Space Shuttle seems to go wrong, it
must be immediately determined, for instance, whether to shut down an engine
(means ditching the Space Shuttle in the ocean but might save the astronauts) or
not (might mean explosion of the engine and death of astronauts — or it might be
just the right thing to do if it’s a case of sensor malfunction and in fact all is fine
with the engine). The functioning of the Space Shuttle is monitored with many
dozens of sensors which under the enormous vibration stress are more prone to
malfunction than you would believe. They are delivering a massive flow of informa-
tion which is impossible for human operators to evaluate for fast decision-making.
So-called decision support systems are needed which automatically calculate proba-
bilities for the actual system state from sensor readings and display to the human
operators a condensed, human-understandable view of its most decision-critical
findings. Read the original paper Horvitz and Barry 1995 on the probabilistic
modeling of the combined SpaceShuttle-Pilots-GroundControlStaff system if you
want to see how thrilling and potentially life-saving statistical modeling can be.
Such decision support systems are designed around hidden and visible random
variables. The visible RVs represent observable variables, for instance the pressure
or temperature readings from Space Shuttle engine sensors, but also actions taken
by the pilot. The hidden RVs represent system state variables which cannot be di-
rectly measured but which are important for decision making, for instance a binary
RV “Sensor 32 is functioning properly — yes / no”, or “pilot-in-command is aware
of excess temperature reading in engine 3 — yes / no”. There are many causal
chains between the hidden and visible RVs which lead to chains of conditional
probability assessments, for instance “If sensor reading 21 is ’normal’ and sensor
reading 122 is ’normal’ and sensor reading 32 is ’excess temperature’, the proba-
bility that sensor 32 is misreading is 0.6”. Such causal dependencies between RVs
are mathematically represented by arrows between the participating RVs, which
in total gives a directed graph whose nodes are the RVs. In such a graph, each
RV X has its own sample space SX which contains the possible values that X
may take. When the graph is used (for instance, for decision support), for each
RV a probability distribution PX on SX is computed. If X1 , . . . , Xk are RVs with
arrows fanning in on a RV Y , the probability distribution PY on SY is calculated
as a conditional distribution which depends on the distributions PX1 , . . . , PXk .
These calculations quickly become expensive when the graph is large and richly
connected (exact statistical inference in such graphs is NP-hard) and require ap-
proximate solution strategies — for instance, through sampling methods. Despite
the substantial complexity of algorithms in the field, such graphical models are
today widely used. Many special sorts of graphical models have traditional special
names and have been investigated in a diversity of application contexts long be-
fore the general, unifying theory of graphical models was developed. Such special
kinds of graphical models or research traditions include
• Hidden Markov models, a basic model of stochastic processes with memory,
for decades the standard machine learning model for speech and handwrit-
ing recognition and today still the reference model for DNA and protein
sequences;
• models for diagnostic reasoning where a possible disease is diagnosed from
symptoms and medical lab results (also in fault diagnostics in technical
systems);
• decision support systems, not only in Space Shuttle launching but also in
economics or warfare to name only two;
• human-machine interfaces where the machine attempts to infer the user’s
state of mind from the user’s actions — the (in-)famous and now abolished
Microsoft Office “paperclip” online helper was based on a graphical model;
• Markov random fields which model the interactions between local phenomena
in spatially extended systems, for instance the pixels in an image;
• Boltzmann machines, a neural network model of hierarchical information
processing.
In this section I will give an introduction to the general theory of graphical
models and give a brief outlook on Markov random fields and hidden Markov
models.
Graphical models are a heavyweight branch of machine learning and would best
be presented in an entire course of their own. The course Probabilistic Graphical
Models of Stefano Ermon at Stanford University is a (beautifully crafted) exam-
ple. The course homepage https://cs228.stanford.edu/ gives links to litera-
ture and programming toolboxes — and it serves you a transparently written set
of lecture notes on https://ermongroup.github.io/cs228-notes/ which culmi-
nates in explaining deep variational autoencoders, a powerful recent deep learning
method which includes a number of techniques from graphical models.
pieces of stochastic evidence in uncertain systems. They tend to make gross, systematic prediction errors even in simple diagnostic tasks involving only two observables (which is why, in good universities, future doctors must take courses in diagnostic reasoning), let alone being capable of dealing with complex systems involving dozens of random variables.
In my condensed treatment I lean heavily on the two tutorial texts by Pearl
and Russell 2003 and K. Murphy 2001.
For a first impression, we consider a classical example. Let X1 , . . . , X5 be five
discrete random variables, indicating the following observations:
X1 : indicates the current season, has values {Winter, Spring, Summer, Fall},
X2 : indicates whether it is raining, has values {0, 1},
X3 : indicates whether the lawn sprinkler is on, has values {0, 1},
X4 : indicates whether the pavement (close to the lawn) is wet, values {0, 1},
X5 : indicates whether the pavement is slippery, values from {0, 1}, too.
There are certain causal influences between some of these random variables.
For instance, the season co-determines the probabilities for rain; the sprinkler
state co-determines whether the pavement is wet (but one would not say that
the wetness of the pavement has a causal influence on the sprinkler state), etc.
Such influences can be expressed by arranging the Xi in a directed acyclic graph
(a DAG), such that each random variable becomes a node, with an edge (i, j)
indicating that what is measured by Xi has some causal influence on what is
measured by Xj . Of course there will be some subjective judgement involved
in claiming a causal influence between two observables, and denying it for other
pairs – such dependency graphs are not objectively “true”, they are designed to
represent one's view of a part of the world. Figure 56 shows the DAG for our example, which serves as a standard example in many introductions to Bayesian nets.
The intuitive interpretation of arrows (i, j) in a Bayesian network as “causal
influence” is mathematically captured by the following formal, semantic constraint
that a DAG must satisfy in order to qualify as a Bayesian network:
Definition 10.1 A directed acyclic graph with nodes labelled by RVs {X1 , . . . , Xn }
(each with a separate sample space Si ) is a Bayesian network (BN) for the joint
distribution PX of X = X1 ⊗ . . . ⊗ Xn if every Xi is conditionally independent of
its non-descendants in the graph given its parents.
Figure 56: A simple Bayesian network. Image taken from Pearl and Russell 2003
X2   X3   P(X4 = 0)   P(X4 = 1)
0    0    1.0         0.0
0    1    0.1         0.9                (116)
1    0    0.1         0.9
1    1    0.01        0.99
To specify a BN, one must supply such conditional distributions for every
RV Xi in the network. They give the probabilities for values of Xi for all value
combinations of its parents. If a RV Xi has no parents (that is, it is a root node
in the DAG), then this conditional distribution is conditioned on nothing — it is
just a plain probability distribution of RV Xi .
For continuous-valued random variables, such conditional distributions cannot
in general be specified in a closed form (one would have to specify pdf’s for each
possible combination of values of the conditioning variables), except in certain spe-
cial cases, notably Gaussian distributions. One must then supply a computable
mathematical function which allows one to compute all concrete probability den-
sities like p(X4 = x4 | X2 = x2 , X3 = x3 ), where p is a pdf.
I use lowercase P (x4 | x2 , x3 ) as a shorthand for P (X4 = x4 | X2 = x2 , X3 =
x3 ). This denotes a single probability number for particular values of our random
variables. – End of the remark on notation.
A Bayesian network can be used for reasoning about uncertain causes and
consequences in many ways. Here are three kinds of arguments that are frequently
made, and for which BNs offer algorithmic support:
Prediction. “If the sprinkler is on, the pavement is wet with a probability P (X4 =
1|X3 = 1) = ###”: reasoning from causes to effects, along the arrows of
the BN in forward direction. Also called forward reasoning in AI contexts.
Abduction. “If the pavement is wet, it is more probable that the season is spring
than that it is summer, by a factor of ### percent”: reasoning from effects
to causes, that is, diagnostic reasoning, backwards along the network links.
(By the way, for backward reasoning you need Bayes’ formula, which is what
gave Bayesian networks their name.)
Explaining away, and the rest. “If the pavement is wet and we don’t know
whether the sprinkler is on, and then observe that it is raining, the probability
of the sprinkler being on, too, drops by ### percent: in “explaining away”
there are several possible causes C1 , . . . , Ck for some observed effect E, and
when we learn that actually cause Ci holds true, then the probabilities drop
that the other causes are likewise true in the current situation. There are
many other variants of reasoning “sideways”.
Bayesian networks offer inference algorithms to carry out such arguments and
compute the correct probabilities, ratios of probabilities, etc. These inference
algorithms are not trivial at all, and Bayesian networks only began to make their way into applications after efficient inference algorithms were discovered in the mid-1980s. I will explain a classical algorithm in this section (the join tree algorithm for exact inference); it is still widely used and
efficient in BNs where the connectivity is not too dense. Because exact statistical
inference in BNs is NP-hard, one has to take resort to approximate algorithms
in many cases. There are two main families of such algorithms, one based on
sampling and the other on variational approximation. I will give a hint on inference
by sampling and omit variational inference.
In a sense, the most natural (the only?) thing you can do when it comes to
handle the interaction between many random variables is to arrange them in a
causal influence graph. So it is no surprise that related formalisms have been
invented independently in AI, physics, genetics, statistics, and image processing.
However, the most important algorithmic developments have been in AI / machine
learning, where feasible inference algorithms for large-scale BNs have first been
investigated. The unrivalled pioneer in this field is Judea Pearl http://bayes.
cs.ucla.edu/jp_home.html, who laid the foundations for the algorithmic theory
of BNs in the 1980’s. These foundations were later developed to a more general
theory of graphical models, a development promoted (among others) by Michael
I. Jordan https://people.eecs.berkeley.edu/~jordan/. Michael Jordan not
only helped to build the general theory of graphical models but also had a shaping
influence on other areas in machine learning. He is a machine learning superpower.
The list of his past students and postdocs on his homepage is awe-inspiring and
reads like a Who’s Who of machine learning.
In the world of BNs (and graphical models in general) there exist two funda-
mental tasks:
Inference: Find algorithms that exploit the graph structure as ingeniously as possible to yield feasible algorithms for computing all the quantities asked for in prediction, abduction and explaining-away tasks.
Learning: Find algorithms to estimate a BN from observed empirical data. Learn-
ing algorithms typically call inference algorithms as a subroutine, so we will
first treat the latter.
sample spaces, and how the graph structure can be exploited to factorize distri-
butions of interest, which makes them computationally more tractable.
The joint distribution in our Californian pavement example is a distribution
over the sample space
{Winter, Spring, Summer, Fall} × {0, 1} × {0, 1} × {0, 1} × {0, 1},
which has 4 · 2 · 2 · 2 · 2 = 64 elements. Thus one would need a 64-dimensional
pmf to characterize it, which has 63 degrees of freedom (one parameter we get
for free because the pmf must sum to 1). This pmf would not be given in vector
format but as a 5-dimensional probability array. We will now investigate brute-
force methods for elementary inference in this joint probability table and see how
the graph structure leads to a dramatic reduction in computational complexity
(which however still is too high for practical applications with larger BNs – more
refined algorithms will be presented later).
By a repeated application of the factorization formula (174) (in Appendix B), the joint distribution of our five random variables is
$$P(X) = P\Big(\bigotimes_{i=1,\ldots,5} X_i\Big) = P(X_1)\, P(X_2|X_1)\, P(X_3|X_1, X_2)\, P(X_4|X_1, X_2, X_3)\, P(X_5|X_1, X_2, X_3, X_4). \qquad (117)$$
Exploiting the conditional independencies expressed in the BN graph, this
reduces to
P (X) = P (X1 ) P (X2 |X1 ) P (X3 |X1 ) P (X4 |X2 , X3 ) P (X5 |X4 ). (118)
For representing the factors on the right-hand side of (118) by tables like (116),
one would need tables of sizes 1×4, 4×2, 4×2, 4×2, and 2×2, respectively. Because
the entries per row in each of these tables must sum to 1, one entry per row is
redundant, so these tables are specified by 3, 4, 4, 4 and 2 parameters, respectively.
All in all, this makes 17 parameters needed to specify P (X), as opposed to the 63
parameters needed for the naive representation P (X) in a 5-dimensional array.
In general, the number of parameters required to specify the joint distribution of n discrete random variables with maximally ν values each, arranged in a BN with a maximum fan-in of k, is $O(n\, \nu^k)$, as opposed to the raw number of parameters $O(\nu^n)$ needed for a naive characterization of the joint distribution. This is a
reduction from a space complexity that is exponential in n to a space complexity
that is linear in n! This simple fact has motivated many a researcher to devote
his/her life to Bayesian networks.
Any reasoning on BNs (predictive, abductive, sidestepping or other) boils down to calculating conditional or marginal probabilities. For instance, the abductive
question, “If the pavement is wet, by which factor y is it more probable that the
season is spring than that it is summer”, asks one to compute
$$y = \frac{P(X_1 = \text{spring} \,|\, X_4 = 1)}{P(X_1 = \text{summer} \,|\, X_4 = 1)}. \qquad (119)$$
Such probability ratios are often sought in diagnostic reasoning — as for in-
stance in “by which factor is it more probable that my symptoms are due to cancer,
than that they are due to a harmless cause”.
Any conditional probability P (y1 , . . . , ym | z1 , . . . , zl ), where the Yi and Zj are
among the RVs in the BN, can be computed from the joint distribution of all
variables in the BN by first transforming P (y1 , . . . , ym | z1 , . . . , zl ) into a fraction
of two marginal probabilities,
$$P(y_1, \ldots, y_m \,|\, z_1, \ldots, z_l) = \frac{P(y_1, \ldots, y_m, z_1, \ldots, z_l)}{P(z_1, \ldots, z_l)}$$
and then computing the denominator and the numerator by marginalization from the joint distribution of all RVs in the BN, exploiting efficient BN factorizations of the kind exemplified in Equation 118. The probability P(X1 = spring | X4 = 1),
for instance, can be computed by
$$P(X_1 = \mathrm{s} \,|\, X_4 = 1) = \frac{\sum_{x_2, x_3, x_5} P(\mathrm{s})\, P(x_2|\mathrm{s})\, P(x_3|\mathrm{s})\, P(X_4 = 1|x_2, x_3)\, P(x_5|X_4 = 1)}{\sum_{x_1, x_2, x_3, x_5} P(x_1)\, P(x_2|x_1)\, P(x_3|x_1)\, P(X_4 = 1|x_2, x_3)\, P(x_5|X_4 = 1)}, \qquad (120)$$
where I abbreviated “spring” to “s” in order to squeeze the expression into a single line.
(120) can be computed by a brute-force evaluation of the concerned summa-
tions. The sum to be taken in the denominator would run over 4×2×2×2 =
32 terms, each of which is a product of 5 subterms; we thus would incur 128
multiplications. It is apparent that this approach generally incurs a number of
multiplications that is exponential in the size of the BN.
A speedup strategy is to try pulling the sum into the product as far as possible,
and evaluate the resulting formula from the inside of bracketing levels. This is
called the method of variable elimination. For example, an equivalent formula for
the sum in the denominator of (120) would be
$$\sum_{x_1} P(x_1) \left( \sum_{x_2, x_3} P(x_2|x_1)\, P(x_3|x_1)\, P(X_4 = 1|x_2, x_3) \left( \sum_{x_5} P(x_5|X_4 = 1) \right) \right).$$
To evaluate this expression from the inside out, we note that the sum over x5 in
the innermost term is 1 and need not be explicitly calculated. For the remaining
calculations, 15 sums and 36 multiplications are needed, as opposed to the 31
sums and 128 multiplications needed for the naive evaluation of the denominator
in (120). However, finding a summation order where this pulling-in leads to the
minimal number of summations and multiplications is again NP-hard, although
greedy algorithms for that purpose are claimed to work well in practice.
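A small Python sketch of both computations, the naive summation and the pulled-in (variable elimination) form, for the denominator P(X4 = 1). The table for P(X4 | X2, X3) is (116); all other CPT numbers are hypothetical values invented only so that the code runs.

```python
import numpy as np

# Conditional probability tables for the sprinkler BN. P_X4 is table (116) from the
# text; all other numbers are hypothetical, invented only so that the code runs.
P_X1 = np.array([0.25, 0.25, 0.25, 0.25])     # P(X1 = winter, spring, summer, fall)
P_X2 = np.array([0.6, 0.4, 0.1, 0.3])         # P(X2 = 1 | X1): rain given season (hypothetical)
P_X3 = np.array([0.1, 0.4, 0.7, 0.2])         # P(X3 = 1 | X1): sprinkler given season (hypothetical)
P_X4 = np.array([[0.0, 0.9],                  # P(X4 = 1 | X2, X3): rows X2, columns X3 -- table (116)
                 [0.9, 0.99]])
P_X5 = np.array([0.05, 0.8])                  # P(X5 = 1 | X4) (hypothetical)

def b(p, v):
    """Probability of the binary value v when P(value = 1) = p."""
    return p if v == 1 else 1.0 - p

def joint(x1, x2, x3, x4, x5):
    """The factorized joint distribution (118)."""
    return (P_X1[x1] * b(P_X2[x1], x2) * b(P_X3[x1], x3)
            * b(P_X4[x2, x3], x4) * b(P_X5[x4], x5))

# Naive marginalization of the denominator: 4 * 2 * 2 * 2 = 32 summands.
p_wet_naive = sum(joint(x1, x2, x3, 1, x5)
                  for x1 in range(4) for x2 in (0, 1) for x3 in (0, 1) for x5 in (0, 1))

# Variable elimination: sums pulled inward; the innermost sum over x5 equals 1 and drops out.
p_wet_ve = sum(P_X1[x1] * sum(b(P_X2[x1], x2) * b(P_X3[x1], x3) * P_X4[x2, x3]
                              for x2 in (0, 1) for x3 in (0, 1))
               for x1 in range(4))

print(p_wet_naive, p_wet_ve)    # the same value, computed with fewer operations the second way

# The abductive probability P(X1 = spring | X4 = 1) of (120), spring being index 1:
p_spring_and_wet = sum(joint(1, x2, x3, 1, x5)
                       for x2 in (0, 1) for x3 in (0, 1) for x5 in (0, 1))
print(p_spring_and_wet / p_wet_naive)
```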
In this situation, computer scientists can choose between three options:
2. Use heuristic algorithms, i.e. algorithms that embody human insight for com-
putational shortcuts. Heuristic algorithms need not always be successful in
leading to short runtimes; if they do, their result is perfectly accurate. The
goal is to find heuristics which lead to fast runtimes almost always and which
run too long only rarely. The “join tree” algorithm which we will study later
contains heuristic elements.
3. Use approximate algorithms, i.e. algorithms that yield results always and fast,
but with an error margin (which should be controllable). For BN inference,
a convenient class of approximate algorithms is based on sampling. In order
to obtain an estimate of some marginal probability, one samples from the
distribution defined by the BN and uses the sample as a basis to estimate
the desired marginal. There is an obvious tradeoff between runtime and
precision. Another class of approximate algorithms is to use variational
inference. Variational algorithms need some insight into the shape of the
conditional distributions in a BN. Their speedup results from restricting the
admissible shapes of these distributions to analytically tractable classes. I
will not go into this topic in this course — it’s not easy. Jordan, Ghahramani,
et al. 1999 give a tutorial introduction.
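Here is a minimal sketch of such a sampling-based estimate (forward/ancestral sampling, then keeping only the samples consistent with the evidence X4 = 1); as in the previous sketch, all CPT numbers except table (116) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# The same (partly hypothetical) CPTs as in the enumeration sketch above.
P_X1 = np.array([0.25, 0.25, 0.25, 0.25])
P_X2 = np.array([0.6, 0.4, 0.1, 0.3])
P_X3 = np.array([0.1, 0.4, 0.7, 0.2])
P_X4 = np.array([[0.0, 0.9], [0.9, 0.99]])
P_X5 = np.array([0.05, 0.8])

def forward_sample():
    """Ancestral sampling: draw each RV given the already sampled values of its parents."""
    x1 = rng.choice(4, p=P_X1)
    x2 = int(rng.random() < P_X2[x1])
    x3 = int(rng.random() < P_X3[x1])
    x4 = int(rng.random() < P_X4[x2, x3])
    x5 = int(rng.random() < P_X5[x4])
    return x1, x2, x3, x4, x5

# Estimate P(X1 = spring | X4 = 1) by keeping only the samples consistent with the evidence.
samples = [forward_sample() for _ in range(100_000)]
wet = [s for s in samples if s[3] == 1]
print(sum(1 for s in wet if s[0] == 1) / len(wet))
```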
Given: A BN with DAG Gd with nodes X = {X1 , . . . , Xn }
Step 1: Moralize Gd , obtaining an undirected graph Gm (explained below).
Step 2: Triangulate Gm , obtaining a triangulated undirected graph Gt .
Step 3: Detect all cliques Ci in Gt . While this is NP-complete for general undi-
rected graphs, it can be done efficiently for triangulated undirected graphs
(Note: a clique in an undirected graph is a subset of nodes that are all pair-
wise connected to each other. A subset of a clique is again a clique. To find
all cliques it is therefore enough to find all maximal cliques.)
Step 4: Build an undirected join tree T with nodes Ci . This is the desired target
structure. It represents an elegant factorization of the joint probability P (X)
which in turn can be processed with a fast, local inference algorithm known
as message passing. Again there is no unique way to create the join tree and
heuristics are used to obtain one that leads to fast inference algorithms.
Undirected graphical models. Before we can delve into the join tree algo-
rithm, we have to introduce the concept of undirected graphical models (UGMs),
because these will be constructed as intermediate data structures when a BN is
transformed to a join tree.
The essence of BNs is that conditional independency relationships between RVs
are captured by directed graph structures, which in turn guide efficient inference
algorithms. But it is also possible to use undirected graphs. This leads to undi-
rected graphical models (UGMs), which have a markedly different flavour from
directed BNs. UGMs originated in statistical physics and image processing, while
BNs were first explored in Artificial Intelligence. A highly readable non-technical
overview and comparison of directed and undirected models is given in Smyth
1997.
We will use the following compact notation for statistical, conditional independence between two sets Y and Z of random variables, given another set S of random variables:
Definition 10.2 Two sets Y and Z of random variables are independent given
S, if
P (Y, Z | S) = P (Y | S) P (Z | S).
We write Y⊥Z | S to denote this conditional independence.
Definition 10.3 Let G = (V, E) be an undirected graph with vertices V and edges
E ⊆ V × V . Let Y, Z, S ⊂ V be disjoint, nonempty subsets. Then Y is separated
from Z by S if every path from some vertex in Y to any vertex in Z contains a
node from S. S is called a separator for Y and Z.
Step 1: The moral UGM. After this quick introduction of UGMs we return to the join tree algorithm for BNs. The first step is to transform the directed graph structure of the BN into a UGM structure. This can be done in many ways. The art lies in transforming a BN into a UGM such that as few as possible of the valuable independence relations expressed in the BN get lost. The standard thing to do is to moralize the directed BN graph Gd into an undirected graph Gm by
1. converting all edges from directed to undirected ones,
2. for every node Xi , adding undirected edges between all parents of Xi .
See Figure 57 for an example. The peculiar name “moralizing” comes from the
act of “marrying” previously unmarried parents. The moral UGM Gm implies the
same conditional independence relations as the BN Gd from which it was derived
(proof omitted here).
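Moralization is easy to implement directly. The following Python sketch (illustrative only) moralizes a DAG given as a parent dictionary, using the sprinkler network whose parent sets are fixed by the factorization (118):

```python
def moralize(parents):
    """Moralize a DAG given as {node: list of parent nodes}: drop edge directions
    and 'marry' the parents of every node. Returns a set of undirected edges."""
    edges = set()
    for child, pa in parents.items():
        for p in pa:
            edges.add(frozenset((p, child)))        # 1. make parent-child edges undirected
        for i, p in enumerate(pa):
            for q in pa[i + 1:]:
                edges.add(frozenset((p, q)))        # 2. connect all co-parents
    return edges

# The sprinkler BN of Figure 56: X1 -> X2, X1 -> X3, X2 -> X4, X3 -> X4, X4 -> X5.
dag = {"X1": [], "X2": ["X1"], "X3": ["X1"], "X4": ["X2", "X3"], "X5": ["X4"]}
for e in sorted(moralize(dag), key=sorted):
    print(set(e))   # note the additional "moral" edge {X2, X3}
```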
Figure 57: A BN and its associated moral UGM. Image taken from Huang and
Darwiche 1994.
Step 3: finding the cliques. For a change, finding all cliques in a triangulated
graph is not NP-complete — efficient algorithms for finding all cliques in a trian-
gulated graph are known. In our running example, we get 6 cliques, namely all
Figure 58: Left: A triangulated version Gt of the moral UGM Gm from Figure 57. Right: the 6 cliques in Gt. Image adapted from Huang and Darwiche 1994.

the “triangles” ABD, ACE, ADE, DEF, CGE, and EGH (Figure 58 (right)). Our example maybe conveys the wrong impression that we always get cliques of size 3 in triangulated graphs. In general, one may find cliques of any size ≥ 2 in such graphs.
Step 4: Building the join tree. This is a more interesting and involved step.
After BNs and UGMs, join trees are our third graph-based representation of in-
dependence relations governing a set X of random variables. We will discuss join
trees in their own right first, and then consider how a join tree can be obtained
from the cliques of a triangulated UGM.
that is, φS is obtained by marginalization from φC .
Figure 59: A join tree derived from the triangulated UGM shown in Figure 58. Image taken from Huang and Darwiche 1994.
In join trees, the belief potentials are the marginal distributions of their vari-
ables:
Proposition 10.3 Let T be a join tree for P (X), and let K = {X1 , . . . , Xk } ⊆
X = {X1 , . . . , Xk , Xk+1 , . . . , Xn } be a cluster or sepset label set. Then for any
value instantiation x1 , . . . , xk of the variables from K, it holds that
$$\sum_{x_{k+1}, \ldots, x_n} P(x_1, \ldots, x_n) = \varphi_K(x_1, \ldots, x_k), \qquad (123)$$
where the sum over the remaining variables $x_{k+1}, \ldots, x_n$ denotes marginalization.
OK, now that we know what a join tree is, we return to Step 4: constructing a join tree from a triangulated moral UGM Gt . A join tree is specified through (i) its labelled graph structure and (ii) its belief potentials. We first treat the question of how one can derive the join tree's graph structure from Gt .
There is much freedom in creating a join tree graph from Gt . One goal for optimizing the design is that one strives to end up with clusters that are as small as possible (because the computational cost of using a join tree for inference will turn out to grow exponentially in the maximal size of clusters). On the other hand, in order to compute belief potentials later, any clique in Gt must be contained in some cluster. This suggests turning the cliques identified in Step 3 into the cluster nodes of the join tree. This is indeed done in a general recipe for constructing a join tree from a triangulated UGM Gt , which I rephrase from Huang and Darwiche 1994:
1. Begin with an empty set SEP, and a completely unconnected graph whose nodes are the m maximal cliques Ci found in Step 3.

2. For each pair of distinct cliques Ci , Cj , create a candidate sepset labelled by their intersection Ci ∩ Cj and put it into SEP.
3. From SEP iteratively choose m − 1 sepsets and use them to create connec-
tions in the node graph, such that each newly chosen sepset connects two
subgraphs that were previously unconnected. This necessarily yields a tree
structure.
A note on the word “tree structure”: in most cases when a computer scientist
talks about tree graphs, there is a special “root” node. Here we use a more general
notion of trees to mean undirected graphs which (i) are connected (there is a path
between any two nodes) and (ii) where these paths are unique, that is between
any two nodes there is exactly one connecting path. In such tree graphs any node
can be designated as “root” if one wishes to see the familiar appearance of a tree.
This general recipe leaves much freedom in choosing the sepsets from SEP. Not
all choices will result in a valid join tree. In order to ensure the join tree properties,
we choose, at every step from 3., the candidate sepset that has the largest mass
(among all those which connect two previously unconnected subgraphs). The mass
of a sepset is the number of variables it contains.
This is not the only possible way of constructing a join tree, and it is still
underspecified (there may be several maximal mass sepsets at our disposal in a
step from 3.) Huang and Darwiche propose a full specification that heuristically
optimizes the join tree with respect to the ensuing inference algorithms.
If the original BN was not connected, some of the sepsets used in the join tree
will be empty; we get a join forest then.
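As an illustration of this construction (with ties between equally heavy sepsets broken arbitrarily, so this does not reproduce the full Huang/Darwiche heuristic), here is a rough Python sketch that connects the cliques greedily by maximal-mass sepsets, using a union-find structure to check that a chosen sepset connects two previously unconnected subgraphs:

from itertools import combinations

def build_join_tree(cliques):                 # cliques: list of frozensets of variables
    parent = list(range(len(cliques)))
    def find(i):                              # union-find: representative of i's subgraph
        while parent[i] != i:
            i = parent[i]
        return i
    candidates = [(len(cliques[i] & cliques[j]), i, j)
                  for i, j in combinations(range(len(cliques)), 2)]
    candidates.sort(reverse=True)             # largest-mass sepsets first
    tree_edges = []
    for mass, i, j in candidates:
        ri, rj = find(i), find(j)
        if ri != rj:                          # connects two previously unconnected subgraphs
            parent[ri] = rj
            tree_edges.append((cliques[i], cliques[j], cliques[i] & cliques[j]))
    return tree_edges                         # m - 1 edges, each labelled with its sepset

cliques = [frozenset(c) for c in ("ABD", "ACE", "ADE", "DEF", "CGE", "EGH")]
for c1, c2, sep in build_join_tree(cliques):
    print(sorted(c1), "--", sorted(sep), "--", sorted(c2))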
We now turn to the second subtask in Step 4 and construct belief potentials
φC , φS for the clusters and sepsets, such that Equations 121 and 122 hold.
Belief potentials which account for (121) and (122) are constructed in two
steps. First, the potentials are initialized in a way such that (122) holds. Second,
by a sequence of message passes, local consistency (121) is achieved.
where i ranges over all cliques, j over all sepsets, and k over all RVs.
After having initialized the join tree potentials, we make them locally consistent
by propagating the information, which has been locally multiplied-in, across the
entire join tree. This is done through a suite of message passing operations, each of
which makes one clique/sepset pair consistent. We first describe a single message
pass operation and then show how they can be scheduled such that a message pass
does not destroy consistency of clique/sepset pairs that have been made consistent
in an earlier message passing.
2. "Absorption": multiply the belief potential of D by the inverse of $\phi_S^{old}/\phi_S$ in order to restore the joint distribution:
$$\phi_D \;\leftarrow\; \phi_D\,\frac{\phi_S}{\phi_S^{old}}.$$
After this step, C is consistent with S in the sense of (121). To also make
D consistent with S, a message passing in the reverse direction must be carried
out. An obvious condition is that this reverse-direction pass must preserve the
consistency of C with S. This is warranted if a certain order of passes is observed,
to which we now turn our attention.
This connection will be hit twice by message passes, one in each direction.
Assume that the first of the two passes went from C to D. After this pass, we
have potentials φ0C , φ0S , φ0D , and C is consistent with S:
$$\phi_S^0 \;=\; \sum_{C\setminus S} \phi_C^0.$$
At some later time, a message pass sweeps back from D to C. Before this happens,
the potential of D might have been affected by some other passes, so it is φ1D when
the pass from D to C occurs. After this pass, we have
$$\phi_S^1 \;=\; \sum_{D\setminus S} \phi_D^1 \qquad\text{and}\qquad \phi_C^1 \;=\; \phi_C^0\,\frac{\phi_S^1}{\phi_S^0}.$$
It turns out that C is still consistent with S:
$$\phi_S^1 \;=\; \phi_S^0\,\frac{\phi_S^1}{\phi_S^0} \;=\; \Big(\sum_{C\setminus S}\phi_C^0\Big)\frac{\phi_S^1}{\phi_S^0} \;=\; \sum_{C\setminus S}\phi_C^0\,\frac{\phi_S^1}{\phi_S^0} \;=\; \sum_{C\setminus S}\phi_C^1.$$
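To make the mechanics concrete, here is a compact Python sketch of a single message pass from a clique C over a sepset S to a clique D (my own illustration: potentials over discrete variables are stored as plain dictionaries from value tuples to numbers, each accompanied by its list of variables). The first lines of message_pass carry out the projection step of Huang and Darwiche 1994 (save the old sepset potential, then marginalize the clique potential onto the sepset); the loop is the absorption step from above, with 0/0 treated as 0.

def marginalize(phi, variables, keep):
    # Sum a potential phi over all variables that are not in `keep`.
    idx = [variables.index(v) for v in keep]
    out = {}
    for values, p in phi.items():
        key = tuple(values[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def message_pass(phi_C, vars_C, phi_S, vars_S, phi_D, vars_D):
    phi_S_old = dict(phi_S)
    phi_S_new = marginalize(phi_C, vars_C, vars_S)        # projection
    idx = [vars_D.index(v) for v in vars_S]
    phi_D_new = {}
    for values, p in phi_D.items():                        # absorption
        key = tuple(values[i] for i in idx)
        old = phi_S_old.get(key, 0.0)
        ratio = 0.0 if old == 0.0 else phi_S_new.get(key, 0.0) / old
        phi_D_new[values] = p * ratio
    return phi_S_new, phi_D_new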
3. In a second phase (“distribute evidence”), carry out all passes that are ori-
ented away from the center, in an inside-out spreading order.
Figure 61 shows a possible global scheduling for our example join tree.
Figure 61: A scheduling for the global propagation of message passing. The center
is ACE. Figure taken from the Huang/Darwiche paper.
After all of this toiling, we have a magic join tree T — a tree graph adorned
with belief potentials that are locally consistent (Equation 121) and globally rep-
resent the joint distribution P (X) (Equation 122). The join tree is now ready for
use in inference tasks.
Inference task 1: compute a single marginal distribution. For any RV
X in the BN, computing the marginal distribution P (X) is a simple two-step
procedure: locate a cluster or sepset whose label set contains X, and marginalize its belief potential onto X.
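A small sketch, re-using the dictionary representation and the marginalize helper from the message-pass example above (the clusters argument is assumed to be a list of (variable list, potential) pairs taken from a globally consistent join tree):

def single_marginal(X, clusters):
    # Step 1: find a cluster (or sepset) whose label set contains X.
    # Step 2: marginalize its belief potential onto X.
    for variables, phi in clusters:
        if X in variables:
            return marginalize(phi, variables, [X])
    raise ValueError("variable %s does not occur in any cluster" % X)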
where CE has labels (E, A1 , . . . , Al ); that is, multiplying φCE by ΛE simply resets φCE to zero for all arguments that have a different value for E than the observed one. With the new potentials, the tree globally encodes P (X) 1e , where $\mathbb{1}_e : S_{\otimes X_i} \to \{0,1\}$ is the indicator function of the set $\{(x_1,\ldots,x_n) \in S_{\otimes X_i} \mid x_i = e_j \text{ for } X_i = E_j,\ j = 1,\ldots,k\}$:
$$\frac{\prod_i \phi_{C_i}}{\prod_j \phi_{S_j}} \;=\; P(X)\,\Lambda_{E_1}\cdots\Lambda_{E_k} \;=\; P(X)\,\mathbb{1}_e \;=:\; P(X,e).$$
After a renewed global propagation, every cluster or sepset potential then satisfies
$$\phi_K \;=\; P(K, e),$$
and conditional probabilities given the evidence e are obtained by normalization:
$$P(Z \mid e) \;=\; \frac{P(Z,e)}{P(e)} \;=\; \frac{P(Z,e)}{\sum_{z\in S_Z} P(z,e)}.$$
In other cases, empirical observations are available that allow one to estimate
the conditional probability tables from data. For instance, the table shown in
Equation (116) could have been estimated from counts # of observed outcomes
as suggested in Figure 62.
Notice that each row in such a table is estimated independently from the other
rows. The task of estimating such a row is the same as estimating a discrete
distribution from data. When there is no abundance of data, Bayesian model esti-
mation via Dirichlet priors is the way to go, as in the protein frequency estimation
in Section 8.3.
When data like in Figure 62 are available for all nodes in a BN, estimating
the local conditional probabilities by the obvious frequency counting ratios gives
maximum likelihood estimates of the local conditional probabilities. It can be
shown that this is also the maximum likelihood estimate of the joint distribution
P (X) (not a deep result, the straightforward derivation can be found in the online
lecture notes Ermon 2019, chapter “Learning in directed models”).
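As a small illustration (the helper name and the symmetric Dirichlet prior with pseudo-count α are my own choices), here is how one row of a conditional probability table would be estimated from the counts for one parent configuration; α = 0 gives the maximum likelihood estimate, α > 0 the smoothed Bayesian estimate:

import numpy as np

def estimate_cpt_row(counts, alpha=0.0):
    # counts[i] = number of observations of outcome i for a fixed parent configuration
    counts = np.asarray(counts, dtype=float)
    return (counts + alpha) / (counts.sum() + alpha * len(counts))

print(estimate_cpt_row([8, 2, 0]))             # ML estimate: [0.8, 0.2, 0.0]
print(estimate_cpt_row([8, 2, 0], alpha=1.0))  # Dirichlet-smoothed: approx. [0.69, 0.23, 0.08]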
It is very often the case that for some of the variables, neither empirical obser-
vations nor an expert’s opinion is available, either because simply the observations
have not been carried out or because these quantities are in principle unobservable.
Such unobservable variables are called hidden variables. To get an impression of
the nature and virtues of hidden variables, consider the BN in Figure 63.
Figure 63: A BN for use by a social worker (apologies to professionals in the field)
Figure 64: A BN for use by a psychologically and statistically enlightened social
worker
statistical model. While all other variables in this BN can be readily measured,
self-confidence can’t. Yet the augmented BN is, in an intuitive sense, more valuable
than the first one, because it tries to reveal a causal mechanism whereas the former
one only superficially connects variables by arrows that can hardly be understood.
Besides being intellectually more pleasing, the second BN offers substantial
computational savings: its join tree (construct it!) is much more lightweight than
the first BN’s, so statistical inference algorithms will run much faster.
Generalizing from this simplistic example, it should be clear that hidden vari-
ables are a great asset in modeling reality. But — they are hidden, which means
that the requisite probability tables cannot be directly estimated from empirical
data.
When there are hidden variables in a BN for whose conditional distribution no
data are available, one uses EM algorithms to estimate their probability tables.
The basic version of EM for BNs which seems standard today has been introduced
in a paper by Lauritzen 1995. The algorithm is also described in a number of online
tutorials and open access papers, for instance Mouafo et al. 2016. The algorithm
is complex. In the E-step it uses inference in join trees as a subroutine.
restricted Boltzmann machines, a stripped-down version of the former, com-
putationally tractable, and instrumental in kicking off the deep learning
revolution (Hinton and Salakhutdinov 2006).
11 Online adaptive modeling
Often an end-user of a machine learning model finds himself/herself in a situation where the distribution of the input data needed by the model changes over time. The learnt model will then become inaccurate because it was trained with training data that had a different distribution of the input data. It would be desirable if the learnt model could adapt itself to the new situation. Two examples:
• A speech recognition system on a smartphone is used while walking in a
city and doing some shopping — the background noise will change every
few seconds — and the recognition system has to adapt its “de-noising”
continuously to changing sorts of noise.
• A credit risk prediction system is used month after month — but when an economic crisis changes the loan takers' payback morale, the system should change its prediction biases.
Never-ending adaptation of machine learning systems has always been an issue
for machine learning. Currently this theme is receiving renewed attention in deep
learning in a special, somewhat restricted version, where it is discussed under the
headline of continual learning (or continuous learning). Here one seeks solutions
to the problem that, if an already trained neural network is subsequently trained
even more on new incoming training data, the previously learnt competences will
be destructively over-written by the process of continuing learning on new data.
This phenomenon has long been known as catastrophic forgetting. Methods for counter-acting catastrophic forgetting are developing fast. I have uploaded a snapshot summary of the continual learning scene on Nestor's "lecture notes and other tutorial materials" page (file Brief Overview of Continual Learning Approaches, written by Xu He, a PhD student in my group).
In this section I will however not deal with continual (deep) learning, for two
reasons: (i) this material is advanced and requires substantial knowledge of deep
learning methods, (ii) the currently available methods still fall short of the desired
goal to enable ongoing, “life-long” learning.
Instead, I will present methods which have long been explored and successfully used in the field of signal processing and control. This material is not
normally treated in machine learning courses — I dare say, mostly because ma-
chine learners are just not aware of this body of knowledge, and maybe also if they
are, they find these methods too “linear” (no neural networks involved!). But I
find this material most valuable to know,
• because these techniques are broadly applicable, especially in application
contexts that involve signal processing and control — like robotics, industrial
engineering applications, and modeling biological systems;
• because these techniques are mathematically elementary and transparent
and give you a good understanding of conditions when gradient descent
optimization of loss functions becomes challenged — it’s a perfect primer
to get into the learning algorithms of deep learning - you will understand
better why they sometimes become veeeeery slow or numerically unstable;
• and finally, because I think that machine learning is an interdisciplinary
enterprise and I believe that these signal processing flavored methods will
become important in a young and fast-growing field of research called neu-
romorphic computing which has a large overlap with machine learning —
my own field for a few years now.
Throughout this section, I am guided by the textbook Adaptive Filters: Theory
and Applications (Farhang-Boroujeny 1998).
Note that here we consider the vector w as a column vector (often in the
literature, this specific weight vector is understood as row vector — as we did in
our earlier treatment of linear regression). Transversal filters are linear filters. A
filter H is called linear if for all a, b ∈ R and signals x1 , x2
H(a x1 + b x2 ) = a H(x1 ) + b H(x2 ). (126)
The proof that transversal filters are linear is an easy exercise.
I remark in passing that the theory of signals and systems works with complex
numbers throughout both for signals and filter parameters; for us it is however
good enough if we only use real-valued signals and model parameters.
The unit impulse δ = (δ(n)) is the signal that is zero everywhere except for
n = 0, where δ(0) = 1. The unit impulse response of a filter H is the signal H(δ).
For a transversal filter Hw , the unit impulse response is the signal which plays out the entries of w = (w1 , . . . , wL )′ at times n = 0, 1, . . . , L − 1:
$$(H_w(\delta))(n) \;=\; \begin{cases} w_i & \text{if } n = i-1 \ \ (i = 1,\ldots,L), \\ 0 & \text{else.} \end{cases} \qquad (127)$$
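In Python, such a filter is nothing but a (truncated) convolution of the input signal with the weight vector; the following minimal sketch (with x(n) taken to be zero for n < 0) reproduces the impulse response (127):

import numpy as np

def transversal_filter(w, x):
    # y(n) = w' (x(n), x(n-1), ..., x(n-L+1))' with x(n) = 0 for n < 0
    return np.convolve(x, w)[:len(x)]

w = np.array([0.5, -0.2, 0.1])
delta = np.zeros(6); delta[0] = 1.0
print(transversal_filter(w, delta))   # [ 0.5 -0.2  0.1  0.   0.   0. ]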
Figure 66 shows the structure of a transversal filter in the graphical notation
used in signal processing.
Figure 66: A transversal filter (black parts) and an adaptive linear combiner (blue
parts). Boxes labeled z −1 are what signal processing people call “unit delays” —
elementary filters which delay an input signal by one timestep. The triangular
boxes mean “multiply with”. In such box-and-arrow flow diagrams in the field of
signal processing, diagonal arrows spearing through some box indicate that what
is in the box is becoming adapted on the basis of the information that arrives with
the arrow.
The theme of this section is online adaptive modeling. In the context of fil-
ters, this means that the filter changes over time in order to continue complying
with new situational conditions. Concretely, we consider scenarios of supervised
training adaptation, where a teacher output signal is available. We denote this
“desirable” filter output signal by (d(n)). The objective of online adaptive fil-
tering is to continually adapt the filter weights wi such that the filter output
y(n) = w′ (x(n), . . . , x(n − L + 1))′ stays close to the desired output d(n).
The situational conditions may change both with respect to the input signal
x(n), which might change its statistical properties, and/or with respect to the
teacher d(n), which might also change. For getting continual optimal performance
this implies that the filter weights must change as well: the filter weight vector w
becomes a temporal variable w(n). Using the shorthand x(n) = (x(n), . . . , x(n −
L + 1))′ for the last L inputs up to the current x(n), this leads to the following
online adaptation task:
Given at time n: the filter weight w(n−1) calculated at the end of the previous
timestep; new data points x(n), d(n) in the signal.
at the next timestep can be expected to be smaller than without the adap-
tation. Notes:
This will automatically happen if the modification ∆w (n) is based only
on the current error ε(n) = d(n) − w′ (n − 1) x(n). Information from
earlier errors will already be incorporated in w(n − 1). This leads
to an error feedback based weight adaptation which is schematically
shown in the blue parts of Figure 66. A diagonal arrow through a box is how parameter adaptation is depicted in such signal processing diagrams. Such continually adapted transversal filters are
called adaptive linear combiners in the Farhang/Boroujeny textbook.
• The error signal ε²(n) which the adaptive filter tries to keep low need
not always be obtained by comparing the current model output
w(n − 1)′ x(n) with a teacher d(n). Often the “error” signal is obtained
in other ways than by comparison with a teacher. In any case, the
objective for the adaptation algorithm is to keep the “error” amplitude
at low levels. The error signal itself is the only source of information
to steer the adaptation (compare Figure 66). Several examples in the
next subsection will feature “error” signals which are not computed by
comparison with a teacher signal.
(Figure: setup for adaptive system identification — an input signal x drives both the target system and the adaptive model filter, and the noisily observed output of the target system serves as teacher signal d.)
The following game is played here. Some target system is monitored while it
receives an input signal x and generates an output g which is observed with added
observation noise ν; this observed signal output becomes the teacher d. The model
system is a transversal filter Hw whose weights are adapted such that the model
system’s output y stays close to the observed output d of the target system.
Obtaining and maintaining a model system Hw is a task which occurs ubiq-
uitously in systems engineering. The model system can be used for manifold
purposes because it allows one to simulate the target system. Such simulation models are needed, for instance, for making predictions about the future behavior of the original system, or for assessing whether the target system might be running into malfunction modes. Almost every control or predictive maintenance task in electrical or mechanical engineering requires system models. I proceed to illustrate the use
of system models with two concrete examples taken from the Farhang/Boroujeny
book.
Figure 68: Geological exploration via impulse response of learnt earth model. A.
Physical setup. B. Analysis of impulse response.
or large vibrating mass). An earth microphone is placed at a distant point B,
picking up a signal d. A model Hw (a “dummy earth”) is learnt. After Hw is
obtained, one analyses the impulse response w of this model. The peaks of w
give indications about reflecting layers in the earth crust between A and B, which
correspond to different delayed responses pj of the input impulse.
Figure 71: Schema of adaptive online channel equalization. Delays (which one
would insert for stability) are omitted. The box on the right with the step function
indicates a filter that converts the continuous-valued equalizer output y to a binary
signal — assuming that we want a channel for binary bitstream signals.
Feedback error learning for a composite direct / feedback controller.
Pure open-loop control cannot cope with external disturbances to the plant. For
example, if an electric motor has to cope with varying external loads, the resulting
braking effects would remain invisible to the kind of open-loop controller shown in
Figure 72. One needs to establish some sort of feedback control, where the observed
mismatch between the reference signal r and the plant output y is used to insert
corrective input to the plant. There are many ways to design feedback controllers.
The “feedback controller” box in Figure 72 contains one of them without further
specification.
The scheme shown in the figure (proposed in Jordan and Wolpert 1999 in
a nonlinear control context, using neural networks) trains an open-loop inverse
(feedforward) controller in conjunction with the operation of a fixed, untrainable
feedback controller.
Figure 72: Schema of feedback error learning for a composite control system.
that (r(n) − y(n))2 is minimized, that is, such that the control improves (an
admittedly superficial explanation).
• When the plant characteristics change, or when external disturbances set in,
the feedback controller jumps to action, inducing further adaptation of the
feedforward controller.
(Figure: adaptive noise cancelling — a signal of interest s is corrupted by additive noise ν0 ; the adaptive filter receives a related reference noise ν1 , and its output y is subtracted from s + ν0 to yield the cleaned signal ŝ.)
Denoising has many applications, for instance in airplane cockpit crew com-
munication (cancelling the acoustic airplane noises from pilot intercom speech),
postprocessing of live concert recordings, or (like in one of the four suggested
semester projects) cancelling the mother’s ECG signal from the unborn child’s in
prenatal diagnostics. In the Powerpoint file denoisingDemo.pptx which you find
on Nestor together with the lecture notes, you can find an acoustic demo that I
once produced.
Explanations:
• The “error” which the adaptive denoising filter tries to minimize is s+ν0 −y,
where y is the filter output.
• The only information that the filter has to achieve this error minimization
is its input ν1 . Because this input is (ideally) independent of s, but related
to ν0 via some noise-to-noise filter, all that the filter can do is to subtract from s + ν0 whatever it finds in s + ν0 that correlates with ν1 . Ideally, this is ν0 .
Then, the residual “error” ŝ would be just s — the desired de-noised signal.
• This scheme is interesting (not just a trivial subtraction of ν1 from s +
ν0 ) because the mutual relationship between ν1 and ν0 may be complex,
involving for instance a superposition of delayed versions of ν0 .
More sophisticated methods for denoising are known today, often based on
mathematical principles from independent component analysis (ICA). You find an
acoustic demo on https://cnl.salk.edu/~tony/ica.html (the author of this
page, Anthony J. Bell, is an ICA pioneer).
But reality strikes: The optimization problem (⋆) often cannot be directly solved,
for instance because it is analytically intractable (the case for neural net-
work training) or because the training data come in as a time series and their
statistical properties change with time (as in adaptive online modeling).
Second best approach: Design an iterative algorithm which produces a sequence
of models (= parameter vectors) θ(0) , θ(1) , θ(2) , . . . with decreasing empirical
risk Remp (θ(0) ) > Remp (θ(1) ) > . . .. The model θ(n+1) is typically computed
by an incremental modification of the previous model θ(n) . The first model
θ(0) is a guess provided by the experimenter.
In neural network training, the hope is that this series converges to a model
θ(∞) = limn→∞ θ(n) whose empirical risk is close to the minimal possible em-
pirical risk. In online adaptive modeling, the hope is that if one incessantly-
iteratively tries to minimize the empirical risk, one stays close to the moving
target of the current best model (we’ll see how that works).
This scheme only treats the approach where one tries to minimize the training
loss. We know that in standard supervised learning settings this invites overfitting
and that one should better employ a regularized loss function together with some
cross-validation procedure, a complication that we ignore here. In online adaptive
modeling, the empirical risk Remp (θ) is time-varying, another complication that
for the time being we will ignore. Such complications notwithstanding, the general
rationale of iterative supervised learning algorithms is to compute a sequence of
models with decreasing empirical risk.
In standard (not online adaptive) settings, such iterative algorithms, if they
converge, can find only locally optimal models. A model θ(∞) is locally optimal
if every slight modification of it will lead to a higher empirical risk. The final
converged model limn→∞ θ(n) will depend on the initial guess θ(0) — coming up
with a method for good initial guesses was the crucial innovation that started the
deep learning revolution (Hinton and Salakhutdinov 2006).
Figure 74: A 2-dimensional cross-section of a performance surface for a neural
network. The performance landscape shows the variation of the loss when (merely)
2 weights in the network are varied. Source: http://www.telesens.co/2019/01/
16/neural-network-loss-visualization/
The adaptation rate (or learning rate) µ is set to a small positive value.
Figure 75: A performance surface for a 2-dimensional model family with param-
eters θ1 , θ2 , with its contour plot at the bottom. For a model θ (yellow star in
contour plot) the negative gradient is shown as black solid arrow. It marks the
direction of steepest descent (broken black arrow) on the performance surface.
An obvious weakness of this elegant and natural approach is that the final
model θ(∞) depends on the choice of the initial model θ(0) . In complex risk land-
scapes (as the one shown in Figure 74) there is no hope of guessing an initial
model which guarantees to end in the global minimum. This circumstance is gen-
erally perceived and accepted. There is a substantial mathematical literature that
amounts to “if the initial model is chosen with a good heuristic, the local minimum
that will be reached will be a rather good one with high probability”.
We will see that the real headaches with gradient descent optimization are of a
different nature — specifically, there is an inherently difficult tradeoff between speed
of convergence (one does not want to invest millions or even billions of iterations)
and stability (the iterations must not lead to erratic large-sized jumps that lead
away from the downhill direction). The adaptation rate µ plays a key role in
this difficult tradeoff. For good performance (stability plus satisfactory speed of
convergence), it must be adapted online while the iterations proceed. And doing
that sagaciously is not trivial. The situation shown in Figure 76 is deceptively
Figure 76: A gradient descent itinerary, re-using the contour map from Figure 75
and starting from the initial point shown in that figure. Notice the variable jump
length and the sad fact that from this initial model θ(0) the global minimum is
missed. Instead, the itinerary slides toward a local minimum at θ(∞) . The blue
arrows show the negative gradient at raster points. They are perpendicular to the
contour lines and their length is inversely proportional to the spacing between the
contour lines.
simplistic; say goodbye to all hope that gradient descent works as smoothly as
that in real-life applications.
These difficulties raise their ugly head already in the simplest possible risk
surfaces, namely the ones that arise with iterative solutions to linear regression
problems. Making you aware of these difficulties in an analytically tractable,
transparent setting is one of the two main reasons why I believe a machine learning
student should know about adaptive online learning of transversal filters. You will
learn a lot about the challenges in neural network training as a side effect. The
other main reason is that adaptive online learning of transversal filters is really,
really super useful in many practical tasks.
where the expectation is taken with respect to time, are well-defined and indepen-
dent of n. R is called (in the field of signal processing) the correlation matrix of
the input process; it has size L × L. p is an L-dimensional vector.
In a stationary process there is no need to have different filter weights at
different times. We can thus, for now, drop the time dependence from w(n) and
consider unchanging weights w. For any such weight vector w we consider the
expected squared error
$$R(w) \;=\; E[\varepsilon^2(n)] \;=\; E\big[(d(n) - w'\,x(n))^2\big] \;=\; E[(d(n))^2] - 2\,w'\,p + w'\,R\,w. \qquad (133)$$
This expected squared error for a filter Hw is the risk that we want to minimize.
We now take a close look at the performance surface, that is the graph of the risk
function R : RL → R. Its geometrical properties will turn out to be key for mastering
the adaptive filtering task. Figure 77 gives a visual impression of the performance
surface for the case of L = 2 dimensional weight vectors w = (w1 , w2 )′ .
Figure 77: The performance surface in the case of two-dimensional weight vectors
(black parts of this drawing taken from drip.colorado.edu/~kelvin/links/
Sarto_Chapter2.ps many years ago, page no longer online). An iterative al-
gorithm for weight determination would try to determine a sequence of weights
. . . , w(n) , w(n+1) , w(n+2) , . . . (green) that moves toward wopt (blue). The eigenvec-
tors uj (red) of the correlation matrix R lie on the principal axes of the hyperel-
lipsoids given by the level curves of the performance surface.
offset, x′ b is the linear term, and x′ Cx is the quadratic term. C must be a posi-
tive definite matrix or negative definite matrix. The graph of a multidimensional
quadratic function shares many properties with the graph of the one-dimensional
quadratic function f (x) = a + bx + cx2 . The one-dimensional quadratic function
graph has the familiar shape of a parabola, which depending on the sign of c is
opened upwards or downwards. Similarly, the graph of a k-dimensional quadratic function has the shape of a k-dimensional paraboloid, which is opened upwards
if C is positive definite and opened downwards if C is negative definite. A k-
dimensional paraboloid is a “bowl” whose vertical cross-sections are parabolas.
The contour curves of a k-dimensional paraboloid (the horizontal cross-sections)
are k-dimensional ellipsoids whose main axes lie in the directions of the k orthog-
onal eigenvectors u1 , . . . , uk of C.
In the 2-dimensional case of a performance surface shown in Figure 77, the
paraboloid must open upwards because the risk R is an expected squared error
and hence cannot be negative; in fact, the entire surface must be nonnegative.
The figure also shows a projection of the ellipsoid contour curves of the paraboloid
on the w1 -w2 plane, together with the eigenvectors of R.
Importantly, such quadratic performance surfaces have a single minimum
(provided that R has full rank, which we tacitly take for granted). This
minimum marks the position of the optimal (minimal risk) weight vector
$w_{opt} = \operatorname{argmin}_{w\in\mathbb{R}^L} R(w)$. An iterative learning algorithm would generate a sequence $\ldots, w^{(n)}, w^{(n+1)}, w^{(n+2)}, \ldots$ of weight vectors where each new $w^{(n)}$ would be positioned at a place where the performance surface is deeper than at the previous weight vector $w^{(n-1)}$ (green dots in the figure).
The optimal weight vector wopt can be calculated analytically, if one exploits
the fact that the partial derivatives ∂R/∂wi are all zero (only) for wopt . The
gradient ∇R is given by
$$\nabla R \;=\; \left(\frac{\partial R}{\partial w_1}, \cdots, \frac{\partial R}{\partial w_L}\right)' \;=\; 2\,R\,w - 2\,p,$$
which follows from (133) (exercise). Setting this to zero gives the Wiener-Hopf equation
$$R\,w_{opt} \;=\; p, \qquad (134)$$
which yields the optimal weights as
$$w_{opt} \;=\; R^{-1} p. \qquad (135)$$
If you consider the definitions of R and p (Equations 132 and 131), you will
find that this solution for wopt is the empirical risk version of the linear regression
solution (19) for the minimal empirical risk from a quadratic error loss.
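A short numerical sketch of this solution (assuming, as is standard in this setting, that R = E[x(n) x(n)′] and p = E[d(n) x(n)], and estimating both by time averages over a finite run; the "true" filter and the noise level are made up for the demonstration):

import numpy as np

rng = np.random.default_rng(0)
L, N = 3, 10000
w_true = np.array([0.6, -0.3, 0.1])                 # target system (unknown in practice)
x = rng.normal(size=N)                              # stationary white input signal
X = np.stack([np.concatenate([np.zeros(i), x[:N - i]]) for i in range(L)])  # row i holds x(n-i)
d = w_true @ X + 0.01 * rng.normal(size=N)          # teacher signal with observation noise
R = X @ X.T / N                                     # estimated correlation matrix
p = X @ d / N                                       # estimated cross-correlation vector
w_opt = np.linalg.solve(R, p)                       # Wiener-Hopf equation (134): R w_opt = p
print(w_opt)                                        # close to w_true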
In order to appreciate the challenges inherent in iterative gradient descent on
quadratic performance surfaces we have to take a closer look at the shape of the
hyperparaboloid.
First we use (133) and the Wiener-Hopf equation to express the expected residual error Rmin (see Figure 77) that we are left with when we have found wopt :
$$\begin{aligned}
R_{min} &= E[(d(n))^2] - 2\,w_{opt}'\,p + w_{opt}'\,R\,w_{opt} \\
        &= E[(d(n))^2] - w_{opt}'\,p \\
        &= E[(d(n))^2] - w_{opt}'\,R\,w_{opt} \\
        &= E[(d(n))^2] - p'\,R^{-1}\,p.
\end{aligned} \qquad (136)$$
For a more convenient analysis we rewrite the error function R in new coordi-
nates such that it becomes centered at the origin. Observing that the paraboloid
is centered on wopt , that it has “elevation” Rmin over the weight space, and that
the shape of the paraboloid itself is determined by $w_{opt}'\,R\,w_{opt}$, we find that we
can rewrite (133) as
R(v) = Rmin + v′ R v, (137)
where we introduced shifted (and transposed) weight coordinates v = (w − wopt ).
Differentiating (137) with respect to v yields
$$\frac{\partial R}{\partial v} \;=\; \left(\frac{\partial R}{\partial v_1}, \ldots, \frac{\partial R}{\partial v_L}\right)' \;=\; 2\,R\,v. \qquad (138)$$
Writing R = U D U′ with the orthonormal eigenvector matrix U and the diagonal eigenvalue matrix D = diag(λ1 , . . . , λL ), and passing to the normal coordinates ṽ = U′ v, this becomes
$$\frac{\partial R}{\partial \tilde v} \;=\; 2\,D\,\tilde v \;=\; 2\,(\lambda_1 \tilde v_1, \ldots, \lambda_L \tilde v_L)', \qquad (139)$$
from which we get the second derivatives
$$\frac{\partial^2 R}{\partial \tilde v^2} \;=\; 2\,(\lambda_1, \ldots, \lambda_L)', \qquad (140)$$
that is, the eigenvalues of R are (up to a factor of 2) the curvatures of the per-
formance surface in the direction of the central axes of the hyperparaboloid. We
will shortly see that the computational efficiency of gradient descent on the per-
formance surface depends critically on these curvatures.
11.3.5 Gradient descent on a quadratic performance surface

Now let us do an iterative gradient descent on the performance surface, using the normal coordinates ṽ. The generic gradient descent rule (129) here is
$$\tilde v^{(n+1)} \;=\; \tilde v^{(n)} - \mu\,\frac{\partial R}{\partial \tilde v}\big(\tilde v^{(n)}\big) \;=\; (I - 2\mu D)\,\tilde v^{(n)}, \qquad (141)$$
that is, coordinate-wise,
$$\tilde v_j^{(n+1)} \;=\; (1 - 2\mu\lambda_j)\,\tilde v_j^{(n)}, \qquad (142)$$
and hence
$$\tilde v_j^{(n)} \;=\; (1 - 2\mu\lambda_j)^n\,\tilde v_j^{(0)}. \qquad (143)$$
If we defined and carried out gradient descent in the original weight coordinates w(n) via w(n+1) = w(n) − µ ∇R(w(n) ), we would get the same model sequence (up to the coordinate change w ↔ ṽ) as given by Equations 141, 142 and 143. The sequence w(n) converges to wopt if and only if $\tilde v_j^{(n)}$ converges for all coordinates 1 ≤ j ≤ L. Equation 141 implies that this happens if and only if |1 − 2µλj | < 1 for all j. These inequalities can be re-written equivalently as
$$0 \;<\; \mu \;<\; 1/\lambda_j \qquad \text{for all } j = 1,\ldots,L. \qquad (144)$$
Specifically, we must make sure that 0 < µ < 1/λmax , where λmax is the largest
eigenvalue of R. Depending on the size of µ, the convergence behavior of (141) can
be grouped in four classes which may be referred to as overdamped, underdamped,
and two types of unstable. Figure 78 illustrates how ṽj (n) evolves in these four
classes.
We can find an explicit representation of w(n) if we observe that
$$w^{(n)} \;=\; w_{opt} + v^{(n)} \;=\; w_{opt} + \sum_j u_j\,\tilde v_j^{(n)},$$
where the uj are again the orthonormal eigenvectors of R. Inserting (143) gives us
$$w^{(n)} \;=\; w_{opt} + \sum_{j=1,\ldots,L} \tilde v_j^{(0)}\,u_j\,(1 - 2\mu\lambda_j)^n. \qquad (145)$$
This representation reveals that the convergence of w(n) toward wopt is governed by an additive overlay of L exponential terms, each of which describes one of L modes of convergence.
Figure 78: The development of $\tilde v_j^{(n)}$ [plotted on the y-axis] versus n [x-axis]. The qualitative behaviour depends on the stepsize parameter µ. a. Overdamped case: 0 < µ < 1/(2λj ). b. Underdamped case: 1/(2λj ) < µ < 1/λj . c. Unstable with µ < 0 and d. unstable with 1/λj < µ. All plots start with $\tilde v_j^{(0)} = 1$.
For suitable µ (considering (144)), R(n) converges to Rmin . Plotting R(n) yields
a graph known as learning curve. Equation 146 reveals that the learning curve is
the sum of L decreasing exponentials (plus Rmin ).
What this learning curve looks like depends on the size of µ relative to the eigenvalues λj . If 2µλj is close to zero for all j, the learning curve separates into sections, each of which is determined by the convergence of one of the L components.
Figure 80 shows a three-mode learning curve for the case of small µ, rendered in
linear and logarithmic scale.
Figure 79: Two quite different modes of convergence (panel a.) versus rather
similar modes of convergence (panel b.). Plots show contour lines of the performance
surface for two-dimensional weights w = (w1 , w2 ). Violet dotted lines indicate
some initial steps of weight evolution.
Figure 80: A learning curve with three modes of convergence, in linear (a.) and
logarithmic (b.) scaling. This plot shows the qualitative behavior of modes of
convergence when µ is small. Rmin is assumed zero in these plots.
This separation of the learning curve into approximately linear sections (in
logarithmic rendering) can be mathematically explained as follows. Each of the
terms (1 − 2µλj )2n is characterized by a time constant τj according to
(1 − 2µλj )2n = exp(−n/τj ). (147)
If 2µλ is close to zero, exp(−2µλ) is close to 1 − 2µλ and thus log(1 − 2µλ) ≈ −2µλ.
Using this approximation, solving (147) for τj yields for the j-th mode a time
constant of
$$\tau_j \;\approx\; \frac{1}{4\mu\lambda_j}.$$
That is, the convergence rate (i.e. the inverse of the time constant) of the j-th
mode is proportional to λj (for small µ).
However, this analysis is meaningless for larger µ. If we want to maximize the
speed of convergence, we should use significantly larger µ as we will presently see.
The final rate of convergence is dominated by the slowest mode of convergence,
which is characterized by the geometrical sequence factor
max{|1 − 2µλj | j = 1, . . . , L} = max{|1 − 2µλmax |, |1 − 2µλmin |}. (148)
In order to maximize convergence speed, the learning rate µ should be chosen
such that (148) is minimized. Elementary considerations reveal that this minimum
is attained for |1 − 2µλmax | = |1 − 2µλmin |, which is equivalent to
$$\mu_{opt} \;=\; \frac{1}{\lambda_{max} + \lambda_{min}}. \qquad (149)$$
For this optimal learning rate, 1−2µopt λmin is positive and 1−2µopt λmax is neg-
ative, corresponding to the overdamped and underdamped cases shown in Figure
78. However, the two modes converge at the same speed (and all other modes are
faster). Concretely, the optimal speed of convergence is given by the geometric
factor β of the slowest mode of convergence,
$$\beta \;=\; 1 - 2\mu_{opt}\lambda_{min} \;=\; \frac{\lambda_{max}/\lambda_{min} - 1}{\lambda_{max}/\lambda_{min} + 1},$$
which can be derived by substituting (149) for µopt . β has a value between 0
and 1. There are two extreme cases: if λmax = λmin , then β = 0 and we have
convergence in a single step. As the ratio λmax /λmin increases, β approaches 1 and
the convergence slows down toward a standstill. The ratio λmax /λmin thus plays a
fundamental role in limiting the convergence speed of steepest descent algorithms.
It is called the eigenvalue spread.
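A two-line numerical check of this effect (my own illustration): for the optimal stepsize (149), the geometric factor (148) of the slowest mode is computed for a small and a large eigenvalue spread.

def slowest_factor(eigenvalues):
    lmax, lmin = max(eigenvalues), min(eigenvalues)
    mu_opt = 1.0 / (lmax + lmin)                      # optimal stepsize (149)
    return max(abs(1 - 2 * mu_opt * lam) for lam in eigenvalues)

print(slowest_factor([1.0, 0.9]))     # eigenvalue spread ~1.1: beta ~ 0.05, fast convergence
print(slowest_factor([1.0, 0.001]))   # eigenvalue spread 1000: beta ~ 0.998, painfully slow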
All of the derivations in this subsection were done in the context of quadratic
performance surfaces. This may seem a rather limited class of performance land-
scapes. However, it is a mathematical fact that in most (the mathematically
correct term for “most” is “generic”, which is a deep probability-theory concept)
differentiable performance surfaces, the shape of the surface in the vicinity of a
local minimum can be approximated arbitrarily well by a quadratic surface. The stability conditions and the speed of convergence toward local minima in generic gradient descent situations are therefore generically subject to the same conditions that we saw in this subsection.
Specifically, the eigenvalue spread (of the quadratic approximation) around a
local minimum sets stability limits to the achievable rate of convergence. It is
a sad fact that this eigenvalue spread is typically large in neural networks with
many layers — today famously known as deep networks. For almost twenty years
(roughly, from 1985 when gradient descent training for neural networks became
widely adopted, to 2006 when deep learning started) no general, good ways were
known to achieve neural network training that was simultaneously stable and
exhibited a fast enough speed of convergence. Only shallow neural networks (typ-
ically with a single hidden layer) could be effectively trained, limiting the appli-
cations of neural network modeling to not too nonlinear tasks. The deep learning
revolution is based, among other factors, on an assortment of “tricks of the trade”
to overcome the limitations of large eigenvalue spreads by clever modifications
of gradient descent, which cannot work in its pure form. If you are interested
— Section 8 in the deep learning “bible” (I. Goodfellow, Bengio, and Courville
2016) is all about these refinements and modifications of, and alternatives to, pure
gradient descent.
The gradient descent update formula w(n+1) = w(n) − µ ∇R(w(n) ) then becomes a stochastic update in which the true gradient is replaced by an instantaneous estimate based on the current error ε(n) = d(n) − w(n)′ x(n) (simplifying the notation ε_{w(n)}(n) to ε(n)). Inserting this estimate into (151) gives
$$w(n+1) \;=\; w(n) + 2\,\mu\,\varepsilon(n)\,x(n),$$
which is the weight update formula of one of the most compact, cheap, powerful and widely used algorithms I know of. It is called the least mean squares (LMS) algorithm in signal processing, or Widrow-Hoff learning rule in neuroscience, where
the same weight adaptation rule has been (re-)discovered independently from the
signal processing tradition. In fact, this online weight adaptation algorithm for
linear regression has been independently discovered and re-discovered many times
in many fields.
For completeness, here are all the computations needed to carry out one full
step of online filtering and weight adaptation with the LMS algorithm:
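In Python, one such step reads as follows (a sketch; the buffer handling and the small demonstration run are my own arrangement):

import numpy as np

def lms_step(w, x_buf, x_new, d_new, mu):
    x_buf = np.concatenate([[x_new], x_buf[:-1]])   # x(n) = (x(n), x(n-1), ..., x(n-L+1))'
    y = w @ x_buf                                   # filter output y(n)
    eps = d_new - y                                 # error eps(n) = d(n) - y(n)
    w = w + 2 * mu * eps * x_buf                    # LMS weight update
    return w, x_buf, y, eps

# Small demonstration: identify an unknown length-3 filter from a noisy teacher signal.
rng = np.random.default_rng(1)
w_true, mu = np.array([0.6, -0.3, 0.1]), 0.05
w, x_buf = np.zeros(3), np.zeros(3)
for n in range(2000):
    x_new = rng.normal()
    d_new = w_true @ np.concatenate([[x_new], x_buf[:-1]]) + 0.01 * rng.normal()
    w, x_buf, y, eps = lms_step(w, x_buf, x_new, d_new, mu)
print(w)        # jitters in the vicinity of w_true, as discussed below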
One fact about the LMS algorithm should always be kept in mind: being a stochas-
tic version of steepest gradient descent, the LMS algorithm inherits the problems
connected with the eigenvalue spread of the input process Xn . If its eigenvalue
spread is very large, the LMS algorithm will not work satisfactorily.
As an aside, in my working with recurrent neural networks, I once tried out
learning algorithms related to LMS. But the input signal to this learning algorithm
had an eigenvalue spread of $10^{14}$ to $10^{16}$, which resulted from the extremely multi-
curved geometry of the neural network’s cost landscape, so the beautiful LMS
algorithm was entirely inapplicable.
Because of its eminent usefulness (if the input vector correlation matrix has
a reasonably small eigenvalue spread), the LMS algorithm has been analysed in
minute detail. I conclude this section by reporting the most important insights
without mathematical derivations. At the same time I introduce some of the
standard vocabulary used in the field of adaptive signal processing.
For starters, we again assume that [X L D] is a stationary process (recall
(130)). The evolution w(n) of weights is now also a stochastic process, because
the LMS weight update depends on the stochastic vectors x(n). One interesting
question is how fast the LMS algorithm converges in comparison with the ideal
steepest gradient descent algorithm ṽ(n+1) = (I−2µD) ṽ(n) from (141). Because we
now have a stochastic update, the vectors ṽ(n) become random variables and one
can only speak about their expected value E[ṽ(n) ] at time n. (Intuitive explanation:
this value would be obtained if many (infinitely many in the limit) training runs
of the adaptive filter would be carried out and in each of these runs, the value
of ṽ(n) at time n would be taken, and an average would be formed over all these
ṽ(n) ). The following can be shown (using some additional assumptions, namely,
that µ is small and that the signal (x(n)) has no substantial autocorrelation for
time spans larger than L):
Rather to our surprise, if the LMS algorithm is used, the weights converge —
on average across different trials — as fast to the optimal weights as when the
ideal algorithm (141) is employed. Figure 81 shows an overlay of the deterministic
development of weights according to (141) with one run of the stochastic gradient
descent using the LMS algorithm.
Figure 81: Illustrating the similar performance of deterministic (pink) and stochas-
tic (red) gradient descent.
The fact that on average the weights converge to the optimal weights by no
means implies that R(n) converges to Rmin . To see why, assume that at some time
n, the LMS algorithm actually would have found the correct optimal weights, that
is, w(n) = wopt . What would happen next? Well, due to the random weight ad-
justment, these optimal weights would become misadjusted again in the next time
step! So the best one can hope for asymptotically is that the LMS algorithm lets the weights w(n) jitter randomly in the vicinity of wopt . But this means that the
effective best error that can be achieved by the LMS algorithm in the asymptotic
average is not Rmin but Rmin + Rexcess , where Rexcess comes from the random scin-
tillations of the weight update. It is intuitively clear that Rexcess depends on the
stepsize µ. The larger µ, the larger Rexcess . The absolute size of the excess error
is not as interesting as the ratio M = Rexcess /Rmin , that is, the relative size of
the excess error compared to the minimal error. The quantity M is called the
misadjustment and describes what fraction of the residual error Rmin + Rexcess can
be attributed to the random oscillations effected by the stochastic weight update
[i.e., Rexcess ], and what fraction is inevitably due to inherent limitations of the
filter itself [i.e., Rmin ]. Notice that Rexcess can in principle be brought to zero by
tuning down µ toward zero — however, that would be at odds with the objective
of fast convergence.
Under some assumptions (notably, small M) and using some approximations
(Farhang-Boroujeny, Section 6.3), the misadjustment can be approximated by
M ≈ µ trace(R), (155)
where the trace of a square matrix is the sum of its diagonal elements. The
misadjustment is thus proportional to the stepsize and can be steered by setting
the latter, if trace(R) is known. Fortunately, trace(R) can be estimated online
from the sequence (x(n)) simply and robustly [how? — easy exercise].
Another issue that one has always to be concerned about in online adaptive
signal processing is stability. We have seen in the treatment of the ideal case that
the adaptation rate µ must not exceed 1/λmax in order to guarantee convergence.
But this result does not directly carry over to the stochastic version of gradient
descent, because it does not take into account the stochastic jitter of the gradient
descent, which is intuitively likely to be harmful for convergence. Furthermore,
the value of λmax cannot be estimated robustly from few data points in a practical
situation. Using again middle-league maths and several approximations, in the
book of Farhang-Boroujeny the following upper bound for µ is derived:
$$\mu \;\le\; \frac{1}{3\,\mathrm{trace}(R)}. \qquad (156)$$
1. An even simpler stochastic gradient descent algorithm than LMS uses only
the sign of the error in the update: w(n+1) = w(n) + 2 µ sign(ε(n))x(n). If µ
is a power of 2, this algorithm does not need a multiplication (a shift does
it then) and is suitable for very high throughput hardware implementations
which are often needed in communication technology. There exist yet other
“sign-simplified” versions of LMS [cf. Farhang-Boroujeny p. 169].
2. Online stepsize adaptation: at every update use a locally adapted stepsize µ(n) ∼ 1/‖x(n)‖² . This is called normalized LMS (NLMS). In practice this pure NLMS is apt to run into stability problems; a safer version is µ(n) ∼ µ0 /(‖x(n)‖² + ψ), where µ0 and ψ are hand-tuned constants [Farhang-B. p. 172]; a small sketch follows after this list. In my own experience, normalized LMS sometimes works wonders in comparison with vanilla LMS.
3. Include a whitening mechanism into the update equation: w(n+1) = w(n) +
2µ ε(n) R−1 x(n). This Newton-LMS algorithm has a single mode of conver-
gence, but a problem is to obtain a robust (noise-insensitive) approximation
to R−1 , and to get that cheaply enough [Farhang-B. p. 210].
4. Block implementations: for very long filters (say, L > 10, 000) and high
update rates, even LMS may become too slow. Various computationally
efficient block LMS algorithms have been designed in which the input stream
is partitioned into blocks, which are processed in the frequency domain and
yield weight updates after every block only [Farhang-B. p. 247ff].
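Picking up item 2 of the list above, the safeguarded normalized stepsize amounts to the following variation of the LMS step (the default values of µ0 and ψ are arbitrary placeholders):

import numpy as np

def nlms_step(w, x_vec, d, mu0=0.5, psi=1e-6):
    # x_vec = (x(n), ..., x(n-L+1))'; same update as LMS, but with a locally adapted stepsize
    eps = d - w @ x_vec
    mu_n = mu0 / (x_vec @ x_vec + psi)
    return w + 2 * mu_n * eps * x_vec, eps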
To conclude this section, it should be said that besides LMS algorithms there
is another major class of online adaptive algorithms for transversal filters, namely
recursive least squares (RLS) adaptation algorithms. RLS algorithms are not
steepest gradient-descent algorithms. The background metaphor of RLS is not to
minimize Rn (w)
but to minimize the accumulated squared error up to the current time, $\zeta(n) = \sum_{i=1,\ldots,n} (d(i) - y(i))^2$, so the performance surface we know from LMS
plays no role for RLS. The main advantages and disadvantages of LMS vs. RLS
are:
1. LMS has computational cost O(L) per update step, where L is filter length;
RLS has cost O(L2 ). Also the space complexity of RLS is an issue for long
filters because it is O(L2 ).
2. LMS is numerically robust when set up diligently. RLS is plagued by nu-
merical instability problems that are not easy to master.
3. RLS has a single mode of convergence and converges faster than LMS, very
much faster when the input signal has a high eigenvalue spread.
4. RLS is more complicated than LMS and more difficult to implement in
robust, stable ways.
5. In applications where fast tracking of highly nonstationary systems is re-
quired, LMS may have better tracking performance than RLS (says Farhang-
Boroujeny).
The use of RLS in signal processing has been boosted by the development of
fast RLS algorithms which reach a linear time complexity in the order of O(20 L)
[Farhang-B. Section 13].
Both LMS and RLS algorithms play a role in a field of recurrent neural net-
works called reservoir computing (Jaeger 2007), which happens to be one of my
personal playgrounds. In reservoir computing, the training of neural networks is
reduced to computing a linear regression. Reservoir computing has recently be-
come particularly relevant for low-power microchip hardware implementations of
neural networks. If time permits I will give an introduction to this emerging field
in a tutorial or extra session.
12 Feedforward neural networks: the Multilayer
Perceptron
Artificial neural networks (ANNs) have been investigated for more than half a
century in two scientific domains:

In computational neuroscience, ANNs are used as mathematical models of biological neural systems, with the aim of understanding how real nervous systems process information.
In machine learning, ANNs are used for creating complex information process-
ing architectures whose function can be shaped by training from sample data.
The goal here is to solve complex learning tasks in a data engineering spirit,
aiming at models that combine good generalization with highly nonlinear
data transformations.
Historically these two branches of ANN research had been united. The ancestor
of all ANNs, the perceptron of Rosenblatt 1958, was a computational model of
optical character recognition (as we would say today) which was explicitly inspired
by design motifs imported from the human visual system (check out Wikipedia
on “perceptron”). In later decades the two branches diverged further and further
from each other, despite repeated and persistent attempts to re-unite them. Today
most ANN research in machine learning has more or less lost its connections to
its biological origins. In this course we only consider ANNs in machine learning.
Even if we only look at machine learning, ANNs come in many kinds and
variations. The common denominator for most (but not all) ANNs in ML can be
summarized as follows.
• The units of an ANN are connected to each other by links called “synaptic
connections” (an echo of the historical past) or just “connections” or “links”.
to unit i. The nonzero elements in W therefore determine the network’s
connection graph. Often it is more convenient to split the global weight
matrix in submatrices, one for each “layer” of weights. In what follows I will
use the generic symbol θ as the vector of all weights in an ANN.
• The external functionality of an ANN results from the combined local in-
teractions between its interconnected units. Very complex functionalities
may thus arise from the structured local interaction between large numbers
of simple processing units. This is, in a way, analogous to Boolean circuits –
and indeed some ANNs can be mapped on Boolean circuits. In fact, the
famous paper which can be regarded as the starting shot for computational
neuroscience, A logical calculus of the ideas immanent in nervous activity
(McCulloch and Pitts 1943), compared biological brains directly to Boolean
circuits.
• The hallmark of ANNs is that their functionality is learnt from training data.
Most learning procedures that are in use today rely on some sort of iterative
model optimization with a flavor of gradient descent.
This basic scenario allows for an immense spectrum of different ANNs, which
can be set up for tasks as diverse as dimension reduction and data compression,
approximate solving of NP-hard optimization problems, time series prediction,
nonlinear control, game playing, dynamical pattern generation and many more.
In this course I give an introduction to a particular kind of ANNs called feed-
forward neural networks, or often also – for historical reasons – multilayer percep-
trons (MLPs).
MLPs are used for the supervised learning of vectorial input-output tasks. In
such tasks the training sample is of the kind (ui , yi )i=1,...,N , where u ∈ Rn , y ∈ Rk
are drawn from a joint distribution PU,Y .
Note that in this section my notation departs from the one used in earlier
sections: I now use u instead of x to denote input patterns, in order to avoid
confusion with the network states x.
The MLP is trained to produce outputs y ∈ Rk upon inputs u ∈ Rn in a way
that this input-output mapping is similar to the relationships ui 7→ yi found in
the training data. Similarity is measured by a suitable loss function.
Supervised learning tasks of this kind – which we have already studied in
previous sections – are generally called function approximation tasks or regression
tasks. It is fair to say that today MLPs and their variations are the most widely
used workhorse in machine learning when it comes to learning nonlinear function
approximation models.
An MLP is a neural network equipped with n input units and k
output units. An n-dimensional input pattern u can be sent to the input units,
then the MLP does some interesting internal processing, at the end of which the k-
dimensional result vector of the computation can be read from the k output units.
An MLP N with n input units and k output units thus instantiates a function
N : Rn → Rk . Since this function is shaped by the synaptic connection weights θ,
one could also write Nθ : Rn → Rk if one wishes to emphasize the dependence of
N ’s functionality on its weights.
The learning task is defined by a loss function L : Rk × Rk → R≥0 . As we have
seen before, a convenient and sometimes adequate choice for L is the quadratic
loss L(N (u), y) = kN (u) − yk2 , but other loss functions are also widely used.
Chapter 6.2 in the deep learning bible (I. Goodfellow, Bengio, and Courville 2016)
gives an introduction to the theory of which loss functions should be used in which
task settings.
Given the loss function, the goal of training an MLP is to find a weight vector
θopt which minimizes the empirical loss, that is
$$\theta_{opt} \;=\; \operatorname{argmin}_{\theta\in H}\; \frac{1}{N}\sum_{i=1}^{N} L\big(N_\theta(u_i), y_i\big), \qquad (157)$$
where H is a set of candidate weight vectors. We have seen variants of this
formula many times by now in these lecture notes! And it goes almost without
saying (only “almost” because I can’t say this often enough) that some method of
regularization must be used to shield solutions obtained from the training error
minimization (157) against overfitting.
“Function approximation” sounds dry and technical, but many kinds of learn-
ing problems can be framed as function approximation learning. Here are some
examples:
Pattern recognition: inputs u are vectorized representations of any kind of “pat-
terns”, for example images, soundfiles, texts or customer profiles. Outputs
y are hypothesis vectors of the classes that are to be recognized.
Time series prediction: inputs are vector encodings of a past history of a temporal
process, outputs are vector encodings of future observations of the process.
Examples are stock market timeseries or weather data recordings.
Denoising, restoration and pattern completion: inputs are patterns that are cor-
rupted by noise or other distortions, outputs are cleaned-up or repaired or
completed versions of the same patterns. Important applications can be
found for instance in satellite sensing, medical imaging or audio processing.
Process control: In control tasks the objective is to send control inputs to a tech-
nological system (called “plant” in control engineering) such that the plant
performs in a desired way. The algorithm which computes the control inputs
is called a “controller”. Control tasks range in difficulty from almost trivial
(like controlling a heater valve such that the room temperature is steered to
a desired value) to almost impossible (like operating hundreds of valves and
heaters and coolers and whatnots in a chemical factory such that the chemi-
cal production process is regulated to optimal quality and yield). The MLP
instantiates the controller. Its inputs are settings for the desired plant be-
havior, plus optionally observation data from the current plant performance.
The outputs are the control actions which are sent to the plant.
(Figure 82: schematic of an MLP — the first and last hidden layers of neurons $x_i^m$, each layer with an additional bias unit of constant activation 1, the output neurons, and the connection weights $w_{ij}^m$ between consecutive layers.)
1. The activations $x_j^0$ of the input layer are set to the component values of the $L_0$-dimensional input vector u.

2. For m = 1, . . . , K, assume that the activations $x_j^{m-1}$ of units in layer m − 1 have already been computed (or have been externally set to the input values, in the case of m − 1 = 0). Then the activation $x_i^m$ is computed from the formula
$$x_i^m \;=\; \sigma\!\left(\sum_{j=1}^{L_{m-1}} w_{ij}^m\, x_j^{m-1} \;+\; w_{i0}^m\right). \qquad (158)$$
That is, $x_i^m$ is obtained from linearly combining the activations of the lower layer with combination weights $w_{ij}^m$, then adding the bias $w_{i0}^m \in \mathbb{R}$, then wrapping the obtained sum with the activation function σ. The activation function is a nonlinear, "S-shaped" function which I explain in more detail below. It is customary to interpret the bias $w_{i0}^m$ as the weight of a synaptic link from a special bias unit in layer m − 1 which always has a constant activation of 1 (as shown in Figure 82).
Equation 158 can be more conveniently written in matrix form. Let $x^m = (x^m_1, \ldots, x^m_{L_m})'$ be the activation vector in layer $m$, let $b^m = (w^m_{10}, \ldots, w^m_{L_m 0})'$ be the vector of bias weights, and let $W^m = (w^m_{ij})_{i=1,\ldots,L_m;\, j=1,\ldots,L_{m-1}}$ be the connection weight matrix for links between layers $m-1$ and $m$. Then (158) becomes
$$x^m = \sigma\left(W^m x^{m-1} + b^m\right), \qquad (159)$$
where the activation function $\sigma$ is applied component-wise to the activation vector.
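As an illustration (my own sketch, not from the original notes), the layer-wise update (159), followed by a linear output layer as in (160), can be coded in a few lines of Python; the argument names are arbitrary.

```python
import numpy as np

def mlp_forward(u, weights, biases, sigma=np.tanh):
    """Forward pass through an MLP: x^m = sigma(W^m x^{m-1} + b^m), eq. (159).

    `weights` = [W^1, ..., W^K], `biases` = [b^1, ..., b^K]; the last layer is
    taken to consist of simple linear output units.
    """
    x = np.asarray(u, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        x = sigma(W @ x + b)                 # hidden layers, eq. (159)
    return weights[-1] @ x + biases[-1]      # linear output layer
```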
Figure 83: The tanh (blue), the logistic sigmoid (green), and the rectifier function
(red).
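For reference, the three functions plotted in Figure 83 are easy to evaluate numerically; the following small sketch (mine, not from the original notes) uses numpy.

```python
import numpy as np

def logistic(a):
    """Logistic sigmoid 1 / (1 + exp(-a)), values in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-a))

def rectifier(a):
    """Rectifier (ReLU): max(0, a)."""
    return np.maximum(0.0, a)

a = np.linspace(-3, 3, 7)
print(np.tanh(a))       # tanh, values in (-1, 1)
print(logistic(a))
print(rectifier(a))
```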
Figure 84: Illustrating the power of iterating a simple transformation. The baker transformation (also known as horseshoe transformation) takes a 2-dimensional rectangle, stretches it and folds it back onto itself. The bottom right diagram visualizes a set that is obtained after numerous baker transformations (plus some mild nonlinear distortion). — Diagrams on the right taken from Savi 2016; photo from http://www.motherearthliving.com/cooking-methods/celebrity-chefs-pizza-dough-recipes.aspx.
approximation property of MLPs would spell out, for example, to the (proven)
statement that any task of classifying pictures can be solved to any degree of
perfection by a suitable MLP.
The proofs for such theorems are typically constructive: for some target func-
tion f and tolerance ε they explicitly construct an MLP N such that ‖f − N‖ < ε.
However, these constructions have little practical value because the constructed
MLPs N are far too large for any practical implementation. You can find more
details concerning such approximation theorems and related results in my legacy
ML lecture notes https://www.ai.rug.nl/minds/uploads/LN_ML_Fall11.pdf,
Section 8.1.
Even when the function f that one wants to train into an MLP is very complex
(highly nonlinear and with many “baker folds”), it can in principle be approxi-
mated with 1-hidden-layer MLPs. However, when one employs MLPs that have
many hidden layers, the required overall size of the MLP (quantified by total
number of weights) is dramatically reduced (Bengio and LeCun 2007). Even for
super-complex target functions f (like photographic image caption generation),
MLPs of feasible size exist when enough layers are used (one of the subnetworks
in the TICS system described in Section 1.2.1 used 17 hidden layers). This is
the basic insight and motivation to consider deep networks, which is just another
word for “many hidden layers”. Unfortunately it is not at all easy to train deep
networks. Traditional learning algorithms had made non-deep (“shallow”) MLPs
popular since the 1980s. But these shallow MLPs could only cope with rel-
atively well-behaved and simple learning tasks. Attempts to scale up to larger
numbers of hidden layers and more complex data sets largely failed, due to numerical instabilities, too slow convergence, or poor model quality. Since about
2006 an accumulation of clever “tricks of the trade” plus the availability of power-
ful (GPU-based) yet affordable computing hardware has overcome these hurdles.
This area of training deep neural networks is now one of the most thriving fields
of ML and has become widely known under the name deep learning.
1. Get a clear idea of the formal nature of your learning task. Do you want
a model output that is a probability vector? or a binary decision? or a max-
imally precise transformation of the input? how should “precision” best be
measured? and so forth. Only proceed with using MLPs if they are really
looking like a suitable model class for your problem. MLPs are raw power
cannons. If you fire them on feeble datasets you make a professional error:
simpler models (like random forests) will give better results at lower cost and with
better controllability and interpretability!
the validation error starts increasing — requires continual validation during
gradient descent). Demyanov 2015 is a solid guide for ANN regularization.
5. Fix an MLP architecture. Decide how many hidden layers the MLP shall
have, how many units each layer shall have, what kind of sigmoid is used
and what kind of output function and loss function. The structure should be
rich enough that data overfitting becomes possible and your regularization
method can kick in.
8. Do the job. Enjoy the powers, and marvel at the wickedness, of ANNs.
You see that “neural network training” is a multi-faceted thing and requires
you to consider all the issues that always jump at you in supervised machine
learning tasks. An ANN will not miraculously give good results just because it
has “neurons inside”. The actual “learning” part, namely solving the optimization
task (157), is only a subtask, albeit a conspicuous one because it is done with an
algorithm that has risen to fame, namely the backpropagation algorithm.
Initialization: Choose an initial model θ(0). This needs an educated guess. A widely used strategy is to set all weight parameters in θ(0) to small random values. Remark: For training deep neural networks this is not good enough — the deep learning field actually got kickstarted by a clever method for finding a good model initialization (Hinton and Salakhutdinov 2006).
Iterate: Compute a series (θ(n))n=0,1,... of models of decreasing empirical loss (aka training error) by gradient descent. Concretely, with
$$R^{\mathrm{emp}}(\mathcal{N}_{\theta^{(n)}}) = \frac{1}{N} \sum_{i=1,\ldots,N} L\bigl(\mathcal{N}_{\theta^{(n)}}(u_i), y_i\bigr) \qquad (161)$$
being the empirical risk, compute its gradient with respect to the parameter vector θ = (w1, ..., wl)′ at the point θ(n), and update θ(n) by taking a step in the direction of the negative gradient, scaled by a learning rate.
Stop when a stopping criterion chosen by you is met. This can be reaching a
maximum number of iterations, or the empirical risk decrease falling under
a predetermined threshold, or some early stopping scheme.
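A bare-bones gradient descent loop of this kind might look as follows in Python (a sketch of mine, not from the original notes); the fixed learning rate `mu`, the maximum iteration count and the tolerance threshold are assumed hyperparameters, and `grad_emp_risk` stands for whatever routine computes the gradient of (161).

```python
import numpy as np

def gradient_descent(theta0, grad_emp_risk, mu=0.01, max_iter=1000, tol=1e-6):
    """Iterate theta^(n+1) = theta^(n) - mu * grad R_emp(theta^(n)).

    Stops after `max_iter` iterations or when the update step becomes
    smaller than `tol` (a simple stopping criterion).
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        step = mu * grad_emp_risk(theta)
        theta = theta - step
        if np.linalg.norm(step) < tol:
            break
    return theta
```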
and this is also how it is actually computed: the gradient ∇L(Nθ (ui ), yi ) is eval-
uated for each training data example (ui , yi ) and the obtained N gradients are
averaged.
This means that at every gradient descent iteration θ(n) → θ(n+1) , all training
data points have to be visited individually. In MLP parlance, such a sweep through
all data points is called an epoch. In the neural network literature one finds
statements like “the training was done for 120 epochs”, which means that 120
average gradients were computed, and for each of these computations, N gradients
for individual training example points (ui , yi ) were computed.
When training samples are large — as they should be — one epoch can clearly
be too expensive. Therefore one often resorts to minibatch training, where
for each gradient descent iteration only a subset of the total sample S is used.
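A typical way to organize minibatch training is to shuffle the sample once per epoch and then walk through it in chunks, as in this hypothetical sketch (the names and the batch size are my own choices):

```python
import numpy as np

def minibatches(U, Y, batch_size, rng=None):
    """Yield one epoch's worth of random minibatches from the sample (U, Y)."""
    rng = rng if rng is not None else np.random.default_rng()
    N = len(U)
    order = rng.permutation(N)
    for start in range(0, N, batch_size):
        idx = order[start:start + batch_size]
        yield U[idx], Y[idx]   # the gradient is then averaged over this subset only
```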
The backpropagation algorithm is a subroutine in the gradient descent game.
It is a particular algorithmic scheme for calculating the gradient ∇L(Nθ (ui ), yi )
for a single data point (ui, yi). A naive high-school calculation of this quantity incurs a cost of O(l²) (where l is the number of network weights). When l is not extremely small (it will almost never be extremely small — a few hundred weights will be needed for simple tasks, and easily a million for deep networks applied to serious real-life modeling problems), this cost of O(l²) is too high for practical use (and it has to be paid N times in a single gradient descent step!). The backprop algorithm is a clever scheme for computing and storing certain auxiliary quantities which cuts the cost down from O(l²) to O(l).
Here is how backprop works.
1. BP works in two stages. In the first stage, called the forward pass, the current network $\mathcal{N}_\theta$ is presented with the input $u$ and the output $\hat{y} = \mathcal{N}_\theta(u)$ is computed using the “forward” formulas (159) and (160). During this forward pass, for each unit $x^m_i$ which is not a bias unit and not an input unit, the quantity
$$a^m_i = \sum_{j=0,\ldots,L_{m-1}} w^m_{ij}\, x^{m-1}_j \qquad (163)$$
is computed and stored.
Define
$$\delta^m_i = \frac{\partial L(\mathcal{N}_\theta(u), y)}{\partial a^m_i}. \qquad (165)$$
Using (163) we find
$$\frac{\partial a^m_i}{\partial w^m_{ij}} = x^{m-1}_j. \qquad (166)$$
Combining (165) with (166) we get
$$\frac{\partial L(\mathcal{N}_\theta(u), y)}{\partial w^m_{ij}} = \delta^m_i\, x^{m-1}_j. \qquad (167)$$
Thus, in order to calculate the desired derivatives (164), we only need to compute the values of $\delta^m_i$ for each hidden and output unit.
3. Computing the $\delta$'s for output units. Output units $x^K_i$ are typically set up differently from hidden units, and their corresponding $\delta$ values must be computed in ways that depend on the special architecture. For concreteness here I stick with the simple linear units introduced in (160). The potentials $a^K_i$ are thus identical to the output values $\hat{y}_i$ and we obtain
$$\delta^K_i = \frac{\partial L(\mathcal{N}_\theta(u), y)}{\partial \hat{y}_i}. \qquad (168)$$
This quantity is thus just the partial derivative of the loss with respect to the $i$-th output, which is usually simple to compute. For the quadratic loss $L(\mathcal{N}_\theta(u), y) = \|\mathcal{N}_\theta(u) - y\|^2$, for instance, we get
$$\delta^K_i = \frac{\partial \|\mathcal{N}_\theta(u) - y\|^2}{\partial \hat{y}_i} = \frac{\partial \|\hat{y} - y\|^2}{\partial \hat{y}_i} = \frac{\partial (\hat{y}_i - y_i)^2}{\partial \hat{y}_i} = 2\,(\hat{y}_i - y_i). \qquad (169)$$
4. Computing the $\delta$'s for hidden units. In order to compute $\delta^m_i$ for $1 \le m < K$ we again make use of the chain rule. We find
$$\delta^m_i = \frac{\partial L(\mathcal{N}_\theta(u), y)}{\partial a^m_i} = \sum_{l=1,\ldots,L_{m+1}} \frac{\partial L(\mathcal{N}_\theta(u), y)}{\partial a^{m+1}_l}\, \frac{\partial a^{m+1}_l}{\partial a^m_i}, \qquad (170)$$
which is justified by the fact that the only path by which $a^m_i$ can affect $L(\mathcal{N}_\theta(u), y)$ is through the potentials $a^{m+1}_l$ of the next higher layer. If we substitute (165) into (170) and observe (163) we get
$$\begin{aligned}
\delta^m_i &= \sum_{l=1,\ldots,L_{m+1}} \delta^{m+1}_l\, \frac{\partial a^{m+1}_l}{\partial a^m_i} \\
&= \sum_{l=1,\ldots,L_{m+1}} \delta^{m+1}_l\, \frac{\partial \sum_{j=0,\ldots,L_m} w^{m+1}_{lj}\, \sigma(a^m_j)}{\partial a^m_i} \\
&= \sum_{l=1,\ldots,L_{m+1}} \delta^{m+1}_l\, \frac{\partial\, w^{m+1}_{li}\, \sigma(a^m_i)}{\partial a^m_i} \\
&= \sigma'(a^m_i) \sum_{l=1,\ldots,L_{m+1}} \delta^{m+1}_l\, w^{m+1}_{li}. \qquad (171)
\end{aligned}$$
This formula describes how the $\delta^m_i$ in a hidden layer can be computed by “back-propagating” the $\delta^{m+1}_l$ from the next higher layer. The formula can be used to compute all $\delta$'s, starting from the output layer (where (168) is used — in the special case of a quadratic loss, Equation 169), and then working backwards through the network in the backward pass of the algorithm.
When the logistic sigmoid $\sigma(a) = 1/(1 + \exp(-a))$ is used, the computation of the derivative $\sigma'(a^m_i)$ takes a particularly simple form. Observing that for this sigmoid it holds that $\sigma'(a) = \sigma(a)\,(1 - \sigma(a))$ leads to
$$\sigma'(a^m_i) = x^m_i\,(1 - x^m_i).$$
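Putting the forward and backward passes together for one training example, a compact and purely illustrative numpy version of the scheme just derived could look as follows. It assumes logistic sigmoid hidden units, linear output units, the quadratic loss, and weight matrices whose first column holds the bias weights w^m_{i0}; none of these names come from the original notes.

```python
import numpy as np

def sigma(a):
    """Logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-a))

def backprop_single_example(u, y, weights):
    """Gradient of the quadratic loss for one training example (u, y).

    `weights` is a list of matrices W^1, ..., W^K; column 0 of each matrix
    holds the bias weights (the bias unit is prepended as a constant 1).
    Returns one gradient matrix per W^m, following eqs. (163)-(171).
    """
    # forward pass: store activations (with bias unit) and potentials a^m
    activations = [np.concatenate(([1.0], np.asarray(u, dtype=float)))]
    for m, W in enumerate(weights):
        a = W @ activations[-1]                       # potentials, eq. (163)
        x = a if m == len(weights) - 1 else sigma(a)  # linear output layer
        activations.append(np.concatenate(([1.0], x)))

    # backward pass: delta for the output layer (169), then back-propagation (171)
    y_hat = activations[-1][1:]
    delta = 2.0 * (y_hat - np.asarray(y, dtype=float))     # eq. (169)
    grads = [None] * len(weights)
    for m in reversed(range(len(weights))):
        grads[m] = np.outer(delta, activations[m])         # eq. (167)
        if m > 0:
            x_hidden = activations[m][1:]                   # sigma(a^m) of layer m
            delta = x_hidden * (1 - x_hidden) * (weights[m][:, 1:].T @ delta)  # eq. (171)
    return grads
```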
Simple gradient descent, as described above, is cheap to compute but may take long to converge and/or run into stability issues. A large variety of refined iterative loss minimization methods have been developed in the deep learning field which in most cases (though none in all cases) combine an acceptable speed of convergence with stability. Some of them refine gradient descent; others use information from the local curvature of the performance surface (and are therefore called “second-order” methods). The main alternatives are nicely sketched and compared at https://www.neuraldesigner.com/blog/5_algorithms_to_train_a_neural_network (retrieved May 2017, local copy at https://www.ai.rug.nl/minds/uploads/NNalgs.zip).
That’s it. I hope this course has helped you on your way into machine learning
— that you have learnt a lot, that you enjoyed at least half of it, and that you will
not forget the essential ten percent.
Appendix
A Elementary mathematical structure-forming op-
erations
A.1 Pairs, tuples and indexed families
If two mathematical objects O1 , O2 are given, they can be grouped together in a
single new mathematical structure called the ordered pair (or just pair) of O1 , O2 .
It is written as
(O1 , O2 ).
In many cases, O1 , O2 will be of the same kind, for instance both are integers. But
the two objects need not be of the same kind. For instance, it is perfectly possible
to group integer O1 = 3 together with a random variable (a function!) O2 = X7
in a pair, getting (3, X7 ).
The crucial property of a pair (O1 , O2 ) which distinguishes it from the set
{O1 , O2 } is that the two members of a pair are ordered, that is, it makes sense to
speak of the “first” and the “second” member of a pair. In contrast, it makes no
sense to speak of the “first” or “second” element of the set {O1 , O2 }. Related to
this is the fact that the two members of a pair can be the same, for instance (2, 2)
is a valid pair. In contrast, {2, 2} makes no sense.
A generalization of pairs is N -tuples. For an integer N > 0, an N -tuple of N
objects O1 , O2 , . . . , ON is written as
(O1 , O2 , . . . , ON ).
1-tuples are just individual objects; 2-tuples are pairs, and for N > 2, N -tuples
are also called lists (by computer scientists that is; mathematicians rather don’t
use that term). Again, the crucial property of N -tuples is that one can identify its
i-th member by its position in the tuple, or in more technical terminology, by its
index. That is, in an N -tuple, every index 1 ≤ i ≤ N “picks” one member from
the tuple.
The infinite generalization of N -tuples is provided by indexed families. For
any nonempty set I, called an index set in this context,
(Oi )i∈I
denotes a compound object assembled from as many mathematical objects as
there are index elements i ∈ I, and within this compound object, every individual
member Oi can be “addressed” by its index i. One simply writes
Oi
to denote the ith “component” of (Oi )i∈I . Writing Oi is a shorthand for applying
the ith projection function on (Oi )i∈I , that is, Oi = πi ((Oi )i∈I ).
A.2 Products of sets
We first treat the case of products of a finite number of sets. Let S1 , . . . , SN be
(any) sets. Then the product S1 × . . . × SN is the set of all N -tuples of elements
from the corresponding sets, that is,
S1 × . . . × SN = {(s1 , . . . , sN ) | si ∈ Si }.
This generalizes to infinite products as follows. Let I be any set — we call it
an index set in this context. For every i ∈ I, let Si be some set. Then the product
set indexed by I is the set of functions
$$\prod_{i \in I} S_i = \{\varphi : I \to \bigcup_{i \in I} S_i \mid \forall i \in I : \varphi(i) \in S_i\}.$$
For example, taking the index set I = ℕ and S_n = ℝ for every n ∈ ℕ, the product ∏_{n ∈ ℕ} ℝ is the set of all right-infinite real-valued timeseries (with discrete time points starting at time n = 0).
B Joint, conditional and marginal probabilities
Note. This little section is only a quick memory refresher of some of the most
basic concepts of probability. It does not replace a textbook chapter!
We first consider the case of two observations of some part of reality that
have discrete values. For instance, an online shop creating customer profiles may
record from their customers their age and gender (among many other items). The
marketing optimizers of that shop are not interested in the exact age but only in
age brackets, say a1 = at most 10 years old, a2 = 11 − 20 years, a3 = 21 − 30
years, a4 = older than 30. Gender is roughly categorized into the possibilities
g1 = f, g2 = m, g3 = o. From their customer data the marketing guys estimate the
following probability table:
$$\begin{array}{c|cccc}
P(X = g_i, Y = a_j) & a_1 & a_2 & a_3 & a_4 \\ \hline
g_1 & 0.005 & 0.3 & 0.2 & 0.04 \\
g_2 & 0.005 & 0.15 & 0.15 & 0.04 \\
g_3 & 0.0 & 0.05 & 0.05 & 0.01
\end{array} \qquad (172)$$
The cell (i, j) in this 3 × 4 table contains the probability that a customer with
gender gi falls into the age bracket aj . This is the joint probability of the two
observation values gi and aj . Notice that all the numbers in the table sum to 1.
The mathematical tool to formally describe a category of an observable value is
a random variable (RV). We typically use symbols X, Y, Z, . . . for RVs in abstract
mathematical formulas. When we deal with concrete applications, we may also
use “telling names” for RVs. For instance, in Table (172), instead of P (X =
gi , Y = aj ) we could have written P (Gender = gi , Age = aj ). Here we have two
such observation categories: gender and age bracket, and hence we use two RVs
X and Y for gender and age, respectively. In order to specify, for example, that
female customers in the age bracket 11-20 occur with a probability of 0.3 in the
shop’s customer reservoir (the second entry in the top line of the table), we write
P (X = g1 , Y = a2 ) = 0.3.
Some more info bits of concepts and terminology connected with RVs. You
should consider a RV as the mathematical counterpart of a procedure or apparatus
to make observations or measurements. For instance, the real-world counterpart of
the Gender RV could be an electronic questionnaire posted by the online shop, or
more precisely, the “what is your gender?” box on that questionnaire, plus the whole
internet infrastructure needed to send the information entered by the customer
back to the company’s webserver. Or in a very different example (measuring
the speed of a car and showing it to the driver on the speedometer) the real-
world counterpart of a RV Speed would be the total on-board circuitry in a car,
comprising the wheel rotation sensor, the processing DSP microchip, and the
display at the dashboard.
A RV always comes with a set of possible outcomes. This set is called the
sample space of the RV, and I usually denote it with the symbol S. Mathematically,
a sample space is a set. The sample space for the Gender RV would be the set
S = {m, f, o}. The sample space for Age that we used in the table above was S =
{{0, 1, . . . , 10}, {11, . . . , 20}, {21, . . . , 30}, {31, 32, . . .}}. For car speed measuring
we might opt for S = R≥0 , the set of non-negative reals. A sample space can be
larger than the set of measurement values that are realistically possible, but it
must contain at least all the possible values.
Back to our table and the information it contains. If we are interested only in
the age distribution of customers, ignoring the gender aspects, we sum the entries
in each age column and get the marginal probabilities of the RV Y . Formally, we
compute
$$P(Y = a_j) = \sum_{i=1,2,3} P(X = g_i, Y = a_j).$$
$$\begin{array}{c|cccc|c}
 & a_1 & a_2 & a_3 & a_4 & \\ \hline
g_1 & 0.005 & 0.3 & 0.2 & 0.04 & 0.545 \\
g_2 & 0.005 & 0.15 & 0.15 & 0.04 & 0.345 \\
g_3 & 0.0 & 0.05 & 0.05 & 0.01 & 0.110 \\ \hline
 & 0.01 & 0.5 & 0.4 & 0.09 &
\end{array} \qquad (173)$$
Notice that the marginal probabilities of age 0.01, 0.5, 0.4, 0.09 sum to 1, as do
the gender marginal probabilities.
Finally, the conditional probability P (X = gi | Y = aj ) that a customer has
gender gi given that the age bracket is aj is computed through dividing the joint
probabilities in column j by the sum of all values in this column:
$$P(X = g_i \mid Y = a_j) = \frac{P(X = g_i, Y = a_j)}{P(Y = a_j)}. \qquad (174)$$
There are two equivalent versions of this formula:
$$P(X = g_i, Y = a_j) = P(X = g_i \mid Y = a_j)\, P(Y = a_j), \qquad (175)$$
where the righthand side is called a factorization of the joint distribution on the lefthand side, and
$$P(Y = a_j) = \frac{P(X = g_i, Y = a_j)}{P(X = g_i \mid Y = a_j)}, \qquad (176)$$
demonstrating that each of the three quantities (joint, conditional, marginal prob-
ability) can be expressed by the respective two others. If you memorize one of
these formulas – I recommend the second one – you have memorized the very
key to master “probability arithmetics” and will never get lost when manipulating
probability formulas.
The factorization (175) can be done in two ways: P(Y = aj | X = gi) P(X = gi) = P(X = gi | Y = aj) P(Y = aj), which gives rise to Bayes’ formula
$$P(Y = a_j \mid X = g_i) = \frac{P(X = g_i \mid Y = a_j)\, P(Y = a_j)}{P(X = g_i)}, \qquad (177)$$
which has many uses in statistical modeling because it shows how one can revert
the conditioning direction.
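All of these manipulations can be checked directly on the little table (172); the following numpy snippet (my own illustration, not part of the original notes) computes the marginals, the conditionals (174), and Bayes’ formula (177):

```python
import numpy as np

# joint probability table (172): rows = gender g1..g3, columns = age brackets a1..a4
P = np.array([[0.005, 0.30, 0.20, 0.04],
              [0.005, 0.15, 0.15, 0.04],
              [0.000, 0.05, 0.05, 0.01]])

P_age = P.sum(axis=0)        # marginal P(Y = a_j), bottom row of (173)
P_gender = P.sum(axis=1)     # marginal P(X = g_i), right column of (173)

P_gender_given_age = P / P_age                 # eq. (174), column-wise
# Bayes' formula (177): P(Y = a_j | X = g_i) from P(X = g_i | Y = a_j)
P_age_given_gender = (P_gender_given_age * P_age) / P_gender[:, None]

print(P_age)                       # [0.01 0.5  0.4  0.09]
print(P_gender)                    # [0.545 0.345 0.11 ]
print(P_gender_given_age[0, 1])    # P(X = g1 | Y = a2) = 0.3 / 0.5 = 0.6
```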
Joint, conditional, and marginal probabilities are also defined when there are
more than two categories of observations. For instance, the online shop marketing
people also record how much a customer spends on average, and formalize this by
a third random variable, say Z. The values that Z can take are spending brackets,
say s1 = less than 5 Euros to s20 = more than 5000 Euros. The joint probability
values P (X = gi , Y = aj , Z = sk ) would be arranged in a 3-dimensional array
sized 3 × 4 × 20, and again all values in this array together sum to 1. Now there
are different arrangements for conditional and marginal probabilities, for instance
P (Z = sk | X = gi , Y = aj ) is the probability that among the group of customers
with gender gi and age aj , a person spends an amount in the range sk . Or P (Z =
sk , Y = aj | X = gi ) is the probability that in the gender group gi a person is aged
aj and spends sk . As a last example, the probabilities P (X = gi , Z = sj ) are the
marginal probabilities obtained by summing away the Y variable:
$$P(X = g_i, Z = s_j) = \sum_{k=1,2,3,4} P(X = g_i, Y = a_k, Z = s_j). \qquad (178)$$
So far I have described cases where all kinds of observations were discrete, that
is, they (i.e. all RVs) yield values from a finite set – for instance the three gender
values or the four age brackets. Equally often one faces continuous random values
which arise from observations that yield real numbers – for instance, measuring
the body height or the weight of a person. Since each such RV can give infinitely
many different observation outcomes, their probabilities cannot be represented in
a table or array. Instead, one uses probability density functions (pdf’s) to write
down and compute probability values.
Let’s start with a single RV, say H = Body Height. Since body heights are
non-negative and, say, never larger than 3 m, the distribution of body heights
within some reference population can be represented by a pdf f : [0, 3] → R≥0
which maps the interval [0, 3] of possible values to the nonnegative reals (Figure
85). We will be using subscripts to make it clear which RV a pdf refers to, so the
pdf describing the distribution of body height will be written fH .
A pdf for the distribution of a continuous RV X can be used to calculate the
probability that this RV takes values within a particular interval, by integrating
the pdf over that interval. For instance, the probability that a measurement of
[Figure 85: a pdf f_H for the body height RV H, plotted over the interval [0, 3] (horizontal axis: H (body height); vertical axis: pdf).]
body height comes out between 1.5 and 2.0 meters is obtained by
$$P(H \in [1.5, 2.0]) = \int_{1.5}^{2.0} f_H(x)\, dx. \qquad (179)$$
• Be aware that the values f (x) of a pdf are not probabilities! Pdf’s turn into
probabilities only through integration over intervals.
• Values f (x) can be greater than 1 (as in Figure 85), again indicating that
they cannot be taken as probabilities.
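Numerically, an integral like (179) can be approximated by summing pdf values on a fine grid. The concrete pdf below is a made-up choice (a Gaussian with mean 1.7 m and standard deviation 0.2 m), used here only to have something to integrate:

```python
import numpy as np

def gaussian_pdf(x, mu=1.7, s=0.2):
    """A hypothetical pdf for body height H (normal, mean 1.7 m, sd 0.2 m)."""
    return np.exp(-(x - mu) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

# P(H in [1.5, 2.0]) as in (179), approximated by a simple Riemann sum
xs = np.linspace(1.5, 2.0, 5001)
dx = xs[1] - xs[0]
prob = np.sum(gaussian_pdf(xs)) * dx
print(prob)   # roughly 0.77 for this particular choice of pdf
```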
[Figure: a two-dimensional pdf plotted as a surface over the (X, Y) plane (vertical axis: pdf).]
where also the cases $a_i = -\infty$ and $b_i = \infty$ are possible. A more compact notation for the same integral is
$$\int_D f(x)\, dx,$$
where $D$ denotes the $k$-dimensional box $[a_1, b_1] \times \ldots \times [a_k, b_k]$ and $x$ denotes vectors in $\mathbb{R}^k$. Mathematicians speak of $k$-dimensional intervals instead of “boxes”. The set of points $S = \{x \in \mathbb{R}^k \mid f_{X_1,\ldots,X_k}(x) > 0\}$ is called the support of the distribution. Obviously $S \subseteq D$.
In analogy to the 1-dim case from Figure 85, probabilities are obtained from a $k$-dimensional pdf $f_{X_1,\ldots,X_k}$ by integrating over sub-intervals. For such a $k$-dimensional subinterval $[r_1, s_1] \times \ldots \times [r_k, s_k] \subseteq [a_1, b_1] \times \ldots \times [a_k, b_k]$, we get its probability by
$$P(X_1 \in [r_1, s_1], \ldots, X_k \in [r_k, s_k]) = \int_{r_1}^{s_1} \cdots \int_{r_k}^{s_k} f(x_1, \ldots, x_k)\, dx_k \ldots dx_1. \qquad (180)$$
In essentially the same way as we did for discrete distributions, the pdf’s of marginal distributions are obtained by integrating away the RV’s that one wishes to expel. In analogy to (178), for instance, one would get
$$f_{X_1, X_3}(x_1, x_3) = \int_{a_2}^{b_2} f_{X_1, X_2, X_3}(x_1, x_2, x_3)\, dx_2. \qquad (181)$$
The pdf of the conditional distribution of X given Y = c is obtained, in analogy to (174), as
$$f_{X \mid Y = c}(x) = \frac{f_{X,Y}(x, c)}{f_Y(c)}. \qquad (183)$$
In this appendix (and in the lecture) I consider only two ways of representing probability distributions: discrete ones by finite probability tables or probability arrays; continuous ones by pdfs. These are the most elementary formats of representing probability distributions. There are many others which ML experts readily command. This large and varied universe of concrete representations of probability distributions is tied together by an abstract mathematical theory of the probability distributions themselves, independent of particular representations. This theory is called probability theory. It is not an easy theory and we don’t attempt an introduction to it. If you are mathematically minded, then you can get an
introduction to probability theory in my graduate lecture notes “Principles of Sta-
tistical Modeling” (https://www.ai.rug.nl/minds/uploads/LN_PSM.pdf). At
this point I only highlight two core facts from probability theory:
$$\operatorname*{argmax}_{a}\, \varphi(a)$$
is that d ∈ D for which φ(d) is maximal among all values of φ on D. If there are
several arguments a for which φ gives the same maximal value, – that is, φ does
not have a unique maximum –, or if φ has no maximum at all, then the argmax
is undefined.
by three random variables G, A, S. A random variable always comes together with
a sample space. This is the set of values that might be delivered by the random
variable. For instance, the sample space of the gender RV G could be cast as
{m, f, o} – a symbolic (and finite) set. A reasonable sample space for the age
random variable A would be the set of integers between 0 and 200 – assuming
that no customer will be older than 200 years and that age is measured in integers
(years). Finally, a reasonable sample space for the spending RV S could be just
the real numbers R.
Note that in the A and S examples, the sample spaces that I proposed look
very generous. We would not really expect that some customer is 200 years old,
nor would we think that a customer ever spends 10^1000 Euros – although both
values are included in the respective sample space. The important thing about a
sample space is that it must contain all the values that might be returned by the
RV; but it may also contain values that will never be observed in practice.
Every mathematical set can serve as a sample space. We just saw symbolic,
integer, and real sample spaces. Real sample spaces are used whenever one is
dealing with an observation procedure that returns numerical values. Real-valued
RVs are of great practical importance, and they allow many insightful statistical
analyses that are not defined for non-numerical RVs. The most important analyt-
ical characteristics of real RVs are expectation, variance, and covariance, which I
will now present in turn.
For the remainder of this appendix section we will be considering random
variables X whose sample space is Rn — that is, observation procedures which
return scalars (case n = 1) or vectors. We will furthermore assume that the
distributions of all RVs X under consideration will be represented by pdf’s fX :
Rn → R≥0 . (In mathematical probability theory, more general numerical sample
spaces are considered, as well as distributions that have no pdf — but we will
focus on this basic scenario of real-valued RVs with pdfs).
The expectation of a RV $X$ with sample space $\mathbb{R}^n$ and pdf $f_X$ is defined as
$$E[X] = \int_{\mathbb{R}^n} x\, f_X(x)\, dx, \qquad (184)$$
PX,Y of two RVs X, Y , we may compute the average value of the xi by
$$\mathrm{mean}(\{x_1, \ldots, x_N\}) = 1/N \sum_{i=1}^{N} x_i,$$
but this sample mean is NOT the expectation of X. Had we used another
random sample, we would most likely have obtained another sample mean. In
contrast, the expectation E[X] of X is defined not on the basis of a finite, random
sample of X, but it is defined by averaging over the true underlying distribution.
Since in practice we will not have access to the true pdf fX , the expectation
of a RV X cannot usually be determined in full precision. The best one can do is
to estimate it from observed sample data. The sample mean is an estimator for
the expectation of a numerical RV X. Marking estimated quantities by a “hat”
accent, we may write
$$\hat{E}[X] = 1/N \sum_{i=1}^{N} x_i.$$
A random variable X is centered if its expectation is zero. By subtracting the
expectation one gets a centered RV. In these lecture notes I use the bar notation
to mark centered RVs:
X̄ := X − E[X].
The variance of a scalar RV with sample space $\mathbb{R}$ is the expected squared deviation from the expectation,
$$\sigma^2(X) = E[\bar{X}^2], \qquad (185)$$
which in terms of the pdf $f_{\bar{X}}$ of $\bar{X}$ can be written as
$$\sigma^2(X) = \int_{\mathbb{R}} x^2\, f_{\bar{X}}(x)\, dx.$$
but in fact this estimator is not the best possible – on average (across different
samples) it underestimates the true variance. If one wishes to have an estimator
that is unbiased, that is, which on average across different samples gives the correct
variance, one must use
$$\hat{\sigma}^2(\{x_1, \ldots, x_N\}) = 1/(N-1) \sum_{i=1}^{N} \left( x_i - 1/N \sum_{j=1}^{N} x_j \right)^2$$
instead. The Wikipedia article on “Variance”, section “Population variance and sample variance”, points out a number of other pitfalls and corrections that one should consider when one estimates variance from samples.
The square root of the variance of $X$, $\sigma(X) = \sqrt{\sigma^2(X)}$, is called the standard deviation of $X$.
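In numpy, the biased and the unbiased variance estimators differ only in the `ddof` argument; the sample below is synthetic and serves only as an illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=2.0, size=20)   # a small sample, true variance 4

var_biased = np.var(x)             # divides by N     (the 1/N estimator)
var_unbiased = np.var(x, ddof=1)   # divides by N - 1 (the unbiased estimator above)
std = np.sqrt(var_unbiased)        # standard deviation estimate
print(var_biased, var_unbiased, std)
```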
The covariance between two real-valued scalar random variables X, Y is defined
as
$$\mathrm{Cov}(X, Y) = E[\bar{X}\,\bar{Y}], \qquad (186)$$
which in terms of a pdf $f_{\bar{X}\bar{Y}}$ for the joint distribution of the centered RVs spells out to
$$\mathrm{Cov}(X, Y) = \int_{\mathbb{R} \times \mathbb{R}} x\, y\, f_{\bar{X}\bar{Y}}((x, y)')\, dx\, dy.$$
Finally, let us inspect the correlation of two scalar RVs X, Y . Here we have to
be careful because this term is used differently in different fields. In statistics, the
correlation is defined as
$$\mathrm{Corr}(X, Y) = \frac{\mathrm{Cov}(X, Y)}{\sigma(X)\, \sigma(Y)}. \qquad (187)$$
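Estimating covariance (186) and correlation (187) from a sample follows the same pattern as estimating expectation and variance; here is a small synthetic illustration (my own sketch, with an arbitrarily chosen dependency between X and Y):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 0.8 * x + 0.6 * rng.normal(size=500)   # correlated with x by construction

cov_xy = np.mean((x - x.mean()) * (y - y.mean()))   # estimate of (186)
corr_xy = cov_xy / (x.std() * y.std())              # estimate of (187)
print(cov_xy, corr_xy)   # corr_xy should come out roughly 0.8
```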
1. Expectation is a linear operator: $E[\alpha X + \beta Y] = \alpha\, E[X] + \beta\, E[Y]$ for all real $\alpha, \beta$.

2. Expectation is idempotent: $E[E[X]] = E[X]$.

3. $\mathrm{Cov}(X, Y) = E[X\, Y] - E[X]\, E[Y]$.
E Derivation of Equation 32
$$\begin{aligned}
1/N \sum_i \|x_i - d \circ f(x_i)\|^2
&= 1/N \sum_i \bigl\|\bar{x}_i - \sum_{k=1}^{m} (\bar{x}_i' u_k)\, u_k\bigr\|^2 \\
&= 1/N \sum_i \bigl\|\sum_{k=1}^{n} (\bar{x}_i' u_k)\, u_k - \sum_{k=1}^{m} (\bar{x}_i' u_k)\, u_k\bigr\|^2 \\
&= 1/N \sum_i \bigl\|\sum_{k=m+1}^{n} (\bar{x}_i' u_k)\, u_k\bigr\|^2 \\
&= 1/N \sum_i \sum_{k=m+1}^{n} (\bar{x}_i' u_k)^2 \;=\; \sum_{k=m+1}^{n} 1/N \sum_i (\bar{x}_i' u_k)^2 \\
&= \sum_{k=m+1}^{n} \sigma_k^2.
\end{aligned}$$
References
[1] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski. “A Learning Algorithm for
Boltzmann Machines”. In: Cognitive Science 9 (1985), pp. 147–169.
[2] A. C. Antoulas and D. C. Sorensen. “Approximation of large-scale dynamical
systems: an overview”. In: Int. J. Appl. Math. Comput. Sci. 11.5 (2001),
pp. 1093–1121.
[3] D. Bahdanau, K. Cho, and Y. Bengio. “Neural Machine Translation by
Jointly Learning to Align and Translate”. In: International Conference on
Learning Representations (ICLR). 2015. url: http://arxiv.org/abs/1409.0473v6.
[4] J. A. Bednar and S. P. Wilson. “Cortical maps.” In: The Neuroscientist 22.6
(2016), pp. 604–617.
[5] Y. Bengio and Y. LeCun. “Scaling Learning Algorithms towards AI”. In:
Large-Scale Kernel Machines. Ed. by Bottou L. et al. MIT Press, 2007.
[6] L. Breiman. “Random forests”. In: Machine Learning 45 (2001), pp. 5–32.
[7] A. Clark. “Whatever Next? Predictive Brains, Situated Agents, and the
Future of Cognitive Science.” In: Behavioural and Brain Sciences 36.3 (2013),
pp. 1–86.
[8] G. E. Crooks. Field Guide to Continuous Probability Distributions, v 0.11
beta. online manuscript, retrieved April 2017, extended version also available
in print since 2019. 2017. url: http://threeplusone.com/fieldguide.
[9] A. Deisenroth, A. Faisal, and C. S. Ong. Mathematics for Machine Learning.
Free online copy at https://mml-book.github.io/. Cambridge University
Press, 2019.
[10] A.P. Dempster, N.M. Laird, and D.B. Rubin. “Maximum likelihood from
incomplete data via the EM-algorithm”. In: Journal of the Royal Statistical
Society 39 (1977), pp. 1–38.
[11] S. Demyanov. “Regularization Methods for Neural Networks and Related
Models”. PhD thesis. Dept of Computing and Information Systems, Univ.
of Melbourne, 2015.
[12] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification (second
edition). Wiley Interscience, 2001.
[13] R. Durbin et al. Biological Sequence Analysis: Probabilistic Models of Pro-
teins and Nucleic Acids. Cambridge University Press, 2000.
[14] D. Durstewitz, J. K. Seamans, and T. J. Sejnowski. “Neurocomputational
models of working memory”. In: Nature Neuroscience 3 (2000), pp. 1184–91.
[15] S. Edelman. “The minority report: some common assumptions to reconsider
in the modelling of the brain and behaviour”. In: J of Experimental and
Theoretical Artificial Intelligence (2015). url: http://www.tandfonline.
com/action/showCitFormats?doi=10.1080/0952813X.2015.1042534.
[16] S. Ermon. Probabilistic Graphical Models. Online lecture notes of a graduate
course at Stanford University. 2019. url: https://ermongroup.github.
io/cs228-notes/.
[17] B. Farhang-Boroujeny. Adaptive Filters: Theory and Applications. Wiley,
1998.
[18] K. Friston. “A theory of cortical response”. In: Phil. Trans. R. Soc. B 360
(2005), pp. 815–836.
[19] K. Friston. “Learning and Inference in the Brain”. In: Neural Networks 16
(2003), pp. 1325–1352.
[20] S. Fusi and X.-J. Wang. “Short-term, long-term, and working memory”. In:
From Neuron to Cognition via Computational Neuroscience. Ed. by M. Arbib
and J. Bonaiuto. MIT Press, 2016, pp. 319–344.
[21] I. J. Goodfellow, J. Shlens, and C. Szegedy. “Explaining and Harnessing
Adversarial Examples”. In: Proc. ICLR 2015. arXiv:1412.6572v3. 2014.
[22] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. Open access
version at http://www.deeplearningbook.org. MIT Press, 2016.
[23] A. N. Gorban et al. Principal Manifolds for Data Visualization and Dimen-
sion Reduction. Springer, 2008.
[24] A. Graves et al. “Hybrid computing using a neural network with dynamic
external memory”. In: Nature 7626 (2016), pp. 471–476.
[25] A. Hart, J. Hook, and J. Dawes. “Embedding and approximation theorems
for echo state networks”. In: Neural Networks 128 (2020), pp. 234–247.
[26] M. Hasegawa, H. Kishino, and T. Yano. “Dating the human-ape splitting by
a molecular clock of mitochondrial DNA”. In: J. of Molecular Evolution 22
(1985), pp. 160–174.
[27] G. E. Hinton and R. R. Salakhutdinov. “Reducing the Dimensionality of
Data with Neural Networks”. In: Science 313.July 28 (2006), pp. 504–507.
[28] E. Horvitz and M. Barry. “Display of information for time-critical decision
making”. In: Proc. 11th Conf. on Uncertainty in Artificial Intelligence. Mor-
gan Kaufmann Publishers Inc., 1995, pp. 296–305.
[29] C. Huang and A. Darwiche. “Inference in Belief Networks: A Procedural
Guide”. In: Int. J. of Approximate Reasoning 11.1 (1994), p. 158.
[30] J. P. Huelsenbeck and F. Ronquist. “MRBAYES: Bayesian inference of phy-
logenetic trees”. In: Bioinformatics 17.8 (2001), pp. 754–755.
[31] L. Hyafil and R. L. Rivest. “Computing optimal binary decision trees is
NP-complete”. In: Information Processing Letters 5.1 (1976), pp. 15–17.
[32] G. Indiveri. Rounding Methods for Neural Networks with Low Resolution
Synaptic Weights. arXiv preprint. Institute of Neuroinformatics, Univ. Zurich,
2015. url: http://arxiv.org/abs/1504.05767.
[33] H. Jaeger. “Echo State Network”. In: Scholarpedia. Vol. 2. 2007, p. 2330.
url: http://www.scholarpedia.org/article/Echo_State_Network.
[34] E. T. Jaynes. Probability Theory: the Logic of Science. First partial online
editions in the late 1990s. First three chapters online at http://bayes.
wustl.edu/etj/prob/book.pdf. Cambridge University Press, 2003.
[35] D. Jones. Good Practice in (Pseudo) Random Number Generation for Bioin-
formatics Applications. technical report, published online. UCL Bioinfor-
matics, 2010. url: http://www.cs.ucl.ac.uk/staff/d.jones/GoodPracticeRNG.pdf.
[36] M. I. Jordan, Z. Ghahramani, et al. “An introduction to variational methods
for graphical models”. In: Machine Learning 37.2 (1999), pp. 183–233.
[37] M. I. Jordan and D. M. Wolpert. “Computational motor control”. In: The
Cognitive Neurosciences, 2nd edition. Ed. by M. Gazzaniga. MIT Press,
1999.
[38] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. “Optimization by Simulated
Annealing”. In: Science 220.4598 (1983), pp. 671–680.
[39] R. Kiros, R. Salakhutdinov, and R. S. Zemel. “Unifying Visual-Semantic Em-
beddings with Multimodal Neural Language Models”. http://arxiv.org/abs/1411.2539
Presented at NIPS 2014 Deep Learning Workshop. 2014.
[40] J. Kittler et al. “On Combining Classifiers”. In: IEEE Transactions on Pat-
tern Analysis and Machine Intelligence 20.3 (1998), pp. 226–239.
[41] S.L. Lauritzen. “The EM algorithm for graphical association models with
missing data”. In: Computational Statistics & Data Analysis 19.2 (1995),
pp. 191–201.
[42] D. Luchinsky and et al. “Overheating Anomalies during Flight Test due
to the Base Bleeding”. In: Proc. 7th Int. Conf. on Computational Fluid
Dynamics, Hawaii July 2012. 2012.
[43] B. Mau, M.A. Newton, and B. Larget. “Bayesian phylogenetic inference via
Markov chain Monte Carlo methods”. In: Biometrics 55 (1999), pp. 1–12.
url: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.
33.8433&rep=rep1&type=pdf.
[44] W. S. McCulloch and W. Pitts. “A logical calculus of the ideas immanent
in nervous activity”. In: Bull. of Mathematical Biophysics 5 (1943), pp. 115–
133.
[45] N. Metropolis et al. “Equation of state calculations by fast computing ma-
chines”. In: The Journal of Chemical Physics 21.6 (1953), pp. 1087–1092.
[46] T. Mikolov et al. “Distributed Representations of Words and Phrases and
their Compositionality”. In: Advances in Neural Information Processing Sys-
tems 26. Ed. by C. J. C. Burges et al. 2013, pp. 3111–3119. url: http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf.
[47] A. Minnaar. Word2Vec tutorial part I: the Skip-Gram model. Online tutorial. 2015. url: http://mccormickml.com/2016/04/27/word2vec-resources/#efficient-estimation-of-word-representations-in-vector-space.
[48] T. M. Mitchell. Machine Learning. McGraw-Hill, 1997.
[49] S. R. T. Mouafo et al. “A tutorial on the EM algorithm for Bayesian net-
works: application to self-diagnosis of GPON-FTTH networks”. In: Proc.
12th International Wireless Communications & Mobile Computing Confer-
ence (IWCMC 2016). 2016, pp. 369–376. url: https://hal.archives-
ouvertes.fr/hal-01394337.
[50] K. Murphy. An introduction to graphical models. Technical Report. http://www.cs.ubc.ca/∼m
Intel, 2001.
[51] K. P. Murphy. “Dynamic Bayesian Networks: Representation, Inference and
Learning”. Univ. of California, Berkeley, 2002.
[52] R. M. Neal. Probabilistic Inference Using Markov Chain Monte Carlo Meth-
ods. Technical Report CRG-TR-93-1. Dpt. of Computer Science, University
of Toronto, 1993.
[53] R. M. Neal. Using Deterministic Maps when Sampling from Complex Distri-
butions. Presentation given at the Evolution of Deep Learning Symposium
in honor of Geoffrey Hinton. 2019. url: http://www.cs.utoronto.ca/
~radford/ftp/geoff-sym-talk.pdf.
[54] K. Obermayer, H. Ritter, and K. Schulten. “A principle for the formation
of the spatial structure of cortical feature maps”. In: Proc. of the National
Academy of Sciences of the USA 87 (1990), pp. 8345–8349.
[55] O. M. Parkhi, A. Vedaldi, and A. Zisserman. “Deep Face Recognition”. In:
Proc. of BMVC. 2015. url: http://www.robots.ox.ac.uk:5000/~vgg/
publications/2015/Parkhi15/parkhi15.pdf.
[56] R. Pascanu and H. Jaeger. “A Neurodynamical Model for Working Mem-
ory”. In: Neural Networks 24.2 (2011). DOI: 10.1016/j.neunet.2010.10.003,
pp. 199–207.
[57] J. Pearl and S. Russell. “Bayesian Networks”. In: Handbook of Brain Theory
and Neural Networks, 2nd Ed. Ed. by M.A. Arbib. MIT Press, 2003, pp. 157–
160. url: https://escholarship.org/uc/item/53n4f34m.
[58] S. E. Peters et al. “A Machine Reading System for Assembling Synthetic Pa-
leontological Databases”. In: PLOS-ONE 9.12 (2014), e113523. url: http:
//journals.plos.org/plosone/article?id=10.1371/journal.pone.
0113523.
[59] L.R. Rabiner. “A tutorial on Hidden Markov Models and Selected Appli-
cations in Speech Recognition”. In: Readings in Speech Recognition. Ed. by
A. Waibel and K.-F. Lee. Reprinted from Proceedings of the IEEE 77 (2),
257-286 (1989). Morgan Kaufmann, San Mateo, 1990, pp. 267–296.
[60] F. Rosenblatt. “The Perceptron: a probabilistic model for information stor-
age and organization in the brain”. In: Psychological Review 65.6 (1958),
pp. 386–408.
[61] S. Roweis and Z. Ghahramani. “A unifying review of linear Gaussian mod-
els”. In: Neural Computation 11.2 (1999), pp. 305–345.
[62] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. “Learning Internal
Representations by Error Propagation”. In: Parallel Distributed Processing
Vol. 1. Ed. by D. E. Rumelhart and J. L. McClelland. Also as Technical
Report, La Jolla Inst. for Cognitive Science, 1985. MIT Press, 1986, pp. 318–
362.
[63] M. A. Savi. “Nonlinear Dynamics and Chaos”. In: Dynamics of Smart Sys-
tems and Structures. Ed. by V. Lopes Junior and et al. Springer International
Publishing Switzerland, 2016, pp. 93–117.
[64] J. Schmidhuber. “Deep Learning in Neural Networks: An Overview”. In:
Neural Networks 61 (2015). Preprint: arXiv:1404.7828, pp. 85–117.
[65] C. E. Shannon. “A mathematical theory of communication”. In: The Bell
System Technical Journal 27.3 (1948), pp. 379–423.
[66] D. Silver et al. “Mastering the game of Go with deep neural networks and
tree search”. In: Nature 529 (2016), pp. 484–489.
[67] P. Smyth. “Belief Networks, Hidden Markov Models, and Markov Random
Fields: a Unifying View”. In: Pattern Recognition Letters 18.11-13 (1997),
pp. 1261–1268.
[68] F. Suchanek et al. “Advances in automated knowledge base construction”.
In: SIGMOD Records Journal March (2013). url: http://suchanek.name/
work/publications/sigmodrec2013akbc.pdf.
[69] N. V. Swindale and H.-U. Bauer. “Application of Kohonen’s self-organizing
feature map algorithm to cortical maps of orientation and direction prefer-
ence”. In: Proc. R. Soc. Lond. B 265 (1998), pp. 827–838.
[70] F. Takens. “Detecting strange attractors in turbulence”. In: Dynamical Sys-
tems and Turbulence. Ed. by D.A. Rand and L.-S. Young. Lecture Notes in
Mathematics 898. Springer-Verlag, 1981, pp. 366–381.
[71] J. Tenenbaum, T. L. Griffiths, and C. Kemp. “Theory-based Bayesian mod-
els of inductive learning and reasoning”. In: Trends in Cognitive Science 10.7
(2006), pp. 309–318.
[72] M. Weliky, W. H. Bosking, and D. Fitzpatrick. “A systematic map of direc-
tion preference in primary visual cortex”. In: Nature 379 (1996), pp. 725–
728.
[73] H. Yin. “Learning nonlinear principal manifolds by self-organising maps”.
In: Principal Manifolds for Data Visualization and Dimension Reduction.
Ed. by A. N. Gorban et al. Vol. 58. Lecture Notes in Computer Science and
Engineering. Springer, 2008, pp. 68–95.
[74] P. Young et al. “From image descriptions to visual denotations: New similar-
ity metrics for semantic inference over event descriptions.” In: Transactions
of the Association for Computational Linguistics 2 (2014), pp. 67–78.