
Deep learning as a building block in probabilistic models

Part II

Pierre-Alexandre Mattei

http://pamattei.github.io/
@pamattei

Inria, Maasai team


Université Côte d’Azur

1
Overview of talk

Recap on (supervised) generative/discriminative models

Deep discriminative models for classification

Deep discriminative models for regression

2
Supervised learning with uncertainty: general goal

The goal of this lecture is to train predictive machine learning models that can
produce uncertainty assessments.

So what do we want?

We want to be able to make probabilistic predictions, like

• “the probability that the temperature in Nice tomorrow is between 20 and 25 degrees
is 17%”,
• “the probability that this patient has this kind of cancer is 56%”.

To do that, we need to have a probabilistic model of our data, hence the need for
generative models, which can be either fully generative or discriminative.

3
What’s a generative model?

Let’s start with some data D. For example, in the regression case with p-dimensional
continuous features,
D = ((x1 , y1 ), ..., (xn , yn )) ∈ (Rp × R)n .
In the binary classification case,

D = ((x1 , y1 ), ..., (xn , yn )) ∈ (Rp × {0, 1})n .

In the unsupervised case, the data usually looks like D = (x1 , ..., xn ) ∈ (Rp )n .

We call (x1, ..., xn) the features and (y1, ..., yn) the labels. The features are usually stored
in an n × p matrix called the design matrix.

A generative model “describes a process that is assumed to give rise to some data”.
David MacKay, in his book Information Theory, Inference, and Learning Algorithms (2003).

Formally, a generative model will just be a probability density p(D).


4
Generative models for supervised learning: General
assumptions
Although we’ll mostly focus on the unsupervised case in my lectures, let us begin with
the (arguably simpler) supervised case D = ((x1, y1), ..., (xn, yn)). It could be either a
regression or a classification task, for example.

Most of the time, it makes sense to build generative models that assume that the
observations are independent. This leads to

p(D) = p((x1, y1), ..., (xn, yn)) = ∏_{i=1}^n p(xi, yi).

Usually, we also further assume that the data are identically distributed. This means
that all the (xi, yi) will follow the same distribution, which we may denote p(x, y).

When these two assumptions are met, we say that the data are independent and
identically distributed (i.i.d.). This is super useful in practice because, rather than
having to find a distribution p((x1, y1), ..., (xn, yn)) over a very large space (whose
dimension grows linearly with n), we’ll just have to find a much lower-dimensional
distribution p(x, y).
5
Generative models for supervised learning: Do we really have
to be fully generative?

Using the product rule, we may rewrite our p(x, y) as

p(x, y) = p(x)p(y|x) = p(y)p(x|y).

But if we mainly want to do (probabilistic) predictions, knowing p(y|x) is enough. It’s exactly
this conditional distribution that will give us statements like “the probability that this
patient x has this kind of cancer is 56%”.

Based on these insights, there are two main approaches for building p(x, y):
• The fully generative (or model-based) approach posits a joint distribution p(x, y)
(often by specifying both p(y) and p(x|y)).
• The discriminative (or conditional) approach just specifies p(y|x) and completely
ignores p(x).

What do you think are the benefits of the two approaches?

6
Generative models for supervised learning: Discriminative vs
fully generative

A few examples of the two approaches:


• Discriminative: linear and logistic regression, Neural nets for
regression/classification, Gaussian process regression/classification
• Generative: Linear/quadratic discriminant analysis, Mixture discriminant analysis,
Supervised variational autoencoders, most of the models Charles Bouveyron will talk
about in his course1

Some of the advantages/drawbacks:


• Discriminative: much easier to design (and usually train) because we don’t have to
model p(x). Usually more accurate when we have a lot of data. Cannot easily
accommodate missing features or do semi-supervised learning (missing labels).
• Generative: Can deal with missing features/labels. Usually more accurate when we
do not have a lot of data. Usually more robust to adversarial examples. Requires
specifying p(x), which is often hard because x may be high-dimensional/complex.

1 cf. his book with G. Celeux, B. Murphy and A. Raftery.

7


Generative vs Discriminative: a concrete example

Article from decanter.com/wine-news/police-uncover-italian-wine-fraud-88060/


8
Generative vs Discriminative: a concrete example

One of the wines the bad guys counterfeited was from the Barolo region. According to
Wikipedia, those wines have "pronounced tannins and acidity", and "moderate to high
alcohol levels (Minimum 13%)". This would help a trained human recognise them, but
could we train an algorithm to learn those characteristics?

Picture from Wikipedia

9
Generative vs Discriminative: a concrete example

[Scatter plot of the wine data: acidity vs. alcohol, points coloured by class (Barolo / Other).]

Data from Forina, Armanino, Castino, and Ubigli (Vitis, 1986).


10
Generative vs Discriminative: a concrete example
The generative way would use the formula p(x, y) = p(y)p(x|y) and model the
class-conditional distributions p(x|y) using a continuous bivariate distribution (e.g. 2D Gaussians).
Here is what we obtain using the R package Mclust (Scrucca, Fop, Murphy, and Raftery,
R Journal, 2016).
[Figure: fitted class-conditional Gaussian contours over the wine data, fixed acidity vs. alcohol.]
13
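To make the generative recipe concrete, here is a minimal Python sketch (not from the slides) that fits one Gaussian per class and turns the fit into p(y|x) via Bayes’ rule. The data below are synthetic placeholders rather than the actual Forina et al. measurements, and a single Gaussian per class is the simplest choice; Mclust can use richer mixtures per class, but the Bayes-rule step is the same.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Toy stand-in for the wine data: columns are (alcohol, acidity),
# y is 1 for Barolo and 0 for the other wines. Purely illustrative numbers.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([13.5, 80.0], [0.5, 10.0], size=(50, 2)),
               rng.normal([12.0, 95.0], [0.7, 15.0], size=(50, 2))])
y = np.array([1] * 50 + [0] * 50)

# Generative modelling: estimate p(y) and one Gaussian p(x|y) per class.
priors, cond = {}, {}
for k in (0, 1):
    Xk = X[y == k]
    priors[k] = Xk.shape[0] / X.shape[0]
    cond[k] = multivariate_normal(mean=Xk.mean(axis=0), cov=np.cov(Xk.T))

def posterior_barolo(x):
    """p(y=1 | x) via Bayes' rule: p(y)p(x|y) / sum_k p(y=k)p(x|y=k)."""
    joint = {k: priors[k] * cond[k].pdf(x) for k in (0, 1)}
    return joint[1] / (joint[0] + joint[1])

print(posterior_barolo([13.4, 82.0]))  # probability that this wine is a Barolo
```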
Generative vs Discriminative: a concrete example
The discriminative way would only model p(y|x). Since there are only 2 classes, this
means that p(y|x) will be a Bernoulli distribution whose parameter π(x) ∈ [0, 1] is
a function of the features.

Key idea
Since we have an unknown function and neural nets are good function approximators,
we should model π(x) using a neural net!

[Scatter plot of the wine data: acidity vs. alcohol, points coloured by class (Barolo / Other).]
14
Overview of talk

Recap on (supervised) generative/discriminative models

Deep discriminative models for classification

Deep discriminative models for regression

15
How to create a deep discriminative model?

We’ll focus now on the discriminative approach using neural nets, because it is
simpler. For more on the differences and links between the generative and discriminative
schools, a wonderful reference is Tom Minka’s short note on the subject: Discriminative
models, not discriminative training 2 .

Let us go back to our discriminative model: we can write it

p(y|x) = B(y|π(x)) = π(x)^y (1 − π(x))^(1−y),

where B(·|θ) denotes the density of a Bernoulli distribution with parameter θ ∈ [0, 1]. The
key idea is then to model the function x ↦ π(x) using a neural net.

This key idea goes way beyond the discriminative context


This general strategy of using outputs of neural nets as parameters of simple
probability distributions is the main recipe for building deep generative models. It
has been used extensively, for example in deep latent variable models such as variational
autoencoders (VAEs) or generative adversarial networks (GANs).

2 https://tminka.github.io/papers/minka-discriminative.pdf
16
How to model π
Our discriminative model for binary classification is

p(y|x) = B(y|π(x)) = π(x)^y (1 − π(x))^(1−y),

and we wish to model π using a neural net. But what kind of neural net?

The only really important constraint of the problem is that we need to have

∀x ∈ Rp , π(x) ∈ [0, 1].

Is it possible to enforce that easily in a neural net?

Yes! By using, as the output layer, a function that only outputs values in [0, 1]. For example the
logistic sigmoid function σ : a ↦ 1/(1 + exp(−a)).
[Plot of the logistic sigmoid σ(x) over x ∈ [−10, 10]: values increase monotonically from 0 to 1.]

17
How to model π

So at the end, we’ll model π using the formula

π(x) = σ(fθ (x)),

where σ is the sigmoid function and fθ : Rp → R is any neural network (whose weights
are stored in a vector θ) that takes the features as input and returns an unconstrained real
number.

We have a lot of flexibility to choose fθ . In particular, if the features x1 , ..., xn are images,
we could use a CNN. In the case of time-series, we could use a recurrent neural net. In
the case of sets, we could use a deepsets architecture (Zaheer et al., NeurIPS 2017).

For the wine example, we could just take a small MLP

fθ(x) = W1 tanh(W0 x + b0) + b1.

Since the function π and the model p(y|x) now depend on some parameters θ, we’ll
denote them by πθ and pθ (y|x) from now on.

18
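As an illustration, here is a minimal PyTorch sketch of this construction for the two-feature wine example. The hidden width and the class name WineClassifier are arbitrary choices made for the example, not part of the original slides.

```python
import torch
import torch.nn as nn

class WineClassifier(nn.Module):
    """pi_theta(x) = sigmoid(f_theta(x)), with f_theta a small MLP as on the slide."""
    def __init__(self, in_features=2, hidden=16):
        super().__init__()
        # f_theta(x) = W1 tanh(W0 x + b0) + b1, an unconstrained real number
        self.f = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.f(x)).squeeze(-1)  # pi_theta(x) in (0, 1)

model = WineClassifier()
x = torch.tensor([[13.4, 82.0]])   # (alcohol, acidity) of one wine, made-up values
print(model(x))                    # estimated probability of being a Barolo
```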
How to find θ

There are many ways to find good parameter values for a generative model. One could
use Bayesian inference, score matching, the method of moments, adversarial training...
Let us focus on one of the most traditional ways: maximum likelihood. The idea is to find
a θ̂ that maximises the log-likelihood function log pθ (D).

In the discriminative case, the likelihood is:

log pθ(D) = ∑_{i=1}^n log pθ(yi, xi) = ∑_{i=1}^n log pθ(yi|xi) + ∑_{i=1}^n log p(xi),

but, since we don’t model p(x), the term ∑_{i=1}^n log p(xi) is constant, and maximising
log pθ(D) is equivalent to maximising

ℓ(θ) = ∑_{i=1}^n log pθ(yi|xi).

We’ll also call ℓ(θ) the likelihood (in fact, we’ll call any function that is equal to log pθ(D) up
to a constant the likelihood).

19
How to find θ: from ML to XENT

We have
ℓ(θ) = ∑_{i=1}^n log pθ(yi|xi) = ∑_{i=1}^n log[πθ(xi)^yi (1 − πθ(xi))^(1−yi)],

which leads to

ℓ(θ) = ∑_{i=1}^n [yi log πθ(xi) + (1 − yi) log(1 − πθ(xi))].

We will want to maximise this function, which is equivalent to minimising its opposite,
which is called the cross-entropy loss.
The cross-entropy loss is the most commonly used loss for neural networks, and is a way
of doing maximum likelihood without necessarily saying it.

20
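A small PyTorch sketch of this equivalence (toy data, illustrative optimiser settings): nn.BCEWithLogitsLoss applies the sigmoid to fθ(xi) and then computes the average binary cross-entropy, which is exactly −ℓ(θ)/n, so minimising it is maximum likelihood.

```python
import torch
import torch.nn as nn

# Hypothetical toy batch: two features per wine, binary labels (1 = Barolo).
x = torch.tensor([[13.4, 82.0], [12.1, 95.0], [13.8, 78.0]])
y = torch.tensor([1.0, 0.0, 1.0])

f_theta = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))  # f_theta: R^2 -> R
optimizer = torch.optim.Adam(f_theta.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()   # sigmoid + binary cross-entropy, computed stably

for step in range(200):
    optimizer.zero_grad()
    logits = f_theta(x).squeeze(-1)   # unconstrained scores f_theta(x_i)
    # bce(logits, y) = -(1/n) sum_i [y_i log pi_i + (1 - y_i) log(1 - pi_i)],
    # with pi_i = sigmoid(logits_i): the negative average log-likelihood.
    loss = bce(logits, y)
    loss.backward()
    optimizer.step()
```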
Multiclass classification

If we have a multiclass problem (with K classes), we need to replace the Bernoulli


distribution with a categorical distribution3 . The model becomes

p(y|x) = Cat(y|π(x)).

The output of the neural net π(x) is no longer a single probability but a vector of
proportions of dimension K:

π(x) = (π(x)1 , ..., π(x)K ).

Of course, the proportions must be in [0, 1] and sum to one:


∑_{k=1}^K π(x)k = 1.

How can we enforce that the outputs of our neural net sum to one?

3 Bishop (see e.g. Section 2.2) calls this a multinomial distribution.

21


Multiclass classification with the softmax

A simple way to make sure that the outputs of a neural net are indeed in [0, 1] and sum to
one is to use a softmax as a last layer.

The softmax function (aka normalised exponential) is defined as

softmax(a1, ..., aK) = ( exp(a1) / ∑_{j=1}^K exp(aj), ..., exp(aK) / ∑_{j=1}^K exp(aj) ).

So at the end our model will look like

p(y|x) = Cat(y|πθ (x)),

with
πθ(x) = softmax(fθ(x)).
The function fθ can be modelled by any kind of neural network with input space Rp (the
data space) and output space RK . The unknown weights of the network are denoted by θ.

22
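For reference, here is a tiny NumPy implementation of this formula (a sketch, not from the slides). Subtracting the maximum before exponentiating is a standard numerical-stability trick; it leaves the result unchanged because the softmax is invariant to adding a constant to every entry.

```python
import numpy as np

def softmax(a):
    """Normalised exponential: maps a vector of R^K to a probability vector."""
    a = np.asarray(a, dtype=float)
    e = np.exp(a - a.max())   # shift by the max for numerical stability
    return e / e.sum()

p = softmax([2.0, 1.0, -1.0])
print(p, p.sum())  # entries in [0, 1], summing to one
```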
How to find θ: from ML to XENT (continued)

In the multiclass setting, the goal is again to maximise the likelihood:


ℓ(θ) = ∑_{i=1}^n log pθ(yi|xi) = ∑_{i=1}^n log Cat(yi|πθ(xi)).

One convenient way of writing the categorical density with parameter π is

Cat(y|π) = π1^y1 · · · πK^yK,

where y is a one-hot encoding of the label. This can be used to rewrite the likelihood:
ℓ(θ) = ∑_{i=1}^n ∑_{k=1}^K yik log πθ(xi)k.

The opposite of this quantity is often called the cross-entropy loss. So minimising the
cross-entropy is equivalent to maximising the likelihood of a discriminative model.

23
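A short PyTorch sketch of this equivalence, with a hypothetical K-class network net: nn.CrossEntropyLoss takes the unnormalised scores fθ(xi) and applies the (log-)softmax internally, so it computes exactly minus the average categorical log-likelihood above.

```python
import torch
import torch.nn as nn

K = 3                                     # number of classes (illustrative)
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, K))  # f_theta: R^2 -> R^K

x = torch.randn(5, 2)                     # made-up batch of 5 feature vectors
y = torch.tensor([0, 2, 1, 1, 0])         # integer class labels (not one-hot)

logits = net(x)                           # unnormalised scores f_theta(x_i)
loss = nn.CrossEntropyLoss()(logits, y)   # = -(1/n) sum_i log Cat(y_i | softmax(f_theta(x_i)))

# The same quantity computed by hand from the categorical log-likelihood:
log_pi = torch.log_softmax(logits, dim=1)
manual = -log_pi[torch.arange(5), y].mean()
print(loss.item(), manual.item())         # the two values coincide
```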
Overview of talk

Recap on (supervised) generative/discriminative models

Deep discriminative models for classification

Deep discriminative models for regression

24
What is regression again?

In regression, the goal is to predict a continuous target y ∈ R using some features x ∈ Rp .


As before, the discriminative approach is to model only p(y|x). This is no longer a discrete
but a continuous distribution. What distribution could we choose?

Often, we simply use a Gaussian! Indeed, the most famous regression model is the
Gaussian linear regression:

pβ,σ(y|x) = N(y | µ + x^T β, σ^2),

which can be rewritten in a perhaps more familiar vector form

y = Xβ + µ 1_n + ε,

with ε ∼ N(0, σ^2 I_n).

How do we "make this model deep"?

By replacing the simple linear function x ↦ µ + x^T β by a neural network µθ(x).

25
Deep regression

The simplest deep regression model is

pθ(y|x) = N(y | µθ(x), σ^2),


which could also be written
y = (µθ(xi))_{i=1}^n + ε,
with ε ∼ N(0, σ^2 I_n).
What are the parameters to learn?

We have to learn θ and σ. Do you see a way to generalise this model further by adding
another deep learning touch?

Rather than assuming that σ is constant, we could model it using a neural net σθ(x)!
Why on earth would that be a good idea?

27
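As a sketch (made-up data and hyperparameters, not from the slides), training this model by maximum likelihood amounts to minimising the Gaussian negative log-likelihood in θ and σ jointly; note that for a fixed σ this reduces to ordinary mean-squared-error training of µθ.

```python
import torch
import torch.nn as nn

# Hypothetical 1D regression data with a constant noise level.
x = torch.linspace(-3, 3, 200).unsqueeze(1)
y = torch.sin(x) + 0.3 * torch.randn_like(x)

mu_net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))   # mu_theta
log_sigma = torch.zeros(1, requires_grad=True)   # learn sigma through its log (keeps sigma > 0)

optimizer = torch.optim.Adam(list(mu_net.parameters()) + [log_sigma], lr=1e-2)
for step in range(500):
    optimizer.zero_grad()
    mu, sigma = mu_net(x), log_sigma.exp()
    # Negative Gaussian log-likelihood, up to the additive constant 0.5 * log(2 * pi):
    nll = (0.5 * ((y - mu) / sigma) ** 2 + torch.log(sigma)).mean()
    nll.backward()
    optimizer.step()
```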
Deep heteroscedastic regression

A deep heteroscedastic regression model is

pθ(y|x) = N(y | µθ(x), σθ(x)^2),

and its key strength is that it allows us to model non-constant uncertainty about the value of
the target.

Figure from https://en.wikipedia.org/wiki/Heteroscedasticity.

28
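A possible PyTorch sketch of such a model (the shared trunk and layer sizes are arbitrary choices for the example): the network outputs both µθ(x) and σθ(x), and we minimise the Gaussian negative log-likelihood, which now weighs each squared error by the predicted local noise level. Recent PyTorch versions also ship nn.GaussianNLLLoss, which implements essentially the same objective parameterised by the variance.

```python
import torch
import torch.nn as nn

class HeteroscedasticRegressor(nn.Module):
    """Outputs mu_theta(x) and sigma_theta(x) for p_theta(y|x) = N(y | mu, sigma^2)."""
    def __init__(self, in_features=1, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_features, hidden), nn.Tanh())
        self.mu_head = nn.Linear(hidden, 1)
        self.log_sigma_head = nn.Linear(hidden, 1)   # unconstrained; exp makes it positive

    def forward(self, x):
        h = self.trunk(x)
        return self.mu_head(h), self.log_sigma_head(h).exp()

def gaussian_nll(y, mu, sigma):
    # Negative log N(y | mu, sigma^2), dropping the constant 0.5 * log(2 * pi)
    return (0.5 * ((y - mu) / sigma) ** 2 + torch.log(sigma)).mean()

model = HeteroscedasticRegressor()
x = torch.linspace(-3, 3, 200).unsqueeze(1)
y = torch.sin(x) + (0.1 + 0.2 * x.abs()) * torch.randn_like(x)   # noise grows with |x|
mu, sigma = model(x)
loss = gaussian_nll(y, mu, sigma)   # minimise this with any optimiser to train the model
```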
