Lecture 12: Bayesian Neural Network
Lecture 12:
✓ Bayesian Inference
✓ Source of Model Uncertainty and Ensembling
✓ Bayesian Neural Networks
$$p(y \mid u, \mathcal{D}) = \int p(\boldsymbol{w} \mid \mathcal{D})\, p(y \mid u, \boldsymbol{w})\, d\boldsymbol{w}$$
Visualize confidence intervals based on the posterior predictive mean and variance at each point.
Regularization arises naturally! (Think of dropout.)
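The predictive integral above is intractable for a real network, but it can be approximated by Monte Carlo: draw weight samples $\boldsymbol{w}_t \sim p(\boldsymbol{w} \mid \mathcal{D})$ and average the resulting predictions. A minimal PyTorch sketch, using dropout-at-test-time ("MC dropout") as the weight-sampling mechanism per the hint above; the architecture and sample count are illustrative assumptions, not from the slides:

```python
# Sketch: Monte Carlo approximation of the posterior predictive
# p(y | u, D) = ∫ p(w | D) p(y | u, w) dw.
# Keeping dropout active at test time draws a new effective weight
# sample w_t on every forward pass.
import torch
import torch.nn as nn

model = nn.Sequential(              # small regression net with dropout
    nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 1),
)

def mc_predict(model, u, n_samples=100):
    """Predictive mean and variance at inputs u via stochastic passes."""
    model.train()                   # keep dropout stochastic at test time
    with torch.no_grad():
        preds = torch.stack([model(u) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.var(dim=0)

u = torch.linspace(-3, 3, 200).unsqueeze(1)
mean, var = mc_predict(model, u)
# Confidence band at each point: mean ± 2 * sqrt(var).
```

Each stochastic forward pass corresponds to one draw of the weights, so the sample mean and variance estimate the posterior predictive mean and variance used for the confidence intervals.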
Conventional NN: parameters are represented by single, fixed values after training. Conventional approaches to training NNs can be interpreted as approximations to the Bayesian method.
Bayesian NN: parameters are represented by distributions. Introduce a prior distribution on the weights $p(\boldsymbol{w})$ and obtain the posterior $p(\boldsymbol{w} \mid \mathcal{D})$ through learning.
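As a concrete illustration of "parameters are represented by distributions", here is a hedged sketch of a linear layer whose weights are Gaussians with learnable mean and scale. The class name, initialization, and omission of the bias are illustrative choices, not from the slides; in practice the posterior would be fit with variational inference (an ELBO combining the likelihood with a KL term against the prior $p(\boldsymbol{w})$):

```python
# Sketch: each weight carries a Gaussian q(w) = N(mu, sigma^2) and a
# fresh weight matrix is sampled on every forward pass
# (reparameterization trick: w = mu + sigma * eps, eps ~ N(0, I)).
import torch
import torch.nn as nn

class BayesianLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_logstd = nn.Parameter(
            torch.full((out_features, in_features), -3.0))

    def forward(self, x):
        eps = torch.randn_like(self.w_mu)
        w = self.w_mu + self.w_logstd.exp() * eps   # sampled weights
        return x @ w.t()
```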
BNN: Maximizing the Posterior
Compute the posterior with Bayes’ Rule:
$$p(\boldsymbol{w} \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid \boldsymbol{w})\, p(\boldsymbol{w})}{p(\mathcal{D})} = \frac{p(\mathcal{D} \mid \boldsymbol{w})\, p(\boldsymbol{w})}{\int p(\mathcal{D} \mid \boldsymbol{w})\, p(\boldsymbol{w})\, d\boldsymbol{w}}$$
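Taking the log makes "maximizing the posterior" concrete (standard Bayes-rule algebra, spelled out here rather than quoted from the slides): the evidence $p(\mathcal{D})$ does not depend on $\boldsymbol{w}$, so MAP estimation reduces to

$$\hat{\boldsymbol{w}}_{\mathrm{MAP}} = \arg\max_{\boldsymbol{w}} \log p(\boldsymbol{w} \mid \mathcal{D}) = \arg\max_{\boldsymbol{w}} \big[ \log p(\mathcal{D} \mid \boldsymbol{w}) + \log p(\boldsymbol{w}) \big].$$

With a Gaussian prior $p(\boldsymbol{w}) = \mathcal{N}(\boldsymbol{0}, \sigma^2 I)$, the prior term is $-\lVert \boldsymbol{w} \rVert^2 / (2\sigma^2)$ up to a constant, i.e. the familiar L2 weight decay; this is one sense in which regularization arises naturally.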
Compare / connect generative modelling and representation learning from the distribution perspective: define the "latent space" as a "representation space".
$x \sim p_{\text{data}},\ z = f(x)$;
$z \sim p_z,\ x = g(z)$.
Observe: what structure can you associate with $p_\theta(x \mid z)$?
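A hedged sketch of the two directions on the slide (dimensions and modules are illustrative assumptions): representation learning runs $x \sim p_{\text{data}}$ through an encoder $f$ to obtain $z$, while generative modelling draws $z \sim p_z$ and decodes with $g$, the decoder playing the role of $p_\theta(x \mid z)$:

```python
# Sketch of the two mappings between data space and latent /
# representation space.
import torch
import torch.nn as nn

f = nn.Linear(784, 32)            # encoder: x -> z (representation)
g = nn.Linear(32, 784)            # decoder / generator: z -> x

x = torch.randn(8, 784)           # stand-in for a batch of x ~ p_data
z = f(x)                          # z = f(x): representation of the data

z_prior = torch.randn(8, 32)      # z ~ p_z (e.g. a standard normal)
x_gen = g(z_prior)                # x = g(z): a decoder of this kind is
                                  # what parameterizes p_theta(x | z)
```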