Lecture 7 - Introduction To Kriging
Looking at a Nonlinear Regression Problem
$$\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} 1 & 0.8 \\ 0.8 & 1 \end{bmatrix} \right), \qquad \text{Covariance: } \sigma_{12} = \rho\,\sigma_1\sigma_2$$
Gaussian Basics
$$\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} 1 & 0.8 \\ 0.8 & 1 \end{bmatrix} \right)$$

Bivariate Gaussian

$$\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \sim \mathcal{N}(\mu, \Sigma) = \mathcal{N}\!\left( \begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix},\ \begin{bmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{21} & \sigma_2^2 \end{bmatrix} \right)$$

where $\Sigma$ is the covariance matrix and the correlation coefficient is

$$\rho = \frac{\mathrm{Cov}(X_1, X_2)}{\sigma_1 \sigma_2} = \frac{\sigma_{12}}{\sigma_1 \sigma_2}$$
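A quick numerical sanity check of this setup (a sketch in NumPy, using the $\rho = 0.8$ example above): draw samples and confirm that the empirical correlation matches $\rho$.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, rho],
                  [rho, 1.0]])

# Draw many samples from the bivariate Gaussian.
samples = rng.multivariate_normal(mu, Sigma, size=100_000)

# Empirical correlation coefficient; should be close to rho = 0.8.
print(np.corrcoef(samples[:, 0], samples[:, 1])[0, 1])
```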
Gaussian Basics
$$\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} 1 & 0.8 \\ 0.8 & 1 \end{bmatrix} \right)$$
Gaussian Basics
$$\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix} \right)$$
Gaussian Basics
$$\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} 1 & 0.3 \\ 0.3 & 1 \end{bmatrix} \right)$$
Gaussian Basics
$$\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \right)$$

[Figure: scatter of samples, $X_1$ on the horizontal axis and $X_2$ on the vertical axis, both spanning $-4$ to $4$.]
Looking at a Joint Gaussian PDF in 3D

$$\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} 1 & 0.8 \\ 0.8 & 1 \end{bmatrix} \right)$$

Observation: $X_1 = x_1$
Looking at a Joint Gaussian PDF in 3D
Conditional PDF:

$$f_{X_2 \mid X_1}(x_2 \mid x_1) = \frac{f_{X_1 X_2}(x_1, x_2)}{f_{X_1}(x_1)}$$

$$\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} 1 & 0.8 \\ 0.8 & 1 \end{bmatrix} \right), \qquad \text{Observation: } X_1 = x_1$$
Looking at a Conditional Gaussian PDF
$$f_{X_2}(x_2 \mid X_1 = x_1) = \mathcal{N}\!\left( \mu_{2|1},\ \sigma_{2|1}^2 \right)$$

$$\mu_{2|1} = \mu_2 + \frac{\rho\,\sigma_1\sigma_2}{\sigma_1^2}\,(x_1 - \mu_1)$$

$$\sigma_{2|1}^2 = \sigma_2^2 - \frac{(\rho\,\sigma_1\sigma_2)^2}{\sigma_1^2}$$

Conditioning a 2D Gaussian. Group Discussion (4 min): Posterior mean $\mu_{2|1} = \,?$ Posterior std $\sigma_{2|1} = \,?$
Looking at a Conditional Gaussian PDF
$$f_{X_2}(x_2 \mid X_1 = x_1) = \mathcal{N}\!\left( \mu_{2|1},\ \sigma_{2|1}^2 \right)$$

$$\mu_{2|1} = \mu_2 + \frac{\rho\,\sigma_1\sigma_2}{\sigma_1^2}\,(x_1 - \mu_1) = 0.8$$

$$\sigma_{2|1}^2 = \sigma_2^2 - \frac{(\rho\,\sigma_1\sigma_2)^2}{\sigma_1^2} \quad\Rightarrow\quad \sigma_{2|1} = 0.6$$

Posterior mean: $\mu_{2|1} = 0.8$. Posterior std: $\sigma_{2|1} = 0.6$.
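A sketch that verifies these numbers by brute-force simulation, assuming the observed value is $x_1 = 1$ (the value consistent with $\mu_{2|1} = 0.8$ above): keep only the samples whose $X_1$ lands near $x_1$ and look at the surviving $X_2$ values.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, x1 = 0.8, 1.0  # x1 = 1 is an assumption consistent with the slide's numbers

# Closed-form posterior from the formulas above (mu1 = mu2 = 0, sigma1 = sigma2 = 1).
mu_cond = rho * x1               # 0.8
sd_cond = np.sqrt(1 - rho**2)    # 0.6

# Brute-force conditioning: sample the joint, keep X2 where X1 is close to x1.
samples = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=2_000_000)
x2_given_x1 = samples[np.abs(samples[:, 0] - x1) < 0.01, 1]

print(mu_cond, x2_given_x1.mean())  # both approximately 0.8
print(sd_cond, x2_given_x1.std())   # both approximately 0.6
```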
Looking at a Conditional Gaussian PDF
Posterior std: $\sigma_{2|1} = 0.6$. Prior std: $\sigma_2 = 1$. Conditioning on the observation shrinks the uncertainty in $X_2$.

$$\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} 1 & 0.8 \\ 0.8 & 1 \end{bmatrix} \right)$$
Looking Back at 2D Gaussian Contour
$$f_{X_2}(x_2 \mid X_1 = x_1) = \,?$$

$$\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} 1 & 0.8 \\ 0.8 & 1 \end{bmatrix} \right)$$

Conditional on observation: $X_1 = x_1$
Looking Back at 2D Gaussian Contour
$$f_{X_2}(x_2 \mid X_1 = x_1) \sim \mathcal{N}\!\left( \mu_{2|1},\ \sigma_{2|1}^2 \right), \qquad \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} 1 & 0.8 \\ 0.8 & 1 \end{bmatrix} \right)$$

$$\mu_{2|1} = \mu_2 + \frac{\rho\,\sigma_1\sigma_2}{\sigma_1^2}\,(x_1 - \mu_1) = \rho\,x_1 = 0.8$$

$$\sigma_{2|1}^2 = \sigma_2^2 - \frac{(\rho\,\sigma_1\sigma_2)^2}{\sigma_1^2} = 1 - \rho^2 = 0.36, \qquad \sigma_{2|1} = 0.6$$

Observation: $X_1 = x_1$
Looking Back at 2D Gaussian Contour
$$f_{X_2}(x_2 \mid X_1 = x_1) \sim \mathcal{N}\!\left( \mu_{2|1},\ \sigma_{2|1}^2 \right), \qquad \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \right)$$

$$\mu_{2|1} = \mu_2 + \frac{\rho\,\sigma_1\sigma_2}{\sigma_1^2}\,(x_1 - \mu_1) = \rho\,x_1 = 0$$

$$\sigma_{2|1}^2 = \sigma_2^2 - \frac{(\rho\,\sigma_1\sigma_2)^2}{\sigma_1^2} = 1 - \rho^2 = 1$$

Observation: $X_1 = x_1$. With $\rho = 0$, observing $X_1$ tells us nothing about $X_2$: the posterior equals the prior.
Marginals and Conditionals of a Multivariate Normal
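For reference, the general statements behind the 2D formulas above (standard results; see, e.g., Rasmussen & Williams, Appendix A.2). Partition a jointly Gaussian vector as

$$\begin{bmatrix} \mathbf{x}_a \\ \mathbf{x}_b \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} \boldsymbol{\mu}_a \\ \boldsymbol{\mu}_b \end{bmatrix},\ \begin{bmatrix} \Sigma_{aa} & \Sigma_{ab} \\ \Sigma_{ba} & \Sigma_{bb} \end{bmatrix} \right)$$

The marginal is read off directly: $\mathbf{x}_a \sim \mathcal{N}(\boldsymbol{\mu}_a, \Sigma_{aa})$. The conditional is again Gaussian:

$$\mathbf{x}_a \mid \mathbf{x}_b \sim \mathcal{N}\!\left( \boldsymbol{\mu}_a + \Sigma_{ab}\Sigma_{bb}^{-1}(\mathbf{x}_b - \boldsymbol{\mu}_b),\ \Sigma_{aa} - \Sigma_{ab}\Sigma_{bb}^{-1}\Sigma_{ba} \right)$$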
Looking Back at the Nonlinear Regression Problem
$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix},\ \begin{bmatrix}
1.00 & 0.12 & 0.05 & 0.00 & 0.00 & 0.00 \\
0.12 & 1.00 & 0.92 & 0.46 & 0.06 & 0.00 \\
0.05 & 0.92 & 1.00 & 0.71 & 0.15 & 0.00 \\
0.00 & 0.46 & 0.71 & 1.00 & 0.54 & 0.00 \\
0.00 & 0.06 & 0.15 & 0.54 & 1.00 & 0.07 \\
0.00 & 0.00 & 0.00 & 0.00 & 0.07 & 1.00
\end{bmatrix} \right)$$
Looking Back at the Nonlinear Regression Problem
[Figure: the six observations $(x_1, y_1), \dots, (x_6, y_6)$, with a question mark at a new input where no observation is available.]

This example uses the squared exponential kernel (a.k.a. Radial Basis Function kernel, Gaussian kernel):

$$k(x, x') = \sigma^2 \exp\!\left( -\frac{(x - x')^2}{2 l^2} \right)$$
where $\sigma^2 > 0$ is the signal variance and $l > 0$ is the length scale.

[Figure: nearby observations such as $(x_2, y_2)$ and $(x_3, y_3)$ are strongly correlated; the kernel's width, on the order of $2l$, sets how quickly that correlation decays with distance.]
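A minimal sketch of this kernel in NumPy. The inputs `x_train` below are placeholders (the lecture's actual $x_1, \dots, x_6$ are not shown), so the resulting matrix will only qualitatively resemble the one above.

```python
import numpy as np

def sq_exp_kernel(xa, xb, signal_var=1.0, length_scale=1.0):
    """Squared exponential kernel: k(x, x') = sigma^2 exp(-(x - x')^2 / (2 l^2))."""
    diff = xa[:, None] - xb[None, :]
    return signal_var * np.exp(-diff**2 / (2.0 * length_scale**2))

# Placeholder training inputs (hypothetical; chosen only for illustration).
x_train = np.array([0.0, 1.2, 1.4, 1.8, 2.5, 4.0])
K = sq_exp_kernel(x_train, x_train, signal_var=1.0, length_scale=0.5)
print(np.round(K, 2))  # nearby inputs -> entries near 1, distant inputs -> near 0
```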
Making Prediction at a Test Point

Augmenting the joint Gaussian with a test point $x_*$ appends a row of cross-covariances $k(x_*, x_i)$, here $[\,0.65\;\; 0.53\;\; 0.30\;\; 0.06\;\; 0.00\;\; 0.00\,]$, together with the prior test-point variance $k(x_*, x_*) = 1.00$.
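Putting the pieces together, a sketch of prediction at a single test point via the Gaussian conditioning formula. The matrix `K` and the row `k_star` are copied from the slides; the targets `y` are hypothetical, since the observed values are not listed.

```python
import numpy as np

# Training covariance from the slide (6 x 6).
K = np.array([
    [1.00, 0.12, 0.05, 0.00, 0.00, 0.00],
    [0.12, 1.00, 0.92, 0.46, 0.06, 0.00],
    [0.05, 0.92, 1.00, 0.71, 0.15, 0.00],
    [0.00, 0.46, 0.71, 1.00, 0.54, 0.00],
    [0.00, 0.06, 0.15, 0.54, 1.00, 0.07],
    [0.00, 0.00, 0.00, 0.00, 0.07, 1.00],
])
k_star = np.array([0.65, 0.53, 0.30, 0.06, 0.00, 0.00])  # k(x*, x_i), from the slide
k_ss = 1.00                                              # k(x*, x*)
y = np.array([0.2, 0.8, 0.9, 0.1, -0.7, -0.3])           # hypothetical observations

# Gaussian conditioning with zero prior mean:
#   mu*  = k*^T K^{-1} y,   var* = k(x*, x*) - k*^T K^{-1} k*
mu_star = k_star @ np.linalg.solve(K, y)
var_star = k_ss - k_star @ np.linalg.solve(K, k_star)
print(mu_star, np.sqrt(var_star))
```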
Making Prediction at a Second Test Point

Making Prediction at a Third Test Point

Making Predictions at Many Test Points

Making Predictions at a Finer Grid of Test Points
[Figure: posterior mean $\mu_*$ with a $\pm 3\sigma_*$ band, evaluated at the test input $x_*$.]
Generating Gaussian Samples

[Figure: inverse-transform sampling. A uniform sample $U_i$ with density $f_{U_i}(u_i)$ on $[0, 1]$ is mapped through the inverse of the Gaussian CDF $F_{X_i}(x_i)$ to produce a sample $X_i$ with density $f_{X_i}(x_i)$.]
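A sketch of the inverse-transform construction in the figure, using SciPy's Gaussian inverse CDF (`scipy.stats.norm.ppf`):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Uniform samples on [0, 1] ...
u = rng.uniform(0.0, 1.0, size=100_000)

# ... pushed through the inverse Gaussian CDF become N(0, 1) samples.
x = norm.ppf(u)
print(x.mean(), x.std())  # approximately 0 and 1
```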
Generating Multivariate Gaussian Samples
$$\mathbf{u} = \begin{bmatrix} u_{1i} \\ u_{2i} \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \right) = \mathcal{N}(\mathbf{0}, I_2)$$

Factor the target covariance with a Cholesky decomposition, $K = L L^\top$; then $\mathbf{x} = \boldsymbol{\mu} + L\mathbf{u}$ has covariance $K$.
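A sketch of the Cholesky construction: turn i.i.d. $\mathcal{N}(0, I)$ draws into correlated samples with a desired covariance $K$.

```python
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[1.0, 0.8],
              [0.8, 1.0]])

# Cholesky factor: K = L @ L.T, with L lower triangular.
L = np.linalg.cholesky(K)

# Independent standard normal draws, one column per sample.
u = rng.standard_normal((2, 100_000))

# x = L u has covariance E[L u u^T L^T] = L I L^T = K.
x = L @ u
print(np.cov(x))  # approximately K
```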
Generating Multivariate Gaussian Samples (Posterior)
[Figure: posterior samples generated the same way, lying within the band $[\mu_* - 3\sigma_*,\ \mu_* + 3\sigma_*]$.]
A Practical Implementation
Algorithm 2.1: predictions and log marginal likelihood for Gaussian process regression (Rasmussen & Williams, Gaussian Processes for Machine Learning).
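A sketch of Algorithm 2.1 in NumPy, under its usual assumptions (zero prior mean, i.i.d. Gaussian noise with variance $\sigma_n^2$); the function and argument names below are mine, not the book's.

```python
import numpy as np

def gp_predict(K, y, k_star, k_ss, sigma_n2):
    """Predictions and log marginal likelihood for GP regression
    (after Rasmussen & Williams, Algorithm 2.1).

    K:        n x n kernel matrix of the training inputs
    y:        n training targets (zero prior mean assumed)
    k_star:   n-vector of covariances between the test input and training inputs
    k_ss:     prior variance k(x*, x*) at the test input
    sigma_n2: observation noise variance
    """
    n = len(y)
    L = np.linalg.cholesky(K + sigma_n2 * np.eye(n))     # K + sigma_n^2 I = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # (K + sigma_n^2 I)^{-1} y
    mu_star = k_star @ alpha                             # predictive mean
    v = np.linalg.solve(L, k_star)
    var_star = k_ss - v @ v                              # predictive variance
    log_ml = (-0.5 * y @ alpha
              - np.sum(np.log(np.diag(L)))
              - 0.5 * n * np.log(2.0 * np.pi))           # log marginal likelihood
    return mu_star, var_star, log_ml
```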
Project 1: Kriging implementation (Due March 24)