

In [1]: from traitlets.config.manager import BaseJSONConfigManager

        #path = "/home/damian/miniconda3/envs/rise_latest/etc/jupyter/nbconfig
        cm = BaseJSONConfigManager()
        cm.update('livereveal', {
            'theme': 'sky',
            'transition': 'zoom',
            'start_slideshow_at': 'selected',
            'scroll': True
        })

Out[1]: {'theme': 'sky',
         'transition': 'zoom',
         'start_slideshow_at': 'selected',
         'scroll': True}

Lecture 1: Introduction to Uncertainty Quantification

Today
What is UQ?
polynomial approximation
parameter estimation

Uncertainty quantification

Numerical simulations rely on models of the real world, and these models carry uncertainties:

in the input variables: Y = F(X + ε)
in the output variables: Y = F(X) + ε
the models themselves are approximate

Forward and inverse problems


UQ splits into two major branches: forward and inverse problems.

Forward problems
In the forward propagation of uncertainty, we have a known model F for a system of interest. We model its input X as a random variable and wish to understand the output random variable

Y = F(X)

(also denoted Y | X, read "Y given X").

This is also related to sensitivity analysis (how random variations in X influence the variation of Y).
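As an illustration (not part of the lecture code), a minimal Monte-Carlo sketch of forward propagation: sample the uncertain input, push the samples through a model F, and look at the statistics of the output. The model F and the input distribution below are made-up assumptions for the example.

import numpy as np

def F(x):                        # hypothetical forward model
    return np.sin(x) + 0.1*x**2

X = np.random.randn(int(1e5))    # samples of the uncertain input, X ~ N(0, 1)
Y = F(X)                         # forward propagation through the model
print(Y.mean(), Y.std()**2)      # mean and variance of the output Y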


Inverse problems
In inverse problems, F is a forward model and Y is observed data, and we want to find the input X such that F(X) = Y, i.e. we want X | Y instead of Y | X.

Inverse problems are typically ill-posed in the usual sense, so we need expert knowledge (a prior) about what a good solution X might be.

The Bayesian perspective becomes the method of choice, but this requires the representation of high-dimensional distributions:

$$p(X \mid Y) = \frac{p(Y \mid X)\, p(X)}{p(Y)}.$$
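A minimal sketch of this (not from the lecture): for a scalar input with an assumed forward model, Gaussian observation noise, and a standard normal prior, the posterior can be evaluated on a grid directly from Bayes' formula. All names and numbers below are illustrative assumptions.

import numpy as np

def F(x):                                   # hypothetical forward model
    return x**2

y_obs, sigma = 2.0, 0.1                     # assumed observation and noise level
xs = np.linspace(-3.0, 3.0, 1001)           # grid over the input X

prior = np.exp(-xs**2/2)                                   # N(0, 1) prior, up to a constant
likelihood = np.exp(-(y_obs - F(xs))**2/(2*sigma**2))      # p(Y | X) for Gaussian noise
posterior = prior*likelihood
posterior /= np.trapz(posterior, xs)                       # normalized p(X | Y) on the grid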

Approximation of multivariate functions


If we want to do efficient UQ (not only Monte-Carlo), we need efficient tools for the approximation of multivariate functions.

Consider orthogonal polynomials {p_n}:

$$\langle p_n, p_m \rangle = \int_a^b p_n(x)\, p_m(x)\, w(x)\, dx = \delta_{nm} h_n.$$

Chebyshev polynomials of the first kind: $(a, b) = (-1, 1)$, $w(x) = (1 - x^2)^{-1/2}$.

Hermite polynomials (mathematical, or probabilists', version): $(a, b) = (-\infty, +\infty)$,

$$w(x) = \frac{1}{\sqrt{2\pi}} \exp(-x^2/2).$$
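As a quick numerical sanity check (not part of the lecture code): with the substitution x = cos θ the Chebyshev weight cancels, so $\langle T_n, T_m \rangle = \int_0^\pi \cos(n\theta)\cos(m\theta)\, d\theta$, and the inner products can be verified with a simple quadrature.

import numpy as np

theta = np.linspace(0.0, np.pi, 100001)
inner = lambda n, m: np.trapz(np.cos(n*theta)*np.cos(m*theta), theta)

print(inner(2, 3))   # ~0      : orthogonality (delta_{nm} = 0 for n != m)
print(inner(2, 2))   # ~pi/2   : the normalization constant h_2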


In [2]: import numpy as np
        import matplotlib.pyplot as plt
        from numpy.polynomial import Chebyshev as T
        from numpy.polynomial.hermite import hermval
        %matplotlib inline

        def p_cheb(x, n):
            """
            RETURNS T_n(x),
            the value of the non-normalized Chebyshev polynomial:
            $\int \frac1{\sqrt{1-x^2}}T_m(x)T_n(x) dx = \frac\pi2\delta_{nm}$
            """
            return T.basis(n)(x)

        def p_herm(x, n):
            """
            RETURNS H_n(x),
            the value of the non-normalized probabilists' Hermite polynomial
            """
            cf = np.zeros(n+1)
            cf[n] = 1
            return (2**(-float(n)*0.5))*hermval(x/np.sqrt(2.0), cf)

        def system_mat(pnts, maxn, poly):
            """
            RETURNS the system (collocation) matrix A[k, i] = poly(pnts[k], i)
            """
            A = np.empty((pnts.size, maxn), dtype=float)
            for i in range(maxn):
                A[:, i] = poly(pnts, i)
            return A


In [3]: x = np.linspace(-1, 1, 1000)

        data = []
        for i in range(5):
            data.append(x)
            data.append(p_cheb(x, i))

        plt.plot(*data)
        plt.legend(["power = {}".format(i) for i in range(len(data)//2)]);

In [4]: def complex_func(x):
            return np.sin(2.0*x*np.pi)*np.cos(0.75*(x+0.3)*np.pi)

        plt.plot(x, complex_func(x));

Now, let's approximate the function with polynomials, taking different maximal powers n and the corresponding number of node points:

$$f(x) \approx \hat f(x) = \sum_{i=0}^{n} \alpha_i p_i(x).$$

The coefficients α are found by collocation: build the matrix A with entries A[k, i] = p_i(x_k) at the chosen nodes x_k and solve the linear system A α = f(nodes), which is what system_mat together with np.linalg.solve (or np.linalg.lstsq, when the numbers of nodes and coefficients differ) does below.


In [5]: n = 6
        M = n
        nodes = np.linspace(-1, 1, M)
        RH = complex_func(nodes)
        A = system_mat(nodes, n, p_cheb)
        if n == M:
            alpha = np.linalg.solve(A, RH)
        else:
            alpha = np.linalg.lstsq(A, RH)[0]
        print("α = {}".format(alpha))

α = [ 0.14310061 -0.43250604 0.20112419 -0.25853987 -0.3442248 0.691

In [7]: def calc_approximant(poly, alpha, x):
            """
            RETURNS values of the approximant at points x
            """
            n = len(alpha)
            y = np.zeros_like(x)
            for i in range(n):
                y[...] += poly(x, i)*alpha[i]
            return y

In [8]: y = complex_func(x)
        approx_y = calc_approximant(p_cheb, alpha, x)
        plt.plot(x, y, x, approx_y, nodes, RH, 'ro');

Approximate value of the error:

$$\varepsilon = \| f - \hat f \|_{\infty} \approx \max_{x \in [-1,\, 1]} \left| f(x) - \hat f(x) \right|$$

In [9]: epsilon = np.linalg.norm(y - approx_y, np.inf)

        print("ε = {}".format(epsilon))

ε = 1.2986238408482182

If we take another set of polynomials, the result of the approximation will be the same (the coefficients α will, of course, be different).

In [10]: A = system_mat(nodes, n, p_herm)

         if n == M:
             alpha = np.linalg.solve(A, RH)
         else:
             alpha = np.linalg.lstsq(A, RH)[0]
         print("α = {}".format(alpha))

         approx_y = calc_approximant(p_herm, alpha, x)

         plt.plot(x, y, x, approx_y, nodes, RH, 'ro')

         epsilon = np.linalg.norm(y - approx_y, np.inf)

         print("ε = {}".format(epsilon))

α = [ -5.50759675 125.08412934 -13.36674349  95.71226857  -2.75379837
      11.05673463]
ε = 1.2986238408482218

Now, what will change if we take another set of node points?


In [11]: nodes = np.cos((2.0*np.arange(M) + 1)/M*0.5*np.pi)

         RH = complex_func(nodes)
         A = system_mat(nodes, n, p_herm)
         alpha = np.linalg.solve(A, RH)
         print("α = {}".format(alpha))

         approx_y = calc_approximant(p_herm, alpha, x)

         plt.plot(x, y, x, approx_y, nodes, RH, 'ro')

         epsilon_cheb = np.linalg.norm(y - approx_y, np.inf)

         print("ε_cheb = {}".format(epsilon_cheb))

α = [ -8.17756076  69.1215116  -19.61829875  52.92933361  -4.03369459
       6.1756528 ]
ε_cheb = 0.8212801266161324


In [12]: # All in one. We can play with the maximum polynomial power.

         def plot_approx(f, n, distrib='unif', poly='cheb'):
             def make_nodes(n, distrib='unif'):
                 return {'unif' : lambda : np.linspace(-1, 1, n),
                         'cheb' : lambda : np.cos((2.0*np.arange(n) + 1.0)/n*0.5*np.pi)
                        }[distrib[:4].lower()]

             poly_f = {'cheb' : p_cheb, 'herm' : p_herm}[poly[:4].lower()]

             # solve
             nodes = make_nodes(n, distrib)()
             RH = f(nodes)
             A = system_mat(nodes, n, poly_f)
             alpha = np.linalg.solve(A, RH)

             # calc values
             x = np.linspace(-1, 1, 2**10)
             y = f(x)
             approx_y = calc_approximant(poly_f, alpha, x)

             # plot
             plt.figure(figsize=(14, 6.5))
             plt.plot(x, y, x, approx_y, nodes, RH, 'ro')
             plt.show()

             # calc error
             epsilon_cheb = np.linalg.norm(y - approx_y, np.inf)
             print("ε = {}".format(epsilon_cheb))

         from ipywidgets import interact, fixed, widgets

In [13]: # note: only the first four characters of the option labels are used by plot_approx
         interact(plot_approx,
                  f=fixed(complex_func),
                  n=widgets.IntSlider(min=1, max=15, step=1, value=4, continuous_update=False),
                  distrib=widgets.ToggleButtons(options=['Uniform', 'Chebyshev roots']),
                  poly=widgets.ToggleButtons(options=['Chebyshev polynomials', 'Hermite polynomials']));

Random input

Let the input x be random with a known probability density function ρ.

We want to know the statistical properties of the output:

mean value
variance
risk estimation

How can we find them using the polynomial expansion?

Assume the function f is analytic:


$$f(x) = \sum_{i=0}^{\infty} \alpha_i p_i(x).$$

The mean value of f is

$$\mathbb{E} f = \int_a^b f(\tau)\, \rho(\tau)\, d\tau = \sum_{i=0}^{\infty} \alpha_i \int_a^b p_i(\tau)\, \rho(\tau)\, d\tau.$$

If the set of orthogonal polynomials {p_n} has the same weight function as ρ, and the first polynomial is constant, p_0(x) = 1, then 𝔼f = α_0 h_0. Usually h_0 = 1, and we get the simple relation

$$\mathbb{E} f = \alpha_0.$$

The variance is equal to

$$\operatorname{Var} f = \mathbb{E}(f - \mathbb{E} f)^2 = \int_a^b \Bigl( \sum_{i=1}^{\infty} \alpha_i p_i(\tau) \Bigr)^2 \rho(\tau)\, d\tau,$$

note that the summation begins with 1. Assume we can interchange the sum and the integral; then

$$\operatorname{Var} f = \sum_{i=1}^{\infty} \sum_{j=1}^{\infty} \alpha_i \alpha_j \int_a^b p_i(\tau)\, p_j(\tau)\, \rho(\tau)\, d\tau = \sum_{i=1}^{\infty} \alpha_i^2 h_i.$$

The formula is very simple if all the coefficients {h_i} are equal to 1:

$$\operatorname{Var} f = \sum_{i=1}^{\infty} \alpha_i^2.$$
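For example (a quick check, not from the notebook): with the probabilists' Hermite polynomials and $x \sim \mathcal{N}(0, 1)$ we have $h_n = n!$, and $f(x) = x^2$ expands as $f = \mathrm{He}_0(x) + \mathrm{He}_2(x)$, i.e. $\alpha_0 = \alpha_2 = 1$. The formulas then give $\mathbb{E} f = \alpha_0 h_0 = 1$ and $\operatorname{Var} f = \alpha_2^2 h_2 = 2$, which indeed are the mean and variance of $x^2$ for a standard normal $x$.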

Let us check the formulas for the mean and variance by calculating them using the Monte-
Carlo method.

Normal distribution

First, consider the case of a normal distribution of the input, x ∼ 𝒩(0, 1),

$$\rho(x) = \frac{1}{\sqrt{2\pi}} \exp(-x^2/2),$$

so we take Hermite polynomials.

In [14]: # Scale the function a little
         scale = 5.0

         big_x = np.random.randn(int(1e6))
         big_y = complex_func(big_x/scale)
         mean = np.mean(big_y)
         var = np.std(big_y)**2
         print("mean = {}, variance = {}".format(mean, var))

mean = -0.16581516100035978, variance = 0.23119787259823302


In [16]: def p_herm_snorm(n):
             """
             Square norm of the "math" (probabilists') Hermite polynomial
             (w = exp(-x^2/2)/sqrt(2*pi)): <H_n, H_n> = n!
             """
             return np.math.factorial(n)

         n = 15
         M = n

         nodes = np.linspace(-scale, scale, M)

         RH = complex_func(nodes/scale)
         A = system_mat(nodes, n, p_herm)

         if n == M:
             alpha = np.linalg.solve(A, RH)
         else:
             W = np.diag(np.exp(-nodes**2*0.5))
             alpha = np.linalg.lstsq(W.dot(A), W.dot(RH))[0]

         h = np.array([p_herm_snorm(i) for i in range(len(alpha))])

         var = np.sum(alpha[1:]**2*h[1:])

         print("mean = {}, variance = {}".format(alpha[0]*h[0], var))

mean = -0.16556240297459568, variance = 0.2313061711624609

Note that the precise values are

$$\mathbb{E} f = -0.16556230699\ldots, \qquad \operatorname{Var} f = 0.23130350880\ldots,$$

so the method based on the polynomial expansion is more accurate than Monte-Carlo (the Monte-Carlo error decays only like $N^{-1/2}$ in the number of samples, while for a smooth function the polynomial expansion converges much faster).

In [17]: ex = 2
         x = np.linspace(-scale - ex, scale + ex, 10000)
         y = complex_func(x/scale)
         approx_y = calc_approximant(p_herm, alpha, x)
         plt.plot(x, y, x, approx_y, nodes, RH, 'ro');


Uniform distribution

In the case of a uniform distribution we use Legendre polynomials:

$$\int_{-1}^{1} p_n(x)\, p_m(x)\, dx = \delta_{nm} \frac{2}{2n + 1}.$$

In [18]: from numpy.polynomial import Legendre

         def p_legendre(x, n, interval=(-1.0, 1.0)):
             """
             Non-normalized Legendre polynomial, mapped to the given interval.
             """
             xn = (interval[0] + interval[1] - 2.0*x)/(interval[0] - interval[1])
             return Legendre.basis(n)(xn)

         def p_legendre_snorm(n, interval=(-1.0, 1.0)):
             """
             RETURNS E[L_n L_n] = (b - a)/(2n + 1)
             """
             return (interval[1] - interval[0])/(2.0*n + 1.0)

In [20]: n = 15
         M = n

         nodes = np.linspace(-1, 1, M)
         RH = complex_func(nodes)
         A = system_mat(nodes, n, p_legendre)

         if n == M:
             alpha = np.linalg.solve(A, RH)
         else:
             alpha = np.linalg.lstsq(A, RH)[0]

         h = np.array([p_legendre_snorm(i) for i in range(len(alpha))])

         var = np.sum(alpha[1:]**2*h[1:])

         print("mean = {}, variance = {}".format(alpha[0]*h[0], var))

mean = 0.17164179416392844, variance = 0.4656157485565891

The precise values are

$$\mathbb{E} f = 0.1700970689187\ldots, \qquad \operatorname{Var} f = 0.4806857166498\ldots$$
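As a quick cross-check (not part of the notebook, and assuming the convention used above, where ρ coincides with the Legendre weight w(x) = 1): a dense trapezoidal quadrature of $\int_{-1}^{1} f(x)\, dx$ reproduces the quoted mean.

import numpy as np

def complex_func(x):
    return np.sin(2.0*x*np.pi)*np.cos(0.75*(x+0.3)*np.pi)

xx = np.linspace(-1.0, 1.0, 200001)
print(np.trapz(complex_func(xx), xx))   # ~0.1700970..., matching the E f quoted above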
