STOCHASTIC HYDROLOGY
Lecture 30
Course Instructor: Prof. P. P. Mujumdar
Department of Civil Engg., IISc.
Summary of the previous lecture
• IDF relationship
– Procedure for creating IDF curves
– Empirical equations for IDF relationships
IDF Curves
Design precipitation hyetographs from IDF relationships:

[Figure: IDF curves of rainfall intensity versus duration]
IDF Curves
Procedure
• Obtain the rainfall intensity (i) from the IDF curve for the specified return period and duration (td).
• Compute the precipitation depth: P = i x td.
• Take successive differences of the depths to obtain the amount of precipitation to be added for each additional unit of time Δt.
IDF Curves
• The increments are rearranged into a time sequence with the maximum intensity occurring at the centre of the duration and the remaining blocks arranged in descending order alternately to the right and left of the central block, to form the design hyetograph (see the sketch after the figure below).
[Figure: alternating block arrangement, with block values decreasing on either side of the maximum central block]
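As a minimal sketch, the alternating-block rearrangement can be written as a short Python function (the function name and interface are illustrative, not from the lecture):

```python
def alternating_blocks(increments):
    """Rearrange depth increments so the largest falls at the centre,
    with the remaining values placed alternately right and left
    in descending order (the alternating block method)."""
    ordered = sorted(increments, reverse=True)
    n = len(ordered)
    blocks = [0.0] * n
    centre = (n - 1) // 2                 # central position
    blocks[centre] = ordered[0]
    for k, value in enumerate(ordered[1:]):
        step = k // 2 + 1                 # 1, 1, 2, 2, 3, 3, ...
        side = 1 if k % 2 == 0 else -1    # right, left, right, left, ...
        blocks[centre + side * step] = value
    return blocks

print(alternating_blocks([0.4, 1.2, 2.2, 0.8, 0.6, 0.5]))
# [0.5, 0.8, 2.2, 1.2, 0.6, 0.4]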
Example – 1
Obtain the design precipitation hyetograph for a 2-hour storm in 10-minute increments in Bangalore with a 10-year return period.
Solution:
The 10-year return period design rainfall intensity for a given duration is calculated using the IDF formula of Rambabu et al. (1979):

$$i = \frac{K\,T^a}{(t + b)^n}$$

where i is the rainfall intensity (cm/hr), T the return period (years) and t the duration (hours).
Example – 1 (Contd.)
For Bangalore, the constants are
K = 6.275
a = 0.126
b = 0.5
n = 1.128
[Figure: design precipitation hyetograph, precipitation (cm) versus time (min) in 10-minute blocks from 10 to 120 min]
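The whole example can be reproduced with a few lines of Python, reusing the alternating_blocks helper sketched earlier; this is a sketch, with the unit assumptions (i in cm/hr, t in hours) as stated for the formula above:

```python
# Rambabu et al. (1979) IDF constants for Bangalore, from the lecture
K, a, b, n = 6.275, 0.126, 0.5, 1.128
T = 10                                       # return period, years

def intensity(t_hr):
    """Design intensity i = K*T**a / (t + b)**n, i in cm/hr, t in hours."""
    return K * T**a / (t_hr + b)**n

minutes = list(range(10, 130, 10))           # durations 10, 20, ..., 120 min
depths = [intensity(m / 60.0) * (m / 60.0) for m in minutes]   # cumulative P = i*t

# depth added in each additional 10-minute interval
increments = [depths[0]] + [d2 - d1 for d1, d2 in zip(depths, depths[1:])]

hyetograph = alternating_blocks(increments)  # helper sketched earlier
for m, p in zip(minutes, hyetograph):
    print(f"{m:3d} min: {p:.3f} cm")
```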
MULTIPLE LINEAR REGRESSION
Multiple Linear Regression
[Figure: independent variables x1, x2, x3, x4 feeding the dependent variable y]

• A variable (y) may depend on many other independent variables x1, x2, x3, x4 and so on.
• For example, the runoff from a watershed depends on many factors such as rainfall, slope of the catchment, area of the catchment, moisture characteristics, etc.
• Any model for predicting runoff should contain all these variables.
Simple Linear Regression

[Figure: best-fit line through the observed points (xi, yi); ŷi is the predicted value at xi]

$$\hat{y}_i = a + b x_i$$

Error: $e_i = y_i - \hat{y}_i$

Estimate the parameters a and b such that the sum of squared errors is minimum:

$$M = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2 = \sum_{i=1}^{n} \left\{ y_i - (a + b x_i) \right\}^2$$
Simple Linear Regression

$$\frac{\partial M}{\partial a} = 0 \;\Rightarrow\; a = \frac{\sum_{i=1}^{n} y_i - b \sum_{i=1}^{n} x_i}{n} = \bar{y} - b\,\bar{x}$$

$$\frac{\partial M}{\partial b} = 0 \;\Rightarrow\; b = \frac{\sum_i x_i'\, y_i'}{\sum_i \left( x_i' \right)^2}, \quad \text{where } x_i' = x_i - \bar{x} \text{ and } y_i' = y_i - \bar{y}$$

$$\hat{y}_i = a + b x_i$$
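These estimates translate directly into code; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def fit_line(x, y):
    """Least-squares estimates of a and b in y-hat = a + b*x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xd, yd = x - x.mean(), y - y.mean()     # deviations x', y'
    b = (xd * yd).sum() / (xd ** 2).sum()   # b = sum(x'y') / sum(x'^2)
    a = y.mean() - b * x.mean()             # a = ybar - b*xbar
    return a, b
```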
Multiple Linear Regression
A general linear model is of the form
y = β1x1 + β2x2 + β3x3 +…….. + βpxp
y is dependent variable,
x1, x2, x3,……,xp are independent variables and
β1, β2, β3,……, βp are unknown parameters
Multiple Linear Regression
• One equation is written for each of the n observations:
y1 = β1x1,1 + β2x1,2 + …….. + βpx1,p
y2 = β1x2,1 + β2x2,2 + …….. + βpx2,p
.
.
yn = β1xn,1 + β2xn,2 + …….. + βpxn,p
Multiple Linear Regression
$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{bmatrix}_{n \times 1} = \begin{bmatrix} x_{1,1} & x_{1,2} & x_{1,3} & \cdots & x_{1,p} \\ x_{2,1} & x_{2,2} & x_{2,3} & \cdots & x_{2,p} \\ \vdots & & & & \vdots \\ x_{n,1} & x_{n,2} & x_{n,3} & \cdots & x_{n,p} \end{bmatrix}_{n \times p} \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_p \end{bmatrix}_{p \times 1}$$

$$e_i = y_i - \hat{y}_i, \qquad \hat{y}_i = \sum_{j=1}^{p} \beta_j\, x_{i,j}$$
Multiple Linear Regression
In matrix notation, the sum of squared errors is

$$E = \sum_i e_i^2 = \left( Y - X\hat{B} \right)' \left( Y - X\hat{B} \right)$$

Setting the derivative with respect to each coefficient to zero,

$$\frac{dE}{d\hat{B}} = 0 \;\Rightarrow\; 0 = -2 X' \left( Y - X\hat{B} \right)$$

$$X'Y = X'X\,\hat{B}$$
Multiple Linear Regression
• Premultiplying both sides by $(X'X)^{-1}$,

$$\left( X'X \right)^{-1} X'Y = \left( X'X \right)^{-1} \left( X'X \right) \hat{B}$$

or

$$\hat{B} = \left( X'X \right)^{-1} X'Y$$

provided the matrix $X'X$ is non-singular, so that it can be inverted.
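In code, the estimator reduces to solving the normal equations; a sketch (solving X'X B = X'Y directly is numerically safer than forming the inverse explicitly; the helper name is illustrative):

```python
import numpy as np

def fit_mlr(X, Y):
    """Solve the normal equations X'X B = X'Y for the coefficient vector B."""
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    return np.linalg.solve(X.T @ X, X.T @ Y)   # avoids computing (X'X)^-1
```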
Multiple Linear Regression
• If the number of regression coefficients is 3, the matrix $X'X$ is as follows:

$$X'X = \begin{bmatrix} \sum_{i=1}^{n} x_{i,1}^2 & \sum_{i=1}^{n} x_{i,2}\,x_{i,1} & \sum_{i=1}^{n} x_{i,3}\,x_{i,1} \\ \sum_{i=1}^{n} x_{i,1}\,x_{i,2} & \sum_{i=1}^{n} x_{i,2}^2 & \sum_{i=1}^{n} x_{i,3}\,x_{i,2} \\ \sum_{i=1}^{n} x_{i,1}\,x_{i,3} & \sum_{i=1}^{n} x_{i,2}\,x_{i,3} & \sum_{i=1}^{n} x_{i,3}^2 \end{bmatrix}$$
Multiple Linear Regression
• A multiple coefficient of determination, R² (as in the case of simple linear regression), is defined as

$$R^2 = \frac{\hat{B}'X'Y - n\bar{y}^2}{Y'Y - n\bar{y}^2}$$

where $\bar{y}$ is the mean of the observed y values and n the number of observations.
Example – 2
In a watershed, the mean annual flood (Q) is considered to be dependent on the area of the watershed (A) and rainfall (R). The table gives the observations for 12 years. Obtain the regression coefficients and the R² value.
Q (cumec):    0.44  0.24  2.41  2.97  0.7   0.11  0.05  0.51  0.25  0.23  0.1   0.054
A (hectares): 324   226   1474  2142  420   45    38    363   77    84    46    38
R (cm):       43    53    48    50    43    61    81    68    74    71    71    69
Example – 2 (Contd.)
The regression model is as follows
Q = β1 + β2A + β3R
Here $x_{i,1} = 1$ for all i (the intercept term), so the first column of X is a column of ones and $X'X$ has the form given earlier, with $\sum_{i=1}^{n} x_{i,1}^2 = n$.
Example – 2 (Contd.)
$$X'X = \begin{bmatrix} 12 & 5277 & 732 \\ 5277 & 7245075 & 269879 \\ 732 & 269879 & 46536 \end{bmatrix}$$
Example – 2 (Contd.)
$$X'Y = \begin{bmatrix} \sum_{i=1}^{n} y_i \\ \sum_{i=1}^{n} x_{i,2}\, y_i \\ \sum_{i=1}^{n} x_{i,3}\, y_i \end{bmatrix} = \begin{bmatrix} 8.06 \\ 10642 \\ 417 \end{bmatrix}$$
Example – 2 (Contd.)
$$\hat{B} = \left( X'X \right)^{-1} X'Y = \begin{bmatrix} 0.0351 \\ 0.0014 \\ 5.0135 \times 10^{-5} \end{bmatrix}$$
Example – 2 (Contd.)
Therefore the regression equation is as follows:

$$\hat{Q} = 0.0351 + 0.0014\,A + 5.0135 \times 10^{-5}\,R$$
Example – 2 (Contd.)
Q A R Q̂ e
0.44 324 43 0.49 -0.05
0.24 226 53 0.35 -0.11
2.41 1474 48 2.10 0.31
2.97 2142 50 3.04 -0.07
0.7 420 43 0.63 0.07
0.11 45 61 0.10 0.01
0.05 38 81 0.09 -0.04
0.51 363 68 0.55 -0.04
0.25 77 74 0.15 0.10
0.23 84 71 0.16 0.07
0.1 46 71 0.10 0.00
0.054 38 69 0.09 -0.04
Example – 2 (Contd.)
Multiple coefficient of determination, R²:

$$R^2 = \frac{\hat{B}'X'Y - n\bar{y}^2}{Y'Y - n\bar{y}^2}$$

with $\bar{y} = 0.672$, $n = 12$,

$$\hat{B}' = \begin{bmatrix} 0.0351 & 0.0014 & 5.0135 \times 10^{-5} \end{bmatrix}, \quad X'Y = \begin{bmatrix} 8.06 \\ 10642 \\ 417 \end{bmatrix}, \quad Y'Y = 15.77$$

$$R^2 = \frac{15.64 - 5.42}{15.77 - 5.42} = 0.99$$
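The whole example can be checked with a short script; a sketch (the printed values match the slide's figures after rounding):

```python
import numpy as np

A = [324, 226, 1474, 2142, 420, 45, 38, 363, 77, 84, 46, 38]
R = [43, 53, 48, 50, 43, 61, 81, 68, 74, 71, 71, 69]
Q = [0.44, 0.24, 2.41, 2.97, 0.7, 0.11, 0.05, 0.51, 0.25, 0.23, 0.1, 0.054]

n = len(Q)
X = np.column_stack([np.ones(n), A, R])   # column of ones for the intercept
Y = np.asarray(Q)

B = np.linalg.solve(X.T @ X, X.T @ Y)     # B-hat from the normal equations
ybar = Y.mean()
R2 = (B @ (X.T @ Y) - n * ybar**2) / (Y @ Y - n * ybar**2)

print(B)    # approx. [0.0351, 0.0014, 5.0e-05]
print(R2)   # approx. 0.99
```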
PRINCIPAL COMPONENT ANALYSIS
Principal Component Analysis
• A powerful tool for analyzing data.
• PCA is a way of identifying patterns in data and expressing the data in such a way that the similarities and differences are highlighted.
• Once the patterns are found, the data can be compressed (the number of dimensions reduced) without much loss of information.
• Eigenvectors and eigenvalues are discussed first, to understand the process of PCA.
Matrix Algebra
Eigenvectors and Eigenvalues:
• Let A be a complex square matrix. If λ is a complex
number and X a non–zero complex column vector
satisfying AX = λX, X is an eigenvector of A, while λ
is called an eigenvalue of A.
• X is the eigenvector corresponding to the
eigenvalue λ.
• Eigenvectors are possible only for square matrices.
• Eigenvectors of a symmetric matrix (such as a covariance matrix) are orthogonal.
Matrix Algebra
• If λ is an eigenvalue of an n × n matrix A, with
corresponding eigenvector X, then (A − λI)X = 0,
with X ≠ 0, so det (A − λI) = 0 and there are at most
n distinct eigenvalues of A.
• Conversely if det (A − λI) = 0, then (A − λI)X = 0 has
a non–trivial solution X and so λ is an eigenvalue of
A with X a corresponding eigenvector.
Example – 3
Obtain the eigenvalues and eigenvectors for the
matrix,
$$A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$$

$$\left| A - \lambda I \right| = 0$$

$$\begin{vmatrix} 1 - \lambda & 2 \\ 2 & 1 - \lambda \end{vmatrix} = 0$$
Example – 3 (Contd.)
$$(1 - \lambda)(1 - \lambda) - 4 = 0$$

$$\lambda^2 - 2\lambda - 3 = 0$$

Solving the equation, $\lambda = 3, -1$.

For $\lambda = 3$, $(A - 3I)X = 0$ gives

$$-2x_1 + 2y_1 = 0, \qquad 2x_1 - 2y_1 = 0$$

so $y_1 = x_1$ and the eigenvector is proportional to $[1 \;\; 1]'$.

For $\lambda = -1$, $(A + I)X = 0$ gives

$$2x_1 + 2y_1 = 0, \qquad 2x_1 + 2y_1 = 0$$

so $y_1 = -x_1$ and the eigenvector is proportional to $[1 \;\; -1]'$.
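A quick numerical check (np.linalg.eig normalizes the eigenvectors to unit length; their sign and order may vary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)   # 3 and -1
print(eigvecs)   # unit-length columns, proportional to [1, 1] and [1, -1]
```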
Principal Component Analysis
The transformation is written as

$$Z = XA$$

where
X is the n × p matrix of n observations on p variables,
Z is the n × p matrix of n values for each of the p components, and
A is the p × p matrix of coefficients defining the linear transformation.

All X values are assumed to be deviations from their respective means; hence X is a matrix of deviations from the mean.
Principal Component Analysis
Steps for PCA:
Step 1: Obtain the data.
Step 2: Form a matrix with deviations from the mean.
Step 3: Calculate the covariance matrix.
Step 4: Calculate the eigenvalues and eigenvectors of the covariance matrix.
Step 5: Choose components and form a feature vector.
Principal Component Analysis
The procedure is explained with a simple data set of
the yearly rainfall and the yearly runoff of a catchment
for 15 years.
Year:           1    2    3    4    5    6    7    8    9    10   11   12   13   14   15
Rainfall (cm):  105  115  103  94   95   104  120  121  127  79   133  111  127  108  85
Runoff (cm):    42   46   26   39   29   33   48   58   45   20   54   37   39   34   25
Principal Component Analysis
Step 2: Form a matrix with deviations from the mean (shown here for the first 10 years, with the means taken over these 10 values):

Original matrix:

$$\begin{bmatrix} 105 & 42 \\ 115 & 46 \\ 103 & 26 \\ 94 & 39 \\ 95 & 29 \\ 104 & 33 \\ 120 & 48 \\ 121 & 58 \\ 127 & 45 \\ 79 & 20 \end{bmatrix}$$

Matrix with deviations from the mean:

$$\begin{bmatrix} -1.3 & 3.4 \\ 8.7 & 7.4 \\ -3.3 & -12.6 \\ -12.3 & 0.4 \\ -11.3 & -9.6 \\ -2.3 & -5.6 \\ 13.7 & 9.4 \\ 14.7 & 19.4 \\ 20.7 & 6.4 \\ -27.3 & -18.6 \end{bmatrix}$$
Principal Component Analysis
Step 3: Calculate the covariance matrix
$$\text{cov}(X, Y) = s_{X,Y} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n - 1}$$
Principal Component Analysis
Step 4: Calculate the eigenvalues and eigenvectors of
the covariance matrix
Principal Component Analysis
Step 5: Choosing components and forming a feature
vector
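A minimal sketch tying the five steps together for the rainfall-runoff data above (np.linalg.eigh is used since a covariance matrix is symmetric; variable names are illustrative):

```python
import numpy as np

rainfall = [105, 115, 103, 94, 95, 104, 120, 121, 127, 79,
            133, 111, 127, 108, 85]
runoff = [42, 46, 26, 39, 29, 33, 48, 58, 45, 20,
          54, 37, 39, 34, 25]

# Steps 1-2: data matrix and deviations from the mean
X = np.column_stack([rainfall, runoff]).astype(float)
X -= X.mean(axis=0)

# Step 3: covariance matrix (n - 1 in the denominator, as in the formula above)
C = X.T @ X / (len(X) - 1)            # equivalently np.cov(X, rowvar=False)

# Step 4: eigenvalues and eigenvectors of the (symmetric) covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)  # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]     # sort descending instead
eigvals, A = eigvals[order], eigvecs[:, order]

# Step 5: keep the leading eigenvector(s) as the feature vector;
# the transformation Z = X A gives the principal components
Z = X @ A
print(eigvals / eigvals.sum())        # fraction of variance in each component
```

Dropping the second column of A (and hence of Z) compresses the two variables into a single component, which is the dimension reduction described above.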