
INDIAN INSTITUTE OF SCIENCE

STOCHASTIC HYDROLOGY
Lecture - 30
Course Instructor: Prof. P. P. MUJUMDAR
Department of Civil Engg., IISc.

Summary of the previous lecture

• IDF relationship
– Procedure for creating IDF curves
– Empirical equations for IDF relationships

IDF Curves
Design precipitation hyetographs from IDF relationships:

[Figure: IDF curve — rainfall intensity vs. duration]

Alternating block method:
• Develops a design hyetograph from an IDF curve.
• Specifies the precipitation depth occurring in n successive time intervals of duration Δt over a total duration Td.
IDF Curves
Procedure
• Read the rainfall intensity (i) from the IDF curve for the specified return period and duration (td).
• Precipitation depth (P) = i × td
• The amount of precipitation to be added for each additional unit of time Δt is

PΔt = Ptd2 − Ptd1,  where td2 = td1 + Δt
IDF Curves
• The increments are rearranged into a time sequence with the maximum intensity occurring at the center of the duration and the remaining blocks arranged in descending order alternately to the right and left of the central block to form the design hyetograph.

[Figure: blocks over the total rainfall duration — maximum value at the center, values decreasing alternately on either side]
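The rearrangement above can be sketched as a short function. This is a minimal illustration (not from the lecture): the largest increment goes to the center, the next to its right, the next to its left, and so on; conventions differ between texts on whether the second block goes right or left, and right-first is used here.

```python
def alternating_blocks(increments):
    """Rearrange incremental depths: the maximum at the center,
    the remaining values alternately to the right and left."""
    n = len(increments)
    ordered = sorted(increments, reverse=True)
    out = [0.0] * n
    center = (n - 1) // 2
    out[center] = ordered[0]
    left, right = center - 1, center + 1
    for k, d in enumerate(ordered[1:]):
        go_right = (k % 2 == 0)          # alternate: right first, then left
        if (go_right and right < n) or left < 0:
            out[right] = d
            right += 1
        else:
            out[left] = d
            left -= 1
    return out

# e.g. alternating_blocks([1.0, 0.5, 2.0, 0.7, 3.0])
# places 3.0 at the center: [0.5, 1.0, 3.0, 2.0, 0.7]
```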
Example – 1
Obtain the design precipitation hyetograph for a 2-hour storm in 10 minute increments in Bangalore with a 10 year return period.

Solution:
The 10 year return period design rainfall intensity for a given duration is calculated using the IDF formula by Rambabu et al. (1979)

i = K T^a / (t + b)^n
Example – 1 (Contd.)
For Bangalore, the constants are

K = 6.275
a = 0.126
b = 0.5
n = 1.128

For T = 10 year and duration t = 10 min = 0.167 hr,

i = 6.275 × 10^0.126 / (0.167 + 0.5)^1.128 = 13.251 cm/hr
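As a quick check, this computation can be scripted; a minimal sketch (the function name is illustrative):

```python
# Rambabu et al. (1979) IDF form: i = K * T**a / (t + b)**n
K, a, b, n = 6.275, 0.126, 0.5, 1.128   # Bangalore constants

def intensity(T_years, t_hours):
    """Design rainfall intensity (cm/hr)."""
    return K * T_years**a / (t_hours + b) ** n

i10 = intensity(10, 10 / 60)   # ≈ 13.251 cm/hr for T = 10 yr, t = 10 min
```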
Example – 1 (Contd.)
• Similarly the values for the other durations at intervals of 10 minutes are calculated.
• The precipitation depth is obtained by multiplying the intensity by the duration.
  Precipitation = 13.251 × 0.167 = 2.208 cm
• The 10 minute precipitation depth is 2.208 cm, compared with 3.434 cm for the 20 minute duration; hence 2.208 cm will fall in the first 10 minutes and the remaining 1.226 (= 3.434 − 2.208) cm will fall in the next 10 minutes.
• Similarly the other values are calculated and tabulated.
Example – 1 (Contd.)

Duration (min) | Intensity (cm/hr) | Cumulative depth (cm) | Incremental depth (cm) | Time (min)  | Precipitation (cm)
10             | 13.251            | 2.208                 | 2.208                  |   0 - 10    | 0.069
20             | 10.302            | 3.434                 | 1.226                  |  10 - 20    | 0.112
30             |  8.387            | 4.194                 | 0.760                  |  20 - 30    | 0.191
40             |  7.049            | 4.699                 | 0.505                  |  30 - 40    | 0.353
50             |  6.063            | 5.052                 | 0.353                  |  40 - 50    | 0.760
60             |  5.309            | 5.309                 | 0.256                  |  50 - 60    | 2.208
70             |  4.714            | 5.499                 | 0.191                  |  60 - 70    | 1.226
80             |  4.233            | 5.644                 | 0.145                  |  70 - 80    | 0.505
90             |  3.838            | 5.756                 | 0.112                  |  80 - 90    | 0.256
100            |  3.506            | 5.844                 | 0.087                  |  90 - 100   | 0.145
110            |  3.225            | 5.913                 | 0.069                  | 100 - 110   | 0.087
120            |  2.984            | 5.967                 | 0.055                  | 110 - 120   | 0.055
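The intensity, cumulative-depth and incremental-depth columns of the table can be reproduced with a few lines; a sketch assuming the same Bangalore constants:

```python
K, a, b, n = 6.275, 0.126, 0.5, 1.128   # Bangalore, T = 10 yr

cum, incr = [], []
prev = 0.0
for t in range(10, 130, 10):            # durations in minutes
    i = K * 10**a / (t / 60 + b) ** n   # intensity, cm/hr
    depth = i * t / 60                  # cumulative depth, cm
    cum.append(depth)
    incr.append(depth - prev)           # incremental depth for this 10-min block
    prev = depth
# cum[0] ≈ 2.208, incr[1] ≈ 1.226, cum[-1] ≈ 5.967, matching the table
```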
Example – 1 (Contd.)

[Figure: design hyetograph — precipitation (cm) against time (min) in 10-minute blocks, peaking at 2.208 cm in the 50-60 min interval]
MULTIPLE LINEAR REGRESSION
Multiple Linear Regression
• A variable (y) may be dependent on many other independent variables x1, x2, x3, x4 and so on.
• For example, the runoff from a watershed depends on many factors like rainfall, slope of catchment, area of catchment, moisture characteristics etc.
• Any model for predicting runoff should contain all these variables.

[Figure: schematic of independent variables x1, x2, x3, x4 feeding the dependent variable y]
Simple Linear Regression

[Figure: best fit line through the observed values (xi, yi); ŷi is the predicted value at xi]

ŷi = a + b xi

Error: ei = yi − ŷi

Estimate the parameters a, b such that the sum of square errors is minimum:

M = Σ(i=1 to n) ei² = Σ(i=1 to n) (yi − ŷi)² = Σ(i=1 to n) {yi − (a + b xi)}²
Simple Linear Regression

M = Σ(i=1 to n) {yi − a − b xi}²

∂M/∂a = 0  ⇒  a = (Σ yi − b Σ xi) / n ;  a = ȳ − b x̄

∂M/∂b = 0  ⇒  b = Σ xi′ yi′ / Σ (xi′)²,  where xi′ = xi − x̄ and yi′ = yi − ȳ

ŷi = a + b xi
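These closed-form estimates translate directly to code; a minimal sketch with made-up data (the function name is illustrative):

```python
def fit_line(x, y):
    """Least-squares fit: b = sum(x'y') / sum(x'^2), a = ybar - b*xbar."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
        / sum((xi - xbar) ** 2 for xi in x)
    a = ybar - b * xbar
    return a, b

a, b = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])   # a = 0.15, b = 1.94
```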
Multiple Linear Regression
A general linear model is of the form
y = β1x1 + β2x2 + β3x3 + …… + βpxp

y is the dependent variable,
x1, x2, x3,……, xp are independent variables and
β1, β2, β3,……, βp are unknown parameters

• n observations are required on y, with the corresponding n observations on each of the p independent variables.
Multiple Linear Regression
• n equations are written, one for each observation, as
y1 = β1x1,1 + β2x1,2 + …… + βpx1,p
y2 = β1x2,1 + β2x2,2 + …… + βpx2,p
.
.
yn = β1xn,1 + β2xn,2 + …… + βpxn,p

• The n equations are solved for obtaining the p parameters.
• n must be equal to or greater than p; in practice n should be at least 3 to 4 times as large as p.
Multiple Linear Regression
• If yi is the ith observation on y and xi,j is the ith observation on the jth independent variable, the generalized form of the equations can be written as

yi = Σ(j=1 to p) βj xi,j

• The equation can be written in matrix notation as

Y(n×1) = X(n×p) × Β(p×1)
Multiple Linear Regression

[ y1 ]        [ x1,1  x1,2  x1,3  .  .  x1,p ]       [ β1 ]
[ y2 ]        [ x2,1  x2,2  x2,3  .  .  x2,p ]       [ β2 ]
[ y3 ]   =    [ x3,1   .                  .  ]   ×   [ β3 ]
[ .  ]        [  .                        .  ]       [ .  ]
[ yn ] n×1    [ xn,1  xn,2   .   .  .  xn,p ] n×p    [ βp ] p×1

Y is an n×1 vector of observations on the dependent variable, X is an n×p matrix with n observations on each of the p independent variables, Β is a p×1 vector of unknown parameters.
Multiple Linear Regression
• If xi,1 = 1 for all i, β1 is the intercept.

• The parameters βj, j = 1…p are estimated by minimizing the sum of square errors (ei)

ei = yi − ŷi

ŷi = Σ(j=1 to p) βj xi,j
Multiple Linear Regression
In matrix notation,

Σ ei² = E′E = (Y − XΒ̂)′ (Y − XΒ̂)

Setting d(Σ ei²)/dβj = 0 for all j,

0 = −2 X′(Y − XΒ̂)

X′Y = X′X Β̂
Multiple Linear Regression
• Premultiplying with (X′X)⁻¹ on both sides,

(X′X)⁻¹ X′Y = (X′X)⁻¹ (X′X) Β̂

(X′X)⁻¹ X′Y = Β̂   or   Β̂ = (X′X)⁻¹ X′Y

• (X′X) is a p×p matrix and its rank must be p for it to be invertible.
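The estimator Β̂ = (X′X)⁻¹ X′Y is a one-liner with numpy; a sketch on synthetic data (solving the normal equations with `np.linalg.solve` rather than forming the inverse, which is numerically preferable):

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs = 20
# First column of ones gives an intercept term.
X = np.column_stack([np.ones(n_obs),
                     rng.uniform(0, 10, n_obs),
                     rng.uniform(0, 5, n_obs)])
true_beta = np.array([1.0, 2.0, 3.0])
y = X @ true_beta                        # exact linear data, no noise

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
# beta_hat recovers [1, 2, 3]
```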
Multiple Linear Regression
• Suppose the number of regression coefficients is 3; then the matrix (X′X) is as follows (all sums over i = 1 to n):

          [ Σ xi,1²       Σ xi,2 xi,1   Σ xi,3 xi,1 ]
(X′X)  =  [ Σ xi,1 xi,2   Σ xi,2²       Σ xi,3 xi,2 ]
          [ Σ xi,1 xi,3   Σ xi,2 xi,3   Σ xi,3²     ]
Multiple Linear Regression
• A multiple coefficient of determination, R² (as in the case of simple linear regression), is defined as

R² = (sum of squares due to regression) / (sum of squares about the mean)
   = (Β̂′X′Y − n ȳ²) / (Y′Y − n ȳ²)
Example – 2
In a watershed, the mean annual flood (Q) is considered to be dependent on the area of the watershed (A) and rainfall (R). The table gives the observations for 12 years. Obtain the regression coefficients and the R² value.

Q (cumec):     0.44  0.24  2.41  2.97  0.7   0.11  0.05  0.51  0.25  0.23  0.1   0.054
A (hectares):  324   226   1474  2142  420   45    38    363   77    84    46    38
R (cm):        43    53    48    50    43    61    81    68    74    71    71    69
Example – 2 (Contd.)
The regression model is as follows

Q = β1 + β2A + β3R

where Q is the mean flood in m³/sec,
A is the watershed area in hectares and
R is the annual rainfall in cm (as given in the table)

This is represented in matrix form as

Y(12×1) = X(12×3) × Β(3×1)
Example – 2 (Contd.)
To obtain the coefficients, this equation is to be solved:

[ 0.44  ]       [ 1   324  43 ]
[ 0.24  ]       [ 1   226  53 ]
[ 2.41  ]       [ 1  1474  48 ]
[ 2.97  ]       [ 1  2142  50 ]
[ 0.7   ]       [ 1   420  43 ]       [ β1 ]
[ 0.11  ]   =   [ 1    45  61 ]   ×   [ β2 ]
[ 0.05  ]       [ 1    38  81 ]       [ β3 ] 3×1
[ 0.51  ]       [ 1   363  68 ]
[ 0.25  ]       [ 1    77  74 ]
[ 0.23  ]       [ 1    84  71 ]
[ 0.1   ]       [ 1    46  71 ]
[ 0.054 ] 12×1  [ 1    38  69 ] 12×3
Example – 2 (Contd.)
The coefficients are obtained from

Β̂ = (X′X)⁻¹ X′Y

where (all sums over i = 1 to n)

          [ Σ xi,1²       Σ xi,2 xi,1   Σ xi,3 xi,1 ]
(X′X)  =  [ Σ xi,1 xi,2   Σ xi,2²       Σ xi,3 xi,2 ]
          [ Σ xi,1 xi,3   Σ xi,2 xi,3   Σ xi,3²     ]
Example – 2 (Contd.)

         [  12      5277     732    ]
(X′X) =  [ 5277   7245075   269879  ]
         [  732    269879    46536  ]

The inverse of this matrix is

            [  3.35       −6.1×10⁻⁴   −0.05     ]
(X′X)⁻¹ =   [ −6.1×10⁻⁴    2.9×10⁻⁷    7.9×10⁻⁶ ]
            [ −0.05        7.9×10⁻⁶    7.5×10⁻⁴ ]
Example – 2 (Contd.)

         [ Σ yi      ]     [ 8.06  ]
(X′Y) =  [ Σ xi,2 yi ]  =  [ 10642 ]
         [ Σ xi,3 yi ]     [ 417   ]

(sums over i = 1 to n)
Example – 2 (Contd.)

Β̂ = (X′X)⁻¹ X′Y

    [  3.35       −6.1×10⁻⁴   −0.05     ]     [ 8.06  ]
  = [ −6.1×10⁻⁴    2.9×10⁻⁷    7.9×10⁻⁶ ]  ×  [ 10642 ]
    [ −0.05        7.9×10⁻⁶    7.5×10⁻⁴ ]     [ 417   ]

    [ 0.0351      ]
  = [ 0.0014      ]
    [ 5.0135×10⁻⁵ ]
Example – 2 (Contd.)
Therefore the regression equation is as follows

Q = 0.0351 + 0.0014A + 5.0135×10⁻⁵R

From this equation, the estimated Q̂ and the corresponding errors are tabulated.
Example – 2 (Contd.)
Q A R Q̂ e
0.44 324 43 0.49 -0.05
0.24 226 53 0.35 -0.11
2.41 1474 48 2.10 0.31
2.97 2142 50 3.04 -0.07
0.7 420 43 0.63 0.07
0.11 45 61 0.10 0.01
0.05 38 81 0.09 -0.04
0.51 363 68 0.55 -0.04
0.25 77 74 0.15 0.10
0.23 84 71 0.16 0.07
0.1 46 71 0.10 0.00
0.054 38 69 0.09 -0.04
Example – 2 (Contd.)
Multiple coefficient of determination, R²:

R² = (Β̂′X′Y − n ȳ²) / (Y′Y − n ȳ²)

with  ȳ = 0.672,  n = 12,
Β̂′ = [0.0351  0.0014  5.0135×10⁻⁵],
X′Y = [8.06  10642  417]′,
Y′Y = 15.77

R² = (15.64 − 5.42) / (15.77 − 5.42) = 0.99
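The full Example 2 computation can be verified numerically; a sketch using the data from the slides (the coefficients here are computed from the raw data, so they agree with the slide values only to the rounding shown there):

```python
import numpy as np

Q = np.array([0.44, 0.24, 2.41, 2.97, 0.7, 0.11,
              0.05, 0.51, 0.25, 0.23, 0.1, 0.054])
A = np.array([324, 226, 1474, 2142, 420, 45,
              38, 363, 77, 84, 46, 38], dtype=float)
R = np.array([43, 53, 48, 50, 43, 61, 81, 68, 74, 71, 71, 69], dtype=float)

X = np.column_stack([np.ones(len(Q)), A, R])
beta = np.linalg.solve(X.T @ X, X.T @ Q)   # ≈ [0.035, 0.0014, 5e-5]

ybar = Q.mean()
r2 = (beta @ X.T @ Q - len(Q) * ybar**2) / (Q @ Q - len(Q) * ybar**2)
# r2 ≈ 0.99, as on the slide
```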
PRINCIPAL COMPONENT
ANALYSIS

Principal Component Analysis
• A powerful tool for analyzing data.
• PCA is a way of identifying patterns in data and expressing the data in such a way that the similarities and differences are highlighted.
• Once the patterns are found in the data, it can be compressed (the number of dimensions reduced) without much loss of information.
• Eigenvectors and eigenvalues are discussed first to understand the process of PCA.
Matrix Algebra
Eigenvectors and Eigenvalues:
• Let A be a complex square matrix. If λ is a complex number and X a non-zero complex column vector satisfying AX = λX, X is an eigenvector of A, while λ is called an eigenvalue of A.
• X is the eigenvector corresponding to the eigenvalue λ.
• Eigenvalues and eigenvectors are defined only for square matrices.
• Eigenvectors of a symmetric matrix corresponding to distinct eigenvalues are orthogonal.
Matrix Algebra
• If λ is an eigenvalue of an n × n matrix A, with
corresponding eigenvector X, then (A − λI)X = 0,
with X ≠ 0, so det (A − λI) = 0 and there are at most
n distinct eigenvalues of A.
• Conversely if det (A − λI) = 0, then (A − λI)X = 0 has
a non–trivial solution X and so λ is an eigenvalue of
A with X a corresponding eigenvector.

Example – 3
Obtain the eigenvalues and eigenvectors for the matrix

A = [ 1  2 ]
    [ 2  1 ]

The eigenvalues are obtained from

|A − λI| = 0

| 1−λ    2  |
|  2    1−λ |  =  0
Example – 3 (Contd.)

(1 − λ)(1 − λ) − 4 = 0
λ² − 2λ − 3 = 0

Solving the equation,
λ = 3, −1

Therefore the eigenvalues of matrix A are 3 and −1.

The eigenvector is obtained from

(A − λI) X = 0
Example – 3 (Contd.)
For λ1 = 3:

(A − λ1I) X1 = 0

[ −2   2 ] [ x1 ]
[  2  −2 ] [ y1 ]  =  0

−2x1 + 2y1 = 0
2x1 − 2y1 = 0

which has solution x1 = y1, y1 arbitrary. The eigenvectors corresponding to λ = 3 are the vectors [y, y]′, with y ≠ 0.
Example – 3 (Contd.)
For λ2 = −1:

(A − λ2I) X2 = 0

[ 2  2 ] [ x2 ]
[ 2  2 ] [ y2 ]  =  0

2x2 + 2y2 = 0
2x2 + 2y2 = 0

which has solution x2 = −y2, y2 arbitrary. The eigenvectors corresponding to λ = −1 are the vectors [−y, y]′, with y ≠ 0.
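The hand computation in Example 3 can be confirmed with numpy; a minimal check:

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 1.0]])
w, V = np.linalg.eig(A)     # eigenvalues 3 and -1, eigenvectors in columns

# Each column v of V satisfies A v = lambda v, and the eigenvectors
# are proportional to [1, 1] and [1, -1], as derived above.
```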
Principal Component Analysis
Principal Component Analysis (PCA):
• When data are collected on p variables, these variables are often correlated.
• Correlation indicates that information contained in one variable is also contained in some of the other p−1 variables.
• PCA transforms the p original correlated variables into p uncorrelated components (also called orthogonal components).
• These components are linear functions of the original variables.
Principal Component Analysis
The transformation is written as

Z = X × A

where
X is the n×p matrix of n observations on p variables,
Z is the n×p matrix of n values for each of the p components,
A is the p×p matrix of coefficients defining the linear transformation.

All X are assumed to be deviations from their respective means; hence X is a matrix of deviations from the mean.
Principal Component Analysis
Steps for PCA:

• Get the data for n observations on p variables.
• Form a matrix with deviations from the mean.
• Calculate the covariance matrix.
• Calculate the eigenvalues and eigenvectors of the covariance matrix.
• Choose components and form a feature vector.
• Derive the new data set.
Principal Component Analysis
The procedure is explained with a simple data set of the yearly rainfall and the yearly runoff of a catchment for 15 years.

Year:          1    2    3    4    5    6    7    8    9    10
Rainfall (cm): 105  115  103  94   95   104  120  121  127  79
Runoff (cm):   42   46   26   39   29   33   48   58   45   20

Year:          11   12   13   14   15
Rainfall (cm): 133  111  127  108  85
Runoff (cm):   54   37   39   34   25
Principal Component Analysis
Step 2: Form a matrix with deviations from the mean (the slide shows the first 10 years; for these, mean rainfall = 106.3 cm and mean runoff = 38.6 cm)

Original matrix      Matrix with deviations from mean
[ 105  42 ]          [  −1.3    3.4 ]
[ 115  46 ]          [   8.7    7.4 ]
[ 103  26 ]          [  −3.3  −12.6 ]
[  94  39 ]          [ −12.3    0.4 ]
[  95  29 ]          [ −11.3   −9.6 ]
[ 104  33 ]          [  −2.3   −5.6 ]
[ 120  48 ]          [  13.7    9.4 ]
[ 121  58 ]          [  14.7   19.4 ]
[ 127  45 ]          [  20.7    6.4 ]
[  79  20 ]          [ −27.3  −18.6 ]
Principal Component Analysis
Step 3: Calculate the covariance matrix

cov(X, Y) = sX,Y = Σ(i=1 to n) (xi − x̄)(yi − ȳ) / (n − 1)

[ cov(X,X)  cov(X,Y) ]     [ 216.67  141.35 ]
[ cov(Y,X)  cov(Y,Y) ]  =  [ 141.35  133.38 ]
Principal Component Analysis
Step 4: Calculate the eigenvalues and eigenvectors of the covariance matrix

[ cov(X,X)  cov(X,Y) ]     [ 216.67  141.35 ]
[ cov(Y,X)  cov(Y,Y) ]  =  [ 141.35  133.38 ]
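Steps 2-4 can be sketched in numpy for this data set. The deviation matrix on the slides lists only the first 10 years, so those are used here; the projection in the last line anticipates the final "derive the new data set" step.

```python
import numpy as np

rain = np.array([105, 115, 103, 94, 95, 104, 120, 121, 127, 79], dtype=float)
run = np.array([42, 46, 26, 39, 29, 33, 48, 58, 45, 20], dtype=float)

# Step 2: deviations from the mean
D = np.column_stack([rain - rain.mean(), run - run.mean()])

# Step 3: covariance matrix, matching the slide:
# [[216.67, 141.35], [141.35, 133.38]]
C = D.T @ D / (len(rain) - 1)

# Step 4: eigenvalues/eigenvectors (eigh suits the symmetric covariance matrix)
w, V = np.linalg.eigh(C)        # eigenvalues in ascending order

# Projection onto the components, largest-variance component first
Z = D @ V[:, ::-1]
```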
Principal Component Analysis
Step 5: Choose components and form a feature vector.
