
7. Hydrology part

Once we have gained a feeling for the catchment geography using GIS, we can focus
on water issues. The first task is to create hyetographs from the rain gauge station
records. We use 2 different methods, so the second task will be to compare them.
Then we can study the rainfall pattern of this event and compare it to the actual return
periods observed over the catchment (using data provided by Météo France).
Then, using these hyetographs and the data provided by the GIS analysis, we can actually
carry out the hydrological analysis. The hydrographs we create now (in this hydrological
part) are obtained with lumped models. We will start with Socose, Nash and SCS-CN
(Laborde's Excel sheet) and then try HEC-HMS. Physically based models will be
used later on.
We are provided with 2 rainfall time series. One represents the daily rainfall of the year
1994 and the other the rainfall corresponding to our flood event. The first will
help to calculate the humidity coefficient used in the lumped models. The latter will be used to
obtain the hyetographs. It contains 62 hourly averaged rainfall values, starting from the 3rd of
November 1994 at midday until the 6th of November 1994 at midnight.

1. Hyetograph creation

We have rainfall time series (hyetographs) at 6 locations scattered across the Var catchment,
and we want time series of values averaged over each subcatchment. We will
process the rainfall data (using different interpolation methods) to create hyetographs.
They underpin any kind of hydrological analysis. As these data are the main factor
influencing the results, we have to make sure that we create reliable hyetographs:
bad rainfall data lead to meaningless results.

However, we know that our analysis will be limited by our data. First of all, we have only 6
stations scattered over a 2833.6 km² basin, which is quite sparse. Furthermore, rainfall
depends on altitude, so in a catchment such as the Var we might observe
important differences across the catchment. Since our stations are mainly
located in valleys, our data do not take the altitude variation into consideration.

Among all the interpolation methods, we will focus on two of them: the Thiessen
polygon method and the Kriging interpolation. The first method is quite straightforward
to compute but may appear quite approximate, as it assumes constant rainfall over wide
areas. The Kriging method, which belongs to the family of linear least-squares estimation
algorithms, should be more reliable, as it attempts to express the trends suggested
by our data. We can also note that this method is used on a daily basis by Météo
France (the French weather office), so we can be much more confident about the
results we will get from it.
a. Thiessen’s polygon method
The Thiessen method assumes that precipitation varies discretely along the x and y directions.
It splits the whole basin into polygons. These polygons, named "Thiessen polygons",
represent the influence of each rain gauge: each polygon corresponds to one rain gauge,
and any point belonging to a given polygon is closer to that rain gauge than to any other.
Such a point is assumed to receive the same rainfall as the corresponding rain gauge
station.
In practice, we create the polygons by drawing the perpendicular bisectors of the lines between all
station points. We have to keep in mind that this drawing method is not very accurate. Furthermore,
within each polygon the precipitation is set equal to the gauge value, which is highly
unrealistic. Even if the impact of this inaccuracy is reduced by the fact that we deal
with values averaged over each subcatchment, the method remains very approximate.

In ArcGIS, the first thing to do is to create a new layer and edit it. We use the snapping option
of the Editor Toolbar to draw lines between the rainfall stations. Then we draw the perpendicular
bisectors, whose intersections outline each polygon. Figure 7.1.1 shows the polygons we obtained in
ArcGIS.

Figure 7.1.1: Thiessen polygons


We now have two layers ("Theissan_Poly" and "Sub_Catchment") representing respectively
the Thiessen polygons and the subcatchment polygons. Using them, we want to calculate the
relative surface of each polygon in each subcatchment. An easy way to do so is to first create
one polygon for each subcatchment / Thiessen polygon combination. In the Arc Toolbox,
we use: Analysis Tools \ Overlay \ Union. We obtain a new layer with 17 polygons (Figure
7.1.2). However, we have no information about their surface, so we export this new
layer into the database created during the GIS part and add it to our project. Now, opening
the attribute table, we have access to the area of each small polygon. We export this attribute
table (Options \ Export) into Excel. Removing the unnecessary fields, we obtain Table 7.1.1;
sorting the data, we obtain Table 7.1.2, which represents the relative surface of each polygon in
each subcatchment.

Figure 7.1.2: union polygons

Table 7.1.1: Simplified area table (provided by ArcGIS)

OBJECTID  Subcatchment name  Rain gauge station  Area (m²)


1 Tinée Guillaumes 368141829.9
2 Tinée Levens 41180726.58
3 Tinée St Martin Vésubie 328676755.4
4 Tinée Roquesteron 9484438.98
5 Upper Var Guillaumes 573403160.1
6 Upper Var Levens 39062901.88
7 Upper Var St Martin Vésubie 13453991.46
8 Upper Var Puget Théniers 420584868.1
9 Upper Var Roquesteron 67959456.61
10 Vésubie Levens 97096186.62
11 Vésubie St Martin Vésubie 296440064.2
12 Estéron Levens 22704839.38
13 Estéron Puget Théniers 123319223
14 Estéron Carros 52432733.39
15 Estéron Roquesteron 252403830.4
16 Lower Var Levens 22225337.88
17 Lower Var Carros 105034662.4
Table 7.1.2: Relative area (corresponding to the α coefficients)

Tinée Upper Var Vésubie Estéron Lower Var


Carros 0.00% 0.00% 0.00% 11.63% 82.54%
Levens 5.51% 3.51% 24.67% 5.04% 17.46%
Roquesteron 1.27% 6.10% 0.00% 55.98% 0.00%
Puget Théniers 0.00% 37.74% 0.00% 27.35% 0.00%
Guillaumes 49.25% 51.45% 0.00% 0.00% 0.00%
St Martin Vésubie 43.97% 1.21% 75.33% 0.00% 0.00%

In Excel, we can calculate the rainfall time series for each subcatchment using the Thiessen
formula (Equation 7.1.1). This formula is a weighted average of the gauge station
rainfalls for each subcatchment.

Equation 7.1.1: Thiessen formula

P = Σ (i = 1 … n) αi · Pi

Where: P: mean rainfall on the subcatchment
Pi: rainfall recorded at the gauge of the i-th Thiessen polygon
αi: relative surface of the i-th Thiessen polygon within the subcatchment,
i.e. the weight of that rain station on the subcatchment (values in Table 7.1.2)
n: number of rainfall stations

We can note that α depends on the spatial location of each rainfall station over the Var catchment,
so we have one value per subcatchment and per rain gauge. To make the formula clear, here
is an illustration for the Tinée subcatchment:
P_Tinée = P_Carros · α_Carros + P_Levens · α_Levens + …
Applying the Thiessen formula to each subcatchment, we obtain a table of the
average precipitation for each subcatchment (see Annexe 1).
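As a minimal illustration of Equation 7.1.1, the weighted average can be computed as follows. This is a sketch, not the project spreadsheet: the station names are simplified (accents dropped) and the rainfall values at the single time step shown are invented.

```python
def thiessen_average(alphas, rainfall):
    """Weighted average of station rainfall (Equation 7.1.1); weights sum to ~1."""
    return sum(alphas[station] * rainfall[station] for station in alphas)

# alpha coefficients for the Tinee subcatchment (Table 7.1.2, as fractions)
alpha_tinee = {
    "Levens": 0.0551,
    "Roquesteron": 0.0127,
    "Guillaumes": 0.4925,
    "St Martin Vesubie": 0.4397,
}

# invented hourly rainfall (mm) at one time step
rain = {
    "Levens": 2.0,
    "Roquesteron": 0.0,
    "Guillaumes": 5.0,
    "St Martin Vesubie": 4.0,
}

p_tinee = thiessen_average(alpha_tinee, rain)
print(p_tinee)
```

Repeating this for every hourly time step and every subcatchment yields the tables of Annexe 1.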

b. Kriging interpolation method


Kriging is a set of linear regression routines that minimise the estimation variance using a
predefined covariance model. The method assumes that the parameter being interpolated
can be treated as a regionalised variable. A regionalised variable is intermediate between a truly
random variable and a completely deterministic one: it varies in a continuous manner from one
location to the next, so points that are near each other have a certain degree of
spatial correlation, while points that are far from each other are statistically independent.

ArcGIS alone does not readily handle this Kriging interpolation, so we use Surfer to interpolate
the data over the whole catchment. We therefore have to use Excel, Surfer 8.0 and ArcGIS to
implement the Kriging method on our data set. Basically, the work is divided into 3 main parts.
As we have to go through many different steps for each time step, we wrote
several macros in these 3 programs to automate the data processing.
First of all, we need to get the coordinates of the rain gauge stations from ArcGIS. We edit
the attribute table of the layer representing the rain gauges and change the names (removing the
"é"). Then, in the Arc Toolbox: Spatial Analyst Tools \ Utilities \ Export Feature Attribute to ASCII.
We then import these coordinates into an Excel sheet that holds the rainfall time series for
the 6 gauges. We wrote a macro in Excel (Annexe ) to combine the gauge coordinates with the time
series. This macro outputs one text file per time step. Table 7.1.3 is one example of these
files; P corresponds to the precipitation in mm. Furthermore, as Surfer cannot interpolate
when the rainfall is zero at all the rain gauges, this macro also records the times at which
all the values are zero and writes them into another text file.
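The macro logic can be sketched in Python (the original is a VBA macro; the station coordinates and rainfall values below are illustrative, and the file names `rain_NNN.txt` / `zero_steps.txt` are hypothetical):

```python
import os

stations = {  # gauge name -> (X, Y) in the project's grid coordinates
    "Levens": (966705.05, 1894996.21),
    "Carros": (972825.62, 1886279.04),
}
series = {  # time step index -> rainfall (mm) per gauge
    0: {"Levens": 0.0, "Carros": 0.0},
    1: {"Levens": 1.0, "Carros": 5.0},
}

def export_time_steps(out_dir):
    """Write one 'X Y P' file per rainy time step; log the all-zero steps."""
    os.makedirs(out_dir, exist_ok=True)
    zero_steps = []
    for t, values in sorted(series.items()):
        if all(v == 0.0 for v in values.values()):
            zero_steps.append(t)  # Surfer cannot krige an all-zero field
            continue
        with open(os.path.join(out_dir, "rain_%03d.txt" % t), "w") as f:
            f.write("X Y P\n")
            for name, (x, y) in stations.items():
                f.write("%s %s %s\n" % (x, y, values[name]))
    with open(os.path.join(out_dir, "zero_steps.txt"), "w") as f:
        f.write("\n".join(str(t) for t in zero_steps))
    return zero_steps

zeros = export_time_steps("rain_files")
print(zeros)
```

Each `rain_NNN.txt` file has the same "X Y P" layout as Table 7.1.3 and can be fed to Surfer's gridding script.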

Then, in Surfer, for each time step we first check whether there is rainfall or not. If there is, we
import the corresponding text file, interpolate the data using Kriging and output a grid file
(.dat). Here again, we wrote a macro to carry out these steps automatically for each time step.
The grid goes from xMin = 940000 to xMax = 1010000 and from yMin = 1860000 to
yMax = 1940000; Surfer generates 10,000 values (100 by 100 cells).

We then wrote an Excel macro to remove the negative values produced by Surfer and to
convert each file into a text file usable by ArcGIS.

Finally, we use the subcatchment layer and these text files in ArcGIS to retrieve subcatchment
rainfall statistics. The first thing to do is to import the text files as XYZ files. As we cannot
process text files directly in ArcGIS, we have to transform each one into a shapefile made of
points representing the precipitation scattered across the whole catchment. Now, we want
the average precipitation over each subcatchment (for each time step). To do this, we
select the points by location: ArcGIS works with the point shapefile (rainfall) and the
polygon shapefile (subcatchments). In VBA, we wrote a loop that first selects a
subcatchment by name, then selects the points that are "completely within" this polygon,
averages the rainfall of the selected points and outputs the value to a text file. Because of the
length of this process, writing a macro in ArcGIS makes a lot of sense. We eventually obtain a
text file with the precipitation averaged over each subcatchment at each time step. These time
series are reported in annexe XXX.
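The geometric core of this step, selecting the grid points inside a subcatchment polygon and averaging their rainfall, can be sketched with a plain ray-casting point-in-polygon test. ArcGIS does this internally; the polygon and points below are toy values, not the real geometry.

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the polygon edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def mean_rainfall(points, poly):
    """Average P over the (x, y, P) points falling inside poly."""
    values = [p for x, y, p in points if point_in_polygon(x, y, poly)]
    return sum(values) / len(values) if values else 0.0

square = [(0, 0), (10, 0), (10, 10), (0, 10)]     # toy subcatchment outline
grid = [(2, 2, 4.0), (5, 5, 6.0), (15, 5, 99.0)]  # last point falls outside
print(mean_rainfall(grid, square))
```

Running this for each subcatchment polygon and each time-step grid reproduces the averaging done by the ArcGIS macro.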

Table 7.1.3: Text file


X Y P
962995.6178 1909833.946 0.5
966705.0514 1894996.212 1
972825.6169 1886279.043 5
993598.4452 1909463.003 2
993042.0302 1886279.043 0.5
991001.8417 1879231.119 3
2. Comparison

We obtained rainfall time series for each subcatchment using 2 different methods. It is
interesting now to compare them, to see whether it is worth putting so much effort into the
hyetographs: they have to be as reliable as possible, but better methods do not always
imply significantly better results. The first thing we can do is compare the two kinds of
hyetographs visually for each subcatchment (Figure 7.2.1, Figure 7.2.2, Figure 7.2.3, Figure 7.2.4,
Figure 7.2.5, Figure 7.2.6).

We can see that, in general, the curves are alike. The Thiessen method tends to produce higher peaks
than the Kriging method. The two methods are especially similar for two subcatchments, Estéron and
Upper Var, where the curves fit quite well. However, for the Vésubie, the peaks are really
different: we observe a 10 mm difference in the maximum value.

After this qualitative comparison, we can carry out a quantitative one. We perform a
statistical analysis to highlight the differences: we compare the means of the
two methods using a paired T-test, since the datasets are not independent.

We compare the hyetographs given by Thiessen and Kriging for each
subcatchment. Pair 1 concerns the Lower Var hyetograph, Pair 2 the Estéron, Pair 3 the Vésubie,
Pair 4 the Upper Var and Pair 5 the Tinée. We consider here a 95% confidence level. It
means that if the significance value (Sig, the p-value) is higher than 0.05, we accept the null
hypothesis and conclude that there is no significant difference between our 2 groups of
data; a Sig lower than 0.05 means that there is a significant difference between the 2 groups.
Looking at Table 7.2.5, we observe no significant difference in any of the 5
tests we carried out. We can thus question the usefulness of carrying out the Kriging
interpolation in this case: it takes much more effort for a quite similar result.
However, as we saw earlier, local differences such as the Vésubie peaks can still be observed.
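The paired T-test statistic can be computed with the standard library alone. The two series below are invented stand-ins for one subcatchment's Thiessen and Kriging hyetographs, not the real data; for df = 4, |t| would have to exceed 2.776 to be significant at the 95% level.

```python
import math

def paired_t(a, b):
    """t statistic and degrees of freedom for a paired T-test."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of the differences
    return mean / math.sqrt(var / n), n - 1

# invented stand-ins for one subcatchment's Thiessen and Kriging series
thiessen = [1.0, 2.5, 4.0, 3.0, 0.5]
kriging = [1.2, 2.4, 3.6, 3.1, 0.6]

t, df = paired_t(thiessen, kriging)
print(t, df)  # |t| here is far below the 2.776 critical value for df = 4 at 95%
```

With the real 62-value series, df = 61 and the critical value is about 2.0; the Sig values reported in Table 7.2.5 correspond to the two-sided p-value of this statistic.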
Figure 7.2.1: Comparison Kriging / Thiessen for Tinée subcatchment
[chart: Tinée subcatchment, rainfall (mm) versus time (h), Kriging and Thiessen series]

Figure 7.2.2: Comparison Kriging / Thiessen for Upper Var subcatchment
[chart: Upper Var subcatchment, rainfall (mm) versus time (h), Kriging and Thiessen series]
Figure 7.2.3: Comparison Kriging / Thiessen for Estéron subcatchment
[chart: Estéron subcatchment, rainfall (mm) versus time (h), Kriging and Thiessen series]

Figure 7.2.4: Comparison Kriging / Thiessen for Lower Var subcatchment
[chart titled "Vésubie subcatchment": rainfall (mm) versus time (h), Kriging and Thiessen series]
Figure 7.2.5: Comparison Kriging / Thiessen for Vésubie subcatchment
[chart titled "Lower Var subcatchment": rainfall (mm) versus time (h), Kriging and Thiessen series]

Table 7.2.5: Comparison Kriging / Thiessen: T-Test


3. Rainfall study
Table 7.3.1: Rainfall statistics (mm), Thiessen method

                     Tinée   Upper Var   Vésubie   Estéron   Lower Var
Event 1       Mean    1.37    1.38        1.17      1.17      0.79
              Max     4.28    5.46        4.13      3.73      5.48
              Min     0.00    0.00        0.00      0.00      0.00
Event 2       Mean    3.50    3.27        3.67      3.62      2.40
              Max    13.54   11.33       20.88     14.19     19.38
              Min     0.00    0.15        0.00      0.00      0.00
Whole period  Mean    2.44    2.32        2.44      2.42      1.61
              Max    13.54   11.33       20.88     14.19     19.38
              Min     0.00    0.00        0.00      0.00      0.00

[chart: Subcatchments' hyetographs, rainfall (mm) versus time (hours); series: Tinée, Upper Var, Vésubie, Estéron, Lower Var]
4. Hydrograph creation

7.4.1 Runoff derivation in the different subcatchments


To create the hydrographs, we used Mike SHE and the Excel sheet created by Pr. Laborde. This
Excel macro is discussed in the following chapters.

To run the model, we have to enter some characteristic parameters of the catchment (subcatchment)
to be analysed and define the characteristics of the transfer function from the S.C.S. (Soil Conservation
Service).
This is an empirical method for runoff calculation based on the potential of the soil to absorb
water. The runoff has to be known to calculate the discharge at the outlet of each subcatchment. It is
obtained with the following production function:

R(t) = (P(t) − 0.2·S)² / (P(t) + 0.8·S)    if P(t) > 0.2·S (otherwise zero)

Where: R(t) is the cumulative runoff between time 0 and time t (mm)
P(t) is the cumulative rainfall between time 0 and time t (mm)
S is the maximum amount of water the soil is able to absorb (mm)

The service provides CN values to calculate the maximum infiltration amount (S), depending on the
permeability of the land, its seepage state and the nature of the vegetation. First of all, we have to
determine the antecedent moisture class, based on the amount of rainfall during the 5 days
preceding the flood event.

Antecedent moisture class (AMC), with P the 5-day antecedent rainfall (mm):

Condition I: P < 12.7
Condition II: 12.7 < P < 27.9
Condition III: P > 27.9

We recalculated the amount of water in each subcatchment from the data given by the Kriging
method and obtained class III for all subcatchments, which means that the soil was already very
moist or saturated at the moment of the critical rains that produced the flood.

After this step, and according to the geological class (estimated, for lack of data) fitted to the
land use given by the GIS, we calculated the runoff curve number for the different
subcatchments. We obtained the following results:

Table 7.4.1.2: Runoff curve number and infiltration potential

             Tinée   Vésubie  Estéron  Low Var  High Var  Var catchment
CN number    68.10   68.20    68.20    70.70    68.00     68.20
Smax (mm)    51.80   51.40    51.40    45.80    51.90     51.50
S0 (mm)      10.40   10.30    10.30     9.20    10.40     10.30
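The link between CN, the maximum retention S and the production function above can be sketched in a few lines. The AMC III (wet) conversion used here is the standard SCS one (an assumption, since the sheet itself is not shown); with the tabulated CN = 68.2 it reproduces the Smax of about 51.5 mm given for the Var catchment.

```python
def s_from_cn(cn):
    """Maximum retention S (mm) from a curve number."""
    return 25.4 * (1000.0 / cn - 10.0)

def cn_wet(cn2):
    """AMC III (wet antecedent conditions) CN from the standard AMC II CN."""
    return 23.0 * cn2 / (10.0 + 0.13 * cn2)

def scs_runoff(p, s):
    """Cumulative runoff R (mm) from cumulative rainfall P (mm).

    Zero until rainfall exceeds the 0.2*S initial abstraction.
    """
    if p <= 0.2 * s:
        return 0.0
    return (p - 0.2 * s) ** 2 / (p + 0.8 * s)

s_max = s_from_cn(cn_wet(68.2))  # Var catchment CN from Table 7.4.1.2
print(round(s_max, 1))           # close to the tabulated Smax = 51.5 mm
```

For example, with this S, 100 mm of cumulative rain produces about 57 mm of cumulative runoff.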

7.4.2 Determination of the unit hydrograph and hydrograph

As the rainfall on the subcatchments is known, the discharge at their outlets can now be calculated.

7.4.2.1 Methods used to calculate hydrographs on the subcatchments

We determine the unit hydrograph for each subcatchment using different empirical formulas.

Time of concentration

To calculate the time of concentration, we tried several formulas, such as:

Passini formula

Tc = 0.108 · (A · L)^(1/3) / I^(1/2)

Nash formula

Tc = 29.6 · (100 · A / I^(1/2))^0.3

Where A is the surface area (km²)
L is the longest flow path (km)
I is the chosen slope (m/m)
(Tc in minutes for the Nash formula)

After trying them, we chose to use the Nash formula, which seems to be the most accurate one.

Rise time calculation

The rise time of each subcatchment is needed to determine the unit hydrograph with the methods
presented below. It is derived from the time of concentration, which is in turn obtained from
empirical formulas based on the catchment geometry (data provided by the GIS
analysis).

• SCS (Soil Conservation Service, USA) unit hydrograph method: uses a triangular unit
hydrograph with rise time = 3/8 Tc
• SOCOSE-CEMAGREF method (French agriculture ministry): rise time = 1/3 Tc and
calibration parameters of 3 and 5
• Nash model: rise time = 1/3 Tc and parameters 4 and 4.7
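As an illustration, the triangular SCS unit hydrograph of the first bullet can be built directly from the rise time. This is a sketch: the rise time of 8 h is an invented placeholder (the real Tm values appear in the table below), and the 8/3 · Tm time base is the standard SCS triangle.

```python
def triangular_uh(tm_h, area_km2, dt_h=1.0):
    """Ordinates (m3/s per mm of effective rain) of a triangular unit hydrograph."""
    tb = 8.0 / 3.0 * tm_h  # SCS triangle: time base ~ 2.67 times the rise time
    volume = area_km2 * 1e6 * 1e-3      # 1 mm of runoff over the area, in m3
    qp = 2.0 * volume / (tb * 3600.0)   # peak so the triangle holds that volume
    q, t = [], 0.0
    while t <= tb:
        if t <= tm_h:
            q.append(qp * t / tm_h)                # rising limb
        else:
            q.append(qp * (tb - t) / (tb - tm_h))  # falling limb
        t += dt_h
    return q

uh = triangular_uh(tm_h=8.0, area_km2=747.48)  # Tinee-sized area, invented Tm
print(max(uh))
```

Convolving these ordinates with the hourly runoff increments from the SCS production function gives the subcatchment hydrograph.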

7.4.2.2 Calculation parameters


            Area      Longest flow  Slope   Tc Nash  Tm SCS  Tm SOCOSE/Nash
            [km²]     path [km]     [m/m]   [min]    [min]   [min]
Tinée        747.48    71.44        0.035    1421     533     474
Vésubie      393.54    48.40        0.051    1106     415     369
Estéron      450.86    62.21        0.025    1282     481     427
High Var    1114.46    95.11        0.024    1692     634     564
Low Var      127.26    31.13        0.042     813     305     271

Table 7.4.2.2.1: Data used for all the subcatchments (Tm values derived from the Nash Tc)
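As a consistency check, the Tc column of the table above is reproduced to within a few minutes by the form we infer for the Nash formula, Tc (min) = 29.6 · (100 · A / √I)^0.3 with A in km² and I in m/m (the exponent 0.3 and the square root are reconstructed, but they match every row of the table):

```python
import math

def tc_nash(area_km2, slope):
    """Inferred Nash time of concentration, in minutes."""
    return 29.6 * (100.0 * area_km2 / math.sqrt(slope)) ** 0.3

for name, a, i, tc_table in [
    ("Tinee", 747.48, 0.035, 1421),
    ("Vesubie", 393.54, 0.051, 1106),
    ("Esteron", 450.86, 0.025, 1282),
    ("High Var", 1114.46, 0.024, 1692),
    ("Low Var", 127.26, 0.042, 813),
]:
    print(name, round(tc_nash(a, i)), tc_table)
```

The Tm columns then follow as 3/8 · Tc (SCS) and 1/3 · Tc (SOCOSE and Nash), which also matches the tabulated values.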

Fig. 7.4.2.2.2: Rainfall, runoff and discharge for the High Var catchment, using the Nash method with
a coefficient n = 4
[chart: rainfall (mm) and discharge (m³/s) versus time (s); series: rainfall, runoff, discharge]

7.4.2.3 Comparison of the different methods and validation of the hydrograph results with measured
data

The peak discharge during the event of 5 November 1994 assumed by CEMAGREF is about
3500 m³/s. However, this value has to be considered very carefully, because no discharge was actually
measured on the Var river during the flood event and only an approximate rating curve could be
established. The only real information we have are the water levels given by flood marks observed in
the field. The true discharge for this event certainly lies between 2500 m³/s and 5000 m³/s, but nothing
more is really known about it.
If we compare our calculations with the "observed" one, we can conclude:

The SOCOSE method gives results well below the assumed value, although still within the range
of uncertainty; even when we changed the alpha value, we obtained values lower than
the observed one.

The SCS method gives values that fit reasonably well, but are lower than the discharge at the
Napoleon Bridge.

The Nash method fits best, so we decided to keep this method. We varied the n parameter
to see how the discharge reacts and found that a coefficient n = 4 gave the closest
result.
Fig. 7.4.2.3.1: Comparison between all the methods used with the Laborde Excel sheet and the
CEMAGREF hydrograph
[chart: discharge (m³/s) at the Napoleon Bridge versus time (h); series: Nash n = 4, Nash n = 4.7, SOCOSE a = 3, SOCOSE a = 5, SCS, CEMAGREF]

After this stage, we calculated the discharge with a flood routing method (the Muskingum method).
The reason is that, after calculating the discharge for all subcatchments, we noticed that the peak of
the total discharge at the Napoleon Bridge occurred quite early compared to the discharge at the
outlet of each subcatchment. We therefore calculated the travel time between the flow leaving each
subcatchment and its arrival in the main river; this gives a delay time for each subcatchment, by
which its hydrograph has to be shifted forward. This method provides a more realistic result,
because it takes the travel time of the flood wave into account. In general, we observe with the
Muskingum method that our peak is delayed and attenuated. That is what is shown on the
following graph.
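A minimal Muskingum routing sketch shows both effects, the delay and the attenuation of the peak. The reach parameters K (travel time) and X (weighting factor) and the inflow hydrograph below are illustrative, not the calibrated values of the study.

```python
def muskingum(inflow, k_h, x, dt_h=1.0):
    """Route an inflow hydrograph (m3/s) through one Muskingum reach."""
    denom = k_h - k_h * x + 0.5 * dt_h
    c0 = (0.5 * dt_h - k_h * x) / denom
    c1 = (0.5 * dt_h + k_h * x) / denom
    c2 = (k_h - k_h * x - 0.5 * dt_h) / denom  # c0 + c1 + c2 = 1
    outflow = [inflow[0]]  # assume steady state at the start
    for t in range(1, len(inflow)):
        outflow.append(c0 * inflow[t] + c1 * inflow[t - 1] + c2 * outflow[-1])
    return outflow

inflow = [0, 100, 400, 1000, 700, 400, 200, 100, 50, 0, 0, 0]  # invented, m3/s
outflow = muskingum(inflow, k_h=2.0, x=0.15)
print(max(outflow), inflow.index(max(inflow)), outflow.index(max(outflow)))
```

With these values the routed peak is both smaller than the inflow peak and shifted to a later time step, which is exactly the behaviour described above.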

Fig. 7.4.2.3.2: Discharge at the Napoleon Bridge with the Nash method: comparison between the
Nash, Nash-Muskingum and CEMAGREF hydrographs
[chart: discharge (m³/s) versus time (h); series: Nash, Nash-Muskingum, CEMAGREF]

7.4.2.4 Improvement of the results and conclusion

A few elements can be brought forward to explain the differences between the methods used:

• The rain data were calculated with 2 approximate methods, which interpolate data for the
whole catchment from the 6 rain gauges we have. For better accuracy, we would need many
more rain gauges.
• There are quite a few uncertainties in the Tc and Tm formulas we used, because they
depend on the GIS data, and the slopes we used were open to discussion. The delayed
peak time of all our hydrographs compared to the measured one could be explained by this,
but we tried to calibrate against reality as much as possible.
• For a better hydrological calculation, we should have defined smaller subcatchments, but
this would have required much more time.
• For the CN calculation (Smax), we computed the coefficients with the SCS-CN method,
which needs a lot of accurate information on the land use of the area, which was not provided.

We tried changing many parameters and comparing the results with the observed data. So many
possibilities are available that it is hard to choose one solution over another, especially when
we know that the CEMAGREF curve is itself not accurate. That is why we decided not
to change the parameters tied to physical reality too much, and to adjust the others instead,
such as the coefficient n of the Nash method.
This method provides a good approximation of reality, but for that we need to know the field
data as precisely as possible.
5. HEC-HMS analysis

HEC-HMS model
A hydrological model was prepared using HEC-HMS 3.1.0. The subcatchments defined in ArcGIS were
used for preparing the model. A non-gridded, lumped basin model was built. The assumption of
this model is that each sub-basin within the watershed can be adequately represented by a number of
hydrologic parameters. In effect, these parameters are a weighted average representation of the entire
sub-basin. Any variation within a sub-basin is lumped into the sub-basin total and an average value is
used in the analysis.

Preparation of the design storm

As the recording gauges do not represent the subcatchments directly, the design storm had to be
prepared with an approach in which the subcatchments are represented in an acceptable way. The
procedure consisted of the following 2 steps:
a) Areas were allocated to each gauging station using the Thiessen polygon method.
b) The rainfall for each subcatchment was calculated from the overlap of the
subcatchment with the areas covered by the rain gauges.

The Kriging method was used to obtain the rainfall data corresponding to each subcatchment.

Preparation of the model in HEC-HMS

The following components were prepared for the HEC-HMS model:
i. Basin model
ii. Time series data
iii. Meteorological model
iv. Control specifications

An explanation of these components is given below.

1.1. Basin model

The basin model consists of only 2 types of elements:
a. Sub-basin elements
b. Junction elements
Junction elements serve the purpose of joining the discharge coming from the different
subcatchments; they do not play any hydrological role in the model.
Details such as the area and slope of each subcatchment are given to the model. The
important decisions when setting up the basin model concern the transform method, the loss
method and the baseflow method.
Transform method
The surface runoff calculations were performed using the SCS unit hydrograph method. This
method requires the lag time to be computed. The standard lag is defined
as the length of time between the centroid of the precipitation mass and the peak flow of the
resulting hydrograph. Studies by the SCS show that, in general, the lag time can be
approximated as 60% of the time of concentration.
The time of concentration was calculated for each subcatchment using the Kirpich
equation:

Tc = 0.0078 · L^0.77 · S^(−0.385)

where Tc is in minutes, L is the longest overland flow path (feet) and S is the slope of the
subcatchment (m/m).
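The Kirpich time of concentration and the 60% lag rule described above can be sketched as follows. The standard Kirpich form Tc (min) = 0.0078 · L^0.77 · S^(−0.385) is assumed (the equation itself is not reproduced in the report), and the 71.44 km length used below is merely borrowed from the Tinée row of Table 7.4.2.2.1 as an illustration.

```python
def tc_kirpich_min(length_ft, slope):
    """Kirpich time of concentration (min); length in feet, slope dimensionless."""
    return 0.0078 * length_ft ** 0.77 * slope ** -0.385

def scs_lag_min(length_ft, slope):
    """SCS standard lag: 60% of the time of concentration."""
    return 0.6 * tc_kirpich_min(length_ft, slope)

length_ft = 71.44e3 / 0.3048  # e.g. a 71.44 km flow path converted to feet
print(tc_kirpich_min(length_ft, 0.035), scs_lag_min(length_ft, 0.035))
```

Note that Kirpich gives a much shorter Tc than the Nash formula used with the Laborde sheet, which is one source of the differences between the two sets of hydrographs.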

Loss method
A sub-basin element conceptually represents infiltration, surface runoff and subsurface
processes interacting together; the actual infiltration calculations are performed by a loss
method contained within the sub-basin. All of the possible loss methods in HMS conserve
mass: the sum of infiltration and precipitation left on the surface is always equal
to the total incoming precipitation.
The SCS curve number method was used for the loss computations. The important
parameters required for this method are the curve number and the percentage
imperviousness. The percentage imperviousness is taken from the GIS analysis;
the impervious areas of the subcatchments are very small in percentage.
The details of the SCS curve number selection are given in the section of this report that
explains the Nash method. It should be stated here that the outflow discharge is very sensitive
to the selected curve number, which must therefore be chosen carefully.

Baseflow method
No baseflow was considered in the simulation. Rainfall data were available only for the
30 hours prior to the recorded discharge data.

1.2. Time series data

The time series data comprised the synthesized rainfall data to be given as
input to each subcatchment for the simulation. Surfer was used to calculate the
representative rainfall values. The details of the procedure are given in section…

1.3. Meteorological model

The meteorological model links the rainfall data to the subcatchments. All the time series
data mentioned above are associated with each subcatchment, and the type of input is
Specified Hyetograph.

1.4. Control specifications

The control specifications define the simulation period. The rainfall data were available
from 03 Nov 94 12:00 to 06 Nov 94 01:00. The simulation was run for an extra 48 hours
to obtain the complete recession of the discharge hydrograph.
1.5. Results
The following are the results for the final selected parameters for the catchment. It must be
mentioned that the simulation results are very sensitive to the curve number used in
the loss method of the sub-basins. The representative curve number used was 52,
calculated from the available information about the catchment.
[chart: recorded and simulated discharge (m³/s) versus time (hrs) for the final parameter set]

6. Conclusion
