Hydrology Part: RD TH
Once we have gained an overview of the catchment geography using GIS, we can focus
on water issues. The first task is to create hyetographs from the rain gauge station
records. We use two different methods, so the second task will be to compare them.
Then we can study the rainfall pattern of this event and compare it to the actual return
periods observed over the catchment (using data provided by Météo France).
Then, using these hyetographs and the data provided by the GIS analysis, we can carry
out the hydrological analysis proper. The hydrographs created in this hydrological
part are obtained with lumped models. We start with Socose, Nash and SCS-CN
(Laborde's Excel sheet) and then try HEC-HMS. Physically based models will be
used later on.
We are provided with two rainfall time series. One contains the daily rainfall of the year
1994 and the other the rainfall corresponding to our flood event. The first will help us
calculate the humidity coefficient used in the lumped models. The second will be used to
obtain the hyetographs. It contains 62 hourly averaged rainfall values, from the 3rd of
November 1994 at midday until the 6th of November 1994 at midnight.
1. Hyetograph creation
We have rainfall time series (hyetographs) at 6 stations scattered across the Var catchment,
and we want time series of values averaged over each subcatchment. We therefore process
the rainfall data (using different extrapolation methods) to create hyetographs.
Hyetographs underpin any kind of hydrological analysis: as these data are the main factor
influencing the results, we have to make sure that we create reliable ones, since bad
rainfall data lead to meaningless results.
However, we know that our analysis will be limited by our data. First of all, we only have
6 stations scattered over a 2833.6 km² basin, which is a sparse network. Furthermore,
rainfall depends on altitude, so in a catchment such as the Var we may observe large
differences across the area. Since our stations are mainly located in valleys, our data do
not capture the variation with altitude.
Among all the extrapolation methods, we focus on two: the Thiessen polygon method and
Kriging. The first is straightforward to compute but can be quite approximate, as it
assumes constant rainfall over wide areas. The Kriging method, which belongs to the
family of linear least squares estimation algorithms, should be more reliable, since it
attempts to reproduce the spatial trends suggested by the data. We can also note that this
method is used on a daily basis by Météo France (the French weather office), which gives
us more confidence in the results it produces.
a. Thiessen’s polygon method
The Thiessen method assumes that precipitation varies discretely along the x and y
directions. It splits the whole basin into polygons. These polygons, named "Thiessen
polygons", represent the zone of influence of each rain gauge: each polygon corresponds to
one rain gauge, and any point inside a given polygon is closer to that rain gauge than to any
other. Such a point is assumed to receive the same rainfall as the corresponding rain gauge
station.
In practice, we create the polygons by drawing the perpendicular bisectors of the lines
between all station pairs. We have to keep in mind that this drawing method is not very
accurate. Furthermore, throughout each polygon the precipitation is set equal to the gauge
value, which is highly unrealistic. Even if the impact of this inaccuracy is reduced by the
fact that we work with values averaged over each subcatchment, the method remains very
approximate.
In ArcGIS, the first step is to create a new layer and edit it. We use the snapping option
of the Editor toolbar to draw lines between the rainfall stations. Then we draw the
perpendicular bisectors; their intersections outline each polygon. Figure 7.1.1 shows the
polygons we obtained in ArcGIS.
In Excel, we can calculate the rainfall time series for each subcatchment using the Thiessen
formula (Equation 7.1.1), which is a weighted average of the gauge station rainfalls for
each subcatchment.
Note that the weight α depends on the spatial location of each rainfall station within the
Var catchment, so there is one value per subcatchment and per rain gauge. To make the
formula clear, here is an illustration for the Tinée subcatchment:
P_Tinée = α_Carros · P_Carros + α_Levens · P_Levens + …
Applying the Thiessen formula to each subcatchment, we obtain a table of the average
precipitation for each subcatchment (see Annexe 1).
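The weighted average above can be sketched in Python. The station names, weights and rainfall values below are illustrative, not our actual Thiessen weights:

```python
# Sketch of the Thiessen weighted average (Equation 7.1.1) for one
# subcatchment and one time step. The weights alpha_i are the fractions of
# the subcatchment area covered by each gauge's Thiessen polygon.

def thiessen_average(rainfall_mm, weights):
    """Weighted average of gauge rainfall; the weights must sum to 1."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("Thiessen weights must sum to 1")
    return sum(rainfall_mm[name] * weights[name] for name in weights)

# Hypothetical weights for the Tinée subcatchment (three gauges shown):
weights = {"Carros": 0.2, "Levens": 0.3, "StEtienne": 0.5}
rain = {"Carros": 4.0, "Levens": 6.0, "StEtienne": 10.0}  # mm at one time step
print(thiessen_average(rain, weights))  # 0.2*4 + 0.3*6 + 0.5*10 = 7.6
```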
b. Kriging method
ArcGIS only allows Kriging interpolation, so we use Surfer to extrapolate the data over the
whole catchment. We therefore have to combine Excel, Surfer 8.0 and ArcGIS to implement
the Kriging method with our data set. The work is divided into three main parts. Since many
steps have to be repeated for each time step, we write macros in each of the three programs
to automate the data processing.
First of all, we need to get the coordinates of the rain gauge stations from ArcGIS. We edit
the attribute table of the layer representing the rain gauges and change the names (removing
the "é"). Then, in ArcToolbox: Spatial Analyst Tools \ Utilities \ Export Feature Attribute to ASCII.
Then we import these coordinates into an Excel sheet that holds the rainfall time series of
the 6 gauges. We wrote a macro in Excel (Annexe ) to transform the rain gauge records into
per-time-step files: the macro outputs one text file per time step. Table 7.1.3 shows one
example of these files; P is the precipitation in mm. Furthermore, as Surfer cannot
extrapolate when the rainfall is null at every rain gauge, the macro also records the times at
which all the values are null and writes them to another text file.
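The logic of this macro can be sketched in Python. The coordinates and rainfall values below are illustrative, not the real gauge data:

```python
# Sketch of the Excel macro's logic: for each time step, build one "x y P"
# table (one row per gauge) for Surfer, and record separately the time
# steps where every gauge reads zero (Surfer cannot interpolate a
# constant-zero field).

gauges = {            # name: (x, y) in the projected coordinates of ArcGIS
    "Carros": (955000, 1880000),
    "Levens": (960000, 1890000),
}
series = {            # name: rainfall (mm) per time step
    "Carros": [0.0, 2.0, 5.0],
    "Levens": [0.0, 1.0, 4.0],
}

def split_by_time_step(gauges, series):
    files, dry_steps = {}, []
    n_steps = len(next(iter(series.values())))
    for t in range(n_steps):
        values = {name: series[name][t] for name in gauges}
        if all(v == 0.0 for v in values.values()):
            dry_steps.append(t)          # logged instead of sent to Surfer
            continue
        lines = ["x y P"]
        for name, (x, y) in gauges.items():
            lines.append(f"{x} {y} {values[name]}")
        files[t] = "\n".join(lines)
    return files, dry_steps

files, dry = split_by_time_step(gauges, series)
```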
Then, in Surfer, we first check for each time step whether there is rainfall or not. If there
is, we import the corresponding text file, extrapolate the data using Kriging and output a
grid file (.dat). Here again we wrote a macro to carry out these steps automatically for each
time step. The grid goes from xMin = 940000 to xMax = 1010000 and from
yMin = 1860000 to yMax = 1940000; Surfer generates 10,000 values (a 100 × 100 cell grid).
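At each grid node, ordinary kriging solves a small linear system whose weights sum to 1. A minimal pure-Python sketch, assuming a simple linear variogram γ(h) = h (Surfer's actual variogram settings may differ) and illustrative gauge positions:

```python
# Ordinary kriging at a single grid node, as Surfer does cell by cell.
# The kriging system is [[Gamma, 1], [1, 0]] [w, mu] = [gamma0, 1], where
# Gamma holds the variogram values between stations and mu is a Lagrange
# multiplier enforcing that the weights sum to 1.

from math import hypot

def solve(a, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def ordinary_kriging(stations, values, x0, y0):
    """Estimate the value at (x0, y0) from the gauge values."""
    n = len(stations)
    gamma = lambda p, q: hypot(p[0] - q[0], p[1] - q[1])  # linear variogram
    a = [[gamma(stations[i], stations[j]) for j in range(n)] + [1.0]
         for i in range(n)]
    a.append([1.0] * n + [0.0])                 # unbiasedness constraint
    b = [gamma(stations[i], (x0, y0)) for i in range(n)] + [1.0]
    w = solve(a, b)[:n]                         # drop the Lagrange multiplier
    return sum(w[i] * values[i] for i in range(n))

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # illustrative gauges
rain = [2.0, 6.0, 4.0]                              # mm at one time step
print(round(ordinary_kriging(stations, rain, 0.0, 0.0), 6))  # exact at a gauge
```

Kriging is an exact interpolator: at a gauge location the estimate reproduces the gauge value, which is a useful sanity check for the macro chain.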
We then wrote an Excel macro that removes the negative values calculated by Surfer and
converts each grid file into a text file usable by ArcGIS.
Finally, we use the subcatchment layer and these text files in ArcGIS to retrieve rainfall
statistics per subcatchment. The first step is to import each text file as an XYZ file. As
ArcGIS cannot process text files directly, we transform each one into a shapefile made of
points representing the precipitation scattered across the whole catchment. We then want
the average precipitation over each subcatchment (for each time step), which requires
selecting the points by location: ArcGIS works with the point shapefile (rainfall) and the
polygon shapefile (subcatchments). In VBA, we wrote a loop that first selects a
subcatchment by name, then selects the points that are "completely within" this polygon,
averages the rainfall of the selected points and writes the value to a text file. Given the
length of this process, writing a macro in ArcGIS makes a lot of sense. We eventually
obtain a text file with the precipitation averaged over each subcatchment at each time step.
These time series are reported in annexe XXX.
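The "select by location" averaging can be sketched as follows. A simple ray-casting point-in-polygon test stands in for ArcGIS's "completely within" selection; the polygon and point values are illustrative:

```python
# Keep the interpolated rainfall points that fall inside a subcatchment
# polygon and average them, mimicking the ArcGIS VBA loop.

def point_in_polygon(px, py, polygon):
    """Ray casting: count edge crossings of a horizontal ray from (px, py)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

def zonal_mean(points, polygon):
    """Average the rainfall of the grid points lying inside the polygon."""
    inside = [p for (x, y, p) in points if point_in_polygon(x, y, polygon)]
    return sum(inside) / len(inside) if inside else 0.0

square = [(0, 0), (10, 0), (10, 10), (0, 10)]    # toy subcatchment outline
pts = [(2, 2, 4.0), (8, 8, 6.0), (15, 5, 99.0)]  # last point lies outside
print(zonal_mean(pts, square))  # (4.0 + 6.0) / 2 = 5.0
```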
2. Comparison of the two methods
We obtained rainfall time series for each subcatchment using two different methods. It is
interesting to compare them, to see whether it is worth putting so much effort into the
hyetographs: they have to be as reliable as possible, but better methods do not always imply
significantly better results. The first thing we can do is compare the two kinds of
hyetograph visually for each subcatchment (Figures 7.2.1 to 7.2.6).
In general the curves are alike, although the Thiessen method tends to produce higher peaks
than Kriging. The two methods are especially similar for two subcatchments, Estéron and
Upper Var, where the curves fit quite well. For the Vésubie, however, the peaks are really
different: we observe a 10 mm difference in the maximum value.
After this qualitative comparison, we can carry out a quantitative one. We perform a
statistical analysis to highlight the differences: we compare the means of the two methods
using a paired T-test, since the datasets are not independent.
We compare the hyetographs given by Thiessen and Kriging for each subcatchment:
Pair 1 is the Lower Var hyetograph, Pair 2 the Estéron, Pair 3 the Vésubie, Pair 4 the
Upper Var and Pair 5 the Tinée. We use a 95% confidence level, which means that if the
significance (Sig., the p-value) is higher than 0.05 we accept the null hypothesis and
conclude that there is no significant difference between the two groups of data; a Sig.
lower than 0.05 means there is a significant difference.
Looking at table XXX, we observe no significant difference in any of the 5 tests we
carried out. We can thus question the usefulness of the Kriging extrapolation in this case:
it takes much more effort for a fairly similar result. However, as we saw earlier, we can
still observe noticeable differences in the peak values for some subcatchments.
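The paired T-test can be sketched in Python. The two 6-value samples below are illustrative, not the real hyetographs:

```python
# Paired T-test statistic for comparing the Thiessen and Kriging series of
# one subcatchment. The differences between pairs are assumed ~ normal.

from math import sqrt
from statistics import mean, stdev

def paired_t(sample_a, sample_b):
    """t statistic for paired samples."""
    diffs = [a - b for a, b in zip(sample_a, sample_b)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# With n = 6 pairs there are 5 degrees of freedom and the two-sided 5%
# critical value is about 2.571 (for our 62-value series, df = 61 and the
# threshold is about 2.00).
thiessen = [1.0, 4.0, 9.0, 6.0, 2.0, 0.5]
kriging  = [1.2, 3.5, 8.0, 6.3, 2.1, 0.4]
t = paired_t(thiessen, kriging)
print(abs(t) < 2.571)   # True -> no significant difference at the 5% level
```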
Figure 7.2.1: Comparison Kriging / Thiessen for the Tinée subcatchment (rainfall in mm against time in h)
Figure 7.2.3: Comparison Kriging / Thiessen for the Estéron subcatchment (rainfall in mm against time in h)
Figure 7.2.5: Comparison Kriging / Thiessen for the Vésubie subcatchment (rainfall in mm against time in h)
Rainfall statistics (mm):

                     Tinée   Upper Var   Vésubie   Estéron   Lower Var
Event 1       Mean    1.37        1.38      1.17      1.17        0.79
              Max     4.28        5.46      4.13      3.73        5.48
              Min     0.00        0.00      0.00      0.00        0.00
Event 2       Mean    3.50        3.27      3.67      3.62        2.40
              Max    13.54       11.33     20.88     14.19       19.38
              Min     0.00        0.15      0.00      0.00        0.00
Whole period  Mean    2.44        2.32      2.44      2.42        1.61
              Max    13.54       11.33     20.88     14.19       19.38
              Min     0.00        0.00      0.00      0.00        0.00
Figure: Subcatchments' hyetographs (rainfall in mm against time in hours) for the Tinée, Upper Var, Vésubie, Estéron and Lower Var
4. Hydrograph creation
To run the model, we have to enter some characteristic parameters of the catchment (or
subcatchment) to analyse, and define the characteristics of the transfer function from the S.C.S.
(Soil Conservation Service).
This is an empirical runoff calculation method based on the potential of the soil to absorb
water. The runoff has to be known to calculate the discharge at the outlet of each subcatchment.
It is obtained with the following production function:
R(t) = (P(t) − 0.2·S)² / (P(t) + 0.8·S)   if P(t) > 0.2·S (otherwise R(t) = 0)
where: R(t) is the cumulative runoff between time 0 and time t (mm)
P(t) is the cumulative rainfall between time 0 and time t (mm)
S is the maximum amount of water the soil is able to absorb (mm)
The Soil Conservation Service provides CN values from which the maximum infiltration S is
calculated, depending on the permeability of the land, its seepage state and the nature of the
vegetation. First of all, we have to determine the antecedent moisture class from the amount of
rainfall P (mm) during the 5 days preceding the flood event:
Condition I: P < 12.7
Condition II: 12.7 < P < 27.9
Condition III: P > 27.9
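The production function and the moisture classification above can be sketched in Python. The relation S = 25400/CN − 254 (mm) is the standard SCS conversion from curve number to maximum retention; the CN value and rainfall depth below are illustrative:

```python
# SCS-CN sketch: antecedent moisture class from the 5-day antecedent
# rainfall, S from the curve number, and the cumulative runoff R(t).
# Values exactly on the 12.7 / 27.9 mm boundaries are assigned here to the
# lower class, a choice the original conditions leave open.

def moisture_class(p5_mm):
    """Antecedent moisture condition from the 5-day antecedent rainfall (mm)."""
    if p5_mm < 12.7:
        return "I"
    if p5_mm < 27.9:
        return "II"
    return "III"

def scs_runoff(p_mm, cn):
    """Cumulative runoff R(t) in mm from cumulative rainfall P(t) in mm."""
    s = 25400.0 / cn - 254.0            # maximum retention S (mm) from CN
    if p_mm <= 0.2 * s:                 # below the initial abstraction
        return 0.0
    return (p_mm - 0.2 * s) ** 2 / (p_mm + 0.8 * s)

print(moisture_class(40.0))             # "III": saturated antecedent soil
print(round(scs_runoff(100.0, 80), 1))  # 50.5 mm of runoff for P = 100 mm
```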
We recalculated the antecedent rainfall in each subcatchment from the Kriging data and
obtained class III for all subcatchments, which means the soil was already very moist or
saturated when the critical rains that produced the flood arrived.
After this step, and according to the geological class (estimated, for lack of data) fitting the
land use given by the GIS, we calculated the runoff curve number for the different
subcatchments. We obtained the following results:
As the rainfall on the subcatchments is known, the discharge at their outlets can now be
calculated. We determine a unit hydrograph for each subcatchment using different empirical
formulas for the time of concentration Tc, fed with the subcatchment data (area A, length of
the main stream L, slope I) provided by the GIS analysis:

Passini formula: Tc = 0.108 · ∛(A · L) / √I

Nash formula: Tc = 29.6 · ∛(100 · A / I)

After trying both, we chose the Nash formula, which seems to be the most accurate.
The rise time of each subcatchment is needed to determine the unit hydrograph with the
methods presented below. It is calculated from the time of concentration, which is in turn
obtained from the empirical formulas above using the geometry of the catchment (data
provided by the GIS analysis).
• SCS (Soil Conservation Service of the USA) unit hydrograph method: uses a triangular
unit hydrograph with rise time = 3/8 Tc
• SOCOSE-CEMAGREF method (French agriculture ministry): rise time = 1/3 Tc and
calibration parameters of 3 and 5
• Nash model: rise time = 1/3 Tc and parameters 4 and 4.7
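The construction of a triangular unit hydrograph from the rise time can be sketched as follows. The 2.67·Tm time base is the standard SCS triangle; Tc and the area are illustrative, not necessarily the parameters of Laborde's sheet:

```python
# Triangular SCS unit hydrograph: rising limb up to the rise time
# Tm = 3/8 Tc, then a recession such that the total volume equals 1 mm of
# effective rainfall over the subcatchment area.

def triangular_uh(tc_h, area_km2, dt_h=1.0):
    """Ordinates (m3/s per mm of effective rain) of a triangular unit hydrograph."""
    tm = 3.0 / 8.0 * tc_h          # rise time (h), as in the SCS method above
    tb = 2.67 * tm                 # time base (h) of the standard SCS triangle
    # Peak such that the triangle's volume equals 1 mm over the area:
    # 0.5 * qp * (tb * 3600 s) = area_km2 * 1e6 m2 * 1e-3 m
    qp = 2.0 * area_km2 * 1000.0 / (tb * 3600.0)
    ordinates, t = [], 0.0
    while t <= tb:
        if t <= tm:
            ordinates.append(qp * t / tm)                # rising limb
        else:
            ordinates.append(qp * (tb - t) / (tb - tm))  # recession limb
        t += dt_h
    return ordinates

# Illustrative values, not the real subcatchment parameters:
uh = triangular_uh(tc_h=8.0, area_km2=700.0)
```

Convolving these ordinates with the effective-rainfall hyetograph then gives the subcatchment hydrograph.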
Fig. 7.4.2.2.2: Rainfall, runoff and discharge for the Upper Var subcatchment, using the Nash
method with a coefficient n = 4
7.4.2.3 Comparison of the different methods and validation of the hydrograph results with
measured data
The peak discharge during the event of 5th November 1994 estimated by CEMAGREF is about
3500 m³/s. However, this value has to be considered very carefully: no discharge was actually
measured on the Var river during the flood event, and only an approximate rating curve could
be established. The only hard information we have are the water levels given by flood marks
observed in the field. The real discharge for this event most likely lies between 2500 m³/s and
5000 m³/s, but nothing more is really known about it.
If we compare our calculations with the "observed" value, we can conclude:
• The SOCOSE method gives results well below the estimated value, although still within the
range of uncertainty; even when we changed the alpha value, we obtained values lower than
the observed one.
• The SCS method gives values that fit reasonably well, but remain lower than the discharge at
the Napoleon Bridge.
• The Nash method fits best, so we decided to keep it. We changed the n parameter to see how
the discharge reacts and found that a coefficient n = 4 gave the closest result.
Fig. 7.4.2.3.1: Comparison between all the methods used with the Laborde Excel sheet and the
CEMAGREF hydrograph (discharge at the Napoleon Bridge)
After this stage, we tried to calculate the discharge with a flood routing method (the
Muskingum method). The reason is that, after calculating the discharge for all subcatchments,
we noticed that the peak of the total discharge at the Napoleon Bridge occurred quite early
compared to the discharges at the outlets of the individual subcatchments. We therefore
calculated the delay time between the flow leaving each subcatchment and its arrival in the
main river; this gives a delay time per subcatchment by which its hydrograph has to be shifted
forward. This method provides a more realistic result because it takes into account the travel
time of the flood wave. In general, with the Muskingum method the peak is delayed and
attenuated, as shown on the following graph.
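Muskingum routing can be sketched in Python. K (the reach travel time) and x (the storage weighting factor), as well as the inflow hydrograph, are illustrative; in practice they would be calibrated per reach:

```python
# Muskingum flood routing: delay and attenuate a subcatchment hydrograph
# before summing the contributions at the Napoleon Bridge.

def muskingum_route(inflow, k_h, x, dt_h=1.0):
    """Route an inflow hydrograph (m3/s) through one reach."""
    denom = k_h - k_h * x + 0.5 * dt_h
    c0 = (0.5 * dt_h - k_h * x) / denom
    c1 = (0.5 * dt_h + k_h * x) / denom
    c2 = (k_h - k_h * x - 0.5 * dt_h) / denom   # c0 + c1 + c2 = 1
    outflow = [inflow[0]]                       # start from steady state
    for t in range(1, len(inflow)):
        outflow.append(c0 * inflow[t] + c1 * inflow[t - 1] + c2 * outflow[t - 1])
    return outflow

hydrograph = [0, 100, 400, 800, 600, 300, 100, 0, 0, 0]   # toy inflow (m3/s)
routed = muskingum_route(hydrograph, k_h=2.0, x=0.2, dt_h=1.0)
# The routed peak is lower and later than the inflow peak:
print(max(routed) < max(hydrograph))  # True
```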
Fig. 7.4.2.3.2: Discharge at the Napoleon Bridge – comparison between the Nash,
Nash-Muskingum and CEMAGREF hydrographs
A few elements can help explain the differences between the methods used:
• The rain data were calculated with two approximate methods, which interpolate data for the
whole catchment from the 6 rain gauges we have. Better accuracy would require many more
rain gauges.
• There are quite a few uncertainties in the Tc and Tm formulas we used, because they depend
on the GIS data, and the slopes we used were open to discussion. The delayed peak time of
all our hydrographs compared to the measured one could be explained by this, but we tried to
calibrate against reality as much as possible.
• For a better hydrological calculation, we should have defined smaller subcatchments, but this
requires much more time.
• For the CN calculation (Smax), we computed the coefficients with the SCS-CN method,
which needs a lot of accurate information on the land use of the area that was not provided.
We tried changing many parameters and compared the results with the observed data. So many
possibilities are available that it is hard to choose one solution over another, especially when we
know that the CEMAGREF curve is itself inaccurate. That is why we decided not to change the
parameters related to physical reality too much, and instead to adjust the others, such as the
coefficient n of the Nash method.
This approach provides a good approximation of reality, but for that we need field data that are
as precise as possible.
5. HEC-HMS analysis
HEC-HMS Model
A hydrological model was prepared using HEC-HMS 3.1.0, based on the subcatchments defined
in ArcGIS. A non-gridded lumped basin model was built. The assumption of this model is that
each sub-basin of the watershed can be adequately represented by a small set of hydrologic
parameters which are, in effect, weighted averages over the entire sub-basin: any variation
within a sub-basin is lumped into the sub-basin total, and an average value is used in the
analysis.
The Kriging method was used to obtain the rainfall data corresponding to each subcatchment.
Here L is the longest overland flow path (feet) and S is the slope of the subcatchment (m/m).
Loss method
A subbasin element conceptually represents infiltration, surface runoff, and subsurface
processes interacting together. The actual infiltration calculations are performed by a loss
method contained within the subbasin. All of the possible loss methods in HMS conserve
mass. That is, the sum of infiltration and precipitation left on the surface will always be
equal to total incoming precipitation.
The SCS curve number method was used for the loss computations. The important
parameters of the method are the curve number and the percentage of imperviousness,
the latter taken from the GIS analysis; the impervious areas of the subcatchments
represent a very small percentage.
The details of the SCS curve number selection are given in the section of this report
explaining the Nash method. It should be stressed that the outflow discharge is very
sensitive to the selected curve number, which must therefore be chosen carefully.
Baseflow method
No baseflow has been considered in the simulation. Rainfall data were available for the
30 hours preceding the recorded discharge data.
Figure: Simulated discharge (m³/s) against time (hrs) from the HEC-HMS model
6. Conclusion