
İSTANBUL TECHNICAL UNIVERSITY - INSTITUTE OF SCIENCE AND TECHNOLOGY

ARTIFICIAL NEURAL NETWORK APPROACHES FOR


SLOPE STABILITY

M.Sc. Thesis by
Mert TOLON, B.Sc.

Department : Civil Engineering

Programme: Earthquake Engineering

JUNE 2007
İSTANBUL TECHNICAL UNIVERSITY - INSTITUTE OF SCIENCE AND TECHNOLOGY

ARTIFICIAL NEURAL NETWORK APPROACHES FOR


SLOPE STABILITY

M.Sc. Thesis by
Mert TOLON, B.Sc.
501051212

Date of submission : 4 May 2007

Date of defence examination: 13 June 2007

Supervisor (Chairman): Assoc. Prof. Dr. Derin N. URAL


Members of the Examining Committee: Assoc. Prof. Dr. Recep İYİSAN (İ.T.Ü.)

Assist. Prof. Dr. Mehmet BERİLGEN (Y.T.Ü.)

JUNE 2007
İSTANBUL TEKNİK ÜNİVERSİTESİ - FEN BİLİMLERİ ENSTİTÜSÜ

YAPAY SİNİR AĞLARI YÖNTEMİ KULLANILARAK


ŞEV STABİLİTESİNİN İNCELENMESİ

YÜKSEK LİSANS TEZİ


Mert TOLON
501051212

Tezin Enstitüye Verildiği Tarih : 4 Mayıs 2007


Tezin Savunulduğu Tarih : 13 Haziran 2007

Tez Danışmanı : Doç.Dr. Derin N. URAL

Diğer Jüri Üyeleri Doç. Dr. Recep İYİSAN (İ.T.Ü.)

Y.Doç.Dr. Mehmet BERİLGEN (Y.T.Ü.)

HAZİRAN 2007
PREFACE

I would like to express my grateful thanks to the respectful members of Earthquake


Engineering Division of the Civil Engineering Department at Istanbul Technical
University for giving me the chance to study for my MSc thesis. I especially would
like to express my sincere thanks to my supervisor Assoc. Prof. Derin N. URAL for
her guidance and insight throughout the research. I also want to thank to
Assoc. Prof. Recep IYISAN for giving valuable help during the preparation of this
thesis.
Finally, I am most grateful to my family for their endless supports. Preparation of
this thesis involved several months of long working hours; I could not have done it
without their insight and encouragement.

June 2007 Mert TOLON

ÖNSÖZ

İstanbul Teknik Üniversitesi İnşaat Mühendisliği Bölümü Deprem Mühendisliği


Anabilim Dalı’nın bana yüksek lisans yapma olanağı tanıyan saygıdeğer öğretim
üyelerine minnet dolu teşekkürlerimi bildirmek isterim. Özellikle danışmanım
Doç. Dr. Derin N. URAL’a çalışmalarım boyunca gösterdiği rehberlik ve anlayıştan
dolayı en içten teşekkürlerimi sunmak istiyorum. Ayrıca bu tezi hazırlamam
sırasında göstermiş olduğu yardımlardan dolayı Doç. Dr. Recep İYİSAN’a teşekkürü
bir borç biliyorum.
Son olarak, aileme de vermiş oldukları sonsuz destekten ötürü minnettarım. Bu tezin
hazırlanması aylar süren çalışma saatleri sonunda bitmiştir; onların anlayışı ve
cesaretlendirmesi olmadan bunu gerçekleştiremezdim.

Haziran 2007 Mert TOLON

TABLE OF CONTENTS

ABBREVIATIONS vii
LIST OF TABLES viii
LIST OF FIGURES ix
LIST OF SYMBOLS xi
ÖZET xii
SUMMARY xiii

1. INTRODUCTION 1
1.1. General 1
1.2. Short History of Artificial Intelligence Method 1
1.3. Objectives 2

2. LITERATURE REVIEW 4
2.1. Introduction 4
2.2. Types of slope failure modes 4
2.2.1. Short term stability 4
2.2.2. Long term stability 5
2.3. Factors affecting slope stability analyses 5
2.3.1. Failure plane geometry 5
2.3.2. Non-Homogeneity of soil layers 6
2.3.3. Tension crack 6
2.3.4. Dynamic loading 7
2.4. Methods of analyses 7
2.4.1. Planar failure surface 9
2.4.2. Circular failure surface 10
2.4.2.1. Fellenius method 10
2.4.2.2. Bishop method 12
2.4.2.3.Spencer's method 13
2.4.2.4. Obtaining the most critical circle 16
2.4.3. Non-Circular failure surface 17
2.4.3.1. Janbu's method 17
2.4.3.2. Morgenstern-Price method 19
2.4.3.3. Location of critical failure surface 20
2.4.4. Selection of method 20
2.5. Numerical methods for slope stability analysis 21
2.5.1. Finite difference method 21
2.5.2. Finite element method 22
2.6. Computer programs based on traditional methods 24
2.7. Slope stability analyses conditions 33

3. ARTIFICIAL INTELLIGENCE APPLICATIONS WITH NEURAL
NETWORKS 35
3.1 Introduction 35
3.2 Neural Network's Properties 35
3.2.1 Basic Structure Of Neural Network 35
3.2.2 Design Choices Of Neural Networks 39
3.2.3 Neural Network Architectures 40
3.2.3.1 GRNN architecture and learning algorithm 40
3.2.3.2 PNN architecture 45
3.2.3.3 Back propagation neural network architecture 47
3.2.3.4 Kohonen architecture 50
3.2.3.5 GMDH architecture 51
3.3 General Applications In Civil Engineering 52
3.3.1 Dynamic Soil–Structure Interaction Using Neural Networks For
Parameter Evaluation 52
3.3.2 A Neural Network Approach For Predicting The Structural Behavior
Of Concrete Slabs 52
3.3.3 Neural Network Analysis of Structural Damage Due to Corrosion 53
3.3.4 Artificial Neural Networks for Predicting The Response Of
Structural Systems with Viscoelastic Dampers 54
3.3.5 Modeling Ground Motion Using Neural Networks 55
3.3.6 Analysis Of Soil Water Retention Data Using Artificial
Neural Networks 55
3.3.7 Neural Network Based Prediction Of Ground Surface Settlements
Due To Tunnelling 57
3.3.8 Neural Network Modeling Of Water Table Depth Fluctuations 57
3.4 General Applications In Geotechnical Engineering 58
3.4.1 Pile Capacity 59
3.4.2 Settlement Of Foundations 60
3.4.3 Soil Properties And Behaviour 63
3.4.4 Liquefaction 65
3.4.5 Site Characterization 66
3.4.6 Earth retaining structures 66
3.4.7 Tunnels And Underground Openings 67

4. NEURAL NETWORK APPROACHES FOR SLOPE STABILITY 68


4.1 Introduction 68
4.2 Input Parameters Information 68
4.3 Analysis 70
4.3.1 BPNN approaches 70
4.3.1.1 Model 1 70
4.3.1.2 Model 2 74
4.3.2 GRNN approaches 77
4.3.2.1 Model 3 77
4.3.2.2 Model 4 80
4.3.2.3 Model 5 83

5. RESULTS 87

REFERENCES 94

APPENDIX A 99

APPENDIX B 103

APPENDIX C 107

APPENDIX D 108

APPENDIX E 112

APPENDIX F 116

APPENDIX G 120

APPENDIX H 122

CURRICULUM VITAE 128

ABBREVIATIONS

AI : Artificial Intelligence
ANN : Artificial Neural Network
BPNN : Back Propagation Neural Network
CPT : Cone Penetration Test
FEM : Finite Element Method
FS : Factor of Safety
GRNN : General Regression Neural Network
GMDH : Group Method of Data Handling
KLP : Kohonen Learning Paradigm
MAE : Mean Absolute Errors
NN : Neural Network
PDE : Partial Differential Equations
PNN : Probabilistic Neural Network
RMSE : Root Mean Squared Errors
SCANN : Site Characterization using Artificial Neural Network

LIST OF TABLES

Page No
Table 2.1 Comparison of features of methods……………………............. 20
Table 3.1 Summary of regression analysis results of pile capacity
prediction....................................................................................60
Table 3.2 Comparison of predicted vs measured settlements.................... 63
Table 4.1 Input and output values range for neural network...................... 69
Table 4.2 Model 1 approach for training................................................... 70
Table 4.3 The contribution factors for model 1.......................................... 71
Table 4.4 The results of model 1................................................................ 71
Table 4.5 An excerpt of the model 1 output table........................................ 73
Table 4.6 Model 2 approach for training.................................................... 74
Table 4.7 The contribution factors for model 2.......................................... 75
Table 4.8 The results of model 2................................................................. 75
Table 4.9 The architecture and the configuration of the model 3....................... 77
Table 4.10 Individual smoothing factors for model 3.................................. 78
Table 4.11 The results of model 3................................................................ 78
Table 4.12 The architecture and the configuration of the model 4.............. 80
Table 4.13 Individual smoothing factors for model 4.................................. 81
Table 4.14 The results of the model 4.......................................................... 82
Table 4.15 The architecture and the configuration of the model 5.............. 84
Table 4.16 Individual smoothing factors for model 5................................. 84
Table 4.17 The results of the model 5.......................................................... 85
Table 5.1 Model 1 and model 2 approach configurations and
architecture................................................................................. 87
Table 5.2 Output R² values for models 1 and 2............................................ 88
Table 5.3 The first five contribution factors for model 1 and model 2...... 89
Table 5.4 The architecture and the configuration of models 3, 4 and 5..... 89
Table 5.5 Output R² values for models 3, 4, and 5...................................... 89
Table 5.6 The first five individual smoothing factors for models 3, 4, 5 90
Table 5.7 Simulation success rates for each model.................................... 92

LIST OF FIGURES

Page No
Figure 1.1 : A basic slope figure........................................................................... 1
Figure 2.1 : Shear characteristics of over consolidated clay and corresponding
Mohr-Coulomb failure envelopes .................................................... 5
Figure 2.2 : Change of minimum F.S. with depth of tension crack for constant
c’ & Φ’................................................................................................ 7
Figure 2.3 : Examples of limit equilibrium methods............................................. 8
Figure 2.4 : Forces acting on a vertical slice........................................................ 9
Figure 2.5 : Circular failure surface and forces acting on a single slice................ 11
Figure 2.6 : Position of line of thrust..................................................................... 13
Figure 2.7 : Forces on a slice for Spencer’s method............................................. 14
Figure 2.8 : Variation of Fm and Ff with θ............................................................ 15
Figure 2.9 : Grid search pattern............................................................... 16
Figure 2.10 : Janbu’s correction factor for his simplified method ......................... 19
Figure 3.1 : Schematic diagram of a neuron’s network......................................... 35
Figure 3.2 : Neural networks structure.................................................................. 36
Figure 3.3 : Design choices for neural network application................................. 39
Figure 3.4 : The basic GRNN architecture............................................................ 41
Figure 3.5 : The GRNN architecture..................................................................... 42
Figure 3.6 : Linear activation function.................................................................. 43
Figure 3.7 : Logistic function................................................................................ 44
Figure 3.8 : Symmetric logistic function............................................................... 44
Figure 3.9 : Gaussian function............................................................................... 44
Figure 3.10 : The PNN architecture......................................................................... 45
Figure 3.11 : Probabilistic neural network layers.................................................... 46
Figure 3.12 : Mismatch of the function due to overfitting..................................... 50
Figure 3.13 : Comparison of theoretical settlement and neural network
prediction............................................................................................ 61
Figure 3.14 : Settlement predicted using traditional methods................................. 62
Figure 3.15 : Settlement prediction using artificial neural network........................ 62
Figure 4.1 : Basic slope profile and slope parameters........................................... 69
Figure 4.2 : Actual – network output scatter for model 1 and error limits............ 72
Figure 4.3 : Variables error through pattern and error limits for model 1............. 72
Figure 4.4 : Test set error graph for model 1......................................................... 73
Figure 4.5 : Actual-network output scatter for model 2 and error limits............... 75
Figure 4.6 : Variables error through pattern and error limits for model 2............. 76
Figure 4.7 : Test set error graph for model 2......................................................... 76
Figure 4.8 : Actual-network output scatter for model 3 and error limits............... 79
Figure 4.9 : Variables error through pattern and error limits for model 3............ 79
Figure 4.10 : Test set error graph for model 3......................................................... 80

Figure 4.11 : Actual-network output scatter for model 4 and error limits............... 82
Figure 4.12 : Variables error through pattern and error limits for model 4............. 83
Figure 4.13 : Test set error graph for model 4......................................................... 83
Figure 4.14 : Actual-network output scatter for model 5 and error limits............... 85
Figure 4.15 : Variables error through pattern and error limits for model 5............. 86
Figure 4.16 : Test set error graph for model 5......................................................... 86
Figure 5.1 : Test set error graph for model 1......................................................... 88
Figure 5.2 : Test set error graph for model 2......................................................... 88
Figure 5.3 : Test set error graph for model 3......................................................... 90
Figure 5.4 : Test set error graph for model 4......................................................... 91
Figure 5.5 : Test set error graph for model 5......................................................... 91

LIST OF SYMBOLS

b : The slice width


β : The inclination of slope
c : The cohesion of soil
∆l : Length of each individual segment into which the slip surface has
been subdivided
Φ : The friction angle of soil
γ : The unit weight of soil
γw : The unit weight of water
H : The height of slope
Hb : The depth of firm base
Hw : The height of water level
kh : Horizontal seismic coefficient
kv : Vertical seismic coefficient
λ : Scaling factor
Q : The resultant of the interslice forces
P : Total reaction to the base of the slice
P’ : The force due to the effective stress
si : Available shear strength
τ : The shear stress
u : The water pressure acting on the base of the slice
W : The total weight of the slice
Zo : Depth of the tension crack

YAPAY SİNİR AĞLARI YÖNTEMİNİ KULLANARAK ŞEV
STABİLİTESİNİN İNCELENMESİ

ÖZET

Ülkemizde inşaat mühendisliği disiplininde yapay sinir ağlarını kullanmak çok yeni
bir yöntemdir. Bu olgu genelde inşaat mühendisliği disiplininde hidrolik dalında ve
Geoteknik mühendisliğinde kullanılmıştır. Bu çalışmada şev stabilitesinin
incelenmesi yapay zeka mantığı kullanılarak incelenmiştir. Bu da şev stabilitesinde
deprem etkisinin incelenmesine farklı bir bakış açısı getirecektir.
Bu çalışmada 170 tane lokal bölgenin şev profili dataları kullanılmıştır. Çalışmada
kullanılan bu verilerin hazırlanışı ve kullanım şekili bölüm 4’de anlatılmıştır.
Yapay zeka mantığı yaklaşımında beş tane yapay sinir ağı mimarisi kullanılmıştır.
Bunlar BPNN, geri yayılmalı sinir ağı mimarisi ve GRNN, genel regresyonlu yapay
sinir ağı mimarisi, GMDH, gruplama methodu, Kohonen ve PNN, olasılık
yöntemidir. Ancak sadece BPNN, geri yayılmalı sinir ağı mimarisi ve GRNN, genel
regresyonlu yapay sinir ağı mimarisi model oluşturmakta kullanılmıştır. Bu
yaklaşımlarda 9 adet girdi ve 1 tane çıkış parametreleri verilmiştir. Çıkış parametresi
şev güvenlik katsayısı olup, girdi parametreleri şev yüksekliği ( H ), şev eğimi ( β ),
yeraltı suyu derinliği ( Hw ), sağlam zemin derinliği ( Hb ), kohezyon ( c ), zemin
içsel sürtünme açısı ( Φ ), kuru birim hacim ağırlığı ( γ ), düşey ve yatay sismik
zemin katsayıları ( Kh , Kv )‘dır. Bu çalışmadaki amaç sismik zemin katsayılarının
şev stabilitesindeki önemlerinin incelenmesidir.
Tüm modellemelerde ve datalarda sismik zemin katsayıları kullanılmış olup bu
yaklaşımdan beklenen sismik etkinin öneminin çıkarılmasıdır.
Sonuç olarak genel regresyon yapay sinir ağı modelinin daha başarılı olduğu ve
% 92.5 başarı yüzdesine sahip olduğu görülmüş, düşey ve yatay sismik zemin
katsayılarının şev yüksekliği, şev eğimi ve yeraltı suyu derinliğinden sonra şev
stabilitesindeki etkisinin önemli olduğu görülmüştür.

SLOPE STABILITY INVESTIGATION BY USING ARTIFICIAL NEURAL
NETWORK ANALYSIS

SUMMARY

Using neural network approaches is a new phenomenon for the civil engineering disciplines in Turkey. The approach has generally been used in the hydrology branch of civil engineering and in geotechnical engineering. In this study, slope stability was investigated using neural network approaches. This provides a new point of view for assessing the effects of earthquakes on slope stability safety.

In this study, data for 170 slope profiles and their properties are used. The preparation and use of these data are discussed in Chapter 4.

Five neural network architectures are considered in the artificial intelligence approach: the back propagation neural network ( BPNN ), the general regression neural network ( GRNN ), the group method of data handling ( GMDH ), the Kohonen learning paradigm and the probabilistic neural network ( PNN ). However, only two of them, the back propagation neural network ( BPNN ) and the general regression neural network ( GRNN ), are used to build the models.

There are 9 input parameters and 1 output parameter. The output parameter is the factor of safety of the slope ( F.S. ); the input parameters are the height of slope ( H ), the inclination of slope ( β ), the height of the water level ( Hw ), the depth of the firm base ( Hb ), the cohesion of soil ( c ), the friction angle of soil ( Φ ), the unit weight of soil ( γ ), and the horizontal and vertical seismic coefficients ( kh , kv ). The aim is to determine the importance of the seismic coefficients for slope stability safety.

For all of the architectures, the models are solved including the effects of the seismic coefficients ( kh , kv ). From this approach it is expected to see the earthquake impact on a slope.

In conclusion, this study shows that the general regression neural network (GRNN) approach is the more appropriate model, with a 92.5% success rate for forecasting the effect of earthquakes on slope stability safety, and that the importance of the horizontal and vertical seismic coefficients ranks after the height of the slope ( H ), the inclination of the slope ( β ) and the height of the water level ( Hw ).

1. INTRODUCTION

1.1 General

This study was prepared to investigate the effects of the seismic coefficients on a slope and the slope stability response using Neural Network (NN) approaches. A basic slope geometry used in assessing seismic loading is given in Figure 1.1. For the purpose of engineering design, these effects are generally expressed in terms of the slope parameters.

Figure 1.1 : A basic slope figure


The parameters of a slope determine its safety, so a Neural Network ( NN ) can be used to find out which parameters are most important. These parameters are discussed in Sections 3 and 4.

1.2 Short History of Artificial Intelligence ( AI ) Method

Artificial intelligence ( AI ) is the study of ideas brought into machines that respond
to stimulation, consistent with traditional responses from humans, given the human
capacity for contemplation, judgment and intention. Each such machine should
engage in critical appraisal and selection of differing opinions within itself. Produced
by human skill and labor, these machines should conduct themselves in agreement
with life, spirit and sensitivity, though in reality they are imitations. Human beings
have attained the ability to respond to the world by bringing to bear their own
previous experience, or the experience of others, and AI functions in the same way.

Artificial Intelligence is a valuable tool for representing real-world realities. The birth
of artificial intelligence is attributed to the first "intelligent" machine concept
developed by Alan Turing, a scientist at Cambridge, UK. The famous "Turing
Machine", which many consider the foundation stone of artificial intelligence,
found its first use in the equally famous Enigma decryption project during
World War II. Nevertheless, despite the "mythological" and science fiction depictions of
Artificial Intelligence (AI), its true development was closely related to the
development of computers in the post-war era. The use of computers allowed
Artificial Intelligence (AI) to pursue its real purpose, that is, the attempt to
understand, replicate and analyze intelligent entities and processes of our world. As
time progressed, the study of artificial intelligence took its first steps. However, the
"boost" in research and analytical exploration of Artificial Intelligence became
possible once researchers, organizations and institutions were able to perform
computer-based operations and experiments, especially in the late 1970s and
1980s. Today, artificial intelligence is a part of our everyday life. AI is used in a
constantly growing number of applications and processes we use in our daily
life. Web searches, economic models, computer games, automobile processors, etc.,
are only some of the best-known applications in which AI methods have found their
implementation.

1.3 Objectives

Turkey is a country where destructive earthquakes occur frequently. Since
earthquakes occur in densely populated regions, earthquake loading and the effects of
soil are extremely important.

The purpose of this study is to examine the effect of the seismic coefficients on the
safety factor of a slope via artificial intelligence, rather than conventional and
well-known methodologies. The advantages of this innovative Artificial Intelligence
approach can be listed as follows:

In the future, it may allow forecasting of foundation and soil effects, or of damage under
earthquake loading. Past events can be taught to Neuroshell2 (the Neural
Network program used in this study), and past events are indicators of
events which might occur in the future.

Just as constructing robots or recognizing human voices on mobile phones are
artificial intelligence applications, knowledge programs can be built from
earthquake data for regions with high risk potential.

By way of Neural Network approaches, inertial interaction can be generalized

and forecast.

In this study, the factor of safety and the effects of the seismic coefficients will be investigated
with NN approaches. It should be noted that the General Regression Neural Network
(GRNN) and the Back Propagation Neural Network (BPNN) will be used.

2. LITERATURE REVIEW

2.1 Introduction

Slope stability is usually analyzed by methods of limit equilibrium. Historically these


methods were developed before the advent of computers; computationally more
complex methods followed later. These methods require information about the
strength parameters and the geometrical parameters of the soil and rock mass. The
factor of safety ( FS ) is defined as the ratio of reaction over action, expressed in
terms of moments and forces and eventually in terms of stresses, depending on the
geometry of the assumed slip surface. The way and methods of calculating FS values
are given below.

2.2 Types of Slope Failure Modes

A stability check is made for two different failure modes when analyzing the
safety of a slope. The slope can fail either during the excavation or long after the
construction is completed. Therefore, the stability of the slope must be checked for
both short term and long term conditions.

2.2.1 Short Term Stability


Short term stability conditions apply after a cut is made in a slope. In excavating for
a cut, shear stresses are induced that may cause failure in the undrained state. The
total stress strength parameter, the cohesion c, is used for short term stability. Where
field observations and laboratory analyses of soil samples show that the friction angle
of the soil is zero (Φ = 0), the total stress method is satisfactory for the short term
stability analysis of non-fissured clay. For fissured overconsolidated clays, the Φ = 0
analysis can also be employed by taking into account the reduced shear strength due
to the amount and magnitude of fissuring in the soil.

2.2.2 Long Term Stability
Long term stability is encountered in natural slopes and is considered in analyzing
the stability of embankments. The effective stress method of analysis is used for
the long term stability analysis of both non-fissured and overconsolidated fissured
clays. The effective stress parameters, c’ and Φ’, must be used to analyze the long term
stability of slopes. Pore water pressures may be assumed to be in equilibrium
and are determined from considerations of steady seepage. Skempton (1964)
suggested the use of the residual shear strength concept for the long term analysis of
slopes in overconsolidated clays. The residual shear strength can be obtained from slow
drained shear tests. Figure 2.1 shows the shear strength characteristics of an
overconsolidated clay in terms of effective stress. Discussions on the method of selection
of the strength parameters for stability investigations are given by Lowe ( 1967 ),
Schuster ( 1968 ) and Duncan and Dunlop ( 1969 ).

Figure 2.1 : Shear characteristics of over consolidated clay and corresponding Mohr-
Coulomb failure envelopes ( Fang , 1991 )

2.3 Factors Affecting Slope Stability Analysis

There are a number of factors that affect slope stability analysis. The major
factors include failure plane geometry, non-homogeneity of soil layers,
tension cracks, dynamic loading or earthquakes, and seepage flow. These
major factors are explained below.

2.3.1 Failure Plane Geometry


The geometry of the failure plane is assumed to be circular or non-circular. Non-
circular surfaces include logarithmic spiral and simple wedge geometry. These are
commonly known as general failure surfaces.

The use of the circular arc and logarithmic spiral failure planes for stability analysis
has been discussed by Spencer ( 1969 ) and Chen ( 1970 ). Spencer ( 1969 )
suggested that the circular curve is more critical than the logarithmic spiral arc. Chen
( 1970 ) concluded that the analysis of slope stability is not sensitive to the shape of
the failure plane.

2.3.2 Non-Homogeneity of Soil Layers


Depending upon the environmental conditions of deposition and subsequent stress
changes during geological history, soil strength parameters may be isotropic.
However, most soils are anisotropic. Lo (1965) developed a general method of
stability analysis for anisotropic soils, which shows that the effect of anisotropy is small for
steep slopes. For flatter slopes, the influence of anisotropy on stability is significant
and cannot be ignored.

2.3.3 Tension Crack


Tension cracks generally occur near the crest of a slope. The crack reduces the overall
stability of a slope by decreasing the cohesion which can be mobilized along the
upper part of a potential failure surface. Therefore, the factor of safety of a slope
varies with the depth of the tension crack. While the change in depth of a tension
crack can be quite large, the corresponding change in the numerical value of the
factor of safety is not significant. Figure 2.2 shows the change of the minimum factor of
safety with the change in crack depth for an assumed Φ’ - c’ soil under drained
conditions. The depth of water increases when the depth of the crack increases. The
effect of water pressure in a tension crack on the position of the critical circle is found to
be rather small. However, the factor of safety decreases as the depth of water in a
tension crack increases.

If the soil strength is purely cohesive, as for clay soils in the undrained state, the
depth of the tension crack ranges from 2 to 4 times c / γ ( Bromhead , 1986 ). The
following formula can also be used to determine the depth of the tension crack

Zo = ( 2c′ / γ )·tan( 45 + φ′ / 2 )   ( 2.1 )

where Zo is the depth of the tension crack. Tension cracks can be very deep and
can sometimes even penetrate to the water table.
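As a quick numerical check of Equation 2.1, the sketch below evaluates the tension crack depth for an assumed set of soil parameters; the function name and input values are illustrative only, not taken from the thesis data.

```python
import math

def tension_crack_depth(c_eff, gamma, phi_eff_deg):
    """Depth of the tension crack, Eq. (2.1): Zo = (2c'/gamma) * tan(45 + phi'/2)."""
    return (2.0 * c_eff / gamma) * math.tan(math.radians(45.0 + phi_eff_deg / 2.0))

# Illustrative (assumed) values: c' = 20 kPa, gamma = 18 kN/m3, phi' = 25 degrees
print(round(tension_crack_depth(20.0, 18.0, 25.0), 2), "m")
```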

Figure 2.2: Change of minimum F.S. with depth of tension crack for constant c’ & Φ’
( Fang , 1991 )

2.3.4 Dynamic Loading


The effect of dynamic loading, including that due to earthquakes, on slope stability
should also be considered. After the 1960s, researchers started to study the
relationship between dynamic loading and slope stability: Seed and Goodman ( 1967 )
studied the yield acceleration of slopes in cohesionless soils, and Finn ( 1966 ) reported
on the earthquake stability of cohesive slopes. Methods for evaluating slope response to
earthquakes and design procedures for earthquakes are given by Seed ( 1966 ,
1967 ), Sherard ( 1967 ) and Majundar ( 1971 ). Based on laboratory tests, Ellis
and Hartman ( 1967 ) reported that the dynamic strength of a soil may be less than or
greater than its strength under static loading.

2.4 Methods of Analysis

There are a number of methods available for performing slope stability
analysis, but the majority of these methods may be categorized as limit equilibrium
methods. The basic assumption of the limit equilibrium method is that Coulomb’s
failure criterion is satisfied along the failure surface. It is widely used for slope
stability problems. However, it neglects the soil stress-strain relationship in that the
soil is assumed to move as a rigid block.

To begin the analysis, a trial failure surface for the slope is assumed. A free
body or slice is then taken from the slope, and the available shear resistance is compared
to the mobilized shear stress of the soil to give an indication of
the factor of safety.

The Culmann method and the friction circle method ( Taylor , 1948 ) consider the
equilibrium of the whole free body, as shown in Figure 2.3.

Figure 2.3 : Examples of limit equilibrium methods ( Fang , 1991 )

The Swedish circle method ( Fellenius , 1927 ) and the methods of Bishop ( 1955 ), Bishop and
Morgenstern ( 1960 ), Morgenstern ( 1963 ), and Spencer ( 1967 ) are
based on the method of slices with minor variations. The method of slices approach
is to divide the free body into many vertical slices and to consider the equilibrium of
each slice. The safety of the slope is found by summing the contributions of all slices.

In addition to the above mentioned methods, Hunter and Schuster ( 1968 , 1971 ),
Huang ( 1975 , 1980 ), and Koppular ( 1948 ) discuss limit equilibrium methods.

Lowe and Karafiath ( 1960 ) and Janbu ( 1954 ) developed methods which are also in
the category of the limit equilibrium method. These methods are explained below,
classified according to the geometry of the slope failure surface.

2.4.1 Planar Failure Surface


A slope that is uniform and very long relative to the depth of the potentially unstable
layer may often be analyzed as a planar failure slope. The general model is shown in
Figure 2.4.

Figure 2.4 : Forces acting on a Vertical Slice ( Mostyn and Small , 1987 )

As can be seen, the failure plane is taken to be parallel to and at a depth, d, below the
ground surface having an inclination α with the horizontal. The assumption that the
slope is very long and uniform implies that any vertical slice is similar to all others.
Thus the side forces must be equal in magnitude, opposite in direction and co-linear.
Groundwater flow is usually taken to be parallel to the ground surface with the
phreatic surface at a distance dW above the failure plane. For a material with a Mohr-
Coulomb failure criterion the factor of safety, FS, of the slope is given by the
following expression ( Das, 1993 ):

FS = [ c′ + ( γ·d − γw·dw )·cos²α·tanφ′ ] / ( γ·d·sinα·cosα )   ( 2.2 )

where c′ is the effective cohesion of the soil, γ is the unit weight of soil, γw is the unit weight
of water and Φ′ is the effective angle of friction.

The derivation of the factor of safety for a slope with a planar failure surface is
presented in most textbooks on soil mechanics or slope stability. The effective
cohesion is often ignored or assumed to be zero, in which case Equation 2.2 simplifies
to:

FS = [ 1 − ( γw·dw ) / ( γ·d ) ] · ( tanφ′ / tanα )   ( 2.3 )
If the water table is at or below the failure plane then the slope is at limiting
equilibrium when the slope angle equals the effective angle of friction. If the water
table is at the surface then the slope angle at limiting equilibrium is near half the
effective angle of friction.
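The infinite slope expressions above are simple enough to script directly. The sketch below, with assumed parameter names and illustrative input values, evaluates Equation 2.2 and its cohesionless simplification, Equation 2.3.

```python
import math

def fs_infinite_slope(c_eff, phi_eff_deg, gamma, d, gamma_w, d_w, alpha_deg):
    """Factor of safety of an infinite slope with seepage parallel to the surface (Eq. 2.2)."""
    a = math.radians(alpha_deg)
    phi = math.radians(phi_eff_deg)
    num = c_eff + (gamma * d - gamma_w * d_w) * math.cos(a) ** 2 * math.tan(phi)
    den = gamma * d * math.sin(a) * math.cos(a)
    return num / den

def fs_cohesionless(phi_eff_deg, gamma, d, gamma_w, d_w, alpha_deg):
    """Simplified form for c' = 0 (Eq. 2.3)."""
    a = math.radians(alpha_deg)
    phi = math.radians(phi_eff_deg)
    return (1.0 - gamma_w * d_w / (gamma * d)) * math.tan(phi) / math.tan(a)

# Illustrative (assumed) values: 4 m deep layer, water table 1 m above the failure plane
print(fs_infinite_slope(c_eff=5.0, phi_eff_deg=30.0, gamma=19.0, d=4.0,
                        gamma_w=9.81, d_w=1.0, alpha_deg=20.0))
print(fs_cohesionless(phi_eff_deg=30.0, gamma=19.0, d=4.0,
                      gamma_w=9.81, d_w=1.0, alpha_deg=20.0))
```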

2.4.2 Circular Failure Surface


For many slope failures, the surfaces along which sliding takes place are found to be
non-planar or curved, leading to the idea of using curved failure surfaces for the
analysis of slope stability ( Spencer 1973, Chen and Shao 1988 ). Although the actual
failure surfaces in most cases are bowl-shaped, the representation of a failure surface
as a single curve ( in two dimensions ) greatly simplifies the analysis.

Early solutions for circular surfaces were obtained by Fellenius ( 1927 ), who used
the method of slices, and by Taylor ( 1937, 1948 ), who used a friction circle method
to produce charts of “stability numbers” to determine factors of safety against slope
failure. Most modern slip circle methods make use of the method of slices,
and the major differences between these methods involve the way in which the
unknown quantities that arise in the analysis are dealt with. Some of the methods for
analysis of circular failure surfaces using the method of slices are presented in the
following sections.

2.4.2.1 Fellenius Method


This method assumes that for any slice, the forces acting upon its sides have a
resultant of zero in the direction normal to the failure arc. This method has errors on
the safe side, but is widely used in practice because of its early origins and
simplicity. Figure 2.5 shows the region above the assumed circular failure surface
divided into slices and a free body diagram of a single slice with all of the forces
acting on it, and the unknown points of application of the forces. As there are too
many unknowns to obtain a solution, some assumptions must be made about the
forces and their locations. The interslice forces ( Xn ; En ) are assumed to be equal
and opposite on adjacent slices and therefore they cancel each other. Taking moments
about the center of the circle and assuming that everywhere along the failure surface
the amount of shear stress mobilized, τm, is the same fraction of the total shear stress
available ( τm = ( c′ + σ′·tanφ′ ) / F ), we obtain:

Figure 2.5 : Circular failure surface and forces acting on a single slice
( Fellenius , 1927 )

FS = Σ[ c′·b·secα + ( W·cosα − u·b·secα )·tanφ′ ] / Σ( W·sinα )   ( 2.4 )

where c′ is the effective cohesion, b is the slice width, α is the angle of the base of
the slice to the horizontal, W is the total weight of the slice, u is the water pressure
acting on the base of the slice, Φ′ is the effective angle of friction, and the
summation implies an addition over all slices.
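Equation 2.4 lends itself to a direct summation over slices. The following sketch assumes each slice is described by its width, total weight, base pore pressure and base inclination, and uses uniform strength parameters; the names and the sample slice data are illustrative assumptions, not values from the thesis.

```python
import math

def fs_fellenius(slices, c_eff, phi_eff_deg):
    """Ordinary (Fellenius) method of slices, Eq. (2.4).
    slices: iterable of (b, W, u, alpha_deg) per slice."""
    phi = math.radians(phi_eff_deg)
    resisting, driving = 0.0, 0.0
    for b, W, u, alpha_deg in slices:
        a = math.radians(alpha_deg)
        sec_a = 1.0 / math.cos(a)
        resisting += c_eff * b * sec_a + (W * math.cos(a) - u * b * sec_a) * math.tan(phi)
        driving += W * math.sin(a)
    return resisting / driving

# Three illustrative slices: (width m, weight kN/m, pore pressure kPa, base angle deg)
slices = [(2.0, 150.0, 10.0, 10.0), (2.0, 220.0, 20.0, 25.0), (2.0, 140.0, 5.0, 40.0)]
print(fs_fellenius(slices, c_eff=15.0, phi_eff_deg=28.0))
```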

2.4.2.2 Bishop’s Method

This method was developed by Bishop in 1955, and improved upon the method of
slices developed by Fellenius ( 1936 ). The method is based on the statical analysis
of the mass which is considered to be made up of vertical slices. Equilibrium of
forces in the vertical direction is satisfied for each slice and the equilibrium of
moments about the center point of the trial arc is satisfied for each slice. Equilibrium
is also satisfied for the entire soil mass, consisting of all slices, above the trial arc.
The factor of safety is calculated by dividing the sum of the resisting moments by the
sum of the moments that cause failure.

For a mathematically correct static solution, equilibrium of forces and moments must
exist for each slice as well as for all of the slices. Bishop’s rigorous formulation
contains too many unknowns to enable a direct solution. Some assumptions must be
made regarding the distribution of some of the unknown quantities and for this
method assumptions are made concerning the distribution of X force. The position of
the line of thrust yt ( Figure 2.6 ) of these X forces must be such that the moment
equilibrium of each slice is maintained. As pointed out by Sarma ( 1979 ), Bishop did not
consider the point of action of the normal force on the base of the slice, thereby
eliminating another group of unknowns from the problem.

Using Bishop’s original and now somewhat familiar notation, the expression for the
factor of safety against a slip failure is expressed as :

F = Σ[ ( c′·b + ( W − u·b + ΔX )·tanφ′ ) / mα ] / Σ( W·sinα )   ( 2.5 )

where:

ΔX = Xn − Xn+1   ( 2.6 )

mα = cosα·( 1 + tanα·tanφ′ / F )   ( 2.7 )

b is the slice width, W is the total weight of the slice, c′ is the effective cohesion, Φ′
is the effective angle of friction, u is the water pressure acting on the base of the
slice, and α is the angle of the base of the slice to the horizontal.

Figure 2.6 : Position of Line of Thrust ( Fellenius , 1927 )
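Because mα in Equation 2.7 contains F, Equation 2.5 must be solved iteratively; a simple fixed-point iteration usually converges in a few cycles. The sketch below reuses the slice description assumed in the Fellenius example and takes ΔX = 0, i.e. the simplified Bishop method; the function and variable names are assumptions, not the thesis's own notation.

```python
import math

def fs_bishop_simplified(slices, c_eff, phi_eff_deg, tol=1e-4, max_iter=100):
    """Simplified Bishop method (Eqs. 2.5 and 2.7 with delta-X = 0), fixed-point iteration.
    slices: iterable of (b, W, u, alpha_deg) per slice."""
    phi = math.radians(phi_eff_deg)
    F = 1.0                                    # initial guess
    for _ in range(max_iter):
        num, den = 0.0, 0.0
        for b, W, u, alpha_deg in slices:
            a = math.radians(alpha_deg)
            m_alpha = math.cos(a) * (1.0 + math.tan(a) * math.tan(phi) / F)
            num += (c_eff * b + (W - u * b) * math.tan(phi)) / m_alpha
            den += W * math.sin(a)
        F_new = num / den
        if abs(F_new - F) < tol:
            return F_new
        F = F_new
    return F

slices = [(2.0, 150.0, 10.0, 10.0), (2.0, 220.0, 20.0, 25.0), (2.0, 140.0, 5.0, 40.0)]
print(fs_bishop_simplified(slices, c_eff=15.0, phi_eff_deg=28.0))
```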

2.4.2.3 Spencer’s Method


Spencer developed this method in 1967 to determine the factor of safety of a slope
against failure on a trial slip surface. The analysis is in terms of effective stress. It
leads to two equations of equilibrium, force equilibrium and moment equilibrium. As
in Bishop's method, the soil mass within the slip surface is divided into vertical
slices. In each slice, the resultant of the forces and the sum of the moments of the
forces must both be zero.

The factor of safety is defined as the ratio of the total shear strength available, S, on
the slip surface to the total shear stress mobilized, Sm, required to maintain equilibrium.

F = S / Sm   ( 2.8 )
A sketch of a slice with the forces acting upon it is shown in Figure 2.7. The forces are
as follows:

The weight, Wi.

The total reaction, P, normal to the base of the slice ( the force P′ due to the effective
stress and the force u·b·secα due to the pore pressure, u ); thus:

P = P′ + u·b·secα   ( 2.9 )

The mobilized shear force,

Sm = S / F   ( 2.10 )

where,

S = c′·b·secα + P′·tanφ′   ( 2.11 )

Sm = ( c′·b / F )·secα + ( P′·tanφ′ ) / F   ( 2.12 )
The interslice forces, Zn and Zn+1; from equilibrium, the resultant Q of these two
forces must pass through the point of intersection of the three other forces.

Figure 2.7 : Forces on a slice for Spencer’s method ( Spencer , 1967 )

By resolving the forces shown in Figure 2.7 normal and parallel to the base of the
slice, the resultant, Qi, of the interslice forces can be written:

Qi = [ ( c′·bi / F )·secαi + ( tanφ′ / F )·( Wi·cosαi − ui·bi·secαi ) − Wi·sinαi ] /
     [ cos( αi − θi )·( 1 + ( tanφ′ / F )·tan( αi − θi ) ) ]   ( 2.13 )
For force equilibrium of the whole mass, the sum of both the horizontal and vertical
components of the inter slice forces must be zero.

Σ Qi·cosθi = 0   ( 2.14 )

Σ Qi·sinθi = 0   ( 2.15 )
Furthermore, for moment equilibrium, the sum of the moments of the interslice
forces about the center of rotation must also be zero.

Σ [ Qi·R·cos( αi − θi ) ] = 0   ( 2.16 )
Since the slip surface is assumed to be circular,

Σ [ Qi·cos( αi − θi ) ] = 0   ( 2.17 )

Assuming the interslice forces are parallel,

Σ Qi = 0   ( 2.18 )
Spencer also described the following procedure to solve for F, Q and θ.

A circular slip surface is chosen arbitrarily, and the area inside the slip surface is divided
into vertical slices of equal width. The mean height, h, and base slope, α, of each slice
are determined graphically.

Several values of θ are chosen and, for each, the values of F are found which satisfy
Equations 2.17 and 2.18. The value of F obtained using Eqn. 2.18 is designated Ff,
and that obtained using Eqn. 2.17 is designated Fm. The value of the factor of safety
obtained from moment equilibrium with θ taken as zero is also designated Fm.

The resulting values of Ff are plotted versus θ. On the same graph, a second curve is
plotted as Fm versus θ. A typical graph is shown in Figure 2.8. The intersection of the
two curves gives the value of the factor of safety, F, which satisfies both Eqn.
2.17 and Eqn. 2.18. The corresponding inclination, θ, of the interslice forces is also obtained.

The values of F and θ are then substituted into Eqn. 2.13 to obtain the values of the
resultant of the inter slice forces. Then, working from the first slice to the last, the
values of the inter slice forces are obtained.

Figure 2.8 : Variation of Fm and Ff with θ ( Spencer , 1967 )

The required ( critical ) factor of safety is obtained for the case Fm = Ff = FS. In the
example shown, this factor of safety, FS = 1.07, and the corresponding interslice force angle,
θ = 22.5°, can then be used to determine all the interslice forces and their
line of thrust. The difference in the factor of safety obtained using Spencer’s method
as compared to Bishop’s method is not large; it was noted by Spencer (1968) that
the difference between the two methods rarely exceeded 1%.
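Spencer's procedure can be sketched as two nested root-finding problems: for a trial θ, find the F that satisfies Equation 2.18 (Ff) and the F that satisfies Equation 2.17 (Fm), then vary θ until the two coincide. The code below is a minimal sketch under several assumptions: uniform strength parameters, slices given as (b, W, u, alpha in degrees) tuples, and bracketing intervals that are presumed to contain the roots.

```python
import numpy as np
from scipy.optimize import brentq

def q_i(F, theta, b, W, u, alpha_deg, c_eff, phi_eff_deg):
    """Resultant of the interslice forces on one slice, Eq. (2.13)."""
    a = np.radians(alpha_deg)
    tan_phi = np.tan(np.radians(phi_eff_deg))
    num = (c_eff * b / F) / np.cos(a) \
        + (tan_phi / F) * (W * np.cos(a) - u * b / np.cos(a)) \
        - W * np.sin(a)
    den = np.cos(a - theta) * (1.0 + (tan_phi / F) * np.tan(a - theta))
    return num / den

def f_force(theta, slices, c_eff, phi_eff_deg):
    """F that satisfies overall force equilibrium, sum(Q_i) = 0 (Eq. 2.18)."""
    residual = lambda F: sum(q_i(F, theta, *s, c_eff, phi_eff_deg) for s in slices)
    return brentq(residual, 0.3, 10.0)        # bracket assumed to contain the root

def f_moment(theta, slices, c_eff, phi_eff_deg):
    """F that satisfies moment equilibrium, sum(Q_i cos(alpha_i - theta)) = 0 (Eq. 2.17)."""
    residual = lambda F: sum(q_i(F, theta, *s, c_eff, phi_eff_deg)
                             * np.cos(np.radians(s[3]) - theta) for s in slices)
    return brentq(residual, 0.3, 10.0)

def spencer(slices, c_eff, phi_eff_deg):
    """Interslice force inclination theta at which Ff = Fm, and the common factor of safety."""
    gap = lambda th: (f_force(th, slices, c_eff, phi_eff_deg)
                      - f_moment(th, slices, c_eff, phi_eff_deg))
    theta = brentq(gap, 1e-3, np.radians(45.0))   # search range for theta is an assumption
    return f_force(theta, slices, c_eff, phi_eff_deg), np.degrees(theta)
```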

2.4.2.4 Obtaining The Most Critical Circle


Whichever of the methods of obtaining the factor of safety is used, a number of trial
circles must be taken and analyzed in order to obtain the one that gives the least
factor of safety ( Barker , 1980 ). As most analyses are done by computers the
process of analyzing a few hundred trial circles may be relatively quick and
inexpensive in today’s computing environment.

Computer programs need some type of algorithm upon which the search for the slip
surface with the minimum factor of safety is based. One of the most commonly used
methods is to specify a grid on which the centers of trial slip circles lie ( Figure 2.9 ).
Contours of the minimum factor of safety at each center on the grid can be plotted in
order to determine where the critical center may lie.

Figure 2.9 : Grid search pattern ( Mostyn and Small , 1987 )

The amount of computation required to find the critical circle may be greatly reduced
by using a technique by which one can automatically locate the center coordinates
and radius of the circle yielding the minimum factor of safety. Such a technique has
been described by Boutrup and Lovell ( 1980 ), who used the simplex reflection
method. To explain how the method works, consider the problem of finding the
factor of safety for a two dimensional circular slip surface. The problem basically
involves finding the coordinates a, b of the center and radius r of the circle which
minimize the factor of safety, FS. This is done by evaluating FS at the four corners of
a tetrahedron defined in x , y, r space. The value of factor of safety found at each
corner may then be used to decide in which direction to move to obtain a lower
factor of safety. Obviously this will be away from the vertex of the tetrahedron with
the highest factor of safety. Depending on the coordinates and radii given to start the
search, the minimum factor of safety can be found quite quickly.
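The grid-and-radius search described above maps naturally onto nested loops. The sketch below assumes an external function fs_for_circle(xc, yc, r), for example a Bishop calculation on the slices cut by that circle, and simply scans a grid of centres and a range of radii, keeping the minimum; all names and ranges are illustrative assumptions.

```python
import itertools

def critical_circle(fs_for_circle, x_range, y_range, r_range):
    """Brute-force grid search over circle centres (xc, yc) and radii r.
    fs_for_circle(xc, yc, r) is assumed to return the factor of safety for that circle,
    or None if the circle does not form a valid slip surface."""
    best = (float("inf"), None)
    for xc, yc, r in itertools.product(x_range, y_range, r_range):
        fs = fs_for_circle(xc, yc, r)
        if fs is not None and fs < best[0]:
            best = (fs, (xc, yc, r))
    return best          # (minimum FS, corresponding circle)

# Example call with illustrative search ranges (metres):
# fs_min, circle = critical_circle(fs_for_circle,
#                                  x_range=[0.5 * i for i in range(41)],
#                                  y_range=[20 + 0.5 * i for i in range(21)],
#                                  r_range=[5 + 0.5 * i for i in range(31)])
```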

2.4.3 Non – Circular Failure Surface


If the shear strength is non-uniform within a slope, then the failure surface with the
minimum factor of safety will not necessarily be a circle but the shape will depend
on the distribution of shear strength. Sometimes the general shape of the critical
failure surface will be highly constrained by the distribution of weak zones within the
slope; other times it may require a lot of insight or work to find the critical surface or
at least some surface with similar stability.

Analysis of circular failure surfaces is easier than that of non-circular or generalized


failure surfaces. This is because moments taken about the center of a circular failure
surface result in a zero moment arm for the normal forces acting on the failure
surface and a constant moment arm for the cohesive forces on the failure surface.
Nevertheless the moments for the entire mass or each slice can be taken about any
point or points that are convenient and failure surface of any shape can be adopted.
This approach is used in analyzing generalized failure surfaces. Some of these
methods are given below.

2.4.3.1 Janbu’s Method


From the mid-1950s to the early 1970s, Janbu developed generalized and simplified
methods which are best described in Janbu ( 1973 ). In the generalized method, a
line of thrust is assumed and the equations of equilibrium are solved. Sarma ( 1979 )
pointed out that this is not a rigorous solution because moment equilibrium of the last
slice is not satisfied; this affects the line of thrust but does not greatly affect the
factor of safety. Janbu ( 1973 ) noted that the factor of safety is relatively insensitive
to the assumption regarding the location of the line of thrust as long as it is
reasonable. According to Janbu ( 1973 ), the line of thrust should be near one third of
the height of the slice for cohesionless soils; it should be below this level in the
active zone and above it in the passive zone for cohesive soils. This method
sometimes gives answers that differ quite markedly from those obtained by other
methods such as the Bishop method. The simplified Janbu method is based on satisfying only force
equilibrium; it assumes zero interslice shear forces and does not satisfy moment
equilibrium. However, it does satisfy vertical force
equilibrium and overall horizontal force equilibrium.

The normal effective stress at the base of each slice can be determined from the
following equation:

N′ = [ −Uα·cosα − Sm·sinα + W·( 1 − kv ) + Uβ·cosβ + Q·cosδ ] / cosα   ( 2.19 )

The overall horizontal force equilibrium for the slide mass is determined from the
following:
Σ [FH]i = Σ [ ( N′ + Uα )·sinα + W·kh + Uβ·sinβ ] + Σ Q·sinδ − Σ [ ( ( C + N′·tanφ ) / F )·cosα ] = 0   ( 2.20 )

where each summation is taken over all n slices ( i = 1 to n ).

It then follows that the Factor of Safety F can be determined with the following
equation:
F = Σ [ ( C + N′·tanφ )·cosα ] / Σ [ A4 + N′·sinα ]   ( 2.21 )

A4 = Uα·sinα + W·kh + Uβ·sinβ + Q·sinδ   ( 2.22 )


The Simplified Janbu Method does not satisfy moment equilibrium for the slide
mass, as mentioned earlier. Therefore, Janbu performed more rigorous solutions and
compared the results to those found using his simplified method. He then presented
the chart shown in Figure 2.10 to correct the factor of safety obtained from his simplified solution.

Figure 2.10 : Janbu’s correction factor for his simplified method

FJanbu = f0 · Fcalculated
2.4.3.2 Morgenstern - Price Method
This is perhaps the best known and most widely used method developed for
analyzing generalized failure surfaces. The method was initially described by
Morgenstern and Price ( 1965 ). It satisfies all static equilibrium requirements and is,
therefore, a rigorous method, but the solution obtained must be checked for
acceptability. The overall problem is made determinate by assuming a functional
relationship between the interslice shear force and the interslice normal force. The
function is referred to as f(x), and most programs implementing the method provide a
choice of such functions. Choosing such a function actually over-determines the
problem, and thus part of the solution is to determine a scaling factor, λ. The function
f(x) defines the relative inclination of the interslice forces, while λ defines their
absolute magnitude. Thus the interslice forces on the left hand side of a slice are
related by the following equation:

X = λ·f(x)·E   ( 2.23 )
The solution procedure proposed by Morgenstern and Price ( 1965 ) differs from that
adopted by most investigators in that the problem was formulated using differential
equations that were integrated over each slice. The Morgenstern and Price ( 1965 )
method does not make the assumption that the normal force on the base of each slice
acts at the center of the slice. Thus, the accuracy of the other methods increases as the
slices become thinner. A reasonable number of slices should be adopted in any
analysis.

2.4.3.3 Location of Critical Failure Surface

Initially, methods of analysis were based on circular surfaces. However, development


of methods for non-circular surfaces followed soon. For the most part, non-circular
methods may also be used for the analysis of circular failure surfaces, since a circle
is merely a special type of curved failure surface.

The equivalent problem of determining the generalized failure surface having


minimum factor of safety is considerably more complex and routine procedures are
uncommon. It is often necessary to locate the critical failure surface by an intelligent
selection of potential failure surfaces and manual iteration until the critical surface
has been established. This may often be the most efficient means of locating the
critical surface.

2.4.4 Selection of Method


Some methods of slope stability analysis are more rigorous and should be favored for
detailed evaluation of final designs. Some methods ( Spencer, Swedish, wedge ) can
be used to analyze noncircular slip surfaces. Some methods ( Bishop, Swedish,
wedge ) can be used without the aid of a computer and are therefore convenient for
independently checking results obtained using computer programs. Also, when these
latter methods are implemented in software they are extremely fast and are useful where
very large numbers of trial slip surfaces are to be analyzed. Table 2.1 can be helpful
in selecting a suitable method for analysis.

Table 2.1 : Comparison of features of methods

Feature                                        Methods offering the feature
Accuracy                                       Simplified Bishop, Spencer, Infinite slope
Plane slip surfaces parallel to slope face     Infinite slope
Circular slip surfaces                         Ordinary method of slices, Simplified Bishop, Spencer, Modified Swedish
Wedge failure mechanism                        Spencer, Modified Swedish, Wedge
Non-circular slip surfaces (any shape)         Spencer, Modified Swedish
Suitable for hand calculations                 Ordinary method of slices, Simplified Bishop, Modified Swedish, Wedge, Infinite slope

2.5 Numerical Methods For Slope Stability Analysis

With the rapid development of computational technologies, alternative numerical
approaches have been sought for developing new modeling techniques. Among them,
the finite difference method and the finite element method are being widely used by
consulting firms as computing facilities become cheaper and more readily available.
Although they are more complex to use than the conventional limit equilibrium
methods, they can provide a detailed insight into how a slope
will deform and fail, and therefore provide a valuable addition to the methods of
analyzing slope behavior.

2.5.1 Finite Difference Method


The application of the limit equilibrium methods gives an insight into the stability of
the slope at the state of failure and gives no information about the stress-strain
history of the slope before and after failure has occurred. The limit equilibrium
methods generally do not satisfy stress equilibrium at any given point in the slope
at any given time, thus the methods are inappropriate for modeling progressive failure
mechanisms. Finite element and finite difference methods can model the deformation of
the slope and the stresses caused by the deformations throughout the failure. There are
some computer programs based on these methods that can solve such problems;
however, these methods still require an interpretation of the results of the analysis, and
they have not been widely used for general slope stability analysis. However, with advanced
computer technology and interactive visualization of the results of such analyses, the
methods have a place among the general methods used in stability analysis. The finite
difference method is outlined below.

The finite difference method is widely used to obtain approximate solutions of many
boundary value problems whose exact solutions are mathematically complex and in
some cases impossible. The response of a structural system is often represented by the
governing differential equations. These equations involve derivatives of functions;
using the finite difference approach, these derivatives can easily be evaluated at discrete
points. The partial differential equations ( PDE ) can then be solved in the domain
with respect to the given boundary conditions. Cundall ( 1976 ) gave an example
of how finite difference methods might be applied to problems of slope stability.

The finite difference method is an approximate method for determining the derivatives of a
function. Depending upon the circumstances, the finite difference method may give
exact results; frequently, however, it yields only approximate results. The extent of the
error in using the finite difference method to find the derivatives of a function depends on
various factors, including the order of the derivative, the type of function and the type of
finite difference mesh.

This method has the following advantages over the traditional methods: the failure
mode develops naturally, so there is no need to specify trial surfaces; no parameters need to be
given as input; and multiple failure surfaces evolve naturally.

2.5.2 Finite Element Method


The finite element method ( FEM ) represents a powerful alternative approach for
slope stability analysis. This method is accurate, versatile and requires fewer
assumptions, especially regarding the failure mechanism. The FEM can solve
problems with irregular boundaries and complex variation of potential and flow
lines. The region to be analyzed is divided into elements which are joined at nodes.
The unknown displacements at each node may be computed, and from these the strain
and stress fields within the body may be found.

The finite element method (FEM) can be used to compute displacements and stresses
caused by applied loads. However, it does not provide a value for the overall factor
of safety without additional processing of the computed stresses. The principal uses
of the finite element method for design are as follows:

(1) Finite element analyses can provide estimates of displacements and construction
pore water pressures. These may be useful for field control of construction, or when
there is concern for damage to adjacent structures. If the displacements and pore
water pressures measured in the field differ greatly from those computed, the reason
for the difference should be investigated.

(2) Finite element analyses provide displacement pattern which may show potential
and possibly complex failure mechanisms. The validity of the factor of safety
obtained from limit equilibrium analyses depends on locating the critical potential
slip surfaces. In complex conditions, it is often difficult to anticipate failure modes,
particularly if reinforcement or structural members such as geotextiles, concrete
retaining walls, or sheet piles are included. Once a potential failure mechanism is
recognized, the factor of safety against a shear failure developing by that mode can be
be computed using conventional limit equilibrium procedures.

(3) Finite element analyses provide estimates of mobilized stresses and forces. The
finite element method may be particularly useful in judging what strengths should be
used when materials have very dissimilar stress-strain and strength properties, i.e.,
where strain compatibility is an issue. The FEM can help identify local regions
where “overstress” may occur and cause cracking in brittle and strain softening
materials. Also, the FEM is helpful in identifying how reinforcement will respond in
embankments. Finite element analyses may be useful in areas where new types of
reinforcement are being used or reinforcement is being used in ways different from
the ways for which experience exists. An important input to the stability analyses for
reinforced slopes is the force in the reinforcement. The FEM can provide useful
guidance for establishing the force that will be used.

Finite element analyses can also be used to compute factors of safety. If desired, factors of
safety equivalent to those computed using limit equilibrium analyses can be
computed from the results of finite element analyses. The procedure for using the FEM
to compute factors of safety is as follows:

(1) Perform an analysis using the FEM to determine the stresses for the slope.

(2) Select a trial slip surface.

(3) Subdivide the slip surface into segments.

(4) Compute the normal stresses and shear stresses along an assumed slip surface.
This requires interpolation of values of stress from the values calculated at Gauss
points in the finite element mesh to obtain values at selected points on the slip
surface. If an effective stress analysis is being performed, subtract pore pressures to
determine the effective normal stresses on the slip surface. The pore pressures are
determined from the same finite element analysis if a coupled analysis was
performed to compute stresses and deformations. The pore pressures are determined
from a separate steady seepage analysis if an uncoupled analysis was performed to
compute stresses and deformations.

(5) Use the normal stress and the shear strength parameters, c and φ or c' and φ', to
compute the available shear strength at points along the slip surface. Use total
normal stresses and total stress shear strength parameters for total stress analyses, and
effective normal stresses and effective stress shear strength parameters for effective
stress analyses.

(6) Compute an overall factor of safety using the following equation:

Fs = Σ ( si · Δli ) / Σ ( τi · Δli ) ( 2.24 )

τf = c' + σn tan φ'   and   Fs = τf / τ ( 2.25 )

where:
si = available shear strength on segment i, computed in step (5)
τi = shear stress on segment i, computed in step (4)
Δli = length of each individual segment into which the slip surface has been
subdivided.
The summations in Equation 2.24 are performed over all the segments into which the
slip surface has been subdivided.
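
As a minimal illustration of steps (4) to (6), the sketch below (Python/NumPy) evaluates
Equation 2.24 for a slip surface that has already been divided into segments; the
interpolated stresses, segment lengths and strength parameters are invented example
values, not the output of any particular finite element program.

import numpy as np

def fem_factor_of_safety(sigma_n, tau, dl, c, phi_deg):
    # Equation 2.24: Fs = sum(s_i * dl_i) / sum(tau_i * dl_i)
    # sigma_n : (effective) normal stresses interpolated to the slip surface, kPa
    # tau     : shear stresses interpolated to the slip surface, kPa
    # dl      : length of each slip-surface segment, m
    # c, phi_deg : strength parameters c (kPa) and phi (degrees)
    s = c + sigma_n * np.tan(np.radians(phi_deg))   # available shear strength per segment
    return np.sum(s * dl) / np.sum(tau * dl)

# hypothetical values interpolated from the Gauss points of a finite element mesh
sigma_n = np.array([40.0, 55.0, 70.0, 60.0])
tau = np.array([18.0, 22.0, 25.0, 20.0])
dl = np.array([2.0, 2.5, 2.5, 2.0])
print(fem_factor_of_safety(sigma_n, tau, dl, c=10.0, phi_deg=25.0))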

Finite element analyses require considerably more time and effort than limit
equilibrium analyses, as well as additional data related to the stress-strain behavior
of the materials. Therefore, the use of finite element analyses is not justified for the
sole purpose of calculating factors of safety.

Another approach, the shear strength reduction technique, is a more recent way of
using the finite element method in slope stability analysis; it assumes that the failure
mechanism of the slope is directly related to the development of shear strain. In this
method, the shear strength ( c , ϕ ) of the geomaterial is divided by the shear strength
reduction ratio, Fs , and the reduced shear strength ( c' , ϕ' ) is used in place of the
original shear strength in order to bring the slope to the verge of failure. As the verge
of failure is approached, the strains and displacements in the sliding zone grow
abruptly, and this abrupt change causes the finite element solution to fail to converge.
The reduction can be expressed as:
c' = c / Fs ( 2.26 )
ϕ' = atan( tan ϕ / Fs ) ( 2.27 )

where c and ϕ are the shear strength parameters, c' and ϕ' are the reduced shear
strength parameters, and Fs is the shear strength reduction ratio. During the
calculation, the shear strength reduction ratio, Fs , is increased step by step, and the
shear strength of the geomaterial changes accordingly. When convergence fails, the
current shear strength reduction ratio, Fs , is the safety factor of the slope, and the
plastic zone corresponds to the sliding surface of the slope.
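
The stepping procedure described above can be sketched as a short loop (Python); the
function run_fem below is only a placeholder for a finite element analysis carried out
with the reduced strength parameters, and the starting value, step size and upper limit
are arbitrary choices.

import math

def run_fem(c_red, phi_red_deg):
    # placeholder for a finite element analysis with the reduced strengths;
    # it should return True if the solution converges and False otherwise
    raise NotImplementedError

def strength_reduction(c, phi_deg, fs_start=1.0, step=0.01, fs_max=5.0):
    # increase Fs step by step (Eqs. 2.26 and 2.27) until convergence fails
    fs = fs_start
    while fs <= fs_max:
        c_red = c / fs                                                            # Eq. 2.26
        phi_red = math.degrees(math.atan(math.tan(math.radians(phi_deg)) / fs))   # Eq. 2.27
        if not run_fem(c_red, phi_red):
            return fs        # ratio at which convergence fails ~ factor of safety
        fs += step
    return fs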

2.6 Computer Programs Based on Traditional Methods


Program : CLARA-W

Description : CLARA-W is a program for geotechnical slope stability analysis in


two or three dimensions, using Bishop, Janbu, Spencer and Morgenstern-Price
methods. Features include: 2D and 3D analysis of rotational or non-rotational sliding
surfaces, ellipsoids, wedges, compound surfaces, fully specified surfaces and
searches. Other features include point loads, tension cracks, earthquake loading,
anisotropic and non-linear material strength models and the possibility to use digital
elevation model (DEM) files to specify ground surface topography. It also includes
3D extensions of the Spencer's method and the Morgenstern-Price method.
( Geotechnical & Geoenvironmental Software Directory - http://www.ggsd.com )

Program : XSLOPE for Windows

Description : XSLOPE for Windows computes the stability of an earth slope using
Bishop's (1955) simplified method for circular failure surfaces or Morgenstern and
Price's (1965, 1967) analysis for non-circular failure surfaces. The slope may be
divided into a number of different soil layers with different properties. In the Bishop
analysis a circular surface of rupture is assumed and then the equilibrium of the
sliding mass of soil is considered by dividing this mass into a number of slices. This
process is repeated for a large number of circles and the minimum factor of safety
determined. Pore pressures within each soil layer can be calculated by a number of
different methods, from the depth below a piezometric surface, by using a pore
pressure coefficient ru, from a user specified grid of pore pressures, or from a grid of
pore pressures generated by program FESEEP. External normal and shear tractions
can be applied to segments along the surface of the slope. The effect of an
earthquake is modeled by applying a set of horizontal and vertical forces at the
centroid of each slice. These forces are calculated using the horizontal and vertical
seismic coefficients which are assumed to vary with depth. (http://www.ggsd.com )

Program : SVDynamic

Description : SVDynamic carries out slope stability analyses using the dynamic
programming method to determine the location of the slip surface and factor of
safety of a slope based on a finite element analysis. It has been verified against
traditional slope stability analysis methods such as Morgenstern-Price, GLE (General
Limit Equilibrium - Fredlund et al. 1982), Spencer, Simplified Bishop's, Janbu, and
the Ordinary method. (http://www.ggsd.com )

Program : GeoStru

Description : Slope (GeoStru) carries out the analysis of soil slope stability both in
static and seismic states utilising the limit equilibrium methods of Fellenius, Bishop,
Janbu, Bell, Sarma, Spencer, Morgenstern and Price. The discrete element method
(DEM) is also used for circular and non-circular failures by which it is possible to
determine movement in the slope, examine a gradual failure, and employ various
models of force-deformation. The program automatically computes the factor of safety
for surfaces that are tangential to a straight line (automatically varying the
inclination), or that pass through one, two, or three given points, and can perform back analysis.
(http://www.ggsd.com )

Program : SLOPE/W Basic Edition

Description : SLOPE/W Basic Edition has the essential features for solving slope
stability analyses, including: Ordinary, Bishop, Janbu Simplified, Spencer,
Morgenstern-Price and Generalized Limit Equilibrium methods. Pore-water pressure
conditions specified using a piezometric line. Soil strength specified as undrained,
cohesive and frictional, no strength, or impenetrable. Ground surface surcharge
pressures. It also supports horizontal and vertical seismic coefficient analyses.
( http://www.ggsd.com )

Program : DC-Slope

Description : DC-Slope carries out slope stability analysis according to Krey-Bishop


(friction circle) and Janbu (arbitrary sliding planes) methods. Main features include:
Freely defined ground surface and layers, ground water and seepage paths, different
load cases with concentrated and distributed loads, dead and live loads or earthquake
loads. The program has automatic iteration of the center and/or radius, optionally within a
predefined range, to find the minimum safety factor. ( http://www.ggsd.com )

Program : Slope (ejgeSoft)

Description : Slope is a program for carrying out slope stability analysis by the
Bishop method and has the following features: Any shape of soil profile can be
considered; Change any parameter for immediate re-calculation; Any consistent unit
system can be used (SI is default); Several soil layers with different properties can be
considered; Single circle can be specified; Single center can be specified and R range
scanned by the program; A grid of centers can be specified for minimum FS search;
Output of details can be requested, down to slice weights; Pore pressures are
specified either by an ru coefficient or a fixed ground water table elevation.
( http://www.ggsd.com )

Program : CADS Re-Slope

Description : CADS Re-Slope is a general slope stability software package supplied


as a complete suite or as a series of modules. The Full Slope Stability module
features: Circular and user defined slip surfaces; Swedish method of slices; Bishops
method (No interslice shear, moment equilibrium); Janbu method (No interslice
shear, horizontal equilibrium); Rigorous method (interslice shear, full equilibrium);
The program also provides seismic analysis (horizontal and vertical acceleration).
( http://www.ggsd.com )

Program : GGU-STABILITY

Description : GGU-STABILITY for slope failure calculations and soil nailing.


Considers circular slip surfaces (Bishop or Krey) and polygonal slip surfaces
(Janbu), in addition to rigid body failure mechanisms and block sliding methods. Slip
stability. Overturning stability. Base failure safety. Slope failure safety (Bishop).
Calculation of maximum "nailing forces". ( http://www.ggsd.com )

Program : GeoStar

Description : GeoStar supports standard and improved limit equilibrium methods.


Cylindrical or general shaped (polygonal) shear surface. Progressive failure. General
shaped layers. Variable inter-slice forces. Ground water influence calculated from
groundwater levels, pore pressure coefficient (Bishop - Morgenstern), pore pressure
value for layer or explicit field of pore pressure values (i.e. imported from a finite
element calculation). ( http://www.ggsd.com )

Program : TSTAB

Description :TSTAB conducts limit equilibrium slope stability analyses of circular


slip surfaces by the Simplified Bishop or Spencer methods and searches for the
critical circular slip surface. Provides for application of line loads and pressures to
the slope and this capability allows for modeling of both anchors and internal
reinforcement, such as that provided by geogrids. Automatically computes the
pressures on submerged slopes and the pressures due to fluid in tension cracks.
Provides several alternatives for specifying shear strengths, allows anisotropic
undrained shear strengths and allows specification of local residual factors. Provides
a choice of automatic computation of pore pressures from a specified phreatic
surface or from the average pore pressure ratio or specification of pore pressures as
contours or specification of pore pressures on a grid. Includes an interactive pre-
processor for generating input files and generates plots of slope geometry, soil layers,
pore pressures, specified slip circles or trial slip circles, with critical circles
highlighted. (http://www.ggsd.com )

Program : Geo-Tec B

Description : Geo-Tec B is a cross platform (Windows and Macintosh) slope


stability analysis program using the Janbu, Bishop and Fellenius methods. It can
automatically calculate a series of circles between two defined extreme circles.
Circles may be defined by identifying three points or a center and radius. Seismic
analysis is carried out by a pseudo-static method. When using the Janbu method, it is
possible to apply an external horizontal force to the slope. ( http://www.ggsd.com )

Program : PCSTABL 6

Description : PCSTABL 6 is a computer program for the general solution of slope


stability problems by two-dimensional limiting equilibrium methods and includes the
analysis of reinforced soil slopes with geosynthetics, nailing, and tiebacks. The
calculation of the factor of safety against instability of a slope is performed by the
simplified Bishop method, applicable to circular shaped failure surfaces, the
simplified Janbu method, applicable to failure surfaces of general shape, and the
Spencer method, applicable to any type of surface. The simplified Janbu method has
an option to use a correction factor, developed by Janbu, which can be applied to the
factor of safety to reduce the conservatism produced by the assumption of no

interslice forces. It features random techniques for generation of potential failure
surfaces for subsequent determination of the more critical surfaces and their
corresponding factors of safety. ( http://www.ggsd.com )

Program : STABGM

Description : STABGM is a program for the slope stability analysis of reinforced


embankments and slopes with circular slip surfaces, using the Ordinary Method of
Slices and Bishop's Modified Method. ( http://www.ggsd.com )

Program : Stabl for Windows

Description :Stabl for Windows is the Windows version of PCSTABL 6 program


for the general solution of slope stability problems by two-dimensional limiting
equilibrium methods and includes the analysis of reinforced soil slopes with
geosynthetics, nailing, and tiebacks. The calculation of the factor of safety against
instability of a slope is performed by the simplified Bishop method, applicable to
circular shaped failure surfaces, the simplified Janbu method, applicable to failure
surfaces of general shape, and the Spencer method, applicable to any type of surface.
The simplified Janbu method has an option to use a correction factor, developed by
Janbu, which can be applied to the factor of safety to reduce the conservatism
produced by the assumption of no interslice forces. It features random techniques for
generation of potential failure surfaces for subsequent determination of the more
critical surfaces and their corresponding factors of safety. ( http://www.ggsd.com )

Program : Slope stability (Fine)

Description : Slope stability (Fine) solves the slope stability problem in a two
dimensional environment. The slip surface can be modeled in two different ways -
circular (Bishop or Petterson method), or polygonal (Sarma method). Features

include:

- Simple input of geometry of layers


- Built-in database of soils and rocks
- Optimization of circular and polygonal slip surfaces
- An arbitrary number of surcharges (strip, trapezoidal, concentrated loading)
- Simple modeling of rigid bodies
- Possibility to model earthquake effects

- Possibility to consider foliation of soils (soil anisotropy)
- Possibility to introduce geo-reinforcement into the analysis
- Analysis in effective and total parameters of soils
- An arbitrary number of analyses within one stage of construction
- Possibility to introduce restrictions on the slip surface optimization
- Analysis according to the theory of limit states or safety factor
( http://www.ggsd.com )

Program : STABLE

Description :STABLE carries out slope stability analysis by the methods of Bishop,
Morgenstern-Price and Sarma. Analyses may include: point loads, line reinforcement
forces, flexible soil geometry for representation of lenses, inclusions, clay cores etc.
Pore pressures may be specified as: piezometric surface, Ru values in each soil,
absolute or excess values in each soil. Automatic slip-circle generation, location of the
critical circle, and earthquake analysis. ( http://www.ggsd.com )

Program : SLOPE 12R

Description : SLOPE 12R is a computer program for analysing the stability of


slopes, also applicable to earth pressure and bearing capacity problems. Main
Features are : Automatically generates slip surfaces to find the critical failure
mechanism. The ground can be described in terms of up to nine soil strata with
different strength properties. Multiple water tables or piezometric surfaces modeled.
In simple cases pore pressures are calculated from the position of the water table but
in more complicated flow conditions local values of pore pressure can be defined.
Circular and non-circular slip surfaces can be analysed. A group of circular slip
surfaces can be analysed by defining a rectangular grid of centres. For each centre a
number of different radii can be specified. Two and three part wedges can be
analysed. A group of wedges can be analysed by defining a rectangular grid of
wedge nodes. Analysis methods: Swedish Circle (or Fellenius') method; Bishop's
method; Spencer's method; Janbu's method. External forces (due to buildings or strut
forces in excavations) can be applied to the ground surface. Earthquake forces can be
modeled in a quasi-static manner by specifying horizontal and vertical acceleration
factors. ( http://www.ggsd.com )

Program : XSTABL

Description : XSTABL performs a two dimensional limit equilibrium analysis to


compute the factor of safety for a layered slope. The Generalized Limit Equilibrium
(GLE) method allows factors of safety to be calculated for force and moment
equilibrium or for force equilibrium only, using different interslice force angle
distributions including Spencer's, Morgenstern-Price, or one of the methods proposed
by the Corps of Engineers. If an analysis requires a search for the most critical failure
surface, the simplified Bishop and Janbu methods of analysis are selected due to their
relative ease of use. The program may be used to either search for the most critical
circular, non-circular, or block-shaped surface, or alternatively, to analyze a single
circular or non-circular surface. ( http://www.ggsd.com )

Program : I.L.A.

Description : I.L.A. is a slope stability analysis program that also incorporates


features for retaining system designing. The slope analysis and design can be
performed using the Sarma method, whose numerical stability increases the
reliability of the calculations. The classic Bishop, Janbu, Morgenstern & Price and
Bell methods are also available. The failure surfaces can either be defined as circular
or planar surface families or even individually, as polygonal surfaces, and therefore
have any shape whatsoever. The analysis can be carried out under drained or
undrained conditions, also considering water pressures, surcharges, seismicity.
( http://www.ggsd.com )

Program : WinStabl

Description : WinStabl is a pre- and post-processor to STABL6. The package


supports: Simplified Janbu, Modified Bishop, and Spencer's analyses. Reinforcing
layers. Tiebacks. Earthquake (pseudo-static) analysis. Boundary (external) loads.
Anisotropic soils. Specific failure surfaces. Circular and irregular surface searching.
( http://www.ggsd.com )

Program : MStab

Description : MStab carries out stability analysis by the Bishop, Fellenius and
Spencer methods. It provides automatic search of the critical slip circle; user-defined
zones that the circle will not cross; integration of geotextiles; user-defined non-
circular slip plane; temporary and permanent loads; pore pressures and degree of

consolidation; Mohr-Coulomb soil parameters; output of global safety factor; output
of safety contours; output of stress components along slip plane.
( http://www.ggsd.com )

Program : GSTABL7 with STEDwin v.2

Description : GSTABL7 with STEDwin v.2 is a 2D limit equilibrium slope stability


program based on STABL6 but which includes geogrid reinforcement, piers/piles,
tiebacks, soil nails, applied forces, and surcharge loads. Failure surfaces of any shape
can be analyzed with Modified Bishop, Simplified Janbu, Spencer, and Morgenstern-
Price methods. ( http://www.ggsd.com )

Program : GSlope

Description : GSLOPE carries out limit equilibrium slope stability analysis of


existing natural slopes, unreinforced man-made slopes, or slopes with soil
reinforcement. The program uses Bishop's Modified method and Janbu's Simplified
method. Slopes can be analysed in either direction, and a seismic coefficient is
provided for pseudo-static analysis. When a non-circular slip surface is drawn, the
computed Factor of Safety is displayed immediately. ( http://www.ggsd.com )

Program : SLOPBG

Description :It allows the analysis of a generalized soil slope using Bishop's
modified method. Water pressures are accounted for using either phreatic lines, pore
pressure contours or a single pore pressure coefficient. Horizontal and vertical
seismic forces are considered. The program can search for the critical minimum
factor of safety circle. The program can be run in a deterministic or probabilistic
mode. In probabilistic mode several statistical distribution types can be applied to
best reflect the range of input variables. A simulation technique is then used to
generate pseudo-variables used to calculate the distribution of the factor of safety.
( http://www.ggsd.com )

Program : Slide

Description : Slide 5.0 is a 2D Limit Equilibrium Analysis program with a CAD


based graphical interface and a wide range of modeling and data interpretation
options. It now includes sensitivity, probabilistic and back analysis capabilities.
Sensitivity analysis allows the user to determine the effect of any parameter on the

factor of safety to discover the most critical parameters, leading to optimization of
the slope remediation. Safety factors are calculated based on a number of widely
used limit equilibrium techniques including Bishop Simplified, Spencer, and
GLE/Morgenstern & Price. ( http://www.ggsd.com )

Program : Slope-W

Description : Slope-W is a circular and non-circular soil and rock slope stability
program. Carries out stability analyses by the methods of Fellenius, Bishop
simplified, Janbu simplified, Spencer, Morgenstern-Price, Corps of Engineers, Lowe-
Karafiath, General Limit Equilibrium, Finite element stress. It can perform
probabilistic stability analyses using the Monte Carlo method.
(http://www.ggsd.com)

Program : Galena

Description : Galena is a slope stability analysis program incorporating Bishop


(circular), Spencer-Wright (circular and non-circular) and Sarma (non-vertical slices)
methods for problem solving in soils and rocks. Single and multiple analyses
available for all methods. Models can include external forces, distributed loads, and
earthquake effects. ( http://www.ggsd.com )

Program : Slope 2000

Description : Slope 2000 can locate the critical failure surface of a slope under
general conditions with general constraints. The shape of the failure surface can be
either circular or non-circular. The slope analysis methods include Bishop, Janbu
simplified and rigorous, Morgenstern-Price, Sarma, GLE, Corps of Engineers, Lowe
Karafiath, wedge. ( Geotechnical & Geoenvironmental Software Directory -
http://www.ggsd.com )

2.7 Importance of Slope Stability Analysis Conditions

Pore water pressure is a major factor in slope stability analysis. When carrying out
an effective stress analysis, the pore water pressures need to be calculated at the base
of each slice, because the water force is involved in computing the factor of safety.
One of the common ways to compute pore water pressure is to use ru, where ru is
defined as the ratio of the pore water pressure u to the overburden pressure γh at a
given point. This implies that the pore water pressure is related to the overburden
pressure, or that the water force U at the base of each slice is proportional to the
total weight W of the slice.

Another common way is to use a piezometric surface. Such a surface may be used in
conjunction with a slip circle program so that the water pressure, u, at the base of each
slice is computed as γw·hw, where hw is the vertical distance between the piezometric
surface and the base of the slice. The use of a piezometric surface for a slope in
which seepage is taking place will lead to errors in estimating pore pressures, since
the pore pressures should then be determined from a flow net and cannot be tied to a
single piezometric surface.

The use of a pore water pressure grid can overcome the above problem. Pore
pressure values may be specified at points on a regular grid, and the values at the base
of each slice are then found by interpolation from the nearest grid points. This is
particularly useful for seepage problems, where finite difference or finite element
solutions may be obtained and used to set up the pore pressure grid.
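
The three ways of specifying pore water pressures mentioned above can be summarized
in a short sketch (Python, using SciPy only for the grid interpolation); the unit weight
of water, the ru value, the piezometric level and the grid values are all assumed,
illustrative quantities.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

GAMMA_W = 9.81   # unit weight of water, kN/m3 (assumed value)

def u_from_ru(ru, gamma, h):
    # u = ru * (overburden pressure gamma * h) at the base of the slice
    return ru * gamma * h

def u_from_piezometric(z_piezo, z_base):
    # u = gamma_w * h_w, with h_w the height of the piezometric surface above the base
    return GAMMA_W * max(z_piezo - z_base, 0.0)

# pore pressure grid, e.g. exported from a seepage (finite element or finite difference) analysis
x = np.linspace(0.0, 40.0, 5)                       # horizontal coordinates, m
z = np.linspace(0.0, 20.0, 5)                       # elevations, m
u_profile = GAMMA_W * np.maximum(0.0, 10.0 - z)     # hydrostatic below an assumed water table at z = 10 m
u_grid = np.tile(u_profile, (x.size, 1))            # same profile at every x, for illustration only
u_interp = RegularGridInterpolator((x, z), u_grid)

print(u_from_ru(0.3, 19.0, 6.0), u_from_piezometric(12.0, 7.5), float(u_interp([[15.0, 8.0]])[0]))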

Most of the methods mentioned in the previous sections employ the definition of the
factor of safety Fs. As Lowe ( 1967 ) pointed out, defining the factor of safety as a
factor on shear strength is logical, because shear strength is usually the quantity that
involves the greatest degree of uncertainty. The limitation results from the fact that
these methods provide no information regarding the magnitudes of the strains within
the slope or any indication of how they may vary along the slip surface. It is worth
noting that, for all practical purposes, the average value of Fs is the same even if the
factor of safety is assumed to vary from place to place along the slip surface.

3. ARTIFICIAL INTELLIGENCE APPLICATIONS WITH NEURAL
NETWORKS

3.1 Introduction

Artificial neural networks are computational systems and devices that are constructed
to make use of some organizational principles resembling those of the human brain.
Normally there are a large number of highly connected computational nodes
( neurons ) that operate in parallel and are configured in regular architectures. Like
the human brain, an artificial neural network has the ability to learn, recall, and
generalize from the data used to train the system. The pattern of activation at the
input units represents the problem being presented to the network, and the pattern of
activation at the output processing units represents the computational results achieved
by the neural network. The network adjusts the weights of the connections between
linked neurons so as to minimize the difference between the actual output and the
target output. The computation performed by the neural network is strongly affected
by the topology and the strengths of the connections between the neurons. As an
example, Figure 3.1 shows a simple mathematical model of such a network.

Figure 3.1: Schematic Diagram of A Neuron’s Network ( McCulloch & Pitts ,1943 )

3.2 Neural Network’s Properties

3.2.1 Basic Structure of Neural Network


The basic building block of neural network technology is the simulated neuron
(depicted in Figure 3.2 as a circle). Independent neurons are of little use, unless they

are interconnected in a network of neurons. The network processes a number of
inputs from the outside world in order to produce an output, the network's
classifications and predictions. The neurons are connected by weights, (depicted as
lines) which are applied to values passed from one neuron on to the next. A group of
neurons is called a slab. Neurons are also grouped into layers by their connection to
the outside world. For example, if a neuron receives data from outside of the
network, it is considered to be in the input layer. If a neuron contains the network's
predictions or classifications, it is in the output layer. Neurons in between the input
and output layers are in the hidden layer(s). A layer may contain one or more slabs of
neurons.

Figure 3.2 : Neural Networks Structure ( Ural & Bayrak , 2003 )

Neural networks are not programmed; they learn by example. Typically, a neural
network is presented with a training set consisting of a group of examples from
which the network can learn. The most common training scenarios utilize supervised
learning, during which the network is presented with an input pattern together with
the target output for that pattern. The target output usually constitutes the correct
answer, or correct classification for the input pattern. In response to these paired
examples, the neural network adjusts the values of its internal weights. If training is
successful, the internal parameters are then adjusted to the point where the network
can produce the correct answers in response to each input pattern. Usually the set of
training examples is presented many times during training to allow the network to
adjust its internal parameters gradually. The neural network approach does not
require human development of algorithms and programs that are specific to the
classification problem at hand, suggesting that time and human effort can be saved.
There are drawbacks to the neural network approach, however: the time to train the

network may not be known beforehand, and the process of designing a network that
successfully solves an application problem may be involved.

It is possible to develop a network that can generalize on the tasks for which it is
trained, enabling the network to provide the correct answer when presented with a
new input pattern that is different from the input in the training set. To develop a
neural network that can generalize, the training set must include a variety of
examples that are good preparation for the generalization task. In addition, the
training session must be limited in iterations, so that no “over learning” takes place
(i.e., the learning of specific examples instead of the general classification criteria).
Thus, special considerations in constructing the training set
and the training presentations must be made to permit effective generalization
behavior from a neural network. All of these characteristics of neural networks may
be explained through the simple mathematical structure of neural net models. The
computations performed in the neural net may be specified mathematically.

As a summary, an artificial neural network is a parallel computing system with the


following characteristics ( Lin & Lee , 1996 ):

1. It is a naturally inspired mathematical model.

2. It consists of a large number of highly interconnected processing units.

3. Its connections ( weights ) hold the information about the relationship between
the inputs and outputs.

4. Each neuron can dynamically respond to its input stimulus, and the response
depends entirely on its local information, that is, on the input signals that arrive at the
processing element via the connections and connection weights.

5. It has the ability to learn recall and generalize from training data by adjusting the
connection weights.

6. Its collective behavior demonstrates its computational power: no single neuron
carries specific information; rather, the information is distributed among the neurons,
which together possess a great deal of computational power.

One of the most significant properties of a neural network is its ability to learn from
its environment and to improve its performance through learning. The neural network
learns about its environment through an iterative process of adjustment of its synaptic
weights and thresholds. Ideally the network becomes more knowledgeable about its
environment after each iteration of the learning process. The learning process can be
classified as supervised learning or unsupervised learning. In supervised learning we
are trying to map the input-output pairs according to a given training set; in other
words, the desired output is already known during training. The network parameters
are updated by a supervised learning rule. In unsupervised learning there is no
external instruction available, and only the input vectors can be used for learning; in
other words, the outputs or classes associated with the input patterns are not known.
During unsupervised learning there is no feedback from the environment to indicate
what the outputs of the network should be, or whether they are correct. The network
itself should discover any relationships, such as patterns, features, correlations, or
categories, that may exist in the input data and translate the discovered relationships
into outputs.

According to Mendel and McClaren ( 1970 ) the definition of learning can be
expressed as follows: learning is a process by which the free parameters of a neural
network are adapted through a continuing process of stimulation by the environment
in which the network is embedded. The type of learning is determined by the manner
in which the parameter changes take place.

Learning can be understood as the process of repeatedly applying input vectors to
the network and then finding new weights and biases with the learning rule. The
process is repeated until the sum of errors based on the cost function falls beneath an
acceptable error goal or a maximum number of epochs has been reached. The learning
parameters vary according to the different learning rules, but the common parameters
used in every supervised learning process are the number of epochs between progress
displays, the maximum number of epochs to train, the acceptable error goal, and the
learning rate.
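
The iterative process described above can be written as a generic skeleton (Python);
network and update_rule stand for whatever architecture and learning rule are chosen,
so this is only an outline of the bookkeeping around epochs, the error goal and the
learning rate.

def train(network, update_rule, inputs, targets,
          max_epochs=1000, error_goal=1e-3, learning_rate=0.1):
    # repeat the learning rule until the error goal or the epoch limit is reached
    for epoch in range(max_epochs):
        total_error = 0.0
        for x, t in zip(inputs, targets):
            y = network.forward(x)                        # actual output of the network
            total_error += sum((ti - yi) ** 2 for ti, yi in zip(t, y))
            update_rule(network, x, t, y, learning_rate)  # adjust weights and biases
        if total_error <= error_goal:
            break                                         # acceptable error goal reached
    return network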

The learning algorithm is defined as a prescribed set of well defined rules for the
solution of a learning problem. There is a large variety of learning algorithms, which
differ from each other in the way they adjust the weights. Each learning algorithm
offers an advantage of its own.

3.2.2 Design Choices of Neural Networks
Many design choices are involved in developing a neural network application.
Figure 3.3 shows an example of choosing a design for a neural network application.

Figure 3.3 : Design Choices For Neural Network application (Dayhoff, 1990)

The first option is in choosing the general area of application. Usually this is an
existing problem that appears amenable to solution with a neural network. Next, the
problem must be identified, so that a selection of inputs and outputs to the network
may be made. Choices for inputs and outputs involve identifying the types of pattern
that go into and out of the network. In addition, the researcher must design how those
patterns are to represent the needed information (the representation scheme). Next,
internal design choices must be made, including the topology and size of the network.
The number of processing units is specified, along with the specific interconnections
that the network is to have. Processing units are usually organized into distinct layers,
which are either fully or partially interconnected.

There are additional choices for the dynamic activity of the processing units. A variety
of neural net paradigms are available; these differ in the specifics of the processing
done at each unit and in how their internal parameters are updated. Each paradigm
dictates how the readjustment of parameters takes place. This readjustment results in
“learning” by the network. Next, there are internal parameters that are “tuned” to
optimize the neural net design. The value of such a parameter influences the rate of
learning by the network, and may also influence how successfully the network learns.
There are experiments that indicate that “learning” occurs at more successful rates
if this parameter is decreased during the learning session. Some paradigms utilize
more than one parameter that must be tuned.

Finally, the selection of the training data presented to the neural network influences
whether or not the network “learns” a particular task. How well a network will learn
depends on the examples presented. A good set of examples, which illustrates the
tasks to be learned well, is necessary for the desired learning to take place. A poor
set of examples will result in poor learning on the part of the network. The set of
training examples must also reflect the variability in the patterns that the network
will encounter after training.

3.2.3 Neural Network Architectures


3.2.3.1 GRNN Architecture And Learning Algorithm
General Regression Neural Networks, GRNN, invented by Dr. Donald Specht
(1996), are three-layer networks having one hidden neuron for every training pattern,
in addition to a number of neurons equal to the sum of the numbers of inputs and
outputs.

Unlike back-propagation networks, there is a smoothing factor instead of the training
parameters learning rate and momentum. The success of GRNN networks is
dependent upon the smoothing factor. The individual smoothing factor adjustment
values may be used as a sensitivity analysis tool: inputs with low smoothing factor
adjustments are candidates for removal in a later trial, especially if the smoothing
factor adjustment approaches zero.

The fact that the GRNN network centers are determined by the training data vectors
gives the network stability and ensures that it cannot be over-trained. This is the
main feature, along with its simplicity, that distinguishes it from most other
approaches.

GRNN is especially useful for continuous function approximation. GRNN can have
multidimensional inputs, and it will fit multidimensional surfaces through data.
GRNN works by measuring how far a given sample pattern is from the patterns in the
training set in N-dimensional space, where N is the number of inputs in the problem.
When a new pattern is presented to the network, that input pattern is compared in
N-dimensional space to all of the patterns in the training set to determine how far in
distance it is from those patterns. The output predicted by the network is a weighted
combination of all of the outputs in the training set, where the weighting is based upon
how far the new pattern is from the given patterns in the training set.

Advantages:

i. GRNN can handle both linear and non-linear data.

ii. Adding new samples to the training set does not require re-calibration of the model.

iii. There is only one adjustable parameter, thereby making overtraining less likely.

Disadvantages:

i. Requires many training samples to adequately span the variation in the data.

ii. Requires that all the training samples be stored for future use (i.e., prediction).

iii. Has trouble with irrelevant inputs.

iv. No intuitive method for selecting the optimal smoothing parameter.

Figure 3.4 : The Basic GRNN Architecture

The general regression neural network (GRNN) is a three layer network having one
hidden neuron for every training pattern (Lawrence, 1993) ( Figure 3.4 ). In addition,
the total number of neurons is equal to the sum of the number of inputs and outputs
(Specht, 1991). Regression is the estimation of the least mean squares of variables
based on the available data. In other words, the regression decision in the GRNN
architecture reveals the most probable value given all the patterns in an N-dimensional
space (where N is the number of inputs). Output values correspond to the weighted
average of the target values. The target values are the weights which extend from the
hidden layer to the output layer. During the GRNN training process, the smoothing
factors (or bandwidths) are the only weights which need to be calculated. The success
of the GRNN is dependent upon the smoothing factor; however, there is no intuitive
method for selecting the optimal smoothing factor (Ural and Bayrak, 2003).

The GRNN utilizes a probabilistic model between an independent single training
vector in the input space (X) and a dependent scalar output (Y). Xi is defined as the
input vector of the ith training data set, and Yi is the output related to Xi. It is
assumed that x and y are defined as measured values of X and Y, respectively, and
the regression of Y on x is defined as ŷ(x) such that:

ŷ(x) = [ Σi yi fi(x) ] / [ Σi fi(x) ] , i = 1, ..., p (3.1)

fi(x) = exp[ −(x − xi)T (x − xi) / (2σ²) ] (3.2)

By substituting Eq. 3.2 into Eq. 3.1,

ŷ(x) = [ Σi yi exp( −(x − xi)T (x − xi) / (2σ²) ) ] / [ Σi exp( −(x − xi)T (x − xi) / (2σ²) ) ] (3.3)

Eq. (3.3) is known as Specht’s GRNN. In Specht’s GRNN, σ is a smoothing factor
shared by all the inputs and the pattern nodes. Assigning an independent smoothing
factor to each of the variables may improve accuracy; however, this may be
impractical in many simulations (Specht, 1996).

Figure 3.5 : The GRNN Architecture ( Specht , 1996 )

As shown in Figure 3.5, the GRNN is composed of one input layer, one hidden layer,
a summation layer, and one output layer. The GRNN models are trained by a one-pass
learning algorithm. In order to estimate an output, the presented input is subtracted
from each stored vector in the hidden layer. Probability density functions ( PDF ) or
radial basis functions are applied in order to evaluate the squared or absolute
difference between the hidden neurons and the inputs. Activation functions are used
between these layers; they determine the behaviour of the neurons, and their details
are given below. The formulas of the activation functions used in NeuroShell 2 are
shown below:

Logistic → f(x) = 1 / (1 + exp(−x)) (3.4)

Linear → f(x) = x (3.5)

Tanh → f(x) = tanh(x) , the hyperbolic tangent function (3.6)

Tanh15 → f(x) = tanh(1.5x) (3.7)

Sine → f(x) = sin(x) (3.8)

Symmetric Logistic → f(x) = 2 / (1 + exp(−x)) − 1 (3.9)

Gaussian → f(x) = exp(−x²) (3.10)

Gaussian Complement → f(x) = 1 − exp(−x²) (3.11)
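
Written out in code, these functions are straightforward; the following Python/NumPy
transcription of Equations 3.4 to 3.11 is only an illustrative sketch and is not taken
from NeuroShell 2 itself.

import numpy as np

logistic            = lambda x: 1.0 / (1.0 + np.exp(-x))        # Eq. 3.4
linear              = lambda x: x                                # Eq. 3.5
tanh                = lambda x: np.tanh(x)                       # Eq. 3.6
tanh15              = lambda x: np.tanh(1.5 * x)                 # Eq. 3.7
sine                = lambda x: np.sin(x)                        # Eq. 3.8
symmetric_logistic  = lambda x: 2.0 / (1.0 + np.exp(-x)) - 1.0   # Eq. 3.9
gaussian            = lambda x: np.exp(-x ** 2)                  # Eq. 3.10
gaussian_complement = lambda x: 1.0 - np.exp(-x ** 2)            # Eq. 3.11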

The details of these activation functions are given in Figures 3.6, 3.7, 3.8 and 3.9:

Linear: Use of this function should generally be limited to the output slab. It is useful
for problems where the output is a continuous variable, as opposed to several outputs
which represent categories.

Figure 3.6 : Linear activation function

Logistic (Sigmoid Logistic): This function is useful for most neural network
applications, and it maps values into the (0, 1) range. It is used when the outputs are
in categories.

Figure 3.7 : Logistic function

Symmetric Logistic: This is similar to the logistic, except that it maps to (-1, 1)
instead of to (0, 1). When the outputs are categories, trying symmetric logistic
function instead of the logistic function in the hidden and output layers may be
better. In some cases, the network will train to a lower error in the training and test
sets.

Figure 3.8 : Symmetric logistic function

Gaussian: It is very useful for some sets of problems. Using it in the hidden layer with
the logistic function in the output layer may give good results.

Figure 3.9 : Gaussian function

The summation unit computes the sum of the outputs from all hidden neurons. The
network’s final output is obtained at the output layer, where a normalization is
performed. The normalized output is computed by dividing the value of the weighted
sum of hidden layer outputs, by the value in the summation layer.
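
As an illustration of the GRNN estimate of Equation 3.3, a minimal sketch in
Python/NumPy is given below; the training vectors, target outputs, smoothing factor
and the query point are all invented example values.

import numpy as np

def grnn_predict(x, X_train, y_train, sigma):
    # GRNN estimate (Eq. 3.3): kernel-weighted average of the training outputs
    d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances to the stored patterns
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # one hidden (pattern) neuron per training sample
    return np.sum(w * y_train) / np.sum(w)       # summation layer followed by normalization

# hypothetical data: inputs could be, for example, slope angle and cohesion
X_train = np.array([[30.0, 10.0], [45.0, 20.0], [60.0, 15.0]])
y_train = np.array([1.6, 1.2, 0.9])              # for example, factors of safety
print(grnn_predict(np.array([40.0, 12.0]), X_train, y_train, sigma=5.0))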

3.2.3.2 PNN Architecture

Probabilistic Neural Networks (PNN) are known for their ability to train quickly on
sparse data sets. A PNN separates data into a specified number of output categories.
PNN networks are three layer networks wherein the training patterns are presented to
the input layer and the output layer has one neuron for each possible category. There
must be as many neurons in the hidden layer as there are training patterns. The
network produces activations in the output layer corresponding to the probability
density function estimate for each category. The highest output represents the most
probable category (Frederick, 1996).

The probabilistic neural network (PNN) learns to approximate the PDF of the
training examples. More precisely, the PNN is interpreted as a function which
approximates the probability density of the underlying examples’ distribution (rather
than fitting the examples directly). The PNN consists of nodes allocated in three
layers after the inputs ( Figure 3.10 ):

Pattern layer: There is one pattern node for each training example. Each pattern node
forms a product of the weight vector and the given example for classification, where
the weights entering a node are from a particular example.

Summation layer: Each summation node receives the outputs from the pattern nodes
associated with a given class.

Output layer: The output nodes are binary neurons that produce the classification
decision.

The only factor that needs to be selected for training is the smoothing factor: if the
deviation of the Gaussian functions is too small, the approximation becomes very
spiky and cannot generalize well, while large deviations smooth out the details.

Figure 3.10 : The PNN Architecture

A probabilistic neural network (PNN) has 3 layers of nodes. Figure 3.11 below
displays the architecture for a PNN that recognizes K = 2 classes, but it can be
extended to any number K of classes. The input layer (on the left) contains N nodes:
one for each of the N input features of a feature vector. These are fan-out nodes that
branch at each feature input node to all nodes in the hidden (or middle) layer so that
each hidden node receives the complete input feature vector x. The hidden nodes are
collected into groups: one group for each of the K classes as shown in the
Figure 3.11.

Figure 3.11 : Probabilistic Neural Network Layers

Each hidden node in the group for Class k corresponds to a Gaussian function
centered on its associated feature vector (there is a Gaussian for each exemplar
feature vector). All of the Gaussians in a class group feed their functional values to
the same output layer node for that class, so there are K output nodes.

At the output node for Class k (k = 1 or 2 here), all of the Gaussian values for Class k
are summed and the sum is scaled so that the probability volume under the sum
function is unity, so that the sum forms a probability density function. Here we
temporarily use special notation for clarity. Let there be P exemplar feature vectors
{x(p): p = 1,...,P} labeled as Class 1 and let there be R exemplar feature vectors
{y(r): r = 1,...,R} labeled as Class 2. In the hidden layer there are P nodes in the
group for Class 1 and R nodes in the group for Class 2. The equations for each
Gaussian centered on the respective Class 1 and Class 2 points x(p) and y(r) (feature
vectors) are, for any input vector x (where N is the dimension of the vectors):

exp[ −|| x − x(p) ||² / (2·F1²) ] , p = 1, ..., P (3.12)

exp[ −|| x − y(r) ||² / (2·F2²) ] , r = 1, ..., R (3.13)
The F values can be taken to be one-half the average distance between the feature
vectors in the same group, or, at each exemplar, one-half the distance from the
exemplar to its nearest other exemplar vector. The kth output node sums the values
received from the hidden nodes in the kth group, called mixed Gaussians or Parzen
windows. The sums are defined by

f1(x) = [ 1 / ( (2π)^(N/2) · F1^N · P ) ] · Σp exp[ −|| x − x(p) ||² / (2·F1²) ] (3.14)

f2(x) = [ 1 / ( (2π)^(N/2) · F2^N · R ) ] · Σr exp[ −|| x − y(r) ||² / (2·F2²) ] (3.15)
x is any input feature vector, F1 and F2 are the spread parameters (standard
deviations) of the Gaussians in Classes 1 and 2, respectively, N is the dimension of
the input vectors, P is the number of center vectors in Class 1 and R is the number of
centers in Class 2, x(p) and y(r) are centers in the respective Classes 1 and 2, and
|| x − x(p) || is the Euclidean distance (square root of the sum of squared differences)
between x and x(p). Any input vector x is put through both sum functions f1(x) and
f2(x), and the maximum value (maximum a posteriori, or MAP value) of f1(x) and
f2(x) decides the class. For K > 2 classes the process is analogous. There is no
iteration nor computation of weights. For a large number of Gaussians in a sum, the
error buildup can be significant. Thus the feature vectors in each class may be
reduced by thinning those that are too close to one another and making F larger.
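
A compact sketch of this classification rule (Parzen-window sums followed by the
MAP decision) is given below in Python/NumPy; the exemplar vectors and spread
values F are hypothetical, and the normalizing constant follows the reconstructed
Equations 3.14 and 3.15.

import numpy as np

def pnn_classify(x, class_vectors, spreads):
    # pick the class whose Parzen-window density f_k(x) is largest (MAP decision)
    densities = []
    for X_k, F_k in zip(class_vectors, spreads):
        d2 = np.sum((X_k - x) ** 2, axis=1)                  # ||x - x(p)||^2 for each exemplar
        n = x.size                                           # dimension N of the feature vectors
        norm = (2.0 * np.pi) ** (n / 2.0) * F_k ** n * len(X_k)
        densities.append(np.sum(np.exp(-d2 / (2.0 * F_k ** 2))) / norm)
    return int(np.argmax(densities))                         # 0 -> Class 1, 1 -> Class 2, ...

# two hypothetical classes of exemplar feature vectors
class1 = np.array([[0.0, 0.0], [0.2, 0.1]])
class2 = np.array([[1.0, 1.0], [0.9, 1.2]])
print(pnn_classify(np.array([0.1, 0.2]), [class1, class2], spreads=[0.3, 0.3]))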

3.2.3.3 Back propagation Neural Network Architecture

The back propagation learning algorithm is one of the most important developments
in neural networks. Back propagation networks are known for their ability to
generalize well on a wide variety of problems, which is why they are used for the vast
majority of working neural network applications. Back propagation networks are a
supervised type of network, i.e., trained with both inputs and outputs. Depending
upon the number of patterns, training may be slower than for other paradigms. When
using back propagation networks, the precision of the network can be increased by
creating a separate network for each output if the outputs are not categories
(Frederick, 1996).

This learning algorithm is applied to multilayer feed forward networks consisting of
processing elements with continuous differentiable activation functions. Such
networks associated with the back propagation learning algorithm are also called
back propagation networks. There are several factors related to this network: the
initial weights, the learning constant, the cost function, the update rule, the size and
nature of the training sets, and the network architecture ( including the number of
hidden nodes and the number of hidden layers ).

The ultimate solutions of a multilayer feed forward network are strongly affected by
the initial weights. Normally the weight matrices are initialized with small random
values.

The learning constant is another important factor that affects the efficiency and
convergence of the back propagation algorithm. A large value of the learning constant
can speed up convergence but might result in overshooting, while a small value has
the opposite effect. Another problem is that the best value of the learning constant at
the beginning of training may not be as good later in training. Therefore the learning
can be improved by using an adaptive learning constant, which can decrease the
training time by keeping the learning step size as large as possible while keeping
learning stable.

Also, any differentiable function which is minimized when its arguments are equal
can be used as the cost function; however, the update rule needs to be changed
correspondingly for different cost functions. The least squares cost function is the
most popular one and has been used in a large variety of applications because of its
simplicity.
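
As a small illustration of the least squares cost function and the gradient-based update
rule, the sketch below trains a single-hidden-layer logistic network in Python/NumPy;
the network size, learning constant, number of epochs and the artificial data are
arbitrary choices, and bias terms are omitted for brevity.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(20, 3))       # 20 training patterns with 3 inputs
t = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # artificial target values in {0, 1}

W1 = rng.normal(scale=0.1, size=(3, 5))        # small random initial weights, hidden layer
W2 = rng.normal(scale=0.1, size=(5, 1))        # small random initial weights, output layer
eta = 0.5                                      # learning constant

for epoch in range(2000):
    h = sigmoid(X @ W1)                        # hidden layer outputs
    y = sigmoid(h @ W2)                        # network outputs
    err = y - t                                # least squares cost is 0.5 * sum(err ** 2)
    delta2 = err * y * (1.0 - y)               # output layer error signal
    delta1 = (delta2 @ W2.T) * h * (1.0 - h)   # error signal back propagated to the hidden layer
    W2 -= eta * h.T @ delta2                   # gradient descent weight updates
    W1 -= eta * X.T @ delta1

print(float(np.mean((y - t) ** 2)))            # final mean squared error on the training set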

The two major requirements on the training data are that they be sufficient and proper.
However, there are no standard procedures or rules valid for all cases when choosing
the training data. Normally the training data should cover the entire expected input
space. In most situations scaling and normalization are necessary to help the learning.
The back propagation artificial neural network is good at generalization: after being
well trained, it can respond to input patterns which are new to the network.
Generalization is important for learning tasks where the number of inputs is large
and the data itself is noisy. In general, we are not looking for a neural network
system that can best fit the training input patterns. Instead, we are looking for a
system trained with data that can respond to the testing of new input patterns in a
satisfactory manner. However, there is a phenomenon called “over fitting” that occurs
in some networks when the network has too many trainable parameters for the given
amount of training data. “Over fitting“ means the network learns very well on all the
training input patterns, but does not generalize well. Figure 3.12 demonstrates that the
network cannot generate reasonable output for the data between the original inputs
due to over fitting.

On the other hand, with too few trainable parameters, the network will fail to learn
the training data and will also perform very poorly on the testing data. Therefore, in
actual practice, we normally introduce the trainable parameters into the system
stepwise to find the optimum number of input parameters that performs well at
generalization. Another situation that may cause over fitting is when the acceptable
error goal in the training stage is set to some small value which is difficult for the
system to reach. In this situation the system parameters will be updated in order to
specifically fit the training data set, but will no longer do a good job of fitting the
testing data sets. In order to solve this problem we normally decrease the value of the
acceptable error goal stepwise, check the performance of the system, and choose the
error goal that lets the system perform best on the testing data. The number of hidden
layers and the number of hidden nodes are the basic factors we need to determine
when setting up the networks. For the number of hidden layers, using one or two
hidden layers is common and satisfies most problems. For the number of hidden
nodes, it is rather difficult to follow any standard rules, due to the complexity of the
network mapping and the nondeterministic nature of real world problems.

Figure 3.12 : Mismatch of the Function Due to Over fitting

3.2.3.4 Kohonen Architecture

The Kohonen self organizing map network used in numerical programs is a type of
unsupervised network, which has the ability to learn without being shown correct
outputs in sample patterns. These networks are able to separate data into a specified
number of categories. There are only two layers: an input layer and an output layer,
which has one neuron for each possible output category.

The training patterns are presented to the input layer, then propagated to the output
layer and evaluated; one output neuron is the “winner”. The network weights are
adjusted during training. This process is repeated for all patterns for a number of
epochs chosen in advance. This network is very sensitive to the learning rate, which is
lowered slightly but steadily as the training progresses, causing smaller and smaller
weight changes. This causes the network to stabilize.

The network adjusts the weights for the neurons in a neighborhood around the
winning neuron. The neighborhood size is variable, starting off fairly large
(sometimes even close to the number of categories) and decreasing with learning,
until during the last training events the neighborhood is zero, meaning that by then
only the winning neuron’s weights are changed. By that time the learning rate is very
small, and the clusters have been defined. The architecture automatically adjusts the
learning rate and neighborhood size, but the user has to specify the initial values as
well as the total number of epochs for which learning will continue (Frederick, 1996).
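
A minimal sketch of this training scheme for a one-dimensional Kohonen map is given
below (Python/NumPy); the schedules used for shrinking the learning rate and the
neighborhood are simple linear ones chosen only for illustration.

import numpy as np

def som_train(X, n_units, epochs=100, lr0=0.5, radius0=None, seed=0):
    # one-dimensional Kohonen map: winner search plus neighborhood update,
    # with the learning rate and neighborhood size shrinking over the epochs
    rng = np.random.default_rng(seed)
    W = rng.uniform(X.min(), X.max(), size=(n_units, X.shape[1]))
    radius0 = radius0 if radius0 is not None else n_units / 2.0
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                  # learning rate lowered steadily
        radius = max(radius0 * (1.0 - epoch / epochs), 0.0)
        for x in X:
            winner = int(np.argmin(np.sum((W - x) ** 2, axis=1)))
            for j in range(n_units):
                if abs(j - winner) <= radius:              # only units in the neighborhood move
                    W[j] += lr * (x - W[j])
    return W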

3.2.3.5 GMDH Architecture

The technique called Group Method of Data Handling (GMDH) was invented by
A.G.Ivakhnenko and enhanced by others, including A.R.Barron. This technique has
also been called “polynomial nets”.

GMDH works by building successive layers with complex links (or connections) that
are the individual terms of a polynomial. These polynomial terms are created by
using linear and non-linear regression. The initial layer is simply the input layer. The
first layer created is made by computing regressions of the input variables and then
choosing the best ones. The second layer is created by computing regressions of the
values in the first layer along with the input variables. Again, only the best are
chosen by the algorithm. These are called survivors. This process continues until the
net stops getting better (according to a prespecified selection criterion).

The resulting network can be represented as a complex polynomial description of the


model. You may view the formula, which contains the most significant input
variables. In some respects, it is very much like using regression analysis. GMDH
can build very complex models while avoiding over fitting problems.

GMDH contains several evaluation methods, called selection criteria, to determine
when it should stop training. One of these, called regularity, is similar to calibration
in that the net uses the constructed architecture that works best on the test set. The
other selection criteria do not need a test set, because the network automatically
penalizes models that become too complex in order to prevent overtraining. The
advantage of this is that all available data can be used to train the network. A by-
product of GMDH is that it recognizes the most significant variables as it trains, and
will display a list of them (Frederick, 1996).
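
A much simplified sketch of one GMDH layer is given below (Python/NumPy); it fits a
quadratic Ivakhnenko-type polynomial to every pair of current variables, scores each
candidate on a test set (the regularity criterion), and keeps the best few as survivors.
The function names and the number of survivors are assumptions for illustration only.

import numpy as np
from itertools import combinations

def poly_features(a, b):
    # terms of the quadratic polynomial for a pair of variables
    return np.column_stack([np.ones_like(a), a, b, a * a, b * b, a * b])

def gmdh_layer(Z_train, y_train, Z_test, y_test, n_survivors=4):
    # fit a polynomial to every pair of variables and keep the best units on the test set
    candidates = []
    for i, j in combinations(range(Z_train.shape[1]), 2):
        A = poly_features(Z_train[:, i], Z_train[:, j])
        coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
        pred_test = poly_features(Z_test[:, i], Z_test[:, j]) @ coef
        err = np.mean((pred_test - y_test) ** 2)             # regularity (test set) criterion
        candidates.append((err, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    survivors = candidates[:n_survivors]
    new_train = np.column_stack([poly_features(Z_train[:, i], Z_train[:, j]) @ c
                                 for _, i, j, c in survivors])
    new_test = np.column_stack([poly_features(Z_test[:, i], Z_test[:, j]) @ c
                                for _, i, j, c in survivors])
    return survivors, new_train, new_test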

3.3 General Applications in Civil Engineering

3.3.1 Dynamic Soil–Structure Interaction Using Neural Networks For Parameter


Evaluation
The subject of soil structure interaction has attracted the attention of engineers
worldwide over the last four decades. Foundations are subjected to loads which may
induce high pressures in soils, causing considerable deformations between soils and
structural members. These deformations may eventually affect the forces transferred
at the foundation level. This phenomenon is referred to as soil-structure interaction.
Thus, to study the behavior of interfaces, it is necessary to characterize the behavior
at the interface, model constitutive relationships mathematically, and incorporate the
model together with the governing equations of mechanics into numerical procedures
such as the finite element method. Such an approach then can be used for solving
complex problems that involve dynamic loading, nonlinear material behavior, and
the presence of water, leading to saturated interfaces.

A biased artificial neural network program based on the back propagation algorithm
has been developed to find saturated Nevada sand-aluminum interface parameters
from available saturated Ottawa sand-steel interface parameters and saturated Sabine
clay-steel interface parameters.

From the research it is shown that the model can be used to solve complex problems
involving dynamic loading, nonlinear material behavior and saturated conditions. It
is also shown that artificial neural networks can be used to obtain material
parameters for the model from available sets of parameters for different materials.

3.3.2 A Neural Network Approach For Predicting The Structural Behavior of Concrete Slabs
This research investigates the use of Neural Networks ( NN ) as a preliminary
alternative to mathematical modeling or experimental testing for quick prediction of
the structural behavior of reinforced concrete slabs. Such predictions could be
utilized by a structural engineer on a preliminary basis to determine the initial
suitability of a particular slab design. Once this suitability was determined, the
engineer could then proceed with further, more traditional methods of design. This
will serve to illustrate the simple manner by which neural networks model the impact of a set of parameters (inputs) on a set of simultaneous conclusions (outputs), and the powerful learn-by-example and generalization mechanism that neural networks use to detect the hidden relationships linking the inputs to their outputs.

Neural networks are computational models that adopt a training mechanism to extract the relationships that link a set of causal input parameters to their resulting conclusions. Once neural networks are trained, they can predict the results for an unknown case (not used in training) if provided with the input parameters alone. Some characteristics of neural networks that make them potentially useful for many different types of applications are (Moselhi et al., 1992):

• Neural networks are organized within a parallel, decentralized structure rather than the serial architecture found in conventional computer algorithms. As a result, processing occurs in a rapid manner,

• They have distributed memories; neural network memories are represented by interconnection weights spread over all of the network's processing elements,

• They are fault tolerant, that is, they are still functional even after several processing
elements are damaged and become defective,

• They have the ability to learn by example,

• They have the ability to simulate the behavior of systems with limited modeling
effort, and

• They can provide speedy and reasonably accurate solutions in complex, uncertain,
and subjective situations.

3.3.3 Neural Network Analysis of Structural Damage Due to Corrosion


The need for the maintenance of bridges has been drawing attention in recent years. Many bridges require some kind of repair, and the number of such bridges is likely to grow for at least the foreseeable future. When determining whether or not a particular bridge should be repaired, common practice is for a maintenance expert to inspect the bridge visually. This method is time-consuming, and the growing number of bridges requiring attention is making the current approach impractical. Under these circumstances, engineers who are not repair experts are increasingly called upon to judge the timing of repairs.

In order to help non-specialist engineers to make appropriate decisions on the timing of repairs, an attempt is made to develop a practical decision support system for the damage assessment of structural corrosion. In this system, the neural network technique is applied to the damage assessment. When there are sufficient records of past bridge maintenance, the learning ability of the neural network is useful for saving the working time and effort needed for inspection and analysis. A further reduction of effort can be achieved by utilizing image processing, through which the damage state of structural corrosion can be assessed automatically and independently of the subjectivity of the inspector or engineer.

3.3.4 Artificial Neural Networks for Predicting the Response of Structural Systems
with Viscoelastic Dampers
Artificial neural networks (ANNs) are emerging as powerful tools for solving problems of an iterative nature. Among the various neural network paradigms available, many civil engineering problems are solved using multilayer feed-forward back-propagation networks. ANN-based methods have been used in
environmental and water resources engineering, traffic engineering, highway
engineering, and geotechnical engineering. Application of ANNs for structural
analysis, design automation, optimization, system identification, condition
assessment and monitoring, finite-element mesh generation, structural material
characterization, modeling, and structural control has been reported extensively in
the literature (Adeli, 2001).

An attempt has been made to estimate the inelastic demand of the structural systems
with passive energy dissipators in terms of average peak displacement using a back-
propagation neural network. The methodology to arrive at the base shear and roof
displacement using the effective damping and effective period predicted by the
neural network is also illustrated. Predicting the inelastic demand of structural
systems with dampers is a time-consuming process, which involves several
iterations. The ANN methodology has been applied to quickly predict the inelastic demand in terms of peak displacement, effective damping, and effective time period. The complete methodology to arrive at the design base shear force and roof displacement has been illustrated. The ANN can be effectively used for new designs as well as for checking the response of any retrofitted structure for the chosen design spectrum. This network is useful in quickly deciding the amount of damping and the number of dampers required to reduce the peak displacement and help in restricting further damage. The sensitivity of the input parameters can also be studied, which will help in the selection and proportioning of structural members and dampers.

3.3.5 Modeling Ground Motion Using Neural Networks


In view of potential shortcomings of analytical modeling and considering the ever
increasing bulk of information on earthquake-induced ground motions within the
Valley of Mexico, knowledge-based procedures are being explored to develop
alternate ways to analyze the response of Mexico City soil deposits. Modeling
earthquake geotechnical problems by means of Artificial Neural Networks ( ANNs ),
when these are trained on a comprehensive set of data, is very appealing because
ANNs are capable of capturing and storing the related-phenomenon knowledge
directly from the information that originates during the monitoring process.

The information given above demonstrates that ANNs are able to predict with good approximation the ground surface responses to seismic events originating from different earthquake sources. After a significant number of trials using different combinations of input functions, learning rules and transfer functions, combined with one and two hidden layers and a variety of processing neurons in each layer, it was found that the architecture with the general regression learning rule was the most accurate.

3.3.6 Analysis Of Soil Water Retention Data Using Artificial Neural Networks
Several approaches for estimating hydraulic properties have been developed over the
last three decades (e.g. Husz, 1967; Gupta & Larson, 1979; Vereecken et al., 1989).
Tietje & Tapkenhinrichs (1993) reviewed and tested the quality of 13 different
pedotransfer functions (PTFs). One of their conclusions was that PTFs which
predicted shape parameters such as the Van Genuchten parameters (Van Genuchten,
1980) were inaccurate. All these studies use regression techniques, either linear or
non-linear.

When using regression to predict the water retention characteristics, the relations
between textural data and hydraulic characteristics need to be described by well-
defined, a priori, regression models, which in general is difficult since these models
are not known. Neural networks (NNs) do not need such an a priori model. A neural network is an adaptable non-linear data transfer structure that can learn the relations
between input and output data while being insensitive to measurement noise (Hecht-
Nielsen, 1990).

Pachepsky et al. (1996) used NNs to estimate points of the water retention curve
using textural data from 200 samples and compared the results with the outcomes of
regression. Although the differences were not always significant, NNs performed
slightly better than regression. Schaap & Bouten (1996) used a neural network to
predict Van Genuchten's shape parameters for wetting and drying branches of the
water retention curve of sandy forest soils in the Netherlands. However, in none of
these studies were the effects of soil structure (i.e. ped-size and shape) considered.
The effects of soil structure on the hydraulic characteristics become more important
at small suctions, as has been explained by Durner (1994) in his study on the effect
of bi-modal pore size distributions on the retentivity and conductivity curve, and by
Booltink et al. (1993) who quantified the role of soil structure on water flow in
aggregated clay soils.

The first objective of this research is to illustrate the method of neural network
modeling by the development and evaluation of PTFs, emphasizing the combined
effects of soil textural and structural data (based on available soil data from the
Netherlands and Scotland). The second objective is to compare the performance of
the NNs with the previously developed regression-based PTFs, as described by
Gupta & Larson (1979).

It has been seen that classical statistical regression techniques require an a priori
assumption on the model type (e.g. linear, exponential or logarithmic) and that the
residuals are independent and identically distributed, whereas neural networks do
not. The avoidance of these a priori assumptions and the organizational structure of
strongly interconnected nodes make neural networks valuable when non-linear
relations have to be described, or when data of different types, quantitative as well as
qualitative, have to be included in the analysis. This study illustrates the procedure
for predicting points of the water retention curve by the inclusion of soil texture and
soil structure data.

Neural network models performed somewhat better than previously developed
regression-type transfer functions, although differences were not significant. The
neural network models were developed and tested for a limited number of soils.

3.3.7 Neural Network Based Prediction of Ground Surface Settlements due to Tunneling
In this research, a neural network based procedure to predict ground surface settlement during tunneling has been proposed. Incorporating a Gaussian normal distribution function, the settlement profiles collected from various tunnel sites (Seoul subway) are analyzed, leading to two representative parameters. These parameters
are then stored in a database with background tunnel information for training a neural
network. It has been found that the use of both parameters representing monitored
raw profile leads to more efficiency in storing as well as in further applications of the
database.

Monitored ground surface profiles for a total of 113 monitoring lines have been collected to train the chosen optimal neural network, and a parametric study has been performed. This leads to a rational prediction based on past tunnel records using the pattern recognition and memorization capability of an ANN. These capabilities enable the neural network based prediction to be improved automatically as further information is accumulated, without any restriction.

In conclusion, this research has introduced artificial intelligence for the prediction of ground surface settlement based on accumulated field data. However, it should be noted that the capability of such codes to make accurate predictions is entirely dependent on the quality and the quantity of the data used in training the ANNs. If the data are deficient or training is inadequate, the proposed neural network based prediction should be treated with caution. Therefore, the collection and analysis of monitored data should be carried out carefully to obtain reliable predictions.

3.3.8 Neural Network Modeling of water table depth fluctuations


Recent literature reviews reveal that ANNs, specifically feed-forward networks, have been successfully used for the modeling and prediction of water resources variables [Coulibaly et al., 1999; Maier and Dandy, 2000]. The differences between the ANN-based modeling approach and conventional methods are discussed in detail by many authors [Connor et al., 1994; Sarle, 1994; Weigend and Gershenfeld, 1994; Suykens et al., 1996] and specifically in hydrological applications by French et al. [1992], Karunanithi et al. [1994], Hsu et al. [1995], Tokar and Markus [2000], and Coulibaly et al. [2000a]. Furthermore, Hornik et al. [1989] established that a three-layer feed-forward ANN could be considered a general nonlinear approximator. The major
advantage of an ANN is its ability to represent underlying nonlinear dynamics of the
system modeled without any a priori assumption regarding the processes involved.
Recently, ANNs have been successfully used for modeling complex time-varying patterns, such as low-frequency climatic oscillations [Coulibaly et al., 2000b]. In the context of aquifer system modeling, the ANN approach was first used to provide maps of conductivity or transmissivity values [Rizzo and Dougherty, 1994; Ranjithan et al., 1995] and to predict water retention curves of sandy soils [Schaap and Bouten, 1996]. More recently, ANNs have been applied to perform inverse groundwater modeling
for estimation of different parameters [Morshed and Kaluarachchi, 1998; Lebron et
al., 1999]. The purpose of this paper is to identify ANN models that can capture the
complex dynamics of large water table fluctuations, even with relatively short length
of training (or calibration) data. We specifically focus on temporal neural networks,
such as the input delay (IDNN) and the recurrent neural network (RNN) that have
different dynamically driven properties.

This study has shown that temporal and probabilistic neural networks are effective at
predicting monthly groundwater level fluctuations in the Gondo aquifer located in
the Sahel region. A significant advantage of these models is that they can provide
satisfactory predictions with short groundwater level records, which are a common
occurrence in countries with scarce instrumentation for groundwater monitoring. The
prediction results suggest that the RNN can be an effective tool for up to a 3 month
ahead forecast of the dry season deep water table depths.

3.4 General Applications in Geotechnical Engineering

The engineering properties of soil and rock exhibit varied and uncertain behaviour
due to the complex and imprecise physical processes associated with the formation
of these materials (Jaksa 1995). This is in contrast to most other civil engineering
materials, such as steel, concrete and timber, which exhibit far greater homogeneity
and isotropy. In order to cope with the complexity of geotechnical behaviour, and the
spatial variability of these materials, traditional forms of engineering design models are justifiably simplified. An alternative approach, which has been shown to have some degree of success, uses the data alone to determine the structure and parameters of the model. The ANN is well suited to modeling complex problems where the relationship between the model variables is unknown (Hubick 1992).

3.4.1 Pile Capacity


The prediction of pile load capacity, particularly predictions based on pile driving data, has been examined by several ANN researchers. Goh (1994a; 1995b) presented a neural network to predict the friction capacity of piles in clays. The neural network was trained with field data of actual case records. The model inputs were considered to be the pile length, the pile diameter, the mean effective stress and the undrained shear strength. The skin friction resistance was the only model output. The results obtained by utilising the neural network were compared with the results obtained by the method of Semple and Rigden (1986) and the β method (Burland 1973).

Goh (1995a; 1996b), soon after, developed another neural network to estimate the
ultimate load capacity of driven piles in cohesionless soils. In this study, the data
used were derived from the results of actual load tests on timber, precast concrete
and steel piles driven into sandy soils. The inputs to the ANN model that were found
to be more significant were the hammer weight, the hammer drop, the pile length, the
pile weight, the pile cross sectional area, the pile set, the pile modulus of elasticity
and the hammer type. The model output was the pile load capacity. When the model
was examined with the testing set, it was observed that the neural network
successfully modeled the pile load capacity. By examining the connection weights, it
was observed that the more important input factors are the pile set, the hammer
weight and the hammer type. The study compared the results obtained by the neural
networks with the following common relationships: the Engineering News formula
(Wellington 1892), the Hiley formula (Hiley 1922) and the Janbu formula (Janbu
1953). Regression analysis was carried out to obtain the coefficients of correlation of
predicted versus measured results for neural networks and the traditional methods.
Table 3.1 summarizes the regression analysis results, which indicate that the neural network predictions of the load capacity of driven piles were better than those obtained using the other methods.

Table 3.1 : Summary of regression analysis results of pile capacity prediction (Goh, 1995)

Method               Coefficient of correlation
                     Training data    Testing data
Neural network       0,96             0,97
Engineering News     0,69             0,61
Hiley                0,48             0,76
Janbu                0,82             0,89

3.4.2 Settlement of Foundations


The design of foundations is generally controlled by the criteria of bearing capacity
and settlement; the latter often governing. The problem of estimating the settlement
of foundations is very complex, uncertain and not yet entirely understood. This fact
encouraged researchers to apply the ANN technique to settlement prediction. Goh
(1994) developed a neural network for the prediction of settlement of a vertically
loaded pile foundation in a homogeneous soil stratum. The input variables for the
proposed neural network consisted of the ratio of the elastic modulus of the pile to
the shear modulus of the soil, pile length, pile load, shear modulus of the soil,
Poisson’s ratio of the soil and radius of the pile. The output variable was the pile
settlement. The desired output that was used for the ANN model training was
obtained by means of finite element and integral equation analyses developed by
Randolph and Wroth (1978).

A comparison of the theoretical and predicted settlements for the training and testing sets is given in Figure 3.13. The results in Figure 3.13 show that the neural network was able to successfully model the settlement of pile foundations.

Figure 3.13 : Comparison Of Theoretical Settlement And Neural Network Prediction (Goh, 1994)

Also, Sivakugan et al. (1998) explored the possibility of using neural networks to
predict the settlement of shallow foundations on granular soils. A neural network was
trained with five inputs representing the net applied pressure, average blow count
from the standard penetration test, width of foundation, shape of foundation and
depth of foundation. The output was the settlement of the foundation. The results
obtained by the neural network were compared with methods proposed by Terzaghi
and Peck (1967) and Schmertmann (1970). Based on the results obtained, it was
shown that the traditional method of Terzaghi and Peck and Schmertmann’s method
overestimate the settlements by about 2.18 times and 3.39 times respectively as
shown in Figure 3.14. In contrast, the predictions using the ANN model were good
(Figure 3.15).

Figure 3.14 : Settlement Predicted using Traditional methods
( Sivakugan et al.1998 )

Figure 3.15 : Settlement Prediction Using Artificial Neural Network


( Sivakugan et al.1998 )

Most recently, Shahin et al. (2000) carried out similar work for predicting the
settlement of shallow foundations on cohesionless soils. In this work, 272 data records were used for modeling. The input variables considered to have the most
significant impact on settlement prediction were the footing width, the footing
length, the applied pressure of the footing and the soil compressibility. The results of
the ANN were compared with three of the most commonly used traditional methods.
These methods were Meyerhof (1965), Schultze and Sherif (1973) and Schmertmann
et al. (1978). The results of the study confirmed those found by Sivakugan et al.
(1998), in the sense that ANNs were able to predict the settlement well and
outperform the traditional methods. As shown in Table 3.2, the ANN produced high
coefficients of correlation, r, low root mean squared errors (RMSE) and low mean absolute errors (MAE) compared with the other methods.

Table 3.2 : Comparison of predicted vs measured settlements (Shahin et al. 2000)

Category          ANN     Meyerhof (1965)   Schultze & Sherif (1973)   Schmertmann et al. (1978)
Correlation, r    0,99    0,33              0,86                       0,70
RMSE (mm)         3,9     27,0              23,8                       45,2
MAE (mm)          2,6     20,8              11,1                       29,5

3.4.3 Soil Properties and Behaviour


Soil properties and behaviour are an area that has attracted many researchers to modeling with ANNs. Developing engineering correlations between various soil parameters is an issue discussed by Goh (1995a; 1995c). Goh used neural networks
to model the correlation between the relative density and the cone resistance from
cone penetration test (CPT), for both normally consolidated and over-consolidated
sands. Laboratory data, based on calibration chamber tests, were used to successfully
train and test the neural network model. The neural network model used the relative
density and the mean effective stress of soils as inputs and the CPT cone resistance
as a single output. The ANN model was found to give high coefficients of correlation
of 0.97 and 0.91 for the training and testing data, respectively, which indicated that
the neural network was successful in modeling the non-linear relationship between
the CPT cone resistance and the other parameters.

Ellis et al. (1995) developed an ANN model for sands based on grain size
distribution and stress history. Sidarta and Ghaboussi (1998) employed an ANN model within a finite element analysis to extract the geomaterial constitutive behaviour from non-uniform material tests. Penumadu and Jean-Lou (1997) used
NNs for representing the behaviour of sand and clay soils. Ghaboussi and Sidarta
(1998) used NNs to model both the drained and undrained behaviour of sandy soil
subjected to triaxial compression-type testing. Penumadu and Zhao (1999) also used
ANNs to model the stress-strain and volume change behaviour of sand and gravel
under drained triaxial compression test conditions. Zhu et al. (1998a; 1998b) used
neural networks for modeling the shearing behaviour of a fine-grained residual soil,
dune sand and Hawaiian volcanic soil. Cal (1995) used a neural network model to
generate a quantitative soil classification from three main factors (plastic index,
liquid limit and clay content). Najjar et al. (1996a) showed that neural network-based
models can be used to accurately assess soil swelling, and that neural network
models can provide significant improvements in prediction accuracy over statistical
models. Romero and Pamukcu (1996) showed that neural networks are able to
effectively characterize and estimate the shear modulus of granular materials.
Agrawal et al. (1994); Gribb and Gribb (1994) and Najjar and Basheer (1996b) all
used neural network approaches for estimating the permeability of clay liners.
Basheer and Najjar (1995) and Najjar et al. (1996b) presented neural network
approaches for soil compaction.

Other applications include modeling the mechanical behaviour of medium-to-fine sand (Ellis et al. 1992), modeling rate-dependent behaviour of clay soils (Penumadu et al. 1994), simulating the uniaxial stress-strain constitutive behaviour of fine-
grained soils under both monotonic and cyclic loading (Basheer 1998; Basheer and
Najjar 1998), characterizing the undrained stress-strain response of Nevada sand
subjected to both triaxial compression and extension stress paths (Najjar and Ali
1999; Najjar et al. 1999), predicting the axial and volumetric stress-strain behaviour
of sand during loading, unloading and reloading (Zhu and Zaman 1997), and predicting the anisotropic stiffness of granular materials from standard repeated load triaxial tests (Tutumluer and Seyhan 1998).

3.4.4 Liquefaction
Liquefaction is a phenomenon which occurs mainly in loose and saturated sands as a
result of earthquakes. It causes the soil to lose its shear strength due to an increase in
pore water pressure, often resulting in large amounts of damage to most civil
engineering structures. Determination of liquefaction potential due to earthquakes is
a complex geotechnical engineering problem. Goh (1994b) used neural networks to
model the complex relationship between seismic and soil parameters in order to
investigate liquefaction potential. The neural network used in this work was trained
using case records from 13 earthquakes that occurred in Japan, United States and
Pan-America during the period 1891–1980. The study used eight input variables and
only one output variable. The input variables were the SPT-value, the fines content,
the mean grain size, the total stress, the effective stress, the equivalent dynamic shear
stress, the earthquake magnitude and the maximum horizontal acceleration at ground
surface. The output was assigned a binary value of 1, for sites with extensive or
moderate liquefaction, and a value of 0 for marginal or no liquefaction. The results
obtained by the neural network model were compared with the method of Seed et al.
(1985). The study showed that the neural network gave correct predictions in 95% of
cases, whereas Seed et al. (1985) gave a success rate of 84%. Goh (1996a) also used
neural networks to assess liquefaction potential from cone penetration test (CPT)
resistance data. The data records were taken for sites of sand and silty sand deposits
in Japan, China, United States and Romania, representing five earthquakes that
occurred during the period 1964–1983. A similar neural network modeling strategy,
as used in Goh (1994b), was used for this study and the results were compared with
the method of Shibata and Teparaksa (1988). The neural network showed a 94% success rate, corresponding to the same number of incorrect predictions as the conventional method of Shibata and Teparaksa (1988).

Two other works (Najjar and Ali 1998; Ural and Saka 1998) also used CPT data to
evaluate soil liquefaction potential and resistance. Najjar and Ali (1998) used neural
networks to characterize the soil liquefaction resistance utilizing field data sets
representing various earthquake sites from around the world. The ANN model that
was developed in this work was generated to produce a liquefaction potential
assessment chart that could be used by geotechnical engineers in liquefaction
assessment tasks. Ural and Saka (1998) used neural networks to analyze liquefaction.

Comparison between this approach and a simplified liquefaction procedure indicated a similar rate of success for the neural network approach and the conventional approach.

Other applications of ANNs for liquefaction prediction include the prediction of


liquefaction resistance and potential (Juang and Chen 1999), investigation of the
accuracy of liquefaction prediction of ANNs compared with fuzzy logic and
statistical approaches (Ali and Najjar 1998) and assessment of liquefaction potential
using standard penetration test results (Agrawal et al. 1997).

3.4.5 Site Characterization


Site characterization is an area concerned with the analysis and interpretation of
geotechnical site investigation data. Zhou and Wu (1994) used a neural network
model to characterize the spatial distribution of rockhead elevations. The data used to
train the model were taken from seismic refraction surveys on more than 11 km of
transverse lines. The network used the spatial position (x- and y-coordinate) and the
surface elevation as inputs, and was used to estimate the rockhead elevation at that
location as the output. The trained network was tested to estimate the rockhead
elevations for all locations within the area of investigation by producing a contour
map. Results from the neural network model compared well with similar contour
maps, with the additional benefit that neural networks do not make assumptions or
simplify spatial variations.

A similar application relevant to groundwater characterization was described by Basheer et al. (1996), who indicated that neural networks can be used to map and logically predict the variation of soil permeability in order to identify landfill boundaries and construct a waste landfill. Rizzo et al. (1996)
presented a new site characterization method called SCANN (Site Characterization
using Artificial Neural Networks) that is based on the use of neural networks to map
discrete spatially-distributed fields.

3.4.6 Earth Retaining Structures


Goh et al. (1995) developed a neural network model to provide initial estimates of
maximum wall deflections for braced excavations in soft clay. The neural network was used to synthesize data derived from finite element studies on braced excavations
in clay. The input parameters used in the model were the excavation width, soil thickness/excavation width ratio, wall stiffness, height of excavation, soil undrained
shear strength, undrained soil modulus/shear strength ratio and soil unit weight. The
maximum wall deflection was the only output. Using regression analysis, the scatter of the predicted neural network deflections relative to the deflections obtained using the finite element method was assessed. The results produced high coefficients of
correlation for the training and testing data of 0.984 and 0.967, respectively. Some
additional testing data from actual case records were also used to confirm the
performance of the trained neural network model. The study intended to use the
neural network model as a time-saving and user-friendly alternative to the finite
element method.

3.4.7 Tunnels and Underground Openings


Shi et al. (1998) presented a study of neural networks for predicting settlements of
tunnels. A general NN model was trained and tested using data from the 6.5 km
Brasilia Tunnel, Brazil. The study identified many factors to be used as the model
inputs and three settlement parameters as the model outputs. The input parameters
were the length of excavation from drive start, the depth of soil cover above tunnel
crown, the area of tunnel section, the delay for closing invert, the water level depth,
the rate of advance of excavation, the construction method, the mean blow count
from standard penetration test at tunnel crown level, the tunnel spring-line level and
the tunnel inverted arch level. The three output parameters were the settlement at the
face passage, the settlement at the invert closing and the final settlement after
stabilization. The results showed that the NN model could not achieve a high level of
accuracy. To improve the prediction accuracy, the study proposed a modular NN
model based on the concept of integrating multiple neural network modules in one
system, with each module being constrained to operate at one specific situation of a
complicated real world problem. The modular concept showed an improvement in
terms of model convergence and prediction. The capability to improve the models
developed in this work was later extended by Shi (2000) by applying input data
transformation. This extended study indicated that distribution transformation of the
input variables reduced the prediction error by more than 13%.

4. NEURAL NETWORK APPROACHES FOR SLOPE STABILITY

4.1 Introduction

In this chapter, the application of neural networks to slope stability is discussed. Five models are introduced for the neural network approaches. Models 1 and 2 are Back-Propagation Neural Network (BPNN) approaches, and Models 3, 4 and 5 are General Regression Neural Network (GRNN) approaches. The data and case studies used for the Neural Network (NN) are given. The factor of safety and the seismic coefficients are examined with the BPNN and GRNN approaches. The data values are obtained from a doctoral thesis (Cao, 2002) and are used in the NN approaches. The Neural Network parameters and the case study are given in Section 4.2.

4.2 Input Parameters Information

In this study, nine input parameters that are important for the factor of safety are presented for the NN approaches, together with the factor of safety as the output. All data parameters are listed below; a small sketch of how these parameters can be arranged for training follows the list.

1 - H ( m. ) : The height of slope,

2- Hw ( m. ) : The height of water level,

3- Hb ( m. ) : The distance of firm base,

4- γ ( kN/m3 ) : The unit weight of soil,

5- β ( deg. ) : The inclination of slope,

6- c ( kPa. ) : The cohesion of soil,

7- φ ( deg. ) : The friction angle of soil,

8- kh : Horizontal seismic coefficient ,

9- kv : Vertical seismic coefficient,

10- F.S. : Factor of Safety ( with Bishop’s method )
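
As a minimal illustration (not the thesis data, which are listed in Appendix A), the Python sketch below shows one possible way of arranging the nine input parameters and the factor-of-safety output as an array of patterns, and of min-max scaling the inputs to [-1, 1] in the way a linear [-1,1] scale function does. The placeholder values and function names are assumptions made only for this example.

import numpy as np

# Column order follows the parameter list above: H, Hw, Hb, gamma, beta, c, phi, kh, kv, F.S.
patterns = np.array([
    [23.95, 2.0, 6.0, 19.08, 29.0, 15.3, 23.1, 0.273, 0.146, 1.19],   # placeholder pattern
    [10.00, 0.0, 3.0, 18.00, 35.0, 10.0, 30.0, 0.100, 0.050, 1.40],   # placeholder pattern
])
X, y = patterns[:, :9], patterns[:, 9]        # nine inputs, one output (factor of safety)

def scale_linear(x, lo=-1.0, hi=1.0):
    """Min-max scale each column to [lo, hi], mirroring a 'Linear [-1,1]' scale function."""
    xmin, xmax = x.min(axis=0), x.max(axis=0)
    return lo + (x - xmin) * (hi - lo) / (xmax - xmin)

X_scaled = scale_linear(X)                     # inputs scaled to [-1, 1] before training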

There were 170 data patterns for the Neural Network approaches (given in Appendix A), and the input and output value ranges are given below in Table 4.1. To see the combined effects of the parameters, the slope profile was taken as a single soil layer with one set of parameters, as shown in Figure 4.1.

Figure 4.1 : Basic Slope Profile and Slope Parameters

First of all, the input and output parameters are defined in the program, and then their value ranges are defined, so that the program can use the minimum, maximum and mean values while it is learning the model. The table of input and output value ranges is given below in Table 4.1.

Table 4.1 : Input and output value ranges for the Neural Network

Parameters        The height    The height of   The depth of   The inclination   The unit weight
                  of slope      water level     firm base      of slope          of soil
Variable Name     H (m.)        Hw (m.)         Hb (m.)        β (deg.)          γ (kN/m3)
Variable Type     I             I               I              I                 I
Min               3,65          0,0             0,00           11                9,00
Max               214,00        45,0            164,00         71                28,44
Mean              23,95         2,0             6,00           29                19,08
Std. Deviation    30,10         6,2             22,90          10                2,69

Parameters        The cohesion  The friction    Horizontal     Vertical          Factor of
                  of soil       angle of soil   seismic coeff. seismic coeff.    Safety
Variable Name     c (kPa)       Φ (deg.)        kh             kv                F.S.
Variable Type     I             I               I              I                 A
Min               0,0           0,000           0,035          0,050             0,620
Max               150,0         45,0            0,510          0,250             2,150
Mean              15,3          23,1            0,273          0,146             1,190
Std. Deviation    20,0          9,5             0,138          0,071             0,335

In Table 4.1, I denotes an input value and A denotes the actual output value. The parameters defined as inputs and as output are used in the model accordingly.

4.3 Analysis

Analysis results are presented in output tables and in error-through-pattern graphs. r², as defined in Neuroshell2, is a statistical indicator usually applied to multiple regression analysis. It compares the accuracy of the model to the accuracy of a trivial benchmark model whose prediction is simply the mean of all of the samples. A perfect fit would result in an r squared value of 1, a very good fit near 1, and a very poor fit near 0. The formula that Neuroshell2 uses is defined as follows:

r² = 1 − SSE / SSYY                      (4.1)

where

SSE = Σ ( y − ŷ )²                       (4.2)

SSYY = Σ ( y − ȳ )²                      (4.3)

where y is the actual value, ŷ is the predicted value of y, and ȳ is the mean of the y values.
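
As a quick cross-check of the definition above, the short Python sketch below computes r² exactly as in Equations 4.1 to 4.3; the function name is an assumption made for the example.

import numpy as np

def r_squared(y_actual, y_predicted):
    """r^2 = 1 - SSE/SSYY: model accuracy compared with the trivial benchmark
    that always predicts the mean of the samples (Equations 4.1-4.3)."""
    y_actual = np.asarray(y_actual, dtype=float)
    y_predicted = np.asarray(y_predicted, dtype=float)
    sse = np.sum((y_actual - y_predicted) ** 2)
    ss_yy = np.sum((y_actual - y_actual.mean()) ** 2)
    return 1.0 - sse / ss_yy

# Example usage with a few actual/network pairs such as those in Table 4.5:
print(r_squared([1.32, 0.94, 0.97, 1.61], [1.2958, 0.8895, 0.9517, 1.8270]))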

4.3.1 BPNN approaches


4.3.1.1 Model 1
In Table 4.2, the training configuration for Model 1 is given. The architecture consists of 3 hidden slabs with different activation functions, and the scale function is linear [-1,1]. Pattern selection is rotational rather than random, and 145 of the 170 processed patterns are used for training.

Table 4.2 : Model 1 approach for training

Architecture                       3 hidden slabs, different activation functions
% Test Set Extraction              15
# of Hidden Layers                 2
# of Neurons in One Hidden Layer   18
Learning Rate                      0,2
Momentum Factor                    0,6
Initial Weight                     0,3
Calibration Interval               400
Scale Function                     Linear [-1,1]
Missing Values                     Error Condition
Pattern Selection                  Rotation
Patterns Processed                 170
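
To make the back-propagation setup concrete, the sketch below trains a minimal feed-forward network with the hyperparameters listed in Table 4.2 (learning rate 0.2, momentum 0.6, initial weights drawn in ±0.3, 18 hidden neurons). It uses a single tanh hidden slab with a linear output, which is a simplification of the 3-slab, mixed-activation architecture built by Neuroshell2, so it should be read as an illustration of the training mechanism rather than a reproduction of the thesis model; all function names are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def train_bpnn(X, y, n_hidden=18, lr=0.2, momentum=0.6, init_w=0.3, epochs=2000):
    """Minimal back-propagation training: one tanh hidden slab, linear output,
    and weight updates with a momentum term."""
    n_in = X.shape[1]
    W1 = rng.uniform(-init_w, init_w, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.uniform(-init_w, init_w, (n_hidden, 1)); b2 = np.zeros(1)
    dW1 = np.zeros_like(W1); db1 = np.zeros_like(b1)
    dW2 = np.zeros_like(W2); db2 = np.zeros_like(b2)
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)               # hidden slab
        out = h @ W2 + b2                       # linear output (factor of safety)
        err = out - y
        gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)   # output-layer gradients
        dh = (err @ W2.T) * (1 - h**2)                      # back-propagated error
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)      # hidden-layer gradients
        dW2 = momentum * dW2 - lr * gW2; W2 += dW2          # momentum updates
        db2 = momentum * db2 - lr * gb2; b2 += db2
        dW1 = momentum * dW1 - lr * gW1; W1 += dW1
        db1 = momentum * db1 - lr * gb1; b1 += db1
    return W1, b1, W2, b2

def predict_bpnn(params, X):
    W1, b1, W2, b2 = params
    return (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()

In use, X would hold the 170 scaled patterns of the nine inputs and y the Bishop factors of safety, with about 15 % of the patterns held out as the test set for calibration.
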
As seen in Table 4.3, the most important parameter is the height of the slope, followed by the seismic coefficients. Thus, for Model 1 the earthquake effect on the slope can be seen.

Table 4.3 : The contribution factors for Model 1

Parameters     The Contribution Function   Order of the Contribution Function
H (m.)         0,27667                     1
kh             0,10485                     2
kv             0,10008                     3
Hw (m.)        0,09826                     4
Hb (m.)        0,09587                     5
γ (kN/m3)      0,08857                     6
β (deg.)       0,08404                     7
c (kPa)        0,07862                     8
Φ (deg.)       0,07305                     9
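
Contribution factors of this kind are typically derived from the trained connection weights. Neuroshell2's exact formula is not reproduced here; as an illustration of the idea, the sketch below computes a Garson-style relative importance from the absolute weights of a one-hidden-layer network such as the one trained in the sketch above. It is a generic, assumed measure, not the thesis calculation.

import numpy as np

def garson_contributions(W1, W2):
    """Garson-style relative importance of each input, computed from the absolute
    connection weights (input->hidden W1, hidden->output W2). Larger values mean
    a larger share of the network's weight structure is attributed to that input."""
    W1 = np.abs(W1)                              # shape (n_inputs, n_hidden)
    w2 = np.abs(np.asarray(W2)).reshape(-1)      # shape (n_hidden,)
    share = (W1 / W1.sum(axis=0)) * w2           # each hidden neuron's share split among inputs
    importance = share.sum(axis=1)
    return importance / importance.sum()         # normalized so the contributions sum to 1

# e.g. garson_contributions(W1, W2) with the weights returned by train_bpnn above.
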
After approximately 100 model trials, the results of the Model 1 analysis were obtained. As seen in Table 4.4, the success rate (R squared) is 79,31 % and the correlation coefficient is 0,8912.

Table 4.4 : The results of Model 1


R squared: 0,7931
r squared: 0,7943
Mean squared error: 0,023
Mean absolute error: 0,109
Min. absolute error: 0
Max. absolute error: 0,667
Correlation coefficient r: 0,8912

The r² value is the best-fit correlation coefficient obtained from Excel and is the same as the r² reported by Neuroshell2. This shows that the process which has been performed is correct.

The black line is the best-fit line and the green lines are the 20% error limits taken from the best-fit line. In Figure 4.2, the 20% error limit means that when the difference between the target output and the network result lies beyond the 20% green lines, the result is considered an incorrect simulation (Bayrak, 2004). According to this assumption, when the error graphs are considered, this best network gives 10 incorrect simulations out of the 170 data sets for Model 1. The error percentage of 5.88 %, which corresponds to a success percentage of 94.12 %, is shown for this simulation in Figure 4.2.
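
The counting behind these percentages can be sketched as below. For simplicity the tolerance is applied relative to the target value rather than to the best-fit line used in the figures, so the counts are illustrative only; the function name is an assumption.

import numpy as np

def simulation_success(actual, network, tol=0.20):
    """Count simulations whose relative error exceeds the tolerance and return the
    number of incorrect simulations together with the success percentage."""
    actual = np.asarray(actual, dtype=float)
    network = np.asarray(network, dtype=float)
    relative_error = np.abs(actual - network) / np.abs(actual)
    incorrect = int(np.sum(relative_error > tol))
    success = 100.0 * (1.0 - incorrect / len(actual))
    return incorrect, success

# e.g. 10 incorrect simulations out of 170 patterns give a success percentage of 94.12 %.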

Figure 4.2 : Actual – Network output scatter for Model 1 and error limits

There are 24 incorrect simulations, which corresponds to a simulation success percentage of 85,88 % according to Figure 4.3.

Figure 4.3 : Variables error through pattern and error limits for model 1

Figure 4.4 : Test set error graph for Model 1

In Figure 4.4 there are not many large peaks; the graph decreases and is generally constant after about 0.05, which means that over-learning did not occur.

A portion of the Model 1 outputs is given below in Table 4.5, and all of the data are given in Appendix B. In Table 4.5, Actual(1) is the factor of safety value that we define, Network(1) is the simulated factor of safety, and Act-Net(1) is the difference Actual(1) – Network(1). The success of the model can be judged from the Act-Net(1) values, which are small enough to accept the model as successful.

Table 4.5 : A portion of the Model 1 output table


Actual(1) Network(1) Act-Net(1)
1,3200 1,2958 0,0242
0,9400 0,8895 0,0505
0,9700 0,9517 0,0183
1,6100 1,8270 -0,2170
1,6400 1,6866 -0,0466
1,3500 1,3435 0,0065
1,2700 1,0988 0,1712
1,0600 1,1224 -0,0624
1,5500 1,1839 0,3661
1,4000 1,3054 0,0946
1,1800 1,1452 0,0348
1,3100 1,2206 0,0894
0,7500 0,7897 -0,0397
0,8000 0,8660 -0,0660
0,7700 0,8582 -0,0882

This model has a success percentage of about 80 %, but the Back-Propagation Neural Network is not an appropriate method for the evaluation of slope stability and the effects of the seismic coefficients.

4.3.1.2 Model 2
There are certain differences between Model 2 and Model 1. The test set extraction is the same (15 %) in Model 2, but the momentum factor is 0.7 and the calibration interval is 600. These differences in the training approach are given in Table 4.6.

Table 4.6 : Model 2 approach for training

Architecture                       3 hidden slabs, different activation functions
% Test Set Extraction              15
# of Hidden Layers                 2
# of Neurons in One Hidden Layer   18
Learning Rate                      0,2
Momentum Factor                    0,7
Initial Weight                     0,3
Calibration Interval               600
Scale Function                     Linear [-1,1]
Missing Values                     Error Condition
Pattern Selection                  Rotation
Patterns Processed                 170

As seen in Table 4.7, the most important parameter is the height of the slope, followed by the seismic coefficients. Thus, as in Model 1, the earthquake effect on the slope can be seen in Model 2.

Table 4.7 : The contribution factors for Model 2

Parameters     The Contribution Function   Order of the Contribution Function
H (m.)         0,27258                     1
kh             0,10545                     2
kv             0,10157                     3
Hw (m.)        0,09792                     4
Hb (m.)        0,09673                     5
γ (kN/m3)      0,08959                     6
β (deg.)       0,08754                     7
c (kPa)        0,07771                     8
Φ (deg.)       0,07091                     9

After approximately 100 model trials, the results of Model 2 were obtained. As seen in Table 4.8, the success rate is 80,30 % and the correlation coefficient is 0,8974.

Table 4.8 : The results of Model 2


R squared: 0,8030
r squared: 0,8053
Mean squared error: 0,022
Mean absolute error: 0,106
Min. absolute error: 0,002
Max. absolute error: 0,594
Correlation coefficient r: 0,8974

There are 20 incorrect simulations in Figure 4.5, and the simulation success
percentage is 88 %.

Figure 4.5 : Actual-Network output scatter for Model 2 and error limits

There are 25 incorrect simulations, which corresponds to a simulation success percentage of about 85 % according to Figure 4.6.

Figure 4.6 : Variables error through pattern and error limits for model 2

As seen in Figure 4.7, over-learning occurred at local peaks, although the graph generally decreases after these peaks. This model has the same problem as Model 1.

Figure 4.7 : Test set error graph for Model 2

4.3.2 GRNN approaches
4.3.2.1 Model 3
In Table 4.9, the GRNN architecture properties are shown for Model 3. The GRNN method differs from the BPNN method by the existence of smoothing factors; on the other hand, there are no learning rate, momentum factor or initial weights in the GRNN method. The test set extraction is 20 %, giving 136 training patterns and 34 test patterns. The genetic breeding pool size can take the values 20, 50, 75, 100, 200 or 300 in Neuroshell2; in this model it is taken as 200. Missing values are treated as an error condition. It should be noted that the smoothing factor takes values in the range from 0 to 1 (0 < smoothing factor < 1).
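
The role of the smoothing factor can be illustrated with the Specht-style GRNN prediction sketched below: every training pattern contributes to the output through a kernel weight that decays with its distance from the query pattern, and the smoothing factor controls how quickly that weight decays. The "city block" option uses the sum of absolute coordinate differences instead of the Euclidean distance. This is a minimal sketch of the prediction step only; the genetic-adaptive calibration used in the thesis, which searches for an individual smoothing factor per input, is not reproduced here, and the function name is an assumption.

import numpy as np

def grnn_predict(X_train, y_train, X_query, smoothing=0.1, metric="euclidean"):
    """GRNN output: a kernel-weighted average of the training targets, with the
    smoothing factor controlling how local or global the averaging is."""
    predictions = []
    for q in np.atleast_2d(X_query):
        if metric == "cityblock":
            d = np.sum(np.abs(X_train - q), axis=1)           # city block distance
            w = np.exp(-d / smoothing)
        else:
            d2 = np.sum((X_train - q) ** 2, axis=1)           # squared Euclidean distance
            w = np.exp(-d2 / (2.0 * smoothing ** 2))
        predictions.append(np.sum(w * y_train) / np.sum(w))
    return np.array(predictions)

A small smoothing factor makes the prediction follow the nearest training patterns closely, while a large one averages over many patterns; the genetic calibration searches for the smoothing values that give the smallest test set error.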

Table 4.9 : The architecture and the configuration of Model 3

Smoothing Factor              0,01821
Activation Function           Linear [0,1]
Distance Metric               City Block
Calibration                   Genetic, Adaptive
Missing Values                Error Condition
Genetic Breeding Pool Size    200
% Test Set Extraction         20
Number of Inputs              9
Number of Outputs             1
Number of Training Patterns   136
Number of Test Patterns       34
Patterns Processed            170

All the network parameters and the individual smoothing factors determined by the program are given in Table 4.10. As seen in Table 4.10, the most important parameter is the unit weight of the soil, followed by the slope height and the water level. The seismic coefficients come after these parameters, so in Model 3 the earthquake effect can be seen only after these three parameters.

Table 4.10 : Individual smoothing factors for Model 3

Network type:                      GRNN, genetic adaptive
Problem name:                      C:\NSHELL2\GRNN-1\GRNN-1
Number of inputs:                  9
Number of outputs:                 1
Number of training patterns:       136
Number of test patterns:           34
Current best smoothing factor:     0,0721176
Smoothing test generations:        85
Last mean squared error:           0,028048
Minimum mean squared error:        0,027964
Generations since min. MS error:   20

Input name     Individual smoothing factor   Order
γ (kN/m3)      3,0000                        1
H (m.)         1,9412                        2
Hw (m.)        1,2000                        3
kh             0,7177                        4
Hb (m.)        0,6118                        5
kv             0,3059                        6
c (kPa)        0,1059                        7
Φ (deg.)       0,0235                        8
β (deg.)       0,0118                        9

After approximately 216 model trials, the results of Model 3 were obtained. As seen in Table 4.11, the success rate is 90,29 % and the correlation coefficient is 0,9505.

Table 4.11 : The results of Model 3


R squared: 0,9029
r squared: 0,9034
Mean squared error: 0,011
Mean absolute error: 0,038
Min. absolute error: 0
Max. absolute error: 0,510
Correlation coefficient r: 0,9505

There are 10 incorrect simulations in Figure 4.8 and the simulation success
percentage is 94.11 %.

Figure 4.8 : Actual-Network output scatter for model 3 and error limits

There are 10 incorrect simulations in Figure 4.9, and the simulation success
percentage is 94.11%.

Figure 4.9 : Variables error through pattern and error limits for model 3

As it is seen from Figure 4.10 over-learning did not occur and the error gradually
decreases.

Figure 4.10 : Test set error graph for model 3

4.3.2.2 Model 4
In Model 4, the activation function, distance metric and genetic breeding pool size differ from the Model 3 architecture. The genetic breeding pool size is 50, the distance metric is vanilla and the activation function is tanh. All the factors for Model 4 are given in Table 4.12.

Table 4.12 : The architecture and the configuration of Model 4

Smoothing Factor              0,2196471
Activation Function           tanh
Distance Metric               Vanilla
Calibration                   Genetic, Adaptive
Missing Values                Average Values
Genetic Breeding Pool Size    50
% Test Set Extraction         15
Number of Inputs              9
Number of Outputs             1
Number of Training Patterns   145
Number of Test Patterns       25
Patterns Processed            170

All the network parameters and the individual smoothing factors determined by the program are given in Table 4.13. As seen in Table 4.13, the most important parameter is the cohesion of the soil, followed by the inclination of the slope. The seismic coefficients come after these parameters, so in Model 4 the earthquake effect can be seen only after the first four parameters. If Model 4 is compared with the other models, the seismic coefficient effect in Model 4 is smaller than in the others.

Table 4.13 : Individual smoothing factors for Model 4

Network type:                      GRNN, genetic adaptive
Problem name:                      C:\NSHELL2\GRNN-2\GRNN-2
Number of inputs:                  9
Number of outputs:                 1
Number of training patterns:       145
Number of test patterns:           25
Current best smoothing factor:     0,2196471
Smoothing test generations:        66
Last mean squared error:           0,021763
Minimum mean squared error:        0,021742
Generations since min. MS error:   20

Input name     Individual smoothing factor   Order
c (kPa)        3,00000                       1
β (deg.)       2,02353                       2
Φ (deg.)       1,57647                       3
Hb (m.)        1,41176                       4
kh             1,23529                       5
kv             0,78824                       6
H (m.)         0,50588                       7
Hw (m.)        0,30588                       8
γ (kN/m3)      0,03529                       9

After approximately 216 model trials, the results of Model 4 were obtained. As seen in Table 4.14, the success rate is 91,37 % and the correlation coefficient is 0,9570.

Table 4.14 : The results of the Model 4
R squared: 0,9137
r squared: 0,9158
Mean squared error: 0,010
Mean absolute error: 0,040
Min. absolute error: 0
Max. absolute error: 0,510
Correlation coefficient r: 0,9570

There are 9 incorrect simulations in Figure 4.11, and the simulation success percentage is 94.70 %.

Figure 4.11 : Actual-Network output scatter for model 4 and error limits

There are 10 incorrect simulations in Figure 4.12, and the simulation success percentage is 94.11 %.

Figure 4.12 : Variables error through pattern and error limits for Model 4

As seen in Figure 4.13, over-learning did not occur. The graph stays close to 0,025, which means that the error is very small.

Figure 4.13 : Test set error graph for model 4

4.3.2.3 Model 5
The Model 5 activation function is also different from the Model 3 and Model 4 activation functions; this model's activation function is linear [-1,1]. Model 5 has the same test set extraction of 15 % as Model 4, with 145 training patterns and 25 test patterns. The genetic breeding pool size is 100, as given in Table 4.15.

Table 4.15 : The architecture and the configuration of Model 5

Smoothing Factor              0,11201
Activation Function           Linear [-1,1]
Distance Metric               Vanilla
Calibration                   Genetic, Adaptive
Missing Values                Error Condition
Genetic Breeding Pool Size    100
% Test Set Extraction         15
Number of Inputs              9
Number of Outputs             1
Number of Training Patterns   145
Number of Test Patterns       25
Patterns Processed            170

All the network parameters and the individual smoothing factors for Model 5, determined by the program, are given in Table 4.16. As seen in Table 4.16, the most important parameter is the cohesion of the soil, followed by the inclination of the slope. The seismic coefficients come after these parameters, so in Model 5 the earthquake effect can be seen only after the first four parameters. If Model 5 is compared with the other models, the seismic coefficient effect in Model 5 is smaller than in the others.

Table 4.16 : Individual smoothing factors for Model 5

Network type:                      GRNN, genetic adaptive
Problem name:                      C:\NSHELL2\GRNN-3\GRNN-3
Number of inputs:                  9
Number of outputs:                 1
Number of training patterns:       145
Number of test patterns:           25
Current best smoothing factor:     0,1458824
Smoothing test generations:        58
Last mean squared error:           0,020391
Minimum mean squared error:        0,020346
Generations since min. MS error:   20

Input name     Individual smoothing factor   Order
c (kPa)        2,97647                       1
β (deg.)       2,14118                       2
Hb (m.)        1,61176                       3
Φ (deg.)       0,90588                       4
kh             0,72941                       5
H (m.)         0,71765                       6
kv             0,49412                       7
γ (kN/m3)      0,45882                       8
Hw (m.)        0,36471                       9

After approximately 216 model trials, the results of Model 5 were obtained. As seen in Table 4.17, the success rate is 92,25 % and the correlation coefficient is 0,9618.

Table 4.17 : The results of the Model 5


R squared: 0,9225
r squared: 0,9250
Mean squared error: 0,009
Mean absolute error: 0,036
Min. absolute error: 0
Max. absolute error: 0,510
Correlation coefficient r: 0,9618

There are 6 incorrect simulations, and the simulation success percentage is 96.47 %, as depicted in Figure 4.14.

Figure 4.14 : Actual-Network output scatter for Model 5 and error limits

There are 8 incorrect simulations in Figure 4.15, and the simulation success percentage is 95,29 %.

Figure 4.15 : Variables error through pattern and error limits for Model 5

As seen in Figure 4.16, over-learning did not occur and the graph stays close to 0,015, which means that the error is very small.

Figure 4.16 : Test set error graph for model 5

5. RESULTS

To reach the best results, different configurations and architectures are trained. In
chapter 4, the best configurations, architectures, and error graphs for BPNN and
GRNN approaches are presented. In this chapter, the results of all models are evaluated. The Model 1 and Model 2 configurations and architectures are given in Table 5.1.

Table 5.1 : Model 1 and Model 2 approach configurations and architecture

                                   MODEL 1                    MODEL 2
Architecture                       3 hidden slabs,            3 hidden slabs,
                                   different activation       different activation
                                   functions                  functions
% Test Set Extraction              15                         15
# of Hidden Layers                 2                          2
# of Neurons in One Hidden Layer   18                         18
Learning Rate                      0,2                        0,2
Momentum Factor                    0,6                        0,7
Initial Weight                     0,3                        0,3
Calibration Interval               400                        600
Scale Function                     Linear [-1,1]              Linear [-1,1]
Missing Values                     Error Condition            Error Condition
Pattern Selection                  Rotation                   Rotation
Patterns Processed                 170                        170

The test set extraction percentage for training is the same for both models, 15 %. The momentum factors differ from each other: 0.6 and 0.7 for Model 1 and Model 2, respectively. In addition, the calibration intervals of Model 1 and Model 2 differ, as shown in Table 5.1. As seen in Table 5.2, the Model 2 results are better than those of Model 1, although the difference between the success rates of the two models is not large.

Table 5.2 : Output R² values for Models 1 and 2

           R squared    r squared
Model 1    0,7931       0,7943
Model 2    0,8030       0,8053

The Model 1 calibration interval is 400 and its error graph is quite good; however, the Model 2 calibration interval is 600 and its error graph is not acceptable. In Model 2, there are several local over-learning points. In summary, increasing the calibration interval could lead to over-learning for the BPNN approaches, as can be seen from Figures 5.1 and 5.2.

Figure 5.1 : Test set error graph for Model 1

Figure 5.2 : Test set error graph for Model 2

As seen in Table 5.3, the first five contribution factors of Model 1 and Model 2 are the same parameters. This shows that the results are generally meaningful for the evaluation of contribution factors with the BPNN approaches.

Table 5.3 : The first five contribution factors for Model 1 and Model 2

Order    Model 1    Model 2
1        H (m.)     H (m.)
2        kh         kh
3        kv         kv
4        Hw (m.)    Hw (m.)
5        Hb (m.)    Hb (m.)

For the GRNN approaches, the results of all models are likewise evaluated. The Model 3, Model 4 and Model 5 architectures are given in Table 5.4.

Table 5.4 : The architecture and the configuration of Models 3, 4 and 5

                              MODEL 3             MODEL 4             MODEL 5
Smoothing Factor              0,01821             0,2196471           0,11201
Activation Function           Linear [0,1]        tanh                Linear [-1,1]
Distance Metric               City Block          Vanilla             Vanilla
Calibration                   Genetic, Adaptive   Genetic, Adaptive   Genetic, Adaptive
Missing Values                Error Condition     Average Values      Error Condition
Genetic Breeding Pool Size    200                 50                  100
% Test Set Extraction         20                  15                  15
Number of Inputs              9                   9                   9
Number of Outputs             1                   1                   1
Number of Training Patterns   136                 145                 145
Number of Test Patterns       34                  25                  25
Patterns Processed            170                 170                 170

Looking at the success rates of Model 3, Model 4 and Model 5 in Table 5.5, Model 5 is the best approach for slope stability.

Table 5.5 : Output R² values for Models 3, 4 and 5

           R squared    r squared
Model 3    0,9029       0,9034
Model 4    0,9137       0,9158
Model 5    0,9225       0,9250

The activation function Linear [-1,1] gives better results than Linear [0,1] for the GRNN models in Table 5.5. Model 3 and Model 5 have similar configurations, differing mainly in the activation function, the distance metric, the test set extraction and the genetic breeding pool size. For the GRNN approaches, when the activation function is changed and the genetic breeding pool size is decreased, the success ratio improves, as seen in Table 5.5. The architecture and the configuration of Models 3, 4 and 5 are given in Table 5.4.

As seen in Table 5.6, the first five parameters are generally the same in Models 3 and 5, even though these models have different activation functions.

Table 5.6 : The first five of sensitivity factors for Models 3, 4, and 5
Order Model 3 Model 4 Model 5
1 β ( deg. ) c ( kPa ) c ( kPa )
2 H ( m. ) β ( deg. ) β ( deg. )
3 Hw ( m. ) Φ ( deg. ) H ( m. )
4 kh H ( m. ) Φ ( deg. )
5 Hb ( m. ) kh kh
The results are meaningful for the individual smoothing factors, but Model 3 gives parameters different from the other models because its genetic breeding pool size is larger than those of the other models (Figure 5.3, Figure 5.4 and Figure 5.5).

Figure 5.3 : Test set error graph for model 3

Figure 5.4 : Test set error graph for model 4

Figure 5.5 : Test set error graph for Model 5

From Model 3 through Model 5, the test set error decreases; the best model, Model 5, has the smallest test set error value. This shows that the test set error improves with the different calibrations. The smallest values are approximately 0.025 for Model 3, 0.015 for Model 4 and 0.01 for Model 5. A comparison of this kind is not made for the BPNN models. All the simulation success rates are given in Table 5.7.

Table 5.7 : Simulation success rates for each model

BPNN   Model 1 (R squared)                         0,7943
       Model 1 (Number of incorrect simulations)   24
       Model 1 (Simulation success rate %)         85,88
BPNN   Model 2 (R squared)                         0,8053
       Model 2 (Number of incorrect simulations)   20
       Model 2 (Simulation success rate %)         88,24
GRNN   Model 3 (R squared)                         0,9034
       Model 3 (Number of incorrect simulations)   10
       Model 3 (Simulation success rate %)         94,11
GRNN   Model 4 (R squared)                         0,9158
       Model 4 (Number of incorrect simulations)   10
       Model 4 (Simulation success rate %)         94,11
GRNN   Model 5 (R squared)                         0,9250
       Model 5 (Number of incorrect simulations)   6
       Model 5 (Simulation success rate %)         96,47

Models 1 and 2 are Back-Propagation Neural Network (BPNN) approaches, and Models 3, 4 and 5 are General Regression Neural Network (GRNN) approaches. In Model 2, over-learning sometimes occurs, but this is not seen in Models 3, 4 and 5 with the GRNN approach.

Neural networks are generally used in the hydrology and geotechnical branches of civil engineering, among others, as described in Chapter 3. In this study, geotechnical engineering and earthquake engineering work together, because the slope stability problem is analysed using seismic coefficient data and therefore concerns both disciplines. Earthquake properties and soil properties are used together as input parameters. This study brings a new viewpoint to earthquake engineering and geotechnical engineering.

In this study, the aim is to find the effects of the parameters and to determine which parameter is the most important factor for slope stability. The study was carried out because finding the effects of the seismic coefficients is important for earthquake and geotechnical engineering.

In conclusion, this study shows that the seismic coefficients have an important effect on slope stability, but they are not the determining parameters for evaluating slope stability; the slope height and the water level are more important. For evaluating slope stability and the effects of the seismic coefficients, GRNN is a very good and suitable approach. Model 5 has the best success rate, but Model 3 gives the best contribution factors, because the parameters of Models 4 and 5 are chosen from clay soils; as a result, cohesion (c) is the first-ranked contribution factor for Models 4 and 5. In contrast, BPNN is not a suitable approach for this study. In the future, forecasting can be done using these models. For example, this study uses 9 input parameters, and from these 9 parameters the factor of safety and the effects of the seismic coefficients can be estimated with a neural network, as sketched below. Therefore, solving and forecasting engineering problems becomes easier in slope stability investigations.
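A minimal sketch of such a forecast is given below; the GRNN-style estimator, the input normalisation and the query values are illustrative assumptions, with three stored cases taken from Appendix H.

```python
import numpy as np

# Three cases taken from Appendix H: [H, Hw, Hb, beta, gamma, c, phi, kh, kv] -> F.S.
X = np.array([[10.00, 0.00, 10.00, 33.69, 20.00, 10.00, 20.00, 0.10, 0.05],
              [15.20, 0.00,  0.00, 71.60, 18.00, 20.00, 20.00, 0.15, 0.10],
              [10.00, 9.00,  0.00, 26.57, 19.61, 31.70, 13.00, 0.25, 0.20]])
y = np.array([1.32, 0.94, 1.61])

def forecast_fs(x_new, sigma=1.0):
    """GRNN-style forecast of the factor of safety from the 9 inputs (illustrative)."""
    mean, std = X.mean(axis=0), X.std(axis=0)
    std = np.where(std == 0.0, 1.0, std)                 # guard against constant columns
    Xs, xs = (X - mean) / std, (np.asarray(x_new) - mean) / std
    w = np.exp(-np.sum((Xs - xs) ** 2, axis=1) / (2.0 * sigma ** 2))
    return float(np.sum(w * y) / np.sum(w))

print(forecast_fs([12.0, 0.0, 5.0, 30.0, 19.0, 12.0, 20.0, 0.10, 0.05]))
```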

REFERENCES

Agrawal, G., Chameau, J.L. & Baurdeau, P.L. (1997). Assessing the liquefaction susceptibility at a site based on information from penetration testing. In: Artificial Neural Networks for Applications, ASCE Monograph, New York, 185-214.

Bayrak, B., 2002. Liquefaction Potential: A neural network approach. Senior Thesis,
I.T.U. Civil Engineering Faculty, Istanbul.

Bayrak, B., 2004. Strong Ground Motion Attenuation Relationship Model by Using
Neural Network Methodology, M.S.c Thesis, I.T.U. Institute of Science and
Technology, Istanbul

Cao Jinggang ( 2002 ), Neural Network and Analytical Modeling of Slope Stability ,
Doctorate Thesis, Oklahoma University , Oklahoma

Chen, Z. and Shao, C. (1988). Evaluation of minimum factor of safety in slope stability analysis. Can. Geotech. J., 20(1), 104-119.

Chen, Y.H., and Tsai, C.C.P., 2002. A new method for estimation of the attenuation
relationship with variance components, Bulletin of the Seismological Society of
America, Vol.92, No.5. 1984-1991 (13).

Das, B.M., 1993. “Principles of Soil Dynamics”, Thomson Learning Academic Resource Center. ISBN 0-534-93129-4.

Duncan, M.,( 1969 ) , Soil slope stability analysis, Landslides Investigation and
Mitigation, Transportation Research Board Special Report 247, pp. 337–371.

Efe, Ö., Kaynak O., 2000. “Yapay Sinir Ağları ve Uygulamaları”, Bogazici
Üniversitesi

Ellis, G. W., Yao, C., Zhao, R., and Penumadu, D., 1995, Stress-strain modelling
of sands using artificial neural networks, J. of Geotech. Eng. 121(5), 429–435.

Fellenius, W. (1927). Erdstatische Berechnungen mit Reibung und Kohaesion. Ernst, Berlin (in German).

Feng, X., 1991. A neural network approach to comprehensive classification of rock stability, blastability and drillability, Int. J. Surface Mining, Reclamation and Environment, 9, 57–62.

Frederick, M.D., 1996. Neuroshell2 user’s manual version 3.0. Ward Systems
Group

Goh, A.T.C. (1996). “Neural Network modeling of CPT seismic liquefaction data”, Journal of Geotechnical and Geoenvironmental Engineering Division, 122, No. 1, 70-73.

Goh, A.T.C. (1994). “Empirical design in geotechnics using Neural Networks.” Geotechnique, 45(4), 709-714.

Goh, A.T.C. (1995). “Seismic liquefaction potential assessed by Neural Networks.” Journal of Geotechnical and Geoenvironmental Engineering Division, ASCE, 120(9), 1467-1480.

Goh, A.T.C., and Chua, C.G., 2003. “A hybrid Bayesian back-propagation neural
network approach to multivariate modeling”, Int. J. Numer. Anal. Meth. Geomech.,
27: 000-000.

Goh, A.T.C., and Chua, C.G., 2003. “Quantifying uncertainty in predictions using a
Bayesian neural network”, GALAYAA B.V./MIT2_453:pp. 1-3

Goh, A.T.C., and Chua, C.G., 2004. “Nonlinear modeling with confidence
estimation using Bayesian neural networks”, eJSE.

Goh, A.T.C., and Chua, C.G., 2005. “Estimating wall deflections in deep
excavations using Bayesian Neural Networks”, Tunneling and Underground Space
Technology.

Goh, A.T.C., Kulhawy, F.H., and Chua, C.G., 2005. “Bayesian Neural Network
Analysis of Undrained Side Resistance of Drilled Shafts”, Journal of Geotechnical
and Geoenvironmental Engineering, Vol. 131, No. 1,

Hecht-Nielsen, R. (1990). Neurocomputing, Addison-Wesley Publishing Company.

Hecht-Nielsen, R. (1990). Theory of the back-propagation neural network, in Proc. of Int. Joint Conf. on Neural Networks, Addison Wesley, New York, pp. I:595–611.

Hubick, K. T. (1992). Artificial neural networks in Australia, Department of Industry, Technology and Commerce, Commonwealth of Australia, Canberra.

Jaksa, M. B. (1995). “The influence of spatial variability on the geotechnical design properties of a stiff, overconsolidated clay,” PhD thesis, The University of Adelaide, Adelaide.

Kramer, S.L., 1996. “Geotechnical Earthquake Engineering”, Prentice-Hall International Series in Civil Engineering and Engineering Mechanics, New Jersey, USA.

Lin, C. T. and Lee, C. S. G. (1996). Neural Fuzzy Systems – A Neuro-Fuzzy Synergism to Intelligent Systems. Prentice Hall Inc.

Lowe, J. (1967). Stability analysis of embankments. J. Soil Mech. and Found. Div., ASCE, 93(4), 1-33.

McCulloch, W. S. and Pitts, W. H. (1943). A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys., 5:115-133.

Mendel, J. M. and McLaren, R. W. (1970). Reinforcement learning control and pattern recognition systems. In: Adaptive Learning and Pattern Recognition Systems: Theory and Applications, New York, Academic Press, pp. 287-318.

Morgenstern, N. (1963). Stability charts for earth slopes during rapid drawdown. Geotechnique, V13, pp. 121-131.

Morgenstern N.R. and Price V.E. ( 1965 ) ,The analysis of the stability of general
slip surface. Geotechnique. London 15(1),79-93

Mostyn, G.R. and Small, J.C. (1987). Methods of stability analysis. In: Soil Slope Instability and Stabilisation, Walker & Fell (eds.), Balkema, Rotterdam.

Özaydın, K., 1995. “Zemin Mekaniği (Soil Mechanics)” Yıldız Tek. Üni. İnşaat
fak., İnşaat Müh. Blm. Geoteknik Anabilim Dalı, ISBN 975-511-145-X

Randolph, M. F., and Wroth, C. P. (1978). “Analysis of deformation of vertically loaded piles.” J. Geotech. Engrg., ASCE, 104(12), 1465-1488.

Schaap, M.G. & Bouten, W. 1996. Modeling water retention curves of sandy soils
using neural networks. Water Resources Research, 32, 3033–3040.

Shahin, M. A., Jaksa, M. B., and Maier, H. R. (2000). “Predicting the settlement
of shallow foundations on cohesionless soils using back-propagation neural
networks.” Research Report No. R 167, The University of Adelaide, Adelaide.

Shi, J. J. (2000). “Reducing prediction error by transforming input data for neural networks.” J. Computing in Civil Engrg., ASCE, 14(2), 109-116.

Shibata, T., and Teparaksa, W. (1988). “Evaluation of liquefaction potentials of soils using cone penetration tests.” Soils and Foundations, 28(2), 49-60.

Sidarta, D. E., and Ghaboussi, J. (1998). “Constitutive modeling of geomaterials from non-uniform material tests.” J. Computers & Geomechanics, 22(10), 53-71.

Sivakugan, N., Eckersley, J. D., and Li, H. (1998). “Settlement predictions using neural networks.” Australian Civil Engineering Transactions, CE40, 49-52.

Skempton, A.W. (1964). Long-term stability of clay slopes. Geotechnique, 14, 77-102.

Skempton A.W. and Hutchinson J.N. ( 1967 ) , Stability of natural slopes and
embankment foundations. Proc. 7th Int. Conf. Soil mech. And Found. Eng. Mexico
City , State of the Art Volume , pp.291 - 340

Specht D.F. (1996) “Fuzzy logic and Neural Network Handbook: Chapter 3-
Probabilistic and General Regression Neural Networks” McGraw-Hill Companies,
Inc., New York.

Specht, D.F., 1991. A general regression neural network, IEEE Transactions on Neural Networks, 568-576.

Specht, D.F., 1996. Fuzzy Logic and Neural Network Handbook, Mc Graw-Hill
Companies, Inc,.

Spencer, E. (1969). Effect of tension on the stability of embankments. ASCE, Journal of the Soil Mech. and Found. Division, V94, pp. 1159-1173.

Spencer, E. (1973). Thrust line criterion in embankment stability analysis. Geotech., V23, 1, pp. 85-100.

Spencer, E. (1969). A method of analysis of the stability of embankments assuming parallel inter-slice forces, Geotechnique, 17, 11–26.

Tutumluer, E., and Seyhan, U. (1998). “Neural network modeling of anisotropic aggregate behavior from repeated load triaxial tests.” Transportation Research Record 1615, National Research Council, Washington, D.C.

Ural, D., and Saka, H., 1998. "Liquefaction Prediction by Neural Networks",
Electronic Journal of Geotechnical Engineering, ISSN 1089-3032, Vol. 3, pp.1- 4.

Web Pages :
http://www.ggsd.com - Geotechnical & Geoenvironmental Software Directory

APPENDIXES

Appendix A. BPNN Approach Output Tables for Model 1

Appendix B. BPNN Approach Output Tables for Model 2

Appendix C. Error Through Patterns Graphs for Model 1 and 2.

Appendix D. GRNN Approach Output Tables for Model 3

Appendix E. GRNN Approach Output Tables for Model 4

Appendix F. GRNN Approach Output Tables for Model 5

Appendix G. Error Through Patterns Graphs for Model 3,4 and 5.

Appendix H. Slope Data Used In Program

Appendix A. BPNN Approach Output Tables for Model 1

Actual(1) Network(1) Act-Net(1)


1,3200 1,2958 0,0242
0,9400 0,8895 0,0505
0,9700 0,9517 0,0183
1,6100 1,8270 -0,2170
1,6400 1,6866 -0,0466
1,3500 1,3435 0,0065
1,2700 1,0988 0,1712
1,0600 1,1224 -0,0624
1,5500 1,1839 0,3661
1,4000 1,3054 0,0946
1,1800 1,1452 0,0348
1,3100 1,2206 0,0894
0,7500 0,7897 -0,0397
0,8000 0,8660 -0,0660
0,7700 0,8582 -0,0882
1,0500 0,9742 0,0758
0,9800 0,9446 0,0354
2,0900 1,7515 0,3385
1,0000 0,8538 0,1462
0,9000 0,8390 0,0610
1,1000 1,1197 -0,0197
1,2000 1,1544 0,0456
1,2900 1,3711 -0,0811
0,9700 0,9905 -0,0205
1,1300 1,1630 -0,0330
0,8600 0,7813 0,0787
1,1200 0,9997 0,1203
0,9600 1,0255 -0,0655
1,0000 0,9741 0,0259
1,1200 0,9246 0,1954
1,1000 1,2209 -0,1209
1,4000 1,1703 0,2297
1,0000 0,8796 0,1204
0,9900 0,6597 0,3303
1,0300 0,8021 0,2279
1,3200 1,2953 0,0247
1,5000 1,5342 -0,0342
1,5200 1,6925 -0,1725
1,1100 1,0193 0,0907
0,9700 0,9785 -0,0085
1,4700 1,1997 0,2703
0,9300 0,9949 -0,0649
0,9900 0,8729 0,1171
1,3500 1,3186 0,0314
0,7900 0,6446 0,1454

1,1500 1,2354 -0,0854
1,5000 1,3546 0,1454
2,0800 1,6502 0,4298
1,1900 1,1815 0,0085
0,9300 0,8940 0,0360
0,8100 0,8183 -0,0083
1,0500 1,2325 -0,1825
1,0700 1,1660 -0,0960
1,2100 1,4773 -0,2673
1,8200 1,7137 0,1063
0,9700 0,8637 0,1063
0,6200 0,8119 -0,1919
0,7800 0,9401 -0,1601
1,5400 1,5714 -0,0314
0,9300 0,8530 0,0770
1,6200 1,4839 0,1361
1,3000 1,2828 0,0172
1,7800 1,5596 0,2204
1,0300 1,1087 -0,0787
1,2300 1,1622 0,0678
1,2500 1,5146 -0,2646
1,3700 1,3961 -0,0261
1,2800 1,1608 0,1192
1,1100 1,1166 -0,0066
0,6800 1,1390 -0,4590
0,7000 0,9199 -0,2199
1,2000 1,1550 0,0450
2,1500 1,9525 0,1975
1,3500 1,4915 -0,1415
0,8900 0,9602 -0,0702
0,9200 0,8619 0,0581
0,6400 0,7483 -0,1083
1,1400 0,9359 0,2041
1,1900 1,2367 -0,0467
0,8700 0,9913 -0,1213
1,0500 1,1412 -0,0912
1,1700 1,1725 -0,0025
1,3100 1,2478 0,0622
1,0500 1,2664 -0,2164
1,3600 1,3803 -0,0203
1,5300 1,5754 -0,0454
1,3500 1,3638 -0,0138
1,0000 1,0266 -0,0266
0,8300 0,9026 -0,0726

0,7900 0,9566 -0,1666
1,0300 0,8343 0,1957
1,4500 1,0252 0,4248
1,6400 1,7485 -0,1085
1,2900 1,4117 -0,1217
1,0100 1,0574 -0,0474
1,0700 1,0724 -0,0024
1,0500 0,9476 0,1024
0,7300 0,7682 -0,0382
1,4300 1,2680 0,1620
1,0500 0,9638 0,0862
1,0000 0,9559 0,0441
1,2800 1,2553 0,0247
1,0000 0,9909 0,0091
0,8600 0,9928 -0,1328
0,9800 1,2301 -0,2501
1,2100 1,1395 0,0705
1,3100 1,3678 -0,0578
0,9700 0,9401 0,0299
1,7500 1,7513 -0,0013
2,0500 1,8548 0,1952
0,8200 1,1464 -0,3264
1,0000 0,9933 0,0067
0,9800 0,9676 0,0124
0,6700 0,8458 -0,1758
1,7600 1,7963 -0,0363
1,2000 1,0843 0,1157
1,1200 1,1657 -0,0457
0,9600 0,9827 -0,0227
1,7400 1,7397 0,0003
1,5400 1,4608 0,0792
1,2500 1,2440 0,0060
1,0000 1,1544 -0,1544
0,8700 0,8734 -0,0034
1,0000 1,1777 -0,1777
1,1100 1,1262 -0,0162
1,0000 1,0193 -0,0193
1,8750 1,5406 0,3344
2,0450 1,3778 0,6672
1,7800 1,6699 0,1101
1,9900 1,6817 0,3083
1,2500 0,8968 0,3532
1,1300 1,0239 0,1061
1,0200 1,0277 -0,0077

1,3000 1,1923 0,1077
1,2000 1,1305 0,0695
1,0900 1,1039 -0,0139
0,7800 0,8460 -0,0660
2,0000 1,9302 0,0698
1,7000 1,7450 -0,0450
1,0200 1,0602 -0,0402
0,8900 0,9100 -0,0200
1,4600 1,4707 -0,0107
0,8000 0,9442 -0,1442
1,4400 1,5093 -0,0693
0,8600 0,8622 -0,0022
1,0800 1,0844 -0,0044
1,1100 1,3293 -0,2193
1,4000 1,5357 -0,1357
1,3500 1,4917 -0,1417
1,0300 1,1319 -0,1019
1,2800 1,1535 0,1265
1,6300 1,5245 0,1055
1,0500 0,9304 0,1196
1,0300 1,0503 -0,0203
1,0900 1,0421 0,0479
1,1100 1,0893 0,0207
1,0100 1,0628 -0,0528
0,6250 0,9493 -0,3243
1,1200 0,9816 0,1384
1,2000 1,3517 -0,1517
1,8000 1,7990 0,0010
0,9000 1,0747 -0,1747
0,9600 0,8864 0,0736
0,8300 0,8157 0,0143
0,7900 0,8746 -0,0846
0,6700 0,9665 -0,2965
1,4500 1,4097 0,0403
1,5800 1,7291 -0,1491
1,3700 1,1987 0,1713
2,0500 1,9812 0,0688

Appendix B. BPNN Approach Output Tables for Model 2

Actual(1) Network(1) Act-Net(1)


1,3200 1,2902 0,0298
0,9400 0,8705 0,0695
0,9700 0,9951 -0,0251
1,6100 1,8157 -0,2057
1,6400 1,6686 -0,0286
1,3500 1,3615 -0,0115
1,2700 1,0968 0,1732
1,0600 1,0110 0,0490
1,5500 1,0413 0,5087
1,4000 1,3269 0,0731
1,1800 1,1132 0,0668
1,3100 1,1234 0,1866
0,7500 0,8022 -0,0522
0,8000 0,7761 0,0239
0,7700 0,8684 -0,0984
1,0500 0,9861 0,0639
0,9800 0,9767 0,0033
2,0900 1,7669 0,3231
1,0000 0,8540 0,1460
0,9000 0,8432 0,0568
1,1000 1,1075 -0,0075
1,2000 1,1274 0,0726
1,2900 1,3427 -0,0527
0,9700 0,9871 -0,0171
1,1300 1,1749 -0,0449
0,8600 0,7428 0,1172
1,1200 1,0157 0,1043
0,9600 1,0419 -0,0819
1,0000 0,9572 0,0428
1,1200 1,0469 0,0731
1,1000 1,2385 -0,1385
1,4000 1,4137 -0,0137
1,0000 0,8827 0,1173
0,9900 0,6425 0,3475
1,0300 0,7725 0,2575
1,3200 1,3082 0,0118
1,5000 1,5323 -0,0323
1,5200 1,6958 -0,1758
1,1100 0,9272 0,1828
0,9700 0,9550 0,0150
1,4700 1,2335 0,2365
0,9300 0,9400 -0,0100
0,9900 0,9304 0,0596
1,3500 1,3143 0,0357
0,7900 0,6401 0,1499

1,1500 1,2788 -0,1288
1,5000 1,5479 -0,0479
2,0800 1,6139 0,4661
1,1900 1,1773 0,0127
0,9300 0,8961 0,0339
0,8100 0,8212 -0,0112
1,0500 1,2598 -0,2098
1,0700 1,1753 -0,1053
1,2100 1,4223 -0,2123
1,8200 1,7024 0,1176
0,9700 0,8981 0,0719
0,6200 0,8027 -0,1827
0,7800 0,9687 -0,1887
1,5400 1,5612 -0,0212
0,9300 0,8704 0,0596
1,6200 1,5151 0,1049
1,3000 1,2875 0,0125
1,7800 1,5672 0,2128
1,0300 1,1244 -0,0944
1,2300 1,1601 0,0699
1,2500 1,4925 -0,2425
1,3700 1,2647 0,1053
1,2800 1,1822 0,0978
1,1100 1,1223 -0,0123
0,6800 1,1398 -0,4598
0,7000 0,9422 -0,2422
1,2000 1,1770 0,0230
2,1500 1,9544 0,1956
1,3500 1,4645 -0,1145
0,8900 0,9434 -0,0534
0,9200 0,8834 0,0366
0,6400 0,7107 -0,0707
1,1400 0,9095 0,2305
1,1900 1,1823 0,0077
0,8700 0,9799 -0,1099
1,0500 1,1542 -0,1042
1,1700 1,1911 -0,0211
1,3100 1,2531 0,0569
1,0500 1,2177 -0,1677
1,3600 1,3643 -0,0043
1,5300 1,5485 -0,0185
1,3500 1,3536 -0,0036
1,0000 1,0344 -0,0344
0,8300 0,7472 0,0828

0,7900 0,9798 -0,1898
1,0300 0,8383 0,1917
1,4500 1,1300 0,3200
1,6400 1,6420 -0,0020
1,2900 1,4378 -0,1478
1,0100 0,9903 0,0197
1,0700 1,0979 -0,0279
1,0500 0,9765 0,0735
0,7300 0,7640 -0,0340
1,4300 1,2638 0,1662
1,0500 0,9944 0,0556
1,0000 0,9497 0,0503
1,2800 1,3068 -0,0268
1,0000 0,9119 0,0881
0,8600 0,9723 -0,1123
0,9800 1,2635 -0,2835
1,2100 1,2521 -0,0421
1,3100 1,2112 0,0988
0,9700 0,9352 0,0348
1,7500 1,7563 -0,0063
2,0500 1,8710 0,1790
0,8200 1,1625 -0,3425
1,0000 0,9973 0,0027
0,9800 1,0114 -0,0314
0,6700 0,7422 -0,0722
1,7600 1,8047 -0,0447
1,2000 1,0784 0,1216
1,1200 1,1640 -0,0440
0,9600 0,8880 0,0720
1,7400 1,7563 -0,0163
1,5400 1,4523 0,0877
1,2500 1,1815 0,0685
1,0000 1,1274 -0,1274
0,8700 0,8529 0,0171
1,0000 1,1600 -0,1600
1,1100 1,1135 -0,0035
1,0000 1,0068 -0,0068
1,8750 1,5603 0,3147
2,0450 1,4507 0,5943
1,7800 1,6828 0,0972
1,9900 1,6904 0,2996
1,2500 0,9014 0,3486
1,1300 1,0458 0,0842
1,0200 0,8087 0,2113

1,3000 1,4622 -0,1622
1,2000 0,9646 0,2354
1,0900 1,0982 -0,0082
0,7800 0,8374 -0,0574
2,0000 1,9444 0,0556
1,7000 1,7199 -0,0199
1,0200 1,0648 -0,0448
0,8900 0,9039 -0,0139
1,4600 1,4480 0,0120
0,8000 0,9294 -0,1294
1,4400 1,4070 0,0330
0,8600 0,8334 0,0266
1,0800 1,1135 -0,0335
1,1100 1,2793 -0,1693
1,4000 1,5170 -0,1170
1,3500 1,4758 -0,1258
1,0300 1,0846 -0,0546
1,2800 1,2118 0,0682
1,6300 1,5631 0,0669
1,0500 0,9170 0,1330
1,0300 1,0282 0,0018
1,0900 0,9852 0,1048
1,1100 1,1896 -0,0796
1,0100 0,9953 0,0147
0,6250 0,7538 -0,1288
1,1200 0,9799 0,1401
1,2000 1,3455 -0,1455
1,8000 1,7952 0,0048
0,9000 1,0872 -0,1872
0,9600 1,0154 -0,0554
0,8300 0,8182 0,0118
0,7900 0,8685 -0,0785
0,6700 0,9388 -0,2688
1,4500 1,4539 -0,0039
1,5800 1,7386 -0,1586
1,3700 1,2078 0,1622
2,0500 1,9722 0,0778

Appendix C. Error Through Patterns Graphs for Model 1 and 2.

Figure App. C1 : Error through patterns for act-net for Model 1

Figure App. C2 : Error through patterns for act-net for Model 2

Appendix D. GRNN Approach Output Tables for Model 3

Actual(1) Network(1) Act-Net(1)


1,3200 1,3200 0,0000
0,9400 0,9400 0,0000
0,9700 0,9700 0,0000
1,6100 1,6400 -0,0300
1,6400 1,6400 0,0000
1,3500 1,3500 0,0000
1,2700 1,2700 0,0000
1,0600 1,0600 0,0000
1,5500 1,6799 -0,1299
1,4000 1,4000 0,0000
1,1800 1,1800 0,0000
1,3100 1,3100 0,0000
0,7500 0,7500 0,0000
0,8000 0,8000 0,0000
0,7700 0,7700 0,0000
1,0500 1,0500 0,0000
0,9800 0,9800 0,0000
2,0900 2,0897 0,0003
1,0000 0,9000 0,1000
0,9000 0,9000 0,0000
1,1000 1,0999 0,0001
1,2000 1,0998 0,1002
1,2900 1,3452 -0,0552
0,9700 0,9700 0,0000
1,1300 1,1996 -0,0696
0,8600 1,3600 -0,5000
1,1200 1,1188 0,0012
0,9600 0,9613 -0,0013
1,0000 1,0021 -0,0021
1,1200 1,1185 0,0015
1,1000 1,1000 0,0000
1,4000 1,4000 0,0000
1,0000 1,0000 0,0000
0,9900 1,0300 -0,0400
1,0300 1,0300 0,0000
1,3200 1,3200 0,0000
1,5000 1,5000 0,0000
1,5200 1,5200 0,0000
1,1100 1,3155 -0,2055
0,9700 0,9700 0,0000
1,4700 1,4696 0,0004
0,9300 0,9300 0,0000
0,9900 0,9398 0,0502
1,3500 1,3500 0,0000
0,7900 0,7900 0,0000

1,1500 1,1500 0,0000
1,5000 1,0997 0,4003
2,0800 1,6372 0,4428
1,1900 1,1900 0,0000
0,9300 0,9301 -0,0001
0,8100 0,8100 0,0000
1,0500 1,0305 0,0195
1,0700 1,2051 -0,1351
1,2100 1,2100 0,0000
1,8200 1,8202 -0,0002
0,9700 0,9700 0,0000
0,6200 0,6200 0,0000
0,7800 0,7800 0,0000
1,5400 1,5400 0,0000
0,9300 0,9300 0,0000
1,6200 1,5115 0,1085
1,3000 1,3000 0,0000
1,7800 1,7801 -0,0001
1,0300 1,0300 0,0000
1,2300 1,2300 0,0000
1,2500 1,2361 0,0139
1,3700 1,3443 0,0257
1,2800 1,2800 0,0000
1,1100 1,1100 0,0000
0,6800 1,1030 -0,4230
0,7000 0,7385 -0,0385
1,2000 1,2000 0,0000
2,1500 1,6400 0,5100
1,3500 1,3500 0,0000
0,8900 0,8900 0,0000
0,9200 0,9199 0,0001
0,6400 0,6401 -0,0001
1,1400 1,1400 0,0000
1,1900 0,8712 0,3188
0,8700 0,8700 0,0000
1,0500 1,0500 0,0000
1,1700 1,1700 0,0000
1,3100 1,3062 0,0038
1,0500 1,0538 -0,0038
1,3600 1,3600 0,0000
1,5300 1,5299 0,0001
1,3500 1,3500 0,0000
1,0000 1,0000 0,0000
0,8300 1,0000 -0,1700

0,7900 0,7904 -0,0004
1,0300 1,0300 0,0000
1,4500 1,6447 -0,1947
1,6400 1,6400 0,0000
1,2900 1,2900 0,0000
1,0100 1,0100 0,0000
1,0700 1,0700 0,0000
1,0500 1,0500 0,0000
0,7300 0,7300 0,0000
1,4300 1,4299 0,0001
1,0500 1,0500 0,0000
1,0000 1,0000 0,0000
1,2800 1,2800 0,0000
1,0000 1,0000 0,0000
0,8600 0,8600 0,0000
0,9800 0,9919 -0,0119
1,2100 1,2100 0,0000
1,3100 1,3100 0,0000
0,9700 1,1125 -0,1425
1,7500 1,7500 0,0000
2,0500 2,0500 0,0000
0,8200 1,0971 -0,2771
1,0000 1,0000 0,0000
0,9800 0,9800 0,0000
0,6700 0,9746 -0,3046
1,7600 1,7600 0,0000
1,2000 1,4688 -0,2688
1,1200 1,1199 0,0001
0,9600 1,0059 -0,0459
1,7400 1,7400 0,0000
1,5400 1,5400 0,0000
1,2500 1,0883 0,1617
1,0000 1,0998 -0,0998
0,8700 0,8710 -0,0010
1,0000 0,8813 0,1187
1,1100 1,1100 0,0000
1,0000 1,0000 0,0000
1,8750 1,8738 0,0012
2,0450 1,7580 0,2870
1,7800 1,7800 0,0000
1,9900 1,7800 0,2100
1,2500 1,2500 0,0000
1,1300 1,1300 0,0000
1,0200 1,0155 0,0045

1,3000 1,3000 0,0000
1,2000 1,2000 0,0000
1,0900 1,0900 0,0000
0,7800 0,7800 0,0000
2,0000 2,0000 0,0000
1,7000 1,7000 0,0000
1,0200 1,0200 0,0000
0,8900 0,8898 0,0002
1,4600 1,4600 0,0000
0,8000 0,8000 0,0000
1,4400 1,4400 0,0000
0,8600 0,8600 0,0000
1,0800 1,0800 0,0000
1,1100 1,1100 0,0000
1,4000 1,4000 0,0000
1,3500 1,3500 0,0000
1,0300 1,0300 0,0000
1,2800 1,2800 0,0000
1,6300 1,6300 0,0000
1,0500 1,0500 0,0000
1,0300 1,0300 0,0000
1,0900 1,0900 0,0000
1,1100 1,1112 -0,0012
1,0100 1,0100 0,0000
0,6250 0,6295 -0,0045
1,1200 1,1200 0,0000
1,2000 1,0300 0,1700
1,8000 1,7125 0,0875
0,9000 0,8076 0,0924
0,9600 0,9600 0,0000
0,8300 0,8300 0,0000
0,7900 0,7883 0,0017
0,6700 0,6720 -0,0020
1,4500 1,4500 0,0000
1,5800 1,4500 0,1300
1,3700 1,3700 0,0000
2,0500 2,0500 0,0000

Appendix E. GRNN Approach Output Tables for Model 4

Actual(1) Network(1) Act-Net(1)


1,3200 1,3200 0,0000
0,9400 0,9412 -0,0012
0,9700 1,0355 -0,0655
1,6100 1,6400 -0,0300
1,6400 1,6400 0,0000
1,3500 1,3500 0,0000
1,2700 1,2712 -0,0012
1,0600 1,0600 0,0000
1,5500 1,6237 -0,0737
1,4000 1,3864 0,0136
1,1800 1,1807 -0,0007
1,3100 1,3108 -0,0008
0,7500 0,7514 -0,0014
0,8000 0,8491 -0,0491
0,7700 0,7945 -0,0245
1,0500 1,0471 0,0029
0,9800 0,9816 -0,0016
2,0900 2,0822 0,0078
1,0000 0,9934 0,0066
0,9000 0,9134 -0,0134
1,1000 1,1006 -0,0006
1,2000 1,1006 0,0994
1,2900 1,2900 0,0000
0,9700 0,9704 -0,0004
1,1300 1,1300 0,0000
0,8600 0,8600 0,0000
1,1200 1,1197 0,0003
0,9600 0,9605 -0,0005
1,0000 1,0289 -0,0289
1,1200 1,0917 0,0283
1,1000 1,0606 0,0394
1,4000 1,4000 0,0000
1,0000 1,0000 0,0000
0,9900 1,0706 -0,0806
1,0300 1,0300 0,0000
1,3200 1,2733 0,0467
1,5000 1,4989 0,0011
1,5200 1,4540 0,0660
1,1100 0,8666 0,2434
0,9700 0,9700 0,0000
1,4700 1,3871 0,0829
0,9300 0,9340 -0,0040
0,9900 0,9999 -0,0099
1,3500 1,3505 -0,0005
0,7900 0,7900 0,0000

1,1500 1,1984 -0,0484
1,5000 1,2577 0,2423
2,0800 1,6368 0,4432
1,1900 1,2036 -0,0136
0,9300 1,0598 -0,1298
0,8100 0,8100 0,0000
1,0500 1,0500 0,0000
1,0700 1,0700 0,0000
1,2100 1,2100 0,0000
1,8200 1,8199 0,0001
0,9700 0,9895 -0,0195
0,6200 0,6200 0,0000
0,7800 0,7803 -0,0003
1,5400 1,5400 0,0000
0,9300 0,9283 0,0017
1,6200 1,6651 -0,0451
1,3000 1,2953 0,0047
1,7800 1,7878 -0,0078
1,0300 1,0300 0,0000
1,2300 1,2300 0,0000
1,2500 1,1865 0,0635
1,3700 1,3679 0,0021
1,2800 1,2800 0,0000
1,1100 1,1096 0,0004
0,6800 1,1807 -0,5007
0,7000 1,0421 -0,3421
1,2000 1,2000 0,0000
2,1500 1,6400 0,5100
1,3500 1,3500 0,0000
0,8900 0,8929 -0,0029
0,9200 0,9200 0,0000
0,6400 0,6404 -0,0004
1,1400 1,1400 0,0000
1,1900 1,0455 0,1445
0,8700 0,8953 -0,0253
1,0500 1,0497 0,0003
1,1700 1,1682 0,0018
1,3100 1,2570 0,0530
1,0500 1,1030 -0,0530
1,3600 1,3600 0,0000
1,5300 1,5290 0,0010
1,3500 1,3500 0,0000
1,0000 0,9999 0,0001
0,8300 1,0001 -0,1701

0,7900 0,8729 -0,0829
1,0300 1,0924 -0,0624
1,4500 1,5917 -0,1417
1,6400 1,6400 0,0000
1,2900 1,2900 0,0000
1,0100 1,0100 0,0000
1,0700 1,0700 0,0000
1,0500 1,0500 0,0000
0,7300 0,7307 -0,0007
1,4300 1,2366 0,1934
1,0500 1,0505 -0,0005
1,0000 1,0000 0,0000
1,2800 1,2800 0,0000
1,0000 1,0000 0,0000
0,8600 0,8600 0,0000
0,9800 0,9870 -0,0070
1,2100 1,2100 0,0000
1,3100 1,3100 0,0000
0,9700 0,9221 0,0479
1,7500 1,7495 0,0005
2,0500 2,0500 0,0000
0,8200 0,9396 -0,1196
1,0000 0,9999 0,0001
0,9800 0,9800 0,0000
0,6700 0,6702 -0,0002
1,7600 1,7600 0,0000
1,2000 1,4727 -0,2727
1,1200 1,1199 0,0001
0,9600 1,1200 -0,1600
1,7400 1,7388 0,0012
1,5400 1,5365 0,0035
1,2500 0,9959 0,2541
1,0000 1,1006 -0,1006
0,8700 0,8700 0,0000
1,0000 1,0914 -0,0914
1,1100 1,1100 0,0000
1,0000 0,9999 0,0001
1,8750 1,8744 0,0006
2,0450 1,6395 0,4055
1,7800 1,7800 0,0000
1,9900 1,7800 0,2100
1,2500 1,2500 0,0000
1,1300 1,1297 0,0003
1,0200 0,8481 0,1719

1,3000 1,2830 0,0170
1,2000 1,2170 -0,0170
1,0900 1,0900 0,0000
0,7800 0,7801 -0,0001
2,0000 1,9999 0,0001
1,7000 1,6928 0,0072
1,0200 1,0198 0,0002
0,8900 0,8215 0,0685
1,4600 1,4598 0,0002
0,8000 0,8000 0,0000
1,4400 1,4391 0,0009
0,8600 0,8601 -0,0001
1,0800 1,0796 0,0004
1,1100 1,1100 0,0000
1,4000 1,4000 0,0000
1,3500 1,3500 0,0000
1,0300 1,0371 -0,0071
1,2800 1,2800 0,0000
1,6300 1,6300 0,0000
1,0500 1,0693 -0,0193
1,0300 1,0280 0,0020
1,0900 1,0900 0,0000
1,1100 1,1102 -0,0002
1,0100 1,0100 0,0000
0,6250 0,7969 -0,1719
1,1200 1,1200 0,0000
1,2000 1,1992 0,0008
1,8000 1,8000 0,0000
0,9000 0,9470 -0,0470
0,9600 0,9600 0,0000
0,8300 0,8303 -0,0003
0,7900 0,7899 0,0001
0,6700 0,7388 -0,0688
1,4500 1,4502 -0,0002
1,5800 1,4502 0,1298
1,3700 1,3698 0,0002
2,0500 2,0500 0,0000

Appendix F. GRNN Approach Output Tables for Model 5

Actual(1) Network(1) Act-Net(1)


1,3200 1,3200 0,0000
0,9400 0,9400 0,0000
0,9700 0,9701 -0,0001
1,6100 1,6391 -0,0291
1,6400 1,6384 0,0016
1,3500 1,3500 0,0000
1,2700 1,2762 -0,0062
1,0600 1,0600 0,0000
1,5500 1,6252 -0,0752
1,4000 1,3999 0,0001
1,1800 1,1800 0,0000
1,3100 1,3089 0,0011
0,7500 0,7617 -0,0117
0,8000 0,8011 -0,0011
0,7700 0,7954 -0,0254
1,0500 1,0279 0,0221
0,9800 0,9801 -0,0001
2,0900 2,0545 0,0355
1,0000 1,0528 -0,0528
0,9000 0,9016 -0,0016
1,1000 1,1002 -0,0002
1,2000 1,1045 0,0955
1,2900 1,2900 0,0000
0,9700 0,9700 0,0000
1,1300 1,1300 0,0000
0,8600 0,8600 0,0000
1,1200 1,1196 0,0004
0,9600 0,9785 -0,0185
1,0000 1,0007 -0,0007
1,1200 1,1196 0,0004
1,1000 1,0722 0,0278
1,4000 1,4000 0,0000
1,0000 1,0000 0,0000
0,9900 1,0300 -0,0400
1,0300 1,0300 0,0000
1,3200 1,3003 0,0197
1,5000 1,5011 -0,0011
1,5200 1,5016 0,0184
1,1100 0,8600 0,2500
0,9700 0,9700 0,0000
1,4700 1,3046 0,1654
0,9300 0,9403 -0,0103
0,9900 1,0010 -0,0110
1,3500 1,3500 0,0000
0,7900 0,7900 0,0000

1,1500 1,1695 -0,0195
1,5000 1,3352 0,1648
2,0800 1,6400 0,4400
1,1900 1,1901 -0,0001
0,9300 1,0614 -0,1314
0,8100 0,8101 -0,0001
1,0500 1,0496 0,0004
1,0700 1,0700 0,0000
1,2100 1,2100 0,0000
1,8200 1,8061 0,0139
0,9700 0,9666 0,0034
0,6200 0,6200 0,0000
0,7800 0,7800 0,0000
1,5400 1,5400 0,0000
0,9300 0,9451 -0,0151
1,6200 1,6816 -0,0616
1,3000 1,2922 0,0078
1,7800 1,8154 -0,0354
1,0300 1,0299 0,0001
1,2300 1,2300 0,0000
1,2500 1,1684 0,0816
1,3700 1,3663 0,0037
1,2800 1,2800 0,0000
1,1100 1,1100 0,0000
0,6800 1,1792 -0,4992
0,7000 0,9682 -0,2682
1,2000 1,1999 0,0001
2,1500 1,6400 0,5100
1,3500 1,3514 -0,0014
0,8900 0,9126 -0,0226
0,9200 0,9196 0,0004
0,6400 0,6408 -0,0008
1,1400 1,1400 0,0000
1,1900 1,0465 0,1435
0,8700 0,8998 -0,0298
1,0500 1,0499 0,0001
1,1700 1,1582 0,0118
1,3100 1,3092 0,0008
1,0500 1,0508 -0,0008
1,3600 1,3600 0,0000
1,5300 1,5286 0,0014
1,3500 1,3499 0,0001
1,0000 1,0000 0,0000
0,8300 1,0007 -0,1707

0,7900 0,9554 -0,1654
1,0300 1,0509 -0,0209
1,4500 1,5504 -0,1004
1,6400 1,6400 0,0000
1,2900 1,3039 -0,0139
1,0100 1,0100 0,0000
1,0700 1,0730 -0,0030
1,0500 1,0501 -0,0001
0,7300 0,7772 -0,0472
1,4300 1,2599 0,1701
1,0500 1,0524 -0,0024
1,0000 1,0000 0,0000
1,2800 1,2800 0,0000
1,0000 1,0001 -0,0001
0,8600 0,8600 0,0000
0,9800 1,0267 -0,0467
1,2100 1,2100 0,0000
1,3100 1,3100 0,0000
0,9700 0,9065 0,0635
1,7500 1,7500 0,0000
2,0500 2,0500 0,0000
0,8200 0,9389 -0,1189
1,0000 0,9997 0,0003
0,9800 0,9800 0,0000
0,6700 0,6700 0,0000
1,7600 1,7600 0,0000
1,2000 1,4620 -0,2620
1,1200 1,1200 0,0000
0,9600 1,1200 -0,1600
1,7400 1,7400 0,0000
1,5400 1,5120 0,0280
1,2500 1,1160 0,1340
1,0000 1,1045 -0,1045
0,8700 0,8700 0,0000
1,0000 0,8669 0,1331
1,1100 1,1100 0,0000
1,0000 0,9999 0,0001
1,8750 1,8273 0,0477
2,0450 1,6390 0,4060
1,7800 1,7800 0,0000
1,9900 1,7800 0,2100
1,2500 1,2500 0,0000
1,1300 1,1300 0,0000
1,0200 0,9603 0,0597

1,3000 1,3000 0,0000
1,2000 1,2000 0,0000
1,0900 1,0900 0,0000
0,7800 0,7801 -0,0001
2,0000 2,0000 0,0000
1,7000 1,6997 0,0003
1,0200 1,0200 0,0000
0,8900 0,8656 0,0244
1,4600 1,4600 0,0000
0,8000 0,8000 0,0000
1,4400 1,4400 0,0000
0,8600 0,8600 0,0000
1,0800 1,0800 0,0000
1,1100 1,1100 0,0000
1,4000 1,4001 -0,0001
1,3500 1,3500 0,0000
1,0300 1,0307 -0,0007
1,2800 1,2800 0,0000
1,6300 1,6300 0,0000
1,0500 1,0786 -0,0286
1,0300 1,0299 0,0001
1,0900 1,0911 -0,0011
1,1100 1,1107 -0,0007
1,0100 1,0100 0,0000
0,6250 0,6847 -0,0597
1,1200 1,1200 0,0000
1,2000 1,2000 0,0000
1,8000 1,8000 0,0000
0,9000 0,9596 -0,0596
0,9600 0,9600 0,0000
0,8300 0,8300 0,0000
0,7900 0,7896 0,0004
0,6700 0,6947 -0,0247
1,4500 1,4500 0,0000
1,5800 1,4502 0,1298
1,3700 1,3699 0,0001
2,0500 2,0500 0,0000

Appendix G. Error Through Patterns Graphs for Model 3,4 and 5.

Figure App. G1 : Error through patterns for act-net for Model 3

Figure App. G2 : Error through patterns for act-net for Model 4

Figure App. G3 : Error through patterns for act-net for Model 5

Appendix H. Slope Data Used In Program ( Cao Jinggang , 2002 )

H ( m. ) Hw ( m. ) Hb ( m. ) β ( deg. ) γ ( kN/m^3 ) c ( kPa ) Φ ( deg. ) kh kv F.S.


10,00 0,00 10,00 33,69 20,00 10,00 20,00 0,100 0,050 1,32
15,20 0,00 0,00 71,60 18,00 20,00 20,00 0,150 0,100 0,94
50,00 0,00 0,00 21,80 11,00 15,00 21,00 0,200 0,150 0,97
10,00 9,00 0,00 26,57 19,61 31,70 13,00 0,250 0,200 1,61
10,50 0,00 0,00 26,57 20,27 31,70 13,00 0,300 0,250 1,64
5,00 0,00 30,00 20,00 20,00 40,00 30,00 0,350 0,050 1,35
8,05 0,00 0,00 26,57 18,50 15,00 10,00 0,400 0,100 1,27
23,75 6,30 0,00 29,20 17,65 0,00 37,00 0,450 0,150 1,06
10,00 9,00 2,00 30,00 18,00 25,00 10,00 0,500 0,200 1,55
6,00 6,00 0,00 33,69 19,80 4,00 32,00 0,100 0,250 1,40
44,20 12,00 0,00 19,98 22,76 16,76 37,50 0,150 0,050 1,18
20,00 0,00 0,00 33,69 19,65 4,31 32,00 0,200 0,100 1,31
6,20 0,00 0,00 16,72 18,80 0,00 20,00 0,250 0,150 0,75
7,20 0,00 0,00 19,98 18,80 1,00 20,00 0,300 0,200 0,80
7,00 0,00 0,00 18,43 18,80 1,00 20,00 0,350 0,250 0,77
7,80 0,00 3,20 44,50 18,60 10,20 20,00 0,400 0,050 1,05
12,20 0,00 0,00 17,10 18,80 1,50 20,00 0,450 0,100 0,98
8,00 0,00 0,00 26,57 18,50 20,00 20,00 0,500 0,150 2,09
20,00 0,00 0,00 22,00 20,00 0,00 20,00 0,035 0,200 1,00
20,00 10,00 0,00 22,00 20,00 0,00 20,00 0,035 0,250 0,90
11,50 0,00 10,80 27,60 17,71 9,09 20,35 0,200 0,050 1,10
11,50 0,00 10,80 27,60 17,71 9,09 20,35 0,100 0,100 1,20
8,00 0,00 0,00 45,00 18,50 15,00 20,00 0,100 0,150 1,29
8,00 5,60 5,60 45,00 19,50 17,50 7,50 0,150 0,200 0,97
7,62 6,73 2,31 26,57 18,53 0,00 30,00 0,200 0,250 1,13
32,80 26,90 164,00 18,16 17,00 12,00 16,30 0,250 0,050 0,86

20,40 10,00 0,00 22,00 20,00 20,00 20,00 0,035 0,100 1,12
20,40 10,00 0,00 22,00 20,00 20,00 20,00 0,100 0,150 0,96
44,20 0,00 0,00 19,98 22,80 16,80 37,50 0,100 0,200 1,00
44,20 0,00 0,00 19,98 22,80 16,80 37,50 0,150 0,250 1,12
4,90 0,00 0,00 18,43 18,80 1,20 20,00 0,200 0,050 1,10
20,00 0,00 100,00 33,69 18,80 41,70 15,00 0,250 0,100 1,40
15,20 0,00 0,00 63,40 18,00 20,00 20,00 0,300 0,150 1,00
46,00 0,00 0,00 41,01 9,00 25,00 20,00 0,350 0,200 0,99
45,50 0,00 0,00 41,01 12,00 23,00 25,00 0,400 0,250 1,03
8,00 0,00 0,00 45,00 18,50 20,00 15,00 0,450 0,050 1,32
8,00 0,00 0,00 45,00 18,50 20,00 20,00 0,500 0,100 1,50
30,00 0,00 0,00 20,56 19,61 14,71 20,00 0,100 0,150 1,52
32,80 26,90 164,00 18,16 17,00 12,00 16,30 0,150 0,200 1,11
17,00 0,00 0,00 33,69 18,80 1,00 20,00 0,200 0,250 0,97
6,10 0,00 30,50 33,69 19,62 4,31 32,00 0,250 0,050 1,47
10,00 0,00 5,00 26,57 16,00 10,00 15,00 0,300 0,100 0,93
9,10 4,00 5,00 26,60 16,50 8,50 10,60 0,350 0,150 0,99
8,00 0,00 0,00 45,00 18,50 25,00 10,00 0,400 0,200 1,35
17,68 17,68 88,40 26,57 19,65 10,06 27,00 0,450 0,250 0,79
8,56 0,00 0,00 45,00 18,50 20,00 10,00 0,500 0,050 1,15
44,00 0,00 0,00 19,98 22,80 16,80 37,50 0,100 0,100 1,50
13,50 0,00 0,00 26,57 17,30 57,50 7,00 0,150 0,150 2,08
6,10 0,00 0,00 33,69 19,65 4,31 32,00 0,200 0,200 1,19
6,00 0,00 0,00 23,96 18,80 1,00 20,00 0,250 0,250 0,93
7,00 0,00 0,00 26,57 18,80 1,00 20,00 0,300 0,050 0,81
10,00 0,00 0,00 26,57 18,93 11,97 32,00 0,350 0,100 1,05
10,00 0,00 5,00 33,69 17,66 7,85 25,00 0,400 0,150 1,07
8,00 0,00 0,00 26,57 18,50 5,00 20,00 0,450 0,200 1,21
8,00 0,00 0,00 26,57 18,50 15,00 20,00 0,500 0,250 1,82
10,40 0,00 0,00 15,24 18,80 0,00 20,00 0,100 0,050 0,97

5,10 3,27 25,50 25,25 18,84 0,00 34,00 0,150 0,100 0,62
4,00 0,00 0,00 20,00 17,95 5,00 15,00 0,200 0,150 0,78
20,00 0,00 0,00 20,00 19,72 30,00 30,00 0,250 0,200 1,54
4,50 0,00 1,30 20,00 15,92 2,16 17,33 0,300 0,250 0,93
12,19 0,00 0,00 33,69 19,24 22,80 35,00 0,350 0,050 1,62
9,50 0,00 0,00 25,50 20,00 11,50 9,60 0,400 0,100 1,30
8,00 0,00 0,00 26,57 18,50 20,00 15,00 0,450 0,150 1,78
20,00 0,00 0,00 26,57 18,71 0,00 23,50 0,510 0,100 1,03
21,50 0,00 0,00 24,13 17,40 5,00 10,00 0,100 0,050 1,23
44,20 0,00 0,00 20,00 22,00 16,80 37,50 0,150 0,100 1,25
44,20 0,00 0,00 20,00 22,00 16,80 37,50 0,200 0,150 1,37
13,70 0,00 0,00 26,57 18,71 0,00 14,00 0,050 0,200 1,28
8,20 0,00 0,00 45,00 18,50 15,00 15,00 0,100 0,250 1,11
44,10 0,00 0,00 19,98 22,80 16,50 37,50 0,150 0,050 0,68
44,10 0,00 0,00 19,98 22,80 16,50 37,50 0,200 0,100 0,70
12,19 0,00 7,62 27,15 18,87 0,00 33,00 0,250 0,150 1,20
12,19 0,00 7,62 27,15 18,87 67,00 0,00 0,300 0,200 2,15
12,19 0,00 7,62 27,15 18,87 28,70 20,00 0,350 0,250 1,35
8,45 0,00 0,00 45,00 18,50 10,00 15,00 0,400 0,050 0,89
21,50 0,00 0,00 24,13 17,40 0,00 14,00 0,450 0,100 0,92
21,50 0,00 0,00 24,13 17,40 0,00 17,20 0,500 0,150 0,64
46,00 0,00 0,00 38,66 14,00 20,00 26,30 0,100 0,200 1,14
22,70 0,00 0,00 16,27 18,20 0,00 14,10 0,150 0,250 1,19
22,70 0,00 0,00 16,27 18,20 0,00 17,20 0,200 0,050 0,87
15,50 0,00 0,00 15,01 18,00 5,00 10,00 0,250 0,100 1,05
15,50 0,00 0,00 15,01 18,00 0,00 14,00 0,300 0,150 1,17
15,00 0,00 0,00 12,99 22,00 0,00 26,00 0,350 0,200 1,31
15,00 0,00 0,00 12,99 22,00 0,00 26,00 0,400 0,250 1,05
25,00 6,25 125,00 22,00 18,80 30,00 20,00 0,450 0,050 1,36
8,00 0,00 0,00 45,00 18,50 25,00 15,00 0,500 0,100 1,53

8,00 0,00 0,00 26,50 18,50 15,00 15,00 0,100 0,150 1,35
10,06 30,38 0,00 21,80 18,44 0,96 24,50 0,150 0,200 1,00
10,06 30,38 0,00 21,80 18,44 0,72 25,60 0,200 0,250 0,83
6,00 6,00 30,00 33,69 19,65 1,50 30,00 0,250 0,050 0,79
12,80 0,00 0,00 27,76 21,85 8,62 32,00 0,300 0,100 1,03
27,43 0,00 0,00 26,40 17,29 44,54 12,00 0,350 0,150 1,45
14,33 15,14 3,05 36,53 20,47 68,00 0,00 0,400 0,200 1,64
8,00 0,00 0,00 26,57 18,50 10,00 15,00 0,450 0,250 1,29
10,00 7,00 0,00 39,81 20,36 0,98 32,50 0,500 0,050 1,01
18,00 0,00 0,00 26,57 19,50 9,81 27,00 0,100 0,100 1,07
12,80 0,00 6,10 28,50 21,55 8,62 30,00 0,150 0,150 1,05
10,06 0,00 0,00 21,80 18,01 15,33 20,00 0,200 0,200 0,73
10,06 0,00 0,00 21,80 18,84 0,00 20,00 0,250 0,250 1,43
7,01 0,00 0,00 18,43 21,29 0,00 20,00 0,300 0,050 1,05
7,01 0,00 0,00 18,43 19,79 0,96 13,00 0,350 0,100 1,00
18,29 0,00 0,00 11,00 22,32 15,33 21,00 0,400 0,150 1,28
12,10 10,00 0,00 24,38 16,10 25,00 20,00 0,450 0,200 1,00
30,00 0,00 20,00 30,00 21,00 22,11 18,29 0,500 0,250 0,86
5,00 0,00 30,00 33,69 19,60 2,56 27,60 0,100 0,050 0,98
67,80 0,00 0,00 29,05 19,00 33,00 29,50 0,150 0,100 1,21
67,80 45,00 0,00 29,05 16,00 25,00 20,00 0,200 0,150 1,31
14,30 13,30 0,00 27,00 19,60 9,60 25,00 0,250 0,200 0,97
8,00 0,00 0,00 45,00 18,50 30,00 15,00 0,300 0,250 1,75
8,00 0,00 0,00 26,57 18,50 25,00 15,00 0,350 0,050 2,05
11,50 0,00 10,80 27,60 17,71 9,09 20,35 0,400 0,100 0,82
5,00 1,00 3,00 26,57 17,64 4,90 10,00 0,450 0,150 1,00
12,80 8,09 8,09 28,00 21,67 7,82 32,00 0,500 0,200 0,98
10,00 0,00 0,00 14,04 20,00 10,00 25,00 0,100 0,250 0,67
6,00 0,00 30,00 45,00 18,00 10,00 37,00 0,150 0,050 1,76
6,00 0,00 30,00 33,69 18,00 10,00 37,00 0,200 0,100 1,20

20,15 10,00 0,00 22,00 20,00 20,00 20,00 0,035 0,250 1,12
20,15 10,00 0,00 22,00 20,00 20,00 20,00 0,100 0,050 0,96
8,00 0,00 0,00 45,00 18,50 25,00 20,00 0,100 0,050 1,74
8,30 0,00 0,00 26,57 18,50 10,00 20,00 0,200 0,100 1,54
11,50 0,00 10,80 27,60 17,71 9,09 20,35 0,050 0,150 1,25
11,50 0,00 10,80 27,60 17,71 9,09 20,35 0,100 0,100 1,00
11,50 0,00 10,80 27,60 17,71 9,09 20,35 0,150 0,200 0,87
10,20 0,00 5,00 45,00 19,60 11,80 30,00 0,200 0,050 1,00
8,23 0,00 0,00 35,00 18,67 26,34 15,00 0,100 0,100 1,11
3,66 0,00 0,00 30,00 16,50 11,49 0,00 0,150 0,150 1,00
30,50 0,00 0,00 20,00 18,84 14,40 25,00 0,200 0,200 1,88
30,50 0,00 0,00 20,00 18,84 57,46 20,00 0,250 0,250 2,05
100,00 0,00 0,00 35,00 28,44 29,42 35,00 0,300 0,050 1,78
100,00 0,00 0,00 35,00 28,44 39,23 38,00 0,350 0,100 1,99
40,00 0,00 0,00 30,00 20,60 16,28 26,50 0,400 0,150 1,25
50,00 0,00 0,00 20,00 14,80 0,00 17,00 0,450 0,200 1,13
88,00 0,00 0,00 30,00 14,00 11,97 26,00 0,500 0,250 1,02
120,00 0,00 0,00 53,00 25,00 120,00 45,00 0,100 0,050 1,30
200,00 0,00 0,00 50,00 26,00 150,05 45,00 0,150 0,100 1,20
6,00 0,00 0,00 30,00 18,50 25,00 0,00 0,200 0,150 1,09
6,00 0,00 0,00 30,00 18,50 12,00 0,00 0,250 0,200 0,78
10,00 0,00 0,00 30,00 22,40 10,00 35,00 0,300 0,250 2,00
20,00 0,00 0,00 30,00 21,40 10,00 30,34 0,350 0,050 1,70
50,00 0,00 0,00 45,00 22,00 20,00 36,00 0,400 0,100 1,02
50,00 0,00 0,00 45,00 22,00 0,00 36,00 0,450 0,150 0,89
4,00 0,00 0,00 35,00 12,00 0,00 30,00 0,500 0,200 1,46
8,00 0,00 0,00 45,00 12,00 0,00 30,00 0,100 0,250 0,80
4,00 0,00 0,00 35,00 12,00 0,00 30,00 0,150 0,050 1,44
8,00 0,00 0,00 45,00 12,00 0,00 30,00 0,200 0,100 0,86
214,00 0,00 0,00 37,00 23,47 0,00 32,00 0,250 0,150 1,08

115,00 0,00 0,00 40,00 16,00 70,00 20,00 0,300 0,200 1,11
10,67 0,00 0,00 22,00 20,41 24,90 13,00 0,350 0,250 1,40
12,19 0,00 0,00 22,00 19,63 11,97 20,00 0,400 0,050 1,35
12,80 0,00 0,00 28,00 21,82 8,62 32,00 0,450 0,100 1,03
45,72 0,00 0,00 16,00 20,41 33,52 11,00 0,500 0,150 1,28
10,67 0,00 0,00 25,00 18,84 15,32 30,00 0,100 0,200 1,63
7,62 0,00 0,00 20,00 18,84 0,00 20,00 0,150 0,250 1,05
61,00 0,00 0,00 20,00 21,43 0,00 20,00 0,200 0,050 1,03
21,00 0,00 0,00 35,00 19,06 11,71 28,00 0,250 0,100 1,09
30,50 0,00 0,00 20,00 18,84 14,36 25,00 0,300 0,150 1,11
76,81 0,00 0,00 31,00 21,51 6,94 30,00 0,350 0,200 1,01
88,00 0,00 0,00 30,00 14,00 11,97 26,00 0,400 0,250 0,63
20,00 0,00 0,00 45,00 18,00 24,00 30,15 0,450 0,050 1,12
100,00 0,00 0,00 20,00 23,00 0,00 20,00 0,500 0,100 1,20
15,00 0,00 0,00 45,00 22,40 100,00 45,00 0,100 0,150 1,80
10,00 0,00 0,00 45,00 22,40 10,00 35,00 0,150 0,200 0,90
50,00 0,00 0,00 45,00 20,00 20,00 36,00 0,200 0,250 0,96
50,00 0,00 0,00 45,00 20,00 20,00 36,00 0,250 0,050 0,83
50,00 0,00 0,00 45,00 20,00 0,00 36,00 0,300 0,100 0,79
50,00 0,00 0,00 45,00 20,00 0,00 36,00 0,350 0,150 0,67
8,00 0,00 0,00 33,00 22,00 0,00 40,00 0,400 0,200 1,45
8,00 0,00 0,00 33,00 24,00 0,00 40,00 0,450 0,250 1,58
8,00 0,00 0,00 20,00 20,00 0,00 24,50 0,500 0,050 1,37
8,00 0,00 0,00 20,00 18,00 5,00 30,00 0,100 0,100 2,05

CURRICULUM VITAE

Mert TOLON was born on August 19, 1983 in İstanbul. He completed primary school in 1994, secondary school at Evrim Secondary School in 1998, and high school at Evrim High School in 2001, all in İstanbul. He started his B.Sc. degree at the Civil Engineering Department of Yıldız Technical University, completed the four-year undergraduate education, and graduated in 2005. In the same year, he started his M.Sc. education as a graduate student in the Earthquake Engineering Division of the Civil Engineering Department at Istanbul Technical University.

ÖZGEÇMİŞ

İnş. Müh. Mert TOLON 19.08.1983 ‘de İstanbul’da doğdu. İlkokulu 1994’de,
ortaokulu 1998’de , liseyi de 2001’de Özel Evrim Lisesi’nde (İstanbul) bitirdi. Lisans
eğitimini Yıldız Teknik Üniversitesi İnşaat Mühendisliği bölümünde 4 yılda 2005
yılında tamamladı. Yine aynı yıl İstanbul Teknik Üniversitesi Fen Bilimleri Enstitüsü
Deprem Mühendisliği Anabilim Dalında yüksek lisans eğitimine başladı.

