Computational Geosciences With Mathematica (William C. Haneberg, 2004) @geo Pedia
Computational Geosciences With Mathematica (William C. Haneberg, 2004) @geo Pedia
Computational Geosciences With Mathematica (William C. Haneberg, 2004) @geo Pedia
Haneberg
Computational Geosciences with Mathematica
William C. Haneberg
Computational
Geosciences
with
Mathematica
With 297 Figures and a CD-ROM
123
E-mail:[email protected]
DOI 10.1007/978-3-642-18554-0
This work is subject to copyright.AII rights are reserved,whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, reu se of illustrations, recitations,
broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication
of this publication or parts thereof is permitted only under the provisions of the German Copyright
Law of September 9, 1965, in its current version, and permission for use must always be obtained
from Springer.Violations are liable to prosecution under the German Copyright Law.
32/3141/AO - 543210
Preface
MathematicaR is a comprehensive mathematics package that can be used to perform numerical calculations, manipulate symbolic expressions, develop complicated computer programs, and create sophisticated scientic graphics. My objective in writing Computational Geosciences with Mathematica was to show how the
program can be applied to solve a wide range of problems of interest to geologists,
geomorphologists, hydrologists, geophysicists, and other geoscientists. As such, it
is partly a textbook on quantitative geoscience and partly a manual showing how
Mathematica can be used to solve some problems of interest to geoscientists. It is
written at a level appropriate for graduate students embarking on quantitative research projects, professors who are interested in learning about new approaches to
quantitative problem solving, and practicing geoscientists with an interest in Mathematica. While some of the material is more advanced than that taught in typical undergraduate geology programs in the United States, much of it will be accessible to
motivated junior and senior students.
It has long puzzled me that, while advanced computational tools such as Mathematica have been available for 15 years or more, many geoscientists (geologists in
particular) seem to be stuck in a spreadsheet rut. While spreadsheets manipulate
rows and columns of numbers adequately, they are not well suited for much more
than simple arithmetic. Still, their popularity persists and, in my opinion, continues
to make life more difcult for students or professionals trying to solve quantitative geoscientic problems. I began using Mathematica in 1989 and it has become
an indispensable computational and graphics tool in my research and professional
practice. Gerard Middleton seems to share a similar view of spreadsheets to his book
Data Analysis in the Earth Sciences Using Matlab, and I am glad to see that I am
not alone in that regard. Inexpensive student versions make Mathematica particularly well suited for students in computer methods or quantitative geology classes.
The subject matter and examples in Computational Geoscience with Mathematica were drawn largely from my experience as an applied researcher in engineering geology and hydrogeology, university instructor, and consulting geologist.
I have tried to include a broad range of topics, but there are many geoscientic and
mathematical topics that are not covered. Fractals, wavelets, and geostatistics, for
example, are all topics that can be fruitfully addressed Mathematica, but these are
topics that either fall well outside my range of experience or the space available in
VI
Preface
this book. I hope that, rather than sparking complaints, their omission will motivate
specialists in those elds to ll the void.
Mathematica is published by Wolfram Research, Inc., and is currently (summer
2003) in version 5.0. For more information, contact the company at:
Wolfram Research, Inc.
100 Trade Center Drive
Champaign, IL 61820-7237 USA
(217) 398-0700
[email protected]
www.wolfram.com
Mathematica is a registered trademark of Wolfram Research, Inc. Matlab is a
registered trademark of The MathWorks, Inc.
This book would not have been written without the support and encouragement
of my wife Lisa. Others who deserve a measure of credit (but none of the blame
for any mistakes) include Arvid Johnson and Paul Potter who, respectively, taught
me how to formulate geological problems in terms of mechanics and statistics. John
Hawley and the late Frank Kottlowski hired me and provided a fertile environment
for professional growth at the New Mexico Bureau of Mines and Mineral Resources,
a division of New Mexico Tech. The late Allan Gutjahr, also at New Mexico Tech,
was a fountain of statistical wisdom during our carpooling years. Mike Whitworth,
Marshall Reiter, Dave Love, Laurel Goodwin, Peter Mozley, and many other colleagues introduced me to a fascinating array of technical topics during my time
in New Mexico. Finally, Wolfram Research generously provided copies of Mathematica and allowed me to use a pre-release version of Mathematica 5.0 during the
writing of this book.
William C. Haneberg
Port Orchard, Washington
August 2003
Preface
VII
multi-author monographs (Clay and Shale Slope Instability, published by the Geological Society of America, and Faults and Fluid Flow in the Shallow Subsurface,
published by the American Geophysical Union). He earned a Ph.D. in geology from
the University of Cincinnati, where his dissertation research concerned precipitation induced pore pressure increases in potentially unstable slopes. For additional
information, please visit www.haneberg.com.
Contents
Introduction to Mathematica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.1 What is Mathematica? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.2 Getting Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.3 Installing and Running Mathematica . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.4 How the Book is Organized . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.5 A Brief Tour of Mathematica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.5.1 Symbolic and Numerical Operations . . . . . . . . . . . . . . . . . . . .
1.5.2 Vector and Matrix Operations . . . . . . . . . . . . . . . . . . . . . . . . . .
1.5.3 2-D and 3-D Graphing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.5.4 User-Dened Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.5.5 Data Import and Export . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.5.6 Mathematica Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.6 References and Recommended Reading . . . . . . . . . . . . . . . . . . . . . . . .
1
1
2
2
3
4
4
8
10
20
21
22
23
25
25
25
25
26
26
29
29
30
35
37
38
39
41
41
43
47
49
55
Contents
Contents
XI
XII
Contents
Contents
XIII
1 Introduction to Mathematica
1 Introduction to Mathematica
imental data analysis, real time 3-D graphics, fuzzy logic, neural networks, signal
processing, time series analysis, and wavelets. Readers interested in those packages
should contact Wolfram Research for more information about their capabilities and
availability.
1 Introduction to Mathematica
After pressing enter, the intial expression is assigned an input number (in this case 1)
and a corresponding output line is shown immediately below. Mathematica distinguishes between exact integer expressions and approximate numerical expressions,
and therefore returned a value of 2/3 rather than 0.666667. Important irrational numbers such as are also manipulated as symbols unless Mathematica is forced to assign a numerical approximation. Purely symbolic expressions can also be used, for
example
In[2]:= a/b
a
Out[2]=
b
Input and output numbers are reset each time the Mathematica kernel is started.
Therefore, if you start Mathematica, save and close the window, and then open a
new window the input and output numbers will continue in sequence because the
kernel was not restarted.
One of Mathematicas strengths is its ability to perform symbolic manipulation,
for example algebra and calculus. It can nd symbolic solutions to many kinds of
equations, for example
In[3]:= Solvea/b 4, b
a
Out[3]= b
4
11
3
Basic Input palette (3 x). As discussed in Chapter 3, matrix and vector multiplication is slightly more specic and the multiplication operators cannot be switched
indiscriminantly. The same approach works for sets of equations
In[5]:= Solve2 x 6y 18, 7 x 8 y 7
, x, y
Out[5]= x
93
56
,y
29
29
Solve is one of Mathematicas standard functions, which all begin with uppercase letters and have arguments enclosed in square brackets. There are hundreds
of standard functions, and hundreds more in packages accompanying the standard
Mathematica distribution. They are listed alphabetically in The Mathematica Book
and can also be viewed using the Help Browser. Mathematica uses curly braces,
, to enclose lists of expressions or variables such as the lists of two equations and
two variables above. It can also evaluate just about any derivative or integral that
is likely to be included in standard mathematical references. A simple example, the
derivative of x2 with respect to x, is
In[7]:= x x2
Out[7]= 2 x
The derivative and integral symbols were pasted into the Mathematica notebook by
clicking on the Basic Input palette. If the limits of integration are specied, Mathematica will also calculate a denite integral.
b
In[9]:=
2xx
Out[9]= a2 b2
Say we know the values of a and b. They can be substituted into the result above
using a replacement rule specied with the /. operator. For example, if a 3.0 and
b 7.2
In[10]:= % /. a
3., b
7.2
Out[10]= 42.84
Using the replacement rule evalutes the expression with a 3.0 and b 7.2 only
in this instance, and does not permanently change the value of the expression. The
1 Introduction to Mathematica
% sign is shorthand for the previous output, and %% is shorthand for the output line
before that. Output lines in general can be referenced using either %n or Outn,
where n is the output line number. Alternatively, the the denite integral could have
been evaluated numerically by using real numbers for the limits of integration.
7.2
In[11]:=
2 xx
3.
Out[11]= 42.84
The sign is used to permanently assign values to variables. Variables can be numerical values
In[12]:= x 7.2
Out[12]= 7.2
Once a value is assigned to a variable name, it can be used like any other variable.
For example,
In[15]:=
x
Out[15]= 2.68328
because we previously assigned the value of 7.2 to x. To ensure that it does not
cause confusion further on, we can also clear the value of x.
In[16]:= Clearx
In can sometimes be desirable to suppress output, which can be done with a semicolon.
In[17]:= sinx Sin10.
In this case, a result is calculated and assigned to the variable name sinx but is not
displayed. Entering the variable name will display the result
In[18]:= sinx
Out[18]= 0.173648
A third way to force numerical output is to make at least one of the integers into a
real number by adding a decimal point.
In[21]:= 2/3.
Out[21]= 0.666667
or
In[23]:= N
Out[23]= 2.71828
If asked to give a numerical value for the imaginary number , Mathematica returns
In[24]:= N
Out[24]= 0. 1.
Mathematicas early versions used text input and output of expressions, but recent versions have included sophisticated mathematical notation and typsetting capabilities. The result is that many Mathematica functions can be specied using
fairly traditional mathematical notation or simple text-only input. For example, the
derivative and integral above can also be expressed as
In[25]:= Dx2, x
Out[25]= 2 x
and
1 Introduction to Mathematica
In[26]:= Integrate2 x, x
Out[26]= x2
2.8
Out[28]= 1.67332
or
In[29]:= Sqrt2.8
Out[29]= 1.67332
or
In[30]:= 2.81/2
Out[30]= 1.67332
Special symbols such as , , and can be represented using the text equivalents Pi,
I, and E.
1.5.2 Vector and Matrix Operations
Mathematica treats vectors of symbols, integers, and real numbers as lists and matrices as lists of lists. A list of data might be
In[31]:= data 1.2, 4.8, 2.8, 7.2, 9.1, 6.5
whereas one list is used to represent each row of a matrix using a Table.
In[32]:= m a, b
, c, d
Elements of lists or tables can be isolated using either Part or double square brackets . The rst element in the second row of m is
In[33]:= Partm, 2, 1
Out[33]= c
or, equivalently,
In[34]:= m2, 1
Out[34]= c
Matrices can also be lled with values following some functional relationship
by using the Table function.
In[35]:= Tablei j, i, 1, 3
, j, 1, 3
Out[35]= 1, 2, 3, 2, 4, 6, 3, 6, 9
In[36]:= MatrixForm%
1 2 3
Out[36]=
2 4 6
3 6 9
In[38]:= MatrixFormm
ab
Out[38]=
cd
They can also be constructed by clicking on the matrix button in the Basic Input
palette. Many of Mathematicas functions are listable, meaning that they can be
applied to lists (or lists of lists). To calculate the square root of each element in
data, for example, apply the square root function to the entire list.
In[39]:=
data
10
1 Introduction to Mathematica
0.5
-0.5
-1
-Graphics-
0.5
-0.5
-1
Out[42]= -Graphics-
11
A different function, ListPlot, is used for lists of data. If a list of single values is given, for example the list data dened above, ListPlot will assume
that they are dependent variables and that the independent variable has the values
1, 2, 3 . . .
In[43]:= ListPlotdata, PlotStyle > PointSize0.02
Out[43]= -Graphics-
Out[44]= -Graphics-
12
1 Introduction to Mathematica
In this case, Mathematica plots the rst element of each pair as the independent
variable and the second element as the dependent variable.
In[46]:= ListPlot%, PlotJoined
True
0.8
0.75
0.7
0.65
0.6
0.55
1.5
2.5
3.5
Out[46]= -Graphics-
Functions of two variables can be visualized as 3-D surface plots, contour plots, or
density plots.
In[47]:= Plot3DSinx Siny, x, 0, 2
, y, 0, 2
,
ColorOutput
GrayLevel
1
0.5
0
-0.5
-1
0
2
4
6
Out[47]= -SurfaceGraphics-
As with other Mathematica functions, options can be used to control the details of
the plots. The plot below sets the number of points at which the function is evaluate
to 50 instead of the default value of 25.
13
1
0.5
0
-0.5
-1
0
2
4
6
Out[48]= -SurfaceGraphics-
1
0.5
0
-0.5
-1
0
2
4
6
Out[49]= -SurfaceGraphics-
The Plot3D default is to shade surfaces using three simulated colored light sources
(rendered here using gray levels; see Appendix C for a detailed discussion of color
14
1 Introduction to Mathematica
and lighting). Setting Lighting
False removes the lighting and shades the
surface according to its height.
In[50]:= Plot3DSinx Siny, x, 0, 2
, y, 0, 2
,
ColorOutput
GrayLevel, Lighting
False
1
0.5
0
-0.5
-1
0
2
4
6
Out[50]= -SurfaceGraphics-
1
0.5
0
-0.5
-1
0
2
4
6
Out[51]= -SurfaceGraphics-
15
To see a complete list of the options available for any Mathematica function, use
Optionsfunction_name
.
In[52]:= OptionsPlot3D
Out[52]= AmbientLight GrayLevel0, AspectRatio Automatic,
Axes True, AxesEdge Automatic, AxesLabel None,
AxesStyle Automatic, Background Automatic,
Boxed True, BoxRatios 1, 1, 0.4,
BoxStyle Automatic, ClipFill Automatic,
ColorFunction Automatic,
ColorFunctionScaling True,
ColorOutput Automatic, Compiled True,
DefaultColor Automatic, DefaultFont $DefaultFont,
DisplayFunction $DisplayFunction, Epilog ,
FaceGrids None, FormatType $FormatType,
HiddenSurface True, ImageSize Automatic,
Lighting True, LightSources 1., 0., 1.,
RGBColor1, 0, 0, 1., 1., 1.,
RGBColor0, 1, 0, 0., 1., 1.,
RGBColor0, 0, 1, Mesh True,
MeshStyle Automatic, Plot3Matrix Automatic,
PlotLabel None, PlotPoints 25,
PlotRange Automatic, PlotRegion Automatic,
Prolog , Shading True,
SphericalRegion False, TextStyle $TextStyle,
Ticks Automatic, ViewCenter Automatic,
ViewPoint 1.3, 2.4, 2.,
ViewVertical 0., 0., 1.
16
1 Introduction to Mathematica
The function ContourPlot works in a similar manner, but with different options.
In[53]:= ContourPlotSinx Siny, x, 0, 2
, y, 0, 2
,
ColorOutput
GrayLevel
6
0
0
Out[53]= -ContourGraphics-
Here is the same function plotted with 3, instead of the default 10, contours.
In[54]:= ContourPlotSinx Siny, x, 0, 2
, y, 0, 2
,
ColorOutput
GrayLevel, Contours
3
6
0
0
Out[54]= -ContourGraphics-
17
0
0
Out[55]= -ContourGraphics-
Density plots display a function of two variables using continuous shades or colors
instead of contour intervals. Here is one with the default mesh
In[56]:= DensityPlotSinx Siny, x, 0, 2
, y, 0, 2
,
ColorOutput
GrayLevel
6
0
0
Out[56]= -DensityGraphics-
18
1 Introduction to Mathematica
0
0
Out[57]= -DensityGraphics-
19
1
0.5
0
-0.5
-1
20
15
10
5
10
5
15
20
Out[59]= -SurfaceGraphics-
To change the horizontal coordinates from row and column numbers, use the
MeshRange option.
In[60]:= ListPlot3D%%, ColorOutput
GrayLevel,
MeshRange
0, 2
, 0, 2
1
0.5
0
-0.5
-1
0
2
4
6
Out[60]= -SurfaceGraphics-
20
1 Introduction to Mathematica
The combined colon and equal sign, , delays the assignment of the value x2 to x2
until the function is executed, and is therefore different than x2 x2 . Once a function
is dened, it can be used just like any of the built-in Mathematica functions.
In[62]:= x29.5
Out[62]= 90.25
An equivalent way to accomplish the same thing is to use the Function function
In[63]:= Functionx2, x2
Out[63]= Functionx2, x2
In[64]:= x25
Out[64]= 25
The shorthand version can produce very compact programs and is often used by
expert Mathematica programmers, but can also be very difcult for others to read
and understand.
Mathematica contains a variety of functions useful for ow control in longer
programs for example I f, Do, While, and For that can be used for traditional
procedural programming. It also contains functions such as Map and Apply that
can be used for functional programming. Here are four different ways to calculate
the sines of a table of real numbers:
In[67]:= values Tablex, x, 10. , 40. , 10.
Out[67]= 0.174533, 0.349066, 0.523599, 0.698132
In[68]:= MapSin, values
Out[68]= 0.173648, 0.34202, 0.5, 0.642788
In[69]:= Sinvalues
Out[69]= 0.173648, 0.34202, 0.5, 0.642788
21
mean
1
datai
len i1
1
dev
len 1
len
datai mean2
i1
Outside of the module, however, the variables len, mean, and dev have no values.
In[74]:= len, mean, dev
22
1 Introduction to Mathematica
If List is specied as the le format, however, Mathematica will treat the data as
a single list.
In[76]:= Import"/Users/bill/Mathematica_Book/example.dat",
"List"
Out[76]= 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
The le path name can be pasted into an Import statement by selecting Get File
Path. . . from the Input menu. The same syntax works for graphics les.
In[77]:= Import"/Users/bill/Mathematica_Book/pako.jpg"
Out[77]= -Graphics-
Using the syntax above, Mathematica will use the le sufx to identify the le format. See the Mathematica documentation for information on les without sufxes.
Graphics les do not appear until they are specically shown using the Show function.
In[78]:= Show%
Out[78]= -Graphics-
The Export function works similarly to the Import function except that both an
expression (the data or image to be exported) and a le name must be specied.
1.5.6 Mathematica Packages
Mathematica functions and programs can be stored as text les known as packages
and loaded when needed. The standard distribution of Mathematica includes dozens
of packages with special functions for algebra, calculus, graphics, linear algebra, numerical mathematics, and statistics. To see a complete list of the standard packages
accompanying Mathematica, bring up the Help Browser window, choose Add-ons
23
& Links in the far left column, then Standard Packages in the middle column. The
right column will contain a list of directories, each of which contains several addon packages that can be loaded whenever they are needed. Additional packages are
available from Wolfram Research, from other commercial developers, and in the
public domain (generally downloadable from the internet). This book includes a
package named CompGeosci, which contains a number of functions for specialized
plots and calculations as well as color functions that are useful for color graphics.
Users can also write their own packages, although the details of package writing are
beyond the scope of this book.
Mathematica packages can be loaded in two ways. The rst is to use
<< package_name, which loads the specied package. This is generally not
the recommended method because problems can arise if a package is loaded
more than once during a Mathematica session. The preferred method is to use
Needspackage_name, which loads parts of the package as needed and will not
load part of a package more than once.
Package names can be specied either using their complete le path or, if they
are located along one of Mathematicas default le paths, using their directory (context in Mathematica terms) and package name. For example, to load the package
DescriptiveStatistics from the Statistics directory (context) located along one of the
default le paths, enter
In[79]:= Needs"StatisticsDescriptiveStatistics"
Note that the `character is not a single quotation mark! It is the character located
beneath the tilde (~) character in the upper left hand corner of most keyboards.To
see a listing of the default le path for the installation of Mathematica on your
computer,type $Path and press Enter.To see a list of the packages that have been
loaded during a given Mathematica session, type $Packages and press Enter.
Computer Note: The CompGeosci package will load correctly only if it is located in one of the directories in Mathematicas standard le path. Execute the
statement $Path to see a list of the default paths on your computer and place
the le CompGeosci.m in one of those directories. The specic le paths may
differ from one operating system to another. See Chapter 1 for more information
about installing the CompGeosci package.
2.2 Overview
Although Mathematica can easily plot functions and lists of data in Cartesian (x-y)
and polar (-r) coordinate systems, it does not include functions to make several
kinds of plots that are of particular interest to geoscientists. This chapter shows
how to produce stem plots of discretely sampled data, rose plots of 2-D orientation
data, ternary (triangular) plots, stereographic and equal-area projection plots of 3-D
orientation data, box-and-whisker plots of cumulative statistics, and borehole log
plots in which the independent variable of depth is plotted vertically rather than
horizontally. Several examples include step-by-step instructions to illustrate the use
of Mathematica graphics primitives to develop custom plots.
26
records, each of which in turn consists of 14 items: the year, 12 monthly precipitation values, and the total annual precipitation. Here is the rst row of data:
In[4]:= data1
Out[4]= 1949, 1.63, 6.09, 6.94, 0.41, 2.56,
0.06, 0.16, 0.02, 0.5, 2.03, 3.23, 4.49, 28.12
Lets say that our objective is to make a stem plot of the total annual precipitation for
each year. To do this, we will need to isolate the year and total annual precipitation
data from the 12 monthly values that separate them in each row of the data set.
In[5]:= annualdata Tabledatai, 1,
datai, 14
, i, Lengthdata
; hence,
the statement below creates a table lled with lines that run from the x axis to the y
value of each annual precipitation value.
In[6]:= Table
Graphics
Lineannualdatai, 1, 0.
,
annualdatai, 1, annualdatai, 2
,
i, Lengthannualdata
As discussed in Chapter 1, the Show function must be used in order to display the
lines. This may seem awkward, but there are many times when it is useful to be
27
able to create a series of graphics objects without showing each on invidually and
then superimposing all of them with one Show command. The following statement
shows all of the stems and sets the option Axes to true (the default for general
graphics objects is false):
In[7]:= Show%, Axes
True,
AxesLabel
"Year", "Precipitation"
Precipitation
60
50
40
30
20
10
1960
1970
1980
1990
2000
Year
Out[7]= -Graphics-
The previous two statements could have been combined to produce and display the
stem plot in one step. The position of the data points can also be emphasized by
adding a ball to the end of each stem, which is most easily accomplished using the
Point graphics function. The statement below sets the point size to 0.02 (relative
to the size of the entire graphic) then, as with the stems, creates a table full of
points.
In[8]:= Graphics
PointSize0.02,
Table
Pointannualdatai, 1, annualdatai, 2
,
i, Lengthannualdata
The points can be added to the existing stem plot using Show, noting that the
Axes > True option is not carried through and must again be specied.
In[9]:= Show%, %%, Axes
True,
AxesLabel
"Year", "Precipitation"
28
Precipitation
60
50
40
30
20
10
1960
1970
1980
1990
2000
Year
Out[9]= -Graphics-
Mathematica automatically places the y axis at a value of x 1950 and labels the x
axis in 10 year intervals, which leaves the 1949 stem to the left of the y axis. This
can be easily changed using the AxesOrigin option.
In[10]:= Show%, AxesOrigin
1948, 0
Precipitation
60
50
40
30
20
10
1950
1960
1970
1980
1990
2000
Year
Out[10]= -Graphics-
29
Precipitation
14
12
10
8
6
4
2
1950
1960
1970
1980
1990
2000
Year
30
Out[12]= 26., 48., 335., 337., 347., 330., 77., 10., 27., 324.,
335., 330., 47., 347., 291., 326., 325., 31., 82., 46.,
75., 300., 11., 4., 342., 357., 316., 326., 37., 26.,
334., 307., 345., 336., 53., 339., 63., 341., 332., 44.,
292., 358., 33., 359., 324., 12., 358., 350., 339., 55.,
9., 290., 16, 1., 314., 281., 343., 76., 15., 51.
The List format specication was used because, lacking any information to the
contrary, Import assumes that any le with a .dat extension contains multiple
rows and columns of data. It explicitly assigns each value to its own row (meaning that each value is put inside its own set of curly brackets). Using List forces
Mathematica to import the values as a single list rather than a list of one element
lists.
2.4.2 Creating the Rose Plot
The rst step in creating a rose plot is to count the number of data points falling into
bins of a xed angular width, in this case 30
In[13]:= bincts BinCountsdata, 0, 360, 30
Out[13]= 11, 10, 5, 0, 0, 0, 0, 0, 0, 5, 10, 19
The zeroes in the middle of the list represent the lower compass quadrants, for which
no values were recorded. It will also be helpful to represent the number of bins, the
width of each bin, and the maximum radius of the bins with their own variables.
In[14]:= binlen Lengthbincts
Out[14]= 12
In[15]:= binwidth 360./binlen
Out[15]= 30.
In[16]:= maxbinrad Maxbincts
Out[16]= 19
The second step is to represent each bin as a segment of a disk with a radius
proportional to the numbers of data points in the bin. This is done using Disk. The
example below plots an angular segment of a disk that is centered at (0,0), has a
radius of bincts1, and ranges in angle from 0 to 30 . The axes are added
to illustrate that the arc does indeed have a radius of 11, and the PlotRange and
AspectRatio options are used to ensure that the height:width ratio of the plot is
not distorted.
31
In[17]:= Show
Graphics
Disk0., 0.
, bincts1, 0. , 30.
, Axes
True, PlotRange
11, 11
, 0, 11
,
AspectRatio
1/2.
10
8
6
4
-10
-5
10
Out[17]= -Graphics-
At this point, it is important to think about sign conventions for angles. Mathematica, like virtually every mathematical textbook and computer program, conventionally measures angles positive-counterclockwise from the positive x axis. In most
geoscientic applications, however, angles are conventionally measured positiveclockwise from North (which is, to add another layer of convention, usually shown
towards the top of the page). In order to plot the orientation data according to geoscientic convention, then, it will be necessary to a) rotate the data by 90 and b)
reverse the sign of each value. In Mathematica angular convention, the orientation
measurements fall within the range of 180 to 0 . The wedge representing the rst
bin is now:
In[18]:= Show
Table
Graphics
Disk0., 0.
, bincts1, 90. 30. ,
90. 0.
, i, binlen
,
AspectRatio
1/2.
32
-10
-5
10
Out[18]= -Graphics-
Now that the rst bin is plotted according to geoscientic convention, the next step
is to plot all of the bins that have non-zero values by lling a table with wedges and
then showing them.
In[19]:= Show
Graphics
Table
Disk0., 0.
, binctsi,
90. i binwidth ,
90. i 1 binwidth
,
i, binlen
,
Axes
True, PlotRange
maxbinrad, maxbinrad
,
0, maxbinrad
, AspectRatio
1/2.
17.5
15
12.5
10
7.5
5
2.5
-15
-10
-5
10
15
-Graphics-
We can also dress up the plot by adding some radii, for example in increments of 5,
by rst creating a table of graphics objects
In[20]:= Graphics
TableCircle0., 0.
, r, 0., 180.
, r, 5, 20, 5
33
and then showing them along with the previous plot. Note that PlotRange was
changed in order to show the outermost radius (the previous plot range was 19).
In[21]:= Show%%, %, PlotRange
20, 20
, 0, 20
20
17.5
15
12.5
10
7.5
5
2.5
-20
-15
-10
-5
10
15
20
Out[21]= -Graphics-
Bi-directional rose plots are often drawn with both directions shown. This can be
accomplished by adding a second table of wedges in which the reference direction
is 90 rather than 90 . The plot range and aspect ratio are changed accordingly,
and that the table of radii is incorporated into the list of graphics objects. Also note
that, because Graphics is now being supplied with a list of objects instead of a
single table as in the previous examples, the list must be enclosed in curly brackets
{}. Failure to do so will produce an error message but no plot.
In[22]:= Show
Graphics
Table
Disk0., 0.
, binctsi,
90. i binwidth ,
90. i 1 binwidth
,
i, binlen
, Table
Disk0., 0.
, binctsi,
90. i binwidth ,
90. i 1 binwidth
,
i, binlen
,
TableCircle0., 0.
, r, r, 5, 20, 5
,
Axes
True, PlotRange
20, 20
, 20, 20
,
AspectRatio
1., Ticks
None
34
Out[22]= -Graphics-
270
90
180
Out[23]= -Graphics-
35
36
Out[25]= -Graphics-
The second set of mean quartz, feldspar, and lithic percentages, also from Marsaglia
(2003), are from onshore streams and beaches. We will plot them using gray symbols to distinguish them from the offshore sand compositions, which is accomplished by adding the optional GrayLevel0.6 to the list of arguments.
In[26]:= data2 0.02, 0.23, 0.75
, 0.47, 0.39, 0.14
,
0.3, 0.54, 0.06
Out[27]= -Graphics-
37
Now, the two ternary plots can be superimposed to illustrate the compositional differences.
In[28]:= Showplot1, plot2
Q
Out[28]= -Graphics-
38
writing a general stereographic plotting routine. A structural geologist, for example, could equally correctly denote the strike and dip of a single dipping plane as
(S45 W, 45 NW), (225 , 45 NW), or (225 , 45 ). The last example is given using
the right-hand rule that is described in many structural geology textbooks, which
is convenient for computer applications because it allows input and output to be
completely numerical. Using the right-hand rule, the strike is chosen so that the
plane dips to the right when an observer is looking in the direction of the strike.
The implication of this is that the angle from the chosen strike direction to the dip
direction will always be 90 measured in a clockwise direction. Another possibility
is to describe the attitude of the plane using the plunge and azimuth of its dipline,
which is (45 , 315 ). Dipline orientations are also convenient for computer applications because, like strikes and dips specied using the right-hand rule, they do not
need non-numerical information added to eliminate ambiguities.
As illustrated in Marshak and Mitra (1998), the stereographic projection of a
plane with a dip angle of is a circular arc (sometimes referred to as a cyclographic
trace) with a radius of rplane tan tan (/4
- /2) and a center located tan from
the center of the projection, measured in a direction opposite to that of the dipline
azimuth. The stereographic projection of a line plunging at an angle is a point
located at radius rpoint tan (/4
- /2) from the center of the circle, measured in
the direction of the azimuth of the point. Both of these formulae assume that the
stereographic plot has a maximum radius of 1.
The CompGeosci.m package that accompanies this book contains two
functions to plot stereographic projections of lines and planes. The function
ListStereoArcPlotdata, arcshade, arcdash, opts constructs a
stereographic plot from a list of strikes and dips. ListStereoArcPlot requires
that strikes and dips be specied using the right-hand rule, with the strike listed rst
and the dip listed second (as in the example below). The arguments arcshade and
arcdash specify the gray level and dashing of the great circle traces, with default
values of black and no dashing. The last argument, opts, allows the user to specify
the plot range and aspect ratio. As with ListTernaryPlot, Mathematica begins
elminating optional arguments from right to left if the number of optional argument
specied is less than the total number of options.
2.6.1 Stereographic Projections of Planes
The data set below consists of the strikes and dips of 14 joints measured at an outcrop of basalt during an engineering geologic mapping project. All of the measurements are given in degrees; therefore, they must be converted to radians before being
plotted. The easiest way to do this is with the Degree constant built into Mathematica.
In[29]:= data 342., 75.
, 148., 50.
, 290., 80.
,
15., 62.
, 333., 65.
, 15., 75.
, 31., 65.
,
319., 66.
, 312., 67.
, 349., 89.9
, 359., 89.9
,
105., 85.
, 323., 82.
, 350., 89.9
39
Notice that several of the dip angles are listed as 89.9 . This is because the plotting
routines must calculate the tangent of the dip angle, and the tangent of 90 is:
In[30]:= Tan90
Out[30]= ComplexInfinity
Reducing the 90 dip angles by an imperceptible amount alleviates the complex innity result and allows the arcs to be plotted. Used as input for
ListStereoArcPlot, which is included in the CompGeosci.m package, they
produce the following stereographic projection:
In[31]:= lineplot ListStereoArcPlotdata, GrayLevel0.3,
Dashing0.005
0
270
90
180
Out[31]= -Graphics-
40
270
90
180
Out[33]= -Graphics-
270
90
180
Out[34]= -Graphics-
41
42
In[35]:= ListEqualAreaPointPlotdiplinedata,
GrayLevel0, 0.03
0
270
90
180
Out[35]= -Graphics-
The difference between the stereographic (equal angle) and equal area projections of the diplines can be illustrated by using Show to superimpose the two plots.
The lled circles are the equal area projections and the open circles are the stereographic projections.
In[36]:= Show%, pointplot
0
270
90
180
Out[36]= -Graphics-
43
44
270
90
180
Out[38]= -Graphics-
The next step is to determine the radius of the counting circles using the formula
presented above. In this example, there are
In[39]:= Lengthdata
Out[39]= 50
Out[40]= 0.220354
The counting circles are located on a grid with centers spaced r units apart in the
xand y directions. The following statement illustrates the grid of counting circles by
plotting 1) a table of black disks representing the points in the counting circle grid,
2) a table of gray counting circles with radii of 0.220354, and 3) a heavy black circle
representing the boundary of an equal area plot with a radius of 1. The variable
is
used to ensure that the distribution of counting circles is symmetric about the center
of the equal area plot. Floorx returns the largest integer that is less than or
equal to x.
45
In[41]:= r Floor1/r
Show
Graphics
TableDiskx, y
, 0.025, x, , , r
,
y, , , r
,
GrayLevel0.4,
TableCirclex, y
, 0.220354, x, , , r
,
y, , , r
,
GrayLevel0, Thickness0.01,
Circle0., 0.
, 1.
, AspectRatio
1.
Out[41]= -Graphics-
Some of the counting circles intersect the edge of the equal area plot and four
fall completely outside of the plot. Mathematica performs contouring on rectangular areas, so it is not possible to simply discard the circles lying completely outside
of the equal area plot. Instead, they will be hidden by placing a circular mask over a
square contour plot. The counting circles that straddle the equal area net boundary
pose a more difcult problem because the number of points falling within the circle
must be adjusted to compensate for the fact that only part of the counting circle
is within the equal area plot. The Kamb contouring routine in the CompGeosci.m
package accomplishes this by calculating the fraction of the counting circle that falls
within the equal area plot boundary and then dividing the number of points in the
circle by that fraction. For example, if 1/3 of a particular counting circle falls within
46
the equal area plot then the number of points is multiplied by 3. Once a grid of values is generated, a polynomial surface passing exactly through all of the points is
obtained using Mathematicas ListInterpolation function and the result is
contoured using ContourPlot using a 50 by 50 grid of interpolated values. The
use of an interpolated surface produces smoother contours than would be obtained
by using ListContourPlot to contour the results at their original grid spacing
of r. Finally, a mask is placed over the contour plot to hide the points falling outside
the equal area plot boundary. The function ListKambPlot is fairly long and includes two supporting functions, so it is not listed here. The functions, can, however
be inspected by opening the CompGeosci.m package as a Mathematica notebook
or with a text editor. As illustrated below, ListKambPlot takes as its arguments
a data set consisting of (plunge, azimuth) pairs and a contour interval scaling factor. All contours are plotted in multiples of the standard deviation of the binomial
distribution used to determine the counting circle area.
In[42]:= contourplot ListKambPlotdata, 1.
N 50
7.62712
2.54237
CI 1.
0
270
90
180
Out[42]= -Graphics-
As usual, Show can be used to combine the point and contour plots to see how well
the contours agree with any visible clusters.
47
270
90
180
Out[43]= -Graphics-
48
In[45]:= data2 3.79, 2.63, 3.36, 4.9, 3.22, 2.17, 4.91, 2.68,
3.55, 4.09, 5.09, 4.33, 3.73, 2.7, 3.11, 2.57, 3.7,
5.03, 3.3, 3.46, 4.61, 3.71, 4.55, 3.79, 3.09
Next, we will need a way to calculate the cumulative statistics (sometimes referred
to as percentiles or quantiles). ListBoxWhiskerPlot requires ve values for
each data set: its minimum; its 25th, 50th, and 75th percentiles; and its maximum.
The nth percentile of a data set is the value to which n percent of the data are less or
equal. The following routine (which is not in the CompGeosci.m package) takes a
list of data and returns the ve values.
In[46]:= Percentilesindata_
Modulelen, minval, maxval, pct25, pct50, pct75, data
,
len Lengthindata
data Sortindata
minval Mindata
pct25 dataRoundlen/4.
pct50 dataRoundlen/2.
pct75 dataRound3. len/4.
maxval Maxdata
Returnminval, pct25, pct50, pct75, maxval
For example, the minimum; 25th, 50th, and 75th percentiles; and maximum of
data1 are:
In[47]:= Percentilesdata1
Out[47]= 3.08, 3.85, 4.2, 4.5, 5.19
The function ListBoxWhiskerPlot takes as its arguments a list of data sets and
an optional scaling parameter that controls the width of the boxes. The default value
of the scaling parameter is 0.1.
In[48]:= ListBoxWhiskerPlot
Percentilesdata1, Percentilesdata2
,
0.2, FrameLabel
"data set", "percentiles"
49
percentiles
4.5
3.5
2.5
2
data set
Out[48]= -Graphics-
The minimum and maximum values of each distribution are marked by the horizontal lines at the end of each whisker, whereas the 25th, 50th, and 75th percentiles are
indicated by the bottom, middle, and top horizontal lines in the boxes.
Computer Note: Modify ListBoxWhiskerPlot so that it also plots the
mean value of each data set as a dashed line. Make sure that you make a copy of
the function before attempting to modify it.
Computer Note: Mathematica 5.0 includes the function BoxWhiskerPlot
in the standard package Statistics StatisticsPlots, which can be loaded using
either Needs or the << operator. See the Mathematica documentation for more
information about loading packages.
50
physical log data from an exploratory well drilled to evaluate groundwater resources
in clastic sediments comprising the aquifer system beneath Albuquerque, New Mexico. The data were originally supplied by the geophysical logging contractor in a
single text le that contained two parts: a header containing information about the
well and the logging procedure and a data section containing rows and colums of
data. The data section of the le originally contained more than 6300 rows of data
collected at 0.5 foot (15 cm) intervals. Although Mathematica can easily import
and plot les of this size, almost all of the detail is lost when the complete log is
displayed at the scale of a computer monitor or book page. Therefore, we will use
an abbreviated data set in order to examine the geophysical signatures that distinguish sandy aquifers from clayey aquitards (or, in petroleum industry applications,
potential reservoirs from shaley seals or source beds).
The rst step in importing the geophysical log data, which has already been
accomplished, is to remove the header information and store it in a seperate le
(Charles.dat, which is included on the CD accompanying this book). Then, import
the remaining data section using Import.
In[49]:= data Import
"/Users/bill/Mathematica_Book/Charles.dat"
You will have to change the le path as appropriate if you are following this example
on your own computer. The rst line in the data le is a list of 17 column names:
In[50]:= data1
Out[50]= DEPT, DT, CGR, NPHI, POTA, SGR, THOR,
URAN, GR, ILD, ILM, SFLU, SP, CALI, DRHO, PEF, RHOB
From left to right, the column names are DEPT depth in feet (1 foot 0.30 m);
DT sonic travel time in microseconds/foot; CGR gamma ray computed from
uranium, thorium, and postassium concentrations, in American Petroleum Institute
(API) gamma ray units; NPHI neutron porosity; POTA potassium concentration
in ppm; SGR spectroscopic gamma ray in API units; THOR thorium concentration in ppm; URAN uranium concentration in ppm; GR standard gamma ray
in API units; ILD and ILM deep and medium induction resisitivity in ohm-m;
SFLU spherically-focussed resistivity in ohm-m; SP spontaneous potential in
millivolts; CALI caliper (borehole diameter) in inches (1 inch 2.54 cm); DRHO
density porosity; PEF photoelectric factor in barns/electron; and RHOB bulk
density correction in g/cm3 . The numerical data start in the second row.
In[51]:= data2
Out[51]= 1944., 137.625, 60.2598, 0.39575, 0.02481,
73.7363, 6.56494, 1.94691, 73.8125, 16.6419, 16.2414,
13.3604, 78.5, 9.17969, 0.01209, 2.21875, 2.10848
During the subsequent steps it will be convenient to have a variable containing the
length of the data le.
51
To plot the log with the independent variable (depth) as the vertical axis, create a
table in which depth is the second of the two variables. Below are tables corresponding to the column names listed above. Notice that the depth has been converted from
feet to meters and multiplied by 1 so that depth increases downward on the resulting plots.
In[53]:= sflu Tabledatai, 12, 0.3 datai, 1
,
i, 2, npts
In[54]:= ild Tabledatai, 10, 0.3 datai, 1
,
i, 2, npts
In[55]:= sp Tabledatai, 13, 0.3 datai, 1
,
i, 2, npts
In[56]:= nphi Tabledatai, 4, 0.3 datai, 1
,
i, 2, npts
m b
m f
The example log was obtained from a clastic aquifer system rich in arkosic and volcaniclastic sediments, so reasonable values might be matrix 2.7 g/cm2 and water
1.0 g/cm3 . Now that the density porosity function has been dened, it can be used
to create a table of porosity values from the bulk density data.
In[58]:= dphi TableDensityPorositydatai, 17, 2.7, 1.,
0.3 datai, 1
, i, 2, npts
The next series of statements draws but, in order to conserve space, does
not display plots of the SP, resistivity, and porosity tables that we have just
generated. After all of the plots are drawn, they are shown side-by-side by
changing DisplayFunction
Identity to DisplayFunction
$DisplayFunction.
In[59]:= sfluplot ListPlotsflu, AspectRatio
4,
PlotJoined
True, Frame
True,
FrameTicks
25, 50, 75, 100
, Automatic,
,
,
PlotRange
All, 580, 615
, Axes
None,
FrameLabel
"ohm m", "Depth"
,
DisplayFunction
Identity
Out[59]= -Graphics-
52
,
PlotRange
All, 580, 615
, Axes
None,
PlotStyle
Dashing0.02
,
FrameLabel
"ohm m", "Depth"
,
DisplayFunction
Identity
Out[60]= -GraphicsIn[61]:= resistivityplot Showsfluplot, ildplot,
DisplayFunction
Identity
Out[61]= -GraphicsIn[62]:= spplot ListPlotsp, AspectRatio
4,
PlotJoined
True, Frame
True,
FrameTicks
Automatic, Automatic
, Axes
None,
PlotRange
90, 66
, 580, 615
,
FrameLabel
"mV", "Depth"
,
DisplayFunction
Identity
Out[62]= -GraphicsIn[63]:= dphiplot ListPlotdphi, PlotJoined
True,
AspectRatio
4, Frame
True, Axes
None,
PlotRange
0.25, 0.55
, 580, 615
,
FrameTicks
0.3, 0.4, 0.5
, Automatic,
,
,
FrameLabel
"porosity", "Depth"
,
DisplayFunction
Identity
Out[63]= -GraphicsIn[64]:= nphiplot ListPlotnphi, PlotJoined
True,
AspectRatio
4, Frame
True,
FrameTicks
0.3, 0.4, 0.5
, Automatic,
,
,
Axes
None, PlotStyle
Dashing0.02
,
PlotRange
0.25, 0.55
, 580, 615
,
FrameLabel
"porosity", "Depth"
,
DisplayFunction
Identity
Out[64]= -GraphicsIn[65]:= phiplot Showdphiplot, nphiplot,
DisplayFunction
Identity
Out[65]= -Graphics-
Here are all ve geophysical log curves superimposed on three different sets of axes.
On the resistivity plot (center), the solid line is the SFLU shallow resistitivity curve
whereas the dashed ilne is the ILD deep resistivity curve. On the porosity plot, the
solid line is density porosity and the dashed line is neutron porosity.
53
-585
-585
-585
-590
-590
-590
-595
-595
-595
Depth
Depth
-580
Depth
-580
-600
-600
-600
-605
-605
-605
-610
-610
-610
-85-80-75-70
mV
25 50 75 100
ohmm
Out[66]= -GraphicsArray-
Several geophysical attributes can be used to identify sandy zones that have the
potential to be productive aquifers or petroleum reservoirs (e.g., Asquith and Gibson, 1982). First, low spontaneous potential (SP) values. In this example, the values
range from 90 mV to 70 mV and, without knowing anything more about the
specics of log responses to various sediment types in this basin, 80 mV seems
like a good rst approximation of a cutoff value. Second, zones in which the spherically focussed resistivity (SFLU: solid resistivity curve) is noticably greater than the
deep induction resistivity (ILD: dashed resistivity curve) indicate permeable beds in
which the high resistivity drilling mud has been able to displace the low resistivity groundwater. Third, zones in which the neutron porosity log (which responds
to either pore water or bound water in clays) shows a higher value than the density porosity log will tend to be clayey intervals that do not make good aquifers
or petroleum reservoirs. These three criteria can be combined into a single logical
statement that identies zones with good aquifer or reservoir potential. First, create
a table of zeroes.
In[67]:= aquifer Table0, spi, 2
, i, npts 2
54
Then, replace 0 with 1 for each depth at which all three of the criteria are satised.
In[68]:= DoIfspi, 1 80. && sflui, 1 > ildi, 1 &&
nphii, 1 dphii, 1 , aquiferi, 1 1,
i, npts 2
Now create, but do not display, a plot of the potential aquifer quality.
In[69]:= aquiferplot ListPlotaquifer, AspectRatio
4,
Frame
True, PlotJoined
True,
FrameTicks
0, 1
, Automatic,
,
,
Axes
None, PlotRange
0.2, 1.2
, 580, 615
,
FrameLabel
"aquifer", "Depth"
,
DisplayFunction
Identity
Out[69]= -Graphics-
Finally, show the aquifer quality plot next to the three geophysical log plots for comparison. As in the previous set of plots, the solid resistivity curve is SFLU shallow
resistitivity whereas the dashed ilne is ILD deep resistivity. On the porosity plot, the
solid line is density porosity and the dashed line is neutron porosity. The suite of
logs suggests that the best potential aquifer or reservoir is the sandy unit from 592
to 597 m.
In[70]:= ShowGraphicsArrayspplot, resistivityplot, phiplot,
aquiferplot
, DisplayFunction
$DisplayFunction
-585
-585
-585
-585
-590
-590
-590
-590
-595
-595
-595
-595
Depth
-580
Depth
-580
Depth
-580
Depth
-580
-600
-600
-600
-600
-605
-605
-605
-605
-610
-610
-610
-610
5-70
-85
5-80
0-75
mV
25 50 75100
ohmm
Out[70]= -GraphicsArray-
1
aquifer
55
Computer Note: The CompGeosci package will load correctly only if it is located in one of the directories in Mathematicas standard le path. Execute the
statement $Path to see a list of the default paths on your computer and place
the le CompGeosci.m in one of those directories. The specic le paths may
differ from one operating system to another. See Chapter 1 for more information
about installing the CompGeosci package.
58
that returned by simplify), but it can also be extremely slow. Therefore, it is usually
best to start with Simpli fy and use FullSimpli fy only if the former does
perform well enough. The original polynomial can be recovered using Expand
In[4]:= Expand%
Out[4]= 1 4 x 4 x2
In this simple case, both functions return the same result. In general, however, the
will produce different results. TrigReduce returns a polynomial containing no
powers or products, as shown below
In[8]:= TrigReduceCos2 Cos3
Out[8]=
1
2 3 Cos 2 Cos2 Cos3
4
2
2
2
Out[9]= 2 Cos Cos Sin Cos Sin
2
2
2
2
2
The two functions TrigToExp and ExpToTrig allow for easy transformation
between trigonometric and exponential expressions
In[10]:= TrigToExpSin
1
1
2
2
In[11]:= ExpToTrig%
Out[10]=
Out[11]= Sin
Another group of functions is designed to isolate specic parts of symbolic expressions. These include Part, Exponent, Coe f ficient, Numerator, and
Denominator. Partexpression, n gives the nth part of an expression. For example, the second term in the polynomial 4x2 4 x 1 is
59
In[12]:= Part4x2 4 x 1, 2
Out[12]= 4 x
Notice that Coe f ficient does not return the coefcients of all powers of x, just
x1 . The coefcients of terms with different powers of x can be isolated by explicitly
specifying them as the argument. Exponentexpression, x returns the highest
power of x in an expression.
In[14]:= Exponent4x2 4 x 1, x
Out[14]= 2
a f
b g
bfag
bg
3x 7xy
x
Out[20]= 3 7 y
60
Individual terms can be replaced using ReplaceAll or its shorthand equivalent /. and a replacement rule. For example,
In[21]:= ReplaceAll3 x, x
4
Out[21]= 7
replaces x with 4and evaluates the expression. (The replacement rule arrow
is
automatically formed by typing a dash - and greater than > sign in succession on an
input line.) The same replacement could have been accomplished using
In[22]:= 3 x/. x
4
Out[22]= 7
Because an equal sign was used, the value 3 x was permanently assigned to y. It
will remain so unless it is erased using Cleary. The following line evaluates
y using x 4 one time only
In[24]:= y /. x
4
Out[24]= 7
To permanently change the value of y using a replacement rule, use an equal sign
In[26]:= y y /. x
4
Out[26]= 7
or, if you do not mind permanently changing the value of x, type x 4 and the
value of y will automatically be updated. We will want to use y again, and will clear
its value so as not to cause problems later.
In[27]:= Cleary
Replacement rules can also be used to match patterns. Consider this unwieldy
expression:
In[28]:= expr Tan Tan2 Tan3 Tan Tan2 Tan3
Out[28]= Tan Tan2 Tan3 Tan Tan2 Tan3
It is well known, as illustrated below, that tan for small angles ( << 1 radian).
61
In[29]:= PlotTan,
, , 0, 1
, PlotStyle
Dashing0.
,
Dashing0.02
, AspectRatio
1.5,
AxesLabel
"", " "
, PlotLegend
"tan ", ""
,
LegendPosition
0.4, 0.4
, LegendSize
0.6, 0.3
From In[29]:=
1.5
1.25
tan
0.75
0.5
0.25
Out[29]= -Graphics-
The two curves are virtually identical for values of 0 0.3 radians. Thus, if
and are small the expression can be approximated to rst order with very little
error. First, replace the tangents with the angles
In[30]:= expr /. Tan__
Out[30]= 2 3 2 3
In the previous expression, x__ was used to denote any value of x (in this case,
either or ) and the superscripted n__ to denote any power (in this case, either 2
or 3). Thus, a single replacement rule was used to eliminate all of the squares and
cubes of both and . The two replacement rules could also have been combined
into a list to perform the entire simplication in one step. (Changing the order of
replacement has no effect on the result.)
In[32]:= expr /. Tan__
, x__n__
0
Out[32]=
62
ab
cd
hj
kl
Matrices of the same rank can be added or subtracted just like any other variable.
Using //MatrixForm puts the result into a traditional matrix form.
In[35]:= m1 m2 //MatrixForm
ah bj
Out[35]=
ck dl
whereas using //TraditionalForm will return the results in the typeset form
common to published papers and books.
In[36]:= m1 m2 //TraditionalForm
ah bj
Out[36]=
ck dl
The TableForm rule will put the results into rows and columns without any surrounding parentheses.
In[37]:= m1 m2 //TableForm
Out[37]=
ah bj
ck dl
Matrix and vector multiplication is accomplished using a dot product. For example,
In[38]:= m1.m2 // MatrixForm
ahbk ajbl
Out[38]=
chdk cjdl
is equivalent to
In[39]:= Dotm1, m2 // MatrixForm
ahbk ajbl
Out[39]=
chdk cjdl
Dot works just as well with with combinations of matrices and vectors.
63
x
In[40]:= m1. //MatrixForm
y
axby
Out[40]=
cxdy
Mathematica distinguishes variables that are vectors from those that are not. If the
variables are not two vectors, the and operators represent standard scalar multiplication. Matrix division is not a traditionally dened operation. If one variable is
divided by another and both are matrices, Mathematica will return a component-bycomponent quotient.
In[42]:= m1/m2 // MatrixForm
a b
h j
Out[42]=
c
d
k l
Likewises, multiplying two matrices without the dot results in component-bycomponent products. The same result can be produced by taking the cross product
of two matrices
In[43]:= m1 m2 // MatrixForm
ah bj
Out[43]=
ck dl
the determinant
In[45]:= Detm1
Out[45]= b c a d
the transpose
In[46]:= Transposem1 //MatrixForm
ac
Out[46]=
bd
64
b c a d b c a d
For equations that are cast in matrix form, LinearSolve can be used to obtain a
solution. For example, the equations 3x 7y 8 and x 4y 7 can be written in
matrix form as:
In[48]:= M
37
14
13
17
,
5
5
The same result can be obtained by explicitly taking the inverse of M and premultiplying it with Y.
In[51]:= InverseM.Y
Out[51]=
13
17
,
5
5
As will be illustrated in the Geoscience Applications section, numerical or symbolic eigenvalues and eigenvectors can also be calculated using Eigenvalues,
Eigenvectors, or Eigensystem. The Mathematica documentation contains
information on additional functions such for matrix decomposition and linear programming.
65
In[52]:= Solve3 x2 4 x 7 y, x
1
1
Out[52]= x 2 17 3 y
, x 2 17 3 y
3
3
Mathematica returns solutions as lists of replacement rules, and is capable of obtaining an exact solution to any polynomial equation in one variable in which the
highest power is 5. To isolate the rst of the two solutions, for example to use in a
calculation, simply use the replacement rule to assign the value to a variable.
In[53]:= x /. %1
1
2 17 3 y
Out[53]=
3
or, equivalently,
In[54]:= x /. First%%
1
2 17 3 y
Out[54]=
3
Another way to obtain the result is to use Roots, which determines the roots of
polynomial equations and returns them as a logical combination of equations rather
than a list of replacement rules. Using the same example equation as above, Roots
returns
In[55]:= Roots3 x2 4 x 7 y, x
1
1
Out[55]= x 2 17 3 y
x 2 17 3 y
3
3
in which is the logical "or" operator. A simple rule to follow is to use Solve
if you want results as a list of replacement rules and Roots if you want them as
a list of logical expressions. Solve and Roots can also be used on systems of
equations. For example,
In[56]:= eq1 4 x 7 y
eq2 2 x 3 y
Out[56]= 7 4 x y
Out[56]= 3 2 x y
In[57]:= Solveeq1, eq2
, x, y
Out[57]= x 2, y 1
3
3
66
In[59]:= Solve3 x2 4 x 7 y, x
1
1
Out[59]= x 2 17 3 y
, x 2 17 3 y
3
3
To illustrate the difference between Solve and Reduce, consider a different equation:
In[60]:= Solvea x 0, x
Out[60]= x 0
Because Solve nds generic solutions for the specied variable x, it does not consider the possibility that a 0. It nds the general solution that will satisfy the
equation for any value of a. Reduce, however, does consider the special case and
returns all possible solutions, including a 0.
In[61]:= Reducea x 0, x
Out[61]= a 0x 0
in which && is the logical "and" operator. Reduce realizes, for example, that the
quadratic equation can be satised if a, b, and c are zero whereas Solve does not.
It also realizes that the generic solutions are invalid if a 0.
3.4.2 NSolve and FindRoots
The function NSolve returns numerical rather than symbolic solutions. A symbolic
solution to the polynomial 3x2 4 x 7 9 is
In[64]:= Solve3 x2 4 x 7 9, x
1
1
Out[64]= x 2 10
, x 2 10
3
3
67
The numerical solution gives the same result as taking the numerical value of the
symbolic solution, but with one less step, which can be illustrated by nding the
numerical value of the symbolic solution
In[66]:= N%%
Out[66]= x 0.387426, x 1.72076
There may also be cases in which Solve or Reduce cannot obtain a solution,
for example to the equation sinx
n x.
In[67]:= SolveSinx x/10, x
x
, x
Out[67]= Solve Sinx
10
A approximate numerical solution for a local region of the function, however, can
be estimated using FindRoot if an initial guess is specied. Using an initial guess
of 2, we nd that
In[68]:= FindRootSinx x/10., x, 2
Out[68]= x 2.85234
shows that the local numerical solution is correct. As shown in the plot below, however, the equations y sin x and y xx/10 actually have seven local roots (i.e., the
points where the two functions intersect each other).
In[71]:= PlotSinx, 0.1 x
, x, 15, 15
From In[71]:=
1.5
1
0.5
-15
-10
-5
5
-0.5
-1
-1.5
Out[71]= -Graphics-
10
15
68
Computer Note: Assume that you cannot plot the two functions to determine
the number of roots. Write a short Mathematica statement to estimate the number of roots to sin x xx/10 that occur within the range 15 x 15.
3.4.3 Geoscience Examples
Three Point Problems in Structural Geology and Hydrogeology
Three point problems arise in both structural geology and hydrogeology. In structural geology, the objective is to calculate the strike and dip (or dipline azimuth and
dip angle) of a planar feature such as a dipping bed or fault from the known elevation of the feature at three points. In hydrogeology, the objective is to calculate the
magnitude and direction of the hydraulic gradient from the known elevation of the
potentiometric surface at three points. Mathematically, however, the two kinds of
three point problem are virtually identical.
Although the solution of three point problems is traditionally taught using scaled
drawings, it is also possible to solve the problems algebraically.The key is to dene
a plane containing all three of the points,which has the equation z a b x c y. To
nd a general solution to the three point problem, start by dening the form of the
plane in which all three points lie. Assume that the x direction is East and the y
direction is North, so the locations of the points can be given in UTM coordinates,
state plane coordinates (or similar systems outside the United States), or any other
orthogonal coordinate system.
In[72]:= plane a b x c y
Out[72]= a b x c y
Next, determine the slope of the plane in the two coordinate directions, which is
done by differentiating with respect to x and y.
In[73]:= xslope x plane
Out[73]= b
In[74]:= yslope y plane
Out[74]= c
Because the slopes are vectors (they have both magnitude and direction), the maximum slope can be found using the Pythagorean theorem.
In[75]:= maxslope
Out[75]= b2 c2
x plane2 y plane2
69
The strike of a dipping plane is an ambiguous quantity, because it can be specied in either of two diametrically opposed directions. A strike of 80 is identical to
a strike of 80 180 260 . Dipline azimuths, however, are unambiguous quantities and are therefore easier to calculate than strike directions. The dipline azimuth
is dened as the compass direction of the maximum downward inclination of the
plane, which is identical to the compass direction of a vector normal to the plane
(also known as the aspect of the plane). The dipline azimuth is also the direction of
the maximum hydraulic gradient in hydrogeologic problems. Slope and aspect will
come up again in Chapter 7, where they will be used to help visualize and analyze
digital elevation models representing Earths surface. The azimuth of the dipline is
given by
In[77]:= azimuth ArcTan yslope, xslope/
ArcTanc, b
Out[77]=
The strike of a dipping plane is found by adding or subtracting 90 to or from the azimuth. Notice that the syntax of the ArcTan function used above is different than
that used to calculate the dip angle. Most computer languages, including Mathematica, make provisions for two kinds of arctangent functions. The rst kind requires only one argument and returns a value in the range of /2
to
/2 radians
(90 to 90 ), which is acceptable for dip angle values that can fall only between 0
and
/2 radians (0 to 90 ). The second kind takes two signed arguments and returns
a value that can range from to radians (180 to 180 ), which is also the range
of possible aspect or dipline azimuth values. The Mathematica documentation also
states that ArcTanx,y returns the value of arctan y/x
/ , but this angle is measured
from the x axis. To calculate the dipline azimuth as measured from the y axis,
which corresponds to North, the x and y values are swapped.
Consider the example of a dipping bed that is intersected by three boreholes.
The x, y, z coordinates of the borehole-bed intersections are (1000 m, 1000 m, 100
m), (2000 m, 5000 m, 850 m), and (4000 m, 3000 m, 400 m). It is a common
mistake to mix units of measurement in three point problems, for example by using
x and y coordinates given in kilometers and a z coordinate given in meters. Another
common mistake is to use the depths measured in boreholes rather than elevations
for the z coordinate. Always check your input to ensure that the units are consistent
and correct. Using the coordinates above, we can write three equations describing
the plane:
In[78]:= eq1 100. a 1000. b 1000. c
eq2 850. a 2000. b 5000. c
eq3 400. a 4000. b 3000. c
70
or
In[83]:= %% 90
Out[83]= 81.2538
Computer Note: Write a Mathematica function that accepts the x, y, and z coordinates of three co-planar points and calculates the slope and aspect of the plane.
How would you specialize the function to calculate strike and dip or hydraulic
gradient magnitude and direction?
xx xy
yx yy
71
In the example above, refers to the entire tensor whereas the indexed values (e.g.,
xx) refer to the components of the tensor. The rst of the two indices refers to
the coordinate direction in which that component of stress is acting and the second
refers to the orientation of the imaginary surface on which it is acting. Thus, xx is
the component of the stress tensor acting in the x-direction and on a surface normal
to the x-axis. This is often referred to as the in-on notation. Each of the components
of the stress tensor is expressed in terms of a force per unit area, which has units
identical to those of pressure (e.g., Pascals in SI units).
Computer Note: Although it may seem logical to use Mathematicas typesetting capabilities to more elegantly express the components of the stress tensor
using true subscripted indices, for example
xx xy
yx yy
this will not work. If this were entered as input, Mathematica would repeatedly
assume that each of the components of the tensor is the tensor itself with
a subscript, and return an error message when its recursion limit of 256
iterations is exceeded. You can try this to see the results if youd like, but
rst familiarize yourself with the Abort Evaluation option under the Kernel
menu. The easiest way to deal with symbolic subscripts is, therefore, to append
them onto the end of the variable name as shown above. Integer indices can be
used by dening the tensor using Mathematicas Array function, for example
Array, 2, 2
. If a tensor is dened in this manner, then one would
refer to the second colum of the rst row as 1, 2. Although it is a matter
of personal preference, it seems easier to type xy than 1, 2.
The forces acting in the x- and y-directions, respectively, on a plane oriented at
an angle can be converted to stresses (by dividing each force by the area upon
which it acts; see Middleton and Wilcock, 1994, p. 144 for a detailed explanation)
and written as two components of a stress vector. They are:
In[85]:= x xx Cos xy Sin
Out[85]= xx Cos xy Sin
and
In[86]:= y yx Cos yy Sin
Out[86]= yx Cos yy Sin
in which x and y are the components of the stress vector acting in the x- and
y-directions, and is the angle between the positive x-axis and the outward-directed
normal to the plane. These two equations are the two dimensional form of Cauchys
72
formula, which can be used to calculate the stresses acting on a plane of any orientation. They can also be written in matrix form as
Cos
x
In[87]:= .
Sin
y
Out[87]= True
Mathematica returns a value of True because the two sides of the equation are
separated by a logical operator and are algebraically identical to each other. To determine the normal and shear stresses acting on a plane oriented at angle , the
stress vectors must be rotated so that they are acting parallel and perpendicular to
the plane. This is accomplished using a two dimensional rotation matrix that we
will call R
In[88]:= R
Cos Sin
Sin Cos
The rotation matrix can be entered by hand, as shown above. The standard Mathematica package Geometry`Rotations` also includes a rotation matrix function
In[89]:= RotationMatrix2D // MatrixForm
Cos Sin
Out[89]=
Sin Cos
The normal and shear stresses on a plane with arbitrary orientation are the stress
vectors premultiplied by the rotation matrix, or
Cos
// MatrixForm
In[90]:= SimplifyR..
Sin
xx Cos2 xy yx Cos Sin yy Sin2
Out[90]=
yx Cos2 xx yy Cos Sin xy Sin2
73
which is the same result used in structural geology, geomechanics, and geophysics
textbooks. Mathematica makes the general assumption that the product of two matrices is another matrix in which each row is a list, even in the special case where
one of the matrices (and hence the result) is a vector. Therefore, the result above
is in the general matrix form of a list of lists. Each of its two components can be
extracted and assigned to variable names for later use.
In[93]:= normstress %1, 1
Out[93]= xx Cos2 2 xy Cos Sin yy Sin2
In[94]:= shearstress %%2, 1
Out[94]= xy Cos2 xx yy Cos Sin xy Sin2
To illustrate the use of the results, consider a medium in which the state of stress
is given by xx 250 MPa, yy 100 MPa, and xy 75 MPa. What are the normal
and shear stresses acting on a plane with an outward directed normal vector that is
rotated 18 with respect to the positive x-axis?
In[95]:= normstress /. xx
250., yy
100., xy
75.,
18.
Out[95]= 279.76
Out[96]= 16.5924
Another way to visualize the relationship among the components of the stress
tensor and the angle is to calculate the values of normal and shear stresses as
varies from 0 to 360 . The table below does that using the state of stress from the
example above and angular increments of 2 .
In[97]:= pts
Tablenormstress, shearstress
/.
xx
250., yy
100., xy
75.
,
, 0, 360. , 2.
The pairs of normal and shear stress values can now be plotted to graphically illustrate their relationship, producing a Mohr diagram that will be familiar to students
of structural geology, soil and rock mechanics, and geophysics.
In[98]:= ListPlotpts, AspectRatio
240/300.,
PlotRange
0, 300
, 120, 120
,
PlotJoined
True, AxesLabel
"Normal", "Shear"
74
From In[98]:=
Shear
100
50
50
100
150
200
250
Normal
300
0
-50
-100
Out[98]= -Graphics-
The two intersections of the Mohr circle with the normal stress axis, for which
xy 0, are the two principal stresses (approximately 69 and 281 MPa). The example state of stress of xx 250 MPa, yy 100 MPa, and xy 75 MPa can be
illustrated by plotting the xx and xy or yy and yx
xy values as a point on the
Mohr diagram.
In[99]:= Show%,
GraphicsPointSize0.02, Point250., 75.
,
Point100., 75.
,
Line100., 75.
, 250., 75.
From In[99]:=
Shear
100
50
50
100
150
-50
100
Out[99]= -Graphics-
200
250
Normal
300
0
75
Although both the sine and cosine functions in the expressions we have derived
can vary over a range of 360 , in physical space the inclination of a plane can vary
over only 180 . A plane said to be inclined 200 relative to a horizontal datum,
for example, is the same as a plane inclined 200 180 20 . Therefore, angles
on a Mohr diagram must be twice as large as the angles in physical space that the
represent (which is why Mohr diagrams are traditionally constructed using the angle
2). The angle from the x-axis (the direction in which xx acts) to the maximum
principal stress, taking into account the double angle relationship between the Mohr
diagram and physical space, can be found using some simple trigonometry:
In[100]:=
xy
1
/.
ArcTan
2
xx xxyy
2.
xx
250., yy
100., xy
75.
Out[100]= 22.5
The angle from the y-axis (not the x-axis!) to the minimum principal stress direction
is, similarly,
In[101]:=
1
xy
/.
ArcTan
xxyy
2
yy 2.
xx
250., yy
100., xy
75.
Out[101]= 22.5
Therefore, the direction from the x-axis to the minimum principal stress is 90
22.5 112.5. Both of these results can be conrmed by using a protractor to
measure the angles on the Mohr diagram and dividing the results by 2.
The magnitudes and orientations of the two principal stresses can also be found
algebraically. One way to accomplish this is to differentiate the expression for the
normal stress acting on a plane of arbitrary orientation with respect to and set the
result equal to zero, in order to nd the orientation at which the normal stress is
maximized or minimized.
In[102]:= normstress
Out[102]= 2 xy Cos2 2 xx Cos Sin
2 yy Cos Sin 2 xy Sin2
It is almost always a good idea to use Simpli fy to see if the result can be put
into a simpler form. In this case, Simpli fy converts the angles to double angles
because one of its default options is Trig
True, which allows the simplication
algorithm to perform trigonometric as well as algebraic manipulations.
In[103]:= Simplify%
Out[103]= 2 xy Cos2 xx yy Sin2
At this point, it is easy to look at the last equation and see that dividing it through
by cos 2 and rearranging would put the equation into the particularly simple textbook form tan 2
xy /(
yy xx ). Unfortunately, Mathematicas simplication
76
algorithm does not see any advantage to that form even though it is probably what
most humans would prefer. FullSimplify returns the same result. Manually
dividing through by cos 2, however, will accomplish the job.
In[104]:= Simplify%/ Cos2
Out[104]= 2 xy xx yy Tan2
Computer Note: The previous step would not have worked if the derivative had
already been set to zero, because Mathematica would treat the entire equation
as a single object divided by cos 2. The result would have been
2xyCos2 xx yySin2 0Sec2
rather than the desired 2xy xx yyTan2. Try each of the two
possibilities yourself.
Finally, the result can be set equal to zero and solved for tan 2
In[105]:= Solve% 0, Tan2
2 xy
Out[105]= Tan2
xx yy
or
In[106]:= Solve%% 0,
2 xy
1
Out[106]= ArcTan
2
xx yy
which is the same result as we obtained from the Mohr diagram. We will use this
result later, so it will be convenient to assign it a name.
In[107]:= maxangle /. %1
Out[107]=
1
2 xy
ArcTan
2
xx yy
The tangent function is periodic and repeats itself every 180 , which can be shown
by equating the tangents of angles separated by 180 and applying Simpli fy.
In[108]:= SimplifyTan Tan 180
Out[108]= True
77
Using the same state of stress as in the previous example, the orientation of the
maximum normal stress is thus (in degrees)
In[110]:= maxangle/ /. xx
250., yy
100., xy
75.
Out[110]= 22.5
The magnitude of the normal stress acting in this direction is (in MPa)
In[111]:= normstress /. xx
250., yy
100., xy
75.,
22.5
Out[111]= 281.066
which agrees with the maximum principal stress estimated from the Mohr circle.
To check the calculations, we can also calculate the shear stress in this direction.
According to the denition of the principal stresses, it should be zero.
In[112]:= shearstress /. xx
250., yy
100., xy
75.,
22.5
The minimum principal stress is oriented 90 from the maximum principal stress,
and its value is (in MPa)
In[114]:= normstress /. xx
250., yy
100., xy
75.,
22.5 90
Out[114]= 68.934
which also agrees very well with the estimated Mohr circle value. Its orientation is
In[115]:= maxangle/ 90 /. xx
250., yy
100., xy
75.
Out[115]= 112.5
The magnitudes of the two principal stresses can also be calculated as the eigenvalues of the stress tensor. Middleton and Wilcock (1994, p. 148-151) provide a
very clear explanation of the relationship between the eigenvalues of the stress tensor and principal stresses. We will simply calculate the values to illustrate that they
are identical to those calculated using a Mohr diagram or the algebraic method or by
algebraically nding the directions of maximum and minimum normal stress. The
general forms of the two eigenvalues of the two dimensional stress tensor are
In[116]:= Eigenvalues /. yx > xy // MatrixForm
1
xx
yy
xx2 4 xy2 2 xx yy yy2
2
Out[116]=
1
xx yy xx2 4 xy2 2 xx yy yy2
78
Substituting values for the state of stress used in the previous examples, the two
principal stresses are calcuated to be (in MPa)
In[117]:= % /. xx
250., yy
100., xy
75.
// MatrixForm
Out[117]=
68.934
281.066
The orientations of the principal stresses can be found from the eigenvectors of the
stress tensor. Their symbolic form is
In[118]:= Eigenvectors/. yx > xy // MatrixForm
xx yy xx2 4 xy2 2 xx yy yy2
1
2
xy
Out[118]=
2
2
2
xx
4
xy
2
xx
yy
yy
xx
yy
1
2 xy
The eigenvectors are the the axes of the stress ellipsoid; therefore, the angles of
the two axes can be found by taking the four-quadrants arctangents of the x and y
components of each axis. Refer to the Mathematica documentation for a discussion
of the differences between two- and four-quadrant arctangent functions. Substituting
the state of stress from the previous examples, the orientations of the two principal
stresses are thus (in degrees)
In[119]:= ArcTan%1, 1, %1, 2/ /. xx
250., yy
100., xy
75.
Out[119]= 112.5
and
In[120]:= ArcTan%%2, 1, %%2, 2/ /.
xx
250., yy
100., xy
75.
Out[120]= 22.5
Finally, we will use the variable R again in another problem and need to clear its
value to avoid problems.
In[121]:= ClearR
y
6x
3
6x
79
c
x
6/2x
3
2
6/2x
x c, y
1 3 x
2 c 3 x
2
1
c 3 x
2
1
c
2
1
1 2 y0
2
Finally, substitute the new-found value for the constant into the general solution.
In[126]:= PS GS /. %1
Out[126]=
1 1 3 x
1 2 y0
2 2
Is the solution correct? To nd out, substitute the particular solution back into the
differential equation and simplify the expression.
In[127]:= Simplify2 x PS 6 PS 3
Out[127]= True
80
to work, the general solution is recast in terms of the specic values of the problem
at hand, some algebra and calculus are done, and, assuming no mistakes have been
made, a solution is eventually obtained.
3.5.2 Solutions Using DSolve and NDSolve
The traditional approach to solving differential equations of interest to geoscientists
rarely involves the development of fundamentally new solutions. Instead, it is simply the application of well-known general rules to specic cases. Mathematica is
capable of following the same kinds of rules to obtain solutions using the functions
DSolve and NDSolve. The example equation introduced in the previous section,
for example, can be solved without reference to tables or handbooks using DSolve.
In[128]:= DSolve2 x yx 6 yx 3, yx, x
Out[128]= yx
1
3 x C1
2
1 3 x
1 3 x 2 y0
2
Does this solution agree with the one that we found by manually solving the equation?
In[130]:= Expandyx /. %1 ExpandPS
Out[130]= True
81
rule containing the solution. Interpolating functions will be discussed in more detail
in Chapter 6.
In[132]:= Plotyx /. %1, x, 0, 5
, PlotRange
All,
AxesLabel
"x", "y"
From In[132]:=
y
2
1.8
1.6
1.4
1.2
1
0.8
0.6
Out[132]= -Graphics-
where g is gravitational acceleration (9.81 m/s2 ). The resisting force arising from
the Mohr-Coulomb shear strength of dry granular soil (which is typical of slopes
along the Rio Grande gorge) is
82
in which the angle of internal friction, , is a standard soil property reecting the
frictional or non-cohesive component of soil shear strength. Sliding requires a net
imbalance of forces, and the net shear force acting parallel to the slope is the difference between the downslope component of weight and the resisting force.
In[135]:= %% %
Out[135]= g m Sin g m Cos Tan
A force is by denition the product of mass and acceleration, so the net force can
also be written as the product of mass and an average slope parallel acceleration yet
to be determined.
In[136]:= aslope m %
Out[136]= aslope m g m Sin g m Cos Tan
Computer Note: The previous line combines the solution and simplication of
an equation with a replacement rule and variable name assignment. In particular,
the expression
Simpli fySolve%, aslope1
tells Mathematica to solve an equation obtained in the previous step, simplify
it, and then take the rst part of the resulting list of replacement rules. Take a
minute or two to carefully read through and understand the combination, then
repeat the steps one at a time to reproduce the result on your own computer.
Now that the acceleration has been determined, velocity and distance at time t can
be found by integrating the acceleration.
In[138]:= velocity aslopet
Out[138]= aslope t
In[139]:= distance velocityt
Out[139]=
aslope t2
2
Both of the integrals assume that velocity and distance equal zero when t 0.
Finally, the kinetic energy of the sliding block is given by
In[140]:= energy
Out[140]=
83
1
m velocity2
2
1
aslope2 m t2
2
Both positive and negative roots are returned because the result is found by taking a
square root. We will keep only the positive result and assign it to the variable name
totaltime
In[142]:= totaltime t /. %2
10 10
Out[142]=
aslope
Using values known or inferred for this particular rockslide, the total time of sliding
appears to have been (in seconds)
In[143]:= totaltime /.
35. ,
37. , g
9.81
10 10
Out[143]=
aslope
The velocity and kinetic energy of the boulder when it struck the road can now be
calculated from the total time to be (in m/s and N-m, respectively)
In[144]:= velocity/.
35. ,
37. , g
9.81, t
49.
Nearby portions of the same highway are protected by energy absorbing rockfall
nets. Could similar nets be used to protect the highway from falling or sliding blocks
of this size? The capactity of this type of net is on the order of 5 105 N-m, so the
answer is that the this kind of net would not have stopped the boulder if it had
been placed just up-slope from the road Haneberg and Bauer (1993) also calculated
84
results for frictionless sliding boulders and rolling boulders, and Wieczorek et al.
(2000) analyzed a rockfall in which a large slab of rock became airborne and followed a ballistic trajectory. All of these situations are variations on the same simple
mechanical problem.
Computer Note: Although a single net placed just above the highway may not
be capable of stopping a boulder of the size and velocity described above, it
might be possible to use a network of nets placed at intervals upslope from the
highway. Using the same values as the example above, calculate 1) the maximum distance that a 2.7 105 kg boulder can travel before its kinetic energy
exceeds the capacity of a rockfall net and 2) the largest boulders (in terms of
mass) that could be stopped by nets placed at 100 m intervals up the slope.
In[146]:=
H298
T1
H
CpT
298
Using DSolve with a specied boundary condition yields the same results in just
one step.
In[148]:= DSolveT HT Cp, H298 H298
, HT, T
Out[148]= HT 298 Cp H298 Cp T
85
Out[149]= a
c
T2
c
bT
T2
The values of a, b, and c were cleared to ensure that they are not set to the values used in the three point problem discussed previously in this chapter. As above,
DSolve is used to nd a solution for HT
In[150]:= Simplify
DSolveT HT CpT, H298 H298
, HT, T
Out[150]= HT
c
c 1
H298 a 298 T b 88804 T2
298
T 2
The value of 88804 is 2982 . It will be convenient to have this result available for
future calculations, so we will use the replacement rule to assign the result to the
variable HT.
In[151]:= HT HT /. %1
Out[151]=
c
c 1
H298 a 298 T b 88804 T2
298
T 2
For purely cosmetic purposes, this result can be rearranged by collecting terms with
the same coefcients.
In[152]:= CollectHT, a, b, c
Out[152]= H298 c
1
1
1
a 298 T b 88804 T2
298 T
2
You may be wondering why the terms in the rearranged equation are not listed
alphabetically. The reason is that the equation is a polynomial in the variable T,
T
and Mathematica lists the terms by starting with the constant and then following in
ascending powers of T.
T
As an example of a thermodynamic calculation using the enthalpy equation,
consider the formation of jadeite (NaAlSi2 O6 ) and quartz (SiO2 ) from albite
(NaAlSi3 O8 ) at 900 K and 1 bar (100 kPa) as described in Wood and Fraser (1977).
The relevant thermodynamic data can be entered in a table containing both text and
numerical values. The rst element of each row is the mineral name, the second is
the enthalpy at 298 K (in cal/mol), and the third through fth are the coefcients
a, b, and c for that mineral. We will perform the calculations using the original
units (calories and bars) in order to avoid converting all of the coefcients, and then
convert the nal result into metric units of kJ/mol.
In[153]:= ThermoData
"albite", 937146., 61.7, 13.9 10 3 , 15.01 105
,
"jadeite", 719871., 48.16, 11.42 10 3 , 11.87 105
,
"quartz", 217650., 11.22, 8.2 10 3 , 2.7 105
86
These data can be displayed in an orderly fashion using the TableForm function,
which allows row and column headings to be added. TableForm can, like N and
MatrixForm, be appended to an expression using // (two slashes) if options such
as TableHeadings are not used. The table heading None species that no row
headings are to be shown, because the mineral names are in this case included as
the rst element of each row. TableForm can be wrapped with the related function PaddedForm to more precisely control the appearance of the table. See the
Mathematica documentation for more details. Although it is not necessary to do so,
inclusion of the mineral names as an element of each row ensures that each set of
thermodynamic data is associated with the name of the corresponding mineral rather
than a supercial table heading.
In[154]:= TableForm
ThermoData,
TableHeadings
None,"Mineral", H298 ,"a","b","c"
,
TableSpacing
1, 1
Mineral
albite
Out[154]=
jadeite
quartz
H298
937146.
719871.
217650.
a
61.7
48.16
11.22
b
0.0139
0.01142
0.0082
c
1.501 106
1.187 106
270000.
The change in enthalpy in any reaction is the sum of the product enthalpies minus the
sum of the reactant enthalpies, or in this case
H Hjadeite Hquartz Halbite . The rst
step in calculating the enthalpy change for the reaction is to calculate the individual
enthalpies at 800 K, which is done by substituting values from ThermoData into
HT.
In[157]:= Halb800
HT /. T
800., H298
ThermoData1, 2,
a
ThermoData1, 3, b
ThermoData1, 4,
c
ThermoData1, 5
Out[157]= 905502.
87
In[158]:= Hjad800
HT /. T
800., H298
ThermoData2, 2,
a
ThermoData2, 3, b
ThermoData2, 4,
c
ThermoData2, 5
Out[158]= 695047.
In[159]:= Hqtz800
HT /. T
800., H298
ThermoData3, 2,
a
ThermoData3, 3, b
ThermoData3, 4,
c
ThermoData3, 5
Out[159]= 210326.
The change in enthalpy associated with the decomposition of albite into jadeite and
quartz at 800 K and 100 kPa is then found by adding all of the product enthalpies
and subtracting all of the reactant enthalpies. The result is (in units of cal/mol)
In[160]:= Hjad800 Hqtz800 Halb800
Out[160]= 129.432
or, in kJ/mol,
In[161]:= 4.18 %/1000.
Out[161]= 0.541024
Population Growth
The growth of populations, for example those studied in the fossil record and modern ecosystems of interest to many geoscientists, is often described in terms of two
end members: exponential population growth and logistic population growth
(Haberman, 1998). Populations that experience exponential growth tend to consist
of opportunistic generalists that are able to adapt to unstable environments, for example pioneer species colonizing an intially empty habitat. Those that experience
logistic population growth tend to consist of specialists that thrive in stable environments. Exponentially growing populations are further characterized by a constant
reproductive rate that is not controlled by population density. Logistically growing
populations, in contrast, are characterized by growth rates that decrease as the population density increases, so that the population levels off at a size known as the
carrying capacity of the environment.
Exponential population growth is described by the ordinary differential equation
dP/dt rP, where P is the population size, t is time, and r is the population growth
rate with units of reciprocal time (for example, years1 ). It is common to use the
variable N to represent population size in population growth models; however, this
would conict with the Mathematica N function and we will use P instead. The
exponential population growth model can easily solved using DSolve with the
initial condition specied at P P0 at t 0.
88
The results can be visualized by plotting EP with several different values of r and
an initial population of 2. Because exponential population growth occurs so rapidly,
we will restrict the plot to 20 time units.
In[164]:= PlotEP /. P0
2., r
0.1
, EP /. P0
2., r
0.15
,
EP /. P0
2., r
0.2
, t, 0, 20
,
PlotStyle
Dashing0.
, Dashing0.05
,
Dashing0.01
,
AxesLabel
"t", "P"
,
PlotLegend
"r 0.10", "r 0.15", "r 0.20"
,
LegendPosition
0.75, 0.
, LegendSize
0.8, 0.5
From In[164]:=
P
50
r 0.10
r 0.15
40
r 0.20
30
20
10
5
10
15
20
Out[164]= -Graphics-
Logistic population growth is described by the slightly more complicated differential equation dP/dt rP (1 P/K
/K), where K is the carrying capacity of the
ecosystem. Although the growth rate, r, is shown as a constant the effective growth
rate is given by the term r (1 P/K
/K), meaning that population growth will cease
when P K. This can be demonstrated by using a substitution rule to evaluate the
effective growth rate term.
In[165]:= r 1 P/K /. P
K
Out[165]= 0
As above, the quickest way to solve the equation is to use DSolve and specify an
initial condition.
89
r t K P0
K P0 r t P0
r t K P0
K P0 r t P0
and plot results for the same set of r values as were used in the exponential population growth example and a carrying capacity of 100 organisms. Because logistic
population growth is self-regulating, however, we will plot the results over a larger
range of 0 to 100 time units to examine what happens as the population approaches
the carrying capacity.
In[168]:= PlotLP /. P0
2., r
0.1, K
100.
,
LP /. P0
2., r
0.15, K
100.
,
LP /. P0
2., r
0.2, K
100.
, t, 0, 100
,
PlotStyle
Dashing0.
, Dashing0.05
,
Dashing0.01
,
AxesLabel
"t", "P"
,
PlotLegend
"r 0.10", "r 0.15", "r 0.20"
,
LegendPosition
0., 0.4
, LegendSize
0.8, 0.5
From In[168]:=
P
100
80
60
r 0.10
40
r 0.15
20
r 0.20
20
40
60
80
100
Out[168]= -Graphics-
The logistic population growth curves are similar to the exponential growth curves
for the rst 20 to 40 time units, depending on the value of r. Beyond that, the growth
curves atten and converge on the specied carrying capacity of 100 organisms.
The equilibrium population is that for which dP/dt 0. Referring to the logistic
growth equation, dP/dt rP (1P/K
/K), the equilibrium population(s) must therefore
be given by the roots of rP (1P/K
/K) 0. This can be done using either Solve or
the related function Roots (see the Mathematica documentation for a discussion of
similarities and differences between the two).
90
Thus, a logistic population will either approach the carrying capacity of the ecosystem (P K
K) or become extinct (P 0). This can be illustrated graphically by plotting the right-hand side of the logistic growth equation (dP/dt) as a function of P,
producing a phase plot. As above, let K 100 and r 0.15.
In[170]:= Plotr P 1 P/K /.K
100., r
0.15
, P, 0, 110
,
AxesLabel
"P", "dP/dt"
From In[170]:=
dPdt
3
2
1
20
40
60
80
100
-1
Out[170]= -Graphics-
The two equilibrium populations are the points at which the dP/dt curve intersects
the P axis (P 0 and P K).
K P 0 represents an unstable state of equilibrium
because dP/dt is positive and populations with P > 0 can move only away from
that state of equilibrium. A population of P K
K, however, represents a stable state
of equilibrium. For values of P < K
K, dP/dt is positive and the population will grow
until it achieves the equilibrium state of P K. For values of P > K, though, dP/dt
is negative and the population will shrink until it reaches P K.
The nature of logistic growth can also be visualized by superimposing plots
showing growth curves for different values of P0 for xed values of r and K. The
statement below generates a table lled with plots (with DisplayFunction
Identity so the plots are not shown), then shows all of the plots together on the
same set of axes (using DisplayFunction
$DisplayFunction).
In[171]:= Show
Table
PlotLP /. r
0.15, K
100.
, t, 0, 100
,
DisplayFunction
Identity, P0, 2, 202, 10
, DisplayFunction
$DisplayFunction,
PlotRange
0, 200
, AxesLabel
"t", "P"
91
From In[171]:=
P
200
175
150
125
100
75
50
25
20
40
60
80
100
Out[171]= -Graphics-
By plotting the population for different values of P0 , you can show that values of P0
> K always lead to a population decrease and values of P0 < K alway lead to a population decrease. Likewise, the effect of changing r values while holding P0 constant
can be visualized by copying the previous statement and switching variables.
In[172]:= Show
Table
PlotLP /. P0
20., K
100.
, t, 0, 100
,
DisplayFunction
Identity, r, 0.2, 0.2, 0.01
, DisplayFunction
$DisplayFunction,
PlotRange
0, 100
, AxesLabel
"t", "P"
From In[172]:=
P
100
80
60
40
20
20
40
Out[172]= -Graphics-
60
80
100
92
93
p x4
C1 x C2 x2 C3 x3 C4
24 R
There is some exibility in the way that derivatives can be expressed in Mathematica. One way, shown above, is to use the traditional-looking partial derivative symbol with subscripts. The forms x,x,x,x wx, Dwx, x, 4
, and
Dwx, x, x, x, x
are all equivalent. The latter two date from early versions of Mathematica that did not have advanced typesetting capabilities and required all input and output to be in standard text format.
Four boundary conditions must be specied in order to obtain a particular solution. The rst two boundary conditions will specify that there is no deection at
either end of a beam of length L, or w 0 at x L /2.
In[174]:= bc1 w L/2 0
bc2 wL/2 0
Out[174]= w
L
0
2
L
Out[174]= w 0
2
The second pair of boundary conditions specify that the plate is horizontal at each
end, representing undeformed horizontal strata. This is accomplished by setting the
slope of the plate to zero.
In[175]:= bc3 x wx 0 /. x
L/2
bc4 x wx 0 /. x
L/2
Out[175]= w
L
0
2
L
Out[175]= w 0
2
Notice that the values of x in the second set of boundary conditions was specied
differently than those in the rst set. This has to do with the way that Mathematica
treats derivatives. When the derivatives are specied using the format above or using
the notation Dwx, x, the function must be supplied as the generic w(x) before
the specic value of x is inserted. Otherwise, Mathematica will assume that w is a
function of L /2 rather than x and, when the derivative with respect to x is evaluated,
the result will be zero. If this result forms part of an equation, as above, the result
will be
In[176]:= x wL/2 0
Out[176]= True
94
In[177]:= w L/2 0
w L/2 0
Out[177]= w
L
0
2
L
Out[177]= w 0
2
Now that the four boundary conditions have been specied, DSolve can be used to
obtain the particular solution.
In[178]:= Simplify
DSolvex,4
wx p/R, bc1, bc2, bc3, bc4
, wx, x
p L2 4 x2
384 R
2
Out[178]= wx
p L2 4 x2
384 R
Computer Note: Turcotte and Schubert (2002) give the solution to this problem
as (taking into account a difference in the sign of w and using the same variables
as this example)
w
p
x2 L2 L4
x4
24R
2
16
Use Mathematica to determine whether the two solutions are equal. One way to
do this symbolically is to Expand both solutions and then equate them using
the operator. It is necessary to expand each solution because Mathematica
does not recognize a statement of the form a (b c) a b b c as being true
because the forms are different. Another way to determine whether the two solutions are identical is to divide one by the other to see if the quotient is 1 or subtract one from the other to see if the result is 0 (in each case using Simpli fy
if necessary.
The result can be put into a particularly simple form that is a function of only x/
x/L
if it is multiplied by R and divided by pL4 . This means, of course, that this is no
longer an expression for w. Instead, it is an expression for the dimensionless or
normalized deection wR/(pL
( 4)
95
1
x4
x2
2
384 48 L
24 L4
Out[181]=
1
X2
X4
384 48 24
The implication of the dimensionless result is that, although the magnitude of the
deection will depend on p, R, and L4 the general shape of the laccolith will not. Its
general shape will be:
In[182]:= Plotdeflect, X, 1/2, 1/2
, AxesLabel
"x/L",
From In[182]:=
wR
pL4
0.0025
0.002
0.0015
0.001
0.0005
-0.4
-0.2
0.2
0.4
xL
Out[182]= -Graphics-
L4 p
384 R
p 128 x2 16 L2 4 x2
384 R
128 x2 16 L2 4 x2
384 L2
wR
pL4
96
1
X2
24
2
MR
pL2
From In[187]:=
MR
2
pL
0.08
0.06
0.04
0.02
-0.4
-0.2
0.2
0.4
xL
-0.02
-0.04
Out[187]= -Graphics-
The bending moment is related to the curvature and ber strain developed in the
plate (Johnson, 1970; Turcotte and Schubert, 2002). The ber strain, in particular,
is the strain developed in imaginary horizontal bers located at different distances
from the center of the plate as it is bent. For small deections, the ber strain is
y d 2 w/dx2 , where y is the distance measured perpendicular to the thickness of
the plate. Therefore, there will be tension ( > 0) along the upper edge of the plate
(y > 0) where the bending moment is negative and along the lower edge of the plate
(y < 0) where the bending moment is positive. The opposite holds true for compression. The plane dening the center of the plate, y 0, is known as the neutral surface because there is neither tension nor compression at y 0 in a thin elastic plate.
The locations of the largest ber strains (at the crest and two edges of the laccolith)
are likely to be the locations where joints or dilational fractures form during bending, which is the basis for the curvature mapping techniques employed by structural
geologists exploring for productive areas in fractured aquifers or petroleum reservoirs (Fischer and Wilkerson, 2000; Steward and Wynn, 2000). Chapter 7 includes
a discussion of curvature mapping using gridded subsurface data.
Groundwater Flow Across Faults
Faults can act either as barriers to or conduits for the ow of groundwater,
petroleum, and ore-bearing uids (Haneberg et al., 1999). As such, it can be useful
to have a simple model to make inferences about the hydraulic properties of faults
97
from eld data such as hydraulic head measurements from observation wells. This
example describes, following the method developed in Haneberg (1995), how steady
state groundwater ow across faults can be simulated by simultaneously solving two
or three differential equations describing horizontal ow in two aquifers separated
by a vertical fault.
Horizontal steady state groundwater ow through a homogeneous and isotropic
aquifer with no sinks or sources is described by the differential equation d 2 h/ dx2 0.
The hydraulic head, h, is the energy per unit weight of the groundwater, which ows
down-gradient from areas in which hydraulic head is high to those in which it is low.
This equation has a general solution of the form
In[188]:= DSolvex,x hx 0, hx, x
Out[188]= hx C1 x C2
Haneberg (1995) showed how to incorporate recharge or discharge along the fault
into the solutions, and Bense et al. (2003) used a variation of this method to account for a fault with recharge between two aquifers of differing transmissivity. In
order to simulate groundwater ow across two aquifers separated by a fault of nite
width, we will write three equations of this form (one for the fault and two for the
aquifers) and then solve them simultaneously to ensure that the hydraulic head and
ow match at each of the fault-aquifer boundaries. If the width of the fault is zero
and it has no hydraulic properties unto itself, then its only effect will be to juxtapose
two aquifers of different permeability. In that case, the fault does not have to be
explicitly considered and only two equations need be written (one for each aquifer).
The general solutions are for the left aquifer (L), fault (F), and right aquifer (R) are
(using semi-colons to suppress the output):
In[189]:= hL c1 c2 x
hF c3 c4 x
hR c5 c6 x
Out[189]= c1 c2 x
Out[189]= c3 c4 x
Out[189]= c5 c6 x
In this example, the fault straddles the coordinate system origin and extends over
w x w. The aquifer to the left of the fault extends over L x w and the
aquifer to the right of the fault extends over w x L. This geometry is illustrated
below. Most of the graphics commands are self-explanatory, and are given as a list
enclosed by curly brackets. A series of replacement rules is used to specify options
about the axes and ticks after the closing Graphics square brace but just inside
the closing Show square brace.
98
In[190]:= Show
Graphics
Thickness0.007,
Line 1., 0
, 1., 1
,
Line1., 0
, 1., 1
, GrayLevel0.75,
Rectangle 0.1, 0
, 0.1, 1
, GrayLevel0.,
Text"left aquifer", 0.5, 0.5
, 0, 0
,
Text"right aquifer", 0.5, 0.5
, 0, 0
,
Text"fault", 0., 0.9
, 0, 0
, None
From In[190]:=
fault
left aquifer
L
right aquifer
w
L
Out[190]= -Graphics-
The next step is to specify six boundary conditions that will allow the six constants to be determined. This can be done in different ways, one of which is illustrated below. We will start by specifying that the hydraulic head is
h at x L and
h at x L
. This gives rise to the following two boundary conditions:
In[191]:= bc1 hL h /. x
L
bc2 hR h /. x
L
Out[191]= c1 c2 L h
Out[191]= c5 c6 L h
The next two boundary conditions apply to the contacts between the fault and the
aquifers, where the solutions for hydraulic head will be required to match each other.
That is to say, the head in the aquifer must equal the head in the fault along the
contact between the two.
In[192]:= bc3 hL hF /. x
w
bc4 hR hF /. x
w
99
Out[192]= c1 c2 w c3 c4 w
Out[192]= c5 c6 w c3 c4 w
The nal two boundary conditions relate to the discharge of groundwater across the
fault-aquifer contacts. For one-dimensional horizontal ow, the discharge is given
by a variation of Darcys law: Q T dh/dx. Q is the discharge, with units of
length3 /time, and T is the aquifer or fault transmissivity, with units of length2 /time.
The negative sign is included because groundwater ows down gradient but the discharge must be positive. Transmissivity is the product of the hydraulic conductivity
(length/time) and thickness (length) of the aquifer or fault. In the absence of any
sources or sinks along the contact, we will require that the volume of water owing
out of one unit be exactly equal to the volume owing into the adjacent unit. Thus,
In[193]:= bc5 TL x hL TF x hF /. x
w
bc6 TF x hF TR x hR /. x
w
Out[193]= c2 TL c4 TF
Out[193]= c4 TF c6 TR
Now that all six boundary conditions have been specied, they can be solved to nd
algebraic expressions for the six constants
In[194]:= constants
SimplifySolvebc1, bc2, bc3, bc4, bc5, bc6
,
c1, c2, c3, c4, c5, c6
TF TL TR L w h
,
L TF TL TR 2 TL TR TF TL TR w
2 L TF TR
h,
c1 1
L TF TL TR 2 TL TR TF TL TR w
2 L TF TL h
c5 h
,
L TF TL TR 2 TL TR TF TL TR w
2 TF TR h
c2
,
L TF TL TR 2 TL TR TF TL TR w
2 TL TR h
c4
,
L TF TL TR 2 TL TR TF TL TR w
2 TF TL h
c6
L TF TL TR 2 TL TR TF TL TR w
Out[194]= c3
Particular solutions for the hydraulic head in the aquifers and the fault can be found
by substituting the constants into the general solutions for head.
In[195]:= hL SimplifyhL /. constants
L TF TL TR 2 TL TR w TF TL w TR w 2 TR x h
Out[195]=
L TF TL TR 2 TL TR TF TL TR w
In[196]:= hF SimplifyhF /. constants
L TF TL TR TF TL TR w 2 TL TR x h
Out[196]=
L TF TL TR 2 TL TR TF TL TR w
100
The Wolfram Research web site shows how this solution can also be obtained using
DSolve (http://library.wolfram.com/examples/faultow/)
To illustrate an application of these solutions, consider an example in which both
of the aquifer transmissivities are 0.01 m2 /s, the fault transmissivity is 0.001 m2 /s,
the head decreases a total of 10 m across a 1 km wide problem domain, and the fault
is inferred to be 1 m wide. All of these site-specic values can be put into a list of
replacement rules that we will call, for lack of a better name, sitevals
In[198]:= sitevals TL
0.01, TR
0.01, TF
0.0001,
L
500., h
5., w
1.
Each of the three solutions must be plotted separately over its range of validity.
One way to accomplish this is to create three plots with DisplayFunction
Identity and then combine them using Show with DisplayFunction
$DisplayFunction
In[199]:= pL PlothL /. sitevals, x, 500., 1
,
PlotStyle
Thickness0.007,
DisplayFunction
Identity
pF PlothF /. sitevals, x, 1., 1.
,
PlotStyle
Thickness0.007,
DisplayFunction
Identity
pR PlothR /. sitevals, x, 1., 500.
,
PlotStyle
Thickness0.007,
DisplayFunction
Identity
ShowpL, pF, pR, DisplayFunction
$DisplayFunction,
Frame
True,
FrameLabel
"Horizontal Distance m", "Head m"
From In[199]:=
4
m
2
0
-2
-4
-400
-200
0
200
Horizontal Distance
m
Out[199]= -Graphics-
400
101
If the fault is equally as transmissive, or even more so, than the aquifers, it will
have no observable effect on the hydraulic gradient. This can be demonstrated by
changing sitevals so that TF is an order of magnitude larger than TL and TR,
and then plotting a new set of head proles.
In[200]:= sitevals TL
0.01, TR
0.01, TF
0.1, L
500.,
h
5., w
1.
In[201]:= pL PlothL /. sitevals, x, 500., 1
,
PlotStyle
Thickness0.007,
DisplayFunction
Identity
pF PlothF /. sitevals, x, 1., 1.
,
PlotStyle
Thickness0.007,
DisplayFunction
Identity
pR PlothR /. sitevals, x, 1., 500.
,
PlotStyle
Thickness0.007,
DisplayFunction
Identity
ShowpL, pF, pR, DisplayFunction
$DisplayFunction,
Frame
True,
FrameLabel
"Horizontal Distance m", "Head m"
From In[201]:=
4
m
2
0
-2
-4
-400
-200
0
200
Horizontal Distance
m
400
Out[201]= -Graphics-
What happens if one of the aquifers is more transmissive than the other, for example
if the fault juxtaposes highly permeable sands and gravels against lower permeability bedrock? The transmissivity of the aquifers can be changed in sitevals
In[202]:= sitevals TL
0.01, TR
0.005, TF
0.0001,
L
500., h
5., w
1.
102
From In[203]:=
4
m
2
0
-2
-4
-400
-200
0
200
Horizontal Distance
m
400
Out[203]= -Graphics-
Look at the plot carefully and you will see that the ratio of any two of the three
transmissivities is the reciprocal of the ratio of the corresponding hydraulic gradients. This kind of irregular stair-step pattern of head changes across faults has
been observed in the Albuquerque basin aquifer system, New Mexico, where normal
faults bounding the rift basin juxtapose Cenozoic aquifers consisting of poorly lithied sediments against less transmissive Paleozoic bedrock (Titus, 1963; Haneberg,
1995; Reiter, 1999). Bense et al. (2003) described and analyzed similar patterns of
head changes across large and small faults in the Roer rift of northern Europe.
103
104
therefore nonlinear (Andrews and Hanks, 1985; Roering et al., 1999). This example
is limited to the linear case.
For the simple case of a vertical fault displacing a at horizontal surface, the
problem has a well-known analytical solution (Hanks et al., 1984) that can be written as a Mathematica function convenient for plotting.
In[204]:= zx_, t_, K_, z0_
#
x
z0 !
!
$
!
$
$
1 Erf
!
$
!
$
2
2.
K
t
"
%
The implementation of DSolve in Mathematica 5.0 and earlier cannot obtain symbolic solutions to diffusion equations, so we will use the published solution. In
this solution, z0 is the initial height of the scarp. The validity of the solution can
be demonstrated by differentiating the function as appropriate, expanding the expressions, and equating them.
In[205]:= Expandt zx, t, K, z0 ExpandK x,x zx, t, K, z0
Out[205]= True
Estimates of K from different areas suggest that a common value is on the order
of 104 m2 /yr, so we will use a value of 1 104 m2 /yr in this example and plot
topographic proles for different times. The nature of diffusion problems such as
this one is that the rate of change is inversely proportional to the square root of time,
so the time increments used below increase as the square of the elapsed time.
In[206]:= Plotzx, 2, 0.0001, 1, zx, 4, 0.0001, 1,
zx, 8, 0.0001, 1, zx, 16, 0.0001, 1,
zx, 32, 0.0001, 1, zx, 64, 0.0001, 1,
zx, 128, 0.0001, 1, zx, 256, 0.0001, 1,
zx, 512, 0.0001, 1, zx, 1024, 0.0001, 1,
zx, 2048, 0.0001, 1, zx, 4096, 0.0001, 1
,
x, 2, 2
, AspectRatio
1/4, AxesLabel
"x", "z"
,
AxesOrigin
2, 0
From In[206]:=
z
1
0.8
0.6
0.4
0.2
-1
Out[206]= -Graphics-
105
fault scarp, this can be accomplished using the Mathematica UnitStep function
(also known as a Heaviside step function). The plot below shows the UnitStep
representation of a fault scarp with a height of 1 m.
In[207]:= PlotUnitStepx, x, 2, 2
, AspectRatio
1/4,
PlotStyle
Thickness0.007, AxesLabel
"x","z"
,
AxesOrigin
2, 0
From In[207]:=
z
1
0.8
0.6
0.4
0.2
-1
Out[207]= -Graphics-
Two spatial boundary conditions must also be specied, so we will hold the elevation constant at some nite distance from the fault (z 0 at x 2 m and z 1 at
x 2 m). The analytical solution is for an innite space, but numerical solutions
are limited to nite problem domains. Next, dene the equation to be solved using
a value of K 0.001 m2 /yr.
In[208]:= lineareqn t zx, t 0.0001 x,x zx, t
Out[208]= z0,1 x, t 0.0001 z2,0 x, t
106
This numerical solution can be compared by plotting it on the same set of axes as
the analytical solution. For t 500 years, the two curves are:
In[212]:= Plotz2x, 500
, x, 2, 2
,
PlotStyle
Dashing0.
, Dashing0.01
,
AspectRatio
1/4, AxesLabel
"x", "z"
,
AxesOrigin
2, 0
From In[212]:=
z
1
0.8
0.6
0.4
0.2
-1
Out[212]= -Graphics-
Computer Note: Numerical solutions can contain artifacts related to the way
in which the problem was formulated and the method chosen for its solution.
The implementation of NDSolve used in Mathematica 4.2 and earlier produces oscillations, known as Gibbs oscillations, for small values of t in this
example. The oscillations occur because the innitely steep fault scarp is approximated by a Fourier series of sine waves, and very short wavelength components must be used to approximate the vertical step. The implementation of
NDSolve in Mathematica 5.0, however, eliminates the oscillations. If you are
using Mathematica 4.2 or earlier, using the option Di f ferenceOrder
12
in NDSolve will greatly reduce, but not completely eliminate, the Gibbs oscillations. They die off rapidly and do not affect the solution for t > 50 years.
107
Out[213]= -Graphics-
Computer Note: Use Table to generate a series of fault scarp proles for
different times, then animate them. This can be done by selecting all of the
plots and choosing Animate Selected Graphics from the Cell menu. Consult the
Mathematica documentation for your particular front end to learn more about
animating graphics.
One of the advantages of numerical solutions is that they can be easily adapted
to boundary conditions more complicated than a simple vertical fault. For example,
consider a listric normal fault along which the hangingwall was been rotated as it
slipped downward.
In[214]:= topography
Interpolation 2., 0.1
, 1.5, 0.08
, 1, 0.05
,
0.6, 0.02
, 0.3, 0.1
, 0.3, 1.
, 2., 1.
,
InterpolationOrder
1
Out[214]= InterpolatingFunction2., 2., <>
108
From In[215]:=
1
0.8
0.6
0.4
0.2
x
-1
Out[215]= -Graphics-
Now that the topography is represented by a function, it can be used to specify the
initial and boundary conditions.
In[216]:= ic zx, 0 topographyx
bc1 z 2, t topography 2.
bc2 z2, t topography2.
Out[216]= zx, 0 InterpolatingFunction2., 2., <>x
Out[216]= z2, t 0.1
Out[216]= z2, t 1.
The diffusion equation is solved in the same way as before, again using K
104 m2 / yr.:
In[217]:= NDSolvelineareqn , ic, bc1, bc2
, z, x, 2, 2
,
t, 0, 5000
Out[217]= z InterpolatingFunction2., 2.,
0., 5000., <>
In[218]:= z3 z/. %1
Out[218]= InterpolatingFunction2., 2., 0., 5000., <>
In[219]:= Plotz3x, 2, z3x, 4, z3x, 8, z3x, 16, z3x, 32,
z3x, 64, z3x, 128, z3x, 256, z3x, 512,
z3x, 1024, z3x, 2048, z3x, 4096
,
x, 2, 2
, AspectRatio
1/4, AxesLabel
"x", "z"
,
AxesOrigin
2, 0
From In[219]:=
1
0.8
0.6
0.4
0.2
-1
Out[219]= -Graphics-
109
The initial boundary conditions for this example, however, are noticably different.
In this example, we will assume that the intial temperature is 15 throughout the
subsurface. The temperature along Earths surface (z 0) is given in terms of a
mean annual temperature of 15 minus a sinusoidal seasonal component with an
amplitude of 10 .
In[221]:= Plot15. 10. Sin2 t, t, 0, 1
,
AxesLabel
"t yr", "T "
,
AxesOrigin
0, 15
110
From In[221]:=
T
25
20
0.2
0.4
0.6
0.8
t yr
10
Out[221]= -Graphics-
The second boundary condition will state that the thermal gradient, T/
T z 0 at
great depth. In analytic solutions to the problem (e.g., Carslaw and Jaeger, 1959;
Turcotte and Schubert, 2002), one of the constants of integration can be heuristically
eliminated by assuming that T/
T z 0 at z . In numerical solutions, however,
the depth must be nite and we will use an arbitrarily chosen value of z 500 m. In
Mathematica input format, then, the initial and boundary conditions are:
In[222]:= ic Tz, 0 15.
bc1 T0, t 15. 10. Sin2 t
bc2 z Tz, t 0. /. z
500.
Out[222]= Tz, 0 15.
Out[222]= T0, t 15. 10. Sin2 t
Out[222]= T1,0 500., t 0.
Solve the equation and assign the result to the variable Temp, so as not to overwrite
T(in case the equation is to be solved again, for example with a different amplitude
or wavelength temperature uctuation).
In[223]:= NDSolveeqn , ic, bc1, bc2
, T, z, 500, 0
, t, 0, 5
Out[223]= T InterpolatingFunction500., 0.,
0., 5., <>
In[224]:= Temp T /. %1
Out[224]= InterpolatingFunction500., 0., 0., 5., <>
Carslaw and Jaeger (1959) contains an analytical solution to this problem. The results of the numerical solution can be visualized in several different ways. One
approach is to use superimposed plots of the temperature uctuations at different
depths, as shown below for the rst ve years of the solution.
111
Depth m
25
0
25
50
20
100
10
Out[225]= -Graphics-
112
depth m
-20
-40
-60
-80
-100
0
years
Out[226]= -ContourGraphics-
The contour plot more clearly shows the adjustment from the initial conditions to
the periodic steady state that occurs over the rst 2 years or so, after which the
temperature oscillations appear to be identical. A legend can be added using the
ShowLegend function contained in the Graphics`Legend` standard add-on package. The legend below species 8 increments ranging from 5 to 25 .
In[227]:= ShowLegendtempcontourplot,
GrayLevel, 8, "5 ", "25 ", LegendShadow
None,
LegendPosition
1.1, 0.5
, LegendSize
0.2, 1.
From In[227]:=
0
5
depth m
-20
-40
-60
-80
-100
0
3
years
Out[227]= -Graphics-
25
113
A third way to visualize the solution is to use a 3D plot. To create a wire mesh
version without a shaded surface, use the option Lighting
False.
In[228]:= Plot3DTempz, t, t, 0, 5
, z, 200, 0
,
PlotRange
All, PlotPoints
50, Lighting
False,
AxesLabel
"time", "depth", "T"
,
BoxRatios
1, 0.6, 0.23
From In[228]:=
25
T 20
15
10
5
0
0
-50
1
2
-100
depth
3
-150
time
4
5 -200
Out[228]= -SurfaceGraphics-
Computer Note: Solve the periodic heat ow problem for diurnal temperature
uctuations that occur as the result of daily heating and cooling. What is the
relationship between the frequency of temperature uctuations and the depth to
which they propagate? How deep would the temperature change associated with
a 1,000,000 year long ice age propagate into the Earth?
114
equal to those of a sinusoidally-varying load superimposed on a at surface (Jeffreys, 1976). While this idealization introduces some errors by not explicitly accounting for the material within the hills or removed from the valleys, it is very
straightforward and provides an order-of-magnitude estimate of the effects of topography on the state of stress at depth. Haneberg (1999) shows how to solve a
more complicated version of this problem in which the upper surface is an arbitrary
waveform or combination of waveforms.
Under conditions of plane strain, the distribution of stress in an elastic material
can be described using a biharmonic equation written in terms of an Airy stress
function, (Davis and Selvadurai, 1996; Timoshenko and Goodier, 1970):
4
4
4
2 2 2 4 0
4
x
z
x
z
in which x and z are the two spatial coordinates. The Airy stress function is in turn
related to the components of the 2-D stress tensor by the derivatives xx 2 /
z2 ,
2
2
2
zz / x
, and xz / x
z. If the stresses along the top and bottom
edges of a 2-D beam can be expressed in terms of sine or cosine curves, then the
biharmonic equation has a general solution of
In[229]:= c1 z c2 n z/L c3 z c4 n z/L Cosn x/L
Out[229]=
nz
L
c1 c2 z
nz
L
nx
L
L is the wavelength of the topography, for example the crest-to-crest or troughto-trough distance in a series of valleys and ridges. Is this a valid solution to the
biharmonic equation?
In[230]:= Expandx,x,x,x 2 x,x,z,z z,z,z,z 0
Out[230]= True
Now that the validity of the general solution has been established, we can move on
to the boundary conditions and a particular solution. To do this, rst dene the three
components of the 2-D stress tensor in terms of
In[231]:= xx Simplifyz,z
Out[231]=
1
nz
L n
L2
2nz
c3 c1 L n c4 2 L n z c2
nx
Cos
L
2nz
L
2 L n z
n2 2
nz
L
c1 c2 z
nz
L
L2
115
1
nz
L n
L2
c3 c1
nx
Sin
L
2nz
L
n c4 L n z c2
2nz
L
L n z
Next, specify the boundary conditions along the surface (z 0). The rst boundary
condition will represent the load imposed on the at surface by topography. A positive value of A will indicate vertical compression along the surface as a consequence
of a mountain, whereas a negative value will indicate tension because of a valley.
In[234]:= bc1 zz A Cosn x/L /. z
0
Out[234]=
A Cos
nx
L
The second boundary condition assumes that the surface is frictionless. This will
introduce some error into the solutions, but it is not an unreasonable rst approximation of a complicated problem.
In[235]:= bc2 xz 0 /. z
0
Out[235]=
0
The next two boundary conditions represent the state of stress at great depth, which
can be nite or innite. In the previous example of periodic heat ow, we used a
nite boundary condition to obtain a numerical solution. This time, we will assume
that the effects of topography die off and have no effect an innite distance from the
surface. This is not a problem that Mathematica can handle symbolically, but it can
be solved using some human input. Recall the denition of . If both c3 and c4 are
not zero, the magnitude of the stress function would become innitely large with
depth. This is exactly the opposite of what we would like to occur. Therefore, we
can declare that c3 c4 0 for this particular problem. This would not be the case if
the lower boundary were located at a nite depth.
In[236]:= c3 0
c4 0
Out[236]= 0
Out[236]= 0
n c2 L c1 n Sin n L x
L2
0
Values for the two remaining constants can be found using Solve
116
A L2
AL
, c1 2 2
n
n
A
nz
L
L n z Cos n L x
L
In[240]:= xx Simplify xx /. constants1
Out[240]=
A
nz
L
L n z Cos n L x
L
In[241]:= xz Simplifyxz /. constants1
Out[241]=
A
nz
L
n z Sin n L x
L
One way to check the solutions is to plot the stresses along a boundary, which must
agree with the distribution specied in the boundary condition. For example, the
vertical normal stress along the surface is
In[242]:= Plotzz/. n
2, L
1, A
1, z
0
, x, 1/2, 1/2
,
PlotRange
All
From In[242]:=
1
0.5
-0.4
-0.2
0.2
0.4
-0.5
-Graphics-
This agrees with the boundary condition specied while solving the problem. Each
of the stress components can be plotted individually, for example as below. Light
areas indicate large positive values (compressive stress), whereas dark areas indicate
large negative values (tensile stress)
In[243]:= ContourPlotzz/. n
2, L
1, A
1
, x, 1/2, 1/2
,
z, 1, 0
, PlotRange
All, PlotPoints
25,
FrameLabel
"x", "z"
117
From In[243]:=
0
-0.2
-0.4
-0.6
-0.8
-1
-0.4
-0.2
0.2
0.4
Out[243]= -ContourGraphics-
-1
-0.8
-0.6
-0.4
Out[244]= -Graphics-
-0.2
118
As shown in both of the plots above, the effect of topography vanishes at a depth
equal to the wavelength of the topography. Therefore, we can expect wide valleys
or mountains to have a greater effect on the subsurface state of stress than narrow
canyons or peaks. Although the relief ((A) of the topography will affect the magnitude of stress very near the surface, the perturbation will still die off at depths less
than z L
.
The distribution of shear stress with depth follows a different pattern with twin
bulbs of equal magnitude but opposite sign centered beneath the mountain.
In[245]:= ContourPlotxz/. n
2, L
1, A
1
, x, 1/2, 1/2
,
z, 1, 0
, PlotRange
All, PlotPoints
25,
FrameLabel
"x", "z"
From In[245]:=
0
-0.2
-0.4
-0.6
-0.8
-1
-0.4
-0.2
0.2
0.4
Out[245]= -ContourGraphics-
Mathematica makes it easy to combine results, for example to plot the mean normal
stress.
In[246]:= ContourPlotzz xx/2./. n
2, L
1, A
1
,
x, 1/2, 1/2
, z, 1, 0
, PlotRange
All,
PlotPoints
25, FrameLabel
"x", "z"
119
From In[246]:=
0
-0.2
-0.4
-0.6
-0.8
-1
-0.4
-0.2
0.2
0.4
Out[246]= -ContourGraphics-
In terms of mean normal stress, therefore, we can tentatively conclude that the mechanical effects of topography along Earths surface will persist to a depth of about
1/2 the wavelength of the topography.
Computer Note: Assume that the topography across an area representative of
the Basin and Range province of the southwestern United States has a wavelength of 100 km and an amplitude of 2000 m. Might the topography have any
inuence on the location of magma bodies at depths of 10 to 15 km?
120
written to obtain a solution by iteration. Smith (1985), Press et al. (1992), and Wang
and Anderson (1982) provide detailed discussions of other nite difference methods
applicable to diffusion-type problems.
Finite difference solutions are based on numerical approximations of the derivatives in differential equations. The derivatives are approximated using the differences in values between adjacent points spaced nite distances apart on a regular
grid, hence then name nite difference. A simple one dimensional nite difference
approximation of a rst derivative at point i on a grid is
f
f fi1
i1
x
2
x
where
x is the distance between adjacent grid points. For example, consider the
following list representing the values of some dependent variable f on a nite difference gird, with each value separated by distance
x 0.1.
In[247]:= f 3., 2.6, 2.9
In[248]:= x 0.1
Using the equation above, a nite difference approximation of the rst derivative is:
f3 f1
2. x
Out[249]= 0.5
In[249]:=
A second derivative is the rst derivative of a rst derivative so, using the same kind
of reasoning, the second derivative of f at point i can be approximated as
f
f
f f
i1
i
i
xi1
2 f
x
x
2
x
In the expression above, the rst derivatives are calculated for the imaginary points
i 1/2 and i 1/2, and the second derivative is calculated for grid point i by taking
the the difference between the two. A nite difference approximation of the second
derivative is
In[250]:=
Out[250]= 70.
121
ary. Second, the ux across a boundary can be specied. The simplest form of ux
boundary condition is a no-ow boundary. The ow of groundwater per unit crosssectional area (i.e., specic discharge) is given by Darcys law, q K dh/dx, so a no
ow condition requires dh/dx 0. Recalling our earlier nite difference approximation of a rst derivative, we can write fi1 fi1 to specify a no ow condition at grid
point i. Because point i lies along a boundary, this means that we will either have to
use an imaginary grid point outside of the problem domain or write a different nite
difference expression for points along the boundary. We will choose the rst option
in the examples below.
Toth (1962) studied groundwater ow in small drainage basins in Alberta, and
inferred from eld observations that groundwater owed downward beneath ridges
separating basins and upward toward the axes of the basins, where it could be discharged into a stream. He assumed that seasonal water level uctuations were small,
so that the problem could be simulated using a steady state approach, and that the
ow occurred in a homogeneous and isotropic rectangular aquifer. He also assumed
that there was no groundwater ow across drainage divides imposed by ridges and
streams (h/x
0) and into the bedrock beneath the aquifer system (h/
/ z 0),
and that head varied as a linear function of distance along the top of the aquifer. The
geometry of the problem is illustrated below using a series of Mathematica graphics
functions.
In[251]:= Show
GraphicsText"h b m x", 0.5, 0.95
,
Text"h/x 0", 0.97, 0.5
,
Text"h/x 0", 0.02, 0.5
,
Text"h/z 0", 0.5, 0.05
,
Text"stream", 0., 1.1
,
Text"ridge", 1., 1.1
,
Line0, 0
, 0, 1
, 1, 1
, 1, 0
, 0, 0
,
Dashing0.01
, Line0, 1
, 1, 1.2
,
PlotRange
0.2, 1.2
, 0.2, 1.2
From In[251]:=
stream
ridge
h b m x
"h"x 0
"h"x 0
"h"z 0
Out[251]= -Graphics-
122
The right-hand side of the problem domain illustrated above represents a topographic drainage divide such as a ridge crest, whereas the left-hand side represents
a drainage divide in the form of a stream to which the groundwaer is discharged.
Following the general nature of the topography, hydraulic head along the upper
boundary increases from the stream along the basin axis to the ridge along the basin
margin.
First, dene the number of rows and columns in the nite difference grid. We
will use ten rows and ten columns but, in order to deal with the no-ow boundary
conditions, we will have to include two extra columns and one extra row.
In[252]:= nr 11
nc 12
and then create two tables, old and new to store the estimates.
In[253]:= old Table0., r, nr
, c, nc
new Table0., r, nr
, c, nc
The next step is to establish the hydraulic head along the upper boundary. In this
example, we will use a simple linear function so that head ranges from 0 to 1.
In[254]:= Dooldnr, c Nc 2/nc 3, c, 2, nc 1
Once the boundary values have been specied, the actual nite difference approximation is calculated for each of the non-boundary points in old and the result
is put into new. The equation in the expression below is the solution of the nite
difference approximation of the Laplace equation for hr,c .
In[255]:= Do
newr, c
oldr 1, c oldr 1, c oldr, c 1
oldr, c 1/4.,
r, 2, nr 1
, c, 2, nc 1
At this point, newconsists of mostly zeroes because successive changes to the solution will propagate away from the boundary along which head is specied. Row
nr appears at the bottom of the matrix below because that is the standard convention for matrices. When the solution is plotted using ListContourPlot
or ListDensityPlot, however, row nr will be at the top. The statement
Round100new/100 truncates the results to two decimal places.
123
0 0
0
0
0
0
0
0
0
0 0
0 0
0
0
0
0
0
0
0
0 0
0 0
0
0
0
0
0
0
0
0 0
0 0
0
0
0
0
0
0
0
0 0
0 0
0
0
0
0
0
0
0
0 0
0 0
0
0
0
0
0
0
0
0 0
0 0
0
0
0
0
0
0
0
0 0
0 0
0
0
0
0
0
0
0
0 0
0 0
0
0
0
0
0
0
0
0 0
0 0.03 0.06 0.08 0.11 0.14 0.17 0.19 0.22 0.25 0
0 0
0
0
0
0
0
0
0
0 0
Next, put the values held in new into old in order to prepare for the next iteration,
taking care not to overwrite the upper boundary head values in oldnr.
In[258]:= Do
oldr, c newr, c, r, 2, nr 1
, c, 2, nc 1
The no-ow boundaries must now be reset to that h1,c h3,c , hr,3 hr,1 , and hnc,r
hnc1,r . Because Mathematica stores tables as lists of lists, assigning one row to
another is easy. Assigning one column to another,is more complicated and requires
a Do loop.
In[259]:= old1 old3
In[260]:= Do
Module
,
oldr, 1 oldr, 3
oldr, nc oldr, nc 2
, r, nr
124
Computer Note: Simplify the Do loop above using two All statements to replace the values in all rows without iterating.
The table old now looks like (again recalling that the rows are reversed):
In[261]:= Round100 old/100. // MatrixForm
0
0
0
0
0
Out[261]=
0
0
0
0
0.03
0.11
0 0
0
0
0
0
0
0
0
0
0
0 0
0
0
0
0
0
0
0
0
0
0 0
0
0
0
0
0
0
0
0
0
0 0
0
0
0
0
0
0
0
0
0
0 0
0
0
0
0
0
0
0
0
0
0 0
0
0
0
0
0
0
0
0
0
0 0
0
0
0
0
0
0
0
0
0
0 0
0
0
0
0
0
0
0
0
0
0 0
0
0
0
0
0
0
0
0
0
0 0.03 0.06 0.08 0.11 0.14 0.17 0.19 0.22 0.25 0.22
0 0.11 0.22 0.33 0.44 0.56 0.67 0.78 0.89 1. 0.89
This process is repeated until maxerr falls below a specied tolerance, which
should be very small compaed to the magnitude of the head values. The following
Mathematica program implements a nite difference solution of Toths problem by
combining the individual steps above. The number of rows and columns specied,
nr and nc, should not include the extra rows and columns necessary for the no-ow
boundary conditions. Values of tolerance should be very small compared to the
magnitude of the head values. How small is small enough? One way to nd out is to
experiment with increasingly smaller values until the solution stabilizes from trial
to trial.
In[262]:= Tothh_, rows_, cols_, tolerance_
Moduleold, new, maxerr
,
Set number of rows and columns,
create and initialize the necessary tables.
Make maxerr large to allow entry into
the While loop.
nr rows 1
nc cols 2
old Table0., r, nr
, c, nc
new Table0., r, nr
, c, nc
maxerr 1000
Determine the maximum error in this iteration
maxerr Max
Abs
Tableoldr, c newr, c, r, 2, nr 1
,
c, 2, nc 1
Swap the new and old values for interior grid points
Do
oldr, c newr, c, r, 2, nr 1
, c, 2, nc 1
Reset the three no flow boundary nodes
old1 old3
Do
Module
,
oldr, 1 oldr, 3
oldr, nc oldr, nc 2
, r, nr
ReturnTableoldr, c, r, 2, nr
, c, 2, nc 1
125
126
Now that the nite difference function has been written, calculate the solution to the
Toth problem on a 20 20 grid using a tolerance of 106 .
In[263]:= h Toth1, 20, 20, 1. 10 6
Here is a contour plot of the solution, with large values of head represented by light
gray and small values represented by dark gray. Groundwater will ow perpendicular to the head contours.
In[264]:= headcontourplot ListContourPloth, PlotRange
All,
Contours
Tablec, c, 0., 1., 0.05
From In[264]:=
20
17.5
15
12.5
10
7.5
2.5
2.5
7.5
10
12.5
15
17.5
20
Out[264]= -ContourGraphics-
The contour plot can be embelished with vectors showing the specic discharge, or
magnitude and direction of groundwater ow. Mathematicas standard packages include two functions for plotting vectors. ListPlotVectorField, which works
with tables such as h, takes as its argument a table of vectors. Such a table could be
calculated using nite difference approximations of the rst derivative, but there is
an easier way. The rst step is to interpolate a 2-D polynomial function that passes
through each of the calculated nite difference results. Chapter 6 contains detailed
information about the ListInterpolation function.
In[265]:= ListInterpolationh
Out[265]= InterpolatingFunction1., 20., 1., 20., <>
The next step is to use the function PlotGradientField, which calculates the
gradient of any scalar eld (such as hydraulic head) and then plots the correspond-
127
ing vector eld. The length of the vectors is proportional to the magnitude of the
hydraulic gradient, and ScaleFactor sets the relative length of the longest vector. Because the plot has spatial coordinates in terms of the nite difference row
and column numbers, ScaleFactor
1 means that the longest vector will be
1/20 of the plot height and width. PlotPoints
10 plots a vector at every other
point on the nite difference grid so that the vectors will not be too crowded. Also,
notice that the row and column indices are reversed, because x values correspond to
columns and y values to rows, and the negative of the gradient eld is plotted because groundwater ows down, not up, the hydraulic gradient. See the Mathematica
documentation for other options.
In[266]:= vectorplot PlotGradientField %c, r, r, 1, 20
,
c, 1, 20
, PlotPoints
10, ScaleFactor
3
From In[266]:=
Out[266]= -Graphics-
One of the difculties associated with vector plots of this type is that it can be
difcult to scale the arrows. In the plot above, chosing a scale factor that makes the
longest vectors a reasonable length also makes the shortest vectors so small that their
shafts are not plotted. To make all of the vectors the same length regardless of their
magnitude, list the option ScaleFunction
1& before ScaleFactor
when plotting a gradient eld. The two plots can now be superimposed to show
both the contours and the vectors. AspectRatio, which applies to the entire plot,
128
must be increased to keep the aquifer square when the large upward directed vectors
are added to the plot.
In[267]:= Showheadcontourplot, vectorplot, Frame
False,
AspectRatio
22/20.
Out[267]= -Graphics-
Computer Note: In a subsequent paper, Toth (1963) used a combined linear and
sinusoidal upper boundary head distribution to investigate the effect of different
scales of topography on groundwater ow systems. Modify the nite difference
x/L), where
h/L is the basinroutine above so that h
h x/L A sin(2 nx
scale slope of the water table, n is the periodicty of the local topography, A is
the amplitude of the local topography, and L is the width of the aquifer being
modeled. Use ContourPlot and PlotGradientField to visualize your
results. Experiment with different values of n to determine how deep the effect
of localized topography of different scales persists.
129
130
Jeffreys, H., 1976, The Earth: Its Origin, History, and Physical Constitution (6th ed.): Cambridge University Press.
Johnson, A.M., 1970, Physical Processes in Geology: Freeman-Cooper.
Keller, C.K., van der Kamp, G., and Cherry, J.A., 1989, A multiscale study of the permeability
of a thick clayey till: Water Resources Research, v. 25, p. 22992317.
Middleton, G.V. and Wilcock, P.R., 1994, Mechanics in the Earth and Environmental Sciences: Cambridge University Press.
Nash, D.B., 1980, Morphologic dating of degraded normal fault scarps: Journal of Geology,
v. 88, p. 353360.
Nash, D.B., 1984, Morphologic data of uvial terrace scarps and fault scarps near West Yellowstone, Montana: Geological Society of America Bulletin, v. 95, p. 14131424.
Oertel, G., 1996, Stress and Deformation: A Handbook on Tensors in Geology: Oxford University Press.
Press, W.H., Teukolsky, S.A., Vetterling, W.T., and Flannery, B.P., 1992, Numerical Recipes
in FORTRAN (2d ed.): Cambridge University Press.
Reid, M.E., 1995, A pore-pressure diffusion model for estimating landslide-inducing rainfall:
Journal of Geology, v. 102, p. 709-717.
Reiter, M., 1999, Hydrogeothermal studies on the southern part of Sandia National Laboratories/Kirtland Air Force Base Data regarding ground-water ow across the boundary of
an intermontane basin, in W.C. Haneberg, P.S. Mozley, J.C. Moore, and L.B. Goodwin,
editors, Faults and Subsurface Fluid Flow in the Shallow Crust: American Geophysical
Union, Geophysical Monograph 113, p. 207222.
Roering, J.J., Kirchner, J.W., and Dietrich, W.E., 1999, Evidence for nonlinear, diffusive
sediment transport on hillslopes and implications for landscape morphology: Water Resources Research, v. 35, p. 853870.
Smith, G.D., 1982, Numerical Solution of Partial Differential Equations: Finite Difference
Methods (3d ed.): Oxford University Press.
Stewart, S.A. and T.J. Wynn, 2000, Mapping spatial variation in rock properties in relationship to scale-dependent structure using spectral curvature: Geology, v. 28, p. 691694.
Timoshenko, S.P. and Goodier, J.N., 1970, Theory of Elasticity (3d ed.): McGraw-Hill.
Titus, F.B., Jr., 1963, Geology and ground-water conditions in eastern Valencia County, New
Mexico: New Mexico Bureau of Mines & Mineral Resources Ground-Water Report 7.
Toth, J., 1962, A theory of groundwater motion in small drainage basins in central Alberta,
Canada: Journal of Geophysical Research, v. 67, p. 43754387.
Toth, J., 1963, A theoretical analysis of groundwater ow in small drainage basins: Journal
of Geophysical Research, v. 68, p. 47954812
Turcotte, D.L. and Schubert, G., Geodynamics (2d. ed.): Cambridge University Press.
Wieczorek, G.F., Snyder, J.B., Waitt, R.B., Morrissey, M.M., Uhrhammer, R.A., Harp, E.L.,
Norris, R.D., Bursik, M.I., and Finewood, L.G., 2000, Unusual July 10, 1996, rock fall at
Happy Isles, Yosemite National Park, California, Geological Society of America Bulletin,
v. 112, p. 7585.
Wang, H.F. and Anderson, M.P., 1982, Introduction to Groundwater Modeling: W.H. Freeman.
Wolfram, S., 1999, The Mathematica Book (4th ed.): Cambridge University Press.
Wood, B.J. and Fraser, D.G., 1977, Elementary Thermodynamics for Geologists: Oxford University Press.
4 Random Variables
and Univariate Probability Distributions
Computer Note: The CompGeosci package will load correctly only if it is located in one of the directories in Mathematicas standard le path. Execute the
statement $Path to see a list of the default paths on your computer and place
the le CompGeosci.m in one of those directories. The specic le paths may
differ from one operating system to another. See Chapter 1 for more information
about installing the CompGeosci package.
132
random variable can also be the product of several other random variables, for example the likelihood of a landslide that is the result of interaction between other
random variables such as pore water pressure or soil shear strength.
The processes of interest to geologists are commonly so complicated that they
can be treated as if they were the results of random processes. Treating a variable
as if it were the result of a random process is not the same as arguing that geologic
processes are random rather than mechanistic. Rather, it is a pragmatic concession
that is made in order to obtain useful, affordable, and timely answers to pressing
problems. It is, in essence, an admission that our knowledge of the world and computational capabilities are not yet advanced enough to correctly formulate and solve
mathematical models of complicated geological processes. In the meantime, the
concept of random variables gives us a tool with which to quantify geologic uncertainty and make useful predictions of the likelihood of events such as oods,
earthquakes, and landslides.
Random variables can be either continuous, meaning that there are an innite
number of possible values, or discrete, meaning that there are a limited number
of possible values. Porosity can be considered to be a continuous random variable
because it can assume any value within the range 0 n 1. The number of oods
or earthquakes that likely to occur in an area over the next decade can be considered
a discrete random variable because the result must be an integer.
Random variables are represented by probability density functions (PDFs)
that give the likelihood that any given value of the variable will occur. The
most widely known theoretical PDF is the bell-shaped curve of the normal or
Gausssian distribution, although in many geologic problems it is the logarithms
of the variables, rather than the variables themselves, that seem to be normally
distributed. The peak of a normal distribution simply indicates that values selected at random from it are more likely to lie near the peak than the two extremes. There are, however, many other probability distributions useful to geologists and geological engineers. Well sample a few of them in this chapter. An
excellent and free compilation of probability distributions is A Compendium of
Common Probability Distributions, which can be downloaded as a pdf le from
www.causascientia.org/math_stat/Dists/Compendium.pdf. The probability that a
random variable X is less than or equal to some value x, or Prob{X x}, is given by
the cumulative distribution functionor CDF. The CDF is the integral of the PDF
from its lower limit to x.
An important use of random variables is to develop probabilistic or stochastic models of geologic processes or events. For example, instead of stating with
certainty that the porosity of a certain formation is 0.30 a geologist using a probabilistic model of porosity might state that there is a 75% probability that the average
porosity of the formation is between 0.20 and 0.40. Or, he or she could say that
the porosity of the formation follows a lognormal distribution with a certain mean
and variance. The apparent randomness of geological variables such as porosity can
be the result of spatial or temporal variability, meaning that the property does indeed vary in space or time, or uncertainty arising from measurement errors. The
133
2 2
Out[2]=
2
and the Mathematica command to plot the PDF of a normal distribution with a
mean of 0 and a standard deviation of 1, often referred to as the standard normal
distribution, is
In[3]:= PlotPDFNormalDistribution0, 1, X, X, 5, 5
,
AxesLabel
"X", "PDFX"
From In[3]:=
PDF
X
0.4
0.3
0.2
0.1
-4
-2
Out[3]= -Graphics-
134
The probability that X will be less than or equal to x is given by the CDF, which
is the integral of the PDF from its lower limit to x. For example, the probability of
drawing a value 1 from the normal distribution plotted above is
1.
In[4]:=
PDFNormalDistribution0, 1, XX
'
Out[4]= 0.841345
The decimal point after the upper limit of integration forces Mathematica to
return numerical, rather than symbolic, output. It can sometimes be handy to have
a symbolic expression rather than a numerical one. For example, it is easy to show
that the integral of the PDF is indeed equal to the CDF evaluated at X.
1
In[5]:=
PDFNormalDistribution0, 1, XX
'
Out[5]=
1
1
1 Erf
2
2
1
1
1 Erf
2
2
-4
-2
Out[7]= -Graphics-
The probability that a variable drawn from the same normal distribution will fall
between, say, 0.7 and 1.4, can be found by subtracting CDFs.
In[8]:= CDFNormalDistribution0, 1, 1.4
CDFNormalDistribution0, 1, 0.7
Out[8]= 0.67728
135
The total area beneath any PDF equals 1, as shown for a normal distribution by
In[9]:= CDFNormalDistribution0, 1, '
Out[9]= 1
It means that there is a 100% chance that any randomly selected value will fall
between the minimum and maximum values of the distribution (in this case and
).
4.3.2 Log-Normal Distribution
Log-normal distributions are those in which log X, rather than X itself, is normally
distributed. As such, they range over 0 log X . There are two ways to work with
log-normal distributions in Mathematica. First, take the logarithms of the random
variables and proceed to treat them as if they were normally distributed. Second,
use the built-in log-normal distribution. Either way, remember that Mathematica
uses natural base e logs by default and youll need to specify the base if you want
to use something else. The common logarithm of x, for example, would be obtained
using Log[10,x]
The lognormal PDF is
In[10]:= PDFLogNormalDistributionL , L , X
Out[10]=
LogXL 2
2
2 L
2 X L
In this case, L and L are the mean and standard deviation of the logarithms, not
the original arithmetric variables. A graphic example of a log-normal PDF is
In[11]:= PlotPDFLogNormalDistribution3, 1, X, X, 0, 100
,
AxesLabel
"X", "PDFX"
From In[11]:=
PDF
X
0.03
0.025
0.02
0.015
0.01
0.005
20
40
Out[11]= -Graphics-
60
80
100
136
Signmaxval X Signminval X
2 maxval minval
-2
Out[13]= -Graphics-
Notice that the PDF is constant between the lower and upper limits and zero elsewhere. Just as with the other distributions, the area underneath the PDF is equal
to 1.
Uniform distributions can be useful in cases where the available data are uniformly distributed or data are sparse and, although there might be reasonable minima and maxima, there is no compelling reason to assume that there is a central
tendency. For example, you might have a good idea that the porosity of a certain
formation or facies ranges from, say, 0.3 to 0.5 but dont have enough information
to know whether there is a central tendency. The result of using a uniform distribution instead of one with a central tendency (e.g., a normal distribution) is to increase
the uncertainty of any results calculated using that distribution.
137
X
X
-5
10
15
20
-Graphics-
1 X1Q X1P
BetaP, Q
138
in which Beta[P, Q] is the Euler beta function. Its built into Mathematica, so theres
no need to take any extra steps to calculate it.
The beta distribution PDF is symmetric when P and Q are equal, with the magnitude of both controlling the peakedness of the PDF. Unequal values of P and Q
cause the PDF to be skewed to the right or left. Here is a plot of the beta distribution
PDF for P 7.5 and Q 3.1. Substitute your own P and Q values and replot the
PDF to see what shape it will take (P and Q must be positive or you will receive an
error message!).
In[17]:= PlotPDFBetaDistribution7.5, 3.1, x,
x, 0, 1
, AxesLabel
"X", "PDFX"
From In[17]:=
PDFHXL
2.5
2
1.5
1
0.5
0.2
0.4
0.6
0.8
Out[17]= -Graphics-
,
Frame
True, FrameTickes
0, 1
, 0, 5, 10
,
,
,
Epilog
Text"P ", 0.6, 9
, TextP, 0.8, 6
,
Text"Q ", 0.6, 6
, TextQ, 0.8, 6
,
DisplayFunction
Identity, P, 1, 10, 2
,
Q, 1, 10, 2
Show%, DisplayFunction
$DisplayFunction
139
From In[18]:=
P =1
Q =1
10
5
0
0
P =3
Q =1
5
0
0
P =9
Q =1
5
0
1
P =9
Q =3
5
1
0
=9
Q =5
5
0
5
1
5
0
1
P =9
Q =9
10
1
P =7
Q =9
5
1
1
P =5
Q =9
10
=9
Q =7
10
5
1
1
P =3
Q =9
10
P =7
Q =7
5
1
0
10
=5
Q =7
10
10
5
1
P =7
Q =5
P =1
Q =9
10
P =3
Q =7
10
10
=5
Q =5
0
0
10
5
0
P =3
Q =5
0
P =1
Q =7
10
10
P =7
Q =3
10
0
5
1
5
0
0
10
P =5
Q =3
10
10
5
1
P =7
Q =1
P =1
Q =5
10
=3
Q =3
10
10
P =5
Q =1
0
10
10
0
5
1
10
P =1
Q =3
10
Out[18]= -GraphicsArray-
Out[20]= -Graphics-
10
140
Should you ever nd yourself working with Mathematicas built-in Pareto distribution, be careful not to plot or evaluate it for values of X < k. Youll calculate results,
but theyre nonsensical because the distribution is dened only for X k. Thats
why the plot above starts at X k 3.
Out[21]=
The probability that two or fewer (n 2) events with an averge recurrence interval
of r 100 years will occur in any given century (t 100 years) is
In[22]:= CDFPoissonDistribution100/100., 2
Out[22]= 0.919699
Thus, there is a 0.92 probability that 0, 1, or 2 events will occur over a century. The
probability of exactly two events occuring is given by the difference between the
probability of two or fewer events and the probability of one or no events. To wit,
In[23]:= CDFPoissonDistribution100/100., 2
CDFPoissonDistribution100/100., 1
Out[23]= 0.18394
Note that this equivalency does not apply to continuous distributions. Because continuous random variables can take on an innite number of values, the PDF cannot
be used to calculate the probability at a point. Instead, the probability that X falls
between two bounds must be calculated.
The Poisson PDF cannot be graphed using Plot[] because it is a series of
discrete values, between which it is undened, rather than a continuous function.
141
But, it can be plotted by rst creating a table of discrete values and then using
ListPlot[]. Here is a plot of Poisson probabilities for various values of n using
the same r t 100 values as above.
In[25]:= ListPlot
Tablen, PDFPoissonDistribution100/100., n
,
n, 0, 5
, PlotRange
All,
PlotStyle
PointSize0.02,
AxesLabel
"X", "PDFX"
From In[25]:=
PDFHXL
0.35
0.3
0.25
0.2
0.15
0.1
0.05
1
Out[25]= -Graphics-
Out[26]= -Graphics-
142
The result is very close to that predicted by the Poisson distribution. What is the
difference between the two? Crovelli (2000) shows that the binomial distribution is
actually a discrete time approximation to the continuous time Poisson distribution.
Note that were using discrete and continuous in a different sense here. Both are
discrete probability distributions in the sense that the random variable, n, is discrete
in both whereas the way that t is treated differs between the two. Crovelli shows
that using the binomial distribution introduces noticable errors for events with short
recurrence intervals over short times.
The distribution of the values can be visualized using a simple histogram, in this
case scaled so that the area of the bars sums to 1. This will allow the histogram to
143
be plotted on the same axes as a PDF of the distribution from which the values were
drawn.
In[30]:= plot1 Histogramdata, HistogramScale
1,
BarStyle
GrayLevel0.6
From In[30]:=
0.5
0.4
0.3
0.2
0.1
-1
Out[30]= -Graphics-
1
N
In[32]:= Variancedata
Out[32]= 0.728793
In[33]:= dev StandardDeviationdata
Out[33]= 0.853694
The variance and standard deviation calculated above are sometimes referred to as
the sample variance and sample standard deviation because they are based upon
an incomplete sample of the population. The sum of squared deviations is divided
by N 1 rather than N
N, which has the effect of increasing the variance by a small
amount in order to reect the uncertainty associated with the use of an estimated
value. If the data represent the entire population, then the population variance and
population standard deviation are calculated by
In[34]:= VarianceMLEdata
Out[34]= 0.655914
144
In[35]:= StandardDeviationMLEdata
Out[35]= 0.809885
For large sample sizes, say tens of numbers or more, the difference is generally
small enough to ingore. For small sample sizes, such as the 10 values in data, the
difference can be more noticable.
Mathematica also includes the functions SampleRange, GeometricMean,
HarmonicMean, Median, Skewness, Kurtosis, LocationReport,
DispersionReport, and other descriptive statistics that we will not use. They
are all, however, described in the paper and electronic documentation accompanying
the program.
What about other kinds of distributions? For example, say that we have a data
set describing the maximum annual ood discharge from a stream gauging station
and would like to t an extreme value distribution to the data. There are no built-in
functions to calculate the and parameters that dene the extreme value distribution PDF, but it is easy to calculate the sample mean and standard deviation of
the data set. These two statistics can in turn be related to the parameters of interest
using Mathematicas symbolic manipulation capabilities. The symbolic expression
for the mean value of an extreme value distribution with parameters and , for
example, is
In[36]:= MeanExtremeValueDistribution,
Out[36]= EulerGamma
Therefore, if the sample mean and standard deviation are known it is a simple matter
to set the symbolic expression for the mean equal to the sample mean and the symbolic expression for the standard deviation equal to the sample standard deviation,
forming two equations in two variables.
In[37]:= eq1 0. MeanExtremeValueDistribution,
Out[37]= 0. EulerGamma
In[38]:= eq2 1.
StandardDeviationExtremeValueDistribution,
Out[38]= 1.
6
Below is a plot of the extreme value distribution PDF along with the normal distribution PDF using the mean and standard deviation of 0 and 1.
145
In[40]:= p1 PlotPDFExtremeValueDistribution,, x
/.%1, x, 5, 5
, PlotRange
All,
DisplayFunction
Indentity
p2 PlotPDFNormalDistribution0,1, x, x, 5, 5
,
PlotRange
All, PlotStyle
Dashing0.01
,
DisplayFunction
Indentity
Showp1, p2, DisplayFunction
$DisplayFunction
From In[40]:=
0.4
0.3
0.2
-4
-2
Out[40]= -Graphics-
There are some limitations to this method. For example, the beta distribution built
into Mathematica has a range of 0 x 1 and it will be impossible to calculate valid
P and Q parameters from data sets that fall outside of that range. Data with a sample
mean and standard deviation of 0 and 1, as used in the previous example, will yield
P 0 and Q 1. Because P and Q must both be positive in the Mathematica implementation of the beta distribution, these results are nonsensical and any attempts
to use them will produce an error. There are, however, two solutions to this problem.
First, it is possible to write a custom Mathematica function for the beta distribution
that ranges between user-specied minimum and maximum values. The necessary
PDF equation is available in many probability and statistics textbooks. Second, we
can shift the mean and re-scale the data set so that ranges from 0 to 1.
Similar problems arise if the Solve function is used in an attempt to calculate values of and for a lognormal distribution from an arithmetric mean and
variance. The easiest solution to this problem is to calculate the mean and standard
deviation of the logarithms of the data.
4.5.1 How Good Are Those Estimates?
How good is the t between data and a normal distribution with its sample mean
and standard deviation? Or, between the derived distributions and the underlying
distribution? One way to examine the t is graphically, by plotting a normal distribution PDF using the mean and standard deviation found using the method of
146
moments, then superimpose it with the histogram of the random values and the normal distribution from which the values were drawn.
Here is a PDF generated using the method of moments mean and standard deviation:
In[41]:= plot2 PlotPDFNormalDistributionmeanval,dev, x,
x, 4, 4
, PlotRange
All,
PlotStyle
Thickness0.006, Dashing0.02
From In[41]:=
0.4
0.3
0.2
0.1
-4
-2
Out[41]= -Graphics-
0.3
0.2
0.1
-4
-2
Out[42]= -Graphics-
,
Ticks
4, 2, 0, 2, 4
, Automatic
,
AxesOrigin
4, 0
147
From In[43]:=
0.5
0.4
0.3
0.2
0.1
-2
Out[43]= -Graphics-
Notice that the histogram was specied rst in the Show command so that it is in
the background and doesnt obscure the PDFs. Try reversing the order to see what
happens otherwise.
148
1000
2000
3000
4000
Pb
5000
0HppmL
Although the data are skewed, they do have a strong central tendency and the sample
size is small. Therefore, it is not unreasonable to conclude that they may follow
something reasonably close to a normal distribution.Well use this data set to show
how hypotheses about the mean and variance of the population from which the data
were drawn can be tested.
4.6.1 The t Statistic
The question that we seek to answer is whether the soil lead data were drawn from
a normal distribution with a population mean of 1938 ppm and an unknown population variance. Like all statistical tests, the question is framed in terms of the null
hypothesis that there is no difference between the underlying population mean and
the calculated sample mean. The alternative hypothesis is that the population mean
is different than the calculated sample mean. The null hypothesis is evaluated by
calculating the t statistic, which takes into account the calculated sample mean, the
postulated population mean, the calculated sample variance, and the number of data
used to estimate the mean and variance. Using the sample statistics and the postulated population mean of 1938 ppm, the t statistic for this example is
PbMean 1938.
In[48]:= tvalue
PbVariance/LengthPbData
Out[48]= 0.0014742
149
A slightly different test is used if the underlying population variance is known and
does not have to be calculated from the data. This situation, however, rarely occurs
in practical applications. Consult the Mathematica documentation for more details.
The t statistic follows the Students t distribution, which is similar to a normal
distribution. Its exact shape, however, depends on the number of degrees of freedom
(typically one less than the number of samples). The composite plot below shows
Students t distribution PDFs for 1 (short dashes), 5 (long dashes), and 10 (solid line)
degrees of freedom. Students t distribution is indistinguishable from the normal
distribution for large numbers of samples, which is typically taken to mean 30 or so.
In[49]:= p1 PlotPDFStudentTDistribution1,x, x, 8, 8
,
PlotStyle
Dashing0.01
,
DisplayFunction
Identity
p2 PlotPDFStudentTDistribution5,x, x, 8, 8
,
PlotStyle
Dashing0.02
,
DisplayFunction
Identity
p3 PlotPDFStudentTDistribution10,x, x, 8, 8
,
DisplayFunction
Identity
Showp1, p2, p3, DisplayFunction
$DisplayFunction,
AxesLabel
"DOF", "PDF"
From In[49]:=
PDF
0.3
0.2
0.1
DOF
-7.5
-5
-2.5
2.5
7.5
Out[49]= -Graphics-
150
error can only occur in situations where the null hypothesis is rejected. It is possible
to make a Type II error, meaning that the null hypothesis is incorrect even though it
is accepted, but it is generally not possible to calculate the probability of committing
a Type II error. The null hypothesis is rejected if the calculated t value exceeds the
tabulated critical value of t. In this case, the degrees of freedom would be one less
than the number of data used in the calculations (in order to account for the fact that
the variance had to be estimated from the data). The critical t value for 13 1 12
degrees of freedom and an 0.05 level of signicance is 1.782, which is much larger
than the calculated t 0.0014742. Therefore, the null hypothesis cannot be rejected
in this case.
We can accomplish the same thing using a series of steps in Mathematica. First,
the we calculate the probability of obtaining a t value smaller than or equal to t
0.0014742 from a Students t distribution with 12 degrees of freedom. Because the
calculated sample mean is larger than the postulated population mean, the probability is
In[50]:= 1 CDFStudentTDistribution12, tvalue
Out[50]= 0.499424
This result is known as a one-sided P-value because it gives only the probability
of obtaining a t value less than or equal to our calculated t. Had the sample mean
been smaller than the population mean, the t statistic would have been negative and
the P-value would have been given by CDF[StudentTDistribution[12],
tvalue]. Next, we need to determine the critical t value against which the calculated t value is to be compared. Because we have selected a 0.05 level of signicance, this will be the value of t for which the CDF is 1 0.05 0.95. It is,
unfortunately, not possible to do this using Mathematicas Solve function. One
option is to plot the Students t CDF and visually interpolate the critical value.
In[51]:= GraphicsDashing0.01
,
Line0, 0.95
, 1.78, 0.95
, 1.78, 0
PlotCDFStudentTDistribution12,t, t, 5, 5
,
DisplayFunction
Identity,
AxesLabel
"t", "CDFt"
Show%, %%, Displayfunction
$DisplayFunction
151
From In[51]:=
CDFHtL
1
0.8
0.6
0.4
0.2
t
-4
-2
Out[51]= -Graphics-
Reading across from 0.95 on the vertical axis to the CDF curve and then down to
the horizontal axis, it is easy to see that the critical t value must be about 1.79.
At this point it may seem easier to look up a critical value in a book rather than
plotting and interpolating by eye. The same results, though, can be obtained in one
step using the function MeanTest, which minimally takes as its arguments the
data set and the mean against which the data are to be tested. MeanTest returns a
one-sided P-value by default.
In[52]:= MeanTestPbData, 1938
Out[52]= OneSidedPValue 0.499424
Mean
TestStat
Distribution
,
1938.46 0.0014742 StudentTDistribution12
OneSidedPValue 0.499424,
Fail to reject null hypothesis at significance level 0.05
Another use of MeanTest is to determine whether the mean value of a data set
is signicantly different than some critical threshold. For example, the action level
that triggered remediation of the smelter site from which the data set came was a
mean lead concentration of 500 ppm. What is the probability that the lead data were
drawn from a normal distribution with a mean of 500 ppm?
152
Mean
TestStat Distribution
,
1938.46 4.59458 StudentTDistribution12
OneSidedPValue 0.00030834,
Reject null hypothesis at significance level 0.05
Before conducting a mean difference test, we can plot the PDFs of normal distributions calculated using the sample means and variances from each data set.
In[56]:= PlotPDFNormalDistributionMeanPbData,
StandardDeviationPbData, x, x, 5000, 5000.
,
PlotRange
All, DisplayFunction
Indentity
PlotPDFNormalDistributionMeanPbData2,
StandardDeviationPbData2, x, x, 5000, 5000.
,
PlotRange
All, DisplayFunction
Indentity
Show%, %%, DisplayFunction
$DisplayFunction
From In[56]:=
0.001
0.0008
0.0006
0.0004
-4000
-2000
Out[56]= -Graphics-
2000
4000
153
The distribution inferred from rst data set has considerably more variability than
that inferred from the second set, although their mean values do not appear to be
signicantly different. The plot above also illustrates one of the drawbacks to using
normal distributions: they can include negative values that may be physically meaningless. Although it doesnt do anything more than suggest that a normal distribution
may not be an appropriate one in this case, thornier problems can arise when physically unrealistic values are generated in more complicated simulations. This issue
will be addressed in Chapter 5. The mean values can be rigorously compared using the function MeanDi f ferenceTest from Statistics `HypothesisTests`, as
shown below.
In[57]:= MeanDifferenceTestPbData, PbData2, 0.,
FullReport > True, SignificanceLevel
0.05
Out[57]=
FullReport
MeanDiff TestStat Distribution
,
123.077
0.371008 StudentTDistribution14.9022
OneSidedPValue 0.357926,
Fail to reject null hypothesis at significance level 0.05
The two variances can be compared using an analogous test, often referred to
in statistics texts as an F test because ratios of variances follow what is known as
an F distribution. Thus, it is the ratio of variances rather than their difference that
is tested. The null hypothesis is that the two samples were drawn from populations
with the same variances, which have a ratio of 1.
In[58]:= VarianceRatioTestPbData, PbData2, 1.,
FullReport > True, SignificanceLevel
0.05
Out[58]=
FullReport
Ratio
TestStat Distribution
,
8.14672 8.14672 FRatioDistribution12, 12
OneSidedPValue 0.00048347,
Reject null hypothesis at significance level 0.05
As suggested by the preliminary plot of PDFs, the variances are different enough
that the null hypothesis must be rejected. What is the explanation for our conclusions
that there is no signicant difference between population means but that there is a
signicant difference between the population variances? One possible explanation
lies in the techniques used by the two groups of geologists. The group that produced
the rst data set used a portable x-ray uoresence unit to obtain lead concentrations
in the eld whereas the second group took soil samples to the laboratory for atomic
absorption analysis. Another possibility is that we have committed a Type I error
by incorrectly rejecting the null hypothesis, although this is unlikely because the
results do not change even with SignificanceLevel0.001. Thus, there is a
less than 1/1000 chance that a Type I error has occurred. Before drawing conclusions
154
about the precision of the methods used by the two groups of geologists, however,
consider the fact that there was no signicant difference in variances for data sets
from two other survey squares at the same site.
1000
2000
Out[59]= -Graphics-
3000
4000
Pb
5000
155
1000
2000
3000
4000
Pb
5000
Out[61]= -Graphics-
Therefore, we are justied in concluding that the lead data are not normally distributed only if we are willing to take a 78% chance of committing a Type I error.
Are the lead data better represented by a lognormal distribution? The easiest
way to evaluate the possibility is to plot the logarithms of the data (noting that the
minimum and maximum values must now be given as logarithms and that taking the
logarithm of zero produces an error)
156
6.5
7.5
8.5
Pb
The plot, K-S statistic, and K-S probability all suggest that the lead data are better
represented by a lognormal than a normal distribution. The likelihood of committing
a Type I error by rejecting the null hypothesis that there is no difference between
the empirical distribution and the lognormal distribution is > 99%.
Similarly, the two lead data sets can be compared using K-S plots and statistics.
In[67]:= KSTwoListPlotLogPbData, LogPbData2,
Log400, Log5000, AxesLabel
"Pb", "CumnFreq"
From In[67]:=
Cum
Freq
1
0.8
0.6
0.4
0.2
Pb
6.5
Out[67]= -Graphics-
7.5
8.5
157
KSTwoList calculates the K-S statistic by dividing the range between the minimum and maximum values in the data sets by a large user-specied number, which
is 100 in the example above. This number should be larger than the number of data
points in the two lists being compared.
In[68]:= KSTwoListLogPbData, LogPbData2, 100
Out[68]= 0.307692
In[69]:= KSProb%, 13
Out[69]= 0.138274
The greatest separation between the two cumulative frequency plots occurs between
values of 1300 and 1700 ppm and is, as calculated by the K-S function, approximately 0.31. There is a 14% chance of committing a Type I error if we reject the
null hypothesis that the two empirical distributions are the same.
It may not be terribly useful to generate just one value, but it might be very useful
to be able to draw tens, hundreds, or thousands of values when simulating geologic processes. To generate a table of 10 random values from the same distribution,
type
In[71]:= RandomArrayNormalDistribution0, 1, 10
Out[71]=
0.756926, 0.949171, 2.04928, 0.131767, 1.11098,
0.549414, 0.151664, 0.498526, 0.862757, 1.20414
158
How about something more substantial? Lets generate a table of 100 values and
give that table the variable name RandomValues.
In[72]:= RandomValues
RandomArrayNormalDistribution0, 1, 100
As usual, the semi-colon can be used to supress output. The values are calculated
and stored, but not displayed. Heres a histogram of the values:
In[73]:= HistogramRandomValues,
HistogramCategories
20, HistogramRange
4, 4
,
ApproximateIntervals
No, BarStyle
GrayLevel0.6
From In[73]:=
20
15
10
-2
Out[73]= -Graphics-
Experiment with Mathematica and try generating and plotting several different sets
of random numbers. The results will be overwritten each time you execute the
RandomValues . . . line, so give each set a different name (e.g., RandomValues2, RandomValues3, etc.) if you dont want to loose your previous results.
To visually compare the generated distribution with its underlying theoretical
distribution, rst re-scale the histogram so that the total area of the bars is 1, just as
it must be for any PDF. This is accomplished with the HistogramScale->1 option.
In[74]:= RandomValueHistogram
HistogramRandomValues, HistogramScale
1,
HistogramRange
4, 4
, HistogramCategories
20,
ApproximateIntervals
No,
BarStyle
GrayLevel0.6
From In[74]:=
0.5
0.4
0.3
0.2
0.1
-2
Out[74]= -Graphics-
0.3
0.2
0.1
-4
-2
Out[75]= -Graphics-
159
160
In[76]:= Show%%, %
From In[76]:=
0.5
0.4
0.3
0.2
0.1
-2
Out[76]= -Graphics-
Computer Note: Is the random sample a good approximation of the PDF? Repeat the exercise, particularly with different sample sizes, to get a feel for the
variability inherent in sets of randomly selected values.
The same procedures can be followed to generate random numbers from any of
Mathematicas standard probability distributions. Below is an example using 500
values drawn from a beta distribution.
In[77]:= RandomArrayBetaDistribution2, 7, 500
In[78]:= Histogram%, HistogramScale
1,
BarStyle
GrayLevel0.6
From In[78]:=
5
4
3
2
1
0.1
0.2
0.3
Out[78]= -Graphics-
0.4
0.5
0.6
161
0.2
0.4
0.6
0.8
0.1
0.2
0.3
0.4
0.5
0.6
Out[80]= -Graphics-
It might sometimes be useful to generate binary {0,1} values, for example to denote
the perfectly random occurrence of a yes-no process. For example, did a landslide
occur in a GIS raster or not? Did an earthquake occur in a given time period or not?
This can be done using
In[81]:= RandomInteger
Out[81]= 0
162
Reset the random seed to 5, though, and the rst list is generated again
In[84]:= SeedRandom5
RandomArrayNormalDistribution0, 1, 5
Out[84]=
0.401391, 0.564765, 0.793385, 0.59151, 1.68444
One way to test for obvious problems with a random number generator is to generate
a lot of random numbers, plot them, and look a small piece of the plot for any
patterns or lattice-like structures in the plot. The following two commands generate
1,000,000 random points and then plots those within the range 0.001 x 0.002.
Are there any patterns or lattice structures evident?
In[85]:= TableRandom, Random
, i, 1000000
163
From In[86]:=
1
0.8
0.6
0.4
0.2
0.0012
0.0014
0.0016
0.0018
0.002
Out[86]= -Graphics-
164
In[88]:= Do
AppendToCentralLimitResults,
MeanTableRandom
UniformDistribution 5, 5, 50
,
i, 50
In[89]:= HistogramCentralLimitResults,
BarStyle
GrayLevel0.6
From In[89]:=
12
10
8
6
4
2
-0.5
0.5
Out[89]= -Graphics-
As shown above, the results are beginning to look something like a normal distribution. They are denitely not uniformly distributed. How do the means and variances
compare? The values for the underlying uniform distribution are:
In[90]:= MeanUniformDistribution 5., 5.
VarianceUniformDistribution 5., 5.
Out[90]= 0.
Out[90]= 8.33333
The results are close, but not in exact agreement. As an experiment, repeat the simulation several times to get a feel for the variability of results obtained for N 50.
Better results can be obtained by increasing the sample size to N 500 and the
number of samples to M 100. Note that the rst step is to clear previous values
from the results table.
165
Out[92]=
In[93]:= HistogramCentralLimitResults,
BarStyle
GrayLevel0.6
From In[93]:=
70
60
50
40
30
20
10
-0.5
0.5
The agreement, although still not exact, is better than the rst example and
does serve to illustrate the very non-intuitive consequences of the Central Limit
Theorem.
What are the implications of the Central Limit Theorem for geological applications? Field measurements or experimental results that are subjected to statistical
analysis can be considered to be the sums of many independent factors and in large
numbers should appoximate a normal distribution. In theory, therefore, one should
be able to use functions such as MeanTest indiscriminantly because the Central
Limit Theorem tells us that the normal distribution is just that: the one that random
variables normally follow. They key word, though, is should. Many variables of
interest to geologists follow highly skewed distributions such as the lognormal distribution, so data sets should always be plotted to see if they at least come close
to being normally distributed before using tests that apply to normal distributions.
166
ShowGraphicsArrayGraphTable,
DisplayFunction
$DisplayFunction
167
From In[95]:=
Out[95]= -GraphicsArray-
The gray curves are the underlying or true distributions and the black curves are
the tted distributions. In some cases, the agreement between the tted and underlying distributions is good. In other cases, the tted distribution is a very poor
representation of the underlying distribution.
What is the result if we increase the number of samples to 10 per trial?
In[96]:= BackgroundPlot
PlotPDFNormalDistribution0, 1, x,
x, 5, 5
, Axes
None, DisplayFunction
Identity,
PlotRange
0, 1.5
, PlotStyle
Thickness0.02,
GrayLevel0.6
Do
Blockpseudodata, ForegroundPlot
, pseudodata
TableRandomNormalDistribution0,1, 10
ForegroundPlot
PlotPDFNormalDistributionMeanpseudodata,
StandardDeviationpseudodata, x, x, 5, 5
,
Axes
None, PlotStyle
Thickness0.01
,
DisplayFunction
Identity, PlotRange
0, 1.5
GraphTablei, j ShowBackgroundPlot,
ForegroundPlot, i, 5
, j, 5
ShowGraphicsArrayGraphTable,
DisplayFunction
$DisplayFunction
168
From In[96]:=
Out[96]= -GraphicsArray-
The agreement between tted and underlying distributions is clearly better when
10 values are used. Choosing more, say 25 or 30, would produce even smaller differences. The exact number of samples required to adequately characterize an underlying distribution depends on the desired condence level and the standard deviation
of the underlying distribution. Statistics handbooks contain formulae for the estimation of the sample sizes required for specied condence levels assuming that the
data are normally distributed. For example, the number of samples necessary to determine the condence interval (h) around the mean of normally distributed data
at the level of signicance rst requires us to calculate the value for which the
Student t distribution has only an
/2 probability of being exceeded. The Student t
distribution resembles the normal distribution, but its exact shape is controlled by
the degrees of freedom (dof n 1 is used when n samples are used to estimate
1 parameter, in this case the mean). The two are virtually identical for large numbers
of samples.
Below are four plots showing the Student t distribution for n 1, 5, and 10 as
well as a standard normal distribution (( 0, 1) for comparison.
In[97]:=
Show
GraphicsArray
PlotPDFStudentTDistribution1,x, x, 10, 10
,
PlotRange
0, 0.5
, DisplayFunction
Identity,
Frame
True, Epilog
Text"tnn 1", 6, 0.35
,
PlotPDFStudentTDistribution5,x, x, 10, 10
,
PlotRange
0, 0.5
, DisplayFunction
Identity,
Frame
True, Epilog
Text"tnn 5", 6, 0.35
,
PlotPDFStudentTDistribution10,x, x, 10, 10
,
PlotRange
0, 0.5
, DisplayFunction
Identity,
Frame
True, Epilog
Text"tnn 10", 6, 0.35
,
PlotPDFStudentTDistribution0,1, x, x, 10, 10
,
PlotRange
0, 0.5
, DisplayFunction
Identity,
Frame
True, Epilog
Text"tn 0n 1", 6, 0.35
,
, DisplayFunction
$DisplayFunction
169
From In[97]:=
0.5
0.4
0.3
0.2
-10 -5
t
n = 1
0.5
0.4
0.3
0.2
-10 -5
10
t
n = 10
0.5
0.4
0.3
0.2
-10 -5
n = 5
0.5
0.4
0.3
0.2
10
-10 -5
10
Normal
l
m = 0
s = 1
0
10
Out[97]= -GraphicsArray-
From In[98]:=
1-CDF
0.5
0.4
0.3
0.2
0.1
10
Out[98]= -Graphics-
170
The condence level for our rst numerical sampling experiment is (Crow et al.,
1960, Statistics Manual)
In[99]:= h
Tcrit s
/. Tcrit
4.3, n
3, s
1.
n
Out[99]= 2.4826
So, for three samples the condence level surrounding the mean is 2.48261 2.48.
Not very encouraging! Repeating the exercise for 10 samples, the critical t value
drops to about 2.1 and
In[100]:= h
Tcrit s
/. Tcrit
2.3, n
10, s
1.
n
Out[100]= 0.727324
or x 0.73.
Conversely, what if we specify the condence interval and wish to calculate the
number of samples necessary to attain it? The equation above can be re-arranged
to solve for n to estimate the sample size. If youve been reading carefully, you
may have noticed that the value of Tcrit depends on n, so in theory this must be an
iterative process in which we guess n, look up a value of Tcrit to calculate n, and
repeat the process until n converges. In practice, the plot above shows that for 10 or
more samples there is little change in the critical value so the rst guess will often
be good enough.
If we wish to determine the mean value with a condence interval of 0.1, then
we should collect (Crow et al., 1960)
In[101]:= Clearh
2
In[102]:= n
Tcrit s
/. Tcrit
2., h
0.1, s
1.
Out[102]= 400.
Thats right, 400 samples! Similar tests, with similar restrictions, exist for testing
standard deviations.
In actual applications, the ability to collect data is commonly constrained by
money, time, or both. How much are you, a client, or perhaps some regulators willing to pay in order to have more condence in the result? And, how precise do
you need to be? That depends in part on the sensitivity of a model to the random
variable represented by the distribution as well as the risk (which is a function of
the likelihood of an occurrence and its consequences) involved. There is no simple
answer.
171
5 Probabilistic Simulation
Computer Note: The CompGeosci package will load correctly only if it is located in one of the directories in Mathematicas standard le path. Execute the
statement $Path to see a list of the default paths on your computer and place
the le CompGeosci.m in one of those directories. The specic le paths may
differ from one operating system to another. See Chapter 1 for more information
about installing the CompGeosci package.
174
5 Probabilistic Simulation
In[2]:=
PeakFlow
5890, 6270, 8790, 6860, 5300, 5160, 3110, 8980, 4700,
1690, 5600, 7400, 2500, 16200, 14000, 2080, 7190, 7330,
8560, 8600, 3580, 7280, 12700, 14400, 7500, 4820, 8780,
1580, 5500, 9500, 5240, 5850, 2240, 1740, 7180, 4340, 832,
5900, 3630, 6690, 5440, 2410, 1990, 12000, 10800, 2220,
8770, 5380, 1950, 4080, 10200, 9990, 1470, 710, 8720,
2000, 1860, 2200, 1020, 5000, 6840, 2760, 2320, 2340,
3980, 966, 952, 5200, 1950, 3550, 3270, 3140, 2250,
1860, 1090, 6620, 1050, 3700, 2290, 2480, 1490, 9000,
5080, 2930, 5010, 5660, 6010, 8420, 7500, 9280, 1110,
2540, 2480, 5730, 3330, 5580, 7410, 936, 4930, 2530,
3310
//N
40
60
80
100
Years
Out[3]= -Graphics-
An alternative might have been to use a vertical bar chart or ListPlot with
PlotJoined > True. Notice that the horizontal axis shows the number of each
data point, not the year. The same data set can be shown in a histogram, in this case
scaled so that it can be subsequently shown with a PDF.
In[4]:= PeakFlowHistogram HistogramPeakFlow,
HistogramScale
1, BarStyle
GrayLevel0.6
175
From In[4]:=
0.00012
0.0001
0.00008
0.00006
0.00004
0.00002
5000
10000
15000
Out[4]= -Graphics-
0.6
0.4
0.2
7.5
8.5
9.5
Out[5]= -Graphics-
It is difcult to say if this improved things, but at least the distribution might be a
little more symmetric. Now, we can t a log-normal distribution to the data using
the method of moments.
In[6]:= meanval MeanLogPeakFlow
Out[6]= 8.2935
176
5 Probabilistic Simulation
Recall that weve already used the variable names meanval and dev, so their
previous values will be overwritten unless you have cleared them or restarted the
kernel. Now, plot the PDF with this mean and standard deviation but suppress its
output, then show it along with the rst histogram.
In[8]:= PlotPDFLogNormalDistributionmeanval, dev, x,
x, 0, 17000
, PlotStyle
Thickness0.008,
DisplayFunction
Identity
Out[8]= -GraphicsIn[9]:= ShowPeakFlowHistogram, %,
DisplayFunction
$DisplayFunction
From In[9]:=
0.000175
0.00015
0.000125
0.0001
0.000075
0.00005
0.000025
5000
10000
15000
Out[9]= -Graphics-
The log-normal distribution appears to be a fair representation but, because the peak
annual discharges are extreme values, perhaps we can do better with an extreme
value distribution. The two extreme value distribution parameters and are related
to the mean and standard deviation by (Chow et al., 1988)
StandardDeviationPeakFlow 6.
Out[10]= 2599.56
In[10]:=
If you look this up in Chow et al. (1988), be aware that they use the parameters u
and , which correspond to our and . The two alphas are not equal! Well use
the Mathematica notation in this example. Alternatively, and could have been
determined as described in Chapter 4. As above, we can plot the resulting PDF and
superimpose it on the histogram
177
10000
15000
Out[13]= -Graphics-
The agreement between the observed peak ows and the theoretical distribution
seems to have improved, particularly with regard to the height and location of the
peak of the distribution.
5.2.3 Empirical Cumulative Distribution
A third method, which may be the most familiar, is to establish an empirical cumulative distribution using the measured discharges and without relying on any theoretical probability distribution. The Weibull formula, P m/(n1), is often used for
this in the United States (Chow et al., 1988). The variable m is the rank of a given
ood and n is the total number of data. This approach assumes that oods are ranked
from largest to smallest and gives the probability that a given discharge will be exceeded. If the oods are ranked from smallest to largest, the same formula gives the
probability that the discharge will not be exceeded. Well do the latter, which will
allow the results to be superimposed on the previous plot for comparison. First, sort
or rank the discharges from smallest to largest and suppress the output (remove the
semicolon if you would like to see the sorted list).
In[14]:= RankedFlow SortPeakFlow
Now, create a table containing each discharge and its corresponding probability of
not being exceeded. Plot the results with discharge on the horizontal axis and the
cumulative probability on the vertical axis. The rst column in the table will be
178
5 Probabilistic Simulation
the peak discharge value of rank m and the second column will be the Weibull
cumulative probability.
In[15]:= n LengthPeakFlow
m
, m, n
n1
FrequencyPlot1 ListPlot%,
PlotStyle
GrayLevel0.6, PointSize0.02
Table
RankedFlowm, N
From In[15]:=
1
0.8
0.6
0.4
0.2
2500
5000
7500
Out[15]= -Graphics-
179
From In[18]:=
1
0.8
0.6
0.4
0.2
2500
5000
7500
Out[18]= -Graphics-
180
5 Probabilistic Simulation
The recurrence interval for a given peak annual discharge can be estimated by multiplying the cumulative probability of that discharge by the number of years of data.
For example, the peak ow of 8,000 cfs would have estimated recurrence intervals
of
In[21]:= LengthPeakFlow
CDFLogNormalDistributionmeanval, dev, 8000.
Out[21]= 83.5296
and
In[22]:= LengthPeakFlow
CDFExtremeValueDistribution, , 8000.
Out[22]= 84.1728
using the two theoretical CDFs. The actual peak ow data can also be used for the
estimate, either by reading from the graph above or using the table of cumulative
probabilities below:
In[23]:= CumFreqsPeakFlow
Out[23]=
710., 0.00990099, 832., 0.019802, 936., 0.029703, 952., 0.039604,
966., 0.049505, 1020., 0.0594059, 1050., 0.0693069,
1090., 0.0792079, 1110., 0.0891089, 1470., 0.0990099,
1490., 0.108911, 1580., 0.118812, 1690., 0.128713, 1740., 0.138614,
1860., 0.148515, 1860., 0.158416, 1950., 0.168317, 1950., 0.178218,
1990., 0.188119, 2000., 0.19802, 2080., 0.207921, 2200., 0.217822,
2220., 0.227723, 2240., 0.237624, 2250., 0.247525, 2290., 0.257426,
2320., 0.267327, 2340., 0.277228, 2410., 0.287129, 2480., 0.29703,
2480., 0.306931, 2500., 0.316832, 2530., 0.326733, 2540., 0.336634,
2760., 0.346535, 2930., 0.356436, 3110., 0.366337, 3140., 0.376238,
3270., 0.386139, 3310., 0.39604, 3330., 0.405941, 3550., 0.415842,
3580., 0.425743, 3630., 0.435644, 3700., 0.445545, 3980., 0.455446,
4080., 0.465347, 4340., 0.475248, 4700., 0.485149, 4820., 0.49505,
4930., 0.50495, 5000., 0.514851, 5010., 0.524752, 5080., 0.534653,
5160., 0.544554, 5200., 0.554455, 5240., 0.564356, 5300., 0.574257,
5380., 0.584158, 5440., 0.594059, 5500., 0.60396, 5580., 0.613861,
5600., 0.623762, 5660., 0.633663, 5730., 0.643564, 5850., 0.653465,
5890., 0.663366, 5900., 0.673267, 6010., 0.683168, 6270., 0.693069,
6620., 0.70297, 6690., 0.712871, 6840., 0.722772, 6860., 0.732673,
7180., 0.742574, 7190., 0.752475, 7280., 0.762376, 7330., 0.772277,
7400., 0.782178, 7410., 0.792079, 7500., 0.80198, 7500., 0.811881,
8420., 0.821782, 8560., 0.831683, 8600., 0.841584, 8720., 0.851485,
8770., 0.861386, 8780., 0.871287, 8790., 0.881188, 8980., 0.891089,
9000., 0.90099, 9280., 0.910891, 9500., 0.920792, 9990., 0.930693,
10200., 0.940594, 10800., 0.950495, 12000., 0.960396,
12700., 0.970297, 14000., 0.980198, 14400., 0.990099, 16200., 1.
The values bracketing 8000 cfs are 7500 and 4080 cfs, and a value for 8420 cfs can
be easily interpolated
181
,
InterpolationOrder
1
Out[24]= InterpolatingFunction7500., 8420., <>
You may be wondering why it wouldnt have been easier to interpolate the entire
cumulative frequencies list. The answer is that there are duplicate discharge values, which will return an error from Interpolation. The empirical recurrence
interval is thus
In[25]:= %8000. LengthPeakFlow
Out[25]= 82.5435
In this case, there was little difference between the results returned by the three
different methods.
182
5 Probabilistic Simulation
The total probability of having either zero or some other number of 100 year oods
must be 1. Thus, the probability of one or more 100 year oods (i.e., more than
zero) per century is
In[28]:= 1 %%
Out[28]= 0.633968
The probability of exactly one 100 year ood per century is, according to the two
distributions,
In[30]:= PDFBinomialDistribution100, 0.01, 1
Out[30]= 0.36973
and
In[31]:= PDFPoissonDistribution100 0.01, 1
Out[31]= 0.367879
The probability of having two or more 100 year oods in a century can be found
from the information above, or directly from
In[32]:= 1 CDFBinomialDistribution100, 0.01, 1
1 CDFPoissonDistribution100 0.01, 1
Out[32]= 0.264238
Out[32]= 0.264241
So far the two distributions have produced similar results. As discussed above,
however, the binomial distribution overestimates the exceedance probability of
events with short recurrence intervals over short periods of time. For example, consider the differences for a ood with a 2 year recurrence interval over a 3 year
period:
In[33]:= 1 CDFBinomialDistribution3, 1/2., 0
Out[33]= 0.875
In[34]:= 1 CDFPoissonDistribution3 1/2., 0
Out[34]= 0.77687
183
K
Hw Hcr
For lack of information to the contrary, it was assumed that f was uniformly
distributed between 0.02 and 0.25, using values taken from published literature, and
that Hcr was uniformly distributed between 0.1 and 1.0 m of water. The distance
to the water table L 10 m, was assumed to be known with certainty. In this case
there was no ponding of water to help drive the wetting front, so Hw 0. There were
some hydraulic conductivity data available, and hydraulic conductivity is very often
log-normally distributed, so a log-normal distribution was used for K. The following
PDF is used to specify K in the simulation.
t
184
5 Probabilistic Simulation
From In[35]:=
PDF
1.510
1.2510
110
7.510
510
2.510
K
-8
510
-7
110
-7
1.510
-7
210
-7
2.510
-7
310
Out[35]= -Graphics-
Here is the Monte Carlo simulation of the travel time of a Green-Ampt wetting
front. First, the number of trials, ntrials, is set to 1000. A blank array of zeroes,
t, is then dened and lled. Values for f, K, and Hcr are selected at random and
one realization calculated. The realization is converted from seconds to years and
the process is repeated 999 more times.
In[36]:= ntrials 1000
t Table0, ntrials
Do
Blockf, K, L, Hw, Hcr
,
f RandomUniformDistribution0.02, 0.25
K RandomLogNormalDistribution 17., 0.75
L 10.
Hw 0.
Hcr RandomUniformDistribution 0.1, 1.
Hw L Hcr
ti f/K L Hw Hcr Log
Hw Hcr
ti ti/3600./24./365.25
, i, ntrials
Below are the results, showing that travel time is likely to be on the order of months,
not years.
In[37]:= Mint
Out[37]= 0.0312399
185
In[38]:= Maxt
Out[38]= 9.51068
In[39]:= Histogramt, HistogramCategories
50,
BarStyle
GrayLevel0.6
From In[39]:=
140
120
100
80
60
40
20
2
-Graphics-
Another way to illustrate the results is with cumulative plot, which shows a 60%
probability that the travel time will be 1 year or less.
In[40]:= CumFreqPlott, 0, 7
From In[40]:=
1
0.8
0.6
0.4
0.2
1
Out[40]= -Graphics-
186
5 Probabilistic Simulation
Tan
H
2
Tan
For example,
In[42]:= FS20. , 20. , 0.3
Out[42]= 0.85
187
or by plotting it
In[44]:= PlotFS25. , 20. , H, H, 0, 1
,
AxesLabel
"H", "FS"
, AxesOrigin
0, 0.65
From In[44]:=
FS
1.2
1.1
1
0.9
0.8
0.7
0.2
0.4
0.6
0.8
Out[44]= -Graphics-
Below is a Monte Carlo simulation similar to that used for the wetting front model
just discussed. The logic is the same; only the variables have been changed.
In[45]:= ntrials 1000
results Table0, ntrials
Do
Block, , H
,
RandomUniformDistribution25. , 35.
RandomUniformDistribution20. , 30.
H RandomUniformDistribution0.01, 1
resultsi FS, , H
, i, ntrials
0.5
0.75
Out[46]= -Graphics-
1.25
1.5
1.75
188
5 Probabilistic Simulation
Log
1 FS
Out[48]= -Graphics-
According to the graph, the probability that FS 1 (or Log[FS] 0) is about 0.65.
This is the probability of sliding. We can also calculate a probability by tting a
theoretical PDF, in this case a log-normal distribution, to the results.
In[49]:= meanval MeanLogresults
dev StandardDeviationLogresults
Out[49]= 0.0834508
Out[49]= 0.258332
In[50]:= CDFLogNormalDistributionmeanval, dev, 1
Out[50]= 0.626667
Some practioners prefer not to use this approach because it requires one to assume
that the results are adequately represented by some kind of theoretical probability
distribution. In this case, it shouldnt present a problem.
A third alternative is to use something called a reliability index, which is dened
as the difference between the calculated FS and some critical value (in this case,
FS 1) divided by the standard deviation of the results.
189
meanval Log1.
dev
Out[51]= 0.323038
In[51]:= RI
The reliability index tells how many standard deviations the calculated mean lies
away from the critical value. Thus, a value of 0.36 says that the calculated mean
lies 0.36 standard deviations below the critical value.
5.5.2 Effects of Changing Independent Variable Distributions
So far we have assumed that all of the variables contributing to the factor of safety
are uniformly distributed. But, studies show that water levels in slopes susceptible
to landsliding may be generally low most of the time and only occasionally high
enough to trigger landsliding, for example during and immediately after heavy rainstorms (e.g., Haneberg, 1991; Haneberg and Gkce, 1994). How do we account for
the fact that most of the time pore water pressure will be too low to cause landsliding? One possibility is to simulate H as a log-normally distributed variable, although
we could also simulate it as a Pareto or beta variable. We know that physically possible range of phreatic surface heights is (excluding the possibility of artesian pressures) 0 H 1, but have no idea about its mean or standard deviation. It turns out
that the standard deviation of a uniform distribution is
In[52]:= StandardDeviationUniformDistributionminval, maxval
Out[52]=
maxval minval
2 3
Out[53]= 1.3294
The mean value is just the average of the minimum and maximum values
In[54]:= 0.5 Log1. Log0.01
Out[54]= 2.30259
And here is the PDF that now represents the water levels in the slope.
In[55]:= PlotPDFLogNormalDistribution 2.31, 1.33, x,
x, 0, 1
, AxesLabel
"H", "PDF"
190
5 Probabilistic Simulation
From In[55]:=
PDF
7
6
5
4
3
2
1
0.2
0.4
0.6
0.8
Out[55]= -Graphics-
Although the log-normal distribution is nite at its low end, it continues on to positive innity. Thus, there will always be a small likelihood of selecting a value of H
> 1, which will produce a negative factor of safety. This is the same Monte Carlo
simulation as above, with only the H distribution changed
In[56]:= ntrials 1000
results Table0, ntrials
Do
Block, , H
,
RandomUniformDistribution25. , 35.
RandomUniformDistribution20. , 30.
H RandomLogNormalDistribution 2.31, 1.33
resultsi FS, , H
, i, ntrials
Out[57]= -Graphics-
191
Out[58]=
The number of values removed will differ each time the simulation is run, and there
is always a chance that no negative values will have to be removed. Removing just
a few values out of a thousand is unlikely to have an effect on any inferences made
using the results. The number of non-negative values in this simulation is:
In[59]:= Lengthnewresults
Out[59]= 985
Here is a K-S plot of the results, showing that they are in this case closely represented by a normal distribution:
In[60]:= KSOneListPlotnewresults, FloorMinnewresults,
CeilingMaxnewresults,
AxesLabel
"LognFS", "CumnProb"
192
5 Probabilistic Simulation
From In[60]:=
Cum
Prob
1
0.8
0.6
0.4
0.2
0.25 0.5 0.75
Log
2 FS
Out[60]= -Graphics-
The K-S statistic between newresults and a normal distribution having the sample mean and standard deviation is
In[61]:= KSOneListnewresults
Out[61]= 0.0473539
193
In[62]:= Clear, , H
In[63]:= SeismicFS_, _, H_, Cs_
1 H/2 Cos Cs Sin Tan
Sin Cs Cos
in which Cs is a coefcient of seismic acceleration given in terms of g, the gravitational acceleration. It is easy to show that this equation reduces to the static FS
equation used above if Cs is zero. To wit,
In[64]:= SeismicFS, , H, 0
H
Out[64]= 1 Cot Tan
2
Now, back to earthquakes. According to the USGS national earthquake hazard maps
(http://eqint.cr.usgs.gov/eq/html/zipcode.shtml), a peak ground acceleration of 0.12
g has an 0.10 probability of being exceeded in 50 years in Socorro, New Mexico.
Well use that for an example. The Monte Carlo simulation is similar to the previous two except that a xed Cs value is specied. Thus, the results will show the
probability of a landslide given the specied value of Cs.
In[65]:= ntrials 1000
results Table0, ntrials
Cs 0.12
Do
Block, , temp
,
RandomUniformDistribution30. , 35.
RandomUniformDistribution20. , 25.
H RandomLogNormalDistribution 2.31, 1.33
resultsi SeismicFS, , H, Cs
, i, ntrials
Out[66]= -Graphics-
194
5 Probabilistic Simulation
As before, well censor the offensive negative values by simply removing them
In[67]:= newresults TableNull, 0
len Lengthresults
Do
Ifresultsi ( 0,
AppendTonewresults, resultsi, i, len
Out[67]=
and then take another look at the histogram of newly censored results.
In[68]:= SeismicHistogram Histogramnewresults,
HistogramScale
1, BarStyle
GrayLevel0.6,
HistogramRange
0, 1.5
From In[68]:=
3
2.5
2
1.5
1
0.5
0.25
0.5
0.75
1.25
1.5
Out[68]= -Graphics-
195
From In[72]:=
3
2.5
2
1.5
1
0.5
0.5
1.5
Out[72]= -Graphics-
Therefore, the normal distribution appears to be the better choice of the two although
it is not the only possibility. We can either use the empirical distribution just as
produced by the Monte Carlo simulation or try to t a different distribution.
Computer Note: Fit a beta distribution to the results to see if it agrees more
closely. Using the beta distribution as implemented by Mathematica, you will
have to rescale the Monte Carlo output so that it ranges between 0 and 1. Alternatively, you can write your own implementation of the beta function PDF.
The probability of landsliding assuming a normal distribution is:
In[75]:= CDFNormalDistributionmeanval, dev, 1.
Out[75]= 0.462072
whereas a cumulative probability plot of the Monte Carlo results suggests a lower
value of aproximately 0.35.
In[76]:= CumFreqPlotnewresults, 0, 1.5
196
5 Probabilistic Simulation
From In[76]:=
1
0.8
0.6
0.4
0.2
0.2
0.4
0.6
0.8
1.2
1.4
Out[76]= -Graphics-
Because the acceleration that we used (0.12 g) is a value inferred to have a 0.10
probability of being exceeded in 50 years, the conditional probability of a landslide
due to an earthquake during a 50 year interval is (using the cumulative plot result)
In[77]:= 0.35 0.1
Out[77]= 0.035
197
Flat-Lying Ellipsoids
Herbison-Evans (2002) gives a good description of the mathematics behind 3-D ellipsoid geometry. First, well need to dene a vector containing the three coordinate
axes.
In[78]:= X x, y, z
Out[78]= x, y, z
Just to be exible, well include a vector containing the distances that the center
of the ellipsoid is removed from the coordinate system origin {0, 0, 0}. This isnt
necessary if we only want to generate ellipses centered at the origin.
In[79]:= U x, y, z
Out[79]=
x,
y,
z
Out[80]=
1
1
1
, 0, 0, 0, 2 , 0, 0, 0, 2
a2
b
c
0 0
1
2 0
b
1
0 2
c
We can now assemble X, U, and V into the equation dening a at lying 3-D ellipsoid. Well consider the more general (and more complicated) problem of a rotated
ellipsoid further on in this chapter.
In[82]:= FlatEllipsoid X U.V.X U
Out[82]=
x
x2 y
y2 z
z2
a2
b2
c2
Out[83]=
x2 y2 z2
a2 b2 c2
198
5 Probabilistic Simulation
third coordinate (in this case, z 0 to produce a slice through the center of the ellipsoid). Here is an example of the elliptical cross-section normal to the z axis for an
ellipsoid with a 5, b 3, and c 1:
In[84]:= ImplicitPlot
FlatEllipsoid 1 /. a
5., b
3., c
1., x
0,
y
0, z
0., z
0
,
x, 5, 5
, AspectRatio
3/5
From In[84]:=
3
2
1
-4
-2
-1
-2
-3
Out[84]= -Graphics-
The technique can be extended into the realm of probabilistic simulation by letting
, AspectRatio
0.6,
DisplayFunction
Identity, i, 10
, j, 10
, DisplayFunction
$DisplayFunction
199
From In[85]:=
Out[85]= -GraphicsArray-
Computer Note: The ellipse-plotting routine above is fairly slow, taking about
21 seconds to execute on my computer. Thats long enough to make some people
impatient, but not quite long enough to step out for a cup of coffee. If there were
a simple way of determining the apparent axis lengths of the ellipse formed
when the ellipsoid intersects a plane, the ellipses could have been drawn more
quickly using Circlex, y, a,b
. Although, as shown below, it is possible to
derive simple expressions for a and b in special cases such as at-lying ellpsoids,
in general the problem is much more difcult.
How would you describe this apparent clast size distribution if you saw it in an outcrop face or thin section? Would you have inferred that it represented a population
of identically sized clasts? What kind of implications does this have for day-to-day
eldwork?
With a little more work, we can also generate an apparent clast size distribution
curve. This is done by solving FlatEllipsoid 1 for x with y 0 (which
will yield the maximum x dimension, or apparent a) and then for y with x 0
(which will yield the maximum y dimension, or apparent b). Heres how:
In[86]:= NSolve1 FlatEllipsoid /.
x
0, y
0, y
0, z
0
, x
2
2
z
1.
a
1. a2
z2
, x a2
Out[86]= x 1. a2
2
c
c2
In[87]:= NSolve1 FlatEllipsoid /.
x
0, y
0, x
0, z
0
, y
1. b2
z2
1. b2
z2
2
, y b2
Out[87]= y 1. b
2
c
c2
200
5 Probabilistic Simulation
Thus,
the a and b values of the apparent ellipse are scaled by a uniform factor of
1
z/ c2 as long as the clasts are all aligned with their x and y axes parallel to
the outcrop plane. This pleasantly simple result is, unfortunately, not correct if
the clasts have random orientations.
The next problem is to dene what we mean by clast size, which is a non-trivial
problem. Is it the longest axis? The intermediate axis, as many people argue when
interpreting the results of a sieve analysis? The shortest axis? One way to distill the
sizes of the apparent ellipses into a 1-D measurement that is somewhat akin to that
used when sediments are sieved is to take the radii of circles having the same area
as the ellipses generated by the Monte Carlo simulation. The following user-dened
fuction takes a, b, c, and
z as input and returns an apparent grain size
a
z 2
1. b
c
1.
z 2
c
Now, generate a set of 100 apparent clast sizes using the same list of random offsets
as we used in the graph.
In[89]:= ClastSizeResults Table
ClastSize5., 3., 1., RandomOffsetsi, i, 100
How does the mean of the apparent grain sizes compare to their true size?
In[90]:= MeanClastSizeResults
Out[90]= 2.85976
The true clast size is taken to be the radius of a sphere having the same volume as
the ellipsoid, 4/3 a b c, or
In[91]:= TrueClastSize
3
Out[91]= 2.46621
The error in estimated mean grain size introduced by the outcrop effect is thus
MeanClastSizeResults TrueClastSize
TrueClastSize
Out[92]= 15.9578 Percent
The error that occurs if the equivalent radii calculated from the outcrop dimensions
of the ellipsoids are used to estimate the true total volume of the clasts (remember,
the clasts are all the same size; they just have different offsets from the x-y plane).
In[93]:= len LengthClastSizeResults
Out[93]= 100
201
len
In[94]:=
4
ClastSizeResultsi3
3 i1
Out[94]= 13022.8
In[95]:= 100.
The overestimate is in part an artifact of the clast aspect ratios that we chose, with
the largest possible cross-sectional area parallel to the outcrop plane. Although this
value gives a good indication of the magnitude of error that can be introduced by
the outcrop effect, it is specic to one simulation based on one grain size with an
orientation that maximizes one component of the error. The error introduced for
other grain shapes, particularly if they are not drawn from a uniform distribution or
if they are randomly oriented, may be signicantly different.
Why does the outcrop over predict the true clast size when the outcrop effect
causes the clasts to appear smaller? Because there are two factors at play. First, the
outcrop effect does make the clasts appear smaller. Second, we chose the clast orientation such that the short dimension is perpendicular to the outcrop plane. The result
is that the two longest semi-axes are used to calculate an equivalent radius from the
elliptical area, whereas all three semi-axes are used to calculate an equivalent radius
from the ellipsoidal volume. In this case, the error introduced by ignoring the third
dimension when calculating the equivalent radius outweighs that introduced by the
oucrop effect.
Here is a histogram of the apparent grain size distribution:
In[97]:= HistogramClastSizeResults, BarStyle
GrayLevel0.6
From In[97]:=
20
15
10
5
Out[97]= -Graphics-
202
5 Probabilistic Simulation
From In[98]:=
Cum.
Prob
1
0.8
0.6
0.4
0.2
0.5
1.5
2.5
3.5
Equiv.
4 Size
Out[98]= -Graphics-
Ry Cos, 0, Sin
, 0, 1, 0
, Sin, 0, Cos
Rz Cos, Sin, 0
, Sin, Cos, 0
, 0, 0, 1
R Rx.Ry.Rz
As formulated above, the rotation matrix R performs rotations around the xed coordinate axes. Reversing the order would perform the rotations around the axes of
the ellipsoid, which change after each of the three incremental rotations.
The equation describing an ellipsoid with arbitrary orientation is (HerbisonEvans, 2002)
203
1 Sin x
x Cos Cos z
z Sin y
y Cos Sin
2
a
1
Cos Cos z
z Cos Cos
c2
y
y Cos Sin Sin Cos Sin
1
x
x Cos Cos Sin Sin Sin 2 Cos Sin
b
z
z Cos Sin x
x Cos Sin Cos Sin Sin
y
y Cos Cos Sin Sin Sin
1 Cos Cos x
x Cos Cos
x
x
2
a
z
z Sin y
y Cos Sin
1
Cos Cos Sin Sin Sin
c2
z
z Cos Cos y
y Cos Sin Sin Cos Sin
x
x Cos Cos Sin Sin Sin
1
Cos Sin Cos Sin Sin
b2
z
z Cos Sin x
x Cos Sin Cos Sin Sin
y
y Cos Cos Sin Sin Sin
1 Cos Sin x
x Cos Cos
y
y
2
a
z
z Sin y
y Cos Sin
1
Cos Sin Sin Cos Sin
c2
z
z Cos Cos y
y Cos Sin Sin Cos Sin
x
x Cos Cos Sin Sin Sin
1
Cos Cos Sin Sin Sin
b2
z
z Cos Sin x
x Cos Sin Cos Sin Sin
y
y Cos Cos Sin Sin Sin
That is quite a mess, but fear not. There is sense to be made of it.
As above, the rotated ellipsoid equation can be used to generate an array of
apparent clast shapes. The true clast shape is the same as above, but this time the
roll, pitch, and yaw are allowed to range over intervals of 10 . The angles used
for the simulation of a real geologic material would almost certainly depend on its
genesis and be constrained by the results of eld or petrographic fabric analysis. One
might expect a melange formed by shearing to have a different degree of angular
dispersion than, say, an ablation till consisting of material dropped into place as a
glacier recedes.
204
5 Probabilistic Simulation
,
AspectRatio
4/6.,
DisplayFunction
Identity
, i, 10
, j, 10
, DisplayFunction
$DisplayFunction
From In[101]:=
Out[101]= -GraphicsArray-
Although the procedure is a little more complicated than for the at-lying ellipsoids,
it isnt too difcult to generate an apparent clast size distribution curve for the randomly rotated ellipses. See the optional interlude at the end of the chapter if youre
wondering how eigenvalues are related to the ellipse areas.
205
temp2 temp1 /. x
0, y
0
EllipseEqn Expandtemp1/temp2
AandB 1/SqrtEigenvalues
CoefficientEllipseEqn, x2 ,
CoefficientEllipseEqn, x y/2
,
CoefficientEllipseEqn, x y/2,
CoefficientEllipseEqn, y2
AppendToRotatedClastResults,
AandB1 AandB2
, i, 100
In[103]:= HistogramRotatedClastResults,
BarStyle
GrayLevel0.6
From In[103]:=
10
8
6
4
2
1
Out[103]= -Graphics-
206
5 Probabilistic Simulation
In[104]:= CumFreqPlotRotatedClastResults, 0, 4,
AxesLabel
"Equiv.nSize", "Cum.nProb"
From In[104]:=
Cum.
Prob
1
0.8
0.6
0.4
0.2
0.5
1.5
2.5
3.5
Equiv.
4 Size
Out[104]= -Graphics-
and
MeanRotatedClastResults TrueClastSize
TrueClastSize
Out[106]= 18.7132 Percent
which means that, in this case, adding random orientations changed the error in the
apparent mean from about 20% to about 21% (the exact values will differ if you
perform the simulation on your own computer with a different random seed). In
terms of total clast volume,
100
In[107]:=
4
RotatedClastResultsi3
3 i1
Out[107]= 6028.1
4
In[108]:= 100. TrueClastSize3
3
Out[108]= 6283.19
%% %
In[109]:= 100 Percent
%
Out[109]= 4.05987 Percent
Thus, when the possibility of random orientation is considered the outcrop effect
causes the total clast volume to be underestimated instead of overestimated (as for
the at-lying ellipsoids).
207
The results will be different for larger degrees of angular variation. For example,
here is an extreme case in which each of the three angles is allowed to vary over a
range of 89 :
In[110]:= Random TableRandomReal, 89. , 89.
, 100
Random TableRandomReal, 89. , 89.
, 100
Random TableRandomReal, 89. , 89.
, 100
Show
GraphicsArray
Table
ImplicitPlot
RotatedEllipsoid 1/.
a
5., b
3., c
1., x
0., y
0.,
z
0.,
Randomi j,
Randomi j,
Randomi j,
z
RandomOffsetsi j
, x, 6., 6.
,
Axes
None, PlotRange
6., 6.
, 6., 6.
,
AspectRatio
1.,
DisplayFunction
Identity
, i, 10
, j, 10
, DisplayFunction
$DisplayFunction
From In[110]:=
Out[110]= -GraphicsArray-
208
5 Probabilistic Simulation
temp2 temp1 /. x
0, y
0
EllipseEqn Expandtemp1/temp2
AandB 1/SqrtEigenvalues
CoefficientEllipseEqn, x2 ,
CoefficientEllipseEqn, x y/2
,
CoefficientEllipseEqn, x y/2,
CoefficientEllipseEqn, y2
AppendToRotatedClastResults2,
AandB1 AandB2
, i, 100
In[112]:= InclinedClastSizeResults2 Table
inclinednewa inclinednewb /.
x
0., y
0., z
RandomOffsetsi, a
5.,
b
3., c
1.,
Randomi,
Randomi,
Randomi
, i, 100
In[113]:= HistogramRotatedClastResults2,
BarStyle
GrayLevel0.6
From In[113]:=
25
20
15
10
5
0.5
Out[113]= -Graphics-
1.5
209
In[114]:= CumFreqPlotRotatedClastResults2,
MinRotatedClastResults2,
MaxRotatedClastResults2,
AxesLabel
"Equiv.nSize", "Cum.nProb"
From In[114]:=
Cum.
Prob
1
0.8
0.6
0.4
0.2
0.5
1.5
Equiv.
Size
Out[114]= -Graphics-
What about the mean clast size and error due to the outcrop effect?
In[115]:= MeanRotatedClastResults2
Out[115]= 0.535597
MeanRotatedClastResults2 TrueClastSize
TrueClastSize
Out[116]= 78.2826 Percent
In[117]:=
4
RotatedClastResults2i3
3 i1
Out[117]= 265.884
4
In[118]:= 100. TrueClastSize
3
Out[118]= 1033.04
%% %
In[119]:= 100 Percent
%
Out[119]= 74.2621 Percent
We can safely conclude that variable clast orientation can combine with the outcrop
effect to produce signicant errors in the mean clast size and clast volume estimates.
In the three simulations above, the errors in estimated mean values range from 75%
to 25%. The errors in estimated total clast volume range over an even broader
range, from about 56% to 113%. Thats worth thinking about next time youre
looking at an outcrop.
210
5 Probabilistic Simulation
-2
-1
-2
-3
Out[121]= -Graphics-
The semi-axes correspond the the square roots of the reciprocals of the two eigenvalues shown below.
In[122]:= 1./
Sqrt
Eigenvalues
CoefficientEllipseEqn, x2 ,
CoefficientEllipseEqn, x y/2.
,
CoefficientEllipseEqn, x y/2.,
CoefficientEllipseEqn, y2
Out[122]= 2.9976, 4.99932
211
Computer Note: The CompGeosci package will load correctly only if it is located in one of the directories in Mathematicas standard le path. Execute the
statement $Path to see a list of the default paths on your computer and place
the le CompGeosci.m in one of those directories. The specic le paths may
differ from one operating system to another. See Chapter 1 for more information
about installing the CompGeosci package.
214
6.3 Interpolation
6.3.1 Finding a Single Interpolating Polynomial
The Mathematica function InterpolatingPolynomial returns an n 1 order
equation representing the polynomial passing through n data. For example, consider
the set of 5 equally spaced elevation measurements below:
In[2]:= data 177.5, 178., 178.8, 180.6, 182.6
The 4th order polynomial that passes exactly through each data point is, specifying
x as the independent variable,
In[3]:= f SimplifyInterpolatingPolynomialdata, x
Out[3]= 0.0625 10.6878 x 4.56503 x 57.4215 5.7439 x x2
182
181
180
179
Out[4]= -Graphics-
and then the interpolated curve. The plot range has been deliberately chosen to
exceed the range of the independent variable used to obtain the interpolation polynomial.
6.3 Interpolation
215
182
180
178
Out[5]= -Graphics-
Combining the two using Show illustrates that the interpolating polynomial does
indeed pass exactly through each point and produces a reasonable result for values
of 1 x 5. Outside of the range of the data, however, the interpolated curve
contains twists and turns that are not supported by the data and almost certainly not
desirable. It is almost never a good idea to use an interpolated function outside of
the range of the data from which it was derived!
In[6]:= Showdataplot, polyplot
From In[6]:=
fx
182
180
178
Out[6]= -Graphics-
InterpolatingPolynomial can also work with data points that are not uniformly spaced as long as their x values are known. To illustrate this, explicitly assign
an x coordinate to each value in data.
216
Note that, because the number of points has been reduced, the result is a 3rd order
polynomial. The line interpolated from the reduced data set can be compared to the
original data set by superimposing plots.
In[10]:= Plot%, x, 1, 5
, AxesLabel
"x", "fx"
,
DisplayFunction
Identity
Out[10]= -GraphicsIn[11]:= Show%, dataplot, DisplayFunction
$DisplayFunction
From In[11]:=
fx
182
181
180
179
Out[11]= -Graphics-
The new interpolated line passes exactly through points 1, 2, 4, and 5 but does not
pass through point 3 (which was dropped from the data set).
Problems with High-Order Polynomials
One of the drawbacks to using a single polynomial that passes through each data
point is that the order of the polynomial will increase with the number of data,
which can lead to unreasonably large uctuations in the interpolated curve between
6.3 Interpolation
217
the data points. To illustrate, we can use a data set consisting of 20 elevation measurements (the rst 5 of which are the same as before).
In[12]:= data 177.5, 178., 178.8, 180.6, 182.6, 184.8, 187.3,
190.2, 194.3, 198.8, 201.6, 202.6, 202.9, 203.8, 205.4,
207., 211.9, 217.1, 221.1, 222.6
As above, we can compare the data to the interpolated curve by superimposing plots
In[14]:= dataplot ListPlotdata, PlotStyle
PointSize0.015,
AxesLabel
"x","fx"
, DisplayFunction
Identity
polyplot Plotf, x, 1, 20
, AxesLabel
"x", "fx"
,
DisplayFunction
Identity
Showdataplot, polyplot,
DisplayFunction
$DisplayFunction
fx
240
220
200
10
15
20
Out[14]= -Graphics-
The polynomial constains oscillations that are not supported by the data, but it does
pass exactly through each point. Although the result is correct in the sense that it
fullls its mathematical obligation to pass through each point in data, it is poor
because it adds oscillations that are not consistent with the general behavior of the
218
210
200
190
10
15
20
Out[16]= -Graphics-
The result represents the data much better than did the 19th order polynomial because Interpolation defaults to a succession of 3rd order polynomials passing through adjacent points. Why 3rd order? Because it is the lowest order polynomial for which curvature will be continuous (i.e., its second derivative is not
zero). Lower order polynomials can be used, but the result may be a jagged
curve if there are many changes in slope. To specify a different order polynomial, use the optionInterpolationOrder
n, where n is the desired order.
6.3 Interpolation
219
Interpolation will also work with irregularly spaced data sets as long as values
for the independent variable are supplied. Refer to the Mathematica documentation
for details.
Computer Note: Following the example used for Interpolating
Polynomial, add x coordinates to data, drop at least one of its points, and
then use Interpolation on the irregularly spaced result.
Mathematica can also interpolate multidimensional sets of equally spaced or
gridded data. Consider the following table of gridded taken from a digital elevation
model (Chapter 7 contains more information about using Mathematica to plot and
analyze digital elevation data).
In[17]:= data2 205.8, 208.3, 213.7, 218.5, 221.3, 222.4
,
206.5, 210., 215.5, 220.3, 222.6, 223.
,
207., 211.9, 217.1, 221.1, 222.6, 221.9
,
207.6, 212.9, 217.8, 220.6, 220.3, 219.2
,
207.5, 212.6, 215.8, 217., 216.4, 215.1
,
205.8, 209.8, 212.5, 212.2, 211.2, 209.4
220
215
5
210
4
1
2
3
3
2
4
5
61
Out[18]= -SurfaceGraphics-
Suppose that the elevation data are located on 30 m centers, but that there is a need
to estimate elevation values every 10 m. This can be most easily accomplished using
ListInterpolation
220
In[19]:= ListInterpolationdata2
Out[19]= InterpolatingFunction1., 6., 1., 6., <>
The result is an interpolating function similar to that obtained from 1D interpolation. When the interpolation function is plotted, however, the order of the two spatial coordinates must be reversed so that the orientation corresponds to the surface in
produced above by ListPlot3D. The discrepancy arises because ListPlot3D
assumes coordinates are given by row and then column, whereas Plot3D assumes
that they are given rst by the x coordinate (which corresponds to the column number) and then the y coordinate (which corresponds to the row number).
In[20]:= Plot3D%x, y, y, 1, 6
, x, 1, 6
,
ColorOutput
GrayLevel, PlotPoints
31
From In[20]:=
220
215
5
210
4
1
2
3
3
2
4
5
61
Out[20]= -SurfaceGraphics-
Out[21]= InterpolatingFunction100., 250., 200., 350., <>
221
From In[22]:=
220
250
215
210
200
200
00
0
150
250
300
350100
Out[22]= -SurfaceGraphics-
ListInterpolation works with arrays of any dimension, but the data must be
regularly spaced or gridded. Although Interpolation will accept irregularly
spaced values in one dimension, it will not do so in two or more dimensions. Chapter
7 discusses different gridding methods that can be used to interpolate irregularly
spaced values in two or more dimensions.
222
y c1 c 2 x
y c1 c2 xc3 x2
y c1 c 2 x c 3 z
L c3 sin2x/
L
y c1 c2 sinx/
When the function being tted is nonlinear in the independent variable x, as in the
second example above, the procedure is known as polynomial regression (although
it may sometimes be incorrectly called nonlinear regression). Despite the nonlinearity in x, polynomial regression is a form of linear regression because the coefcients remain linear. Middleton (2000) gives a good description of the differences
between nonlinear functions with linear coefcients and functions with nonlinear
coefcients. When the function contains two or more independent variables, as in
the third example above, the procedure is known as multiple regression. It is also
possible to perform polynomial multiple linear regressions. An example of a function that cannot be tted using linear regression is y c1 expc2 x. Exponential
relationships of this form have been used to characterize depth vs. porosity relationships in sediments and sedimentary rocks, so they are of interest to a variety
of geoscientists. Nonlinear functions can be tted using the nonlinear regression
methods discussed further on in this chapter.
6.4.1 Derivation of Linear Least Squares Equations
Mathematica includes several functions that t curves and surfaces to data, but it
is instructive to work through the linear regression calculations step-by-step before
introducing functions such as Fit and Regress. This will provide a basic understanding of the calculations while at the same time illustrating how Mathematicas
symbolic manipulation capabilities can be used to derive equations of interest to
geoscientists.
To illustrate how simple linear regression lines are determined, we will use some
rainfall and groundwater level data from a landslide along the Ohio River valley
near Cincinnati, Ohio (Haneberg and Gkce, 1994). The data consist of rainfall (in
mm) and resulting water level changes (in cm) from 14 separate precipitation events
during March, April, and May 1980.
In[23]:= data 1.94, 2.5
, 3.33, 1.89
, 3.22, 1.67
,
5.67, 1.31
, 4.72, 1.02
, 3.89, 0.96
, 2.78, 1.1
,
10.56, 0.15
, 9.44, 3.92
, 12.78, 5.23
,
14.72, 4.22
, 13.61, 3.63
, 20.39, 4.32
,
38.89, 5.89
In[24]:= len Lengthdata
Out[24]= 14
An exploratory plot shows that the data follow a trend but also contain a fair amount
of scatter.
223
From In[25]:=
WL cm
6
5
4
3
2
1
5
10
15
20
25
30
35
rain
40 mm
Out[25]= -Graphics-
Our objective is to determine the straight line y c1 c2 x that best ts the data.
It is obvious that there will be no single straight line that passes through all of the
data, so we will have to develop a criterion to dene what we mean by the best
t. In this example, we will assume that rainfall is the independent variable (x
( ) and
water level change is the dependent variable (y) measured with error. This is an
important distinction in regression analysis, because standard methods assume that
the independent variable is known without error and the dependent variable can
be measured only with some experimental error. Special techniques, which will be
described further on in this chapter, must be used if both variables contain errors.
The most common approach to linear regression is based on the minimization of
the squares of errors between the regression line and the dependent variable, hence
the name least squares. In order to illustrate the general approach, it will be helpful
to use two symbolic arrays representing the rainfall (xi) and water level (yi)
values. Our criterion will be that the best-tting line minimizes the sum of squared
errors between the line and the data, 2 ni1 yi y i 2 where yi are the observed
data and y i c1 c2 xi is the equation of the line that we are tting. We dene the
sum of the squared errors as
n
Out[26]=
c1 c2 xi yi2
i1
The sums of squares of the errors can be minimized by taking the derivative
of 2 with respect to c1 and c2 and setting the result equal to zero. In this case the
224
variables of interest are not x and y, which are both known at each point, but instead
the unknown coefcients c1 and c2 . First, nd the derivatives of 2
In[27]:= Simplifyc1 2
n
Out[27]=
2 c1 c2 xi yi
i1
Next, collect terms to put the result into a more easily understandable form
In[28]:= Collect%, c1, c2
n
Out[28]=
2 c1 c2 xi yi
i1
len
Inspection shows that the result above is of the form 2 n c1 2 len
i1 xi 2 i1 yi .
Unfortunately, Mathematica will not solve equations involving sums with symbolic
limits such as n, and the result would be exceptionally messy if we let n 14 to
correspond to the number of data (although that method does work). As shorthand,
we will use Sx and Sy to represent the two sums, and n to represent the total number
of data without specically using the value of 13. Thus, the results we obtain will
be applicable to data sets of any length. Setting the result above equal to zero and
dividing through by 2, we get:
In[29]:= eq1 c1 n c2 Sx Sy 0
Computer Note: Use Mathematica replacement rules to replace the sums with
shorthand variables such as Sx and Sy instead of manually entering the new
equations.
The same procedure can be repeated for the second constant.
In[30]:= Simplifyc2 2
n
Out[30]=
2 xi c1 c2 xi yi
i1
Out[31]=
2 xi c1 c2 xi yi
i1
A quick inspection shows that this result contains three sums and is of the form
len 2
len
2 c1 len
i1 xi 2 c2 i1 xi 2 i1 xi yi . As above, we can rewrite it using shorthand
terms for the sums and set the result equal to zero.
In[32]:= eq2 c1 Sx c2 Sx2 Sxy 0
Now that the equations have been assembled, they can be solved to determine the
two constants.
225
Sx Sxy Sx2 Sy
n Sxy Sx Sy
, c2
Sx2 n Sx2
Sx2 n Sx2
As usual, the constants can be extracted from the list of replacement rules and assigned to a variable name for future use.
In[34]:= constants %1
Out[34]= c1
n Sxy Sx Sy
Sx Sxy Sx2 Sy
, c2
2
Sx n Sx2
Sx2 n Sx2
len
i1
len
len
i1
WL cm
6
5
4
3
2
1
10
Out[37]= -Graphics-
20
30
rain
40 mm
226
There appears to be a reasonably good t between the data and the regression line.
One aspect that needs to be addressed is the physical signicance of the regression line, which has a non-zero y intercept. According to the regression line, water
level will incease by about 12 cm even if no rain falls. We will accept this dilemma
for now and raise the issue again when we discuss the tting of curves other than
straight lines. The next three sub-sections will also explore the issue of goodnessof-t in more detail.
6.4.2 Residuals
Residuals are the differences between the data and the regression line, and can be
easily calculated in Mathematica if we rst construct tables of the predicted and
observed values.
In[38]:= regressionline /. x
data3, 1
Out[38]= 1.70919
In[39]:= predicted
Tableregressionline /. x
datai, 1, i, len
Out[39]= 1.53302, 1.72433, 1.70919, 2.04638, 1.91563, 1.8014,
1.64863, 2.71939, 2.56525, 3.02493, 3.29193, 3.13916,
4.0723, 6.61845
The statement dataAll, 2 is a quick way to extract a single column, in this
case the second column, from a table. The same result could have been obtained by
looping through each row of data.
In[41]:= Tabledatai, 2, i, len
Out[41]= 2.5, 1.89, 1.67, 1.31, 1.02, 0.96,
1.1, 0.15, 3.92, 5.23, 4.22, 3.63, 4.32, 5.89
Because of the way that Mathematica handles lists, there is no need to subtract terms
one-by-one. Instead, one list is subracted from the other.
227
If the residuals are due solely to random experimental error, we would expect them
to be normally distributed around a mean value of zero. In fact, the existence of normally distributed errors is one of the underlying assumptions of the linear regression
method. The mean in this example is indeed very close to zero.
In[43]:= Meanresiduals
Out[43]= 6.34413 1017
The null hypothesis that there is no signicant difference between the residual distribution and a normal distribution can be evaluated using a Kolmogorov-Smirnov test
as discussed in Chapter 4. Below is a cumulative plot of the residuals (solid line) and
a normal distribution having the sample mean and variance of the residuals (dashed
line).
In[44]:= KSOneListPlotresiduals, 4, 4, AxesOrigin
4, 0
,
AxesLabel
"residualncm", "cum. prob."
From In[44]:=
cum. prob.
1
0.8
0.6
0.4
0.2
-3
-2
-1
Out[44]= -Graphics-
residual
4 cm
228
10
15
20
25
30
35
rain
mm
-1
-2
Out[47]= -Graphics-
It appears that, although they are normally distributed around a mean of zero, the
residuals may be clustered to some degree. This may indicate that a more complicated polynomial equation would provide a better t, but for now we will accept the
results and proceed.
6.4.3 Goodness-of-Fit and the Correlation Coefcient
The goodness-of-t of a regression line can be quantitatively evaluated by calculating a correlation coefcient. To do so, rst calculate the mean value of the groundwater level measurements.
In[48]:= meanval MeandataAll, 2
Out[48]= 2.70071
Computer Note: Mean is a built-in function in Mathematica 5.0, but an addon function in earlier versions. If you are using an earlier version, before using
Mean you will have to load the standard package Statistics DescriptiveStatistics using either Needs or <<. Refer to the Mathematica documentation for
more information about loading packages.
The goodness-of-t is dened as the ratio of the sum of squared deviations from the
mean (often referred to simply as the sum of squares) of the observed values (total
229
sum of squares, SST) and the predicted values (sum of squares due to regression,
SSR). Both can be calculated using the predicted and observed lists created
above.
len
Out[49]= 24.1556
len
Out[50]= 42.4667
Values of the goodness-of-t can range from 0 to 1. Comparing the two sum of
squares equations, it can be seen that the two will be equal if and only if the predicted
values are exactly the same as the observed values, which will yield a value of 1.
The goodness-of-t is conventionally written as r2 or R2 , and its square root is the
correlation coefcient r.
In[52]:=
SSR/SST
Out[52]= 0.754197
This is the same result that is obtained by summing the squares of the differences
between predicted and observed values.
230
Out[54]= 18.3111
One seemingly obvious way to incorporate the number of data into the process is
to calculate mean values for the sums of squares. In this case, however, calculation
of mean sums of squares is not quite as simple as dividing the sums by the number
of data. This is because mean values were used to calculate both SSR and SST,
and those mean values were in turn calculated from a sample of an underlying data
set rather than from the complete (and innite) population of possible rainfall and
groundwater level values. To account for this fact, the degrees of freedom associated
with SST are reduced from len to len 1. This is the same logic that is used when
distinguishing between sample and population standard deviations (see Chapter 4).
Upon rst consideration, it might seem that there would be 2 degrees of freedom
associated with SSR because the regression line has two variables: its slope and y
intercept. It turns out, however, that a line tted to data using least squares methods
will always pass through the mean of the dependent variable and, as above, the
use of a sample mean requires that the degrees of freedom associated with SSR be
reduced from 2 to 1. The number of degrees of freedom associated with SSE is
the difference between those associated with SST and SSR, or len 2. All that
having been written, we can now calculate the means of the sums of squares due to
regression and error.
In[55]:= SSR SSR/1.
Out[55]= 24.1556
In[56]:= SSE SSE/len 2
Out[56]= 1.52592
The ratio of mean sums of squares can now be tested against the null hypothesis
that the ratio is 1 (i.e., there is no difference between the values) at a specied
signicance level using an F ratio test. F ratio tests are used to compare calculated
variances from two data sets of populations, and variances are mean sums of squared
deviations (see the denition in Chapter 4 or refer to a statistics text). If the null
hypothesis is true, then the regression is not signicant because its mean sum of
squares is no different from that of the errors. If the null hypothesis is rejected,
however, then the regression is signicant at the specied level. We will use the
common signicance level of 0.05.
In[57]:= FRatioPValueSSR/SSE , 1, len 2,
SignificanceLevel
0.05
Out[57]= OneSidedPValue 0.00183053,
Reject null hypothesis at significance level 0.05
Would you have predicted that the regression line is statistically signicant if you
had plotted the results, and perhaps even calculated a correlation coefcient, but not
231
Computer Note: Mathematica 5.0 includes the new function FindFit, which
performs both linear and nonlinear regression. The syntax of its use is slightly
different than that of Fit and Regress, and its main advantage is that it will
automatically decided whether linear or nonlinear regression is appropriate.
232
a concave-downward arc that might be better approximated by a x curve than a
straight line. One physical justication for a concave-downward curve is that large
storms might cause the inltration capacity of the soil to be exceeded and generate
runoff, so only a portion of the rain contributes to the water level change during
heavy storms. A contrasting statistical explanation might be that the lone 40 mm
rainfall value is an anomaly that exerts a disproportionately large inuence on the
shape of the trend. Remove that outlier, and the data follow a much stronger straight
line trend. There is no way of knowing how the trend would appear if there were
more data from storms in the 20 to 40 mm range. Therefore, the apparent concavity may or may not represent the actual hydrologic behavior
of the hillside. Field
observations are the only way to ll the gap. Second, a x curve passes through
the origin, thereby eliminating the problem of a non-zero y intercept and producing a more physically plausible relationship than the straight line with a non-zero y
intercept obtained above.
In[60]:= line2 Fitdata,
x, x
Out[60]= 0.93216 x
WL cm
6
5
4
3
2
1
10
20
30
rain
40 mm
Out[61]= -Graphics-
233
Computer Note: Perform an ANOVA for the x regression line to determine
if it is more or less signicant thanthe straight line. With regard to the problem
of a non-zero y intercept, does the x provide a better or worse t than a line of
the form y c1 x?
6.4.6 Can I Solve for the Independent Variable?
The short answer is that yes, you can, but in general no, you should not! At least,
you should not do it unless you have thought about your intended actions and are
condent that you know what you are doing. It can be tempting to rearrange a regression equation to express the independent variable in terms of the dependent
variable. A common reason is that someone has published a regression equation
giving y in terms of x, but you want to the opposite: calculate x in terms of y. Except for special cases in which r2 1, however, the results obtained by rearranging a
regression equation to solve for x in terms of y will be different than those obtained
by performing a new regression with x as the dependent variable.
To show what happens when the change in water level is used as the independent
variable, switch the values in each element of the rainfall and water level data set.
In[64]:= switcheddata Tabledatai, 2, datai, 1
,
i, Lengthdata
Then, use Fit to nd the regression line for rainfall in terms of water level change.
In[65]:= Fitswitcheddata, 1, y
, y
Out[65]= 0.737532 4.13291 y
In order to plot this result on the same set of axes as the rst regression line, we
need to solve for y
In[66]:= SimplifySolve% x, y
Out[66]= y 0.178453 0.24196 x
In[67]:= line3 y /. %1
Out[67]= 0.178453 0.24196 x
234
and combine it with the data plot and previous straight-line regression plot to show
how both lines are related to the data. The dashed line represents the regression line
calculated using water level change as the independent variable, and the solid line
represents the line calculated using rainfall as the independent variable.
In[69]:= Showdataplot, plot1, plot3,
DisplayFunction
$DisplayFunction
From In[69]:=
WL cm
6
5
4
3
2
1
5
10
15
20
25
30
35
rain
40 mm
-Graphics-
The point at which the two lines intersect is given by the means of the rainfall and
water level data. This can be demonstrated by calculating the mean values of each.
The specication dataAll,i returns all of the values in the ith column of
data.
In[70]:= rain MeandataAll, 1
Out[70]= 10.4243
In[71]:= WL MeandataAll, 2
Out[71]= 2.70071
Although they are clearly different, both regression lines appear to t the data
equally well. In fact, they both have the same P value and are therefore equally
signicant from a purely statistical perspective. Which one is correct? The answer
depends on whether rainfall or change in water level is the dependent variable for
which the sum of errors is minimized. In this case, it seems clear that rainfall is the
independent variable, because it is unlikely that rainfall is dependent on the change
in water level.
235
236
From In[72]:=
8
6
4
2
0.2
0.4
0.6
0.8
-2
Out[72]= -Graphics-
Expanding upon this simple illustration, you might imagine how a large number
of curves might be added together to approximate a complicated curve like a topographic prole. Our task will be to determine the amplitudes of the sine curves that
are necessary to replicate a particular prole. This would be virtually impossible to
accomplish by trial-and-error estimation, but it is very easy to do using least squares
methods. To begin, read in a set of elevation values.
In[73]:= ReadList
"/Users/bill/Mathematica_Book/elevations.dat",
Number
Computer Note: You will have to change the le path above to reect the directory in which you have placed the le elevation.dat.
Because this data set consists of only one column, whereas Import assumes the
general case of a multi-column table, it is just as easy to import the data using
ReadList as it is to use Import and remove the extra set of brackets. The data
are from a digital elevation model with 10 m spacing, however, so it is easy to
create a new table that assigns horizontal coordinates starting with the rst elevation
value.
In[74]:= elev Tablei 1 10., %i
, i, Length%
237
From In[75]:=
elev
180
160
140
120
100
500
1000
1500
2000
dist
Out[75]= -Graphics-
1000
1500
2000
dist
Out[77]= -Graphics-
Although the correlation is crude, it is easy to see how sine or cosine curves might
be useful tools. To t a Fourier series to data, rst generate a table of the sine or
cosine curves that will be used.
238
Out[78]= Sin
and then add the constant term using Prepend. Its compliment, Append, could
have been just as easily used to add a term to the end of the list without affecting the
regression results.
In[79]:= terms Prependterms, 1
x
3x
x
x
, Sin
, Sin
, Sin
,
2000
1000
2000
500
x
3x
7x
x
Sin
, Sin
, Sin
, Sin
,
400
1000
2000
250
x
9x
, Sin
Sin
2000
200
Now, use Fit in the usual way. Because sinterms is already a list with its own
brackets, it does not have to be enclosed in another set. Doing so will return an
incorrect result.
In[80]:= Fitelev, terms, x
x
x
30.2979 Sin
2000
1000
3x
x
7.16466 Sin
3.39745 Sin
2000
500
x
3x
0.782245 Sin
3.34785 Sin
400
1000
7x
x
2.26662 Sin
2.57889 Sin
2000
250
9x
x
0.0724944 Sin
1.22961 Sin
2000
200
As usual, we can plot the regression curve along with the data.
In[81]:= Plot%, x, 0, 2000
, AxesLabel
"dist", "elev"
,
DisplayFunction
Identity
Showtopodataplot, %,
DisplayFunction
$DisplayFunction
239
From In[81]:=
elev
180
160
140
120
100
500
1000
1500
2000
dist
Out[81]= -Graphics-
The Fourier series represents many aspects of the topography well, but has trouble
with the details. This problem can be addressed by adding some higher frequency
(shorter wavelength) terms to deal with small-scale elevation changes. Because the
expressions are long, we will use semicolons to suppress the output and display only
the nal plot of the data and Fourier series. Also notice that dataplot is listed
before % in the Show statement in order to place the gray points in the background.
Reversing the order would obscure much of the regression line with the points.
In[82]:= terms TableSinn x/2000, n, 1, 50
In[83]:= terms Prependterms, 1
In[84]:= line Fitelev, terms, x
In[85]:= Plot%, x, 0, 2000
, AxesLabel
"dist", "elev"
,
DisplayFunction
Identity
Showtopodataplot, %,
DisplayFunction
$DisplayFunction
From In[85]:=
elev
180
160
140
120
100
500
1000
Out[85]= -Graphics-
1500
2000
dist
240
A 51 term Fourier series reproduces the topography almost exactly. In most geoscientic applications, a few tens of terms provide an adequate representation of
data. The relative contribution of each waveform can be illustrated by plotting the
absolute value of its amplitude. This is conventionally done in terms of frequency,
ff, where f n/2 is the number of cycles per 2000 m. The sum of terms in line can
be treated as a list, so line4 returns the fourth term of the summation.
In[86]:= line4
Out[86]= 6.51467 Sin
3x
2000
The coefcient can be isolated by using a replacement rule that sets the sin term to
unity
In[87]:= line4 /. Sin__
1
Out[87]= 6.51467
This logic can be extended to create a table of frequencies and coefcients known
as an amplitude spectrum. Notice that i 1 is used to skip the rst term of line,
which is the constant.
In[88]:= Tablei 1/2, AbsPartline, i /. Sin__
1
,
i, 2, Lengthline
ListStemPlot%, 0.02, PlotRange
All,
AxesOrigin
0, 0
, AxesLabel
"f", Ai
From In[88]:=
Ai
50
40
30
20
10
10
15
20
25
Out[88]= -Graphics-
241
The upper-case X and Y variables represent the known data, whereas the lower-case
xand y represent the corresponding point on the regression line yet to be determined.
We differentiate f and set the results equal to zero in order to minimize the function.
This time, however, there are four variables: x, y, c1 , and c2 .
In[90]:= xi f
n
Out[90]=
yi Yi
c2 i
i1
i1
In[91]:= yi f
n
Out[91]=
xi Xi
i
i1
i1
In[92]:= c1 f
n
Out[92]=
i
i1
In[93]:= c2 f
n
Out[93]=
xi i
i1
As in the least squares example, we will use shorthand notation for the summations.
Solving the rewritten forms of the rst two derivatives above, we get
242
Next, substitute the two preceeding solutions into the straight-line equation
In[96]:= Sy n c1 c2 Sx /. %1 /. %%1
Out[96]= SY c2 S c1 n c2 SX S
In[97]:= Solve%, S
Out[97]= S
c1 n c2 SX SY
2 c2
c1 n c2 SX SY
0
2 c2
c2 SX SY
n
The sums of X and Y divided by n are mean values, so in more traditional format
c1 Y c2 X, where the overbars denote mean values. The next task is to determine
Y c X c
c2 . From the results obtained above we know that 22c 1 and x X , and
2
can therefore rewrite the fourth of the derivatives as
In[100]:= x 0/.x
X /.
Y c2 X c1/2c2/.c1
Y c2X
Out[100]=
X
c2 X Y c2 X Y
0
2 c2
which is one equation with one unknown. In this case the order of the replacement
rules is critical, and switching them will produce an incorrect result. Solving for c2 ,
we nd
In[101]:= Solve%, c2
Out[101]= c2
Y Y
YY
, c2
XX
XX
Now it is time for one last application of creative algebraic visualization to discern
the simple patterns in this result. If we square the numerators and denominators,
apply summation operators to the results, divide through by n/n, and then take the
square root, we get
243
c2
1
n
1
n
ni1 Y Y
ni1
X X
sy
sx
It may seem tricky, but the result above is algebraically equivalent to the expression
with which we started: (Y Y )/(X X). It is just in a more convenient form, which
is the ratio of the standard deviation of y to the standard deviation of x.
We still have to compare our newly found reduced major axis regression line to
the water level data set and the two least squares lines. To perform the necessary
calculations, rst break data into two separate lists.
In[102]:= rain dataAll, 1
Out[102]= 1.94, 3.33, 3.22, 5.67, 4.72, 3.89, 2.78,
10.56, 9.44, 12.78, 14.72, 13.61, 20.39, 38.89
In[103]:= WL dataAll, 2
Out[103]= 2.5, 1.89, 1.67, 1.31, 1.02, 0.96,
1.1, 0.15, 3.92, 5.23, 4.22, 3.63, 4.32, 5.89
The slope of the reduced major axis line is the ratio of the standard deviations
In[104]:= c2
StandardDeviationWL
StandardDeviationrain
Out[104]= 0.182486
and, now that the slope has been calculated, the y intercept is found by combining it
with the two mean values.
In[105]:= c1 MeanWL c2 Meanrain
Out[105]= 0.798433
As it has been many times before, the next step is combine a plot of the reduced
major axis line with plots of the data set and the two least squares lines. The reduced
major axis line is the thick line bisecting the two least squares lines, illustrating that
it does indeed do a good job of selecting a line that simultaneously minimizes errors
in both the x and y directions.
In[106]:= Plotc1 c2 x, x, 0, 40
, PlotStyle
Thickness0.007,
DisplayFunction
Identity
Showdataplot, plot1, plot3, %,
DisplayFunction
$DisplayFunction
244
From In[106]:=
WL cm
6
5
4
3
2
1
5
10
15
20
25
30
35
rain
40 mm
-Graphics-
245
Computer Note: You will have to change the le path above to reect the directory into which you have placed the le porosity.dat.
Its length is
In[108]:= len Lengthdata
Out[108]= 6312
As always, plot the data. This is especially important in nonlinear regression because
it can help to provide initial estimates for the coefcients.
In[109]:= dataplot ListPlotdata, PlotStyle
GrayLevel0.7,
PlotRange
All, AxesLabel
"depth", "porosity"
,
AxesOrigin
0, 0.2
From In[109]:=
porosity
1
0.8
0.6
0.4
200
400
Out[109]= -Graphics-
600
800
depth
246
There are a few erroneous porosity values approaching or exceeding 1 at very shallow depths where the well is cased and above the static water level. In general,
though, the high porosity measurements near the surface that range between 0.5 and
1.0 are not unreasonable because sediments in shallow subsurface include pumice
gravels deposited by the ancestral Rio Grande. We will ignore them in this example
because, with more than 6300 data, their inuence will be minor.
Next, perform the regression. If no starting values are specied for the coefcients, Mathematica assigns default values of 1. In this case, the result is
In[110]:= compactioncurve NonlinearFitdata, n0 Exp z/z0,
z
, n0, z0
Out[110]= 0.502093 0.000552994 z
0.5
0.5 Expz0
, z0
0.5 Exp 1000/z0
0.3
Out[112]= 0.502093 0.000552994 z
The porosity value, although high, is not unrealistic for poorly compacted Cenozoic
sediments near the surface. The reference depth is 1/0.00055, or about 1800 m. It,
too, is consistent with the measured porosity data. Finally, a plot of the porosity
data and the Athy compaction curve show that the agreement is reasonably good.
247
Using NonlinearRegress for the same problem will return an extensive table
of diagnostics.
In[113]:= Plotcompactioncurve, z, 0, 1000
,
DisplayFunction
Identity
Showdataplot, %, DisplayFunction
$DisplayFunction
From In[113]:=
porosity
1
0.8
0.6
0.4
200
400
600
800
1000
depth
Out[113]= -Graphics-
Our results show that, while n0 is not appreciably different between the Albuquerque
basin sediments and the shales studied by Athy, the Albuquerque basin sediments
are much less compressible than the muds that lithied to form Athys shales. This
is to be expected, because our data set included porosity measurements from a combination of sandy, silty, and clayey strata. Even so, the Albuquerque basin sediments
are compressible enough that heavy groundwater pumping could cause compaction
of up to 10 cm for every meter the water level decreases once a threshold is exceeded
(Haneberg, 1995).
Computer Note: Dickinson (1953) and Helm (1984) proposed a compaction
curve with three coefcients: n n0 c log (z/
z/z0 ). The curve that represents Dickinsons data from the Gulf of Mexico has the coefcients n0 0.05,
c 0.103, and z0 10,000 m. The signicance of the Dickinson-Helm coefcients is much different than that of the Athy coefcients. In the Dickinson-Helm
model, n0 is the limiting value of porosity at great depth as z/
z/z0 1 and n n0 .
Use NonlinearFit or NonlinearRegress to t a Dickinson-Helm compaction curve to the Albuquerque basin data set, taking care to estimate reasonable initial coefcient values from the data plot. Which curve provides a better
t?
6.5.2 Logistic Regression
Logistic regression is appropriate in cases where the dependent variable y is binary.
For example, y 1 if a landslide exists (or at least is mapped) at a point and y 0 if
248
one does not (e.g., Bernknopf et al., 1988; Jger and Wieczorek, 1994; Ohlmacher
and Davis, 2003). Or, y 1 if a contaminant is present above some threshold amount
and y 0 if it is not (e.g., Focazio et al., 2002; Tesoriero and Voss, 1997). In such
a situation, the dependent variable is actually an estimate of the probability that an
event has occurred or will occur. This section includes a example developed around
a simple univariate relationship between landslide occurrence and slope, but more
complicated problems can be addressed using multivariate logistic regression. The
extension from one variable or dimension to several should be obvious.
At this point is seems logical to ask why standard least squares or reduced major
axis regression is not appropriate in the situations where logistic regression is used.
The primary reason is that they can yield predicted values less than 0 or greater
than 1, which make no sense if the dependent variable is a probability. Other problems include the fact that if the dependent variable can take on values of only 0
and 1, the error between the observations and the regression line will not be normally distributed and will depend on the indpendent variable (recall the discussion
of normally distributed residuals earlier in this chapter). These shortcomings can be
addressed by tting a line that resembles the logistic population curve introduced in
Chapter 3, hence the name logistic regression.
We will use a data set of landslide occurrence and slope angles in a 2 3 km
area near Wheeling, West Virginia to illustrate a geoscientic application of logistic
regression. Most landslides in the area develop in colluvium that covers hillsides
and is underlain by a variety of nearly at lying clastic and carbonate sedimentary
rocks. Water levels in the colluvium can approach or reach the ground surface during
the late winter and early spring of unusually wet years, so landslides are common.
Landslide occurrence was taken from a slope stability map prepared by Davies et al.
(1978) and recoded so that active landslides, dormant landslides, and areas deemed
susceptible to landsliding were assigned values of 1. All other areas were assigned
a value of 0. The map was digitized using 30 30 m cells corresponding to the
resolution of the U.S. Geological Survey digital elevation model of the area, which
was used to calculate a maximum slope angle for each cell using a method that will
be described in Chapter 7. The result is contained in the le logisticdata.dat,
which is imported by the next statement.
In[114]:= data Import
"/Users/bill/Mathematica_Book/logisticdata.dat"
Computer Note: You will have to change the le path above to reect the directory into which you have placed the le porosity.dat.
The data le is large, containing more than 6800 values.
In[115]:= len Lengthdata
Out[115]= 6868
249
10
15
20
25
slope
30 angle
Out[116]= -Graphics-
Although there is a great deal of overlap, notice that the slope angles at which landslides were mapped or inferred (y 1) extend to higher values than slope angles
for which landslides were note mapped or inferred (y 0). Also, notice that there
seem to be more no-landslide slope angles than landslide slope angles at the low
end of the range. Because there are so many data, however, it is difcult to discern
details about the distributions of slope angles for which landslides were or were not
mapped.
One way to make the data more comprehensible is to calculate the percentage of
cells in which landslides were mapped for different slope angle increments, which is
also the probability of landslide occurrence as a function of slope angle. First create
a new table to accumulate to results.
In[117]:= slideangles TableNull, 0
Now loop through data. If a landslide was mapped (i.e., datai, 2 1 is
true), then add that slope angle to slideangles.
In[118]:= DoIfdatai, 2 1,
AppendToslideangles, datai, 1, i, len
Count the number of cells in which landslides were mapped for each 1 slope angle
increment from 0 to 30 . Mathematica increments the BinCounts indices such
that a number at the lower edge of a bin is included in that bin, meaning that a
number at the upper edge of the bin is not (it is put into the next larger bin). To
include the 0 slope angles, therefore, a starting value of 0.5 is used in the following
counts.
250
and similarly count the total number of cells in each 1 slope angle increment.
In[120]:= BinCountsdataAll, 1, 0.5, 30.5, 1
Out[120]= 111, 361, 296, 344, 241, 279, 359, 333, 380, 398, 384,
493, 269, 338, 242, 297, 253, 253, 269, 187, 183, 152,
118, 131, 94, 36, 25, 18, 12, 11, 1
The probability values will be more useful if we associate them with their slope
angles, which are this case the midpoints of each slope angle increment.
In[122]:= probability Table1. i 1, %i
, i, Length%
Out[122]= 0, 0.036036, 1., 0.119114, 2., 0.179054,
3., 0.328488, 4., 0.406639, 5., 0.512545,
6., 0.62117, 7., 0.63964, 8., 0.621053,
9., 0.678392, 10., 0.684896, 11., 0.709939,
12., 0.684015, 13., 0.695266, 14., 0.702479,
15., 0.693603, 16., 0.754941, 17., 0.735178,
18., 0.657993, 19., 0.631016, 20., 0.704918,
21., 0.671053, 22., 0.567797, 23., 0.534351,
24., 0.776596, 25., 0.75, 26., 0.76,
27., 0.722222, 28., 0.916667, 29., 1.,
30., 1.
Finally, plot the probability of nding a landslide in a cell as a function of slope angle. Although the relationship isnt ideal, it does illustrate a general correspondence
between landslide occurrence and slope angle in the study area.
In[123]:= probplot ListPlotprobability,
AxesLabel
"slopenangle", "probability"
,
PlotStyle
PointSize0.018
251
From In[123]:=
probability
1
0.8
0.6
0.4
0.2
5
10
15
20
25
slope
30 angle
Out[123]= -Graphics-
Now that we have a feel for the data, we can move on to the regression. Instead
of tting a line of the form y c1 c2 x, in logistic regression one of the form log
( /(1 p)) c1 c2 x or p/(1 p) c1 c2 x , where p is the probability of occurrence
(p
is used. Areas in which landslides were mapped have p 1 because a landslide has
occurred, and areas in which landslides were not mapped have p 0. The utility of
a logistic regression equation is that it will allow the estimation of the probability
that a landslide will be found or occur in an area that has not yet been mapped (but
which has similar geologic and topographic characteristics). The term log ((p/(1
p)) is known as the log odds ratio or logit and p/(1 p) is, logically enough, known
as simply the odds ratio. Examination of the odds ratio shows that it is simply the
ratio of the probability that a landslide occurs ((p) to the probability that it does not
occur (1 p) at a specic location.
To t a logistic curve, we will rst dene the logit function and solve it for p.
Weve already assigned numerical values to c1 and c2, so the rst step is to clear
the existing values.
In[124]:= Clearc1, c2
In[125]:= SimplifySolve Log
p
c1 c2 x, p
1 p
c1c2 x
1 c1c2 x
In[126]:= psltn p /. %1
Out[125]= p
Out[126]=
c1c2 x
1 c1c2 x
We will use the result in NonlinearFit and assign the result to logcurve.
Many descriptions of logistic regression are intimately tied to a method of solution
known as maximum likelihood estimation (MLE), and it is easy to falsely conclude
that MLE is the only way to solve logistic regression problems. But, it is not. Nonlinear least squares will work, and the problem can even be transformed and solved
by linear least squares. Lowry (2003) explains the linear least squares approach, and
252
The weights are incorporated using the Weights option. The default value is
Weights
Automatic, which gives each point equal weight.
In[128]:= results NonlinearFitprobability, psltn , x
,
c1, c2
, Weights
wts
0.8607310.135306 x
1 0.8607310.135306 x
In[129]:= logcurve %
Out[128]=
0.8607310.135306 x
1 0.8607310.135306 x
Out[129]=
To see the results, plot logcurve over the observed range of slope angles and superimpose the result with the probability values that were used in NonlinearFit.
In[130]:= logplot Plotlogcurve, x, 0, 30
,
DisplayFunction
Identity
Showlogplot, probplot,
AxesLabel
"slopenangle", "probability"
,
DisplayFunction
$DisplayFunction
From In[130]:=
probability
1
0.8
0.6
0.4
0.2
5
10
15
Out[130]= -Graphics-
20
25
slope
30 angle
253
The agreement is not perfect, but the plot does show a denite correspondence between the regression line and the probability data.
You may also be thinking that the regression line does not look much like the
logistic population curves obtained in Chapter 3, but that is because of the data set
and range of slope angles we have chosen. This can be demonstrated by plotting the
curve over a much wider (but physically unrealistic) range of angles. The 0 to 30
range of slope angles used in this example is denoted by the gray rectangle.
In[131]:= Plotlogcurve, x, 90, 90
,
DisplayFunction
Identity
ShowGraphicsGrayLevel0.8,
Rectangle2, 0.01
, 39.9, 1.
,
DisplayFunction
Identity, Axes
True
Show%, %%, probplot,
DisplayFunction
$DisplayFunction
From In[131]:=
1
0.8
0.6
0.4
0.2
-75
-50
-25
25
50
75
Out[131]= -Graphics-
Ohlmacher and Davis (2003) also obtained wide logistic regression curves in their
study of landslide hazards in Kansas, some of which predicted p < 1 for slopes as
high as 90 . Although the results in this example are much messier than the simple
examples typically used in textbook logistic regression tutorials, they are representative of the results obtained from observations of complicated natural systems.
Computer Note: The non-randomness of the residuals suggests that a better
curve can be found. Modify the logit function to include a higher order polynomial, for example:
log
p
c 1 c2 x c 3 x2 c4 x3
1 p
254
(Continued.) The visible agreement the regression curve to the probability data
will be much better, but will now include a small interval in which the probability of observing a landslide decreases as the slope angle increases. The geoscientic question is, why? Should the probability of landsliding always increase
as slope angle increases? A histogram of slope angles in the study area and an
understanding of the innite slope stability equation used in Chapter 5 may help
to explain the relationship. You may assume that the angle of internal friction
for hillside soils and colluvium in the area is about 25 , cohesive strength is negligible, and the area receives enough rainfall to saturate the slopes nearly to the
ground surface from time to time. What kinds of eld evidence would you look
for to test any hypotheses that you develop?
In logistic regression, there are no statistics analogous to the r2 goodness-oft used for linear regression models. This is because the concept of an r2 value is
postulated on the existence of normally distributed errors, which does not occur with
binary data. Most of the goodness-of-t statistics that do exist for logistic regression
results, moreover, are difcult to apply and some are closely tied to the method of
solution (such as maximum likelihood estimation). One measure of goodness-of-t
that is easy to calculate is a simple percentage of correct predictions. To calculate the
percentage of correct predictions, assume that a value of 0.5 is the cutoff between the
occurrence or non-occurrence of a landslide (or whatever other dependent variable is
being studied). This is, after all, what we really want to know: will a landslide occur
at a given place or not? Then, use the cutoff to transform probability into a list
of binary yes-no variables. This is most easily accomplished using Round.
In[132]:= list1 RoundprobabilityAll, 2
Out[132]= 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
The next step is to determine what percentage of the elements in list1 agrees with
the corresponding elements in list2. There are different ways to accomplish this,
one of which is to add the two lists together.
In[134]:= list3 list1 list2
Out[134]= 0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2
255
Values of 0 indicate slope angle categories for which both list1 and list2 agree
that no landslide exists or should not exist. Values of 2 indicate categories for which
both lists agree that a landslide does or should exist. The remaining values of 1 indicate categories where the observations and predictions differ. Select extracts
the elements of list3 that are either 0 or 2 and returns the result as a list, the
length of which is determined by Length. The two lengths are then added together
and divided by the total number of slope angle increments to calculate the percentage of agreement. The //N operator is necessary to obtain a numerical result from
calculations involving only integers.
In[135]:= LengthSelectlist3, # 0&
LengthSelectlist3, # 2&/Lengthlist3 //N
Out[135]= 0.935484
In this very simple measure of its ability to predict whether the observed probability of landslide occurrence is less or greater than 50% for a given slope angle
increment, therefore, the logistic regression model appears to work well. Consult a
logistic regression reference such as Kleinbaum and Klein (2002) or Menard (2001)
for information about more sophisticated assessments and the maximum likelihood
estimation method of solution.
256
Haneberg, W.C., 1999, Effects of valley incision on the subsurface state of stress theory and
application to the Rio Grande valley near Albuquerque, New Mexico: Environmental &
Engineering Geoscience, v. 5, p. 117131.
Haneberg, W.C. and Gkce, A.., 1994, Rapid Water-Level Fluctuations in a Thin Colluvium
Landslide West of Cinncinati, Ohio: U.S. Geological Survey Bulletin 2059-C.
Hart, B.E., Flemings, P.B., and Deshpande, A., 1995, Porosity and pressure: role of compaction disequilibrium in the development of geopressures in a Gulf Coast Pleistocene
basin: Geology, v. 23, p. 4548.
Helm, D. C., 1984, Field-based computational techniques for predicting subsidence due to
uid withdrawal, in T.L. Holzer, editor, Man-Induced Land Subsidence: Geological Society of America Reviews in Engineering Geology, v. 6, p. 122.
Jger, S. and Wieczorek, G.F., 1994, Landslide Susceptibililty in the Tully Valley Area, Finger Lakes Region, New York: U.S. Geological Survey Open-File Report 94-615 (online
version at http://pubs.usgs.gov/of/1994/ofr-94-0615/tvstudy.htm).
Kleinbaum, D.G. and Klein, M., 2002, Logistic Regression (2d ed.): Springer Verlag.
Lowry, R., 2003, Simple Logistic Regression: web page located at
http://faculty.vassar.edu/lowry/logreg1.html.
Menard, S., 2001, Applied Logistic Regression Analysis: Sage Publications.
Middleton, G.V., 2000, Data Analysis in the Earth Sciences Using Matlab: Prentice Hall.
Ohlmacher, G.C. and Davis, J.C., 2003, Using multiple logistic regression and GIS technology to predict landslide hazard in northeast Kansas, USA: Engineering Geology, v. 69,
p. 331343.
Swan, A.R.H. and Sandilands, 1995, Introduction to Geological Data Analysis: Blackwell
Science.
Tesoriero, A.J., and Voss, F.D., 1997, Predicting the probability of elevated nitrate concentrations in the Puget Sound Basin-Implications for aquifer susceptbility and vulnerability:
Ground Water, v.35, p.1029-1039.
Wessel, P., 2000, Geologic Data Analysis:
http://www.higp.hawaii.edu/~cecily/courses/gg313/DA_book/index.html.
Computer Note: The CompGeosci package will load correctly only if it is located in one of the directories in Mathematicas standard le path. Execute the
statement $Path to see a list of the default paths on your computer and place
the le CompGeosci.m in one of those directories. The specic le paths may
differ from one operating system to another. See Chapter 1 for more information
about installing the CompGeosci package.
258
they must generally be gridded using two dimensional equivalents the interpolation
methods discussed in Chapter 6.
7.2.1 Digital Elevation Models
Digital elevations models, often referred to as DEMs, are available in a variety of
sizes and resolutions. They are freely available through the U.S. Geological Survey,
state GIS clearinghouses, and sources such as www.gisdatadepot.com. The example used in this section is a 201 x 201 grid of elevation values, listed to the nearest
decimeter, from the USGS digital elevation model of the Bremerton West, Washington, 7.5 quadrangle. This is a 10 m digital elevation model, meaning that the rows
and columns of elevation data are separated by 10 m. Mathematica 4.2 and higher
can import Spatial Data Transfer Standard (SDTS) digital elevation models, which
are now being used for U.S. Geological Survey 7.5 DEMs, and there are some public domain Mathematica functions that will read earlier ASCII (also known as DEM
format) models. Geographic information system (GIS) or specialized DEM reading
software can also be used to read the DEM les, isolate portions of interest, and save
them as a tab delimited ASCII or text le containing only the elevation values. The
east-west and north-south coordinates, given either as Universal Transverse Mercator (UTM) grid coordinates or latitude and longitude, can also be exported but will
result in a much larger le.
The following statement reads in the elevation data:
In[2]:= temp Import
"/Users/bill/Mathematica_Book/bremerton.dat"
Computer Note: You will have to change the le pathway to locate the bremerton.txt le on your computer. The easiest way to do this is to use the Get File
Path. . . item on the Input menu.
The next step is to reorganize the elevation data, because Mathematica plots grids of
data with row 1 at the bottom of the plot whereas digital elevation models typically
have row 1 at the top (northern edge). Skipping this step would produce maps with
North at the bottom of the map. Mathematica stores matrices as lists of lists, so
it is easy to write a one line routine to take lists from temp starting at the end
and put them into elev starting at the beginning. The iteration statement simply
places values from temp into elev starting with the last row of temp and working
towards the rst row.
In[3]:= elev Tabletempi, i, Lengthtemp, 1, 1
Now we can calculate some basic values that will be useful in subsequent calculations. Notice that there are six different variables dened in one input, and that the
results of the calculations are returned as six separate output lines.
259
260
261
From In[6]:=
Out[6]= -ContourGraphics-
Computer Note: Experiment with different color functions to see what effects
they produce. For example
ColorFunction
Functionz, RGBColor0.2 0.8 z2 , 0.6 1.6 z 2 z2 ,
0.2 z 0.8 z2
produces an atlas-like range of colors ranging from bright green at the lowest
elevations through brown and on to white at the highest elevations. Each of the
three polynomials given as arguments in RGBColor describes how the red,
green, or blue value changes as values of the plotted function range from their
minimum to their maximum. See Appendix B for more detailed instructions.
One drawback to contour plots made with Mathematica is that the contour lines
are not labeled. An option that works for shaded contour and density plots is to
use the ShowLegend function contained in Graphics`Legend`. In the case of the
shaded contour plot, ShowLegend is invoked as follows:
In[7]:= ShowLegendshadedtopomap,
Functionz, GrayLevel0.3 0.7 z, 16, "53", "223 m",
LegendShadow
None, LegendPosition
1.1, 0.9
,
LegendSize
0.5, 1.8
262
From In[7]:=
53
223 m
Out[7]= -Graphics-
The list of variables required by ShowLegend includes a color function (or, in this
case, a gray level function) that should be identical to that used in the original plot,
the number of divisions within the legend, and the labels for the low and high ends
of the elevation range. The topographic map contains 17 contour lines, meaning
that there will be 16 lled intervals between the contours. Optional variables include LegendShadow, LegendPosition, and LegendSize. The default for
LegendShadow is true, which puts a black drop-shadow behind the legend. The
dimensions for LegendLocation and LegendSize are given relative to the
size of the plot, which is centered at 0,0 and ranges from 1 to 1 in each direction,
and LegendLocation refers to the x, y coordinates of the lower left-hand corner
of the legend. The documentation for Graphics`Legend` describes other options that
can be used to control the exact appearance of the legend.
Computer Note: Modify the options in ShowLegend so that the lightest color
and highest elevation label are at the top, rather than the bottom, of the legend.
The default for ContourPlot and ListContourPlot is, as illustrated
above, to shade each contour interval with a different shade of gray. Using the option ColorFunctionHuewill produce a rainbow of colors, although both the
highest and lowest values will be colored red. This is not always helpful so, as described in Appendix B, the argument given to ColorFunction can be scaled
to avoid the problem. ColorFunction can also be customized, for example to
shade topography ranging from green at low elevations to white at high elevations.
Below is the same data set plotted with ContourShadingFalse. The option
ContourShadingFalse is used to produce black contour lines set against a
white background.
263
Out[8]= -ContourGraphics-
Computer Note: Download and install the contour labeling package described
in Appendix B, then use it to label contour plots of the Bremerton data set.
Computer Note: Contour maps are often shown with every fth contour as a
thicker or darker line. Make a contour plot using a 25 m contour interval, use the
ContourStyle option to increase the thickness of the contours, and then use
Show to superimpose the result on a contour map with a 5 m contour interval.
264
Out[9]= -DensityGraphics-
The contour and density plots can be superimposed using the Show function.
In[10]:= Show%, topomap
From In[10]:=
Out[10]= -Graphics-
265
As illustrated below, the resampled data makes a reasonably representative wireframe plot. Because three dimensional plots are used for qualitative visualization
rather than as the basis for calculations, resampling does not create any problems.
In this case the Boxed option was not overridden, although it could have been by
using BoxedFalse.
In[12]:= ListPlot3Dsampledelev, Mesh
True, Shading
False,
BoxRatios
50 ncols, 50 nrows, 10 relief
From In[12]:=
40
200
150
100
30
20
10
20
30
40
Out[12]= -SurfaceGraphics-
266
From In[13]:=
Out[13]= -SurfaceGraphics-
Out[14]= -SurfaceGraphics-
267
From In[15]:=
Out[15]= -SurfaceGraphics-
Mathematica allows complete control over surface shading and coloration, but these
features are cannot be adequately illustrated in a black and white textbook. Some
examples of surface coloration are included in Appendix B on the companion CD.
The viewpoint for three dimensional graphics can be changed using the
ViewPoint option, which is not interactive in the standard version of Mathematica. The easiest way to change the viewpoint is to select the 3DViewPointSelector item from the Input menu, which brings up an interactive dialog box in which
a cube can be rotated and zoomed. Clicking on the Paste button will paste the new
viewpoint coordinates at the location of the cursor in the notebook. The plot below
was created by typing Show[%,and leaving the cursor at the end of the line. Then,
the viewpoint selector dialog box was used to rotate the graphics cube and paste
a viewpoint value at the cursor location. Finally, the input line was completed by
typing the nal ]and pressing the Enter key.
In[16]:= Show%%, ViewPoint > 6.463, 3.17, 2.979
From In[16]:=
Out[16]= -SurfaceGraphics-
268
150
200
elev.
-Graphics-
Computer Note: Because the Bremerton elevation data set elev consists of
40,401 values, readers following the examples in this section and using computers that are slow or have limited memory may wish to substitute the resampled
data set sampledelev. The results will be similar regardless of which data set
is used.
Elevation data can also be summarized using the CumFreqs or
CumFreqPlot functions in the Mathematica package included with this
book. CumFreqPlot takes three arguments: a list of data (which must be attened
if it is a matrix of values), the minimum plot value, and the maximum plot value.
CumFreqs returns a list of cumulative frequencies without a plot. The cumulative
distribution of relative elevations in the resampled Bremerton data set is shown
in the graph below, with the relative elevation on the horizontal axis and the
cumulative proportion of elevations less than each relative elevation shown on the
vertical axis. It is the emprical equivalent of the cumulative distribution function
269
(CDF) plots introduced in Chapter 4. The elevations are normalized relative to the
relief so that they range from 0 to 1, which allows curves from different areas to be
easily compared even if their absolute elevations are different.
Flattenelev minval
, 0., 1.,
relief
AxesLabel
"Relat.nElev.", "Cum.nFreq."
In[18]:= CumFreqPlot
From In[18]:=
Cum.
Freq.
1
0.8
0.6
0.4
0.2
0.2
0.4
0.6
0.8
Relat.
1 Elev.
Out[18]= -Graphics-
This cumulative plot of relative elevations is very similar to the hypsometric curve
used in many geomorphological studies, except that the traditional hypsometric
curve shows cumulative proportion on the horizontal axis and elevation on the vertical axis. The traditional hypsometric curve can be plotted by obtaining the cumulative frequencies of the elevation data
In[19]:= elevfreqs CumFreqs
Flattenelev minval
relief
putting them into a new data table in which the two columns are interchanged.
In[20]:= len Lengthelevfreqs
Out[20]= 40401
In[21]:= Tableelevfreqsi, 2, elevfreqsi, 1
, i, len
270
From In[22]:=
Relat.
Elev.
1
0.8
0.6
0.4
0.2
0.2
0.4
0.6
0.8
Cum.
1 Prob.
Out[22]= -Graphics-
The area beneath the hypsometric curve is the hypsometric integral, which can
be used as a scalar reection of the degree of incision. Values of the hypsometric
integral can range from 0 to 1. Large values indicate high plateaus that are incised
by a few narrow valleys, whereas small values indicate at plains interrupted by
only a few hills or hummocks. The hypsometric integral can be calculated from the
swapped cumulative frequency list using the ListIntegrate function contained
in the standard NumericalMath`ListIntegrate` package.
In[23]:= ListIntegrate%%
Out[23]= 0.425987
A variable known as the elevation-relief ratio was introduced shortly after the concept of the hypsometric integral was developed, and was later shown to produce a
value virtually indistinguishable from the hypsometric integral (Scheidegger, 1991).
The elevation-relief ratio for the resampled Bremerton data set is
MeanFlattensampledelev minval
//N
maxval minval
Out[24]= 0.420897
In[24]:=
271
the slope. An alternative is to use nite difference approximations, in which derivatives are approximated as elevation changes over nite distances, for example the
elevation difference between two adjacent values divided by the horizontal distance
between them. In practice, nite difference methods are implemented using either
the four or eight neighbors of each elevation point. A nite difference approximation can be illustrated using a set of nine elevation values taken from the Bremerton
elevation data set. First, ll a table with a subset of nine values from the full data
set. The choice of rows and columns is arbitrary.
In[25]:= data Tableelevr, c, r, 100, 102
, c, 50, 52
Out[25]= 215.1, 213.8, 212.2, 219.2, 218., 217.1,
221.9, 221.1, 220.6
The TableForm option can be used to display the elevations in rows and columns,
recalling that the northernmost row is at the bottom of the table because we reversed
the row order at the beginning of this section.
In[26]:= data //TableForm
215.1 213.8 212.2
Out[26]= 219.2 218. 217.1
221.9 221.1 220.6
The north-south and east-west components of slope are calculated separately. Notice
that the elevation differences are divided by twice the elevation grid spacing because
the center point itself is not used in this calculation.
data3, 2 data1, 2
20.
Out[27]= 0.365
In[27]:= NS
In[28]:= EW
Each of the two slope components is a vector quantity, so the resultant maximum
downward slope at row 101 and column 51 of the Bremerton elevation data set is
calculated as the square root of the sum of their squares, or
In[29]:=
NS 2 EW2
Out[29]= 0.379803
The value calculated above is theslope gradient, which is the tangent of the slope
angle and therefore dimensionless. It can be converted into a slope angle using the
ArcTan function as shown below.
In[30]:=
1
ArcTan NS 2 EW2
Out[30]= 20.7969
272
Mathematica, like other computer programs, calculates angles in radians rather than
degrees and Degree is a built-in conversion factor. V
Values given in radians are
divided by Degree to obtain degrees, and those given in degrees are multiplied by
Degree to obtain radians. If the elevation data set represented a structural geologic
surface, for example the top of a petroleum reservoir or aquifer, the slope angle
would be the dip of the surface.
Slope angles for an entire table of values can be calculated by combining the
previous four steps into a single equation and then using that equation to produce
a new table lled with the slope angles at each data point. Because the slope angle calculation method that we are using is based on values of neighboring data
points, however, it cannot calculate slopes for points around the edges of the data
set. Therefore, the resulting tables will have two fewer rows and two fewer columns
than elev.
In[31]:= 10.
slopes
1
ArcTan
2
Computer Note: Write a Mathematica function that will take an entire table of gridded elevation values and their grid spacing as input and produce a table of slope angles as output. The usage might be something like
SlopeAngleelev, .
Computer Note: Develop a method for calculating slope angles that will allow values to be calculated along the edges of the data set. It may help to read
about the treatment of boundary conditions in nite difference simulations of
groundwater or heat ow.
Contour plots of slope angles can be difcult to interpret, and the best visualization choice is often a density plot that shows a continuous range of tones or
colors. A MeshRange specication is included in the density plot below so that it
can be centered beneath a topographic map of elev, which has two more rows and
columns of data.
273
From In[32]:=
Out[32]= -DensityGraphics-
The white and black banding on the slope map is an artifact of the digital elevation model, which rounds the elevation data to the nearest decimeter. The information in the slope map can be tied to the landscape by overlaying it with a topographic
map. Because most of the slope plot is gray to black, the usual black contour lines
would not show up well. Therefore, the rst step will be to make another contour
map with white contours and a 20 m contour interval so that the slope information is not obscurred by the contours. DisplayFunctionIdentity is used
to suppress output of the contour map, which would be invisible against a white
background.
In[33]:= whitetopomap ListContourPlotelev,
ContourShading
False,
AspectRatio
aratio, Frame
False,
Contours
Tablec, c, 50, 400, 20.0001
,
ContourStyle
Thickness0.005, GrayLevel1
,
DisplayFunction
Identity
Out[33]= -ContourGraphics-
Next, Show is used to place the contour map over the slope angle map.
DisplayFunction $DisplayFunction is used to make both maps visible.
In[34]:= Show%%, %, DisplayFunction
$DisplayFunction
274
From In[34]:=
Out[34]= -Graphics-
Computer Note: Experiment with different color functions to help visualize the slope angle distribution. Using ColorFunction > Function
z, RGBColorz, 1 z, 0 in DensityPlot or ContourPlot produces plots that range from bright green for low values to bright red for high
values.
Computer Note: Overlay a gray scale density plot with a colored contour plot.
What combinations of colors and map styles best convey the information about
slope angles and topography?
Computer Note: Make a density plot that shows slope angles above a certain
threshold, say 20 , in red and all other values in green.
Computer Note: Create a new density plot without using the MeshRange
specication, then overlay it with a contour plot containing black or colored
lines. This will illustrate the mismatch that occurs if the two missing rows and
columns of slope angles are not taken into account.
Another option for visualizing slope angles is the use of vector plots such as
those used to illustrate groundwater ow directions in Chapter 2. In the case of a
201 201 grid of elevation data, however, the vectors would be too crowded to
read.
275
1
Out[35]= 163.951
The elevation points in data thus dene a slope that is facing about 16 east of
south (164) and dipping about 21 in that direction. Notice that the arc tangent
function used above is different than the one used to calculate the slope angle. Because the possible range of slope angles occupies only one quadrant (0 to 90 ), the
arc tangent could be calculated from the simple ratio of the slope components. Aspect, however, can range through all four quadrants (0 to 360 ) and the sign of each
component must therefore be considered. The four-quadrant arc tangent of y/x is
calculated using ArcTanx, y.
The relationship between the east-west slope gradient, the north-south slope gradient, the slope angle, and the slope aspect can be illustrated by drawing a simple vector diagram. The slope gradient will be the resultant of the two orthogonal
slope gradient components, and the aspect is the supplement of the angle measured
clockwise-positive from North to the resultant. The angle itself gives the maximum
upslope gradient, hence 180 must be added or subtracted in order to obtain the direction of the maximum downslope gradient. Multiplying the two gradients by 1
in the expression above has the same effect as adding or subtracting 180 .
The orientation of the surface dened by data can be visualized with a surface
plot of its nine elevations, as shown below.
In[36]:= ListPlot3Ddata, AxesLabel
"E W", "N S", ""
,
Shading
False
From In[36]:=
220
217.5
215
212.5
1
2.5
2
1.5
1.5
2
E
W
2.5
3
Out[36]= -SurfaceGraphics-
NS
276
Recall from structural geology that the strike of a surface is the compass direction
of its intersection with an imaginary horizontal plane. Picking an elevation within
the range of data, plotting it but supressing the output, and then combining it with
the dipping surface plot produces the gure below.
In[37]:= Plot3D218, x, 1, 3
, y, 1, 3
, PlotPoints
25,
DisplayFunction
Identity
Show%%, %, DisplayFunction
$DisplayFunction
From In[37]:=
220
3
217.5
215
212.5
1
2.5
2
NS
1.5
1.5
2
EW
2.5
3
Out[37]= -Graphics3D-
The direction of the strike line is the aspect 90 , or 164 - 90 74 . You can verify
this by picking any three points that do not lie along the same line from data and
using the three-point interpolation method developed in Chapter 2.
Slope aspect azimuths for the entire elev data set can be calculated by making
just a few changes to the slope angle calculation. Remember to correctly order the
two gradients in the arc tangent function and multiply them by 1. An error will
occur if any of the east-west gradients is zero and Mathematica will not calculate a
value for that point. This potential problem can be alleviated by adding a very small
quantity (say, 0.0000001) to the east-west gradient to ensure that there will be no
divide by zero errors
In[38]:= 10.
aspect
1
elevr 1, c elevr 1, c
,
Table ArcTan
2
elevr, c 1 elevr, c 1
1. 10 7 ,
2
r, 2, nrows 1
, c, 2, ncols 1
277
A density plot of the slope aspect angle, shown below, looks something like a
shaded relief map, but not quite. The problem is the existence of unnatural looking
black and white patches throughout the plot. These are produced because Mathematica scales the gray levels linearly between the lowest and highest azimuth values
but, in reality, azimuths are continuously distributed. The result is that an azimuth
of 001 would be plotted as black whereas an azimuth of 359 would be plotted as
white. The second fact that contributes to the unusual appearance of the map below
is that Mathematica returns arc tangent values in a range between 180 and 180 ,
another mathematical convention. When working with maps, however, it is much
more convenient to have azimuths between 0 and 360 . Both of these problems can
be easily xed.
In[39]:= ListDensityPlotaspect, Mesh
False, Frame
False,
MeshRange
2, nrows 1
, 2, ncols 1
From In[39]:=
Out[39]= -DensityGraphics-
To make a more realistic looking shaded relief map, we will need to come up with
way to avoid the discontiuity that occurs where the high and low ends of the scale
meet. Because the aspect azimuth data vary continuously over a range of 0 to 360 ,
the logical choice is a trigonometric function, such as a sine or cosine curve, that
likewise varies continuously over the same range. One such solution is illustrated
below. A cosine curve will have its largest values, and therefore lightest shades on
the density map, for aspect azimuths near 0 and its smallest values, and therefore
darkest shades, for azimuths near 180 .
In[40]:= ListDensityPlotCosaspect , Mesh
False,
MeshRange
2, nrows 1
, 2, ncols 1
,
Frame
False
278
From In[40]:=
Out[40]= -DensityGraphics-
The simulated lighting can be adjusted by shifting the cosine curve. For example,
the plot below has lighting from a direction of 045 . It also scales the GrayLevel
option so as to remove the darkest values from the image.
In[41]:= ListDensityPlotCosaspect 45. , Mesh
False,
ColorFunction
Functionz, GrayLevel0.2 0.8 z,
MeshRange
2, nrows 1
, 2, ncols 1
,
Frame
False
From In[41]:=
Out[41]= -DensityGraphics-
279
Out[42]= -Graphics-
Much more sophisticated shaded relief maps can be constructed by specifying the
degree of reectance as a function of the angle between the topography and the light
source.
Computer Note: Use ListPlot3D to plot the surface and explore the effects of changing lighting on three dimensional shaded relief plots by varying
the LightSources, AmbientLight, and Lighting options. Consult the
Mathematica documentation for more information about these options.
Computer Note: Generate a series of aspect plots with different lighting angles
and then animate them. The Mathematica documentation contains information
about animating a series of plots.
280
and its four nearest neighbors. Again using the nine values in data, the curvature at
row 101 and column 51 is calculated as
In[43]:=
1
data1, 2 data3, 2 data2, 1
102
data2, 3 4. data2, 2
Out[43]= 0.00799988
Curvature, as can be shown be examining the units in the expression above, has
units of reciprocal length (meters in this case). Positive values of curvature indicate concave-upwards topography (for example, valleys), whereas negative values
indicate convex-upwards topography (for example, ridges). Some geomorphologists
further distinguish between plan curvature (the curvature of contour lines shown in
map view) and prole curvature (the curvature measured down a slope such as the
axis of a valley), and equations to calculate those two variations can be found in
geomorphology or GIS books such as Burrough and McDonnell (1998).
In[44]:= curvature SlopeCurvatureelev, 10.
In[45]:= ListDensityPlotcurvature, AspectRatio
aratio,
Frame
False, Mesh
False,
MeshRange
2, nrows 1
, 2, ncols 1
From In[45]:=
Out[45]= -DensityGraphics-
The lightest colors in the map above are areas of concave slopes such as unchanneled hollows or stream valleys. The darkest colors are convex areas such as hilltops
and ridges. Curvature maps have been combined with slope maps to identify debris
ow source areas and runout paths, and may be useful for identifying topographically subtle features such as the scarps and toes of dormant landslides. As before,
281
superimposing a topographic map can help to show the signicance of slope curvature. Because most of the values are light gray to white, a black line (or colored)
contour map will be more useful than the white map used in Out[34].
In[46]:= Show%, topomap
From In[46]:=
Out[46]= -Graphics-
282
Assume that the angle of internal friction is constant over the entire map area
and is 30 , or
In[47]:= tan Tan25.
Out[47]= 0.466308
If more data were available, for example a map showing different formations with
different angles of internal friction, they could have been incorporated. For now,
though, we will assume that the value is a constant. Factor of safety values can now
be calculated and stored in a new table named FS
tan
,
Tanslopesr, c 0.00001
r, 1, nrows 2
, c, 1, ncols 2
In[48]:= FS Table
and then plotted and overlain with a contour map. The plot is of the quantity 1 FS,
so light values indicate low factors of safety and are more susceptible to landsliding.
In[49]:= ListDensityPlot1 FS, Mesh > False,
MeshRange
2, nrows 1
, 2, ncols 1
,
Frame
False,
ColorFunction
Functionz, GrayLevel0.3 0.7 z,
DisplayFunction
Identity
Show%, topomap, DisplayFunction
$DisplayFunction
From In[49]:=
Out[49]= -Graphics-
283
with the quantitative factor of safety? One way is to produce a composite map that
contains three different categories; concave slopes with factors of safety less than
2 (the most hazardous), planar or convex slopes with factors of safety less than 2
(moderately hazardous), and slopes with factors of safety greater than 2 (the least
hazardous).
The following set of Mathematica statements creates a table named
landslide and sets all of its values to 2. Then, the lines within the Do statement check the FS and curvature values for each data point and assign a value
of 0 or 1.5 depending on the result. These values were chosen so that areas with the
lowest hazard will appear as white and those with moderate hazard will appear as
gray in a grayscale density map.
In[50]:= landslide Table2., nrows 2
, ncols 2
Do
Block
,
Ifcurvaturer, c 0. && FSr, c < 2. ,
landslider, c 1.5
Ifcurvaturer, c > 0. && FSr, c < 2. ,
landslider, c 0.
, r, 1, nrows 2
, c, ncols 2
,
Frame
False, DisplayFunction
Identity
Show%, topomap, DisplayFunction
$DisplayFunction
From In[51]:=
Out[51]= -Graphics-
284
It cannot be overemphasized that this is a very simple approach to a very complicated problem, and that there are other factors that control
t landslide potential.
They include other components of shear strength, the hydrologic and mechanical
effects of vegetation, seismic effects, the previous occurrence of landslides, and the
magnitude and frequency of rainstorms that may trigger landslides. Not all of them
are well understood or easily modeled, and calculations should never be used as a
replacement for eld observations. Nonetheless, the simple model developed above
illustrates how easily data sets and their derivative products can be combined with
geologic inference to produce reconnaissance level screening tools that can be used
in conjunction with eld and laboratory investigations.
The surface-generating function was obtained by trial and error, adjusting values to
produce a surface that might reasonably represent a series of antiforms and synforms
superimposed on a regionally dipping surface. Well consider a map area that ranges
over 0 x 10,000 and 0 y 10,000 units. In map view, the surface looks like
this:
285
-ContourGraphics-
The GrayLevel specication was used to scale the values so that the darkest shade
is dark gray rather than black, which would have obscurred any data points later
plotted on the map.
Now, generate a series of randomly located points at which the surface is to
be sampled. The statement RandomReal, 0, 10000
selects a real number
between 0 and 10,000 at random, so the table below consists of 25 pairs of random
x and y coordinates. Random number generation is discussed in much more detail
in Chapter 4.
In[54]:= SeedRandom6
In[55]:= locs TableRandomReal, 0, 10000
,
RandomReal, 0, 10000
, 25
Out[55]=
3605.14, 9447.96, 5545.13, 6210.91,
5589.31, 9487.48, 3358.64, 9733.6, 9580.38, 8719.48,
9357.68, 1640.3, 9201.46, 2841.85, 7602.61, 5907.51,
277.95, 4364.93, 1620.37, 5073.31, 1425.84, 4317.25,
8547.71, 5196.8, 7820.71, 4869.3, 3002.59, 8985.89,
2231.39, 5381.82, 9643.95, 9252.29,
2651.02, 6662.33, 286.274, 7611.99, 3449.56, 3820.48,
2683.66, 1704.48, 3171.61, 9455.56, 1063.29, 6631.16,
1745.77, 5138.3, 2515.58, 1434.37, 3925.06, 269.01
286
Once the random coordinates have been generated, ll a table with values of f
calculated only at the locs points. Each triplet in the table below contains an eastwest coordinate, a north-south coordinate, and a z value for those coordinates.
In[56]:= surfdata Tablelocsi, 1, locsi, 2,
f /.x > locsi, 1, y > locsi, 2
,
i, Lengthlocs
Out[56]=
3605.14, 9447.96, 310.174,
5545.13, 6210.91, 178.594, 5589.31, 9487.48, 308.612,
3358.64, 9733.6, 315.268, 9580.38, 8719.48, 510.402,
9357.68, 1640.3, 289.23, 9201.46, 2841.85, 333.602,
7602.61, 5907.51, 284.827, 277.95, 4364.93, 115.194,
1620.37, 5073.31, 237.588, 1425.84, 4317.25, 208.56,
8547.71, 5196.8, 361.01, 7820.71, 4869.3, 277.91,
3002.59, 8985.89, 308.431, 2231.39, 5381.82, 262.572,
9643.95, 9252.29, 518.608, 2651.02, 6662.33, 285.703,
286.274, 7611.99, 178.625, 3449.56, 3820.48, 197.802,
2683.66, 1704.48, 146.348, 3171.61, 9455.56, 312.302,
1063.29, 6631.16, 230.655, 1745.77, 5138.3, 244.626,
2515.58, 1434.37, 135.398, 3925.06, 269.01, 86.7817
From In[57]:=
NS
10000
8000
6000
4000
2000
2000
4000
Out[57]= -Graphics-
6000
8000
EW
10000
0
287
Out[58]= -Graphics-
288
x
Do
Blockx, y, nearest
,
x xmin i 1 x
y ymin j 1 y
Do
dk, 1
nearest Sortd
zvalsj, i indatanearest1, 2, 3
, i, 1, nx
, j, 1, ny
Returnzvals
The results of NearestNeighbor, along with the data point locations, are shown
in the contour map below.
In[61]:= Show
ListContourPlotneighborresults,
ColorFunction
Functionz, GrayLevel0.3 0.7 z,
Contours
Tablec, c, 50, 500, 50.0001
,
MeshRange
0, 10000
, 0, 10000
, Frame
False,
DisplayFunction
Identity,
wellmap, DisplayFunction
$DisplayFunction
289
From In[61]:=
Out[61]= -Graphics-
Although the general pattern of two antiforms and a synform can be discerned with
some imagination, a knowledge of the underlying surface, and an appreciation of
cubism, the surface is not very realistic. Its stairstep nature can be emphasized by
making a three dimensional surface plot.
In[62]:= ListPlot3Dneighborresults, MeshRange
0, 10000
,
0, 10000
,
ColorFunction
Functionz, GrayLevel0.3 0.7 z,
AxesLabel
"E W", "N S", " "
From In[62]:=
500
400
300
200
100
0
10000
8000
6000
4000 NS
2000
4000
E
W
2000
6000
8000
10000
00
Out[62]= -SurfaceGraphics-
290
zi
di n
Ni1 d1i n
Ni1
where z x,y is the interpolated or gridded value, zi are the nearest N data points, and
di are the distances between the interpolated value and the nearest N data points. The
function InverseDistanceGrid takes as input a list of x, y, and z values such
as sur fdata; lists consisting the minimum value, maximum value, and number
of grid points in the x and y dimensions; the power to which the distance is raised;
and the number of neighbors to be included in each interpolation. Its output is a
table containing ny rows and nx columns of interpolated values.
In[63]:=
ReciprocalDistanceGrid
indata_, xvals_, yvals_, power_, neighbors_
Blocklen, x, y, zvals
,
xmin xvals1
xmax xvals2
nx xvals3
ymin yvals1
ymax yvals2
ny yvals3
len Lengthindata
xmax xmin
x
nx 1
ymax ymin
y
ny 1
zvals Table0., nx
, ny
d Table0., k
, k, len
Do
Blockx, y, k, m, mind, nearest
,
x xmin i 1 x
y ymin j 1 y
Returnzvals
291
The following statement creates the table reciprocalresults and lls it with
a 21 21 grid of interpolated values using an exponent of 2 and the 15 nearest
neighbors to each point.
In[64]:= reciprocalresults
ReciprocalDistanceGridsurfdata, 0, 10000, 21
,
0, 10000, 21
, 2, 15
,
Contours
Tablec, c, 50, 500, 50.0001
,
Frame
False, DisplayFunction
Identity,
wellmap, DisplayFunction
$DisplayFunction
From In[65]:=
Out[65]= -Graphics-
The result appears more realistic than the stair-step surface generated by the nearest
neighbor approach, but the underlying antiforms and synforms are still very difcult
to discern and the contours seem unrealistically jagged. The surfaces generated by
the reciprocal distance method will, of course, also be inuenced by the location
of the data points and you will see different results if you select a different set of
randomly located data points.
292
293
be iterated until the maximum difference between iterations is 0.5 units of measurement. The input data points will not necessarily fall on any of the grid points where
interpolations will be performed. Indeed, it would be surprising if any of them fell
exactly on a grid point. To account for this discrepancy, ThinPlateGrid assigns
each known data point values to the nearest grid point and holds the value constant
throughout the iterations. The actual function is not shown below because of its
length, but it can be examined by opening the accompanying Mathematica package
as a notebook or with a text editor. In general, the spacing of the interpolated grid
should be smaller than the spacing of the sampled grid. Otherwise, the function may
produce undesired results because it will try to assign more than one known value
to some interpolation grid points.
Here is an example of ThinPlateGrid using the same data and ranges as the
previous examples, and with a numerical tolerance of 0.001.
In[66]:= thinplateresults ThinPlateGridsurfdata,0.,10000.
,
0., 10000.
, 500, 0.001
,
Contours
Tablec,c,50.,550.,50.0001
,Frame
False,
ColorFunction
Functionz, GrayLevelz 0.3/1.3,
DisplayFunction
Identity
Show%, wellmap, DisplayFunction
$DisplayFunction
From In[67]:=
Out[67]= -Graphics-
294
This is probably the most natural looking of the three surfaces, and most geologists
would probably not hesitate to consider it a successfully interpreted data set. It is
generally, although not exactly, similar to the map produced using the reciprocal
distance method. As stated above, the nature of the surface obtained by any gridding method will be strongly dependent upon the distribution of data points. More
sophisticated variations of the thin plate spline method also include a tension component that lets the user tighten or loosen the imaginary elastic plate being used for
interpolation.
7.3.3 A Note About Kriging
Kriging is a sophisticated interpolation technique that incorporates information
about the spatial correlation structure of the surface, and could be the subject of
an entire course or book. It has many proponents. Kriging can work well and be
worth the effort when the number of data points is large and the data satisfy certain
conditions. In other cases the surfaces generated by kriging are no better, and can
be appreciably worse, than those produced by the methods we have examined. In
situations where data too sparse to yield reliable information about their spatial correlation structure, assumptions about their spatial relationships must be made and
kriging loses much of its attractiveness. The books by Isaaks and Srivastava (1989),
Burrough and McDonnell (1998), Middleton (2000), Carr (2002), and Davis (2002)
listed in the Recommended Reading section of this chapter describe the theory and
application of kriging methods in various degrees of detail.
7.3.4 Adding Well Locations to Surface Plots
It is relatively straightforward to consruct a three dimensional version of wellmap
in order to visualize the relationship between a surface and boreholes from which the
data were obtained. We know from previous plots that the z values in sur fdata
range from 0 to about 500, so this would be a good vertical range for lines representing boreholes. The statement below constructs a table lled with 25 vertical lines,
each representing a well from which an elevation datum was obtained.
In[68]:= lines Table
Linelocsi, 1, locsi, 2, 0
,
locsi, 1, locsi, 2, 500
,
i, Lengthlocs
Line[{{x1 , y1 , z1 },{x2 , y2 , z2 }}] creates, but does not display, a line from {x1 ,
y1 , z1 } to {x2 , y2 , z2 }. The table of lines can be plotted by identifying it as a
Graphics3D object and then using Show. The Thickness function controls the
thickness of the lines relative to the entire width of the plot and the GrayLevel
function controls the darkness of the lines. The latter could have been replaced by
an RGBColor function.
295
Out[69]= -Graphics3D-
Now that the well locations are plotted, make a three dimensional surface plot of
thinplateresults
In[70]:= thinplateplot3d ListPlot3Dthinplateresults,
MeshRange
0, 10000
, 0, 10000
, Shading
False,
DisplayFunction
$DisplayFunction
From In[70]:=
400
10000
200
8000
6000
0
0
2000
4000
4000
2000
6000
8000
10000
00
-SurfaceGraphics-
296
Out[71]= -Graphics3D-
297
contained the same numbers of rows or columns. A three dimensional surface plot
is a convenient way to visualize the interpolation errors.
In[74]:= ListPlot3Derrorsurface,
AxesLabel
"E W", "N S", " "
,
ColorFunction
Functionz, GrayLevel0.3 z/1.3,
MeshRange
0, 10000
, 0, 10000
From In[74]:=
50
10000
0
-50
8000
6000
4000 NS
2000
4000
E
W
2000
6000
8000
10000
00
Out[74]= -SurfaceGraphics-
The surface plot shows that the interpolation errors are quite large along the
edges of the grid. Although certainly not desirable, large errors along the edges
of the grid are inevitable because they represent an extrapolation beyond the data
points rather than an interpolation between data points. Keep this in mind when
extrapolating any kind of curve or surface beyond the range of the data!
Another way to represent this tendency for large interpolation errors to occur
along the edges of the interpolation grid is to create a table consisting of the distance
from the center of the grid and the error at each interpolated point. The table is
created by the statement
and plotted by
In[76]:= ListPlotFlatten%, 1,
AxesLabel
"Distance", "Error"
298
From In[76]:=
Error
80
60
40
20
Distance
Out[76]= -Graphics-
299
It isnt necessary to restrict trend surfaces to planes. Other low order polynomials
can be just as easily used, although there should be some geologic reason for doing so. In general, however, the order of the polynomial is generally much smaller
than the number of data points so that the process is one of regression rather than
interpolation. The statement below superimposes three dimensional surface plots
of trendsur face and thinplateresults to illustrate the relationship between the two.
In[78]:= Plot3Dtrendsurface, x, 0, 10000
, y, 0, 10000
,
PlotPoints
50,
ColorFunction
Functionz, GrayLevel0.3 z/1.3,
DisplayFunction
Identity
ListPlot3Dthinplateresults, MeshRange
0, 10000
,
0, 10000
,
ColorFunction
Functionz, GrayLevel0.3 z/1.3,
DisplayFunction
Identity
Show%, %%, DisplayFunction
$DisplayFunction,
Axes
None
From In[78]:=
Out[78]= -Graphics3D-
Calculating Residuals
Residuals have the same denition as in Chapter 6, although in trend surface analysis
it is common to concentrate on the residuals and the information they convey rather
300
than trying to minimize them. Well compare the planar trend surface tted above to
the thin plate spline gridded data, so the next step is to create a table of trend surface
values corresponding to the 21 x 21 grid of values in thinplateresults.
In[79]:= trenddata
Round
Tablelocsi, 1, locsi, 2,
trendsurface /.x > locsi, 1,
y > locsi, 2
,
i, Lengthlocs
The residual is the difference between the third columns of the two tables, or
In[80]:= residualdata
Round
Tablelocsi, 1, locsi, 2,
surfdatai, 3 trenddatai, 3
,
i, Lengthlocs
Notice that simply executing sur fdata trenddata will not provide the answer we want because, in addition to nding the residuals from the third columns,
it will subtract the rst two columns from each other and set all of the x and y coordinates to zero. Alternatively, we could have simply subtracted the polynomial
trendsur face from the true surface f using the statement
In[81]:= f trendsurface
Out[81]= 42.4842 0.00206896 x 0.00185985 y
x
1000 y
1. 106 x y 100 Sin
Sin
4000
12500
and evaluated the result using locs. In most real world geological problems,
though, the true underlying surface f is unknown and can only be estimated from
a nite number of data points. The minimum and maximum residual values, which
will be useful for dening contour intervals, are
In[82]:= MinColumnresidualdata, 3
Out[82]= 122
In[83]:= MaxColumnresidualdata, 3
Out[83]= 65
As above, we cannot apply Min and Max to the entire residualdata table because it includes x and y coordinates along with the residual values. Column, which
is an add on function in Statistics`DataManipulation`, solves the problem by isolating the column containing the residuals. With these minimum and maximum values
in mind, a contour map of the residuals can be superimposed with a map of the
randomly selected data points using the statement
301
,
Contours
Tablec, c, 200, 100, 25.0001
,
Frame
False,
ColorFunction
Functionz, GrayLevelz 0.3/1.3,
DisplayFunction
Identity
Show%, wellmap, DisplayFunction
$DisplayFunction
From In[85]:=
Out[85]= -Graphics-
How does this compare to the residual map produced from the true surface? We can
easily produce one by subtracting the best-t function trendsur face from the
true surface f, as shown below.
In[86]:= ContourPlotReleasef trendsurface, x, 0, 10000
,
y, 0, 10000
,
Contours
Tablec, c, 200, 100, 25.0001
,
Frame
False, PlotPoints
25,
ColorFunction
Functionz, GrayLevelz 0.3/1.3,
DisplayFunction
Identity
Show%, wellmap, DisplayFunction
$DisplayFunction
302
From In[86]:=
Out[86]= -Graphics-
303
Therefore, the linear regional trend accounts for about 81% of the variability of the
z values contained in sur fdata.
Derivative Maps
The same slope and curvature mapping tools that we developed for topographic
surfaces can be applied to any gridded surface. In this case, well assume that
sur fdata represents the top of a petroleum reservoir or aquifer. Therfore, a contour map of the gridded sur fdata values is a structural contour map. Although
structural contour maps can be interpreted as-is, it can sometimes be helpful to produce rst derivative (slope) and second derivative (curvature) maps to aid in their
interpretation. For example, the elastic beam theory used in Chapter 3 to analyze
deformation above laccoliths suggests that faults should occur where the shearing
force (which is proportional to the slope of the surface) is greatest and that joints
should occur where the ber stress (which is proportional to the curvature) is greatest. Therefore, slope and curvature maps of a folded surface may help to identify
areas that may contain faults that impede uid ow or fractures that increase porosity and permeability (e.g., Fischer and Wilkerson, 2000; Stewart and Wynn, 2000).
The results returned by SlopeAngle have units of degrees, but can be converted
to dimensionless gradients (vertical/horizontal) by taking their tangents. As long as
the results are used for visualization and interpretation rather than calculations, however, the choice is a matter of personal preference. Below is a contour plot showing
the rst derivative of thinplateresults along with the data point locations
from which that table was interpolated.
In[88]:= ListContourPlotSlopeAngleresidualgrid, 500.,
MeshRange
0, 10000
, 0, 10000
, Frame
False,
ColorFunction
Functionz, GrayLevelz 0.3/1.3,
DisplayFunction
Identity
Show%, wellmap, DisplayFunction
$DisplayFunction
304
From In[88]:=
Out[88]= -Graphics-
, Frame
False,
ColorFunction
Functionz, GrayLevelz 0.3/1.3,
DisplayFunction
Identity
Show%, wellmap, DisplayFunction
$DisplayFunction
From In[89]:=
Out[89]= -Graphics-
305
Light areas in the curvature map indicate positive curvature (concave-up) associated
with synformal structures, whereas dark areas indicate negative (concave-down)
curvature associated with antiformal structures.
Computer Note: The CompGeosci package will load correctly only if it is located in one of the directories in Mathematicas standard le path. Execute the
statement $Path to see a list of the default paths on your computer and place
the le CompGeosci.m in one of those directories. The specic le paths may
differ from one operating system to another. See Chapter 1 for more information
about installing the CompGeosci package.
308
can be removed by either tting a straight line to obtain an equation for the trend
and then calculating residuals (see Chapter 7) or, as described further on in this
chapter, by calculating rst differences. A time series is said to be homoscedastic
if its variance is constant with time and heteroscedastic if its variance changes as a
function of time. It is important to realize that time (or space) series do not have to be
periodic, although in many geoscientic problems there is an important component
of periodicity.
Periodic waveforms are described in terms of amplitudes, frequencies, and
wavelengths. In the plot below, for example, the amplitude of the waveform is 0.2.
In[2]:= Plot0.2 Sin6 x/18., x, 0, 18
,
AxesLabel
"t", "ft"
From In[2]:=
ft
0.2
0.1
2.5
7.5
10
12.5
15
17.5
-0.1
-0.2
Out[2]= -Graphics-
The frequency can be written as 1 cycle per 6 units of time (i.e., 1/6) or 3 cycles per
18 units of time (i.e., 3/18). Both equal 1/6 and are therefore algebraically equivalent. Because the frequency is a ratio, it is even possible to specify it in terms of
non-integer wavelengths such as 3/4 cycles per 9/2 units of time because that, too,
will reduce to a frequency of 1/6.
3/4
9/2
1
Out[3]=
6
In[3]:=
Frequencies waves are often expressed as cycles per second using units of Hertz.
Because the frequency per unit of time in our example reduces to 1/6 regardless
of how it is written, you might be asking what is to be gained by expressing the
frequency to wavelength ratio as anything but 1/6. The reason is that in digital signal
processing data are commonly presented as a list of dependent variables without
any corresponding time coordinate. For example, the sine curve above might be
represented as a list of discrete measurements obtained at 0.25 unit intervals.
309
0.1
10
20
30
40
50
60
70
-0.1
-0.2
Out[5]= -Graphics-
The frequency of sint at rst glance appears to be either 3 (because the waveform
repeats itself three times during the length of the time series) or 3/72, which is
assumed to have a wavelength of 1. Only by knowing the sampling rate (4 samples
per unit of time) and the length of the data set (n 72) will we be able to determine
that the frequency is really
In[6]:=
3 4
Lengthsint 1
Out[6]=
1
6
The length of sint is reduced by 1 because the 73rd element is actually the rst
sample from the beginning of a fourth repetition of the waveform.
310
1
f
Ft exp2 t 1 1/
n
n t1
where f is a list of results by frequency, F(t) is a list of regularly sampled data,
is the frequency, and t is time. Different variations of the Fourier transform are
used in different elds, and the example above uses Mathematicas default sign
convention. Refer to the written or online documentation for more details. The data
are said to lie in the time domain, whereas the results are in the frequency domain.
The exponential term on the right-hand side of the Fourier transform equation is
equivalent to
In[7]:= ExpToTrig2 t/n
Out[7]= Cos
2t
2t
Sin
n
n
According to this denition, the wavelength n is the length of the data set.
Why use a Fourier transform when linear regression seems to work well enough?
Although it can be a very useful method, particularly when data are not sampled regularly or are otherwise missing, linear regression can also be computationally slow.
This was particularly so in the early days of computing. Today, software such as
Mathematica can perform the least squares calculations very rapidly and the speed
difference may not be signicant for any but the largest data sets. Still, it is good
to have a fast numerical alternative for cases in which speed does matter. Another
reason is that many ltering operations are easier when a time series is expressed
in terms of its frequencies, or spectral components, than in terms of time. The fast
Fourier transforms, or FFT, is an especially efcient method that works when the
data are sampled at regular time intervals and the length of the data set is a power
of 2. Mathematica implements an extremely efcient fast Fourier transform that can
accept data sets of any length, but they must be sampled regularly in time or space.
Missing values can be approximated by interpolation or by setting them to an arbitrary value such as 0, but must be specied in one way or another. Some Fourier
transform routines require that the input data length be a power of 2, and require
users to pad the end of the series with zeroes to attain a length that is a power of 2.
Mathematica automatically takes care of this problem, however, so there is no need
for users to pad the input to Fourier.
The discrete Fourier transform of sint is lengthy because it consists of 73
terms, each with a real and an imaginary component, so the output will be surpressed.
In[8]:= fft ChopFouriersint
311
Chop is used to eliminate any very small numerical errors (< 1010 ) in the result.
To illustrate the real and imaginary components, we can look at just one term of the
results.
In[9]:= fft2
Out[9]= 0.000369209 0.00857387
This result can be shown to be identical to that obtained by explicitly typing out the
denition of the Fourier transform and taking the second element of the result.
In[10]:= len Lengthsint
len
1
TableChop
sintt
len t1
Exp2 t 1 1/len , , 1, len
In[11]:= %2
Out[11]= 0.000369209 0.00857387
The real components are multiples of the amplitudes of the cosine terms and the
imaginary components are the multiples of the amplitudes of the sine terms in a
Fourier series of the form
Ft
a0
an cos2 n t/ L bn sin2 n t/ L
2
i1
The rst term in the list returned by Fourier contains the value for a0 , so the
second term contains values for a1 and b1 , and so forth. Because of the particular
denition of a Fourier transform used by default in Mathematica,
the amplitudes are
found by multiplying each term in the Fourier transform by 2/ n. In this example,
the amplitudes as a function of frequency are:
In[12]:= ListStemPlot
2Refft
, 0.015, PlotRange
All,
len
From In[12]:=
a
0.025
0.02
0.015
0.01
0.005
10
20
30
Out[12]= -Graphics-
40
50
60
70
312
and
In[13]:= ListStemPlot
2Imfft
, 0.015, PlotRange
All,
len
From In[13]:=
b
0.2
0.1
10
20
30
40
50
60
70
-0.1
-0.2
Out[13]= -Graphics-
As expected from the function that created sint, the maximum amplitude is for
the sine component is 0.2 and occurs at a frequency of 4 1 3 cycles per data
length. The subtraction is necessary because the a0 term the rst element of the results ( f ft1); therefore, the ith value in the Fourier transform results represents a frequency of i 1. The results are symmetric or antisymmetric about a
frequency of 36, which is known as the Nyquist frequency. Frequencies above the
Nyquist frequency are said to be aliased because they contain no new information.
An important ramication of the Nyquist frequency is that the highest frequency that
can be represented in a discretely sampled signal is n/2 cycles per data set length.
Thus, sampling should always be planned to that the Nyquist frequency is greater
than the frequencies of the phenomena being studied. For example, if temperature
varies on a daily basis then a sampling frequency of at least twice a day is necessary
to correctly detect the uctuations without aliasing.
The power spectrum (known as the variance spectrum or spectral density function in some elds) is given by the square of the absolute value of the real and
imaginary parts of each term, which is the sum of the squares of the real and imaginary parts, divided by the square root of the number of data. We can use the logical
operator to see if the denition of the absolute value is indeed true
In[14]:=
Absfft2
Refft2 Imfft2
len
len
Out[14]= True
313
Absfft2
, 0.015,
len
PlotRange
All, AxesLabel
"", "Power"
In[15]:= ListStemPlot
From In[15]:=
Power
0.08
0.06
0.04
0.02
10
20
30
40
50
60
70
Out[15]= -Graphics-
The sum of squares of the absolute value for each frequency is closely related to the
variance of the original data set. In this example, the summation yields
len
In[16]:=
1
Absffti2
len 1 i1
Out[16]= 0.02
Because of this relationship, the power of each frequency can be interpreted as its
contribution to the total variance of the data set, and the signicance of any particular frequency can be tested using an F ratio test (see Chapter 4). If the rst term
of the Fourier transform is not zero, then the rst term should be subtracted before attempting to calculate the variance from the sum of powers. A related result
is the amplitude spectrum, which is (using Mathematicas Fourier transform convention) twice the square root of the power spectrum. Taking the absolute values,
the amplitude spectrum is
2 Absfft2
In[18]:= ListStemPlot
, 0.015, PlotRange
All,
len
AxesLabel
"", "Amplitude"
314
From In[18]:=
Amplitude
0.2
0.15
0.1
0.05
10
20
30
40
50
60
70
Out[18]= -Graphics-
As discussed by Press et al., (1992), there are several other commonly used definitions of the power spectrum. Ignoring results above the Nyquist frequency, the
frequency with the highest power and amplitude is 4 1 3 cycles per data set
length. The results were obvious in this simple example but, as will be shown below, it is not as easy to select the dominant frequency or frequencies in real data.
Finally, taking the inverse Fourier transform of f ft returns the original data.
In[19]:= ListStemPlotInverseFourierfft, 0.015,
AxesLabel
"t", "ft"
From In[19]:=
ft
0.2
0.1
10
20
30
40
50
60
70
-0.1
-0.2
Out[19]= -Graphics-
As implemented in Mathematica, Fourier and InverseFourier can also accept multi-dimensional tables of data, for example digital elevation models.
Real data are not usually as well behaved as our simple sine curve. The data
imported below consist of monthly streamow measurements of the Palouse River
315
near Colfax, Washington collected between January 1956 and December 1963 by
the U.S. Geological Survey.
In[20]:= data Import
"/Users/bill/Mathematica_Book/palouse.dat", "List"
The original data are in cubic feet per second, and can easily be converted to cubic
meters per second (1 cfs 0.02832 cms).
In[21]:= data 0.02832 data
40
60
80
Out[23]= -Graphics-
The Fourier transform is obtained just as above except that we will subtract the mean
value of data.
In[24]:= fft Fourierdata Meandata
We are not interested in reproducing the streamow measurements, but would like
to identify the predominant frequency. A reasonable guess might be that it is one
cycle per year, To nd out if this is correct, plot the amplitude spectrum
2 Absfft2
In[25]:= ListStemPlot
, 0.015, PlotRange
All,
len
AxesLabel
"", "Amplitude"
316
From In[25]:=
Amplitude
12
10
8
6
4
2
20
40
60
80
Out[25]= -Graphics-
In[26]:= ListStemPlot
From In[26]:=
Power
350
300
250
200
150
100
50
20
40
60
80
Out[26]= -Graphics-
The largest amplitude and power are associated with the a frequency of 9 1 8
cycles per data length, corresponding to the eight years of record. A second prominent but weaker peak in the amplitude spectrum occurs for a frequency of 17 1
16 cycles per data length, or 2 cycles per year. In many applications the power
spectrum is extremely noisy, in part because the effect of high-power frequencies
can leak into adjacent lower power frequencies. Although we will not discuss the
details, improved power spectra can be generated by using various smoothing processes.
317
This data set is an example of one with a non-zero mean, but we have already
subtracted the mean value and can proceed to calculate the variance.
len
In[27]:=
1
Absffti2
len 1 i1
Out[27]= 141.125
The result is the same as that calculated using the Variance function.
In[28]:= Variancedata
Out[28]= 141.125
If the objective is to calculate the variance of a data set in the simplest way possible, then using Variance is much easier than subtracting a mean value, taking a
Fourier transform, and then summing squares. But, it is very useful to understand
that the range of values plotted in a power spectrum is closely related to the variance of the data set. The signicance of the power of any particular frequency, for
example, can be expressed as the ratio of the power to the total variance of the data
set. The variance associated with a frequency of 8 cycles per data length is in this
case
In[29]:=
Absfft92
len
Out[29]= 371.367
This is substantially higher than the variance of the entire data set, which is approximately 141. Because we are working with two variances, the null hypothesis that
their ratio is not signicantly different than 1 can be tested using an F ratio test (see
Chapter 4). First, calculate the ratio of variances.
In[30]:= fratio %/Variancedata
Out[30]= 2.63148
Then, use the ratio in FRatioPValue. Notice that the numerator, which is the
spectral power for a frequency of 8, has two degrees of freedom because it contains
to parts: one real and one imaginary.
In[31]:= FRatioPValuefratio, 2, len 1
Out[31]= OneSidedPValue 0.0772141
There is about an 8% chance of committing a Type I error if we reject the null hypothesis, which is greater than the standard value of 5% level of signicance that
is considered acceptable in many scientic problems. The power of the 3 cycles
per data length frequency seems, however, to be substantially greater than any of
the other amplitudes in the power spectrum. Why, then, is its p value low enough
that the null hypothesis cannot be rejected at the standard 0.05 level? The reason
is that the small number of degrees of freedom associated with the power of each
318
frequency makes the estimate of the variance very uncertain. In this case, the condence interval for the variance ratio has a very wide range
In[32]:= FRatioCIfratio, 2, len 1, ConfidenceLevel
0.95
Out[32]= 0.686013, 103.91
Thus, if we want to be 95% certain about the F ratio we can only say that it lies
somewhere between 0.69 and 104. That is indeed a very uncertain estimate! We can
also test for the signicance of the smaller peak at a frequency of 16 cycles per data
length. Its p value is
In[33]:= FRatioPValue
Absfft172
Variancedata,
len
2, len 1
Therefore, it is unlikely that this frequency is signicantly different than the background noise. The condence interval of this F ratio is:
Absfft172
Variancedata,
len
2, len 1, ConfidenceLevel
0.95
In[34]:= FRatioCI
t is known as the lag. Unless a time series consists of truly random values, it is
reasonable to expect that values close to each other in time (small lag) will be more
similar to each other than those separated by large lags. The covariance between n
pairs of two variables x and y is dened as
1
Covx, y
x xyi y
n1 i i
but can be modied to compare instances of the same variable separated by a lag
t, in which case it becomes the autocovariance. As is done for the variance, the
covariance function uses a denominator of n 1 to produce an unbiased estimate.
ACovx, y
319
1
x xxi
t x
n1 i i
The correlation between two variables is the covariance divided by the product of
their standard deviations, or
x xyi y
1 i i
Corrx, y
n1
sx sy
Thus, the autocorrelation is by analogy
x xxi
t x
1 i i
ACorrx, y
n1
s2 x
The Mathematica package Statistics`MultiDescriptiveStatistics` includes the functions Covariance and CovarianceMLE, which calculate unbiased and maximum likelihood (population) estimates of the covariance from a data set, as well as
Correlation (which we have already used).
Be aware that the use of the terms autocovariance and autocorrelation is not
standardized and can be very confusing. Some authors, for example, use autocorrelation to describe a population statistic and serial autocorrelation to describe the
corresponding sample statistic. Other authors have used serial correlation to mean
the correlation between two different time series, whereas others use the term crosscorrelation for the same purpose. The convention here will be to use autocorrelation
to refer to both population and sample statistics. For time series with more than a
handful of data, the difference between the two will generally be inconsequential.
To illustrate the calculation of autocorrelation values, we will use the Palouse
River monthly peak discharge measurements that
t
were previously assigned to the
name data. The function below calculates the autocorrelation as a function of lag.
As in the examples above, we will not explicitly consider the independent variable
(time in this case) but will instead use the length of the data set and assign the
discharge measurements time values of 1, 2, 3, and so forth. The function below
calculates the autocorrelation of data set y at lag
t
In[35]:= AutoCorrelationy_, t_
Covariancey, RotateLefty, t
Return
Variancey
The built-in function RotateLe ft shifts the values in y one place to the left. The
autocorrelation of data for a lag of, say, 8 is thus
In[36]:= AutoCorrelationdata, 8
Out[36]= 0.316315
320
As with most time series data, it is difcult to glean much useful information from
this list of numbers. Using ListStemPlot to visualize the values produces an
autocorrelogram.
In[38]:= ListStemPlottemp, 0.015, PlotRange
All,
AxesLabel
"t", "AutoCorr"
From In[38]:=
AutoCorr
1
0.8
0.6
0.4
0.2
20
40
60
80
-0.2
-0.4
Out[38]= -Graphics-
321
six months apart tend to be the most different from each other. The same results can
be obtained using Mathematicas Correlation function.
In[39]:= ClearAutoCorrelation
In[40]:= AutoCorrelationy_, t_
ReturnCorrelationy, RotateLefty, t
The approach is the same as in the previous function, except that the variance
of y is not explicitly calculated in the function. One advantage of the rst approach
is that, although it is slightly more complicated, the Covarianceand Variance
functions can be replaced with CovarianceMLE and AutoCovarianceMLE
if there is a need to distinguish between the population and sample statistics. As
shown below, the second implementation produces results identical to the rst in
this example. In this example the calculation of the results is accomplished within
the ListStemPlot function.
In[41]:= ListStemPlotTableAutoCorrelationdata, k,
k, 0, len 1
, 0.015, AxesLabel
"t", "AutoCorr"
From In[41]:=
AutoCorr
1
0.8
0.6
0.4
0.2
20
40
60
80
-0.2
-0.4
Out[41]= -Graphics-
322
ters because they eliminate the need to explicitly consider Fourier transforms when
performing convolutions.
8.5.1 First Differences
Trends can be removed from time series by calculating rst differences, which are
dened as xt
t xt . To illustrate, consider a contrived data set consisting of both a
sine wave and a trend.
In[42]:= pseudodata Table0.1 t Sin2 t/10., t, 0, 100
In[43]:= ListPlotpseudodata, PlotJoined
True
From In[43]:=
10
8
6
4
20
40
60
80
100
Out[43]= -Graphics-
20
40
-0.2
-0.4
Out[44]= -Graphics-
60
80
100
323
Notice that calculating rst differences preserves the frequency content, but not the
amplitude, of the time series. As such, it will be a useful technique in situations
where the goal is to understand the periodicity of a time series without regard to its
amplitude(s).
Another way to calculate rst differences is to use ListConvolve or
ListCorrelate. The function ListConvolvek, y applies the kernel k,
which is a list or matrix, term-by-term to the data set y by calculating kr ysr for
r
Because the kernel contains three terms, it cannot be applied to the rst and last
elements of x and the resulting list contains only two terms. The related function
ListCorrelate applies the kernel the the list in a forward direction by calculating kr ysr .
r
In[46]:= ListCorrelatea, b, c
, x1, x2, x3, x4
Out[46]= a x1 b x2 c x3, a x2 b x3 c x4
Out[48]= 1, 1
which, after clearing the previous denition of x, produces the following result for
each element in the data set
In[49]:= Clearx
In[50]:= ListCorrelatek, xt , xtt
Out[50]= xt xt
t
324
From In[51]:=
0.6
0.4
0.2
20
40
60
80
100
-0.2
-0.4
Out[51]= -Graphics-
Computer Note: Show that reversing the kernel and using ListConvolve
will produce identical results.
The results are the same, so why go to the trouble of using ListCorrelate or
ListConvolve? The answer is that they are much faster. To illustrate this point,
we can repeat the rst difference example 1000 times and have Mathematica return
the elapsed time (performing it only once would return a time of zero)
In[52]:= Timing
Do
ListCorrelatek, pseudodata, i, 1000
Out[52]= 0.13 Second, Null
Out[53]= 1.28 Second, Null
325
Its use can be illustrated by applying it to a noisy periodic pseudodata set generated
by adding random noise to a sine curve.
In[55]:= pseudodata2 TableSin2 t/10.
RandomReal, 0.8, 0.8
, t, 1, 100
40
60
80
100
-0.5
-1
-1.5
-Graphics-
The 3-term moving average of pseudodata2, which helps to reduce the noise, is
In[57]:= ListStemPlotListConvolveMovingAvg3, pseudodata2,
0.015
326
From In[57]:=
1
0.5
20
40
60
80
100
-0.5
-1
Out[57]= -Graphics-
0.5
20
40
60
80
-0.5
Out[59]= -Graphics-
The 3-term and, especially, the 5-term moving averaged time series give a good
indication of the periodicity in the underlying signal. They do not, however, do a
very good job of uncovering the amplitude of the signal.
More sophisticated smoothing lters can assign different weights to the adjacent
values. One approach is to use a Gaussian bell-shaped curve to assign weights, as
illustrated below for a 5-term smoothing lter. Notice that the sum of the elements
is unity.
327
0.5
20
40
60
80
-0.5
Out[61]= -Graphics-
Computer Note: The effective width of the Gaussian smoothing lter can be
controlled by adjusting the standard deviation in the normal distribution used to
generate the lter. Modify the lter above so that operates on 3 and 7 terms, in
each case with the outermost terms each having a weight of approximately 0.05.
328
0.5
20
40
60
80
100
-0.5
-1
Out[64]= -Graphics-
In this case, the high frequency component is unwanted noise. In other cases, the
high frequency component can represent signicant features, for example the edges
of objects in an image.
329
Computer Note: Readers using the digital version of this text should change
the le path above to reect the location of the image le on their hard drive or
CD.
The size of the multidimensional array containing the image data is found using
Dimensions. The results show 474 rows and 374 columns in each of three color
layers (red, green, and blue). A gray scale image would have only one layer.
In[66]:= Dimensionspicture1, 1
Out[66]= 474, 374, 3
Out[67]= -Graphics-
The ImageSize option is used to control the size of the displayed image, in this
case reducing its original size by 1/2, or by clicking on the image and dragging the
handles with a mouse. The default ImageSize values are the numbers of rows and
columns in the image, with each row or column representing one printers point on
the computer monitor or printed page. The option ImageResolution can also be
used to specify the resolution of the image in pixels per inch. The color image can be
displayed as a gray scale image using the option ColorOut put
GrayLevel.
In order to perform any image processing, the Mathematica graphics object will
have to be converted into an array, or list of lists, using the command below:
In[68]:= picture picture /. Graphics
List
The red, green, and blue values for each pixel are contained in the rst element of
picture, and can be extracted and assigned to their own variable name.
330
As did the original image, picturevalues contains 474 rows and 374 columns
of red, green, and blue (RGB) values. See Appendix C or the Mathematica documentation for more information about working with color in Mathematica.
In[70]:= Dimensionspicturevalues
Out[70]= 474, 374, 3
Sections of an image can be isolated using Take. For example, the following
line extracts a 100 by 100 pixel section of the red channel.
In[72]:= Tablepicturevaluesr, c, 1, r, 200, 300
,
c, 200, 300
In[73]:= ListDensityPlot%, Mesh
False, AspectRatio
1
From In[73]:=
100
80
60
40
20
0
0
20
40
60
80
100
Out[73]= -DensityGraphics-
331
400
300
200
100
0
0
50
Out[74]= -DensityGraphics-
As with any other data set, the range of red values can also be visualized with a
histogram of the attened picturevalues array.
In[75]:= HistogramFlattenpicturevaluesAll, All, 1,
ColorOutput
GrayLevel
From In[75]:=
1400
1200
1000
800
600
400
200
50
100
Out[75]= -Graphics-
150
200
250
332
green
blue
Out[76]= -GraphicsArray-
The lighter (larger) values in the red channel image suggest that the composite RGB
image has a reddish color. As anyone who is viewing the original image on a color
computer monitor can attest, it is indeed primarily red.
Now that we have converted the image to an array of numbers and disassembled it layer-by-layer, it will be helpful to know how to return it to a graphics
object. This can be done by applying the function RGBColor to each element in
picturevalues (which must be divided by 255 because RGBColor accepts arguments only in the range of 0 to 1), and then putting the result into a Mathematica
variable known as a raster array.
333
The # and & characters in the statement above are shorthand notation for a Mathematica pure function. In essence, it selectively applies the RGBColor function to
the values contained in picturevalues, assigning the rst elements to the red
channel, the second to the green channel, and the third to the blue channel. Consult the printed or online Mathematica documentation for more information on pure
functions. Such a statement might be used after an image is imported, manipulated,
and ready to be exported as a graphics le using Export or to recombine the
three separate channels into one color image. The following statement exports the
re-converted image as a jpeg le:
In[78]:= Export"/Users/bill/Mathematica_Book/wheeling.jpg",
reconvertedpicture, "JPEG"
Out[78]= /Users/bill/Mathematica_Book/wheeling.jpg
You will of course want to use a le name and path, as well as le format, of your
own choosing. The printed and on-line documentation includes details about the
graphics formats supported by Mathematica and their options. The jpeg format, for
example, includes options to set the color space (RGB or gray level), image quality
(default is 75 out of 100), smoothing, and whether or not to create a progressive jpeg
le.
Computer Note: The statement
ShowGraphicsRasterArrayApply
RGBColor#1, #2, #3&,
picturevalues/255., 2
, AspectRatio
474/374.
can be used to convert picturevalues into a graphics object and display the
result.
8.6.2 Basic Mathematical Operations
Now that the image has been transformed into a set of numbers, any Mathematica
numerical function can be applied to them. To create a negative image, multiply any
of the channels by 1.
In[79]:= ListDensityPlot picturevaluesAll, All, 1,
Mesh
False, AspectRatio
474/374.
334
From In[79]:=
400
300
200
100
0
0
50
Out[79]= -DensityGraphics-
Similarly, we can square the red channel values to see what effect that operation will
have.
In[80]:= ListDensityPlotpicturevaluesAll, All, 12 ,
Mesh
False, AspectRatio
474/374.
From In[80]:=
400
300
200
100
0
0
50
Out[80]= -DensityGraphics-
335
The result of squaring the red channel is to produce a darker plot, which may be
surprising at rst glance. ListDensityPlot plots large values as light colors,
so why does squaring the red channel darken instead of lighten the plot? The reason
is that, whereas the numerical values in each channel range from 0 to 255, Mathematica scales them to gray scale or RGB values ranging from 0 to 1 before plotting.
Thus, squaring a value less than 1 produces a smaller number and the image is darkened. Readers who have their own copies of Mathematica may wish to see what
effect squaring the red channel has on the color image. This can be done with the
following statement:
In[81]:= Show
Graphics
RasterArrayApplyRGBColor#12 , #2, #3&,
picturevalues/255., 2
, AspectRatio
474/374.
From In[81]:=
Out[81]= -Graphics-
For those who are reading the paper copy of this book and do not have access to
the color images in the digital version, the effect of squaring the red channel was to
signicantly reduce the red hue of the image. Forested hills that were brownish red
are now green, and the red areas appear to be restricted to grassy elds that were
bright red in the original image. Areas of bare soil or rock remain light pink to white.
The PlotRange option can be used to control the contrast of an image. Decreasing PlotRange will increase contrast:
In[82]:= ListDensityPlotpicturevaluesAll, All, 1,
Mesh
False, AspectRatio
474/374.,
PlotRange
127.5 50, 127.5 50
336
From In[82]:=
400
300
200
100
0
0
50
Out[82]= -DensityGraphics-
400
300
200
100
0
0
50
Out[83]= -DensityGraphics-
337
8.6.3 Thresholding
One common image processing technique is thresholding, in which values below
a threshold are all changed to a constant value and those above the threshold are
changed to another constant value. For example, the red channel of picture can
be thresholded in a way such that values below 128 are changed to black (0) and
values above 128 are changed to white (1). This will have the effect of changing the
continuous tone gray level image into one that is truly black and white. First, create
table with the same number of rows and columns as picturevalues.
In[84]:= thresholdvalues Table0., r, 474
, c, 374
This approach is similar to the one that we used to produce the landslide hazard
maps in Chapter 7. Here is the result of the thresholding:
In[86]:= ListDensityPlotthresholdvalues, Mesh
False,
AspectRatio
474/374.
From In[86]:=
400
300
200
100
0
0
50
Out[86]= -DensityGraphics-
338
The lter is applied exactly as it was for the 1-D time series, in this case specifying
that only the red channel is to be smoothed. The green and blue channels will remain
unchanged. As shown below, the result is a blurred image.
In[88]:= ListDensityPlotListConvolvek,
picturevaluesAll, All, 1,
AspectRatio
474/374., Mesh
False
From In[88]:=
400
300
200
100
0
0
50
Out[88]= -DensityGraphics-
The Gaussian smoothing lter can also be extended to 2-D images, and variations are included as Gaussian blur lters in many image processing programs.
The Mathematica package Statistics`MultinormalDistribution` contains functions
that can be used to develop a 2-D Gaussian PDF.
In[89]:= k TablePDFMultinormalDistribution0., 0.
,
1., 1. 3.,
1. 3., 1.,
x, y
, x, 2, 2
, y, 2, 2
339
From In[89]:=
Out[89]=
0.0154362, 0.0259108, 0.0097047, 0.000811038, 0.0000151237,
0.0259108, 0.103403, 0.0920757, 0.0182942, 0.000811038,
0.0097047, 0.0920757, 0.194924, 0.0920757, 0.0097047,
0.000811038, 0.0182942, 0.0920757, 0.103403, 0.0259108,
0.0000151237, 0.000811038, 0.0097047, 0.0259108, 0.0154362
The multinormal distribution has two mean values and a covariance matrix instead
of a standard deviation. The PDF above is the 2-D equivalent of a standard normal
distribution with zero mean and unit variance. Summing the elements in kwill yield
a value greater than 0.98, which is close to the value of 1 that must be obtained for
any legitimate PDF. As illustrated in the plot below, the use of a Gaussian smoothing
lter produces a result that is visually similar to that produced by a simple 3 by 3
term moving average.
In[90]:= smoothplot ListDensityPlotListConvolvek,
picturevaluesAll, All, 1,
AspectRatio
474/374., Mesh
False
From In[90]:=
400
300
200
100
0
0
50
Out[90]= -DensityGraphics-
Although it may not be obvious why anyone would want to blur a perfectly
good image, it turns out that the ability will become very useful. Blurring can be
an important part of image sharpening and also used to pre-process noisy images
before applying edge detection lters.
340
Next, start to create the unsharp mask using the Gaussian kernel developed in the
previous section.
In[92]:= smooth ListConvolvek, picturevaluesAll, All, 1
The results of the Gaussian kernel are the same as those shown in the previous section. The unsharp mask will be some fraction of the difference between original
and smooth, which will tend to emphasize boundaries and edges. In this case, the
unsharp mask with a constant of 0.5 looks like this:
In[93]:= ListDensityPlot0.5 original smooth,
Frame
False, Mesh
False, AspectRatio
470/370.
From In[93]:=
Out[93]= -DensityGraphics-
341
Finally, the unsharp mask is subtracted from the original. In this example, it is shown
side-by-side with the original image for comparison.
In[94]:= Show
GraphicsArray
ListDensityPlotoriginal, Frame
False,
Mesh
False, AspectRatio
474/374.,
PlotLabel > "original",
DisplayFunction
Identity,
ListDensityPlotoriginal0.5original smooth,
Frame
False,
Mesh
False, AspectRatio
474/374.,
PlotLabel > "sharpened",
DisplayFunction
Identity
,
DisplayFunction
$DisplayFunction
From In[94]:=
original
sharpened
Out[94]= -GraphicsArray-
Look closely at the two images and you will see that the sharpened image has greater
detail in the mid-tones and shadows, and well as crisper boundaries between light
and dark areas. While unsharp masking will not perform miracles on poorly focussed images, it can add a signicant degree of sharpness to images that are already
in focus. If you are following these examples on your own computer, click on the
image above and drag one of the handles to enlarge the image and examine it in
more detail.
Computer Note: Experiment with values other than 0.5 to determine which is
best for this image.
342
and then use ListConvolve to convolve the kernel with the image.
In[96]:= ListDensityPlotListConvolvek,
picturevaluesAll, All, 1,
AspectRatio
474/374., Mesh
False
From In[96]:=
400
300
200
100
0
0
50
Out[96]= -DensityGraphics-
343
Although edges stand out weakly in the ltered image, they are not strong because
the original image contains many high frequency details. If the high frequency components were unwanted, they would be called noise. In this case, however, they are
desirable and we will call them details. Regardless of the name we choose, smoothing will tend to make the detected edges stronger. This can be illustrated using the
array smooth that we created for unsharp masking. We will want to make use of the
results several times, so the rst step will be to apply the Laplace lter to smooth
and then assign the result to the variable name smoothlaplace.
In[97]:= smoothlaplace ListConvolvek, smooth
400
300
200
100
0
0
50
Out[98]= -DensityGraphics-
Although linear features such as the river and roads stand out on this image, it is
because they are either white or black and not because their edges have been clearly
delineated. As shown in the histogram below, there are relatively few pixels with
strong positive or negative curvature and many with near-zero curvature. In other
words, the image contains many edges even though it was smoothed in an attempt
to remove details.
344
In[99]:= HistogramFlattensmoothlaplace
From In[99]:=
3000
2500
2000
1500
1000
500
-100
-50
50
100
Out[99]= -Graphics-
Out[100]= 1., 2., 1., 0., 0., 0., 1., 2., 1.
In[101]:= xgrad 1., 0., 1.
, 2., 0., 2.
, 1., 0., 1.
Out[101]= 1., 0., 1., 2., 0., 2., 1., 0., 1.
Sobel edge detection ltering is accomplished by applying the two gradient kernels
in succession.
In[102]:= smoothsobel ListConvolveygrad,
ListConvolvexgrad, smooth
The result is an image in which the edges are more pronounced than in the Laplace
lter example, although there are some conspicuous diagonal artifacts in the image. The primary reason that the edges stand out so clearly is that the are the
highest (lightest pixels) and lowest (darkest pixels) values instead of mid-range
values.
In[103]:= ListDensityPlotsmoothsobel, AspectRatio
466/366.,
Mesh
False
345
From In[103]:=
400
300
200
100
0
0
Out[103]= -DensityGraphics-
Here is a density plot of the interpolated surface, which does a good job of reproducing the smoothed image. The column iteration was listed before the row iteration
to ensure that the image would appear in the correct orientation.
In[105]:= DensityPlotsmoothinterpr, c, c, 1, 370
, r, 1, 470
,
PlotPoints
400, Mesh
False,
AspectRatio
470/370.
346
From In[105]:=
400
300
200
100
0
0
50
Out[105]= -DensityGraphics-
The equivalent of a Sobel edge detection lter can be applied by differentiating the
interpolated surface with respect to the row and column coordinates. Negative signs
are included to be consistent with the Sobel lters used above. The resulting image
is sharper than that created by the discrete Sobel lters, particularly with regard to
the strong diagonal artifacts that were so apparent in the Sobel ltered image. Interpolation also allows the resolution of the image to be increased (although sharpness
does not increase because a smooth curve is interpolated between known values). A
drawback to the use of interpolated surfaces, however, is that the calculations can
be much slower than numerical convolution.
In[106]:= DensityPlotEvaluate Dsmoothinterpr, c, c
Dsmoothinterpr, c, r,
c, 1, 370
, r, 1, 470
, PlotPoints
500,
Mesh
False, AspectRatio
470/370.
347
From In[106]:=
400
300
200
100
0
0
50
Out[106]= -DensityGraphics-
Appendix A
Mathematica Functions
in the Computational Geoscience Package
A.1 Introduction
The Computational Geosciences with Mathematica package, located in the le
CompGeosci.m on the CD accompanying this book, contains a number of functions that are too long to be conveniently listed in the text. This Appendix contains
a list of all of those functions, a brief description of each, and the chapter in which
the function is rst introduced. Consult those chapters for details on the the use of
the functions.
Mathematica packages can be generated using either a text editor or creating a
Mathematica notebook and using the Save As Special menu item under File to export them in package format. The CompGeosci package was created using the Save
As Special method, and the notebook from which the package was created (BookFunctions.nb) is provided on the accompanying CD. If you are interested in creating
your own package, consult The Mathematica Book or the online documentation for
instructions for authors of Mathematica packages.
350
351
Appendix B
Working with Color
Out[108]= -GraphicsArray-
Although Hue is simple to use, one major drawback is that its range of colors begins and ends with red. If a contour or density plot were to be colored using the
unmodied Hue function, both the lowest and highest contour intervals would be
identical in color. This is rarely desirable. One way to avoid this problem is to scale
the values that are used in Hue. For example, the following statement scales Hue
by 0.8 and produces a range of colors from red (Hue0) to violet (Hue0.8).
In[109]:= Show
GraphicsArray
TableGraphicsHue0.8i,
Rectangle0, 0
, 1, 1
, i, 0, 1, 0.1
W. C. Haneberg, Computational Geosciences with Mathematica
Springer-Verlag Berlin Heidelberg 2004
354
Out[109]= -GraphicsArray-
To show only the upper portion of the hue spectrum, rescale and shift the argument
in Hue, as illustrated below, to produce values that range from orange (Hue0.1)
to red (Hue1).
In[110]:= Show
GraphicsArray
TableGraphicsHue0.1 0.9i,
Rectangle0, 0
, 1, 1
, i, 0, 1, 0.1
Out[110]= -GraphicsArray-
Any combination of rescaling and shifting of the value passed to Hue is allowed as
long as the result falls between 0 and 1.
Hue can also be used with three arguments, the latter two specifying the saturation and brightness of the color. If only one argument is used in Hue, Mathematica
assumes that the saturation and brightness values are both 1. Reducing the saturation
in the color bar above to 0.5, for example, produces the following range of colors:
In[111]:= Show
GraphicsArray
TableGraphicsHue0.1 0.9i, 0.5, 1.,
Rectangle0, 0
, 1, 1
, i, 0, 1, 0.1
Out[111]= -GraphicsArray-
The grid below shows how changing the saturation (ranging from 0 in the top row
to 1 in the bottom row) and brightness (ranging from 0 in the left-most column to 1
in the right-most column) changes the appearance of a rectangle with a hue of 0.7.
In[112]:= Show
GraphicsArray
TableGraphicsHue0.7, i, j,
Rectangle0, 0
, 1, 1
, i, 0, 1, 0.1
,
j, 0, 1, 0.1
355
Out[112]= -GraphicsArray-
Notice that about half the rectangles, which correspond to small saturation or brightness values, appear to be black or gray. Therefore, it is important to use relatively
high saturation and brightness values (say, greater than 0.5) if the resulting colors
are to be distinguishable.
B.5.2 Red, Green, and Blue (RGB)
The second way to specify colors is to use the RGBColor function, which takes
as its arguments the intensity of the red, green, and blue primary colors of transmitted light. Computer monitors and televisions typically create images using RGB
colors. As with Hue, the value for each component can range between 0 and 1.
Pure red, for example, would be RGBColor1, 0, 0 whereas pure blue would
be RGBColor0, 0, 1. Unlike Hue, RGBColor can also be used to specify black (RGBColor0, 0, 0) and white (RGBColor1, 1, 1). The Mathematica statement below, which is a two-dimensional version of the statements used
above to illustrate the Hue function, shows an array of colors for the red and green
components ranging from 0 (top row and left-most column) to 1 (bottom row and
right-most column) in increments of 0.1 while holding the blue component xed at
0.5. Thus, the dark blue rectangle in the upper left-hand corner was drawn using
RGBColor0, 0, 0.5 and the yellow rectangle in the lower right-hand corner
was drawn using RGBColor1, 1, 0.5.
In[113]:= Show
GraphicsArray
Table
GraphicsRGBColori, j, 0.5,
Rectangle0, 0
, 1, 1
,
i, 0, 1, 0.1
, j, 0, 1, 0.1
356
Out[113]= -GraphicsArray-
Computer Note: Modify the previous statement to draw a series of color grids,
similar to that above, in order to illustrate the complete range of RGB colors.
For example, you might let all three of the color components vary from 0 to 1 in
increments of 0.2.
Guessing the RGB components of a color that you might want to use in a plot
can be a tricky process. One way to obtain the color you want is to make your
own RGB color chart (use the instructions in the preceeding Computer Note), or to
nd a color chart in a computer graphics book or web site. Note that many color
charts will show colors with RGB components ranging in value from 0 to 255. To
recreate these colors in Mathematica, just divide each value by 255 before using
it as an argument in RGBColor. Another way to explore RGB colors is to use
the simple user-dened function that draws a rectangle having the specied RGB
components.
In[114]:= RGBViewerr_, g_, b_
ShowGraphicsRGBColorr, g, b,
Rectangle0, 0
, 1, 1
The RGB combination 0.7, 0.2,0.8, for example, produces the bright purple color
shown below.
In[115]:= RGBViewer0.7, 0.2, 0.8
357
Out[115]= -Graphics-
The add-on package Graphics`Colors` contains a list of 193 RGB color specications with names like CinnabarGreen, DarkOrchid, GeraniumLake,
VenetianRed and PapayaWhip. Once the Colors package is loaded, typing in
a color name will return its RGB specication. For example,
In[116]:= VenetianRed
Out[116]= RGBColor0.829997, 0.099994, 0.119999
To see the complete list of the predened colors, enter the variable name
AllColors. You can preview these predened colors with a one-line statement
that is very similar to the RGBViewer function dened above.
In[117]:= ShowGraphicsVenetianRed,
Rectangle0,0
, 1,1
Out[117]= -Graphics-
358
Out[118]= -Graphics-
359
0.5
-0.5
-1
-Graphics-
To draw the entire plot in a specic color, change the De faultColor option from
black (the default value) to the color of your choice.
In[120]:= PlotSinx, x, 0, 2
, DefaultColor
Hue0.6
1
0.5
-0.5
-1
Out[120]= -Graphics-
Any valid HSB, RGB, or CMYK color (or gray level) specication could have been
used in these examples. Color specications can be combined with plot options such
as Dashing, Thickness, or, in the case of list plots, PointSize to produce a
variety of effects. Similarly, axis, frame, and background colors can be specied
using AxesStyle, FrameStyle, and Background.
360
0.5
-0.5
-1
0
Out[121]= -Graphics-
0
0
Out[122]= -ContourGraphics-
361
A default contour or density plot is lled with a range of gray levels, but color can be
used by specifying a color function. The simplest way to color a contour or density
plot is to use Hue without an argument.
In[123]:= ContourPlotSinx Siny, x, 0, 2
, y, 0, 2
,
ColorFunction
Hue, PlotPoints
40
6
0
0
Out[123]= -ContourGraphics-
As illustrated above, though, using Hue produces a plot in which both the lowest and
highest contour intervals are colored red. Hue can be scaled or shifted, as described
in the rst section of this appendix, to alleviate the problem. To do so requires that
Hue be incorporated into a color function that we will call ScaledHue.
In[124]:= ScaledHuez_ Hue0.8 z
As above, any argument used in a color function must be within the range of 0 to 1
and values outside of this range will produce an error message. The newly dened
color function can now be used as an option in ContourPlot or DensityPlot.
In[125]:= ContourPlotSinx Siny, x, 0, 2
, y, 0, 2
,
ColorFunction
ScaledHue , PlotPoints
40
362
0
0
Out[125]= -ContourGraphics-
363
0
0
Out[127]= -ContourGraphics-
More complicated color functions can be derived by specifying the RGB (or
HSB or CMYK) colors for several values of the color scale and then using
Interpolate. Say that you would like to use a red to green color function, similar to that above, but replace the middle brown with white. Therefore, the smallest values shown would have a color of RGBColor0, 1, 0, the middle values
would have a color of RGBColor1, 1, 1, and the largest values would have a
color of RGBColor1, 0, 0. The rst step in dening the color function is to
interpolate a series of three curves, which we will call RedCurve, GreenCurve,
and BlueCurve, by specifying each of the three components for values of z 0,
z 0.5, and z 1.
In[128]:= RedCurve Interpolation0., 0.
, 0.5, 1.
, 1., 1.
,
InterpolationOrder
1
Out[128]= InterpolatingFunction0., 1., <>
The option InterpolationOrder
1 is used for two reasons. First, the default
value of 3 cannot be used if there are only three data points, because n points are
required to interpolate a polynomial of order n 1. Second, although it would have
been possible use an interpolation order of 2, in this case the result would be a
parabolic curve with values that exceed 1 between the three data points. Therefore,
the best strategy is generally to use linear interpolation. The form of RedCurve
can be illustrated by plotting it.
364
0.2
0.4
0.6
0.8
Out[129]= -Graphics-
,
InterpolationOrder
1
Out[130]= InterpolatingFunction0., 1., <>
In[131]:= PlotGreenCurvez, z, 0, 1
,
PlotStyle
RGBColor0, 1, 0
1
0.8
0.6
0.4
0.2
0.4
0.6
0.8
Out[131]= -Graphics-
Neither red green contain any blue, so the blue curve will have a value of zero for
input values of 0 and 1. The middle input value of 0.5 is white, so the red, green,
and blue curves must all have a value of 1.
In[132]:= BlueCurve
Interpolation0., 0.
, 0.5, 1.
, 1., 0.
,
InterpolationOrder
1
Out[132]= InterpolatingFunction0., 1., <>
0.2
0.4
0.6
0.8
Out[133]= -Graphics-
and a contour plot made using GWR as a color function looks like this:
In[135]:= ContourPlotSinx Siny, x, 0, 2
, y, 0, 2
,
ColorFunction
GWR, PlotPoints
40
6
0
0
Out[135]= -ContourGraphics-
365
366
RainbowReverse
BrownGreenWhite
RedYellowGreen
RedWhiteGreen
367
1
0.5
0
-0.5
-1
0
2
4
6
Out[136]= -SurfaceGraphics-
The coloration of the surface has nothing to do with the z value (height of the surface). It is controlled solely by the relationship of different portions of the surface
to the three light sources. The default lighting sources for all 3-D graphics are:
In[137]:= OptionsGraphics3D, LightSources
Out[137]= LightSources 1., 0., 1., RGBColor1, 0, 0,
1., 1., 1., RGBColor0, 1, 0,
0., 1., 1., RGBColor0, 0, 1
The rst part of each light source is a directional variable given in coordinates relative to the nal display area, not the coordinate axes of the plot, with x and y in
the plane of the image and z perpendicular to it (with z to the front of the image).
The second part of each light source is a color, which can be specied using Hue,
RGBColor, or (for grayscale images) GrayLevel. Make a quick sketch illustrating the placement of the default light sources. A surface plot lighted with only a
single red source located at {1., 1., 1.} looks like this:
In[138]:= Plot3DSinx Siny, x, 0, 2
, y, 0, 2
,
PlotPoints
40, LightSources
1., 1., 1.
,
RGBColor1, 0, 0
368
1
0.5
0
-0.5
-1
0
2
4
6
Out[138]= -SurfaceGraphics-
Notice that, because there is a single discrete light source, the surface contains shadows. We can add a blue light source from the other direction to see what effect it
will have on the plot.
In[139]:= Plot3DSinx Siny, x, 0, 2
, y, 0, 2
,
PlotPoints
40, LightSources
1., 1., 1.
,
RGBColor1, 0, 0
, 0., 0., 1.
,
RGBColor0, 0, 1
1
0.5
0
-0.5
-1
0
2
4
6
Out[139]= -SurfaceGraphics-
Three-dimensional graphics objects can also be lighted with a uniform, or ambient, light source that is specied as a single hue, RGB color, or (for grayscale
graphics) gray level. The plot below shows the sinx
n sin y plot with ambient red
light, which produces a slightly different effect than the use of a single discrete red
light source. Here is a surface plot with ambient red light (the default light sources
are be turned off by giving the option LightSources as an empty list).
369
1
0.5
0
-0.5
-1
0
2
4
6
Out[140]= -SurfaceGraphics-
Adding a single white light source from the upper right front {1., 1., 1.} counteracts
the ambient light and adds some shadows.
In[141]:= Plot3DSinx Siny, x, 0, 2
, y, 0, 2
,
PlotPoints
40, AmbientLight > RGBColor1, 0, 0,
LightSources
1., 1., 1.
, RGBColor1., 1., 1.
1
0.5
0
-0.5
-1
0
2
4
6
Out[141]= -SurfaceGraphics-
Finally, restoring the three default light sources produces this lighted by three different color discrete sources as well as ambient red light.
In[142]:= Plot3DSinx Siny, x, 0, 2
, y, 0, 2
,
PlotPoints
40, AmbientLight > RGBColor1, 0, 0
370
1
0.5
0
-0.5
-1
0
2
4
6
Out[142]= -SurfaceGraphics-
Surface Color
Plot3D and ListPlot3D belong to a simplied class of 3-D graphics, known as
Sur faceGraphics, in which each x-y coordinate can be represented as a single
z value. This excludes sufaces that are folded, which would have more than one possible z value for each x-y coordinate. The surface color of a Sur faceGraphics
plot can be specied by a color function that is related to the height of the surface,
for example the color function RainbowReverse from the ComputationalGeoscience package.
In[143]:= Plot3DSinx Siny, x, 0, 2
, y, 0, 2
,
PlotPoints
40, ColorFunction
RainbowReverse
1
0.5
0
-0.5
-1
0
2
4
6
Out[143]= -SurfaceGraphics-
When implemented as in the example above, the color function is applied to the
height of the 3-D surface. This is not the only option. It is also possible to specify
371
that the value of a completely different function be used to color the surface. This
is done by supplying Plot3D with a list of two functions, the second of which
is used to color the surface according to the specied color function. In the example below, the height of the surface is given by the usual sin x sin y but the coloration reects the slope or gradient of the surface. First, dene a variable called
Sur faceGradient.
In[144]:= SurfaceGradient
Next, use Plot3D with two functions, the second of which is the color function
with Sur faceGradient as an argument.
In[145]:= Plot3DSinx Siny,
RainbowReverseSurfaceGradient
,
x, 0, 2
, y, 0, 2
, PlotPoints
40
1
0.5
0
-0.5
-1
0
2
4
6
Out[145]= -SurfaceGraphics-
372
Notice that the color function must be applied to each element within Array2, not
simply to the array as a whole.
In[147]:= Array2 TableRainbowReverseSurfaceGradient,
x, 0., 2 , /20.
, y, 0., 2 , /20.
1
0.5
0
-0.5
40
30
20
10
20
30
40
Out[148]= -SurfaceGraphics-
B.6.4 Graphics3D
Surface Color
The specication of surface colors for general 3-D graphics objects cannot be done using a simple color function because, in general, a 3-D surface does not follow a simple functional relationship. General 3-D surface
color can, however, be specied using the function Sur faceColor. Using
Sur faceColorGrayLeveln will specify the albedo (reectance) of a 3-D
surface. Sur faceColorRGBColorr,g,b will produce a surface over which
the color is given by the specied RGB color multiplied by the cosine of the angle
between the surface and the light source.
Graphics3Dobjects are, in general, created as combinations of 3-D lines and
polygons. It is also possible to convert a Sur faceGraphics object, for example
as created by Plot3D or ListPlot3D, into a Graphics3D object. The statement below transforms the sin x sin y plot from a Sur facegGraphics object
into a Graphics3D object and gives it the name ExampleSur face.
373
In[149]:= ExampleSurface
Graphics3DPlot3DSinx Siny, x, 0, 2
,y, 0, 2
,
PlotPoints
40
1
0.5
0
-0.5
-1
0
2
4
6
Out[149]= -Graphics3D-
The default surface color value, which is illustrated above, is Sur faceColor
GrayLevel1. This corresponds to the albedo of a sheet of plain white paper.
Reducing the albedo produces a much darker plot, as shown below.
In[150]:= Show
Graphics3DSurfaceColorGrayLevel0.6,
ExampleSurface1
Out[150]= -Graphics3D-
374
Notice that Sur faceColor must come before the object to be drawn, not after it. Also, note that the graphics object is given as ExampleSur face1.
Three dimensional graphics objects are actually lists, the rst element of which is
the graphic described in terms of 3-D primitives such as polygons. The second element contains graphics options.
Computer Note: Type ExampleSur face2 to see the list of graphics
primitives comprising the sin x sin y plot and ExampleSur face2 to
see a list of the options.
Specifying Sur faceColor as an RGB color has the same effect of lighting
a Sur faceGraphics plot with a red light. For example, specifying red as the
surface color produces this surface:
In[151]:= Show
Graphics3DSurfaceColorRGBColor1, 0, 0,
ExampleSurface1
Out[151]= -Graphics3D-
Another way to investigate the relationship between surface color and lighting in 3D graphics is to create a white object, for example a pyramid, and then experiment
with different lighting options. Here is the Mathematica statement to create, but not
show, a white pyramid:
375
In[152]:= WhitePyramid
SurfaceColorRGBColor1, 1, 1,
Polygon0.5, 0.5, 0
, 0.5, 0.5, 0
, 0, 0, 1
,
SurfaceColorRGBColor1, 1, 1,
Polygon0.5, 0.5, 0
, 0.5, 0.5, 0
, 0, 0, 1
,
SurfaceColorRGBColor1, 1, 1,
Polygon 0.5, 0.5, 0
, 0.5, 0.5, 0
, 0, 0, 1
,
SurfaceColorRGBColor1, 1, 1,
Polygon 0.5, 0.5, 0
, 0.5, 0.5, 0
, 0, 0, 1
With ambient white light produces a white pyramid even though we have not disabled the default discrete light sources
In[153]:= ShowGraphics3DWhitePyramid,
ViewPoint > 2.771, 0.998, 8.505
, Boxed
False,
AmbientLight
GrayLevel1.
Out[153]= -Graphics3D-
Out[154]= -Graphics3D-
376
Using something other than white ambient light results in surface colors controlled
by a combination of the default discrete light sources and the ambient light color. In
this case, the use of red ambient light gives the pyramid a red to pink tint.
In[155]:= ShowGraphics3DWhitePyramid,
ViewPoint > 2.771, 0.998, 8.505
, Boxed
False,
AmbientLight
RGBColor1, 0, 0
Out[155]= -Graphics3D-
Out[156]= -Graphics3D-
Illuminating the pyramid with a single red light source located to the right, in front
of, and above the pyramid produces a different result.
In[157]:= ShowGraphics3DWhitePyramid,
ViewPoint > 2.771, 0.998, 8.505
, Boxed
False,
LightSources
3.5, 2.4, 5.
, RGBColor1, 0, 0
377
Out[157]= -Graphics3D-
Here is the same pyramid with a discrete white light source added to the left, behind,
and above the pyramid.
In[158]:= ShowGraphics3DWhitePyramid,
ViewPoint > 2.771, 0.998, 8.505
, Boxed
False,
LightSources
3.5, 2.4, 5.
, RGBColor1, 0, 0
,
3.5, 2.4, 2.
, RGBColor1, 1, 1
Out[158]= -Graphics3D-
Finally, turning off both the ambient light and the discrete light sources (by specifying them as an empty list in order to override the default values) produces a black
pyramid.
In[159]:= ShowGraphics3DWhitePyramid,
ViewPoint > 2.771, 0.998, 8.505
, Boxed
False,
LightSources
378
Out[159]= -Graphics3D-
Remember that all of the pyramids shown so far have had white surfaces! The
addition of color to the surface, which can be specied using Sur faceColor,
adds more complexity. As described in the Mathematica documentation, surface
colors can be specied using one, two, or three terms. If only one term is used, it
must be a gray level or RGB color and species that the surface is a diffuse reector
of light with the color as specied. If two terms are used, the second term species
a specular reectance component of the specied color. It imparts shininess to the
surface. An optional third term is a specular reectance exponent (the default value
is 1). The example below uses only a diffusive surface color.
In[160]:= ColoredPyramid
SurfaceColorRGBColor1, 0, 0,
Polygon0.5, 0.5, 0
, 0.5, 0.5, 0
, 0, 0, 1
,
SurfaceColorRGBColor0, 1, 0,
Polygon0.5, 0.5, 0
, 0.5, 0.5, 0
, 0, 0, 1
,
SurfaceColorRGBColor0, 0, 1,
Polygon 0.5, 0.5, 0
, 0.5, 0.5, 0
, 0, 0, 1
,
SurfaceColorRGBColor1, 1, 0,
Polygon 0.5, 0.5, 0
, 0.5, 0.5, 0
, 0, 0, 1
379
Out[161]= -Graphics3D-
The same result is obtained regardless of whether discrete light sources are used. If
the ambient white light is removed, the default discrete light sources produce this
result:
In[162]:= ShowGraphics3DColoredPyramid,
ViewPoint > 2.771, 0.998, 8.505
, Boxed
False
Out[162]= -Graphics3D-
As with the white pyramid, removing both the discrete light sources and the ambient
light will produce a black pyramid.
In[163]:= ShowGraphics3DColoredPyramid,
ViewPoint > 2.771, 0.998, 8.505
, LightSources
,
Boxed
False
380
Out[163]= -Graphics3D-
If the discrete light sources are removed and something other than white ambient
light is used, the results will depend on the surface colors. In the example below, red
ambient light colors the red and yellow faces of the pyramid (because both contain
red; yellow is RGBColor1, 0, 1). The blue and green faces, neither of which
contain any red and therefore will not reect red light, appear black.
In[164]:= ShowGraphics3DColoredPyramid,
ViewPoint > 2.771, 0.998, 8.505
, Boxed
False,
LightSources
, AmbientLight
RGBColor1, 0, 0
Out[164]= -Graphics3D-
Likewise, changing the ambient light to pure blue will blacken all but the blue face.
In[165]:= ShowGraphics3DColoredPyramid,
ViewPoint > 2.771, 0.998, 8.505
, Boxed
False,
LightSources
, AmbientLight
RGBColor0, 0, 1
381
Out[165]= -Graphics3D-
Lighting the surface with other primary colors will produce results that depend
on the RGB content of both the light and the surface (and specular component if
one is used). Below is the colored pyramid illuminated with the Mathematica color
DarkTurquoise.
In[166]:= ShowGraphics3DColoredPyramid,
ViewPoint > 2.771, 0.998, 8.505
, Boxed
False,
LightSources
, AmbientLight
DarkTurquoise
Out[166]= -Graphics3D-
Judging from the results, we can infer that DarkTurquoise contains signicant
amounts of blue and green (because the blue and green faces above closely resemble
those rendered using white light), but no red. This can be conrmed by checking the
RGB composiiton of DarkTurquoise.
In[167]:= DarkTurquoise
Out[167]= RGBColor0., 0.807794, 0.819605