Matlab For Dynamic Modeling
√(a^2 + b^2)). Extracting those from the complete
set produced by eig takes some work. For the dominant eigenvalue:
>> A=[5 1 1; 1 -3 1; 0 1 3]; L=eig(A);
>> j=find(abs(L)==max(abs(L)))
>> L1=L(j);
>> ndom=length(L1);
In the second line abs(L)==max(abs(L)) is a comparison between two vectors, which returns
a vector of 0s and 1s. Then find extracts the list of indices where the 1s are.
The third line uses the found indices to extract the dominant eigenvalues. Finally, length
tells us how many entries there are in L1. If ndom=1, there is a single dominant eigenvalue.
The dominant eigenvector(s) are also a bit of work.
>> [W,D]=eig(A)
>> L=diag(D)
>> j=find(abs(L)==max(abs(L)));
>> L1=L(j);
>> w=W(:,j);
The first line supplies the raw ingredients, and the second pulls the eigenvalues from D into
a vector. After that it's the same as before. The last line constructs a matrix with dominant
eigenvectors as its columns. If there is a single dominant eigenvalue, then L1 will be a single
number and w will be a column vector.
To get the corresponding left eigenvector(s), repeat the whole process on B=transpose(A).
Eigenvector scalings
The eigenvectors of a matrix population model have biological meanings that are clearest
when the vectors are suitably scaled. The dominant right eigenvector w is the stable stage
distribution, and we are most interested in the relative proportions in each stage. To get those,
>> w=w/sum(w);
The dominant left eigenvector v is the reproductive value, and it is conventional to scale those
relative to the reproductive value of a newborn. If newborns are class 1:
>> v=v/v(1);
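Pulling these steps together, here is a minimal sketch of the whole calculation for a generic
projection matrix A (this is essentially what Exercise 8.1 below asks you to package as an m-file):

>> [W,D]=eig(A); L=diag(D);
>> j=find(abs(L)==max(abs(L)));
>> w=W(:,j); w=w/sum(w);                 % stable stage distribution (proportions)
>> [V,E]=eig(transpose(A)); M=diag(E);
>> k=find(abs(M)==max(abs(M)));
>> v=V(:,k); v=v/v(1);                   % reproductive value, scaled so v(1)=1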
Exercise 8.1: Write an m-file which applies the above to A=[1 5 0; 6 4 0; 0 1 2]. Your file
should first find all the eigenvalues of A, then extract the dominant one and the corresponding
(right) eigenvector, scaled as above. Repeat this for the transpose of A to find the dominant
left eigenvector, scaled as above.
8.1 Eigenvalue sensitivities and elasticities
For an n × n matrix A with entries a_ij, the sensitivities s_ij and elasticities e_ij can be computed
as

    s_ij = ∂λ/∂a_ij = v_i w_j / <v, w>,        e_ij = (a_ij/λ) s_ij        (1)
where λ is the dominant eigenvalue, v and w are the dominant left and right eigenvectors, and
<v, w> is the inner product of v and w, computed in Matlab as dot(v,w). So once λ, v, w have
been found and stored as variables, it just takes some for-loops to compute the sensitivities and
elasticities.
n=length(v);
vdotw=dot(v,w);
for i=1:n; for j=1:n;
s(i,j)=v(i)*w(j)/vdotw;
end; end;
e=(s.*A)/lambda;
Note how the elasticities are computed all at once in the last line. In Matlab that kind of
vectorized calculation is much quicker than computing them one-by-one in a loop. Even
faster is turning the loops into a matrix multiplication:
vdotw=dot(v,w);
s=v*w'/vdotw;
e=(s.*A)/lambda;
Exercise 8.2 Construct the transition matrix A, and then find λ, v, w for an age-structured
model with the following survival and fecundity parameters:
- Age-classes 1-6 are genuine age classes with survival probabilities
  (p_1, p_2, ..., p_6) = (0.3, 0.4, 0.5, 0.6, 0.6, 0.7).
  Note that p_j = a_{j+1,j}, the chance of surviving from age j to age j+1, for these ages. You
  can create a vector p with the values above and then use a for-loop to put those values
  into the right places in A (see the sketch after this list).
- Age-class 7 consists of adults, with survival 0.9 and fecundity 12.
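One way to build A with a for-loop, as suggested above (a sketch, assuming the parameters are
exactly as listed):

p = [0.3 0.4 0.5 0.6 0.6 0.7];   % survival probabilities p_1,...,p_6
A = zeros(7,7);
A(1,7) = 12;                     % fecundity of the adult class
for j=1:6
    A(j+1,j) = p(j);             % survival from age j to age j+1
end
A(7,7) = 0.9;                    % adult survival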
Results: λ = 1.0419
A =
     0     0     0     0     0     0    12
    0.3    0     0     0     0     0     0
     0    0.4    0     0     0     0     0
     0     0    0.5    0     0     0     0
     0     0     0    0.6    0     0     0
     0     0     0     0    0.6    0     0
     0     0     0     0     0    0.7   0.9
w = (0.6303, 0.1815, 0.0697, 0.0334, 0.0193, 0.0111, 0.0547)
v = (1, 3.4729, 9.0457, 18.8487, 32.7295, 56.8328, 84.5886)
9 Creating new functions
M-files can be used to create new functions, which then can be used in the same way as Matlab's
built-in functions. Function m-files are often useful because they let you break a big program
into a series of steps that can be written and tested one at a time. They are also sometimes
necessary. For example, to solve a system of differential equations in Matlab, you have to
write a function m-file that calculates the rate of change for each state variable.
9.1 Simple functions
Function m-files have a special format. Here is an example, mysum.m, which calculates the sum of
the entries in a matrix [the sum function applied to a matrix calculates the sum of each column,
and then a second application of sum gives the sum of all column sums].
function f=mysum(A);
f=sum(sum(A));
return;
This example illustrates the rules for writing function m-files:
1. The first line must begin with the word function, followed by an expression of the form:
variable name = function name(function arguments)
2. The function name must be the same as the name of the m-file.
3. The last line of the file is return; (this is not required in the current version of Matlab,
but is useful for compatibility with older versions).
4. In between are commands that calculate the function value, and assign it to the variable
variable name that appeared in the first line of the function.
In addition, the function m-file must be in a folder that's on Matlab's search path. You can
put it in a folder that's automatically part of the search path such as Matlab's work folder, or
else use the addpath command to augment the search path.
Matlab gives you some help with these rules. When you create an m-file using File/New/M-file
on the Matlab toolbar, Matlab's default is to save it under the right name in the work
folder. If everything is done properly, then mysum can be used exactly like any other Matlab
function.
>> mysum([1 2])
ans =
3
9.2 Functions with multiple arguments or returns
Matlab functions can have more than one argument, and can return more than one calculated
variable. The function eulot.m is an example with multiple arguments. It computes the
Euler-Lotka sum

    sum_{a=0}^{n} λ^(-(a+1)) l_a f_a - 1

as a function of λ and of vectors containing the values of l_a and f_a.
function f=eulot(lambda,la,fa);
age=0:(length(la)-1);
y=lambda.^(-(age+1));
f=sum(y.*la.*fa)-1;
return;
We have seen that (given the values of l_a and f_a) the dominant eigenvalue λ of the age-structured
model results in this expression equaling 0. Type in
>> la=[0.9 0.8 0.7 0.5 0.2]; fa=[0 0 2 3 5];
and then you should find that eulot(1.4,la,fa) and eulot(1.5,la,fa) have opposite signs,
indicating that λ is between 1.4 and 1.5.
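Since the sum changes sign on [1.4, 1.5], one way to pin λ down numerically (not required here,
but a useful trick) is Matlab's root-finder fzero, with an anonymous function holding la and fa
fixed:

>> lambda = fzero(@(x) eulot(x,la,fa), [1.4 1.5])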
To have more than one returned value, the first line in the m-file is slightly different: the various
quantities to be returned are enclosed in [ ]. An example is stats.m:
function [mean_x,var_x,median_x,min_x,max_x]=stats(x);
mean_x=mean(x); var_x=var(x);
median_x=median(x);
min_x=min(x); max_x=max(x);
return;
Function m-files can contain subfunctions, which are functions called by the main function (the
one whose name appears in the m-file name). Subfunctions are visible only within the m-file
where they are defined. In particular, you cannot call a subfunction at the Command line, or
in another m-file. For example, create an m-file Sumgseries.m with the following commands:
function f=Sumgseries(r,n);
u=gseries(r,n); f=sum(u);
return;
function f=gseries(r,n);
f=r.^(0:n);
return;
Only the first of the two functions (the one with the same name as the m-file) will be visible
to Matlab. That is:
>> Sumgseries(0.1,500)
ans =
1.1111
>> gseries(0.1,500)
??? Undefined command/function gseries.
Exercise 9.1 Use z=randn(500,1) to create a matrix of 500 Gaussian random numbers. Then
try a=stats(z), [a,b]=stats(z), and [a,b,c]=stats(z) to see what Matlab does if you ask
for a smaller number of returned values than a function computes. Remember, you'll have to
put a copy of stats.m into a folder on your search path.
Exercise 9.2 Modify stats.m so that it also returns the value of srsum(x) where the function
srsum(x)=sum(sqrt(x)) is defined using a subfunction rather than within the body of stats.
When that's working, try srsum([1 2 3]) at the Command line and see what happens.
Exercise 9.3 Write a function m-file rmatrix.m which takes as arguments 3 matrices A, S, Z,
and returns the matrix B = A + S.*Z. When it's working you should be able to do:
>> A=ones(2,2); S=0.5*eye(2); Z=ones(2,2); B=rmatrix(A,S,Z)
B =
1.5000 1.0000
1.0000 1.5000
10 A simulation project
This section is an optional capstone project putting into use the Matlab programming skills
that have been covered so far. Nothing new about Matlab per se is covered in this section.
The first step is to write a script file that simulates a simple model for density-independent
population growth with spatial variation. The model is as follows. The state variables are the
numbers of individuals in a series of L = 20 patches along a line (L stands for length of the
habitat).
Let N_j(t) denote the number of individuals in patch j (j = 1, 2, ..., L) at time t (t = 1, 2, 3, ...),
and let λ_j be the geometric growth rate in patch j. The dynamic equations for this model
consist of two steps:
1. Geometric growth within patches:

    M_j(t) = λ_j N_j(t)  for all j.        (2)
2. Dispersal between neighboring patches:

    N_j(t+1) = (1 - 2d) M_j(t) + d M_{j-1}(t) + d M_{j+1}(t)   for 2 ≤ j ≤ L-1        (3)

where 2d is the dispersal rate. We need special rules for the end patches. For this
exercise we assume reflecting boundaries: those who venture out into the void have the
sense to come back. That is, there is no leftward dispersal out of patch 1 and no rightward
dispersal out of patch L:

    N_1(t+1) = (1 - d) M_1(t) + d M_2(t)
    N_L(t+1) = (1 - d) M_L(t) + d M_{L-1}(t)        (4)
Write your script to start with 5 individuals in each patch at time t=1, iterate the model up
to t=50, and graph the log of the total population size (the sum over all patches) over time.
Use the following growth rates: λ_j = 0.9 in the left half of the patches, and λ_j = 1.2 in the
right.
Write your program so that d and L are parameters, in the sense that the first line of your
script file reads d=0.1; L=20; and the program would still work if these were changed to other
values.
Notes and hints:
1. This is a real programming problem. Think rst, then start writing your code.
2. Notice that this model is not totally different from Loop1.m, in that you start with a
founding population at time 1, and use a loop to compute successive populations at times
2, 3, 4, and so on. The difference is that the population is described by a vector rather
than a number. Therefore, to store the population state at times t = 1, 2, ..., 50 you will
need a matrix njt with 50 rows and L columns. Then njt(t,:) is the population state
vector at time t.
3. Vectorize! Vector/matrix operations are much faster than loops. Set up your calculations
so that computing M_j(t) = λ_j N_j(t) for j = 1, 2, ..., L is a one-line statement
of the form a=b.*c. Then for the dispersal step: if M_j(t), j = 1, 2, ..., L is stored as a
vector mjt of length L, then what (for example) are M_j(t) and M_{j-1}(t) for 2 ≤ j ≤ L-1?
(A sketch of the indexing involved is given below.)
Exercise 10.1 Use the model (modified as necessary) to ask how the spatial arrangement of
good versus bad habitat patches affects the population growth rate. For example, does it matter
if all the good sites (λ > 1) are at one end or in the middle? What if they aren't all in one
clump, but are spread out evenly (in some sense) across the entire habitat? Be a theoretician:
(a) Patterns will be easiest to see if good sites and bad sites are very different from each other.
(b) Patterns will be easiest to see if you come up with a nice way to compare growth rates
across different spatial arrangements of patches. (c) Don't confound the experiment by also
changing the proportion of good versus bad patches at the same time you're changing the spatial
arrangement.
Exercise 10.2 Modify your script file for the model (or write it this way to begin with...)
so that the dispersal phase (equations 3 and 4) is done by calling a subfunction reflecting
whose arguments are the pre-dispersal population vector M(t) and the dispersal parameter d,
and which returns N(t + 1), the population vector after dispersal has taken place.
11 Coin tossing and Markov Chains
The exercises on coin tossing and Markov chains in Chapter 3 can be used as the basis for a
computer-lab session. For convenience we also include them here. All of the Matlab functions
and programming methods required for these exercises have been covered in previous sections,
but it is useful to look back and remember
- how to generate sets of random uniform and Gaussian random numbers using rand and
randn;
- how logical operators can be used to convert a vector of numbers into a vector of 1s and
0s according to whether or not a condition holds;
- how to find the places in a vector where the value changes, using logicals and find.
>> v=rand(100,1);
>> u = (v<0.3);
>> w=find(u(2:100)~=u(1:99))
Coin tossing Exercise 11.1 Experiment with sequences of coin flips produced by a random
number generator:
Generate a sequence r of 1000 random numbers uniformly distributed in the unit interval
[0, 1].
Compute and plot a histogram for the values with ten equal bins of length 0.1. How much
variation is there in values of the histogram? Does the histogram make you suspicious
that the numbers are not independent and uniformly distributed random numbers?
Now compute sequences of 10000 and 100000 random numbers uniformly distributed in
the unit interval [0, 1], and a histogram for each with ten equal bins. Are your results
consistent with the prediction of the central limit theorem that the range of variation
between bins in the histogram is proportional to the square root of the sequence length?
Exercise 11.2 Convert the sequence of 1000 random numbers r from the previous exercise
into a sequence of outcomes of coin tosses in which the probability of heads is 0.6 and the
probability of tails is 0.4. Let 1 represent an outcome of heads and let 0 represent an outcome
of tails. To generate from r a sequence of 0s and 1s that reflect these probabilities, we assign
random numbers less than 0.6 to heads, and random numbers of 0.6 or larger to tails. A simple
way to do this follows:
seq = zeros(1000,1);
for i=1:1000
if r(i) < 0.6
seq(i)=1;
end
end
Matlab Challenge Write a vectorized program to generate the coin tosses without using
the command for.
(Hint: The logical operator < can act on vectors and matrices as well as scalars.)
Recall that this coin tossing experiment can be modeled by the binomial distribution: the
probability of k heads in the sequence is given by

    C_k (0.6)^k (0.4)^(1000-k),    where C_k = 1000! / (k! (1000-k)!).
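As a practical aside (not part of the original exercises): factorial(1000) overflows in double
precision, so one reliable way to evaluate these probabilities is through the log-gamma function
gammaln:

n = 1000; p = 0.6;
k = 500:700;
logP = gammaln(n+1) - gammaln(k+1) - gammaln(n-k+1) + k*log(p) + (n-k)*log(1-p);
plot(k, exp(logP)); xlabel('k'); ylabel('probability of k heads');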
Calculate the probability of k heads for values of k between 500 and 700 in a sequence of
1000 independent tosses. Plot your results with k on the x-axis and the probability of k
heads on the y-axis. Comment on the shape of the plot.
Now test the binomial distribution by doing 1000 repetitions of the sequence of 1000 coin
tosses and plot a histogram of the number of heads obtained in each repetition. Compare
the results with the predictions from the binomial distribution.
Repeat this experiment with 10000 repetitions of 100 coin tosses. Comment on the differences
you observe between this histogram and the histogram for 1000 repetitions of tosses of 1000
coins.
Markov chains The purpose of the following exercises is to generate synthetic data for single
channel recordings from finite state Markov chains, and explore patterns in the data.
Single channel recordings give the times that a Markov chain makes a transition from a closed
to an open state or vice versa. The histogram of expected residence times for each state in a
Markov chain is exponential, with different mean residence time for different states. To observe
this in the simplest case, we again consider coin tossing. The two outcomes, heads or tails, are
the different states in this case. Therefore the histogram of residence times for heads and tails
should each be exponential. The following steps are taken to compute the residence times:
Generate sequences of independent coin tosses based on given probabilities.
Look at the number of transitions that occur in each of the sequences (a transition is
when two successive tosses give different outcomes).
Calculate the residence times by counting the number of tosses between each transition.
Exercise 11.3 Find the script cointoss.m. This program calculates the residence times of
coin tosses by the above methodology. Are the residence times consistent with the prediction
that their histogram decreases exponentially? Produce a plot that compares the predicted
results with the simulated residence times stored by cointoss in the vectors hhist and thist.
(Suggestion: use a logarithmic scale for the values with the matlab command semilogy.)
Models for stochastic switching among conformational states of membrane channels are some-
what more complicated than the coin tosses we considered above. There are usually more than
2 states, and the transition probabilities are state dependent. Moreover, in measurements some
states cannot be distinguished from others. We can observe transitions from an open state to
a closed state and vice versa, but transitions between open states (or between closed states)
are invisible. Here we shall simulate data from a Markov chain with 3 states, collapse that
data to remove the distinction between 2 of the states and then analyze the data to see that
it cannot be readily modeled by a Markov chain with just two states. We can then use the
distributions of residence times for the observations to determine how many states we actually
have.
Suppose we are interested in a membrane current that has three states: one open state, O, and
two closed states, C_1 and C_2. As in the kinetic scheme discussed in class, state C_1 cannot make
a transition to state O and vice-versa. We assume that state C_2 has shorter residence times
than states C_1 or O. Here is the transition matrix of a Markov chain we will use to simulate
these conditions:
           C_1     C_2     O
    C_1    0.98    0.10    0
    C_2    0.02    0.70    0.05
    O      0       0.20    0.95
You can see from the matrix that the probability 0.7 of staying in state C_2 is much smaller
than the probability 0.98 of staying in state C_1 or the probability 0.95 of remaining in state O.
Exercise 11.4 Generate a set of 1000000 samples from the Markov chain with these transition
probabilities. We will label the state C_1 by 1, the state C_2 by 2 and the state O by 3. This can
be done with a modification of the script we used to produce coin tosses:
nt = 1000000;
A = [0.98, 0.10, 0; 0.02, 0.7, 0.05; 0, 0.2, 0.95]
sum(A)
rd = rand(nt,1);
states = ones(nt+1,1);   % default entry is state 1; the loop below overwrites it with 2 or 3
states(1) = 3;           % Start in open state
for i=1:nt
if rd(i) < A(3,states(i))
states(i+1) = 3;
elseif rd(i) < A(3,states(i))+A(2,states(i))
states(i+1) = 2;
end;
end;
(If your computer does not have sufficient memory to generate 1000000 samples, use 100000.)
Exercise 11.5 Compute the eigenvalues and eigenvectors of the matrix A. Compute the
total time that your sample data in the vector states spends in each state (try to use vector
operations to do this!) and compare the results with predictions coming from the dominant
right eigenvector of A.
Exercise 11.6 Produce a new vector rstates by reducing the data in the vector states
so that states 1 and 2 are indistinguishable. The states of rstates will be called closed and
open.
Exercise 11.7 Plot histograms of the residence times of the open and closed states in rstates
by modifying the program cointoss.m.
Comment on the shapes of the distributions in each case. Using your knowledge of the transition
matrix A, make a prediction about what the residence time distributions of the open states
should be. Compare this prediction with the data. Show that the residence time distribution
of the closed states is not fit well by an exponential distribution.
12 The Hodgkin-Huxley model
The purpose of this section is to develop an understanding of the components of the Hodgkin-
Huxley model for the membrane potential of a space clamped squid giant axon. It goes with
the latter part of Chapter 3 in the text, and with the Recommended reading: Hille, Ion
Channels of Excitable Membranes, Chapter 2.
The Hodgkin-Huxley model is the system of differential equations

    C dv/dt = i - [ g_Na m^3 h (v - v_Na) + g_K n^4 (v - v_K) + g_L (v - v_L) ]

    dm/dt = 3^((T-6.3)/10) [ φ(-(v+35)/10) (1 - m) - 4 m exp(-(v+60)/18) ]

    dn/dt = 3^((T-6.3)/10) [ 0.1 φ(-(v+50)/10) (1 - n) - 0.125 n exp(-(v+60)/80) ]

    dh/dt = 3^((T-6.3)/10) [ 0.07 exp(-(v+60)/20) (1 - h) - h / (1 + exp(-0.1(v+30))) ]

where

    φ(x) = x / (exp(x) - 1).
The state variables of the model are the membrane potential v and the ion channel gating
variables m, n, and h, with time t measured in msec. Parameters are the membrane capacitance
C, temperature T, conductances g_Na, g_K, g_L, and reversal potentials v_Na, v_K, v_L. The gating
variables represent channel opening probabilities and depend upon the membrane potential.
The parameter values used by Hodgkin and Huxley are:

    g_Na    g_K    g_L    v_Na    v_K    v_L        T     C
    120     36     0.3    55      -72    -49.4011   6.3   1
Most of the data used to derive the equations and the parameters comes from voltage clamp
experiments of the membrane, e.g. Figure 2.7 of Hille. In this set of exercises, we want to see that
the model reproduces the voltage clamp data well, and examine some of the approximations
and limitations of the parameter estimation.
When the membrane potential v is fixed, the equations for the gating variables m, n, h are first
order linear differential equations that can be rewritten in the form

    τ_x dx/dt = -(x - x_∞)

where x is m, n or h.
Exercise 12.1 Re-write the differential equations for m, n, and h in the form above, thereby
obtaining expressions for τ_m, τ_n, τ_h and m_∞, n_∞, h_∞ as functions of v.
Exercise 12.2 Write a Matlab script that computes and plots τ_m, τ_n, τ_h and m_∞, n_∞, h_∞ as
functions of v for v varying from -100 mV to 75 mV. You should obtain graphs that look like
Figure 2.17 of Hille.
In voltage clamp, dv/dt = 0, so we obtain the following formula for the current from the
Hodgkin-Huxley model:

    i = g_Na m^3 h (v - v_Na) + g_K n^4 (v - v_K) + g_L (v - v_L)
The solution of the first order equation

    τ_x dx/dt = -(x - x_∞)

is

    x(t) = x_∞ + (x(0) - x_∞) exp(-t/τ_x)
Exercise 12.3 Write an m-file to compute and plot as a function of time the current i(t) obtained
from voltage clamp experiments in which the membrane is held at a potential of -60 mV
and then stepped to a higher potential v_s for 6 msec. (When the membrane is at its holding
potential -60 mV, the values of m, n, h approach m_∞(-60), n_∞(-60), h_∞(-60).)

Exercise 12.4 Estimate m_∞, h_∞, τ_m, τ_h, the parameters of the sodium current in voltage
clamp, from data. The data we use is from the model itself: as in Exercises 2 and 3, compute
the Hodgkin-Huxley sodium current generated by a voltage clamp experiment with a holding
potential of -90 mV and steps to v_s = -80, -70, -60, -50, -40, -30, -20, -10, 0. This is your data.
Using the expression g_Na m^3 h (v - v_Na) for the sodium current, estimate m_∞, τ_m, h_∞ and
τ_h as functions of voltage from this simulated data. The most commonly used methods assume
that τ_m is much smaller than τ_h, so that the activation variable m reaches its steady state
before h changes much. Explain the procedures you use. Some of the parameters are difficult
to determine, especially over certain ranges of membrane potential. Why? How do your estimates
compare with the values computed in Exercise 1?
Challenge: For the parameters that you had difficulty estimating in Exercise 4, simulate
voltage clamp protocols that help you estimate these parameters better. (See Hille, pp. 44-45.)
Describe the protocols and how you estimate the parameters. Plot the currents produced by
the model (as in Exercise 2) for your new experiments, and give the parameter estimates that
you obtain using the additional data from your experiments. Further investigation of these
procedures is a good topic for a course project!
12.1 Getting started
We offer here some suggestions for completing the exercises in this section.
Complicated expressions are often built by composing simpler expressions. In any programming
language, it helps to introduce intermediate variables. Here, let's look at the gating variable h
first. We have
    dh/dt = 0.07 exp(-(v+60)/20) (1 - h) - h / (1 + exp(-0.1(v+30)))

Introduce the intermediate expressions

    a_h = 0.07 exp(-(v+60)/20)    and    b_h = 1 / (1 + exp(-0.1(v+30))).

Then

    dh/dt = a_h (1 - h) - b_h h = a_h - (a_h + b_h) h.

We can then divide this equation by (a_h + b_h) to obtain the desired form

    τ_h dh/dt = -(h - h_∞)

as

    [1/(a_h + b_h)] dh/dt = a_h/(a_h + b_h) - h.

Comparing these two expressions we have

    τ_h = 1/(a_h + b_h),    h_∞ = a_h/(a_h + b_h).
Implementing this in Matlab to compute the values of h_∞(-45) and τ_h(-45), we write
v = -45;
ah = 0.07*exp((-v-60)/20);
bh = 1/(1+exp(-0.1*(v+30)));
tauh = 1/(ah+bh);
hinf = ah/(ah+bh);
Evaluation of this script gives tauh = 4.6406 and hinf = 0.1534.
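Continuing this example (our own addition, assuming the membrane starts with no inactivation,
h(0) = 1, before being stepped to v = -45), the solution formula above gives the time course of h
after the step:

t = 0:0.05:6;                          % msec
h0 = 1;                                % assumed initial value of h
h = hinf + (h0 - hinf)*exp(-t/tauh);   % hinf and tauh from the script above
plot(t,h); xlabel('t (msec)'); ylabel('h');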
To do the second exercise, proceed in the same way to obtain τ_m, m_∞, τ_n, and n_∞, but there is
one slight twist: the function φ. This function, defined by

    φ(x) = x / (exp(x) - 1),

is indeterminate, giving the value 0/0, when x = 0, so Matlab cannot evaluate it there. Nonetheless,
using l'Hopital's rule from calculus, we can define φ(0) = 1. When computing the values
in Matlab, either avoid x = 0 or use an if statement to test for whether x = 0. It is helpful in
writing Matlab scripts to compute the terms involving φ to introduce intermediate variables
for its argument:
...
amv = -(v+35.0)/10.0;
am = amv/(exp(amv) - 1);
...
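If you would rather wrap this up in a single helper that also handles x = 0, a vectorized version
of φ might look like the sketch below (the function name is our own choice):

function y = phifun(x)
% phi(x) = x./(exp(x)-1), with the limiting value phi(0) = 1
y = ones(size(x));                % covers the entries with x == 0
nz = (x ~= 0);                    % entries where the formula can be applied directly
y(nz) = x(nz)./(exp(x(nz))-1);
return;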
You will need some of the data from the second exercise in completing the third and fourth,
and you should extend the range of v to include v = -90 for these exercises. It is helpful
to define the parameters, a vector t of the time values that you want to use in computing
the currents, values of m, n, h at the holding potential v = -60, values of m_∞, n_∞, h_∞ and
τ_m, τ_n, τ_h at the potentials of the steps, and arrays that will hold all of the data for each of the
gating variables m, n, h, etc. before you compute the currents. Use code like m(s,j) = m1(s)
+ (m0 - m1(s))*exp(-t(j)/mt1(s)) to compute the gating variables, with m0 the value of m
at the holding potential, m1(s) the value of m_∞ at the s-th step potential, and mt1(s) the value
of τ_m there; handle h and n the same way. Frequently used procedures assume
that (1) we start at a potential sufficiently hyperpolarized that there is no inactivation (i.e.
h = 1) and (2) activation is so fast relative to inactivation that m reaches its steady state
before h has changed significantly. One then estimates m_∞, τ_m from the increasing portion of
the conductance traces, assuming that the conductance is g_Na m^3 and that h = 1. To estimate
τ_h, we assume that the decreasing tail of the conductance curve is fit to g_Na m^3 h since m
has already reached its steady state.
Estimating h_∞ is easier from a different set of voltage traces in which the holding potential v_0
is varied with a step from each holding potential to the same potential v_1. In this protocol,
we start with h partially inactivated, so the maximal conductance of the trace is proportional
to the value of h. Relative to a holding potential at which h is close to 1, the proportionality
constant gives the value h_0 of h prior to the step. Consult Hille for further descriptions of
these protocols.
13 Solving systems of differential equations
Matlab's built-in functions make it relatively easy to do some fairly complicated things. One
important example is finding numerical solutions for a system of differential equations

    dx/dt = f(t, x).

Here x is a vector assembled from quantities that change with time, and f gives their rates of
change. The Hodgkin-Huxley model is one example. Here we start with a simple gene regulation
model from the paper
T. Gardner, C. Cantor and J. Collins, Construction of a genetic toggle switch in Escherichia
coli. Nature 403: 339-342.
The model is

    du/dt = -u + α_u / (1 + v^β)
    dv/dt = -v + α_v / (1 + u^γ)        (5)
The variables u, v in this system are functions of time. They represent the concentrations of two
repressor proteins P_u, P_v in bacteria that have been infected with a plasmid containing genes
that code for P_u and P_v. The plasmid also has promoters, with P_u a repressor of the promoter
of the gene coding for P_v and vice-versa.
The equations are a simple bathtub model describing the rates at which u and v change
with time. P_u is degraded at the rate u and is produced at a rate α_u/(1 + v^β), which is a
decreasing function of v. The exponent β models the cooperativity in the repression of P_u
synthesis by P_v. These two processes of degradation and synthesis combine to give the equation
for du/dt, and there is a similar equation for dv/dt.
There are no explicit formulas to solve this pair of equations. We can interpret what the
equations mean geometrically. At each point of the (u, v) plane, we regard (du/dt, dv/dt) as a
vector that gives the direction and magnitude for how fast (u, v) jointly change as a function of t.
Solutions to the equations give rise to parametric curves (u(t), v(t)) whose tangent vectors
(du/dt, dv/dt) are those specified by the equations. The Matlab command quiver can be used
to plot the vector field. Use the following script to plot the field for α = 3, β = γ = 2:
[U,V] = meshgrid(0:.2:3);
Xq = -U + 3./(1+V.^2);
Yq = -V + 3./(1+U.^2);
quiver(U,V,Xq,Yq);
We can think of the solutions as curves in the plane that follow the arrows. Given a starting
point (u_0, v_0), the mathematical theory proves that there is a unique solution (u(t), v(t)) with
(u(0), v(0)) = (u_0, v_0). The process of finding the solutions is called numerical integration.
In all of these methods, an approximate solution is built up by adding segments one after another
for increasing time. Matlab provides several different methods for doing this, all labeled ode...
with a common reference page.
Exercise 13.1 Open the Matlab reference page for ode45 and look at the syntax for the
command.
Note that the first argument for an ODE solver is odefun, where odefun is a function that
returns the values of the right hand sides of the differential equations. So the first step in
solving a system of ODEs is to write a function m-file which evaluates the vector field f as a
function of time t and the state variables x. For our example (5) we will name the function
toggle and place it in the file toggle.m:
function dy = toggle(t,y,p)
dy = zeros(2,1);
dy(1) = - y(1) + p(1)./(1+y(2).^p(2));
dy(2) = - y(2) + p(1)./(1+y(1).^p(3));
The arguments of toggle.m represent time, the current value of the state vector (u, v), and
the parameter vector p = (α, β, γ) (the code takes α_u = α_v = α). (The Matlab documentation
doesn't give examples where we pass the values of parameters to the odefun as is done here,
but it tells us it can be done.)
Then the command
[T,Y] = ode45(@toggle,[0 100],[0.2,0.1],[],[3,2,2]);
invokes the solver ode45 to produce the solution for time in the interval [0, 100] starting at the
initial point (0.2, 0.1) with parameter vector p = (3, 2, 2). Here the first argument is a function
handle (the Matlab version of a pointer) for the function toggle.m. The empty argument []
is a place holder for an array of options that can be used to set algorithmic parameters that
control the numerical integration algorithm. For example, the options RelTol and AbsTol can
be used to control the accuracy that the numerical integration tries to achieve. It does this by
adjusting the time steps adaptively based on internal estimates of the error. By using smaller
time steps, it can achieve better accuracy, up to a point.
Exercise 13.2 Write the file for toggle.m and run this command. What is the size of Y ?
Change the time interval to [0, 200] and run the command again. Now what is the size of Y ?
We can now plot the results in two different ways:
plot1 = figure;
plot(T,Y(:,1),T,Y(:,2))
plots u and v as functions of time. Note that these functions seem to be approaching constants,
and that these constants have different values.
plot2 = figure;
plot(Y(:,1),Y(:,2))
Exercise 13.3 Make these plots.
The second plot is called a phase portrait. It shows the path in the (u, v) phase plane taken
by the trajectory, but we lose track of the times at which the trajectory passes through each
point on this path.
Exercise 13.4 Rerun ode45 with initial conditions (0.2, 0.3) to produce new output [T1,Y1]
and plot the phase plane output of both solutions. Do this for time intervals [0, 50] and [0, 200].
The trajectories appear to end at the same places, indicating that they didn't go anywhere
after T = 50. We can explain this by observing that the differential equations vanish at these
endpoints. The curves where du/dt = 0 and dv/dt = 0 are called nullclines for the vector field.
They intersect at equilibrium points, where both du/dt = 0 and dv/dt = 0. The solution with
initial point an equilibrium is constant. Here, the equilibrium points are (asymptotically) stable,
meaning that trajectories close to the equilibria approach them as t increases.
Exercise 13.5 Plot the nullclines without erasing the phase portrait. The script
hold on
v = [0:0.01:3];
u = 3./(1+v.^2);
plot(u,v,'r')
plots the u nullcline in red.
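The v nullcline for these parameter values is v = 3/(1 + u^2), so (as a quick sketch, by the same
method) it can be added in another color:

u2 = [0:0.01:3];
v2 = 3./(1+u2.^2);
plot(u2,v2,'b')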
Exercise 13.6 There is a third equilibrium point where the two nullclines intersect, in addition
to the two that occur at the ends of the trajectories we have computed. Investigate what
happens to trajectories with initial conditions near this third equilibrium point.
The options RelTol and AbsTol can be used to control the accuracy that the numerical integration
tries to achieve by using smaller time steps. For example, you can set these to 10^(-10)
and then run the integrator with the commands
options = odeset('RelTol',1e-10,'AbsTol',1e-10);
[T,Y] = ode45(@toggle,[0 100],[0.2,0.1],options,[3,2,2]);
Note that the options are modified by using the odeset function to create the variable options
that is then used as an argument to the integrator ode45. You can use help odeset to get a list
of the various options that can be modified from their defaults.
Exercise 13.7 Run trajectories with the default tolerances and with tolerances of 10^(-10).
How does the number of steps taken by ode45 change?
Exercise 13.8 Change the value of α from 3 to 1.5. How does the phase portrait change?
Plot the nullclines to help answer this question.
MATLAB has a number of functions for solving differential equations, and which one to use
depends on the problem. One key issue is stiffness; differential equations are called stiff
if they have some variables or combinations of variables changing much faster than others.
Stiff systems require special techniques and are harder to solve than non-stiff systems. Many
biological models are at least mildly stiff. Typing doc ode45 will get you (on our computers,
at least) documentation that lists and compares the various solvers. Because each has its own
strengths and weaknesses, it can be useful to solve a system with several of them and compare
the results.
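For example, a quick comparison of the general-purpose solver ode45 with the stiff solver ode15s
on the toggle model might look like the following sketch (both calls reuse toggle.m from above):

options = odeset('RelTol',1e-6);
[T1,Y1] = ode45(@toggle,[0 100],[0.2 0.1],options,[3 2 2]);
[T2,Y2] = ode15s(@toggle,[0 100],[0.2 0.1],options,[3 2 2]);
plot(T1,Y1(:,1),'b',T2,Y2(:,1),'r--');   % the u components should agree closely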
Exercise 13.9 Write a vector field and main m-files to solve the Lotka-Volterra model

    dx_1/dt = x_1 (r_1 - x_1 - a x_2)
    dx_2/dt = x_2 (r_2 - x_2 - b x_1)

in which the parameters r_1, r_2, a, b are all passed as parameters. Generate solutions for the
same parameter values with at least 3 different ODE solver functions, and compare the results.
Exercise 13.10 Write a vector field and main m-files to solve the constant population size
SIR model with births,

    dS/dt = μ(S + I + R) - βSI - μS
    dI/dt = βSI - (μ + γ)I
    dR/dt = γI - μR

For parameter values μ = 1/60, γ = 25 (corresponding to a mean lifetime of 60 years, and
disease duration of 1/25 of a year) and population size S(0) + I(0) + R(0) = 1000000, explore
how the dynamics of the disease prevalence I(t) changes as you increase the value of β from 0.
14 Equilibrium points and linearization
This section continues our study of differential equations with Matlab. We will investigate the
computation of equilibrium points and their linearization. Recall that an equilibrium point of
the system dx/dt = f(x) is a vector x_0 in the phase space where f(x_0) = 0. If phase space has
dimension n, then this is a system of n equations in n variables that may have multiple solutions.
Solving nonlinear equations is a difficult task for which there are no sure-fire algorithms.
Newton's method is a simple iterative algorithm that is usually very fast when it works, but it
doesn't always work.
Newton's method takes as its input a starting value x_0, ideally one that is close to the
solution of f(x) = 0 that we seek. It evaluates y_0 = f(x_0) and terminates if the magnitude
of y_0 is smaller than a desired tolerance. If y_0 is larger than the desired tolerance, then a new
value x_1 is computed from the solution of the linear or tangent approximation to f at x_0:
L(x) = Df(x_0)(x - x_0) + f(x_0). Here Df(x_0) is the n × n matrix of partial derivatives of f
evaluated at x_0; the jth column of Df is the derivative of f with respect to the jth coordinate.
If Df(x_0) has a matrix inverse, then we can solve the linear system L(x) = 0 for x, yielding the
new value of x we use in Newton's method: x_1 = x_0 - Df^(-1)(x_0) f(x_0). So we replace x_0
by x_1 and start over again by evaluating f(x_1). If its magnitude is small enough, we stop.
Otherwise, we compute the linear approximation at x_1, solve for its root and continue with this
new value of x.
Close to a solution of f(x) = 0 where Df has a matrix inverse, Newton's method converges
quadratically. The script newton.m implements Newton's method for models with the same
syntax as functions for solving differential equations.
function [x,df] = newton(f,x0,p)
.
.
end;
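For the exercises you should use the newton.m provided with the course materials; purely as an
illustration of the algorithm described above, a minimal Newton iteration with a finite-difference
Jacobian might look like this sketch:

function [x,df] = newton_sketch(f,x0,p)
% Illustrative sketch only, not the course's newton.m.
% f has the same syntax as an ODE right-hand side: f(t,x,p).
x = x0(:); n = length(x); tol = 1e-12; h = 1e-6;
for iter = 1:100
    y = f(0,x,p);                   % current function value
    df = zeros(n);                  % finite-difference approximation of Df(x)
    for j = 1:n
        xp = x; xp(j) = xp(j) + h;
        df(:,j) = (f(0,xp,p) - y)/h;
    end
    if norm(y) < tol, break; end    % close enough to a root
    x = x - df\y;                   % Newton update x - Df^(-1) f(x)
end
return;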
We can apply this to our toggle switch model (5), in the file toggle.m:
function dy = toggle(t,y,p)
dy = zeros(2,1);
dy(1) = - y(1) + p(1)./(1+y(2).^p(2));
dy(2) = - y(2) + p(1)./(1+y(1).^p(3));
with the commands
p = [3 2 2];
x0 = [2.5;0];
[x,df] = newton(@toggle,x0,p)
Exercise 14.1 Download the files newton.m and toggle.m to your workspace and run the
command above. Note that the intermediate values of x and y are displayed. Recall that for
these values of p, there are three equilibrium points. Now choose different values of x_0 to find
the other two equilibrium points.
The file repress.m implements the six-dimensional repressilator model of Elowitz and Leibler:
function dy = repress(t,y,p)
dy = zeros(6,1);
dy(1) = -y(1) + p(1)/(1.+y(6)^p(4))+ p(2);
dy(2) = -y(2) + p(1)/(1.+y(4)^p(4))+ p(2);
dy(3) = -y(3) + p(1)/(1.+y(5)^p(4))+ p(2);
dy(4) = -p(3)*(y(4)-y(1));
dy(5) = -p(3)*(y(5)-y(2));
dy(6) = -p(3)*(y(6)-y(3));
Exercise 14.2 Reproduce the figure in the textbook that shows oscillations in this model
by computing and graphing a trajectory for this model with parameters p = [50,0,0.2,2].
Almost any initial conditions should work. Try x0 = 2*rand(6,1).
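One way to do this, following the same pattern as the toggle switch example (a sketch):

x0 = 2*rand(6,1);
[T,Y] = ode45(@repress,[0 100],x0,[],[50,0,0.2,2]);
plot(T,Y(:,1:3));   % the three repressor concentrations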
Exercise 14.3 Use Newtons method to compute an equilibrium point of the repressilator for
the same values of the parameters.
We can use eigenvalues and eigenvectors as tools to study solutions of a vector field near an
equilibrium point x_0, as discussed in the textbook. The basic idea is that we approximate the
vector field by the linear system

    dw/dt = Aw

where A is the n × n matrix Df(x_0) that newton.m computes for us and w = x - x_0. In many
circumstances the phase portrait of this linear system will look similar to the phase portrait of
dx/dt = f(x). Now, if v is an eigenvector of A with eigenvalue λ, the curve

    w(t) = exp(λt) v

is a solution of dw/dt = Aw because Av = λv.
If the eigenvalue λ is negative, then the exponential exp(λt) → 0 as t → ∞. Complex eigenvalues
give solutions that have trigonometric terms: exp(iωt) = cos(ωt) + i sin(ωt). Whenever the real
parts of all the eigenvalues are negative, the equilibrium point is linearly stable. Otherwise it
is unstable.
Exercise 14.4 Compute the eigenvalues of the equilibrium point that you found for the
repressilator model. Now change the parameters to p = [50,1,0.2,2] and recompute the
equilibrium point and its eigenvalues.
Exercise 14.5 Compute the eigenvalues of the three equilibrium points for the toggle switch
with p = [3 2 2]. You should find that the equilibria off the diagonal are stable. The equilibrium
point on the diagonal has one positive and one negative eigenvalue, making it a saddle.
Choosing initial points that add to this equilibrium point small increments in the direction
of the eigenvector with positive eigenvalue, compute trajectories of the vector field. Do the
same for increments in the direction of the eigenvector with negative eigenvalue, but integrate
backward in time; i.e., choose a negative final time for your integration. These trajectories
approximate the unstable manifold and stable manifold of the saddle.
15 Phase-plane analysis and the Morris-Lecar model
In this section we continue the study of phase portraits of two dimensional vector fields using
the Morris-Lecar model for the membrane potential of barnacle muscle fiber. For these
exercises it is convenient to use pplane, a Matlab graphical tool for phase-plane analysis of
two-dimensional vector fields developed by John Polking. At this writing, versions of pplane for
various versions of Matlab are available at http://math.rice.edu/~dfield. Download the
appropriate versions pplane?.m and dfield?.m into your Matlab working directory, and then
start pplane by typing pplane? in the Matlab command window. (The ? here is a pplane version
number, not literally a question mark; at the moment pplane7 is the current version, so you
would download pplane7.m and dfield7.m and type pplane7 to get it started.)
Recommended reading: Rinzel and Ermentrout, Analysis of Neural Excitability and Oscillations
in Koch and Segev, Methods in Neuronal Modeling: From Synapses to Networks, MIT
Press, Cambridge, MA, 2nd edition, 1998.
The differential equations for the Morris-Lecar model are

    C dv/dt = i - g_Ca m_∞(v) (v - v_Ca) - g_K w (v - v_K) - g_L (v - v_L)

    τ_w(v) dw/dt = φ (w_∞(v) - w)        (6)

with

    m_∞(v) = (1/2) [1 + tanh((v - v_1)/v_2)]
    w_∞(v) = (1/2) [1 + tanh((v - v_3)/v_4)]
    τ_w(v) = 1 / cosh((v - v_3)/(2 v_4))
The following parameters are used in the textbook:

    Parameter    Set 1    Set 2
    g_Ca         4.4      5.5
    g_K          8        8
    g_L          2        2
    v_Ca         120      120
    v_K          -84      -84
    v_L          -60      -60
    C            20       20
    φ            0.04     0.22
    i            90       90
    v_1          -1.2     -1.2
    v_2          18       18
    v_3          2        2
    v_4          30       30
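pplane is the intended tool for these exercises, but purely as an illustration of the equations
above, a Matlab right-hand-side function in the style of toggle.m (with our own parameter
ordering) could be written as:

function dy = morrislecar(t,y,p)
% p = [gCa gK gL vCa vK vL C phi i v1 v2 v3 v4]   (ordering chosen here for illustration)
v = y(1); w = y(2);
minf = 0.5*(1 + tanh((v - p(10))/p(11)));
winf = 0.5*(1 + tanh((v - p(12))/p(13)));
tauw = 1/cosh((v - p(12))/(2*p(13)));
dy = zeros(2,1);
dy(1) = (p(9) - p(1)*minf*(v - p(4)) - p(2)*w*(v - p(5)) - p(3)*(v - p(6)))/p(7);
dy(2) = p(8)*(winf - w)/tauw;
return;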
Exercise 15.1 Compute phase portraits for the Morris-Lecar model at the two different
tabulated sets of parameter values. Label
- each of the equilibrium points by type,
- the stable and unstable manifolds of any saddle points,
- the stability of the periodic orbits.
Bifurcations of the system occur at parameters where the number of equilibria or periodic orbits
change. The typical bifurcations encountered while varying a single parameter at a time in a
system with at most a single saddle point are
1. Saddle-node bifurcation: The Jacobian at an equilibrium point has a zero eigenvalue.
2. Hopf bifurcation: The Jacobian at an equilibrium point has a pair of pure imaginary
eigenvalues.
3. Homoclinic bifurcation: There is a trajectory in both the stable and unstable manifold
of a saddle.
4. Saddle-node of limit cycle bifurcation: A periodic orbit has double eigenvalue 1.
The changes in dynamics that occur at each kind of bifurcation are discussed in Chapter 5 of
the textbook.
Exercise 15.2 At saddle-node bifurcations, two equilibria appear or disappear. Figure 5.14
of the textbook shows that as g_Ca is varied, saddle-node bifurcations occur near g_Ca = 5.32
and g_Ca = 5.64. Compute phase portraits for values of g_Ca near these bifurcations, describing
in words how the phase portraits change.
Exercise 15.3 Now set g_Ca = 5.5 and vary φ in the range from (0.04, 0.22). Show that
both Hopf and homoclinic bifurcations occur in this range. What are approximate bifurcation
values? Draw labeled phase portraits on both sides of the bifurcations, indicating the changes
that occur.
Exercise 15.4 Hopf bifurcations are supercritical if stable periodic orbits emerge from the
equilibrium and subcritical if unstable periodic orbits emerge from the equilibrium. Is the Hopf
bifurcation you located in Exercise 3 subcritical or supercritical? Explain how you know.
Exercise 15.5 With g_Ca set to 4.4, show that the two periodic orbits you computed in
Exercise 1 approach each other and coalesce as φ is increased. This is a saddle-node of limit
cycle bifurcation. Draw phase portraits on the two sides of the bifurcations.
Exercise 15.6 For parameter values φ = 0.33 and g_Ca varying near the saddle-node value of
approximately 5.64, the saddle-node is a SNIC (saddle-node on an invariant circle). Explain
what this is using phase portraits as an illustration.
16 Simulating Discrete-Event Models
This section is an introduction to simulating models that track discrete agents (organisms,
molecules, neurons) as they change in state, as an alternative to compartment models that
assume large numbers of agents. It can be regarded as a warmup for simulating finite-population
disease models (Chapter 6 in the textbook), or as some simple examples of agent-based
models (Chapter 8).
Figure 3 shows a compartment model for biased movement of particles between two compartments.

[Figure 3: Compartment diagram for biased movements between 2 compartments. Particles
move from compartment 1 to compartment 2 at rate Rx_1 and from compartment 2 to
compartment 1 at rate Lx_2.]

The corresponding system of differential equations is

    dx_1/dt = Lx_2 - Rx_1
    dx_2/dt = Rx_1 - Lx_2        (7)
Even for molecules, and much more so for individuals catching a disease, changes in state
are discrete events in which individuals move one by one from one compartment to another.
In some cases, such as molecular diffusion, transitions can really occur at any instant. But for
modeling purposes we can have transitions occurring at closely-spaced times dt, 2dt, 3dt, ..., for
some short time step dt, and allow each individual to follow a Markov chain with transition
matrix

    A = [ 1 - R dt      L dt
          R dt          1 - L dt ]
In TwoState.m the movement decisions for many particles are made by using rand to toss many
coins at once. If there are N particles in compartment 1, each with probability Rdt of moving to
compartment 2, then sum(rand(N,1)<R*dt) simulates the combined outcome (number moving)
from all of the coin tosses. Note in TwoState.m that at each time step, first all coins are
tossed for all particles in all compartments and only then are particles moved to update the
state variables.
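TwoState.m itself is provided with the course files; a minimal sketch of what one time step of this
kind of simulation might look like (with our own variable names and example parameter values):

R = 0.2; L = 0.1; dt = 0.01; nsteps = 5000;   % assumed example values
x1 = 100; x2 = 0;                              % initial particle counts
for step = 1:nsteps
    move12 = sum(rand(x1,1) < R*dt);   % coins for all particles in compartment 1
    move21 = sum(rand(x2,1) < L*dt);   % coins for all particles in compartment 2
    x1 = x1 - move12 + move21;         % update only after all coins have been tossed
    x2 = x2 + move12 - move21;
end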
Each simulation of the model will have a different outcome, but some properties will be more
or less constant. In particular:
1. Once dt is small enough to approximate a continuous-time process, further decreases in
dt have essentially no effect on the behavior of simulations. Roughly, dt is small enough
to model continuous time if there would be practically no chance of an individual really
doing 2 or more things in a time interval of length dt. For this model, that means that
we must have (R dt)(L dt) ≪ 1, i.e. dt ≪ 1/√(RL).
2. A compartment's range of departures from the solutions of the differential equation is of
order 1/√N, where N is the number of individuals in the compartment.
17 Simulating dynamics in systems with spatial patterns

We consider the following two-variable reaction-diffusion model:

    ∂u/∂t = (u - u^3/3 - v)/e + ∂^2u/∂x^2 + ∂^2u/∂y^2
    ∂v/∂t = e(u + b - 0.5v)        (8)
(8)
In this form of the model, the substance v does not diuse - the model is the extreme limit of the
diering diusion constants that are required for pattern formation by the Turing mechanism.
In an electrophysiological context, v represents the gating variable of a channel (which does not
move), while u represents the membrane potential which changes due to diusion of ions in the
tissue as well as by transmembrane currents. The tissue could be the surface of the heart, or
with one space dimension, a nerve axon.
To solve this equation, we want to discretize both space and time, replacing the derivatives in
the equations by finite differences. For the time derivatives, we estimate

    ∂u/∂t (x, y, t) ≈ [u(x, y, t+h) - u(x, y, t)] / h

and

    ∂v/∂t (x, y, t) ≈ [v(x, y, t+h) - v(x, y, t)] / h,

h being the time step of the method. For the spatial derivatives, we estimate

    ∂^2u/∂x^2 (x, y, t) ≈ [∂u/∂x (x, y, t) - ∂u/∂x (x-k, y, t)] / k
                        ≈ [u(x+k, y, t) - u(x, y, t) - (u(x, y, t) - u(x-k, y, t))] / k^2

and

    ∂^2u/∂y^2 (x, y, t) ≈ [∂u/∂y (x, y, t) - ∂u/∂y (x, y-k, t)] / k
                        ≈ [u(x, y+k, t) - u(x, y, t) - (u(x, y, t) - u(x, y-k, t))] / k^2
The values of the function u in the lower-left corner of the lattice are given by

    ...            ...             ...
    u(k, 3k, t)    u(2k, 3k, t)    u(3k, 3k, t)
    u(k, 2k, t)    u(2k, 2k, t)    u(3k, 2k, t)
    u(k, k, t)     u(2k, k, t)     u(3k, k, t)
We shall work with a rectangular domain and impose no flux boundary conditions. This means
that none of the u material should flow out of the domain due to the diffusion. Each of the
terms of the form u(x, y) - u(x - k, y) in the discretized Laplacian represents the net material
flowing between two sites. Therefore, if we border the domain by one additional row of sites
that take the same values as those at the adjacent site in the interior of the domain, then we
can apply the discrete approximation of the Laplacian throughout the domain, including the
sites just interior to the boundary. Using this trick, the following Matlab function calculates
the right hand side of our discretized operator for the arrays u and v:
function [uf,vf] = sfn(u,v)
global dx e b ;
ny = size(u,1);
nx = size(u,2);
uer = [u(:,1),u,u(:,nx)];
uec = [u(1,:);u;u(ny,:)];
ul = uec(3:ny+2,:)+uec(1:ny,:)+uer(:,1:nx)+uer(:,3:nx+2)-4*u;
u3 = u.*u.*u;
uf = (u-u3/3-v)/e + k^2*ul;
vf = e*(u+b-0.5*v);
An important pragmatic consideration here is that there are no loops. Everything is written as
vector operations. The program is already quite slow to run - loops would make it intolerable.
Now, we use the simple Euler method to update the points:
[uf,vf] = sfn(u,v);
u = u+h*uf;
v = v+h*vf;
Because the time steps are sequential, depending upon the previous step, we do need a loop to
execute it.
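A minimal sketch of that loop (sfncont.m, provided with the course files, plays this role; nsteps
and the time step h are assumed to be set beforehand, e.g. in sfninit.m):

for step = 1:nsteps
    [uf,vf] = sfn(u,v);   % right-hand sides at the current state
    u = u + h*uf;         % Euler update
    v = v + h*vf;
end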
The three files sfn.m, sfninit.m, sfncont.m (29 lines of code!) suffice to produce simulations of
spiral patterns. Run sfninit first, and then sfncont. You can repeat sfncont to run additional
steps, and you can change the parameters nsteps, b and e before running sfncont again. Here
are some things that you can do with these files:
- Figure 13 from Winfree's paper shows a (b, e) bifurcation diagram for rotor patterns he
studied. Can you reproduce some of these patterns?
- Experiment with changing the spatial discretization parameter k. What effect do you
expect to see on the spatial pattern?
- Run sfninit2 and investigate what happens when spiral patterns collide with one another.