Elements of
Concave Analysis
and Applications
Prem K. Kythe
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
This book contains information obtained from authentic and highly regarded sources. Reasonable
efforts have been made to publish reliable data and information, but the author and publisher cannot
assume responsibility for the validity of all materials or the consequences of their use. The authors and
publishers have attempted to trace the copyright holders of all material reproduced in this publication
and apologize to copyright holders if permission to publish in this form has not been obtained. If any
copyright material has not been acknowledged please write and let us know so we may rectify in any
future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information
storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access
www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc.
(CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization
that provides licenses and registration for a variety of users. For organizations that have been granted
a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and
are used only for identification and explanation without intent to infringe.
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Notations, Definitions, and Acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
1 Matrix Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Matrix Inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.1 Cofactor Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Systems of Linear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.1 Solution with the Inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4.2 Cramer’s Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4.3 Gaussian Elimination Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5 Definite and Semidefinite Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.6 Special Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.6.1 Jacobian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.6.2 Hessian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.6.3 Bordered Hessian: Two Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6.4 Bordered Hessian: Single Function . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2 Differential Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.1.1 Limit of a Function at a Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2 Theorems on Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.2.1 Limit at Infinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.2.2 Infinite Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.3 Global and Local Extrema of Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.4 First and Second Derivative Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.4.1 Definition of Concavity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.5 Vector-Valued Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.5.1 Geometric Meaning of the Inflection Point . . . . . . . . . . . . . . . . . . . 40
2.6 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.7 Multivariate Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.7.1 Geometric Interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
This textbook on concave analysis aims at two goals. First, it provides simple yet comprehensive subject matter to readers who are undergraduate seniors and beginning graduate students in mathematical economics and business mathematics. For most readers the only prerequisites are courses in matrix algebra and differential calculus, including partial differentiation; for the last chapter, however, a thorough working knowledge of linear partial differential equations and Laplace transforms is required. Readers may omit that chapter if it is not needed. The subject of the book centers mostly on concave and convex optimization; other related topics are also included. The details are provided below in the Overview section.
Although there are many excellent books on the market, almost all of them are at times difficult to understand: they are heavy on theoretical aspects and generally fail to provide ample worked-out examples that would give readers ready understanding and workability.
The second goal is elucidated below in the section 'To Readers'.
Motivation
The subject of convexity and quasi-convexity has served as a model for economic theorists making decisions about cost minimization and revenue maximization, and it has generated many publications on convex optimization. So why is there keen interest in concave and quasi-concave functions? Firstly, economic theory dictates that all utility functions are quasi-concave and that all cost functions are concave in input prices; therefore, a function that is not concave in input prices cannot be a cost function. Secondly, the standard model in economic theory consists of a set of alternatives and an ordering of these alternatives according to different priorities and interests. The decision maker then chooses a favorite alternative, one with the property that no other alternative exceeds it in the ordering. In such a situation the decision maker often uses a function that 'represents' this ordering. Thus, for example, suppose there are four alternatives, say a, b, c, and d, and suppose that the decision maker prefers a to b and treats both c and d as equally desirable. Any function f with f(a) > f(b) > f(c) = f(d) may represent this ordering.
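The representation idea just described can be made concrete in a few lines of Python; the alternatives and the numeric values assigned to them below are illustrative choices, not taken from the text.

```python
# Hypothetical utility values: a preferred to b, and c, d equally
# desirable, encoded as f(a) > f(b) > f(c) = f(d).
preferences = {"a": 4, "b": 3, "c": 1, "d": 1}

def represents(f):
    # f represents the ordering iff f(a) > f(b) > f(c) = f(d)
    return f["a"] > f["b"] > f["c"] == f["d"]

print(represents(preferences))  # True for these values
```

Any other assignment of numbers satisfying the same chain of inequalities would represent the ordering equally well.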
Overview
A general description of the topics covered in the book is as follows: Chap-
ter 1 introduces a review of matrix algebra that includes definitions, matrix
inversion, solutions of systems of linear algebraic equations, definite and semi-
definite matrices, Jacobian, two types of Hessian matrices, and the Hessian
test. Chapter 2 is a review of calculus, covering limits, derivatives, global and local extrema, the first and second derivative tests, vector-valued functions, optimization, multivariate functions, and basic concepts of mathematical economics.
Concave and convex functions are introduced in Chapter 3, starting with
the notion of convex sets, Jensen’s inequalities for both concave and convex
functions, and unconstrained optimization. Chapter 4 deals with concave programming; it is devoted to optimization problems, mostly maximization subject to inequality constraints, using the Lagrange method of multipliers and the KKT necessary and sufficient conditions. Applications to mathematical economics include the topic of peak price loading, and comparative statics is discussed. Optimization problems focusing on minimization are introduced in Chapter 5, on convex programming, in order to compare it with concave optimization. Nonlinear programming is treated; the Fritz John and Slater conditions are presented, and Lagrangian duality is discussed.
Chapters 6 and 7 deal with quasi-concave and quasi-convex functions, both of which are important in their own applications. The single-function bordered Hessian test for quasi-concavity and quasi-convexity is presented, and optimization problems with these types of functions, together with the minimax theorem, are provided. Chapter 8 deals with log-concave functions; general results on log-concavity are presented, with an application to mean residual life, and the Asplund sum is introduced, with its algebra, derivatives, and area measure. Log-concavity of nonnegative sequences is also discussed.
To Readers
The second goal concerns specifically the abuse and misuse of a couple of
standard mathematical notations in this field of scientific study. They are
the gradient ∇f and the Laplacian ∇2 f of a function f (x) in Rn . Somehow,
and somewhere, a tradition started to replace the first-order partials of the
function f by its gradient ∇f . It seems that this tradition started without
any rigorous mathematical argument in its support. This book provides a result (Theorem 2.18) establishing that only under a specific necessary condition can the column vector [∂f/∂x1 · · · ∂f/∂xn]T replace the gradient vector ∇f; these two quantities, although isomorphic to each other, are not equal. Moreover, it is shown that any indiscriminate replacement of one by the other leads to certain incorrect results (§3.5).
The other misuse deals with the Laplacian ∇2 f , which has been used to
represent the Hessian matrix (§1.6.2), without realizing that ∇2 f is the trace
(i.e., sum of the diagonal elements) of the Hessian matrix itself. This abuse
makes a part equal to the whole. Moreover, ∇2 is the well-known linear partial
differential operator of the elliptic type known as the Laplacian.
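The trace relation is easy to verify concretely. In the sketch below (the function f(x, y) = x²y + y³ is an illustrative choice, not taken from the text), the Hessian is [[2y, 2x], [2x, 6y]], and the Laplacian f_xx + f_yy coincides with its trace.

```python
# For f(x, y) = x^2*y + y^3 the second-order partials are
# f_xx = 2y, f_xy = f_yx = 2x, f_yy = 6y.
def hessian(x, y):
    return [[2 * y, 2 * x], [2 * x, 6 * y]]

def laplacian(x, y):
    return 2 * y + 6 * y  # f_xx + f_yy

x, y = 1.5, -0.5
H = hessian(x, y)
trace = H[0][0] + H[1][1]
print(trace == laplacian(x, y))  # True: the Laplacian is the trace of H
```

The point of the passage above is precisely that this scalar trace is a part, not the whole, of the Hessian matrix.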
It appears that this misuse perhaps happened because of the term ‘vector’,
which is used (i) as a scalar quantity, having only magnitude, as in the row
or column vectors (in the sense of a matrix), and (ii) as a physical quantity,
such as force, velocity, acceleration, and momentum, having both magnitude
and direction. The other factor behind the abuse in the case of the gradient is the above-mentioned linear isomorphic mapping between the gradient vector ∇f and the (scalar) column vector [∂f/∂x1 · · · ∂f/∂xn]T. This isomorphism has then been taken literally as 'equality' between the two quantities. Once the case for ∇f became the tradition, the next choice, ∇2f for the Hessian matrix, became another obvious, but incorrect, tradition.
As readers, you will find an attention symbol, !!! , at different parts of the
book. It is used to point out the significance of the statements found there.
Other, less important notations are the ≺, ⊕, and ⊙ symbols. Although borrowed from physics and astronomy, these symbols are acceptable with a different but similar meaning, provided they are properly defined, as is done in the section on Notations. Moreover, the ⊕ and ⊙ symbols have now become so common, owing to advances in cell phones and related electronic technology, that they are probably losing their rigorous mathematical significance.
Acknowledgments
I take this opportunity to thank Mr. Sarfraz Khan, Executive Editor, Taylor
& Francis, for his support, and Mr. Callum Fraser for coordinating the book
project. I also thank the Project Editor Michele A. Dimont for doing a great
job of editing the text. Thanks are due to the reviewers and to some of my
colleagues who made some very valuable suggestions to improve the book.
Lastly, I thank my friend Michael R. Schäferkotter for help and advice freely
given whenever needed.
Prem K. Kythe
Notations, Definitions, and Acronyms
Cov, covariance
c.d.f., cumulative distribution function
c(w, y), cost function
Dt , derivative with respect to t
Df (x), derivative of f (x) in Rn
D, aggregated demand
D, domain, usually in the z-plane
dist(A, B), distance between points (or sets) A and B
dom(f ), domain of a function f
DRS, decreasing return to scale
e, expenditure function
E, amount allocated for expenditure
E[X], expected value of a random vector X
E(f ), entropy of f
Eq(s)., Equation(s) (when followed by an equation number)
ei , ith unit vector, i = 1, . . . , n
[e], set of the unit vectors ei in Rn
epi(f ), epigraph of f
e(p, u), expenditure function
F , field
f : X → Y , function f maps the set X into (onto) the set Y
f ◦ g, composite function of f and g: (f ◦ g)(·) = f (g(·))
f ′ , first derivative of f
f ′′ , second derivative of f
f (n) , nth derivative of f
∂f(x)/∂xi, first-order partials of f in Rn, also written fi, for i = 1, . . . , n; also written as fx, fy, fz for ∂f/∂x, ∂f/∂y, ∂f/∂z in R3
∂²f(x)/∂xi∂xj, second-order partials of f in Rn, also written as fij for i, j = 1, . . . , n; also written as fxx, fyy, fzz for ∂²f/∂x², ∂²f/∂y², ∂²f/∂z² in R3
(f ◦ g)(x) = f(g(x)), composition of functions f and g
f ⋆ g, convolution of f(t) and g(t) (= ∫₀^t f(t − u) g(u) du = ∫₀^t f(u) g(t − u) du = L⁻¹{G(s)F(s)})
FJ, Fritz John conditions
F(s), Laplace transform of f(t) (= ∫₀^∞ e^{−st} f(t) dt)
G, government expenditure; constrained set
Gmin , positive minimal accepted level of profit
G(·; ·), Green’s function
∇, 'del' operator, ∇ = i ∂/∂x + j ∂/∂y + k ∂/∂z ((x, y, z) ∈ R3); an operator defined in Rn as ∇ = e1 ∂/∂x1 + · · · + en ∂/∂xn
∇f, gradient of a function f, a vector in R3 defined by ∇f = i ∂f/∂x + j ∂f/∂y + k ∂f/∂z; a vector in Rn defined by ∇f = e1 ∂f/∂x1 + · · · + en ∂f/∂xn for x = (x1, . . . , xn) ∈ Rn, written either as a 1 × n or an n × 1 vector
∇², Laplacian operator, defined on Rn as ∂²/∂x1² + · · · + ∂²/∂xn²; it is a linear elliptic partial differential operator
∇²f(x) = ∂²f/∂x1² + · · · + ∂²f/∂xn², x = (x1, . . . , xn) ∈ Rn, Laplacian of f(x); also the trace of the Hessian matrix H
‖x‖1, l1-norm of a vector x
‖x‖2, l2-norm, or Euclidean norm, of a vector x
‖x‖∞, l∞-norm of a vector x
≻, ⪰, subordination (predecessor): A ⪰ B, matrix inequality between matrices A and B; A ≻ B, strict matrix inequality between matrices A and B
≺, ⪯, subordination (successor), e.g., f ≺ g is equivalent to f(0) = g(0) and f(E) ⊂ g(E), where E is the open disk; but here x ≺ y is used for componentwise strict inequality, and x ⪯ y for componentwise inequality between vectors x and y
(f ⊕ g)(z) = sup{f(x) g(y) : x + y = z}, where f and g are log-concave functions
(s ⊙ f)(x) = s f(x/s), where f is a log-concave function and s > 0
(n choose k), binomial coefficient, = n!/(k! (n − k)!) = (n choose n − k)
=^iso, isomorphic to; for example, A =^iso B means A is isomorphic to B, and conversely
□, end of a proof, or an example
!!! attention symbol
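As a numerical sanity check on the convolution entry f ⋆ g above, the two integral forms agree. The sketch below uses example functions f(t) = t and g(t) = e^{−t} (chosen here for illustration, not from the text) and approximates each integral by a simple left-endpoint Riemann sum.

```python
import math

def conv(f, g, t, n=100000):
    # Left-endpoint Riemann sum for the convolution integral
    # (f * g)(t) = integral from 0 to t of f(t - u) g(u) du.
    h = t / n
    return sum(f(t - i * h) * g(i * h) * h for i in range(n))

f = lambda u: u
g = lambda u: math.exp(-u)

left = conv(f, g, 2.0)   # integral of f(t - u) g(u)
right = conv(g, f, 2.0)  # integral of f(u) g(t - u)
print(abs(left - right) < 1e-3)  # True: the two forms agree
```

For these functions the exact value at t = 2 is 1 + e^{−2} ≈ 1.1353, which both sums approximate closely.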
1
Matrix Algebra
Some basic concepts and results from linear and matrix algebra, and from
finite-dimensional vector spaces are presented. Proofs for most of the results
can be found in many books, for example, Bellman [1970], Halmos [1958],
Hoffman and Kunze [1961], Lipschutz [1968], and Michel and Herget [2007].
1.1 Definitions
A matrix A is a rectangular array of elements (numbers, parameters, or vari-
ables), where the elements in a horizontal line are called rows, and those in
a vertical line columns. The dimension of a matrix is defined by the number
of rows m and the number of columns n, and we say that such a matrix has
dimension m × n, or simply that the matrix is m × n. If m = n, then we have
a square matrix. If the matrix is 1 × n, we call it a row vector, and if the
matrix is m × 1, it is called a column vector. The matrix obtained by converting the rows of a matrix A to columns and the columns of A to rows is called the transpose of A and is denoted by AT.
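The transpose operation can be sketched in a few lines of Python (the example entries are illustrative, not from the text): an m × n matrix becomes n × m, and transposing twice recovers the original.

```python
A = [[1, 2, 3],
     [4, 5, 6]]  # a 2 x 3 matrix

def transpose(M):
    # rows of M become the columns of the result
    return [list(col) for col in zip(*M)]

AT = transpose(A)
print(AT)                  # [[1, 4], [2, 5], [3, 6]] -- a 3 x 2 matrix
print(transpose(AT) == A)  # True: (A^T)^T = A
```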
Let two 3 × 3 matrices A and B be defined as

        [ a11  a12  a13 ]        [ b11  b12  b13 ]
    A = [ a21  a22  a23 ] ,  B = [ b21  b22  b23 ] .     (1.1.1)
        [ a31  a32  a33 ]        [ b31  b32  b33 ]
In matrix addition (or subtraction), corresponding elements are combined: b11 is added to (or subtracted from) a11 in A; b12 to (or from) a12, and so on. Multiplication of a matrix by a number, or scalar, involves multiplying each element of the matrix by the scalar; it is called scalar multiplication, since it scales the matrix up or down by the size of the scalar.
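Both operations are elementwise, as a short Python sketch shows (the matrices and the scalar 3 are example values chosen here, not from the text).

```python
A = [[1, 2], [3, 4]]
B = [[10, 20], [30, 40]]

# elementwise addition of two matrices of the same dimension
add = [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

# scalar multiplication: every element scaled by 3
scaled = [[3 * a for a in row] for row in A]

print(add)     # [[11, 22], [33, 44]]
print(scaled)  # [[3, 6], [9, 12]]
```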
A row vector A and a column vector B are written, respectively, as

                                      [ b11 ]
    A = [ a11  a12  a13 ]1×3 ,    B = [ b21 ]     .
                                      [ b31 ]3×1
the matrices A and B, and B and E are conformable for multiplication, but
A and C are not conformable. Thus,
         [ 3 · 5 + 6 · 7 + 11 · 9    3 · 13 + 6 · 8 + 11 · 10 ]   [ 156  197 ]
    AB = [                                                    ] = [          ]        ,
         [ 12 · 5 + 8 · 7 + 5 · 9    12 · 13 + 8 · 8 + 5 · 10 ]   [ 161  270 ]2×2

         [ 5 · 1 + 13 · 2    5 · 4 + 13 · 4    5 · 7 + 13 · 9 ]   [ 31  72  152 ]
    BE = [ 7 · 1 + 8 · 2     7 · 4 + 8 · 4     7 · 7 + 8 · 9  ] = [ 23  60  121 ]     .
         [ 9 · 1 + 10 · 2    9 · 4 + 10 · 4    9 · 7 + 10 · 9 ]   [ 29  76  153 ]3×3
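These products can be checked mechanically. The sketch below implements the row-by-column rule; the matrices A, B, and E are reconstructed from the arithmetic shown above (their defining display is missing from this copy, so treat them as assumptions).

```python
A = [[3, 6, 11], [12, 8, 5]]    # 2 x 3
B = [[5, 13], [7, 8], [9, 10]]  # 3 x 2
E = [[1, 4, 7], [2, 4, 9]]      # 2 x 3

def matmul(X, Y):
    # entry (i, j) is the dot product of row i of X with column j of Y
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

print(matmul(A, B))  # [[156, 197], [161, 270]]
print(matmul(B, E))  # [[31, 72, 152], [23, 60, 121], [29, 76, 153]]
```

Note that A (2 × 3) and B (3 × 2) are conformable in either order, but the two products have different dimensions (2 × 2 versus 3 × 3 would require B·A; here B·E is 3 × 3).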
Random documents with unrelated
content Scribd suggests to you:
latter decreased abruptly in frequency and locked on to the first
subharmonic. As the stimulus frequency was further increased, the
pacemaker frequency would increase, then skip to the next
harmonic, then increase again, etc. This type of behavior was
observed by Moore et al. (23) in Aplysia and reported at the San
Diego Symposium for Biomedical Electronics shortly after it was
observed by the author in the electronic model.
Thus, we have shown that an electronic analog with all
parameters except membrane capacitance fixed at values close to
those of Hodgkin and Huxley, can provide all of the normal threshold
or axonal behavior and also all of the subthreshold somatic and
dendritic behavior outlined on page 7. Whether or not this is of
physiological significance, it certainly provides a unifying basis for
construction of electronic neural analogs. Simple circuits, based on
the Hodgkin-Huxley model and providing all of the aforementioned
behavior, have been constructed with ten or fewer inexpensive
transistors with a normal complement of associated circuitry (18). In
the near future we hope to utilize several models of this type to help
assess the information-processing capabilities not only of individual
neurons but also of small groups or networks of neurons.
REFERENCES
1. Hagiwara, S., and Bullock, T. H.
“Intracellular Potentials in Pacemaker and Integrative Neurons
of the Lobster Cardiac Ganglion,”
J. Cell and Comp. Physiol. 50 (No. 1):25-48 (1957)
2. Chalazonitis, N., and Arvanitaki, A.,
“Slow Changes during and following Repetitive Synaptic
Activation in Ganglion Nerve Cells,”
Bull. Inst. Oceanogr. Monaco No. 1225:1-23 (1961)
3. Hodgkin, A. L., Huxley, A. F., and Katz, B.,
“Measurement of Current-Voltage Relations in the Membrane
of the Giant Axon of Loligo,”
J. Physiol. 116:424-448 (1952)
4. Hagiwara, S., and Saito, N.,
“Voltage-Current Relations in Nerve Cell Membrane of
Onchidium verruculatum,”
J. Physiol. 148:161-179 (1959)
5. Hagiwara, S., and Saito, N.,
“Membrane Potential Change and Membrane Current in
Supramedullary Nerve Cell of Puffer,”
J. Neurophysiol. 22:204-221 (1959)
6. Hagiwara, S.,
“Current-Voltage Relations of Nerve Cell Membrane,”
“Electrical Activity of Single Cells,”
Igakushoin, Hongo, Tokyo (1960)
7. Bullock, T. H.,
“Parameters of Integrative Action of the Nervous System at
the Neuronal Level,”
Experimental Cell Research Suppl. 5:323-337 (1958)
8. Otani, T., and Bullock, T. H.,
“Effects of Presetting the Membrane Potential of the Soma of
Spontaneous and Integrating Ganglion Cells,”
Physiological Zoology 32 (No. 2):104-114 (1959)
9. Bullock, T. H., and Terzuolo, C. A.,
“Diverse Forms of Activity in the Somata of Spontaneous and
Integrating Ganglion Cells,”
J. Physiol. 138:343-364 (1957)
10. Bullock, T. H.,
“Neuron Doctrine and Electrophysiology,”
Science 129 (No. 3355):997-1002 (1959)
11. Chalazonitis, N., and Arvanitaki, A.,
“Slow Waves and Associated Spiking in Nerve Cells of Aplysia,”
Bull. Inst. Oceanogr. Monaco No. 1224:1-15 (1961)
12. Bullock, T. H.,
“Properties of a Single Synapse in the Stellate Ganglion of
Squid,”
J. Neurophysiol. 11:343-364 (1948)
13. Bullock, T. H.,
“Neuronal Integrative Mechanisms,”
“Recent Advances in Invertebrate Physiology,”
Scheer, B. T., ed., Eugene, Oregon:Univ. Oregon Press 1957
14. Hodgkin, A. L., and Huxley, A. F.,
“Currents Carried by Sodium and Potassium Ions through the
Membrane of the Giant Axon of Loligo,”
J. Physiol. 116:449-472 (1952)
15. Hodgkin, A. L., and Huxley, A. F.,
“The Components of Membrane Conductance in the Giant
Axon of Loligo,”
J. Physiol. 116:473-496 (1952)
16. Hodgkin, A. L., and Huxley, A. F.,
“The Dual Effect of Membrane Potential on Sodium
Conductance in the Giant Axon of Loligo,”
J. Physiol. 116:497-506 (1952)
17. Hodgkin, A. L., and Huxley, A. F.,
“A Quantitative Description of Membrane Current and its
Application to Conduction and Excitation in Nerve,”
J. Physiol. 117:500-544 (1952)
18. Lewis, E. R.,
“An Electronic Analog of the Neuron Based on the Dynamics of
Potassium and Sodium Ion Fluxes,”
“Neural Theory and Modeling,”
R. F. Reiss, ed., Palo Alto, California:Stanford University
Press, 1964
19. Eccles, J. C.,
Physiology of Synapses,
Berlin:Springer-Verlag, 1963
20. Grundfest, H.,
“Excitation Triggers in Post-Junctional Cells,”
“Physiological Triggers,”
T. H. Bullock, ed., Washington, D.C.:American Physiological
Society, 1955
21. Rall, W.,
“Membrane Potential Transients and Membrane Time
Constants of Motoneurons,”
Exp. Neurol. 2:503-532 (1960)
22. Araki, T., and Otani, T.,
“The Response of Single Motoneurones to Direct Stimulation,”
J. Neurophysiol. 18:472-485 (1955)
23. Moore, G. P., Perkel, D. H., and Segundo, J. P.,
“Stability Patterns in Interneuronal Pacemaker Regulation,”
Proceedings of the San Diego Symposium for Biomedical
Engineering,
San Diego, California, 1963
24. Eccles, J. C.,
The Neurophysiological Basis of Mind,
Oxford:Clarendon Press, 1952
Fields and Waves in Excitable
Cellular Structures
R. M. STEWART
Space General Corporation
El Monte, California
J. Z. Young (24)
INTRODUCTION
The study of electrical fields in densely-packed cellular media is
prompted primarily by a desire to understand more fully the details
of brain mechanism and its relation to behavior. Our work has
specifically been directed toward an attempt to model such
structures and mechanisms, using relatively simple inorganic
materials.
The prototype for such experiments is the “Lillie[1] iron-wire
nerve model.” Over a hundred years ago, it had been observed that
visible waves were produced on the surface of a piece of iron
submerged in nitric acid when and where the iron is touched by a
piece of zinc. After a short period of apparent fatigue, the wire
recovers and can again support a wave when stimulated. Major
support for the idea that such impulses are in fact directly related to
peripheral nerve impulses came from Lillie around 1920. Along an
entirely different line, various persons have noted the morphological
and dynamic similarity of dendrites in brain and those which
sometimes grow by electrodeposition of metals from solution.
Gordon Pask (17), especially, has pointed to this similarity and has
discussed in a general way the concomitant possibility of a physical
model for the persistent memory trace.
By combining and extending such concepts and techniques, we
hope to produce a macroscopic model of “gray matter,” the structural
matrix of which will consist of a dense, homogeneously-mixed,
conglomerate of small pellets, capable of supporting internal waves
of excitation, of changing electrical behavior through internal fine-
structure growth, and of forming temporal associations in response
to peripheral shocks.
A few experimenters have subsequently pursued the iron-wire
nerve-impulse analogy further, hoping thereby to illuminate the
mechanisms of nerve excitation, impulse transmission and recovery,
but interest has generally been quite low. It has remained fairly
undisturbed in the text books and lecture demonstrations of medical
students, as a picturesque aid to their formal education. On the
outer fringes of biology, still less interest has been displayed; the
philosophical vitalists would surely be revolted by the idea of such
models of mind and memory, and at the other end of the scale,
contemporary computer engineers generally assume that a nerve
cell operates much too slowly to be of any value. This lack of
interest is certainly due, in part, to success in developing techniques
of monitoring individual nerve fibers directly to the point that it is
just about as easy to work with large nerve fibers (and even
peripheral and spinal junctions) as it is to work with iron wires.
Under such circumstances, the model has only limited value,
perhaps just to the extent that it emphasizes the role of factors
other than specific molecular structure and local chemical reactions
in the dynamics of nerve action.
When we leave the questions of impulse transmission on long
fibers and peripheral junctions, however, and attempt to discuss the
brain, there can be hardly any doubt that the development of a
meaningful physical model technique would be of great value. Brain
tissue is soft and sensitive, the cellular structures are small, tangled,
and incredibly numerous. Therefore (Young (24)), “ ... physiologists
hope that after having learned a lot about nerve-impulses in the
nerves they will be able to go on to study how these impulses
interact when they reach the brain. [But], we must not assume that
we shall understand the brain only in the terms we have learned to
use for the nerves. The function of nerves is to carry impulses—like
telegraph wires. The functions of brains is something else.” But,
confronted with such awesome experimental difficulties, with no
comprehensive mathematical theory in sight, we are largely limited
otherwise to verbal discourses, rationales and theorizing, a
hopelessly clumsy tool for the development of an adequate
understanding of brain function. A little over ten years ago Sperry
(19) said, “Present day science is quite at a loss even to begin to
describe the neural events involved in the simplest form of mental
activity.” This situation has not changed much today. The
development, study, and understanding of complex high-density
cellular structures which incorporate characteristics of both the Lillie
and Pask models may, it is hoped, alleviate this situation. There
would also be fairly obvious technological applications for such
techniques if highly developed and which, more than any other
consideration, has prompted support for this work.
Experiments to date have been devised which demonstrate the
following basic physical functional characteristics:
(1) Control of bulk resistivity of electrolytes
containing closely-packed, poorly-conducting
pellets
(2) Circulation of regenerative waves on closed
loops
(3) Strong coupling between isolated excitable
sites
(4) Logically-complete wave interactions,
including facilitation and annihilation
(5) Dendrite growth by electrodeposition in
“closed” excitable systems
(6) Subthreshold distributed field effects,
especially in locally-refractory regions.
In addition, our attention has necessarily been directed to various
problems of general experimental technique and choice of materials,
especially as related to stability, fast recovery and long life. However,
in order to understand the possible significance of, and motivation
for such experiments, some related modern concepts of
neurophysiology, histology and psychology will be reviewed very
briefly. These concepts are, respectively:
1. Cellular Structure
2. Short-Term Memory
3. The Synapse
4. Inhibition
5. Long-Term Memory
EXPERIMENTAL TECHNIQUE
In almost all experiments, the basic signal-energy mechanism
employed has been essentially that one studied most extensively by
Lillie (12), Bonhoeffer (2), Yamagiwa (22), Matumoto and Goto (14)
and others, i.e., activation, impulse propagation and recovery on the
normally passive surface of a piece of iron immersed in nitric acid or
of cobalt in chromic acid (20). The iron we have used most
frequently is of about 99.99% purity, which gives performance more
consistent than but similar to that obtained using cleaned “coat-
hanger” wires. The acid used most frequently by us is about 53-55%
aqueous solution by weight, substantially more dilute than that
predominantly used by previous investigators. The most frequently
reported concentration has been 68-70%, a solution which is quite
stable and, hence, much easier to work with in open containers than
the weaker solutions, results in very fast waves but gives, at room
temperatures, a very long refractory period (typically, 15 minutes). A
noble metal (such as silver, gold or platinum) placed in contact with
the surface of the iron has a stabilizing effect (14) presumably
through the action of local currents and provides a simple and useful
technique whereby, with dilution, both stability and fast recovery (1
second) can be achieved in simple demonstrations and experiments.
Experiments involving the growth by electrodeposition and study
of metallic dendrites are done with an eye toward electrical, physical
and chemical compatibility with the energy-producing system
outlined above. Best results to date (from the standpoints of
stability, non-reactivity, and morphological similarity to neurological
structures) have been obtained by dissolving various amounts of
gold chloride salt in 53-55% HNO₃.
An apparatus has been devised and assembled for the purpose of
containing and controlling our primary experiments. (See Figure 1).
Its two major components are a test chamber (on the left in Figure
1) and a fluid exchanger (on the right). In normal operation the test
chamber, which is very rigid and well sealed after placing the
experimental assembly inside, is completely filled with electrolyte (or,
initially, an inert fluid) to the exclusion of all air pockets and bubbles.
Thus encapsulated, it is possible to perform experiments which
would otherwise be impossible due to instability. The instability
which plagues such experiments is manifested in copious generation
of bubbles on, and subsequent rapid disintegration of, all
“excitable” material (i.e., iron). Preliminary experiments indicated that such
“bubble instability” could be suppressed by constraining the volume
available to expansion. In particular, response and recovery times
can now be decreased substantially and work can proceed with
complex systems of interest such as aggregates containing many
small iron pellets.
The test chamber is provided with a heater (and thermostatic
control) which makes possible electrochemical impulse response and
recovery times comparable to those of the nervous system (1 to 10
msec). The fluid-exchanger is so arranged that fluid in the test
chamber can be arbitrarily changed or renewed by exchange within
a rigid, sealed, completely liquid-filled (“isochoric”) loop. Thus,
stability can be maintained for long periods of time and over a wide
variety of investigative or operating conditions.
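The activation-recovery cycle to which these response times refer — rapid excitation followed by a refractory interval — is commonly abstracted by the two-variable Bonhoeffer-van der Pol (FitzHugh-Nagumo) equations, which derive in part from the same passive-iron work of Bonhoeffer (2). The sketch below is purely illustrative and is not part of the apparatus; the parameter values are conventional textbook choices, not measured quantities:

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, eps=0.08, T=200.0, dt=0.01):
    """Forward-Euler integration of the FitzHugh-Nagumo
    (Bonhoeffer-van der Pol) equations. v is the fast activation
    variable; w is the slow recovery variable whose lag behind v
    produces the refractory period."""
    steps = int(T / dt)
    v, w = -1.0, -0.5
    trace = np.empty(steps)
    for i in range(steps):
        dv = v - v ** 3 / 3.0 - w + I
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        trace[i] = v
    return trace

# A sustained stimulus I in the oscillatory regime yields repeated
# spikes separated by refractory intervals (relaxation oscillations).
v = fitzhugh_nagumo()
```

Shrinking eps widens the separation between the fast upstroke and the slow recovery, qualitatively mirroring the long refractory periods seen with the stronger acid solutions.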
Most of the parts of this apparatus are made of stainless steel
and are sealed with polyethylene and teflon. There is a small quartz
observation window on the test chamber, two small lighting ports, a
pressure transducer, thermocouple, screw-and-piston pressure
actuator and umbilical connector for experimental electrical inputs
and outputs.
BASIC EXPERIMENTS
The basic types of experiments described in the following
sections are numbered so as to correspond roughly to the
related neurophysiological concepts summarized in the previous
section.
1. Cellular Structure
2. Regenerative Loops
3. Strong Coupling
4. Inhibitory Coupling
5. Dendrite Growth
SUMMARY
An attempt is being made to develop meaningful electrochemical
model techniques which may contribute toward a clearer
understanding of cortical function. Two basic phenomena are
simultaneously employed which are variants of (1) the Lillie iron-wire
nerve model, and (2) growth of metallic dendrites by
electrodeposition. These phenomena are being induced particularly
within dense cellular aggregates of various materials whose
interstitial spaces are flooded with liquid electrolyte.
REFERENCES
1. Bok, S. T.,
“Histonomy of the Cerebral Cortex,”
Amsterdam, London, New York, Princeton: Elsevier Publishing
Co., 1959
2. Bonhoeffer, K. F.,
“Activation of Passive Iron as a Model for the Excitation of
Nerve,”
J. Gen. Physiol. 32:69-91 (1948).
This paper summarizes work carried out during 1941-1946 at
the University of Leipzig, and published during the war years
in German periodicals.
3. Boycott, B. B., and Young, J. Z.,
“The Comparative Study of Learning,”
S. E. B. Symposia, No. IV
“Physiological Mechanisms in Animal Behavior,”
Cambridge: University Press, USA: Academic Press, Inc.,
1950
4. Cole, K. S., and Curtis, H. J.,
“Electric Impedance of the Squid Giant Axon During Activity,”
J. Gen. Physiol. 22:649-670 (1939)
5. Eccles, J. C.,
“The Effects of Use and Disuse of Synaptic Function,”
“Brain Mechanisms and Learning—A Symposium,”
organized by the Council for International Organizations of
Medical Science, Oxford: Blackwell Scientific Publications, 1961
6. Franck, U. F.,
“Models for Biological Excitation Processes,”
“Progress in Biophysics and Biophysical Chemistry,”
J. A. V. Butler, ed., London and New York: Pergamon Press,
pp. 171-206, 1956
7. Gerard, R. W.,
“Biological Roots of Psychiatry,”
Science 122 (No. 3162):225-230 (1955)
8. Gesell, R.,
“A Neurophysiological Interpretation of the Respiratory Act,”
Ergebn. Physiol. 43:477-639 (1940)
9. Hebb, D. O.,
“The Organization of Behavior, A Neuropsychological Theory,”
New York: John Wiley and Sons, 1949
10. Hebb, D. O.,
“Distinctive Features of Learning in the Higher Animal,”
“Brain Mechanisms and Learning—A Symposium,”
organized by the Council for International Organizations of
Medical Science, Oxford: Blackwell Scientific Publications, 1961
11. Konorski, J.,
“Conditioned Reflexes and Neuron Organization,”
Cambridge: Cambridge University Press, 1948
12. Lillie, R. S.,
“Factors Affecting the Transmission and Recovery in the
Passive Iron Nerve Model,”
J. Gen. Physiol. 4:473 (1925)
13. Lillie, R. S.,
Biol. Rev. 16:216 (1936)
14. Matumoto, M., and Goto, K.,
“A New Type of Nerve Conduction Model,”
The Gunma Journal of Medical Sciences 4 (No. 1) (1955)
15. McCulloch, W. S., and Pitts, W.,
“A Logical Calculus of the Ideas Immanent in Nervous Activity,”
Bulletin of Mathematical Biophysics 5:115-133 (1943)
16. Morrell, F.,
“Electrophysiological Contributions to the Neural Basis of
Learning,”
Physiological Reviews 41(No. 3) (1961)
17. Pask, G.,
“The Growth Process Inside the Cybernetic Machine,”
Proc. 2nd Congress International Association Cybernetics,
Namur, 1958; Paris: Gauthier-Villars
18. Retzlaff, E.,
“Neurohistological Basis for the Functioning of Paired Half-
Centers,”
J. Comp. Neurology 101:407-443 (1954)
19. Sperry, R. W.,
“Neurology and the Mind-Brain Problem,”
Amer. Scientist 40(No. 2): 291-312 (1952)
20. Tasaki, I., and Bak, A. F.,
J. Gen. Physiol. 42:899 (1959)
21. Thorpe, W. H.,
“The Concepts of Learning and Their Relation to Those of
Instinct,”
S. E. B. Symposia, No. IV,
“Physiological Mechanisms in Animal Behavior,”
Cambridge: University Press, USA: Academic Press, Inc., 1950
22. Yamagiwa, K.,
“The Interaction in Various Manifestations (Observations on
Lillie’s Nerve Model),”
Jap. J. Physiol. 1:40-54 (1950)
23. Young, J. Z.,
“The Evolution of the Nervous System and of the Relationship
of Organism and Environment,”
G. R. de Beer, ed.,
“Evolution,”
Oxford: Clarendon Press, pp. 179-204, 1938
24. Young, J. Z.,
“Doubt and Certainty in Science, A Biologist’s Reflections on
the Brain,”
New York: Oxford Press, 1951