Synergetics: Introduction and Advanced Topics, Haken, 2004

The book 'Synergetics' by Hermann Haken explores the interdisciplinary field of synergetics, which focuses on self-organization in complex systems across various domains such as physics, chemistry, biology, and psychology. It provides theoretical tools and mathematical approaches to understand phenomena of self-organization and phase transitions, emphasizing the universality of these principles across different systems. The text serves as a comprehensive resource for students and researchers interested in the dynamics of complex systems and their applications in various scientific fields.

Synergetics

Springer-Verlag Berlin Heidelberg GmbH


ONLINE LIBRARY
Physics and Astronomy
springeronline.com
Hermann Haken

Synergetics
Introduction and Advanced Topics

With 266 Figures

Springer
Professor Dr. Dr. h.c. mult. Hermann Haken
Institute for Theoretical Physics I
Center of Synergetics
University of Stuttgart
Pfaffenwaldring 57/IV
70550 Stuttgart, Germany

Cataloging-in-Publication Data applied for


Bibliographic information published by Die Deutsche Bibliothek
Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic
data is available in the Internet at <http://dnb.ddb.de>.

This book appeared originally in two volumes in the series "Springer Series in Synergetics":
Synergetics, 3rd Edition (1983)
Advanced Synergetics, 1st Edition (1983)

ISBN 978-3-642-07405-9 ISBN 978-3-662-10184-1 (eBook)


DOI 10.1007/978-3-662-10184-1
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting,
reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication
or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965,
in its current version, and permission for use must always be obtained from Springer-Verlag Berlin Heidelberg
GmbH. Violations are liable for prosecution under the German Copyright Law.
springeronline.com
© Springer-Verlag Berlin Heidelberg 2004
Originally published by Springer-Verlag Berlin Heidelberg New York in 2004
Softcover reprint of the hardcover 1st edition 2004
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply,
even in the absence of a specific statement, that such names are exempt from the relevant protective laws
and regulations and therefore free for general use.
Cover design: design & production, Heidelberg
Printed on acid-free paper 54/3141/tr - 543 2 1 0
Preface

This book is a reprint edition that comprises two titles, namely "Synergetics. An
Introduction. Nonequilibrium Phase Transitions and Self-Organization in Physics,
Chemistry and Biology" and "Advanced Synergetics. Instability Hierarchies of Self-
Organizing Systems and Devices". The reason for this publication is two-fold:
Since synergetics is a new type of interdisciplinary field, initiated by the author
in 1969, the basic ideas developed in these volumes are of considerable theoretical
interest. But much more than this, the methods and even the concrete examples
presented in these books are still highly useful for graduate students, professors, and
even for researchers in this fascinating field. The reason lies in the following facts:
Synergetics deals with complex systems, i.e. systems that are composed of many
individual parts that are able to spontaneously form spatial, temporal or functional
structures by means of self-organization. Such phenomena occur in many fields
ranging from physics, chemistry and biology to economics and sociology. More
recent areas of application have been found in medicine and psychology, where
the great potential of the basic principles of synergetics can be unearthed. Further
applications have become possible in informatics, for instance the design of new
types of computers, and in other fields of engineering. The central question asked by
synergetics is: Are there general principles of self-organization irrespective of the
nature of the individual parts of which systems are composed?
Indeed, the original books "Synergetics" and "Advanced Synergetics" provide
the reader with a solid knowledge of basic concepts, theoretical tools and mathemat-
ical approaches to cope with the phenomenon of self-organization from a unifying
point of view. Synergetics takes into account deterministic processes as treated in
dynamic systems theory, including bifurcation theory and catastrophe theory, as well
as basic notions of chaos theory and develops its own approaches. Equally well
it takes into account stochastic processes that are indispensable in many appli-
cations especially when qualitative changes of systems are treated. An important
special case refers to non-equilibrium phase-transitions that were originally treated
in physics.
The longevity of these two books derives from the timelessness of mathematics.
But also the explicit examples of applications to various fields are still of a paradig-
matic character. The wide applicability of the results of synergetics stems from the
fact that close to instability points, where systems change their spatio-temporal pat-
terns or functions dramatically, similar behavior may be demonstrated in a great
variety of systems belonging to many different fields. Thus "universality classes"
can be established. This has the advantage that once one has understood the behavior
of one system, one may extrapolate this behavior to other systems.
The "hard core" of the research strategy, as outlined in this volume, is as follows:
Nearly all systems are subject to influences from their environment. These influences
are quantified by control parameters. When these parameters change, the system may
adapt smoothly, or may change qualitatively. We focus our attention on the latter
situation, which is, of course, most interesting. Here, in large classes of systems that
are originally described by many variables, the behavior of a system is described
and determined by only a few variables, the order parameters. They fix the behavior
of the individual parts via the slaving principle. The order parameter equations
that I derive are stochastic differential equations. I call them generalized Ginzburg-
Landau equations because they contain the famous Ginzburg-Landau equations
as a special case. No prior knowledge of these equations is needed in this book,
however.
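As a schematic illustration (the notation here is mine, not quoted from the text), the simplest such order parameter equation, for a single complex order parameter ξ, reads

```latex
\dot{\xi} = \lambda\,\xi - b\,|\xi|^{2}\,\xi + F(t) ,
```

where the control parameter λ measures the distance from the instability point, b > 0 describes saturation, and F(t) is a fluctuating force. By the slaving principle, the amplitudes s of the enslaved modes follow the order parameter adiabatically, s(t) ≈ h(ξ(t)).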
In order to motivate the reader to familiarize himself/herself with the detailed
mathematical approach that has led to these general insights and numerous explicit
applications, I will list some examples that are, of course, not exhaustive. Many
further examples as well as mathematical methods can be found in the volumes
of the Springer Series in Synergetics. I want to emphasize that a number of these
volumes add new important aspects to the just mentioned "hard core" and thus reflect
the central intention of the Springer Series in Synergetics, namely to provide the
science community with a forum for an in-depth discussion on the ways we can
deal with complex systems. Clearly over the past years quite a number of books and
numerous papers have been published outside the Springer Series in Synergetics,
but surely these have been in the spirit of the "Synergetics Enterprise". But now,
let me list some examples to which I give some typical references both outside and
inside the Springer Series. The latter references are indicated by "SSyn" and can
be found at the end of this book. The reader will understand that in view of the
vast field of applications of synergetics my list can contain only a small selection of
references.

Physics:
Spatio-temporal patterns (Staliunas, Sanchez-Morcillo, Gaul (2003)), (Rosanov
(SSyn 2002)) and stochastic properties of laser light, of fluids (Xu (SSyn 1997))
and plasmas, of currents in semiconductors (Scholl (2001)) and of active lattices
(Nekorkin and Velarde (SSyn 2002)).

Chemistry:
Formation of spatio-temporal patterns at macroscopic scales in chemical reactions,
for example the Belousov-Zhabotinsky reaction (Babloyantz (1986), Nicolis (1995),
Mikhailov (SSyn 1994), Mikhailov and Loskutov (SSyn 1996)), also the chemistry
of flames.

Computer science:
Synergetic computer for pattern recognition and decision making (Haken (SSyn
2004)). This type of computer represents a genuine alternative to the by now tradi-
tional neuro-computers, e.g. the Hopfield net.

Traffic science:
This field has become a truly interdisciplinary enterprise. Here typical synergetic
phenomena can be discovered such as phase transitions in traffic flows (Helbing
(1997)).

Biology:
Morphogenesis:
Based on Turing's ideas, synergetics calculates spatial density distributions of
molecules, in particular gradients, stripes, hexagons, etc. as a function of bound-
ary and initial conditions. In initially undifferentiated omnipotent cells molecules
are produced as activators or inhibitors that diffuse between cells and react with
each other and thus can be transformed. At places of high concentration the activator
molecules switch on genes that, eventually, lead to cell differentiation (Haken (this
book), Murray (2002)).

Evolution:
By means of synergetics new kinds of analogies between biological and physical
systems have been unearthed. For instance, equations established by Eigen (Eigen
and Schuster (1979)) for prebiotic, i.e. molecular evolution, turn out to be isomorphic
to specific rate equations for laser light (photons), where a specific kind of photon
wins the competition between different kinds of photons.
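The competition just described can be made concrete numerically. The following is a minimal, illustrative sketch (the function name, coefficients and integration scheme are my own assumptions, not Eigen's or the book's) of selection equations of the type dn_i/dt = n_i (g_i − s Σ_j n_j), in which the kind with the largest gain wins:

```python
# Illustrative sketch of winner-take-all selection equations of the kind
# discussed above: dn_i/dt = n_i * (g_i - saturation * sum_j n_j).
# All parameter values are assumptions chosen for demonstration.

def compete(gains, saturation=1.0, n0=0.01, dt=0.001, steps=200_000):
    """Euler integration of the competition equations; returns final densities."""
    n = [n0] * len(gains)
    for _ in range(steps):
        total = sum(n)
        n = [x + dt * x * (g - saturation * total) for x, g in zip(n, gains)]
    return n

# The species (or photon kind) with the largest gain survives, at the
# density g_max / saturation; all competitors die out.
final = compete([1.0, 1.5, 2.0])
```

Here the third kind, with the largest gain, ends up at density 2.0 while the other two vanish; this is the isomorphism between molecular selection and laser mode competition in miniature.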

Population dynamics:
Resources, such as food, nesting places for birds, light intensity for plants, etc.
serve as control parameters. The numbers or densities of the individuals of species
serve as order parameters. Specific examples are provided by the Verhulst equation
or the predator-prey relation of the Lotka-Volterra equations. Of particular interest
are dramatic changes, for instance the dying out of species under specific control
parameter values. This has influences on environmental policy. If specific control
parameters exceed critical values, the system's behavior can change dramatically.
For instance, beyond a specific degree of pollution, the fish population of a lake will
die out.
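As a numerical illustration of the Verhulst dynamics just mentioned (the parameter names growth, mortality and saturation are illustrative assumptions, not taken from the book), consider dn/dt = (growth − mortality) n − saturation n²; the population survives only as long as the mortality control parameter stays below the critical value set by the growth rate:

```python
# Hedged sketch of Verhulst (logistic) dynamics with a control parameter,
# illustrating the "dying out" transition described above. Parameter names
# and values are illustrative assumptions.

def verhulst(n0, growth, mortality, saturation, dt=0.01, steps=20_000):
    """Euler integration of dn/dt = (growth - mortality)*n - saturation*n**2."""
    n = n0
    for _ in range(steps):
        n += dt * ((growth - mortality) * n - saturation * n ** 2)
    return n

# Below the critical mortality (= growth rate) the population settles at
# the finite density (growth - mortality)/saturation; above it, it dies out.
surviving = verhulst(n0=0.1, growth=1.0, mortality=0.4, saturation=1.0)  # ≈ 0.6
extinct = verhulst(n0=0.1, growth=1.0, mortality=1.4, saturation=1.0)    # ≈ 0.0
```

The qualitative change at the critical control parameter value, rather than the precise numbers, is the synergetic point of the example.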

Rhythms:
Nearly all biological systems show more or less regular periodic oscillations or fluc-
tuations. These can be imposed on the system from the outside, for instance by the
day/night cycle (circadian rhythms), seasons, etc. (exogenous), or can be produced
by the system itself (endogenous). In the foreground of synergetics research are
endogenous rhythms that may proceed on quite different spatial and temporal scales
(Haken and Koepchen (SSyn 1992), Mikhailov and Calenbuhr (SSyn 2002)). Exam-
ples are cell metabolism, the circadian rhythms, brain waves in different frequency
bands (see below), cycles of menstruation, and rhythms in the cardiovascular system
(Stefanovska (2002)).

Movement science:
Rhythmical movements of humans and animals show well defined patterns of
coordination of the limbs, for instance walking, running of humans or gaits of
quadrupeds. Synergetics studies in particular transitions between movement pat-
terns, for instance the paradigmatic experiment by Kelso (Kelso (1995)). If subjects
move their index fingers in parallel at a low movement frequency, an abrupt,
involuntary transition to a new, symmetric movement occurs as the frequency is
increased. The control parameter is the prescribed finger movement frequency,
the order parameter is the relative phase between the index fingers (Haken (SSyn
1996)). The experimentally proven properties of a nonequilibrium phase transition
(critical fluctuations, critical slowing down, hysteresis) substantiate the concept of
self-organization and exclude that of a fixed motor program. Numerous further
coordination experiments between different limbs can be represented by the Haken-
Kelso-Bunz model (Kelso (1995), Haken (SSyn 1996)). Gaits of quadrupeds and
transitions between them were modelled in detail (Schoner et al. (1990)), see also
(Haken (SSyn 1996)). These results have led to a paradigm change in movement
science.
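A minimal numerical sketch of the Haken-Kelso-Bunz potential V(φ) = −a cos φ − b cos 2φ makes the transition concrete (the parameter values are illustrative; increasing movement frequency corresponds to a decreasing ratio b/a):

```python
import math

def hkb_potential(phi, a, b):
    """Haken-Kelso-Bunz potential V(phi) = -a*cos(phi) - b*cos(2*phi)."""
    return -a * math.cos(phi) - b * math.cos(2 * phi)

def is_local_min(phi, a, b, eps=1e-4):
    """Crude check: V(phi) lies below its immediate neighbors."""
    v = hkb_potential(phi, a, b)
    return v < hkb_potential(phi - eps, a, b) and v < hkb_potential(phi + eps, a, b)

# Low movement frequency (large b/a): both in-phase (phi = 0) and
# anti-phase (phi = pi) coordination are stable.
assert is_local_min(0.0, a=1.0, b=1.0)
assert is_local_min(math.pi, a=1.0, b=1.0)
# High movement frequency (small b/a): the anti-phase minimum vanishes,
# forcing the abrupt, involuntary transition to in-phase movement.
assert is_local_min(0.0, a=1.0, b=0.1)
assert not is_local_min(math.pi, a=1.0, b=0.1)
```

Once the anti-phase minimum has disappeared, only the in-phase pattern remains stable, which reproduces qualitatively the hysteresis observed in Kelso's experiment.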

Visual perception:
The recognition of patterns, e.g. of faces, is interpreted as the action of an associative
memory in accordance with usual approaches. Here incomplete data (features) with
which the system is provided from the outside are complemented by means of data
stored in the memory. A particular aspect of the synergetic approach is the idea that
pattern recognition can be conceived as pattern formation. This is not only meant as
a metaphor, but means also that specific activity patterns in the brain are established.
In pattern formation a partly ordered pattern is provided to the system, whereby
several order parameters are evoked that compete with each other dynamically. The
control parameters are so-called attention parameters that in cases without bias are
assumed to be equal. The winning order parameter imposes the total pattern on
the system according to the slaving principle. This process is also the basis of the
synergetic computer developed by Haken (Haken (SSyn 1995)).

Gestalt psychology:
As is shown in Gestalt psychology, Gestalt is conceived as an entity to which in
synergetics an order parameter with its synergetic properties (slaving principle!) can
be attached. In principle, the recognition process of Gestalt proceeds according to
the synergetic process of pattern recognition. The winning order parameter gener-
ates, according to the slaving principle, an ideal percept that is the corresponding
Gestalt. In ambiguous patterns an order parameter is attached to each percept of
an object. Because in ambiguous figures two or several possibilities of interpreta-
tions are contained, several order parameters participate in the dynamics, whereby
the attention parameters become dynamical quantities. (For a comprehensive treat-
ment of this and related topics see Kruse and Stadler (SSyn 1995).) As already
assumed by W. Kohler (Kohler (1920, 1955)) and as is shown by the synergetic
equations, the corresponding attention parameter saturates, i.e. it becomes zero,
when the corresponding object has been recognized and the other interpretation
now becomes possible, where again the corresponding saturation process starts,
etc. Our model equations allow us also to take into account bias (Haken (SSyn
2004)).

Psychology:
According to the concept of synergetics, psychological behavioral patterns are gen-
erated by self-organization of neuronal activities under specific control parameter
conditions and are represented by order parameters. In important special cases, the
order parameter dynamics can be represented as the overdamped motion of a ball in
a mountainous landscape. By means of changes of control parameters, this landscape
is deformed and allows for new equilibrium positions (stable behavioral patterns).
This leads to new approaches to psychotherapy: destabilization of unwanted behav-
ioral patterns by means of new external conditions, new cognitive influences, etc.
and measures that support the self-organization of desired behavioral patterns. This
comprises also the administration of appropriate drugs (e.g. neuroleptics), which in
the sense of synergetics act as control parameters. The insights of synergetics have
been applied in the new field of Psychosynergetics with essential contributions by
Schiepek, Tschacher, Hansch, Ciompi and others (Schiepek (1999), Tschacher et al.
(SSyn), Hansch (2002), Ciompi (1982)).

Brain theory:
Several books of the Springer Series in Synergetics are devoted to this field as well
as to experiments (Krüger (SSyn 1991), Başar (SSyn 1998), Başar et al. (SSyn
1983), Uhl (SSyn 1998), Tass (SSyn 1999), Haken (SSyn 1995, 2002)). Also H.R.
Wilson's fine book (1999) deserves attention in this context.
According to a proposal by Haken, the brain of humans and animals is conceived
as a synergetic, i.e. self-organizing system. This concept is supported by experiments
and models on movement coordination, visual perception, Gestalt psychology and
by EEG and MEG analysis (see below). The human brain with its 10^11 neurons (and
glia cells) is a highly interconnected system with numerous feedback loops. In order
to treat it as a synergetic system, control parameters and order parameters must be
identified. While in synergetic systems of physics, chemistry and partly biology, the
control parameters are fixed from the outside, for instance by the experimenter, in
the brain and in other biological systems the control parameters can be fixed by the
system itself. In modelling them it is assumed, however, that they are practically
time-independent during the self-organization process. Such control parameters can
be, among others, the synaptic strengths between neurons that can be changed
by learning according to Hebb (Hebb (1949)), neurotransmitters such as dopamine,
serotonin and drugs that block the corresponding receptors (for example haloperidol,
caffeine), and hormones (influencing the attention parameters). Furthermore, more or
less permanent external or internal stimuli can act as control parameters.
Within the frame set by the given control parameters, self-organization of neuronal
activity takes place, whereby the activity patterns are connected with the correspond-
ing order parameters by means of circular causality. The order parameters move for
a short time in an attractor landscape, but then the attractor and also the order pa-
rameters disappear (concept of "quasi-attractors"). An example is the disappearance
of a percept in watching ambiguous figures. The origin and disappearance of quasi
attractors and the corresponding order parameters can happen on quite different
time scales so that some of them can act as attractors practically all the time or can
be removed only with difficulty (psychotherapy in the case of behavioral disturbances).
The activity patterns can be stimulated by external stimuli (exogenous activity),
but can also be created spontaneously (endogenous activity) for instance: in dreams,
hallucinations and, of course, in thinking.
Synergetics throws new light on the mind-body problem, for instance the percepts
are conceived as order parameters, whereas the "enslaved" parts of a system are
represented by electro-chemical activities of the individual neurons. Because of
circular causality, the percepts as order parameters and the neural activity condition
each other. Beyond that, the behavior of a system can be described at the level
of order parameters (information compression) or at the level of the activities of
individual parts (large amount of information) (Haken (SSyn 1995)).

Analysis of electroencephalograms (EEG) and magnetoencephalograms (MEG):
The neuronal activity is accompanied by electric and magnetic fields that are mea-
sured by the EEG and MEG, respectively, across the scalp. According to the ideas
of synergetics, at least in situations where the macroscopic behavior changes quali-
tatively, the activity patterns should be connected by few order parameters. Typical
experiments are the above described finger coordination experiments by Kelso and
closely related experiments, for instance the coordination between the movement of
a finger and a sequence of acoustic signals. In a typical MEG experiment parts of
the scalp or the whole scalp are measured by an array of SQUIDS (superconducting
quantum interference devices) that allow the determination of spatio-temporal field
patterns. By means of appropriate procedures, these patterns are decomposed into
fundamental patterns. As the analysis shows, two dominant basic patterns appear,
whose amplitudes are the order parameters. Thus the brain dynamics is governed
by only two order parameters, which implies a pronounced spatio-temporal coher-
ence. If the coordination between finger movement and the acoustic signal changes
dramatically, the dynamics of the order parameters does also (Haken (SSyn: Brain
Dynamics)). In a novel analysis, termed synchronization tomography, of a somewhat
related experiment, Tass et al. (Tass et al. (2003)) were able to determine both the
localization of the field sources and in particular their temporal coherence. Many
future studies can be expected in this field.
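One way such a decomposition into fundamental patterns can be sketched (the actual procedure used in the experiments may well differ) is a singular value decomposition of the sensor-by-time data matrix: the dominant spatial modes play the role of the basic patterns, and their time courses are candidates for the order parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "MEG" data, purely illustrative: two fixed spatial patterns
# over 50 sensors whose amplitudes oscillate in time, plus weak noise.
t = np.linspace(0, 10, 500)
pattern1 = rng.standard_normal(50)
pattern2 = rng.standard_normal(50)
data = (np.outer(pattern1, np.sin(2 * np.pi * t))
        + np.outer(pattern2, 0.5 * np.sin(4 * np.pi * t))
        + 0.01 * rng.standard_normal((50, 500)))

# SVD: columns of U are candidate fundamental spatial patterns, S weights
# their importance, rows of Vt are their amplitude time courses.
U, S, Vt = np.linalg.svd(data, full_matrices=False)

# Two modes dominate: the fraction of variance they carry is close to 1,
# i.e. the dynamics is effectively governed by two order parameters.
dominance = (S[0] ** 2 + S[1] ** 2) / np.sum(S ** 2)
```

In this toy example the dominance fraction is close to one, mirroring the finding that two order parameters suffice to describe the measured field patterns.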

Sociology:
Here we may distinguish between the more psychologically and the more systems
theory oriented schools, where synergetics belongs to the second approach. We can
distinguish between a qualitative and a quantitative synergetics (for the quantitative
approach see Weidlich (2000)). In the qualitative case, a number of sociologically
relevant order parameters are identified. Examples are: the language of a nation:
After his/her birth, the baby is exposed to the corresponding language and learns
it (in the technical terms of synergetics: the baby is enslaved) and then carries on
with this language when having become an adult (circular causality). These language
order parameters may compete, where one wins (for example in the USA the English
language), they may coexist (for example in Switzerland), or they may cooperate (for
instance popular language and technical language). Whereas in this case the action of
the slaving principle is evident, in the following examples its applicability is critically
discussed by sociologists so that instead of slaving some of them like to speak of
binding or consensualization. Corresponding order parameters are: type of state
(e.g. democracy, dictatorship), public law, rituals, corporate identity, social climate
in a company, and ethics. The latter example is particularly interesting, because order
parameters are not prescribed from the outside or ab initio, but originate through
self-organization and need not be uniquely determined.

Epistemology:
An example for order parameters (though not articulated that way) is provided by the
scientific paradigms in the sense of Thomas S. Kuhn (Kuhn (1996)), where a change
of paradigm has the properties of a nonequilibrium phase transition, such as critical
fluctuations and critical slowing down. Synergetics as a new scientific paradigm is
evidently self-referential. It explains its own origin.

Management:
The concept of self-organization is increasingly used in management theory and
management praxis (e.g. Ulrich and Probst (SSyn 1984)). Instead of fixed order
structures with many hierarchical levels, now flat organisational structures with only
few hierarchical levels are introduced. In the latter case a hierarchical level makes
its decisions by means of its distributed intelligence. For an indirect steering of
these levels by means of a higher level, specific control parameters in the sense of
synergetics must be fixed, for instance by fixing special conditions, goals, etc. The
order parameters are, for instance, the self-organized collective labor processes. In
this context, the slaving principle according to which the order parameters change
slowly, whereas the enslaved parts react quickly (adaptability), gains a new interpre-
tation. For instance, the employees that are employed for a longer time determine
the climate of labor, the style of work, etc., whereby it can also be possible that
undesired cliques are established. This trend can be counteracted by job rotation.

Development of cities:
Whereas so far the development of cities was based on the concept of city planning
with detailed plans for areas, new approaches use concepts of self-organization
according to synergetic principles. Instead of a detailed plan, now specific control
parameters are fixed, such as a general infrastructure (streets, communication centers,
and so on). For details consider the book by J. Portugali (Portugali (SSyn 1999)).
This list may suffice to provide the reader with some feeling for the range of
applications of synergetics, which continues to expand, as is witnessed for instance
by the book on nonlinear dynamics of the lithosphere and earthquake prediction
edited by Keilis-Borok and Soloviev (SSyn 2002). I am sure that the reader will
also note that the mathematical methods have still wider applications going beyond
the phenomena of self-organization and may prove useful when tackling a variety
of problems. For instance, the ubiquitous stochastic processes in classical and/or
quantum physics are treated by a number of monographs and texts (Gardiner (SSyn
1983), Horsthemke and Lefever (SSyn 1983), Risken (SSyn 1984) and reprintings,
Gardiner and Zoller (SSyn 1991), Grasman, van Herwaarden (SSyn 1999), Ani-
shchenko et al. (SSyn 2002)). Just to mention a further example: the role of chaos
in quantum physics is treated in the comprehensive book of Haake (SSyn 2001).
I wish to thank Prof. W. Beiglbock and Dr. C. Caron and their coworkers of
Springer-Verlag for their excellent cooperation.

Stuttgart, December 2003 Hermann Haken

References
1. Staliunas, K., Sanchez-Morcillo, V., Gaul, L.J.: Transverse Patterns in Nonlinear Optical
Resonators, Berlin: Springer (2003)
2. Scholl, E.: Nonlinear Spatio-Temporal Dynamics and Chaos in Semiconductors. Cam-
bridge: Cambridge University Press (2001)
3. Babloyantz, A.: Molecules, Dynamics, and Life: An Introduction to Self-Organization
of Matter. Indianapolis: Wiley (1986)
4. Nicolis, G.: Introduction to Nonlinear Science. Cambridge University Press (1995)
5. Helbing, D.: Verkehrsdynamik. Neue physikalische Modellierungskonzepte. Berlin:
Springer (1997)
6. Murray, J.D.: Mathematical Biology. Berlin: Springer (2002)
7. Eigen, M., Schuster, P.: The Hypercycle - A Principle of Natural Self-Organization.
Berlin: Springer (1979)
8. Stefanovska, A.: Cardiorespiratory Interactions. Nonlinear Phenom. Complex Syst. 5,
462-469 (2002)
9. Kelso, J.A.S.: Dynamic Patterns: The Self-Organization of Brain and Behavior. Cam-
bridge, MA: MIT Press (1995)
10. Schoner, G., Jiang, W.Y., Kelso, J.A.S.: A Synergetic Theory of Quadrupedal Gaits and
Gait Transitions. J. Theor. Biol. 142, 359-391 (1990)
11. Kohler, W.: Die physischen Gestalten in Ruhe und im stationären Zustand. Braun-
schweig: Vieweg (1920)

12. Kohler, W.: Direction of Processes in Living Systems. Scientific Monthly 8, 29-32
(1955)
13. Schiepek, G.: Die Grundlagen der Systemischen Therapie; Theorie-Praxis-Forschung.
Gottingen: Vandenhoeck und Ruprecht (1999)
14. Hansch, D.: Evolution und Lebenskunst. Grundlagen der Psychosomatik. Ein Selbst-
management-Lehrbuch. Gottingen: Vandenhoeck und Ruprecht (2002)
15. Ciompi, L.: Affektlogik. Über die Struktur der Psyche und ihre Entwicklung. Ein Beitrag
zur Schizophrenieforschung. Stuttgart: Klett-Cotta (1982)
16. Tass, P.A. et al.: Synchronization Tomography: A Method for Three-Dimensional Lo-
calization of Phase Synchronized Neuronal Populations in the Human Brain using
Magnetoencephalography. Phys. Rev. Letters 90, 88101-1-88101-4 (2003)
17. Wilson, H.R.: Spikes, Decisions and Actions. Dynamical Foundations of Neuroscience.
Oxford: Oxford University Press (1999)
18. Hebb, D.O.: The Organization of Behavior: A Neuropsychological Theory. New York:
Wiley (1949)
19. Weidlich, W.: Sociodynamics. A Systematic Approach to Mathematical Modelling in
the Social Sciences. Amsterdam: Harwood Academic Publishers (2000)
20. Kuhn, T.S.: The Structure of Scientific Revolutions. University of Chicago Press, 3rd.
ed. (1996)
Contents

Part I An Introduction

Nonequilibrium Phase Transitions and Self-Organization


in Physics, Chemistry and Biology ................................

Part II Advanced Topics

Instability Hierarchies of Self-Organizing Systems and Devices . . . . . .. 389

Part III Springer Series in Synergetics

List of All Published Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 759


Part I

An Introduction

Nonequilibrium Phase Transitions and Self-Organization


in Physics, Chemistry and Biology
To the Memory of

Maria and Anton Vollath


Preface to the Third Edition

Over the past years the field of synergetics has been mushrooming. An ever-
increasing number of scientific papers are published on the subject, and numerous
conferences all over the world are devoted to it. Depending on the particular
aspects of synergetics being treated, these conferences can have such varied titles
as "Nonequilibrium Nonlinear Statistical Physics," "Self-Organization," "Chaos
and Order," and others. Many professors and students have expressed the view
that the present book provides a good introduction to this new field. This is also
reflected by the fact that it has been translated into Russian, Japanese, Chinese,
German, and other languages, and that the second edition has also sold out. I
am taking the third edition as an opportunity to cover some important recent
developments and to make the book still more readable.
First, I have largely revised the section on self-organization in continuously
extended media and entirely rewritten the section on the Benard instability. Sec-
ond, because the methods of synergetics are penetrating such fields as eco-
nomics, I have included an economic model on the transition from full employ-
ment to underemployment in which I use the concept of nonequilibrium phase
transitions developed elsewhere in the book. Third, because a great many papers
are currently devoted to the fascinating problem of chaotic motion, I have added
a section on discrete maps. These maps are widely used in such problems, and can
reveal period-doubling bifurcations, intermittency, and chaos.
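As an illustrative sketch (the logistic map is the standard textbook example of such a discrete map; the examples treated in the book may differ), iterating x → r x (1 − x) for increasing r already exhibits the period-doubling route:

```python
# Illustrative sketch: the logistic map x -> r*x*(1 - x), a standard
# discrete map showing the period-doubling route to chaos.

def logistic_orbit(r, x0=0.2, transient=1000, n=8):
    """Iterate the map, discard a transient, return n subsequent values."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(round(x, 6))
    return orbit

# Distinct values visited after the transient:
# r = 2.8 -> one value (stable fixed point)
# r = 3.2 -> two values (period-doubled cycle)
# r = 4.0 -> many values (chaotic regime)
period_1 = len(set(logistic_orbit(2.8)))  # 1
period_2 = len(set(logistic_orbit(3.2)))  # 2
```

Here r acts as the control parameter: as it is raised through successive critical values the attractor doubles its period and finally becomes chaotic.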
Instability hierarchies, which are dealt with in this book in the context of the
laser, have proved since its writing to be a widespread phenomenon in systems
driven far from thermal equilibrium. Because a general treatment of this problem
would have gone beyond the scope and level of the present monograph, however,
I have written a further book entitled Advanced Synergetics *, which can be
considered a continuation of the present text.
I wish to thank my co-worker Dr. A. Wunderlin for his assistance in revising
this book, and especially for his efforts in the Benard problem. I am also greatly
indebted to my secretary, Mrs. U. Funke, for her efficient help in reworking
several chapters.

Stuttgart, January 1983 Hermann Haken

* Springer Ser. Synergetics, Vol. 20 (Springer, Berlin, Heidelberg, New York, Tokyo 1983)
Preface to the Second Edition

The publication of this second edition was motivated by several facts. First of all,
the first edition had been sold out in less than one year. It had found excellent
critics and enthusiastic responses from professors and students welcoming this
new interdisciplinary approach. This appreciation is reflected by the fact that the
book is presently being translated into Russian and Japanese as well.
I have used this opportunity to include some of the most interesting recent
developments. Therefore I have added a whole new chapter on the fascinating
and rapidly growing field of chaos dealing with irregular motion caused by
deterministic forces. This kind of phenomenon is presently found in quite diverse
fields ranging from physics to biology. Furthermore I have included a section on
the analytical treatment of a morphogenetic model using the order parameter
concept developed in this book. Among the further additions, there is now a com-
plete description of the onset of ultrashort laser pulses. It goes without saying that
the few minor misprints or errors of the first edition have been corrected.
I wish to thank all who have helped me to incorporate these additions.

Stuttgart, July 1978 Hermann Haken

Preface to the First Edition


The spontaneous formation of well organized structures out of germs or even out
of chaos is one of the most fascinating phenomena and most challenging prob-
lems scientists are confronted with. Such phenomena are an experience of our
daily life when we observe the growth of plants and animals. Thinking of much
larger time scales, scientists are led into the problems of evolution, and, ultimate-
ly, of the origin of living matter. When we try to explain or understand in some
sense these extremely complex biological phenomena, it is a natural question
whether processes of self-organization may be found in much simpler systems of
the unanimated world.
In recent years it has become more and more evident that there exist numerous
examples in physical and chemical systems where well organized spatial, tem-
poral, or spatio-temporal structures arise out of chaotic states. Furthermore, as
in living organisms, the functioning of these systems can be maintained only by
a flux of energy (and matter) through them. In contrast to man-made machines,
which are devised to exhibit special structures and functionings, these structures
develop spontaneously; they are self-organizing. It came as a surprise to many

scientists that numerous such systems show striking similarities in their behavior
when passing from the disordered to the ordered state. This strongly indicates
that the functioning of such systems obeys the same basic principles. In our book
we wish to explain such basic principles and underlying conceptions and to
present the mathematical tools to cope with them.
This book is meant as a text for students of physics, chemistry and biology
who want to learn about these principles and methods. I have tried to present
mathematics in an elementary fashion wherever possible. Therefore the knowledge
of an undergraduate course in calculus should be sufficient. A good deal of
important mathematical results is nowadays buried under a complicated nomenclature.
I have avoided it as far as possible, though, of course, a certain number
of technical expressions must be used. I explain them wherever they are introduced.
Incidentally, a good many of the methods can also be used for other problems,
not only for self-organizing systems. To achieve a self-contained text I included
some chapters which require some more patience or a more profound mathematical
background of the reader. Those chapters are marked by an asterisk. Some of them
contain very recent results so that they may also be profitable for research workers.
The basic knowledge required for the physical, chemical and biological
systems is, on the average, not very special. The corresponding chapters are
arranged in such a way that a student of one of these disciplines need only read
"his" chapter. Nevertheless it is highly recommended to browse through the other
chapters just to get a feeling of how analogous all these systems are to one
another. I have called this discipline "synergetics". What we investigate is the joint
action of many subsystems (mostly of the same or of few different kinds) so as to
produce structure and functioning on a macroscopic scale. On the other hand,
many different disciplines cooperate here to find general principles governing
self-organizing systems.
I wish to thank Dr. Lotsch of Springer-Verlag who suggested writing an
extended version of my article "Cooperative phenomena in systems far from
thermal equilibrium and in nonphysical systems", in Rev. Mod. Phys. (1975). In
the course of writing the "extension", eventually a completely new manuscript
evolved. I wanted to make this field especially understandable to students of
physics, chemistry and biology. In a way, this book and my previous article have
become complementary.
It is a pleasure to thank my colleagues and friends, especially Prof.
W. Weidlich, for many fruitful discussions over the years. The assistance of my
secretary, Mrs. U. Funke, and of my coworker Dr. A. Wunderlin was an enor-
mous help for me in writing this book and I wish to express my deep gratitude to
them. Dr. Wunderlin checked the formulas very carefully, recalculating many of
them, prepared many of the figures, and made valuable suggestions on how to
improve the manuscript. In spite of her extended administrative work, Mrs. U.
Funke has drawn most of the figures and written several versions of the manuscript,
including the formulas, in a perfect way. Her willingness and tireless efforts
encouraged me again and again to complete this book.

Stuttgart, November 1976 Hermann Haken

Contents

1. Goal
1.1 Order and Disorder: Some Typical Phenomena 1
1.2 Some Typical Problems and Difficulties . 12
1.3 How We Shall Proceed . . . . . . . . . . . 15

2. Probability
2.1 Object of Our Investigations: The Sample Space 17
2.2 Random Variables 19
2.3 Probability.......... 20
2.4 Distribution . . . . . . . . . 21
2.5 Random Variables with Densities 24
2.6 Joint Probability. . . . . . . . 26
2.7 Mathematical Expectation E (X), and Moments 28
2.8 Conditional Probabilities . . . . . . . . . . 29
2.9 Independent and Dependent Random Variables 30
2.10 * Generating Functions and Characteristic Functions. 31
2.11 A Special Probability Distribution: Binomial Distribution 33
2.12 The Poisson Distribution . . . . . . . . . . . 36
2.13 The Normal Distribution (Gaussian Distribution) 37
2.14 Stirling's Formula . . 39
2.15 * Central Limit Theorem . . . . . . . . . . . . 39

3. Information
3.1 Some Basic Ideas 41
3.2 * Information Gain: An Illustrative Derivation 46
3.3 Information Entropy and Constraints 48
3.4 An Example from Physics: Thermodynamics 53
3.5* An Approach to Irreversible Thermodynamics 57
3.6 Entropy - Curse of Statistical Mechanics? 66

4. Chance
4.1 A Model of Brownian Movement . . . . . . . . . . . . 69
4.2 The Random Walk Model and Its Master Equation 75
4.3 * Joint Probability and Paths. Markov Processes. The Chapman-
Kolmogorov Equation. Path Integrals . . . . . . . . . 79

* Sections with an asterisk in the heading may be omitted during a first reading.

4.4 * How to Use Joint Probabilities. Moments. Characteristic


Function. Gaussian Processes . . . . . . . . . . . . 85
4.5 The Master Equation . . . . . . . . . . . . . . . 88
4.6 Exact Stationary Solution of the Master Equation for Systems
in Detailed Balance . . . . . . . . . . . . . . . . . . 89
4.7 * The Master Equation with Detailed Balance. Symmetrization,
Eigenvalues and Eigenstates. . . . . . . . . . . . . 92
4.8 * Kirchhoff's Method of Solution of the Master Equation 95
4.9 * Theorems about Solutions of the Master Equation . 97
4.10 The Meaning of Random Processes, Stationary State,
Fluctuations, Recurrence Time . . . . . . . . . 98
4.11 * Master Equation and Limitations of Irreversible Thermo-
dynamics . . . . . . . . . . . . . . . . . . . . . 102

5. Necessity
5.1 Dynamic Processes. . . . . . . . . . . . . . . . . . . 105
5.2 * Critical Points and Trajectories in a Phase Plane. Once Again
Limit Cycles . . . . . . . . . . . . . . . . . 113
5.3 * Stability . . . . . . . . . . . . . . . . . . . 120
5.4 Examples and Exercises on Bifurcation and Stability 126
5.5 * Classification of Static Instabilities, or an Elementary
Approach to Thom's Theory of Catastrophes 133

6. Chance and Necessity


6.1 Langevin Equations: An Example 147
6.2* Reservoirs and Random Forces . 152
6.3 The Fokker-Planck Equation . . 158
6.4 Some Properties and Stationary Solutions of the Fokker-
Planck Equation . . . . . . . . . . . . . . . . . . 165
6.5 Time-Dependent Solutions of the Fokker-Planck Equation 172
6.6* Solution of the Fokker-Planck Equation by Path Integrals. 176
6.7 Phase Transition Analogy. . . . . . . . . . . . . 179
6.8 Phase Transition Analogy in Continuous Media: Space-
Dependent Order Parameter. . . . . . . . . . . . 186

7. Self-Organization
7.1 Organization................. 191
7.2 Self-Organization . . . . . . . . . . . . . . . 194
7.3 The Role of Fluctuations: Reliability or Adaptability?
Switching. . . . . . . . . . . . . . . . . . . 200
7.4 * Adiabatic Elimination of Fast Relaxing Variables from the
Fokker-Planck Equation . . . . . . . . . . . . . . . 202
7.5 * Adiabatic Elimination of Fast Relaxing Variables from the
Master Equation . . . . . . . . . . . . . . . . 204
7.6 Self-Organization in Continuously Extended Media. An
Outline of the Mathematical Approach . . . . . . . 205


7.7* Generalized Ginzburg-Landau Equations for Nonequilibrium


Phase Transitions . . . . . . . . . . . . . . . . . . . 206
7.8 * Higher-Order Contributions to Generalized Ginzburg-Landau
Equations . . . . . . . . . . . . . . . . . . . . . 216
7.9 * Scaling Theory of Continuously Extended Nonequilibrium
Systems . . . . . . 219
7.10* Soft-Mode Instability 222
7.11 * Hard-Mode Instability 226

8. Physical Systems
8.1 Cooperative Effects in the Laser: Self-Organization and Phase
Transition . . . . . . . . . . . . . 229
8.2 The Laser Equations in the Mode Picture 230
8.3 The Order Parameter Concept. 231
8.4 The Single-Mode Laser. . . . . . . . 232
8.5 The Multimode Laser . . . . . . . . 235
8.6 Laser with Continuously Many Modes. Analogy with Super-
conductivity . . . . . . . . . . . . . . . . . . . . 237
8.7 First-Order Phase Transitions of the Single-Mode Laser . . 240
8.8 Hierarchy of Laser Instabilities and Ultrashort Laser Pulses 243
8.9 Instabilities in Fluid Dynamics: The Benard and Taylor
Problems . . . . . . . . . . . . 249
8.10 The Basic Equations . . . . . . . . . 250
8.11 The Introduction of New Variables. . . 252
8.12 Damped and Neutral Solutions (R ≤ Rc) 254
8.13 Solution Near R = Rc (Nonlinear Domain). Effective Langevin
Equations . . . . . . . . . . . . . . . . . . . . . . 258
8.14 The Fokker-Planck Equation and Its Stationary Solution . . 262
8.15 A Model for the Statistical Dynamics of the Gunn Instability
Near Threshold . . . . . . . . . . . . . 266
8.16 Elastic Stability: Outline of Some Basic Ideas 270

9. Chemical and Biochemical Systems


9.1 Chemical and Biochemical Reactions . . . . . . . . . 275
9.2 Deterministic Processes, Without Diffusion, One Variable 275
9.3 Reaction and Diffusion Equations . . . . . . . . . . 280
9.4 Reaction-Diffusion Model with Two or Three Variables:
The Brusselator and the Oregonator . . . . . . . . . 282
9.5 Stochastic Model for a Chemical Reaction Without Diffusion.
Birth and Death Processes. One Variable . . . . . . . . 289
9.6 Stochastic Model for a Chemical Reaction with Diffusion.
One Variable . . . . . . . . . . . . . . . . . . . 294
9.7* Stochastic Treatment of the Brusselator Close to Its Soft-
Mode Instability. . 298
9.8 Chemical Networks . . . . . . . . . . . . . . . . 302


10. Applications to Biology


10.1 Ecology, Population-Dynamics . . . . . . . . . . . . 305
10.2 Stochastic Models for a Predator-Prey System . . . . . 309
10.3 A Simple Mathematical Model for Evolutionary Processes 310
10.4 A Model for Morphogenesis . . . . . . . . 311
10.5 Order Parameters and Morphogenesis . . . . 314
10.6 Some Comments on Models of Morphogenesis 325

11. Sociology and Economics


11.1 A Stochastic Model for the Formation of Public Opinion 327
11.2 Phase Transitions in Economics . . . . . . . . . . . 329

12. Chaos
12.1 What is Chaos? 333
12.2 The Lorenz Model. Motivation and Realization 334
12.3 How Chaos Occurs . . . . . . . . . . . . . 336
12.4 Chaos and the Failure of the Slaving Principle . 341
12.5 Correlation Function and Frequency Distribution . 343
12.6 Discrete Maps, Period Doubling, Chaos, Intermittency 345

13. Some Historical Remarks and Outlook . 351

References, Further Reading, and Comments 355

Subject Index 371

1. Goal
Why You Might Read This Book

1.1 Order and Disorder: Some Typical Phenomena


Let us begin with some typical observations of our daily life. When we bring a cold
body in contact with a hot body, heat is exchanged so that eventually both bodies
acquire the same temperature (Fig. 1.1). The system has become completely homo-
geneous, at least macroscopically. The reverse process, however, is never observed
in nature. Thus there is a unique direction into which this process goes.

cold

Fig. 1.1. Irreversible heat exchange

Fig. 1.2. Irreversible expansion of a gas

Fig. 1.3. Drop of ink spreading in water

When we have a vessel filled with gas atoms and remove the piston the gas will
fill the whole space (Fig. 1.2). The opposite process does not occur. The gas by itself
does not concentrate again in just half the volume of the vessel. When we put a
drop of ink into water, the ink spreads out more and more until finally a homo-
geneous distribution is reached (Fig. 1.3). The reverse process was never observed.
When an airplane writes words in the sky with smoke, the letters become more and

more diffuse and disappear (Fig. 1.4). In all these cases the systems develop to a
unique final state, called a state of thermal equilibrium. The original structures
disappear, being replaced by homogeneous systems. When analysing these phenomena
on the microscopic level, considering the motion of atoms or molecules, one
finds that disorder is increased.
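The microscopic picture behind these examples, the random motion of very many molecules, can be made concrete with a toy computation. The sketch below is only an illustration with arbitrary parameters (random-walk models are developed properly in Chap. 4): each "ink molecule" performs an unbiased random walk, and we measure how far the drop has spread.

```python
import random

def spread(n_walkers=2000, n_steps=400, seed=1):
    """Each 'molecule' performs an unbiased random walk on a line,
    all starting at the origin (the initial ink drop).  Returns the
    mean square displacement after n_steps."""
    rng = random.Random(seed)
    positions = [0] * n_walkers
    for _ in range(n_steps):
        # every molecule hops one unit left or right at random
        positions = [x + rng.choice((-1, 1)) for x in positions]
    return sum(x * x for x in positions) / n_walkers
```

The mean square displacement grows linearly with the number of steps, so the drop's radius grows only as the square root of time, while the initial concentration at the origin is irreversibly smeared out.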

Fig. 1.4. Diffusion of clouds

Let us conclude these examples with one for the degradation of energy. Consider
a moving car whose engine has stopped. At first the car goes on moving. From a
physicist's point of view it has a single degree of freedom (motion in one direction)
with a certain kinetic energy. This kinetic energy is eaten up by friction, converting
that energy into heat (warming up the wheels etc.). Since heat means thermal
motion of many particles, the energy of a single degree of freedom (motion of the
car) has been distributed over many degrees of freedom. On the other hand, quite
obviously, by merely heating up the wheels we cannot make a vehicle go.
In the realm of thermodynamics, these phenomena have found their proper
description. There exists a quantity called entropy which is a measure for the
degree of disorder. The (phenomenologically derived) laws of thermodynamics
state that in a closed system (i.e., a system with no contacts to the outer world) the
entropy ever increases to its maximal value.

Fig. 1.5. Water in its different phases: gas molecules in a box, droplet, crystalline lattice

On the other hand when we manipulate a system from the outside we can change
its degree of order. Consider for example water vapor (Fig. 1.5). At elevated tem-
perature its molecules move freely without mutual correlation. When temperature
is lowered, a liquid drop is formed, the molecules now keep a mean distance
between each other. Their motion is thus highly correlated. Finally, at still lower


temperature, at the freezing point, water is transformed into ice crystals. The
molecules are now well arranged in a fixed order. The transitions between the
different aggregate states, also called phases, are quite abrupt. Though the same
kind of molecules are involved all the time, the macroscopic features of the three
phases differ drastically. Quite obviously, their mechanical, optical, electrical,
thermal properties differ wildly.
Another type of ordering occurs in ferromagnets (e.g., the magnetic needle of a
compass). When a ferromagnet is heated, it suddenly loses its magnetization. When
temperature is lowered, the magnet suddenly regains its magnetization (cf. Fig. 1.6).
What happens on a microscopic, atomic level, is this: We may visualize the magnet
as being composed of many, elementary (atomic) magnets (called spins). At elevated
temperature, the elementary magnets point in random directions (Fig. 1.7). Their

Fig. 1.6. Magnetization of a ferromagnet as a function of temperature (After C. Kittel: Introduction to Solid State Physics. Wiley, New York 1956)

Fig. 1.7. Elementary magnets pointing in random directions (T > Tc) (lhs); aligned elementary magnets (T < Tc) (rhs)

magnetic moments, when added up, cancel each other and no macroscopic mag-
netization results. Below a critical temperature, Tc, the elementary magnets are
lined up, giving rise to a macroscopic magnetization. Thus the order on the micro-
scopic level is a cause of a new feature of the material on the macroscopic level. The
change of one phase to the other one is called phase transition. A similarly dramatic
phase transition is observed in superconductors. In certain metals and alloys the
electrical resistance completely and abruptly vanishes below a certain temperature
(Fig. 1.8). This phenomenon is caused by a certain ordering of the metal electrons.
There are numerous further examples of such phase transitions which often show
striking similarities.
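The magnetization curve of Fig. 1.6 can be mimicked by the simplest mean-field (Weiss) model of a ferromagnet. This model is not derived in this chapter, so the following is only an illustrative sketch in reduced units, with temperature measured in units of the critical temperature Tc.

```python
import math

def magnetization(t, iterations=200):
    """Reduced magnetization m solving the mean-field equation
    m = tanh(m / t), with t = T / Tc, by fixed-point iteration."""
    m = 1.0  # start from full alignment of the elementary magnets
    for _ in range(iterations):
        m = math.tanh(m / t)
    return m

for t in (0.2, 0.8, 1.5):
    print(f"T/Tc = {t}: m = {magnetization(t):.3f}")
```

Below Tc the iteration settles on a nonzero magnetization; above Tc it decays to zero, reproducing the abrupt onset of order at the critical temperature.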

Fig. 1.8. Resistance of a superconductor as a function of temperature (schematic)

While this is a very interesting field of research, it does not give us a clue for the
explanation of any biological processes. Here order and proper functioning are not
achieved by lowering the temperature, but by maintaining a flux of energy and
matter through the system. What happens, among many other things, is this:
The energy fed into the system in the form of chemical energy, whose processing
involves many microscopic steps, eventually results in ordered phenomena on a
macroscopic scale: formation of macroscopic patterns (morphogenesis), locomotion
(i.e., few degrees of freedom!) etc.
In view of the physical phenomena and thermodynamic laws we have men-
tioned above, the possibility of explaining biological phenomena, especially the
creation of order on a macroscopic level out of chaos, seems to look rather hope-
less. This has led prominent scientists to believe that such an explanation is impos-
sible. However, let us not be discouraged by the opinion of some authorities. Let us
rather reexamine the problem from a different point of view. The example of the
car teaches us that it is possible to concentrate energy from many degrees of freedom
into a single degree of freedom. Indeed, in the car's engine the chemical energy of
gasoline is first essentially transformed into heat. In the cylinder the piston is then
pushed into a single prescribed direction whereby the transformation of energy of
many degrees of freedom into a single degree of freedom is accomplished. Two facts
are important to recall here:
1) The whole process becomes possible through a man-made machine. In it we
have established well-defined constraints.
2) We start from a situation far from thermal equilibrium. Indeed pushing the
piston corresponds to an approach to thermal equilibrium under the given
constraints.
The immediate objection against this machine as a model for biological systems
lies in the fact that biological systems are self-organized, not man-made. This leads
us to the question whether we can find systems in nature which operate far from thermal
equilibrium (see 2 above) and which act under natural constraints. Some systems of
this kind have been discovered quite recently, others have been known for some
while. We describe a few typical examples:
A system lying on the border line between a natural system and a man-made
device is the laser. We treat here the laser as a device, but laser action (in the micro-
wave region) has been found to take place in interstellar space. We consider the


Fig. 1.9. Typical setup of a laser
Fig. 1.10. Photons emitted in axial direction (a) have a much longer lifetime t₀ in the "cavity" than all other photons (b)

solid state laser as an example. It consists of a rod of material in which specific
atoms are embedded (Fig. 1.9). Usually mirrors are fixed at the end faces of the rod.
Each atom may be excited from the outside, e.g., by shining light on it. The atom
then acts as a microscopic antenna and emits a light-wavetrack. This emission
process lasts typically 10⁻⁸ second and the wave track has a length of 3 meters. The
mirrors serve for a selection of these tracks: Those running in the axial direction
are reflected several times between the mirrors and stay longer in the laser, while
all other tracks leave it very quickly (Fig. 1.10). When we start pumping energy into
the laser, the following happens: At small pump power the laser operates as a
lamp. The atomic antennas emit light-wavetracks independently of each other
(i.e., randomly). At a certain pump power, called the laser threshold, a completely new
phenomenon occurs. An unknown demon seems to let the atomic antennas oscillate
in phase. They emit now a single giant wavetrack whose length is, say, 300,000 km!
(Fig. 1.11). The emitted light intensity (i.e., the output power) increases drastically
with further increasing input (pump) power (Fig. 1.12). Evidently the macroscopic
properties of the laser have changed dramatically in a way reminiscent of the phase
transition of, for example, the ferromagnet.
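The threshold behavior of Fig. 1.12 can be sketched with the standard single-mode rate equations for the photon number n and the atomic inversion N. This is only a crude caricature of the laser theory developed in Chapter 8, and all parameter values below are arbitrary illustrative choices.

```python
def steady_photon_number(pump, G=1e-2, kappa=1.0, gamma=0.5,
                         dt=1e-3, steps=120_000):
    """Integrate the single-mode laser rate equations to steady state:
        dn/dt = G*N*n - kappa*n         (stimulated emission vs. losses)
        dN/dt = pump - gamma*N - G*N*n  (inversion: pumping vs. decay)
    A tiny seed photon number stands in for spontaneous emission."""
    n, N = 1e-6, 0.0
    for _ in range(steps):
        dn = G * N * n - kappa * n
        dN = pump - gamma * N - G * N * n
        n += dt * dn
        N += dt * dN
        n = max(n, 1e-6)  # keep the spontaneous seed alive
    return n
```

Below the threshold pump p_th = γκ/G (here 50 in these units) the photon number stays at the seed level; above it, the steady-state output grows linearly with the pump, n ≈ (p − p_th)/κ, which is the kink seen in Fig. 1.12.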

Fig. 1.11. Field strength of the wave tracks emitted (a) from a lamp, (b) from a laser

As we shall see later in this book, this analogy goes far deeper. Obviously the
laser is a system far from thermal equilibrium. As the pump energy is entering the
system, it is converted into laser light with its unique properties. Then this light
leaves the laser. The obvious question is this: What is the demon that tells the sub-


Fig. 1.12. Output power versus input power of a laser below and above its threshold (After M. H. Pilkuhn, unpublished result)

systems (i.e., the atoms) to behave in such a well organized manner? Or, in more
scientific language, what mechanisms and principles are able to explain the self-
organization of the atoms (or atomic antennas)? When the laser is pumped still
higher, again suddenly a completely new phenomenon occurs. The rod regularly
emits light flashes of extremely short duration, say 10⁻¹² second. Let us consider
as a second example fluid dynamics, or more specifically, the flow of a fluid round a
cylinder. At low speed the flow portrait is exhibited in Fig. 1.13(a). At higher speed,

Fig. 1.13. Flow of a fluid round a cylinder for different velocities (After R. P. Feynman, R. B. Leighton, M. Sands: The Feynman Lectures on Physics, Vol. II. Addison-Wesley 1965)


suddenly a new, static pattern appears: a pair of vortices (b). With still higher speed,
a dynamic pattern appears, the vortices are now oscillating (c). Finally at still
higher speed, an irregular pattern called turbulent flow arises (e). While we will
not treat this case in our book, the following instability will be investigated.
The convection instability (Benard instability). We consider a fluid layer heated
from below and kept at a fixed temperature from above (Fig. 1.14). At a small
temperature difference (more precisely, gradient) heat is transported by heat con-
duction and the fluid remains quiescent. When the temperature gradient reaches a
critical value, the fluid starts a macroscopic motion. Since heated parts expand,
these parts move up by buoyancy, cool, and fall back again to the bottom. Amaz-
ingly, this motion is well regulated. Either rolls (Fig. 1.15) or hexagons (Fig. 1.16)
are observed. Thus, out of a completely homogeneous state, a dynamic well-
ordered spatial pattern emerges. When the temperature gradient is further in-
creased, new phenomena occur. The rolls start a wavy motion along their axes.
Further patterns are exhibited in Fig. 1.17. Note that the spokes oscillate tem-
porally. These phenomena play a fundamental role, for example, in meteorology,
determining the motion of air and the formation of clouds (see e.g., Fig. 1.18).

Fig. 1.14. Fluid layer heated from below for small Rayleigh numbers. Heat is transported by conduction
Fig. 1.15. Fluid motion in the form of rolls for Rayleigh numbers somewhat bigger than the critical one

Fig. 1.16. The cell structure of the Benard instability as seen from above (After S. Chandrasekhar: Hydrodynamic and Hydromagnetic Stability. Clarendon Press, Oxford 1961)

Fig. 1.17. Pattern of fluid motion at elevated Rayleigh number (After F. H. Busse, J. A. Whitehead: J. Fluid Mech. 47, 305 (1971))


Fig. 1.18. A typical pattern of cloud streets (After R. Scorer: Clouds of the World. Lothian Publ. Co., Melbourne 1972)


A closely related phenomenon is the Taylor instability. Here a fluid is put be-
tween two rotating coaxial cylinders. Above a critical rotation speed, Taylor
vortices occur. In further experiments also one of the cylinders is heated. Since in a
number of cases, stars can be described as rotating liquid masses with thermal
gradients, the impact of this and related effects on astrophysics is evident. There are
numerous further examples of such ordering phenomena in physical systems far
from thermal equilibrium. However we shall now move on to chemistry.
In a number of chemical reactions, spatial, temporal or spatio-temporal pat-
terns occur. An example is provided by the Belousov-Zhabotinsky reaction. Here
Ce₂(SO₄)₃, KBrO₃, CH₂(COOH)₂, and H₂SO₄, as well as a few drops of ferroin
(redox indicator), are mixed and stirred. The resulting homogeneous mixture is then
put into a test tube, where immediately temporal oscillations occur. The solution
changes color periodically from red, indicating an excess of Ce³⁺, to blue, indicat-
ing an excess of Ce⁴⁺ (Fig. 1.19). Since the reaction takes place in a closed system,
the system eventually approaches a homogeneous equilibrium. Further examples
of developing chemical structures are represented in Fig. 1.20. In later chapters of
this book we will treat chemical reactions under steady-state conditions, where,
nevertheless, spatio-temporal oscillations occur. It will turn out that the onset
of the occurrence of such structures is governed by principles analogous to those
governing disorder-order transitions in lasers, hydrodynamics, and other systems.
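Chemical oscillations of this kind can be illustrated with the Brusselator, a two-variable reaction model treated in Sect. 9.4. The sketch below integrates its rate equations with purely illustrative parameter values.

```python
def brusselator(a=1.0, b=3.0, x=1.0, y=1.0, dt=1e-3, steps=60_000):
    """Integrate the Brusselator rate equations (explicit Euler):
        dx/dt = a - (b + 1)*x + x**2 * y
        dy/dt = b*x - x**2 * y
    Returns the time series of the concentration x."""
    xs = []
    for _ in range(steps):
        dx = a - (b + 1) * x + x * x * y
        dy = b * x - x * x * y
        x += dt * dx
        y += dt * dy
        xs.append(x)
    return xs

xs = brusselator()
```

For b > 1 + a² the homogeneous steady state (x, y) = (a, b/a) becomes unstable and the concentrations settle onto periodic oscillations, the analogue of the color oscillations in the test tube.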

Fig. 1.19. The Belousov-Zhabotinsky reaction showing a spatial pattern (schematic)

Our last class of examples is taken from biology. On quite different levels, a
pronounced spontaneous formation of structures is observed. On the global level,
we observe an enormous variety of species. What are the factors determining their
distribution and abundance? To show what kind of correlations one observes,
consider Fig. 1.21 which essentially presents the temporal oscillation in numbers
of snowshoe hares and lynx. What mechanism causes the oscillation? In evolution,
selection plays a fundamental role. We shall find that selection of species obeys the
same laws as, for example, laser modes.
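A first answer is provided by the classical Lotka-Volterra equations, the simplest predator-prey model (stochastic versions of such models are taken up in Chap. 10). The coefficients in the sketch below are purely illustrative, not fitted to the pelt data.

```python
def lotka_volterra(hares, lynx, a=1.0, b=0.1, c=1.5, d=0.075,
                   dt=1e-3, steps=20_000):
    """Integrate the Lotka-Volterra equations for prey H and predator L:
        dH/dt =  a*H - b*H*L   (hares: growth minus predation)
        dL/dt = -c*L + d*H*L   (lynx: starvation plus feeding)
    Returns the trajectory as a list of (H, L) pairs."""
    traj = [(hares, lynx)]
    for _ in range(steps):
        dH = a * hares - b * hares * lynx
        dL = -c * lynx + d * hares * lynx
        hares += dt * dH
        lynx += dt * dL
        traj.append((hares, lynx))
    return traj

traj = lotka_volterra(40.0, 9.0)
```

Both populations circle around the fixed point (H*, L*) = (c/d, a/b) = (20, 10) instead of settling down: an abundance of hares feeds a growing lynx population, which then depresses the hares, and so on.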
Let us turn to our last example. In developmental physiology it has long been
known that a set of equal (equipotent) cells may self-organize into structures
with well-distinguished regions. An aggregation of the cellular slime mold (Dictyo-
stelium discoideum) may serve as a model for cell interactions in embryogenesis.
Dictyostelium forms a multi-cellular organism by aggregation of single cells. Dur-
ing its growth phase the organism exists in the state of single amoeboid cells. Several
hours after the end of growth, these cells aggregate forming a polar body along


(a)

(b)

Fig. 1.20 a and b. Spirals of chemical activity in a shallow dish. Wherever two waves collide head on, both vanish. Photographs taken by A. T. Winfree with Polaroid SX-70.


Fig. 1.21. Changes in the abundance of the lynx and the snowshoe hare, as indicated by the number of pelts received by the Hudson's Bay Company (After D. A. MacLulich: Fluctuations in the Numbers of the Varying Hare. Univ. of Toronto Press, Toronto 1937)

Fig. 1.22. Wave pattern of chemotactic activity in dense cell layers of slime mold (after Gerisch et al.)


which they differentiate into either spores or stalk cells, the final cell types consti-
tuting the fruiting body. The single cells are capable of spontaneously emitting a
certain kind of molecules called cAMP (cyclic adenosine 3'5'-monophosphate) in
the form of pulses into their surroundings. Furthermore cells are capable of
amplifying cAMP pulses. Thus they perform spontaneous and stimulated emission
of chemicals (in analogy to spontaneous and stimulated emission of light by laser
atoms). This leads to a collective emission of chemical pulses which migrate in the
form of concentration waves from a center causing a concentration gradient of
cAMP. The single cells can measure the direction of the gradient and migrate
towards the center with the help of pseudopods. The macroscopically resulting wave
patterns (spiral or concentric circles) are depicted in Fig. 1.22 and show a striking
resemblance to the chemical concentration waves depicted in Fig. 1.20.

1.2 Some Typical Problems and Difficulties


In the preceding section we presented several typical examples of phenomena some
of which we want to study. The first class of examples referred to closed systems.
From these and numerous other examples, thermodynamics concludes that in
closed systems the entropy never decreases. The proof of this theorem is left to
statistical mechanics. To be quite frank, in spite of many efforts this problem is not
completely solved. We will touch on this problem quite briefly, but essentially take
a somewhat different point of view. We do not ask how one can prove quite generally
that entropy ever increases, but rather: how, and how fast, does the entropy increase
in a given system? Furthermore it will transpire that while the entropy concept and
related concepts are extremely useful tools in thermostatics and in so-called
irreversible thermodynamics, it is far too rough an instrument to cope with self-
organizing structures. In general in such structures entropy is changed only by a
tiny amount. Furthermore, it is known from statistical mechanics that fluctuations
of the entropy may occur. Thus other approaches are required. We therefore try
to analyze what features are common to the non-equilibrium systems we have
described above, e.g., lasers, fluid dynamics, chemical reactions, etc. In all these
cases, the total system is composed of very many subsystems, e.g., atoms, mole-
cules, cells, etc. Under certain conditions these subsystems perform a well organized
collective motion or function.
To elucidate some of the central problems let us consider a string whose ends are
kept fixed. It is composed of very many atoms, say 10²², which are held together by
forces. To treat this problem let us make a model: a chain of point masses coupled
by springs (Fig. 1.23). To have a "realistic" model, let us still take an appreciable
number of such point masses. We then have to determine the motion of very many
interacting "particles" (the point masses) or "subsystems". Let us take the follow-
ing attitude: To solve this complicated many-body problem, we use a computer
into which we feed the equations of motion of the point masses and a "realistic"
initial condition, e.g., that of Fig. 1.24. Then the computer will print for us large
tables with numbers, giving the positions of the point masses as a function of time.
Now the first essential point is this: These tables are rather useless until our brain


Fig. 1.23. Point masses coupled by springs

Fig. 1.24. An initial configuration of point masses

Fig. 1.25. Coordinates of point masses at a later time

selects certain "typical features". Thus we shall discover that there are correlations
between the positions of adjacent atoms (Fig. 1.25). Furthermore when looking
very carefully we shall observe that the motion is periodic in time. However, in this
way we shall never discover that the natural description of the motion of the string
is by means of a spatial sine wave (cf. Fig. 1.26), unless we already know this answer
and feed it as initial condition into the computer. Now, the (spatial) sine wave is
characterized by quantities, i.e., the wavelength and the amplitude, which are com-
pletely unknown on the microscopic (atomic) level. The essential conclusion from
our example is this: To describe collective behavior we need entirely new con-
cepts compared to the microscopic description. The notion of wavelength and
amplitude is entirely different from that of atomic positions. Of course, when we
know the sine wave we can deduce the positions of the single atoms.
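This last remark can be sketched numerically. The following fragment (an illustration of our own, not from the text; the chain length, atom number, and amplitude are made-up values) deduces the individual elongations from just two macroscopic quantities, the mode amplitude and the wavelength:

```python
import math

# Illustrative sketch (assumed values): a tiny chain instead of 10^22 atoms.
L = 1.0          # length of the string (arbitrary units)
n_atoms = 10     # number of point masses
xi = 0.1         # mode amplitude (the "order parameter")
n = 1            # mode number; wavelength = 2L/n

wavelength = 2 * L / n
# positions of the point masses along the string (ends excluded, they are fixed)
positions = [j * L / (n_atoms + 1) for j in range(1, n_atoms + 1)]
# microscopic elongations follow from the macroscopic mode
elongations = [xi * math.sin(2 * math.pi * x / wavelength) for x in positions]

# the fixed ends stay at zero elongation
assert abs(xi * math.sin(0.0)) < 1e-12
assert abs(xi * math.sin(2 * math.pi * L / wavelength)) < 1e-12
```

The two numbers `xi` and `wavelength` thus prescribe all microscopic coordinates at once, while no single atomic position by itself reveals them.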

Fig. 1.26. Examples of sine waves formed by strings. q = ξ sin(2πx/λ); ξ: amplitude, λ: wavelength. λ = 2L/n, n: integer


In more complicated systems quite other "modes" may appropriately describe


spatio-temporal patterns or functionings. Therefore, our mechanical example must
be taken as an allegory which, however, exhibits the first main problem of multi-
component systems: What is the adequate description in "macroscopic" terms, or,
in what "modes" does the system operate? Why didn't the computer calculation of
our example lead to these modes? The reason lies in the linearity of the correspond-
ing equations of motion which permits that any superposition of solutions is again a
solution of these equations. It will turn out that equations governing self-organiza-
tion are intrinsically nonlinear. From those equations we shall find in the following
that, "modes" may either compete, so that only one "survives", or coexist by
stabilizing each other. Apparently the mode concept has an enormous advantage
over the microscopic description. Instead of the need to know all "atomic" co-
ordinates of very many degrees of freedom we need to know only a single or very
few parameters, e.g., the mode amplitude. As we will see later, the mode amplitudes
determine the kind and degree of order. We will thus call them order parameters
and establish a connection with the idea of order parameters in phase transition
theory. The mode concept implies a scaling property. The spatio-temporal patterns
may be similar, just distinguished by the size (scale) of the amplitude. (By the way,
this "similarity" principle plays an important role in pattern recognition by the
brain, but no mechanism is so far known to explain it. Thus for example a triangle
is recognized as such irrespective of its size and position).
So far we have demonstrated by an allegory how we may be able to describe
macroscopic ordered states possibly by very few parameters (or "degrees of free-
dom"). In our book we will devise several methods of how to find equations for
these order parameters. This brings us to the last point of our present section. Even
if we have such "parameters", how does self-organization, for example a spon-
taneous pattern formation, occur? To stay in our allegory of the chain, the question
would be as follows. Consider a chain at rest with amplitude ξ = 0. Suddenly it
starts moving in a certain mode. This is, of course, impossible, contradicting
fundamental physical laws, for example that of energy conservation. Thus we have
to feed energy into the system to make it move and to compensate for friction to
keep it moving. The amazing thing in self-organizing systems, such as discussed in
Section 1.1, is now this. Though energy is fed into the system in a completely
random fashion, the system forms a well-defined macroscopic mode. To be quite
clear, here our mechanical string model fails. A randomly excited, damped string
oscillates randomly. The systems we shall investigate organize themselves co-
herently. In the next section we shall discuss our approach to deal with these puzzl-
ing features.
Let us conclude this section with a remark about the interplay between micro-
scopic variables and order parameters, using our mechanical chain as example:
The behavior of the coordinate of the μth point mass (μ = 1, 2, 3, ...) is described
and prescribed by the sine-wave and the size of the order-parameter: Thus the
order-parameter tells the atoms how to behave. On the other hand the sine wave
becomes only possible by the corresponding collective motion of the atoms. This
example provides us with an allegory for many fields. Let us take here an extreme
case, the brain. Take as subsystems the neurons and their connections. An enor-
mous number of microscopic variables may describe their chemical and electrical
activities. The order-parameters are ultimately the thoughts. Both systems neces-
sitate each other. This brings us to a final remark. We have seen above that on a
macroscopic level we need concepts completely different from those on a micro-
scopic level. Consequently it will never suffice to describe only the electro-chemical
processes of the brain to describe its functioning properly. Furthermore, the
ensemble of thoughts forms again a "microscopic" system, the macroscopic order
parameters of which we do not know. To describe them properly we need new
concepts going beyond our thoughts, for us an unsolvable problem. For lack of
space we cannot dwell here on these problems which are closely tied up with deep-
lying problems in logic and which, in somewhat different shape, are well known to
mathematicians, e.g., the Entscheidungsproblem. The systems treated in our book
will be of a simpler nature, however, and no such problems will occur here.

1.3 How We Shall Proceed


Since in many cases self-organization occurs out of chaotic states, we first have to
develop methods which adequately describe such states. Obviously, chaotic states
bear in themselves an uncertainty. If we knew all quantities we could at least list
them, find even some rules for their arrangement, and could thus cope with chaos.
Rather we have to deal with uncertainties or, more precisely speaking, with prob-
abilities. Thus our first main chapter will be devoted to probability theory. The
next question is how to deal with systems about which very little is known. This
leads us in a very natural way to basic concepts of information theory. Applying it
to physics we recover basic relations of thermodynamics, so to speak, as a by-
product. Here we are led to the idea of entropy, what it does, and where the
problems still are. We then pass over to dynamic processes. We begin with simple
examples of processes caused by random events, and we develop in an elementary
but thorough way the mathematical apparatus for their proper treatment. After we
have dealt with "chance", we pass over to "necessity", treating completely deter-
ministic "motion". To it belong the equations of mechanics; but many other
processes are also treated by deterministic equations. A central problem consists
in the determination of equilibrium configurations (or "modes") and in the in-
vestigation of their stability. When external parameters are changed (e.g., pump
power of the laser, temperature gradient in fluids, chemical concentrations) the old
configuration (mode) may become unstable. This instability is a prerequisite for the
occurrence of new modes. Quite surprisingly, it turns out that often a situation
occurs which requires a random event to allow for a solution. Consider as a static
example a stick under a load (cf. Fig. 1.27). For small loads, the straight position is
still stable. However, beyond a critical load the straight position becomes unstable
and two new equivalent equilibrium positions appear (cf. Fig. 1.27). Which one the
stick acquires cannot be decided in a purely deterministic theory (unless asym-
metries are admitted). In reality the development of a system is determined both by
deterministic and random causes ("forces"), or, to use Monod's words, by "chance
and necessity". Again we explain by means of the simplest cases the basic concepts


Fig. 1.27. Deformation of stick under load

and mathematical approaches. After these preliminaries we come in Chapter 7 to the


central question of self-organization. We will discover how to find order parameters,
how they "slave" subsystems, and how to obtain equations for order parameters.
This chapter includes methods of treating continuously extended media. We will
also discuss the fundamental role of fluctuations in self-organizing systems.
Chapters 8 to 10 are devoted to a detailed treatment of selected examples from
physics, chemistry, and biology. The logical connection between the different
chapters is exhibited in Table 1.1. In the course of this book it will transpire that
seemingly quite different systems behave in a completely analogous manner. This
behavior is governed by a few fundamental principles. On the other hand, ad-
mittedly, we are searching for such analogies which show up in the essential gross
features of our systems. When each system is analyzed in more and more detail
down to the subsystems, quite naturally more and more differences between these
systems may show up.

2. Probability

3. Information
,t
,
I
L ____ _

II. Sociology
Table 1.1

2. Probability
What We Can Learn From Gambling

2.1 Object of Our Investigations: The Sample Space

The objects we shall investigate in our book may be quite different. In most cases,
however, we shall treat systems consisting of very many subsystems of the same kind
or of very few kinds. In this chapter we deal with the subsystems and define a few
simple relations. A single subsystem may be among the following:

atoms, molecules, photons (light quanta), cells, plants, animals, students

Let us consider specifically a group of students. A single member of this group


will be denoted by a number w = 1, 2, ... The individual members of the group
under consideration will be called the sample points. The total group or, mathe-
matically stated, the total set of individuals will be called sample space (Q) or sample
set. The set Q of sample points 1, 2, ..., M will be denoted by Q = {1, 2, ..., M}.
The word "sample" is meant to indicate that a certain subset of individuals will be
sampled (selected) for statistical purposes. One of the simplest examples is tossing a
coin. Denoting its tail by zero and its head by one, the sample set of the coin is
given by Q = {0, 1}. Tossing a coin now means to sample 0 or 1 at random. An-
other example is provided by the possible outcomes when a die is rolled. Denoting
the different faces of a die by the numbers 1, 2, ..., 6 the sample set is given by
Q = {1, 2, 3, 4, 5, 6}. Though we will not be concerned with games (which are,
nevertheless, a very interesting subject), we shall use such simple examples to exhibit
our basic ideas. Indeed, instead of rolling a die we may do certain kinds of experi-
ments or measurements whose outcome is of a probabilistic nature. A sample point
is also called a simple event, because its sampling is the outcome of an "experiment"
(tossing a coin, etc.).
It will be convenient to introduce the following notations about sets. A collec-
tion of w's will be called a subset of Q and denoted by A, B, ... The empty set is
written as ∅, the number of points in a set S is denoted by |S|. If all points of the set
A are contained in the set B, we write

A ⊂ B or B ⊃ A. (2.1)

If both sets contain the same points, we write

A = B. (2.2)
The union

A ∪ B = {w | w ∈ A or w ∈ B} (2.3)¹

is a new set which contains all points contained either in A or B. (cf. Fig. 2.1). The
intersection
A ∩ B = {w | w ∈ A and w ∈ B} (2.4)

is a set which contains those points which are both contained in A and B (cf. Fig.
2.2). The sets A and B are disjoint if (cf. Fig. 2.3).
A ∩ B = ∅. (2.5)

Fig. 2.1. The union A ∪ B of the sets A and B comprises all elements of A and B. (To visualize the relation 2.3 we represent the sets A and B by points in a plane and not along the real axis)

Fig. 2.2. The intersection A ∩ B of the sets A and B

Fig. 2.3. Disjoint sets have no elements in common

Fig. 2.4. Q is decomposed into A and its complement A^c

¹ Read the rhs of (2.3) as follows: all w's, for which w is either element of A or of B.


All sample points of Q which are not contained in A form a set called the comple-
ment A^c of A (cf. Fig. 2.4). While the above-mentioned examples imply a countable
number of sample points, w = 1,2, ... , n (where n may be infinite), there are
other cases in which the subsystems are continuous. Think for example of an area
of a thin shell. Then this area can be further and further subdivided and there are
continuously many possibilities to select an area. If not otherwise noted, however,
we will assume that the sample space Q is discrete.

Exercises on 2.1

Prove the following relations 1) - 4):

1) if A ⊂ B, B ⊂ C, then A ⊂ C;
   if A ⊂ B and B ⊂ A, then A = B
2) (A^c)^c = A, Q^c = ∅, ∅^c = Q
3) a) (A ∪ B) ∪ C = A ∪ (B ∪ C) (associativity)
   b) A ∪ B = B ∪ A (commutativity)
   c) (A ∩ B) ∩ C = A ∩ (B ∩ C)
   d) A ∩ B = B ∩ A (commutativity)
   e) A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C) (distributivity)
   f) A ∪ A = A ∩ A = A, A ∪ ∅ = A, A ∩ ∅ = ∅
   g) A ∪ Q = Q, A ∩ Q = A, A ∪ A^c = Q, A ∩ A^c = ∅
   h) A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C) (distributivity)
   i) (A ∩ B)^c = A^c ∪ B^c; (A ∪ B)^c = A^c ∩ B^c

Hint: Take an arbitrary element w of the sets defined on the lhs, show that it is
contained in the set on the rhs of the corresponding equations. Then do the same
with an arbitrary element w' of the rhs.
4) de Morgan's law:
   a) (A1 ∪ A2 ∪ ... ∪ An)^c = A1^c ∩ A2^c ∩ ... ∩ An^c
   b) (A1 ∩ A2 ∩ ... ∩ An)^c = A1^c ∪ A2^c ∪ ... ∪ An^c
Hint: Prove by complete induction.

2.2 Random Variables


Since our ultimate goal is to establish a quantitative theory, we must describe the
properties of the sample points quantitatively. Consider for example gas atoms
which we label by the index w. At each instant the atom w has a certain velocity
which is a measurable quantity. Further examples are provided by humans who
might be classified according to their heights; or people who vote yes or no, and
the numbers 1 and 0 are ascribed to the votes yes and no, respectively. In each case
we can ascribe a numerically valued function X to the sample point w so that we
have for each w a number X(w) or, in mathematical terms, w → X(w) (cf. Fig. 2.5).

Fig. 2.5. Random variable X(w) (heights of persons)

This function X is called a random variable, simply because the sample point w is
picked at random: Making a velocity measurement of a single gas molecule or the
rolling of a die. Once the sample point is picked, X(w) is determined. It is, of
course, possible to ascribe several numerically valued functions X1, X2, ... to the
sample points w, e.g. molecules might be distinguished by their weights, their
velocities, their rotational energies etc. We mention a few simple facts about ran-
dom variables. If X and Y are random variables, then the linear combination
aX + bY, the product XY, and the ratio X/Y (Y ≠ 0) are also random variables.
More generally we may state the following: If φ is a function of two ordinary
variables and X and Y are random variables, then w → φ(X(w), Y(w)) is also a
random variable. A case of particular interest is given by the sum of random
variables

Sn(w) = X1(w) + ... + Xn(w). (2.6)

Because later on we want to treat the joint action of many subsystems (distinguished
by an index i = 1,2, ... ,n), we have often to deal with such a sum.
Examples are provided by the weight of n persons in a lift, the total firing rate
of n neurons, or the light wave built up from n wavetracks emitted from atoms. We
shall see later that such functions as (2.6) reveal whether the subsystems (persons,
neurons, atoms) act independently of each other, or in a well-organized way.
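This point can be illustrated by a small simulation of our own (not from the text; the two-valued subsystems and the sample sizes are arbitrary choices): for independent subsystems the variance of the sum Sn grows like n, while for a perfectly organized (coherent) collective it grows like n².

```python
import random

random.seed(0)

def sample_sum(n, trials, organized):
    """Sum S_n = X_1 + ... + X_n of n two-valued (+1/-1) random variables.

    organized=False: the subsystems act independently of each other.
    organized=True:  all subsystems take the same value (perfect coherence).
    """
    sums = []
    for _ in range(trials):
        if organized:
            s = n * random.choice((-1, 1))
        else:
            s = sum(random.choice((-1, 1)) for _ in range(n))
        sums.append(s)
    return sums

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

n = 100
v_indep = variance(sample_sum(n, 2000, organized=False))  # grows like n
v_coh = variance(sample_sum(n, 2000, organized=True))     # grows like n**2
```

The statistics of Sn thus distinguish a disorganized collection of subsystems from a well-organized one, which is exactly how such sums will be used later.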

2.3 Probability

Probability theory has at least to some extent its root in the considerations of the
outcomes of games. Indeed if one wants to present the basic ideas, it is still ad-
vantageous to resort to these examples. One of the simplest games is tossing a coin
where one can find head or tail. There are two outcomes. However, if one bets on
head there is only one favorable outcome. Intuitively it is rather obvious to define
as probability for the positive outcome the ratio between the number of positive
outcomes 1 divided by the number of all possible outcomes 2, so that we obtain
P = 1/2. When a die is thrown there are six possible outcomes so that the sample
space Q = {1, 2, 3, 4, 5, 6}. The probability to find a particular number k =
1, 2, 3, 4, 5, 6 is P(k) = 1/6. Another way of reaching this result is as follows. When,
for symmetry reasons, we ascribe the same probability to all six outcomes and de-
mand that the sum of the probabilities of these outcomes must be one, we again


obtain

P(k) = 1/6, k = 1, 2, 3, 4, 5, 6. (2.7)

Such symmetry arguments also play an important role in other examples, but in
many cases they require far-reaching analysis. First of all, in our example it is as-
sumed that the die is perfect. Each outcome depends, furthermore, on the way the
die is thrown. Thus we must assume that this is again done in a symmetric way
which after some thinking is much less obvious. In the following we shall assume
that the symmetry conditions are approximately realized. These statements can be
reformulated in the spirit of probability theory. The six outcomes are treated as
equally likely and our assumption of this "equal likelihood" is based on symmetry.
In the following we shall call P(k) a probability but when k is considered as
varying, the function P will be called a probability measure. Such probability
measures may not only be defined for single sample points w but also for subsets
A, B, etc. For example in the case of the die, a subset could consist of all even
numbers 2, 4, 6. The probability of finding an even number when throwing a die is
evidently P({2,4,6}) = 3/6 = 1/2 or in short P(A) = 1/2 where A = {2,4,6}.
We now are able to define a probability measure on the sample space Q. It is a
function on Q which satisfies three axioms:
1) For every set A ⊂ Q the value of the function is a nonnegative number P(A) ≥ 0.
2) For any two disjoint sets A and B, the value of the function of their union A ∪ B
is equal to the sum of its values for A and B,

P(A ∪ B) = P(A) + P(B), provided A ∩ B = ∅. (2.8)

3) The value of the function for Q (as a subset) is equal to 1

P(Q) = 1. (2.9)
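As a sketch of our own (not part of the text), the three axioms can be checked directly for the die, with P defined on subsets of Q by counting equally likely outcomes:

```python
from fractions import Fraction

# Probability measure for the die: all six outcomes equally likely.
Q = frozenset(range(1, 7))

def P(A):
    # probability of a subset A of Q by counting favorable outcomes
    return Fraction(len(A & Q), len(Q))

A = {2, 4, 6}   # "an even number of spots"
B = {1, 3}      # disjoint from A

assert P(A) >= 0                                    # axiom 1
assert A & B == set() and P(A | B) == P(A) + P(B)   # axiom 2 (disjoint sets)
assert P(Q) == 1                                    # axiom 3
```

For the even numbers this reproduces P({2, 4, 6}) = 3/6 = 1/2 as in the text.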

2.4 Distribution

In Section 2.2 we introduced the concept of a random variable which relates a


certain quantity X (e.g., the height of a person) to a sample point w (e.g., the
person). We now consider a task which is in some sense the inverse of that relation.
We prescribe a certain range of the random variable, e.g., the height between 160
and 170 cm and we ask, what is the probability of finding a person in the population
whose height lies in this range (cf. Fig. 2.6). In abstract mathematical terms our
problem is as follows: Let X be a random variable and a and b two constant num-
bers; then we are looking for the subset of sample points w for which a ≤ X(w) ≤ b
or in short

{a ≤ X ≤ b} = {w | a ≤ X(w) ≤ b}. (2.10)

We again assume that the total set Q is countable. We have already seen in Section


Fig. 2.6. The probability P(hj ≤ X ≤ hj+1) to find a person between heights hj and hj+1 (h1 = 150 cm, h2 = 160 cm, etc.)

2.3 that in this case we may assign a probability to every subset of Q. Thus we may
assign to the subset defined by (2.10) a probability which we denote by

P(a ≤ X ≤ b). (2.11)

To illustrate this definition we consider the example of a die. We ask for the
probability that, if the die is rolled once, the number of spots lies in between 2 and
5. The subset of events which have to be counted according to (2.10) is given by the
numbers 2, 3, 4, 5. (Note that in our example X(w) = w). Because each number
appears with the probability 1/6, the probability to find the subset 2, 3, 4, 5 is
P = 4/6 = 2/3. This example immediately lends itself to generalization. We could
equally well ask what is the probability of throwing an odd number of spots, so
that X = 1,3, 5. In that case, X does not stem from an interval but from a certain
well-defined subset, A, so that we define now quite generally

P(X ∈ A) = P({w | X(w) ∈ A}) (2.12)

where A is a set of real numbers. As a special case of the definition (2.11) or (2.12),
we may consider that the value of the random variable X is given: X = x. In this
case (2.11) reduces to

P(X = x) = P(X ∈ {x}). (2.13)

Now we come to a general rule of how to evaluate P(a ≤ X ≤ b). This is sug-
gested by the way we have counted the probability of throwing a die with a result-
ing number of spots lying inbetween 2 and 5. We use the fact that X(w) is countable
if Q is countable. We denote the distinct values of X(w) by v1, v2, ..., vn, ... and
the set {v1, v2, ...} by Vx. We further define that P(X = x) = 0 if x ∉ Vx. We may
furthermore admit that some of the vn's have zero probabilities. We abbreviate
Pn = P(X = vn) (cf. Fig. 2.7a, b). Using the axioms of Section 2.3 we may deduce
that the probability P(a ≤ X ≤ b) is given by

P(a ≤ X ≤ b) = Σ_{a ≤ vn ≤ b} Pn, (2.14)

and more generally the probability P(X ∈ A) by

P(X ∈ A) = Σ_{vn ∈ A} Pn. (2.15)


Fig. 2.7. (a) Same as rhs of Fig. 2.6, but abscissa and ordinate exchanged. (b) Probability measure Pn. We made the heights discrete by taking the average value vn = 1/2(hn + hn+1) of a person's height in each interval hn ... hn+1. (c) Distribution function Fx(x) corresponding to (b)

If the set A consists of all real numbers X of the interval from -∞ to x we define
the so-called distribution function of X by (cf. Fig. 2.7c)

Fx(x) = P(X ≤ x) = Σ_{vn ≤ x} Pn. (2.16)

The Pn's are sometimes called elementary probabilities. They have the properties

Pn ≥ 0 for all n, (2.17)

and

Σn Pn = 1, (2.18)

again in accordance with the axioms. The reader should be warned that "distribu-
tion" is used with two different meanings:

I: "Distribution function", defined by (2.16)
II: "Probability distribution" or "probability density", defined by (2.13)

2.5 Random Variables with Densities


In many practical applications the random variable X is not discrete but con-
tinuous. Consider, for instance, a needle spinning around an axis. When the needle
comes to rest (due to friction), its final position may be considered as a random
variable. If we describe this position by an angle ψ it is obvious that ψ is a con-
tinuous random variable (cf. Fig. 2.8). We have therefore to extend the general
formulation of the foregoing section to this more general case. In particular, we
wish to extend (2.14) in a suitable manner, and we may expect that this generaliza-
tion consists in replacing the sum in (2.14) by an integral. To put this on mathe-
matically safe ground, we first consider a mapping ξ → f(ξ) where the function
f(ξ) is defined on the real axis from -∞ to +∞. We require that f(ξ) has the follow-
ing properties

Fig. 2.8. Angle ψ and its density function f(ψ)

1) ∀ξ: f(ξ) ≥ 0, (2.19)²

2) ∫_{-∞}^{+∞} f(ξ) dξ = 1. (2.20)

(The integral is meant as Riemann's integral). We will call f a density function and
assume that it is piecewise continuous so that ∫_a^b f(ξ) dξ exists. We again consider a
random variable X defined on Q with w → X(w). We now describe the probability
by (cf. Fig. 2.9)

P(a ≤ X ≤ b) = ∫_a^b f(ξ) dξ. (2.21)³

The meaning of this definition becomes immediately evident if we think again of


the needle. The random variable ψ may acquire continuous values in the interval
between 0 and 2π. If we exclude, for instance, gravitational effects so that we may
assume that all directions have the same probability (assumption of equal like-

² Read ∀ξ: "for all ξ's".

³ More precisely, f(ξ) should carry the suffix X to indicate that fX(ξ) relates to the random variable
X (cf. (2.16)).


Fig. 2.9. P(a ≤ X ≤ b) as shaded area

lihood!), the probability of finding the needle in a certain direction in the interval
ψ0 and ψ0 + dψ is (1/2π) dψ. Then the probability of finding ψ in the interval ψ1
and ψ2 will be given by

P(ψ1 ≤ ψ ≤ ψ2) = ∫_{ψ1}^{ψ2} (1/2π) dψ = (ψ2 - ψ1)/(2π).

Note that 1/(2π) stems from the normalization condition (2.20) and that in our
case f(ξ) = 1/(2π) is a constant. We may generalize (2.21) if A consists of a union
of intervals. We then define accordingly
of intervals. We then define accordingly

P(X ∈ A) = ∫_A f(ξ) dξ. (2.22)

A random variable with density is sometimes also called a continuous random


variable. Generalizing the definition (2.16) to continuous random variables we
obtain the following: If A = (-∞, x), then the distribution function Fx(x) is given
by

Fx(x) = ∫_{-∞}^{x} f(ξ) dξ. (2.23)

In particular if f is continuous we find

F'x(x) = fx(x). (2.24)
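As a numerical sketch of our own, one can check (2.20) and (2.24) for the constant needle density f(ψ) = 1/(2π), approximating the integral by a Riemann sum and the derivative by a difference quotient:

```python
import math

# Constant density of the spinning needle on [0, 2*pi)
f = lambda psi: 1.0 / (2.0 * math.pi)

def F(x, steps=10_000):
    # Riemann sum approximating the integral of f from 0 to x, cf. (2.23)
    h = x / steps
    return sum(f(i * h) * h for i in range(steps))

x = 1.0
derivative = (F(x + 1e-4) - F(x - 1e-4)) / 2e-4

assert abs(F(2 * math.pi) - 1.0) < 1e-6    # normalization (2.20)
assert abs(derivative - f(x)) < 1e-6       # F'(x) = f(x), cf. (2.24)
```

For this constant density F grows linearly, F(x) = x/(2π), so the check of (2.24) is exact up to numerical error.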

Exercise on 2.5

1) Dirac's δ-function is defined by:

δ(x - x0) = 0 for x ≠ x0

and

∫_{x0-ε}^{x0+ε} δ(x - x0) dx = 1 for any ε > 0.


Fig. 2.10. The δ-function may be considered as a Gaussian function α^{-1}π^{-1/2} exp[-(x - x0)²/α²] in the limit α → 0

Show: The distribution function

f(x) = Σ_{i=1} Pi δ(x - vi)

allows us to write (2.16) in the form (2.23).

2) a) Plot f(x) = α exp(-αx) and the corresponding Fx(x), x ≥ 0, versus x.
b) Plot f(x) = β(exp(-αx) + γδ(x - x1))
and the corresponding Fx(x); x ≥ 0, versus x. Determine β from the normalization
condition (2.20).
Hint: The δ-function is "infinitely high." Thus indicate it by an arrow.

2.6 Joint Probability


So far we have considered only a single random variable, e.g., the height of persons.
We may ascribe to them, however, simultaneously other random variables, e.g.,
their weights, color, etc. (Fig. 2.11). This leads us to introduce the "joint probability"
which is the probability of finding a person with given weight, a given height, etc.
To cast this concept into mathematical terms we treat the example of two random
variables. We introduce the set S consisting of all pairs of values (u, v) that the

Fig. 2.11. The random variables X (height) and Y (weight). On the rhs the probability measures
for the height (irrespective of weight) and for the weight (irrespective of height) are plotted


Fig. 2.12. The persons are grouped according to their weights and heights

Fig. 2.13. The joint probability P((X, Y) ∈ S) plotted over X and Y. The subsets S are the single
squares (see for example shaded square)

random variables (X, Y) acquire. For any subset S' ⊂ S we then define the joint
probability, generalizing (2.12), as (cf. Fig. 2.13)

P((X, Y) ∈ S') = P({w | (X(w), Y(w)) ∈ S'}). (2.25)

Labelling the distinct values of X(w) = um by m and those of Y(w) = vn by n, we put

P(m, n) = P(X = um, Y = vn). (2.26)

Using the axioms of Section 2.3, we may easily show that the probability P(X =
um), i.e., the probability that X(w) = um irrespective of which values Y acquires, is
given by

P(X = um) = Σn P(m, n). (2.27)


The joint probability is sometimes also called multi-variate probability.
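A small sketch (with made-up numbers of our own, e.g. two height classes m and two weight classes n plus a third height class) shows how a joint probability table yields the marginal probability by summation over n, in the sense of (2.27):

```python
from fractions import Fraction

# Hypothetical joint probabilities P(m, n); the values are invented
# purely for illustration and normalized to 1.
P = {
    (0, 0): Fraction(1, 6), (0, 1): Fraction(1, 6),
    (1, 0): Fraction(1, 6), (1, 1): Fraction(2, 6),
    (2, 0): Fraction(1, 6), (2, 1): Fraction(0, 6),
}

def marginal_X(m):
    # probability of the height class m irrespective of the weight class n
    return sum(p for (m2, n), p in P.items() if m2 == m)

assert sum(P.values()) == 1
assert marginal_X(0) == Fraction(2, 6)
assert marginal_X(1) == Fraction(3, 6)
```

Summing the marginals over m again gives 1, as it must for a probability measure.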

Exercise on 2.6

1) Generalize the definitions and relations of Sections 2.4 and 2.5 to the present
case.
2) Generalize the above definitions to several random variables X1(w), X2(w), ...,
XN(w).

2.7 Mathematical Expectation E(X), and Moments


The quantity E(X) we shall define is also called expected value, the mean, or the
first moment. We consider a random variable X which is defined on a countable
space Q. Consider as an example the die. We may ask, what is the mean value of
spots which we obtain when throwing the die many times. In this case the mean
value is defined as the sum over 1, 2, ..., 6, divided by the total number of possible
throws. Remembering that 1/(number of possible throws) is equal to the prob-
ability of throwing a given number of spots, we are led to the following expression
for the mean:

E(X) = Σ_{w ∈ Q} X(w)P({w}) (2.28)

where X(w) is the random variable, P is the probability of the sample point w and
the sum runs over all points of the sample set Q. If we label the sample points by
integers 1, 2, ..., n we may use the definition of Section 2.4 and write:

E(X) = Σn Pn vn. (2.29)

Because each function of a random variable is again a random variable we can
readily generalize (2.28) to the mean of a function φ(X) and find in analogy to (2.28)

E(φ(X)) = Σ_{w ∈ Q} φ(X(w))P({w}), (2.30)

and in analogy to (2.29),

E(φ(X)) = Σn Pn φ(vn). (2.31)

The definition of the mean (or mathematical expectation) may be immediately


generalized to a continuous variable so that we only write down the resulting
expression

E(φ(X)) = ∫_{-∞}^{+∞} φ(ξ)f(ξ) dξ. (2.32)

When we put φ(X) = X^r in (2.30), or (2.31), or (2.32) the mathematical expectation
E(X^r) is called the rth moment of X. We further define the variance by

σ² = E((X - E(X))²). (2.33)

Its square root, σ, is called the standard deviation.
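For the die these definitions can be evaluated directly (an illustrative sketch of our own; `moment` is our helper name), following (2.29) for the mean and the definition of the variance:

```python
from fractions import Fraction

# All six faces of the die carry the probability 1/6.
v = range(1, 7)
p = Fraction(1, 6)

def moment(r):
    # rth moment E(X^r) = sum_n p_n * v_n^r, cf. (2.31) with phi(X) = X^r
    return sum(p * Fraction(vn) ** r for vn in v)

mean = moment(1)
variance = sum(p * (Fraction(vn) - mean) ** 2 for vn in v)

assert mean == Fraction(7, 2)         # E(X) = 3.5
assert moment(2) == Fraction(91, 6)   # E(X^2)
assert variance == Fraction(35, 12)   # E((X - E(X))^2)
```

Note that the variance equals E(X²) - E(X)² = 91/6 - 49/4 = 35/12, a convenient identity for computations.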

2.8 Conditional Probabilities


So far we have been considering probabilities without further conditions. In many
practical cases, however, we deal with a situation, which, if translated to the
primitive game of rolling a die, could be described as follows: We roll a die but now
ask for the probability that the number of spots obtained is three under the condition
that the number is odd. To find a proper tool for the evaluation of the correspond-
ing probability, we remind the reader of the simple rules we have established
earlier. If Q is finite and all sample points have the same weight, then the probability
of finding a member out of a set A is given by

P(A) = |A| / |Q| (2.34)

where |A|, |Q| denote the number of elements of A or Q. This rule can be generalized
if Q is countable and each point w has the weight P(w). Then

P(A) = Σ_{w ∈ A} P(w) / Σ_{w ∈ Q} P(w). (2.35)

In the cases (2.34) and (2.35) we have admitted the total sample space. As an
example for (2.34) and (2.35) we just have to consider again the die as described at
the beginning of the section.
We now ask for the following probability: We restrict the sample points to a
certain subset S (⊂ Q) and ask for the proportional weight of the part of A in S
relative to S. Or, in the case of the die, we admit as sample points only those
which are odd. In analogy to formula (2.35) we find for this probability

P(A | S) = Σ_{w ∈ A ∩ S} P(w) / Σ_{w ∈ S} P(w). (2.36)

Extending the denominator and numerator in (2.36) by IIL.,Eu P(w) we may re-
write (2.36) in the form

P(A IS) = P(~(~ S). (2.37)

This quantity IS called the conditional probability of A relative to S. In the literature


other terminologies are also used, such as "knowing S", "given S", "under the
hypothesis of S".
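As an illustration (not part of the formal development), the rule (2.37) may be checked numerically. The following Python sketch, with the helper name cond_prob chosen here for convenience, evaluates conditional probabilities for a fair die:

```python
from fractions import Fraction

# Sample space of a fair die: every outcome carries the same weight 1/6.
omega = {1, 2, 3, 4, 5, 6}
P = {w: Fraction(1, 6) for w in omega}

def cond_prob(A, S):
    """P(A | S) = P(A intersect S) / P(S), eq. (2.37)."""
    p_AS = sum(P[w] for w in A & S)
    p_S = sum(P[w] for w in S)
    return p_AS / p_S

S = {1, 3, 5}               # condition: the number of spots is odd
print(cond_prob({3}, S))    # -> 1/3
print(cond_prob({2}, S))    # -> 0
```

The first value reproduces Exercise 1 below: under the hypothesis of an odd throw, three spots occur with probability 1/3, while two spots cannot occur at all.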

Exercises on 2.8

1) A die is thrown. Determine the probability to obtain 2 (3) spots under the
hypothesis that an odd number is thrown.
Hint: Verify A = {2}, or A = {3}, S = {1, 3, 5}. Determine A ∩ S and use (2.37).
2) Given are the stochastic variables X and Y with probability measure

P(m, n) = P(X = m, Y = n).

Show that

P(m | n) = P(m, n) / Σ_m P(m, n).

2.9 Independent and Dependent Random Variables


For simplicity we consider countably valued random variables, though the defini-
tion may be readily generalized to noncountably valued random variables. We
have already mentioned that several random variables can be defined simulta-
neously, e.g., the weight and height of humans. In this case we really expect a
certain relation between weight and height so that the random variables are not
independent of each other. On the other hand, when we roll two dice simultaneously
and consider the number of spots of the first die as random variable, X_1, and that
of the second die as random variable, X_2, then we expect that these random
variables are independent of each other. As one verifies very simply by this ex-
ample, the joint probability may be written as a product of probabilities of each
individual die. Thus we are led quite generally to the following definition: Random
variables X_1, X_2, ..., X_n are independent if and only if for any real numbers
x_1, ..., x_n we have

P(X_1 = x_1, ..., X_n = x_n) = P(X_1 = x_1) P(X_2 = x_2) ··· P(X_n = x_n).   (2.38)

In a more general formulation, which may be derived from (2.38), we state that the
variables vary independently if and only if for arbitrary countable sets S_1, ..., S_n
the following holds:

P(X_1 ∈ S_1, ..., X_n ∈ S_n) = P(X_1 ∈ S_1) ··· P(X_n ∈ S_n).   (2.39)

We mention a consequence of (2.38) with many practical applications. Let
φ_1, ..., φ_n be arbitrary, real valued functions defined on the total real axis and
X_1, ..., X_n independent random variables. Then the random variables φ_1(X_1), ...,
φ_n(X_n) are also independent. If random variables are not independent of each other,


it is desirable to have a measure for the degree of their independence or, stated
more positively, for their correlation. Because the expectation value of the
product of independent random variables factorizes (which follows from (2.38)),
a measure for the correlation will be the deviation of E(XY) from E(X)E(Y). To
avoid having large values of the random variables X, Y with a small correlation
mimic a large correlation, one normalizes the difference

E(XY) - E(X)E(Y)   (2.40)

by dividing it by the standard deviations σ(X) and σ(Y). Thus we define the so-
called correlation

ρ(X, Y) = (E(XY) - E(X)E(Y)) / (σ(X) σ(Y)).   (2.41)

Using the definition of the variance (2.33), we may show that

ρ(X, Y) = 0   (2.42)

for independent random variables X and Y (with finite variances).
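As a numerical check of (2.41) and (2.42), one may sum directly over the joint distribution of two independent fair dice. The following Python sketch is our own illustration; the joint probability factorizes by construction, so the correlation must vanish up to rounding:

```python
import itertools
import math

# Two independent fair dice: joint probability P(x, y) = 1/36 for all pairs.
outcomes = range(1, 7)
joint = {(x, y): 1 / 36 for x, y in itertools.product(outcomes, outcomes)}

E_X = sum(p * x for (x, y), p in joint.items())
E_Y = sum(p * y for (x, y), p in joint.items())
E_XY = sum(p * x * y for (x, y), p in joint.items())
var_X = sum(p * (x - E_X) ** 2 for (x, y), p in joint.items())
var_Y = sum(p * (y - E_Y) ** 2 for (x, y), p in joint.items())

# Correlation according to eq. (2.41).
rho = (E_XY - E_X * E_Y) / (math.sqrt(var_X) * math.sqrt(var_Y))
print(rho)  # vanishes up to floating-point rounding, cf. eq. (2.42)
```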

Exercise on 2.9

The random variables X, Y may both assume the values 0 and 1. Check if these
variables are statistically independent for the following joint probabilities:

                      a)     b)     c)
P(X = 0, Y = 0)  =  1/4  =  1/2  =  1
P(X = 1, Y = 0)  =  1/4  =  0    =  0
P(X = 0, Y = 1)  =  1/4  =  0    =  0
P(X = 1, Y = 1)  =  1/4  =  1/2  =  0

Visualize the results by coin tossing.

2.10* Generating Functions and Characteristic Functions

We start with a special case. Consider a random variable X taking only non-
negative integer values. Examples for X are the number of gas molecules in a cell
of a given volume out of a bigger volume, or, the number of viruses in a volume of
blood out of a much larger volume. Let the probability distribution of X be given
by

P(X = j) = a_j,  j = 0, 1, 2, ...   (2.43)


We now want to represent the distribution (2.43) by means of a single function. To
this end we introduce a dummy variable z and define a generating function by

g(z) = Σ_{j=0}^{∞} a_j z^j.   (2.44)

We immediately obtain the coefficients a_j with the aid of the Taylor expansion of the
function g(z) by taking the jth derivative of g(z) and dividing it by j!:

a_j = (1/j!) (d^j g/dz^j) |_{z=0}.   (2.45)
The great advantage of (2.44) and (2.45) rests in the fact that in a number of im-
portant practical cases g(z) is an explicitly defined function, and that by means of
g(z) one may easily calculate expectation values and moments. We leave it to the
reader how one may derive the expression of the first moment of the random
variables X by means of (2.44). While (2.45) allows us to determine the probability
distribution (2.43) by means of (2.44), it is also possible to express (2.44) in terms of a
function of the random variable X, namely,

g(z) = E(z^X).   (2.46)

This can be shown as follows: For each z, ω → z^{X(ω)} is a random variable and thus
according to formulas (2.30), (2.43) we obtain

E(z^X) = Σ_{j=0}^{∞} P(X = j) z^j = g(z).   (2.47)

We mention an important consequence of (2.44). If the random variables X_1, ...,
X_n are independent and have g_1, ..., g_n as their generating functions, then the
generating function of the sum X_1 + X_2 + ... + X_n is given by the product
g_1 g_2 ... g_n. The definition of the generating function of the form (2.46) lends itself
to a generalization to more general random variables. Thus one defines for non-
negative variables a generating function by replacing z by e^{-λ},

E(e^{-λX}),   (2.48)

and for arbitrary random variables the characteristic function by replacing z by e^{iθ},

E(e^{iθX}).   (2.49)

The definitions (2.48) and (2.49) can be used not only for discrete values of X but
also for continuously distributed values X. We leave to the reader the detailed
formulation as an exercise.
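The product rule for the generating function of a sum of independent variables can be made concrete with a fair die. In the following Python sketch (our own illustration; the helper poly_mul simply multiplies polynomials, i.e. convolves coefficient lists) the coefficients of g_1 g_2 give the distribution of the sum of two dice:

```python
# Generating function of one fair die, stored as its coefficient list:
# index j holds a_j = P(X = j), cf. eq. (2.44).
die = [0] + [1 / 6] * 6        # a_0 = 0, a_1 = ... = a_6 = 1/6

def poly_mul(a, b):
    """Multiply two polynomials given by coefficient lists (convolution)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# For independent dice the generating function of the sum is the product
# g1 * g2; its coefficients are the probabilities of the total.
two_dice = poly_mul(die, die)
print(two_dice[7])             # P(sum = 7) = 6/36
```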

Exercise on 2.10

Convince yourself that the derivatives of the characteristic function (2.49) yield the moments:

E(X^n) = (1/i^n) (d^n/dθ^n) E(e^{iθX}) |_{θ=0}.

2.11 A Special Probability Distribution: Binomial Distribution


In many practical cases one repeats an experiment many times with two possible
outcomes. The simplest example is tossing a coin n times. But quite generally we
may consider the two possible outcomes as success or failure. The probability for
success of a single event will be denoted by p, that of failure by q, with p + q = 1.
We will now derive an expression for the probability that out of n trials we will
have k successes. Thus we are deriving the probability distribution for the random
variable X of successes, where X may acquire the values k = 0, 1, ..., n. We take
the example of the coin and denote success by 1 and failure by 0. If we toss the coin
n times, we get a certain sequence of numbers 0, 1, 0, 0, 1, 1, ..., 1, 0. Since the sub-
sequent events are independent of each other, the probability of finding this specific
sequence is simply the product of the corresponding probabilities p or q. In the case
just described we find P = qpqqpp...pq = p^k q^{n-k}. By this total trial we have
found the k successes, but only in a specific sequence.
In many practical applications we are not interested in a specific sequence but
in all sequences giving rise to the same number of successes. We therefore ask the
question how many different sequences of zeros and ones we can find with the
same amount, k, of numbers "1". To this end we consider n boxes each of which
can be filled with one number, unity or zero. (Usually such a box model is used by
means of a distribution of black and white balls. Here we are taking the numbers
zero and unity, instead.) Because this kind of argument will be repeated very often,
we give it in detail. Let us take a number, 0 or 1. Then we have n possibilities to put
that number into one of the n boxes. For the next number we have n - 1 pos-
sibilities (because one box is already occupied), for the third n - 2 possibilities, and
so on. The total number of possibilities of filling up the boxes is given by the product
of these individual possibilities, which is

n(n - 1)(n - 2) ... 2·1 = n!

This filling up of the boxes with numbers does not lead in all cases to a different
configuration, because when we exchange two units or several units among each
other we obtain the same configuration. Since the "1"s can be distributed in k!
different manners over the boxes and in a similar way the "0"s in (n - k)! manners,
we must divide the total number n! by k!(n - k)!. The expression n!/(k!(n - k)!) is
denoted by \binom{n}{k} and is called a binomial coefficient. Thus we find the total probability
distribution in the following manner: There are \binom{n}{k} different sequences, each having
the probability p^k q^{n-k}. Because for different subsets of events the probabilities are


Fig. 2.14a and b. Binomial distribution B_k(n, p) as function of k for p = 1/4 and n = 15 (a),
n = 30 (b)

additive, the probability of finding k successes, irrespective of the sequence, is given
by

B_k(n, p) = \binom{n}{k} p^k q^{n-k}   (2.50)

(binomial distribution). Examples are given in Fig. 2.14. The mean value of the
binomial distribution is

E(X) = np,   (2.51)

the variance

σ² = npq.   (2.52)


We leave the proof as an exercise to the reader. For large n it is difficult to evaluate
B. On the other hand in practical cases the number of trials n may be very large.
Then (2.50) reduces to certain limiting cases which we shall discuss in subsequent
sections.
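Formulas (2.50)-(2.52) are easily checked numerically. The following Python sketch (our own illustration, using the standard-library binomial coefficient) sums the distribution directly for the parameters of Fig. 2.14a:

```python
from math import comb

def binom_pmf(k, n, p):
    """B_k(n, p) = C(n, k) p^k q^(n-k), eq. (2.50)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 15, 0.25
# Mean and variance computed by direct summation over k = 0, ..., n.
mean = sum(k * binom_pmf(k, n, p) for k in range(n + 1))
var = sum((k - mean) ** 2 * binom_pmf(k, n, p) for k in range(n + 1))
print(mean, var)  # np = 3.75 and npq = 2.8125, cf. eqs. (2.51), (2.52)
```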
The great virtue of the concept of probability consists in its flexibility which
allows for application to completely different disciplines. As an example for the
binomial distribution we consider the distribution of viruses in a blood cell
under a microscope (cf. Fig. 2.15). We put a square grid over the cell and

Fig. 2.15. Distribution of particles under a grid

ask for the probability distribution to find a given number of particles in a square
(box). If we assume N squares and n = μN particles present, then μ is the
average number of particles found in a box. The problem may be mapped on the
preceding one on coin tossing in the following manner: Consider a specific box and
first consider only one virus. Then we have a positive event (success) if the virus
is in the box and a negative event (failure) when it is outside the box. The prob-
ability for the positive event is apparently p = 1/N. We now have to distribute
μN = n particles into two boxes, namely into the small one under consideration
and into the rest volume. Thus we make n trials. If, in a "Gedankenexperiment",
we put one virus after the other into the total volume, this corresponds exactly to
tossing a coin. The probability for each specific sequence with k successes, i.e., k
viruses in the box under consideration, is given as before by p^k q^{n-k}. Since there are
again \binom{n}{k} different sequences, the total probability is again given by (2.50) or,
using the specific form n = μN, by

B_k(μN, 1/N) = \binom{μN}{k} (1/N)^k (1 - 1/N)^{μN-k}.   (2.53)

In practical applications one may consider N and thus also n as very large numbers,
while μ is fixed and p tends to zero. Under the condition n → ∞, μ fixed, p → 0,
one may replace (2.50) by the so-called Poisson distribution.


2.12 The Poisson Distribution


We start from (2.53), which we write, using elementary steps, in the form

B_k(n, p) = (μ^k/k!) · (1 - μ/n)^n · [n(n - 1)(n - 2)···(n - k + 1)/n^k] (1 - μ/n)^{-k},   (2.54)

where we label the three factors 1, 2, 3 in this order.
For k fixed but n → ∞ we find the following expressions for the factors 1, 2, 3
in (2.54). The first factor remains unchanged, while the second factor yields the
exponential function

lim_{n→∞} (1 - μ/n)^n = e^{-μ}.   (2.55)

The third factor reduces in a straightforward way to

lim_{n→∞} [n(n - 1)···(n - k + 1)/n^k] (1 - μ/n)^{-k} = 1.   (2.56)

We thus obtain the Poisson distribution

π_{k,μ} = lim_{n→∞} B_k(n, p) = (μ^k/k!) e^{-μ}.   (2.57)

Examples are given in Fig. 2.16. Using the notation of Section 2.10 we may also
write

a_k = P(X = k) = (μ^k/k!) e^{-μ}.   (2.58)

We thus find for the generating function

g(z) = Σ_{k=0}^{∞} (μ^k/k!) e^{-μ} z^k = e^{μ(z-1)}.   (2.59)

The straightforward evaluation of the mathematical expectation and of the
variance (which we leave to the reader as an exercise) yields

E(X) = g'(1) = μ,   (2.60)

σ²(X) = μ.   (2.61)
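The passage from (2.50) to the Poisson limit (2.57) can also be watched numerically. In the following Python sketch (our own illustration) the binomial probability with p = μ/n approaches π_{k,μ} as n grows:

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    """Binomial distribution, eq. (2.50)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, mu):
    """Poisson distribution pi_{k,mu} = mu^k/k! e^(-mu), eq. (2.57)."""
    return mu**k / factorial(k) * exp(-mu)

mu, k = 2.0, 3
for n in (10, 100, 10000):
    # p = mu/n with mu fixed: the limit n -> infinity of the text.
    print(n, binom_pmf(k, n, mu / n))
print(poisson_pmf(k, mu))  # limiting value, approx. 0.1804
```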


Fig. 2.16a and b. Poisson distribution π_{k,μ} as function of k for μ = 2 (a) and μ = 20 (b)

Exercise on 2.12

Prove: E(X(X - 1)···(X - l + 1)) = μ^l.

Hint: Differentiate g(z), (2.59), l times.

2.13 The Normal Distribution (Gaussian Distribution)

The normal distribution can be obtained as a limiting case of the binomial dis-
tribution, again for n → ∞ but for p and q not small, e.g., p = q = 1/2. Because
the presentation of the limiting procedure requires some more advanced mathe-
matics, which would take too much space here, we do not give the details but simply
indicate the spirit. See also Section 4.1. We first introduce a new variable, u, by the
requirement that the mean value of k = np corresponds to u = 0; we further
introduce a new scale with k → k/σ, where σ² is the variance. Because according to
formula (2.52) the variance tends to infinity with n → ∞, this scaling means that we
are eventually replacing the discrete variable k by a continuous variable. Thus
it is not surprising that we obtain instead of the original distribution function B a
density function φ, where we use simultaneously the transformation φ = σB. Thus
more precisely we put

φ_n(u) = σ B_k(n, p),  u = (k - np)/σ.   (2.62)


We then obtain the normal distribution by letting n → ∞, so that we find

lim_{n→∞} φ_n(u) = φ(u) = (1/√(2π)) e^{-u²/2}.   (2.63)

φ(u) is a density function (cf. Fig. 2.17). Its integral is the normal (or Gaussian)
distribution

Fig. 2.17. Gaussian distribution φ(u) as function of u. Note the bell shape

F(x) = (1/√(2π)) ∫_{-∞}^{x} e^{-u²/2} du.   (2.64)

Normal (or Gaussian) Probability Density of Several Variables

When several random variables are present, in practical applications the
following joint Gaussian probability density often appears:

f(x) = (2π)^{-n/2} |Q|^{-1/2} exp[-(1/2)(x - m)^T Q^{-1} (x - m)].   (2.65)

X is a vector composed of the random variables X_1, ..., X_n; x is the corresponding
vector of values the random variable X can acquire. Q = (Q_{ij}) is an n by n matrix. We assume that
its inverse Q^{-1} exists. |Q| = det Q denotes the determinant of Q. m is a given
constant vector, T denotes the transposed vector. The following relation holds:
The mean of the random variable X_j is equal to the jth component of m:

m_j = E{X_j},  j = 1, ..., n.   (2.66)

Furthermore the covariances σ_{ij}, as defined by

σ_{ij} = E{(X_i - m_i)(X_j - m_j)},   (2.67)

coincide with the matrix elements of Q = (Q_{ij}); i.e., σ_{ij} = Q_{ij}.


Exercises on 2.13

1) Verify that the first moments of the Gaussian density

f(x) = √(α/π) exp(-αx²)

are given by m_1 = 0, m_2 = 1/(2α).
2) Verify that the characteristic function of the Gaussian density is given by

E{exp(iθX)} = exp(-θ²/(4α)).

3) Verify (2.66) and (2.67).


Hint: Introduce new variables y_j = x_j - m_j and diagonalize Q by new linear
combinations ξ_k = Σ_j a_{kj} y_j. Q is a symmetric matrix.
4) Verify

2.14 Stirling's Formula


For several applications in later chapters we have to evaluate n! for large values
of n. This is greatly facilitated by use of Stirling's formula

n! = √(2πn) (n/e)^n e^{ω(n)},  with  1/(12(n + 1)) < ω(n) < 1/(12n).   (2.68)

Since in many practical cases n ≫ 1, the factor exp ω(n) may be safely dropped.
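How quickly the correction factor exp ω(n) becomes negligible can be seen numerically. The Python sketch below (our own illustration) compares n! with the leading term √(2πn)(n/e)^n:

```python
from math import exp, factorial, pi, sqrt

def stirling(n):
    """Leading term of Stirling's formula, eq. (2.68), without exp(w(n))."""
    return sqrt(2 * pi * n) * (n / exp(1)) ** n

for n in (5, 10, 20):
    # The ratio n!/stirling(n) equals exp(w(n)) and tends to 1 from above.
    print(n, factorial(n) / stirling(n))
```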

2.15* Central Limit Theorem


Let X_j, j ≥ 1, be a sequence of independent and identically distributed random
variables. We assume that the mean m and the variance σ² of each X_j are finite. The
sum S_n = X_1 + ... + X_n, n ≥ 1, is again a random variable (compare Section
2.2). Because the random variables X_j are independent, we find for the mean

E(S_n) = n·m   (2.69)

and for the variance

σ²(S_n) = n·σ².   (2.70)


Subtracting n·m from S_n and dividing the difference by σ√n we obtain a new
random variable

Y_n = (S_n - n·m)/(σ√n),   (2.71)

which has zero mean and unit variance. The central limit theorem then makes a
statement about the probability distribution of Y_n in the limit n → ∞. It states

lim_{n→∞} P(a < Y_n < b) = (1/√(2π)) ∫_a^b e^{-u²/2} du.   (2.72)

Particularly in physics, the following form is used: We put a = x and b = x + dx,
where dx denotes a small interval. Then the integral on the rhs of (2.72) can be ap-
proximately evaluated and the result reads

lim_{n→∞} P(x < Y_n < x + dx) = (1/√(2π)) e^{-x²/2} dx,   (2.73)

or, verbally: in the limit n → ∞, the probability that Y_n lies in the interval x, ...,
x + dx is given by a Gaussian density multiplied by the interval dx. Note that the
interval dx must not be dropped. Otherwise errors might occur if coordinate
transformations are performed. For other practical applications one often uses
(2.72) in a rough approximation in the form

(2.74)

where

(2.75)

The central limit theorem plays an enormous role in the realm of synergetics in
two ways: It applies to all cases where there is no correlation between different
events but where the outcomes of the different events pile up via a sum. On the
other hand, the breakdown of the central limit theorem in the form (2.72) will
indicate that the random variables X_j are no longer independent but correlated.
Later on, we will encounter numerous examples of such cooperative behavior.
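The statement (2.72) can be illustrated by simulation. In the following Python sketch (our own illustration, taking the X_j uniform on [0, 1), so that m = 1/2 and σ² = 1/12) the scaled sum Y_n of (2.71) is already close to Gaussian for n = 50:

```python
import random
from math import erf, sqrt

random.seed(0)  # fixed seed for reproducibility

# X_j uniform on [0, 1): mean m = 1/2, variance sigma^2 = 1/12.
m, sigma, n, trials = 0.5, sqrt(1 / 12), 50, 20000

def Y_n():
    """Scaled and centered sum, eq. (2.71)."""
    s = sum(random.random() for _ in range(n))
    return (s - n * m) / (sigma * sqrt(n))

samples = [Y_n() for _ in range(trials)]
# Empirical P(Y_n < 1) versus the Gaussian integral of eq. (2.72).
frac = sum(1 for y in samples if y < 1.0) / trials
gauss_cdf = 0.5 * (1 + erf(1.0 / sqrt(2)))  # Phi(1), approx. 0.8413
print(frac, gauss_cdf)
```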

3. Information
How to Be Unbiased

3.1 Some Basic Ideas


In this chapter we want to show how, by some sort of new interpretation of prob-
ability theory, we get an insight into a seemingly quite different discipline, namely
information theory. Consider again the sequence of tossing a coin with outcomes 0
and 1. Now interpret 0 and 1 as a dash and dot of a Morse alphabet. We all know
that by means of a Morse alphabet we can transmit messages so that we may ascribe
a certain meaning to a certain sequence of symbols. Or, in other words, a certain
sequence of symbols carries information. In information theory we try to find a
measure for the amount of information.
Let us consider a simple example with R₀ different possible events
("realizations") which are equally probable a priori. Thus when tossing a coin we
have the events 1 and 0, and R₀ = 2. In the case of a die we have 6 different out-
comes, therefore R₀ = 6. The outcome of tossing a coin or throwing a
die is thus interpreted as receiving a message, of which only one out of R₀ outcomes is
actually realized. Apparently, the greater R₀, the greater is the uncertainty before
the message is received, and the larger will be the amount of information after the
message is received. Thus we may interpret the whole procedure in the following
manner: In the initial situation we have no information I₀, i.e., I₀ = 0, with R₀
equally probable outcomes.
In the final situation we have an information I₁ ≠ 0 with R₁ = 1, i.e., a single
outcome. We now want to introduce a measure for the amount of information
I, which apparently must be connected with R₀. To get an idea how the connection
between R₀ and I must appear, we require that I is additive for independent events.
Thus, when we have two such sets with R₀₁ or R₀₂ outcomes, so that the total
number of outcomes is

R₀ = R₀₁ · R₀₂,   (3.1)

we require

I(R₀₁ · R₀₂) = I(R₀₁) + I(R₀₂).   (3.2)

This relation can be fulfilled by choosing

I = K ln R₀,   (3.3)

where K is a constant. It can even be shown that (3.3) is the only solution to (3.2).
The constant K is still arbitrary and can be fixed by some definition. Usually the
following definition is used. We consider a so-called "binary" system which has
only two symbols (or letters). These may be the head and the tail of a coin, or the
answers yes and no, or the numbers 0 and 1 in a binary system. When we form all
possible "words" (or sequences) of length n, we find R = 2^n realizations. We now
want to identify I with n in such a binary system. We therefore require

I = K ln R = K n ln 2 = n,   (3.4)

which is fulfilled by

K = 1/ln 2 = log₂ e.   (3.5)

With this choice of K, another form of (3.4) reads

I = log₂ R.   (3.4a)

Since a single position in a sequence of symbols (signs) in a binary system is called a
"bit", the information I is now directly given in bits. Thus if R = 8 = 2³ we find
I = 3 bits, and generally for R = 2^n, I = n bits. The definition of information
(3.3) can be easily generalized to the case where we have initially R₀ equally probable
cases and finally R₁ equally probable cases. In this case the information is

I = K ln R₀ - K ln R₁,   (3.6)

which reduces to the earlier definition (3.3) if R₁ = 1. A simple example for this is
given by a die. Let us define a game in which the even numbers mean gain and the
odd numbers mean loss. Then R₀ = 6 and R₁ = 3. In this case the information
content is the same as that of a coin with originally just two possibilities. We now
derive a more convenient expression for the information. To this end we first
consider the following example of a simplified¹ Morse alphabet with dash and dot.
We consider a word of length N which contains N₁ dashes and N₂ dots, with

N = N₁ + N₂.   (3.7)

We ask for the information which is obtained by the receipt of such a word. In the
spirit of information theory we must calculate the total number of words which
can be constructed out of these two symbols for fixed N₁, N₂. The consideration is
quite similar to that of Section 2.11. According to the ways we can dis-
tribute the dashes and dots over the N positions, there are

R = N!/(N₁! N₂!)   (3.8)

1 In the realistic Morse alphabet, the intermission is a third symbol.


possibilities. Or, in other words, R is the number of messages which can be trans-
mitted by N₁ dashes and N₂ dots. We now want to derive the information per
symbol, i.e., i = I/N. Inserting (3.8) into (3.3) we obtain

I = K ln R = K[ln N! - ln N₁! - ln N₂!].   (3.9)

Using Stirling's formula presented in (2.68) in the approximation

ln Q! ≈ Q(ln Q - 1),   (3.10)

which is good for Q > 100, we readily find

I ≈ K(N ln N - N₁ ln N₁ - N₂ ln N₂),   (3.11)

and with use of (3.7) we find

I ≈ -KN[(N₁/N) ln(N₁/N) + (N₂/N) ln(N₂/N)].   (3.12)

We now introduce a quantity which may be interpreted as the probability of
finding the sign "dash" or "dot". The probability is identical to the frequency with
which dash or dot are found:

p_j = N_j/N,  j = 1, 2.   (3.13)

With this, our final formula takes the form

i = I/N = -K(p₁ ln p₁ + p₂ ln p₂).   (3.14)

This expression can be easily generalized if we have not simply two symbols but
several, such as the letters of the alphabet. Then we obtain in a quite similar manner
an expression for the information per symbol, which is given by

i = -K Σ_j p_j ln p_j.   (3.15)

p_j is the relative frequency of the occurrence of the symbol j. From this interpreta-
tion it is evident that i may be used in the context of transmission of information,
etc.
Before continuing we should say a word about information as used in the sense
here. It should be noted that "useful" or "useless" or "meaningful" or "meaning-
less" are not contained in the theory; e.g., in the Morse alphabet defined above
quite a number of words might be meaningless. Information in the sense used here
rather refers to the scarcity of an event. Though this seems to restrict the theory con-
siderably, the theory will turn out to be extremely useful.
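Formula (3.15) is straightforward to evaluate. The Python sketch below (our own illustration, choosing K = 1/ln 2 so that i comes out in bits) treats the balanced two-symbol case of (3.14) and the bird-population data of Exercise 4 below:

```python
from math import log2

def info_per_symbol(ps):
    """i = -K sum_j p_j ln p_j with K = 1/ln 2, i.e. eq. (3.15) in bits."""
    return -sum(p * log2(p) for p in ps if p > 0)

# Two equally frequent symbols (dash/dot, or a fair coin): exactly 1 bit.
print(info_per_symbol([0.5, 0.5]))

# Relative abundances 80%, 10%, 5%, 3%, 2% of the five bird populations:
# strongly biased frequencies carry much less than log2(5) = 2.32 bits.
print(info_per_symbol([0.80, 0.10, 0.05, 0.03, 0.02]))  # approx. 1.07 bits
```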


The expression for the information can be viewed in two completely different
ways. On the one hand we may assume that the p_i's are given by their numerical
values, and then we may write down a number for I by use of formula (3.3). Of
still greater importance, however, is a second interpretation; namely, to consider I
as a function of the p_i's; that means, if we change the values of the p_i's, the value of
I changes correspondingly. To make this interpretation clear we anticipate an ap-
plication which we will treat later on in much greater detail. Consider a gas of atoms
moving freely in a box. It is then of interest to know about the spatial distribution
of the gas atoms. Note that this problem is actually identical to that of Section 2.11
but we treat it here under a new viewpoint. We again divide the container into M
cells of equal size and denote the number of particles in cell k by N_k. Let the total
number of particles be N. The relative frequency of a particle being found in cell k
is then given by

N_k/N = p_k,  k = 1, 2, ..., M.   (3.16)

p_k may be considered as the distribution function of the particles over the cells k.
Because the cells have equal size and do not differ in their physical properties, we
expect that the particles will be found with equal probability in each cell, i.e.,

p_k = 1/M.   (3.17)

We now want to derive this result (3.17) from the properties of information. Indeed
the information may be interpreted as follows: Before we make a measurement or obtain a
message, there are R possibilities or, in other words, K ln R is a measure of our
ignorance. Another way of looking at this is the following: R gives us the number
of realizations which are possible in principle.
Now let us look at an ensemble of M containers, each with N gas atoms. We
assume that in each container the particles are distributed according to different
distribution functions p_k. Accordingly, we obtain different numbers of realizations, i.e., different informa-
tions. For example, if N₁ = N, N₂ = N₃ = ... = 0, we have p₁^(1) = 1, p₂^(1) =
p₃^(1) = ... = 0 and thus I^(1) = 0. On the other hand, if N₁ = N₂ = N₃ = ... =
N/M, we have p₁^(2) = 1/M, p₂^(2) = 1/M, ..., so that I^(2) = -M log₂(1/M) =
M log₂ M, which is a very large number if the number of boxes is large.
Thus when we consider any container with gas atoms, the probability that it is
one with the second distribution function is much greater than one with the first
distribution function. That means there is an overwhelming probability of finding
that probability distribution p_k realized which has the greatest number of pos-
sibilities R and thus the greatest information. Hence we are led to require that

-Σ_i p_i ln p_i = Extr!   (3.18)


is an extremum under the constraint that the total sum of the probabilities p_i
equals unity:

Σ_i p_i = 1.   (3.19)

This principle will turn out to be fundamental for applications to realistic systems
in physics, chemistry, and biology, and we shall come back to it later.
The problem (3.18) with (3.19) can be solved using so-called Lagrangian
multipliers. This method consists in multiplying (3.19) by a still unknown para-
meter λ and adding it to the lhs of (3.18), now requiring that the total expression

-Σ_i p_i ln p_i + λ Σ_i p_i = Extr!   (3.20)

becomes an extremum. Here we are now allowed to vary the p_i's independently
of each other, not taking into account the constraint (3.19). Varying the left-hand
side of (3.20) means taking the derivative of it with respect to p_i, which leads to

-ln p_i - 1 + λ = 0.   (3.21)

Eq. (3.21) allows for the solution

p_i = exp(λ - 1),   (3.22)

which is independent of the index i, i.e., the p_i's are constant. Inserting them into
(3.19) we may readily determine λ so that

M exp(λ - 1) = 1,   (3.23)

or, in other words, we find

p_i = 1/M,   (3.24)

in agreement with (3.17), as expected.
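That p_i = 1/M indeed maximizes (3.18) under the constraint (3.19) can also be checked by brute force. The Python sketch below (our own illustration) compares the entropy of the uniform distribution with that of many randomly drawn normalized distributions:

```python
import random
from math import log

random.seed(1)
M = 8
uniform = [1 / M] * M

def entropy(ps):
    """The functional -sum_i p_i ln p_i of eq. (3.18)."""
    return -sum(p * log(p) for p in ps if p > 0)

# Any other normalized distribution yields a smaller value than p_i = 1/M.
for _ in range(1000):
    w = [random.random() for _ in range(M)]
    ps = [x / sum(w) for x in w]
    assert entropy(ps) <= entropy(uniform) + 1e-12

print(entropy(uniform), log(M))  # the maximum equals ln M
```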

Exercises on 3.1

1) Show by means of (3.8) and the binomial theorem that R = 2^N if N₁, N₂ may be
chosen arbitrarily (within (3.7)).
2) Given a container with five balls in it. These balls may be excited to equidistant
energy levels. The difference in energy between two levels is denoted by Δ.
Now put the energy E = 5Δ into the system. Which is the state with the largest
number of realizations, if a state is characterized by the number of "excited"
balls? What is the total number of realizations? Compare the probabilities for
the single states and the corresponding numbers of realizations. Generalize the
formula for the total number of states to 6 balls which get an energy E = NΔ.


3) Convince yourself that the number of realizations of N₁ dashes and N₂ dots
over N positions could be equally well derived by the distribution of N particles
(or balls) over 2 boxes, with the particle (ball) numbers N₁, N₂ in boxes 1, 2
fixed.
Hint: Identify the (numbered) balls with the (numbered) positions and the index
1 or 2 of the boxes with "dash" or "dot". Generalize this analogy to one
between M symbols over N positions and N balls in M boxes.
4) On an island five different bird populations are found with the following relative
abundance: 80 %, 10 %, 5 %, 3 %, 2 %. What is the information entropy of this
population?
Hint: Use (3.15).

3.2* Information Gain: An Illustrative Derivation


We consider a distribution of balls among boxes, which are labeled by 1, 2, ....
We now ask in how many ways we can distribute N balls among these boxes so that
finally we have N₁, N₂, etc. balls in the corresponding boxes. This task has been
treated in Section 2.11 for two boxes 1, 2 (compare (3.8)). Similarly we now obtain
as the number of configurations

Z₁ = N! / Π_k N_k!.   (3.25)

Taking the logarithm of Z₁ to the basis of two, we obtain the corresponding in-
formation. Now consider the same situation but with balls having two different
colors, black and white (cf. Fig. 3.1). The black balls form a subset of all
balls. Let the number of black balls be N' and their number in box k be N'_k. We now
calculate the number of configurations which we find if we distinguish between
black and white balls. In each box we must now subdivide N_k into N'_k and N_k - N'_k.
Thus we find

Fig. 3.1. Distribution of white and black balls over boxes

Z₂ = [N'! / Π_k N'_k!] · [(N - N')! / Π_k (N_k - N'_k)!]   (3.26)

realizations. Now we consider the ratio between the numbers of realizations,

Z = Z₁/Z₂,   (3.27)


or, using (3.25), (3.26),

Z = [N! / Π_k N_k!] · [Π_k N'_k! Π_k (N_k - N'_k)! / (N'! (N - N')!)].   (3.28)

Using Stirling's formula (2.68), we may put after some analysis

N!/(N - N')! ≈ N^{N'}   (3.29)

and

N_k!/(N_k - N'_k)! ≈ N_k^{N'_k},   (3.30)

and thus

Z ≈ N^{N'} Π_k N'_k! / (N'! Π_k N_k^{N'_k}).   (3.31)

Using again Stirling's formula, we find

ln Z ≈ N' ln N - Σ_k N'_k ln N_k + Σ_k ln N'_k! - ln N'!,   (3.32)

and using

Σ_k N'_k = N'   (3.33)

we obtain

ln Z = Σ_k N'_k ln(N'_k/N_k) - N' ln(N'/N),   (3.34)

which can be written in short in the form

ln Z = N' Σ_k (N'_k/N') ln[(N'_k/N')/(N_k/N)].   (3.35)

As above we now introduce the relative frequencies, or probabilities,

N_k/N = p_k   (3.36)

and

N'_k/N' = p'_k.   (3.37)

If we still divide both sides by N' (and then multiply by the constant K (3.5)), we


obtain our final formula for the information gain in the form

K(p', p) = (K/N') ln Z = K Σ_k p'_k ln(p'_k/p_k),   (3.38)

where

p' = (p'₁, p'₂, ...)   (3.39)

and

p = (p₁, p₂, ...).   (3.40)

The information gain K(p', p) has the following important property, which we
will use in later chapters:

K(p', p) ≥ 0.   (3.41)

The equality sign holds if and only if

p' = p, i.e., p'_k = p_k for all k.
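The property (3.41), with equality only for p' = p, is easy to verify numerically. The following Python sketch (our own illustration, again with K = 1/ln 2 so that the gain comes out in bits) evaluates (3.38) for a few distributions:

```python
from math import log2

def info_gain(p_prime, p):
    """K(p', p) = K sum_k p'_k ln(p'_k/p_k), eq. (3.38), in bits."""
    return sum(pk_ * log2(pk_ / pk)
               for pk_, pk in zip(p_prime, p) if pk_ > 0)

p = [0.25, 0.25, 0.25, 0.25]   # reference distribution
p1 = [0.5, 0.3, 0.1, 0.1]      # a different distribution

print(info_gain(p1, p))        # positive, cf. eq. (3.41)
print(info_gain(p, p))         # zero exactly when p' = p
```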

3.3 Information Entropy and Constraints


In this and the following two sections we have in mind especially applications of the in-
formation concept to physics, so that we shall follow the convention
of denoting the information by S, identifying the constant K in (3.3) with Boltzmann's
constant k_B. For reasons which will appear later, S will be called the information
entropy. Because chemical and biological systems can be viewed as physical sys-
tems, our considerations apply equally well to these other systems. Still more im-
portant, the general formalism of this chapter is also applicable to other sciences,
such as information processing, etc. We start from the basic expression

S = -k_B Σ_i p_i ln p_i.   (3.42)

The indices i may be considered as describing the individual features of the particles
or subsystems. Let us explain this in some detail. The index i may describe, for
instance, the position of a gas particle, or it may describe its velocity, or both
properties. In our previous examples the index i referred to boxes filled with balls.
In the initial chapters on probability and random variables, the index i represented
the values that a random variable may acquire. In this paragraph we assume for
simplicity that the index i is discrete.
A central task to be solved in this book consists in finding ways to determine the
p_i's (compare for example the gas molecules in a container, where one wants to
know their location). The problem we are confronted with in many disciplines is to
make unbiased estimates leading to p_i's which are in agreement with all the possible


knowledge available about the system. Consider an ideal gas in one dimension.
What we could measure, for instance, is the center of gravity. In this case we
would have as constraint an expression of the form

Σ_i p_i q_i = M,    (3.43)

where q_i measures the position of the cell i. M is a fixed quantity equal to Q/N, where Q is the coordinate of the center of gravity, and N the particle number. There are, of course, very many sets of p_i's which fulfill the relation (3.43). Thus we could choose a set {p_i} rather arbitrarily, i.e., we would favor one set against another one. As in everyday life, this is a biased action. How may it be made unbiased? When we
look again at the example of the gas atoms, then we can invoke the principle stated
in Section 3.1. With an overwhelming probability we will find those distributions
realized for which (3.42) is a maximum. However, due to (3.43) not all distributions
can be taken into account. Instead we have to seek the maximum of (3.42) under
the constraint (3.43). This principle can be generalized if we have a set of con-
straints. Let, for example, the variable i distinguish between different velocities. Then we may have the constraint that the total kinetic energy of the particles is fixed. Denoting the kinetic energy of a particle with mass m and velocity v_i by f_i [f_i = (m/2)v_i²], the mean kinetic energy per particle is given by

(3.43a)

In general the single system i may be characterized by quantities f_i^(k), k = 1, 2, ..., M (position, kinetic energy or other typical features). If these features are additive, and the corresponding sums are kept fixed at values f_k, the constraints take the form

Σ_i p_i f_i^(k) = f_k,  k = 1, 2, ..., M.    (3.44)

We further add as usual the constraint that the probability distribution is nor-
malized

Σ_i p_i = 1.    (3.45)

The problem of finding the extremum of (3.42) under the constraints (3.44) and (3.45) can be solved by using the method of Lagrange parameters λ_k, k = 1, 2, ..., M (cf. Sect. 3.1). We multiply the lhs of (3.44) by λ_k and the lhs of (3.45) by (λ - 1) and take the sum of the resulting expressions. We then subtract this sum from (1/k_B)S. The factor 1/k_B amounts to a certain normalization of λ, λ_k. We then have to vary the total sum with respect to the p_i's,

δ{ -Σ_i p_i ln p_i - (λ - 1) Σ_i p_i - Σ_k λ_k Σ_i p_i f_i^(k) } = 0.    (3.46)

Differentiating with respect to p_i and putting the resulting expression equal to


zero, we obtain

-ln p_i - λ - Σ_k λ_k f_i^(k) = 0,    (3.47)

which can be readily solved for Pi yielding

p_i = exp(-λ - Σ_k λ_k f_i^(k)).    (3.48)

Inserting (3.48) into (3.45) yields

e^{-λ} Σ_i exp(-Σ_k λ_k f_i^(k)) = 1.    (3.49)

It is now convenient to abbreviate the sum over i, Σ_i, in (3.49) by


Z = Σ_i exp(-Σ_k λ_k f_i^(k)),    (3.50)

which we shall call the partition function. Inserting (3.50) into (3.49) yields

e^{-λ} Z = 1,    (3.51)

or

A = In Z, (3.52)

which allows us to determine λ once the λ_k's are determined. To find equations for the λ_k's, we insert (3.48) into the equations of the constraints (3.44), which leads immediately to

f_k = e^{-λ} Σ_i f_i^(k) exp(-Σ_l λ_l f_i^(l)).    (3.53)

Eq. (3.53) has a structure rather similar to that of (3.50). The difference between these two expressions arises because in (3.53) each exponential function is still multiplied by f_i^(k). However, we may easily derive the sum occurring in (3.53) from (3.50) by differentiating (3.50) with respect to λ_k. Expressing the first factor in (3.53) by Z according to (3.51), we thus obtain

⟨f_i^(k)⟩ = (1/Z)(-∂/∂λ_k) Σ_i exp{-Σ_l λ_l f_i^(l)},    (3.54)

or in still shorter form

f_k = ⟨f_i^(k)⟩ = -∂ ln Z/∂λ_k.    (3.55)

Because the lhs are prescribed (compare (3.44)) and Z is given by (3.50) which is a


function of the λ_k's in a special form, (3.55) is a concise form for a set of equations for the λ_k's.
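The set of equations (3.55) can be solved numerically for the λ_k's. The following sketch treats a single constraint (M = 1); the feature values f_i and the prescribed mean f_bar are assumed illustrative numbers. A simple fixed-point iteration raises λ while ⟨f_i⟩ is still too large:

```python
import numpy as np

# Single feature f_i (k = 1); level values and prescribed mean are assumed.
f_i   = np.array([0.0, 1.0, 2.0, 3.0])
f_bar = 1.2                              # prescribed mean value, cf. (3.44)

lam = 0.0                                # Lagrange multiplier lambda_1
for _ in range(2000):                    # iterate on eq. (3.55)
    w = np.exp(-lam * f_i)
    mean_f = (f_i * w).sum() / w.sum()   # <f_i> = -d ln Z / d lambda_1
    lam += 0.1 * (mean_f - f_bar)        # increase lam while <f_i> is too large

w = np.exp(-lam * f_i)
Z = w.sum()                              # partition function, eq. (3.50)
p = w / Z                                # maximum-entropy distribution, eq. (3.48)
print(lam, p, (p * f_i).sum())
```

The iteration is a contraction because the derivative of ⟨f_i⟩ with respect to λ is minus the variance of f_i, cf. (3.62).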
We further quote a formula which will become useful later on. Inserting (3.48)
into (3.42) yields

(1/k_B) S_max = -Σ_i p_i ln p_i = Σ_i p_i (λ + Σ_k λ_k f_i^(k)),    (3.56)

which can be written by use of (3.44) and (3.45) as

(1/k_B) S_max = λ + Σ_k λ_k f_k.    (3.57)

The maximum of the information entropy may thus be represented by the mean values f_k and the Lagrange parameters λ_k. Those readers who are acquainted with the Lagrange equations of the first kind in mechanics will remember that the Lagrange parameters have a physical meaning, in that case of forces. In a similar way we shall see later on that the Lagrange parameters λ_k have physical (or chemical or biological, etc.) interpretations. With the derivation of the above formulas (i.e., (3.48), (3.52) with (3.42), (3.55) and (3.57)) our original task to find the p's and S_max is completed.
We now derive some further useful relations. We first investigate how the information S_max is changed if we change the functions f_i^(k) and f_k in (3.44). Because S depends, according to (3.57), not only on the f_k's but also on λ and the λ_k's, which are functions of the f's, we must exercise some care in taking the derivatives with respect to the f's. We therefore first calculate the change of λ (3.52),

δλ ≡ δ ln Z = (1/Z) δZ.

Inserting (3.50) for Z yields

δλ = (1/Z) Σ_i exp(-Σ_k λ_k f_i^(k)) {-Σ_k f_i^(k) δλ_k - Σ_k λ_k δf_i^(k)},

which, by the definition of p_i (3.48), transforms to

δλ = -Σ_k δλ_k Σ_i p_i f_i^(k) - Σ_k λ_k Σ_i p_i δf_i^(k).

Eq. (3.53) and an analogous definition of ⟨δf^(k)⟩ allow us to write the last line as

δλ = -Σ_k f_k δλ_k - Σ_k λ_k ⟨δf^(k)⟩.    (3.58)

Inserting this into δS_max (of (3.57)) we find that the variation of the λ_k's drops out and we are left with

δS_max = k_B Σ_k λ_k (δf_k - ⟨δf^(k)⟩).    (3.59)


We write this in the form

δS_max = k_B Σ_k λ_k δQ_k,    (3.60)

where we define a "generalized heat" by means of

δQ_k = δf_k - ⟨δf^(k)⟩.    (3.61)

The notation "generalized heat" will become clearer below when contact with
thermodynamics is made. In analogy to (3.55), a simple expression for the variance of f_i^(k) (cf. (2.33)) may be derived:

⟨(f_i^(k))²⟩ - ⟨f_i^(k)⟩² = ∂² ln Z/∂λ_k².    (3.62)

In many practical applications, f_i^(k) depends on a further quantity α (or a set of such quantities α_1, α_2, ...). Then we want to express the change of the mean value (3.44) when α is changed. Taking the derivative of f_{i,α}^(k) with respect to α and taking the average value, we find

⟨∂f_i^(k)/∂α⟩ = Σ_i p_i ∂f_i^(k)/∂α.    (3.63)

Using the p_i's in the form (3.48) and using (3.51), (3.63) may be written in the form

⟨∂f_{i,α}^(k)/∂α⟩ = (1/Z) Σ_i (∂f_{i,α}^(k)/∂α) exp(-Σ_l λ_l f_{i,α}^(l)),    (3.64)

which may easily be expressed as a derivative of Z with respect to α:

(3.64) = -(1/λ_k)(1/Z) ∂Z/∂α.    (3.65)

Thus we are led to the final formula

-(1/λ_k) ∂ ln Z/∂α = ⟨∂f_{i,α}^(k)/∂α⟩.    (3.66)

If there are several parameters α_l present, this formula can be readily generalized by providing, on the left- and right-hand sides, the α with respect to which we differentiate with this index l.
As we have seen several times, the quantity Z (3.50), or its logarithm, is very
useful (see e.g., (3.55), (3.62), (3.66)). We want to convince ourselves that ln Z ≡ λ (cf. (3.52)) may be directly determined by a variational principle. A glance at (3.46)
reveals that (3.46) can also be interpreted in the following way: Seek the extremum


of

-Σ_i p_i ln p_i - Σ_k λ_k Σ_i p_i f_i^(k)    (3.67)

under the only constraint

Σ_i p_i = 1.    (3.68)

Now, by virtue of (3.44), (3.57) and (3.52), the extremum of (3.67) is indeed identical with ln Z. Note that the spirit of the variational principle for ln Z is different from that for S. In the former case, we had to seek the maximum of S under the constraints (3.44), (3.45) with f_k fixed and λ_k unknown. Here now, only one constraint, (3.68), applies, and the λ_k's are assumed as given quantities. How such a
switching from one set of fixed quantities to another one can be done will become
more evident by the following example from physics, which will elucidate also many
further aspects of the foregoing.

Exercise on 3.3

To get a feeling for how much is achieved by the extremal principle, answer the
following questions:

1) Determine all solutions of

p_1 + p_2 = 1    (E1)

with 0 ≤ p_i ≤ 1;

all solutions of

p_1 + p_2 + p_3 = 1;    (E2)

all solutions of

Σ_i p_i = 1.    (E3)

2) In addition to (E1), (E2) one further constraint (3.44) is given. Determine now all solutions p_i.
Hint: Interpret (E1)-(E3) as equations in the p_1-p_2 plane, in the p_1-p_2-p_3 space, in the n-dimensional space. What is the geometrical interpretation of additional constraints? What does a specific solution p_1^(0), ..., p_n^(0) mean in the (p_1, p_2, ..., p_n)-space?

3.4 An Example from Physics: Thermodynamics

To visualize the meaning of the index i, let us identify it with the velocity of a
particle. In a more advanced theory Pi is the occupation probability of a quantum
state i of a many-particle system. Further, we identify f_{i,α}^(k) with the energy E_i and the parameter α with the volume. Thus we put

f_{i,α}^(1) = E_i(V);  k = 1,    (3.69)


and have the identifications

f_1 = ⟨E_i⟩ = U,  λ_1 = β,  α = V.    (3.70)

We have, in particular, called λ_1 = β. With this, we may write a number of the
previous formulas in a way which can be immediately identified with relations
well known in thermodynamics and statistical mechanics. Instead of (3.48) we find

p_i = Z^{-1} exp(-βE_i(V)),    (3.71)

which is the famous Boltzmann Distribution Function.


Eq. (3.57) acquires the form

(1/k_B) S_max = ln Z + βU,    (3.72)

or, after a slight rearrangement of this equation

U - T S_max = -k_B T ln Z.    (3.73)

This equation is well known in thermodynamics and statistical physics. The first
term may be interpreted as the internal energy U, and 1/β as the absolute temperature T multiplied by Boltzmann's constant k_B; S_max is the entropy. The rhs represents the free energy, ℱ, so that (3.73) reads in thermodynamic notation

U - TS = ℱ.    (3.74)

By comparison we find

ℱ = -k_B T ln Z,    (3.75)

and S = S_max. Therefore we will henceforth drop the suffix "max". Eq. (3.50) now reads
now

Z = Σ_i exp(-E_i(V)/(k_B T))    (3.76)

and is nothing but the usual partition function. A number of further identities of
thermodynamics can easily be checked by applying our above formulas.
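As a numerical check of these formulas, the following sketch evaluates Z, U, S and ℱ for an assumed set of energy levels (in units where k_B = 1) and verifies (3.74) as well as the relation S = -∂ℱ/∂T derived below in (3.93):

```python
import numpy as np

kB = 1.0                                  # work in units with k_B = 1
E  = np.array([0.0, 1.0, 1.0, 2.0])       # assumed energy levels of a toy system

def thermo(T):
    beta = 1.0 / (kB * T)
    w = np.exp(-beta * E)
    Z = w.sum()                           # partition function, eq. (3.76)
    p = w / Z                             # Boltzmann distribution, eq. (3.71)
    U = (p * E).sum()                     # internal energy
    S = -kB * (p * np.log(p)).sum()       # information entropy, eq. (3.42)
    F = -kB * T * np.log(Z)               # free energy, eq. (3.75)
    return Z, U, S, F

T = 0.7
Z, U, S, F = thermo(T)
print(U - T * S - F)                      # eq. (3.74): U - TS = F, should vanish

# S = -dF/dT, eq. (3.93), checked by a central difference
h = 1e-5
S_numeric = -(thermo(T + h)[3] - thermo(T - h)[3]) / (2 * h)
print(S, S_numeric)
```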
The only problem requiring some thought is to identify independent and dependent variables. Let us begin with the information entropy, S_max. In (3.57) it appears as a function of λ, the λ_k's and the f_k's. However, λ and the λ_k's are themselves determined by equations which contain the f_k's and f_i^(k)'s as given quantities (cf. (3.50), (3.52), (3.53)). Therefore, the independent variables are f_k, f_i^(k), and the dependent variables are λ, λ_k, and thus, by virtue of (3.57), S_max. In practice the f_i^(k)'s are fixed functions of i (e.g., the energy of state "i"), but still depend on parameters α (e.g., the volume, cf. (3.69)). Thus the truly independent variables


in our approach are the f_k's (as above) and the α's. In conclusion we thus find S = S(f_k; α). In our example, f_1 = E ≡ U, α = V, and therefore

S = S(U,V). (3.77)

Now let us apply the general relation (3.59) to our specific model. If we vary only
the internal energy, U, but leave V unchanged, then

δf_1 = δU ≠ 0    (3.78)

and

δf_i^(1) ≡ δE_i(V) = (∂E_i(V)/∂V) δV = 0,    (3.79)

and therefore

δS = k_B β δU,

or

∂S/∂U = k_B β.    (3.80)

According to thermodynamics, the lhs of (3.80) defines the inverse of the absolute
temperature

∂S/∂U = 1/T.    (3.81)

This yields β = 1/(k_B T), as anticipated above. On the other hand, varying V but leaving U fixed, i.e.,

δV ≠ 0,    (3.82)
but

δf_1 = δU = 0,    (3.83)

yields in (3.59)

δS = k_B(-λ_1)⟨∂E_i(V)/∂V⟩ δV,

or

∂S/∂V = -k_B β ⟨∂E_i(V)/∂V⟩.    (3.84)

Since thermodynamics teaches us that

∂S/∂V = P/T,    (3.85)


where P is the pressure, we obtain by comparison with (3.84)

⟨∂E_i(V)/∂V⟩ = -P.    (3.86)

Inserting (3.81) and (3.85) into (3.59) yields

δS = (1/T) δU + (P/T) δV.    (3.87)

In thermodynamics the rhs is equal to dQ/T, where dQ is the heat. This explains the notation "generalized heat" used above after (3.61). These considerations may be generalized to different kinds of particles whose average numbers N_k, k = 1, ..., m, are prescribed quantities. We therefore identify f_1 with E, but f_{k'+1} with N_{k'}, k' = 1, ..., m (note the shift of index!). Since each kind of particle, l, may be present with different numbers N_l, we generalize the index i to i, N_1, ..., N_m and put

f_{i,N_1,...,N_m}^(1) = E_{i,N_1,...,N_m},  f_{i,N_1,...,N_m}^(k'+1) = N_{k'}.
To be in accordance with thermodynamics, we put

λ_1 = 1/(k_B T) ≡ β,  λ_{k'+1} = -μ_{k'}/(k_B T),  k' = 1, ..., m.    (3.88)

μ_k is called the chemical potential.


Equation (3.57) with (3.52) acquires (after multiplying both sides by k_B T) the form

TS = k_B T ln Z + U - μ_1 N_1 - μ_2 N_2 - ···,    (3.89)

where the first term on the rhs, k_B T ln Z, equals -ℱ.

Eq. (3.59) permits us to identify

∂S/∂N_k = -μ_k/T.    (3.90)

The partition function reads

Z = Σ_{N_1,...,N_m} Σ_i exp{-β(E_{i,N_1,...,N_m} - μ_1 N_1 - ··· - μ_m N_m)}.    (3.91)
While the above considerations are most useful for irreversible thermodynamics
(see the following Section 3.5), in thermodynamics the role played by independent
and dependent variables is, to some extent, exchanged. It is not our task to treat
these transformations which give rise to the different thermodynamic potentials
(Gibbs, Helmholtz, etc). We just mention one important case: Instead of U, V
(and N_1, ..., N_m) as independent variables, one may introduce V and T = (∂S/∂U)^{-1} (and N_1, ..., N_m) as new independent variables. As an example we


treat the U-V case (putting formally μ_1, μ_2, ... = 0). According to (3.75) the free energy, ℱ, is there directly given as a function of T. The differentiation ∂ℱ/∂T yields

-∂ℱ/∂T = k_B ln Z + (1/(TZ)) Σ_i E_i exp(-βE_i).

The second term on the rhs is just U/T, so that

-∂ℱ/∂T = k_B ln Z + U/T.    (3.92)

Comparing this relation with (3.73), where 1/β = k_B T, yields the important relation

-∂ℱ/∂T = S,    (3.93)

where we have dropped the suffix "max."

3.5* An Approach to Irreversible Thermodynamics


The considerations of the preceding chapter allow us to simply introduce and
make understandable basic concepts of irreversible thermodynamics. We consider
a system which is composed of two subsystems, e.g., a gas of atoms whose volume
is divided into two subvolumes. We assume that in both subsystems probability
distributions Pi and P; are given. We then define the entropies in the corresponding
subsystems by

S = -k_B Σ_i p_i ln p_i    (3.94a)

and

S' = -k_B Σ_i p'_i ln p'_i.    (3.94b)

Similarly, in both subsystems constraints are given by

Σ_i p_i f_i^(k) = f_k    (3.95a)

and

Σ_i p'_i f_i^(k) = f'_k.    (3.95b)

We require that the sums of the f's in both subsystems are prescribed constants

f_k + f'_k = const    (3.96)

69
58 3. Information

(e.g., the total number of particles, the total energy, etc.). According to (3.57) the
entropies may be represented as

(1/k_B) S = λ + Σ_k λ_k f_k    (3.97a)

and

(1/k_B) S' = λ' + Σ_k λ'_k f'_k.    (3.97b)

(Here and in the following the suffix "max" is dropped). We further introduce the
sum of the entropies in the total system

S^(0) = S + S'.    (3.98)

To make contact with irreversible thermodynamics we introduce "extensive


variables" X. Variables which are additive with respect to the volume are called extensive variables. Thus if we divide the system into two volumes so that V_1 + V_2 = V, then an extensive variable has the property X_{V_1} + X_{V_2} = X_V. Examples are provided by the number of particles, by the energy (if the interaction is of short range), etc. We distinguish the different physical quantities (particle numbers, energies, etc.) by a superscript k and the values such quantities acquire in the state i by the subscript i (or a set of them). We thus write X_i^(k).

Example: Consider the particle number and the energy as extensive variables.
Then we choose k = 1 to refer to the number of particles, and k = 2 to refer to the energy. Thus

X_i^(1) = N_i  and  X_i^(2) = E_i.

Evidently, we have to identify X_i^(k) with f_i^(k),

X_i^(k) = f_i^(k).    (3.99)

For the rhs of (3.95) we introduce correspondingly the notation X_k. In the first part of this chapter we confine our attention to the case where the extensive variables X_k may vary, but the X_i^(k) are fixed. We differentiate (3.98) with respect to X_k. Using (3.97) and

X_k + X'_k = const    (3.100)

(compare (3.96)), we find

∂S^(0)/∂X_k = ∂S/∂X_k - ∂S'/∂X'_k = k_B(λ_k - λ'_k).    (3.101)


Consider an example: take X_k = U (internal energy); then λ_k = β = 1/(k_B T) (cf. Sect. 3.4). T is again the absolute temperature.
Since we want to consider processes we now admit that the entropy S depends
on time. More precisely, we consider two subsystems with entropies Sand S'
which are initially kept under different conditions, e.g., having different tempera-
tures. Then we bring these two systems together so that for example the energy
between these two systems can be exchanged. It is typical for the considerations of
irreversible thermodynamics that no detailed assumptions about the transfer
mechanisms are made. For instance one completely ignores the detailed collision
mechanisms of molecules in gases or liquids. One rather treats the mechanisms in a
very global manner which assumes local thermal equilibrium. This comes from the
fact that the entropies S, S' are determined just as for an equilibrium system with
given constraints. Due to the transfer of energy or other physical quantities the
probability distribution p_i will change. Thus, for example, in a gas the molecules will acquire different kinetic energies when the gas is heated, and so the p_i change. Because there is a steady transfer of energy or other quantities, the probability distribution p_i changes steadily as a function of time. When the p_i's change, of course, the values of the f_k's (compare (3.95a) or (3.95b)) are also functions of time. We
now extend the formalism of finding the entropy maximum to the time-dependent
case, i.e., we imagine that the physical quantities f_k are given (and changing) and the p_i's must be determined by maximizing the entropy

S = -k_B Σ_i p_i ln p_i    (3.102)

under the constraints

Σ_i p_i f_i^(k) = f_k(t).    (3.103)

In (3.103) we have admitted that the f_k's are functions of time; e.g., the energy U is a function of time. Taking now the derivative of (3.102) with respect to time, and keeping in mind that S depends via the constraints (3.103) on f_k ≡ X_k, we obtain

dS/dt = Σ_k (∂S/∂X_k)(dX_k/dt).    (3.104)

As we have seen before (compare (3.101)),

∂S/∂X_k = k_B λ_k    (3.105)

introduces the Lagrange multipliers which define the so-called intensive variables.
The second factor in (3.104) gives us the temporal change of the extensive quan-
tities X_k, e.g., the temporal change of the internal energy. Since there exist conser-
vation laws, e.g., for the energy, the decrease or increase of energy in one system
must be caused by an energy flux between different systems. This leads us to define


the flux by the equation

dX_k/dt = J_k.    (3.106)

Replacing S by S^(0) in (3.104), and using (3.101) and (3.106), we obtain

(1/k_B) dS^(0)/dt = Σ_k (λ_k - λ'_k) J_k.    (3.107a)

The difference (λ_k - λ'_k)k_B is called a generalized thermodynamic force (or affinity), and we thus put

F_k = k_B (λ_k - λ'_k).    (3.107b)

The motivation for this nomenclature will become evident by the examples treated
below. With (3.107b) we may cast (3.107a) into the form

dS^(0)/dt = Σ_k F_k J_k,    (3.108)

which expresses the temporal change of the entropy S^(0) of the total system (1 + 2) in terms of forces and fluxes. If F_k = 0, the system is in equilibrium; if F_k ≠ 0, an irreversible process occurs.

Examples: Let us consider two systems separated by a diathermal wall and take X_k = U (U: internal energy). Because, according to thermodynamics,

∂S/∂U = 1/T,    (3.109)

we find

F_k = 1/T - 1/T'.    (3.110)

We know that a difference in temperature causes a flux of heat. Thus the generalized
forces occur as the causes of fluxes. For a second example consider X_k as the mole number. Then one derives

F_k = μ'/T' - μ/T.    (3.111)

A difference in chemical potential, μ, gives rise to a change of mole numbers.
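The sign of the entropy production can be illustrated numerically. In the following sketch two subsystems with assumed unit heat capacities exchange energy through a linear flux law J = LF (anticipating (3.139)); the production F·J entering (3.108) stays nonnegative and vanishes as the temperatures equalize:

```python
# Two subsystems at temperatures T and T' exchange energy. With the linear
# ansatz J = L * F and the affinity F = 1/T - 1/T' of eq. (3.110), the
# entropy production F * J is never negative and vanishes at T = T'.
# Heat capacities C = C' = 1 and the value of L are assumed for illustration.
L_coeff = 0.5
T, Tp = 1.0, 2.0                    # initial temperatures of the two subsystems
dt = 0.01
total_production = 0.0
for _ in range(5000):
    F = 1.0 / T - 1.0 / Tp          # thermodynamic force (affinity)
    J = L_coeff * F                 # energy flux into subsystem 1
    total_production += F * J * dt  # dS0/dt = F * J, eq. (3.108)
    T  += J * dt                    # dU = C dT with C = 1
    Tp -= J * dt                    # energy conservation
print(T, Tp, total_production)
```

The accumulated production approaches the exact entropy change ln(T_f/T) + ln(T_f/T') of two unit heat capacities relaxing to the common final temperature T_f.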


In the above examples, 1) and 2), corresponding fluxes occur, namely in 1) a heat
flux, and in 2) a particle flux. We now treat the case that the extensive variables X


are parameters of f. In this case we have, in addition to (3.99),

X_k = α_k,    (3.112)

f_i^(k) = f_{i,α_1,...,α_j,...}^(k).    (3.113)

(Explicit examples from physics will be given below.) Then, according to (3.59), we have quite generally

∂S/∂α_j = k_B Σ_k λ_k {∂f_k/∂α_j - ⟨∂f_{i,α_1,...}^(k)/∂α_j⟩}.    (3.114)

We confine our subsequent considerations to the case where f_k does not depend on α, i.e., we treat f_k and α as independent variables. (Example: In

f_{i,α}^(1) = E_i(V)    (3.115)

the volume is an independent variable, and so is the internal energy U = ⟨E_i(V)⟩.)


Then (3.114) reduces to

(1/k_B) ∂S/∂α_j = -Σ_k λ_k ⟨∂f_{i,α_1,...}^(k)/∂α_j⟩.    (3.116)

For two systems, characterized by (3.98) and (3.100), we have now the additional
conditions

α_j + α'_j = const.    (3.117)

(Example: α = volume.)
The change of the entropy S^(0) of the total system with respect to time, dS^(0)/dt, may be found with the help of (3.107a), (3.116) and (3.117):

(1/k_B) dS^(0)/dt = Σ_k (λ_k - λ'_k) dX_k/dt - Σ_{j,k} [λ_k ⟨∂f_{i,α}^(k)/∂α_j⟩ - λ'_k ⟨∂f'_{i,α}^(k)/∂α_j⟩] dα_j/dt.    (3.118)

Using the definitions (3.106) and (3.107b) together with

dα_j/dt = J̃_j    (3.119)

and

F̃_j = -k_B Σ_k [λ_k ⟨∂f_{i,α}^(k)/∂α_j⟩ - λ'_k ⟨∂f'_{i,α}^(k)/∂α_j⟩],    (3.120)

we cast (3.118) into the form

dS^(0)/dt = Σ_{k=1}^n F_k J_k + Σ_{j=1}^m F̃_j J̃_j.    (3.121)


Putting

F̄_l = F_l,  J̄_l = J_l  if 1 ≤ l ≤ n,
F̄_l = F̃_{l-n},  J̄_l = J̃_{l-n}  if n + 1 ≤ l ≤ n + m,    (3.122)

we arrive at an equation of the same structure as (3.107a), but now taking into account the parameters α:

dS^(0)/dt = Σ_{l=1}^{n+m} F̄_l J̄_l.    (3.123)

The importance of this generalization will become clear from the following example, which we already encountered in the foregoing section (α = V, i → i, N):

f_{i,α}^(1) = E_{i,N}(V),  f_{i,α}^(2) = N.    (3.124)

Inserting (3.124) into (3.118), we arrive at

dS^(0)/dt = (1/T - 1/T') dU/dt + (μ'/T' - μ/T) dN/dt + (P/T - P'/T') dV/dt.    (3.125)

The interpretation of (3.125) is obvious from the examples given in (3.110) and (3.111). The first two terms are included in (3.107a); the third term is a consequence of the generalized relation (3.123).
The relation (3.125) is a well-known result in irreversible thermodynamics.
It is, however, evident that the whole formalism is applicable to quite different
disciplines, provided one may invoke there an analogue of local equilibrium.
Since irreversible thermodynamics mainly treats continuous systems we now
demonstrate how the above considerations can be generalized to such systems.
We again start from the expression of the entropy

S = -k_B Σ_i p_i ln p_i.    (3.126)

We assume that S and the probability distribution p_i depend on space and time.
The basic idea is this. We divide the total continuous system into subsystems
(or the total volume in subvolumes) and assume that each subvolume is so large
that we can still apply the methods of thermodynamics but so small that the
probability distribution, or, the value of the extensive variablefmay be considered
as constant. We again assume that the p/s are determined instantaneously and
locally by the constraints (3.103) whereh(t) has now to be replaced by

h(X,f). (3.127)

We leave it as an exercise to the reader to extend our considerations to the case


where the f^(k) depend on parameters α,

f_{i,α(x,t)}^(k).    (3.128)

We thus consider the equations

Σ_i p_i f_i^(k)(x, t) = f_k(x, t)    (3.129)

as determining the p_i's. We divide the entropy S by a unit volume V_0 and consider
in the following the change of this entropy density

s = S/V_0.    (3.130)

We again identify the extensive variables with the f's,

X_k = f_k(x, t),    (3.131)

which are now functions of space and time. We further replace

(3.132)

It is a simple matter to repeat all the considerations of Section 3.3 in the case in
which the f's are space- and time-dependent. One then shows immediately that one
may write the entropy density again in the form

(1/k_B) s(x, t) = λ(x, t) + Σ_k λ_k(x, t) f_k(x, t),    (3.133)

which completely corresponds to the form (3.97).


It is our purpose to derive a continuity equation for the entropy density and to
derive explicit expressions for the local temporal change of the entropy and the
entropy flux. We first consider the local temporal change of the entropy by taking
the derivative of (3.133), which gives us (as we have seen already in formula (3.59))

(1/k_B) ∂s/∂t = Σ_k λ_k {∂f_k/∂t - ⟨∂f_i^(k)/∂t⟩}.    (3.134)

We concentrate our considerations on the case in which ⟨∂f_i^(k)/∂t⟩ vanishes. ∂f_k/∂t gives us the local temporal change of the extensive variables, e.g., the energy, the momentum, etc. Since we may assume that these variables X_k obey continuity equations of the form

∂X_k/∂t + ∇·J_k = 0,    (3.135)

where the J_k are the corresponding fluxes, we replace Ẋ_k in (3.134) by -∇·J_k. Thus


(3.134) reads

(1/k_B) ∂s/∂t = -Σ_k λ_k ∇·J_k.    (3.136)

Using the identity λ_k ∇·J_k = ∇·(λ_k J_k) - J_k·∇λ_k and rearranging terms in (3.136), we arrive at our final relation

∂s/∂t + ∇·(k_B Σ_k λ_k J_k) = k_B Σ_k J_k·∇λ_k,    (3.137)

whose lhs has the typical form of a continuity equation. This leads us to the idea that we may consider the quantity

k_B Σ_k λ_k J_k

as an adequate expression for the entropy flux. On the other hand, the rhs may be interpreted as the local production rate of entropy, so that we write

σ = k_B Σ_k J_k·∇λ_k.    (3.138)

Thus (3.137) may be interpreted as follows: The local production of entropy leads to a temporal entropy change (first term on the lhs) and an entropy flux (second term). The importance of this equation rests on the following: the quantities J_k and ∇λ_k can again be identified with macroscopically measurable quantities. Thus, for example, if J_k is the energy flux, k_B ∇λ_k = ∇(1/T) is the thermodynamic force.
An important comment should be added. In the realm of irreversible thermo-
dynamics several additional hypotheses are made to find equations of motion for
the fluxes. The usual assumptions are 1) the system is Markovian, i.e., the fluxes
at a given instant depend only on the affinities at that instant, and 2) one considers
only linear processes, in which the fluxes are proportional to the forces. That means
one assumes relations of the form

J_j = Σ_k L_jk F_k.    (3.139)

The coefficients L_jk are called Onsager coefficients. Consider as an example the heat flow J. Phenomenologically it is related to the temperature, T, by a gradient law

J = -κ∇T,    (3.140)

or, written in a somewhat different way,

J = κT² ∇(1/T).    (3.141)

By comparison with (3.139) and (3.138) we may identify ∇(1/T) with the affinity


and κT² with the kinetic coefficient L_11. Further examples are provided by Ohm's
law in the case of electric conduction or by Fick's law in the case of diffusion.
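The equivalence of (3.140) and (3.141) is easy to check numerically on an assumed smooth temperature profile; finite differences introduce only a small discretization error:

```python
import numpy as np

kappa = 2.0                                    # assumed heat conductivity
x = np.linspace(0.0, 1.0, 1001)                # 1-D grid
T = 300.0 + 10.0 * np.sin(2 * np.pi * x)       # assumed temperature profile
dx = x[1] - x[0]

J_fourier = -kappa * np.gradient(T, dx)               # eq. (3.140)
J_onsager = kappa * T**2 * np.gradient(1.0 / T, dx)   # eq. (3.141), L_11 = kappa*T^2
print(np.max(np.abs(J_fourier - J_onsager)))          # small discretization residue
```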
In the foregoing we have given a rough sketch of some basic relations. There are
some important extensions. While we assumed an instantaneous relation between p_i(t) and f_k(t) (cf. (3.129)), more recently a retarded relation has been assumed by Zubarev
and others. These approaches still are subject to the limitations we shall discuss in
Section 4.11.

Exercise on 3.5

1) Since S^(0), S and S' in (3.98) can be expressed by probabilities p_i, ... according to (3.42), it is interesting to investigate the implications of the additivity relation (3.98) with respect to the p's. To this end, prove the theorem: Given are the entropies (we omit the common factor k_B)

S^(0) = -Σ_{i,j} p(i,j) ln p(i,j),    (E.1)

S = -Σ_i p_i ln p_i,    (E.2)

S' = -Σ_j p'_j ln p'_j,    (E.3)

where

p_i = Σ_j p(i,j),    (E.4)

p'_j = Σ_i p(i,j).    (E.5)

Show: the entropies are additive, i.e.,

S^(0) = S + S'    (E.6)

holds, if and only if

p(i,j) = p_i p'_j    (E.7)

(i.e., the joint probability p(i,j) is a product or, in other words, the events
X = i, Y = j are statistically independent).

Hint: 1) to prove "if", insert (E.7) into (E.1), and use Σ_i p_i = 1, Σ_j p'_j = 1;
2) to prove "only if", start from (E.6), insert on the lhs (E.1), and on the rhs (E.2) and (E.3), which eventually leads to

Σ_{i,j} p(i,j) ln (p(i,j)/(p_i p'_j)) = 0.

Now use the property (3.41) of K(p,p'), replacing in (3.38) the index k by the double index i, j.
2) Generalize (3.125) to several particle numbers, N_k.
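The "if" part of the theorem in exercise 1), and the inequality S^(0) < S + S' for correlated systems, can be checked numerically; the joint distributions below are assumed illustrative values sharing the same marginals:

```python
import numpy as np

def entropy(p):
    """Information entropy with the common factor k_B omitted, as in (E.1)-(E.3)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

p  = np.array([0.2, 0.5, 0.3])            # marginal p_i, assumed values
pq = np.array([0.6, 0.4])                 # marginal p'_j, assumed values

joint_indep = np.outer(p, pq)             # p(i,j) = p_i * p'_j, eq. (E.7)
print(entropy(joint_indep.ravel()), entropy(p) + entropy(pq))   # equal, eq. (E.6)

joint_corr = np.array([[0.2, 0.0],        # correlated joint with the same marginals
                       [0.3, 0.2],
                       [0.1, 0.2]])
print(entropy(joint_corr.ravel()), entropy(p) + entropy(pq))    # S(0) < S + S'
```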


3.6 Entropy-Curse of Statistical Mechanics?


In the foregoing chapters we have seen that the essential relations of thermo-
dynamics emerged in a rather natural way from the concept of information. Those
readers who are familiar with derivations in thermodynamics will admit that the
present approach has an internal elegance. However, there are still deep questions
and problems behind information and entropy. What we had intended originally was to make an unbiased guess about the p_i's. As we have seen, these p_i's drop out from the thermodynamic relations, so that it actually does not matter how the p_i's look explicitly. It even does not matter how the random variables, over whose values
i we sum up, have been chosen. This is quite in accordance with the so-called
subjectivistic approach to information theory in which we just admit that we have
only limited knowledge available, in our case presented by the constraints.
There is, however, another school, called the objectivistic one. According to it, it should be, at least in principle, possible to determine the p_i's directly. We then have to check whether our above approach agrees with the direct determination of the p_i's. As an example let us consider physics. First of all we may establish a necessary condition. In statistical mechanics one assumes that the p_i's obey the so-called Liouville equation, which is a completely deterministic equation (compare the exercises on Section 6.3). This equation allows for some constants of motion (energy, momentum, etc.) which do not change with time. The constraints of Section 3.4 are compatible with the approach of statistical mechanics, provided the f_k's are such constants of motion. In this case one can prove that the p_i's indeed fulfill Liouville's equation (compare the exercises on Section 6.3).
Let us consider now a time-dependent problem. Here it is one of the basic
postulates in thermodynamics that the entropy increases in a closed system. On the
other hand, if the p_i's obey Liouville's equation and the initial state of the system is known with certainty, then it can be shown rigorously that S is time independent. This can be made plausible as follows: Let us identify the subscripts i with a coordinate q or with indices of cells in some space (actually it is a high-dimensional space of positions and momenta). Because p_i obeys a deterministic equation, we always know that p_i = 1 if that cell is occupied at a time t, and p_i = 0 if the cell is unoccupied. Thus the information remains equal to zero because there is no uncertainty in the whole course of time. To meet the requirement of an increasing entropy, two approaches have been suggested:
1) One averages the probability distribution over small volumes², replacing p_i by (1/ΔV)∫ p_i dV = P̄_i, and forming S̄ = -k_B Σ_i P̄_i ln P̄_i (or rather an integral over i). This S̄ is called the "coarse-grained entropy". By this averaging we take into account that we have no complete knowledge about the initial state. One can show that the initial distribution spreads more and more, so that an uncertainty with respect to the actual distribution arises, resulting in an increase of entropy. The basic idea of coarse graining can be visualized as follows (cf. page 1). Consider a drop of ink
poured into water. If we follow the individual ink particles, their paths are com-

² Meant are volumes in phase space.


pletely determined. However, if we average over volumes, then the whole state gets
more and more diffuse. This approach has certain drawbacks because it appears
that the increase of entropy depends on the averaging procedure.
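A toy illustration of coarse graining: an ensemble of points evolved by a strictly deterministic map starts in a tiny interval (coarse-grained entropy ≈ 0) and spreads over the coarse cells, so the coarse-grained entropy grows. The doubling map below is only an assumed stand-in for the true Liouville dynamics, and the cell number is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(1)
x = 0.3 + 1e-6 * rng.random(100000)    # ensemble concentrated in a tiny interval

def coarse_entropy(x, cells=32):
    """Entropy of the histogram over coarse cells (k_B omitted)."""
    counts, _ = np.histogram(x, bins=cells, range=(0.0, 1.0))
    P = counts[counts > 0] / counts.sum()
    return -np.sum(P * np.log(P))

S0 = coarse_entropy(x)
for _ in range(30):
    x = (2.0 * x) % 1.0                # deterministic doubling map
S1 = coarse_entropy(x)
print(S0, S1)                          # S0 ~ 0, S1 close to ln(32)
```

Each individual trajectory is fully determined, yet the coarse-grained description becomes more and more diffuse, just as for the ink drop.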
2) In the second approach we assume that it is impossible to determine the
time-dependence of Pi by the completely deterministic equations of mechanics.
Indeed it is impossible to neglect the interaction of a system with its surroundings.
The system is continuously subjected to fluctuations from the outer world. These can
be, if no other contacts are possible, vacuum fluctuations of the electromagnetic
field, fluctuations of the gravitational field, etc. In practice, however, such interac-
tions are much more pronounced and can be taken into account by "heatbaths",
or "reservoirs" (cf. Chap. 6). This is especially so if we treat open systems where
we often need decay constants and transition rates explicitly. Thus we shall adopt
in our book the attitude that the p_i's are always, at least to some extent, spreading
which automatically leads to an increase of entropy (in a closed system), or more
generally, to a decrease of information gain (cf. Section 3.2) in an open (or closed)
system (for details cf. Exercises (1) and (2) of Section 5.3).
Lack of space does not allow us to discuss this problem in more detail but we
think the reader will learn still more by looking at explicit examples. Those will be
exhibited later in Chapter 4 in context with the master equation, in Chapter 6 in
context with the Fokker-Planck equation and Langevin equation, and in Chapters
8 to 10 by explicit examples. Furthermore the impact of fluctuations on the entropy
problem will be discussed in Section 4.10.

4. Chance
How Far a Drunken Man Can Walk

While in Chapter 2 we dealt with a fixed probability measure, we now study
stochastic processes in which the probability measure changes with time. We first
treat models of Brownian movement as an example of a completely stochastic
motion. We then show how further and further constraints, for example in the
frame of a master equation, render the stochastic process a more and more deter-
ministic process.
This Chapter 4, and Chapter 5, are of equal importance for what follows. Since
Chapter 4 is somewhat more difficult to read, students may also first read 5 and
then 4. On the other hand, Chapter 4 continues directly the line of thought of
Chapters 2 and 3. In both cases, sections with an asterisk in the heading may be
omitted during a first reading.

4.1 A Model of Brownian Movement

This section will serve three purposes. 1) We give a most simple example of a
stochastic process; 2) we show how a probability can be derived explicitly in this
case; 3) we show what happens if the effects of many independent events pile up.
The binomial distribution which we have derived in Section 2.11 allows us to treat
Brownian movement in a nice fashion. We first explain what Brownian movement
means: When a small particle is immersed in a liquid, one observes under a micro-
scope a zig-zag motion of this particle. In our model we assume that a particle
moves along a one-dimensional chain where it may hop from one point to one of
its two neighboring points with equal probability. We then ask what is the prob-
ability that after n steps the particle reaches a certain distance, x, from the point
where it had started. By calling a move to the right, success, and a move to the
left, failure, the final position, x, of the particle is reached after, say, s successes and
(n - s) failures. If the distance for an elementary step is a, the distance, x, reached
reads

x = a(s - (n - s)) = a(2s - n) (4.1)

from which we may express the number of successes by

s = n/2 + x/(2a). (4.2)

Denoting the transition time per elementary step by τ, we have for the time t after
n steps

t = nτ. (4.3)

The probability of finding the configuration with n trials out of which s are success-
ful has been derived in Section 2.11. It reads

P(s, n) = \binom{n}{s} (1/2)^n (4.4)

since p = q = 1/2. Replacing s and n by the directly measurable quantities x


(distance), and t (time) via (4.2) and (4.3), we obtain instead of P (4.4)

p(x, t) ≡ P(s, n) = \binom{t/τ}{t/(2τ) + x/(2a)} (1/2)^{t/τ}. (4.4a)

Apparently, this formula is not very handy and, indeed, difficult to evaluate exactly.
However, in many cases of practical interest we make our measurements in times t
which contain many elementary steps so that n may be considered as a large number.
Similarly we also assume that the final position x requires many steps s so that also
s may be considered as large. How to simplify (4.4a) in this limiting case requires
a few elementary thoughts in elementary calculus. (Impatient readers may quickly
pass over to the final result (4.29) which we need in later chapters.) To evaluate
(4.4) for large s and n we write (4.4) in the form

n!/(s!(n - s)!) · (1/2)^n (4.5)

and apply Stirling's formula (compare Section 2.14) to the factorials. We use
Stirling's formula in the form

n! ≈ e^{-n} n^n (2πn)^{1/2} (4.6)

and correspondingly for s! and (n - s)!. Inserting (4.6) and the corresponding
relations into (4.4), we obtain

p(x, t) ≡ P(s, n) = A·B·C·F (4.7)

where
A = e^{-n}/(e^{-s} e^{-(n-s)}) = 1, (4.8)

B = exp[n ln n - s ln s - (n - s) ln (n - s)], (4.9)

C = n^{1/2} (2πs(n - s))^{-1/2}, (4.10)

F = exp(-n ln 2).


A stems from the powers of e in (4.6) which in (4.8) cancel out. B stems from the
second factor in (4.6). The factor C stems from the last factor in (4.6), while F
stems from the last factor in (4.5). We first show how to simplify (4.9). To this end
we replace s by means of (4.2), put

x/(2a) = ξ, (4.11)

and use the following elementary formulas for the logarithm

ln (n/2 ± ξ) = ln n - ln 2 + ln (1 ± 2ξ/n), (4.12)

and
ln (1 + α) = α - α²/2 + ··· (4.13)

Inserting (4.12) with (4.13) into (4.9) and keeping only terms up to second order in
ξ, we obtain for B·F

B·F = exp{-2ξ²/n} = exp{-x²τ/(2a²t)}. (4.14)

We introduce the abbreviation

D = a²/τ (4.15)

where D is called the diffusion constant. In the following we require that when we
let a and τ go to zero the diffusion constant D remains a finite quantity. Thus (4.14)
takes the form

B·F = exp{-x²/(2Dt)}. (4.16)

We now discuss the factor C in (4.7). We shall see later that this factor is closely
related to the normalization constant. Inserting (4.3) and (4.2) into C, we find

C = (t/τ)^{1/2} [2π (t/(2τ) + x/(2a)) (t/(2τ) - x/(2a))]^{-1/2} (4.17)

or, after elementary algebra

C = (2τ/(πt))^{1/2} (1 - x²τ/(Dt²))^{-1/2}. (4.18)


Because we have in mind to let τ and a go to zero while keeping D fixed, we find that

x²τ/(Dt²) → 0

so that the term containing x² in (4.18) can be dropped and we are left with

C = (2τ/(πt))^{1/2}. (4.19)

Inserting the intermediate results A (4.8), B·F (4.16) and C (4.19) into (4.7), we
obtain

p(x, t) = (2τ/(πt))^{1/2} exp{-x²/(2Dt)}. (4.20)

The occurrence of the factor τ^{1/2} is, of course, rather disturbing because we want
to let τ → 0. This problem can be resolved, however, when we bear in mind that it
does not make sense to introduce a continuous variable x¹ and seek a probability
to find a specific value x. This probability must necessarily tend to zero. We there-
fore have to ask what is the probability to find the particle after time t in an interval
between x_1 and x_2 or, in the original coordinates, between s_1 and s_2. We therefore
have to evaluate (compare Section 2.5)

Σ_{s=s_1}^{s_2} P(s, n). (4.21)

If P(s, n) is a slowly varying function in the interval s_1, ..., s_2, we may replace the
sum by an integral

Σ_{s=s_1}^{s_2} P(s, n) = ∫_{s_1}^{s_2} P(s, n) ds. (4.22)

We now pass over to the new coordinates x, t. We note that on account of (4.2)

ds = dx/(2a). (4.23)

Using furthermore (4.7) we find instead of (4.22)

Σ_{s=s_1}^{s_2} P(s, n) = ∫_{x_1}^{x_2} p(x, t) (1/(2a)) dx. (4.24)

As we shall see in a minute, it is useful to abbreviate

(1/(2a)) p(x, t) = f(x, t) (4.25)

1 Note that through (4.15) and (4.1) x becomes a continuous variable as τ tends to zero!


so that (4.22) reads finally

Σ_{s=s_1}^{s_2} P(s, n) = ∫_{x_1}^{x_2} f(x, t) dx. (4.26)

We observe that when inserting the probability distribution (4.7), which is
proportional to τ^{1/2}, into (4.21) the factor τ^{1/2} occurs simultaneously with 1/a.
We therefore introduce in accordance with (4.15)

τ^{1/2}/a = D^{-1/2} (4.27)

so that (4.21) acquires the final form

Σ_{s=s_1}^{s_2} P(s, n) = ∫_{x_1}^{x_2} (2πDt)^{-1/2} exp{-x²/(2Dt)} dx. (4.28)

When we compare the final form (4.28) with (2.21), we may identify

f(x, t) = (2πDt)^{-1/2} exp{-x²/(2Dt)} (4.29)

as the probability density (Fig. 4.1). In the language of physicists, f(x, t) dx gives


us the probability of finding the particle after time t in the interval x ... x + dx.
The above steps (implying (4.21)-(4.29)) are often abbreviated as follows:
Assuming again a slow, continuous variation of P(s, n) and of f, the sum on the
lhs of (4.26) is written as

P(s, n) Δs,

Fig. 4.1. The distribution function (4.29) as a function of the space point x for three different times

whereas the rhs of (4.26) is written as

f(x, t) Δx.


Passing over to infinitesimal intervals, (4.26) is given the form

P(s, n) ds = f(x, t) dx. (4.30)

Explicit examples can be found in a number of disciplines, e.g., in physics. The
result (4.29) is of fundamental importance. It shows that when many independent
events (namely going backward or forward by one step) pile up, the probability of
the resulting quantity (namely the distance x) is given by (4.29), i.e., the normal
distribution. A nice example for (4.4) or (4.29) is provided by the gambling machine
of Fig. 4.2. Another important result: in the limit of continuously varying variables,
the original probability takes a very simple form.
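The approach to the normal distribution can also be watched numerically: the end points of many simulated walks have the variance ⟨x²⟩ = Dt predicted by (4.29). The identification D = a²/τ used below follows our reading of (4.15) and (4.27), and all numerical values are arbitrary choices:

```python
import random

random.seed(1)
a, tau = 0.1, 0.005            # elementary step length and step time
n = 400                        # number of steps, so t = n * tau
t = n * tau
D = a * a / tau                # diffusion constant, cf. (4.15)/(4.27)

# End positions of many independent walkers starting at x = 0.
walkers = 5_000
finals = [a * sum(random.choice((-1, 1)) for _ in range(n)) for _ in range(walkers)]

# The empirical second moment approaches <x^2> = D t of the density (4.29).
var = sum(x * x for x in finals) / walkers
```

Histogramming `finals` would likewise trace out the bell shape of Fig. 4.1.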

Fig. 4.2. Random walk of a ball in a gambling machine yields the binomial distribution (4.4). Here n = 4. Shown is the average number of balls in the final boxes

Let us now discuss restricted Brownian movement. We again treat a hopping
process, but insert between each two points a diaphragm so that the particle can
only hop in one direction, say, to the right. Consequently,

p = 1 and q = 0. (4.31)

In this case, the number, s, of successes coincides with the number, n, of hops

s = n. (4.32)

Thus the probability measure is

P(s, n) = δ_{s,n}. (4.33)

We again introduce a coordinate x = sa and time t = nτ, and let a → 0, τ → 0,
so that x and t become continuous variables. Using (4.30) we obtain a probability
density

f(x, t) = δ(x - vt), v = a/τ, (4.34)


because the Kronecker δ becomes Dirac's δ-function in the continuum. Note that
lim_{τ→0} and lim_{a→0} must be taken such that v = a/τ remains finite.

Exercise on 4.1

Calculate the first two moments for (4.29) and (4.34) and compare their time
dependence. How can this be used to check whether nerve conduction is a diffusion
or a unidirectional process?

4.2 The Random Walk Model and Its Master Equation


In the preceding Section 4.1 we have already encountered an example of a stochastic
process going on in time; namely, the motion of a particle which is randomly hop-
ping backward or forward. It is our goal to derive an equation for the probability
measure, or, more precisely, we wish to establish an equation for the probability
that after n + 1 pushes the particle is at the position m, m = 0, ±1, ±2, ±3, ....
We denote this probability by P(m; n + 1). Since the particle can hop in one ele-
mentary act only over a distance "1", it must have been after n pushes at m - 1 or
m + 1 so that it now reaches exactly the position m. There it had arrived with prob-
abilities P(m - 1; n) or P(m + 1; n). Thus P(m; n + 1) consists of two parts
stemming from two possibilities for the particle to jump. The particle moves from
m - 1 to m with the transition probability

w(m, m - 1) = 1/2 (= p). (4.35)

The probability, P(m; n + 1), of finding the particle at m after n + 1 pushes when
it has been at time n, at m - 1 is given by the product w(m, m - 1)· P(m - 1; n).
A completely analogous expression can be derived when the particle comes from
m + 1. Since the particle must come either from m - 1 or m + 1 and these are
independent events, P(m; n + 1) can be written down in the form

P(m; n + 1) = w(m, m - 1)P(m - 1; n) + w(m, m + 1)P(m + 1; n) (4.36)

where

w(m,m + 1) = w(m,m - 1) = 1/2. (4.37)

Note that we did not use (4.37) explicitly so that (4.36) is also valid for the general
case, provided

p + q == w(m, m + 1) + w(m, m - 1) = 1. (4.38)

Our present example allows us to explain some concepts in a simple manner.


In Section 2.6 we introduced the joint probability which can now be generalized to
time dependent processes. Let us consider as an example the probability P(m, n + 1;


m', n) to find the particle at step n at point m' and at step n + 1 at m. This
probability consists of two parts as we have already discussed above. The particle
must be at step n at point m' which is described by the probability P(m'; n). Then it
must pass from m' to m which is governed by the transition probability w(m, m').
Thus the joint probability can generally be written in the form

P(m, n + 1; m', n) = w(m, m')P(m'; n) (4.39)

where w is apparently identical with the conditional probability (compare Section


2.8). Since the particle is pushed at each time away from its original position,

m ≠ m', (4.40)

and because it can jump only over an elementary distance "1"

|m - m'| = 1, (4.41)

it follows that w(m, m') = 0 unless m' = m ± 1 (in our example). We return to
(4.36) which we want to transform further. We put

w(m, m ± 1)/τ = w̃(m, m ± 1),

and shall refer to w̃(m, m ± 1) as the transition probability per second (or per unit
time). We subtract P(m; n) from both sides of (4.36) and divide both sides by τ.
Using furthermore (4.38) the following equation results

(P(m; n + 1) - P(m; n))/τ = w̃(m, m - 1)P(m - 1; n) + w̃(m, m + 1)P(m + 1; n)
- (w̃(m + 1, m) + w̃(m - 1, m))P(m; n). (4.42)

In our special example the w̃'s are both equal to 1/(2τ). We relate the discrete
variable n to the time variable t by writing t = nτ and accordingly introduce now
a probability measure P̃ by putting

P̃(m, t) = P(m; n) ≡ P(m; t/τ). (4.43)

P̃ in (4.43) is an abbreviation for the function P(m; t/τ). We now approximate the
difference on the lhs of (4.42) by the time derivative

(P(m; n + 1) - P(m; n))/τ ≈ dP̃/dt (4.44)

so that the original equation (4.36) takes the form

dP̃(m, t)/dt = w̃(m, m - 1)P̃(m - 1, t) + w̃(m, m + 1)P̃(m + 1, t)
- (w̃(m + 1, m) + w̃(m - 1, m))P̃(m, t). (4.45)

This equation is known in the literature as master equation.
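A minimal sketch (step sizes and lattice cutoff are our own choices) integrates (4.45) directly for the symmetric walk: total probability is conserved up to a tiny boundary leakage, and the second moment grows like t/τ, in accordance with the diffusive behaviour found in Section 4.1:

```python
# Euler integration of the master equation (4.45) for the symmetric random
# walk, rate w = 1/(2*tau) to each neighbour (lattice truncated at |m| = M).
tau = 1.0
w = 1.0 / (2.0 * tau)
M = 40
P = {m: 0.0 for m in range(-M, M + 1)}
P[0] = 1.0                      # particle starts at m = 0

dt, steps = 0.01, 10_000        # integrate up to t = 100
for _ in range(steps):
    dP = {}
    for m in P:
        gain = w * P.get(m - 1, 0.0) + w * P.get(m + 1, 0.0)
        loss = 2.0 * w * P[m]
        dP[m] = gain - loss
    for m in P:
        P[m] += dt * dP[m]

total = sum(P.values())                       # stays ~1 (tiny boundary leakage)
var = sum(m * m * p for m, p in P.items())    # grows like t/tau
```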


Fig. 4.3. How to visualize detailed balance

Let us now consider the stationary state in which the probability P̃ is time-
independent. How can this be reached in spite of the hopping process going on all
the time? Consider a line between m and m + 1. Then P̃ does not change if (per
unit time) the same number of transitions occur in the right and in the left direction.
This requirement is called the principle of detailed balance (Fig. 4.3) and can be
written mathematically in the form

w̃(m, m')P̃(m', t) = w̃(m', m)P̃(m, t). (4.46)

In Section 4.1 we found it still more advantageous to replace m by a continuous
variable, x, by putting m = x/a. When x is finite and a tends to 0, m must neces-
sarily become a very large number. In this case the unity "1" may be considered
as a small quantity compared to m. Therefore we replace, in a formal way, 1 by ε.
We treat the special case

w̃(m, m + 1) = w̃(m, m - 1) = 1/(2τ).

In this case the right hand side of (4.42) reads

(1/(2τ)) (P̃(m - ε, t) + P̃(m + ε, t) - 2P̃(m, t)) (4.47)

which we expand into a Taylor series up to second order in ε

(4.47) ≈ (1/(2τ)) {P̃(m, t) + P̃(m, t) - 2P̃(m, t) + [P̃'(m, t) - P̃'(m, t)]ε
+ (1/2)[P̃''(m, t) + P̃''(m, t)]ε²}, (4.48)

where the primes denote differentiation of P̃ with respect to m. Because the terms
∝ ε⁰ and ∝ ε cancel, the first nonvanishing term is that of the second
derivative of P̃. Inserting this result into (4.42) yields

dP̃/dt = (1/(2τ)) ε² d²P̃/dm². (4.49)

It is now appropriate to introduce a new function f by

P̃(m, t)Δm = f(x, t)Δx. (4.50)


(Why have we introduced Δm and Δx here? (cf. Section 4.1).) Using furthermore

d/dm = a·(d/dx), ε = 1, (4.51)

we find the fundamental equation

∂f/∂t = (a²/(2τ)) ∂²f/∂x² = (D/2) ∂²f/∂x² (4.52)

which we shall refer to as diffusion equation. As we will see much later (cf. Section
6.3), this equation is a very simple example of the so-called Fokker-Planck equation.
The solution of (4.52) is, of course, much easier to find than that of the master
equation. The ε-expansion relies on a scaling property. Indeed, letting ε → 0 can be
identified with letting the length scale a go to zero. We may therefore repeat the
above steps in a more formal manner, introducing the length scale a as expansion
parameter. We put (since Δx/Δm = a)

P̃(m, t) = af(x, t), (4.53a)

and, concurrently, since m ± 1 implies x ± a:

P̃(m ± 1, t) = af(x ± a, t). (4.53b)

Inserting (4.53a) and (4.53b) into (4.42) or (4.47) yields, after expanding f up to
second order in a, again (4.52). The above procedure is only valid under the con-
dition that for fixed a, the function f varies very slowly over the distance Δx = a.
Thus the whole procedure implies a self-consistency requirement.
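As a sketch of how (4.52) behaves in practice, an explicit finite-difference scheme (grid spacing and time step are our arbitrary choices; stability requires (D/2)Δt/Δx² ≤ 1/2) reproduces the analytic solution (4.29):

```python
import math

D = 1.0
dx, dt = 0.1, 0.002            # (D/2)*dt/dx**2 = 0.1, well inside stability
N = 200                        # grid covers x = -20 ... 20
f = [0.0] * (2 * N + 1)
f[N] = 1.0 / dx                # discrete delta function at x = 0

steps = 1000                   # evolve to t = steps * dt = 2
for _ in range(steps):
    lap = [0.0] * len(f)
    for i in range(1, len(f) - 1):
        lap[i] = (f[i - 1] - 2.0 * f[i] + f[i + 1]) / dx ** 2
    f = [fi + dt * (D / 2.0) * li for fi, li in zip(f, lap)]

t = steps * dt
exact = [math.exp(-((i - N) * dx) ** 2 / (2 * D * t)) / math.sqrt(2 * math.pi * D * t)
         for i in range(len(f))]
err = max(abs(u - v) for u, v in zip(f, exact))
norm = sum(f) * dx             # total probability, conserved by the scheme
```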

Exercises on 4.2

1) Show that P(s, n) (4.4) fulfils (4.36) with (4.37) if s, n are correspondingly
related to m, n. The initial condition is P(m; 0) = δ_{m,0} (i.e. = 1 for m = 0, and
= 0 for m ≠ 0).
2) Verify that (4.29) satisfies (4.52) with the initial condition f(x, 0) = δ(x) (δ:
Dirac's δ-function).
3) Derive the generalization of equation (4.52) if w(m, m + 1) ≠ w(m, m - 1).
4) Excitation transfer between two molecules by a hopping process. Let us denote
the two molecules by the index j = 0, 1 and let us denote the probability to find
the molecule j in an excited state at time t by P(j, t). We denote the transition
rate per unit time by w. The master equation for this process reads

Ṗ(0, t) = wP(1, t) - wP(0, t),

Ṗ(1, t) = wP(0, t) - wP(1, t). (E.1)


Show that the equilibrium distribution is given by

P(j) = 1/2, (E.2)

determine the conditional probability

P(j, t | j', 0), (E.3)

and the joint probability

P(j, t; j', t'). (E.4)

Hint: Solve equations (E.1) by the hypothesis

P(j, t) = 1/2 + c_j e^{-2wt}, (E.5)

using for j' = 0, j' = 1 the initial condition (t = 0)

P(0, 0) = 1, P(1, 0) = 0. (E.6)
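A quick numerical check of exercise 4 (not the closed-form solution the hint asks for): integrating (E.1) from the initial condition (E.6) drives both probabilities toward the equidistribution (E.2). The rate value w below is arbitrary:

```python
# Euler integration of (E.1); the sum P(0,t) + P(1,t) is conserved exactly
# and the difference decays, so both probabilities approach 1/2, cf. (E.2).
w = 0.7
P0, P1 = 1.0, 0.0              # initial condition (E.6)
dt = 1e-3
for _ in range(20_000):        # integrate to t = 20
    d0 = w * P1 - w * P0
    d1 = w * P0 - w * P1
    P0 += dt * d0
    P1 += dt * d1
```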

4.3* Joint Probability and Paths. Markov Processes. The Chapman-Kolmogorov Equation. Path Integrals

To explain the main ideas of this section, let us first consider the example of the
two foregoing sections: a particle hopping randomly along a chain. When we
follow up the different positions the particle occupies in a specific realization, we
may draw the plot of Fig. 4.4a. At times t_1, t_2, ..., t_n the particle is found at n
specific positions m_1, m_2, ..., m_n. If we repeat the experiment, we shall find an-
other realization, say that of Fig. 4.4b. We now ask for the probability of finding
the particle at times t_1, t_2, ... at the corresponding points m_1, m_2, .... We denote
this probability by

P_n(m_n, t_n; m_{n-1}, t_{n-1}; ...; m_1, t_1). (4.54)

Fig. 4.4. Two paths of a Brownian particle; t_1 = 0, m_1 = 4, ..., n = 7


Apparently P_n is a joint probability in the sense of Section 2.6. Once we know the
joint probability with respect to n different times, we can easily find other prob-
ability distributions containing a smaller number of arguments m_j, t_j. Thus for
example if we are interested only in the joint probability at times t_1, ..., t_{n-1} we
have to sum up (4.54) over m_n

P_{n-1}(m_{n-1}, t_{n-1}; ...; m_1, t_1) = Σ_{m_n} P_n(m_n, t_n; m_{n-1}, t_{n-1}; ...; m_1, t_1). (4.55)

Another interesting probability is that for finding the particle at time t_n at the
position m_n and at time t_1 at position m_1, irrespective of the positions the particle
may have acquired at intermediate times. We find the probability by summing
(4.54) over all intermediate positions:

P_2(m_n, t_n; m_1, t_1) = Σ_{m_{n-1}, ..., m_2} P_n(m_n, t_n; ...; m_1, t_1). (4.56)

Let us now consider an experiment (a realization of the random process) in
which the particle was found at times t_1, ..., t_{n-1} at the corresponding positions
m_1, ..., m_{n-1}. We then ask what is the probability of finding it at time t_n at
position m_n? We denote this conditional probability by

P(m_n, t_n | m_{n-1}, t_{n-1}; ...; m_1, t_1). (4.57)

According to Section 2.8 this conditional probability is given by

P(m_n, t_n | m_{n-1}, t_{n-1}; ...; m_1, t_1) = P_n(m_n, t_n; m_{n-1}, t_{n-1}; ...; m_1, t_1) / P_{n-1}(m_{n-1}, t_{n-1}; ...; m_1, t_1). (4.58)

So far our considerations apply to any process. However, if we consider our
special example of Sections 4.1 and 4.2, we readily see that the probability for the
final position m_n depends only on the probability distribution at time t_{n-1} and not
on any earlier time. In other words, the particle has lost its memory. In this case
the conditional probability (4.57) depends only on the arguments at times t_n and
t_{n-1} so that we may write

P(m_n, t_n | m_{n-1}, t_{n-1}; ...; m_1, t_1) = P(m_n, t_n | m_{n-1}, t_{n-1}). (4.59)

If the conditional probability obeys the equation (4.59) the corresponding process
is called a Markov process. In the following we want to derive some general rela-
tions for Markov processes.
The rhs of (4.59) is often referred to as transition probability. We now want to
express the joint probability (4.54) by means of the transition probability (4.59).
In a first step we multiply (4.58) by P_{n-1}

P_n(m_n, t_n; ...; m_1, t_1) = P(m_n, t_n | m_{n-1}, t_{n-1}) P_{n-1}(m_{n-1}, t_{n-1}; ...; m_1, t_1). (4.60)


Replacing in this equation n by n - 1 we find

P_{n-1}(m_{n-1}, t_{n-1}; ...; m_1, t_1) = P(m_{n-1}, t_{n-1} | m_{n-2}, t_{n-2}) P_{n-2}(m_{n-2}, t_{n-2}; ...; m_1, t_1). (4.61)

Reducing the index n again and again we finally obtain

P_j(m_j, t_j; ...; m_1, t_1) = P(m_j, t_j | m_{j-1}, t_{j-1}) P_{j-1}(m_{j-1}, t_{j-1}; ...; m_1, t_1), j = 2, 3, ..., n, (4.62)

and

P_2(m_2, t_2; m_1, t_1) = P(m_2, t_2 | m_1, t_1) P_1(m_1, t_1). (4.63)

Consecutively substituting P_{n-1} by P_{n-2}, P_{n-2} by P_{n-3}, etc., we arrive at

P_n(m_n, t_n; ...; m_1, t_1) = P(m_n, t_n | m_{n-1}, t_{n-1}) P(m_{n-1}, t_{n-1} | m_{n-2}, t_{n-2}) ··· P(m_2, t_2 | m_1, t_1) P_1(m_1, t_1). (4.64)
Thus the joint probability of a Markov process can be obtained as a mere product
of the transition probabilities. Thus the probability of finding for example a
particle at positions mj for a time sequence tj can be found by simply following up
the path of the particle and using the individual transition probabilities from one
point to the next point.
To derive the Chapman-Kolmogorov equation we consider three arbitrary times
subject to the condition

t_3 > t_2 > t_1. (4.65)

We specialize equation (4.56) to this case:

P_2(m_3, t_3; m_1, t_1) = Σ_{m_2} P_3(m_3, t_3; m_2, t_2; m_1, t_1). (4.66)

We now use again formula (4.62) which is valid for any arbitrary time sequence.
In particular we find for the lhs of (4.66)

P_2(m_3, t_3; m_1, t_1) = P(m_3, t_3 | m_1, t_1) P_1(m_1, t_1). (4.67)

With help of (4.64) we write P_3 on the rhs of (4.66) in the form

P_3(m_3, t_3; m_2, t_2; m_1, t_1) = P(m_3, t_3 | m_2, t_2) P(m_2, t_2 | m_1, t_1) P_1(m_1, t_1). (4.68)

Inserting (4.67) and (4.68), we find a formula in which on both sides P_1(m_1, t_1)
appears as a factor. Since this initial distribution can be chosen arbitrarily, that
relation must be valid without this factor. This yields as final result the Chapman-
Kolmogorov equation

P(m_3, t_3 | m_1, t_1) = Σ_{m_2} P(m_3, t_3 | m_2, t_2) P(m_2, t_2 | m_1, t_1). (4.69)


Note that (4.69) is not so innocent as it may look because it must hold for any
time sequence between the initial time and the final time.
The relation (4.69) can be generalized in several ways. First of all we may replace
m_j by the M-dimensional vector m_j. Furthermore we may let m_j become a con-
tinuous variable q_j so that P becomes a distribution function (density). We leave
it to the reader as an exercise to show that in this latter case (4.69) acquires the form

P(q_3, t_3 | q_1, t_1) = ∫ P(q_3, t_3 | q_2, t_2) P(q_2, t_2 | q_1, t_1) d^M q_2. (4.70)
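The content of (4.69) is easy to verify for a finite Markov chain, where the transition probabilities form a matrix and the Chapman-Kolmogorov equation becomes the statement that matrix products over successive time intervals compose. The three-state one-step matrix below is an arbitrary example of ours:

```python
# STEP[m][mp] = P(m, t+1 | mp, t); every column sums to one.
STEP = [[0.5, 0.3, 0.2],
        [0.1, 0.6, 0.3],
        [0.4, 0.1, 0.5]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def power(A, n):
    R = [[float(i == j) for j in range(3)] for i in range(3)]
    for _ in range(n):
        R = matmul(R, A)
    return R

# Chapman-Kolmogorov: t1 -> t3 directly equals t1 -> t2 composed with t2 -> t3.
P31 = power(STEP, 5)
via_t2 = matmul(power(STEP, 3), power(STEP, 2))
diff = max(abs(P31[i][j] - via_t2[i][j]) for i in range(3) for j in range(3))
```

The check holds for any splitting of the five steps, which is exactly the "any time sequence" remark above.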

Another form of the Chapman-Kolmogorov equation is obtained if we mul-
tiply (4.69) by P(m_1, t_1) and sum up over m_1. Using

Σ_{m_1} P(m_2, t_2 | m_1, t_1) P(m_1, t_1) = P(m_2, t_2) (4.71)

and changing the indices appropriately we obtain

P(m, t) = Σ_{m'} P_{tt'}(m, m') P(m', t'), where P_{tt'}(m, m') ≡ P(m, t | m', t'), (4.72)

in the discrete case, and

P(q, t) = ∫ ··· ∫ P_{tt'}(q, q') P(q', t') d^M q' (4.73)

for continuous variables. (Here and in the following we drop the index 1 of P.)
We now proceed to consider infinitesimal time intervals, i.e., we put t = t' + τ.
We then form the following expression

(1/τ){P(m, t + τ) - P(m, t)}, (4.74)

and let τ go to zero or, in other words, we take the derivative of (4.72) with respect
to the time t, which yields

Ṗ(m, t) = Σ_{m'} ṗ_t(m, m') P(m', t) (4.75)

where ṗ_t(m, m') = lim_{τ→0} τ^{-1}(P_{t+τ,t}(m, m') - P_{t,t}(m, m')). The discussion of ṗ_t
requires some care. If m' ≠ m, ṗ_t is nothing but the number of transitions from m'
to m per second for which we put

ṗ_t(m, m') = w(m, m') (4.76)

or, in other words, w is the transition probability per second (or unit time). The
discussion of ṗ_t(m, m) with the same indices m = m' is somewhat more difficult
because it appears that no transition would occur. For a satisfactory discussion we
must go back to the definition of P according to (4.59) and consider the difference

of the conditional probabilities

P(m, t + τ | m, t) - P(m, t | m, t). (4.77)

This difference represents the change of probability of finding the particle at
point m at a later time, t + τ, if it had been at the same point at time t. This change
of probability is caused by all processes in which the particle leaves its original
place m. Thus, if we divide (4.77) by τ and let τ go to zero, (4.77) is equal to the sum
over rates (4.76) for the transitions from m to all other states

ṗ_t(m, m) = - Σ_{m'} w(m', m). (4.78)

Inserting now (4.76) and (4.78) into (4.75) leaves us with the so-called master
equation

Ṗ(m, t) = Σ_{m'} w(m, m') P(m', t) - P(m, t) Σ_{m'} w(m', m). (4.79)

In passing from (4.75) to this equation we have generalized it by replacing m by


the vector m.

4.3.1 Example of a Joint Probability: The Path Integral as Solution of the Diffusion Equation

Our somewhat abstract considerations about the joint probability at different


times can be illustrated by an example which we have already met several times in
our book. It is the random motion of a particle in one dimension. We adopt a
continuous space coordinate q. The corresponding diffusion equation was given in
Section 4.2 by (4.52). We write it here in a slightly different form by calling the
functionf(q) of (4.52) now p",(q, q'):

(Jat - "2D Jq2


J2 )
Ptt,(q, q') = 0, t > t'. (4.80)

The reason for this new notation is the following: we subject the solution of (4.80)
to the initial condition

P_{tt'}(q, q') = δ(q - q') for t = t'. (4.81)

That is, we assume that at a time t equal to the initial time t' the particle is with cer-
tainty at the space point q'. Therefore P_{tt'}(q, q') dq is the conditional probability to
find the particle at time t in the interval q ... q + dq provided it has been at time
t' at point q' or, in other words, P_{tt'}(q, q') is the transition probability introduced
above. For what follows we make the replacements

q → q_{j+1}, q' → q_j (4.82)


and

t → t_{j+1}, t' → t_j (4.83)

where j will be an index defined by a time sequence t_j. Fortunately, we already know
the solution to (4.80) with (4.81) explicitly. Indeed in Section 4.1 we derived the
conditional probability to find a particle undergoing a random walk after a time
t at point q provided it has been at the initial time at q' = 0. By a shift of the coordi-
nate system, q → q - q' ≡ q_{j+1} - q_j, and putting t_{j+1} - t_j = τ, that former solu-
tion (4.29) reads

P_{t_{j+1} t_j}(q_{j+1}, q_j) = (2πDτ)^{-1/2} exp{-(q_{j+1} - q_j)²/(2Dτ)}. (4.84)

By inserting (4.84) into (4.80) one readily verifies that (4.84) fulfils this equation.
We further note that for τ → 0 (4.84) becomes a δ-function (see Fig. 2.10). It is
now a simple matter to find an explicit expression for the joint probability (4.54).
To this end we need only to insert (4.84) (with j = 1, 2, ...) into the general
formula (4.64) which yields

P_n(q_n, t_n; ...; q_1, t_1) = (2πDτ)^{-(n-1)/2} exp{-(1/(2Dτ)) Σ_{j=1}^{n-1} (q_{j+1} - q_j)²} P(q_1, t_1). (4.85)

This can be interpreted as follows: (4.85) describes the probability that the
particle moves along the path q_1 q_2 q_3 .... We now proceed to a continuous time
scale by replacing

τ → dt (4.86)

and

(q_{j+1} - q_j)/τ → dq/dt ≡ q̇, (4.87)

which allows us to write the exponential function in (4.85) in the form

exp{-(1/(2D)) ∫_{t_1}^{t_n} q̇² dt}. (4.88)

(4.88) together with a normalization factor is the simplest form of a probability
distribution for a path. If we are interested only in the probability of finding the
final coordinate q_n = q at time t_n = t irrespective of the special path chosen, we
have to integrate over all intermediate coordinates q_{n-1}, ..., q_1 (see (4.56)). This
probability is thus given by

P(q_n, t_n) = ∫ ··· ∫ (2πDτ)^{-(n-1)/2} exp{-(1/(2Dτ)) Σ_{j=1}^{n-1} (q_{j+1} - q_j)²} P(q_1, t_1) dq_1 ··· dq_{n-1}, (4.89)


which is called a path integral. Such path integrals play a more and more important
role in statistical physics. We will come back to them later in our book.
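The path integral (4.89) can be "performed" by Monte Carlo: the weight (4.85) means that successive increments q_{j+1} - q_j are independent Gaussians of variance Dτ, and summing over intermediate coordinates (here: by sampling) reproduces the heat kernel (4.29) with the total time t. A small sketch, with all parameter values ours:

```python
import math
import random

random.seed(2)
D, tau, n = 1.0, 0.02, 50      # t = n * tau = 1
t = n * tau

# Sample discretized paths: increments are Gaussian with variance D*tau.
paths = 20_000
finals = []
for _ in range(paths):
    q = 0.0
    for _ in range(n):
        q += random.gauss(0.0, math.sqrt(D * tau))
    finals.append(q)

# End points should be distributed like (2*pi*D*t)^(-1/2) exp(-q^2/(2*D*t)).
mean = sum(finals) / paths
var = sum(q * q for q in finals) / paths
```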

Exercise on 4.3

Determine

P_n(m_n, t_n; m_{n-1}, t_{n-1}; ...; m_1, t_1)

for the random walk model of Section 4.1, for the initial condition P(m_1, t_1) = 1
for m_1 = 0; = 0 otherwise.

4.4* How to Use Joint Probabilities. Moments. Characteristic Function. Gaussian Processes

In the preceding section we have become acquainted with joint probabilities for
time-dependent processes

P(m_n, t_n; ...; m_1, t_1), (4.90)

where we now drop the index n of P_n. Using (4.90) we can define moments by
generalization of concepts introduced in Section 2.7

⟨m_1^{ν_1} m_2^{ν_2} ··· m_n^{ν_n}⟩ = Σ_{m_1, ..., m_n} m_1^{ν_1} ··· m_n^{ν_n} P(m_n, t_n; ...; m_1, t_1). (4.91)

In this equation some of the ν's may be equal to zero. A particularly important case
is provided by

⟨m_l^ν m_n^{ν'}⟩, (4.92)

where we have only the product of two powers of m at different times. When we
perform the sum over all other m's according to the definition (4.91) we find the
probability distribution which depends only on the indices l and n. Thus (4.92)
becomes

⟨m_l^ν m_n^{ν'}⟩ = Σ_{m_l, m_n} m_l^ν m_n^{ν'} P(m_n, t_n; m_l, t_l). (4.93)

We mention a few other notations. Since the indices j = 1, ..., n refer to times we
can also write (4.92) in the form

⟨m^ν(t_l) m^{ν'}(t_n)⟩. (4.94)

This notation must not be interpreted to mean that m is a given function of time.
Rather, t_n and t_l must be interpreted as indices. If we let the t_j's become a continuous
time sequence we will drop the suffices j and use t or t' as index so that (4.94) is written


in the form

⟨m^ν(t) m^{ν'}(t')⟩. (4.95)

In a completely analogous manner we can rewrite (4.93) so that we obtain the
relation

⟨m^ν(t) m^{ν'}(t')⟩ = Σ_{m(t), m(t')} m^ν(t) m^{ν'}(t') P(m(t), t; m(t'), t'). (4.96)

Another way of writing is m_t instead of m(t). As already mentioned before we can
let m become a continuous variable q, in which case P becomes a probability
density. To mention as an example again the case of two times, (4.96) acquires the
form

⟨q_t^ν q_{t'}^{ν'}⟩ = ∫∫ q_t^ν q_{t'}^{ν'} P(q_t, t; q_{t'}, t') dq_t dq_{t'}. (4.97)

In analogy to Section 2.10 we introduce the characteristic function by

Φ(u_1, ..., u_n) = ⟨exp{i Σ_{j=1}^n u_j m_{t_j}}⟩ (4.98)

in the case of a discrete time sequence, and by

⟨exp{i ∫ dt' u_{t'} q_{t'}}⟩ = Φ({u_t}) (4.99)

in the case of a continuous time sequence. By taking ordinary or functional deriva-
tives we can easily recover moments in the case of a single variable by

⟨m_{t_j}^ν⟩ = i^{-ν} (∂^ν/∂u_j^ν) Φ |_{u=0} (4.100)

and in the case of variables at different times by

⟨m_{t_j}^ν m_{t_{j'}}^{ν'}⟩ = i^{-(ν+ν')} (∂^{ν+ν'}/∂u_j^ν ∂u_{j'}^{ν'}) Φ |_{u=0}. (4.101)

We furthermore define cumulants k_s by

Φ(u_n, t_n; ...; u_1, t_1) = exp{Σ_{s=1}^∞ (i^s/s!) Σ_{α_1, ..., α_s = 1}^n k_s(t_{α_1}, ..., t_{α_s}) u_{α_1} ··· u_{α_s}}. (4.102)

We call a process Gaussian if all cumulants except the first two vanish, i.e., if

k_s = 0 for s ≥ 3 (4.103)

holds. In the case of a Gaussian process, the characteristic function thus can be
written in the form

Φ = exp{i Σ_α k_1(t_α) u_α - (1/2) Σ_{α,β} k_2(t_α, t_β) u_α u_β}. (4.104)

According to (4.101) all moments can be expressed by the first two cumulants or,
because k_1 and k_2 can be expressed by the first and second moments, all higher
moments can be expressed by the first two moments.
The great usefulness of the joint probability and of correlation functions for
example of the form (4.97) rests in the fact that these quantities allow us to discuss
time-dependent correlations. Consider for example the correlation function

⟨q_t q_{t'}⟩, t > t', (4.105)

and let us assume that the mean values at the two times t and t'

⟨q_t⟩ = 0, ⟨q_{t'}⟩ = 0 (4.106)

vanish. If there is no correlation between q's at different times we can split the
joint probability into a product of probabilities at the two times. As a consequence
(4.105) vanishes. On the other hand if there are correlation effects then the joint
probability cannot be split into a product and (4.105) does not vanish in general.
We will exploit this formalism later on in greater detail to check how fast fluctua-
tions decrease, or, how long a coherent motion persists. If the mean values of q
do not vanish, one can replace (4.105) by

⟨(q_t - ⟨q_t⟩)(q_{t'} - ⟨q_{t'}⟩)⟩ (4.107)

to check correlations.
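For the diffusion process started at q = 0, independence of the increments gives ⟨q_t q_{t'}⟩ = D·min(t, t') (a standard property of the Wiener process, quoted here without derivation); it is easily reproduced from sampled paths:

```python
import math
import random

random.seed(3)
D, tau = 1.0, 0.02
n1, n2 = 25, 50                # t' = 0.5 and t = 1.0
trials = 40_000
acc = 0.0
for _ in range(trials):
    q = q1 = 0.0
    for step in range(n2):
        q += random.gauss(0.0, math.sqrt(D * tau))
        if step == n1 - 1:
            q1 = q             # position at the earlier time t'
    acc += q1 * q              # product with the position at the later time t
corr = acc / trials            # estimate of <q_t q_t'> = D * t' = 0.5
```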

Exercise on 4.4

Calculate the following moments (or correlation functions) for the diffusion process
under the initial conditions

1) P(q, t = 0) = δ(q),

2) P(q, t = 0) = (β/π)^{1/2} exp(-βq²):

⟨q(t)⟩, ⟨q²(t)⟩, ⟨q(t)q(t')⟩, ⟨q²(t)q²(t')⟩.

Hint: ∫_{-∞}^{+∞} q^ν e^{-αq²} dq = 0 for ν odd.

Try in ⟨...⟩ a replacement of q_t by q_t + q_{t'}!


4.5 The Master Equation


This section can be read without knowledge of the preceding ones. The reader
should be acquainted, however, with the example of Section 4.2. The master equa-
tion we shall derive in this section is one of the most important means to determine
the probability distribution of a process. In Section 4.2 we have already encountered
the example of a particle which is randomly pushed back or forth. Its motion was
described by a stochastic equation governing the change of the probability dis-
tribution as a function of time. We will now consider the general case in which the
system is described by discrete variables which can be lumped together to a vector
m. To visualize the process think of a particle moving in three dimensions on a
lattice.
The probability of finding the system at point m at a time t increases due to
transitions from other points m' to the point under consideration. It decreases due
to transitions leaving this point, i.e., we have the general relation

Ṗ(m, t) = rate in - rate out. (4.108)

Since the "rate in" consists of all transitions from initial points m' to m, it is
composed of the sum over the initial points. Each term of it is given by the prob-
ability to find the particle at point m', multiplied by the transition probability per
unit time to pass from m' to m. Thus we obtain

rate in = Σ_{m'} w(m, m') P(m', t). (4.109)

In a similar way we find for the outgoing transitions the relation

rate out = P(m, t) · Σ_{m'} w(m', m). (4.110)

Putting (4.109) and (4.110) into (4.108) we obtain

Ṗ(m, t) = Σ_{m′} w(m, m′)P(m′, t) − P(m, t) Σ_{m′} w(m′, m) (4.111)

which is called the master equation (Fig. 4.5). The crux in deriving a master equation
is not so much writing down the expressions (4.109) and (4.110), which are rather
obvious, but determining the transition rates w explicitly. This can be done in two
ways. Either we can write down the w's by means of plausibility arguments. This
has been done in the example of Section 4.2. Further important examples will be
given later, applied to chemistry and sociology. Another way, however, is to derive
the w's from first principles where mainly quantum statistical methods have to be
used.
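As a numerical illustration (not from the text; the three-state rates below are invented), the master equation (4.111) can be integrated by a simple Euler scheme; total probability is conserved and P(m, t) relaxes to a stationary distribution:

```python
# Euler integration of the master equation (4.111) for three states.
# w[m][n] is the transition rate from state n to state m (values invented).

def master_step(P, w, dt):
    """One Euler step of P'(m) = sum_n w(m,n)P(n) - P(m) sum_n w(n,m)."""
    N = len(P)
    dP = [sum(w[m][n] * P[n] for n in range(N))        # "rate in"  (4.109)
          - P[m] * sum(w[n][m] for n in range(N))      # "rate out" (4.110)
          for m in range(N)]
    return [P[m] + dt * dP[m] for m in range(N)]

w = [[0.0, 1.0, 0.5],
     [0.3, 0.0, 0.2],
     [0.7, 0.4, 0.0]]

P = [1.0, 0.0, 0.0]            # start with certainty in state 0
for _ in range(20000):
    P = master_step(P, w, 1e-3)
# P is now (very nearly) the stationary distribution; sum(P) stays 1
```

The conservation of Σ_m P(m, t) follows from the fact that each column of the rate matrix sums to zero, which the scheme above preserves step by step.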

Exercise on 4.5

Why does one have to assume a Markov process to write down (4.109) and (4.110)?


Fig. 4.5. Example of a network of the master equation

4.6 Exact Stationary Solution of the Master Equation for Systems in Detailed Balance

In this section we prove the following: If the master equation has a unique stationary
solution (Ṗ = 0) and fulfils the principle of detailed balance, this solution can be
obtained explicitly by mere summations or, in the continuous case, by quadratures.
The principle of detailed balance requires that there are as many transitions per
second from state m to state n as from n to m by the inverse process. Or, in mathe-
matical form:

wen, m)P(m) = w(m, n)P(n). (4.112)

In physics, the principle of detailed balance can be (in most cases) derived for
systems in thermal equilibrium, using microreversibility. In physical systems far
from thermal equilibrium or in nonphysical systems, it holds only in special cases
(for a counter example cf. exercise 1).
Equation (4.112) represents a set of homogeneous equations which can be
solved only if certain conditions are fulfilled by the w's. Such conditions can be
derived, for instance, by symmetry considerations or in the case that w can be
replaced by differential operators. We are not concerned, however, with this
question here, but want to show how (4.112) leads to an explicit solution. In the
following we assume that P(n) ≠ 0. Then (4.112) can be written as

P(m)/P(n) = w(m, n)/w(n, m). (4.113)

Writing m = n_{j+1}, n = n_j, we pass from n_0 to n_N by a chain of intermediate


states. Because there exists a unique solution, at least one chain must exist. We then
find

P(n_N) = P(n_0) ∏_{j=0}^{N−1} [w(n_{j+1}, n_j)/w(n_j, n_{j+1})]. (4.114)

Putting

P(m) = 𝒩 exp Φ(m) (4.115)


where 𝒩 is the normalization factor, (4.114) may be written as

Φ(n_N) = Φ(n_0) + Σ_{j=0}^{N−1} ln [w(n_{j+1}, n_j)/w(n_j, n_{j+1})]. (4.116)

Because the solution was assumed to be unique, Φ(n_N) is independent of the path
chosen. Taking a suitable limit one may apply (4.116) to continuous variables.
As an example let us consider a linear chain with nearest neighbor transitions.
Since detailed balance holds (cf. exercise 2) we may apply (4.114). Abbreviating
transition probabilities by

w(m, m − 1) = w₊(m), (4.117)

w(m, m + 1) = w₋(m), (4.118)

we find

P(m) = P(0) ∏_{m′=0}^{m−1} w(m′ + 1, m′)/w(m′, m′ + 1) = P(0) ∏_{m′=0}^{m−1} w₊(m′ + 1)/w₋(m′). (4.119)

In many practical applications, w₊ and w₋ are "smooth" functions of m, since m is


generally large compared to unity in the regions of interest. Plotting P(m) gives then
a smooth curve showing extrema (compare Fig. 4.6). We establish conditions for
extrema. An extremum (or stationary value) occurs if

Fig. 4.6. Example of P(m) showing maxima and minima

P(m₀) = P(m₀ + 1). (4.120)

Since we obtain P(m₀ + 1) from P(m₀) by multiplying with the factor

w₊(m₀ + 1)/w₋(m₀), (4.121)

P(m₀) is a maximum, if

P(m) < P(m₀) for m < m₀ and for m > m₀. (4.122)


Equivalently, P(m₀) is a maximum if (and only if)

w₊(m + 1) > w₋(m) for m < m₀ and w₊(m + 1) < w₋(m) for m ≥ m₀. (4.123)

In both cases, (4.122) and (4.123), the numbers m belong to a finite surrounding of
m₀.
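The chain formula (4.119) is easy to put on a computer. The sketch below (rates invented, in the spirit of exercise 5c) builds the stationary distribution of a birth-death chain by mere multiplication; with a constant generation rate α and a recombination rate growing linearly with the occupation it reproduces a Poisson distribution of mean α/β:

```python
from math import exp, factorial

def stationary(w_plus, w_minus, M):
    """P(m) from (4.119): P(m) = P(0) * prod_{m'=0}^{m-1} w_plus(m'+1)/w_minus(m'),
    normalized over the finite region 0 <= m <= M."""
    P = [1.0]
    for m in range(M):
        P.append(P[-1] * w_plus(m + 1) / w_minus(m))
    Z = sum(P)
    return [p / Z for p in P]

alpha, beta = 2.0, 1.0
w_plus = lambda m: alpha                # w(m, m-1): generation
w_minus = lambda m: beta * (m + 1)      # w(m, m+1): recombination

P = stationary(w_plus, w_minus, 30)

# compare with the Poisson distribution of mean mu = alpha/beta
mu = alpha / beta
poisson = [exp(-mu) * mu ** m / factorial(m) for m in range(31)]
```

The ratio P(m + 1)/P(m) = w₊(m + 1)/w₋(m) = μ/(m + 1) is exactly the Poisson ratio, so the agreement is exact up to the truncation at M.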

Exercises on 4.6

1) Verify that the process depicted in Fig. 4.7 does not allow for detailed balance.

Fig. 4.7. Circular transitions violating the principle of detailed balance, e.g., in a three-level atom: 1 → 3 pump from external source, 3 → 2 and 2 → 1 recombination of the electron

2) Show: in a linear chain with nearest neighbor transitions m → m ± 1 the
principle of detailed balance always holds.
3) What are the conditions that P(mo) is a minimum?
4) Generalize the extremal condition to several dimensions, i.e., m → m =
(m₁, m₂, ..., m_N), and nearest neighbor transitions.
5) Determine extrema and determine P(m) explicitly for
a) w(m, m ± 1) = w,
w(m, m + n) = 0, n =1= ±1
Note: normalize P only in a finite region -M ::s; m ::s; M, and put P(m) = 0
otherwise.
m +1
b) w(m, m + I) = --,;r- Wo for O::s; m ::s; N - 1

N-m+1
w(m, m - I) = N Wo for 1::s; m ::s; N

w(m, m') = 0 otherwise.


c) w(m + 1, m) = w+(m + 1) = oc(m + 2) m ~ 0
w(m, m + I) = w_(m) = fJ(m + l)(m + 2).


Show that P(m) is the Poisson distribution π_{k,μ} (≡ a_k) (2.57) with m ↔ k and
μ ↔ α/β.
Hint: Determine P(0) by means of the normalization condition.

d) Plot P(m) in the cases a) - c).

4.7* The Master Equation with Detailed Balance. Symmetrization, Eigenvalues and Eigenstates
We write the master equation (4.111) in the form

Ṗ_m = Σ_n L_{mn} P_n, (4.124)

where we have used the abbreviation

L_{mn} = w(m, n) − δ_{mn} Σ_l w(l, n). (4.125)

The master equation represents a set of linear differential equations of first order.
To transform this equation into an ordinary algebraic equation we put

P_m(t) = φ_m e^{−λt}, (4.126)

where φ_m is time independent. Inserting (4.126) into (4.124) yields

Σ_n L_{mn} φ_n^{(α)} = −λ_α φ_m^{(α)}. (4.127)

The additional index α arises because these algebraic equations allow for a set of
eigenvalues λ_α and eigenstates φ^{(α)}, which we distinguish by the index α. Since in general
the matrix L_{mn} is not symmetric, the eigenvectors of the adjoint problem

"
~m
x(rt) L
III mn
= - A.«An
v(rt) (4.128)

are different from the eigenvectors of (4.127). However, according to well-known


results of linear algebra, φ and χ form a biorthonormal set so that

(χ^{(α)} φ^{(β)}) = δ_{αβ}. (4.129)

In (4.129) the lhs is an abbreviation for the sum over m, Σ_m χ_m^{(α)} φ_m^{(β)}. With help
of the eigenvectors φ and χ, L_{mn} can be written in the form

L_{mn} = −Σ_α λ_α φ_m^{(α)} χ_n^{(α)}. (4.130)

We now show that the matrix occurring in (4.127) can be symmetrized. We first


define the symmetrized matrix by

L^s_{mn} = w(m, n) P^{1/2}(n)/P^{1/2}(m) (4.131)

for m "# n. For m = n we adopt the original form (4.125). Pen) is the stationary
solution of the master equation. It is assumed that the detailed balance condition

w(m, n)P(n) = w(n, m)P(m) (4.132)

holds. To prove that (4.131) represents a symmetric matrix, L^s, we exchange in
(4.131) the indices n, m:

L^s_{nm} = w(n, m) P^{1/2}(m)/P^{1/2}(n). (4.133)

By use of (4.132) we find

(4.133) = w(m, n) (P(n)/P(m)) · P^{1/2}(m)/P^{1/2}(n) = w(m, n) P^{1/2}(n)/P^{1/2}(m) (4.134)

which immediately yields

L^s_{nm} = L^s_{mn}, (4.135)

so that the symmetry is proven. To show what this symmetrization means for
(4.127), we put

φ_n^{(α)} = P^{1/2}(n) φ̃_n^{(α)} (4.136)

which yields

Σ_n L_{mn} P^{1/2}(n) φ̃_n^{(α)} = −λ_α P^{1/2}(m) φ̃_m^{(α)}. (4.137)

Dividing this equation by P^{1/2}(m) we find the symmetrized equation


"~. L 11111""""
S • rn(a) = _ A.«..,...",
rn(a)
• (4.138)

We proceed with (4.128) in an analogous manner. We put

χ_n^{(α)} = P^{−1/2}(n) χ̃_n^{(α)}, (4.139)

insert it into (4.128) and multiply by P^{1/2}(n), which yields

"L.,.m X-calLs
.. II1II
= -A.u..a
.y(a) • (4.140)

The χ̃'s may now be identified with the φ̃'s because the matrix L^s_{mn} is symmetric.


This fact together with (4.136) and (4.139) yields the relation

χ_n^{(α)} = P^{−1}(n) φ_n^{(α)}. (4.141)

Because the matrix L^s is symmetric, the eigenvalues λ can be determined by the
following variational principle, as can be shown by a well-known theorem of linear
algebra. The following expression must be an extremum:

−λ = Extr {(χ̃ L^s χ̃)/(χ̃ χ̃)} = Extr {(χ L φ)/(χ φ)}. (4.142)

χ has to be chosen so that it is orthogonal to all lower eigenfunctions. Furthermore,
one immediately establishes that if we choose χ_n^{(0)} = 1, then on account of

Σ_m L_{mn} = 0 (4.143)

the eigenvalue λ = 0 associated with the stationary solution results. We now show
that all eigenvalues λ are nonnegative. To this end we evaluate the numerator in
(4.142)

(χLφ) = Σ_{m,n} χ_m χ_n w(m, n)P(n) − Σ_{m,n} χ_n² w(m, n)P(n) (4.144)

(compare (4.139), (4.141), and (4.131)) in a way which demonstrates that this
expression is nonpositive. We multiply w(m, n)P(n) ≥ 0 by −(1/2)(χ_m − χ_n)² ≤ 0
so that we obtain

−Σ_{m,n} (1/2)(χ_m − χ_n)² w(m, n)P(n) ≤ 0. (4.145)

The evaluation of the square bracket yields

−(1/2) Σ_{m,n} χ_m² w(m, n)P(n) − (1/2) Σ_{m,n} χ_n² w(m, n)P(n) + Σ_{m,n} χ_m χ_n w(m, n)P(n).
(1) (2) (3)
(4.146)

In the second sum we exchange the indices m, n and apply the condition of detailed
balance (4.132). We then find that the second sum equals the first sum, thus that
(4.146) agrees with (4.144). Thus the variational principle (4.142) can be given the
form

λ = Extr {[(1/2) Σ_{m,n} (χ_m − χ_n)² w(m, n)P(n)] / [Σ_n χ_n² P(n)]} ≥ 0, (4.147)

from which it is evident that λ is nonnegative. Furthermore, it is evident that if we
choose χ = const., we obtain the eigenvalue λ = 0.
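These statements are easy to verify numerically. The sketch below (rates invented; a linear chain is used, for which detailed balance always holds, cf. exercise 2 of Section 4.6) builds L_{mn} from (4.125), symmetrizes it according to (4.131), and evaluates the quadratic form of (4.147) for a random χ:

```python
import random

N = 5
w = [[0.0] * N for _ in range(N)]          # w[m][n]: rate n -> m
for m in range(N - 1):
    w[m + 1][m] = 1.0 + m                  # upward transitions
    w[m][m + 1] = 2.0                      # downward transitions

# stationary solution from the detailed-balance chain formula (4.119)
P = [1.0]
for m in range(N - 1):
    P.append(P[-1] * w[m + 1][m] / w[m][m + 1])
Z = sum(P)
P = [p / Z for p in P]

# L_{mn} = w(m,n) - delta_{mn} sum_l w(l,n), cf. (4.125)
L = [[w[m][n] - (sum(w[l][n] for l in range(N)) if m == n else 0.0)
      for n in range(N)] for m in range(N)]

# symmetrized matrix (4.131); diagonal kept from (4.125)
Ls = [[L[m][n] if m == n else w[m][n] * (P[n] / P[m]) ** 0.5
       for n in range(N)] for m in range(N)]
sym_err = max(abs(Ls[m][n] - Ls[n][m]) for m in range(N) for n in range(N))

# quadratic form of (4.147) for an arbitrary chi -- always nonnegative
random.seed(1)
chi = [random.uniform(-1, 1) for _ in range(N)]
form = 0.5 * sum((chi[m] - chi[n]) ** 2 * w[m][n] * P[n]
                 for m in range(N) for n in range(N))
```

One finds sym_err at rounding level, the stationary P annihilated by L, and the quadratic form nonnegative, as the variational argument requires.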


4.8* Kirchhoff's Method of Solution of the Master Equation


We first present a simple counter example to the principle of detailed balance.
Consider a system with three states 1, 2, 3, between which only transition rates
w(1,2), w(2, 3) and w(3, 1) are nonvanishing. (Such an example is provided by a
three-level atom which is pumped from its first level to its third level, from where
it decays to the second, and subsequently to the first level, cf. Fig. 4.7). On physical
grounds it is obvious that P(1) and P(2) ≠ 0, but due to w(2, 1) = 0, the equation

w(2, 1)P(1) = w(1, 2)P(2) (4.148)

required by detailed balance cannot be fulfilled. Thus other methods for a solution
of the master equation are necessary. We confine our treatment to the stationary
solution in which case the master equation, (4.111), reduces to a linear algebraic
equation. One method of solution is provided by the methods of linear algebra.
However, that is a rather tedious procedure and does not use the properties inherent
in the special form of the master equation. We rather present a more elegant method
developed by Kirchhoff, originally for electrical networks. To find the stationary
solution P(n) of the master equation²

Σ_{n=1}^{N} w(m, n)P(n) − P(m) Σ_{n=1}^{N} w(n, m) = 0, (4.149)

and subject to the normalization condition

Σ_{n=1}^{N} P(n) = 1 (4.150)

we use a little bit of graph theory.


We define a graph (or, in other words a figure) which is associated with (4.149).
This graph G contains all vertices and edges for which w(m, n) ≠ 0. Examples of
graphs with three or four vertices are provided by Fig. 4.8. For the following solu-
tion, we must consider certain parts of the graph G which are obtained from G by
omitting certain edges. This subgraph, called maximal tree T(G), is defined as
follows:
1) T(G) is a subgraph of G such that
a) all edges of T(G) are edges of G,
b) T(G) contains all vertices of G.
2) T(G) is connected.
3) T(G) contains no circuits (cyclic sequence of edges).
This definition, which seems rather abstract, can best be understood by looking
at examples. The reader will immediately realize that one has to drop a certain
minimum number of edges of G. (Compare Figs. 4.9 and 4.10). Thus in order to

² When we use n instead of the vector m, this is not a restriction, because one may always rearrange
a discrete set in the form of a sequence of single numbers.


Fig. 4.8a and b. Examples of graphs with 3 or 4 vertices

Fig. 4.9. The maximal trees T(G) belonging to the graph of Fig. 4.8a

Fig. 4.10. The maximal trees T(G) belonging to the graph G of Fig. 4.8b

obtain the maximal trees of Fig. 4.8b one has to drop in Fig. 4.8b either one side
and the diagonal or two sides.
We now define a directed maximal tree with index n, T_n(G). It is obtained from
T(G) by directing all edges of T(G) towards the vertex with index n. The directed
maximal trees belonging to n = 1, Fig. 4.9, are then given by Fig. 4.11. After these
preliminaries we can give a recipe how to construct the stationary solution P(n).
To this end we ascribe to each directed maximal tree a numerical value called
A(T_n(G)): this value is obtained as the product of all transition rates w(n, m) whose
edges occur in T_n(G) in the corresponding direction. In the example of Fig. 4.11
we thus obtain the following different directed maximal trees

Fig. 4.11. The directed maximal trees T₁ belonging to Fig. 4.9 and n = 1

T₁⁽¹⁾: w(1, 2)w(2, 3) = A(T₁⁽¹⁾) (4.151)

T₁⁽²⁾: w(1, 3)w(3, 2) = A(T₁⁽²⁾) (4.152)

T₁⁽³⁾: w(1, 2)w(1, 3) = A(T₁⁽³⁾) (4.153)


(It is best to read all arguments from right to left.) Note that in our example Fig.
4.11 w(3, 2) = w(1, 3) = 0. We now come to the last step. We define S_n as the sum
over all maximal directed trees with the same index n, i.e.,

S_n = Σ_{T_n(G)} A(T_n(G)). (4.154)

In our triangle example we would have, for instance,

S₁ = w(1, 2)w(2, 3) + w(1, 3)w(3, 2) + w(1, 2)w(1, 3) (4.155)
   = w(1, 2)w(2, 3) (since w(3, 2) = w(1, 3) = 0).

Kirchhoff's formula for the probability distribution P_n is then given by

P_n = S_n / Σ_{l=1}^{N} S_l. (4.156)

In our standard example we obtain using (4.151) etc.

P₁ = w(1, 2)w(2, 3) / [w(1, 2)w(2, 3) + w(2, 3)w(3, 1) + w(3, 1)w(1, 2)]. (4.157)

Though for higher numbers of vertices this procedure becomes rather tedious, in
many practical cases it at least allows for a much deeper insight into the construc-
tion of the solution. Furthermore it permits decomposing the problem into several
parts, for example if the graph of the master equation contains some closed circuits
which are connected only by a single line.
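For three states the recipe can be written out completely. The helper below (a sketch invented for illustration, not from the text) encodes the three directed maximal trees of the triangle and reproduces (4.157) for the three-level atom of Fig. 4.7, where only w(3, 1), w(2, 3) and w(1, 2) are nonvanishing:

```python
def kirchhoff_triangle(w):
    """Stationary distribution of a 3-state master equation via (4.156);
    w[m][n] is the rate n -> m (index 0 unused, states are 1, 2, 3)."""
    def S(n):
        a, b = [x for x in (1, 2, 3) if x != n]
        return (w[n][a] * w[a][b]      # chain b -> a -> n
                + w[n][b] * w[b][a]    # chain a -> b -> n
                + w[n][a] * w[n][b])   # star  a -> n <- b
    Ss = {n: S(n) for n in (1, 2, 3)}
    Z = sum(Ss.values())
    return {n: Ss[n] / Z for n in (1, 2, 3)}

w = [[0.0] * 4 for _ in range(4)]
w[3][1] = 2.0     # pump 1 -> 3 (invented rate)
w[2][3] = 1.0     # decay 3 -> 2
w[1][2] = 0.5     # recombination 2 -> 1

P = kirchhoff_triangle(w)   # P = {1: 1/7, 2: 4/7, 3: 2/7}
```

The result agrees with the balance of the circular flux: the product of population and outgoing rate is the same for all three states.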

Exercise on 4.8

Consider a chain which allows only for nearest neighbor transitions. Show that
Kirchhoff's formula yields exactly the formula (4.119) for detailed balance.

4.9* Theorems about Solutions of the Master Equation


We present several theorems which are important for applications of the master
equation. Since the proofs are purely mathematical, without giving us a deeper in-
sight into the processes, we drop them. We assume the w's are independent of time.
1) There always exists at least one stationary solution P(m), Ṗ(m) = 0.
2) This stationary solution is unique, provided the graph G of the master equation is
connected (i.e., any two points m, n can be connected by at least one
sequence of lines (over other points)).
3) If at an initial time, t = 0,

0 ≤ P(m, 0) ≤ 1 (4.158)

109
98 4. Chance

for all m, and

Σ_m P(m, 0) = 1, (4.159)

then also for all later times, t > 0,

0 ≤ P(m, t) ≤ 1, (4.160)

and

Σ_m P(m, t) = 1. (4.161)

Thus the normalization property and the positiveness of probability are guar-
anteed for all times.
4) Inserting P(m, t) = a_m exp(−λt) into the master equation yields a set of linear
algebraic equations for a_m with eigenvalues λ. These eigenvalues λ have the
following properties:
a) the real part of λ is nonnegative, Re λ ≥ 0;
b) if detailed balance holds, all λ's are purely real.
5) If the stationary solution is unique, then the time-dependent solution P(m, t)
for any initial distribution P⁰(m) (so that P(m, 0) = P⁰(m)) tends for t → ∞
to the stationary solution. For a proof by means of the information gain com-
pare exercise 1 of Section 5.3.

4.10 The Meaning of Random Processes. Stationary State, Fluctuations, Recurrence Time

In the foregoing we investigated processes caused by random actions. In this section
we want to discuss some rather general aspects, mainly utilizing a specific model.
We consider what happens when we bring together two boxes filled with gas.
Then the gas atoms of one box will diffuse into the other box and vice versa. The
transitions may be considered as completely random because they are caused by
the many pushes that the gas atoms suffer. Another example is provided by chemi-
cal processes where a reaction can take place only if two corresponding molecules
hit each other, which is again a random event. Therefore, it is not surprising that
such random processes occur in many different disciplines and are of utmost
importance for the understanding of ordering phenomena.
To illuminate the basic ideas we consider a very simple model, the so-called
Ehrenfest urn model. This model was originally devised to discuss the meaning of
the so-called H-theorem in thermodynamics (cf. exercises 1), 2) of Section 5.3).
Here, however, we treat this model to illustrate some typical effects inherent in
random processes and also in establishing equilibrium. Let us consider two boxes
(urns) A, B filled with N balls which are labelled by 1, 2, ..., N. Let us start with
an initial distribution in which there are N₁ balls in box A, and N₂ balls in box B.
Let us assume now that we have a mechanism which randomly selects one of the


numbers 1, ..., N, each with the same probability 1/N, and repeat this selection process
regularly in time intervals τ. If a number is selected, one removes the correspond-
ing ball from its present box and puts it into the other one. We are then interested
in the change of the numbers N₁ and N₂ in the course of time. Since N₂ is given
by N − N₁, the only variable we must consider is N₁. We denote the probability of
finding N₁ after s steps at time t (i.e., t = sτ) by P(N₁, s).
We first establish an equation which gives us the change of the probability
distribution as a function of time and then discuss several important features. The
probability distribution, P, is changed in either of two ways. Either one ball is
added to box A or removed from it. Thus the total probability P(N₁, s) is a sum of
the probabilities corresponding to these two events. In the first case, adding a ball
to A, we must start from a situation in which there are N₁ − 1 balls in A. We denote
the probability that one ball is added by w(N₁, N₁ − 1). If a ball is removed, we
must start from a situation in which there are N₁ + 1 balls in A at "time" s − 1.
We denote the transition rate corresponding to the removal of a ball by
w(N₁, N₁ + 1). Thus we find the relation

P(N₁, s) = w(N₁, N₁ − 1)P(N₁ − 1, s − 1)
         + w(N₁, N₁ + 1)P(N₁ + 1, s − 1). (4.162)

Since the probability for picking a definite number is 1/N and there are N₂ + 1 =
N − N₁ + 1 balls in urn B, the transition rate w(N₁, N₁ − 1) is given by

w(N₁, N₁ − 1) = (N − N₁ + 1)/N. (4.163)

Correspondingly the transition rate for picking a ball in urn A is given by

w(N₁, N₁ + 1) = (N₁ + 1)/N. (4.164)

Thus (4.162) acquires the form

P(N₁, s) = ((N − N₁ + 1)/N) P(N₁ − 1, s − 1) + ((N₁ + 1)/N) P(N₁ + 1, s − 1).
(4.165)
If any initial distribution of balls over the urns is given we may ask to which
final distribution we will come. According to Section 4.9 there is a unique final
stationary solution to which any initial distribution tends. This solution is given by

P(N₁) = α N!/(N₁! N₂!), N₂ = N − N₁, (4.166)

where the normalization constant α is given by the condition

Σ_{N₁=0}^{N} P(N₁) = 1 (4.167)


and can easily be determined to be

α = 2^{−N}. (4.168)

We leave it to the reader as an exercise to check the correctness of (4.166) by insert-


ing it into (4.165). (Note that P does not depend on s any more.) We already met
this distribution function much earlier: starting from quite different con-
siderations in Section 3.1, we considered the number of configurations with which
we can realize a macrostate with N₁ balls in A, and N₂ balls in B, for different micro-
states in which the labels of the balls are different but the total numbers N₁, N₂
remain constant. This model allows us to draw several very important and rather
general conclusions.
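A quick numerical check of (4.166) and (4.168) (N = 10 chosen arbitrarily): the binomial distribution P(N₁) = 2^{−N} N!/(N₁!N₂!) is indeed left unchanged by one step of the recursion (4.165).

```python
from math import comb

N = 10
P = [comb(N, n) / 2 ** N for n in range(N + 1)]   # P(N1) = 2^{-N} N!/(N1! N2!)

def ehrenfest_step(P):
    """One time step of the Ehrenfest recursion (4.165)."""
    Q = []
    for n in range(N + 1):
        gain = (N - n + 1) / N * P[n - 1] if n >= 1 else 0.0
        loss = (n + 1) / N * P[n + 1] if n <= N - 1 else 0.0
        Q.append(gain + loss)
    return Q

P1 = ehrenfest_step(P)   # identical with P up to rounding
```

Stationarity here is equivalent to the detailed-balance relation w(N₁, N₁ − 1)P(N₁ − 1) = w(N₁ − 1, N₁)P(N₁), which the binomial coefficients satisfy term by term.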
First we have to distinguish between a given single system and an ensemble of
systems. When we consider what a given single system does in the course of time,
we find that the number N₁ (which is just a random variable) acquires a certain
sequence of values

N₁(0), N₁(τ), N₁(2τ), .... (4.169)

Thus, in the language of probability theory, a single event consists of the sequence
(4.169). Once this sequence is picked, there is nothing arbitrary left. On the other
hand, when talking about probability, we treat a set of events, i.e., the sample set
(sample space). In thermodynamics, we call this set "an ensemble" (including its
changes in course of time). An individual system corresponds to a sample point.
In thermodynamics, but also in other disciplines, the following question is
treated: If a system undergoes a random process for a very long time, does the tem-
poral average coincide with the ensemble average? In our book we deal with the
ensemble average if not otherwise noted. The ensemble average is defined as the
mean value of any function of random variables with respect to the joint probability
of Section 4.3.
In the following discussion the reader should always carefully distinguish
between a single system being discussed, or the whole ensemble. With this warning
in mind, let us discuss some of the main conclusions: The stationary solution is
by no means completely sharp, i.e., we do not find N/2 balls in box A and N/2
balls in box B with probability unity. Due to the selection process, there is always
a certain chance of finding a different number, N₁ ≠ N/2, in box A. Thus fluctua-
tions occur. Furthermore, if we had initially a given number N₁, we may show that
the system may return to this particular number after a certain time. Indeed, to
each individual process, say N₁ → N₁ + 1, there exists a finite probability that the
inverse process, N₁ → N₁ − 1, occurs. (Evidently this problem can be cast into a
more rigorous mathematical formulation but this is not our concern here.) Thus
the total system does not approach a unique equilibrium state, N₁ = N/2. This
seems to be in striking contrast to what we would expect on thermodynamic
grounds, and this is the difficulty which has occupied physicists for a long time.
It is, however, not so difficult to reconcile these considerations with what we
expect in thermodynamics, namely, the approach to equilibrium. Let us consider


the case that N is a very large number, which is a typical assumption in thermo-
dynamics (where one even assumes N → ∞). Let us first discuss the stationary
distribution function. If N is very large we may convince ourselves very quickly that
it is very sharply peaked around N₁ = N/2. Or, in other words, the distribution func-
tion effectively becomes a δ-function. Again this sheds new light on the meaning of
entropy. Since there are N!/(N₁! N₂!) realizations of state N₁, the entropy is given by

S = k_B ln [N!/(N₁! N₂!)], (4.170)

or, by Stirling's formula,

S = −k_B N(p₁ ln p₁ + p₂ ln p₂), (4.171)

or, after dividing it by N (i.e. per ball),

S/N = −k_B(p₁ ln p₁ + p₂ ln p₂) (4.172)

where

p₁ = N₁/N, p₂ = N₂/N. (4.173)

Since the probability distribution P(N₁) is strongly peaked, we shall find in all
practical cases N₁ = N/2 realized, so that when experimentally picking up any
distribution we may expect that the entropy has acquired its maximum value where
p₁ = p₂ = 1/2. On the other hand we must be aware that we can construct other
initial states in which we have the condition that N₁ is equal to a given number
N₀ ≠ N/2. If N₀ is given, we possess a maximal knowledge at an initial time. If
we now let time run, P will move to a new distribution and the whole process has the
character of an irreversible process. To substantiate this remark, let us consider the
development of an initial distribution where all balls are in one box. There is a
probability equal to one that one ball is removed. The probability is still close to
unity if only a small number of balls has been removed. Thus the system tends
very quickly away from its initial state. On the other hand, the probability for
passing a ball from box A to box B becomes approximately equal to the probability
for the inverse process if the boxes are equally filled. But these transition prob-
abilities are by no means vanishing, i.e., there are still fluctuations of the particle
numbers possible in each box. If we wait an extremely long time, t_r, there is a finite
probability that the whole system comes back to its initial state. This recurrence
time, t_r, has been calculated to be

t_r = τ 2^N N₁! N₂!/N! (4.174)

where t_r is defined as the mean time between the appearance of two identical
macrostates.
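The recurrence statement can be illustrated by direct simulation (parameters invented; the identification of the mean return time with 1/P(N₁) time steps is Kac's recurrence relation for a stationary Markov chain):

```python
import random
from math import comb

random.seed(0)
N = 6
n1 = N // 2                 # watch the macrostate N1 = N/2
state = n1
visits = 0
T = 200_000
for _ in range(T):
    # select one of the N balls at random; a ball in box A is hit with
    # probability state/N, in which case one ball moves A -> B
    if random.random() < state / N:
        state -= 1
    else:
        state += 1
    if state == n1:
        visits += 1

mean_return = T / visits            # measured mean recurrence time (in steps)
expected = 2 ** N / comb(N, n1)     # 1/P(N/2) = 3.2 for N = 6
```

For N = 6 the recurrence of the most probable macrostate takes only a few steps; for the initial state N₁ = N it would take of order 2^N steps, which is why, for thermodynamically large N, such recurrences are never observed in practice.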


In conclusion we may state the following: even if N is large, N₁ is not fixed
but may fluctuate. Large deviations from N₁ = N/2 are scarce, however. The most
probable state is N₁ = N/2. If no information about the state is available or, in
other words, if we have not prepared the system in a special manner, we must
assume that the stationary state is present. The above considerations resolve the
contradiction between an irreversible process which goes on in only one direction
towards an "equilibrium state", and fluctuations which might drive an
individual system back even to its initial state. Both may happen, and it is just a question
of probability, depending on the preparation of the initial state, as to which process
actually takes place. The important role that fluctuations play in different kinds of
systems will transpire in later chapters.

Exercises on 4.10

1) Why is σ² = ⟨N₁²⟩ − ⟨N₁⟩² a measure for the fluctuations of N₁? How large
is σ²?
2) Discuss Brownian movement by a single system and by an ensemble.

4.11* Master Equation and Limitations of Irreversible Thermodynamics

Let us assume that a system composed of subsystems can be described by a master
equation. We wish to show that the approach used by irreversible thermodynamics
which we explained in Section 3.5 implies a very restrictive assumption. The
essential point can be demonstrated if we have only two subsystems. These sub-
systems need not necessarily be separated in space, for instance in the laser the
two subsystems are the atoms and the light field. To make contact with earlier nota-
tion we denote the indices referring to one subsystem by i, those to the other sub-
system by t. The probability distribution then carries the indices ii'. The cor-
responding master equation (4.111) for (P jr =: P(j,j', t), (j,j' +-+ m), reads

(4.175)

We have encountered such indices already in Sections 3.3 and 3.4. In particular
we have seen (Sect. 3.5) that the entropy S is constructed from the probability
distribution P_i, whereas the entropy of system S′ is determined by a second prob-
ability distribution P′_{i′}. Furthermore we have assumed that the entropies are
additive. This additivity implies that P_{ii′} factorizes (cf. Exercise 1 on 3.5)

P_{ii′} = P_i P′_{i′}. (4.176)

Inserting (4.176) into (4.175) shows after some inspection that (4.175) can be solved
by (4.176) only under very special assumptions, namely, if

w(ii′; kk′) = w(i, k)δ_{i′k′} + w′(i′, k′)δ_{ik}. (4.177)


(4.177) implies that there is no interaction between the two subsystems. The
hypothesis (4.176) to solve (4.175) can thus only be understood as an approxima-
tion similar to the Hartree-Fock approximation in quantum theory (compare
exercise below). Our example shows that irreversible thermodynamics is valid only
if the correlations between the two subsystems are not essential and can be taken
care of in a very global manner in the spirit of a self-consistent field approach.
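The failure of the factorization (4.176) in the presence of coupling can be seen in the smallest nontrivial example. In the sketch below (all rates invented) two two-state subsystems are coupled by letting the flip rate of the first depend on the state of the second; the stationary joint distribution then no longer equals the product of its marginals.

```python
def stationary(w, steps=50_000, dt=2e-3):
    """Euler integration of the master equation until (near) stationarity."""
    N = len(w)
    P = [1.0 / N] * N
    for _ in range(steps):
        dP = [sum(w[m][n] * P[n] for n in range(N))
              - P[m] * sum(w[n][m] for n in range(N)) for m in range(N)]
        P = [P[m] + dt * dP[m] for m in range(N)]
    return P

# joint states (j, j') of two two-state subsystems, encoded as m = 2*j + j'
def build_w(coupling):
    w = [[0.0] * 4 for _ in range(4)]
    for j in (0, 1):
        for jp in (0, 1):
            m = 2 * j + jp
            # subsystem 1: flips 0 -> 1 at a rate that depends on j'
            rate1 = 1.0 + coupling * jp if j == 0 else 1.0
            w[2 * (1 - j) + jp][m] = rate1
            # subsystem 2: flips at rate 1, independently of j
            w[2 * j + (1 - jp)][m] = 1.0
    return w

def factorization_error(P):
    p1 = [P[0] + P[1], P[2] + P[3]]   # marginal of subsystem 1
    p2 = [P[0] + P[2], P[1] + P[3]]   # marginal of subsystem 2
    return max(abs(P[2 * j + jp] - p1[j] * p2[jp])
               for j in (0, 1) for jp in (0, 1))

err_free = factorization_error(stationary(build_w(0.0)))      # ~ 0
err_coupled = factorization_error(stationary(build_w(3.0)))   # clearly > 0
```

With the coupling switched off the rates decompose as in (4.177) and the product ansatz is exact; with coupling on, the ansatz can at best serve as a self-consistent-field approximation, as stated above.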

Exercise on 4.11

Formulate a variational principle and establish the resulting equations for P_i,
P′_{i′} in the case that the master equation obeys detailed balance.
Hint: Make the hypothesis (4.176), transform the master equation to one with a
self-adjoint operator (according to Sect. 4.7) and vary the resulting expres-
sion (4.142) with respect to P_i P′_{i′} = χ_{ii′}. For those who are acquainted with the
Hartree-Fock procedure, convince yourself that the present procedure is identical
with the Hartree-Fock self-consistent field procedure in quantum mechanics.

5. Necessity
Old Structures Give Way to New Structures

This chapter deals with completely deterministic processes. The question of


stability of motion plays a central role. When certain parameters change, stable
motion may become unstable and completely new types of motion (or structures)
appear. Though many of the concepts are derived from mechanics, they apply to
many disciplines.

5.1 Dynamic Processes


a) An Example: The Overdamped Anharmonic Oscillator
In practically all disciplines which can be treated quantitatively, we observe changes
of certain quantities as a function of time. These changes of quantities result from
certain causes. A good deal of the corresponding terminology has evolved from
mechanics. Let us consider as an example the acceleration of a particle with mass
m under the action of a force Fo. The velocity v of the particle changes in time ac-
cording to Newton's equation

m dv/dt = F₀. (5.1)

Let us further assume that the force F₀ may be decomposed into a "driving force"
F and a friction force which we assume proportional to the velocity, v. Thus we
replace F₀ by

F₀ → F − γv (5.2)

and obtain as equation of motion


m dv/dt + γv = F. (5.3)

In many practical cases F is a function of the particle coordinate, q. In the case


of a harmonic oscillator, Fig. 5.1, F is proportional to the elongation, q, from
the equilibrium position. Denoting Hooke's constant by k, we have (compare
Fig. 5.1b)

F(q) = -kq. (5.4)



Fig. 5.1a–c. The harmonic oscillator. (a) Configuration with spring and point mass m. "0" indicates the equilibrium position. (b) Force as function of elongation q. (c) Potential (cf. (5.14), (5.15))

The minus sign results from the fact that the elastic force tends to bring the particle
back to its equilibrium position. We express the velocity by the derivative of the
coordinate with respect to time and use a dot above q to indicate this

v = dq/dt ≡ q̇. (5.5)

With (5.2) and (5.5), (5.1) acquires the form

mq̈ + γq̇ = F(q). (5.6)

We shall use this equation to draw several important general conclusions which
are valid for other systems and allow us to motivate the terminology. We mainly
consider a special case in which m is very small and the damping constant γ very
large, so that we may neglect the first term against the second term on the lhs of
(5.6). In other words, we consider the so-called overdamped motion. We further
note that by an appropriate time scale

t = γt′ (5.7)

we may eliminate the damping constant γ. Eq. (5.6) then acquires the form

q̇ = F(q). (5.8)

Equations of this form are met in many disciplines. We illustrate this by a few
examples: In chemistry q may denote the density of a certain kind Q of molecules,


which are created by the reaction of two other types of molecules A and B with
concentrations a and b, respectively. The production rate of the density q is de-
scribed by

q̇ = kab (5.9)

where k is the reaction coefficient. An important class of chemical reactions will be


encountered later. These are the so-called autocatalytic reactions in which one of
the constituents, for example B, is identical with Q so that (5.9) reads

q̇ = kaq. (5.10)

Equations of the type (5.10) occur in biology where they describe the multiplica-
tion of cells or bacteria, or in ecology, where q can be identified with the number of a
given kind of animals. We shall come back to such examples in Section 5.4 and,
in a much more general form, later on in our book in Chapters 8 through 11. For the
moment we want to exploit the mechanical example. Here one introduces the notion
of "work", which is defined by work = force times distance. Consider for
example a body of weight G (stemming from the gravitational force that the earth
exerts on the body). Thus we may identify G with the force F. When we lift this
body to a height h (\equiv distance q) we "do work"

W = G \cdot h = F \cdot q.   (5.11)

In general cases the force F depends on the position, q. Then (5.11) can be formu-
lated only for an infinitesimally small distance, dq, and we have instead of (5.11)

dW = F(q)dq. (5.12)

To obtain the total work over a finite distance, we have to sum up, or, rather to
integrate

W = \int_{q_0}^{q_1} F(q)\,dq.   (5.13)

The negative of W is called the potential, V. Using (5.12) we find

F(q) = -\frac{dV}{dq}.   (5.14)

Let us consider the example of the harmonic oscillator with F given by (5.4).
One readily establishes that the potential has the form

V(q) = \tfrac{1}{2}kq^2   (5.15)

(besides an additive constant which we have put equal to zero). To interpret V,


which is plotted for this example in Fig. 5.1c, we compare it with the work done

108 5. Necessity

when lifting a weight. This suggests interpreting the solid curve in Fig. 5.2 as the
slope of a hill. When we bring the particle to a certain point on the slope, it will
fall back down the slope and eventually come to rest at the bottom of the hill.
Due to the horizontal slope at q = 0, the force F(q), (5.14), vanishes and thus \dot{q} = 0. The
particle is at an equilibrium point. Because the particle returns to this equilibrium
point when we displace it along the slope, this position is stable.

V(q) = G·h

Fig. 5.2. Potential curve interpreted as slope of a hill

Now consider a slightly more complicated system, which will turn out later on
to be of fundamental importance for self-organization (though this is at present not
at all obvious). We consider the so-called anharmonic oscillator which contains a
cubic term besides a linear term in its force F.

F(q) = -kq - k_1 q^3.   (5.16)

The equation of motion then reads

\dot{q} = -kq - k_1 q^3.   (5.17)

The potential is plotted in Fig. 5.3 for two different cases, namely, for k > 0 and

Fig. 5.3a and b. The potential of the force (5.16) for k > 0 (a) and k < 0 (b)

120
5.1 Dynamic Processes 109

k < 0 (k_1 > 0). The equilibrium points are determined by

\dot{q} = 0.   (5.18)

From Fig. 5.3, it is immediately clear that we have two completely different situa-
tions corresponding to whether k > 0 or k < 0. This is fully substantiated by an
algebraic discussion of (5.17) under the condition (5.18). The only solution in case
a) k > 0, k_1 > 0 is

q = 0, stable,   (5.19)

whereas in case
b) k < 0, k_1 > 0,
we find three solutions, namely, q = 0 which is evidently unstable, and two stable
solutions q_{1,2} so that

q = 0 unstable, \quad q_{1,2} = \pm\sqrt{|k|/k_1} stable.   (5.20)
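The two cases (5.19) and (5.20) can be checked numerically. The sketch below (our own illustration, not part of the book) lists the equilibria of the cubic force of (5.16) and tests the stability of each one via the sign of F'(q), stable meaning F' < 0; the helper names are assumptions of this sketch.

```python
# Equilibria of F(q) = -k*q - k1*q**3 (eq. (5.16)) and their stability,
# judged by the sign of F'(q) = -k - 3*k1*q**2 at each equilibrium.

def equilibria(k, k1):
    eqs = [0.0]
    if k < 0:  # two additional roots q = +/- sqrt(|k|/k1), cf. (5.20)
        r = (abs(k) / k1) ** 0.5
        eqs += [r, -r]
    return eqs

def is_stable(q, k, k1):
    dF = -k - 3 * k1 * q ** 2   # slope of the force at the equilibrium
    return dF < 0

# case (a): k > 0 -> single stable equilibrium q = 0, eq. (5.19)
# case (b): k < 0 -> q = 0 unstable, q = +/-sqrt(|k|/k1) stable, eq. (5.20)
```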

Fig. 5.4. The equilibrium coordinate q_e as function of k
(cf. (5.19) and (5.20)). For k > 0, q_e = 0, but for k < 0,
q_e = 0 becomes unstable (dashed line) and is replaced by
two stable positions (solid fork)

There are now two physically stable equilibrium positions (Fig. 5.4). In each of
them the particle is at rest and stays there for ever.
By means of (5.17) we may now introduce the concept of symmetry. If we
replace everywhere in (5.17) q by -q we obtain

-\dot{q} = kq + k_1 q^3,   (5.17a)

or, after division of both sides by -1, we obtain the old (5.17). Thus (5.17) remains
unchanged (invariant) under the transformation

q \to -q   (5.17b)

or, in other words, (5.17) is symmetric with respect to the inversion q \to -q.
Simultaneously the potential

V(q) = \tfrac{1}{2}kq^2 + \tfrac{1}{4}k_1 q^4   (5.21)


remains invariant under this transformation

V(q) \to V(-q) = V(q).   (5.17c)

Though the problem described by (5.17) is completely symmetric with respect to


the inversion q -> - q, the symmetry is now broken by the actually realized solu-
tion. When we gradually change k from positive values to negative values, we come
to k = 0 where the stable equilibrium position q = 0 becomes unstable. The
whole phenomenon may be thus described as a symmetry breaking instability. This
phenomenon can be still expressed in other words. When k passes from k > 0 to
k < 0 the stable equilibrium positions are exchanged, i.e., we have the so-called
exchange of stability. When we deform the potential curve from k > 0 to k < 0,
it becomes flatter and flatter in the neighborhood of q = O. Consequently the
particle falls down the potential curve more and more slowly, a phenomenon called

critical slowing down. For later purposes we shall call the coordinate q now r.
When passing from k > 0 to k < 0 the stable position r = 0 is replaced by an un-
stable position at r = 0 and a stable one at r = r_0. Thus we have the scheme

                      / unstable point
stable point <                             (5.22)
                      \ stable point

Since this scheme has the form of a fork, the whole phenomenon is called "bifurcation."
Another example of bifurcation is provided by Fig. 5.7 below where the
two points (stable at r_1, unstable at r_0) vanish when the potential curve is deformed.
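The critical slowing down mentioned above can be made quantitative in the overdamped linear case: for \dot{q} = -kq the relaxation time is 1/k and diverges as k approaches 0 from above. A small numerical sketch (our own, with arbitrary step size) makes this visible by timing the decay to half the initial value:

```python
# Critical slowing down in the overdamped linear case dq/dt = -k*q:
# the time needed to decay to q0/2 is ~ (ln 2)/k and grows without
# bound as k -> 0+.  Step size and parameter values are arbitrary.

import math

def time_to_halve(k, q0=1.0, h=1e-3):
    """Crude Euler estimate of the time needed to reach q0/2."""
    q, t = q0, 0.0
    while q > q0 / 2:
        q += h * (-k * q)
        t += h
    return t

t_fast = time_to_halve(k=1.0)   # close to ln 2
t_slow = time_to_halve(k=0.1)   # roughly ten times longer
```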

b) Limit Cycles
We now proceed from our one-dimensional problem to a two-dimensional problem
as follows: We imagine that we rotate the whole potential curve V(r) around the
V-axis which gives us Fig. 5.5. We consider a case in which the particle runs along

Fig. 5.5. Rotating symmetric potential

the bottom of the valley with a constant velocity in tangential direction. We may
either use cartesian coordinates q 1 and qz or polar coordinates (radius r and angle


\varphi)^1. Since the angular velocity, \dot{\varphi}, is constant, the equations of motion have the
form

\dot{r} = F(r),   (5.23)
\dot{\varphi} = \omega.
We do not claim here that such equations can be derived for purely mechanical
systems. We merely use the fact that the interpretation of V as a mechanical
potential is extremely convenient for visualizing our results. Often the equations of
motion are not given in polar coordinates but in cartesian coordinates. The relation
between these two coordinate systems is

q_1 = r\cos\varphi,
q_2 = r\sin\varphi.   (5.24)

Since the particle moves along the valley, its path is a circle. Having in mind the
potential depicted in Fig. 5.5 and letting the particle start close to q = 0 we see that
it spirals away to the circle of Fig. 5.6. The point q = 0 from which the particle
spirals away is called an unstable focus. The circle which it ultimately approaches is
called a limit cycle. Since our particle also ends up in this cycle if it is started from
the outer side, this limit cycle is stable. Of course, there may be other forms of the
potential in radial direction, for example that of Fig. 5.7a now allowing for a

Fig. 5.6. Unstable focus and limit cycle
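The spiraling onto the stable limit cycle can be reproduced numerically. The following sketch (ours, not from the book) integrates the radial equation of (5.23) with the cubic force of (5.16) for k < 0, i.e. \dot{r} = |k|r - k_1 r^3; since the uniform rotation does not affect r, only the radius is stepped, and all parameter values are illustrative.

```python
# Approach to a stable limit cycle: dr/dt = |k|*r - k1*r**3, dphi/dt = omega
# (eqs. (5.23) with the force (5.16), k < 0).  The analytic cycle radius
# is sqrt(|k|/k1); here |k| = k1 = 1, so the radius tends to 1 from
# either side.

def run_radius(r0, k_abs=1.0, k1=1.0, h=0.01, steps=5000):
    r = r0
    for _ in range(steps):
        r += h * (k_abs * r - k1 * r ** 3)
    return r

r_from_inside = run_radius(0.1)   # spirals outward from the unstable focus
r_from_outside = run_radius(2.0)  # spirals inward onto the same circle
```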

Fig. 5.7a and b. (a) An unstable limit cycle (r_0) and a stable limit cycle (r_1). (b) The limit cycles coalesce and disappear

^1 In mechanics, q_1 and q_2 may sometimes be identified with the coordinate q and the momentum p of a particle.


stable and an unstable limit cycle. When this potential is deformed, these two
cycles coalesce and the limit cycles disappear. We then have the following bifurca-
tion scheme

stable limit cycle   \
                      > no limit cycle   (5.25)
unstable limit cycle /

c) Soft and Hard Modes, Soft and Hard Excitations


When we identify the coordinate q with the elongation of a pendulum, we may
imagine that the present formalism is capable of describing clocks and watches.
Similarly these equations can be applied to oscillations of radio tubes or to lasers
(cf. Sect. 8.1). The great importance of limit cycles lies in the fact that now we can
understand and mathematically treat self-sustained oscillations. As our above
examples show clearly, the final curve (trajectory) is followed by the particle
independently of initial conditions. Consider the example depicted by Fig. 5.5 or
5.6. If we start the system near to the unstable focus q = 0, the oscillation starts
on its own. We have the case of self-excitation. Since an infinitesimally small initial
perturbation suffices to start the system, one calls this kind of excitation soft self-
excitation. The watch or clock starts immediately. Fig. 5.7a is an example of the
so-called hard excitation.
To bring the particle or coordinate from the equilibrium value q = 0 (\equiv r = 0)
to the stable limit cycle at r = r_1, the potential hill at r = r_0, i.e., a certain threshold
value, must be surmounted. In the literature considerable confusion has arisen with
the notation "soft" and "hard" modes and "soft" and "hard" excitations. In our
notation soft and hard mode refers to \omega = 0 and \omega \neq 0, respectively. The form of
the potential in the r-direction causes soft or hard excitations. When the system
rotates, the bifurcation scheme (5.22) must be replaced by the scheme

                 / stable limit cycle
stable focus <                           (5.26)
                 \ unstable focus

In view of (5.26) the bifurcation of limit cycles can be formulated as follows.
Consider that F(r) in (5.23) is a polynomial. To have circular motion requires
dr/dt = 0, i.e., F(r) must have real positive roots. A bifurcation or inverse bifurca-
tion occurs if one double (or multiple) real root r_0 becomes complex for certain
values of external parameters (in our above case k) so that the condition r_0 = real
cannot be fulfilled. While in the above examples we could easily find closed "trajec-
tories" defining limit cycles, it is a major problem in other cases to decide whether
given differential equations allow for stable or unstable limit cycles. Note that
limit cycles need not be circles but can be other closed trajectories (cf. Sects. 5.2,
5.3). An important tool is the Poincaré-Bendixson theorem, which we will present
in Section 5.2.


5.2* Critical Points and Trajectories in a Phase Plane. Once Again Limit Cycles
In this section we consider the coupled set of first-order differential equations

\dot{q}_1 = F_1(q_1, q_2),   (5.27)

\dot{q}_2 = F_2(q_1, q_2).   (5.28)

The usual equation of motion of a particle with mass m in mechanics

m\ddot{q} - F(q, \dot{q}) = 0   (5.29)

is contained in our formalism. Because putting

m\dot{q} = p \quad (p = momentum)   (5.30)

we may write (5.29) in the form

m\ddot{q} = \dot{p} = F_2(q, p) \equiv F(q, p),   (5.31)

and supplement it by (5.30) which has the form

\dot{q} = p/m \equiv F_1(q, p).   (5.32)

This is identical with (5.27), identifying q_1 = q, q_2 = p. We confine our analysis to
so-called autonomous systems in which F_1, F_2 do not explicitly depend on time.
Writing out the differentials on the left-hand side in (5.27) and (5.28) and dividing
(5.28) by (5.27), one immediately establishes^2

\frac{dq_2}{dq_1} = \frac{F_2(q_1, q_2)}{F_1(q_1, q_2)}.   (5.33)

The meaning of (5.27) and (5.28) becomes clearer when we write \dot{q}_1 in the form

\dot{q}_1 = \Delta q_1 / \Delta t,   (5.34)

where

\Delta q_1 = q_1(t + \tau) - q_1(t),   (5.35)


and

\Delta t = \tau.   (5.36)

^2 This "division" is done here in a formal way, but it can be given a rigorous mathematical basis,
which, however, we shall not discuss here.


Thus we may write (5.27) in the form

q_1(t + \tau) = q_1(t) + \tau F_1(q_1(t), q_2(t)).   (5.37)

This form (and an equivalent one for q2) lends itself to the following interpretation:
If ql and q2 are given at time t, then their values can be determined at a later time
t + 't by means of the rhs of (5.37) and

q_2(t + \tau) = q_2(t) + \tau F_2(q_1(t), q_2(t))   (5.38)

uniquely, which can also be shown quite rigorously. Thus when at an initial time q_1
and q_2 are given, we may proceed from one point to the next. Repeating this pro-
cedure, we find a unique trajectory in the q_1,q_2-plane (Fig. 5.8). We can let this
trajectory begin at different initial values. Thus a given trajectory corresponds to
an infinity of motions differing from each other by the "phase" (or initial value)
q_1(0), q_2(0).


Fig. 5.8. Each trajectory can be approximated by


a polygonal track which enables us to construct
a trajectory. Here we have shown the approxima-
tion by means of secants. Another well known
approach consists in proceeding from q(t) to
q(t+r) by taking the tangents to the true trajec-
tory using the rules of (5.37), (5.38)
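The polygon construction of (5.37), (5.38) is directly programmable. In the sketch below (our own illustration), the right-hand sides F_1 = q_2, F_2 = -q_1 describe a harmonic oscillator whose exact trajectory through (1, 0) is the circle q_1 = cos t, q_2 = -sin t, so the quality of the polygon approximation can be judged against a known answer.

```python
# Polygon (Euler) construction of a trajectory from (5.37), (5.38):
# from (q1, q2) at time t, the point at t + tau follows from the
# right-hand sides F1, F2.  Example: F1 = q2, F2 = -q1, whose exact
# trajectory is the circle q1 = cos t, q2 = -sin t.

def polygon_track(F1, F2, q1, q2, tau=0.001, steps=1571):
    for _ in range(steps):
        q1, q2 = q1 + tau * F1(q1, q2), q2 + tau * F2(q1, q2)
    return q1, q2

# integrate over t = steps * tau, approximately a quarter turn (pi/2),
# which should land near the point (0, -1)
q1, q2 = polygon_track(lambda a, b: b, lambda a, b: -a, 1.0, 0.0)
```

With a smaller tau the polygon hugs the true circle more closely; with a large tau the approximate trajectory visibly spirals outward, which is the well-known drawback of this simplest scheme.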

Let us now choose other points q_1, q_2 not lying on this trajectory but close to it.
Through these points other trajectories pass. We thus obtain a "field" of trajec-
tories, which may be interpreted as streamlines of a fluid. An important point to
discuss is the structure of these trajectories. First it is clear that trajectories can
never cross each other because at the crossing point the trajectory must continue
in a unique way which would not be so if they split up into two or more trajectories.
The geometrical form of the trajectories can be determined by eliminating the time-
dependence from (5.27), (5.28) leading to (5.33). This procedure breaks down,
however, if simultaneously

F_1(q_1^0, q_2^0) = F_2(q_1^0, q_2^0) = 0   (5.39)

for a couple q_1^0, q_2^0. In this case (5.33) results in an expression 0/0 which is mean-
ingless. Such a point is called a singular (or critical) point. Its coordinates are deter-
mined by (5.39). Due to (5.27) and (5.28), (5.39) implies \dot{q}_1 = \dot{q}_2 = 0, i.e., the
singular point is also an equilibrium point. To determine the nature of equilibrium


(stable, unstable, neutral), we have to take into account trajectories close to the
singular point. We call each singular point asymptotically stable if all trajectories
starting sufficiently near it tend to it asymptotically for t \to \infty (compare Fig. 5.9a).
A singular point is asymptotically unstable if all trajectories which are sufficiently
close to it tend asymptotically (t \to -\infty) away from it. Interpreting trajectories as
streamlines, we call singular points which are asymptotically stable "sinks", be-
cause the streamlines terminate at them. Correspondingly, asymptotically unstable
singular points are called "sources".

Fig. 5.9a-c. Trajectories (a), equipotential curves (b), and potential (c), V = \tfrac{1}{2} q_1^2 + \tfrac{a}{2} q_2^2, a > 0, of a node

The behavior of trajectories near critical points may be classified. To introduce
these classes we first treat several special cases. We assume that the singular point
lies in the origin of the coordinate system, which can always be achieved by a shift
of the coordinate system. We further assume that F_1 and F_2 can be expanded into a
Taylor series and that at least F_1 or F_2 starts with a term linear in q_1, q_2. Neglecting
higher powers in q_1, q_2, we have to discuss (5.27) and (5.28) in a form where F_1
and F_2 are linear in q_1 and q_2. We give a discussion of the different classes which
may occur:


1) Nodes and Saddle Points


Here (5.27) and (5.28) are of the form

\dot{q}_1 = q_1,
\dot{q}_2 = aq_2,   (5.40)

which allow for the solution

q_1 = c_1 e^t, \quad q_2 = c_2 e^{at}.   (5.41)

Similarly the equations

\dot{q}_1 = -q_1,
\dot{q}_2 = -aq_2   (5.42)

have the solutions

q_1 = c_1 e^{-t}, \quad q_2 = c_2 e^{-at}.   (5.43)

For q_1 \neq 0, (5.33) acquires the form

\frac{dq_2}{dq_1} = \frac{aq_2}{q_1}   (5.44)

with the solution

q_2 = C q_1^a,   (5.45)

which can also be obtained from (5.41) and (5.43) by the elimination of time.
For a > 0 we obtain parabolic integral curves depicted in Fig. 5.9a. In this case we
have for the slope dq_2/dq_1

\frac{dq_2}{dq_1} = Ca q_1^{a-1}.   (5.46)

We now distinguish between positive and negative exponents of q_1. If a > 1 we
find dq_2/dq_1 \to 0 for q_1 \to 0. Every integral curve with exception of the q_2-axis
approaches the singular point along the q_1-axis. If a < 1 one immediately establishes
that the roles of q_1 and q_2 are interchanged. Singular points which are surrounded
by curves of the form of Fig. 5.9a are called nodes. For a = 1 the integral curves
are half lines converging to or radiating from the singular point. In the case of the
node every integral curve has a limiting direction at the singular point.
We now consider the case a < 0. Here we find the hyperbolic curves
We now consider the case a < O. Here we find the hyperbolic curves

q_2 = C q_1^{-|a|}.   (5.47)


Fig. 5.10a-c. Trajectories (a), equipotential curves (b), and potential (c), V = \tfrac{1}{2} q_1^2 - \tfrac{|a|}{2} q_2^2, a < 0, of a saddle point

For a = -1 we have ordinary hyperbolas. The curves are depicted in Fig. 5.10. Only
four trajectories tend to the singular point, namely A_s, B_s for t \to \infty and C_s, D_s for
t \to -\infty. The corresponding singular point is called a saddle point.

2) Focus and Center


We now consider equations of the form

\dot{q}_1 = -aq_1 - q_2,   (5.48)

\dot{q}_2 = q_1 - aq_2.   (5.49)

For a > 0, by use of polar coordinates (compare (5.24)), (5.48) and (5.49) acquire
the form

\dot{r} = -ar,   (5.50)
\dot{\varphi} = 1,   (5.51)


which have the solutions

r = C_1 e^{-at},   (5.52)
\varphi = t + C_2.   (5.53)

C_1, C_2 are integration constants. We have already encountered trajectories of this
kind in Section 5.1. The trajectories are spirals approaching the singular point at
the origin. The radius vector rotates anticlockwise with the frequency (5.51). This
point is called a stable focus (Fig. 5.11). In the case a < 0 the motion departs from
the focus: we have an unstable focus. For a = 0 a center results (Fig. 5.12).

Fig. 5.11a-c. Trajectories close to a stable focus. Here no potential exists, except when using a rotating frame (b and c)

We now turn to the general case. As mentioned above we assume that we can
expand F_1 and F_2 around q_1^0 = 0 and q_2^0 = 0 into a Taylor series and that we
may keep as leading terms those linear in q_1 and q_2, i.e.,

\dot{q}_1 = aq_1 + bq_2,   (5.54)

\dot{q}_2 = cq_1 + dq_2.   (5.55)


Fig. 5.12. A center. No potential exists, except when using a rotating frame. In it, V = const.   Fig. 5.13. Compare text

One then makes a linear transformation

\xi = \alpha q_1 + \beta q_2,
\eta = \gamma q_1 + \delta q_2,   (5.56)

so that (5.54) and (5.55) are transformed into

\dot{\xi} = \lambda_1 \xi,   (5.57)
\dot{\eta} = \lambda_2 \eta.   (5.58)

The further discussion may be performed as above, if \lambda_1, \lambda_2 are real. If \lambda_1 (= \lambda_2^*)
are complex, inserting \xi = \eta^* = r e^{i\varphi} into (5.57), (5.58) and separating the equa-
tions into their real and imaginary parts yields equations of the type (5.50), (5.51)
(with a = -Re \lambda_1, and +1 replaced by Im \lambda_1).
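The classification of singular points by the eigenvalues of the linearized system (5.54), (5.55) can be automated. The sketch below is our own helper, with an arbitrary numerical tolerance: real eigenvalues of equal sign give a node, of opposite sign a saddle; complex eigenvalues give a focus when the real part is nonzero and a center when it vanishes.

```python
# Classify the singular point of the linear system (5.54), (5.55)
# from the eigenvalues of the matrix ((a, b), (c, d)).

import cmath

def classify(a, b, c, d):
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)     # works for negative discriminants
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    if abs(lam1.imag) > 1e-12:               # complex pair
        return "center" if abs(lam1.real) < 1e-12 else "focus"
    return "saddle" if lam1.real * lam2.real < 0 else "node"

# examples matching Sect. 5.2: (5.40) gives a node, its a < 0 variant a
# saddle, (5.48)/(5.49) with a > 0 a focus, and with a = 0 a center
```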
Let us now discuss a further interesting problem, namely, how to find limit
cycles in the plane: the Poincaré-Bendixson theorem. Let us pick a point q^0 =
(q_1^0, q_2^0) which is assumed to be nonsingular, and take it as initial value for the
solution of (5.27), (5.28). For later times, t > t_0, q(t) will move along that part of
the trajectory which starts at q(t_0) = q^0. We call this part a half-trajectory. The
Poincaré-Bendixson theorem then states: If a half-trajectory remains in a finite
domain without approaching singular points, then this half-trajectory is either a
limit cycle or approaches such a cycle. (There are also other forms of this theorem
available in the literature). We do not give a proof here, but discuss some ways
of applying it invoking the analogy between trajectories and streamlines in hydro-
dynamics. Consider a finite domain called D which is surrounded by an outer and
an inner curve (compare Fig. 5.13). In more mathematical terms: D is doubly con-
nected. If all trajectories enter the domain and there are no singular points in it or


on its boundaries, the conditions of the theorem are fulfilled^3. When we let
the inner boundary shrink to a point, we see that the conditions of the Poincaré-
Bendixson theorem are still fulfilled if that point is a source.

The Potential Case (n Variables)


We now consider n variables qj which obey the equations

\dot{q}_j = F_j(q_1, \ldots, q_n).   (5.59)

The case in which the forces F j can be derived from a potential V by means of

F_j = -\frac{\partial V(q_1, \ldots, q_n)}{\partial q_j}   (5.60)

represents a straightforward generalization of the one-dimensional case considered


in Section 5.1. It can be directly checked by means of the forces F_j whether they can be
derived from a potential. For this they have to satisfy the conditions

\frac{\partial F_j}{\partial q_k} = \frac{\partial F_k}{\partial q_j} \quad for all j, k = 1, \ldots, n.   (5.61)

The system is in equilibrium if (5.60) vanishes. This can happen for certain points
q_1^0, \ldots, q_n^0, but also for lines or for hypersurfaces. When we let V depend on
parameters, quite different types of hypersurfaces of equilibrium may emerge
from each other, leading to more complicated types of bifurcation. A theory of
some types has been developed by Thom (cf. Sect. 5.5).

Exercise on 5.2

Extend the above considerations of the Poincaré-Bendixson theorem to the time-
reversed case by inverting the directions of the streamlines.

5.3* Stability
The state of a system may change considerably if it loses its stability. Simple ex-
amples were given in Section 5.1 where a deformation of the potential caused an
instability of the original state and led to a new equilibrium position of a "fictitious
particle". In later chapters we will see that such changes will cause new structures
on a macroscopic scale. For this reason it is desirable to look more closely into the
question of stability and first to define stability more exactly. Our above examples
referred to critical points where \dot{q}_j = 0. The idea of stability can be given a much
more general form, however, which we want to discuss now.
3 For the experts we remark that Fig. 5.5 without motion in tangential direction represents a
"pathological case". The streamlines end perpendicularly on a circle. This circle represents a
line of critical points but not a limit cycle.


To this end we consider the set of differential equations

\dot{q}_j = F_j(q_1, \ldots, q_n),   (5.62)

which determine the trajectories in q space. Consider a solution q_j = u_j(t) of (5.62)
which can be visualized as the path of a particle in q space. This solution is uniquely
determined by the original values at the initial time t = t_0. In practical applications
the initial values are subject to perturbations. Therefore, we ask what happens to
the path of the "particle" if its initial condition is not exactly the one we have
considered above. Intuitively it is clear that we shall call the trajectory u_j(t) stable
if other trajectories which are initially close to it remain close to it for later times.
Or, in other words, if we prescribe a surrounding of u_j(t), then u_j(t) is stable if all
solutions which start initially in a sufficiently close surrounding of u_j(0) remain
in the prescribed surrounding (Fig. 5.14). If we cannot find an initial surrounding

Fig. 5.14. Example of a stable trajectory u. u^{(1)}: neighboring trajectory

S_0 so that this criterion can be fulfilled, u_j(t) is called unstable. Note that this
definition of stability does not imply that the trajectories v_j(t) which are initially
close to u_j(t) approach it so that the distance between u_j and v_j vanishes eventually.
To take care of this situation we define asymptotic stability. The trajectory u_j is
called asymptotically stable if for all v_j(t) which fulfil the stability criterion the condition

\lim_{t \to \infty} |u_j(t) - v_j(t)| = 0   (5.63)
holds in addition. (cf. Fig. 5.6, where the limit cycle is asymptotically stable). So far
we have been following up the motion of the representative point (particle on its
trajectory). If we do not look at the actual time dependence of u but only at the form
of the trajectories then we may define orbital stability. We first give a qualitative
picture and then cast the formulation into a mathematical frame.
This concept is a generalization of the stability definition introduced above.
The stability consideration there started from neighboring points. Stability then
meant that moving points always remain in a certain neighborhood in the course
of time. In the case of orbital stability we do not confine ourselves to single points,
but consider a whole trajectory C. Orbital stability now means, a trajectory in a
sufficiently close neighborhood of C remains in a certain neighborhood in the
course of time. This condition does not necessarily imply that two points which


Fig. 5.15. Example of orbital stability

belong to two different trajectories and have been close together originally, do
closely remain together at later times (cf. Fig. 5.15). In mathematical terms this
may be stated as follows: C is orbitally stable if, given \varepsilon > 0, there is \eta > 0 such that
if R is a representative point of another trajectory which is within a distance \eta of C at time T,
then R remains within a distance \varepsilon of C for t > T. If no such \eta exists, C is unstable.
We further may define asymptotic orbital stability by the condition that C is
orbitally stable, and in addition that the distance between R and C tends to 0 as t
goes to \infty. To elucidate the difference between stability and orbital stability consider
as an example the differential equations

\dot{\varphi} = r,   (5.64a)
\dot{r} = 0.   (5.64b)

According to (5.64b) the orbits are circles and according to (5.64a) the angular
velocity increases with the distance. Thus two points on two neighboring circles
which are initially close are at a later time far apart. Thus there is no stability.
However, the orbits remain close and are orbitally stable.
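This drifting-apart of phases on neighboring circles is easy to demonstrate numerically. The sketch below (our own, with illustrative radii and time) uses the fact that for (5.64) the angular velocity equals the radius, so the exact phase is \varphi = rt:

```python
# Illustration of (5.64): on circles the angular velocity equals the
# radius, so two initially close points on neighboring orbits drift
# apart in phase while the orbits themselves stay 0.01 apart
# (orbital stability without stability).

import math

def position(r, t):
    """Point on the circle of radius r at time t (phase phi = r*t)."""
    return (r * math.cos(r * t), r * math.sin(r * t))

r1, r2, t = 1.00, 1.01, 100.0
x1, y1 = position(r1, t)
x2, y2 = position(r2, t)
point_distance = math.hypot(x1 - x2, y1 - y2)  # large: phases differ by 1 rad
orbit_distance = abs(r1 - r2)                  # the circles stay close
```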
One of the most fundamental problems now is how to check whether in a given
problem the trajectories are stable or not. There are two main approaches to this
problem, a local criterion and a global criterion.

1) Local Criterion
In this we start with a given trajectory q_j(t) = u_j(t) whose asymptotic stability we
want to investigate. To this end we start with the neighboring trajectory

q_j(t) = u_j(t) + \xi_j(t),   (5.65)

which differs from u_j by a small quantity \xi_j. Apparently we have asymptotic sta-
bility if \xi_j \to 0 for t \to \infty. Inserting (5.65) into (5.62) yields

\dot{u}_j + \dot{\xi}_j = F_j(u_1 + \xi_1, \ldots, u_n + \xi_n).   (5.66)

Since the \xi's are assumed to be small, we may expand the right-hand side of (5.66)
into a Taylor series. Its lowest order term cancels against \dot{u}_j on the lhs. In the
next order we obtain

\dot{\xi}_j = \sum_k \frac{\partial F_j}{\partial q_k}\Big|_{q=u(t)} \xi_k.   (5.67)


In general the solution of (5.67) is still a formidable problem because \partial F_j/\partial q_k is still
a function of time since the u_j's are functions of time. A number of cases can be
treated. One is the case in which the u_j's are periodic functions. Another very
simple case is provided if the trajectory u_j consists of a singular point so that u_j =
constant. In that case we may put

\frac{\partial F_j}{\partial q_k} = A_{jk},   (5.68)

where A_{jk} are constants. Introducing the matrix

A = (A_{jk}),   (5.69)

the set of (5.67) acquires the form

\dot{\xi} = A\xi.   (5.70)

These first-order linear differential equations with constant coefficients may be solved
in a standard manner by the hypothesis

\xi(t) = \xi^0 e^{\lambda t}.   (5.71)

Inserting (5.71) into (5.70), performing the differentiation with respect to time, and
dividing the equation by e^{\lambda t}, we are left with a set of linear homogeneous algebraic
equations for \xi^0. The solvability condition requires that the determinant vanishes,
i.e.,
i.e.,

\begin{vmatrix} A_{11}-\lambda & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22}-\lambda & \cdots & A_{2n} \\ \vdots & & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn}-\lambda \end{vmatrix} = 0.   (5.72)

The evaluation of this determinant leads to the characteristic equation

f(\lambda) \equiv C_0 \lambda^n + C_1 \lambda^{n-1} + \cdots + C_n = 0   (5.73)

of order n. If the real parts of all \lambda's are negative, then all possible solutions (5.71)
decay with time so that the singular point is per definition asymptotically stable.
If any one of the real parts of the \lambda's is positive, we can always find an initial
trajectory which departs from u_j so that the singular point is unstable. We have
so-called marginal stability if the \lambda's have nonpositive real parts but one or some
of them are equal to zero. The following should be mentioned for practical applica-
tions. Fortunately it is not necessary to determine the solutions of (5.73). A number
of criteria have been developed, among them the Hurwitz criterion which allows us
to check directly from the coefficients of (5.73) whether Re \lambda < 0.


Hurwitz Criterion
We quote this criterion without proof. All zeros of the polynomial f(\lambda) (5.73) with
real coefficients C_i lie in the left half of the complex \lambda-plane (i.e., Re \lambda < 0) if and
only if the following conditions are fulfilled:

a) C_1/C_0 > 0, C_2/C_0 > 0, \ldots, C_n/C_0 > 0;

b) the principal subdeterminants H_j (Hurwitz determinants) of the quadratic
scheme

C_1   C_0   0     0     \cdots   0         0
C_3   C_2   C_1   C_0   \cdots   0         0
C_5   C_4   C_3   C_2   \cdots   0         0
\vdots
0     0     0     0     \cdots   C_{n-1}   C_{n-2}
0     0     0     0     \cdots   0         C_n

(e.g., H_1 = C_1, H_2 = C_1 C_2 - C_0 C_3, \ldots, H_{n-1}, H_n = C_n H_{n-1}) satisfy the inequal-
ities H_1 > 0; H_2 > 0; \ldots; H_n > 0.
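For practical use, the quadratic scheme above translates into a few lines of code. The following is a minimal sketch (our own names and conventions, with marginal cases reported as unstable because of the strict inequalities); the entry of the Hurwitz matrix in row i, column j is C_{2i-j}, with C_k = 0 outside 0..n:

```python
# Hurwitz criterion for f(l) = C0*l**n + C1*l**(n-1) + ... + Cn:
# all roots have negative real part iff all Ci/C0 > 0 and all leading
# principal minors of the Hurwitz matrix H[i][j] = C(2i - j) are > 0.

def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if abs(M[p][c]) < 1e-12:
            return 0.0
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return d

def hurwitz_stable(C):
    n = len(C) - 1
    coef = lambda k: C[k] if 0 <= k <= n else 0.0
    if any(coef(k) / C[0] <= 0 for k in range(1, n + 1)):
        return False   # condition a) violated
    H = [[coef(2 * i - j) for j in range(1, n + 1)] for i in range(1, n + 1)]
    return all(det([row[:m] for row in H[:m]]) > 0 for m in range(1, n + 1))
```

For example, l**2 + 3l + 2 (roots -1, -2) passes the test, while l**2 - l + 1 fails already at condition a), and l**3 + l**2 + l + 1 (a purely imaginary root pair, i.e. marginal stability) fails condition b).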

2) Global Stability (Ljapunov Function)
In the foregoing we have checked the stability by investigating the immediate
vicinity of the point under discussion. On the other hand the potential curves de-
picted in Section 5.1 had allowed us to discuss the stability just by looking at the
form of the potential, or in other words, we had a global criterion. So whenever
we have a system which allows for a potential (compare Sect. 5.2) we can im-
mediately discuss the stability of that system. However, there are quite a number
of systems which do not possess a potential. This is where Ljapunov's
ideas come in. Ljapunov has defined a function which has the desirable properties
of the potential to allow discussion of global stability, but which is not based on
the requirement that the forces can be derived from a potential. We first define this
Ljapunov function V_L(q) (cf. Fig. 5.16).
1) V_L(q) is continuous with its first partial derivatives in a certain region \Omega about
the origin.   (5.74a)
2) V_L(0) = 0.   (5.74b)

Without loss of generality we put the critical point into the origin.
3) For q \neq 0, V_L(q) is positive in \Omega.   (5.75)

4) We now take into account that q occurring in V_L is a time-dependent function


Fig. 5.16. Example of a Ljapunov function


VL in two dimensions. In contrast to the
potential-case (cf. page 120) where the
"particle" always follows the steepest descent
(dashed line with arrow) it may follow here
an arbitrary path (solid line with arrow),
provided it does not go uphill (stability) or
it goes everywhere downhill (asymptotic
stability)

obeying the differential equations (5.62). Thus V_L becomes itself a time-dependent
function. Taking the derivative of V_L with respect to time we obtain

\dot{V}_L = \sum_j \frac{\partial V_L}{\partial q_j}\dot{q}_j,   (5.76)

where we can replace \dot{q} by means of (5.62). Now the requirement is

\dot{V}_L = F(q) \cdot \mathrm{grad}\, V_L \leq 0 in \Omega.   (5.77)

The great importance of V_L rests on the fact that in order to check (5.77) we need
not solve the differential equations but just have to insert the Ljapunov function V_L
into (5.77). We are now in a position to state Ljapunov's stability theorem.

A) Stability Theorem
If there exists in some neighborhood \Omega of the origin a Ljapunov function V_L(q),
then the origin is stable. We furthermore have the

B) Asymptotic Stability Theorem
If -\dot{V}_L is likewise positive definite in \Omega, the stability is asymptotic. On the other
hand one may also formulate theorems for instability. We mention two important
cases. The first one is due to Ljapunov and the next one represents a generaliza-
tion due to Chetayev.

C) Instability Theorem of Ljapunov
Let V(q) with V(0) = 0 have continuous first partial derivatives in \Omega. Let \dot{V} be
positive definite and let V be able to assume positive values arbitrarily near the
origin. Then the origin is unstable.

D) Instability Theorem of Chetayev
Let \Omega be a neighborhood of the origin. Let there be given a function V(q) and a
region \Omega_1 in \Omega with the following properties:
1) V(q) has continuous first partial derivatives in \Omega,
2) V(q) and \dot{V}(q) are positive in \Omega_1,
3) at the boundary points of \Omega_1 inside \Omega, V(q) = 0,
4) the origin is a boundary point of \Omega_1.
Then the origin is unstable.
So far we have treated autonomous systems in which F in (5.62) does not depend
on time t explicitly. We mention in conclusion that the Ljapunov theory can be
extended in a simple manner to nonautonomous systems

\dot{q} = F(q, t).   (5.78)

Here it must be assumed that the solutions of (5.78) exist and are unique and that

F(0, t) = 0 for t \geq 0.   (5.79)

Though the reader will rather often meet the concept of Ljapunov functions in the
literature, there are only a few examples in which the Ljapunov function can be
determined explicitly. Thus, though the theory is very beautiful, its practical ap-
plicability has been rather limited so far.
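One case where a Ljapunov function is easy to write down can serve as a worked illustration (the system and the helper names below are our own example, not the book's): the linear system \dot{q}_1 = -q_1 - q_2, \dot{q}_2 = q_1 - q_2 violates the potential condition (5.61), since \partial F_1/\partial q_2 = -1 but \partial F_2/\partial q_1 = +1, yet V_L = q_1^2 + q_2^2 fulfils (5.74)-(5.77):

```python
# Ljapunov-function check for a system without a potential:
# dq1/dt = -q1 - q2, dq2/dt = q1 - q2.  Along any trajectory
# VL_dot = 2*q1*(-q1 - q2) + 2*q2*(q1 - q2) = -2*(q1**2 + q2**2) <= 0.

def VL(q1, q2):
    return q1 * q1 + q2 * q2

def VL_dot(q1, q2):
    F1, F2 = -q1 - q2, q1 - q2
    return 2 * q1 * F1 + 2 * q2 * F2   # grad VL . F, cf. (5.76), (5.77)

samples = [(1.0, 0.0), (0.3, -0.7), (-2.0, 1.5)]
# VL_dot < 0 away from the origin, so by theorems A and B the origin is
# asymptotically stable, with no potential needed
```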

Exercise on 5.3

1) Show that the information gain (3.38)

K(P, P') = \sum_m P(m) \ln (P(m)/P'(m))   (E.1)

is a Ljapunov function for the master equation (4.111), where P'(m) is the sta-
tionary solution of (4.111) and P(m) = P(m, t) a time-dependent solution of it.
Hint: Identify P(m) with q_j of this chapter (i.e., P \to q, m \to j) and check that
(E.1) fulfils the axioms of a Ljapunov function. Change q = 0 to q = q^0 \equiv
\{P'(m)\}. To check (5.75) use the property (3.41). To check (5.77), use (4.111) and
the inequality \ln x \geq 1 - 1/x. What does the result mean for P(m, t)? Show that
P'(m) is asymptotically stable.
2) If microreversibility holds, the transition probabilities w(m, m') of (4.111) are
symmetric:

w(m, m') = w(m', m).


Then the stationary solution of (4.111), P'(m) = const. Show that it follows
from exercise 1) that the entropy S increases in such a system up to its maximal
value (the famous Boltzmann's H-theorem).
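The claim of exercise 1 can be illustrated numerically. The Python sketch below is an added illustration (not part of the original text); the three-state rate matrix w is a made-up example without microreversibility. It relaxes the master equation by Euler steps and checks that the information gain K(P, P') stays nonnegative and decreases monotonically toward zero.

```python
import math

def master_step(P, w, dt):
    """One Euler step of the master equation (4.111):
    dP(m)/dt = sum_m' [w(m,m') P(m') - w(m',m) P(m)]."""
    n = len(P)
    return [P[m] + dt * sum(w[m][mp] * P[mp] - w[mp][m] * P[m]
                            for mp in range(n)) for m in range(n)]

def info_gain(P, Pst):
    """Information gain K(P, P') of Eq. (E.1)."""
    return sum(p * math.log(p / ps) for p, ps in zip(P, Pst))

# hypothetical transition rates w[m][m'] for jumps m' -> m
w = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 3.0],
     [2.0, 1.0, 0.0]]
dt = 0.001

# stationary solution P'(m): relax an arbitrary start for a long time
Pst = [1/3, 1/3, 1/3]
for _ in range(50000):
    Pst = master_step(Pst, w, dt)

# K decreases monotonically along a time-dependent solution
P = [0.8, 0.1, 0.1]
K_values = [info_gain(P, Pst)]
for _ in range(5000):
    P = master_step(P, w, dt)
    K_values.append(info_gain(P, Pst))

assert all(K_values[i+1] <= K_values[i] + 1e-12
           for i in range(len(K_values) - 1))
print("K: %.4f -> %.2e" % (K_values[0], K_values[-1]))
```

Since each Euler step with small dt is itself a stochastic map with the same fixed point, the monotone decrease of K holds exactly for the discrete iteration as well.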

5.4 Examples and Exercises on Bifurcation and Stability


Our mechanical example of a particle in a potential well is rather instructive because
it allows us to explain quite a number of general features inherent in the original


differential equation. On the other hand it does not shed any light on the importance
of these considerations in other disciplines. To this end we present a few examples.
Many more will follow in later chapters. The great importance of bifurcation rests
in the fact that even a small change of a parameter, in our case the force constant
k, leads to dramatic changes of the system.
Let us first consider a simple model of a laser. The laser is a device in which
photons are produced by the process of stimulated emission. (For more details see
Sect. 8.1). For our example we need to know only a few facts. The temporal
change of photon number n, or, in other words, the photon production rate is
determined by an equation of the form

ṅ = gain − loss. (5.80)

The gain stems from the so-called stimulated emission. It is proportional to the
number of photons present and to the number of excited atoms, N, (For the experts:
We assume that the ground level, where the laser emission terminates, is kept
empty) so that

gain = GNn. (5.81)

G is a gain constant which can be derived from a microscopic theory, but this is not
our present concern. The loss term comes from the escape of photons through the
endfaces of the laser. The only thing we need to assume is that the loss rate is
proportional to the number of photons present. Therefore, we have

loss = 2κn. (5.82)

2κ = 1/t₀, where t₀ is the lifetime of a photon in the laser. Now an important point
comes in which renders (5.80) nonlinear. The number of excited atoms N decreases
by the emission of photons. Thus if we keep the number of excited atoms without
laser action at a fixed number N₀ by an external pump, the actual number of excited
atoms will be reduced due to the laser process. This reduction, ΔN, is proportional
to the number of photons present, because all the time the photons force the atoms
to return to their groundstates. Thus the number of excited atoms has the form

N = N₀ − ΔN, ΔN = αn. (5.83)

Inserting (5.81), (5.82) and (5.83) into (5.80) gives us the basic laser equation in our
simplified model

ṅ = −kn − k₁n², k₁ = Gα, (5.84)

where the constant k is given by

k = 2κ − GN₀ ≷ 0. (5.85)

If there is only a small number of excited atoms, N₀, due to the pump, k is positive,


whereas for sufficiently high N₀, k can become negative. The change of sign occurs
at

GN₀ = 2κ, (5.86)

which is the laser threshold condition. Bifurcation theory now tells us that for
k > 0 there is no laser light emission whereas for k < 0 the laser emits laser
photons. The laser functions in a completely different way when operating below
or above threshold. In later chapters we will discuss this point in much more detail
and we refer the reader who is interested in a refined theory to these chapters.
Exactly the same (5.84) or (5.80) with the terms (5.81), (5.82) and (5.83) can be
found in a completely different field, e.g., chemistry. Consider the autocatalytic
reaction between two kinds of molecules, A, B, with concentrations n and N,
respectively:

A + B -+ 2A production
A -+ C decay

The molecules A are created in a process in which the molecules themselves participate
so that their production rate is proportional to n (compare (5.81)). Furthermore
the production rate is proportional to N. If the supply of B-molecules is not
infinitely fast, N will decrease again by an amount proportional to the number n of
A-molecules present.
The same equations apply to certain problems of ecology and population
dynamics. If n is the number of animals of a certain kind, N is a measure for the
food supply available which is steadily renewed but only at a certain finite pace.
Many more examples will be given in Chapters 9 and 10.

Exercise

Convince yourself that the differential equation (5.84) is solved by

n(t) = −k/(2k₁) − (|k|/(2k₁)) · (c·e^(−|k|t) − 1)/(c·e^(−|k|t) + 1), (5.87)

where

c = (|k/k₁| − k/k₁ − 2n₀)/(|k/k₁| + k/k₁ + 2n₀)

and n₀ = n(0) is the initial value. Discuss the temporal behavior of this function
and show that it approaches the stationary state n = 0 or n = |k/k₁| irrespective
of the initial value n₀, but depending on the signs of k, k₁. Discuss this dependence!
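As a numerical cross-check of this exercise (an added sketch; the constants are arbitrary), one can compare the closed-form solution (5.87) with a direct integration of ṅ = −kn − k₁n² and verify the approach to the two stationary states:

```python
import math

def n_exact(t, k, k1, n0):
    # closed-form solution (5.87) of n' = -k n - k1 n^2
    c = (abs(k / k1) - k / k1 - 2 * n0) / (abs(k / k1) + k / k1 + 2 * n0)
    e = c * math.exp(-abs(k) * t)
    return -k / (2 * k1) - (abs(k) / (2 * k1)) * (e - 1.0) / (e + 1.0)

def n_euler(t, k, k1, n0, steps=200_000):
    # direct numerical integration of (5.84) for comparison
    dt, n = t / steps, n0
    for _ in range(steps):
        n += dt * (-k * n - k1 * n * n)
    return n

k1, n0 = 1.0, 0.3
for k in (0.5, -0.5):                       # below / above threshold
    assert abs(n_exact(10.0, k, k1, n0) - n_euler(10.0, k, k1, n0)) < 1e-3
    target = 0.0 if k > 0 else abs(k / k1)  # stationary state reached
    assert abs(n_exact(50.0, k, k1, n0) - target) < 1e-6
print("(5.87) solves (5.84); n -> 0 for k > 0, n -> |k/k1| for k < 0")
```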

In a two-mode laser, two different kinds of photons 1 and 2 with numbers n₁


and n₂ are produced. In analogy to (5.80) [with (5.81,2,3)] the rate equations read

ṅ₁ = G₁Nn₁ − 2κ₁n₁, (5.88)

ṅ₂ = G₂Nn₂ − 2κ₂n₂, (5.89)

where the actual number of excited atoms is given by

N = N₀ − α₁n₁ − α₂n₂. (5.90)

The stationary state

ṅ₁ = ṅ₂ = 0 (5.91)

implies that for

2κ₁/G₁ ≠ 2κ₂/G₂ (5.92)

at least n₁ or n₂ must vanish (proof?).

Exercise

What happens if

2κ₁/G₁ = 2κ₂/G₂? (5.93)

Discuss the stability of (5.88) and (5.89). Does there exist a potential?
Discuss the critical point n₁ = n₂ = 0.
Are there further critical points?
Eqs. (5.88), (5.89) and (5.90) have a simple but important interpretation in ecology.
Let n₁, n₂ be the numbers of two kinds of species which live on the same food supply
N₀. Under the condition (5.92) only one species survives while the other dies
out, because the species with the greater growth rate, G 1 , eats the food much more
quickly than the other species and finally eats all the food. It should be mentioned
that, as in (5.83) the food supply is not given at an initial time but is kept at a certain
rate. Coexistence of species becomes possible, however, if the food supply is at least
partly different for different species. (Compare Sect. 10.1).
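This competition can be illustrated numerically. The sketch below is an addition (the constants are made up); it assumes the saturation ansatz N = N₀ − α₁n₁ − α₂n₂ in the spirit of (5.83), integrates (5.88) and (5.89), and shows that only the mode (or species) with the lower threshold survives:

```python
def two_mode(G1, G2, kap1, kap2, N0, a1=1.0, a2=1.0,
             n1=0.01, n2=0.01, dt=1e-3, T=200.0):
    # Euler integration of (5.88), (5.89); saturation N = N0 - a1 n1 - a2 n2
    for _ in range(int(T / dt)):
        N = N0 - a1 * n1 - a2 * n2
        n1 += dt * (G1 * N * n1 - 2 * kap1 * n1)
        n2 += dt * (G2 * N * n2 - 2 * kap2 * n2)
    return n1, n2

# mode 1 has the lower threshold 2*kap1/G1, so by (5.92) it wins
n1, n2 = two_mode(G1=1.0, G2=1.0, kap1=0.4, kap2=0.5, N0=2.0)
assert n1 > 1.0 and n2 < 1e-8
print("surviving mode: n1 = %.3f, dying mode: n2 = %.1e" % (n1, n2))
```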

Exercise

Discuss the general equations

(5.94)
(5.95)


Another interesting example of population dynamics is the Lotka-Volterra
model. It was originally devised to explain temporal oscillations in the occurrence
of fish in the Adriatic Sea. Here two kinds of fish are treated, namely, predator fishes
and their prey fishes. The rate equations have again the form

(5.96)

We identify the prey fishes with the index 1. If there are no predator fishes, the prey
fishes will multiply according to the law

(5.97)

The prey fishes suffer, however, losses by being eaten by the predators. The loss
rate is proportional to the numbers of prey and predators

(5.98)

We now turn to the equation j = 2, for the predators. Evidently we obtain

(5.99)

Because they live on the prey, the predator multiplication rate is proportional to their
own number and to that of the prey fishes. Since predators may suffer losses by
death, the loss term is proportional to the number of predator fishes present

(5.100)

The equations of the Lotka-Volterra model therefore read

(5.101)

Exercise

1) Make (5.101) dimensionless by casting them into the form

(5.102)

(Hint: Put n₁ = (α₂/β)ñ₁, n₂ = (α₁/β)ñ₂, t = t̃/α₁.) (5.103)


2) Determine the stationary state ṅ₁ = ṅ₂ = 0. Prove the following conservation
law

L = Σ_{j=1,2} (nⱼ − ln nⱼ) = const, (α₁ = α₂ = 1) (5.104)

or

n₁n₂ e^(−(n₁+n₂)) = const. (5.105)

Hint: Introduce new variables vⱼ = ln nⱼ, j = 1, 2 and form

(5.106)

and use (5.102). From Fig. 5.17 it follows that the motion of n₁, n₂ is periodic,
cf. Fig. 5.18. Why?

Fig. 5.17. Two typical trajectories in the n₁-n₂ phase plane of the Lotka-Volterra
model (after Goel, N. S., S. C. Maitra, E. W. Montroll: Rev. Mod. Phys. 43,
231 (1971)) for fixed parameters

Fig. 5.18. Time variation of the two populations n₁, n₂ corresponding to a trajectory
of Fig. 5.17

Hint: Use Section 5.1. Are there singular points? Are the trajectories stable
or asymptotically stable?
Answer: The trajectories are stable, but not asymptotically stable. Why?
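A short numerical experiment (added here; it uses the common dimensionless form ṅ₁ = n₁ − n₁n₂, ṅ₂ = −n₂ + n₁n₂, one possible reading of (5.102)) confirms that L = Σⱼ(nⱼ − ln nⱼ) is conserved along the trajectories, which is why the orbits of Fig. 5.17 are closed and the motion is periodic but not asymptotically stable:

```python
import math

def rhs(n1, n2):
    # dimensionless Lotka-Volterra: prey n1, predator n2
    return n1 - n1 * n2, -n2 + n1 * n2

def rk4(n1, n2, dt):
    # one fourth-order Runge-Kutta step
    k1 = rhs(n1, n2)
    k2 = rhs(n1 + 0.5 * dt * k1[0], n2 + 0.5 * dt * k1[1])
    k3 = rhs(n1 + 0.5 * dt * k2[0], n2 + 0.5 * dt * k2[1])
    k4 = rhs(n1 + dt * k3[0], n2 + dt * k3[1])
    return (n1 + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            n2 + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

def L(n1, n2):
    # conserved quantity of Eq. (5.104)
    return n1 - math.log(n1) + n2 - math.log(n2)

n1, n2 = 2.0, 1.0
L0 = Ls = L(n1, n2)
drift = 0.0
for _ in range(60000):            # several oscillation periods, dt = 0.001
    n1, n2 = rk4(n1, n2, 0.001)
    drift = max(drift, abs(L(n1, n2) - L0))

assert drift < 1e-7               # L is conserved -> closed periodic orbits
print("max |L - L0| =", drift)
```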

By means of a suitable Ljapunov function check the stability or instability of the
following systems (λ > 0, μ > 0):


1)
q̇₁ = −λq₁,
q̇₂ = −μq₂. (5.107)

Hint: Take as Ljapunov function

V_L = q₁² + q₂². (5.108)

2)
q̇₁ = λq₁,
q̇₂ = −μq₂. (5.109)

Hint: Take as Ljapunov function

V_L = q₁² − q₂². (5.110)

3) For complex variables q, the equations are

q̇ = (a + bi)q,
q̇* = (a − bi)q*, (5.111)

with

a, b ≠ 0 (5.112)

and

a < 0. (5.113)

Hint: Take as Ljapunov function

V = qq*. (5.114)

Show in the above examples that the functions (5.108), (5.110) and (5.114) are
Ljapunov functions. Compare the results obtained with those of Section 5.2
(and identify the above results with the case of stable node, unstable saddle
point, and stable focus).
Show that the potential occurring in anharmonic oscillators has the properties
of a Ljapunov function (within a certain region, Ω).
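The sign conditions on V̇ can be verified numerically. The sketch below is an added illustration: it evaluates V̇ = ∇V · q̇ for cases 1) and 2) at a few sample points, using V_L = q₁² − q₂² together with the instability theorem for the saddle case:

```python
def vdot(f, V, q, eps=1e-6):
    # dV/dt = grad V . qdot, gradient by central differences at point q
    fq = f(q)
    grad = []
    for i in range(len(q)):
        qp, qm = list(q), list(q)
        qp[i] += eps
        qm[i] -= eps
        grad.append((V(qp) - V(qm)) / (2 * eps))
    return sum(g * v for g, v in zip(grad, fq))

lam, mu = 1.0, 2.0
# 1) stable node (5.107) with V_L = q1^2 + q2^2, Eq. (5.108)
f_node = lambda q: [-lam * q[0], -mu * q[1]]
V_node = lambda q: q[0]**2 + q[1]**2
# 2) saddle (5.109) with V_L = q1^2 - q2^2, Eq. (5.110):
#    here dV/dt is positive definite and V > 0 near 0 -> instability
f_sad = lambda q: [lam * q[0], -mu * q[1]]
V_sad = lambda q: q[0]**2 - q[1]**2

for q in ([0.3, 0.1], [-0.2, 0.4], [0.05, -0.05]):
    assert vdot(f_node, V_node, q) < 0   # V decreases: asymptotic stability
    assert vdot(f_sad, V_sad, q) > 0     # V can grow: origin unstable
print("Ljapunov criteria confirmed at all sample points")
```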

Exercise: Van der Pol Equation

This equation, which has played a fundamental role in the discussion of the performance
of radio tubes, has the following form:

q̈ + ε(q² − 1)q̇ + q = 0 (5.115)


with

ε > 0. (5.116)

Show by means of the equivalent equations

q̇ = p − ε(q³/3 − q),
ṗ = −q, (5.117)

that the origin is the only critical point which is a source. For which ε's is it an
unstable focus (node)?
Show that (5.117) allows for (at least) one limit cycle.
Hint: Use the discussion following the Poincaré-Bendixson theorem on p. 119.
Draw a large enough circle around the origin, q = 0, p = 0, and show that all
trajectories enter its interior. To this end, consider the rhs of (5.117) as components
of a vector giving the local direction of the streamline passing through q, p. Now
form the scalar product of that vector and the vector q = (q, p) pointing from the
origin to that point. The sign of this scalar product tells you into which direction
the streamlines point.
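A numerical illustration of the limit cycle (an added sketch; ε = 1 and the step sizes are arbitrary choices): two trajectories of (5.117), one started near the unstable origin and one started far outside, both settle on the same bounded cycle, as the Poincaré-Bendixson argument predicts.

```python
def settle(q, p, eps=1.0, dt=1e-4, steps=400_000):
    # Euler integration of the Lienard system (5.117)
    for _ in range(steps):
        q, p = q + dt * (p - eps * (q**3 / 3 - q)), p + dt * (-q)
    return q, p

# trajectory started just outside the unstable origin ...
q1, p1 = settle(0.01, 0.0)
# ... and one started far outside
q2, p2 = settle(4.0, 4.0)
r1 = (q1 * q1 + p1 * p1) ** 0.5
r2 = (q2 * q2 + p2 * p2) ** 0.5

# both end up on the same bounded limit cycle (q-amplitude near 2)
assert 0.3 < r1 < 3.5 and 0.3 < r2 < 3.5
print("radii after transients: %.2f and %.2f" % (r1, r2))
```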

5.5* Classification of Static Instabilities, or an Elementary Approach
to Thom's Theory of Catastrophes
In the foregoing we have encountered examples where the potential curve shows
transitions from one minimum to two minima which led to the phenomenon of
bifurcation. In this chapter we want to discuss the potential close to those points
where linear stability is lost. To this end we start with the one-dimensional case
and will eventually treat the general n-dimensional case. Our goal is to find a
classification of critical points.

A) One-Dimensional Case
We consider the potential V(q) and assume that it can be expanded into a Taylor
series:

V(q) = c⁽⁰⁾ + c⁽¹⁾q + c⁽²⁾q² + c⁽³⁾q³ + ... (5.118)

The coefficients of the Taylor series of (5.118) are given as usual by

c⁽⁰⁾ = V(0), (5.119)

c⁽¹⁾ = dV/dq |_{q=0}, (5.120)

c⁽²⁾ = (1/2) d²V/dq² |_{q=0}, (5.121)


and quite generally by

c⁽ˡ⁾ = (1/l!) dˡV/dqˡ |_{q=0}, (5.122)

provided that the expansion is taken at q = 0. Since the form of the potential curve
does not change if we shift the curve by a constant amount, we may always put

c⁽⁰⁾ = 0. (5.123)

We now assume that we are dealing with a point of equilibrium (which may be
stable, unstable, or metastable)

dV/dq = 0. (5.124)

From (5.124) it follows

c⁽¹⁾ = 0. (5.125)

Before going on let us make some simple but fundamental remarks on smallness.
In what follows in this section we always assume that we are dealing with
dimensionless quantities. We now compare the smallness of different powers of q.
Choosing q = 0.1, q² gives 0.01, i.e., q² is only 10% of q. Choosing as a further
example q = 0.01, q² = 0.0001, i.e., only 1% of q. The same is evidently true for
consecutive powers, say qⁿ and qⁿ⁺¹. When we go from one power to the next,
choosing q sufficiently small allows us to neglect qⁿ⁺¹ compared to qⁿ. Therefore
we can confine ourselves in the following to the leading terms of the expansion
(5.118). The potential shows a local minimum provided

(1/2) d²V/dq² |_{q=0} = c⁽²⁾ > 0 (5.126)

(compare Fig. 5.3a).


For the following we introduce a slightly different notation

c⁽²⁾ = μ. (5.127)

As we will substantiate later on by many explicit examples, μ may change its sign
when certain parameters of the system under consideration are changed. This turns
the stable point q = 0 into an unstable point for μ < 0 or into a point of neutral
stability for μ = 0. In the neighborhood of such a point the behavior of V(q) is
determined by the next nonvanishing power of q. We will call a point where μ = 0
an instability point. We first assume

1) c⁽³⁾ ≠ 0 so that V(q) = c⁽³⁾q³ + ... (5.128)


As we shall show later on, in practical cases V(q) may be disturbed either by external
causes, which in mechanical engineering may be loads, or internally by imperfections
(compare the examples of Chapter 8). Let us assume that these perturbations
are small. Which of them will change the character of (5.128) the most? Very close
to q = 0 higher powers of q, e.g., q⁴, are much smaller than (5.128), so that such a
term presents an unimportant change of (5.128). On the other hand, imperfections
or other perturbations may lead to lower powers of q than cubic so that these can
become dangerous in perturbing (5.128). Here we mean by "dangerous" that the
state of the system is changed appreciably.
The most general case would be to include all lower powers leading to

(5.129)

All perturbations which change the original singularity (5.128) in a nontrivial
way are called, according to Thom, "unfoldings". In order to classify all possible
unfoldings of (5.128) we must do away with superfluous constants. First, by an
appropriate choice of the scale of the q-axis we can choose the coefficient c⁽³⁾ in
(5.128) equal to 1. Furthermore we may shift the origin of the q-axis by the transformation

q = q' + δ (5.130)

to do away with the quadratic term in (5.129). Finally we may shift the zero point of the
potential so that the constant term in (5.129) vanishes. We are thus left with the
"normal" form V(q)

V(q) = q³ + uq. (5.131)

This form depends on a single free parameter, u. The potentials for u = 0 and
u < 0 are exhibited in Fig. 5.19. For u → 0 the maximum and minimum merge
into a single turning point.

2) c⁽³⁾ = 0, but c⁽⁴⁾ ≠ 0.

Fig. 5.19. The potential curve V(q) = q³ + uq and its unfolding (5.131) for u < 0


Fig. 5.20. The potential (5.133) for several values of u and v (after Thom)

The potential now begins with

V(q) = c⁽⁴⁾q⁴ + ... (5.132)

The unfolding of this potential is given by (cf. Fig. 5.20)

V(q) = q⁴/4 + uq²/2 + vq, (5.133)

where we have already shifted the origin of the (q, V)-coordinate system appropriately.
The factors 1/4 and 1/2 in the first or second term in (5.133) are chosen
in such a manner that the derivative of V acquires a simple form

dV/dq = q³ + uq + v. (5.134)

If we put (5.134) = 0, we obtain an equation whose solution gives us the
positions of the three extrema of the potential curve. Depending on the parameters
u and v, we may now distinguish between different regions. If u³/27 + v²/4 > 0,
there is only one minimum, whereas for u³/27 + v²/4 < 0 we obtain two minima
which may differ in depth depending on the size of v. Thinking of physical systems,
we may imagine that only that state is realized which has the lowest minimum.
Therefore, if we change the parameters u and v, the state of the system may jump
from one minimum to the other. This leads to different regions in the u-v plane
depending on which minimum is realized. (For a critical comment on this jumping
see the end of this section.) (Fig. 5.21)
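The statement about the two regions can be checked by brute force. The added sketch below counts the local minima of V(q) = q⁴/4 + uq²/2 + vq on a fine grid and compares the result with the sign of u³/27 + v²/4:

```python
def minima_count(u, v, h=1e-3, qmax=4.0):
    # count local minima of V(q) = q^4/4 + u q^2/2 + v q on a grid
    V = lambda q: q**4 / 4 + u * q * q / 2 + v * q
    N = int(qmax / h)
    vals = [V(i * h) for i in range(-N, N + 1)]
    return sum(1 for i in range(1, len(vals) - 1)
               if vals[i] < vals[i - 1] and vals[i] < vals[i + 1])

for u, v in [(1.0, 0.5), (1.0, -2.0), (-1.0, 0.0), (-1.0, 0.1)]:
    D = u**3 / 27 + v**2 / 4          # the discriminant from the text
    assert minima_count(u, v) == (1 if D > 0 else 2)
print("number of minima agrees with the sign of u^3/27 + v^2/4")
```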

Fig. 5.21. In the u-v plane, the solid curve separates the region with one potential
minimum (right region) from the region with two potential minima. The solid line
represents in the sense of Thom the catastrophic set consisting of bifurcation points.
The dotted line indicates where the two minima have the same value. This line
represents the catastrophic set consisting of conflict points (after Thom)


3) If c⁽⁴⁾ = 0 but the next coefficient does not vanish we find as potential of the
critical point

V(q) = c⁽⁵⁾q⁵ + ... (5.135)

Normalizing q appropriately, the unfolding of V reads

V = q⁵/5 + uq³/3 + vq²/2 + wq, (5.136)

where the extrema are determined by

∂V/∂q = q⁴ + uq² + vq + w = 0 (5.137)
allowing for zero, two or four extrema implying zero, one or two minima. If we
change u, or v, or w or some of them simultaneously it may happen that the number
of minima changes (bifurcation points) or that their depths become equal (con-
flict points).
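Again this can be verified numerically. The added sketch below (with arbitrary parameter values) counts the minima of (5.136) for a few points of u-v-w space and indeed realizes zero, one and two minima:

```python
def swallowtail_minima(u, v, w, h=1e-3, qmax=5.0):
    # local minima of V = q^5/5 + u q^3/3 + v q^2/2 + w q, Eq. (5.136)
    V = lambda q: q**5 / 5 + u * q**3 / 3 + v * q * q / 2 + w * q
    N = int(qmax / h)
    vals = [V(i * h) for i in range(-N, N + 1)]
    return sum(1 for i in range(1, len(vals) - 1)
               if vals[i] < vals[i - 1] and vals[i] < vals[i + 1])

# zero, one or two minima, depending on where (u, v, w) lies
assert swallowtail_minima(1.0, 0.0, 1.0) == 0    # V' = q^4 + q^2 + 1 > 0
assert swallowtail_minima(0.0, 0.0, -1.0) == 1   # extrema at q = +-1
assert swallowtail_minima(-2.0, 0.0, 0.5) == 2   # four extrema, two minima
print("0, 1 and 2 minima realized in different regions of u-v-w space")
```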
It turns out that such changes happen in general at certain surfaces in u-v-w
space and that these surfaces have very strange shapes. (Compare Fig. 5.22). Then,
as last example we mention the potential

Fig. 5.22. The u, v, w space decays into regions determined by surfaces ("catastrophic
sets") where the number of potential minima changes. The surfaces separating regions
with one minimum from those with two minima have the form of a swallowtail
(after Thom)

4) V = q6 and its unfolding

(5.138)

Let us now proceed to the

B) Two-Dimensional Case
Expanding the potential V into a Taylor series yields

V(q₁, q₂) = c⁽⁰⁾ + c₁⁽¹⁾q₁ + c₂⁽¹⁾q₂ + c₁₁⁽²⁾q₁² + (c₁₂⁽²⁾ + c₂₁⁽²⁾)q₁q₂
  + c₂₂⁽²⁾q₂² + c₁₁₁⁽³⁾q₁³ + c₁₁₂⁽³⁾q₁²q₂ + ... (5.139)


where we may assume

c₁₂⁽²⁾ = c₂₁⁽²⁾. (5.140)

Again by a shift of the V-coordinate we may ensure that

c⁽⁰⁾ = 0. (5.141)

Furthermore, we assume that we are at the position of a local extremum, i.e., we
have

c₁⁽¹⁾ = 0 (5.142)

and

c₂⁽¹⁾ = 0. (5.143)

Thus the leading term of (5.139) has the form

c₁₁⁽²⁾q₁² + (c₁₂⁽²⁾ + c₂₁⁽²⁾)q₁q₂ + c₂₂⁽²⁾q₂². (5.144)

As we know from high-school mathematics, putting (5.144) = constant
defines a hyperbola, a parabola, or an ellipse. By a linear orthogonal transformation
of the coordinates q₁ and q₂ we can make the axes of the coordinate system coincide
with the principal axes of the ellipse, etc. Thus applying the transformation

q₁ = A₁₁u₁ + A₁₂u₂,
q₂ = A₂₁u₁ + A₂₂u₂, (5.145)

(5.144) acquires the form

μ₁u₁² + μ₂u₂². (5.146)

If we apply the transformation (5.145) not only to the truncated form of V (5.144)
but to the total form (5.139) (however with c⁽⁰⁾ = c₁⁽¹⁾ = c₂⁽¹⁾ = 0), we obtain a
new potential in the form

V = μ₁u₁² + μ₂u₂² + terms of higher order. (5.147)

This form allows us, again in a simple way, to discuss instabilities. Those occur if
by a change of external parameters, μ₁ or μ₂ or both become = 0.
For further discussion we first consider μ₁ = 0 and μ₂ > 0, or, in other words,
the system loses its stability in one coordinate. In what follows we shall denote the
coordinate in the "unstable direction" by x, in the "stable direction" by y. Thus

we discuss V in the form

V(x, y) = V₁(x) + g(x)y + [μ₂ + h(x)]y² + f(x)y³ + ..., (5.148)

where we have in particular

V₁(x) ∝ x³ + higher order,
g(x) ∝ x² + higher order, (5.149)

h(x) ∝ x + higher order,
f(x) ∝ 1 + higher order. (5.150)

Since we want to discuss the immediate neighborhood of the instability point,
x = 0, we may assume that x is small and we may even assume that it is so small
that

|h| ≪ μ₂ (5.151)

is fulfilled. Furthermore, we can restrict our analysis to a neighborhood of y = 0
so that higher order terms of y can be neglected. A typical situation is depicted in
Figs. 5.23 and 5.24. Due to the term g(x) in (5.148) the y minimum may be shifted

Fig. 5.23. The potential (5.147) for μ₁ < 0 represents a distorted saddle (compare
also Fig. 5.24)

Fig. 5.24. Equipotential lines of the potential of Fig. 5.23


in y direction. For further discussion we exhibit the leading terms of (5.148) in the
form

V = V₁(x) − g²(x)/(4μ̃₂) + μ̃₂[y + g(x)/(2μ̃₂)]² + higher order, (5.152)

where we have added and subtracted the quadratic complement and where we have
used the abbreviation

μ̃₂ = μ₂ + h(x). (5.153)

Introducing the new coordinate

ỹ = y + g(x)/(2μ̃₂) (5.154)

helps us to cast (5.148) into the form (compare (5.151))

V = V₁(x) − g²(x)/(4μ̃₂) + μ̃₂ỹ² + higher order. (5.155)

Provided the higher order terms are small enough we see that we have now found
a complete decomposition of the potential V into a term which depends only on x
and a second term which depends only on y. We now investigate in which way the
so-called higher order terms are affected by the transformation (5.154). Consider
for example the next term going with y³. It reads

(5.156)

Since y⁽⁰⁾ ∝ g(x), and g(x) is a small quantity, the higher order terms in lowest
approximation in y contribute a term proportional to g³(x). As (5.156) thus contains
g³(x), and g is a small quantity, we find that the higher order terms give rise to a
correction to the x-dependent part of the potential V of higher order. Of course,
if the leading terms of the x-dependent part of the potential vanish, this term may
be important. However, we can evidently devise an iteration procedure by which
we repeat the procedure (5.148) to (5.155) with the higher order terms, each term
leading to a correction term of decreasing importance. Such an iteration procedure
may be tedious, but we have convinced ourselves, at least in principle, that this
procedure allows us to decompose V into an x- and into a y- (or ỹ-) dependent part
provided that we neglect terms of higher order within a well-defined procedure.
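For the leading terms, the completion of the square used in (5.152)-(5.155) is an exact algebraic identity. The following added sketch verifies this for made-up functions g, h, V₁ obeying (5.149), (5.150):

```python
# toy choices consistent with (5.149)/(5.150); not the book's functions
mu2 = 1.0
g  = lambda x: 0.5 * x * x      # g(x)  ~ x^2
h  = lambda x: 0.3 * x          # h(x)  ~ x
V1 = lambda x: x**3             # critical part V1(x) ~ x^3

def V(x, y):
    # leading terms of the potential: V1 + g*y + (mu2 + h)*y^2
    return V1(x) + g(x) * y + (mu2 + h(x)) * y * y

def V_split(x, ytil):
    # decomposed form (5.155): V1(x) - g^2/(4 m) + m*ytil^2, m = mu2 + h(x)
    m = mu2 + h(x)
    return V1(x) - g(x)**2 / (4 * m) + m * ytil * ytil

for x in (-0.2, 0.0, 0.1, 0.3):
    m = mu2 + h(x)
    for y in (-0.5, 0.0, 0.4):
        ytil = y + g(x) / (2 * m)       # the shifted coordinate (5.154)
        assert abs(V(x, y) - V_split(x, ytil)) < 1e-12
print("completing the square decomposes V into x- and y-parts exactly")
```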
We now treat the problem quite generally without resorting to that iteration
procedure. In the potential V(x, y) we put

y = y⁽⁰⁾ + ỹ. (5.157)


We now require that y⁽⁰⁾ is chosen so that

V(x, y⁽⁰⁾ + ỹ) (5.158)

has its minimum for ỹ = 0, or, in other words, that

∂V/∂ỹ |_{ỹ=0} = 0 (5.159)

holds. This may be considered as an equation for y⁽⁰⁾ which is of the form

W(x, y⁽⁰⁾) = 0. (5.160)

For any given x we may thus determine y⁽⁰⁾ so that

y⁽⁰⁾ = y⁽⁰⁾(x). (5.161)

For ỹ ≠ 0 but small we may use the expansion

(5.162)

where the linear term is lacking on account of (5.159). By the above more explicit
analysis (compare (5.155)) we have seen that in the neighborhood of x = 0 the
potential retains its stability in the y-direction. In other words, using the decomposition

μ̃₂ = μ₂ + h(x) (5.163)

we may be sure that (5.163) remains positive. Thus the only instability we have to
discuss is that inherent in V₁(x) so that we are back with the one-dimensional case
discussed above.
We now come to the two-dimensional case, μ₁ = 0, μ₂ = 0. The first, in general
nonvanishing, terms of the potential are thus given by

(5.164)

Provided that one or several of the coefficients c⁽³⁾ are unequal to zero, we may stop
with the discussion of the form (5.164). Quite similar to the one-dimensional case
we may assume that certain perturbations may become dangerous to the form
(5.164). In general these are terms of a smaller power than 3. Thus the most general
unfolding of (5.164) would be given by adding to (5.164) terms of the form

(5.165)

(where we have already dropped the constant term).


We now describe qualitatively how we can cut down the formulas (5.164) and


(5.165) by a linear transformation of the form

x = A₁₁x₁ + A₁₂x₂ + B₁,
y = A₂₁x₁ + A₂₂x₂ + B₂. (5.166)

We may cast (5.164) into simple normal forms quite similar to ellipses etc. This
can be achieved by a proper choice of the A's. By a proper choice of the B's we can further
cast the quadratic or bilinear terms of (5.165) into a simpler form. If we take the
most general form (5.164) and (5.165) including the constant term V₀, there are 10
constants. On the other hand the transformation (5.166) introduces 6 constants
which together with c⁽⁰⁾ yields 7 constants. Taking the total number of possible
constants minus the seven, we still need three constants which can be attached to
certain coefficients of (5.165). With these considerations and after some analysis we
obtain three basic forms which we denote, according to Thom, in the following
manner:
Hyperbolic umbilic

V = x₁³ + x₂³ + wx₁x₂ − ux₁ − vx₂ (5.167)

Elliptic umbilic

V = x₁³ − 3x₁x₂² + w(x₁² + x₂²) − ux₁ − vx₂ (5.168)

Parabolic umbilic

V = x₁²x₂ + wx₂² + tx₂³ − ux₁ − vx₂ + ¼(x₁⁴ + x₂⁴). (5.169)


(umbilic = umbilicus = navel)

For the example of the parabolic umbilic a section through the potential curve
is drawn in Fig. 5.25 for different values of the parameters u, v, t. We note that for
t < 0 the parabolic umbilic goes over to the elliptic umbilic whereas for t > 0 we
obtain the hyperbolic umbilic. Again for each set of u, v, w, t, the potential curve
may exhibit different minima, one of which is the deepest (or several are simultaneously
the deepest) representing a state of the system. Since one deepest minimum
may be replaced by another one by changing the parameters u, v, w, t, the
u,v,w,t-space is separated by different hyper-surfaces and thus decomposed into
subspaces.
subspaces. A note should be added about the occurrence of quartic terms in (5.169)
which seems to contradict our general considerations on page 134 where we stated
that only powers smaller than that of the original singularity can be important.
The reason is that the unfolding in (5.169) includes terms tx~ which have the same
power as the original terms, e.g., xi or xix 2 • If we now seek the minimum of V,

(5.170)


Fig. 5.25. The universal unfolding of the hyperbolic umbilic (at center) surrounded by the local
potentials in regions 1, 2 and 3 and at the cusp C (after Thom)

and take u = v = 0 we have to solve the equation

x₁² + 2wx₂ + 3tx₂² = 0. (5.171)

At least one solution of this quadratic equation (5.171), however, tends to infinity
for t → 0. This contradicts our original assumption that we are only considering
the immediate vicinity of x₁ = x₂ = 0. The zeros of (5.171), however, are bounded
to that minimum if we take quartic terms into account according to (5.169).

C) The General n-Dimensional Case

We assume from the very beginning that the potential function depending on the
coordinates q₁, ..., qₙ is expanded into a Taylor series

V(q₁, ..., qₙ) = c⁽⁰⁾ + Σⱼ cⱼ⁽¹⁾qⱼ + Σⱼⱼ' cⱼⱼ'⁽²⁾qⱼqⱼ'
  + Σⱼⱼ'ⱼ'' cⱼⱼ'ⱼ''⁽³⁾qⱼqⱼ'qⱼ'' + ..., (5.172)

where the first coefficients are given by

cⱼ⁽¹⁾ = ∂V/∂qⱼ |_{q=0} (5.173)


and

cⱼⱼ'⁽²⁾ = (1/2) ∂²V/∂qⱼ∂qⱼ' |_{q=0}. (5.174)

We assume that the minimum of V lies at qⱼ = 0, i.e.,

cⱼ⁽¹⁾ = 0. (5.175)

Because the (negative) derivative of V with respect to qⱼ gives us the forces Fⱼ =
−∂V/∂qⱼ, the equilibrium is characterized by such a state where no force acts on
the "particle." Again c⁽⁰⁾ will be done away with, and due to (5.175) the leading
terms of (5.172) are now given by

V = Σⱼⱼ' cⱼⱼ'⁽²⁾qⱼqⱼ', (5.176)

where we may choose the c's always in a symmetric form

cⱼⱼ'⁽²⁾ = cⱼ'ⱼ⁽²⁾. (5.177)

This allows us to perform a principal-axis transformation

qⱼ = Σₖ Aⱼₖuₖ. (5.178)

Linear algebra tells us that the resulting quadratic form

V = Σⱼ μⱼuⱼ² (5.179)

has only real values μⱼ. Provided that all μⱼ > 0, the state q = 0 is stable. We now
assume that by a change of external parameters we reach a state where a certain
set of the μ's vanishes. We number them in such a manner that they are the first,
j = 1, ..., k, so that

μ₁ = 0, μ₂ = 0, ..., μₖ = 0. (5.180)

Thus we have now two groups of coordinates, namely, those associated with indices
1 to k which are coordinates in which the potential shows a critical behavior, while
for k + 1, ..., n the coordinates are those for which the potential remains uncritical.
To simplify the notation we denote the coordinates so as to distinguish
between these two sets by putting

u₁, ..., uₖ = x₁, ..., xₖ,
u_{k+1}, ..., uₙ = y₁, ..., y_{n−k}. (5.181)


The potential we have to investigate is reduced to the form

V = Σⱼ μⱼyⱼ² + Σⱼ yⱼgⱼ(x₁, ..., xₖ) + V₁(x₁, ..., xₖ)
  + Σⱼⱼ' yⱼyⱼ'hⱼⱼ'(x₁, ..., xₖ) + higher order in y₁, ..., with
  coefficients still being functions of x₁, ..., xₖ. (5.182)

Our first goal is to get rid of the terms linear in yⱼ which can be achieved in the
following way. We introduce new coordinates ỹₛ by

yₛ = yₛ⁽⁰⁾ + ỹₛ, s = 1, ..., n. (5.183)

The y⁽⁰⁾'s are determined by the requirement that for yₛ = yₛ⁽⁰⁾ the potential
acquires a minimum

∂V/∂yₛ |_{y⁽⁰⁾} = 0. (5.184)

Expressed in the new coordinates ỹₛ, V acquires the form

V = Ṽ(x₁, ..., xₖ) + Σⱼⱼ' ỹⱼỹⱼ' h̃ⱼⱼ'(x₁, ..., xₖ) + h.o. in ỹ,

h̃ⱼⱼ' = δⱼⱼ'μⱼ + hⱼⱼ'(x₁, ..., xₖ), (5.185)

where hⱼⱼ' is small, and where h̃ⱼⱼ' contains μⱼ > 0 stemming from the first term in
(5.182) and further correction terms which depend on x₁, ..., xₖ. This can be, of
course, derived in still much more detail; but a glance at the two-dimensional case
on pages 137-141 teaches us how the whole procedure works. (5.185) contains
higher-order terms, i.e., higher than second order in the ỹ's. Confining ourselves to
the stable region in the y-direction, we see that V is decomposed into a critical Ṽ
depending only on x₁, ..., xₖ, and a second noncritical expression depending on ỹ
and x. Since all the critical behavior is contained in the first term depending on k
coordinates x₁, ..., xₖ, we have thus reduced the problem of instability to one which
is of a dimension k, usually much smaller than n. This is the fundamental essence
of our present discussion.
We conclude this chapter with a few general comments: If we treat, as everywhere
in Chapter 5, completely deterministic processes, a system cannot jump
from a minimum to another, deeper, minimum if a potential barrier lies between.
We will come back to this question later on in Section 7.3. The usefulness of the
above considerations lies mainly in the discussion of the impact of parameter
changes on bifurcation. As will become evident later, the eigenvalues μⱼ in (5.182)
are identical with the (imaginary) frequencies occurring in linearized equations for
the u's. Since (cⱼⱼ'⁽²⁾) is a real, symmetric matrix, according to linear algebra the
μⱼ's are real. Therefore no oscillations occur and the critical modes are only soft
modes.


Our presentation of the basic ideas of catastrophe theory has been addressed to
students and scientists of the natural sciences. In particular, we wanted to present
a classification scheme of when systems change their states. To this end we as-
sumed that the function V may be expanded into a Taylor series. For a mathema-
tician, other aspects may be more important, for instance how to prove his
theorems with minimal assumptions about V. Here, indeed, catastrophe theory
does not require the assumption that V can be expanded into a Taylor series, but
only that V can be differentiated infinitely often. Furthermore, for our presentation
it was sufficient to neglect the higher-order terms because of their smallness.
In catastrophe theory it is shown that they can even be transformed away.

6. Chance and Necessity
Reality Needs Both

6.1 Langevin Equations: An Example


Consider a football dribbled ahead over the grass by a football (soccer) player.
Its velocity v changes due to two causes. The grass continuously slows the ball
down by a friction force whereas the football player randomly increases the velocity
of the ball by his kicks. The equation of motion of the football is precisely given by
Newton's law: Mass· acceleration = force, i.e.,

m·v̇ = F. (6.1)

We determine the explicit form of F as follows:


We assume as usual in physics that the friction force is proportional to the velocity.
We denote the friction constant by γ, so that the friction force reads −γv. The
minus sign takes into account that the friction force is opposite to the velocity of
the particle. Now we consider the effects of the single kicks. Since a kick impulse
lasts only a very short time, we represent the corresponding force by a δ-function of
strength φ

Φ_j = φ δ(t − t_j), (6.2)

where t_j is the moment a kick occurs. The effect of this kick on the change of
velocity can be determined as follows: We insert (6.2) into (6.1):

m v̇ = φ δ(t − t_j). (6.3)

Integration over a short time interval around t = t_j on both sides yields

∫_{t_j−ε}^{t_j+ε} m v̇ dt = ∫_{t_j−ε}^{t_j+ε} φ δ(t − t_j) dt. (6.4)

Performing the integration, we obtain

m v(t_j + 0) − m v(t_j − 0) ≡ m Δv = φ, (6.5)

which describes that at time t_j the velocity v is suddenly increased by the amount
φ/m. The total force exerted by the kicker in the course of time is obtained by

summing up (6.2) over the sequence j of pushes:

Φ(t) = φ Σ_j δ(t − t_j). (6.6)

To come to realistic applications in physics and many other disciplines we have


to change the whole consideration by just a minor point, which, by the way,
happens rather often in football games. The impulses are not only exerted in one
direction but randomly also in the reverse direction. Thus we replace Φ given by
(6.6) by the function

Φ(t) = φ Σ_j δ(t − t_j)(±1)_j, (6.7)

in which the sequence of plus and minus signs is a random sequence in the sense
discussed earlier in this book with respect to tossing coins. Taking into account
both the continuously acting friction force due to the grass and the random kicks
of the football player, the total equation of motion of the football reads

m v̇ = −γ v + Φ(t)

or, after dividing it by m,

v̇ = −α v + F(t), (6.8)

where

α = γ/m (6.9a)

and

F(t) ≡ (1/m) Φ(t) = (φ/m) Σ_j δ(t − t_j)(±1)_j. (6.9b)

How to perform the statistical average requires some thought. In one experiment,
say a football game, the particle (ball) is moved at certain times tj in a forward or
backward direction so that during this experiment the particle follows a definite
path. Compare Fig. 6.1 which shows the change of velocity during the time due to
the impulses (abrupt changes) and due to the friction force (continuous decreases
in between the impulses). Now, in a second experiment, the times at which the
particle is moved are different. The sequence of directions may be different so that
another path arises (compare Fig. 6.2). Because the sequences of times and direc-
tions are random events, we cannot predict the single path but only averages. These
averages over the different time sequences and directions of impulses will be per-
formed below for several examples. We now imagine that we average F over the
random sequence of plus and minus signs. Since they occur with equal probability,
we immediately find

⟨F(t)⟩ = 0. (6.10)
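The kicked-ball model is easy to simulate directly. The sketch below (pure Python; the kick strength, friction constant, and mean kick spacing are illustrative choices, not taken from the text) draws Poisson-distributed kick times with mean spacing t₀, gives each kick a random sign, and checks that the empirical mean of the ± signs, and hence the average force, is close to zero:

```python
import math
import random

# Illustrative parameters (not from the text)
alpha = 1.0          # friction per unit mass, alpha = gamma/m
kick = 0.5           # velocity jump per kick, phi/m
t0 = 0.1             # mean time between kicks (Poisson process)
T, dt = 50.0, 1e-3   # total time and integration step

random.seed(0)
v, t = 0.0, 0.0
next_kick = random.expovariate(1.0 / t0)
sign_sum, n_kicks = 0, 0
while t < T:
    v -= alpha * v * dt                  # continuous slowing down by friction
    t += dt
    while next_kick <= t:                # apply all kicks falling in this step
        s = random.choice((-1, 1))       # random direction, equal probability
        v += s * kick                    # sudden velocity change by +/- phi/m
        sign_sum += s
        n_kicks += 1
        next_kick += random.expovariate(1.0 / t0)

mean_sign = sign_sum / n_kicks
print(n_kicks, round(mean_sign, 3))      # mean of the signs should be near 0
```

Averaging over many such runs (different seeds) realizes the ensemble average ⟨…⟩ used in the text.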


Fig. 6.1. The velocity v changes due to pushes (random force) and friction force
Fig. 6.2. Same as in Fig. 6.1 but with a different realization

We further form the product of F at a time t with F at another time t′ and take the
average over the times of the pushes and their directions. We leave the evaluation
to the reader as an exercise (see end of this paragraph). Adopting a Poisson process
(compare Sect. 2.12) we find the correlation function
⟨F(t)F(t′)⟩ = (φ²/(m² t₀)) δ(t − t′) = C δ(t − t′). (6.11)

Equation (6.8) together with (6.10) and (6.11) describes a physical process which
is well known under the name of Brownian movement (Chapt. 4). Here a large
particle immersed in a liquid is pushed around by the random action of particles
of the liquid due to their thermal motion. The theory of Brownian movement
plays a fundamental role not only in mechanics but in many other parts of physics
and other disciplines, as we shall demonstrate later in this book (cf. Chapt. 8).
The differential equation (6.8) can be solved immediately by the method of varia-
tion of the constant. The solution is given by

v(t) = ∫_0^t e^{−α(t−τ)} F(τ) dτ + v(0) e^{−αt}. (6.12)

In the following we neglect "switching on" effects. Therefore we neglect the last
term in (6.12), which decays rapidly.
Let us now calculate the mean kinetic energy defined by

(m/2) ⟨v²(t)⟩. (6.13)

Again the brackets denote averaging over all the pushes. Inserting (6.12) into (6.13)
we obtain

(m/2) ⟨v²⟩ = (m/2) ∫_0^t ∫_0^t dτ dτ′ e^{−α(t−τ)} e^{−α(t−τ′)} ⟨F(τ)F(τ′)⟩. (6.14)

Since the averaging process over the pushes has nothing to do with the integration,

161
150 6. Chance and Necessity

we may exchange the sequence of averaging and integration and perform the
average first. Exploiting (6.11), we find for (6.14) immediately

(m/2) ⟨v²⟩ = (m/2) ∫_0^t ∫_0^t dτ dτ′ e^{−2αt + α(τ+τ′)} δ(τ − τ′) C. (6.15)

Due to the δ-function the double integration reduces to a single integral which can
be immediately evaluated and yields

(m/2) ⟨v²⟩ = (mC/(4α)) (1 − e^{−2αt}). (6.16)

Since we want to consider the stationary state we consider large times, t, so that
we can drop the exponential function in (6.16) leaving us with the result

(m/2) ⟨v²⟩ = mC/(4α). (6.17)

Now, a fundamental point due to Einstein comes in. We may assume that the
particle immersed in the liquid is in thermal equilibrium with its surroundings.
Since the one-dimensional motion of the particle has one degree of freedom,
according to the equipartition theorem of thermodynamics it must have the mean
energy (1/2) k_B T, where k_B is Boltzmann's constant and T the absolute temperature.
A comparison of the resulting equation

(m/2) ⟨v²⟩ = (1/2) k_B T (6.18)

with (6.17) leads to the relation

C = 2αk_BT/m = 2γk_BT/m². (6.19)

Recall that the constant C appears in the correlation function of the fluctuating
forces (6.11). Because C contains the damping constant γ, the correlation function
is intrinsically connected with the damping or, in other words, with the dissipation
of the system. (6.19) represents one of the simplest examples of a dissipation-
fluctuation theorem: The size of the fluctuation (∝ C) is determined by the size
of dissipation (∝ γ). The essential proportionality factor is the absolute tempera-
ture. The following section will derive (6.8), not merely by plausibility arguments
but from first principles.
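The stationary result ⟨v²⟩ = C/(2α) can be checked numerically. The sketch below (arbitrary illustrative values of α and C) integrates v̇ = −αv + F(t) with the simplest Euler-Maruyama discretization, in which the δ-correlated force becomes a Gaussian increment of variance C·dt per step:

```python
import math
import random

alpha, C = 2.0, 4.0              # illustrative; predicts <v^2> = C/(2 alpha) = 1
dt, n_steps, n_skip = 1e-3, 400_000, 50_000

random.seed(1)
v, acc, cnt = 0.0, 0.0, 0
for step in range(n_steps):
    # v' = -alpha v + F(t); the delta-correlated F integrates over one
    # time step to a Gaussian increment of variance C*dt
    v += -alpha * v * dt + math.sqrt(C * dt) * random.gauss(0.0, 1.0)
    if step >= n_skip:           # discard the "switching on" transient
        acc += v * v
        cnt += 1

v2 = acc / cnt
print(round(v2, 2))              # should be close to C/(2 alpha)
```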
As we have discussed above we cannot predict a single path but only averages.
One of the most important averages is the two-time correlation function

⟨v(t)v(t′)⟩ (6.20)

which is a measure of how fast the velocity loses its memory or, in other words, if we
have fixed the velocity at a certain time t′, how long does it take the velocity to


differ appreciably from its original value? To evaluate (6.20) we insert (6.12)

v(t) = ∫_0^t e^{−α(t−τ)} F(τ) dτ (6.21)

into (6.20) which yields

⟨v(t)v(t′)⟩ = ∫_0^t ∫_0^{t′} dτ dτ′ e^{−α(t−τ)} e^{−α(t′−τ′)} ⟨F(τ)F(τ′)⟩. (6.22)

This reduces on account of (6.11) to

⟨v(t)v(t′)⟩ = C ∫_0^t ∫_0^{t′} dτ dτ′ e^{−α(t−τ) − α(t′−τ′)} δ(τ − τ′). (6.23)

The integrals can be immediately evaluated. The result reads, for t > t',

⟨v(t)v(t′)⟩ = (C/(2α)) (e^{−α(t−t′)} − e^{−α(t+t′)}). (6.24)

If we consider a stationary process, we may assume t and t' large but t - t' small.
This leaves us with the final result

⟨v(t)v(t′)⟩ = (C/(2α)) e^{−α|t−t′|}. (6.25)

Thus the time τ after which the velocity loses its memory is τ = 1/α. The case
α = 0 leads evidently to a divergence in (6.25). The fluctuations become very large
or, in other words, we deal with critical fluctuations. Extending (6.20) we may
define higher-order correlation functions containing several times by

⟨v(t₁)v(t₂) ··· v(t_n)⟩. (6.26)

When we try to evaluate these correlation functions in the same manner as before,
we have to insert (6.21) into (6.26). Evidently we must know the correlation func-
tions

⟨F(τ₁)F(τ₂) ··· F(τ_n)⟩. (6.27)

The evaluation of (6.27) requires additional assumptions about the F's. In many
practical cases the F's may be assumed to be Gaussian distributed (Sect. 4.4). In that
case one finds

⟨F(τ_n)F(τ_{n−1}) ··· F(τ₁)⟩ = 0 for n odd, (6.28)
⟨F(τ_n)F(τ_{n−1}) ··· F(τ₁)⟩ = Σ_P ⟨F(τ_{λ1})F(τ_{λ2})⟩ ··· ⟨F(τ_{λ(n−1)})F(τ_{λn})⟩ for n even,

where Σ_P runs over all permutations (λ₁, …, λ_n) of (1, …, n).
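For n = 4 and all times equal, (6.28) reduces to the classical Gaussian identity ⟨F⁴⟩ = 3⟨F²⟩², since there are three distinct pairings of four factors. A minimal Monte Carlo check with standard normal samples:

```python
import random

random.seed(2)
n = 200_000
m2 = m4 = 0.0
for _ in range(n):
    x = random.gauss(0.0, 1.0)   # Gaussian "force" sample
    m2 += x * x
    m4 += x ** 4
m2 /= n
m4 /= n
# for a Gaussian variable, <x^4> = 3 <x^2>^2 (three pairings)
print(round(m4 / (3 * m2 * m2), 2))
```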


Exercise on 6.1

1) Evaluate the lhs of (6.11) by assuming for the t_j's a Poisson process.

Hint: ⟨δ(t − t_j) δ(t′ − t_j)⟩ = (1/t₀) δ(t − t′).

2) Eq. (6.28) provides an excellent example to show the significance of cumulants.


Prove the following relations between moment functions m_n(t₁, …, t_n) and
cumulants k_n(t₁, …, t_n) by use of (4.101) and (4.102).

m₁(t₁) = k₁(t₁),
m₂(t₁, t₂) = k₂(t₁, t₂) + k₁(t₁)k₁(t₂),
m₃(t₁, t₂, t₃) = k₃(t₁, t₂, t₃) + 3{k₁(t₁)k₂(t₂, t₃)}_s + k₁(t₁)k₁(t₂)k₁(t₃),
m₄(t₁, t₂, t₃, t₄) = k₄(t₁, t₂, t₃, t₄) + 3{k₂(t₁, t₂)k₂(t₃, t₄)}_s
 + 4{k₁(t₁)k₃(t₂, t₃, t₄)}_s + 6{k₁(t₁)k₁(t₂)k₂(t₃, t₄)}_s
 + k₁(t₁)k₁(t₂)k₁(t₃)k₁(t₄). (E.1)

{ … }_s are symmetrized products.


These relations can be taken to rewrite (6.28) using cumulants. The resulting
form is simpler and more compact.

3) Derive (6.28) with the help of (4.101) and (4.104), k₁ = 0.
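At equal times the relations (E.1) can be checked against any distribution with known cumulants. For a Poisson variable every cumulant k_n equals λ, so the third-moment relation predicts m₃ = λ + 3λ² + λ³. The sketch below verifies this by sampling; the inverse-transform sampler is an illustrative helper, adequate for small λ:

```python
import math
import random

lam = 1.5
k1 = k2 = k3 = lam                          # all Poisson cumulants equal lambda
predicted_m3 = k3 + 3 * k1 * k2 + k1 ** 3   # m3 = k3 + 3 k1 k2 + k1^3 from (E.1)

random.seed(3)

def sample_poisson(lam):
    # inverse-transform sampling of a Poisson variable (helper, small lam only)
    pmf = math.exp(-lam)
    cdf, k, u = pmf, 0, random.random()
    while u > cdf:
        k += 1
        pmf *= lam / k
        cdf += pmf
    return k

n = 200_000
m3 = sum(sample_poisson(lam) ** 3 for _ in range(n)) / n
print(round(predicted_m3, 3), round(m3, 2))   # the two numbers should agree
```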

6.2* Reservoirs and Random Forces

In the preceding section we have introduced the random force F and its properties
as well as the friction force by plausibility arguments. We now want to show how
both quantities can be derived consistently by a detailed physical model. We make
the derivation in a manner which shows how the whole procedure can be extended
to more general cases, not necessarily restricted to applications in physics. Instead
of a free particle we consider a harmonic oscillator, i.e., a point mass fixed to a
spring. The elongation of the point mass from its equilibrium position is called q.
Denoting Hooke's constant by k, the equation of the harmonic oscillator reads
(v = q̇)

m q̈ = −k q. (6.29)

For what follows we introduce the abbreviation

k/m = ω₀², (6.30)

164
6.2 Reservoirs and Random Forces 153

where ω₀ is the frequency of the oscillator. The equation of motion (6.29) then
reads

q̈ = −ω₀² q. (6.31)

Equation (6.31) can be replaced by a set of two other equations if we put, at first,

q̇ = p (6.32)

and then replace q̈ by ṗ in (6.31), which gives


ṗ = −ω₀² q (6.33)

(compare also Sect. 5.2). It is now our goal to transform the pair of equations
(6.32) and (6.33) into a pair of equations which are complex conjugate to each
other. To this end we introduce the new variable bet) and its conjugate complex
b*(t), which shall be connected with p and q by the relations

(1/√2)(√ω₀ q + i p/√ω₀) = b (6.34)

and

(1/√2)(√ω₀ q − i p/√ω₀) = b*. (6.35)

Multiplying (6.32) by √ω₀ and (6.33) by i/√ω₀ and adding the resulting equations
we find after an elementary step

ḃ = −i ω₀ b, where ḃ ≡ db/dt. (6.36)

Similarly the subtraction of (6.33) from (6.32) yields an equation which is just the
conjugate complex of (6.36).
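The change of variables (6.34)-(6.36) can be verified numerically: from b(t) = b(0)e^{−iω₀t} one recovers the coordinate as q(t) = √(2/ω₀)·Re b(t), which must coincide with the elementary solution of q̈ = −ω₀²q. (The numerical values below are arbitrary.)

```python
import cmath
import math

w0 = 2.0                       # oscillator frequency
q0, p0 = 1.0, 0.5              # initial elongation and momentum variable

# b(0) from (6.34): (sqrt(w0) q + i p / sqrt(w0)) / sqrt(2)
b0 = (math.sqrt(w0) * q0 + 1j * p0 / math.sqrt(w0)) / math.sqrt(2)

t = 0.7
b = b0 * cmath.exp(-1j * w0 * t)               # solution of (6.36)
q_from_b = math.sqrt(2.0 / w0) * b.real        # invert (6.34)/(6.35)
q_exact = q0 * math.cos(w0 * t) + (p0 / w0) * math.sin(w0 * t)
print(round(q_from_b, 6), round(q_exact, 6))   # the two values coincide
```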
After these preliminary steps we return to our original task, namely, to set up a
physically realistic model which eventually leads us to dissipation and fluctuation.
The reason why we hesitate to introduce the damping force −γv from the very
beginning is the following. All fundamental (microscopic) equations for the motion
of particles are time reversal invariant, i.e. the motion is completely reversible.
Originally there is no place in these equations for a friction force which violates
time reversal invariance. Therefore, we want to start from equations which are the
usual mechanical equations having time reversal invariance. As we have mentioned
above in connection with Brownian movement the large particle interacts with
(many) other particles in the liquid. These particles act as a "reservoir" or "heat-
bath"; they maintain the mean kinetic energy of the large particle at (1/2) k_B T (per
one degree of freedom). In our model we want to mimic the effect of the "small"
particles by a set of very many harmonic oscillators, acting on the "large" particle,
which we treat as an oscillator by (6.36) and its conjugate complex. We assume that
the reservoir oscillators have different frequencies ω out of a range Δω (also called
bandwidth). In analogy to the description (6.34) and (6.35), we use complex

165
154 6. Chance and Necessity

amplitudes Band B* for the reservoir oscillators, which we distinguish by an index


OJ. In our model we describe the joint action of the B's on the "large" oscillator as
a sum over the individual B's and assume that each contributes linearly. (This
assumption amounts to a linear coupling between oscillators). For these reasons
we obtain as our starting equation

ḃ = −i ω₀ b + i Σ_ω g_ω B_ω. (6.37)

The coefficients g_ω describe the strength of the coupling between the other oscil-
lators and the one under consideration. Now the "big" oscillator reacts on all the
other oscillators which is described by

Ḃ_ω = −i ω B_ω + i g_ω b. (6.38)

(Readers interested in how to obtain (6.37) and (6.38) in the usual framework of
mechanics are referred to the exercises). The solution of (6.38) consists of two
parts, namely, the solution of the homogeneous equation (where i g_ω b = 0) and
a particular solution of the inhomogeneous equation. One readily verifies that the
solution reads

B_ω(t) = B_ω(0) e^{−iωt} + i g_ω ∫_0^t e^{−iω(t−τ)} b(τ) dτ, (6.39)

where B_ω(0) is the initial value, at t = 0, of the oscillator amplitude B_ω. Inserting
(6.39) into (6.37) we find an equation for b

ḃ = −i ω₀ b − Σ_ω g_ω² ∫_0^t e^{−iω(t−τ)} b(τ) dτ + i Σ_ω g_ω e^{−iωt} B_ω(0). (6.40)

The only reminiscence of the B's stems from the last term in (6.40). For further
discussion, we eliminate the term −i ω₀ b(t) by introducing the new variable b̃,

b(t) = e^{−i ω₀ t} b̃(t). (6.41)

Using the abbreviation

ω̃ = ω − ω₀ (6.42)

we may write (6.40) in the form

b̃′ = −Σ_ω g_ω² ∫_0^t e^{−iω̃(t−τ)} b̃(τ) dτ + i Σ_ω g_ω e^{−iω̃ t} B_ω(0). (6.43)

To illustrate the significance of this equation let us identify, for the moment,
b̃ with the velocity v which occurred in (6.8). The integral contains b̃ linearly, which
suggests that a connection with a damping term −γb̃ might exist. Similarly the last
term in (6.43) is a given function of time which we want to identify with a random
force F. How do we get from (6.43) to the form of (6.8)? Apparently the friction


force in (6.43) not only depends on time, t, but also on previous times τ. Under
which circumstances does this memory vanish? To this end we consider a transition
from discrete values ω to continuously varying values, i.e., we replace the sum
over ω by an integral

Σ_ω g_ω² e^{−iω̃(t−τ)} → ∫ g²(ω̃) e^{−iω̃(t−τ)} dω̃. (6.44)

For time differences t − τ not too short, the exponential function oscillates rapidly
for ω̃ ≠ 0 so that only contributions of ω̃ ≈ 0 to the integral are important. Now
assume that the coupling coefficients g vary in the neighborhood of ω₀ (i.e., ω̃ = 0)
only slightly. Because only small values of ω̃ are important, we may extend the
boundaries of the integral to infinity. This allows us to evaluate the integral (6.44)

(6.44) = 2π g² δ(t − τ), (g = g_{ω₀}). (6.45)
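The step from (6.44) to (6.45) can be made concrete with a finite bath. For equally spaced detunings ω̃ = kd, k = −N…N, and a constant coupling g, the kernel Σ_ω g_ω² e^{−iω̃s} is sharply peaked around s = 0, and its half-area (only s > 0 contributes, matching the factor 1/2 discussed below) approaches κ = πg²/d, where 1/d plays the role of the density of bath modes. A numerical sketch with illustrative parameters:

```python
import cmath
import math

g, d, N = 0.1, 0.05, 1000      # coupling, mode spacing, modes per side

def kernel(s):
    # memory kernel: sum over bath detunings k*d of g^2 exp(-i k d s)
    return sum(g * g * cmath.exp(-1j * k * d * s) for k in range(-N, N + 1))

ds, S = 1e-3, 0.5              # integration step and cutoff time
half_area = sum(kernel(m * ds) * ds for m in range(int(S / ds)))
kappa = math.pi * g * g / d    # predicted effective damping constant
print(round(half_area.real, 3), round(kappa, 3))   # the two agree closely
```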

We now insert (6.45) into the integral over the time occurring in (6.43). On account
of the δ-function we may put b̃(τ) = b̃(t) and put it in front of the integral. Further-
more, we observe that the δ-function contributes to the integral only with the factor
1/2 because the integration runs only to τ = t and not further. Since the δ-function
is intrinsically a symmetric function, only 1/2 of the δ-function is covered. We thus
obtain

Σ_ω g_ω² ∫_0^t e^{−iω̃(t−τ)} b̃(τ) dτ = π g² b̃(t) ≡ κ b̃(t), (6.46)

where", has been introduced as an abbreviation. The last term in (6.43) will now be
abbreviated by F

F= eiroot~Lw gwe-irotBw(~)' (6.47)


F

By these intermediate steps we may rewrite our original (6.43) in the form

b̃′ = −κ b̃ + F̃(t). (6.48)

When returning from b̃ to b (cf. (6.41)) we find as fundamental equation

ḃ = −i ω₀ b − κ b + F(t). (6.49)

The result looks very fine because (6.49) has exactly the form we have been
searching for. There are, however, a few points which need a careful discussion.
In the model of the foregoing Section 6.1, we assumed that the force F is random.
How does the randomness come into our present model? A look at (6.47) reveals
that the only quantities which can introduce any randomness are the initial values
of B_ω. Thus we adopt the following attitude (which closely follows information
theory). The precise initial values of the reservoir oscillators are not known except


in a statistical sense; that means we know, for instance, their distribution functions.
Therefore we will use only some statistical properties of the reservoirs at the initial
time. First we may assume that the average over the amplitudes B_ω vanishes be-
cause otherwise the nonvanishing part could always be subtracted as a deter-
ministic force. We furthermore assume that the B_ω's are Gaussian distributed,
which can be motivated in different manners. If we consider physics, we may
assume that the reservoir oscillators are all kept at thermal equilibrium. Then the
probability distribution is given by the Boltzmann distribution (3.71), P_ω =
𝒩_ω e^{−E_ω/k_BT}, where E_ω is the energy of oscillator ω and 𝒩_ω ≡ Z_ω^{−1} is the normaliza-
tion factor. We assume, as usual for harmonic oscillators, that the energy is propor-
tional to p² + q² (with appropriate factors), or in our formalism, proportional
to B_ω B_ω*, i.e., E_ω = c_ω B_ω* B_ω. Then we immediately find the announced Gaussian
distribution for B, B*: f(B, B*) = 𝒩_ω exp(−c_ω B_ω* B_ω/k_BT). Unfortunately there is
no unique relation between frequency and energy, which would allow us to deter-
mine c_ω. (This gap can be filled only by quantum theory.)
Let us now investigate if the force (6.47) has the properties (6.10), (6.11) we
expect according to the preceding section. Because we constructed the forces that
time in a completely different manner out of individual impulses, it is by no means
obvious that the forces (6.47) fulfil relations of the form (6.11). But let us check it.
We form

⟨F*(t)F(t′)⟩. (6.50)

(Note that the forces are now complex quantities). Inserting (6.47) into (6.50) and
taking the average yields

⟨F*(t)F(t′)⟩ = Σ_{ω,ω′} g_ω g_{ω′} e^{iωt − iω′t′} ⟨B_ω*(0) B_{ω′}(0)⟩. (6.51)

We assume that initially the B's have been uncorrelated, i.e.,


⟨B_ω*(0) B_{ω′}(0)⟩ = δ_{ωω′} N_ω, (6.52)

where we have abbreviated ⟨B_ω*(0)B_ω(0)⟩ by N_ω. Thus (6.51) reduces to

⟨F*(t)F(t′)⟩ = Σ_ω g_ω² N_ω e^{iω(t−t′)}. (6.53)

The sum occurring in (6.53) is strongly reminiscent of the lhs of (6.44), with the
only difference that now an additional factor, namely the average ⟨…⟩, occurs.
Evaluating (6.53) by exactly the same considerations which have led us from (6.44)
to (6.45), we now obtain

Σ_ω g_ω² N_ω e^{iω(t−t′)} = 2π g² N_{ω₀} δ(t − t′). (6.54)

Using this final result (6.54) instead of (6.51) we obtain the desired correlation
function as

⟨F*(t)F(t′)⟩ = 2 κ N_{ω₀} δ(t − t′). (6.55)


This relation may be supplemented by

⟨F*(t)F*(t′)⟩ = ⟨F(t)F(t′)⟩ = 0, (6.56)

if we make the corresponding assumptions about the original average values of
B_ω* B_ω* or B_ω B_ω. How do we determine the constant N_{ω₀}? Adopting the thermal dis-
tribution function ∝ exp(−c_ω B_ω* B_ω/k_BT), it is clear that N_{ω₀} must be proportional
to k_B T. However, the proportionality factor remains open or could be fixed
indirectly by Einstein's requirement (Sect. 6.1). (In quantum theory there is no
difficulty at all. We can just identify N_{ω₀} with the number of thermal quanta of the
oscillator at ω₀.)
In our above treatment a number of problems have been swept under the carpet.
First of all, we have converted a set of equations which have complete time-
reversal invariance into an equation violating this principle. The reason lies in
passing from the sum in (6.44) to the integral and in approximating it by (6.45).
What happens in practice is the following: First the heatbath amplitudes B_ω get
out of phase, thus leading to a quick decay of (6.44), considered as a function of the
time difference t − τ. However, there remain some parts of the integrand of (6.40)
which are very small but nevertheless decisive in maintaining reversibility jointly
with the impact of the fluctuating forces (6.47).
Another problem which has been swept under the carpet is the question of why
in (6.54) N occurs with the index ω₀. This was, of course, due to the fact that we
have evaluated (6.53) in the neighborhood of ω̃ = 0 so that only terms with index
ω₀ survive. We could have applied exactly the same procedure not by starting from
(6.47) but by starting from F alone. This would then have led us to a result similar
to (6.53), however, with ω₀ = 0. The idea behind our approach is this: if we solve
the original full equation (6.40) taking into account memory effects, then the
eigenfrequency ω₀ occurs in an iteration procedure, picking up each time only those
contributions ω which are close to resonance with ω₀. It must be stressed, however,
that this procedure requires a good deal of additional thought and has not yet been
performed in the literature, at least to our knowledge.
In conclusion we may state that we were, to some extent, able to derive the
Langevin equation from first principles, however, with some additional assump-
tions which go beyond a purely mechanical treatment. Adopting now this for-
malism, we may generalize (6.49) to that of an oscillator coupled to reservoirs at
different temperatures and giving rise to different damping constants 'K j • A com-
pletely analogous treatment as above yields

ḃ = −i ω₀ b − Σ_j κ_j b + Σ_j F_j(t). (6.57)

The correlation functions are given by

⟨F_j*(t) F_{j′}(t′)⟩ = 2 κ_j N_j δ_{jj′} δ(t − t′). (6.58)

Adopting from quantum theory that at elevated temperatures

N_j ≈ k_B T_j/(ħ ω₀), (6.59)


we may show that

(6.60)

i.e., instead of one temperature T we have now an average temperature

T̄ = (κ₁ T₁ + ··· + κ_n T_n)/(κ₁ + ··· + κ_n). (6.61)
It is most important that the average temperature depends on the strength of the
dissipation (∝ κ_j) caused by the individual heatbaths. Later on we will learn about
problems in which coupling of a system to reservoirs at different temperatures
indeed occurs. According to (6.59) the fluctuations should vanish when the tem-
perature approaches absolute zero. However, from quantum theory it is known
that the quantum fluctuations are important, so that a satisfactory derivation of the
fluctuating forces must include quantum theory.
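Equation (6.61) is simply a dissipation-weighted mean. A tiny illustration for two hypothetical reservoirs (all values invented):

```python
# two hypothetical reservoirs: damping constants and temperatures
kappas = [0.2, 0.8]
temps = [300.0, 500.0]

# (6.61): T_bar = (kappa_1 T_1 + ... + kappa_n T_n) / (kappa_1 + ... + kappa_n)
T_bar = sum(k * T for k, T in zip(kappas, temps)) / sum(kappas)
print(T_bar)   # 460.0: the more strongly coupled bath dominates
```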

Exercises on 6.2

1) Derive (6.37), (6.38) from the Hamiltonian

H = ω₀ b*b + Σ_ω ω B_ω* B_ω − Σ_ω g_ω (b B_ω* + b* B_ω), (g_ω real)

by means of the Hamiltonian equations

ḃ = −i ∂H/∂b*, Ḃ_ω = −i ∂H/∂B_ω*
(and their complex conjugates).

2) Perform the integration in (6.23) for the two different cases: t > t' and t < t'.
For large values of t and t′, one may confirm (6.25).

6.3 The Fokker-Planck Equation


We first consider

A) Completely Deterministic Motion


and treat the equation

q̇(t) = K(q(t)), (6.62)

which may be interpreted as usual in our book as the equation of overdamped


motion of a particle under the force K. Since in this chapter we want to derive


equations capable of describing both deterministic and random processes we try


to treat the motion of the particle in a formalism which uses probability theory.
In the course of time the particle proceeds along a path in the q-t plane. If we pick
out a fixed time t, we can ask for the probability of finding the particle at a certain
coordinate q. This probability is evidently zero if q ≠ q(t), where q(t) is the solution
of (6.62). What kind of probability function yields 1 if q = q(t) and = 0 otherwise?
This is achieved by introducing a probability density equal to the δ-function
(compare Fig. 6.3).

Fig. 6.3. Example of an infinitely peaked "probability" distribution

P(q, t) = δ(q − q(t)). (6.63)

Indeed we have seen earlier in Section 2.5, Exercise, that an integral over the
function δ(q − q₀) vanishes, if the integration interval does not contain q₀, and that
it yields 1 if that interval contains a surrounding of q₀:

∫_{q₀−ε}^{q₀+ε} δ(q − q₀) dq = 1, (6.64)
 = 0 otherwise.
It is now our goal to derive an equation for this probability distribution P.
To this end we differentiate P with respect to time. Since on the rhs the time de-
pendence is inherent in q(/), we differentiate first the b-function with respect to q(t),
and then, using the chain rule, we multiply by 4

. d
P(q, t) = dq(t) b(q - q(t»4(t). (6.65)

The derivative of the δ-function with respect to q(t) can be rewritten as

−(d/dq) δ(q − q(t)) · q̇(t). (6.66)

Now we make use of the equation of motion (6.62) replacing q̇(t) by K. This yields


the final formula

Ṗ(q, t) = −(d/dq)(K(q) P). (6.67)

It must be observed that the differentiation with respect to q now implies the differentiation of


the product of P and K. We refer the reader for a proof of this statement to the end
of this chapter. Our result (6.67) can be readily generalized to a system of differential
equations

q̇_i = K_i(q), i = 1, …, n, (6.68)

where q = (q₁, …, q_n). Instead of (6.68) we use the vector equation

q̇ = K(q). (6.69)

At time t the state of the total system is described by a point q₁ = q₁(t), q₂ =
q₂(t), …, q_n = q_n(t) in the space of the variables q₁ … q_n. Thus the obvious general-
ization of (6.63) to many variables is given by
ization of (6.63) to many variables is given by

P(q, t) = δ(q₁ − q₁(t)) ··· δ(q_n − q_n(t)) ≡ δ(q − q(t)), (6.70)

where the last identity just serves as a definition of the δ-function of a vector. We again
take the time derivative of P. Because the time t occurs in each factor, we obtain a
sum of derivatives

Ṗ(q, t) = −(d/dq₁) P q̇₁ − (d/dq₂) P q̇₂ − ··· − (d/dq_n) P q̇_n, (6.71)

which can be transformed, in analogy to (6.67), into

Ṗ(q, t) = −(d/dq₁)(P K₁(q)) − ··· − (d/dq_n)(P K_n(q)). (6.72)

Writing the derivatives as an n-dimensional ∇ operator, (6.72) can be given the
elegant form

Ṗ = −∇_q·(K P), (6.73)

which permits a very simple interpretation by invoking fluid dynamics. If we


identify P with a density in q-space, the left hand side describes the temporal change
of this density P, whereas the vector KP can be interpreted as the (probability)
flux. K is the velocity in q-space. (6.73) has thus the form of a continuity equation.
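The content of (6.63) and the continuity equation can be checked in the simplest case K(q) = −q: a very narrow cloud of initial points stays sharply peaked under the deterministic flow, and its center follows the trajectory q(t) = q₀e^{−t}. A small sketch (illustrative numbers, simple Euler integration):

```python
import math

q0, t_end, dt = 1.0, 2.0, 1e-4
n, eps = 101, 1e-6
# narrow, delta-like cloud of initial conditions around q0
ensemble = [q0 + eps * (i - n // 2) for i in range(n)]

for _ in range(int(t_end / dt)):
    ensemble = [q - q * dt for q in ensemble]   # Euler step of q' = K(q) = -q

center = sum(ensemble) / n
spread = max(ensemble) - min(ensemble)
print(round(center, 4), spread < 1e-3)   # center tracks q0*exp(-2); cloud stays narrow
```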

B) Derivation of the Fokker-Planck Equation, One-Dimensional Motion


We now combine what we have learned in Section 4.3 about Brownian movement


and in the foregoing about a formal description of particle motion by means of a


probability distribution. Let us again consider the example of a football which
follows several different paths during several games. The probability distribution
for a given path, 1, is
P₁(q, t) = δ(q − q₁(t)), (6.74)

for a given other path, 2,

P₂(q, t) = δ(q − q₂(t)), (6.75)

and so on. Now we take the average over all these paths, introducing the function

f(q, t) = ⟨P(q, t)⟩. (6.76)

If the probability of the occurrence of a path i is p_i, this probability distribution can
be written in the form

f(q, t) = Σ_i p_i δ(q − q_i(t)), (6.77)

or, by use of (6.74), (6.75), (6.76)

f(q, t) = ⟨δ(q − q(t))⟩. (6.78)

f dq gives us the probability of finding the particle at position q in the interval dq at
time t. Of course, it would be a very tedious task to evaluate (6.77), which would
require that we introduce a probability distribution of the pushes during the total
course of time. This can be avoided, however, by deriving directly a differential
equation for f. To this end we investigate the change of f in a time interval Δt

Δf(q, t) ≡ f(q, t + Δt) − f(q, t), (6.79)

which by use of (6.78) takes the form

Δf(q, t) = ⟨δ(q − q(t + Δt))⟩ − ⟨δ(q − q(t))⟩. (6.80)

We put
q(t + Δt) = q(t) + Δq(t) (6.81)

and expand the δ-function with respect to powers of Δq. We now have in mind that
the motion of q is not determined by the deterministic equation (6.62) but rather by
the Langevin equation of Section 6.1. As we shall see a little later, this new situa-
tion requires that we expand up to powers quadratic in Δq. This expansion thus
yields

Δf(q, t) = ⟨(−(d/dq) δ(q − q(t))) Δq(t)⟩ + ½ ⟨(d²/dq²) δ(q − q(t)) (Δq(t))²⟩. (6.82)


By means of the Langevin equation

q̇(t) = −γ q(t) + F(t) (6.83)

we find Δq by integration over the time interval Δt. In this integration we assume
that q has changed very little but that many pushes have already occurred. We
thus obtain by integrating (6.83)

∫_t^{t+Δt} q̇(t′) dt′ = q(t + Δt) − q(t) ≡ Δq

 = −∫_t^{t+Δt} γ q(t′) dt′ + ∫_t^{t+Δt} F(t′) dt′ = −γ q(t) Δt + ΔF(t). (6.84)

We evaluate the first term on the right hand side of (6.82):

⟨(−(d/dq) δ(q − q(t))) Δq(t)⟩ = −(d/dq) ⟨δ(q − q(t)) Δq(t)⟩. (6.85)

Inserting the rhs of (6.84) into (6.85) yields

−(d/dq) {⟨δ(q − q(t)) (−γ q(t) Δt)⟩ + ⟨δ(q − q(t))⟩⟨ΔF⟩}. (6.86)

The splitting of the average containing ΔF into the product of two averages
requires a comment: ΔF contains all pushes which have occurred after the time t
whereas q(t) is determined by all pushes prior to this time. Due to the independence
of the pushes, we may split the total average into the product of the averages as
written down in (6.86). Since the average of F vanishes and so does that of ΔF,
(6.86) reduces to
(6.86) reduces to

γ Δt (d/dq) {⟨δ(q − q(t)) q⟩}. (6.87)

Note that we have replaced q(t) by q, exactly as in (6.67).


We now come to the evaluation of the term

⟨(d²/dq²) δ(q − q(t)) (Δq(t))²⟩ (6.88)

which, using the same arguments as just now, can be split into

(d²/dq²) ⟨δ(q − q(t))⟩ ⟨(Δq(t))²⟩. (6.89)

When inserting Δq in the second part using (6.84) we find terms containing Δt²,
terms containing Δt ΔF, and (ΔF)². We will show that ⟨(ΔF)²⟩ goes with Δt. Since
the average of ΔF vanishes, ⟨(ΔF)²⟩ is the only contribution to (6.89) which is linear


in Δt. We evaluate

⟨ΔF(t) ΔF(t)⟩ = ∫_t^{t+Δt} ∫_t^{t+Δt} dt′ dt″ ⟨F(t′) F(t″)⟩. (6.90)

We assume that the correlation function between the F's is δ-correlated,

⟨F(t) F(t′)⟩ = Q δ(t − t′), (6.91)

which permits the immediate evaluation of (6.90) yielding

Q Δt. (6.92)

Thus we have finally found (6.88) in the form

(d²/dq²) ⟨δ(q − q(t))⟩ Q Δt. (6.93)

We now divide the original equation (6.82) by Δt and find, with the results of (6.87)
and (6.93),

df/dt = (d/dq)(γ q f) + ½ Q d²f/dq² (6.94)

after the limit Δt → 0 has been taken. This equation is the so-called Fokker-Planck
equation which describes the change of the probability distribution of a particle
during the course of time (cf. Fig. 6.4). K = −γq is called the drift coefficient, while
Q is known as the diffusion coefficient. Exactly the same method can be applied to the
general case of many variables and to arbitrary forces K_i(q), i.e., not necessarily
simple friction forces. If we assume the corresponding Langevin equation in the


Fig. 6.4. Example of f(q, t) as function of
time t and variable q. Dashed line: most
probable path


form

q̇_i = K_i(q) + F_i(t), (6.95)

and δ-correlated fluctuating forces F_i,

⟨F_i(t) F_j(t′)⟩ = Q_{ij} δ(t − t′), (6.96)

we may derive for the distribution function f in q-space

f(q₁, …, q_n; t) = f(q; t) (6.97)

the following Fokker-Planck equation

df/dt = −Σ_i (∂/∂q_i)(K_i f) + ½ Σ_{i,j} Q_{ij} ∂²f/(∂q_i ∂q_j). (6.98)

In our above treatment we included derivatives of the δ-function up to second
order. A detailed treatment shows that, in general, higher derivatives also yield
contributions ∝ Δt. An important exception is made if the fluctuating forces are
Gaussian (cf. Sect. 4.4). In this case, the Fokker-Planck equations (6.94) and (6.98)
are exact.
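For the linear drift of (6.83) the Fokker-Planck equation (6.94) has a Gaussian solution with mean q₀e^{−γt} and variance (Q/2γ)(1 − e^{−2γt}). The sketch below compares these moments with an ensemble of Euler-Maruyama paths (all numbers are illustrative):

```python
import math
import random

gamma, Q, q0 = 1.0, 2.0, 2.0
t_end, dt = 1.0, 2e-3
n_paths, steps = 5_000, int(1.0 / 2e-3)

random.seed(4)
finals = []
for _ in range(n_paths):
    q = q0
    for _ in range(steps):
        q += -gamma * q * dt + math.sqrt(Q * dt) * random.gauss(0.0, 1.0)
    finals.append(q)

mean = sum(finals) / n_paths
var = sum((x - mean) ** 2 for x in finals) / n_paths
# moments predicted by the Fokker-Planck solution at t = t_end:
mean_th = q0 * math.exp(-gamma * t_end)
var_th = (Q / (2 * gamma)) * (1 - math.exp(-2 * gamma * t_end))
print(round(mean, 2), round(mean_th, 2), round(var, 2), round(var_th, 2))
```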
We want to prove that in (6.67) the differentiation with respect to q must also
involve K(q). To prove this we form the expression

∫_{q(t)−ε}^{q(t)+ε} h(q) (d/dq) δ(q − q(t)) K(q(t)) dq, (6.99)

which is obtained from the rhs of (6.66) by replacing q̇ by means of (6.62) and by
multiplication with an arbitrary function h(q). Note that this procedure must
always be applied if a δ-function appears in a differential equation in a form like
(6.65). Partial integration of (6.99) leads to

−∫_{q(t)−ε}^{q(t)+ε} h′(q) δ(q − q(t)) K(q(t)) dq, (6.100)

where the δ-function can now be evaluated. On the other hand we end up with the
same result (6.100) if we start right away from

∫ h(q) (d/dq) {δ(q − q(t)) K(q)} dq, (6.101)

where the coordinate q(t) in K is now replaced by q.

Exercises on 6.3

1) In classical mechanics the coordinates q_j(t) and momenta p_j(t) of particles obey


the Hamiltonian equations of motion

q̇_j = ∂H/∂p_j,  ṗ_j = −∂H/∂q_j,

where the Hamiltonian function H depends on all q_j's and p_j's: H = H(q, p).
We define a distribution function f(q, p; t) by f = δ(q − q(t)) δ(p − p(t)).
Show that f obeys the so-called Liouville equation

∂f/∂t + Σ_j {(∂H/∂p_j)(∂f/∂q_j) − (∂H/∂q_j)(∂f/∂p_j)} = 0. (E.1)

Hint: Repeat the steps (6.63)-(6.67) for q_j and p_j.

2) Functions f satisfying the Liouville equation (E.1) with ∂f/∂t = 0 are called con-
stants of motion.
Show a) g = H(q, p) is a constant of motion;
b) if h₁(q, p) and h₂(q, p) are constants of motion then h₁ + h₂ and
h₁·h₂ are also constants of motion;
c) if h₁, …, h_l are such constants, then any function G(h₁, …, h_l) is
also a constant of motion.
Hint: a) insert g into (E.1);
b) use the product rule of differentiation;
c) use the chain rule.

3) Show by generalizing 2) that f(g₁, …, g_l) is a solution of (E.1), if the g_k's are
solutions, g_k = g_k(q, p; t), of (E.1).

4) Verify that the information entropy (3.42) satisfies (E.1) provided the following
identifications are made:

q_j, p_j → index i (value of "random variable"),
f(q, p) → p_i.

Replace the sum Σ in (3.42) now by an integral ∫ … dⁿp dⁿq. Why does the coarse-
grained information entropy also fulfil (E.1)?

5) Verify that (3.42) with (3.48) is a solution of (E.1) provided the f_k's are constants of motion (cf. exercise 2).

6.4 Some Properties and Stationary Solutions of the


Fokker-Planck Equation
In this section we show how to find time-independent solutions of several types of
Fokker-Planck equations which are often met in practical applications. We confine
the following considerations to q-independent diffusion coefficients, Q_jk.

166 6. Chance and Necessity

A) The Fokker-Planck Equation as Continuity Equation


1) One-dimensional example:
We write the one-dimensional Fokker-Planck equation (6.94) in the form

\dot f = -\frac{\partial}{\partial q}(Kf) + \tfrac{1}{2} Q \frac{\partial^2 f}{\partial q^2}, \qquad K = K(q), \quad f = f(q, t). (6.102)

By means of the abbreviation

j = Kf - \tfrac{1}{2} Q \frac{df}{dq}, (6.103)

(6.102) can be represented as

\dot f + \frac{\partial j}{\partial q} = 0. (6.104)

This is the one-dimensional case of a continuity equation (cf. (6.73) and the
exercise): The temporal change of the probability density f(q) is equal to the negative
divergence of the probability current j.

2) n-dimensional case
The Fokker-Planck equation (6.98) may be cast in the form

\dot f = -\sum_k \frac{\partial}{\partial q_k}(K_k f) + \tfrac{1}{2}\sum_{k,l} Q_{kl} \frac{\partial^2 f}{\partial q_k\,\partial q_l}. (6.105)

We now define the probability current by

j_k = K_k f - \tfrac{1}{2}\sum_{l=1}^{n} Q_{kl} \frac{\partial f}{\partial q_l}. (6.106)

In analogy to (6.104) we then obtain

\dot f + \nabla_q \cdot j = 0, (6.107)

where \nabla_q = (\partial/\partial q_1, \ldots, \partial/\partial q_n).

B) Stationary Solutions of the Fokker-Planck Equation


The stationary solution is defined by ∂f/∂t = 0, i.e., f is time-independent.

1) One-dimension
We obtain from (6.104) by simple integration

j = const. (6.108)


In the following we impose the "natural boundary condition" on f, which means that f vanishes for q → ±∞. This implies (compare (6.103)) that j → 0 for q → ±∞, i.e., the constant in (6.108) must vanish. Using (6.103), we then have

\tfrac{1}{2} Q \frac{df}{dq} = Kf. (6.109)

It is a simple matter to verify that (6.109) is solved by

f(q) = \mathcal{N} \exp(-2V(q)/Q), (6.110)

where

V(q) = -\int_0^q K(q')\,dq' (6.111)

has the meaning of a potential, and the normalization constant 𝒩 is determined by

\int_{-\infty}^{+\infty} f(q)\,dq = 1. (6.112)
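The recipe (6.110)-(6.112) is easily checked numerically for a linear drift. A minimal sketch (Python; the values α = 2, Q = 1 are arbitrary illustrative assumptions):

```python
import math

# Stationary solution (6.110) for K(q) = -alpha*q, V(q) = alpha*q^2/2.
alpha, Q = 2.0, 1.0
V = lambda q: 0.5 * alpha * q * q           # potential (6.111)

qs = [-8.0 + 0.004 * i for i in range(4001)]
dq = 0.004
w = [math.exp(-2.0 * V(q) / Q) for q in qs]
Z = sum(w) * dq                              # normalization integral (6.112)
f = [wi / Z for wi in w]

var = sum(q * q * fi for q, fi in zip(qs, f)) * dq
print(var)   # the width <q^2> should come out as Q/(2*alpha)
```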

2) n dimensions
Here, (6.107) with ∂f/∂t = 0 reads

\nabla_q \cdot j = 0. (6.113)

Unfortunately, (6.113) does not always imply j = 0, even for natural boundary conditions. However, a solution analogous to (6.110) obtains, if the drift coefficients K_k(q) fulfil the so-called potential condition

K_k = -\frac{\partial}{\partial q_k} V(q). (6.114)

If, furthermore, the diffusion coefficients obey the condition

Q_{kl} = Q\,\delta_{kl}, (6.115)

the stationary solution reads

f(q) = \mathcal{N} \exp\{-2V(q)/Q\}. (6.116)

It is assumed that V(q) ensures that f(q) vanishes for |q| → ∞.

C) Examples
To illustrate (6.110) we treat a few special cases:
a)
K(q) = -\alpha q. (6.117)


We immediately find

V(q) = \frac{\alpha}{2} q^2,

which is plotted in Fig. 6.5. The corresponding probability density f(q) is plotted in the same figure. To interpret f(q) let us recall the Langevin equation corresponding to (6.102), (6.117)

Fig. 6.5. Potential V(q) (solid line) and probability density f(q) (dashed line) for (6.117)

\dot q = -\alpha q + F(t).

What happens to our particle with coordinate q is as follows: The random force F(t) pushes the particle up the potential slope (which stems from the systematic force, K(q)). After each push, the particle falls down the slope. Therefore, the most probable position is q = 0, but also other positions q are possible due to the random pushes. Since many pushes are necessary to drive the particle far from q = 0, the probability of finding it in those regions decreases rapidly. When we let α become smaller, the restoring force K becomes weaker. As a consequence, the potential curve becomes flatter and the probability density f(q) is more spread out.
Once f(q) is known, moments ⟨qⁿ⟩ = ∫ qⁿ f(q) dq may be calculated. In our present case, ⟨q⟩ = 0, i.e., the center of f(q) sits at the origin, and ⟨q²⟩ = (1/2)(Q/α) is a measure for the width of f(q) (compare Fig. 6.5).
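The picture of random pushes can be made concrete by integrating the Langevin equation with the Euler–Maruyama scheme, in which the fluctuating force with ⟨F(t)F(t')⟩ = Qδ(t − t') contributes a Gaussian increment of variance Q·dt per time step dt. A minimal sketch (Python; α = Q = 1 and the step sizes are illustrative assumptions):

```python
import math, random

# Euler-Maruyama integration of dq/dt = -alpha*q + F(t).
random.seed(0)
alpha, Q, dt, steps, burn = 1.0, 1.0, 0.01, 200_000, 20_000
q, acc = 0.0, 0.0
for n in range(steps):
    q += -alpha * q * dt + math.sqrt(Q * dt) * random.gauss(0.0, 1.0)
    if n >= burn:                 # discard the initial transient
        acc += q * q
var = acc / (steps - burn)
print(var)   # fluctuates around the stationary width Q/(2*alpha) = 0.5
```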
b)
K(q) = -\alpha q - \beta q^3, (6.118)

V(q) = \frac{\alpha}{2} q^2 + \frac{\beta}{4} q^4,

\dot q = -\alpha q - \beta q^3 + F(t).

The case α > 0 is qualitatively the same as a), compare Fig. 6.6a. However, for α < 0 a new situation arises (cf. Fig. 6.6b). While without fluctuations, the


Fig. 6.6a and b. Potential V(q) (solid line) and probability density f(q) (dashed line) for (6.118). (a) α > 0, (b) α < 0

particle coordinate occupies either the left or the right valley (broken symmetry, compare Sect. 5.1), in the present case f(q) is symmetric. The "particle" may be
found with equal probability in both valleys. An important point should be men-
tioned, however. If the valleys are deep, and we put the particle initially at the
bottom of one of them, it may stay there for a very long time. The determination
of the time it takes to pass over to the other valley is called the "first passage time
problem".
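The symmetric, double-peaked f(q) for α < 0 can be read off directly from (6.110). A minimal sketch (Python; α = −1, β = 1, Q = 0.5 are illustrative assumptions):

```python
import math

# Stationary density f(q) ∝ exp(-2V/Q) for the double well (6.118), alpha < 0.
alpha, beta, Q = -1.0, 1.0, 0.5
V = lambda q: 0.5 * alpha * q * q + 0.25 * beta * q ** 4

qs = [-3.0 + 0.001 * i for i in range(6001)]
w = [math.exp(-2.0 * V(q) / Q) for q in qs]    # unnormalized density

# The maxima of f(q) sit at the valley bottoms q = ±sqrt(-alpha/beta).
i_max = max(range(len(qs)), key=lambda i: w[i])
print(abs(qs[i_max]))   # close to sqrt(-alpha/beta) = 1
```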
c)
K(q) = -\alpha q - \gamma q^2 - \beta q^3, (6.119)

V(q) = \frac{\alpha}{2} q^2 + \frac{\gamma}{3} q^3 + \frac{\beta}{4} q^4,

\dot q = -\alpha q - \gamma q^2 - \beta q^3 + F(t).

We assume γ > 0, β > 0 as fixed, but let α vary from positive to negative values. Figs. 6.7a-d exhibit the corresponding potential curves a)-d) and probability densities. Note the pronounced jump in the probability density at q = 0 and q = q₁ when passing from Fig. c to Fig. d.

d) This and the following example illustrate the "potential case" in two dimensions.

K_1(q) = -\alpha q_1, \quad K_2(q) = -\alpha q_2 \qquad \text{force,} (6.120)

V(q) = \frac{\alpha}{2}(q_1^2 + q_2^2) \qquad \text{potential,}

\dot q_i = -\alpha q_i + F_i(t), \quad i = 1, 2; \qquad \text{Langevin equation,}

where ⟨F_i(t) F_j(t')⟩ = Q_{ij} δ(t − t') = δ_{ij} Q δ(t − t').

The potential surface V and the probability density f(q) are qualitatively those of Figs. 6.8 and 6.9.


Fig. 6.7. (6.119) V (solid) and f (dashed) for varying α

Fig. 6.8. The potential belonging to (6.121) for α > 0. Fig. 6.9. The distribution function belonging to the potential (6.121), α > 0


e) We present a two-dimensional generalization of case b) above:

K_1(q) = -\alpha q_1 - \beta (q_1^2 + q_2^2)\, q_1, \quad K_2(q) = -\alpha q_2 - \beta (q_1^2 + q_2^2)\, q_2, (6.121)

or, in short,

K(q) = -\alpha q - \beta q^2 \cdot q \qquad \text{force,}

V(q) = \frac{\alpha}{2}(q_1^2 + q_2^2) + \frac{\beta}{4}(q_1^2 + q_2^2)^2 \qquad \text{potential.}

We assume β > 0. For α > 0, potential surface and probability density f(q) are shown in Figs. 6.8 and 6.9, respectively. For α < 0, V and f are shown in Figs. 6.10 and 6.11, respectively. What is new compared to case b is the continuously broken symmetry. Without fluctuating forces, the particle could sit anywhere at the bottom of the valley in a (marginal) equilibrium position. Fluctuations drive
the particle round the valley, completely analogous to Brownian movement in one
dimension. In the stationary state, the particle may be found along the bottom with
equal probability, i.e., the symmetry is restored.
f) In the general case of a known potential V(q), a discussion in terms of Section
5.5 may be given. We leave it to the reader as an exercise to perform this "transla-
tion".

Exercise on 6.4

Convince yourself that (6.104) is a continuity equation.

Fig. 6.10. The potential belonging to (6.121) for α < 0. Fig. 6.11. The distribution function belonging to the potential (6.121) for α < 0


Hint: Integrate (6.104) from q = q₁ to q = q₂ and discuss the meaning of

\frac{d}{dt}\int_{q_1}^{q_2} f(q)\,dq \quad \text{etc.}

6.5 Time-Dependent Solutions of the Fokker-Planck Equation

1) An Important Special Case: One-Dimensional Example


The drift coefficient is linear in q:

K = -\alpha q

(by a simple shift of the origin of the q-coordinate we may also cover the case K = c − αq). We present a more or less heuristic derivation of the corresponding solution. Because the stationary solution (6.110) has the form of a Gaussian distribution, provided that the drift coefficients are linear, we try a hypothesis in the form of a Gaussian distribution

f(q, t) = \mathcal{N}(t) \exp\{-q^2/a + 2bq/a\}, (6.122)

where we admit that the width of the Gaussian distribution, a, the displacement, b, and the normalization, 𝒩(t), are time-dependent functions. We insert (6.122) into the time-dependent Fokker-Planck equation (6.102). After performing the differentiation with respect to time t and coordinate q, we divide the resulting equation on both its sides by

\mathcal{N}(t) \exp\{-q^2/a + 2bq/a\}.

We are then left with an equation containing powers of q up to second order.


Comparing in this equation the coefficients of the same powers of q yields the following three equations (after some rearrangements)

\dot a = -2\alpha a + 2Q, (6.123)

\dot b = -\alpha b, (6.124)

\frac{\dot{\mathcal{N}}}{\mathcal{N}} = \alpha + \frac{2Qb^2}{a^2} - \frac{Q}{a}. (6.125)

Eqs. (6.123) and (6.124) are linear differential equations for a and b which can be solved explicitly:

a(t) = \frac{Q}{\alpha}\big(1 - e^{-2\alpha t}\big) + a_0\, e^{-2\alpha t}, (6.126)

b(t) = b_0\, e^{-\alpha t}. (6.127)


Eq. (6.125) looks rather grim. It is a simple matter, however, to verify that it is solved by the hypothesis

\mathcal{N}(t) = (\pi a(t))^{-1/2} \exp\big(-b(t)^2/a(t)\big), (6.128)

which merely comes from the fact that (6.128) normalizes the distribution function (6.122) for all times. Inserting (6.128) into (6.122) we obtain

f(q, t) = (\pi a(t))^{-1/2} \exp\{-(q - b(t))^2 / a(t)\}. (6.129)

Fig. 6.4 shows an example of (6.129). If the solution (6.129) is subject to the initial condition a → 0 (i.e., a₀ = 0) for the initial time t → t₀ ≡ 0, (6.129) reduces to a δ-function, δ(q − b₀), at t = 0, or, in other words, it is a Green's function of the Fokker-Planck equation. The same type of solutions of a Fokker-Planck equation with linear drift and constant diffusion coefficients can be found also for many variables q.
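That (6.126), (6.127) indeed solve (6.123), (6.124) can be confirmed by integrating the two linear equations numerically and comparing with the closed forms. A minimal sketch (Python; all parameter values are illustrative assumptions):

```python
import math

# Forward-Euler integration of a' = -2*alpha*a + 2*Q and b' = -alpha*b,
# compared with the closed-form solutions (6.126), (6.127).
alpha, Q, a0, b0 = 1.5, 2.0, 0.3, 1.0
dt, T = 1e-4, 2.0
a, b = a0, b0
for _ in range(int(T / dt)):
    a += dt * (-2.0 * alpha * a + 2.0 * Q)
    b += dt * (-alpha * b)

a_exact = (Q / alpha) * (1.0 - math.exp(-2.0 * alpha * T)) + a0 * math.exp(-2.0 * alpha * T)
b_exact = b0 * math.exp(-alpha * T)
print(abs(a - a_exact), abs(b - b_exact))   # both small
```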

Examples for the Application of Time-Dependent Solutions


With help of the time-dependent solutions, we may calculate time-dependent moments, e.g.,

⟨q⟩ = \int q\, f(q, t)\,dq. (6.130)

Inserting (6.129) into (6.130) we may evaluate the integral immediately (by passing over to a new coordinate q = q' + b(t)) and obtain

⟨q⟩ = b_0\, e^{-\alpha t}. (6.131)

Because f(q, t) is determined uniquely only if the initial distribution f(q, 0) is given, when evaluating (6.130) we must observe this condition. In many practical cases, f(q, 0) is chosen as a δ-function, f(q, 0) = δ(q − q₀), i.e., we know for sure that the particle was at q = q₀ for t = 0. To indicate this, (6.130) is then written as

(6.132)

In our above example, b₀ ≡ q₀.


In many practical cases two-time correlation functions

⟨q(t) q(t')⟩

are important. They are defined by (compare Sect. 4.4)

⟨q(t) q(t')⟩ = \int q\,dq \int q'\,dq'\; f(q, t;\, q', t'), (6.133)

where

f(q, t;\, q', t') (6.134)

is a joint probability density. Since the Fokker-Planck equation describes only Markovian processes, we may decompose (6.134) into a probability density at time t', f(q', t'), and a conditional probability f(q, t | q', t') according to Section 4.3:

f(q, t;\, q', t') = f(q, t \mid q', t')\, f(q', t'). (6.135)

In practical cases f(q', t') is taken as the stationary solution f(q) of the Fokker-Planck equation, if not otherwise stated. f(q, t | q', t') is just that time-dependent solution of the Fokker-Planck equation which reduces to the δ-function δ(q − q') at time t = t'. Thus f(q, t | q', t') is a Green's function. In our present example (6.117), we have f(q) = (α/πQ)^{1/2} exp(−αq²/Q) and f(q, t | q', t') given by (6.129) with a₀ = 0 and b₀ = q'. With these functions we may simply evaluate (6.133), where we put without loss of generality t' = 0.

(6.133) = \int\!\!\int q\,(\pi a(t))^{-1/2} \exp\Big\{-\frac{1}{a(t)}\big(q - q' e^{-\alpha t}\big)^2\Big\}\,dq \cdot q'\,(\pi Q/\alpha)^{-1/2} \exp\Big\{-\frac{\alpha}{Q} q'^2\Big\}\,dq'. (6.136)

Replacing q by q + q' exp(−αt) we may simply perform the integrations (which are essentially over Gaussian densities),

⟨q(t) q(t')⟩ = e^{-\alpha t}\,⟨q'^2⟩ = \frac{1}{2}\frac{Q}{\alpha}\, e^{-\alpha t}, (6.137)

which is in accordance with (6.25).
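The result (6.137) can be verified by evaluating the double integral (6.136) on a grid. A minimal sketch (Python; α = Q = 1 and t = 0.7 are illustrative assumptions):

```python
import math

# Double integral (6.136): q with the Green's function (6.129) (a0 = 0,
# b0 = q'), q' with the stationary density; compare with (6.137).
alpha, Q, t = 1.0, 1.0, 0.7
a_t = (Q / alpha) * (1.0 - math.exp(-2.0 * alpha * t))   # (6.126), a0 = 0
grid = [-6.0 + 0.04 * i for i in range(301)]
dq = 0.04

corr = 0.0
for qp in grid:
    f_st = math.sqrt(alpha / (math.pi * Q)) * math.exp(-alpha * qp * qp / Q)
    mean = qp * math.exp(-alpha * t)
    for q in grid:
        green = math.exp(-(q - mean) ** 2 / a_t) / math.sqrt(math.pi * a_t)
        corr += q * qp * green * f_st
corr *= dq * dq

print(corr, (Q / (2.0 * alpha)) * math.exp(-alpha * t))  # the two agree
```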


Now we turn to the general equation (6.105).

2)* Reduction of the Time-Dependent Fokker-Planck Equation to a Time-Independent Equation
We put

f(q, t) = \exp(-\lambda t)\, \Psi(q) (6.138)

and insert it into (6.105). Performing the differentiation with respect to time and then multiplying both sides of (6.105) by exp(λt) yields

L\,\Psi = -\lambda\, \Psi. (6.139)

We do not discuss methods of solution of (6.139). We simply mention a few important properties: (6.139) allows for an infinite set of solutions Ψ_m(q) and eigenvalues λ_m, m = 0, 1, 2, …, provided that suitable boundary conditions are given, e.g., natural ones. The most general solution of (6.105) is obtained as a linear


combination of (6.138):

f(q, t) = \sum_m c_m \exp(-\lambda_m t)\, \Psi_m(q). (6.140)

When a stationary solution of (6.105) exists, λ₀ = 0. The coefficients c_m may be fixed by prescribing the initial distribution, e.g., at time t = 0:

f(q, 0) = f_0(q). (6.141)

Even in the one-dimensional case and for rather simple K's and Q's, (6.139) can be solved only with the help of computers.

3)* A Formal Solution
We start from the Fokker-Planck equation (6.105), which we write in the form

\dot f = L f. (6.142)

In it, L is the "operator"

L = -\sum_k \frac{\partial}{\partial q_k} K_k + \tfrac{1}{2}\sum_{k,l} Q_{kl} \frac{\partial^2}{\partial q_k\,\partial q_l}. (6.143)

If L were just a number, solving (6.142) would be a trivial matter and f(t) would read

f(t) = e^{Lt} f(0). (6.144)

By inserting (6.144) into (6.142) and differentiating it with respect to time t, one verifies immediately that (6.144) fulfils (6.142) even in the present case where L is an operator. To evaluate (6.144), we define exp(Lt) just by the usual power series expansion of the exponential function:

e^{Lt} = \sum_{\nu=0}^{\infty} \frac{L^\nu t^\nu}{\nu!}. (6.145)

L^ν means: apply the operator L ν times on a function standing on the right of it:

L^\nu f(q, t) = \underbrace{L \cdot L \cdots L}_{\nu\ \text{times}}\, f(q, t). (6.146)

4)* An Iteration Procedure

In practical applications, one may try to solve (6.142) by iteration:
Let f(q, t) be given at time t = t₀; we wish to construct f at a slightly later time, t₀ + τ. To this end we recall the definition

\dot f = \lim_{\tau \to 0} \frac{1}{\tau}\big(f(t + \tau) - f(t)\big).


Leaving τ finite (but very small), we recast (6.142) into the form

f(q, t_0 + \tau) = f(q, t_0) + \tau L f(q, t_0) \equiv (1 + \tau L)\, f(q, t_0). (6.147)

Repeating this procedure at times t₂ = t₀ + 2τ, …, t_N = t₀ + Nτ, we find (t_N = t)

f(q, t) = (1 + \tau L)^N f(q, t_0). (6.148)

(6.148) becomes an exact solution of (6.142) in the limit τ → 0, N → ∞ with Nτ = t − t₀ fixed. (6.148) is an alternative form to (6.144).

Exercise on 6.5

Verify that (6.129) with (6.126), (6.127), a₀ = 0, reduces to a δ-function, δ(q − q₀), for t → 0.

6.6* Solution of the Fokker-Planck Equation by Path Integrals


1) One-Dimensional Case
In Section 4.3 we solved a very special Fokker-Planck equation by a path integral, namely in the case of a vanishing drift coefficient, K ≡ 0. Here we present the idea of how to generalize this result to K ≠ 0. In the present case, L of (6.142) is given by

L = -\frac{\partial}{\partial q} K(q) + \tfrac{1}{2} Q \frac{\partial^2}{\partial q^2}. (6.149)

For an infinitesimal time-interval τ, we try the following hypothesis (generalizing (4.89) for a single step, t₀ → t₀ + τ)

f(q, t_0 + \tau) = \mathcal{N} \int_{-\infty}^{+\infty} \exp\Big\{-\frac{1}{2Q\tau}\big(q - q' - \tau K(q')\big)^2\Big\}\, f(q', t_0)\, dq'. (6.150)

Readers who are not so much interested in mathematical details may skip the following section and proceed directly to formula (6.162). We will expand (6.150) into a power series of τ, hereby proving that (6.150) is equivalent to (6.147) up to and including the order τ, i.e., we want to show that the rhs of (6.150) may be transformed into

f + \tau\Big(-\frac{d}{dq}(Kf) + \tfrac{1}{2} Q \frac{d^2 f}{dq^2}\Big), \quad \text{where } f = f(q, t_0). (6.151)

To this end, we introduce a new integration variable ξ by

q' = q + \xi. (6.152)


We simultaneously evaluate the curly bracket in the exponent:

f(q, t_0 + \tau) = \mathcal{N} \int_{-\infty}^{+\infty} \exp\Big\{-\frac{1}{2Q\tau}\big(\xi^2 + 2\tau\xi K(q + \xi) + \tau^2 K(q + \xi)^2\big)\Big\}\, f(q + \xi, t_0)\, d\xi. (6.153)

The basic idea is now this: the Gaussian distribution

\exp\Big(-\frac{1}{2Q\tau}\,\xi^2\Big) (6.154)

gets very sharply peaked as τ → 0. More precisely, only those terms under the integral are important for which |ξ| < √τ·√Q. This suggests expanding all factors of (6.153) into a power series of ξ and τ. Keeping the leading terms we obtain after some rearrangements

(6.155)

where

(6.156)

Terms odd in ξ have been dropped, because they vanish after integration over ξ. Here

K = K(q), \quad K' = \frac{dK(q)}{dq}, \quad f = f(q), \quad f' = \frac{df}{dq}, \quad f'' = \frac{d^2 f}{dq^2}. (6.157)

To evaluate (6.155) further, we integrate over ξ. Using the well-known formulas

\int_{-\infty}^{+\infty} \exp\Big\{-\frac{1}{2Q\tau}\,\xi^2\Big\}\, d\xi = (2Q\pi\tau)^{1/2}, (6.158)

\int_{-\infty}^{+\infty} \xi^2 \exp\Big\{-\frac{1}{2Q\tau}\,\xi^2\Big\}\, d\xi = Q\tau\,(2Q\pi\tau)^{1/2}, (6.159)

we obtain (6.155) in the form

(6.155) = \mathcal{N}(2Q\tau\pi)^{1/2}\Big\{f - \tau K' f - \tau K f' + \frac{\tau Q}{2} f''\Big\}. (6.160)

Choosing the constant 𝒩 as

\mathcal{N} = (2Q\tau\pi)^{-1/2}, (6.161)


we may establish the required identity with (6.147). We may now repeat the whole procedure at times t₂ = t₀ + 2τ, …, t_N = t₀ + Nτ and eventually pass over to the limit N → ∞; Nτ = t − t₀ fixed. This yields an N-dimensional integral (t_N ≡ t, t₀ = 0)

f(q, t) = \lim_{N \to \infty} \int_{-\infty}^{+\infty} \cdots \int \mathscr{D}q\, \exp\big(-\tfrac{1}{2} O\big)\, f(q', t_0), (6.162)

where

\mathscr{D}q = (2Q\tau\pi)^{-N/2}\, dq_0 \cdots dq_{N-1},

O = \sum_{\nu=1}^{N} \tau \big\{(q_\nu - q_{\nu-1})/\tau - K(q_{\nu-1})\big\}^2 Q^{-1}. (6.163)

We leave it to the reader to compare (6.162) with the path integral (4.85) and to interpret (6.162) in the spirit of Chapter 4. The most probable path of the particle is that for which O has a minimum, i.e.,

(q_\nu - q_{\nu-1})/\tau - K(q_{\nu-1}) = 0. (6.164)

Letting τ → 0 this equation is just the equation of motion under the force K. In the literature one often puts (1/τ)(q_ν − q_{ν−1}) = q̇ and replaces K(q_ν) by K(q). This is correct, as long as this is only a shorthand for (6.163). Often different translations have been used, leading to misunderstandings and even to errors.
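The minimum property can be checked directly on the discretized sum (6.163): the deterministic path built from (6.164) gives O = 0, while any wiggly path gives a positive O. A minimal sketch (Python; K(q) = −q and the perturbation are illustrative assumptions):

```python
# Onsager-Machlup sum (6.163) over a discretized path q_0, ..., q_N.
def onsager(path, tau, K, Q):
    return sum(tau * ((path[v] - path[v - 1]) / tau - K(path[v - 1])) ** 2 / Q
               for v in range(1, len(path)))

K = lambda q: -q                 # linear drift of example (6.117)
tau, Q, N = 0.01, 1.0, 100

det = [1.0]
for _ in range(N):
    det.append(det[-1] + tau * K(det[-1]))         # path obeying (6.164)
wiggly = [q + 0.05 * (-1) ** i for i, q in enumerate(det)]

print(onsager(det, tau, K, Q), onsager(wiggly, tau, K, Q))
```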

2) In the n-dimensional case the path integral solution of (6.105) reads

f(q, t) = \lim_{N \to \infty} \int_{-\infty}^{+\infty} \cdots \int \mathscr{D}\, \exp\big(-\tfrac{1}{2} O\big)\, f(q', t_0), (6.165)

where

\mathscr{D} = \prod_{\nu=0}^{N-1} \big\{(2\pi\tau)^{-n/2} (\det Q)^{-1/2}\big\}\,(dq_1 \cdots dq_n)_\nu, (6.166)

q_N = q; \quad q_0 = q', \quad \det = \text{determinant},

O = \tau \sum_{\nu=1}^{N} (\dot q_\nu - K_{\nu-1})^T Q^{-1} (\dot q_\nu - K_{\nu-1}), (6.167)

where q̇_ν = (q_ν − q_{ν−1})/τ, K_{ν−1} = K(q_{ν−1}), and T denotes the transposed vector. Q is the diffusion matrix occurring in (6.105). O may be called a generalized Onsager-Machlup function, because these authors had determined O for the special case of K's linear in q.

Exercise on 6.6

Verify that (6.165) is solution to (6.105).


Hint: Proceed as in the one-dimensional case and use (2.65), (2.66) with m → K.
What is the most probable path?
Hint: Q and thus Q⁻¹ are positive definite.

6.7 Phase Transition Analogy


In the introductory Chapter 1, we mentioned a few examples of phase transitions of physical systems, for example that of the ferromagnet. A ferromagnet consists of very many atomistic elementary magnets. At a temperature T greater than a "critical" temperature T_c, T > T_c, these magnets point in random directions (see Fig. 1.7). When T is lowered, suddenly at T = T_c a macroscopic number of these elementary magnets become aligned. The ferromagnet now has a spontaneous magnetization. Our considerations on the solutions of Fokker-Planck equations in the foregoing chapter will allow us to draw some very close analogies between phase transitions, occurring in thermal equilibrium, and certain disorder-order transitions in nonequilibrium systems. As we will demonstrate in later chapters, such systems may belong to physics, chemistry, biology, and other disciplines. To put these analogies on a solid basis we first consider the free energy of a physical system (in thermal equilibrium) (compare Sects. 3.3 and 3.4). The free energy, ℱ, depends on temperature, T, and possibly on further parameters, e.g., the volume. In the present case, we seek the minimum of the free energy under an additional constraint. Its significance can best be explained in the case of a ferromagnet. When M₁ elementary magnets point upwards and M₂ elementary magnets point downwards, the "magnetization" is given by

M = (M_1 - M_2)\, m, (6.168)

where m is the magnetic moment of a single elementary magnet. Our additional constraint requires that the average magnetization M equals a given value, or, in the notation of Section 3.3,

f_1 = M. (6.169)

Because we also want to treat other systems, we shall replace M by a general coordinate q:

M \to q, \quad \text{and } f_1 = q. (6.170)

In the following we assume that ℱ is the minimum of the free energy for a fixed value of q. To proceed further we expand ℱ into a power series of q,

\mathscr{F}(q, T) = \mathscr{F}(0, T) + \mathscr{F}'(0, T)\, q + \cdots + \frac{1}{4!} \mathscr{F}''''(0, T)\, q^4 + \cdots, (6.171)

and discuss ℱ as a function of q. In a number of cases

\mathscr{F}' = \mathscr{F}''' = 0 (6.172)

due to inversion symmetry (cf. Sect. 5.1). Let us discuss this case first. We write ℱ in the form

\mathscr{F}(q, T) = \mathscr{F}(0, T) + \frac{\alpha}{2} q^2 + \frac{\beta}{4} q^4. (6.173)

Following Landau we call q the "order parameter". To establish a first connection of Section 6.4 with the Landau theory of phase transitions we identify (6.173) with the potential introduced in Section 6.4. As is shown in thermodynamics (cf. Sections 3.3, 3.4),

f = \mathcal{N} \exp\{-\mathscr{F}/k_B T\} (6.174)

gives the probability distribution if ℱ is considered as a function of the order parameter, q. Therefore the most probable order parameter is determined by the requirement ℱ = min! Apparently the minima of (6.173) can be discussed exactly as in the case of the potential V(q). We investigate the position of those minima as a function of the coefficient α. In the Landau theory this coefficient is assumed in the form

\alpha = a(T - T_c) \quad (a > 0), (6.175)

i.e., it changes its sign at the critical temperature T = T_c. We therefore distinguish between the two regions T > T_c and T < T_c (compare Table 6.1). For T > T_c, α > 0, and the minimum of ℱ (or V) lies at q = q₀ = 0. As the entropy is connected with the free energy by the formula (cf. (3.93))

S = -\frac{\partial \mathscr{F}(q, T)}{\partial T}, (6.176)

we obtain in this region, T > T_c,

S = S_0 = -\frac{\partial \mathscr{F}(0, T)}{\partial T}. (6.176a)

The second derivative of ℱ with respect to temperature yields the specific heat (besides the factor T):

C = T\Big(\frac{\partial S}{\partial T}\Big), (6.177)

which, using (6.176a), gives

C = T\,\frac{\partial S_0}{\partial T}. (6.177a)

Now we perform the same procedure for the ordered phase T < T_c, i.e., α < 0. This yields a new equilibrium value q₁ and a new entropy, as exhibited in Table 6.1.


Table 6.1. Landau theory of second-order phase transition

Distribution function f = 𝒩 exp(−ℱ/k_B T),
free energy ℱ(q, T) = ℱ(0, T) + (α/2)q² + (β/4)q⁴,
α ≡ α(T) = a(T − T_c).

State | Disordered | Ordered
Temperature | T > T_c | T < T_c
"External" parameter | α > 0 | α < 0
Most probable order parameter q₀: f(q₀) = max!, ℱ = min! | q₀ = 0 | q₀ = ±(−α/β)^{1/2}, broken symmetry
Entropy S = −∂ℱ(q₀, T)/∂T | S₀ = −∂ℱ(0, T)/∂T | S₀ + (a²/2β)(T − T_c); continuous at T = T_c (α = 0)
Specific heat C = T(∂S/∂T) | T(∂S₀/∂T) | T(∂S₀/∂T) + (a²/2β)T; discontinuous at T = T_c (α = 0)

One may readily check that S is continuous at T = T_c, i.e., for α = 0. However (consider the last row of Table 6.1), when we calculate the specific heat we obtain two different expressions above and below the critical temperature and thus a discontinuity at T = T_c.
This phenomenon is called a phase transition of second order because the second derivative of the free energy is discontinuous. On the other hand, the entropy itself is continuous, so that this transition is also referred to as a continuous phase transition. In statistical physics one also investigates the temporal change of the order parameter. Usually, in a more or less phenomenological manner, one assumes that q obeys an equation of the form

\dot q = -\frac{\partial \mathscr{F}}{\partial q}, (6.178)

which in the case of the potential (6.173) takes the explicit form

\dot q = -\alpha q - \beta q^3, (6.179)

which we have met in various examples in our book in different contexts. For simplicity, we have omitted a constant factor in front of ∂ℱ/∂q. For α → 0 we observe a phenomenon called critical slowing down, because the "particle" with coordinate q falls down the potential slope more and more slowly. Furthermore symmetry breaking occurs, which we have already encountered in Section 5.1. The critical slowing down is associated with a soft mode (cf. Sect. 5.1). In statistical mechanics one often includes a fluctuating force in (6.179) in analogy to Sect. 6.1. For α → 0 critical fluctuations arise: since the restoring force acts only via higher powers of q, the fluctuations of q(t) become considerable.
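The content of Table 6.1 can be reproduced by brute force: minimizing ℱ over q on a grid yields q₀ = 0 above T_c and q₀ = ±(−α/β)^{1/2} below. A minimal sketch (Python; a = β = T_c = 1 are illustrative assumptions):

```python
# Landau free energy F(q) = (alpha/2) q^2 + (beta/4) q^4, alpha = a*(T - Tc);
# the constant F(0, T) is irrelevant for the position of the minimum.
a, beta, Tc = 1.0, 1.0, 1.0

def q0(T):
    alpha = a * (T - Tc)
    F = lambda q: 0.5 * alpha * q * q + 0.25 * beta * q ** 4
    qs = [i * 1e-3 for i in range(2001)]   # q >= 0 branch, step 1e-3
    return min(qs, key=F)

print(q0(1.5), q0(0.5))   # 0 above Tc, about sqrt(-alpha/beta) below Tc
```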
We now turn to the case where the free energy has the form

\mathscr{F}(q, T) = \alpha \frac{q^2}{2} + \gamma \frac{q^3}{3} + \beta \frac{q^4}{4} (6.180)


Fig. 6.12. Broken symmetry in visual perception. When focussing the attention on the center and interpreting it as the foreground of the picture, a vase is seen, otherwise two faces

(β and γ positive, but α may change its sign according to (6.175)). When we change the temperature T, i.e., the parameter α, we pass through a sequence of potential curves exhibited in Figs. 6.7a-d. Here we find the following situation.
When lowering the temperature, the local minimum first remains at q = 0. When lowering the temperature further we obtain the potential curve of Fig. 6.7d, i.e., now the "particle" may fall down from q = 0 to the new (global) minimum of ℱ at q₁. The entropies of the two states, q₀ and q₁, differ. This phenomenon is called a "first order phase transition" because the first derivative of ℱ is discontinuous. Since the entropy S is discontinuous, this transition is also referred to as a discontinuous phase transition. When we now increase the temperature we pass through the figures in the sequence d → a. It is apparent that the system stays at q₁ longer than it had done before when going in the inverse direction of temperature. Quite obviously hysteresis is present, Fig. 6.13 (consider also Figs. 6.7a-d).
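The hysteresis loop can be traced numerically by relaxing q according to (6.178) while slowly sweeping α down and back up; the jumps between q ≈ 0 and q₁ occur at different α values in the two directions. A minimal sketch (Python; γ = 3, β = 1, the sweep protocol, and the small negative seed that fixes the branch choice are all illustrative assumptions):

```python
# V(q) = alpha*q^2/2 + gamma*q^3/3 + beta*q^4/4; steepest descent per (6.178).
gamma, beta = 3.0, 1.0
dV = lambda q, al: al * q + gamma * q * q + beta * q ** 3

def relax(q, al, eta=2e-3, steps=20_000):
    for _ in range(steps):
        q -= eta * dV(q, al)
    return q

alphas = [2.5 - 0.05 * i for i in range(61)]   # sweep 2.5 -> -0.5
down, up, q = [], [], -1e-3
for al in alphas:                              # decreasing alpha
    q = relax(min(q, -1e-3), al)               # tiny seed keeps the branch
    down.append((al, q))
for al in reversed(alphas):                    # increasing alpha
    q = relax(q, al)
    up.append((al, q))

jump_down = next(al for al, qv in down if qv < -1.0)
jump_up = next(al for al, qv in up if qv > -1.0)
print(jump_down, jump_up)   # the two jumps occur at different alpha values
```

The asymmetry of the two jump points is the hysteresis of Fig. 6.13c.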
Our above considerations must be taken with a grain of salt. We certainly can
and will use the nomenclature connected with phase transitions as indicated above.
Indeed these considerations apply to many cases of nonequilibrium systems as we
shall substantiate in our later chapters. However, it must clearly be stated that the
Landau theory of phase transitions (Table 6.1) was found inadequate for phase
transitions of systems in thermal equilibrium. Here specific singularities of the
specific heat, etc., occur at the phase transition point which are described by so-
called critical exponents. The experimentally observed critical exponents do in
general not agree with those predicted by the Landau theory. A main reason for
this consists in an inadequate treatment of fluctuations as will transpire in the
following part. These phenomena are nowadays successfully treated by Wilson's
renormalization group techniques. We shall not enter into this discussion but
rather interpret several nonequilibrium transitions in the sense of the Landau theory
in regions where it is applicable.
In the rest of this chapter¹ we want to elucidate especially what the discontinuous transition of the specific heat means for nonequilibrium systems. As we will show by explicit examples in subsequent chapters, the action of a macroscopic system can be described by its order parameter q (or a set of them). In many cases a measure for that action is q². Adopting (6.174) as probability distribution, the average value of q² is defined by
1 This part is somewhat technical and can be skipped when reading this book for the first time.


Fig. 6.13a-c. Behavior of a system at a first order phase transition. To discuss Figs. a-c, first have a look at Figs. 6.7a-d on page 170. When the potential curves are deformed passing from Fig. 6.7a to Fig. 6.7d, we obtain first one extremum (minimum) of V, then three, and finally only one again.
Fig. 6.13a shows how the coordinate q₀ of these extrema varies when the parameter determining the shape of the potential, V, changes. This parameter is in Fig. 6.13 taken as (negative) temperature. Evidently, when passing in Fig. 6.7 from a) to d), the system stays at q₀ ≈ 0 until the situation d) is reached, where the system jumps to a new equilibrium value at the absolute minimum of V. On the other hand, when now passing from 6.7d to 6.7a the system remains first at q₀ ≠ 0 and jumps only in the situation 6.7b again to q₀ ≈ 0. These jumps are evident in Fig. 6.13c, where we have plotted the coordinate q₀ of the actually realized state of the system. Fig. 6.13b represents the corresponding variation of the entropy.

Fig. 6.14. Hysteresis effect in visual perception. Look at the picture first from upper left to lower right and then in the opposite direction. Note that perception switches at different points depending on the direction

⟨q^2⟩ = \frac{\int q^2 \exp(-\tilde V)\, dq}{\int \exp(-\tilde V)\, dq}, (6.181)

where we have now absorbed Q into V (cf. (6.116)), writing Ṽ = 2V/Q. We assume Ṽ in the form

\tilde V = \alpha q^2 + \frac{\beta}{2} q^4. (6.182)


Table 6.2. Phase transition analogy

Physical system in thermal equilibrium | Synergetic system with stationary distribution function f(q)
Order parameters q | Order parameters q
Distribution function f = 𝒩 exp(−ℱ/k_B T) | Distribution function f(q) = exp(−Ṽ), where Ṽ is defined by Ṽ = −ln f
Temperature | External parameters, e.g., power input
Entropy | Action (e.g., power output)
Specific heat | Change of action with change of external parameter: efficiency

Apparently we can write q² in the form

q^2 = \frac{\partial \tilde V}{\partial \alpha}, (6.183)

which allows us to write (6.181) in the form

⟨q^2⟩ = \frac{\int (\partial \tilde V/\partial \alpha) \exp(-\tilde V)\, dq}{\int \exp(-\tilde V)\, dq}. (6.184)

It is a simple matter to check that an equivalent form for (6.184) is

⟨q^2⟩ = -\frac{\partial}{\partial \alpha} \ln \int \exp(-\tilde V)\, dq. (6.185)
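The equivalence of (6.184) and (6.185) is easily confirmed numerically by comparing ⟨q²⟩ with a central-difference derivative of ln ∫ exp(−Ṽ) dq. A minimal sketch (Python; the form Ṽ = αq² + (β/2)q⁴ with β = 1, α = 0.7 is an illustrative assumption):

```python
import math

beta = 1.0

def Z(alpha, m=4001, span=6.0):
    # Z(alpha) = ∫ exp(-V~) dq on a grid, V~ = alpha*q^2 + (beta/2)*q^4
    dq = 2 * span / (m - 1)
    return dq * sum(math.exp(-(alpha * q * q + 0.5 * beta * q ** 4))
                    for q in (-span + dq * i for i in range(m)))

def avg_q2(alpha, m=4001, span=6.0):
    # direct evaluation of <q^2> as in (6.184)
    dq = 2 * span / (m - 1)
    num = den = 0.0
    for i in range(m):
        q = -span + dq * i
        w = math.exp(-(alpha * q * q + 0.5 * beta * q ** 4))
        num += q * q * w
        den += w
    return num / den

alpha, eps = 0.7, 1e-5
lhs = avg_q2(alpha)
rhs = -(math.log(Z(alpha + eps)) - math.log(Z(alpha - eps))) / (2 * eps)
print(lhs, rhs)   # the two expressions agree
```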

For the evaluation of the integral of (6.185) we assume that exp(−Ṽ) is strongly peaked at q = q₀. If there are several peaks (corresponding to different minima of Ṽ(q)) we assume that only one state q = q₀ is occupied. This is an ad hoc assumption to take into account symmetry breaking in the "second-order phase transition", or to select one of the two local minima of different depth in the "first-order phase transition". In both cases the assumption of a strongly peaked distribution implies that we are still "sufficiently far" away from the phase transition point, α = α_c. We expand the exponent around that minimum value keeping only terms of second order,

\tilde V(q) \approx \tilde V(q_0) + \tfrac{1}{2} \tilde V''(q_0)(q - q_0)^2. (6.186)

Splitting the logarithm into two factors and carrying out the derivative with respect to α in the first factor we obtain

⟨q^2⟩ = \frac{\partial \tilde V(q_0)}{\partial \alpha} - \frac{\partial}{\partial \alpha} \ln \int \exp\{-\tfrac{1}{2}\tilde V''(q_0)(q - q_0)^2\}\, dq. (6.187)

The last integral may be easily performed, so that our final result reads

⟨q^2⟩ = \frac{\partial \tilde V(q_0)}{\partial \alpha} + \frac{1}{2} \frac{\partial}{\partial \alpha} \ln \tilde V''(q_0). (6.188)

A comparison of (6.188) with (6.176) by means of the correspondence α ↔ T reveals that the first term on the rhs of (6.188) is proportional to the rhs of (6.176). Therefore the entropy S may be put in parallel with the output activity ⟨q²⟩. The discontinuity of (6.177) indicates a pronounced change of slope (compare Fig. 1.12). The second part in (6.188) stems from fluctuations. They are significant in the neighborhood of the transition point α = α_c. Thus the Landau theory can be interpreted as a theory in which mean values have been replaced by the most probable values. It should be noted that at the transition point the behavior of ⟨q²⟩ is not appropriately described by (6.187), i.e., the divergence inherent in the second part of (6.188) does not really occur, but is a consequence of the method of evaluating (6.185). For an illustration the reader should compare the behavior of the specific heat, which is practically identical with the laser output below and above threshold.
The Landau theory of second order phase transitions is very suggestive for
finding an approximate (or sometimes even exact) stationary solution of a Fokker-
Planck equation of one or several variables q = (ql ... q.). We assume that for a
parameter IX > 0 the maximum of the stationary solution f(q) lies at q = O. To
study the behavior of f(q) at a critical value of ri, we proceed as follows:
1) We writef(q) in the form

f(q) = ,AI" exp ( - p(q)),


(,AI": normalization factor)

2) Following Landau, we expand p(q) into a Taylor series around q = 0 up to


fourth order

P(q) = p(O) + II' Pl'ql' + ;! Il'v Pl'vqllqv

1 1
+ 3! Illvl Pllvlqllqvql + 4! IllvAK PllvlKql'qvqlqK' (6.189)

The subscripts of Pindicate differentiation of Pwith respect to qll' q" ... at q = O.


3) It is required that Φ(q) is invariant under all transformations of q which leave
the physical problem invariant. By this requirement, relations among the coeffi-
cients Φ_μ, Φ_{μν}, …, can be established so that the number of these expansion
parameters can be considerably reduced. (The adequate methods to deal with this

186 6. Chance and Necessity

problem are provided by group theory.) If the stationary solution of the Fokker-
Planck equation is unique, this symmetry requirement with respect to f(q) (or Φ(q))
can be proven rigorously.

An example:

Let a problem be invariant with respect to the inversion q → −q. Then L in
ḟ = Lf is invariant, and so is f(q) (with ḟ = 0). Inserting the postulate f(q) = f(−q),
and thus Φ(q) = Φ(−q), into (6.189) yields Φ_μ = 0 and Φ_{μνλ} = 0.
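This symmetry argument is easy to see numerically. The following sketch (with an arbitrarily chosen even potential, not taken from the text) estimates the Taylor coefficients of an inversion-symmetric Φ(q) at q = 0 by central differences; the odd ones vanish identically:

```python
# Toy potential with inversion symmetry Phi(q) = Phi(-q)
# (coefficients alpha, beta chosen arbitrarily for illustration):
alpha, beta = 2.0, 1.0

def phi(q):
    return 0.5 * alpha * q**2 + 0.25 * beta * q**4

h = 1e-3
# Central-difference estimates of the derivatives of Phi at q = 0
d1 = (phi(h) - phi(-h)) / (2 * h)                               # Phi'(0)
d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2                   # Phi''(0)
d3 = (phi(2*h) - 2*phi(h) + 2*phi(-h) - phi(-2*h)) / (2 * h**3) # Phi'''(0)

print(d1, d2, d3)  # odd derivatives vanish; d2 is close to alpha
```

Since an even function satisfies φ(h) = φ(−h) exactly, the first and third differences cancel term by term, which is the finite-difference image of Φ_μ = 0 and Φ_{μνλ} = 0.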

6.8 Phase Transition Analogy in Continuous Media:


Space-Dependent Order Parameter
Let us again use the ferromagnet as example. In the foregoing section we introduced
its total magnetization M. Now we subdivide the magnet into regions (or "cells")
still containing many elementary magnets, so that we can still speak of a "macro-
scopic" magnetization in each cell. On the other hand, we choose the cells small
compared to macroscopic dimensions, say 1 cm. Denoting the center coordinate
of a cell by x, we thus are led to introduce a space-dependent magnetization, M(x).
Generalizing this idea we introduce a space-dependent order parameter q(x) and
let the free energy ℱ depend on the q's at all positions. In analogy to (6.171) we may
expand ℱ({q(x)}, T) into a power series of all q(x)'s. Confining our present con-
siderations to inversion-symmetric problems we retain only even powers of q(x).
We discuss the form of this expansion. In a first approximation we assume that the
cells, x, do not influence each other. Thus ℱ can be decomposed into a sum (or an
integral in a continuum approximation) of contributions of each cell. In a second
step we take the coupling between neighboring cells into account by a term describ-
ing an increase of free energy if the magnetizations M(x), or, generally, q(x) in
neighboring cells differ from each other. This is achieved by a term γ(∇q(x))².
Thus we represent ℱ in the form of the famous Ginzburg-Landau functional:

(6.190)

In the frame of a phenomenological approach the relations (6.174) and (6.178) are
generalized as follows:
Distribution function:

f({q(x)}) = 𝒩 exp[−ℱ/k_B T] (6.191)

with ℱ defined by (6.190).


Equation for the relaxation of q(x):

q̇(x) = −δℱ/δq(x), (6.192)


where q is now treated as a function of space, x, and time, t. (Again, a constant factor
on the rhs is omitted.) Inserting (6.190) into (6.192) yields the time-dependent
Ginzburg-Landau equation

q̇ = −αq − βq³ + γΔq + F. (6.193)

The typical features of it are:

a linear term, −αq, where the coefficient α changes its sign at a certain "threshold",
T = T_c,
a nonlinear term, −βq³, which serves for a stabilization of the system,
a "diffusion term", γΔq, where Δ is the Laplace operator.
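The interplay of these three terms can be seen by integrating the deterministic part of (6.193) on a one-dimensional periodic grid. The following sketch (all parameters are illustrative; the fluctuating force is replaced by a small random initial condition) shows the linear term amplifying a tiny seed above threshold (α < 0) until the cubic term saturates it near the Landau value √(−α/β):

```python
import random

random.seed(1)

# Deterministic part of (6.193): dq/dt = -alpha*q - beta*q^3 + gamma*q_xx.
# Above threshold (alpha < 0) small deviations grow and saturate near
# q = +/- sqrt(-alpha/beta) = +/- 1 for the values chosen here.
alpha, beta, gamma = -1.0, 1.0, 1.0
n, dx, dt = 64, 1.0, 0.01

q = [1e-2 * (random.random() - 0.5) for _ in range(n)]  # tiny seed field

for _ in range(4000):  # explicit Euler up to t = 40
    lap = [(q[(i + 1) % n] - 2 * q[i] + q[(i - 1) % n]) / dx**2
           for i in range(n)]
    q = [q[i] + dt * (-alpha * q[i] - beta * q[i]**3 + gamma * lap[i])
         for i in range(n)]

amp = max(abs(v) for v in q)
print(amp)  # saturates close to sqrt(-alpha/beta) = 1
```

The diffusion term meanwhile smooths spatial variations; with noise retained one would obtain the fluctuating field whose correlations are discussed below.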
Finally, to take fluctuations into account, a fluctuating force F(x, t) is added ad hoc.
In Sections 7.6-7.8 we will develop a theory yielding equations of the type (6.193)
or generalizations of it. If the fluctuating forces F are Gaussian and Markovian
with zero mean and

⟨F(x′, t′)F(x, t)⟩ = Qδ(x − x′)δ(t − t′), (6.194)

the Langevin equation (6.193) is equivalent to the functional Fokker-Planck
equation:

ḟ = ∫dⁿx { (δ/δq(x)) (αq(x) + βq³(x) − γΔq(x)) + (Q/2) δ²/δq(x)² } f. (6.195)

Its stationary solution is given (generalizing (6.116)) by

(6.196)

A direct solution of the nonlinear equation (6.193) or of the time-dependent Fokker-
Planck equation (6.195) appears rather hopeless, since computers are necessary
for the solution of the corresponding equations even with an x-independent q.
Thus we first study for α > 0 the linearized equation (6.193), i.e.,

q̇ = −αq + γΔq + F. (6.197)

It can be easily solved by Fourier analyzing q(x, t) and F(x, t). Adopting the cor-
relation function (6.194), we may calculate the two-point correlation function

⟨q(x′, t′)q(x, t)⟩. (6.198)

We quote one case: one dimension, equal times, t′ = t, but different space points:

⟨q(x′, t)q(x, t)⟩ = Q/(αγ)^{1/2} exp(−(α/γ)^{1/2}|x′ − x|). (6.199)

The factor of |x′ − x| in the exponential function has the meaning of (length)⁻¹.


We therefore put l_c = (α/γ)^{−1/2}. Since (6.199) describes the correlation between
two space points, l_c is called the correlation length. Apparently

l_c → ∞ as α → 0,

at least in the linearized theory. The exponent μ in l_c ∝ α^μ is called a critical exponent.
In our case, μ = −1/2. The correlation function ⟨q(x′, t)q(x, t)⟩ has been evaluated
for the nonlinear case by a computer calculation (see end of this chapter). In
many practical cases the order parameter q(x) is a complex quantity. We denote it
by ξ(x). The former equations must then be replaced by the following:
Langevin equation

ξ̇ = −αξ − β|ξ|²ξ + γΔξ + F. (6.200)

Correlation function of fluctuating forces

⟨FF⟩ = 0, ⟨F*F*⟩ = 0, (6.201)

⟨F*(x′, t′)F(x, t)⟩ = Qδ(x − x′)δ(t − t′). (6.202)

Fokker-Planck equation

(6.203)

Stationary solution of Fokker-Planck equation

(6.204)

A typical correlation function reads, e.g.,

⟨ξ*(x′, t′)ξ(x, t)⟩. (6.205)

For equal times, t = t′, the correlation functions (6.198) and (6.205) have been
determined for the nonlinear one-dimensional case (i.e., β ≠ 0) by using path
integrals and a computer calculation. To treat the real q and the complex ξ in the
same way, we put

q(x) or ξ(x) ≡ Ψ(x).

It can be shown that the amplitude correlation functions (6.198) and (6.205) can
be written, in a good approximation, as

⟨Ψ*(x′, t)Ψ(x, t)⟩ = ⟨|Ψ|²⟩ exp(−l₁⁻¹|x − x′|) (6.206)


Fig. 6.15. ⟨|Ψ|²⟩ versus "pump parameter" (t − 1)/M (compare text); M = 2(βQ/(2l₀αγ))^{2/3}, l₀ = (2γ/α₀)^{1/2}. (Redrawn after Scalapino et al.)

Fig. 6.16. Inverse correlation lengths l₁ and l₂ for real (dashed) and complex (solid) fields. Dashed curve: linearized theory ("mean field theory"). (Redrawn after Scalapino et al.)

i.e., they can again be expressed by a single correlation length, l₁. ⟨|Ψ|²⟩ is the
average of |Ψ|² over the steady-state distribution. For reasons which will transpire
later we put α = α₀(1 − t). (For illustration we mention a few examples. In super-
conductors, t = T/T_c, where T: absolute temperature, T_c: critical temperature,
α₀ < 0; in lasers, t = D/D_c, where D: (unsaturated) atomic inversion, D_c: critical
inversion, α₀ > 0; in chemical reactions: t = b/b_c, where b: concentration of a
certain chemical, b_c: critical concentration, etc.) We further define the length
l₀ = (2γ/α₀)^{1/2}. Numerical results near threshold are exhibited in Figs. 6.15 and
6.16 for ⟨|Ψ|²⟩ and l₁⁻¹, l₂⁻¹, respectively. The intensity correlation function can be
(again approximately) expressed as
(again approximately) expressed as

<1lJ'(x', tWIlJ'(x, tW> - <1lJ'(x', t)1 2 ><IlJ'(x, tW>


= {<1lJ'1 4 >- <1lJ'1 2 / } exp (-/21Ix - x'l). (6.207)

Numerical results for Ii 1 near threshold are exhibited in Fig. 6.16.
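For the linearized problem (6.197), the equal-time correlation (6.199) can be checked by hand: each Fourier mode of q is an Ornstein-Uhlenbeck process, and quadrature of the resulting spectrum reproduces the exponential decay with correlation length (γ/α)^{1/2}. (A sketch; the overall prefactor, here Q/(4(αγ)^{1/2}), depends on the normalization convention chosen for Q.)

```python
import math

# Linearized 1D model q_t = -alpha*q + gamma*q_xx + F with
# <F F> = Q delta(x - x') delta(t - t').  Each Fourier mode relaxes at rate
# alpha + gamma*k^2, giving the stationary spectrum Q / (2*(alpha + gamma*k^2));
# the equal-time correlation is its inverse Fourier transform.
alpha, gamma, Q = 1.0, 1.0, 1.0

def corr(dx, K=500.0, steps=500000):
    # trapezoidal quadrature of (1/2pi) Int dk cos(k dx) Q/(2(alpha+gamma k^2))
    h = 2 * K / steps
    s = 0.0
    for i in range(steps + 1):
        k = -K + i * h
        w = 0.5 if i in (0, steps) else 1.0
        s += w * math.cos(k * dx) * Q / (2 * (alpha + gamma * k**2))
    return s * h / (2 * math.pi)

lc = math.sqrt(gamma / alpha)           # correlation length
c0, c1, c2 = corr(0.0), corr(1.0), corr(2.0)
for dx, c in ((0.0, c0), (1.0, c1), (2.0, c2)):
    closed = Q / (4 * math.sqrt(alpha * gamma)) * math.exp(-dx / lc)
    print(dx, c, closed)
```

The numerical values match the closed exponential form, and halving α visibly stretches the decay, the μ = −1/2 scaling of the correlation length.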

7. Self-Organization
Long-Living Systems Slave Short-Living Systems

In this chapter we come to our central topic, namely, organization and self-
organization. Before we enter into the mathematical treatment, let us briefly discuss
what we understand by these two words in ordinary life.

a) Organization
Consider, for example, a group of workers. We then speak of organization or,
more exactly, of organized behavior, if each worker acts in a well-defined way in
response to external orders, e.g., given by a boss. It is understood that the thus-regulated
behavior results in a joint action to produce some product.

b) Self-Organization
We would call the same process as being self-organized if there are no external
orders given but the workers work together by some kind of mutual understanding,
each one doing his job so as to produce a product.
Let us now try to cast this rather vague description of what we understand by
organization or self-organization into rigorous mathematical terms. We have to
keep in mind that we have to develop a theory applicable to a large class of different
systems comprising not only the above-mentioned case of sociological systems but,
still more, physical, chemical, and biological systems.

7.1 Organization
The above-mentioned orders of the boss are the cause for the subsequent action of
the workers. Therefore we have to express causes and actions in mathematical
terms. Consider to this end an example from mechanics: skiers pulled uphill by a
ski lift. Here the causes are the forces acting on the skiers. The action
consists in a motion of the skiers. Quite another example comes from chemistry:
Consider a set of vessels into which different chemicals are poured continuously.
This input causes a reaction, i.e., the output of new chemicals. At least in these
examples we are able to express causes and actions quantitatively, for example
by the velocity of skiers or the concentrations of the produced chemicals.
Let us discuss what kind of equations we will have for the relations between
causes and actions (effects). We confine our analysis to the case where the action
(effect), which we describe by a quantity q, changes in a small time interval Δt by
an amount proportional to Δt and to the size F of the cause. Therefore, mathemat-
ically, we consider only equations of the type

q̇(t) = F(q(t); t).

Furthermore we require that without external forces there is no action or no out-
put. In other words, we wish q = 0 in the absence of external forces. Furthermore,
we require that the system comes back to the state q = 0 when the force is switched
off. Thus we require that the system is stable and damped for F = 0. The simplest
equation of this type is

q̇ = −γq, (7.1)

where γ is a damping constant. When an external "force" F is added, we obtain the
simple equation

q̇ = −γq + F(t). (7.2)

In a chemical reaction, F will be a function of the concentration of chemical


reactants. In the case of population dynamics, F could be the food supply, etc.
The solution of (7.2) can be written in the form

q(t) = ∫₀ᵗ e^{−γ(t−τ)} F(τ) dτ, (7.3)

where we shall neglect here and later on transient effects. (7.3) is a simple example
of the following relation: The quantity q represents the response of the system
to the applied force F(τ). Apparently the value of q at time t depends
not only on the "orders" given at time t but also on those given in the past. In the following
we wish to consider a case in which the system reacts instantaneously, i.e., in which
q(t) depends only on F(t). For further discussion we put, for example,

F(t) = a e^{−δt}. (7.4)

The integral (7.3) with (7.4) can immediately be performed, yielding

q(t) = (a/(γ − δ)) e^{−δt}. (7.5)

With the help of formula (7.5) we can quantitatively express the condition under which
q acts instantaneously. This is the case if γ ≫ δ, namely

q(t) ≈ (a/γ) e^{−δt} ≡ (1/γ)F(t), (7.6)

or, in other words, the time constant t₀ = 1/γ inherent in the system must be much
shorter than the time constant t′ = 1/δ inherent in the orders. We shall refer to this
assumption, which turns out to be of fundamental importance for what follows, as
the "adiabatic approximation". We would have found the same result (7.6) if we had


put q̇ = 0 in eq. (7.2) from the very beginning, i.e., if we had solved the equation

0 = −γq + F(t). (7.7)
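A quick numerical check of the adiabatic approximation (7.2)-(7.7), with toy parameters assumed only for illustration: for F(t) = a·e^{−δt} and γ ≫ δ, the exact response and the instantaneous response F(t)/γ agree to order δ/γ once the fast transient e^{−γt} has decayed.

```python
import math

# dq/dt = -gamma*q + F(t) with F(t) = a*exp(-delta*t) and gamma >> delta.
gamma, delta, a = 100.0, 1.0, 1.0

def q_exact(t, q0=0.0):
    # full solution of (7.2) with initial value q(0) = q0,
    # including the fast transient exp(-gamma*t)
    return (q0 * math.exp(-gamma * t)
            + a * (math.exp(-delta * t) - math.exp(-gamma * t)) / (gamma - delta))

for t in (0.1, 0.5, 1.0):
    adiabatic = a * math.exp(-delta * t) / gamma   # F(t)/gamma, eq. (7.6)/(7.7)
    print(t, q_exact(t), adiabatic)
```

For t ≳ a few times 1/γ the two columns differ only by about δ/γ = 1%, which is exactly the accuracy the text claims for the slaved response.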

We generalize our considerations in a way which is applicable to quite a number
of practical systems. We consider a set of subsystems distinguished by an index μ.
Each subsystem may be described by a whole set of variables q_{μ,1}, q_{μ,2}, …. Further-
more a whole set of "forces" F₁, …, F_m is admitted. We allow for a coupling
between the q's and for coupling coefficients depending on the external forces F_j.
Finally the forces may occur in an inhomogeneous term as in (7.2), where this term
may be a complicated nonlinear function of the F_j's. Thus, written in matrix form,
our equations are

q̇_μ = Aq_μ + B(F)q_μ + C(F), (7.8)

where A and B are matrices independent of q_μ. We require that all matrix elements
of B which are linear or nonlinear functions of the F's vanish when F tends to zero.
The same is assumed for C. To secure that the system (7.8) is damped in the absence
of external forces (or, in other words, that the system is stable) we require that the
eigenvalues of the matrix A all have negative real parts,

Re λ < 0. (7.9)

Incidentally this guarantees the existence of the inverse of A, i.e., the determinant
is unequal to zero,

det A ≠ 0. (7.10)

Furthermore, on account of our assumptions on B and C, we are assured that the
determinant

det(A + B(F)) (7.11)

does not vanish, provided that the F's are small enough. Though the set of equa-
tions (7.8) is linear in q_μ, a general solution is still a formidable task. However,
within an adiabatic elimination technique we can immediately provide an explicit
and unique solution of (7.8). To this end we assume that the F's change much
more slowly than the free system q_μ. This allows us, in exactly the same way as
discussed before, to put

q̇_μ = 0, (7.12)

so that the differential equations (7.8) reduce to simple algebraic equations which
are solved by

q_μ = −(A + B(F))⁻¹ C(F). (7.13)


Note that A and B are matrices whereas C is a vector. For practical applications
an important generalization of (7.8) is given by allowing for different quantities
A, B and C for different subsystems, i.e., we make the replacements

A → A_μ, B → B_μ, C → C_μ (7.14)

in (7.13). The response q_μ to the F's is unique and instantaneous and is in general a
nonlinear function of the F's.
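The adiabatic response (7.13) is easy to verify on a toy 2×2 system. In the sketch below all matrices are invented for illustration; B and C vanish with F, as the text requires, and A has negative eigenvalues so that the free system is damped.

```python
# Toy 2x2 illustration of the adiabatic response q = -(A + B(F))^{-1} C(F).

def mat_vec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def solve2(M, rhs):
    # apply the explicit 2x2 inverse of M to rhs
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [( M[1][1]*rhs[0] - M[0][1]*rhs[1]) / det,
            (-M[1][0]*rhs[0] + M[0][0]*rhs[1]) / det]

A = [[-2.0, 1.0], [0.0, -1.0]]           # eigenvalues -2, -1: Re(lambda) < 0

def B(F):                                 # coupling switched on by the force
    return [[0.0, 0.5 * F], [0.5 * F, 0.0]]

def C(F):                                 # inhomogeneous term, nonlinear in F
    return [F, F**2]

F = 0.3
M = [[A[i][j] + B(F)[i][j] for j in range(2)] for i in range(2)]
q = solve2(M, [-c for c in C(F)])        # q = -(A + B(F))^{-1} C(F)

residual = [mat_vec(M, q)[i] + C(F)[i] for i in range(2)]
print(q, residual)                        # residual ~ 0, i.e. (A + B)q + C = 0
```

The residual confirms that q solves the algebraic system obtained from (7.8) with q̇_μ = 0, and the dependence of q on F is indeed nonlinear through both B(F) and C(F).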
Let us consider some further generalizations of (7.8) and discuss its usefulness.
a) (7.8) could be replaced by equations containing higher-order derivatives of q_μ
with respect to time. Since equations of higher order can always be reduced to a
set of equations of lower order, e.g., of first order, this case is already contained in
(7.8). b) B and C could depend on F's of earlier times. This case presents no diffi-
culty here provided we still stick to the adiabatic elimination technique, but it leads
to an enormous difficulty when we treat the case of self-organization below. c) The
right-hand side of (7.8) could be a nonlinear function of q_μ. This case may appear
in practical applications. However, we can exclude it if we assume that the systems
μ are heavily damped. In this case the q's are relatively small and the rhs of (7.8)
can be expanded in powers of q, where in many cases one is allowed to keep only
linear terms. d) A very important generalization must be included later. (7.8) is a
completely causal equation, i.e., there are no fluctuations allowed for. In many
practical systems fluctuations play an important role, however.
In summary we can state the following: To describe organization quantitatively
we will use (7.8) in the adiabatic approximation. These equations describe a fairly wide class
of responses of physical, chemical, biological and, as we will see later on, socio-
logical systems to external causes. A note should be added for physicists. A good
deal of present-day physics is based on the analysis of response functions in the
nonadiabatic domain. Certainly further progress in the theory of self-organization
(see below) will be made when such time-lag effects are taken into account.

7.2 Self-Organization
A rather obvious step to describe self-organization consists in including the external
forces as parts of the whole system. In contrast to the above-described cases,
however, we must not treat the external forces as given fixed quantities but rather
as obeying by themselves equations of motion. In the simplest case we have only
one force and one subsystem. Identifying now F with q1 and the former variable
q with q₂, an explicit example of such equations is

q̇₁ = −γ₁q₁ − aq₁q₂, (7.15)

q̇₂ = −γ₂q₂ + bq₁². (7.16)

Again we assume that the system (7.16) is damped in the absence of the system


(7.15), which requires γ₂ > 0. To establish the connection between the present case
and the former one we want to secure the validity of the adiabatic technique. To this
end we require

γ₂ ≫ |γ₁|. (7.17)

Though γ₁ appears in (7.15) with the minus sign we shall later allow for both signs,
γ₁ ≷ 0. On account of (7.17) we may solve (7.16) approximately by putting q̇₂ = 0,
which results in

q₂ = (b/γ₂) q₁². (7.18)

Because (7.18) tells us that the system (7.16) follows immediately the system (7.15),
the system (7.16) is said to be slaved by the system (7.15). However, the slaved
system reacts on the system (7.15). We can substitute q₂ (7.18) in (7.15). Thus we
obtain the equation

q̇₁ = −γ₁q₁ − (ab/γ₂) q₁³, (7.19)

which we have encountered before in Section 5.1. There we have seen that two
completely different kinds of solutions occur depending on whether γ₁ > 0 or
γ₁ < 0. For γ₁ > 0, q₁ = 0, and thus also q₂ = 0, i.e., no action occurs at all.
However, if γ₁ < 0, the steady state solution of (7.19) reads

q₁ = ±(|γ₁|γ₂/(ab))^{1/2}, (7.20)

and consequently q₂ ≠ 0 according to (7.18). Thus the system, consisting of the two
subsystems (7.15) and (7.16), has internally decided to produce a finite quantity q₂,
i.e., nonvanishing action occurs. Since q₁ ≠ 0 or q₁ = 0 are a measure of whether
action occurs or not, we could call q₁ an action parameter.
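A direct Euler integration of (7.15), (7.16) with γ₂ ≫ |γ₁| (illustrative parameters, assumed here) confirms both the slaving relation (7.18) and the steady state (7.20):

```python
import math

# Euler integration of (7.15)-(7.16) with gamma2 >> |gamma1|, gamma1 < 0.
# Expectation: q2 tracks (b/gamma2)*q1^2 (slaving), and q1 settles at
# |q1| = sqrt(|gamma1|*gamma2/(a*b)) from (7.19)/(7.20).
g1, g2, a, b = -1.0, 100.0, 1.0, 1.0
dt = 1e-4
q1, q2 = 0.1, 0.0            # small initial push for the order parameter

for _ in range(200000):      # integrate up to t = 20
    dq1 = -g1 * q1 - a * q1 * q2
    dq2 = -g2 * q2 + b * q1**2
    q1, q2 = q1 + dt * dq1, q2 + dt * dq2

q1_star = math.sqrt(-g1 * g2 / (a * b))
print(q1, q1_star)           # q1 -> sqrt(100) = 10
print(q2, b * q1**2 / g2)    # slaved mode: q2 ~ q1^2 / 100
```

During the whole transient q₂ stays glued to (b/γ₂)q₁², so the two-variable system is effectively described by the single order-parameter equation (7.19).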
For reasons which will become obvious below when dealing with complex
systems, q₁ describes the degree of order. This is the reason why we shall refer to
q₁ as "order parameter". In general we shall call variables, or, more physically
spoken, modes, "order parameters" if they slave subsystems. This example lends
itself to the following generalization: We deal with a whole set of subsystems which
again are described by several variables. For all these variables we now use a single
kind of indices running from 1 to n. For the time being we assume these equa-
tions in the form

q̇₁ = −γ₁q₁ + g₁(q₁, …, q_n),
q̇₂ = −γ₂q₂ + g₂(q₁, …, q_n),
⋮
q̇_n = −γ_n q_n + g_n(q₁, …, q_n). (7.21)

To proceed further, we imagine that we have arranged the indices in such a way


that there are now two distinct groups in which i = 1, …, m refers to modes
with small damping, which can even become unstable modes (i.e., γ ≲ 0), and an-
other group with s = m + 1, …, n referring to stable modes. It is understood
that the functions g_j are nonlinear functions of q₁, …, q_n (with no constant or
linear terms) so that in a first approximation these functions can be neglected
compared to the linear terms on the right-hand side of equations (7.21). Because

γ_i → 0, but γ_s > 0 and finite, (7.22)

i = 1, …, m; s = m + 1, …, n,

holds, we may again invoke the adiabatic approximation principle, putting q̇_s = 0.
Furthermore we assume that the |q_s|'s are much smaller than the |q_i|'s, which is
motivated by the size of the γ_s (but which must be checked explicitly in each
practical case). As a consequence we may put all q_s = 0 in g_s. This allows us to
solve the (7.21) for s = m + 1, …, n with q₁, …, q_m as given quantities:

γ_s q_s = g_s(q₁, …, q_n), s = m + 1, …, n, (7.23)

where q_{m+1}, …, q_n must be put equal to zero in g_s. Reinserting (7.23) into the first
m equations of (7.21) leads us to nonlinear equations for the q_i's alone.

(7.24)

The solutions of these equations then determine whether nonzero action of the
subsystems is possible or not. The simplest example of (7.24) leads us back to an
equation of the type (7.19) or, e.g., of the type

(7.25)

The equations (7.21) are characterized by the fact that we could group them
into two clearly distinct groups with respect to their damping, or, in other words,
into stable and (virtually) unstable variables (or "modes"). We now want to show
that self-organized behavior need not be subject to such a restriction. We rather
start now with a system in which from the very beginning the q's need not belong
to two distinct groups. We consider the following system of equations

q̇_j = h_j(q₁, …, q_n), j = 1, …, n, (7.26)

where the h_j are in general nonlinear functions of the q's. We assume that the system
(7.26) is such that it allows for a time-independent solution denoted by q_j⁰. To
understand better what follows, let us have a look at the simpler system (7.21).
There, the rhs depends on a certain set of parameters, namely, the γ's. Thus in a
more general case we shall also allow that the h_j's on the rhs of (7.26) depend on
parameters, called σ₁, …, σ_l. Let us first assume that these parameters are chosen
in such a way that the q⁰'s represent stable values. By a shift of the origin of the

coordinate system of the q's we can put the q⁰'s equal to zero. This state will be
referred to as the quiescent state in which no action occurs. In the following we put

q_j(t) = q_j⁰ + u_j(t) or q(t) = q⁰ + u(t) (7.27)

and perform the same steps as in the stability analysis (compare Sect. 5.3, where
ξ(t) is now called u(t)). We insert (7.27) into (7.26). Since the system is stable we may
assume that the u_j's remain very small so that we can linearize the equations (7.26)
(under suitable assumptions about the h_j's, which we don't formulate here ex-
plicitly). The linearized equations are written in the form

u̇_j = Σ_{j′} L_{jj′} u_{j′}, (7.28)

where the matrix element L_{jj′} depends on q⁰ and simultaneously on the parameters
σ₁, σ₂, …. In short we write instead of (7.28)

u̇ = Lu. (7.29)

(7.28) or (7.29) represent a set of first-order differential equations with constant
coefficients. Solutions can be found as in Section 5.3 in the form

u^{(μ)}(t) = u^{(μ)}(0) e^{λ_μ t}, (7.30)

where the λ_μ's are the eigenvalues of the problem

λ_μ u^{(μ)}(0) = L u^{(μ)}(0), (7.31)

and the u^{(μ)}(0) are the right-hand eigenvectors. The most general solution of (7.28) or
(7.29) is obtained by a superposition of (7.30),

u(t) = Σ_μ ξ_μ u^{(μ)}(0) e^{λ_μ t}, (7.32)

with arbitrary constant coefficients ξ_μ. We introduce left-hand eigenvectors v^{(μ)}
which obey the equation

λ_μ v^{(μ)} = v^{(μ)} L. (7.33)

Because we had assumed that the system is stable, the real part of all eigenvalues
λ_μ is negative. We now require that the decomposition (7.27) satisfies the original
nonlinear equations (7.26), so that u(t) is a function still to be determined:

u̇ = Lu + N(u). (7.34)

We have encountered the linear part Lu in (7.28), whereas N stems from the
residual nonlinear contributions. We represent the vector u(t) as a superposition
of the right-hand eigenvectors (7.30) in the form (7.32) where, however, the ξ's are
now unknown functions of time, and exp(λ_μ t) is dropped. To find appropriate
equations for the time-dependent amplitudes ξ_μ(t) we multiply (7.34) from the
left by

v^{(μ)}(0) (7.35)

and observe the orthogonality relation

v^{(μ)}(0) u^{(ν)}(0) = δ_{μν}, (7.36)

known from linear algebra. (7.34) is thus transformed into

ξ̇_μ = λ_μ ξ_μ + g_μ(ξ₁, …, ξ_n), (7.37)

where

g_μ = v^{(μ)}(0) · N. (7.38)

The lhs of (7.37) stems from the corresponding lhs of (7.34). The first term on the
rhs of (7.37) comes from the first expression on the rhs of (7.34). Similarly, N is
transformed into g. Note that g_μ is a nonlinear function of the ξ's. When we now
identify ξ_μ with q_j and λ_μ with −γ_j, we notice that the equations (7.37) have exactly
the form of (7.21), so that we can immediately apply our previous analysis. To do
this we change the parameters σ₁, σ₂, … in such a way that the system (7.28)
becomes unstable, or in other words, that one or several of the λ_μ's acquire a vanish-
ing or positive real part while the other λ's are still connected with damped modes.
The modes ξ_μ with Re λ_μ ≥ 0 then play the role of the order parameters which
slave all the other modes. In this procedure we assume explicitly that there are two
distinct groups of λ_μ's for a given set of parameter values of σ₁, …, σ_l.
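The biorthogonality relation (7.36) between left and right eigenvectors is worth seeing once in coordinates. For a small nonsymmetric matrix (chosen here only for illustration) the left and right eigenvectors differ, yet form mutually orthogonal bases:

```python
# Left/right eigenvectors of a nonsymmetric 2x2 matrix and relation (7.36).
# L has eigenvalues -2 and -1; the rows below are hand-computed and then
# verified: L u = lam u, v L = lam v, and v_mu . u_nu = delta_{mu nu}.
L = [[-2.0, 1.0], [0.0, -1.0]]
lams = [-2.0, -1.0]
right = [[1.0, 0.0], [1.0, 1.0]]     # u^(1), u^(2)
left  = [[1.0, -1.0], [0.0, 1.0]]    # v^(1), v^(2)

def mat_vec(M, x):
    return [M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1]]

def vec_mat(x, M):
    return [x[0]*M[0][0] + x[1]*M[1][0], x[0]*M[0][1] + x[1]*M[1][1]]

def dot(x, y):
    return x[0]*y[0] + x[1]*y[1]

for lam, u, v in zip(lams, right, left):
    assert mat_vec(L, u) == [lam * c for c in u]   # L u = lam u
    assert vec_mat(v, L) == [lam * c for c in v]   # v L = lam v

gram = [[dot(v, u) for u in right] for v in left]
print(gram)  # identity matrix: the biorthogonality relation (7.36)
```

Projecting (7.34) onto v^{(μ)} with this relation is exactly what isolates the single amplitude equation (7.37) for each mode.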
In many practical applications (see Sect. 8.2) only one or very few modes ξ_μ
become unstable, i.e., Re λ_μ ≥ 0. If all the other modes ξ_μ remain damped, which
again is satisfied in many practical applications, we can safely apply the adiabatic
elimination procedure. The important consequence lies in the following: Since
all the damped modes follow the order parameters adiabatically, the behavior of the
whole system is determined by the behavior of very few order parameters. Thus even
very complex systems may show a well-regulated behavior. Furthermore we have
seen in previous chapters that order parameter equations may allow for bifurcation.
Consequently, complex systems can operate in different "modes", which are well
defined by the behavior of the order parameters (Fig. 7.1). The above sections of
the present chapter are admittedly somewhat abstract. We therefore strongly advise
students to repeat all the individual steps by an explicit example presented in the
exercise. In practice, we often deal with a hierarchical structure, in which the
relaxation constants can be grouped so that

γ^{(1)} ≫ γ^{(2)} ≫ γ^{(3)} ≫ ⋯.

In this case one can apply the adiabatic elimination procedure first to the variables


Fig. 7.1. Typical example of a system composed of interacting subsystems each coupled to reser-
voirs (a). The reservoirs contain many degrees of freedom and we have only very limited knowl-
edge. They are treated by information theory (or in physics by thermodynamics or statistical
physics). After elimination of the reservoir variables, the "unstable" modes of the subsystems are
determined and serve as order parameters. The stable modes generate in many cases a feedback
loop selecting and stabilizing certain configurations of the order parameters (b)

connected with γ^{(1)}, leaving us with the other variables. Then we can apply this
method to the variables connected with γ^{(2)}, and so on.
There are three generalizations of our present treatment which are absolutely
necessary in quite a number of important cases of practical interest.
1) The equations (7.26), or equivalently (7.37), suffer from a principal drawback.
Assume that we have first parameters σ₁, σ₂ for the stable regime. Then all u = 0
in the stationary case, or, equivalently, all ξ = 0 in that case. When we now go over
to the unstable regime, ξ = 0 remains a solution and the system will never go to
the new bifurcated states. Therefore, to understand the onset of self-organization,
additional considerations are necessary. They stem from the fact that in practically
all systems fluctuations are present which push the system away from the unstable
points to new stable points with ξ ≠ 0 (compare Sect. 7.3).
2) The adiabatic elimination technique must be applied with care if the λ's have
imaginary parts. In that case our above procedure can be applied only if also the
imaginary parts of λ_unstable are much smaller than the real parts of λ_stable.
3) So far we have been considering only discrete systems, i.e., variables q depend-
ing on discrete indices j. In continuously extended media, such as fluids, or in con-
tinuous models of neuron networks, the variables q depend on space points x in a
continuous fashion.
The points 1-3 will be fully taken into account by the method described in Sec-
tions 7.7 to 7.11.
In our above considerations we have implicitly assumed that after the adiabatic
elimination of stable modes we obtain equations for order parameters which are
now stabilized (cf. (7.19)). This is a self-consistency requirement which must be
checked in the individual case. In Section 7.8 we shall present an extension of the
present procedure which may yield stabilized order parameters in higher order of
a certain iteration scheme. Finally there may be certain exceptional cases where the

damping constants γ_s of the damped modes are not large enough. Consider for
example the equation for a damped mode,

q̇_s = (−γ_s + q_j) q_s.

If the order parameter q_j can become too large, so that q_j − γ_s > 0, the adiabatic
elimination procedure breaks down, because γ_eff ≡ −γ_s + q_j > 0. In such a case
very interesting new phenomena occur, which are used, e.g., in electronics to con-
struct the so-called "universal circuit". We shall come back to this question in Sec-
tion 12.4.

Exercise on 7.2

Treat the equations

q̇₁ = −q₁ + βq₂ − a(q₁² − q₂²) ≡ h₁(q₁, q₂), (E.1)

q̇₂ = βq₁ − q₂ + b(q₁ + q₂)² ≡ h₂(q₁, q₂), (E.2)

using the steps (7.27) till (7.38). β is to be considered as a parameter ≥ 0, starting
from β = 0. Determine β = β_c so that λ₁ = 0.
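One way to check the linearization step of this exercise numerically (a sketch, not the full solution): the purely quadratic terms of h₁, h₂ contribute nothing to the Jacobian at q = 0, which leaves the matrix L = ((−1, β), (β, −1)) with eigenvalues −1 ± β, so the largest eigenvalue crosses zero at β_c = 1.

```python
import math

# Linear part of (E.1)-(E.2) at the origin: L = [[-1, beta], [beta, -1]],
# since the quadratic terms have vanishing derivatives at q = 0.
def eigenvalues(beta):
    L = [[-1.0, beta], [beta, -1.0]]
    tr = L[0][0] + L[1][1]
    det = L[0][0]*L[1][1] - L[0][1]*L[1][0]
    disc = math.sqrt(tr*tr - 4*det)      # real here: L is symmetric
    return (tr + disc) / 2, (tr - disc) / 2

# scan beta upward until the largest eigenvalue lambda_1 = -1 + beta crosses 0
beta, dbeta = 0.0, 1e-3
while eigenvalues(beta)[0] < 0.0:
    beta += dbeta
print(beta)  # ~ 1.0 = beta_c
```

Past β_c the mode belonging to λ₁ becomes the order parameter, and the remaining damped mode can be adiabatically eliminated as in Section 7.2.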

7.3 The Role of Fluctuations: Reliability or Adaptability? Switching

The typical equations for self-organizing systems are intrinsically homogeneous,
i.e., q = 0 must be a solution (except for a trivial displacement of the origin of q).
However, if the originally inactive system is described by q = 0, it remains at
q = 0 forever and no self-organization takes place. Thus we must provide a certain
initial push or randomly repeated pushes. This is achieved by random forces which
we came across in Chapter 6. In all explicit examples of natural systems such fluctua-
tions occur. We quote just a few: laser: spontaneous emission of light; hydro-
dynamics: hydrodynamic fluctuations; evolution: mutations.
Once self-organization has occurred and the system is in a certain state q^{(1)}, fluc-
tuations drive the system to explore new states. Consider Fig. 7.2, where the system
is described by a single order parameter q. Without fluctuations the system would

Fig. 7.2. Potential V(q) (compare text)


Fig. 7.3. Switching of a device by deformation of the potential V(q) (after Landauer)

never realize that at q = q^{(2)} there is a still more stable state. However, fluctuations
can drive the system from q^{(1)} to q^{(2)} by some diffusion process. Since q describes
the macroscopic performance of a system, among the new states may be those in
which the system is better adapted to its surroundings. When we allow for an
ensemble of such systems and admit competition, selection will set in (compare
Sect. 10.3). Thus the interplay between fluctuations and selection leads to an
evolution of systems.
On the other hand certain devices, e.g., tunnel diodes in electronics operate in
states described by an order parameter q with an effective potential curve e.g. that
of Fig. 7.3. By external means we can bring the tunnel diode (or other systems)
into the state q(l) and thus store information at q(1). With this state a certain
macroscopic feature of the system is connected (e.g., a certain electrical current).
Thus we can measure the state of the system from the outside, and the device
serves as a memory. However, due to fluctuations, the system may diffuse to state
q^{(2)}, thus losing its memory. Therefore the reliability of the device is lowered due to

fluctuations.
To process information we must be able to switch a device, thus bringing it
from q^{(1)} to q^{(2)}. This can be done by letting the potential barrier become first
lower and lower (cf. Fig. 7.3). Diffusion then drives the system from q^{(1)} to q^{(2)}.
When the potential barrier is increased again, the system is now trapped at q^{(2)}.
It may be that nature supports evolution by changing external parameters so that
the just-described switching process becomes effective in developing new species.
Therefore the size of the order parameter fluctuations is crucial for the performance
of a system, acting in two opposite ways: adaptability and ease of switching require
large fluctuations and flat potential curves, whereas reliability requires small fluc-
tuations and deep potential valleys. How can we control the size of fluctuations?
In self-organizing systems which contain several (identical) subsystems this can be
achieved by the number of components. For a fixed (i.e., nonfluctuating) order
parameter each subsystem, s, has a deterministic output q_d^{(s)} and a randomly
fluctuating output q_fl^{(s)}. Let us assume that the total output q^{(s)} is additive,

q^{(s)} = q_d^{(s)} + q_fl^{(s)}. (7.39)

202 7. Self-Organization

Let us further assume that the q^(s) are independent stochastic variables. The total
output q_total = Σ_s q^(s) can then be treated by the central limit theorem (cf. Sect. 2.15).
The total output increases with the number N of subsystems, while the fluctuations
increase only with √N. Thus we may control the reliability and adaptability just
by the number of subsystems. Note that this estimate is based on the linear relation
(7.39). In reality, the feedback mechanism of nonlinear equations leads to a still
more pronounced suppression of noise, as we will demonstrate explicitly in the
laser case (Chapter 8).
In conclusion, we discuss reliability of a system in the sense of stability against
malfunctioning of some of its subsystems. To illustrate a solution that self-organiz-
ing systems offer let us consider the laser (or the neuron network). Let us assume
that laser light emission occurs in a regular fashion, with all atoms emitting light at
the same frequency ω_0. Now assume that a number of atoms acquire a different transi-
tion frequency ω_1. While in a usual lamp both lines ω_0 and ω_1 appear, indicating
a malfunction, the laser (due to nonlinearities) keeps emitting only ω_0. This is a
consequence of the competition between different order parameters (compare
Sect. 5.4). The same behavior can be expected for neurons when some neurons
try to fire at a different rate. In these cases the output signal becomes merely
somewhat weaker, but it retains its characteristic features. If we allow for fluctua-
tions of the subsystems, the fluctuations remain small and are outweighed by the
"correct" macroscopic order parameter.

7.4* Adiabatic Elimination of Fast Relaxing Variables from the Fokker-Planck Equation
In the foregoing Sections 7.1 and 7.2 we described methods of eliminating fast
variables from the equations of motion. In several cases, e.g., chemical reaction
dynamics, a Fokker-Planck equation is more directly available than the cor-
responding Langevin equations. Therefore it is sometimes desirable to perform the
adiabatic elimination technique with the Fokker-Planck equation. We explain the
main ideas by means of the example (7.15), (7.16), where now fluctuations are
included. To distinguish explicitly between the "unstable" and "stable" modes,
we replace the indices

1 by u ("unstable"),
2 by s ("stable").

The Fokker-Planck equation corresponding to (7.15) and (7.16) reads

(7.40)


We write the joint distribution function f(q_u, q_s) in the form

f(q_u, q_s) = h(q_s | q_u) g(q_u),    (7.41)

where we impose the normalization conditions

∫ h(q_s | q_u) dq_s = 1    (7.42)

and

∫ g(q_u) dq_u = 1.    (7.43)

Obviously, h(q_s | q_u) can be interpreted as the conditional probability of finding q_s under
the condition that the value of q_u is given. It is our goal to obtain an equation for
g(q_u), i.e., to eliminate q_s. Inserting (7.41) into (7.40) yields

(7.44)

To obtain an equation for g(q_u) alone we integrate (7.44) over q_s. Using (7.42) we
obtain

(7.45)

As indicated we use the abbreviation

(7.46)

Apparently, (7.45) contains the still unknown function h(q_s | q_u). A closer inspec-
tion of (7.44) reveals that it is a reasonable assumption to determine h by the equa-
tion

(7.47)

This implies the assumption that h varies much more slowly as a function of q_u
than as a function of q_s, or, in other words, that derivatives of first (second) order of h
with respect to q_u can be neglected compared to the corresponding ones with
respect to q_s. As we will see explicitly, this requirement is fulfilled if γ_s is sufficiently


large. Furthermore we require ḣ = 0, so that h has to fulfil the equation

L_s h = 0,    (7.48)

where L_s is the differential operator on the rhs of (7.47). As in our special example

F_s = -γ_s q_s + φ(q_u),    (7.49)

the solution of (7.48) can be found explicitly with the aid of (6.129). (Note that q_u is
here treated as a constant parameter.)

(7.50)

Since in practically all applications F_s is composed of powers of q_s, the integral
(7.46) can be explicitly performed. In our present example we obtain

(7.51)

Provided the adiabatic condition holds, our procedure can be applied to general
functions F_u and F_s (instead of those in (7.40)). In one dimension, (7.48) can then
be solved explicitly (cf. Section 6.4). But also in several dimensions (see exercise)
(7.48) can be solved in a number of cases, e.g., if the potential conditions (6.114, 115)
hold. In view of Sections 7.6 and 7.7 we mention that this procedure applies also
to functional Fokker-Planck equations.
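As a numerical illustration of the factorization (7.41), the following sketch simulates an assumed fast-slow pair (not the book's example (7.15), (7.16)): a slow Ornstein-Uhlenbeck mode q_u drives a fast mode q_s through φ(q_u) = q_u². For this model, (7.48) predicts that h(q_s | q_u) is centred at φ(q_u)/γ_s, which we check via the conditional mean:

```python
import numpy as np

# Assumed model:  dq_u = -g_u*q_u dt + sqrt(2Q) dW_u,
#                 dq_s = (-g_s*q_s + q_u**2) dt + sqrt(2Q) dW_s,  g_s >> g_u.
# The stationary conditional density of q_s at frozen q_u is a Gaussian
# centred at q_u**2 / g_s.
rng = np.random.default_rng(2)
g_u, g_s, Q, dt = 0.1, 10.0, 0.5, 1e-3
steps = 420000
noise = np.sqrt(2*Q*dt) * rng.standard_normal((steps, 2))
qu, qs, trace = 1.0, 0.0, []
for i in range(steps):
    qu += -g_u*qu*dt + noise[i, 0]
    qs += (-g_s*qs + qu**2)*dt + noise[i, 1]
    if i >= 20000:                      # discard the initial transient
        trace.append((qu, qs))
qu_a, qs_a = np.array(trace).T
mask = np.abs(qu_a**2 - 2.0) < 0.25     # condition on q_u**2 ~ 2
cond_mean = qs_a[mask].mean()
print(cond_mean, 2.0 / g_s)             # conditional mean vs phi(q_u)/g_s
```

The agreement improves as the ratio γ_s/γ_u grows, in line with the adiabatic hypothesis.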

Exercise on 7.4

Generalize the above procedure to a set of q_u's and q_s's, u = 1, …, k, s = 1, …, l.

7.5* Adiabatic Elimination of Fast Relaxing Variables from the Master Equation
In a number of applications one may identify slow (unstable) and fast (stable) quantities, or,
more precisely, random variables X_u, X_s, the probability distribution of which obeys
a master equation. In that case we may eliminate the stable variables from the
master equation in a way closely analogous to our previous procedure applied to
the Fokker-Planck equation. We denote the values of the unstable (stable) variables
by m_u (m_s). The probability distribution P obeys the master equation (4.111)

Ṗ(m_s, m_u; t) = Σ_{m_s', m_u'} w(m_s, m_u; m_s', m_u') P(m_s', m_u'; t)
    − P(m_s, m_u; t) Σ_{m_s', m_u'} w(m_s', m_u'; m_s, m_u).    (7.52)

We put

P(m_s, m_u; t) = G(m_u) H(m_s | m_u),    (7.53)


and require

Σ_{m_s} H(m_s | m_u) = 1,    (7.54)

Σ_{m_u} G(m_u) = 1.    (7.55)

Inserting (7.53) into (7.52) and summing over m_s on both sides yields the still
exact equation

(7.56)

where a little calculation shows that

(7.57)

To derive an equation for the conditional probability H(m_s | m_u) we invoke the
adiabatic hypothesis, namely that X_u changes much more slowly than X_s. Conse-
quently we determine H from that part of (7.52) in which transitions between m_s'
and m_s occur for fixed m_u = m_u'. Requiring furthermore Ḣ = 0, we thus obtain

Σ_{m_s'} w(m_s, m_u; m_s', m_u) H(m_s' | m_u)
    − H(m_s | m_u) Σ_{m_s'} w(m_s', m_u; m_s, m_u) = 0.    (7.58)

The equations (7.58) and (7.56) can be solved explicitly in a number of cases,
e.g., if detailed balance holds. The quality of this procedure can be checked by
inserting (7.53) with G and H determined from (7.56) and (7.58) into (7.52) and
making an estimate of the residual expressions. If the adiabatic hypothesis is valid,
these terms can be now taken into account as a small perturbation. Inserting the
solution H of (7.58) into (7.57) yields an explicit expression for w̃, which can now
be used in (7.56). This last equation determines the probability distribution G of
the order parameters.
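The procedure can be tried on a minimal four-state toy model; all rates below are assumptions chosen for illustration. A fast spin m_s flips at rates of order Γ that depend on the slow spin m_u, while m_u flips at rates of order 1 that depend on m_s; the reduced distribution G(m_u) is compared against the exact marginal:

```python
import numpy as np

Gam = 200.0   # time-scale separation between fast and slow transitions
a = {0: 1.0, 1: 3.0}; b = {0: 2.0, 1: 1.0}   # m_s: 0->1 at a[mu]*Gam, 1->0 at b[mu]*Gam
c = {0: 0.5, 1: 1.5}; d = {0: 1.0, 1: 0.3}   # m_u: 0->1 at c[ms],     1->0 at d[ms]

# exact stationary distribution of the full 4-state master equation
idx = {(ms, mu): 2*mu + ms for ms in (0, 1) for mu in (0, 1)}
W = np.zeros((4, 4))
for ms in (0, 1):
    for mu in (0, 1):
        i = idx[(ms, mu)]
        W[idx[(1 - ms, mu)], i] += (a[mu] if ms == 0 else b[mu]) * Gam
        W[idx[(ms, 1 - mu)], i] += (c[ms] if mu == 0 else d[ms])
W -= np.diag(W.sum(axis=0))
P = np.abs(np.linalg.svd(W)[2][-1]); P /= P.sum()
p_marg = P[idx[(0, 1)]] + P[idx[(1, 1)]]      # exact marginal of m_u = 1

# adiabatic elimination: H(m_s|m_u) from (7.58) at frozen m_u, then the
# effective rates w~ of (7.57) and the reduced stationary G(m_u) of (7.56)
H1 = {mu: a[mu] / (a[mu] + b[mu]) for mu in (0, 1)}   # H(m_s = 1 | m_u)
wt_up = (1 - H1[0]) * c[0] + H1[0] * c[1]             # effective m_u: 0 -> 1
wt_dn = (1 - H1[1]) * d[0] + H1[1] * d[1]             # effective m_u: 1 -> 0
G1 = wt_up / (wt_up + wt_dn)                          # G(m_u = 1)

print(p_marg, G1)
```

The residual discrepancy is of order 1/Γ, which is exactly the "small perturbation" mentioned in the text.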

7.6 Self-Organization in Continuously Extended Media. An Outline of the Mathematical Approach

In this and the following sections we deal with equations of motion of continuously
extended systems containing fluctuations. We first assume external parameters
permitting only stable solutions and then linearize the equations, which define a
set of modes. When external parameters are changed, the modes becoming un-
stable are taken as order parameters. Since their relaxation time tends to infinity,
the damped modes can be eliminated adiabatically, leaving us with a set of non-
linear coupled order-parameter equations. In two and three dimensions they allow,
for example, for hexagonal spatial structures. Our procedure has numerous practical
applications (see Chap. 8).
To explain our procedure we look at the general form of the equations which


are used in hydrodynamics, lasers, nonlinear optics, chemical reaction models and
related problems. To be concrete we consider macroscopic variables, though in
many cases our procedure is also applicable to microscopic quantities. We denote
the physical quantities by U = (U_1, U_2, …), mentioning the following examples.
In lasers, U stands for the electric field strength, for the polarization of the medium,
and for the inversion density of laser active atoms. In nonlinear optics, U stands
for the field strengths of several interacting modes. In hydrodynamics, U stands
e.g., for the components of the velocity field, for the density, and for the tempera-
ture. In chemical reactions, U stands for the numbers (or densities) of molecules
participating in the chemical reaction. In all these cases, U obeys equations of the
following type:

∂U_μ/∂t = G_μ(∇, U) + D_μ ∇² U_μ + F_μ(t);   μ = 1, 2, …, n.    (7.59)

In it, the G_μ are nonlinear functions of U and perhaps of a gradient. In most applica-
tions, like lasers or hydrodynamics, G is a linear or bilinear function of U, though in
certain cases (especially in chemical reaction models) a cubic coupling term may
equally well occur. The next term in (7.59) describes diffusion (D real) or wave-type
propagation (D imaginary). In the latter case the second-order time derivative of
the wave equation has been replaced by the first derivative by use of the "slowly
varying amplitude approximation". The F_μ(t)'s are fluctuating forces which are
caused by external reservoirs and internal dissipation and which are connected
with the damping terms occurring in (7.59).
We shall not be concerned with the derivation of equations (7.59). Rather, our
goal will be to derive from (7.59) equations for the undamped modes, which
acquire a macroscopic size and determine the dynamics of the system in the
vicinity of the instability point. These modes form a mode skeleton which grows
out from fluctuations above the instability and thus describes the "embryonic"
state of the evolving spatio-temporal structure.

7.7* Generalized Ginzburg-Landau Equations for Nonequilibrium Phase Transitions
We now start with a treatment of (7.59). We assume that the functions G_μ in (7.59)
depend on external parameters σ_1, σ_2, … (e.g., the energy pumped into the system).
First we consider such values of σ that U = U_0 is a stable solution of (7.59).
At higher instabilities, U_0 may be space- and time-dependent in the form

In a number of cases the dependence of U_1 on x and t can be transformed away, so
that again a new space- and time-independent U_0 (or Ū_0) results. We then decom-
pose U

U = U_0 + q    (7.60)


with

q = (q_1(x, t), …, q_n(x, t))^T.    (7.61)

Splitting the rhs of (7.59) into a linear part, Kq, and a nonlinear part g(q), we obtain
(7.59) in the form

(7.62)

In it the matrix

(7.63)

has the form

(7.64)

Our whole procedure applies, however, to a matrix K which depends in a general
way on ∇. g is assumed in the form

(7.65)

g^(2) may or may not depend on ∇, and equally g^(3) may or may not depend on ∇.
(If g^(3) depends on ∇, the sequence of g and q must be chosen properly.) First we
consider the linear homogeneous equation

(7.66)

where the matrix K is given by (7.64). To obtain the solution of (7.66) we split the
space- and time-dependent vector q into the product of a time-dependent function
exp(λt), a constant vector O, and a space-dependent function χ(x), i.e.,

q(x, t) = e^{λt} O χ(x).    (7.67)

Inserting (7.67) into (7.66) yields

K(∇) O χ(x) = λ O χ(x).    (7.66a)

We now require that χ(x) obey the wave equation

∇² χ_k(x) = −k² χ_k(x),    (7.68)

where the individual solutions χ_k are distinguished by the index k.


As is well known, the solutions of (7.68) are determined only if χ_k(x) obeys
proper boundary conditions. Such a condition could be that χ_k(x) vanishes on the
boundary. Depending on the shape of the boundary, the solutions may take
various forms. For boundaries forming a square or cube in two or three dimen-
sions, respectively, products of sine functions are appropriate for χ_k(x). For a
circular boundary in the plane, Bessel functions of the radius r multiplied by exp(iφ)
can be used for χ_k, where one chooses polar coordinates r and φ. In the present
context we want to make as close contact as possible with the Ginzburg-Landau
theory of phase transitions of systems in thermal equilibrium. To this end we
assume that the system under consideration is infinitely extended, so that there are
no boundary conditions except that |χ(x)| remains bounded for |x| → ∞. The
appropriate solutions of (7.68) are then the plane waves

χ_k(x) = 𝒩_k exp(ikx).

Using this function in (7.66a), we readily obtain

K(ik) O = λ O    (7.66b)

after dividing both sides of (7.66a) by χ_k(x). Because the matrix K, whose elements
have, for instance, the form

(7.66c)

depends on k, the eigenvectors O and the eigenvalues λ also become functions of k.


Because (7.66b) represents a system of linear, homogeneous algebraic equations,
nontrivial solutions O can be found only if the corresponding determinant van-
ishes. This can be achieved by a proper choice of the λ's, which we distinguish by the
index j. But because the determinant depends on k [cf. (7.66c)], the λ's will also
depend on k. Therefore we write λ = λ_j(ik). For formal reasons which will imme-
diately become clear, we have included in λ the imaginary unit i. Correspondingly,
we must distinguish between the different eigenvectors O:

O = O^(j)(ik).

By means of a formal trick we can write many of the following relations more
concisely. As we shall see below, the quantities K(ik), λ_j(ik), and O^(j)(ik) will always
be followed by exp(ikx). In order to understand our formal trick, consider the
special case of one dimension, where kx reduces to kx and ∇ to d/dx.
Let us further assume for simplicity that λ_j(ik) can be expanded into a power
series of ik, i.e.,

λ_j(ik) = Σ_{n=0}^∞ c_n (ik)^n.

We now observe that

(d/dx) exp(ikx) = ik exp(ikx)

and more generally

(d/dx)^n exp(ikx) = (ik)^n exp(ikx).

As an obvious consequence, we may write

λ_j(ik) exp(ikx) ≡ Σ_{n=0}^∞ c_n (ik)^n exp(ikx) = λ_j(d/dx) exp(ikx),

where λ_j(d/dx) is defined by its power series expansion as explicitly indicated.
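This correspondence between the differential operator λ_j(d/dx) and the multiplier λ_j(ik) can be checked numerically; the quadratic symbol λ(z) = 1 + 2z + 3z² and the grid below are assumptions chosen only for illustration, with d/dx applied spectrally on a periodic interval:

```python
import numpy as np

# Check that lam(d/dx) e^{ikx} = lam(ik) e^{ikx} for lam(z) = c0 + c1*z + c2*z**2.
c0, c1, c2 = 1.0, 2.0, 3.0
n, k = 256, 5.0                               # grid points, integer wavenumber
x = np.linspace(0.0, 2*np.pi, n, endpoint=False)
f = np.exp(1j*k*x)

ik = 2j*np.pi*np.fft.fftfreq(n, d=2*np.pi/n)  # spectral symbol of d/dx
lam_f = np.fft.ifft((c0 + c1*ik + c2*ik**2) * np.fft.fft(f))

lam_ik = c0 + c1*(1j*k) + c2*(1j*k)**2        # lam evaluated at the number ik
err = np.max(np.abs(lam_f - lam_ik*f))
print(err)
```

The error is at machine-precision level: applying the operator and multiplying by λ(ik) are the same thing on a plane wave.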


Generalizing this trick to three dimensions and applying it to K(ik), O^(j)(ik), and
λ_j(ik), we replace these quantities by

K(∇), O^(j)(∇), and λ_j(∇), respectively.

In this way, (7.66b) is transformed into

Incidentally,

is replaced by

Note that we have to retain the index k of χ_k, because this index indicates by which
ik the operator ∇ is to be replaced in O^(j) and λ_j. As we shall see below, this
formalism is most convenient when dealing with infinitely extended media and the
corresponding plane-wave solutions. When we are dealing with finite boundaries,
however, k (or k) in general belongs to a discrete set, and in such cases it is
advantageous to retain the notation K(ik), etc. We leave it as an exercise to the
reader to formulate the final equations of Sects. 7.7 and 7.8 in that way as well
(where there are no "finite bandwidth excitations", as will be discussed below).
Because in general K is not self-adjoint, we introduce the solutions Ō of the
adjoint equation

(7.69)

where

(7.70)


and we may always choose Ō so that

(7.71)

The requirements (7.67), (7.69), (7.71) fix the O's and Ō's up to "scaling"
factors S_j(∇): O^(j) → O^(j) S_j(∇), Ō^(j) → Ō^(j) S_j(∇)^{-1}. This can be used in the mode
expansion (7.72) to introduce suitable units for the ξ_{k,j}'s. Since ξ and O occur
jointly, this kind of scaling does not influence the convergence of the adiabatic
elimination procedure used below. We represent q(x) as the superposition

(7.72)

Now we take a decisive step for what follows. It will be shown later by means
of explicit examples, like laser or hydrodynamic instabilities, that there exist finite
bandwidth excitations of unstable modes. This suggests building up wave packets
(as in quantum mechanics), i.e., taking sums over small regions of k together.
We thus obtain carrier modes with discrete wave numbers k and slowly varying
amplitudes ξ_{k,j}(x). The essentials of this procedure can be seen in a one-
dimensional example. Since k in (7.72) is a continuous variable, we write, instead
of (7.72),

(7.72a)

Because O^(j) does not depend on the integration over k, we may put O^(j) in front
of the integral. We now subdivide the integration over k into equidistant intervals
[k' − δ/2, k' + δ/2], where the k' are equidistant discrete values. Recalling that χ_k is an
exponential function, we then find

(7.72b)

where 𝒩 is the normalization constant of χ_k.


Writing k in the form k = k' + k̃, where −δ/2 ≤ k̃ ≤ δ/2, we may cast (7.72b)
in the form

(7.72b) = O^(j) 𝒩 Σ_{k'} exp(ik'x) ∫_{−δ/2}^{δ/2} ξ_{k'+k̃,j} exp(ik̃x) dk̃.

Using the abbreviations

𝒩 exp(ik'x) = χ_{k'}(x)

and

∫_{−δ/2}^{δ/2} ξ_{k'+k̃,j} exp(ik̃x) dk̃ = ξ_{k',j}(x),

we may cast (7.72b) in the form

q(x, t) = Σ_{k'} O^(j)(d/dx) ξ_{k',j}(x, t) χ_{k'}(x).    (7.72c)

In the following we shall drop the prime on k'. The generalization of (7.72) to three
dimensions is obvious. Thus, we replace (7.72) with our new hypothesis

q(x, t) = Σ_{k,j} O^(j)(∇) ξ_{k,j}(x, t) χ_k(x).    (7.72d)

But in contrast to (7.72), the sum in (7.72d) runs over a discrete set of k's only,
and the ξ's do not only depend on the time t, but are also slowly varying functions of
space x. Our first goal is to derive a general set of equations for the mode ampli-
tudes ξ. To do so we insert (7.72d) into (7.62), multiply from the left-hand side by
χ*_{k'}(x) Ō^(j), and integrate over a region which contains many oscillations of χ_k but
in which ξ changes very little. The resulting first expression on the left-hand side
of (7.62) can be evaluated as follows:

∫ χ*_{k'}(x) ξ_{k,j}(x) χ_k(x) d³x ≈ ξ_{k,j}(x) δ_{kk'}.    (7.73)

Because of

[λ_j(∇), O^(j')] = 0, where [a, b] ≡ ab − ba,

it suffices to study the resulting second expression in the form

(7.74)

In order to evaluate this expression, we use the explicit form of

χ_k(x) = 𝒩 exp(ikx)

and note that

λ_j(∇) exp(ikx) = exp(ikx) λ_j(∇ + ik).


Using the abbreviation λ̃_j(∇, k) = λ_j(∇ + ik) and observing that ξ_{k,j}(x) is a
function slowly varying in space, we readily obtain, in analogy to (7.73),

(7.75)

Because the ξ_{k,j}(x) are slowly varying functions of x, the application of ∇ yields a
small quantity. We are therefore permitted to expand λ̃_j into a power series of ∇,
and to confine ourselves to the leading term. In an isotropic medium we thus
obtain

(7.76)


It is, of course, no problem to retain the higher-order terms, and this is indeed
required in a number of practical applications of our equations, for instance to
instabilities in fluid dynamics.
On the rhs of (7.62) we have to insert (7.72) into g (see (7.62) and (7.65)), and to do
the multiplication and integration described above. Denoting the resulting func-
tion of ξ by g̃, we have to evaluate

(7.77)

The explicit result will be given below, (7.87) and (7.88). Here we mention only a
few basic facts. As we shall see later, the dependence of ξ_{k,j}(x) on x is important
only for the unstable modes, and here again only in the term connected with λ̃. This
means that we make the following replacements in (7.77):

O^(j)(∇) → O^(j)(ik'),    (7.78)

and

(7.79)

Since g (or g̃) contains only quadratic and cubic terms in q, the evaluation of (7.77)
amounts to an evaluation of integrals of the form

(7.80)

where

(7.81)

and

(7.82)

where

(7.83)

Finally, the fluctuating forces F_μ give rise to new fluctuating forces in the form

(7.84)


After these intermediate steps the basic set of equations reads as follows:

(7.85)

where

H_{k,j}({ξ(x)}) ≡ Σ_{k'k''; j'j''} a_{kk'k'', jj'j''} I_{kk'k''} ξ_{k'j'} ξ_{k''j''}
    + Σ_{k'k''k'''; j'j''j'''} b_{kk'k''k''', jj'j''j'''} J_{kk'k''k'''} ξ_{k'j'} ξ_{k''j''} ξ_{k'''j'''}.    (7.86)

The coefficients a and b are given by

a_{kk'k'', jj'j''} = (1/2) Σ_{μνν'} Ō_μ^(j)(k) O_ν^(j')(k') O_{ν'}^(j'')(k'') { g_{μνν'}^(2)(k'') + g_{μν'ν}^(2)(k') }    (7.87)

and

b_{kk'k''k''', jj'j''j'''} = Σ_{μν'ν''ν'''} g_{μν'ν''ν'''}^(3) Ō_μ^(j)(k) O_{ν'}^(j')(k') O_{ν''}^(j'')(k'') O_{ν'''}^(j''')(k'''),    (7.88)

respectively. So far practically no approximations have been made, but to cast
(7.85) into a practicable form we have to eliminate the unwanted or uninteresting
modes, which are the damped modes. Accordingly we put

j = u ("unstable") if Re λ_u(0, k) ≥ 0,    (7.89)

and

j = s ("stable") if Re λ_s(0, k) < 0.    (7.90)

An important point should be observed at this stage: though ξ bears two indices,
k and u, these indices are not independent of each other. Indeed, the instability
usually occurs only in a small region of k around k = k_c (compare Fig. 7.4a-c). Thus we must
carefully distinguish between the k values at which (7.95) and (7.96) (see below)
are evaluated. Because of the "wave packets" introduced earlier, k runs over a set
of discrete values with |k| = k_c. The prime on the sums in the following formulas
indicates this restriction of the summation over k. Fig. 7.4c provides us with an
example of a situation where branch j of ξ_{k,j} can become unstable at two different
values of k. If k = 0 and k = k_c are connected with a hard and a soft mode, respec-
tively, parametrically modulated spatio-temporal mode patterns appear.
The basic idea of our further procedure is this. Because the undamped modes
may grow without limit as long as the nonlinear terms are neglected, we expect that the
amplitudes of the undamped modes are considerably bigger than those of the
damped modes. Since, on the other hand, close to the "phase transition" point the
relaxation time of the undamped modes tends to infinity, i.e., the real part of λ_u
tends to zero, the damped modes must adiabatically follow the undamped modes.
Though the amplitudes of the damped modes are small, they must not be neglected
completely. This neglect would lead to a catastrophe if in (7.86), (7.87), (7.88) the

Fig. 7.4. (a) Example of an eigenvalue λ leading to an instability at k_c. This dependence of λ on
k corresponds qualitatively to that of the Brusselator model of chemical reactions (cf. Section 9.4).
The dashed curve corresponds to the stable situation, the solid curve to the marginal one, and the
dotted curve leads to an instability for a mode continuum near k_c.
(b) The situation is similar to that of Fig. 7.4a, the instability now occurring at k = 0. In the case
of the Brusselator model, this instability is connected with a hard mode.
(c) Example of two simultaneously occurring instabilities. If the mode at k = 0 is a hard mode
and that at k = k_c a soft mode, spatial and temporal oscillations can occur

cubic terms are lacking. As one quickly convinces oneself, quadratic terms can
never lead to a globally stable situation. Thus cubic terms are necessary for
stabilization. Such cubic terms are introduced, even in the absence of those in the


original equations, by the elimination of the damped modes. To exhibit the main
features of our elimination procedure more clearly, we put for the moment

ξ_{k,j} → (k, j),    (7.91)

and drop all coefficients in (7.85). We assume |ξ_s| ≪ |ξ_u| and, in a self-consistent
way, ξ_s ∝ ξ_u². Keeping in (7.85) only terms up to third order in ξ_u, we obtain

(d/dt − λ̃_u)(k, u) = Σ_{k'k''; u's} (k', u')(k'', s) + Σ_{k'k''; u'u''} (k', u')(k'', u'')
    + Σ_{k'k''k'''; u'u''u'''} (k', u')(k'', u'')(k''', u''') + F_{k,u}.    (7.92)

Consider now the corresponding equation for j = s. In it we keep only the terms neces-
sary to obtain an equation for the unstable modes up to third order:

(d/dt − λ̃_s)(k, s) = Σ_{k'k''; u'u''} (k', u')(k'', u'') + ….    (7.93)

If we adopt an iteration scheme using the inequality |ξ_s| ≪ |ξ_u|, one readily con-
vinces oneself that ξ_s is at least proportional to ξ_u², so that the only relevant terms in
(7.93) are those exhibited explicitly. We now use our second hypothesis, namely,
that the stable modes are damped much more quickly than the unstable ones, which
that the stable modes are damped much more quickly than the unstable ones which
is well fulfilled for the soft mode instability. In the case of a hard mode, we must be
careful to remove the oscillatory part of (k', u')(k", u") in (7.93). This is achieved
by keeping the time derivative in (7.93). We therefore write the solution of (7.93) in
the form

(k, s) = (d/dt − λ̃_s)^{-1} Σ_{k'k''; u'u''} (k', u')(k'', u''),    (7.94)

where we have used the notation of the Heaviside calculus. According to it,

(d/dt − λ̃_s)^{-1} f(t) = ∫_{−∞}^t exp[λ̃_s(t − τ)] f(τ) dτ.    (7.94a)

This definition remains valid if f is a vector and λ̃_s a matrix acting in the vector
space to which f belongs. Using this definition and the results obtained in the next
section, we can eliminate the stable modes precisely (in the rigorous mathematical
sense). For most practical cases, however, an approximate evaluation of the ope-
rator (d/dt − λ̃_s)^{-1} is quite sufficient; in the case of a soft mode, d/dt can be entirely
neglected, whereas in the case of a hard mode, d/dt must be replaced by the
corresponding sum of the frequencies of the ξ_{k,j}(x, t) which occur in (7.94).

As can be shown by a more detailed analysis, in most cases of practical interest
we can also neglect the spatial derivative ∇ which occurs in λ̃_s. In this way
equations of type (7.93) can be readily solved. If time and space derivatives are
neglected, as just mentioned, the solution of (7.93) is a purely algebraic problem,
so that we could take into account higher-order terms in (7.93) [e.g., products of
(k, u) and (k', s)] without difficulties, at least in principle.
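The Heaviside rule (7.94a) can be verified directly for a single damped mode driven at one frequency; λ, ω, and the integration window below are arbitrary choices:

```python
import numpy as np

# Check that y(t) = int_{-inf}^t exp[lam*(t - tau)] f(tau) dtau, applied to
# f(t) = cos(w*t) with lam < 0, reproduces Re[e^{i w t}/(i w - lam)], i.e.,
# that the convolution really inverts (d/dt - lam).
lam, w, t = -2.0, 3.0, 4.0
tau = np.linspace(t - 20.0, t, 200001)   # e^{lam*(t-tau)} ~ 1e-18 at the far end
vals = np.exp(lam*(t - tau)) * np.cos(w*tau)
dtau = tau[1] - tau[0]
y = ((vals[:-1] + vals[1:]) * dtau / 2).sum()   # trapezoidal rule

y_exact = (np.exp(1j*w*t) / (1j*w - lam)).real
print(y, y_exact)
```

Differentiating the convolution with respect to t and subtracting λ y indeed returns cos(ωt), which is the content of (7.94a).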
Our discussion shows that it is possible to express the damped modes by the
undamped modes (k, u), which will serve as order parameters. This possibility,
which plays a significant role in synergetics, will be called the slaving principle.¹
Inserting the result of the evaluation of (7.94) into (7.92), we obtain the funda-
mental set of equations for the order parameters:

… + Σ'_{k'k''k'''; u'u''u'''} C_{kk'k''k''', uu'u''u'''} ξ_{k'u'} ξ_{k''u''} ξ_{k'''u'''} + F̃_{k,u},    (7.95)

where we have used the abbreviation

C_{kk'k''k''', uu'u''u'''} = b_{kk'k''k''', uu'u''u'''} J_{kk'k''k'''} + …    (7.96)

and F̃_{k,u} is defined by

F̃_{k,u}(x, t) = F_{k,u}(x, t) + 2 Σ_{k̃ k'; s u'} a_{kk'k̃, uu's} I_{kk'k̃} ξ_{k'u'}(x)
    · { d/dt − λ̃_s(0, k) }^{-1} F_{k̃,s}(x, t).    (7.97)
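A two-mode caricature of this elimination (all coefficients assumed for illustration, not taken from (7.95)) shows how the slaved mode generates the stabilizing cubic term in the order-parameter equation:

```python
import numpy as np

# Assumed model: an undamped order parameter xi_u drives a strongly damped
# mode xi_s, which feeds back and saturates the growth of xi_u:
#   d(xi_u)/dt = lam_u*xi_u - a*xi_u*xi_s
#   d(xi_s)/dt = -gam_s*xi_s + b*xi_u**2
lam_u, gam_s, a, b = 1.0, 20.0, 1.0, 1.0
xi_u, xi_s, dt = 0.01, 0.0, 1e-3
for _ in range(40000):                      # Euler integration to t = 40
    d_u = lam_u*xi_u - a*xi_u*xi_s
    d_s = -gam_s*xi_s + b*xi_u**2
    xi_u += d_u*dt; xi_s += d_s*dt

# slaving: xi_s follows b*xi_u**2/gam_s; eliminating it yields the
# order-parameter equation d(xi_u)/dt = lam_u*xi_u - (a*b/gam_s)*xi_u**3
print(xi_s, b*xi_u**2/gam_s)                # slaved mode vs its prediction
print(xi_u, np.sqrt(lam_u*gam_s/(a*b)))     # steady state vs reduced model
```

Although neither original equation contains a cubic term, the eliminated stable mode supplies exactly the cubic nonlinearity that stops the unlimited growth of the order parameter.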

7.8* Higher-Order Contributions to Generalized Ginzburg-Landau Equations
It is the purpose of this section to show how the first steps of the procedure of the
foregoing section can be extended to a systematic approach which allows us to
calculate all higher correction terms to the generalized Ginzburg-Landau equations
by explicit construction. We first cast our basic equations (7.85) into a more
concise form. We introduce the vector
(7.98a)

¹ Because of its importance, other forms of the slaving principle have been presented in
H. Haken: Advanced Synergetics, Springer Ser. Synergetics, Vol. 20 (Springer, Berlin, Heidelberg,
New York, Tokyo 1983).


and

(7.98b)

The ξ's are the expansion coefficients introduced in (7.72), which may be slowly
space dependent and also functions of time. The F's are fluctuating forces defined
in (7.84). The k_j are wave vectors, whereas l_1, l_2, … or m_1, m_2, … distinguish between
stable and unstable modes, i.e.,

l_j = u or s.    (7.99)

Note that k_j and l_j are not independent variables, because in some regions of k the
modes may be unstable while they remain stable in other regions of k. We now
introduce the abbreviation

(7.100)

which defines the lhs of that equation. The coefficients a and I have been given in
(7.87), (7.81). In a similar way we introduce the notation
(7.101)

where b and J can be found in (7.88), (7.83). We further introduce the matrix

Λ_s = ( λ̃_{m_1}(∇, k_1)  0          ⋯
        0            λ̃_{m_2}(∇, k_2)  ⋯
        ⋮            ⋮           ⋱ ),    (7.102)

where the λ̃'s are the eigenvalues occurring in (7.85). With the abbreviations (7.98),
(7.100) to (7.102), the equations (7.85), specialized to the stable modes, can be written in the
form

(d/dt − Λ_s) s = A_{suu}: u: u + 2A_{sus}: u: s + A_{sss}: s: s + B_{suuu}: u: u: u
    + 3B_{suus}: u: u: s + 3B_{suss}: u: s: s + B_{ssss}: s: s: s + F_s.    (7.103)

In the spirit of our previous approach we assume that the vector s is completely
determined by this equation. Because (7.103) is a nonlinear equation, we must
devise an iteration procedure. To this end we make the ansatz

s = Σ_n C^(n)(u),    (7.104)

where C^(n) contains the components of u exactly n times. We insert (7.104) into
(7.103) and compare terms which contain exactly the same numbers of factors of

u. Since all stable modes are damped while the unstable modes are undamped,
we can be sure that the operator d/dt − Λ_s possesses an inverse. By use of this we find
the following relations:

C^(2)(u) = (d/dt − Λ_s)^{-1} {A_{suu}: u: u + F_s},    (7.105)

and for n ≥ 3

C^(n)(u) = (d/dt − Λ_s)^{-1} { … },    (7.106)

where the bracket is an abbreviation for

{ … } = 2A_{sus}: u: C^(n−1) + (1 − δ_{n,3}) Σ_{m=2}^{n−2} A_{sss}: C^(m): C^(n−m)

    + δ_{n,3} B_{suuu}: u: u: u + 3(1 − δ_{n,3}) B_{suus}: u: u: C^(n−2)

    + 3(1 − δ_{n,3})(1 − δ_{n,4}) Σ_m B_{suss}: u: C^(m): C^(n−1−m)

    + (1 − δ_{n,3})(1 − δ_{n,4})(1 − δ_{n,5}) Σ_{m_1,m_2,m_3 ≥ 2; m_1+m_2+m_3 = n} B_{ssss}: C^(m_1): C^(m_2): C^(m_3).    (7.107)

This procedure allows us to calculate all the C^(n)'s consecutively, so that the C's are
determined uniquely. Because the modes are damped, we can neglect solutions of
the homogeneous equations, provided we are interested in the stationary state or
in slowly varying states which are not affected by initial distributions of the damped
modes. Note that F is formally treated as being of the same order as u: u. This
is only a formal trick. In applications the procedure must possibly be altered to pick
out the correct orders of u and F in the final equations. Note also that λ̃ and
thus Λ may still be differential operators with respect to x, but must not depend on
the time t. We now direct our attention to the equations of motion for the unstable
modes. In our present notation they read

(d/dt − Λ_u) u = A_{uuu}: u: u + 2A_{uus}: u: s + A_{uss}: s: s
    + B_{uuuu}: u: u: u + 3B_{uuus}: u: u: s
    + 3B_{uuss}: u: s: s + B_{usss}: s: s: s + F_u.    (7.108)

As mentioned above, our procedure consists in first evaluating the stable modes as
a functional or function of the unstable modes. We now insert the expansion (7.104),
where the C's are consecutively determined by (7.105) and (7.106), into (7.108).
Using the definition

C^(0) = C^(1) = 0    (7.109)

and collecting terms containing the same total number of u's, our final equation


reads

(d/dt − Λ_u) u = A_{uuu}: u: u + 2A_{uus}: u: Σ_{ν=0}^{n−1} C^(ν) + Σ_{ν_1,ν_2 = 0; ν_1+ν_2 ≤ n} A_{uss}: C^(ν_1): C^(ν_2)

    + B_{uuuu}: u: u: u + 3B_{uuus}: u: u: Σ_ν C^(ν) + … + F_u.    (7.110)

In the general case the solution of (7.110) may be a formidable task. In practical
applications, however, solutions can be found in a number of cases (cf. Chap. 8).
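A scalar toy version of the iteration (7.105)-(7.107) (coefficients assumed; soft mode, so d/dt is neglected) reproduces the exact algebraic elimination of the stable mode:

```python
import numpy as np

# Assumed toy model: a single stable mode s obeying
#   0 = -gam*s + alp*u**2 + bet*u*s,
# whose exact elimination gives s = alp*u**2 / (gam - bet*u).
# Collecting powers of u gives C^(2) = alp*u**2/gam and the recursion
# C^(n) = (bet*u/gam) * C^(n-1), which we sum and compare with the exact result.
gam, alp, bet, u = 2.0, 1.0, 0.5, 0.6
C = {0: 0.0, 1: 0.0}                 # cf. (7.109)
C[2] = alp*u**2/gam                  # lowest order, analogue of (7.105)
for n in range(3, 12):               # higher orders, analogue of (7.106)
    C[n] = bet*u*C[n-1]/gam
s_series = sum(C.values())

s_exact = alp*u**2/(gam - bet*u)     # exact algebraic elimination
print(s_series, s_exact)
```

The series converges geometrically with ratio βu/γ, so a handful of orders already agrees with the exact elimination to high accuracy; this is the scalar shadow of the consecutive determination of the C^(n)'s.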

7.9* Scaling Theory of Continuously Extended Nonequilibrium Systems

In this section we want to develop a general scaling theory applicable to large classes
of phase transition-like phenomena in nonequilibrium systems. We treat an N-
component system and take fluctuations fully into account. Our present approach
is in general applicable only to the one-dimensional case, though there may be
cases in which it applies also to two and three dimensions. The basic idea of our
procedure is as follows:
We start from a situation in which the external parameters permit only stable
solutions. In this case the equations can be linearized around their steady state
values and the resulting equations allow for damped modes only. When changing
an external parameter, we eventually reach a marginal situation with one or several
modes becoming unstable. We now expand the nonlinear equations around the
marginal point with respect to powers of ε, where ε² denotes the deviation of the
actual parameter from the parameter value at the marginal point. In the framework
of perturbation theory applied to the linearized equations, one readily establishes
that the complex frequency depends on ε². This leads to the idea of scaling the time
with ε². On the other hand, if the linearized equations contain spatial derivatives,
one may show rather simply that the corresponding changes are such that the space
coordinate r goes with ε. In the following we shall therefore use these two scalings
and expand the solutions of the nonlinear equations into a superposition of solu-
tions of the linearized equations at the marginal point. Including terms up to third
order in ε, we then find a self-consistent equation for the amplitude of the marginal
solution. The resulting equation is strongly reminiscent of time-dependent Ginzburg-
Landau equations with fluctuating forces (compare Sects. 7.6, 7.7, 7.8). As we
treat essentially the same system as in Section 7.6, we start with (7.62), which we
write as

rq = g(q) + F(x, t) (7.1 11)


using the abbreviation

(7.112)

In a perturbation approach one could easily solve (7.111) by taking the inverse of Γ. However, some care must be exercised because the determinant may vanish in actual cases. We therefore write (7.111) in the form

det |Γ| q = Γ̃[g(q) + F(x, t)],   (7.113)

where the Γ̃_ik are the subdeterminants of order (N − 1) belonging to the element Γ_ki of the matrix (7.112).

a) The Homogeneous Problem


The linear homogeneous problem (7.111) defines a set of right-hand eigenvectors
(modes)
(7.114)

with K defined by (7.64). Because Γ is in general not self-adjoint, we have to define corresponding left-hand eigenvectors by means of

(7.115)

Note that u_n and v_n are time- and space-dependent functions. They may be represented by plane-wave solutions. The index n distinguishes both the k-vector and the corresponding eigenvalue of Γ: n = (k, m). The vectors (7.114) and (7.115) form a biorthogonal set with the property

(7.116)

where ( | ) denotes the scalar product.
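The biorthogonality property (7.116) is easy to check numerically: for a non-self-adjoint matrix, the right eigenvectors and the eigenvectors of the adjoint (the left eigenvectors) become mutually orthonormal after a suitable rescaling. A minimal sketch (the 3×3 matrix below is an arbitrary illustration, not taken from the text):

```python
import numpy as np

# An arbitrary real, non-symmetric matrix standing in for Gamma.
G = np.array([[1.0, 2.0, 0.0],
              [0.5, -1.0, 1.0],
              [0.0, 0.3, 2.0]])

evals, U = np.linalg.eig(G)            # right eigenvectors: G u_n = lambda_n u_n
evalsL, W = np.linalg.eig(G.conj().T)  # eigenvectors of the adjoint (left eigenvectors)

# Match each left eigenvector to the right eigenvector with the conjugate eigenvalue.
order = [int(np.argmin(np.abs(evalsL.conj() - lam))) for lam in evals]
V = W[:, order]

# Rescale so that <v_n | u_n> = 1; then <v_n | u_m> = delta_nm follows
# for nondegenerate eigenvalues.
d = np.sum(V.conj() * U, axis=0)
V = V / d.conj()

B = V.conj().T @ U                     # biorthogonality matrix
print(np.allclose(B, np.eye(3), atol=1e-8))
```

This mirrors the continuous case: left and right eigenvectors to different eigenvalues are automatically orthogonal, and only the normalization is at our disposal.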


b) Scaling

We assume that one or several external parameters are changed so that one goes beyond the marginal point. The size of this change is assumed to be proportional to ε², which is a smallness parameter for our following purposes. In general the change of an external parameter causes a change of the coefficients of the diffusion matrix and of the matrix K:

D_i = D_i^(0) + D_i^(2) ε²,
K_ij = K_ij^(0) + K_ij^(2) ε².   (7.117)

Correspondingly we expand the solution q into a power series of ε:

q = ε q^(1) + ε² q^(2) + … = Σ_{m=1}^∞ ε^m q^(m).   (7.118)


To take into account finite band width effects, space and time are scaled simul-
taneously by

(7.119)

c) Perturbation Theory
Inserting the expansions (7.117), (7.118) and the scaling laws (7.119) in (7.113) we obtain

(Σ_{l=0}^∞ ε^l det |Γ|^(l)) · (Σ_{n=1}^∞ ε^n q^(n))
  = (Σ_{l=0}^∞ ε^l Γ̃^(l)) (Σ_{m=1}^∞ ε^m g^(m) + Σ_{m=1}^∞ ε^m F^(m)(x, t)).   (7.120)

In order ε¹ we come back to the homogeneous problem (7.114). In the general order ε^l, (7.120) reduces to

(7.121)

We extract q^(l) from the lhs and bring the rest to the rhs. This yields

det |Γ|^(0) q^(l) = − Σ_{r=1}^{l} det |Γ|^(r) q^(l−r) + Σ_{r=0}^{l−1} Γ̃^(r)[g^(l−r)(q) + F^(l−r)(x, t)].   (7.122)

We will explicitly treat two different situations which may arise at the critical point. These situations are distinguished by a different behavior of the complex frequencies λ:
1) the soft-mode case, in which for one and only one mode λ₁ = 0;
2) the hard-mode case, in which a pair of complex conjugate modes becomes unstable, so that Re λ₁ = 0 but Im λ₁ ≠ 0.
To write the perturbation theory in the same form for both cases, we introduce
the concept of modified Green's functions. To this end we rewrite the homogeneous
problem (7.114)

(7.123)

We now define Q by

(7.124)

where I is the unity matrix. The Green's function belonging to (7.124) reads

(7.125)

233
222 7. Self-Organization

In connection with the modified Green's function we have the orthogonality condition

(7.126)
The concept of modified Green's functions is indispensable for treating the soft-mode case λ₁ = 0, where the index 1 means (k = k_c, m = 1). By means of the Green's function the solution can be written in the form

(7.127)
Obviously the same procedure is applicable to the left-hand eigenvectors. If the critical mode is degenerate, it is straightforward to define an appropriate modified Green's function. Instead of (7.126), one now gets a set of h orthogonality conditions (h is the degree of degeneracy). Each one of these conditions must be fulfilled separately.

7.10* Soft-Mode Instability


We assume that the time and space dependence of u_n can be represented by plane waves

(7.128)

As mentioned before, the critical point is fixed by

(7.129)

From (7.129) we get

(7.130)

Equation (7.130) is a condition for the coefficients D_i, K_ik as functions of k² at the marginal point. k_c is fixed by the condition

(7.131)

which has the consequence

(7.132)

From (7.132) we get

Σ_i D_i^(0) Γ̃_ii^(0)(λ₁ = 0, k_c²) = tr (D Γ̃(λ₁ = 0, k_c²)) = 0,   (7.133)


where tr means "trace". The quantities Γ̃_ii are the adjoint subdeterminants of order (N − 1) to the elements Γ_ii of the matrix Γ. Now one may construct the right-hand and left-hand eigenvectors of the critical mode. We represent the right-hand eigenvector as

(7.134)

a_i(k_c, 1) are the components of the right-hand eigenvector to the eigenvalue λ₁ = 0 in the vector space (q₁, q₂, …, q_N). Here it is assumed that there exists only one eigenvector to λ₁. (In other words, the problem is nondegenerate.) The left-hand eigenvector is given by

(7.135)

The scalar product between left-hand and right-hand eigenvectors is defined as


follows:

(1/L) ∫_{−L/2}^{+L/2} dx e^{i(k−k')x} Σ_{n=1}^{N} ā_n(k, l) a_n(k', j) = δ_kk' δ_jl.   (7.136)

Because the critical mode dominates all other modes at the marginal point, the solution of (7.122) in order ε¹ is given by

(7.137)

The common factor ξ(R, T) is written as a slowly varying amplitude in space and time. In order ε² we find the following equation
(7.138)

Because det |Γ|^(1) contains differential operators, one may readily show that on account of (7.133)

(7.139)

To write (7.138) more explicitly, we represent g(2)(q) as

(7.140)

(summation is taken over dummy indices). q may be approximated by the solution q^(1) of (7.137). The solvability condition (7.126) is automatically fulfilled.


Therefore the solution q^(2) reads

q_i^(2) = ξ² Σ_m (1/λ_m(2k_c)) ā_j(2k_c, m) a_i(2k_c, m) g^(2)_jkl a_k(k_c, 1) a_l(k_c, 1) e^{−2ik_c x}
  + |ξ|² Σ_m (1/λ_m(0)) ā_j(0, m) a_i(0, m) g^(2)_jkl a_k*(k_c, 1) a_l(k_c, 1).   (7.141)

We finally discuss the balance equation of third order. The corresponding solvability condition

(v_{1,k_c} | det |Γ|^(2) q^(1) + det |Γ|^(1) q^(2) + det |Γ|^(0) q^(3))
  = (v_{1,k_c} | Γ̃^(0)[g^(3)(q) + F(x, t)] + Γ̃^(1) g^(2)(q) + Γ̃^(0) g'^(2) ∇_R)¹   (7.142)

reduces on account of

(7.143)

and

(7.144)

to

(7.145)

To evaluate (7.145) we calculate the single terms separately. For det |Γ|^(2) we find

(7.146)

Γ̃_{ii,jj} are the subdeterminants of order (N − 2) to Γ. Because of (7.146) the structure of the lhs of (7.145) is given by

(7.147)

with the coefficients

A = Σ_i Γ̃_ii^(0)(λ₁, k_c),   (7.148a)
B = Σ_i D_i^(0) Γ̃_ii^(0)(λ₁, k_c) + 2 Σ_{i,j} D_i^(0) D_j^(0) Γ̃_{ii,jj} k_c²,   (7.148b)
C = Σ_ij (−K_ij^(2) + D_i^(2) k_c² δ_ij) Γ̃_ij^(0)(λ₁, k_c).   (7.148c)

¹ We have split the original g^(2) into a ∇-independent part, g^(2), and a part with the scaled ∇, g'^(2).


We now evaluate the right-hand side of (7.145). Inserting the explicit forms of v₁, g^(2), and q^(2) we obtain

ā_m(k_c, 1){Γ̃_mj(λ₁, k_c)[g^(3)_jkl + g^(3)_jlk + g^(3)_klj] a_r*(k_c, 1) a_k(k_c, 1) a_l(k_c, 1)
  + Γ̃_mj(λ₁, k_c)(g^(2)_ijk + g^(2)_ikj)[a_k(k_c, 1) Σ_l (1/λ_l(0)) ā_j(0, l) a_j(0, l) g^(2)_pqr a_q*(k_c, 1) a_r(k_c, 1)
  + a_k*(k_c, 1) Σ_l (1/λ_l(2k_c)) ā_j(2k_c, l) a_j(2k_c, l) g^(2)_pqr a_q(k_c, 1) a_r(k_c, 1)]}
  × |ξ(R, T)|² ξ(R, T).   (7.149)

This can be written in the form

M |ξ(R, T)|² ξ(R, T).   (7.150)

Finally we discuss the fluctuating forces. We thereby assume that they are slowly varying in time and contribute in order ε³. For the fluctuating force we get

(v_{1,k_c} | Γ̃^(0) F(x, t)) = F(R, T).   (7.151)

F(R, T) can be written as

(7.l52)

with

F (R, T) = eikcXF(R, t) (7.153)

and

(7.154)

Again the fluctuating forces are Gaussian. Combining the expressions (7.147), (7.150), (7.151), we can write the final equation for the slowly varying amplitude ξ(R, T), which reads

A ∂ξ(R, T)/∂T − B ∇²_R ξ(R, T) = (−C + M |ξ(R, T)|²) ξ(R, T) + F(R, T)   (7.155)

with A, B, C defined in (7.148). Equation (7.155) is analogous to the time-dependent Ginzburg-Landau equation in superconductivity and the equation for the continuous-mode laser. The stochastically equivalent Fokker-Planck equation has a potential solution which is formally the same as the Ginzburg-Landau free energy (cf. Sect. 6.8).
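A rough numerical feel for an amplitude equation of the structure (7.155) can be obtained by integrating it on a periodic 1D grid with an Euler-Maruyama scheme. All coefficients and the noise strength below are illustrative assumptions, with C < 0 placing the system above the marginal point:

```python
import numpy as np

rng = np.random.default_rng(0)

# A dxi/dT - B d^2xi/dR^2 = (-C + M |xi|^2) xi + F,  periodic 1D grid
A, B, C, M = 1.0, 1.0, -0.5, -1.0   # illustrative; -C > 0 means net gain
Q = 1e-3                            # assumed noise strength
L, nx, dt = 20.0, 128, 0.01
dx = L / nx

xi = np.zeros(nx, dtype=complex)
for _ in range(20000):
    lap = (np.roll(xi, 1) - 2 * xi + np.roll(xi, -1)) / dx**2
    drift = (B * lap + (-C + M * np.abs(xi)**2) * xi) / A
    noise = (np.sqrt(Q * dt / dx) / A) * (rng.standard_normal(nx)
                                          + 1j * rng.standard_normal(nx))
    xi += drift * dt + noise

# The amplitude saturates near |xi|^2 = C/M = 0.5, plus small fluctuations
print(np.mean(np.abs(xi)**2))
```

The fluctuations seed the instability, the cubic term saturates it, and the mean intensity settles near the deterministic fixed point C/M — the same balance that governs the laser equations of Chapter 8.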


7.11* Hard-Mode Instability


In this section we assume that the instability occurs at k_c = 0, but that the eigenvalue is complex, allowing for two conjugate solutions. Basically the procedure is analogous to that of Section 7.10, so that we mainly exhibit the differences. When looking at the homogeneous problem, we may again define a biorthogonal set of eigenvectors. However, we now have to define a scalar product with respect to time by

(v_{n,p} | u_{n',p'}) = (1/T) ∫_{−T/2}^{+T/2} e^{−iω₀(p−p')t} Σ_{j=1}^{N} ā_j(p, n) a_j(p', n') dt = δ_nn' δ_pp',   (7.156)

where

T = 2π/ω₀, and p and p' are integers.   (7.157)

We assume that Im λ₁ ≠ 0 is practically fixed, whereas the real part of λ₁ goes with ε², which again serves as small expansion parameter. While the ε¹ balance had defined the homogeneous problem, the ε² balance yields the equation

(7.158)

On account of

(7.159)

(7.158) has the same structure as (7.138) of the preceding paragraph. Thus the
solvability condition is fulfilled. Taking the Green's function

G = Σ_{n,p} (1/(λ_n − ipω₀)) e^{ipω₀(t−t')} |u_{n,p})(v_{n,p}|,   (7.160)

the solution of the corresponding equation reads

q_i^(2) = ξ² Σ_n (1/(λ_n − 2iω₀)) ā_j(2, n) a_i(2, n) g^(2)_jkl a_k(1, 1) a_l(1, 1) e^{2iω₀t}
  + |ξ|² Σ_n (1/λ_n) ā_j(0, n) a_i(0, n) g^(2)_jkl a_k*(1, 1) a_l(1, 1).   (7.161)

Inserting (7.161) into the ε³ balance equation leads on the rhs to terms of the form

M |ξ|² ξ(R, T) + F(R, T),   (7.162)

where F(R, T) is the projection of the fluctuations on the unstable modes. On the


lhs we obtain expressions from det |Γ|^(2). These terms read

(7.163)

where Γ̃_ij is the subdeterminant belonging to the element Γ_ji. As a result we find expressions having exactly the same structure as (7.147). The only difference rests in the complex coefficients, which are now given by

A = tr Γ̃^(0)(ω₀),   (7.164a)
B = tr (D^(0) Γ̃^(0)(ω₀)),   (7.164b)
C = −Σ_ij K_ij^(2) Γ̃_ij^(0)(ω₀).   (7.164c)

M has the same structure as in the soft-mode case:

M = ā_m(1, 1){Γ̃^(0)_mj(ω₀)[g^(3)_jkl + g^(3)_jlk + g^(3)_klj]
  + Γ̃^(0)_mj(ω₀)[g^(2)_jkl + g^(2)_jlk][a_k(1, 1) Σ_n (1/λ_n) ā_j(0, n) a_l(0, n) g^(2)_pqr a_q*(1, 1) a_r(1, 1)
  + a_k*(1, 1) Σ_n (1/(λ_n − 2iω₀)) ā_j(2, n) a_l(2, n) g^(2)_pqr a_q(1, 1) a_r(1, 1)]}.   (7.165)

8. Physical Systems

8.1 Cooperative Effects in the Laser: Self-Organization and Phase Transition

The laser is nowadays one of the best understood many-body problems. It is a system far from thermal equilibrium and it allows us to study cooperative effects in great detail. We take as an example the solid-state laser which consists of a set of
in great detail. We take as an example the solid-state laser which consists of a set of
laser-active atoms embedded in a solid state matrix (cf. Fig. 1.9). As usual, we assume
that the laser end faces act as mirrors serving two purposes: They select modes in
axial direction and with discrete cavity frequencies. In our model we shall treat
atoms with two energy levels. In thermal equilibrium the levels are occupied ac-
cording to the Boltzmann distribution function. By exciting the atoms, we create
an inverted population which may be described by a negative temperature. The
excited atoms now start to emit light, which is eventually absorbed by the surroundings, whose temperature is much smaller than ħω/k_B (where ω is the light frequency of the atomic transition and k_B is Boltzmann's constant), so that we may put this temperature ≈ 0. From a thermodynamic point of view the laser is a system (composed of the atoms and the field) which is coupled to reservoirs at different temperatures. Thus the laser is a system far from thermal equilibrium.
The essential feature to be understood in the laser is this. If the laser atoms are
pumped (excited) only weakly by external sources, the laser acts as an ordinary
lamp. The atoms independently of each other emit wavetracks with random
phases. The coherence time, of about 10⁻¹¹ s, is evidently on a microscopic scale.
The atoms, visualized as oscillating dipoles, are oscillating completely at random.
If the pump is further increased, suddenly within a very sharp transition region the
linewidth of the laser light may become of the order of one cycle per second so that
the phase of the field remains unchanged on the macroscopic scale of 1 sec. Thus
the laser is evidently in a new, highly ordered state on a macroscopic scale. The
atomic dipoles now oscillate in phase, though they are excited by the pump com-
pletely at random. Thus the atoms show the phenomenon of self-organization.
The extraordinary coherence of laser light is brought about by the cooperation
of the atomic dipoles. When studying the transition region lamp → laser, we shall find that the laser shows features of a second-order phase transition.

8.2 The Laser Equations in the Mode Picture

In the laser, the light field is produced by the excited atoms. We describe the field
by its electric field strength, E, which depends on space and time. We consider only
a single direction of polarization. We expand E = E(x, t) into cavity modes

E(x, t) = i Σ_λ {(2πħω_λ/V)^(1/2) exp(ik_λ x) b_λ − c.c.},   (8.1)

where we assume, for simplicity, running waves. λ is an index distinguishing the different modes, ω_λ is the mode frequency, V the volume of the cavity, k_λ the wave vector; b_λ and b_λ⁺ are time-dependent complex amplitudes. The factor (2πħω_λ/V)^(1/2) makes b_λ, b_λ⁺ dimensionless (its precise form stems from quantum theory). We distinguish the atoms by an index μ. Of course, it is necessary to treat the atoms by quantum theory. When we confine our treatment to two laser-active atomic energy levels, the treatment can be considerably simplified. As shown in laser theory, we may describe the physical properties of atom μ by its complex dipole moment α_μ and its inversion σ_μ. We use α_μ in dimensionless units. σ_μ is the difference of the occupation numbers N₂ and N₁ of the upper and lower atomic energy level.
As is also shown in laser theory with the help of "quantum-classical correspondence", the amplitudes b_λ, the dipole moments α_μ and the inversions σ_μ can be treated as classical quantities, obeying the following sets of equations:

a) Field Equations

(8.2)

κ_λ is the decay constant of mode λ if left alone in the cavity without laser action. κ_λ takes into account field losses due to semi-transparent mirrors, to scattering centers, etc. g_μλ is a coupling constant describing the interaction between mode λ and atom μ; g_μλ is proportional to the atomic dipole matrix element. F_λ is a stochastic force which occurs necessarily due to the unavoidable fluctuations when dissipation is present. Eq. (8.2) describes the temporal change of the mode amplitude b_λ due to different causes: the free oscillation of the field in the cavity (∼ω_λ), the damping (−κ_λ), the generation by oscillating dipole moments (−i g_μλ α_μ), and fluctuations, e.g., in the mirrors (∼F_λ). On the other hand, the field modes influence the atoms. This is described by the

b) Matter Equations
1) Equations for the Atomic Dipole Moments

(8.3)

ν is the central frequency of the atom, γ its linewidth caused by the decay of the


atomic dipole moment. Γ_μ(t) is the fluctuating force connected with the damping constant γ. According to (8.3), α_μ changes due to the free oscillation of the atomic dipole moment (−iν), due to its damping (−γ) and due to the field amplitudes (∼b_λ). The factor σ_μ serves to establish the correct phase relation between field and dipole moment, depending on whether light is absorbed (σ_μ ≡ (N₂ − N₁)_μ < 0) or emitted (σ_μ > 0). Finally, the inversion also changes when light is emitted or absorbed.

2) Equation for the Atomic Inversion

(8.4)

d₀ is an equilibrium inversion which is caused by the pumping process and incoherent decay processes if no laser action takes place; γ_∥ is the relaxation constant with which the inversion comes to equilibrium. In (8.3) and (8.4) the Γ's are again fluctuating forces.
Let us first consider the character of the equations (8.2) to (8.4) from a mathematical viewpoint. They are coupled, first-order differential equations for many variables. Even if we confine ourselves to modes within an atomic linewidth, there may be between dozens and thousands of modes. Furthermore there are typically 10¹⁴ laser atoms or still many more, so that the number of variables of the system (8.2) to (8.4) is enormous. Furthermore the system is nonlinear because of the terms bσ in (8.3) and αb⁺, α⁺b in (8.4). We shall see in a moment that these nonlinearities play a crucial role and must not be neglected. Last but not least, the equations contain stochastic forces. Therefore, at first sight, the solution of our problem seems rather hopeless. We want to show, however, that by the concepts and methods of Chapter 7 the solution is rather simple.

8.3 The Order Parameter Concept

A discussion of the physical content of (8.2) to (8.4) will help us to cut the problem down and solve it completely. Eq. (8.2) describes the temporal change of the mode amplitude under two forces: a driving force stemming from the oscillating dipole moments (α_μ), quite analogous to the classical theory of the Hertzian dipole, and a stochastic force F. Eqs. (8.3) and (8.4) describe the reaction of the field on the atoms. Let us first assume that in (8.3) the inversion σ_μ is kept constant. Then b acts as a driving force on the dipole moment. If the driving force has the correct phase and is near resonance, we expect a feedback between the field and the atoms, or, in other words, we obtain stimulated emission. This stimulation process has two opponents. On the one hand the damping constants κ and γ will tend to drive the field to zero and, furthermore, the fluctuating forces will disturb the total emission process by their stochastic action. Thus we expect a damped oscillation. As we shall see more explicitly below, if we increase σ_μ, suddenly the system becomes unstable with exponential growth of the field and correspondingly of the dipole moments. Usually it is just a single field mode that first becomes undamped or, in other words, that

becomes unstable. In this instability region its internal relaxation time is apparently very long. This makes us anticipate that the mode amplitudes which virtually become undamped may serve as the order parameters. These slowly varying amplitudes now slave the atomic system. The atoms have to obey the orders of the order parameters as described by the rhs of (8.3) and (8.4). If the atoms follow the orders of the order parameter immediately, we may eliminate the "atomic" variables α⁺, α, σ adiabatically, obtaining equations for the order parameters b_λ alone. These equations describe most explicitly the competition of the order parameters among each other. The atoms will then obey that order parameter which wins the competition. In order to learn more about this mechanism, we anticipate that one b_λ has won the competition and we first confine our analysis to this single-mode case.

8.4 The Single-Mode Laser

We drop the mode index λ in (8.2) to (8.4), assume exact resonance, ω = ν, and eliminate the main time dependence by the substitutions

(8.5)

where we finally drop the tilde. The equations we consider are then

ḃ = −κb − i Σ_μ g_μ α_μ + F(t),   (8.6)

α̇_μ = −γα_μ + i g_μ* b σ_μ + Γ_μ(t),   (8.7)

σ̇_μ = γ_∥(d₀ − σ_μ) + 2i(g_μ α_μ b⁺ − c.c.) + Γ_σμ(t).   (8.8)

We note that for running waves (in a single direction) the coupling coefficients g_μ have the form

(8.9)

g is assumed real. Note that the field-mode amplitude b is supported via a sum of dipole moments
dipole moments

(8.10)

We first determine the oscillating dipole moment from (8.7), which yields in an elementary way

α_μ = i g_μ* ∫_{−∞}^{t} e^{−γ(t−τ)} (b σ_μ)(τ) dτ + Γ̃_μ(t)   (8.11)

with

(8.12)


We now make a very important assumption which is quite typical for many cooperative systems (compare Section 7.2). We assume that the relaxation time of the atomic dipole moment α is much smaller than the relaxation times inherent in the order parameter b as well as in σ_μ. This allows us to take b·σ_μ out of the integral in (8.11). By this adiabatic approximation we obtain

(8.13)

(8.13) tells us that the atoms obey instantaneously the order parameter. Inserting
(8.13) into (8.6) yields

(8.14)

where F̃ is now composed of the field noise source, F, and the atomic noise sources, Γ,

(8.15)
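The content of the adiabatic approximation can be illustrated on a caricature of (8.7): a strongly damped variable α driven by a slowly varying b(t) follows its slaved value α = g b(t)/γ up to corrections of order 1/γ². All numbers here are illustrative assumptions:

```python
import numpy as np

gamma, g = 100.0, 1.0          # fast damping vs. O(1) drive (assumed values)
b = lambda t: np.sin(t)        # slowly varying "order parameter"

dt, T = 1e-4, 10.0
alpha, t = 0.0, 0.0
errs = []
while t < T:
    alpha += dt * (-gamma * alpha + g * b(t))   # full fast equation
    t += dt
    if t > 1.0:                                 # discard the initial transient
        errs.append(abs(alpha - g * b(t) / gamma))  # deviation from slaved value

# The fast variable tracks the adiabatic (slaved) value to O(1/gamma^2)
print(max(errs))
```

After a transient of duration ~1/γ, the fast variable has no dynamics of its own: it is slaved by b(t), which is exactly what licenses replacing the integral in (8.11) by its instantaneous value.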

In order to eliminate the dipole moments completely, we insert (8.13) into (8.8).
A rather detailed analysis shows that one may safely neglect the fluctuating forces.
We therefore obtain immediately

(8.16)

We now again assume that the atom obeys the field instantaneously, i.e., we put

(8.17)

so that the solution of (8.16) reads

(8.18)

Because we shall later be mainly interested in the threshold region, where the characteristic laser features emerge, and in that region b⁺b is still a small quantity, we replace (8.18) by the expansion

(8.19)

As we shall see immediately, laser action will start at a certain value of the inversion d₀. Because in this case b⁺b is a small quantity, we may replace d₀ by d_c in the second term of (8.19) to the same order of approximation. We introduce the total inversion

(8.20)


and correspondingly (N = number of laser atoms)

(8.21)

Inserting (8.19) into (8.14) we obtain (with N d_c = κγ/g²)

ḃ = (−κ + (g²/γ) D₀) b − (4g²κ/(γγ_∥)) b⁺bb + F̃(t).   (8.22)
If we treat b for the time being as a real quantity, q, (8.22) is evidently identical with the overdamped anharmonic oscillator discussed in Sections 5.1 and 6.4,
where we may identify

(8.22a)

(compare (6.118)).
Thus we may apply the results of that discussion in particular to the critical
region, where the parameter rx changes its sign. We find that the concepts of sym-
metry breaking instability, soft mode, critical fluctuations, critical slowing down,
are immediately applicable to the single mode laser and reveal a pronounced
analogy between the laser threshold and a (second order) phase transition. (cf.
Sect. 6.7). While we may use the results and concepts exhibited in Sections 5.1, 6.4,
6.7, we may also interpret (8.22) in terms of laser theory. If the inversion D₀ is small enough, the coefficient of the linear term of (8.22) is negative. We may safely neglect the nonlinearity, and the field is merely supported by stochastic processes (spontaneous emission noise). Because F is (approximately) Gaussian, b is also Gaussian distributed (for the definition of a Gaussian process see Sect. 4.4). The inverse of the relaxation time of the field amplitude b may be interpreted as the optical linewidth. With increasing inversion D₀ the system becomes more and more undamped. Consequently the optical linewidth decreases, which is a well-observed phenomenon in laser experiments. When α (8.22a) passes through zero, b acquires a new equilibrium position with a stable amplitude. Because b is now to be interpreted as a field amplitude, this means that the laser light is completely coherent. This coherence is only disturbed by small superimposed amplitude fluctuations caused by F and by very small phase fluctuations.
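These statements can be checked by integrating a real-amplitude version of (8.22), ḃ = αb − βb³ + F(t), with Gaussian noise below and above threshold. All numerical values are illustrative assumptions:

```python
import numpy as np

def simulate(alpha, beta=1.0, Q=0.01, dt=1e-3, nsteps=100000, seed=1):
    """Euler-Maruyama for  db = (alpha*b - beta*b**3) dt + sqrt(Q*dt) dW."""
    rng = np.random.default_rng(seed)
    b, samples = 0.0, []
    for i in range(nsteps):
        b += dt * (alpha * b - beta * b**3) + np.sqrt(Q * dt) * rng.standard_normal()
        if i > nsteps // 2:
            samples.append(b)
    return np.asarray(samples)

below = simulate(alpha=-1.0)   # "lamp": fluctuations around b = 0
above = simulate(alpha=+1.0)   # "laser": stable amplitude near sqrt(alpha/beta) = 1

print(np.mean(below**2), np.mean(above**2))
```

Below threshold the intensity is small and purely noise-supported (of order Q/2|α|); above threshold the amplitude fluctuates around the deterministic value √(α/β), the laser's stable, coherent operating point.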
If we consider (8.22) as an equation for a complex quantity b, we may derive the right-hand side from the potential

(8.23)

By methods described in Sections 6.3 and 6.4, the Fokker-Planck equation can be established and readily solved, yielding

f(b) = 𝒩 exp(−2V(|b|)/Q),   (8.24)


[Fig. 8.1. The stationary distribution as a function of the normalized "intensity" n. (After H. Risken: Z. Physik 186, 85 (1965))]

where Q (compare (6.91)) measures the strength of the fluctuating force. The function (8.24) (cf. Fig. 8.1) describes the photon distribution of laser light, and has been checked experimentally with great accuracy. So far we have seen that by the adiabatic principle the atoms are forced to obey the order parameter immediately. We must now discuss in detail why only one order parameter is dominant. If all order parameters occurred simultaneously, the system could still be completely random.
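In Risken's scaled units the stationary distribution (8.24) takes the form f(n) ∝ exp(an − n²/2) for the normalized intensity n = |b|² ≥ 0, where a is the pump parameter (negative below, positive above threshold). A short script evaluating it (the grid and the quoted values of a are illustrative):

```python
import numpy as np

def intensity_dist(a, n_max=25.0, num=5001):
    """Stationary distribution f(n) ~ exp(a*n - n**2/2) of the normalized
    intensity n = |b|^2 (scaled units); a is the pump parameter."""
    n = np.linspace(0.0, n_max, num)
    f = np.exp(a * n - n**2 / 2)
    f /= f.sum() * (n[1] - n[0])          # normalize to unit area
    return n, f

for a in (-2.0, 0.0, 6.0):
    n, f = intensity_dist(a)
    mean_n = (n * f).sum() * (n[1] - n[0])
    print(f"a = {a:+.0f}: <n> = {mean_n:.2f}, peak at n = {n[np.argmax(f)]:.2f}")
```

Well below threshold the distribution peaks at n = 0 (thermal-like light); well above threshold it becomes a narrow peak centered near n = a, i.e., a stabilized intensity with small fluctuations, as in Fig. 8.1.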

Exercise on 8.4

Verify that

ḟ = [−(∂/∂b)(−αb − β|b|²b) + c.c. + Q ∂²/(∂b ∂b*)] f

is the Fokker-Planck equation belonging to (8.22).

8.5 The Multimode Laser


We now repeat the steps done before for the multimode case. We anticipate that the field mode with amplitude b_λ may be decomposed into a rapidly oscillating part with frequency Ω_λ and a slowly varying amplitude B_λ:

(8.25)


Inserting (8.25) into (8.3) yields after integration

α_μ = i Σ_λ g_μλ* ∫_{−∞}^{t} e^{(−iν−γ)(t−τ)} (b_λ σ_μ)(τ) dτ + Γ̃_μ,   (8.26)

or, if again the adiabatic approximation is made

(8.27)

We insert (8.27) into (8.2) and use the abbreviation

(8.28)

We thus obtain

(8.29)

We now consider explicitly the case in which we have a discrete spectrum of


modes and we assume further that we may average over the different mode phases,
which in many cases is quite a good approximation. (It is also possible, however, to treat phase locking, which is of practical importance for the generation of ultrashort pulses.) Multiplying (8.29) by b_λ⁺ and taking the phase average, we have

(8.30)

where n_λ is the number of photons of mode λ. If we neglect for the time being the fluctuating forces in (8.29), we thus obtain

(8.31)

with

W_λ = 2γg²/((Ω_λ − ν)² + γ²),   (8.32)
|g_μλ|² = g².   (8.33)

In the same approximation we find

(8.34)

or, after solving (8.34) again adiabatically

(8.35)

where D_c is the critical inversion of all atoms at threshold. To show that (8.31)–


(8.35) lead to the selection of modes (or order parameters), consider as example
just two modes treated in the exercise of Section 5.4. This analysis can be done
quite rigorously also for many modes and shows that in the laser system only a
single mode with smallest losses and closest to resonance survives. All the others
die out. It is worth mentioning that equations of the type (8.31) to (8.35) have been
proposed more recently in order to develop a mathematical model for evolution.
We will come back to this point in Section 10.3.
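The selection can be made explicit with a two-mode caricature of (8.31)–(8.35): both photon numbers grow from noise, but they share one saturating inversion, and only the mode with the larger net gain survives. The saturation form D = D₀ − c(n₁ + n₂) and all numbers are illustrative assumptions:

```python
# dn_i/dt = (W_i * D - 2*kappa_i) * n_i,   D = D0 - c*(n1 + n2)
W1, W2 = 1.0, 0.9          # mode 1 is closer to resonance (larger gain)
kappa1 = kappa2 = 0.5      # equal losses
D0, c = 2.0, 0.1
n1 = n2 = 1e-6             # both modes seeded by spontaneous emission
dt = 1e-3

for _ in range(200000):
    D = D0 - c * (n1 + n2)
    n1 += dt * (W1 * D - 2 * kappa1) * n1
    n2 += dt * (W2 * D - 2 * kappa2) * n2

print(n1, n2)   # mode 1 saturates at a finite value, mode 2 dies out
```

The winning mode clamps the inversion at D = 2κ₁/W₁; at that value the net gain of the second mode is negative, so it decays exponentially — the "winner takes all" competition described in the text, formally identical to the selection equations of evolution theory mentioned above.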
As we have seen in Section 6.4, it is most desirable to establish the Fokker-
Planck equation and its stationary solution because it gives us the overall picture of
global and local stability and the size of fluctuations. The solution of the Fokker-
Planck equation, which belongs to (8.29), with (8.35) can be found by the methods
of Section 6.4 and reads

(8.36)

where

(8.37)

The local minima of Φ describe stable or metastable states. This allows us to study multimode configurations if some modes are degenerate.

8.6 Laser with Continuously Many Modes. Analogy with Superconductivity

The next example, which is slightly more involved, will allow us to make contact with the Ginzburg-Landau theory of superconductivity. Here we assume a continuum of modes all running in one direction. Similar to the case just considered, we expect that only modes near resonance will have a chance to participate in laser action; but because the modes are now continuously spaced, we must take into consideration a whole set of modes in the vicinity of resonance. Therefore we expect (which must be proven in a self-consistent way) that only modes with

|Ω_λ − ν| ≪ γ   (8.38)

and

(8.39)

are important near laser threshold. Inserting (8.27) into (8.4), we obtain

(8.40)


which under the just-mentioned simplifications reduces to

σ_μ ≈ d₀ − (2d_c/(γγ_∥)) Σ_{λλ'} (g_μλ g_μλ'* b_λ⁺ b_λ' + c.c.).   (8.41)

Inserting this into (8.29) yields

(8.42)

Using the form (8.9), one readily establishes

(8.43)

where N is the number of laser atoms. Note that we have again assumed (8.38) in the nonlinear part of (8.42). If

(8.44)

possesses no dispersion, i.e., Ω_λ ∝ k_λ, the following exact solution of the corresponding Fokker-Planck equation holds:

f(b) = 𝒩 exp(2Φ/Q),   (8.45)

where

(8.46)

We do not continue the discussion of this problem here in the mode picture but
rather establish the announced analogy with the Ginzburg-Landau theory. To this
end we assume

ω_λ = c|k_λ|,   (8.47)
Ω_λ = v|k_λ|,   (8.48)
κ_λ = κ.   (8.49)

Confining ourselves again to modes close to resonance, we use the expansion

(8.50)


We now replace the index λ by the wave number k and form the wave packet

Ψ(x, t) = ∫_{−∞}^{+∞} B_k e^{+ikx − iv|k|t} dk.   (8.51)

The Fourier transformation of (8.42) is straightforward, and we obtain

Ψ̇(x, t) = −αΨ(x, t) + c((1/(iv)) d/dx + v)² Ψ(x, t) − 2β|Ψ(x, t)|² Ψ(x, t) + F(x, t),   (8.52)

where in particular the coefficient α is given by

(8.53)

Eq. (8.52) is identical with the equation of the electron-pair wave function of the
Ginzburg-Landau theory of superconductivity for the one-dimensional case if the
following identifications are made:

Table 8.1

Superconductor                               Laser
Ψ, pair wave function                        Ψ, electric field strength
α ∝ T − T_c                                  α ∝ D_c − D
T, temperature                               D, total inversion
T_c, critical temperature                    D_c, critical inversion
v ∝ A_x-component of the vector potential    v, atomic frequency
F(x, t), thermal fluctuations                F(x, t), fluctuations caused by
                                             spontaneous emission, etc.

Note, however, that our equation holds for systems far from thermal equilibrium
where the fluctuating forces, in particular, have quite a different meaning. We may
again establish the Fokker-Planck equation and translate the solution (8.45), (8.46) to the continuous case, which yields

(8.54)

with

(8.55)

Eq. (8.55) is identical with the expression for the distribution function of the Ginzburg-Landau theory of superconductivity if we identify (in addition to Table 8.1) Φ with the free energy 𝓕 and Q with 2k_B T. The analogy between systems away


from thermal equilibrium and in thermal equilibrium is so evident that it needs no further discussion. As a consequence, however, methods originally developed for one-dimensional superconductors are now applicable to lasers and vice versa.

8.7 First-Order Phase Transitions of the Single-Mode Laser


So far we have found that at threshold the single-mode laser undergoes a transition
which has features of a second-order phase transition. In this section we want to
show that by some other physical conditions the character of the phase transition
can be changed.
a) Single Mode Laser with an Injected External Signal
When we send a light wave onto a laser, the external field interacts directly only with
the atoms. Since the laser field is produced by the atoms, the injected field will
indirectly influence the laser field. We assume the external field in the form of a
plane wave

E = E₀ e^{iφ₀ + iω₀t − ik₀x} + c.c.,   (8.56)

where Eo is the real amplitude and qJo a fixed phase. The frequency 0)0 is assumed in
resonance with the atomic transition and also with the field mode. ko is the cor-
responding wave number. We assume that the field strength Eo is so weak that it
practically does not influence the atomic inversion. The only laser equation that
E enters into is the equation for the atomic dipole moments. Since here the external
field plays the same role as the laser field itself, we have to replace b by b + const· E
in (8.7). This amounts to making the replacement

(8.57)

where

(8.58)

and 9 is proportional to the atomic dipole moment transition matrix element.


In (8.57) we have taken into account only the resonant term. Since E_0 is anyway
only a higher-order correction term to the equations, we will in the following replace
σ by d_0.

It is now a simple matter to repeat all the iteration steps performed after (8.11).
We then see that the only effect of the change (8.57) is to make in (8.15) the following
replacements
F̃^+ → F̃^+ - i Σ_μ g_μ γ^{-1} d_0 E_0 e^{-iφ_0}. (8.59)

Inserting (8.59) into (8.22) yields the basic equation

(8.60)


where we have used the abbreviation

(8.61)

Again it is a simple matter to write the rhs of (8.60) and of its complex conjugate
as derivatives of a potential, where we still add the fluctuating forces

db^+/dt = -∂V/∂b + F̃^+, (8.62)

db/dt = -∂V/∂b^+ + F̃. (8.63)

Writing b in the form

(8.64)

the potential reads

(8.65)

where V_0 is identical with the rotationally symmetric potential of formula (8.23). The
additional cosine term destroys the rotational symmetry. If E_0 = 0, the potential is
rotationally symmetric and the phase can diffuse in an undamped manner, giving rise
to a finite linewidth of the laser mode. This phase diffusion is no longer possible if
E_0 ≠ 0, since the potential, considered as a function of φ, now has a minimum at

φ = φ_0 + π. (8.66)

The corresponding potential is exhibited in Fig. 8.2, which clearly shows a pinning

Fig. 8.2. The rotational symmetry of the potential V(r,φ) is broken due to an injected signal. (After W. W. Chow, M. O. Scully, E. W. van Stryland: Opt. Commun. 15, 6 (1975))


of φ. Of course, there are still fluctuating forces which let φ fluctuate around the
value (8.66).
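The phase pinning expressed by (8.66) is easy to verify numerically. The sketch below is illustrative only: it takes the angular part of the potential (8.65) as c·cos(φ - φ_0) with an arbitrary positive constant c and locates its minimum by a simple scan.

```python
import math

def locked_phase(phi0, c=1.0, n=100000):
    """Scan V(phi) = c*cos(phi - phi0) (c > 0, the angular part of the
    potential with an injected signal) and return the minimizing phase."""
    best_phi, best_v = 0.0, float("inf")
    for i in range(n):
        phi = 2 * math.pi * i / n
        v = c * math.cos(phi - phi0)
        if v < best_v:
            best_phi, best_v = phi, v
    return best_phi

# The minimum lies at phi = phi_0 + pi, in agreement with (8.66).
phi0 = 0.7
assert abs(locked_phase(phi0) - (phi0 + math.pi)) < 1e-3
```

Any residual phase fluctuations then occur around this pinned value rather than diffusing freely.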

b) Single Mode Laser in the Presence of a Saturable Absorber


Here we treat an experimental setup in which a so-called saturable absorber is
inserted between the laser material and one of the mirrors (compare Fig. 1.9). The
saturable absorber is a material having the following property: if it is irradiated by
light of weak intensity, the saturable absorber absorbs light; at high light intensities,
on the other hand, it becomes transparent. To take into account the effect of a
saturable absorber in the basic equations we have to allow for intensity-dependent
losses. This could be done by replacing κ in (8.14) by

(8.67)

While (8.67) is a reasonable approximation for not too high intensities |b|², at very
high fields (8.67) would lead to the wrong result that the loss becomes negative.
Indeed one may show that the loss constant should be replaced by

(8.68)

From this the meaning of I_s becomes clear: it is the field intensity at which the decrease
of the loss becomes less and less important. It is a simple matter also to treat the
variation of the inversion more exactly, which we now take from (8.18). Inserting
this into (8.13) and the resulting expression into (8.6) we obtain, using (8.68) as the
loss constant,

db^+/dt = -(κ_0 + κ_s/(1 + |b|²/I_s)) b^+ + (G/(1 + |b|²/I_as)) b^+ + F̃^+(t). (8.69)

Here we have used the abbreviations

(8.70)

and

(8.71)

Again (8.69) and its complex conjugate possess a potential, which can be found by
integration in the explicit form

-V(b) = -κ_0|b|² - I_s κ_s ln(1 + |b|²/I_s) + I_as G ln(1 + |b|²/I_as). (8.72)

For a discussion of (8.72) let us consider the special case in which I_s « I_as. This
allows us to expand the second logarithm for not too high |b|², but we must keep


the first logarithm in (8.72). The resulting potential curve of

(8.73)

Fig. 8.3. Change of the potential curve, plotted against |b|, for different pump parameters σ = d_0; the curves correspond to σ = σ_T, σ < σ_T, and σ « σ_T. (After J. W. Scott, M. Sargent III, C. D. Cantrell: Opt. Commun. 15, 13 (1975))

is exhibited in Fig. 8.3. With a change of the pump parameter G the potential is
deformed in a way similar to that of a first-order phase transition discussed in Section
6.7 (compare the corresponding curves there). In particular, a hysteresis effect
occurs.
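The hysteresis can be made concrete by counting the local minima of the potential (8.72) as the pump is varied. All numerical values below (κ_0, κ_s, I_s, I_as and the two pump values G) are illustrative choices, not parameters from the text; the point is only the change in the number of minima.

```python
def dV_dI(I, kappa0=1.0, kappa_s=5.0, I_s=0.1, G=4.0, I_as=10.0):
    """Derivative of the potential (8.72) with respect to I = |b|^2."""
    return kappa0 + kappa_s / (1 + I / I_s) - G / (1 + I / I_as)

def count_minima(G, I_max=200.0, n=200001):
    """Count local minima of V(I) on [0, I_max] from sign changes of dV/dI."""
    minima = 0
    prev = dV_dI(0.0, G=G)
    if prev > 0:              # V increases away from I = 0: boundary minimum
        minima += 1
    for i in range(1, n):
        I = I_max * i / n
        cur = dV_dI(I, G=G)
        if prev < 0 <= cur:   # derivative crosses from - to +: interior minimum
            minima += 1
        prev = cur
    return minima

assert count_minima(G=1.2) == 1   # weak pump: only the off state
assert count_minima(G=4.0) == 2   # strong pump: off and lasing states coexist
```

For the weak pump only the nonlasing state |b|² = 0 is a minimum; for the strong pump a second minimum at finite |b|² coexists with it, which is the origin of the hysteresis.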

8.8 Hierarchy of Laser Instabilities and Ultrashort Laser Pulses


We consider a laser with continuously many modes. We treat the electric field
strength E directly, i.e., without decomposing it into cavity modes. To achieve also
a "macroscopic" description of the laser material, we introduce the macroscopic
polarization P(x, t) and inversion density D(x, t) by

(8.74)

(e_21: atomic dipole matrix element) and

(8.75)

The laser equations then read

(8.76)

∂²P/∂t² + 2γ ∂P/∂t + ω_0² P = -(2ω_0|e_21|²/ħ) E D, (8.77)

∂D/∂t = γ_∥(D_0 - D) + (2/ħω_0) E ∂P/∂t. (8.78)

Eq. (8.76) follows directly from Maxwell's equation, while (8.77), (8.78) are


material equations described in laser theory using quantum mechanics. To show


some of the main features, we have dropped the fluctuation forces. We leave their
inclusion as an exercise to the reader. κ is the "cavity" loss, γ the atomic linewidth,
γ_∥^{-1} = T_1 the atomic inversion relaxation time, ω_0 the atomic transition frequency,
ħ Planck's constant. The meaning of these equations is the same as those discussed
on page 230. D_0 is the "unsaturated" inversion due to pump and relaxation
processes. We treat only one direction of field polarization and wave propagation in the
x-direction. Since we anticipate that the field oscillates essentially at the resonance
frequency ω_0, we put

E = e^{iω_0 t - ik_0 x} E^(-)(x, t) + c.c., (8.79)

where ω_0 = ck_0 and E^(+) = E^(-)*. An analogous decomposition of P is made.

E^(-)(x, t) and P^(-)(x, t) are treated as slowly varying functions of x and t, allowing
us to neglect, e.g., ∂²E^(-)/∂t² compared to ω_0 ∂E^(-)/∂t, etc. Thus we arrive at the following equations

∂E^(-)/∂t + c ∂E^(-)/∂x + κE^(-) = -2πiω_0 P^(-), (8.80)

∂P^(-)/∂t + γP^(-) = (i|e_21|²/ħ) E^(-) D, (8.81)

∂D/∂t = γ_∥(D_0 - D) + (2i/ħ)(E^(+)P^(-) - E^(-)P^(+)). (8.82)

These equations are equivalent to (8.2), (8.3), (8.4) provided we assume there
κ_λ = κ and a set of discrete running modes. We now start with a small inversion,
D_0, and investigate what happens to E, P, D when D_0 is increased.

1) D_0 is small, D_0 < κγ/g².


We readily find as solution

E = P = 0, D = D_0, (8.83)

i.e., no laser action occurs. To check the stability of the solution (8.83), we perform
a linear stability analysis putting

E = a e^{λt - ikx}, (8.84)

P = b e^{λt - ikx}, (8.85)

D - D_0 = c e^{λt - ikx}, (8.86)

which yields the following characteristic values

λ^(1) = -γ_∥, (8.87)

λ^(2,3) = (1/2){-γ - κ + ick ± √(...)}. (8.88)


where

(8.89)

At present, D_thr,1 is just an abbreviation:


(8.90)

A closer investigation of (8.88), (8.89) reveals that λ^(2) ≥ 0 for D_0 ≥ D_thr,1, and an
instability occurs, at first for k = 0.

2) D_thr,1 ≤ D_0 ≤ D_thr,2.


It is a simple matter to convince oneself that

E(x, t) = E_cw, P(x, t) = P_cw and D(x, t) = D_cw (8.91)

is now the stable solution. The rhs of (8.91) are space- and time-independent
constants obeying (8.80)-(8.82). This solution corresponds to that of Sections 8.4-6
in the laser domain if fluctuations are neglected. For what follows it is convenient
to normalize E, P, D with respect to (8.91), i.e., we put

E → E/E_cw, P → P/P_cw, D → D/D_thr (8.92)

and introduce as effective pump parameter

(8.93)

A detailed analysis reveals that E, P, D may be chosen real, so that (8.80)-(8.82)
acquire the form

(∂/∂t + γ) P = γED, (8.94)

(∂/∂t + γ_∥) D = γ_∥(A + 1) - γ_∥AEP, (8.95)

(∂/∂t + κ + c ∂/∂x) E = κP. (8.96)

The solutions E, P, D are, on account of the normalization,

E = P = D = 1. (8.97)
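As a quick consistency check, the spatially homogeneous versions of (8.94)-(8.96) (with the c ∂/∂x term dropped) can be integrated numerically; below the instability threshold the trajectory must relax to the cw state (8.97). The parameter values are illustrative, with γ_∥ = 0.5γ and κ = 0.1γ as in the numerical example quoted later, and a pump A chosen below threshold.

```python
# Illustrative parameters (gamma sets the unit of time); A is below threshold.
gamma, gamma_par, kappa, A = 1.0, 0.5, 0.1, 5.0

def rhs(state):
    E, P, D = state
    return (kappa * (P - E),                       # (8.96), homogeneous case
            gamma * (E * D - P),                   # (8.94)
            gamma_par * (A + 1 - D - A * E * P))   # (8.95)

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = rhs(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.2, 0.9, 1.1)            # perturbed away from the cw solution
for _ in range(50000):             # integrate up to t = 1000
    state = rk4_step(state, 0.02)

assert all(abs(s - 1.0) < 1e-6 for s in state)   # relaxes to E = P = D = 1
```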

We again perform a linear stability analysis by putting

(8.98)


where we abbreviate

(8.99)

Putting q(x, t) = q^(0) e^{λt - ikx/c} we obtain a characteristic equation of third order in
λ which allows for an unstable solution, i.e., Re λ > 0, provided

A > A_c ≡ 4 + 3ε + 2√(2(1 + ε)(2 + ε)) (8.100)

holds, where ε = γ_∥/γ.
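The threshold condition (8.100) is easy to evaluate. For ε = γ_∥/γ = 0.5, the ratio used in the numerical examples quoted later in this section, it gives A_c ≈ 10.98, so the pump values A = 11 and A = 11.5 appearing below lie just above threshold.

```python
import math

def A_c(eps):
    """Instability threshold (8.100), with eps = gamma_parallel / gamma."""
    return 4 + 3 * eps + 2 * math.sqrt(2 * (1 + eps) * (2 + eps))

assert 10.9 < A_c(0.5) < 11.0   # A_c ~ 10.98 for eps = 0.5
assert A_c(0.5) < 11.5          # A = 11.5 (Fig. 8.4a) lies above threshold
```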

3) D_0 ≥ D_thr,2.

At A = A_c, the real part of the "damping" constant λ_u (u = unstable) vanishes,
but λ possesses a nonvanishing imaginary part, determined by

(8.101)

Simultaneously, at A = A_c, the wavenumber k is also nonzero,

(8.102)

We expand q into a superposition of plane waves and eigenvectors of the linearized
equations, assuming a ring laser of length L with periodic boundary conditions,

q = Σ_{k,j} O^(j)(c ∂/∂x) ξ_{k,j} χ_k (8.103)

with

χ_k = (1/√L) e^{ikx/c}. (8.104)

The further steps are those of Sections 7.7 and 7.8. Resolving the equations for the
stable modes up to fourth order in the amplitude of the unstable mode ξ_{k_c,u} yields
the following equation:

(8.105)

Splitting ξ into a real amplitude and a phase factor

(8.106)


we obtain an equation for R

(∂/∂t + c ∂/∂x) R = βR + aR³ - bR⁵ ≡ -∂V/∂R (8.107)

with

a = Re ã > 0 or < 0, depending on the length L,

b = Re b̃ > 0, (8.108)

β > 0 for A > A_c,
β < 0 for A < A_c.


Introducing new variables

x̃ = x/c - t, t̃ = t (8.109)

we transform (8.107) into a first-order differential equation with respect to time,
where x̃ plays the role of a parameter which eventually drops out of our final
solution. Interpreting the rhs of (8.107) as the derivative of a potential function V
and the lhs as the time-derivative term of the overdamped motion of a particle, the
behavior of the amplitude can immediately be discussed by means of a potential
curve. We first discuss the case Re ã > 0. Because the shape of this curve is
qualitatively similar to Fig. 8.3, the reader is referred to that curve. For

d ≡ 4βb/a² < -1 (8.110)

there is only one minimum, R = 0, i.e., the continuous wave (cw) solution is stable.
For -1 < d < 0 the cw solution is still stable, but a new minimum of the potential
curve indicates that there is a second new state available by a hard excitation. If
finally d > 0, R = 0 becomes unstable (or, more precisely, we have a point of
marginal stability) and the new stable point lies at R = R_2. Thus the laser pulse
amplitude jumps discontinuously from R = 0 to R = R_2. The coefficients of (8.105)
are rather complicated expressions and are not exhibited here. Eq. (8.107) is solved
numerically, and the resulting R (and the corresponding phase) are then inserted
into (8.103). Thus the electric field strength E takes the form

E = (1 + E_0) + E_1 cos(ω_c τ + φ_1) + E_2 cos(2ω_c τ + φ_2) + E_3 cos(3ω_c τ + φ_3),
τ = t - k_c x/(ω_c c), (8.111)

where we have included terms up to the third harmonic. Fig. 8.4 exhibits the
corresponding curves for the electric field strength E, the polarization P and the
inversion D for A = A_c. These results may be compared with the numerical curves


Fig. 8.4. Electric field strength E, polarization P and inversion D for A = A_c. (After H. Haken, H. Ohno: Opt. Commun. 16, 205 (1976))

obtained by direct computer solutions of (8.94)-(8.96) for A = 11. Very good
agreement is found, for example, with respect to the electric field strength.
We now discuss the case Re ã < 0. The shapes of the corresponding potential
curves for β < 0 and β > 0 are qualitatively given by the solid lines of Figs. 6.6a
and 6.6b, respectively, i.e., we now deal with a second-order phase transition and a
soft excitation. The shapes of the corresponding field strength, polarization and
inversion resemble those of Fig. 8.4.
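The distinction between the two cases can also be read off directly from the stationary points of (8.107). The sketch below (with b normalized to 1) evaluates the stable nonzero amplitude R_2 just above β = 0: for a > 0 it is already finite (a discontinuous jump, hard excitation), while for a < 0 it grows continuously from zero (soft excitation).

```python
import math

def stable_amplitude(beta, a, b=1.0):
    """Largest real root of beta*R + a*R**3 - b*R**5 = 0, i.e. the stable
    state R_2 with R**2 = (a + sqrt(a**2 + 4*beta*b))/(2*b) when positive."""
    disc = a * a + 4 * beta * b
    if disc < 0:
        return 0.0
    r2 = (a + math.sqrt(disc)) / (2 * b)
    return math.sqrt(r2) if r2 > 0 else 0.0

eps = 1e-8
# a > 0: first-order behaviour; finite amplitude already at beta = 0+ (jump)
assert stable_amplitude(eps, a=1.0) > 0.9
# a < 0: second-order behaviour; amplitude grows continuously from zero
assert stable_amplitude(eps, a=-1.0) < 1e-3
```

For d = 4βb/a² < -1 the function returns 0, in agreement with the statement that only the R = 0 minimum exists there.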

Fig. 8.4a. The coefficients β, a of (8.107) as functions of the laser length L. The left ordinate refers to β, the right to a. Chosen parameters: γ = 2 × 10· s⁻¹; κ = 0.1γ; γ_∥ = 0.5γ; A = 11.5. (After H. Haken and H. Ohno, unpublished)

The dependence of the coefficients a and β of (8.108) on the laser length L is shown
in Fig. 8.4a for a given pump strength.


The method of generalized Ginzburg-Landau equations also allows us to treat
the buildup of ultrashort laser pulses. Here, first the time-dependent order parameter
equation (8.107) with (8.109) is solved and then the amplitudes of the slaved
modes are determined. A typical result is exhibited in Fig. 8.4b.

Fig. 8.4b. Temporal buildup of the electric field strength of an ultrashort laser pulse in the case of a first-order transition. (After H. Ohno, H. Haken: Phys. Lett. 59A, 261 (1976))

The preceding treatment has shown that instability hierarchies may occur in
lasers. As we now know, such hierarchies are a typical and widespread phenomenon
in synergetic systems. They are given detailed treatment in my book
Advanced Synergetics (see footnote on page 216).

8.9 Instabilities in Fluid Dynamics: The Benard and Taylor Problems

The following problems have fascinated physicists for at least a century because
they are striking examples of how thermal equilibrium states characterized by
complete disorder may suddenly show pronounced order if one moves away from thermal
equilibrium. There are, among others, the so-called Benard and Taylor problems.
We first describe the Benard problem, or, using another expression for it, convec-
tion instability: Let us consider an infinitely extended horizontal fluid layer which
is heated from below so that a temperature gradient is maintained. This gradient,
if expressed in suitable dimensionless units, is called the Rayleigh number, R.
As long as the Rayleigh number is not too large, the fluid remains quiescent and
heat is transported by conduction. If the number exceeds a certain value, however,
suddenly the fluid starts to convect. What is most surprising is that the convection
pattern is very regular and may either show rolls or hexagons. The hexagons
depicted in Fig. 1.16 are the convection cells as seen from above. One may have
the fluid rising in the middle of the cell and going down at the boundaries or vice
versa. An obvious task for the theoretical treatment is to explain the mechanism
of this sudden disorder-order transition and to predict the form and stability of the


cells. In a more refined theory one may then ask questions as to the probability of
fluctuations.
A closely related problem is the so-called Taylor-problem: The flow between a
long stationary outer cylinder and a concentric rotating inner cylinder takes place
along circular streamlines (Couette flow) if a suitable dimensionless measure of the
inner rotation speed (the Taylor number) is small enough. But Taylor vortices
spaced periodically in the axial direction appear when the Taylor number exceeds a
critical value.
In the following we shall explicitly treat the convection instability. The physical
quantities we have to deal with are the velocity field with components u_j (j =
1, 2, 3 ↔ x, y, z) at space point x, y, z, the pressure p and the temperature T.
Before going into the mathematical details, we shall describe its spirit. The velocity
field, the pressure, and the temperature obey certain nonlinear equations of fluid
dynamics which may be brought into a form depending on the Rayleigh number
R which is a prescribed quantity. For small values of R, we solve by putting the
velocity components equal zero. The stability of this solution is proven by lineariz-
ing the total equations around the stationary values of u, p, T, where we obtain
damped waves. If, however, the Rayleigh number exceeds a certain critical value Ro
the solutions become unstable. The procedure is now rather similar to the one which
we have encountered in laser theory (compare Sect. 8.8). The solutions which
become unstable define a set of modes. We expand the actual field (u, T) into these
modes with unknown amplitudes. Taking now into account the nonlinearities of
the system, we obtain nonlinear equations for the mode amplitudes which quite
closely resemble those of laser theory, leading to certain stable-mode configurations.
Including thermal fluctuations we again end up with a problem defined by non-
linear deterministic forces and fluctuating forces quite in the spirit of Chapter 7.
Their interplay governs in particular the transition region, R ≈ R_c.

8.10 The Basic Equations


First, we fix the coordinate system. The fluid layer of thickness d is assumed to be
infinitely extended in the horizontal directions with Cartesian coordinates x and
y. The vertical coordinate will be denoted by z. The basic equations of fluid
dynamics read as follows:
a) The law of conservation of matter is described by the continuity equation,
which for an incompressible fluid acquires the form
div u = 0, (8.112)

where u = u (x, t) is the velocity field.


b) The next equation refers to the change in momentum. We assume that the
internal friction is proportional to the gradient of the velocity field and that the
fluid is incompressible. This leads to the Navier-Stokes equation

(8.113)


The index i = 1, 2, 3 corresponds to x, y, z and (x_1, x_2, x_3) = (x, y, z), where we
have used the convention of summing over those expressions in which an
index, for instance j, occurs twice. The lhs represents the "substantial derivative",
where the first term may be interpreted as the local, temporal change in the
velocity field. The second term stems from the transport of matter to or from the
space point under consideration. The first term on the rhs represents a force due to
the gradient of the pressure p, where ρ is the density of the fluid and
ρ_0 = ρ(x, y, z = 0). The second term describes an acceleration due to gravity
(g is the gravitational acceleration constant). In this term the explicit dependence
of ρ on the space point x is taken into account, and δ is the Kronecker symbol.
The third term describes the friction force, where ν is the kinematic viscosity.
The last term describes fluctuating forces which give rise to velocity fluctuations.
Near thermal equilibrium F^(u) is connected with the fluctuating part of the stress
tensor s:
(8.114)

In the following we shall assume, as usual, that s is Gaussian distributed and
δ-correlated. As can be shown in detail, the correlation function can be derived by
means of the fluctuation-dissipation theorem
means of the fluctuation-dissipation theorem

(8.114a)

where k_B is the Boltzmann constant and T the absolute temperature. We now
consider a one-component fluid which is described by the following equation of
state:

δρ = ρ(T) - ρ(T_0) = -αρ_0(T - T_0),

where α is the thermal expansion coefficient, and T_0 the temperature at the
bottom of the fluid.
c) The third law which we must invoke is that of the conservation of energy.
In the present context this law is equivalent to the equation describing thermal
conduction:

∂T/∂t + u_j ∂T/∂x_j = κΔT + F^(T), (8.115)

where κ is the thermal conductivity. F^(T) describes fluctuations due to spontaneously
and locally occurring heat currents h, with which F^(T) is connected by the
relation

(8.116)

We assume that the heat currents h are Gaussian and possess the following
correlation functions:
(8.116a)


Furthermore, we assume that h and s are uncorrelated:

⟨h_i(x, t) s_kl(x', t')⟩ = 0. (8.116b)

Equations (8.112, 113, 115) form the basic equations in the Boussinesq
approximation. In order to define the problem completely, we must impose boundary
conditions on the solutions. For a viscous fluid we have at rigid boundaries

u_i = 0 (i = 1, 2, 3) (8.117)

or at free boundaries

(8.117a)

where ε is the completely antisymmetric tensor.


In the following we shall explicitly treat the case of free boundaries by way of
a model. It is possible to go beyond the Boussinesq approximation by including
further terms. We do not treat them explicitly here, but note that some of them
are not invariant with respect to reflections at the horizontal symmetry plane of
the fluid. This will have an important consequence for the structure of the final
equations, as we shall see later.

8.11 The Introduction of New Variables


We start from a fluid at rest and neglect fluctuations. One may convince oneself
very easily that the following quantities, denoted by the superscript s, fulfill the
basic equations in the stationary case:

u_i^(s) = 0 (i = 1, 2, 3), (8.118)

T^(s) = T_0 - βz, (8.118a)

p^(s) = p_0 - ρ_0 gz(1 + ½αβz), (8.118b)

where β denotes the constant temperature gradient.


In the following we introduce new variables which describe deviations in the
velocity field, the temperature field, and the pressure from their stationary values.
To this end we introduce the corresponding quantities u', θ, and w by means of
the following relations:

u = u^(s) + u', (8.118c)

T = T^(s) + θ, (8.118d)

(8.118e)


Having made these transformations, we drop the prime on u', because u' is identical
with u. The continuity equation (8.112) remains unaltered:

div u = 0, (8.112a)

but the Navier-Stokes equation (8.113) now reads

∂u_i/∂t + u_j ∂u_i/∂x_j = -∂w/∂x_i + αgθδ_{3,i} + νΔu_i + F_i^(u), (8.113a)

while the equation of thermal conductivity (8.115) acquires the form

(8.115a)

For the lower and upper surfaces, z = 0 and z = d respectively, we must now
require θ = 0. For u the boundary conditions remain unaltered. In fluid dynamics,
dimensionless variables are often introduced. In accordance with conventional
procedures we therefore introduce the new scaled variables u', θ', w', as well as x'
and t', by means of

x_i = d x_i', t = (d²/κ) t', ... (8.119)

For simplicity, we will drop the prime from now on. Then the continuity equation
(8.112) again reads

div u = 0, (8.112b)

while the Navier-Stokes equation acquires the form

∂u_i/∂t + u_j ∂u_i/∂x_j = -∂w/∂x_i + Pθδ_{3,i} + PΔu_i + F_i'^(u), (8.113b)

and the equation for thermal conductivity reads

-Oe
ot
oe
+ u·-
J ox.
=.de + Ru + F'(T).
3
(S.l15b)
J

Obviously the only free dimensionless parameters occurring in these three
equations are the Prandtl number

P = ν/κ (8.119a)


and the Rayleigh number

R = αgβd⁴/(νκ). (8.119b)

In the context of this book we will consider the Rayleigh number to be a control
parameter which can be manipulated from the outside via β, whereas the Prandtl
number will be assumed to be fixed.
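As a numerical illustration, both dimensionless numbers can be evaluated for a thin layer of water; the material constants below are rough, assumed order-of-magnitude values, and the critical number 27π⁴/4 anticipates the free-boundary result of Sect. 8.12.

```python
import math

# Rough room-temperature values for water (assumed, order-of-magnitude only):
alpha = 2.1e-4   # thermal expansion coefficient [1/K]
nu    = 1.0e-6   # kinematic viscosity [m^2/s]
kappa = 1.4e-7   # thermal diffusivity [m^2/s]
g     = 9.81     # gravitational acceleration [m/s^2]

d       = 5e-3   # layer thickness [m]
delta_T = 1.0    # temperature difference across the layer [K]
beta    = delta_T / d   # temperature gradient [K/m]

P = nu / kappa                               # Prandtl number (8.119a)
R = alpha * g * beta * d**4 / (nu * kappa)   # Rayleigh number (8.119b)

R_c = 27 * math.pi**4 / 4   # critical value for free boundaries, Sect. 8.12

assert 6 < P < 8            # water: P ~ 7
assert R > R_c              # this layer would convect in the free-boundary model
```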
The new fluctuating forces are connected with the former ones by

(8.114b)

(8.116b)

For the sake of completeness we shall write down the relations between the new
fluctuating forces and the fluctuating stress tensor and the fluctuating heat
currents. The corresponding formulae read

(8.114c)

(8.116c)

where s' and h' are fixed by the condition that their correlation functions remain
the same as in (8.114a) and (8.116a) in the new variables x', t'.

8.12 Damped and Neutral Solutions (R ≤ R_c)


In order eventually to determine the undamped modes, which will become the
order parameters, and the slaved modes, we perform a linear stability analysis of
(8.112b), (8.113b), and (8.115b) in which we check the stability of the solution
u = θ = 0. To this end we introduce the following vector:

q = (u, θ, w). (8.120)

Using matrix notation, the linearized equations can be written in rather concise
form:

(8.121)


where the matrices K and S are defined as follows:

K = ( RPΔ,      0,         0,         0,    -R ∂/∂x
      0,        RPΔ,       0,         0,    -R ∂/∂y
      0,        0,         RPΔ,       RP,   -R ∂/∂z      (8.121a)
      0,        0,         RP,        PΔ,   0
      R ∂/∂x,   R ∂/∂y,    R ∂/∂z,    0,    0 )

S = diag(R, R, R, P, 0). (8.121b)

To study the stability, we seek solutions of (8.121) of the form

(8.121c)

where q_0 may still be a function of the space coordinates x, y, z. Inserting (8.121c) into
(8.121) we obtain

(8.121d)

which is still a set of differential equations with respect to x, y, z. In the following


we shall use a new notation for the space coordinates, abbreviating the
two-dimensional vector (x, y) in the following way:

(x, y) = x. (8.122)

We accordingly introduce the two-dimensional wave vector

(8.122a)

We shall now discuss the spatial dependence of q_0. Because there is no
boundary condition in the horizontal direction except that the solution must be
bounded when |x| tends to infinity, we may use an exponential function in the x-y
plane: ∝ exp(-ikx). On the other hand, the free boundary conditions to be
imposed on the solutions at the lower and upper surface require a more specific
hypothesis for the z-dependence. As one can readily convince oneself, the following

hypothesis fulfills all the boundary conditions required:

q_0 = q_{k,l}(x, z), with third component q_{3,k,l} sin(lπz) exp(-ikx). (8.123)

The constant l must be chosen as an integer 1, 2, etc. The coefficients q_1 to q_5


depend on k and l and must still be determined. To this end we insert (8.123) into
(8.121d) and obtain a set of homogeneous algebraic equations for the unknowns
q_1 to q_5. In order to obtain a nontrivial solution, the corresponding determinant
must vanish. This condition leads us, after some straightforward algebra, to the
eigenvalues which occur in (8.121d). For the discussion that follows, the eigenvalues
of the modes which may become unstable are of particular interest. These
eigenvalues read

(8.123a)

Explicitly determining the corresponding eigenvectors is quite straightforward:

q_{k,l} = N_{k,l} ( -(ik_1 lπ/k²) cos lπz,
                    -(ik_2 lπ/k²) cos lπz,
                    sin lπz,
                    [R/(λ_l + k² + l²π²)] sin lπz,
                    -(lπ/k²)[λ_l + P(k² + l²π²)] cos lπz ) exp(-ikx). (8.123b)

N_{k,l} is a normalization constant and can be evaluated to yield

(8.123c)

when we define the scalar product

(8.124)


Incidentally, this equation shows the orthogonality of the q's in the sense defined
by the scalar product. The solutions are stable as long as λ_l possesses a negative
real part. When we change the Rayleigh number R, λ_l also changes. For a given
wave number k we obtain the Rayleigh number at which λ_l vanishes, i.e. where
linear stability is lost. From (8.123a) we obtain

(8.125)

which yields the neutral curve defined by

(8.125a)

In order to obtain the critical Rayleigh number and the corresponding critical
wave number k_c we seek the minimum of the neutral curve (8.125a) according to

∂R/∂(k²) = 0. (8.125b)

From (8.125) and (8.125b) we readily obtain, for l = 1,

(8.125c)

as critical wave number and

(8.125d)

as the critical Rayleigh number (under the assumption of free surfaces). For Rayleigh
numbers R close to R_c the eigenvalue of the unstable mode may be written in the
form

(8.125e)

The above analysis allows us to distinguish between the unstable and the stable
modes. Using these modes we will transform our original set of equations into a
new set of equations for the corresponding mode amplitudes. We further note that
the unstable modes are highly degenerate, because the linear stability analysis fixes
only the modulus of k, and the rolls described by the solutions (8.123) can be
oriented in any horizontal direction.
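These critical values can be checked by a direct numerical minimization of the standard free-boundary neutral curve R = (k² + l²π²)³/k² (for l = 1); the minimum reproduces k_c² = π²/2 ≈ 4.93 and R_c = 27π⁴/4 ≈ 657.5.

```python
import math

def R_neutral(k2, l=1):
    """Neutral curve for free boundaries: R = (k^2 + l^2 pi^2)^3 / k^2."""
    return (k2 + (l * math.pi) ** 2) ** 3 / k2

# Minimize numerically over k^2 on a fine grid.
k2_grid = [1e-4 + i * 1e-4 for i in range(200000)]   # k^2 in (0, 20]
k2_min = min(k2_grid, key=R_neutral)

assert abs(k2_min - math.pi ** 2 / 2) < 1e-2                   # k_c^2 = pi^2/2
assert abs(R_neutral(k2_min) - 27 * math.pi ** 4 / 4) < 1e-2   # R_c ~ 657.5
```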


8.13 Solution Near R = R_c (Nonlinear Domain). Effective Langevin Equations

We now adopt the procedure used in Sect. 7.7, where we derived generalized
Ginzburg-Landau equations. Thus the present case may serve as another explicit
example of the use of these equations. In the first step we will ignore finite band-
width excitations, i.e. we will adopt a discrete spectrum. This can be achieved, for
instance, by imposing periodic boundary conditions in the x and y directions.
Therefore our hypothesis reads

q(x, z, t) = Σ_{k,l} ξ_{k,l}(t) q_{k,l}(x, z), (8.126)

where q was defined in (8.120), and ξ_{k,l}(t) are still unknown time-dependent
amplitudes. The sum runs over all k and l. We wish to derive equations for the
amplitudes ξ which fully contain the nonlinearities. To do this we must insert
(8.126) into the complete nonlinear equations. Therefore the equation we have to
treat reads

S q̇ = Kq + N(q), (8.127)

where N contains the nonlinearities

(8.127a)

The matrices K and S were defined in (8.121a, b), and q_i (i = 1, ..., 5) are the
components of the vector q introduced in (8.120). We insert (8.126) into (8.127a)
and use the fact that q_{k,l}(x, z) obeys (8.121d). In order to obtain equations for the
ξ's we multiply both sides of (8.127) by one of the eigenmodes q*_{k,l}(x, z) and
integrate over the normalization volume. Because the corresponding calculations
are simple in principle but somewhat lengthy, we quote only the final result. We
obtain equations which have the following structure:¹

ξ̇_{k,l}(t) = λ(k, l) ξ_{k,l}(t) - Σ_{k',l';k'',l''} A_{k,l;k',l';k'',l''} ξ_{k',l'}(t) ξ_{k'',l''}(t). (8.128)

The explicit expressions for the coefficients A are rather lengthy and will not
be presented here. We only note that one summation, e.g. over k'' and l'', can be
carried out by means of selection rules which lead to δ_{k,k'+k''} (due to conservation of wave
vectors) and a further combination of Kronecker symbols containing l, l', l''. This
allows immediate evaluation of the sums over k'', l''.
In analogy with our treatment in Sect. 7.7 we must now consider the equations
for the unstable modes individually. Because explicit calculation and discussion
do not give us much insight into the physics, we will restrict ourselves to a verbal

¹ We remind the reader that in the interest of simplicity we have already omitted a further index,
say m (m = 1, 2), in equation (8.128), corresponding to λ_m; see (8.125a)


description. In the first step the equations for the unstable mode amplitudes
ξ_{k_c,1}(t) are selected. (This corresponds to the amplitude ξ_{k,u} of Sect. 7.7.) In the
lowest order of approximation in the nonlinearities we confine ourselves to the
terms ∝ ξ_{k_c,1}·ξ_{k,l,s} in the equation of motion for the unstable modes ξ_{k_c,1}. Here
ξ_{k,l,s} stands for the set of stable modes (see Sect. 7.7) which couple through the
nonlinearities and are restricted by selection rules.

Note that there is still a whole set of degenerate unstable modes which are
distinguished by different directions of the vector k_c. These modes can be
visualized as rolls whose axes are oriented in different horizontal directions. For the sake
of simplicity we shall consider only one definite direction in the discussion which
follows. This kind of model can be justified and related to reality if we choose a
rectangular boundary in the horizontal plane and assume that one edge is much
shorter than the other. In such a case the rolls whose axes are parallel to the
shorter edge become unstable first. We note, however, that it is by no means trivial
to extend this presentation to the general case of many directions and continuous
wave vectors, where finite bandwidth effects play an important role.

Once we have selected a specific direction k_c, the problem becomes one-dimensional
and we need only take into account the stable mode ξ_{0,2}. The equation for the unstable mode then reads

(8.129)

Here the coefficient of the linear term has been taken from (8.125e). In the second
step we must consider the equation for ξ_{0,2}. Here we shall restrict our attention to
terms ∝ |ξ_{k_c,1}|² in the nonlinearities, i.e. to the same order of approximation as
before. We then obtain

(8.130)

We now invoke the principle of adiabatic elimination (i.e. a specific form of the
slaving principle). Accordingly we put

ξ̇_{0,2} = 0 (8.131)

and readily obtain (if we put R = R_c in the nonlinear terms)

(8.132)

which allows us to eliminate ξ_{0,2} from the equations for the unstable modes, which
now become the order parameters. The final equation reads

(8.133)
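The logic of the adiabatic elimination can be illustrated by a two-mode toy model built in the spirit of (8.129), (8.130): a slowly growing mode coupled to a fast, damped mode. All coefficients are illustrative; the point is that the full dynamics ends up on the state predicted by setting the time derivative of the slaved mode to zero, as in (8.131).

```python
# Minimal sketch of adiabatic elimination (slaving): a fast stable mode xs
# follows the slow unstable mode xu. Coupling constants lam, gam, A, B are
# illustrative only, with gam >> lam so that xs is the fast variable.
lam, gam, A, B = 0.1, 10.0, 1.0, 1.0

xu, xs = 0.01, 0.0
dt = 1e-3
for _ in range(200000):                 # integrate the full two-mode system
    dxu = lam * xu - A * xu * xs
    dxs = -gam * xs + B * xu * xu
    xu, xs = xu + dt * dxu, xs + dt * dxs

# Eliminating xs (xs = B*xu**2/gam) gives d(xu)/dt = lam*xu - (A*B/gam)*xu**3
# with stationary amplitude sqrt(lam*gam/(A*B)):
xu_pred = (lam * gam / (A * B)) ** 0.5
assert abs(xu - xu_pred) < 1e-3
assert abs(xs - B * xu * xu / gam) < 1e-3   # the slaved-mode relation holds
```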


For the sake of completeness, we will briefly discuss four important generaliza-
tions: (1) three dimensions, (2) finite bandwidth effects, (3) fluctuations, and (4)
symmetry breaking.
1. Three Dimensions. Here we have to consider all directions of the unstable
modes and, in addition, their coupling to further classes of stable modes which
were not taken into account in (8.129). As a consequence of these facts, (8.133) is
now generalized to

(8.133a)

i.e. the different directions k_c are coupled through the coupling constants β_{k_c,k_c'}.

2. Finite Bandwidth Effects. In an infinitely extended medium the spectrum of the eigenvalues is continuous with respect to k, and we must consider finite bandwidth excitations in accordance with Chap. 7. This can be done as follows. Equation (8.123a) reveals that λ₁ depends on k through k² only. Since λ₁ reaches its maximum at k² = k_c², we can expand λ₁ with respect to (k² − k_c²) at this maximum. Confining this expansion to the leading terms, we then obtain more explicitly

λ₁(k², R) = λ₁(k_c², R − R_c) + c (k² − k_c²)², (8.134)

where λ₁(k_c², R − R_c) was derived in (8.125e) and the coefficient c is negative, since λ₁ is maximal at k² = k_c². If required, it is, of course, possible
to include higher-order terms. In analogy with our procedure in Sect. 7.7 we make
use of the relation

(8.135)

where

(8.135a)

This allows us to transform (8.134) into

(8.136)

which occurs in the order parameter equations, so that (8.136) replaces λ₁ in (8.129). We note that the form (8.136) preserves the rotation symmetry of the original equations. In the case where R > R_c and a definite roll pattern has developed, e.g. along the x direction, an alternative form of (8.136) is more appropriate, namely

λ̃₁ = λ₁(k_c², R − R_c) − [4P/(1 + P)] (∂/∂x − (i/(2k_c)) ∂²/∂y²)². (8.136a)

8.13 Solution Near R = R_c (Nonlinear Domain). Effective Langevin Equations

This form can be derived by taking into account the first nonvanishing bandwidth
corrections in the x and y directions.

3. Fluctuations. We shall take into account the projections of the fluctuating


forces onto the unstable modes only. They may be written as

(8.137)

and obey the following correlation functions

(8.138)

where

(8.139)

4. Symmetry Breaking. Finally, we note that additional terms may occur if the equations of motion of the fluid do not possess inversion symmetry with respect to the central horizontal plane. This symmetry violation may be caused, for instance, by a more complicated dependence of the density ρ on temperature than assumed at the beginning of Sect. 8.10, or by surface tensions or heating effects. A discussion would be rather lengthy, but the net effect can be written down in a simple manner. It turns out that these effects can be taken into account by additional terms on the rhs of (8.133a), namely terms which are bilinear in the amplitudes ξ_{k_c,1}:

(8.140)

(See the exercise on Sect. 8.13).


If all terms are combined, the most general form of the order parameter
equations takes the final form

(8.141)

where L is the linear operator

(8.141a)

and γ can be directly identified in (8.136). Furthermore, α is given by α = λ₁(k_c², R − R_c), and the nonlinearities, which are denoted by N_{k_c,1}, read

N_{k_c,1}({ξ_{k_c',1}}) = −δ Σ_{k_c',k_c''} ξ*_{k_c',1} ξ*_{k_c'',1} δ_{k_c+k_c'+k_c'',0} − Σ_{k_c'} β_{k_c,k_c'} |ξ_{k_c',1}|² ξ_{k_c,1}. (8.141b)


Exercise on 8.13

The stationary solution of the Bénard problem for R < R_c yields a linear temperature profile:

T^(0) = T₀ − βz.

In practice it may happen that the heat which is put into the system from below cannot be completely carried away. As a consequence the temperature at the boundaries increases at a constant rate η. This may be taken into account by changing the boundary conditions for the temperature in the following way:

T = T₀ − βz + ηt at the boundaries.

Determine the change in temperature for R < R_c in this case. Show that the symmetry of the linear temperature profile is lost, and use the elimination procedure described in Chap. 7 to show that these boundary conditions give rise to a term of the form (8.140).

8.14 The Fokker-Planck Equation and Its Stationary Solution


It is now a simple matter to establish the corresponding Fokker-Planck equation
which reads

(8.142)

We seek its solution in the form

f = N exp Φ. (8.143)

By a slight generalization of (6.116) one derives the following explicit solution

Φ = (2/Q) ∫∫ { Σ_{k_c} γ ξ*_{k_c,1} (∂/∂x − (i/(2k_c)) ∂²/∂y²)² ξ_{k_c,1} + Σ_{k_c} α |ξ_{k_c,1}|²
  − (δ/3) Σ_{k_c,k_c',k_c''} (ξ*_{k_c,1} ξ*_{k_c',1} ξ*_{k_c'',1} δ_{k_c+k_c'+k_c'',0} + c.c.)
  − ½ Σ_{k_c,k_c'} β_{k_c,k_c'} |ξ_{k_c,1}|² |ξ_{k_c',1}|² } dx dy. (8.144)

It goes far beyond our present purpose to treat (8.144) in its whole generality. We want to demonstrate how such expressions (8.143) and (8.144) allow us to discuss


the threshold region and the stability of various mode configurations. We neglect the dependence of the slowly varying amplitudes ξ_{k_c,1} on x, y. We first put δ = 0. (8.143) and (8.144) are a suitable means for the discussion of the stability of different mode configurations. Because Φ depends only on the absolute values of ξ_{k_c,1},
(8.145)

we introduce the new variable

(8.146)

The values w_{k_c} for which Φ has an extremum are given by

∂Φ/∂w_{k_c} = 0,  or  α − Σ_{k_c'} β_{k_c,k_c'} w_{k_c'} = 0, (8.147)

and the second derivative tells us that the extrema are all maxima

(8.148)

For symmetry reasons we expect

(8.149)

From (8.147) we then obtain

(8.150)

where we use the abbreviation

We now compare the solution in which all modes participate, with the single mode
solution for which

(8.151)

holds, so that

Φ(w) = α²/(2β). (8.152)

A comparison between (8.150) and (8.152) reveals that the single mode has a greater probability than the multimode configuration. Our analysis can be generalized to different mode configurations, leading again to the result that only a single mode is stable.
Let us discuss the form of the velocity field of such a single mode configuration, using (8.121) to (8.125). Choosing k_c in the x direction, we immediately recognize that, for example, the z-component of the velocity field, u_z, is independent of y and has the form of a sine wave. Thus we obtain rolls as stable configurations.
We now come to the question of how to explain the still more spectacular hexagons. To do this we include the cubic terms in (8.144) which stem from a spatial inhomogeneity in z-direction, e.g., from non-Boussinesq terms. Admitting for the comparison only 3 modes with amplitudes ξ_i, ξ_i*, i = 1, 2, 3, the potential function is given by

Φ = α(|ξ₁|² + |ξ₂|² + |ξ₃|²) − δ(ξ₁* ξ₂* ξ₃* + c.c.) − ½ Σ_{k_c,k_c'} β_{k_c,k_c'} |ξ_{k_c,1}|² |ξ_{k_c',1}|², (8.153)

Fig. 8.5. Construction of hexagon from basic triangle (for details consult the text)

where the k_c-sums run over the triangle of Fig. 8.5 which arises from the condition k_{c1} + k_{c2} + k_{c3} = 0 and |k_{ci}| = const. (compare (8.127)). To find the extremal values of Φ we take the derivatives of (8.153) with respect to ξ_i, ξ_i* and thus obtain six equations. Their solution is given by

(8.154a)

Using (8.154a) together with (8.121)-(8.125) we obtain for example u_z(x). Concentrating our attention on its dependence on x, y, and using Fig. 8.5, we find (with x′ = (π/√2)x and y′ scaled in the same way)

(8.154b)

or in a more concise form

u_z(x) ∝ 2ξ [cos x′ + cos(x′/2 + (√3/2)y′) + cos(x′/2 − (√3/2)y′)].


Using the solid lines in Fig. 8.5 as an auxiliary pattern one easily convinces oneself that the hexagon of that figure is the elementary cell of u_z(x), (8.154b). We discuss the probability of the occurrence of this hexagon compared with that of rolls. To this end we evaluate (8.153) with the aid of (8.154a), which yields (using the explicit expression for δ):

(8.155)

Here we have used the abbreviations

(8.156)

(8.157)

(8.158)

We now discuss (8.155) for two different limiting cases.


1)

(8.159)

In this case we obtain

(8.160)

2)

(8.161)

In this case we obtain

(8.162)

A comparison of (8.160) or (8.162), respectively, with the single mode potential (8.152) reveals the following: for Rayleigh numbers R > R_c which exceed R_c only a little, the hexagon configuration has a higher probability than the roll configuration. But a further increase of the Rayleigh number finally renders the single mode configuration (rolls) more probable.
In conclusion, let us discuss our above results using the phase transition analogy of Sections 6.7, 6.8. For δ = 0 our present system exhibits all features of a second-order nonequilibrium phase transition (with symmetry breaking, soft modes, critical fluctuations, etc.). In complete analogy to the laser case, the symmetry can
be broken by injecting an external signal. This is achieved by superimposing a spatially periodically changing temperature distribution on the constant temperature at the lower boundary of the fluid. By this we can prescribe the phase (i.e., the position of the rolls), and to some extent also the diameter of the rolls (i.e., the wavelength). For δ ≠ 0 we obtain a first-order transition, and hysteresis is expected to occur.
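The roll-hexagon competition can be sketched numerically from the potential (8.153). The coupling constants below are hypothetical (self-coupling 1, cross-coupling 2, small δ), not the values derived for the Bénard problem, but with any couplings of this qualitative type the maximum of Φ is attained by the hexagon solution just above threshold (small α) and by a single roll at larger α:

```python
# Potential (8.153) with three equal real amplitudes xi_1 = xi_2 = xi_3 = -w
# (hexagons) versus a single mode (rolls).  Hypothetical couplings:
beta_s, beta_c, delta = 1.0, 2.0, 0.3   # self-coupling, cross-coupling, cubic term

def phi_roll(alpha):
    # one mode: Phi = alpha*w^2 - (1/2)*beta_s*w^4, maximal at w^2 = alpha/beta_s
    return alpha**2 / (2.0*beta_s)       # cf. the single mode potential (8.152)

def phi_hex(alpha, wmax=2.0, n=20000):
    # three modes at 120 degrees, xi_i = -w, so the delta term contributes +2*delta*w^3:
    # Phi = 3*alpha*w^2 + 2*delta*w^3 - (1/2)*(3*beta_s + 6*beta_c)*w^4
    best = 0.0
    for i in range(n + 1):
        w = wmax*i/n
        p = 3*alpha*w**2 + 2*delta*w**3 - 0.5*(3*beta_s + 6*beta_c)*w**4
        best = max(best, p)
    return best

print(phi_hex(0.01), phi_roll(0.01))   # near threshold: hexagons more probable
print(phi_hex(1.0), phi_roll(1.0))     # farther above threshold: rolls win
```

Since the stationary distribution is f ∝ exp Φ, the configuration with the larger Φ is the more probable one, reproducing the hexagon-to-roll crossover described in the text.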

8.15 A Model for the Statistical Dynamics of the Gunn Instability Near Threshold²
The Gunn effect, or Gunn instability, occurs in semiconductors having two conduction bands with different positions of their energy minima. We assume that donors have given their electrons to the lower conduction band. When the electrons are accelerated by application of an electric field, their velocity increases and thus the current increases with the field strength. On the other hand, the electrons are slowed down due to scattering by lattice vibrations (or defects). As a net effect, a non-vanishing mean velocity, v, results. For small fields E, v increases as E increases. However, at sufficiently high velocities the electrons may tunnel into the higher conduction band where they are again accelerated. If the effective mass of the higher conduction band is greater than that of the lower conduction band, the acceleration of the electrons in the higher conduction band is smaller than that of those in the lower conduction band. Of course, in both cases the electrons are slowed down again by their collisions with lattice vibrations. At any rate the effective mean velocity of the electrons is smaller in the higher conduction band than in the lower conduction band. Because with higher and higher field strength more and more electrons get into the higher conduction band, the mean velocity of the electrons of the whole sample becomes smaller. Thus we obtain a behavior of the mean velocity v(E) as a function of the electric field strength as

Fig. 8.6. Mean velocity of electrons as function of electric field strength

exhibited in Fig. 8.6. Multiplying v by the concentration of electrons, n₀, yields


the current density. So far we have talked about the elementary effect which has
nothing or little to do with a cooperative effect. We now must take into account
that the electrons themselves create an electric field strength so that we are led to a
feedback mechanism. It is our intention to show that the resulting equations are
strongly reminiscent of the laser equations, and, in particular, that one can easily
explain the occurrence of current pulses.
Let us now write down the basic equations. The first is the equation describing
the conservation of the number of electrons. Denoting the density of electrons at

² Readers not familiar with the basic features of semiconductors may nevertheless read this chapter starting from (8.163)-(8.165). This chapter then provides an example how one may derive pulse-like phenomena from certain types of nonlinear equations.


time t and space point x by n, and the electron current, divided by the electronic charge e, by J, the continuity equation reads

∂n/∂t + ∂J/∂x = 0. (8.163)

There are two contributions to the current J. On the one hand there is the streaming motion of the electrons, n v(E); on the other hand this motion is superimposed by diffusion with diffusion constant D. Thus we write the "current" in the form

J = n v(E) − D ∂n/∂x. (8.164)

Finally the electric field is generated by charges. Denoting the concentration of ionized donors by n₀, according to electrostatics the equation for E reads

dE/dx = e′(n − n₀), (8.165)

where we have introduced the abbreviation

e′ = 4πe/ε₀. (8.165a)

ε₀ is the static dielectric constant. We want to eliminate the quantities n and J


from the above equations to obtain an equation for the field strength E alone.
To this end we express n in (8.163) by means of (8.165) and J in (8.163) by means of
(8.164) which yields

(8.166)

Replacing again n in (8.166) by means of (8.165) and rearranging the resulting


equation slightly we obtain

(8.167)

This equation immediately allows for an integration over the coordinate. The integration constant, denoted by

(1/e) I(t), (8.167a)

can still be a function of time. After integration and a slight rearrangement we find


the basic equation

∂E/∂t = −e′ n₀ v(E) − v(E) ∂E/∂x + D ∂²E/∂x² + (4π/ε₀) I(t). (8.168)

I(t), which has the meaning of a current density, must be determined in such a way that the externally applied potential

U = ∫₀^L dx E(x, t) (8.169)

is constant over the whole sample. It is now our task to solve (8.168). For explicit calculations it is advantageous to use an explicit form for v(E) which, as very often used in this context, has the form

v(E) = μ₁E (1 + B(E/E_c)⁴)/(1 + (E/E_c)⁴). (8.170)

μ₁ is the electron mobility of the lower band, B is the ratio between upper and lower band mobility. To solve (8.168) we expand it into a Fourier series

(8.171)

The summation goes over all positive and negative integers, and the fundamental
wave number is defined by

(8.172)

Expanding v(E) into a power series of E and comparing the coefficients of the same exponential functions in (8.168) yields the following set of equations

(8.173)

The different terms on the rhs of (8.173) have the following meaning

(8.174)

where

(8.175)

v₀^(1) is the differential mobility and ω_R is the negative dielectric relaxation frequency. ω_m is defined by

(8.176)


The coefficients A are given by

(8.177)

where the derivatives of v are defined by

v₀^(s) = dˢv(E)/dEˢ |_{E=E₀}. (8.178)

Inserting (8.171) into the condition (8.169) fixes E₀:

E₀ = U/L. (8.179)

It is now our task to discuss and solve, at least approximately, the basic equations (8.173). First neglecting the nonlinear terms, we perform a linear stability analysis. We observe that E_m becomes unstable if α_m > 0. This can be the case if v₀^(1) is negative, which is a situation certainly realized as we learn by a glance at Fig. 8.6. We investigate a situation in which only one mode m = ±1 becomes unstable but the other modes are still stable. To exhibit the essential features we confine our analysis to the case of only 2 modes, with m = ±1, ±2, and make the hypothesis

(8.180)

Eqs. (8.173) reduce to

(8.181)

and

(8.182)

Here we have used the following abbreviations

α ≡ α₁ > 0,  α₂ ≡ −β < 0, (8.183)

V = 2V₁^(2) = 2{−½ e′n₀v₀^(2) + i m k₀ v₀^(1)}, (8.184)

V ≈ −e′n₀v₀^(2), (8.185)

W = −6V₁^(4) ≈ e′n₀v₀^(3). (8.186)

In realistic cases the second term in (8.184) can be neglected compared to the first, so that (8.184) reduces to (8.185). Formula (8.186) implies a similar approximation. We can now apply the adiabatic elimination procedure described in Sections 7.1, 7.2. Putting

dc₂/dt = 0, (8.187)


(8.182) allows for the solution

c₂ = (V/2) c₁²/(β + W|c₁|²). (8.188)

When inserting c₂ into (8.181) we obtain as final equation

(8.189)

It is a trivial matter to represent the rhs of (8.189) as the (negative) derivative of a potential Φ, so that (8.189) reads

∂c₁/∂t = −∂Φ/∂c₁*.

With the abbreviation I = |c₁|², Φ reads

(8.190)

When the parameter α is changed, we obtain a set of potential curves Φ(I) which are similar to those of Fig. 8.3. Adding a fluctuating force to (8.189) we may take fluctuations into account. By standard procedures a Fokker-Planck equation belonging to (8.189) may be established and the stable and unstable points may be discussed. We leave it as an exercise to the reader to verify that we have again a situation of a first-order phase transition implying a discontinuous jump of the equilibrium position and a hysteresis effect. Since c₁ and c₂ are connected with oscillatory terms (compare (8.171), (8.180)), the electric field strength shows undamped oscillations as observed in the Gunn effect.
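The instability criterion just used — growth is possible where the differential mobility v₀^(1) = dv/dE is negative — can be checked directly on a velocity characteristic of the form (8.170). The numbers below (μ₁ = 1, E_c = 1, B = 0.05, quartic exponent) are merely illustrative; the sketch locates the range of bias fields with negative differential mobility:

```python
# N-shaped drift velocity of the form (8.170) and its differential mobility dv/dE.
# mu1, B, Ec are illustrative; B < 1 is the upper-to-lower band mobility ratio.
mu1, B, Ec = 1.0, 0.05, 1.0

def v(E):
    r = (E/Ec)**4
    return mu1*E*(1.0 + B*r)/(1.0 + r)

def dv(E, h=1e-6):
    # numerical differential mobility v0^(1) at bias field E
    return (v(E + h) - v(E - h))/(2*h)

# scan the field axis for the window of negative differential mobility,
# i.e. the bias fields E0 for which a mode can become unstable (alpha_m > 0)
unstable = [E/100.0 for E in range(1, 1000) if dv(E/100.0) < 0.0]
print(min(unstable), max(unstable))
```

For these parameters the unstable window lies roughly between the maximum and the minimum of the N-shaped curve of Fig. 8.6, as the linear stability analysis requires.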

8.16 Elastic Stability: Outline of Some Basic Ideas


The general considerations of Sections 5.1, 5.3 on stability and instability of systems described by a potential function find immediate applications to the nonlinear theory of elastic stability. Consider as an example a bridge. In the frame of a model we may describe a bridge as a set of elastic elements coupled by links, Fig. 8.7. We then investigate a deformation, and especially the breakdown under a load. To demonstrate how such considerations can be related to the analysis of Sections 5.1, 5.3 let us consider the simple arch (compare Fig. 8.8). It comprises two linear springs of stiffness k pinned to each other and to rigid supports. The angle between a spring and the horizontal is denoted by q. If there is no load, the corresponding angle is called α. We consider only a load in the vertical direction and denote it by P. We restrict our considerations to symmetric deformations, so that the system has only one degree of freedom described by the variable q. From elementary geometry

Fig. 8.7. Model of a bridge.  Fig. 8.8. Model of a simple arch

it follows that the strain energy of the two springs is

U(q) = 2 · ½ k (R/cos α − R/cos q)². (8.191)

Since we confine our analysis to small angles q we may expand the cosine functions,
which yields

(8.192)

The deflection of the load in the vertical direction reads

ℰ(q) = R(tan α − tan q) (8.193)

or, after approximating tan x by x (x = α or q),

ℰ(q) = R(α − q). (8.194)

The potential energy of the load is given by −load · deflection, i.e.,

−P ℰ(q). (8.195)

The potential of the total system comprising springs and load is given by the sum of the potential energies (8.192) and (8.195):

V(q) = U − P ℰ (8.196)

and thus

(8.197)

Evidently the behavior of the system is described by a simple potential. The system


acquires that state for which the potential has a minimum:

∂V/∂q = 0. (8.198)

(8.198) allows us to determine the extremal position q as a function of P:

q = q(P). (8.199)

In mechanical engineering one often determines the load as a function of the deflection, again from (8.198):

P = P(q). (8.200)

Taking the second derivative

∂²V/∂q² (8.201)

tells us if the potential has a maximum or a minimum. Note that we have to insert (8.199) into (8.201). Thus when (8.201) changes sign from a positive to a negative value we reach a critical load and the system breaks down. We leave the following problem to the reader as an exercise: Discuss the potential (8.197) as a function of the parameter P and show that at a critical P the system suddenly switches from a stable state to a different stable state. Hint: The resulting potential curves have the form of Fig. 6.7, i.e., a first-order transition occurs.
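The critical (limit) load of the arch can be computed explicitly in the small-angle approximation. With the expanded strain energy and (8.194), the equilibrium condition (8.198) gives P(q) = kR q(α² − q²), whose maximum over 0 < q < α is P_c = 2kRα³/(3√3). A short numerical sketch (k, R, α are arbitrary illustration values) confirms this:

```python
import math

# Snap-through of the simple arch in the small-angle approximation:
# U(q) ~ k*R^2*(q^2 - a^2)^2/4,  E(q) = R*(a - q)  (cf. (8.192), (8.194)),
# so equilibrium (8.198) gives P(q) = k*R*q*(a**2 - q**2).
k, R, a = 1.0, 1.0, 0.2        # hypothetical stiffness, half-span, rest angle

def P(q):
    return k*R*q*(a**2 - q**2)

# numerical maximum of P(q) on 0 < q < a = limit (critical) load
qs = [a*i/100000 for i in range(100001)]
P_crit = max(P(q) for q in qs)

P_analytic = 2*k*R*a**3/(3*math.sqrt(3))
print(P_crit, P_analytic)      # both ~ 0.00308
```

Beyond this load no nearby equilibrium exists and the arch snaps through — the first-order scenario of the exercise.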
Second-order transitions can also be very easily mimicked in mechanical engineering by the hinged cantilever (Fig. 8.9). We consider a rigid link of length L,

Fig. 8.9. Hinged cantilever

pinned to a rigid foundation and supported by a linear rotational spring of stiffness k. The load acts in the vertical direction. We introduce as coordinate the angle q between the deflected link and the vertical. The potential energy of the link (due to the spring) reads

U = ½ k q². (8.202)


The deflection of the load is given by

ℰ = L(1 − cos q). (8.203)

Just as before we can easily construct the total energy, which is now

V = U − Pℰ = ½kq² − PL(1 − cos q). (8.204)

When we expand the cosine function for small values of q up to the fourth power we obtain the potential curves of Fig. 6.6. We leave the discussion of the resulting instability, which corresponds to a second-order phase transition, to the reader. Note that such a local instability is introduced if PL > k. Incidentally this example may serve as an illustration for the unfoldings introduced in Section 5.5. In practice the equilibrium position of the spring may differ slightly from that of Fig. 8.9. Denoting the equilibrium angle of the link without load by ε, the potential energy of the link is now given by

U(q, ε) = ½k(q − ε)² (8.205)

and thus the total potential energy under load by

V = ½k(q − ε)² − PL(1 − cos q). (8.206)

Expanding the cosine function again up to 4th order we observe that the resulting potential

(8.207)

is of the form (5.133), including now a linear term which we came across when discussing unfoldings. Correspondingly the symmetry inherent in (8.204) is now broken, giving rise to potential curves of Fig. 5.20 depending on the sign of ε. We leave it again as an exercise to the reader to determine the equilibrium positions and the states of stability and instability as a function of load.
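For the perfect cantilever (ε = 0) the potential (8.204) can be minimized numerically. With the illustrative choice k = L = 1 the deflection stays at q = 0 up to P = k/L and then grows continuously — the pitchfork (second-order) scenario, with q* ≈ √(6(PL − k)/(PL)) from the quartic expansion of the cosine:

```python
import math

# Minimize V(q) = 0.5*k*q**2 - P*L*(1 - cos(q)), Eq. (8.204), on a grid.
k, L = 1.0, 1.0                 # hypothetical stiffness and link length

def q_min(P):
    qs = [4.0*i/200000 for i in range(200001)]   # grid on [0, 4]
    return min(qs, key=lambda q: 0.5*k*q*q - P*L*(1.0 - math.cos(q)))

print(q_min(0.8))   # below P = k/L: undeflected state q = 0
print(q_min(1.2))   # above threshold: finite deflection, ~sqrt(6*(PL-k)/(PL))
```

The equilibrium deflection bifurcates continuously from zero at PL = k, in contrast to the sudden snap-through of the arch.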

Remarks About the General Case

We are now in a position to define the general problem of mechanical engineering with respect to elastic stability as follows: Every static mechanical system is described by a certain set of generalized coordinates q₁, …, q_n, which may include, for instance, angles. In more advanced considerations also continuously distributed coordinates are used, for example to describe deformations of elastic shells, which are used, for instance, in cooling towers. One then seeks the minima of the potential energy as a function of one or a set of external loads. Of particular importance is the study of instability points where critical behavior shows up, as we have seen in many examples before. E.g., one minimum may split into 2 minima (bifurcation)


or a limit point may be reached where the system completely loses its stability.
An important consequence of our general considerations of Section 5.5 should be
mentioned for the characterization of the instability points. It suffices to consider
only as many degrees of freedom as coefficients of the diagonal quadratic form
vanish. The coordinates of the new minima may describe completely different
mechanical configurations. As an example we mention a result obtained for thin
shells. When the point of bifurcation is reached, the shells are deformed in such a
way that a hexagonal pattern occurs. The occurrence of this pattern is a typical
post-buckling phenomenon. Exactly the same patterns are observed for example in
hydrodynamics (compare Sect. 8.13).

9. Chemical and Biochemical Systems

9.1 Chemical and Biochemical Reactions

Basically, we may distinguish between two different kinds of chemical processes:


1) Several chemical reactants are put together at a certain instant, and we are
then studying the processes going on. In customary thermodynamics, one usually
compares only the reactants and the final products and observes in which direction
a process goes. This is not the topic we want to treat in this book. We rather
consider the following situation, which may serve as a model for biochemical reactions.
2) Several reactants are continuously fed into a reactor where new chemicals are
continuously produced. The products are then removed in such a way that we have
steady state conditions. These processes can be maintained only under conditions
far from thermal equilibrium. A number of interesting questions arise which will
have a bearing on theories of formation of structures in biological systems and on
theories of evolution. The questions we want to focus our attention on are especially
the following:
1) Under which conditions can we get certain products in large well-controlled
concentrations?
2) Can chemical reactions produce spatial or temporal or spatio-temporal patterns?
To answer these questions we investigate the following problems:
a) deterministic reaction equations without diffusion
b) deterministic reaction equations with diffusion
c) the same problems from a stochastic point of view

9.2 Deterministic Processes, Without Diffusion, One Variable


We consider a model of a chemical reaction in which a molecule of kind A reacts with a molecule of kind X so as to produce an additional molecule X. Since a molecule X is produced by the same molecule X as catalyser, this process is called an "autocatalytic reaction". Allowing also for the reverse process, we thus have the scheme (cf. Fig. 9.1)

A + X ⇌ 2X. (9.1)

Fig. 9.1. (Compare text)

The corresponding reaction rates are denoted by k₁ and k₁′, respectively. We further assume that the molecule X may be converted into a molecule C by interaction with a molecule B (Fig. 9.2):

B + X ⇌ C. (9.2)

Fig. 9.2. (Compare text)

Again the inverse process is admitted. The reaction rates are denoted by k₂ and k₂′. We denote the concentrations of the different molecules A, X, B, C as follows:

A ↔ a,  X ↔ n,  B ↔ b,  C ↔ c. (9.3)
We assume that the concentrations of the molecules A, B and C and the reaction rates k_j, k_j′ are externally kept fixed. Therefore, what we want to study is the temporal behavior and steady state of the concentration n. To derive an appropriate equation for n, we investigate the production rate of n. We explain this by an example. The other cases can be treated similarly.

Let us consider the process (9.1) in the direction from left to right. The number of molecules X produced per second is proportional to the concentration a of molecules A, and to that of the molecules X, n. The proportionality factor is just the reaction rate, k₁. Thus the corresponding production rate is a·n·k₁. The complete list for the processes 1 and 2 in the directions indicated by the arrows reads

r₁ = k₁an − k₁′n²,  r₂ = −k₂bn + k₂′c. (9.4)

The minus signs indicate the decrease of the concentration n. Taking the two processes of 1 or 2 together, we find the corresponding rates r₁ and r₂ as indicated above. The total temporal variation of n, dn/dt ≡ ṅ, is given by the sum of r₁ and r₂, so that our basic equation reads

ṅ = r₁ + r₂ = k₁an − k₁′n² − k₂bn + k₂′c. (9.5)


In view of the rather numerous constants a, …, k₁, … appearing in (9.4) it is advantageous to introduce new variables. By an appropriate change of the units of time t and concentration n we may put

(9.6)

Further introducing the abbreviations

(9.7)

we may cast (9.5) in the form

ṅ = (1 − β)n − n² + γ ≡ φ(n). (9.8)

Incidentally (9.8) is identical with a laser equation provided γ = 0 (compare (5.84)). We first study the steady state, ṅ = 0, with γ = 0. We obtain

n = 0 for β > 1,  n = 1 − β for β < 1, (9.9)

i.e., for β > 1 we find no molecules of the kind X, whereas for β < 1 a finite concentration, n, is maintained. This transition from "no molecules" to "molecules X present" as β changes has a strong resemblance to a phase transition (cf. Sect. 6.7), which we elucidate by drawing an analogy with the equation of the ferromagnet, writing (9.8) with ṅ = 0 in the form

γ = n² − (1 − β)n. (9.10)

The analogy is readily established by the following identifications

H ↔ γ,  T/T_c ↔ β,

H = M² − (1 − T/T_c)M, (9.11)

where M is the magnetization, H the magnetic field, T the absolute temperature and T_c the critical temperature.
To study the temporal behavior and the equilibrium states it is advantageous to use a potential (compare Sect. 5.1). Eq. (9.8) then acquires the form

ṅ = −∂V/∂n (9.12)

with

V(n) = −½(1 − β)n² + ⅓n³ − γn. (9.13)

We have met this type of potential on several occasions in our book and we may leave the discussion of the equilibrium positions of n to the reader. To study the temporal behavior we first investigate the case

a) γ = 0.
The equation

ṅ = −(β − 1)n − n² (9.14)

is to be solved under the initial condition

t = 0: n = n₀. (9.15)

In the case β = 1 the solution reads

n = n₀/(1 + n₀t), (9.16)

i.e., n tends asymptotically to 0. We now assume β ≠ 1. The solution of (9.14) with (9.15) reads

n = (1 − β)/2 − (λ/2) · (c exp(−λt) − 1)/(c exp(−λt) + 1). (9.17)

Here we have used the abbreviations

λ = |1 − β|, (9.18)

c = (|1 − β| + (1 − β) − 2n₀)/(|1 − β| + (1 − β) + 2n₀). (9.19)

In particular we find that the solution (9.17) tends for t → ∞ to the following equilibrium values:

n∞ = 0 for β > 1,  n∞ = 1 − β for β < 1. (9.20)

Depending on whether β > 1 or β < 1, the temporal behavior is exhibited in Figs. 9.3 and 9.4.

b) γ ≠ 0.
In this case the solution reads

n(t) = (1 − β)/2 − (λ/2) · (c exp(−λt) − 1)/(c exp(−λt) + 1), (9.21)


Fig. 9.3. Solution (9.17) for β > 1.  Fig. 9.4. Solution (9.17) for β < 1 and two different initial conditions

where c is the same as in (9.19) but λ is now defined by

λ = √((1 − β)² + 4γ). (9.22)

Apparently n tends to an equilibrium solution without any oscillations.
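The monotonic approach to equilibrium can be verified by direct numerical integration of (9.8): the stationary value follows from ṅ = 0 as n∞ = [(1 − β) + λ]/2 with λ from (9.22). The parameter values below are arbitrary illustrations:

```python
import math

def integrate(beta, gamma, n0, T=60.0, dt=0.001):
    """Integrate n' = (1 - beta)*n - n**2 + gamma, Eq. (9.8), by RK4."""
    f = lambda n: (1.0 - beta)*n - n*n + gamma
    n, t = n0, 0.0
    while t < T:
        k1 = f(n); k2 = f(n + dt/2*k1); k3 = f(n + dt/2*k2); k4 = f(n + dt*k3)
        n += dt/6*(k1 + 2*k2 + 2*k3 + k4)
        t += dt
    return n

def n_inf(beta, gamma):
    # stationary solution of (9.8): positive root of n**2 - (1-beta)*n - gamma = 0
    lam = math.sqrt((1.0 - beta)**2 + 4.0*gamma)   # cf. (9.22)
    return 0.5*((1.0 - beta) + lam)

print(integrate(1.5, 0.2, 1.0), n_inf(1.5, 0.2))  # ~0.2623 in both cases
print(integrate(0.5, 0.0, 0.1), n_inf(0.5, 0.0))  # gamma = 0: n_inf = 1 - beta = 0.5
```

The trajectory relaxes to n∞ without overshoot, in agreement with the closed-form solutions (9.17) and (9.21).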


As a second model of a similar type we treat the reaction scheme

A + 2X ⇌ 3X, (9.23)

B + X ⇌ C. (9.24)

Eq. (9.23) implies a trimolecular process. Usually it is assumed that these are very rare and that practically only bimolecular processes take place. It is, however, possible to obtain a trimolecular process from subsequent bimolecular processes, e.g., A + X → Y; Y + X → 3X, if the intermediate step takes place very quickly and the concentration of the intermediate product can (mathematically) be eliminated adiabatically (cf. Sect. 7.1). The rate equation of (9.23), (9.24) reads

ṅ = −n³ + 3n² − βn + γ ≡ φ(n), (9.25)

where we have used an appropriate scaling of time and concentration. We have


met this type of equation already several times (compare Sect. 5.5) with n == q.
This equation may be best discussed using a potential. One readily finds that there
may be either one stable value or two stable and one unstable values. Eq. (9.25)


describes a first-order phase transition (cf. Sect. 6.7). This analogy can be exhibited still more closely by comparing the steady state equation (9.25), with ṅ = 0, with the equation of a van der Waals gas. The van der Waals equation of a real gas reads

(p + a/v²)(v − b) = RT. (9.26)

The analogy is established by the translation table

n ↔ 1/v (v: volume)
γ ↔ pressure p (9.27)
β ↔ RT
(R: gas constant, T: absolute temperature)

We leave it to readers who are familiar with van der Waals' equation to exploit this analogy to discuss the kinds of phase transition the molecular concentration n may undergo.
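The coexistence of one or three steady states in (9.25) — the chemical analogue of the van der Waals isotherms — can be seen by counting the real roots of φ(n) = 0. For the scaled cubic the critical point lies at β = 3, γ = 1, where φ(n) = −(n − 1)³; the sample values of β and γ below are illustrative:

```python
# Count steady states of phi(n) = -n**3 + 3*n**2 - beta*n + gamma = 0
# by locating sign changes of phi on a fine grid of n >= 0.

def steady_states(beta, gamma, nmax=5.0, m=100000):
    phi = lambda n: -n**3 + 3*n**2 - beta*n + gamma
    roots = []
    prev = phi(0.0)
    for i in range(1, m + 1):
        n = nmax*i/m
        cur = phi(n)
        if prev*cur < 0.0:      # root crossed between successive grid points
            roots.append(n)
        prev = cur
    return roots

print(len(steady_states(2.0, 0.3)))  # bistable region: three steady states
print(len(steady_states(2.0, 2.0)))  # monostable: a single steady state
```

In the three-root case the outer two roots are stable and the middle one unstable, exactly as for the minima and maximum of the corresponding potential.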

9.3 Reaction and Diffusion Equations


As a first example we treat again a single concentration as variable, but we now permit this concentration to vary spatially due to diffusion. We have met a diffusion equation already in Sections 4.1 and 4.3. The temporal change of n, ṅ, is now determined by reactions which are described, e.g., by the rhs of (9.8), and in addition by a diffusion term (we consider a one-dimensional model):

ṅ = D ∂²n/∂x² + φ(n). (9.28)

φ(n) may be derived from a potential V or from its negative Φ:

φ(n) = ∂Φ(n)/∂n (= −∂V/∂n). (9.29)

We study the steady state ṅ = 0 and want to derive a criterion for the spatial coexistence of two phases, i.e., we consider a situation in which we have a change of concentration within a certain layer, so that

n → n₁ for x → +∞,
n → n₂ for x → −∞. (9.30)

To study our basic equation

D d²n/dx² + φ(n) = 0 (9.31)


we invoke an analogy with an oscillator or, more generally, with a particle in the potential field Φ(n) by means of the following correspondence:

x ↔ t (time)
Φ ↔ potential (9.32)
n ↔ q (coordinate)

Note that the spatial coordinate x is now interpreted quite formally as time, while the concentration variable is now interpreted as the coordinate q of a particle. The potential Φ is plotted in Fig. 9.5. We now ask under which condition is it possible

Fig. 9.5. Φ(n) in the case of coexistence of two phases with plane boundary layer (After F. Schlögl: Z. Physik 253, 147 (1972))

that the particle has two equilibrium positions, so that when starting from one equi-
librium position at t = −∞ it will end at the other equilibrium position for t →
+∞. From mechanics it is clear that we can meet these conditions only if the
potential heights at q₁ (≡ n₁) and q₂ (≡ n₂) are equal

(9.33)

Bringing (9.33) into another form and using (9.29), we find the condition

(9.34)

We now resort to our specific example (9.25) putting

φ(n) = −n³ + 3n² − βn + γ ≡ −ψ(n) + γ. (9.35)

This allows us to write (9.34) in the form

(9.36)


or, resolving (9.36) for γ,

γ = 1/(n₂ − n₁) ∫_{n₁}^{n₂} ψ(n) dn. (9.37)

In Fig. 9.6 we have plotted γ versus n. Apparently the equilibrium condition implies

Fig. 9.6. Maxwellian construction of the coexistence value γ

that the areas in Fig. 9.6 are equal to each other. This is exactly Maxwell's con-
struction, which is clearly revealed by the comparison exhibited in (9.27)

(9.38)

This example clearly shows the fruitfulness of comparing quite different systems
in and apart from thermal equilibrium.
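The equal-height condition (9.33) can also be checked numerically. The sketch below (Python; the value β = 2.5 and the bisection bracket are illustrative assumptions of our own) searches for the coexistence value of γ by bisection on the potential-height difference Φ(n₂) − Φ(n₁); since the cubic (9.35) is antisymmetric about n = 1, the exact answer for this example is γ = β − 2:

```python
import numpy as np

BETA = 2.5

def Phi(n, gamma):
    """Potential of (9.29) obtained by integrating phi(n) of (9.35)."""
    return -n**4/4 + n**3 - BETA*n**2/2 + gamma*n

def outer_roots(gamma):
    """Smallest and largest real root of phi(n) = 0: the two candidate phases n1, n2."""
    r = np.roots([-1.0, 3.0, -BETA, gamma])
    r = np.sort(r[np.abs(r.imag) < 1e-9].real)
    return r[0], r[-1]

def height_difference(gamma):
    n1, n2 = outer_roots(gamma)
    return Phi(n2, gamma) - Phi(n1, gamma)   # condition (9.33): this must vanish

lo, hi = 0.40, 0.60                          # bracket inside the bistable range of gamma
for _ in range(60):
    mid = 0.5*(lo + hi)
    if height_difference(lo) * height_difference(mid) <= 0:
        hi = mid
    else:
        lo = mid
gamma_c = 0.5*(lo + hi)
print(gamma_c)   # exact coexistence value for this cubic: BETA - 2 = 0.5
```

The same γ_c is obtained from the equal-area construction of Fig. 9.6, which is just (9.33) rewritten.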

9.4 Reaction-Diffusion Model with Two or Three Variables: The Brusselator and the Oregonator

In this section we first consider the following reaction scheme of the "Brusselator"

A → X
B + X → Y + D
2X + Y → 3X
X → E (9.39)

between molecules of the kinds A, X, Y, B, D, E. The following concentrations enter

294
9.4 Reaction-Diffusion Model with Two or Three Variables: Brusselator, Oregonator 283

into the equations of the chemical reaction:

(9.40)

The concentrations a and b will be treated as fixed quantities, whereas n₁ and n₂
are treated as variables. Using considerations completely analogous to those of the
preceding chapters, the reaction-diffusion equations in one dimension, x, read

∂n₁/∂t = a − (b + 1)n₁ + n₁²n₂ + D₁ ∂²n₁/∂x², (9.41)

∂n₂/∂t = bn₁ − n₁²n₂ + D₂ ∂²n₂/∂x², (9.42)

where D₁ and D₂ are diffusion constants. We will subject the concentrations
n₁ = n₁(x, t) and n₂ = n₂(x, t) to two kinds of boundary conditions; either to

n₁(0, t) = n₁(l, t) = a, (9.43a)

n₂(0, t) = n₂(l, t) = b/a, (9.43b)

or to

nⱼ remains finite for x → ±∞. (9.44)

Eqs. (9.41) and (9.42) may, of course, be formulated in two or three dimensions.
One easily verifies that the stationary state of (9.41), (9.42) is given by

n₁⁰ = a, n₂⁰ = b/a. (9.45)

To check whether new kinds of solutions occur, i.e., whether new spatial or temporal
structures arise, we perform a stability analysis of equations (9.41) and (9.42). To this end
we put

(9.46)

and linearize (9.41) and (9.42) with respect to q₁, q₂. The linearized equations are

(9.47)

(9.48)


The boundary conditions (9.43a) and (9.43b) acquire the form

(9.49)

whereas (9.44) requires qⱼ finite for x → ±∞. Putting, as everywhere in this book,

q = (q₁, q₂)ᵀ, (9.50)

(9.47) and (9.48) may be cast into the form

q̇ = Lq, (9.51)

where the matrix L is defined by

(9.52)

To satisfy the boundary conditions (9.49) we put

q(x, t) = q₀ exp (λ_l t) sin (lπx) (9.53)

with

l = 1, 2, .... (9.53a)

Inserting (9.53) into (9.51) yields a set of homogeneous linear algebraic equations
for q₀. They allow for nonvanishing solutions only if the determinant vanishes:

| −D₁′ + b − 1 − λ          a²         |
|        −b           −D₂′ − a² − λ |  = 0,   λ ≡ λ_l. (9.54)

In it we have used the abbreviation

(9.54a)

To make (9.54) vanish, λ must obey the characteristic equation

λ² − αλ + β = 0, (9.55)

where we have used the abbreviations

α = −D₁′ + b − 1 − D₂′ − a², (9.56)

and

(9.57)


An instability occurs if Re(λ) > 0. We have in mind keeping a fixed while changing
the concentration b, and looking for which b = b_c the solution (9.53) becomes
unstable. The solution of (9.55) reads, of course,

(9.58)

We first consider the case that λ is real. This requires

α² − 4β > 0, (9.59)

and λ > 0 requires

α + √(α² − 4β) > 0. (9.60)

On the other hand, if λ is admitted to be complex, then

α² − 4β < 0, (9.61)

and we need for instability

α > 0. (9.62)

We skip the transformation of the inequalities (9.59), (9.60), (9.61) and (9.62) to
the corresponding quantities a, b, D₁′, D₂′, and simply quote the final result. We
find the following instability regions:

1) Soft-mode instability, λ real, λ ≥ 0:

(9.63)

This inequality follows from the requirement β < 0, cf. (9.59).

2) Hard-mode instability, λ complex, Re λ ≥ 0:

D₂′ + D₁′ + 1 + a² < b < D₂′ − D₁′ + 1 + a² + 2a√(1 + D₂′ − D₁′). (9.64)

The left inequality stems from (9.62), the right inequality from (9.61). The instabili-
ty first occurs for that wave number for which the inequalities (9.63) or (9.64) are
fulfilled with the smallest b. Apparently a complex λ is associated
with a hard-mode excitation, while a real λ is associated with a soft mode. Since
the instability (9.63) occurs for k ≠ 0 and real λ, a static spatially inhomogeneous
pattern arises. We can now apply the procedures described in Sections 7.6 to 7.11.
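The two instability conditions can be evaluated numerically for given a, D₁, D₂. The sketch below (Python; the parameter values are illustrative assumptions of our own) uses the abbreviation D′_j = D_j l²π² of (9.54a), the soft-mode condition β < 0 in the form b > (1 + D₁′)(a² + D₂′)/D₂′, and the hard-mode condition α > 0, and finds the mode l_c that goes unstable first:

```python
import numpy as np

a, D1, D2 = 2.0, 1e-3, 8e-3
ls = np.arange(1, 51)                 # mode index l of sin(l*pi*x)
D1p = D1 * (ls * np.pi)**2            # D1' = D1 l^2 pi^2, cf. (9.54a)
D2p = D2 * (ls * np.pi)**2

# Soft mode: beta < 0  <=>  b > (1 + D1')(a^2 + D2')/D2'
b_soft = (1 + D1p) * (a**2 + D2p) / D2p
# Hard mode: alpha > 0  <=>  b > 1 + a^2 + D1' + D2'
b_hard = 1 + a**2 + D1p + D2p

l_c = int(ls[np.argmin(b_soft)])
print("soft mode first unstable at l_c =", l_c, "with b_c =", round(b_soft.min(), 4))
```

With these numbers the soft-mode threshold lies far below the hard-mode one, so a static spatially periodic pattern appears first as b is raised.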
We present the final results for two different boundary conditions. For the boundary
conditions (9.43) we put

(9.65)


where the index u refers to "unstable" in accordance with the notation of Section
7.7. The sum over j contains the stable modes, which are eliminated adiabatically,
leaving us, in the soft-mode case, with

(9.66)

provided l is even. The coefficients c₁ and c₃ are given by

(9.66a)

(9.66b)

and

(9.66c)

where l_c is the critical value of l for which the instability occurs first. A plot of ξᵤ
as a function of the parameter b is given in Fig. 5.4 (with b ≡ k and ξᵤ ≡ q).
Apparently at b = b_c a point of bifurcation occurs and a spatially periodic structure
is established (compare Fig. 9.7). If, on the other hand, l is odd, the equation
for ξᵤ reads

Fig. 9.7. Spatially inhomogeneous concentration beyond the instability point, l_c even

ξ̇ᵤ = c₁(b − b_c)ξᵤ + c₂ξᵤ² + c₃ξᵤ³, (9.67)


c₁, c₂, c₃ real, c₁ > 0, c₃ < 0.

c₁ and c₃ are given in (9.66a), (9.66b), and c₂ by

(9.67a)

ξᵤ is plotted as a function of b in Fig. 9.8. The corresponding spatial pattern is
exhibited in Fig. 9.9. We leave it to the reader as an exercise to draw the potential
curves corresponding to (9.66) and (9.67) and to discuss the equilibrium points in

Fig. 9.8. The order parameter ξᵤ as a function of the "pump" parameter b. A task for the reader: identify, for fixed b, the values of ξᵤ with the minima of the potential curves exhibited in Section 6.3

Fig. 9.9. Spatially inhomogeneous concentration beyond the instability point, l_c odd

analogy to Section 5.1. So far we have considered only instabilities connected with
the soft mode. If there are no finite boundaries, we make the following hypothesis
for q:

(9.68)

The methods described in Section 7.8 allow us to derive the following equations
for ξ_{N,k_c} ≡ ξ:
a) Soft mode

(9.69)
where

A.I = (b - bc)(I + a 2 - p,2 - ap,3)-1 + O[(b - bc)2] (9.69a)


A.~ = 4ap,«1 - p,2)(1 + ap,)k;)-1 (9.69b)
A = (9(1 - p,2)p,3(1 - ap,)2a)-I( _8a 3p,3 + 5a 2p,2 + 20ap, - 8) (9.69c)
and

(9.69d)

b) Hard mode

(9.70)


Note that A can become negative. In that case higher powers of ξ must be included.
With increasing concentration b, still more complicated temporal and spatial
structures can be expected, as has been revealed by computer calculations.
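Such computer calculations are easy to reproduce in a rough form. The sketch below (Python; grid size, time step, the parameter values, and the periodic boundary conditions, a crude stand-in for (9.44), are all illustrative choices of our own) integrates (9.41), (9.42) by explicit Euler stepping from a slightly perturbed homogeneous state; for these numbers the state n₁ = a, n₂ = b/a is soft-mode unstable and a static, spatially inhomogeneous pattern develops:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 4.0
D1, D2 = 1e-3, 8e-3
nx, L = 128, 1.0
dt, steps = 1e-4, 80000
dx = L / nx

# Small random perturbation of the homogeneous steady state (9.45)
n1 = a + 1e-2 * rng.standard_normal(nx)
n2 = b / a + 1e-2 * rng.standard_normal(nx)

for _ in range(steps):
    lap1 = (np.roll(n1, 1) - 2*n1 + np.roll(n1, -1)) / dx**2
    lap2 = (np.roll(n2, 1) - 2*n2 + np.roll(n2, -1)) / dx**2
    dn1 = a - (b + 1)*n1 + n1**2 * n2 + D1*lap1   # rhs of (9.41)
    dn2 = b*n1 - n1**2 * n2 + D2*lap2             # rhs of (9.42)
    n1 += dt*dn1
    n2 += dt*dn2

print("peak-to-peak amplitude of n1:", np.ptp(n1))
```

A nonzero peak-to-peak amplitude of n₁ signals the spatially periodic structure; below the threshold b_c the perturbation simply decays back to the homogeneous state.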
The above equations may serve as model for a number of biochemical reactions
as well as a way to understand, at least qualitatively, the Belousov-Zhabotinski
reactions where both temporal and spatial oscillations have been observed. It
should be noted that these latter reactions are, however, not stationary but occur
rather as a long-lasting, transitional state after the reagents have been put together.
A few other solutions of equations similar to (9.41) and (9.42) have also been con-
sidered. Thus in two dimensions with polar coordinates in a configuration in
which a soft and a hard mode occur simultaneously, oscillating ring patterns are
found.
Let us now come to a second model, which was devised to describe some es-
sential features of the Belousov-Zhabotinski reaction. To give an idea of the
chemistry of that process we represent the following reaction scheme:

BrO₃⁻ + Br⁻ + 2H⁺ → HBrO₂ + HOBr (C.1)
HBrO₂ + Br⁻ + H⁺ → 2HOBr (C.2)
BrO₃⁻ + HBrO₂ + H⁺ → 2BrO₂ + H₂O (C.3a)
Ce³⁺ + BrO₂ + H⁺ → Ce⁴⁺ + HBrO₂ (C.3b)
2HBrO₂ → BrO₃⁻ + HOBr + H⁺ (C.4)
nCe⁴⁺ + BrCH(COOH)₂ → nCe³⁺ + Br⁻ + oxidized products. (C.5)

Steps (C.1) and (C.4) are assumed to be bimolecular processes involving oxygen
atom transfer and accompanied by rapid proton transfers; the HOBr so produced
is rapidly consumed, directly or indirectly, with bromination of malonic acid. Step
(C.3a) is rate-determining for the overall process of (C.3a) + 2(C.3b). The Ce⁴⁺
produced in step (C.3b) is consumed in step (C.5) by oxidation of bromomalonic
acid and other organic species, with production of the bromide ion. The complete
chemical mechanism is considerably more complicated, but this simplified version
is sufficient to explain the oscillatory behavior of the system.

Computational Model
The significant kinetic features of the chemical mechanism can be simulated by
the model called the "Oregonator".

A + Y → X
X + Y → P
B + X → 2X + Z
2X → Q
Z → fY.


This computational model can be related to the chemical mechanism by the
identities A ≡ B ≡ BrO₃⁻, X ≡ HBrO₂, Y ≡ Br⁻, and Z ≡ 2Ce⁴⁺. Here we
have to deal with three variables, namely, the concentrations belonging to X, Y, Z.

Exercise

Verify that the rate equations belonging to the above scheme are (in suitable units)

ṅ₁ = s(n₂ − n₂n₁ + n₁ − qn₁²),
ṅ₂ = s⁻¹(−n₂ − n₂n₁ + f n₃),
ṅ₃ = w(n₁ − n₃).
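As a check on the exercise, the nontrivial steady state of these rate equations can be written down in closed form. In the sketch below (Python) the numerical values of s, q, w, f are the classic Field-Noyes choices, an assumption not fixed by the text; setting ṅ₂ = ṅ₃ = 0 gives n₃ = n₁ and n₂ = f n₁/(1 + n₁), and ṅ₁ = 0 then reduces to a quadratic for n₁:

```python
import numpy as np

# Classic Field-Noyes parameter values (an assumption; any positive values work here)
S, Q, W, F = 77.27, 8.375e-6, 0.161, 1.0

def oregonator_rhs(n, s=S, q=Q, w=W, f=F):
    """The three rate equations of the exercise."""
    n1, n2, n3 = n
    return np.array([s*(n2 - n2*n1 + n1 - q*n1**2),
                     (f*n3 - n2 - n2*n1)/s,
                     w*(n1 - n3)])

def fixed_point(q=Q, f=F):
    """Nontrivial steady state: n3 = n1 and n2 = f*n1/(1 + n1) reduce n1_dot = 0 to
    the quadratic q*n1**2 + (f + q - 1)*n1 - (1 + f) = 0."""
    n1 = (-(f + q - 1) + np.sqrt((f + q - 1)**2 + 4*q*(1 + f))) / (2*q)
    return np.array([n1, f*n1/(1 + n1), n1])

fp = fixed_point()
print(fp, oregonator_rhs(fp))   # the rate equations vanish at the steady state (up to rounding)
```

With the Field-Noyes values this steady state is unstable and the full system performs the relaxation oscillations characteristic of the Belousov-Zhabotinski reaction.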

9.5 Stochastic Model for a Chemical Reaction Without Diffusion. Birth and Death Processes. One Variable
In the previous sections we have treated chemical reactions from a global point of
view, i.e., we were interested in the behavior of macroscopic densities. In this and
the following sections we want to take into account the discrete nature of the
processes. Thus we now investigate the number of molecules N (instead of the con-
centration n) and how this number changes by an individual reaction. Since the
individual reaction between molecules is a random event, N is a random variable.
We want to determine its probability distribution P(N). The whole process is still
treated in a rather global way. First we assume that the reaction is spatially homo-
geneous, or, in other words, we neglect the space dependence of N. Furthermore
we do not treat details of the reaction such as the impact of local temperature or
velocity distributions of molecules. We merely assume that under given conditions
the reaction takes place, and we want to establish an equation describing how the
probability distribution P(N) changes due to such events. To illustrate the whole
procedure we consider the reaction scheme
procedure we consider the reaction scheme

A + X ⇄ 2X (rates k₁, k₁′) (9.71)

B + X ⇄ C (rates k₂, k₂′), (9.72)

which we have encountered before. The number N = 0, 1, 2, ... represents the
number of molecules of kind X. Due to any of the reactions (9.71) or (9.72), N is
changed by 1. We now want to establish the master equation for the temporal
change of P(N, t) in the sense of Chapter 4. To show how this can be achieved we
start with the simplest of the processes, namely, (9.72) in the direction k₂′. In
analogy to Section 4.1 we investigate all transitions leading to N or going away
from it.


1) Transition N → N + 1: "birth" of a molecule X (Fig. 9.10).
The number of such transitions per second is equal to the occupation probability,
P(N, t), multiplied by the transition probability (per second), w(N + 1, N).
w(N + 1, N) is proportional to the concentration c of molecules C and to the
reaction rate k₂′. It will turn out below that the final proportionality factor must
be the volume V.

2) Transition N − 1 → N (Fig. 9.10).
Because we start here from N − 1, the total rate of transitions is given by
P(N − 1, t)k₂′c·V. Taking into account the decrease of the occupation number due to
the first process N → N + 1 by a minus sign of the corresponding transition rate,
we obtain as total transition rate

2 ← VP(N − 1, t)k₂′c − P(N, t)k₂′cV. (9.73)

Fig. 9.10. How P(N) changes due to the birth of a molecule.  Fig. 9.11. How P(N) changes due to the death of a molecule

In a similar way we may discuss the first process in (9.72), with rate k₂. Here the
number of molecules N is decreased by 1 ("death" of a molecule X, cf. Fig. 9.11).
If we start from the level N, the rate is proportional to the probability of finding that
state N occupied, times the concentration b of molecules B, times the number of
molecules X present, times the reaction rate k₂. Again the proportionality factor
must be fixed later. It is indicated in the formula written below. By the same
process, however, the occupation of level N is increased by processes start-
ing from the level N + 1. For this process we find as transition rate

2 → P(N + 1, t)(N + 1)bk₂ − P(N, t)(N/V)bk₂V. (9.74)

It is now rather obvious how to derive the transition rates belonging to the pro-
cesses 1. We then find the scheme

1 → P(N − 1, t)V·((N − 1)/V)ak₁ − P(N, t)·(N/V)ak₁V (9.75)

and

1 ← P(N + 1, t)V·((N + 1)N/V²)k₁′ − P(N, t)·(N(N − 1)/V²)k₁′V. (9.76)


The rates given in (9.73)-(9.76) now occur in the master equation because they
determine the total transition rates per unit time. When we write the master equa-
tion in the general form

Ṗ(N, t) = w(N, N − 1)P(N − 1, t) + w(N, N + 1)P(N + 1, t)
− {w(N + 1, N) + w(N − 1, N)}P(N, t), (9.77)

we find for the processes (9.71) and (9.72) the following transition probabilities per
second:

w(N, N − 1) = V(ak₁(N − 1)/V + k₂′c), (9.78)

w(N, N + 1) = V(k₁′(N + 1)N/V² + k₂b(N + 1)/V). (9.79)

The scheme (9.78) and (9.79) has an intrinsic difficulty, namely, that a stationary
solution of (9.77) is just P(0) = 1, P(N) = 0 for N ≠ 0. For a related problem
compare Section 10.2. For this reason it is appropriate to include the spontaneous
creation of molecules X from molecules A, with the transition rate k₁, in (9.71) and
(9.72) as a third process

A → X. (9.80)

Thus the transition rates for the processes (9.71), (9.72) and (9.80) are

(9.81)

and

w(N, N + 1) = V(k₁′(N + 1)N/V² + k₂b(N + 1)/V). (9.82)

The solution of the master equation (9.77) with the transition rates (9.78) and
(9.79), or (9.81) and (9.82), can be easily found, at least in the stationary state,
using the methods of Chapter 4. The result reads (compare (4.119))

P(N) = P(0)·∏_{v=0}^{N−1} w(v + 1, v)/w(v, v + 1). (9.83)

The further discussion is very simple and can be performed as in Section 4.6. It turns out
that there is either an extremal value at N = 0 or at N = N₀ ≠ 0, depending on the
parameter b.
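The product formula (9.83) is easy to evaluate numerically. The sketch below (Python; all rate constants, including the rate k₀ that we use for the spontaneous process (9.80), are illustrative choices of our own) computes the normalized stationary distribution and locates its maximum for two values of b, showing the switch between an extremum at N = 0 and one at N = N₀ ≠ 0:

```python
import numpy as np

def stationary_P(w_up, w_down, N_max):
    """Stationary solution (9.83): P(N) = P(0) * prod_{v=0}^{N-1} w(v+1, v)/w(v, v+1),
    normalized on 0..N_max."""
    P = np.empty(N_max + 1)
    P[0] = 1.0
    for N in range(1, N_max + 1):
        P[N] = P[N - 1] * w_up(N - 1) / w_down(N)
    return P / P.sum()

# Rates modeled on (9.78), (9.79) plus the spontaneous process (9.80); k0 is our
# hypothetical rate constant for A -> X, the other symbols are those of the text.
a, c, V = 1.0, 1.0, 1.0
k1, k1p, k2, k2p, k0 = 1.0, 0.01, 1.0, 0.05, 0.1

argmax = {}
for b in (0.5, 2.0):
    w_up = lambda N: V * (k0*a + a*k1*N/V + k2p*c)                    # w(N+1, N)
    w_down = lambda N, b=b: V * (k1p*N*(N - 1)/V**2 + k2*b*N/V)       # w(N, N+1)
    argmax[b] = int(np.argmax(stationary_P(w_up, w_down, 300)))
print(argmax)   # small b: maximum at N0 != 0; large b: maximum at N = 0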
In conclusion we must fix the proportionality constants which have been left
open in deriving (9.73) to (9.76). These constants may be easily found if we require


that the master equation leads to the same equation of motion for the density, n,
which we have introduced in Section 9.2, at least in the case of large numbers N.
To achieve this we derive a mean value equation for N by multiplying (9.77) by N
and summing over N. After trivial manipulations we find

d⟨N⟩/dt = ⟨w(N + 1, N)⟩ − ⟨w(N − 1, N)⟩, (9.84)

where, as usual,

⟨N⟩ = Σ_{N=0}^∞ N P(N, t)

and

⟨w(N ± 1, N)⟩ = Σ_{N=0}^∞ w(N ± 1, N) P(N, t).

Using (9.78), (9.79) we obtain

d⟨N⟩/dt = V{ak₁(1/V)⟨N⟩ + k₂′c − k₁′(1/V²)⟨N(N − 1)⟩ − k₂b(1/V)⟨N⟩}. (9.85)

A comparison with (9.4), (9.5) exhibits complete agreement provided that we put
n = (1/V)⟨N⟩, neglect 1 compared to N, and approximate ⟨N(N − 1)⟩ by
⟨N⟩². This latter replacement would be exact if P were a Poisson distribution
(compare the exercise on (2.12)). In general P is not a Poisson distribution, as can
be checked by studying (9.83) with the explicit forms for the w's. However, we ob-
tain such a distribution if each of the two reactions (9.71) and (9.72) fulfils the
requirement of detailed balance individually. To show this we decompose the transi-
tion probabilities (9.78) and (9.79) into a sum of probabilities referring to process
1 or process 2:

w(N, N − 1) = w₁(N, N − 1) + w₂(N, N − 1), (9.86)

w(N − 1, N) = w₁(N − 1, N) + w₂(N − 1, N), (9.87)

where we have used the abbreviations

w₁(N, N − 1) = ak₁(N − 1), (9.88)

w₁(N − 1, N) = k₁′N(N − 1)/V, (9.89)

w₂(N, N − 1) = Vk₂′c, (9.90)

w₂(N − 1, N) = k₂bN. (9.91)

If detailed balance holds individually we must require

w₁(N, N − 1)P(N − 1) = w₁(N − 1, N)P(N), (9.92)


and

w₂(N, N − 1)P(N − 1) = w₂(N − 1, N)P(N). (9.93)

Dividing (9.92) by (9.93) and using the explicit forms (9.88)-(9.91), we find the
relation

ak₁/(k₁′N/V) = k₂′c/(k₂bN/V). (9.94)

Apparently both sides of (9.94) are of the form μ/N, where μ is a certain constant.
(9.94) is equivalent to the law of mass action. (According to this law,

product of final concentrations / product of initial concentrations = const.

In our case, the numerator is n·n·c, the denominator a·n·b·n.) Using the fact that (9.94)
equals μ/N, we readily verify that

w(N, N − 1) = (μ/N) w(N − 1, N) (9.95)

holds. Inserting this relation into (9.83) we obtain

P(N) = P(0) μᴺ/N!. (9.96)

P(O) is determined by the normalization condition and immediately found to be

(9.97)

Thus in the present case we, in fact, find the Poisson distribution

(9.98)

In general, however, we deal with a nonequilibrium situation where the individual
laws of detailed balance (9.92) and (9.93) are not valid, and we consequently obtain a
non-Poissonian distribution. It can be shown quite generally that in thermal
equilibrium the detailed balance principle holds, so that we always get the Poisson
distribution. This is, however, no longer so in other cases far from equilibrium.
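The Poissonian character under detailed balance can also be seen in a direct stochastic (Gillespie-type) simulation of the two reactions; this algorithm is not discussed in the text, and the rate constants below are our own illustrative choice, tuned so that ak₁/k₁′ = k₂′c/(k₂b) = μ = 10:

```python
import numpy as np

def simulate(T=2000.0, seed=1):
    """Gillespie simulation of (9.71) + (9.72); the rate constants obey the
    individual detailed-balance condition a*k1/k1' = k2'*c/(k2*b) = mu = 10 (V = 1)."""
    a_k1, k1p, k2_b, k2p_c = 1.0, 0.1, 1.0, 10.0
    rng = np.random.default_rng(seed)
    N, t, occ = 10, 0.0, {}
    while t < T:
        birth = a_k1*N + k2p_c               # total rate N -> N + 1
        death = k1p*N*(N - 1) + k2_b*N       # total rate N -> N - 1
        total = birth + death
        dt = rng.exponential(1.0/total)      # waiting time to the next reaction
        occ[N] = occ.get(N, 0.0) + dt        # time-weighted histogram of N
        t += dt
        N += 1 if rng.random() < birth/total else -1
    Ns = np.array(sorted(occ))
    p = np.array([occ[n] for n in Ns]); p /= p.sum()
    mean = (Ns*p).sum()
    var = ((Ns - mean)**2 * p).sum()
    return mean, var

mean, var = simulate()
print(mean, var, var/mean)   # Fano factor var/mean close to 1: Poissonian, cf. (9.96)
```

Breaking the detailed-balance tuning (e.g. raising k₂′c while lowering ak₁) leaves the mean similar but pushes the Fano factor away from 1, the non-Poissonian situation described above.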

Exercises on 9.5

1) Derive the transition rates for the master equation (9.77) for the following processes:

A + 2X ⇄ 3X (rates k₁, k₁′) (E.1)

B + X ⇄ C (rates k₂, k₂′). (E.2)

2) Discuss the extrema of the probability distribution as a function of the concentrations a, b.

3) Derive (9.84) explicitly.

4) Treat a set of reactions

Aⱼ + lⱼX ⇄ Bⱼ + (lⱼ + 1)X (rates kⱼ, kⱼ′), j = 1, ..., k,

under the requirement of detailed balance for each reaction j. Show that P(N) is
Poissonian.
Hint: Use wⱼ(N, N − 1) in the form

wⱼ(N, N − 1) ∝ (N − 1)(N − 2)···(N − lⱼ),

wⱼ(N − 1, N) ∝ N(N − 1)(N − 2)···(N − lⱼ + 1).

Note that for N ≤ lⱼ a division by wⱼ is not possible!

9.6 Stochastic Model for a Chemical Reaction with Diffusion. One Variable
In most chemical and biochemical reactions diffusion plays an important role,
especially when we want to investigate the formation of spatial patterns. To obtain
a proper description we divide the total volume into small cells of volume v. We
distinguish the cells by an index l and denote the number of molecules in that cell
by N_l. Thus we now investigate the joint probability

P(..., N_l, N_{l+a}, ...) (9.99)

for finding the cells l occupied by N_l molecules. In this chapter we consider only a
single kind of molecule, but the whole formalism can be readily generalized to
several kinds. The number of molecules N_l now changes due to two causes; namely,
due to chemical reactions as before, but now also on account of diffusion. We
describe the diffusion again as a birth and death scheme in which one molecule is
annihilated in one cell and created again in a neighboring cell. To find the total
change of P due to that process, we have to sum over all neighboring cells
l + a of the cell under consideration, and we have to sum over all cells l. The


temporal change of P due to diffusion thus reads

Ṗ(..., N_l, ...)|_diffusion = Σ_{l,a} D′{(N_{l+a} + 1)P(..., N_l − 1, N_{l+a} + 1, ...)
− N_l P(..., N_l, N_{l+a}, ...)}. (9.100)

The total change of P has the general form

Ṗ = Ṗ|_diffusion + Ṗ|_reaction, (9.101)

where we may insert for Ṗ|_reaction the rhs of (9.77) or any other reaction scheme.
For a nonlinear reaction scheme, (9.101) cannot be solved exactly. We therefore
employ another method, namely, we derive equations of motion for mean values
or correlation functions. Having in mind that we go from the discrete cells to a
continuum, we shall replace the discrete index l by the continuous coordinate x,
l → x. Correspondingly we introduce a new stochastic variable, namely, the local
particle density

ρ(x) = N_l/v (9.102)

and its average value

n(x, t) = (1/v)⟨N_l⟩ = (1/v) Σ_{N_j} N_l P(..., N_l, ...). (9.103)

We further introduce a correlation function for the densities at space points x and
x′:

g(x, x′, t) = (1/v²)(⟨N_l N_{l′}⟩ − ⟨N_l⟩⟨N_{l′}⟩) − δ(x − x′)n(x, t), (9.104)

where we use the definition

⟨N_l N_{l′}⟩ = Σ_{N_j} N_l N_{l′} P(..., N_l, ..., N_{l′}, ...). (9.105)

As a concrete example we now take the reaction scheme (9.71), (9.72). We assume,
however, that the back reaction of (9.71) can be neglected, i.e., k₁′ = 0. Multiplying the
corresponding equation (9.101) by (9.102) and taking the average (9.103) on both
sides, we obtain

∂n(x, t)/∂t = D∇²n(x, t) + (κ₁ − κ₂)n(x, t) + κ₁β, (9.106)


where we have defined the diffusion constant D by

D = D'lv. (9.107)

(9.107a)

(9.107b)

We leave the details of how to derive (9.106) to the reader as an exercise. In a
similar fashion one obtains for the correlation function (9.104) the equation

∂g/∂t = D(∇ₓ² + ∇ₓ′²)g + 2(κ₁ − κ₂)g + 2κ₁ n(x, t) δ(x − x′). (9.108)

A comparison of (9.106) with (9.28), (9.5), (9.4) reveals that we have obtained
exactly the same equation as in the nonstochastic treatment. Furthermore we find
that putting k₁′ = 0 amounts to neglecting the nonlinear term of (9.8). This is the
deeper reason why (9.106) and also (9.108) can be solved exactly. The steady state
solution of (9.108) reads

(9.109)

where the density of the steady state is given by

(9.110)

Apparently the correlation function drops off with increasing distance between x
and x′. The range of correlation is given by the inverse of the factor multiplying |x − x′|
in the exponential. Thus the correlation length is

(9.111)

When we consider the effective reaction rates κ₁ and κ₂ (which are proportional
to the concentrations of the molecules A and B), we find that for κ₁ = κ₂ the coherence
length becomes infinite. This is quite analogous to what happens in phase transi-
tions of systems in thermal equilibrium. Indeed, we have already put the chemical
reaction models under consideration in parallel with systems undergoing a phase
transition (compare Sect. 9.2). We simply mention that one can also derive an
equation for temporal correlations. It turns out that at the transition point the
correlation time also becomes infinite. The whole process is very similar to the non-
equilibrium phase transition of the continuous-mode laser. We now study molecule
number fluctuations in small volumes and their correlation function. To this end
we integrate the stochastic density (9.102) over a volume ΔV where we assume

that the volume has the shape of a sphere with radius R:

(9.112)

It is a simple matter to calculate the variance of the stochastic variable (9.112), which
is defined as usual by

(9.113)

Using the definition (9.104) and the abbreviation R/l_c = r, we obtain after ele-
mentary integrations

(9.114)

We discuss the behavior of (9.114) in several interesting limiting cases. At the
critical point, where l_c → ∞, we obtain

(9.115)

i.e., an ever increasing variance with increasing distance. Keeping l_c finite and
letting R → ∞, the variance becomes proportional to the square of the correlation
length

(9.116)

For volumes with diameter small compared to the correlation length, R ≪ l_c, we
obtain

(9.117)

Evidently for R → 0 (9.117) becomes Poissonian, which is in agree-
ment with the postulate of local equilibrium in small volumes. On the other hand,
for large R ≫ l_c, the variance reads

(9.118)

This result shows that for R → ∞ the variance becomes independent of the distance
and would also be obtained from a master equation neglecting diffusion. It has been proposed
to measure such critical fluctuations by fluorescence spectroscopy, which should be
much more efficient than light-scattering measurements. The divergences which
occur at the transition point κ₁ = κ₂ are rounded off if we take the nonlinear term
∝ n², i.e., k₁′ ≠ 0, into account. We shall treat such a case in the next chapter,
taking a still more sophisticated model.

Exercises on 9.6

1) Derive (9.106) from (9.101) with (9.100), (9.77), (9.78), (9.79) for k₁′ = 0.
Hint: Use exercise 3) of Section 9.5. Note that in our present dimensionless
units of space

Σ_a ⟨N_{l+a} + N_{l−a} − 2N_l⟩ = ∇²⟨N_l⟩.

2) Derive (9.108) from the same equations as in exercise 1) for k₁′ = 0.
Hint: Multiply (9.101) by N_l N_{l′} and sum over all N_j.

3) Solve (9.101) for the steady state with Ṗ|_reaction = 0.
Hint: Use the fact that detailed balance holds. Normalize the resulting prob-
ability distribution in a finite volume, i.e., for a finite number of cells.

4) Transform (9.100), one-dimensional case, into the Fokker-Planck equation

ḟ = ∫ dx { −(δ/δρ(x))(D d²ρ(x)/dx²) f + D ((d/dx)(δ/δρ(x)))² (ρ(x) f) }.

Hint: Divide the total volume into cells which still contain a number N_l ≫ 1. It is
assumed that P changes only little for neighboring cells. Expand the rhs of
(9.100) into a power series of "1" up to second order. Introduce ρ(x) (9.102),
and pass to the limit in which l becomes a continuous variable x, hereby replacing
P by f = f{ρ(x)} and using the variational derivative δ/δρ(x) instead of ∂/∂N_l.
(For its use see Haken: Quantum Field Theory of Solids, North-Holland,
Amsterdam 1976.)

9.7* Stochastic Treatment of the Brusselator Close to Its Soft-Mode Instability

a) Master Equation and Fokker-Planck Equation


We consider the reaction scheme of Section 9.4

A → X
B + X → Y + D (9.119)
2X + Y → 3X
X → E,

where the concentrations of the molecules of kind A, B are externally given and


kept fixed, while the numbers of molecules of kind X and Y are assumed to be
variable. They are denoted by M, N respectively. Because we want to take into
account diffusion, we divide the space in which the chemical reaction takes place
into cells which still contain a large number of molecules (compared to unity).
We distinguish the cells by an index l and denote the numbers of molecules in cell
l by M_l, N_l. We again introduce dimensionless constants a, b which are propor-
tional to the concentrations of the molecules of kind A, B. Extending the results of
Sections 9.5 and 9.6, we obtain the following master equation for the probability
distribution P(..., M_l, N_l, ...), which gives us the joint probability of finding
M_{l′}, N_{l′}, ..., M_l, N_l, ... molecules in cells l′, ..., l:

Ṗ(...; M_l, N_l; ...) = Σ_l v[aP(...; M_l − 1, N_l; ...)
+ b(M_l + 1)v⁻¹ P(...; M_l + 1, N_l − 1; ...) + (M_l − 2)(M_l − 1)(N_l + 1)v⁻³
·P(...; M_l − 1, N_l + 1; ...) + (M_l + 1)v⁻¹ P(...; M_l + 1, N_l; ...)
− P(...; M_l, N_l; ...)(a + (b + 1)M_l v⁻¹ + M_l(M_l − 1)N_l v⁻³)]
+ Σ_{l,a}[D₁′{(M_{l+a} + 1)·P(...; M_l − 1, N_l; ...; M_{l+a} + 1, N_{l+a}; ...)
− M_{l+a} P(...; M_l, N_l; ...; M_{l+a}, N_{l+a}; ...)}
+ D₂′{(N_{l+a} + 1)·P(...; M_l, N_l − 1; ...; M_{l+a}, N_{l+a} + 1; ...)
− N_{l+a} P(...; M_l, N_l; ...; M_{l+a}, N_{l+a}; ...)}]. (9.120)

In it v is the volume of a cell l. The first sum takes into account the chemical reac-
tions; the second sum, containing the "diffusion constants" D₁′, D₂′, takes into ac-
count the diffusion of the two kinds of molecules. The sum over a runs over the
nearest neighboring cells of the cell l. If the numbers M_l, N_l are sufficiently large
compared to unity, and if the function P is slowly varying with respect to its argu-
ments, we may proceed to the Fokker-Planck equation. A detailed analysis shows
that this transformation is justified within a well-defined region around the soft-
mode instability. This implies in particular a ≫ 1 and μ ≡ (D₁/D₂)^{1/2} < 1.
To obtain the Fokker-Planck equation, we expand expressions of the type (M_l + 1)
P(..., M_l + 1, N_l, ...) etc. into a power series with respect to "1", keeping the
first three terms (cf. Sect. 4.2). Furthermore we let l become a continuous index
which may be interpreted as the space coordinate x. This requires that we replace
the usual derivative by a variational derivative. Incidentally, we replace M_l/v, N_l/v
by the densities M(x), N(x), which we had denoted by ρ(x) in (9.102), and P(..., M_l,
N_l, ...) by f(..., M(x), N(x), ...). Since the detailed mathematics of this proce-
dure is rather lengthy we just quote the final result:

ḟ = ∫ d³x[ −{(a − (b + 1)M + M²N + D₁∇²M)f}_{M(x)}
−{(bM − M²N + D₂∇²N)f}_{N(x)} + ½{(a + (b + 1)M + M²N)f}_{M(x),M(x)}
−{(bM + M²N)f}_{M(x),N(x)} + ½{(bM + M²N)f}_{N(x),N(x)}
+ D₁(∇(δ/δM(x)))²(Mf) + D₂(∇(δ/δN(x)))²(Nf)]. (9.121)


The indices M(x) or N(x) indicate the variational derivative with respect to M(x)
or N(x). D₁ and D₂ are the usual diffusion constants. The Fokker-Planck equation
(9.121) is still far too complicated to allow for an explicit solution. We therefore
proceed in several steps: We first use the results of the stability analysis of the
corresponding rate equations without fluctuations (cf. Section 9.4). According to
these considerations there exist stable, spatially homogeneous and time-independent
solutions M(x) = a, N(x) = b/a, provided b < b_c. We therefore introduce new
variables q_j(x) by

M(x) = a + q₁(x), N(x) = b/a + q₂(x)

and obtain the following Fokker-Planck equation:

ḟ = ∫ dx[ −{((b − 1)q₁ + a²q₂ + g(q₁, q₂) + D₁∇²q₁)f}_{q₁(x)}
−{(−bq₁ − a²q₂ − g(q₁, q₂) + D₂∇²q₂)f}_{q₂(x)}
+ ½{D₁₁(q)f}_{q₁(x),q₁(x)} − {D₁₂(q)f}_{q₁(x),q₂(x)}
+ ½{D₂₂(q)f}_{q₂(x),q₂(x)} + D₁(∇(δ/δq₁(x)))²(a + q₁)f
+ D₂(∇(δ/δq₂(x)))²(b/a + q₂)f]. (9.122)

f is now a functional of the variables q_j(x). We have used the following abbrevia-
tions:

g(q₁, q₂) = 2aq₁q₂ + bq₁²/a + q₁²q₂, (9.123)
D₁₁ = 2a + 2ab + (3b + 1)q₁ + a²q₂ + 2aq₁q₂ + (b/a)q₁² + q₁²q₂, (9.124)
D₁₂ = D₂₂ = 2ab + 3bq₁ + bq₁²/a + a²q₂ + 2aq₁q₂ + q₁²q₂. (9.125)

b) The further treatment proceeds along lines described in previous chapters.
Because the details are rather space-consuming, we only indicate the individual
steps:
1) We represent q(x, t) as a superposition of the eigensolutions of the linearized
equations (9.47), (9.48). Use is made of the wave-packet formulation of Section 7.7.
The expansion coefficients ξ_μ(x, t) are slowly varying functions of space and time.
2) We transform the Fokker-Planck equation to the ξ_μ's, which still describe both
unstable and stable modes.
3) We eliminate the ξ's of the stable modes by the method of Section 7.7. The
final Fokker-Planck equation then reads:

(9.126)

where

(9.127)

9.7 Stochastic Treatment of the Brusselator Close to Its Soft-Mode Instability 301

with

\lambda^{(1)}\xi \approx 4a\mu((1-\mu^2)(1+a\mu)k_c^2)^{-1}\nabla^2\xi,   (9.128)

\lambda_0 = (b - b_c)(1 + a\mu - \mu^2 - a\mu^3)^{-1} + O((b - b_c)^2).   (9.129)

We have further

and

H_k(\xi) = \sum_{k'k''} I_{k'k''k}\, C_1\, \xi_{k'}(x)\xi_{k''}(x)
- \sum_{k'k''k'''} \bar a_1 J_{kk'k''k'''}\, \xi_{k'}(x)\xi_{k''}(x)\xi_{k'''}(x).   (9.130)

For a definition of I and J, see (7.81) and (7.83). For a discussion of the evolving
spatial structures, the "selection rules" inherent in I and J are important. One
readily verifies in one dimension:
I = 0 for boundary conditions (9.44), i.e., if the χ_k's are plane waves, and I ≈ 0 for
χ_k ∝ sin kx and k ≫ 1.
Further, J_{kk'k''k'''} ≠ 0 only if two pairs of k's out of k, k', k'', k''' satisfy

k_1 = −k_2 = ±k_c,   k_3 = −k_4 = ±k_c

if plane waves are used, or k = k' = k'' = k''' = k_c if χ_k ∝ sin kx. We have evalu-
ated \bar a_1 explicitly for plane waves. The Fokker-Planck equation (9.126) then reduces
to

\dot f = \Big[ -\int d^3x\,(\delta/\delta\xi(x))\big((\lambda_0 + \lambda^{(1)}\nabla^2)\xi(x) - A\xi^3(x)\big)
+ \int d^3x\, G_{11}\,\delta^2/\delta\xi(x)^2 \Big] f,   (9.131)

where the coefficient A reads:

(9.132)

Note that for sufficiently large aμ the coefficient A becomes negative. A closer
inspection shows that under this condition the mode with k = 0 approaches a
marginal situation, which then requires us to consider the modes with k = 0 (and
|k| = 2k_c) as unstable modes. We have met equations like (9.131), or the corresponding
Langevin equations, at several instances in our book. They show that at its soft-mode
instability the chemical reaction undergoes a nonequilibrium phase transition of
second order, in complete analogy to the laser or the Bénard instability (Chap. 8).


9.8 Chemical Networks

In Sections 9.2-9.4 we have met several explicit examples for equations describing
chemical processes. If we may assume spatial homogeneity, these equations have
the form

(9.133)

Equations of such a type occur also in quite different disciplines; here we have in
mind network theory, dealing with electrical networks. In that context the n's have the
meaning of charges, currents, or voltages. Electronic devices, such as radios or
computers, contain networks. A network is composed of single elements (e.g.,
resistors, tunnel diodes, transistors) each of which can perform a certain function.
It can for example amplify a current or rectify it. Furthermore, certain devices can
act as memory or perform logical steps, such as "and", "or", "no". In view of the
formal analogy between a system of equations (9.133) of chemical reactions and
those of electrical networks, the question arises whether we can devise logical
elements by means of chemical reactions. In network theory and related disciplines
it is shown that for a given logical process a set of equations of the type (9.133) can
be constructed with well-defined functions F_j.
These rather abstract considerations can be easily explained by looking at our
standard example of the overdamped anharmonic oscillator whose equation was
given by

\dot q = \alpha q - \beta q^3.   (9.134)

In electronics this equation could describe e.g. the charge q of a tunnel diode of the
device of Fig. 7.3. We have seen in previous chapters that (9.134) allows for two
stable states q_1 = +√(α/β), q_2 = −√(α/β), i.e., it describes a bistable element which
can store information. Furthermore we have discussed in Section 7.3 how we can
switch this element, e.g., by changing α. When we want to translate this device into
a chemical reaction, we have to bear in mind that the concentration variable, n,
is intrinsically nonnegative. However, we can easily pass from the variable q to a
positive variable n by making the replacement

q = n − q_0,   (9.135)

so that both stable states lie at positive values. Introducing (9.135) into (9.134) and
rearranging this equation, we end up with

\dot n = \alpha_1 + \alpha_2 n + \alpha_3 n^2 - \beta n^3,   (9.136)

where we have used the abbreviations

\alpha_1 = \beta q_0^3 - \alpha q_0,   (9.137)


\alpha_2 = \alpha - 3\beta q_0^2,   (9.138)

\alpha_3 = 3\beta q_0.   (9.139)

Since (9.134) allowed for a bistable state for q, so does (9.136) for n. The next ques-
tion is whether (9.136) can be realized by chemical reactions. Indeed, in the preceding
chapters we have met reaction schemes giving rise to the first three terms in
(9.136). The last term can be realized by an adiabatic elimination process of a fast
chemical reaction with a quickly transformed intermediate state. The steps for
modeling a logical system are now rather obvious: 1) Look at the corresponding
logical elements of an electrical network and their corresponding differential
equations; 2) Translate them in analogy to the above example. There are two
main problems. One, which can be solved after some inspection, is that the im-
portant operation points must lie at positive values of n. The second problem is,
of course, one of chemistry; namely, how to find chemical processes in reality
which fulfil all the requirements with respect to the directions the processes go,
reaction constants etc.
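The translation from (9.134) to the shifted variable can be checked numerically. The sketch below assumes the shifted equation takes the cubic form \dot n = α_1 + α_2 n + α_3 n^2 − βn^3 with the coefficients derived from the substitution q = n − q_0; all rate constants are illustrative, and q_0 is chosen larger than √(α/β) so that both operation points are positive, as required for a concentration variable.

```python
# all rate constants illustrative; q0 > sqrt(alpha/beta) keeps both
# operation points at positive concentrations
alpha, beta, q0 = 1.0, 1.0, 2.0
a1 = beta * q0**3 - alpha * q0    # alpha_1, from the substitution
a2 = alpha - 3.0 * beta * q0**2   # alpha_2
a3 = 3.0 * beta * q0              # alpha_3

def rate(n):
    # dn/dt = alpha_1 + alpha_2 n + alpha_3 n^2 - beta n^3
    return a1 + a2 * n + a3 * n**2 - beta * n**3

def relax(n, dt=1e-3, steps=20000):
    """Simple Euler relaxation of the concentration n."""
    for _ in range(steps):
        n += dt * rate(n)
    return n

low = relax(0.5)    # starts below the unstable point n = q0
high = relax(3.5)   # starts above it
print(low, high)    # the two stable states q0 -/+ sqrt(alpha/beta)
```

Both attractors lie at positive n, so the element stores one bit in a chemically admissible variable.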
Once the single elements are realized by chemical reactions, a whole network can
be constructed. We simply mention a typical network which consists of the follow-
ing elements: flip-flop (that is the above bistable element which can be switched),
delays (which act as memory) and the logical elements "and", "or", "no".
Our above considerations referred to spatially homogeneous reactions, but by
dividing space into cells and permitting diffusion, we can now construct coupled
logical networks. There are, of course, a number of further extensions possible,
for example, one can imagine cells separated by membranes which may be only
partly permeable for some of the reactants, or whose permeability can be switched.
Obviously, these problems readily lead to basic questions of biology.

10. Applications to Biology

In theoretical biology the question of cooperative effects and self-organization


nowadays plays a central role. In view of the complexity of biological systems
this is a vast field. We have selected some typical examples out of the following
fields:
1) Ecology, population-dynamics
2) Evolution
3) Morphogenesis
We want to show what the basic ideas are, how they are cast into a mathematical
form, and what main conclusions can be drawn at present. Again the vital interplay
between "chance" and "necessity" will transpire, especially in evolutionary
processes. Furthermore, most of the phenomena allow for an interpretation as non-
equilibrium phase transitions.

10.1 Ecology, Population-Dynamics


What one wants to understand here is basically the distribution and abundance
of species. To this end, a great amount of information has been gathered, for ex-
ample about the populations of different birds in certain areas. Here we want to
discuss some main aspects: What controls the size of a population; how many
different kinds of populations can coexist?
Let us first consider a single population which may consist of bacteria, or plants
of a given kind, or animals of a given kind. It is a hopeless task to describe the fate
of each individual. Rather we have to look for "macroscopic" features describing
the populations. The most apparent feature is the number of individuals of a
population. Those numbers play the role of order parameters. A little thought will
show that they indeed govern the fate of the individuals, at least "on the average".
Let the number (or density) of individuals be n. Then n changes according to the
growth rate, g, (births) minus the death rate, d:

\dot n = g - d.   (10.1)

The growth and death rates depend on the number of individuals present. In the
simplest form we assume

g = γn,   (10.2)
d = δn,   (10.3)

where the coefficients γ and δ are independent of n. We then speak of density-
independent growth. The coefficients γ and δ may depend on external parameters,
such as available food, temperature, climate and other environmental factors.
As long as these factors are kept constant, the equation

\dot n = αn ≡ (γ − δ)n   (10.4)

allows for either an exponentially growing or an exponentially decaying popula-
tion. (The marginal state γ = δ is unstable against small perturbations of γ or δ.)
Therefore, no steady state would exist. The essential conclusion to be drawn is
that the coefficients γ or δ or both depend on the density n. An important reason
among others for this lies in a limited food supply, as discussed earlier in this book
in some exercises. The resulting equation is of the type

\dot n = α_0 n − βn^2   ("Verhulst" equation)   (10.5)

where −βn^2 stems from a depletion of the food resources. It is assumed that new
food is supplied only at a constant rate. The behavior of a system described by
(10.5) was discussed in detail in Section 5.4.
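A minimal numerical illustration of (10.5), with assumed values for α_0 and β: any positive initial population relaxes to the carrying capacity α_0/β.

```python
def verhulst(n, alpha0=1.0, beta=0.5, dt=0.01, t_end=30.0):
    """Euler integration of dn/dt = alpha0*n - beta*n^2, eq. (10.5)."""
    for _ in range(int(t_end / dt)):
        n += dt * (alpha0 * n - beta * n * n)
    return n

# any positive initial value relaxes to alpha0/beta = 2
print(verhulst(0.01), verhulst(5.0))
```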
We now come to several species. Several basically different cases may occur:
1) Competition and coexistence
2) Predator-prey relation
3) Symbiosis

1) Competition and Coexistence


If different species live on different kinds of food and do not interact with each
other (e.g., by killing or using the same places for breeding, etc.) they can certainly
coexist. We then have for the corresponding species equations of the type

\dot n_j = α_j n_j − β_j n_j^2,   j = 1, 2, …   (10.6)

Things become much more complicated if different species live or try to live on the
same food supply, and/or depend on similar living conditions. Examples are pro-
vided by plants extracting phosphorus from soil, one plant depriving the other
of sunlight by its leaves, birds using the same holes to build their nests, etc.
Since the basic mathematical approach remains unaltered in these other cases,
we talk explicitly only about "food". We have discussed this case previously
(Sect. 5.4) and have shown that only one species survives which is defined as the
fittest. Here we exclude the (unstable) case that, accidentally., all growth and
decay rates coincide.
For a population to survive it is therefore vital to improve its specific rates
α_j, β_j by adaptation. Furthermore, for a possible coexistence, additional food supply
is essential. Let us consider as an example two species living on two "overlapping"
food supplies. This can be modelled as follows, denoting the amount of available

318
10.1 Ecology, Population-Dynamics 307

food by N_1 or N_2:

\dot n_1 = (α_{11} N_1 + α_{12} N_2) n_1 − δ_1 n_1,   (10.7)

\dot n_2 = (α_{21} N_1 + α_{22} N_2) n_2 − δ_2 n_2.   (10.8)

Generalizing (Sect. 5.4) we establish equations for the food supply

\dot N_1 = γ_1(N_1^0 − N_1) − μ_{11} n_1 − μ_{12} n_2,   (10.9)

\dot N_2 = γ_2(N_2^0 − N_2) − μ_{21} n_1 − μ_{22} n_2.   (10.10)

Here γ_j N_j^0 is the rate of food production, and −γ_j N_j is the decrease of food due
to internal causes (e.g. by rotting). Adopting the adiabatic elimination hypothesis
(cf. Sect. 7.1), we assume that the temporal change of the food supply may be
neglected, i.e., \dot N_1 = \dot N_2 = 0. This allows us to express N_1 and N_2 directly by n_1
and n_2. Inserting the resulting expressions into (10.7), (10.8) leaves us with equa-
tions of the following type:

\dot n_1 = [(α'_{11} N_1^0 + α'_{12} N_2^0) − δ_1 − (β_{11} n_1 + β_{12} n_2)] n_1,   (10.11)

\dot n_2 = [(α'_{21} N_1^0 + α'_{22} N_2^0) − δ_2 − (β_{21} n_1 + β_{22} n_2)] n_2.   (10.12)

From \dot n_1 = \dot n_2 = 0 we obtain stationary states n_1^0, n_2^0. By means of a discussion
of the "forces" (i.e., the rhs of (10.11/12)) in the n_1–n_2 plane, one may easily
discuss when coexistence is possible, depending on the parameters of the system
(cf. Fig. 10.1a-c). This example can be immediately generalized to several kinds
of species and of food supply. A detailed discussion of coexistence becomes tedious,
however.
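The elimination step and the resulting coexistence can be sketched numerically. All coefficients below are hypothetical; the consumption matrix μ is chosen nearly diagonal, i.e., the two food supplies overlap only weakly, which is the coexistence regime of case (c) in Fig. 10.1.

```python
# illustrative parameters (all hypothetical)
a = [[1.0, 0.2], [0.2, 1.0]]      # alpha_ij: species i feeding on food j
delta = [0.2, 0.2]                # death rates delta_i
gamma = [1.0, 1.0]                # food regeneration rates gamma_j
N0 = [1.0, 1.0]                   # food production levels N_j^0
mu = [[0.6, 0.1], [0.1, 0.6]]     # consumption of food j by species i

def food(n):
    """Adiabatic elimination: solve dN_j/dt = 0 of (10.9), (10.10)."""
    return [max(0.0, N0[j] - (mu[j][0] * n[0] + mu[j][1] * n[1]) / gamma[j])
            for j in range(2)]

def step(n, dt=0.01):
    N = food(n)
    return [n[i] + dt * n[i] * (a[i][0] * N[0] + a[i][1] * N[1] - delta[i])
            for i in range(2)]

n = [0.1, 0.12]
for _ in range(100000):
    n = step(n)
print(n)   # with mostly separate food sources, both species persist
```

With these (assumed) numbers the stationary state has both populations at the same positive value, i.e., stable coexistence.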
From our above considerations it becomes apparent why ecological "niches"

Fig. 10.1a-c. The eqs. (10.11), (10.12) for different parameters leading to different stable configu-
rations. (a) n_1 = 0, n_2 = C is the only stable point, i.e., only one species survives. (b) n_1 = 0, n_2
≠ 0 or n_2 = 0, n_1 ≠ 0 are two stable points, i.e., one species or the other one can survive. (c) n_1
≠ 0, n_2 ≠ 0, the two species can coexist. If the field of arrows is made closer, one finds the trajec-
tories discussed in Section 5.2, and the points where the arrows end are sinks in the sense of that
chapter


are so important for survival and why surviving species are sometimes so highly
specialized. A well-known example for coexistence and competition is the distribu-
tion of flora according to different heights in mountainous regions. There, well-
defined belts of different kinds of plants are present. A detailed study of such
phenomena is performed in biogeography.

2) Predator-Prey-Relation
The basic phenomenon is as follows:
There are two kinds of animals: prey animals living on plants, and predators
living on the prey. Examples are fishes in the Adriatic Sea, or hares and lynxes.
The latter system has been studied in detail in nature and the theoretical predic-
tions have been substantiated. The basic Lotka-Volterra equations have been
discussed in Section 5.4. They read

\dot n_1 = α_1 n_1 − β n_1 n_2,   (10.13)
\dot n_2 = −α_2 n_2 + β n_1 n_2,   (10.14)

where (10.13) refers to prey and (10.14) to predators. As was shown in Section 5.4
a periodic solution results: When predators become too numerous, the prey is eaten
up too quickly. Thus the food supply of the predators decreases and consequently
their population decreases. This allows for an increase of the number of prey
animals so that a greater food supply becomes available for the predators whose
number now increases again. When this problem is treated stochastically, a serious
difficulty arises: Both populations die out (cf. Sect. 10.2).
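A short numerical sketch of the periodic solution (all parameter values assumed). The logarithmic expression used below is the standard conserved quantity of the Lotka-Volterra system; the orbits are the closed curves on which it is constant, which makes it a convenient accuracy check for the integration.

```python
import math

def lv_step(n1, n2, a1=1.0, a2=1.0, b=1.0, dt=1e-4):
    """One Euler step of dn1/dt = a1*n1 - b*n1*n2 (prey) and
    dn2/dt = -a2*n2 + b*n1*n2 (predator)."""
    d1 = a1 * n1 - b * n1 * n2
    d2 = -a2 * n2 + b * n1 * n2
    return n1 + dt * d1, n2 + dt * d2

def invariant(n1, n2, a1=1.0, a2=1.0, b=1.0):
    # conserved along exact trajectories: the orbits are closed curves
    return b * (n1 + n2) - a2 * math.log(n1) - a1 * math.log(n2)

n1, n2 = 2.0, 1.0
c0 = invariant(n1, n2)
for _ in range(100000):
    n1, n2 = lv_step(n1, n2)
print(n1, n2, invariant(n1, n2) - c0)  # drift stays small for small dt
```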

3) Symbiosis
There are numerous examples in nature where the cooperation of different species
facilitates their living. A well-known example is the cooperation of trees and bees.
This cooperation may be modelled in this way: Since the multiplication rate of one
species depends on the presence of the other, we obtain

\dot n_1 = (α_1 + α'_1 n_2) n_1 − δ_1 n_1,   (10.15)

\dot n_2 = (α_2 + α'_2 n_1) n_2 − δ_2 n_2,   (10.16)

as long as we neglect self-suppressing terms −β_j n_j^2. In the stationary case,
\dot n_1 = \dot n_2 = 0, two types of solutions result by putting the rhs of (10.15-16) equal to
zero:

a) n_1 = n_2 = 0, which is uninteresting,

or

b) α_1 − δ_1 + α'_1 n_2 = 0,
   α_2 − δ_2 + α'_2 n_1 = 0.


It is an interesting exercise for the reader to discuss the stability properties of b).
We also leave it to the reader to convince himself that for initial values of n_1 and
n_2 which are large enough, an exponential explosion of the populations always
occurs.
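Both statements can be verified in a small numerical sketch (all constants hypothetical, chosen with δ_i > α_i so that the fixed point b) is positive). At b) the diagonal terms of the Jacobian vanish, so the eigenvalues are ±√(α'_1 α'_2 n_1 n_2): one is positive, i.e., b) is a saddle, and initial values above it explode.

```python
import math

# hypothetical constants with delta_i > alpha_i so that b) is positive
a1, a2 = 0.5, 0.5     # alpha_1, alpha_2
c1, c2 = 0.2, 0.2     # alpha'_1, alpha'_2 (mutual support)
d1, d2 = 1.0, 1.0     # delta_1, delta_2

n2s = (d1 - a1) / c1  # fixed point b): alpha_i - delta_i + alpha'_i n_j = 0
n1s = (d2 - a2) / c2

# Jacobian eigenvalues at b): +/- sqrt(alpha'_1 alpha'_2 n1s n2s)
lam = math.sqrt(c1 * n1s * c2 * n2s)
eigs = (lam, -lam)    # one positive eigenvalue: a saddle point

def simulate(n1, n2, dt=1e-3, steps=5000, cap=1e6):
    for _ in range(steps):
        n1 += dt * (a1 + c1 * n2 - d1) * n1
        n2 += dt * (a2 + c2 * n1 - d2) * n2
        if n1 > cap or n2 > cap:
            return 'explosion'
    return 'decay' if n1 < n1s and n2 < n2s else 'explosion'

print(n1s, n2s, eigs, simulate(1.0, 1.0), simulate(4.0, 4.0))
```

Starting below the saddle the populations die out; starting above it they explode, as claimed in the text.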

4) Some General Remarks


Models of the above types are now widely used in ecology. It must be men-
tioned that they are still on a very global level. In a next step numerous other
effects must be taken into account, for example, time-lag effects, seasons, different
death rates depending on age, even different reaction behavior within a single
species. Even if we perform the analysis in the above-mentioned manner, in reality
biological population networks are more complicated, i.e., they are organized in
trophic (i.e., nourishment) levels. The first trophic level consists of green plants.
They are eaten by animals, which are in turn eaten by other animals, etc. Further-
more, a predator may, for example, live on two or several kinds of prey. In this case
the pronounced oscillations of the Lotka-Volterra model become, in general, smal-
ler, and the system becomes more stable.

10.2 Stochastic Models for a Predator-Prey System

The analogy of our rate equations stated above to those of chemical reaction
kinetics is obvious. Readers who want to treat for example (10.5) or (10.11), (10.12)
stochastically are therefore referred to those chapters. Here we treat as another
example the Lotka-Volterra system. Denoting the number of individuals of
the two species, prey and predator, by M and N, respectively, and again using the
methods of chemical reaction kinetics, we obtain as transition rates
1) Multiplication of prey

M → M + 1:  w(M + 1, N; M, N) = α_1 M.

2) Death rate of predator

N → N − 1:  w(M, N − 1; M, N) = α_2 N.

3) Predators eating prey

M → M − 1, N → N + 1:  w(M − 1, N + 1; M, N) = β M N.

The master equation for the probability distribution P(M, N; t) thus reads

\dot P(M, N; t) = α_1 (M − 1) P(M − 1, N; t)
+ α_2 (N + 1) P(M, N + 1; t)
+ β (M + 1)(N − 1) P(M + 1, N − 1; t)
− (α_1 M + α_2 N + β M N) P(M, N; t).   (10.17)


Of course we must require P = 0 for M < 0 or N < 0 or both < 0. We now
want to show that the only stationary solution of (10.17) is

P(0, 0) = 1 and all other P's = 0.   (10.18)

Thus both species die out, even if initially both have been present. For a proof we
put \dot P = 0. Inserting (10.18) into (10.17) shows that (10.17) is indeed fulfilled.
Furthermore we may convince ourselves that all points (M, N) are connected via
at least one path with any other point (M', N'). Thus the solution is unique. Our
rather puzzling result (10.18) has a simple explanation: From the stability analysis
of the nonstochastic Lotka-Volterra equations, it is known that the trajectories
have "neutral stability". Fluctuations will cause transitions from one trajectory to
a neighboring one. Once, by chance, the prey has died out, there is no hope for the
predators to survive, i.e., M = N = 0 is the only possible stationary state.
While this may indeed happen in nature, biologists have found another reason
for the survival of prey: Prey animals may find a refuge so that a certain minimum
number survives. For instance they wander to other regions where the predators
do not follow so quickly or they can hide in certain places not accessible to pre-
dators.
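The extinction mechanism behind (10.18) can be observed directly by simulating the transition rates 1)-3) with a Gillespie-type stochastic simulation (a standard method; the parameter values, seed, and event cap below are illustrative). Starting at the deterministic fixed point, fluctuations carry the system from one neutral trajectory to the next until one population hits zero.

```python
import random

def run_until_extinction(M, N, a1=1.0, a2=1.0, b=0.05, seed=1,
                         max_events=2 * 10**6):
    """Stochastic simulation of the transition rates 1)-3) of (10.17).
    Stops when the prey or the predators have died out; only the event
    sequence matters here, so waiting times are not tracked."""
    rng = random.Random(seed)
    for _ in range(max_events):
        if M == 0 or N == 0:
            return M, N
        r1, r2, r3 = a1 * M, a2 * N, b * M * N
        u = rng.random() * (r1 + r2 + r3)
        if u < r1:
            M += 1                 # prey multiplies
        elif u < r1 + r2:
            N -= 1                 # a predator dies
        else:
            M, N = M - 1, N + 1    # predation

    return M, N

M, N = run_until_extinction(20, 20)   # start at the deterministic fixed point
print(M, N)
```

Note that once N = 0 the prey would grow without bound in this model, which is why the simulation stops at the first extinction event.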

10.3 A Simple Mathematical Model for Evolutionary Processes


In Section 10.1 we have learned about a few mathematical models from which we
may draw several general conclusions about the development of populations.
These populations may consist of highly developed plants or animals as well as of
bacteria and even biological molecules "living" on certain substrates. When we
try to apply these equations to evolution, an important feature is still missing.
In the evolutionary process, again and again new kinds of species appear. To see
how we can incorporate this fact into the equations of Section 10.1, let us briefly
recollect some basic facts. We know that genes may undergo mutations, creating
alleles. These mutations occur at random, though their creation frequency can be
enhanced by external factors, e.g., increased temperature, irradiation with UV
light, chemical agents etc. As a consequence, a certain "mutation pressure" arises
by which all the time new kinds of individuals within a species come into existence.
We shall not discuss here the detailed mechanism, that is, that newly created
features are first recessive and only later, after several multiplications, possibly
become dominant. We rather assume simply that new kinds of individuals of a
population are created at random. We denote the number of these individuals by
n_j. Since these individuals may have new features, in general their growth and
death factors differ. Since a new population can start only if a fluctuation occurs,
we add fluctuating forces to the equations of growth:

(10.19)

The properties of F_j(t) depend on both the population which was present prior to
the one described by the particular equation (10.19) and on environmental factors.


The system of different "subspecies" is now exposed to a "selection pressure".


To see this, we need only apply considerations and results of Section 10.1. Since
the environmental conditions are the same (food supply, etc.), we have to apply
equations of type (10.11/10.12). Generalizing them to N subspecies living on the
same "food" supply, we obtain

(10.20)

If the mutation rate for a special mutant is small, only that mutant survives which
has the highest gain factor α_j and the smallest loss factor κ_j and is thus the "fittest".
It is possible to discuss still more complicated equations, in which the multiplica-
tion of a subspecies is replaced by a cycle A → B → C → … → A. Such cycles
have been postulated for the evolution of biomolecules. In the context of our book
it is remarkable that the occurrence of a new species due to mutation ("fluctuating
force") and selection ("driving force") can be put into close parallel to a second-
order, nonequilibrium phase transition (e.g., that of the laser).
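Since (10.19) and (10.20) are only indicated here, the following sketch assumes one common form of such selection equations: each subspecies j has gain α_j and loss κ_j, and all deplete the same food supply through a shared term −β Σ_l n_l (β and all numbers below are assumed). Only the subspecies with the largest α_j − κ_j survives.

```python
# assumed form of the selection dynamics: subspecies j grows with gain
# alpha_j, loses with kappa_j, and all share one food supply through a
# common depletion term -beta * sum(n)
alphas = [1.0, 1.1, 1.3, 1.2]
kappas = [0.1, 0.1, 0.1, 0.1]
beta = 0.5

def step(n, dt=0.01):
    s = sum(n)
    return [nj + dt * nj * (alphas[j] - kappas[j] - beta * s)
            for j, nj in enumerate(n)]

n = [0.01] * 4
for _ in range(200000):
    n = step(n)
print([round(x, 4) for x in n])   # only the fittest subspecies survives
```

The winner settles at (α_j − κ_j)/β while all competitors decay exponentially, a caricature of the "selection pressure" described above.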

10.4 A Model for Morphogenesis


When reading our book the reader may have observed that each discipline has its
"model" systems which are especially suited for the study of characteristic features.
In the field of morphogenesis one such "system" is the hydra. Hydra is an animal
a few mm in length, consisting of about 100,000 cells of about 15 different types.
Along its length it is subdivided into different regions. At one end its "head" is
located. Thus the animal has a "polar structure". A typical experiment which
can be done with hydra is this: Remove part of the head region and transplant it to
another part of the animal. Then, if the transplanted part is in a region close to the
old head, no new head is formed, or, in other words, growth of a head is inhibited.
On the other hand, if the transplant is made at a distance sufficiently far away
from the old head, a new head is formed by an activation of cells of the hydra by
the transplant. It is generally accepted that the agents causing biological processes
such as morphogenesis are chemicals. Therefore, we are led to assume that there
are at least two types of chemicals (or "reactants"): an activator and an inhibitor.
Nowadays there is some evidence that these activator and inhibitor molecules
really exist and what they possibly are. Now let us assume that both substances
are produced in the head region of the hydra. Since inhibition is still present at
some distance from the primary head, the inhibitor must be able to diffuse. Also
the activator must be able to do so; otherwise it could not influence the neighboring
cells of the transplant.
Let us try to formulate a mathematical model. We denote the concentration of
the activator by a, that of the inhibitor by h. The basic features can be seen in the
frame of a one-dimensional model. We thus let a and h depend on the coordinate
x and time t. Consider the rate of change of a, iJa/iJt. This change is due to
1) generation by a source (head):
production rate: ρ,   (10.21)


2) decay: −μa,   (10.22)

where μ is the decay constant;

3) diffusion: D_a ∂²a/∂x²,   (10.23)

D_a diffusion constant.

Furthermore it is known from other biological systems (e.g., slime mold, compare
Section 1.1) that autocatalytic processes ("stimulated emission") can take place.
They can be described-depending on the process-by the production rate

ka,   (10.24)
or
ka².   (10.25)

Finally, the effect of inhibition has to be modelled. The most direct way the
inhibitor can inhibit the action of the activator is by lowering the concentration of a.
A possible "ansatz" for the inhibition rate could be

−ah.   (10.26)

Another way is to let h hinder the autocatalytic rates (10.24) or (10.25). The
higher h, the lower the production rates (10.24) or (10.25). This leads us in the case
(10.25) to

ka²/h.   (10.27)

Apparently there is some arbitrariness in deriving the basic equations, and a final
decision can only be made by detailed chemical analysis. However, selecting
typical terms, such as (10.21), (10.22), (10.23), (10.27), we obtain for the total rate
of change of a

∂a/∂t = ρ + ka²/h − μa + D_a ∂²a/∂x².   (10.28)

Let us now turn to derive an equation for the inhibitor h. It certainly has a decay
time, i.e., a loss rate

−νh,   (10.29)

and it can diffuse:

D_h ∂²h/∂x².   (10.30)


Again we may think of various generation processes. Gierer and Meinhardt, whose
equations we present here, suggested (among other equations)

production rate: ca²,   (10.31)

i.e., a generation by means of the activator. We then obtain

∂h/∂t = ca² − νh + D_h ∂²h/∂x².   (10.32)

Before we present our analytical results using the order parameter concept in
Section 10.5, we exhibit some computer solutions whose results are not restricted to
the hydra, but may be applied also to other phenomena of morphogenesis. We
simply exhibit two typical results: In Fig. 10.2 the interplay between activator and
inhibitor leads to a growing periodic structure. Fig. 10.3 shows a resulting two-
dimensional pattern of activator concentration. Obviously, in both cases the
inhibitor suppresses a second center (second head of hydra!) close to a first center
(primary head of hydra!). To derive such patterns it is essential that h diffuses more
easily than a, i.e., D_h > D_a. With somewhat further developed activator-inhibitor
models, the structures of leaves, for example, can be mimicked.
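A one-dimensional numerical sketch of (10.28) and (10.32), written here in the scaled variables of Section 10.5 (the parameter values are illustrative, with D_h/D_a large enough for the soft-mode instability): a slightly perturbed homogeneous state develops into a spatially periodic activator pattern.

```python
import random

# minimal 1-D sketch of the scaled Gierer-Meinhardt equations;
# parameter values are illustrative (D large: long-range inhibition)
rho, mu, D = 0.1, 1.0, 30.0
n, dx, dt, steps = 60, 1.0, 0.005, 20000

rng = random.Random(0)
a0 = (rho + 1.0) / mu          # homogeneous activator level
h0 = a0 * a0                   # homogeneous inhibitor level
a = [a0 * (1.0 + 0.01 * (rng.random() - 0.5)) for _ in range(n)]
h = [h0] * n

def lap(u, i):
    # discrete Laplacian with periodic boundary conditions
    return (u[(i + 1) % n] - 2.0 * u[i] + u[i - 1]) / (dx * dx)

for _ in range(steps):
    da = [rho + a[i] * a[i] / h[i] - mu * a[i] + lap(a, i) for i in range(n)]
    dh = [a[i] * a[i] - h[i] + D * lap(h, i) for i in range(n)]
    a = [a[i] + dt * da[i] for i in range(n)]
    h = [h[i] + dt * dh[i] for i in range(n)]

contrast = max(a) - min(a)
print(round(contrast, 3))   # the homogeneous state has given way to peaks
```

The emerging peak spacing is set by the critical wavenumber discussed in Section 10.5.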

Fig. 10.2. Developing activator concentration as a function of space and time (computer solution). (After H. Meinhardt, A. Gierer: J. Cell Sci. 15, 321 (1974))

In conclusion we mention an analogy which presumably is not accidental but


reveals a general principle used by nature: The action of neural networks (e.g., the
cerebral cortex) is again governed by the interplay between short-range activation
and long-range inhibition but this time the activators and inhibitors are neurons.


Fig. 10.3. Results of the morphogenetic model. Left column: activator concentration plotted over two dimensions. Right column: same for inhibitor. Rows refer to different times growing from above to below (computer solution). (After H. Meinhardt, A. Gierer: J. Cell Sci. 15, 321 (1974))

10.5 Order Parameters and Morphogenesis

In this chapter we apply the analytical methods developed in Sections 7.6-7.8
to determine the evolving patterns described by (10.28) and (10.32). Since we want
to treat the two-dimensional case, we replace ∂²a/∂x² and ∂²h/∂x² by

Δa = ∂²a/∂x² + ∂²a/∂y²  and  Δh = ∂²h/∂x² + ∂²h/∂y²,

respectively. We assume that ρ is the control parameter which can be changed
arbitrarily whereas all the other constants are prescribed quantities. It is con-
venient to go over to new variables so as to reduce the number of parameters by
the transformations

x' = (ν/D_a)^{1/2} x,   t' = νt,   a' = (c/k)a,   h' = (cν/k²)h.   (10.33)


Then we have

\dot a' = ρ' + a'²/h' − μ'a' + Δ'a',   (10.34)

\dot h' = a'² − h' + D'Δ'h',   (10.35)

where we have used the abbreviations

ρ' = ρc/(νk),   μ' = μ/ν,   (10.36)

D' = D_h/D_a.   (10.37)

From now on we shall drop the primes. The stationary homogeneous solution of
(10.34) and (10.35) reads

a_0 = (1/μ)(ρ + 1),   (10.38)

h_0 = a_0².   (10.39)

To perform the stability analysis we introduce an expansion around the stationary
solution:

a = a_0 + q_1,   h = h_0 + q_2.   (10.40)

Eqs. (10.34) and (10.35) then can be cast in the form (compare (7.62))

\dot q = K(Δ)q + g(q),   (10.41)

where K is given by

K(Δ) = ( μ(2/(ρ+1) − 1) + Δ        −μ²/(ρ+1)²
         (2/μ)(ρ+1)               −1 + DΔ )   (10.42)

and g(q) contains the nonlinearities. For the linear stability analysis we drop the
nonlinear term g(q) and make the hypothesis

q = \hat q e^{ikx + λt}.   (10.43)


The resulting eigenvalue equation yields

λ_{1,2} = α(k)/2 ± (α²(k)/4 − β(k))^{1/2},   (10.44)

where

α(k) = −(D + 1)k² + 2μ/(ρ + 1) − μ − 1,   (10.45)

β(k) = Dk⁴ + [1 + Dμ(ρ − 1)/(ρ + 1)]k² + μ.   (10.46)

The condition for the first occurrence of a soft-mode instability is

λ_1(k) = 0.   (10.47)

A brief analysis of (10.44) reveals that (10.47) is fulfilled if

(1) α < 0  and  (2) β ≤ 0.   (10.48)

From condition (1) we obtain

(10.49)

whereas condition (2) yields

(10.50)

The dependence of the critical ρ on the wave vector k, (10.49), is exhibited in
Fig. 10.4. When ρ > ρ_c, the instability condition (10.47) cannot be fulfilled.
For a critical ρ_c the instability condition β = 0 can first be met for two critical
values of k, namely k = +k_c and k = −k_c. For ρ = ρ_max the condition α = 0,
which indicates the onset of a hard-mode instability, can be met. In our following

Fig. 10.4. This figure shows the curves defined by (10.45) = 0 and (10.46) = 0 in the k, ρ plane. The parameters D and μ are kept fixed. The area above the curve β = 0 defines the stable region. The area below it defines the unstable region. Since the condition α < 0 is met for all k, the onset of the instability occurs when ρ = ρ_c

328
10.5 Order Parameters and Morphogenesis 317

analysis we will focus our attention on the soft-mode case. k_c, ρ_c and ρ_max
are given by

k_c² = (μ/D)^{1/2},   (10.51)

ρ_c = (Dμ − 2(Dμ)^{1/2} − 1)/(1 + (Dμ)^{1/2})²,   (10.52)

ρ_max = (μ − 1)/(μ + 1).   (10.53)

In order that the soft-mode instability occurs first we have to require ρ_c > ρ_max,
from which it follows

D > 2μ + 1 + 2(μ² + μ)^{1/2}.   (10.54)

Using (10.37) and (10.36) we derive from (10.54) that the diffusion constant of the
inhibitor must be bigger than that of the activator by a certain amount. In other
words, "long-range inhibition" and "short-range activation" are required for a
nonoscillating pattern.
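These thresholds can be cross-checked numerically from the dispersion relation alone. The sketch below (μ and D illustrative, satisfying the long-range-inhibition condition) uses the trace and determinant combinations α(k) and β(k) of the linearization, scans ρ downward until β(k) ≤ 0 for some k, and compares the result with the closed-form ρ_c.

```python
import math

mu, D = 1.0, 30.0   # illustrative; note D > 2*mu + 1 + 2*sqrt(mu**2 + mu)

def alpha(k, rho):
    # trace combination of the linearization
    return -(D + 1.0) * k * k + 2.0 * mu / (rho + 1.0) - mu - 1.0

def beta(k, rho):
    # determinant combination
    k2 = k * k
    return D * k2 * k2 + (1.0 + D * mu * (rho - 1.0) / (rho + 1.0)) * k2 + mu

def first_unstable_rho(ks, rhos):
    """Scan rho downward; return the first value where beta(k) <= 0
    for some k, i.e., where the soft mode goes unstable."""
    for rho in rhos:
        if any(beta(k, rho) <= 0.0 for k in ks):
            return rho
    return None

ks = [0.005 * i for i in range(1, 400)]
rhos = [1.0 - 0.001 * i for i in range(1, 1000)]
rho_c_num = first_unstable_rho(ks, rhos)

rho_c = (D * mu - 2.0 * math.sqrt(D * mu) - 1.0) / (1.0 + math.sqrt(D * mu)) ** 2
k_c = (mu / D) ** 0.25
print(rho_c_num, round(rho_c, 4), round(k_c, 4))
```

Since α(k) remains negative at the threshold, the instability found this way is indeed of the soft-mode type.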
We assume a two-dimensional layer with side lengths L_1 and L_2 and first
adopt periodic boundary conditions. The detailed method of solution has been
described in Sections 7.6-7.8 and we repeat here only the main steps of the whole
procedure. We assume ρ close to ρ_c.
We make the hypothesis (cf. (7.72))

(10.55)

To exhibit the essential features, we neglect "small band excitations" and assume
ξ_k independent of x. The coefficients O_k obey the equation

(10.56)

and the wave vector k is assumed in the form

k = 2π(n_1/L_1, n_2/L_2),   n_1, n_2 integers.   (10.57)

Since the solution must be real we have to require

ξ_{−k} = ξ*_k,   O_{−k} = O*_k.   (10.58)

329
318 10. Applications to Biology

Inserting (10.55) into (10.41) and multiplying the resulting expressions from the
left with the complex conjugate of e^{ikx} and the adjoint of O_k, we obtain after some
analysis the equations

(d/dt − λ(k)) ξ^u_k = (N.L.T.)_k.   (10.59)

The nonlinear term on the right-hand side has the form

(N.L.T.)_k = Σ_{j',j''} Σ_{k',k''} a^u_{j'j''} ξ^{j'}_{k'} ξ^{j''}_{k''} I_{k,k',k''}
+ Σ_{j',j'',j'''} Σ_{k',k'',k'''} b^u_{j'j''j'''} ξ^{j'}_{k'} ξ^{j''}_{k''} ξ^{j'''}_{k'''} J_{k,k',k'',k'''},   (10.60)

where we have kept only the important terms including up to third order.
The integrals I, J are given by

I_{k,k',k''} = (1/(L_1 L_2)) ∫_F d²x e^{i(k'+k''−k)x} = δ_{k,k'+k''},   (10.61)

J_{k,k',k'',k'''} = (1/(L_1 L_2)) ∫_F d²x e^{i(k'+k''+k'''−k)x} = δ_{k,k'+k''+k'''}.   (10.62)

We eliminate the stable modes as in Section 7.7. The great advantage of the
"slaving principle" consists in an enormous reduction of the degrees of freedom,
because we now keep only the unstable modes with |k| = k_c. These modes
serve, as everywhere in this book, as order parameters. Their cooperation or
competition determines the resulting patterns, as we will show now. We introduce
a new notation by which we replace the vector k_c by its modulus and the angle
φ which this vector forms with a fixed axis: ξ_{k_c} → ξ_{k_c,φ}. We let φ run from 0 to π.
The resulting order parameter equations read

(10.63)

λ is proportional to (ρ_c − ρ), whereas C can be considered as independent of ρ.
The constants d(|φ − φ'|) have been evaluated by computer calculations and are
exhibited in Fig. 10.5. Eq. (10.63) represents a set of coupled equations for the
time-dependent functions ξ_{k_c,φ}. These equations can be written in the form of
potential equations

\dot ξ_φ = −∂V/∂ξ*_φ,   (10.64)

where the potential function V is given by

V = −Σ_φ [ λ|ξ_φ|² + (C/2)(ξ_φ ξ_{φ+π/3} ξ_{φ−π/3} + c.c.)
+ ¼ |ξ_φ|² Σ_{φ'} d(|φ − φ'|) |ξ_{φ'}|² ].   (10.65)


Fig. 10.5. d(θ) is plotted as a function of θ. In the practical calculations the region of divergence is cut out as indicated by the bars. This procedure can be justified by the buildup of wave packets
We may assume, as elsewhere in this book (cf. especially Sect. 8.13), that the
eventually resulting pattern is determined by such a configuration of ξ's for which the
potential V acquires a (local) minimum. Thus we have to seek those ξ's for which

(10.66)

and

(10.67)

The system is globally stable if

d(|φ − φ'|) < 0,   (10.68)

or if the matrix

has only positive eigenvalues.

We discuss three typical examples, which are strongly reminiscent of pattern
formation in hydrodynamics (cf. Sect. 8.13).
1) The entirely homogeneous state, for which all ξ_φ are equal to 0, is stable for λ < 0.
2) We obtain a "roll" pattern if

(10.69)

(10.70)

where

(10.71)

320 10. Applications to Biology

The angle φ₁ between the roll axis and a fixed axis is arbitrary, i.e., symmetry
breaking with respect to φ occurs. This configuration is locally stable for

λ > 0 ,   d(ϑ ≠ π) < d(π) < 0 .   (10.72)

The resulting spatial pattern can be obtained by inserting (10.69, 70) into (10.55).
In our present treatment and in the corresponding figures we neglect the impact
of the slaved modes. They would give rise to a slight sharpening of the individual
peaks. The corresponding pattern is exhibited in Fig. 10.6.

Fig. 10.6. A roll-type pattern

3) Another pattern, in which we find hexagons, is realized when the minimum of
V is attained for

(10.73)

ξ_φ = 0 otherwise.   (10.74)

X₁ is given by

(10.75)

This configuration is locally stable for

(10.76)

The bifurcation diagram of the solution (10.75) is shown in Fig. 10.7. The solid
line indicates the stable configuration, the dashed line the unstable configuration.
The order parameter equations allow us to determine not only the stationary

Fig. 10.7. The amplitude X₁ as a function of λ. The solid line indicates a stable solution, the dotted line an unstable solution

Fig. 10.8 Fig. 10.9

Figs. 10.8 to 10.10. Buildup of a hexagonal pattern of activator concentration for three subsequent times

solution but also the transient. We start from a homogeneous solution on which
a small inhomogeneity of the form (10.73, 74) is superimposed and solve the
time-dependent equations (10.63). The resulting solutions of the spatial pattern are
exhibited in Figs. 10.8, 10.9, and 10.10. This shows again the usefulness of order
parameter equations.
In our next example we present the solution of the nonlinear equations within
a rectangular domain with nonflux boundary conditions (close to the instability


point). We now expand the wanted solution into a complete orthogonal system
with nonflux boundary conditions, i.e., with respect to functions of the form

cos k_x x · cos k_y y ,  where  (k_x, k_y) = (mπ/L₁, nπ/L₂) .   (10.77)

The procedure goes through in complete analogy to the above. k_x and k_y must
be chosen so that k_x² + k_y² comes close to k_c². When L₁ ≈ L₂, different modes
may simultaneously become unstable ("degeneracy"). For simplicity, we treat
here the case of a single unstable mode (i.e., L₁ ≠ L₂). Its amplitude obeys the
equation

(10.78)

The solution of this time-dependent equation describes the growth of the spatial
pattern. Examples of the resulting patterns are given by Figs. 10.11 to 10.13.
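Although (10.78) is not legible in this copy, an order-parameter equation for a single unstable mode generically takes the form ξ̇ = λξ − Cξ³ (a quadratic term being absent here; cf. the remark on (10.78) in the cylindrical case below). A sketch of the growth toward the saturated mode pattern, with arbitrary λ, C and mode numbers:

```python
import math

# Generic order-parameter dynamics xi_dot = lam*xi - C*xi**3 and the
# resulting spatial pattern xi * cos(kx*x) * cos(ky*y); lam, C and the
# mode numbers (m, n) = (2, 5) are arbitrary illustrative choices.

def amplitude(lam=1.0, C=1.0, xi0=0.01, dt=0.01, steps=2000):
    xi = xi0
    for _ in range(steps):
        xi += dt * (lam * xi - C * xi ** 3)
    return xi  # saturates near sqrt(lam / C)

def pattern(xi, m=2, n=5, L1=1.0, L2=1.0, N=40):
    kx, ky = m * math.pi / L1, n * math.pi / L2
    return [[xi * math.cos(kx * (i / N) * L1) * math.cos(ky * (j / N) * L2)
             for j in range(N + 1)] for i in range(N + 1)]

xi = amplitude()
grid = pattern(xi)  # stationary pattern of the Figs. 10.11-10.13 type
```

The array grid then holds a saturated checkerboard-of-stripes pattern of the kind shown in Figs. 10.11 to 10.13.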

Fig. 10.11. The activator concentration belonging to the mode (10.77) with k_x = π/L₁ and k_y = 5π/L₂

Fig. 10.12. The activator concentration belonging to the mode (10.77) with k_x = 2π/L₁ and k_y = 5π/L₂


Fig. 10.13. The activator concentration belonging to the mode (10.77) with k_x = 3π/L₁ and k_y = 5π/L₂

Our last example refers to cylindrical nonflux boundary conditions. In this
case we introduce polar coordinates r and φ and replace the formerly used plane
waves of the expansion (10.55) by cylinder functions of the form e^{imφ} J_m(kr), where
J_m(kr) is the Bessel function. The nonflux boundary condition requires

(10.79)

This equation fixes a series of k-values for which (10.79) is fulfilled. The expansion
of q now reads

(10.80)

Inserting (10.80) into the original equation (10.41) leads eventually to equations
for the ξ's. The slaving principle allows us to do away with the stable modes, and we
thus obtain an equation for the order parameter alone: we have a discrete
sequence of k-values, and for symmetry reasons we may assume that only one
mode becomes unstable first. The resulting order parameter equation has again
the form (10.78).
A marked difference occurs depending on whether the unstable mode (order
parameter) has m = m_c = 0 or ≠ 0. If m_c = 0 the right-hand side of equation
(10.78) must be supplemented by a term quadratic in ξ which is absent if m_c ≠ 0.
As is well known from phase transition theory (cf. Sect. 6.7), in the former case
(m_c = 0) we obtain a first-order phase transition connected with an abrupt change
of the homogeneous state into the inhomogeneous state, connected with hysteresis
effects. In the latter case (m_c ≠ 0) we obtain a second-order phase transition,
and the pattern grows continuously out of the homogeneous state when p passes
through p_c. Some typical patterns are exhibited in Figs. 10.14 to 10.16.
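The two transition types can be made concrete with a toy amplitude equation (an assumption of this sketch, not the text's explicit equations): with a quadratic term (the m_c = 0 case) a finite amplitude persists even for slightly negative λ, the signature of a first-order transition with hysteresis, while without it the amplitude vanishes continuously:

```python
# Toy amplitude equation xi_dot = lam*xi + a*xi^2 - C*xi^3; the quadratic
# coefficient a plays the role of the extra term present for m_c = 0.
# All numerical values are illustrative.

def steady_state(lam, a, C=1.0, xi0=1.0, dt=0.01, steps=10_000):
    xi = xi0
    for _ in range(steps):
        xi += dt * (lam * xi + a * xi * xi - C * xi ** 3)
    return xi

# second-order case (a = 0): the amplitude dies out for lam < 0
second = steady_state(lam=-0.1, a=0.0)
# first-order case (a > 0): a finite amplitude survives at the same lam,
# provided the system starts on the upper branch (hysteresis)
first = steady_state(lam=-0.1, a=1.0)
print(second, first)
```

Sweeping λ slowly up and down across zero in the a > 0 case traces out a hysteresis loop with an abrupt jump, exactly the behavior contrasted in the text.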
In conclusion we mention that we can also take into account small band excitations
and fluctuations in complete analogy to the Benard instability (cf. Sects. 8.13
and 8.14) using the methods developed in this book. A comparison between the
figures of this section and those of Section 10.3 shows a qualitative resemblance
but no exact agreement. The reason for this lies in the fact that the analytical


Fig. 10.14. The activator concentration with cylindrical nonflux boundary conditions described by a rotation symmetric Bessel function with m = 0

Fig. 10.15. The activator concentration with cylindrical nonflux boundary conditions described by a rotation symmetric Bessel function with m = 1

Fig. 10.16. The activator concentration with cylindrical nonflux boundary conditions described by a rotation symmetric Bessel function with m = 3


approach yields "pure cases", whereas the computer solution makes use of
(artificially) introduced random fluctuations. As we know, in the latter case
the analytical approach would yield a probability distribution (of patterns)
of a whole ensemble, whereas a computer solution will be equivalent to a "simple
event", i.e., a specific realization.

10.6 Some Comments on Models of Morphogenesis


Present-day modelling of morphogenesis is based on the idea that by diffusion
and reaction of certain chemicals a prepattern (or "morphogenetic field") is
formed. This prepattern switches genes on to cause cell differentiation. Such
models are partly substantiated by direct observation of certain chemicals,
for instance the neural growth factor. On the other hand it might be necessary
to consider also other mechanisms of cell communications, for instance cell-cell
contacts using recognition sites.
Aside from this comment, looking at the general outline of the present book
we can draw the following important conclusion. On the one hand we have seen
that a single model, for instance in hydrodynamics or now in morphogenesis,
can produce quite different patterns depending on the individual parameters, on
boundary conditions, and on fluctuations. On the other hand, quite different
systems may produce the same pattern, for instance hexagons. As a consequence
we must conclude that different models of morphogenetic processes may lead to
the same resulting pattern. In each case there exists a total class of models (differen-
tial equations) giving rise to the same pattern. For this reason it seems very im-
portant to develop other criteria in morphogenesis to decide which kind of
model is adequate from the theoretical point of view. This can be done, for instance
by invoking general principles about fundamental processes, for example the
principle of "long range inhibition and short range activation". Possibly other
mechanisms or principles will also have to be discussed in future developments.
Furthermore the importance of experiments to decide between different mech-
anisms is quite evident.
In view of the general spirit of the present book the following phase transition
analogy seems to be interesting:
physical system biological system
full symmetry totipotent cells
symmetry breaking cell differentiation
first-order transition irreversible change

This analogy suggests that cell differentiation might occur spontaneously in
very much the same way as a ferromagnet acquires its spontaneous magnetization.
Unfortunately, lack of space does not allow us to elaborate further on this inter-
esting analogy, which, of course, is purely formal.
Next steps in morphogenetic modelling might include the morphogenesis of
neural nets, taking into account irreversible storage of information, i.e., the
formation of long-term memory or, more generally, the process of learning


and its connection with the formation of, for instance, chemical patterns in the
brain.
Let us conclude with the following remark. In this book we have stressed the
profound analogies between quite different systems, and one is tempted to treat
biological systems in complete analogy to physical or chemical systems far from
thermal equilibrium. One important difference should be stressed, however.
While the physical and chemical systems under consideration lose their structure
when the flux of energy and matter is switched off, much of the structure of bio-
logical systems is still preserved for an appreciable time. Thus biological systems
seem rather to combine nondissipative and dissipative structures. Furthermore,
biological systems serve certain purposes or tasks and it will be more appropriate
to consider them as functional structures. Future research will have to develop
adequate methods to cope with such functional structures. It can be hoped,
however, that the ideas and methods outlined in this book may serve as a first
step in that direction.

11. Sociology and Economics

11.1 A Stochastic Model for the Formation of Public Opinion

Intuitively it is rather obvious that formation of public opinion, actions of social
groups, etc., are of a cooperative nature. On the other hand it appears extremely
difficult if not impossible to put such phenomena on a rigorous basis because the
actions of individuals are determined by quite a number of very often unknown
causes. On the other hand, within the spirit of this book, we have seen that in
systems with many subsystems there exist at least two levels of description: One
analysing the individual system and its interaction with its surrounding, and the
other one describing the statistical behavior using macroscopic variables. It is on
this level that a quantitative description of interacting social groups becomes pos-
sible.
As a first step we have to seek the macroscopic variables describing a society.
First we must look for the relevant, characteristic features, for example of an opin-
ion. Of course "the opinion" is a very weak concept. However, one can measure
public opinion, for example by polls, by votes, etc. In order to be as clear as pos-
sible, we want to treat the simplest case, that of only two kinds of opinions denoted
by plus and minus. An obvious order parameter is the number of individuals n+,
n _ with the corresponding opinions + and -, respectively. The basic concept now
to be introduced is that the formation of the opinion, i.e., the change of the numbers
n+, n_ is a cooperative effect: The formation of an individual's opinion is influenced
by the presence of groups of people with the same or the opposite opinion. We
thus assume that there exists a probability per unit time, for the change of the
opinion of an individual from plus to minus or vice versa. We denote these transi-
tion probabilities by

p₊₋[n₊, n₋] ,   p₋₊[n₊, n₋] .   (11.1)

We are interested in the probability distribution function f(n+, n_, t). One may
easily derive the following master equation

df[n₊, n₋; t]/dt = (n₊ + 1) p₊₋[n₊ + 1, n₋ − 1] f[n₊ + 1, n₋ − 1; t]
 + (n₋ + 1) p₋₊[n₊ − 1, n₋ + 1] f[n₊ − 1, n₋ + 1; t]
 − {n₊ p₊₋[n₊, n₋] + n₋ p₋₊[n₊, n₋]} f[n₊, n₋; t] .   (11.2)
328 11. Sociology and Economics

The crux of the present problem is, of course, not so much the solution of this
equation which can be done by standard methods but the determination of the
transition probability. Similar to problems in physics, where not too much is known
about the individual interaction, one may now introduce plausibility arguments to
derive p. One possibility is the following: Assume that the rate of change of the
opinion of an individual is enhanced by the group of individuals with an opposite
opinion and diminished by people of his own opinion. Assume furthermore that
there is some sort of social overall climate which facilitates the change of opinion
or makes it more difficult to form. Finally one can think of external influences on
each individual, for example, informations from abroad etc. It is not too difficult
to cast these assumptions into a mathematical form, if we think of the Ising model
of the ferromagnet. Identifying the spin direction with the opinion +, -, we are
led to put in analogy to the Ising model

p₊₋[n₊, n₋] ≡ p₊₋(q) = ν exp{−(Iq + H)/Θ} = ν exp{−(kq + h)} ,

p₋₊[n₊, n₋] ≡ p₋₊(q) = ν exp{+(Iq + H)/Θ} = ν exp{+(kq + h)} ,   (11.3)

where I is a measure of the strength of adaptation to neighbours, H is a preference
parameter (H > 0 means that opinion + is preferred to −), Θ is a collective
climate parameter corresponding to k_B T in physics (k_B is the Boltzmann constant
and T the temperature), and ν is the frequency of the "flipping" processes. Finally

k = I/Θ ,   h = H/Θ .   (11.4)

For a quantitative treatment of (11.2) we assume the social groups big enough so
that q may be treated as a continuous parameter. Transforming (11.2) to this
continuous variable and putting

w₊₋(q) ≡ n₊ p₊₋[n₊, n₋] = n(½ + q) p₊₋(q) ,

w₋₊(q) ≡ n₋ p₋₊[n₊, n₋] = n(½ − q) p₋₊(q) ,   (11.5)

we transform (11.2) into a partial differential equation (see, e.g., Section 4.2). Its
solution may be found by quadratures in the form

f_st(q) = c K₂⁻¹(q) exp{ 2 ∫^q [K₁(y)/K₂(y)] dy }   (11.6)

with

K₁(q) = ν{sinh(kq + h) − 2q cosh(kq + h)} ,

K₂(q) = (ν/n){cosh(kq + h) − 2q sinh(kq + h)} .   (11.7)
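A single realization of the process described by (11.2) and (11.3) can also be generated by a simple Monte Carlo simulation. The sketch below assumes ν = 1, a population of 2n individuals and q = (n₊ − n₋)/2n, and at each step performs one flip chosen with probability proportional to the total rates w₊₋ and w₋₊ of (11.5):

```python
import math, random

# Monte Carlo realization of the opinion master equation (11.2) with the
# flip rates (11.3); v = 1 and the parameter values are assumptions of
# this sketch.  q = (n_plus - n_minus) / (2n) as in (11.5).

def rates(q, k, h, v=1.0):
    return v * math.exp(-(k * q + h)), v * math.exp(k * q + h)

def simulate(n=50, k=0.5, h=0.0, steps=20_000, seed=1):
    rng = random.Random(seed)
    n_plus = n                    # 2n individuals, start at q = 0
    history = []
    for _ in range(steps):
        n_minus = 2 * n - n_plus
        q = (n_plus - n_minus) / (2 * n)
        p_pm, p_mp = rates(q, k, h)
        w_pm = n_plus * p_pm      # total rate for a  + -> -  flip
        w_mp = n_minus * p_mp     # total rate for a  - -> +  flip
        if rng.random() < w_pm / (w_pm + w_mp):
            n_plus -= 1
        else:
            n_plus += 1
        history.append(q)
    return history

history = simulate()
```

For small adaptation strength k the recorded q-values should cluster near q = 0, as in Fig. 11.1a, while a large k drives the realization toward one of the two "polarized" states of Fig. 11.1c.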


Fig. 11.1. (a) Centered distribution (k = 0, h = 0) in the case of rather frequent changes of opinion (independent decision); (b) distribution at the transition between independent and strongly adaptive decision; (c) "polarization phenomenon" in the case of strong neighbor-neighbor interaction. (After W. Weidlich: Collective Phenomena 1, 51 (1972))

Fig. 11.1 shows a plot of the result when there is no external parameter. As one
may expect from a direct knowledge of the Ising model, there are typically two
results. The one corresponds to the high-temperature limit: on account of rather
frequent changes of opinion we find a centered distribution of opinions. If the social
climate factor Θ is lowered or if the coupling strength between individuals is in-
creased, two pronounced groups of opinions occur which clearly describe the
by now well-known "polarization phenomenon" of society. It should be noted that
the present model allows us to explain, at least in a qualitative manner, further
processes, for example unstable situations where the social climate parameter is
changed to a critical value. Here suddenly large groups of a certain opinion are
formed which are dissolved only slowly and it remains uncertain which group
(+ or -) finally wins. Using the considerations of Section 6.7 it is obvious again
that here concepts of phase transition theory become important, like critical
slowing down (Remember the duration of the 1968 French student revolution?),
critical fluctuations, etc. Such statistical descriptions certainly do not allow unique
predictions due to the stochastic nature of the process described. Nevertheless,
such models are certainly most valuable for understanding general features of
cooperative behavior, even that of human beings, though the behavior of an
individual may be extremely complicated and not accessible to a mathematical
description. Quite obviously the present model allows for a series of generalizations.

11.2 Phase Transitions in Economics


The concepts and methods described in this book can also be used to shed new
light on a number of economic processes. Let us take as an example the problem
of underemployment, one of considerable importance in economics. In early times
the economy was considered static, and economists used concepts such as elastic-
ity or economization in investigating how well a company can adapt to changes
in its market. New ideas have evolved over the years, and nowadays the economy


is more and more considered to undergo an evolutionary process. This is in


agreement with the general concepts of synergetics, where we do not consider
structures to be given, but try to understand them by means of their evolution. In
the following our point of departure will be a mathematical model developed by
the economist Gerhard Mensch. This model can easily be subsumed under the
general approaches of synergetics, and it well reflects the empirical data.
Industrial development passes through phases of prosperity and depression.
This is well known to all of us and has been substantiated by Habeler and many
other economists. The transitions between different phases can be quite pronoun-
ced. The numerous examples in the preceding chapters of this book have shown
that even small changes in the surroundings or in constraints called control
parameters can cause dramatic changes in the total order of many systems. In the
following the transitions between full employment and underemployment are
treated in the light of these results. Before illustrating the causes of such phase
transitions in economics, we will mention some relevant empirical observations
made in economic research.

Innovations. We have seen again and again that there are quite different regimes
with respect to the behavior of a variety of systems. On the one hand there is a
region where a lamp or a fluid layer behaves normally. In such a case their
behavior does not change qualitatively if the perturbations are not too great. On
the other hand there are particularly interesting regions where a system becomes
unstable and tends to acquire a new state, i.e. where circumstances have become
favorable for transition into a new state. When and how this transition occurs is
quite often determined by fluctuations. Precisely this behavior can also be found
in economic models. But what plays the role of fluctuations, i.e. of triggers, in
economics? One group of events which plays that role are innovations, especially
those which are based on inventions - the gasoline engine, the airplane, the
telephone, or even a new vacuum cleaner. A large group of less evident but very
important inventions are those which simplify production.
According to empirical innovation research, the initial phase starts with fun-
damental innovations which create new industrial branches, such as the invention
of the automobile. If several such fundamental innovations occur simultaneously,
they are usually followed by innovations aimed at improving production in the
new industrial branches. The growth of these branches influences other branches
of the economy so that the general economic situation gives rise to general pros-
perity. This happens in different ways, e.g. via a high level of employment and
high purchasing power.
Economic studies have further revealed that innovations aimed at the produc-
tion of new products exceeded by far the introduction of new kinds of production
processes in the European industrial countries in the late 1940s and 1950s. Then
in the sixties a shift of innovations occurred which resulted in production changes.
This shift can be characterized by the catchword "rationalization".
Considerations of profit are unquestionably a major motivation in economic
actions, and discussions of such questions are seldom free from emotion, for
example when car drivers talk of the increase in gasoline prices and the profits
gained thereby. Let us not be governed here by emotions, but instead keep in mind


that decreasing gains will ultimately result in losses and a possible drop in employ-
ment. Let us consider the economic aspects only.
To make a profit requires the sale of a sufficient number of products. Higher
salaries will diminish a company's profit and affect prices, possibly making com-
petition difficult. At the same time, an increase in production is often connected
with the introduction of new products, which, at least in the beginning, can be
expensive. Both effects, namely increased salaries and the avoidance of high initial
costs for new products, are reasons to make investments which will lead to ratio-
nalization rather than to an increase in sales. In other words, companies prefer
innovations which will improve the production process to those which will intro-
duce new kinds of products: an automobile company would rather introduce new
welding automata than a new car.
Based on empirical data, Mensch has developed a mathematical model which
is borrowed from catastrophe theory. It describes the observed transition from full
employment to underemployment. Here we will present his model in the frame-
work of synergetics and extend it correspondingly. To this end we introduce the
following quantities and relations. We start with a mean annual production X 0
(measured, for example, in dollars) and we study the behavior of the deviation X
from it, so that the actual production can be represented as X 0 + X. Furthermore,
we denote by I the annual additional investments which result in expanded
production. The annual change in X will then be given by

Ẋ = I .   (11.8)

Those annual investments which result in rationalization will be called R. But
these act in a multiplicative manner, i.e. the temporal change Ẋ of X is given by

R·X. (11.9)

Thus R is beneficial for the production per article. Finally, it is known from
economics (and from many examples in the present book) that a saturation point
is reached if the deviations in X become too large. This saturation can be modeled
by the term

−C X³ .   (11.10)

Putting the individual terms from (11.8) to (11.10) together, we obtain the funda-
mental equation

Ẋ = I + R X − C X³ .   (11.11)

We are well acquainted with this kind of equation. The potential belonging to its
rhs can be interpreted as synergetic costs. Companies will try to minimize that
potential as far as possible; i.e. they will arrange the production in such a way that
V reaches its minimum. If R = 0, V has only one minimum. This is in accordance
with the basic idea of Adam Smith, the father of the theory of the free market.
According to that theory the market will always equilibrate to a single stable


position because of its "internal forces". Quite surprisingly, however, we immediately
find two minima if R > 0. The companies therefore have to choose between
increased production and decreased production. Whether the deeper minimum
lies at X_h or X_n obviously depends on the sign of I, i.e. on an increase or decrease
in investments which expand production. Because investments I which will increase
production and investments R leading to rationalization stem from the
same total amount of investment money, an increase in R will be connected with
a decrease in I. Therefore X_n will be realized; i.e. underproduction and consequently
underemployment will result. This model suggests that the problem of
underemployment can be solved when I is increased, i.e. when investments leading
to increased production are enlarged.
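To make the double-minimum argument concrete, one can take V(X) = −IX − RX²/2 + CX⁴/4, whose negative gradient reproduces the structure of (11.11); the concrete form and the numerical values below are assumptions of this sketch:

```python
# Assumed potential V(X) = -I*X - R*X**2/2 + C*X**4/4, whose negative
# gradient gives X_dot = I + R*X - C*X**3, the structure of (11.11).
# All numbers are illustrative.

def V(x, I, R, C=1.0):
    return -I * x - 0.5 * R * x * x + 0.25 * C * x ** 4

def descend(x, I, R, C=1.0, dt=0.01, steps=20_000):
    """Follow X_dot = -dV/dX to a local minimum of V."""
    for _ in range(steps):
        x += dt * (I + R * x - C * x ** 3)
    return x

I, R = -0.1, 1.0              # reduced production-expanding investment
x_h = descend(+1.5, I, R)     # high-production minimum X_h
x_n = descend(-1.5, I, R)     # low-production minimum  X_n
print(x_h, x_n, V(x_n, I, R) < V(x_h, I, R))  # X_n is the deeper minimum
```

Increasing I tilts the potential the other way, making X_h the deeper minimum; this is the sense in which enlarged production-expanding investment counteracts underemployment.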
A detailed treatment of the various aspects of this model is beyond the scope
of this book. But this model should suggest to the reader how economic processes
can be modeled. It might be added here that the classical theory of the free market
according to Adam Smith rests on the idea that the economy always tends to one
definite equilibrium position due to intrinsic economic forces. The above example
shows that this need not be the case. The economy can, instead, possess two
equilibrium positions, and may jump from one to the other, as is fully substantiat-
ed by the empirical data published by Mensch.
I think we must get accustomed to the idea that the economy is a dynamic
system composed of many subsystems; i.e. it is a genuine synergetic system. From
there it is quite simple to go on and construct more complicated economic models.
In the model treated above we dealt with a single variable obeying a first-order
differential equation, where only one or several stable points can occur as station-
ary solutions. If we introduce one more dynamic variable, more complicated
situations can occur, namely limit cycles. Such cycles have indeed been observed
in economics: one model describing the periodic transitions between full employ-
ment and underemployment is called the Schumpeter clock. It is interesting to
note that when a third dynamic variable becomes important, the corresponding
three coupled nonlinear differential equations can give rise to chaotic behavior.
Such behavior will be discussed in a different context in the next chapter. It is
rather surprising that the concept of chaos has not yet been included in economic
studies, at least to my knowledge. In future the general concepts of synergetics,
which treats the various forms of collective behavior, will undoubtedly play an
important role in economics.

12. Chaos

12.1 What is Chaos?

Sometimes scientists like to use dramatic words of ordinary language in their
science and to attribute to them a technical meaning. We already saw an example in
Thom's theory of "catastrophes". In this chapter we become acquainted with the
term "chaos". The word in its technical sense refers to irregular motion. In previous
chapters we encountered numerous examples for regular motions, for instance
an entirely periodic oscillation, or the regular occurrence of spikes with well-
defined time intervals. On the other hand, in the chapters about Brownian motion
and random processes we treated examples where an irregular motion occurs
due to random, i. e., in principle unpredictable, causes. Surprisingly the irregular
motion represented in Fig. 12.1 stems from completely deterministic equations. To

Fig. 12.1. Example of chaotic motion of a variable q (versus time)

characterize this new phenomenon, we define chaos as irregular motion stemming
from deterministic equations. The reader should be warned that somewhat different
definitions of chaos and the criteria to check its occurrence are available in the
literature. The difficulty rests mainly in the problem of how to define "irregular
motion" properly. For instance, the superposition of motions with different
frequencies could mimic to some extent an irregular behavior and one wants to
preclude such a case from representing chaos. We shall come back to this question
in Section 12.5, where we shall discuss the typical behavior of the correlation
function of chaotic processes. A good deal of present-day analysis of "chaos"
rests on computer calculations.
In the following we shall present one of the most famous examples, namely the
so-called Lorenz model of turbulence, which reveals some of the most interesting
features of chaos.

12.2 The Lorenz Model. Motivation and Realization


In this section we describe how the Lorenz model can be motivated. It turns out
that it is an instructive but not quite realistic model for turbulence in fluids. It can
probably be realized in lasers and spin systems. The reader who is more interested
in the mathematics than in the physics of this model is advised to go straight to
Section 12.3.
A long-standing and still unsolved problem is the explanation of turbulence in
fluids. The original purpose of the Lorenz equations is to provide a model for
turbulence. To this end we recall briefly some of the results of Section 8.12 on the
Benard instability. There we have seen that out of the quiescent state first a single
mode, namely a certain component of the velocity field in vertical direction,
becomes unstable. (Here we neglect mode degeneracies with respect to the hori-
zontal direction). This "mode" serves as order parameter. As one may show in
more detail, this mode slaves in particular two further modes which are connected
with temperature deviations. To obtain equations for the amplitudes of these
three variables in a systematic manner we first decompose the components of the
velocity field into a Fourier series

(12.1)

and similarly the temperature deviation field. We then insert these expressions
into the Navier-Stokes equations (in the Boussinesq approximation) and keep
only the three terms

u₃(1,0,1) = X ,   θ(1,0,1) = Y ,   θ(0,0,2) = Z .   (12.2)

After some analysis and using properly scaled variables we obtain the Lorenz
equations

Ẋ = σY − σX ,   (12.3)
Ẏ = −XZ + rX − Y ,   (12.4)
Ż = X·Y − bZ .   (12.5)

σ = ν/κ′ is the Prandtl number (where ν is the kinematic viscosity, κ′ the thermometric
conductivity), r = R/R_c (where R is the Rayleigh number, R_c the critical
Rayleigh number), b = 4π²/(π² + k₁²). When we put

X = ξ ,   Y = η ,   Z = r − ζ ,

(12.3)–(12.5) acquire the form

ξ̇ = σ(η − ξ) ,   η̇ = ξζ − η ,   ζ̇ = b(r − ζ) − ξη .   (12.6)
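The irregular motion itself is easy to reproduce numerically. The sketch below integrates (12.3–5) with the classic parameter values σ = 10, b = 8/3, r = 28 (an illustrative choice satisfying (12.10) and (12.11)) and demonstrates the sensitive dependence on initial conditions that underlies chaos:

```python
# RK4 integration of the Lorenz equations (12.3-5) with the classic
# illustrative parameters sigma = 10, b = 8/3, r = 28.

def lorenz(s, sigma=10.0, b=8.0 / 3.0, r=28.0):
    x, y, z = s
    return (sigma * (y - x), -x * z + r * x - y, x * y - b * z)

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f(tuple(v + 0.5 * dt * k for v, k in zip(s, k1)))
    k3 = f(tuple(v + 0.5 * dt * k for v, k in zip(s, k2)))
    k4 = f(tuple(v + dt * k for v, k in zip(s, k3)))
    return tuple(v + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for v, a, b, c, d in zip(s, k1, k2, k3, k4))

def trajectory(s0, dt=0.01, steps=2500):
    s, out = s0, [s0]
    for _ in range(steps):
        s = rk4_step(lorenz, s, dt)
        out.append(s)
    return out

t1 = trajectory((1.0, 1.0, 1.0))
t2 = trajectory((1.0, 1.0, 1.0 + 1e-8))   # tiny initial perturbation
sep = sum((u - v) ** 2 for u, v in zip(t1[-1], t2[-1])) ** 0.5
print(sep)  # the two trajectories have separated macroscopically
```

Plotting the X component of either run against time yields a signal of the irregular type sketched in Fig. 12.1, and projections of the trajectory reproduce the two-lobed picture of Fig. 12.2.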

Astonishingly, equations entirely equivalent to the set of equations (12.3–5)
occur in laser physics. We start from the laser equations (8.94–96) for the field


strength E, the polarization P, and the inversion D (in properly chosen units).
We assume single mode operation by putting ∂E/∂x = 0. In the following
we assume that E and P are real quantities, i.e., that their phases can be kept
constant, which can be proved by computer calculations. The thus resulting
equations read

Ė = κP − κE ,   (12.7)
Ṗ = γED − γP ,   (12.8)
Ḋ = γ∥(A + 1) − γ∥D − γ∥AEP .   (12.9)

These equations are identical with those of the Lorenz model in the form (12.6)
which can be realized by the following identifications

t → t′σ/κ ,   E → αξ   where α = {b(r − 1)}^(−1/2) ,  r > 1 ,
P → αη ,   D → ζ ,   γ∥ = κb/σ ,   γ = κ/σ ,   A = r − 1 .

In particular the following correspondence holds:

Table 12.1

Benard problem                        Laser

σ : Prandtl number                    σ = κ/γ
r = R/R_c (R Rayleigh number)         r = A + 1

Eqs. (12.6) describe at least two instabilities which have been found independently
in fluid dynamics and in lasers. For A < 0 (r < 1) there is no laser
action (the fluid is at rest); for A ≥ 0 (r ≥ 1) laser action (convective motion) starts,
with stable, time-independent solutions ξ, η, ζ. As we will see in Section 12.3,
besides this well-known instability a new one occurs provided

laser: κ > γ + γ∥ ,    fluid: σ > b + 1 ,   (12.10)

and

laser: A > (γ + γ∥ + κ)(γ + κ)/[γ(κ − γ − γ∥)] ,    fluid: r > σ(σ + b + 3)/(σ − 1 − b) .   (12.11)

This instability leads to the irregular motion, an example of which we have
shown in Fig. 12.1. When numerical values are used in the conditions (12.10) and
(12.11) it turns out that the Prandtl number must be so high that it cannot be
realized by realistic fluids. On the other hand, in lasers and masers it is likely
that conditions (12.10) and (12.11) can be met. Furthermore it is well known


that the two-level atoms used in lasers are mathematically equivalent to spins.
Therefore, such phenomena may also be obtainable in spin systems coupled to
an electromagnetic field.

12.3 How Chaos Occurs


Since we can scale the variables in different ways, the Lorenz equations occur
in different shapes. In this section we shall adopt the following form

q̇₁ = −αq₁ + q₂ ,   (12.12)

q̇₂ = −βq₂ + q₁q₃ ,   (12.13)

q̇₃ = δ − q₃ − q₁q₂ .   (12.14)

These equations result from (12.3–5) by the scaling

t = b t′ ,   α = σ/b ,   β = 1/b ,   δ = rσ/b² ,   (12.15)

together with a corresponding rescaling of the amplitudes.

The stationary state of (12.12–14) with q̇₁ = q̇₂ = q̇₃ = 0 is given by

q₁² = (δ − αβ)/α ,   q₂ = αq₁ ,   q₃ = αβ .   (12.16)

A linear stability analysis reveals that the stationary solution becomes unstable
for

δ = α²(α + 3β + 1)/(α − β − 1) .   (12.17)
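The threshold (12.17) can be cross-checked with the Routh–Hurwitz criterion. Linearizing (12.12–14) about (12.16) gives (by a short calculation done for this sketch, not quoted from the text) the characteristic polynomial λ³ + a₁λ² + a₂λ + a₃ with a₁ = α + β + 1, a₂ = α + β + q₁², a₃ = 2(δ − αβ); the fixed point loses stability where a₁a₂ = a₃:

```python
# Routh-Hurwitz check of the fixed point (12.16) of the system (12.12-14).
# The characteristic polynomial coefficients below were worked out for
# this sketch and are an assumption, not a quotation from the text.

def hurwitz_margin(alpha, beta, delta):
    """a1*a2 - a3 for lambda^3 + a1*lambda^2 + a2*lambda + a3;
    positive means the stationary state is stable."""
    q1_sq = (delta - alpha * beta) / alpha
    a1 = alpha + beta + 1.0
    a2 = alpha + beta + q1_sq
    a3 = 2.0 * (delta - alpha * beta)
    return a1 * a2 - a3

def delta_critical(alpha, beta):
    """Closed form of the instability threshold, Eq. (12.17)."""
    return alpha ** 2 * (alpha + 3.0 * beta + 1.0) / (alpha - beta - 1.0)

a, b = 4.0, 0.5
dc = delta_critical(a, b)
print(dc, hurwitz_margin(a, b, 0.9 * dc), hurwitz_margin(a, b, 1.1 * dc))
```

With α = 4 and β = 0.5 the threshold comes out at δ_c = 41.6, and the stability margin changes sign exactly there.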

In this domain, (12.12–14) were solved by computer calculations. An example
of one variable was shown in Fig. 12.1. Since X, Y, Z span a three-dimensional
space, a direct plot of the trajectory is not possible. However, Fig. 12.2 shows
projections of the trajectory on two planes. Apparently the representative point
(X, Y, Z), or (q₁, q₂, q₃), circles first in one part of space and then suddenly jumps
to another part where it starts a circling motion again. The origin of that kind of
behavior, which is largely responsible for the irregular motion, can be visualized
as follows. From laser physics we know that the terms on the right hand side of
(12.12-14) have a different origin. The last terms


Fig. 12.2. Upper half: trajectories projected on the X–Z plane. Lower half: trajectories projected on the X–Y plane. The points represent the steady state solution (after M. Lücke)

stem from the coherent interaction between atoms and field. As is known in
laser physics coherent interaction allows for two conservation laws, namely
energy conservation and conservation of the total length of the so-called pseudo
spin. What is of relevance for us in the following is this. The conservation laws are
equivalent to two constants of motion

R² = q₁² + q₂² + q₃² ,  (12.18)

ρ² = q₂² + (q₃ − 1)² ,  (12.19)

where

(12.20)

On the other hand, the first terms in (12.12–14), −αq₁, −βq₂, and d₀ − q₃,

stem from the coupling of the laser system to reservoirs and describe damping and pumping terms, i.e., nonconservative interactions. When we first ignore these terms in (12.12–14) the point (q₁, q₂, q₃) must move in such a way that the


conservation laws (12.18) and (12.19) are obeyed. Since (12.18) describes a sphere in q₁, q₂, q₃ space and (12.19) a cylinder, the representative point must move on the cross section of sphere and cylinder. There are two possibilities depending on the relative size of the diameters of sphere and cylinder. In Fig. 12.3 we have two well-separated trajectories, whereas in Fig. 12.4 the representative point can

Fig. 12.3. The cross section of the sphere (12.18) and the cylinder (12.19) is drawn for R > 1 + ρ. One obtains two separated trajectories.
Fig. 12.4. The case R < 1 + ρ yields a single closed trajectory

move from one region of space continuously to the other region. When we include
damping and pump terms the conservation laws (12.18) and (12.19) are no longer
valid. The radii of sphere and cylinder start a "breathing" motion. When this
motion takes place, apparently both situations of Fig. 12.3 and 12.4 are accessible.
In the situation of Fig. 12.3 the representative point circles in one region of space,
while it may jump to the other region when the situation of Fig. 12.4 becomes
realized. The jump of the representative point depends very sensitively on where
it is when the jump condition is fulfilled. This explains at least intuitively the
origin of the seemingly random jumps and thus of the random motion. This
interpretation is fully substantiated by movies produced in my institute. As we
will show below, the radii of cylinder and sphere cannot grow infinitely but are
bounded. This implies that the trajectories must lie in a finite region of space.
The shape of such a region has been determined by a computer calculation and is
shown and explained in Fig. 12.5.
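The bounded, lobe-jumping behavior described above is easy to reproduce numerically. The sketch below integrates the Lorenz equations in their original X, Y, Z form with a fourth-order Runge-Kutta step, using the parameter values of Lorenz's 1963 paper (σ = 10, b = 8/3, r = 28; these values, the step size, and the initial point are illustrative assumptions, not necessarily those behind the figures shown here):

```python
# Lorenz model dX/dt = s(Y - X), dY/dt = rX - Y - XZ, dZ/dt = -bZ + XY,
# integrated with a 4th-order Runge-Kutta step.  We record the largest
# distance from the origin (boundedness) and how often X changes sign
# (irregular jumps between the two lobes of the attractor).
s, r, b = 10.0, 28.0, 8.0 / 3.0       # Lorenz's 1963 values (assumed)

def f(u):
    x, y, z = u
    return [s * (y - x), r * x - y - x * z, -b * z + x * y]

def rk4(u, h):
    k1 = f(u)
    k2 = f([u[i] + 0.5 * h * k1[i] for i in range(3)])
    k3 = f([u[i] + 0.5 * h * k2[i] for i in range(3)])
    k4 = f([u[i] + h * k3[i] for i in range(3)])
    return [u[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(3)]

u, h = [1.0, 1.0, 1.0], 0.01
sign_changes, radius_max = 0, 0.0
prev_positive = True
for n in range(20000):                # 200 time units
    u = rk4(u, h)
    radius_max = max(radius_max, (u[0]**2 + u[1]**2 + u[2]**2) ** 0.5)
    if n > 1000 and (u[0] > 0) != prev_positive:
        sign_changes += 1             # jump from one lobe to the other
    prev_positive = u[0] > 0
print(sign_changes, round(radius_max, 1))
```

Over the run, the distance from the origin stays bounded (the trajectory never leaves a finite region) while the sign of X flips at irregular intervals, i.e., the representative point jumps back and forth between the two lobes.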

Fig. 12.5. Projection of the Lorenz surface on the q₂, q₃ plane. The heavy solid curve and extensions as dotted curves indicate natural boundaries. The isopleths of q₁ (i.e., lines of constant q₁) as function of q₂ and q₃ are drawn as solid or dashed thin curves (redrawn after E. N. Lorenz)


When we start the representative point with an initial value outside of this region, after a while it enters this region and never leaves it again. In other words, the representative point is attracted to that region. Therefore the region itself is called an attractor. We have encountered other examples of attracting regions in earlier sections. Fig. 5.11a shows a stable focus to which all trajectories converge. Similarly we have seen that trajectories can converge to a limit cycle. The Lorenz attractor has a very strange property. When we pick a trajectory and follow the representative point on its further way, it is as if we stick a needle through a ball of yarn: the point finds its path without hitting (or asymptotically approaching) this trajectory. Because of this property the Lorenz attractor is called a strange attractor.
There are other examples of strange attractors available in the mathematical
literature. We just mention for curiosity that some of these strange attractors
can be described by means of the so-called Cantor set. This set can be obtained
as follows (compare Fig. 12.6). Take a strip and cut out the middle third of it.

Fig. 12.6. Representation of a Cantor set. The whole area is first divided into three equal parts. The right and left intervals are closed, whereas the interval in the middle is open. We remove the open interval and divide the two residual intervals again into three parts, and again remove the open intervals in the middle. We proceed by again dividing the remaining closed intervals into three parts, removing the open interval in the center, etc. If the length of the original area is one, the Lebesgue measure of the set of all open intervals is given by Σₙ₌₁^∞ 2ⁿ⁻¹/3ⁿ = 1. The remaining set of closed intervals (i.e., the Cantor set) therefore has measure zero
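The measure statement in the caption of Fig. 12.6 can be checked in one line: at step n we remove 2ⁿ⁻¹ open intervals of length 3⁻ⁿ, and the resulting geometric series sums to one. A minimal numerical check (the cutoff at 60 terms is an arbitrary choice):

```python
# Total length removed in the Cantor construction: at step n we remove
# 2^(n-1) open intervals of length 3^(-n); truncating the geometric
# series after 60 terms already reproduces the full sum.
total = sum(2 ** (n - 1) / 3 ** n for n in range(1, 60))
print(round(total, 8))   # prints 1.0: the Cantor set itself has measure zero
```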

Then cut out the middle third of each of the resulting parts and continue in this
way ad infinitum. Unfortunately, it is far beyond the scope of this book to present
more details here. We rather return to an analytical estimate of the size of the
Lorenz attractor.


The above considerations using the conservation laws (12.18) and (12.19) suggest the introduction of the new variables R, ρ, and q₃. The original equations (12.12–14) transform into

½ d(R²)/dt = −α + β − αR² + (α − β)ρ² + (2(α − β) + d₀)q₃ − (1 − β)q₃² ,  (12.21)

½ d(ρ²)/dt = −d₀ + β − βρ² + (d₀ + 1 − 2β)q₃ − (1 − β)q₃² ,  (12.22)

q̇₃ = d₀ − q₃ − (±)(R² − ρ² + 1 − 2q₃)^{1/2} ((1 + ρ − q₃)(q₃ − 1 + ρ))^{1/2} .  (12.23)

The equation for q₃ fixes allowed regions because the expressions under the roots in (12.23) must be positive. This yields

R > 1 + ρ  (12.24)

(two separate trajectories of a limit cycle form), and

R < 1 + ρ  (12.25)

(a single closed trajectory). Better estimates for R and ρ can be found as follows. We first solve (12.22) formally

(12.26)

with

(12.27)

We find an upper bound for the right-hand side of (12.26) by replacing g by its maximal value

g_max(q₃) = A²/(4B) .  (12.28)

This yields

(12.29)

For t → ∞, (12.29) reduces to

(12.30)


In an analogous way we may treat (12.21), which after some algebra yields the estimate

R²_M(∞) = (1/(4αβ(1 − β))) { αd₀² + (α − β)[2(2β − 1)d₀ + 1 + 4β(α − 1)] } .  (12.31)

These estimates are extremely good, as we can substantiate by numerical examples. For instance we have found (using the parameters of Lorenz' original paper)

a) our estimate

ρ_M(∞) = 41.03
R_M(∞) = 41.89

b) direct integration of the Lorenz equations over some time

ρ = 32.84
R = 33.63

c) estimate using Lorenz' surface (compare Fig. 12.7)

ρ_L = 39.73
R_L = 40.73 .

Fig. 12.7. Graphical representation of estimates (12.30), (12.31). Central figure: Lorenz attractor (compare Fig. 12.5). Outer solid circle: estimated maximal radius of cylinder, ρ_M (12.30). Dashed circle: projection of cylinder (12.29) with radius ρ_c constructed from the condition that at a certain time q₁ = q₂ = 0 and q₃ = d₀

12.4 Chaos and the Failure of the Slaving Principle


A good deal of the analysis of the present book is based on the adiabatic elimination of fast relaxing variables, a technique which we also call the slaving principle. This slaving principle has allowed us to reduce the number of degrees of freedom considerably. In this sense one can show that the Lorenz equations result from that principle. We may, however, ask if we can again apply that principle to the

Lorenz equations themselves. Indeed, we have observed above that at a certain threshold value (12.11) the steady-state solution becomes unstable. At such an instability point we can distinguish between stable and unstable modes. It turns out that two modes become unstable while one mode remains stable. Since the analysis is somewhat lengthy we describe only an important new feature. It turns out that the equation for the stable mode has the following structure:

ξ̇ₛ = (−|λₛ| + ξᵤ)ξₛ + nonlinear terms,  (12.32)

ξᵤ: amplitude of an unstable mode.

As we remarked on page 200, the adiabatic elimination principle remains valid only if the order parameter remains small enough so that

|ξᵤ| ≪ |λₛ| .  (12.33)

Now let us compare numerical results obtained by a direct computer solution


of the Lorenz equation and those obtained by means of the slaving principle
(Figs. 12.8, 12.9). We observe that for a certain time interval there is rather
good agreement, but suddenly a discrepancy occurs which persists for all later
times. A detailed analysis shows that this discrepancy occurs when (12.33) is

Fig. 12.8. The field strength E of a single mode laser under the condition of chaos. Solid line: solution by means of the "slaving principle"; dashed line: direct integration of the Lorenz equations. Initially, solid and dashed lines coincide, but then the "slaving principle" fails

Fig. 12.9. Same physical problem as in Fig. 12.8. Trajectories projected on the P (polarization) – D (inversion) plane. Solid line: solution by means of the slaving principle; dashed line: direct integration of the Lorenz equations. Initially, solid and dashed lines coincide


Fig. 12.10. Same as in Fig. 12.8, but trajectories projected on the E–P plane

violated. Furthermore, it turns out that at that time the representative point
just jumps from one region to the other one in the sense discussed in Section 12.3.
Thus we see that chaotic motion occurs when the slaving principle fails and the
formerly stable mode can no longer be slaved but is destabilized.

12.5 Correlation Function and Frequency Distribution

The foregoing sections have given us at least an intuitive insight into what chaotic
motion looks like. We now want to find a more rigorous description of its pro-
perties. To this end we use the correlation function between the variable q(t)
at a time t and at a later time t + t'. We have encountered correlation functions
already in the sections on probability. There they were denoted by

⟨q(t)q(t + t′)⟩ .  (12.34)

In the present case we do not have a random process and therefore the averaging
process indicated by the brackets seems to be meaningless. However, we may
replace (12.34) by a time average in the form

lim_{T→∞} (1/(2T)) ∫₋T^{+T} q(t)q(t + t′) dt ,  (12.35)

where we first integrate over t and then let the time interval 2 T become very
large or, more strictly speaking, let T go to infinity. Taking purely periodic
motion as a first example, i. e., for instance

q(t) = sin ω₁t ,  (12.36)

we readily obtain

⟨q(t)q(t + t′)⟩ = ½ cos ω₁t′ .  (12.37)


That means we obtain again a periodic function (compare Fig. 12.11). One may easily convince oneself that a motion containing several frequencies, such as

q(t) = sin ω₁t + sin ω₂t ,  (12.38)

also gives rise to an oscillating, nondecaying behavior of (12.34).
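The time average (12.35) is straightforward to approximate on a long finite window. The following sketch does this for the periodic example (12.36); the frequency, grid spacing, and window length are arbitrary illustrative choices:

```python
import math

# Approximate the time average (12.35) on a long finite window for the
# periodic signal q(t) = sin(w1 t).
w1, dt, N = 2.0, 0.001, 100000        # window T = N*dt = 100 time units

def corr(tp):
    # rectangle-rule approximation of (1/T) \int_0^T q(t) q(t + tp) dt
    s = sum(math.sin(w1 * n * dt) * math.sin(w1 * (n * dt + tp))
            for n in range(N))
    return s / N

# The correlation follows (1/2) cos(w1 t'): it oscillates without decay.
for tp in (0.0, math.pi / (2 * w1), math.pi / w1):
    print(round(corr(tp), 2))         # close to 0.5, 0.0, -0.5
```

Replacing q(t) by a chaotic signal would instead give a correlation that decays to zero for large t′, as sketched in Fig. 12.12.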


On the other hand, when we think of a purely diffusive process based on
random events which we encountered in the section on probability theory, we
should expect that (12.34) goes to 0 for t' -Hf) (compare Fig. 12.12). Because

Fig. 12.11. The correlation function as function of time t′ for the periodic function q(t), (12.36). Note that there is no decay of the amplitude even if t′ → ∞

Fig. 12.12. The correlation function for t′ for a chaotic motion. Note that an oscillating behavior need not be excluded but that the correlation function must vanish as t′ → ∞

we wish to characterize chaotic motion as seemingly random (though caused by deterministic forces), we may adopt as a criterion for chaotic motion a behavior as shown in Fig. 12.12. A further criterion results if we decompose q(t) into a Fourier integral

q(t) = (1/2π) ∫₋∞^{+∞} c(ω) e^{iωt} dω .  (12.39)

Inserting (12.36) into (12.39) we find two infinitely high peaks at ω = ±ω₁ (compare Fig. 12.13). Similarly (12.38) would lead to peaks at ω = ±ω₁, ±ω₂. On the

Fig. 12.13. The power spectrum |c(ω)|² for a purely periodic variable q(t) (compare (12.36)). It shows only two peaks, at −ω₁ and ω₁. For multiperiodic variables a set of peaks would appear


other hand, chaotic motion should contain a continuous broad band of frequencies. Fig. 12.14 shows an example of the intensity distribution of frequencies of the Lorenz model. Both criteria, using (12.34) and (12.39) in the way described above, are nowadays widely used, especially when the analysis is based on computer solutions of the original equations of motion. For the sake of completeness we mention a third method, namely that based on Ljapunov exponents. A description of this method is, however, beyond the scope of this book¹.

Fig. 12.14. The power spectrum |c(ω)|² for the Lorenz attractor (redrawn after Y. Aizawa and I. Shimada). Their original figure contains a set of very closely spaced points. Here the studied variable is q₂(t) of the Lorenz model as described in the text

Chaotic motion in the sense discussed above occurs in quite different disciplines. In the last century Poincaré discovered irregular motion in the three-body problem. Chaos is also observed in electronic devices and is known to electrical engineers. It occurs in the Gunn oscillator, whose regular spikes we discussed in Section 8.14. More recently, numerous models of chemical reactions have been developed showing chaos. It may occur in models both with and without diffusion. Chaos occurs also in chemical reactions when they are externally modulated, for instance by photochemical effects. Further examples are equations describing the flipping of the earth's magnetic field, which again show chaotic motion, i.e., a random flipping. Certain models of population dynamics show entirely irregular changes of populations. It seems that such models can account for certain fluctuation phenomena of insect populations.
Many of the recently obtained results are based on discrete maps, which are
treated in the next section.

12.6 Discrete Maps, Period Doubling, Chaos, Intermittency


Let us first explain what discrete maps are. Processes in nature are usually described by means of a continuous time t on which the variables q of a system depend. Quite often, considerable insight into the behavior of a system can be gained simply by studying q at a discrete sequence of times tₙ, n = 0, 1, 2, …. An example of such a sequence xₙ = q(tₙ) is given in Fig. 12.15, where the continuous trajectory q(t) (which may either be the solution of a differential equation or may stem from experimental data) hits the q₁ axis at a sequence of points x₁, x₂, …. Of course, each intersection point xₙ₊₁ is determined, at least in principle, once the previous point xₙ is fixed. Therefore we may assume that a relation

xₙ₊₁ = fₙ(xₙ)  (12.40)

exists. As it turns out in many cases of practical interest, the function fₙ is independent of n, so that (12.40) reads

xₙ₊₁ = f(xₙ).  (12.41)

Because (12.41) maps a value xₙ onto a value xₙ₊₁ at discrete indices n, (12.41) is called a discrete map. Fig. 12.16 shows an explicit example of such a function². Of these functions ("maps"), the "logistic map"

xₙ₊₁ = αxₙ(1 − xₙ)  (12.42)

of the interval 0 ≤ x ≤ 1 onto itself has probably received the greatest attention. In it, α plays the role of a control parameter which can be varied between 0 ≤ α ≤ 4. It is an easy matter to calculate the sequence x₁, x₂, …, xₙ, … determined by (12.42) with a pocket computer. Such calculations reveal a very interesting behavior of the sequence x₁, … for various values of α. For instance, for α < 3 the sequence x₁, x₂, … converges towards a fixed point (Fig. 12.17). When α is increased beyond a critical value α₁, a new type of behavior occurs (Fig. 12.18). In this latter case, after a certain transition "time" n₀, the points x_{n₀+1}, x_{n₀+2}, … jump periodically between two values, and a motion with "period two" is reached. When we increase α further, beyond a critical value α₂, the points xₙ tend to a sequence which is only repeated after 4 steps (Fig. 12.19), so that the "motion" goes on with a "period four". With respect to "period two", the period length has doubled.
If we increase α further and further, the period doubles each time at a sequence of critical values α_l. The resulting Fig. 12.20 thus shows a sequence of "period doubling" bifurcations. The critical values obey a simple law:

lim_{l→∞} (α_{l+1} − α_l)/(α_{l+2} − α_{l+1}) = δ = 4.6692016… ,  (12.43)

as was found by Grossmann and Thomae in the case of the logistic map. Feigenbaum observed that this law has universal character because it holds for a whole class of functions f(xₙ). Beyond a critical value α_c, the "motion" of xₙ becomes chaotic, but at other values of α periodic windows occur. Within these windows the motion of xₙ is periodic. Interesting scaling laws have been obtained for such maps, but a presentation of them would go beyond the scope of this book.
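The beginning of the doubling cascade can also be detected automatically by iterating (12.42) past a transient and measuring when the orbit first returns to itself. The parameter values below are chosen between the known doubling thresholds (α₁ = 3, α₂ ≈ 3.449, α₃ ≈ 3.544, α₄ ≈ 3.564); the tolerance and transient length are arbitrary choices:

```python
# Detect the attractor period of the logistic map x_{n+1} = a x_n (1 - x_n)
# by iterating past a transient and finding the smallest p for which
# |x_{n+p} - x_n| falls below a tolerance.
def period(a, x0=0.3, transient=4000, pmax=64, tol=1e-9):
    x = x0
    for _ in range(transient):        # let the orbit settle on the attractor
        x = a * x * (1 - x)
    ref = x
    for p in range(1, pmax + 1):
        x = a * x * (1 - x)
        if abs(x - ref) < tol:
            return p
    return None

periods = [period(a) for a in (2.9, 3.2, 3.5, 3.56)]
print(periods)   # should show the doubling cascade 1 -> 2 -> 4 -> 8
```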
² A warning must be given here. While the section of the curve of Fig. 12.16 which lies above the diagonal (dashed line) is compatible with the trajectory depicted in Fig. 12.15, the section below the diagonal does not follow from a sequence xₙ₊₁ > xₙ but rather from xₙ₊₁ < xₙ. This in turn implies that the trajectory of Fig. 12.15 crosses itself, which is not possible for an autonomous system in the plane. Indeed, the map of Fig. 12.16 can be realized only by an autonomous system of (at least) three dimensions, and the crossing points xₙ can be obtained (for example) by a projection of the three-dimensional trajectory on the q₁–q₂ plane, where the new trajectory can indeed cross itself.


Fig. 12.15. Example of how a trajectory q(t) (the spiral) can be connected with a discrete series of points x₁, x₂, x₃, …

Fig. 12.16. Example of mapping xₙ onto xₙ₊₁. The mapping function is a parabola, see (12.42). The dashed line represents the diagonal

Fig. 12.17. As n increases, the sequence of points xₙ determined by (12.42) tends to a fixed point

Fig. 12.18. Periodic jumps of xₙ between two values

Fig. 12.19. Jumps of period 4 of xₙ

Fig. 12.20. The set of possible values of xₙ (n → ∞) (ordinate) versus the control parameter μ∞ − μ (abscissa) on a logarithmic scale. Here the logistic equation is linearly transformed into the equation xₙ₊₁ = 1 − μxₙ²; μ∞ corresponds to the critical value α∞ [P. Collet, J. P. Eckmann: in Progress in Physics, Vol. 1, ed. by A. Jaffe, D. Ruelle (Birkhäuser, Boston 1980)]

Fig. 12.21. The map f₃(x) ≡ f⁽³⁾(x) for one specific parameter value

Fig. 12.22. Enlarged section of Fig. 12.21

Fig. 12.23. This plot of the variable x as a function of "time" clearly shows the quiescent zone within the horizontal lines and the "turbulent" zones outside the stripe [J. P. Eckmann, L. Thomas, P. Wittwer: J. Phys. A14, 3153 (1981)]

Another important aspect of the logistic map can be rather easily explained, however. A well-known phenomenon occurring in turbulent flows is "intermittency": times in which laminar flows occur are interrupted by times of turbulent outbursts. A model for this kind of behavior is provided by an "iterated map", which we obtain as follows. We replace n by n + 1 everywhere in (12.42) so that we obtain

xₙ₊₂ = αxₙ₊₁(1 − xₙ₊₁).  (12.44)


Then we replace xₙ₊₁ by xₙ according to (12.42) and obtain

xₙ₊₂ = α²xₙ(1 − xₙ)(1 − αxₙ(1 − xₙ)).  (12.45)

Because the mapping described by (12.42) has been applied twice, a shorthand notation of (12.45) is

xₙ₊₂ = f⁽²⁾(xₙ).  (12.46)

Replacing n everywhere by n + 1 and using (12.42) again, we obtain the equation

xₙ₊₃ = f⁽³⁾(xₙ),  (12.47)

where we leave it as an exercise to the reader to determine f⁽³⁾ explicitly.


A plot of this map is shown in Fig. 12.21 for a specific value of α. Figure 12.22 shows an enlarged section of Fig. 12.21. As is evident from Fig. 12.22, the sequence of points xₙ can be easily constructed by means of the trajectory drawn in that figure. Quite evidently, when the trajectory passes through the region of Fig. 12.22, xₙ changes very little from step to step. But when this region is left, xₙ can make wild jumps, and chaotic motion occurs. Then, after a while, the points xₙ return and enter the "tunneling" region of Fig. 12.22, and a "quiet" motion follows. All in all, the result is the behavior depicted in Fig. 12.23, which clearly shows intermittency.
We conclude this section with a general remark. Experimentally, period-doubling sequences, chaos, and intermittency are found in quite different systems: fluids, chemical reactions, electronic devices, etc. In each case these systems possess an enormous number of degrees of freedom. Therefore at first glance it may be puzzling that the behavior of such systems can be described by a single discrete variable xₙ. The answer lies in the slaving principle discussed in Sect. 7.7. According to that principle, the behavior of the total system may be governed by only very few degrees of freedom, namely the order parameters. If a single order parameter behaves more or less periodically, it may be subjected to a Poincaré map (see e.g. Figs. 12.15, 16 and Footnote 2 on p. 347), and the whole analysis proceeds as outlined above.

13. Some Historical Remarks and Outlook

The reader who has followed us through our book has most probably been amazed by the profound analogies between completely different systems when they pass through an instability. This instability is caused by a change of external parameters and leads eventually to a new macroscopic spatio-temporal pattern of the system. In many cases the detailed mechanism can be described as follows: close to the instability point we may distinguish between stable and unstable collective motions (modes). The stable modes are slaved by the unstable modes and can be eliminated. In general, this leads to an enormous reduction of the degrees of freedom. The remaining unstable modes serve as order parameters determining the macroscopic behavior of the system. The resulting equations for the order parameters can be grouped into a few universality classes which describe the dynamics of the order parameters. Some of these equations are strongly reminiscent of those governing first and second order phase transitions of physical systems in thermal equilibrium. However, new kinds of classes also occur, for instance describing pulsations or oscillations. The interplay between stochastic and deterministic "forces" ("chance and necessity") drives the systems from their old states into new configurations and determines which new configuration is realized.

The General Scheme

Old structure --(change of external parameters)--> Instability --> New structure

Old structure --> { unstable modes; stable modes }, where the unstable modes slave the stable modes

Unstable modes --> order parameters --> universality classes (same type of equations for different systems) --> New structure


A first detailed and explicit account of the phase-transition analogy between a


system far from thermal equilibrium (the laser) and systems in thermal equilibrium
(superconductors, ferromagnets) was given in independent papers by Graham and
Haken (1968,1970), and by DeGiorgio and Scully (1970).1 When we now browse
through the literature, knowing of these analogies, we are reminded of Kohelet's
words: There is nothing new under this sun. Indeed we now discover that such
analogies have been inherent, more or less visibly, in many phenomena (and
theoretical treatments).
In the realm of general systems theory (with emphasis on biology), its founder von Bertalanffy observed certain analogies between closed and open systems. In particular, he coined the concept of "flux equilibrium" (Fließgleichgewicht).
In other fields, e.g., computers, such analogies have been exploited in devices.
Corresponding mathematical results were obtained in papers by Landauer on tunnel diodes (1961, 1962) and by myself on lasers (1964). While the former case referred to a certain kind of switching, for instance analogous to a Bloch wall motion in ferromagnets, the latter paved the way to compare the laser threshold with a second order phase transition.
Knowing of the laser phase transition analogy, a number of authors established
similar analogies in other fields, in particular non-equilibrium chemical reactions
(Schlögl, Nicolis, Nitzan, Ortoleva, Ross, Gardiner, Walls and others).
The study of models of chemical reactions producing spatial or temporal structures
had been initiated in the fundamental work by Turing (1952) and it was carried
further in particular by Prigogine and his coworkers. In these latter works the
concept of excess entropy production, which allows one to find instabilities, has played a central role. The present approach of synergetics goes beyond these concepts in
several respects. In particular it investigates what happens at the instability point
and it determines the new structure beyond it. Some of these problems can be
dealt with by the mathematical theory of bifurcation, or, more generally, by a
mathematical discipline called dynamic systems theory. In many cases presented in
this book we had to treat still more complex problems, however. For instance,
we had to take into account fluctuations, small band excitations and other features.
Thus, synergetics has established links between dynamic systems theory and
statistical physics. Undoubtedly, the marriage between these two disciplines has
started.
When it occurred to me that the cooperation of many subsystems of a system is
governed by the same principles irrespective of the nature of the subsystems I
felt that the time had come to search for and explore these analogies within the
frame of an interdisciplinary field of research which I called synergetics. While
I was starting from physics and was led into questions of chemistry and biology,
quite recently colleagues of some other disciplines have drawn my attention to the
fact that a conception, called synergy, has long been discussed in fields such as
sociology and economics. Here for instance the working together of different parts
of a company, to improve the performance of the company, is studied. It thus
appears that we are presently from two different sides digging a tunnel under a big

1 The detailed references to this chapter are listed on page 349.


mountain which has so far separated different disciplines, in particular the "soft"
from the "hard" sciences.
It can be hoped that synergetics will contribute to the mutual understanding and
further development of seemingly completely different sciences. How synergetics
might proceed shall be illustrated by the following example taken from philology.
Using the terminology of synergetics, languages are the order parameters slaving
the subsystems which are the human beings. A language changes only little over
the duration of the life of an individual. After his birth an individual learns a
language, i. e., he is slaved by it, and for his lifetime contributes to the survival of
the language. A number of facts about languages such as competition, fluctuations
(change of meaning of words, etc.) can now be investigated in the frame established
by synergetics.
Synergetics is a very young discipline and many surprising results are still ahead
of us. I do hope that my introduction to this field will stimulate and enable the
reader to make his own discoveries of the features of self-organizing systems.

References, Further Reading, and Comments

Since the field of Synergetics has ties to many disciplines, an attempt to provide a more or less complete list of references seems hopeless. Indeed, they would fill a whole volume. We therefore confine the references to those papers which we used in the preparation of this book. In addition we quote a number of papers, articles or books which the reader might find useful for further study. We list the references and further reading according to the individual chapters.

1. Goal

H. Haken, R. Graham: Synergetik-Die Lehre vom Zusammenwirken. Umschau 6, 191 (1971)
H. Haken (ed.): Synergetics (Proceedings of a Symposium on Synergetics, Elmau 1972) (B. G. Teubner, Stuttgart 1973)
H. Haken (ed.): Cooperative Effects, Progress in Synergetics (North Holland, Amsterdam 1974)
H. Haken: Cooperative effects in systems far from thermal equilibrium and in nonphysical systems. Rev. Mod. Phys. 47, 67 (1975)
H. Haken (ed.): Springer Series in Synergetics, Vols. 2-20 (Springer, Berlin, Heidelberg, New York)
For a popularisation see
H. Haken: Erfolgsgeheimnisse der Natur (Deutsche Verlagsanstalt, Stuttgart 1981) English edition in preparation.

1.1 Order and Disorder. Some Typical Phenomena


For literature on thermodynamics see Section 3.4. For literature on phase transitions see Section
6.7. For detailed references on lasers, fluid dynamics, chemistry and biology consult the references
of the corresponding chapters of our book. Since the case of slime-mold is not treated any further
here, we give a few references:
J. T. Bonner, D. S. Barkley, E. M. Hall, T. M. Konijn, J. W. Mason, G. O'Keefe, P. B. Wolfe: Develop. Biol. 20, 72 (1969)
T. M. Konijn: Advanc. Cycl. Nucl. Res. 1, 17 (1972)
A. Robertson, D. J. Drage, M. H. Cohen: Science 175, 333 (1972)
G. Gerisch, B. Hess: Proc. Nat. Acad. Sci. (Wash.) 71, 2118 (1974)
G. Gerisch: Naturwissenschaften 58, 430 (1971)

2. Probability

There are numerous good textbooks on probability. Here are some of them:
Kai Lai Chung: Elementary Probability Theory with Stochastic Processes (Springer, Berlin-Heidelberg-New York 1974)
W. Feller: An Introduction to Probability Theory and Its Applications, Vol. 1 (Wiley, New York 1968), Vol. 2 (Wiley, New York 1971)
R. C. Dubes: The Theory of Applied Probability (Prentice Hall, Englewood Cliffs, N.J. 1968)
Yu. V. Prokhorov, Yu. A. Rozanov: Probability Theory. In Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen, Bd. 157 (Springer, Berlin-Heidelberg-New York 1968)
J. L. Doob: Stochastic Processes (Wiley, New York-London 1953)
M. Loève: Probability Theory (D. van Nostrand, Princeton, N.J.-Toronto-New York-London 1963)
R. von Mises: Mathematical Theory of Probability and Statistics (Academic Press, New York-London 1964)

3. Information

3.1. Some Basic Ideas


Monographs on this subject are:
L. Brillouin: Science and Information Theory (Academic Press, New York-London 1962)
L. Brillouin: Scientific Uncertainty and Information (Academic Press, New York-London 1964)
Information theory was founded by
C. E. Shannon: A mathematical theory of communication. Bell System Techn. J. 27, 379-423,
623-656 (1948)
C. E. Shannon: Bell System Techn. J. 30, 50 (1951)
C. E. Shannon, W. Weaver: The Mathematical Theory of Communication (Univ. of Illinois Press,
Urbana 1949)
Some concepts related to information and information gain (H-theorem) were introduced by
L. Boltzmann: Vorlesungen über Gastheorie, 2 Vols. (Leipzig 1896, 1898)

3.2 Information Gain. An Illustrative Derivation


For a detailed treatment and definition see
S. Kullback: Ann. Math. Statist. 22, 79 (1951)
S. Kullback: Information Theory and Statistics (Wiley, New York 1959)
Here we follow our lecture notes.

3.3 Information Entropy and Constraints


We follow in this chapter essentially
E. T. Jaynes: Phys. Rev. 106, 620 (1957); Phys. Rev. 108, 171 (1957)
E. T. Jaynes: In Delaware Seminar in the Foundations of Physics (Springer, Berlin-Heidelberg-New
York 1967)
Early ideas on this subject are presented in
W. Elsasser: Phys. Rev. 52, 987 (1937); Z. Phys. 171, 66 (1968)

3.4 An Example from Physics. Thermodynamics


The approach of this chapter is conceptually based on Jaynes' papers, l.c. Section 3.3. For text-
books giving other approaches to thermodynamics see
Landau-Lifshitz: In Course of Theoretical Physics, Vol. 5: Statistical Physics (Pergamon Press,
London-Paris 1952)
R. Becker: Theory of Heat (Springer, Berlin-Heidelberg-New York 1967)
A. Münster: Statistical Thermodynamics, Vol. 1 (Springer, Berlin-Heidelberg-New York 1969)
H. B. Callen: Thermodynamics (Wiley, New York 1960)
P. T. Landsberg: Thermodynamics (Wiley, New York 1961)
R. Kubo: Thermodynamics (North Holland, Amsterdam 1968)


W. Brenig: Statistische Theorie der Wärme (Springer, Berlin-Heidelberg-New York 1975)


W. Weidlich: Thermodynamik und statistische Mechanik (Akademische Verlagsgesellschaft, Wiesbaden
1976)

3.5 An Approach to Irreversible Thermodynamics


An interesting and promising link between irreversible thermodynamics and network theory has
been established by
A. Katchalsky, P. F. Curran: Nonequilibrium Thermodynamics in Biophysics (Harvard University
Press, Cambridge Mass. 1967)
For a recent representation including also more current results see
J. Schnakenberg: Thermodynamic Network Analysis of Biological Systems, Universitext (Springer,
Berlin-Heidelberg-New York 1977)
For detailed texts on irreversible thermodynamics see
I. Prigogine: Introduction to Thermodynamics of Irreversible Processes (Thomas, New York 1955)
I. Prigogine: Non-equilibrium Statistical Mechanics (Interscience, New York 1962)
S. R. De Groot, P. Mazur: Non-equilibrium Thermodynamics (North Holland, Amsterdam 1962)
R. Haase: Thermodynamics of Irreversible Processes (Addison-Wesley, Reading, Mass. 1969)
D. N. Zubarev: Non-equilibrium Statistical Thermodynamics (Consultants Bureau, New York-
London 1974)
Here, we present a hitherto unpublished treatment by the present author.
3.6 Entropy – Curse of Statistical Mechanics?
For the problem subjectivistic-objectivistic see for example
E. T. Jaynes: Information Theory. In Statistical Physics, Brandeis Lectures, Vol. 3 (W. A. Benjamin, New York 1962)
Coarse graining is discussed by
A. Münster: In Encyclopedia of Physics, ed. by S. Flügge, Vol. III/2: Principles of Thermodynamics
and Statistics (Springer, Berlin-Göttingen-Heidelberg 1959)
The concept of entropy is discussed in all textbooks on thermodynamics, cf. references to Section 3.4.

4. Chance

4.1 A Model of Brownian Motion


For detailed treatments of Brownian motion see for example
N. Wax, ed.: Selected Papers on Noise and Statistical Processes (Dover Publ. Inc., New York 1954)
with articles by S. Chandrasekhar, G. E. Uhlenbeck and L. S. Ornstein, Ming Chen Wang and
G. E. Uhlenbeck, M. Kac
T. T. Soong: Random Differential Equations in Science and Engineering (Academic Press, New York
1973)

4.2 The Random Walk Model and Its Master Equation


See for instance
M. Kac: Am. Math. Month. 54, 295 (1946)
M. S. Bartlett: Stochastic Processes (Univ. Press, Cambridge 1960)
4.3 Joint Probability and Paths. Markov Processes. The Chapman-Kolmogorov Equation.
Path Integrals
See references on stochastic processes, Chapter 2. Furthermore
R. L. Stratonovich: Topics in the Theory of Random Noise (Gordon & Breach, New York-London,
Vol. I 1963, Vol. II 1967)
M. Lax: Rev. Mod. Phys. 32, 25 (1960); 38, 358 (1966); 38, 541 (1966)
Path integrals will be treated later in our book (Section 6.6), where the corresponding references
may be found.

369
358 References, Further Reading, and Comments

4.4 How to Use Joint Probabilities. Moments. Characteristic Function. Gaussian Processes
Same references as on Section 4.3.

4.5 The Master Equation


The master equation plays an important role not only in (classical) stochastic processes, but
also in quantum statistics. Here are some references with respect to quantum statistics:
W. Pauli: Probleme der modernen Physik. Festschrift zum 60. Geburtstage A. Sommerfelds, ed. by
P. Debye (Hirzel, Leipzig 1928)
L. van Hove: Physica 23, 441 (1957)
S. Nakajima: Progr. Theor. Phys. 20, 948 (1958)
R. Zwanzig: J. Chem. Phys. 33, 1338 (1960)
E. W. Montroll: Fundamental Problems in Statistical Mechanics, compiled by E. D. G. Cohen (North
Holland, Amsterdam 1962)
P. N. Argyres, P. L. Kelley: Phys. Rev. 134, A98 (1964)
For a recent review see
F. Haake: In Springer Tracts in Modern Physics, Vol. 66 (Springer, Berlin-Heidelberg-New York
1973) p. 98.

4.6 Exact Stationary Solution of the Master Equation for Systems in Detailed Balance
For many variables see
H. Haken: Phys. Lett. 46A, 443 (1974); Rev. Mod. Phys. 47, 67 (1975),
where further discussions are given.
For one variable see
R. Landauer: J. Appl. Phys. 33, 2209 (1962)

4.8 Kirchhoff's Method of Solution of the Master Equation


G. Kirchhoff: Ann. Phys. Chem., Bd. LXXII 1847, Bd. 12, S. 32
G. Kirchhoff: Poggendorffs Ann. Phys. 72, 495 (1847)
R. Bott, J. P. Mayberry: Matrices and Trees, Economic Activity Analysis (Wiley, New York 1954)
E. L. King, C. Altman: J. Phys. Chem. 60, 1375 (1956)
T. L. Hill: J. Theor. Biol. 10, 442 (1966)
A very elegant derivation of Kirchhoff's solution was recently given by
W. Weidlich: Stuttgart (unpublished)

4.9 Theorems About Solutions of the Master Equation


J. Schnakenberg: Rev. Mod. Phys. 48, 571 (1976)
J. Keizer: On the Solutions and the Steady States of a Master Equation (Plenum Press, New York
1972)

4.10 The Meaning of Random Processes. Stationary State, Fluctuations, Recurrence Time
For Ehrenfest's urn model see
P. and T. Ehrenfest: Phys. Z. 8, 311 (1907)
and also
A. Münster: In Encyclopedia of Physics, ed. by S. Flügge, Vol. III/2: Principles of Thermodynamics
and Statistics (Springer, Berlin-Göttingen-Heidelberg 1959)
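Since the urn model is only cited here, a minimal simulation sketch may help; all parameter values below (number of balls, number of steps, seed) are illustrative choices, not from the text. At each step one of N balls, chosen at random, changes urns, and the occupation relaxes to fluctuations around N/2.

```python
import random

def ehrenfest(n_balls=50, n_steps=20000, seed=1):
    """Ehrenfest urn model: at each step a randomly chosen ball
    moves to the other urn. Returns the occupation of urn A."""
    rng = random.Random(seed)
    in_a = n_balls          # start far from equilibrium: all balls in A
    history = []
    for _ in range(n_steps):
        # a ball sitting in A is drawn with probability in_a / n_balls
        if rng.random() < in_a / n_balls:
            in_a -= 1
        else:
            in_a += 1
        history.append(in_a)
    return history

h = ehrenfest()
late = h[len(h) // 2:]
mean_occupation = sum(late) / len(late)   # fluctuates around n_balls / 2
```

The recurrence-time discussion of this section concerns exactly these fluctuations: states near N/2 recur quickly, while a return of the initial state (all balls in one urn) takes a time growing like 2^N.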

5. Necessity

Monographs on dynamical systems and related topics are


N. N. Bogoliubov, Y. A. Mitropolsky: Asymptotic Methods in the Theory of Nonlinear Oscillations
(Hindustan Publ. Corp., Delhi 1961)
N. Minorski: Nonlinear Oscillations (Van Nostrand, Toronto 1962)
A. Andronov, A. Vitt, S. E. Khaikin: Theory of Oscillators (Pergamon Press, London-Paris 1966)
D. H. Sattinger: In Lecture Notes in Mathematics, Vol. 309: Topics in Stability and Bifurcation
Theory, ed. by A. Dold, B. Eckmann (Springer, Berlin-Heidelberg-New York 1973)

370
References, Further Reading, and Comments 359

M. W. Hirsch, S. Smale: Differential Equations, Dynamical Systems, and Linear Algebra (Academic
Press, New York-London 1974)
V. V. Nemytskii, V. V. Stepanov: Qualitative Theory of Differential Equations (Princeton Univ. Press,
Princeton, N.J. 1960)
Many of the basic ideas are due to
H. Poincaré: Oeuvres, Vol. 1 (Gauthier-Villars, Paris 1928)
H. Poincaré: Sur l'équilibre d'une masse fluide animée d'un mouvement de rotation. Acta Math. 7
(1885)
H. Poincaré: Figures d'équilibre d'une masse fluide (Paris 1903)
H. Poincaré: Sur le problème des trois corps et les équations de la dynamique. Acta Math. 13 (1890)
H. Poincaré: Les méthodes nouvelles de la mécanique céleste (Gauthier-Villars, Paris 1892-1899)
5.3 Stability
J. La Salle, S. Lefschetz: Stability by Ljapunov's Direct Method with Applications (Academic Press,
New York-London 1961)
W. Hahn: Stability of Motion. In Die Grundlehren der mathematischen Wissenschaften in Einzel-
darstellungen, Bd. 138 (Springer, Berlin-Heidelberg-New York 1967)
D. D. Joseph: Stability of Fluid Motions I and II, Springer Tracts in Natural Philosophy, Vols.
27, 28 (Springer, Berlin-Heidelberg-New York 1976)
Exercises 5.3: F. Schlögl: Z. Phys. 243, 303 (1973)

5.4 Examples and Exercises on Bifurcation and Stability


A. Lotka: Proc. Nat. Acad. Sci. (Wash.) 6, 410 (1920)
V. Volterra: Leçons sur la théorie mathématique de la lutte pour la vie (Paris 1931)
N. S. Goel, S. C. Maitra, E. W. Montroll: Rev. Mod. Phys. 43, 231 (1971)
B. van der Pol: Phil. Mag. 43, 6, 700 (1922); 2, 7, 978 (1926); 3, 7, 65 (1927)
H. T. Davis: Introduction to Nonlinear Differential and Integral Equations (Dover Publ. Inc., New
York 1962)
G. Iooss, D. D. Joseph: Elementary Stability and Bifurcation Theory (Springer, Berlin, Heidelberg,
New York 1980)

5.5 Classification of Static Instabilities, or an Elementary Approach to Thom's Theory of
Catastrophes
R. Thom: Structural Stability and Morphogenesis (W. A. Benjamin, Reading, Mass. 1975)
Thom's book requires a good deal of mathematical background. Our "pedestrian's" approach
provides a simple access to Thom's classification of catastrophes. Our interpretation
of how to apply these results to the natural sciences, for instance biology, is, however,
entirely different from Thom's.
T. Poston, I. Stewart: Catastrophe Theory and its Applications (Pitman, London 1978)
E. C. Zeeman: Catastrophe Theory (Addison-Wesley, New York 1977)
P. T. Saunders: An Introduction to Catastrophe Theory (Cambridge University Press, Cambridge
1980)

6. Chance and Necessity

6.1 Langevin Equations: An Example


For general approaches see
R. L. Stratonovich: Topics in the Theory of Random Noise, Vol. 1 (Gordon & Breach, New York-
London 1963)
M. Lax: Rev. Mod. Phys. 32, 25 (1960); 38, 358, 541 (1966); Phys. Rev. 145, 110 (1966)
H. Haken: Rev. Mod. Phys. 47, 67 (1975)
with further references
P. Hänggi, H. Thomas: Phys. Rep. 88, 208 (1982)


6.2 Reservoirs and Random Forces


Here we present a simple example. For general approaches see
R. Zwanzig: J. Stat. Phys. 9, 215 (1973)
H. Haken: Rev. Mod. Phys. 47, 67 (1975)

6.3 The Fokker-Planck Equation


Same references as for Section 6.1

6.4 Some Properties and Stationary Solution of the Fokker-Planck Equation


The "potential case" is treated by
R. L. Stratonovich: Topics in the Theory of Random Noise, Vol. 1 (Gordon & Breach, New York-
London 1963)
The more general case for systems in detailed balance is treated by
R. Graham, H. Haken: Z. Phys. 248, 289 (1971)
R. Graham: Z. Phys. B40, 149 (1981)
H. Risken: Z. Phys. 251, 231 (1972);
see also
H. Haken: Rev. Mod. Phys. 47, 67 (1975)

6.5 Time-Dependent Solutions of the Fokker-Planck Equation

The solution of the n-dimensional Fokker-Planck equation with linear drift and constant diffu-
sion coefficients was given by
M. C. Wang, G. E. Uhlenbeck: Rev. Mod. Phys. 17, 2 and 3 (1945)
For a short representation of the results see
H. Haken: Rev. Mod. Phys. 47,67 (1975)

6.6 Solution of the Fokker-Planck Equation by Path Integrals


L. Onsager, S. Machlup: Phys. Rev. 91, 1505, 1512 (1953)
I. M. Gelfand, A. M. Yaglom: J. Math. Phys. 1, 48 (1960)
R. P. Feynman, A. R. Hibbs: Quantum Mechanics and Path Integrals (McGraw-Hill, New York
1965)
F. W. Wiegel: Path Integral Methods in Statistical Mechanics, Physics Reports 16C, No. 2 (North
Holland, Amsterdam 1975)
R. Graham: In Springer Tracts in Modern Physics, Vol. 66 (Springer, Berlin-Heidelberg-New
York 1973) p. 1
A critical discussion of that paper is given by W. Horsthemke, A. Bach: Z. Phys. B22, 189 (1975)
We follow essentially H. Haken: Z. Phys. B24, 321 (1976), where also classes of solutions of
Fokker-Planck equations are discussed.

6.7 Phase Transition Analogy

The theory of phase transitions of systems in thermal equilibrium is presented, for example, in the
following books and articles:
L. D. Landau, E. M. Lifshitz: In Course of Theoretical Physics, Vol. 5: Statistical Physics (Pergamon
Press, London-Paris 1959)
R. Brout: Phase Transitions (Benjamin, New York 1965)
L. P. Kadanoff, W. Götze, D. Hamblen, R. Hecht, E. A. S. Lewis, V. V. Palciauskas, M. Rayl,
J. Swift, D. Aspnes, J. Kane: Rev. Mod. Phys. 39, 395 (1967)
M. E. Fisher: Rep. Progr. Phys. 30, 731 (1967)
H. E. Stanley: Introduction to Phase Transitions and Critical Phenomena, Internat. Series of
Monographs in Physics (Oxford University Press, New York 1971)
A. Münster: Statistical Thermodynamics, Vol. 2 (Springer, Berlin-Heidelberg-New York and Aca-
demic Press, New York-London 1974)


C. Domb, M. S. Green, eds.: Phase Transitions and Critical Phenomena, Vols. 1-5 (Academic
Press, London 1972-76)
The modern and powerful renormalization group technique of Wilson is reviewed by
K. G. Wilson, J. Kogut: Phys. Rep. 12 C, 75 (1974)
S.-K. Ma: Modern Theory of Critical Phenomena (Benjamin, London 1976)
The profound and detailed analogies between a second order phase transition of a system in
thermal equilibrium (for instance a superconductor) and transitions of a non-equilibrium system
were first derived in the laser-case in independent papers by
R. Graham, H. Haken: Z. Phys. 213, 420 (1968) and in particular Z. Phys. 237, 31 (1970),
who treated the continuum mode laser, and by
V. DeGiorgio, M. O. Scully: Phys. Rev. A2, 1170 (1970),
who treated the single mode laser.
For further references elucidating the historical development see Section 13.

6.8 Phase Transition Analogy in Continuous Media: Space Dependent Order Parameter
a) References to Systems in Thermal Equilibrium
The Ginzburg-Landau theory is presented, for instance, by
N. R. Werthamer: In Superconductivity, Vol. 1, ed. by R. D. Parks (Marcel Dekker Inc., New
York 1969) p. 321
with further references
The exact evaluation of correlation functions is due to
D. J. Scalapino, M. Sears, R. A. Ferrell: Phys. Rev. B6, 3409 (1972)
Further papers on this evaluation are:
L. W. Gruenberg, L. Gunther: Phys. Lett. 38A, 463 (1972)
M. Nauenberg, F. Kuttner, M. Fusman: Phys. Rev. A13, 1185 (1976)
b) References to Systems Far from Thermal Equilibrium (and Nonphysical Systems)
R. Graham, H. Haken: Z. Phys. 237, 31 (1970)
See furthermore Chapters 8 and 9

7. Self-Organization

7.1 Organization
H. Haken: unpublished material

7.2 Self-Organization
A different approach to the problem of self-organization has been developed by
J. v. Neumann: Theory of Self-Reproducing Automata, ed. and completed by Arthur W. Burks (University
of Illinois Press, 1966)

7.3 The Role of Fluctuations: Reliability or Adaptability? Switching


For a detailed discussion of reliability as well as switching, especially of computer elements, see
R. Landauer: IBM Journal 183 (July 1961)
R. Landauer: J. Appl. Phys. 33, 2209 (1962)
R. Landauer, J. W. F. Woo: In Synergetics, ed. by H. Haken (Teubner, Stuttgart 1973)

7.4 Adiabatic Elimination of Fast Relaxing Variables from the Fokker-Planck Equation
H. Haken: Z. Phys. B20, 413 (1975)

7.5 Adiabatic Elimination of Fast Relaxing Variables from the Master Equation
H. Haken: unpublished


7.6 Self-Organization in Continuously Extended Media. An Outline of the Mathematical Approach

7.7 Generalized Ginzburg-Landau Equations for Nonequilibrium Phase Transitions


H. Haken: Z. Phys. B21, 105 (1975)

7.8 Higher-Order Contributions to Generalized Ginzburg-Landau Equations


H. Haken: Z. Phys. B22, 69 (1975); B23, 388 (1975)
For another treatment of the slaving principle see
A. Wunderlin, H. Haken: Z. Phys. B44, 135 (1981)
H. Haken, A. Wunderlin: Z. Phys. B47, 179 (1982)

7.9 Scaling Theory of Continuously Extended Nonequilibrium Systems


We follow essentially
A. Wunderlin, H. Haken: Z. Phys. B21, 393 (1975)
For related work see
E. Hopf: Berichte der Math.-Phys. Klasse der Sächsischen Akademie der Wissenschaften, Leipzig
XCIV, 1 (1942)
A. Schlüter, D. Lortz, F. Busse: J. Fluid Mech. 23, 129 (1965)
A. C. Newell, J. A. Whitehead: J. Fluid Mech. 38, 279 (1969)
R. C. Diprima, W. Eckhaus, L. A. Segel: J. Fluid Mech. 49, 705 (1971)

8. Physical Systems

For related topics see


H. Haken: Rev. Mod. Phys. 47, 67 (1975)
and the articles by various authors in
H. Haken, ed.: Synergetics (Teubner, Stuttgart 1973)
H. Haken, M. Wagner, eds.: Cooperative Phenomena (Springer, Berlin-Heidelberg-New York
1973)
H. Haken, ed.: Cooperative Effects (North Holland, Amsterdam 1974)
H. Haken (ed.): Springer Series in Synergetics Vols. 2-20 (Springer, Berlin-Heidelberg-New York)

8.1 Cooperative Effects in the Laser: Self-Organization and Phase Transition


The dramatic change of the statistical properties of laser light at laser threshold was first derived
and predicted by
H. Haken: Z. Phys. 181, 96 (1964)

8.2 The Laser Equations in the Mode Picture


For a detailed review on laser theory see
H. Haken: In Encyclopedia of Physics, Vol. XXV/2c: Laser Theory (Springer, Berlin-Heidelberg-
New York 1970)

8.3 The Order Parameter Concept


Compare especially
H. Haken: Rev. Mod. Phys. 47, 67 (1975)

8.4 The Single Mode Laser


Same references as for Sections 8.1-8.3.
The laser distribution function was derived by
H. Risken: Z. Phys. 186, 85 (1965) and
R. D. Hempstead, M. Lax: Phys. Rev. 161, 350 (1967)


For a fully quantum mechanical distribution function cf.


W. Weidlich, H. Risken, H. Haken: Z. Phys. 201, 396 (1967)
M. Scully, W. E. Lamb: Phys. Rev. 159, 208 (1967); 166, 246 (1968)

8.5 The Multimode Laser


H. Haken: Z. Phys. 219, 246 (1969)

8.6 Laser with Continuously Many Modes. Analogy with Superconductivity


For a somewhat different treatment see
R. Graham, H. Haken: Z. Phys. 237, 31 (1970)

8.7 First-Order Phase Transitions of the Single Mode Laser


J. F. Scott, M. Sargent III, C. D. Cantrell: Opt. Commun. 15, 13 (1975)
W. W. Chow, M. O. Scully, E. W. van Stryland: Opt. Commun. 15, 6 (1975)

8.8 Hierarchy of Laser Instabilities and Ultrashort Laser Pulses


We follow essentially
H. Haken, H. Ohno: Opt. Commun. 16, 205 (1976)
H. Ohno, H. Haken: Phys. Lett. 59A, 261 (1976), and unpublished work
For a machine calculation see
H. Risken, K. Nummedal: Phys. Lett. 26A, 275 (1968); J. Appl. Phys. 39, 4662 (1968)
For a discussion of that instability see also
R. Graham, H. Haken: Z. Phys. 213, 420 (1968)
For temporal oscillations of a single mode laser cf.
K. Tomita, T. Todani, H. Kidachi: Phys. Lett. 51A, 483 (1975)
For further synergetic effects see
R. Bonifacio (ed.): Dissipative Systems in Quantum Optics, Topics Current Phys., Vol. 27 (Springer,
Berlin-Heidelberg-New York 1982)

8.9 Instabilities in Fluid Dynamics: The Bénard and Taylor Problems

8.10 The Basic Equations

8.11 Introduction of New Variables

8.12 Damped and Neutral Solutions (R ≤ Rc)


Some monographs on hydrodynamics:
L. D. Landau, E. M. Lifshitz: In Course of Theoretical Physics, Vol. 6: Fluid Mechanics (Pergamon
Press, London-New York-Paris-Los Angeles 1959)
Chia-Shun Yih: Fluid Mechanics (McGraw-Hill, New York 1969)
G. K. Batchelor: An Introduction to Fluid Dynamics (University Press, Cambridge 1970)
S. Chandrasekhar: Hydrodynamic and Hydromagnetic Stability (Clarendon Press, Oxford 1961)
Stability problems are treated particularly by Chandrasekhar l.c. and by
C. C. Lin: Hydrodynamic Stability (University Press, Cambridge 1967)

8.13 Solution Near R = Rc (Nonlinear Domain). Effective Langevin Equations

8.14 The Fokker-Planck Equation and its Stationary Solution


We follow essentially
H. Haken: Phys. Lett. 46A, 193 (1973) and in particular Rev. Mod. Phys. 47, 67 (1975)
For related work see
R. Graham: Phys. Rev. Lett. 31, 1479 (1973); Phys. Rev. A10, 1762 (1974)


A. Wunderlin: Thesis. Stuttgart University (1975)


J. Swift, P. C. Hohenberg: Phys. Rev. A15, 319 (1977)
For the analysis of mode-configurations, but without fluctuations, cf.
A. Schlüter, D. Lortz, F. Busse: J. Fluid Mech. 23, 129 (1965)
F. H. Busse: J. Fluid Mech. 30, 625 (1967)
A. C. Newell, J. A. Whitehead: J. Fluid Mech. 38, 279 (1969)
R. C. Diprima, W. Eckhaus, L. A. Segel: J. Fluid Mech. 49, 705 (1971)
Higher instabilities are discussed by
F. H. Busse: J. Fluid Mech. 52, 97 (1972)
D. Ruelle, F. Takens: Comm. Math. Phys. 20, 167 (1971)
J. B. McLaughlin, P. C. Martin: Phys. Rev. A12, 186 (1975)
J. Gollub, S. V. Benson: In Pattern Formation by Dynamic Systems and Pattern Recognition, ed.
by H. Haken, Springer Series in Synergetics, Vol. 5 (Springer, Berlin-Heidelberg-New York
1979)
where further references may be found.
A review of the present status of experiments and theory is given by the books
Fluctuations, Instabilities and Phase Transitions, ed. by T. Riste (Plenum Press, New York 1975)
H. L. Swinney, J. P. Gollub (eds.): Hydrodynamic Instabilities and the Transition to Turbulence,
Topics Appl. Phys., Vol. 45 (Springer, Berlin-Heidelberg-New York 1981)
For a detailed treatment of analogies between fluid and laser instabilities cf.
M. G. Velarde: In Evolution of Order and Chaos, ed. by H. Haken, Springer Series in Synergetics,
Vol. 17 (Springer, Berlin-Heidelberg-New York 1982) where further references may be found.

8.15 A Model for the Statistical Dynamics of the Gunn Instability Near Threshold
J. B. Gunn: Solid State Commun. 1, 88 (1963)
J. B. Gunn: IBM J. Res. Develop. 8 (1964)
For a theoretical discussion of this and related effects see for instance
H. Thomas: In Synergetics, ed. by H. Haken (Teubner, Stuttgart 1973)
Here, we follow essentially
K. Nakamura: J. Phys. Soc. Jap. 38, 46 (1975)

8.16 Elastic Stability: Outline of Some Basic Ideas


Introductions to this field are given by
J. M. T. Thompson, G. W. Hunt: A General Theory of Elastic Stability (Wiley, London 1973)
K. Huseyin: Nonlinear Theory of Elastic Stability (Noordhoff, Leyden 1975)

9. Chemical and Biochemical Systems


In this chapter we particularly consider the occurrence of spatial or temporal structures in
chemical reactions.
Concentration oscillations were reported as early as 1921 by
W. C. Bray: J. Am. Chem. Soc. 43, 1262 (1921)
A different reaction showing oscillations was studied by
B. P. Belousov: Sb. ref. radiats. med., Moscow (1959)
This work was extended by Zhabotinsky and his coworkers in a series of papers
V. A. Vavilin, A. M. Zhabotinsky, L. S. Yaguzhinsky: Oscillatory Processes in Biological and
Chemical Systems (Moscow Science Publ. 1967) p. 181
A. N. Zaikin, A. M. Zhabotinsky: Nature 225, 535 (1970)
A. M. Zhabotinsky, A. N. Zaikin: J. Theor. Biol. 40, 45 (1973)
A theoretical model accounting for the occurrence of spatial structures was first given by
A. M. Turing: Phil. Trans. Roy. Soc. B 237, 37 (1952)


Models of chemical reactions showing spatial and temporal structures were treated in numerous
publications by Prigogine and his coworkers.
P. Glansdorff, I. Prigogine: Thermodynamic Theory of Structure, Stability and Fluctuations
(Wiley, New York 1971)
with many references, and
G. Nicolis, I. Prigogine: Self-Organization in Nonequilibrium Systems (Wiley, New York 1977)
Prigogine has coined the word "dissipative structures". Glansdorff and Prigogine base their work
on entropy production principles and use the excess entropy production as a means to search for
the onset of an instability. The validity of such criteria has been critically investigated by
R. Landauer: Phys. Rev. A12, 636 (1975). The Glansdorff-Prigogine approach does not give an
answer to what happens at the instability point and how to determine or classify the new evolving
structures. An important line of research by the Brussels school, namely chemical reaction
models, comes closer to the spirit of Synergetics.
A review of the statistical aspects of chemical reactions can be found in
D. McQuarrie: Supplementary Review Series in Appl. Probability (Methuen, London 1967)
A detailed review of the whole field is given by
Faraday Symposium 9: Phys. Chemistry of Oscillatory Phenomena, London (1974)
Y. Schiffmann: Phys. Rep. 64, 88 (1980)
For chemical oscillations see especially
G. Nicolis, J. Portnow: Chem. Rev. 73, 365 (1973)

9.2 Deterministic Processes, Without Diffusion, One Variable

9.3 Reaction and Diffusion Equations


We essentially follow
F. Schlögl: Z. Phys. 253, 147 (1972),
who gave the steady state solution. The transient solution was determined by
H. Ohno: Stuttgart (unpublished)

9.4 Reaction-Diffusion Model with Two or Three Variables; the Brusselator and the Oregonator
We give here our own nonlinear treatment (A. Wunderlin, H. Haken, unpublished) of the reaction-
diffusion equations of the "Brusselator" reaction, originally introduced by Prigogine and
coworkers, l.c. For related treatments see
J. F. G. Auchmuty, G. Nicolis: Bull. Math. Biol. 37, 1 (1974)
Y. Kuramoto, T. Tsuzuki: Progr. Theor. Phys. 52, 1399 (1974)
M. Herschkowitz-Kaufman: Bull. Math. Biol. 37, 589 (1975)
The Belousov-Zhabotinsky reaction is described in the already cited articles by Belousov and
Zhabotinsky.
The "Oregonator" model reaction was formulated and treated by
R. J. Field, E. Körös, R. M. Noyes: J. Am. Chem. Soc. 94, 8649 (1972)
R. J. Field, R. M. Noyes: Nature 237, 390 (1972)
R. J. Field, R. M. Noyes: J. Chem. Phys. 60, 1877 (1974)
R. J. Field, R. M. Noyes: J. Am. Chem. Soc. 96, 2001 (1974)
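The Brusselator rate equations themselves (without diffusion) are standard; a minimal numerical sketch of the oscillation discussed in this section, with parameter values that are illustrative choices of our own:

```python
def brusselator(a=1.0, b=3.0, x0=1.0, y0=1.0, dt=0.001, steps=200000):
    """Euler integration of the Brusselator rate equations (no diffusion):
       dX/dt = A - (B + 1) X + X^2 Y,   dY/dt = B X - X^2 Y.
    The fixed point (A, B/A) loses stability at B = 1 + A^2."""
    x, y = x0, y0
    xs = []
    for _ in range(steps):
        dx = a - (b + 1.0) * x + x * x * y
        dy = b * x - x * x * y
        x += dt * dx
        y += dt * dy
        xs.append(x)
    return xs

xs = brusselator()                  # B = 3 > 1 + A^2: oscillatory regime
late = xs[len(xs) // 2:]
amplitude = max(late) - min(late)   # nonzero: a limit cycle has formed
```

For B below the threshold 1 + A² the trajectory spirals into the fixed point (A, B/A); above it a limit cycle, i.e. a sustained chemical oscillation, appears.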

9.5 Stochastic Model for a Chemical Reaction Without Diffusion. Birth and Death Processes.
One Variable
A first treatment of this model is due to
K. J. McNeil, D. F. Walls: J. Stat. Phys. 10, 439 (1974)

9.6 Stochastic Model for a Chemical Reaction with Diffusion. One Variable
The master equation with diffusion is derived by
H. Haken: Z. Phys. B20, 413 (1975)


We essentially follow
C. W. Gardiner, K. J. McNeil, D. F. Walls, I. S. Matheson: J. Stat. Phys. 14, 307 (1976)
Related to this chapter are the papers by
G. Nicolis, P. Allen, A. van Nypelseer: Progr. Theor. Phys. 52, 1481 (1974)
M. Malek-Mansour, G. Nicolis: preprint Febr. 1975

9.7 Stochastic Treatment of the Brusselator Close to Its Soft Mode Instability
We essentially follow
H. Haken: Z. Phys. B20, 413 (1975)

9.8 Chemical Networks


Related to this chapter are
G. F. Oster, A. S. Perelson: Chem. Reaction Dynamics. Arch. Rat. Mech. Anal. 55, 230 (1974)
A. S. Perelson, G. F. Oster: Chem. Reaction Dynamics, Part II: Reaction Networks. Arch. Rat.
Mech. Anal. 57, 31 (1974/75)
with further references.
G. F. Oster, A. S. Perelson, A. Katchalsky: Quart. Rev. Biophys. 6, 1 (1973)
O. E. Rössler: In Lecture Notes in Biomathematics, Vol. 4 (Springer, Berlin-Heidelberg-New York
1974) p. 419
O. E. Rössler: Z. Naturforsch. 31a, 255 (1976)

10. Applications to Biology

10.1 Ecology, Population Dynamics

10.2 Stochastic Models for a Predator-Prey System


For general treatments see
N. S. Goel, N. Richter-Dyn: Stochastic Models in Biology (Academic Press, New York 1974)
D. Ludwig: In Lecture Notes in Biomathematics, Vol. 3: Stochastic Population Theories, ed. by
S. Levin (Springer, Berlin-Heidelberg-New York 1974)
For a different treatment of the problem of this section see
V. T. N. Reddy: J. Statist. Phys. 13, 1 (1975)
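The deterministic skeleton of the stochastic predator-prey models cited above is the classical Lotka-Volterra system; a minimal sketch (rate constants and initial values are illustrative assumptions, not taken from any of the works cited):

```python
def lotka_volterra(a=1.0, b=0.5, c=1.0, d=0.2, x0=4.0, y0=2.0,
                   dt=0.0005, steps=100000):
    """Euler integration of the classical Lotka-Volterra equations
       dx/dt = a x - b x y   (prey),
       dy/dt = -c y + d x y  (predator),
    whose closed orbits circle the fixed point (c/d, a/b)."""
    x, y = x0, y0
    traj = []
    for _ in range(steps):
        dx = a * x - b * x * y
        dy = -c * y + d * x * y
        x += dt * dx
        y += dt * dy
        traj.append((x, y))
    return traj

traj = lotka_volterra()
prey = [p for p, _ in traj]   # oscillates around c/d = 5
```

The stochastic treatments of this section add fluctuations to exactly these oscillations, with the qualitative consequences (extinction, recurrence) discussed in the text.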

10.3 A Simple Mathematical Model for Evolutionary Processes


The equations discussed here seem to have first occurred in the realm of laser physics, where they
explained mode-selection in lasers (H. Haken, H. Sauermann: Z. Phys. 173, 261 (1963)). The
application of laser-type equations to biological processes was suggested by
H. Haken: Talk at the Internat. Conference From Theoretical Physics to Biology, ed. by
M. Marois, Versailles 1969
see also
H. Haken: In From Theoretical Physics to Biology, ed. by M. Marois (Karger, Basel 1973)
A comprehensive and detailed theory of evolutionary processes has been developed by M. Eigen:
Die Naturwissenschaften 58, 465 (1971). With respect to the analogies emphasized in our
book it is interesting to note that Eigen's "Bewertungsfunktion" is identical with the saturated
gain function (8.35) of multimode lasers.
An approach to interpret evolutionary and other processes as games is outlined by
M. Eigen, R. Winkler-Oswatitsch: Das Spiel (Piper, München 1975)
An important new concept is that of hypercycles and, connected with it, of "quasi-species"
M. Eigen, P. Schuster: Naturwissensch. 64, 541 (1977); 65, 7 (1978); 65, 341 (1978)
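The laser-type selection equations referred to here can be sketched in replicator form; the specific fitness values and step sizes below are illustrative choices of ours, not Eigen's:

```python
def selection(fitness, x0, dt=0.01, steps=5000):
    """Selection equations of the laser mode-selection type:
       dx_i/dt = x_i (f_i - phi),   phi = sum_j f_j x_j,
    where the mean fitness phi keeps the total population
    normalized; the species with the largest fitness f_i
    suppresses all the others."""
    x = list(x0)
    for _ in range(steps):
        phi = sum(f * xi for f, xi in zip(fitness, x))
        x = [xi + dt * xi * (f - phi) for f, xi in zip(fitness, x)]
    return x

x = selection([1.0, 1.2, 0.9], [1 / 3, 1 / 3, 1 / 3])
# the share with the largest fitness approaches 1: "survival of the fittest"
```

This is exactly the mode-competition mechanism of the multimode laser: one mode (species) wins, the others are slaved and die out.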


10.4 A Model for Morphogenesis


We present here a model due to Gierer and Meinhardt, cf.
A. Gierer, H. Meinhardt: Biological pattern formation involving lateral inhibition. Lectures on
Mathematics in the Life Sciences 7, 163 (1974)
H. Meinhardt: The Formation of Morphogenetic Gradients and Fields. Ber. Deutsch. Bot. Ges.
87, 101 (1974)
H. Meinhardt, A. Gierer: Applications of a theory of biological pattern formation based on
lateral inhibition. J. Cell. Sci. 15, 321 (1974)
H. Meinhardt: preprint 1976
H. Meinhardt: Models of Biological Pattern Formation (Academic, London 1982)

10.5 Order Parameter and Morphogenesis


We present here results by H. Haken, H. Olbrich: J. Math. Biol. 6, 317 (1978)

11. Sociology and Economics

11.1 A Stochastic Model for the Formation of Public Opinion


We present here Weidlich's model.
W. Weidlich: Collective Phenomena 1, 51 (1972)
W. Weidlich: Brit. J. Math. Stat. Psychol. 24, 251 (1971)
W. Weidlich: In Synergetics, ed. by H. Haken (Teubner, Stuttgart 1973)
The following monographs deal with a mathematization of sociology:
J. S. Coleman: Introduction to Mathematical Sociology (The Free Press, New York 1964)
D. J. Bartholomew: Stochastic Models for Social Processes (Wiley, London 1967)
W. Weidlich, G. Haag: Concepts and Models of a Quantitative Sociology, Springer Ser. Synergetics,
Vol. 14 (Springer, Berlin-Heidelberg-New York 1983)

11.2 Phase Transitions in Economics


I present here my "translation" [first published in the German version: Synergetik. Eine Einführung
(Springer, Berlin-Heidelberg-New York 1982)] of an economic model by G. Mensch et al.
into the formalism and language of synergetics.
G. Mensch, K. Kaasch, A. Kleinknecht, R. Schnapp: "Innovation Trends, and Switching Be-
tween Full- and Under-Employment Equilibria", IIM/dp 80-5, Discussion paper series, Inter-
national Institute of Management, Wissenschaftszentrum Berlin (1980)

12. Chaos

12.1 What is Chaos?


For mathematically rigorous treatments of examples of chaos by means of mappings and other
topological methods see
S. Smale: Bull. A.M.S. 73, 747 (1967)
T. Y. Li, J. A. Yorke: Am. Math. Monthly 82, 985 (1975)
D. Ruelle, F. Takens: Commun. Math. Phys. 20, 167 (1971)

12.2 The Lorenz Model. Motivation and Realization


E. N. Lorenz: J. Atmospheric Sci. 20, 130 (1963)
E. N. Lorenz: J. Atmospheric Sci. 20, 448 (1963)
These are historically the first papers showing a "strange attractor". For further treatments of this
model see
J. B. McLaughlin, P. C. Martin: Phys. Rev. A12, 186 (1975)
M. Lücke: J. Stat. Phys. 15, 455 (1976)
C. T. Sparrow: The Lorenz Equations: Bifurcations, Chaos and Strange Attractors (Springer,
Berlin-Heidelberg-New York 1982)
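The Lorenz equations themselves are standard; a minimal sketch of their sensitive dependence on initial conditions, using the classic parameter values sigma = 10, rho = 28, beta = 8/3 (step size, integration time, and initial states are our own illustrative choices):

```python
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz equations
       dx/dt = sigma (y - x),
       dy/dt = x (rho - z) - y,
       dz/dt = x y - beta z."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-6)    # tiny perturbation of the initial state
for _ in range(20000):        # integrate both copies up to t = 20
    a = lorenz_step(a)
    b = lorenz_step(b)
separation = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
# the initially infinitesimal separation has grown by many orders of magnitude
```

This exponential divergence of neighbouring trajectories on a bounded ("strange") attractor is what the papers cited above exhibit.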


For the laser fluid analogy presented in this chapter see


H. Haken: Phys. Lett. 53A, 77 (1975)

12.3 How Chaos Occurs


H. Haken, A. Wunderlin: Phys. Lett. 62 A, 133 (1977)

12.4 Chaos and the Failure of the Slaving Principle


H. Haken, J. Zorell: Unpublished

12.5 Correlation Function and Frequency Distribution


M. Lücke: J. Stat. Phys. 15, 455 (1976)
Y. Aizawa, I. Shimada: Preprint 1977

12.6 Further Examples of Chaotic Motion


Three Body Problem:
H. Poincaré: Les méthodes nouvelles de la mécanique céleste (Gauthier-Villars, Paris 1892/99).
Reprint (Dover Publ., New York 1960)
For electronic devices, especially Khaikin's "universal circuit", see
A. A. Andronov, A. A. Vitt, S. E. Khaikin: Theory of Oscillators (Pergamon Press, Oxford-
London-Edinburgh-New York-Toronto-Paris-Frankfurt 1966)
Gunn Oscillator:
K. Nakamura: Progr. Theoret. Phys. 57, 1874 (1977)
Numerous chemical reaction models (without diffusion) have been treated by
O. E. Roessler. For a summary and list of references consult
O. E. Roessler: In Synergetics, A Workshop, ed. by H. Haken (Springer, Berlin-Heidelberg-New
York, 1977)
For chemical reaction models including diffusion see
Y. Kuramoto, T. Yamada: Progr. Theoret. Phys. 56, 679 (1976)
T. Yamada, Y. Kuramoto: Progr. Theoret. Phys. 56, 681 (1976)
Modulated chemical reactions have been treated by
K. Tomita, T. Kai, F. Hikami: Progr. Theoret. Phys. 57, 1159 (1977)
For experimental evidence of chaos in chemical reactions see
R. A. Schmitz, K. R. Graziani, J. L. Hudson: J. Chem. Phys. 67, 3040 (1977);
O. E. Roessler, to be published
Earth magnetic field:
J. A. Jacobs: Phys. Reports 26, 183 (1976) with further references
Population dynamics:
R. M. May: Nature 261, 459 (1976)
Review articles:
M. I. Rabinovich: Sov. Phys. Usp. 21, 443 (1978)
A. S. Monin: Sov. Phys. Usp. 21, 429 (1978)
D. Ruelle: La Recherche N° 108, Février (1980)
Some fundamental works on period doubling are
S. Grossmann, S. Thomae: Z. Naturforsch. A32, 1353 (1977)
This paper deals with the logistic map given in the text. The universal behavior of period
doubling was discovered by
M. J. Feigenbaum: J. Stat. Phys. 19, 25 (1978); Phys. Lett. A74, 375 (1979)
An extensive presentation of later results, as well as many further references, is given in
P. Collet, J. P. Eckmann: Iterated Maps on the Interval as Dynamical Systems (Birkhäuser, Boston
1980)


Conference proceedings dealing with chaos:


L. Garrido (ed.): Dynamical Systems and Chaos, Lecture Notes Phys., Vol. 179 (Springer, Berlin,
Heidelberg, New York 1983)
H. Haken (ed.): Chaos and Order in Nature, Springer Ser. Synergetics, Vol. 11 (Springer, Berlin
Heidelberg, New York 1981)
H. Haken (ed.): Evolution of Order and Chaos, Springer Ser. Synergetics, Vol. 17 (Springer, Berlin,
Heidelberg, New York 1982)
The influence of fluctuations on period doubling has been studied by the following authors:
G. Mayer-Kress, H. Haken: J. Stat. Phys. 24, 345 (1981)
J. P. Crutchfield, B. A. Huberman: Phys. Lett. A77, 407 (1980)
A. Zippelius, M. Lücke: J. Stat. Phys. 24, 345 (1981)
J. P. Crutchfield, M. Nauenberg, J. Rudnick: Phys. Rev. Lett. 46, 933 (1981)
B. Shraiman, C. E. Wayne, P. C. Martin: Phys. Rev. Lett. 46, 935 (1981)
The corresponding Kolmogorov equation is established and discussed in
H. Haken, G. Mayer-Kress: Z. Phys. B43, 185 (1981)
H. Haken, A. Wunderlin: Z. Phys. B46, 181 (1982)

13. Some Historical Remarks and Outlook

J. F. G. Auchmuty, G. Nicolis: Bull. Math. Biol. 37, 323 (1975)


L. von Bertalanffy: Blätter für Deutsche Philosophie 18, Nr. 3 and 4 (1945); Science 111, 23 (1950);
Brit. J. Phil. Sci. 1, 134 (1950); Biophysik des Fließgleichgewichts (Vieweg, Braunschweig 1953)
G. Czajkowski: Z. Phys. 270, 25 (1974)
V. DeGiorgio, M. O. Scully: Phys. Rev. A2, 1170 (1970)
P. Glansdorff, I. Prigogine: Thermodynamic Theory of Structure, Stability and Fluctuations
(Wiley, New York 1971)
R. Graham, H. Haken: Z. Phys. 213, 420 (1968); 237, 31 (1970)
H. Haken: Z. Phys. 181, 96 (1964)
M. Herschkowitz-Kaufman: Bull. Math. Biol. 37, 589 (1975)
K. H. Janssen: Z. Phys. 270, 67 (1974)
G. J. Klir: An Approach to General Systems Theory (Van Nostrand Reinhold Comp., New York
1969)
G. J. Klir (ed.): Trends in General Systems Theory (Wiley, New York 1972)
R. Landauer: IBM J. Res. Dev. 5, 3 (1961); J. Appl. Phys. 33, 2209 (1962); Ferroelectrics 2, 47
(1971)
E. Laszlo (ed.): The Relevance of General Systems Theory (George Braziller, New York 1972)
I. Matheson, D. F. Walls, C. W. Gardiner: J. Stat. Phys. 12, 21 (1975)
A. Nitzan, P. Ortoleva, J. Deutch, J. Ross: J. Chem. Phys. 61, 1056 (1974)
I. Prigogine, G. Nicolis: J. Chem. Phys. 46, 3542 (1967)
I. Prigogine, R. Lefever: J. Chem. Phys. 48, 1695 (1968)
A. M. Turing: Phil. Trans. Roy. Soc. B237, 37 (1952)

Subject Index

Abundance of species 305 Characteristic function 31, 85, 86


Activation, short range 311, 317 - -, derivatives 32, 33
Adaptability 200, 202 Chemical network 302
Affinity 60 - potential 56
Approximation, adiabatic 192, 195, 196, - reaction, stochastic model 289, 294
198 Chemical systems 275
Associativity (of sets) 19 Chetayev's instability theorem 125
Autocatalytic process 312 Cloud formation 7, 8
- reaction 107, 128, 275 Cloud streets 8
Autonomous system 113 Coarse graining 66
Average over ensemble 100 Coexistence of species 129, 306
Commutativity (of intersection of sets) 19
Balls in boxes 46 - (of union of sets) 19
Bandwidth 153 Company 331
Belousov-Zhabotinski reaction 9, 288 Competition of species 306
Bénard instability 7, 249 Complement (of a set) 19
Bifurcation 110, 126 Constants of motion 165
- of limit cycles 112 Constraint 45, 48
Binary system 42 Continuity equation 63
Binomial coefficient 33 - - of probability 160
- distribution 33, 34 - - - - distribution 166
- -, mean value 34 Continuous mode laser 237
- -, variance 34 Convection instability 7, 249
Biochemical systems 275 Coordinates, cartesian 111
Birth and death processes 289, 294 -, polar 111
Bistable element 302 Correlation 31
Bit 42 - function 87, 329
Boltzmann distribution function 54 - - of fluctuating forces 149, 151, 187, 188
- - - of harmonic oscillator 156 - -, two point 187, 188
Boltzmann's H-theorem 126 - -, two time 150, 173
Boussinesq approximation 252 - length 188
Box model 33 Critical exponent 188
Brownian movement 69, 149 - fluctuations of laser 234
- -, distribution function 73 - point 113
Brusselator 214, 282 - slowing down 110, 181
-, stochastic treatment 294 Cumulants 86, 152

Cantor set 339 De Morgan's law 19


Catastrophe 133 Density function 24
Cell differentiation 325 Detailed balance 89
Cells, totipotent 325 - - in chemical reactions 292
Center 117, 118 - -, counterexample to 91
Central limit theorem 39 - -, principle 77,95
Chaos 333, 345 Deterministic equations leading to chaos
Chapman-Kolmogorov equation 79, 81

Dictyostelium discoideum 9 -, stable 112, 118


Diffusion 280 -, unstable 111, 112, 118
- , stochastic model 294 Fokker-Planck equation 78, 158, 163, 202
Diffusion coefficient 163 - -, derivation 160
- constant 71 - -, functional 187, 188
- equation 78, 83 - - of convection instability 262
- process, correlation functions 87 - - of many variables 164
Discrete map 345 - - of multimode laser, solution of
Dirac's δ-function 25 236, 237
Disorder 1 - - of single mode laser 234
Dissipation-fluctuation theorem, example - -, stationary solution 165, 166, 167,
150 171, 187, 188
Dissipative structures 361 - -, time-dependent solutions 172, 174
Distribution 21,23 Force, friction 147, 153
- function 23 Forces, fluctuating, correlation functions
Distributivity (of set relations) 19 156, 157
Drift-coefficient 163 -, fluctuating (random), Gaussian
Dynamic processes 105 distribution 151
- system theory 352 - , random 152
Free energy 54,57, 179, 186
Ecology 305 Free market 331
Economics 329 Frequency distribution 343
Economy 329 Friction force 105
Ehrenfest urn model 98 Functional structures 326
Elastic stability 270
Elimination in Fokker-Planck equation, Gambling machine with binomial distribution
adiabatic 202 74
- in master equation, adiabatic 204 Gas atoms in box 44
Energy, degradation 2 Gaussian distribution 37
- , internal 54 Gaussian probability density, characteristic
Ensemble 100 functions 39
Entropy 2, 54, 66, 101 - - - , first moment 39
- , additivity 65 - - -,mean 38
- , coarse grained 66 - - -,moments 39
- density 63 - - - of several variables 38
- - , continuity equation 64 - - -, second moment 39
- flux 64 - - -, variance 38
- , production rate 64 Gaussian process 85, 86
Equilibrium, thermodynamic 100 General systems theory 352
- points 108, 109, 114 Generating function 31
Event, simple 17 Ginzburg-Landau equation, generalized
Evolution 310 206-219
Excitation transfer by hopping process - -, time dependent 187,219
78 - functional 186
Expectation, mathematical 28 - theory of superconductivity 239
Extensive variables 58, 60 Graphs 95
- , examples 96
Ferromagnet 3, 179 Green's function 221
First passage time problem 169 - - of Fokker-Planck equation 173, 174
Flip-flop 303 Growth, density-dependent 306
Fluctuating forces, correlation function of - , density-independent 306
149, 151, 187, 188 Gunn instability 266
Fluctuations 98, 100
- in self-organization 200 H-theorem 126
Fluid, flow of 6 Half-trajectory 119
Flux equilibrium 352 Hamiltonian equations 165
Focus 117, 118 Hard excitation 112


- mode 112 - threshold 128


- - instability 221, 226, 316 Likelihood, equal 21
Harmonic oscillator coupled to reservoirs Limit cycle 110, 113, 119
153 - - , stable 111, 112
- - at different temperatures 157 - -, unstable 111, 112
- -, coupled, Hamiltonian 158 Liouville equation 165
Heat 56 Ljapunov exponent 345
- flow 64 Ljapunov function 124
- , generalized 52, 56 - - , examples for 131, 132
Hexagon formation in fluids 264 - instability theorem 125
Hurwitz criterion 123 - stability theorem 125
Hydra 311 Logistic map 346
Hypothesis 30 Lorenz attractor 337, 339
Hysteresis 182 - model 331, 332
- surface 336, 338
Information 41 Lotka-Volterra equations 308
- entropy 48 - -, stochastic treatment 309
- - as solution of Liouville equation 165 - model 130, 131
- - of population 46
- - , variational principle for 49 Magnetization 3, 179
- , extremal condition for 44 Malfunctioning, suppression 202
- gain 46, 48, 126 Markov process 79, 80
- per symbol 43 Mass action, law of 293
Inhibition 311, 317, 325 Master equation 75, 83, 88, 204
Innovation 330,331 - - for chemical reaction 289, 298
Instabilities, classification 133 - - - - -, stationary solution 291
- in fluid dynamics 249 - - , Kirchhoff's method of solution of 95
- in lasers 243 - -, Ljapunov function of 126
-, symmetry breaking 110 - -, theorems about solutions of 97
Intermittency 345, 348, 349 - - with detailed balance 92
Invariance 109 - - - - -, stationary solution 89
Inversion 109 Maxwell's construction 282
Investment 331 Mean 28
Irregular motion 333, 336 - value
Irreversible expansion of a gas Memory 201
- heat exchange 1
- process 101, 102 Microreversibility 126
- thermodynamics 57 Mode 14, 15
- -, limitations 102 Moments 29, 33, 85, 152
-, time-dependent 173
Joint probability of time-dependent processes Morphogenesis 311,314,325
75,76 Morphogenetic field 325
Multimode laser 235
Lagrange parameters 49
Lagrangian multipliers 45 Network, chemical 302
Landau theory of phase transitions 180 -, electrical 302
Langevin equation 147, 162, 163, 168, 169, Newton's law 105
187, 188 Niche, ecological 307
- - of fluids 258 Node 115, 116
Laser 4-6, 127, 229 Nonautonomous system 126
- equations 230 Nonequilibrium phase transition 206
-, irregular spiking 343 - - - of chemical reactions 30 I
- phase transition 229, 234 Normal distribution 37
-, single mode operation 335 - - of several variables 38
-, two-mode 128, 129
- - - analogy 352 Onsager coefficients 64
- pulses, ultrashort 243, 333 Order I


Order parameter 14, 15, 180, 195, 198, 199, -, joint 26, 79, 85


231, 314, 318, 327, 332, 351 -, - of Markov process 81, 83
- -, space dependent 186 -, multivariate 28
Oregonator 282, 288 -, measure 21
Organization 191 Product 331
Oscillations in biology 9, II Production 330
Oscillator 106 Profit 330
-, anharmonic 108 Public opinion, stochastic model 327
-, harmonic 152
-, overdamped 107 Radio tube equation 132
Overdamped motion 106 Random processes, meaning 98
- variable 19
Parameters, external 15 - -, continuous 24
Partition function 50, 54, 56 - -,dependent 30
- -, variational principle for 52, 53 - -,independent 30
Path integral 79, 83, 84, 176 - -, sum of independent 39
- - , solution of Fokker-Planck equation - - with densities 24
176 Random walk 75
Paths 79 - -, master equation 75
Pattern formation 319, 321, 323 Rayleigh number 254
Period doubling 345, 346, 349 Reaction and diffusion equations 280, 282,
Phase plane 113 294
Phases 2, 3 - equations 276
Phase transition 3, 179 Recurrence time 98, 101
- -, continuous 186 Reliability 200-202
- - of chemical reactions 277, 278, 296 Reservoirs 152
- -, first order 182
- -, - -, in morphogenesis 323, 325 Saddle point 115, 117
- -, - -, of single-mode laser 240 Salary 331
- -, Landau theory 181, 185 Sample point 17
- -, second order 181 - set 17
- -, - -, in morphogenesis 325 - space 17
- - analogy 179, 184 Sampling 17
- - - in continuous media 186 Scaling theory of nonequilibrium systems
- - - for convection instability 265 219
- - - for lasers 234, 239 Self-excitation 112
Poincaré-Bendixson theorem 119 Self-organization 191, 194
Poisson distribution 35, 36, 92 - in continuously extended media 205
- -, generating function 36 Self-sustained oscillation 112
- -, mathematical expectation 36 Set, empty 17
- - of molecules 293 Sets 17, 18
- -, variance of 36 -, disjoint 18
Polarization of public opinion 329 -, intersection 18
Population dynamics 128, 305 -, union 18
Post-buckling 274 Single-mode laser 232
Potential 107, 133 Singular point 114
- case 120 Sink 115
- condition 167 Slaved system 195
Power spectrum 344 Slaving 195, 198, 216, 318, 341, 343
Prandtl number 253 - principle, failure 341
Predator-prey system 308 Slime mold 9
- - , stochastic model 309 Sociology 327
Prepattern 325 Soft excitation 112
Pressure 56 Soft mode 112, 145, 181
Probability 17, 20, 21, 22 - - instability 222-225, 316
-, conditional 29, 80 - - of laser 234
-, elementary 23 Source 115


Stability 108, 120, 126 - potential 56


- criterion, local 122 Thermodynamics 53
- - , global 124 Thorn's theory 133
-, exchange 110 Time-reversal invariance 153, 157
-, orbital 122 Trajectory 113
Stable, asymptotically 115 -, asymptotically stable 121
Standard deviation 29 -, stable 121
Stimulated emission 127, 231 -, unstable 121
Stirling's formula 39 Transition probability 75, 80
Stochastic process 75 - - per second 76
Strange attractor 339 Tree of a graph, directed maximal 96
Subgraph 95 - - - -, maximal 95, 96
Subset 17 Trophical levels 309
Superconductor 3, 4 Tunnel diode 201
Survival of species 129 Turbulence 333, 334
Switching 200, 201 -, in lasers 334
Symbiosis 308
Symmetry breaking 181, 320, 325 Unbiased estimate 48, 49
- - instability 110 Underemployment 331
- - of laser 234 Unfolding 135
-, continuously broken 171 Universality classes 351
-, inversion 109 Unstable, asymptotically 115
Synergy 352 Value, expected 28
System, closed 2 Van der Pol equation 132
Van der Waals equation 280
Taylor instability 249 Variance 29,31
- series 133 Verhulst equation 306
Temperature, absolute, thermodynamic definition of 55 Waves, concentration 12
Thermal equilibrium 2 Words in binomial system 42
Thermodynamic force, generalized 60 Work 107

Part II

Advanced Topics

Instability Hierarchies of Self-Organizing Systems and Devices


to Edith,
Maria, Karin, and Karl-Ludwig
Preface

This text on the interdisciplinary field of synergetics will be of interest to students


and scientists in physics, chemistry, mathematics, biology, electrical, civil and
mechanical engineering, and other fields. It continues the outline of basic con-
cepts and methods presented in my book Synergetics. An Introduction, which
has by now appeared in English, Russian, Japanese, Chinese, and German.
I have written the present book in such a way that most of it can be read in-
dependently of my previous book, though occasionally some knowledge of that
book might be useful.
But why do these books address such a wide audience? Why are instabilities
such a common feature, and what do devices and self-organizing systems have in
common? Self-organizing systems acquire their structures or functions without
specific interference from outside. The differentiation of cells in biology, and the
process of evolution are both examples of self-organization. Devices such as the
electronic oscillators used in radio transmitters, on the other hand, are man-
made. But we often forget that in many cases devices function by means of pro-
cesses which are also based on self-organization. In an electronic oscillator the
motion of electrons becomes coherent without any coherent driving force from
the outside; the device is constructed in such a way as to permit specific collective
motions of the electrons. Quite evidently the dividing line between self-organiz-
ing systems and man-made devices is not at all rigid. While in devices man sets
certain boundary conditions which make self-organization of the components
possible, in biological systems a series of self-imposed conditions permits and
directs self-organization. For these reasons it is useful to study devices and self-
organizing systems, particularly those of biology, from a common point of view.
In order to elucidate fully the subtitle of the present book we must explain
what instability means. Perhaps this is best done by giving an example. A liquid
in a quiescent state which starts a macroscopic oscillation is leaving an old state
and entering a new one, and thus loses its stability. When we change certain
conditions, e. g. power input, a system may run through a series of instabilities
leading to quite different patterns of behavior.
The central question in synergetics is whether there are general principles
which govern the self-organized formation of structures and/ or functions in both
the animate and the inanimate world. When I answered this question in the af-
firmative for large classes of systems more than a decade ago and suggested that
these problems be treated within the interdisciplinary research field of "synerge-
tics", this might have seemed absurd to many scientists. Why should systems
consisting of components as different as electrons, atoms, molecules, photons,
cells, animals, or even humans be governed by the same principles when they or-

ganize themselves to form electrical oscillations, patterns in fluids, chemical


waves, laser beams, organs, animal societies, or social groups? But the past
decade has brought an abundance of evidence indicating that this is, indeed, the
case, and many individual examples long known in the literature could be sub-
sumed under the unifying concepts of synergetics. These examples range from
biological morphogenesis and certain aspects of brain function to the flutter of
airplane wings; from molecular physics to gigantic transformations of stars;
from electronic devices to the formation of public opinion; and from muscle con-
traction to the buckling of solid structures. In addition, there appears to be a re-
markable convergence of the basic concepts of various disciplines with regard to
the formation of spatial, temporal, and functional structures.
In view of the numerous links between synergetics and other fields, one might
assume that synergetics employs a great number of quite different concepts. This
is not the case, however, and the situation can best be elucidated by an analogy.
The explanation of how a gasoline engine functions is quite simple, at least in
principle. But to construct the engine of a simple car, a racing car, or a modern
aircraft requires more and more technical know-how. Similarly, the basic
concepts of synergetics can be explained rather simply, but the application of
these concepts to real systems calls for considerable technical (i. e. mathematical)
know-how. This book is meant to serve two purposes: (1) It offers a simple pre-
sentation of the basic principles of instability, order parameters, and slaving.
These concepts represent the "hard core" of synergetics in its present form and
enable us to cope with large classes of complex systems ranging from those of the
"hard" to those of the "soft" sciences. (2) It presents the necessary mathematical
know-how by introducing the fundamental approaches step by step, from simple
to more complicated cases. This should enable the reader to apply these methods
to concrete problems in his own field, or to search for further analogies between
different fields.
Modern science is quite often buried under heaps of nomenclature. I have
tried to reduce it to a minimum and to explain new words whenever necessary.
Theorems and methods are presented so that they can be applied to concrete
problems; i. e. constructions rather than existence proofs are given. To use a con-
cept of general systems theory, I have tried to present an operational approach.
While this proved feasible in most parts of the book, difficulties arose in connec-
tion with quasi-periodic processes. I have included these intricate problems (e. g.,
the bifurcation of tori) and my approach to solving them not only because they
are at the frontier of modern mathematical research, but also because we are con-
fronted with them again and again in natural and man-made systems. The
chapters which treat these problems, as well as some other difficult chapters,
have been marked with an asterisk and can be skipped in a first reading.
Because this book is appearing in the Springer Series in Synergetics, a few
words should be said about how it relates to other books in this series. Volume 1,
Synergetics. An Introduction, dealt mainly with the first instability leading to
spatial patterns or oscillations, whereas the present book is concerned with all
sorts of instabilities and their sequences. While the former book also contained
an introduction to stochastic processes, these are treated in more detail in C. W.
Gardiner's Handbook of Stochastic Methods, which also provides a general


background for the forthcoming volumes Noise-Induced Transitions by W.


Horsthemke and R. Lefever and The Fokker-Planck Equation. Methods of Solu-
tion and Applications by H. Risken. These three books cover important aspects
of synergetics with regard to fluctuations. The problem of multiplicative noise
and its most interesting consequences are treated only very briefly in the present
book, and the interested reader is referred to the volume by Horsthemke and
Lefever. The Fokker-Planck equation, a most valuable tool in treating systems at
transition points where other methods may fail, is discussed in both my previous
book and this one, but readers who want a comprehensive account of present-
day knowledge on that subject are referred to the volume by Risken. Finally, in
order to keep the present book within a reasonable size, I refrained from going
into too far-reaching applications of the methods presented. The application of
the concepts and methods of synergetics to sociology and economy are given
thorough treatment in W. Weidlich and G. Haag's book, Concepts and Models
of a Quantitative Sociology. Finally, Klimontovich's book, The Kinetic Theory
of Electromagnetic Processes, gives an excellent account of the interaction of
charged particles with electromagnetic fields, as well as the various collective
phenomena arising from this interaction.
While the present book and the others just mentioned provide us with the
theoretical and mathematical basis of synergetics, experiments on self-organizing
systems are at least equally important. Thus far, these experiments have only
been treated in conference proceedings in the Springer Series in Synergetics; it is
hoped that in future they will also be covered in monographs within this series. In
conclusion, it should be pointed out that synergetics is a field which still offers
great scope for experimental and theoretical research.
I am grateful to Dipl. Phys. Karl Zeile and Dr. Arne Wunderlin for their
valuable suggestions and their detailed and careful checking of the manuscript
and the calculations. My thanks also go to Dr. Herbert Ohno, who did the draw-
ings, and to my secretary, Mrs. U. Funke, who meticulously typed several ver-
sions of the manuscript, including the formulas, and without whose tireless ef-
forts this book would not have been possible. I gratefully acknowledge the excel-
lent cooperation of the members of Springer-Verlag in producing this book. I
wish to thank the Volkswagenwerk Foundation, Hannover, for its very efficient
support of the synergetics project, within the framework of which a number of
the results presented in this book were obtained over the past four years.

Stuttgart, January 1983 Hermann Haken

Contents

1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 What is Synergetics About? ................................ 1
1.2 Physics ................................................. 1
1.2.1 Fluids: Formation of Dynamic Patterns . . . . . . . . . . . . . . . . 1
1.2.2 Lasers: Coherent Oscillations ........................ 6
1.2.3 Plasmas: A Wealth of Instabilities .................... 7
1.2.4 Solid-State Physics: Multistability, Pulses, Chaos ....... 8
1.3 Engineering.............................................. 9
1.3.1 Civil, Mechanical, and Aero-Space Engineering:
Post-Buckling Patterns, Flutter, etc. .................. 9
1.3.2 Electrical Engineering and Electronics: Nonlinear
Oscillations ....................................... 9
1.4 Chemistry: Macroscopic Patterns ........................... 11
1.5 Biology ................................................. 13
1.5.1 Some General Remarks ............................. 13
1.5.2 Morphogenesis .................................... 13
1.5.3 Population Dynamics ............................... 14
1.5.4 Evolution......................................... 14
1.5.5 Immune System ................................... 15
1.6 Computer Sciences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.6.1 Self-Organization of Computers, in Particular
Parallel Computing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.6.2 Pattern Recognition by Machines ..................... 15
1.6.3 Reliable Systems from Unreliable Elements. . . . . . . . . . . . . 15
1.7 Economy................................................ 16
1.8 Ecology................................................. 17
1.9 Sociology ............................................... 17
1.10 What are the Common Features of the Above Examples? ....... 17
1.11 The Kind of Equations We Want to Study .................... 18
1.11.1 Differential Equations .............................. 19
1.11.2 First-Order Differential Equations .................... 19
1.11.3 Nonlinearity....................................... 19
1.11.4 Control Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.11.5 Stochasticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.11.6 Many Components and the Mesoscopic Approach . . . . . . . 22
1.12 How to Visualize the Solutions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.13 Qualitative Changes: General Approach. . . . . . . . . . . . . . . . . . . . . . 32
1.14 Qualitative Changes: Typical Phenomena .................... 36

1.14.1 Bifurcation from One Node (or Focus) into Two Nodes
(or Foci) .......................................... 37
1.14.2 Bifurcation from a Focus into a Limit Cycle
(Hopf Bifurcation) ................................. 38
1.14.3 Bifurcations from a Limit Cycle ...................... 39
1.14.4 Bifurcations from a Torus to Other Tori ............... 41
1.14.5 Chaotic Attractors ................................. 42
1.14.6 Lyapunov Exponents* .............................. 42
1.15 The Impact of Fluctuations (Noise). Nonequilibrium Phase
Transitions .............................................. 46
1.16 Evolution of Spatial Patterns ......................... . . . . . . 47
1.17 Discrete Maps. The Poincaré Map . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
1.18 Discrete Noisy Maps ..................................... 56
1.19 Pathways to Self-Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
1.19.1 Self-Organization Through Change of Control
Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
1.19.2 Self-Organization Through Change of Number of
Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
1.19.3 Self-Organization Through Transients. . . . . . . . . . . . . . . . . 58
1.20 How We Shall Proceed .................................... 58

2. Linear Ordinary Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 61


2.1 Examples of Linear Differential Equations: The Case of a
Single Variable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
2.1.1 Linear Differential Equation with Constant
Coefficient ........................................ 61
2.1.2 Linear Differential Equation with Periodic
Coefficient ........................................ 62
2.1.3 Linear Differential Equation with Quasiperiodic
Coefficient ........................................ 63
2.1.4 Linear Differential Equation with Real Bounded
Coefficient ........................................ 67
2.2 Groups and Invariance .................................... 69
2.3 Driven Systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.4 General Theorems on Algebraic and Differential Equations ..... 76
2.4.1 The Form of the Equations .......................... 76
2.4.2 Jordan's Normal Form. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
2.4.3 Some General Theorems on Linear Differential
Equations .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
2.4.4 Generalized Characteristic Exponents and
Lyapunov Exponents ............................... 79
2.5 Forward and Backward Equations: Dual Solution Spaces ....... 81
2.6 Linear Differential Equations with Constant Coefficients ....... 84
2.7 Linear Differential Equations with Periodic Coefficients . . . . . . . . 89
2.8 Group Theoretical Interpretation ........................... 93
2.9 Perturbation Approach* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96


3. Linear Ordinary Differential Equations with Quasiperiodic


Coefficients* ................................................. 103
3.1 Formulation of the Problem and of Theorem 3.1.1 ............. 103
3.2 Auxiliary Theorems (Lemmas) .... . . . . . . . . . . . . . . . . . . . . . . . . . . 106
3.3 Proof of Assertion (a) of Theorem 3.1.1: Construction of a
Triangular Matrix: Example of a 2 x 2 Matrix . . . . . . . . . . . . . . . . . 111
3.4 Proof that the Elements of the Triangular Matrix C are
Quasiperiodic in τ (and Periodic in φj and Ck with Respect to φ):
Example of a 2 x 2 Matrix ................................. 112
3.5 Construction of the Triangular Matrix C and Proof that Its
Elements are Quasiperiodic in τ (and Periodic in φj and Ck with
Respect to φ): The Case of an m x m Matrix, all λ's Different ... 115
3.6 Approximation Methods. Smoothing. . . . . . . . . . .. .. . .. . . .. . . . 118
3.6.1 A Variational Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
3.6.2 Smoothing ........................................ 119
3.7 The Triangular Matrix C and Its Reduction ................... 122
3.8 The General Case: Some of the Generalized Characteristic
Exponents Coincide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
3.9 Explicit Solution of (3.1.1) by an Iteration Procedure . . . . . . . . . . . 134

4. Stochastic Nonlinear Differential Equations. . . . . . . . . . . . . . . . . . . . . . . 143


4.1 An Example ........................................... 143
4.2 The Ito Differential Equation and the Ito-Fokker-Planck
Equation ................................................ 146
4.3 The Stratonovich Calculus ................................. 150
4.4 Langevin Equations and Fokker-Planck Equation ............. 153

5. The World of Coupled Nonlinear Oscillators ...................... 154


5.1 Linear Oscillators Coupled Together. . . . . . . . . . . . . . . . . . . . . . . . . 154
5.1.1 Linear Oscillators with Linear Coupling ............... 155
5.1.2 Linear Oscillators with Nonlinear Coupling. An Example.
Frequency Shifts ................................... 156
5.2 Perturbations of Quasiperiodic Motion for Time-Independent
Amplitudes (Quasiperiodic Motion Shall Persist) .............. 158
5.3 Some Considerations on the Convergence of the Procedure* .. . . . 165

6. Nonlinear Coupling of Oscillators: The Case of Persistence


of Quasiperiodic Motion ....................................... 172
6.1 The Problem ............................................. 172
6.2 Moser's Theorem (Theorem 6.2.1) .......................... 179
6.3 The Iteration Procedure* .................................. 180

7. Nonlinear Equations. The Slaving Principle ....................... 187


7.1 An Example ............................................. 187
7.1.1 The Adiabatic Approximation . . . . . . . . . . . . . . . . . . . . . . .. 188
7.1.2 Exact Elimination Procedure . . . . . . . . . . . . . . . . . . . . . . . .. 189
7.2 The General Form of the Slaving Principle. Basic Equations ..... 195


7.3 Formal Relations ...................................... 198


7.4 The Iteration Procedure ................................. . 202
7.5 An Estimate of the Rest Term. The Question of Differentiability 205
7.6 Slaving Principle for Discrete Noisy Maps* ................. . 207
7.7 Formal Relations* ...................................... . 208
7.8 The Iteration Procedure for the Discrete Case* .............. 214
7.9 Slaving Principle for Stochastic Differential Equations* ...... . 216

8. Nonlinear Equations. Qualitative Macroscopic Changes .. . . . . . . . . .. 222


8.1 Bifurcations from a Node or Focus. Basic Transformations .... 222
8.2 A Simple Real Eigenvalue Becomes Positive ................. 224
8.3 Multiple Real Eigenvalues Become Positive .................. 228
8.4 A Simple Complex Eigenvalue Crosses the Imaginary Axis.
Hopf Bifurcation ........................................ 230
8.5 Hopf Bifurcation, Continued ... . . . . . . . . . . . . . . . . . . . . . . . . . .. 233
8.6 Frequency Locking Between Two Oscillators . . . . . . . . . . . . . . . .. 239
8.7 Bifurcation from a Limit Cycle ............................ 242
8.8 Bifurcation from a Limit Cycle: Special Cases. . . . . . . . . . . . . . .. 247
8.8.1 Bifurcation into Two Limit Cycles ................... 247
8.8.2 Period Doubling .................................. 249
8.8.3 Subharmonics .................................... 249
8.8.4 Bifurcation to a Torus ............................. 251
8.9 Bifurcation from a Torus (Quasiperiodic Motion) . . . . . . . . . . . .. 253
8.10 Bifurcation from a Torus: Special Cases. . . . . . . . . . . . . . . . . . . .. 258
8.10.1 A Simple Real Eigenvalue Becomes Positive ........... 258
8.10.2 A Complex Nondegenerate Eigenvalue Crosses the
Imaginary Axis ................................... 260
8.11 Instability Hierarchies, Scenarios, and Routes to Turbulence ... 264
8.11.1 The Landau-Hopf Picture .......................... 264
8.11.2 The Ruelle and Takens Picture ...................... 265
8.11.3 Bifurcations of Tori. Quasiperiodic Motions.. . . . . . .. .. 265
8.11.4 The Period-Doubling Route to Chaos. Feigenbaum
Sequence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 266
8.11.5 The Route via Intermittency ........................ 266

9. Spatial Patterns .............................................. 267


9.1 The Basic Differential Equations ..................... . . . . .. 267
9.2 The General Method of Solution ........................... 270
9.3 Bifurcation Analysis for Finite Geometries .................. 272
9.4 Generalized Ginzburg-Landau Equations. . . . . . . . . . . . . . . . . . .. 274
9.5 A Simplification of Generalized Ginzburg-Landau Equations.
Pattern Formation in Benard Convection . . . . . . . . . . . . . . . . . . .. 278

10. The Inclusion of Noise ........................................ 282


10.1 The General Approach ................................... 282
10.2 A Simple Example ..................................... 283


10.3 Computer Solution of a Fokker-Planck Equation for a Complex


Order Parameter ........................................ 286
10.4 Some Useful General Theorems on the Solutions
of Fokker-Planck Equations .............................. 293
10.4.1 Time-Dependent and Time-Independent Solutions of the
Fokker-Planck Equation, if the Drift Coefficients are
Linear in the Coordinates and the Diffusion Coefficients
Constant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 293
10.4.2 Exact Stationary Solution of the Fokker-Planck Equation
for Systems in Detailed Balance. . . . . . . . . . . . . . . . . . . . .. 294
10.4.3 An Example. .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 298
10.4.4 Useful Special Cases ............................... 300
10.5 Nonlinear Stochastic Systems Close to Critical Points:
A Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301

11. Discrete Noisy Maps .......................................... 303


11.1 Chapman-Kolmogorov Equation .......................... 303
11.2 The Effect of Boundaries. One-Dimensional Example ......... 304
11.3 Joint Probability and Transition Probability. Forward and
Backward Equation .... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
11.4 Connection with Fredholm Integral Equation ................ 306
11.5 Path Integral Solution .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 307
11.6 The Mean First Passage Time. . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 308
11.7 Linear Dynamics and Gaussian Noise. Exact Time-Dependent
Solution of the Chapman-Kolmogorov Equation ............. 310

12. Example of an Unsolvable Problem in Dynamics. . . . . . . . . . . . . . . . .. 312

13. Some Comments on the Relation Between Synergetics and


Other Sciences. . . . .. . . . . . . . . . . . . . . .. . . . . . .. . . . . . . . . . . . . . . . . .. 314

Appendix A: Moser's Proof of His Theorem ......................... 317


A.1 Convergence of the Fourier Series ............................... 317
A.2 The Most General Solution to the Problem of Theorem 6.2.1 ........ 319
A.3 Convergent Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
A.4 Proof of Theorem 6.2.1 ....................................... 331

References ...................................................... 335

Subject Index ................................................... 351

1. Introduction

1.1 What is Synergetics About?

Synergetics deals with systems composed of many subsystems, which may be of


quite different natures, such as electrons, atoms, molecules, cells, neurons, me-
chanical elements, photons, organs, animals or even humans. In this book we
wish to study how the cooperation of these subsystems brings about spatial,
temporal or functional structures on macroscopic scales. In particular, attention
will be focused on those situations in which these structures arise in a self-
organized fashion, and we shall search for principles which govern these pro-
cesses of self-organization irrespective of the nature of the subsystems. In the
introduction, we present typical examples of disorder - order or order - order
transitions in various fields, ranging from physics to sociology, and give an
outline of the basic concepts and the mathematical approach.

1.2 Physics

In principle, all phase transitions of physical systems in thermal equilibrium such


as the liquid - gas transition, the ferromagnetic transition or the onset of super-
conductivity fall under the general definition of processes dealt with by synerge-
tics. On the other hand, these phenomena are treated intensively by the theory of
phase transitions, in particular nowadays by renormalization group techniques
on which a number of books and review articles are available (we recommend the
reader to follow up the list of references and further reading at the end of this
book while he reads the individual sections). Therefore, these phenomena will
not be considered and instead attention will be focused on those modern
developments of synergetics which are not covered by phase transition theory.

1.2.1 Fluids: Formation of Dynamic Patterns


Fluid dynamics provides us with beautiful examples of pattern formations of in-
creasing complexity. Because the formation of a pattern means that the former
state of the fluid cannot persist any longer, i. e., that it becomes unstable, the phe-
nomena of pattern formation are often called instabilities. Consider as a first

Fig. 1.2.1. Scheme of the experimental setup for the study of the Taylor instability. A liquid is filled in-between two coaxial cylinders of which the outer one is transparent. The inner cylinder can be rotated at a fixed speed

Fig. 1.2.2. (a) Schematic diagram of the formation of rolls. (b) A computer calculation of the trajectory of a particle of the fluid within a roll. In the case shown two rolls are formed, but only the motion of a particle within the upper roll is presented. [After K. Marx: Diplom thesis, University of Stuttgart (1982)]

example the Taylor instability and its subsequent instabilities. In these experi-
ments (Fig. 1.2.1) the motion of a liquid between coaxial cylinders is studied.
Usually one lets the inner cylinder rotate while the outer cylinder is kept fixed,
but experiments have also been performed where both cylinders rotate. Here we
shall describe the phenomena observed with fixed outer cylinders but at various
rotation speeds of the inner cylinder. At slow rotation speeds the fluid forms
coaxial streamlines. This is quite understandable because the inner cylinder tries
to carry the fluid along with it by means of the friction between the fluid and the
cylinder. When the rotation speed is increased (which is usually measured in the
dimensionless Taylor number), a new kind of motion occurs. The motion of the
fluid becomes organized in the form of rolls in which the fluid periodically moves
outwards and inwards in horizontal layers (Fig. 1.2.2a, b).
With further increase of the Taylor number, at a second critical value the rolls
start oscillations with one basic frequency, and at still more elevated Taylor
numbers with two fundamental frequencies. Sometimes still more complicated
frequency patterns are observed. Eventually, at still higher Taylor numbers
chaotic motion sets in. As is shown in Fig. 1.2.3, the evolving patterns can be
seen directly. Furthermore, by means of laser light scattering, the velocity distri-
bution and its Fourier spectrum have been measured (Fig. 1.2.4a - c). In par-
ticular cases with increasing Taylor number, a sequence of newly developing fre-
quencies which are just 1/2, 1/4, 1/8, 1/16 of the fundamental frequency is
observed. Since half a frequency means a double period of motion, this phe-
nomenon is called period doubling. There are several features which are quite
typical for self-organizing systems. When we change an external parameter, in


Fig. 1.2.3. Instability hierarchy of the Taylor instability. (a) Formation of rolls. (b) The rolls start an oscillation. (c) A more complicated oscillatory motion of the rolls. (d) A chaotic motion. [After H. L. Swinney, P. R. Fenstermacher, J. P. Gollub: In Synergetics, A Workshop, ed. by H. Haken, Springer Ser. Synergetics, Vol. 2 (Springer, Berlin, Heidelberg, New York 1977) p. 60]

the present case the rotation speed, the system can form a hierarchy of patterns,
though these patterns are not imposed on the system by some means from the
outside. The patterns can become more and more complicated in their spatial and
temporal structure.
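The period-doubling sequence described above can be illustrated numerically with the simplest system exhibiting it, the one-dimensional logistic map. This is a generic sketch: the map and its parameter values are a stand-in model, not a description of the Taylor experiment itself.

```python
# Period doubling in the logistic map x -> r*x*(1 - x): as the control
# parameter r is raised, the period of the attractor doubles repeatedly,
# the same route to chaos seen in the Taylor and Benard experiments.
def attractor_period(r, n_transient=2000, max_period=64, tol=1e-9):
    """Iterate past the transient, then detect the period of the attractor."""
    x = 0.5
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    orbit = [x]
    for _ in range(max_period):
        x = r * x * (1.0 - x)
        orbit.append(x)
    for p in range(1, max_period + 1):
        if abs(orbit[p] - orbit[0]) < tol:
            return p
    return None  # no period found: aperiodic (chaotic) motion

for r in (2.8, 3.2, 3.5, 3.55):
    print(r, attractor_period(r))  # periods 1, 2, 4, 8
```

Raising r further produces periods 16, 32, ... at ever more closely spaced parameter values, accumulating at the onset of chaos.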
Another standard type of experiment leads to the convection instability (or
Benard instability) and a wealth of further instabilities. Here, a fluid layer in a
vessel with a certain geometry such as cylindrical or rectangular is used. The fluid
is subject to gravity. When the fluid layer is heated from below and kept at a
certain constant temperature at its upper surface, a temperature difference is
established. Due to this temperature difference (temperature gradient), a vertical
flux of heat is created. If the temperature gradient is small, this heat transport
occurs microscopically and no macroscopic motion of the fluid is visible. When
the temperature gradient is increased further, suddenly at a critical temperature
gradient a macroscopic motion of the fluid forming pronounced patterns sets in.
For instance, the heated liquid rises along stripes, cools down at the upper sur-
face and falls down again along other stripes so that a motion in the form of rolls
occurs. The dimension of the rolls is of the order of the thickness of the fluid
layer, which in laboratory systems may range from millimeters to several centi-
meters.
The same phenomenon is observed in meteorology where cloud streets with
dimensions of several hundred meters occur. When in a rectangular geometry the
temperature gradient, which in dimensionless units is measured by the Rayleigh
number, is increased further, the rolls can start an oscillation, and at a still higher
Rayleigh number an oscillation of several frequencies can set in. Finally, an
entirely irregular motion, called turbulence or chaos, occurs.
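To get a feeling for the control parameter, the Rayleigh number can be estimated from tabulated material constants. The numbers below are illustrative values for a thin water layer, assumed for this sketch rather than taken from the experiments just described; the critical value of about 1708 holds for rigid top and bottom boundaries.

```python
# Order-of-magnitude estimate of the Rayleigh number
#   Ra = g * alpha * dT * d**3 / (nu * kappa)
# Convection rolls set in when Ra exceeds its critical value (~1708).
g = 9.81        # gravitational acceleration [m/s^2]
alpha = 2.1e-4  # thermal expansion coefficient of water [1/K]
nu = 1.0e-6     # kinematic viscosity [m^2/s]
kappa = 1.4e-7  # thermal diffusivity [m^2/s]
d = 5.0e-3      # layer thickness [m]
dT = 1.0        # temperature difference [K]

Ra = g * alpha * dT * d**3 / (nu * kappa)
print(Ra, Ra > 1708)  # slightly above onset for these values
```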
There are still other routes from a quiescent layer to turbulence, one such
route occurring by period-doubling. When the Rayleigh number is increased, a
rather complex motion occurs in which at well-defined Rayleigh numbers the
period of motion is repeatedly doubled (Fig. 1.2.5a - c). In cylindrical containers
concentric rolls can be observed or, if the symmetry with respect to the horizontal
middle plane is violated, hexagonal patterns occur. Also, transitions between
rolls and hexagons and even the coexistence of rolls and hexagons can be
observed (Fig. 1.2.6). Other observed patterns are rolls which are oriented at rec-


Fig. 1.2.4. (a) (Left side) The radial component of the local fluid velocity V_r(t) is measured by laser-Doppler velocimetry and plotted versus time. The measurement is made at the Reynolds number R/R_c = 5.7, where in the present case R is defined by ω_cyl r_i(r_o − r_i)/ν; ω_cyl is the angular frequency of the inner cylinder, r_i and r_o are the radii of the inner and outer cylinder, respectively, and ν is the kinematic viscosity. (Right side) The power spectrum corresponding to the left curve. There is one fundamental frequency ω_1 but remarkably many harmonics. (b) The power spectrum of the radial component of the velocity at R/R_c = 13.3. Note that the spectrum now shows three fundamental frequencies ω_1, ω_2, ω_3 and several linear combinations of them. (c) These spectra illustrate the disappearance of ω_3 at R/R_c = 19.8 ± 0.1. There is a component B at ω ≈ 0.45 in both spectra. [After H. L. Swinney, P. R. Fenstermacher, J. P. Gollub: In Synergetics, A Workshop, ed. by H. Haken, Springer Ser. Synergetics, Vol. 2 (Springer, Berlin, Heidelberg, New York 1977) p. 60]

tangles and which thus form a rectangular lattice (Fig. 1.2.7). At elevated
Rayleigh numbers a carpet-like structure is found. Many patterns can also exhibit
a number of imperfections (Fig. 1.2.8). In order to obtain most clearcut results,
in modern experiments the aspect ratio, i. e., the horizontal dimension of the
vessel divided by the vertical dimension, is taken small, i. e., of the order of
unity. At higher aspect ratios the individual transitions can very quickly follow
after one another or can even coexist. Another class of experiments, where the
fluid is heated from above, leads to the "Marangoni instability". Further, in the
atmospheres of the earth and other planets where gravitation, rotation and
heating act jointly, a wealth of patterns is formed.
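The power spectra referred to throughout this section (e.g. Fig. 1.2.4) can be mimicked numerically. The sketch below synthesizes a quasiperiodic signal with two incommensurate frequencies as a stand-in for a measured velocity component, and recovers both frequencies from the two dominant spectral peaks; the signal parameters are invented for illustration.

```python
# Two-frequency (quasiperiodic) test signal and its power spectrum.
import numpy as np

dt = 0.01
t = np.arange(0.0, 100.0, dt)
w1, w2 = 2.0, 2.0 * np.sqrt(2.0)           # incommensurate angular frequencies
v = np.sin(w1 * t) + 0.5 * np.sin(w2 * t)  # synthetic "velocity" signal

power = np.abs(np.fft.rfft(v)) ** 2
omega = 2.0 * np.pi * np.fft.rfftfreq(len(t), dt)  # angular-frequency axis

# the two strongest peaks recover w1 and w2
peaks = sorted(omega[np.argsort(power)[-2:]])
print(peaks)
```

A periodic state shows one fundamental frequency (plus harmonics), a quasiperiodic state several incommensurate fundamentals and their combinations, and a chaotic state a broadband spectrum.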

Fig. 1.2.5a-c. Experimental power spectra for the Benard experiment for Rayleigh numbers of 40.5 R_c, 42.7 R_c and 43 R_c. [After A. Libchaber, J. Maurer: J. Phys. Paris 41, Colloq. C3, 51 (1980)]


Fig. 1.2.6. These photographs show the surface of a liquid heated from below. The coexistence of hexagons and rolls is clearly visible. [After J. Whitehead: private communication]

Fig. 1.2.7. Photograph of the surface of a liquid heated from below. The rectangular lattice formed by two rectangular roll systems is clearly visible. [After J. Whitehead: In Fluctuations, Instabilities and Phase Transitions, ed. by T. Riste (Plenum, New York 1975)]


Fig. 1.2.8. Photograph of the surface of


a liquid heated from below in a circular
vessel. The roll systems are developed
only partly and form imperfections at
their intersection points. [After P. Berge:
In Chaos and Order in Nature, ed. by H.
Haken, Springer Ser. Synergetics, Vol.
11 (Springer, Berlin, Heidelberg, New
York 1981) p. 14]

A further class of patterns of important practical interest concerns the motion


of fluids or gases around moving objects, such as cars, airplanes and ships.
Again specific patterns give rise to various effects. Let us now turn to a quite dif-
ferent field of physics, namely lasers.

1.2.2 Lasers: Coherent Oscillations


Lasers are certain types of lamps which are capable of emitting coherent light.
Since I have treated the laser as a prototype of a synergetic system in extenso in
my book Synergetics, An Introduction [1], I shall describe here only a few
features particularly relevant to the present book.
A typical laser consists of a crystal rod or a glass tube filled with gas, but the
individual experimental arrangement is not so important in the present context.
What is most important for us are the following features: when the atoms the
laser material consists of are excited or "pumped" from the outside, they emit
light waves. At low pump power, the waves are entirely uncorrelated as in a usual
lamp. Could we hear light, it would sound like noise to us.
When we increase the pump rate to a critical value, the noise disappears and is
replaced by a pure tone. This means that the atoms emit a pure sinusoidal light
wave which in turn means that the individual atoms act in a perfectly correlated
way - they become self-organized. When the pump rate is increased beyond a
second critical value, the laser may periodically emit very intense and short
pulses. In this way the following instability sequence occurs (Fig. 1.2.9 a - c):

noise → coherent oscillation at frequency ω_1 → periodic pulses at frequency ω_2 which modulate the oscillation at frequency ω_1,

i.e.,

no oscillation → 1 frequency → 2 frequencies.


Fig. 1.2.9. (a) Schematic plot of the field strength E(t) as a function of time in the case of emission from a lamp. (b) The field strength E(t) as a function of time of the light field emitted from a laser. (c) Schematic plot of the field strength E(t) as a function of time in the case of ultrashort laser pulses

Under different conditions the light emission may become "chaotic" or


"turbulent", i. e., quite irregular. The frequency spectrum becomes broadband.
If a laser is not only pumped but also irradiated with light from another laser,
a number of interesting effects can occur. In particular, the laser can acquire one
of two different internal states, namely one with a high transmission of the
incident light, or another with a low transmission. Since these two states are
stable, one calls this system bistable. Bistable elements can be used as memories
and as logical devices in computers.
When a laser is coupled to a so-called saturable absorber, i. e., a material
where light transmissivity becomes very high at a sufficiently high light intensity,
a number of different instabilities may occur: bistability, pulses, chaos.
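The coexistence of two stable states can be caricatured by overdamped motion in a double-well potential. This is a generic sketch with made-up parameters, not an actual laser equation.

```python
# Overdamped motion in the double well V(x) = x**4/4 - x**2/2.
# The two minima at x = +1 and x = -1 play the role of the two stable
# transmission states; the initial condition decides which one is reached.
def relax(x0, dt=1e-2, steps=10000):
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)  # dx/dt = -dV/dx
    return x

print(relax(0.2))   # relaxes to the state near +1
print(relax(-0.2))  # relaxes to the state near -1
```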
The laser played a crucial role in the development of synergetics for various
reasons. In particular, it allowed detailed theoretical and experimental study of
the phenomena occurring within the transition region lamp → laser, where a sur-
prising and far-reaching analogy with phase transitions of systems in thermal
equilibrium was discovered. This analogy includes a symmetry-breaking insta-
bility, critical slowing down and critical fluctuations. The results show that close
to the transition point of a synergetic system fluctuations play a crucial role.
(These concepts are explained in [1], and are dealt with here in great detail.)

1.2.3 Plasmas: A Wealth of Instabilities


A plasma consists of a gas of atoms which are partly or fully deprived of their
electronic cloud, i. e., ionized. Thus a plasma can be characterized as a gas or
fluid composed of electrically charged particles. Since fluids may show quite dif-
ferent instabilities it is not surprising that instabilities occur in plasmas also.


Fig. 1.2.10. Calculated lines of constant vertical velocity of a one-component plasma heated from below and subjected to a vertical homogeneous magnetic field. [After H. Klenk, H. Haken: Acta Phys. Austriaca 52, 187 (1980)]

Because of the charges and their interaction with electromagnetic fields, new
types of instabilities may occur, and indeed in plasmas a huge variety of instabili-
ties are observed. For example, Fig. 1.2.10 presents a theoretical result on the
formation of a velocity pattern of a plasma which is heated from below and sub-
jected to a constant vertical magnetic field. It is beyond the scope of this book to
list these instabilities here. Besides acquiring ordered states, a plasma can also
migrate from one instability to another or it may suffer a violent instability by
which the plasma state collapses. A study of these phenomena is of utmost im-
portance for the construction of fusion reactors and for other fields, e. g., astro-
physics.

1.2.4 Solid-State Physics: Multistability, Pulses, Chaos


A few examples of phenomena are given, where under a change of external con-
ditions qualitatively new phenomena occur.
a) The Gunn Oscillator. When a relatively small voltage is applied to a sample of
gallium arsenide (GaAs), a constant current obeying Ohm's law is generated.
When the voltage is increased, at a critical voltage this constant current is
replaced by regular pulses. With a still higher voltage irregular pulses (chaos) can
also be observed.
b) Tunnel Diodes. These are made of semiconductors doped with well-defined
impurity atoms, so that energy bands (in particular the valence and the conduc-
tion band) are deformed. When an electric field is applied, a tunnel current may
flow and different operating states of the tunnel diode may be reached, as in-
dicated in Fig. 1.2.11.

Fig. 1.2.11. Different operating states A, B, C of a
tunnel diode where the current I is plotted versus applied
voltage V. The states A and C are stable, whereas B is
unstable. This picture provides a typical example of a
bistable device with stability points A and C


c) Thermoelastic Instabilities. Qualitative changes of behavior on macroscopic


scales can also be observed in mechanics, for instance in thermoelastic instabili-
ties. When strain is applied to a solid, beyond the elastic region at critical para-
meter values of the strain qualitatively new phenomena may occur, for instance,
acoustic emission.
d) Crystal Growth. Crystals may exhibit structure at two different length scales.
At the microscopic level the atoms (or molecules) form a regular lattice with fixed
spacings between the lattice sites. The order at the microscopic level can be made
visible by X-ray or electron diffraction, which reveals the regular arrangement of
atoms or molecules within a lattice. The lattice constant provides one length
scale. The other length scale is connected with the macroscopic form of a crystal,
e. g., the form of the snow crystal. While it is generally accepted that the lattice
structure can be explained by the assumption that the atoms or molecules acquire
a state of minimal free energy, which at least in principle can be calculated by
means of quantum mechanics, the explanation of the macroscopic shapes
requires a different approach. Here we have to study the processes of crystal
growth. It is this latter kind of problem which concerns synergetics.

1.3 Engineering

1.3.1 Civil, Mechanical, and Aero-Space Engineering: Post-Buckling Patterns,


Flutter, etc.
Dramatic macroscopic changes of systems caused by change of external para-
meters are well known in engineering. Examples are: the bending of a bar under
load as treated by Euler in the eighteenth century, the breakdown of bridges
beyond a critical load, deformations of thin shells under homogeneous loads
(Fig. 1.3.1), where for instance hexagons and other post-buckling patterns can be
observed. Mechanical instabilities can also occur in a dynamical fashion, for
instance the flutter of wings of airplanes.

1.3.2 Electrical Engineering and Electronics: Nonlinear Oscillations


Radio waves are generated by electromagnetic oscillators, i. e., circuits
containing radio tubes or transistors. The coherent electromagnetic oscillations
of these devices can be viewed as an act of self-organization. In the non-oscil-
lating state, the electrons move randomly since their motion is connected with
thermal noise, whereas under suitable external conditions self-sustained oscilla-
tions may be achieved in which macroscopic electric currents oscillate at a well-
defined frequency in the electronic circuit. When oscillators are coupled to each
other, a number of phenomena common also to fluid dynamics occur, namely
frequency locking, period doubling or tripling (Fig. 1.3.2), and chaos, i. e.,
irregular emission. An important role is played by the question whether a system
of coupled oscillators can still act as a set of oscillators with individual fre-
quencies, or whether entirely new types of motion, e. g., chaotic motion, occur.
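Frequency locking of two coupled oscillators can be sketched with the standard equation for their phase difference ψ, dψ/dt = Δω − K sin ψ (an Adler-type model; the parameter values are illustrative, not those of a particular circuit). For detuning Δω smaller than the coupling K, the phase difference approaches a fixed value, i.e., the oscillators lock.

```python
import math

def locked(detuning, coupling, dt=1e-3, steps=200_000):
    """Integrate d(psi)/dt = detuning - coupling*sin(psi); report locking."""
    psi = 0.0
    for _ in range(steps):
        psi += dt * (detuning - coupling * math.sin(psi))
    # locked if the phase difference has come to rest
    return abs(detuning - coupling * math.sin(psi)) < 1e-6

print(locked(0.5, 1.0))  # detuning < coupling: the oscillators lock
print(locked(2.0, 1.0))  # detuning > coupling: the phase difference drifts
```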


Fig. 1.3.1. A thin metallic shell is put under an internal subpressure. This photograph shows the postbuckling pattern where the individual hexagons are clearly visible. [After R. L. Carlson, R. L. Sendelbeck, N. J. Hoff: Experimental Mechanics 7, 281 (1967)]

Fig. 1.3.2a-d. These power spectra of an electronic device with a certain nonlinear capacitance clearly show a series of period doublings (from top to bottom) when the control parameter is increased. [After P. S. Linsay: Phys. Rev. Lett. 47, 1349 (1981)]


1.4 Chemistry: Macroscopic Patterns

In this field synergetics focuses its attention on those phenomena where macro-
scopic patterns occur. Usually, when reactants are brought together and well
stirred, a homogeneous end product arises. However, a number of reactions may
show temporal or spatial or spatio-temporal patterns. The best known example is
the Belousov-Zhabotinsky reaction. Here Ce2(SO4)3, KBrO3, CH2(COOH)2 and
H2SO4, as well as a few drops of ferroin (a redox indicator), are mixed and stirred.

Fig. 1.4.1. Example of a chemical oscillation (schematic): the color alternates periodically between red and blue

In a continuously stirred tank reactor, oscillations of the composition may occur,


as can be seen directly from the periodic change of color from red to blue (Fig.
1.4.1). In a closed system, i. e., without input of new materials, these oscillations
eventually die out. On the other hand, when new reactants are added con-
tinuously and the final products are continuously removed, the oscillations can
be sustained indefinitely. Nowadays more than a dozen systems are known
showing chemical oscillations. If the influx concentration of one reactant is
changed, a sequence of different kinds of behavior can be reached, e. g. an alter-
nation between oscillations and chaotic behavior (Figs. 1.4.2, 3). If the chemicals
are not stirred, spatial patterns can develop. Examples are provided by the
Belousov-Zhabotinsky reaction in the form of concentric waves (Fig. 1.4.4) or
spirals. Some of the oscillating reactions can be influenced by light (photo-
chemistry) where either frequency entrainment or chaotic states can be reached.
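Sustained chemical oscillations of this kind can be sketched with the Brusselator, a standard two-variable model reaction scheme (it is not the actual Belousov-Zhabotinsky kinetics, and the parameter values below are illustrative). For b > 1 + a² the homogeneous steady state becomes unstable and the concentrations settle onto a limit cycle.

```python
# Euler integration of the Brusselator rate equations
#   dx/dt = a - (b + 1)*x + x**2 * y,   dy/dt = b*x - x**2 * y
# with b above the instability threshold 1 + a**2.
def brusselator(a=1.0, b=3.0, dt=1e-3, steps=60_000):
    x, y = 1.0, 1.0  # dimensionless concentrations
    xs = []
    for _ in range(steps):
        dx = a - (b + 1.0) * x + x * x * y
        dy = b * x - x * x * y
        x += dt * dx
        y += dt * dy
        xs.append(x)
    return xs

xs = brusselator()
tail = xs[len(xs) // 2:]  # discard the transient
print(min(tail), max(tail))  # the oscillation persists: x sweeps a wide range
```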
Quite another class of macroscopic patterns formed by a continuous influx of

Fig. 1.4.2a, b. The Belousov-Zhabotinsky reaction in a chaotic state. (a) The optical density record (arbitrary units) versus time [s]. (b) The corresponding power spectral density versus frequency in a semi-logarithmic plot. [After C. Vidal: In Chaos and Order in Nature, ed. by H. Haken, Springer Ser. Synergetics, Vol. 11 (Springer, Berlin, Heidelberg, New York 1981) p. 69]


Fig. 1.4.3a-c. Dynamics of the Belousov-Zhabotinsky reaction in a stirred flow reactor. Power spectra of the bromide ion potential are shown (a) for a periodic regime, and (b) a chaotic regime for different residence times T, where T = reactor volume/flow rate. All other variables were held fixed. The plot (c) shows the alternating sequence between periodic and chaotic regimes when the residence rate is increased. [After J. S. Turner, J. C. Roux, W. D. McCormick, H. L. Swinney: Phys. Lett. 85A, 9 (1981) and in Nonlinear Problems: Present and Future, ed. by A. R. Bishop (North-Holland, Amsterdam 1981)]

Fig. 1.4.4. Concentric waves of the Belousov-Zhabotinsky reaction. [After M. L. Smoes: In Dynamics of Synergetic Systems, ed. by H. Haken, Springer Ser. Synergetics, Vol. 6 (Springer, Berlin, Heidelberg, New York 1980) p. 80]


matter consists of flames. Hopefully the concepts of synergetics will allow us to
get new insights into these phenomena, known since man first observed fire. Finally, it
is expected that methods and concepts of synergetics can be applied to chemistry
at the molecular level also, in particular to the behavior of biological molecules.
But these areas are still so much in the developmental phase that it would be
premature to include them here.

1.5 Biology

1.5.1 Some General Remarks


The animate world provides us with an enormous variety of well-ordered and
well-functioning structures, and nearly nothing seems to happen without a high
degree of cooperation among the individual parts of a biological system. By means of
synergetic processes, biological systems are capable of "upconverting" energy
transformed at the molecular level to macroscopic energy forms. Synergetic
processes manifest themselves in muscle contraction leading to all sorts of movement,
in electrical oscillations in the brain (Fig. 1.5.1a, b), in the build-up of electric
voltages in electric fish, in pattern recognition, speech, etc. For these reasons,
biology is a most important field of research for synergetics. At the same time we
must be aware of the fact that biological systems are extremely complex and that it
will be wise to concentrate our attention on selected problems, a few
examples of which are mentioned in the following.

Fig. 1.5.1a, b. Two examples of an electroencephalogram. (a) Normal behavior. (b) Behavior in an epileptic seizure. [After A. Kaczmarek, W. R. Adey, reproduced by A. Babloyantz: In Dynamics of Synergetic Systems, ed. by H. Haken, Springer Ser. Synergetics, Vol. 6 (Springer, Berlin, Heidelberg, New York 1980) p. 180]

1.5.2 Morphogenesis
The central problem of morphogenesis can be characterized as follows. How do
the originally undifferentiated cells know where and in which way to differen-
tiate? Experiments indicate that this information is not originally given to the
individual cells but that a cell within a tissue receives information on its position
from its surroundings, whereupon it differentiates. In experiments with embryos,
transplantation of a cell from a central region of the body into the head region
causes this cell to develop into an eye. These experiments demonstrate that the


cells do not receive their information on how to develop from the beginning (e. g.,
through their DNA), but that they receive that information from their position
within the cell tissue. It is assumed that the positional information is provided by
a chemical "pre-pattern" which evolves similarly to the patterns described in the
section on chemistry above. These pattern formations are based on the
cooperation of reactions and diffusion of molecules.
It is further assumed that at sufficiently high local concentrations of these
molecules, called morphogens, genes are switched on, leading to a differentiation
of their cells. A number of chemicals have been found in hydra which are
good candidates for being activating or inhibiting molecules for the formation of
heads or feet. While a detailed theory for the development of organs, e. g., eyes,
along these lines of thought is still to be developed, simpler patterns, e. g., stripes
on furs or rings on butterfly wings, are nowadays explained by this kind of
approach.
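The positional-information idea can be made concrete with a toy model (not taken from the text): a morphogen produced at one end of a row of cells decays with distance, and each cell differentiates according to whether its local concentration exceeds certain thresholds. All numbers, thresholds, and fate labels below are invented for illustration.

```python
import math

def morphogen_gradient(n_cells, decay=0.5):
    # Steady-state concentration of a morphogen secreted at cell 0,
    # falling off exponentially with distance from the source.
    return [math.exp(-decay * i) for i in range(n_cells)]

def differentiate(concentrations, hi=0.5, lo=0.1):
    # A gene is "switched on" when the local concentration exceeds a
    # threshold; thresholds and fate labels are purely hypothetical.
    fates = []
    for c in concentrations:
        if c >= hi:
            fates.append("head")
        elif c >= lo:
            fates.append("trunk")
        else:
            fates.append("foot")
    return fates

fates = differentiate(morphogen_gradient(10))
```

Cells near the source read a high concentration and adopt one fate, distant cells another; a single smooth pre-pattern thus yields sharply bounded regions of differentiation.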

1.5.3 Population Dynamics


The phenomena to be explained are, among others, the abundance and distribu-
tion of species. If different species are supported by only one kind of food, etc.,
competition starts and Darwin's rules of the survival of the fittest apply.
(Actually, a strong analogy to the competition of laser modes exists.) If different
food resources are available, coexistence of species becomes possible.
Species may show temporal oscillations. At the beginning of the twentieth
century, fishermen of the Adriatic Sea observed a periodic change of numbers of
fish populations. These oscillations are caused by the "interaction" between
predator and prey fish. If the predators eat too many prey fish, the number of the
prey fish and thus eventually also the number of the predators decrease. This in
turn allows an increase of the number of prey fish, which then leads to an
increase of the number of predators, so that a cyclic change of the population
occurs. Other populations, such as certain insect populations, may show chaotic
variations in their numbers.
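Predator-prey cycles of this kind are commonly modeled by the Lotka-Volterra equations. The sketch below is a minimal Euler integration with purely illustrative parameters, not a fit to the Adriatic observations.

```python
def lotka_volterra(prey, pred, a=1.0, b=0.5, c=0.5, d=2.0,
                   dt=0.001, steps=20000):
    # prey' = a*prey - b*prey*pred   (prey reproduce, get eaten)
    # pred' = c*prey*pred - d*pred   (predators feed, die out)
    xs, ys = [prey], [pred]
    for _ in range(steps):
        dx = (a * prey - b * prey * pred) * dt
        dy = (c * prey * pred - d * pred) * dt
        prey, pred = prey + dx, pred + dy
        xs.append(prey)
        ys.append(pred)
    return xs, ys

xs, ys = lotka_volterra(prey=3.0, pred=1.0)
```

Both populations stay positive and cycle around the equilibrium (d/c, a/b) = (4, 2), reproducing the periodic change of numbers described above.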

1.5.4 Evolution
Evolution may be viewed as the formation of new macroscopic patterns (namely
new kinds of species) over and over again. Models on the evolution of biomole-
cules are based on a mathematical formulation of Darwin's principle of the

Fig. 1.5.2. Examples of the Eigen-Schuster hypercycles. Two kinds of biomolecules A and B are multiplied by autocatalysis, but in addition the multiplication of B is assisted by that of A and vice versa. Here GS indicates certain ground substances from which the molecules are formed. The right-hand side of this picture presents a cycle in which three kinds of biomolecules A, B, C participate


survival of the fittest. It is assumed that biomolecules multiply by autocatalysis
(or, in a still more complicated way, by cyclic catalysis within "hypercycles") (Fig.
1.5.2). This mechanism can be shown to cause selection which, in combination
with mutations, may lead to an evolutionary process.
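Selection among competing self-reproducing molecules is often written as a replicator equation, dx_i/dt = x_i(f_i - φ) with mean fitness φ = Σ f_j x_j, which keeps the total concentration fixed. The sketch below, with made-up fitness values, illustrates only this selection principle, not the full hypercycle model.

```python
def replicator(fitness, x, dt=0.01, steps=5000):
    # dx_i/dt = x_i * (f_i - phi); subtracting the mean fitness phi
    # preserves the total concentration sum(x) at every Euler step.
    for _ in range(steps):
        phi = sum(f * xi for f, xi in zip(fitness, x))
        x = [xi + dt * xi * (f - phi) for f, xi in zip(fitness, x)]
    return x

x = replicator(fitness=[1.0, 1.5, 2.0], x=[0.5, 0.3, 0.2])
```

The species with the largest growth rate takes over almost completely: "survival of the fittest" emerges as a dynamical outcome rather than an imposed rule.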

1.5.5 Immune System


Further examples of the behavior of complex systems in biology are provided by
enzyme kinetics and by antibody-antigen kinetics. For instance, in the latter case,
new types of antibodies can be generated successively, where some antibodies act
as antigens, thus leading to very complex dynamics of the total system.

1.6 Computer Sciences

1.6.1 Self-Organization of Computers, in Particular Parallel Computing


The problem here is to construct a computer net in which computation is distributed
among the individual parts of the computer network in a self-organized
fashion (Fig. 1.6.1) rather than by a master computer (Fig. 1.6.2). The distribution
of tasks corresponds to certain patterns of the information flux. While continuous
changes of the total task may be treated by the methods of synergetics, considerable
research still has to be put into the problem of how the computer net
can cope with qualitatively new tasks.

Fig. 1.6.1. Schematic diagram of a computer system whose individual computers are connected with each other and determine the distribution of tasks by themselves

Fig. 1.6.2. Individual computers slaved by the master computer

1.6.2 Pattern Recognition by Machines


For the sake of completeness, pattern recognition is mentioned here as a typical synergetic
process. Its mathematical treatment using the methods of synergetics is still in its
infancy, however, so no further details are given here.

1.6.3 Reliable Systems from Unreliable Elements


Usually the individual elements of a system, especially those at the molecular
level, may be unreliable due to imperfections, thermal fluctuations and other


Fig. 1.6.3. How to visualize the buildup of a reliable system from unreliable elements. (Left side) The
potential function V of an individual element being very flat allows for quick jumps of the system
between the holding states 0 and 1. (Right side) By coupling elements together the effective potential
can be appreciably deepened so that a jump from one holding state to another becomes very
improbable

causes. It is suspected that the elements of our brain, such as neurons, are of such
a type. Nature has mastered this problem to construct reliable systems out of
these unreliable elements. When computer elements are made smaller and
smaller, they become less and less reliable. In which way can one put computer
elements together so that the total system works reliably? The methods of synergetics
indicated below will allow us to devise systems which fulfill this task. Let
us exemplify here how a reliable memory can be constructed out of unreliable
elements. In order to describe the behavior of a single element, the concept of the
order parameter is used (to be explained later). Here it suffices to interpret the
order parameter by means of a particle moving in a potential (in an overdamped
fashion) (Fig. 1.6.3). The two holding states 1 and 2 of the memory element are
identified with the two minima of the potential. Clearly, if the "valleys" are flat,
noise will drive the "particle" back and forth and holding a state is not possible.
However, by coupling several elements together, a potential with deeper valleys
can be obtained, which is obviously more reliable. By coupling elements in various
ways, several reliable holding states can be obtained. Let us now turn to phenomena
treated by disciplines other than the natural sciences.
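Before moving on, the double-well argument can be sketched numerically (all parameters invented): an overdamped "particle" in the potential V(q) = k(q² - 1)²/4 is kicked by noise, and the number of jumps between the two holding states is counted for a shallow well and for a deep one, the latter standing in for several coupled elements.

```python
import random

def count_flips(k, noise=0.7, dt=0.01, steps=50000, seed=1):
    # Overdamped Langevin dynamics q' = -dV/dq + f(t) with the
    # double-well potential V(q) = k*(q^2 - 1)^2/4 (barrier height k/4).
    rng = random.Random(seed)
    q, sign, flips = 1.0, 1, 0
    for _ in range(steps):
        q += -k * q * (q * q - 1.0) * dt          # deterministic drift
        q += noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)  # fluctuating force
        if (q > 0 and sign < 0) or (q < 0 and sign > 0):
            sign, flips = -sign, flips + 1        # a held bit was lost
    return flips

shallow = count_flips(k=1.0)   # single, unreliable element
deep = count_flips(k=8.0)      # deepened well: several coupled elements
```

With the barrier raised well above the noise strength, the jump rate drops roughly exponentially (Kramers' law), so the deep well loses its state far less often than the shallow one.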

1.7 Economy

In economy, dramatic macroscopic changes can be observed. A typical example is
the switch from full employment to underemployment. A change of certain
control parameters, such as a shift of investment from production-increasing
to rationalizing investments, may lead to a new state of the economy, i. e., that of
underemployment. Oscillations between these two states have been observed and
are explainable by the methods of synergetics. Another example of the development
of macroscopic structures is the evolution from an agricultural to an industrialized
society.


1.8 Ecology

Dramatic changes on macroscopic scales can be observed in ecology and related
fields. For instance, in mountainous regions the change of climate with altitude,
acting as a control parameter, may cause different belts of vegetation. Similar observations
are made with respect to different zones of climate on the earth, giving
rise to different kinds of vegetation. Further examples of macroscopic changes
are provided by pollution, where an increase of only a few percent in pollution
may cause the dying out of whole populations, e. g., of fish in a lake.

1.9 Sociology

Studies by sociologists strongly indicate that the formation of "public opinion"
(which may be defined in various ways) is a collective phenomenon. One
mechanism which might be fundamental was elucidated by experiments of S.
Asch which, in principle, run as follows. The following task is given to about 10
"test persons": they have to say which reference line agrees in length with three
other lines of different lengths (Fig. 1.9.1). Except for one genuine test person,
all others were helpers of the experimenter, but the single test person did not
know that. In a first run, the helpers gave the correct answer (and so did the test
person). In the following runs, all helpers gave a wrong answer, and now about
60% of the test persons gave the wrong answer, too. This is a clear indication of
how individuals are influenced by the opinions of others. These results are
supported by field experiments by E. Noelle-Neumann. Since individuals
mutually influence each other during opinion-making, the field of public opinion
formation may be analyzed by synergetics. In particular, under certain external
conditions (state of the economy, high taxes), public opinion can undergo dramatic
changes, which can manifest themselves even in revolutions. For further details, the
reader is referred to the references.

Fig. 1.9.1. A typical arrangement of the experiment by Asch as described in the text

1.10 What are the Common Features of the Above Examples?

In all cases the systems consist of very many subsystems. When certain condi-
tions ("controls") are changed, even in a very unspecific way, the system can
develop new kinds of patterns on macroscopic scales. A system may go from a


homogeneous, undifferentiated, quiescent state to an inhomogeneous but well-ordered state, or even into one out of several possible ordered states. Such systems
can be operated in different stable states (bistability or multistability). These
systems can be used, e. g., in computers as a memory (each stable state represents
a number, e. g., 0 or 1). In the ordered state, oscillations of various types can also
occur with a single frequency, or with different frequencies (quasiperiodic oscil-
lations). The system may also undergo random motions (chaos). Furthermore,
spatial patterns can be formed, e. g., honeycomb structures, waves or spirals.
Such structures may be maintained in a dynamic way by a continuous flux of
energy (and matter) through the systems, e. g., in fluid dynamics. In other cases,
the structures are first generated dynamically but eventually a "solidification"
occurs, e. g., in crystal growth or morphogenesis. In a more abstract sense, in
social, cultural or scientific "systems", new patterns can evolve, such as ideas,
concepts, paradigms. Thus in all cases we deal with processes of self-organization
leading to qualitatively new structures on macroscopic scales. What are the
mechanisms by which these new structures evolve? In which way can we describe
these transitions from one state to another? Because the systems may be
composed of quite different kinds of parts, such as atoms, molecules, cells, or
animals, at first sight it may seem hopeless to search for general concepts and
mathematical methods. But this is precisely the aim of the present book.

1.11 The Kind of Equations We Want to Study

In this section we shall discuss how we can deal mathematically with the phenom-
ena described in the preceding sections and with many others. We shall keep in
mind that our approach will be applicable to problems in physics, chemistry, and
biology, and also in electrical and mechanical engineering. Further areas are
economy, ecology, and sociology. In all these cases we have to deal with systems
composed of very many subsystems for which not all information may be avail-
able. To cope with these systems, approaches based on thermodynamics or infor-
mation theory are frequently used. But during the last decade at least, it became
clear that such approaches (including some generalizations such as irreversible
thermodynamics) are inadequate to cope with physical systems driven away from
thermal equilibrium or with, say, economic processes. The reason lies in the fact
that these approaches are basically static, ultimately based on information theory
which makes guesses on the numbers of possible states. In [1], I have demon-
strated how that formalism works and where its limitations lie. In all the systems
considered here, dynamics plays the crucial role. As shall be demonstrated with
mathematical rigor, it is the growth (or decay) rates of collective "modes" that
determine which and how macroscopic states are formed. In a way, we are led to
some kind of generalized Darwinism which even acts in the inanimate world,
namely, the generation of collective modes by fluctuations, their competition and
finally the selection of the "fittest" collective mode or a combination thereof,
leading to macroscopic structures.
Clearly, the parameter "time" plays a crucial role. Therefore, we have to
study the evolution of systems in time, whereby equations which are sometimes


called "evolution equations" are used. Let us study the structure of such
equations.

1.11.1 Differential Equations


We start with the example of a single variable q which changes in time. Such a
variable can be the number of cells in a tissue, the number of molecules, or the
coordinate of a particle. The temporal change of q will be denoted by dq/dt = q̇.
In many cases q̇ depends on the present state of the system, e. g., on the number
of cells present. The simplest case of the corresponding equation is

q̇ = aq .   (1.11.1)

Such equations are met, for instance, in chemistry, where the production rate q̇
of a chemical is proportional to its concentration q ("autocatalytic reaction"), or
in population dynamics, where q corresponds to the number of individuals. In
wide classes of problems one has to deal with oscillators, which in their simplest
form are described by

q̈1 = -ω²q1 ,   (1.11.2)

where ω is the frequency of the oscillation.
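As a check on (1.11.1), a forward-Euler integration of q̇ = aq can be compared with the exact solution q(t) = q0 e^(at); the values of a and q0 below are arbitrary.

```python
import math

def integrate_growth(a=0.8, q0=1.0, dt=1e-4, steps=20000):
    # Forward Euler: q_{n+1} = q_n + a*q_n*dt, integrated to t = steps*dt.
    q = q0
    for _ in range(steps):
        q += a * q * dt
    return q

approx = integrate_growth()      # numerical value at t = 2.0
exact = math.exp(0.8 * 2.0)      # exact solution q0 * exp(a*t)
```

For this autocatalytic growth law the Euler error shrinks linearly with the step size, so the tiny step used here reproduces the exponential to better than a tenth of a percent.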

1.11.2 First-Order Differential Equations


In contrast to (1.11.1), where a first-order derivative occurs, (1.11.2) contains a
second-order derivative q̈1. However, by introducing an additional variable q2 by
means of

q̇1 = q2 ,   (1.11.3)

we may split (1.11.2) into two equations, namely (1.11.3) and

q̇2 = -ω²q1 .   (1.11.4)

These two equations are equivalent to (1.11.2). By means of the trick of introducing
additional variables we may replace equations containing higher-order
derivatives by a set of differential equations which contain first-order derivatives
only. For a long time, equations of the form (1.11.1-4), or their generalizations to
many variables, were used predominantly in many fields, because these equations
are linear and can be solved by standard methods. We now wish to discuss those
additional features of equations characteristic of synergetic systems.
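The reduction (1.11.2) → (1.11.3, 4) can be carried out numerically: one integrates the first-order pair q̇1 = q2, q̇2 = -ω²q1 and compares with the exact solution q1(t) = cos(ωt). The semi-implicit (symplectic) Euler step is used below only to keep the amplitude from drifting; the step sizes are illustrative.

```python
import math

def oscillate(w=2.0, q1=1.0, q2=0.0, dt=1e-3, steps=5000):
    # First-order system equivalent to q1'' = -w^2 * q1:
    #   q1' = q2,   q2' = -w^2 * q1
    for _ in range(steps):
        q2 += -w * w * q1 * dt   # update the velocity-like variable first,
        q1 += q2 * dt            # then the coordinate (symplectic Euler)
    return q1, q2

q1, q2 = oscillate()             # integrated up to t = 5.0
```

The quantity q2² + ω²q1² plays the role of an energy and stays essentially constant, confirming that the first-order pair reproduces the undamped oscillation of (1.11.2).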

1.11.3 Nonlinearity
All equations of synergetics are nonlinear. Let us consider an example from
chemistry, where a chemical with concentration ql is produced in an auto-


catalytic way by its interaction with a second chemical with concentration q2.
The increase of the concentration of chemical 1 is described by

q̇1 = aq1q2 .   (1.11.5)

Clearly, the cooperation of parts (i. e., synergetics), in our case of molecules, is
expressed by a nonlinear term. Quite generally speaking, we shall consider equations
in which the right-hand side of equations of the type (1.11.5) is a nonlinear
function of the variables of the system. In general, a set of equations for several
variables qj must be considered.

1.11.4 Control Parameters


The next important feature of synergetic systems consists in the control that outside
parameters may exercise over them. In synergetics we deal mainly with open
systems. In physics, chemistry, or biology, systems are driven away from states
of thermal equilibrium by an influx of energy and/or matter from the outside.
We can also manipulate systems from the outside by changing temperature, irradiation,
etc. When these external controls are kept constant for a while, we
may take their effects into account in the equations by certain constant parameters,
called control parameters. An example of such a parameter is a, which
occurs in (1.11.1). For instance, we may manipulate the growth rate of
cells by chemicals from the outside. Let us consider a as the difference between a
production rate p and a decay rate d, i. e., a = p - d. We readily see that by
control of the production rate quite different kinds of behavior of the population
may occur, namely exponential growth, a steady state, or exponential decay.
Such control parameters may enter the evolution equations at various places. For
instance, in

q̇1 = aq1 + βq1q2 ,   (1.11.6)

the constant β describes the coupling between the two systems q1 and q2. When
we manipulate the strength of the coupling from the outside, β plays the role of a
control parameter.
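For (1.11.1) the role of the control parameter can be made explicit: with a = p - d, the exact solution q(t) = q0 e^((p-d)t) exhibits the three regimes named above. The numbers below are arbitrary.

```python
import math

def population_at(p, d, q0=1.0, t=10.0):
    # Exact solution of q' = (p - d)*q; the sign of a = p - d is the
    # control parameter deciding the long-time behavior.
    return q0 * math.exp((p - d) * t)

grow = population_at(p=2.0, d=1.0)    # a > 0: exponential growth
steady = population_at(p=1.0, d=1.0)  # a = 0: steady state
decay = population_at(p=1.0, d=2.0)   # a < 0: exponential decay
```

An unspecific change of the production rate p thus switches the system between qualitatively different kinds of behavior, which is precisely what "control parameter" means here.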

1.11.5 Stochasticity
A further important feature of synergetic systems is stochasticity. In other
words, the temporal evolution of these systems depends on causes which cannot
be predicted with absolute precision. These causes can be taken care of by "fluctuating"
forces f(t) which in their simplest form transform (1.11.1) into

q̇ = aq + f(t) .   (1.11.7)

Sometimes the introduction of these forces causes deep philosophical
problems which shall be briefly discussed, though later on a more pragmatic
point of view will be taken, i. e., we shall assume that the corresponding fluc-


tuating forces are given for each system under consideration. Before the advent
of quantum theory, thinking not only in physics but also in many other fields was
dominated by a purely mechanistic point of view. Namely, once the initial state
of a system is known it will be possible to predict its further evolution in time
precisely. This idea was characterized by Laplace's catch phrase, "spirit of the
world". If such a creature knows the initial states of all the individual parts of a
system (in particular all positions and velocities of its particles) and their interac-
tions, it (or he) can predict the future for ever. Three important ideas have
evolved since then.
a) Statistical Mechanics. Though it is still in principle possible to predict, for
instance, the further positions and velocities of particles in a gas, it is either undesirable
or in practice impossible to do this explicitly. Rather, for any purpose it
will be sufficient to describe a gas by statistics, i. e., to make predictions in the
sense of probability, for instance, on how probable it will be to find n particles with
a velocity between v and v + dv. Once such a probabilistic point of view is
adopted, fluctuations will be present. The most famous example is Brownian
motion, in which f(t) in (1.11.7) represents the impact of all particles of a liquid
on a bigger particle. Such fluctuations occur whenever we pass from a microscopic
description to one which uses more or less macroscopic variables, for
instance when we describe a liquid not by the positions of its individual molecules
but rather by the local density and velocity of its molecules.
b) Quantum Fluctuations. With the advent of quantum theory in the 1920s it
became clear that it is impossible even in principle to make predictions with
absolute certainty. This is made manifest by Heisenberg's uncertainty principle
which states that it is impossible to measure at the same time the velocity and the
position of a particle with absolute precision. Thus the main assumption of
Laplace's "spirit of the world" is no longer valid. This impossibility is cast into
its most precise form by Born's probability interpretation of the wave function in
quantum theory. Since quantum theory lies at the root of all events in the
material world, uncertainties caused by quantum fluctuations are unavoidable.
This will be particularly important where microscopic events are amplified to
acquire macroscopic dimensions. (For instance, in biology mutations may be
caused by quantum fluctuations.)

c) Chaos. There is a third, more recent, development which shows that even
without quantum fluctuations the future path of systems cannot be predicted.
Though the equations describing temporal evolution are entirely deterministic,
future development can proceed along quite different routes. This rests on the
fact that some systems are extremely sensitive in their further development to
initial conditions. This can be most easily visualized by a simple mechanical
example. When a steel ball falls on a vertical razor-blade, its future trajectory
will depend extremely sensitively on its relative position before it hits the razor-
blade. Actually, a whole industry of gambling machines is based on this and
similar phenomena.
If the impact of fluctuations on a system is taken care of by a fluctuating
force, such as in (1.11.7), we shall speak of additive noise. A randomly fluc-


tuating environment may cause other types of noise, too. For instance, the
growth rate in (1.11.1) may fluctuate. In such a case, we have

q̇ = a(t)q ,   (1.11.8)

where we speak of multiplicative noise.
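The two kinds of noise can be contrasted in an Euler-Maruyama sketch (all parameters invented): additive noise as in (1.11.7), q̇ = aq + f(t), and multiplicative noise as in (1.11.8), where the growth rate itself fluctuates around its mean value.

```python
import math
import random

def additive(a=-1.0, sigma=0.5, q=0.0, dt=0.001, steps=10000, seed=0):
    # q' = a*q + f(t): the fluctuating force is simply added on.
    rng = random.Random(seed)
    for _ in range(steps):
        q += a * q * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return q

def multiplicative(a=-1.0, sigma=0.5, q=1.0, dt=0.001, steps=10000, seed=0):
    # q' = a(t)*q: the fluctuating growth rate multiplies the state,
    # so the noise acts in proportion to q itself.
    rng = random.Random(seed)
    for _ in range(steps):
        q *= 1.0 + a * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return q

q_add = additive()
q_mul = multiplicative()
```

Additive noise rattles the variable around zero, while multiplicative noise scales it; with a < 0 the multiplicative process here decays toward zero without ever changing sign.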

1.11.6 Many Components and the Mezoscopic Approach


So far we have discussed the main building blocks of the equations we want to
study. Finally, we should take into consideration that synergetic systems are
generally composed of very many subsystems. Accordingly, these systems must
be described by many variables, denoted by q1, ..., qn. Because the values of
these variables at a given time t describe the state of the system, we shall call these
variables "state variables". All these variables can be lumped together into a state
vector q,

q = (q1, q2, ..., qn) .   (1.11.9)
For practical purposes it is important to choose the variables q in an adequate
fashion. To this end we distinguish between the microscopic, the mezoscopic and
the macroscopic levels of approach. Let us take a liquid as an example. Accord-
ing to our understanding, at the microscopic level we are dealing with its indi-
vidual atoms or molecules described by their positions, velocities and mutual
interactions. At the mezoscopic level we describe the liquid by means of
ensembles of many atoms or molecules. The extension of such an ensemble is
assumed to be large compared to interatomic distances but small compared to the
evolving macroscopic pattern, e. g., the dimension of the hexagons in the Bénard
instability. In this case, the variables qi refer to the ensembles of atoms or
molecules. In the case of a fluid we may identify qi with the density and the mean
local velocity. When macroscopic patterns are formed, the density and velocity
may change locally. In other words, qi becomes a time- and space-dependent
variable. Finally, at the macroscopic level we wish to study the corresponding
spatial patterns. When we treat continuously extended systems (fluids, chemical
reactions, etc.), we shall start from the mezoscopic level and devise methods to
predict the evolving macroscopic patterns.
The mezoscopic level allows us to introduce concepts which refer to
ensembles of atoms but cannot be defined with respect to an individual atom.
Such a concept is, e. g., temperature. Another one is that of phases, such as
liquid or solid. Correspondingly, we may introduce two kinds of variables, say
q1(x, t) and q2(x, t), where q1 refers to the density of molecules in the liquid and
q2 to their density in the solid phase. In this way, e. g., crystal growth can be
mathematically described by evolution equations.
In other fields of science the microscopic level need not be identified with
atoms or molecules. For instance, when mathematically treating a cell tissue, it
may be sufficient to consider the cells as the individual elements of the micro-
scopic level and their density (or type) as the appropriate variable at the
mezoscopic level.


In all such cases qj, or the state vector q, becomes a function of space and time

(q1(x, t), ..., qn(x, t)) = q(x, t) .   (1.11.10)

In addition to the temporal change we now have to consider spatial changes,
which will mostly be taken care of by spatial derivatives. An example is the equation
for the diffusion of a substance, e. g.,

q̇ = DΔq ,   (1.11.11)

where Δ is the Laplace operator, which reads in Cartesian coordinates

Δ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z² ,   (1.11.12)

and D is the diffusion constant.
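Equation (1.11.11) can be discretized on a line of cells with the standard explicit scheme (a sketch in arbitrary units; the ratio r = DΔt/Δx² must stay at or below 1/2 for stability). Reflecting ends are used so that the total amount of substance is conserved.

```python
def diffuse(q, D=1.0, dx=1.0, dt=0.2, steps=200):
    # Explicit scheme for q' = D * d2q/dx2: each cell relaxes toward the
    # mean of its neighbors; r = D*dt/dx^2 is the stability number.
    r = D * dt / (dx * dx)
    for _ in range(steps):
        new = q[:]
        for i in range(1, len(q) - 1):
            new[i] = q[i] + r * (q[i + 1] - 2.0 * q[i] + q[i - 1])
        new[0] = q[0] + r * (q[1] - q[0])        # no-flux (reflecting) ends
        new[-1] = q[-1] + r * (q[-2] - q[-1])
        q = new
    return q

profile = diffuse([0.0] * 10 + [10.0] + [0.0] * 10)
```

An initial spike of substance spreads out symmetrically while its total amount stays exactly constant, which is the defining property of pure diffusion.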


Putting all these individual aspects together, we are led to nonlinear
stochastic partial differential equations of the general type

q̇ = N(a, q, ∇, x, t) ,   (1.11.13)

where ∇ is the operator (∂/∂x, ∂/∂y, ∂/∂z). The study of such equations is an
enormous task and we shall proceed in two steps. First we shall outline how, at
least in some simple cases, the solutions of these equations can be visualized, and
then we shall focus our attention on those phenomena which are of a general
nature.

1.12 How to Visualize the Solutions

At least in principle it is possible to represent the solutions q(t) or q(x, t) by
means of graphs. Let us first deal with qi's, i. e., q1, ..., qn, which are independent
of x but still dependent on time. The temporal evolution of each qi(t) can be
represented by the graphs of Fig. 1.12.1. In many cases it is desirable to obtain an
overview of all variables. This may be achieved by taking q1, ..., qn as
coordinates of an n-dimensional space and attaching to each time t1, t2, etc., the
corresponding point (q1(t), q2(t), ...) (Fig. 1.12.2). When we follow the continuous
sequence of points for t → +∞ and t → -∞ we obtain a trajectory (Fig.
1.12.3). When we choose a different starting point we find a different trajectory
(Figs. 1.12.1-3). Plotting neighboring trajectories we find a whole set of trajectories
(Fig. 1.12.4). Since these trajectories are reminiscent of the streamlines of a
fluid, they are sometimes called "streamlines" and their entirety the "flow".
It is well known (see, for instance, [1]) that such trajectories need not always
go (in one dimension) from q = -∞ to q = +∞, but they may terminate in dif-


Fig. 1.12.1. Temporal evolution of the variables q1 and q2, respectively, as functions of time t. At an
initial time t0 a specific value of q1 and q2 was prescribed and then the solid curves evolved. If different
initial values of q1, q2 are given, other curves (- - -) evolve. If a system possesses many
variables, such plots must be drawn for each variable
Fig. 1.12.2. Instead of plotting the trajectories as functions of time as in Fig. 1.12.1, we may plot the
trajectory in the q1, q2 plane, where for each time tj the corresponding point q1(tj), q2(tj) is
plotted. If the system starts at an initial time t0 from different points, different trajectories evolve. In
the case of n variables the trajectories must be drawn in an n-dimensional space


Fig. 1.12.3. In general, trajectories are plotted for t → +∞ and t → -∞ if one follows the trajectory
Fig. 1.12.4. Example of a set of trajectories

ferent ways. For instance, in two dimensions they may terminate in a node (Fig.
1.12.5) or in a focus (Fig. 1.12.6). Because the streamlines are attracted by their
endpoints, these endpoints are called attractors. In the case of a node, the temporal
behavior of q is given by a graph similar to Fig. 1.12.7, whereas in the case
of a focus the corresponding graph is given in Fig. 1.12.8. In the plane, the only
other singular behavior of trajectories besides nodes, foci and saddle points is a
limit cycle (Fig. 1.12.9). In the case shown in Fig. 1.12.9 the limit cycle is stable
because it attracts the neighboring trajectories. It represents an "attractor". The
corresponding temporal evolution of q(t), which moves along the limit cycle, is
shown in Fig. 1.12.10 and presents an undamped oscillation. In dimensions
greater than two, other kinds of attractors may also occur. An important class
consists of attractors which lie on manifolds or form manifolds. Let us explain


the concept of a manifold in some detail. The cycle along which the motion
occurs in Fig. 1.12.9 is a simple example of a manifold. Here each point on the
manifold can be mapped to a point on an interval and vice versa (Fig. 1.12.11).

Fig. 1.12.5. Trajectories ending at a (stable) node. If time is reversed, the trajectories start from that node and the node is now unstable
Fig. 1.12.6. Trajectories ending at a (stable) focus. If time is reversed, the trajectories start from that focus and it has become an unstable focus

Fig. 1.12.7. Time dependence of a variable q1(t) in the case of a node. The motion is simply damped
Fig. 1.12.8. Time dependence of a variable q1(t) in the case of a focus. The motion is oscillatory but damped

Fig. 1.12.9. Stable limit cycle in a plane. The trajectories approach the limit cycle from the outside and the inside of the limit cycle
Fig. 1.12.10. Temporal behavior of a variable, e. g., q2(t), in the case of a limit cycle


Fig. 1.12.11. One-to-one mapping of the individual points of a limit cycle onto points on a line
Fig. 1.12.12. One-to-one mapping of overlapping pieces on a limit cycle onto overlapping pieces on a line

The total manifold, especially, can be covered by pieces which can be mapped onto
overlapping pieces along the line and vice versa (Fig. 1.12.12). Each interval on
the circle corresponds to a definite interval along the line. If we take as the total
interval 0 to 2π, we may describe each point along the circle by a point on the
φ axis from 0 to 2π. Because there is a one-to-one mapping between points on the
limit cycle and on the interval 0 ... 2π, we can directly use such a coordinate
(system) on the limit cycle itself. This coordinate system is independent of any coordinate
system of the plane in which the limit cycle is embedded.
The limit cycle is a differentiable manifold because when we use, e. g., time as
a parameter, we assume that q̇ exists, or, geometrically expressed, that the limit cycle
possesses a tangent everywhere.
Of course, in general, limit cycles need not be circles but may be other closed
orbits which are repeated after a period T = 2π/ω (ω: frequency). If q̇ exists, this
orbit again forms a differentiable manifold. In one dimension such a periodic
motion is described by

q(ωt) = Σn cn e^(inωt) , n integer,   (1.12.1)

or in m dimensions by

q(wt) = L Cneinwt, n integers, (1.12.2)


n

where the vectors cn have m components.
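The Fourier representation (1.12.1) can be checked numerically. The sketch below (plain Python; the function name and sample count are our choices) approximates a coefficient c_n of a 2π-periodic signal by a discrete Riemann sum; for q(φ) = sin φ one expects c₁ = −i/2 and all other coefficients with |n| ≠ 1 to vanish.

```python
import cmath
import math

def fourier_coeff(q, n, samples=1024):
    """Approximate c_n = (1/2pi) * integral_0^{2pi} q(phi) exp(-i*n*phi) dphi
    by a Riemann sum over equally spaced sample points."""
    total = 0j
    for k in range(samples):
        phi = 2 * math.pi * k / samples
        total += q(phi) * cmath.exp(-1j * n * phi)
    return total / samples

# sin(phi) = (-i/2) e^{i phi} + (i/2) e^{-i phi}, so c_1 = -i/2
c1 = fourier_coeff(math.sin, 1)
```

For a pure harmonic sampled on a uniform grid, the sum reproduces the coefficient essentially to machine precision.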


Another example of a limit cycle is provided by Fig. 1.12.13, which must be
looked at in at least three dimensions. The next example of a manifold is a torus,
which is presented in Fig. 1.12.14. In this case we can find a one-to-one corre-
spondence between each surface element on the torus and an element in the plane,
where we may attach a point of the plane element to each point of the torus
element and vice versa. Furthermore, the torus can be fully covered by overlap-
ping pieces of such elements.
When we put the individual pieces on the plane together in an adequate way
we find the square of Fig. 1.12.15. From it we can construct the torus by folding
the upper and lower edge together to form a tube, bending the tube and gluing
the ends together. This makes it clear that we can describe each point of the torus

1.12 How to Visualize the Solutions 27

Fig. 1.12.13. Limit cycle in three dimensions
Fig. 1.12.14. Two-dimensional torus in three-dimensional space

Fig. 1.12.15. The two-dimensional torus with local coordinates φ₁ and φ₂ can be mapped one-to-one on a square (left side of figure)

by the coordinate system φ₁, φ₂. Because at each point φ₁, φ₂ a tangent plane
exists, the torus forms a differentiable manifold. Such two- and higher-dimen-
sional tori are adequate means to visualize quasiperiodic motion, which takes
place at several frequencies. An example is provided by

q = \sin(\omega_1 t) \sin(\omega_2 t)   (1.12.3)

or, more generally, by

q = f(\omega_1 t, \omega_2 t, \ldots, \omega_N t) .   (1.12.4)

q can be represented by a multiple Fourier series in the form


q = \sum_{n} c_n e^{i n \cdot \omega t} ,   (1.12.5)

where n · ω = n₁ω₁ + n₂ω₂ + … + n_N ω_N, n_j integers.


Whether a vector function (1.12.4) fills the torus entirely or not depends on
the ratios between the ω's. If the ratios of the ω's are rational, only lines on the torus are covered.
This is easily seen by looking at the example in Fig. 1.12.16, in which we have
chosen ω₂:ω₁ = 3:2. Starting at an arbitrary point we find (compare also the
legend of Fig. 1.12.16) only one closed trajectory, which then occurs as a closed
trajectory on the torus.
Another example is provided by ω₂:ω₁ = 1:4 and ω₂:ω₁ = 5:1, respectively
(Figs. 1.12.17, 18). On the other hand, if the frequency ratios are irrational, e.g.,
ω₂:ω₁ = π:1, the trajectory fills up the total square, or in other words, it fills up
the whole torus (Fig. 1.12.19), because in the course of time the trajectory comes
arbitrarily close to any given point. (The trajectory is dense on the torus.) These

Fig. 1.12.16. A trajectory in a plane and its image on the torus in the case ω₂:ω₁ = 3:2. In this figure we have started the trajectory at φ₁ = 0, φ₂ = 0. Because of the periodicity condition we continue that trajectory by projecting its cross section with the axis φ₂ = 2π down to the axis φ₂ = 0 (---). It then continues till the point φ₁ = 2π, φ₂ = π, from where we project it on account of the periodicity condition onto φ₁ = 0, φ₂ = π (---). From there it continues by means of the solid line, etc. Evidently by this construction we eventually find a closed line

Fig. 1.12.17. The case ω₁:ω₂ = 1:4


Fig. 1.12.18. The case ω₂:ω₁ = 5:1

considerations can be easily generalized to tori in multidimensional spaces, which
can be mapped on cubes with coordinates φ₁, …, φ_N, 0 ≤ φ_j ≤ 2π. After these
two explicit examples of (differentiable) manifolds (circle and torus), it is suf-
ficiently clear what we understand by a (differentiable) manifold. (For an
abstract definition the reader is referred to the references.) In the above example
of a stable limit cycle, all trajectories which started in the neighborhood of the
limit cycle eventually ended on that manifold. Manifolds which have such a
property will be called attracting manifolds.
Once q(t) of a system is on a stable limit cycle it will stay there forever. In
such a case we call the limit cycle an invariant manifold because the manifold (the
limit cycle) is not changed during the motion. It is invariant against temporal
evolution. Such a definition applies equally well to all other kinds of manifolds.

Fig. 1.12.19. The case ω₂:ω₁ = π:1, i.e., an irrational ratio. In this computer plot neither the trajectories in the φ₁, φ₂ plane nor those on the torus ever close, and indeed the trajectories fill the whole plane or the whole torus, respectively. Note that in this plot the computer ran only a finite time; otherwise the square or the torus would appear entirely black


Fig. 1.12.20. The stable and unstable manifold of a saddle point (compare text)
Fig. 1.12.21. The unstable and stable manifold connected with a limit cycle

An important concept refers to stable and unstable manifolds. To exhibit the


ideas behind this concept we shall again discuss examples. Figure 1.12.20 shows
the stable and unstable manifold of a fixed point of the saddle type in the plane.
The stable manifold (of a fixed point) is defined as the set of all points which are
the initial points of trajectories which in the limit t → +∞ terminate at the fixed
point. Obviously, in our present example the stable manifold has the same dimen-
sion as the real line. This is so because all of the trajectories starting close to the
stable manifold will pass the saddle at a finite distance but then tend away from
it. The unstable manifold (of a fixed point) is defined as the set of initial points of
trajectories which end in the limit t → −∞ at the fixed point. We note that both
types of manifolds fulfill the properties of an invariant manifold. A further
example is shown in Fig. 1.12.21, where the stable and unstable manifolds of a
limit cycle, which is embedded in a three-dimensional Euclidean space, are
drawn. These manifolds can be constructed locally from the linearized equations
of motion. Consider again as an example the saddle of Fig. 1.12.20. If we denote
the deviation from the saddle by q = (q₁, q₂), we have the following set of
equations

\dot{q}_1 = \alpha q_1 + N_1(q_1, q_2) ,   (1.12.6)

\dot{q}_2 = -\gamma q_2 + N_2(q_1, q_2) ,   (1.12.7)

where α, γ > 0 and the N_j's (j = 1, 2) are nonlinear functions of q_j. Confining
ourselves to small deviations q_j, we may safely neglect the nonlinear terms N_j,
which are assumed to be of order O(|q|²). We then notice that small devia-
tions q₁ are exponentially enhanced in time, meaning that the direction q₁ is tangent to the
unstable manifold of the saddle. Conversely, small perturbations q₂ are exponen-
tially damped in time and we may therefore conclude that the direction q₂ is tangent
to the stable manifold at the saddle.
But in addition, generally a third class of directions might exist along which a
perturbation is neither enhanced nor damped, i. e., the behavior along these


Fig. 1.12.22. Stereoplot of the Rössler attractor. To obtain a stereoscopic impression, put a sheet of paper between your nose and the vertical middle line. Wait until the two pictures merge into one. The parameter values are: a = b = 0.2, c = 5.7, x(0) = y(0) = −0.7, z(0) = 1. Axes: −14 … 14 for x and y, 0 … 28 for z. [After O. E. Rössler: In Synergetics, A Workshop, ed. by H. Haken, Springer Ser. Synergetics, Vol. 2 (Springer, Berlin, Heidelberg, New York 1977) p. 184]
Fig. 1.12.23. Stereoplot of the (modified) Lorenz attractor. The parameters are: a = 2.2, σ = 4, r = 80, b = 8/3, x(0) = 5.1, y(0) = −13.72, z(0) = 52. Axes: −50 … 50 for x, −70 … 70 for y, 0 … 14 for z. [After O. E. Rössler: In Synergetics, A Workshop, ed. by H. Haken, Springer Ser. Synergetics, Vol. 2 (Springer, Berlin, Heidelberg, New York 1977) p. 184]

directions is neutral. These directions are tangent to the so-called center mani-
fold. An example is provided by the limit cycle of Fig. 1.12.21. Obviously a per-
turbation tangent to the cycle can be neither enhanced nor damped in time. Fur-
thermore, we observe that in the case of the saddle (Fig. 1.12.20) the center mani-
fold is reduced to the point itself.
From recent work it appears that there may be attractors which are not mani-
folds. Such attractors have been termed "strange attractors" or "chaotic attrac-
tors". The reader should be warned of some mathematical subtleties. The notion
"strange attractor" is nowadays mostly used if certain mathematical axioms are
fulfilled and it is not known (at least at present) whether systems in nature fulfill
these axioms. Therefore we shall use the more loosely defined concept of a
chaotic attractor. Once the vector q(t) has entered such a region it will stay there
forever. But the trajectory does not lie on a manifold. Rather, the vector q(t) will
go on like a needle which we push again and again through a ball of thread.
Chaotic attractors may occur in three and higher dimensions. Examples are pre-
sented in Figs. 1.12.22 and 23.
The trajectories of a chaotic attractor may be generated by rather simple dif-
ferential equations (if the parameters are adequately chosen). The simplest
example known is the Rössler attractor. Its differential equations possess only
one nonlinearity and read

\dot{x} = -y - z ,   (1.12.8)

\dot{y} = x + a y ,   (1.12.9)

\dot{z} = b + z(x - c) ,   (1.12.10)

where a, b, c are constant parameters. A plot of this attractor is presented in Fig.
1.12.22.
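For readers who wish to reproduce Fig. 1.12.22 qualitatively, equations (1.12.8–10) can be integrated with a standard fourth-order Runge–Kutta scheme. The following is a minimal sketch (plain Python; the step size, run length, and all names are our choices), using the parameter values and starting point quoted in the figure caption.

```python
def roessler(state, a=0.2, b=0.2, c=5.7):
    """Right-hand side of (1.12.8-10)."""
    x, y, z = state
    return (-y - z, x + a * y, b + z * (x - c))

def rk4_step(f, state, dt):
    """One fixed-step fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (p + 2.0 * q + 2.0 * r + w)
                 for s, p, q, r, w in zip(state, k1, k2, k3, k4))

def trajectory(state=(-0.7, -0.7, 1.0), dt=0.01, steps=20000):
    out = [state]
    for _ in range(steps):
        state = rk4_step(roessler, state, dt)
        out.append(state)
    return out

traj = trajectory()  # stays bounded but never settles onto a fixed point
```

Plotting the x and y components of `traj` against each other should trace out the familiar spiral band of the attractor.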


The next simple (and historically earlier) example is the Lorenz attractor. The
corresponding differential equations are

\dot{x} = \sigma(y - x) ,   (1.12.11)

\dot{y} = x(r - z) - y ,   (1.12.12)

\dot{z} = x y - b z ,   (1.12.13)

where σ, b, r are constant parameters. This model was derived for the convection
instability in fluid dynamics. The single mode laser is described by equations
equivalent to the Lorenz equations. A plot of a (modified) Lorenz attractor is
provided by Fig. 1.12.23. The modification consists in adding a constant a to the
rhs of (1.12.11).

1.13 Qualitative Changes: General Approach

A general discussion of the nonlinear partial stochastic differential equations


(1.11.13) seems rather hopeless, because they cover an enormous range of phe-
nomena with which nobody is able to deal. On the other hand, in the realm of
synergetics we wish to find out general features of complex systems. We can take
a considerable step towards that goal by focusing attention on those situations in
which the macroscopic behavior of a system changes dramatically. We wish to
cast this idea into a mathematical form. To this end we first discuss the concept
of structural stability, taking an example from biology.
Figure 1.13.1 shows two different kinds of fish, namely porcupine fish and
sun fish. According to studies by D'Arcy Wentworth Thompson at the beginning
of the twentieth century, the two kinds of fish can be transformed into each other
by a simple grid transformation. While from the biological point of view such a
grid transformation is a highly interesting phenomenon, from the mathematical
point of view we are dealing here with an example of structural stability 1. In a
mathematician's interpretation the two kinds of fish are the same. They are just
deformed copies of each other. A fin is transformed into a fin, an eye into an eye,
etc. In other words, no new qualitative features, such as a new fin, occur. In the
following, we shall have structural changes (in the widest sense of this word) in

1 The concept of structural stability seems to play a fundamental role in biology in a still deeper sense
than in the formation of different species by way of deformation (Fig. 1.13.1). Namely, it seems
that, say within a species, organisms exhibit a pronounced invariance of their functions against
spatial or temporal deformations. This makes it sometimes difficult to perform precise (and repro-
ducible) physical measurements on biological objects. Most probably, in such a case we have to
look for transformation groups under which the function of an organ (or animal) is left invariant.
This invariance property seems to hold for the most complicated organ, the human brain. For
example, this property enables us to recognize the letter a even if it is strongly deformed. From this
ability an art out of writing letters (in the double sense of the word) developed in China (and in old
Europe).


Fig. 1.13.1. The porcupine fish (left) and the sun fish (right) can be transformed into each other by a simple grid transformation. [After D'Arcy Thompson: In On Growth and Form, ed. by J. T. Bonner (University Press, Cambridge 1981)]

mind. In contrast to our example of the two kinds of fish (Fig. 1.13.1), we shall
not be concerned with static patterns but rather with patterns of trajectories, i. e.,
in other words, with the flows we treated in the foregoing section. As we know,
we may manipulate a system from the outside, which in mathematical form will
be done by changing certain control parameters. We want to show how the prop-
erties of a system may change dramatically even if we alter a control parameter
only slightly. In [1] a simple example for this kind of behavior was presented.

Fig. 1.13.2. Equilibrium position of a ball in two kinds of vase (dashed line or solid line)
(dashed line or solid line)

Consider a ball which slides down along the walls of a vase. If the vase has the
form of the dashed line of Fig. 1.13.2, the ball comes to rest at q = 0. If the vase,
however, is deformed as indicated by the solid line, the ball comes to rest at
q = +a or q = −a. The flow diagram of this ball is easily drawn. In the case of
the potential curve with a single minimum, we obtain Fig. 1.13.3, whereas in the
case of two minima, the flow diagram is given by Fig. 1.13.4. A related transition
from a single attractor to two attractors is provided by the self-explanatory Figs.

Fig. 1.13.3. One-dimensional motion of a ball with coordinate q ending at one stable point
Fig. 1.13.4. One-dimensional motion of a ball with two points of stable equilibrium (•) and one unstable point (○)


Fig. 1.13.5. Trajectories ending at a node
Fig. 1.13.6. Two stable and an unstable node
Fig. 1.13.7. One-to-one mapping between these two flow diagrams is possible
Fig. 1.13.8. For these two flow diagrams a one-to-one mapping is impossible

1.13.5, 6. By the way, the example of Fig. 1.13.2 reveals the important role
played by fluctuations. If the ball is initially at q = 0, the valley to which it will
eventually go depends entirely on fluctuations.
What makes the transition from Figs. 1.13.3 to 4 or from Figs. 1.13.5 to 6 so
different from the example of the two kinds of fish is the following. Let us draw
one of the two fishes on a rubber sheet. Then by mere stretching or pushing
together the rubber sheet, we may continuously proceed from one picture to the
other. However, there is no way to proceed from Fig. 1.13.5 to 6 by merely
stretching or deforming a rubber sheet continuously as is possible in Fig.
1.13.7. In other words, there is no longer any one-to-one mapping between the
stream lines of one flow and the stream lines of the other (Fig. 1.13.8). In the
mathematical sense we shall understand by "structural instability" or "structural
changes" such situations in which a one-to-one mapping becomes impossible.
We now wish to discuss briefly how to check whether the change of a control
parameter causes structural instability. We shall come to this problem in much
greater detail later. Here the main method is illustrated by means of a most
simple example. The equation which describes the sliding motion of a ball in a
vase reads:

\dot{q} = -a q - q^3 .   (1.13.1)

For a > 0 the solution reads q = 0, although q = 0 still remains a solution for
a < 0. Of course, by looking at Fig. 1.13.2 we immediately recognize that the
position q = 0 is now unstable. However, in many cases of practical interest we


may not invoke the existence of a potential curve as drawn in Fig. 1.13.2. Rather,
we must resort to another approach, namely linear stability analysis. To this end
a small time-dependent deviation u is introduced so that we write the solution q
of (1.13.1) as

q = q_0 + u = u .   (1.13.2)

Inserting (1.13.2) into (1.13.1) and keeping only linear terms, we find the
equation (for a < 0)

\dot{u} = |a| u ,   (1.13.3)

which gives the solution

u(t) = u(0) e^{|a| t} .   (1.13.4)

Because |a| = −a > 0, u(t) grows exponentially. This indicates that the state
q₀ = 0 is unstable. In Chaps. 2, 3 linear stability analysis will be quite generally
presented. In particular, we shall study not only the case in which a constant qo
becomes unstable, but also the case in which motions on a limit cycle or on a
torus become unstable. The latter problem leads to the rather strange country of
quasiperiodic motions where still a great many discoveries can be made, to which
this book has made some contribution. After having performed the stability ana-
lysis we ask the question to which new states the system will go. In this context
two concepts are central to synergetics, namely the concept of the order para-
meter and that of slaving. Consider to this end the following two differential
equations
\dot{q}_1 = \lambda_1 q_1 - q_1 q_2 ,   (1.13.5)

\dot{q}_2 = -\lambda_2 q_2 + q_1^2 ,   (1.13.6)

which may occur in a number of fields, for instance, chemistry. Equation


(1.13.5) describes the autocatalytic generation of a concentration q₁ of a chemical
1 by λ₁q₁ and the decay of the molecules of 1, due to their interaction with a dif-
ferent kind of molecules with concentration q₂, by −q₁q₂. Equation (1.13.6)
describes by its first term the spontaneous decay of the molecules q₂ and their
generation by a bimolecular process from q₁. Of course, for a mathematical
treatment these interpretations are of no importance at all and we rather focus
our attention on the mathematical features. Let us assume that λ₁ is very small or
slightly positive. In such a case q₁ will change very slowly, provided q₁ and q₂ are
small quantities (which allows us to neglect the quadratic term in a first approxi-
mation). According to (1.13.6), q₂ is driven by q₁. But because q₁ changes very
slowly, we may expect that q₂ changes very slowly, too. If λ₂ is positive and much
bigger than λ₁, we may neglect \dot{q}_2 compared to λ₂q₂. This result was deduced still
more explicitly in [1]. Putting

\dot{q}_2 \approx 0 ,   (1.13.7)


we may immediately solve (1.13.6) by means of

q_2 = q_1^2 / \lambda_2 .   (1.13.8)
This approach, which is often called the adiabatic approximation, allows us
to express q₂ explicitly by q₁. Or, in other words, q₂ is slaved by q₁. When we are
dealing with very many variables which are slaved by q₁, we may reduce a com-
plex problem quite considerably. Instead of very many equations for the q's we
need to consider only a single equation for q₁, and then we may express all other
q's by q₁ according to the slaving principle. We shall call such a q₁ an order
parameter. Of course, in reality the situation may be more difficult. Then
(1.13.5, 6) must be replaced by equations which are much more involved. They
may depend on time-dependent coefficients and they may contain fluctuating
forces. Furthermore, (1.13.7) and therefore (1.13.8) are just approximations.
Therefore it will be an important task to devise a general procedure by which one
may express q₂ by q₁. We shall show in Chap. 7 that it is indeed possible to
express q₂(t) by q₁(t) at the same time t, and an explicit procedure will be devised
to find the function q₂(t) = f(q₁(t)) explicitly for a large class of stochastic non-
linear partial differential equations. There is a most important internal relation
between the loss of linear stability, the occurrence of order parameters and the
validity of the slaving principle. When we change control parameters, a system
may suffer a loss of linear stability. As can be seen from (1.13.5, 6), in such a case
Re{λ₁} changes its sign, which means that it becomes very small. In such a situa-
tion the slaving principle applies. Thus we may expect that at points where struc-
tural changes occur, the behavior of a system is governed by the order parameters
alone. As we shall see, this connection between the three concepts allows us to
establish far-reaching analogies between the behavior of quite different systems
when macroscopic changes occur.
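The slaving relation (1.13.8) can be illustrated numerically. In the sketch below (plain Python; the parameter values, step size, and names are our choices) equations (1.13.5, 6) are integrated with λ₂ much larger than λ₁; after the fast transient has died out, q₂(t) should stay close to the adiabatic prediction q₁²(t)/λ₂.

```python
def simulate(lam1=0.05, lam2=5.0, q1=0.1, q2=0.0, dt=1e-4, steps=200000):
    """Euler integration of q1' = lam1*q1 - q1*q2, q2' = -lam2*q2 + q1**2."""
    for _ in range(steps):
        dq1 = lam1 * q1 - q1 * q2
        dq2 = -lam2 * q2 + q1 * q1
        q1 += dt * dq1
        q2 += dt * dq2
    return q1, q2

q1, q2 = simulate()
slaved = q1 * q1 / 5.0  # adiabatic prediction q2 = q1**2 / lam2
```

The relative deviation of q₂ from the adiabatic prediction is of order λ₁/λ₂, i.e., a few percent for the values chosen here.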

1.14 Qualitative Changes: Typical Phenomena

In this section we want to give a survey on qualitative changes caused by instabili-


ties. There are wide classes of systems which fall into the categories discussed
below. To get an overview we shall first neglect the impact of noise. Let us start
from equations of the form (1.11.13)

q =N(q, a, V,x, t). (1.14.1)

We assume that N does not explicitly depend on time, i. e., we are dealing with
autonomous systems. Ignoring any spatial dependence, we start with equations
of the form

q(t) = N(q(t), a). (1.14.2)

We assume that for a certain value (or a range of values) of the control parameter
a, a stable solution qo(t, a) of (1.14.2) exists. To study the stability of this solu-


tion when a is changed, we make the hypothesis


q = q_0 + w(t) ,   (1.14.3)

where wet) is assumed to be small. Inserting (1.14.3) into (1.14.2) and keeping
only terms linear in w, we arrive at equations of the form

\dot{w} = L w ,   (1.14.4)

where L depends on qo. In Sect. 1.14.6 it will be shown explicitly how L is con-
nected with N. For the moment it suffices to know that L bears the same time
dependence as qo.
Let us assume that qo is time independent. To solve (1.14.4) we make the
hypothesis
w(t) = e^{\lambda t} v ,   v a constant vector,   (1.14.5)

which transforms (1.14.4) into


L v = \lambda v ,   (1.14.6)

which is a linear algebraic equation for the constant vector v and the eigenvalue
λ.
To elucidate the essential features of the general approach, let us assume that
the matrix L is of finite dimension n. Then there are at most n different eigen-
values λ_j, which in general depend on the control parameter a. An instability
occurs if at least one eigenvalue acquires a nonnegative real part.
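For a two-dimensional L the eigenvalues of (1.14.6) follow directly from the characteristic polynomial, so the instability criterion can be checked explicitly. A sketch in plain Python; the matrix family L(a) below is a hypothetical example of ours, not one taken from the text.

```python
import cmath

def eigvals_2x2(a11, a12, a21, a22):
    """Eigenvalues of L = [[a11, a12], [a21, a22]] from
    lambda**2 - tr(L)*lambda + det(L) = 0."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

def unstable(a):
    """Instability criterion for the (hypothetical) family
    L(a) = [[a, -1], [1, -1]]: at least one eigenvalue with Re >= 0."""
    return any(lam.real >= 0.0 for lam in eigvals_2x2(a, -1.0, 1.0, -1.0))

# raising the control parameter a eventually makes the fixed point unstable
```

For this family the fixed point is linearly stable for small a (both real parts negative) and loses stability as a is increased.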

1.14.1 Bifurcation from One Node (or Focus) into Two Nodes (or Foci)
Let us treat the case in which Re{λ_j} ≥ 0 just for one j, say j = 1, and Re{λ_j} < 0
for all other j. This example illustrates some of the essential features of our
approach, an overview of which will be presented later. Since our final goal is to
solve the fully nonlinear equation (1.14.2), we must make a suitable hypothesis
for its solution q(t). To this end we represent q(t) in the following form

q(t) = q_0 + \sum_j \xi_j(t) v_j ,   (1.14.7)

where v_j are the solutions of (1.14.6), while ξ_j are still unknown time-dependent
coefficients. Inserting (1.14.7) into (1.14.2), we may find after some manipula-
tions (whose explanation is not important here, cf. Chap. 8) equations for ξ_j. Let
us take for illustration just j = 1, 2. The corresponding equations read

\dot{\xi}_1 = \lambda_1 \xi_1 + N_1(\xi_1, \xi_2) ,   (1.14.8)

\dot{\xi}_2 = \lambda_2 \xi_2 + N_2(\xi_1, \xi_2) .   (1.14.9)

Here λ₁, λ₂ are the eigenvalues of (1.14.6), while N_j are nonlinear functions of ξ₁,

ξ₂, which start at least with terms quadratic (or bilinear) in ξ₁, ξ₂. Now remember
that we are close to a control parameter value a where the system loses linear
stability, i.e., where Re{λ₁} changes sign. But this means that |λ₁| ≪ |λ₂|, so that
the slaving principle applies. Therefore we may express ξ₂ by ξ₁, ξ₂ = f(ξ₁), and
need solve only a single equation of the form

\dot{\xi}_1 = \lambda_1 \xi_1 + N_1(\xi_1, f(\xi_1))   (1.14.10)

\equiv \lambda_1 \xi_1 + \tilde{N}_1(\xi_1) .   (1.14.11)

Now consider a small surrounding of ξ₁ = 0, in which we may approximate Ñ₁ by
a polynomial of which only the leading terms need to be kept for small enough ξ₁. If
the leading term of Ñ₁ reads −βξ₁³, we obtain as the order parameter equation
just

\dot{\xi}_1 = \lambda_1 \xi_1 - \beta \xi_1^3 .   (1.14.12)

But when confining ourselves to real λ₁ this is precisely our former equation (1.13.1),
which describes the sliding motion of a ball in a vase with one valley (λ₁ < 0) or
two valleys (λ₁ > 0). Therefore the single node present for λ₁ < 0 is replaced by
two nodes for λ₁ > 0 (Figs. 1.13.5 and 6). Or, in other words, the single node
bifurcates into two nodes (Fig. 1.14.1). It is worth mentioning that (1.14.12) not
only describes the new equilibrium positions, but also the relaxation of the system
into these positions and thus allows their stability to be verified. This example
may suffice here to give the reader a feeling how the nonlinear equation (1.14.2)
can be solved and what kind of qualitative change occurs in this case.

Fig. 1.14.1. Two ways of representing the bifurcation of a stable node into two stable nodes. In the upper part the two flow diagrams are plotted in the same plane. In the lower part the control parameter λ is plotted along the abscissa and the planes representing the flow diagrams corresponding to the upper part of the figure are shown on individual planes perpendicular to the λ axis

From now on this section will deal with classification and qualitative
description of the phenomena at instability points. The detailed mathematical
approaches for their adequate treatment will be presented in later chapters. For
instance, in Sect. 8.3 we shall discuss more complicated problems in which
several eigenvalues, which are real, become positive.

1.14.2 Bifurcation from a Focus into a Limit Cycle (Hopf Bifurcation)

A famous example of bifurcation is the Hopf bifurcation. In this case two com-
plex conjugate eigenvalues


Fig. 1.14.2. Two ways of representing the


bifurcation of a stable focus into a stable
limit cycle. The representation technique is
the same as in Fig. 1.14.1

λ₁ = λ′ + iω ,   λ₂ = λ′ − iω ,   (1.14.13)

where λ′, ω are real, ω ≠ 0, cross the imaginary axis so that λ′ ≥ 0. In such a case an
oscillation sets in and the originally stable focus bifurcates into a limit cycle
(Fig. 1.14.2). At this moment a general remark should be made. Loss of linear
stability does not guarantee that the newly evolving states are stable. Rather, we
have to devise methods which allow us to check the stability of the new solutions
explicitly. Our approach, discussed later in this book, will make this stability
directly evident.
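The Hopf scenario can be made concrete with the standard normal form ż = (λ′ + iω)z − |z|²z for a complex amplitude z (our model choice, not an equation of this section): for λ′ < 0 the focus is stable and |z| → 0, while for λ′ > 0 the amplitude relaxes to the limit-cycle radius √λ′. A sketch in plain Python:

```python
def hopf_amplitude(lam, omega=1.0, z=0.1 + 0.0j, dt=1e-3, steps=100000):
    """Euler integration of z' = (lam + i*omega)*z - |z|**2 * z;
    returns the final amplitude |z|."""
    for _ in range(steps):
        z = z + dt * ((lam + 1j * omega) * z - abs(z) ** 2 * z)
    return abs(z)

# lam < 0: amplitude decays to zero (stable focus)
# lam > 0: amplitude settles near sqrt(lam) (stable limit cycle)
```

The newly evolving limit cycle is stable here because the cubic term saturates the growth, which is just the situation sketched in Fig. 1.14.2.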

1.14.3 Bifurcations from a Limit Cycle


In a number of realistic cases a further change of the control parameter can cause
an instability of a limit cycle. This requires an extension of the linear stability
analysis by means of (1.14.5), because the motion on the limit cycle is described
by a time-dependent qo(t). Therefore L of (1.14.4) becomes a function of t which
is periodic. In such a case we may again study the stability by means of the
Floquet exponents λ occurring in (1.14.5), using the results derived in Sect. 2.7.
If a single real λ becomes positive, the old limit cycle may split into new limit
cycles (Figs. 1.14.3 and 4). If, on the other hand, a complex eigenvalue (1.14.13)
acquires a positive real part, a new limit cycle is superimposed on the old one.

Fig. 1.14.3. Bifurcation of a limit cycle in the plane into two limit cycles in the same plane. The old limit cycle, which has become unstable, is still represented as a dashed line on the right-hand side of this figure
Fig. 1.14.4. The temporal behavior of a variable q₁(t) of the limit cycle before bifurcation (left-hand side) and after bifurcation (right-hand side). The variables q₁ belonging to the new stable limit cycles are shown as solid and dash-dotted lines, respectively. The unstable limit cycle is represented by a dashed line


Fig. 1.14.5. Bifurcation of a limit cycle in two dimensions to a limit cycle in three dimensions. Depending on the frequency of rotation along and perpendicular to the dashed line, closed or unclosed orbits may evolve. In the case of a closed orbit, again a new limit cycle arises, whereas in the other case the trajectory fills a torus
Fig. 1.14.6. The temporal evolution of q₁(t) of a limit cycle before bifurcation (left-hand side) and after bifurcation (right-hand side)

This new motion of the solution vector q(t) can be visualized as that of a motion
on a torus (Figs. 1.14.5 and 6). In other words, we are now dealing with a quasi-
periodic motion. When a limit cycle becomes unstable, other phenomena may
occur also. The limit cycle can be replaced by another one where the system needs
twice the time to return to its original state or, to put it differently, the period has
doubled, or a subharmonic is generated. There are a number of systems known,
ranging from fluid dynamics to electronic systems, which pass through a
hierarchy of subsequent period doublings when the control parameter is
changed. Figures 1.14.7 -12 survey some typical results.
The phenomenon of period doubling or, in other words, of subharmonic
generation, has been known for a long time in electronics. For instance, a
number of electronic circuits can be described by the Duffing equation

\ddot{x} + k\dot{x} + x^3 = A \cos t ,   (1.14.14)

which describes the response of a nonlinear oscillator to a periodic driving force.
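A convenient way to detect period doubling numerically is a stroboscopic (Poincaré) sampling of x(t) once per driving period 2π: a period-1 limit cycle yields a single repeating sample value, a doubled cycle two alternating values, and so on. A sketch (plain Python with a fourth-order Runge–Kutta step; step size, run length, and all names are our choices):

```python
import math

def duffing_rhs(t, x, v, k, A):
    """x'' + k*x' + x**3 = A*cos(t), written as a first-order system."""
    return v, A * math.cos(t) - k * v - x ** 3

def rk4(t, x, v, k, A, dt):
    k1x, k1v = duffing_rhs(t, x, v, k, A)
    k2x, k2v = duffing_rhs(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v, k, A)
    k3x, k3v = duffing_rhs(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v, k, A)
    k4x, k4v = duffing_rhs(t + dt, x + dt * k3x, v + dt * k3v, k, A)
    return (x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

def poincare(k=0.35, A=6.6, periods=200, steps_per_period=200):
    """Stroboscopic samples of x at integer multiples of the period 2*pi."""
    dt = 2.0 * math.pi / steps_per_period
    t, x, v = 0.0, 0.0, 0.0
    samples = []
    for _ in range(periods):
        for _ in range(steps_per_period):
            x, v = rk4(t, x, v, k, A, dt)
            t += dt
        samples.append(x)
    return samples

samples = poincare()  # k = 0.35, A = 6.6, cf. Fig. 1.14.7
```

Comparing the tails of `poincare(A=6.6)` and `poincare(A=8.0)` should reproduce the single and doubled periods of Figs. 1.14.7, 8; we make no quantitative claim here beyond boundedness of the motion.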

Fig. 1.14.7. Projection (from the q₁, q₂, t space) onto the q₁, q₂ plane of a trajectory of the Duffing equation ẍ + kẋ + x³ = A cos t, where q₁ ≡ x, q₂ ≡ ẋ. The control parameter values are fixed at k = 0.35, A = 6.6. In this case a limit cycle occurs
Fig. 1.14.8. Solution of the same equation as in Fig. 1.14.7 but with k = 0.35, A = 8.0. The period of evolution is now twice as large as that of Fig. 1.14.7 but again a closed orbit occurs (period doubling)


Fig. 1.14.9. Same equation as in Figs. 1.14.7 and 8 with k = 0.35, A = 8.5. The period is 4 times as large as that of the original limit cycle of Fig. 1.14.7
Fig. 1.14.10. Temporal evolution of coordinate q₁ belonging to the case of Fig. 1.14.7
Fig. 1.14.11. Temporal evolution of the coordinate q₁ after period doubling has occurred. Note that two subsequent minima have different depths
Fig. 1.14.12. Temporal evolution of q₁ belonging to Fig. 1.14.9. Close inspection of the lower minima reveals that the cycle is repeated only after a time 4 times as big as in Fig. 1.14.10

Even this still rather simple equation describes a variety of phenomena including
period doubling and tripling. But other subharmonics may also be present. A
whole sequence of period doublings or triplings may occur, too. It seems that
here we are just at the beginning of a new development, which allows us to study
not only one or a few subsequent bifurcations, but a whole hierarchy. A warning
seems to be necessary here. While some classes of equations (certain "discrete
maps", cf. Sect. 1.17) indicate a complete sequence of period doubling, real
systems may be more complicated allowing, e. g., both for doubling and tripling
sequences or even for mixtures of them.

1.14.4 Bifurcations from a Torus to Other Tori


We have seen above that a limit cycle may bifurcate into a torus. Quite recently
the bifurcation of a torus into other tori of the same dimension or of higher


dimensions has been studied. Here quite peculiar difficulties arise. The mathe-
matical analysis shows that a rather strange condition on the relative irrationality
of the involved basic frequencies ω₁, ω₂, … of the system plays a crucial role.
Many later sections will be devoted to this kind of problem. Some aspects of this
problem are known in celestial mechanics, where the difficulties could be
overcome only about one or two decades ago. While in celestial mechanics
scientists deal with Hamiltonian, i. e., nondissipative, systems and are concerned
with the stability of motion, here we have to be concerned with the still more
complicated problem of dissipative, pumped systems, so that we shall deal in
particular with qualitative changes of macroscopic properties (in particular the
bifurcation from a torus into other tori).

1.14.5 Chaotic Attractors


When the motion on a torus becomes unstable due to a change of control para-
meter and if the specific conditions on the irrationality of the frequencies are not
fulfilled, then quite different things may happen to the flow q(t). A torus may
again collapse into a limit cycle, i. e., one or several frequencies may lock
together (Fig. 1.14.13). A large class of phenomena, in the focus of present
research, concerns irregular motions, i. e., the "chaotic attractors" briefly men-
tioned above.
Fig. 1.14.13. How the power spectrum I(ω) reveals frequency locking. (Left-hand side) In the unlocked state the system oscillates at two fundamental frequencies ω₁ and ω₂. (Right-hand side) The power spectrum in the case of a locked state. The two former frequencies ω₁ and ω₂ (dashed lines) have disappeared and are replaced by a single line at frequency ω₀

The results of all these studies are of great importance for many problems in
the natural sciences and other fields, because "motion on a torus" means for a
concrete system that it executes a motion at several fundamental frequencies and
their suitable linear combinations. In many cases such a motion is caused by a
system of nonlinear oscillators which occur in nature and in technical devices
quite frequently. It is then important to know in which way such a system
changes its behavior if a control parameter is changed.

1.14.6 Lyapunov Exponents *


As we have seen in the preceding sections, there may be quite different kinds of
attractors, such as a stable focus, a limit cycle, a torus, or finally a chaotic attractor. It is therefore highly desirable to develop criteria which allow us to distinguish between different attractors. Such a criterion is provided by the Lyapunov exponents, which we are going to explain now.
Let us consider the simplest example, namely one variable which obeys the
nonlinear differential equation

q̇ = N(q).   (1.14.15)

In this case the only attractor possible is a stable fixed point (a "one-dimensional node"), and the "trajectory" of this attractor is just a constant, q = q₀, denoting the position of that fixed point. In order to prove the stability of this point, we make the linear stability analysis introduced in Sect. 1.13. To this end we insert

q(t) = q₀ + δq(t)   (1.14.16)

into (1.14.15), linearize this equation with respect to δq, and obtain

(d/dt) δq = L δq,   (1.14.17)

where L = (∂N/∂q)|q=q₀ is a constant.
The solution of (1.14.17) reads, of course,

δq(t) = δq(0) e^{Lt}.   (1.14.18)

If L is negative, the fixed point is stable. In this trivial example, we can directly read off L from (1.14.18). But in more complicated cases to be discussed below, only a computer solution might be available. But even in such a case, we may derive L from (1.14.18) by a simple prescription. Namely, we form

(1/t) ln |δq(t)|

and take the limit t → ∞. Then, clearly,

L = lim_{t→∞} (1/t) ln |δq(t)|.   (1.14.19)
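This prescription is readily tested on a computer. The short program below (Python; the example system N(q) = q − q³ with its stable fixed point q₀ = 1, as well as all step sizes, are our own illustrative choices, not taken from the text) integrates the linearized equation (1.14.17) by the Euler method and evaluates (1/t) ln |δq(t)| at a large but finite t. For this system L = (∂N/∂q)|q=q₀ = −2, and the finite-time estimate approaches that value with a deviation decaying like 1/t.

```python
import math

def dN(q):
    # derivative of the illustrative system N(q) = q - q**3
    return 1.0 - 3.0 * q * q

q0 = 1.0                 # stable fixed point of N, since N(1) = 0
L_exact = dN(q0)         # = -2

dq = 1.0e-3              # initial displacement delta-q(0)
dt, T = 1.0e-3, 200.0
for _ in range(int(T / dt)):
    dq += dt * dN(q0) * dq     # Euler step of d(dq)/dt = L dq, Eq. (1.14.17)

L_est = math.log(abs(dq)) / T  # prescription (1.14.19) at finite t
print(L_est)                   # close to -2; the ln|dq(0)|/t offset decays as 1/t
```

Note that the linearized equation is integrated here as its own variable; integrating the full nonlinear equation instead would let q − q₀ fall below machine resolution long before the limit is approached.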

The concept of the Lyapunov exponent is a generalization of (1.14.19) in two ways:
1) One admits trajectories in a multi-dimensional space, so that q(t) is the corresponding vector which moves along its trajectory as time t elapses.
2) One tests the behavior of the system in the neighborhood of the trajectory q₀(t) under consideration (which may, in particular, belong to an attractor). In analogy to (1.14.16) we put

q(t) = q₀(t) + δq(t)   (1.14.20)


so that δq(t) "tests" in which way the neighboring trajectory q(t) behaves, i.e., whether it approaches q₀(t) or departs from it. In order to determine δq(t) we insert (1.14.20) into the nonlinear equation

q̇(t) = N(q(t))   (1.14.21)

(of which q₀ is a solution), and linearize N with respect to δq(t). Writing (1.14.21) in components, we readily obtain

δq̇_j = Σ_k (∂N_j/∂q_k)|q=q₀(t) δq_k.   (1.14.22)

This is a set of linear differential equations with, in general, time-dependent coefficients (∂N_j/∂q_k).
In generalization of (1.14.19) one is tempted to define the Lyapunov exponents by

λ = lim_{t→∞} (1/t) ln |δq(t)|.   (1.14.23)

This definition, though quite often used in the scientific literature, is premature, however, because the limit need not exist. Consider as an example

δq(t) = e^{λ₁t} sin²(ωt/2) + e^{λ₂t} cos²(ωt/2).   (1.14.24)

If we choose, in (1.14.23), t = t_n = 2πn/ω, n being an integer, so that sin(ωt/2) vanishes, δq behaves like exp(λ₂t_n) and (1.14.23) yields λ = λ₂. If, on the other hand, we put t = t'_n = 2π(n + 1/2)/ω, δq behaves like exp(λ₁t'_n) and (1.14.23) yields λ = λ₁.

Clearly, if λ₁ ≠ λ₂, the limit (1.14.23) does not exist. Therefore one has to refine the definition of the Lyapunov exponent. Roughly speaking, one wishes to select the biggest rate λ, and one therefore replaces "lim" in (1.14.23) by the "limes superior", or in short lim sup, so that (1.14.23) is replaced by

λ = lim sup_{t→∞} (1/t) ln |δq(t)|.   (1.14.25)

We shall give the precise definition of lim sup in Chap. 2, where we shall present a theorem on the existence of Lyapunov exponents, too. Depending on the different initial values of δq at t = t₀, different Lyapunov exponents may exist, but not more than m different ones, if m is the dimension of the vector space of N (or q). We are now able to present the criterion announced above, which helps us to distinguish between different kinds of attractors.


In one dimension, there are only stable fixed points, for which the Lyapunov exponents λ are negative (−). In two dimensions, the only two possible classes of attractors are stable fixed points or limit cycles, as is proven rigorously in mathematics. In the case of a stable fixed point (focus) the two Lyapunov exponents (λ₁, λ₂) (which may coincide) are negative (−, −). In the case of a stable limit cycle, the Lyapunov exponent belonging to a motion δq transversal to the limit cycle q₀(t) is negative (stability!), whereas the Lyapunov exponent belonging to δq in the tangential direction vanishes, as we shall demonstrate in a later section. Therefore (λ₁, λ₂) = (−, 0). There may be a "pathological" case, for which (λ₁, λ₂) = (−, 0) but no limit cycle is present, namely if there is a line of fixed points.
Finally we discuss typical cases in three dimensions. In each case we assume an attractor q₀, i.e., |q₀| remains bounded for t → ∞.

(−, −, −)  stable focus (fixed point)

(−, −, 0)  stable limit cycle.

Neighboring trajectories of the limit cycle can approach the limit cycle from two linearly independent directions transversal to the limit cycle, so that (λ₁, λ₂) = (−, −), whereas the third Lyapunov exponent, corresponding to a shift of the trajectory in the tangential direction, is equal to zero.

(−, 0, 0)  stable torus.

The discussion is similar to the case of the limit cycle. (There are still some subtleties analogous to the "pathological" case just mentioned.)
Chaos may occur if one Lyapunov exponent becomes positive. But at any rate, some more discussion is needed. For instance, (λ₁, λ₂, λ₃) = (+, 0, 0) may mean we are dealing with an unstable torus (i.e., no attractor). If an attractor possesses the exponents (λ₁, λ₂, λ₃) = (+, 0, −), it is considered a chaotic attractor; (λ₁, λ₂, λ₃) = (+, +, 0) may mean an unstable limit cycle (i.e., no attractor), etc. Because in a chaotic attractor at least one Lyapunov exponent is positive, neighboring trajectories depart very quickly from each other. But since neighboring trajectories stem from initial conditions which differ only slightly, we recognize that the phenomena described by a chaotic attractor depend sensitively on the initial conditions. It might be noteworthy that the research on Lyapunov exponents - both what they mean for attractors and how they can be determined - is still under way.
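This sensitive dependence is easily made visible with a few lines of code. In the following sketch (Python; the Lorenz model with its standard parameter values serves as a well-known chaotic example of our own choosing, and all step sizes are arbitrary), two trajectories whose initial points differ by 10⁻⁹ are integrated side by side; after 15 time units the tiny initial separation has grown by many orders of magnitude.

```python
def lorenz(s, sigma=10.0, r=28.0, b=8.0/3.0):
    # the Lorenz equations, a standard chaotic example
    x, y, z = s
    return (sigma * (y - x), x * (r - z) - y, x * y - b * z)

def rk4_step(f, s, h):
    # one classical Runge-Kutta step for s' = f(s)
    k1 = f(s)
    k2 = f(tuple(x + 0.5 * h * k for x, k in zip(s, k1)))
    k3 = f(tuple(x + 0.5 * h * k for x, k in zip(s, k2)))
    k4 = f(tuple(x + h * k for x, k in zip(s, k3)))
    return tuple(x + h * (a + 2.0 * b + 2.0 * c + d) / 6.0
                 for x, a, b, c, d in zip(s, k1, k2, k3, k4))

def dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

h = 0.005
s1 = (1.0, 1.0, 20.0)
for _ in range(2000):              # transient: settle onto the attractor
    s1 = rk4_step(lorenz, s1, h)
s2 = (s1[0] + 1e-9, s1[1], s1[2])  # almost identical initial condition

d0 = dist(s1, s2)
for _ in range(3000):              # 15 time units side by side
    s1 = rk4_step(lorenz, s1, h)
    s2 = rk4_step(lorenz, s2, h)
d1 = dist(s1, s2)
print(d0, d1)                      # the initial difference is hugely amplified
```

The average logarithmic rate of this divergence is precisely what the largest Lyapunov exponent measures.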
In conclusion we mention the following useful theorem: if q(t) is a trajectory which remains in a bounded region (e.g., the trajectory of an attractor) and if it does not terminate at a fixed point, at least one of its Lyapunov exponents vanishes. We shall present a detailed formulation of this theorem and its proof in Sect. 2.4.


1.15 The Impact of Fluctuations (Noise).


Nonequilibrium Phase Transitions

So far we have discussed a number of typical phenomena neglecting noise, i. e.,


the impact of fluctuations on the system. Over the past years it has become more
and more evident that just at critical points, where the system changes its macroscopic behavior, fluctuations play a decisive role. According to fundamental laws
of theoretical physics, whenever dissipation occurs, fluctuations must be present.
Therefore as long as we deal with physical, chemical, biological, mechanical or
electrical systems, fluctuations must not be ignored, at least in systems close to
critical points. For phase transitions of systems in thermal equilibrium, the
adequate treatment of fluctuations had been a long standing problem and could
be solved only recently by renormalization group techniques. In this book we are
concerned with instabilities of physical and chemical systems far from thermal
equilibrium and other systems. Here fluctuations play an at least equally im-
portant role and ask for new approaches. For instance, the slaving principle we
got to know in Sect. 1.13 must be able to take into account fluctuations (cf.
Chap. 7), and the order parameter equations must be solved with the adequate
inclusion of fluctuations (cf. Chap. 10). In short, fluctuations render the phe-
nomena and problems of bifurcations (which are difficult enough) into the still
more complex phenomena and correspondingly more difficult problems of non-
equilibrium phase transitions.
In order to get some insight into the role played by fluctuations we wish to
study some relatively simple examples. Let us consider the transition of a stable
node to two stable nodes and an unstable node which we came across in Figs.
1.13.5 and 6. In the presence of noise, even in the steady state, the representative
point of a system, q(t), is pushed backwards and forwards in a random sequence
all the time. Therefore we can tell only what the probability will be to find the
system's vector q in a certain volume element dV = dq₁ dq₂ ... dq_n. This probability is described by a probability distribution function f(q, t) multiplied by dV.
This function may change its shape dramatically at transition points (Figs.
1.15.1,2). Close to these points the fluctuations of the order parameters become

Fig. 1.15.1. The distribution function f(q) (dashed line) belonging to a node. The solid line represents the potential in which the "particle" with coordinate q moves
Fig. 1.15.2. The distribution function f(q) (dashed line) belonging to two nodes and a saddle point (in one dimension). The solid line represents the potential in which the "particle" moves


particularly large ("critical fluctuations"). When we retreat further from the


transition points a = a₀, the new double peak may become very sharp, indicating
that the system has a high probability to be found in either of the new states.
Another feature of a system subjected to noise is the following. At an initial time
we may find ("measure") the system in a state q = q₀ or within a neighborhood of such a state. But the fluctuations may drive the system away from this state q₀. We then may ask how long it will take for the system to reach another given state q₁ for the first time. Since we are dealing with stochastic processes, such a transition time can be determined only in a statistical sense. This problem is called the
first passage time problem. As a special case the problem arises how to determine
the time which a system needs to go from one maximum of the distribution func-
tion to the other one.
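Such a first passage time is easily estimated by simulation. The following sketch (Python; the double-well force q − q³, the noise strength D, and all other numerical values are our own illustrative choices) integrates the corresponding Langevin equation by the Euler-Maruyama method and records when the "particle", starting in the left well, first reaches the right well.

```python
import math, random

random.seed(7)

def first_passage_time(D=0.25, dt=1e-3, q_start=-1.0, q_target=0.9,
                       t_max=500.0):
    """Time for the noisy overdamped motion dq = (q - q**3) dt + sqrt(2 D dt) xi
    to first reach q_target, starting in the left well at q_start."""
    q, t = q_start, 0.0
    amp = math.sqrt(2.0 * D * dt)          # Euler-Maruyama noise amplitude
    while t < t_max:
        q += (q - q ** 3) * dt + amp * random.gauss(0.0, 1.0)
        t += dt
        if q >= q_target:
            return t
    return t_max                           # cut-off: target not reached

times = [first_passage_time() for _ in range(20)]
mean_t = sum(times) / len(times)
print(mean_t)   # for this noise strength, of the order of tens of time units
```

Averaging over many runs yields the mean first passage time, which grows roughly exponentially with the barrier height divided by the noise strength.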
Similarly, effects typical for the impact of noise occur when a focus becomes
unstable and is replaced by a limit cycle. Writing the newly evolving solution q in the form exp[iωt + iφ(t)] r(t) (ω, φ, r are assumed real), it turns out that the phase φ(t) undergoes a diffusion process, while the amplitude r performs fluctuations around a stable value r₀. Important quantities to be determined are the relaxation time of r as well as the diffusion constant of the phase diffusion. Other
phenomena connected with noise concern the destruction of a frequency-locked
state. Noise may drive a system from time to time away from that state so that
occasionally, instead of the one frequency, two frequencies may occur again. In
later chapters we shall treat the most important aspects of noise in detail, illus-
trating the procedure by characteristic examples and presenting several general
theorems which have turned out to be most useful in practical applications.

1.16 Evolution of Spatial Patterns

So far we have discussed qualitative changes of the temporal behavior of


systems, such as the onset of oscillations, occurrence of oscillations at several frequencies, subharmonic oscillations, etc. In many physical, chemical, and biological systems we must not ignore the spatial dependence of the system's variables.
For instance, we have seen in Sect. 1.2.1 that spatial patterns can arise in fluids.
In the simplest case we start from a spatially homogeneous state. At a certain
control parameter value the homogeneous solution may become unstable, as is
revealed by linear stability analysis. Therefore we have to study linear equations
of the form

ẇ = L w.   (1.16.1)

Since N in (1.14.1) contains spatial derivatives, so does L. To illustrate the


essential features, assume that L is of the form

L = L₀ + D Δ,   (1.16.2)

where L₀ is a constant matrix and D a constant diagonal matrix (Δ denotes the Laplacian). In general, L₀


depends on the control parameter a. By the hypothesis w(x, t) = exp(λt) v(x), we transform (1.16.1) into

(L₀ + D Δ) v(x) = λ v(x),   (1.16.3)

which is of the form of an elliptic partial differential equation. Under given


boundary conditions on v(x), (1.16.3) allows for a set of spatial modes v_j(x) with eigenvalues λ_j. When we change a, one or several of the λ_j's may cross the imaginary axis, i.e., the corresponding modes become unstable. In principle, the
method of solution of the nonlinear equations (1.14.1) is the same as that of Sect.
1.14 in which we neglected any spatial dependence of q on space coordinate x.
We put

q(x, t) = q₀ + Σ_j ξ_j(t) v_j(x).   (1.16.4)

Again we may identify order parameters ξ_j, for which Re{λ_j} ≥ 0. We may apply the slaving principle and establish order parameter equations. Confining (1.16.4) to the leading terms, i.e., the order parameters ξ_j ≡ u_j alone, we obtain the skeleton of the evolving patterns. For instance, if only one Re{λ_j} ≥ 0, the "skeleton" reads

q(x, t) = const + u_j(t) v_j(x).   (1.16.5)

Since in many cases u_j(t) obeys an equation of the form

u̇_j = λ_j u_j − β u_j³  (β > 0),   (1.16.6)

we obtain the following result: while for λ₁ < 0 only the solution u_j = 0 exists, and therefore q(x, t) = const, so that the homogeneous distribution is stable, for λ₁ > 0 a solution u_j ≠ 0 and therefore the spatially inhomogeneous solution (1.16.5) arises. This approach allows us to study the growth of a spatial pattern.
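The growth and saturation of the pattern amplitude can be seen in a few lines. Assuming an order parameter equation of the pitchfork form u̇ = λu − βu³ (our assumed normal form for this illustration; the values of λ and β are arbitrary), the amplitude grows from a tiny seed to √(λ/β) when λ > 0 and decays to zero when λ < 0:

```python
def relax(lam, beta=1.0, u0=0.01, dt=1e-3, T=200.0):
    # Euler integration of the order parameter equation du/dt = lam*u - beta*u**3
    u = u0
    for _ in range(int(T / dt)):
        u += dt * (lam * u - beta * u ** 3)
    return u

print(relax(-1.0))   # homogeneous state stable: u -> 0
print(relax(1.0))    # pattern amplitude saturates: u -> sqrt(lam/beta) = 1
```

Multiplying the saturated amplitude by the unstable mode v_j(x) then gives the "skeleton" of the evolving pattern.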
If several order parameters u_j ≠ 0 are present, the "skeleton" is determined by

q(x, t) = const + Σ_j u_j v_j(x).   (1.16.7)

But in contrast to linear theories the u_j's cannot be chosen arbitrarily. They are determined by certain nonlinear equations derived later, which thus determine the possible "skeletons" and their growth. In a higher approximation the
slaved modes contribute to the spatial pattern also. An important difference
between these transitions and phase transitions of systems in thermal equilibri-
um, where long-range order is also achieved, should be mentioned. With few
exceptions, current phase transition theory is concerned with infinitely extended
media because only here do the singularities of certain thermodynamic functions
(entropy, specific heat, etc.) become apparent. On the other hand, in nonequi-
librium phase transition, i. e., in the transitions considered here, in general the
finite geometry plays a decisive role. The evolving patterns depend on the shape


of the boundaries, for instance square, rectangular, circular, etc., and in addition
the patterns depend on the size of the system. In other words, the evolving
patterns, at least in general, bring in their own length scales which must be
matched with the given geometry.
With respect to applications to astrophysics and biology we have to study the
evolution of patterns not only in the plane or in Euclidean space, but we have to
consider that evolution also on spheres and still more complicated manifolds.
Examples are the first stages of embryonic evolution or the formation of atmos-
pheric patterns on planets, such as Jupiter. Of course, in a more model-like
fashion we may also study infinitely extended media. In this case we find phe-
nomena well known from phase transition theory and we may apply renormali-
zation group techniques to these phenomena.
Within active media such as distributed chemical reactions, interaction of
cells of a tissue, neuronal networks, etc., combined spatio-temporal patterns may
evolve also. The methods described later allow a mathematical treatment of large
classes of these phenomena.

1.17 Discrete Maps. The Poincare Map

In the previous sections a brief outline was given of how to model many processes
by means of evolution equations of the form

q̇(t) = N(q(t)).   (1.17.1)

In order to know the temporal evolution of the system, which is described by


q(t), we have to know q for all times. Since time is continuous, we need to know
a continuous infinity of data! This is evidently an unsolvable task for any human
being. There are several ways out of this difficulty. We may find steady states
for which q is time independent and can be represented by a finite number of
data. Or we may write down an exact or approximate closed form of q, e.g.,
q = q₀ sin(ωt).
The important thing is not so much that we can calculate sin(ωt) for any time t to (at least in principle) any desired degree of accuracy. Rather, we can immediately visualize the function sin(ωt) as a periodic oscillation, or exp(λt) as an ever increasing or decreasing function, depending on the sign of λ.
There is yet another way we can do away with the problem of "infinite" information required. Namely, as in a digital computer, we may consider q at a discrete sequence of times t_n only. The differential equation (1.17.1) is then replaced by a suitable set of difference equations. A still more dramatic reduction is made by the Poincare map. Consider as an example the trajectory in a plane (Fig. 1.17.1). Instead of following up the whole trajectory over all times, we consider only the crossing points with the q₁ axis. Let us denote these points by q₁(n) ≡ x_n. Because (q₁ = x_n, q₂ = 0) for a fixed n may serve as the initial value for the half-trajectory which hits the q₁ axis again at x_{n+1}, it follows that x_{n+1} is uniquely determined by x_n, i.e.,


Fig. 1.17.1. A trajectory hits the q₁ axis at intersection points x₁, x₂, x₃, etc.
Fig. 1.17.2. A plot of x_{n+1} versus x_n for the example of Fig. 1.17.1, represented as a continuous graph
x_{n+1} = f(x_n).   (1.17.2)

Of course, in order to find this connection, one has to integrate (1.17.1) over the time interval from t_n, where x_n is hit, to t_{n+1}, where x_{n+1} is hit. Thus no simplification seems to be reached at all. The following idea has turned out to be most useful. Let us consider (1.17.2) for all n and a fixed function f in terms of a model! Because we no longer need to integrate (1.17.1) over continuous times, we expect to gain more insight into the global behavior of x_n.
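The construction of such a map can be carried out explicitly for a simple example. In the sketch below (Python; the van der Pol oscillator is our own choice of a plane system possessing a limit cycle, and all numerical values are illustrative), the trajectory is integrated and the successive intersection points x_n with the q₁ axis (q₂ = 0, crossed from above) are recorded; the sequence x_n converges to a fixed point of the map f, which corresponds to the limit cycle.

```python
def vdp(s, mu=1.0):
    # van der Pol oscillator: q1' = q2, q2' = mu (1 - q1^2) q2 - q1
    q1, q2 = s
    return (q2, mu * (1.0 - q1 * q1) * q2 - q1)

def rk4_step(f, s, h):
    k1 = f(s)
    k2 = f((s[0] + 0.5 * h * k1[0], s[1] + 0.5 * h * k1[1]))
    k3 = f((s[0] + 0.5 * h * k2[0], s[1] + 0.5 * h * k2[1]))
    k4 = f((s[0] + h * k3[0], s[1] + h * k3[1]))
    return (s[0] + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            s[1] + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

h, s = 0.002, (0.5, 0.0)
crossings = []
prev = s
for _ in range(100000):                    # 200 time units
    s = rk4_step(vdp, s, h)
    if prev[1] > 0.0 and s[1] <= 0.0:      # q2 crosses zero from above
        frac = prev[1] / (prev[1] - s[1])  # linear interpolation of q1
        crossings.append(prev[0] + frac * (s[0] - prev[0]))
    prev = s
print(crossings[:3], crossings[-1])        # x_n converges to a fixed point near 2
```

The list `crossings` is exactly the sequence x₁, x₂, ... of the Poincare map; its convergence to a fixed point signals the stable limit cycle.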
Equation (1.17.2) is called a map because each value of x_n is mapped onto a value of x_{n+1}. This can be expressed in the form of a graph (Fig. 1.17.2). Probably the simplest graph which gives rise to nontrivial results is described by the "logistic" map, which reads

x_{n+1} = a x_n (1 − x_n).   (1.17.3)

This equation is plotted in Fig. 1.17.3. In (1.17.3), a serves as a control parameter. If it runs between 0 and 4, any value of 0 ≤ x_n ≤ 1 is again mapped into (or onto) the interval 0 ≤ x_{n+1} ≤ 1. Starting from an initial value x₀, it is quite simple to calculate the sequence x₁, x₂, ... from (1.17.3), e.g., by means of a pocket computer.
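The same experiment takes only a few lines of code (Python; the parameter values a = 2.5 and a = 3.2 are our own picks, chosen to show a stable fixed point and a period-2 cycle, respectively):

```python
def logistic_tail(a, x0=0.2, transient=1000, keep=4):
    # iterate x_{n+1} = a x_n (1 - x_n), discard the transient, return the tail
    x = x0
    for _ in range(transient):
        x = a * x * (1.0 - x)
    tail = []
    for _ in range(keep):
        x = a * x * (1.0 - x)
        tail.append(x)
    return tail

print(logistic_tail(2.5))   # converges to the fixed point 1 - 1/a = 0.6
print(logistic_tail(3.2))   # hops between two values: period T = 2
```

Varying a between 0 and 4 and watching the tail reproduces the behavior described in the following paragraphs.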

Fig. 1.17.3. A parabolic plot of x_{n+1} versus x_n corresponding to Eq. (1.17.3)


However, a good insight into the behavior of the solutions of (1.17.3) can be gained by a very simple geometric construction explained in Figs. 1.17.4 - 9. The reader is strongly advised to take a piece of paper and to repeat the individual steps. As it transpires, depending on the numerical value of a, different kinds of behavior of x_n result. For a < 3, x_n converges to a fixed point. For 3 < a < a₂, x_n converges to a periodic hopping motion with period T = 2 (Figs. 1.17.10, 11). For a₂ < a < a₃ and n big enough, x_n comes back to its value at n + 4, n + 8, ..., i.e., after a period T = 4; further doublings yield T = 8, ... (Figs. 1.17.12, 13). When we plot the values of x_n reached for n → ∞ for various values of a, Fig. 1.17.14 results.

Fig. 1.17.4. The series of Figs. 1.17.4 - 8 shows how one may derive the sequence x_n (n = 2, 3, 4, ...), once one initial value x₁ is given. Because the parabola represents the mapping function, we have to go from x₁ by a vertical line until we cross the parabola to find x₂
Fig. 1.17.5. The value x₂ found in Fig. 1.17.4 serves now as a new initial value, obtained by projecting x₂ in Fig. 1.17.4 or in the present figure onto the diagonal straight line. Going from the corresponding intersection downward, we find x₂ as a value on the abscissa. In order to obtain the new value x₃ we go vertically upwards to meet the parabola
Fig. 1.17.6. To proceed from x₃ to x₄ we repeat the steps done before, i.e., we first go horizontally from x₃ till we hit the diagonal straight line. Going down vertically we would find the value x₃, but on the way we hit the parabola, and the intersection gives us x₄
Fig. 1.17.7. The construction is now rather obvious, and we merely show how we may proceed from x₄ to x₅


Fig. 1.17.8. When we repeat the above steps many times, we approach more and more a limiting point, which is just the intersection of the parabola with the diagonal straight line
Fig. 1.17.9. The individual values of x₁, x₂, x₃, ... constructed by means of the previous figures. The resulting plot is shown, where x_n approaches more and more the dashed line

Fig. 1.17.10. When we choose the height of the parabola differently (bigger than that of Fig. 1.17.8), a closed curve arises which indicates a hopping of x_n between two values (see Fig. 1.17.11)
Fig. 1.17.11. A periodic hopping of x_n corresponding to the closed trajectory of Fig. 1.17.10

Fig. 1.17.12. The case of period 4
Fig. 1.17.13. The case of period 4. Plot of x_n versus n


Fig. 1.17.14. The set of possible values of x_n (n → ∞) (ordinate) versus the control parameter μ∞ − μ (abscissa) on a logarithmic scale. Here the logistic equation is linearly transformed into the equation x_{n+1} = 1 − μx_n². μ∞ corresponds to the critical value a∞. [After P. Collet, J. P. Eckmann: In Progress in Physics, Vol. 1, ed. by A. Jaffe, D. Ruelle (Birkhäuser, Boston 1980)]

We find a sequence of period doublings which accumulate at a = a∞ = 3.569945672.... The sequence of a_n at which period doubling occurs obeys a simple law,

lim_{n→∞} (a_n − a_{n−1})/(a_{n+1} − a_n) = δ, where δ = 4.6692016609....   (1.17.4)

The number δ is called the Feigenbaum number and is of a "universal" character, because there is a whole class of maps (1.17.2) which give rise to a sequence of period doublings with that number.
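The number δ can be estimated from the logistic map itself. A convenient route (the script below is our own illustration; the bisection brackets are chosen by hand so that each contains exactly one root) uses the "superstable" parameter values s_n, at which the cycle of period 2^{n−1} contains the point x = 1/2; these values accumulate at a∞ with the same ratio δ as the a_n:

```python
def iterate(a, x, k):
    # k-fold iterate of the logistic map x -> a x (1 - x)
    for _ in range(k):
        x = a * x * (1.0 - x)
    return x

def superstable(period, lo, hi):
    # bisection for the root of f_a^period(1/2) = 1/2 inside [lo, hi];
    # each bracket below is chosen to contain exactly one sign change
    g = lambda a: iterate(a, 0.5, period) - 0.5
    glo = g(lo)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(mid) * glo > 0.0:
            lo, glo = mid, g(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

s = [superstable(1, 1.8, 2.2),     # period 1: s_1 = 2
     superstable(2, 3.1, 3.3),     # period 2: s_2 = 1 + sqrt(5)
     superstable(4, 3.45, 3.52),   # period 4
     superstable(8, 3.53, 3.56)]   # period 8

ratios = [(s[i + 1] - s[i]) / (s[i + 2] - s[i + 1]) for i in range(2)]
print(ratios)   # already close to delta = 4.669...
```

Even with only four superstable values, the two ratios obtained here lie within about one percent of δ.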
Though experimental results (Sect. 1.2.1) are in qualitative agreement with the theoretically predicted value (1.17.4), one should not be surprised if the agreement is not too good. First of all, the Feigenbaum number is derived for n → ∞, while experimental data were obtained for n = 2, 3, 4, 5. Furthermore, the control parameter which describes the "real world" and which enters into (1.17.1) need not be proportional to the control parameter a of (1.17.3); rather, the relation can be considerably more complicated.
Beyond the accumulation point a∞, chaos, i.e., an irregular "motion" of x_n, is observed.
Another important feature of many dynamical systems is revealed by the logistic map: beyond a∞, with increasing a, periodic, i.e., regular motion becomes possible in windows of a between chaotic braids. Over the past years quite a number of regularities, often expressed as "scaling properties", have been found, but in the context of this book we shall rather be concerned with the main approaches to cope with discrete maps.
The idea of the Poincare map can be more generally interpreted. For
instance, instead of studying trajectories in the plane as in Fig. 1.17.1, we may
also treat trajectories in an n-dimensional space and study the points where the
trajectories cut through a hypersurface. This can be visualized in three dimensions according to Fig. 1.17.15. In a number of cases of practical interest it turns


out that the crossing points can be connected by a smooth curve (Fig. 1.17.16). In such a case one may stretch the curve into a line and use, e.g., the graph of Fig. 1.17.3 again. In general we are led to treat discrete maps of the form

x_{n+1} = f(x_n),   (1.17.5)

where x_n, n = 1, 2, ..., are vectors in an M-dimensional space. Here f may depend
on one or several control parameters a which allows us to study qualitative
changes of the "discrete dynamics" of xn when a is changed. A wide class of such
qualitative changes can be put in analogy to nonequilibrium phase transitions,
treated by the conventional evolution equations (1.17.1). Thus, we find critical
slowing down and symmetry breaking, applicability of the slaving principle (for
discrete maps), etc. These analogies become still closer when we treat discrete
noisy maps (see below). Equations (1.17.5) enable many applications which so far have been studied only partially. For instance, the state vector x_n can symbolize various spatial patterns.

Fig. 1.17.15. Poincare map belonging to the cross section of a two-dimensional plane with a trajectory in three dimensions
Fig. 1.17.16. The intersection points of Fig. 1.17.15

We finally study how we may formulate Lyapunov exponents in the case of a discrete map. To this end we proceed in close analogy to Sect. 1.14.6, where we introduced the concept of Lyapunov exponents for differential equations.
For a discrete map, the "trajectory" consists of the sequence of points x_n, n = 0, 1, .... We denote the trajectory whose neighborhood we wish to study by x_n⁰. It is assumed to obey (1.17.5). We write the neighboring trajectory as

x_n = x_n⁰ + δx_n   (1.17.6)

and insert it into (1.17.5). We may expand f(x_n⁰ + δx_n) into a power series with respect to the components of δx_n.


Keeping the linear term we obtain

δx_{n+1} = L(x_n⁰) δx_n,   (1.17.7)

where the matrix L is given by

L_{jk}(x_n⁰) = (∂f_j/∂x_k)|x=x_n⁰.   (1.17.8)

Equation (1.17.7) can be solved by iteration (provided x_n⁰ has been determined before). Starting with δx₀, we obtain

δx₁ = L(x₀⁰) δx₀.   (1.17.9)

Expressing δx_n by δx_{n−1}, δx_{n−1} by δx_{n−2}, etc., we readily obtain

δx_n = L(x_{n−1}⁰) L(x_{n−2}⁰) ... L(x₀⁰) δx₀,   (1.17.10)

where the L's are multiplied with each other by the rules of matrix multiplication.
The Lyapunov exponents are defined by

λ = lim sup_{n→∞} (1/n) ln |δx_n|.   (1.17.11)

Depending on the different directions of δx₀, different λ's may result (in multi-dimensional maps).
For a one-dimensional map we may represent λ in a rather explicit way. In this case the L's and δx₀ are mere numbers, so that according to (1.17.10)

|δx_n| = |L(x_{n−1}⁰) ... L(x₀⁰) δx₀|
       = |L(x_{n−1}⁰)| · |L(x_{n−2}⁰)| ... |δx₀|.   (1.17.12)

Inserting this latter expression into (1.17.11) and using (1.17.8), we obtain

λ = lim sup_{n→∞} (1/n) Σ_{m=0}^{n−1} ln |f′(x_m⁰)|,   (1.17.13)

where f′ is the derivative of f(x) with respect to x.


As a nontrivial example we show in Fig. 1.17.17 the Lyapunov exponent of the logistic map (1.17.3) as a function of a. A positive value of the exponent corresponds to chaotic motion, whereas negative values indicate the presence of regular (periodic) behavior.
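Formula (1.17.13) is straightforward to program, and with it Fig. 1.17.17 can be reproduced point by point. In the sketch below (Python; the iteration counts and the guard against ln 0 are our own choices), f′(x) = a(1 − 2x) for the logistic map:

```python
import math

def lyapunov_logistic(a, x0=0.3, transient=1000, n=200000):
    # Lyapunov exponent of the logistic map via Eq. (1.17.13):
    # the orbit average of ln |f'(x)| = ln |a (1 - 2x)|
    x = x0
    for _ in range(transient):
        x = a * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = a * x * (1.0 - x)
        total += math.log(max(abs(a * (1.0 - 2.0 * x)), 1e-300))
    return total / n

print(lyapunov_logistic(2.5))   # ln(1/2) = -0.693: stable fixed point
print(lyapunov_logistic(3.2))   # negative: stable period-2 cycle
print(lyapunov_logistic(4.0))   # ln 2 = +0.693: fully developed chaos
```

Scanning a over [3.4, 4] with this function yields the curve of Fig. 1.17.17.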


Fig. 1.17.17. Lyapunov exponent corresponding to Eq. (1.17.3) versus the parameter a ∈ [3.4, 4]. [After G. Mayer-Kress, H. Haken: J. Stat. Phys. 26, 149 (1981)]

1.18 Discrete Noisy Maps

The impact of fluctuations on the dynamics of a system can be rather easily modelled within the framework of discrete maps. Each new value x_{n+1} is not only determined by f(x_n), but also by some additional fluctuation η_n, so that

x_{n+1} = f(x_n) + η_n.   (1.18.1)

Also more complicated maps can be formulated and treated, e.g., maps for vectors x_n, and for fluctuations which depend on x_n (Chap. 11).
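A minimal illustration of (1.18.1) (Python; the noise strength σ and the clipping of x_n to the interval [0, 1] are our own modelling choices) adds a small Gaussian fluctuation to the logistic map at a parameter value where the deterministic map has a period-2 cycle; the two sharp values of the cycle become two noisy bands:

```python
import random

random.seed(1)

def noisy_logistic(a=3.2, sigma=0.02, x0=0.5, n=2000):
    xs, x = [], x0
    for _ in range(n):
        x = a * x * (1.0 - x) + random.gauss(0.0, sigma)   # Eq. (1.18.1)
        x = min(max(x, 0.0), 1.0)   # keep the state inside [0, 1]
        xs.append(x)
    return xs

xs = noisy_logistic()
even = xs[1000::2]
odd = xs[1001::2]
print(sum(even) / len(even), sum(odd) / len(odd))
# two blurred branches near the deterministic values 0.51 and 0.80
```

For stronger noise the two bands eventually merge, a discrete-map analogue of the noise-induced effects discussed in Sect. 1.15.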

1.19 Pathways to Self-Organization

In all the cases considered in this book the temporal, spatial or spatio-temporal
patterns evolve without being imposed on the system from the outside. We shall
call processes, which lead in this way to patterns, "self-organization". Of course,
in the evolution or functioning of complex systems, such as biological systems, a
whole hierarchy of such self-organizing processes may take place. In a way, in
this book we are considering the building blocks of self-organization. But in con-
trast to other approaches, say by molecular biology which considers individual
molecules and their interaction, we are mostly concerned with the interaction of
many molecules or many subsystems. The kind of self-organization we are
considering here can be caused by different means. We may change the global
impact of the surroundings on the system (as expressed by control parameters).
Self-organization can be caused also by a mere increase of number of
components of a system. Even if we put the same components together, entirely
new behavior can arise on the macroscopic level. Finally, self-organization can
be caused by a sudden change of control parameters when the system tries to
relax to a new state under the new conditions (constraints). This aspect offers us


a very broad view on the evolution of structures, including life in the universe. Because the universe is in a transient state, starting from its fireball, and subject to expansion, ordered structures can occur.
In the context of this book we are interested in rigorous mathematical for-
mulations rather than philosophical discussions. Therefore, we briefly indicate
here how these three kinds of self-organization can be treated mathematically.

1.19.1 Self-Organization Through Change of Control Parameters


This case has been discussed in extenso above. When we slowly change the
impact of the surroundings on a system, at certain critical points the system may
acquire new states of higher order or structure. In particular, spatial patterns
may evolve although the surroundings act entirely homogeneously on the system.

1.19.2 Self-Organization Through Change of Number of Components


Let us start from two uncoupled systems described by their corresponding state
vectors q(l) and q(2) which obey the equations

ti(1) = N(l)(q(1» , and

ti(2) = N(2)(q(2» .

Let us assume that these equations allow stable inactive states described by

q_0^{(j)} = 0 , \quad j = 1, 2 .  (1.19.3)

We introduce a coupling between these two systems and describe it by functions
K^{(j)} so that the Eqs. (1.19.1, 2) are replaced by

\dot{q}^{(1)} = N^{(1)}(q^{(1)}) + a K^{(1)}(q^{(1)}, q^{(2)}) ,  (1.19.4)

\dot{q}^{(2)} = N^{(2)}(q^{(2)}) + a K^{(2)}(q^{(1)}, q^{(2)}) .  (1.19.5)

We now have to deal with the total system 1 + 2 described by the equation

\dot{q} = N(q, a) , where  (1.19.6)

q = \begin{pmatrix} q^{(1)} \\ q^{(2)} \end{pmatrix}  and  (1.19.7)

N(q, a) = \begin{pmatrix} N^{(1)} + a K^{(1)} \\ N^{(2)} + a K^{(2)} \end{pmatrix} .  (1.19.8)

We have introduced a parameter a which ranges from 0 to 1 and which plays the
role of a control parameter. Under suitable but realistic conditions, a change of a
causes an instability of our original solution (1.19.3), and q_0 = 0 is replaced by

q_0 \neq 0 ,  (1.19.9)

which indicates some kind of new patterns or active states. This example shows
that we may treat this kind of self-organization by the same methods we use when
a is treated as a usual control parameter.
Clearly, if even a change of a rather unspecific control parameter a can
cause patterns, this will be all the more the case if control parameters are changed
in a specific way. For instance, in a network the couplings between different
components may be changed differently. Clearly, this opens new vistas in treat-
ing brain functions by means of nonequilibrium phase transitions.
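The mechanism of Sect. 1.19.2 can be illustrated by a minimal sketch (an assumed example, not from the book): two individually stable linear components q1' = -q1, q2' = -q2 are coupled as in (1.19.6) with K^(1) = q2, K^(2) = q1, so the combined Jacobian is [[-1, a], [a, -1]] with eigenvalues -1 ± a, and the inactive state loses stability once the coupling strength a exceeds 1.

```python
# Sketch (not from the book): two individually stable linear systems,
# coupled with strength a. The combined Jacobian [[-1, a], [a, -1]]
# has eigenvalues -1 +/- a, so q = 0 loses stability when the coupling
# strength a (playing the role of a control parameter) exceeds 1.

def max_growth_rate(a):
    """Largest eigenvalue of [[-1, a], [a, -1]] (both eigenvalues are real)."""
    return -1.0 + abs(a)

assert max_growth_rate(0.5) < 0      # weak coupling: q = 0 stays stable
assert max_growth_rate(1.5) > 0      # strong coupling: q = 0 is unstable
```

The coupling strength thus behaves exactly like a usual control parameter, as stated in the text.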

1.19.3 Self-Organization Through Transients


Structures (or patterns) may be formed in a self-organized fashion when the
system passes from an initial disordered (or homogeneous) state to another final
state which we need not specify or which even does not exist. This is most easily
exemplified by a state vector of the form

q(x, t) = u(t) v(x) , (1.19.10)

where v(x) describes some spatial order, and the order parameter equation

\dot{u} = \lambda u .  (1.19.11)

When we change a control parameter a quickly, so that \lambda < 0 is quickly replaced
by \lambda > 0, a transient state vector of the form

q(x, t) = e^{\lambda t} v(x)  (1.19.12)

occurs. It clearly describes some structure, but it does not tend to a new stable
state.
This approach hides a deep philosophical problem (which is inherent in all
cases of self-organization), because to get the solution (1.19.12) started, some
fluctuations must be present. Otherwise the solution u \equiv 0 and thus q \equiv 0 will
persist forever.
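This point can be made concrete with a small numerical sketch (illustrative, not from the book): integrating the order parameter equation (1.19.11) by the Euler method, a state starting exactly at u = 0 stays there forever, while an arbitrarily small fluctuation seed grows exponentially.

```python
# Sketch of Sect. 1.19.3 (illustrative, not from the book): integrate the
# order parameter equation u' = lam*u by the Euler method. Starting exactly
# at u = 0 the solution stays 0 forever; a tiny fluctuation seed lets the
# transient structure q(x,t) = exp(lam*t)*v(x) get started.

def evolve(u0, lam=1.0, dt=0.01, steps=1000):
    u = u0
    for _ in range(steps):
        u += dt * lam * u
    return u

assert evolve(0.0) == 0.0            # u == 0 persists without fluctuations
assert evolve(1e-10) > 1e-8          # a small fluctuation grows exponentially
```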

1.20 How We Shall Proceed

In this book we shall focus our attention on situations where systems undergo
qualitative changes. Therefore the main line of our procedure is prescribed by
1) study of the loss of stability
2) derivation of the slaving principle
3) establishment and solution of the order parameter equations.


To this end we first study the loss of stability. Therefore Chaps. 2 and 3 deal
with linear differential equations. While Chap. 2 contains results well known in
mathematics (perhaps represented here, to some extent, in a new fashion),
Chap. 3 contains largely new results. This chapter might be somewhat more dif-
ficult and can be skipped during a first reading of this book.
Chap. 4 lays the basis for stochastic methods which will be used mainly in
Chap. 10. Chaps. 5 and 6 deal with coupled nonlinear oscillators and study
quasiperiodic motion. Both Chaps. 5 and 6 serve as a preparation for Chap. 8,
especially for its Sects. 8.8 -11. Chap. 6 presents an important theorem due to
Moser. In order not to overload the main text, Moser's proof of his theorem is
postponed to the appendix. Chap. 7 then resumes the main line, initiated by
Chaps. 2 and 3, and deals with the slaving principle (for nonlinear differential

Table 1.20.1. A survey on the relations between chapters. Note that the catchwords
don't agree with the chapter headings

Chaps. 2, 3    stability of motion, linear equations
Chap. 4        stochastic nonlinear equations
Chaps. 5, 6    coupled nonlinear oscillators, quasiperiodic motion
Chap. 7        slaving principle
Chap. 8        order parameter equations without fluctuations, discrete systems
Chap. 9        order parameter equations, continuous media
Chap. 10       order parameter equations containing fluctuations
Chap. 11       discrete noisy maps
Chap. 12       an unsolvable problem
Chap. 13       some epistemology


equations) without and with stochastic forces. This chapter, which contains new
results, is of crucial importance for the following Chaps. 8 and 9, because it is
shown how the number of degrees of freedom can be reduced drastically.
Chaps. 8 and 9 then deal with the main problem: the establishment and solution
of the order parameter equations. Chap. 8 deals with discrete systems while
Chap. 9 is devoted to continuously extended media. Again, a number of results
are presented here for the first time. Chap. 10 treats the impact of fluctuations on
systems at instability points. The subsequent chapter is devoted to some general
approaches to cope with discrete noisy maps and is still in the main line of this
book.
Chap. 12 is out of that line - it illustrates that even seemingly simple ques-
tions put in dynamic systems theory cannot be answered in principle. Finally
Chap. 13 resumes the theme of the introductory chapter - what has synergetics
to do with other disciplines. In a way, this short chapter is a little excursion into
epistemology. In conclusion it is worth mentioning that Chaps. 2 – 6 are also of use
in problems where systems are away from their instability points.

2. Linear Ordinary Differential Equations

In this chapter we shall present a systematic study of the solutions of linear


ordinary differential equations. Such equations continue to play an important
role in many branches of the natural sciences and other fields, e. g., economics,
so that they may be treated here in their own right. On the other hand, we should
not forget that our main objective is to study nonlinear equations, and in the con-
struction of their solutions the solutions of linear equations come in at several
instances.
This chapter is organized as follows. Section 2.1 will be devoted to detailed
discussions of the solutions of different kinds of homogeneous differential equa-
tions. These equations will be distinguished by the time dependence of their coef-
ficients, i. e., constant, periodic, quasiperiodic, or more arbitrary. In Sect. 2.2 we
shall point out how to apply the concept of invariance against group operations
to the first two types of equations. Section 2.3 deals with inhomogeneous dif-
ferential equations. Some general theorems from algebra and from the theory of
linear ordinary differential equations (coupled systems) will be presented in Sect.
2.4. In Sect. 2.5 dual solution spaces are introduced. The general form of solu-
tion for the cases of constant and periodic coefficient matrices will be treated in
Sects. 2.6 and 2.7,8, respectively. Section 2.8 and the beginning of Sect. 2.7 will
deal with aspects of group theory, Sect. 2.8 also including representation theory.
In Sect. 2.9, a perturbation theory will be developed to get explicit solutions for
the case of periodic coefficient matrices.

2.1 Examples of Linear Differential Equations: The Case of a


Single Variable
We consider a variable q which depends on a variable t, i. e., q(t), and assume,
that q is continuously differentiable. We shall interpret t as time, though in
certain applications other interpretations are also possible, for instance as a space
coordinate. If not otherwise stated it is taken that - 00 < t < + 00. We consider
various typical cases of homogeneous first-order differential equations.

2.1.1 Linear Differential Equation with Constant Coefficient


We consider
\dot{q} = a q ,  (2.1.1)

where a is a constant. The solution of this differential equation reads

q(t) = C e^{\lambda t} ,  (2.1.2)

where \lambda = a, as can be immediately checked by inserting (2.1.2) into (2.1.1). The
constant C can be fixed by means of an initial condition, e.g., by the requirement
that for t = 0

q(O) = qo, (2.1.3)

where qo is prescribed. Thus

C = q_0 := q(0) .  (2.1.4)
Then (2.1.2) can be written as

q(t) = q(0)\, e^{\lambda t} , \quad (\lambda = a) .  (2.1.5)

As shown in Sects. 1.14, 16, linear differential equations are important for deter-
mining the stability of solutions of nonlinear equations. Therefore, we shall discuss
here and in the following the time dependence of the solutions (2.1.5) for large
times t. Obviously for t > 0 the asymptotic behavior of (2.1.5) is governed by the
sign of Re{λ}. For Re{λ} > 0, |q| grows exponentially, for Re{λ} = 0, q(t) is a con-
stant, and for Re{λ} < 0, |q| is exponentially damped. Here λ itself is called a
characteristic exponent.
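A quick numerical check (not from the book) that q(t) = q(0) exp(a t) indeed solves (2.1.1), and that the sign of Re{a} decides growth or decay:

```python
import cmath

# Verify numerically that q(t) = q0*exp(a*t) satisfies q' = a*q,
# and that the real part of the characteristic exponent governs
# the asymptotic behavior (illustrative, not from the book).

def q(t, a, q0=1.0):
    return q0 * cmath.exp(a * t)

a = 0.5 + 2.0j
t, h = 1.0, 1e-6
deriv = (q(t + h, a) - q(t - h, a)) / (2 * h)   # central difference
assert abs(deriv - a * q(t, a)) < 1e-4          # q' = a*q (numerically)

assert abs(q(10.0, -0.3 + 1.0j)) < 1.0          # Re{a} < 0: damped
assert abs(q(10.0, +0.3 + 1.0j)) > 1.0          # Re{a} > 0: growing
```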

2.1.2 Linear Differential Equation with Periodic Coefficient


As a further example we consider

\dot{q} = a(t)\, q ,  (2.1.6)

where a(t) is assumed continuous. The solution of (2.1.6) reads

q(t) = q(0) \exp\left[ \int_0^t a(\tau)\, d\tau \right] .  (2.1.7)

We now have to distinguish between different kinds of time dependence of a(t).


If a(t) is periodic and, for instance, continuously differentiable, we may expand
it into a Fourier series of the form

a(t) = c_0 + \sum_{\substack{n = -\infty \\ n \neq 0}}^{\infty} c_n\, e^{i n \omega t} .  (2.1.8)

To study the asymptotic behavior of (2.1.7), we insert (2.1.8) into the integral
occurring in (2.1. 7) and obtain

464
2.1 Examples of Linear Differential Equations: The Case of a Single Variable 63

\int_0^t a(\tau)\, d\tau = c_0 t + \sum_{n \neq 0} \frac{c_n}{i n \omega} \left( e^{i n \omega t} - 1 \right) .  (2.1.9)

With n ≠ 0, the sum in (2.1.9) converges at least as well as the sum in (2.1.8).
Therefore the sum in (2.1.9) again represents a periodic function. Thus the
asymptotic behavior of (2.1.7) is governed by the coefficient Co in (2.1.8).
Depending on whether its real part is positive, zero, or negative, we find
exponential growth, a neutral solution or exponential damping, respectively.
Combining our results (2.1.7 - 9), we may write the solution of (2.1.6) in the
form

q(t) = e^{\lambda t}\, u(t)  (2.1.10)

with the characteristic exponent \lambda = c_0 and

u(t) = q(0) \exp\left[ \sum_{n \neq 0} \frac{c_n}{i n \omega} \left( e^{i n \omega t} - 1 \right) \right] .  (2.1.11)

Because the exponential function of a periodic function is again periodic, u(t) is
a periodic function. We have therefore found the result that the solutions of dif-
ferential equations with periodic coefficients a(t) have the form (2.1.10), where
u(t) is again a periodic function. Since u(t) is bounded, the asymptotic behavior
of |q(t)| is determined by the exponent Re{λ}t, as stated above.
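The form (2.1.10) can be checked on a concrete case (an assumed example, not from the book): for a(t) = c_0 + cos(wt) the exact solution of (2.1.6) is q(t) = q(0) exp(c_0 t + sin(wt)/w), i.e. exp(λt) times the periodic part u(t) = exp(sin(wt)/w).

```python
import math

# For a(t) = c0 + cos(w*t) the exact solution of q' = a(t)*q is
# q(t) = q(0)*exp(c0*t + sin(w*t)/w), which has the Floquet-like form
# q(t) = exp(lam*t)*u(t) with lam = c0 and periodic u(t) = exp(sin(w*t)/w).
# (Illustrative sketch, not from the book.)

c0, w = -0.2, 2.0

def q(t, q0=1.0):
    return q0 * math.exp(c0 * t + math.sin(w * t) / w)

def u(t):
    return q(t) * math.exp(-c0 * t)      # strip the exponential factor

T = 2 * math.pi / w                       # period of a(t)
for t in (0.0, 0.7, 1.3):
    assert abs(u(t + T) - u(t)) < 1e-12   # u(t) is periodic with period T
```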

2.1.3 Linear Differential Equation with Quasiperiodic Coefficient


The reader may be inclined to believe that our list of examples is about to become
boring but the next more complicated example will confront us with an intrinsic
difficulty. Assume that in the differential equation

\dot{q} = a(t)\, q  (2.1.12)

a(t) is quasiperiodic, i. e., we assume that a(t) can be expanded into a multiple
Fourier series

a(t) = c_0 + \sum_{m \neq 0} c_m \exp(i\, m \cdot \omega t) ,  (2.1.13)

where m is an n-dimensional vector whose components are integers. Further, ω is
a vector of the same dimension with components ω_1, ..., ω_n, so that

m \cdot \omega = m_1 \omega_1 + \ldots + m_n \omega_n .  (2.1.14)

We shall assume that (2.1.14) vanishes only if |m| = 0, because otherwise we can
express one or several ω's by the remaining ω's and the actual number of "inde-
pendent" ω's will be smaller [maybe even one, so that (2.1.13) is a periodic func-
tion]. Or, in other words, we exclude ω's which are rational with respect to each
other. The formal solution of (2.1.12) again has the form (2.1.7); it is now appro-
priate to evaluate the integral in the exponent of (2.1.7). If the series in (2.1.13)
converges absolutely we may integrate term by term and thus obtain

\int_0^t a(\tau)\, d\tau = c_0 t + \sum_{m \neq 0} \frac{c_m}{i\, m \cdot \omega} \left[ \exp(i\, m \cdot \omega t) - 1 \right] .  (2.1.15)

In order that the sum in (2.1.15) has the same form as (2.1.13), i.e., that it is a
quasiperiodic function, it ought to be possible to split the sum in (2.1.15) into

\sum_{m \neq 0} \frac{c_m}{i\, m \cdot \omega} \exp(i\, m \cdot \omega t) + \text{const} .  (2.1.16)

However, the time-independent terms of (2.1.15), which are formally equal to

\text{const} = - \sum_{m \neq 0} \frac{c_m}{i\, m \cdot \omega} ,  (2.1.17)

and similarly the first sum in (2.1.16) need not converge. Why is this so? The
reason lies in the fact that because the m's can take negative values there may be
combinations of m's such that

m \cdot \omega \to 0  (2.1.18)

for

|m| \to \infty .  (2.1.19)

[Condition (2.1.18) may be fulfilled even for finite m's if the ratios of the ω's are
rational numbers, in which case m \cdot \omega = 0 for some m = m_0. But this case has
been excluded above by our assumptions on the ω's.] One might think that one
can avoid (2.1.18) if the ω's are irrational with respect to each other. However,
even in such a case it can be shown mathematically that (2.1.18, 19) can be fulfil-
led. Because m·ω occurs in the denominator, the series (2.1.17) need not con-
verge, even if \sum_m |c_m| converges. Thus the question arises under which conditions
the series (2.1.17) still converges. Because c_m and m·ω occur jointly, this condi-
tion concerns both the c_m and the ω's. Loosely speaking, we are looking for c_m
which go sufficiently quickly to zero for |m| → ∞ and for ω's for which
|m·ω| goes sufficiently slowly to zero for |m| → ∞, so that (2.1.17) converges.

Both from the mathematical point of view and that of practical applications, the
condition on the ω's is more interesting, so that we start with it.
Loosely speaking, we are looking for ω's which are "sufficiently irra-
tional" with respect to each other. This condition can be cast into various mathe-
matical forms. A form often used is

466
2.1 Examples of Linear Differential Equations: The Case of a Single Variable 65

|m \cdot \omega| \geq K \|m\|^{-(n+1)} , where  (2.1.20)

\|m\| = |m_1| + \ldots + |m_n| .  (2.1.21)

Here K is a constant. Though for ‖m‖ → ∞ the bound (2.1.20) again goes to 0, it may tend
to 0 sufficiently slowly. We shall call (2.1.20, 21) the Kolmogorov-Arnold-Moser
condition or, in short, the KAM condition. When a real system is given, the ques-
tion arises whether the frequencies occurring meet the KAM condition (2.1.20,
21). While in mathematics this seems to be a reasonable question, it is proble-
matic to decide upon it in practice. Furthermore, since systems are
subject to fluctuations, it is even highly doubtful whether a system will retain its
frequencies such that a KAM condition is fulfilled for all times. Rather, it is
meaningful to ask how probable it is that the ω's fulfill that condition. This is
answered by the following mathematical theorem (which we do not prove here):
In the space ω = (ω_1, ω_2, ..., ω_n) the relative measure of the set of those ω's
which do not satisfy the condition (2.1.20) tends to 0 together with K. Thus for
sufficiently small K most of the ω's satisfy (2.1.20).
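The KAM condition (2.1.20) can be probed numerically (a sketch under assumed parameters, not from the book): estimating K as the minimum of |m·ω|·‖m‖^(n+1) over all small integer vectors m, a "sufficiently irrational" frequency pair such as (1, golden ratio) yields a K well away from zero, while a nearly rational pair yields a tiny one.

```python
import math

# Numerical peek at the KAM condition (2.1.20) for n = 2 (illustrative,
# not from the book): estimate K = min over 0 < ||m|| <= M of
# |m.w| * ||m||^(n+1), with ||m|| = |m1| + |m2| as in (2.1.21).

def kam_constant(w1, w2, M=30, n=2):
    best = float("inf")
    for m1 in range(-M, M + 1):
        for m2 in range(-M, M + 1):
            norm = abs(m1) + abs(m2)
            if 0 < norm <= M:
                best = min(best, abs(m1 * w1 + m2 * w2) * norm ** (n + 1))
    return best

golden = kam_constant(1.0, (1 + math.sqrt(5)) / 2)   # badly approximable pair
near_rational = kam_constant(1.0, 0.5 + 1e-6)        # almost rational pair
assert golden > 100 * near_rational   # golden-ratio pair is far "more irrational"
```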
We now discuss the second problem, namely the convergence rate of the coef-
ficients c_m in (2.1.13). Because simple Fourier series can be handled more easily
than multiple Fourier series of the form (2.1.13), we try to relate (2.1.13) to
simple Fourier series. This is achieved by the following trick. We introduce
auxiliary variables φ_1, ..., φ_n and replace a(t) in (2.1.13) by

a(t, \varphi_1, \varphi_2, \ldots, \varphi_n) = c_0 + \sum_{m \neq 0} c_m \exp(i m_1 \omega_1 \varphi_1 + \ldots + i m_n \omega_n \varphi_n + i\, m \cdot \omega t) ,  (2.1.22)

i.e., by a function of φ_1, ..., φ_n and t. We can take care of t by introducing
\tilde{\varphi}_j = \varphi_j + t. Now we may keep all φ's but one, say φ_j, fixed. This allows us to
apply theorems on simple Fourier series to (2.1.22). We shall use the following
(where we set ω_j φ_j = x). Let

f(x) = \sum_{m = -\infty}^{\infty} a_m\, e^{i m x} ,  (2.1.23)

whose derivatives up to order (h-1) are continuous, and whose h-th derivative is
piecewise continuous; then

|a_m| \leq \frac{C}{|m|^h} .  (2.1.24)

Consider (2.1.22), and assume that its derivatives fulfill the just-mentioned con-
ditions for all φ_j's. Then

|c_m| \leq \frac{C}{|m_1|^h \cdots |m_n|^h} .  (2.1.25)


After these preparations we are able to demonstrate how the KAM condition
works. To this end we study the convergence of (2.1.17). We start with (\sum'
denotes the sum over m ≠ 0)

\left| \sum_m{}' \frac{c_m}{i\, m \cdot \omega} \right| \leq \sum_m{}' \frac{|c_m|}{|m \cdot \omega|} ,  (2.1.26)

or, after use of (2.1.21, 25), we obtain for (2.1.17)

\sum_m{}' \frac{|c_m|}{|m \cdot \omega|} \leq \frac{C}{K} \sum_m{}' \frac{\|m\|^{n+1}}{|m_1|^h \cdots |m_n|^h} .  (2.1.27)

In order to obtain a sufficient condition for the convergence of (2.1.27), we
replace the |m_j|'s in the numerator each time by their biggest value m_max, so that

\sum_m{}' \frac{|c_m|}{|m \cdot \omega|} \leq \frac{C\, n^{n+1}}{K} \sum_m{}' \frac{m_{\max}^{n+1}}{|m_1|^h \cdots |m_n|^h} .  (2.1.28)

According to elementary criteria on convergence, (2.1.28) converges, provided
that h \geq n + 3 (h being an integer). This exercise illustrates how the KAM condi-
tion and the convergence of (2.1.17) are linked.
tion and the convergence of (2.1.17) are linked.
Now let us assume that the c_m's converge so rapidly that, if the KAM condi-
tion (2.1.20) is fulfilled, the first sum in (2.1.16) converges absolutely. Then the
solution of (2.1.12) with the quasiperiodic coefficients (2.1.13) can be written in
the form

q(t) = e^{\lambda t}\, u(t) ,  (2.1.29)

where the characteristic exponent is given by

\lambda = c_0 ,  (2.1.30)

and u(t) is again a quasiperiodic function

u(t) = \tilde{q}(0) \exp\left[ \sum_{m \neq 0} \frac{c_m}{i\, m \cdot \omega} \exp(i\, m \cdot \omega t) \right] ,  (2.1.31)

where

\tilde{q}(0) = q(0) \exp\left( - \sum_{m \neq 0} \frac{c_m}{i\, m \cdot \omega} \right) .  (2.1.32)
Because the series in (2.1.31) converges absolutely, u(t) is bounded. Therefore,
the asymptotic behavior of (2.1.29) is determined by the exponential function
exp(λt). Let us now finally turn to the general case.


2.1.4 Linear Differential Equation with Real Bounded Coefficient


In the differential equation (2.1.6), a(t) is an arbitrary function of time but con-
tinuous and bounded for

0 \leq t < \infty ,  (2.1.33)

so that

|a(t)| \leq B .  (2.1.34)

We assume a(t) real. The general solution reads

q(t) = q(0) \exp\left[ \int_0^t a(\tau)\, d\tau \right] .  (2.1.35)

We wish to study how solution (2.1.35) behaves when t goes to infinity. In partic-
ular, we shall study whether (2.1.35) grows or decays exponentially. As a first
step we form

\ln |q(t)| = \ln |q(0)| + \int_0^t a(\tau)\, d\tau ,  (2.1.36)

so that we have to discuss the behavior of the integral in (2.1.36). To this end we
first explain the notion of the supremum, which is also called the "least upper
bound". If A is a set of real numbers {a}, the supremum of A is the smallest real
number b such that a ≤ b for all a. We write "sup{A}" to denote the supremum.
A similar definition can be given for the infimum or, in other words, for the
greatest lower bound (inf{A}). If A is an infinite set of real numbers, then the
symbol "lim sup{A}" denotes the infimum of all numbers b with the property
that only a finite set of numbers in A exceeds b. In particular, if A is a sequence
{a_n}, then lim sup{A} is usually denoted by lim sup_{n→∞} {a_n}.

These definitions are illustrated in Fig. 2.1.1.

[Fig. 2.1.1. The behavior of a sequence {a_1, a_2, ...} is plotted, where {b} is the set of
all real numbers with the property that only a finite set of the numbers a_1, a_2, ...
exceed b; lim sup{a_n} = inf{b}]

Starting from (2.1.36) we now form


\limsup_{t \to \infty} \left[ \frac{1}{t} \int_0^t a(\tau)\, d\tau \right] .  (2.1.37)

Clearly, we get for the first term on the rhs of (2.1.36)

\frac{1}{t} \ln |q(0)| \to 0 \quad \text{for} \quad t \to \infty .  (2.1.38)

From (2.1.34) we can construct two bounds, namely

\int_0^t a(\tau)\, d\tau \leq \int_0^t B\, d\tau = B t  (2.1.39)

and

\int_0^t a(\tau)\, d\tau \geq - B t .  (2.1.40)

From (2.1.39, 40) it follows that

\limsup_{t \to \infty} \left[ \frac{1}{t} \int_0^t a(\tau)\, d\tau \right] = \lambda  (2.1.41)

exists with

|\lambda| < \infty .  (2.1.42)

Here λ is called the generalized characteristic exponent. It gives us information about
the asymptotic behavior of the solution (2.1.35) for large positive times. Its signif-
icance can be visualized if we use the fact that the logarithmic function is mono-
tonic and that both (2.1.41, 36) imply that the solution has an upper bound at
each time t given by

C e^{\lambda t + f(t)} , where  (2.1.43)

\limsup_{t \to \infty} \left[ \frac{1}{t} f(t) \right] = 0 .  (2.1.44)

Thus the generalized characteristic exponents have the same significance as the
real part of the (characteristic) exponent λ in the simple case of a differential
equation with constant coefficients.
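The definition (2.1.41) lends itself to a direct numerical estimate (a sketch with an assumed coefficient, not from the book): for the bounded coefficient a(t) = -0.25 + 0.5 sin t, the time average (1/t)∫a dτ converges to -0.25, the generalized characteristic exponent.

```python
import math

# Estimate the generalized characteristic exponent (2.1.41),
# lam = limsup (1/t) * integral_0^t a(tau) dtau, for the bounded
# coefficient a(t) = -0.25 + 0.5*sin(t). The oscillating part averages
# out, leaving lam = -0.25. (Illustrative sketch, not from the book.)

def a(t):
    return -0.25 + 0.5 * math.sin(t)

def lam_estimate(T, steps=200000):
    dt = T / steps
    s = 0.0
    for k in range(steps):
        s += a(k * dt) * dt          # simple Riemann sum for the integral
    return s / T

assert abs(lam_estimate(2000.0) - (-0.25)) < 1e-2
```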


2.2 Groups and Invariance

The very simple example of the differential equation (2.1.6) allows us to explain
some fundamental ideas about groups and invariance. Let us start with the
differential equation for the variable q(t)

\dot{q}(t) = a(t)\, q(t) .  (2.2.1)

The properties of a(t) (i.e., being constant, periodic, or quasiperiodic) can also be
characterized by an invariance property. Let us assume that a(t) remains un-
altered if we replace its argument

t \to t + t_0 ,  (2.2.2)

i.e.,

a(t + t_0) = a(t) .  (2.2.3)

If t_0 can be chosen arbitrarily, then (2.2.3) implies that a(t) must be time inde-
pendent. If (2.2.3) is fulfilled for a certain t_0 (and for integer multiples of it), a(t)
is periodic. As shown in Chap. 3 our present considerations can also be extended
to quasiperiodic functions a(t), but this requires more mathematics. Now let us
subject (2.2.1) to the transformation (2.2.2), which yields

\dot{q}(t + t_0) = a(t)\, q(t + t_0) ,  (2.2.4)

so that (2.2.4) has exactly the same appearance as (2.2.1). It is the same differen-
tial equation for a variable which we may denote by

\tilde{q}(t) = q(t + t_0) .  (2.2.5)

However, it is known from the theory of linear differential equations that the
solution of (2.2.1) is unique except for a constant factor. Denoting this constant
factor by a, the relation

q(t + t_0) = a\, q(t)  (2.2.6)

must hold. We want to show that by means of relation (2.2.6) we may construct
the solution of (2.2.1) directly. To this end let us introduce the translation
operator T, defined as follows. If T is applied to a function f of time, we replace t
by t + t_0 in the argument of f:

T f(t) = f(t + t_0) .  (2.2.7)

471
70 2. Linear Ordinary Differential Equations

Due to the invariance property (2.2.3) we readily find

T a(t) q(t) = a(t + t_0)\, q(t + t_0) = a(t)\, T q(t) .  (2.2.8)

Since this relation holds for an arbitrary q(t), (2.2.8) can be written in the form

T a(t) = a(t)\, T ,  (2.2.9)

or, in other words, T commutes with a(t). Equation (2.2.6) can now be expressed
in the form

T q = a q .  (2.2.10)

This relation is described by saying that q is an eigenfunction of the (translation)
operator T with eigenvalue a, and it is a consequence of the invariance of (2.2.1)
against T. The operator T allows us to explain what a "group" is.
So far we have attached an operator T to the translation t → t + t_0 so that
T a(t) = a(t + t_0). When we perform the translation n times, we have to replace t
by t + n t_0 so that a(t) → a(t + n t_0). This replacement can be achieved by applying
the operator T to a(t) n times. The operation can be written as T^n, so that
T^n a(t) = a(t + n t_0). Of course, we may also make these substitutions in the
reverse direction, t → t − t_0. This can be expressed by the "inverse" operator T^{-1},
which has the property T^{-1} a(t) = a(t − t_0). Finally we note that we may leave t
unchanged. This can be expressed in a trivial way by T^0: T^0 a(t) = a(t). We shall
call T^0 = E the unity operator. Summing up the various operations we find the
following scheme (n > 0, integer)

Displacement      Operator    Effect

t → t + t_0       T           T a(t) = a(t + t_0)
t → t + n t_0     T^n         T^n a(t) = a(t + n t_0)
t → t − t_0       T^{-1}      T^{-1} a(t) = a(t − t_0)
t → t − n t_0     T^{-n}      T^{-n} a(t) = a(t − n t_0)

The operations T^n (n an integer) form a (multiplicative) group, because they fulfill the
following axioms.
Individually, the T^n are elements of the group.
1) If we multiply (or, more generally speaking, combine) two elements with one
another, we obtain a new element of the group. Indeed T^n · T^m = T^{n+m} is a
new element. This relation holds because a displacement by n t_0 and one by
m t_0 yield a new displacement by (n + m) t_0.
2) A (right-sided) unity operator E exists so that T^n · E = T^n.
472
2.2 Groups and Invariance 71

3) For each element T^n there exists a (left-sided) inverse T^{-n} so that
T^{-n} T^n = E.
4) The associative law holds: (T^n T^m) T^l = T^n (T^m T^l).
The relations (1) - (4) are quite obvious because we can verify all of them by
means of the properties of the translations t → t + n t_0. But once we can verify
that operators form a group, a number of most valuable theorems may be
applied, and we shall come across some of them later. Because it does not matter
whether we first perform n translations t → t + n t_0 and then m, or vice versa, we
immediately verify T^n T^m = T^m T^n. In other words, any two elements of the
group commute with one another. A group composed of such elements only is
called an Abelian group. Because all elements of the present group are generated
as powers (positive and negative) of T, we call T the generator of the group.
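The group axioms above can be mirrored in a tiny model (illustrative, not from the book): the element T^n is represented by the integer n, composition by addition, E by 0, and the map T^n → a^n then obeys the same rules, anticipating the representation idea below.

```python
# A minimal model of the Abelian translation group (not from the book):
# T^n is encoded by the integer n, composition T^n * T^m by n + m,
# the unity E by 0, and the inverse of T^n by -n. The map T^n -> a^n
# is a one-dimensional representation of this group.

compose = lambda n, m: n + m          # T^n * T^m = T^(n+m)
E = 0                                 # unity operator
inverse = lambda n: -n                # T^(-n)

a = 2.0
rep = lambda n: a ** n                # T^n -> a^n

for n, m in [(2, 3), (-4, 4), (5, -1)]:
    assert compose(n, m) == compose(m, n)            # Abelian group
    assert compose(inverse(n), n) == E               # left-sided inverse
    assert rep(compose(n, m)) == rep(n) * rep(m)     # representation property
```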

Construction of the Solution of (2.2.10)


Now we want to show how to construct the solution from the relation (2.2.10).
Applying T to (2.2.10) n times and using (2.2.10) repeatedly we readily find

T^n q = a^n q .  (2.2.11)

Since the application of T^n means replacing t by t + t_0 n times, we obtain instead
of (2.2.11)

q(t + n t_0) = a^n q(t) .  (2.2.12)

This is an equation for q(t). To solve it we put

q(t) = e^{\lambda t}\, u(t) ,  (2.2.13)

where u is still arbitrary. Inserting (2.2.13) into (2.2.12) we obtain

e^{\lambda (t + n t_0)}\, u(t + n t_0) = a^n e^{\lambda t}\, u(t) .  (2.2.14)

Putting a = \exp(\lambda t_0) we arrive at the following equation

u(t + n t_0) = u(t) .  (2.2.15)

According to this relation, u(t) is a periodic function of t. Thus we have
rederived the general form of the solution of (2.1.6) with periodic coefficients
without any need to solve that differential equation directly. If a(t) is invariant
against T for any t_0, we can convince ourselves immediately that u(t) must be a
constant. The present example teaches us some important concepts which we
shall use later in more complicated cases. The first is the concept of invariance. It
is expressed by relation (2.2.9), which stems from the fact that a(t) is unchanged
under the transformation T. It was possible to derive the general form of the
solution (2.2.13) by means of two facts.


i) The solution of (2.2.1) is unique except for a constant factor.
ii) The differential equation is invariant against a certain operation expressed by
the operator T.
We then showed that q is an eigenfunction of the operator T. The operators
T^n form a group. Furthermore, we have seen by means of (2.2.11) that the
number a and its powers can be attached to the operators on the left-hand side of
(2.2.11). In particular we have the following correspondence

T \to a , \quad T^n \to a^n , \quad T^{-1} \to a^{-1} , \quad E \to 1 .  (2.2.16)

The relations (2.2.16) are probably the simplest example of a group representa-
tion. The basis of that representation is formed by q, and a certain number, a^n, is
attached to each operation T^n so that the numbers a^n obey the same rules as the
operators T^n as described by the above axioms (1)-(4), i.e.,

T^n T^m = T^{n+m} \to a^n a^m = a^{n+m}
T^n E = T^n \to a^n \cdot 1 = a^n
T^{-n} T^n = E \to a^{-n} a^n = 1  (2.2.17)
(T^n T^m) T^l = T^n (T^m T^l) \to (a^n a^m) a^l = a^n (a^m a^l) .

We shall see later that these concepts can be considerably generalized and will
have important applications.
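The eigenfunction relation (2.2.10) can be verified directly on a concrete q (an assumed example, not from the book): for q(t) = exp(λt)u(t) with u periodic of period t_0, the translation T shifts t by t_0 and reproduces q up to the factor a = exp(λ t_0).

```python
import math

# Check that q(t) = exp(lam*t)*u(t), with u periodic of period t0, is an
# eigenfunction of the translation operator: (T q)(t) = q(t + t0) = a*q(t)
# with eigenvalue a = exp(lam*t0). (Illustrative sketch, not from the book.)

t0, lam = 0.5, -0.3

def u(t):                                   # periodic with period t0
    return 2.0 + math.cos(2 * math.pi * t / t0)

def q(t):
    return math.exp(lam * t) * u(t)

a = math.exp(lam * t0)
for t in (0.0, 0.3, 1.7):
    assert abs(q(t + t0) - a * q(t)) < 1e-12
```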

2.3 Driven Systems

We now deal with an inhomogeneous first-order differential equation

\dot{q}(t) = a(t)\, q(t) + b(t) .  (2.3.1)

Again it is our aim to get acquainted with general properties of such an equation.
First let us treat the case in which a ≡ 0 so that (2.3.1) reduces to

\dot{q} = b  (2.3.2)

with the solution

q(t) = \int_{t_0}^{t} b(\tau)\, d\tau + c ,  (2.3.3)

where c is a constant which can be determined by the initial condition q(t_0) = q_i,
so that

c = q(t_0) .  (2.3.4)

To the integral in (2.3.3) we may apply the same considerations applied in Sects.
2.1.1 - 3 to the integral as it occurs, for instance, in (2.1.7). If b(t) is constant or
periodic,

q(t) = b_0 t + v(t) ,  (2.3.5)

where v(t) is a constant or periodic function, respectively. If b(t) is quasi-
periodic, we may prove that v(t) is also quasiperiodic if a KAM condition is ful-
filled and if the Fourier coefficients of v(t) (Sect. 2.1.3) converge sufficiently
rapidly.
Let us turn to the general case (2.3.1) where a(t) does not vanish everywhere.
To solve (2.3.1) we make the hypothesis

q(t) = q_0(t)\, c(t) ,  (2.3.6)

where q_0(t) is assumed to be a solution of the homogeneous equation

\dot{q}_0(t) = a(t)\, q_0(t) ,  (2.3.7)

and c(t) is a still unknown function, which would be a constant if we were to solve
the homogeneous equation (2.3.7) only.
Inserting (2.3.6) into (2.3.1) we readily obtain

\dot{q}_0(t)\, c(t) + q_0(t)\, \dot{c}(t) = a(t)\, q_0(t)\, c(t) + b(t) ,  (2.3.8)

or, using (2.3.7),

q_0(t)\, \dot{c}(t) = b(t) .  (2.3.9)

Equation (2.3.9) can be immediately integrated to give

c(t) = \int_{t_0}^{t} q_0^{-1}(\tau)\, b(\tau)\, d\tau + \alpha .  (2.3.10)

Again, it is our main goal to study what types of solutions of (2.3.1) result if
certain types of time dependence of a(t) and b(t) are given. Inserting (2.3.10)
into (2.3.6) we obtain the general form of the solution of (2.3.1), namely

q(t) = q_0(t) \int_{t_0}^{t} q_0^{-1}(\tau)\, b(\tau)\, d\tau + \alpha\, q_0(t) .  (2.3.11)
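The variation-of-constants formula (2.3.11) can be checked numerically (a sketch with assumed coefficients, not from the book): for a(t) = -1 + 0.5 cos t one has the closed form q_0(t) = exp(-t + 0.5 sin t), and the function built from (2.3.11) indeed satisfies q' = a(t)q + b(t).

```python
import math

# Numerical check of the variation-of-constants formula (2.3.11) for
# a(t) = -1 + 0.5*cos(t), b(t) = sin(t), t0 = 0, alpha = 1. Here
# q0(t) = exp(Integral a) = exp(-t + 0.5*sin(t)) in closed form.
# (Illustrative sketch, not from the book.)

t0, alpha = 0.0, 1.0
a = lambda t: -1.0 + 0.5 * math.cos(t)
b = lambda t: math.sin(t)

def simpson(f, lo, hi, steps=2000):      # composite Simpson quadrature
    h = (hi - lo) / steps
    s = f(lo) + f(hi)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

def q0(t):
    return math.exp(-(t - t0) + 0.5 * (math.sin(t) - math.sin(t0)))

def q(t):
    return q0(t) * simpson(lambda s: b(s) / q0(s), t0, t) + alpha * q0(t)

t, h = 1.0, 1e-4
deriv = (q(t + h) - q(t - h)) / (2 * h)        # central difference
assert abs(deriv - (a(t) * q(t) + b(t))) < 1e-5   # q' = a*q + b
```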

Let us first consider an explicit example, namely

a \equiv \lambda = \text{const} , \quad b = \text{const} \neq 0 .  (2.3.12)

In this case we obtain

q(t) = \int_{t_0}^{t} e^{\lambda (t - \tau)}\, b\, d\tau + \alpha\, e^{\lambda t} ,  (2.3.13)

or after integration

q(t) = -\frac{b}{\lambda} + \beta\, e^{\lambda t}  (2.3.14)

with

\beta = \alpha + \frac{b}{\lambda}\, e^{-\lambda t_0} .  (2.3.15)

For λ > 0, the long-time behavior (t → ∞) of (2.3.14) is dominated by the
exponential function and the solution diverges, provided β does not vanish, which
could occur only for a specific initial condition. For λ < 0 the long-time behavior is
determined by the first term in (2.3.14), i.e., q(t) tends to the constant −b/λ.
We can find this solution more directly by putting \dot{q} = 0 in (2.3.1), which yields

q = -b/\lambda .  (2.3.16)

The fully time-dependent solution (2.3.14) is called a transient towards the sta-
tionary solution (2.3.16). The case λ = 0 brings us back to the solution (2.3.5)
discussed above.
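The transient (2.3.14) is easy to see numerically (an assumed example, not from the book): for λ < 0 the solution relaxes from its initial value to the stationary value -b/λ.

```python
import math

# Illustration of the transient (2.3.14): for lam < 0 the solution
# q(t) = -b/lam + beta*exp(lam*t) of q' = lam*q + b relaxes to the
# stationary solution q = -b/lam of (2.3.16). (Not from the book.)

lam, b = -2.0, 4.0
q_stat = -b / lam                        # stationary solution, here 2.0

def q(t, q_init=0.0):
    beta = q_init - q_stat               # fixed by the initial condition
    return q_stat + beta * math.exp(lam * t)

assert abs(q(0.0) - 0.0) < 1e-12         # matches the initial condition
assert abs(q(10.0) - q_stat) < 1e-8      # long-time limit is -b/lam
```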
Quite generally we may state that the solution of the inhomogeneous equation
(2.3.1) is made up of a particular solution of the inhomogeneous equation, in our
case (2.3.16), and a general solution of the homogeneous equation, in our case
β exp(λt), where β is a constant which can be chosen arbitrarily. It can be fixed,
however, for instance if the value of q(t) at an initial time is given.
Let us now consider the general case in which a(t) and b(t) are constant,
periodic or quasiperiodic. In order to obtain results of practical use we assume
that in the quasiperiodic case a KAM condition is fulfilled and the Fourier series
of a(t) converges sufficiently rapidly, so that the solution of the homogeneous
equation (2.3.7) has the form

q_0(t) = e^{\lambda t}\, u(t)  (2.3.17)

with u(t) constant, periodic, or quasiperiodic. From the explicit form of u(t) and
from the fact that the quasiperiodic function in the exponential function of
(2.3.17) is bounded, it follows that |u| is bounded from below, so that its inverse
exists.
We first consider a particular solution of the inhomogeneous equation (2.3.1)
which by use of (2.3.11) and (2.3.17) can be written in the form

q(t) = e^{\lambda t}\, u(t) \int_{t_0}^{t} e^{-\lambda \tau}\, u^{-1}(\tau)\, b(\tau)\, d\tau .  (2.3.18)

In the following we treat the case

\mathrm{Re}\{\lambda\} < 0 ,  (2.3.19)

which allows us to take

t_0 = -\infty .  (2.3.20)

Making further a change of the integration variable

\tau = \tau' + t ,  (2.3.21)

we transform (2.3.18) into

q(t) = u(t) \int_{-\infty}^{0} e^{-\lambda \tau'}\, u^{-1}(\tau' + t)\, b(\tau' + t)\, d\tau' .  (2.3.22)

We are now interested in what kind of temporal behavior the particular solution
(2.3.22) of the inhomogeneous equation represents. Let u(t) be quasiperiodic
with basic frequencies ω_1, ..., ω_M. Since u^{-1} exists (as stated above) and is quasi-
periodic, we may write it in the form

u^{-1}(\tau) = \sum_{m} d_m\, e^{i\, m \cdot \omega \tau} .  (2.3.23)

Similarly, we expand b(τ) into a multiple Fourier series

b(\tau) = \sum_{n} f_n\, e^{i\, n \cdot \Omega \tau} .  (2.3.24)

Inserting (2.3.23, 24) into (2.3.22) we readily obtain

q(t) = \sum_{m,n} d_m f_n \big[ \underbrace{-\lambda + i (m \cdot \omega + n \cdot \Omega)}_{\Delta} \big]^{-1} \exp\left[ i (m \cdot \omega + n \cdot \Omega)\, t \right] .  (2.3.25)

Because evidently

|\Delta| \geq |\mathrm{Re}\{\lambda\}| > 0 ,  (2.3.26)

(2.3.25) represents a converging multiple Fourier series provided (2.3.23, 24)
were absolutely converging. (In this notion "quasiperiodic" implies that the
series converges absolutely.) In such a case (2.3.25) also converges absolutely.
From this and the general form of (2.3.25) it follows that q(t) is a quasiperiodic
function with basic frequencies ω_1, ..., ω_M, Ω_1, ..., Ω_N. Since for Re{λ} < 0 the
solution of the homogeneous equation vanishes when t → ∞, the temporal

behavior of the solution q(t) of the inhomogeneous equation (2.3.1) for large t is
determined by the particular solution we have just discussed, i.e.,

q(t) = quasiperiodic with basic frequencies ω₁, …, ω_M, Ω₁, …, Ω_N .   (2.3.27)
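The statement (2.3.27) is easy to check numerically. The following sketch (ours, not from the text; all parameter values are arbitrary) integrates the scalar analogue q̇ = λq + b(t) with Re{λ} < 0 and a drive built from the two incommensurate frequencies 1 and √2, and compares the long-time solution with the quasiperiodic particular solution given by the analogue of (2.3.25) with u ≡ 1:

```python
import numpy as np

lam = -0.5                            # Re{lambda} < 0, cf. (2.3.19)
Om = np.array([1.0, np.sqrt(2.0)])    # incommensurate driving frequencies

def b(t):
    return np.cos(Om[0]*t) + np.cos(Om[1]*t)

# exact quasiperiodic particular solution:
# for b = cos(Ωt) = (e^{iΩt}+e^{-iΩt})/2,  q_p = Σ f_n e^{iΩ_n t}/(iΩ_n - λ)
def q_part(t):
    s = 0j
    for w in Om:
        for sgn in (+1, -1):
            s += 0.5*np.exp(1j*sgn*w*t)/(1j*sgn*w - lam)
    return s.real

# integrate q' = λq + b(t) with classical RK4 from an arbitrary initial value
def rk4(q0, T, h):
    t, q = 0.0, q0
    f = lambda t, q: lam*q + b(t)
    while t < T - 1e-12:
        k1 = f(t, q); k2 = f(t + h/2, q + h*k1/2)
        k3 = f(t + h/2, q + h*k2/2); k4 = f(t + h, q + h*k3)
        q += h*(k1 + 2*k2 + 2*k3 + k4)/6; t += h
    return q

T = 40.0
q_num = rk4(q0=3.0, T=T, h=0.01)
# by now the homogeneous part ~ e^{λt} has decayed; only the quasiperiodic
# particular solution survives, so the difference is tiny
print(abs(q_num - q_part(T)))
```

After the transient has died out, the integrated solution agrees with the quasiperiodic sum to within the integration error, illustrating (2.3.27).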

2.4 General Theorems on Algebraic and Differential Equations

2.4.1 The Form of the Equations


In this section some general theorems of algebra and the theory of linear dif-
ferential equations are recapitulated. The purpose will be the following. In the
subsequent section we shall study sets of coupled ordinary differential equations
which are of the general form

q̇₁(t) = a₁₁q₁(t) + a₁₂q₂(t) + … + a₁ₙqₙ(t)
q̇₂(t) = a₂₁q₁(t) + a₂₂q₂(t) + … + a₂ₙqₙ(t)
⋮
q̇ₙ(t) = aₙ₁q₁(t) + aₙ₂q₂(t) + … + aₙₙqₙ(t) .   (2.4.1)

The unknown variables are the q's. The coefficients aik are assumed to be given.
They may be either constants, or time-dependent functions of various types dis-
cussed below. The set of Eqs. (2.4.1) can most concisely be written in matrix
notation

q̇(t) = L q(t) ,   (2.4.2)

where the vector q is defined by

q(t) = (q₁(t), …, qₙ(t))ᵀ ,   (2.4.3)

and the matrix L is given by

L = (a_ik) .   (2.4.4)

2.4.2 Jordan's Normal Form


For the moment we are not concerned with the time dependence of the coef-
ficients of L, i. e., we assume that they are constant or are taken at a fixed value
of time t. Let us consider a quadratic matrix L of the form (2.4.4).


The maximal numbers of linear independent rows or equivalently columns of


a matrix L are called the rank of that matrix or, symbolically, Rk {L}. A quadratic
matrix with n rows is called regular if Rk {L} = n.
Two quadratic matrices L and L̃, each with n rows, are called similar if there
exists a regular matrix S with L̃ = S⁻¹LS. After this reminder of some simple
definitions we now formulate the following important theorem concerning
Jordan's normal form.
Each quadratic matrix L whose elements are complex numbers is similar to a
normal matrix of the following form

L̃ = diag(Λ₁, Λ₂, …, Λ_s) .   (2.4.5)

In this normal matrix the square submatrices are uniquely determined by L


except for their sequence. The submatrices Λ_μ possess the form

Λ_μ = [λ_μ 1 0 … 0; 0 λ_μ 1 … 0; … ; 0 0 … λ_μ 1; 0 0 … 0 λ_μ] ,   (2.4.6)

where λ_μ is an eigenvalue of L.
This theorem can be stated using the above definitions also as follows. For
each quadratic matrix L of complex numbers we may find a regular matrix S such
that

L̃ = S⁻¹ L S   (2.4.7)

holds and L̃ is of the form (2.4.5). Because S is regular, its determinant does not
vanish, so that S⁻¹ exists.
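As a small numerical illustration (ours, not from the text; the matrices are arbitrary choices), one can build a matrix L from a prescribed Jordan box Λ and a regular matrix S via L = S Λ S⁻¹ and confirm that the similar matrix carries the same, here threefold degenerate, eigenvalue:

```python
import numpy as np

lam = 2.0
Lj = np.array([[lam, 1, 0],
               [0, lam, 1],
               [0, 0, lam]])        # one Jordan box, cf. (2.4.6)
S = np.array([[1., 2., 0.],
              [0., 1., 3.],
              [1., 0., 1.]])        # an arbitrary regular matrix
assert np.linalg.matrix_rank(S) == 3   # Rk{S} = n, i.e. S is regular
L = S @ Lj @ np.linalg.inv(S)          # L is similar to its normal form Lj

# similar matrices share their eigenvalues; for a defective matrix the
# numerically computed eigenvalues are accurate only to roughly the cube
# root of machine precision, hence the loose comparison below
print(max(abs(ev - lam) for ev in np.linalg.eigvals(L)))
```

The printed deviation is small but not at machine precision, which is typical for eigenvalues of defective (non-diagonalizable) matrices.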

2.4.3 Some General Theorems on Linear Differential Equations

We consider the differential equation (2.4.2) (also called "differential system")


where
L(t) = (a_ij(t))   (2.4.8)

is an n × n complex-valued matrix, whose elements a_ij are continuous in the real


variable t on the interval I defined by
−∞ < α ≤ t ≤ β < ∞ .   (2.4.9)

A solution of (2.4.2) is a complex column vector q(t) ∈ Eₙ (Eₙ: n-dimensional


complex vector space), differentiable and satisfying the differential system for


each t ∈ I. The theory of linear differential equations assures us that for each t₀ ∈ I
and each column vector q₀ ∈ Eₙ a unique solution q(t) exists such that q(t₀) = q₀.
The solutions of dq/dt = L(t)q form an n-dimensional complex linear vector
space (compare Fig. 2.4.1). In other words, n linearly independent solutions of
(2.4.2) exist which we may label q⁽¹⁾(t), …, q⁽ⁿ⁾(t). Any solution q(t) of (2.4.2)
can be written as a linear combination

q(t) = Σ_{j=1}^{n} c_j q⁽ʲ⁾(t) ,   (2.4.10)

where the coefficients c_j are time independent. The q⁽ʲ⁾'s are not determined
uniquely because by a transformation (2.4.10) we may go from one basis {q⁽ʲ⁾} to
another one {q̃⁽ʲ⁾}.
It is often convenient to lump the individual solution vectors q together to a
"solution matrix" Q

Q(t) = (q⁽¹⁾(t), q⁽²⁾(t), …, q⁽ⁿ⁾(t)) .   (2.4.11)

Writing the matrix Q in the form

Q = (Q_ik) ,   (2.4.12)

we find by comparison

Q_ik(t) = q_i⁽ᵏ⁾(t) .   (2.4.13)

The solution matrix then obeys the equation

Q̇(t) = L Q(t) .   (2.4.14)

Our above statement on the transformations from one basis {q⁽ʲ⁾} to another can
now be given a more precise formulation.
Theorem 2.4.1. Let Q(t) be a solution matrix of dQ/dt = L(t)Q which is non-
singular. The set of all nonsingular matrix solutions is formed by precisely the

[Fig. 2.4.1. Illustration of a three-dimensional real linear vector space]


matrices Q(t)C, where C is any n × n constant, nonsingular matrix. For each t₀ ∈ I
and each complex constant matrix Q₀ a unique solution matrix Q(t) exists such
that Q(t₀) = Q₀. A set of solution vectors q⁽¹⁾(t), …, q⁽ⁿ⁾(t) of dq/dt = L(t)q
form a basis for the solution space if and only if they form the columns of a
solution matrix Q(t) of dQ/dt = L(t)Q which corresponds to a nonsingular
initial matrix Q₀.
The differential system dq/dt = L(t)q, where α < t < β = ∞, is called stable
if every solution remains bounded as t → ∞, i.e.,

lim sup_{t→∞} {|q(t)|} < ∞ .   (2.4.15)

The following theorem is sometimes useful. Let Q(t) be a solution matrix of
dQ/dt = L(t)Q. Then (d/dt) det{Q(t)} = tr{L(t)} det{Q(t)} at each point t ∈ I
where Q(t) is nonsingular, and tr{L(t)} is the sum of the entries on the principal
diagonal. A solution matrix Q(t) is nonsingular everywhere on I if it is non-
singular at one point on I.
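For a constant matrix L this determinant relation can be verified directly, since then Q(t) = e^{Lt}Q(0) and det Q(t) = e^{t·tr L} det Q(0). The sketch below (ours; the matrix is an arbitrary example) builds the matrix exponential from its power series, cf. (2.6.2):

```python
import numpy as np

def expm_series(A, terms=40):
    # matrix exponential via the power series definition, cf. (2.6.2)
    out, term = np.eye(len(A)), np.eye(len(A))
    for v in range(1, terms):
        term = term @ A / v
        out = out + term
    return out

L = np.array([[0.1, 1.0],
              [-2.0, -0.4]])
t = 1.3
Q = expm_series(L * t)          # solution matrix with Q(0) = 1

# (d/dt) det Q = tr{L} det Q  implies  det Q(t) = e^{t·tr L}
print(np.linalg.det(Q), np.exp(t * np.trace(L)))
```

Both printed numbers agree to machine precision, as the theorem predicts.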
In Sect. 2.1 we studied the asymptotic behavior of the solution of the dif-
ferential equation q̇ = a(t)q. This led us to the concept of characteristic ex-
ponents and generalized characteristic exponents λ. A similar statement can be
made in the general case of the system (2.4.2).

2.4.4 Generalized Characteristic Exponents and Lyapunov Exponents


Suppose L (t) is continuous and

sup {|a_ij(t)|} < B for all α < t < ∞   (2.4.16)

and some constant B. Then for every solution vector q⁽ⁱ⁾(t) of dq/dt = L(t)q it
holds that

lim sup_{t→∞} {t⁻¹ ln |q⁽ⁱ⁾(t)|} = λᵢ ,  |λᵢ| < Bn .   (2.4.17)

The real numbers λᵢ which arise in this way are called the generalized charac-
teristic exponents. There are at most n distinct generalized characteristic ex-
ponents. The differential system is stable whenever all the λ's are negative.
A special case of the "generalized characteristic exponents" is that of
Lyapunov exponents.
To this end let us consider nonlinear equations of the form

q̇(t) = N(q(t)) ,   (2.4.18)

where N is a nonlinear vector function of q. Let q₀ be a solution of (2.4.18) and
consider the linearization of (2.4.18) around q₀, i.e., we put q = q₀ + δq and
retain only linear terms

δq̇ = L(q₀(t)) δq ,   (2.4.19)


where the matrix L(q₀(t)) is defined by L = (L_kl),

L_kl = ∂N_k(q₀(t)) / ∂q₀,_l(t) .   (2.4.20)

In such a case the generalized characteristic exponents of δq are called Lyapunov
exponents of q₀(t).
In conclusion of this section we present the following theorem: At least one
Lyapunov exponent vanishes if the trajectory q(t) of an autonomous system
remains in a bounded region for t → ∞ and does not contain a fixed point.
In order to prove this theorem we make the following additional assumptions
on N(q(t)) in (2.4.18). N(q(t)) is a continuous function of q(t) and it possesses
only finitely many zeros (corresponding to fixed points).
The proof of this theorem is very simple. For brevity we put δq ≡ u and drop
the index 0 of q₀ so that according to (2.4.19) u obeys the equation

u̇ = L(q(t)) u .   (2.4.21)

The Lyapunov exponent is then defined by

λ = lim sup_{t→∞} (1/t) ln |u(t)| .   (2.4.22)

We now construct a solution of (2.4.21) which possesses a vanishing Lyapunov


exponent. We can immediately verify by differentiating both sides of (2.4.18)
with respect to time that

u(t) = q̇(t)   (2.4.23)

is a solution to (2.4.21). Because N is continuous and q(t) is bounded for t → ∞

we have

|N(q(t))| < D ,  D > 0 .   (2.4.24)

Using (2.4.18) and (2.4.23) it follows that

|u| < D .   (2.4.25)

As a result we obtain

λ = lim sup_{t→∞} (1/t) ln |u(t)| ≤ lim sup_{t→∞} (1/t) ln D ,   (2.4.26)

from which we conclude

λ ≤ 0 .   (2.4.27)


Now let us assume

λ < 0 .   (2.4.28)

According to the definition of lim sup, for any ε > 0 there exists a t₀ so that for
any t > t₀ the inequality

(1/t) ln |u| < λ + ε   (2.4.29)

can be satisfied. We shall choose ε so small that

λ′ ≡ λ + ε < 0   (2.4.30)

holds. We now choose a function

v(t) = v₀ e^{−|λ′| t} ,   (2.4.31)

which majorizes u, so that

|u| ≤ |v₀| e^{−|λ′| t} for all t > t₀ .   (2.4.32)

Therefore

|u| → 0 for t → ∞ ,   (2.4.33)

which according to (2.4.18) implies

N(q) → 0 .   (2.4.34)

Because N is continuous, vanishes only at a finite number of points q and


approaches O,q(t) must approach one of the singular points

q/ = const. , (2.4.35)

i. e., the trajectory terminates at a fixed point. Therefore, if the trajectory q(t)
does not terminate at a fixed point, the only remaining possibility for the
Lyapunov exponent under study is λ = 0, so that our theorem is proved.
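The theorem can be illustrated numerically. The sketch below (our own example system, not from the text) follows a trajectory onto the stable limit cycle of a simple planar flow and evaluates (1/t) ln|u(t)| for the special solution u = q̇ = N(q) of the linearized equation (2.4.21):

```python
import numpy as np

# autonomous system with a stable limit cycle r = 1 (no fixed point on it):
#   x' = x - y - x(x²+y²),  y' = x + y - y(x²+y²)
def N(q):
    x, y = q
    r2 = x*x + y*y
    return np.array([x - y - x*r2, x + y - y*r2])

# integrate the trajectory with classical RK4
h, T = 0.01, 200.0
q = np.array([0.3, 0.0])
for _ in range(int(T/h)):
    k1 = N(q); k2 = N(q + h*k1/2); k3 = N(q + h*k2/2); k4 = N(q + h*k3)
    q = q + h*(k1 + 2*k2 + 2*k3 + k4)/6

# u = q̇ = N(q) solves the linearized equation (2.4.21), cf. (2.4.23);
# its Lyapunov exponent is estimated by (1/T) ln|u(T)|
lam_est = np.log(np.linalg.norm(N(q))) / T
print(lam_est)   # ≈ 0: the exponent along the trajectory vanishes
```

Since the trajectory stays bounded and does not terminate at a fixed point, the exponent along the flow direction comes out (numerically) as zero, in agreement with the theorem.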

2.5 Forward and Backward Equations: Dual Solution Spaces

For later purposes we slightly change the notation, writing w⁽ᵏ⁾ instead of q⁽ʲ⁾.
Equations (2.4.14, 11) read

Ẇ(t) = L W(t) ,  where   (2.5.1)


W(t) = (w⁽¹⁾(t), …, w⁽ⁿ⁾(t)) .   (2.5.2)

We distinguish the different solution vectors by an upper index k and assume that
these solution vectors form a complete set, i. e., that they span the solution space.
The matrix L

L = (L_jk)   (2.5.3)

may have any time dependence. Of course, we may represent (2.5.1) in the
somewhat more suggestive form

(ẇ₁, …, ẇₙ)ᵀ = L (w₁, …, wₙ)ᵀ .   (2.5.4)
As is known from algebra we may construct a dual space to the w⁽ᵏ⁾(0), spanned by
vectors w̄⁽ᵏ′⁾(0), in such a way that

(w̄⁽ᵏ′⁾(0) w⁽ᵏ⁾(0)) = δ_{kk′} for all k, k′ , where   (2.5.5)

δ_{kk′} = 1 for k = k′ ,  δ_{kk′} = 0 for k ≠ k′ .   (2.5.6)
The scalar product (…) between w̄ and w is defined by

(w̄⁽ᵏ′⁾ w⁽ᵏ⁾) = Σ_j w̄_j⁽ᵏ′⁾ w_j⁽ᵏ⁾ .   (2.5.7)

All our considerations can be easily extended to partial differential equations in


which L is an operator acting on spatial coordinates. In such a case the sum on
the rhs of (2.5.7) must be replaced by a sum over j and in addition by an integral
over space
(w̄⁽ᵏ′⁾ w⁽ᵏ⁾) = Σ_j ∫ w̄_j⁽ᵏ′⁾(x) w_j⁽ᵏ⁾(x) dx .   (2.5.8)

We now ask whether we may find an equation for the basis vectors, cf. (2.5.5), of
the dual space which guarantees that its solutions are for all times t ≥ 0 ortho-
gonal to the basic solutions (2.5.2) of the original equations (2.5.1). To this end
we define w̄⁽ᵏ′⁾ as the vector

w̄⁽ᵏ′⁾ = (w̄₁⁽ᵏ′⁾, …, w̄ₙ⁽ᵏ′⁾) ,   (2.5.9)

which obeys the equation

dw̄⁽ᵏ′⁾/dt = w̄⁽ᵏ′⁾ L̃ .   (2.5.10)


We require that L̃ has matrix elements connected with those of L by

L̃_jk = −L_jk ,   (2.5.11)

or, in short, we require

L̃ = −L .   (2.5.12)

In analogy to (2.5.4) we may write (2.5.10) in the form

d/dt (w̄₁, …, w̄ₙ) = (w̄₁, …, w̄ₙ) L̃ .   (2.5.13)
We want to show that (2.5.5) is fulfilled for all times provided w̄ obeys the equa-
tions (2.5.10). To this end we differentiate the scalar product (2.5.7), where we
use w̄⁽ᵏ′⁾ and w⁽ᵏ⁾ at time t. We obtain

(d/dt)(w̄⁽ᵏ′⁾ w⁽ᵏ⁾) = ((dw̄⁽ᵏ′⁾/dt) w⁽ᵏ⁾) + (w̄⁽ᵏ′⁾ (dw⁽ᵏ⁾/dt)) ,   (2.5.14)

which by use of (2.5.10 and 1) is transformed into

(d/dt)(w̄⁽ᵏ′⁾ w⁽ᵏ⁾) = (w̄⁽ᵏ′⁾ (L̃ + L) w⁽ᵏ⁾) .   (2.5.15)

On account of (2.5.12), the rhs of (2.5.15) vanishes. This tells us that (2.5.5) is
fulfilled for all later times (or previous times when going in backward direction).
Thus we may formulate our final result by

(w̄⁽ᵏ′⁾(t) w⁽ᵏ⁾(t)) = δ_{kk′} .   (2.5.16)
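For a constant matrix L this invariance is easy to verify numerically: the columns of e^{Lt} solve the forward equation, the rows of e^{−Lt} solve the backward equation dw̄/dt = −w̄L, and their scalar products stay equal to δ_{kk′}. A sketch (ours; the matrix is arbitrary):

```python
import numpy as np

def expm_series(A, terms=40):
    # matrix exponential via its power series
    out, term = np.eye(len(A)), np.eye(len(A))
    for v in range(1, terms):
        term = term @ A / v
        out = out + term
    return out

L = np.array([[0.0, 1.0],
              [-1.0, -0.2]])
t = 2.0

W0 = np.eye(2)              # columns: initial vectors w^(k)(0)
Wb0 = np.linalg.inv(W0)     # rows: dual vectors w̄^(k')(0) with (w̄ w) = δ

W  = expm_series(L*t) @ W0    # forward evolution:  ẇ = L w
Wb = Wb0 @ expm_series(-L*t)  # backward evolution: dw̄/dt = -w̄ L, i.e. L̃ = -L

print(np.round(Wb @ W, 10))   # the orthogonality relations survive for all t
```

The product of the dual rows with the forward columns remains the unit matrix, i.e. the biorthogonality (2.5.5) holds for all later times.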

In later sections we shall show that for certain classes of L(t) we may decompose
w(k) into

w⁽ᵏ⁾(t) = e^{λ_k t} v⁽ᵏ⁾(t) ,   (2.5.17)

where v⁽ᵏ⁾ has certain properties. In this section we use the decomposition
(2.5.17), but in an entirely arbitrary fashion so that v⁽ᵏ⁾ need not be specified in
any way. All we want to show is that whenever we make a decomposition of the
form (2.5.17) and a corresponding one for w̄,

w̄⁽ᵏ′⁾(t) = e^{−λ_{k′} t} v̄⁽ᵏ′⁾(t) ,   (2.5.18)


the relations (2.5.16) are also fulfilled by the v's. The λ_k's are quite arbitrary.
Inserting (2.5.17 and 18) in (2.5.16) we immediately find

exp[(λ_k − λ_{k′}) t] (v̄⁽ᵏ′⁾ v⁽ᵏ⁾) = δ_{kk′} ,   (2.5.19)

which due to the property of the Kronecker symbol can also be written in the
form

(v̄⁽ᵏ′⁾(t) v⁽ᵏ⁾(t)) = δ_{kk′} .   (2.5.20)

The orthogonality relations (2.5.16, 20) will be used in later chapters.

2.6 Linear Differential Equations with Constant Coefficients

Now we wish to derive the form of the solutions of (2.5.1) if L is a constant


matrix. If q is prescribed at time t = 0, the formal solution of (2.5.1) with a con-
stant matrix L can be written in the form

q(t) = e^{Lt} q(0) ,   (2.6.1)

which is in formal analogy to solution (2.1.5). Note, however, that L is a matrix


so that the formal solution (2.6.1) needs further explanation. It is provided by the
definition of the exponential function of an operator by means of the power
series expansion

e^{Lt} = Σ_{ν=0}^{∞} (1/ν!) (Lt)^ν .   (2.6.2)

Since each power of a matrix is defined, (2.6.2) is defined too, and one can show
especially that the series (2.6.2) converges in the sense that each matrix element of
exp (Lt) is finite for any finite time t. By inserting (2.6.2) into (2.6.1) we may
readily convince ourselves that (2.6.1) fulfills (2.5.1) because

(d/dt) e^{Lt} = L e^{Lt} .   (2.6.3)

While this kind of solution may be useful for some purposes, we wish to derive a
more explicit form of q. Depending on different initial vectors q⁽ʲ⁾(0) different
solutions q⁽ʲ⁾(t) evolve. Provided the q⁽ʲ⁾(0)'s are linearly independent, then so
are the q⁽ʲ⁾ at all other times.

In the following we shall prove the following theorem:


We can choose the solutions qU)(t) in such a way that they have the form

q⁽ʲ⁾(t) = e^{λ_j t} v⁽ʲ⁾(t) .   (2.6.4)


The exponents λ_j, often called "characteristic exponents", are the eigenvalues of
the matrix L. The vectors v⁽ʲ⁾(t) have the form

v⁽ʲ⁾(t) = v₀⁽ʲ⁾ + v₁⁽ʲ⁾ t + … + v_{m_j}⁽ʲ⁾ t^{m_j} ,   (2.6.5)

i.e., they are polynomials in t where the highest power m_j is smaller than or equal
to the degeneracy of λ_j. If all λ_j's are different from each other, v⁽ʲ⁾(t) must then
be a constant. If several λ_j's coincide it might, but need not, happen that powers
of t occur in v⁽ʲ⁾. We shall see later how to decide by construction up to which
power t occurs in (2.6.5). Let us start to prove these statements. We start from
the formal solution of Q̇ = LQ, namely

Q(t) = e^{Lt} Q(0) ,   (2.6.6)

and leave the choice of Q(0) open. We now choose a regular matrix S so that L is
brought to Jordan's normal form L̃,

L̃ = S⁻¹ L S .   (2.6.7)

Multiplying this equation by S from the left and S⁻¹ from the right we obtain

L = S L̃ S⁻¹ .   (2.6.8)

Thus (2.6.6) acquires the form

Q(t) = exp(S L̃ S⁻¹ t) Q(0) .   (2.6.9)

Making use of the expansion of the exponential function (2.6.2), one may readily
convince oneself that (2.6.9) can be replaced by

Q(t) = S e^{L̃t} S⁻¹ Q(0) .   (2.6.10)

By the special choice of the initial solution matrix Q(O),

Q(0) = S ,   (2.6.11)

a particularly simple form for (2.6.10) is found.


To discuss the resulting form of the solution matrix

Q(t) = S e^{L̃t}   (2.6.12)

we use the explicit form of Jordan's normal form of L̃, namely

L̃ = diag(Λ₁, Λ₂, …, Λ_s) .   (2.6.13)


In it each box may be either a single matrix element of the form


Λ_j = λ_j ,   (2.6.14)

or may represent an m_j × m_j matrix of the form

Λ_j = [λ_j 1 0 … 0; 0 λ_j 1 … 0; … ; 0 0 … λ_j 1; 0 0 … 0 λ_j] .   (2.6.15)

In it, in the main diagonal all λ_j's are equal. Above this diagonal we have another
diagonal with 1's. All other matrix elements are equal to zero.
We first show that exp(L̃t) has the same shape as (2.6.13). Using the rules of
matrix multiplication it is demonstrated that

L̃² = diag(Λ₁², Λ₂², …) ,   (2.6.16)

and quite generally for an arbitrary power m

L̃^m = diag(Λ₁^m, Λ₂^m, …) .   (2.6.17)

When we multiply both sides of (2.6.17) by t^m/m! and sum up over m we obtain
e^{L̃t}, which yields the same structure as (2.6.13), namely

e^{L̃t} = diag(H₁, H₂, …) ≡ H .   (2.6.18)

Therefore, for our study of the form of the solution it will be sufficient to focus
our attention on each of the matrices H j which are of the form

H_j = e^{M_j t} ,   (2.6.19)

where we have denoted the box j by M_j. For what follows we shall put

e^{L̃t} = Q̃(t) ,   (2.6.20)

where we wish to use the normal form (2.6.13). Dropping the index j, we write
the matrix (2.6.15) in the form

M = λ·1 + K .   (2.6.21)


In it 1 is the mj x mj unity matrix

1 = diag(1, 1, …, 1) ,   (2.6.22)

whereas K is defined by

K = (K_ij) ,  K_ij = δ_{j, i+1} ,   (2.6.23)

with m_j rows and columns. Because the matrix 1 commutes with all other
matrices we may split the exponential function according to

e^{Mt} = e^{λt} e^{Kt} .   (2.6.24)

Let us consider the exponential function of K t by expanding it into a series

e^{Kt} = Σ_{ν=0}^{∞} (1/ν!) (Kt)^ν .   (2.6.25)

What happens can be seen immediately by means of examples. If K is one-dimen-


sional, i. e.,

K = 0 ,   (2.6.26)

(2.6.25) reduces to a constant. If K is two-dimensional,

K = [0 1; 0 0] ,   (2.6.27)

we readily obtain by matrix multiplication

K² = 0 ,   (2.6.28)

i. e., (2.6.25) contains a constant and a term linear in t, i. e.,

e^{Kt} = 1 + Kt = [1 t; 0 1] .   (2.6.29)

In the case of a 3 by 3 matrix


K = [0 1 0; 0 0 1; 0 0 0] ,   (2.6.30)

we readily obtain

K² = [0 0 1; 0 0 0; 0 0 0] ,   (2.6.31)

K³ = 0 ,   (2.6.32)

so that exp (Kt) acquires the form

e^{Kt} = 1 + Kt + (t²/2) K² = [1 t t²/2; 0 1 t; 0 0 1] .   (2.6.33)
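Because K is nilpotent, the series (2.6.25) terminates, which can be checked directly (a small numerical sketch of ours; the value of t is arbitrary):

```python
import numpy as np

K = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
t = 0.7
assert np.all(K @ K @ K == 0)      # K³ = 0, cf. (2.6.32)

# the exponential series terminates: e^{Kt} = 1 + Kt + K²t²/2, cf. (2.6.33)
E = np.eye(3) + K*t + (K @ K)*t**2/2
print(E)
```

The result is exactly the upper triangular matrix of (2.6.33), with t in the first superdiagonal and t²/2 in the corner.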

After these preparations we may find Q̃ of (2.6.20). To illustrate what happens,
consider an example where λ₁ belongs to a one-by-one box, λ₂ to a two-by-two
box, etc. Again the general structure is obvious. In such a case we have

Q̃(t) = [e^{λ₁t} 0 0 …; 0 e^{λ₂t} t e^{λ₂t} …; 0 0 e^{λ₂t} …; ⋱] .   (2.6.34)

In order to obtain q̃⁽ʲ⁾ we decompose (2.6.34) into its column vectors q̃⁽ʲ⁾, which
yields

q̃⁽¹⁾ = (e^{λ₁t}, 0, 0, …)ᵀ ,  q̃⁽²⁾ = (0, e^{λ₂t}, 0, …)ᵀ ,  q̃⁽³⁾ = (0, t e^{λ₂t}, e^{λ₂t}, …)ᵀ , … .   (2.6.35)

From these considerations we see how Q̃ can be constructed. Now we must
remember that we still have to form (2.6.12) to obtain our solution matrix Q(t).
In order to explore its shape we have to perform the product

Q(t) = S Q̃(t) = S [e^{λ₁t} 0 0 …; 0 e^{λ₂t} t e^{λ₂t} …; 0 0 e^{λ₂t} …; ⋱] .   (2.6.36)
(2.6.36)


Taking the same example as before we immediately find

q⁽¹⁾(t) = e^{λ₁t} s⁽¹⁾ ,  q⁽²⁾(t) = e^{λ₂t} s⁽²⁾ ,  q⁽³⁾(t) = e^{λ₂t} (t s⁽²⁾ + s⁽³⁾) , … ,   (2.6.37)

where s⁽ʲ⁾ denotes the j-th column of S.

Evidently all these q's have the form (2.6.4) with (2.6.5), as the theorem stated at
the beginning of this section. The way in which this result was derived provides
an explicit construction method of these solutions. This procedure can be gener-
alized to the case of a time-periodic matrix L, to which case we now turn.
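The stated form of the solutions can be checked numerically for a single 2 × 2 Jordan box, for which e^{Lt} = e^{λt}(1 + Kt) (a sketch of ours; λ and t are arbitrary values):

```python
import numpy as np

def expm_series(A, terms=60):
    # matrix exponential via the power series (2.6.2)
    out, term = np.eye(len(A)), np.eye(len(A))
    for v in range(1, terms):
        term = term @ A / v
        out = out + term
    return out

lam, t = 2.0, 0.5
L = np.array([[lam, 1.0],
              [0.0, lam]])          # a single 2x2 Jordan box
Q = expm_series(L * t)

# solutions have the form (2.6.4, 5): e^{λt} times polynomials in t
expected = np.exp(lam*t) * np.array([[1.0, t],
                                     [0.0, 1.0]])
print(np.allclose(Q, expected))    # True
```

The second column of Q is e^{λt}(t, 1)ᵀ, i.e. an exponential multiplied by a first-degree polynomial in t, exactly as (2.6.4, 5) predict for a twofold degenerate eigenvalue.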

2.7 Linear Differential Equations with Periodic Coefficients

We wish to derive the general form of the solutions of the equation

q̇ = L(t) q ,   (2.7.1)

where the coefficient matrix L is periodic. In other words, L is invariant against


the replacement

T: t → t + t₀ ,   (2.7.2)

i. e., it commutes with the operator defined by (2.7.2)

T L = L T .   (2.7.3)

Multiplying

Q̇ = L Q   (2.7.4)

on both sides by T and using (2.7.3) we immediately obtain

(d/dt)(TQ) = L (TQ) .   (2.7.5)

This equation tells us that TQ is again a solution of (2.7.4). According to


Theorem 2.4.1 (Sect. 2.4.3) we know that this solution matrix can be expressed
by the old solution matrix Q(t) by means of a constant transformation matrix C

TQ ≡ Q(t + t₀) = Q(t) C .   (2.7.6)


Let us assume that this transformation matrix C is known¹. Instead of solving


(2.7.4) we explore the solution of (2.7.6). First of all we note that according to
Theorem 2.4.1 C is a regular matrix. As is shown in mathematics and as shown
below quite explicitly we can always find a matrix A so that

e^{A t₀} = C .   (2.7.8)

In order to solve (2.7.6) we make the hypothesis

Q(t) = U(t) e^{At} .   (2.7.9)

Inserting this hypothesis into (2.7.6) we immediately find

U(t + t₀) exp[A(t + t₀)] = U(t) e^{At} C ,   (2.7.10)

or due to (2.7.8)

U(t + t₀) = U(t) .   (2.7.11)

Equation (2.7.11) tells us that the solution matrix U is periodic.


We now introduce a matrix S which brings A into Jordan's normal form Ã,

A = S Ã S⁻¹ .   (2.7.12)

This allows us to perform steps which are quite similar to those of the preceding
section. Due to (2.7.12) we have

e^{At} = S e^{Ãt} S⁻¹ .   (2.7.13)

Inserting this into (2.7.9) we obtain for the solution matrix

Q(t) = U(t) S e^{Ãt} S⁻¹ ,   (2.7.14)

where we denote the product U(t) S e^{Ãt} by Q̃(t). Thus we may write (2.7.14) in the form

Q(t) = Q̃(t) S⁻¹ .   (2.7.15)

¹ Since in most cases (2.7.4) cannot be solved analytically one may resort to computer calculations.
One takes Q(0) as the unit matrix and lets the computer calculate Q(t) by standard iteration
methods until the time t = t₀ is reached. Specializing (2.7.6) to this case we obtain

Q(t₀) = C ,   (2.7.7)

which gives us directly the transformation matrix C.
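The procedure described in the footnote can be sketched in a few lines (our own example; the particular L(t) is an arbitrary damped Mathieu-type matrix): integrate Q̇ = L(t)Q over one period starting from the unit matrix, read off C = Q(t₀), and obtain the Floquet exponents from the eigenvalues of C. As a consistency check, the sum of the exponents' real parts must equal the time average of tr{L(t)} (here −0.5), by the determinant relation of Sect. 2.4.3:

```python
import numpy as np

t0 = 2*np.pi                       # period of the coefficient matrix
def L(t):
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.3*np.cos(t)), -0.5]])

# RK4 integration of the matrix equation Q' = L(t)Q with Q(0) = 1
n = 20000
h, Q, t = t0/n, np.eye(2), 0.0
for _ in range(n):
    k1 = L(t) @ Q; k2 = L(t + h/2) @ (Q + h*k1/2)
    k3 = L(t + h/2) @ (Q + h*k2/2); k4 = L(t + h) @ (Q + h*k3)
    Q = Q + h*(k1 + 2*k2 + 2*k3 + k4)/6; t += h

C = Q                                   # transformation matrix, cf. (2.7.7)
mu = np.linalg.eigvals(C)               # characteristic multipliers
floquet = np.log(mu.astype(complex))/t0 # Floquet exponents (mod 2πi/t0)

print(floquet.real.sum())               # ≈ -0.5, the mean of tr{L(t)}
```

Here the trace of L(t) is constant (−0.5), so ln det C = −0.5 t₀, and the two Floquet exponents' real parts must sum to −0.5 regardless of the cos-modulation.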


According to Theorem 2.4.1 (where S⁻¹ ≡ C), Q̃ is a solution matrix if Q(t) is
such a matrix. Because it will turn out that Q̃ has a simpler form than Q(t) we
now treat this latter matrix.
With the abbreviation

Ũ(t) = U(t) S ,   (2.7.16)

Q̃ acquires the form

Q̃(t) = Ũ(t) e^{Ãt} .   (2.7.17)

The form of this solution exhibits a strong resemblance to the form (2.6.12) of
the solution of a differential equation with constant coefficients. The only dif-
ference consists in the fact that the matrix S of (2.6.12) is now replaced by a
matrix Ũ, whose coefficients are periodic in time. This allows us to repeat all
former steps of Sect. 2.6 and to derive a standard form of the solution vectors.
They read

q⁽ʲ⁾(t) = e^{λ_j t} v⁽ʲ⁾(t) ,   (2.7.18)

where

v⁽ʲ⁾(t) = v₀⁽ʲ⁾(t) + t v₁⁽ʲ⁾(t) + … + t^{m_j} v_{m_j}⁽ʲ⁾(t) .   (2.7.19)

The characteristic exponents λ_j in (2.7.18) are called Floquet exponents; they are
the eigenvalues of Ã, cf. (2.7.12).
The coefficients v_ν⁽ʲ⁾(t) are periodic functions of time with period t₀. For m_j
we have the rule

m_j ≤ degree of degeneracy of λ_j .   (2.7.20)

This is the final result of this section. For readers who are interested in all details
we now turn to the question how to determine A in (2.7.8) if C is given. To this
end we introduce a matrix V which brings C into Jordan's normal form. From
(2.7.8) we find

V⁻¹ e^{A t₀} V = V⁻¹ C V ≡ C̃ .   (2.7.21)

Introducing the abbreviation

Â = V⁻¹ A V ,   (2.7.22)

we obtain for the lhs of (2.7.21)

V⁻¹ e^{A t₀} V = e^{Â t₀} .   (2.7.23)


In order to fulfill (2.7.23), where C̃ has the shape

C̃ = diag(C̃₁, C̃₂, …) ,   (2.7.24)

it is sufficient to assume that the still unknown Â (which stems from the still
unknown A) has the same block decomposition corresponding to

e^{Â t₀} = diag(e^{Â₁ t₀}, e^{Â₂ t₀}, …)   (2.7.25)

[compare steps (2.6.16 to 18)]. Therefore our problem reduces to one which
refers to any of the submatrices of (2.7.24 or 25) or, in other words, we have to
solve an equation of the form

exp(□ t₀) = [μ 1 0 … 0; 0 μ 1 … 0; … ; 0 0 … μ 1; 0 0 … 0 μ] ,   (2.7.26)

where the box □ on the left-hand side represents a matrix still to be determined.
As C is a regular matrix,
μ ≠ 0   (2.7.27)

in (2.7.26) holds. We put


e^{λ t₀} = μ   (2.7.28)

and introduce the decomposition

□ = λ·1 + D′/t₀ .   (2.7.29)

This allows us to write (2.7.26) as

μ exp(D′) = [μ 1 0 … 0; 0 μ 1 … 0; … ; 0 0 … 0 μ]   (2.7.30)

or

exp(D′) = 1 + μ⁻¹ K .   (2.7.31)


Though we are dealing here with matrices, we have already seen that we may also
use functions of matrices in analogy to usual numbers by using power series
expansions. Because the logarithm can be defined by a power series expansion,
we take the logarithm of both sides of (2.7.31) which yields

D′ = ln(1 + μ⁻¹ K)   (2.7.32)

with K as defined in (2.6.23). Expanding the logarithm of the right-hand side we


immediately obtain

D′ = μ⁻¹ K − (1/2) μ⁻² K² + (1/3) μ⁻³ K³ − … .   (2.7.33)

Fortunately enough we need not worry about the convergence of the power series
on the rhs of (2.7.33) because it follows from formulas (2.6.28, 32) and their gen-
eralization that powers higher than a fixed number vanish. Equation (2.7.33)
gives us an explicit solution of our problem to determine the box □ in (2.7.29).
We have thus shown how we can explicitly calculate A when C is given.

2.8 Group Theoretical Interpretation

Results of Sects. 2.2, 7 may serve as an illustration of basic concepts of the theory
of group representations. In complete analogy to our results of Sect. 2.2, the
operator T generates an Abelian group with elements Tⁿ, n integer. But
what happens to the correspondence T → α we established in (2.2.16)? There our
starting point was the relation (2.2.10), i. e.,

T q(t) = α q(t) .   (2.8.1)

In the present case, the analog to this relation is provided by

T Q(t) = Q(t) C ,   (2.8.2)

i. e., (2.7.6) where C is a matrix. By applying T on both sides of (2.8.2) and using
(2.8.2) again we obtain

T² Q(t) = T Q(t) C = (Q(t) C) C = Q(t) C² ,   (2.8.3)

and similarly

Tⁿ Q(t) = Q(t) Cⁿ .   (2.8.4)

Clearly

T⁰ Q(t) = Q(t) C⁰ = Q(t) ,   (2.8.5)


if we define C⁰ as unit matrix. Multiplying (2.8.2) on both sides by T⁻¹ yields

Q(t) = T⁻¹ Q(t) C .   (2.8.6)

Following from Theorem 2.4.1 (Sect. 2.4.3), C is a regular matrix. Thus we may
form the inverse C⁻¹ and multiply both sides of (2.8.6) from the right by that
matrix. We then obtain

T⁻¹ Q(t) = Q(t) C⁻¹ .   (2.8.7)

Similarly, we obtain from (2.8.4)

T⁻ⁿ Q(t) = Q(t) C⁻ⁿ .   (2.8.8)

It is now quite obvious what the analog of relation (2.2.16) looks like:

T → C
Tⁿ → Cⁿ
T⁰ → 1   (2.8.9)
T⁻ⁿ → C⁻ⁿ ,

and relation (2.2.17) now is generalized to

Tⁿ Tᵐ = Tⁿ⁺ᵐ → Cⁿ Cᵐ = Cⁿ⁺ᵐ
T⁻ⁿ Tⁿ = E → C⁻ⁿ Cⁿ = 1   (2.8.10)
(Tⁿ Tᵐ) Tˡ = Tⁿ (Tᵐ Tˡ) → (Cⁿ Cᵐ) Cˡ = Cⁿ (Cᵐ Cˡ) .

But the fundamental difference between (2.2.16, 17) on the one hand and
(2.8.9, 10) on the other rests on the fact that α was a number, whereas C is a
matrix. Therefore, the abstract transformations Tⁿ are now represented by the
matrices Cⁿ, and the multiplication of elements of the T-group is represented by
multiplication of the matrices Cⁿ. Since in mathematics one knows quite well
how to deal with matrices, one can use them to study the properties of abstract
groups (in our case the group generated by T). This is one of the basic ideas of
group representation theory.
In the foregoing section we have seen that by the transformation (2.7.14),
i. e.,

Q(t) = Q̃(t) S⁻¹ ,   (2.8.11)

we can reduce A to Jordan's normal form

Ã = S⁻¹ A S ,   (2.8.12)


i.e., Ã is decomposed into individual boxes along its diagonal [cf. (2.6.13)]. Now
we insert (2.8.11) into (2.8.2) and multiply both sides from the right by S,

T Q̃(t) = Q̃(t) S⁻¹ C S .   (2.8.13)

This means that by a change of the basis of the solution matrix Q → Q̃, C in
(2.8.2) is transformed into

C̃ = S⁻¹ C S .   (2.8.14)

But according to (2.7.8)

C = e^{A t₀}   (2.8.15)

holds so that

C̃ = S⁻¹ e^{A t₀} S = e^{Ã t₀} .   (2.8.16)

But because Ã is in Jordan's normal form, so is C̃, on account of our considera-
tions in Sect. 2.6. We then see that by the proper choice of Q̃ the matrix C of the
group representation can be reduced to the simple form

C̃ = diag(C̃₁, C̃₂, …) .   (2.8.17)

Since the individual boxes (i. e., matrices) cannot be reduced further (due to
algebra), they are called an irreducible representation. Because the boxes are
multiplied individually if we form en, we obtain

C̃ⁿ = diag(C̃₁ⁿ, C̃₂ⁿ, …) .   (2.8.18)

In this way we may attach an individual box k, or an irreducible representation,


to T:

T → C̃ₖ   (2.8.19)

and

Tⁿ → C̃ₖⁿ .   (2.8.20)


Generally speaking, it is one of the main goals of group representation theory to


establish the irreducible representations of an abstract group (here Tn).

2.9 Perturbation Approach *

Since in general it is not possible to solve a set of differential equations with


periodic coefficients explicitly, in some cases a perturbation method may be
useful. To this end we decompose the matrix L of the differential equation

Q̇ = L Q   (2.9.1)

into

L = L₀ + L₁ ,   (2.9.2)

where L₀ is a constant matrix whereas L₁ contains no constant terms. This can be


most easily achieved by expanding the matrix elements of L into a Fourier series
in which the constant term is clearly exhibited.
We now insert the ansatz

Q = S Q̃ ,   (2.9.3)

where S is a constant regular matrix, into (2.9.1), which yields after multiplying
both sides by S⁻¹

(d/dt)Q̃ = (S⁻¹ L₀ S + S⁻¹ L₁ S) Q̃ .   (2.9.4)

We choose S in such a way that

J = S⁻¹ L₀ S   (2.9.5)

assumes Jordan's normal form. To simplify subsequent discussion, we shall


assume that J contains only diagonal elements. We further put

M̃ = S⁻¹ L₁ S ,   (2.9.6)

so that

(d/dt)Q̃ = (J + M̃) Q̃ .   (2.9.7)

Inserting the hypothesis

Q̃(t) = e^{tJ} Q̂(t)   (2.9.8)


into (2.9.7) we readily obtain after performing the differentiation with respect
to t

(d/dt)Q̂ = e^{−tJ} M̃ e^{tJ} Q̂ ≡ M Q̂ .   (2.9.9)

It may be shown that the matrix elements M_kl of M have the form

M_kl = e^{−(J_k − J_l) t} M̃_kl ,   (2.9.10)

where the J_l are the diagonal elements of the diagonal matrix J. So far all transfor-
mations were exact. Now to start with perturbation theory we assume that the
periodic part of L, i.e., L₁ or eventually M in (2.9.9), is a small quantity. In order
to exhibit this explicitly we introduce a small parameter e. Furthermore, we
decompose the differential equation (2.9.9) for the solution matrix into
equations for the individual solution vectors. Therefore we write

(d/dt)q̂ = ε M q̂ ,   (2.9.11)
where the matrix M is of the form

M = [M₁₁ M₁₂ … M₁ₙ; M₂₁ M₂₂ … M₂ₙ; … ; Mₙ₁ … Mₙₙ] ,   (2.9.12)

and in particular
M̄_jj = 0 ,  M̄_jj ≡ (1/2π) ∫₀^{2π} M_jj dφ ,  φ = ωt .   (2.9.13)

The nondiagonal elements may be rewritten as

M_ij = e^{Δ_ij t} p_ij⁽⁰⁾(t) ,  Δ_ij ≡ −(J_i − J_j) ,   (2.9.14)

where the p_ij⁽⁰⁾ denote periodic functions of time.


Furthermore, we shall assume

Δ_ij + imω ≠ 0 for m = 0, ±1, ±2, … .   (2.9.15)

To solve (2.9.11) we make the following hypothesis

q̂(t) = exp[t(a₂ε² + a₃ε³ + …)] [(1, 0, …, 0)ᵀ + ε A⁽¹⁾(t) + ε² A⁽²⁾(t) + …]   (2.9.16)


with time-dependent vectors A⁽ᵏ⁾. For ε = 0 (2.9.16) reduces to the special vector

q̂ = (1, 0, …, 0)ᵀ .   (2.9.17)

Since we may numerate the components of q̂ arbitrarily, our treatment is,


however, quite general. Inserting (2.9.16) into (2.9.11), performing the differen-
tiation and dividing both sides of the resulting equation by exp[t(a₂ε² + …)] we
obtain

(a₂ε² + a₃ε³ + …)[(1, 0, …, 0)ᵀ + εA⁽¹⁾ + ε²A⁽²⁾ + …] + εȦ⁽¹⁾ + ε²Ȧ⁽²⁾ + …

= ε (M₁₁, M₂₁, …, Mₙ₁)ᵀ + ε² M A⁽¹⁾ + … + ε^{k+1} M A⁽ᵏ⁾ + … .   (2.9.18)
We now compare the coefficients of the same powers of ε on both sides of
(2.9.18). This yields the set of differential equations discussed below.
In lowest order ε, the first row of the resulting matrix equation reads

ε:  Ȧ₁⁽¹⁾ = M₁₁ ≡ Σ_{m≠0} c_m⁽¹¹⁾ e^{imωt} .   (2.9.19)

Equation (2.9.19) can be fulfilled by


(II)
All) = L ~eimwl (2.9.20)
m*Olmw
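That (2.9.20) indeed solves (2.9.19) is quickly confirmed term by term, since each Fourier component of A₁⁽¹⁾ differentiates to the corresponding component of M₁₁. A small numerical check (ours; the coefficients c_m and the frequency are arbitrary):

```python
import numpy as np

omega = 1.7
# Fourier coefficients c_m^{(11)}, m ≠ 0 (conjugate pairs keep M11 real)
c = {1: 0.4 + 0.2j, -1: 0.4 - 0.2j, 2: 0.1j, -2: -0.1j}

M11 = lambda t: sum(cm*np.exp(1j*m*omega*t) for m, cm in c.items())
A1  = lambda t: sum(cm/(1j*m*omega)*np.exp(1j*m*omega*t) for m, cm in c.items())

# dA1/dt should reproduce M11, cf. (2.9.19, 20); checked by central differences
t, h = 0.83, 1e-6
dA = (A1(t + h) - A1(t - h))/(2*h)
print(abs(dA - M11(t)))
```

Each term c_m e^{imωt}/(imω) differentiates back to c_m e^{imωt}, so the finite-difference derivative matches M₁₁ to high accuracy.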

It is not necessary to choose¹ Ā₁⁽¹⁾ ≠ 0 because this would alter only a normaliza-
tion. In the same order ε but for the other rows with l ≠ 1 we obtain the relations

¹ The bar above Ā₁⁽¹⁾ denotes the averaging defined by (2.9.13).


Ȧ_l⁽¹⁾ = M_l1 ,  l = 2, 3, … .   (2.9.21)

We can solve for A_l⁽¹⁾ by integrating (2.9.21). Now it is quite important to choose
the integration constant properly, because we intend to develop a perturbation
theory which yields solutions with the form

q(t) = e^{λt} v(t) ,   (2.9.22)

v(t) periodic. Making a specific choice of the integration constant means that we
choose a specific initial condition. Because Mil has the form

M_l1 = e^{Δ_l1 t} Σ_{m≠0} c_m⁽ˡ¹⁾ e^{imωt} ,   (2.9.23)

we may choose A_l⁽¹⁾ in the form

A_l⁽¹⁾(t) = e^{Δ_l1 t} Σ_{m≠0} (c_m⁽ˡ¹⁾ / (Δ_l1 + imω)) e^{imωt} ,   (2.9.24)

i.e., the exponential function exp(Δ_l1 t) is multiplied by a periodic function. As it
will transpire below, such a form will secure that q(t) acquires the form (2.9.22).
Let us now consider the next order in ε². For the first row of the resulting
matrix equation we obtain the relation

a₂ + Ȧ₁⁽²⁾ = Σ_{l=1}^{n} M₁_l A_l⁽¹⁾ .   (2.9.25)

To study the structure of the rhs of (2.9.25) more closely, we make use of the
explicit form of M_ij (2.9.14) and of A_l⁽¹⁾ (2.9.20, 24).
If we decompose the periodic functions contained in M and A_l⁽¹⁾ into their
Fourier series, we encounter products of the form

exp(iωmt + iωm′t) .   (2.9.26)

For

m + m′ ≠ 0   (2.9.27)

the result is again a periodic function which truly depends on time t, whereas for

m + m′ = 0   (2.9.28)

a constant results. Consequently, the rhs of (2.9.25) can be written in the form

Σ_{l} M₁_l A_l⁽¹⁾ = C₁⁽²⁾ + P₁⁽²⁾(t) .   (2.9.29)


Because the exponential functions cancel, (2.9.29) consists of a sum of a constant
C₁⁽²⁾ and a periodic function P₁⁽²⁾(t) containing no constant terms. We choose

a₂ = C₁⁽²⁾ ,   (2.9.30)

so that the constant terms on both sides of (2.9.25) cancel.


The remaining equation for A₁⁽²⁾ can be solved in a way similar to (2.9.20) so
that A₁⁽²⁾ can be chosen as a periodic function with no constant term. In the same
order ε² we obtain for the other rows, i.e., k ≠ 1, the following equations

Ȧ_k⁽²⁾ = Σ_{l=1}^{n} M_kl A_l⁽¹⁾ ,   (2.9.31)

where the rhs can be cast into the form

e^{Δ_k1 t} × (periodic function)   (2.9.32)

or

e^{Δ_k1 t} Σ_m d_m e^{imωt} .   (2.9.33)

Equation (2.9.31) has a solution of the form

A_k⁽²⁾(t) = e^{Δ_k1 t} (P_k⁽²⁾(t) + C_k⁽²⁾) ,   (2.9.34)

where P_k⁽²⁾ is periodic without a constant term and C_k⁽²⁾ is constant. Now the struc-
ture of the evolving equations is clear enough to treat the general case with
powers ν ≥ 3 of ε. For the first row we obtain for ν ≥ 3 (A⁽ᵏ⁾ ≡ 0 for κ ≤ 0, a_κ = 0
for κ ≤ 1)

εᵛ:  a_ν + a₂ A₁⁽ᵛ⁻²⁾ + a₃ A₁⁽ᵛ⁻³⁾ + … + a_{ν−2} A₁⁽²⁾ + a_{ν−1} A₁⁽¹⁾ + Ȧ₁⁽ᵛ⁾ = Σ_{l=1}^{n} M₁_l A_l⁽ᵛ⁻¹⁾ .   (2.9.35)

Here the constant a_ν and the function A₁⁽ᵛ⁾ are still unknown. All other functions,
including A_l⁽ᵏ⁾ with 1 ≤ κ ≤ ν − 2, have been determined in the previous steps, where
it was shown that these A's are periodic functions without constant
terms. Substituting these previously determined A's into the rhs of (2.9.35) we
readily find that the rhs has the form

c₁^{(v)} + P₁^{(v)}(t),  (2.9.36)

i.e., it consists of a constant and of a purely periodic function without a constant
term. We may now choose a_v equal to that constant term, whereas A₁^{(v)} can be
chosen as a purely periodic function without constant term. Clearly A₁^{(v)} can be
determined explicitly by mere integration of (2.9.35).


Let us now consider in the same order v the other rows of (2.9.18). Then we
obtain

a₂A_k^{(v-2)} + a₃A_k^{(v-3)} + … + a_{v-1}A_k^{(1)} + Ȧ_k^{(v)} = Σ_{l=1}^{m} M_{kl} A_l^{(v-1)}.  (2.9.37)

In it the function A_k^{(v)} is unknown. All others are determined from previous steps
and have the general form

A_l^{(κ)} ∝ e^{Δ_{l1}t} × periodic function.  (2.9.38)

From this it follows that the rhs of (2.9.37) can be written in the form

(2.9.39)

or more concisely

(2.9.40)

The solution of (2.9.37) can therefore be written as

A_k^{(v)}(t) = e^{Δ_{k1}t} [P_k^{(v)}(t) + C_k^{(v)}],  (2.9.41)

where P_k^{(v)} is a periodic function without a constant term and C_k^{(v)} is a constant.


These considerations tell us several things. Firstly, we see that we can deter-
mine the subsequent contributions to the shift of the exponent λ, i.e., the terms
a₂, a₃, …, explicitly by an iteration procedure. Furthermore, the expressions in
square brackets in (2.9.16) can be constructed explicitly, yielding each time a
periodic function. Putting these ingredients together we find the general structure
of the solution q,

q = e^{γt} [P₁ + C₁, e^{Δ₂₁t}(P₂ + C₂), …, e^{Δ_{n1}t}(P_n + C_n)]^T  (2.9.42)

(γ = λ₁ + ε²a₂ + ε³a₃ + … being the shifted exponent),

where all terms can be constructed explicitly by an iteration procedure. From this
solution vector of (2.9.11) we can go back via (2.9.8) to the solution of (2.9.4).
We then find that the first column of the solution matrix has the form

e^{γt} [P₁ + C₁, P₂ + C₂, …, P_n + C_n]^T,  (2.9.43)

with γ the shifted exponent of (2.9.42). The C_j's are constants, while the P_j's are
periodic functions (without additive constants).
Similarly, we may determine the other column vectors by a mere interchange
of appropriate indices. In a last step we may return to Q by means of (2.9.3). As

we have seen in Sects. 2.6 (cf. (2.6.36)) and 2.7, this transformation leaves the
structure of the solution vector (2.9.22) unaltered. Our present procedure can be
generalized to the case of degeneracy of the λ's, in which case we find terms in
(2.9.43) which contain not only periodic functions but also periodic functions
multiplied by finite powers of t. For practical purposes and for the lowest few
orders this procedure is quite useful. On the other hand, the convergence of this
procedure is difficult to judge because in each subsequent iteration step the
number of terms increases. In Sect. 3.9 another procedure will be introduced
which may be somewhat more involved but which converges rapidly (and even
works in the quasiperiodic case).
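The structure just derived, solutions of the form e^{λt} × (periodic function), can be checked numerically: the exponents are the logarithms of the eigenvalues of the monodromy matrix over one period, and a small periodic perturbation shifts them only at second order in ε, in accordance with the a₂ term above. The following sketch is an illustrative numerical check, not taken from the text; the matrix, frequencies, and perturbation are assumed for the example.

```python
import numpy as np

# For dQ/dt = M(t) Q with T-periodic M, the eigenvalues mu_j of the
# monodromy matrix Q(T) give exponents lambda_j = ln(mu_j)/T.  A small
# periodic off-diagonal perturbation shifts the exponents only at O(eps^2).

T = 2.0 * np.pi
omega = 2.0 * np.pi / T
lam = np.array([0.3, -0.5])          # unperturbed exponents, lambda_1 > lambda_2

def M(t, eps):
    A = np.diag(lam)
    B = np.array([[0.0, np.cos(omega * t)],
                  [np.cos(omega * t), 0.0]])
    return A + eps * B

def monodromy(eps, steps=4000):
    """Integrate dQ/dt = M(t) Q over one period with classical RK4."""
    Q = np.eye(2)
    h = T / steps
    for i in range(steps):
        t = i * h
        k1 = M(t, eps) @ Q
        k2 = M(t + h/2, eps) @ (Q + h/2 * k1)
        k3 = M(t + h/2, eps) @ (Q + h/2 * k2)
        k4 = M(t + h, eps) @ (Q + h * k3)
        Q = Q + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return Q

def exponents(eps):
    mu = np.linalg.eigvals(monodromy(eps))
    return np.sort(np.log(mu).real / T)[::-1]

print(exponents(0.0))   # ~ [0.3, -0.5]
print(exponents(0.1))   # shifted only by O(eps^2)
```

The small size of the shift at ε = 0.1 illustrates why the low-order iteration steps of this section are already quite useful in practice.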

3. Linear Ordinary Differential Equations
with Quasiperiodic Coefficients *

3.1 Formulation of the Problem and of Theorem 3.1.1

In this section we wish to study the general form of the solution matrix Q(t) of
the differential equation

Q̇(t) = M(t) Q(t),  (3.1.1)

where M is a complex-valued m × m matrix which can be expressed as a Fourier
series of the form

M(t) = Σ_{n₁,n₂,…,n_N} M_{n₁,n₂,…,n_N} exp(iω₁n₁t + iω₂n₂t + … + iω_N n_N t).  (3.1.2)

In it each Fourier coefficient M_{n₁,n₂,…} is an m × m matrix. In view of the preced-
ing sections the question arises whether the solution vectors q(t) can be brought
into the form

q(t) = e^{λt} v(t),  (3.1.3)

where v(t) is quasiperiodic or a polynomial in t with quasiperiodic coefficients.


Though in the literature a number of efforts have been devoted to this problem,
it has not been solved entirely. In fact, other forms of the solution vector may
also be expected. Thus in this and the following sections we are led to the
forefront of mathematical research. The solution matrices Q(t) will be classified
according to their transformation properties. Among these classes we shall find a
class which implies the form (3.1.3). In order to study the transformation
properties we wish to introduce a translation operator T.
As one may readily convince oneself it is not possible to extend the translation
operator of Chap. 2 to the present case in a straightforward way. However, the
following procedure has proven to be successful. We consider (3.1.1) as a special
case of a larger set of equations in which instead of (3.1.2) we use the matrix

M(t, φ) = Σ_{n₁,n₂,…,n_N} M_{n₁,n₂,…,n_N} exp[iω₁n₁(−φ₁ + t) + iω₂n₂(−φ₂ + t) + …]  (3.1.4)

which contains the phase angles φ₁, …, φ_N. In other words, we are embedding
the problem of solving (3.1.1) in the problem of solving

Q̇(t, φ) = M(t, φ) Q(t, φ).  (3.1.5)

We introduce the translation operator

T_τ:  t → t + τ,  φ → φ + τ,  τ = τe,  (3.1.6)

where τ is an arbitrary shift and e is the vector (1, 1, …, 1) in φ space. One sees
immediately that M is invariant against (3.1.6) or, in other words, that M
commutes with T_τ,

T_τ M = M T_τ.  (3.1.7)

As a consequence of (3.1.7),

T_τ Q(t, φ)  (3.1.8)

is again a solution of (3.1.5). Therefore the relation

T_τ Q(t, φ) = Q(t, φ) C(τ, φ)  (3.1.9)

must hold (compare Theorem 2.4.1), where C(τ, φ) is a matrix independent of t.
The difficulty rests on the fact that the matrix C still depends on τ and φ. Using
the lhs of (3.1.9) in a more explicit form we find

Q(t + τ, φ + τ) = Q(t, φ) C(τ, φ).  (3.1.10)
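The invariance (3.1.7) is easy to verify in coordinates: any matrix built, as in (3.1.4), from the combinations ω_j n_j(t − φ_j) is unchanged under the simultaneous shift t → t + τ, φ → φ + τe. The small check below is a sketch with an assumed 2 × 2 matrix and an illustrative incommensurate frequency pair; it is not a construction from the text.

```python
import numpy as np

# Verify that M(t + tau, phi + tau*e) = M(t, phi) when M depends only on
# the combinations omega_j * (t - phi_j), as in (3.1.4).

w = np.array([1.0, np.sqrt(2.0)])      # omega_1, omega_2 (incommensurate)

def M(t, phi):
    th = w * (t - phi)                 # omega_j * (t - phi_j)
    return np.array([[np.cos(th[0]), np.sin(th[1])],
                     [np.sin(th[0]) + np.cos(th[1]), 0.0]])

rng = np.random.default_rng(0)
t, tau = 0.7, 3.9
phi = rng.uniform(0, 2*np.pi, size=2)

shift = np.abs(M(t + tau, phi + tau) - M(t, phi)).max()
print(shift)   # zero up to floating-point rounding
```

Since Q inherits this symmetry only up to the right factor C(τ, φ), relation (3.1.9) is the natural replacement for strict invariance of the solution matrix.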

After these preparatory steps we are able to formulate Theorem 3.1.1 which we
shall prove in the following.
Theorem 3.1.1. Let us make the following assumptions on (3.1.4 and 5):
1) The frequencies ω₁, …, ω_N are irrational with respect to each other (other-
wise we could choose a smaller basic set of ω's).
2) M(t, φ) = M(0, φ − t) is T_j-periodic in φ_j, T_j = 2π/ω_j, and C^k with respect
to φ (k ≥ 0) ("C^k" means as usual "k times differentiable with continuous deriva-
tives").
3) For some φ = φ₀ the generalized characteristic exponents λ_j of q^{(j)}(t, φ₀),
j = 1, …, m, are different from each other. We choose λ₁ > λ₂ > ….
4) We use the decomposition

q^{(j)}(t, φ₀) = e^{z_j(t)} u^{(j)}(t),  (3.1.11)

where |u^{(j)}(t)| = 1, z_j real, j = 1, …, m, and, clearly, z_j is connected with λ_j by

506
3.1 Formulation of the Problem and of Theorem 3.1.1 105

lim sup_{t→∞} [(1/t) z_j(t)] = λ_j.  (3.1.12)

We then require

|det(u^{(1)}(t), …, u^{(m)}(t))| > d₀ > 0  for all times t.  (3.1.13)

This implies that the unit vectors u^{(j)}(t) keep minimum angles with each other, or,
in other words, that the u's never become collinear, i.e., they are linearly inde-
pendent.
5) z_j(t) possesses the following properties. A sequence t_n, t_n → ∞, exists such
that

(3.1.14)

(3.1.15)

where

δ₁,₂ > 0,  j = 1, …, m − 1,  and  τ ≥ 0.  (3.1.16)

(In particular, the conditions (3.1.14) and (3.1.15) are fulfilled for any sequence
t_n if

z_j(t) = λ_j t + w_j(t),  (3.1.17)

where w_j(t) is bounded.)


6) The ω's fulfill a KAM condition [cf. (2.1.20 and 21)] jointly with a suitably
chosen k in assumption (2).
Then the following assertions hold:
a) Under conditions (1-4), k ≥ 0, Q(t, φ) can be chosen in such a way that in

Q(t + τ, φ + τ) = Q(t, φ) C(τ, φ)  (3.1.18)

the matrix C can be made triangular,

(3.1.19)

with coefficients quasiperiodic in τ.
b) Under conditions (1-5), the coefficients of C are C^k with respect to φ and
T_j-periodic in φ_j, T_j = 2π/ω_j.


c) Under conditions (1-6), C can be made diagonal, and the solutions q^{(j)}
can be chosen such that

q^{(j)} = e^{λ̂_j t} v^{(j)}(t, φ),  Re{λ̂_j} = λ_j,  (3.1.20)

where v^{(j)} is quasiperiodic in t and T_j-periodic in φ_j, and C^k with respect to φ.
In particular

T_τ v^{(j)} = v^{(j)},  (3.1.21)

i.e., v^{(j)} is invariant against T_τ.

In Sect. 3.8 a generalization of this theorem will be presented for the case
where some of the λ's coincide.
The proof of Theorem 3.1.1 will be given in Sects. 3.2-5 and 3.7 in several steps
(Sect. 3.6 is devoted to approximation methods - linked with the ideas of the
proof - for constructing the solutions q^{(j)}). After stating some auxiliary
theorems ("lemmas") in Sect. 3.2, we first show how C can be brought to
triangular form (exemplified in Sect. 3.3 by the case of a 2 × 2 matrix, and in
Sect. 3.5 by the case of an m × m matrix). This implies that we can choose
q^{(j)}(t, φ) such that λ₁ > λ₂ > … > λ_m for all φ. Then we show that the elements
of the triangular matrix can be chosen according to statements (a) and (b) of
Theorem 3.1.1 (in Sect. 3.4 by the case of a 2 × 2 matrix, and in Sect. 3.5 by the
case of an m × m matrix). Finally in Sect. 3.7 we prove assertion (c).

3.2 Auxiliary Theorems (Lemmas)

Lemma 3.2.1. If M (3.1.4) is C^k, k ≥ 0, with respect to φ, then M is bounded for
all t with −∞ < t < +∞ and all φ. Therefore the conclusions of Sect. 2.4.4 apply
and we can define generalized characteristic exponents.
Proof: Since M is periodic in each φ_j or, equivalently, in each φ̃_j = φ_j + t, it is
continuous in the closed intervals 0 ≤ φ̃_j (mod 2π/ω_j) ≤ 2π/ω_j and therefore
bounded for all t and φ.
Lemma 3.2.2. If M (3.1.4) and the initial solution matrix Q(0, φ) are C^k with
respect to φ, then the solution matrix Q(t, φ) is also C^k with respect to φ for
−∞ < t < +∞.

Proof: We start from (3.1.5), where the matrix M is given by (3.1.4). The solu-
tion of (3.1.5) can be expressed in the formal way

Q(t, φ) = T exp[∫₀ᵗ M(s, φ) ds] Q(0, φ),  (3.2.1)

where T is the time-ordering operator. The exponential function is defined by


T exp[∫₀ᵗ M(s, φ) ds] = 1 + Σ_{n=1}^{∞} (1/n!) T [∫₀ᵗ M(s, φ) ds]ⁿ.  (3.2.2)

According to the time-ordering operator we have to arrange a product of
matrices in such a way that later times stand to the left of earlier times. This
means especially

I⁽ⁿ⁾ ≡ (1/n!) T [∫₀ᵗ M(s, φ) ds]ⁿ = ∫₀ᵗ ds_n M(s_n, φ) ∫₀^{s_n} ds_{n−1} M(s_{n−1}, φ) … ∫₀^{s₂} ds₁ M(s₁, φ).  (3.2.3)

Because M is a matrix with matrix elements M_{jk} we can also write (3.2.3) in the
form

I⁽ⁿ⁾_{jk} ≡ (1/n!) T {[∫₀ᵗ M(s, φ) ds]ⁿ}_{jk}
  = Σ_{l₁,…,l_{n−1}} ∫₀ᵗ ds_n M_{j l_{n−1}}(s_n, φ) ∫₀^{s_n} ds_{n−1} M_{l_{n−1} l_{n−2}}(s_{n−1}, φ) … ∫₀^{s₂} ds₁ M_{l₁ k}(s₁, φ).  (3.2.4)

According to Lemma 3.2.1, M is bounded,

|M_{jk}(t, φ)| ≤ M̄.  (3.2.5)

Therefore the estimate

|I⁽ⁿ⁾_{jk}| ≤ m^{n−1} (M̄ t)ⁿ / n!  (3.2.6)

holds. Because according to (3.2.1) we must eventually multiply (3.2.2) by an
initial solution vector q_k at time t = 0, we need to know the norm of

I⁽ⁿ⁾ q_k(0, φ).  (3.2.7)

Assume that each component of q_k is bounded,

|q_{k,j}(0, φ)| ≤ m_q.  (3.2.8)

Then we readily obtain

|I⁽ⁿ⁾ q_k(0, φ)| ≤ mⁿ (M̄ t)ⁿ m_q / n!,  (3.2.9)


from which

|q_k(t, φ)| ≤ m_q exp(m M̄ t).  (3.2.10)

Evidently the series converges for all times −∞ < t < +∞.
These considerations can be easily extended to prove the differentiability of
(3.2.1) with respect to φ. Denoting the derivative with respect to a specific φ_j by a
prime,

Q′_j(t, φ) ≡ (∂/∂φ_j) Q(t, φ),  (3.2.11)

and dropping the index φ_j, we obtain the following expressions for the derivative
of each matrix element jk of a member of the series (3.2.2):

(∂/∂φ_j) (1/n!) T {[∫₀ᵗ M(s, φ) ds]ⁿ}_{jk}
  = (1/n!) T {∫₀ᵗ M′(s, φ) ds [∫₀ᵗ M(s, φ) ds]^{n−1}
  + [∫₀ᵗ M(s, φ) ds] ∫₀ᵗ M′(s, φ) ds [∫₀ᵗ M(s, φ) ds]^{n−2} + …}_{jk}.  (3.2.12)
Let

|M′_{jk}| ≤ K;  (3.2.13)

then

(3.2.14)

Let us consider (3.2.11) in more detail. Denoting derivatives with respect to φ_j
by a prime,

Q′(t, φ) = {T exp[∫₀ᵗ … ds]}′ Q(0, φ) + T exp[∫₀ᵗ … ds] Q′(0, φ),  (3.2.15)

where the first factor on the rhs exists for all t. Using the estimate (3.2.14) we obtain

|(∂/∂φ_j) Q(t, φ)|_{ik} ≤ K exp(m M̄ t) m_q + |T exp[∫₀ᵗ … ds] (∂/∂φ_j) Q(0, φ)|_{ik},  (3.2.16)


where the second term on the rhs can be estimated in a way analogous to (3.2.9).
Thus if M is C¹ with respect to φ, cf. (3.2.13), then (3.2.11) exists for all times
−∞ < t < ∞. Similarly, one may prove the convergence (and existence) of the
k'th derivative of Q(t, φ) provided M is correspondingly often continuously dif-
ferentiable with respect to φ.
Lemma 3.2.3. If the initial matrix Q(0, φ) is periodic in φ with periods
T_j = 2π/ω_j, then Q(t, φ) has the same property.
Proof: In (3.2.1, 2) each term has this periodicity property.
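The formal solution (3.2.1, 2) can also be checked numerically: the time-ordered exponential is the limit of an ordered product of short-time propagators exp(M(s_i)h), with later times multiplied from the left, and this agrees with a direct integration of (3.1.5). The sketch below uses an assumed periodic 2 × 2 matrix purely for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Approximate the time-ordered exponential of (3.2.1) by an ordered product
# of short-time propagators and compare with RK4 integration of dQ/dt = M(t)Q.

def M(t):
    return np.array([[0.1, np.cos(t)],
                     [np.sin(t), -0.2]])

def ordered_product(t, steps):
    Q = np.eye(2)
    h = t / steps
    for i in range(steps):
        s = (i + 0.5) * h            # midpoint of each subinterval
        Q = expm(M(s) * h) @ Q       # later times to the LEFT, as required
    return Q

def rk4(t, steps):
    Q = np.eye(2)
    h = t / steps
    for i in range(steps):
        s = i * h
        k1 = M(s) @ Q
        k2 = M(s + h/2) @ (Q + h/2 * k1)
        k3 = M(s + h/2) @ (Q + h/2 * k2)
        k4 = M(s + h) @ (Q + h * k3)
        Q = Q + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return Q

err = np.abs(ordered_product(3.0, 2000) - rk4(3.0, 2000)).max()
print(err)   # both approximate the same time-ordered exponential
```

The left-ordering of the factors is exactly the statement following (3.2.2) that later times stand to the left of earlier times.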
Lemma 3.2.4. This lemma deals with the choice of the initial matrix as unity
matrix. We denote the (nonsingular) initial matrix of the solution matrix with
φ = φ₀ by Q₀,

Q(0, φ₀) = Q₀.  (3.2.17)

For all other φ's we assume the same initial condition for the solution matrix for
t = 0,

Q(0, φ) = Q(0, φ₀) ≡ Q₀.  (3.2.18)

For what follows it will be convenient to transform the solutions to such a form
that the initial solution matrix (3.2.17) is transformed into a unity matrix. To this
end we write the solution matrix in the form (omitting the argument φ)

Q(t) = U(t) Q₀  with  U(0) = 1.  (3.2.19)

From (3.2.19) follows

Q̇ = U̇(t) Q₀  with  U̇ = MU.  (3.2.20)

Inserting (3.2.19) into (3.2.20) and multiplying both sides from the right by Q₀⁻¹,
we obtain

(3.2.21)

Introducing

Q̃(t) = Q(t) Q₀⁻¹  (3.2.22)

as a new solution matrix we may write (3.2.21) in the form [compare (3.1.1)]

dQ̃/dt = M Q̃,  (3.2.23)

where Q̃ obeys the initial condition

Q̃(0) = 1.  (3.2.24)


Because each of the transformations can be made in a backwards direction it
suffices for our discussion to consider the new problem (3.2.23, 24). Entirely
analogously to the procedure in Sect. 2.7, we can show that the new solution
vectors q̃^{(j)} are again of the form (3.1.11) with the same generalized characteristic
exponents as before.
From now on we shall use the transformed system (3.2.23, 24), but drop the
tilde.
Lemma 3.2.5. The transformation matrix C (cf. 2.7.7) for the system (3.2.23, 24)
is given by

C(τ, φ) = Q(τ, φ + τ).  (3.2.25)

Proof: Using the definition of T_τ (3.1.6) we may write (3.1.9) in the form

Q(t + τ, φ + τ) = Q(t, φ) C(τ, φ).  (3.2.26)

When we choose t = 0 and apply (3.2.24), we obtain (3.2.25). Because Q(t, φ) is a
nonsingular matrix (cf. Theorem 2.4.1),

C⁻¹(τ, φ)  (3.2.27)

exists for all τ and φ.
By means of (3.2.25) we cast (3.2.26) into the form

Q(t + τ, φ + τ) = Q(t, φ) Q(τ, φ + τ).  (3.2.28)

Replacing everywhere φ by φ − τ we find after a slight rearrangement

Q(t, φ − τ) = Q(t + τ, φ) Q⁻¹(τ, φ).  (3.2.29)

In the special case φ = φ₀ we shall put

C(τ, φ₀) = C(τ).  (3.2.30)

Lemma 3.2.6. Let us introduce the transpose of the inverse matrix belonging to
Q(τ, φ₀),

l_{jk}(τ) = [Q⁻¹(τ, φ₀)]_{kj},  (3.2.31)

so that we arrive at the following form of (3.2.29) with φ = φ₀:

q^{(j)}(t, φ₀ − τ) = Σ_{k=1}^{m} l_{jk}(τ) q^{(k)}(t + τ, φ₀),  j = 1, …, m.  (3.2.32)

We put

D_{jk}(τ) = l_{jk}(τ) e^{z_k(τ)}.  (3.2.33)


Then the assertions hold:
1) |D_{jk}(τ)| < d₁, d₁ independent of τ;
2) D_{jk}(τ) is C¹ with respect to τ.
Proof: Assertion (1) follows immediately from the definition of the inverse of
a matrix, from (3.2.30, 25) and from assumption (3) of Theorem 3.1.1. The proof
of assertion (2) follows from the decomposition of q^{(j)} (which is at least C¹ with
respect to τ) into a real factor exp(z_j(t)) and a vector u^{(j)}(t) of unit length.

3.3 Proof of Assertion (a) of Theorem 3.1.1:
Construction of a Triangular Matrix: Example of a 2 × 2 Matrix

Since all the basic ideas and essential steps can be seen in the simple case in which
M and thus Q are 2 × 2 matrices, we start with this example and put

m = 2.  (3.3.1)

We may write (3.2.32) in the form

q^{(1)}(t, φ₀ − τ) = l₁₁(τ) q^{(1)}(t + τ, φ₀) + l₁₂(τ) q^{(2)}(t + τ, φ₀),  (3.3.2)

q^{(2)}(t, φ₀ − τ) = l₂₁(τ) q^{(1)}(t + τ, φ₀) + l₂₂(τ) q^{(2)}(t + τ, φ₀).  (3.3.3)

Let us consider (3.3.2, 3) in more detail. On the lhs, q^{(j)}(t, φ₀ − τ) is a subset of
the functions q^{(j)}(t, φ) which are periodic in φ_j and C^k with respect to φ. In par-
ticular, the functions on the lhs of (3.3.2, 3) are quasiperiodic in τ. Because the
ω's are assumed irrational with respect to each other, φ₀ − τ lies dense in φ or,
more precisely,

(φ_{0,j} − τ) mod (2π/ω_j) lies dense in φ_j.  (3.3.4)

As a consequence of this and the C^k property of q^{(j)}(t, φ), q^{(j)}(t, φ₀ − τ) lies dense
in the space q^{(j)}(t, φ). On the rhs the asymptotic behavior of q^{(j)} for t → ∞ is
known. According to our assumption, q^{(j)}, j = 1, 2, possess different generalized
characteristic exponents λ_j. We wish to form new solutions of (3.1.5), q̂^{(1)} and
q̂^{(2)}, which combine both these features, namely asymptotic behavior (q̂^{(j)} shall
possess the generalized characteristic exponent λ_j) and quasiperiodicity in the
argument φ₀ − τ. Take λ₁ > λ₂. In order to construct q̂^{(2)} we multiply (3.3.2) by
α(τ) and (3.3.3) by β(τ), forming

q̂^{(2)} = α(τ) q^{(1)}(t, φ₀ − τ) + β(τ) q^{(2)}(t, φ₀ − τ).  (3.3.5)

In order that q̂^{(2)} no longer contains the generalized characteristic exponent λ₁,
we require

[α(τ) l₁₁(τ) + β(τ) l₂₁(τ)] q^{(1)}(t + τ, φ₀) = 0,  (3.3.6)

which can obviously be fulfilled because the vector q^{(2)} drops out of this equa-
tion.
By means of (3.2.33) (Lemma 3.2.6) we may transform (3.3.6) into

α(τ) u^{(1)}(t + τ) D₁₁(τ) + β(τ) u^{(1)}(t + τ) D₂₁(τ) = 0,  (3.3.7)

where the D's are bounded.


The solution of (3.3.7) can be written in the form

α(τ) = −D₂₁(τ) [|D₁₁(τ)|² + |D₂₁(τ)|²]^{−1/2},  (3.3.8)

β(τ) = D₁₁(τ) [|D₁₁(τ)|² + |D₂₁(τ)|²]^{−1/2},  (3.3.9)

where due to the arbitrariness of the solution of the homogeneous equations we
may, but need not, include the denominators. Because of the linear independence
of the solutions the denominator does not vanish.
Using the same α and β we may construct q̂^{(1)} by means of

q̂^{(1)} = β*(τ) q^{(1)}(t, φ₀ − τ) − α*(τ) q^{(2)}(t, φ₀ − τ).  (3.3.10)

Namely, using (3.3.8, 9) and (3.3.2, 3) with (3.2.33), q̂^{(1)} reads explicitly

q̂^{(1)}(t, φ₀ − τ, τ) = (|D₁₁(τ)|² + |D₂₁(τ)|²)^{1/2} e^{−z₁(τ)} q^{(1)}(t + τ, φ₀)
  + terms containing q^{(2)}(t + τ, φ₀).  (3.3.11)

(Here again D₁₁(τ) and D₂₁(τ) cannot vanish simultaneously, because otherwise
the solution vectors q^{(j)}(t, φ₀ − τ) would become linearly dependent in contrast to
our assumption. Therefore the factor in front of q^{(1)}(t + τ, φ₀) does not vanish.)
Therefore the choice (3.3.10) secures that q̂^{(1)} possesses the characteristic exponent
λ₁. In this way we have constructed two new solutions q̂^{(1)}, q̂^{(2)} which are connected
with the generalized characteristic exponents λ₁ and λ₂, respectively.
When we use these new solution vectors instead of the old ones, the matrix C
(3.1.18) appears in triangular form, as can be easily demonstrated and as will be
shown explicitly in Sect. 3.7.
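The algebra of (3.3.8, 9) is worth checking directly: for any coefficients D₁₁, D₂₁ (not both zero), the pair α, β annihilates the q^{(1)}-component as demanded by (3.3.7), while the orthogonal combination (3.3.10) keeps it with nonvanishing weight. The sketch below uses random complex values for D₁₁, D₂₁ purely as an illustration.

```python
import numpy as np

# Check the 2x2 construction: alpha*D11 + beta*D21 = 0 (condition (3.3.7)),
# while conj(beta)*D11 - conj(alpha)*D21 = (|D11|^2 + |D21|^2)^(1/2),
# the nonvanishing prefactor appearing in (3.3.11).

rng = np.random.default_rng(1)
D11, D21 = rng.normal(size=2) + 1j * rng.normal(size=2)

norm = (abs(D11)**2 + abs(D21)**2) ** -0.5
alpha = -D21 * norm       # (3.3.8)
beta = D11 * norm         # (3.3.9)

kill = abs(alpha * D11 + beta * D21)                    # should vanish
keep = abs(np.conj(beta) * D11 - np.conj(alpha) * D21)  # should not vanish
print(kill, keep)
```

The first quantity vanishes identically, so the new solution q̂^{(2)} carries no λ₁-part, while the second equals (|D₁₁|² + |D₂₁|²)^{1/2}, confirming that q̂^{(1)} retains the exponent λ₁.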

3.4 Proof that the Elements of the Triangular Matrix C are
Quasiperiodic in τ (and Periodic in φ_j and C^k with Respect to φ):
Example of a 2 × 2 Matrix

Firstly we show that (perhaps up to a common factor) α(τ) and β(τ) can be
approximated asymptotically, i.e., to any desired degree of accuracy, by func-
tions which are quasiperiodic in τ. To this end we write the coefficient of α(τ) in
(3.3.6) or (3.3.7) by use of (3.3.2) in the form

l₁₁(τ) q^{(1)}(t + τ, φ₀) = q^{(1)}(t, φ₀ − τ) − l₁₂(τ) q^{(2)}(t + τ, φ₀).  (3.4.1)

Similarly, we proceed with the coefficient of β(τ) in (3.3.6). For the following it
will be sufficient to demonstrate how to deal with the coefficient of α(τ) as an
example. In order to ensure that the coefficients of α(τ) and β(τ) remain
bounded for all positive times t and τ, instead of (3.4.1) we form

𝒩(t, φ₀ − τ) l₁₁(τ) q^{(1)}(t + τ, φ₀) = 𝒩(t, φ₀ − τ) q^{(1)}(t, φ₀ − τ)
  − 𝒩(t, φ₀ − τ) l₁₂(τ) q^{(2)}(t + τ, φ₀),  (3.4.2)

where

(3.4.3)

We first present an argument, rigorously developed later, whereby we assume
that for t → ∞ the time dependence of z₂(t + τ) is given by z₂(t + τ) ≈ λ₂ · (t + τ)
and that of q^{(1)}(t, φ₀ − τ) by exp(λ₁t). Let us assume that λ₁ > λ₂. Then for t → ∞
the second term in (3.4.2) vanishes. That means that we can approximate the
coefficient of α(τ) in (3.3.7), and correspondingly that of β(τ) in (3.3.7), by expressions
which contain q^{(1)}(t, φ₀ − τ) and q^{(2)}(t, φ₀ − τ) alone. But because q^{(j)}(t, φ₀ − τ) is
dense in q^{(j)}(t, φ) we may express the coefficients of α(τ), β(τ) to any desired
degree of accuracy by functions which are quasiperiodic in τ and even C^k with
respect to φ. Therefore we may embed α(τ) and β(τ) in the set of functions α(φ)
and β(φ) which are quasiperiodic (φ = φ₀ − τ) in τ, and T_j-periodic and C^k in φ.
Now let us cast that idea in a more precise form. Using (3.1.11) and (3.2.33) we write
(3.3.2 and 3) in the form

q^{(j)}(t, φ₀ − τ) = exp[z₁(t + τ) − z₁(τ)] u^{(1)}(t + τ) D_{j1}(τ)
  + exp[z₂(t + τ) − z₂(τ)] u^{(2)}(t + τ) D_{j2}(τ),  j = 1, 2.  (3.4.4)

We first choose τ within

τ₁ ≤ τ ≤ τ₂.  (3.4.5)

With the help of 𝒩 (3.4.3) we form

𝒩(t, φ₀ − τ) q^{(j)}

and the corresponding expressions on the rhs of (3.4.4).
When both sides of the numerator and denominator of this new equation are
divided by

exp[z₁(t + τ) − z₁(τ)],  (3.4.6)


we see that the coefficients of D_{j2}(τ) in (3.4.4) are multiplied by

exp[z₂(t + τ) − z₂(τ) − z₁(t + τ) + z₁(τ)].  (3.4.7)

We now recall the definition of the generalized characteristic exponents,
according to which

lim sup_{t→∞} [(1/t) z_j(t)] = λ_j.  (3.4.8)

Accordingly, we may find a sequence of times

t₁ < t₂ < …,  t_n → ∞,  (3.4.9)

and for each τ, τ₁ < τ < τ₂, a corresponding sequence so that

(1/t_n)[z₁(t_n + τ) − z₁(τ)] > λ₁ − δ′,  δ′ > 0.  (3.4.10)

For t_n sufficiently large, δ′ can be chosen as small as we wish. On the other hand,
because of

(3.4.11)

we may find some δ″ > 0 so that for the same sequence t_n and τ₁ < τ < τ₂

(1/t_n)[z₂(t_n + τ) − z₂(τ)] < λ₂ + δ″,  (3.4.12)

and δ″ may be chosen arbitrarily small as t_n is sufficiently large. Taking (3.4.10
and 12) together we see that (3.4.7) goes to zero for t_n → ∞.
Consequently, we may replace everywhere in (3.3.7) for τ₁ < τ < τ₂

u^{(1)}(t + τ) D_{j1}(τ)  (3.4.13)

by

lim sup_{t→∞} {𝒩(t, φ₀ − τ) q^{(j)}(t, φ₀ − τ)},  (3.4.14)

which is quasiperiodic in τ. This allows us to construct α and β as quasiperiodic
functions by making the corresponding replacement of D_{j1} in (3.3.8, 9), where we
may eventually let τ₂ → ∞.
Provided assumption (5) of Theorem 3.1.1 is fulfilled, we may again replace
(3.4.13) by (3.4.14), in addition embedding 𝒩(t, φ₀ − τ) q^{(j)}(t, φ₀ − τ) within


𝒩(t, φ) q^{(j)}(t, φ). Because this expression is periodic and C^k in φ, so are
α(τ) → α(φ) and β(τ) → β(φ).
Let us summarize what we have achieved so far. Under assumptions (1-4) of
Theorem 3.1.1, we have constructed a complete set of solutions to (3.1.5),

(3.4.15)

with the following properties: q̂^{(j)} is (at least) C¹ with respect to t and τ, and it is
quasiperiodic in τ.
Under assumptions (1-5) of Theorem 3.1.1 we have constructed a complete
set of solutions to (3.1.5),

(3.4.16)

with the following properties: q̂^{(j)} is (at least) C¹ with respect to t, and T_j-
periodic and C^k with respect to φ. The set q̂^{(j)}(t, φ₀ − τ) lies densely in the set
q̂^{(j)}(t, φ). In both (3.4.15, 16) the generalized characteristic exponents (for
t → +∞) of q̂^{(j)} are given by λ_j.
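The generalized characteristic exponents invoked throughout, cf. (3.4.8), can be estimated from a trajectory as (1/t) z_j(t) at large t. In the sketch below we fabricate z_j(t) = λ_j t + w_j(t) with bounded quasiperiodic w_j, i.e., the special case (3.1.17) of assumption (5); the exponents and phases are illustrative assumptions, not data from the text.

```python
import numpy as np

# Recover lambda_j from z_j(t) = lambda_j * t + w_j(t) with bounded w_j:
# the bounded part is suppressed like 1/t, so (1/t) z_j(t) -> lambda_j.

lam = np.array([0.4, -0.3])
t = np.linspace(1.0, 2000.0, 5000)

# bounded, quasiperiodic remainders w_j(t)
w = np.vstack([0.5 * np.sin(t) + 0.2 * np.cos(np.sqrt(2) * t),
               0.3 * np.cos(t)])
z = lam[:, None] * t + w

lam_est = z[:, -1] / t[-1]     # (1/t) z_j(t) at the largest time
print(lam_est)                 # ~ [0.4, -0.3]
```

Because w_j is bounded, any sequence t_n → ∞ yields the same limit, which is exactly why (3.1.17) guarantees conditions (3.1.14, 15) for every sequence.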

3.5 Construction of the Triangular Matrix C and Proof that
Its Elements are Quasiperiodic in τ (and Periodic in φ_j and C^k with
Respect to φ): The Case of an m × m Matrix, all λ's Different

So far we have considered the special case in which M is a 2 × 2 matrix. Now let
us turn to the general case of an m × m matrix, again assuming that the charac-
teristic exponents are all different from each other,

λ₁ > λ₂ > … > λ_m.  (3.5.1)

We wish to construct linear combinations of q^{(j)}(t, φ₀ − τ) in the form

q̂^{(l)}(t, φ₀ − τ, τ) = Σ_j a_j^{(l)}(τ) q^{(j)}(t, φ₀ − τ),  l = 1, …, m,  (3.5.2)

such that only one of these linear combinations has the generalized characteristic
exponent λ₁ but that all other m − 1 linear combinations have at maximum a gen-
eralized characteristic exponent λ₂.
Furthermore, we want to show that a_j^{(l)} can be chosen as a quasiperiodic func-
tion in τ or, more specifically, that we can embed a_j^{(l)}(τ) in functions a_j^{(l)}(φ)
which are T_j-periodic and C^k with respect to φ. For simplicity, we shall drop the
index l in the following. We first introduce a normalization factor by

(3.5.3)


and form

(3.5.4)

According to (3.1.11), (3.2.32, 33) we may replace the rhs of (3.5.4) by

(3.5.5)

Let us use a sequence t_n → ∞ defined by (3.4.10). We shall denote such a sequence
by Lim Sup. We readily obtain

Lim Sup_{t→∞} [X^{(j)}(t, φ₀ − τ)] = Lim Sup_{t→∞} { D_{j1}(τ) u^{(1)}(t + τ) [Σ_{j=1}^{m} |D_{j1}(τ)|²]^{−1/2} },  (3.5.6)

which may also be written in the form

(3.5.7)

When the coefficients a_j(τ) of (3.5.2) are subjected to the equation

Σ_j a_j(τ) ξ^{(j)}(τ) = 0,  (3.5.8)

where according to (3.5.7) all vectors ξ^{(j)} are parallel, we can determine m − 1
linearly independent solution vectors

a^{(l)} = (a₁^{(l)}, …, a_m^{(l)}),  l = 2, …, m,  (3.5.9)

which fulfill (3.5.8).


The use of these a's in (3.5.2) guarantees thatq (I) in (3.5.2) possesses a gener-
alized characteristic exponent A2 or smaller. Choosing L aj~U) =1= 0 for any
. j
nonvanishing vector component of C;U), we can construct a ij (I) with the genera-
lized characteristic exponent Aj.
We now want to show that we may choose aj(r) as quasiperiodic in r, or that
we may even embed a/r) in a;(rp). We note that due to its construction
[cf. (3.5.6)], C;U) is quasiperiodic in r and can be embedded in a set of functions
~U)(rp) which are 1j-periodic and C k with respect to rp. Under assumption (5) of
Theorem 3.1.1, Lim Sup converges uniformly in r towards the rhs of (3.5.7).
This allows one to exchange the performance of "Lim Sup" and the dense
embedding of c;;C r) in C;j( rp). Using linear algebra we may construct a (I) in such a
way that it possesses the same differentiability and periodicity properties as the
coefficients C; U).


Since a detailed presentation of this construction would diverge too far from our
main line, we merely indicate how this construction can be visualized. Let us con-
sider one vector component of the vector equation (3.5.8) [in which all vectors
ξ^{(j)} are parallel, cf. (3.5.7)]. Let us form a vector ξ = (ξ_k^{(1)}, ξ_k^{(2)}, …, ξ_k^{(m)}), where k
is chosen such that ξ ≠ 0, which is always possible. Then the corresponding vector
component of (3.5.8) can be written as

(ξ, a) = 0,  (3.5.10)

i.e., as the scalar product between ξ and the vector a (= a^{(l)}) as introduced by
(3.5.9). In other words, we seek the m − 1 vectors a^{(l)}, l = 2, …, m, which span a
vector space orthogonal to the vector ξ. Now, when we smoothly change the
direction of ξ, we may smoothly change the vectors of the space orthogonal to ξ.
(More generally, "smoothly" can be replaced by C^k.) A simple example of an
algebraic construction will be given below.
For pedagogical reasons it must be added that it seems tempting to divide
(3.5.8) by the common "factor" Lim Sup {u^{(1)}(t_n + τ)}. Then the difficulty occurs
that it is not proven that D_{j1}(τ) and this factor have the required differentiability
properties with respect to τ (or φ) individually. Therefore we must not divide
(3.5.8), which contains (3.5.7) via ξ_j, by Lim Sup {u^{(1)}(t_n + τ)}. If we want to get
rid of u^{(1)}, we may form the quantities

Lim Sup_{t→∞} {X^{(j)}(t, φ₀ − τ) X^{(k)*}(t, φ₀ − τ)} = η_{jk}(τ),  (3.5.11)

where η_{jk}(τ) is C^k with respect to τ, provided M was C^k with respect to φ. Since
φ₀ − τ lies dense on the torus we can embed (3.5.11) in functions which are C^k
with respect to φ and T_j-periodic. Then (3.5.11) can be evaluated in analogy to
(3.5.6), which yields

(3.5.12)

Now u^{(1)} drops out. In order to construct new linear combinations of (3.5.2)
which have a characteristic exponent λ₂ or smaller we require

Σ_{j=1}^{m} a_j(τ) η_{jk}(τ) = 0  for  k = 1, …, m.  (3.5.13)

We introduce the vectors

η_k = (η_{1k}, η_{2k}, …, η_{mk})  for  k = 1, …, m  (3.5.14)


and readily deduce from (3.5.11) that in the limit t → ∞ all these vectors are
parallel.
Furthermore we introduce (3.5.9) so that (3.5.13) can be written as

(a, η_k) = 0  for  k = 1, …, m.  (3.5.15)

Since all η_k are parallel, the requirement (3.5.15) can be fulfilled by m − 1 dif-
ferent vectors a. In this way we can construct m − 1 new solutions (3.5.2) or,
more generally, we can embed

(3.5.16)

Though for some or all τ some of the vectors η_k may vanish, on account of our
previous assumptions at least one η_k must be unequal to 0.
As mentioned before, the a's of (3.5.15) can always be chosen in such a way
that they have the same differentiability properties with respect to parameters as
η_k. This statement is illustrated by a simple example in the case of an even m:

a_j = (−1)^j Π_{l≠j} η_{lk}.  (3.5.17)

If this is inserted into (3.5.15) we readily convince ourselves that (3.5.15) is ful-
filled and that a has the same differentiability properties as η_k. If one of the η_k
vanishes one can go over to another η_k by multiplying a_j with a well-defined
factor.
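The mechanism behind (3.5.17) is that Σ_j a_j η_{jk} = (Π_l η_{lk}) Σ_j (−1)^j, which vanishes for even m. This can be checked in a few lines; the η-values below are random illustrative numbers, not quantities computed from the text.

```python
import numpy as np

# Check the explicit choice (3.5.17) for even m:
# a_j = (-1)^j * prod_{l != j} eta_{lk} fulfills sum_j a_j eta_{jk} = 0.

m = 4                                     # even m, as assumed in the text
rng = np.random.default_rng(2)
eta_k = rng.normal(size=m)                # components eta_{1k}, ..., eta_{mk}

# j runs 1..m as in the text (1-based sign convention)
a = np.array([(-1)**j * np.prod(np.delete(eta_k, j - 1))
              for j in range(1, m + 1)])
residual = np.dot(a, eta_k)               # requirement (3.5.15)
print(abs(residual))                      # vanishes up to rounding
```

Each a_j is a product of the η's, so a inherits whatever differentiability the η_k possess, which is exactly the point of the construction.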
Summary of Results. We have shown how to construct new solutions q̂ of the
original equation (3.1.5) which have the following properties. One of these
solutions has the generalized characteristic exponent λ₁; all other m − 1 solutions
have the characteristic exponent λ₂ or still smaller ones. We can now perform
precisely the same procedure with the group of the remaining m − 1 solutions and
reduce this group to one which contains only one solution with generalized
characteristic exponent λ₂. The continuation of this procedure is then obvious.
Clearly, the resulting coefficient matrix C is triangular, as is explicitly shown in
Sect. 3.7.
Since the original q^{(j)} are C^k and T_j-periodic with respect to φ and the coef-
ficients a can be constructed in such a way that they retain this property also,
the newly derived solutions are C^k and T_j-periodic with respect to φ.

3.6 Approximation Methods. Smoothing

We make two remarks on possible approximative methods.

3.6.1 A Variational Method


Instead of (3.5.15) we may also require


Σ_{k=1}^{m} |a·η_k|² = 0.  (3.6.1)

If in an approximation scheme Lim Sup is not performed entirely but only up to
t = t_n, then

(3.6.2)

and we cannot fulfill (3.6.1) exactly because the vectors defined by (3.6.2) are not
completely parallel. But now we can require

Σ_{k=1}^{m} |a·η_k|² = minimum!  (3.6.3)

The variational principle defined by (3.6.3) leads to a set of eigenvalue equations

Σ_{k=1}^{m} η*_{jk} (a·η_k) = Λ a_j,  j = 1, …, m,  (3.6.4)

which are linear in Λ. Multiplying (3.6.4) by a*_j and summing over j we obtain

Σ_{k=1}^{m} (a*·η*_k)(a·η_k) = Λ |a|².  (3.6.5)

We thus see that the eigenvalues Λ will be a measure of how far we are still
away from the exact equation (3.6.1).
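In matrix form, (3.6.4) reads H a = Λ a with H_{jl} = Σ_k η*_{jk} η_{lk}, a Hermitian, positive-semidefinite matrix, so the minimizing a is the eigenvector of the smallest eigenvalue, and Λ_min equals the residual Σ_k |a·η_k|². The sketch below builds nearly parallel η_k (a common direction plus small noise, purely as an assumed illustration) and checks this.

```python
import numpy as np

# Variational scheme (3.6.3-5): minimize sum_k |a.eta_k|^2 over unit a.
# This is the eigenvalue problem H a = Lambda a, H_jl = sum_k conj(eta_jk) eta_lk;
# the smallest Lambda measures the deviation from exact parallelism (3.6.1).

m = 4
rng = np.random.default_rng(3)
v = rng.normal(size=m)                     # common direction of the eta_k
eta = np.empty((m, m))                     # eta[j, k] = eta_{jk}
for k in range(m):
    eta[:, k] = (k + 1.0) * v + 1e-3 * rng.normal(size=m)

H = eta.conj() @ eta.T                     # H_jl = sum_k conj(eta_jk) eta_lk
Lam, A = np.linalg.eigh(H)                 # eigenvalues in ascending order
a = A[:, 0]                                # minimizer of the quadratic form

print(Lam[0])                                          # small residual
print(sum(abs(a @ eta[:, k])**2 for k in range(m)))    # equals Lambda_min
```

As the η_k approach exact parallelism (t_n → ∞), Λ_min → 0 and the minimizer approaches an exact solution of (3.6.1).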

3.6.2 Smoothing

Since some ξ_k or, equivalently, some η_k, k fixed, in (3.5.15) can vanish for some
τ, we have to switch to another equation stemming from another η_j (or ξ_j). If we
have not performed the limit t_n → ∞ in Lim Sup, these two η's may not entirely
coincide. This in turn would mean that at those switching points a becomes dis-
continuous and nondifferentiable. We want to show that we can easily smooth
the switching. To this end we introduce the new vector

η̂_k = (1 − β) η_k + β η_j,  (3.6.6)

where β is given qualitatively in Fig. 3.6.1, and replace (3.5.15) by

(a, η̂_k) = 0.  (3.6.7)

Since due to the linear independence of the solutions not all η_j can vanish for the
same τ, we may choose such an index j that (3.6.6) does not vanish where η_k
vanishes. We define a variable x by


x = (τ − τ₀)/(τ₁ − τ₀),  (3.6.8)

which secures

x = 0  for  τ = τ₀  (3.6.9)

and

x = 1  for  τ = τ₁.  (3.6.10)

We choose τ₀ and τ₁ such that η_k is replaced by η_j before η_k vanishes. On the
other hand, we choose τ₀ and τ₁ such that |η_k| is smaller than |η_j|, so that (3.6.6)
cannot vanish due to compensation of the η's. We define β by

β = 0,  −∞ < x ≤ 0,
β = h(x),  0 ≤ x ≤ 1,  (3.6.11)
β = 1,  1 ≤ x < ∞.

In order that the "spline function" (3.6.6) is sufficiently often continuously dif-
ferentiable with respect to τ, we require in addition for a given n

β^{(m)} = 0,  1 ≤ m ≤ n − 1,  for x = 0 and x = 1.  (3.6.12)

Here and in the following the superscript (m) denotes the m'th derivative.
Since the points τ at which we have to apply this smoothing procedure have
the same properties of quasiperiodicity as η_k, η_j, the newly constructed β's will
also have this property. When we let t → ∞, our above procedure guarantees that
during the whole approach β remains C^n and quasiperiodic.
The following short discussion shows how to construct β in (3.6.11, 12)
explicitly. This outline is not, however, important for what follows. In order to
fulfill (3.6.12) we put

h(x) = xⁿ f(x),  (3.6.13)

where f is a polynomial of order n,

f(x) = Σ_{ν=0}^{n} a_ν x^ν,  (3.6.14)

where we require

a₀ = 1 .   (3.6.15)

To determine f we introduce the variable

x = 1 + ξ   (3.6.16)

and put

f(1 + ξ) = g(ξ) ,   (3.6.17)

where g is again a polynomial, which we may write in the form

g(ξ) = Σ_{l=0}^{n} b_l ξ^l .   (3.6.18)

We now choose the coefficients a or b, respectively, in such a way that

h(1) = 1   (3.6.19)

and

h^(m)(1) = 0 ,   m = 1, …, n − 1   (3.6.20)

are fulfilled, where the superscript m in (3.6.20) means the derivative with respect
to x.
From (3.6.19) we immediately find

g(0) = 1   (3.6.21)

and thus

b₀ = 1 .   (3.6.22)

From (3.6.20) with m = 1 we readily obtain

n g(0) + g′(0) = 0   (3.6.23)

and

b₁ = −n .   (3.6.24)

One may determine all other coefficients b_l by use of (3.6.20) consecutively. Thus
one may indeed find a polynomial with the required properties. We still have to
convince ourselves that the thus constructed solution (3.6.13) has everywhere a
positive slope in the interval [0,1], i.e., that h(x) has no wiggles. Since h is a
polynomial of order 2n, it can have at most 2n − 1 different extrema, which are
just the zeros of h′. According to our construction, n − 1 extrema coincide at
x = 0 and another n − 1 extrema coincide at x = 1. Therefore at most one additional
extremum remains. One readily shows that

h′(0 + ε) > 0   (3.6.25)

and

523
122 3. Linear Ordinary Differential Equations with Quasiperiodic Coefficients

h′(1 − ε) > 0   (3.6.26)

hold for small positive ε. This implies that a wiggled curve must have at least two
more extrema, as shown in Fig. 3.6.2, in contrast to our result that there can be
at most one additional extremum. Therefore the case of Fig. 3.6.2 cannot apply,
and (3.6.13) can represent only a curve like that of Fig. 3.6.1. This
completes the excursion.
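The construction just outlined is easy to carry through numerically. The sketch below (our own, assuming n = 3, so β is C²) solves the linear conditions h(1) = 1, h′(1) = h″(1) = 0 for the coefficients of f in (3.6.13–15) and checks that h has a nonnegative slope on [0, 1], i.e., no wiggles.

```python
import numpy as np

# Smoothing polynomial for n = 3 (a C^2 step):
# h(x) = x**3 * (1 + a1*x + a2*x**2 + a3*x**3), cf. (3.6.13-15).
# The conditions h(1) = 1, h'(1) = 0, h''(1) = 0 are linear in
# (a1, a2, a3); each row lists their coefficients in one condition.
A = np.array([[1.0, 1.0, 1.0],      # h(1)   = 1 + a1 + a2 + a3
              [4.0, 5.0, 6.0],      # h'(1)  = 3 + 4*a1 + 5*a2 + 6*a3
              [12.0, 20.0, 30.0]])  # h''(1) = 6 + 12*a1 + 20*a2 + 30*a3
b = np.array([0.0, -3.0, -6.0])
a1, a2, a3 = np.linalg.solve(A, b)  # gives a1 = 12, a2 = -21, a3 = 9

def h(x):
    return x**3 * (1 + a1*x + a2*x**2 + a3*x**3)

xs = np.linspace(0.0, 1.0, 101)
assert abs(h(0.0)) < 1e-12 and abs(h(1.0) - 1.0) < 1e-12
assert np.all(np.diff(h(xs)) >= 0.0)   # positive slope: no wiggles on [0,1]
```

The monotonicity check confirms numerically the extremum-counting argument of the text for this small n.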

Fig. 3.6.1. The smoothing function β(x)
Fig. 3.6.2. This figure shows that β has two extrema, in contrast to our assumption made in the text

So far we have discussed how we can bridge the jumps when η_k vanishes as a
function of τ. Now we want to show that a similar procedure holds when we treat
η_k as a function of φ via embedding. Consider a φ ↔ φ₀ − τ for which D_{jk}(τ) ≠ 0,
k fixed. Because q^(j)(t, φ) is C^k, k ≥ 0, a whole surrounding S(φ) of φ exists
where each point in S(φ) is approximated by sequences φ − T_n (mod T),
T = (T₁, …, T_n), T_j = 2π/ω_j, and for which D_{jk}(T_n) ≠ 0, k fixed. In this way, the
whole torus can be covered by overlapping patches of nonvanishing D_{jk}(τ). On
each of these patches at least one nonvanishing vector η_k exists. Now the only
thing to do is to smooth the transition when going from one patch to a neighbor-
ing one. But the size of these patches, being defined by D_{jk}(τ) ≠ 0, is independent
of t and thus fixed, whereas the misalignment of the vectors η_k tends to zero for
t → ∞. Consequently the smoothing functions remain sufficiently often differen-
tiable for t → ∞.

3.7 The Triangular Matrix C and Its Reduction

Since Q(t, φ) or Q̃(t, φ) (3.5.16) are solutions of (3.1.5), we may again invoke the
general relation (3.1.10). Writing down this relation for the solution vectors
explicitly, we obtain (G = C^T)

q̃^(j)(t + τ, φ + τ) = Σ_k G_{jk}(τ, φ) q̃^(k)(t, φ) ,   j = 1, …, m ,   (3.7.1)

where the generalized characteristic exponents λ_j which belong to q̃^(j) are ordered
according to
(3.7.2)


Because the solution matrix is nonsingular, we may derive from the properties of
Q(t, φ) in Sect. 3.5 the following properties of the coefficients G_{jk}.
If assumptions (1–5) of Theorem 3.1.1 hold, the coefficients G_{jk}(τ, φ) are
differentiable with respect to τ and are T_j-periodic and C^k with respect to φ. If
the weaker assumptions (1–4) of Theorem 3.1.1 hold, the coefficients are of the
form G_{jk}(τ, φ₀ − τ); they are (at least) C¹ with respect to τ and C⁰ with respect to
the argument φ_τ (= φ₀ − τ), and quasiperiodic in τ. Now the asymptotic behavior
of the lhs of (3.7.1) must be the same as that of the rhs of (3.7.1) for t → ∞. This
implies that

(3.7.3)

The reasoning is similar to that performed above and is based on the study of the
asymptotic behavior of q̃^(k) for t → ∞.
In the next step we want to show that it is possible to transform the q's in such
a way that eventually the matrix G_{jk} can be brought into diagonal form. To this
end we need to know the asymptotic behavior of q for t → −∞. In order to
obtain such information we solve the equations (3.7.1) step by step, starting with
the equation for j = m,

q̃^(m)(t + τ, φ + τ) = G_{mm}(τ, φ) q̃^(m)(t, φ) .   (3.7.4)

We assume that τ becomes infinitesimal. Because G_{mm} is differentiable with
respect to τ, and for τ = 0 the q's on both sides of (3.7.4) coincide, we may write

G_{mm}(τ, φ) = 1 + τ a_m(φ) ,   τ → 0 .   (3.7.5)

Replacing t by t + τ and φ by φ + τ we obtain

(3.7.6)

instead of (3.7.4). On the rhs of (3.7.6) we may replace q̃ by the rhs of (3.7.4),
which yields

(3.7.7)

Repeating this procedure N times (with Nτ = t₀) we readily find

(3.7.8)

Using (3.7.5) and the fact that τ is infinitesimal, we may replace the product over l
by the exponential function, which yields

(3.7.9)


Eventually, taking the limit τ → 0, we may replace the sum in (3.7.9) by an integral

(3.7.10)
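The product-to-integral step just described is easy to verify numerically. In this sketch, a(φ) is a made-up quasiperiodic coefficient standing in for a_m, with frequencies 1 and √2; for small τ the N-fold product of factors 1 + τ a(φ + lτ) approaches the exponential of the integral of a, as claimed.

```python
import numpy as np

# Numerical check of the step from the product (3.7.8) to the
# exponential of an integral (3.7.10), with a toy quasiperiodic
# coefficient a(phi); tau = t0/N is made small.
a = lambda phi: 0.3*np.cos(phi) + 0.2*np.cos(np.sqrt(2.0)*phi)
phi0, t0 = 0.7, 2.0

def product(N):
    tau = t0 / N
    return np.prod(1.0 + tau * a(phi0 + tau * np.arange(N)))

# closed form of the integral of a from phi0 to phi0 + t0
integral = (0.3*(np.sin(phi0 + t0) - np.sin(phi0))
            + 0.2/np.sqrt(2.0)*(np.sin(np.sqrt(2.0)*(phi0 + t0))
                                - np.sin(np.sqrt(2.0)*phi0)))
assert abs(product(100000) - np.exp(integral)) < 1e-4
```

The discrepancy shrinks like O(τ), consistent with the replacement of the Riemann sum by the integral in the limit τ → 0.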

We now make the replacement

t = 0 ,   t₀ → t ,   φ → φ − t ,   (3.7.11)

which, after a change of variables of the integral, brings us from (3.7.10) to

(3.7.12)

This equation can be considered as a functional equation for q̃^(m). To solve this
equation we make the hypothesis

(3.7.13)

Inserting this into (3.7.12) we readily obtain the relation

(3.7.14)

Since (3.7.13) is C^k with respect to φ and T_j-periodic in φ, we obtain the fol-
lowing result:

w^(m)(t, φ) is a quasiperiodic function of t .   (3.7.15)

Thus the explicit solution of (3.7.12) reads

(3.7.16)

We note that in it a_m is a quasiperiodic function of u.


Now let us turn to the solution of the next equation, for q̃^(m−1). To elucidate
the general idea we consider the example m = 2. Then (3.7.1) acquires the form

q̃^(1)(t + τ, φ + τ) = G₁₁(τ, φ) q̃^(1)(t, φ) + G₁₂(τ, φ) q̃^(2)(t, φ) ,   (3.7.17)

q̃^(2)(t + τ, φ + τ) = G₂₂(τ, φ) q̃^(2)(t, φ) .   (3.7.18)

We want to show that we can construct a new solution of our original equation
(3.1.5) with the solution vector q̃^(1) so that Eqs. (3.7.17 and 18) acquire a
diagonal form. We note that any linear combination of q̃^(1) and q̃^(2) is again a
solution of (3.1.5), provided the coefficients are time independent. We therefore
make the hypothesis

q̂^(1)(t, φ) = q̃^(1)(t, φ) + h(φ) q̃^(2)(t, φ) .   (3.7.19)

Inserting it into (3.7.17) we immediately obtain

(3.7.20)

Since the underlined expressions are those which we want to keep finally, we
require that the rest vanishes. Making use of (3.7.18) we are then led to

(3.7.21)

We want to show that this equation can indeed be fulfilled, which is by no means
obvious, because the G_{jk} are functions of both τ and φ, whereas h is assumed to be a
function of φ alone. In the following we shall show that the G's have such a
structure that h can indeed be chosen as a function of φ alone.
With this task in mind, we turn to a corresponding transformation of Eqs.
(3.7.1), starting with the second last,

q̃^(m−1)(t + τ, φ + τ) = G_{m−1,m−1}(τ, φ) q̃^(m−1)(t, φ)
                                + G_{m−1,m}(τ, φ) q̃^(m)(t, φ) .   (3.7.22)

This equation is considered in the limit τ → 0. For τ = 0, G_{m−1,m−1} = 1. Because
G_{m−1,m−1} is differentiable with respect to τ, for small τ

G_{m−1,m−1}(τ, φ) = 1 + τ a_{m−1}(φ) .   (3.7.23)

In precisely the same way as in the case of the last equation (j = m) of (3.7.1), we
readily obtain for finite τ

G_{m−1,m−1}(τ, φ) = exp[∫₀^τ a_{m−1}(φ + σ) dσ] .   (3.7.24)

To solve the inhomogeneous equation (3.7.22) we make a hypothesis familiar
from ordinary differential equations, namely

q̃^(m−1)(t, φ) = exp[∫₀^t dσ a_{m−1}(φ − σ)] g^(m−1)(t, φ) ,   (3.7.25)


where g^(m−1) is still unknown. Inserting (3.7.25) into (3.7.22) we immediately
obtain

exp[∫₀^{t+τ} dσ a_{m−1}(φ + τ − σ)] g^(m−1)(t + τ, φ + τ)
   = exp[∫₀^{τ} dσ a_{m−1}(φ + σ)] exp[∫₀^{t} dσ a_{m−1}(φ − σ)]
      · g^(m−1)(t, φ) + G_{m−1,m}(τ, φ) q̃^(m)(t, φ) .   (3.7.26)

After dividing (3.7.26) by the exponential function on its lhs we
obtain

g^(m−1)(t + τ, φ + τ) = g^(m−1)(t, φ) + f(t, τ, φ) ,   (3.7.27)


where we have used the abbreviation

f(t, τ, φ) = exp[−∫₀^{t+τ} dσ a_{m−1}(φ + τ − σ)] G_{m−1,m}(τ, φ) q̃^(m)(t, φ) .   (3.7.28)

In order to determine the form of G_{m−1,m} for finite τ, we first start with infini-
tesimal τ's, because G is differentiable with respect to τ and because G_{m−1,m} must
vanish for τ = 0. According to (3.7.22) we may write

τ → 0 :   G_{m−1,m} = τ b(φ) .   (3.7.29)

To save some indices we write for the time being

y = g^(m−1)   (3.7.30)

and consider (3.7.27) for the sequence τ, 2τ, … . We write this sequence as
follows:

y(t + τ, φ + τ) − y(t, φ) = f(t, τ, φ) ,   (3.7.31)

y(t + 2τ, φ + 2τ) − y(t + τ, φ + τ) = f(t + τ, τ, φ + τ) ,   (3.7.32)
⋮
y(t + Nτ, φ + Nτ) − y(t + (N − 1)τ, φ + (N − 1)τ)
   = f(t + (N − 1)τ, τ, φ + (N − 1)τ) .   (3.7.33)

Summing up all these equations (with Nτ = t₀) we readily obtain

y(t + Nτ, φ + Nτ) = y(t, φ) + Σ_{l=0}^{N−1} f(t + lτ, τ, φ + lτ) ,   (3.7.34)


or, taking the limit of infinitesimal τ's,

y(t + t₀, φ + t₀) = y(t, φ) + ∫₀^{t₀} ds f(t + s, τ, φ + s) .   (3.7.35)
Using the definition of f (3.7.28) and reintroducing g^(m−1) according to (3.7.30),
we obtain

g^(m−1)(t + t₀, φ + t₀) = g^(m−1)(t, φ) + K(t, t₀, φ) ,   (3.7.36)

where we have used the abbreviation

K(t, t₀, φ) = ∫₀^{t₀} ds b(φ + s) exp[−∫₀^{t+s} dσ a_{m−1}(φ + s − σ)
            + ∫_{−s}^{0} a_m(φ − σ) dσ] q̃^(m)(t, φ) .   (3.7.37)

Choosing the initial conditions appropriately, we can write the solution of
(3.7.36) in the form

g^(m−1)(t, φ) = ∫₀^t ds b(φ + s) exp[−∫_{−s}^{t} dσ a_{m−1}(φ − σ)
             + ∫_{−s}^{0} a_m(φ − σ) dσ] q̃^(m)(t, φ) .   (3.7.38)

Inserting (3.7.38) into (3.7.36) and making some simple shifts of the integration
variables s and σ on the resulting lhs of (3.7.36), one can readily convince oneself
that (3.7.38) indeed fulfills (3.7.36). Then (3.7.38) can be written in the more
practical form

g^(m−1)(t, φ) = exp[−∫₀^t a_{m−1}(φ − σ) dσ] J q̃^(m)(t, φ)   (3.7.39)

with

J = ∫₀^t ds b(φ + s) exp[−∫₀^{s} [a_{m−1}(φ + σ) − a_m(φ + σ)] dσ] .   (3.7.40)

To discuss the properties of the integral J we make the decomposition

(3.7.41)

It follows from (3.7.23) and our results on G(τ, φ) that a_{m−1}(φ) is T_j-periodic
and C^k with respect to φ. If in addition we invoke assumption (6) of Theorem
3.1.1, the differentiability C^k with respect to φ and the KAM condition secure
that the integrals over the a_j in (3.7.38) remain finite for σ → ±∞ and, in particular,

that they are quasiperiodic in s. Since b is also a quasiperiodic function which is
bounded and

(3.7.42)

we may readily convince ourselves that the integral J converges.


We now recall our original hypothesis (3.7.25) for the solution of the inhomo-
geneous equation (3.7.22). To obtain (3.7.25) we have to multiply (3.7.38) by

(3.7.43)

which gives us

(3.7.44)

We immediately see that (3.7.44) has precisely the form required in (3.7.19),
where we may identify the second term on the rhs with the rhs of (3.7.44) and J
with h. Thus we can indeed choose h(φ) or J(φ) as a function of φ alone, which
makes it possible to cast (3.7.22) into diagonal form.
We now want to study the explicit form of

G_{m−1,m} q̃^(m) .   (3.7.45)

A comparison between (3.7.22) and (3.7.36) with the help of (3.7.26) indicates
that we have to multiply (3.7.37) by

(3.7.46)

After a few elementary transformations of the integrals we obtain

G_{m−1,m}(τ, φ)
   = exp[∫₀^τ dσ a_{m−1}(φ + σ)] ∫₀^τ ds b(φ + s) exp[∫₀^{s} dσ [a_m(φ + σ) − a_{m−1}(φ + σ)]] .   (3.7.47)

We recognize that G_{m−1,m} is indeed a function of τ and φ (but not a function of t),
as it has to be.
Our procedure can be generalized to the set of all equations (3.7.1) in an
obvious way, namely, by making the hypothesis

q̂^(j)(t, φ) = q̃^(j)(t, φ) + Σ_{k>j} h_k(φ) q̃^(k)(t, φ) ;   (3.7.48)


we may determine the coefficients h_k(φ) in such a way that we obtain a diagonal
matrix G for the q̂, i.e., the solution vectors of (3.7.1) can be chosen so that

(3.7.49)

holds. Equation (3.7.49) can be considered as a functional equation for q̂^(j). We
have solved this equation above [cf. (3.7.16)]. Its solution reads

(3.7.50)

Provided assumptions (1–6) of Theorem 3.1.1 are fulfilled, (3.7.50) can be
written

(3.7.51)

where v^(j) is T_j-periodic and C^k with respect to φ and therefore quasiperiodic in t.
Equation (3.7.51) is precisely of the form asserted in (3.1.20).
(Clearly, the real part of the exponent in (3.7.51) is the generalized characteristic
exponent λ_j.)

3.8 The General Case: Some of the Generalized Characteristic Exponents Coincide

In this section we shall present two theorems. The first one deals with a reduction
of the matrix C (3.1.18) if some of the generalized characteristic exponents
coincide. The second theorem shows that C can even be diagonalized if all gen-
eralized characteristic exponents coincide and an additional assumption on the
growth rate of |q^(j)| is made. The first of these theorems is formulated as follows.

Theorem 3.8.1. Under assumptions (1, 2, 4) of Theorem 3.1.1 and by a suitable
choice of the solution vectors q^(j)(t, φ), the matrix C (3.1.18) can be brought to
the triangular form [cf. (3.7.1)]

(3.8.1)

where the boxes correspond to generalized characteristic exponents λ_j which are
different. Each box has at maximum the size m_j, where m_j is the degree of λ_j.
The coefficients of the matrix C(τ, φ) are differentiable with respect to τ, T_j-
periodic in φ and (at least) C⁰ with respect to φ. If in addition assumption (5) of
Theorem 3.1.1 is valid (for λ_j ≠ λ_k), the coefficients of C(τ, φ) are (at least) C¹
with respect to τ and T_j-periodic and C^k with respect to φ.
We now give a sketch of the construction of C.


We assume that the generalized characteristic exponents λ_j are labeled in such
a way that

(3.8.2)

If

(3.8.3)

we may apply our previous reduction scheme (Sects. 3.1–7) at least up to λ_{k−1}.
In order to treat the case in which several of the λ's coincide, we therefore
assume that λ₁ is l times degenerate, i.e.,

(3.8.4)

We assume again that all q's are linearly independent with finite angles even if
t → +∞. We define a normalization factor by [cf. (3.5.3, 4)]

(3.8.5)

and form

(3.8.6)

We define

Lim Sup   (3.8.7)
 t→∞

by the following description. Take a sequence t = t_n, t_n → ∞, such that at least
for one of the q^(j)'s and some positive ε

(3.8.8)

holds. Because the present procedure is a rather straightforward generalization
of the case of nondegenerate λ's, we shall give only a sketch of our procedure.
Loosely speaking, for such t_n > t_{n₀} we may neglect

Σ_{k=l+1}^{m} D_{jk} q^(k)(t + τ, φ₀) exp[−z_k(τ)]   (3.8.9)

against

Σ_{k=1}^{l} D_{jk} q^(k)(t + τ, φ₀) exp[−z_k(τ)] ,   (3.8.10)


or, more precisely, we have

Lim Sup {x^(j)(t, φ₀ + τ)}
 t→∞
   = Σ_{k=1}^{l} D_{jk}(τ) Lim Sup {𝒩(t + τ) exp[z_k(t + τ) − z_k(τ)] u^(k)(t + τ)} .   (3.8.11)
                  t→∞

In Lim Sup we take all such sequences t_n for which at least one q^(j) fulfills (3.8.8).
Note that such sequences may depend on τ. We now form

Σ_{j=1}^{m} a_j(τ) Lim Sup {x^(j)(t, φ₀ + τ)}   (3.8.12)
            t→∞

and require that the a_j's are determined in such a way that (3.8.12) vanishes.
Using the explicit form of x^(j) and (3.8.12) we thus obtain

Σ_{k=1}^{l} Σ_{j=1}^{m} a_j(τ) D_{jk}(τ) Lim Sup {𝒩(t + τ) exp[z_k(t + τ) − z_k(τ)] u^(k)(t + τ)} = 0   (3.8.13)
                           t→∞

and, because the u^(k)'s are linearly independent of each other, we obtain

Σ_{j=1}^{m} a_j(τ) D_{jk}(τ) Lim Sup {𝒩(t + τ) exp[z_k(t + τ) − z_k(τ)] u^(k)(t + τ)} = 0 .   (3.8.14)
            t→∞

In it k runs from 1 to l.


We thus find l equations for the m unknown a_j's. This can be seen most
clearly from the fact that (3.8.14) can be fulfilled if and only if

Σ_{j=1}^{m} a_j(τ) D_{jk}(τ) = 0   for k = 1, …, l   (3.8.15)

holds. From (3.8.15) it follows that the a_j can be chosen independently of t. On the
other hand, the differentiability properties of D(τ) are not known. Therefore it is
advisable to go back to (3.8.12), where the differentiability properties of the coef-
ficients of the a_j(τ) are known. There are (at least) m − l linearly independent solu-
tion vectors a = (a₁, …, a_m) of (3.8.15).
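Condition (3.8.15) says that a must lie in the left null space of the matrix D = (D_{jk}); the m − l independent solutions can be obtained, for instance, from a singular value decomposition. The toy sketch below uses random made-up coefficients and our own names, purely to illustrate the counting argument.

```python
import numpy as np

# (3.8.15): sum_j a_j D_jk = 0 for k = 1..l, i.e. a @ D = 0.  For a
# generic m x l matrix D of full rank l there are exactly m - l
# independent solution vectors a; we extract them from the SVD of D^T.
rng = np.random.default_rng(2)
l, m = 2, 5
D = rng.normal(size=(m, l))       # toy coefficients D_jk
_, s, Vt = np.linalg.svd(D.T)     # D^T = U S V^T, Vt is m x m
null = Vt[l:]                     # (m - l) rows spanning {a : a @ D = 0}
assert null.shape == (m - l, m)
assert np.allclose(null @ D, 0.0)
```

For l = 2, m = 5 this yields three independent vectors a, matching the "at least m − l" count in the text.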
To secure the solvability condition even for finite t_n we may just take l rows of
(3.8.12), provided not all coefficients of the a_j vanish identically. If this happens,
however, we may go over to another selection of l rows and apply the procedure
described in Sect. 3.6.2.
Furthermore, we may choose l′ (l′ ≥ l) vectors

(3.8.16)


such that (3.8.12) does not vanish. Let us treat the case in which the least
reduction is possible, l′ = l. If we label the vectors (3.8.16) for which (3.8.12)
remains finite by

u^(k) with k = 1, …, l ,   (3.8.17)

and those for which (3.8.12) vanishes by

u^(k) with k = l + 1, …, m ,   (3.8.18)

we may quite generally form new solutions of (3.1.5) by the definition

(3.8.19)

On account of our construction, the new solutions q̃^(k), k = l + 1, …, m, possess
generalized characteristic exponents λ₂ or smaller. When we use some arguments
of Sects. 3.1–7, we recognize that the matrix C^T = (G_{jk}) [cf. (3.7.1)] is reduced
to the form

(3.8.20)

Again one may show that the a_j(τ) have the same differentiability properties as
their coefficients in (3.8.12). If the coefficients in (3.8.12) are considered not as
functions of τ but rather of φ (compare the previous sections), the a's acquire the
same differentiability properties. Therefore q̃^(k) (3.8.19) again has the same dif-
ferentiability properties as the original solution vectors q^(j)(t, φ). After having
reached (3.8.20), we may continue our procedure so that we can reduce the
scheme of coefficients in (3.8.20) to one of the triangular form

(3.8.21)

The question arises whether we may reduce the scheme (3.8.21) further to one in
which we have nonvanishing matrices along the main diagonal only,

(3.8.22)


There are various circumstances under which such reduction can be reached. This
can be achieved, for instance, if for t .... - 00 the generalized characteristic
exponents A} just obey the inverse of the corresponding relations (3.8.2). It is,
however, also possible to reduce scheme (3.8.21) to one of the form (3.8.22) if it
can be shown that the squares in (3.8.21) can be further reduced individually to
diagonal form. Even if that is not possible, further general statements can be
made on the form of the solution matrix Q(t, rp). Namely, if the solution matrices
belonging to the Qrsquares in (3.8.21) are known, the total matrix Q(t, lfJ) can be
found consecutively by the method of variation of the constant. The procedure is
analogous to that in Sect. 3.7, but ijU) must be replaced by submatrices QU), etc.
In the final part of this section we want to present a theorem in which all Ak'S
are equal.
Theorem 3.8.2. Let us make the following assumptions:
1) M (3.1.4) is at least C¹ with respect to φ;
2) λ_k = λ, k = 1, …, m;
3) e^{−λnτ} ‖T_τ^n q‖ and e^{λnτ} ‖T_τ^{−n} q‖ are bounded for n → ∞ and τ arbitrary, real, and
for all vectors q of the solution space of Q̇ = M(t, φ) Q which are C¹ with
respect to φ. (‖…‖ denotes the Hilbert space norm.)

Then we may find a new basis q̃ so that T_τ becomes diagonal,

T_τ q̃ = e^{χτ} q̃ ,   χ imaginary ,   (3.8.23)

provided the spectrum [of T_τ acting in the space q(t, φ)] is a point spectrum.
In the case of a continuous spectrum, T_τ is equivalent to an operator T̃_τ of
"scalar type", i.e., T̃_τ can be decomposed according to

(3.8.24)

where E is the resolution of the identity for T̃_τ.


The proof of this theorem can be achieved by using several theorems of the
theory of linear operators. We shall quote these latter theorems only (cf. refer-
ences) and indicate how they allow us to derive Theorem 3.8.2. Because of as-
sumption (1) of Theorem 3.8.2 and Lemma 3.2.2, q(t, φ) is (at least) C¹ with
respect to φ for −∞ < t < ∞. Then the solutions form a Hilbert space for
−∞ < t < ∞.
The operators T_τ form an Abelian group. As indicated in Sect. 3.7, we can de-
compose the group representation G of T_τ for τ → 0 into 1 + τ A(φ). Correspond-
ingly, we decompose T_τ = 1 + A τ, τ → 0, where A is the infinitesimal generator of
the group. With the help of A, for finite τ, we can represent T_τ in the form
T_τ = exp(A τ). We now invoke the following lemma: Let G be a bounded Abelian
group of operators in a Hilbert space H. Then there is a bounded self-adjoint
operator B in H with a bounded inverse defined everywhere such that for every T_τ
in G the operator B T_τ B⁻¹ is unitary.
Consequently, the bounded group exp(A τ) introduced above is equivalent to
a group of unitary operators. By Stone's theorem, the latter group has an infini-
tesimal generator iÃ, where Ã is self-adjoint. Thus A is equivalent to iÃ, the
transformation matrix being given by B. Because in our present case A and thus
Ã are independent of t and τ, B is also independent of t and τ. Thus B describes a
t- and τ-independent transformation of the q's to another set of q's, which there-
fore must again be solutions of (3.1.5).
In the last step of our analysis we recall that T_τ becomes unitary and there-
fore, according to the theory of linear operators, a spectral operator of scalar
type, so that Ã and T_τ satisfy the equations

Ã = −∫ χ E(dχ)   (3.8.25)

and

(3.8.26)

respectively, where E is the resolution of the identity for Ã (and T_τ). If the
spectrum χ is a point spectrum, it follows from (3.8.26) that

(3.8.27)

This equation can now be treated in the same manner as (3.7.9).
In conclusion we mention that a further class of solutions can be identified if
‖T_τ^n q‖ grows as n^m. The solutions are then polynomials in t (up to order m)
with quasiperiodic coefficients.
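The lemma quoted in the proof can be illustrated in finite dimensions: a matrix generating a bounded group (diagonalizable, all eigenvalues on the unit circle) is similar to a unitary one. The toy example below is our own and uses the eigenvector matrix for the similarity rather than the self-adjoint B of the lemma; it only conveys the idea.

```python
import numpy as np

# T is not normal, but T**n stays bounded because its eigenvalues
# exp(+/- i*pi/3) lie on the unit circle.  Conjugating with the
# eigenvector matrix S turns T into a diagonal, hence unitary,
# matrix -- a finite-dimensional version of the quoted lemma.
T = np.array([[0.0, 1.0],
              [-1.0, 1.0]])
evals, S = np.linalg.eig(T)
assert np.allclose(np.abs(evals), 1.0)          # bounded group
D = np.linalg.inv(S) @ T @ S                    # similarity transform of T
assert np.allclose(D, np.diag(evals))
assert np.allclose(D @ D.conj().T, np.eye(2))   # the result is unitary
```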

3.9 Explicit Solution of (3.1.1) by an Iteration Procedure

In Sect. 2.9, where we treated differential equations with periodic coefficients,
we developed a procedure to calculate the solutions explicitly by perturbation
theory. Unfortunately there are difficulties with respect to convergence if one
tries to extend this procedure in a straightforward manner to the present case
with quasiperiodic coefficients. It is possible, however, to devise a rapidly con-
verging iteration scheme whose convergence can be rigorously proven. The basic
idea consists in a decomposition of the matrix M of the equation Q̇ = MQ into a
constant matrix A and a matrix M̃ which contains the quasiperiodic time depend-
ence, M = A + M̃. The matrix M̃ is considered as a small perturbation.
We first formulate the basic theorem and then show how the solution can be
constructed explicitly.
Theorem 3.9.1. Let us assume that A and M̃ satisfy the following conditions:
1) The matrix M̃ is periodic in the φ_j and analytic in the domain

|Im{φ}| ≡ sup_j {|Im{φ_j}|} ≤ ρ₀ ,   ρ₀ > 0 ,   (3.9.1)

and real for real φ_j;


2) The matrix M̃ does not contain any constant terms;
3) For some positive ω_j and d the inequality

(3.9.2)

i.e., the Kolmogorov-Arnold-Moser condition, holds for every vector

(3.9.3)

with integer components n_j;
4) The eigenvalues λ of the matrix A have distinct real parts.
Then the assertion is: a sufficiently small positive constant K can be found
such that for

Σ_{j,k=1}^{m} |M̃_{jk}| ≤ K   (3.9.4)

a solution can be constructed which has the form

(3.9.5)

In it the matrix V is a quasiperiodic matrix which is analytic and analytically in-
vertible in the domain

(3.9.6)

and A is a constant matrix.
To show how V and A can be constructed explicitly, we start from the
equation

Q̇(t) = [A + M̃(t)] Q(t) ,   (3.9.7)

where A and M̃ have been specified above. By means of a constant matrix C we
transform A into its Jordan normal form,

J = C⁻¹ A C .   (3.9.8)

Due to our assumption that the real parts of the eigenvalues of A are distinct, all
eigenvalues of A are distinct, and therefore we may assume that J is of purely
diagonal form. We now put

Q(t) = C Q̃(t)   (3.9.9)

and insert it into (3.9.7). After multiplication of (3.9.7) by C⁻¹ from the left we
obtain

Q̃̇ = [J + M₀(t)] Q̃ ,   (3.9.10)


where we have used the abbreviation

M₀(t) = C⁻¹ M̃(t) C .   (3.9.11)

In order to solve (3.9.10) we make the hypothesis

Q̃(t) = [1 + U₁(t)] Q₁(t) ,   (3.9.12)

where 1 is the unit matrix. Since we want to construct a solution of the form
(3.9.5), it would be desirable to choose U₁ as a quasiperiodic matrix but Q₁ as an
exponential function of time.
For a practical calculation it turns out, however, that we cannot construct U₁
exactly. Therefore we resort to an iteration scheme in which we aim at calculating
U₁ approximately, but retain the requirement that U₁ is quasiperiodic. We then
derive a new equation for Q₁. It will turn out that, by an adequate choice of the
equation for U₁, the equation for Q₁ takes the same shape as equation (3.9.10),
with one major difference: in this new equation M₀ is replaced by a new quasi-
periodic matrix M₁ whose matrix elements are smaller than those of M₀. Thus the
basic idea of the present approach is to introduce repeatedly substitutions of the
form (3.9.12).
In this way the original equation (3.9.10) is reduced to a form in which the
significance of the quasiperiodic matrix M(t) becomes smaller and smaller.
These ideas will become clearer when we perform the individual steps explicitly.
We insert hypothesis (3.9.12) into (3.9.10), which immediately yields

U̇₁ Q₁ + [1 + U₁(t)] Q̇₁ = J Q₁ + J U₁ Q₁ + M₀ Q₁ + M₀ U₁ Q₁ .   (3.9.13)

For reasons which will transpire below we add on both sides of (3.9.13)

(3.9.14)

and we add on the rhs 0, specifically in the form

(3.9.15)

In general, we shall define D₁ as the constant part of the main diagonal of the
matrix M₀. We do this here only for formal reasons, because M₀ was anyway con-
structed in such a way that its constant part vanishes. However, in our sub-
sequent iteration procedure such terms can arise. Next, D₁ can be found explicitly
via the formula

D₁ = (2π)^{−N} ∫₀^{2π} … ∫₀^{2π} diag(M₀,₁₁ , M₀,₂₂ , …, M₀,mm) dφ₁ … dφ_N .   (3.9.16)
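Formula (3.9.16) is just an average over the angle variables; equivalently, the constant part of a quasiperiodic diagonal element can be recovered by time-averaging. A scalar toy check with made-up values of our own:

```python
import numpy as np

# Time averaging a quasiperiodic scalar recovers its constant part,
# the 1-d analogue of the angle average (3.9.16).  c is a made-up
# constant; the oscillatory terms have frequencies 1 and sqrt(2).
c = 0.7
M = lambda t: c + 0.4*np.cos(t) + 0.3*np.cos(np.sqrt(2.0)*t)
t = np.linspace(0.0, 4000.0, 400001)
avg = M(t).mean()          # long-time average over a dense grid
assert abs(avg - c) < 1e-3
```

The oscillatory terms average out at a rate O(1/T), so the error shrinks as the averaging window grows.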

538
3.9 Explicit Solution of (3.1.1) by an Iteration Procedure 137

With the aid of the just-described manipulations, (3.9.13) is transformed into

(3.9.17)

To simplify this equation we assume that U₁ can be chosen in such a way that the
underlined expressions in (3.9.17) cancel for an arbitrary matrix Q. Before
writing down the resulting equation for U₁, we make explicit use of the assump-
tion that U₁ is a quasiperiodic function, i.e., we write

U₁ = Σ_n U₁,n exp[i n·(ωt + φ)] .   (3.9.18)

By means of it we can immediately cast the time derivative of U₁ in the form

(3.9.19)

Using this specific form, the equation for U₁ then reads

(3.9.20)

Because of (3.9.20), (3.9.17) reduces to the following equation for Q₁:

(1 + U₁) Q̇₁ = J Q₁ + U₁ J Q₁ + M₀ U₁ Q₁ + D₁ Q₁ ,   (3.9.21)

in which the first two terms on the rhs combine to (1 + U₁) J Q₁. On the rhs we
add 0 in the form

(3.9.22)

so that we can cast (3.9.21) into the form

(3.9.23)

After multiplication of this equation by

(3.9.24)

from the left, we obtain

(3.9.25)


We now introduce the abbreviations

(3.9.26)

(3.9.27)

which leaves us with

(3.9.28)

Here J₁ is evidently again a diagonal, time-independent matrix, whereas M₁ is a
quasiperiodic function. So at first sight it appears that nothing is gained,
because we are again led to an equation for Q₁ which is of precisely the same
form as the original equation (3.9.10). However, a rough estimate will immedia-
tely convince us that M₁ is reduced by an order of magnitude. To this end let us
introduce a small parameter ε so that

M₀ ∝ ε .   (3.9.29)

Because D₁ is obtained from M₀ by means of (3.9.16),

D₁ ∝ ε   (3.9.30)

holds. On the other hand, we assume that J is of order unity as compared to M₀,

J ∝ 1 .   (3.9.31)

From (3.9.30, 31) it follows that the rhs of (3.9.20) is proportional to ε. This
leads us to the conclusion

(3.9.32)

An inspection of (3.9.27) reveals that M₁ must be of order

M₁ ∝ ε² ,   (3.9.33)

whereas J₁ is given by the order of magnitude

J₁ ∝ 1 + ε .   (3.9.34)

Thus, expressed in ε's, (3.9.28) is of the form

(3.9.35)

This means that we have indeed achieved an appreciable reduction of the size of
the quasiperiodic part on the rhs. It should be noted that our estimate is a super-
ficial one and that the convergence of the procedure has been proven mathema-
tically rigorously (see below). Our further procedure is now obvious. We shall
continue the iteration by making in a second step the hypothesis

(3.9.36)

and continue with the general term

Q_v(t) = [1 + U_{v+1}(t)] Q_{v+1}(t) .   (3.9.37)

In each of these expressions U_{v+1} is assumed to be quasiperiodic. Simultaneously
we put

J_{v+1} = J_v + D_{v+1} ,   where   (3.9.38)

D_{v+1} = constant part of the main diagonal of M_{v+1} .   (3.9.39)

Introducing the further abbreviation

(3.9.40)

we may define the equation for Q_{v+1} by

(3.9.41)

In order to obtain the relations (3.9.38–41) it is only necessary to substitute, in
the previous relations (3.9.26–28), the index 0 (or no index) by v and the index 1 by
v + 1. Applying the transformations (3.9.36, 37) one after the other, we obtain the
explicit form of the solution as

(3.9.42)

Evidently, in the limit in which l goes to infinity, we have the relation

(3.9.43)

This equation can now be solved explicitly. Its general solution reads

(3.9.44)

where J_l is a diagonal matrix.
As a specific choice of initial conditions we can take

(3.9.45)


We are now left with the explicit construction of U_{v+1}, v = 0, 1, …, which must
incorporate the proof that U can be chosen as a quasiperiodic function. Accord-
ing to (3.9.20) the equation for U reads

(3.9.46)

For simplicity we shall drop the indices v and v + 1. We further put

M − D = M′ ,   (3.9.47)

which can be expanded into a multiple Fourier series,

M′(φ) = Σ_n M′_n exp(i n·φ) .   (3.9.48)

In order to solve (3.9.46) we expand U(φ) into such a series, too:

U(φ) = Σ_n U_n exp(i n·φ) .   (3.9.49)

Inserting (3.9.48, 49) into (3.9.46), we find for the Fourier coefficients the fol-
lowing relations:

U_n (J + i(n·ω)) − J U_n = M′_n .   (3.9.50)

We now introduce the matrix elements of the matrices U_n, J and M′_n by means of

U_n = (U_n^{jk}) ,   (3.9.51)

J = (J_j δ_{jk}) ,   (3.9.52)

M′_n = (M′_n^{jk}) .   (3.9.53)

This allows us to write the matrix equation (3.9.50) in the form

(J_k − J_j + i n·ω) U_n^{jk} = M′_n^{jk} .   (3.9.54)

Its solution can be immediately found,

U_n^{jk} = (J_k − J_j + i n·ω)^{−1} M′_n^{jk} ,   (3.9.55)

provided the denominator does not vanish. We can easily convince ourselves that
the denominator does not vanish. For k :j= j it was assumed that the real parts of
the eigenvalues of the original matrix A (or J) are distinct. Provided M is small

542
3.9 Explicit Solution of (3.1.1) by an Iteration Procedure 141

enough, Dv+l are so small that subsequent shifts of J according to (3.9.38) are so
small that (3.9.56) is fulfilled for all iterative steps. Therefore we obtain for k *j

Re{h} * Re{JJ (3.9.56)

IJk - Jj + i(n, w) I > O. (3.9.57)

For k =j we find

In, wi> 0 for In I> 0 , (3.9.58)

but for n = 0 we know that according to the construction (3.9.47)

M′_{jj}^{(0)} = 0.   (3.9.59)

To make the solution matrix U_n unique we put U_{jj}^{(0)} = 0.


Inserting (3.9.55) into (3.9.49) we thus obtain as the unique and explicit solution of the step v (where we suppress the index v of J_k, J_j)

U_{v+1,jk}(φ) = Σ_n (J_k - J_j + i n · ω)^{-1} M′_{n,jk} exp(i n · φ).   (3.9.60)

Equation (3.9.60) can also be expressed as

U_{v+1,jk}(φ) = - σ ∫_t^0 exp[(J_k - J_j)τ] M′_{jk}(φ + ωτ) dτ.   (3.9.61)

The sign σ∫_t means that t = -∞ if J_k - J_j > 0 and t = +∞ if J_k - J_j < 0 is taken
as lower limit of the integral.
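Formula (3.9.55) lends itself to direct numerical implementation. The sketch below (Python with NumPy — a convenience of ours, not anything used in the text) computes U_n for a single Fourier mode n from an arbitrarily chosen matrix M′_n and diagonal J, and verifies that the result satisfies (3.9.50):

```python
import numpy as np

def fourier_coefficient_U(Mn, J, n, omega):
    """Solve (3.9.54) for one Fourier mode n via the division formula (3.9.55):
    U_{n,jk} = M'_{n,jk} / (J_k - J_j + i (n, w)),
    assuming none of the denominators vanishes, cf. (3.9.56-58)."""
    denom = J[None, :] - J[:, None] + 1j * np.dot(n, omega)
    return Mn / denom

# small two-dimensional example with two basic frequencies (arbitrary values)
J = np.array([1.0, -2.0])                # distinct real parts, cf. (3.9.56)
omega = np.array([1.0, np.sqrt(2.0)])    # rationally independent frequencies
n = np.array([1, -1])
Mn = np.array([[0.5, 1.0], [2.0, -0.3]], dtype=complex)

Un = fourier_coefficient_U(Mn, J, n, omega)
# check (3.9.50) for diagonal J: U_n (J + i(n,w)) - J U_n = M'_n
lhs = Un * (J[None, :] + 1j * np.dot(n, omega)) - J[:, None] * Un
print(np.allclose(lhs, Mn))
```

For n = 0 the diagonal elements would have a vanishing denominator; as the text notes, those terms are absent by the construction (3.9.59).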
We may summarize our results as follows. We have devised an iteration
procedure by which we can construct the quasiperiodic solution matrix

V = ∏_{v=1}^{l} (1 + U_v),   l → ∞,   (3.9.62)

explicitly, whereas A is given by

(3.9.63)

In practical applications, for a sufficiently small quasiperiodic matrix M very few
steps may be sufficient. In the literature an important part of the mathematical
treatment deals with the proof of the convergence of the above procedure. Since
in the present context this representation does not give us any deeper insights, we
refer the interested reader to Sect. 5.3, Chap. 6, and Appendix A, where the question of convergence is treated in detail. Rather, we make a comment which qualitatively elucidates the range of validity of convergence. As transpires from
(3.9.60), U is obtained from M essentially by dividing M by J_k - J_j. In order to


secure the convergence of the infinite product in (3.9.62), U_v must go to 0 sufficiently rapidly. This is at least partly achieved if the original M's are
sufficiently small and the differences between the original eigenvalues J are
sufficiently large.

4. Stochastic Nonlinear Differential Equations

In this chapter we shall present some of the most essential features of stochastic
differential equations. Readers interested in learning more about this subject are
referred to the book by Gardiner (cf. references).
In many problems of the natural sciences, but also in other fields, we deal
with macroscopic features of systems, e. g., with fluids, with electrical networks,
with macroscopic brain activities measured by EEG, etc. It is quite natural to
describe these macroscopic features by macroscopic quantities, for instance, in
an electrical network such variables are macroscopic electric charges and
currents. However, one must not forget that all these macroscopic processes are
the result of many, more or less coherent, microscopic processes. For instance, in
an electrical network the current is ultimately carried by the individual electrons,
or electrical brain waves are ultimately generated by individual neurons. These
microscopic degrees of freedom manifest themselves in the form of fluctuations
which can be described by adding terms to otherwise deterministic equations for
the macroscopic quantities. Because in general microscopic processes occur on a
much shorter time scale than macroscopic processes, the fluctuations represent-
ing the underworld of the individual parts of the system take place on time scales
much shorter than the macroscopic process. The theory of stochastic differential
equations treats these fluctuations in a certain mathematical idealization which
we shall discuss below.
In practical applications one must not forget to check whether such an ideal-
ization remains meaningful, and if results occur which contradict those results
one would obtain by physical reasoning, one should carefully check whether the
idealized assumptions are valid.

4.1 An Example

Let us first consider an example to illustrate the main ideas of this approach. Let
us treat a single variable q, which changes in the course of time t. Out of the continuous
time sequence we choose a discrete set of times t_i, where we assume for simplicity
that the time intervals between t_{i-1} and t_i are equal to each other. The change of q
when the system goes from the state at time t_{i-1} to another one at time t_i will be denoted
by

Δq(t_i) = q(t_i) - q(t_{i-1}).   (4.1.1)

This change of the state variable q is generally caused by macroscopic or coherent


forces K which may depend on that variable, and by additional fluctuations
which occur within the time interval under discussion

Δt = t_i - t_{i-1}.   (4.1.2)

This impact of coherent driving forces and fluctuating forces can be described by

Δq(t_i) = K(q(t_{i-1})) Δt + Δw(t_i),   (4.1.3)

where Δw(t_i) = w(t_i) - w(t_{i-1}). Because Δw represents the impact of microscopic processes on the macroscopic process described by q, we can expect that
Δw depends on very many degrees of freedom of the underworld. As usual in
statistical mechanics we treat these many variables by means of statistics. Therefore we introduce a statistical average which we characterize by means of the following two properties.
1) We assume that the average of Ll w vanishes

⟨Δw(t_i)⟩ = 0.   (4.1.4)

Otherwise Δw would contain a part which acts in a coherent fashion and could
be taken care of by means of K.
2) We assume that the fluctuations occur on a very short time scale. Therefore
when we consider the correlation function between Δw at different times t_i and t_j,
we shall assume that the corresponding fluctuations are uncorrelated. Therefore
we may postulate

⟨Δw(t_i) Δw(t_j)⟩ = Q δ_{t_i t_j} Δt.   (4.1.5)

The quantity Q is a measure of the size of the fluctuations; δ_{t_i t_j} expresses the statistical independence of Δw at different times t_i, t_j. The time interval Δt enters
into (4.1.5) because we wish to define the fluctuating forces in such a way that
Brownian motion is covered as a special case. To substantiate this we treat a very
simple example, namely

K = 0.   (4.1.6)

In this case we may solve (4.1.3) immediately by summing both sides over t_i,
which yields

q(t) - q(t_0) = Σ_{i=1}^{N} Δw(t_i),   t - t_0 = N Δt.   (4.1.7)

We have denoted the final time by t. We choose q(t_0) = 0. It then follows from
(4.1.4) that

⟨q(t)⟩ = 0.   (4.1.8)


In order to get an insight as to how big the deviations of q(t) may be on the
average due to the fluctuations, we form the mean square

⟨q²(t)⟩ = Σ_{i=1}^{N} Σ_{j=1}^{N} ⟨Δw(t_i) Δw(t_j)⟩,   (4.1.9)

which can be reduced on account of (4.1.5) to

⟨q²(t)⟩ = Q Σ_{j=1}^{N} Δt = Q t.   (4.1.10)

This result is well known from Brownian motion. The mean square of the coordinate of a particle undergoing Brownian motion increases linearly with time t.
This result is a consequence of the postulate (4.1.5).
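The linear law (4.1.10) is easily reproduced numerically. In the sketch below (Python/NumPy, with arbitrarily chosen step size, ensemble size and Q — all illustrative choices of ours) the increments Δw are drawn as Gaussians with the moments (4.1.4, 5):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, dt, N, paths = 2.0, 0.01, 500, 8000
# Gaussian increments with <dw> = 0 and <dw^2> = Q dt, cf. (4.1.4, 5)
dw = rng.normal(0.0, np.sqrt(Q * dt), size=(paths, N))
q = dw.cumsum(axis=1)              # q(t) = sum of the dw, cf. (4.1.7), q(t0) = 0
t = dt * np.arange(1, N + 1)
mean_sq = (q ** 2).mean(axis=0)    # ensemble average <q^2(t)>
# <q^2(t)> = Q t, cf. (4.1.10): check at the half-way point and at the end
print(abs(mean_sq[N // 2 - 1] / (Q * t[N // 2 - 1]) - 1.0) < 0.1)
print(abs(mean_sq[-1] / (Q * t[-1]) - 1.0) < 0.1)
```

Both checks pass within the statistical accuracy of the finite ensemble; refining dt does not change the linear growth, which depends only on the postulate (4.1.5).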
We now return to (4.1.3). As will transpire from later applications, very
often the fluctuations Δw occur jointly with the variable q, i. e., one is led to
consider equations of the form

Δq(t_i) = K(q(t_{i-1})) Δt + g(q) Δw(t_i).   (4.1.11)

An important question arises concerning the time at which the variable q in g must be
taken, and it turns out that the specific choice determines different kinds of processes. That means in particular that this time cannot be chosen arbitrarily. Two
main definitions are known in the literature. The first is due to Ito, according to
which q is taken at time t_{i-1}. This means we put in (4.1.11)

g(q) Δw(t_i) = g(q(t_{i-1})) Δw(t_i).   (4.1.12)

First the system has reached q(t_{i-1}), then fluctuations take place and carry the
system to a new state q(t_i). Because the fluctuations Δw occur only after the state
q at time t_{i-1} has been reached, q(t_{i-1}) and Δw(t_i) are uncorrelated:

⟨g(q(t_{i-1})) Δw(t_i)⟩ = ⟨g(q(t_{i-1}))⟩ ⟨Δw(t_i)⟩ = 0.   (4.1.13)

As we shall see below, this feature is very useful when making mathematical
transformations. On the other hand, it has turned out that in many applications
this choice (4.1.12) does not represent the actual process well. Rather, the fluctuations go on all the time, especially while the system moves on from one time to
the next. Therefore the second choice, introduced by Stratonovich, requires
that in (4.1.12) q is taken at the midpoint between the times t_{i-1} and t_i. Therefore
the Stratonovich rule reads

g(q) Δw(t_i) = g(q((t_{i-1} + t_i)/2)) Δw(t_i).   (4.1.14)


The advantage of this rule is that it allows variables to be transformed according
to the usual calculus of differential equations.

Because we have considered here the variable q at the discrete times t_i, it may
seem surprising that we now introduce new times in between the old sequence. At
this point we should mention that in both the Ito and the Stratonovich procedure
we have a limiting procedure in mind in which Δt of (4.1.2) tends to 0. Consequently, midpoints are also then included in the whole approximation procedure.
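The difference between the two rules is not merely formal. For the model equation Δq = q Δw (the choice g(q) = q is ours, made purely for illustration), the Ito rule (4.1.12) makes every factor g(q(t_{i-1})) uncorrelated with the following Δw(t_i), so the mean of q stays put, whereas under the Stratonovich rule ordinary calculus applies and gives q(t) = q(t_0) exp(w(t)), whose mean grows:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, N, paths = 2e-3, 500, 20000        # so that t - t0 = 1
dw = rng.normal(0.0, np.sqrt(dt), size=(paths, N))

# Ito reading (4.1.12): g(q) = q is evaluated at t_{i-1}, before the kick
q_ito = np.cumprod(1.0 + dw, axis=1)[:, -1]
# Stratonovich reading (4.1.14): ordinary calculus applies, q(t) = exp(w(t))
q_str = np.exp(dw.sum(axis=1))

print(abs(q_ito.mean() - 1.0) < 0.05)         # <q> stays at q(t0) = 1
print(abs(q_str.mean() - np.exp(0.5)) < 0.1)  # <q> grows like exp(t/2)
```

The two discretizations of the *same* formal equation thus converge to different processes, which is exactly the point made in the text.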

4.2 The Ito Differential Equation and the Ito-Fokker-Planck Equation

In this section we consider the general case in which the state of the system is
described by a vector q having the components q_l. We wish to study a stochastic
process in which, in contrast to (4.1.3 or 11), the time interval tends to zero. In
generalization of (4.1.11) we consider a stochastic equation in the form

dq_l(t) = K_l(q(t)) dt + Σ_m g_{lm}(q(t)) dw_m(t).   (4.2.1)

We postulate

⟨dw_m⟩ = 0 and   (4.2.2)

⟨dw_m(t) dw_n(t)⟩ = δ_{mn} dt.   (4.2.3)

In contrast to (4.1.5) we put Q = 1, because any Q ≠ 1 could be taken care of by
an appropriate choice of g_{lm}. For a discussion of orders of magnitude the
following observation is useful. From (4.2.3) one may guess that

dw_m = O(√dt).   (4.2.4)

Though this is not mathematically rigorous, it is a most useful tool in determining the correct orders of magnitude of powers of dw. From (4.2.1) we may
deduce a mean value equation for q_l by taking the statistical average on both sides
of (4.2.1). So we get for the sum on the rhs

⟨Σ_m g_{lm}(q) dw_m⟩ = Σ_m ⟨g_{lm}(q)⟩ ⟨dw_m⟩ = 0.   (4.2.5)

Working from the Ito assumption that q occurring in g_{lm} and dw_m in (4.2.1) are
statistically uncorrelated, we have therefore split the average of the last term into
a product of two averages. From (4.2.2) we find

d⟨q_l(t)⟩ = ⟨K_l(q(t))⟩ dt,   (4.2.6)


or, after a formal division of (4.2.6) by dt, we obtain the mean value equation of
Ito in the form

(d/dt) ⟨q_l(t)⟩ = ⟨K_l(q(t))⟩.   (4.2.7)
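A quick numerical check of (4.2.7) (a Python sketch; the drift K(q) = -q and the value of g are our illustrative choices, not examples from the text): the fluctuating term averages out, and the mean path obeys the deterministic equation alone.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, N, paths = 1e-3, 2000, 20000
q = np.full(paths, 1.0)
for _ in range(N):
    # one Ito (Euler) step of (4.2.1) with K(q) = -q and a single constant g = 0.5
    q = q - q * dt + 0.5 * rng.normal(0.0, np.sqrt(dt), paths)
t = N * dt                                # t = 2
# (4.2.7): d<q>/dt = <K(q)> = -<q>, hence <q(t)> = exp(-t), independent of g
print(abs(q.mean() - np.exp(-t)) < 0.02)
```

Changing g changes the spread of the individual paths but, for this linear drift, not their average.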

Instead of following the individual paths q_l(t) of each member of the statistical ensemble, we may also ask for the probability of finding the variable q_l within a
given interval q_l … q_l + dq_l at a given time t. We wish to derive an equation for
the corresponding (probability) distribution function. To this end we introduce
an arbitrary function u(q),

u = u(q),   (4.2.8)

which does not explicitly depend on time. Furthermore, we form the differential
of u_j, i. e., du_j, up to terms linear in dt. To this end we have to remember that dq_l
contains two parts,

dq_l = dq_{l,1} + dq_{l,2} with dq_{l,1} = O(dt) and dq_{l,2} = O(√dt),   (4.2.9)

where the first part stems from the coherent force K and the second part from the
fluctuating force. Therefore we must calculate du_j up to second order,

du_j = Σ_k (∂u_j/∂q_k) dq_k + (1/2) Σ_{kl} (∂²u_j/∂q_k ∂q_l) dq_k dq_l.   (4.2.10)

Inserting (4.2.1) in (4.2.10) we readily obtain

(4.2.11)

where we omit terms of order (dt)^{3/2}. Note that

(4.2.12)

where q is taken such that it is not correlated with dw. We now turn to the determination of the distribution function. To elucidate our procedure we assume that
time is taken at discrete values t_i. We consider an individual path described by the
sequence of variables q_j, w_j at times t_j, j = 0, 1, …, i. The probability (density) of
finding this path is quite generally given by the joint probability

(4.2.13)


When we integrate P over all variables q_k and w_k at all intermediate times
k = 1, …, i-1 and over w_i, w_0, we obtain the probability distribution
f(q, t_i | q_0, t_0), q ≡ q_i. This is the function we are looking for, and we shall derive
an equation for it. To this end we take the average ⟨…⟩ of (4.2.11). Taking the
average means that we multiply both sides of (4.2.11) by (4.2.13) and integrate
over all variables q_k, w_k except q_0. This multiple integration can be largely simplified when we use the fact that we are dealing here with a Markov process, as
transpires from the form of (4.2.1) in connection with the Ito rule. Indeed, the
values of q_i are determined by those of q_{i-1} and dw_i alone. This Markov property
allows us to write P (4.2.13) in the form [1]

P = P_q(q_i, t_i | w_i - w_{i-1}, q_{i-1}, t_{i-1}) P_w(w_i - w_{i-1}, t_{i-1})
  · P(q_{i-1}, w_{i-1} - w_{i-2}, t_{i-1}; …; q_0, w_0, t_0).   (4.2.14)

In this relation P_q is the conditional probability of finding q = q_i at time t_i,
provided Δw = Δw_i and q = q_{i-1} at time t_{i-1}, and P_w(Δw_i, t_i) is the probability
distribution of Δw at time t_i. The last factor is again the joint probability distribution. We further need the usual normalization properties, namely

(4.2.15)

and

(4.2.16)

where n is the dimension of the vectors q and w. We are now in a position to
derive and discuss the averages ⟨…⟩ over the individual terms of (4.2.11).
Because u_j depends on q ≡ q_i only, the multiple integral over all q_k, dw_k (except
q_0) reduces to

(4.2.17)

where we used the definition of f introduced above. Interchanging the averaging
procedure and the time evolution of the system described by the differential
operator d and using (4.2.11), we readily find

(4.2.18)


where we have numbered the corresponding terms to be discussed. Because of
the statistical independence of q_{i-1} and Δw_i, which is also reflected by the decomposition (4.2.14), we have been able to split the average ⟨…⟩ on the rhs into
a product (cf. terms 3 and 4). The averages containing q are of precisely the same
structure as (4.2.17) in that they contain the variable q only at a single time t_{i-1}.
A short, straightforward analysis shows that we may again reduce the average to
an integral over q. Taking the differential quotient of (4.2.18) with respect to
time t we then obtain from (4.2.17)

d⟨u_j⟩/dt = ∫ dⁿq u_j(q) (∂/∂t) f(q, t | q_0, t_0).   (4.2.19)

The expressions under the sum of term 2 in (4.2.18) read

(4.2.20)

We perform a partial integration with respect to the coordinate q_k, by which we
may transform (4.2.20) into

(4.2.21)

where we assumed that f (and its derivatives) vanishes at the boundaries. Term 3
vanishes because the average factorizes, since q and dw are uncorrelated, and
because of (4.2.2). The last term 4 of (4.2.18) can be transformed in analogy to
our procedure applied to the term (4.2.20), but using two consecutive partial integrations. This yields for a single term under the sums

(4.2.22)

When we consider the resulting terms 1, 2 and 4, which stem from (4.2.18), we
note that all of them are of the same type, namely of the form

(4.2.23)

where the bracket may contain differential operators. Because these expressions
occur on both sides of (4.2.18) and because u_j was an arbitrary function, we
conclude that (4.2.18), if expressed by the corresponding terms 1, 2 and 4, must
be fulfilled even if the integration over dⁿq and the factor u_j(q) are dropped.
This leads us to the equation

(4.2.24)


This Fokker-Planck type equation is related to the Ito stochastic equation
(4.2.1).
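As an illustration of how the Fokker-Planck equation constrains the long-time statistics, consider the scalar case with K(q) = -q and a constant g (our choice for illustration; the text treats the general case). The stationary solution of the corresponding Fokker-Planck equation is a Gaussian of variance g²/2, which an ensemble simulation of the Ito equation reproduces:

```python
import numpy as np

rng = np.random.default_rng(3)
dt, steps, paths, g = 1e-3, 5000, 20000, 1.0
q = np.zeros(paths)
for _ in range(steps):
    # Ito equation with K(q) = -q, constant g, integrated well past relaxation
    q += -q * dt + g * rng.normal(0.0, np.sqrt(dt), paths)
# stationary Fokker-Planck solution for this drift and diffusion:
# f(q) ~ exp(-q**2 / g**2), a Gaussian with variance g**2 / 2
print(abs(q.var() - g ** 2 / 2) < 0.02)
```

The ensemble variance settles at g²/2 regardless of the initial distribution, as the stationary solution demands.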

4.3 The Stratonovich Calculus

In this calculus q, which occurs in the functions g, is taken at the midpoint of the
time interval t_{i-1} … t_i, i. e., we have to consider expressions of the form

g_{lm}(q((t_{i-1} + t_i)/2)) dw_m(t_i).   (4.3.1)

(Here and in the following, summation will be taken over dummy indices.)
This rule can easily be extended to the case in which g depends explicitly on
time t, where t must be replaced according to (4.3.1) by the midpoint rule. Two
things must be observed.
1) If we introduce the definition (4.3.1) into the original Ito equation (4.2.1),
a new stochastic process is defined.
2) Nevertheless, the Ito and Stratonovich processes are closely connected with
each other and one may go from one definition to the other by a simple transformation. To this end we integrate the Ito equation (4.2.1) in a formal manner by
summing up the individual time steps and then going over to the integral. Thus
we obtain from (4.2.1)

q_l(t) = q_l(t_0) + ∫_{t_0}^{t} K_l(q) dt′ + ∫_{t_0}^{t} g_{lm}(q) dw_m(t′).   (4.3.2)

In it the Ito integral is defined by taking q in g_{lm} at a time just before the fluctuation dw_m occurs. We have denoted this integral by the sign

(4.3.3)

We now consider a process which leads to precisely the same q_l(t) as in (4.3.2),
but by means of the Stratonovich definition

q_l(t) = q_l(t_0) + ∫_{t_0}^{t} K̃_l(q) dt′ + ∫_{t_0}^{t} g̃_{lm}(q) dw_m(t′).   (4.3.4)

We shall show that this can be achieved by an appropriate choice of a new driving
force K̃ and new factors g̃_{lm}. In fact, we shall show that we need just a new force
K̃, but that we may choose g̃ = g. Here the Stratonovich integral

(4.3.5)


is to be evaluated as the limit of the sum with the individual terms (4.3.1). Therefore we consider

(4.3.6)

where Δt → 0 and N → ∞ such that N Δt = t - t_0.


Inserting an additional term containing

(4.3.7)

we obtain for the rhs of (4.3.6) (before performing "lim")

(4.3.8)

We put

q((t_{i-1} + t_i)/2) = q(t_{i-1} + dt)   (4.3.9)

and

dt = (t_i - t_{i-1})/2.   (4.3.10)

Because q(t) obeys the Ito equation, we find for (4.3.9)

q(t_{i-1} + dt) = q(t_{i-1}) + K(q(t_{i-1})) dt + g[q(t_{i-1})] [w((t_{i-1} + t_i)/2) - w(t_{i-1})].   (4.3.11)

We expand g_{lm} with respect to (4.3.10) and obtain, taking into account the terms
of order dt,


g_{lm}(q((t_{i-1} + t_i)/2)) = g_{lm}(q(t_{i-1})) + (∂g_{lm}(q(t_{i-1}))/∂t) dt

  + (∂g_{lm}(q(t_{i-1}))/∂q_k) K_k(q(t_{i-1})) dt

  + (1/2) (∂²g_{lm}(q(t_{i-1}))/∂q_k ∂q_l) g_{kp}(q(t_{i-1})) g_{lp}(q(t_{i-1})) (t_i - t_{i-1})/2

  + (∂g_{lm}(q(t_{i-1}))/∂q_k) g_{kp}(q(t_{i-1})) [w_p((t_{i-1} + t_i)/2) - w_p(t_{i-1})].   (4.3.12)

The second term on the rhs occurs if g_{lm} explicitly depends on time. By means of
(4.3.12), the integral (4.3.6) acquires the form

∫ … = Σ_{i=1}^{N} g_{lm}(q((t_{i-1} + t_i)/2)) [w_m(t_i) - w_m((t_{i-1} + t_i)/2)]

  + Σ_{i=1}^{N} g_{lm}(q(t_{i-1})) [w_m((t_{i-1} + t_i)/2) - w_m(t_{i-1})]

  + Σ_{i=1}^{N} (∂g_{lm}(q(t_{i-1}))/∂q_k) g_{kp}(q(t_{i-1})) [w_m((t_{i-1} + t_i)/2) - w_m(t_{i-1})]² δ_{mp}

  + O(Δt Δw).   (4.3.13)

An inspection of the rhs of (4.3.13) reveals that the first two sums jointly
define the Ito integral. This can be seen as follows. The first sum contains a contribution from dw_m over the time interval from (t_{i-1} + t_i)/2 till t_i, whereas the
second sum contains the time interval from t_{i-1} till (t_{i-1} + t_i)/2. Thus, taking both
time intervals together and summing over i, the sums cover the total time interval. The third sum over i is again of the Ito type. Here we must use a result from
stochastic theory according to which the square bracket squared converges
to dt/2 in probability. Thus the whole sum converges towards an ordinary
integral. These results allow us to write the connection between the Stratonovich
and Ito integrals in the form

(S)∫_{t_0}^{t} g_{lm}(q) dw_m(t′) = (I)∫_{t_0}^{t} g_{lm}(q) dw_m(t′) + (1/2) ∫_{t_0}^{t} (∂g_{lm}/∂q_k) g_{km} dt′.   (4.3.14)

We are now in a position to compare the Ito and Stratonovich processes described by (4.3.2 and 4) explicitly. We see that both processes lead to the same
result, as required, if we make the following identifications:
Stratonovich          Ito

(4.3.15)
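In the scalar case the identifications (4.3.15) reduce to the drift relation K = K̃ + (1/2) g dg/dq with g̃ = g — our paraphrase of the standard Ito-Stratonovich correspondence — and this can be checked on the same realization of the noise. The sketch below (Python/NumPy; the model dq = q dw and all parameters are illustrative choices of ours) integrates the Stratonovich process by ordinary calculus and the equivalent Ito process with the corrected drift q/2:

```python
import numpy as np

rng = np.random.default_rng(4)
dt, N, paths = 2e-3, 500, 10000            # t - t0 = 1
dw = rng.normal(0.0, np.sqrt(dt), size=(paths, N))

# Stratonovich process dq = q dw: ordinary calculus gives q(t) = exp(w(t))
q_strat = np.exp(dw.sum(axis=1))
# the same process in Ito form, with the corrected drift (1/2) g dg/dq = q/2
q_ito = np.cumprod(1.0 + 0.5 * dt + dw, axis=1)[:, -1]

# the two descriptions agree path by path up to discretization error
print(np.mean(np.abs(np.log(q_ito / q_strat))) < 0.1)
```

Dropping the extra drift term in the Ito version reproduces instead the distinct process of the Ito reading, which is the content of the comparison above.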

In other words, we find exactly the same result if we use a Stratonovich stochastic
equation, but use in it g̃ and K̃ instead of g and K of the Ito equation in the way
indicated by (4.3.15). These results allow us in particular to establish a Stratonovich-Fokker-Planck equation. If the Stratonovich stochastic equation is written
in the form

(4.3.16)

and g̃_{lm} · dw_m is evaluated according to (4.3.1), the corresponding Fokker-Planck
equation reads

(4.3.17)

We remind the reader that we used the convention of summing up over
dummy indices. Because in other parts of this book we do not use this convention, we write the Fokker-Planck equation in the usual way by explicitly denoting
the sums.

4.4 Langevin Equations and Fokker-Planck Equation

For the sake of completeness we mention the Langevin equations, which are just
special cases of the Ito or Stratonovich equations, because their fluctuating
forces are independent of the variable q and of time t. Therefore the corresponding Fokker-Planck equation is the same in the Ito and Stratonovich calculus.

Exercises. 1) Why do we need the relations (4.2.14, 15) when evaluating the rhs of
(4.2.18)? Hint: while u_j depends on q at time t_i, K_k and g_{km} depend on q at time
t_{i-1}.

2) How does the explicit expression for the joint probability (4.2.13) read?
Hint: the conditional probability P_w(w, t | w′ = 0, 0) is given by

(4.4.1)

5. The World of Coupled Nonlinear Oscillators

When speaking of oscillators, probably most of us first think of mechanical
oscillators such as springs. Another example from mechanics is provided by the
pendulum. It can be treated as a linear oscillator if its amplitude of oscillation is
small enough, but otherwise it represents a nonlinear oscillator. In many cases of
practical importance we have to deal with coupled oscillators. For instance, think
of a piece of elastic material which for mathematical treatment is treated as a set
of coupled finite elements, each of which can be represented as an oscillator. Such
methods are of great importance in mechanical engineering, for instance when
dealing with vibrations of engines or towers, or with the flutter of wings of airplanes.
Of course, in a number of cases we consider the limiting case in which the finite
elements approach a continuous distribution which corresponds to our original
picture of an elastic medium. Oscillations occur equally well in electrical and
radio engineering. Here we deal not only with the old radio tube oscillator but
with modern circuits using transistors and other electronic devices.

In the field of optics the laser can be considered as being built up of many
coupled quantum-mechanical oscillators, namely the electrons of the laser
atoms. Through the cooperation of these oscillators, laser light in the form of a
coherent oscillation of the electromagnetic field is produced. In a number of
experiments done with fluids, the observed phenomena can be interpreted as if
some specific oscillators representing complicated motions of the fluid interact
with each other. As mentioned in the introduction, chemical reactions can also
show oscillations and can serve as examples for the behavior of coupled oscillators. Even in elementary particle physics we have to bear in mind that we are
dealing with fields which in one way or another can be considered as a set of
homogeneously distributed coupled oscillators.

As is well known, the brain can show macroscopic electric oscillations stemming from a coherent firing of neurons. For these reasons and many others, the
problem of dealing with the behavior of coupled oscillators is of fundamental importance.

5.1 Linear Oscillators Coupled Together

We now turn to a more mathematically orientated discussion of coupled oscillators, where we shall be interested in the mathematical features of the solutions
irrespective of the physical nature of the oscillators. For the following it will be
important to distinguish between linear and nonlinear cases because their
behavior may be entirely different.

5.1.1 Linear Oscillators with Linear Coupling


A linear oscillator may be described by an equation of the form

d²x_1/dt² = - ω_1² x_1,   (5.1.1)

where x_1 is a time-dependent variable, whereas ω_1 is the eigenfrequency. Such an
oscillator may be coupled linearly to a second one, i. e., by additional terms
which are linear in x_2 and x_1, respectively,

(5.1.2)

(5.1.3)

Introducing additional variables, which in Hamiltonian mechanics are called
momenta, p_1 and p_2, by

(5.1.4)

(5.1.5)

we may cast (5.1.2) in the form of first-order differential equations

(5.1.6)

(5.1.7)

and the same can be done with (5.1.3). Writing the variables x_1, p_1, x_2, p_2 in the
form of a vector

(5.1.8)

we may cast the set of equations (5.1.6) and (5.1.7) and their corresponding equations with the index 2 into the form

(5.1.9)

where L is a matrix with time-independent coefficients. Clearly, any number of
linearly coupled linear oscillators can be cast into the form (5.1.9). The solutions
of this type of equation were derived in Sect. 2.6, so that the problem of these
oscillators is entirely solved. Later we shall show that the same kind of solutions
as those derived in Sect. 2.6 holds if the variables x_i are considered as continuously distributed in space.
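A minimal numerical sketch of (5.1.9) (Python/NumPy; the values of ω_1, ω_2 and of the coupling constant ε are arbitrary illustrative choices of ours, as is the sign convention of the coupling): the vector q = (x_1, p_1, x_2, p_2) obeys dq/dt = Lq, which is solved through the eigenvalues of L in the spirit of Sect. 2.6.

```python
import numpy as np

w1, w2, eps = 1.0, 1.4, 0.1
# dq/dt = L q for q = (x1, p1, x2, p2): two linear oscillators with
# linear couplings eps*x2 and eps*x1
L = np.array([[0.0, 1.0, 0.0, 0.0],
              [-w1 ** 2, 0.0, eps, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [eps, 0.0, -w2 ** 2, 0.0]])
lam, V = np.linalg.eig(L)

def propagate(q0, t):
    """q(t) = V exp(lam t) V^{-1} q0, the explicit solution of dq/dt = L q."""
    c = np.linalg.solve(V, q0.astype(complex))
    return (V @ (np.exp(lam * t) * c)).real

q0 = np.array([1.0, 0.0, 0.0, 0.0])
# for weak coupling the eigenvalues are purely imaginary pairs: two slightly
# shifted normal-mode frequencies, so the motion superposes two oscillations
print(np.allclose(lam.real, 0.0, atol=1e-8))
print(np.allclose(propagate(q0, 0.0), q0))
```

The two imaginary eigenvalue pairs are the normal-mode frequencies of the coupled system; for ε → 0 they reduce to ω_1 and ω_2.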

5.1.2 Linear Oscillators with Nonlinear Coupling. An Example. Frequency Shifts

An example of this type is provided by

(5.1.10)

(5.1.11)

where the terms on the right-hand side of these equations provide the coupling.
In order to discuss some features of the solutions of these equations we introduce new variables. We make

(5.1.12)

so that we may replace (5.1.10) by

(5.1.13)

(5.1.14)

and similar equations can be obtained for the second oscillator. Introducing

(5.1.15)

we may cast (5.1.13 and 14) into a single equation. To this end we multiply
(5.1.13) by i and add it to (5.1.14). With use of (5.1.15) we then obtain

(5.1.16)

Inserting the hypothesis

(5.1.17)

r_i, φ_i being real, into (5.1.16), dividing both sides by exp(-iφ), and separating
the real from the imaginary part, we obtain the two equations

(5.1.18)

(5.1.19)


Similar equations can be obtained for r_2 and φ_2. These equations represent equations of motion for the radii r_i and phase angles φ_i. Collecting variables in the
form of vectors

(φ_1, φ_2)ᵀ = φ,   (5.1.20)

(ω_1, ω_2)ᵀ = ω,   (5.1.21)

(r_1, r_2)ᵀ = r,   (5.1.22)

and writing the rhs of the equations for φ in the form

(f_1, f_2)ᵀ = f,   (5.1.23)

and those for r in the form

(g_1, g_2)ᵀ = g,   (5.1.24)

we find equations of the general form

dφ/dt = ω + f(r, φ) and   (5.1.25)

dr/dt = g(r, φ).   (5.1.26)

Again it is obvious how we can derive similar equations for a number n of
linear oscillators which are coupled nonlinearly. Nonlinear coupling between
linear oscillators can cause behavior of the solutions which differs qualitatively
from that of linearly coupled linear oscillators. As is evident from (5.1.19),
φ_1 = ω_1 t is no longer a solution, i. e., the question arises immediately as to how
we can still speak of a periodic oscillation.
A still rather simple but nevertheless fundamental effect can be easily observed when we change the kind of coupling to a somewhat different one, e.g.,
into the form

(5.1.27)


The equation for the phase angle φ_1 then acquires the form

(5.1.28)

Let us now assume in the way of a model that r_2 is a time-independent constant.
In order to take into account the effect of the term containing φ_1 and φ_2 on the rhs of
(5.1.28), we may average that term in a first approximation over a long time
interval. Because for such a time average we obtain

(5.1.29)

we obtain instead of (5.1.28) a new equation for φ_1 in lowest approximation, i. e.,
for φ_1^{(0)},

(5.1.30)

It indicates that in lowest approximation the effect of the nonlinear terms on the
rhs of (5.1.28) consists in a shift of frequencies. Therefore, whenever nonlinear
coupling between oscillators is present, we have to expect such frequency shifts
and possibly further effects. In the following we shall deal with various classes of
nonlinearly coupled oscillators. In one class we shall study the conditions under
which the behavior of the coupled oscillators can be represented by a set of,
possibly, different oscillators which are uncoupled. We shall see that this class
plays an important role in many practical cases. On the other hand, other large
classes of quite different behavior have been found more recently. One important
class consists of solutions which describe chaotic behavior (Sect. 8.11.2).

5.2 Perturbations of Quasiperiodic Motion for Time-Independent Amplitudes (Quasiperiodic Motion Shall Persist)

In order to elucidate and overcome some of the major difficulties arising when
we deal with nonlinearly coupled oscillators, we shall consider a special case,
namely equations of the form (5.1.25) where we assume that r is a time-independ-
ent constant.
If there is no coupling between oscillators their phase angles obey equations
of the form

d({J
-=w. (5.2.1)
dt

Since these oscillators generally oscillate at different frequencies ω_1, …, ω_n, we
may call their total motion quasiperiodic. Now we switch on a coupling between


these oscillators and let it grow to a certain amount. In such a case we have to
deal with equations of the form

dφ/dt = ω + εf(φ).   (5.2.2)

It is assumed that f is periodic in φ_1, …, φ_n and that it can be represented as a
Fourier series in the form

f(φ) = Σ_m f_m exp(i m · φ), where   (5.2.3)

(5.2.4)

The small quantity ε in (5.2.2) indicates the smallness of the additional force f.
Let us assume that under the influence of the coupling the solutions do not lose
their quasiperiodic character. As we have seen in the preceding section by means
of the explicit examples (5.1.28, 30), the additional term f can cause a frequency
shift. On the other hand, we shall see below that we can treat the perturbation εf
in an adequate way only if the frequencies ω_j remain fixed. The situation will
turn out to be quite analogous to that in Sect. 2.1.3, where the convergence of
certain Fourier series depended crucially on certain irrationality conditions
between the frequencies ω_j. Indeed, it will turn out that we must resort precisely to
those same irrationality conditions. To this end we must ensure that at all times
the basic frequencies ω_j remain unaltered. This can be achieved by a formal trick:
we not only introduce the perturbation f into (5.2.1) but at the same time
a counterterm Δ(ε) which in each order of ε just cancels the effect of the
frequency shift caused by f. Therefore we consider instead of (5.2.2) the equation

dφ/dt = ω + Δ(ε) + εf(φ),   (5.2.5)

where Δ(ε) is a function of ε still to be determined. (The introduction of such
counterterms is, by the way, not new to physicists who have been dealing with
quantum electrodynamics or quantum field theory. Here the ω's correspond to
the observed energies of a particle, but due to the coupling of that particle with a
field (corresponding to f) an energy shift is caused. In order to obtain the
observed energies, adequate counterterms are introduced into the Hamiltonian.
The corresponding procedure is called renormalization of the mass, electric
charge, etc.)
To simplify the notation, in the following we shall make the substitution

εf(φ) → f(φ).   (5.2.6)

We shall now devise an iteration procedure to solve (5.2.5). It will turn out that
this procedure, developed by Kolmogorov and Arnold and further elaborated by
Bogolyubov and others, converges very rapidly. We first proceed in a heuristic


fashion. In lowest approximation we average equation (5.2.5) over a sufficiently
long time interval, so that we obtain

dφ/dt = ω + Δ + f̄.   (5.2.7)

We now choose in this approximation

Δ = Δ_0 = - f̄,   (5.2.8)

so that the solution, which we also denote by

(5.2.9)

acquires the form

φ^{(0)} = ωt + φ_0.   (5.2.10)

Obviously we have chosen the counterterm Δ in such a way that φ^{(0)} retains the
old frequencies ω_j. In ordinary perturbation theory we should now insert (5.2.10)
into the rhs of (5.2.5) in order to obtain an improved solution φ̂^{(1)}, i. e.,

dφ̂^{(1)}/dt = (ω + Δ + f̄) + Σ_{m≠0} f_m exp[i m · (ωt + φ_0)].   (5.2.11)

This solution can be found directly by integrating the rhs of (5.2.11), yielding

φ̂^{(1)} = ωt + φ_0 + Σ_{m≠0} (f_m/(i m · ω)) exp[i m · (ωt + φ_0)].   (5.2.12)

The expression on the rhs, namely

f̂(φ) = Σ_{m≠0} (f_m/(i m · ω)) exp(i m · φ),   φ = ωt + φ_0,   (5.2.13)

is familiar from Sect. 2.1.3. There we have seen that (5.2.13) may cause serious
convergence difficulties, but these convergence difficulties do not occur if the ω's
fulfill a Kolmogorov-Arnold-Moser (KAM) condition, which we shall take in the
form

|m·ω| ≥ K ‖m‖^{−(n+1)} , ‖m‖ = |m₁| + |m₂| + ··· + |m_n| . (5.2.14)

Below a condition on f will be imposed (namely that of being analytic in a certain
domain) to ensure that (5.2.13) converges when (5.2.14) is fulfilled. The solution
(5.2.12) can then be cast into the form (φ = φ̂^(1))

φ = ωt + φ₀ + f̃(ωt + φ₀) . (5.2.15)
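The content of the KAM condition (5.2.14) can be probed numerically. In the following sketch the frequency vector (1, golden mean) and the constant K are illustrative choices; the golden mean is the classic example of a badly approximable (Diophantine) frequency ratio, and the quantity tested is |m·ω| ‖m‖^{n+1}, which (5.2.14) requires to stay bounded away from zero:

```python
import itertools
import math

# Numerical probe of the KAM condition (5.2.14). The frequency vector
# (1, golden mean) and the constant K are illustrative choices only.
omega = (1.0, (1.0 + math.sqrt(5.0)) / 2.0)
n = len(omega)
K = 0.5

def norm1(m):
    return sum(abs(mi) for mi in m)

# smallest value of |m . omega| * ||m||^(n+1) over all m != 0, |m_i| <= 30;
# the KAM condition demands that this quantity never drop below K
worst = min(
    abs(sum(mi * wi for mi, wi in zip(m, omega))) * norm1(m) ** (n + 1)
    for m in itertools.product(range(-30, 31), repeat=n)
    if any(m)
)
print(worst, worst >= K)
```

For a resonant (rationally dependent) ω, by contrast, some |m·ω| vanishes exactly and no K > 0 can satisfy (5.2.14).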

5.2 Perturbations of Quasiperiodic Motion for Time-Independent Amplitudes 161

Within conventional perturbation theory we should now continue our approximation
scheme by inserting (5.2.15) into the rhs of (5.2.5), thus hoping to
improve the approximation. However, the convergence of this procedure has
been questioned, and the following rapidly converging procedure has been
devised. The basic idea is as follows.
We shall use (5.2.15) as a starting point for a sequence of transformations of
variables, namely we shall introduce instead of the wanted variable φ yet another
unknown variable φ^(1) by the hypothesis

φ = φ^(1) + f̃(φ^(1)) , (5.2.16)

where f̃ is identical with that occurring in (5.2.15). When we insert (5.2.16) into
(5.2.5) we find a rather complicated equation for φ^(1), so that nothing seems to be
gained. But when rearranging the individual terms of this equation it transpires
that it has precisely the same form as (5.2.5), with one major difference. On the
rhs, terms occur which can be shown to be smaller than those of equation (5.2.5),
i.e., the smallness parameter ε is now replaced by ε². Continuing this procedure,
we find a new equation in which the smallness parameter is ε⁴, then we find ε⁸,
etc. This indicates, at least from a heuristic point of view, that the iteration
procedure converges rapidly, as shall be proved in this section.
We insert the hypothesis (5.2.16) into both sides of (5.2.5). Of course, φ^(1) is a
function of time. Therefore, when differentiating f̃ with respect to time, by
applying the chain rule we first differentiate f̃ with respect to φ^(1) and then φ^(1)
with respect to time t.
In this connection it will be advantageous to use a notation which is not too
clumsy. If v is an arbitrary vector with components v₁, ..., v_n, we introduce the
following notation

f̃_φ v = Σ_{k=1}^{n} (∂f̃/∂φ_k) v_k , (5.2.17)

where the lhs is defined by the rhs. When we choose v = ω we readily find from (5.2.13)
the identity

f̃_{φ^(1)} ω = f(φ^(1)) − f₀ . (5.2.18)

It just tells us that by differentiating (5.2.13) with respect to φ^(1) and multiplying
it by ω, we obtain f except for the constant term (f̄ = f₀).


After these preliminaries we may immediately write down the result we obtain when
inserting (5.2.16) into (5.2.5):

dφ^(1)/dt + f̃_{φ^(1)}(φ^(1)) dφ^(1)/dt = ω + Δ + f[φ^(1) + f̃(φ^(1))] . (5.2.19)

This equation can be given a somewhat different form, as one can check directly.
The following equation

(1 + f̃_{φ^(1)}) dφ^(1)/dt = (1 + f̃_{φ^(1)}) ω + Δ + f₀ + f[φ^(1) + f̃(φ^(1))] − f(φ^(1)) (5.2.20)

is identical with (5.2.19) provided we make use of (5.2.18). Dividing both sides of
(5.2.20) by

1 + f̃_{φ^(1)} , (5.2.21)

we obtain

dφ^(1)/dt = ω + (1 + f̃_{φ^(1)})^{−1} {Δ + f₀ + f[φ^(1) + f̃(φ^(1))] − f(φ^(1))} . (5.2.22)

We now wish to show that (5.2.22) has the same shape as (5.2.5). To this end we
introduce the abbreviation

Δ^(1) = Δ + f₀ (5.2.23)

and abbreviate the remaining terms on the rhs of (5.2.22) by f^(1), so that

f^(1)(φ^(1)) = (1 + f̃_{φ^(1)})^{−1} {Δ + f₀ + f[φ^(1) + f̃(φ^(1))] − f(φ^(1))} − Δ^(1) . (5.2.24)

In this way (5.2.22) acquires the form

dφ^(1)/dt = ω + Δ^(1) + f^(1)(φ^(1)) , (5.2.25)


which, as predicted, has the same form as (5.2.5), but where Δ^(1) now replaces Δ
and f^(1) replaces f (or εf).
We now want to convince ourselves, at least for heuristic purposes, that f^(1)
and Δ^(1) are smaller by an order of ε than the corresponding terms in (5.2.5). Since f is
proportional to ε [compare (5.2.6)] we have immediately

∂f̃/∂φ^(1) ∝ ε , (5.2.26)

and from (5.2.23) we obtain

Δ^(1) ∝ ε . (5.2.27)

Due to (5.2.26) we have for small enough ε

(1 + f̃_{φ^(1)})^{−1} = 1 + O(ε) , (5.2.28)

and because f̃ ∝ ε and f ∝ ε we conclude

f[φ^(1) + f̃(φ^(1))] − f(φ^(1)) ∝ ε² . (5.2.29)

Taking all these estimates together we immediately arrive at the result that f^(1)
goes with ε². Now, the idea is to continue this iteration procedure by making the
following substitutions

φ^(1) → φ^(2) , (5.2.30)

Δ^(1) → Δ^(2) , (5.2.31)

f^(1) → f^(2) , (5.2.32)

and in particular

φ^(1) = φ^(2) + f̃^(1)(φ^(2)) . (5.2.33)

It is obvious how to continue this iteration procedure:

φ^(s) = φ^(s+1) + f̃^(s)(φ^(s+1)) . (5.2.34)

Because the last two terms on the rhs of (5.2.25), etc., become smaller and
smaller, we expect that eventually we obtain

lim_{s→∞} (dφ^(s)/dt) = ω (5.2.35)


as solution of our starting equation (5.2.5).

For readers interested in applications we show how the iteration procedure
works explicitly. To this end let us write down the first three steps

φ = φ^(1) + f̃(φ^(1)) , (5.2.36)

φ^(1) = φ^(2) + f̃^(1)(φ^(2)) , (5.2.37)

φ^(2) = φ^(3) + f̃^(2)(φ^(3)) . (5.2.38)

If f^(2) is small enough we may assume

dφ^(2)/dt = ω (5.2.39)

with the solution

φ^(2) = ωt + φ₀ , (5.2.40)

where we may take φ₀ = 0 for simplicity.
Inserting (5.2.40) into (5.2.37) we obtain as an explicit solution to the starting
equation (5.2.5)

φ = ωt + f̃^(1)(ωt) + f̃[ωt + f̃^(1)(ωt)] . (5.2.41)

If f^(2) is not sufficiently small but f^(3) is instead, we may have

dφ^(3)/dt = ω (5.2.42)

and therefore

φ^(3) = ωt , (5.2.43)

from which we obtain

φ^(2) = ωt + f̃^(2)(ωt) . (5.2.44)

Inserting this into φ^(1) we obtain

φ^(1) = ωt + f̃^(2)(ωt) + f̃^(1)[ωt + f̃^(2)(ωt)] , (5.2.45)

and repeating this step once again to come back to φ we obtain our final result

φ = ωt + f̃^(2)(ωt) + f̃^(1)[ωt + f̃^(2)(ωt)]
  + f̃(ωt + f̃^(2)(ωt) + f̃^(1)[ωt + f̃^(2)(ωt)]) . (5.2.46)


This result clearly indicates the construction of φ. It contains terms which
increase linearly with t. The next terms are quasiperiodic functions of time
where, for instance, the second term f̃^(1) depends on a frequency which itself is
again a periodic function of time. Evidently, by continuing this kind of substitution
we obtain a representation of φ in the form of a series of terms f̃^(s). In addition,
the arguments of these f̃'s appear themselves as a series. A proof of the convergence
of the procedure must show that the f̃'s converge provided the initial f̃ is
small enough. In the subsequent section we shall study some of the most interesting
aspects of the convergence properties of f̃^(s).
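For the single-frequency case n = 1, where no small divisors occur (|mω| ≥ ω), the iteration can be sketched numerically in a few lines. The following Python fragment implements one version of the transformation step on a Fourier grid; the frequency ω, the strength ε of the initial perturbation and the grid size are illustrative choices, and at each stage the mean of the new perturbation is simply absorbed into the counterterm:

```python
import numpy as np

# Numerical sketch of the rapidly converging iteration of Sect. 5.2 for a
# single oscillator (n = 1), dphi/dt = omega + Delta + f(phi). All
# parameter values below are illustrative choices.
omega, eps, N = 1.0, 0.05, 256
psi = 2 * np.pi * np.arange(N) / N
m = np.fft.fftfreq(N, d=1.0 / N)            # integer mode numbers

def evaluate(c, x):
    # evaluate the trigonometric polynomial sum_m c_m exp(i m x) at points x
    return np.real(np.sum(c[:, None] * np.exp(1j * np.outer(m, x)), axis=0))

f = eps * np.sin(psi) ** 2                  # initial perturbation
sizes = []
for step in range(4):
    c = np.fft.fft(f) / N                   # Fourier coefficients f_m
    f0 = c[0].real                          # mean value; Delta_0 = -f0
    ctil = np.where(m != 0, c / (1j * m * omega + (m == 0)), 0.0)
    ftil = evaluate(ctil, psi)              # tilde-f(psi), cf. (5.2.13)
    ftil_p = (f - f0) / omega               # d tilde-f / dpsi, cf. (5.2.18)
    sizes.append(np.max(np.abs(f - f0)))
    # transformed perturbation, cf. (5.2.22); its mean is pushed into the
    # next counterterm, so only the oscillating part is kept
    f = (evaluate(c, psi + ftil) - f) / (1.0 + ftil_p)
    f -= f.mean()
print(sizes)  # each entry is roughly the square of the one before
```

The printed sizes shrink from one step to the next roughly quadratically, in accordance with the heuristic ε → ε² → ε⁴ → ... argument above.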

5.3 Some Considerations on the Convergence of the Procedure *

A complete proof of the convergence of the procedure described in the preceding
section is not given here, since a detailed proof of a more general problem is
presented in the Appendix.
Here we wish to give the reader an idea of the specific ideas underlying the
whole procedure. We shall see that the procedure contains a sophisticated
balance between two competing processes which tend to create and destroy the
convergence, respectively. We start with an auxiliary theorem.
Lemma 5.3.1. We assume that f(φ) is an analytic function on the domain

|Im{φ_j}| ≤ ρ , (5.3.1)

where ρ is a given positive constant. We further assume that f is bounded in this
domain with

|f(φ)| ≤ M . (5.3.2)

Then, for the function

f̃(φ) = Σ_{m≠0} [f_m/(im·ω)] exp(im·φ) (5.3.3)

on the domain

|Im{φ_j}| ≤ ρ − 2δ , (5.3.4)

the inequality

|f̃(φ)| ≤ (M/K) C (1/δ^{2n+1}) (5.3.5)

holds. In it C is a numerical constant which depends on the dimension n of the
vector φ:

C = [(n+1)/e]^{n+1} (1 + e)^n . (5.3.6)


The function (5.3.3) is analytic on the domain (5.3.4). While the constant M is
introduced in (5.3.2), the constant K occurs in the inequality of the KAM condition
(5.2.14). In (5.3.4), δ is a positive constant which must be suitably chosen, as is
explained below. In a similar way one finds the inequality

Σ_k |∂f̃(φ)/∂φ_k| ≤ (M/K) C′ (1/δ^{2n+2}) , (5.3.7)

where the constant C′ is given by

C′ = [(n+2)/e]^{n+2} (1 + e)^n . (5.3.8)

We now want to show how the inequality (5.3.5, 6) can be proven. The reader
who is not so interested in these details can proceed to formula (5.3.30). We
decompose f(φ) into its Fourier series. The Fourier coefficients are given by the
well-known formula

f_m = (2π)^{−n} ∫₀^{2π} ··· ∫₀^{2π} f(Θ) exp(−im·Θ) dΘ₁ ··· dΘ_n . (5.3.9)

Integration over the angles Θ can be interpreted as an integration over the complex
plane where z = r exp(iΘ), r = 1.
Because f is an analytic function, according to the theory of functions we may
deform the path of integration as long as f remains analytic on the domain over
which the integration runs. This allows us to replace (5.3.9) by

f_m = (2π)^{−n} ∫₀^{2π} ··· ∫₀^{2π} f(Θ + iΦ) exp[−im·(Θ + iΦ)] dΘ₁ ··· dΘ_n . (5.3.10)

We now choose the individual components Φ_j of Φ by

Φ_j = −ρ sign(m_j) . (5.3.11)

We then immediately obtain

m·Φ = −ρ ‖m‖ , (5.3.12)

where

‖m‖ = |m₁| + |m₂| + ··· + |m_n| . (5.3.13)

By use of (5.3.2), (5.3.9) can be estimated by

|f_m| ≤ M exp(−ρ ‖m‖) . (5.3.14)
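The exponential decay law (5.3.14) is easy to check numerically. In the following sketch the test function 1/(a − cos φ) with a = 5/4 is an illustrative choice; it is analytic on the strip |Im φ| < ln 2, so its successive Fourier coefficients should shrink by a factor of about exp(−ln 2) = 1/2:

```python
import numpy as np

# Numerical check of (5.3.14): an analytic periodic function has
# exponentially decaying Fourier coefficients. Test function and width
# parameter a are illustrative choices for this sketch.
N, a = 512, 1.25
phi = 2 * np.pi * np.arange(N) / N
f = 1.0 / (a - np.cos(phi))                 # analytic on |Im phi| < ln 2
c = np.abs(np.fft.fft(f)) / N               # |f_m| for m = 0, 1, 2, ...
ratios = c[1:12] / c[:11]
print(ratios)                               # all close to 1/2 = exp(-ln 2)
```

The width ρ = ln 2 of the analyticity strip is exactly the decay rate seen in the coefficients, in agreement with (5.3.14).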


We choose the imaginary part of φ_j according to

|Im{φ_j}| ≤ ρ − 2δ . (5.3.15)

Starting from the definition (5.3.3) for f̃ we obtain the estimate

|f̃(φ)| ≤ Σ_{m≠0} (|f_m|/|m·ω|) exp[‖m‖(ρ − 2δ)] (5.3.16)

by taking in each term of the series the absolute value and using (5.3.15). Using
the KAM condition (5.2.14) and (5.3.14) we obtain

|f̃(φ)| ≤ (M/K) Σ_m ‖m‖^{n+1} exp(−2δ ‖m‖) . (5.3.17)

It is now rather easy to estimate the sum over m in the following way. We
write

Σ_m ‖m‖^ν exp(−2δ ‖m‖) with (5.3.18)

0 < δ < 1 , ν > 1 (5.3.19)

and introduce the abbreviation

‖m‖ = z . (5.3.20)

The maximum of

ν ln z − δz (5.3.21)

for z ≥ 1 (as ‖m‖ ≥ 1) lies at

z = ν/δ . (5.3.22)

Using (5.3.22) we may estimate (5.3.21) by

ν ln z − δz ≤ ν ln(ν/δ) − ν . (5.3.23)

Taking the exponential of both sides of (5.3.23), we find after a slight rearrangement

z^ν exp(−δz) ≤ (ν/e)^ν (1/δ^ν) . (5.3.24)


Inserting this relation into (5.3.18), we readily obtain

Σ_m ‖m‖^ν exp(−2δ ‖m‖) ≤ (ν/e)^ν (1/δ^ν)
  · Σ_m exp(−|m₁|δ − |m₂|δ − ··· − |m_n|δ) = S , (5.3.25)

where the sum over m can be written as the nth power of the infinite sum over a
single variable q = 1, 2, 3, ..., so that the rhs of (5.3.25) is

S = (ν/e)^ν (1/δ^ν) (1 + 2 Σ_{q=1}^{∞} e^{−δq})^n . (5.3.26)

Summing up the geometric series yields

S = (ν/e)^ν (1/δ^ν) [(1 + e^{−δ})/(1 − e^{−δ})]^n . (5.3.27)

Using

1 − e^{−δ} ≥ 2δ/(1 + e) for 0 < δ < 1 (5.3.28)

and thus

(1 + e^{−δ})/(1 − e^{−δ}) ≤ 2/(1 − e^{−δ}) < (1 + e)/δ , (5.3.29)

we obtain the final result

Σ_m ‖m‖^ν exp(−2δ ‖m‖) < (ν/e)^ν (1/δ^{ν+n}) (1 + e)^n . (5.3.30)

Using (5.3.30, 17) we obtain the desired result (5.3.5, 6). As each member of the
infinite sum (5.3.3) is analytic and this sum (5.3.3) is bounded [cf. (5.3.5)] on the
domain (5.3.4), (5.3.3) is analytic on the domain (5.3.4).
After this small exercise we resume the main theme, namely, we wish to consider
the iteration procedure described in Sect. 5.2 and its convergence. Since f^(1)
in (5.2.24) is built up of the functions f, f̃ and derivatives as they appear in (5.3.2,
5, 7), estimates for f^(1) can now be derived. Since these estimates are somewhat
lengthy and we shall come to this question in a more general frame in the
Appendix, we shall first give the result of the estimate. One finds that

f^(1)(φ^(1)) (5.3.31)


is analytic in the domain

|Im{φ_j^(1)}| ≤ ρ − 2δ (5.3.32)

and obeys the inequality

|f^(1)(φ^(1))| ≤ M₁ , (5.3.33)

M₁ = (C″/K) M²/δ^{2n+2} . (5.3.34)

Here C″ is a numerical constant of a similar structure to (5.3.6, 8), M is introduced
in (5.3.2), K is the constant occurring in the KAM condition, and δ occurs
in (5.3.4 or 32).
We now come to the discussion of the competition between two different processes.
Namely, when we wish to continue our iteration procedure, we have to
proceed from M₁ to another M₂ and so on. In order to obtain convergence we
have to choose δ big enough. On the other hand, we are limited in choosing the
size of δ, because each time we go one step further we must replace the estimate
(5.3.4) correspondingly, i.e., in the first step we have

ρ₁ = ρ − 2δ₁ , (5.3.35)

in the second step,

ρ₂ = ρ₁ − 2δ₂ , (5.3.36)

and so on.
Because each time ρ_k must be bigger than 0, the sum over the δ's must not
become arbitrarily big. Rather, we have to require that ρ_k be bounded from
below so that we can perform the estimates safely in each step. Therefore we have
two conflicting requirements. On the one hand, the δ's must be big enough to
make the M_j [cf. (5.3.34)] converge; on the other hand, the δ's must not become
too big but rather must converge to 0. The attractive feature of the whole
approach is that the δ's can be chosen such that in spite of this conflict a convergent
approach can be constructed. To this end we put (γ < 1)

δ_s = γ^s (5.3.37)

and choose γ such that ρ_k → ρ/2 for k → ∞, e.g.,

2(γ + γ² + ··· + γ^s + ···) = ρ/2 (5.3.38)

holds. Summing up the geometric series on the lhs of (5.3.38) we find

δ₁ = γ = ρ/(4 + ρ) (< 1) . (5.3.39)


We are now in a position to give explicit estimates for the subsequent M_j's.
We introduce the abbreviation P = C″/K so that (5.3.33, 34) reads

|f^(1)| ≤ M₁ = P M²/δ₁^{2n+2} . (5.3.40)

When we go from M₁ to M₂ we have to replace M by M₁ and δ₁ by δ₂, thus
obtaining

|f^(2)| ≤ M₂ = P M₁²/δ₂^{2n+2} . (5.3.41)

It is obvious how to continue:

|f^(3)| ≤ M₃ = P M₂²/δ₃^{2n+2} , (5.3.42)

|f^(s+1)| ≤ M_{s+1} = P M_s²/δ_{s+1}^{2n+2} . (5.3.43)

Using (5.3.37) we may write (5.3.43) in the form

M_{s+1} = P M_s²/γ^{(s+1)(2n+2)} . (5.3.44)

Now we introduce as an abbreviation

N = 1/γ^{2n+2} , (5.3.45)

which allows us to write (5.3.44) in the form

M_{s+1} = P N^{s+1} M_s² . (5.3.46)

We choose M sufficiently small so that

r₀ = P N² M < 1 . (5.3.47)

Now we require P ≥ 1, i.e., the constant K appearing in the KAM condition has
to be chosen small enough, as follows from (5.3.40). It is then easy to derive for
the recursion formulas (5.3.40 - 43) the relation

M_s ≤ (P N^{s+2})^{−1} r₀^{2^s} . (5.3.48)


It tells us the following. If we choose r₀ small enough, M_s converges rapidly,
namely with powers of r₀^{2^s}. Since M is related to r₀ by (5.3.47), we have shown
that the whole procedure converges like a geometric series provided M is chosen
small enough. This concludes our present considerations.
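The doubly exponential ("superconvergent") character of the recursion M_{s+1} = P N^{s+1} M_s² can be made concrete in a few lines of code. The numerical values of P, N and M below are illustrative, and the closed-form bound checked in the assertions, r₀^{2^s}/(P N^{s+2}) with r₀ = P N² M, is one form consistent with the recursion:

```python
# Illustration of the superconvergence recursion M_{s+1} = P N^{s+1} M_s^2,
# cf. (5.3.46). P, N, M are illustrative values; with r0 = P N^2 M < 1 the
# sequence collapses extremely fast: the exponent of r0 doubles each step.
P, N, M = 2.0, 4.0, 1e-4
r0 = P * N ** 2 * M                          # here 3.2e-3 < 1
Ms = [M]
for s in range(5):
    Ms.append(P * N ** (s + 1) * Ms[-1] ** 2)
for s, m_s in enumerate(Ms):
    bound = r0 ** (2 ** s) / (P * N ** (s + 2))
    assert m_s <= bound * (1.0 + 1e-12)      # closed-form bound holds
print(Ms)
```

After only five steps the estimate has dropped by some eighty orders of magnitude, which is why a fixed "loss of domain" 2δ_s at each step can be afforded.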

6. Nonlinear Coupling of Oscillators:
The Case of Persistence of Quasiperiodic Motion

In this chapter we shall present a theorem developed by Moser which extends


former work by Kolmogorov and Arnold. The problem to be treated contains
those of Sects. 3.9 and 5.2, 3 as special cases. As we have seen before, a set of
linearly coupled harmonic oscillators can oscillate at certain basic frequencies so
that their total motion is a special case of a quasiperiodic motion. In this chapter
we shall deal with the important problem whether nonlinearly coupled oscillators
can also perform quasiperiodic motion. We also include in this consideration
oscillators which by themselves are nonlinear.

6.1 The Problem

To illustrate the basic idea let us start with the simplest case of a single oscillator
described by an amplitude r and a phase angle φ. In the case of a linear
(harmonic) oscillator the corresponding equations read

φ̇ = ω , (6.1.1)

ṙ = 0 . (6.1.2)

In many practical applications, e.g., radio engineering or fluid dynamics, we
deal with a self-sustained oscillator whose amplitude r obeys a nonlinear
equation, e.g.,

ṙ = αr − βr³ . (6.1.3)

While (6.1.2) allows for solutions with arbitrary time-independent amplitude r,
(6.1.3) allows only for a definite solution with time-independent amplitude,
namely

r = r₀ = √(α/β) , α, β > 0 , (6.1.4)

(besides the unstable solution r = 0).



Fig. 6.1.1. Visualization of the solution of (6.1.1, 3): a particle which moves in an overdamped
way in the potential shown in this figure and which rotates with angular frequency ω

Fig. 6.1.2. (a) The stationary solution of (6.1.1, 3). (b) The transient solution of the same
equations. Note that with increasing time ξ relaxes to 0

As is well known [1], equations (6.1.1 and 3) can describe a particle moving in
a potential which rotates at a frequency ω and whose motion in the r-direction is
overdamped. As is obvious from Fig. 6.1.1, the particle will always relax with its
amplitude r(t) towards r₀ (6.1.4).
If we focus our attention on small deviations of r from r₀ we may put

r = r₀ + ξ (6.1.5)

and derive a linearized equation for ξ in the form

ξ̇ = −2αξ + O(ξ²) . (6.1.6)

In the following discussion we shall neglect the term O(ξ²), which indicates a
function of ξ of order ξ² or still higher order. The solutions of the equations
(6.1.1) and (6.1.3, 6) can be easily visualized by means of the potential picture
and can be plotted according to Fig. 6.1.2, where Fig. 6.1.2a shows stationary
motion, while Fig. 6.1.2b shows how the system relaxes when initially ξ was
chosen unequal to zero.
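The relaxation behavior described by (6.1.3 - 6) can be verified directly by numerical integration; the parameter values a, b and the step size in the following sketch are arbitrary illustrative choices:

```python
import math

# Numerical check of (6.1.3-6): integrate dr/dt = a*r - b*r**3 with Euler
# steps; the amplitude relaxes to r0 = sqrt(a/b), and small deviations xi
# decay like exp(-2*a*t), as predicted by the linearized equation (6.1.6).
a, b, dt = 1.0, 4.0, 1e-4
r0 = math.sqrt(a / b)                     # stationary amplitude, here 0.5

r = 0.9                                   # start well away from r0
for _ in range(100000):                   # integrate up to t = 10
    r += dt * (a * r - b * r ** 3)
final = r
print(final, r0)                          # final has relaxed to r0

xi0 = 1e-3                                # small initial displacement
r = r0 + xi0
for _ in range(10000):                    # integrate up to t = 1
    r += dt * (a * r - b * r ** 3)
decay = (r - r0) / xi0
print(decay, math.exp(-2 * a))            # both close to exp(-2)
```

The second run confirms the linearized relaxation rate −2α of (6.1.6): after unit time the small displacement has shrunk by the factor e^{−2α}, up to corrections of order ξ.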
These considerations can be extended to a set of oscillators and to the more
general case in which the number of frequencies ω₁, ω₂, ..., ω_n differs from the
number of amplitude displacements ξ₁, ..., ξ_m. In such a case equations
(6.1.1, 6) can be generalized to the equations

φ̇^(0) = ω , (6.1.7)

ξ̇^(0) = A ξ^(0) , (6.1.8)

where we use an obvious vector notation. Here A is a matrix which we assume to
be diagonalizable. Because we shall assume that ξ^(0) is real, in general we cannot
assume that A has been fully diagonalized. The reader may note that (6.1.7) has
the same form as the unperturbed (ε = 0) equation (5.2.1), whereas (6.1.8) has
the same form as the unperturbed equation (3.9.10) with M₀ = 0.


We now introduce a nonlinear coupling between the φ's and ξ's, i.e., we are
interested in equations of the form

φ̇^(1) = ω + εf(φ^(1), ξ^(1), ε) , (6.1.9)

ξ̇^(1) = A ξ^(1) + εg(φ^(1), ξ^(1), ε) , (6.1.10)

where f and g are functions which are periodic in φ^(1) with period 2π. For mathematical
reasons we shall further assume that they are real analytic in φ^(1), ξ^(1), ε,
where ε is a small parameter.
As we know from the discussion in Sect. 5.2, the additional term f may cause
a change of the basic periods T_j = 2π/ω_j. For reasons identical with those of
Sect. 5.2, we must introduce a counterterm in (6.1.9) which is a constant vector
and will guarantee that the basic periods remain the same, even if the interaction
term f is taken into account. This leads us to consider equations of the form

φ̇ = ω + εf(φ, ξ, ε) + Δ , (6.1.11)

where the counterterm Δ has been added.
In addition, we wish that the matrix A remain unchanged even if εg is taken
into account. We shall see below what this sentence "A remains unchanged"
means precisely. As it will turn out again, we have to introduce another counterterm
in (6.1.10), which must be chosen in the form d + Dξ, where d is a constant
vector while D is a constant matrix.
In this way, (6.1.10) is transformed into

ξ̇ = A ξ + εg(φ, ξ, ε) + d + Dξ . (6.1.12)

It will transpire from the subsequent mathematical transformations that we may
add the requirements

Ad = 0 , (6.1.13)

AD = DA (6.1.14)

to d and D. Let us visualize the effect of the additional terms f and g and of the
counterterms Δ, d and D. In order to discuss these effects let us start with the
case n = m = 1.
We wish to consider a situation in which the quasiperiodic motion, or, in the
present special case, the periodic motion, persists. According to (6.1.7 and 8), the
unperturbed equations are given by

φ̇^(0) = ω , and (6.1.15)

ξ̇^(0) = −λξ^(0) . (6.1.16)

They describe a uniform rotation of the angle φ^(0), φ^(0) = ωt, and a relaxation of
ξ^(0) towards zero, so that the spiral motion of Fig. 6.1.3 emerges. As a special case


Fig. 6.1.3. Solution to φ̇^(0) = ω, ξ̇^(0) = −λξ^(0)    Fig. 6.1.4. The steady-state solution of Fig. 6.1.3

we obtain φ^(0) = ωt, ξ^(0) = 0, which is the motion on the circle of Fig. 6.1.4.
These motions are perturbed when we take into account the additional terms f
and g of (6.1.9, 10), in which case we find

φ̇^(1) = ω + εf(φ^(1), ξ^(1), ε) , (6.1.17)

ξ̇^(1) = −λξ^(1) + εg(φ^(1), ξ^(1), ε) , (6.1.18)

where f and g are 2π-periodic in φ^(1). Let us first consider the case which corresponds
to ξ^(0) = 0, i.e., the circle of Fig. 6.1.4. To visualize what happens we
adopt the adiabatic approximation from Sect. 1.13. We put ξ̇^(1) = 0 in (6.1.18).
For small enough ε we may express ξ^(1) uniquely by means of φ^(1), i.e.,

ξ^(1) = F(φ^(1)) . (6.1.19)

(In fact this relation may be derived even without the adiabatic approximation, as
we shall demonstrate below, but for our present purpose this is not important.)
Relation (6.1.19) tells us that the circle of Fig. 6.1.4 is deformed into some other
closed curve (Fig. 6.1.5). Thus the old orbit ξ^(0) = 0 is deformed into the new one
(6.1.19). Inserting (6.1.19) into (6.1.17) we obtain an equation of the form

φ̇^(1) = ω + εf̃(φ^(1), ε) . (6.1.20)

Fig. 6.1.5. Stationary solution to (6.1.17, 18), qualitative plot    Fig. 6.1.6. Transient solution of (6.1.17, 18), qualitative plot


It tells us that the rotation speed φ̇^(1) depends on φ^(1). Therefore, at least in
general, we must expect that the length of the period of motion around the closed
orbit of Fig. 6.1.5 differs from that of Fig. 6.1.4. When we introduce a
suitable counterterm Δ in the equation

φ̇ = ω + εf̃(φ, ε) + Δ , (6.1.21)

we may secure that the period remains that of (6.1.15) (Fig. 6.1.4).
Let us now consider the analog of Fig. 6.1.3. There we considered an initial
state which is somewhat elongated from the stationary state ξ^(0) = 0 and which
tends to ξ^(0) = 0 with increasing time. If ε is small, the additional term g in
(6.1.18) will deform the spiral of Fig. 6.1.3 to that of Fig. 6.1.6, but qualitatively
ξ^(1) will behave the same way as ξ^(0), i.e., it relaxes towards its stationary state.
However, the relaxation speed may change. If we wish ξ^(1) to relax with the
same speed as ξ^(0), at least for small enough ξ^(1) and ε, we have to add the above-mentioned
counterterm Dξ.
The basic idea of the procedure we want to describe is now as follows. We
wish to introduce instead of φ and ξ new variables ψ and χ, respectively, so that
the Figs. 6.1.5, 6 of φ and ξ can be transformed back into those of Figs. 6.1.4, 3,
respectively. Before we discuss how to construct such a transformation, we
indicate what happens in dimensions where n ≥ 2 and/or m ≥ 2. All the essential
features can be seen with the help of the case n = 2 and, depending on an adequate
interpretation, m = 2 (or 3). Let us start again with the unperturbed motion
described by (6.1.7 and 8). In the steady state, ξ^(0) = ξ̇^(0) = 0, and we may
visualize the solutions of (6.1.7) by plotting them with coordinates φ₁, φ₂. The
variables can be visualized by means of a local coordinate system on the torus
(Fig. 6.1.7).

Fig. 6.1.7. Local coordinates with respect to a torus (schematic). We distinguish between two
systems: 1) Local coordinates on the torus, represented by φ_j^(0); 2) Coordinates pointing away
from the torus, represented by ξ_j^(0). In the figure, representing a two-dimensional torus in
three-dimensional space, a single ξ would be sufficient. In this figure we try to visualize what
happens in high-dimensional space where two linearly independent directions point away from the
torus

In the case ξ^(0) ≠ 0, provided A is diagonal,

A = diag(λ₁, ..., λ_m) , Re{λ_μ} < 0 , (6.1.22)

(6.1.8) describes a relaxation of the system's motion towards the torus. To grasp
the essential changes caused by the additional term g in (6.1.10), let us again
adopt the adiabatic approximation so that ξ₁^(1), ξ₂^(1) may be expressed by φ₁^(1), φ₂^(1)
through


Fig. 6.1.8. Deformed torus with its new coordinate system (part of a trajectory shown)

ξ₁^(1) ≡ ξ₁₀^(1) = F₁(φ₁^(1), φ₂^(1)) ,
                                        (6.1.23)
ξ₂^(1) ≡ ξ₂₀^(1) = F₂(φ₁^(1), φ₂^(1)) .

Clearly, the original torus is now distorted into a new surface (Fig. 6.1.8). On it,
the uniform rotation of φ₁^(0), φ₂^(0) is replaced by a nonuniform rotation of φ₁^(1), φ₂^(1)
due to the additional term f in (6.1.9). Consequently, in general the corresponding
periods T₁ and T₂ will be changed. To retain the original periods we must
introduce the counterterms Δ₁, Δ₂. Let us now consider the relaxation of ξ₁^(1), ξ₂^(1)
towards the deformed surface. While originally, according to our construction (or
visualization), the rectangular coordinate system ξ₁^(0), ξ₂^(0) was rotating with φ₁^(0),
φ₂^(0) uniformly along the torus, ξ₁^(1), ξ₂^(1) may form a new coordinate system which
is no longer orthogonal and in which the relaxation rate differs from λ₁, λ₂
due to the additional term g in (6.1.10). The counterterm Dξ allows us to correct
for the locally distorted coordinate system ξ₁^(1), ξ₂^(1) by making it orthogonal again
and by restoring the original relaxation rates. If the eigenvalues of A are complex
(or purely imaginary), D makes it possible to keep in particular the original A
fixed, while d may correct (in the sense of an average) for the distortion of the
torus. Therefore, due to f, g and the counterterms, the trajectories are deformed
in three ways:
1) The local coordinate system φ on the torus is changed (see Fig. 6.1.9a).
2) The radial displacement has been changed or, more generally, the position of
the surface elements of the torus has been changed (Fig. 6.1.9b).
3) The orientation of the coordinate system ξ has been changed (Fig. 6.1.9c).

Fig. 6.1.9a - c. Local changes due to the deformation of a torus. (a) New coordinates on the
torus. (b) A displacement of elements of the torus. (c) Change of the coordinate system
pointing away from the torus


This suggests the introduction of coordinate transformations that lead us
back from the deformed trajectories described by (6.1.11, 12) to the undeformed
trajectories described by (6.1.7, 8), taking these three aspects into account. To
this end we express φ and ξ by new coordinates ψ and χ, respectively. When
treating the motion along ξ₀ we make for φ the ansatz

φ = ψ + εu(ψ, ε) , (6.1.24)

which takes into account that the rotation speed is not uniform but changes
during the rotation. The factor ε in front of u indicates explicitly that this term
tends to 0, and therefore φ coincides with ψ, when we let ε tend to 0 in (6.1.11, 12).
With respect to ξ, we have to take into account that the deformed trajectory
differs from the undeformed trajectory (Fig. 6.1.7) by a dilation in the radial direction
and by a rotation of the local coordinate system. Therefore we make the following
hypothesis

ξ = χ + εv(ψ, ε) + εV(ψ, ε)χ , (6.1.25)

where v takes care of the dilation in the radial direction, whereas the second term
V·χ takes care of the rotation of the local coordinate system. Because ψ is an
angular variable, we require that u, v, V be periodic in ψ with period 2π.
Furthermore, these functions must be analytic in ψ and ε. We wish to show
that by a suitable choice of u, v, V and the counterterms Δ, d, D, we may transform
(6.1.11, 12) into equations for the new variables ψ and χ which describe
"undistorted trajectories" (Fig. 6.1.7) and which obey the equations

ψ̇ = ω + O(χ) , (6.1.26)

χ̇ = Aχ + O(χ²) . (6.1.27)

The terms O describe those terms which are neglected, and O(χ^k) denotes an
analytic function of ψ, χ, ε which vanishes together with its χ-derivatives up to order
k − 1 ≥ 0 for χ = 0. In other words, we are interested in a transformation from
(6.1.11, 12) to (6.1.26, 27), neglecting the terms denoted by O. When we specifically
choose the solution

χ = 0 , (6.1.28)

the solution of (6.1.26) reads

ψ = ωt (6.1.29)

(up to an arbitrary constant ψ₀). Inserting (6.1.28, 29) into (6.1.24, 25), we find

φ = ωt + εu(ωt, ε) and (6.1.30)

ξ = εv(ωt, ε) , (6.1.31)

indicating that we are dealing with a quasiperiodic solution.


6.2 Moser's Theorem¹ (Theorem 6.2.1)


So far we have given an outline of what we wish to prove. In order to be able to
perform the proof we need a number of assumptions which we now list. The
object of our study is (6.1.11, 12). We assume that the matrix A in (6.1.12) is
diagonalizable, and we denote its eigenvalues by All"
1) Let us consider the expressions

E jvwv + E kJJA JJ
n m
i for (6.2.1)
v=t /l=1

E Ik/ll ~ 2 and (6.2.2)

IEkJJI~ 1, (6.2.3)

with integer coefficients

(6.2.4)

We require that of the expressions (6.2.1) only finitely many vanish, namely only
for j = O. Obviously this implies that Wt, ... ,W n are rationally independent.
2) For some constants K and r with '

0<K~1, r>O, (6.2.5)

we require

Ii J/vWv + JJ~t kJJAJJI ~ K( IUIIT+ 1)-1 (6.2.6)

for all integers mentioned under point (1), for which lhs of (6.2.6) does not
vanish. Condition (6.2.6) may be considered as a generalized version of the KAM
condition. Note that Ilj II is defined in the present context by

lUll = Ijtl+ Ihl+···+ Ijnl·


3) We require that! and gin (6.1.11 and 12) have the period 271 in rp and are
real analytic in rp, C;, e. We now formulate Moser's theorem.
Theorem 6.2.1. Under conditions (1- 3) there exist unique analytic power series
,1 (e), d(e), D(e)satisfying (6.1.13,14) such that the system (6.1.11,12) posses-
ses a quasiperiodic solution with the same characteristic numbers Wt,···, Wn'
At, ... ,Am as the unperturbed solution. More explicitly speaking, a coordinate
transformation of the form (6.1.24,25) exists, analytic in 'fI, X and e which trans-
forms (6.1.11, 12) into a system of the form (6.1.26,27). In particular, (6.1.30,
31) represent a quasiperiodic solution with the above-mentioned characteristic
lOur formulation given here slightly differs from that of Moser's original publication.

581
180 6. Nonlinear Coupling of Oscillators: The Case of Persistence of Quasiperiodic Motion

numbers. All series of Δ, d, D have a positive radius of convergence in ε. While
Δ, d, D are determined uniquely, u, v, V are determined only up to a certain class
of transformations (which shall be discussed in Appendix A).
We shall proceed as follows. In the next section we shall show how the counterterms
Δ, d, D, as well as u, v and V, can be determined by a well-defined iteration
procedure. This section is of interest for those readers who wish to apply this
formalism to practical cases. In Appendix A we present the rigorous proof of
Moser's theorem. The crux of the problem consists in the proof that the iteration
procedure described in Sect. 6.3 converges.

6.3 The Iteration Procedure *

Our starting point is (6.1.11, 12), repeated here for convenience:

φ̇ = ω + εf(φ, ξ, ε) + Δ , (6.3.1)

ξ̇ = Aξ + εg(φ, ξ, ε) + d + Dξ . (6.3.2)

We wish to transform these equations by means of the transformations

φ = ψ + εu(ψ, ε) , (6.3.3)

ξ = χ + ε(v(ψ, ε) + V(ψ, ε)χ) (6.3.4)

into (6.1.26, 27).


We expand u, v, V, Δ, d, D into power series in ε and equate the coefficients
of the same powers of ε on the left- and right-hand sides of (6.3.1, 2). It is sufficient
to discuss the resulting equations for the coefficients of ε to the first power,
as the higher coefficients are determined by equations of the same form. As is
well known from iteration procedures, higher-order equations contain coefficients
of lower order.
Expressing the various quantities u, ..., D up to order ε we write

φ = ψ + εu′ + O(ε²) ,
Δ = εΔ′ + O(ε²) ,
ξ = χ + ε(v′ + V′χ) + O(ε²) , (6.3.5)
d = εd′ + O(ε²) ,
D = εD′ + O(ε²) .

We insert (6.3.3, 4) into (6.3.1), using (6.3.5), and keep only terms up to order ε.
Since (6.3.1, 2) are vector equations and we have to differentiate, for instance, u
with respect to φ, which itself is a vector, we must be careful with respect to the
notation. Therefore, we first use vector components. Thus we obtain


ψ̇_μ + ε Σ_ν (∂u′_μ/∂ψ_ν) ψ̇_ν = ω_μ + εf_μ(ψ, χ, 0) + εΔ′_μ . (6.3.6)

We now require that

ψ̇_ν = ω_ν + O(ε) (6.3.7)

and use the following abbreviation. We consider

∂u′_μ/∂ψ_ν (6.3.8)

as an element with indices μ, ν of a matrix which we write in the form

u′_ψ . (6.3.9)

Furthermore, we specialize (6.3.6) to χ = 0. In agreement with (6.1.26, 27) we
require that ψ̇_μ = ω_μ for χ = 0 be fulfilled up to order ε. It is now easy to cast
(6.3.6) again in the form of a vector equation, namely

u′_ψ ω − Δ′ = f(ψ, 0, 0) . (6.3.10)

We now insert (6.3.3, 4), using (6.3.5), into (6.3.2) and keep only terms up to order
ε, where we consider again the individual components μ. Using the chain rule of
differential calculus we obtain for the lhs of (6.3.2)

χ̇_μ + ε Σ_ν (∂v'_μ(ψ)/∂ψ_ν) ψ̇_ν + ε Σ_{ν,λ} (∂V'_{μν}(ψ)/∂ψ_λ) ψ̇_λ χ_ν + ε Σ_ν V'_{μν}(ψ) χ̇_ν . (6.3.11)
For the rhs of (6.3.2) we obtain

εg_μ(ψ, χ, 0) + ε Σ_ν (∂g_μ(ψ, χ, 0)/∂χ_ν) χ_ν + Σ_ν Λ_{μν} χ_ν + ε Σ_ν Λ_{μν} v'_ν(ψ)

+ ε Σ_{ν,ν'} Λ_{μν} V'_{νν'} χ_{ν'} + ε d'_μ + ε Σ_ν D'_{μν} χ_ν . (6.3.12)

Since we intend to derive (6.1.27) for χ we require

χ̇_μ = Σ_ν Λ_{μν} χ_ν (6.3.13)

or in vector notation

χ̇ = Λχ . (6.3.14)

Furthermore we consider

∂v'_μ/∂ψ_ν (6.3.15)

as an element with indices μ, ν of the matrix

v'_ψ . (6.3.16)

We equate the terms which are independent of χ in (6.3.11, 12). Making use of
(6.3.13, 15, 16), we readily obtain (specializing to χ = 0)

v'_ψ ω − Λv' = g(ψ, 0, 0) + d' . (6.3.17)

In the last step we equate the terms linear in χ which occur in (6.3.11, 12). We
denote the matrix whose elements μ, ν are given by

∂g_μ(ψ, χ, 0)/∂χ_ν |_{χ=0} (6.3.18)

by

g_χ . (6.3.19)

Furthermore we consider

Σ_λ (∂V'_{μν}/∂ψ_λ) ω_λ (6.3.20)

as elements with indices μ, ν of the matrix

V'_ψ ω . (6.3.21)

This leads us to the following matrix equation

V'_ψ ω + V'Λ = g_χ + ΛV' + D' . (6.3.22)

Collecting the equations (6.3.10, 17, 22) we are led to the following set of
equations (setting χ = 0):

u'_ψ ω − Δ' = f(ψ, 0, 0) ,

v'_ψ ω − Λv' − d' = g(ψ, 0, 0) , (6.3.23)

V'_ψ ω + V'Λ − ΛV' − D' = g_χ(ψ, 0, 0) .

In these equations the functions f, g, g_χ are, of course, given, as are the constants
ω and Λ. In the present context, the quantities ψ can be considered as the inde-
pendent variables, whereas the functions u', v', V' are still to be determined. The
equations (6.3.23) can easily be brought into a form used in linear algebra. To
this end we rearrange the matrix V' into a vector just by relabeling the indices,
e.g., in the case

(6.3.24)


we may introduce a vector

(6.3.25)

Consequently we have the vector

(6.3.26)

We replace the matrix g_χ in the same way by the vector G and introduce the
vector

F = (f, g, G)ᵀ . (6.3.27)

Following the same steps with Δ', d', D', we introduce the vector

(6.3.28)

Finally we have the matrix

L = diag(L₁, L₂, L₃) , (6.3.29)

whose submatrices are matrices of dimension n, m and m², respectively. As can
be seen from the equations (6.3.23), these matrices must have the form
L₁ = Σ_{ν=1}^{n} ω_ν ∂/∂ψ_ν , (6.3.30)

L₂ = Σ_{ν=1}^{n} ω_ν ∂/∂ψ_ν − Λ , (6.3.31)

L₃ = Σ_{ν=1}^{n} ω_ν ∂/∂ψ_ν − (Λ' − Λ'') , (6.3.32)

where the difference Λ' − Λ'' may be abbreviated Λ̂, and where Λ' and Λ'' are again
matrices stemming from Λ via rearrangement of indices. By means of (6.3.26-29)
we may cast (6.3.23) in the form


LU = F + Δ . (6.3.33)

This is an inhomogeneous equation for the vector U. We first determine the
eigenvalues of the operator L. This is most easily achieved by expanding the solu-
tions U of the equation

LU = λU (6.3.34)

into a Fourier series

U(ψ) = Σ_j U_j exp[i(j, ψ)] . (6.3.35)

For the moment, the use of L may seem roundabout, but its usefulness will
become obvious below. When we insert (6.3.35) into (6.3.34) we readily obtain

i(j, ω) u_j = λ u_j , (6.3.36)

[i(j, ω) − Λ] v_j = λ v_j , (6.3.37)

i(j, ω) V_j − Λ V_j + V_j Λ = λ V_j . (6.3.38)

In the notation of the original indices inherent in (6.3.23), the last equation
(6.3.38) reads

[i(j, ω) − λ_μ + λ_ν] V_{j,μν} = λ V_{j,μν} . (6.3.39)

These equations show clearly what the eigenvalues of L are: since the equations
for u_j, v_j, V_j are uncoupled, we may treat these equations individually. From
(6.3.36) we obtain as eigenvalue

i(j, ω) , (6.3.40)

by diagonalizing (6.3.37)

i(j, ω) − λ_μ , (6.3.41)

and by diagonalizing (6.3.38), which can be most simply seen from (6.3.39),

i(j, ω) − λ_μ + λ_ν . (6.3.42)

According to the rules of linear algebra, the null-space is defined by those vectors U_j for
which the eigenvalues are zero. But on account of assumption (1) in Sect. 6.2, the
eigenvalues (6.3.40-42) vanish only for j = 0. We are now in a position to
invoke a well-known theorem of linear algebra on the solubility of the inhomo-
geneous equations (6.3.33). To this end we expand F into a Fourier series of the
form (6.3.35) and equate the coefficients of exp[i(j, ψ)] on both sides of (6.3.33).


For j ≠ 0 we obtain

i(j, ω) u_j = f_j , (6.3.43)

[i(j, ω) − Λ] v_j = g_j , (6.3.44)

i(j, ω) V_j − Λ V_j + V_j Λ = g_{χ,j} , (6.3.45)

or, writing (6.3.45) in components (provided Λ is diagonal, for example),

[i(j, ω) − λ_μ + λ_ν] V_{j,μν} = g_{χ,j,μν} . (6.3.46)

Because the eigenvalues do not vanish, these equations can be solved uniquely for
the unknowns u', v', V'.
We now turn to the eigenvectors U_j belonging to the vanishing eigenvalues,
obtained according to assumption (1) in Sect. 6.2 only for j = 0. Thus the eigen-
vectors of the null-space obey the equations

0 · u₀ = 0 , (6.3.47)

Λ v₀ = 0 , (6.3.48)

Λ V₀ − V₀ Λ = 0 , (6.3.49)

which follow from (6.3.36-38) and the condition that (6.3.40-42) vanish.
We now consider the inhomogeneous equations which correspond to
(6.3.43-45) for j = 0. They read

−Δ' = f₀ , (6.3.50)

−Λv₀ − d' = g₀ , (6.3.51)

−ΛV₀ + V₀Λ − D' = g_{χ,0} . (6.3.52)

Obviously (6.3.50) fixes Δ' uniquely. Because of (6.3.48), i.e.,

Λv₀ = 0 , (6.3.53)

d' is determined uniquely by (6.3.51). Similarly, from (6.3.49) it follows that D'
is determined uniquely by (6.3.52). What happens, however, with the coefficients
u₀, v₀, V₀? According to linear algebra, within the limitation expressed by
(6.3.47-49), these null-space vectors can be chosen freely. In the following we
put them equal to zero, or in other words, we project the null-space out. Let us
summarize what we have achieved so far. We have described the first step of an
iteration procedure by which we transform (6.3.1, 2) into (6.1.26, 27) by means
of the transformations (6.3.3, 4). This was done by determining u, v, V and the
counterterms Δ, d, D in lowest approximation. We have shown explicitly how the


corresponding equations can formally be solved. This procedure may now be
continued so that we may determine the unknown quantities up to any desired
order in ε. We shall see in later applications that it is most useful to calculate in
particular Δ, d, D, and the more practically oriented reader may stop here.
From the mathematical point of view, the following questions have to be
settled.
1) In each iteration step, the equations were solved by means of Fourier
series. Do these series converge? This question is by no means trivial because,
e.g., it follows from (6.3.43) that

u_j = f_j / i(j, ω) , (6.3.54)

i.e., we have to deal with the problem of small divisors (Sects. 2.1.3 and 5.2). We
shall treat it anew in Appendix A.1, making use of our assumption that f and g
are analytic functions of φ and ξ.
2) We have to prove the convergence of the iteration procedure as a whole.
This will shed light on the choice of the null-space vectors in each iteration step.
As it will turn out, any choice of these vectors will give an admissible solution
provided the ε-sum over the null-space vectors converges. In connection with this
we shall show that there are classes of possible transformations (6.3.3, 4) which
are connected with each other. We turn to a treatment of these problems in
Appendix A.
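The role of the small divisors appearing in (6.3.43) can be made concrete with a short numerical sketch (not part of the original derivation; the frequency vector ω = (1, √2) and the search bound N are illustrative choices of ours): for incommensurate frequencies the divisor (j, ω) vanishes only for j = 0, yet it comes arbitrarily close to zero as |j| grows, which is precisely why the convergence of the Fourier series is delicate.

```python
# Illustration: the divisor (j, omega) = j1*omega1 + j2*omega2 never vanishes
# for j != 0 when omega1/omega2 is irrational, but it becomes arbitrarily
# small as the integer vectors j grow -- the "small divisor" problem.
import math

def smallest_divisor(omega, N):
    """Minimum of |(j, omega)| over integer vectors j with 0 < max|j_i| <= N."""
    best = float("inf")
    for j1 in range(-N, N + 1):
        for j2 in range(-N, N + 1):
            if (j1, j2) == (0, 0):
                continue
            best = min(best, abs(j1 * omega[0] + j2 * omega[1]))
    return best

omega = (1.0, math.sqrt(2.0))
mins = [smallest_divisor(omega, N) for N in (2, 8, 32)]
print(mins)  # strictly decreasing toward zero as N grows
```

The decrease of these minima with N is what forces the analyticity assumptions on f and g: the Fourier coefficients must fall off fast enough to beat the small divisors.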

7. Nonlinear Equations. The Slaving Principle

The main objective of this book is the study of dramatic macroscopic changes of
systems. As seen in the introduction, this may happen when linear stability is
lost. At such a point it becomes possible to eliminate very many degrees of
freedom so that the macroscopic behavior of the system is governed by very few
degrees of freedom only. In this chapter we wish to show explicitly how to
eliminate most of the variables close to points where linear stability is lost. These
points will be called critical points. It will be our goal to describe an easily
applicable procedure covering most cases of practical importance. To this end
the essential ideas are illustrated by a simple example (Sect. 7.1), followed by a
presentation of our general procedure for nonlinear differential equations (Sects.
7.2 - 5). While the basic assumptions are stated in Sect. 7.2, the final results of
our approach are presented in Sect. 7.4 up to (and including) formula (7.4.5).
Section 7.3 and the rest of Sect. 7.4 are of a more technical nature. The rest of
this chapter is devoted to an extension of the slaving principle to discrete noisy
maps and to stochastic differential equations of the Ito (and Stratonovich) type
(Sects. 7.6-9).

7.1 An Example

We treat the following nonlinear equations in the two variables u and s:

u̇ = au − us , (7.1.1)

ṡ = −βs + u² . (7.1.2)

We assume that

a ≈ 0 and (7.1.3)

β > 0 . (7.1.4)

When we neglect the nonlinear terms u·s and u², respectively, we are left with
two uncoupled equations that are familiar from our linear stability analysis.
Evidently (7.1.1) represents a mode which is neutral or unstable in a linear

stability analysis, whereas s represents a stable mode. This is why we have chosen
the notation u (unstable or undamped) and s (stable). We wish to show that s may
be expressed by means of u so that we can eliminate s from (7.1.1, 2). Equation
(7.1.2) can be immediately solved by integration
s(t) = ∫₋∞ᵗ e^{−β(t−τ)} u²(τ) dτ , (7.1.5)

where we have chosen the initial condition s = 0 for t = −∞. The integral in
(7.1.5) exists if |u(τ)|² is bounded for all τ, or if this quantity diverges for τ → −∞
more slowly than exp(|βτ|). This is, of course, a self-consistency requirement which has
to be checked.
We now want to show that the integral in (7.1.5) can be transformed in such a
way that s at time t becomes a function of u at the same time t only.

7.1.1 The Adiabatic Approximation


We integrate (7.1.5) by parts according to the rule

∫ v̇w dτ = vw − ∫ vẇ dτ , (7.1.6)

where we identify v with exp[−β(t−τ)] and w with u². We thus obtain

s(t) = (1/β) u²(t) − (1/β) ∫₋∞ᵗ e^{−β(t−τ)} (d/dτ) u²(τ) dτ . (7.1.7)

We now consider the case in which u varies very little, so that u̇ can be con-
sidered as a small quantity. This suggests that we can neglect the integral in
(7.1.7). This is the adiabatic approximation, in which we obtain

s(t) = u²(t)/β . (7.1.8)

Solution (7.1.8) could have been obtained from (7.1.2) by merely putting

ṡ = 0 (7.1.9)

on the lhs of (7.1.2). Let us now consider under which condition we may neglect the
integral in (7.1.7) compared to the first term. To this end we consider

(7.1.10)

in (7.1.7). We may then put (7.1.10) in front of the integral, which can be
evaluated, leaving the condition

(7.1.11)


This condition can be fulfilled provided

(7.1.12)

holds. Condition (7.1.12) gives us an idea of the meaning of the adiabatic
approximation. We require that u changes slowly enough compared to the change
prescribed by the damping constant β.
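A minimal numerical sketch (with illustrative parameters of our own choosing, not taken from the text) shows the quality of the adiabatic approximation: for a prescribed, slowly varying u(t), the slaved variable governed by ṡ = −βs + u² stays close to u²/β once β greatly exceeds the rate of change of u.

```python
# Sketch: adiabatic approximation s(t) ~ u(t)^2/beta for s' = -beta*s + u(t)^2
# with a prescribed, slowly varying drive u(t) = sin(t).  Here beta = 50 is
# much larger than the rate of change of u, so the condition above holds.
import math

beta = 50.0
u = lambda t: math.sin(t)

def rhs(t, s):
    return -beta * s + u(t) ** 2

s, t, h = 0.0, 0.0, 1e-3  # classical fourth-order Runge-Kutta integration
max_dev = 0.0
while t < 2.0:
    k1 = rhs(t, s)
    k2 = rhs(t + h / 2, s + h * k1 / 2)
    k3 = rhs(t + h / 2, s + h * k2 / 2)
    k4 = rhs(t + h, s + h * k3)
    s += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += h
    if t > 0.2:  # skip the fast initial transient ~ exp(-beta*t)
        max_dev = max(max_dev, abs(s - u(t) ** 2 / beta))

print(max_dev)
```

After the transient, the deviation from u²/β is of relative order 1/β, in line with the condition that u change slowly on the time scale 1/β.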
We now want to show how we may express s(t) by u(t) in a precise manner.

7.1.2 Exact Elimination Procedure


In order to bring out the essential features of our approach we choose a = 0, so
that (7.1.1) is replaced by

u̇ = −us . (7.1.13)

We now use the still exact relation (7.1.7), substituting in it u̇ by −u·s according
to (7.1.13):

s(t) = (1/β) u²(t) + (2/β) ∫₋∞ᵗ e^{−β(t−τ)} u²(τ) s(τ) dτ . (7.1.14)

This is an integral equation for s(t) because s occurs again under the integral at
times τ. To solve this equation we apply an iteration procedure which amounts to
expressing s by powers of u. In the lowest order we may express s on the rhs of
(7.1.14) by the approximate relation (7.1.8). In this way, (7.1.14) is transformed
into

s(t) = (1/β) u²(t) + (2/β²) ∫₋∞ᵗ e^{−β(t−τ)} u⁴(τ) dτ . (7.1.15)

To obtain the next approximation we integrate (7.1.15) by parts using formula
(7.1.6), in which we again identify v with the exponential function in (7.1.15). We
thus obtain

s(t) = u²/β + 2u⁴/β³ − (8/β³) ∫₋∞ᵗ e^{−β(t−τ)} u³(τ) u̇(τ) dτ . (7.1.16)

Under the integral we may replace u̇ according to (7.1.13), giving

s(t) = u²/β + 2u⁴/β³ + (8/β³) ∫₋∞ᵗ e^{−β(t−τ)} u⁴(τ) s(τ) dτ . (7.1.17)

We split (7.1.17) into an integral containing u⁶ and another part containing still
higher-order terms


(7.1.18)

Performing a partial integration as before, s(t) occurs in the form

s(t) = u²/β + 2u⁴/β³ + higher-order terms in u . (7.1.19)

It is now obvious that by this procedure we may perform partial
integrations again and again, allowing us to express s(t) by a certain power series in u(t) at the
same time. Provided u is small enough, we may expect to find a very good
approximation by keeping a few terms in powers of u. Before we study the
problem of the convergence of this procedure, some exact relations must be
derived. To this end we introduce the following abbreviations into the original
equations (7.1.13, 2), namely

u̇ = −us ≡ Q(u, s) , (7.1.20)

ṡ = −βs + u² ≡ −βs + P(u, s) . (7.1.21)

Note that in our special case P is a function of u alone,

P = P(u) . (7.1.22)

The procedure just outlined above indicates that it is possible to express s by u,

s = s(u) , (7.1.23)

so that we henceforth assume that this substitution has been made. We wish to
derive a formal series expansion for s(u). We start from (7.1.5), written in the
form

s(u(t)) = ∫₋∞ᵗ e^{−β(t−τ)} P(u(τ)) dτ . (7.1.24)

As before, we integrate by parts, identifying P(u) with w in (7.1.6). The differen-
tiation of P(u) with respect to time can be performed in such a way that we first
differentiate P with respect to u and then u with respect to time. But according to
(7.1.20) the time derivative of u can be replaced by Q(u, s) where, at least in prin-
ciple, we can imagine that s is a function of u again. Performing this partial in-
tegration again and again, each time using (7.1.20), we are led to the relation

s(u) = (1/β)P − (1/β)Q(∂/∂u)(1/β)P + (1/β)Q(∂/∂u)(1/β)Q(∂/∂u)(1/β)P + ··· . (7.1.25)
In the following section it will turn out to be useful to use the following
abbreviation


(d/dt)_∞ = Q(u, s(u)) ∂/∂u (7.1.26)

by which we define the left-hand side. It becomes obvious that (7.1.25) can be
considered as a formal geometric series in the operator (7.1.26), so that (7.1.25)
can be summed up to give

s(t) = (1/β) [1 + (1/β)(d/dt)_∞]⁻¹ P (7.1.27)

or, in the original notation,

s(t) = (1/β) [1 + Q(∂/∂u)(1/β)]⁻¹ P . (7.1.28)
au p
Multiplying both sides by the operator

1 + Q(∂/∂u)(1/β) (7.1.29)

from the left we obtain

βs + Q(∂s/∂u) = P , (7.1.30)

which may be transformed in an obvious way,

∂s/∂u = [P(u, s) − βs] / Q(u, s) . (7.1.31)

Using the explicit forms of Q and P, (7.1.31) acquires the form

∂s/∂u = (u² − βs) / (−us) . (7.1.32)
Thus we have found a first-order differential equation for s(u).
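This first-order equation can be solved order by order with the power-series ansatz s(u) = Σ_k a_k u^{2k}. The following sketch (the helper function is our own, not from the book) inserts the ansatz into the rearranged form βs = u² + u s (ds/du) and matches powers of u; it reproduces the coefficients 1/β and 2/β³ found below in (7.1.40, 44).

```python
# Sketch: power-series solution of the invariant-manifold equation.
# Inserting s(u) = sum_k a_k u^(2k) into beta*s = u^2 + u*s*ds/du and
# matching powers of u gives the recursion
#   a_1 = 1/beta,   beta*a_n = sum_{j+k=n, j,k>=1} 2k a_j a_k   (n >= 2).
def slaving_coefficients(beta, nmax):
    a = {1: 1.0 / beta}
    for n in range(2, nmax + 1):
        a[n] = sum(2 * k * a[n - k] * a[k] for k in range(1, n)) / beta
    return [a[n] for n in range(1, nmax + 1)]

print(slaving_coefficients(2.0, 3))  # [1/beta, 2/beta^3, 12/beta^5]
```

Every coefficient follows uniquely from the lower ones, mirroring the iteration procedure of the text.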
In the subsequent section we shall show how all these relations can be con-
siderably generalized to equations much more general than (7.1.1, 2). In practical
applications we shall have to evaluate sums such as (7.1.25), but, of course, we shall
approximate the infinite series by a finite one. In this way we also circumvent


difficulties which may arise with respect to the convergence of the formal series
equation (7.1.25).
Rather, in practical cases it will be necessary to estimate the rest terms. To
this end we split the formal infinite expansion (7.1.25) up in the following way:

s(t) = (1/β) Σ_{n=0}^{m} [−Q(∂/∂u)(1/β)]ⁿ P(u, s) + (1/β) Σ_{n=m+1}^{∞} [−Q(∂/∂u)(1/β)]ⁿ P(u, s) , (7.1.33)

the second sum constituting the rest term r.
Again it is possible to sum up the rest term

r = (1/β) Σ_{n=0}^{∞} [−Q(∂/∂u)(1/β)]^{n+m+1} P(u, s(u)) , (7.1.34)

which yields

r = (1/β) [1 + Q(∂/∂u)(1/β)]⁻¹ [−Q(∂/∂u)(1/β)]^{m+1} P . (7.1.35)

This form, or the equivalent form

r = (β + Q ∂/∂u)⁻¹ [−Q(∂/∂u)(1/β)]^{m+1} P , (7.1.36)

can be used to estimate the rest term. The term Q introduces powers of at least
order u³, while the differentiation repeatedly reduces the order by 1. Therefore, at
least with each power of the bracket in front of (7.1.36), a term u² is introduced
in our specific example. At the same time corresponding powers of 1/β are intro-
duced. This means that the rest term becomes very small provided u²/β is much
smaller than unity. On the other hand, due to the consecutive differentiations
introduced in (7.1.35 or 36), the number of terms increases with m!. Therefore
the rest term must be chosen in such a way that m is not too big or, in other
words, for given m one has to choose u correspondingly small. This
procedure is somewhat different from the more conventional convergence
criteria, but it shows that we can determine s as a function of u to any desired
accuracy provided we choose u small enough.
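The competition between the factors (u²/β)^m and m! can be made visible with a few lines of code (the numbers are purely illustrative, not from the text): a bound on the rest term behaving like m!(u²/β)^m first decreases with m and then grows again, so the optimal truncation order lies near β/u².

```python
# Sketch: a rest-term bound of the type m! * (u^2/beta)^m shrinks at first
# but eventually grows because of the m! factor, as for asymptotic series.
import math

x = 0.15  # plays the role of u^2/beta, assumed small
bounds = [math.factorial(m) * x**m for m in range(1, 16)]
m_opt = 1 + bounds.index(min(bounds))
print(m_opt)  # the best truncation order, close to 1/x = beta/u^2
```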
Let us now return to our explicit example (7.1.32). We wish to show that
(7.1.32) can be chosen as a starting point to express s explicitly by u. We require
that the rhs of (7.1.32), which we repeat,

∂s/∂u = (u² − βs) / (−us) , (7.1.37)


remains regular for the simultaneous limits s → 0 and u → 0. To solve (7.1.37) we
make the hypothesis

s = u² [c₁ + f₁(u)] , (7.1.38)


which yields in (7.1.37)

(7.1.39)

Since the rhs must remain regular we have to put

c₁ = 1/β , (7.1.40)
so that on rhs of (7.1.39)

(7.1.41)

results. In order that s → 0 for u → 0, we put f₁ in the form

(7.1.42)

so that

(7.1.43)

results. A comparison of the powers of u on the lhs with those on the rhs yields

c₂ = 2/β³ . (7.1.44)
We are thus led to the expansion

s = u²/β + 2u⁴/β³ + ··· , (7.1.45)
which agrees with our former result (7.1.19).
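As a numerical cross-check (with parameters of our own choosing, not from the text), one can integrate the pair u̇ = −us, ṡ = −βs + u² directly and verify that, after the fast transient of order 1/β has died out, the trajectory runs along the manifold given by the two terms of (7.1.45):

```python
# Sketch: integrate u' = -u*s, s' = -beta*s + u^2 with classical RK4 and
# compare s(t) with the slaving expansion s ~ u^2/beta + 2u^4/beta^3.
beta = 10.0
u, s = 0.3, 0.0
h = 1e-3

def f(u, s):
    return -u * s, -beta * s + u * u

for _ in range(5000):  # integrate to t = 5
    k1u, k1s = f(u, s)
    k2u, k2s = f(u + h * k1u / 2, s + h * k1s / 2)
    k3u, k3s = f(u + h * k2u / 2, s + h * k2s / 2)
    k4u, k4s = f(u + h * k3u, s + h * k3s)
    u += h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
    s += h * (k1s + 2 * k2s + 2 * k3s + k4s) / 6

s_slaved = u**2 / beta + 2 * u**4 / beta**3
print(abs(s - s_slaved) / s)  # small relative error: s is slaved to u
```

The arbitrary initial value s(0) = 0 is quickly forgotten: after a time of order 1/β the state is attracted to the manifold s(u), which is the content of the slaving principle for this example.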
Starting from (7.1.1, 2) (in which our above treatment with a = 0 is included),
we can achieve the result (7.1.37) in a much simpler way, namely by writing
(7.1.1, 2) in the form

lim_{Δt→0} (Δu/Δt) ≡ u̇ = au − us and (7.1.46)


lim_{Δt→0} (Δs/Δt) ≡ ṡ = −βs + u² . (7.1.47)

We may divide the corresponding sides of (7.1.47) by those of (7.1.46). In this
case Δt drops out, leaving

ds/du = (−βs + u²) / (au − us) . (7.1.48)

This is a relation very well known from autonomous differential equations refer-
ring to trajectories in a plane [1]. Although our series expansion seems round-
about, its usefulness is immediately evident if we are confronted with the ques-
tion of how (7.1.48) can be generalized to nonautonomous differential
equations in several variables. We are then quickly led to the methods we have out-
lined before.
Thus in the next section we wish to generalize our method in several ways. In
practical applications it will be necessary to treat equations in which the control
parameter a is nonzero, so that we have to treat equations of the form

u̇ = au − us . (7.1.49)

Furthermore, the equations may have a more general structure; for instance,
those of the slaved variables may have the form

ṡ = −βs + u² + us + ··· (7.1.50)

or be still more general, containing polynomials of both u and s on the rhs. In addi-
tion, we wish to treat equations which are nonautonomous, e.g., those in which
the coefficients depend on time t:

(7.1.51)

Also in many applications we need equations which are sets of differential equa-
tions in several variables u and s.
Other types of equations must be considered also, for instance

φ̇ = ω + Φ(u, s, φ) , (7.1.52)

where Φ is a function which is periodic in φ. Equations of the type (7.1.52) con-
taining several variables φ may also occur in applications. Last but not least, we
must deal with stochastic differential equations

ṡ = −βs + u² + F_s(t) , (7.1.53)

in which F_s(t) is a random force. This random force renders the equations in-
homogeneous, and an additional difficulty arises from the fact that


F_s(t) is discontinuous. Still more difficulties arise if such a stochastic force
depends on the variables s and u. In order not to overload our treatment, we shall
proceed in two steps. We first consider the fluctuating forces as being approxi-
mated by continuous functions.
Then in Sect. 7.9 we shall consider a general class of equations containing dis-
continuous stochastic forces.

7.2 The General Form of the Slaving Principle. Basic Equations

Consider a set of time-dependent variables ũ₁, ũ₂, ..., s̃₁, s̃₂, ..., φ̃₁, φ̃₂, ...,
lumped together into the vectors ũ, s̃, φ̃, where the tilde distinguishes these
variables from those used below after some transformations. The notation u and
s will become clear later on, where ũ denotes the "unstable" or "undamped"
modes of a system, whereas s̃ refers to "stable" modes. The variables φ̃ play the
role of phase angles, as will transpire from later chapters. We assume that these
variables obey the following set of ordinary differential equations¹

dũ/dt = Λ_u ũ + Q̃(ũ, s̃, φ̃, t) + F̃_u(t) , (7.2.1)

ds̃/dt = Λ_s s̃ + P̃(ũ, s̃, φ̃, t) + F̃_s(t) , (7.2.2)

dφ̃/dt = ω + R̃(ũ, s̃, φ̃, t) + F̃_φ(t) . (7.2.3)


In the following, we shall assume that Λ_u, Λ_s and ω are time-independent
matrices and vectors, respectively. It is not difficult, however, to extend the
whole procedure to the case in which these quantities are time dependent. We
make the following assumptions. We introduce a smallness parameter δ and
assume that ũ is of the order δ. We assume that the matrix Λ_u is in Jordan's
normal form and can be decomposed into

Λ_u = Λ'_u + iΛ''_u + Λ'''_u ≡ Λ'_u + Λ̂_u . (7.2.4)

Here Λ'_u, Λ''_u are real diagonal matrices. While the size of the elements of Λ''_u
may be arbitrary, the size of the elements of Λ'_u is assumed to be of order δ.
Further, Λ'''_u contains the nondiagonal elements of Jordan's normal form (Sect.
2.4.2). Also the matrix Λ_s is assumed to be in Jordan's normal form. It can be decom-
posed into its real and imaginary parts Λ'_s, Λ''_s, respectively,

Λ_s = Λ'_s + iΛ''_s . (7.2.5)

Here Λ''_s is a diagonal matrix whose matrix elements may be arbitrary. It is
assumed that the diagonal elements γ_i of Λ'_s obey the inequalities

γ_i ≤ β < 0 . (7.2.6)

¹ It is actually not difficult to extend our formalism to partial differential equations with a suitable
norm (e.g., in a Banach space).


The quantities

(7.2.7)

are of order δ² or smaller, and Q̃, P̃ must not contain terms linear in ũ or s̃ (but
may contain ũ·s̃). The functions

Q̃, P̃, R̃ (7.2.8)

are 2π-periodic in φ̃; the functions (7.2.7) are assumed to be bounded for
−∞ < t < +∞ for any fixed values of ũ, s̃, φ̃. The functions F̃ are driving forces
which in particular may mimic stochastic forces. However, we shall assume in
this section that these forces are continuous with respect to their arguments.
In subsequent sections we shall then study stochastic forces which can be
written in the form

F̃(t) = F̃(ũ, s̃, φ̃, t) = G(ũ, s̃, φ̃, t) · F₀(t) , (7.2.9)

where F₀ describes a Wiener process. In such a case the matrix G must be treated
with care because appropriate limits of the functions ũ, s̃, φ̃ must be taken, e.g.,
in the form

G(ũ, s̃, φ̃, t) = αG_{t+0} + βG_{t−0} . (7.2.10)
We shall leave this problem for subsequent sections.
Let us now rather assume that the functions (7.2.9) can be treated in the same
way as Q̃, P̃, R̃. Though a number of our subsequent relations hold also for more
general cases, for most applications it is sufficient to assume that P̃, Q̃, R̃ can be
expanded into power series of ũ and s̃. For instance, we may then write P̃ in the
form

P̃(ũ, s̃, φ̃, t) = A_{suu}: ũ: ũ + 2A_{sus}: ũ: s̃ + A_{sss}: s̃: s̃ + ··· , (7.2.11)

where the notation introduced in (7.2.11) can be explained as follows. Let us
represent P̃ as a vector in the form

(7.2.12)

We define

A: ũ: ũ (7.2.13)

as a vector whose j-component is given by

Σ_{k₁,k₂} A_{j k₁ k₂} ũ_{k₁} ũ_{k₂} . (7.2.14)


Here A_{suu} may be a 2π-periodic function of φ̃ and may still be a continuous func-
tion of time t,

(7.2.15)

A decomposition similar to (7.2.11) is assumed for Q̃. R̃ is assumed in the
form

R̃(ũ, s̃, φ̃, t) = R₀(φ̃, t) + R₁(φ̃, t): ũ + R₂(φ̃, t): s̃ + terms of the form P̃ . (7.2.16)

We shall assume that R₀ is of order δ² and R₁, R₂ are of order δ. We now make
the transformations²

ũ = exp(Λ̂_u t) u , (7.2.17)

φ̃ = ωt + φ (7.2.18)

to the new time-dependent variables u, φ. This transforms the original equations
(7.2.1-3) into

u̇ = Λ_u u + Q̂(u, s, φ, t) + F̂_u(t) ≡ Q(u, s, φ, t) , (7.2.19)

ṡ = Λ_s s + P̂(u, s, φ, t) + F̂_s(t) ≡ Λ_s s + P(u, s, φ, t) , (7.2.20)

φ̇ = R̂(u, s, φ, t) + F̂_φ(t) ≡ R(u, s, φ, t) , (7.2.21)

where the last equations in each row are mere abbreviations, whereas the expres-
sions in the middle column of (7.2.19-21) are defined as follows:

(7.2.22)

Q̂(u, s, φ, t) = exp(−Λ̂_u t) Q̃(exp(Λ̂_u t)u, s, ωt + φ, t) , (7.2.23)

P̂(u, s, φ, t) = P̃(exp(Λ̂_u t)u, s, ωt + φ, t) , (7.2.24)

R̂(u, s, φ, t) = R̃(exp(Λ̂_u t)u, s, ωt + φ, t) , (7.2.25)

F̂_u = exp(−Λ̂_u t) F̃_u(exp(Λ̂_u t)u, s, φ + ωt, t) , (7.2.26)

F̂_s = exp(−Λ_s t) F̃_s(exp(Λ̂_u t)u, s, φ + ωt, t) , (7.2.27)

(7.2.28)

² The transformation of a vector by means of exp(Λ̂_u t) ≡ exp[(iΛ''_u + Λ'''_u)t] has the following
effects (Sect. 2.6): while the diagonal matrix Λ''_u causes a multiplication of each component of the
vector by exp(iω_j t), where ω_j is an element of Λ''_u, Λ'''_u mixes vector components with coefficients
which contain finite powers of t. All integrals occurring in subsequent sections of this chapter exist,
because the powers of t are multiplied by damped exponential functions of t.


It will be our goal to express the time-dependent functions s(t) occurring in
(7.2.19-21) as functions of u and φ (and t) by an explicit construction. Before
developing our procedure we mention the following: as we have seen in Chap. 5,
a nonlinear coupling between variables can shift frequencies, i.e., in particular
the values of the matrix elements of Λ''_u in (7.2.4) and of ω in (7.2.3). In order to
take care of this effect at as early a stage as possible, it is useful to perform the
transformations (7.2.17, 18) with the shifted frequencies Λ''_{u,r} and ω_r. In
physics, these frequencies are called renormalized frequencies. These frequencies
are unknown and must be determined in a self-consistent way at the end of the
whole calculation which determines the solutions u and φ. The advantage of this
procedure consists in the fact that in many practical cases few steps of the eli-
mination procedure (with respect to s) suffice to give very good results for the
variables u, s, and φ. We leave it as an exercise to the reader to rederive
(7.2.19-21) by means of renormalized frequencies, and to discuss what assump-
tions on the smallness of ω − ω_r and Λ''_u − Λ''_{u,r} must be made so that the new
functions P, Q, R fulfill the original smallness conditions. The reader is also
advised to repeat the initial steps explained in the subsequent chapters using the
thus altered equations (7.2.19-21). Our following explicit procedure will be
based on the original equations (7.2.19-21).

7.3 Formal Relations

As exemplified in Sect. 7.1, s may be expressed by u at the same time. Now let us
assume that such a relation can be established also in the general case of
(7.2.19-21), but with suitable generalizations. Therefore, we shall assume that s
can be expressed by u and φ and may still explicitly depend on time,

s = s(u(t), φ(t), t) . (7.3.1)

Later it will be shown how s may be constructed in the form (7.3.1) explicitly,
but for the moment we assume that (7.3.1) holds, allowing us to establish a
number of important exact relations. Since s depends on time via u and φ but also
directly on time t, we find by differentiation of (7.3.1)

ds/dt = ∂s/∂t + u̇ ∂s/∂u + φ̇ ∂s/∂φ , (7.3.2)

where we have used the notation

(7.3.3)

and a similar one for ∂s/∂φ.


According to (7.2.20) the lhs of (7.3.2) is equal to

Λ_s s + P(u, s, φ, t) . (7.3.4)

Furthermore, we express u̇ and φ̇ by the corresponding rhs of (7.2.19, 21). This
enables (7.3.2, 4) to be simplified to

(∂/∂t − Λ_s + Q ∂/∂u + R ∂/∂φ) s = P(u, s(u, φ, t), φ, t) . (7.3.5)

The bracket on the lhs represents an operator acting on s. The formal solution of
(7.3.5) can be written as

s = (∂/∂t − Λ_s + Q ∂/∂u + R ∂/∂φ)⁻¹ P(u, s(u, φ, t), φ, t) . (7.3.6)

At this point we remember some of the results of Sect. 7.1. There we have
seen that it is useful to express s as a series of inverse powers of β, to which Λ_s
corresponds. As will become clear in a moment, this can be achieved by making
the identifications

∂/∂t − Λ_s = A , (7.3.7)

Q ∂/∂u + R ∂/∂φ = B (7.3.8)
and using the formal power series expansion

(A + B)⁻¹ = A⁻¹ − A⁻¹BA⁻¹ + ··· = A⁻¹ Σ_{ν=0}^{∞} (−BA⁻¹)^ν , (7.3.9)

which is valid for operators (under suitable assumptions on A and B). In (7.3.9)
we use the definition

(7.3.10)

The proof of (7.3.9) will be left as an exercise at the end of this section.
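In finite dimensions, (7.3.9) is just the Neumann series for the inverse, and it can be checked directly; the following sketch (with 2×2 matrices of our own choosing, "B small compared to A") compares the truncated series with the exact inverse:

```python
# Sketch: the expansion (A+B)^-1 = A^-1 * sum_n (-B A^-1)^n, checked
# numerically for 2x2 matrices with a small perturbation B.
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):  # closed-form inverse of a 2x2 matrix
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

A = [[4.0, 0.0], [0.0, 5.0]]
B = [[0.3, 0.1], [-0.2, 0.4]]
Ainv = inv2(A)
exact = inv2([[A[i][j] + B[i][j] for j in range(2)] for i in range(2)])

term = [row[:] for row in Ainv]  # n = 0 term: A^-1
S = [row[:] for row in term]
M = mul([[-B[i][j] for j in range(2)] for i in range(2)], Ainv)  # -B A^-1
for _ in range(30):  # accumulate A^-1 (-B A^-1)^n for n >= 1
    term = mul(term, M)
    S = [[S[i][j] + term[i][j] for j in range(2)] for i in range(2)]

err = max(abs(S[i][j] - exact[i][j]) for i in range(2) for j in range(2))
print(err)
```

The series converges here because the spectral radius of BA⁻¹ is well below one; this is the finite-dimensional analogue of the "suitable assumptions on A and B" mentioned above.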
Using (7.3.9) with the abbreviations (7.3.7, 8) we readily obtain

s = (∂/∂t − Λ_s)⁻¹ Σ_{ν=0}^{∞} [−(Q ∂/∂u + R ∂/∂φ)(∂/∂t − Λ_s)⁻¹]^ν P . (7.3.11)


In practical applications we will not extend the series in (7.3.11) up to infinity but
rather to a finite number of terms. Also, from a more fundamental point of view, the con-
vergence of the formal series expansion is not secured. On the other hand, for
practical applications we shall give an estimate of the rest term, and we shall derive here
some useful formulas. To define the rest term we decompose s according to
(7.3.9)

s = [A⁻¹ Σ_{ν=0}^{m} (−BA⁻¹)^ν + A⁻¹ Σ_{ν=m+1}^{∞} (−BA⁻¹)^ν] P . (7.3.12)

One then readily finds the following decomposition

s = A⁻¹ Σ_{ν=0}^{m} (−BA⁻¹)^ν P + r , (7.3.13)

where the rest term r is given by

r = (A + B)⁻¹ (−BA⁻¹)^{m+1} P . (7.3.14)

(For the proof consult the exercises at the end of this section.)
For practical applications we note that

Q ∂/∂u + R ∂/∂φ (7.3.15)

commutes with Λ_s. Further, Λ_s transforms the components of P, whereas
(7.3.15) acts on each component in the same way. To make closer connection
with the results of Sect. 7.1, dealing with an example, and also to show what the
inverse of the operator (7.3.7) means explicitly, we deal with s in still another
way. To this end we start from (7.2.20), which possesses the formal solution

s = ∫₋∞ᵗ exp[Λ_s(t − τ)] P(u, s, φ, τ) dτ , where (7.3.16)

s = s(u, φ, τ) . (7.3.17)

In analogy to (7.1.7) we perform a partial integration. To this end we assume that
P is represented in the form (7.2.11). When performing the partial integration we
make use of the fact that each term in (7.2.11) has the form

g(t) h(u, s, φ) . (7.3.18)

We treat g(t) as v and h as w in the well-known relation

∫ v'w dt = vw − ∫ vw' dt . (7.3.19)


Expressing the temporal derivatives u̇, φ̇ by the corresponding rhs of (7.2.19, 21),
we may write (7.3.16) in the new form

s = exp(Λ_s t) ∫₋∞ᵗ exp(−Λ_s τ) P(u(t), s(u(t), φ(t), t), φ(t), τ) dτ

  − exp(Λ_s t) ∫₋∞ᵗ (Q ∂/∂u + R ∂/∂φ)_{τ'} dτ' ∫₋∞^{τ'} exp(−Λ_s τ) P(u(τ'), ..., τ) dτ . (7.3.20)

Because (7.3.15) commutes with Λ_s we may insert

1 = exp(Λ_s τ' − Λ_s τ') (7.3.21)

in the second expression of (7.3.20), eventually finding

s = ∫₋∞ᵗ exp[Λ_s(t − τ)] P(u(t), ..., τ) dτ

  − ∫₋∞ᵗ exp[Λ_s(t − τ')] (Q ∂/∂u + R ∂/∂φ)_{τ'} dτ' ∫₋∞^{τ'} e^{Λ_s(τ'−τ)} P(u(τ'), ..., τ) dτ . (7.3.22)

The first part of (7.3.22) can be considered as the formal solution of the equation

∂s/∂t = Λ_s s + P(u, s, φ, t) , (7.3.23)

where u, s, φ on the rhs are considered as given parameters. Under this assump-
tion the formal solution of (7.3.23) can also be written in the form

s = (∂/∂t − Λ_s)⁻¹ P(u, s, φ, t) . (7.3.24)

The first term of (7.3.22) therefore provides us with the interpretation of the
inverse operator A⁻¹. In this way we may write (7.3.22) in the form

s = (∂/∂t − Λ_s)⁻¹ P(u(t), ..., t)

  − ∫₋∞ᵗ exp[Λ_s(t − τ')] (Q ∂/∂u + R ∂/∂φ)_{τ'} (∂/∂τ' − Λ_s)⁻¹ P(u(τ'), ..., τ') dτ' , (7.3.25)

where the last factor, (∂/∂τ' − Λ_s)⁻¹ P(u(τ'), ..., τ'), is denoted P⁽¹⁾(u(τ'), ..., τ').


Continuing the procedure of partial integrations, we again use the interpretation
of the inverse operator A⁻¹. These subsequent partial integrations may then be
written as

s = (∂/∂t − Λ_s)⁻¹P − (∂/∂t − Λ_s)⁻¹(Q ∂/∂u + R ∂/∂φ)(∂/∂t − Λ_s)⁻¹P + ··· , (7.3.26)

which evidently is the same result as before, namely (7.3.11).


Exercises. 1a) Prove the relation (7.3.9). Hint: multiply both sides of (7.3.9) by A + B from the left and rearrange the infinite sum. 1b) Prove in the same way the following relation for the remainder [cf. (7.3.12, 14)]

$$ A^{-1}\sum_{\nu=m+1}^{\infty}(-BA^{-1})^{\nu} = (A+B)^{-1}\,(-BA^{-1})^{m+1}. \tag{7.3.27} $$

2) Prove the relation (7.3.9) by iteration of the identity

$$ (A+B)^{-1} = (A+B)^{-1}(A+B-B)A^{-1} = A^{-1} + (A+B)^{-1}(-BA^{-1}). \tag{7.3.28} $$
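In the finite-dimensional case the operator series (7.3.9) is an ordinary Neumann-type matrix series, and the identity behind these exercises can be checked numerically. The following sketch uses two small matrices of my own choosing (not taken from the text), with B a weak perturbation so that the series converges:

```python
import numpy as np

# Check of (7.3.9): (A+B)^{-1} = sum_v A^{-1} (-B A^{-1})^v,
# valid when the spectral radius of B A^{-1} is below one.
# A and B are illustrative choices, not from the text.
A = np.array([[2.0, 0.0],
              [0.5, 3.0]])
B = np.array([[0.1, 0.2],
              [0.0, 0.1]])

Ainv = np.linalg.inv(A)
term = Ainv.copy()              # v = 0 term: A^{-1}
series = np.zeros_like(A)
for v in range(60):             # truncate the infinite sum
    series += term
    term = term @ (-B @ Ainv)   # next term: previous one times (-B A^{-1})

exact = np.linalg.inv(A + B)
print(np.max(np.abs(series - exact)))   # essentially zero
```

The same recursion `term -> term @ (-B @ Ainv)` also produces the remainder formula (7.3.27) if the loop is started at v = m + 1.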

7.4 The Iteration Procedure

So far we have assumed that s may be expressed as a function of u, φ and t [compare (7.3.1)] and have then derived a formal relation. In this section we want to show how we may actually construct s by an iteration procedure. To this end we use the formerly introduced small parameter δ. We introduce the approximation of s up to nth order by

$$ s^{(n)}(u, \varphi, t) = \sum_{m=2}^{n} c^{(m)}(u, \varphi, t), \tag{7.4.1} $$

where the individual terms $c^{(m)}$ are precisely of the order $\delta^m$.
We further define the quantities

$$ P^{(l)}(u, \varphi, t) \equiv P^{(l)}(u, \{s^{(k)}\}, \varphi, t): \ \text{order } \delta^{l} \text{ precisely}, \tag{7.4.2} $$

$$ Q^{(l)}(u, \varphi, t) \equiv Q^{(l)}(u, \{s^{(k)}\}, \varphi, t): \ \text{order } \delta^{l} \text{ precisely}, \tag{7.4.3} $$

$$ R^{(l)}(u, \varphi, t) \equiv R^{(l)}(u, \{s^{(k)}\}, \varphi, t): \ \text{order } \delta^{l} \text{ precisely}, \tag{7.4.4} $$

assuming that we may express s up to a certain order k as indicated in (7.4.2-4), so that just the correct final order $\delta^{l}$ of the corresponding expressions results.


As shown below, it is possible to construct the expressions $s^{(n)}$ consecutively. To find the correct expressions for $c^{(n)}$ we consult the relation (7.3.11). From its rhs we select those terms which are precisely of order $\delta^n$ by decomposing P into terms of order $\delta^{n-m}$, and terms of the operator in front of P of order $\delta^{m}$ ($m \le n-2$). In this way we obtain

$$ c^{(n)}(u, \varphi, t) = \sum_{m=0}^{n-2}\left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(m)} P^{(n-m)}(u, \varphi, t). \tag{7.4.5} $$

A comparison with (7.3.11) shows further that the operator in front of $P^{(n-m)}$ can be defined as

$$ \left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(m)} = \left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(0)} \sum_{\substack{\text{all products}\\ \sum i = m,\ i \ge 1}}\ \prod_i \left[\left(-\frac{d}{dt}\right)_{(i)}\left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(0)}\right], \tag{7.4.6} $$
where we have used the abbreviation

$$ \left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(0)} \equiv \left(\frac{\partial}{\partial t} - \Lambda_s\right)^{-1}, \tag{7.4.7} $$

the relation

(7.4.8)

and the definition

$$ \left(\frac{d}{dt}\right)_{(i)} f(u, \varphi, t) = \left(Q^{(i+1)}\frac{\partial}{\partial u} + R^{(i)}\frac{\partial}{\partial \varphi}\right) f(u, \varphi, t). \tag{7.4.9} $$

The transition from (7.3.11) to (7.4.1) with the corresponding definitions can easily be done if we exhibit the order of each term explicitly. We have, of course, assumed that we may express $Q^{(i+1)}$ and $R^{(i)}$ by u, φ and t. According to our previous assumptions in Sect. 7.2, we also have the relations

and (7.4.10)

(7.4.11)

A formula which is most useful for practical purposes, because it allows us to express subsequent $c^{(n)}$ by the previous ones, is


$$ c^{(n)} = \left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(0)}\left[P^{(n)} - \sum_{m=1}^{n-2}\left(\frac{d}{dt}\right)_{(m)} c^{(n-m)}\right]. \tag{7.4.12} $$

Its proof is of a more technical nature, so that we include it only for the sake of completeness. In order to prove (7.4.12), we insert (7.4.5) on both sides of (7.4.12), which yields

$$ \sum_{m=0}^{n-2}\left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(m)} P^{(n-m)} = \left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(0)} P^{(n)} - \left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(0)}\sum_{m=1}^{n-2}\left(\frac{d}{dt}\right)_{(m)}\sum_{l=0}^{n-2-m}\left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(l)} P^{(n-m-l)}. \tag{7.4.13} $$

Obviously, the term with m = 0 cancels so that it remains to prove the relation

$$ \sum_{m=1}^{n-2}\left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(m)} P^{(n-m)} = -\sum_{m'=1}^{n-2}\sum_{l=0}^{n-2-m'}\left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(0)}\left(\frac{d}{dt}\right)_{(m')}\left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(l)} P^{(n-m'-l)}. \tag{7.4.14} $$

We now wish to change the sequence of summation in (7.4.14). To this end we check the domain in which the indices run according to the scheme

$$ m' + l = m, \qquad 0 \le l \le n-2-m', \qquad 1 \le m \le n-2. \tag{7.4.15} $$

We then readily obtain for the rhs of (7.4.14)

$$ -\sum_{m=1}^{n-2}\sum_{m'=1}^{m}\left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(0)}\left(\frac{d}{dt}\right)_{(m')}\left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(m-m')} P^{(n-m)}. \tag{7.4.16} $$

We now compare each term for fixed m on the lhs of (7.4.13) with that on the rhs and thus are led to check the validity of the relation

$$ \left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(m)} = \sum_{m'=1}^{m}\left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(0)}\left(-\frac{d}{dt}\right)_{(m')}\left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(m-m')}. \tag{7.4.17} $$


For the rhs of (7.4.17) we obtain by use of definition (7.4.6)

$$ \left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(0)}\sum_{m'=1}^{m}\left(-\frac{d}{dt}\right)_{(m')}\left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(0)}\sum_{\sum i = m-m'}\ \prod_i\left[\left(-\frac{d}{dt}\right)_{(i)}\left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(0)}\right]. \tag{7.4.18} $$

This can be rewritten in the form

$$ \left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(0)}\sum_{\sum i = m}\ \prod_i\left[\left(-\frac{d}{dt}\right)_{(i)}\left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(0)}\right], \tag{7.4.19} $$

which coincides with (7.4.6). This completes the proof of (7.4.12).
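The recursion (7.4.12) can be seen at work on a concrete example. The sketch below uses an illustrative two-mode system of my own choosing (not one from the text): a slowly growing "unstable" mode u slaving a fast stable mode s with $\Lambda_s = -\beta$. The lowest-order slaved value is $c^{(2)} = u^2/\beta$, and subtracting $(1/\beta)\,d c^{(2)}/dt$ gives the next correction:

```python
import numpy as np

# Illustrative system (assumed for this sketch, not from the text):
#   du/dt = eps*u - u*s   (unstable mode),
#   ds/dt = -beta*s + u^2 (stable mode, Lambda_s = -beta).
eps, beta = 0.1, 5.0
dt, T = 1e-4, 30.0

u, s = 0.1, 0.0
us, ss = [], []
for n in range(int(T / dt)):
    # simple Euler integration of the full system
    u, s = u + (eps*u - u*s)*dt, s + (-beta*s + u*u)*dt
    if n*dt > 5.0:                     # discard the initial transient of s
        us.append(u); ss.append(s)

uu, sv = np.array(us), np.array(ss)
# slaving approximations in the spirit of (7.4.12):
c2 = uu**2 / beta                      # lowest order: (-Lambda_s)^{-1} P
du = eps*uu - uu*c2                    # du/dt with s replaced by c2
corr = c2 - (2*uu*du) / beta**2        # minus (1/beta) * d/dt of c2

err_lowest = np.max(np.abs(sv - c2))
err_corrected = np.max(np.abs(sv - corr))
print(err_lowest, err_corrected)       # the corrected value tracks s more closely
```

The improvement by roughly a factor of the time-scale ratio illustrates why, for well-separated relaxation rates, a few terms of the iteration already suffice.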

7.5 An Estimate of the Rest Term. The Question of Differentiability

As mentioned in Sect. 7.1.2, in practical applications it will be sufficient to retain


only a few terms of the series expansion (7.3.11). Of course, it will then be im-
portant to know the size of the rest term. Instead of entering a full discussion of
the size of this rest term, we rather present an example.
The rest term can be defined by

(7.5.1)

[compare (7.1.26)]. Let us consider the case in which only the variable u, and not s, occurs in P, and let us assume that

$$ P(u) = g(t)\,u^{k}, \quad\text{where} \tag{7.5.2} $$

$$ |g| < M, \qquad u > 0 \tag{7.5.3} $$

holds. Clearly

(7.5.4)

holds provided $\Lambda_s$ is diagonal and $|\gamma_i|_{\min}$ is the smallest modulus of the negative diagonal elements of $\Lambda_s$. We further assume Q in the form


$$ Q = h(t)\,u^{k} \quad\text{with} \tag{7.5.5} $$

$$ |h| < M. \tag{7.5.6} $$

We readily obtain

$$ < \frac{M}{|\gamma_i|_{\min}}\,|h(t)|\,k\,u^{k}\,u^{k-1}. \tag{7.5.7} $$
Similarly

$$ \left(\frac{d}{dt}\right)^{2} P \sim |h(t)|\left(\frac{M}{|\gamma_i|_{\min}}\right)^{2} k\,(2k-1)\,u^{k}\,u^{2k-2} \tag{7.5.8} $$

and after applying $(d/dt)$ m times we obtain

$$ \left(\frac{d}{dt}\right)^{m} P \sim |h(t)|\left(\frac{M}{|\gamma_i|_{\min}}\right)^{m} k\,(2k-1)(3k-2)\cdots\big(mk-(m-1)\big)\,u^{(m+1)k-m}. \tag{7.5.9} $$

Since we are interested in large values of m, we may approximate the rhs by

$$ |h(t)|\left(\frac{M}{|\gamma_i|_{\min}}\right)^{m} m!\,(k-1)^{m}\,u^{(m+1)k-m}. \tag{7.5.10} $$

Similar estimates also hold for more complicated expressions for P and Q (and
also for those of R). The following conclusion can be drawn from these con-
siderations. The rest term goes with some power of u where the exponent
contains the factor m. Also all other factors go with powers of m. Thus, if u is
chosen sufficiently small these contributions would decrease more and more. On
the other hand, the occurrence of the factorial m! leads to an increase of the rest
term for sufficiently high m. In a strict mathematical sense we are therefore
dealing with a semiconvergent series. As is well known from many practical
applications in physics and other fields, such expansions can be most useful.
In this context it means that the approximation remains a very good one
provided u is sufficiently small. In such a case the first terms will give an excellent
approximation but when m becomes too big, taking higher-order rest terms into
account will rather deteriorate the result.
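The behaviour just described is the hallmark of an asymptotic (semiconvergent) expansion. A standard illustration, chosen here purely for demonstration and not an equation of this chapter, is Euler's series for an exponential integral: its partial-sum errors first shrink and then grow once the factorial wins over the small parameter:

```python
import math

# Euler's semiconvergent series: sum_n (-1)^n n! x^(n+1) formally represents
# F(x) = integral_0^infty exp(-t/x)/(1+t) dt.  For small x the first partial
# sums approximate F(x) very well, but the n! growth eventually spoils it.
def F(x, steps=200000, tmax=40.0):
    h = tmax / steps                      # crude midpoint-rule quadrature
    return sum(math.exp(-(i + 0.5)*h/x) / (1 + (i + 0.5)*h) * h
               for i in range(steps))

x = 0.1
exact = F(x)
partial, errors = 0.0, []
for n in range(13):
    partial += (-1)**n * math.factorial(n) * x**(n + 1)
    errors.append(abs(partial - exact))

print(["%.1e" % e for e in errors])
# the errors fall, bottom out near n ~ 1/x, then rise again
```

Exactly as in the text: for small u (here x), the first terms give an excellent approximation, while pushing the order too high deteriorates the result.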
In conclusion, we briefly discuss the differentiability properties of s(u, φ, t) with respect to the variables u and φ. As indicated above, we can approximate s by a power series in u with φ- (and t-) dependent coefficients to any desired degree of accuracy, provided we choose δ in (7.4.1) small enough. If the rhs of (7.2.19-21) are polynomials (and analytic) in u and are analytic in φ in a certain domain, any of the finite approximations $s^{(n)}$ (7.4.1) to s have the same (analyticity) property. Because of the properties of the rest term discussed above, with an increasing number of differentiations $\partial/\partial\varphi_j$, $\partial/\partial u_k$, the (differentiated) rest term may become bigger and bigger, letting the domain of u in which the approximation is valid shrink, perhaps even to an empty set. This difficulty can be circumvented by "smoothing". Namely, we consider $s^{(n)}$ (omitting the rest term) as a "smoothed" approximation to the manifold s(u, φ, t). Then $s^{(n)}$ possesses the just-mentioned differentiability (or analyticity) properties. If not otherwise mentioned, we shall use $s^{(n)}$ later in this sense.

7.6 Slaving Principle for Discrete Noisy Maps *

The introduction showed that we may deal with complex systems not only by means of differential equations but also by means of maps using a discrete time sequence. The study of such maps has become a modern and most lively branch of mathematics and theoretical physics. In this section we want to show how the slaving principle can be extended to this case. Incidentally, this will allow us to derive the slaving principle for stochastic differential equations by taking an appropriate limit (Sect. 7.9).
Consider a dynamic system described by a state vector $q_l$ which is defined at a discrete and equidistant time sequence l. The evolution of the system from one discrete time l to the next l+1 is described by an equation of the general form

$$ q_{l+1} = f(q_l, l) + G(q_l, l)\,\eta_l, \tag{7.6.1} $$

in which f and G are nonlinear functions of $q_l$ and may depend explicitly on the index l. Here $\eta_l$ is a random vector, whose probability distribution may but need not depend on the index l. In many physical cases $f_l$ is of the order $O(q_l)$

As shown by the case of differential equations at instability points where new ordered structures occur, it is possible to transform the variables to new collective modes which can be classified as undamped or unstable modes u, and damped or slaved modes s.
We assume that we can make an analogous decomposition for discrete maps. To bring out the most important features we leave the phase variables φ aside. Thus we write the equations which are analogous to (7.2.1, 2) in the form (using the notation dl for the finite time interval)

$$ u_{l+1} - u_l = \Lambda_u u_l\,dl + dQ(u, s, l) \tag{7.6.3} $$

and

$$ s_{l+1} - s_l = \Lambda_s s_l\,dl + dP(u, s, l). \tag{7.6.4} $$


We assume $\Lambda_s$ in Jordan's normal form with negative diagonal elements. For simplicity, we also assume that $\Lambda_u$ is diagonal and of smallness δ, but the procedure can easily be extended to a more general $\Lambda_u$.
Note that dQ, dP may contain u and s also at retarded times, so that they are functions of $u_l, u_{l-1}, \ldots$ and $s_l, s_{l-1}, \ldots$. From our result it will transpire that we can also allow for a dependence of dQ and dP on future times, i.e., for instance, on $u_{l+1}$. Our purpose is to devise a procedure by which we can express s in a unique and well-defined fashion by u and l alone. Let us therefore assume that such a replacement can be made. This allows us to establish a number of formal relations used later on.

7.7 Formal Relations *

Let us assume that s can be expressed by a function of u and l alone. This allows us to consider dP in (7.6.4) as a function of u and l alone

$$ s_{l+1} - s_l = \Lambda_s s_l\,dl + dP(u_l, u_{l-1}, \ldots, l). \tag{7.7.1} $$

Equation (7.7.1) can be solved by

$$ s_{l+1} = \sum_{m=-\infty}^{l} (1+\Lambda_s\,dl)^{l-m}\,dP(u_m, u_{m-1}, \ldots, m). \tag{7.7.2} $$

We first show that (7.7.2) fulfills (7.7.1). Inserting (7.7.2) into (7.7.1) we readily obtain for the lhs

$$ s_{l+1} - s_l = \sum_{m=-\infty}^{l}(1+\Lambda_s\,dl)^{l-m}\,dP(u_m, u_{m-1}, \ldots, m) - \sum_{m=-\infty}^{l-1}(1+\Lambda_s\,dl)^{l-1-m}\,dP(u_m, u_{m-1}, \ldots, m). \tag{7.7.3} $$

Taking from the first sum the member with m = l separately we obtain

$$ s_{l+1} - s_l = dP(u_l, u_{l-1}, \ldots, l) + (1+\Lambda_s\,dl)\sum_{m=-\infty}^{l-1}(1+\Lambda_s\,dl)^{l-1-m}\,dP(u_m, u_{m-1}, \ldots, m) - \sum_{m=-\infty}^{l-1}(1+\Lambda_s\,dl)^{l-1-m}\,dP(u_m, u_{m-1}, \ldots, m). \tag{7.7.4} $$

Taking the difference of the remaining sums gives

$$ s_{l+1} - s_l = dP(u_l, u_{l-1}, \ldots, l) + \Lambda_s\,dl\sum_{m=-\infty}^{l-1}(1+\Lambda_s\,dl)^{l-1-m}\,dP(u_m, u_{m-1}, \ldots, m). \tag{7.7.5} $$
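This verification is easy to reproduce numerically in the scalar case. The sketch below builds $s_{l+1}$ from a truncated version of the sum (7.7.2), with an arbitrary bounded inhomogeneity dP of my own choosing, and checks the discrete equation (7.7.1):

```python
import math

# Scalar check that (7.7.2) solves (7.7.1):
#   s_{l+1} - s_l = (Lambda_s*dl) * s_l + dP(l).
lam = -0.4                         # Lambda_s * dl; note |1 + lam| < 1
dP = lambda m: math.sin(0.3 * m)   # illustrative bounded inhomogeneity

def s(l_plus_1, depth=2000):       # truncated version of the sum (7.7.2)
    l = l_plus_1 - 1
    return sum((1 + lam)**(l - m) * dP(m) for m in range(l - depth, l + 1))

l = 50
lhs = s(l + 1) - s(l)
rhs = lam * s(l) + dP(l)
print(abs(lhs - rhs))              # ~0, up to the truncation of the sum
```

The geometric factor $(1+\lambda)^{l-m}$ makes the truncation error negligible, which is precisely the convergence question addressed next.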


According to its definition, the sum in the last term is [compare (7.7.2)] identical with $s_l$, so that the rhs of (7.7.5) agrees with the rhs of (7.7.1). To study the convergence of the sum over m we assume that the factor of

$$ (1+\Lambda_s\,dl)^{l-m} \tag{7.7.6} $$

is bounded. Because we have assumed that $\Lambda_s$ is in Jordan's normal form, it suffices to study the behavior of (7.7.6) within a subspace of Jordan's decomposition. This allows us to consider

$$ (\mathbf{1}+\Lambda'+M_k)^{l} \tag{7.7.7} $$

instead of (7.7.6), where $\mathbf{1}$ is a unity matrix, $\Lambda' = \Lambda\,dl$ is a diagonal matrix and $M_k$ is a $k \times k$ matrix given by

$$ M_k = \begin{pmatrix} 0 & 1 & & \\ & 0 & 1 & \\ & & \ddots & 1 \\ & & & 0 \end{pmatrix}. \tag{7.7.8} $$
In its first diagonal parallel to the main diagonal, $M_k$ is equal to 1, whereas all other elements are 0. We treat the case that $l > 0$ (and is, of course, an integer). According to the binomial law

$$ (\mathbf{1}+\Lambda'+M_k)^{l} = \sum_{\nu=0}^{l}\binom{l}{\nu}(\mathbf{1}+\Lambda')^{l-\nu} M_k^{\nu}. \tag{7.7.9} $$

As one can readily verify,

$$ M_k^{\nu} = 0 \quad\text{for}\quad \nu \ge k. \tag{7.7.10} $$

This allows us to replace (7.7.9) by

$$ \sum_{\nu=0}^{k-1}\binom{l}{\nu}(\mathbf{1}+\Lambda')^{l-\nu} M_k^{\nu}. \tag{7.7.11} $$

Because $\nu \le k-1$ is bounded, the size of the individual elements of (7.7.11) can be estimated by treating $(\mathbf{1}+\Lambda')^{l-\nu}$, or equivalently, $(\mathbf{1}+\Lambda')^{l}$. The absolute value of this quantity tends to 0 for l to infinity, i.e.,

$$ |(1+\lambda')^{l}| \to 0 \quad\text{for}\quad l \to \infty, \tag{7.7.12} $$

provided

$$ |1+\lambda'| < 1. \tag{7.7.13} $$


Decomposing $\lambda'$ into its real and imaginary parts $\lambda_r$ and $\lambda_i$, respectively, we arrive at the necessary and sufficient condition

$$ (1+\lambda_r)^{2} + \lambda_i^{2} < 1. \tag{7.7.14} $$

Equation (7.7.14) can be fulfilled only if

$$ \lambda_r < 0 \tag{7.7.15} $$
holds. Throughout this chapter we assume that conditions (7.7.14, 15) are ful-
filled.
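Both the nilpotency (7.7.10) and the decay condition (7.7.12-15) are easy to confirm numerically for a small Jordan block. In the sketch below the numbers are illustrative; $\lambda'$ is chosen so that (7.7.14) holds, and the polynomially growing binomial factors in (7.7.11) are eventually beaten by $(1+\lambda')^{l}$:

```python
import numpy as np

# 3x3 Jordan block lambda'*1 + M_k, with M_k as in (7.7.8).
lam = -0.3 + 0.2j                    # (1+lam_r)^2 + lam_i^2 = 0.53 < 1, cf. (7.7.14)
Mk = np.diag([1.0, 1.0], k=1)        # ones on the first superdiagonal
A = lam * np.eye(3) + Mk

nilpotent = np.allclose(np.linalg.matrix_power(Mk, 3), 0.0)   # (7.7.10): Mk^v = 0 for v >= k

P = np.eye(3, dtype=complex)
norms = []
for l in range(200):
    P = P @ (np.eye(3) + A)          # builds up (1 + Lambda' + M_k)^(l+1)
    norms.append(np.linalg.norm(P))

print(nilpotent, norms[0], norms[-1])  # norms[-1] is essentially zero: (7.7.12)
```

If instead $\lambda_r \ge 0$ were chosen, the norms would grow without bound, in line with (7.7.15).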
In order to establish formal relations we introduce the abbreviation

$$ \Delta_- f(l+1) \equiv f(l+1) - f(l), \tag{7.7.16} $$

by which we may express $s_l$ by means of $s_{l+1}$,

$$ s_l = (1-\Delta_-)\,s_{l+1}. \tag{7.7.17} $$

This allows us to write (7.7.1) in the form

$$ \{\Delta_-(1+\Lambda_s\,dl) - \Lambda_s\,dl\}\,s_{l+1} = dP(u_l, u_{l-1}, \ldots, l). \tag{7.7.18} $$

The reader is reminded that dP may be a function of u at various time indices l, l-1, ...

(7.7.19)

The formal solution of (7.7.18) reads

$$ s_{l+1} = \{\Delta_-(1+\Lambda_s\,dl) - \Lambda_s\,dl\}^{-1}\,dP(u_l, u_{l-1}, \ldots, l). \tag{7.7.20} $$

A comparison of this solution with the former solution (7.7.2) yields

$$ \{\Delta_-(1+\Lambda_s\,dl) - \Lambda_s\,dl\}^{-1}\,dP(u_l, \ldots, l) = \sum_{m=-\infty}^{l}(1+\Lambda_s\,dl)^{l-m}\,dP(u_m, u_{m-1}, \ldots, m), \tag{7.7.21} $$

which may serve as a definition of the operator on the lhs of (7.7.21). We now introduce the decomposition

$$ \Delta_- = \Delta_-^{(l)} + \Delta_-^{(u)}\,T^{(l)}, \tag{7.7.22} $$

where $\Delta_-^{(l)}$, $\Delta_-^{(u)}$, $T^{(l)}$ operate as follows:

$$ \Delta_-^{(l)} f(u_l, l) = f(u_l, l) - f(u_l, l-1), \tag{7.7.23} $$


$$ \Delta_-^{(u)} f(u_l, l) = f(u_l, l) - f(u_{l-1}, l), \tag{7.7.24} $$

$$ T^{(l)} f(u_l, l) = f(u_l, l-1). \tag{7.7.25} $$

Using operator techniques given in (7.3.9), one readily establishes the following identity

$$ \{\Delta_-(1+\Lambda_s\,dl) - \Lambda_s\,dl\}^{-1} = \{\Delta_-^{(l)}(1+\Lambda_s\,dl) - \Lambda_s\,dl\}^{-1} - \{\Delta_-(1+\Lambda_s\,dl) - \Lambda_s\,dl\}^{-1}\,[\cdots], \tag{7.7.26} $$

where the square bracket represents an abbreviation defined by

(7.7.27)

Because of (7.7.21), we may expect that the curly brackets in (7.7.27) possess an analog to the explicit form of the rhs of (7.7.21). This is indeed the case.
Let us assume for the moment that we may decompose dP into a sum of expressions of the form [compare (7.3.18)]

$$ h(u_m, u_{m-1}, \ldots)\,dg(m), \tag{7.7.28} $$

where h is a function of the variables u alone, but does not depend explicitly on m, whereas dg is a vector which depends on m alone. It is then simple to verify the following relation

$$ \sum_{m=-\infty}^{l}(1+\Lambda_s\,dl)^{l-m}\,h(u_m, u_{m-1}, \ldots)\,dg(m) = h(u_l, u_{l-1}, \ldots)\sum_{m=-\infty}^{l}(1+\Lambda_s\,dl)^{l-m}\,dg(m) $$
$$ \qquad - \sum_{m=-\infty}^{l}(1+\Lambda_s\,dl)^{l+1-m}\sum_{m'=-\infty}^{m-1}(1+\Lambda_s\,dl)^{m-1-m'}\,\{\cdots\}_{m,m'}\,, \tag{7.7.29} $$

where the brace is defined by

(7.7.30)

The essence of (7.7.29) can be expressed as follows. While the lhs contains u at all previous times, the first term on the rhs contains u only at those earlier times which are initially present in $h(u_l, u_{l-1}, \ldots)$. Thus, instead of an infinite regression we have only a finite regression. From a comparison of (7.7.26) with (7.7.29) we may derive the following definition

$$ \{\Delta_-^{(l)}(1+\Lambda_s\,dl) - \Lambda_s\,dl\}^{-1}\,dP(u_l, u_{l-1}, \ldots, l) = \sum_{m=-\infty}^{l}(1+\Lambda_s\,dl)^{l-m}\,dP(u_l, u_{l-1}, \ldots, m), \tag{7.7.31} $$


where on rhs u/, u/_!> etc., are kept fixed and the summation runs only over m.
Equation (7.7.26) with (7.7.27) may be iterated, giving

(7.7.32)

where the parentheses on the rhs are given by

$$ (\cdots) = \sum_{\nu=0}^{\infty}\,[\cdots]^{\nu} \tag{7.7.33} $$

and these square brackets are defined in (7.7.27).


As already noted, in practical cases one does not extend the summation to infinity because it is known from a number of applications that a few terms suffice. Furthermore, the convergence of the whole series (7.7.33) need not hold, so that we are dealing here with semiconvergent series. Therefore it is important to give an estimate of the size of the rest term, provided we take only a finite sum. The rest term for the rhs of (7.7.32) can be derived from the formula

$$ \{\cdots\}^{-1}\sum_{\nu=n+1}^{\infty}[\cdots]^{\nu} = \{\cdots\}^{-1}\sum_{\nu=0}^{\infty}[\cdots]^{\nu}\,[\cdots]^{n+1}. \tag{7.7.34} $$

Since a more explicit discussion of the rest term entails many mathematical details going far beyond the scope of this book, we shall skip the details here.
While the operator containing $\Delta^{(l)}$ is defined by (7.7.31), we have still to explain in more detail how to evaluate the operator $\Delta_-^{(u)}$. It has the following properties:
1) It acts only on $u_l$, $u_{l-1}$, ....
2) For any function f of u, it is defined by (7.7.24)

$$ \Delta_-^{(u)} f(u_l) = f(u_l) - f(u_{l-1}). \tag{7.7.35} $$

If f contains variables at several "times" we may write

(7.7.36)

where

(7.7.37)

denotes the set of variables containing different indices l.


3) For the product of the functions v and w we readily obtain

(7.7.38)

In the following we shall assume that the functions f can be considered as poly-
nomials.

614
7.7 Formal Relations 213

4) In order to treat (7.7.36) still more explicitly we use (7.7.38) and the following rules

$$ \Delta_-^{(u)} u_l^{n} = u_l^{n} - u_{l-1}^{n} = \left(\sum_{\nu_1+\nu_2=n-1} u_l^{\nu_1}\,u_{l-1}^{\nu_2}\right)\Delta_- u_l. \tag{7.7.39} $$

The rhs may be written in the form

(7.7.40)

where the factor and parentheses can be considered as a symmetrized derivative,


which is identical with the parentheses in (7.7.39).
5) From relations (7.7.39, 40) we readily derive for an arbitrary function of f(u)
alone

(7.7.41)

or, more generally,

$$ \Delta_-^{(u)} f(u_l, \ldots, u_{l'}) = \sum_{\nu=0}^{l-l'} g_{\nu}(u_l, \ldots, u_{l'-1})\,\Delta_- u_{l'+\nu}\,. \tag{7.7.42} $$

We may now replace $\Delta_- u_{l'+\nu}$ according to (7.6.3) by

(7.7.43)

where we abbreviate the rhs by

$$ dQ(\ldots, l'+\nu). \tag{7.7.44} $$

This gives (7.7.42) in its final form

(7.7.45)

For the sake of completeness we derive a formula for

$$ \sum_{\nu_0+\cdots+\nu_x=n} \cdots \tag{7.7.46} $$

which, according to the definition (7.7.35), can be written as

$$ \sum_{\nu_0+\cdots+\nu_x=n} \cdots \tag{7.7.47} $$


It can be rearranged to

(7.7.48)

where the parentheses can be written in the form

(7.7.49)

We therefore find for (7.7.46)

$$ \sum_{\nu_0+\cdots+\nu_{x+1}=n-1} u_m^{\nu_0}\,u_{m+1}^{\nu_1}\cdots u_{m+x+1}^{\nu_{x+1}}\,(u_{m+x+1} - u_m), \tag{7.7.50} $$

where we may write

$$ u_{m+x+1} - u_m = \sum_{\nu=0}^{x} \Delta_- u_{m+1+\nu}\,. \tag{7.7.51} $$

The sum in (7.7.50) in front of the parenthesis can again be considered as the symmetrized derivative, while the parentheses in (7.7.50) can be expressed by individual differences $u_{k+1} - u_k$.

7.8 The Iteration Procedure for the Discrete Case *

Taking all the above formulas together we obtain again a well-defined procedure for calculating s as a function of u and l, provided dP is prescribed as a function of u and l alone. In practice, however, dP depends on s. Therefore we must devise a procedure by which we may express s by u and l stepwise by an inductive process.
To this end we introduce a smallness parameter δ. In general, dQ and dP contain a nonstochastic and a stochastic part according to

$$ dQ = Q_0(u, s, l)\,dl + dF_u(u, s, l), \tag{7.8.1} $$

$$ dP = P_0(u, s, l)\,dl + dF_s(u, s, l), \tag{7.8.2} $$

where for the stochastic part the following decomposition is assumed

$$ dF_i(u, s, l) = M_i(u_l, u_{l-1}, \ldots, s_l, s_{l-1}, \ldots)\,dF_0(l), \quad i = u, s. \tag{7.8.3} $$

We assume that $\Lambda_u$ in (7.6.3) is of the order δ and that the functions occurring in (7.8.1, 2) can be expressed as polynomials of u and s with l-dependent coefficients. The coefficients are either continuous functions (contained in $Q_0$ and $P_0$) or quantities describing a Wiener process [in $dF_0(l)$] which is defined by

(4.2.2, 3). We assume that the constant terms which are independent of u and s are of smallness $\delta^2$, the coefficients of the linear terms of smallness δ, while u and s are of order δ, $\delta^2$, respectively. In order to devise an iteration procedure we represent s in the form

$$ s(u, l) = \sum_{k=2}^{\infty} C^{(k)}(u, l), \tag{7.8.4} $$

where $C^{(k)}$ is a term which contains expressions of precisely order $\delta^{k}$

(7.8.5)

Similarly, we introduce the decompositions

$$ dQ = \sum_{k=2}^{\infty} dQ^{(k)} \quad\text{and} \tag{7.8.6} $$

$$ dP = \sum_{k=2}^{\infty} dP^{(k)}. \tag{7.8.7} $$

We now proceed in two steps. We apply (7.7.32) with (7.7.33 and 27) on dP, but on both sides we take only the term of the order $\delta^{k}$.
Starting from (7.7.45) we define $g_{\nu}^{(k-k')}$ as a function which is precisely of order $\delta^{k-k'}$ and $dQ^{(k')}$ as a function being precisely of order $\delta^{k'}$. We put

$$ \Delta_-^{(u)(k)} = \sum_{k'=0}^{k}\ \sum_{\nu=0}^{l-l'} g_{\nu}^{(k-k')}(u_l, \ldots, u_{l'-1})\,dQ^{(k')}(\ldots, l'+\nu), \tag{7.8.8} $$

which is obviously of order $\delta^{k}$. After these preparations the final formula is given by

$$ C^{(k)} = \{\Delta_-^{(l)}(1+\Lambda_s\,dl) - \Lambda_s\,dl\}^{-1} \sum_{k_1+\cdots+k_{\nu}=k'}\ \prod_{i=1}^{\nu}\,[\cdots]_{k_i}\;dP^{(k-k')} \tag{7.8.9} $$

together with

(7.8.10)

This formula allows us to express s by u and l, where u is taken at l and a finite number of retarded time indices. In order to elucidate the above procedure it is worthwhile to consider special cases.
1) If dQ and dP do not explicitly depend on time and there are no fluctuating forces, (7.8.9) reduces to

$$ C^{(k)} = \{-\Lambda_s\,dl\}^{-1} \sum_{k_1+\cdots+k_{\nu}=k'}\ \prod_{i=1}^{\nu}\,[\cdots]_{k_i}\;P^{(k-k')}\,dl, \tag{7.8.11} $$


where
(7.8.12)
2) If l becomes a continuous variable and there are no fluctuating forces, we refer to the results of Sect. 7.4. For convenience we put $dl = \tau$. Using the notation of that section we immediately find

$$ \{\Delta_-(1+\Lambda_s\,dl) - \Lambda_s\,dl\}^{-1} \to \tau^{-1}\left(\frac{d}{dt} - \Lambda_s + O(\tau)\right)^{-1} \quad\text{and} \tag{7.8.13} $$

(7.8.14)

In the limit $\tau \to 0$ we finally obtain

7.9 Slaving Principle for Stochastic Differential Equations *

We are now in a position to derive the corresponding formulas for stochastic differential equations. To introduce the latter, we consider

$$ du = \Lambda_u u\,dt + dQ(u, s, t), \tag{7.9.1} $$

$$ ds = \Lambda_s s\,dt + dP(u, s, t). \tag{7.9.2} $$

Although it is not difficult to include the equations for φ [cf. Sects. 7.2, 4] and a φ-dependent dQ and dP in our analysis, to simplify our presentation they will be omitted.
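As a numerical illustration of (7.9.1, 2), with scalar modes and coefficient functions chosen purely for demonstration (they are not the text's), an Euler-Maruyama integration shows the stable mode clinging to its lowest-order slaved value even in the presence of noise:

```python
import numpy as np

# Euler-Maruyama sketch of a scalar instance of (7.9.1, 2), with assumed
# illustrative choices dQ = (eps*u - u*s) dt + g dw and dP = u^2 dt:
#   du = (eps*u - u*s) dt + g dw(t),   ds = (-beta*s + u^2) dt.
eps, beta, g = 0.1, 5.0, 0.02
rng = np.random.default_rng(1)
dt, nsteps = 1e-3, 40000

u, s = 0.1, 0.0
for _ in range(nsteps):
    dw = rng.normal(0.0, np.sqrt(dt))          # Wiener increment, variance dt
    u, s = u + (eps*u - u*s)*dt + g*dw, s + (-beta*s + u*u)*dt

print(u, s, u*u/beta)   # s stays close to the slaved value u^2/beta
```

The fast relaxation rate β keeps s on the noisy slaved manifold; the care taken below with equal versus infinitesimally shifted times is what makes this limit well defined.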
Because of the Wiener process we must be careful when taking the limit to a continuous time sequence. Because of the jumps which take place at continuous times we must carefully distinguish between precisely equal times and time differences which are infinitesimally small. The fundamental regression relation to be used stems from (7.7.29) and can be written in the form

$$ \int_{-\infty}^{t} e^{\Lambda_s(t-\tau)}\,h(u_{\tau}, u_{\tau-dt}, u_{\tau-2dt}, \ldots)\,dg(\tau) = \int_{-\infty}^{t} e^{\Lambda_s(t-\tau)}\,dg(\tau)\,h(u_t, u_{t-dt}, \ldots) $$
$$ \qquad - \int_{-\infty}^{t} e^{\Lambda_s(t-\tau)}\,d_- h(u_{\tau}, u_{\tau-dt}, \ldots)\int_{-\infty}^{\tau-dt} e^{\Lambda_s(\tau-\tau')}\,dg(\tau') \tag{7.9.3} $$


with

$$ d_- h(u_{\tau}, u_{\tau-dt}, \ldots) = h(u_{\tau}, u_{\tau-dt}, \ldots) - h(u_{\tau-dt}, u_{\tau-2dt}, \ldots). \tag{7.9.4} $$

We shall first discuss (7.9.3, 4) in detail, allowing us eventually to translate relations (7.7.26-33) to the present case.
For the first integral on the rhs of (7.9.3), we consider whether we may replace $u_{t-dt}$, $u_{t-2dt}$, etc., by $u_t$.
To this end we first replace $u_{t-dt}$ by $u_t$ and compensate for this change by adding the corresponding two other terms. We thus obtain

$$ \int_{-\infty}^{t} e^{\Lambda_s(t-\tau)}\,dg(\tau)\,h(u_t, u_{t-dt}, u_{t-2dt}, \ldots) = \int_{-\infty}^{t} e^{\Lambda_s(t-\tau)}\,dg(\tau)\,h(u_t, u_t, u_{t-2dt}, \ldots) $$
$$ \qquad - \int_{-\infty}^{t} e^{\Lambda_s(t-\tau)}\,dg(\tau)\,\big[h(u_t, u_t, u_{t-2dt}, \ldots) - h(u_t, u_{t-dt}, u_{t-2dt}, \ldots)\big]. \tag{7.9.5} $$

The difference on rhs of (7.9.5) can be formally written as

(7.9.6)

We assume the fluctuating forces in the form

$$ dF_u(u, s, t) = F_{u,i}(u, s, t)\,dw_i(t), \tag{7.9.7} $$

where we shall assume the Itô calculus, the convention to sum over indices which occur twice, and that $w_i$ describes a Wiener process.
It is not difficult to include in our approach the case where $F_{u,i}$ depends on u and s at previous times and on time integrals over $dw_j$ over previous times (see below). We shall further assume that $dw_i\,dw_k$ converges in probability according to
$$ dw_i\,dw_k = \delta_{ik}\,dt. \tag{7.9.8} $$
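Relation (7.9.8) can be illustrated numerically: sums of squared Wiener increments over an interval converge to its length t, while mixed products of independent increments vanish. A quick sketch (not part of the text's derivation):

```python
import numpy as np

# dw_i dw_k -> delta_ik dt in probability: sum products of independent
# Wiener increments over [0, t] and compare with t and with 0.
rng = np.random.default_rng(0)
t, n = 1.0, 200000
dt = t / n
dw1 = rng.normal(0.0, np.sqrt(dt), n)
dw2 = rng.normal(0.0, np.sqrt(dt), n)

print(np.sum(dw1 * dw1))   # close to t = 1.0
print(np.sum(dw1 * dw2))   # close to 0
```

The fluctuations of both sums shrink like $\sqrt{dt}$, which is why terms of order $dt\,dw$ and $dt^2$ can be dropped below while $dw^2$ must be kept as $dt$.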

We use the Itô transformation rule for a function φ(u), according to which

$$ d\varphi(u) = \frac{\partial \varphi}{\partial u_i}\,du_i + \frac{1}{2}\,\frac{\partial^2 \varphi}{\partial u_i\,\partial u_k}\,F_{u_i,p}F_{u_k,p}\,dt. \tag{7.9.9} $$

With the help of this relation, (7.9.6) acquires the form


(7.9.10)

In the next step the expression on the rhs of (7.9.5) will be inserted for the stable mode into the equation for the unstable mode. We now have to distinguish between two cases:
a) s occurs in

$$ Q_0(u, s, t)\,d\tau. \tag{7.9.11} $$

Because (7.9.10) gives rise to terms of order O(dt), O(dw), we obtain terms of the form $dt^2$ or $dt\,dw$, which can both be neglected compared to $d\tau$.
b) s occurs in

$$ F_{u,i}(u(t), s(t), t)\,dw_i(t), \tag{7.9.12} $$

leading to terms $dt\,dw$ which tend to 0 more strongly than $d\tau$, and to

(7.9.13)

These considerations show that

(7.9.14)

We may continue in this way, which proves that we may replace everywhere $u_{t-k\,dt}$, k = 1, 2, ..., by $u_t$.
We now consider the second term on the rhs of (7.9.3), i.e.,

$$ \int_{-\infty}^{t} e^{\Lambda_s(t-\tau)}\,d_- h(u_{\tau}, u_{\tau-dt}, \ldots)\int_{-\infty}^{\tau-dt} e^{\Lambda_s(\tau-\tau')}\,dg(\tau'). \tag{7.9.15} $$

When we use the definition of $d_-$ of (7.9.4), we may write (7.9.15) in the form

$$ \int_{-\infty}^{t} e^{\Lambda_s(t-\tau)}\,\big[h(u_{\tau}, u_{\tau-dt}, \ldots) - h(u_{\tau-dt}, u_{\tau-2dt}, \ldots)\big]\int_{-\infty}^{\tau-dt} e^{\Lambda_s(\tau-\tau')}\,dg(\tau'). \tag{7.9.16} $$


Using

$$ du_i = Q_{0i}(u_t, u_{t-dt}, \ldots, t)\,dt + F_{u_i,p}(u_t, u_{t-dt}, \ldots, t)\,dw_p \tag{7.9.17} $$

and the Itô rule, we cast (7.9.16) into the form

$$ \int_{-\infty}^{t} e^{\Lambda_s(t-\tau)}\left[\sum_{n=1}^{k}\frac{\partial h(u_{\tau-dt}, u_{\tau-2dt}, \ldots)}{\partial u_{i,\tau-n\,dt}}\Big(Q_{0i}(u_{\tau-n\,dt}, \ldots, \tau-n\,dt)\,d\tau + F_{u_i,p}(u_{\tau-n\,dt}, \ldots, \tau-n\,dt)\,dw_p(\tau-n\,dt)\Big)\right. $$
$$ \left.\qquad + \frac{1}{2}\sum_{n=1}\frac{\partial^2 h(u_{\tau-dt}, u_{\tau-2dt}, \ldots)}{\partial u_{i,\tau-n\,dt}\,\partial u_{k,\tau-n\,dt}}\,F_{u_i,p}(u_{\tau-n\,dt}, \ldots)\,F_{u_k,p}(u_{\tau-n\,dt}, \ldots)\,d\tau\right] \times \int_{-\infty}^{\tau-dt} e^{\Lambda_s(\tau-\tau')}\,dg(\tau'). \tag{7.9.18} $$

In the first term of (7.9.18), when we replace $u_{\tau-n\,dt}$, n = 2, 3, ..., by $u_{\tau-dt}$ we obtain correction terms which can be rigorously neglected because all expressions must still be multiplied by $d\tau$. This allows us to replace h, which depends on u's at different times, by a new function which depends only on u at time $\tau-dt$. Denoting this function by $h(\{u\})$, the first term of (7.9.18) acquires the form

$$ \int_{-\infty}^{t} e^{\Lambda_s(t-\tau)}\,\frac{\partial h(\{u\})}{\partial u_i}\,Q_{0i}(\{u\})\,d\tau \int_{-\infty}^{\tau} e^{\Lambda_s(\tau-\tau')}\,dg(\tau'). \tag{7.9.19} $$

Because we shall use the last sum in the curly brackets in the same form as it occurs in (7.9.19), we have to deal only with the second term containing F linearly.
We now study this term

$$ \int_{-\infty}^{t} e^{\Lambda_s(t-\tau)}\sum_{n=1}^{k}\frac{\partial h(u_{\tau-dt}, u_{\tau-2dt}, \ldots)}{\partial u_{i,\tau-n\,dt}}\,F_{u_i,p}(u_{\tau-n\,dt}, \ldots)\,dw_p(\tau-n\,dt) \times \int_{-\infty}^{\tau-dt} e^{\Lambda_s(\tau-\tau')}\,dg(\tau'). \tag{7.9.20} $$

Our goal is to replace the u's occurring in h with different time arguments by u's which depend on a single time argument only. All u's standing in front of $u_{\tau-n\,dt}$ can be replaced by $u_{\tau-n\,dt}$, and similarly all u's following $u_{\tau-n\,dt}$ can be replaced by $u_{\tau-(n+1)dt}$. Similarly the arguments in F can be replaced by $u_{\tau-n\,dt}$. These substitutions give contributions of higher order and can therefore be rigorously neglected in the limit $dt \to 0$. In this way (7.9.20) is transformed into


$$ \int_{-\infty}^{t} e^{\Lambda_s(t-\tau)}\sum_{n=1}\frac{\partial h(\{u_{\tau-(n-1)dt}\}, u_{\tau-n\,dt}, \{u_{\tau-(n+1)dt}\})}{\partial u_{i,\tau-n\,dt}}\,F_{u_i,p}(\{u_{\tau-n\,dt}\}) $$
$$ \qquad \cdot\, dw_p(\tau-n\,dt)\int_{-\infty}^{\tau-dt} e^{\Lambda_s(\tau-\tau')}\,dg(\tau') + O(d\tau) + O(dw). \tag{7.9.21} $$

The next step is to replace the argument $u_{\tau-(n-1)dt}$ by $u_{\tau-(n+1)dt}$. In order to correct for this substitution we encounter contributions containing $dw_p$ at time $\tau-n\,dt$. This $dw_p$, taken together with another $dw_p$ occurring in (7.9.20), will give rise to terms proportional to $d\tau$. These terms must be carried along our calculation if they occur under an integral. Carrying out these calculations explicitly, we may cast (7.9.20) into the form

$$ \int_{-\infty}^{t} e^{\Lambda_s(t-\tau)}\left[\sum_{n=1}\frac{\partial h(u_{\tau-n\,dt}, \{u_{\tau-(n+1)dt}\})}{\partial u_{i,\tau-n\,dt}}\,F_{u_i,p}(\{u_{\tau-n\,dt}\})\,dw_p(\tau-n\,dt)\right. $$
$$ \left.\qquad + \sum_{n=1}\sum_{m=1}\frac{\partial^2 h(u_{\tau-dt}, u_{\tau-2dt}, \ldots)}{\partial u_{k,\tau-m\,dt}\,\partial u_{i,\tau-n\,dt}}\,F_{u_i,p}(\{u_{\tau-n\,dt}\})\,F_{u_k,p}(\{u_{\tau-m\,dt}\})\,d\tau\right] \times \int_{-\infty}^{\tau-dt} e^{\Lambda_s(\tau-\tau')}\,dg(\tau'). \tag{7.9.22} $$

We are now in a position to replace the arguments of h by a single argument $u_{\tau-dt}$. The second term in (7.9.22) remains the same, so that we obtain

$$ \cdots \times \int_{-\infty}^{\tau-dt} e^{\Lambda_s(\tau-\tau')}\,dg(\tau'). \tag{7.9.23} $$

We now put all terms occurring in (7.9.18) together. These transformations require a study of the explicit form of the integral over dg, which in fact is a multiple integral over dw's and functions of time.
Since this study is somewhat lengthy without adding new results, we give rather the final formula only. According to it, the operator $d_-$, which was defined in (7.9.4), is given by

(7.9.24)


We can now proceed in complete analogy to the discrete time case by devising an iteration procedure in which we collect terms of the same order of magnitude, i.e., the same powers of u. To this end we define the operator

$$ \cdots + F^{(k_r+1)}_{u_i,p}(\{u_{t-dt}\})\,dw_p(t-dt)\,\frac{\partial}{\partial u_i}\,, \tag{7.9.25} $$

in which the first operator is defined by

$$ \left(\frac{d}{dt}\right)_{(m)} = \dot{u}^{(m+1)}\,\frac{\partial}{\partial u}\,, \tag{7.9.26} $$

and $\dot{u}^{(m+1)}$ means that we have to take from (7.9.17) only those terms which contain the (m+1)st power of u (after s has been replaced by u). After these steps we are in a position to write the final result. Accordingly, s(t) can be written as

$$ s(t) = \sum_{k=2}^{\infty} C^{(k)}, \quad\text{where} \tag{7.9.27} $$

$$ C^{(k)} = \left(\frac{d}{dt} - \Lambda_s\right)^{-1}_{(0)} \sum_{k_1+\cdots+k_{\nu}=k'}\ \prod_{i=1}^{\nu}\,[\cdots]_{k_i}\;dP^{(k-k')}, \quad\text{with} \tag{7.9.28} $$

(7.9.29)

The inverse operator occurring in (7.9.29) denotes the performance of integration over the time variable which occurs explicitly in all functions except for the function u(t).
It is simple to study stochastic differential equations along similar lines using the Stratonovich calculus. The only difference in the final result consists in the definition of $d_-$ (or $\Delta_-^{(u)}$) in the usual way.

8. Nonlinear Equations. Qualitative Macroscopic Changes

In this and the subsequent chapter we deal with a problem central to synergetics, namely qualitative macroscopic changes of complex systems. Though it is possible to treat the various instabilities under the impact of noise by means of a single approach, for pedagogical reasons we shall deal with the special cases individually. For the same reasons we first start with equations which do not contain fluctuations (noise) and shall treat the corresponding problems only later. The general philosophy of our approach was outlined in Sect. 1.14.

8.1 Bifurcations from a Node or Focus. Basic Transformations

We start with probably the simplest case, namely the bifurcation of solutions from a node. Our starting point is a set of nonlinear differential equations for the state vector q(t), namely

$$ \dot{q}(t) = N(q(t), \alpha). \tag{8.1.1} $$

We assume that the nonlinear function N(q, α) depends on a control parameter α. We make the following assumptions.
1) For a value $\alpha = \alpha_0$ there exists a time-independent solution of (8.1.1)

$$ N(q_0(\alpha_0), \alpha_0) = 0. \tag{8.1.2} $$

2) When we change $\alpha_0$ continuously to other, say bigger, values α, $q_0$ can be extended to this new parameter value, so that

$$ N(q_0(\alpha), \alpha) = 0. \tag{8.1.3} $$

We now consider the stability of $q_0$ at this new control parameter α, focusing attention on a situation where the solution (8.1.2) becomes unstable. We study the stability by linear analysis, making the hypothesis

$$ q(t) = q_0(\alpha) + w(t). \tag{8.1.4} $$

Inserting (8.1.4) into (8.1.1), we obtain the still exact relation

$$ \dot{w}(t) = N(q_0(\alpha) + w(t), \alpha). \tag{8.1.5} $$

We now assume that w is a small quantity which allows us to linearize (8.1.5). That means we expand the rhs into a power series in the components $w_l$ of w. Because of (8.1.3), the first term vanishes and the second leads to the result

(8.1.6)

whereas, as stated before, we neglected all terms of higher order. Equation (8.1.6) can be cast into the general form

$$ \dot{w}(t) = L(\alpha)\,w(t), \tag{8.1.7} $$

where L depends on α but does not depend on time t. As we know from Sect. 2.6, the solutions of (8.1.7) have the general form

$$ w^{(k)}(t) = e^{\lambda_k t}\,v^{(k)}. \tag{8.1.8} $$
If the eigenvalues $\lambda_k(\alpha)$ of the matrix L(α) are nondegenerate, the vectors v are time independent. In case of degeneracy, $v^{(k)}$ may depend on powers of t.
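Numerically, the linear stability analysis (8.1.4-8) amounts to diagonalizing the Jacobian L(α) at the stationary state. A minimal sketch with an assumed two-variable N (my own example, not one from the text):

```python
import numpy as np

# Linear stability of q0 = (0, 0) for an assumed
# N(q, alpha) = (alpha*q1 - q1*q2, -q2 + q1**2);
# its Jacobian L(alpha) at q0 happens to be diagonal.
def L_matrix(alpha):
    return np.array([[alpha, 0.0],
                     [0.0,  -1.0]])

for alpha in (-0.5, 0.5):
    lam, v = np.linalg.eig(L_matrix(alpha))
    print(alpha, sorted(lam.real))
# alpha < 0: all Re(lambda_k) < 0, q0 is a stable node;
# alpha > 0: one real eigenvalue has crossed zero and q0 loses stability
```

The eigenvalue that crosses zero is exactly the candidate for the "unstable" mode u singled out in Sect. 8.2.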
In order not to overload our presentation we shall assume in the following that the vectors $v^{(k)}$ are time independent, which is secured if, e.g., the eigenvalues $\lambda_k$ are all different from each other, i.e., nondegenerate. Since the vectors $v^{(k)}$ form a complete set in the vector space of q, we may represent the desired solution q as a superposition of the vectors $v^{(k)}$. Therefore, the most general solution may be written in the form

$$ q(t) = q_0(\alpha) + \underbrace{\sum_k \xi_k(t)\,v^{(k)}}_{w(t)}. \tag{8.1.9} $$

where the coefficients $\xi_k(t)$ are still unknown time-dependent variables. It will be our first task to determine $\xi_k$. To this end we insert (8.1.9) into (8.1.1)

$$ \sum_k \dot{\xi}_k(t)\,v^{(k)} = N\Big(q_0(\alpha) + \underbrace{\textstyle\sum_k \xi_k(t)\,v^{(k)}}_{W(t)},\ \alpha\Big). \tag{8.1.10} $$

To be as explicit as possible we assume that we may expand N into a power series of W and that it is sufficient to keep only its first terms. Our procedure will make it clear, however, how to proceed in the general case in which the polynomial extends to higher orders. We thus write N in the form

$$ N(q_0(\alpha) + W, \alpha) = \underbrace{N(q_0(\alpha), \alpha)}_{1} + \underbrace{L\,W}_{2} + \underbrace{N^{(2)}{:}W{:}W}_{3} + \underbrace{N^{(3)}{:}W{:}W{:}W}_{4} + \cdots, \tag{8.1.11} $$


where according to (8.1.3) the first term vanishes

$$ N(q_0(\alpha), \alpha) = 0. \tag{8.1.12} $$

According to (8.1.6 or 7) and using (8.1.8), the second term can be cast into the form

$$ L\,W = \sum_k \xi_k(t)\,L\,v^{(k)} = \sum_k \xi_k(t)\,\lambda_k\,v^{(k)}. \tag{8.1.13} $$

The third term is a shorthand notation which is defined as follows

$$ N^{(2)}{:}W{:}W = \frac{1}{2}\sum_{k'k''} \xi_{k'}(t)\,\xi_{k''}(t)\sum_{jj'}\left(\frac{\partial^2 N}{\partial q_j\,\partial q_{j'}}\right) v_j^{(k')}\,v_{j'}^{(k'')}. \tag{8.1.14} $$
The higher-order terms are defined in a similar fashion. Since we want to derive equations for $\xi_k$, we should get rid of the vectors $v^{(k)}$. To this end we use the dual eigenvectors introduced in Sect. 2.5. These eigenvectors,

$$ \bar{v}^{(k)}, \tag{8.1.15} $$

which are, like v, time independent in the present case, have the property

$$ \bar{v}^{(k)}\,v^{(k')} = \delta_{kk'}. \tag{8.1.16} $$

Taking the scalar product of (8.1.10) with (8.1.15) and making use of the decomposition (8.1.11), we find the following equations

ξ̇_k = λ_k ξ_k + Σ_{k'k"} A^(2)_{k;k'k"} ξ_{k'} ξ_{k"} + … ,   (8.1.17)

where we have introduced the abbreviation A^(2). The higher-order terms are, of
course, defined in an analogous fashion, giving the final result

ξ̇_k = λ_k ξ_k + Σ_{k'k"} A^(2)_{k;k'k"} ξ_{k'} ξ_{k"} + Σ_{k'k"k'''} A^(3)_{k;k'k"k'''} ξ_{k'} ξ_{k"} ξ_{k'''} + … .   (8.1.18)

8.2 A Simple Real Eigenvalue Becomes Positive

We numerate the eigenvalues in such a way that the one with the biggest real part
carries the index 1. We assume that this eigenvalue is real and nondegenerate and
that the control parameters are such that λ₁(α) changes its sign from a negative to
a positive value. We shall focus our attention on a situation in which

λ₁(α) ≥ 0   (8.2.1)


holds and in which the real parts of all other eigenvalues are still negative. Since
(8.2.1) in the linear stability analysis indicates that the corresponding solution is
unstable, whereas

Re{λ₂}, Re{λ₃}, … ≤ C₀ < 0   (8.2.2)

indicates that all other solutions are still stable, we shall express this fact by a
change of notation. To this end we put for

k = 1:  ξ₁(t) = u ,  λ₁ ⇒ λ_u ,   (8.2.3)

where u refers to "unstable", and for k′ ≥ 2

ξ_{k′}(t) = s_{k′−1} ,  λ_{k′} ⇒ λ_{k′−1} ,   (8.2.4)

where s refers to "stable". We must stress here that our notation may lead to
some misunderstanding which must be avoided at any rate. Note that "unstable"
and "stable" refer to the linear stability analysis only. We shall show that in
many cases, due to the nonlinearities, the mode which was formerly unstable in
the linear analysis becomes stabilized, and it will be our main purpose to explore
the new stable regions of u.
So the reader is well advised to use u and s as abbreviations distinguishing
those modes which in the linear stability analysis have eigenvalues characterized
by the properties (8.2.1, 2). After these introductory remarks we split (8.1.18)
according to the indices 1 and 2, ... , respectively, into the following equations

u̇ = λ_u u + A_u^(2) u² + A_u^(3) u³ + … + Σ_k A_{uk}^(2) u·s_k + … ,   (8.2.5)

ṡ_k = λ_k s_k + A_k^(2) u² + A_k^(3) u³ + … + Σ_{k'} A_{kk'}^(2) u·s_{k'} + … .   (8.2.6)

In this section and Sect. 8.3 we shall assume u real. Provided u is small enough
and (8.2.2) holds, we may apply the slaving principle. This allows us to express Sk
as a function of u which may be approximated by a polynomial

(8.2.7)

Inserting (8.2.7) in (8.2.5) we readily obtain the order parameter equation

(8.2.8)

where C is given by

(8.2.9)


The rest term contains higher powers of u. Of course in practical applications


it will be necessary to check the size of the rest term. Here we shall assume that it
is sufficiently small so that we can neglect it. To exhibit the main features of
(8.2.8) we start with a few examples. We write (8.2.8) in the form

u̇ = λ_u u + f(u)   (8.2.10)

and first treat the example

f(u) = −βu³ ,  β real ,   (8.2.11)

so that (8.2.10) reads

u̇ = λ_u u − βu³ .   (8.2.12)

For those readers who are acquainted with mechanics we note that (8.2.12) can
be visualized as the overdamped motion of a particle in a force field which has a
potential V according to the equation

u̇ = −∂V/∂u .   (8.2.13)
In the present case V is given by

V = −λ_u u²/2 + β u⁴/4 ,   (8.2.14)

which easily allows us to visualize the behavior of the solution u of the nonlinear
equation (8.2.12). An inspection of the potential curve V reveals immediately
that depending on the sign of λ_u two entirely different situations occur. For
λ_u < 0 the value u = 0 is a stationary stable solution. This solution persists for
λ_u > 0 but becomes unstable. The new stable solutions are given by

u = ±√(λ_u/β) .   (8.2.15)

Note that this solution becomes imaginary for negative λ_u and must therefore be
excluded for that branch of λ_u.
Taking the roots (8.2.15) and

u=O (8.2.16)

together we may decompose the rhs of (8.2.12), which is a polynomial, into a


product of its roots according to

u̇ = −β·u(u − √(λ_u/β))(u + √(λ_u/β)) .   (8.2.17)


Introducing the abbreviations

u₁ = √(λ_u/β) ,  u₂ = −√(λ_u/β) ,  u₃ = 0   (8.2.18)

for the individual roots, we may write (8.2.17) in the more condensed form

u̇ = −β(u − u₁)(u − u₂)(u − u₃) .   (8.2.19)

Because λ_u depends on the control parameter α or may even be identified with it
in a number of cases, the roots u_k are functions of this control parameter α.
These considerations can be generalized in a straightforward manner to cases in
which f(u) is a general polynomial

u̇ = λ_u u + f(u) ≡ P(u) .   (8.2.20)

Again it is helpful to interpret (8.2.20) as the overdamped motion of a particle


in the potential field V from which the rhs of (8.2.20) may be derived

u̇ = −∂V/∂u .   (8.2.21)

In analogy to (8.2.19) we may decompose the polynomial (8.2.20) into a product


combining the individual roots

u̇ = C[u − u₁(α)][u − u₂(α)] ⋯ [u − u_n(α)] .   (8.2.22)

Knowing the dependence of u_k on the control parameter α we may find a number
of branches of the possible stationary solutions when α changes. It is a nice game
to study the emergence, coalescence, or disappearance of such branches, some
examples of which are given in the exercises.
To study the stability of these individual branches of solutions we may
expand P(u) around the respective root u_k. In the immediate vicinity of u_k we
put

u = u_k + δu ,   (8.2.23)

linearize (8.2.20) around u_k, and study the stability by means of the linearized
equations

δu̇ = ∂P(u)/∂u |_{u=u_k} δu .   (8.2.24)

Equation (8.2.24) is of the general form

δu̇ = γ·δu ,   (8.2.25)

where the sign of γ decides upon stability or instability,


γ = ∂P(u)/∂u |_{u=u_k} ≷ 0 .   (8.2.26)

The temporal evolution in the nonlinear domain can be studied also because
(8.2.20) is a differential equation whose variables can be separated so that we
immediately find

∫_{u₀}^{u} du′/(λ_u u′ + f(u′)) = t − t₀ .   (8.2.27)

Using (8.2.22) the integrand can be decomposed into partial fractions so that
the integral can be explicitly evaluated.
Exercise. Evaluate the integral in (8.2.27) in the case (8.2.19) and determine u(t).
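The relaxation of (8.2.12) towards the stable roots (8.2.15) is easy to check numerically. The following minimal sketch integrates u̇ = λ_u u − βu³ with an explicit Euler scheme; the parameter values λ_u = ±1, β = 2 are arbitrary illustrations, not taken from the text.

```python
import math

def simulate(u0, lam, beta, dt=1e-3, steps=200_000):
    """Explicit Euler integration of u' = lam*u - beta*u**3, Eq. (8.2.12)."""
    u = u0
    for _ in range(steps):
        u += dt * (lam * u - beta * u**3)
    return u

lam, beta = 1.0, 2.0
root = math.sqrt(lam / beta)      # stable roots +-root of Eq. (8.2.15)

# lam > 0: u = 0 is unstable; the sign of the initial value selects the branch
print(simulate(0.3, lam, beta), root)
print(simulate(-0.3, lam, beta), -root)

# lam < 0: u = 0 is the only (stable) stationary solution
print(simulate(0.3, -1.0, beta))
```

The Euler fixed points coincide with the exact stationary solutions, so the long-time values agree with ±√(λ_u/β) to the integration accuracy.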

8.3 Multiple Real Eigenvalues Become Positive

We assume that a number M of the eigenvalues with the greatest real parts coincide and that they are real

λ₁ = λ₂ = … = λ_M ≡ λ_u   (8.3.1)

and are zero or positive while all other eigenvalues have negative real parts. In
extension of the previous section we denote the mode amplitudes which belong to
(8.3.1) by

u₁, u₂, … , u_M .   (8.3.2)

Splitting the general system (8.1.18) into the mode amplitudes (8.3.2) and the
rest and applying the slaving principle we find the order parameter equations

u̇₁ = λ_u u₁ + f₁(u₁, …, u_M, λ_u, α) ,
⋮
u̇_M = λ_u u_M + f_M(u₁, …, u_M, λ_u, α)   (8.3.3)

or, in shorter notation (not denoting explicit a-dependence of the P's),

u̇_j = P_j(u₁, …, u_M) ,  j = 1, …, M .   (8.3.4)

In principle, these equations are of precisely the same form as the original equa-
tions (8.1.1). However, the following two points should be observed. In contrast
to the general equations (8.1.1), the number of variables of the reduced equations
(8.3.3) is in very many practical cases much smaller. Especially in complex


systems, where we deal with very many degrees of freedom, an enormous reduc-
tion of the number of degrees of freedom is achieved by going from (8.1.1) to
(8.3.3). Furthermore, when eigenvalues are degenerate, symmetry properties can
often be used. Here we indicate how to cope with (8.3.3) in the special case in
which stationary solutions

u̇₁ = u̇₂ = … = u̇_M = 0   (8.3.5)

exist. Note, however, that (8.3.3) may also have solutions other than (8.3.5), for
instance oscillatory solutions. We shall come back to such equations later. Here
we make a few comments for the case when (8.3.5) holds.
A rather simple case occurs when the rhs of (8.3.4) can be written as a
gradient of a potential. A class of such problems is treated by catastrophe theory
([Ref. 1, Chap. 5] and additional reading material given at the end of the present
book). We shall not dwell on that problem here further. Another class of
equations of the form (8.3.4) is provided by

(8.3.6)

The expressions (8.3.6) vanish if u_j or the bracket connected with it vanishes. If
some of the u_j vanish, say those with the indices k+1, k+2, …, we need only
require that for 1, …, k the brackets vanish

(8.3.7)

Equations (8.3.7) describe ellipses, straight lines, or hyperbolas in a plane, or their
corresponding counterparts, namely certain hypersurfaces in n-dimensional
space. The possible stationary solutions (8.3.5) are then defined as the cross sections of such hypersurfaces.
Let us illustrate this by a most simple example in which we deal with two
variables, and one of the equations (8.3.7) reads

(8.3.8)

and the other

u₁²/a² + u₂²/b² = λ_u(α) .   (8.3.9)

Depending on the constants a and b we may have 0, 2, or 4 cross sections.
Evidently with increasing λ_u(α) in the case of possible cross sections the variables
u₁ and u₂ increase. Each of the cross sections represents one of the newly evolving
possible solutions.
Exercise. Show that at least in general (8.3.8) and (8.3.9) do not possess a potential.
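The counting of cross sections can also be done numerically. Since the form of (8.3.8) is not fully legible here, the sketch below pairs the ellipse (8.3.9) with a circle as a hypothetical second hypersurface; substituting x = u₁², y = u₂² reduces the two quadratic conditions to a linear system, whose sign pattern yields the 0, 2, or 4 cross sections mentioned above. All numerical values are illustrative assumptions.

```python
def intersections(a, b, lam, R):
    """Count the real intersection points of the ellipse (8.3.9),
    u1**2/a**2 + u2**2/b**2 = lam, with the circle u1**2 + u2**2 = R**2
    (the circle stands in for the second, not fully legible, hypersurface).
    Solve the linear system for x = u1**2 and y = u2**2."""
    det = 1.0 / b**2 - 1.0 / a**2
    if det == 0.0:
        raise ValueError("degenerate case a == b")
    y = (lam - R**2 / a**2) / det
    x = R**2 - y
    if x < 0.0 or y < 0.0:
        return 0                      # no real cross sections
    # each strictly positive square contributes a +- pair of roots
    return (1 if x == 0.0 else 2) * (1 if y == 0.0 else 2)

# with semiaxes 1 and 2 the count changes with the circle radius R:
print(intersections(1.0, 2.0, 1.0, 0.5))   # circle inside the ellipse: 0
print(intersections(1.0, 2.0, 1.0, 1.5))   # circle crosses the ellipse: 4
print(intersections(1.0, 2.0, 1.0, 3.0))   # circle outside the ellipse: 0
```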


8.4 A Simple Complex Eigenvalue Crosses the Imaginary Axis. Hopf Bifurcation

Our starting point is the set of differential equations (8.1.1). Again we assume
that for a certain control parameter α = α₀ a stable time-independent solution q₀
exists. We assume that when we change the control parameter, this time-independent solution can be extended and we again perform a stability analysis. We
first consider

ẇ = L w ,   (8.4.1)

which is identical with (8.1.7). Inserting the usual hypothesis

w(t) = e^{λt} v   (8.4.2)

into (8.4.1), we are led to the eigenvalue problem

L v = λ v .   (8.4.3)


It is assumed that the eigenvalue with the biggest real part, which we denote by
λ₁, is nondegenerate and complex,

λ₁ = λ₁′ + iλ₁″ ,  where   (8.4.4)

λ₁″ ≠ 0 .   (8.4.5)

We shall use the notation λ_u instead of λ₁ again

λ_u = λ_u′ + iλ_u″ .   (8.4.6)

Furthermore, we assume that

Re{λ_j} ≤ C₀ < 0 ,  j = 2, … ,   (8.4.7)

which allows us to use the slaving principle for small enough u (which is a new
notation for ξ₁).
Because (8.4.4) is a complex quantity, it is plausible that u is complex.
Exhibiting only the terms up to 3rd order in u, the order parameter equation for
u reads

it = (A~ + iJ...~')U + A~:~uU2 + 2A~:)uu'uu* + A~:)u'u' U*2


+ BuuuuU3 + 3 Buuuu'u . uu* + 3Buuu ' u.UU*2
+ B uu ' u' u.U*3 + rest. (8.4.8)

In the following we shall focus our attention on important special cases for
(8.4.8) which occur repeatedly in many applications.


We start with the case (λ_u′, λ_u″ real)

u̇ = (λ_u′ + iλ_u″)u − b u|u|² ,   (8.4.9)

where b is assumed real and

b ≡ −B_{uuuu*} > 0 .   (8.4.10)

Equation (8.4.9) can be readily solved by means of the hypothesis

u(t) = r(t) exp[iψ(t) + iλ_u″ t] ,  r ≥ 0 ,  ψ real .   (8.4.11)

Inserting it into (8.4.9) we obtain

ṙ + (iψ̇ + iλ_u″) r = (λ_u′ + iλ_u″) r − b r³   (8.4.12)

or, after separation of the real and imaginary parts,

ψ̇ = 0 ,   (8.4.13)

ṙ = λ_u′ r − b r³ .   (8.4.14)

Equation (8.4.14) is identical with (8.2.12), with the only difference being that r
must be a positive quantity. Therefore the stationary solutions read

r₀ = 0  for λ_u′ < 0 ,   (8.4.15)

r₀ = 0 ,  r₀ = +√(λ_u′/b)  for λ_u′ ≥ 0 .   (8.4.16)

Because (8.4.13) holds for all times, the transient solution can be readily found
by solving (8.4.14) [1].
The stationary solution has the form

u = r₀ exp(iψ₀ + iλ_u″ t) .   (8.4.17)

It shows that (for r₀ > 0) the total system undergoes a harmonic oscillation or,
when we plot u in the phase plane consisting of its real and imaginary parts as
coordinates, we immediately find a limit cycle. The transient solution tells us that
all points of the plane converge from either the outside or the inside towards this
limit cycle. An interesting modification arises if the constant b occurring in
(8.4.9) is complex

b = b′ + ib″ ,  b′, b″ real .   (8.4.18)

We assume

b′ > 0   (8.4.19)


and the equation we have to study reads

u̇ = (λ_u′ + iλ_u″)u − (b′ + ib″) u|u|² .   (8.4.20)

For its solutions we make the hypothesis

u = r(t) e^{iφ(t)} ,  r ≥ 0 ,  φ real .   (8.4.21)

Inserting (8.4.21) into (8.4.20) and separating real and imaginary parts we find

ṙ = λ_u′ r − b′ r³   and   (8.4.22)

φ̇ = λ_u″ − b″ r² .   (8.4.23)

These are two coupled differential equations, but the type of coupling is very
simple. We may first solve (8.4.22) for r(t) in the stationary or the transient state
exactly. Then we may insert the result for r2 into (8.4.23) which can now be
solved immediately. We leave the discussion of the transient case as an exercise
for the reader and write down the solution for the steady state

φ = (λ_u″ − b″ r₀²) t + φ₀ .   (8.4.24)

This is an important result because it shows that the frequency of the oscilla-
tion depends on the amplitude. Thus, in the case of Hopf bifurcation we shall not
only find new values for r, i. e., a nonzero radius of the limit cycle, but also
shifted frequencies. The present case can be generalized in a very nice way to the
case in which (8.4.20) is replaced by the more general equation

(8.4.25)

Let us assume that f and g are real functions of their arguments and are of the
form of polynomials. Then again the hypothesis (8.4.21) can be made. Inserting
it into (8.4.25) and splitting the resulting equation into its real and imaginary
parts, we obtain

(8.4.26)

(8.4.27)

Equation (8.4.26) has been discussed before [compare (8.2.20)]. The additional
equation (8.4.27) can then be solved by a simple quadrature.
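The two central statements of this section, the limit-cycle radius (8.4.16) and the amplitude-dependent frequency (8.4.24), can be verified by direct numerical integration of (8.4.20). A minimal sketch follows; the parameter values are arbitrary illustrations, not taken from the text.

```python
import cmath, math

# lambda'_u, lambda''_u, b', b'' -- illustrative values only
lp, lpp = 1.0, 5.0
bp, bpp = 2.0, 0.5

def integrate(u, dt=1e-4, steps=200_000):
    """Explicit Euler integration of Eq. (8.4.20) in the complex plane."""
    for _ in range(steps):
        u += dt * ((lp + 1j * lpp) * u - (bp + 1j * bpp) * u * abs(u)**2)
    return u

r0 = math.sqrt(lp / bp)            # limit-cycle radius, Eq. (8.4.16)
u = integrate(0.1 + 0j)            # relax onto the limit cycle
print(abs(u), r0)                  # nearly equal

# the phase advance over a further span t = 0.5 gives the shifted
# frequency lambda''_u - b'' * r0**2 of Eq. (8.4.24)
u2 = integrate(u, dt=1e-4, steps=5_000)
dphi = cmath.phase(u2 / u)
print(dphi / 0.5, lpp - bpp * r0**2)
```

The measured rotation rate is smaller than λ_u″ by b″r₀², illustrating that the frequency on the limit cycle is renormalized by the amplitude.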


8.5 Hopf Bifurcation, Continued

Let us now consider a more general case than (8.4.25) which one frequently meets
in practical applications. In it the order parameter equation has the following
form

u̇ = (λ_u′ + iλ_u″)u − (b′ + ib″) u|u|² + f(u, u*) ,   (8.5.1)

where λ_u′, λ_u″, b′ and b″ are real constants.


We shall assume that f is of order u³ but no longer contains the term u|u|²,
which is exhibited explicitly in (8.5.1). The specialization of the general equation
(8.4.8) therefore consists in the assumption that the rhs of (8.5.1) does not
contain quadratic or bilinear terms in u, u*. By making the hypothesis

u(t) = r(t) e^{iφ(t)} ,  r ≥ 0 ,  φ real ,   (8.5.2)

we may transform (8.5.1) into the equations

ṙ = λ_u′ r − b′ r³ + g(r, e^{iφ}, e^{−iφ})   and   (8.5.3)

φ̇ = λ_u″ − b″ r² + h(r, e^{iφ}, e^{−iφ}) ,   (8.5.4)

where the first two terms on the rhs of (8.5.4) constitute the frequency w,

with the abbreviations

g = Re{e^{−iφ} f}   and   (8.5.5)

h = Im{e^{−iφ} f}/r .   (8.5.6)

In accordance with our specification of f we require that g is of order r³ and h of
order r². We may put

g = g₁ + g₂ ,   (8.5.7)

where g₁ is φ independent and of order r⁴ and g₂ fulfills the condition

∫₀^{2π} g₂ dφ = 0 .   (8.5.8)

Furthermore, we may put

h = h₁ + h₂ ,   (8.5.9)

where h₁ is of order r³ and h₂ fulfills the condition

∫₀^{2π} h₂ dφ = 0 .   (8.5.10)


We assume b′ > 0. While for λ_u′ < 0 the solution r = 0 is stable, it becomes
unstable for λ_u′ > 0, so that we make the hypothesis

r = r₀ + η   (8.5.11)

and require

r₀ = (λ_u′/b′)^{1/2} .   (8.5.12)

By this we may transform (8.5.3) into

(8.5.13)

Since we want to study the behavior of the solutions of (8.5.1) close to the
transition point λ_u′ = 0, the smallness of λ_u′ is of value. To exhibit this smallness
we introduce a smallness parameter ε,

(8.5.14)

Equation (8.5.12) suggests that we may put

(8.5.15)

To find appropriate terms of the correct order in ε on both sides of (8.5.13), we
rescale the amplitude by

(8.5.16)

and time by

(8.5.17)

Because of the dependence of g on r it follows that

(8.5.18)

Introducing the scaling transformations (8.5.14 -18) into (8.5.13) we readily find

(8.5.19)

after having divided (8.5.13) by ε³. Making the corresponding scaling transformations in (8.5.4) and putting

(8.5.20)


we transform (8.5.4) into

dφ/dτ = (λ_u″ − b″ ε² r̃²)/ε² + h(r̃, e^{iφ}, e^{−iφ}) .   (8.5.21)

This equation shows that instead of the original frequency w occurring in (8.5.4),
we now have to deal with the renormalized frequency w̃, where

w̃ = (λ_u″ − b″ ε² r̃²)/ε² .   (8.5.22)

Because ε is a small quantity we recognize that on the new time scale τ under consideration, w̃ is a very high frequency. We want to show that therefore g and h in
(8.5.19) and (8.5.21), respectively, can be considered as small quantities. From
(8.5.21) we may assume that in lowest approximation exp(iφ) ≈ exp(iw̃τ) is a
very rapidly oscillating quantity.
Now consider a function of such a quantity, e.g., g₂ or h₂. Since g₂ and h₂ are
functions of exp(iφ), they can both be represented in the form

h₂ = Σ_{m≠0} h₂,m e^{imw̃τ} .   (8.5.23)

Let us assume that g and h are small quantities. Then it is possible to make a perturbation expansion in just the same way as in the quasiperiodic case in Chap. 6.
As we have seen we then have to evaluate an indefinite integral over (8.5.23)
which acquires the form

(ε²/iw) Σ_{m≠0} (1/m) e^{imw̃τ} h₂,m .   (8.5.24)

If (8.5.23) converges absolutely so does (8.5.24) because of

(8.5.25)

However, the factor ε² in front of the sum in (8.5.24) can be arbitrarily small.
This means g₂ and h₂ can be treated as small quantities. In addition, according to
our assumptions above, the φ-independent terms g₁ and h₁ are still of higher
order and can again be considered as small quantities so that g and h act as small
perturbations.
In order to bring out the smallness of the terms g and h explicitly we introduce
a further scaling, namely by

g = ε g̃   and   (8.5.26)

h = ε h̃ ,   (8.5.27)


jointly with

η = ε η̃ .   (8.5.28)

This brings us from (8.5.19,21) to

(8.5.29)

(8.5.30)

respectively. Lumping all quantities which contain the smallness parameter ε
together, we may write (8.5.29, 30) in the final forms

dη̃/dτ = …   and   (8.5.31)

(8.5.32)

where w̃_u is given by

(8.5.33)

For reasons which will transpire immediately, we rewrite (8.5.31 and 32) in the
form

(8.5.34)

(8.5.35)

where g and h are 2π-periodic in φ [and must not be confused with g and h in
(8.5.3, 4)].
Equations (8.5.34, 35) are now in a form which allows us to make contact
with results in Sect. 6.2, where we presented a theorem of Moser that will now
enable one to cope with such equations [compare Eqs. (6.1.9, 10)].
It should be noted that the full power of that theorem is not needed here
because it deals with quasiperiodic motion whereas here we deal with periodic
motion only, which excludes the problem of small divisors. (We may treat
(8.5.34,35) more directly by perturbation expansions also.) But nevertheless we
shall use similar reasoning later on in the more complicated case of quasiperiodic
motion and therefore we present our arguments in the simpler case here.
Let us compare (8.5.34, 35) with (6.1.9, 10). There the aim was to introduce
counterterms D and Δ (and perhaps d) such that the quantities λ and w were not
changed by the iteration procedure.


Here our iteration is somewhat different because (8.5.34, 35) do not contain
these counterterms. Rather, we have to encounter the fact that due to the perturbations g and h, λ and w will be changed. It is rather simple to make contact
with the results of Sect. 6.1 nevertheless. To this end we put −λ̃_u = Λ_u [where
this Λ_u must not be confused with λ_u in (8.4.6)] and write

Λ_u = Λ_u − D + D ,  where Λ_u − D ≡ Λ_r ,   (8.5.36)

and similarly

w_u = w_u − Δ + Δ ,  where w_u − Δ ≡ w_r .   (8.5.37)
Introducing (8.5.36, 37) into (8.5.34, 35), respectively, we cast these equations
into precisely the form required by Moser's theorem. This procedure is well
known from physics, namely from quantum field theory, where the quantities
with the index u, e.g., w_u, are referred to as "unrenormalized" quantities
whereas the quantities w_r, Λ_r are referred to as "renormalized". According to
Moser's theorem we may calculate D and Δ explicitly from (8.5.34, 35) provided
we use the decomposition (8.5.36, 37). Then D and Δ become functions of Λ_r, w_r
and ε so that we have the relations

Λ_r = Λ_u − D(Λ_r, w_r, ε) ,   (8.5.38)

w_r = w_u − Δ(Λ_r, w_r, ε) .   (8.5.39)

Here D and Δ are analytic in ε and by an inspection of the iteration process
described in Sect. 6.3 we may readily convince ourselves that D and Δ are continuous and even differentiable functions of Λ_r, w_r. At least for small enough ε this
allows us to resolve equations (8.5.38, 39) for Λ_r and w_r

(8.5.40)

(8.5.41)

It is even possible to give a simple algorithm in order to do this inversion explicitly. Namely, we just have to substitute the quantities Λ_r and w_r in the arguments of D and Δ, which occur on the rhs of (8.5.38, 39), by the corresponding rhs in
lowest order of (8.5.38, 39) consecutively. Thus in lowest approximation we
obtain

(8.5.42)

For the sake of completeness we note that in the case of Λ_r = 0 a constant counterterm d may occur. In this case


(8.5.43)

(8.5.44)

hold. We merely note that d can be generated by choosing (8.5.44) instead of
(8.5.12).
After having described a procedure which allows us to determine the shifted,
i.e., renormalized, frequency (8.5.39), and the renormalized relaxation constant
(8.5.38), we now turn to study the form of the solutions η and φ. Instead of
(8.5.34, 35), we studied a general class of equations of which (8.5.34, 35) are a
special case. Those equations referred to the phase angles φ and real quantities ξ.
In Sect. 6.3 we showed that the corresponding equations for φ and ξ can be cast
into a simple form by making the transformations

φ = ψ + εu(ψ, ε)   and   (8.5.45)

ξ = χ + εv(ψ, ε) + εV(ψ, ε)χ .   (8.5.46)

In Sect. 6.3 we also described an iteration procedure by which u, v and V
could be constructed explicitly. In particular these functions were 2π-periodic
in ψ. The resulting equations could be written in the form

ψ̇ = w + O(χ)   and   (8.5.47)

χ̇ = Λχ + O(χ²) ,   (8.5.48)

which allows for solutions of the form

ψ = wt   and   (8.5.49)

χ = χ₀ exp(Λt)   (8.5.50)

when we retained only the lowest orders in (8.5.47, 48). Inserting (8.5.49, 50) in
(8.5.45, 46) yields the explicit time dependence of φ and ξ in the form

φ = wt + εu(wt, ε)   and   (8.5.51)

ξ = χ₀ exp(Λt) + εv(wt, ε) + εV(wt, ε) χ₀ exp(Λt) .   (8.5.52)

The only step still to be performed is to specialize these results to the case of
our equations (8.5.34, 35). Making the identifications

(8.5.53)

(8.5.54)

(8.5.55)


w → w_r ,   (8.5.56)

our final result reads

φ = w_r t + εu(w_r t, ε)   (8.5.57)

η = χ₀ exp(Λ_r t) + εv(w_r t, ε) + εV(w_r t, ε) χ₀ exp(Λ_r t) ,  Λ_r < 0 .   (8.5.58)

It gives us the form of the solution close to a bifurcation point. From (8.5.58) we
may readily conclude that the solution is stable and relaxes towards a steady state
given by

η = εv(w_r t, ε) .   (8.5.59)

In particular it follows that an oscillation takes place at the renormalized frequency w_r. This approach goes considerably beyond conventional approaches
because it not only determines the bifurcation solution in the steady state, but
also allows us to discuss its behavior close to the steady state, i. e., its relaxation
properties. Such a treatment is particularly necessary when we wish to study the
stability, and beyond that, to take fluctuations into account (as we will show
later).

8.6 Frequency Locking Between Two Oscillators

As an example we shall consider the bifurcation starting from a focus in a case


where two complex eigenvalues acquire positive real parts. Here we have to deal
with two equations for the two complex order parameters u₁ and u₂. To illustrate
the basic ideas we write these equations in the form

u̇₁ = (λ₁ + iw₁)u₁ − b₁u₁|u₁|² + c₁u₂|u₂|² ,   (8.6.1)

u̇₂ = (λ₂ + iw₂)u₂ − b₂u₂|u₂|² + c₂u₁|u₁|² .   (8.6.2)

For their solution we make the hypothesis

u_j = r_j(t) e^{iφ_j(t)} ,  j = 1, 2 .   (8.6.3)

In the following we shall use the decomposition

φ_j = w_j′ t + ψ_j ,   (8.6.4)

where the constant frequencies w_j′ are such that ψ_j, j = 1, 2, remains bounded for
all times. We shall call w_j′ the renormalized frequency. Inserting (8.6.3), e.g., in
(8.6.1) leads us to

(8.6.5)


which can be split into its real and imaginary parts, respectively,

(8.6.6)

(8.6.7)

A similar equation arises for r₂, φ₂. Let us assume that c₁ is a small quantity
so that in a first approximation the corresponding term can be neglected in
(8.6.6). This allows us to find in a good approximation the stationary solution
r₁,₀ by

r₁,₀ = (λ₁/b₁)^{1/2} .   (8.6.8)

Similarly we assume that a corresponding time-independent solution r₂,₀ can be
found.
Let us now study (8.6.7) and its corresponding equation for φ₂ under the
assumption (8.6.8) and its corresponding one. In order to discuss the corresponding equations
ing equations

and (8.6.9)

(8.6.10)

we introduce a new variable

Ψ = φ₂ − φ₁   (8.6.11)

and the frequency difference

Ω = w₂ − w₁ .   (8.6.12)

Subtracting (8.6.9) from (8.6.10) we obtain

Ψ̇ = Ω − a sin Ψ   with   (8.6.13)

(8.6.14)

Equation (8.6.13) can be readily solved by separation of the variables, giving

t = ∫_{Ψ₀}^{Ψ} dΨ′/(Ω − a sin Ψ′) .   (8.6.15)

In it Ψ₀ denotes the initial value of Ψ at time t = 0. The behavior of the integral
on the rhs of (8.6.15) is entirely different depending on the size of a and Ω. If

|Ω| > a ,   (8.6.16)


the integrand in (8.6.15) never diverges. In particular, we may expand the denominator into a power series with respect to a. In that case the integral can be
represented in the form

t = (1/Ω)(Ψ − Ψ₀) + small pulsations .   (8.6.17)

Therefore, aside from small pulsations we find for (8.6.15) the relation

Ψ = Ψ₀ + Ωt .   (8.6.18)

In view of (8.6.11, 12) and the decomposition (8.6.4), (8.6.18) implies

(w₂′ − w₁′)t + ψ₂ − ψ₁ = Ψ₀ + (w₂ − w₁)t   or   (8.6.19)

w₂′ − w₁′ = w₂ − w₁ .   (8.6.20)

This means that the renormalized frequencies w_j′ retain the same distance as the
frequencies w_j without nonlinear interaction. Quite different behavior occurs,
however, if the condition

|Ω| ≤ a   (8.6.21)

is fulfilled. Then the integral diverges at a finite Ψ or, in other words, t = ∞ is
reached for that finite Ψ. Perhaps aside from transients, the solution of the differential equation (8.6.13) reads

Ψ = arcsin(Ω/a) = const ,   (8.6.22)

which in view of (8.6.11 and 4) implies

φ₂ − φ₁ = (w₂′ − w₁′)t + ψ₂ − ψ₁ = const .   (8.6.23)

This relation can be fulfilled only if the renormalized frequencies w_j′ coincide

w₂′ = w₁′ ,   (8.6.24)

i.e., if the two frequencies acquire the same size or, in other words, if they are
locked together. Also the phases φ₂ and φ₁ are locked together by means of
(8.6.22).
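Both regimes of (8.6.13) are easily reproduced numerically. The sketch below integrates Ψ̇ = Ω − a sin Ψ once below and once above the locking threshold |Ω| = a; the numerical values are arbitrary illustrations, not taken from the text.

```python
import math

def final_psi(Omega, a, psi0=0.0, dt=1e-3, steps=100_000):
    """Explicit Euler integration of Psi' = Omega - a*sin(Psi), Eq. (8.6.13)."""
    psi = psi0
    for _ in range(steps):
        psi += dt * (Omega - a * math.sin(psi))
    return psi

# |Omega| < a: locking; Psi settles at arcsin(Omega/a), Eq. (8.6.22)
print(final_psi(0.5, 1.0), math.asin(0.5))

# |Omega| > a: no locking; Psi grows without bound, i.e., the
# renormalized frequencies stay distinct, Eq. (8.6.20)
print(final_psi(2.0, 1.0))
```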
It should be noted that such frequency locking effects can occur in far more
general classes of equations for which (8.6.1,2) are only a very simple example.
Exercise. Treat three coupled oscillators and show that frequency locking may
occur between the frequency differences w₂ − w₁ and w₃ − w₂ provided

Ω = (w₂ − w₁) − (w₃ − w₂) = 2w₂ − w₁ − w₃   (8.6.25)


is small enough.
Take as an example the equations

u̇₁ = (λ′ + iw₁)u₁ − b₁u₁|u₁|² + c₁u₂²u₃* ,
u̇₂ = (λ′ + iw₂)u₂ − b₂u₂|u₂|² + c₂u₂*u₁u₃ ,   (8.6.26)
u̇₃ = (λ′ + iw₃)u₃ − b₃u₃|u₃|² + c₃u₂²u₁* .

Hint: use the same decomposition as (8.6.3, 4) and introduce as a new variable

Ψ = 2φ₂ − φ₁ − φ₃ .   (8.6.27)

Form a suitable linear combination of the equations for φ_j which results in an
equation for (8.6.27).

8.7 Bifurcation from a Limit Cycle

In the preceding sections we learned that a system which was originally quiescent
can start a coherent oscillation or, in other words, its trajectory in the corresponding multi-dimensional space of the system lies on a limit cycle. When we
change the external control parameter further it may happen that the motion on
such a limit cycle becomes unstable and we wish to study some of the new kinds
of motion which may evolve.
We first perform a general analysis allowing us to separate the order parameters from the slaved mode amplitudes and then we shall discuss a number of
typical equations for the resulting order parameters. We shall confine our analysis to autonomous systems described by nonlinear evolution equations of the
form

q̇ = N(q, α) ,   (8.7.1)

where q(t) is, in general, a multi-dimensional vector describing the state of the
system. We assume that we have found the limit cycle solution of (8.7.1) for a
certain range of the control parameter

q = q₀(t) .   (8.7.2)

We note that farther away from the solution (8.7.2) other possible solutions of
(8.7.1) for the same control parameter may exist, but these shall not be considered.
We now change the value of the control parameter α and assume that q₀ can
be continued into this new region. Because of the assumed periodicity we require

q₀(t + T) = q₀(t) ,   (8.7.3)


where the period T may depend on α,

T = T(α) .   (8.7.4)

In order to study the stability of (8.7.2) we make the usual hypothesis

q(t) = q₀(t) + w(t)   (8.7.5)

and insert it into (8.7.1). This leaves us with

q̇₀(t) + ẇ(t) = N(q₀(t) + w(t)) .   (8.7.6)

Assuming that w is a small quantity we may linearize (8.7.6) with respect to w to


obtain

ẇ(t) = L w(t) .   (8.7.7)

In it the matrix

L = (Lij) (8.7.8)

has the elements

L_ij = ∂N_i/∂q_j |_{q = q₀(t)} .   (8.7.9)

Because N contains the periodic function q₀ and depends on the control parameter α, L depends on time t and control parameter α

L = L(t, α)   (8.7.10)

and has the same periodicity property as q₀, i.e.,

L(t + T, α) = L(t, α) .   (8.7.11)

According to Sect. 2.7 we know that the solutions of (8.7.7) can be written in the
form

w_k(t) = exp(λ_k t) v_k(t) .   (8.7.12)

If the λ_k's are nondegenerate, v_k is periodic with period T. Otherwise v_k may
contain a finite number of powers of t with periodic coefficients. In order not to
overload our presentation we shall assume that all v_k are periodic in time though
it is not difficult to extend the following elimination scheme to the more general
case. When we use (8.7.1) for q₀,

q̇₀(t) = N(q₀(t)) ,   (8.7.13)


and differentiate it with respect to time on both sides, we readily obtain

(d/dt) q̇₀(t) = L(t) q̇₀(t) ,   (8.7.14)

which by comparison with (8.7.7) reveals that one of the solutions of (8.7.7) is
identical with q̇₀

w₁(t) = q̇₀(t) .   (8.7.15)
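The statement (8.7.14, 15), namely that the velocity q̇₀ of a trajectory solves the linearized equation ẇ = L w, holds along any trajectory of (8.7.1) and can be checked numerically. In the sketch below the van der Pol oscillator serves as the vector field N; this choice, and the value of μ, are illustrative assumptions only and do not come from the text.

```python
mu = 1.0   # illustrative parameter of the assumed van der Pol field

def N(q):
    """Vector field q' = N(q): van der Pol oscillator (illustrative choice)."""
    x, y = q
    return (y, mu * (1.0 - x * x) * y - x)

def L_apply(q, w):
    """Jacobian of N at q applied to w -- the matrix L(t) of Eq. (8.7.7)."""
    x, y = q
    return (w[1], (-2.0 * mu * x * y - 1.0) * w[0] + mu * (1.0 - x * x) * w[1])

dt, steps = 1e-5, 300_000
q = (2.0, 0.0)
w = N(q)                          # variational solution started as q'(0)
for _ in range(steps):
    nq, lw = N(q), L_apply(q, w)
    q = (q[0] + dt * nq[0], q[1] + dt * nq[1])   # advance the trajectory
    w = (w[0] + dt * lw[0], w[1] + dt * lw[1])   # advance w' = L w
print(w)
print(N(q))                       # agrees with w up to the Euler error
```

After integrating the trajectory and the variational equation side by side, the vector w still equals the instantaneous velocity N(q), confirming (8.7.15).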

Evidently w₁ always points along the tangent of the trajectory at each point q₀
at time t. Because the solutions of (8.7.7) span a complete vector space we find
that at each time t this space can be spanned by a vector tangential to the trajectory and n − 1 additional vectors which are transversal to the trajectory. Since we
wish to study new trajectories evolving smoothly from the present limit cycle, we
shall construct a coordinate system based on the coordinates with respect to the
old limit cycle. We expect that due to the nonlinearities contained in (8.7.6) the
new trajectory will shift its phase with respect to the original limit cycle, and that
it will acquire a certain distance in changing directions away from the limit cycle.
Therefore we shall construct the new solution q(t) for the evolving trajectory
using the phase angle Φ(t) and deviations ξ_k(t) away from the limit cycle. Thus,
our hypothesis reads

q(t) = q₀(t + Φ(t)) + Σ′_k ξ_k(t) v_k(t + Φ(t)) ,   (8.7.16)

where the sum over k is denoted by W.
The prime at the sum means that we shall exclude the solution (8.7.15), i.e.,

k ≠ 1 ,   (8.7.17)

because this direction is taken care of by the phase angle Φ(t). In (8.7.16) W is a
function of ξ_k(t), t, and Φ(t), i.e.,

W = W(ξ, t + Φ(t)) .   (8.7.18)

Inserting hypothesis (8.7.16) into (8.7.1) we readily obtain [omitting the
argument t + Φ(t) in q₀, q̇₀, v_k, v̇_k]¹

q̇₀ + q̇₀Φ̇ + Σ′_k ξ̇_k v_k + Σ′_k ξ_k v̇_k (1 + Φ̇) = N(q₀ + W) ,   (8.7.19)

where the rhs can be expanded into a power series of the vector W

N(q₀ + W) = N(q₀) + L W + H(W) .   (8.7.20)

¹ The dot (˙) means derivative with respect to the total argument, t + Φ, of q and v, or with respect to t in the case of Φ, respectively.


In it N and L have been defined before, whereas H(W) can be imagined as a
power series which starts with second-order terms

H(W) = N^(2): W: W + … .   (8.7.21)


Because of (8.7.18), (8.7.21) can be considered as a function of ξ and t + Φ, i.e.,

H(W) = H(ξ, t + Φ) = Σ_{kk'} ξ_k ξ_{k'} N^(2): v_k: v_{k'} + … .   (8.7.22)

We now use the fact that (8.7.2) fulfills (8.7.1), and also use (8.7.7) and the de-
composition (8.7.12) to obtain

q̇₀Φ̇ + Σ'_k ξ̇_k v_k + Σ'_k ξ_k v̇_k Φ̇ = Σ'_k λ_k ξ_k v_k + H(W) .   (8.7.23)

In Sect. 2.5 it was shown that we may construct a bi-orthogonal set of functions
v̄_k (k = 1, 2, ...), [compare (2.5.16)]. Multiplying (8.7.23) with v̄₁ and using the
orthogonality property we readily obtain

Φ̇ + Σ'_k ξ_k (v̄₁ v̇_k) Φ̇ = (v̄₁ H) .   (8.7.24)

We may write explicitly

(v̄₁ H) ≡ H̃₁ = Σ'_{kk'} ξ_k ξ_{k'} (v̄₁ N⁽²⁾: v_k: v_{k'}) + ... ,   (8.7.25)

where the bracket under the sum is abbreviated as A_{1kk'}(t + Φ(t)).

Similarly, using v̄_k (k = 2, 3, ...) we readily obtain from (8.7.23)

ξ̇_k = λ_k ξ_k − Σ'_{k'} ξ_{k'} (v̄_k v̇_{k'}) Φ̇ + (v̄_k H) ,   k, k' ≠ 1 ,   (8.7.26)

with the abbreviations (v̄_k v̇_{k'}) ≡ a_{kk'}(t + Φ) and (v̄_k H) ≡ H̃_k, where

(8.7.27)

We may readily solve (8.7.24) for Φ̇, which yields

Φ̇ = [1 + Σ'_k ξ_k a_{1k}(t + Φ)]⁻¹ H̃₁(ξ, t + Φ) .   (8.7.28)

In this way we may also eliminate Φ̇ from (8.7.26):

ξ̇_k = λ_k ξ_k − Σ'_{k'} ξ_{k'} a_{kk'}(t + Φ) [1 + Σ'_{k''} ξ_{k''}(t) a_{1k''}(t + Φ)]⁻¹ H̃₁(ξ, t + Φ)
      + H̃_k(ξ, t + Φ) .   (8.7.29)
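In finite dimensions the bi-orthogonal set v̄_k used in the projections above can be generated simply by inverting the matrix whose columns are the v_k — a minimal numerical sketch (the random basis is our own illustration):

```python
import numpy as np

# For a "frozen" basis v_1..v_n (columns of V) the adjoint set v̄_1..v̄_n
# (rows of V^{-1}) satisfies the orthogonality (v̄_l, v_k) = delta_lk
# that is used to project the equation of motion onto the individual modes.
rng = np.random.default_rng(0)
V = rng.normal(size=(4, 4))        # a generic, hence invertible, basis
Vbar = np.linalg.inv(V)            # its rows are the bi-orthogonal vectors
assert np.allclose(Vbar @ V, np.eye(4))   # (v̄_l, v_k) = delta_lk
```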

246 8. Nonlinear Equations. Qualitative Macroscopic Changes

We remind the reader that H̃_k, k = 1, 2, ..., depends on t + Φ via q₀, q̇₀ ≡ v₁,
and v_k. Let us study this behavior in more detail. Because q₀ is a smooth and
periodic function we may expand it into a Fourier series

q₀(t) = Σ_m c_m e^{imωt} ,  where   (8.7.30)

ω = 2π/T .   (8.7.31)

Similarly we may write

v_k(t) = Σ_m d_m e^{imωt} .   (8.7.32)

Because of the assumed form of H̃_k we may put

H̃_k(t + Φ) = Σ_m C_m^{(k)} e^{imω(t+Φ)} .   (8.7.33)

In particular, in order to make contact with theorems presented earlier, especially
in Sect. 6.2, we introduce a new variable φ(t) by

φ(t) = ω[t + Φ(t)] .   (8.7.34)

Differentiating (8.7.34) with respect to time yields

φ̇ = ω + ωΦ̇ ,   (8.7.35)

into which we insert the rhs of (8.7.28). This yields

φ̇ = ω + f(ξ, φ) ,   (8.7.36)

where we have used the abbreviation

(8.7.37)

Similarly, we find from (8.7.29)

ξ̇_k = λ_k ξ_k + g_k(ξ, φ) ,   (8.7.38)

where we have used the abbreviation

(8.7.39)

We note that f and g_k are 2π-periodic in φ. Equation (8.7.38) can be cast into
vector notation in the form

(8.7.40)


Equations (8.7.36, 40) are still exact and represent the final result of this section.
We now study special cases.
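Before turning to the special cases, the content of the phase equation (8.7.36) can be made concrete with a toy choice of f — here an Adler-type term f = a·sin φ with |a| < ω, our own illustration, for which the renormalized mean frequency is known in closed form:

```python
import math

# Model phase equation phi' = om + a*sin(phi), |a| < om (our toy choice of f);
# its mean (renormalized) frequency is sqrt(om^2 - a^2).
om, a = 2.0, 1.0

def rhs(p):
    return om + a*math.sin(p)

dt, T = 1e-2, 1000.0
phi = 0.0
for _ in range(int(T/dt)):          # classical 4th-order Runge-Kutta
    k1 = rhs(phi); k2 = rhs(phi + 0.5*dt*k1)
    k3 = rhs(phi + 0.5*dt*k2); k4 = rhs(phi + dt*k3)
    phi += dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

om_r = phi/T                        # numerically measured mean frequency
print(om_r, math.sqrt(om**2 - a**2))  # both close to 1.732
```

This illustrates the frequency shift and the superimposed phase oscillations mentioned below: φ(t) = ω_r t plus a bounded 2π-periodic part.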

8.8 Bifurcation from a Limit Cycle: Special Cases

8.8.1 Bifurcation into Two Limit Cycles


Remember that in our present notation λ₁ = 0. We assume that we have labeled
the other eigenvalues in such a way that the one with the biggest real part carries
the index 2. We assume that λ₂ is real and nondegenerate,

λ₂ real ,   λ₂ ≷ 0 ,   (8.8.1)

and that

Re{λ_k} ≤ C < 0 ,   k = 3, 4, ... .   (8.8.2)

For small enough ξ₂ we may invoke the slaving principle of Chap. 7, which
allows us to do away with all the slaved variables with k = 3, 4, ... and to derive
equations for the order parameters alone,

ξ₂(t) ≡ u(t) (real)  and  φ(t):  order parameters .   (8.8.3)

According to the slaving principle the original set of (8.7.36, 40) reduces to the
equations

u̇ = λ_u u + g(u, φ) ,   (8.8.4)

φ̇ = ω + f(u, φ) .   (8.8.5)

In it g and f can be constructed explicitly as power series (of u) whose coefficients
are 2π-periodic functions of φ. We may decompose g according to

g(u, φ) = Σ_{j≥2} g_j(φ) u^j .   (8.8.6)

Each coefficient g_j may be decomposed again into a constant term and another
one which is 2π-periodic in φ,

g_j(φ) = g_{j,1} + g_{j,2}(φ) ,   (8.8.7)

so that

∫₀^{2π} dφ g_{j,2}(φ) = 0   (8.8.8)


holds. A similar decomposition can be used for f,

(8.8.9)

(8.8.10)

with the property

∫₀^{2π} dφ f_{j,1}(φ) = 0 .   (8.8.11)

We shall first focus our attention on the special case

g₂ = f₁ = 0 ,   g₃ ≡ −b < 0 .   (8.8.12)

From Sect. 8.4 it will be clear that the resulting equations (8.8.4, 5) acquire a
form treated there, with one difference, however, which we shall explain now.
When we make the hypothesis

u = u₀ + η   (8.8.13)

and require

λ_u u₀ − b u₀³ = 0 ,  u₀ ≠ 0 ,   (8.8.14)

we find the two solutions

u₀ = ± √(λ_u/b) ,   (8.8.15)

which are distinguished by the sign in front of the square root. We note that in
the case of Hopf bifurcations we must choose the positive sign because r₀ has to
be positive. Here, however, we take both signs. Either choice of u₀ takes
(8.8.4, 5) [under the assumption of (8.8.12)] to equations which we know from
Sect. 8.4, so that we need not treat this case here again explicitly. We can rather
discuss the final result.

To illustrate the resulting solution we consider the case in which the original
limit cycle lies in a plane. Then the two solutions (8.8.15) mean that two limit
cycles evolve, one to the outer and one to the inner side of the limit cycle. According to the analysis of Sect. 8.7, the phase motion may suffer a shift of frequency with respect to
the original limit cycle. Furthermore, the phase may have superimposed on its
steady motion oscillations with that renormalized frequency. Also deviations
from the new distance u₀ may occur, which are in the form of oscillations at the
renormalized frequency ω_r. If the limit cycles lie in a higher-dimensional space,
the newly evolving two trajectories need not lie in a plane but may be deformed
curves.
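The two branches (8.8.15) can be seen numerically by integrating the amplitude part of (8.8.4) in the special case (8.8.12), dropping the small periodic remainder (the parameter values below are our own illustrative choice):

```python
import math

# udot = lam_u*u - b*u**3: initial conditions of either sign run into the
# corresponding branch u0 = +-sqrt(lam_u/b) of (8.8.15).
lam_u, b, dt = 1.0, 2.0, 1e-2
u0 = math.sqrt(lam_u/b)              # = 0.7071..., cf. (8.8.15)

finals = []
for sign in (+1.0, -1.0):
    u = sign*1e-3                    # tiny deviation from the unstable state u = 0
    for _ in range(5000):            # forward Euler, adequate for this sketch
        u += dt*(lam_u*u - b*u**3)
    finals.append(u)
print(finals)                        # -> roughly [+0.7071, -0.7071]
```

Either sign of the initial deviation selects one of the two bifurcating limit cycles.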


8.8.2 Period Doubling


Let us assume a situation similar to that in the preceding section, with the only
difference that λ₂ is complex,

λ₂ = λ₂' + iω₂ .   (8.8.16)

We shall admit that also ω₂ = 0 is included, provided that λ₂ remains twofold
degenerate. After having applied the slaving principle we again find an order
parameter equation which we assume to be in the specific form

(8.8.17)

For simplicity we assume b real, though this is not essential. There may be
additional terms in (8.8.17), but it is assumed that these terms can be treated as a
perturbation. It is assumed with (8.8.17) that the equation for φ simply reads

φ̇ = ω .   (8.8.18)

We shall show that (8.8.17) allows for a solution which oscillates at half the
frequency or, in other words, whose period has doubled with respect to the
original limit cycle. To this end we make the hypothesis

u = y e^{iωt/2} ,   (8.8.19)

where y is a complex variable. Inserting (8.8.19) into (8.8.17) yields

(8.8.20)

It allows for a time-independent stable solution y₀ = 0 for λ₂' < 0, and for the
solutions (besides y₀ = 0)

y₀² = [λ₂' + i(ω₂ − ω/2)]/b = A₀ e^{iψ₀} ,  i.e.,

y₀ = ± √A₀ e^{iψ₀/2}   for λ₂' > 0 .   (8.8.21)

One may readily show that the solutions (8.8.21) are stable (provided λ₂' > 0).
Thus we have shown that a stable solution with doubled period exists. Of course,
in the general case one now has to study higher-order terms and show that they
can be treated as perturbations.
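The pair of period-doubled amplitudes in (8.8.21) are just the two square roots of [λ₂' + i(ω₂ − ω/2)]/b, which is easily verified numerically (the sample parameter values are our own):

```python
import cmath

lam2p, om2, om, b = 0.5, 1.3, 2.0, 1.0   # illustrative values
rhs = (lam2p + 1j*(om2 - om/2))/b        # = y0**2, cf. (8.8.21)
A0, psi0 = abs(rhs), cmath.phase(rhs)    # modulus and phase of the rhs

for sign in (+1, -1):
    y0 = sign*cmath.sqrt(A0)*cmath.exp(1j*psi0/2)
    assert abs(y0**2 - rhs) < 1e-12      # both signs square to the same rhs
```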

8.8.3 Subharmonics


We now want to show that period doubling is just a special case of the generation
of subharmonics, in which the frequency is just the nth part of the original
frequency ω of the limit cycle. To substantiate our claim we assume that the order
parameter equation has the form

(8.8.22)

Making the hypothesis

u = y e^{iωt/n} ,   (8.8.23)

where y is assumed to be constant, we readily obtain

y(λ − iω/n) − b y^{n+1} = 0 ,   λ = λ_u ,   (8.8.24)

which allows for the solutions

y₀ = 0   or   (8.8.25)

y₀^n = (λ − iω/n)/b ≡ A₀ e^{iψ₀} ,   (8.8.26)

or, after taking the nth roots,

y = exp(2πim/n) · A₀^{1/n} e^{iψ₀/n} ,   m integer .   (8.8.27)

By linear stability analysis we can show that (8.8.23) with (8.8.27) is a stable solution provided λ_u > 0. While here we have assumed that λ is real and positive, one
may readily extend this analysis to a complex λ and complex b, provided the real
part of λ is positive.
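The n roots (8.8.27) can be checked directly against the stationarity condition (8.8.24) (n and the remaining parameter values are our own illustrative choices):

```python
import cmath

lam, om, b, n = 0.8, 2.0, 1.0, 3
target = (lam - 1j*om/n)/b                    # = y0**n, cf. (8.8.26)
A0, psi0 = abs(target), cmath.phase(target)

for m in range(n):                             # the n distinct n-th roots
    y = cmath.exp(2j*cmath.pi*m/n) * A0**(1.0/n) * cmath.exp(1j*psi0/n)
    # each root satisfies y*(lam - i*om/n) - b*y**(n+1) = 0, cf. (8.8.24)
    assert abs(y*(lam - 1j*om/n) - b*y**(n+1)) < 1e-12
```

All n roots correspond to the same subharmonic orbit, shifted by multiples of its period.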
Let us briefly discuss examples in which the other remaining terms omitted in
our above analysis can be treated as perturbations. Let us consider an equation of
the form

(8.8.28)

where the perturbative terms are of order u⁵ or higher. Making the hypothesis

(8.8.29)

we find

(8.8.30)

From a comparison between this equation and (8.5.1), it transpires that the further
analysis is similar to that in Sect. 8.5. Thus we must make the transformation

y = y₀ + η   (8.8.31)

with y₀ determined by

(λ_u − iω/2) − b y₀² = 0 .   (8.8.32)

We leave the further discussion to the reader.


We now present an example indicating that there are cases in which it is not at
all evident from the beginning which terms can be considered as perturbations.
Some kind of bifurcation may occur in which, if one solution is singled out, one
class of terms can be considered as small, but if another class of solutions is
singled out, another class of terms can be considered as small. Let us consider an
example,

(8.8.33)

where λ_u and b are assumed real. Then in lowest approximation we may choose

(8.8.34)

or

(8.8.35)

as a solution. Depending on the choice (8.8.34 or 35), (8.8.33) is cast into the
form

ẏ = (λ_u + iω/2) y − b y³ (1 + e^{±iφ})   (8.8.36)


with the additional equation
ip=2w. (8.8.37)

These equations are the same type as (8.5.3, 4), already discussed in detail (here r
is complex). There, by suitable scaling, we have put in evidence that the terms
containing exp( ± i tp) can be considered as small perturbations. Thus (8.8.33)
allows for two equivalent solutions, (8.8.34, 35) in whichy is a constant on which
small amplitude oscillations at frequencies m . wl2 are superimposed. In physics
the neglect of the term exp(± itp) compared to 1 (where tp = 2wt) is known as
"rotating wave approximation" .
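The rotating wave approximation can be illustrated with a simple real toy model (our own construction, not the text's equation): for ω much larger than λ the rapidly oscillating part of the cubic term averages out, and the amplitude settles near the fixed point of the averaged equation:

```python
import math

# Toy model rdot = lam*r - b*r**3*(1 + cos(2*om*t)); the RWA drops the cos
# term and predicts the stationary amplitude sqrt(lam/b) = 1 for these values.
lam, b, om, dt = 1.0, 1.0, 20.0, 1e-3

def rhs(r, t):
    return lam*r - b*r**3*(1.0 + math.cos(2*om*t))

r, t, tail = 0.1, 0.0, []
for step in range(40000):                 # RK4, integrate up to t = 40
    k1 = rhs(r, t); k2 = rhs(r + 0.5*dt*k1, t + 0.5*dt)
    k3 = rhs(r + 0.5*dt*k2, t + 0.5*dt); k4 = rhs(r + dt*k3, t + dt)
    r += dt*(k1 + 2*k2 + 2*k3 + k4)/6.0
    t += dt
    if step >= 39000:
        tail.append(r)                    # keep the last time unit of the run

r_mean = sum(tail)/len(tail)
print(r_mean)     # close to the RWA prediction sqrt(lam/b) = 1
```

The full solution performs small fast oscillations of relative size of order λ/ω around the RWA value.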

8.8.4 Bifurcation to a Torus


In Sect. 8.8.1 we discussed the case in which λ₂, i.e., the eigenvalue with the
biggest real part of all eigenvalues, is real. Here we assume that this λ₂ is complex,

λ₂ = λ₂' + iω₂ .   (8.8.38)

Again we assume that for all other eigenvalues

Re{λ_k} ≤ C < 0 ,   k = 3, 4, ... .   (8.8.39)

Invoking the slaving principle, we have to study an order parameter equation for
u alone. In a first preparatory step we assume that this order parameter equation
has the form

u̇ = (λ₂' + iω₂) u − b u|u|² ,   (8.8.40)


where for simplicity we assume b real and independent of an additional phase
angle φ. We are familiar with this type of equation from the discussion of the Hopf
bifurcation (Sect. 8.4). We know that the solution can be written in the form

(8.8.41)

where r obeys the equation

ṙ = λ₂' r − b r³ ,   (8.8.42)

which for λ₂' > 0 possesses a nonvanishing stable stationary solution r = r₀. To
visualize the meaning of the corresponding solution we remind the reader that the
solution of the total system q can be written in the form

q = q₀(t + Φ) + u(t) v₂(t + Φ) + u*(t) v₂*(t + Φ) + ... ,   (8.8.43)

where the dots indicate the amplitudes of the slaved variables, which are smaller
than those which are explicitly exhibited. Here v₂ and its complex conjugate are
eigensolutions to (8.7.7) with (8.7.12). Inserting (8.8.41) into (8.8.43) we find

q = q₀ + cos(ω₂t)·Re{v₂} + sin(ω₂t)·Im{v₂} .   (8.8.44)

The real part of v₂ and the imaginary part of v₂ are linearly independent and
present two vectors which move along the original limit cycle. The solution
(8.8.44) indicates a rotating motion in the frame of these two basic vectors Re{v₂}
and Im{v₂}, i.e., the end point of the vector spirals around the limit cycle while
the origin of the coordinate system moves along q₀. If the two frequencies ω₂ and
ω₁ ≡ ω = φ̇ ≡ φ̇₁ are not commensurable, the trajectory spans a whole torus. Thus
in our present case we have a simple example of the bifurcation of a limit cycle to
a two-dimensional torus. In most practical applications the order parameter
equation for u will not have the simple form (8.8.40), but several complications
may occur. In particular, we may have additional terms which introduce a
dependence of the constant b on φ₁. A simple example is the equation
u̇ = (λ₂' + iω₂) u − b u|u|² − C(φ₁) u|u|² ,   φ₁ = ω₁t ,   (8.8.45)

where C is 2π-periodic in φ₁. Hypothesis (8.8.41) takes us from (8.8.45) to the
equations

(8.8.46)

and

(8.8.47)

which are of precisely the same form encountered previously in the case of Hopf
bifurcation. From this analysis we know that basically the bifurcation of the limit
cycle to a torus is still present.
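That incommensurate ω₁, ω₂ make the trajectory wind densely around the torus can be visualized by binning the two phases of (8.8.44) (the frequencies and grid size below are our own illustrative choices):

```python
import numpy as np

om1, om2 = 1.0, np.sqrt(2.0)           # incommensurate frequency pair
t = np.arange(0.0, 4000.0, 0.05)
p1 = (om1*t) % (2*np.pi)               # phase along the old limit cycle
p2 = (om2*t) % (2*np.pi)               # phase of the new oscillation

# fraction of cells of a 20x20 grid on the torus visited by the orbit
grid, _, _ = np.histogram2d(p1, p2, bins=20,
                            range=[[0, 2*np.pi], [0, 2*np.pi]])
coverage = np.mean(grid > 0)
print(coverage)                         # close to 1: essentially every cell is hit
```

A rational ratio ω₂/ω₁ would instead confine the orbit to a closed curve on the torus (frequency locking), leaving most cells empty.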


Things become considerably more complicated if the equation for the order
parameter u is of the form

(8.8.48)

and that for the phase angle φ ≡ φ₁ of the form

(8.8.49)

The functions g and f₁ are 2π-periodic in φ₁. The hypothesis

(8.8.50)

leads to the equations

(8.8.51)

(8.8.52)

to which we must add the equation

(8.8.53)

which stems from (8.8.49) and where f̃₁ (like g̃ and f̃₂) is 2π-periodic in φ₁, φ₂.
The discussion of the solutions of these equations is somewhat subtle, at least in
the general case, because a perturbation expansion with respect to g̃ and f̃₁, f̃₂
requires the validity of a KAM condition on ω₁, ω₂. Since this whole discussion
will turn out to be a special case of the discussion on bifurcations from a torus to
other tori, we shall postpone it to the end of Sect. 8.10.2.

8.9 Bifurcation from a Torus (Quasiperiodic Motion)

As in our preceding chapters we start from an equation of the form

q̇(t) = N(q(t), α)   (8.9.1)

with the by now familiar notation. We assume that (8.9.1), for a certain range of
control parameters α, allows quasiperiodic solutions q₀ which we may write in the
form

q₀(t, α) = Σ_m c_m e^{i m·ω t} ,   (8.9.2)

where we have used the abbreviation

m·ω = m₁ω₁ + ... + m_M ω_M ,   (8.9.3)


and where the m's run over all integers. As discussed in Sects. 1.12 and 8.8, such
a motion can be visualized as motion on a torus. We assume in the following that
the trajectories lie dense on that torus. We may parametrize such a torus by phase
angles φ_j, j = 1, ..., M, so that by use of these variables the vector r whose end
point lies on the torus can be written in the form

(8.9.4)

Because the trajectories lie dense on the torus¹ we may subject them to the initial
condition

(8.9.5)

This allows us to construct the solutions (8.9.2) which fulfill the initial condition
(8.9.5) explicitly by putting

q₀(t, α; Φ) = Σ_m c_m exp[i m·(ωt + Φ)] ,   (8.9.6)

where we have used the abbreviation

(8.9.7)

whereas the vector Φ is defined by

Φ = (Φ₁, ..., Φ_M) .   (8.9.8)

The first steps of our subsequent procedure are now well known from the preceding chapters. We assume that the form of the solution (8.9.6) still holds when we
increase the control parameter α further, in particular into a region where the old
torus loses its linear stability.

To study the stability we perform the linear stability analysis, considering the equation

ẇ(t) = L(t) w(t) ,   (8.9.9)

where the matrix

(8.9.10)

¹ More precisely speaking, we assume that with (8.9.2) as a solution of (8.9.1) also

q₀(t, α, Φ) = Σ_m c_m exp[i m·(ωt + Φ)]   (8.9.6)

is a solution to (8.9.1), where q₀ is assumed to be C^k (k ≥ 1) with respect to Φ, and k will be specified
later. For certain cases, discussed later, we require that q₀(t, α, Φ) is analytic with respect to Φ in a
certain domain.


is defined by

L_jk = ∂N_j(q, α)/∂q_k |_{q = q₀(t, α; Φ)} .   (8.9.11)

Because q₀ is given by (8.9.6), L can be cast into a similar form,

L_jk = L_jk(t, α, Φ) = Σ_m L_{jk;m} exp[i m·(ωt + Φ)] .   (8.9.12)

In the following we shall drop the index α. Clearly (8.9.12) is a quasiperiodic
function. We now refer to the results of Chap. 3, especially Sect. 3.7. There we
studied the general form of solutions of (8.9.9) with quasiperiodic coefficients,
showing that under certain conditions on the generalized characteristic exponents
and some more technical conditions the form of the solution of (8.9.9) can be
explicitly constructed, namely as

w_k(t) = e^{λ_k t} v_k(t) .   (8.9.13)

This form holds especially if the characteristic exponents (or generalized characteristic exponents) are all different from each other, if L is sufficiently often differentiable with respect to Φ_j, j = 1, ..., M, and if a suitable KAM condition is
fulfilled by the ω's. In this case, the v_k's are quasiperiodic functions of time,
whose dependence on φ or Φ, respectively, is of the general form of the rhs of
(8.9.6). Here we need a version of that theorem in which all λ_k are different from
each other except those for which λ_k = 0, whose v_k will be constructed now. To
this end we start from the equation
this end we start from the equation

q̇₀ = N(q₀) ,   (8.9.14)

which we differentiate on both sides with respect to Φ_l. On account of (8.9.11),
one immediately verifies the relation

(d/dt) (∂q₀/∂Φ_l) = L(t) (∂q₀/∂Φ_l) .   (8.9.15)

This relation tells us that we have found solutions to (8.9.9) given by

w_l = ∂q₀/∂Φ_l ,   l = 1, ..., M .   (8.9.16)

Since the lhs of (8.9.16) is quasiperiodic, we have thus found solutions in the form
(8.9.13) with λ_l = 0. We shall denote all other solutions to (8.9.9) by

w_k ,   k = M + 1, ... .   (8.9.17)

We now wish to find a construction for the solutions which bifurcate from
the old torus which becomes unstable. We do this by generalizing our considerations on the bifurcation of a limit cycle. We assume self-consistently that the new
torus or the new tori are not far away from the old torus, so that we may utilize
small vectors pointing from the old torus to the new torus. On the other hand, we
have to take into account the fact that when bifurcation takes place, the corre-
sponding points on the old trajectory and the new one can be separated arbi-
trarily far from each other when time elapses.
To take into account these two features we introduce the following coor-
dinate system. We utilize the phase angles as local coordinates on the old torus,
and introduce vectors v which point from each local point on the old torus to the
new point at the bifurcating new torus. The local coordinates which are trans-
versal to the old torus are provided by (8.9.17). These arguments lead us to the
following hypothesis for the bifurcating solutions (the prime on the sum means
exclusion of k = 1, 2, ..., M)

q(t) = q₀(t, Φ) + Σ'_k ξ_k v_k(t, Φ) ≡ q₀(t, Φ) + W ,   (8.9.18)

where we assume that

Φ = Φ(t)   (8.9.19)

is a function of time still to be determined. Also v_k is a function of time,

v_k(t, Φ(t)) .   (8.9.20)

Inserting (8.9.18) into (8.9.1) and performing the differentiations we readily
find

q̇₀ + Σ_l (∂q₀/∂Φ_l) Φ̇_l + Σ'_k ξ̇_k v_k + Σ'_{k,l} ξ_k (∂v_k/∂Φ_l) Φ̇_l + Σ'_k ξ_k (∂v_k/∂t)
   = N(q₀) + L Σ'_k ξ_k v_k + H(W) ,   (8.9.21)

where we have expanded the rhs into powers of W. In this way, H is given by an
expression of the form

H(W) = N⁽²⁾: Σ_k ξ_k v_k : Σ_{k'} ξ_{k'} v_{k'} + ... ,   (8.9.22)

where we have indicated higher-order terms by dots. It is, of course, a simple task
to write down such higher-order terms explicitly in the way indicated by the
second-order term.
We now make use of results of Sect. 2.5, where a set of vectors v̄ was constructed orthogonal to the solutions v of (8.9.13). Multiplying (8.9.21) by

v̄_l ,   l = 1, ..., M ,   (8.9.23)

we readily obtain


Φ̇_l + Σ'_j Σ_{l'} ξ_j (v̄_l, ∂v_j/∂Φ_{l'}) Φ̇_{l'} = (v̄_l H) ≡ H̃_l ,   (8.9.24)

where (v̄_l, ∂v_j/∂Φ_{l'}) ≡ a_{jll'},
while by multiplication of (8.9.21) by

v̄_k ,   k = M + 1, ... ,   (8.9.25)

we obtain

ξ̇_k = λ_k ξ_k − Σ'_j Σ_{l'} ξ_j (v̄_k, ∂v_j/∂Φ_{l'}) Φ̇_{l'} + (v̄_k H) ,   (8.9.26)

where (v̄_k, ∂v_j/∂Φ_{l'}) ≡ a_{jkl'} and (v̄_k H) ≡ H̃_k.
Introducing the matrix K with matrix elements

K_{ll'} = δ_{ll'} + Σ'_j ξ_j a_{jll'}   (8.9.27)

and taking Φ̇ and H̃ as column vectors, we may cast the set of equations (8.9.24)
into the form

K Φ̇ = H̃ .   (8.9.28)

Because we assume that ξ_j is a small quantity, we may immediately solve (8.9.28)
[cf. (8.9.24)] by means of

(8.9.29)

where the last term of this equation is just an abbreviation. Expressing Φ̇_{l'}, which
occurs in (8.9.26), by the rhs of (8.9.29), we transform (8.9.26) into

(8.9.30)

We note that t and Φ, on which the right-hand sides of (8.9.29, 30) depend, occur
only in the combination

φ_j = ω_j t + Φ_j .   (8.9.31)

Therefore we introduce φ_j as a new variable, which allows us to cast (8.9.29) into
the form

(8.9.32)

where we used the abbreviation

(8.9.33)


In a similar fashion we may cast equations (8.9.30) into the form

(8.9.34)

with the abbreviation

(8.9.35)

We note that f and g are 2π-periodic in φ_j.

Within the limitation that we may use the form (8.9.13) of the solutions of the
linearized equations (8.9.9), (8.9.32, 34) are still general. Of course, another
approach would be to forget the way in which we arrived at (8.9.32, 34) from
(8.9.1) and take (8.9.32, 34) just as given starting equations. These equations
describe for f = g = ξ = 0 the motion on the old torus, while for nonvanishing f
and g we have to establish the form of the newly evolving trajectories.

8.10 Bifurcation from a Torus: Special Cases

8.10.1 A Simple Real Eigenvalue Becomes Positive


We now numerate the eigenvalues λ_k in such a way that λ_{M+1} has the biggest real
part. We assume

λ_{M+1} real ,   λ_{M+1} ≷ 0 ,   and   (8.10.1)

Re{λ_k} ≤ C < 0 ,   k = M + 2, ...   (8.10.2)

and put

ξ_{M+1} = u .   (8.10.3)

Here u plays the role of the order parameter, while ξ_k, k ≥ M + 2, plays the role
of the slaved variables. Applying the slaving principle we reduce the set of equations (8.9.34 and 32) to equations of the form

u̇ = λ_u u + g(u, φ) ,   (8.10.4)

φ̇ = ω + f(u, φ) .   (8.10.5)


We shall focus our attention on the case in which g can be written in the form

g(u, φ) = −b u³ + u³ h(u, φ) ,   (8.10.6)

where we may split h further into the following two parts:

h = h₁(u) + h₂(u, φ) ,   (8.10.7)


where h₁, h₂ are assumed to have the following properties:

h₁(u) = O(u) ,   (8.10.8)

(8.10.9)

∫₀^{2π} ... ∫₀^{2π} dφ₁ ... dφ_M h₂ = 0 .   (8.10.10)

In addition we assume

f(u, φ) = f₁(u) + f₂(u, φ) ,  where   (8.10.11)

f₁(u) = O(u³) ,   (8.10.12)

f₂(u) = O(u²) ,   (8.10.13)

∫₀^{2π} ... ∫₀^{2π} dφ₁ ... dφ_M f₂ = 0 .   (8.10.14)
a a

(These conditions can be easily weakened, e.g., by requiring f₁(u) = O(u²) and
f₂(u) = O(u) [Sect. 8.8.1].)
We are familiar with similar properties from our treatment of the bifurcation
of a limit cycle. To continue, we now consider a region somewhat above the
transition point and assume

λ_u = ε² λ₀   and   (8.10.15)

u = ε(u₀ + η) .   (8.10.16)

We determine u₀ by

λ₀ − b u₀² = 0 ,   (8.10.17)

so that

u₀ = ± √(λ₀/b)   (8.10.18)

holds. This relation indicates that two new tori split off the old torus and keep a
mean distance u₀ from the old torus. Introducing in addition to (8.10.15, 16) the
new scaling of time

τ = ε² t   (8.10.19)

by using relation (8.10.17), we may cast (8.10.4) into


dη/dτ = −2λ₀η − 3b u₀ η² − b η³
       + (u₀ + η)³ [ε h̃₁(u₀ + η, ε) + h̃₂(ε, u₀ + η, φ)] ,   (8.10.20)

where

ε h̃₁(u₀ + η, ε) = h₁(ε(u₀ + η)) ,   (8.10.21)

h̃₂(ε, u₀ + η, φ) = h₂(ε(u₀ + η), φ) .   (8.10.22)

On account of assumption (8.10.8), h̃₁ is O(1), so that ε h̃₁ can be treated as a small
perturbation. Because h̃₂ depends on rapid oscillations of φ through the scaling
of time, h̃₂ can also be considered as a perturbation ∝ ε from a formal point of
view. Note that in the present case of quasiperiodic motion such an argument
holds only if the ω's fulfill a KAM condition. The same scaling brings us from
(8.10.5) to

where   (8.10.23)

(8.10.24)

is of order O(ε), and

(8.10.25)

is of order unity, but due to its dependence on φ acts like a perturbation ∝ ε.
Superficially, (8.10.20, 23) are familiar from Sects. 8.7, 8.8. The reader
should be warned, however, not to apply the previous discussion directly to the
case of (8.10.20, 23) without further reflection. The basic difference between
(8.10.20, 23) and the equations of Sects. 8.7, 8.8 is that here we are dealing with
quasiperiodic motion, while Sects. 8.7, 8.8 dealt with periodic motion only. It is
appropriate here to recall the intricacies of Moser's theorem. Before we discuss
this question any further we shall treat the case in which λ_{M+1} is complex. We
shall show that in such a case we are led to equations which again have the structure of equations (8.10.20, 23), so that we can discuss the solutions of these equations simultaneously.
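Dropping the perturbations h̃₁, h̃₂, the scaled equation (8.10.20) predicts that deviations η from the new torus relax at the rate 2λ₀; a quick numerical check (the parameter values are our own):

```python
import math

# Unperturbed part of (8.10.20): deta/dtau = -2*lam0*eta - 3*b*u0*eta^2 - b*eta^3,
# with u0 = sqrt(lam0/b) from (8.10.18).
lam0, b = 1.0, 1.0
u0 = math.sqrt(lam0/b)
eta0, dt, nsteps = 0.1, 1e-3, 3000
eta = eta0
for _ in range(nsteps):              # forward Euler over tau in [0, 3]
    eta += dt*(-2*lam0*eta - 3*b*u0*eta**2 - b*eta**3)

# effective decay rate over the run vs. the linear prediction 2*lam0
rate = -math.log(eta/eta0)/(nsteps*dt)
print(rate)      # slightly above 2*lam0 = 2 because of the nonlinear terms
```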

8.10.2 A Complex Nondegenerate Eigenvalue Crosses the Imaginary Axis


The initial steps of our procedure are by now well known to the reader. We put

λ_{M+1} = λ' + i ω_{M+1}   and   (8.10.26)

ξ_{M+1} = u   (8.10.27)

and assume


Re{λ_k} ≤ C < 0 ,   k = M + 2, ... .   (8.10.28)

Application of the slaving principle allows the reduction of the original set of
equations (8.9.32, 34) to the two equations

u̇ = (λ' + i ω_{M+1}) u + g(u, φ)   and   (8.10.29)

φ̇ = ω + f(u, φ)   (8.10.30)

for the order parameters u and φ. Let us consider as an example

g(u, φ) = −b u|u|² + m(u, φ) ,   b = b' + i b'' ,   b' > 0 ,   (8.10.31)

where we shall specify m(u, φ) below.


The hypothesis

u = r(t) exp[i φ_{M+1}(t)]   (8.10.32)

allows us to transform (8.10.29) into

ṙ = λ' r − b' r³ + Re{exp(−i φ_{M+1}) m(r exp(i φ_{M+1}), φ)} .   (8.10.33)

Introducing the new vector φ̃ which contains the additional phase φ_{M+1},

φ̃ = (φ₁, ..., φ_M, φ_{M+1}) ,   (8.10.34)

we may cast (8.10.33) into the form

ṙ = λ' r − b' r³ + g̃(r, φ̃) .   (8.10.35)

In a similar fashion we obtain from (8.10.29)

φ̇_{M+1} = ω_{M+1} − b'' r² + Im{exp(−i φ_{M+1}) m(r exp(i φ_{M+1}), φ) r⁻¹} ,   (8.10.36)

where the last two terms together are denoted by f_{M+1}. Equation (8.10.36), together
with (8.10.30), can be cast into the form

dφ̃/dt = ω̃ + f̃   with   (8.10.37)

f̃ = (f₁, ..., f_M, f_{M+1})   (8.10.38)

and

ω̃ = (ω₁, ..., ω_M, ω_{M+1}) .   (8.10.39)


From a formal point of view (8.10.35, 37) have the same structure as (8.9.32),
(8.9.34), which then can be transformed into (8.10.20, 23).

In precisely the same way as we discussed in Sect. 8.10.1, (8.10.35, 37) allow
for the introduction of scaled variables. It is a simple matter to formulate conditions on g̃ and f̃ [and thus on m(u, φ)] which are analogs to the conditions
(8.10.7–14). This allows us to exhibit explicitly the smallness of the perturbative
terms, by which we may write (8.10.35, 37) in the form

(8.10.40)

(8.10.41)

The functions g and h are 2π-periodic in φ.


We now apply the same reasoning as in Sect. 8.5 in order to apply Moser's
theorem. We introduce λ_u [where this λ_u must not be confused with the λ_u
occurring in (8.10.4)] and split λ_u according to

(8.10.42)

and similarly

(8.10.43)

In this way, (8.10.40, 41) are cast into the form of the equations treated by
Moser's theorem. Let us postpone the discussion whether the conditions of this
theorem can be met, and assume that these conditions are fulfilled. Then we can
apply the procedure we described after (8.5.36, 37) in Sect. 8.5. In particular we
can, at least in principle, calculate D and Λ so that

(8.10.44)

and

(8.10.45)

In addition, we can construct the steady state and transient solutions of (8.10.40,
41) in complete analogy to the procedure following (8.5.45). Thus the transient
solutions (close to the bifurcating torus) acquire the form

φ = ω_r t + ε u(ω_r t, ε) ,   (8.10.46)

η = x₀ exp(λ_r t) + ε v(ω_r t, ε) + ε V(ω_r t, ε) x₀ exp(λ_r t) ,   λ_r < 0 .   (8.10.47)

Here u, v, and V are 2π-periodic in each component of ω_r t. The steady state solutions
are given by

φ = ω_r t + ε u(ω_r t, ε) ,   (8.10.48)

η = ε v(ω_r t, ε) .   (8.10.49)


The rest of this section will be devoted to a discussion under which circumstances
the conditions of Moser's theorem can be fulfilled, in particular under which
conditions ω_r fulfills the KAM condition¹. We first turn to the question under
which hypothesis we derived (8.10.29, 30). If we use these equations or (8.10.40,
41) in the form of a model, no further specifications on ω_u must be made. In this
case we find the bifurcation from a torus provided we can find for given λ_u and
ω_u such λ_r and ω_r that (8.10.44, 45) are fulfilled and ω_r obeys a KAM condition. If, on the other hand, (8.10.40, 41) have been derived starting from the
original autonomous equations (8.9.1), then we must take into account the hypotheses we made when going from (8.9.1) to these later equations. The main hypothesis involved concerned the structure of the functions (8.9.13), to secure that the v_k
are quasiperiodic. This implied in particular that the frequencies ω₁, ..., ω_M
obeyed the KAM condition.

We now face a very profound problem, namely whether it is possible to check
whether or not such frequencies as ω_r and ω_u obey a KAM condition. Maybe
with the exception of a few special cases, this problem implies that we know each
number ω_j with absolute precision. This is, however, a task impossible to solve.
Further discussion depends on our attitude, whether we take the point of view of
a mathematician, or a physicist or chemist.

A mathematician would proceed as follows. In order to check whether a bifurcation from one torus to another can be expected to occur "in reality", we
investigate how probable it is that out of a given set of ω's a specific set fulfills
the KAM condition. As is shown in mathematics, there is a high probability for
the fulfillment of such a condition provided the constant K occurring in (6.2.6) is
small enough. It should be noted that this constant K occurs jointly with the scaling factor ε², as can be seen by the following argument.

We start from the KAM condition (6.2.6). The scaling of time transforms the
frequencies ω_r and λ_u into ω_r/ε², λ_u/ε², respectively. This transforms the KAM
condition into

(8.10.50)

From the mathematical point of view we thus are led to the conclusion that the
probability of finding suitable combinations of ω's is very high or, more precisely expressed, the bifurcation from one torus to another takes place with nonvanishing measure.
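The small-divisor issue behind such a KAM condition can be made concrete numerically: the quantity min |m·ω|·|m|^τ over resonance vectors m stays away from zero for "good" irrational frequency ratios, but vanishes identically for commensurate frequencies (the cutoff N, the exponent τ, and the sample frequencies below are our own choices):

```python
import math
from itertools import product

def small_divisor(om1, om2, N=30, tau=3.0):
    """min over 0 < |m1|+|m2|, |m_i| <= N of |m1*om1 + m2*om2|*(|m1|+|m2|)**tau."""
    best = float("inf")
    for m1, m2 in product(range(-N, N + 1), repeat=2):
        if (m1, m2) == (0, 0):
            continue
        best = min(best, abs(m1*om1 + m2*om2)*(abs(m1) + abs(m2))**tau)
    return best

print(small_divisor(1.0, math.sqrt(2.0)))  # stays of order 1 for this irrational pair
print(small_divisor(1.0, 1.5))             # 0.0: exact resonance at m = (3, -2)
```

Arbitrarily small changes of the frequencies can switch between the two situations, which is precisely why the physical discussion below invokes fluctuations.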

¹ From a rigorous mathematical point of view we require that g and h in (8.10.40, 41) are analytic in
η and φ. It should be noted that the slaving principle introduced in Chap. 7 does not guarantee this
property even if the initial equations (i.e., those before applying the slaving principle) only contained right-hand sides analytic in the variables. Thus we have either to invoke "smoothing" as explained in Sect. 7.5, or to introduce the equations (8.10.40, 41) by way of a model (which appeals
to a physicist or engineer), or to make stronger assumptions on the original equations so that the
slaving principle can be given a correspondingly sharper form (which appeals to a mathematician).
A further possibility is provided by weakening the assumptions of Moser's theorem (which is also
possible in certain cases, e.g., if certain symmetries are fulfilled).


From a physical point of view we can hardly think that nature discriminates
in a precise way between ω's fulfilling a KAM condition and others. Rather, at a
microscopic level fluctuations take place all the time, which will have an impact
on the actual frequencies. Therefore, the discussion on the bifurcation of a torus
will make sense only if we take fluctuations into account. We shall indicate a
possible treatment of such a problem in a subsequent chapter. Here we just want
to mention that fluctuations, at least in general, will let the phase angles diffuse
over the whole torus so that, at least in general, such delicate questions as whether
a KAM condition is fulfilled or not will not play a role. Rather, a system
averages over different frequencies in one way or another.

Some other important cases must not be overlooked, however. One of them is
frequency locking, which has been discussed in Sect. 8.6. In this case a torus collapses into a limit cycle. It is worth mentioning that fluctuations may play an important role in this case, too. Finally it may be noted that by a combination of the
methods developed in Sects. 8.5–10, further phenomena, e.g. period doubling
of the motion on a torus, can easily be treated.

8.11 Instability Hierarchies, Scenarios, and Routes to Turbulence

As we saw in the introduction, systems may pass through several instabilities


when a control parameter is changed. There are different kinds of patterns which
occur after each instability. If we consider only temporal patterns, we may find a
time-independent state, periodic motion, quasiperiodic motion, chaos, and
various transitions between these states at instability points leading, for example,
to frequency locking or to period doubling (subharmonics). Which sequence of
transitions is adopted by a specific system is, of course, an important question.
Such a sequence is often called a route, in particular if a sequence leads to tur-
bulence, or chaos. In such a case one speaks of routes to turbulence. The theo-
retical discussion of a route is often referred to as a "scenario", or "picture".

8.11.1 The Landau-Hopf Picture


For a number of systems, e. g., fluids, a typical route is as follows: a time-inde-
pendent (spatially homogeneous) state bifurcates 1 into other time-independent
(but spatially inhomogeneous) states. A new such state then bifurcates into an
oscillating state, i. e., a limit cycle occurs (Hopf bifurcation). Then two basic fre-
quencies occur; i. e., the limit cycle bifurcates into a torus. Landau conjectured
that these kinds of transitions are continued in such a way that systems exhibit
subsequent bifurcations to tori of higher and higher dimensions. At each step a
new frequency Wi is added to the set of basic frequencies at which the systems
oscillate (quasiperiodic motion). Thus it is suggested that turbulence is described
by motion on a torus of infinite dimensions. This scenario is called the
Landau-Hopf picture.
1 In the following we shall speak of bifurcations, though it would be more precise to speak of non-
equilibrium phase transitions because of the role of fluctuations. We shall adopt a mathematician's
attitude and neglect fluctuations.


8.11.2 The Ruelle and Takens Picture


Quite a different scenario has been derived by Ruelle and Takens. To explain
their mathematical approach we must first discuss the meaning of the word
"generic", which nowadays is frequently used in mathematics.
Let us consider a whole class of differential equations q̇ = N(q), where N ful-
fills certain differentiability conditions. One then looks for those properties of
the solutions q(t) which are the rule and not the exception. Such properties are
called "generic". Instead of trying to sharpen this definition let us illustrate it by
a simple example taken from physics. Let us consider a central force which is
continuous. If we denote the distance from the center as r, the class of functions
K(r), where K is a continuous function, is "generic". On the other hand, the
force described by Coulomb's law, K ∝ 1/r², is not generic; it is quite a special
case². Ruelle and Takens studied what happens in the generic case with respect to
the bifurcations of tori into higher-dimensional tori.
Their analysis shows that after the two-dimensional torus has been reached,
the next bifurcation should not lead to a three-dimensional torus, i. e., not to
quasiperiodic motion at three basic frequencies, but rather to a new kind of
attractor, a "strange attractor". A strange attractor can be characterized as
follows: All the trajectories of the attractor are embedded in a certain region of q
space. Trajectories outside that region but close enough are attracted into that
region. Trajectories within that region will remain in it. The "strangeness" of the
attractor consists in the fact that it is neither a fixed point nor a limit cycle nor a
torus, and it does not form a manifold. Clearly, if a strange attractor is defined
in this way, a large variety of them may exist, and it will be an important task for
future research to attempt a classification. One such classification, which is
espoused in particular by Ruelle, consists in the use of Lyapunov exponents, but
of course there are other possibilities. In what follows it is important to note that
Ruelle and Takens assume in their approach that the frequencies are close to
rational numbers.

8.11.3 Bifurcations of Tori. Quasiperiodic Motions


Thus far we have very carefully studied the properties of quasiperiodic motion,
and especially bifurcations from one torus to another, including bifurcations
from two-dimensional to three-dimensional tori. The reason why we have put so
much emphasis on this approach is the following: Experimentally not only are
transitions from a two-dimensional torus to chaos found to occur, but also
transitions to a three-dimensional torus. Therefore it is important to know why
the Ruelle and Takens picture holds in some cases but fails in others. Our
detailed discussion in the preceding section revealed that if a KAM condition is
fulfilled, i. e., if the frequencies possess a certain kind of irrationality with
respect to each other, bifurcation from, say, a two-dimensional to a three-dimen-

2 This nearly trivial example illustrates another important aspect: sometimes one has to be cautious
when applying the concept of "genericity" to physics (or other fields). Here, due to symmetry, con-
servation laws, or for other reasons, the phenomena may correspond to "nongeneric" solutions
(such as Coulomb's law).


sional torus is possible. We drew the conclusion that probability arguments must
be applied in order to decide whether a real system will show such a transition.
Our approach solves the puzzle of why systems may show this kind of bifurcation
despite the fact that the corresponding solutions are not "generic" in the sense of
Ruelle and Takens. Indeed, for a given system there may be regions of the con-
trol parameter or of other parameters in which a scenario of subsequent bifurca-
tions of tori is possible. However, when proceeding to higher-dimensional tori,
such transitions become more and more improbable, so that the Landau-Hopf
picture loses its validity and chaos sets in.
There are at least two further scenarios for the routes to turbulence; these are
discussed in the next sections.

8.11.4 The Period-Doubling Route to Chaos. Feigenbaum Sequence


In some experiments two frequencies may lock together to a single frequency.
The corresponding limit cycle can then undergo a sequence of period doubling bi-
furcations which eventually lead to chaos. According to my interpretation, in
such a case the system is governed by few order parameters, and the period-
doubling sequence takes place in the low-dimensional space of the corresponding
order parameters, whose number is at least three. Such period doublings can be
described by discrete maps, as explained in the introduction, but they can also be
understood by means of differential equations, for which the Duffing equation
(1.14.14) may stand as an example. Of course, in autonomous systems the
driving force in the Duffing equation stems from a mode which oscillates at that
driving frequency and drives two other nonlinearly coupled modes (or one non-
linear oscillator). Whether the Feigenbaum sequence is completely exhibited by
real systems is an open question insofar as only the first bifurcation steps, say up
to n = 6, have been observed experimentally. An important reason why no higher
bifurcations can be observed is the presence of noise. In addition, as is known
from specific examples, at higher bifurcations other frequencies, e. g., with
period tripling, can also occur.
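The first steps of such a cascade can be made concrete with the logistic map, the standard discrete-map example of the kind mentioned in the introduction. The following sketch counts the points of the attractor as the control parameter r grows; all parameter values are illustrative, not taken from the text:

```python
# Period doubling in the logistic map x -> r x (1 - x): as the control
# parameter r is raised, the attractor doubles its period at r ~ 3.0,
# 3.449, 3.544, ..., accumulating at r_infinity ~ 3.5699 (Feigenbaum).
def attractor(r, x=0.5, transient=2000, keep=64):
    for _ in range(transient):          # let the orbit settle
        x = r * x * (1 - x)
    points = set()
    for _ in range(keep):               # record the settled orbit
        x = r * x * (1 - x)
        points.add(round(x, 6))
    return sorted(points)

print(len(attractor(2.9)))   # 1: stable fixed point
print(len(attractor(3.2)))   # 2: limit cycle of period 2
print(len(attractor(3.5)))   # 4: period 4 after the second doubling
```

The doubling thresholds r_n accumulate geometrically, the ratio of successive intervals tending to the Feigenbaum constant δ ≈ 4.669; this rapid accumulation, together with noise, is why only the first few steps are resolved experimentally.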

8.11.5 The Route via Intermittency


The last route to turbulence to be mentioned here is via intermittency according
to Pomeau and Manneville. Here outbursts of turbulent motion are interrupted
by quiet regimes. For a discussion of this phenomenon in terms of the logistic
map, see [1], where further references are given.
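For the logistic map, intermittent behavior of this kind appears just below the tangent bifurcation of the period-3 window at r = 1 + √8 ≈ 3.8284. The sketch below contrasts the locked state inside the window with the intermittent regime just below it; the parameter values are illustrative:

```python
# Intermittency near the tangent bifurcation of the logistic map's
# period-3 window at r_c = 1 + sqrt(8) ~ 3.82843.
def orbit(r, x=0.3, transient=3000, keep=600):
    for _ in range(transient):
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):
        x = r * x * (1 - x)
        out.append(x)
    return out

inside = orbit(3.832)                        # inside the window: stable 3-cycle
print(len({round(x, 6) for x in inside}))    # 3 distinct points

below = orbit(3.8275)                        # just below r_c: intermittent chaos
laminar = sum(abs(below[i + 3] - below[i]) < 1e-3
              for i in range(len(below) - 3))
print(laminar / len(below))                  # large fraction: long quiet regimes
```

Just below r_c the orbit spends long stretches shadowing the almost-born 3-cycle (the quiet regimes) interrupted by short turbulent bursts, exactly the pattern described by Pomeau and Manneville.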
In systems other than fluids, similar routes have been found. For instance in
the laser, we find at the first laser threshold a Hopf bifurcation, and when laser
pulses break into ultrashort pulses, bifurcation of a limit cycle into a torus.
Motion of a limit cycle may also switch to chaotic motion under different circum-
stances where, more precisely speaking, we have periodic motion being
modulated by chaotic motion. An important task of future research will be to
study scenarios for general classes of systems and devise methods to obtain an
overall picture.

9. Spatial Patterns

In the introduction we got acquainted with a number of systems in which spatial
patterns evolve in a self-organized fashion. Such patterns may arise in
continuous media such as fluids, or in cell assemblies in biological tissues. In this
chapter we want to show how the methods introduced in the preceding chapters
allow us to cope with the formation of such patterns. We note that such patterns
need not be time independent but may be connected with oscillations or still more
complicated time-dependent motions. Throughout this chapter we shall consider
continuous media or problems in which a discrete medium, e. g., a cell assembly,
can be well approximated by a continuum model.

9.1 The Basic Differential Equations

We denote the space vector (x, y, z) by x. The state of the total system is then de-
scribed by a state vector

q(x, t) (9.1.1)

which depends on space and time.


Take for example a chemical reaction in a reactor. In this case the vector q is
composed of components q₁(x, t), q₂(x, t), ..., where q_j ≡ n_j(x, t) is the
concentration of a certain chemical j at space point x at time t. Because in continuously
extended media we have to deal with diffusion or wave propagation or with
streaming terms, spatial derivatives may occur. We shall denote such derivatives
by means of the nabla-operator

∇ = (∂/∂x, ∂/∂y, ∂/∂z) . (9.1.2)

The temporal evolution of a system will be described by equations of the
general form

q̇(x, t) = N(q(x, t), ∇, a, x, t) . (9.1.3)

These equations depend in a nonlinear fashion on q(x, t). They contain, at least
in general, spatial derivatives and they describe the impact of the surroundings by
control parameters a. Furthermore, in spatially inhomogeneous media they may
depend on the space vector x explicitly, and they may depend on time. Even in a
stationary process such a time dependence must be included if the system is
subject to internal or external fluctuations.
From the mathematical point of view, equations (9.1.3) are coupled nonlinear
stochastic partial differential equations. Of course, they comprise an enormous
class of processes and we shall again focus our attention on those situations
where the system changes its macroscopic features dramatically. We mention a
few explicit examples for (9.1.3). A large class used in chemistry consists of
reaction diffusion equations of the form

q̇(x, t) = R(q(x, t), x, t) + D Δq(x, t) . (9.1.4)

Here R describes the reactions between the chemical substances. In general, R is a
polynomial in q, or it may consist of a sum of ratios of polynomials, for
instance if Michaelis-Menten terms occur [1]. In inhomogeneous media the
coefficients of the individual powers of q may depend on the space coordinate x, and
if controls are changed in time, R may also depend on time t. In most cases,
however, we may assume that these coefficients are independent of space and
time. In the second term of (9.1.4)

Δ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z² (9.1.5)

is the Laplace operator describing a diffusion process. The diffusion matrix is

D = diag(D₁, D₂, ...) . (9.1.6)

It takes into account that chemicals may diffuse with different diffusion con-
stants.
Another broad class of nonlinear equations of the form (9.1.3) occurs in hy-
drodynamics. The most frequently occurring nonlinearity stems from the stream-
ing term. This term arises when the streaming of a fluid is described in the usual
way by local variables. This description is obtained by a transformation of the
particle velocity from the coordinates of the streaming particles to the local coor-
dinates of a liquid. Denoting the coordinate of a single particle by x (t) and
putting

v = ẋ , (9.1.7)

the just-mentioned transformation is provided by

dv(x(t), t)/dt = (∂v/∂x) v_x + (∂v/∂y) v_y + (∂v/∂z) v_z + ∂v/∂t . (9.1.8)


On the rhs of (9.1.8) we must then interpret the argument x as that of a fixed space
point which is no longer moving along with the fluid. The right-hand side of
(9.1.8) represents the well-known streaming term, which is nonlinear in v. Of
course, other nonlinearities may also occur, for instance, when the density,
which occurs in the equations of fluid dynamics, becomes temperature dependent
and temperature itself is considered as part of the state vector q. Note that in the
equations of fluid dynamics v is part of the state vector q

q, v → q(x, t) . (9.1.9)

As is well known from mathematics, the solutions of (9.1.3) are determined
only if we fix appropriate initial and boundary conditions. In the following we
shall assume that these equations are of such a type that the time evolution is
determined by initial conditions, whereas the space dependence is governed by
boundary conditions. We list a few typical boundary conditions, though it must
be decided in each case which one of these conditions must be used and whether
other types should also be taken into account. In general, this must be done on
physical grounds, though some general theorems of mathematics may be helpful.
Some examples of boundary conditions: 1) The state vector q must vanish at
the surface, i. e.,

q(s) = 0, (9.1.10)

(s: surface); for a generalization see below. 2) The derivative normal to the
surface must vanish. This condition is sometimes called the no-flux boundary
condition

∂q_j(s)/∂n = 0 . (9.1.11)
3) Another boundary condition, which is somewhat artificial but is quite useful if
no other boundary conditions are given, is the periodic boundary condition

q (x) = q (x + L) . (9.1.12)

Here it is required that the solution is periodic along its coordinates x, y, z with
periods L_x, L_y, L_z, respectively. 4) If there is no boundary at finite
distances, in general one requires that

|q| bounded for |x| → ∞ . (9.1.13)

Of course, within a state vector q, different components may be subjected to
different boundary conditions, e. g., mixtures of (9.1.10 and 11) may occur.
Another boundary condition which generalizes (9.1.10) is given by

q(s) = q̄(s) , (9.1.14)


where q̄ is a prescribed function at the surface. For instance one may require that
the concentrations of certain chemicals are kept constant at the boundary.
There are still other boundary conditions which are less obvious, namely
when q (x) is a function on a manifold, e. g., on a sphere or on a torus. Such ex-
amples arise in biology (evolution of morula, blastula, gastrula, and presumably
many other cases of biological pattern formation), as well as in astrophysics. In
such a case we require that q(x) is uniquely defined on the manifold, i. e., if we
go along a meridian we must find the same values of q after one closed path.
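To see what the conditions (9.1.10-12) mean in practice, the following sketch discretizes the one-dimensional Laplacian that enters (9.1.4) and handles the boundary via ghost points; the grid and the ghost-point treatment are illustrative choices, not prescriptions from the text:

```python
def laplacian(q, h, bc):
    """Discrete (q_{i-1} - 2 q_i + q_{i+1}) / h^2 with the boundary handled
    by ghost points: bc = "dirichlet" for q(s) = 0, Eq. (9.1.10),
    "noflux" for dq/dn = 0, Eq. (9.1.11), "periodic" for Eq. (9.1.12)."""
    n, out = len(q), []
    for i in range(n):
        if bc == "periodic":
            qm, qp = q[(i - 1) % n], q[(i + 1) % n]
        elif bc == "dirichlet":
            qm = q[i - 1] if i > 0 else 0.0
            qp = q[i + 1] if i < n - 1 else 0.0
        elif bc == "noflux":
            qm = q[i - 1] if i > 0 else q[0]     # mirror ghost point
            qp = q[i + 1] if i < n - 1 else q[-1]
        out.append((qm - 2.0 * q[i] + qp) / h ** 2)
    return out

q = [0.1 * i * (10 - i) for i in range(11)]      # a smooth test profile
print(sum(laplacian(q, 1.0, "noflux")))          # ~0: no flux conserves mass
```

The no-flux discretization conserves the total amount of each chemical exactly (the sum of the discrete Laplacian telescopes to zero), which is the discrete counterpart of a closed reactor.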

9.2 The General Method of Solution

From a formal point of view our method of approach is a straightforward
extension of the methods presented before. We assume that for a certain range of
control parameter values we have found a stable solution q₀(x, t) so that

q̇₀(x, t) = N(q₀(x, t), ∇, a, x, t) . (9.2.1)

We assume that the solution q₀ can be extended into a new region of control
parameter values a in which linear stability is lost. To study the stability of the
solution of (9.1.3) we put

q(x, t) = qo(x, t) + w(x, t) (9.2.2)

and insert it in (9.1.3). We assume that w obeys the linearized equation

ẇ(x, t) = L(q₀(x, t), ∇, a, x, t) w(x, t) . (9.2.3)

The right-hand side of (9.2.3) is sometimes called the linearization of (9.1.3) and
L is sometimes called the Fréchet derivative. If N is a functional of q_j (e. g., in the
form of integrals over functions of qj at different space points) the components
of the matrix L are given by functional derivatives

L_{jk} = δN_j / δq_k |_{q=q₀} . (9.2.4)

However, in order not to overload our presentation we shall not enter the problem
of how to define (9.2.3) in abstract mathematical terms but rather illustrate the
derivation of (9.2.3) by explicit examples. In the case of a reaction diffusion
equation (9.1.4), we obtain

L = L⁽¹⁾ + L⁽²⁾ , where (9.2.5)

L⁽¹⁾_{jk} = ∂R_j/∂q_k |_{q=q₀} (9.2.6)

is just the usual derivative, whereas L⁽²⁾ is given by


L⁽²⁾ = D Δ . (9.2.7)

We give here only one additional example of how to obtain the linearization
(9.2.3). To this end we consider as a specific term the streaming term which may
occur in N,

q_k(x) (∂/∂x_k) q_j(x) . (9.2.8)

Making the replacement (9.2.2) we have to consider

[q₀,k(x) + w_k(x)] (∂/∂x_k) [q₀,j(x) + w_j(x)] . (9.2.9)

Multiplying all terms with each other and keeping only the terms which are linear
in w_j we readily obtain

q₀,k(x) (∂/∂x_k) w_j(x) + w_k(x) (∂/∂x_k) q₀,j(x) . (9.2.10)

Let us consider the solutions of (9.2.3) in more detail. Because L depends on
q₀, we have to specify the different kinds of q₀ from which we start. We begin
with a q₀ which is a constant vector, i. e., independent of space and time, and
consider as example reaction diffusion equations. In this case, L is of the form
(9.2.5), where L⁽¹⁾, (9.2.6), is a constant matrix and L⁽²⁾ is given by (9.2.7).
In order to solve

ẇ(x, t) = L(∇, a) w(x, t) (9.2.11)

we make the hypothesis

w(x, t) = W_k(t) χ_k(x) . (9.2.12)

Under the assumption that χ_k(x) obeys the equation

Δ χ_k(x) = −k² χ_k(x) (9.2.13)

(under given boundary conditions) we may transform the equation

ẇ = L(∇, a) w into (9.2.14)

Ẇ_k(t) = (L⁽¹⁾ − k² D) W_k(t) . (9.2.15)

This equation represents a set of coupled linear ordinary differential equations
with constant coefficients. From Sect. 2.6 we know the general form of its
solution,

W_k(t) = exp(λ_k t) v_k , (9.2.16)


where v_k is independent of time if the characteristic exponents λ_k are
nondegenerate. Otherwise v_k may contain a finite number of powers of t. For certain
geometries the solutions of (9.2.13) are well known, e. g., in the case of rec-
tangular geometry they are plane or standing waves. In the case of a sphere they
are Bessel functions multiplied by spherical harmonics.
The next case concerns a qo which is independent of space but a periodic or
quasiperiodic function of time. In addition, we confine our considerations to
reaction diffusion equations. Then L (1) acquires the same time dependence as qo.
Hypothesis (9.2.12) is still valid and we have to seek the solutions of (9.2.15), a
class of equations studied in extenso in Chaps. 2 and 3.
We now discuss a class of problems in which q₀ depends on the space coordinate
x. In such a case, the solutions of (9.2.3) are, at least in general, not easy
to construct analytically. In most cases we must resort to computer solutions. So
far not very much has been done in this field, but I think that such a treatment
will be indispensable, since this problem cannot ultimately be circumvented,
e. g., by specific forms of singularity theory. In spite of this open problem there
are a few classes in which general statements can still be made. Let us assume that
q₀(x) is periodic in x with periods

L_x, L_y, L_z . (9.2.17)

If N is independent of x, L(x) possesses the same periodicity as q₀(x). By
means of the hypothesis

w(x, t) = exp(λt) v(x) (9.2.18)

we transform (9.2.14) into

L(q₀(x), ∇, a) v(x) = λ v(x) . (9.2.19)

Because L(x) is periodic in each component of x we may apply the results of
Sect. 2.7. If the boundary condition requires that v(x) is bounded for |x| → ∞,
the general form of the solution v reads

v(x) = exp(ik·x) z(x) , (9.2.20)

where k is a real vector and z is periodic with periods (9.2.17). In case of other
boundary conditions in finite geometries, standing waves may be formed from
(9.2.20) in order to fulfill the adequate boundary conditions. Here it is assumed
that the boundary conditions are consistent with the requirement that L (x) is
periodic with (9.2.17).

9.3 Bifurcation Analysis for Finite Geometries

We now wish to show how to solve equations of the type (9.1.3). We study the
bifurcation from a node or focus which is stable for a certain range of
parameter values a and which becomes unstable beyond a critical value a_c. We assume
that the corresponding old solution is given by

q₀ = q₀(x) . (9.3.1)

We assume that the solutions of the linearized equations (9.2.3) have the form

w_k(x, t) = exp(λ_k t) v_k(x) . (9.3.2)

As is well known, when dealing with finite boundary conditions, at least in a
standard problem, we may assume that the characteristic exponents λ_k are
discrete and also that the index k is a discrete index. We decompose the required
solution of (9.1.3) into the by now well-known superposition

q(x, t) = q₀(x) + Σ_k ξ_k(t) v_k(x) , (9.3.3)

in which the sum over k will be abbreviated as W.

The essential difference between our present procedure and that of Chap.
8 (especially Sects. 8.1-5) consists in the fact that the index k now runs over an
infinite set, while in the former chapters it ran over a finite set. The other
difference consists in the space dependence of q, q₀ and v. We insert (9.3.3) into
(9.1.3), where we expand N on the rhs into a power series with respect to W.
Under the assumption that (9.3.3) obeys (9.1.3) we obtain

Σ_k ξ̇_k(t) v_k(x) = Σ_k ξ_k(t) L v_k(x) + H(W) with (9.3.4)

H(W) = N⁽²⁾(q₀): W: W + ... . (9.3.5)

We now multiply (9.3.4) by v̄_k(x) and integrate over the space within the given
boundaries. According to the definition of v̄_k we obtain

(v̄_k v_k') = δ_{kk'} . (9.3.6)

Using in addition

L v_k(x) = λ_k v_k(x) , (9.3.7)

we can cast (9.3.4) into the form

ξ̇_k = λ_k ξ_k + (v̄_k H(W)) with (9.3.8)

(v̄_k H(W)) = Σ_{k'k''} ξ_{k'} ξ_{k''} ∫d³x (v̄_k(x) N⁽²⁾(q₀(x)): v_{k'}(x): v_{k''}(x)) + ... , (9.3.9)

where the integral defines the coefficient A⁽²⁾_{kk'k''}.


If N and q₀ are independent of space variables, the coefficients A which occur in
(9.3.9) can be cast into the form

A⁽²⁾_{kk'k''} = Σ_{l l' l''} N⁽²⁾_{l l' l''} ∫d³x v̄_{k,l}(x) v_{k',l'}(x) v_{k'',l''}(x) , (9.3.10)

where the integral will be denoted by I.

The remaining integral depends on the solutions of the linearized equation
(9.3.7) only. If L is invariant under symmetry operations, the v's can be chosen
as irreducible representations of the corresponding transformation group. As is
shown in group theory, on account of the transformation properties of the v's
and v̄'s, selection rules for the integral I in (9.3.10) arise. We mention that such
group theoretical ideas are most useful to simplify the solution of (9.3.8). From
the formal point of view, the set of (9.3.8) is identical with that of (8.1.17).
Therefore, close to critical points we may apply the slaving principle, which
allows us to reduce the set of (9.3.8) to a set of finite and in general even very few
dimensions. The leading terms in (9.3.3) are those which contain the order para-
meters Uk only instead of all ¢k' Close to the instability point the other remaining
terms are comparatively small and give rise to small changes only. Therefore,
close to instability points the evolving pattern is determined by superpositions of
a finite number of terms of the form

u_k(t) v_k(x) . (9.3.11)

What kind of combinations of (9.3.11) must be taken is then determined by the
solutions of the order parameter equations alone. Here, particularly, we may
have cases of competition in which only one u survives so that the evolving
pattern is determined by a single v_{k₀}. In other cases, by cooperation, specific
combinations of the u's stabilize each other. An example of the former is
provided by the formation of rolls in the Bénard instability, while an example of
the latter is given by hexagonal patterns in the same problem.
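The projection procedure of this section can be sketched for a scalar model equation q̇ = a q + q_xx − q³ on (0, π) with q = 0 at the boundary, using the modes v_k = sin(kx). The model and all coefficients are illustrative assumptions, not an equation taken from the text:

```python
import math

# Galerkin version of (9.3.8) for q_t = a q + q_xx - q^3, q(0) = q(pi) = 0.
# Modes v_k = sin(k x) give lambda_k = a - k^2; the cubic term is projected
# back onto the modes as in (9.3.9). Illustrative model, not from the text.
M, N, a, dt = 4, 100, 2.0, 0.01
dx = math.pi / (N + 1)
xs = [(i + 1) * dx for i in range(N)]

def rhs(xi):
    q = [sum(xi[k] * math.sin((k + 1) * x) for k in range(M)) for x in xs]
    out = []
    for k in range(M):
        lam = a - (k + 1) ** 2                       # eigenvalue of L
        proj = sum(-qi ** 3 * math.sin((k + 1) * x)  # <v_k, -q^3>
                   for qi, x in zip(q, xs)) * dx * 2.0 / math.pi
        out.append(lam * xi[k] + proj)
    return out

xi = [0.01] * M
for _ in range(3000):                                # crude Euler integration
    xi = [x + dt * d for x, d in zip(xi, rhs(xi))]
print(xi)   # xi_1 is the order parameter; the damped modes remain small
```

Only the k = 1 mode has a positive eigenvalue here; it saturates near the amplitude √(4(a − 1)/3) fixed by the cubic projection, while the damped modes either decay or remain small and slaved to it, illustrating the reduction to very few dimensions claimed above.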

9.4 Generalized Ginzburg-Landau Equations

The spectrum of the operator L will be, at least in general, continuous if there are
no boundary conditions in infinitely extended media. Let us consider the special
case in which qo is space and time independent. We treat a linear operator L of
the form (9.2.11), where L is space and time independent. According to (9.2.13)
we may choose X in the form of plane waves

χ_k(x) = exp(ik·x) . (9.4.1)

Using hypothesis (9.2.12), we have to deal again with a coupled set of linear
differential equations with constant coefficients (9.2.15). If W is a finite-dimensional
vector, the eigensolutions W can be characterized by a discrete set of
indices which we call j. On the other hand, k is a continuous variable. Neglecting


degeneracies of the eigenvalues of (9.2.15), the solutions of the linearized
equation (9.2.14) are

w_{k,j}(x, t) = exp(λ_{k,j} t) exp(ik·x) v_{k,j} . (9.4.2)

We note that the eigenvalues can be written as

λ_{k,j} = λ_j(ik) . (9.4.3)

The general hypothesis for q can be formulated analogously to (9.3.3), but we
must take into account that k is continuous by a corresponding integral

q(x, t) = q₀(x) + Σ_j ∫ ξ_{k,j}(t) v_{k,j}(x) d³k . (9.4.4)

Such a continuous spectrum can cause some difficulties when applying the
slaving principle. For this reason we resort to an approach well known from
quantum mechanics which amounts to the formation of wave packets. To this end
we split k into a discrete set of vectors k' and remaining continuous parts k̄,

k = k' + k̄ . (9.4.5)

We now consider an expression of the form

∫ d³k ξ_{k,j}(t) v_{k,j}(0) exp(ik·x) , (9.4.6)

where we split the integral into a sum over k',

Σ_{k'} ∫_{k'_x−δ/2}^{k'_x+δ/2} ∫_{k'_y−δ/2}^{k'_y+δ/2} ∫_{k'_z−δ/2}^{k'_z+δ/2} d³k̄ ξ_{k'+k̄,j}(t) exp(ik̄·x) v_{k'+k̄,j}(0) exp(ik'·x) , (9.4.7)

in which the k̄-integral defines the wave-packet amplitude ξ_{k',j}(x, t).

Introducing the abbreviation ξ_{k',j}(x, t) as indicated in (9.4.7) and making
for each wave packet the approximation

v_{k'+k̄,j}(0) ≈ v_{k',j}(0) , (9.4.8)

we may cast (9.4.7) into the form

Σ_{k'} ξ_{k',j}(x, t) exp(ik'·x) v_{k',j}(0) . (9.4.9)

Under these assumptions we may write (9.4.4) in the form

q(x, t) = q₀(x) + Σ_{k,j} ξ_{k,j}(x, t) v_{k,j}(x) . (9.4.10)


The orthogonality relation (9.3.6) must now be taken in a somewhat different
way, namely in the relation

(9.4.11)

we shall integrate over a space region which contains many oscillations of v(x)
but over which ξ_k does not change appreciably. Under these assumptions the
relations

and (9.4.12)

(9.4.13)

hold. In order to proceed further we must perform some intermediate steps. We
consider the expression

(9.4.14)

which can be written in the form

(9.4.15)

where λ_j(∇) yields λ_j(ik) if λ_j(∇) is followed by exp(ik·x). The integral
following λ_j in (9.4.15) can be split into the sum (9.4.9) as before, so that we obtain

(9.4.16)

in which the last factor is ξ_{k',j}(x, t).

We now consider the effect of λ_j(∇) on ξ. To this end we treat

λ_j(∇) exp(ik'·x) ξ_{k',j}(x, t) . (9.4.17)

Performing the differentiation with respect to exp(ik'·x) we readily obtain for
(9.4.17)

exp(ik'·x) λ_j(ik' + ∇) ξ_{k',j}(x, t) , (9.4.18)

where formulas well known in quantum physics have been used. It follows from
(9.4.7) that ξ_{k',j} contains only small k̄-vectors, enabling us to expand λ_j into a
power series with respect to ∇. We readily obtain

λ_j(ik' + ∇) ξ_{k',j}(x, t)

= λ_j(ik') ξ_{k',j}(x, t) + λ_j⁽¹⁾(ik'): ∇ ξ_{k',j}(x, t) + λ_j⁽²⁾(ik'): ∇:∇ ξ_{k',j}(x, t) + ... .

(9.4.19)


To complete the intermediate steps we have to evaluate

(9.4.20)

which leads to an expression of the form

(9.4.21)

where we have made use of the result (9.4.13). After all these steps we are now in
a position to treat the nonlinear equation (9.1.3). To this end we insert hypothesis
(9.4.10) into (9.1.3), expand N on the rhs of (9.1.3) into a power series with respect to
q, and make use of the linearized equation (9.2.3). We then readily obtain

with (9.4.22)

Except for the fact that λ_j is an operator, (9.4.22) is of the same form as all the
equations we have studied in a similar context before.
To make use of the slaving principle we must distinguish between the unstable
and stable modes. To this end we distinguish between such λ's for which

λ_u = Re{λ_j(ik)} > −|C| and (9.4.24)

λ_s = Re{λ_j(ik)} ≤ C < 0 (9.4.25)

hold. We must be aware of the fact that the indices j and k are not independent of
each other when we define the unstable and stable modes according to (9.4.24,
25). We shall denote those modes for which (9.4.24) holds by u_{k,j}, whereas those
modes for which (9.4.25) is valid by s_{k,j}. Note that in the following summations j
and k run over such restricted sets of values implicitly defined by (9.4.24, 25).
The occurrence of the continuous spectrum provokes a difficulty, because the
modes go over continuously from the slaved modes to the undamped or unslaved
modes. Because the slaving principle requires that the real part of the spectrum λ_s
has an upper bound [cf. (9.4.25)], we make a cut between the two regions at such
a C. Consequently, we also have to treat modes with a certain range of negative
real parts of λ_j as unstable modes. As one may convince oneself, the slaving
principle is still valid under the present conditions provided the amplitudes of u_{k,j} are
small enough.
Since the λ's are operators, the corresponding λ_u and λ_s are also operators
with respect to spatial coordinates. Some analysis shows that the slaving principle
also holds in this more general case provided the amplitudes ξ are only slowly
varying functions of space, so that λ_s(ik + ∇) does not deviate appreciably from
λ_s(ik). After these comments it is easy to apply the formalism of the slaving
principle. Keeping in the original equation (9.1.3) only terms of N up to third order in

q, we obtain the following set of equations for the order parameters
u_{k,j} = u_{k,j}(x, t) in the form of

u̇_{k,j} = λ_j(ik + ∇) u_{k,j} + Σ_{k'k'', j'j''} A_{k'k'',j'j''} u_{k',j'} u_{k'',j''}

+ Σ_{k'k''k''', j'j''j'''} B_{k'k''k''',j'j''j'''} u_{k',j'} u_{k'',j''} u_{k''',j'''} + ... ( + F_{k,j}(t)) . (9.4.26)

Precisely speaking, the coefficients A and B may contain derivatives with respect
to spatial coordinates. However, in most cases of practical importance,
λ_s(ik + ∇), which occurs in the denominators, may be well approximated by
λ_s(ik) because u depends only weakly on x and the denominators are bounded
from below. If the original equation (9.1.3) contained fluctuations, the
corresponding fluctuating forces reappear in (9.4.26) and are added here in the form
of F_{k,j}. I have called these equations, which I derived some time ago, "generalized
Ginzburg-Landau equations" because if we use λ_j in the approximation
(9.4.19) and drop the indices and sums over k and j, the equations (9.4.26) reduce
to equations originally established by Ginzburg and Landau in the case of
equilibrium phase transitions, especially in the case of superconductivity. Besides the
fact that the present equations are much more general, two points should be
stressed. The equations are derived here from first principles and they apply in
particular to systems far from thermal equilibrium.
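In its simplest limit — one real order parameter, the gradient expansion (9.4.19) truncated after the second derivative, and no fluctuating force — an equation of the type (9.4.26) reduces to a real Ginzburg-Landau equation u̇ = εu + u_xx − u³. A minimal numerical sketch, with coefficients and discretization chosen purely for illustration:

```python
# Real Ginzburg-Landau equation u_t = eps*u + u_xx - u^3 on a periodic ring:
# the single-mode, gradient-expanded limit of equations of type (9.4.26).
# Explicit Euler in time, central differences in space (illustrative choices).
N, h, dt, eps = 64, 0.5, 0.02, 0.25
u = [0.01] * N                       # small homogeneous seed above threshold
for _ in range(10000):               # integrate to t = 200
    u = [u[i] + dt * (eps * u[i]
                      + (u[(i - 1) % N] - 2 * u[i] + u[(i + 1) % N]) / h ** 2
                      - u[i] ** 3)
         for i in range(N)]
print(max(u))   # saturates at the stable amplitude sqrt(eps) = 0.5
```

The linear term amplifies the seed exponentially until the cubic term saturates it at u = √ε, the familiar Ginzburg-Landau amplitude; with complex amplitudes and the full set of mode indices the same structure governs the wave packets ξ_{k,j}(x, t) introduced above.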

9.5 A Simplification of Generalized Ginzburg-Landau Equations.
Pattern Formation in Bénard Convection

In a number of cases of practical interest, the generalized Ginzburg-Landau
equations which we derived above can be considerably simplified. In order to
make our procedure as transparent as possible, we will use the Bénard instability
of fluid dynamics as an example. (The relevant experimental results were
presented in Sect. 1.2.1.) The procedure can be easily generalized to other cases. In
the Bénard problem the order parameters depend on the two horizontal space
coordinates x and y, which we lump together in the vector x = (x, y).
Correspondingly, plane waves in the horizontal plane are described by wave vectors
k_⊥ = (k_x, k_y). In this case the eigenvalues of the unstable modes can be written in
the form [Ref. 1, Sect. 8.13]:

(9.5.1)

To make contact with the λ which occurs in the order parameter equations
(9.4.26), we must transform (9.5.1) by means of

(9.5.2)


[where k_c corresponds to k in (9.4.26)], and in order to take into account finite
bandwidth excitations we must introduce derivatives as in the above section.
Therefore the λ to be used in (9.4.26) reads

(9.5.3)

In the following we shall use the order parameter equations (9.4.26) in the
form

(9.5.4)

where the Kronecker symbols under the sums ensure the conservation of wave
numbers. This is the case if we do not include boundary conditions with respect
to the horizontal plane, and use plane-wave solutions for v_{k,j}(x). It is evident that
the specific form of λ (9.5.1) or λ (9.5.3) favors those |k_⊥| which are close to |k_c|.
We now introduce a new function ψ according to

ψ(x, t) = Σ_{k_c} u_{k_c}(x, t) exp(ik_c · x) . (9.5.5)

The sum runs over the critical k vectors, which all have the same absolute value
k₀ but point in different horizontal directions. We now make our basic
assumption, namely that we may put

A_{k'k''} = A and (9.5.6)

B_{k'k''k'''} = B (9.5.7)

for the k vectors, which are selected through the δ-functions and the condition
that |k_c| = k₀.
We multiply (9.5.4) by exp(ik_c · x) and sum over k_c. To elucidate the further
procedure we start with the resulting expression which stems from the last
term in (9.5.4), and therefore consider

(9.5.8)

Since

(9.5.9)
we can insert the factor

$\exp[\mathrm i\{-k_c + k_c' + k_c'' + k_c'''\}\cdot x] \equiv 1$ (9.5.10)

280 9. Spatial Patterns

into (9.5.8). Exchanging the sequence of summations in (9.5.8) we obtain

$\sum_{k_c',k_c'',k_c'''} \exp[\mathrm i\{k_c' + k_c'' + k_c'''\}\cdot x]\; u_{k_c'}(x)\,u_{k_c''}(x)\,u_{k_c'''}(x) \sum_{k_c} \delta_{k_c,\,k_c'+k_c''+k_c'''}$ (9.5.11)

and note that we may drop the last sum due to

$\sum_{k_c} \delta_{k_c,\,k_c'+k_c''+k_c'''} = 1 .$ (9.5.12)

Because of (9.5.5) the whole expression (9.5.11) then simplifies to

$(9.5.11) = \psi^3(x)$ . (9.5.13)

The term with A can be treated in an analogous fashion.


Finally, we use the transformation

(9.5.14)

Fig. 9.1a-f. Contour plots of the amplitude field ψ(x, y, t̃) at various times t̃ for (a, b, e, f)
a = 0.10 and (c, d) a = 0.90, where A = 0, B = 1. The cells in (a-d) have an aspect ratio of 16, while
the cells in (e, f) have aspect ratios of 29.2 and 19.5. The initial conditions were parallel rolls for (a, c)
and random domains of opposite sign for (b, d). The solid and dotted contours represent positive and
negative values at fixed fractions of the maximum amplitude. The contours correspond to vertical
velocity contours in optical experiments. The values of the time at which equilibrium was reached (t̃),
the Lyapunov functional (F), and the Nusselt number (N) are given. The state in (e) has not reached
equilibrium and is evolving into the equilibrium state (f). Here equilibrium is defined to occur when
d ln(F)/dt̃ is smaller than 10⁻⁸. [After H. S. Greenside, W. M. Coughran, Jr., N. L. Schryer: Phys.
Rev. Lett. 49, 726 (1982)]


Because the whole procedure yields the expression ψ̇(x) for the lhs of (9.5.4), we
find the new equation

(9.5.15)

This equation has been solved numerically for A = 0 by H. S. Greenside, W. M.
Coughran, Jr., and N. L. Schryer, and a typical result is shown in Fig. 9.1. Note
the resemblance of this result to Figs. 1.2.6, 8, except for the fact that hexagons
are now lacking. Our own results show that such hexagons can be obtained if the
term with A is included.
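The scan does not reproduce (9.5.15) itself, but order parameter equations of this kind are commonly of the Swift-Hohenberg form. The sketch below is therefore an illustration and not the equation actually solved by Greenside, Coughran and Schryer: it integrates a generic model ψ̇ = [ε − (k₀² + ∇²)²]ψ + Aψ² − Bψ³ on a periodic square with a semi-implicit pseudospectral scheme; all parameter values are illustrative.

```python
import numpy as np

def evolve(psi, steps, dt=0.05, eps=0.3, k0=1.0, A=0.0, B=1.0, length=16 * np.pi):
    """Semi-implicit pseudospectral stepper for a Swift-Hohenberg-type
    order parameter equation (an illustrative stand-in for (9.5.15)):
        psi_t = [eps - (k0^2 + Laplacian)^2] psi + A psi^2 - B psi^3 .
    The stiff linear part is treated implicitly in Fourier space, the
    nonlinearity explicitly."""
    n = psi.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)
    k2 = k[:, None] ** 2 + k[None, :] ** 2
    lin = eps - (k0 ** 2 - k2) ** 2       # Fourier symbol of eps - (k0^2 + Lap)^2
    for _ in range(steps):
        nonlin = A * psi ** 2 - B * psi ** 3
        psi_hat = (np.fft.fft2(psi) + dt * np.fft.fft2(nonlin)) / (1 - dt * lin)
        psi = np.real(np.fft.ifft2(psi_hat))
    return psi

rng = np.random.default_rng(0)
psi0 = 0.01 * rng.standard_normal((64, 64))   # random initial domains
psi = evolve(psi0, steps=800)                 # stripes with |k| near k0 emerge
```

With A ≠ 0 a quadratic term is present, so that hexagonal patterns can compete with the rolls, in line with the remark above.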

10. The Inclusion of Noise

In the introduction we pointed out that noise plays a crucial role, especially at
instability points. Here we shall outline how to incorporate noise into the approach
developed in the previous chapters. In synergetics we usually start from
equations at a mesoscopic level, disregarding the microscopic motion, for instance of
molecules or atoms. The equations of fluid dynamics may stand as an example
for many others. Here we are dealing with certain macroscopic quantities such as
densities, macroscopic velocities, etc. Similarly, in biological morphogenesis we
disregard individual processes below the cell level, for instance metabolic proces-
ses. On the other hand, these microscopic processes cannot be completely
neglected as they give rise to fluctuating driving forces in the equations for the
state variables q of the system under consideration. We shall not derive these
noise sources. This has to be done in the individual cases depending on the nature
of noise, whether it is of quantum mechanical origin, or due to thermal fluctua-
tions, or whether it is external noise, produced by the action of reservoirs to
which a system is coupled. Here we wish rather to outline the general approach to
deal with given noise sources. We shall elucidate our approach by explicit
examples.

10.1 The General Approach

Adding the appropriate fluctuating forces to the original equations we find equa-
tions of the form

$\dot q = N(q, a) + F(t)$ , (10.1.1)

which we shall call Langevin equations. If the fluctuating forces depend on the
state variables we have to use stochastic differential equations of the form

dq = N(q, a)dt + dF(t,q) , (10.1.2)

which may be treated according to the Ito or Stratonovich calculus (Chap. 4). In
the present context we wish to study the impact of fluctuating forces on the
behavior of systems close to instability points. We shall assume that F is
comparatively small, in the sense that it does not change the character of the transition
appreciably. This means that we shall concentrate on those problems in which the

instability is not induced by fluctuations but by the deterministic part N. Qualita-


tive changes may occur in the case of multiplicative noise where F depends in
certain ways on q. Since these aspects are covered by Horsthemke and Lefever
(cf. the references), we shall not enter the discussion on these problems here.
We now proceed as follows. We first disregard F or dF, assume that

$\dot q = N(q, a)$ (10.1.3)

possesses a solution for a given range of the control parameter a, and study as
before the stability of

(10.1.4)

The hypothesis

(10.1.5)

leads us to the by now well-known linearized equations

(10.1.6)

where we shall assume the solutions in the form

(10.1.7)

Again amplitudes ξ_k and phase angles φ_k may be introduced. Assuming that one
or several eigenvalues λ belonging to (10.1.7) acquire a positive real part, we
shall apply the slaving principle. In Sect. 7.6 we have shown that the slaving prin-
ciple can be applied to stochastic differential equations of the Langevin-Ito or
-Stratonovich type. According to the slaving principle we may reduce the original
set of equations (10.1.2) to a set of order parameter equations for the corre-
sponding ξ_k and φ_k. The resulting equations are again precisely of the form
(10.1.2), though, of course, the explicit form of N and dF has changed. To illus-
trate the impact of fluctuations and how to deal with it, we shall consider a
specific example first.
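A minimal sketch of how Langevin equations of the form (10.1.1) are handled numerically; the drift N and the noise matrix Q below are illustrative, and the simple Euler-Maruyama scheme shown corresponds to the Ito interpretation.

```python
import numpy as np

def euler_maruyama(N, q0, Q, dt, steps, rng):
    """Integrate  dq = N(q) dt + dF,  <F_i(t) F_j(t')> = Q_ij delta(t - t'),
    with the Euler-Maruyama scheme: per step the noise enters as a
    Gaussian increment of covariance Q * dt."""
    q = np.asarray(q0, dtype=float).copy()
    L = np.linalg.cholesky(Q)              # Q = L L^T
    for _ in range(steps):
        q = q + N(q) * dt + np.sqrt(dt) * (L @ rng.standard_normal(q.shape))
    return q

# illustrative drift: a linearly damped system, q_dot = -q + F(t)
rng = np.random.default_rng(1)
Q = 0.1 * np.eye(2)
samples = np.array([euler_maruyama(lambda q: -q, [0.0, 0.0], Q, 1e-2, 1000, rng)
                    for _ in range(300)])
var_est = samples.var()    # stationary variance; theory for this drift: Q/2 = 0.05
```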

10.2 A Simple Example

Let us study the order parameter equation

$\dot u = \lambda u - b u^3 + F(t)$ , (10.2.1)

where we assume the properties

$\langle F(t)\rangle = 0$ , (10.2.2)


$\langle F(t)\,F(t')\rangle = Q\,\delta(t - t')$ (10.2.3)

for the fluctuating forces. If we are away from the instability point λ = 0, in
general it suffices to solve (10.2.1) approximately by linearization. Because for
λ < 0, u is a small quantity close to the stationary state, we may approximate
(10.2.1) by

$\dot u = \lambda u + F(t)$ . (10.2.4)

For λ > 0 we make the replacement

$u = u_0 + \eta$ , where $\lambda - b u_0^2 = 0$ , (10.2.5)

which transforms (10.2.1) into

$\dot\eta = -2\lambda\eta + F(t)$ , (10.2.6)

where we have neglected higher powers of η. Because (10.2.6) is of the same form
as (10.2.4) and η is small, it suffices to study the solutions of (10.2.6), which can
be written in the form

$\eta = \int_0^t \exp[-2\lambda(t - \tau)]\, F(\tau)\,\mathrm d\tau .$ (10.2.7)

(We do not take care of the homogeneous solution because it drops out in the
limit performed below.) The correlation function can be easily evaluated by
means of (10.2.3) and yields in a straightforward fashion, for t ≥ t′,

$\langle\eta(t)\,\eta(t')\rangle = \int_0^t\!\int_0^{t'} \exp[-2\lambda(t-\tau) - 2\lambda(t'-\tau')]\, Q\,\delta(\tau - \tau')\,\mathrm d\tau\,\mathrm d\tau'$
$\qquad = \int_0^{t'} \exp[-2\lambda(t + t') + 4\lambda\tau']\, Q\,\mathrm d\tau'$
$\qquad = \exp[-2\lambda(t + t')]\, Q\,\frac{\mathrm e^{4\lambda t'} - 1}{4\lambda} .$ (10.2.8)

Assuming that t and t′ are big, but t − t′ remains finite, (10.2.8) reduces to the
stationary correlation function

$\langle\eta(t)\,\eta(t')\rangle = \frac{Q}{4\lambda}\,\exp(-2\lambda\,|t - t'|)$ , (10.2.9)

which is valid for t ≷ t′.
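The stationary correlation (10.2.9) is easy to check numerically. The sketch below simulates the linearized equation η̇ = −2λη + F(t) using the exact one-step update of this Ornstein-Uhlenbeck process (so no time-discretization bias enters) and compares the sampled autocorrelation with (Q/4λ)exp(−2λ|t − t′|); the parameter values are arbitrary.

```python
import numpy as np

lam, Q, dt, nsteps = 1.0, 0.8, 0.05, 400_000
rng = np.random.default_rng(2)

# exact one-step update for  eta_dot = -2*lam*eta + F,  <F F'> = Q delta(t - t'):
#   eta(t+dt) = exp(-2 lam dt) eta(t) + Gaussian of variance (Q/4lam)(1 - exp(-4 lam dt))
decay = np.exp(-2 * lam * dt)
step_std = np.sqrt(Q / (4 * lam) * (1 - decay ** 2))
eta = np.empty(nsteps)
eta[0] = 0.0
xi = rng.standard_normal(nsteps - 1)
for i in range(nsteps - 1):
    eta[i + 1] = decay * eta[i] + step_std * xi[i]

def corr(lag):
    """Time-averaged estimate of <eta(t) eta(t + lag*dt)>."""
    return np.mean(eta[: nsteps - lag] * eta[lag:])

def corr_theory(lag):
    """Eq. (10.2.9): (Q / 4 lam) exp(-2 lam |t - t'|)."""
    return Q / (4 * lam) * np.exp(-2 * lam * lag * dt)
```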


Close to the instability point λ = 0 the linearization procedure breaks down.
This can easily be seen from the result (10.2.9), because in such a case the rhs
diverges for λ → 0. This effect is well known in phase transition theory and is


called "critical fluctuations". However, in physical systems far from thermal
equilibrium and many other systems these fluctuations are limited, which,
mathematically speaking, is due to the nonlinear term −bu³ in (10.2.1). To take
care of this term the approach via the Fokker-Planck equation is most elegant.
To this end we assume that F(t) has the properties (10.2.2) and (10.2.3) and that
it is Gaussian [1]. According to Sect. 4.2 the Fokker-Planck equation for the
probability density f belonging to the Langevin equation (10.2.1) reads

$\dot f = -\frac{\partial}{\partial u}\big[(\lambda u - b u^3)\, f\big] + \frac{Q}{2}\,\frac{\partial^2 f}{\partial u^2}$ , where (10.2.10)

$\langle F(t)\,F(t')\rangle = Q\,\delta(t - t')$ . (10.2.11)

As shown in [1], the stationary solution of (10.2.10) is given by

$f_0(u) = \mathcal N \exp\!\left[\frac{2}{Q}\left(\frac{\lambda u^2}{2} - \frac{b u^4}{4}\right)\right]$ , (10.2.12)

where 𝒩 is a normalization factor. The branching of the solution of the deterministic
equation with F(t) ≡ 0 from u = 0 for λ ≤ 0 into u± = ±√(λ/b) for λ > 0
is now replaced by the change of shape of the distribution function (10.2.12), in
which the single peak for λ < 0 is replaced by two peaks for λ > 0. Some care
must be exercised in interpreting this result, because f₀ is a probability distribution.
Therefore, in reality the system may be at any point u, but with given
probability (10.2.12). Clearly, for λ > 0 the probability is a maximum for
u± = ±√(λ/b). But, of course, at a given moment the system can be in a single
state only. From this point of view an important question arises. Let us assume
that at time t = 0 we have prepared (or measured) the system in a certain initial
state u = u_i. What is the probability of finding the system at a later time t in some
other given final state u = u_f? This question can be answered by the time-dependent
solution f(u, t) of the Fokker-Planck equation with the initial condition
f(u, 0) = δ(u − u_i). Even for Fokker-Planck equations containing several variables
these solutions can be found explicitly if the drift coefficients are linear in
the variables and the diffusion coefficients constant. We shall present the results
for the one-variable Fokker-Planck equation at the end of this section and the
corresponding general theorem in Sect. 10.4.1. If, for example, the drift coefficients
are nonlinear, then even in the case of a single variable computer solutions
are necessary. We shall give an outline of the corresponding results in Sect. 10.3.
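The change of shape described above is easy to verify by quadrature, using the stationary solution as quoted in (10.2.12), f₀(u) = 𝒩 exp[(2/Q)(λu²/2 − bu⁴/4)] (this explicit exponent is a reconstruction from the surrounding text): for λ > 0 the density is double-peaked at ±√(λ/b), the mean vanishes by symmetry, and for weak noise ⟨u²⟩ is close to λ/b.

```python
import numpy as np

lam, b, Q = 1.0, 1.0, 0.1               # illustrative values, lam > 0
u = np.linspace(-3.0, 3.0, 4001)
du = u[1] - u[0]

# stationary density f0 of (10.2.12), normalized on the grid
f0 = np.exp((2.0 / Q) * (lam * u ** 2 / 2 - b * u ** 4 / 4))
f0 /= f0.sum() * du

u_peak = u[np.argmax(f0)]               # one of the two symmetric maxima
mean_u = np.sum(u * f0) * du            # vanishes by symmetry
mean_u2 = np.sum(u ** 2 * f0) * du      # close to lam/b = 1 for small Q
```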
A further important problem is the following. Let us assume that the system
was originally prepared in the state u₊. How long will it take for the system to
reach the state u₋ for the first time? This is a special case of the so-called first
passage time problem which can be formally solved by a general formula. We
shall deal with the first passage time problem in the context of the more general
Chapman-Kolmogorov equation for discrete maps in Sect. 11.6. Now, let us take
up the problem of finding the time-dependent solutions of (10.2.1) in an
approximation, namely by linearization. It suffices to treat λ > 0, because the case of
λ < 0 can be treated similarly. Using the decomposition

$u = u_0 + \eta$ (10.2.13)

and neglecting nonlinear terms, we transform (10.2.10) into

$\dot{\bar f} = -\frac{\partial}{\partial\eta}\big(-2\lambda\eta\,\bar f\big) + \frac{Q}{2}\,\frac{\partial^2\bar f}{\partial\eta^2}$ (10.2.14)

with $\bar f = \bar f(\eta) = f(u_0 + \eta)$.


Taking as initial condition

$\bar f(t = 0) = \delta(\eta - \eta_0)$ , (10.2.15)

the solution reads [1]

(10.2.16)

where a and b are given by

$a(t) = \frac{Q}{2\lambda}\big[1 - \exp(-4\lambda t)\big]$ and (10.2.17)

$b(t) = b(0)\,\exp(-2\lambda t)$ . (10.2.18)
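Written out, (10.2.16) is a Gaussian whose mean decays according to (10.2.18) and whose variance grows toward the stationary value. The sketch below verifies pointwise, with all derivatives written out by hand, that a Gaussian with mean b(t) = b(0)e^(−2λt) and variance (Q/4λ)[1 − exp(−4λt)] solves the linearized Fokker-Planck equation (10.2.14). Note that this is the variance ⟨η²⟩ − ⟨η⟩², which may differ by a constant factor from the book's a(t), depending on how the exponent in (10.2.16) is normalized.

```python
import numpy as np

lam, Q, b0 = 0.7, 0.4, 1.3     # illustrative parameters

def fp_residual(eta, t):
    """Residual of  f_t = (d/d eta)(2 lam eta f) + (Q/2) f_etaeta  for the
    Gaussian with mean b(t) = b0 exp(-2 lam t) and variance
    s(t) = (Q / 4 lam)(1 - exp(-4 lam t)); should vanish identically."""
    b = b0 * np.exp(-2 * lam * t)
    s = Q / (4 * lam) * (1 - np.exp(-4 * lam * t))
    db = -2 * lam * b              # db/dt
    ds = Q - 4 * lam * s           # ds/dt = Q exp(-4 lam t)
    x = eta - b
    f = np.exp(-x ** 2 / (2 * s)) / np.sqrt(2 * np.pi * s)
    lhs = f * (x * db / s + ds * (x ** 2 / (2 * s ** 2) - 1 / (2 * s)))
    rhs = f * (2 * lam - 2 * lam * eta * x / s + (Q / 2) * (x ** 2 / s ** 2 - 1 / s))
    return lhs - rhs

eta = np.linspace(-3.0, 3.0, 101)
res = max(np.max(np.abs(fp_residual(eta, t))) for t in (0.1, 0.5, 2.0))
```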

10.3 Computer Solution of a Fokker-Planck Equation for a Complex Order Parameter

In this section we study the branching from a time-independent solution when a
complex eigenvalue acquires a positive real part [cf. Sect. 8.4, in particular
(8.4.9)]. Making the substitution

$u(t) \to u(t)\,\mathrm e^{\mathrm i\omega t}$ , $\omega = \lambda''$ , (10.3.1)

we assume the order parameter equation in the form

(10.3.2)

where F(t) fulfills (10.2.2, 3). Decomposing u according to

$u = r\,\mathrm e^{-\mathrm i\varphi}$ , (10.3.3)


we may derive the following Fokker-Planck equation

$\frac{\partial W}{\partial t} + \beta\,\frac{1}{r}\,\frac{\partial}{\partial r}\big[(\bar n - r^2)\,r^2\,W\big] = \frac{Q}{2}\left[\frac{1}{r}\,\frac{\partial}{\partial r}\left(r\,\frac{\partial W}{\partial r}\right) + \frac{1}{r^2}\,\frac{\partial^2 W}{\partial\varphi^2}\right] .$ (10.3.4)

[This can best be done by writing down the Fokker-Planck equation for the real
and imaginary parts of u and transforming that equation to polar coordinates
according to (10.3.3).]
In order to get rid of superfluous constants we introduce new variables or
constants by means of

(10.3.5)

The Fokker-Planck equation (10.3.4) then takes the form

(10.3.6)

The stationary solution is given by (𝒩: normalization constant)

$\tilde W(\tilde r) = \frac{\mathcal N}{2\pi}\,\exp\!\left(-\frac{\tilde r^4}{4} + a\,\frac{\tilde r^2}{2}\right)$ , $\quad \frac{1}{\mathcal N} = \int_0^\infty \tilde r\,\exp\!\left(-\frac{\tilde r^4}{4} + a\,\frac{\tilde r^2}{2}\right)\mathrm d\tilde r$ . (10.3.7)

In order to obtain correlation functions, e.g., of the type (10.2.8), the nonstationary
solutions of the Fokker-Planck equation must be used. Since general analytical
expressions are not known, either approximation methods (like a variational
method) or painstaking computer calculations were performed, the latter
yielding results of great accuracy, of which the relevant ones are represented here.
First (10.3.6) is reduced to a one-dimensional Schrödinger equation. The
hypothesis

$\tilde W(\tilde r, \varphi, \tilde t) = \sum_{m=0}^{\infty}\,\sum_{n=-\infty}^{\infty} A_{nm}\left[\frac{1}{\sqrt{\tilde r}}\,\exp\!\left(-\frac{\tilde r^4}{8} + a\,\frac{\tilde r^2}{4}\right)\psi_{nm}(\tilde r)\right]\exp(\mathrm in\varphi - \Lambda_{nm}\tilde t)$ (10.3.8)

leads to

with (10.3.9)

for the eigenfunctions ψ_{nm} = ψ_{−n,m} and the eigenvalues Λ_{nm} = Λ_{−n,m}. For explicit
values of the Λ's see Table 10.3.1.


Table 10.3.1. Eigenvalues and moments for different pump parameters a

a     Λ01       M1       Λ02       M2       Λ03       M3       Λ04       M4
10    19.1142   0.4614   19.1237   0.4885   34.5184   0.0212   35.3947   0.0226
8     14.6507   0.4423   14.9666   0.4622   23.6664   0.0492   28.3894   0.0344
7     12.0787   0.4085   13.0891   0.4569   20.0129   0.0895   26.4382   0.0354
6     9.4499    0.4061   11.5823   0.4459   18.0587   0.1132   25.6136   0.0287
5     7.2368    0.4717   10.6059   0.4095   17.3876   0.0980   25.8079   0.0179
4     5.6976    0.5925   10.2361   0.3344   17.6572   0.0634   26.9004   0.0086
3     4.8564    0.7246   10.4763   0.2387   18.6918   0.0330   28.7914   0.0033
2     4.6358    0.8284   11.2857   0.1553   20.3871   0.0151   31.3963   0.0011
0     5.6266    0.9370   14.3628   0.0601   25.4522   0.0028   38.4621   0.0001

The potential V_n(r̃) is given by (Fig. 10.3.1)

$V_n(\tilde r) = \frac{n^2}{\tilde r^2} + \frac{\ddot\psi_{00}}{\psi_{00}} = \left(n^2 - \frac14\right)\frac{1}{\tilde r^2} + a + \left(\frac{a^2}{4} - 2\right)\tilde r^2 - \frac{a}{2}\,\tilde r^4 + \frac{\tilde r^6}{4}$ . (10.3.10)

According to (10.3.7, 8) the eigenfunction ψ₀₀ belonging to the stationary eigenvalue
Λ₀₀ = 0 is

$\psi_{00}(\tilde r) = \sqrt{\mathcal N}\,\tilde r^{1/2}\,\exp\!\left(-\frac{\tilde r^4}{8} + a\,\frac{\tilde r^2}{4}\right)$ . (10.3.11)

The first five eigenfunctions, which are determined numerically, are plotted in
Fig. 10.3.2. For a plot of eigenvalues consult Fig. 10.3.3.
It follows from (10.3.9) that the ψ_{nm}'s are orthogonal for different m. If they
are normalized to unity, we have

$\int_0^\infty \psi_{nm}(\tilde r)\,\psi_{nm'}(\tilde r)\,\mathrm d\tilde r = \delta_{mm'}$ . (10.3.12)

Fig. 10.3.1. The potential V₀(r̃) of the Schrödinger equation (10.3.9) for three pump parameters (solid line) and V₁(r̃) for a = 10 (broken line). [After H. Risken, H. D. Vollmer: Z. Physik 201, 323 (1967)]


Fig. 10.3.2. The potential V₀ of the Schrödinger equation (10.3.9) and the first five eigenvalues and eigenfunctions for the pump parameter a = 10. [After H. Risken, H. D. Vollmer: Z. Phys. 201, 323 (1967)]

The completeness relation

$\delta(\tilde r - \tilde r') = \sum_{m=0}^{\infty} \psi_{nm}(\tilde r)\,\psi_{nm}(\tilde r')$ (10.3.13)

leads immediately to the Green's function of the Fokker-Planck equation. It is
obtained from the general solution (10.3.8) by putting

$A_{nm} = \frac{1}{2\pi\sqrt{\tilde r'}}\,\exp\!\left(\frac{\tilde r'^4}{8} - a\,\frac{\tilde r'^2}{4}\right)\psi_{nm}(\tilde r')\,\exp(-\mathrm in\varphi')$ . (10.3.14)

Thus the Green's function reads

$G(\tilde r, \varphi; \tilde r', \varphi'; \tilde t) = \frac{1}{2\pi\sqrt{\tilde r \tilde r'}}\,\exp\!\left(-\frac{\tilde r^4}{8} + a\,\frac{\tilde r^2}{4} + \frac{\tilde r'^4}{8} - a\,\frac{\tilde r'^2}{4}\right) \sum_{m=0}^{\infty}\sum_{n=-\infty}^{\infty} \psi_{nm}(\tilde r)\,\psi_{nm}(\tilde r')\,\exp[\mathrm in(\varphi - \varphi') - \Lambda_{nm}\tilde t]$ . (10.3.15)
Fig. 10.3.3. The first four nonzero eigenvalues Λ₀m and the effective eigenvalue Λ_eff (10.3.24) as functions of the pump parameter a. [After H. Risken, H. D. Vollmer: Z. Phys. 201, 323 (1967)]


For the calculation of stationary two-time correlation functions the joint distribution
function F(r̃, φ; r̃′, φ′; t̃) is needed. F(r̃, φ; r̃′, φ′; t̃) r̃ dr̃ dφ r̃′ dr̃′ dφ′ is
the probability that r̃(t + t̃), φ(t + t̃) lie in the interval r̃, …, r̃ + dr̃; φ, …, φ + dφ
and that r̃′(t), φ′(t) lie in the interval r̃′, …, r̃′ + dr̃′; φ′, …, φ′ + dφ′.
Moreover, F can be expressed by means of the Green's function G(r̃, φ; r̃′, φ′; t̃)
and of W(r̃′, φ′), the latter describing the distribution at the initial time t,

$F(\tilde r, \varphi; \tilde r', \varphi'; \tilde t) = G(\tilde r, \varphi; \tilde r', \varphi'; \tilde t)\, W(\tilde r', \varphi')$ , t̃ > 0. (10.3.16)
The correlation function of the intensity fluctuations is obtained by

$K(a, \tilde t) = \big\langle\big(\tilde r^2(t + \tilde t) - \langle\tilde r^2\rangle\big)\big(\tilde r^2(t) - \langle\tilde r^2\rangle\big)\big\rangle = \iiiint \tilde r\,\mathrm d\tilde r\,\tilde r'\,\mathrm d\tilde r'\,\mathrm d\varphi\,\mathrm d\varphi'\;\big(\tilde r^2 - \langle\tilde r^2\rangle\big)\big(\tilde r'^2 - \langle\tilde r'^2\rangle\big)\,F(\tilde r, \varphi; \tilde r', \varphi'; \tilde t) = K(a, 0)\sum_{m=1}^{\infty} M_m \exp(-\Lambda_{0m}\tilde t)$ , (10.3.17)

where

(10.3.18)

For a plot of the first four matrix elements M_m see Fig. 10.3.4, and for explicit
values, Table 10.3.1. Here K(a, 0) is given by

$K(a, 0) = \langle\tilde r^4\rangle - \langle\tilde r^2\rangle^2 = 2\pi\int_0^\infty \tilde r^5\,\tilde W(\tilde r)\,\mathrm d\tilde r - \left[2\pi\int_0^\infty \tilde r^3\,\tilde W(\tilde r)\,\mathrm d\tilde r\right]^2$ (10.3.19)

where W̃(r̃) is defined in (10.3.7).


Both 𝒩 and K(a, 0) may be reduced to the error integral

$\Phi(y) = \frac{2}{\sqrt\pi}\int_0^y \exp(-x^2)\,\mathrm dx$ . (10.3.20)

Introducing the new variable v = r̃² we define

Fig. 10.3.4. The first four matrix elements M_m as functions of the pump parameter a. [After H. Risken, H. D. Vollmer: Z. Phys. 201, 323 (1967)]


$I_n(a) = \int_0^\infty v^n \exp\!\left(-\frac{v^2}{4} + a\,\frac{v}{2}\right)\mathrm dv$ . (10.3.21)

Then the following recurrence relations hold:

$I_0(a) = \sqrt\pi\,\exp(a^2/4)\,[1 + \Phi(a/2)]$ ,
$I_1(a) = 2 + a\,I_0(a)$ , (10.3.22)
$I_n(a) = 2(n-1)\,I_{n-2}(a) + a\,I_{n-1}(a)$ for n ≥ 2.
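The definition of I_n is barely legible in this scan; reading (10.3.21) as I_n(a) = ∫₀^∞ vⁿ exp(−v²/4 + av/2) dv with v = r̃² (a reconstruction), the relations (10.3.22) follow from integration by parts, and a direct quadrature confirms them:

```python
import numpy as np
from math import erf, exp, pi, sqrt

a = 1.5
v = np.linspace(0.0, 60.0, 600_001)   # integrand is negligible beyond v = 60
w = np.exp(-v ** 2 / 4 + a * v / 2)
dv = v[1] - v[0]

def I(n):
    """I_n(a) by trapezoidal quadrature of v^n exp(-v^2/4 + a v/2)."""
    y = v ** n * w
    return dv * (y.sum() - 0.5 * (y[0] + y[-1]))

# closed form for I_0 from (10.3.22); the error integral Phi of (10.3.20) is erf
I0_closed = sqrt(pi) * exp(a ** 2 / 4) * (1 + erf(a / 2))
```

With these integrals the moments of (10.3.7) reduce to ratios of the I_n; for instance ⟨r̃²⟩ = I₁(a)/I₀(a) under the reconstruction above.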

The spectrum S(a, ω) of the intensity fluctuations is given by the Fourier transform
of the correlation function (10.3.17)

$S(a, \omega) = K(a, 0)\sum_{m=1}^{\infty} M_m\,\frac{2\Lambda_{0m}}{\Lambda_{0m}^2 + \omega^2}$ . (10.3.23)

Although it is a sum of Lorentzian lines of widths Λ₀m, S(a, ω) may be well
approximated by an "effective" Lorentzian curve

with $\frac{1}{\Lambda_{\mathrm{eff}}} = \sum_{m=1}^{\infty} \frac{M_m}{\Lambda_{0m}}$ (10.3.24)

which has the same area and the same maximum value (Fig. 10.3.5).
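Formula (10.3.24) can be checked directly against the a = 5 row of Table 10.3.1 (truncating the sum at m = 4):

```python
# a = 5 row of Table 10.3.1
lam0 = [7.2368, 10.6059, 17.3876, 25.8079]   # Lambda_0m, m = 1..4
M = [0.4717, 0.4095, 0.0980, 0.0179]         # matrix elements M_m

lam_eff = 1.0 / sum(m / l for m, l in zip(M, lam0))   # eq. (10.3.24)
ratio = lam_eff / lam0[0]   # ~1.25: Lambda_eff exceeds Lambda_01 by about 25%
```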

Fig. 10.3.5. A comparison between the exact noise spectrum and the effective Lorentzian line for a = 5. [After H. Risken, H. D. Vollmer: Z. Phys. 201, 323 (1967)]

This effective width Λ_eff is, however, about 25% larger than Λ₀₁ for a ≈ 5. The
eigenvalues and matrix elements were calculated numerically, in particular for the
threshold region −10 ≤ a ≤ 10. Similar calculations for the correlation function
of the amplitude yield

$g(a, \tilde t) = \big\langle\tilde r(t + \tilde t)\,\exp[\mathrm i\varphi(t + \tilde t)]\;\tilde r(t)\,\exp[-\mathrm i\varphi(t)]\big\rangle = \iiiint \tilde r\,\mathrm d\tilde r\,\tilde r'\,\mathrm d\tilde r'\,\mathrm d\varphi\,\mathrm d\varphi'\;\tilde r\,\tilde r'\,\exp(\mathrm i\varphi - \mathrm i\varphi')\,F(\tilde r, \varphi; \tilde r', \varphi'; \tilde t) = g(a, 0)\sum_{m=0}^{\infty} V_m \exp(-\Lambda_{1m}\tilde t)$ , where (10.3.25)


(10.3.26)

Further, g(a, 0) is given by

$g(a, 0) = \langle\tilde r^2\rangle = 2\pi\int_0^\infty \tilde r^3\,\tilde W(\tilde r)\,\mathrm d\tilde r$ (10.3.27)

and can be reduced to the error integral by the same substitution leading to
(10.3.21). A calculation of V₀ shows that

$1 - V_0 = \sum_{m=1}^{\infty} V_m$

is of the order of 2% near threshold and smaller farther away from threshold.
Therefore the spectral profile is nearly Lorentzian with a linewidth (in unnormalized
units)

(10.3.28)

The factor α is plotted in Fig. 10.3.6.

Fig. 10.3.6. The linewidth factor α(a) = Λ₁₀ as a function of the pump parameter a. [After H. Risken: Z. Phys. 191, 302 (1966)]

Transient Solution. The general transient solution of (10.3.6) is given by

$W(\tilde r, \varphi, \tilde t) = \int_0^\infty\!\int_0^{2\pi} G(\tilde r, \varphi; \tilde r', \varphi'; \tilde t)\,W(\tilde r', \varphi', 0)\,\tilde r'\,\mathrm d\tilde r'\,\mathrm d\varphi'$ , (10.3.29)

where G is the Green's function (10.3.15) and W(r̃′, φ′, 0) is the initial distribution.
After this specific example we now turn to the presentation of several useful
general theorems.


10.4 Some Useful General Theorems on the Solutions of Fokker-Planck Equations

10.4.1 Time-Dependent and Time-Independent Solutions of the Fokker-Planck Equation, if the Drift Coefficients are Linear in the Coordinates and the Diffusion Coefficients Constant
In certain classes of applications we may assume that the drift coefficients can be
linearized around certain stable values of the coordinates and that the diffusion
coefficients are independent of the coordinates. If we denote the elongation from
the equilibrium positions by q_j, the corresponding Fokker-Planck equation reads

$\frac{\partial f}{\partial t} + \sum_{ij} c_{ij}\,\frac{\partial}{\partial q_i}(q_j f) = \frac{1}{2}\sum_{ij} Q_{ij}\,\frac{\partial^2 f}{\partial q_i\,\partial q_j}$ . (10.4.1)

We abbreviate q₁, …, q_N by q. The Green's function of (10.4.1) must fulfill
the initial condition

$G(q, q', 0) = \prod_j \delta(q_j - q_j')$ . (10.4.2)

The solution of (10.4.1) with (10.4.2) reads explicitly

$G(q, q', t) = \big[\pi^N \det\{a(t)\}\big]^{-1/2} \exp\Big[-\sum_{ij} (a^{-1})_{ij}\Big(q_i - \sum_k b_{ik}(t)\,q_k'\Big)\Big(q_j - \sum_l b_{jl}(t)\,q_l'\Big)\Big]$ , (10.4.3)

where

$a = (a_{ij})$ , $\quad a_{ij}(t) = \sum_{sr}\big[\delta_{is}\delta_{jr} - b_{is}(t)\,b_{jr}(t)\big]\,a_{sr}(\infty)$ . (10.4.4)

The functions b_{is} which occur in (10.4.3, 4) obey the equations

$\dot b_{is} = \sum_j c_{ij}\,b_{js}$ , (10.4.5)

with the initial conditions

$b_{is}(0) = \delta_{is}$ . (10.4.6)

Here a(∞) is determined by

$C\,a(\infty) + a(\infty)\,C^{\mathrm T} = -2Q$ , (10.4.7)

where we have used the abbreviations

$C = (c_{ij})$ , $\quad Q = (Q_{ij})$ , (10.4.8)


and the superscript T denotes the transposed matrix. In particular the stationary
solution reads

$f(q) = G(q, q', \infty) = \big[\pi^N \det\{a(\infty)\}\big]^{-1/2} \exp\Big[-\sum_{ij} (a^{-1})_{ij}(\infty)\,q_i q_j\Big]$ . (10.4.9)
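For a concrete sanity check of (10.4.3-9), one may read (10.4.1) as the Langevin system q̇ = Cq + F with ⟨F_i(t)F_j(t′)⟩ = Q_ij δ(t − t′); this identification of signs is an assumption, chosen to be consistent with (10.4.5, 7). Then a(∞) from (10.4.7) is twice the stationary covariance matrix, which a simulation confirms; the matrices below are illustrative.

```python
import numpy as np

C = np.array([[-1.0, 2.0],
              [-2.0, -1.0]])   # stable drift matrix (c_ij)
Q = np.array([[0.5, 0.1],
              [0.1, 0.3]])     # noise intensities (Q_ij)

# solve  C a + a C^T = -2 Q  (eq. 10.4.7) by vectorizing the Lyapunov equation
n = C.shape[0]
lhs = np.kron(np.eye(n), C) + np.kron(C, np.eye(n))
a_inf = np.linalg.solve(lhs, (-2.0 * Q).reshape(-1)).reshape(n, n)

# Monte Carlo check: stationary covariance of  q_dot = C q + F  equals a_inf / 2
rng = np.random.default_rng(3)
Lq = np.linalg.cholesky(Q)
dt, nsteps, nchains = 0.01, 3000, 2000
q = np.zeros((nchains, n))
for _ in range(nsteps):
    q = q + q @ C.T * dt + rng.standard_normal(q.shape) @ Lq.T * np.sqrt(dt)
cov_sim = np.cov(q.T)
```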

10.4.2 Exact Stationary Solution of the Fokker-Planck Equation for Systems in Detailed Balance
In this section we mainly demonstrate two things.
1) We derive sufficient and necessary conditions for the drift and diffusion
coefficients of the Fokker-Planck equation so that the "principle of detailed
balance" is fulfilled.
2) We show that under the condition of detailed balance the stationary solution
of the Fokker-Planck equation may be found explicitly by quadratures.
While the principle of detailed balance is expected to hold for practically all
systems in thermal equilibrium, this need not be so in systems far from thermal
equilibrium. Thus each individual case requires a detailed discussion (e.g., by
symmetry considerations) as to whether this principle is applicable. Also, inspection
of the structure of the Fokker-Planck equation will enable us to decide
whether detailed balance is present.

a) Detailed Balance
We denote the set of variables q₁, …, q_N by q and the set of the variables under
time reversal by

(10.4.10)

where ε_i = −1 (+1) depending on whether the coordinate q_i changes sign (does
not change sign) under time reversal. Furthermore, λ stands for a set of
externally determined parameters. The time-reversed quantity is denoted by

(10.4.11)

where ν_i = −1 (+1) depends on the inversion symmetry of the external parameters
under time reversal. We denote the joint probability of finding the system
at t₁ with coordinates q and at t₂ with coordinates q′ by

(10.4.12)

In the following we consider a stationary system, so that the joint probability
depends only on the time difference t₂ − t₁ = τ. Thus (10.4.12) may be written as

$f_2(q', q; t_2, t_1) = W(q', q; \tau)$ . (10.4.13)


We now formulate the principle of detailed balance. The following two definitions
are available.
1) The principle of detailed balance (first version)

$W(q', q; \tau, \lambda) = W(\tilde q, \tilde q'; \tau, \tilde\lambda)$ . (10.4.14)

The joint probability may be expressed by the stationary distribution f(q)
multiplied by the conditional probability P, where stationarity is exhibited by writing

$P = P(q' \mid q; \tau, \lambda)$ . (10.4.15)

Therefore, we may reformulate (10.4.14) as follows:

$P(q' \mid q; \tau, \lambda)\,f(q, \lambda) = P(\tilde q \mid \tilde q'; \tau, \tilde\lambda)\,f(\tilde q', \tilde\lambda)$ . (10.4.16)

Here and in the following we assume that the Fokker-Planck equation possesses
a unique stationary solution. One may then show directly that

$f(q, \lambda) = f(\tilde q, \tilde\lambda)$ (10.4.17)

holds. We define the transition probability per second by

$w(q', q; \lambda) = \big[(\mathrm d/\mathrm d\tau)\,P(q' \mid q; \tau, \lambda)\big]_{\tau=0}$ . (10.4.18)

Taking the derivative with respect to τ on both sides of (10.4.16) and putting
τ = 0 (but q ≠ q′), we obtain
2) the principle of detailed balance (second version)

$w(q', q; \lambda)\,f(q, \lambda) = w(\tilde q, \tilde q'; \tilde\lambda)\,f(\tilde q', \tilde\lambda)$ . (10.4.19)

It has obviously a very simple meaning. The left-hand side describes the total
transition rate out of the state q into a new state q′. The principle of detailed
balance then requires that this transition rate is equal to the rate in the reverse
direction for q̃ and q̃′ with reverse motion, e.g., with reverse momenta.

b) The Required Structure of the Fokker-Planck Equation and Its Stationary Solution
We now derive necessary and sufficient conditions on the form of a Fokker-Planck
equation so that the principle of detailed balance in its second (and first)
version is satisfied. Using the conditional probability P (which is nothing but the
Green's function) we write the Fokker-Planck equation (or generalized Fokker-Planck
equation having infinitely many derivatives) in the form of the equation

$\frac{\mathrm d}{\mathrm d\tau}\,P(q' \mid q; \tau, \lambda) = L(q', \lambda)\,P(q' \mid q; \tau, \lambda)$ . (10.4.20)


Note that, if not otherwise stated, L may also be an integral operator. The
solution of (10.4.20) is subject to the initial condition

$P(q' \mid q; 0, \lambda) = \delta(q' - q)$ . (10.4.21)

The formal solution of (10.4.20) with (10.4.21) reads

$P(q' \mid q; \tau, \lambda) = \exp[L(q', \lambda)\,\tau]\,\delta(q' - q)$ . (10.4.22)

Putting (10.4.22) into (10.4.18) and taking τ = 0 on both sides, (10.4.18) acquires
the form

$w(q', q; \lambda) = L(q', \lambda)\,\delta(q' - q)$ . (10.4.23)

The backward equation (backward Kolmogorov equation) is defined by

$(\mathrm d/\mathrm d\tau)\,P(q' \mid q; \tau, \lambda) = L^+(q, \lambda)\,P(q' \mid q; \tau, \lambda)$ , (10.4.24)

where L⁺ is the operator adjoint to L. Again specializing (10.4.24) to τ = 0 we
obtain

$w(q', q; \lambda) = L^+(q, \lambda)\,\delta(q' - q)$ . (10.4.25)

Proceeding in (10.4.25) to time-inverted coordinates and then inserting this and
(10.4.23) into (10.4.19), we obtain

$L(q', \lambda)\,\delta(q' - q)\,f(q, \lambda) = \big\{L^+(\tilde q', \tilde\lambda)\,\delta(\tilde q' - \tilde q)\big\}\,f(q', \lambda)$ , (10.4.26)

where we have used (10.4.17).


We now demonstrate how one may derive an operator identity to be fulfilled
by L, L + which is a consequence of (10.4.19). On the lhs of (10.4.26) we replace q
by q' in f. On the rhs we make the replacement

o(ij'-ij) = J(q'-q). (10.4.27)

With these substitutions and bringing rhs to lhs, (10.4.26) acquires the form

L(q', l)f(q', l)J(q' - q) - f(q', l)L + (g', l)J(q' -q) = O. (10.4.28)

Because the δ-function is an arbitrary function if we let q accept all values,
(10.4.28) is equivalent to the following operator equation

$L(q', \lambda)\,f(q', \lambda) - f(q', \lambda)\,L^+(\tilde q', \tilde\lambda) = 0$ . (10.4.29)

In (10.4.29), L acts in the usual sense of an operator well known from operators
in quantum mechanics, so that Lf is to be interpreted as L(f · …), the points
indicating an arbitrary function. So far we have seen that the condition of detailed
balance has the consequence (10.4.29).
We now demonstrate that if (10.4.29) is fulfilled, the system even has the
property of the first-version principle of detailed balance (which appears to be
stronger). First we note that (10.4.29) may be iterated, yielding

$[L(q', \lambda)]^n\,f(q', \lambda) = f(q', \lambda)\,[L^+(\tilde q', \tilde\lambda)]^n$ . (10.4.30)

We multiply (10.4.30) by τⁿ(1/n!) and sum over n from n = 0 to n = ∞. Now
making all steps which have led from (10.4.26) to (10.4.29) in the reverse direction,
and using (10.4.22) and its analogous form for L⁺, we obtain (10.4.16) and thus
(10.4.14). We now exploit (10.4.29) to determine the explicit form of the Fokker-Planck
equation if the system fulfills the condition of detailed balance. Because
(10.4.29) is an operator identity, each coefficient of all derivatives with respect to
q_i must vanish. Though in principle the comparison of coefficients is possible for
arbitrarily high derivatives, we confine ourselves to the usual Fokker-Planck
equation with an operator L of the form

$L(q, \lambda) = -\sum_i \frac{\partial}{\partial q_i}\,K_i(q, \lambda) + \frac{1}{2}\sum_{ik} \frac{\partial^2}{\partial q_i\,\partial q_k}\,K_{ik}(q, \lambda)$ (10.4.31)

and its adjoint

$L^+(q, \lambda) = \sum_i K_i(q, \lambda)\,\frac{\partial}{\partial q_i} + \frac{1}{2}\sum_{ik} K_{ik}(q, \lambda)\,\frac{\partial^2}{\partial q_i\,\partial q_k}$ . (10.4.32)

We may always assume that the diffusion coefficients are symmetric,

$K_{ik} = K_{ki}$ . (10.4.33)

It is convenient to define the following new coefficients:

a) the irreversible drift coefficients

$D_i(q, \lambda) = \tfrac12\big[K_i(q, \lambda) + \varepsilon_i\,K_i(\tilde q, \tilde\lambda)\big]$ ; (10.4.34)

b) the reversible drift coefficients

$J_i(q, \lambda) = \tfrac12\big[K_i(q, \lambda) - \varepsilon_i\,K_i(\tilde q, \tilde\lambda)\big]$ . (10.4.35)

For applications it is important to note that J_i transforms as q̇_i under time
reversal. We then explicitly obtain the necessary and sufficient conditions on
K_{ik}, D_i and J_i so that the principle of detailed balance holds. We write the
stationary solution of the Fokker-Planck equation in the form

$f(q, \lambda) = \mathcal N\,\mathrm e^{-\Phi(q, \lambda)}$ , (10.4.36)


where 𝒩 is the normalization constant and Φ may be interpreted as a generalized
thermodynamic potential. The conditions read

(10.4.37)

(10.4.38)

(10.4.39)

If the diffusion matrix K_{ik} possesses an inverse, (10.4.38) may be solved with
respect to the gradient of Φ,

(10.4.40)

This shows that (10.4.40) implies the integrability condition

(10.4.41)

which is a condition on the drift and diffusion coefficients as defined by the rhs of
(10.4.40). Substituting A_i, A_j by (10.4.40), the condition (10.4.39) acquires the
form

(10.4.42)

Thus the conditions for detailed balance to hold are finally given by (10.4.37, 41,
42). Equation (10.4.38), or equivalently (10.4.40), then allows us to determine Φ
by pure quadratures, i.e., by a line integral. Thus the stationary solution of the
Fokker-Planck equation may be determined explicitly.

10.4.3 An Example
Let us consider the following Langevin equations:

$\dot q_1 = -a q_1 + \omega q_2 + F_1(t)$ , (10.4.43)

$\dot q_2 = -a q_2 - \omega q_1 + F_2(t)$ , (10.4.44)

where the fluctuating forces F_j give rise to the diffusion coefficients

$Q_{ik} = \delta_{ik}\,Q$ . (10.4.45)


In order to find suitable transformation properties of q₁, q₂ with respect to time
reversal, we establish a connection between (10.4.43, 44) and the equations of the
harmonic oscillator, i.e.,

$\dot x = \omega p$ , (10.4.46)

$\dot p = -\omega x$ , (10.4.47)

where we have chosen an appropriate scaling of the momentum p and the coordinate
x. As is well known from mechanics, these variables transform under time
reversal as

$\tilde x = x$ , (10.4.48)

$\tilde p = -p$ . (10.4.49)

A comparison between (10.4.43, 44) and (10.4.48, 49) suggests the identification
(at least in the special case a = 0)

$x = q_1$ , (10.4.50)

$p = q_2$ . (10.4.51)

Retaining this identification also for a ≠ 0, we are led to postulate the following
properties of q₁, q₂,

$\tilde q_1 = q_1$ , (10.4.52)

$\tilde q_2 = -q_2$ , (10.4.53)

so that ε₁ = +1, ε₂ = −1.
Using the definition (10.4.34) of the irreversible drift coefficient D₁ we obtain

$D_1 = \tfrac12\big[(-a q_1 + \omega q_2) + (-a q_1 - \omega q_2)\big]$ , (10.4.54)

so that

$D_1 = -a q_1$ . (10.4.55)

In an analogous fashion we obtain

$D_2 = \tfrac12\big[(-a q_2 - \omega q_1) - (a q_2 - \omega q_1)\big]$ , (10.4.56)

so that

$D_2 = -a q_2$ . (10.4.57)


Let us now calculate the reversible drift coefficients by means of (10.4.35). We
obtain

$J_1 = \tfrac12\big[-a q_1 + \omega q_2 - (-a q_1 - \omega q_2)\big] = +\omega q_2$ (10.4.58)

and similarly

$J_2 = \tfrac12\big[-a q_2 - \omega q_1 + (a q_2 - \omega q_1)\big] = -\omega q_1$ . (10.4.59)

Inserting (10.4.55, 57), respectively, into (10.4.40) we obtain

$A_i = -\frac{2a}{Q}\,q_i$ , i = 1, 2. (10.4.60)

Using (10.4.55, 57-59), we readily verify (10.4.42). Clearly, (10.4.60) can be
derived from the potential function

$\Phi = \frac{a}{Q}\,(q_1^2 + q_2^2)$ , (10.4.61)

so that we have constructed the stationary solution of the Fokker-Planck equation
belonging to (10.4.43, 44). (For an exercise see the end of Sect. 10.4.4.)
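The result can be confirmed by direct simulation: with the potential Φ = (a/Q)(q₁² + q₂²) (the explicit form of (10.4.61) is a reconstruction consistent with (10.4.60)), the stationary density is an isotropic Gaussian with ⟨q₁²⟩ = ⟨q₂²⟩ = Q/2a, independent of the rotation frequency ω. The parameter values below are illustrative.

```python
import numpy as np

a, w, Qc = 1.0, 3.0, 0.4     # damping, rotation frequency, noise strength
rng = np.random.default_rng(4)
C = np.array([[-a, w],
              [-w, -a]])     # drift matrix of (10.4.43, 44)

dt, nsteps, nchains = 0.005, 4000, 3000
q = np.zeros((nchains, 2))
for _ in range(nsteps):      # Euler-Maruyama; noise covariance Q*dt per step
    q = q + q @ C.T * dt + rng.standard_normal(q.shape) * np.sqrt(Qc * dt)
cov = np.cov(q.T)            # expected: (Q / 2a) * identity = 0.2 * identity
```

Because detailed balance holds here, the same density also follows from the quadrature route of Sect. 10.4.2, without solving the dynamics at all.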

10.4.4 Useful Special Cases

We mention two special cases which have turned out to be extremely useful for
applications.
1) J_i = 0 yields the so-called potential conditions, in which case (10.4.37, 39)
are fulfilled identically so that only (10.4.40, 41) remain to be satisfied.
2) In many practical applications one deals with complex variables (instead
of real ones), and the Fokker-Planck equation has the following explicit form

(10.4.62)

with

(10.4.63)

Then the above conditions reduce to the following ones: C_j, C̃_j must have the
form

$C_j = \partial B/\partial u_j^* + f_j^{(1)}$ , (10.4.64)


(10.4.65)

and the following conditions must be satisfied:

(10.4.66)

$\sum_j \left(\frac{\partial f_j^{(1)}}{\partial u_j} + \frac{\partial f_j^{(2)}}{\partial u_j^*}\right) = 0$ . (10.4.67)
As a result the stationary solution of (10.4.62) reads

$f = \mathcal N\,\mathrm e^{-\Phi}$ , where (10.4.68)

$\Phi = 2B/Q$ . (10.4.69)

Exercise. Determine the stationary solution of the Fokker-Planck equation be-
longing to the following coupled Langevin equations:

q̇_1 = − aq_1 + ωq_2 − βq_1(q_1² + q_2²) + F_1(t) , (10.4.70)

q̇_2 = − aq_2 − ωq_1 − βq_2(q_1² + q_2²) + F_2(t) , (10.4.71)

where Q_ik = δ_ik Q is independent of q_1, q_2.
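Because the nonlinear term −βq_i(q_1² + q_2²) is purely irreversible (it is even under the time reversal postulated above), the construction of this section carries over, and the candidate stationary solution is f ∝ exp(−Φ) with Φ = [a(q_1² + q_2²) + (β/2)(q_1² + q_2²)²]/Q. The following sketch — not part of the original text; all parameter values are illustrative — checks this numerically by comparing a long Euler-Maruyama run of (10.4.70, 71) with the moment ⟨q_1² + q_2²⟩ computed from the candidate density:

```python
import math
import random

def simulate_mean_r2(a=1.0, beta=1.0, omega=1.0, Q=1.0,
                     dt=0.005, steps=400_000, seed=0):
    """Euler-Maruyama run of (10.4.70, 71); returns the time average of
    q_1^2 + q_2^2 after discarding the initial transient."""
    rng = random.Random(seed)
    s = math.sqrt(Q * dt)          # <F_i F_j> = Q delta_ij delta(t - t')
    q1 = q2 = 0.0
    acc, n = 0.0, 0
    for k in range(steps):
        r2 = q1 * q1 + q2 * q2
        dq1 = (-a * q1 + omega * q2 - beta * q1 * r2) * dt + s * rng.gauss(0.0, 1.0)
        dq2 = (-a * q2 - omega * q1 - beta * q2 * r2) * dt + s * rng.gauss(0.0, 1.0)
        q1, q2 = q1 + dq1, q2 + dq2
        if k >= steps // 10:       # discard the first 10% as transient
            acc += q1 * q1 + q2 * q2
            n += 1
    return acc / n

def analytic_mean_r2(a=1.0, beta=1.0, Q=1.0, rmax=5.0, m=5000):
    """<q_1^2 + q_2^2> for the candidate density f ~ exp(-Phi),
    Phi = [a r^2 + (beta/2) r^4] / Q, by radial quadrature."""
    num = den = 0.0
    for i in range(1, m + 1):
        r = rmax * i / m
        w = r * math.exp(-(a * r * r + 0.5 * beta * r ** 4) / Q)  # r from the area element
        num += r * r * w
        den += w
    return num / den
```

Note that the rotation frequency ω enters only the reversible drift J_i and therefore drops out of the stationary density; the simulation confirms this, since the result is insensitive to ω.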

10.5 Nonlinear Stochastic Systems Close to Critical Points: A Summary

After all the detailed representations of sometimes rather heavy mathematical


methods it may be worthwhile to relax a little and to summarize the individual
steps. One starting point would be nonlinear equations containing fluctuating
forces. We assumed that in a first step we may neglect these forces. We then
studied these systems close to instability points. Close to such points the behavior
of a system is generally governed by few order parameters, and the slaving prin-
ciple allows us to eliminate all "slaved" variables. The elimination procedure
works also when fluctuating forces are present. This leads us to order parameter
equations including fluctuating forces. Such order parameter equations may be
of the Langevin-Itô or -Stratonovich type. In general these equations are non-
linear, and close to instability points we must not neglect nonlinearity. On the
other hand, it is often possible to keep only the leading terms of the nonlineari-
ties. The most elegant way to cope with the corresponding problem consists in
transforming the corresponding order parameter equations of the Langevin-Itô
or -Stratonovich type into a Fokker-Planck equation. Over the past decades we


have applied this "program" to various systems. In quite a number of cases in


which spatial patterns are formed it turned out that on the level of the order para-
meters the principle of detailed balance holds due to symmetry relations. In such
a case one may get an overview of the probability with which individual con-
figurations of the order parameters Uj can be realized. This allows one to deter-
mine the probability with which certain spatial patterns can be formed, and it
allows us to find their stable configurations by looking for the minima of V(u) in

f(u) = 𝒩 e^{−V(u)} ,

where V (≡ Φ) is the potential defined in (10.4.40).

11. Discrete Noisy Maps

In this chapter we shall deal with discrete noisy maps, which we got to know in
the introduction. In the first sections of the present chapter we shall study to
what extent we can extend previous results on differential equations to such maps. In
Sects. 7.7-7.9 we showed how the slaving principle can be extended. We are
now going to show how an analog to the Fokker-Planck equation (Sects. 10.2-4)
can be found. We shall then present a path-integral solution and show how the
analog of solutions of time-dependent Fokker-Planck equations can be found.

11.1 Chapman-Kolmogorov Equation

We consider an n-dimensional mapping where the state vectors q_k have n com-
ponents. From what follows it appears that the whole procedure can be extended
to continuously distributed variables. We assume that q_k obeys the equation

q_{k+1} = f(q_k) + G(q_k) η_k , (11.1.1)

where f is a nonlinear function and the matrix G can be decomposed into

G = A + M(q_k) , (11.1.2)

where A is independent of the variables q_k and thus jointly with the random
vector η_k represents additive noise. Here M is a function of the state vector q_k
and thus jointly with η_k gives rise to multiplicative noise. We wish to find an
equation for the probability density at "time" k + 1 which is defined by

P(q, k+1) = ⟨δ(q − q_{k+1})⟩ . (11.1.3)

The average goes over the random paths of the coordinates q_k′ up to the index k,
due to the systematic motion and the fluctuations η_k′. Therefore we transform
(11.1.3) into

P(q, k+1) = ∫dⁿξ ∫dⁿη δ(q − f(ξ) − G(ξ)η) W(η) P(ξ, k) , (11.1.4)



where W(η) is an arbitrary distribution function of the noise amplitude η and
may even depend on each step k. For simplicity, we neglect the latter dependence,
however.
To evaluate the integrals in (11.1.4) we introduce the variable η′,

G(ξ) η = η′ , (11.1.5)

by which we can replace (11.1.4) by

P(q, k+1) = ∫dⁿξ ∫dⁿη′ D(ξ)⁻¹ δ(q − f(ξ) − η′) W(G⁻¹(ξ) η′) P(ξ, k) ,

(11.1.6)

where D = det G. The integrals can now be evaluated by use of the properties of
the δ-function, yielding the final result

P(q, k+1) = ∫dⁿξ D(ξ)⁻¹ W(G⁻¹(ξ)[q − f(ξ)]) P(ξ, k) , (11.1.7)

which has the form of the Chapman-Kolmogorov equation.

11.2 The Effect of Boundaries. One-Dimensional Example

In our above treatment we have implicitly assumed that either the range of the
components of q goes from - 00 to + 00 or that the spread of the distribution
function W is small compared to the intervals on which the mapping is executed.
If these assumptions do not hold, boundary effects must be taken into account.
Since the presentation is not difficult in principle but somewhat clumsy, we show
how this problem can be treated by means of the one-dimensional case.
We start from the equation

P(q, k+1) = ∫_a^b dξ ∫dη δ(q − f(ξ) − η) P(ξ, k) W(η) , (11.2.1)

where

a ≤ q ≤ b , a ≤ ξ ≤ b and (11.2.2)

q = f(ξ) + η (11.2.3)

hold. Equations (11.2.2, 3) imply

a − f(ξ) ≤ η ≤ b − f(ξ) . (11.2.4)

When integrating over η in (11.2.1), we have to observe the boundary values
(11.2.4). In order to guarantee the conservation of probability, i. e.,

∫_a^b P(q, k) dq = 1 (11.2.5)


for all k, we must normalize W(η) on the interval (11.2.4). This requires the in-
troduction of a normalization factor 𝒩 which depends explicitly on f(ξ):

∫_{a − f(ξ)}^{b − f(ξ)} W(η) dη = 𝒩⁻¹(f(ξ)) . (11.2.6)

Therefore, in (11.2.1) we are forced to introduce instead of W(η) the function

W(η; f(ξ)) = 𝒩(f(ξ)) W(η) . (11.2.7)

With this in mind we can eventually transform (11.2.1) into

P(q, k+1) = ∫_a^b dξ W(q − f(ξ)) 𝒩(f(ξ)) P(ξ, k) , (11.2.8)

which is again a Chapman-Kolmogorov equation.
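On a grid, (11.2.8) becomes a matrix-vector iteration; the factor 𝒩(f(ξ)) is obtained by normalizing W over the attainable interval, exactly as in (11.2.6). A minimal sketch — the Gaussian noise and the particular map used below are illustrative choices, not taken from the text:

```python
import math

def ck_step(P, f, sigma, a=0.0, b=1.0):
    """One Chapman-Kolmogorov step (11.2.8) on a midpoint grid over [a, b]:
    P(q, k+1) = int_a^b dxi W(q - f(xi)) N(f(xi)) P(xi, k),
    with Gaussian W and the boundary normalization (11.2.6)."""
    m = len(P)
    h = (b - a) / m
    grid = [a + (i + 0.5) * h for i in range(m)]
    W = lambda e: math.exp(-e * e / (2.0 * sigma * sigma))
    out = [0.0] * m
    for j, xi in enumerate(grid):
        fx = f(xi)
        w_col = [W(q - fx) for q in grid]
        norm = sum(w_col) * h        # = N^{-1}(f(xi)), cf. (11.2.6)
        for i in range(m):
            out[i] += (w_col[i] / norm) * P[j] * h
    return out, grid, h
```

Because 𝒩 renormalizes the noise on the interval, the discrete analog of (11.2.5), Σ_i P_i h = 1, is preserved exactly at every step.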

11.3 Joint Probability and Transition Probability. Forward and Backward Equation

In the following we use the Chapman-Kolmogorov equation for the probability
density P(q, k) in the form

P(q, k+1) = ∫_𝒢 dⁿξ K(q, ξ) P(ξ, k) . (11.3.1)

The kernel K has the general form

K(q, ξ) = D(ξ)⁻¹ 𝒩(f(ξ)) W(G⁻¹(ξ)[q − f(ξ)]) , (11.3.2)

and D = det G. Here W is the probability distribution of the random vector η and
𝒩 is a properly chosen normalization factor which secures that the fluctuations
do not kick the system away from its domain 𝒢 and that probability is conserved,
i.e.,

∫_𝒢 dⁿq K(q, ξ) = 1 . (11.3.3)

In order to obtain a complete description of the underlying Markov process cor-
responding to (11.1.1), we have to consider the joint probability

P_2(q, k; ξ, k′) , (k > k′) . (11.3.4)

Here the "time" index k is related to q, whereas k′ belongs to ξ. Further, P_2 may
be expressed in terms of the transition probability p(q, k | ξ, k′) and P(q, k) via


P_2(q, k; ξ, k′) = p(q, k | ξ, k′) P(ξ, k′) . (11.3.5)

In the case of a stationary Markov process where the kernel (11.3.2) does not
depend on the time index k explicitly, we have furthermore

p(q, k | ξ, k′) = p(q | ξ; k − k′) . (11.3.6)

In order to derive the so-called forward equation for the transition probability p
we perform an integration over ξ in (11.3.5) to obtain

P(q, k) = ∫_𝒢 dⁿξ p(q, k | ξ, k′) P(ξ, k′) . (11.3.7)

Equations (11.3.7, 1) jointly with the fact that the initial distribution may be
chosen arbitrarily yield the desired equation

p(q, k+1 | ξ, k′) = ∫_𝒢 K(q, z) p(z, k | ξ, k′) dⁿz . (11.3.8)

Furthermore, we note that jointly with (11.3.7)

P(q, k) = ∫_𝒢 dⁿξ p(q, k | ξ, k′ + 1) P(ξ, k′ + 1) (11.3.9)

holds. Subtracting (11.3.9) from (11.3.7) and using the fact that P(~,k'+1)
obeys (11.3.1) we arrive at

0 = ∫_𝒢 dⁿξ [ p(q, k | ξ, k′) P(ξ, k′) − ∫_𝒢 dⁿz p(q, k | ξ, k′ + 1) K(ξ, z) P(z, k′) ] .

(11.3.10)

Renaming now the variables in the second term, i. e., z ↔ ξ, and changing the
order of integration we obtain

p(q, k | ξ, k′) = ∫_𝒢 dⁿz p(q, k | z, k′ + 1) K(z, ξ) . (11.3.11)

Again we made use of the arbitrariness of the initial distribution P(ξ, k′). Equa-
tion (11.3.11) is the backward equation for the transition probability p.
Equations (11.3.8 and 11) complete the description of process (11.1.1) in terms of
probability distributions. Moments, correlation functions, etc., may now be
defined in the standard way.
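For a finite state space the kernel becomes a column-stochastic matrix K, the m-step transition probability is the matrix power K^m, and the forward equation (11.3.8) and backward equation (11.3.11) are simply the two ways of peeling one factor off this power, K · K^m and K^m · K. A toy illustration (the 3-state kernel is an arbitrary example, not from the text):

```python
def mat_mul(A, B):
    """(A B)[i][j] = sum_z A[i][z] B[z][j]."""
    n = len(A)
    return [[sum(A[i][z] * B[z][j] for z in range(n)) for j in range(n)]
            for i in range(n)]

# column-stochastic kernel K[q][z]: probability of the step z -> q
K = [[0.8, 0.1, 0.3],
     [0.1, 0.7, 0.2],
     [0.1, 0.2, 0.5]]

# p[q][xi] = p(q | xi; m), starting from m = 0 (the identity)
p = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
for _ in range(4):
    forward  = mat_mul(K, p)   # (11.3.8): peel off the last step
    backward = mat_mul(p, K)   # (11.3.11): peel off the first step
    p = forward                # both products equal K^(m+1)
```

Moments and correlation functions are then finite sums over these matrix elements.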

11.4 Connection with Fredholm Integral Equation

For the steady-state distribution we put

P(q, k) = P_s(q) , (11.4.1)

which transforms (11.1.7) into

P_s(q) = ∫ K(q, ξ) P_s(ξ) dⁿξ , (11.4.2)

where the kernel is defined by [compare (11.3.2)]

K(q, ξ) = D(ξ)⁻¹ W(G⁻¹(ξ)[q − f(ξ)]) . (11.4.3)

Equation (11.4.2) is a homogeneous Fredholm integral equation. In order to
treat transients we seek the corresponding eigenfunctions, making the hypothesis

P(q, k) = λ^k P_λ(q) . (11.4.4)

It transforms (11.1.7) again into a Fredholm integral equation,

λ P_λ(q) = ∫ K(q, ξ) P_λ(ξ) dⁿξ . (11.4.5)

The k-dependent solution can be expressed in the form

P(q, k) = Σ_λ c_λ λ^k P_λ(q) . (11.4.6)
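Discretizing the kernel turns (11.4.2, 5) into a matrix eigenvalue problem: the stationary distribution is the eigenvector with λ = 1, and the remaining eigenvalues with |λ| < 1 govern the decay of transients in (11.4.6). A sketch using power iteration (the map, noise width and grid size are illustrative assumptions):

```python
import math

def kernel_matrix(f, sigma, a=0.0, b=1.0, m=60):
    """Discretized kernel of (11.4.2); the boundary normalization of
    Sect. 11.2 is folded in, so every column is a probability distribution."""
    h = (b - a) / m
    grid = [a + (i + 0.5) * h for i in range(m)]
    K = [[0.0] * m for _ in range(m)]
    for j, xi in enumerate(grid):
        fx = f(xi)
        col = [math.exp(-(q - fx) ** 2 / (2.0 * sigma * sigma)) for q in grid]
        s = sum(col)
        for i in range(m):
            K[i][j] = col[i] / s
    return K

def power_iteration(K, tol=1e-12, max_iters=5000):
    """Dominant eigenpair of the Fredholm problem (11.4.5); for the
    probability-conserving kernel this is lambda = 1 with eigenvector P_s."""
    m = len(K)
    v = [1.0 / m] * m
    lam = 1.0
    for _ in range(max_iters):
        w = [sum(K[i][j] * v[j] for j in range(m)) for i in range(m)]
        lam = sum(w)               # Rayleigh quotient, since sum(v) = 1
        w = [x / lam for x in w]
        if max(abs(x - y) for x, y in zip(w, v)) < tol:
            return lam, w
        v = w
    return lam, v
```

The speed of convergence of the power iteration itself reflects the gap |λ_2| < 1 that controls the transients of the map.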

11.5 Path Integral Solution

We start from (11.1.7) in the form

P(q, k+1) = ∫dⁿξ K(q, ξ) P(ξ, k) . (11.5.1)

Iterating it for k = 1, 2, ... leads to

P(q, k+1) = ∫dⁿq_k ... dⁿq_1 K(q, q_k) K(q_k, q_{k−1}) ... K(q_2, q_1) P(q_1, 1) ,

(11.5.2)

which can be considered as a path integral. To make contact with other formula-
tions we specialize W to a Gaussian distribution which we may take, without
restriction of generality, in the form

W(η) = 𝒩 exp( − ηᵀ A η) , (11.5.3)

where A is a diagonal matrix with diagonal elements a_j. Introducing the Fourier
transform of (11.5.3) we may cast (11.5.2) into the form

P(q, k+1) = ∫Dq ∫Ds Π_{l=1}^{k} D(q_l)⁻¹ exp[...] , (11.5.4)


where

Ds = Π_{l=1}^{k} [ (1/(2π))ⁿ dⁿs_l ] . (11.5.5)
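For a finite grid the path integral (11.5.2) can be evaluated literally: summing the product of kernels over every discrete path (q_1, ..., q_k) reproduces what k-fold iteration of the one-step equation (11.5.1) gives in one stroke. A toy illustration (3 states, 3 steps; the kernel and initial distribution are arbitrary examples):

```python
from itertools import product

# column-stochastic one-step kernel K[q][xi] and initial distribution P(q_1, 1)
K = [[0.6, 0.2, 0.1],
     [0.3, 0.5, 0.4],
     [0.1, 0.3, 0.5]]
P1 = [0.5, 0.3, 0.2]
n, k = 3, 3

# path-integral form (11.5.2): sum over every path (q_1, ..., q_k)
P_path = [0.0] * n
for q in range(n):
    for path in product(range(n), repeat=k):
        w = P1[path[0]]
        for l in range(1, k):
            w *= K[path[l]][path[l - 1]]   # internal links of the path
        w *= K[q][path[-1]]                # final step to q
        P_path[q] += w

# the same distribution by iterating the one-step equation (11.5.1) k times
P_iter = P1[:]
for _ in range(k):
    P_iter = [sum(K[q][xi] * P_iter[xi] for xi in range(n)) for q in range(n)]
```

The number of paths grows exponentially with k, which is why the iterated (matrix) form, or the Gaussian evaluation of (11.5.4), is used in practice.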

11.6 The Mean First Passage Time

An important application of the backward equation (11.3.11) is given by the
mean first passage time problem. In the following we shall derive an inhomo-
geneous integral equation for the conditional first passage time which will be
introduced below. Here we assume that the process is stationary [compare
(11.3.6)]. In order to formulate the problem precisely we consider some closed
subdomain 𝒴 of 𝒢 and assume that the system is initially concentrated within
that region 𝒴 with probability 1, i. e.,

∫_𝒴 dⁿq P(q, 0) = 1 . (11.6.1)

At this stage it becomes advantageous to define the probability P̄(q, k), where
P̄(q, k) dⁿq measures the probability to meet the system under consideration in
the volume element dⁿq at q, without having reached the boundary of 𝒴 before.
We introduce

P̄(k) = ∫_𝒴 dⁿq P̄(q, k) , (11.6.2)

where P̄(k) is the probability of finding the system within 𝒴 without having
reached the boundary up to time k. Combining (11.6.1 and 2) we obtain the
probability that the system has reached the boundary of 𝒴 during k, namely
1 − P̄(k). Finally, the probability that the system reaches the boundary of 𝒴
between k and k + 1 is given by P̄(k) − P̄(k+1). It now becomes a simple matter
to obtain the mean first passage time by

⟨τ⟩ = Σ_{k=0}^{∞} (k+1) [ P̄(k) − P̄(k+1) ] . (11.6.3)

At this stage it should be noted that the mean first passage time not only depends
on 𝒴 but also on the initial distribution P(q, 0) [compare (11.6.1)]. It is this fact
which suggests the introduction of the conditional first passage time ⟨τ(q)⟩. This


is the mean first passage time for a system which has been at q at time k = 0 with
certainty. Obviously we have

⟨τ(ξ)⟩ = Σ_{k=0}^{∞} (k+1) ∫_𝒴 dⁿq [ p̄(q | ξ, k) − p̄(q | ξ, k+1) ] , (11.6.4)

where p̄(q | ξ, k) is the corresponding transition probability. The relation between
(11.6.3 and 4) is given by

⟨τ⟩ = ∫_𝒴 dⁿξ ⟨τ(ξ)⟩ P(ξ, 0) . (11.6.5)

In the following we shall use the fact that within 𝒴 the transition probability
p̄(q | ξ, k) obeys the backward equation (11.3.11), which allows us to rewrite
(11.6.4)

⟨τ(ξ)⟩ = Σ_{k=0}^{∞} (k+1) ∫_𝒴 dⁿq ∫_𝒴 dⁿz [ δ(ξ − z) − K(z, ξ) ] p̄(q | z, k) . (11.6.6)

Equation (11.6.6) may now be considerably simplified by adding and subtracting
the expression

p̄(q | z, k+1) K(z, ξ) (11.6.7)

under the integral. Using the definition of ⟨τ(q)⟩ according to (11.6.4) and
applying the backward equation again we arrive at

⟨τ(ξ)⟩ = − ∫_𝒴 dⁿz K(z, ξ) ⟨τ(z)⟩ + R , (11.6.8)

where R is given by

R = Σ_{k=0}^{∞} (k+1) ∫_𝒴 dⁿq [ p̄(q | ξ, k) − p̄(q | ξ, k+2) ] . (11.6.9)

It is now simple to evaluate the expression for R. Adding and subtracting

p̄(q | ξ, k+1) (11.6.10)

in the sum, using the obvious relation

∫_𝒴 dⁿq Σ_{k=0}^{∞} [ p̄(q | ξ, k) − p̄(q | ξ, k+1) ] = 1 , (11.6.11)

performing the summation over k, and replacing ξ by q, our final result reads

⟨τ(q)⟩ = ∫_𝒴 dⁿz K(z, q) ⟨τ(z)⟩ + 1 . (11.6.12)

Equation (11.6.12) contains the result announced in the beginning of this section:
We find an inhomogeneous integral equation for the conditional first passage
time ⟨τ(q)⟩ for the discrete time process (11.1.1).
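Discretized, (11.6.12) is an inhomogeneous linear system: with t_j = ⟨τ(ξ_j)⟩ and the kernel restricted to 𝒴 (unnormalized, hence substochastic — leaking out of 𝒴 is the exit event), t = K̃ t + 1, which can be solved by fixed-point iteration and checked against a direct Monte Carlo run of the map. A sketch for a one-dimensional linear map (all parameter values below are illustrative):

```python
import math
import random

def mfpt_integral_equation(f, sigma, a=-1.0, b=1.0, m=80, tol=1e-10):
    """Solve the discretized version of (11.6.12),
    <tau(q)> = int_Y dz K(z, q) <tau(z)> + 1, K(z, q) = W(z - f(q)),
    with Gaussian noise W truncated to Y = [a, b]."""
    h = (b - a) / m
    grid = [a + (i + 0.5) * h for i in range(m)]
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    W = lambda e: c * math.exp(-e * e / (2.0 * sigma * sigma))
    M = [[h * W(grid[i] - f(grid[j])) for i in range(m)] for j in range(m)]
    tau = [1.0] * m
    while True:
        new = [1.0 + sum(Mj[i] * tau[i] for i in range(m)) for Mj in M]
        if max(abs(x - y) for x, y in zip(new, tau)) < tol:
            return grid, new
        tau = new

def mfpt_monte_carlo(f, sigma, x0=0.0, a=-1.0, b=1.0, trials=800, seed=1):
    """Direct simulation of q_{k+1} = f(q_k) + eta_k; counts steps to leave [a, b]."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x, k = x0, 0
        while a <= x <= b:
            x = f(x) + sigma * rng.gauss(0.0, 1.0)
            k += 1
        total += k
    return total / trials
```

The fixed-point iteration converges because the truncated kernel has spectral radius below 1 — precisely the leakage of probability through the boundary of 𝒴.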


11.7 Linear Dynamics and Gaussian Noise. Exact Time-Dependent Solution of the Chapman-Kolmogorov Equation

We consider the linear version of (11.1.1), i. e.,

q_{k+1} = A q_k + η_k , (11.7.1)

where A is a matrix depending on the external parameters only. Furthermore, we
shall assume G = 1 and that the probability density of the random vector η is of
the Gaussian type

W(η_k) = ( det P / (2π)ⁿ )^{1/2} exp( − ½ η_kᵀ P η_k ) , (11.7.2)

where P denotes a symmetric positive matrix. We note that (11.7.1) together with
(11.7.2) may be visualized as a linearized version of (11.1.1) around a fixed point.
If in addition the fluctuations are small, we are allowed to neglect the effect of
the boundaries in the case of a finite domain when the fixed point is far
enough from the boundaries. Using (11.7.1 and 2), the kernel (11.3.2) obviously
has the form

K(q, ξ) = ( det P / (2π)ⁿ )^{1/2} exp[ − ½ (q − Aξ)ᵀ P (q − Aξ) ] . (11.7.3)

Equation (11.1.7) may now be solved by the hypothesis

P(ξ, k) = ( det B / (2π)ⁿ )^{1/2} exp[ − ½ (ξ − ξ_0)ᵀ B (ξ − ξ_0) ] , (11.7.4)

where ξ_0 denotes the center of the probability distribution at time k and B again is
a positive symmetric matrix. Indeed, inserting both (11.7.4, 3) into (11.1.7) we
obtain

P(q, k+1) = ( det B · det P )^{1/2} (2π)⁻ⁿ ∫_{−∞}^{+∞} dⁿξ exp{ ... } , (11.7.5)

where

(11.7.6)

Shifting ξ by a constant vector a,

ξ = ξ′ + a , (11.7.7)


and choosing a suitably,

(11.7.8)

we are able to perform the ξ′-integration and find

P(q, k+1) = 𝒩̃ exp[ − ½ (q − q̃_0)ᵀ B̃ (q − q̃_0) ] . (11.7.9)

Identifying

B̃ = B_{k+1} ; B = B_k ;
𝒩̃ = 𝒩_{k+1} ; 𝒩_k = (2π)^{−n/2} (det P)^{1/2} ; (11.7.10)
q̃_0 = q_{k+1} ; ξ_0 = q_k ,

and comparing (11.7.4) with (11.7.9), we immediately find the recursion relations

B_{k+1} = P − PA(AᵀPA + B_k)⁻¹AᵀP , (11.7.11)

q_{k+1} = A q_k , (11.7.12)

𝒩_{k+1} = 𝒩_k ( det P / det(AᵀPA + B_k) )^{1/2} . (11.7.13)

The stationary solution can be obtained from the condition

B_{k+1} = B_k = B , (11.7.14)

etc. In the case of a diagonal matrix P which fulfills the condition

(11.7.15)

we may solve (11.7.14) if additionally the following situation is met

(11.7.16)

We then simply have

(11.7.17)

Finally we mention that the Gaussian case also reveals instabilities of the system.
Indeed, as soon as an eigenvalue a of the matrix A crosses |a| = 1 due to the
variation of an external parameter, a corresponding divergence in the variance of
the probability distribution indicates the instability.
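In one dimension the content of these recursion relations can be checked directly. For q_{k+1} = A q_k + η with noise variance 1/P, the variance obeys v_{k+1} = A² v_k + 1/P, i.e. the inverse variance B_k = 1/v_k obeys B_{k+1} = P B_k/(P A² + B_k); its fixed point B = P(1 − A²) degenerates as |A| → 1, which is exactly the divergence of the variance mentioned above. A scalar sketch (the numbers are illustrative):

```python
def b_step(B, A, P):
    """Inverse-variance recursion of the scalar linear noisy map
    q_{k+1} = A q_k + eta,  var(eta) = 1/P:  B_{k+1} = P*B/(P*A*A + B)."""
    return P * B / (P * A * A + B)

def variance_step(v, A, P):
    """Equivalent direct variance recursion: v_{k+1} = A^2 v + 1/P."""
    return A * A * v + 1.0 / P

def stationary_b(A, P):
    """Fixed point B = P (1 - A^2); the variance 1/B diverges as |A| -> 1."""
    return P * (1.0 - A * A)
```

The identity B_k v_k = 1 is preserved exactly by the two recursions, and stationary_b makes the instability criterion |A| = 1 explicit.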

12. Example of an Unsolvable Problem in Dynamics

When looking back at the various chapters of this book we may note that even by
a seemingly straightforward extension of a problem (e. g., from periodic to quasi-
periodic motion) the solution of the new problem introduces qualitatively new
difficulties. Here we want to demonstrate that seemingly simple questions exist in
dynamic systems which cannot be answered even in principle. Let us consider a
dynamic system whose states are described by state vectors q. Then the system
may proceed from any given state to another one q′ via transformations A, B, C, ...
within a time interval T,

q′ = Aq , or q′ = Bq , ... . (12.1)

We assume that the inverse operators A⁻¹, B⁻¹, ... exist. Obviously, A, B, C, ...
form a group. We may now study all expressions (words) formed of A, A⁻¹, B,
B⁻¹, etc., e.g., BA⁻¹C. We may further define that for a number of specific
words, W(A, B, C, ...) = 1, e.g., BC = 1. This means that after application of C
and B any initial state q of the dynamic system is reached again. Then we may ask
the following question: given two words W_1(A, B, ...) and W_2(A, B, ...), can we
derive a general procedure by which we can decide in finitely many steps whether
the dynamic system has reached the same end point q_1 = q_2 if it started from the
same arbitrary initial point q_0. That is, we ask whether
W_1(A, B, ...) q_0 = W_2(A, B, ...) q_0 , (12.2)

or, equivalently, whether

W_1 = W_2 , or (12.3)

W_1 W_2⁻¹ = 1 . (12.4)

This is a clear-cut and seemingly simple task. Yet it is unsolvable in principle.
There is no general procedure available. The problem we have stated is the
famous word problem in group theory. By means of Gödel's theorem (which we
are not going to present here) the unsolvability of this problem can be shown. On
the other hand, if specific classes of words W_j = 1 are given as defining relations,
the word problem can be solved.

This example shows clearly that some care must be exercised when problems
are formulated. This is especially so if general solutions are wanted. Rather, one
should be aware of the fact that some questions can be answered only with
respect to restricted classes of equations (or problems). It is quite possible that
such a cautious approach is necessary when dealing with self-organizing systems.

13. Some Comments on the Relation Between Synergetics
and Other Sciences

In the introduction we presented phenomena which are usually treated within dif-
ferent disciplines, so that close links between synergetics and other disciplines
could be established at that level. But the present book is primarily devoted to the
basic concepts and theoretical methods of synergetics, and therefore in the fol-
lowing the relations between synergetics and other disciplines are discussed at this
latter level. Since synergetics has various facets, a scientist approaching it from
his own discipline will probably notice those aspects of synergetics first which
come closest to the basic ideas of his own field. Based on my discussions with
numerous scientists, I shall describe how links can be established in this way.
Then I shall try to elucidate the basic differences between synergetics and the
other disciplines.
When physicists are dealing with synergetics, quite often thermodynamics is
brought to mind. Indeed, one of the most striking features of thermodynamics is
its universality. Its laws are valid irrespective of the different components which
constitute matter in its various forms (gases, liquids, and solids). Thermo-
dynamics achieves this universal validity by dealing with macroscopic quantities
(or "observables") such as volume, pressure, temperature, energy or entropy.
Clearly these concepts apply to large ensembles of molecules, but not to in-
dividual molecules. A closely related approach is adopted by information theory,
which attempts to make unbiased estimates about systems on which only limited
information is available. Other physicists recognize common features between
synergetics and irreversible thermodynamics. At least in the realm of physics,
chemistry, and biology, synergetics and irreversible thermodynamics deal with
systems driven away from thermal equilibrium.
Chemists and physicists are struck by the close analogy between the various
macroscopic transitions of synergetic systems and phase transitions of systems in
thermal equilibrium, such as the liquid - gas transition, the onset of ferro-
magnetism, and the occurrence of superconductivity. Synergetic systems may
undergo continuous or discontinuous transitions, and they may exhibit features
such as symmetry breaking, critical slowing down, and critical fluctuations,
which are well known in phase transition theory.
The appropriate way to cope with fluctuations, which are a necessary part of
any adequate treatment of phase transitions, is provided by statistical mechanics.
Scientists working in that field are delighted to see how typical equations of their
field, such as Langevin equations, Fokker-Planck equations and master equa-
tions, are of fundamental importance in synergetics. Electrical engineers are im-
mediately familiar with other aspects of synergetics, such as networks, positive

and negative feedback, and nonlinear oscillations, while civil and mechanical
engineers probably consider synergetics to be a theory of static or dynamic in-
stabilities, postbuckling phenomena of solid structures, and nonlinear oscilla-
tions. Synergetics studies the behavior of systems when controls are changed;
clearly, scientists working in cybernetics may consider synergetics from the point
of view of control theory.
From a more general point of view, both dynamic systems theory and syn-
ergetics deal with the temporal evolution of systems. In particular, mathema-
ticians dealing with bifurcation theory observe that synergetics - at least in its
present stage - focusses attention on qualitative changes in the dynamics (or
statics) of a system, and in particular on bifurcations. Finally, synergetics may be
considered part of general systems theory, because in both fields scientists are
searching for the general principles under which systems act.
Quite obviously, each of the above-mentioned disciplines (and probably
many others) have good reason to consider synergetics part of them. But at the
same time in each case, synergetics introduces features, concepts, or methods
which are alien to each specific field. Thermodynamics acts at its full power only
if it deals with systems in thermal equilibrium, and irreversible thermodynamics
is confined to systems close to thermal equilibrium. Synergetic systems in phys-
ics, chemistry, and biology are driven far from thermal equilibrium and can
exhibit new features such as oscillations. While the concept of macroscopic
variables retains its importance in synergetics, these variables, which we have
called order parameters, are quite different in nature from those of thermody-
namics. This becomes especially clear when thermodynamics is treated with the
aid of information theory, where numbers of realizations are computed under
given constraints. In other words, information theory and thermodynamics are
static approaches, whereas synergetics deals with dynamics.
The nonequilibrium phase transitions of synergetic systems are much more
varied than phase transitions of systems in thermal equilibrium, and include
oscillations, spatial structures, and chaos. While phase transitions of systems in
thermal equilibrium are generally studied in their thermodynamic limit, where
the volume of the sample is taken to be infinite, in most nonequilibrium phase
transitions the geometry of the sample plays a crucial role, leading to quite dif-
ferent structures. Electrical engineers are quite familiar with the concepts of non-
linearity and noise, which also play a fundamental role in synergetics. But syn-
ergetics also offers other insights. Not only can synergetic processes be realized
on quite different substrates (molecules, neurons, etc.), but synergetics also deals
with spatially extended media, and the concept of phase transitions is alien to
electrical engineering. Similar points may be made with respect to mechanical en-
gineering, where in general, fluctuations are of lesser concern. Though in cyber-
netics and synergetics the concept of control is crucial, the two disciplines have
quite different goals. In cybernetics, procedures are devised for controlling a sys-
tem so that it performs in a prescribed way, whereas in synergetics we change
controls in a more or less unspecified manner and study the self-organization of
the system, i. e. the various states it acquires under the newly imposed control.
The theory of dynamic systems and its special (and probably most interesting)
branch, bifurcation theory, ignore fluctuations. But, as is shown in synergetics,


fluctuations are crucial at precisely those points where bifurcations occur (and
bifurcation theory should work best in the absence of fluctuations). Or, in other
words, the transition region can be adequately dealt with only if fluctuations are
taken into account. In contrast to traditional bifurcation theory (e. g. of the
Lyapunov-Schmidt type), which derives the branching solutions alone, in syn-
ergetics we study the entire stochastic dynamics in the subspace spanned by the
time-dependent order parameters. This is necessary in order to take fluctuations
into account. At the same time our approach allows us to study the stability of
the newly evolving branches and the temporal growth of patterns. Thus there is
close contact with phase transition theory, and it is possible to introduce concepts
new to bifurcation theory, such as critical slowing down, critical fluctuations,
symmetry breaking, and the restoration of broken symmetry via fluctuations. In
addition, our methods cover bifurcation sequences within such a subspace, e. g.
period-doubling sequences and frequency locking. In most cases a number of
(noisy) components are necessary to establish a coherent state, and consequently
synergetics deals with systems composed of many components; this in turn
requires a stochastic approach.
While bifurcation theory as yet excludes fluctuations, in some of its recent de-
velopments it does consider the neighborhood of branching solutions. As experts
of dynamic systems theory and bifurcation theory will notice, this book advances
to the frontiers of modern research and offers these fields new results. One such
result concerns the form of the solutions (analogous to Floquet's theorem) of
linear differential equations with quasiperiodic coefficients, where we treat a
large class of such equations by means of embedding. Another result concerns
the bifurcation of an n-dimensional torus into other tori. Finally, the slaving
principle contains a number of important theorems as special cases, such as the
center manifold theorem, the slow manifold theorem, and adiabatic elimination
procedures.
With respect to general systems theory, synergetics seems to have entered
virgin land. By focussing its attention on situations in which the macroscopic
behavior of systems undergoes dramatic changes, it has enabled us to make
general statements and to cover large classes of systems.
In conclusion, a general remark on the relation between synergetics and
mathematics is in order. This relation is precisely the same as that between the
natural sciences and mathematics. For instance, quantum mechanics is not just
an application of the theory of matrices or of the spectral theory of linear opera-
tors. Though quantum mechanics uses these mathematical tools, it has developed
its own characteristic system of concepts. This holds a fortiori for synergetics. Its
concepts of order parameters and slaving can be applied to sciences which have
not yet been mathematized and to ones which will probably never be mathe-
matized, e. g. the theory of the development of science.

Appendix A: Moser's Proof of His Theorem

A.1 Convergence of the Fourier Series

Lemma A.1.1
We assume that the vector F (6.3.27) (with its subvectors f, g, G) is a real analytic
function of period 2π in ψ_1, ..., ψ_n. For a given r > 0 we introduce the norm

‖F‖_r = sup_{|Im{ψ_ν}| < r} ‖F‖ , ‖F‖ = ‖f‖ + ‖g‖ + ‖G‖ , (A.1.1)

which is finite for some positive r. Here ‖...‖ denotes the sum over the moduli of
the individual vector components.
For any real analytic periodic function the Fourier coefficients decay ex-
ponentially. More precisely, if

F = Σ_j F_j exp[i(j, ψ)] , (A.1.2)

then

‖F_j‖ ≤ ‖F‖_r exp( − ‖j‖ r) . (A.1.3)

To prove this inequality we represent F_j in the form

F_j = (2π)⁻ⁿ ∫ ... ∫ F exp[ − i(j, ψ)] dⁿψ , (A.1.4)

where the integration is taken over 0 ≤ ψ_ν ≤ 2π. Now shifting the integration
domain into the complex domain Im{ψ_ν} = − ρ sign{j_ν} gives

‖F_j‖ ≤ ‖F‖_r exp( − ‖j‖ ρ) , (A.1.5)

and since this holds for every ρ < r we obtain (A.1.3).


In Sect. 6.3 we introduced the null-space of the operator L. In the following
we shall denote this null-space by 𝒩. We may decompose the space of functions F
into that null-space 𝒩 and a residual space, which we shall assume real and
denote by ℛ. We then formulate the following.

Lemma A.1.2
Let F ∈ ℛ and F be analytic in |Im{ψ_ν}| < r. Then the unique solution of

LU = F , U ∈ ℛ , (A.1.6)

is analytic in the same domain. If 0 < ρ < r < 1, one has for U the estimate

‖U‖_ρ ≤ c (r − ρ)^{−σ} ‖F‖_r , (A.1.7)

where c depends on n, m, τ only, and σ = τ + 1.

Proof: It is sufficient for the present purposes to prove this lemma only with
σ = τ + n, n ≥ 2.
Since the operator L acts component-wise on the various terms in the Fourier
expansion, it suffices to verify the convergence of the Fourier series so obtained.
For this purpose we write

F = Σ_j F_j exp[i(j, ψ)] , U = Σ_j U_j exp[i(j, ψ)] , (A.1.8)

and have from

LU = F (A.1.9)

the condition

[i(ω, j) + L] U_j = F_j . (A.1.10)

Since L can be diagonalized, one finds U_j by dividing the various eigenfunctions
∈ ℛ of L by the eigenvalues, which are nonzero. On account of (6.2.6) we can
estimate U_j by

‖U_j‖ ≤ K⁻¹ (‖j‖^τ + 1) ‖F_j‖ ≤ K⁻¹ (‖j‖^τ + 1) exp( − ‖j‖ r) ‖F‖_r (A.1.11)

according to (A.1.3). Hence

‖U‖_ρ ≤ ‖F‖_r K⁻¹ Σ_j (‖j‖^τ + 1) exp[ ‖j‖ (ρ − r) ] . (A.1.12)

The latter sum always converges and can be estimated as follows. With
δ = r − ρ ≤ 1 we have

δ^{τ+n} Σ_j (‖j‖^τ + 1) exp( − ‖j‖ δ) ≤ Σ_j (δ^τ ‖j‖^τ + 1) exp( − ‖j‖ δ) δⁿ , (A.1.13)


which is bounded by a constant independent of <5 since it can be estimated by the


integral

(A. 1.14)

which it approximates. Hence we have from (A.1.12)

(A.1.1S)

which proves lemma A.l.2 with a = r + n.
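The uniform bound in (A.1.13) can be made tangible numerically: for n = 2 and, say, τ = 2, the quantity δ^{τ+n} Σ_j (‖j‖^τ + 1) e^{−‖j‖δ} stays of the same order as δ → 0, since the sum approximates the integral above. A small numerical sketch (the value of τ, the truncation and the chosen values of δ are illustrative):

```python
import math

def divisor_sum(delta, tau=2):
    """delta^(tau+n) * sum over j in Z^2, j != 0, of (||j||^tau + 1) exp(-||j|| delta)
    for n = 2, with the sum norm ||j|| = |j_1| + |j_2| used in (A.1.1)."""
    n = 2
    M = int(40.0 / delta)          # truncation: the neglected tail is ~ exp(-40)
    s = 0.0
    for j1 in range(-M, M + 1):
        for j2 in range(-M, M + 1):
            if j1 == 0 and j2 == 0:
                continue
            nj = abs(j1) + abs(j2)
            s += (nj ** tau + 1) * math.exp(-nj * delta)
    return delta ** (tau + n) * s
```

As δ decreases, the values settle near a constant — the numerical shadow of the δ-independent bound that closes the proof of lemma A.1.2.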


Summarizing the results of this appendix and Sect. 6.3, we have established
the following theorem.

Theorem A.1.1. There exist unique formal expansions in powers of ε for Λ(ε),
d(ε), D(ε) and for u(ψ, ε), v(ψ, ε), V(ψ, ε) which formally satisfy the conditions
of theorem 6.2.1 and the normalization that all coefficients in this expansion of
u, v, V belong to ℛ. The proof is evident from the preceding discussion. Com-
parison of coefficients leads at each step to equations of the type (6.3.33), which
by lemma A.1.2 admit a unique solution with the normalization U ∈ ℛ. The more
difficult proof of the convergence of the series expansion in ε is settled in Sect.
A.3. However, in Sect. A.3 we shall not be able to ensure this normalization and
therefore we investigate now the totality of all formal solutions.

A.2 The Most General Solution to the Problem of Theorem 6.2.1

We discuss the meaning of the arbitrary constants (of the null-space). To
find the most general solution, U ∈ ℛ, and N to the problem of Moser's theorem
we let

𝒰: φ = ψ + u(ψ, ε) , ξ = χ + v(ψ, χ, ε) (A.2.1)

be the unique particular solution which satisfies the normalization that all vectors
lying in the null-space vanish, i. e., U ∈ ℛ. It transforms the differential equations
(6.3.1 and 2) into a system whose linearization is

ψ̇ = ω , χ̇ = Λχ . (A.2.2)

Clearly, any transformation

ψ′ = ψ + ε û(ψ, ε) , χ′ = χ + ε [ v̂(ψ, ε) + V̂(ψ, ε) χ ] , (A.2.3)


which transforms the system (A.2.2) into itself will give rise to another solution

𝒰 ∘ 𝒞 (A.2.4)

to theorem 6.2.1. Here (A.2.4) denotes the composition of 𝒰 and 𝒞. For this
reason we determine the self-transformations 𝒞 of (A.2.2). Inserting (A.2.3) into
(A.2.2) and requiring that the differential equation in the new variables
ψ′, χ′ has the same form as before, we find that [L has the shape (6.3.29) with
L_1 = L_2 = L_3]

L Û = 0 , where (A.2.5)

(A.2.6)

Equation (A.2.5) means that Û is a vector of the null-space, and these self-trans-
formations have the form

𝒞 ∈ ℭ: ψ′ = ψ + ε a , χ′ = χ + ε (b + B χ) , (A.2.7)

where a, b, B are independent of ψ and satisfy

Λ b = 0 , (A.2.8)

Λ B = B Λ . (A.2.9)

We shall denote the group of self-transformations by ℭ. Thus, both 𝒰 and 𝒰 ∘ 𝒞
are solutions to theorem 6.2.1. In fact, they form the most general solution
provided Λ, d, D are considered given. Indeed, one may show that these latter
quantities are uniquely determined. Since this proof is more of a technical nature
we do not present it here but rather formulate the corresponding lemma.

Lemma A.2.1
Let 𝒰 in (A.2.1) be the unique (normalized) formal transformation and let

(A.2.10)

be the corresponding modifying term N found in theorem A.1.1. Thus 𝒰 transforms the system (6.3.1 and 2) into a system (6.1.26 and 27) whose linearization is given by


ψ̇ = ω                                                     (A.2.11)
χ̇ = Λχ .

Let 𝒰̃, Ñ (where 𝒰̃ = identity, Ñ = 0 for ε = 0) be any other formal expansion with this property. Then there exists a transformation 𝒞 ∈ 𝒢, cf. (A.2.7), such that 𝒰̃ = 𝒰 ∘ 𝒞 and Ñ = N.

A.3 Convergent Construction

a) To prove the convergence of the series expansion obtained in the previous section one is tempted to use Cauchy's majorant method. But this method fails because of the presence of the small divisors. A crude estimate leads to majorants of the form Σ(n!)^{2τ} ε^n which diverge for all |ε| > 0. Namely, by lemma A.1.2 the solution of the linearized equation leads to multiplication of the coefficients by the factor (r − r′)^{−τ}, where |Im{φ}| < r, |Im{φ′}| < r′ are the complex domains in which the iterative estimates hold. Choosing a sequence of such domains |Im{φ}| < r_n with

as n → ∞ ,                                                  (A.3.1)

we obtain a factor (r_{n−1} − r_n)^{−τ} = O(n^{2τ}) in going from the determination of the coefficients of ε^{n−1} to those of ε^n. This leads then to the series Σ(n!)^{2τ} ε^n, indicating the divergence of the series. Still, the convergence of the series can be established by using an alternate construction which converges more rapidly (Sects. 5.2 and 5.3). This idea is due to Kolmogorov, Moser's proof here being a generalization and sharpened version of that approach.
We shall describe an iteration method in which at each step the linear equations of lemma A.1.2 have to be solved but the precision increases with an exponent 3/2, so that the previous series can be replaced by

which is obviously convergent. Below, we shall describe this iteration procedure with detailed estimates and show in the following section how this result can be used to prove the convergence of the series found in Sect. 6.3.
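The contrast between the two kinds of series can be made concrete with a small numerical experiment (illustrative only: the constants τ, ε, c and the starting error below are invented, not taken from the text). The majorant terms (n!)^{2τ} ε^n eventually grow without bound for any ε > 0, whereas an error recursion of the Newton type with exponent 3/2 collapses super-exponentially once the product of the constant and the square root of the initial error is below one.

```python
# Illustrative sketch with made-up constants: compare the divergent
# majorant terms (n!)**(2*tau) * eps**n with the error of an iteration
# whose precision improves with exponent 3/2.
from math import lgamma, log

tau, eps = 2.0, 1e-6

def log_term(n):
    # logarithm of the n-th majorant term (n!)**(2*tau) * eps**n
    return 2 * tau * lgamma(n + 1) + n * log(eps)

assert log_term(200) > log_term(1)   # the majorant terms blow up eventually

c, e = 10.0, 1e-3                    # hypothetical constant and start error
for _ in range(10):
    e = c * e ** 1.5                 # Newton-type improvement e -> c * e**(3/2)
assert e < 1e-40                     # super-exponentially small after 10 steps
```

The point of the comparison: the linear scheme loses a factor n^{2τ} per order and diverges, while the quadratic-type scheme gains an exponent 3/2 per step and wins against all such factors.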
b) We consider again a family of differential equations

φ̇ = a + f                                                  (A.3.2)
ξ̇ = b + Bξ + g ,

where a = (a₁, …, a_n) vary freely, while b = (b₁, …, b_m) and the m by m matrix B are restricted by

Λb = 0 ,  BΛ = ΛB .                                         (A.3.3)

Here ω = (ω₁, …, ω_n) and the eigenvalues λ₁, …, λ_m of the matrix Λ satisfy the condition

(A.3.4)

for all integers, all vectors j, and all k_μ, with

(A.3.5)

except the finitely many (j, k) = (0, k) for which the lhs of (A.3.4) vanishes. The number τ is chosen > n − 1 to ensure the existence of such ω, Λ.
In the following it will be decisive to consider a, b, B as variables [under the linear restrictions (A.3.3)], and we shall specify the complex domain of these variables, as well as that of φ, ξ, by

(A.3.6)

and we require, with a fixed constant c₀ ≥ 1, that q ≤ c₀K. In the following, all constants which depend on n, m, c₀, τ only will be denoted by c₁, c₂, c₃, … .

Theorem A.3.1. There exist constants δ₀, c depending on c₀, n, m, τ only such that if δ < δ₀ and

|f|/r + |g|/s < r^a Kδ  in 𝒟 ,  a = τ + 1                   (A.3.7)

then there exists a transformation

𝒰:  φ = ψ + u(ψ)                                            (A.3.8)
    ξ = χ + v(ψ, χ)   (linear in χ)

and a = ā, b = b̄, B = B̄ in 𝒟 such that (A.3.2) for this choice of a = ā, b = b̄, B = B̄ is transformed by (A.3.8) into a system

(A.3.9)

In particular,

φ = ωt + ψ + u(ωt + ψ) ,  ξ = v(ωt, 0)                      (A.3.10)

is a quasiperiodic solution of (A.3.2) with characteristic numbers ω₁, …, ω_n, λ₁, …, λ_m.
Moreover, ā, b̄, B̄ lie in

|ā − ω|/r + |b̄|/s + |B̄ − Λ| < c r^a Kδ < q                 (A.3.11)

and u, v are real analytic, satisfying

|u|/r + |v|/s < cδ  for |Im{ψ}| < r/2 ,  |χ| < s .          (A.3.12)

c) The remainder of this section is devoted to the proof of this theorem. First we observe that by replacing ξ by sξ we can assume s = 1. Similarly, replacing t by λt and multiplying a, b, B, ω, Λ, f, g by λ⁻¹, we can normalize K to K = 1. However, r cannot be replaced by 1 by stretching the φ variable, since the angular variables φ₁, …, φ_n were chosen of period 2π. Therefore, we take s = K = 1, r ≤ 1, q ≤ c₀, c₀ ≥ 1.
In the following construction the transformation 𝒰 will be built up as an infinite product of transformations

𝒰 = lim_{ν→∞} (𝒰₀ ∘ 𝒰₁ ∘ … ∘ 𝒰_ν) ,                        (A.3.13)

where each 𝒰_ν refines the previous approximation further. We denote the given family of differential equations symbolically by ℱ = ℱ₀, and ℱ₁ denotes the system obtained by transforming ℱ₀ by the coordinate transformation 𝒰₀, etc. Hence ℱ_ν is transformed by 𝒰_ν into ℱ_{ν+1}, and ℱ₀ by 𝒰₀ ∘ 𝒰₁ ∘ … ∘ 𝒰_ν into ℱ_{ν+1}.
It will be decisive in the following proof to describe precisely the domain 𝒟_ν of validity of the transformations and the differential equations. In particular, we mention that the transformation 𝒰_ν will involve a change of the parameters a, b, B, as well as a transformation of the variables φ, ξ. To make this change of variables clear we drop the subscript ν and write 𝒰_ν in the form

φ = ψ + u(ψ, χ, α, β, 𝔅)
ξ = χ + v(ψ, χ, α, β, 𝔅)
a = α + w₁(α, β, 𝔅)                                         (A.3.14)
b = β + w₂(α, β, 𝔅)
B = 𝔅 + w₃(α, β, 𝔅) .

The variables ψ, χ, α, β, 𝔅 will be restricted to

𝒟_{ν+1}:  |Im{ψ}| < r_{ν+1} ,  |χ| < s_{ν+1} ,
          |α − ω|/r_{ν+1} + |β|/s_{ν+1} + |𝔅 − Λ| < q_{ν+1} ,   (A.3.15)


where the sequences r_ν, s_ν, q_ν will be chosen in such a manner that 𝒰_ν maps 𝒟_{ν+1} into

(A.3.16)

and such that the family of differential equations ℱ_ν is mapped into a system ℱ_{ν+1} which approximates (A.3.9) to a higher degree than ℱ_ν. We shall drop the index ν, write ℱ_ν in the form (A.3.2), and write ℱ_{ν+1} in Greek letters as

φ̇ = α + Φ                                                  (A.3.17)
χ̇ = β + 𝔅χ + Ξ ,   in 𝒟_{ν+1}

where the aim will be to make Φ, Ξ very small.


To set up the estimates which will be proven inductively we introduce the sequences r_ν, s_ν, δ_ν, q_ν in the following manner:

(A.3.18)

where c ≥ 1 will be determined later. Notice that δ_ν tends to zero rapidly if δ₀ < c⁻⁶ ≤ 1 and, similarly, s_ν, q_ν approach zero, while r_ν → r₀/2.
We shall assume that ℱ_ν satisfies

(A.3.19)

and we construct 𝒰_ν in such a manner that it maps 𝒟_{ν+1} into 𝒟_ν and that for the transformed system ℱ_{ν+1} we have the corresponding error estimate

|Φ|/r_{ν+1} + |Ξ|/s_{ν+1} < r_{ν+1}^a δ_{ν+1} − q_{ν+2}  in 𝒟_{ν+1} .   (A.3.20)

For the mapping 𝒰_ν [see (A.3.14)] we establish

(A.3.21)

with an appropriate constant c₄ > 1.


d) If the above statements and the estimates (A.3.20, 21) are established, the above theorem follows readily, as we will now show. Since 𝒰_ν maps 𝒟_{ν+1} into 𝒟_ν, the composite transformation 𝒱_ν = 𝒰₀ ∘ 𝒰₁ ∘ … ∘ 𝒰_ν maps 𝒟_{ν+1} into 𝒟₀ and can be estimated by

(A.3.22)

there. This implies that 𝒱_ν converges in

|Im{ψ}| < r₀/2 ,  χ = 0 ,  α = β = 0 ,  𝔅 = 0               (A.3.23)

uniformly, and the transformation 𝒱_∞ is analytic in 𝒟_∞. Moreover, the above inequality implies (A.3.12) for χ = 0 if c is chosen > 2c₄. Since u is independent of χ and v depends linearly on χ, it remains to estimate ∂v/∂χ. The term 1 + ∂v/∂χ is the product of the corresponding terms 1 + ∂v_ν/∂χ and, since |∂v_ν/∂χ| < c₄δ_ν, leads to the estimate

(A.3.24)

This proves (A.3.12) for an appropriate c.


The transformed system .roo has the property that l/Joo, Sv, oSoo/oX vanish
(for X = 0) in 9 This follows from the estimate (A.3.20) and Qv+2 -> together
00 , °
with the fact that oSloX at X = 0 can be estimated by sup ISis;; 1, which also
tends to zero as v-> 00. Hence, for y:, we have
(A.3.25)

as was to be proven.
Finally, the determination of ii, 6, iJ is obtained as the image of ~oo from the
last three components in (A.3.14). Since °llv maps 9:1v+ 1 into 9:1v, the images
(A.3.26)

form a sequence of nested domains in :;10:

(A.3.27)

In particular, the range of the last three components a, p, B in (A.3.14) tends to


zero, as follows from (A.3.15) and Qv+l->O, which implies that the correspond-
ing a, b, BE 9:10 shrink to a point ii, 6, iJ E 01 0 as v -> 00. This follows immediately
00

from (A.3.21) and the convergence of L Qv+l:


v=o

(A.3.28)


if δ₀ is chosen small enough, proving (A.3.11) for c > 2c₄. One readily verifies that the conditions of the theorem imply those of (A.3.20) for ν = 0, so that the induction can be started.
e) This reduces the proof of theorem A.3.1 to the construction of 𝒰 = 𝒰_ν and the proof of the estimates (A.3.20, 21).
For this purpose, we truncate f, g to their linear parts

f₀ = f(φ, 0, a, b, B)                                       (A.3.29)
g₀ = g(φ, 0, a, b, B) + g_ξ(φ, 0, a, b, B) ξ

and break up (f₀, g₀) into its components in 𝒩, ℛ (as in Sect. A.2), which we denote by (f^𝒩, g^𝒩) and (f^ℛ, g^ℛ). Then the transformation 𝒰_ν will be obtained by solving the linearized equations

u_ψ ω = f^ℛ(ψ, a, b, B)                                     (A.3.30)
v_ψ ω + v_χ Λχ − Λv = g^ℛ(ψ, a, b, B) .

As we saw in the previous section, these equations can be uniquely solved if we require that

(A.3.31)

This defines u, v. The transformation of a, b, B will be given implicitly by the equations

a = α + f^𝒩(α, b, B)                                        (A.3.32)
β + 𝔅χ = b + Bχ + g^𝒩(α, b, B; χ) .

f) The relations (A.3.30 − 32) define the transformation 𝒰, and we proceed to verify that it maps 𝒟_{ν+1} = 𝒟₊ into 𝒟_ν = 𝒟. (To simplify the notation, we denote quantities referring to ν + 1, such as s_{ν+1}, by s₊, and those referring to ν without subscript.) For this purpose we have to check that (A.3.32) can be inverted for

(A.3.33)

and that the solution a, b, B falls into 𝒟, see (A.3.6). We explain the argument for the first equation, ignoring the dependence on b, B.
We use the implicit function theorem: in |a − ω| < rq we have through (A.3.19, 18)

(A.3.34)

and using Cauchy's estimate in the sphere |a − ω| < R = c₄q₊ < rq/2, we find

with c₂ ≥ 1 .                                               (A.3.35)


The last relation can be achieved by taking c₄ > 2c₂. In the sphere |a − ω| < R we also have from (A.3.33)

(A.3.36)

and the standard implicit function theorem guarantees the unique existence of an analytic solution a = a(α) of the equation

α − ω = a − ω − f^𝒩 .                                       (A.3.37)

By the same argument we verify the unique existence of a solution a, b, B of (A.3.32) in

(A.3.38)

which verifies the second half of (A.3.21), if δ₀ is chosen sufficiently small.
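The inversion step can be imitated numerically in one dimension (a stand-in only: the perturbation f below is hypothetical, playing the role of the small term f^𝒩 in (A.3.32)). When the perturbation and its derivative are small, the contraction iteration behind the implicit function theorem converges to the unique solution:

```python
# One-dimensional sketch of inverting a = alpha + f(alpha), as in (A.3.32);
# f is a made-up small perturbation standing in for the term f^N.
def invert(a, f, tol=1e-14, itmax=200):
    alpha = a                      # first guess: drop the small term
    for _ in range(itmax):
        new = a - f(alpha)         # contraction step alpha -> a - f(alpha)
        if abs(new - alpha) < tol:
            break
        alpha = new
    return alpha

f = lambda x: 0.05 * x ** 2        # hypothetical perturbation with small |f'|
alpha = invert(1.0, f)
assert abs(alpha + f(alpha) - 1.0) < 1e-12   # solves a = alpha + f(alpha)
```

The contraction factor is roughly the size of |f′| in the sphere, which is exactly what the Cauchy estimate preceding (A.3.36) controls in the proof.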


To estimate the solution u, v of (A.3.30) we make use of lemma A.1.2. The unique solution of (A.3.30) can be estimated by

|u|/r + |v|/s < c₃(r − r₊)^{−a} r^a δ ≤ c₄δ  in 𝒟_{ν+1}     (A.3.39)

since by (A.3.18) we have

(A.3.40)

We may choose the same constant c₄ as before, enlarging the previous one if necessary. This proves the first half of (A.3.21), the second part having been verified already above.
g) Having found the transformation 𝒰 we transform ℱ_ν [or (A.3.2)] into the new variables (A.3.17) and estimate the remainder terms Φ, Ξ to complete the induction proof.
For this purpose we introduce the following symbolic notation:

F = (f, g) ,  Φ̂ = (Φ, Ξ) ,                                  (A.3.41)

where the arguments in F are ψ, χ, a, b, B (not φ, ξ!) and in Φ̂ are ψ, χ, α, β, 𝔅.
Corresponding to the transformation 𝒰, we introduce the vector

W = (ψ + u, χ + v)                                          (A.3.42)


and its Jacobian matrix

W′ = ( 1 + u_ψ      0     )                                  (A.3.43)
     (   v_ψ     1 + v_χ ) .

Finally, let

(A.3.44)

Then the transformation equations which express that 𝒰 takes (A.3.2) into (A.3.17) take the form

W′(A₊ + Φ̂) = (A + F) ∘ 𝒰 ,                                  (A.3.45)

where on the left side we have matrix multiplication and on the rhs the circle ∘ indicates composition, i.e., substitution of φ by ψ + u, etc.
We compare these equations with those satisfied by solving (A.3.30, 32). In the present notation these equations take the form

W′A₀ − A₀ ∘ 𝒰 = F₀^ℛ                                        (A.3.46)
A₊ − A = F₀^𝒩 ,

where

(A.3.47)

and the meaning of F₀^ℛ, F₀^𝒩 is obvious. Adding these relations we have

W′A₀ − A₀ ∘ 𝒰 + (A₊ − A) = F₀                               (A.3.48)

with

F₀ = (f, g + g_ξ χ)                                          (A.3.49)

and the arguments ψ, χ, a, b, B. Subtracting (A.3.48) from (A.3.45) we find after a short calculation for Φ̂

W′Φ̂ = −I + (F ∘ 𝒰 − F₀) = −I + II + III ,                   (A.3.50)

I = ( u_ψ(a − ω) , v_ψ(a − ω) + v_χ[β + (B − Λ)χ] − (B − Λ)v ) ,   (A.3.51)

II = F ∘ 𝒰 − F₀ ∘ 𝒰 ,                                       (A.3.52)

III = F₀ ∘ 𝒰 − F₀ .                                         (A.3.53)

We proceed to estimate the three error terms. The second one is due to the linearization of F, the third is due to the evaluation of F − or F₀ − at a displaced argument, and the first is due to the fact that a was replaced by ω, etc., when we solved (A.3.30).
h) To estimate the quantity

(A.3.54)

(the + in |Φ̂|₊ refers to the new domain 𝒟₊ as well as to r₊, s₊; for the success of this approach it is essential that the norm is changed during the iteration) it suffices to estimate the corresponding terms |I|₊, |II|₊, |III|₊. This follows from the fact that the Jacobian W′ is close to unity. But since the two components Φ, Ξ are scaled differently, one has to show that even

(A.3.55)

is close to the identity. For the diagonal elements this is obvious from (A.3.21), and for the remaining term we have, by (A.3.21) and by Cauchy's estimate,

(A.3.56)

By our choice of s in (A.3.18) we have

(A.3.57)

and the term

(A.3.58)

can be made arbitrarily small by the choice of δ₀. Thus we have

(A.3.59)

The estimation of these terms is now straightforward, but we shall show how the various scale factors enter.
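Cauchy's estimate, invoked here and in part f), bounds a derivative of an analytic function by its supremum on a surrounding circle, |f′(z₀)| ≤ sup_{|z−z₀|=R} |f| / R. A quick numerical check with an arbitrary test function (not one from the proof):

```python
# Numerical check of Cauchy's estimate |f'(z0)| <= sup_{|z-z0|=R} |f| / R
# for an arbitrary entire test function (not taken from the proof).
import cmath

f = lambda z: z ** 3 + 2 * z
df = lambda z: 3 * z ** 2 + 2          # exact derivative for comparison
z0, R = 0.5 + 0.2j, 1.0
# approximate the supremum of |f| on the circle |z - z0| = R
sup = max(abs(f(z0 + R * cmath.exp(2j * cmath.pi * k / 1000)))
          for k in range(1000))
assert abs(df(z0)) <= sup / R
```

In the proof the radius R is one of the shrinking domain margins (such as r − r₊ or c₄q₊), which is where the powers of these scale factors in the estimates come from.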
i) First we shall show

(A.3.60)

To estimate a typical term in I, see (A.3.51), we consider

(A.3.61)

where we used (A.3.15 and 21). The other terms can be handled similarly, but the term (B − Λ)v requires special attention. It does not suffice to use the estimate |B − Λ| < q from (A.3.15), but rather

(A.3.62)

where we used (A.3.21). The statement (A.3.60) is then clear.
To turn to the expression II, we have to estimate the remainder in the Taylor expansion, e.g.,

|f(ψ, ξ, ···) − f(ψ, 0, ···)| ≤ max{|f_ξ|·|ξ|} ≤ (2q₊/s)|ξ| ,   (A.3.63)

where by (A.3.21)

|ξ| ≤ |χ| + |v| ≤ s₊(1 + (s/s₊)c₄δ) ≤ 2s₊ ,                 (A.3.64)

since by (A.3.57) the second term in the brackets can be made small by choice of δ₀. Hence

(A.3.65)

Similarly

(A.3.66)

hence

(A.3.67)


Finally, in III we use the mean value theorem, e.g.,

|f(ψ + u, χ + v) − f(ψ, χ)| ≤ max{|f_ψ|·|u|} + max{|f_χ|·|v|} ≤ 2q₊ · 2c₄δ .   (A.3.68)

With the corresponding estimate for g we find

(A.3.69)

Combining the estimates (A.3.60, 67, 68) for I, II, III we have

(A.3.70)

Clearly, the optimal choice for s is found by making both terms in the bracket equal. This agrees approximately with our choice, as is seen from (A.3.57). Thus, with (A.3.18) we get

(A.3.71)

Taking c large enough (c > c₅) we have through (A.3.18)

(A.3.72)

which was claimed in (A.3.20). This completes the proof of the theorem.

A.4 Proof of Theorem 6.2.1

a) In this section we show that the series expansion constructed in Sect. 6.3 actually converges. For this purpose we make use of the existence theorem A.3.1 of the previous section and establish the convergence of some power series solution. This solution, however, may not agree with that found by formal expansion (in theorem 6.2.1), since in Sect. 6.3 the normalization was imposed on the factors 𝒰_ν and not on the products 𝒰₁ ∘ 𝒰₂ ∘ … ∘ 𝒰_ν. But with the help of lemma A.2.1 we shall be able to establish the convergence of the unique solution described in theorem 6.2.1.
For the first step we consider the equations (6.3.1 and 2) and assume that f, g are real analytic in a fixed domain

|Im{φ}| < r ,  |ξ| < s ,  |ε| < ε₀ ,                        (A.4.1)


where

(A.4.2)

We apply theorem A.3.1 to these equations, where we replace f, g of Sect. A.3 by εf(φ, ξ, ε), εg(φ, ξ, ε). Although the proof of that theorem was carried out without such a complex parameter ε, one sees immediately that the solutions u, v, ā, b̄, B̄ of theorem A.3.1 are analytic functions of ε. In fact, the approximations constructed in the proof of theorem A.3.1 turn out to be analytic, and since the final solution is obtained as the uniform limit (in a complex domain) of the approximations, the final solution is analytic there. It remains to be verified that the inequalities required by theorem A.3.1,

|εf|/r + |εg|/s < r^a Kδ₀ ,                                  (A.4.3)

are satisfied. Clearly, (A.4.2) implies (A.4.3) if

(A.4.4)

This gives a lower bound for the radius of convergence of the solutions u(ψ, ε), v(ψ, χ, ε), ā(ε), b̄(ε), B̄(ε), which are analytic in |Im{ψ}| < r/2, |ε| < ε₀ (v is linear in χ).
Moreover, for ε = 0 the solutions constructed in Sect. A.3 reduce to u = v = 0, ā = ω, b̄ = 0, B̄ = Λ, as one sees by setting δ = 0 there. Hence u, v, ā − ω, b̄, B̄ − Λ can be expanded into power series, without constant terms, which converge in |ε| < ε₀.
This proves the existence of one analytic solution

u = u(ψ, ε)
v = v(ψ, χ, ε)
Δ = ā(ε) − ω                                                (A.4.5)
d = b̄(ε)
D = B̄(ε) − Λ

to our problem, and theorem 6.2.1 is established. In fact, as was stated at the end of Sect. A.2, the power series for Δ, d, D are uniquely determined and independent of normalizations.
b) We now turn to the proof of the convergence of the series expansion constructed in Sect. 6.3. It was normalized by the condition

(A.4.6)

which is not necessarily satisfied for the solution (A.4.5). We denote the transformation given by (A.4.5) by

𝒰:  φ = ψ + u(ψ, ε)                                        (A.4.7)
    ξ = χ + v(ψ, χ, ε) .

By lemma A.2.1 the most general solution is given in the form 𝒰 ∘ 𝒞, where 𝒞 ∈ 𝒢, while Δ, d, D are independent of the normalization. Therefore it suffices to show that a convergent series expansion for 𝒞 can be found such that the expansion of 𝒰 ∘ 𝒞 agrees with that found in theorem A.1.1. Since 𝒢 is finite dimensional, this assertion follows from the implicit function theorem, as we now show. With

𝒞:  ψ' = ψ + a          with Λb = 0 , ΛB = BΛ ,             (A.4.8)
    χ' = χ + b + Bχ

we get for 𝒰 ∘ 𝒞

φ = ψ + a + u(ψ + a, ε)                                     (A.4.9)
ξ = χ + b + Bχ + v(ψ + a, χ + b + Bχ, ε)

and it remains to find a, b, B in such a way that

( a + u , b + v + Bχ ) ∈ ℛ                                  (A.4.10)

is fulfilled.
We decompose

( u , v )

into its components in 𝒩 and ℛ and denote by P_𝒩 the projection of our function space onto 𝒩. Then we try to determine a, b, B in such a manner that

P_𝒩 ( a + u , b + Bχ + v ) = ( a , b + Bχ ) + P_𝒩 ( u , v )   (A.4.11)

vanishes. Here the arguments in u, v are ψ' = ψ + a, χ' = χ + b + Bχ, ε. This gives rise to finitely many equations in equally many unknowns (a, b, B) varying with 𝒩. For ε = 0 the functions u and v vanish and the solution is a = b = B = 0. By the implicit function theorem there exist analytic functions a(ε), b(ε), B(ε) without constant terms annihilating (A.4.11). This defines the mapping 𝒞 in (A.4.8) for which 𝒰 ∘ 𝒞 satisfies our normalization (A.4.6). Since 𝒰 and 𝒞 are

given by convergent series, so is 𝒰 ∘ 𝒞, provided |ε| is sufficiently small, as we wanted to prove.
c) This result frees one from the intricate construction in Sect. A.3 and ensures the convergence of the series obtained by formal expansion, at least if the above normalization is observed. Actually, the same statement holds if one requires instead that the free terms

(A.4.12)

which remain in the expansion form a convergent series. To summarize:

Theorem A.4.1. The formal series expansions for u(ψ, ε), v(ψ, χ, ε), Δ(ε), d(ε), D(ε) of theorem A.1.1 are convergent for sufficiently small ε, provided that (A.4.12) is prescribed as a convergent series in ε.

Bibliography and Comments

Since the field of synergetics has ties to many disciplines, an attempt to provide a more or less
complete list of references seems hopeless. Indeed, such a list would fill a whole volume. We therefore
confine the references to those works which we used in the preparation of this book. In addition, we
quote a number of papers, articles, or books which the reader might find useful for further study. We
list the references and further reading material according to the individual chapters.

1. Introduction
1.1 What is Synergetics About?

H. Haken: Synergetics, An Introduction, 3rd ed. (Springer, Berlin, Heidelberg, New York 1983)
This reference is referred to in the present book as [1]
H. Haken, R. Graham: Synergetik - Die Lehre vom Zusammenwirken. Umschau 6, 191 (1971)
H. Haken (ed.): Synergetics (Proceedings of a Symposium on Synergetics, Elmau 1972) (Teubner,
Stuttgart 1973)
H. Haken (ed.): Cooperative Effects, Progress in Synergetics (North-Holland, Amsterdam 1974)
H. Haken: Cooperative effects in systems far from thermal equilibrium and in nonphysical systems.
Rev. Mod. Phys. 47, 67 (1975)
A further source of references is the Springer Series in Synergetics, whose individual volumes are
listed in the front matter of this book.
For a popularisation see
H. Haken: Erfolgsgeheimnisse der Natur (Deutsche Verlagsanstalt, Stuttgart 1981)

1.2 Physics
The modern treatment of phase transitions of systems in thermal equilibrium rests on the renormali-
zation group approach:
K. G. Wilson: Phys. Rev. B4, 3174; 3184 (1971)
K. G. Wilson, M. E. Fisher: Phys. Rev. Lett. 28, 248 (1972)
F. J. Wegner: Phys. Rev. B5, 4529 (1972); B6, 1891 (1972)
T. W. Burkhardt, J. M. J. van Leeuwen (eds.): Real-Space Renormalization, Topics Curr. Phys.,
Vol. 30 (Springer, Berlin, Heidelberg, New York 1982)
Books and reviews on the subject are, for example,
K. G. Wilson, J. Kogut: Phys. Rep. 12C, 75 (1974)
C. Domb, M. S. Green (eds.): Phase Transitions and Critical Phenomena. Internat. Series of Mono-
graphs in Physics, Vols. 1 - 6 (Academic, London 1972 - 76)
S. K. Ma: Modern Theory of Critical Phenomena (Benjamin, Reading, MA 1976)

1.2.1 Fluids: Formation of Dynamic Patterns


Taylor Instability
G. I. Taylor: Philos. Trans. R. Soc. London A223, 289 (1923)
For recent and more detailed studies see, for example,
P. R. Fenstermacher, H. L. Swinney, J. P. Gollub: J. Fluid Mech. 94, 103 (1979)
R. C. DiPrima: In Transition and Turbulence, ed. by R. E. Meyer (Academic, New York 1981)
336 Bibliography and Comments

Benard Instability
H. Bénard: Rev. Gén. Sci. Pures Appl. 11, 1261, 1309 (1900)
Lord Rayleigh: Philos. Mag. 32, 529 (1916)
For more recent theoretical studies on linear stability see, e. g.
S. Chandrasekhar: Hydrodynamic and Hydromagnetic Stability (Clarendon, Oxford 1961)
For nonlinear treatments see
A. Schlüter, D. Lortz, F. Busse: J. Fluid Mech. 23, 129 (1965)
F. H. Busse: J. Fluid Mech. 30, 625 (1967)
A. C. Newell, J. A. Whitehead: J. Fluid Mech. 38, 279 (1969)
R. C. DiPrima, W. Eckhaus, L. A. Segel: J. Fluid Mech. 49, 705 (1971)
F. H. Busse: J. Fluid Mech. 52, 1 (1972)
F. H. Busse: Rep. Prog. Phys. 41, 1929 (1978)
Nonlinearity and fluctuations are treated in
H. Haken: Phys. Lett. 46A, 193 (1973); and, in particular, Rev. Mod. Phys. 47, 67 (1975); and
H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd ed. (Springer, Berlin, Heidelberg, New
York 1983)
R. Graham: Phys. Rev. Lett. 31, 1479 (1973); Phys. Rev. A10, 1762 (1974)
For recent experiments see also
G. Ahlers, R. Behringer: Phys. Rev. Lett. 40, 712 (1978)
G. Ahlers, R. Walden: Phys. Rev. Lett. 44, 445 (1980)
P. Bergé: In Dynamical Critical Phenomena and Related Topics, ed. by C. P. Enz, Lecture Notes
Phys., Vol. 104 (Springer, Berlin, Heidelberg, New York 1979) p. 288
F. H. Busse, R. M. Clever: J. Fluid Mech. 102, 75 (1981)
M. Giglio, S. Musazzi, U. Perini: Phys. Rev. Lett. 47, 243 (1981)
E. L. Koschmieder, S. G. Pallas: Int. J. Heat Mass Transfer 17, 991 (1974)
J. P. Gollub, S. V. Benson: J. Fluid Mech. 100, 449 (1980)
J. Maurer, A. Libchaber: J. Phys. Paris Lett. 39, 369 (1978); 40, 419 (1979); 41, 515 (1980)
G. Pfister, I. Rehberg: Phys. Lett. 83A, 19 (1981)
H. Haken (ed.): Chaos and Order in Nature, Springer Ser. Synergetics, Vol. 11 (Springer, Berlin,
Heidelberg, New York 1981), see in particular contributions by A. Libchaber and S. Fauve, E. O.
Schulz-DuBois et al., P. Bergé, F. H. Busse
H. Haken (ed.): Evolution of Order and Chaos, Springer Ser. Synergetics, Vol. 17 (Springer, Berlin,
Heidelberg, New York 1982)
H. L. Swinney, J. P. Gollub (eds.): Hydrodynamic Instabilities and the Transition to Turbulence,
Topics Appl. Phys., Vol. 45 (Springer, Berlin, Heidelberg, New York 1981)
Texts and Monographs on Hydrodynamics
L. D. Landau, E. M. Lifshitz: Course of Theoretical Physics, Vol. 6 (Pergamon, London, New York
1959)
Chia-Shun Yih: Fluid Mechanics (University Press, Cambridge 1970)
C. C. Lin: Hydrodynamic Stability (University Press, Cambridge 1967)
D. D. Joseph: Stability of Fluid Motions, Springer Tracts Nat. Phil., Vols. 27, 28 (Springer, Berlin,
Heidelberg, New York 1976)
Meteorology
R. Scorer: Clouds of the World (Lothian, Melbourne 1972)

1.2.2 Lasers: Coherent Oscillations


Early papers on laser theory including quantum fluctuations are
H. Haken: Z. Phys. 181, 96 (1964); 190, 327 (1966)
H. Risken: Z. Phys. 186, 85 (1965)
R. D. Hempstead, M. Lax: Phys. Rev. 161,350 (1967)
W. Weidlich, H. Risken, H. Haken: Z. Phys. 201, 396 (1967)
M. Scully, W. E. Lamb: Phys. Rev. 159, 208 (1967); 166, 246 (1968)
H. Haken: Rev. Mod. Phys. 47, 67 (1975)
Laser-phase transition analogy


R. Graham, H. Haken: Z. Phys. 213, 420 (1968)


R. Graham, H. Haken: Z. Phys. 237, 31 (1970)
V. DeGiorgio, M. O. Scully: Phys. Rev. A2, 1170 (1970)
Ultra Short Pulses
R. Graham, H. Haken: Z. Phys. 213, 420 (1968)
H. Risken, K. Nummedal: Phys. Lett. 26A, 275 (1968); J. Appl. Phys. 39, 4662 (1968)
H. Haken, H. Ohno: Opt. Commun. 16, 205 (1976); Phys. Lett. 59A, 261 (1976)
H. Knapp, H. Risken, H. D. Vollmer: Appl. Phys. 15, 265 (1978)
M. Büttiker, H. Thomas: In Solitons and Condensed Matter Physics, ed. by A. R. Bishop,
T. Schneider, Springer Ser. Solid-State Sci., Vol. 8 (Springer, Berlin, Heidelberg, New York
1981) p. 321
J. Zorell: Opt. Commun. 38,127 (1981)
Optical Bistability (some early and recent treatments):
S. L. McCall: Phys. Rev. A9, 1515 (1974)
R. Bonifacio, L. A. Lugiato: Opt. Commun. 19, 172 (1976)
R. Salomaa, S. Stenholm: Phys. Rev. A8, 2695 (1973)
A. Kossakowski, T. Marzalek: Z. Phys. B23, 205 (1976)
L. A. Lugiato, V. Benza, L. M. Narducci, J. D. Farina: Opt. Commun. 39, 405 (1981)
L. A. Lugiato, V. Benza, L. M. Narducci: In Evolution of Order and Chaos, ed. by H. Haken,
Springer Ser. Synergetics, Vol. 17 (Springer, Berlin, Heidelberg, New York 1982) p. 120
M. G. Velarde: ibid., p. 132
L. A. Lugiato: In Progress in Optics (North-Holland, Amsterdam 1983)
R. Bonifacio (ed.): Dissipative Systems in Quantum Optics, Topics Curr. Phys., Vol. 27 (Springer,
Berlin, Heidelberg, New York 1982)

Books: Laser Theory


H. Haken: Laser Theory, in Encyclopedia of Physics, Vol. XXV/2c, Light and Matter Ic (Springer,
Berlin, Heidelberg, New York 1970) and reprint edition Laser Theory (Springer, Berlin,
Heidelberg, New York 1983)
M. Sargent, M. O. Scully, W. E. Lamb: Laser Physics (Addison-Wesley, Reading, MA 1974)

1.2.3 Plasmas: A Wealth of Instabilities


(We can give only a small selection of titles)
F. Cap: Handbook on Plasma Instabilities, Vols. 1, 2 (Academic, New York 1976 and 1978)
A. B. Mikhailovskii: Theory of Plasma Instabilities, Vols. 1, 2 (Consultants Bureau, New York,
London 1974)
H. Wilhelmsson, J. Weiland: Coherent Non-Linear Interaction of Waves in Plasmas (Pergamon,
Oxford 1977)
S. G. Thornhill, D. ter Haar: Phys. Rep. 43C, 43 (1978)

1.2.4 Solid State Physics: Multistability, Pulses, Chaos


Gunn Oscillator
J. B. Gunn: Solid State Commun. 1, 88 (1963)
J. B. Gunn: IBM Res. Develop. 8, 141 (1964)
K. Nakamura: J. Phys. Soc. Jpn. 38, 46 (1975)

Tunnel Diodes
C. Zener: Proc. R. Soc. London 145, 523 (1934)
L. Esaki: Phys. Rev. 109, 603 (1958)
R. Landauer: J. Appl. Phys. 33, 2209 (1962)
R. Landauer, J. W. F. Woo: In Synergetics, ed. by H. Haken (Teubner, Stuttgart 1973) p. 97


Thermoelastic Instabilities
C. E. Bottani, G. Caglioti, P. M. Ossi: J. Phys. F 11, 541 (1981)
G. Caglioti, A. F. Milone (eds.): Mechanical and Thermal Behaviour of Metallic Materials. Proc. Int.
School of Physics Enrico Fermi (North-Holland, Amsterdam 1982)
Crystal Growth
J. S. Langer: In Fluctuations, Instabilities and Phase Transitions, ed. by T. Riste (Plenum, New York
1975) p. 82
J. S. Langer: Rev. Mod. Phys. 52, 1 (1980)

1.3 Engineering

1.3.1 Civil, Mechanical, and Aero-Space Engineering: Post-Buckling Patterns, Flutter etc.
J. M. T. Thompson, G. W. Hunt: A General Theory of Elastic Stability (Wiley, London 1973)
K. Huseyin: Nonlinear Theory of Elastic Stability (Noordhoff, Leyden 1975)
D. O. Brush, B. O. Almroth: Buckling of Bars, Plates and Shells (McGraw-Hill, New York 1975)

1.3.2 Electrical Engineering and Electronics: Nonlinear Oscillations


A. A. Andronov, A. A. Vitt, S. E. Khaikin: Theory of Oscillators (Pergamon, Oxford, London 1966)
N. Minorsky: Nonlinear Oscillations (Van Nostrand, Princeton 1962)
C. Hayashi: Nonlinear Oscillations in Physical Systems (McGraw-Hill, New York 1964)
P. S. Linsay: Phys. Rev. Lett. 47, 1349 (1981)

1.4 Chemistry: Macroscopic Patterns

W. C. Bray: J. Am. Chem. Soc. 43, 1262 (1921)
B. P. Belousov: Sb. Ref. Radiats. Med. Moscow (1959)
V. A. Vavilin, A. M. Zhabotinsky, L. S. Yaguzhinsky: Oscillatory Processes in Biological and
Chemical Systems (Science Publ., Moscow 1967) p. 181
A. N. Zaikin, A. M. Zhabotinsky: Nature 225, 535 (1970)
A. M. Zhabotinsky, A. N. Zaikin: J. Theor. Biol. 40, 45 (1973)
A. M. Turing: Philos. Trans. R. Soc. London B237, 37 (1952)
G. Nicolis, I. Prigogine: Self-Organization in Non-Equilibrium Systems (Wiley, New York 1977)
H. Haken: Z. Phys. B20, 413 (1975)
G. F. Oster, A. S. Perelson: Arch. Rat. Mech. Anal. 55, 230 (1974)
A. S. Perelson, G. F. Oster: Arch. Rat. Mech. Anal. 57, 31 (1974/75)
G. Nicolis: Adv. Chem. Phys. 19, 209 (1971)
B. Chance, E. K. Pye, A. K. Ghosh, B. Hess (eds.): Biological and Biochemical Oscillators
(Academic, New York 1973)
G. Nicolis, J. Portnow: Chem. Rev. 73, 365 (1973)
R. M. Noyes, R. J. Field: Annu. Rev. Phys. Chem. 25, 95 (1975)
J. J. Tyson: The Belousov-Zhabotinsky Reaction. Lecture Notes Biomath., Vol. 10 (Springer, Berlin,
Heidelberg, New York 1976)
P. C. Fife: Mathematical Aspects of Reacting and Diffusing Systems. Lecture Notes Biomath., Vol.
28 (Springer, Berlin, Heidelberg, New York 1979)
A. Pacault, C. Vidal (eds.): Synergetics. Far from Equilibrium. Springer Ser. Synergetics, Vol. 3
(Springer, Berlin, Heidelberg, New York 1979)
C. Vidal, A. Pacault (eds.): Nonlinear Phenomena in Chemical Dynamics. Springer Ser. Synergetics,
Vol. 12 (Springer, Berlin, Heidelberg, New York 1981)

1.5 Biology

1.5.1 Some General Remarks


T. H. Bullock, R. Orkand, A. Grinnell: Introduction to Nervous Systems (Freeman, San Francisco
1977)


A. C. Scott: Neurophysics (Wiley, New York 1977)
E. Başar: Biophysical and Physiological System Analysis (Addison-Wesley, Reading, MA 1976)
M. Conrad, W. Güttinger, M. Dal Cin (eds.): Physics and Mathematics of the Nervous System.
Lecture Notes Biomath., Vol. 4 (Springer, Berlin, Heidelberg, New York 1974)
A. V. Holden: Models of Stochastic Activity of Neurons. Lecture Notes Biomath., Vol. 12 (Springer,
Berlin, Heidelberg, New York 1976)
Berlin, Heidelberg, New York 1976)
H. Shimizu: Adv. Biophys. 13, 195 (1979)

1.5.2 Morphogenesis
A. M. Turing: Philos. Trans. R. Soc. London B237, 37 (1952)
L. Wolpert: J. Theor. Biol. 25, 1 (1969)
A. Gierer, H. Meinhardt: Kybernetik 12, 30 (1972); J. Cell Sci. 15, 321 (1974)
H. Haken, H. Olbrich: J. Math. Biol. 6, 317 (1978)
J. D. Murray: J. Theor. Biol. 88, 161 (1981)
C. Berding, H. Haken: J. Math. Biol. 14, 133 (1982)

1.5.3 Population Dynamics


A. Lotka: Proc. Nat. Acad. Sci. (USA) 6, 410 (1920)
V. Volterra: Leçons sur la Théorie Mathématique de la Lutte pour la Vie (Gauthier-Villars, Paris 1931)
N. S. Goel, S. C. Maitra, E. W. Montroll: Rev. Mod. Phys. 43, 231 (1971)
T. N. E. Greville (ed.): Population Dynamics (Academic, London 1972)
D. Ludwig: In Stochastic Population Theories, ed. by S. Levin, Lecture Notes Biomath., Vol. 3
(Springer, Berlin, Heidelberg, New York 1974)
R. M. May: Nature 261, 459 (1976)
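The predator-prey equations of Lotka and Volterra cited above are easy to experiment with numerically. The following sketch (all parameter values and initial conditions are illustrative, not taken from any of the papers listed) integrates the system with a fourth-order Runge-Kutta step and monitors the conserved quantity that makes the orbits closed:

```python
# Lotka-Volterra predator-prey model:
#   dx/dt =  a*x - b*x*y   (prey x)
#   dy/dt = -c*y + d*x*y   (predator y)
# The quantity V(x, y) = d*x - c*ln(x) + b*y - a*ln(y) is conserved,
# so its numerical drift measures the integration error.
import math

a, b, c, d = 1.0, 0.5, 1.0, 0.2

def rhs(x, y):
    return a * x - b * x * y, -c * y + d * x * y

def rk4_step(x, y, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = rhs(x, y)
    k2 = rhs(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = rhs(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = rhs(x + h * k3[0], y + h * k3[1])
    x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

def invariant(x, y):
    return d * x - c * math.log(x) + b * y - a * math.log(y)

x, y = 10.0, 5.0
V0 = invariant(x, y)
for _ in range(20000):          # integrate to t = 20
    x, y = rk4_step(x, y, 0.001)
drift = abs(invariant(x, y) - V0)
```

Populations stay positive and the invariant is conserved to high accuracy, reflecting the closed-orbit (neutrally stable) character of the model discussed in the references above.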

1.5.4 Evolution
M. Eigen: Naturwissenschaften 58, 465 (1971)
M. Eigen, P. Schuster: Naturwissenschaften 64, 541 (1977); 65, 7 (1978); 65, 341 (1978)
W. Ebeling, R. Feistel: Physik der Selbstorganisation und Evolution (Akademie-Verlag, Berlin 1982)

1.5.5 Immune System


F. M. Burnet: Immunology, Aging, and Cancer (Freeman, San Francisco 1976)
C. DeLisi: Antigen Antibody Interactions, Lecture Notes Biomath., Vol. 8 (Springer, Berlin,
Heidelberg, New York 1976)
N. Dubin: A Stochastic Model for Immunological Feedback in Carcinogenesis. Lecture Notes
Biomath., Vol. 9 (Springer, Berlin, Heidelberg, New York 1976)
P. H. Richter: Pattern formation in the immune system. Lect. Math. Life Sci. 11, 89 (1979)

1.6 Computer Sciences


1.6.1 Self-Organization of Computers, in Particular Parallel Computing
R. W. Hockney, C. R. Jesshope: Parallel Computers (Hilger, Bristol 1981)

1.6.2 Pattern Recognition by Machines


K. S. Fu: Digital Pattern Recognition, 2nd ed. (Springer, Berlin, Heidelberg, New York 1980)
K. S. Fu: Syntactic Pattern Recognition Applications (Springer, Berlin, Heidelberg, New York 1976)
K. S. Fu: In Pattern Formation by Dynamic Systems and Pattern Recognition, ed. by H. Haken,
Springer Ser. Synergetics, Vol. 5 (Springer, Berlin, Heidelberg, New York 1979) p. 176
T. Kohonen: Associative Memory - A System Theoretical Approach (Springer, Berlin, Heidelberg,
New York 1978)
T. Kohonen: Self-Organization and Associative Memory, Springer Ser. Inf. Sci., Vol. 8 (Springer,
Berlin, Heidelberg, New York 1983)
H. Haken (ed.): Pattern Formation by Dynamic Systems and Pattern Recognition, Springer Ser.
Synergetics, Vol. 5 (Springer, Berlin, Heidelberg, New York 1979)


1.6.3 Reliable Systems from Unreliable Elements


H. Haken: unpublished material

1.7 Economy

G. Mensch, K. Kaasch, A. Kleinknecht, R. Schnopp: IIM/dp 80-5, Innovation Trends and Switching
between Full- and Under-Employment Equilibria, 1950-1978. Discussion Paper Series,
International Institute of Management, Wissenschaftszentrum Berlin
H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd ed. (Springer, Berlin, Heidelberg, New
York 1983)
W. Weidlich, G. Haag: Quantitative Sociology, Springer Ser. Synergetics, Vol. 14 (Springer, Berlin,
Heidelberg, New York 1983)
H. Haken: Erfolgsgeheimnisse der Natur (Deutsche Verlagsanstalt, Stuttgart 1981)

1.8 Ecology
Ch. J. Krebs: Ecology. The Experimental Analysis of Distribution and Abundance (Harper and Row,
New York 1972)
R. E. Ricklefs: Ecology (Nelson, London 1973)

1.9 Sociology

S. E. Asch: Social Psychology (Prentice Hall, New York 1952) p. 452


W. Weidlich: Collect. Phenom. 1, 51 (1972)
E. Noelle-Neumann: Die Schweigespirale (Piper, München 1980) [English transl. (to appear 1983):
The Spiral of Silence: Public Opinion - The Skin of Time (University of Chicago Press)]
A. Wunderlin, H. Haken: Lecture Notes, Projekt Mehrebenenanalyse im Rahmen des
Forschungsschwerpunkts Mathematisierung (Universität Bielefeld 1980)
H. Haken: Erfolgsgeheimnisse der Natur (Deutsche Verlagsanstalt, Stuttgart 1981)
W. Weidlich, G. Haag: Quantitative Sociology, Springer Ser. Synergetics, Vol. 14 (Springer, Berlin,
Heidelberg, New York 1983)

1.11 The Kind of Equations we Want to Study

For a general background, see also


H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd ed. (Springer, Berlin, Heidelberg, New
York 1983)
Because the individual topics of Sects. 1.11-17 will be dealt with in detail in later chapters, we refer
the reader to the corresponding references belonging to those chapters.
Here we quote only those references which will not be quoted later.

1.11.1 Differential Equations


R. Courant, D. Hilbert: Methods of Mathematical Physics, Vols. 1, 2 (Wiley, New York 1962)
P. M. Morse, H. Feshbach: Methods of Theoretical Physics, Vols. 1, 2 (McGraw-Hill, New York
1953)
L. W. F. Elen: Differential Equations, Vols. 1,2 (MacMillan, London 1967)

1.11.2 First-Order Differential Equations


E. A. Coddington, N. Levinson: Theory of Ordinary Differential Equations (McGraw-Hill, New
York 1955)
1.11.3 Nonlinearity
V. V. Nemytskii, V. V. Stepanov: Qualitative Theory of Differential Equations (University Press,
Princeton 1960)
M. W. Hirsch, S. Smale: Differential Equations, Dynamical Systems, and Linear Algebra
(Academic, New York 1974)


Z. Nitecki: Differentiable Dynamics (MIT Press, Cambridge, MA 1971)


R. Abraham, J. E. Marsden: Foundations of Mechanics (Benjamin/Cummings, Reading, MA 1978)
S. Smale: The Mathematics of Time (Springer, Berlin, Heidelberg, New York 1980)

1.11.4 Control Parameters


H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd ed. (Springer, Berlin, Heidelberg, New
York 1983)

1.11.5 Stochasticity
J. L. Doob: Stochastic Processes (Wiley, New York 1953)
M. Loeve: Probability Theory (van Nostrand, Princeton 1963)
R. von Mises: Mathematical Theory of Probability and Statistics (Academic, New York 1964)
Yu. V. Prokhorov, Yu. A. Rozanov: Probability Theory, Grundlehren der mathematischen Wissen-
schaften in Einzeldarstellungen, Vol. 157 (Springer, Berlin, Heidelberg, New York 1968)
R. C. Dubes: The Theory of Applied Probability (Prentice Hall, Englewood Cliffs, NJ 1968)
W. Feller: An Introduction to Probability Theory and Its Applications, Vol. I (Wiley, New York
1971)
Kai Lai Chung: Elementary Probability Theory with Stochastic Processes (Springer, Berlin, Heidel-
berg, New York 1974)
T. Hida: Brownian Motion, Applications of Mathematics, Vol. 11 (Springer, Berlin, Heidelberg,
New York 1980)
Statistical Mechanics
L. D. Landau, E. M. Lifshitz: In Course of Theoretical Physics, Vol. 5 (Pergamon, London 1952)
R. Kubo: Thermodynamics (North-Holland, Amsterdam 1968)
D. N. Zubarev: Non-Equilibrium Statistical Thermodynamics (Consultants Bureau, New York 1974)
Quantum Fluctuations
H. Haken: Laser Theory, in Encyclopedia of Physics, Vol. XXV/2c, Light and Matter Ic (Springer,
Berlin, Heidelberg, New York 1970) and reprint edition Laser Theory (Springer, Berlin,
Heidelberg, New York 1983)
with many further references.
Chaos
H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd ed. (Springer, Berlin, Heidelberg, New
York 1978)
H. Haken (ed.): Chaos and Order in Nature, Springer Ser. Synergetics, Vol. 11 (Springer, Berlin,
Heidelberg, New York 1981)
H. Haken (ed.): Evolution of Order and Chaos, Springer Ser. Synergetics, Vol. 17 (Springer, Berlin, Heidelberg, New
York 1982)

1.11.6 Many Components and the Mezoscopic Approach


H. Haken: unpublished material

1.12 How to Visualize Solutions

Y. Choquet-Bruhat, C. DeWitt-Morette, M. Dillard-Bleick: Analysis, Manifolds and Physics
(North-Holland, Amsterdam 1982)
R. D. Richtmyer: Principles of Advanced Mathematical Physics II (Springer, Berlin, Heidelberg,
New York 1981)

1.13 Qualitative Changes: General Approach

H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd ed. (Springer, Berlin, Heidelberg, New
York 1983)
D'Arcy W. Thompson: On Growth and Form (Cambridge University Press, London 1961)


1.14 Qualitative Changes: Typical Phenomena

See H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd ed. (Springer, Berlin, Heidelberg,
New York 1983) and references cited in later chapters. Here references are presented for

1.14.1 Lyapunov Exponents


V. I. Oseledec: A multiplicative ergodic theorem. Lyapunov characteristic numbers for dynamical
systems. Tr. Mosk. Mat. Obs. 19, 179 (1968) [English transl.: Trans. Moscow Math. Soc. 19, 197
(1968)]
Ya. B. Pesin: Characteristic Lyapunov Exponents and Smooth Ergodic Theory. Russ. Math. Surv.
32(4), 55 (1977)
D. Ruelle: "Sensitive Dependence on Initial Conditions and Turbulent Behavior of Dynamical
Systems", in Bifurcation Theory and Its Applications in Scientific Disciplines, ed. by O. Gurel,
O. E. Rössler, Ann. N.Y. Acad. Sci. 316 (1979)
J. D. Farmer: Physica 4D, 366 (1982)
K. Tomita: Phys. Rep. 86, 113 (1982)
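For a one-dimensional map, the Lyapunov exponent treated in the references above reduces to an orbit average of ln|f'(x_n)|. A minimal numerical sketch for the logistic map x_{n+1} = r x_n(1 - x_n), whose exponent at r = 4 is known analytically to be ln 2 (the particular starting point and iteration counts here are illustrative choices):

```python
import math

def lyapunov_logistic(r, x0=0.1234, burn_in=1000, n_iter=200_000):
    """Estimate the Lyapunov exponent of the logistic map as the time
    average of ln|f'(x_n)| = ln|r(1 - 2*x_n)| along a single orbit."""
    x = x0
    for _ in range(burn_in):            # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n_iter

lam_chaotic = lyapunov_logistic(4.0)    # chaotic regime, exact value ln 2
lam_periodic = lyapunov_logistic(3.2)   # stable 2-cycle, exponent negative
```

A positive exponent (sensitive dependence on initial conditions, in Ruelle's sense above) signals chaos; a negative one signals an attracting periodic orbit.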

1.15 The Impact of Fluctuations


See the references in Sect. 1.11.5 as well as those of later chapters

1.16 Evolution of Spatial Patterns


H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd ed. (Springer, Berlin, Heidelberg, New
York 1983)

1.17 Discrete Maps: The Poincare Map

Discrete Maps are treated in Chap. 11. The Poincare map is discussed, e. g., in
R. Abraham, J. E. Marsden: Foundations of Mechanics (Benjamin/Cummings, Reading, MA 1978)

1.18 Discrete Noisy Maps


Compare Chap. 11

1.19 Pathways to Self-Organization


1.19.1 Self-Organization Through Change of Control Parameters
H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd ed. (Springer, Berlin, Heidelberg, New
York 1983)

1.19.2 Self-Organization Through Change of Number of Components


H. Haken: Prog. Theor. Phys. Suppl. 69, 30 (1980)

1.19.3 Self-Organization Through Transients


H. Haken: Unpublished material

2. Linear Ordinary Differential Equations


2.2 Groups and Invariance
E. C. G. Sudarshan, N. Mukunda: Classical Dynamics: A Modern Perspective (Wiley, New York
1974)
R. D. Richtmyer: Principles of Advanced Mathematical Physics II (Springer, Berlin, Heidelberg,
New York 1981)


2.3 Driven Systems


G. Duffing: Erzwungene Schwingungen bei veränderlicher Eigenfrequenz und ihre technische
Bedeutung (Vieweg, Braunschweig 1918)
C. Hayashi: Nonlinear Oscillations in Physical Systems (McGraw-Hill, New York 1964)
compare Sect. 2.1.1 also

2.4 General Theorems on Algebraic and Differential Equations


2.4.2 Jordan's Normal Form
R. Bellman, K. L. Cooke: Introduction to Matrix Analysis (McGraw-Hill, New York 1960)

2.4.3 Some General Theorems on Linear Differential Equations


A comprehensive treatment is provided by
N. Dunford, J. T. Schwartz: Linear Operators, Pure and Applied Mathematics, Vol. VII, Parts
I - III (Wiley, Interscience, New York 1957)

2.4.4 Generalized Characteristic Exponents


see Sect. 2.4.3. For the Lyapunov exponents see Sect. 1.14.6.
Theorem on vanishing Lyapunov exponents:
H. Haken: Phys. Lett. 94A, 71 (1983)

2.6 Linear Differential Equations with Constant Coefficients


e.g.,
E. A. Coddington, N. Levinson: Theory of Ordinary Differential Equations (McGraw-Hill, New
York 1955)

2.7 Linear Differential Equations with Periodic Coefficients


G. Floquet: Sur les équations différentielles linéaires à coefficients périodiques. Ann. École Norm.
Sup., Sér. 2, 12, 47 (1883)

2.8 Group Theoretical Interpretation


compare Sect. 2.2

2.9 A Perturbation Approach


H. Haken: Unpublished material

3. Linear Ordinary Differential Equations with Quasiperiodic Coefficients


The results of this chapter, with the exception of Sect. 3.9, were obtained by the present author. The
operator T in Eq. (3.1.6) was introduced in
H. Haken: Z. Naturforsch. 9A, 228 (1954)
where the case of a unitary representation of T was treated and the form (3.1.20) proven. For further
results see
H. Haken: In Dynamics of Synergetic Systems, Springer Ser. Synergetics, Vol. 6, ed. by H. Haken
(Springer, Berlin, Heidelberg, New York 1980) p. 16
For the proof of Theorem 3.8.2 I used auxiliary theorems represented in
N. Dunford, J. T. Schwartz: Linear Operators, Pure and Applied Mathematics, Vol. VII, Parts
I - III (Wiley, Interscience, New York 1957)
For attempts of other authors at this problem cf.
N. N. Bogoliubov, I. A. Mitropolskii, A. M. Samoilenko: Methods of Accelerated Convergence in


Nonlinear Mechanics (Springer, Berlin, Heidelberg, New York 1976),


where further references are given.
The results of Sect. 3.9 are taken from N. N. Bogoliubov, I. A. Mitropolskii, A. M. Samoilenko, l.c.

4. Stochastic Nonlinear Differential Equations


For the original papers on the Ito and Stratonovich calculus see
K. Ito: Lectures on Stochastic Processes (Tata Institute of Fundamental Research, Bombay 1961)
K. Ito: Stochastic Processes (Universitet Matematisk Institut, Aarhus 1969)
K. Ito, H.P. McKean: Diffusion Processes and Their Sample Paths (Springer, Berlin, Heidelberg,
New York 1965)
K. Ito: Nagoya Math. J. 1, 35 (1950)
K. Ito: Nagoya Math. J. 3, 55 (1951)
K. Ito: On Stochastic Differential Equations (Am. Math. Soc., New York 1951)
P. Langevin: Sur la théorie du mouvement brownien. C. R. Acad. Sci. Paris 146, 530 (1908)
R. L. Stratonovich: SIAM J. Control 4, 362 (1966)
Recent texts and monographs include
I. I. Gihman, A. V. Skorohod: Stochastic Differential Equations (Springer, Berlin, Heidelberg,
New York 1972)
L. Arnold: Stochastic Differential Equations (Oldenbourg, München 1973)
N. G. van Kampen: Stochastic Processes in Physics and Chemistry (North-Holland, Amsterdam
1981)
C. W. Gardiner: Handbook of Stochastic Methods, Springer Ser. Synergetics, Vol. 13 (Springer,
Berlin, Heidelberg, New York 1983)

5. The World of Coupled Nonlinear Oscillators


5.1 Linear Oscillators Coupled Together
This section gives only a sketch, for detailed treatments of nonlinear oscillators see
N. N. Bogoliubov, Y. A. Mitropolsky: Asymptotic Methods in the Theory of Nonlinear Oscillations
(Hindustan Publ. Corp., New Delhi 1961)
N. Minorsky: Nonlinear Oscillations (Van Nostrand, Toronto 1962)
A. A. Andronov, A. A. Vitt, S. E. Khaikin: Theory of Oscillators (Pergamon, London 1966)

5.2 Perturbations of Quasiperiodic Motion for Time-Independent Amplitudes (Persistence of
Quasiperiodic Motion)

Our presentation is based on results reported in


N. N. Bogoliubov, I. A. Mitropolskii, A. M. Samoilenko: Methods of Accelerated Convergence in
Nonlinear Mechanics (Springer, Berlin, Heidelberg, New York 1976)

5.3 Some Considerations on the Convergence of the Procedure

see Sect. 5.2

6. Nonlinear Coupling of Oscillators: The Case of Persistence of Quasiperiodic
Motion
A. N. Kolmogorov: Dokl. Akad. Nauk SSSR 98, 527 (1954)
V. I. Arnol'd: Russ. Math. Surv. 18, 9 (1963)
J. Moser: Math. Ann. 169, 136 (1967)
In this chapter we essentially follow Moser's paper, but we use a somewhat different representation


Further reading
J. Moser: "Nearly Integrable and Integrable Systems", in Topics in Nonlinear Dynamics, ed. by S.
Jorna (AIP Conf. Proc. 46, 1978) p. 1
M. V. Berry: "Regular and Irregular Motion", in Topics in Nonlinear Dynamics, ed. by S. Jorna
(AIP Conf. Proc. 46, 1978) p. 16

7. Nonlinear Equations. The Slaving Principle


This chapter is based on a slight generalization of
H. Haken, A. Wunderlin: Z. Phys. B47, 179 (1982)
An early version, applied to quasiperiodic motion (in the case of laser theory) was developed by
H. Haken: Talk at the International Conference on Optical Pumping, Heidelberg (1962); also
H. Haken, H. Sauermann: Z. Phys. 176, 47 (1963)

Here, by an appropriate decomposition of the variables into rapidly oscillating parts and slowly
varying amplitudes, the atomic variables were expressed by the field modes (order parameters)
Other procedures are given in
H. Haken: Z. Phys. B20, 413 (1975); B21, 105 (1975); B22, 69 (1975); B23, 388 (1975) and
H. Haken: Z. Phys. B29, 61 (1978); B30, 423 (1978)
The latter procedures are based on rapidly converging continued fractions, at the expense that the
slaved variables depend on the order parameters (unstable modes) at previous times (in higher order
approximation). These papers included fluctuations of the Langevin type.
In a number of special cases (in particular, if the fluctuations are absent), relations can be established
to other theorems and procedures, developed in mathematics, theoretical physics, or other disci-
plines.
Relations between the slaving principle and the center manifold theorem (and related theorems) are
studied by
A. Wunderlin, H. Haken: Z. Phys. B44, 135 (1981)
For the center manifold theorem, see
V. A. Pliss: Izv. Akad. Nauk SSSR, Mat. Ser. 28, 1297 (1964)
A. Kelley: In Transversal Mappings and Flows, ed. by R. Abraham, J. Robbin (Benjamin, New York
1967)
In contrast to the center manifold theorem, the slaving principle contains fluctuations, includes the
surrounding of the center manifold, and provides a construction of s(u, φ, t).
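The mechanism underlying the slaving principle, fast, heavily damped modes adiabatically following the slow order parameters, can be illustrated with a hypothetical two-variable model (both equations and all coefficients are invented for the demonstration; they are not taken from the papers cited above):

```python
# Toy model of slaving:
#   du/dt = eps*u - u**3       (slow order parameter u)
#   ds/dt = -gamma*s + u**2    (fast, heavily damped mode s)
# For large gamma the fast mode is "enslaved" by the order parameter:
# after a short transient, s(t) ≈ u(t)**2 / gamma at every instant.
import math

eps, gamma = 0.5, 50.0
dt, steps = 1e-3, 50_000       # forward Euler up to t = 50

u, s = 0.01, 0.0
for _ in range(steps):
    u, s = (u + (eps * u - u**3) * dt,
            s + (-gamma * s + u * u) * dt)

slaving_error = abs(s - u * u / gamma)   # deviation from the slaving relation
u_stationary = math.sqrt(eps)            # stable stationary order parameter
```

The slaved variable carries no independent dynamics of its own in the long run; eliminating it leaves a closed equation for u alone, which is the reduction the slaving principle formalizes (including fluctuations and memory corrections in the cited papers).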

8. Nonlinear Equations. Qualitative Macroscopic Changes


In this chapter I present an approach initiated in 1962 (H. Haken: Talk at the International
Conference on Optical Pumping, Heidelberg 1962), and applied to laser theory including quasi-
periodic motion, e. g. bifurcation to tori (see, e. g.,
H. Haken, H. Sauermann: Z. Phys. 176, 47 (1963)
H. Haken: Laser Theory, in Encyclopedia of Physics, Vol. XXV/2c, Light and Matter Ic (Springer,
Berlin, Heidelberg, New York 1970) and reprint edition Laser Theory (Springer, Berlin,
Heidelberg, New York 1983)
This author's approach is based on the slaving principle and represents, in modern language,
"dynamic bifurcation theory" (which allows one to cope with transients and fluctuations). "Static"
bifurcation theory was initiated in the classical papers by
H. Poincaré: Les méthodes nouvelles de la mécanique céleste, T. 1 (Gauthier-Villars, Paris 1892)
H. Poincaré: Acta Math. 7, 1 (1885)
A. M. Lyapunov: Sur la masse liquide homogène donnée d'un mouvement de rotation. Zap. Acad.
Nauk, St. Petersburg 1, 1 (1906)
E. Schmidt: Zur Theorie der linearen und nichtlinearen Integralgleichungen, 3. Teil, Math. Annalen
65, 370 (1908)


While this field seems to have been more or less dormant for a while (with the exception of
bifurcation theory in fluid dynamics), the past decade has seen a considerable increase of interest as
reflected by recent texts. We mention in particular
D. H. Sattinger: Topics in Stability and Bifurcation Theory, Lecture Notes Math., Vol. 309
(Springer, Berlin, Heidelberg, New York 1972)
D. H. Sattinger: Group Theoretic Methods in Bifurcation Theory, Lecture Notes Math., Vol. 762
(Springer, Berlin, Heidelberg, New York 1980)
G. Iooss: Bifurcation of Maps and Applications, Lecture Notes, Mathematical Studies (North-
Holland, Amsterdam 1979)
G. Iooss, D. D. Joseph: Elementary Stability and Bifurcation Theory (Springer, Berlin, Heidelberg,
New York 1980)
These authors deal in an elegant fashion with "static" bifurcation theory.

8.2 A Simple Real Eigenvalue Becomes Positive


H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd ed. (Springer, Berlin, Heidelberg, New
York 1983)

8.3 Multiple Real Eigenvalue Becomes Positive


Here we follow
H. Haken: Unpublished material
References on catastrophe theory are
R. Thom: Structural Stability and Morphogenesis (Benjamin, Reading, MA 1975)
Further references on this subject can be found in
H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd ed. (Springer, Berlin, Heidelberg, New
York 1983)
8.4 A Simple Complex Eigenvalue Crosses the Imaginary Axis. Hopf Bifurcation

The branching of oscillatory solutions was first treated in the classical paper
E. Hopf: Abzweigung einer periodischen Lösung eines Differentialsystems. Berichte der
Mathematisch-Physikalischen Klasse der Sächsischen Akademie der Wissenschaften zu Leipzig
XCIV, 1 (1942)
For recent treatments see
J. Marsden, M. McCracken: The Hopf Bifurcation and Its Applications. Appl. Math. Sci., Vol. 19
(Springer, Berlin, Heidelberg, New York 1976)
D. D. Joseph: Stability of Fluid Motions, Springer Tracts Natural Philos., Vols. 27, 28 (Springer,
Berlin, Heidelberg, New York 1976)
A. S. Monin, A. M. Yaglom: Statistical Fluid Mechanics, Vol. I (MIT Press, Cambridge, MA 1971)
H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd. ed. (Springer, Berlin, Heidelberg, New
York 1983)

8.5 Hopf Bifurcation, Continued


Compare Sect. 8.4

8.6 Frequency Locking Between Two Oscillators


See e. g.
H. Haken: Laser Theory, in Encyclopedia of Physics, Vol. XXV/2c, Light and Matter Ic (Springer,
Berlin, Heidelberg, New York 1970) and reprint edition Laser Theory (Springer, Berlin,
Heidelberg, New York 1983)
R. L. Stratonovich: Topics in the Theory of Random Noise, Vols. 1, 2 (Gordon and Breach, New
York 1963, 1967)

8.7 Bifurcation from a Limit Cycle


We follow
H. Haken: Z. Phys. B29, 61 (1978)


8.8 Bifurcation from a Limit Cycle: Special Cases


Compare Sect. 8.7 and
H. Haken: Unpublished material

8.9 Bifurcation from a Torus (Quasiperiodic Motion)

Compare Sect. 8.7 and


H. Haken: Z. Phys. B30, 423 (1978)
and unpublished material. For different approaches cf.
A. Chenciner, G. Iooss: Arch. Ration. Mech. Anal. 69, 109 (1979)
G. R. Sell: Arch. Ration. Mech. Anal. 69, 199 (1979)
G. R. Sell: In Chaos and Order in Nature, Springer Ser. Synergetics, Vol. 11, ed. by H. Haken
(Springer, Berlin, Heidelberg, New York 1980) p. 84

8.10 Bifurcation from a Torus: Special Cases


See Sect. 8.9

8.11 Instability Hierarchies, Scenarios, and Routes to Turbulence

8.11.1 The Landau-Hopf Picture


L. D. Landau, E. M. Lifshitz: In Course of Theoretical Physics, Vol. 6, Fluid Mechanics (Pergamon,
London, New York 1959)
E. Hopf: Commun. Pure Appl. Math. 1, 303 (1948)

8.11.2 The Ruelle and Takens Picture


D. Ruelle, F. Takens: Commun. Math. Phys. 20, 167 (1971)
S. Newhouse, D. Ruelle, F. Takens: Commun. Math. Phys. 64, 35 (1978)

8.11.3 Bifurcations of Tori. Quasiperiodic Motions


See Sects. 8.9, 11.3

8.11.4 The Period-Doubling Route to Chaos. Feigenbaum Sequence


S. Grossmann, S. Thomae: Z. Naturforsch. 32A, 1353 (1977)
M. J. Feigenbaum: J. Stat. Phys. 19, 25 (1978); Phys. Lett. 74A, 375 (1979)
P. Collet, J. P. Eckmann: Iterated Maps on the Interval as Dynamical Systems (Birkhäuser, Boston
1980)
T. Geisel, J. Nierwetberg: In Evolution of Order and Chaos, Springer Ser. Synergetics, Vol. 17, ed.
by H. Haken (Springer, Berlin, Heidelberg, New York 1982) p. 187
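The period-doubling sequence studied in these references can be reproduced in a few lines of code. A sketch for the logistic map x_{n+1} = r x_n(1 - x_n): the r values below are chosen inside the well-known period-1, period-2 and period-4 windows, and the tolerances and iteration counts are illustrative choices:

```python
def attractor_period(r, x0=0.2, burn_in=2000, max_period=64, tol=1e-9):
    """Iterate the logistic map past its transient, then return the
    smallest p with |x_{n+p} - x_n| < tol (0 if no short period is found,
    as for chaotic or very long-period orbits)."""
    x = x0
    for _ in range(burn_in):            # let the orbit settle onto the attractor
        x = r * x * (1.0 - x)
    orbit = [x]
    for _ in range(max_period):
        x = r * x * (1.0 - x)
        orbit.append(x)
    for p in range(1, max_period + 1):
        if abs(orbit[p] - orbit[0]) < tol:
            return p
    return 0

# Fixed point, then the first two period doublings of the cascade:
periods = [attractor_period(r) for r in (2.8, 3.2, 3.5)]
```

Pushing r through the accumulation point of these doublings leads to the chaotic regime, with the doubling intervals shrinking geometrically at Feigenbaum's universal rate discussed in the papers above.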

8.11.5 The Route via Intermittency


Y. Pomeau, P. Manneville: Commun. Math. Phys. 77, 189 (1980)
G. Mayer-Kress, H. Haken: Phys. Lett. 82A, 151 (1981)

9. Spatial Patterns
9.1 The Basic Differential Equations

Examples are provided by the Navier-Stokes Equations, e. g.


H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd ed. (Springer, Berlin, Heidelberg, New
York 1983) and
O. A. Ladyzhenskaya: The Mathematical Theory of Viscous Incompressible Flow (Gordon and
Breach, New York 1963)


D. D. Joseph: Stability of Fluid Motions I and II, Springer Tracts Natural Philos., Vols. 27, 28
(Springer, Berlin, Heidelberg, New York 1976)
Reaction Diffusion equations are treated, e. g., in
P. C. Fife: In Dynamics of Synergetic Systems, Springer Ser. Synergetics, Vol. 6, ed. by H. Haken
(Springer, Berlin, Heidelberg, New York 1980) p. 97, with further references
J. S. Turner: Adv. Chem. Phys. 29, 63 (1975)
J. W. Turner: Trans. NY Acad. Sci. 36, 800 (1974), Bull. Cl. Sci. Acad. Belg. 61, 293 (1975)
Y. Schiffmann: Phys. Rep. 64, 87 (1980)
Compare also Sect. 1.5.2

9.2 The General Method of Solution

H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd ed. (Springer, Berlin, Heidelberg, New
York 1983)

9.3 Bifurcation Analysis for Finite Geometries

H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd ed. (Springer, Berlin, Heidelberg, New
York 1983)
See also the references cited in Sect. 9.1

9.4 Generalized Ginzburg-Landau Equations


H. Haken: Z. Phys. B21, 105 (1975)
H. Haken: Z. Phys. B22, 69 (1975); B23, 388 (1975)
For a different approach (for a more restricted class of problems) based on scaling see
Y. Kuramoto, T. Tsuzuki: Prog. Theor. Phys. 52, 1399 (1974)
A. Wunderlin, H. Haken: Z. Phys. B21, 393 (1975)

9.5 A Simplification of Generalized Ginzburg-Landau Equations. Pattern Formation in Bénard
Convection

H. Haken: Unpublished material


Equation (9.5.15) with A ≡ 0 was derived differently by
J. Swift, P. C. Hohenberg: Phys. Rev. A15, 319 (1977)
H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd ed. (Springer, Berlin, Heidelberg, New
York 1983)

10. The Inclusion of Noise


10.1 The General Approach

Compare the references cited in Chap. 4 and


W. Horsthemke, R. Lefever: Noise-Induced Transitions, Springer Ser. Synergetics, Vol. 15
(Springer, Berlin, Heidelberg, New York 1983)

10.2 A Simple Example

H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd ed. (Springer, Berlin, Heidelberg, New
York 1983)

10.3 Computer Solution of a Fokker-Planck Equation for a Complex Order Parameter

H. Risken: Z. Phys. 186, 85 (1965); 191, 302 (1966)
H. Risken, H. D. Vollmer: Z. Phys. 201, 323 (1967); 204, 240 (1967)
H. Risken: In Progress in Optics, Vol. VIII, ed. by E. Wolf (North-Holland, Amsterdam 1970) p. 239


10.4 Some Useful General Theorems on the Solution of Fokker-Planck Equations

10.4.1 Time-Dependent and Time-Independent Solutions of the Fokker-Planck Equation, if the
Drift Coefficients are Linear in the Coordinates and the Diffusion Coefficients Constant
G. E. Uhlenbeck, L. S. Ornstein: Phys. Rev. 36, 823 (1930)
N. Wax (ed.): Selected Papers on Noise and Statistical Processes (Dover, New York 1954)

10.4.2 Exact Stationary Solution of the Fokker-Planck Equation for Systems in Detailed Balance
We follow essentially
R. Graham, H. Haken: Z. Phys. 248, 289 (1971)
H. Risken: Z. Phys. 251, 231 (1972)
For related work see
H. Haken: Z. Phys. 219, 246 (1969)
H. Haken: Rev. Mod. Phys. 47, 67 (1975)
R. Graham: Z. Phys. B40, 149 (1980)

10.4.4 Useful Special Cases


H. Haken: Z. Phys. 219, 246 (1969)

10.5 Nonlinear Stochastic Systems Close to Critical Points: A Summary

Compare H. Haken: Synergetics, Springer Ser. Synergetics, Vol. 1, 3rd ed. (Springer, Berlin,
Heidelberg, New York 1983), where also another approach is outlined. That approach starts right
away from the master equation or Fokker-Planck equation and eliminates the slaved variables from
these equations.

11. Discrete Noisy Maps


Basic works on discrete maps have been cited in Sect. 8.11.2. Scaling properties of discrete noisy
maps have been analyzed in
J. P. Crutchfield, B. A. Huberman: Phys. Lett. 77A, 407 (1980)
J. P. Crutchfield, M. Nauenberg, J. Rudnick: Phys. Rev. Lett. 46, 935 (1981)
B. Shraiman, C. E. Wayne, P. C. Martin: Phys. Rev. Lett. 46, 933 (1981)
We shall follow essentially
G. Mayer-Kress, H. Haken: J. Stat. Phys. 26, 149 (1981)
H. Haken, G. Mayer-Kress: Z. Phys. B43, 185 (1981)
H. Haken: In Chaos and Order in Nature, Springer Ser. Synergetics, Vol. 11, ed. by H. Haken
(Springer, Berlin, Heidelberg, New York 1981) p. 2
H. Haken, A. Wunderlin: Z. Phys. B46, 181 (1982)

12. Example of an Unsolvable Problem in Dynamics


K. Gödel: Monatsh. Math. Phys. 38, 173 (1931)

Appendix
J. Moser: Convergent Series Expansions for Quasi-Periodic Motions. Math. Ann. 169, 136 (1967)

Subject Index

Abelian group 71, 93, 133
-, bounded, of operators in Hilbert space 133f.
Adiabatic approximation 36, 175, 176f., 188f.
Astrophysics 8
Attractor 24, 33, 44f.
-, chaotic 31, 42, 45
-, Lorenz 31
-, Rössler 31f.
-, strange 31, 264ff.
-, -, classification by Lyapunov exponents 265
Autocatalysis 15, 19, 35
Autonomous system 36
Average, statistical 144ff.

Belousov-Zhabotinsky reaction 11f.
Bénard instability 3, 32, 278ff.
- -, numerical solution of 280f.
Bifurcation from a limit cycle 39ff., 242ff.
- - into a torus 39f., 251ff.
- - into two limit cycles 39f., 247f.
- -, period doubling 40f., 249
- -, subharmonics 40f., 249ff.
Bifurcation from a node or focus 37ff., 222ff., 272ff.
- -, a simple real eigenvalue becomes positive 224ff.
- -, frequency locking between two oscillators 239ff.
- -, Hopf bifurcation 38f., 230ff.
- - into a limit cycle 38f.
- - into two nodes or two foci 37f.
- -, multiple real eigenvalues become positive 228ff.
Bifurcation from a torus 253ff.
- -, a complex nondegenerate eigenvalue crosses the imaginary axis 260ff.
- -, a simple real eigenvalue becomes positive 258ff.
- -, collapse into a limit cycle 264
- -, period doubling of the motion on the torus 264
- -, strange attractors 264ff.
- - to other tori 41f., 255ff.
Bifurcation theory 315f.
Biological system 13
Bistability 7
Boundary conditions for spatial patterns 48f., 269f.
- -, nonflux boundary condition 269
- -, periodic boundary condition 269
Boundary effects in discrete noisy maps 304f., 310
Brownian motion 21, 144f.

Catastrophe theory 229
Center manifold theorem 316
Chaos 3, 7ff., 21, 42, 53, 264ff.
Chaotic motion see Chaos
Chapman-Kolmogorov equation 303ff.
- -, connection with Fredholm integral equation 306f.
- -, effect of boundaries 304f., 310
- -, exact time-dependent solution in the linear case 310f.
- -, mean first passage time 308f.
- -, path integral solution 307f.
Characteristic exponent(s) 62f., 66, 85
- -, continuous spectrum of 274ff.
- -, discrete spectrum of 273f.
- -, generalized 68, 79f., 104ff., 129ff.
- -, -, vanishing 255
Chemical oscillations 11f., 154
Chemical pre-pattern 14
Chemical waves 15
Cloud streets 3
Coefficient matrix, decomposition of 96f., 134f.
Coexistence of patterns 3, 5
- of species 14
Competition of collective modes 18, 274
Control parameter 20, 33f., 36ff., 50ff., 57f., 222ff.
Control theory 315
Convection instability 3, 32, 278ff.
- -, numerical solution of 280f.
Coordinate system, local see Local coordinates
Correlation function of fluctuating forces 144, 146

Counterterms 159,174, 176, 236ff., 262 - - , - - , convergence estimates 141f.


Critical fluctuations 47, 316 - - , some generalized exponents
parameter value 2ff. degenerated 129ff.
- points 46, 187 - - , - - , conditions for the reduction of the
- slowing down 54, 316 constructed transformation matrix
Crystal growth 9 133f.
Cybernetics 315 - - , - - , construction of a triangular
transformation matrix 130ff.
Darwin's rules 14 - - , - - , construction of the solution vectors
Degrees of freedom, reduction of 228f. by variation of the constant 133
- - - , - - and symmetries 229 - - , - - , form of the solution vectors
Detailed balance 294ff. - - , - - , general conditions for the system
- - and symmetry 302 129
- -, first and second version of the principle Diffusion coefficient, (non)constant 285
of 295 Diffusion matrix 268
- -, necessary and sufficient conditions for Discrete map 49ff., 266
295ff. - - , critical slowing down 54
- -, integrability condition 298 - - , Lyapunov exponent of 54f.
Deterministic equation 143 - -, slaving principle 54
Difference operators, decompostion of - -, symmetry breaking 54
210f. Discrete noisy map 54, 56, 303ff.
Differential equations Divisors, small, problem of 63ff., 186
- -, first-order 19 Drift coefficient, (ir)reversible 297
- -, linear see Differential systems and - -, (non)linear 285
Linear Dual (solution) space 81ff.
- -, nonlinear see Nonlinear - - -, orthogonality relations 84, 224, 245,
- - ,partial 23, 36, 267f., 282ff. 256, 273, 276
- -, stochastic see Stochastic Duffing equation 40f., 266
Differential systems 76ff. Dynamic systems theory 315
- - ,stable 79
with constant coefficient matrix 84ff., Ecology, dramatic changes in 17
155f. Economy, dramatic changes in 16
with periodic coefficient matrix 89ff. Eigenvalue
Differential systems with quasiperiodic coeffi- - and stability 37
cient matrix 103ff. -, characteristic exponent 62f., 66, 85
- - , all generalized exponents different -, F10quet exponent 39, 91
103ff. Electroencephalogram 13, 143
- - , - - , approximation by a variational Electronics 9
method118 Embedding 104, 114ff.
- - , - - , approximation by smoothing Engineering 9f., 154, 315
119ff. -, electrical 9, 154, 314
- - , - - , auxiliary lemmas 106ff. -, radio 154
- - , - - , construction of a triangular Evolution equations 19
transformation matrix 111ff., 115ff. in biology 14
- - , - - , form of the solution vectors - of spatial patterns 47
103, 106, 129
- - , - - , general conditions for the Feigenbaum number 53
system 104f. - sequence 266
- - , - - , proof of the properties of the - -, complete 266
constructed transformation matrix Finite geometries for spatial patterns 48f.,
111ff., 115 ff. 272ff.
- - , - - , properties of the transformation Firing of neurons 154
matrix 105f. First passage time problem 47, 285, 308f.
- - , - - , reduction of the constructed Fixed point 310
transformation matrix 122ff. - -, stable 43,51, 80f.
- - , solution by an iteration procedure Floquet exponent 39, 91
134ff. - -, vanishing 244

754
Subject Index 353

Flow diagram 33, 38f. Gunn oscillator 8


- in dynamical systems 23
Fluctuations 20f., 143 Hopf bifurcation 38f., 230ff., 233ff.,
-, critical 47,285,316 264
- in nonequilibrium phase transitions 46f. Hierarchy
Fluids, instabilities in 2ff. of instabilities 264ff.
Flutter 9, 154 - of patterns 3
Focus 24f., 37 - of period doublings 3, 40f.
-, stable 39, 45 Hypercycles 15
Fokker-Planck equation 146ff., 153, 285ff.
- -, approximate solution by linearization Immune system 15
286 Imperfection of patterns 4, 6
- - , computer solution 286ff. Information theory 18, 314
- -, detailed balance 294 ff. Instabilities 1ff.
- - , diffusion coefficient 285 - , hierarchy of 3, 264ff.
- -, drift coefficient 285, 297 Instability point, close to 274, 282
- -, exact stationary solution in case of Instability, structural 32ff.
detailed balance 298 Intermittency 266
- - , exact time-dependent solution in the Invariance against group operations 69ff., 89,
linear case 286, 293f. 274
- - , generalized thermodynamic potential Inversion symmetry under time reversal 294
298 Irregular motion 3, 7ff., 42, 53, see also
- -, Green's function of 289ff., 293 Chaos
- -, Ito type 146ff., 149 Iteration procedure see also Perturbation
- - , reduction to a Schrödinger equation approach
287ff. - - by Kolmogorov and Arnold 159ff.
- - , Stratonovich type 150ff., 153 - - by Moser 180ff.
- -, useful special cases 300f. - - - -, proof of the convergence 317ff.
Forces for a differential system with
-, coherent 144ff. quasiperiodic coefficient 134ff.
-, fluctuating 20f., 36, 144ff. for eliminating slaved variables 202ff.,
-, -, correlation function of 144, 146 214ff.
Frechet derivative 270 Ito differential equation 145
Fredholm integral equation 306f. calculus 217
Frequency locking 9, 42, 239ff., 264, 266, Fokker-Planck equation 146ff., 149
316 rule 145
- -, destruction of 47 transformation rule 217
-, renormalized 198, 238ff.
-, - in frequency locking 239, 241 Joint probability 147f., 295, 305f.
- shift 158f., 174,232,248 Jordan's normal form 76f., 85, 90f., 94ff.,
195,208f.
Gaussian distribution 307
- noise 310f.
KAM condition 65f., 73f., 105, 135, 160,
General systems theory 315
169f., 253, 255, 265
Generator see Group generator
- - and fluctuations 264
Generic 265 f.
- -, condition on the convergence of the
Ginzburg-Landau equation 278
Fourier coefficients 64ff., 73f.
- - , generalized 274ff.
- -, extended version 179, 263ff., 322
Green's function 289ff., 293
- -, measure theoretical aspect 65
Grid transformation in biology 32f.
Kolmogorov-Arnold-Moser condition
Group, Abelian 71,93, 133
see KAM condition
- axioms 70f.
- generator 71, 93
- -, infinitesimal 123ff., 133f. Landau-Hopf picture 264, 266
- representation of the translation group Langevin equation 153, 282
72, 95f. Ito equation 301
- -, irreducible 95 - Stratonovich equation 301


Laser Modes, collective 18


-, bistability 7 Morphogenesis 13 f.
-, coherent oscillation 6, 154 Moser's theorem 179f., 236f.
- , Hopf bifurcation 266 Multistability 8
-, periodic pulses 6f. Network, of computers 15
- , single mode 32 -, electrical 143
- , turbulent light emission 6f., 266 -, neural 143
Limit cycle 24ff., 38ff. Neurons, firing of 154
-, stable 25, 39, 45 Node 24f., 37f.
- ,unstable 39f., 45 - ,stable 25, 38
Linear ordinary differential equations for one Noise see also Fluctuations and Stochastic
variable 61ff. -, additive 21, 303
- - , homogeneous, with constant -, Gaussian 310f.
coefficient 61f. -, multiplicative 22, 283, 303
-, with periodic coefficient 62f. Noise sources 282
-, with quasiperiodic coefficient Nonequilibrium phase transitions 46f., 315
63ff. - - -, critical fluctuations 47
-, with real bounded coefficient 67f. - - -, critical points 46
- - , inhomogeneous 72ff. Nonlinear stochastic differential equations
Linear ordinary differential equations, sets of 143ff.
see Differential systems - , Ito type 145
Linear stability analysis see Stability - - , Stratonovich type 153
Local coordinates of a liquid 268f. Nonlinear stochastic partial diffen:ntial
- - of a torus 176ff., 256 equations 23, 36, 267, 282ff.
Logistic equation 50ff. Null-space 184ff., 319ff., 332
Lorentzian curve, effective 291f.
- - , -, linewidth of 292 Operator, function of 84ff., 93
Lorenz attractor 31f. Orbit, closed 28f., 40, 175f.
- equations 32 Order parameter 35f., 46, 48, 315f.
Lyapunov exponent 42ff., 80f. Order parameter equation 38, 225ff.
- - for discrete maps 54f. - - -, rescaling of the variables 234ff.,
- - ,vanishing 45, 80f. 259f., 262
- - -, renormalization by counterterms
Macroscopic level 22 236ff., 262
Macroscopic process 143 Oscillation, chemical 11f., 154
Manifold 24ff. -, quasiperiodic 18
-, attracting 29 Oscillators, linear
-, center 31 - , - with linear coupling 155f.
-, differentiable 27, 29 -, - with nonlinear coupling 156ff.
-, invariant 29f. - , - - -, frequency shift 158
-, stable 30 - ,nonlinear 42, 154ff.
- ,unstable 30 - ,- driven 40f.
Map, discrete see Discrete map - ,- in electrical engineering and electronics 9
- ,- noisy see Discrete noisy map - with nonlinear coupling 158ff., 172ff.,
- ,logistic 50ff. see also Perturbation of quasiperiodic
-, Poincare 49, 53f. motion
Marangoni instability 4 Overdamped motion 173, 226f.
Markov process 148, 305
- -, stationary 306 Parallel computing 15
Master computer 15 Path integral 307
Mesoscopic level 22 Pattern formation see Instabilities
Michaelis-Menten term 268 - recognition 15
Microscopic level 22 -, spatial 47 ff.
Microscopic process 143 Period doubling 2f., 9f., 40f., 249, 264, 266
Mode, neutral 187 - -, hierarchy of 40f.
-, stable 188,195,207,225 - -, sequence of 53, 316
-, unstable 187, 195, 207, 225 - - , - - accumulation point 53


- tripling 9, 41, 266 Reaction and diffusion of molecules 14


Perturbation approach 96ff., 236 Reaction diffusion equation 268, 270ff.
- and standard form of the solution vector Relaxation constant, renormalized 238
99ff. - of a dynamical system 38
Perturbation of quasiperiodic motion for - of the amplitude 47, 173, 174f., 176ff.
time-dependent amplitudes 172ff. Reliability of systems 15
- - , coordinate transformation for the Renormalization by counterterms 236ff., 262
perturbed torus 176ff., 179f., 321 f. - group technique 1
- - , - - transformation equations 178, Representation, irreducible 95, 274
238, 320, 322 -, one-dimensional 72
- - , - - , motion on the perturbed torus in Rest term in the adiabatic and exact
the new variables 178, 238, 322 elimination 192,200, 205f.
- - , - - , motion on the perturbed torus in Reynold's number 4
the old variables 174, 180, 321 Rössler attractor 31
- , - , null-space 184ff., 319ff., 332ff. - equations 31
- - , fixing by counterterms 174, 176 Rotating wave approximation 251
- - , frequency shift 174 Routes to turbulence 264ff.
- - , iteration procedure by Moser 180ff. Ruelle and Takens picture 265f.
- - , - - , proof of the convergence of the
iteration procedure 317ff. Saddle point 24, 30f.
- - , KAM condition, extended version Scaling in order parameter equations 234ff.,
179, 322 259f.,262
- - , Moser's theorem 179f. Skeleton of a spatial pattern 48
Perturbation of quasiperiodic motion for Scenario 264,266
time-independent amplitudes 158ff. Schrödinger equation 287
- - , convergence of the iteration procedure Selection of collective modes 18
165ff. Selection rules by symmetry 274
- - , fixing by counterterms 159 Self-organization 1,9, 315
- , frequency shift 159 - of computers 15
- - , iteration procedure by Kolmogorov - , pathways to 56ff.
and Arnold 159ff. Self-organizing systems 2f.
- - , KAM condition 160, 169f. Semiconvergence 206, 212
Phase diffusion 47 Sensitive dependence on the initial conditions
- transitions in thermal equilibrium 1, 46,
314f. Singularity theory 272
- -, critical fluctuations 47, 316 Slaved variable 36
- -, critical points 46 Slaving principle 35f., 39, 46, 54, 258, 274,
- -, critical slowing down 54, 316 277
- - in nonequilibrium state 46f., 315 - -, adiabatic approximation 188f.
Plasmas, instabilities in 7f. - - and adiabatic elimination procedures
Poincare map 49, 53f. 316
Pomeau and Manneville, route via - - and center manifold theorem 316
intermittency 266 - - and discrete noisy maps 207ff.
Population dynamics 14, 19 - - and nonlinear equations 195ff.
Post-buckling patterns 9f. - - and slow manifold theorem 316
Power spectrum 4f., 10ff., 42 - - and stochastic differential equations
Pre-pattern, chemical 14 216ff.
Probability distribution (function) 46, 147ff., - -, approximation by a finite series 191,
207, 285ff. 212
Problem of small divisors 63ff., 186 - -, basic equations 195ff., 207f., 216
Pulses 6ff. - -, - -, transformations of 197
- -, differentiability 206f.
Quantum fluctuations 21 - -, exact elimination procedure for an
Quasiperiodic motion 35, 40, 253ff., 264ff. example 189ff.
- -, persisting of 159, 174f. - -, formal relations 190ff., 198ff., 208ff.,
- -, perturbation of see Perturbation of 220f.
quasiperiodic motion - -, iteration procedure 202ff., 214ff.


Slaving principle, regression relation 216 Streamlines in dynamical systems 23f.


- , -, renormalized frequencies 198 Subharmonics 40f., 249ff.
- , -, rest term 200 -, the question which perturbation is small
-, -, -, -, estimation 192, 205f. 251
- - , semiconvergent series 206, 212 Subsystems 1, 18
Smoothing 119ff. Survival of the fittest 14
Sociology, dramatic changes in 17 Symmetry breaking 54, 316
Solidification of structures 18
Solid-state physics, instabilities in 8f.
Taylor instability 2
Solution space 78f.
Thermal equilibrium see Phase transitions
- -, basis of 78
- - far from 46,278,285,301,315
- -, dual see Dual (solution) space
Thermodynamic potential, generalized 298
- matrix 78f.
Thermodynamics 18, 314
- -, nonsingularity of 79
-, irreversible 18, 314
- -, uniqueness of 79
Thermoelastic instabilities 9
- vector 77 f.
Time-ordering operator 106ff.
- -, standard form 84ff., 91, 223ff.
Time reversal 294
Spatial patterns 11, 23, 47ff., 267ff.
Torus 26ff., 40ff., 158ff., 172ff.
- -, analysis for finite geometries 272ff.
-, local coordinate system of 176ff.
- - , -, selection rules 274
-, perturbed see Perturbation of
- -, analysis for infinitely extended media
quasiperiodic motion
274ff.
-, stable 45
- - , -, wave packet approximation 275ff.
Trajectory 23ff., 43f.
- -, basic differential equations 267ff.
Transformation matrix 78f.
- -, - - -, boundary conditions of 269f.
- -, computer calculation of 90
- -, - - -, reaction diffusion equation as
- -, nonsingularity of 79
example 268, 270ff.
- -, reduction of 122ff., 133f.
- -, general method of solution 270ff.
- -, triangular 105, 129
- - , generalized Ginzburg-Landau equation
Transients see also Relaxation
274ff.
- in self-organization 58
- - , - - -, Benard convection as example
- near bifurcation points 231f., 239
278ff.
Transition probability 305 f.
- -, slaving principle in 274
- - forward and backward equation 306
Stability analysis
Translation group 70ff.
- -, linear 35,37,39, 43f., 47f., 222ff.
- operator 69ff.
-, loss of linear 36, 38f., 187
- -, extended 104, 123f., 133f.
-, structural 32ff.
Tunnel diodes 8
- ,- in biology 32f.
Turbulence 3
Statistical average 144ff.
see also Irregular motion
Statistical mechanics 21, 314
Stochastic equations 20f., 23
- nonlinear differential equations 143ff. Universality 314
- -, Ito type 145 - of the Feigenbaum number 53
- -, Stratonovich type 153
- nonlinear partial differential equation Variational method 118f.
23, 36, 267f., 282ff. Vector space, complex linear 78
Stochastic systems, nonlinear, close to critical
points 282f., 301f.
Stratonovich differential equation 153 Wave packet 275
- calculus 150ff., 221 Wiener process 196, 214, 216
- Fokker-Planck equation 150ff., 153 Window of regular motion 53
- rule 145 Word problem in group theory 312

Part III

Springer Series in Synergetics

List of All Published Books


Synergetics An Introduction 3rd Edition Chemical Oscillations, Waves,
By H. Haken (1983) and Turbulence By Y. Kuramoto (1984)
Synergetics A Workshop Advanced Synergetics
Editor: H. Haken (1977) 2nd Edition By H. Haken (1983)
Synergetics Far from Equilibrium Stochastic Phenomena and Chaotic
Editors: A. Pacault, C. Vidal (1979) Behaviour in Complex Systems
Editor: P. Schuster (1984)
Structural Stability in Physics
Editors: W. Güttinger, H. Eikemeier (1979) Synergetics - From Microscopic to
Macroscopic Order Editor: E. Frehland
Pattern Formation by Dynamic Systems
(1984)
and Pattern Recognition
Editor: H. Haken (1979) Synergetics of the Brain
Editors: E. Başar, H. Flohr, H. Haken,
Dynamics of Synergetic Systems
A. J. Mandell (1983)
Editor: H. Haken (1980)
Chaos and Statistical Methods
Problems of Biological Physics
Editor: Y. Kuramoto (1984)
By L. A. Blumenfeld (1981)
Dynamics of Hierarchical Systems
Stochastic Nonlinear Systems
An Evolutionary Approach
in Physics, Chemistry, and Biology
By J. S. Nicolis (1986)
Editors: L. Arnold, R. Lefever (1981)
Self-Organization and Management
Numerical Methods in the Study
of Social Systems Editors: H. Ulrich,
of Critical Phenomena
G. J. B. Probst (1984)
Editors: J. Della Dora, J. Demongeot,
B. Lacolle (1981) Non-Equilibrium Dynamics
in Chemical Systems
The Kinetic Theory of Electromagnetic
Editors: C. Vidal, A. Pacault (1984)
Processes By Yu. L. Klimontovich (1983)
Self-Organization Autowaves and
Chaos and Order in Nature
Structures Far from Equilibrium
Editor: H. Haken (1981)
Editor: V. I. Krinsky (1984)
Nonlinear Phenomena in Chemical
Dynamics Editors: C. Vidal, A. Pacault Temporal Order Editors: L. Rensing,
N. I. Jaeger (1985)
(1982)
Dynamical Problems in Soliton Systems
Handbook of Stochastic Methods
Editor: S. Takeno (1985)
for Physics, Chemistry, and the Natural
Sciences 2nd Edition Complex Systems - Operational
By C. W. Gardiner (1985) Approaches in Neurobiology, Physics,
Concepts and Models of a Quantitative and Computers Editor: H. Haken (1985)
Sociology The Dynamics of Interacting Dimensions and Entropies in Chaotic
Populations By W. Weidlich, G. Haag Systems Quantification of Complex
(1982) Behavior 2nd Corr. Printing
Editor: G. Mayer-Kress (1986)
Noise-Induced Transitions Theory and
Applications in Physics, Chemistry, and Selforganization by Nonlinear
Biology By W. Horsthemke, R. Lefever Irreversible Processes
(1983) Editors: W. Ebeling, H. Ulbricht (1986)
Physics of Bioenergetic Processes Instabilities and Chaos in Quantum Optics
By L. A. Blumenfeld (1983) Editors: F. T. Arecchi, R. G. Harrison (1987)
Evolution of Order and Chaos Nonequilibrium Phase Transitions
in Physics, Chemistry, and Biology in Semiconductors Self-Organization
Editor: H. Haken (1982) Induced by Generation
The Fokker-Planck Equation and Recombination Processes
By E. Schöll (1987)
2nd Edition By H. Risken (1996)
Temporal Disorder Foundations of Synergetics II
in Human Oscillatory Systems Complex Patterns 2nd Edition
Editors: L. Rensing, U. an der Heiden,
M. C. Mackey (1987) (1996)
The Physics of Structure Formation Synergetic Economics By W.-B. Zhang
Theory and Simulation (1991)
Editors: W. Güttinger, G. Dangelmayr
Quantum Signatures of Chaos
(1987) 2nd Edition By F. Haake (2000)
Computational Systems - Natural and
Rhythms in Physiological Systems
Artificial Editor: H. Haken (1987)
Editors: H. Haken, H. P. Koepchen (1992)
From Chemical to Biological
Quantum Noise 2nd Edition
Organization Editors: M. Markus,
By C. W. Gardiner, P. Zoller (1999)
S. C. Müller, G. Nicolis (1988)
Nonlinear Nonequilibrium
Information and Self-Organization
Thermodynamics I Linear and Nonlinear
A Macroscopic Approach to Complex
Fluctuation-Dissipation Theorems
Systems 2nd Edition By H. Haken By R. Stratonovich (1992)
(1999)
Self-organization and Clinical
Propagation in Systems Far from Psychology Empirical Approaches
Equilibrium Editors: J. E. Wesfreid,
to Synergetics in Psychology
H. R. Brand, P. Manneville, G. Albinet, Editors: W. Tschacher, G. Schiepek,
N. Boccara (1988)
E. J. Brunner (1992)
Neural and Synergetic Computers Nonlinear Nonequilibrium
Editor: H. Haken (1988) Thermodynamics II Advanced Theory
Cooperative Dynamics in Complex By R. Stratonovich (1994)
Physical Systems Editor: H. Takayama Limits of Predictability
(1989) Editor: Yu. A. Kravtsov (1993)
Optimal Structures in Heterogeneous
On Self-Organization
Reaction Systems Editor: P. J. Plath An Interdisciplinary Search
(1989) for a Unifying Principle
Synergetics of Cognition Editors: R. K. Mishra, D. MaaB,
Editors: H. Haken, M. Stadler (1990) E. Zwierlein (1994)
Theories of Immune Networks Interdisciplinary Approaches
Editors: H. Atlan, I. R. Cohen (1989) to Nonlinear Complex Systems
Editors: H. Haken, A. Mikhailov (1993)
Relative Information Theories
and Applications By G. Jumarie (1990) Inside Versus Outside
Endo- and Exo-Concepts of Observation
Dissipative Structures in Transport
and Knowledge in Physics, Philosophy
Processes and Combustion
and Cognitive Science
Editor: D. Meinkühn (1990)
Editors: H. Atmanspacher, G. J. Dalenoort
Neuronal Cooperativity (1994)
Editor: J. Krüger (1991)
Ambiguity in Mind and Nature
Synergetic Computers and Cognition Multistable Cognitive Phenomena
A Top-Down Approach to Neural Nets Editors: P. Kruse, M. Stadler (1995)
2nd edition By H. Haken (2004)
Modelling the Dynamics
Foundations of Synergetics I of Biological Systems
Distributed Active Systems 2nd Edition Editors: E. Mosekilde, O. G. Mouritsen
By A. S. Mikhailov (1994) (1994)
Self-Organization in Optical Systems Self-Organization and the City
and Applications in Information By J. Portugali (1999)
Technology 2nd Edition
Critical Phenomena in Natural Sciences
Editors: M.A. Vorontsov, W. B. Miller (1995)
Chaos, Fractals, Selforganization and Disorder:
Principles of Brain Functioning Concepts and Tools 2nd Edition
A Synergetic Approach to Brain Activity, By D. Sornette (2004)
Behavior and Cognition
Spatial Hysteresis and Optical Patterns
By H. Haken (1995)
By N. N. Rosanov (2002)
Synergetics of Measurement, Prediction
Nonlinear Dynamics of Chaotic and Stochastic
and Control By I. Grabec, W. Sachse
Systems Tutorial and Modern Developments
(1997)
By V. S. Anishchenko, V. V. Astakhov,
Predictability of Complex Dynamical Systems A. B. Neiman, T. E. Vadivasova,
By Yu. A. Kravtsov, J. B. Kadtke (1996) L. Schimansky-Geier (2003)
Interfacial Wave Theory of Pattern Formation Synergetic Phenomena in Active Lattices
Selection of Dendritic Growth and Viscous Patterns, Waves, Solitons, Chaos
Fingerings in Hele-Shaw Flow By V. I. Nekorkin, M. G. Velarde (2002)
By Jian-Jun Xu (1997)
Brain Dynamics
Asymptotic Approaches in Nonlinear Dynamics Synchronization and Activity Patterns
New Trends and Applications in Pulse-Coupled Neural Nets with Delays
By J. Awrejcewicz, I. V. Andrianov, and Noise By H. Haken (2002)
L. I. Manevitch (1998)
From Cells to Societies
Brain Function and Oscillations Models of Complex Coherent Action
Volume I: Brain Oscillations. By A. S. Mikhailov, V. Calenbuhr (2002)
Principles and Approaches
Brownian Agents and Active Particles
Volume II: Integrative Brain Function.
Neurophysiology and Cognitive Processes Collective Dynamics in the Natural and Social
Sciences By F. Schweitzer (2003)
By E. Başar (1998)
Asymptotic Methods for the Fokker-Planck Nonlinear Dynamics of the Lithosphere
and Earthquake Prediction
Equation and the Exit Problem in Applications
By J. Grasman, O. A. van Herwaarden (1999) By V. I. Keilis-Borok, A. A. Soloviev (Eds.)
(2002)
Analysis of Neurophysiological Brain
Functioning Editor: Ch. Uhl (1998) A Computational Differential Geometry
Approach to Grid Generation
Phase Resetting in Medicine and Biology By V. D. Liseikin (2003)
Stochastic Modelling and Data Analysis
By P. A. Tass (1999)