Lattice Gas Cellular Automata and Lattice Boltzmann Models, Chapter 4


4. Some statistical mechanics

4.1 The Boltzmann equation

The motion of a fluid can be described on various levels. The most basic
description, by the Hamilton equations for a set of classical particles or the
analogous quantum mechanical formulation, prohibits itself because of the
huge number of particles. 1 cm³ of air, for example, at 0°C and a pressure of
one atmosphere contains 2.69 · 10¹⁹ molecules. It is impossible to prepare a
desired microstate of such a system.
This fact is already taken into account by the next higher level of
description for a system with N particles, namely distribution functions
fN(q₁, p₁, ..., qN, pN, t) which encompass all statistical information on all
dynamical processes (qᵢ and pᵢ are the generalized coordinate and momentum
of particle i). fN(q₁, p₁, ..., qN, pN, t) dq₁ dp₁ ... dqN dpN is the probability to
find a particle in the interval ([q₁, q₁ + dq₁], [p₁, p₁ + dp₁]) while the other
particles are in infinitesimal intervals around (q₂, p₂) ... (qN, pN). Thus fN
in particular contains the various correlations between particles. fN obeys the
Liouville equation
∂fN/∂t − Σ_{j=1}^{3N} (∂HN/∂qⱼ · ∂fN/∂pⱼ − ∂HN/∂pⱼ · ∂fN/∂qⱼ) = 0        (4.1.1)

where HN is the Hamiltonian of the system.


By integration over part of the phase space one defines reduced densities

Fs(q₁, p₁, ..., qs, ps, t) := V^s ∫ fN(q₁, p₁, ..., qN, pN, t) dq_{s+1} dp_{s+1} ... dqN dpN

where V s is a normalization factor. It has been shown that a coupled system


of differential equations for the Fs (1 ≤ s ≤ N ) is equivalent to the Liouville
equation. This system is called BBGKY after Bogoljubov, Born, Green, Kirk-
wood and Yvon who derived these equations. The BBGKY hierarchy has to
be truncated at some point to calculate approximate solutions.
The Boltzmann equation has been derived as the result of a systematic approximation
starting from the BBGKY system only in 1946 (compare Bogoliubov, 1962;
Boltzmann derived the equation which bears his name by a different reasoning
already in the 19th century). It can be derived by applying the following
approximations:

1. Only two-particle collisions are considered (this seems to restrict
applications to dilute gases).
2. The velocities of the two colliding particles are uncorrelated before
collision. This assumption is often called the molecular chaos hypothesis.
3. External forces do not influence the local collision dynamics.

D.A. Wolf-Gladrow: LNM 1725, pp. 139–158, 2000.
© Springer-Verlag Berlin Heidelberg 2000
The Boltzmann equation is an integro-differential equation for the single par-
ticle distribution function f (x, v, t) ∝ F1 (q 1 , p1 , t)
∂t f + v · ∂x f + (K/m) · ∂v f = Q(f, f)        (4.1.2)
where x = q 1 , v = p1 /m, m = const is the particle mass, f (x, v, t) d3 x d3 v
is the probability to find a particle in the volume d3 x around x and with
velocity between v and v + dv.
 
Q(f, f) = ∫ d³v₁ ∫ dΩ σ(Ω) |v − v₁| [f(v′)f(v₁′) − f(v)f(v₁)]        (4.1.3)

is the collision integral with σ(Ω) the differential collision cross section for the
two-particle collision which transforms the velocities from {v, v₁} (incoming)
into {v′, v₁′} (outgoing). K is the body force. It will be neglected in the
following discussion of the current chapter.

4.1.1 Five collision invariants and Maxwell’s distribution

It can be shown (see, for example, Cercignani, 1988) that the collision integral
possesses exactly five elementary collision invariants ψk (v) (k = 0, 1, 2, 3, 4)
in the sense that

∫ Q(f, f) ψk(v) d³v = 0.        (4.1.4)
The elementary collision invariants read ψ₀ = 1, (ψ₁, ψ₂, ψ₃) = v and ψ₄ = v²
(proportional to mass, momentum and kinetic energy). General collision
invariants φ(v) can be written as linear combinations of the ψk:

φ(v) = a + b · v + cv².

It can be further shown (see, for example, Cercignani, 1988) that positive
functions f exist which give a vanishing collision integral

Q(f, f ) = 0.

These functions are all of the form

f(v) = exp(a + b · v + cv²)

where c must be negative. The Maxwell1 distribution


f^(M) = f(x, v, t) = n (m/(2π kB T))^{3/2} exp(−m(v − u)²/(2 kB T))        (4.1.5)

is a special case among these solutions where u is the mean velocity



u = (1/n) ∫ d³v v f(x, v, t).        (4.1.6)
Please note that f (M ) depends on x only implicitly via n(x), u(x) and T (x).
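As a quick numerical sanity check (a sketch with illustrative values in reduced units, not from the book): since the Maxwellian (4.1.5) factorizes over Cartesian components, one velocity component suffices; its zeroth moment gives the density n and its first moment, divided by n, recovers the mean velocity u of Eq. (4.1.6).

```python
import math

# Sanity check of (4.1.5)-(4.1.6) for one velocity component.
# All parameter values are arbitrary illustrative choices in reduced units.
N_DENS, U_MEAN, MASS, KBT = 2.0, 0.5, 1.0, 1.0

def maxwell_1d(v):
    return (N_DENS * math.sqrt(MASS / (2 * math.pi * KBT))
            * math.exp(-MASS * (v - U_MEAN) ** 2 / (2 * KBT)))

def trapz(f, a, b, steps=20000):
    # simple trapezoidal quadrature
    h = (b - a) / steps
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, steps)))

n_num = trapz(maxwell_1d, -20.0, 20.0)                         # zeroth moment -> n
u_num = trapz(lambda v: v * maxwell_1d(v), -20.0, 20.0) / n_num  # first moment -> u
print(round(n_num, 6), round(u_num, 6))  # recovers n = 2.0 and u = 0.5
```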

4.1.2 Boltzmann’s H-theorem

In 1872 Boltzmann showed that the quantity



H(t) := ∫ d³v d³x f(x, v, t) ln f(x, v, t)        (4.1.7)

where f (x, v, t) is any function that satisfies the Boltzmann equation fulfills
the equation
dH/dt ≤ 0        (4.1.8)
and the equal sign applies only if f is a Maxwell distribution (4.1.5). This is
his famous H-theorem.
Proof. We will assume that no external forces are applied and thus f(x, v, t)
obeys the following Boltzmann equation:

∂f/∂t + v · ∇f = Q(f, f).
Differentiation of (4.1.7) yields

dH/dt = ∫ d³v d³x (∂f/∂t)(x, v, t) [1 + ln f(x, v, t)].
Insertion of the Boltzmann equation leads to

dH/dt = − ∫ v · ∇ [f(x, v, t) ln f(x, v, t)] d³v d³x
        + ∫ d³v₁ d³v₂ d³x dΩ σ(Ω) |v₂ − v₁| (f₂′f₁′ − f₂f₁) [1 + ln f₁]        (4.1.9)

¹ The Maxwell distribution is also referred to as Maxwell-Boltzmann distribution
or as Boltzmann distribution.

where f₁ = f(x, v₁, t), f₂ = f(x, v₂, t), f₁′ = f(x, v₁′, t), and f₂′ =
f(x, v₂′, t). The first summand can be transformed into a surface integral

− ∫_F n · v f(x, v, t) ln f(x, v, t) d³v dF        (4.1.10)

where n is the (outer) normal of the surface F that encloses the gas. Without
detailed discussion we will assume that this surface integral vanishes. The
second integral is invariant under exchange of v₁ and v₂ because σ(Ω) is
invariant under such exchange:

dH/dt = ∫ d³v₁ d³v₂ d³x dΩ σ(Ω) |v₂ − v₁| (f₂′f₁′ − f₂f₁) [1 + ln f₂]        (4.1.11)
Adding up half of (4.1.9) and half of (4.1.11) leads to

dH/dt = (1/2) ∫ d³v₁ d³v₂ d³x dΩ σ(Ω) |v₂ − v₁| (f₂′f₁′ − f₂f₁) [2 + ln(f₁f₂)]        (4.1.12)
This integral is invariant under exchange of {v₁, v₂} and {v₁′, v₂′} because
for each collision there exists an inverse collision with the same cross section.
Therefore, one obtains

dH/dt = (1/2) ∫ d³v₁′ d³v₂′ d³x dΩ σ′(Ω) |v₂′ − v₁′| (f₂f₁ − f₂′f₁′) [2 + ln(f₁′f₂′)]

and because of d³v₁′ d³v₂′ = d³v₁ d³v₂, |v₂′ − v₁′| = |v₂ − v₁| and σ′(Ω) = σ(Ω):

dH/dt = (1/2) ∫ d³v₁ d³v₂ d³x dΩ σ(Ω) |v₂ − v₁| (f₂f₁ − f₂′f₁′) [2 + ln(f₁′f₂′)].        (4.1.13)
Adding up half of (4.1.12) and half of (4.1.13) leads to

dH/dt = (1/4) ∫ d³v₁ d³v₂ d³x dΩ σ(Ω) |v₂ − v₁| (f₂′f₁′ − f₂f₁) [ln(f₁f₂) − ln(f₁′f₂′)].
The integrand is never positive because of the inequality

(b − a)(ln a − ln b) ≤ 0    for a, b > 0, with equality only for a = b,

thus dH/dt ≤ 0. It vanishes when f₂′f₁′ − f₂f₁ = 0, and then also
∂f(v, t)/∂t = 0. dH/dt = 0 is possible if and only if

f(v₁)f(v₂) − f(v₁′)f(v₂′) = 0        (4.1.14)

for all v₁′, v₂′ that result from v₁, v₂ by collisions. From (4.1.14) one obtains

ln f(v₁) + ln f(v₂) = ln f(v₁′) + ln f(v₂′),        (4.1.15)

i.e. ln f(v) is an additive collision invariant and thus of the form (a linear
combination of the five collision invariants):

ln f(x, v) = a(x) + b(x) · v + c(x)v².        (4.1.16)

Therefore it follows that

f(x, v) = C(x) exp(−m(v − u(x))²/(2 kB T(x)))        (4.1.17)
where C(x), u(x) and T (x) are independent of v. However, the distribution
(4.1.17) represents no equilibrium state because if f (x, v, t1 ) at time t1 is of
the form (4.1.17) then it follows from the Boltzmann equation that
 
∂f(x, v, t)/∂t |_{t=t₁} = −v · ∇x [C(x) exp(−m(v − u(x))²/(2 kB T(x)))]        (4.1.18)

(the collision term Q(f, f) vanishes because f(x, v) is a function of collision
invariants). For the equilibrium state f must be of the form (4.1.17) and be
independent of x, thus

f^(eq)(x, v) = f^(eq)(v) = C exp(−m(v − u)²/(2 kB T))        (4.1.19)
with constants C, T , u. In a closed system at rest the mean velocity u must
vanish and therefore
f^(eq)(v) = C exp(−m v²/(2 kB T)).        (4.1.20)
This is the famous Maxwell velocity distribution. q.e.d.
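The monotone decrease of H can be illustrated numerically. The sketch below (an assumed space-homogeneous, discrete-velocity setting, not from the book) relaxes a bimodal distribution toward a Maxwellian with the same mass, momentum and energy, a relaxation-type model rather than the full collision integral, and checks that H, here reduced to one velocity dimension, never increases.

```python
import math

# Space-homogeneous relaxation toward a moment-matched Maxwellian;
# H = sum_i f_i ln(f_i) dv should decrease monotonically.
V = [-12.0 + 0.05 * i for i in range(481)]   # velocity grid
DV = 0.05

def moments(f):
    n = sum(f) * DV
    u = sum(fi * v for fi, v in zip(f, V)) * DV / n
    theta = sum(fi * (v - u) ** 2 for fi, v in zip(f, V)) * DV / n
    return n, u, theta

def maxwellian(n, u, theta):
    return [n / math.sqrt(2 * math.pi * theta)
            * math.exp(-(v - u) ** 2 / (2 * theta)) for v in V]

def H(f):
    return sum(fi * math.log(fi) for fi in f if fi > 0) * DV

# initial condition: two displaced bumps, clearly non-Maxwellian
f = [0.5 * (math.exp(-(v - 2) ** 2) + math.exp(-(v + 1) ** 2)) for v in V]
f_eq = maxwellian(*moments(f))

H_vals = [H(f)]
dt_omega = 0.05   # time step times collision frequency (illustrative)
for _ in range(200):
    f = [fi + dt_omega * (ei - fi) for fi, ei in zip(f, f_eq)]
    H_vals.append(H(f))

# H never increases on the way to equilibrium
assert all(h2 <= h1 + 1e-9 for h1, h2 in zip(H_vals, H_vals[1:]))
print("H: %.4f -> %.4f" % (H_vals[0], H_vals[-1]))
```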

4.1.3 The BGK approximation

One of the major problems when dealing with the Boltzmann equation is the
complicated nature of the collision integral. It is therefore not surprising that
alternative, simpler expressions have been proposed. The idea behind this
replacement is that the large amount of detail of two-body interactions is not
likely to influence significantly the values of many experimentally measured
quantities (Cercignani, 1990).
The simpler operator J(f ) which replaces the collision operator Q(f, f )
should respect two constraints:

1. J(f) conserves the collision invariants ψk of Q(f, f), that is

   ∫ ψk J(f) d³x d³v = 0        (k = 0, 1, 2, 3, 4),        (4.1.21)

2. The collision term expresses the tendency to a Maxwellian distribution
   (H-theorem).

Both constraints are fulfilled by the most widely known model called usually
the BGK approximation. It was proposed by Bhatnagar, Gross and Krook
(1954) and independently at about the same time by Welander (1954). The
simplest way to take the second constraint into account is to imagine that each
collision changes the distribution function f (x, v) by an amount proportional
to the departure of f from a Maxwellian f M (x, v):
J(f) = ω [f^M(x, v) − f(x, v)].        (4.1.22)

The coefficient ω is called the collision frequency. From the first constraint it
follows
∫ ψk J(f) d³x d³v = ω [∫ ψk f^M(x, v) d³x d³v − ∫ ψk f(x, v) d³x d³v] = 0,        (4.1.23)
i.e. at any point in space and any instant in time the Maxwellian f^M(x, v) must
have exactly the same density, velocity and temperature as the gas described by
the distribution f(x, v). Since these values will in general vary with space
and time, f^M(x, v) is called the local Maxwellian. Other model equations are
discussed in Cercignani (1990).
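The constraint (4.1.23) is easy to check numerically. The sketch below (an assumed 1D discretization with illustrative values, not from the book) builds f^M from the density, velocity and temperature of some non-equilibrium f and verifies that the moments of J(f) against the collision invariants vanish.

```python
import math

# BGK operator J(f) = omega * (f_M - f) on a 1D velocity grid;
# its moments against 1, v, (v - u)^2 must vanish when f_M is the
# local Maxwellian built from the moments of f itself.
V = [-10.0 + 0.05 * i for i in range(401)]
DV = 0.05
OMEGA = 1.3   # arbitrary collision frequency

def moments(f):
    n = sum(f) * DV
    u = sum(fi * v for fi, v in zip(f, V)) * DV / n
    theta = sum(fi * (v - u) ** 2 for fi, v in zip(f, V)) * DV / n
    return n, u, theta

# some non-equilibrium distribution
f = [math.exp(-(v - 1.2) ** 2) * (1 + 0.3 * math.sin(v)) for v in V]
n, u, theta = moments(f)
f_M = [n / math.sqrt(2 * math.pi * theta)
       * math.exp(-(v - u) ** 2 / (2 * theta)) for v in V]
J = [OMEGA * (fm - fi) for fm, fi in zip(f_M, f)]

# conservation of mass, momentum and energy by the collision operator
d_mass = sum(J) * DV
d_mom = sum(j * v for j, v in zip(J, V)) * DV
d_en = sum(j * (v - u) ** 2 for j, v in zip(J, V)) * DV
print(abs(d_mass) < 1e-8, abs(d_mom) < 1e-8, abs(d_en) < 1e-8)
```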

4.2 Chapman-Enskog: From Boltzmann to Navier-Stokes

Many fluid-dynamical phenomena including laminar flows, turbulence and


solitons can be described by solutions of the Navier-Stokes equation. Al-
though the form of this equation can be obtained by phenomenological rea-
soning (see, for example, Landau and Lifshitz, 1959) it is of fundamental
as well as practical interest to derive the Navier-Stokes equation (Eq. 1.3.1)
from the Boltzmann equation. Applying certain models of the microscopic
collision processes one can obtain explicit formulas for the transport coefficients.
For example, Maxwell was able to derive an analytical expression for
the shear viscosity for molecules which interact by an r⁻⁵ potential where r
is their distance. It came as a surprise to him and his contemporaries that
this theory predicted a dynamic viscosity coefficient independent of density.
Experiments made thereafter indeed showed that this is a good approximation
for gases over a wide range of densities.
The derivation of the Navier-Stokes equation and its transport coefficients
from the Boltzmann equation and certain microscopic collision models runs
under the name Chapman-Enskog expansion. This method has been devel-
oped by Chapman and Enskog between 1910 and 1920 (Chapman, 1916 and
1918; Enskog, 1917 and 1922; see also Cercignani, 1988 and 1990). The cal-
culations for certain models are rather involved and may easily hide some
peculiarities of this expansion. Therefore it seems appropriate to discuss a
few interesting features before beginning with the formal derivations and to
restrict the calculation to a simple collision model, namely the BGK approx-
imation.
The Chapman-Enskog or multi-scale expansion has already been used to de-
rive the Euler equation for the FHP lattice-gas cellular automata (compare
Section 3.2) and will be applied later on to derive the Navier-Stokes and other
macroscopic equations for lattice Boltzmann models (compare Section 5.2).
The transformation from the Boltzmann equation to the Navier-Stokes equa-
tion involves a contraction of the description of the temporal development of
the system (Uhlenbeck and Ford, 1963). Whereas the distribution function f
of the Boltzmann equation in general is explicitly depending on time, space
and velocity, we will see that the distribution functions f (n) of the Chapman-
Enskog expansion depend only implicitly on time via the local density, veloc-
ity and temperature, i.e. the f (n) are not the most general solutions of the
Boltzmann equation. It can be shown that arbitrary initial distributions relax
very fast (a few collision time scales which means of the order of 10−11 s in a
gas in 3D under standard conditions) toward this special kind of distribution.
The possibility of the contraction of the description has been considered as
a very fundamental insight (Uhlenbeck and Ford, 1963).
The expansion parameter of Chapman-Enskog is the Knudsen number Kn ,
i.e. the ratio between the mean free path λ (the mean distance between two
successive collisions) and the characteristic spatial scale of the system (for
example, the radius of an obstacle in a flow). When the Knudsen number is of
the order of 1 or larger the gas in the system under consideration cannot be
described as a fluid.
As a last point one should mention that the series resulting from the
Chapman-Enskog procedure is probably not convergent but asymptotic2 .
This is suggested by the application to the dispersion of sound (Uhlen-
beck and Ford, 1963). Higher order approximations of the Chapman-Enskog
method lead to the Burnett and super-Burnett equations (Burnett, 1935,
1936) which have never been applied systematically. One of the problems
with these equations is the question of appropriate boundary conditions (see,
for example, Cercignani, 1988 and 1990, for further discussion).

4.2.1 The conservation laws

Conservation laws can be obtained by multiplying the Boltzmann equation
with a collision invariant ψk(v) (ψ₀ = 1, ψα = vα for α = 1, 2, 3, and
ψ₄ = (m/2)|v − u|²) and subsequent integration over d³v. The integrals over the
collision integral Q(f, f) vanish by definition. Therefore

∫ d³v ψk (∂t + vα ∂xα) f(x, v, t) = 0        (4.2.1)

and thus (in 3D)

∂t ρ + ∂xα(ρ uα) = 0        (4.2.2)

ρ ∂t uα + ρ uβ ∂xβ uα = −∂xβ P̂αβ        (4.2.3)

ρ ∂t θ + ρ uβ ∂xβ θ = −(2/3) ∂xα qα − (2/3) P̂αβ Λαβ        (4.2.4)

with

n(x, t) = ∫ d³v f(x, v, t)        (4.2.5)

ρ(x, t) = m n(x, t)    (m = const)        (4.2.6)

ρ uα(x, t) = m ∫ d³v vα f(x, v, t)        (4.2.8)

θ(x, t) = kB T(x, t) = (m/(3n)) ∫ d³v (vα − uα)(vα − uα) f(x, v, t)        (4.2.9)

Λαβ = (m/2) (∂xβ uα + ∂xα uβ)        (4.2.10)

P̂αβ = m ∫ d³v (vα − uα)(vβ − uβ) f(x, v, t)        (4.2.11)

qα(x, t) = (m²/2) ∫ d³v (vα − uα)(vβ − uβ)(vβ − uβ) f(x, v, t)        (4.2.12)

² Asymptotic series are discussed, for example, in Bender and Orszag (1978).
Despite their missing convergence these series can be extremely useful. Bender
and Orszag give a number of neat examples.

Although the conservation equations are exact they are useless until one can
solve the Boltzmann equation and apply the solution f to calculate (4.2.5)
to (4.2.12). Please note that P̂αβ is different from the momentum flux tensor
introduced in Eq. (3.2.54) in that it does not contain the advection term.

4.2.2 The Euler equation

Inserting f (0) = f (M ) (the Maxwell distribution, compare Eq. 4.1.5) into Eqs.
(4.2.5) to (4.2.12) leads to the following approximation of the conservation
laws

∂t ρ + ∂xα(ρ uα) = 0        (continuity equation)

ρ ∂t uα + ρ uβ ∂xβ uα = −∂xα p        (Euler equation)

∂t θ + uβ ∂xβ θ = −(1/cv) θ ∂xα uα

where p = n kB T = nθ is the pressure and cv = 3/2 is the heat capacity
at constant volume. The heat flux q vanishes in this approximation. The
continuity equation is already in its final form. The dissipative terms in the
equation of motion have to be derived from higher order approximations.

4.2.3 Chapman-Enskog expansion

The distribution function is expanded as follows:

f = f^(0) + ε f^(1) + ε² f^(2) + ...        (4.2.13)
The symbol ε is often used in two different ways:



1. One speaks of an expansion as a power series in the small quantity ε, i.e.
   |ε| ≪ 1. In the case of Chapman-Enskog the Knudsen number Kn can be
   considered as the small expansion parameter.
2. The formal parameter ε in the expansions allows one to keep track of the
   relative orders of magnitude of the various terms. It will be considered
   only as a label and will be dropped out of the final results by setting
   ε = 1.

As an example consider the expansion f = f^(0) + ε f^(1). In discussions one may
consider f^(0) and f^(1) as quantities of the same order of magnitude and argue
that the second term of the expansion is small because ε is a small quantity,
whereas in the formal calculations f^(1) is small compared to f^(0) and ε is
only a label to keep track of the relative size of the various terms. The ε in
this second sense can be set equal to one after finishing all transformations.
According to the expansion (4.2.13) the conservation laws (Eqs. 4.2.2 - 4.2.4)
can be formulated as follows:

∂t ρ + ∂xα(ρ uα) = 0

ρ ∂t uα + ρ uβ ∂xβ uα = −Σ_{n=0}^∞ ε^n ∂xβ P̂αβ^(n)

ρ ∂t θ + ρ uβ ∂xβ θ = −(2/3) Σ_{n=0}^∞ ε^n (∂xα qα^(n) + P̂αβ^(n) Λαβ)

where

P̂αβ^(n) := m ∫ d³v f^(n) (vα − uα)(vβ − uβ)        (4.2.14)

and

qα^(n) := (m²/2) ∫ d³v f^(n) (vα − uα)|v − u|².
Because f depends on t only via ρ, u and T the chain rule

∂t f = ∂ρ f ∂t ρ + ∂uα f ∂t uα + ∂θ f ∂t θ

applies. Inserting (4.2.13) into the derivatives of f with respect to ρ, uα and


T yields

∂ρ f = ∂ρ f (0) + ∂ρ f (1) + 2 ∂ρ f (2) + ...


∂uα f = ∂uα f (0) + ∂uα f (1) + 2 ∂uα f (2) + ...
∂θ f = ∂θ f (0) + ∂θ f (1) + 2 ∂θ f (2) + ...

The expansions of ∂t ρ, ∂t uα and ∂t θ have to be defined such that they are
consistent with the conservation laws in each order of ε. The terms of the
formal expansion³

∂t = ε ∂t^(1) + ε² ∂t^(2) + ...        (4.2.15)

will be derived from the conservation laws as follows:

∂t^(1) ρ := −∂xα(ρ uα)        (4.2.16)

∂t^(n+1) ρ := 0        (n > 0)

∂t^(1) uα := −uβ ∂xβ uα − (1/ρ) ∂xβ P̂αβ^(0)        (4.2.17)

∂t^(n+1) uα := −(1/ρ) ∂xβ P̂αβ^(n)        (n > 0)

∂t^(1) θ := −uβ ∂xβ θ − (2/(3ρ)) (∂xα qα^(0) + P̂αβ^(0) Λαβ)

∂t^(n+1) θ := −(2/(3ρ)) (∂xα qα^(n) + P̂αβ^(n) Λαβ)        (n > 0)

Application of these definitions leads to an expansion of ∂t f into a power
series in ε:

∂t f = (ε ∂t^(1) + ε² ∂t^(2) + ...)(f^(0) + ε f^(1) + ε² f^(2) + ...)
     = ε ∂t^(1) f^(0) + ε² (∂t^(1) f^(1) + ∂t^(2) f^(0)) + O(ε³)

Inserting the expansion of the distribution function f into the collision
integral Q(f, f) of the Boltzmann equation with BGK approximation⁴ yields

Q(f, f) = −ω (f − f^(0))
        = −ω (ε f^(1) + ε² f^(2) + ...)
        =: J^(0) + ε J^(1) + ε² J^(2) + ...        (4.2.18)

³ The reason for starting the ∂t expansion with a term linear in ε will become
apparent from the discussion later on. The expansions of f or ∂t alone can be
multiplied by arbitrary powers of ε because the powers of ε only label the
relative size of the different terms in each expansion. When expansions of
different quantities are combined, however, the powers of ε have to be related
such that the terms of leading order yield a meaningful balance.
⁴ The BGK approximation will be applied here in order to simplify the
calculations.

where

J^(0)(f^(0)) = 0        (4.2.19)

J^(1)(f^(0), f^(1)) = J^(1)(f^(1)) = −ω f^(1)        (4.2.20)

J^(2)(f^(0), f^(1), f^(2)) = J^(2)(f^(2)) = −ω f^(2)        (4.2.21)

...

where the collision frequency ω is a constant. In general, i.e. no BGK ap-


proximation of the collision integral, the J (n) depend on all f (k) with k ≤ n
(as indicated on the left hand sides of Eqs. 4.2.20 and 4.2.21) whereas for the
BGK approximation J (n) depends only on f (n) . This simplification is due to
the fact that the collision integral in the BGK approximation is linear in f .
The spatial derivative ∂x on the left hand side of the Boltzmann equation is
of the same order as the leading term in the time derivative, i.e.

∂xα = ε ∂xα^(1).        (4.2.22)

This looks like the first term of an expansion. In space, however, only one
macroscopic scale will be considered because different macroscopic processes
like advection and diffusion can be distinguished by their time scales but act
on similar spatial scales.
Equating terms of same order in ε of the Boltzmann equation leads to the
following set of equations:

J^(0)(f^(0)) = 0        (4.2.23)

∂t^(1) f^(0) + vα ∂xα^(1) f^(0) = J^(1)(f^(0), f^(1)) = −ω f^(1)        (4.2.24)

∂t^(1) f^(1) + ∂t^(2) f^(0) + vα ∂xα^(1) f^(1) = J^(2)(f^(0), f^(1), f^(2)) = −ω f^(2)

...

Eq. (4.2.23) is fulfilled because J vanishes for Maxwell distributions. f^(1) can
readily be calculated from Eq. (4.2.24):

f^(1) = −(1/ω) (∂t^(1) f^(0) + vα ∂xα^(1) f^(0)).        (4.2.25)

This equation states that the lowest order deviations f (1) from a local
Maxwell distribution f (0) are proportional to the gradient in space and time
of f (0) . The calculation of f (1) is much more involved when the collision
integral is not approximated (see, for example, Huang, 1963).
The next step is the calculation of P̂αβ^(1) according to Eq. (4.2.14):

P̂αβ^(1) = m ∫ d³v (vα − uα)(vβ − uβ) f^(1)
        = −(m/ω) ∫ d³v (vα − uα)(vβ − uβ) (∂t^(1) f^(0) + vγ ∂xγ^(1) f^(0)).

Insertion of (4.2.16) and (4.2.17) leads to (from now on the superscript (1)
will be dropped for the sake of simplicity)

∂t f^(0)(ρ, u) = (∂f^(0)/∂ρ)(∂ρ/∂t) + (∂f^(0)/∂uγ)(∂uγ/∂t)
= −(f^(0)/ρ) ∂(ρ uγ)/∂xγ − (m/(kB T))(vγ − uγ) f^(0) [uδ ∂uγ/∂xδ + (1/ρ) ∂P̂γδ^(0)/∂xδ]
= −f^(0) ∂uγ/∂xγ − (f^(0)/ρ) uγ ∂ρ/∂xγ − (m/(kB T))(vγ − uγ) f^(0) uδ ∂uγ/∂xδ
  − (m/(kB T))(1/ρ)(vγ − uγ) f^(0) ∂p/∂xδ δγδ

and

vγ ∂xγ f^(0) = vγ (∂f^(0)/∂ρ) ∂ρ/∂xγ + vγ (∂f^(0)/∂uδ) ∂uδ/∂xγ
= (f^(0)/ρ) vγ ∂ρ/∂xγ + (m/(kB T)) vγ (vδ − uδ) f^(0) ∂uδ/∂xγ.

The various integrals are readily evaluated:

−∂uγ/∂xγ ∫ d³v (vα − uα)(vβ − uβ) f^(0) = −δαβ n (kB T/m) ∂uγ/∂xγ

(1/ρ) ∂ρ/∂xγ ∫ d³v (vα − uα)(vβ − uβ)(vγ − uγ) f^(0) = 0

−(m/(kB T)) [uδ ∂uγ/∂xδ + (1/ρ) ∂p/∂xδ δγδ] ∫ d³v (vα − uα)(vβ − uβ)(vγ − uγ) f^(0) = 0

(m/(kB T)) ∂uδ/∂xγ ∫ d³v (vα − uα)(vβ − uβ) vγ (vδ − uδ) f^(0)
= (δαβ δγδ + δαγ δβδ + δαδ δβγ) n (kB T/m) ∂uδ/∂xγ
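Since the Maxwellian factorizes over Cartesian components, the second- and fourth-moment integrals of this kind reduce to the 1D Gaussian central moments ⟨w²⟩ = kB T/m, ⟨w³⟩ = 0 and ⟨w⁴⟩ = 3(kB T/m)², which produce the delta-combinations of the tensor identities. A quick numerical check (a sketch with illustrative values, not from the book):

```python
import math

# 1D Gaussian central moments behind the Maxwellian moment integrals:
# <w^2> = kB*T/m, <w^3> = 0, <w^4> = 3*(kB*T/m)^2.
theta = 0.7          # kB*T/m, arbitrary illustrative value
u, n = 0.3, 1.0
W = [-10.0 + 0.01 * i for i in range(2001)]
DW = 0.01

def f0(v):
    return n / math.sqrt(2 * math.pi * theta) * math.exp(-(v - u) ** 2 / (2 * theta))

def central_moment(k):
    return sum((v - u) ** k * f0(v) * DW for v in W)

assert abs(central_moment(2) - theta) < 1e-6
assert abs(central_moment(3)) < 1e-9
assert abs(central_moment(4) - 3 * theta ** 2) < 1e-6
print("1D Gaussian moments check out")
```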

and thus

P̂αβ^(1) = −n (kB T/ω) [(δαβ δγδ + δαγ δβδ + δαδ δβγ) ∂uδ/∂xγ − δαβ ∂uγ/∂xγ]
        = −n (kB T/ω) (∂uα/∂xβ + ∂uβ/∂xα),

i.e. (with u = (u, v, w)) a symmetric matrix with diagonal elements 2 ∂u/∂x,
2 ∂v/∂y, 2 ∂w/∂z and off-diagonal elements such as ∂u/∂y + ∂v/∂x.

Neglecting density and temperature variations, the divergence of P̂αβ^(1) reads

∂P̂αβ^(1)/∂xα = −μ (2 ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² + ∂²v/∂x∂y + ∂²w/∂x∂z) e_x − ...
             = −μ [∂/∂xβ (∂uα/∂xβ) + ∂/∂xα (∂uβ/∂xβ)]
             = −μ [∇²u + ∇(∇ · u)]

where

μ = n kB T / ω        (4.2.26)

is the dynamic shear viscosity. Thus one obtains the Navier-Stokes equation

∂t uα + uβ ∂xβ uα = −∂xα P + ν ∂xβ ∂xβ uα + ξ ∂xα ∂xβ uβ        (4.2.27)

where the kinematic shear (ν) and bulk (ξ) viscosities are equal and given by

ν = kB T / (ω m) = ξ.        (4.2.28)
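For a rough feeling of the magnitudes, (4.2.26) and (4.2.28) can be evaluated with illustrative numbers. In the sketch below, the molecular mass and the BGK collision frequency ω are assumed values (ω is a model parameter, here taken as the inverse of the roughly 10⁻¹¹ s relaxation time quoted earlier in this chapter), so the result is indicative only.

```python
# Order-of-magnitude sketch (all values are illustrative assumptions):
# mu = n*kB*T/omega (4.2.26), nu = kB*T/(omega*m) (4.2.28).
kB = 1.380649e-23      # J/K
T = 273.15             # K
n = 2.69e25            # molecules per m^3 (2.69e19 per cm^3, see Sect. 4.1)
m = 4.8e-26            # kg, roughly the mass of an "air molecule" (assumed)
omega = 1.0e11         # 1/s, assumed collision frequency

mu = n * kB * T / omega     # dynamic shear viscosity, Pa s
nu = kB * T / (omega * m)   # kinematic viscosity, m^2/s; equals mu/(n*m)
print("mu ~ %.1e Pa s, nu ~ %.1e m^2/s" % (mu, nu))
```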

4.3 The maximum entropy principle

In 1948 Shannon [418, 419] proposed a theory which allows us to quantify


‘information’. The statistical measure of the lack of information is called the
information theoretical or Shannon entropy. Equilibrium distributions can be
derived from the maximum entropy principle.

The following presentation closely follows Stumpf and Rieckers (1976).


First consider a discrete set Z := {z₁ ... zN} with N elements. A message⁵
is defined as a selection of one or several elements of Z. The informational
measure of the message is defined by the knowledge which is necessary to
denote a certain element or a selection of elements. What is the elementary
unit of this measure? If the set Z encompasses only one element, the selection
of this element does not augment our knowledge. There is no real message
until the number of elements in Z is at least two. Obviously the decision
between two alternatives is the smallest unit of information one can think
of: it is called a bit, which is the short form of 'binary digit'. The larger
the number of elements in Z, the more information is connected with the
selection of a certain element of Z. The measure of the information gained
can be traced back to a sequence of alternative decisions. The number of
elements N can be written down in binary form. The number of binary digits
is a measure of information. Or the elements zⱼ can be arranged in the form of
a binary tree (compare Fig. 4.3.1) where the number of branching points from
the root to one of the end points equals the number of bits. These procedures
work for sets with N = 2ⁿ elements and yield the measure of information
I(N) = n = log₂ N for the selection of a single element. This definition is
generalized to sets with an arbitrary number of elements by

I(N) = log₂ N,

i.e. I(N) is not necessarily an integer anymore.


Further the measure of information is additive with respect to the choice of
several (α) elements out of a set with N elements,

I(N, α) = α · I(N),

and with respect to the choice of two elements out of a direct product of two
sets Z₁ and Z₂:
⁵ The notation has its roots in the theory of communication. One of the basic
problems in this context is the reliable transmission of messages from a source
via a channel to a receiver. Often the messages to be transmitted have meaning
like, for example, the news you hear on the radio. This, however, is not always
the case. In transmitting music, the meaning is much more subtle than in the
case of a verbal message. In any case, meaning is quite irrelevant to the problem
of transmitting the information.

Fig. 4.3.1. The information for the selection of a certain element out of a set of
N = 2ⁿ elements is defined as the number of alternative decisions necessary when
going from the root to a certain end point. The selection of a certain element out
of 8 elements requires three binary decisions. (Figure: a binary tree whose root
branches over three levels into 8 numbered end points.)

I(N_{Z₁⊗Z₂}) = I(N_{Z₁}) + I(N_{Z₂}).
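The properties of the measure I(N) can be stated compactly in code (a trivial sketch):

```python
import math

# I(N) = log2(N) bits for the selection of one element out of N.
def info(N):
    return math.log2(N)

assert info(8) == 3.0                                  # three binary decisions (Fig. 4.3.1)
assert abs(2 * info(8) - 6.0) < 1e-12                  # alpha = 2 selections out of N = 8
assert abs(info(4 * 8) - (info(4) + info(8))) < 1e-12  # product-set additivity
print(info(10))  # not an integer for general N
```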

Now consider probability distributions instead of sets. Let us start with
discrete probability distributions P with a finite number of entries,

P := {P₁ ... PN},    Σ_{k=1}^N Pk = 1,

corresponding to the set of events

Ω := {x₁ ... xN}.

The task is to find a measure of information I(P) for a probability distribution
as a whole. Let us first consider two special cases:

– The sharp distribution: Pi = δil, i.e. each measurement will yield the
  result xl. The sharp distribution contains a lot of information: if you know
  this probability distribution you can be sure of the outcome of your next
  measurement. Because one event out of N possible events is selected, the
  measure

  I(P) = log₂ N

  suggests itself.

– The normal distribution: Pi = 1/N, i.e. every possible event has the same
  probability. The normal distribution can be understood as a consequence
  of the Laplacian principle of insufficient reason: "If there is no reason
  to single out a certain event for given information concerning an
  experimental situation, a normal distribution is to be assumed." (Stumpf and
  Rieckers, 1976, p. 14). Obviously the normal distribution contains the
  minimal amount of information, thus

  I(P) = 0.

The measure of information I(P) for a general distribution I(P) = I(P₁ ... PN)
is based on four postulates:

1. I(P) is a universal function.
2. I(P) is additive concerning the (still to be determined) statistical
   information for single elements i(Pk) as well as for the composition of
   direct products:

   I(P₁ ... PN) = Σ_{k=1}^N i(Pk)

   I(P_{Ω₁⊗Ω₂}) = I(P_{Ω₁}) + I(P_{Ω₂}).

3. I(P) = 0 for the normal distribution and I(P) = log₂ N for the sharp
   distribution.
4. The statistical information of a single element i(P) is defined on
   0 ≤ P ≤ 1 and is continuous.

Theorem 4.3.1. The statistical measure of information I(P) over the set
of events Ω with N elements, which fulfills the above given postulates 1 to 4,
is uniquely given by

I(P) = I(P₁ ... PN) = Σ_{i=1}^N Pi log₂ Pi + log₂ N.        (4.3.1)

The proof can be found, for example, in Stumpf and Rieckers (1976).

Lemma 4.3.1. The maximum value Imax of I(P ) is given by the sharp dis-
tribution:
Imax (P ) = log2 N.

Exercise 4.3.1. (*)


Prove Lemma 4.3.1.

The information theoretical entropy or Shannon entropy S is defined as
follows:

S(P₁ ... PN) := Imax − I(P₁ ... PN) = −Σ_{i=1}^N Pi log₂ Pi.        (4.3.2)

It is a measure for the lack of information: S vanishes for the sharp distri-
bution and becomes maximal for the normal distribution.
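A minimal sketch of Eq. (4.3.2), checked against the two limiting cases just discussed:

```python
import math

# Shannon entropy S(P) = -sum p_i log2 p_i; S = 0 for the sharp
# distribution, S = log2 N (maximal) for the normal (uniform) distribution.
def shannon_entropy(P):
    return -sum(p * math.log2(p) for p in P if p > 0)

N = 16
sharp = [1.0 if i == 5 else 0.0 for i in range(N)]
uniform = [1.0 / N] * N

assert shannon_entropy(sharp) == 0.0
assert abs(shannon_entropy(uniform) - math.log2(N)) < 1e-12
# any other distribution lies in between
P = [0.5, 0.25, 0.125, 0.125]
assert 0.0 < shannon_entropy(P) < math.log2(len(P))
print(shannon_entropy(P))  # 1.75 bits
```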
The generalization of I(P) for continuous sets of events is given by

S[f] := −k ∫ f(x) ln f(x) dx,        (4.3.3)

i.e. the function S = S(P ) is replaced by a functional S = S[f ] over the


probability density f . The transition from the case of discrete distributions
to probability densities f is not as simple as it looks. For example, there is no
maximal measure of information, and because f can be a generalized function
the integral (4.3.3) could be meaningless (see Stumpf and Rieckers, 1976, for a
detailed discussion).
The most important theorem of this section reads:

Theorem 4.3.2. (Maximum entropy principle) If the probability density
f(x) with the normalization

∫ f(x) dx = 1

obeys the following m linearly independent constraints

∫ Ri(x) f(x) dx = ri        (1 ≤ i ≤ m)

then the probability density which maximizes the lack of information while
respecting the m + 1 constraints is uniquely given by

f(x) = exp(−λ₀ − Σ_{i=1}^m λi Ri(x))        (4.3.4)

where the Lagrange multipliers λ0 , λ1 , ..., λm are unique functions of the


values r1 , ..., rm .

Proof. The proof essentially consists of two parts. Here only the derivation of
the distribution (4.3.4) shall be discussed in detail. The second part, namely
the proof that the Lagrange multipliers are uniquely determined by the values
r1 , ..., rm , can be found in Stumpf and Rieckers (1976, p. 20).
The extremum of a functional under given constraints is sought after. An
extended functional Ŝ[f ] is defined by coupling the constraints via Lagrange
multipliers η0 , ...ηm to the Shannon entropy S[f ]:


Ŝ[f] := S[f] − k (λ₀ − 1) Tr[f] − k Σ_{i=1}^m λi Tr[(Ri − ri) f]
     = −k Tr[ f (ln f + λ₀ − 1 + Σ_{i=1}^m λi (Ri − ri)) ].

For reasons which become obvious in a moment the ηⱼ have been written as
follows:

η₀ = k(λ₀ − 1),    ηi = k λi    (1 ≤ i ≤ m).
The trace of f is defined as

Tr[f] := ∫ f(x) dx.        (4.3.5)

From this definition it immediately follows that

Tr[c · f] = ∫ c · f(x) dx = c ∫ f(x) dx = c Tr[f]
where c is a constant.
The vanishing of the functional derivative of Ŝ[f] with respect to f is a
necessary condition for an extremum of S[f]:

δŜ[f]/δf = 0.

Functional derivatives are calculated analogously to the rules for ordinary
derivatives (see, for example, Großmann, 1988):

δŜ[f]/δf = −k [ln f + λ₀ + Σ_{i=1}^m λi Ri]

and therefore

ln f = −λ₀ − Σ_{i=1}^m λi Ri,

respectively

f = exp(−λ₀ − Σ_{i=1}^m λi Ri).

q.e.d.
The maximum entropy principle will be applied later on to calculate equilib-
rium distributions for lattice Boltzmann models.
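A small numerical illustration of Theorem 4.3.2 in a discrete setting (an assumed example, not from the book): among distributions on x = 0, ..., 9 with a prescribed mean (constraint function R(x) = x), the entropy maximizer takes the exponential form (4.3.4); the multiplier λ₁ is found by bisection and λ₀ by normalization.

```python
import math

# Maximum entropy distribution p(x) = exp(-lambda0 - lambda1*x) on a
# discrete set with a single mean constraint <x> = r.
X = list(range(10))
r = 3.5   # prescribed mean (illustrative)

def mean_for(lam1):
    w = [math.exp(-lam1 * x) for x in X]
    return sum(x * wi for x, wi in zip(X, w)) / sum(w)

lo, hi = -10.0, 10.0   # mean_for is decreasing in lam1
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) > r:
        lo = mid
    else:
        hi = mid
lam1 = 0.5 * (lo + hi)
lam0 = math.log(sum(math.exp(-lam1 * x) for x in X))   # normalization
p = [math.exp(-lam0 - lam1 * x) for x in X]

assert abs(sum(p) - 1.0) < 1e-9                            # normalized
assert abs(sum(x * pi for x, pi in zip(X, p)) - r) < 1e-9  # mean constraint
print("lambda0 = %.4f, lambda1 = %.4f" % (lam0, lam1))
```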
Further reading: The proceedings edited by Levine and Tribus (1979) and
especially the paper by Jaynes (1979).

Exercise 4.3.2. (**)
Find si (i = 0, 1, ..., l) such that

Σ_i ci si(x, t) = S(x, t)

under the constraint

Σ_i si(x, t) = 0

by minimizing

V = Σ_i si².

The lattice velocities ci satisfy

Σ_i ci = 0    and    Σ_i ci² = n.

Exercise 4.3.3. (**)
The Renyi entropy (Renyi, 1970) of order α is defined as follows:

Sα := −(1/(α − 1)) ln Σ_{i=1}^N pi^α,    α ∈ ℝ, α ≠ 1.

Calculate

lim_{α→1} Sα.
