Lattice Gas Cellular Automata and Lattice Boltzmann Models Chapter5
The motion of a fluid can be described on various levels. The most basic
description, by the Hamilton equations for a set of classical particles or the
analogous quantum mechanical formulation, rules itself out because of the
huge number of particles: 1 cm³ of air, for example, at 0 °C and a pressure of
one atmosphere contains 2.69 · 10¹⁹ molecules. It is impossible to prepare a
desired microstate of such a system.
This fact is already taken into account by the next higher level of
description for a system with N particles by distribution functions
f_N(q_1, p_1, ..., q_N, p_N, t) which encompass all statistical information on all
dynamical processes (q_i and p_i are the generalized coordinate and momentum
of particle i). f_N(q_1, p_1, ..., q_N, p_N, t) dq_1 dp_1 ... dq_N dp_N is the probability to
find particle 1 in the interval ([q_1, q_1 + dq_1], [p_1, p_1 + dp_1]) while the other
particles are in infinitesimal intervals around (q_2, p_2) ... (q_N, p_N). Thus f_N
in particular contains the various correlations between particles. f_N obeys the
Liouville equation
\[
\frac{\partial f_N}{\partial t} - \sum_{j=1}^{3N} \left( \frac{\partial H_N}{\partial q_j} \frac{\partial f_N}{\partial p_j} - \frac{\partial H_N}{\partial p_j} \frac{\partial f_N}{\partial q_j} \right) = 0. \qquad (4.1.1)
\]
where Q(f, f) is the collision integral with σ(Ω) the differential collision cross section for the
two-particle collision which transforms the velocities from {v, v₁} (incoming)
into {v′, v′₁} (outgoing). K is the body force; it will be neglected in the
following discussion of the current chapter.
It can be shown (see, for example, Cercignani, 1988) that the collision integral
possesses exactly five elementary collision invariants ψ_k(v) (k = 0, 1, 2, 3, 4)
in the sense that
\[
\int Q(f, f)\, \psi_k(v)\, d^3v = 0. \qquad (4.1.4)
\]
Every collision invariant is a linear combination of the elementary ones and thus of the form
\[
\phi(v) = a + b \cdot v + c v^2 .
\]
It can further be shown (see, for example, Cercignani, 1988) that positive
functions f exist which give a vanishing collision integral, Q(f, f) = 0. They
have the form
\[
f(v) = \exp(a + b \cdot v + c v^2).
\]
4.1 The Boltzmann equation 141
Boltzmann defined the functional
\[
H(t) := \int f(x, v, t) \ln f(x, v, t)\, d^3v\, d^3x \qquad (4.1.7)
\]
where f(x, v, t) is any function that satisfies the Boltzmann equation, and
showed that H fulfills the inequality
\[
\frac{dH}{dt} \le 0 \qquad (4.1.8)
\]
where the equal sign applies only if f is a Maxwell distribution¹ (4.1.5). This is
his famous H-theorem.
Proof. We will assume that no external forces are applied and thus f(x, v, t)
obeys the following Boltzmann equation:
\[
\frac{\partial f}{\partial t} + v \cdot \nabla f = Q(f, f).
\]
Differentiation of (4.1.7) yields
\[
\frac{dH}{dt} = \int d^3v\, d^3x\, \frac{\partial f}{\partial t}(x, v, t)\, \left[ 1 + \ln f(x, v, t) \right].
\]
Insertion of the Boltzmann equation leads to
\[
\frac{dH}{dt} = - \int v \cdot \nabla \left[ f(x, v, t) \ln f(x, v, t) \right] d^3v\, d^3x
+ \int d^3v_1\, d^3v_2\, d^3x\, d\Omega\, \sigma(\Omega) \left[ f_2' f_1' - f_2 f_1 \right] |v_2 - v_1| \left[ 1 + \ln f_1 \right] \qquad (4.1.9)
\]
¹ The Maxwell distribution is also referred to as Maxwell-Boltzmann distribution
or as Boltzmann distribution.
142 4. Some statistical mechanics
where f_1 = f(x, v_1, t), f_2 = f(x, v_2, t), f_1' = f(x, v_1', t), and f_2' =
f(x, v_2', t). The first summand can be transformed into a surface integral
\[
- \int_F n \cdot v\, f(x, v, t) \ln f(x, v, t)\, d^3v\, dF \qquad (4.1.10)
\]
where n is the (outer) normal of the surface F that encloses the gas. Without
detailed discussion we will assume that this surface integral vanishes. The
second integral is invariant under exchange of v_1 and v_2 because σ(Ω) is
invariant under such exchange:
\[
\frac{dH}{dt} = \int d^3v_1\, d^3v_2\, d^3x\, d\Omega\, \sigma(\Omega)\, |v_2 - v_1|\, (f_2' f_1' - f_2 f_1)\left[ 1 + \ln f_2 \right] \qquad (4.1.11)
\]
Adding up half of (4.1.9) and half of (4.1.11) leads to
\[
\frac{dH}{dt} = \frac{1}{2} \int d^3v_1\, d^3v_2\, d^3x\, d\Omega\, \sigma(\Omega)\, |v_2 - v_1|\, (f_2' f_1' - f_2 f_1)\left[ 2 + \ln(f_1 f_2) \right]. \qquad (4.1.12)
\]
This integral is invariant under exchange of {v_1, v_2} and {v_1', v_2'} because
for each collision there exists an inverse collision with the same cross section.
Therefore, one obtains
\[
\frac{dH}{dt} = \frac{1}{2} \int d^3v_1'\, d^3v_2'\, d^3x\, d\Omega\, \sigma'(\Omega)\, |v_2' - v_1'|\, (f_2 f_1 - f_2' f_1')\left[ 2 + \ln(f_1' f_2') \right]
\]
and because of d³v_1 d³v_2 = d³v_1' d³v_2', |v_2 - v_1| = |v_2' - v_1'| and σ'(Ω) = σ(Ω):
\[
\frac{dH}{dt} = \frac{1}{2} \int d^3v_1\, d^3v_2\, d^3x\, d\Omega\, \sigma(\Omega)\, |v_2 - v_1|\, (f_2 f_1 - f_2' f_1')\left[ 2 + \ln(f_1' f_2') \right]. \qquad (4.1.13)
\]
Adding up half of (4.1.12) and half of (4.1.13) leads to
\[
\frac{dH}{dt} = \frac{1}{4} \int d^3v_1\, d^3v_2\, d^3x\, d\Omega\, \sigma(\Omega)\, |v_2 - v_1|\, (f_2' f_1' - f_2 f_1)\left[ \ln(f_1 f_2) - \ln(f_1' f_2') \right].
\]
The integrand is never positive because of the inequality
\[
(y - x)(\ln x - \ln y) \le 0 \quad \text{for all positive } x, y,
\]
thus dH/dt ≤ 0. It vanishes, however, only when (f_2' f_1' - f_2 f_1) = 0 and
therefore ∂f(v, t)/∂t = 0.
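The sign of the integrand can be checked numerically. With a = f₂′f₁′ and b = f₂f₁, the integrand contains the factor (a − b)(ln b − ln a), which is non-positive for all positive a, b and vanishes only for a = b. A quick sketch (the sampling range is an arbitrary illustration choice):

```python
import math
import random

def integrand_sign(a, b):
    # the pointwise factor (a - b)(ln b - ln a) with a = f2'f1', b = f2 f1;
    # it is <= 0 for all positive a, b and vanishes only for a = b
    return (a - b) * (math.log(b) - math.log(a))

random.seed(0)
for _ in range(10000):
    a = random.uniform(1e-6, 10.0)
    b = random.uniform(1e-6, 10.0)
    assert integrand_sign(a, b) <= 0.0

assert integrand_sign(2.0, 2.0) == 0.0
```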
dH/dt = 0 is possible if and only if
\[
f(v_1') f(v_2') - f(v_1) f(v_2) = 0 \qquad (4.1.14)
\]
for all v_1', v_2' that result from v_1, v_2 by collisions. From (4.1.14) one obtains
\[
\ln f(v_1) + \ln f(v_2) = \ln f(v_1') + \ln f(v_2'), \qquad (4.1.15)
\]
i.e. ln f(v) is an additive collision invariant and thus it is of the form (linear
combination of the five collision invariants):
\[
\ln f(x, v) = a(x) + b(x) \cdot v + c(x) v^2 . \qquad (4.1.16)
\]
Therefore it follows that
\[
f(x, v) = C(x)\, \exp\!\left( - \frac{m (v - u(x))^2}{2 k_B T(x)} \right) \qquad (4.1.17)
\]
where C(x), u(x) and T(x) are independent of v. However, the distribution
(4.1.17) in general represents no equilibrium state, because if f(x, v, t_1) at time t_1 is of
the form (4.1.17) then it follows from the Boltzmann equation that
\[
\left. \frac{\partial f(x, v, t)}{\partial t} \right|_{t = t_1} = - v \cdot \nabla_x \left[ C(x)\, \exp\!\left( - \frac{m (v - u(x))^2}{2 k_B T(x)} \right) \right], \qquad (4.1.18)
\]
which in general does not vanish.
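That ln f of the form (4.1.16) is indeed an additive collision invariant can be illustrated numerically: it is a linear combination of mass, momentum and energy, all conserved in an elastic collision. The collision parametrization below (fixed centre-of-mass velocity and relative speed, random scattering direction for equal masses) and all numerical values are illustration choices, not taken from the text:

```python
import math
import random

random.seed(1)

def lnf(v, a=0.3, b=(0.1, -0.2, 0.05), c=-0.5):
    # ln f(v) = a + b.v + c v^2, the form (4.1.16); parameters are arbitrary
    return a + sum(bi * vi for bi, vi in zip(b, v)) + c * sum(vi * vi for vi in v)

def collide(v1, v2):
    # elastic collision of equal-mass particles: keep the centre-of-mass
    # velocity and the modulus of the relative velocity, rotate the
    # relative velocity onto a random unit vector
    V = [(x + y) / 2 for x, y in zip(v1, v2)]
    wmag = math.sqrt(sum((x - y) ** 2 for x, y in zip(v1, v2)))
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    n = (s * math.cos(phi), s * math.sin(phi), z)
    v1p = [Vi + 0.5 * wmag * ni for Vi, ni in zip(V, n)]
    v2p = [Vi - 0.5 * wmag * ni for Vi, ni in zip(V, n)]
    return v1p, v2p

v1, v2 = [0.4, -1.0, 0.3], [-0.7, 0.2, 1.1]
v1p, v2p = collide(v1, v2)
# ln f is conserved in the collision:
assert abs(lnf(v1) + lnf(v2) - lnf(v1p) - lnf(v2p)) < 1e-9
```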
One of the major problems when dealing with the Boltzmann equation is the
complicated nature of the collision integral. It is therefore not surprising that
alternative, simpler expressions have been proposed. The idea behind this
replacement is that the large amount of detail of two-body interactions is not
likely to influence significantly the values of many experimentally measured
quantities (Cercignani, 1990).
The simpler operator J(f) which replaces the collision operator Q(f, f)
should respect two constraints:
1. J(f) must conserve the collision invariants, i.e. ∫ ψ_k J(f) d³v = 0;
2. J(f) must express the tendency of the distribution toward a Maxwellian.
Both constraints are fulfilled by the most widely known model, usually called
the BGK approximation. It was proposed by Bhatnagar, Gross and Krook
(1954) and independently at about the same time by Welander (1954). The
simplest way to take the second constraint into account is to imagine that each
collision changes the distribution function f(x, v) by an amount proportional
to the departure of f from a Maxwellian f^M(x, v):
\[
J(f) = \omega \left[ f^M(x, v) - f(x, v) \right]. \qquad (4.1.22)
\]
The coefficient ω is called the collision frequency. From the first constraint it
follows that
\[
\int \psi_k\, J(f)\, d^3x\, d^3v = \omega \left[ \int \psi_k\, f^M(x, v)\, d^3x\, d^3v - \int \psi_k\, f(x, v)\, d^3x\, d^3v \right] = 0,
\qquad (4.1.23)
\]
i.e. at any space point and time instant the Maxwellian f M (x, v) must have
exactly the same density, velocity and temperature of the gas as given by
the distribution f (x, v). Since these values will in general vary with space
and time f M (x, v) is called the local Maxwellian. Other model equations are
discussed in Cercignani (1990).
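The relaxation character of the BGK operator can be illustrated with a small time integration: for a spatially homogeneous system, ∂f/∂t = ω(f^M − f) drives every entry of a discrete-velocity distribution exponentially toward the local Maxwellian. The velocity set and all numerical values below are arbitrary illustration choices, not from the text:

```python
import math

# spatially homogeneous BGK relaxation df/dt = omega * (fM - f) on a small
# discrete velocity set; the exact solution for each entry is
# f(t) = fM + (f(0) - fM) * exp(-omega * t)
omega = 2.0
fM = [0.05, 0.25, 0.40, 0.25, 0.05]     # fixed local "Maxwellian" values
f0 = [0.10, 0.20, 0.30, 0.20, 0.20]     # arbitrary initial state
f = list(f0)

dt, steps = 1e-4, 10000                  # integrate to t = 1 with Euler steps
for _ in range(steps):
    f = [fi + dt * omega * (fMi - fi) for fi, fMi in zip(f, fM)]

t = dt * steps
for fi, f0i, fMi in zip(f, f0, fM):
    exact = fMi + (f0i - fMi) * math.exp(-omega * t)
    assert abs(fi - exact) < 1e-4

# the total mass sum_i f_i is conserved because fM carries the same density
assert abs(sum(f) - sum(f0)) < 1e-9
```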
4.2 Chapman-Enskog: From Boltzmann to Navier-Stokes 145
The Knudsen number is the ratio of the mean free path (the mean distance
traveled by a molecule between successive collisions) and the characteristic
spatial scale of the system (for example, the radius of an obstacle in a flow).
When the Knudsen number is of the order of 1 or larger, the gas in the system
under consideration cannot be described as a fluid.
As a last point one should mention that the series resulting from the
Chapman-Enskog procedure is probably not convergent but asymptotic².
This is suggested by the application to the dispersion of sound (Uhlenbeck
and Ford, 1963). Higher order approximations of the Chapman-Enskog
method lead to the Burnett and super-Burnett equations (Burnett, 1935,
1936), which have never been applied systematically. One of the problems
with these equations is the question of appropriate boundary conditions (see,
for example, Cercignani, 1988 and 1990, for further discussion).
with
\[
n(x, t) = \int d^3v\, f(x, v, t) \qquad (4.2.5)
\]
² Asymptotic series are discussed, for example, in Bender and Orszag (1978).
Despite their lack of convergence these series can be extremely useful. Bender
and Orszag give a number of neat examples.
\[
\theta(x, t) = k_B T(x, t) = \frac{m}{3n} \int d^3v\, (v_\alpha - u_\alpha)(v_\alpha - u_\alpha)\, f(x, v, t) \qquad (4.2.9)
\]
\[
\Lambda_{\alpha\beta} = \frac{m}{2} \left( \partial_{x_\beta} u_\alpha + \partial_{x_\alpha} u_\beta \right) \qquad (4.2.10)
\]
\[
\hat{P}_{\alpha\beta} = m \int d^3v\, (v_\alpha - u_\alpha)(v_\beta - u_\beta)\, f(x, v, t) \qquad (4.2.11)
\]
\[
q_\alpha(x, t) = \frac{m^2}{2} \int d^3v\, (v_\alpha - u_\alpha)(v_\beta - u_\beta)(v_\beta - u_\beta)\, f(x, v, t) \qquad (4.2.12)
\]
Although the conservation equations are exact they are useless until one can
solve the Boltzmann equation and apply the solution f to calculate (4.2.5)
to (4.2.12). Please note that P̂αβ is different from the momentum flux tensor
introduced in Eq. (3.2.54) in that it does not contain the advection term.
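The moment definitions can be made concrete with a Monte Carlo sketch: sampling velocities from a Maxwellian with prescribed mean velocity and temperature, the estimator θ = (m/3)⟨|v − u|²⟩ must approach k_B T, and the off-diagonal part of P̂_αβ/n = m⟨(v_α − u_α)(v_β − u_β)⟩ must approach zero. All numerical values are arbitrary illustration choices:

```python
import math
import random

# Monte Carlo check of the moment definitions for a Maxwellian
random.seed(42)
m, kBT = 2.0, 1.5
sigma = math.sqrt(kBT / m)      # per-component thermal spread
N = 200000

acc_theta = 0.0
acc_Pxy = 0.0
for _ in range(N):
    w = [random.gauss(0.0, sigma) for _ in range(3)]   # w = v - u
    acc_theta += sum(wi * wi for wi in w)
    acc_Pxy += w[0] * w[1]

theta = m / 3.0 * acc_theta / N        # estimator for k_B T
Pxy_over_n = m * acc_Pxy / N           # estimator for P_xy / n

assert abs(theta - kBT) < 0.05         # close to k_B T
assert abs(Pxy_over_n) < 0.05          # off-diagonal close to zero
```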
Inserting f (0) = f (M ) (the Maxwell distribution, compare Eq. 4.1.5) into Eqs.
(4.2.5) to (4.2.12) leads to the following approximation of the conservation
laws
As an example consider the expansion f = f^(0) + εf^(1). In discussions one may
consider f^(0) and f^(1) as quantities of the same order of magnitude and argue
that the second term of the expansion is small because ε is a small quantity,
whereas in the formal calculations f^(1) is small compared to f^(0) and ε is
only a label to keep track of the relative size of the various terms. The ε in
this second sense can be set equal to one after finishing all transformations.
According to the expansion (4.2.13) the conservation laws (Eqs. 4.2.2 - 4.2.4)
can be formulated as follows:
\[
\partial_t \rho + \partial_{x_\alpha} (\rho u_\alpha) = 0
\]
\[
\rho \partial_t u_\alpha + \rho u_\beta \partial_{x_\beta} u_\alpha = - \sum_{n=0}^{\infty} \epsilon^n\, \partial_{x_\beta} \hat{P}^{(n)}_{\alpha\beta}
\]
\[
\rho \partial_t \theta + \rho u_\beta \partial_{x_\beta} \theta = - \frac{2}{3} \sum_{n=0}^{\infty} \epsilon^n \left( \partial_{x_\alpha} q^{(n)}_\alpha + \hat{P}^{(n)}_{\alpha\beta} \Lambda_{\alpha\beta} \right)
\]
where
\[
\hat{P}^{(n)}_{\alpha\beta} := m \int d^3v\, f^{(n)}\, (v_\alpha - u_\alpha)(v_\beta - u_\beta) \qquad (4.2.14)
\]
and
\[
q^{(n)}_\alpha := \frac{m^2}{2} \int d^3v\, f^{(n)}\, (v_\alpha - u_\alpha) |v - u|^2 .
\]
Because f depends on t only via ρ, u and θ, the chain rule yields
\[
\partial_t f = \partial_\rho f\, \partial_t \rho + \partial_{u_\alpha} f\, \partial_t u_\alpha + \partial_\theta f\, \partial_t \theta .
\]
Inserting the expansion of the distribution function f into the collision inte-
gral Q(f, f) of the Boltzmann equation with BGK approximation⁴ yields
\[
Q(f, f) = -\omega \left( f - f^{(0)} \right) = -\omega \left( \epsilon f^{(1)} + \epsilon^2 f^{(2)} + ... \right)
=: J^{(0)} + \epsilon J^{(1)} + \epsilon^2 J^{(2)} + ... \qquad (4.2.18)
\]
³ The reason for starting the ∂_t expansion by a term linear in ε will become
apparent from the discussion later on. The expansions of f or ∂_t alone can be
multiplied by arbitrary powers of ε because the powers of ε only label the rela-
tive size of the different terms in each expansion. When expansions of different
quantities are combined, however, the powers of ε have to be related such that
the terms of leading order yield a meaningful balance.
⁴ The BGK approximation will be applied here in order to simplify the
calculations.
where
\[
J^{(0)}\!\left[ f^{(0)} \right] = 0 \qquad (4.2.19)
\]
\[
J^{(1)}\!\left[ f^{(0)}, f^{(1)} \right] = J^{(1)}\!\left[ f^{(1)} \right] = -\omega f^{(1)} \qquad (4.2.20)
\]
\[
J^{(2)}\!\left[ f^{(0)}, f^{(1)}, f^{(2)} \right] = J^{(2)}\!\left[ f^{(2)} \right] = -\omega f^{(2)} \qquad (4.2.21)
\]
...
\[
\partial_{x_\alpha} = \partial^{(1)}_{x_\alpha} . \qquad (4.2.22)
\]
This looks like the first term of an expansion. In space, however, only one
macroscopic scale will be considered, because different macroscopic processes
like advection and diffusion can be distinguished by their time scales but act
on similar spatial scales.
Equating terms of the same order in ε of the Boltzmann equation leads to the
following set of equations:
\[
J^{(0)}\!\left[ f^{(0)} \right] = 0 \qquad (4.2.23)
\]
\[
\partial^{(1)}_t f^{(0)} + v_\alpha \partial^{(1)}_{x_\alpha} f^{(0)} = J^{(1)}\!\left[ f^{(0)}, f^{(1)} \right] = -\omega f^{(1)} \qquad (4.2.24)
\]
\[
\partial^{(1)}_t f^{(1)} + \partial^{(2)}_t f^{(0)} + v_\alpha \partial^{(1)}_{x_\alpha} f^{(1)} = J^{(2)}\!\left[ f^{(0)}, f^{(1)}, f^{(2)} \right] = -\omega f^{(2)}
\]
...
Eq. (4.2.23) is fulfilled because J vanishes for Maxwell distributions. f^(1) can
readily be calculated from Eq. (4.2.24):
\[
f^{(1)} = - \frac{1}{\omega} \left( \partial^{(1)}_t f^{(0)} + v_\alpha \partial^{(1)}_{x_\alpha} f^{(0)} \right). \qquad (4.2.25)
\]
This equation states that the lowest order deviation f^(1) from a local
Maxwell distribution f^(0) is proportional to the gradients in space and time
of f^(0). The calculation of f^(1) is much more involved when the collision
integral is not approximated (see, for example, Huang, 1963).
The next step is the calculation of \(\hat{P}^{(1)}_{\alpha\beta}\) according to Eq. (4.2.14):
\[
\hat{P}^{(1)}_{\alpha\beta} = m \int d^3v\, (v_\alpha - u_\alpha)(v_\beta - u_\beta)\, f^{(1)}
= - \frac{m}{\omega} \int d^3v\, (v_\alpha - u_\alpha)(v_\beta - u_\beta) \left( \partial^{(1)}_t f^{(0)} + v_\gamma \partial^{(1)}_{x_\gamma} f^{(0)} \right).
\]
Insertion of (4.2.16) and (4.2.17) leads to (from now on the superscript (1)
will be dropped for the sake of simplicity)
and
\[
v_\gamma \partial_{x_\gamma} f^{(0)} = v_\gamma \frac{\partial f^{(0)}}{\partial \rho} \frac{\partial \rho}{\partial x_\gamma} + v_\gamma \frac{\partial f^{(0)}}{\partial u_\delta} \frac{\partial u_\delta}{\partial x_\gamma}
= v_\gamma \frac{f^{(0)}}{\rho} \frac{\partial \rho}{\partial x_\gamma} + \frac{m}{k_B T}\, v_\gamma (v_\delta - u_\delta)\, f^{(0)} \frac{\partial u_\delta}{\partial x_\gamma} .
\]
and thus
\[
\hat{P}^{(1)}_{\alpha\beta} = -n \frac{k_B T}{\omega} \left[ \left( \delta_{\alpha\beta}\delta_{\gamma\delta} + \delta_{\alpha\gamma}\delta_{\beta\delta} + \delta_{\alpha\delta}\delta_{\beta\gamma} \right) \frac{\partial u_\delta}{\partial x_\gamma} - \delta_{\alpha\beta} \frac{\partial u_\gamma}{\partial x_\gamma} \right]
\]
\[
= -n \frac{k_B T}{\omega}
\begin{pmatrix}
2 \dfrac{\partial u}{\partial x} & \dfrac{\partial u}{\partial y} + \dfrac{\partial v}{\partial x} & \dfrac{\partial u}{\partial z} + \dfrac{\partial w}{\partial x} \\[1ex]
\dfrac{\partial u}{\partial y} + \dfrac{\partial v}{\partial x} & 2 \dfrac{\partial v}{\partial y} & \dfrac{\partial v}{\partial z} + \dfrac{\partial w}{\partial y} \\[1ex]
\dfrac{\partial u}{\partial z} + \dfrac{\partial w}{\partial x} & \dfrac{\partial v}{\partial z} + \dfrac{\partial w}{\partial y} & 2 \dfrac{\partial w}{\partial z}
\end{pmatrix}.
\]
Neglecting density and temperature variations, the divergence of \(\hat{P}^{(1)}_{\alpha\beta}\) reads
\[
\frac{\partial \hat{P}^{(1)}_{\alpha\beta}}{\partial x_\beta}
= -\mu \left( 2 \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} + \frac{\partial^2 v}{\partial x \partial y} + \frac{\partial^2 w}{\partial x \partial z} \right) e_x + ...
\]
\[
= -\mu \left[ \frac{\partial}{\partial x_\beta} \left( \frac{\partial u_\alpha}{\partial x_\beta} \right) + \frac{\partial}{\partial x_\alpha} \left( \frac{\partial u_\beta}{\partial x_\beta} \right) \right]
= -\mu \left[ \nabla^2 u + \nabla (\nabla \cdot u) \right]
\]
where
\[
\mu = n \frac{k_B T}{\omega} \qquad (4.2.26)
\]
is the dynamic shear viscosity. Thus one obtains the Navier-Stokes equation,
where the kinematic shear (ν) and bulk (ξ) viscosities are equal and given by
\[
\nu = \frac{k_B T}{\omega m} = \xi . \qquad (4.2.28)
\]
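The tensor algebra leading to the viscous term can be double-checked with finite differences: for the symmetric combination P_αβ = ∂u_α/∂x_β + ∂u_β/∂x_α, the contraction ∂_β P_αβ must equal (∇²u + ∇(∇·u))_α. The smooth test field and evaluation point below are arbitrary choices:

```python
import math

# finite-difference check of: div of (du_a/dx_b + du_b/dx_a) over b
# equals (Laplacian of u + grad of div u), component a
def u(x):
    return [math.sin(x[0] + 2 * x[1]),
            math.cos(x[1]) * x[2],
            x[0] * math.sin(x[2])]

h = 1e-3

def d2(alpha, i, j, x):
    # second derivative of u_alpha w.r.t. x_i and x_j (central differences)
    def d1(y):
        yp, ym = list(y), list(y)
        yp[i] += h
        ym[i] -= h
        return (u(yp)[alpha] - u(ym)[alpha]) / (2 * h)
    xp, xm = list(x), list(x)
    xp[j] += h
    xm[j] -= h
    return (d1(xp) - d1(xm)) / (2 * h)

x0 = [0.4, -0.7, 1.2]
diffs = []
for a in range(3):
    div_P = sum(d2(a, b, b, x0) + d2(b, a, b, x0) for b in range(3))
    rhs = (sum(d2(a, b, b, x0) for b in range(3))      # (Laplacian u)_a
           + sum(d2(b, b, a, x0) for b in range(3)))   # (grad div u)_a
    diffs.append(abs(div_P - rhs))

assert max(diffs) < 1e-4
```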
4.3 The maximum entropy principle 153
I(N ) = log2 N,
I(N, α) = α · I(N )
and the choice of two elements out of a direct product of two sets Z1 and Z2
⁵ The notation has its roots in the theory of communication. One of the basic
problems in this context is the reliable transmission of messages from a source
via a channel to a receiver. Often the messages to be transmitted have meaning
like, for example, the news you hear on the radio. This, however, is not always
the case. In transmitting music, the meaning is much more subtle than in the
case of a verbal message. In any case, meaning is quite irrelevant to the problem
of transmitting the information.
Fig. 4.3.1. The information for the selection of a certain element out of a set of
N = 2ⁿ elements is defined as the number of alternative decisions necessary when
going from the root to a certain end point. The selection of a certain element out
of 8 elements requires three binary decisions.
[Binary decision tree: the root branches into 2, then 4, then 8 end points labeled 1 to 8; the three levels of branching correspond to three binary decisions.]
Now consider probability distributions instead of sets. Let us start with dis-
crete probability distributions P with a finite number of entries:
\[
P := \{P_1 ... P_N\}, \qquad \sum_{k=1}^{N} P_k = 1
\]
over the set of events
\[
\Omega := \{x_1 ... x_N\}.
\]
– The sharp distribution: P_i = δ_il, i.e. each measurement will yield the re-
sult x_l. The sharp distribution contains a lot of information: if you know
this probability distribution you can be sure of the output of your next
measurement. Because one event out of N possible events is selected, the
measure
\[
I(P) = \log_2 N
\]
suggests itself.
4.3 The maximum entropy principle 155
– The normal distribution: P_i = 1/N, i.e. every possible event has the same
probability. The normal distribution can be understood as a consequence
of Laplace's principle of insufficient reason: "If there is no reason
to single out a certain event for given information concerning an exper-
imental situation, a normal distribution is to be assumed." (Stumpf and
Rieckers, 1976, p. 14). Obviously the normal distribution contains the min-
imal amount of information, thus
\[
I(P) = 0.
\]
The measure of information I(P ) for a general distribution I(P ) = I(P1 ...PN )
is based on four postulates:
3. I(P ) = 0 for the normal distribution and I(P ) = log2 N for the sharp
distribution.
4. The statistical information of a single element i(P ) is defined on 0 ≤ P ≤
1 and is continuous.
Theorem 4.3.1. The statistical measure of information I(P) over the set
of events Ω with N elements, which fulfills the above given postulates 1 to 4,
is uniquely given by
\[
I(P) = I(P_1 ... P_N) = \sum_{i=1}^{N} P_i \log_2 P_i + \log_2 N. \qquad (4.3.1)
\]
The proof can be found, for example, in Stumpf and Rieckers (1976).
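Formula (4.3.1) can be verified directly for the two extreme cases discussed above (N = 8 is just an example value):

```python
import math

# direct check of (4.3.1) for the sharp and the normal distribution
def I(P):
    N = len(P)
    return sum(p * math.log2(p) for p in P if p > 0) + math.log2(N)

N = 8
sharp = [1.0 if i == 3 else 0.0 for i in range(N)]
normal = [1.0 / N] * N

assert abs(I(sharp) - math.log2(N)) < 1e-12   # maximal information: 3 bits
assert abs(I(normal)) < 1e-12                 # no information
```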
Lemma 4.3.1. The maximum value I_max of I(P) is given by the sharp dis-
tribution:
\[
I_{max}(P) = \log_2 N.
\]
The entropy
\[
S(P) := I_{max} - I(P) = - \sum_{i=1}^{N} P_i \log_2 P_i
\]
is a measure for the lack of information: S vanishes for the sharp distri-
bution and becomes maximal for the normal distribution.
The generalization of I(P) for continuous sets of events is given by the
(Shannon) entropy
\[
S[f] := -k \int f(x) \ln f(x)\, dx . \qquad (4.3.3)
\]
If the density f is subject to the m + 1 constraints ∫ f(x) dx = 1 and
∫ R_i(x) f(x) dx = r_i (i = 1, ..., m), then the probability density which
maximizes the lack of information while respecting the m + 1 constraints
is uniquely given by
\[
f(x) = \exp\!\left( -\lambda_0 - \sum_{i=1}^{m} \lambda_i R_i(x) \right). \qquad (4.3.4)
\]
Proof. The proof essentially consists of two parts. Here only the derivation of
the distribution (4.3.4) shall be discussed in detail. The second part, namely
the proof that the Lagrange multipliers are uniquely determined by the values
r_1, ..., r_m, can be found in Stumpf and Rieckers (1976, p. 20).
The extremum of a functional under given constraints is sought after. An
extended functional Ŝ[f] is defined by coupling the constraints via Lagrange
multipliers η_0, ..., η_m to the Shannon entropy S[f]:
\[
\hat{S}[f] := S[f] - k (\lambda_0 - 1)\, Tr[f] - k \sum_{i=1}^{m} \lambda_i\, Tr[(R_i - r_i) f]
= -k\, Tr\!\left[ f \left( \ln f + \lambda_0 - 1 + \sum_{i=1}^{m} \lambda_i (R_i - r_i) \right) \right].
\]
For reasons which become obvious in a moment the η_j have been written as
follows:
\[
\eta_0 = k(\lambda_0 - 1), \qquad \eta_i = k \lambda_i \quad (1 \le i \le m).
\]
The trace of f is defined as
\[
Tr[f] := \int f(x)\, dx . \qquad (4.3.5)
\]
The trace is linear, i.e. Tr[cf] = c Tr[f], where c is a constant.
The vanishing of the functional derivative of Ŝ[f] with respect to f is a
necessary condition for an extremum of S[f]:
\[
\frac{\delta \hat{S}[f]}{\delta f} = 0 .
\]
Functional derivatives are calculated analogously to the rules for ordinary
derivatives (see, for example, Großmann, 1988):
\[
\frac{\delta \hat{S}[f]}{\delta f} = -k \left( \ln f + \lambda_0 + \sum_{i=1}^{m} \lambda_i R_i \right)
\]
and therefore
\[
\ln f = -\lambda_0 - \sum_{i=1}^{m} \lambda_i R_i ,
\]
respectively
\[
f = \exp\!\left( -\lambda_0 - \sum_{i=1}^{m} \lambda_i R_i \right).
\]
q.e.d.
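The result (4.3.4) can be illustrated numerically on a finite event set: with a single constraint fixing the mean of R(x) = x, the entropy maximizer has the form exp(−λ₀ − λx), and any other distribution satisfying the same constraint has smaller entropy. The event set Ω = {0, 1, 2, 3}, the target mean, and the comparison distribution below are arbitrary illustration choices:

```python
import math

# maximum entropy with one constraint <x> = r on a finite set
xs = [0, 1, 2, 3]
r = 1.2

def dist(lam):
    w = [math.exp(-lam * x) for x in xs]
    Z = sum(w)                  # exp(lambda_0) = Z handles normalization
    return [wi / Z for wi in w]

def mean(P):
    return sum(p * x for p, x in zip(P, xs))

def S(P):
    return -sum(p * math.log(p) for p in P if p > 0)

# bisection for lambda: mean(dist(lam)) decreases monotonically in lam
lo, hi = -20.0, 20.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean(dist(mid)) > r:
        lo = mid
    else:
        hi = mid
f = dist(0.5 * (lo + hi))

assert abs(mean(f) - r) < 1e-9
# any other distribution with the same mean has lower entropy:
g = [0.4, 0.2, 0.2, 0.2]        # also has mean 1.2
assert S(f) > S(g)
```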
The maximum entropy principle will be applied later on to calculate equilib-
rium distributions for lattice Boltzmann models.
Further reading: The proceedings edited by Levine and Tribus (1979) and
especially the paper by Jaynes (1979).
by minimizing
\[
V = \sum_i s_i^2 .
\]
The lattice velocities c_i satisfy
\[
\sum_i c_i = 0 \quad \text{and} \quad \sum_i c_i^2 = n .
\]
\[
S_\alpha := - \frac{1}{\alpha - 1} \ln \sum_{i=1}^{N} p_i^\alpha , \qquad \alpha \in \mathbb{R},\ \alpha \ne 1 .
\]
Calculate
\[
\lim_{\alpha \to 1} S_\alpha .
\]
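As a numerical hint for the exercise, evaluating S_α close to α = 1 approaches the Shannon entropy −Σᵢ pᵢ ln pᵢ (the distribution p below is an arbitrary example):

```python
import math

# S_alpha for a sample distribution and its behaviour near alpha = 1
p = [0.5, 0.25, 0.125, 0.125]

def S_alpha(alpha):
    return -1.0 / (alpha - 1.0) * math.log(sum(pi ** alpha for pi in p))

shannon = -sum(pi * math.log(pi) for pi in p)
for eps in (1e-3, 1e-5):
    assert abs(S_alpha(1.0 + eps) - shannon) < 1e-2
    assert abs(S_alpha(1.0 - eps) - shannon) < 1e-2
```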