
Analytic function

In mathematics, an analytic function is a function that is locally given by a convergent power series. There exist both real analytic functions and complex analytic functions. Functions of each type are infinitely differentiable, but complex analytic functions exhibit properties that do not generally hold for real analytic functions.

A function f is analytic if and only if its Taylor series about x0 converges to the function in some neighborhood, for every x0 in its domain. It is important that the convergence take place in a neighborhood and not just at the single point x0, since every differentiable function has at least a tangent line at every point, which is its Taylor series of order 1. So merely having a polynomial expansion at singular points is not enough; the Taylor series must also converge to the function on points adjacent to x0 for f to be an analytic function. As counterexamples, see the Weierstrass function or the Fabius function.

Definitions
Formally, a function f is real analytic on an open set D in the real line if for any x0 ∈ D one can write

f(x) = a0 + a1 (x − x0) + a2 (x − x0)² + a3 (x − x0)³ + ⋯ = Σ_{n=0}^∞ an (x − x0)^n,

in which the coefficients a0, a1, ... are real numbers and the series is convergent to f(x) for x in a neighborhood of x0.

Alternatively, a real analytic function is an infinitely differentiable function such that the Taylor series at any point x0 in its domain,

T(x) = Σ_{n=0}^∞ f^(n)(x0) (x − x0)^n / n!,

converges to f(x) for x in a neighborhood of x0 pointwise.[a] The set of all real analytic functions on a given set D is often denoted by C^ω(D).

A function f defined on some subset of the real line is said to be real analytic at a point x if there is a neighborhood D of x on which f is real analytic.

The definition of a complex analytic function is obtained by replacing, in the definitions above,
"real" with "complex" and "real line" with "complex plane". A function is complex analytic if and
only if it is holomorphic i.e. it is complex differentiable. For this reason the terms "holomorphic"
and "analytic" are often used interchangeably for such functions.[1]

Examples
Typical examples of analytic functions are:

The following elementary functions:

All polynomials: if a polynomial has degree n, any terms of degree larger than n in its Taylor series expansion are identically zero, so this series is trivially convergent. Furthermore, every polynomial is its own Maclaurin series.

The exponential function is analytic: any Taylor series for this function converges not only for x close enough to x0 (as in the definition) but for all values of x (real or complex).

The trigonometric functions, the logarithm, and the power functions are analytic on any open set of their domain.

Most special functions (at least in some range of the complex plane): hypergeometric functions, Bessel functions, gamma functions.
Typical examples of functions that are not analytic are:

The absolute value function, when defined on the set of real numbers or complex numbers, is not everywhere analytic because it is not differentiable at 0.

Piecewise defined functions (functions given by different formulae in different regions) are typically not analytic where the pieces meet.

The complex conjugate function z → z* is not complex analytic (see the short numerical check after this list), although its restriction to the real line is the identity function and therefore real analytic, and it is real analytic as a function from R² to R².

Other non-analytic smooth functions, and in particular any smooth function f with compact support, cannot be analytic on their domain.[2]
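A quick numerical check of the complex-conjugate item above (an illustrative sketch added here, not from the original article): difference quotients of the conjugate depend on the direction of the increment h, while those of an analytic function such as z² do not.

# Difference quotients of f(z) = conj(z) depend on the direction of h,
# so the complex derivative does not exist; compare with g(z) = z**2.
def quotient(f, z, h):
    return (f(z + h) - f(z)) / h

z0 = 1.0 + 2.0j
for h in (1e-6, 1e-6j, 1e-6 * (1 + 1j) / abs(1 + 1j)):
    print("conj:", quotient(lambda z: z.conjugate(), z0, h),
          " z^2:", quotient(lambda z: z * z, z0, h))
# conj gives 1, -1 and -1j depending on the direction; z^2 gives about 2*z0 every time.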
Alternative characterizations
The following conditions are equivalent:

1. f is real analytic on an open set D.

2. There is a complex analytic extension of f to an open set G ⊂ C which contains D.

3. f is smooth and for every compact set K ⊂ D there exists a constant C such that for every x ∈ K and every non-negative integer k the following bound holds:[3]

|f^(k)(x)| ≤ C^(k+1) k!

Complex analytic functions are exactly equivalent to holomorphic functions, and are thus much
more easily characterized.
For the case of an analytic function with several variables (see below), the real analyticity can be
characterized using the Fourier–Bros–Iagolnitzer transform.

In the multivariable case, real analytic functions satisfy a direct generalization of the third characterization.[4] Let U ⊂ Rⁿ be an open set, and let f : U → R.

Then f is real analytic on U if and only if f is smooth and for every compact K ⊂ U there exists a constant C such that for every multi-index α the following bound holds:[5]

sup_{x ∈ K} |∂^α f(x)| ≤ C^(|α|+1) α!
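As a numerical illustration of the one-variable derivative bound (an added sketch; the function f(x) = 1/(1 + x²), the compact set K = [−1/2, 1/2], and the grid resolution are assumptions chosen for the example):

import math
import numpy as np
import sympy as sp

x = sp.symbols('x')
f = 1 / (1 + x**2)
K = np.linspace(-0.5, 0.5, 201)

# Estimate the smallest C with  max_K |f^(k)| <= C**(k+1) * k!  for k = 0..8.
candidates = []
for k in range(9):
    dk = sp.lambdify(x, sp.diff(f, x, k), 'numpy')
    m = float(np.max(np.abs(dk(K))))
    candidates.append((m / math.factorial(k)) ** (1.0 / (k + 1)))
print("C can be taken as roughly", max(candidates))
# The estimate stays modest (about 1 for this f and K), consistent with the bound.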

Properties of analytic functions

The sums, products, and compositions of analytic functions are analytic.

The reciprocal of an analytic function that is nowhere zero is analytic, as is the inverse of an invertible analytic function whose derivative is nowhere zero. (See also the Lagrange inversion theorem.)

Any analytic function is smooth, that is, infinitely differentiable. The converse is not true for real functions; in fact, in a certain sense, the real analytic functions are sparse compared to all real infinitely differentiable functions. For the complex numbers, the converse does hold, and in fact any function differentiable once on an open set is analytic on that set (see "analyticity and differentiability" below).

For any open set Ω, the set A(Ω) of all analytic functions u : Ω → C is a Fréchet space with respect to uniform convergence on compact sets. The fact that uniform limits on compact sets of analytic functions are analytic is an easy consequence of Morera's theorem. The set of all bounded analytic functions with the supremum norm is a Banach space.
A polynomial cannot be zero at too many points unless it is the zero polynomial (more precisely,
the number of zeros is at most the degree of the polynomial). A similar but weaker statement
holds for analytic functions. If the set of zeros of an analytic function ƒ has an accumulation
point inside its domain, then ƒ is zero everywhere on the connected component containing the
accumulation point. In other words, if (rn) is a sequence of distinct numbers such that ƒ(rn) = 0 for all n and this sequence converges to a point r in the domain D, then ƒ is identically zero on the connected component of D containing r. This is known as the identity theorem.

Also, if all the derivatives of an analytic function at a point are zero, the function is constant on
the corresponding connected component.

These statements imply that while analytic functions do have more degrees of freedom than
polynomials, they are still quite rigid.

Analyticity and differentiability
As noted above, any analytic function (real or complex) is infinitely differentiable (also known as smooth, or C∞). (Note that this differentiability is in the sense of real variables; compare complex derivatives below.) There exist smooth real functions that are not analytic: see non-analytic smooth function. In fact there are many such functions.
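A small symbolic sketch of the standard non-analytic smooth function (added here for illustration; the choice f(x) = exp(−1/x²), extended by f(0) = 0, is the usual textbook example and is not taken verbatim from this text):

import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(-1 / x**2)   # extended by f(0) = 0; smooth but not analytic at 0

# Every derivative has limit 0 at x = 0, so the Taylor series at 0 is identically 0,
# yet f(x) > 0 for every x != 0: the series converges, but not to f.
for k in range(5):
    print(k, sp.limit(sp.diff(f, x, k), x, 0))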
The situation is quite different when one considers complex analytic functions and complex
derivatives. It can be proved that any complex function differentiable (in the complex sense) in
an open set is analytic. Consequently, in complex analysis, the term analytic function is
synonymous with holomorphic function.

Real versus complex analytic functions
Real and complex analytic functions have important differences (as one can notice even from their different relationship with differentiability). Analyticity of complex functions is a more restrictive property, as it has more restrictive necessary conditions, and complex analytic functions have more structure than their real-line counterparts.[6]

According to Liouville's theorem, any bounded complex analytic function defined on the whole complex plane is constant. The corresponding statement for real analytic functions, with the complex plane replaced by the real line, is clearly false; this is illustrated by

f(x) = 1 / (x² + 1).
Also, if a complex analytic function is defined in an open ball around a point x0, its power series
expansion at x0 is convergent in the whole open ball (holomorphic functions are analytic). This
statement for real analytic functions (with open ball meaning an open interval of the real line
rather than an open disk of the complex plane) is not true in general; the function of the example
above gives an example for x0 = 0 and a ball of radius exceeding 1, since the power series 1 − x² + x⁴ − x⁶ + ⋯ diverges for |x| ≥ 1.

Any real analytic function on some open set on the real line can be extended to a complex
analytic function on some open set of the complex plane. However, not every real analytic
function defined on the whole real line can be extended to a complex function defined on the
whole complex plane. The function ƒ(x) defined in the paragraph above is a counterexample, as it
is not defined for x = ±i. This explains why the Taylor series of ƒ(x) diverges for |x| > 1, i.e., the
radius of convergence is 1 because the complexified function has a pole at distance 1 from the
evaluation point 0 and no further poles within the open disc of radius 1 around the evaluation
point.
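A numerical sketch of this divergence (an added example; the truncation at 50 terms and the sample points are arbitrary choices):

# Partial sums of 1 - x^2 + x^4 - ... versus 1/(1 + x^2): convergence only for |x| < 1,
# reflecting the poles of the complexified function at x = +i and x = -i.
def partial_sum(x, terms):
    return sum((-1) ** n * x ** (2 * n) for n in range(terms))

for x in (0.5, 0.9, 1.1):
    print(x, partial_sum(x, 50), 1 / (1 + x * x))
# For x = 0.5 and 0.9 the partial sum is close to 1/(1+x^2); for x = 1.1 it is already
# far from the target and keeps growing as more terms are added.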

Analytic functions of several variables
One can define analytic functions in several variables by means of power series in those
variables (see power series). Analytic functions of several variables have some of the same
properties as analytic functions of one variable. However, especially for complex analytic
functions, new and interesting phenomena show up in 2 or more complex dimensions:

Zero sets of complex analytic functions in more than one variable are never discrete. This can be proved by Hartogs's extension theorem.

Domains of holomorphy for single-valued functions consist of arbitrary (connected) open sets. In several complex variables, however, only some connected open sets are domains of holomorphy. The characterization of domains of holomorphy leads to the notion of pseudoconvexity.

See also

Cauchy–Riemann equations
Holomorphic function
Paley–Wiener theorem
Quasi-analytic function
Infinite compositions of analytic
functions
Non-analytic smooth function
Cauchy–Riemann equations

In the field of complex analysis in mathematics, the Cauchy–Riemann equations, named after
Augustin Cauchy and Bernhard Riemann, consist of a system of two partial differential
equations which form a necessary and sufficient condition for a complex function of a complex
variable to be complex differentiable.

Figure: a visual depiction of a vector X in a domain being multiplied by a complex number z and then mapped by f, versus being mapped by f and then multiplied by z afterwards. If both of these result in the point ending up in the same place for all X and z, then f satisfies the Cauchy–Riemann condition.

These equations are

(1a)   ∂u/∂x = ∂v/∂y

and

(1b)   ∂u/∂y = −∂v/∂x

where u(x, y) and v(x, y) are real differentiable bivariate functions.

Typically, u and v are respectively the real and imaginary parts of a complex-valued function
f(x + iy) = f(x, y) = u(x, y) + iv(x, y) of a single complex variable z = x + iy where x and y are real
variables; u and v are real differentiable functions of the real variables. Then f is complex
differentiable at a complex point if and only if the partial derivatives of u and v satisfy the
Cauchy–Riemann equations at that point.

A holomorphic function is a complex function that is differentiable at every point of some open
subset of the complex plane C. It has been proved that holomorphic functions are analytic and
analytic complex functions are complex-differentiable. In particular, holomorphic functions are
infinitely complex-differentiable.

This equivalence between differentiability and analyticity is the starting point of all complex
analysis.

History
The Cauchy–Riemann equations first appeared in the work of Jean le Rond d'Alembert.[1] Later,
Leonhard Euler connected this system to the analytic functions.[2] Cauchy[3] then used these
equations to construct his theory of functions. Riemann's dissertation on the theory of functions
appeared in 1851.[4]
Simple example
Suppose that f(z) = z². The complex-valued function f is differentiable at any point z in the complex plane.

The real part u(x, y) and the imaginary part v(x, y) are

u(x, y) = x² − y²,   v(x, y) = 2xy,

and their partial derivatives are

∂u/∂x = 2x,   ∂u/∂y = −2y,   ∂v/∂x = 2y,   ∂v/∂y = 2x.

We see that indeed the Cauchy–Riemann equations are satisfied: ∂u/∂x = ∂v/∂y and ∂u/∂y = −∂v/∂x.
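The example can also be checked symbolically; the sketch below (added for illustration, taking the example function to be f(z) = z² as above) verifies both Cauchy–Riemann equations with SymPy:

import sympy as sp

x, y = sp.symbols('x y', real=True)
f = (x + sp.I * y) ** 2          # the example function f(z) = z^2
u, v = sp.re(sp.expand(f)), sp.im(sp.expand(f))

print(u, v)                                        # x**2 - y**2, 2*x*y
print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))  # 0  (first Cauchy-Riemann equation)
print(sp.simplify(sp.diff(u, y) + sp.diff(v, x)))  # 0  (second Cauchy-Riemann equation)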

Interpretation and reformulation
The Cauchy–Riemann equations are one way of looking at the condition for a function to be differentiable in the sense of complex analysis: in other words, they encapsulate the notion of a function of a complex variable by means of conventional differential calculus. In the theory there are several other major ways of looking at this notion, and translating the condition into other languages is often needed.

Conformal mappings
First, the Cauchy–Riemann equations may be written in complex form:

(2)   i ∂f/∂x = ∂f/∂y

In this form, the equations correspond structurally to the condition that the Jacobian matrix is of the form

( a  −b )
( b   a )

where a = ∂u/∂x = ∂v/∂y and b = ∂v/∂x = −∂u/∂y. A matrix of this form is the matrix representation of a complex number. Geometrically, such a matrix is always the composition of a rotation with a scaling, and in particular preserves angles. The Jacobian of a function f(z) takes infinitesimal line segments at the intersection of two curves in z and rotates them to the corresponding segments in f(z). Consequently, a function satisfying the Cauchy–Riemann equations, with a nonzero derivative, preserves the angle between curves in the plane. That is, the Cauchy–Riemann equations are the conditions for a function to be conformal.

Moreover, because the composition of a conformal transformation with another conformal transformation is also conformal, the composition of a solution of the Cauchy–Riemann equations with a conformal map must itself solve the Cauchy–Riemann equations. Thus the Cauchy–Riemann equations are conformally invariant.
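A small numerical illustration of this rotation-plus-scaling structure (added; it uses the Jacobian of the earlier example f(z) = z², i.e. (u, v) = (x² − y², 2xy), at an arbitrary sample point):

import numpy as np

# Jacobian of (u, v) = (x^2 - y^2, 2xy) at a sample point:
x0, y0 = 1.0, 2.0
Df = np.array([[2 * x0, -2 * y0],
               [2 * y0,  2 * x0]])   # [[a, -b], [b, a]] with a = u_x = v_y, b = v_x = -u_y

a, b = Df[0, 0], Df[1, 0]
scale, angle = np.hypot(a, b), np.arctan2(b, a)
R = scale * np.array([[np.cos(angle), -np.sin(angle)],
                      [np.sin(angle),  np.cos(angle)]])
print(np.allclose(Df, R))   # True: the Jacobian is a rotation composed with a scaling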

Complex differentiability
Let

f(z) = u(z) + i v(z),

where u and v are real-valued functions, be a complex-valued function of a complex variable z = x + iy, where x and y are real variables. Since f(z) = f(x + iy) = f(x, y), the function can also be regarded as a function of the real variables x and y. Then, the complex derivative of f at a point z0 = x0 + iy0 is defined by

f′(z0) = lim_{h → 0} ( f(z0 + h) − f(z0) ) / h     (h a complex number),

provided this limit exists (that is, the limit exists along every path approaching z0, and does not depend on the chosen path).

A fundamental result of complex analysis is that f is complex differentiable at z0 (that is, it has a complex derivative), if and only if the bivariate real functions u(x, y) and v(x, y) are differentiable at (x0, y0) and satisfy the Cauchy–Riemann equations at this point.[5][6][7]

In fact, if the complex derivative exists at z0, then it may be computed by taking the limit at z0 along the real axis and the imaginary axis, and the two limits must be equal. Along the real axis, the limit is

lim_{h → 0, h real} ( f(z0 + h) − f(z0) ) / h = ∂f/∂x (z0),

and along the imaginary axis, the limit is

lim_{h → 0, h real} ( f(z0 + ih) − f(z0) ) / (ih) = −i ∂f/∂y (z0).

So, the equality of the derivatives implies

i ∂f/∂x (z0) = ∂f/∂y (z0),

which is the complex form (2) of the Cauchy–Riemann equations at z0.

(Note that if f is complex differentiable at z0, it is also real differentiable and the Jacobian of f at z0 is the complex scalar f′(z0), regarded as a real-linear map of R², since the limit |f(z) − f(z0) − f′(z0)(z − z0)| / |z − z0| → 0 as z → z0.)
Conversely, if f is differentiable at z0 (in the real sense) and satisfies the Cauchy–Riemann equations there, then it is complex differentiable at this point. Assume that f, as a function of the two real variables x and y, is differentiable at z0 (real differentiable). This is equivalent to the existence of the following linear approximation

f(z0 + Δz) − f(z0) = fx Δx + fy Δy + η(Δz) Δz,

where fx = ∂f/∂x (z0), fy = ∂f/∂y (z0), z = x + iy, and η(Δz) → 0 as Δz → 0.

Since Δx = (Δz + Δz̄)/2 and Δy = (Δz − Δz̄)/(2i), the above can be re-written as

f(z0 + Δz) − f(z0) = (fx − i fy)/2 · Δz + (fx + i fy)/2 · Δz̄ + η(Δz) Δz.

Now, if Δz is real, then Δz̄ = Δz, while if it is imaginary, then Δz̄ = −Δz. Therefore, the second term is independent of the path of the limit when (and only when) it vanishes identically: fx + i fy = 0, which is precisely the Cauchy–Riemann equations in the complex form. This proof also shows that, in that case,

df/dz (z0) = lim_{Δz → 0} ( f(z0 + Δz) − f(z0) ) / Δz = (fx − i fy)/2.
Note that the hypothesis of real differentiability at the point is essential and cannot be dispensed with. For example,[8] the function f(x, y) = √|xy|, regarded as a complex function with imaginary part identically zero, has both partial derivatives at (x0, y0) = (0, 0), and it moreover satisfies the Cauchy–Riemann equations at that point, but it is not differentiable in the sense of real functions (of several variables), and so the first condition, that of real differentiability, is not met. Therefore, this function is not complex differentiable.
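A numerical sketch of this counterexample (added, taking the function to be f(x, y) = √|xy| as above):

import numpy as np

# f(x, y) = sqrt(|x*y|): both partial derivatives at the origin are 0 (f vanishes on the axes),
# but along the diagonal x = y = t the remainder |f(t,t) - 0| / ||(t,t)|| stays at 1/sqrt(2),
# so f is not (real) differentiable at the origin.
f = lambda x, y: np.sqrt(abs(x * y))

for t in (1e-2, 1e-4, 1e-6):
    print(t, f(t, 0.0), f(0.0, t), f(t, t) / np.hypot(t, t))
# The values along the axes are identically 0; the last column stays near 0.7071.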

Some sources[9][10] state a sufficient condition for the complex differentiability at a point as, in addition to the Cauchy–Riemann equations, the partial derivatives of u and v being continuous at the point, because this continuity condition ensures the existence of the aforementioned linear approximation. Note that it is not a necessary condition for complex differentiability. For example, there is a function that is complex differentiable at 0 but whose real and imaginary parts have discontinuous partial derivatives there. Since complex differentiability is usually considered in an open set, where it in fact implies continuity of all partial derivatives (see below), this distinction is often elided in the literature.

Independence of the complex conjugate

The above proof suggests another interpretation of the Cauchy–Riemann equations. The complex conjugate of z, denoted z̄, is defined by

z̄ = x − iy

for real variables x and y. Defining the two Wirtinger derivatives as

∂/∂z = (1/2)( ∂/∂x − i ∂/∂y ),   ∂/∂z̄ = (1/2)( ∂/∂x + i ∂/∂y ),

the Cauchy–Riemann equations can then be written as a single equation

∂f/∂z̄ = 0,

and the complex derivative of f in that case is

df/dz = ∂f/∂z.

In this form, the Cauchy–Riemann equations can be interpreted as the statement that a complex function f of a complex variable z is independent of the variable z̄. As such, we can view analytic functions as true functions of one complex variable (z) instead of complex functions of two real variables (x and y).
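A short SymPy sketch of this reformulation (added for illustration; the test functions z² and z̄ are arbitrary choices):

import sympy as sp

x, y = sp.symbols('x y', real=True)
dbar = lambda g: (sp.diff(g, x) + sp.I * sp.diff(g, y)) / 2   # Wirtinger derivative d/d(z-bar)

z = x + sp.I * y
print(sp.simplify(dbar(z**2)))            # 0: z^2 depends only on z, so it is holomorphic
print(sp.simplify(dbar(sp.conjugate(z))))  # 1: the conjugate genuinely depends on z-bar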

Physical interpretation

Figure: contour plot of a pair u and v satisfying the Cauchy–Riemann equations. Streamlines (v = const, red) are perpendicular to equipotentials (u = const, blue). The point (0,0) is a stationary point of the potential flow, with six streamlines meeting, and six equipotentials also meeting and bisecting the angles formed by the streamlines.

A standard physical interpretation of the Cauchy–Riemann equations, going back to Riemann's work on function theory,[11] is that u represents a velocity potential of an incompressible steady fluid flow in the plane, and v is its stream function. Suppose that the pair of (twice continuously differentiable) functions u and v satisfies the Cauchy–Riemann equations. We will take u to be a velocity potential, meaning that we imagine a flow of fluid in the plane such that the velocity vector of the fluid at each point of the plane is equal to the gradient of u, defined by

∇u = ( ∂u/∂x, ∂u/∂y ).

By differentiating the Cauchy–Riemann equations for the functions u and v, with the symmetry of second derivatives, one shows that u solves Laplace's equation:

∂²u/∂x² + ∂²u/∂y² = 0.

That is, u is a harmonic function. This means that the divergence of the gradient is zero, and so the fluid is incompressible. The function v also satisfies the Laplace equation, by a similar analysis. Also, the Cauchy–Riemann equations imply that the dot product ∇u ⋅ ∇v = 0, i.e., the directions of the maximum slope of u and of v are orthogonal to each other. This implies that the gradient of u must point along the v = const curves; so these are the streamlines of the flow. The u = const curves are the equipotential curves of the flow.

A holomorphic function can therefore be visualized by plotting the two families of level curves u = const and v = const. Near points where the gradient of u (or, equivalently, v) is not zero, these families form an orthogonal family of curves. At the points where ∇u = 0, the stationary points of the flow, the equipotential curves of u intersect. The streamlines also intersect at the same point, bisecting the angles formed by the equipotential curves.
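The orthogonality of the two families of level curves can be checked symbolically for a concrete pair; the sketch below (added, using the harmonic conjugates u = x² − y², v = 2xy from the earlier example) verifies ∇u ⋅ ∇v = 0 and Laplace's equation:

import sympy as sp

x, y = sp.symbols('x y', real=True)
u, v = x**2 - y**2, 2 * x * y    # harmonic conjugates (real and imaginary parts of z^2)

grad = lambda w: sp.Matrix([sp.diff(w, x), sp.diff(w, y)])
print(sp.simplify(grad(u).dot(grad(v))))                 # 0: level curves of u and v are orthogonal
print(sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)))  # 0: u solves Laplace's equation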

Harmonic vector field

Another interpretation of the Cauchy–Riemann equations can be found in Pólya & Szegő.[12] Suppose that u and v satisfy the Cauchy–Riemann equations in an open subset of R², and consider the vector field

( u, −v ),

regarded as a (real) two-component vector. Then the second Cauchy–Riemann equation (1b) asserts that this field is irrotational (its curl is 0):

∂(−v)/∂x − ∂u/∂y = 0.

The first Cauchy–Riemann equation (1a) asserts that the vector field is solenoidal (or divergence-free):

∂u/∂x + ∂(−v)/∂y = 0.
Owing respectively to Green's theorem and the divergence theorem, such a field is necessarily a
conservative one, and it is free from sources or sinks, having net flux equal to zero through any
open domain without holes. (These two observations combine as real and imaginary parts in
Cauchy's integral theorem.) In fluid dynamics, such a vector field is a potential flow.[13] In
magnetostatics, such vector fields model static magnetic fields on a region of the plane
containing no current. In electrostatics, they model static electric fields in a region of the plane
containing no electric charge.

This interpretation can equivalently be restated in the language of differential forms. The pair u and v satisfy the Cauchy–Riemann equations if and only if the one-form v dx + u dy is both closed and coclosed (a harmonic differential form).

Preservation of complex structure

Another formulation of the Cauchy–Riemann equations involves the complex structure in the plane, given by

J = ( 0  −1 )
    ( 1   0 ).

This is a complex structure in the sense that the square of J is the negative of the 2×2 identity matrix: J² = −I. As above, if u(x,y) and v(x,y) are two functions in the plane, put

f(x, y) = ( u(x, y), v(x, y) ).

The Jacobian matrix of f is the matrix of partial derivatives

Df = ( ∂u/∂x  ∂u/∂y )
     ( ∂v/∂x  ∂v/∂y ).

Then the pair of functions u, v satisfies the Cauchy–Riemann equations if and only if the 2×2 matrix Df commutes with J.[14]

This interpretation is useful in symplectic geometry, where it is the starting point for the study of
pseudoholomorphic curves.
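A numerical check of this commutation criterion (added; the Cauchy–Riemann pair u = x² − y², v = 2xy and the sample point are assumptions chosen for the example):

import numpy as np

J = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Jacobian of (u, v) = (x^2 - y^2, 2xy) at a sample point (a Cauchy-Riemann pair):
x0, y0 = 0.3, -1.2
Df = np.array([[2 * x0, -2 * y0],
               [2 * y0,  2 * x0]])

print(np.allclose(J @ J, -np.eye(2)))   # True: J is a complex structure, J^2 = -I
print(np.allclose(Df @ J, J @ Df))      # True: Df commutes with J exactly when CR holds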

Other representations
Other representations of the Cauchy–Riemann equations occasionally arise in other coordinate systems. If (1a) and (1b) hold for a differentiable pair of functions u and v, then so do

∂u/∂n = ∂v/∂s,   ∂v/∂n = −∂u/∂s

for any coordinate system (n(x, y), s(x, y)) such that the pair (∇n, ∇s) is orthonormal and positively oriented. As a consequence, in particular, in the system of coordinates given by the polar representation z = r e^(iθ), the equations then take the form

∂u/∂r = (1/r) ∂v/∂θ,   ∂v/∂r = −(1/r) ∂u/∂θ.

Combining these into one equation for f gives

∂f/∂r = (1/(ir)) ∂f/∂θ.

The inhomogeneous Cauchy–Riemann equations consist of the two equations for a pair of unknown functions u(x, y) and v(x, y) of two real variables

∂u/∂x − ∂v/∂y = α(x, y),
∂u/∂y + ∂v/∂x = β(x, y),

for some given functions α(x, y) and β(x, y) defined in an open subset of R². These equations are usually combined into a single equation

∂f/∂z̄ = 𝜑(z, z̄),

where f = u + iv and 𝜑 = (α + iβ)/2.

If 𝜑 is Ck, then the inhomogeneous equation is explicitly solvable in any bounded domain D, provided 𝜑 is continuous on the closure of D. Indeed, by the Cauchy integral formula,

f(ζ, ζ̄) = (1/(2πi)) ∬_D 𝜑(z, z̄) dz ∧ dz̄ / (z − ζ)

for all ζ ∈ D.
Generalizations

Goursat's theorem and its generalizations
Suppose that f = u + iv is a complex-valued function which is differentiable as a function
f : R2 → R2. Then Goursat's theorem asserts that f is analytic in an open complex domain Ω if
and only if it satisfies the Cauchy–Riemann equation in the domain.[15] In particular, continuous
differentiability of f need not be assumed.[16]

The hypotheses of Goursat's theorem can be weakened significantly. If f = u + iv is continuous in an open set Ω and the partial derivatives of f with respect to x and y exist in Ω, and satisfy the Cauchy–Riemann equations throughout Ω, then f is holomorphic (and thus analytic). This result is the Looman–Menchoff theorem.

The hypothesis that f obey the Cauchy–Riemann equations throughout the domain Ω is essential. It is possible to construct a continuous function satisfying the Cauchy–Riemann equations at a point, but which is not analytic at the point (e.g., f(z) = z⁵/|z|⁴). Similarly, some additional assumption is needed besides the Cauchy–Riemann equations (such as continuity), as the following example illustrates:[17]

f(z) = exp(−z⁻⁴) for z ≠ 0, and f(0) = 0,

which satisfies the Cauchy–Riemann equations everywhere, but fails to be continuous at z = 0.

Nevertheless, if a function satisfies the Cauchy–Riemann equations in an open set in a weak sense, then the function is analytic. More precisely:[18]

If f(z) is locally integrable in an open domain Ω ⊂ C, and satisfies the Cauchy–Riemann equations weakly, then f agrees almost everywhere with an analytic function in Ω.
This is in fact a special case of a more general result on the regularity of solutions of hypoelliptic
partial differential equations.

Several variables
There are Cauchy–Riemann equations, appropriately generalized, in the theory of several
complex variables. They form a significant overdetermined system of PDEs. This is done using a
straightforward generalization of the Wirtinger derivative, where the function in question is
required to have the (partial) Wirtinger derivative with respect to each complex variable vanish.

Complex differential forms

As often formulated, the d-bar operator ∂̄ annihilates holomorphic functions. This generalizes most directly the formulation

∂f/∂z̄ = 0,

where

∂f/∂z̄ = (1/2)( ∂f/∂x + i ∂f/∂y ).

Bäcklund transform
Viewed as conjugate harmonic functions, the Cauchy–Riemann equations are a simple example
of a Bäcklund transform. More complicated, generally non-linear Bäcklund transforms, such as
in the sine-Gordon equation, are of great interest in the theory of solitons and integrable
systems.

Definition in Clifford algebra

In the Clifford algebra Cℓ(2), the complex number z = x + iy is represented as z ≡ x + J y, where J ≡ e1 e2 (with e1² = e2² = 1 and e1 e2 = −e2 e1, so J² = −1). The Dirac operator in this Clifford algebra is defined as ∇ ≡ e1 ∂x + e2 ∂y. The function f = u + J v is considered analytic if and only if ∇f = 0, which can be calculated in the following way:

∇f = e1 ∂x(u + Jv) + e2 ∂y(u + Jv) = e1 ∂x u + e1 J ∂x v + e2 ∂y u + e2 J ∂y v = 0.

Using e1 J = e2 and e2 J = −e1, and grouping by e1 and e2:

e1 ( ∂x u − ∂y v ) + e2 ( ∂x v + ∂y u ) = 0.

Hence, in traditional notation:

∂u/∂x = ∂v/∂y,   ∂u/∂y = −∂v/∂x.

Conformal mappings in higher dimensions

Let Ω be an open set in the Euclidean space Rⁿ. The equation for an orientation-preserving mapping f : Ω → Rⁿ to be a conformal mapping (that is, angle-preserving) is that

Dfᵀ Df = (det Df)^(2/n) I,

where Df is the Jacobian matrix, with transpose Dfᵀ, and I denotes the identity matrix.[19] For
n = 2, this system is equivalent to the standard Cauchy–Riemann equations of complex
variables, and the solutions are holomorphic functions. In dimension n > 2, this is still
sometimes called the Cauchy–Riemann system, and Liouville's theorem implies, under suitable
smoothness assumptions, that any such mapping is a Möbius transformation.

See also

List of complex analysis topics
Morera's theorem
Wirtinger derivatives

References

1. d'Alembert, Jean (1752). Essai d'une nouvelle théorie de la résistance des fluides (https://books.google.com/books?id=EepGitm97JkC). Paris: David l'aîné. Reprint 2018.
where ρu is the momentum density, a conservation variable.

Incompressible Euler equation(s)
(conservation or Eulerian form)

where ρg is the force density, a conservation variable.
Euler equations
In differential convective form, the compressible (and most general) Euler equations can be written compactly with the material derivative notation:

Euler equations
(convective form)

where the additional variable here is:

e, the specific internal energy (internal energy per unit mass).
The equations above thus represent conservation of mass, momentum, and energy: the energy equation expressed in the variable internal energy allows one to understand the link with the incompressible case, but it is not in the simplest form. Mass density, flow velocity and pressure are the so-called convective variables (or physical variables, or Lagrangian variables), while mass density, momentum density and total energy density are the so-called conserved variables (also called Eulerian, or mathematical variables).[1]

If one expands the material derivative the equations above are:
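The convective-form display referenced above, and its expansion, are missing from this copy; a standard reconstruction, using the material derivative D/Dt = ∂/∂t + u ⋅ ∇ (an assumption about notation, not verbatim from the source), reads:

\[
\frac{D\rho}{Dt} = -\rho\,\nabla\cdot\mathbf{u}, \qquad
\frac{D\mathbf{u}}{Dt} = -\frac{\nabla p}{\rho} + \mathbf{g}, \qquad
\frac{De}{Dt} = -\frac{p}{\rho}\,\nabla\cdot\mathbf{u},
\]

and, with the material derivative expanded,

\[
\frac{\partial\rho}{\partial t} + \mathbf{u}\cdot\nabla\rho + \rho\,\nabla\cdot\mathbf{u} = 0, \qquad
\frac{\partial\mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u} + \frac{\nabla p}{\rho} = \mathbf{g}, \qquad
\frac{\partial e}{\partial t} + \mathbf{u}\cdot\nabla e + \frac{p}{\rho}\,\nabla\cdot\mathbf{u} = 0 .
\]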


Incompressible constraint (revisited)
Coming back to the incompressible case, it now becomes apparent that the incompressible constraint typical of the former cases is actually a particular form, valid for incompressible flows, of the energy equation, and not of the mass equation. In particular, the incompressible constraint corresponds to the following very simple energy equation:

De/Dt = 0.

Thus for an incompressible inviscid fluid the specific internal energy is constant along the flow lines, even in a time-dependent flow. The pressure in an incompressible flow acts like a Lagrange multiplier, being the multiplier of the incompressible constraint in the energy equation, and consequently in incompressible flows it has no thermodynamic meaning. In fact, thermodynamics is typical of compressible flows and degenerates in incompressible flows.[7]

Based on the mass conservation equation, one can put this equation in the conservation form:

∂(ρe)/∂t + ∇ ⋅ (ρe u) = 0,

meaning that for an incompressible inviscid nonconductive flow a continuity equation holds for the internal energy.

Enthalpy conservation
Since by definition the specific enthalpy is

h = e + p/ρ,

the material derivative of the specific internal energy can be expressed as:

Then by substituting the momentum equation in this expression, one obtains:

And by substituting the latter in the energy equation, one obtains the enthalpy form of the Euler energy equation:

In a reference frame moving with an inviscid and nonconductive flow, the variation of enthalpy directly corresponds to a variation of pressure.
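The intermediate displays above are missing from this copy, and the original derivation routes through the momentum equation; a shorter direct route to the stated conclusion (an added sketch, not the text's own steps) is, with h ≡ e + p/ρ = e + pv:

\[
\frac{Dh}{Dt} \;=\; \frac{De}{Dt} + p\,\frac{Dv}{Dt} + v\,\frac{Dp}{Dt}
\;=\; -\,p\,\frac{Dv}{Dt} + p\,\frac{Dv}{Dt} + v\,\frac{Dp}{Dt}
\;=\; \frac{1}{\rho}\,\frac{Dp}{Dt},
\]

using the energy equation De/Dt = −p Dv/Dt; so along a fluid trajectory the enthalpy varies only through the pressure, as stated.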

Thermodynamics of ideal fluids

In thermodynamics the independent variables are the specific volume and the specific entropy, while the specific energy is a function of state of these two variables.

Deduction of the form valid for thermodynamic systems

Considering the first equation, the variable must be changed from density to specific volume. By definition:

v ≡ 1/ρ.

Thus the following identities hold:

ρ = 1/v,   ∇ρ = −(1/v²) ∇v,   ∂ρ/∂t = −(1/v²) ∂v/∂t.

Then by substituting these expressions in the mass conservation equation:

−(1/v²) ( ∂v/∂t + u ⋅ ∇v ) + (1/v) ∇ ⋅ u = 0.

And by multiplication:

Dv/Dt = v ∇ ⋅ u.

This equation is the only one belonging to the general continuum equations, so only this equation has the same form, for example, also in the Navier–Stokes equations.

On the other hand, the pressure in thermodynamics is the opposite of the partial derivative of the specific internal energy with respect to the specific volume:

p = − ∂e/∂v (at constant specific entropy s).

Since the internal energy in thermodynamics is a function of the two variables mentioned above, the pressure gradient contained in the momentum equation should be made explicit as:

∇p = − ∂²e/∂v² ∇v − ∂²e/∂v∂s ∇s.

It is convenient for brevity to switch the notation for the second-order derivatives:

e_vv ≡ ∂²e/∂v²,   e_vs ≡ ∂²e/∂v∂s.

Finally, the energy equation:

De/Dt = −p v ∇ ⋅ u

can be further simplified in convective form by changing variable from the specific energy to the specific entropy: in fact the first law of thermodynamics in local form can be written:

De/Dt = T Ds/Dt − p Dv/Dt.

By substituting the material derivative of the internal energy, the energy equation becomes:

T Ds/Dt − p ( Dv/Dt − v ∇ ⋅ u ) = 0;

now the term between parentheses is identically zero according to the conservation of mass, so the Euler energy equation becomes simply:

Ds/Dt = 0.

For a thermodynamic fluid, the compressible Euler equations are consequently best written as:

Euler equations
(convective form, for a thermodynamic
system)

where:

v is the specific volume,
u is the flow velocity vector,
s is the specific entropy.
In the general case, and not only in the incompressible case, the energy equation means that for an inviscid thermodynamic fluid the specific entropy is constant along the flow lines, even in a time-dependent flow. Based on the mass conservation equation, one can put this equation in the conservation form:[8]

meaning that for an inviscid nonconductive flow a continuity equation holds for the entropy.
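The conservation-form display is missing here; combining Ds/Dt = 0 with the mass equation gives the usual entropy continuity equation (a reconstruction under standard notation, not verbatim from the source):

\[
\frac{\partial(\rho s)}{\partial t} + \nabla\cdot(\rho s\,\mathbf{u})
= s\left[\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho \mathbf{u})\right]
+ \rho\left[\frac{\partial s}{\partial t} + \mathbf{u}\cdot\nabla s\right]
= 0 ,
\]

since the first bracket vanishes by mass conservation and the second is ρ Ds/Dt = 0.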
On the other hand, the two second-order partial derivatives of the specific internal energy in the momentum equation require the specification of the fundamental equation of state of the material considered, i.e. of the specific internal energy as a function of the two variables specific volume and specific entropy:

e = e(v, s).

The fundamental equation of state contains all the thermodynamic information about the system (Callen,
1985),[9] exactly like the couple of a thermal equation of state together with a caloric equation of state.

Conservation form
The Euler equations in the Froude limit are equivalent to a single conservation equation with conserved
quantity and associated flux respectively:
where:

ρu is the momentum density, a conservation variable;

the total energy density (total energy per unit volume) is the other conserved quantity.

Here the conserved-quantity vector has length N + 2 and the flux has size N(N + 2).[b] In general (not only in the Froude limit) the Euler equations
are expressible as:

Euler equation(s)
(original conservation or Eulerian form)

where ρg is the force density, a conservation variable.

We remark that the Euler equations, even when conservative (no external field, Froude limit), have no Riemann invariants in general.[10] Some further assumptions are required.

However, we already mentioned that for a thermodynamic fluid the equation for the total energy density is
equivalent to the conservation equation:

Then the conservation equations in the case of a thermodynamic fluid are more simply expressed as:

Euler equation(s)
(conservation form, for thermodynamic
fluids)

where ρs is the entropy density, a thermodynamic conservation variable.

Another possible form for the energy equation, being particularly useful for isobarics, is:

where is the
total enthalpy density.

Quasilinear form and characteristic equations
Expanding the fluxes can be an important part of constructing numerical solvers, for example by exploiting
(approximate) solutions to the Riemann problem. In regions where the state vector y varies smoothly, the
equations in conservative form can be put in quasilinear form:

where the matrices A_i are called the flux Jacobians, defined as the Jacobian matrices of the flux components with respect to the state vector y. Obviously this Jacobian does not exist in discontinuity regions (e.g. contact discontinuities, shock waves in inviscid nonconductive flows). If the flux Jacobians A_i are not functions of the state vector y, the equations turn out to be linear.

Characteristic equations
The compressible Euler equations can be decoupled into a set of N+2 wave equations that describe sound in an Eulerian continuum if they are expressed in characteristic variables instead of conserved variables.

In fact the tensor A is always diagonalizable. If the eigenvalues (the case of the Euler equations) are all real, the system is said to be hyperbolic, and physically the eigenvalues represent the speeds of propagation of information.[11] If they are all distinct, the system is said to be strictly hyperbolic (this will be proved to be the case for the one-dimensional Euler equations). Furthermore, diagonalisation of the compressible Euler equations is easier when the energy equation is expressed in the variable entropy (i.e. with the equations for thermodynamic fluids) than in other energy variables. This will become clear by considering the 1D case.

If p_i is the right eigenvector of the matrix A corresponding to the eigenvalue λ_i, by building the projection matrix:

P = [ p_1, p_2, ..., p_{N+2} ],

one can finally find the characteristic variables as:

w = P⁻¹ y.

Since A is constant, multiplying the original 1-D equation in flux-Jacobian form with P⁻¹ yields the characteristic equations:[12]

∂w_i/∂t + λ_i ∂w_i/∂x = 0.
The original equations have been decoupled into N+2 characteristic equations, each describing a simple wave, with the eigenvalues being the wave speeds. The variables w_i are called the characteristic variables and are a subset of the conservative variables. The solution of the initial value problem in terms of characteristic variables is finally very simple. In one spatial dimension it is:

w_i(x, t) = w_i(x − λ_i t, 0).

Then the solution in terms of the original conservative variables is obtained by transforming back:

y = P w;

this computation can be made explicit as the linear combination of the eigenvectors:

y(x, t) = Σ_i w_i(x − λ_i t, 0) p_i.
Now it becomes apparent that the characteristic variables act as weights in the linear combination of the Jacobian eigenvectors. The solution can be seen as a superposition of waves, each of which is advected independently without change in shape. Each i-th wave has shape w_i p_i and speed of propagation λ_i. In the following we show a very simple example of this solution procedure.
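A minimal numerical sketch of this characteristic-variable solution procedure (added; it uses a toy constant 2×2 hyperbolic system rather than the actual Euler flux Jacobian):

import numpy as np

# Sketch of the characteristic decomposition for a constant-coefficient hyperbolic
# system y_t + A y_x = 0 (a toy 2x2 example, not the full Euler flux Jacobian).
A = np.array([[0.0, 4.0],
              [1.0, 0.0]])
lam, P = np.linalg.eig(A)        # wave speeds and right eigenvectors (columns of P)

y0 = lambda x: np.array([np.exp(-x**2), 0.0])   # smooth initial state
w0 = lambda x: np.linalg.solve(P, y0(x))        # characteristic variables w = P^{-1} y

def y(x, t):
    # each characteristic variable is simply advected: w_i(x, t) = w_i(x - lam_i * t, 0)
    w = np.array([w0(x - lam[i] * t)[i] for i in range(len(lam))])
    return P @ w                                 # transform back: y = P w

# Finite-difference check that y_t + A y_x is (approximately) zero at a sample point:
x, t, h = 0.3, 0.5, 1e-5
residual = (y(x, t + h) - y(x, t - h)) / (2 * h) + A @ (y(x + h, t) - y(x - h, t)) / (2 * h)
print(np.round(residual, 6))     # ~[0, 0]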

Waves in 1D inviscid, nonconductive thermodynamic fluid
If one considers Euler equations for a thermodynamic fluid with the two further assumptions of one spatial
dimension and free (no external field: g = 0):
