Differential Equations
Winfried Fakler
IAKS, Universität Karlsruhe, [email protected]
This article presents the new domain of linear ordinary differential operators
and shows in a few examples how it works. Furthermore, the algebraic point of
view on ordinary differential equations is introduced in a very informal way.
Using such tools makes it possible to develop general algorithms for solving
linear equations.
Introduction
In designing algorithms for solving differential equations one necessarily has to look at the (algebraic) structure
of these equations. On the one hand this leads to a classification into special types of differential equations, which
can never be complete. On the other hand it gives a classification into linear and nonlinear, ordinary and partial
differential equations.
A further decision is what kind of solutions one would like to determine: for example, one can search for power
series solutions, formal solutions or closed-form (symbolic) solutions. This article treats symbolically constructible
solutions of linear ordinary differential equations.
Let L(y) = y^(n) + a_(n-1) y^(n-1) + ... + a_1 y' + a_0 y = 0 be such an equation with coefficients a_i in a
differential field k of characteristic zero, for example the field of rational functions. The following classes of
solutions can be distinguished.
Rational solutions.
Rational solutions are functions lying in k. For that class J. Liouville already gave an algorithm in 1833, but only
when k is the rational function field. More general versions are presented by Singer (1991) and Bronstein (1992).
Algebraic solutions.
Algebraic solutions are functions which lie in an algebraic extension of k, i.e. they satisfy an irreducible polynomial
over k. An example for this class is the nested radical

   √( (1 - x)^(1/3) ).
Many renowned mathematicians like Pépin, Fuchs, Klein and Jordan searched for an algorithm for algebraic
solutions. Today there exists an algorithm due to Singer, but it is far from being satisfactory.
Liouvillian solutions.
Liouvillian solutions are functions which can be constructed from the rational functions by successively adjoining
nested algebraic functions, integrals and exponentials of integrals. An example of such a construction is

   x → √x → exp(∫ √x dx).

This class of functions also includes the trigonometric functions and logarithms. In principle it comprises nearly
all closed-form functions. Important exceptions are the Bessel functions (and other special functions).
Exponential solutions.
Exponential solutions are functions whose logarithmic derivative lies in k; if y is a solution, then y'/y is its
logarithmic derivative. An example is

   y = exp(x^3),

since y'/y = 3x^2 lies in k. For this article, exponential functions form the most important subclass of liouvillian
functions: without an algorithm for them it is not possible to give an algorithm for liouvillian solutions.
Fortunately, there are algorithms for exponential solutions. The very first one stems from Beke in 1894.
Assigning a function to one of these classes is not always unique: e.g. the function y = √x can be attached either
to the algebraic functions or to the exponential ones, since y'/y = 1/(2x) ∈ k. Furthermore, it can be hard
to decide for a given function whether it is liouvillian at all and how one could find the simplest construction.
The ring of linear ordinary differential operators k[D] is presented in MuPAD as the domain constructor
LinearOrdinaryDifferentialOperator. From the mathematical point of view, linear differential operators
generate a left skew polynomial ring of derivation type. The elements of such a ring are called skew polynomials or
Ore polynomials. For Ore polynomials the usual polynomial addition holds; only the multiplication is different.
It is declared as the extension of the rule

   Da = aD + a'   for a ∈ k

to arbitrary Ore polynomials. Multiplication of Ore polynomials is in fact operator composition. Therefore, k[D]
is not a commutative ring, and one has to distinguish a left and a right division. Indeed, there exists an extended
Euclidean algorithm, and for any two nontrivial elements one can determine a smallest nontrivial common left multiple.
This is the so-called Ore condition, i.e. skew polynomial rings are left Ore rings. Rings of linear ordinary
differential operators are even left and right Ore rings.
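To make the composition rule concrete, here is a small sketch in plain Python/SymPy (not MuPAD; the helper names
skew_mul and apply_D are made up for illustration). An operator a_0 + a_1 D + ... + a_n D^n is stored as its
coefficient list, and products are formed by repeatedly applying Da = aD + a'; the example reproduces the product
A*B from the session below.

    # Illustrative sketch (SymPy, not MuPAD): multiplication of skew polynomials.
    import sympy as sp

    x = sp.symbols('x')

    def apply_D(op):
        """Compose D with op: D o (sum b_j D^j) = sum (b_j' D^j + b_j D^(j+1))."""
        res = [sp.diff(op[0], x)]
        res += [sp.diff(op[j], x) + op[j - 1] for j in range(1, len(op))]
        res.append(op[-1])
        return res

    def skew_mul(A, B):
        """Product A o B of two operators given as coefficient lists [a_0, ..., a_n]."""
        result = [sp.S.Zero] * (len(A) + len(B) - 1)
        for i, a in enumerate(A):          # contribution of the term a * D^i
            term = list(B)
            for _ in range(i):             # shift B past D^i using D*a = a*D + a'
                term = apply_D(term)
            for j, c in enumerate(term):
                result[j] += a * c
        return [sp.expand(c) for c in result]

    A = [x + 1, sp.sin(x), sp.S.One]       # Df^2 + sin(x)*Df + (x + 1)
    B = [x, sp.S.One]                      # Df + x
    print(skew_mul(A, B))
    # coefficients of Df^0..Df^3 (up to term ordering):
    #   x + sin(x) + x**2,  x + x*sin(x) + 3,  x + sin(x),  1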
Starting from this mathematical background the category constructor UnivariateSkewPolynomialCat was
created, in which all operations for univariate skew polynomials that are independent of the representation are
already implemented. For a concrete representation the following domain hierarchy was created:
BaseDomain
↓
PolynomialExplicit
↓
UnivariatePolynomial
↓
UnivariateSkewPolynomial
↓
LinearOrdinaryDifferentialOperator
In this way, building on polynomials, it was possible to implement the new domains in a reasonable time. Only the
polynomial multiplication needed to be overloaded with the new noncommutative multiplication, and all the resulting
left and right operations had to be implemented. Altogether this is an example of the advantages of the domain
constructor concept. Based on the domain UnivariateSkewPolynomial, it would be possible to generate,
besides the domain LinearOrdinaryDifferentialOperator, e.g. a domain for linear ordinary difference
operators without implementing all operations once more.
To create a LODO domain one has to choose a variable for the operator, e.g. Df, a variable with respect to which
one wants to differentiate, and optionally a coefficient field or ring of characteristic zero from the domains package.
Note that in MuPAD, D is not available as a variable, since it is predefined as an operator. A LODO domain can be
created for example with the call
>> EF := Dom::ExpressionField(normal):
>> lodo := Dom::LinearOrdinaryDifferentialOperator(Df,x,EF);
There are several possibilities for generating differential operators: besides an expression in Df as in the following
calls, the same operator can also be given by a vector of coefficients or by a differential equation.
>> A:=lodo(Dfˆ2+sin(x)*Df+x+1);
2
Df + sin(x) Df + (x + 1)
>> B:=lodo(Df+x);
Df + x
>> P:=A*B;
3 2 2
Df + (x + sin(x)) Df + (x + x sin(x) + 3) Df + (x + sin(x) + x )
That this product is really an operator composition, that the defined multiplication satisfies the rule above and that
it is noncommutative can be tested, for example, with the two products of Df and x: Df·x yields x Df + 1, while x·Df
stays x Df. The option Unsimplified is necessary for such tests, since the function func_call of the LODO domain is
primarily intended for the zero test of solutions. Naturally it is possible to evaluate operators, but there are
internal manipulations which leave the solution space unchanged while changing the resulting expression; the given
option suppresses them. It is also possible for example to compute a right division.
>> t:=P::rightDivide(P,A);
table(
remainder = (- cos(x) + 2 ) Df + (sin(x) - 1),
quotient = Df + x
)
>> t[quotient]*A+t[remainder];
3 2 2
Df + (x + sin(x)) Df + (x + x sin(x) + 3) Df + (x + sin(x) + x )
In short, a LODO domain contains among other things the operations left/right{Divide, Quotient, Remainder, Gcd,
Lcm, ExtendedEuclid} and allows one to determine the adjoint operator,
>> P::adjoint(P);
3 2
- Df + (x + sin(x)) Df + (- x + 2 cos(x) - x sin(x) - 1 ) Df +
2
(x - sin(x) - x cos(x) + x - 1)
symmetric powers (symmetricPower) of an operator and, currently in a limited form, factorizations and zeros or
solutions of operators with rational function coefficients. To demonstrate this, here is an example:
>> L:=lodo(Dfˆ4+(2*x-1)/(2*x*(x-1))*Dfˆ3+(143*x-147)/(784*xˆ2*(x-1))*Dfˆ2\
&> +(-18*x+21)/(32*xˆ3*(x-1))*Df+(2349*x-2940)/(3136*xˆ4*(x-1)));
4 / 2 x - 1 \ 3 / 143 x - 147 \ 2
Df + | ------------- | Df + | ------------------ | Df +
| 2 | | 2 3 |
\ - 2 x + 2 x / \ - 784 x + 784 x /
/ - 18 x + 21 \ 2349 x - 2940
| ---------------- | Df + --------------------
| 3 4 | 4 5
\ - 32 x + 32 x / - 3136 x + 3136 x
>> Factor(L);
/ 2 / 2 x - 1 \ 1 \ / 1 \
| Df + | ------------- | Df - ----------------- | | Df + --- |
| | 2 | 2 | \ 4 x /
\ \ - 2 x + 2 x / - 196 x + 196 x /
/ 1 \
| Df - --- |
\ 4 x /
An operator which decomposes into factors is called reducible; if it is not reducible, it is called irreducible. Cur-
rently, the function Factor can only find left and right factors of degree 1, which means that a decomposition
into irreducible factors is guaranteed only for operators up to third degree. Nevertheless, it is possible to find de-
compositions of higher degree operators. The situation for computing liouvillian zeros is quite similar: finding
all liouvillian zeros can currently only be guaranteed for second degree operators and for reducible operators of third
degree. But one can also find liouvillian zeros of higher degree operators.
>> sols:=L::liouvillianZeros(L):
>> map(sols,combine@simplify@expand@eval);
>> map(%,L);
{0}
The last call shows that the determined functions are in fact zeros of the operator.
For readers who want to know more about Ore polynomials and linear ordinary differential operators we refer to
Bronstein and Petkovšek [2] and Ore [6]. The currently most efficient method for factoring differential operators
is described in van Hoeij [11]. Information about the domains package can be found in the online help system of
MuPAD.
An Algebraic Algorithm
Algorithms for computing liouvillian solutions of second order linear differential equations were already developed
at the end of the last century. The first complete and implemented algorithm for second order equations stems from
Kovacic in 1977, see [5]. A considerable simplification and a much more efficient variant of this algorithm was
recently presented by Ulmer and Weil [10]. Already in 1981 Singer gave an algorithm computing liouvillian
solutions for equations of arbitrary order. Unfortunately, this algorithm has such an enormous complexity that it
has not even been implemented for second order equations. A new development by Singer and Ulmer [8] could close
this gap.
All modern solution procedures are in principle based on differential Galois theory. Because of the vector space
structure of the solution set, in this theory one associates a linear group to every linear differential equation. From
this group it is possible to draw conclusions about the solutions. For example, it is known that if a linear
differential equation has a liouvillian solution, then it also has a liouvillian solution y whose logarithmic
derivative y'/y is algebraic of bounded degree. On the basis of this theorem of Singer, one can reduce the problem of
finding liouvillian solutions to the problem of finding a minimal polynomial, thus an algebraic equation; hence, it
is an algebraic problem. The considerably higher efficiency of the Ulmer-Weil algorithm compared with Kovacic's
algorithm comes from a different way of calculating the coefficients of the minimal polynomial: in the Ulmer-Weil
algorithm they are simply determined via a recursion.
Even when the Galois group of an irreducible differential equation is finite, one can determine a minimal polyno-
mial of a solution. But for this the Galois group must be explicitly known. For finite primitive groups, L. Fuchs
developed such a method for second order equations in the years 1875 to 1878. In Singer and Ulmer [7] this
method was extended to higher order equations. Minimal polynomials for all finite groups are given in Fakler [3].
For the reader who wants to know more about differential Galois theory we refer to Kaplansky [4] and to the
references in the articles cited. Figure 1 gives an overview of the methods for second order differential
equations.
In the following we outline the method given in Fakler [3] for second order differential equations and we show
how one can use it together with a factorization to compute liouvillian solutions for many higher order equations.
Let

   L(y) = y'' + a1 y' + a0 y = 0,   a0, a1 ∈ k(x),

be a differential equation with rational function coefficients over a field k of characteristic 0. This is satisfied
e.g. for k = Q. Further, let {y1, y2} be a fundamental set of solutions of L(y) = 0 and G(L) its Galois group.
One can imagine it as a group of 2×2 matrices, i.e. G(L) is a subgroup of the general linear group GL(2, k).
In order to use the powerful tools from Galois theory one has to take care that the Galois group of the equation is
unimodular, which means that the determinants of all its matrices are 1. Then the Galois group is a subgroup of
the special linear group SL(2, k). This is necessary, since only these groups are completely known. However, it is
not a problem, because e.g. with the transformation y = z exp(-(1/2) ∫ a1) we can always guarantee it. For that
reason, we assume from now on that G(L) is unimodular.
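A short check of this transformation: writing y = z exp(-(1/2) ∫ a1) and collecting terms gives

   y'' + a1 y' + a0 y = exp(-(1/2) ∫ a1) · ( z'' + (a0 - a1^2/4 - a1'/2) z ),

so L(y) = 0 turns into z'' + (a0 - a1^2/4 - a1'/2) z = 0. The Wronskian of the transformed equation is constant,
hence fixed by the Galois group, which forces all determinants to be 1.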
One distinguishes whether the Galois group is reducible or imprimitive (which means here that non-zero entries occur
only on the main diagonal or only outside the main diagonal), or whether it is isomorphic to the tetrahedral, the
octahedral or the icosahedral group.
[Figure 1 appears here: an overview of the methods for second order differential equations; among them the approach
of Fuchs and Singer-Ulmer, which determines a minimal polynomial P(Y) = Y^(dm) + p_(m-1) Y^(d(m-1)) + ... + p_0 = 0
of a solution.]
That the Galois group of a second order equation is reducible is equivalent to the existence of an exponential
solution, i.e. of a solution y with y'/y ∈ k.
For distinguishing the remaining cases the so-called symmetric powers of L(y) = 0 are used. The symmetric power
L⊛m(y) = 0 is a differential equation constructible from L(y) = 0 with the property that all mth power products
of solutions of L(y) = 0 are solutions of it. The second symmetric power L⊛2(y) = 0, for example, has the solution
space spanned by {y1^2, y1 y2, y2^2}; it is written out explicitly after the following case distinction. Rational
solutions of the symmetric powers L⊛m(y) = 0 correspond to homogeneous (polynomial) invariants of degree m of G(L).
Since the invariants of all the possible Galois groups are known, it is possible to distinguish the following cases
for irreducible second order equations. If L⊛m(y) = 0 has a nontrivial rational solution for

   m = 4: G(L) is imprimitive,
   m = 6 (m ≠ 4): G(L) = tetrahedral group,
   m = 8 (m ≠ 4, 6): G(L) = octahedral group,
   m = 12 (m ≠ 4, 6, 8): G(L) = icosahedral group,
   no m (m ≠ 4, 6, 8, 12): G(L) = SL(2, k).
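As announced above, the second symmetric power can be written out explicitly: differentiating y = y1 y2 three times
and eliminating y1' y2' with the help of L(y1) = L(y2) = 0 gives

   L⊛2(y) = y''' + 3 a1 y'' + (2 a1^2 + a1' + 4 a0) y' + (4 a0 a1 + 2 a0') y = 0,

a third order equation with solution space spanned by {y1^2, y1 y2, y2^2}; for a1 = 0 it reduces to
y''' + 4 a0 y' + 2 a0' y = 0.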
Moreover, one obtains from this a beautiful criterion for when an irreducible second order differential equation with
unimodular Galois group has liouvillian solutions, namely if and only if its twelfth symmetric power has a nontrivial
rational solution.
In the imprimitive case the fourth symmetric power L⊛4(y) = 0 has a rational solution r (in one case there is more
than one), and the solutions of L(y) = 0 have the form y = r^(1/4) exp(∫ ±(W/2) √(C/r) dx), where W denotes the
Wronskian of y1, y2 and the constant C is determined by the equation

   (4 r'' r - 3 (r')^2)/(16 r^2) + (W^2/(4r)) C + (r'/(4r)) a1 + a0 = 0.
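A short derivation of this condition under the same conventions: the logarithmic derivative
θ = y'/y = r'/(4r) + v with v = ±(W/2) √(C/r) must satisfy the Riccati equation θ' + θ^2 + a1 θ + a0 = 0
associated with L(y) = 0. Expanding,

   θ' + θ^2 + a1 θ + a0 = (4 r'' r - 3 (r')^2)/(16 r^2) + (W^2/(4r)) C + (r'/(4r)) a1 + a0 + ( v' + (r'/(2r)) v + a1 v ),

and the last bracket vanishes because v'/v = W'/W - r'/(2r) and W' = -a1 W. What remains is exactly the determining
equation for C stated above.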
Unfortunately, for the rest of the cases there are no direct solution formulae. However, it remains possible to
compute the minimal polynomial of a solution via solution formulae. Recall that in the Ulmer-Weil algorithm
the coefficients of the minimal polynomial of the logarithmic derivative of a solution are determined by recursion;
here the coefficients are computed by determining a constant. With the tetrahedral group we demonstrate how this
procedure works; for the octahedral and the icosahedral group the process is similar.
In a precomputation one determines a minimal polynomial for the tetrahedral group, decomposed into invariants:

   Y^24 + 10 I2 Y^16 + 5 I3 Y^12 - 15 I2^2 Y^8 - I2 I3 Y^4 + I1^4.
The invariant I1 corresponds, up to a constant factor, to a rational solution r of the sixth symmetric power
L⊛6(y) = 0. Since solutions of differential equations are only unique up to a constant, one has to determine a
constant c such that 4 I1 = c r. To do this, one can use the syzygy between the invariants, from which one gets the
determining equation

   (25 J(r, H(r))^2 + 64 H(r)^3) c^2 + 108·10^6 r^4 = 0,
where

   H(f) = (m - 1) W^(-2) ( m f f'' - (m - 1)(f')^2 + m a1 f f' + m^2 a0 f^2 )

denotes the Hessian covariant of a form f of degree m built from the solutions of L(y) = 0 (W the Wronskian of
y1, y2; here m = 6), and J(f, g) = W^(-1) (m f g' - n g f') the Jacobian covariant of forms f and g of degrees
m and n (cf. [3]).
Figure 2 contains a summary of the method just presented. As an application, we now solve the famous example of
Kovacic.
Example.
The second order differential equation

   L(y) = y'' + ( 3/(16 x^2) + 2/(9 (x - 1)^2) - 3/(16 x (x - 1)) ) y = 0

possesses no exponential solution and its fourth symmetric power L⊛4(y) = 0 has no nontrivial rational solution.
But then L⊛6(y) = 0 possesses the rational solution r = x^2 (x - 1)^2. This means that the Galois group G(L) is
the tetrahedral group. One computes

   H(r) = (25/4) x^2 (x - 1)^3   and   J(r, H(r)) = -(25/2) x^3 (x - 1)^4 (x - 2).
From that, one gets the determining equation

   (c^2 + 27648) x^16 - (8 c^2 + 221184) x^15 + (28 c^2 + 774144) x^14 - (56 c^2 + 1548288) x^13
   + (70 c^2 + 1935360) x^12 - (56 c^2 + 1548288) x^11 + (28 c^2 + 774144) x^10 - (8 c^2 + 221184) x^9
   + (c^2 + 27648) x^8 = 0.
[Figure 2 appears here: for each of the cases m = 6, 8 and 12 it lists the determining equation for the constant c
in terms of r, H(r) and J(r, H(r)), together with the resulting expressions for the invariants I1, I2 (and I3),
which are then substituted into the corresponding minimal polynomial P(Y).]
Since the left-hand side equals (c^2 + 27648) x^8 (x - 1)^8, this reduces to c^2 + 27648 = 0. Hence, c = 96 √(-3).
Substituting
   I1 = (1/4) c r = 24 √(-3) x^2 (x - 1)^2,
   I2 = (1/400) c^2 H(r) = -432 x^2 (x - 1)^3,
   I3 = (1/3200) c^3 J(r, H(r)) = 10368 √(-3) x^3 (x - 1)^4 (x - 2)
in the minimal polynomial decomposed into invariants, we obtain the desired minimal polynomial of a solution:

   P(Y) = Y^24 - 4320 x^2 (x - 1)^3 Y^16 + 51840 √(-3) x^3 (x - 1)^4 (x - 2) Y^12 - 2799360 x^4 (x - 1)^6 Y^8
          + 4478976 √(-3) x^5 (x - 1)^7 (x - 2) Y^4 + 2985984 x^8 (x - 1)^8.
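The numbers in this example can be checked mechanically. The following sketch (SymPy, not MuPAD) implements the
covariant formulas as reconstructed above, with the Wronskian normalized to W = 1, which is the normalization
implicitly used in the example:

    import sympy as sp

    x  = sp.symbols('x')
    a0 = sp.Rational(3, 16)/x**2 + sp.Rational(2, 9)/(x - 1)**2 - sp.Rational(3, 16)/(x*(x - 1))
    a1 = sp.S.Zero
    W  = sp.S.One                    # Wronskian, normalized to 1 (assumption)
    m  = sp.Integer(6)               # r is a solution of the 6th symmetric power
    r  = x**2*(x - 1)**2

    def H(f):                        # Hessian covariant, as given above
        return sp.cancel((m - 1)/W**2*(m*f*sp.diff(f, x, 2) - (m - 1)*sp.diff(f, x)**2
                                       + m*a1*f*sp.diff(f, x) + m**2*a0*f**2))

    def J(f, g, n):                  # Jacobian covariant of f (degree m) and g (degree n)
        return sp.cancel((m*f*sp.diff(g, x) - n*g*sp.diff(f, x))/W)

    Hr = H(r)                        # = 25/4 * x**2 * (x - 1)**3
    Jr = J(r, Hr, 8)                 # = -25/2 * x**3 * (x - 1)**4 * (x - 2)
    c2 = sp.cancel(-108*10**6*r**4/(25*Jr**2 + 64*Hr**3))   # from the determining equation
    print(sp.factor(Hr), sp.factor(Jr), c2)                 # c2 = -27648, i.e. c = 96*sqrt(-3)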
Solutions of the minimal polynomial P(Y) = 0 are also solutions of the differential equation L(y) = 0. From
Galois theory one knows that for solvable groups the solutions can be represented in radicals (nested root
expressions). Therefore, in the cases of the tetrahedral or the octahedral group it would be possible to compute
radical expressions for the solutions. But such a calculation has an enormous complexity, and no implementation in
any computer algebra system performing this operation is known to the author. In the icosahedral case, however, a
solution can only be represented implicitly as a solution of its minimal polynomial.
With this algorithm one can even find solutions of higher order reducible equations. For this, one has to factorize
the associated operator and to solve the factors beginning from the right. With the method of variation of parameters,
which was introduced by Lagrange, one can then construct solutions of the equation from solutions of the factors. An
example of this was already given in the previous section. Unfortunately, the decomposition into factors is not
unique. To determine in this way as many liouvillian solutions as possible (ideally all of them), one can compute
from a given factorization all the other possible factorizations with the algorithm of Tsarev [9]. With a complete
implementation of factoring and e.g. an implementation of the algorithm of Singer and Ulmer [8], one could then find
all liouvillian solutions.
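For a factorization into first order factors this reduction is completely explicit; a short sketch (standard
variation of constants, written here for a product of two first order factors): if L = (D - a)(D - b), then
y1 = exp(∫ b) solves the right factor and hence L(y1) = 0. A second solution is obtained by solving (D - b)y = z
with (D - a)z = 0, i.e. z = exp(∫ a), which by variation of constants gives

   y2 = exp(∫ b) · ∫ exp(∫ (a - b)) dx.

Solving the factors from right to left in this way, one inhomogeneous first order equation at a time, yields
liouvillian solutions of the full equation whenever the factors themselves can be solved.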
I thank Frank Postel for his invitation to present my implementations and results from [3] here, Eckhard Pflügel
for implementing the algorithm for exponential solutions, Paul Zimmermann for including my algorithms into the
ODE solver and for many helpful discussions, Ralf Hillebrand for including some of my special wishes into the
MuPAD system and for his help in solving problems and last but not least Werner Seiler for proofreading this
article.
References
[1] Barkatou, M. (1997). An Efficient Algorithm for Computing Rational Solutions of Systems of Linear Differ-
ential Equations. Preprint.
[2] Bronstein, M., Petkovšek, M. (1996). An introduction to pseudo-linear algebra. Theor. Comp. Science 157,
No. 1.
[3] Fakler, W. (1997). On second order homogeneous linear differential equations with Liouvillian solutions.
Theor. Comp. Science 187 (1-2), 27-48.
[4] Kaplansky, I. (1957). Introduction to differential algebra. Paris: Hermann.
[5] Kovacic, J. (1986). An algorithm for solving second order linear homogeneous differential equations. J. Symb.
Comp. 2, 3-43.
[6] Ore, O. (1933). Theory of non-commutative polynomials. Ann. of Math. 34, 480-508.
[7] Singer, M.F., Ulmer, F. (1993). Liouvillian and Algebraic Solutions of Second and Third Order Linear Differ-
ential Equations. J. Symb. Comp. 16, 37-73.
[8] Singer, M.F., Ulmer, F. (1996). Linear Differential Equations and Products of Linear Forms. Preprint. To
appear in J. Pure and Applied Algebra.
[9] Tsarev, S.P. (1996). An Algorithm for Complete Enumeration of All Factorizations of a Linear Ordinary
Differential Operator. In: Proceedings of ISSAC’96, 226-231.
[10] Ulmer, F., Weil, J.A. (1996). Note on Kovacic’s Algorithm. J. Symb. Comp. 22, 179-200.
[11] van Hoeij, M. (1996). Factorization of Linear Differential Operators. PhD thesis, University of Nijmegen.
[12] Zwillinger, D. (1992). Handbook of differential equations. 2nd ed. San Diego: Academic Press.