MEC503 Computer Techniques February 2, 2022

Lecture 2: PARTIAL DIFFERENTIAL EQUATIONS


Younis Najim, Dept. of Mechanical Engineering, University of Mosul

2.1 ACCURACY AND PRECISION


The errors associated with both calculations and measurements can be characterized with regard to their
accuracy and precision.
★ Accuracy: refers to how closely a computed or measured value agrees with the true value.
★ Precision: refers to how closely individual computed or measured values agree with each other.
Numerical methods should be sufficiently accurate or unbiased to meet the requirements of a particular
problem. They also should be precise enough for adequate design.
We will use the collective term error to represent both the inaccuracy and imprecision of our predictions.

2.1.1 ERRORS

1. Derivative approximation: To solve the problem with a computer, you had to approximate the derivative
of velocity with a finite difference:
∂v/∂t ≈ Δv/Δt = [v(t_{i+1}) − v(t_i)] / [t_{i+1} − t_i]
Thus, the resulting solution is not exact; that is, it has error. The relationship between the exact, or true,
result and the approximation can be formulated as: True value = approximation + error.
The true relative error is:
ε_t = (true value − approximation) / true value × 100
In real-world applications, we will obviously not know the true answer a priori. For such cases, the error is
often estimated as the difference between the previous and present approximations:

ε_a = (present approximation − previous approximation) / present approximation × 100

When |ε_a| falls below a prespecified tolerance ε_s, the stopping criterion |ε_a| < ε_s is satisfied and
the result is assumed to be within the prespecified acceptable level (see the sketch after this list).
2. Digital computer: the computer you use to obtain the solution is also an imperfect tool. Because it is a
digital device, the computer is limited in its ability to represent the magnitudes and precision of numbers.
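
As an illustration of the stopping test above (not from the original notes), here is a minimal MATLAB sketch that adds terms of the Maclaurin series for e^x until |ε_a| drops below ε_s; the point x = 0.5 and the tolerance es = 0.05% are arbitrary choices:

% Minimal sketch of the stopping criterion |ea| < es (illustrative values).
x  = 0.5;              % point at which the series is evaluated (arbitrary choice)
es = 0.05;             % prespecified tolerance in percent (arbitrary choice)
approx = 1;            % zero-order term of the Maclaurin series for exp(x)
ea = 100;              % initialize the approximate error so the loop starts
n  = 0;
while abs(ea) >= es
    n = n + 1;
    previous = approx;
    approx = approx + x^n/factorial(n);     % add the next series term
    ea = (approx - previous)/approx*100;    % approximate relative error, percent
end
fprintf('exp(%.2f) ~ %.8f after %d terms, |ea| = %.4f%%\n', x, approx, n, abs(ea))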

Example 2.1.1 Write a computer program to find the true relative error for:

sin(x) = x − x^3/3! + x^5/5! − ... ± x^n/n!

When n=5, 7, 9, 11, 13, 15, 17.
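
One possible MATLAB sketch for this example (the evaluation point x = π/6 is an arbitrary choice, since the example does not specify one):

% True relative error of the truncated Maclaurin series for sin(x).
x  = pi/6;                       % evaluation point (arbitrary choice)
Tv = sin(x);                     % true value
for n = 5:2:17                   % highest-order term kept: n = 5, 7, ..., 17
    k = 1:2:n;                   % odd powers retained in the series
    approx = sum((-1).^((k-1)/2).*x.^k./factorial(k));   % x - x^3/3! + x^5/5! - ...
    et = abs((Tv - approx)/Tv)*100;                       % true relative error, percent
    fprintf('n = %2d   approx = %.10f   et = %.2e %%\n', n, approx, et)
end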

2.1.2 ROUNDOFF ERRORS
Roundoff errors arise because digital computers cannot represent some quantities exactly. They can lead to
unstable and obviously erroneous results. There are two major facets of roundoff errors:
1. Digital computers have magnitude and precision limits on their ability to represent numbers.
2. Certain numerical manipulations are highly sensitive to roundoff errors. This can result from both
mathematical considerations as well as from the way in which computers perform arithmetic operations.

2.1.2.1 Computer Number Representation

Roundoff errors are directly related to the way in which numbers are stored in a computer. The fundamental
unit whereby information is represented is called a word which consists of a string of binary digits, or bits.
Numbers are typically stored in one or more words.
A number system is merely a convention for representing quantities. Because we have 10 fingers, the
number system that we are most familiar with is the decimal, or base-10, number system. For example,
if we have the number 8642.9, it can be represented in the decimal number system (also called positional
notation) as: (8 × 10^3) + (6 × 10^2) + (4 × 10^1) + (2 × 10^0) + (9 × 10^-1) = 8642.9
The primary logic units of digital computers are on/off electronic components. Hence, numbers on the
computer are represented with a binary, or base-2, system. For example, the binary number 101.1 is:
(1 × 2^2) + (0 × 2^1) + (1 × 2^0) + (1 × 2^-1) = 4 + 0 + 1 + 0.5 = 5.5 in the decimal system.
For integer numbers such as 10101101 (no fractions): 2^7 + 2^5 + 2^3 + 2^2 + 2^0 = 128 + 32 + 8 + 4 + 1 = 173
Therefore, the binary equivalent of −173 would be stored on a 16-bit computer using the signed-magnitude convention: a sign bit (1 for a negative number) followed by the 15-bit magnitude.
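
A quick MATLAB check of this layout, assuming the signed-magnitude convention just described (one sign bit followed by a 15-bit magnitude):

% Signed-magnitude layout of -173 on a 16-bit word (sign bit + 15 magnitude bits).
magnitude = dec2bin(173, 15);     % '000000010101101'
word = ['1' magnitude];           % leading 1 marks the negative sign
disp(word)                        % 1000000010101101
check = bin2dec(magnitude)        % returns 173, confirming the magnitude bits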

Floating-Point Representation
Fractional quantities are typically represented in computers using floating-point format. In this approach,
which is very much like scientific notation, the number is expressed as:

±s × b^e

where s = the significand (or mantissa), b = the base of the number system being used, and e = the exponent.
For example, 0.005678 = 5.678 × 10^-3, which eliminates the useless zeros.
★ Large positive and negative numbers that fall outside the range would cause an overflow error.
★ In a similar sense, there is a “hole” at zero, and very small quantities would usually be converted
to zero (an underflow error).
2-3
For example, a simple rational number with a finite number of digits like 2^-5 = 0.03125 would have to be
stored, on a hypothetical machine carrying only two significant decimal digits, as 3.1 × 10^-2 = 0.031:
roundoff error = (0.03125 − 0.031)/0.03125 = 0.008
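
The following MATLAB sketch reports the double-precision limits behind the overflow and “hole at zero” behavior, and repeats the roundoff computation above; the limit values are properties of IEEE double precision, shown only for illustration:

% Range and precision limits of double-precision floating-point numbers.
largest  = realmax     % overflow threshold, about 1.7977e+308
smallest = realmin     % underflow ("hole at zero") threshold, about 2.2251e-308
epsilon  = eps         % machine epsilon, about 2.2204e-16
% Relative roundoff error of storing 2^-5 with a two-digit mantissa, as above:
err = (0.03125 - 0.031)/0.03125    % = 0.008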

2.2 TRUNCATION ERRORS


Truncation errors result from using an approximation in place of an exact mathematical procedure.

Example 2.2.1 The approximation of f(x) = −0.1x^4 − 0.15x^3 − 0.5x^2 − 0.25x + 1.2 at x = 1 by zero-order,
first-order, and second-order Taylor series expansions.

The Taylor series is of great value in the study of numerical methods; it states that any smooth function can
be approximated as a polynomial:

f(x + Δx) = Σ_{n=0}^{∞} [(Δx)^n / n!] ∂^n f/∂x^n
          = f(x) + f'(x)Δx + (f''(x)/2!)(Δx)^2 + (f'''(x)/3!)(Δx)^3 + ... + (f^(n)(x)/n!)(Δx)^n + R_n

Zero-order approximation: f(x + Δx) ≈ f(x) + O(Δx)

First-order approximation: f(x + Δx) ≈ f(x) + f'(x)Δx + O((Δx)^2)

Second-order approximation: f(x + Δx) ≈ f(x) + f'(x)Δx + (f''(x)/2!)(Δx)^2 + O((Δx)^3)
where the nomenclature O((Δx)^{n+1}) means that the truncation error is of the order of (Δx)^{n+1}, and a remainder
term is also included to account for all terms from n + 1 to infinity:

R_n = [f^(n+1)(ζ) / (n + 1)!] (Δx)^{n+1}

The subscript n denotes that this is the remainder for the nth-order approximation, and ζ is a value of x that lies somewhere
between x and x + Δx (or x_i and x_{i+1}).

Example 2.2.2 Use Taylor series expansions with n = 0 to 6 to approximate 𝑓 (𝑥) = cos 𝑥 at 𝑥 + Δ𝑥 = 𝜋/3
and 𝑥𝑖 = 𝜋/4 ⇒ Δ𝑥 = 𝜋/3 − 𝜋/4 = 𝜋/12

Zero-order approximation: f(x + Δx) ≈ f(x) ⇒ f(π/3) ≈ cos(π/4) = 0.707106781

ε_t = |(0.5 − 0.707106781)/0.5| × 100 = 41.4%

First-order approximation: f(x + Δx) ≈ f(x) + f'(x)Δx ⇒ f(π/3) ≈ cos(π/4) − sin(π/4)(π/12) = 0.521986659

ε_t = |(0.5 − 0.521986659)/0.5| × 100 = 4.40%

Second-order approximation: f(x + Δx) ≈ f(x) + f'(x)Δx + (f''(x)/2!)(Δx)^2 ⇒
f(π/3) ≈ cos(π/4) − sin(π/4)(π/12) − [cos(π/4)/2!](π/12)^2 = 0.497754

ε_t = |(0.5 − 0.497754)/0.5| × 100 = 0.449%
The MATLAB code:
% This program uses a Taylor series to approximate
% the function cos(x) at xi+1 = pi/3 from xi = pi/4, so Dx = pi/12.
clc; clear; close all
syms x
f = cos(x);                          % function to be approximated
xi = pi/4; xi1 = pi/3; DX = pi/12;   % expansion point, target point, step size
ft(1) = cos(xi);                     % zero-order approximation
Tv = subs(f,x,xi1);                  % true value at xi+1
et(1) = (Tv - ft(1))/Tv*100;         % zero-order true relative error
k(1) = 0;
for n = 1:6
df = diff(f,x,n);                    % nth symbolic derivative
df1 = subs(df,x,xi);                 % derivative evaluated at the expansion point
ft(n+1) = ft(n) + df1*DX^n/factorial(n);   % add the nth-order term
et(n+1) = (Tv - ft(n+1))/Tv*100;           % true relative error in percent
k(n+1) = n;
end
ft = double(ft); et = double(abs(et));
L = [k', ft', et'];
disp('  n order    Taylor app    true error')
disp(L)

2.2.1 ESTIMATE OF NUMERICAL ERROR USING TAYLOR SERIES


The Taylor series is also used to estimate numerical errors.
Suppose that we truncated the Taylor series expansion after the zero-order term to yield:

f(x + Δx) ≈ f(x), with the truncated terms: R_0 = f'(x)Δx + (f''(x)/2!)(Δx)^2 + (f'''(x)/3!)(Δx)^3 + ...

One simplification might be to truncate the remainder itself, as in: R_0 ≈ f'(x)Δx
If the function f(x) and its first derivative are continuous over the interval from x to x + Δx, then there exists
at least one point on the function that has a slope, designated f'(ζ), that is parallel to the line joining
f(x) and f(x + Δx) (the derivative mean-value theorem). This gives the exact remainders:
Zero-order version: R_0 = f'(ζ)Δx

First-order version: R_1 = (f''(ζ)/2!)(Δx)^2

Second-order version: R_2 = (f'''(ζ)/3!)(Δx)^3, and so on.
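
As a small numerical check (not part of the original notes), the zero-order estimate R_0 ≈ f'(x)Δx can be compared with the actual remainder, reusing the cos(x) data of Example 2.2.2:

% Compare the actual zero-order remainder with the estimate f'(x)*Dx for cos(x).
xi = pi/4; DX = pi/12;
R_actual   = cos(xi + DX) - cos(xi);    % true remainder of the zero-order approximation
R_estimate = -sin(xi)*DX;               % first-order estimate of that remainder
fprintf('actual R0 = %.6f, estimated R0 = %.6f\n', R_actual, R_estimate)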

2.3 NUMERICAL DIFFERENTIATION


Calculus is the mathematics of change, and engineers constantly deal with processes that change.

High-accuracy finite-difference formulas can be generated by including additional terms from the Taylor
series expansion. A Taylor series expansion around x = 0 is called a Maclaurin series expansion.
Forward difference approximation of the first derivative

f(x + Δx) = f(x_{i+1}) = f(x_i) + f'(x_i)Δx + (f''(x_i)/2!)(Δx)^2 + ...
→ f'(x_i) = [f(x_{i+1}) − f(x_i)]/Δx − (f''(x_i)/2!)Δx + O((Δx)^2)

Truncating the (f''(x_i)/2!)Δx term gives:

f'(x_i) = [f(x_{i+1}) − f(x_i)]/Δx + O(Δx)    (forward-difference formula)

Backward difference approximation of the first derivative

f(x − Δx) = f(x_{i−1}) = f(x_i) − f'(x_i)Δx + (f''(x_i)/2!)(Δx)^2 − ...
→ f'(x_i) = [f(x_i) − f(x_{i−1})]/Δx + O(Δx)
Centered difference approximation of the first derivative

f(x_{i+1}) = f(x_i) + f'(x_i)Δx + (f''(x_i)/2!)(Δx)^2 + ...
f(x_{i−1}) = f(x_i) − f'(x_i)Δx + (f''(x_i)/2!)(Δx)^2 − ...

Subtracting the second expansion from the first gives:

f'(x_i) = [f(x_{i+1}) − f(x_{i−1})]/(2Δx) + O((Δx)^2)

The truncation error is of order (Δx)^2, so the centered difference is more accurate than the forward and backward differences.
Review examples 4.4 and 4.5 in the textbook.
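
The following MATLAB sketch compares the three first-derivative approximations; the test function cos(x), the point x_i = π/4, and the step Δx = 0.1 are illustrative choices:

% Forward, backward, and centered differences for f'(x), with f(x) = cos(x).
f  = @(x) cos(x);
df = @(x) -sin(x);                     % exact derivative, for comparison
xi = pi/4; dx = 0.1;
fwd = (f(xi+dx) - f(xi))/dx;           % forward difference,  O(dx)
bwd = (f(xi) - f(xi-dx))/dx;           % backward difference, O(dx)
ctr = (f(xi+dx) - f(xi-dx))/(2*dx);    % centered difference, O(dx^2)
exact = df(xi);
fprintf('forward  %.6f  error %.2e\n', fwd, abs(exact-fwd))
fprintf('backward %.6f  error %.2e\n', bwd, abs(exact-bwd))
fprintf('centered %.6f  error %.2e\n', ctr, abs(exact-ctr))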

Forward-difference approximation of the second derivative

f(x_{i+2}) = f(x_i) + f'(x_i)(2Δx) + (f''(x_i)/2!)(2Δx)^2 + ...
f(x_{i+1}) = f(x_i) + f'(x_i)Δx + (f''(x_i)/2!)(Δx)^2 + ...

Subtracting 2 times the second expansion from the first gives:

f''(x_i) = [f(x_{i+2}) − 2f(x_{i+1}) + f(x_i)]/(Δx)^2 + O(Δx)

Backward-difference approximation of the second derivative using similar manipulation as before:

f''(x_i) = [f(x_i) − 2f(x_{i−1}) + f(x_{i−2})]/(Δx)^2 + O(Δx)

And Central-difference approximation of the second derivative

f''(x_i) = [f(x_{i+1}) − 2f(x_i) + f(x_{i−1})]/(Δx)^2 + O((Δx)^2)    (more accurate)

To summarize forward finite-difference formulas: two versions are presented below for each derivative. The
latter version incorporates more terms of the Taylor series expansion and is, consequently, more accurate.

To summarize backward finite-difference formulas: two versions are presented for each derivative. The
latter version incorporates more terms of the Taylor series expansion and is, consequently, more accurate.

Centered finite-difference formulas: two versions are presented for each derivative. The latter version
incorporates more terms of the Taylor series expansion and is, consequently, more accurate.
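
As one concrete instance of the "two versions" idea, the sketch below compares the basic forward difference, which is O(Δx), with the standard higher-accuracy forward formula f'(x_i) ≈ [−f(x_{i+2}) + 4f(x_{i+1}) − 3f(x_i)]/(2Δx), which is O((Δx)^2); the test function f(x) = e^(-x) is an arbitrary choice:

% Basic vs. higher-accuracy forward difference for f'(x), with f(x) = exp(-x).
f  = @(x) exp(-x);
df = @(x) -exp(-x);                                  % exact derivative, for comparison
xi = 1; dx = 0.1;
d1 = (f(xi+dx) - f(xi))/dx;                          % basic forward difference, O(dx)
d2 = (-f(xi+2*dx) + 4*f(xi+dx) - 3*f(xi))/(2*dx);    % higher-accuracy version, O(dx^2)
fprintf('O(dx)    %.6f  error %.2e\n', d1, abs(df(xi)-d1))
fprintf('O(dx^2)  %.6f  error %.2e\n', d2, abs(df(xi)-d2))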
Example 2.3.1
Use forward finite differences to approximate ∂²ρ/∂t² with a truncation error of order O(h^2), where h is the step size in t.
Solution
ρ(t + h) = ρ(t) + ρ'(t)h + ρ''(t)h^2/2! + ρ'''(t)h^3/3! + ...   (1)

ρ(t + 2h) = ρ(t) + ρ'(t)(2h) + ρ''(t)(2h)^2/2! + ρ'''(t)(2h)^3/3! + ...   (2)

ρ(t + 3h) = ρ(t) + ρ'(t)(3h) + ρ''(t)(3h)^2/2! + ρ'''(t)(3h)^3/3! + ...   (3)

Forming 4(Eq. 2) − (Eq. 3) − 5(Eq. 1) ⇒

4ρ(t + 2h) − ρ(t + 3h) − 5ρ(t + h) = −2ρ(t) + [4(2) − 3 − 5(1)]ρ'(t)h + [4(4) − 9 − 5(1)]ρ''(t)h^2/2 + O(h^4)

The ρ'(t) coefficient vanishes (8 − 3 − 5 = 0), the ρ'''(t)h^3/3! coefficient also vanishes (4(8) − 27 − 5 = 0), and the ρ''(t) coefficient is (16 − 9 − 5)/2 = 1. Therefore:

ρ''(t) = ∂²ρ/∂t² = [4ρ(t + 2h) − ρ(t + 3h) − 5ρ(t + h) + 2ρ(t)] / h^2 + O(h^2)
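
A quick numerical check of the derived formula (a sketch; ρ(t) = sin t, t = 0.8, and h = 0.05 are arbitrary test values):

% Verify the O(h^2) forward formula for the second derivative with rho(t) = sin(t).
rho = @(t) sin(t);
t = 0.8; h = 0.05;
d2_approx = (4*rho(t+2*h) - rho(t+3*h) - 5*rho(t+h) + 2*rho(t))/h^2;
d2_exact  = -sin(t);                  % exact second derivative of sin(t)
fprintf('approx %.6f  exact %.6f  error %.2e\n', d2_approx, d2_exact, abs(d2_exact-d2_approx))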

Homework#1

1. Write a code to expand the Taylor series about x = 2 with Δx = 1 to find the value of the function
f(x) = −0.5x^5 − 0.8x^3 − 0.4x^2 − 0.2x + 1.8
using second- and fourth-order approximations, and find the absolute errors |ε_t|.
2. Derive a third-derivative approximation using the central difference formula with an accuracy of O((Δx)^4).
