Chapter One Handout


Chapter one

Number system and numerical error analysis


Introduction
Computational methods are tools for solving physical problems by writing computer programs.
Problem solving in numerical analysis follows these steps:
1. Formulation: analysing the problem and stating it mathematically.
2. Setting an algorithm: a step-by-step procedure for solving the problem.
3. Programming: a set of instructions that directs the computer to perform the task. In this course we use MATLAB (Matrix Laboratory).

A computational method is a technique by which mathematical problems are formulated so that they can be solved with arithmetic operations {+, -, *, /} that a computer can perform. Although there are many kinds of numerical methods, they share one common characteristic: they invariably involve large numbers of tedious arithmetic calculations. It is little wonder that, with the development of fast, efficient digital computers, the role of numerical methods in engineering problem solving has increased dramatically in recent years.
Why computational methods?
Numerical methods are a powerful tool for solving scientific problems; you will often encounter problems that cannot be solved by existing analytical methods.
E.g. no closed-form analytical solution is available for x - e^(-x) = 0, and a number such as 10/3 cannot be written down exactly.
Numerical methods yield approximate results that are close to the exact analytical solution.

Numerical       Analytical
Approximate     Exact
Easy            Complex
There are several additional reasons why you should study numerical methods:
• Numerical methods are extremely powerful problem-solving tools. They are capable of handling large systems of equations, nonlinearities, and complicated geometries that are not uncommon in engineering practice and that are often impossible to solve analytically.
• You may often have occasion to use commercially available prepackaged computer programs. However, many problems cannot be approached using prepackaged programs; you can design your own programs to solve such problems without having to buy or commission expensive software.
• Numerical methods are an efficient vehicle for learning to use computers, because they are for the most part designed for implementation on computers. Further, they are especially well suited to illustrate the power and the limitations of computers.
• At the same time, you will learn to recognize and control the errors of approximation that are part and parcel of large-scale numerical calculations.
• Numerical methods provide a vehicle for you to reinforce your understanding of mathematics.

Number representation and storage in computer


The representation of numbers in computers is usually not based on the decimal system, since a computer is a digital device that processes digital signals. Every character (including special characters) has to be represented with numbers, usually in binary, and therefore it is necessary to understand how to convert between different systems of representation. Computers also have a limited number of bits for representing numbers, so numbers with arbitrarily large magnitude cannot be represented, nor can floating-point numbers be represented with arbitrary precision.
The most commonly used number systems are:
1. Binary (base 2): 0 and 1
2. Octal (base 8): 0, 1, 2, 3, 4, 5, 6, 7
3. Decimal (base 10): 0-9
4. Hexadecimal (base 16): 0-9, A, B, C, D, E, F

Decimal Octal Hexadecimal Binary


0 0 0 0
1 1 1 1
2 2 2 10
3 3 3 11
4 4 4 100
5 5 5 101
6 6 6 110
7 7 7 111
8 10 8 1000
9 11 9 1001
10 12 A 1010
11 13 B 1011
12 14 C 1100
13 15 D 1101
14 16 E 1110
15 17 F 1111

Base conversion
Conversions are possible between any pair of the four systems: decimal, binary, octal and hexadecimal.
Decimal to decimal (just for fun)
First write the weight (position) n of each digit, starting from 0 at the LSB (least significant digit) and increasing towards the left. Then multiply each digit by 10^n and add.
Example: 125 — 5 is in position 0, 2 is in position 1, and 1 is in position 2. So,
5 × 10^0 = 5
2 × 10^1 = 20
1 × 10^2 = 100
5 + 20 + 100 = 125, and the result is the same.
Binary to decimal
Technique:
• Multiply each bit by 2^n, where n is the weight (position) of the bit, counting from 0 at the right (LSB), then
• add the results (a MATLAB sketch of this follows the exercises).
Example: 101011₂
1×2^5 + 0×2^4 + 1×2^3 + 0×2^2 + 1×2^1 + 1×2^0
= 32 + 8 + 2 + 1 = (43)₁₀
Exercise: 111011₂ = ( )₁₀
1010110₂ = ( )₁₀
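The positional-weight calculation above can be checked in MATLAB, the tool used in this course. The sketch below is illustrative only: the string '101011' is the worked example, and the built-in bin2dec serves as a cross-check.

```matlab
% Sketch: binary-to-decimal by positional weights (base 2), then a built-in check
bits = '101011';                      % the binary digits as a character string
n    = length(bits);
value = 0;
for k = 1:n
    value = value + str2double(bits(k)) * 2^(n - k);   % digit times 2^position
end
fprintf('%s (base 2) = %d (base 10)\n', bits, value)   % prints 43
bin2dec(bits)                                          % built-in check, also 43
```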
Octal to decimal
• Multiply each digit by 8^n, where n is the weight (position) of the digit, counting from 0 at the right (least significant digit), then
• add the results.
Example: 724₈ = 7×8^2 + 2×8^1 + 4×8^0
= 448 + 16 + 4
= 468₁₀
Exercise: convert from octal to decimal: 36₈ = ( )₁₀
Hexadecimal to decimal
• Multiply each digit by 16^n, where n is the weight (position) of the digit, counting from 0 at the right (least significant digit), then
• add the results (see the MATLAB sketch below).
Example: ABC₁₆ = A×16^2 + B×16^1 + C×16^0
= 10×16^2 + 11×16^1 + 12×16^0
= 2560 + 176 + 12
= 2748₁₀
Exercise: convert from hexadecimal to decimal: 1D5₁₆
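The same positional idea works for any base. As an illustrative check of the two worked examples, MATLAB's built-ins base2dec and hex2dec can be used:

```matlab
% Sketch: octal and hexadecimal to decimal with built-in conversions
base2dec('724', 8)      % 7*8^2 + 2*8 + 4 = 468
hex2dec('ABC')          % 10*16^2 + 11*16 + 12 = 2748
```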
Decimal to binary
• Divide by 2.
• Stop when the quotient is 0.
• Keep track of the remainders.
• The first remainder is bit 0 (LSB); the last remainder is the most significant bit.
Example: 125₁₀ = ( )₂

Number  Divisor  Quotient  Remainder
125     2        62        1
62      2        31        0
31      2        15        1
15      2        7         1
7       2        3         1
3       2        1         1
1       2        0         1

Reading the remainders from the last one back to the first gives
125₁₀ = (1111101)₂
Example: (0.6875)₁₀ = ( )₂
For the fractional part of a number:
• Multiply the fractional part by the target base.
• Take the integer part of each product as the next bit.
• Repeat with the remaining fraction until it becomes 0 (or until enough bits are obtained).
For the above example: 0.6875 × 2 = 1.375, take 1 (a₋₁)
0.375 × 2 = 0.75, take 0 (a₋₂)
0.75 × 2 = 1.5, take 1 (a₋₃)
0.5 × 2 = 1, take 1 (a₋₄)
(0.6875)₁₀ = (0.a₋₁a₋₂a₋₃a₋₄)₂ = (0.1011)₂
A combined MATLAB sketch follows the exercises.
Exercise: (125.6875)₁₀ = ( )₂
(52.234375)₁₀ = ( )₂
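The following MATLAB sketch combines the two procedures above: repeated division for the integer part and repeated multiplication for the fractional part. The value 125.6875 simply joins the two worked examples, and the 8-bit cut-off for the fraction is an arbitrary choice.

```matlab
% Sketch: decimal to binary, integer part by division, fraction by multiplication
x     = 125.6875;
ipart = floor(x);                 % 125
fpart = x - ipart;                % 0.6875

intbits = '';
while ipart > 0
    intbits = [num2str(mod(ipart, 2)) intbits];   % remainders read last-to-first
    ipart   = floor(ipart / 2);
end

fracbits = '';
for k = 1:8                       % stop after 8 fractional bits (or when fpart == 0)
    fpart    = fpart * 2;
    fracbits = [fracbits num2str(floor(fpart))];  % the integer carried out is the next bit
    fpart    = fpart - floor(fpart);
end
fprintf('%s.%s\n', intbits, fracbits)             % prints 1111101.10110000
```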

Octal to binary
Convert each octal digit to its 3-bit binary equivalent (2^3 = 8).
Example: 705₈ = ( )₂
7 = 111, 0 = 000, 5 = 101
705₈ = (111000101)₂
Exercise: 254₈
Hexadecimal to binary
Convert each hexadecimal digit to its 4-bit binary equivalent (2^4 = 16); a MATLAB sketch follows.
Example: 10AF₁₆ = ( )₂
1 = 0001, 0 = 0000, A = 1010, F = 1111
10AF₁₆ = (0001000010101111)₂
Exercise: ADE3₁₆
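A short MATLAB sketch of the digit-by-digit expansion, using the worked example 10AF₁₆; dec2bin and hex2dec are built-ins, and the 4-bit group width is the only assumption:

```matlab
% Sketch: hexadecimal to binary, one 4-bit group per hex digit (2^4 = 16)
hexnum = '10AF';
binstr = '';
for k = 1:length(hexnum)
    binstr = [binstr dec2bin(hex2dec(hexnum(k)), 4)];  % each digit -> 4 bits
end
disp(binstr)                      % 0001000010101111
% Octal works the same way with 3-bit groups (2^3 = 8), e.g. dec2bin(7, 3) = '111'
```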
Decimal to octal
• Divide by 8.
• Stop when the quotient is 0.
• Keep track of the remainders.
• The first remainder is digit 0 (least significant); the last remainder is the most significant digit.
Example: 1234₁₀ = ( )₈

Number  Divisor  Quotient  Remainder
1234    8        154       2
154     8        19        2
19      8        2         3
2       8        0         2

Reading the remainders from the last one back to the first gives
1234₁₀ = (2322)₈
Exercise: 567₁₀ = ( )₈
Decimal to hexadecimal
Repeatedly divide the number, and then each succeeding quotient, by 16 until a quotient of zero is obtained. The remainders, read from the last to the first and written as hexadecimal digits, form the required number. An appropriate number of leading zeroes is prefixed to obtain the required number of bits.
Example: Convert 5876 into a 16-bit hexadecimal number.
Solution:

Number  Divisor  Quotient  Remainder
5876    16       367       4
367     16       22        15 (F)
22      16       1         6
1       16       0         1

Thus the answer is 16F4H (a MATLAB sketch follows).
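As a quick machine check of the two procedures above, MATLAB's built-ins dec2base and dec2hex reproduce the worked results (the padding width of 4 hex digits corresponds to the requested 16 bits):

```matlab
% Sketch: decimal to octal and to hexadecimal with the built-in dec2base
dec2base(1234, 8)        % '2322'  (repeated division by 8, remainders reversed)
dec2base(5876, 16)       % '16F4'  (matches the worked division table above)
dec2hex(5876, 4)         % '16F4'  padded to at least 4 hex digits, i.e. 16 bits
```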
Binary to octal
To convert a binary number into octal we follow these steps:
1. Divide the binary digits into groups of 3 digits, starting from the right.
2. Convert each group of 3 binary digits into 1 octal digit.
Convert the binary number 100101₂ into octal form.
Step 1. Make groups of 3 digits from the right:
100101₂
Groups: 100₂ 101₂
Step 2. Convert each 3-digit group into 1 octal digit:
100₂ = 4₈
101₂ = 5₈
so, 100101₂ = 45₈
Hexadecimal to octal conversion
To convert a hexadecimal number into octal we follow these steps:
1. Convert each hexadecimal digit into a group of 4 binary digits.
2. Combine the groups from step 1.
3. Divide the binary digits from step 2 into groups of 3 digits, starting from the right.
4. Convert each group of 3 binary digits into 1 octal digit.
Convert the hexadecimal number 15₁₆ into octal form.
Step 1. Convert each hexadecimal digit into a group of 4 binary digits:
15₁₆
Digits: 1₁₆ 5₁₆
1₁₆ = 0001₂
5₁₆ = 0101₂
Step 2. Combine the groups:
so, 15₁₆ = 00010101₂
Step 3. Divide the binary digits from step 2 into groups of 3 digits, starting from the right (with one leading zero added):
Groups: 000₂ 010₂ 101₂
Step 4. Convert each group of 3 binary digits into 1 octal digit:
000₂ = 0₈
010₂ = 2₈
101₂ = 5₈
so, 15₁₆ = 025₈ = 25₈
Octal to hexadecimal conversion
To convert an octal number into hexadecimal we follow these steps:
1. Convert each octal digit into a group of 3 binary digits.
2. Combine the groups from step 1.
3. Divide the binary digits from step 2 into groups of 4 digits, starting from the right.
4. Convert each group of 4 binary digits into 1 hexadecimal digit.
Convert the octal number 25₈ into hexadecimal form.
Step 1. Convert each octal digit into a group of 3 binary digits:
25₈
Digits: 2₈ 5₈
2₈ = 010₂
5₈ = 101₂
Step 2. Combine the groups:
so, 25₈ = 010101₂
Step 3. Divide the binary digits from step 2 into groups of 4 digits, starting from the right (pad with leading zeros):
Groups: 0001₂ 0101₂
Step 4. Convert each group of 4 binary digits into 1 hexadecimal digit:
0101₂ = 5₁₆
0001₂ = 1₁₆
so, 25₈ = 15₁₆
A MATLAB sketch of this grouping approach follows.
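The grouping route through binary can also be automated. The sketch below is illustrative only; it redoes the hexadecimal-to-octal example 15₁₆ → 25₈ using built-ins (hex2dec, dec2bin, bin2dec) and a simple left-padding step:

```matlab
% Sketch: hexadecimal -> octal by regrouping through binary (4-bit groups to 3-bit groups)
h    = '15';
bits = dec2bin(hex2dec(h), 8);                 % '00010101'
pad  = mod(-length(bits), 3);                  % left-pad so the length is a multiple of 3
bits = [repmat('0', 1, pad) bits];             % '000010101'
oct  = '';
for k = 1:3:length(bits)
    oct = [oct num2str(bin2dec(bits(k:k+2)))]; % each 3-bit group is one octal digit
end
disp(oct)                                      % '025', i.e. 25 in octal
```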
Representation of numbers in computers
In this section, we discuss how numbers are represented in typical computers and how
arithmetic between the numbers works. We begin by discussing integer numbers and
then continue to floating-point numbers.
Since there is a fixed space of memory in the computer, a given real number in a
certain base must be represented in finite space in the memory of the machine. This
means that all real numbers cannot be represented in the memory.
These numbers that can be represented in the memory of the computer are called
machine numbers.
There are two ways of representation.
1. Fixed point representation
Here suppose the number to be represented has t digits, the digits are subdivided into
t1 and t2, where t1 is reserved for integers and t2 reserved for fractional parts.
In modern computers, this method is used to represent integers only, where we make
t2=0.
Fixed-point arithmetic may be fast, but it can suffer from serious precision issues. In particular, it is often the case that the output of a binary operation like multiplication or division requires more bits than the operands. For instance, suppose we keep one binary digit of fractional precision and wish to carry out the product 1/2 × 1/2 = 1/4. We write 0.1₂ × 0.1₂ = 0.01₂, which gets truncated to 0. In this system, it is fairly straightforward to combine fixed-point numbers in a reasonable way and get an unreasonable result. Due to these drawbacks, most major programming languages do not include a fixed-point decimal data type by default. The speed and regularity of fixed-point arithmetic, however, can be a considerable advantage for systems that favor timing over accuracy.
To include negative numbers as well, we must assign a separate sign bit. The first bit of the string is the sign bit, which is zero for positive numbers and one for negative numbers.
The most often used method for obtaining the representation of negative numbers is called two's complement: in the binary representation of the positive number x, invert all the bits (0 ↔ 1) and add 1 to get the binary representation of -x.
For example, if we have eight bits for the representation (of which one is for the sign
and seven for the digits), then
+2 = 0000 0010
+1 = 0000 0001
0 = 0000 0000
-1 = 1111 1111
-2 = 1111 1110
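These bit patterns can be reproduced in MATLAB by reinterpreting a signed 8-bit integer as an unsigned one; typecast and dec2bin are built-ins, and the choice of int8 (8 bits) matches the example above:

```matlab
% Sketch: two's complement of small integers, viewed as 8-bit patterns
for x = int8([2 1 0 -1 -2])
    u = typecast(x, 'uint8');                  % reinterpret the same 8 bits as unsigned
    fprintf('%4d -> %s\n', x, dec2bin(u, 8))   % e.g. -2 -> 11111110
end
```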
Floating-point numbers
In a computer, there are no real numbers or rational numbers (in the sense of the
mathematical definition), but all non integer numbers are represented with finite
precision.
For example, the numbers pi or 1/3 cannot be represented precisely (in any
representation), and in some representations, the numbers 1.00000000 and
1.00000001 are equally large.
The floating-point representation in computers is based on an internal division of the bits which are reserved for representing a given number x. The number is represented in three parts: a sign s that is either + or -, an integer exponent c, and a positive mantissa M:
x = s × B^(c-E) × M
where B is the base of the representation (usually B = 2) and E is the bias of the exponent (a fixed integer constant for any given machine and representation, which enables representing negative exponents without a separate sign bit).
In the decimal system, this corresponds to the following normalized floating-point form
x = ±0.d1d2d3... × 10^n = ±r × 10^n
where d1 ≠ 0 and n is an integer.
The above representation is said to be normalized if 1/base ≤ |M| < 1.
Example: x = 38910.321293, base 10
x = M × base^e, with 1/10 = 0.1, so 0.1 ≤ |M| < 1
M = 0.38910321293
exponent e = 5
x = 0.38910321293 × 10^5
In most computers, floating-point numbers are represented in the following standard IEEE floating-point form
x = (-1)^s × 2^(c-E) × (1.f)₂
The first bit s is the sign bit (0 = + and 1 = -). The next bits are used to represent the exponent c, corresponding to 2^(c-E), where E is a constant which enables representing negative exponents without a separate sign bit. The last bits are reserved for the mantissa (also called the significand), which is given in the "1-plus" form (1.f)₂.
The IEEE standard defines single-precision (32-bit) and double-precision (64-bit) floating-point numbers. The available bits are allocated as shown in Fig. 1 (the constant E is 127 for single precision and 1023 for double precision).

How to represent a real number x?

1. If x is zero, store it as a full word of zero bits (with a possible sign bit).
2. For nonzero x, first determine the sign bit and then consider |x|.
3. Convert both the integer and fractional parts of |x| from decimal to octal, then to binary.
4. One-plus normalize (|x|)₂ by shifting the binary point.
5. Find the 24-bit one-plus normalized mantissa.
6. Find the exponent of 2 by setting it equal to c - 127 and determine c.
7. Write the 32-bit representation as 8 hexadecimal digits.
A 32-bit single-precision pattern is interpreted as the real number
(-1)^b1 × 2^((b2b3...b9)₂ - 127) × (1.b10b11...b32)₂
Examples
Find the 32-bit representation of -52.234375.
Integer part:
(52.)₁₀ = (64.)₈ = (110 100.)₂
Fractional part:
(.234375)₁₀ = (.17)₈ = (.001 111)₂
(52.234375)₁₀ = (110100.001111)₂
= (1.10100001111)₂ × 2^5
The exponent is (5)₁₀; we need to write it as c - 127 = 5, so c = 132.
The stored exponent is (132)₁₀ = (204)₈ = (10000100)₂
The representation of -52.234375 is therefore
(1 10000100 10100001111000000000000)₂
= (1100 0010 0101 0000 1111 0000 0000 0000)₂
= (C250F000)₁₆
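The hand-worked pattern can be confirmed by letting MATLAB store the value as a single and reading back its bits; this sketch assumes nothing beyond the built-ins single, typecast, dec2hex and dec2bin:

```matlab
% Sketch: let the machine confirm the hand-worked 32-bit pattern of -52.234375
x = single(-52.234375);
u = typecast(x, 'uint32');        % the same 32 bits, read as an unsigned integer
dec2hex(u, 8)                     % 'C250F000', as derived above
dec2bin(u, 32)                    % sign | 8-bit exponent (10000100) | 23-bit mantissa
```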
A 64-bit (binary digit) representation is also used for real numbers. The first bit is a sign indicator, denoted s. This is followed by an 11-bit exponent, c, called the characteristic, and a 52-bit binary fraction, f, called the mantissa. The base for the exponent is 2, and the number represented is

(-1)^s × 2^(c-1023) × (1 + f).
Example: Consider the machine number
0 10000000011 1011100100010000000000000000000000000000000000000000.
The leftmost bit is s = 0, which indicates that the number is positive. The next 11 bits, 10000000011, give the characteristic and are equivalent to the decimal number
c = 1·2^10 + 0·2^9 + ... + 0·2^2 + 1·2^1 + 1·2^0 = 1024 + 2 + 1 = 1027.
The exponential part of the number is, therefore, 2^(1027-1023) = 2^4. The final 52 bits specify that the mantissa is
f = 1·(1/2)^1 + 1·(1/2)^3 + 1·(1/2)^4 + 1·(1/2)^5 + 1·(1/2)^8 + 1·(1/2)^12.
As a consequence, this machine number precisely represents the decimal number
(-1)^s × 2^(c-1023) × (1 + f) = 2^4 × (1 + 1/2 + 1/8 + 1/16 + 1/32 + 1/256 + 1/4096)
= 27.56640625.
However, the next smallest machine number is
0 10000000011 1011100100001111111111111111111111111111111111111111,
and the next largest machine number is
0 10000000011 1011100100010000000000000000000000000000000000000001.
Exercise: What number has the representation (45DE4000)16?
Exercise: 0100000001111110100000000000000000000000000000000000000000000000,
considered as a double precision word
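For the worked double-precision example, the value can be rebuilt from its three fields and cross-checked against the stored bit pattern; the sketch below uses only built-ins (bin2dec, num2hex) and truncates the mantissa to its leading 16 bits, which is enough here because the remaining bits are zero:

```matlab
% Sketch: rebuild the worked 64-bit example from its sign, characteristic and mantissa fields
s = 0;                                    % sign bit
c = bin2dec('10000000011');               % 11-bit characteristic -> 1027
f = bin2dec('1011100100010000') / 2^16;   % leading mantissa bits as a binary fraction
x = (-1)^s * 2^(c - 1023) * (1 + f);
fprintf('%.8f\n', x)                      % 27.56640625
num2hex(x)                                % '403b910000000000' -- the stored bit pattern in hex
```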
Numerical Error Analysis
Numerical calculations always involve approximations due to several reasons. These
errors are not the result of poor thinking or carelessness (like programming errors) but
they inevitably arise in all numerical calculations. We can divide the sources of errors
roughly into four categories: model, method, initial values (data) and round-off.
A. Modeling errors
When a practical problem is formulated into mathematical language, it is almost
always necessary to make simplifications. Examples of modeling errors include
leaving out less influential factors (e.g., no air resistance in falling) or using a
simplified description of a more complex system (e.g., classical description of a
quantum-mechanical system). Modeling errors are not discussed here in more detail but are left as a subject for courses in the various application fields.
B. Methodological errors
The conversion of a mathematical problem into a numerical one is also a source of
errors. Care should be taken to control these errors and to estimate their magnitude
and thus the quality of the numerical solution. Note that by methodological errors we
mean errors that would persist even if a hypothetical "perfect" computer had an
infinitely accurate representation and no round-off error. As a general rule, there is
not much a programmer can do about the computer’s round-off error.
Methodological errors, on the other hand, are entirely under the programmer's control. In fact, an incredible amount of work in the field of numerical analysis has been devoted to minimizing methodological errors! An example of a methodological error is the truncation error (or chopping error), which is encountered when, for example, a non-terminating series is chopped:

e^x = 1 + x/1! + x^2/2! + x^3/3! + ...
Another example of a methodological error is the discretizing error, which results when a continuous quantity is replaced by a discrete approximation. For example, replacing the derivative by the difference quotient leads to a discretizing error:
f'(x) ≈ (f(x + h) - f(x)) / h

C. Errors due to initial values


The initial values of a numerical computation can involve inaccurate values (e.g.
measurements). When designing the algorithm, it is important to keep in mind that the
initial errors must not accumulate during the calculation. There are also techniques for
data filtering that are designed to decrease the effects of errors in the initial values.
D. Round-off errors
Round-off errors are the result of having a finite number of bits to represent floating-
point numbers in computers. As already mentioned, arbitrarily large or small numbers
cannot be represented and floating-point numbers cannot have arbitrary precision.
E. Elementary arithmetic operations
We now examine the errors produced in basic arithmetic operations. Round-off errors accumulate as the amount of calculation increases.
In this course we mainly focus on the following two errors:
1. Round-off error
2. Truncation error.

What is round-off error?

A computer can only represent a number approximately. For example, a number like 1/3 may be represented as 0.333333 on a PC. The round-off error in this case is
1/3 - 0.333333 = 0.00000033...
There are also other numbers that cannot be represented exactly. For example, π and √2 are numbers that need to be approximated in computer calculations.
What is truncation error?
Truncation error is defined as the error caused by truncating a mathematical procedure. For example, the Maclaurin series for e^x is given as
e^x = 1 + x + x^2/2! + x^3/3! + ...
This series has an infinite number of terms, but when using it to calculate e^x only a finite number of terms can be used. For example, if one uses three terms to calculate e^x, then
e^x ≈ 1 + x + x^2/2!
The truncation error for such an approximation is
Truncation error = e^x - (1 + x + x^2/2!)
= x^3/3! + x^4/4! + ...
Example: the Taylor (Maclaurin) series approximation for e^x is given by
e^x = 1 + x + x^2/2! + x^3/3! + ...
Approximate e^1 by three series terms and calculate the truncation error.
e^x ≈ 1 + x + x^2/2!
e^1 ≈ 1 + 1 + 1^2/2! = 1 + 1 + 1/2
e^1 ≈ 2.5 (approximate)
e^1 = 2.7183 (true)
Truncation error = |2.7183 - 2.5| = 0.2183
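A two-line MATLAB check of this example (the variable names are arbitrary):

```matlab
% Sketch: truncation error of the three-term Maclaurin approximation of e^1
x      = 1;
approx = 1 + x + x^2/factorial(2);        % 2.5
truth  = exp(x);                          % 2.7183...
fprintf('approx = %.4f, true = %.4f, truncation error = %.4f\n', ...
        approx, truth, abs(truth - approx))   % error is about 0.2183
```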
Example 1: Determine the five-digit (a) chopping and (b) rounding values of the irrational number π.
Solution: The number π has an infinite decimal expansion of the form π = 3.14159265... Written in normalized decimal form, we have π = 0.314159265... × 10^1.
(a) The floating-point form of π using five-digit chopping is fl(π) = 0.31415 × 10^1 = 3.1415.
(b) The sixth digit of the decimal expansion of π is a 9, so the floating-point form of π using five-digit rounding is fl(π) = (0.31415 + 0.00001) × 10^1 = 3.1416.
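Chopping and rounding to k significant digits can be mimicked in MATLAB with fix and round; the scaling by 10^(k-n) below is one simple way to do it and assumes x is not zero:

```matlab
% Sketch: five significant-digit chopping vs. rounding of pi
x = pi;
k = 5;
n = floor(log10(abs(x))) + 1;              % number of digits before the decimal point
chopped = fix(x * 10^(k-n)) / 10^(k-n);    % 3.1415
rounded = round(x * 10^(k-n)) / 10^(k-n);  % 3.1416
fprintf('chop: %.4f  round: %.4f\n', chopped, rounded)
```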
Significant Figures
• The number of significant figures indicates precision. The significant digits of a number are those that can be used with confidence, e.g. the number of certain digits plus one estimated digit.
53,800 — how many significant figures?
5.38 × 10^4    3
5.380 × 10^4   4
5.3800 × 10^4  5
Zeros that are used only to locate the decimal point are not significant figures:
0.00001753   4
0.0001753    4
0.001753     4
Types of error
Numerical errors arise from the use of approximations to represent exact
mathematical operations and quantities. These include truncation errors, which result
when approximations are used to represent exact mathematical procedures, and
round-off errors, which result when numbers having limited significant figures are
used to represent exact numbers. For both types, the relationship between the exact, or
true result and the approximation can be formulated as
True value = approximation + error
By rearranging the Equation
True error (Et) = true value - approximation
where Et is used to designate the exact value of the error. The subscript t is included
to designate that this is the “true” error.
The relative true error is denoted by εt and is defined as the ratio between the true error and the true value:

Relative true error εt = true error / true value

The true percent relative error is

εt = (true error / true value) × 100 %

where εt designates the true percent relative error.

The absolute relative true error may also need to be calculated. In such cases we use |εt|.
Problem Statement: Suppose that you have the task of measuring the lengths of a bridge and a rivet and come up with 9999 and 9 cm, respectively. If the true values are 10,000 and 10 cm, respectively, compute (a) the true error and (b) the true percent relative error for each case.
Solution: (a) The error for measuring the bridge is
Et = 10,000 - 9999 = 1 cm
and for the rivet it is
Et = 10 - 9 = 1 cm
(b) The true percent relative error for the bridge is
εt = (1 / 10,000) × 100 % = 0.01 %
and for the rivet it is
εt = (1 / 10) × 100 % = 10 %
Thus, although both measurements have an error of 1 cm, the relative error for the rivet is much greater. We would conclude that we have done an adequate job of measuring the bridge, whereas our estimate for the rivet leaves something to be desired. A MATLAB version of this calculation follows.
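The same calculation in MATLAB, with the bridge and rivet stored side by side in vectors (an illustrative layout, not the only one):

```matlab
% Sketch: true error and true percent relative error for the bridge and the rivet
true_vals = [10000 10];                   % cm
measured  = [ 9999  9];                   % cm
Et   = true_vals - measured;              % [1 1] cm  -- same absolute error
epsT = Et ./ true_vals * 100;             % [0.01 10] percent -- very different relative error
fprintf('bridge: Et = %d cm, eps_t = %.2f%%\n', Et(1), epsT(1))
fprintf('rivet : Et = %d cm, eps_t = %.2f%%\n', Et(2), epsT(2))
```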
For numerical methods, the true value will be known only when we deal with functions that can be solved analytically (simple systems). In real-world applications we usually do not know the answer a priori. Then

εa = (approximate error / approximation) × 100 %

where the subscript a signifies that the error is normalized to an approximate value.
The computation is often performed repeatedly, or iteratively, to successively compute (we hope) better and better approximations. For such cases, the error is often estimated as the difference between the previous and the current approximation. Thus, the percent relative error is determined according to

εa = ((current approximation - previous approximation) / current approximation) × 100 %

Example 1
The derivative of a function f(x) at a particular value of x can be approximately calculated by
f'(x) ≈ (f(x + h) - f(x)) / h
For f(x) = 7e^(0.5x) and h = 0.3, find
a) the approximate value of f'(2)
b) the true value of f'(2)
c) the true error for part (a)
Solution
a) f'(x) ≈ (f(x + h) - f(x)) / h
For x = 2 and h = 0.3,
f'(2) ≈ (f(2 + 0.3) - f(2)) / 0.3
= (f(2.3) - f(2)) / 0.3
= (7e^(0.5×2.3) - 7e^(0.5×2)) / 0.3
= (22.107 - 19.028) / 0.3
= 10.265
b) The exact value of f'(2) can be calculated by using our knowledge of differential calculus:
f(x) = 7e^(0.5x)
f'(x) = 7 × 0.5 × e^(0.5x) = 3.5e^(0.5x)
So the true value of f'(2) is
f'(2) = 3.5e^(0.5×2) = 9.5140
c) The true error is calculated as
Et = true value - approximate value
= 9.5140 - 10.265
= -0.75061
Example 2
The derivative of a function f(x) at a particular value of x can be approximately calculated by
f'(x) ≈ (f(x + h) - f(x)) / h
For f(x) = 7e^(0.5x) and h = 0.3, find the relative true error at x = 2.
Solution
From Example 1,
Et = true value - approximate value
= 9.5140 - 10.265
= -0.75061
The relative true error is calculated as
εt = true error / true value
= -0.75061 / 9.5140
= -0.078895
Relative true errors are also presented as percentages. For this example,
εt = -0.078895 × 100 % = -7.8895 %
The absolute relative true error may also need to be calculated. In such cases,
|εt| = |-0.078895| = 0.078895 or 7.8895 %
Example 3
The derivative of a function f(x) at a particular value of x can be approximately calculated by
f'(x) ≈ (f(x + h) - f(x)) / h
For f(x) = 7e^(0.5x) and x = 2, find the following:
a) f'(2) using h = 0.3
b) f'(2) using h = 0.15
c) the approximate error for the value of f'(2) in part (b)
Solution
a) The approximate expression for the derivative of a function is
f'(x) ≈ (f(x + h) - f(x)) / h
For x = 2 and h = 0.3,
f'(2) ≈ (f(2 + 0.3) - f(2)) / 0.3
= (f(2.3) - f(2)) / 0.3
= (7e^(0.5×2.3) - 7e^(0.5×2)) / 0.3
= (22.107 - 19.028) / 0.3
= 10.265
b) Repeat the procedure of part (a) with h = 0.15:
f'(2) ≈ (f(2 + 0.15) - f(2)) / 0.15
= (f(2.15) - f(2)) / 0.15
= (7e^(0.5×2.15) - 7e^(0.5×2)) / 0.15
= (20.510 - 19.028) / 0.15
= 9.8799
c) So the approximate error Ea is
Ea = present approximation - previous approximation
= 9.8799 - 10.265
= -0.38474
Example 4
The derivative of a function f(x) at a particular value of x can be approximately calculated by
f'(x) ≈ (f(x + h) - f(x)) / h
For f(x) = 7e^(0.5x), find the relative approximate error in calculating f'(2) using values from h = 0.3 and h = 0.15.
Solution
From Example 3, the approximate value of f'(2) is 10.265 using h = 0.3 and 9.8799 using h = 0.15.
Ea = present approximation - previous approximation
= 9.8799 - 10.265
= -0.38474
The relative approximate error is calculated as
εa = approximate error / present approximation
= -0.38474 / 9.8799
= -0.038942
Relative approximate errors are also presented as percentages. For this example,
εa = -0.038942 × 100 % = -3.8942 %
Absolute relative approximate errors may also need to be calculated. In this example,
|εa| = |-0.038942| = 0.038942 or 3.8942 %
A MATLAB sketch follows.
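Examples 3 and 4 can be reproduced with the same anonymous function; d(h) below is the forward-difference estimate of f'(2) for a given step h:

```matlab
% Sketch: approximate and relative approximate error when h is halved from 0.3 to 0.15
f    = @(x) 7 * exp(0.5 * x);
d    = @(h) (f(2 + h) - f(2)) / h;        % forward-difference estimate of f'(2)
prev = d(0.3);                            % about 10.265
curr = d(0.15);                           % about 9.8799
Ea   = curr - prev;                       % about -0.3847
epsA = Ea / curr * 100;                   % about -3.89 %
fprintf('Ea = %.4f, eps_a = %.4f%%\n', Ea, epsA)
```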
Example 5
If one chooses 6 terms of the Maclaurin series for e^x to calculate e^0.7, how many significant digits can you trust in the solution? Find your answer without knowing or using the exact answer.
Solution
e^x = 1 + x + x^2/2! + ...
Using 6 terms, we get the current approximation as
e^0.7 ≈ 1 + 0.7 + 0.7^2/2! + 0.7^3/3! + 0.7^4/4! + 0.7^5/5!
= 2.0136
Using 5 terms, we get the previous approximation as
e^0.7 ≈ 1 + 0.7 + 0.7^2/2! + 0.7^3/3! + 0.7^4/4!
= 2.0122
The percentage absolute relative approximate error is
|εa| = |(2.0136 - 2.0122) / 2.0136| × 100 = 0.069527 %
Since |εa| = 0.069527 % ≤ 0.5 × 10^(2-2) %, at least 2 significant digits are correct in the answer e^0.7 ≈ 2.0136.
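A MATLAB sketch of Example 5; the last line inverts the rule |εa| ≤ 0.5 × 10^(2-m) % to estimate the number of trustworthy digits, which is an added convenience rather than part of the original solution:

```matlab
% Sketch: how many significant digits to trust when going from 5 to 6 Maclaurin terms
x     = 0.7;
terms = @(n) sum(x.^(0:n-1) ./ factorial(0:n-1));   % first n terms of the e^x series
prev  = terms(5);                                   % 2.0122
curr  = terms(6);                                   % 2.0136
epsA  = abs((curr - prev) / curr) * 100;            % about 0.0695 %
m     = floor(2 - log10(epsA / 0.5));               % largest m with epsA <= 0.5*10^(2-m)
fprintf('|eps_a| = %.6f%%, so at least %d significant digits are correct\n', epsA, m)
```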
