Lecture 6
Numerical Methods
[MA-200]
The False Position Method (Regula-Falsi)
• Historically this method is known as “Regula Falsi”, which is Latin for “false rule”. It is also known as the method of false position.
• The false position method is a numerical method to solve equations of the form $f(x) = 0$.
• The bisection algorithm cuts the interval into two equal halves, while the false position method chooses the new point from the secant line through the interval endpoints, which usually shrinks the interval toward the root faster.
Working procedure
i. Check that $f(x)$ is continuous on $[a, b]$ and that $f(a)$ and $f(b)$ have opposite signs.

Example: solve $f(x) = x^2 - 2 = 0$ on $[1, 2]$, where $f(1) = -1 < 0$ and $f(2) = 2 > 0$.

1st Iteration
$$x_1 = \frac{a\,f(b) - b\,f(a)}{f(b) - f(a)}$$
$$x_1 = \frac{(1)(2) - (2)(-1)}{2 - (-1)} = 1.3333$$
$$f(x_1) = f(1.3333) = 1.3333^2 - 2 = -0.2222 < 0$$
Since $f(x_1) < 0$ and $f(2) > 0$, the root lies in the new interval $[1.3333, 2]$.
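As a quick check of this first step, the same formula can be evaluated in a few lines of Python; this is only a sketch built around the example's $f(x) = x^2 - 2$ and bracket $[1, 2]$:

```python
# One false-position step for f(x) = x**2 - 2 on the bracket [1, 2].
def f(x):
    return x**2 - 2

a, b = 1.0, 2.0
x1 = (a * f(b) - b * f(a)) / (f(b) - f(a))   # = (1*2 - 2*(-1)) / (2 - (-1))
print(x1, f(x1))   # 1.3333..., -0.2222... < 0, so the new interval is [x1, 2]
```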
Cont.
2nd Iteration
$$x_2 = \frac{a\,f(b) - b\,f(a)}{f(b) - f(a)}$$
$$x_2 = \frac{(1.3333)(2) - (2)(-0.2222)}{2 - (-0.2222)} = 1.4000$$
$$f(x_2) = f(1.4000) = 1.4000^2 - 2 = -0.0401 < 0$$
New interval: $[1.4000, 2]$
Cont.
3rd Iteration
$$x_3 = \frac{a\,f(b) - b\,f(a)}{f(b) - f(a)}$$
$$x_3 = \frac{(1.4)(2) - (2)(-0.0401)}{2 - (-0.0401)} = 1.4118$$
$$f(x_3) = f(1.4118) = 1.4118^2 - 2 = -0.0068 < 0$$
New interval: $[1.4118, 2]$
Cont.
4th Iteration
$$x_4 = \frac{a\,f(b) - b\,f(a)}{f(b) - f(a)}$$
$$x_4 = \frac{(1.4118)(2) - (2)(-0.0068)}{2 - (-0.0068)} = 1.4138$$
$$f(x_4) = f(1.4138) = 1.4138^2 - 2 = -0.0012 < 0$$
New interval: $[1.4138, 2]$
Cont.
5th Iteration
$$x_5 = \frac{a\,f(b) - b\,f(a)}{f(b) - f(a)}$$
$$x_5 = \frac{(1.4138)(2) - (2)(-0.0012)}{2 - (-0.0012)} = 1.4142$$
$$f(x_5) = f(1.4142) = 1.4142^2 - 2 = -0.0002 < 0$$
New interval: $[1.4142, 2]$
Cont.
6th Iteration
$$x_6 = \frac{a\,f(b) - b\,f(a)}{f(b) - f(a)}$$
$$x_6 = \frac{(1.4142)(2) - (2)(-0.0002)}{2 - (-0.0002)} = 1.4143$$
$$f(x_6) = f(1.4143) = 1.4143^2 - 2 = 0.0001 > 0$$
New interval: $[1.4142, 1.4143]$
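The six iterations above can be reproduced with a short loop. The sketch below is one possible implementation (not taken from the lecture); the tolerance of $10^{-4}$ mirrors the stopping criterion used in the relative-error discussion that follows:

```python
def false_position(f, a, b, tol=1e-4, max_iter=100):
    """Regula-falsi: replace one endpoint of [a, b] with the x-intercept
    of the secant line through (a, f(a)) and (b, f(b)) at every step."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x_old = a
    for _ in range(max_iter):
        x = (a * fb - b * fa) / (fb - fa)   # false-position formula
        fx = f(x)
        if fx == 0 or abs(x - x_old) < tol:
            return x
        if fa * fx < 0:        # root lies in [a, x]: move b
            b, fb = x, fx
        else:                  # root lies in [x, b]: move a
            a, fa = x, fx
        x_old = x
    return x

root = false_position(lambda x: x**2 - 2, 1.0, 2.0)
print(round(root, 4))   # 1.4142, matching the iterations above
```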
Relative Error:
Let $x_i$ be the approximate solution of the non-linear equation and $x_E$ the exact root of $f(x) = 0$.
$$R.E. = \left|\frac{x_i - x_E}{x_E}\right|$$
Since $x_E = 1.4142$:
$$i=1:\quad \left|\frac{x_1 - x_E}{x_E}\right| = \left|\frac{1.3333 - 1.4142}{1.4142}\right| = 0.0572$$
$$i=2:\quad \left|\frac{x_2 - x_E}{x_E}\right| = \left|\frac{1.4000 - 1.4142}{1.4142}\right| = 0.0100$$
$$i=3:\quad \left|\frac{x_3 - x_E}{x_E}\right| = \left|\frac{1.4118 - 1.4142}{1.4142}\right| = 0.0017$$
$$i=4:\quad \left|\frac{x_4 - x_E}{x_E}\right| = \left|\frac{1.4138 - 1.4142}{1.4142}\right| = 0.0003$$
$$i=5:\quad \left|\frac{x_5 - x_E}{x_E}\right| = \left|\frac{1.4142 - 1.4142}{1.4142}\right| = 0 < 0.0001 = 10^{-4}$$
$$i=6:\quad \left|\frac{x_6 - x_E}{x_E}\right| = \left|\frac{1.4143 - 1.4142}{1.4142}\right| = 0.0001$$
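These relative errors can also be computed directly. A minimal Python sketch is shown below; it uses $\sqrt{2}$ as the exact root rather than the rounded value 1.4142, so the last digit of each result may differ slightly from the hand computation:

```python
import math

exact = math.sqrt(2)                                        # exact root of x**2 - 2 = 0
approx = [1.3333, 1.4000, 1.4118, 1.4138, 1.4142, 1.4143]   # x_1 ... x_6 from above

for i, x in enumerate(approx, start=1):
    rel_err = abs(x - exact) / exact                        # R.E. = |x_i - x_E| / |x_E|
    print(f"i={i}: R.E. = {rel_err:.4f}")
```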
Homework
Exercise: 2.3
Q3(b, c), Q4(b), Q9, Q10, Q13(a), Q14(b)
Quiz Announcement
Systems of Linear Equations
A system of linear equations is written in matrix form as $Ax = b$, where $A$ is the coefficient matrix, $x$ is the column vector of unknowns, and $b$ is the right-hand-side vector.
Example
$$\begin{cases} 3x_1 + 2x_2 = 18 \\ -x_1 + 2x_2 = 2 \end{cases} \quad\Longleftrightarrow\quad \begin{bmatrix} 3 & 2 \\ -1 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 18 \\ 2 \end{bmatrix}$$
Graphical solution
[Figure: the two lines plotted in the $x_1$–$x_2$ plane intersect at the solution $x_1 = 4$, $x_2 = 3$.]
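The solution read off the graph can be checked numerically; for example, a brief sketch using NumPy (assuming it is installed):

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [-1.0, 2.0]])      # coefficient matrix
b = np.array([18.0, 2.0])        # right-hand-side vector

x = np.linalg.solve(A, b)        # solves A x = b
print(x)                         # [4. 3.]  ->  x1 = 4, x2 = 3
```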
Solution Space
[Figure: three $x_1$–$x_2$ plots showing possible solution spaces of a linear system; one panel is labelled “No solution”.]
Gauss Elimination Method
• Gauss elimination transforms the system of linear equations into an equivalent upper-triangular system that can then be solved very quickly by back substitution.
$$\begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ 0 & a'_{22} & a'_{23} & \cdots & a'_{2n} \\ 0 & 0 & a'_{33} & \cdots & a'_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & a'_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b'_2 \\ b'_3 \\ \vdots \\ b'_n \end{bmatrix}$$
Back substitution then proceeds from the last row upward:
$$x_n = \frac{b'_n}{a'_{nn}}, \qquad x_{n-1} = \frac{b'_{n-1} - a'_{n-1,n}\,x_n}{a'_{n-1,n-1}}, \qquad \ldots, \qquad x_1 = \frac{b_1 - \sum_{i=2}^{n} a_{1i}\,x_i}{a_{11}}$$
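A compact (no-pivoting) implementation of forward elimination followed by this back-substitution formula might look as follows. This is a sketch that assumes all pivots are nonzero; practical codes add row pivoting to avoid division by zero:

```python
import numpy as np

def gauss_eliminate(A, b):
    """Solve A x = b by reducing A to upper-triangular form,
    then applying back substitution. Assumes nonzero pivots."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)

    # Forward elimination: zero out the entries below each pivot.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]        # multiplier, as in r_i <- R_i - m R_k
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]

    # Back substitution: x_n = b'_n / a'_nn, then work upward.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# The two-equation example solved on the following slides:
print(gauss_eliminate(np.array([[1, 1], [3, -4]]), np.array([3, 2])))   # [2. 1.]
```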
Example with two equations
• Consider
$$\begin{cases} x_1 + x_2 = 3 \\ 3x_1 - 4x_2 = 2 \end{cases}$$
• Subtracting three times the first equation from the second yields
$$\begin{cases} x_1 + x_2 = 3 \\ -7x_2 = -7 \end{cases}$$
Example with two equations (cont.)
• Solve
$$\begin{cases} x_1 + x_2 = 3 \\ -7x_2 = -7 \end{cases}$$
• The second equation gives $x_2 = 1$.
• Substituting into the first one:
$$x_1 + 1 = 3 \;\Rightarrow\; x_1 = 2$$
Remark
• There are various ways to form an upper triangular system. The first row operation below is the one used in Gauss elimination:
$$\begin{cases} x_1 + x_2 = 3 \\ 3x_1 - 4x_2 = 2 \end{cases} \;\xrightarrow{\;r_2 \,\leftarrow\, R_2 - 3R_1\;}\; \begin{cases} x_1 + x_2 = 3 \\ -7x_2 = -7 \end{cases}$$
$$\begin{cases} x_1 + x_2 = 3 \\ 3x_1 - 4x_2 = 2 \end{cases} \;\xrightarrow{\;r_2 \,\leftarrow\, 3R_1 - R_2\;}\; \begin{cases} x_1 + x_2 = 3 \\ 7x_2 = 7 \end{cases}$$
Augmented matrix notation
• A common notation is the augmented matrix, where the pivot element is the leading entry $a_{11} = 1$ used to eliminate the $3$ below it:
$$\left[\begin{array}{rr|r} 1 & 1 & 3 \\ 3 & -4 & 2 \end{array}\right] \;\xrightarrow{\;r_2 \,\leftarrow\, R_2 - 3R_1\;}\; \left[\begin{array}{rr|r} 1 & 1 & 3 \\ 0 & -7 & -7 \end{array}\right]$$
Example with three equations
• Solving the system with augmented matrix
$$\left[\begin{array}{rrr|r} -3 & 2 & -1 & -1 \\ 6 & -6 & 7 & -7 \\ 3 & -4 & 4 & -6 \end{array}\right]$$
$$\xrightarrow{\substack{r_2 \,\leftarrow\, R_2 - (-2)R_1 \\ r_3 \,\leftarrow\, R_3 - (-1)R_1}}\; \left[\begin{array}{rrr|r} -3 & 2 & -1 & -1 \\ 0 & -2 & 5 & -9 \\ 0 & -2 & 3 & -7 \end{array}\right] \;\xrightarrow{\;r_3 \,\leftarrow\, R_3 - R_2\;}\; \left[\begin{array}{rrr|r} -3 & 2 & -1 & -1 \\ 0 & -2 & 5 & -9 \\ 0 & 0 & -2 & 2 \end{array}\right]$$
Example with three equations
• Back substitution
$$\left[\begin{array}{rrr|r} -3 & 2 & -1 & -1 \\ 0 & -2 & 5 & -9 \\ 0 & 0 & -2 & 2 \end{array}\right] \;\xrightarrow{\text{back substitution}}\; x = \begin{bmatrix} 2 \\ 2 \\ -1 \end{bmatrix}$$
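As a cross-check, the same system can be handed to any linear solver; for instance, with NumPy (assuming it is available):

```python
import numpy as np

A = np.array([[-3.0,  2.0, -1.0],
              [ 6.0, -6.0,  7.0],
              [ 3.0, -4.0,  4.0]])
b = np.array([-1.0, -7.0, -6.0])

print(np.linalg.solve(A, b))   # [ 2.  2. -1.]  ->  x1 = 2, x2 = 2, x3 = -1
```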
Gauss Jordan Elimination
Working Procedure