Iterations

This document discusses iterative methods for solving systems of linear equations. An iterative method starts with an initial approximation that is successively improved. The method converges if the approximations get closer to the solution with each iteration, and diverges otherwise; diagonal dominance of the coefficient matrix guarantees convergence. A small system is then solved with the Jacobi and Gauss-Seidel iterative methods, computing successive approximations until the values settle after several iterations.

c. By Iterative Method

- Start with an initial approximation to the solution, which is then successively improved.

- If the successive approximations tend to approach the solution, the method converges; otherwise, the method diverges.
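
To make the converge/diverge distinction concrete, here is a minimal Python sketch (an illustration, not part of the original notes) that repeatedly updates an initial approximation for the 2x2 system used in the examples below, with the equations kept in their original order, before the rearrangement made there:

# Illustrative sketch (not from the notes): Jacobi-style update for
# x1 - 3*x2 = -7 and 2*x1 + x2 = 7 with the rows left in this order,
# i.e. solving equation 1 for x1 and equation 2 for x2.
x1, x2 = 0.0, 0.0
for k in range(1, 6):
    x1, x2 = -7 + 3 * x2, 7 - 2 * x1   # both updates use the previous values
    print(f"iteration {k}: x1 = {x1:.1f}, x2 = {x2:.1f}")
# The values grow without bound, so the iteration diverges for this ordering;
# rearranging the equations, as done in the examples below, makes it converge.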

Convergence

- The coefficient matrix should be “diagonally dominant”, meaning that every diagonal entry is, in absolute value, larger than the sum of the absolute values of the other entries in its row.
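
As a quick check, here is a minimal Python sketch (illustrative only; the helper name is_diagonally_dominant is my own, not something defined in the notes) that tests strict diagonal dominance row by row and confirms that the example system below becomes dominant once its rows are swapped:

# Illustrative sketch (not from the notes): strict diagonal dominance means
# |a[i][i]| > sum of |a[i][j]| for all j != i, in every row.
def is_diagonally_dominant(a):
    """Return True if the square matrix `a` is strictly diagonally dominant."""
    n = len(a)
    for i in range(n):
        off_diagonal = sum(abs(a[i][j]) for j in range(n) if j != i)
        if abs(a[i][i]) <= off_diagonal:
            return False
    return True

print(is_diagonally_dominant([[1, -3], [2, 1]]))   # False: original row order
print(is_diagonally_dominant([[2, 1], [1, -3]]))   # True: after swapping rows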

• Jacobi
Example:
𝑥1 − 3𝑥2 = −7
2𝑥1 + 𝑥2 = 7

Check for convergence: rearrange the equations so that the system is diagonally dominant.


2𝑥1 + 𝑥2 = 7
𝑥1 − 3𝑥2 = −7

Solve the nth equation for 𝑥n.


𝑥1 = 7/2 − 𝑥2/2
𝑥2 = 7/3 + 𝑥1/3

Work with three decimal places.


𝑥1 = 3.5 − 0.5𝑥2
𝑥2 = 2.333 + 0.333𝑥1

Choose initial approximation 𝑥1 = 𝑥2 = 0 (1st iteration)


𝑥1 = 3.5 − 0.5(0) = 3.5
𝑥2 = 2.333 + 0.333(0) = 2.333

Substitute the values from the previous iteration (2nd iteration)


𝑥1 = 3.5 − 0.5(2.333) = 2.334
𝑥2 = 2.333 + 0.333(3.5) = 3.498

3rd Iteration
𝑥1 = 3.5 − 0.5(3.498) = 1.751
𝑥2 = 2.333 + 0.333(2.334) = 3.110

4th Iteration
𝑥1 = 3.5 − 0.5(3.110) = 1.945
𝑥2 = 2.333 + 0.333(1.751) = 2.916
5th Iteration
𝑥1 = 3.5 − 0.5(2.916) = 2.042
𝑥2 = 2.333 + 0.333(1.945) = 2.981

6th Iteration
𝑥1 = 3.5 − 0.5(2.981) = 2.010
𝑥2 = 2.333 + 0.333(2.042) = 3.013

7th Iteration
𝑥1 = 3.5 − 0.5(3.013) = 1.994
𝑥2 = 2.333 + 0.333(2.010) = 3.002

8th Iteration
𝑥1 = 3.5 − 0.5(3.002) = 1.999
𝑥2 = 2.333 + 0.333(1.994) = 2.997

The iterates converge to
𝑥1 = 2
𝑥2 = 3
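
The whole Jacobi run above can be reproduced with a few lines of Python. This is a minimal sketch under the same rounding conventions, not code from the original notes:

# Jacobi iteration for the rearranged system (illustrative sketch):
#   x1 = 3.5 - 0.5 * x2
#   x2 = 2.333 + 0.333 * x1
# Jacobi computes every new value from the previous iteration only.
x1, x2 = 0.0, 0.0                # initial approximation
for k in range(1, 9):            # eight iterations, as in the worked example
    x1, x2 = 3.5 - 0.5 * x2, 2.333 + 0.333 * x1
    print(f"iteration {k}: x1 = {x1:.3f}, x2 = {x2:.3f}")
# The printed values approach x1 = 2, x2 = 3.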

• Gauss-Seidel

Example:

𝑥1 − 3𝑥2 = −7
2𝑥1 + 𝑥2 = 7

Check for convergence: rearrange the equations so that the system is diagonally dominant.


2𝑥1 + 𝑥2 = 7
𝑥1 − 3𝑥2 = −7

Solve the nth equation for 𝑥n.


𝑥1 = 7/2 − 𝑥2/2
𝑥2 = 7/3 + 𝑥1/3

Work with three decimal places.


𝑥1 = 3.5 − 0.5𝑥2
𝑥2 = 2.333 + 0.333𝑥1

Choose initial approximation 𝑥1 = 𝑥2 = 0. Unlike Jacobi, Gauss-Seidel uses each newly computed value immediately within the same iteration (1st iteration).

𝑥1 = 3.5 − 0.5(0) = 3.5
𝑥2 = 2.333 + 0.333(3.5) = 3.498
2nd Iteration
𝑥1 = 3.5 − 0.5(3.498) = 1.751
𝑥2 = 2.333 + 0.333(1.751) = 2.916

3rd Iteration
𝑥1 = 3.5 − 0.5(2.916) = 2.042
𝑥2 = 2.333 + 0.333(2.042) = 3.013

4th Iteration
𝑥1 = 3.5 − 0.5(3.013) = 1.994
𝑥2 = 2.333 + 0.333(1.994) = 2.997

5th Iteration
𝑥1 = 3.5 − 0.5(2.997) = 2.002
𝑥2 = 2.333 + 0.333(2.002) = 3.000

6th Iteration
𝑥1 = 3.5 − 0.5(3.000) = 2.000
𝑥2 = 2.333 + 0.333(2.000) = 2.999

The iterates converge to
𝑥1 = 2
𝑥2 = 3
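
For comparison, here is the matching Gauss-Seidel sketch (again illustrative, not from the original notes); the only change from the Jacobi version is that the freshly computed 𝑥1 is used immediately when updating 𝑥2, which is why the values settle in fewer iterations:

# Gauss-Seidel iteration for the same rearranged system (illustrative sketch).
x1, x2 = 0.0, 0.0                # initial approximation
for k in range(1, 7):            # six iterations, as in the worked example
    x1 = 3.5 - 0.5 * x2
    x2 = 2.333 + 0.333 * x1      # uses the x1 just computed in this iteration
    print(f"iteration {k}: x1 = {x1:.3f}, x2 = {x2:.3f}")
# The printed values approach x1 = 2, x2 = 3.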
