SOT Method
The SOT method is an iterative technique for solving systems of equations, particularly those of the form Ax = b, where A is a square matrix, x is the vector of unknowns, and b is the vector of constants. It is a variant of the Gauss-Seidel method, but with an additional tabulation (caching) step and a relaxation factor that can speed up convergence.
In essence, the method updates each element of the solution vector successively, using a
weighted average of the current approximation and the new value obtained from solving the
system's equations. The relaxation parameter, ω, adjusts how much of the new value is used in each update:

x_i ← (1 − ω)·x_i + (ω / a_ii)·(b_i − Σ_{j≠i} a_ij·x_j)
Here’s how we can break this down and create a possible approach for SOT:
```python
import numpy as np

def sot_solve(A, b, omega=1.0, tol=1e-8, max_iterations=1000):
    n = len(b)
    x = np.zeros(n)             # initial guess: zero vector
    D_inv = 1 / np.diag(A)      # Tabulation: cache the inverse diagonal once
    for k in range(max_iterations):
        x_old = np.copy(x)      # Store old values of x for comparison
        for i in range(n):
            sigma = 0.0
            for j in range(n):
                if i != j:
                    sigma += A[i, j] * x[j]
            # Relaxation update: blend the old value with the new one
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) * D_inv[i]
        if np.linalg.norm(x - x_old) < tol:  # convergence check
            return x
    return x

# Example system Ax = b (right-hand side chosen for illustration;
# its exact solution is x = [5, 5, 5, 5])
A = np.array([[4, -1, 0, 0],
              [-1, 4, -1, 0],
              [0, -1, 4, -1],
              [0, 0, -1, 3]], dtype=float)
b = np.array([15, 10, 10, 10], dtype=float)
solution = sot_solve(A, b, omega=1.1)
print("Solution:", solution)
```
Explanation:
● Step 1: Precompute Values (Tabulation):
○ The inverse of the diagonal elements of matrix A is computed once and stored in D_inv. This avoids repeatedly calculating these values inside the iteration loop, which is the "tabulation" or caching part of the method.
● Step 2: Successive Updates:
○ In each iteration, x is updated similarly to the Successive Over-Relaxation (SOR) method, but using the cached diagonal inverse values.
● Step 3: Convergence Check:
○ After each iteration, the solution is checked for convergence by comparing the norm of the difference between the old and new estimates of x.
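The payoff of Step 1 can be sketched in isolation. The snippet below is a minimal illustration (the matrix and its size are arbitrary choices, not part of the method): it compares recomputing the diagonal inverse on every use against reading it from a cache, and checks that both paths produce identical results.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((200, 200)) + 200 * np.eye(200)  # diagonally dominant test matrix
x = rng.random(200)

def scale_without_cache(x):
    # Recomputes np.diag(A) and the division on every call
    return x * (1 / np.diag(A))

D_inv = 1 / np.diag(A)  # tabulated once, outside any loop

def scale_with_cache(x):
    # Reuses the cached inverse diagonal
    return x * D_inv

# Both variants agree; the cached one skips the repeated diag/division work
print(np.allclose(scale_without_cache(x), scale_with_cache(x)))  # True
```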
Customization:
If the SOT method in your context refers to something different, such as the precomputation
of function values (e.g., trigonometric functions, expensive matrix multiplications), the
general idea is to precompute the values that are used frequently and store them for reuse
during iterations.
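As a minimal sketch of that broader sense of tabulation (the function, grid, and helper name here are illustrative, not part of the SOT method itself), expensive function values can be tabulated once and looked up later:

```python
import numpy as np

# Hypothetical example: precompute sine values on a fixed grid once,
# then reuse them instead of calling np.sin repeatedly inside a loop.
grid = np.linspace(0, 2 * np.pi, 1001)  # fixed sample points
sin_table = np.sin(grid)                # tabulated once

def sin_lookup(theta):
    """Nearest-neighbor lookup into the precomputed table."""
    idx = int(round((theta % (2 * np.pi)) / (2 * np.pi) * 1000))
    return sin_table[idx]

# The cached value agrees with np.sin up to the grid resolution
print(abs(sin_lookup(np.pi / 4) - np.sin(np.pi / 4)) < 1e-3)  # True
```

The accuracy of such a table is limited by the grid spacing, so the resolution has to match the application's tolerance.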
Additional Optimization:
This example could be extended to other types of precomputation, for instance caching the off-diagonal part of A so the inner sum becomes a single dot product:

```python
import numpy as np

def sot_solve_cached(A, b, omega=1.0, tol=1e-8, max_iterations=1000):
    n = len(b)
    x = np.zeros(n)
    D_inv = 1 / np.diag(A)           # cached inverse diagonal
    A_off = A - np.diag(np.diag(A))  # cached off-diagonal part of A
    for k in range(max_iterations):
        x_old = np.copy(x)  # Store old values of x for comparison
        for i in range(n):
            # The sum of non-diagonal terms is repeated in each iteration;
            # the cached A_off row replaces the per-element
            # sum(A[i, j] * x[j] for j in range(n) if j != i)
            sigma = A_off[i] @ x
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) * D_inv[i]
        if np.linalg.norm(x - x_old) < tol:
            return x
    return x

# Example system Ax = b
A = np.array([[4, -1, 0, 0],
              [-1, 4, -1, 0],
              [0, -1, 4, -1],
              [0, 0, -1, 3]], dtype=float)
b = np.array([15, 10, 10, 10], dtype=float)  # illustrative right-hand side
solution = sot_solve_cached(A, b)
print("Solution:", solution)
```
How It Works:
● Matrix A and vector b represent the system of equations.
● The initial guess for the solution vector x is a zero vector.
● Tabulation (caching) is applied by precomputing the inverse of the diagonal elements of A and storing them in D_inv. This avoids recalculating these values in every iteration.
● Successive updates are made for each variable x_i, using both the previous iteration's values and the precomputed diagonal inverses.
Key Elements:
1. Tabulation: Precomputing or caching the inverse diagonal elements to avoid
redundant operations.
2. Successive Iteration: Like in SOR or Gauss-Seidel methods, values are
iteratively updated.
3. Relaxation Parameter: ω, the relaxation factor, can be tuned for faster convergence.
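To illustrate point 3, the self-contained sketch below (the test system and ω values are arbitrary choices for demonstration) counts how many sweeps the iteration needs for a few values of ω:

```python
import numpy as np

def sor_sweeps(A, b, omega, tol=1e-10, max_iterations=10000):
    """Number of sweeps until the successive-difference norm drops below tol."""
    n = len(b)
    x = np.zeros(n)
    D_inv = 1 / np.diag(A)  # cached inverse diagonal (tabulation)
    for k in range(1, max_iterations + 1):
        x_old = x.copy()
        for i in range(n):
            sigma = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) * D_inv[i]
        if np.linalg.norm(x - x_old) < tol:
            return k
    return max_iterations

A = np.array([[4.0, -1, 0], [-1, 4, -1], [0, -1, 4]])  # diagonally dominant
b = np.array([2.0, 4, 10])
for omega in (0.8, 1.0, 1.2):
    print(f"omega={omega}: {sor_sweeps(A, b, omega)} sweeps")
```

The best ω depends on the matrix; values between 0 and 2 are required for convergence on symmetric positive-definite systems, and tuning within that range trades per-sweep progress against stability.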
Example: Solving a Circuit with SOT
Consider a two-loop resistive circuit driven by a voltage source, analyzed with Kirchhoff's voltage law using loop currents I1 and I2, where:
● R1 = 4 Ω,
● R2 = 2 Ω,
● R3 = 6 Ω,
● Voltage source V = 10 V.
System of Equations
We can write the system in matrix form Ax = b:
1. First equation:
10 − 4I1 − 2(I1 − I2) = 0
Simplifying:
6I1 − 2I2 = 10
2. Second equation:
2(I1 − I2) − 6I2 = 0
Simplifying:
2I1 − 8I2 = 0
In matrix form, A = [[6, −2], [2, −8]] and b = [10, 0].
```python
import numpy as np

def sot_solve(A, b, omega=1.0, tol=1e-8, max_iterations=1000):
    n = len(b)
    x = np.zeros(n)
    D_inv = 1 / np.diag(A)  # Tabulation: cache the inverse diagonal
    for k in range(max_iterations):
        x_old = np.copy(x)
        for i in range(n):
            sigma = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) * D_inv[i]
        if np.linalg.norm(x - x_old) < tol:
            return x
    return x

A = np.array([[6, -2], [2, -8]], dtype=float)
b = np.array([10, 0], dtype=float)
I1, I2 = sot_solve(A, b)
print("Solution (Currents):")
print(f"I1: {I1:.3f} A")  # ≈ 1.818 A
print(f"I2: {I2:.3f} A")  # ≈ 0.455 A
```
Explanation:
1. Step 1: Defining the system: We derived the system of linear equations from Kirchhoff's Laws, with the coefficient matrix A and vector b.
2. Step 2: Tabulation (Caching): The diagonal elements of matrix A are precomputed (inverted) and stored to speed up the calculations.
3. Step 3: Successive Iterations: We apply successive iterations to update the values of I1 and I2, using the previously cached values for efficiency.
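As a sanity check on the iterative result, the same system can be handed to NumPy's direct solver (a verification sketch, not part of the SOT method):

```python
import numpy as np

# Direct solve of the circuit system as a reference
A = np.array([[6.0, -2.0],
              [2.0, -8.0]])
b = np.array([10.0, 0.0])
direct = np.linalg.solve(A, b)
print(direct)  # [1.81818182 0.45454545], i.e. I1 = 20/11, I2 = 5/11
```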
Expected Output:
Solution (Currents):
I1: 1.818 A
I2: 0.455 A
Interpretation:
● The loop currents are I1 = 20/11 ≈ 1.818 A and I2 = 5/11 ≈ 0.455 A; the current through the shared resistor R2 is I1 − I2 ≈ 1.364 A.
● The SOT method converges to this solution after a number of iterations determined by the tolerance and the relaxation factor ω.