SOT Method

The successive over-tabulation (SOT) method is an iterative method used to solve linear systems of equations, particularly those of the form Ax = b, where A is a square matrix, x is the vector of unknowns, and b is the vector of constants. It is a variant of the Gauss-Seidel method, but with an additional relaxation factor that can speed up convergence.

In essence, the method updates each element of the solution vector successively, using a weighted average of the current approximation and the new value obtained from solving the system's equations. The relaxation parameter, ω, controls how much of the new value is used in each update:

x_i ← (1 − ω)·x_i + (ω / a_ii)·(b_i − Σ_{j≠i} a_ij·x_j)

Here’s how we can break this down and create a possible approach for SOT:

1. Iterative Procedure: We would iteratively update values, similar to methods like Gauss-Seidel or SOR.
2. Tabulation/Precomputation: Certain parts of the computation (such as function values or matrix operations) would be precomputed or cached during iterations to reduce redundant calculations.

Example Implementation for Solving Linear Systems with SOT (Hypothetical)

Let’s consider using precomputed matrix elements or caching the results of repeated calculations to speed up the process. Here’s a simple Python example that blends iterative updates with caching/tabulation.

python

import numpy as np

# Function to implement the SOT method
def sot(A, b, omega, tol=1e-5, max_iterations=10000):
    n = len(b)
    x = np.zeros(n)  # Initial guess (zero vector)

    # Tabulate / cache inverse of diagonal elements of A
    # (to avoid redundant calculations)
    D_inv = 1 / np.diag(A)

    for k in range(max_iterations):
        x_old = np.copy(x)  # Store old values of x for comparison

        for i in range(n):
            sigma = 0
            for j in range(n):
                if i != j:
                    sigma += A[i, j] * x[j]  # Sum of non-diagonal terms

            # Use tabulated (cached) inverse diagonal elements for efficiency
            x[i] = (1 - omega) * x[i] + omega * D_inv[i] * (b[i] - sigma)

        # Check for convergence (based on the difference between old and new x)
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            print(f"Converged after {k+1} iterations.")
            return x

    print("Maximum iterations reached without convergence.")
    return x

# Example system Ax = b
A = np.array([[4, -1, 0, 0],
              [-1, 4, -1, 0],
              [0, -1, 4, -1],
              [0, 0, -1, 3]], dtype=float)

b = np.array([15, 10, 10, 10], dtype=float)

# Relaxation factor (omega)
omega = 1.25  # Can be tuned for better performance

# Solve the system using SOT
solution = sot(A, b, omega)

print("Solution:", solution)

Explanation:
● Step 1: Precompute Values (Tabulation):
○ The inverse of the diagonal elements of matrix A is computed once and stored in D_inv. This avoids repeatedly calculating these values inside the iteration loop, which is essentially the "tabulation" or caching part of the method.
● Step 2: Successive Updates:
○ In each iteration, x is updated much as in the Successive Over-Relaxation (SOR) method, but using the cached diagonal inverse values.
● Step 3: Convergence Check:
○ After each iteration, the solution is checked for convergence by comparing the infinity norm of the difference between the old and new estimates of x.

Why Tabulation (Caching)?

Tabulating or caching certain values, like the inverse of the diagonal elements, can help:

● Avoid redundant calculations.
● Improve computational efficiency, especially when these values are used repeatedly in each iteration.

Customization:
If the SOT method in your context refers to something different, such as precomputing function values (e.g., trigonometric functions or expensive matrix products), the general idea still applies: precompute the values that are used frequently and store them for reuse during iterations.

Additional Optimization:
This example could be extended to other types of precomputation:

● Precomputing partial sums or using memoization for complex operations.
● Lookup tables for non-linear function evaluations.
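As a concrete illustration of these two ideas (a generic sketch, not part of the SOT solver above; slow_trig and sin_table are hypothetical names introduced here), function values can be memoized with functools.lru_cache or precomputed into a lookup table:

```python
import math
from functools import lru_cache

# Hypothetical "expensive" function, memoized so each angle is
# evaluated at most once across all iterations.
@lru_cache(maxsize=None)
def slow_trig(k):
    return math.sin(k * math.pi / 180)

# Alternative: precompute a lookup table for every angle needed up front
sin_table = [math.sin(k * math.pi / 180) for k in range(360)]

# Repeated use inside an iteration loop now hits the cache/table
assert slow_trig(30) == sin_table[30]
print(f"sin(30 deg) = {slow_trig(30):.4f}")
```

Either form trades a little memory for the cost of recomputing the same value on every iteration.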

Steps for Implementing SOT in Python

1. Define the System of Equations: Typically, a system of equations can be represented as Ax = b, where A is the coefficient matrix, x is the solution vector, and b is the result vector.
2. Use Successive Iterations: Apply an iterative method (like Gauss-Seidel, Successive Over-Relaxation, or Jacobi). In each iteration, update the values of the solution vector.
3. Incorporate Tabulation (Caching/Precomputation): Precompute or store values that are repeatedly used across iterations to avoid redundant calculations.

Example: Hypothetical Successive Over-Tabulation (SOT) in Python

Let’s suppose we’re solving a system of linear equations using an iterative method with tabulated inverse diagonal elements to speed up the solution process.

python

import numpy as np

# Function to implement the Successive Over-Tabulation (SOT) method
def sot(A, b, omega=1.25, tol=1e-5, max_iterations=10000):
    n = len(b)
    x = np.zeros(n)  # Initial guess (zero vector)

    # Tabulate (cache) inverse of diagonal elements of A
    D_inv = 1 / np.diag(A)

    for k in range(max_iterations):
        x_old = np.copy(x)  # Store old values of x for comparison

        for i in range(n):
            # Compute the sum of non-diagonal terms (repeated in each iteration)
            sigma = sum(A[i, j] * x[j] for j in range(n) if j != i)

            # Use tabulated diagonal inverse to update the value of x[i]
            x[i] = (1 - omega) * x[i] + omega * D_inv[i] * (b[i] - sigma)

        # Check for convergence based on the infinity norm of the difference
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            print(f"Converged after {k+1} iterations.")
            return x

    print("Maximum iterations reached without convergence.")
    return x

# Example system Ax = b
A = np.array([[4, -1, 0, 0],
              [-1, 4, -1, 0],
              [0, -1, 4, -1],
              [0, 0, -1, 3]], dtype=float)

b = np.array([15, 10, 10, 10], dtype=float)

# Relaxation factor (omega)
omega = 1.25  # Can be tuned for better performance

# Solve the system using SOT
solution = sot(A, b, omega)

print("Solution:", solution)

How It Works:
● Matrix A and vector b represent the system of equations.
● The initial guess for the solution vector x is the zero vector.
● Tabulation (caching) is applied by precomputing the inverse of the diagonal elements of A and storing them in D_inv. This avoids recalculating these values in every iteration.
● Successive updates are made for each variable x_i, using both the previous iteration's values and the precomputed diagonal inverses.

Example Output:

When run, the method converges well within the iteration limit to the exact solution of this system, [5, 5, 5, 5]; the printed iteration count depends on ω and the tolerance.
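As a sanity check (added here, not part of the original example), the result can be verified against NumPy's direct solver, which confirms that the exact solution is [5, 5, 5, 5]:

```python
import numpy as np

A = np.array([[4, -1, 0, 0],
              [-1, 4, -1, 0],
              [0, -1, 4, -1],
              [0, 0, -1, 3]], dtype=float)
b = np.array([15, 10, 10, 10], dtype=float)

# Direct reference solution
x_direct = np.linalg.solve(A, b)
print(x_direct)

# Confirm it satisfies Ax = b
assert np.allclose(A @ x_direct, b)
```

Comparing the iterative result against a direct solve like this is a useful habit whenever the system is small enough.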

Key Elements:
1. Tabulation: Precomputing or caching the inverse diagonal elements to avoid redundant operations.
2. Successive Iteration: As in the SOR or Gauss-Seidel methods, values are iteratively updated.
3. Relaxation Parameter: ω, the relaxation factor, can be tuned for faster convergence.

How to Use This:

1. Define the system of equations using a matrix A and a vector b.
2. Call the sot() function with those inputs to solve the system.
3. Tune the relaxation factor ω and the tolerance as needed for performance and accuracy.
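To illustrate step 3, here is a small sweep over candidate values of ω (a sketch; this variant of sot() returns the iteration count instead of printing it, so the snippet runs on its own):

```python
import numpy as np

# Variant of the sot() function that returns the iteration count
# instead of printing it (hypothetical helper for this comparison).
def sot(A, b, omega=1.25, tol=1e-5, max_iterations=10000):
    n = len(b)
    x = np.zeros(n)
    D_inv = 1 / np.diag(A)  # tabulated diagonal inverses
    for k in range(max_iterations):
        x_old = np.copy(x)
        for i in range(n):
            sigma = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - omega) * x[i] + omega * D_inv[i] * (b[i] - sigma)
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x, k + 1
    return x, max_iterations

A = np.array([[4, -1, 0, 0],
              [-1, 4, -1, 0],
              [0, -1, 4, -1],
              [0, 0, -1, 3]], dtype=float)
b = np.array([15, 10, 10, 10], dtype=float)

# Compare how the relaxation factor affects the iteration count
for omega in (1.0, 1.1, 1.25, 1.5):
    x, iters = sot(A, b, omega=omega)
    print(f"omega={omega}: converged in {iters} iterations")
```

Note that ω = 1.0 reduces the update to plain Gauss-Seidel, so the sweep also shows how much the over-relaxation helps on this system.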

Let’s consider an electrical circuit problem involving resistors, where we want to determine the currents through each branch of the circuit using Kirchhoff's Laws, and solve it using the Successive Over-Tabulation (SOT) method.

Problem: Solving Currents in a Simple Resistor Network

We have a circuit with three resistors and two loops, represented by the following system of equations using Kirchhoff's Voltage Law (KVL):

1. Loop 1 (with resistors R1 and R2):
10 V − I1·R1 − (I1 − I2)·R2 = 0
2. Loop 2 (with resistors R2 and R3):
(I1 − I2)·R2 − I2·R3 = 0

Where:

● I1 is the current through R1,
● I2 is the current through R3.

Let’s assign values to the resistors:

● R1 = 4 Ω,
● R2 = 2 Ω,
● R3 = 6 Ω,
● Voltage source V = 10 V.

System of Equations
We can write the system in matrix form Ax = b:

1. First equation:
10 − 4·I1 − 2·(I1 − I2) = 0
Simplifying:
6·I1 − 2·I2 = 10
2. Second equation:
2·(I1 − I2) − 6·I2 = 0
Simplifying:
2·I1 − 8·I2 = 0

Thus, the system can be represented as:

[ 6  −2 ] [I1]   [10]
[ 2  −8 ] [I2] = [ 0]
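Before applying the iterative method, the exact currents can be obtained with a direct solve (a cross-check added here for reference): I1 = 20/11 ≈ 1.818 A and I2 = 5/11 ≈ 0.455 A.

```python
import numpy as np

# Coefficient matrix and right-hand side of the loop equations
A = np.array([[6, -2],
              [2, -8]], dtype=float)
b = np.array([10, 0], dtype=float)

currents = np.linalg.solve(A, b)
print("I1 =", currents[0], "A")  # 20/11 ≈ 1.8182 A
print("I2 =", currents[1], "A")  # 5/11  ≈ 0.4545 A
```

Knowing the exact answer in advance makes it easy to confirm that the iterative solver below converges to the right values.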

Solve the System Using Successive Over-Tabulation (SOT)

Now, we will use the SOT method to solve for the currents I1 and I2.

python

import numpy as np

# Function to implement the SOT method
def sot(A, b, omega=1.25, tol=1e-5, max_iterations=10000):
    n = len(b)
    x = np.zeros(n)  # Initial guess (zero vector)

    # Tabulate (cache) inverse of diagonal elements of A
    D_inv = 1 / np.diag(A)

    for k in range(max_iterations):
        x_old = np.copy(x)  # Store old values of x for comparison

        for i in range(n):
            # Compute the sum of non-diagonal terms (repeated in each iteration)
            sigma = sum(A[i, j] * x[j] for j in range(n) if j != i)

            # Use tabulated diagonal inverse to update the value of x[i]
            x[i] = (1 - omega) * x[i] + omega * D_inv[i] * (b[i] - sigma)

        # Check for convergence based on the infinity norm of the difference
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            print(f"Converged after {k+1} iterations.")
            return x

    print("Maximum iterations reached without convergence.")
    return x

# Coefficient matrix A and vector b representing the system of equations
A = np.array([[6, -2],
              [2, -8]], dtype=float)

b = np.array([10, 0], dtype=float)

# Relaxation factor (omega)
omega = 1.25  # Can be tuned for better performance

# Solve the system using SOT
solution = sot(A, b, omega)

# Print the currents I1 and I2
print("Solution (Currents):")
print("I1:", solution[0], "A")
print("I2:", solution[1], "A")

Explanation:
1. Step 1: Defining the system: We derived the system of linear equations from Kirchhoff's Laws, giving the coefficient matrix A and vector b.
2. Step 2: Tabulation (Caching): The diagonal elements of matrix A are inverted once and stored, to speed up the calculations.
3. Step 3: Successive Iterations: We apply successive iterations to update the values of I1 and I2, using the cached values for efficiency.

Expected Output:

The exact solution of this system is I1 = 20/11 ≈ 1.818 A and I2 = 5/11 ≈ 0.455 A, so a run prints approximately:

Solution (Currents):

I1: 1.8182 A

I2: 0.4545 A

(The iteration count printed before the solution depends on ω and the tolerance.)

Interpretation:
● The currents through resistors R1 and R3 are I1 ≈ 1.818 A and I2 ≈ 0.455 A, respectively.
● The SOT method converges to the solution after a number of iterations determined by the tolerance and the relaxation factor ω.
