
# Assignment 5

## Question 1
```python
import numpy as np

A = np.array([[8, 6, 2, 3], [3, 8, 4, 3], [5, 2, 6, 8], [9, 2, 3, 4]], dtype=float)
B = np.array([20, 24, 12, 14], dtype=float)

# Gaussian elimination: zero out the entries below each pivot
for i in range(3):
    for j in range(3, i, -1):
        factor = A[j, i] / A[i, i]
        A[j] = A[j] - A[i] * factor
        B[j] = B[j] - B[i] * factor

# Backward substitution
X = np.zeros(4)
for i in range(3, -1, -1):
    X[i] = B[i]
    for j in range(i + 1, 4):
        X[i] -= A[i, j] * X[j]
    X[i] /= A[i, i]

print("Solution X:", X)
```

This code uses Gaussian elimination followed by back-substitution to solve a system of linear equations \( AX = B \).

1. **Gaussian Elimination:** It iterates through each column, eliminating elements below the main diagonal to transform matrix \( A \) into upper triangular form. For each row \( j \) below row \( i \), it calculates a factor, then updates `A[j]` and `B[j]` to zero out the entries below the diagonal.
2. **Back-Substitution:** Once \( A \) is upper triangular, it calculates the solution vector \( X \) by starting from the last row and moving upward. For each \( X[i] \), it subtracts the contributions of the already-solved variables and divides by the diagonal element to isolate \( X[i] \).

Finally, `X` is printed as the solution to \( AX = B \).
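
Since the elimination loops overwrite `A` and `B` in place, a quick sanity check needs fresh copies of the original system. A minimal sketch (the names `A0`, `B0`, and `X_ref` are illustrative additions, not part of the assignment code):

```python
# Fresh copies of the original system (the loops above mutated A and B)
A0 = np.array([[8, 6, 2, 3], [3, 8, 4, 3], [5, 2, 6, 8], [9, 2, 3, 4]], dtype=float)
B0 = np.array([20, 24, 12, 14], dtype=float)

X_ref = np.linalg.solve(A0, B0)   # NumPy's direct solver as a reference
print(np.allclose(X, X_ref))      # expected: True
print(np.allclose(A0 @ X, B0))    # residual check against the original system
```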
Output:

```
Solution X: [ 0.92561983 1.98347107 2.33057851 -1.32231405]
```

## Question 2

```python
import numpy as np

A = np.array([[8, 6, 2, 3], [3, 8, 4, 3], [5, 2, 6, 8], [9, 2, 3, 4]], dtype=float)
B = np.array([20, 24, 12, 14], dtype=float)

def lu_decomposition(A):
    n = len(A)
    L = np.zeros((n, n))
    U = np.zeros((n, n))

    for i in range(n):
        # Set diagonal elements of L to 1
        L[i][i] = 1

        # Compute U
        for j in range(i, n):
            s1 = sum(U[k][j] * L[i][k] for k in range(i))
            U[i][j] = A[i][j] - s1

        # Compute L
        for j in range(i + 1, n):
            s2 = sum(U[k][i] * L[j][k] for k in range(i))
            if U[i][i] == 0:
                raise ValueError("Zero pivot encountered, matrix is singular!")
            L[j][i] = (A[j][i] - s2) / U[i][i]

    return L, U

def solve_linear_system(A, b):
    L, U = lu_decomposition(A)

    # Solve Ly = b using forward substitution
    y = np.zeros_like(b)
    for i in range(len(y)):
        temp_sum = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - temp_sum) / L[i][i]

    # Solve Ux = y using back substitution
    x = np.zeros_like(y)
    for i in range(len(x) - 1, -1, -1):
        temp_sum = sum(U[i][j] * x[j] for j in range(i + 1, len(x)))
        x[i] = (y[i] - temp_sum) / U[i][i]

    return x

x = solve_linear_system(A, B)
print("Solution X:", x)
```

This code uses Doolittle's method to perform LU decomposition on matrix \( A \), factoring it into a lower triangular matrix \( L \) and an upper triangular matrix \( U \), where \( L \) has 1s along its diagonal. The method decomposes \( A \) such that

\[ A = LU \]

where:

- \( L \) is a lower triangular matrix with 1s on its diagonal.
- \( U \) is an upper triangular matrix.

In this method, we calculate the elements of \( L \) and \( U \) in such a way that the diagonal elements of \( L \) are all 1, which simplifies the calculations.
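
As a quick check of the factorization, the product of the factors should reproduce \( A \). A small sketch reusing `lu_decomposition` and `A` from the listing above:

```python
L, U = lu_decomposition(A)
print(np.allclose(L @ U, A))         # expected: True
print(np.allclose(np.diag(L), 1.0))  # Doolittle: L has a unit diagonal
```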
### 1. LU Decomposition Function

```python
def lu_decomposition(A):
    n = len(A)
    L = np.zeros((n, n))
    U = np.zeros((n, n))

    for i in range(n):
        # Set diagonal elements of L to 1
        L[i][i] = 1
```

- **Initialization:** We create two \( n \times n \) matrices, `L` and `U`, filled with zeros.
- **Set Diagonal of \( L \) to 1:** In Doolittle's method, \( L \) has 1s on its diagonal by definition.
### 2. Filling in the Elements of \( U \) (Upper Triangular Matrix)

```python
        for j in range(i, n):
            s1 = sum(U[k][j] * L[i][k] for k in range(i))
            U[i][j] = A[i][j] - s1
```

- **Calculating \( U[i][j] \):** For each row \( i \), we calculate the entries of \( U \) from column \( i \) to \( n \).
- **Sum Calculation \( s1 \):** We compute the sum \( s1 \) of products from previous rows that contribute to \( U[i][j] \), defined as:
  \[ s1 = \sum_{k=0}^{i-1} U[k][j] \cdot L[i][k] \]
- **Subtracting \( s1 \) from \( A[i][j] \):** The final value of \( U[i][j] \) is obtained by subtracting this sum from \( A[i][j] \):
  \[ U[i][j] = A[i][j] - s1 \]

This part of the code handles all entries of \( U \) in the current row.
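
For instance, on the first pass (\( i = 0 \)) the sum \( s1 \) is empty, so the first row of \( U \) is simply the first row of \( A \):

\[ U[0][j] = A[0][j] \quad \Rightarrow \quad U[0] = [8,\ 6,\ 2,\ 3] \]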
### 3. Filling in the Elements of \( L \) (Lower Triangular Matrix)

```python
        for j in range(i + 1, n):
            s2 = sum(U[k][i] * L[j][k] for k in range(i))
            if U[i][i] == 0:
                raise ValueError("Zero pivot encountered, matrix is singular!")
            L[j][i] = (A[j][i] - s2) / U[i][i]
```

- **Calculating \( L[j][i] \):** For each column \( i \), we calculate the entries of \( L \) below the diagonal (from row \( i+1 \) to \( n \)).
- **Sum Calculation \( s2 \):** We compute the sum \( s2 \) of products from previous columns that contribute to \( L[j][i] \):
  \[ s2 = \sum_{k=0}^{i-1} U[k][i] \cdot L[j][k] \]
- **Subtracting \( s2 \) and Dividing by \( U[i][i] \):** The final value of \( L[j][i] \) is obtained by subtracting this sum from \( A[j][i] \) and dividing by \( U[i][i] \):
  \[ L[j][i] = \frac{A[j][i] - s2}{U[i][i]} \]

This part of the code fills in all entries of \( L \) in the current column below the diagonal.
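
Again taking \( i = 0 \), the sum \( s2 \) is empty, so each entry in the first column of \( L \) is the corresponding entry of \( A \) divided by the pivot \( U[0][0] = 8 \), for example:

\[ L[1][0] = \frac{A[1][0]}{U[0][0]} = \frac{3}{8} = 0.375 \]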
### 4. Solving the System Using Forward and Back Substitution

After the decomposition, we solve the system \( AX = B \) in two steps using forward and back substitution.

**Forward Substitution (\( Ly = B \))**

```python
    y = np.zeros_like(b)
    for i in range(len(y)):
        temp_sum = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - temp_sum) / L[i][i]
```

**Solving \( Ly = B \):** Since \( L \) is a lower triangular matrix, we can solve for \( y \) by iterating from the top row downwards, using previously calculated values of \( y \) to find the next value.

**Back Substitution (\( Ux = y \))**

```python
    x = np.zeros_like(y)
    for i in range(len(x) - 1, -1, -1):
        temp_sum = sum(U[i][j] * x[j] for j in range(i + 1, len(x)))
        x[i] = (y[i] - temp_sum) / U[i][i]
```

**Solving \( Ux = y \):** Since \( U \) is an upper triangular matrix, we can solve for \( x \) by iterating from the bottom row upwards.
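
As a sanity check, the LU-based solution can be compared against NumPy's direct solver; a minimal sketch reusing `A` and `B` from the listing (the names `x_lu` and `x_ref` are illustrative):

```python
x_lu = solve_linear_system(A, B)   # LU decomposition + two substitutions
x_ref = np.linalg.solve(A, B)      # direct reference solution
print(np.allclose(x_lu, x_ref))    # expected: True
```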
Output:

```
Solution X: [ 0.92561983 1.98347107 2.33057851 -1.32231405]
```

## Question 3
```python
import numpy as np
import matplotlib.pyplot as plt
import random

X = np.array([random.randint(0, 100) for _ in range(20)])  # Random X values
Y = np.array([2 * x + 1 + random.randint(-10, 10) for x in X])  # Y values (approximately linear)

learning_rate = 0.0001  # Step size (smaller step size, more accurate but slower convergence)
iterations = 1000  # Number of iterations (more iterations, more accurate, but slower)

def linear_reg(X, Y, learning_rate, iterations):
    m = 0.0
    b = 0.0
    n = len(X)

    # Minimizing m, b using mean squared error as the loss function
    for _ in range(iterations):
        # Make predictions
        Y_pred = m * X + b

        # Calculate the gradients
        dm = (-2 / n) * np.sum(X * (Y - Y_pred))  # Gradient with respect to m
        db = (-2 / n) * np.sum(Y - Y_pred)        # Gradient with respect to b

        # Update m and b
        m -= learning_rate * dm
        b -= learning_rate * db

    return m, b

m, b = linear_reg(X, Y, learning_rate, iterations)

print(f"X: {X}")
print(f"Y: {Y}")
print(f"Equation of the fitted line: y = {m:.2f}x + {b:.2f}")

plt.figure(figsize=(8, 6))
plt.scatter(X, Y, color="blue", label="Data Points")
plt.plot(X, m * X + b, color="red", label=f"Fitted Line: y = {m:.2f}x + {b:.2f}")

plt.xlabel("X")
plt.ylabel("Y")
plt.title("Linear Regression with Gradient Descent")
plt.legend()
plt.show()
```

- **Generate Random Data:** `X` and `Y` are created as sample data for testing. `X` contains random integers, and `Y` is generated as an approximately linear relationship with added noise.
- **Set Learning Rate and Number of Iterations:**
    - `learning_rate`: Controls the step size for each iteration of gradient descent. A smaller value results in slower but potentially more accurate convergence.
    - `iterations`: Determines how many times we update `m` (slope) and `b` (intercept) to minimize the error.
- **Define the `linear_reg` Function:**
    - **Initialize `m` and `b`:** Start with initial values for the slope (`m`) and intercept (`b`), both set to zero.
    - **Gradient Descent Loop:**
        - **Predictions:** Calculate predictions `Y_pred` using the current values of `m` and `b`.
        - **Compute Gradients:** `dm` and `db` are the gradients of the mean squared error with respect to `m` and `b`:
          \[ dm = -\frac{2}{n} \sum_i x_i (y_i - \hat{y}_i), \qquad db = -\frac{2}{n} \sum_i (y_i - \hat{y}_i) \]
          These gradients tell us the direction and magnitude by which to adjust `m` and `b` to reduce the error.
        - **Update `m` and `b`:** Adjust `m` and `b` by subtracting the product of the gradient and the learning rate. This step refines the line to better fit the data.
    - **Return:** After the loop, the function returns the optimized values of `m` and `b`.
- **Run Gradient Descent and Plot the Result:**
    - After calling `linear_reg`, we get the best-fit values for `m` and `b` (a cross-check against the closed-form least-squares fit is sketched below).
    - **Plot the Data:** We plot the original data points (`X`, `Y`) as blue dots and the best-fit line (`m * X + b`) as a red line, providing a visual representation of the linear regression result.
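
Simple linear regression also has a closed-form least-squares solution, so the gradient-descent result can be cross-checked. A minimal sketch assuming the `X`, `Y`, `m`, and `b` from the code above (`np.polyfit` with degree 1 returns the slope and intercept of the least-squares line):

```python
# Closed-form least-squares fit for comparison
m_ls, b_ls = np.polyfit(X, Y, 1)
print(f"Gradient descent: y = {m:.2f}x + {b:.2f}")
print(f"Least squares:    y = {m_ls:.2f}x + {b_ls:.2f}")
# The two should roughly agree; the gap shrinks with more iterations
```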
Output:

```
X: [64 23 70 54 80 13 32 58 21 60 65 78 43 63 62 21 73 77 88 42]
Y: [124 52 132 119 166 20 71 120 44 125 133 164 87 121 118 40 138 161 169 88]
```

(The generated figure, "Linear Regression with Gradient Descent", shows the blue data points with the red fitted line.)