
ELEN3102: Numerical Methods and Analysis

1st Semester, AY 2024 - 2025

Laboratory Report No. 2


Numerical Solutions to Non-Linear
Equations (Iterative Non-Bracketing Method)

Submitted by:

FLORES, Alfred G.
EE3A, BSEE

Submitted to:

Engr. Ranzel V. Dimaculangan, ECT


Faculty-in-Charge
COLEGIO DE MUNTINLUPA
ELEN3102: Numerical Methods and Analysis

Name: FLORES, Alfred G.


Program: BSEE
Date Submitted: 27 September 2024
Name of Faculty: Engr. Ranzel V. Dimaculangan, ECT

LABORATORY REPORT NO. 2

I. Introduction
As discussed in the previous experiment, iterative bracketing methods determine the root within a specified interval between a lower and an upper bound. By repeatedly applying these methods, the estimates of the root become increasingly accurate. These methods are considered convergent because they progressively approach the true root value as the calculations continue.
Iterative non-bracketing methods, also called open methods, use formulas that need only one or two initial guesses that do not have to enclose the root. This can sometimes cause the methods to diverge from the actual root. However, when they do converge, they tend to do so much faster than bracketing methods.
The first open method that will be utilized in this experiment is the Newton-
Raphson method. This method, also known as Newton’s method, is another powerful
technique for finding successively better approximations to the roots (or zeroes) of a given
function. This method is particularly useful in numerical analysis due to its rapid
convergence properties when applied correctly.
The Newton-Raphson method is based on the idea that a continuous and
differentiable function can be approximated by a straight-line tangent to it. Given a
function 𝒇(𝒙) and its derivative 𝒇′(𝒙), the method iteratively refines an initial guess 𝒙𝒏 to
find a root of the function.
Geometrically, the method involves drawing a tangent line to the curve of 𝒇(𝒙) at
the point [𝒙𝒏 , 𝒇(𝒙𝒏 )]. The 𝒙-intercept of this tangent line provides the next approximation
𝒙𝒏+𝟏. This process is repeated until the value converges to a sufficiently accurate root.
The iterative formula for the Newton-Raphson method is derived from the Taylor series
expansion of 𝒇(𝒙) around 𝒙𝒏 . The formula is given by:

x_(n+1) = x_n − f(x_n) / f'(x_n)
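As a quick illustration (a minimal Python sketch using the report's function, not part of the assigned program), one Newton-Raphson update from the report's initial guess x_0 = 2.5 reproduces the first computed row of Table 2.A:

```python
def f(x):
    return x**3 - 5*x**2 + 6*x - 2

def df(x):
    return 3*x**2 - 10*x + 6  # analytic derivative of f

x0 = 2.5
x1 = x0 - f(x0) / df(x0)  # one Newton-Raphson update
print(x1)  # -8.0, matching iteration 1 of Table 2.A
```

Because f'(2.5) = -0.25 is close to zero, this first step overshoots far from the guess.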
The second open method that will be utilized in this experiment is the secant
method. It is mostly like the Newton-Raphson method, but it does not require the
computation of derivatives. Instead, it uses secant lines to approximate the root.
The Secant Method approximates the root of a function 𝒇(𝒙) by using two initial
guesses, 𝒙𝒏−𝟏 and 𝒙𝒏 , and iteratively refining these guesses. The method constructs a
secant line between the points [𝒙𝒏−𝟏, 𝒇(𝒙𝒏−𝟏)] and [𝒙𝒏 , 𝒇(𝒙𝒏 )], and the 𝒙-intercept of this
line is used as the next approximation (𝒙𝒏+𝟏).
The iterative formula is given by:
x_(n+1) = x_n − f(x_n)(x_n − x_(n-1)) / [f(x_n) − f(x_(n-1))]
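A minimal sketch of one secant update (using the report's function and initial guesses; variable names are illustrative) reproduces the first computed row of Table 2.B:

```python
def f(x):
    return x**3 - 5*x**2 + 6*x - 2

x_prev, x_curr = 2.0, 4.0  # the report's initial guesses x_(-1) and x_0
# One secant update: x-intercept of the line through the two latest points
x_next = x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))
print(x_next)  # 2.5, matching iteration 1 of Table 2.B
```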
The algorithm typically concludes once specific conditions are met. Firstly, it will
stop after a predefined number of iterations, as specified in the problem, with the final
approximate root (𝒙𝑹 ) serving as the solution. Alternatively, the algorithm will stop when
the approximate percent relative error (𝜺𝒂 ) falls below a predetermined threshold (𝜺𝒔 ).
The formula for (𝜺𝒂 ) is provided as follows:

ε_a = |(x_R^new − x_R^old) / x_R^new| × 100%
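This stopping-criterion formula is simple to evaluate; a short sketch (the helper name `approx_error` is illustrative) checks it against the secant iterates x_0 = 4 and x_1 = 2.5 from Table 2.B:

```python
def approx_error(x_new, x_old):
    # Approximate percent relative error between successive estimates
    return abs((x_new - x_old) / x_new) * 100

print(approx_error(2.5, 4.0))  # 60.0, as reported for iteration 1 of Table 2.B
```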

By implementing and analyzing these methods on the problem in this experiment, we aim to understand their convergence properties, limitations, and applications in various engineering and scientific fields.

II. Problem
In this experiment, we are to analyze the convergence behavior of the two iterative non-bracketing (open) methods for finding the root of the following non-linear equation, implementing the algorithm of both numerical methods in the programming language Python using various programming techniques.
The given function to be evaluated is:
f(x) = x³ − 5x² + 6x − 2
Here are the conditions of the problem as provided:
1.) The initial guesses shall be:
a. Newton-Raphson Method: x_0 = 2.5
b. Secant Method: x_(-1) = 2 ; x_0 = 4
2.) Generate the iteration table, showing the number of iterations, the approximate root per iteration, the corresponding function value, and the approximate percent relative error (ε_a).
Stop the iteration table when the approximate percent relative error falls below the stopping criterion (ε_s) of 0.1%.
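For reference, the given cubic factors as (x − 1)(x² − 4x + 2), so its true roots are x = 1 and x = 2 ± √2; a quick check (a sketch, separate from the assigned program) confirms this:

```python
import math

def f(x):
    return x**3 - 5*x**2 + 6*x - 2

# True roots from the factorization (x - 1)(x^2 - 4x + 2)
roots = [1.0, 2 - math.sqrt(2), 2 + math.sqrt(2)]
for r in roots:
    print(f"f({r:.6f}) = {f(r):.2e}")  # each value is ~0 up to rounding
```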

III. Presentation of Results and Solutions


Figure 2.A shows the graph of the given function, its true roots obtained through analytic solutions, and the initial guesses provided for each method.

Figure 2.A. Graph of the Function

The first open method used in this experiment is the Newton-Raphson Method. Table 2.A shows the approximate root generated per iteration, the value of the function at that root, the value of the derivative at that root, and the approximate percent relative error (ε_a).
Table 2.A. Approximate Root and Relative Error per Iteration

Iteration (n)    x_n           f(x_n)          f'(x_n)        ε_a (%)
0                2.500000      -2.625000       -0.250000      N/A
1                -8.000000     -882.000000     278.000000     131.250000
2                -4.827338     -259.972390     124.182962     65.722802
3                -2.733875     -76.206820      55.760981      76.574908
4                -1.367206     -22.105155      25.279820      99.960730
5                -0.492787     -6.290587       11.656390      177.443528
6                0.046881      -1.729598       5.537780       1151.136084
7                0.359208      -0.443554       2.795008       86.948692
8                0.517903      -0.094785      1.625637        30.641833
9                0.576210      -0.011518      1.233955        10.118951
10               0.585544      -0.000284      1.173145        1.594097
11               0.585786      0.000000       1.171574        0.041357

According to the table above, the approximate root generated by the Newton-Raphson method is x_11 = 0.585786 with an approximate percent relative error (ε_a) of 0.041357%. The algorithm stopped after the 11th iteration, once the error fell below the stopping criterion (ε_s) of 0.1%.

The second open method used in this experiment is the Secant method. Table 2.B shows the approximate root generated per iteration, the value of the function at that root, and the approximate percent relative error (ε_a).
Table 2.B. Approximate Root and Relative Error per Iteration

Iteration (n)    x_n           f(x_n)         ε_a (%)
-1               2.000000      -2.000000      N/A
0                4.000000      6.000000       50.000000
1                2.500000      -2.625000      60.000000
2                2.956522      -2.122956      15.441176
3                4.886979      24.622641      39.502057
4                3.109753      -1.621234      57.150042
5                3.219543      -1.137993      3.410092
6                3.478088      0.457813       7.433543
7                3.403915      -0.069766      2.179041
8                3.413724      -0.003343      0.287323
9                3.414217      0.000027       0.014460

According to the table above, the approximate root generated by the Secant method is x_9 = 3.414217 with an approximate percent relative error (ε_a) of 0.014460%. The algorithm stopped after the 9th iteration, once the error fell below the stopping criterion (ε_s) of 0.1%.

Figure 2.B shows the convergence behavior of both methods, illustrating how each method's iterates come closer to the final approximate root.

Figure 2.B. Convergence Behavior of the Iterative Non-Bracketing Methods



Figure 2.C shows the approximate percent relative error (𝜺𝒂 ) per iteration for each
open method, and how it fluctuates as the algorithm runs.

Figure 2.C. Approximate Percent Relative Error per Iteration

IV. General Observation and Conclusion


The Newton-Raphson method is esteemed for its rapid convergence, particularly
when the initial guess is proximate to the actual root. This method exhibits quadratic
convergence, implying that the number of correct digits approximately doubles with each
iteration.

When evaluated with an initial guess of x_0 = 2.5, the method demonstrates its efficiency and convergence behavior. Initially, the approximate root generated is a negative number because the derivative at the initial guess, f'(2.5) = -0.25, is small, so the first update overshoots far from the guess. As iterations progress, the method refines the estimate, bringing it closer to the actual root. By the time the approximate error falls below 0.1%, the method has effectively converged to a highly accurate root.

Throughout the iterations, the behavior of the approximate errors can also be observed. The error is undefined for the initial guess and large for the first iteration (about 131%), indicating a large initial discrepancy. As the method progresses, the approximate error decreases significantly, showcasing the method's fast convergence. However, an interesting phenomenon occurs at the 6th iteration, where the approximate error jumps to over 1000%. This spike is due to a change in the sign of the approximate root, transitioning from negative to positive; since the new estimate is close to zero, the relative error is temporarily inflated. Once the method crosses this point, the error decreases rapidly, approaching the desired threshold of 0.1%.

This behavior highlights both the strengths and potential pitfalls of the Newton-
Raphson method. While it is highly efficient and converges quickly under favorable
conditions, it can exhibit large errors if the function's behavior causes significant changes
in the root's estimate. Despite this, the method remains a powerful tool for solving
nonlinear equations, provided the initial guess is chosen wisely and the function is well-
behaved.

Meanwhile, the Secant method demonstrates an iterative approach to finding the roots of a function without requiring the computation of the derivative. The method typically converges faster than the bisection method but slower than the Newton-Raphson method, exhibiting superlinear convergence, as shown in Figure 2.B. Its behavior here also depends on the initial guesses set by the problem: as shown in Figure 2.A, the initial guesses for the Secant method, x_(-1) = 2 and x_0 = 4, have a true root between them, so the method tends to approach the root that lies graphically close to them.

As the iterations progress, the method continues to improve the root estimate. The
approximate errors decrease rapidly, demonstrating the method’s efficiency. By the time
the approximate error falls below the desired stopping criterion (e.g., 0.1%), the method
has effectively converged to a highly accurate root.

In conclusion, both the Newton-Raphson and secant methods are powerful


iterative techniques for finding the roots of nonlinear equations, each with its own
strengths and limitations.

The Newton-Raphson method is highly efficient, exhibiting quadratic convergence,


which allows it to rapidly approach the root when the initial guess is close. However, it
requires the computation of the derivative and is sensitive to the choice of the initial guess.
Poor initial guesses or points where the derivative is zero can lead to convergence issues
or division by zero errors.
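The derivative-zero failure mode can be guarded against explicitly. A minimal sketch follows (the `nr_step` helper and its tolerance are illustrative additions, not part of the report's assigned code); for this function the stationary point x = (10 − √28)/6 triggers the guard:

```python
import math

def f(x):
    return x**3 - 5*x**2 + 6*x - 2

def df(x):
    return 3*x**2 - 10*x + 6

def nr_step(x, tol=1e-12):
    """One guarded Newton-Raphson update: refuse a (near-)zero derivative."""
    d = df(x)
    if abs(d) < tol:
        raise ZeroDivisionError(f"f'({x}) is (near) zero; choose another guess")
    return x - f(x) / d

# x = (10 - sqrt(28)) / 6 is a stationary point of f, so the update is undefined there.
bad_guess = (10 - math.sqrt(28)) / 6
try:
    nr_step(bad_guess)
except ZeroDivisionError as exc:
    print("Guard triggered:", exc)
```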

On the other hand, the secant method does not require the derivative, making it
simpler to implement for functions where the derivative is difficult to compute. It typically
converges faster than the bisection method but slower than the Newton-Raphson method.

Appendices:
import math
import matplotlib.pyplot as plt
import numpy as np

# Function f(x) and its derivative f'(x)
def f(x):
    return x**3 - 5*x**2 + 6*x - 2

def df(x):
    return 3*x**2 - 10*x + 6

# Newton-Raphson Method with iteration table and data collection
def newton_raphson(x0, e_s, iter_max):
    print("\n*** Newton-Raphson Method ***")
    d_header = "f'(x_n)"
    print(f"{'Iteration':<10}{'x_n':<15}{'f(x_n)':<15}{d_header:<15}{'Approx. Error':<20}")
    print("-" * 70)
    x_values = [x0]
    error_values = []
    # Iteration 0: the initial guess has no approximate error yet
    print(f"{0:<10}{x0:<15.6f}{f(x0):<15.6f}{df(x0):<15.6f}{'':<20}")
    for iteration in range(1, iter_max + 1):
        if df(x0) == 0:
            print("Divide by zero error!")
            return x0, x_values, error_values
        x1 = x0 - f(x0) / df(x0)
        ea = abs((x1 - x0) / x1) * 100
        x_values.append(x1)
        error_values.append(ea)
        print(f"{iteration:<10}{x1:<15.6f}{f(x1):<15.6f}{df(x1):<15.6f}{ea:<20.6f}")
        x0 = x1
        if ea < e_s:
            print(f"\nApproximate Root is: {x1:.6f}")
            return x1, x_values, error_values
    print("\nNot Convergent.")
    return x0, x_values, error_values

# Secant Method with iteration table and data collection
def secant_method(x0, x1, e_s, iter_max):
    print("\n*** Secant Method ***")
    print(f"{'Iteration':<10}{'x_n':<15}{'f(x_n)':<15}{'Approx. Error':<20}")
    print("-" * 55)
    x_values = [x0, x1]
    error_values = [None]  # no error for the first initial guess
    # Iterations -1 and 0: the two initial guesses
    print(f"{-1:<10}{x0:<15.6f}{f(x0):<15.6f}{'':<20}")
    ea = abs((x1 - x0) / x1) * 100
    error_values.append(ea)
    print(f"{0:<10}{x1:<15.6f}{f(x1):<15.6f}{ea:<20.6f}")
    x2 = x1
    for iteration in range(1, iter_max + 1):
        if f(x1) == f(x0):
            print("Divide by zero error!")
            return x2, x_values, error_values
        x2 = x0 - (x1 - x0) * f(x0) / (f(x1) - f(x0))
        ea = abs((x2 - x1) / x2) * 100
        print(f"{iteration:<10}{x2:<15.6f}{f(x2):<15.6f}{ea:<20.6f}")
        x_values.append(x2)
        error_values.append(ea)
        if ea < e_s:
            print(f"\nApproximate Root is: {x2:.6f}")
            return x2, x_values, error_values
        x0, x1 = x1, x2
    print("\nNot Convergent.")
    return x2, x_values, error_values

x0_nr = float(input("Enter initial guess for Newton-Raphson Method: "))
x0_sec = float(input("Enter first initial guess for Secant Method: "))
x1_sec = float(input("Enter second initial guess for Secant Method: "))
e_s = float(input("Enter tolerable error (e_s): "))
iter_max = int(input("Enter maximum iterations (iter_max): "))

nr_root, nr_data, nr_errors = newton_raphson(x0_nr, e_s, iter_max)
sec_root, sec_data, sec_errors = secant_method(x0_sec, x1_sec, e_s, iter_max)

# True roots (from the factorization (x - 1)(x^2 - 4x + 2))
x_t1 = round(2 - math.sqrt(2), 6)
y_t1 = f(x_t1)
x_t2 = 1
y_t2 = f(x_t2)
x_t3 = round(2 + math.sqrt(2), 6)
y_t3 = f(x_t3)

# Plotting the graph of the function, showing the true roots and
# the approximate roots found by each method
x_values = np.linspace(-0.5, 4.5, 400)
y_values = [f(x) for x in x_values]

plt.figure(figsize=(10, 5))
plt.plot(x_values, y_values, label='f(x) = x^3 - 5x^2 + 6x - 2')
plt.axhline(0, color='black', linewidth=0.5)

# Plotting the true values of the roots
plt.plot(x_t1, y_t1, marker="s", label=f'True Value at x = {x_t1}')
plt.plot(x_t2, y_t2, marker="s", label=f'True Value at x = {x_t2}')
plt.plot(x_t3, y_t3, marker="s", label=f'True Value at x = {x_t3}')

plt.axvline(nr_root, color='black', linestyle='--',
            label=f'Approximate Root (Newton-Raphson) = {nr_root:.6f}')
plt.axvline(sec_root, color='blue', linestyle='--',
            label=f'Approximate Root (Secant) = {sec_root:.6f}')

plt.xlabel('x')
plt.ylabel('f(x)')
plt.title('Graph of the Function with True and Approximate Roots')
plt.legend()
plt.grid(True)
plt.show()

# Plotting the approximate roots and errors per iteration using subplots
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 8))

# Approximate root per iteration
ax1.plot(range(len(nr_data)), nr_data, label='Newton-Raphson Method', marker='x')
ax1.plot(range(len(sec_data)), sec_data, label='Secant Method', marker='s')
ax1.set_xlabel('Iteration Number')
ax1.set_ylabel('Approximate Root')
ax1.set_title('Iteration Number vs. Approximate Root')
ax1.legend()
ax1.grid(True)

# Approximate percent relative error per iteration
ax2.plot(range(len(nr_errors)), nr_errors, label='Newton-Raphson Method', marker='x')
ax2.plot(range(len(sec_errors)), sec_errors, label='Secant Method', marker='s')
ax2.set_xlabel('Iteration Number')
ax2.set_ylabel('Approximate Error')
ax2.set_title('Iteration Number vs. Approximate Error')
ax2.legend()
ax2.grid(True)
plt.tight_layout()
plt.show()
