ELEN3102 Laboratory Report 2
Submitted by:
FLORES, Alfred G.
EE3A, BSEE
Submitted to:
I. Introduction
As discussed in the previous experiment, iterative bracketing methods locate a
root within a specified interval between a lower and an upper bound. Repeated
application of these methods makes the root estimate increasingly accurate. These
methods are considered convergent because they progressively approach the true
root value as the calculations continue.
Iterative non-bracketing methods, also called open methods, unlike bracketing
methods, use formulas that require only one or two initial guesses that do not have to
enclose the root. This can sometimes cause the methods to diverge from the actual root.
However, when they do converge, they tend to do so much faster than bracketing
methods.
The first open method that will be utilized in this experiment is the Newton-
Raphson method. This method, also known as Newton’s method, is another powerful
technique for finding successively better approximations to the roots (or zeroes) of a given
function. This method is particularly useful in numerical analysis due to its rapid
convergence properties when applied correctly.
The Newton-Raphson method is based on the idea that a continuous and
differentiable function can be approximated by a straight-line tangent to it. Given a
function 𝒇(𝒙) and its derivative 𝒇′(𝒙), the method iteratively refines an initial guess 𝒙𝒏 to
find a root of the function.
Geometrically, the method involves drawing a tangent line to the curve of 𝒇(𝒙) at
the point [𝒙𝒏 , 𝒇(𝒙𝒏 )]. The 𝒙-intercept of this tangent line provides the next approximation
𝒙𝒏+𝟏. This process is repeated until the value converges to a sufficiently accurate root.
The iterative formula for the Newton-Raphson method is derived from the Taylor series
expansion of 𝒇(𝒙) around 𝒙𝒏 . The formula is given by:
x_{n+1} = x_n - f(x_n) / f'(x_n)
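The derivation is short: truncating the Taylor series of f about x_n after the linear term gives f(x) ≈ f(x_n) + f'(x_n)(x - x_n); setting this approximation to zero and solving for x yields exactly the update above. A minimal sketch of one such update in Python (illustrative only; the full implementation used in this experiment appears in the Appendices):

def newton_step(f, df, x_n):
    # One Newton-Raphson update: x_{n+1} = x_n - f(x_n) / f'(x_n)
    return x_n - f(x_n) / df(x_n)

# Example: one step toward sqrt(2), a root of x^2 - 2, starting from 1.5
x_next = newton_step(lambda x: x**2 - 2, lambda x: 2*x, 1.5)
print(x_next)  # 1.416667, already close to 1.414214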
The second open method that will be utilized in this experiment is the secant
method. It is similar to the Newton-Raphson method, but it does not require the
computation of derivatives. Instead, it uses secant lines to approximate the root.
The Secant Method approximates the root of a function 𝒇(𝒙) by using two initial
guesses, 𝒙𝒏−𝟏 and 𝒙𝒏 , and iteratively refining these guesses. The method constructs a
secant line between the points [𝒙𝒏−𝟏, 𝒇(𝒙𝒏−𝟏)] and [𝒙𝒏 , 𝒇(𝒙𝒏 )], and the 𝒙-intercept of this
line is used as the next approximation (𝒙𝒏+𝟏).
The iterative formula is given by:
x_{n+1} = x_n - f(x_n)(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))
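Note that this formula is obtained from the Newton-Raphson update by replacing the derivative f'(x_n) with the backward difference quotient (f(x_n) - f(x_{n-1})) / (x_n - x_{n-1}), which is why no derivative evaluation is required.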
The algorithm typically concludes once specific conditions are met. Firstly, it will
stop after a predefined number of iterations, as specified in the problem, with the final
approximate root (𝒙𝑹 ) serving as the solution. Alternatively, the algorithm will stop when
the approximate percent relative error (𝜺𝒂 ) falls below a predetermined threshold (𝜺𝒔 ).
The formula for (𝜺𝒂 ) is provided as follows:
ε_a = |(x_R^new - x_R^old) / x_R^new| × 100%
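For example, if an iteration updates the root estimate from x_R^old = 4.0 to x_R^new = 2.5 (the first secant update reported below), then ε_a = |(2.5 - 4.0) / 2.5| × 100% = 60%.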
II. Problem
In this experiment, we are to analyze the convergence behavior of the two iterative
open methods in finding the root of the following non-linear equation using the
Newton-Raphson and secant methods:

f(x) = x^3 - 5x^2 + 6x - 2
The first open method that is used in this experiment is the Newton-Raphson
method. Table 2.A shows the approximate root generated per iteration, the value of the
function at that root, the value of the derivative at that root, and the approximate
percent relative error (𝜺𝒂).
Table 2.A. Approximate Root and Relative Error per Iteration

Iteration (n)   x_n           f(x_n)          f'(x_n)        ε_a (%)
0               2.500000      -2.625000       -0.250000      None
1               -8.000000     -882.000000     278.000000     131.250000
2               -4.827338     -259.972390     124.182962     65.722802
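The first update in Table 2.A can be verified by hand: from x_0 = 2.5, f(2.5) = -2.625 and f'(2.5) = -0.25, so x_1 = 2.5 - (-2.625)/(-0.25) = 2.5 - 10.5 = -8, with ε_a = |(-8 - 2.5)/(-8)| × 100% = 131.25%, exactly as tabulated. The large jump away from the initial guess occurs because the derivative at x_0 is nearly zero, which magnifies the Newton step.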
According to the table above, the approximate root generated by the Newton-
Raphson method is 𝒙𝟏𝟏 = 0.585786 with an approximate percent relative error (𝜺𝒂) of
0.041357%. The algorithm stopped after the 11th iteration upon reaching the stopping
criterion (𝜺𝑺) of 0.1%.
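As a check, the cubic used in this experiment factors as f(x) = (x - 1)(x^2 - 4x + 2), whose exact roots are x = 1 and x = 2 ± √2. The Newton-Raphson estimate matches 2 - √2 ≈ 0.585786 to all six reported decimal places.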
The second open method that was used in this experiment was the secant
method. Table 2.B shows the approximate root generated per iteration, the value of the
function at that root, and the approximate percent relative error (𝜺𝒂); no derivative
column is needed because the secant method does not evaluate 𝒇′(𝒙).
Table 2.B. Approximate Root and Relative Error per Iteration

Iteration (n)   x_n           f(x_n)         ε_a (%)
-1              2.000000      -2.000000      None
0               4.000000      6.000000       50.000000
1               2.500000      -2.625000      60.000000
2               2.956522      -2.122956      15.441176
3               4.886979      24.622641      39.502057
4               3.109753      -1.621234      57.150042
5               3.219543      -1.137993      3.410092
6               3.478088      0.457813       7.433543
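The first computed row of Table 2.B likewise follows directly from the secant formula: with x_{-1} = 2 and x_0 = 4, f(2) = -2 and f(4) = 6, so x_1 = 4 - 6(4 - 2)/(6 - (-2)) = 4 - 12/8 = 2.5.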
According to the table above, the approximate root generated by the secant
method is 𝒙𝟗 = 3.414217 with an approximate percent relative error (𝜺𝒂) of
0.014460%. The algorithm stopped at iteration n = 9 upon reaching the stopping
criterion (𝜺𝑺) of 0.1%.
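This estimate agrees with the remaining exact root, 2 + √2 ≈ 3.414214, differing only in the sixth decimal place.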
Figure 2.B shows the convergence behavior of both methods, illustrating how each
method's successive iterations come closer to the final approximate root.
Figure 2.C shows the approximate percent relative error (𝜺𝒂) per iteration for each
open method, and how it fluctuates as the algorithm runs.
Throughout the iterations, the behavior of the approximate errors can also be
observed. Initially, the error is set to 100% for the first iteration to indicate a large initial
error.
This behavior highlights both the strengths and potential pitfalls of the Newton-
Raphson method. While it is highly efficient and converges quickly under favorable
conditions, it can exhibit large errors if the function's behavior causes significant changes
in the root's estimate. Despite this, the method remains a powerful tool for solving
nonlinear equations, provided the initial guess is chosen wisely and the function is well-
behaved.
As the iterations progress, the method continues to improve the root estimate. The
approximate errors decrease rapidly, demonstrating the method’s efficiency. By the time
the approximate error falls below the desired stopping criterion (e.g., 0.1%), the method
has effectively converged to a highly accurate root.
On the other hand, the secant method does not require the derivative, making it
simpler to implement for functions where the derivative is difficult to compute. It typically
converges faster than the bisection method but slower than the Newton-Raphson method.
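These observations are consistent with the standard convergence orders of the two methods: near a simple root, the Newton-Raphson method converges quadratically (order 2), while the secant method converges superlinearly with order (1 + √5)/2 ≈ 1.618, which places it between bisection (linear) and Newton-Raphson in speed.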
Appendices:
import math
import matplotlib.pyplot as plt
import numpy as np

# Test function f(x) = x^3 - 5x^2 + 6x - 2 (its definition was lost in the
# extracted source; reconstructed from the plot label and derivative below)
def f(x):
    return x**3 - 5*x**2 + 6*x - 2

# Analytical derivative f'(x), required by the Newton-Raphson method
def df(x):
    return 3*x**2 - 10*x + 6
print(f"{'Iteration':<10}{'x_n':<15}{'f(x_n)':<15}{'f\'(x_n)':<15
}{'Approx. Error':<20}")
print("-" * 70)
iteration = 0
ea = 100
x_values = [x0]
error_values = []
while iteration == 0:
ea = None
COLEGIO DE MUNTINLUPA
ELEN3102: Numerical Methods and Analysis
print(f"{iteration:<10}{x0:<15.6f}{f(x0):<15.6f}{df(x0):<15.6f}{'
':<20}")
iteration += 1
while iteration < iter_max:
if f(x0) - df(x0) == 0:
print("Divide by zero error!")
return x1, x_values, error_values
x1 = x0 - (f(x0) / df(x0))
ea = abs((x1 - x0) / x1) * 100
error_values.append(ea)
x_values.append(x0)
print(f"{iteration:<10}{x1:<15.6f}{f(x1):<15.6f}{df(x1):<15.6f}{e
a:<20.6f}")
x0 = x1
iteration += 1
if ea < e_s:
x_values.append(x1)
return x1, x_values, error_values
print(f"{iteration:<10}{x0:<15.6f}{f(x0):<15.6f}{'':<20}")
iteration += 1
while iteration == 0:
ea = abs((x1 - x0) / x1) * 100
print(f"{iteration:<10}{x1:<15.6f}{f(x1):<15.6f}{ea:<20.6f}")
COLEGIO DE MUNTINLUPA
ELEN3102: Numerical Methods and Analysis
error_values.append(ea)
iteration += 1
while iteration < iter_max:
if f(x0) == f(x1):
print("Divide by zero error!")
return x_values, error_values
x2 = x0 - (x1 - x0) * f(x0) / (f(x1) - f(x0))
ea = abs((x2 - x1) / x2) * 100
print(f"{iteration:<10}{x2:<15.6f}{f(x2):<15.6f}{ea:<20.6f}")
x_values.append(x2)
error_values.append(ea)
if ea < e_s:
print(f"\nApproximate Root is: {x2:.6f}")
return x2, x_values, error_values
x0, x1 = x1, x2
iteration += 1
print("\nNot Convergent.")
return x2, x_values, error_values
# True roots of f(x) = (x - 1)(x^2 - 4x + 2): x = 2 - sqrt(2) and x = 1
# (the third root, 2 + sqrt(2), did not appear in the extracted source)
x_t1 = round(2 - math.sqrt(2), 6)
y_t1 = f(x_t1)
x_t2 = 1
y_t2 = f(x_t2)
# Plotting the graph of the function, showing the true roots and bounds
# (the sampling range below is assumed; it was lost in the extracted source)
x_plot = np.linspace(-1, 5, 400)
y_plot = f(x_plot)
plt.figure(figsize=(10, 5))
plt.plot(x_plot, y_plot, label='f(x)=x^3 - 5x^2 + 6x - 2')
# Mark the true roots computed above (reconstructed usage; the original
# plotting call for these points was lost)
plt.scatter([x_t1, x_t2], [y_t1, y_t2], color='red', zorder=3, label='True roots')
plt.axhline(0, color='black', linewidth=0.5)
plt.xlabel('x')
plt.ylabel('f(x)')
plt.title('Graph of the Function with Bounds')
plt.legend()
plt.grid(True)
plt.show()
# Run both methods with the settings reported in Tables 2.A and 2.B
# (initial guesses and e_s = 0.1 from the report; iter_max = 50 is assumed)
root_nr, nr_roots, nr_errors = newton_raphson(2.5, 0.1, 50)
root_sc, sc_roots, sc_errors = secant(2.0, 4.0, 0.1, 50)

# Plotting the approximate root per iteration (Figure 2.B); only the ax1
# labelling calls survived extraction, so the surrounding code is reconstructed
fig, ax1 = plt.subplots(figsize=(10, 5))
ax1.plot(range(len(nr_roots)), nr_roots, marker='o', label='Newton-Raphson')
ax1.plot(range(-1, len(sc_roots) - 1), sc_roots, marker='s', label='Secant')
ax1.set_xlabel('Iteration Number')
ax1.set_ylabel('Approximate Root')
ax1.set_title('Iteration Number vs. Approximate Root')
ax1.legend()
ax1.grid(True)
plt.show()