Numerical Methods for Root Finding
Representing Polynomials
Polynomials are represented as vectors of coefficients ordered from the highest power to the constant term.
Example:
For p(x) = x^3 - 6x^2 + 11x - 6, use:
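p = [1, -6, 11, -6]; % coefficients from x^3, x^2, x^1 down to the constant term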
Finding Roots
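To find all roots of the polynomial, pass its coefficient vector to the roots function:
roots(p)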
ans = 3×1
3.0000
2.0000
1.0000
Differentiation
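Differentiate a polynomial with polyder(p):
dp = polyder(p)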
dp = 1×3
3 -12 11
[3, -12, 11], converted back to a polynomial from its coefficients, is 3x^2 - 12x + 11, which is exactly the derivative of x^3 - 6x^2 + 11x - 6.
Integration
Integrate a polynomial with polyint(p, k), where k is the constant of integration (default: 0).
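ip = polyint(p) % with the default constant of integration k = 0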
ip = 1×5
0.2500 -2.0000 5.5000 -6.0000 0
Polynomial Multiplication
Multiply (x + 1)(x - 1) with conv:
product = conv([1, 1], [1, -1])
product = 1×3
1 0 -1
Polynomial Division
Example:
Divide x^2 - 1 by x - 1:
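A call consistent with the quotient and remainder shown below, using MATLAB's deconv for polynomial division (the specific dividend and divisor are inferred from the multiplication example above):
[quotient, remainder] = deconv([1, 0, -1], [1, -1])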
quotient = 1×2
1 1
remainder = 1×3
0 0 0
Evaluating Polynomials
Evaluate the polynomial p(x) = x^3 - 6x^2 + 11x - 6 at several points with polyval(p, x).
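The evaluation points below are inferred from the output values (p is the coefficient vector defined earlier):
value = polyval(p, [1, 3, 5, 7, 10])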
value = 1×5
0 0 24 120 504
The polynomial evaluates to 0 at x = 1 and x = 3 because they are roots of this polynomial. How such roots are found numerically, we will learn in the subsequent sections.
Solve x^3 - 7x^2 + 15.75x - 11.25 = 0:
coefficients = [1, -7, 15.75, -11.25]; % Define the coefficients of the polynomial
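roots(coefficients) % the three roots appear below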
3.0000
2.5000
1.5000
This works well for polynomials of moderate degree. However, for high-degree polynomials or polynomials with complex coefficients, numerical stability
issues may arise.
NOTE: The roots function is ideal for solving single-variable polynomial equations. When roots fails to provide a solution for complex problems,
numerical methods such as the Bisection Method and Newton-Raphson Method are often employed. These methods are robust and can handle cases
where analytical or built-in functions like roots struggle.
Curve Fitting
Fit a polynomial to data using polyfit(x, y, n).
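A minimal example consistent with the output below, assuming simple linear data y = 3x + 2 (the data itself is illustrative):
x = 0:0.5:5;               % sample x data (assumed)
y = 3*x + 2;               % exactly linear data
coeffs = polyfit(x, y, 1)  % first-order fit recovers slope 3 and intercept 2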
coeffs = 1×2
3.0000 2.0000
Let us generate some x and y data and then ask polyfit to give us a polynomial fit of order n. Once we visualize it, we will know whether the fitted equation is good or not.
% 100 points between 0 and 2.5
x = linspace(0, 2.5, 100);
% Noisy sine-wave data (the noise level here is an assumption; any small noise works)
y = sin(x) + 0.1*randn(size(x));
% Second-order polynomial fit, evaluated at the same x values
p2 = polyfit(x, y, 2);
y_fit = polyval(p2, x);
figure;
plot(x, y, '-k', 'MarkerSize', 6, 'DisplayName', 'Noisy Data'); % Original data
hold on;
plot(x, y_fit, 'LineWidth', 2, 'DisplayName', '2nd-Order Fit'); % Fitted curve
hold off;
xlabel('x');
ylabel('y');
title('Noisy Sine Wave vs. Polynomial Fit');
legend('Location', 'best');
grid on;
Change the range: instead of going from 0 to 2.5, set these values to 0 to 5 and see whether the quadratic fit is still able to describe the data.
Bisection Method
The bisection method is a root-finding technique that repeatedly bisects an interval and then selects a subinterval in which a root must lie for further
processing. Read more about it here: Bisection Method.
The following is the bare minimum code that you must know; it was practiced in class to solve different cases.
% Define the polynomial equation
func = @(x) x.^3 - x - 2;
% Interval where func changes sign (see the plot below) and stopping settings
a = 1; b = 2; tol = 1e-6; max_iter = 100;
% Bisection algorithm
for k = 1:max_iter
    c = (a + b) / 2; % Midpoint
    if func(a) * func(c) < 0, b = c; else, a = c; end % keep the half that brackets the root
    if (b - a) / 2 < tol, break; end                  % stop once the interval is small enough
end
root = (a + b) / 2
It is always a good idea to visualize the function, if possible; that way you can find an interval over which the function changes sign. To plot the above function we can do this:
f= @(x) x.^3 - x - 2;
xrange = -1:0.1:3;
plot(xrange , f(xrange),'--xk') % plotting the points (x1, f(x1)), (x2, f(x2)), ... and so on
yline(0) % This is a horizontal line passing from y=0.
You can see the curve crossing the y = 0 line between 1 and 2. Therefore this could be a valid interval.
Using Bisection Method as Function
The following function implements the bisection method. Save it in a file with the same name (bisection_method.m) and use it to find the root of a function of one variable:
function root = bisection_method(func, a, b, tolerance, max_iteration)
% bisection_method: Finds the root of a function using the bisection method.
%
% Inputs:
%   func: A function handle to the function whose root is to be found.
%   a: The lower bound of the interval.
%   b: The upper bound of the interval.
%   tolerance: The tolerance for the root (stopping criterion).
%   max_iteration: The maximum number of iterations allowed.
%
% Outputs:
%   root: The approximate root of the function.

% Initialize variables
iteration = 0;
root = (a + b) / 2;

% While loop: keep halving the interval until it is smaller than the tolerance
while (b - a) / 2 > tolerance && iteration < max_iteration
    iteration = iteration + 1;
    if func(a) * func(root) < 0   % the root lies in [a, root]
        b = root;
    else                          % the root lies in [root, b]
        a = root;
    end
    root = (a + b) / 2;           % new midpoint
end
end
Explanation
func: The function handle for which you want to find the root.
a, b: The endpoints of the interval [a,b] at which the function has opposite signs.
tolerance: The tolerance level, which determines how close the result should be to the actual root.
max_iteration: The maximum number of iterations to perform.
Notes
The function func must be continuous on the interval [a,b].
The interval [a,b] must contain a root, i.e., func(a) and func(b) must have opposite signs.
The method will stop when the interval size is smaller than the specified tolerance or when the maximum number of iterations is reached.
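For example, using the function from the bisection skeleton above on the interval [1, 2] identified from the plot:
root = bisection_method(@(x) x.^3 - x - 2, 1, 2, 1e-6, 100)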
Solve 4x^2 - x^3 = 0.
Let us visualize it first.
g= @(x) 4*x.^2-x.^3;
xrange = -1:0.1:5;
plot(xrange , g(xrange),'--or') % change f to g as now the function has been defined as g only.
yline(0)
You can see that it has two visible zero crossings: one lies between -1 and 1 and another lies between 4 and 5.
[Actually this polynomial has three roots, namely 0, 0, and 4, because its highest power is 3.]
root = bisection_method(g, -10, 5, 1e-6, 100)
You may have seen tables of the successive iterations on the corresponding Wikipedia pages; we can also implement that here and print the iterations in tabular format.
Bisection Method
function results = bisection_method2(f, a, b, tol, max_iter)
% Bisection Method for finding roots of a function
%
% Inputs:
%   f        - Function handle @(x) f(x)
%   a, b     - Initial interval [a, b] where f(a)*f(b) < 0
%   tol      - Tolerance for stopping criterion
%   max_iter - Maximum number of iterations
%
% Output:
%   results  - Table containing iteration, a_n, b_n, c_n, and f(c_n)
data = [];
for n = 1:max_iter
    c = (a + b) / 2;                              % midpoint of the current interval
    data = [data; n, a, b, c, f(c)];              % record this iteration
    if abs(f(c)) < tol, break; end                % stop once f(c) is small enough (assumed criterion)
    if f(a) * f(c) < 0, b = c; else, a = c; end   % keep the half that brackets the root
end
results = array2table(data, 'VariableNames', {'Iteration', 'a_n', 'b_n', 'c_n', 'f_c_n'});
end
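A call consistent with the iteration table shown below, again using f(x) = x^3 - x - 2 on [1, 2] (the tolerance 1e-5 on |f(c)| is an assumption inferred from the table):
results = bisection_method2(@(x) x.^3 - x - 2, 1, 2, 1e-5, 100)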
Iteration    a_n       b_n       c_n       f(c_n)
    1        1         2         1.5       -0.125
    2        1.5       2         1.75      1.6094
    3        1.5       1.75      1.625     0.66602
    4        1.5       1.625     1.5625    0.2522
    5        1.5       1.5625    1.5312    0.059113
    6        1.5       1.5312    1.5156    -0.034054
    7        1.5156    1.5312    1.5234    0.01225
    8        1.5156    1.5234    1.5195    -0.010971
    9        1.5195    1.5234    1.5215    0.00062218
    10       1.5195    1.5215    1.5205    -0.0051789
    11       1.5205    1.5215    1.521     -0.0022794
    12       1.521     1.5215    1.5212    -0.00082891
    13       1.5212    1.5215    1.5214    -0.00010343
    14       1.5214    1.5215    1.5214    0.00025935
    15       1.5214    1.5214    1.5214    7.7956e-05
    16       1.5214    1.5214    1.5214    -1.2739e-05
    17       1.5214    1.5214    1.5214    3.2608e-05
    18       1.5214    1.5214    1.5214    9.9343e-06
Newton-Raphson Method
The Newton-Raphson method is an iterative root-finding algorithm that uses the derivative of a function to approximate its roots. Read more about it here: Newton's Method.
% Function and its derivative (assumed: the same example as the bisection section)
f = @(x) x.^3 - x - 2;   fp = @(x) 3*x.^2 - 1;
% Initial guess
x0 = 1;
max_iter = 20;   % assumed iteration count
% Newton-Raphson algorithm
for k = 1:max_iter
    x0 = x0 - f(x0) / fp(x0);   % Newton-Raphson update
end
x0   % approximate root
For example, computing the square root of 2 can be translated into finding a root of the equation x^2 - 2 = 0.
% Function definition
f = @(x) x^2 - 2;
f_prime = @(x) 2*x;
% Initial guess
x0 = 1.5;
% Tolerance level
tolerance = 1e-6;
% Maximum number of iterations (assumed value)
max_iterations = 100;
% Newton-Raphson iteration
for i = 1:max_iterations
    x1 = x0 - f(x0) / f_prime(x0);   % Newton-Raphson update
    if abs(x1 - x0) < tolerance      % stop when successive iterates agree within the tolerance
        break;
    end
    x0 = x1;
end
root = x1   % approximate value of sqrt(2)
You can even shorten this code to find the square root of any number a. Use:
% Define the number we want the square root of
a = 2;
% Initial guess
x0 = 1;
% Newton-Raphson update for f(x) = x^2 - a simplifies to averaging x and a/x
for k = 1:20      % fixed number of iterations (assumed)
    x0 = (x0 + a/x0) / 2;
end
x0   % approximate square root of a
However, it is always good to include stopping conditions, so that fewer iterations are needed and the program breaks out of the loop once a satisfactory result is obtained.
% Initialize variables (func, dfunc, x0, tol, max_iter are assumed defined, e.g. as the inputs described below)
iter = 0;
root = x0;
% Newton-Raphson loop
while iter < max_iter
    % Evaluate the function and its derivative at the current guess
    f_val = func(root);
    df_val = dfunc(root);
    if abs(f_val) < tol      % stop once the function value is within the tolerance
        break;
    end
    root = root - f_val / df_val;   % Newton-Raphson update
    iter = iter + 1;
end
Suppose you want to find the root of the function f(x) = x^3 - x - 2 with its derivative f'(x) = 3x^2 - 1, starting from an initial guess x0 = 1.5, with a tolerance of 1e-6 and a maximum of 100 iterations:
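A minimal usage sketch, assuming the loop above is wrapped in a function newton_raphson(func, dfunc, x0, tol, max_iter) with the inputs listed in the Explanation below (the function name itself is illustrative, not from the original notes):
root = newton_raphson(@(x) x.^3 - x - 2, @(x) 3*x.^2 - 1, 1.5, 1e-6, 100)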
Explanation
1. func: The function handle for which you want to find the root.
2. dfunc: The function handle for the derivative of the function.
3. x0: The initial guess for the root.
4. tol: The tolerance level, which determines how close the result should be to the actual root.
5. max_iter: The maximum number of iterations to perform.
Notes
The Newton-Raphson method requires the derivative of the function. If the derivative is not available, you may need to use a numerical
approximation (e.g., finite differences).
The method may fail if the derivative is zero at any point or if the initial guess is not close enough to the root.
The method converges quickly if the initial guess is good, but it may diverge for poorly chosen initial guesses.
% Initialize variables (func, dfunc, x0, tol, max_iter assumed defined as above)
iter = 0;        % Counter for the number of iterations
x_n = x0;        % Initialize the current approximation to the initial guess
results = [];    % Initialize an empty array to store results
while iter < max_iter
    fx = func(x_n);                           % function value at the current approximation
    dfx = dfunc(x_n);                         % derivative value at the current approximation
    results = [results; iter + 1, x_n, fx];   % store iteration number, x_n, and f(x_n)
    % Check for convergence: if the function value is less than the tolerance, stop
    if abs(fx) < tol
        break;
    end
    x_n = x_n - fx / dfx;   % Newton-Raphson update
    iter = iter + 1;        % Increment the iteration counter
end
Iteration    x_n       f(x_n)
    1        1.5       -0.125
    2        1.5217    0.0021369
    3        1.5214    5.8939e-07
Alternatively, the iteration history can be kept in separate arrays and collected into a table at the end:
% Initialize variables (func, dfunc, x0, tol, max_iter assumed defined as above)
iter = 0;          % Counter for the number of iterations
x_n = x0;          % Initialize the current approximation to the initial guess
iterations = [];   % Initialize an array to store iteration numbers
x_values = [];     % Initialize an array to store x_n values
fx_values = [];    % Initialize an array to store f(x_n) values
while iter < max_iter
    fx = func(x_n);     % function value at the current approximation
    dfx = dfunc(x_n);   % derivative value at the current approximation
    iterations = [iterations; iter + 1];   % record this iteration
    x_values = [x_values; x_n];
    fx_values = [fx_values; fx];
    % Check for convergence: if the function value is less than the tolerance, stop
    if abs(fx) < tol
        break;
    end
    % Update the approximation using the Newton-Raphson formula
    x_n = x_n - fx / dfx;
    iter = iter + 1;   % Increment the iteration counter
end
results = table(iterations, x_values, fx_values, 'VariableNames', {'Iteration', 'x_n', 'f_x_n'})