Computational Mathematics
UNIT I
The Bisection Method is a numerical method used to find the roots (or zeros) of a continuous function.
Specifically, it is used to find the value of x where f(x) = 0. The method works by repeatedly dividing
the interval in half and selecting the subinterval in which the root lies.
Key Concepts:
1. Initial Interval: The method requires two starting values, a and b, such that f(a) and f(b)
have opposite signs, i.e., f(a)⋅f(b) < 0. This guarantees that there is at least one root between a and b
due to the Intermediate Value Theorem.
2. Midpoint: At each step, the method calculates the midpoint of the interval, m = (a + b)/2.
3. Evaluate the Function at the Midpoint: If f(a)⋅f(m) < 0, the root lies in [a, m]; otherwise it lies
in [m, b], and the interval is updated accordingly.
4. Stopping Criteria: The process is repeated until the interval is sufficiently small or the
function value at the midpoint is close enough to zero.
1. Choose the initial interval [a, b], where f(a) and f(b) have opposite signs.
2. Calculate the midpoint m = (a + b)/2.
3. Check the value of f(m): if f(m) = 0, then m is the root; if f(a)⋅f(m) < 0, repeat on [a, m];
otherwise repeat on [m, b].
Example:
Let's say we want to find the root of the equation f(x) = x^2 − 4 using the bisection method. We can
follow these steps:
Initial interval: [a, b] = [1, 3] (since f(1) = −3 and f(3) = 5, the signs are opposite).
Midpoint: m = (1 + 3)/2 = 2. Since f(2) = 2^2 − 4 = 0, the midpoint happens to be exactly the root.
If the function doesn't cross zero exactly at the midpoint, we would continue narrowing down the interval
until we get sufficiently close to the root.
Advantages:
Simple to implement.
Guaranteed to find a root if the initial interval contains a sign change.
Disadvantages:
Convergence is slow (only linear), since the interval is merely halved at each step.
It cannot locate roots where the function touches the x-axis without changing sign.
The Method of False Position, also known as Regula Falsi, is a root-finding algorithm similar to the
Bisection Method. It is used to find the root of a continuous function. This method combines the idea of
the bisection method with linear interpolation.
Key Concept:
The False Position Method iteratively refines an interval in which the root lies. At each step, it
draws the secant line between the points (a, f(a)) and (b, f(b)) and intersects this secant line with the
x-axis to estimate a new point c, which is the next approximation of the root.
The point c is chosen in such a way that the sign of f(c) is used to decide which of the
two intervals (either [a, c] or [c, b]) will be retained for the next iteration.
1. Choose initial interval: Start with two values a and b such that f(a) and f(b) have opposite
signs (i.e., f(a)⋅f(b) < 0).
2. Calculate the False Position: Compute the new approximation of the root, c, using the formula:
c = a − f(a)⋅(b − a) / (f(b) − f(a))
3. Repeat the process until the difference between a and b is smaller than a predefined
tolerance or the function value f(c) is sufficiently close to zero.
Let's take the same function f(x) = x^2 − 4, and we want to find its root using the Method of False Position.
Initial interval: a = 1, b = 3 (since f(1) = −3 and f(3) = 5, the signs are opposite).
The root lies between a and b because the function changes sign.
The method would iteratively calculate a new approximation using the false position formula and update
the interval based on the sign of f(c).
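For instance, the first iteration gives c = 1 − (−3)⋅(3 − 1)/(5 − (−3)) = 1 + 6/8 = 1.75. Since
f(1.75) = −0.9375 < 0, the interval becomes [1.75, 3], and successive iterations push c toward the
true root x = 2.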
Key Concept:
The Successive Approximation method works by rewriting the equation f(x)=0 into the form
x=g(x). This gives a recursive relationship that allows you to approximate the root by iterating the
function.
Starting from an initial guess x0, the next approximation x1 is computed by applying g(x) to x0,
and so on.
1. Rearrange the equation: Convert the original equation f(x) = 0 into a form x = g(x). For example,
x^2 − 4 = 0 can be rewritten as x = √4 or x = 4/x.
2. Choose an initial guess x0.
3. Iterate: Compute successive approximations using
xn+1 = g(xn)
4. Convergence check: Repeat the iteration until the difference between successive approximations
is smaller than a specified tolerance:
|xn+1 − xn| < tolerance
5. Result: The value of xn+1 after the last iteration is taken as the root.
Example:
Let's solve the equation f(x) = x^2 − 4 = 0 using the method of successive approximation.
However, more interesting problems arise when there are more complex functions or when the rearranged
equation involves iterative refinement.
For the method to converge, the function g(x) must satisfy certain conditions:
1. Fixed-point condition: g(x) must be continuous and the interval must be such that the iterations
move closer to the root.
2. Convergence criterion: The method converges if the magnitude of the derivative of g(x) near the
root is less than 1: |g′(x)| < 1. If |g′(x)| is greater than 1, the method may diverge or fail to
converge to the root.
Example Walkthrough:
Steps: Using g(x) = 4/x with initial guess x0 = 1, the iterates are x1 = 4, x2 = 1, x3 = 4, and so on.
The values oscillate around the root x = 2, but this is due to the form of the equation (here
|g′(2)| = |−4/2^2| = 1, the borderline case), and the iteration can be refined further by choosing a
different rearrangement.
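As a sketch of such a refinement, suppose we rearrange instead as g(x) = (x + 4/x)/2 (an assumed
alternative form, for which g′(2) = 0): starting again from x0 = 1, the iterates are x1 = 2.5,
x2 = 2.05, x3 ≈ 2.0006, converging rapidly to x = 2.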
UNIT II
In numerical linear algebra, solving a system of linear equations is a fundamental problem. The
general system is given as:
Ax=b
where:
A is a square matrix of coefficients (n×n),
x is the vector of unknowns (n×1),
b is the vector of constants (n×1).
There are two main approaches for solving such systems of equations: Direct Methods and Iterative
Methods.
1. Direct Methods
Direct methods aim to find the exact solution of the system in a finite number of steps (in theory) using
algebraic manipulation. These methods are efficient for small to moderate-sized systems where the matrix
is well-conditioned.
Gaussian Elimination (or Row Reduction): This method transforms the system Ax=b into an
upper triangular matrix U via a sequence of row operations. Once the system is in this form,
back-substitution is used to solve for the unknowns.
Steps:
1. Forward elimination: use row operations to zero out the entries below each pivot, reducing A to
upper triangular form U.
2. Back-substitution: solve the last equation for the last unknown, then substitute upward to recover
the remaining unknowns.
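For example, take the small assumed system 2x + y = 5 and x + 3y = 10. Eliminating x from the second
row (R2 ← R2 − (1/2)R1) gives 2.5y = 7.5, so y = 3, and back-substitution into the first equation
gives x = (5 − 3)/2 = 1.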
LU Decomposition: Decompose the matrix A into a product of a lower triangular matrix L and
an upper triangular matrix U, i.e., A=LU. Then, solve the system in two steps:
1. Solve Ly=b for y.
2. Solve Ux=y for x.
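As an illustration of this two-step solve, here is a minimal C++ sketch of a Doolittle-style LU
decomposition without pivoting for a 3×3 system; the function name luDecompose and the right-hand
side values are assumptions for demonstration, and the matrix is the one used in the Unit IV
Gauss Elimination program:
#include <iostream>
#include <iomanip>
using namespace std;

// Doolittle LU decomposition (no pivoting): A = LU with unit diagonal in L
void luDecompose(double A[3][3], double L[3][3], double U[3][3]) {
int n = 3;
for (int i = 0; i < n; i++)
for (int j = 0; j < n; j++) { L[i][j] = (i == j) ? 1.0 : 0.0; U[i][j] = 0.0; }
for (int i = 0; i < n; i++) {
for (int j = i; j < n; j++) { // row i of U
U[i][j] = A[i][j];
for (int k = 0; k < i; k++) U[i][j] -= L[i][k] * U[k][j];
}
for (int j = i + 1; j < n; j++) { // column i of L
L[j][i] = A[j][i];
for (int k = 0; k < i; k++) L[j][i] -= L[j][k] * U[k][i];
L[j][i] /= U[i][i];
}
}
}

int main() {
double A[3][3] = {{2, -1, 1}, {1, 3, 2}, {3, -1, 2}};
double b[3] = {3, 13, 7}; // Assumed right-hand side
double L[3][3], U[3][3], y[3], x[3];
luDecompose(A, L, U);
for (int i = 0; i < 3; i++) { // Step 1: solve Ly = b (forward substitution)
y[i] = b[i];
for (int j = 0; j < i; j++) y[i] -= L[i][j] * y[j];
}
for (int i = 2; i >= 0; i--) { // Step 2: solve Ux = y (back substitution)
x[i] = y[i];
for (int j = i + 1; j < 3; j++) x[i] -= U[i][j] * x[j];
x[i] /= U[i][i];
}
cout << "Solution is: ";
for (int i = 0; i < 3; i++) cout << fixed << setprecision(4) << x[i] << " ";
cout << endl;
return 0;
}
With these values the program prints the solution x = (1, 2, 3).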
Matrix Inversion: If the matrix A is invertible, the solution can be found by directly multiplying
both sides of the equation by A⁻¹, i.e., x = A⁻¹b. This method is computationally expensive for large
matrices due to the cost of matrix inversion.
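For instance, for the small assumed system above (2x + y = 5, x + 3y = 10): det(A) = 2⋅3 − 1⋅1 = 5,
so A⁻¹ = (1/5)[[3, −1], [−1, 2]], and x = A⁻¹b = (1/5)⋅[3⋅5 − 1⋅10, −1⋅5 + 2⋅10] = [1, 3], matching
the elimination result.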
2. Iterative Methods
Iterative methods are techniques that generate a sequence of approximations that converge to the exact
solution. These methods are particularly useful for large, sparse systems where direct methods may be
inefficient or impractical.
Common Iterative Methods
Jacobi Method: This method uses the diagonal elements of A to iteratively update the values of
x. Each new approximation is computed based on the old one.
Steps: Starting from an initial guess, each component is recomputed from the previous iterate as
xi(k+1) = (bi − Σj≠i aij xj(k)) / aii, and the sweep is repeated until successive iterates agree to
within a tolerance.
Gauss-Seidel Method: This is a refinement of the Jacobi method. In Gauss-Seidel, the updated
values of x are used immediately in the next iteration, instead of waiting for the whole vector to
be updated.
Steps: Each component is recomputed as xi(k+1) = (bi − Σj<i aij xj(k+1) − Σj>i aij xj(k)) / aii, so
components updated earlier in the same sweep are used immediately.
Conjugate Gradient Method: This method is used for solving large, sparse systems where A is
symmetric and positive definite. It is an optimization-based approach, iterating toward the
solution while minimizing the error along conjugate directions.
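As an illustration, here is a minimal C++ sketch of the Conjugate Gradient iteration for a small
symmetric positive definite system; the matrix and right-hand side are borrowed from the Unit V
examples, and the main-only layout is an assumption for brevity:
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;

int main() {
double A[3][3] = {{4, -1, 0}, {-1, 4, -1}, {0, -1, 3}}; // symmetric positive definite
double b[3] = {15, 10, 10};
double x[3] = {0, 0, 0};
double r[3], p[3], Ap[3];
int n = 3;
// Initial residual r = b - Ax = b (since x = 0); first search direction p = r
for (int i = 0; i < n; i++) { r[i] = b[i]; p[i] = r[i]; }
double rsold = 0;
for (int i = 0; i < n; i++) rsold += r[i] * r[i];
for (int iter = 0; iter < 50; iter++) { // n steps suffice in exact arithmetic
for (int i = 0; i < n; i++) { // Ap = A * p
Ap[i] = 0;
for (int j = 0; j < n; j++) Ap[i] += A[i][j] * p[j];
}
double pAp = 0;
for (int i = 0; i < n; i++) pAp += p[i] * Ap[i];
double alpha = rsold / pAp; // step length that minimizes the error along p
for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
double rsnew = 0;
for (int i = 0; i < n; i++) rsnew += r[i] * r[i];
if (sqrt(rsnew) < 1e-10) break; // residual small enough
double beta = rsnew / rsold;
for (int i = 0; i < n; i++) p[i] = r[i] + beta * p[i]; // next A-conjugate direction
rsold = rsnew;
}
cout << "Solution is: ";
for (int i = 0; i < n; i++) cout << fixed << setprecision(4) << x[i] << " ";
cout << endl;
return 0;
}
With this matrix the iteration converges in a few steps to x = (5.0000, 5.0000, 5.0000).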
UNIT III
1. Bisection Method
The Bisection method is used to find the root of a function within a given interval [a, b], where
the function changes signs at the endpoints. The method repeatedly bisects the interval and then selects
the subinterval in which the root lies.
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;
// Function whose root we seek: f(x) = x^2 - 4, the running example from Unit I
double f(double x) {
return x * x - 4;
}

// Bisection Method
double bisection(double a, double b, double tol) {
if (f(a) * f(b) > 0) {
cout << "The function has the same sign at the endpoints." << endl;
return -1;
}
double c;
while ((b - a) / 2 > tol) {
c = (a + b) / 2;
if (f(c) == 0.0) {
break;
} else if (f(c) * f(a) < 0) {
b = c;
} else {
a = c;
}
}
return (a + b) / 2;
}
int main() {
double a = 0, b = 3; // Interval [0, 3]
double tolerance = 0.001;
double root = bisection(a, b, tolerance);
if (root != -1) {
cout << "Root is: " << fixed << setprecision(4) << root << endl;
}
return 0;
}
2. False Position Method
The False Position method is similar to the Bisection method but uses linear interpolation to estimate
the root.
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;
// Same example function: f(x) = x^2 - 4
double f(double x) {
return x * x - 4;
}

// False Position (Regula Falsi) Method
double falsePosition(double a, double b, double tol) {
if (f(a) * f(b) > 0) {
cout << "The function has the same sign at the endpoints." << endl;
return -1;
}
double c;
do {
// Intersect the secant through (a, f(a)) and (b, f(b)) with the x-axis
c = a - (f(a) * (b - a)) / (f(b) - f(a));
if (f(c) == 0.0) {
break;
} else if (f(c) * f(a) < 0) {
b = c;
} else {
a = c;
}
} while (fabs(f(c)) > tol); // the bracket width may not shrink to zero, so test f(c) instead
return c;
}
int main() {
double a = 0, b = 3; // Interval [0, 3]
double tolerance = 0.001;
double root = falsePosition(a, b, tolerance);
if (root != -1) {
cout << "Root is: " << fixed << setprecision(4) << root << endl;
}
return 0;
}
3. Successive Approximation Method
In this method, the function is rearranged into an iterative form x = g(x) to find the root by
successive approximations.
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;
// g(x) = (x + 4/x)/2: a convergent rearrangement of x^2 - 4 = 0
// (assumed here, since the form x = 4/x oscillates, as noted in Unit I)
double g(double x) {
return (x + 4.0 / x) / 2.0;
}

int main() {
double x0 = 1; // Initial guess
double tolerance = 0.001;
double root = g(x0);
while (fabs(root - x0) > tolerance) { // iterate xn+1 = g(xn)
x0 = root;
root = g(x0);
}
cout << "Root is: " << fixed << setprecision(4) << root << endl;
return 0;
}
4. Newton-Raphson Method
The Newton-Raphson method uses the function and its derivative to iteratively find the root via
xn+1 = xn − f(xn)/f′(xn).
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;
// f(x) = x^2 - 4 and its derivative f'(x) = 2x (the running example)
double f(double x) {
return x * x - 4;
}
double df(double x) {
return 2 * x;
}

// Newton-Raphson Method
double newtonRaphson(double x0, double tol) {
double x1 = x0 - f(x0) / df(x0); // First iteration
while (fabs(x1 - x0) > tol) {
x0 = x1;
x1 = x0 - f(x0) / df(x0); // xn+1 = xn - f(xn)/f'(xn)
}
return x1;
}

int main() {
double x0 = 2; // Initial guess
double tolerance = 0.001;
double root = newtonRaphson(x0, tolerance);
cout << "Root is: " << fixed << setprecision(4) << root << endl;
return 0;
}
UNIT IV
1. Secant Method
The Secant method is an iterative method for solving non-linear equations. It is similar to the Newton-
Raphson method but does not require the derivative of the function. Instead, it uses the secant line to
approximate the root.
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;
// Example function (assumed, consistent with the running example): f(x) = x^2 - 4
double f(double x) {
return x * x - 4;
}

// Secant Method
double secantMethod(double x0, double x1, double tol) {
double x2;
do {
x2 = x1 - (f(x1) * (x1 - x0)) / (f(x1) - f(x0)); // Secant formula
x0 = x1;
x1 = x2;
} while (fabs(f(x2)) > tol);
return x2;
}
int main() {
double x0 = 1.0, x1 = 2.0; // Initial guesses
double tolerance = 0.001;
cout << "Root is: " << fixed << setprecision(4) << root << endl;
return 0;
}
2. Graeffe's Root Squaring Method
Graeffe's Root Squaring Method is an iterative method for finding the roots of a polynomial. Each step
replaces the polynomial by one whose roots are the squares of the original roots; after enough
squarings, roots of different magnitudes separate so widely that their magnitudes can be read off from
ratios of successive coefficients. The listing below is a minimal sketch for a polynomial with real
roots of distinct magnitudes; the example polynomial x^2 − 5x + 6 (roots 2 and 3) is an assumption
for demonstration.
#include <iostream>
#include <cmath>
#include <iomanip>
#include <algorithm>
#include <vector>
using namespace std;

// One Graeffe root-squaring step: returns the coefficients of a polynomial
// whose roots are the squares of the roots of the input polynomial
vector<double> graeffeStep(const vector<double>& a) {
int n = (int)a.size() - 1; // degree
vector<double> b(n + 1);
for (int k = 0; k <= n; k++) {
double s = a[k] * a[k];
for (int j = 1; j <= min(k, n - k); j++)
s += ((j % 2 == 0) ? 2.0 : -2.0) * a[k - j] * a[k + j];
b[k] = (k % 2 == 0) ? s : -s;
}
return b;
}

int main() {
vector<double> a = {1, -5, 6}; // p(x) = x^2 - 5x + 6 (assumed example, roots 2 and 3)
int m = 5; // number of squaring steps
for (int i = 0; i < m; i++) a = graeffeStep(a);
// After m squarings, the k-th root magnitude is |a[k]/a[k-1]|^(1/2^m)
for (int k = 1; k < (int)a.size(); k++) {
double mag = pow(fabs(a[k] / a[k - 1]), 1.0 / pow(2.0, m));
cout << "Root magnitude " << k << ": " << fixed << setprecision(4) << mag << endl;
}
return 0;
}
3. Gauss Elimination Method
Gauss Elimination reduces the system Ax = b to upper triangular form by forward elimination and then
recovers the unknowns by back substitution.
#include <iostream>
#include <iomanip>
using namespace std;

// Gaussian Elimination for a 3x3 system Ax = b
void gaussElimination(double A[3][3], double b[3], double x[3]) {
int n = 3;
// Forward elimination
for (int i = 0; i < n; i++) {
// Eliminate the entries below the pivot (no row swapping is performed)
for (int k = i + 1; k < n; k++) {
if (A[i][i] == 0) {
cout << "Division by zero detected." << endl;
return;
}
double factor = A[k][i] / A[i][i];
for (int j = 0; j < n; j++) {
A[k][j] -= factor * A[i][j];
}
b[k] -= factor * b[i];
}
}
// Back substitution
for (int i = n - 1; i >= 0; i--) {
x[i] = b[i];
for (int j = i + 1; j < n; j++) {
x[i] -= A[i][j] * x[j];
}
x[i] /= A[i][i];
}
}
int main() {
double A[3][3] = {
{2, -1, 1},
{1, 3, 2},
{3, -1, 2}
};
double b[3] = {3, 13, 7}; // Right-hand side vector (assumed example values)
double x[3]; // Solution vector
gaussElimination(A, b, x);
cout << "Solution is: ";
for (int i = 0; i < 3; i++) {
cout << fixed << setprecision(4) << x[i] << " ";
}
cout << endl;
return 0;
}
4. Gauss-Jordan Method
The Gauss-Jordan method is similar to Gauss Elimination but further simplifies the matrix by reducing it
to a diagonal form. This method directly produces the solution.
#include <iostream>
#include <iomanip>
using namespace std;
// Gauss-Jordan: eliminate the pivot column from every other row, reducing
// A to diagonal form so the solution can be read off directly
void gaussJordan(double A[3][3], double b[3], double x[3]) {
int n = 3;
for (int i = 0; i < n; i++) {
if (A[i][i] == 0) {
cout << "Division by zero detected." << endl;
return;
}
for (int k = 0; k < n; k++) {
if (k == i) continue;
double factor = A[k][i] / A[i][i];
for (int j = 0; j < n; j++) A[k][j] -= factor * A[i][j];
b[k] -= factor * b[i];
}
}
for (int i = 0; i < n; i++) x[i] = b[i] / A[i][i];
}

int main() {
double A[3][3] = {
{2, -1, 1},
{1, 3, 2},
{3, -1, 2}
};
double b[3] = {3, 13, 7}; // Right-hand side vector (assumed example values)
double x[3]; // Solution vector
gaussJordan(A, b, x);
cout << "Solution is: ";
for (int i = 0; i < 3; i++) {
cout << fixed << setprecision(4) << x[i] << " ";
}
cout << endl;
return 0;
}
UNIT V
1. Jacobi Method
The Jacobi method solves Ax = b iteratively: every component of the new approximation is computed from
the previous iterate, dividing by the diagonal element.
#include <iostream>
#include <cmath>
#include <algorithm>
#include <iomanip>
using namespace std;

// Jacobi iteration for a 3x3 system (function name and layout assumed)
void jacobi(double A[3][3], double b[3], double x[3], double tol, int maxIter) {
int n = 3;
double x_new[3];
double error;
int iter = 0;
do {
for (int i = 0; i < n; i++) {
x_new[i] = b[i];
for (int j = 0; j < n; j++) {
if (i != j) {
x_new[i] -= A[i][j] * x[j];
}
}
x_new[i] /= A[i][i];
}
error = 0.0;
for (int i = 0; i < n; i++) {
error = max(error, fabs(x_new[i] - x[i]));
}
// Adopt the new values for the next sweep
for (int i = 0; i < n; i++) x[i] = x_new[i];
iter++;
if (iter >= maxIter) {
cout << "Maximum iterations reached!" << endl;
break;
}
} while (error > tol);
}

int main() {
double A[3][3] = {
{4, -1, 0},
{-1, 4, -1},
{0, -1, 3}
};
double b[3] = {15, 10, 10}; // Right-hand side vector
double x[3] = {0, 0, 0}; // Initial guess
jacobi(A, b, x, 0.001, 100);
cout << "Solution is: ";
for (int i = 0; i < 3; i++) cout << fixed << setprecision(4) << x[i] << " ";
cout << endl;
return 0;
}
2. Gauss-Seidel Method
The Gauss-Seidel method is an iterative technique used for solving a system of linear equations. It is
similar to the Jacobi method but updates the values of the variables as soon as they are computed.
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;
#include <algorithm>

// Gauss-Seidel iteration for a 3x3 system (function name and layout assumed)
void gaussSeidel(double A[3][3], double b[3], double x[3], double tol, int maxIter) {
int n = 3;
double error;
int iter = 0;
do {
double x_new[3];
// Sweep: x[i] is overwritten immediately, so later rows already use the
// new values; x_new keeps the old values for the error check
for (int i = 0; i < n; i++) {
x_new[i] = x[i];
double sum = b[i];
for (int j = 0; j < n; j++) {
if (j != i) sum -= A[i][j] * x[j];
}
x[i] = sum / A[i][i];
}
// Calculate error
error = 0.0;
for (int i = 0; i < n; i++) {
error = max(error, fabs(x_new[i] - x[i]));
}
iter++;
if (iter >= maxIter) {
cout << "Maximum iterations reached!" << endl;
break;
}
} while (error > tol);
}

int main() {
double A[3][3] = {
{4, -1, 0},
{-1, 4, -1},
{0, -1, 3}
};
double b[3] = {15, 10, 10}; // Right-hand side vector
double x[3] = {0, 0, 0}; // Initial guess
gaussSeidel(A, b, x, 0.001, 100);
cout << "Solution is: ";
for (int i = 0; i < 3; i++) cout << fixed << setprecision(4) << x[i] << " ";
cout << endl;
return 0;
}
3. Largest Eigenvalue by Power Method
The Power Method is an iterative method used to find the largest eigenvalue (in absolute value) of a
matrix. It works by iterating on a random vector and multiplying by the matrix, normalizing at each step.
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;
// Power Method for a 3x3 matrix: returns the dominant eigenvalue
// (function name and layout assumed)
double powerMethod(double A[3][3], double x[3], double tol, int maxIter) {
int n = 3;
double x_new[3];
double eigenValue = 0.0;
double error;
int iter = 0;
do {
// Multiply the matrix A with vector x
for (int i = 0; i < n; i++) {
x_new[i] = 0.0;
for (int j = 0; j < n; j++) {
x_new[i] += A[i][j] * x[j];
}
}
// The component of largest magnitude approximates the dominant eigenvalue
double newEigenValue = 0.0;
for (int i = 0; i < n; i++) {
if (fabs(x_new[i]) > fabs(newEigenValue)) newEigenValue = x_new[i];
}
// Normalize so the iterates do not grow without bound
for (int i = 0; i < n; i++) x[i] = x_new[i] / newEigenValue;
error = fabs(newEigenValue - eigenValue);
eigenValue = newEigenValue;
iter++;
if (iter >= maxIter) {
cout << "Maximum iterations reached!" << endl;
break;
}
} while (error > tol);
return eigenValue;
}
int main() {
double A[3][3] = {
{4, -1, 0},
{-1, 4, -1},
{0, -1, 3}
};
cout << "Largest Eigenvalue: " << fixed << setprecision(4) << eigenValue << endl;
return 0;
}