
A PROJECT REPORT

ON

Numerical Method and Analysis

SUBMITTED TO
Department of Computer Science

Iswar Saran Degree College

B.Sc-3. (Computer Science)

Submitted by
MAHENDRA KUMAR YADAV

Enrolment No: I1510146

Roll No: 507602


PREFACE

In the present-day world, computer-based knowledge has become an integral part
of the educational system. Almost any kind of application is amenable to numerical
treatment, the only limitation being the reader's imagination. Due to the advent of
fast and more powerful digital computers in the last five decades, the development
of numerical techniques to solve scientific problems has been quite phenomenal.

This project (Numerical Method and Analysis) is a general-purpose project which
satisfies the needs of a typical computer system. The project has been tailored
to solve the problems the user faces when computing complex numerical problems
by manual processing. The package “Solution of Numerical Method and Its Analysis”
is fully menu driven and provides very quick and accurate results for
mathematical computations.
ACKNOWLEDGEMENT
With immense pleasure, I, Mahendra Kumar Yadav, want to express my gratitude and
heartfelt thanks towards the Almighty.
I extend my warm and sincere sense of gratitude to Mr. Sumit Kumar Srivastava,
Mr. Sailesh Kumar Srivastav and V.S. Verma (H.O.D.) for all their
motivation and inspiration. I am highly indebted to them.

Finally, I would also like to express my thanks to all those persons who have
directly helped me in the completion of this project.

MAHENDRA KUMAR YADAV




CERTIFICATE
This is to certify that the project entitled “Numerical Method and Analysis”,
submitted to the Department of Computer Science (ISDC) in partial fulfillment of the
requirements for the award of the degree of B.Sc. III, is an original work carried
out by Mahendra Kumar Yadav (Enrolment No. I1510146) under my guidance.

The matter embodied in this project is genuine work done by the student and
has not been submitted to any other University/Institute for the fulfillment of the
requirements of any other course of study.
Eno. I1510146

ISDC, ALLAHABAD

Roll No. 507602


CONTENTS
1. ABOUT THE PROJECT
2. OVERVIEW OF PROJECT
3. INTRODUCTION OF THE PROJECT
4. OBJECTIVE OF THE PROJECT
5. CODING
6. OUTPUT SCREEN
7. SCOPE OF THE PROJECT
8. LIMITATIONS
9. CONCLUSION
10. BIBLIOGRAPHY
ABOUT THE PROJECT
The project " Numerical Method and Analysis " is a menu-driven program which provide the
facility to the user to solve the complex problem.
In this user can able to solve and easily understand the different types of equation used in this
project. I have considered five types of different equation in which there are different types of
method to solve that particular equation. A comprehensive and balanced treatment of the subject
covering many important topics such as computation of complex linear equation by Jacobi’s
method, derivation of the modified Euler method,adoms bashforth-moulton method for solving
ordinary differential equation are dealt with important details. New methodology for finding
interpolation in two dimension and the concept of the stability and truncation error are
introduced in this to easily understand.
OVERVIEW OF C
C is a powerful, flexible, portable and elegantly structured programming language developed by
Dennis Ritchie at Bell Laboratories in 1972. Since C combines the features of a high-level
language with elements of the assembler, it is suitable for both system and application
programming. It is undoubtedly the most widely used general-purpose language today.
To assure that the C language remains standard, ANSI appointed a technical committee in 1983 to
define a standard for C. The committee approved a version of C in December 1989 which is now
known as ANSI C. It was then approved by the International Organization for Standardization (ISO)
in 1990. This version of C is also referred to as C89.

The Main Features Of C:-


1. Programs written in C are efficient and fast.
2. Highly portable.
3. Well suited for structured programming.
4. Ability to extend itself.
5. Easy to understand and user-friendly.
INTRODUCTION OF THE PROJECT

Numerical analysis
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to
general symbolic manipulations) for the problems of mathematical analysis (as distinguished
from discrete mathematics). One of the earliest mathematical writings is a Babylonian tablet from
the Yale Babylonian Collection (YBC 7289), which gives a sexagesimal numerical
approximation of the length of the diagonal in a unit square. Being able to compute the sides of a
triangle (and hence, being able to compute square roots) is extremely important, for instance, in
astronomy, carpentry and construction.
Numerical analysis involves the study of methods of computing numerical data. In many
problems this implies producing a sequence of approximations by repeating the procedure again
and again.
Numerical analysis naturally finds applications in all fields of engineering and the physical
sciences, but in the 21st century also the life sciences and even the arts have adopted elements of
scientific computations. Ordinary differential equations appear in celestial mechanics (planets,
stars and galaxies); numerical linear algebra is important for data analysis; stochastic differential
equations and Markov chains are essential in simulating living cells for medicine and biology.
Before the advent of modern computers numerical methods often depended on hand interpolation
in large printed tables. Since the mid-20th century, computers calculate the required functions
instead. These same interpolation formulas nevertheless continue to be used as part of the
software algorithms for solving differential equations.

Numerical Method

Numerical methods provide approximations to the problems in question. No matter how accurate
they are, they do not, in most cases, provide the exact answer. In some instances an exact answer
may not be possible to obtain, or obtaining it may be too time consuming, and it is in these cases
that numerical methods are most often used. People who employ numerical methods for solving
problems have to worry about the following issues:

1) the rate of convergence (how long does it take for the method to find the answer);

2) the accuracy or even validity of the answer;

3) the completeness of the response (do other solutions, in addition to the one found, exist?).

General introduction

The overall goal of the field of numerical methods and numerical analysis is the design and
analysis of techniques to give approximate but accurate solutions to hard problems, the variety
of which is suggested by the following examples:

1. Advanced numerical methods are essential in making numerical weather prediction feasible.
2. Computing the trajectory of a spacecraft requires the accurate numerical solution of a
system of ordinary differential equations.
3. Car companies can improve the crash safety of their vehicles by using computer
simulations of car crashes. Such simulations essentially consist of solving partial
differential equations numerically.
4. Hedge funds (private investment funds) use tools from all fields of numerical analysis to
attempt to calculate the value of stocks and derivatives more precisely than other market
participants.
5. Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and
crew assignments and fuel needs. Historically, such algorithms were developed within
the overlapping field of operations research.
6. Insurance companies use numerical programs for actuarial analysis.

History

The field of numerical analysis predates the invention of modern computers by many centuries.
Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of
the past were preoccupied by numerical analysis, as is obvious from the names of important
algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or
Euler’s method.
To facilitate computations by hand, large books were produced with formulas and tables of data
such as interpolation points and function coefficients. Using these tables, often calculated out to
16 decimal places or more for some functions, one could look up values to plug into the formulas
given and achieve very good numerical estimates of some functions. The canonical work in the
field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very
large number of commonly used formulas and functions and their values at many points.
The function values are no longer very useful when a computer is available, but the large listing
of formulas can still be very handy.
The mechanical calculator was also developed as a tool for hand computation. These calculators
evolved into electronic computers in the 1940s, and it was then found that these computers were
also useful for administrative purposes. But the invention of the computer also influenced the
field of numerical analysis, since now longer and more complicated calculations could be done.

TWO TYPES OF METHOD FOR NUMERICAL METHOD

Direct methods:

Compute the solution to a problem in a finite number of steps. These methods would give the
precise answer if they were performed in infinite precision arithmetic. Examples include
Gaussian elimination, the QR factorization method for solving systems of linear equations, and
the simplex method of linear programming. In practice, finite precision is used and the result is
an approximation of the true solution (assuming stability).

Iterative method:
In contrast to direct methods, iterative methods are not expected to terminate in a finite number
of steps. Starting from an initial guess, iterative methods form successive approximations that
converge to the exact solution only in the limit. A convergence test, often involving the residual,
is specified in order to decide when a sufficiently accurate solution has (hopefully) been found.
Even using infinite precision arithmetic these methods would not reach the solution within a
finite number of steps (in general). Examples include Newton's method, the bisection method,
and Jacobi iteration.
In computational matrix algebra, iterative methods are generally needed for large problems.
Iterative methods are more common than direct methods in numerical analysis. Some methods
are direct in principle but are usually used as though they were not, e.g. GMRES and the
conjugate gradient method. For these methods the number of steps needed to obtain the exact
solution is so large that an approximation is accepted in the same manner as for an iterative
method.

Discretization

Continuous problems must sometimes be replaced by a discrete problem whose
solution is known to approximate that of the continuous problem; this process is called
discretization. For example, the solution of a differential equation is a function. This function
must be represented by a finite amount of data, for instance by its values at a finite number of
points in its domain, even though this domain is a continuum.

Generation and propagation of errors


The study of errors forms an important part of numerical analysis. There are several ways
in which error can be introduced in the solution of the problem.

Round-off
Round-off errors arise because it is impossible to represent all real numbers exactly on a
machine with finite memory (which is what all practical digital computers are).

Truncation and discretization error

Truncation errors are committed when an iterative method is terminated, or a mathematical
procedure is approximated, and the approximate solution differs from the exact solution.
Similarly, discretization induces a discretization error because the solution of the discrete
problem does not coincide with the solution of the continuous problem. For instance, if an
iteration is used to compute a root and after 10 or so iterations we conclude that the root is
roughly 1.99 when the exact root is 2, we have a truncation error of 0.01.
Once an error is generated, it will generally propagate through the calculation. For instance, we
have already noted that the operation + on a calculator (or a computer) is inexact. It follows that
a calculation of the type a+b+c+d+e is even more inexact.
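
The following short C fragment (a minimal illustration, not part of the project code) shows this effect: adding the same three numbers in a different order gives different answers in finite-precision float arithmetic, so an error introduced in one operation propagates into the final result.

#include <stdio.h>

int main(void)
{
    float a = 1.0e8f, b = -1.0e8f, c = 1.0f;
    printf("(a + b) + c = %f\n", (a + b) + c);   /* prints 1.000000 */
    printf("a + (b + c) = %f\n", a + (b + c));   /* prints 0.000000: c is lost in b + c */
    return 0;
}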

Numerical stability and well-posed problems

Numerical stability is an important notion in numerical methods and analysis. An algorithm is
called numerically stable if an error, whatever its cause, does not grow to be much larger during
the calculation. This happens if the problem is well-conditioned, meaning that the solution
changes by only a small amount if the problem data are changed by a small amount. To the
contrary, if a problem is ill-conditioned, then any small error in the data will grow to be a large
error.
Both the original problem and the algorithm used to solve that problem can be well-conditioned
and/or ill-conditioned, and any combination is possible. So an algorithm that solves a well-
conditioned problem may be either numerically stable or numerically unstable. An art of
numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem.
For instance, computing the square root of 2 (which is roughly 1.41421) is a well-posed problem.
Many algorithms solve this problem by starting with an initial approximation x1 to the square
root of 2, for instance x1 = 1.4, and then computing improved guesses x2, x3, etc. One such method is the famous
Babylonian method, which is given by xk+1 = xk/2 + 1/xk.
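
As a minimal sketch (illustrative only, not part of the project's menu-driven code), the Babylonian iteration above can be written in a few lines of C; it converges to the square root of 2 very quickly from the starting guess x1 = 1.4.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double x = 1.4;                          /* initial approximation x1 */
    int k;
    for (k = 1; k <= 5; k++)
    {
        x = x / 2.0 + 1.0 / x;               /* improved guess x(k+1) = xk/2 + 1/xk */
        printf("x%d = %.10f\n", k + 1, x);
    }
    printf("error = %.2e\n", fabs(x - sqrt(2.0)));
    return 0;
}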
Brief Introduction of Method Used In This Project

Method For Solving Algebraic and Transcendental Equations

(1) Muller Method


Muller's method is a root-finding algorithm, a numerical method for solving equations of the
form f(x) = 0. It was first presented by David E. Muller in 1956.

Muller's method is based on the secant method, which constructs at every iteration a line through
two points on the graph of f. Muller's method instead uses three points, constructs the parabola
through these three points, and takes the intersection of the x-axis with the parabola to be the
next approximation.

Muller's method is a recursive method which generates an approximation of the root ξ


of f at each iteration. Starting with three initial values x0, x-1 and x-2, the first iteration
calculates the first approximation x1, the second iteration calculates the second
approximation x2, the third iteration calculates the third approximation x3, etc. Hence the kth
iteration generates approximation xk. Each iteration takes as input the last three generated
approximations and the value of f at these approximations. Hence the kth iteration takes as input
the values xk-1, xk-2 and xk-3 and the function values f(xk-1), f(xk-2) and f(xk-3). The
approximation xk is calculated as follows.

A parabola yk(x) is constructed which goes through the three points (xk-1, f(xk-1)), (xk-2, f(xk-2))
and (xk-3, f(xk-3)). Written in Newton form,
yk(x) = f(xk-1) + (x - xk-1) f[xk-1, xk-2] + (x - xk-1)(x - xk-2) f[xk-1, xk-2, xk-3],
where f[xk-1, xk-2] and f[xk-1, xk-2, xk-3] denote divided differences. The next iterate xk is the
solution closest to xk-1 of the quadratic equation yk(x) = 0. This yields a recurrence relation in
which the sign should be chosen such that the denominator is as large as possible in magnitude;
we do not use the standard formula for solving quadratic equations because that may lead to loss
of significance.

Note that xk can be complex, even if the previous iterates were all real. This is in contrast with
other root-finding algorithms like the secant method, Sidi's generalized secant
method or Newton's method, whose iterates will remain real if one starts with real numbers.
Having complex iterates can be an advantage (if one is looking for complex roots) or a
disadvantage (if it is known that all roots are real), depending on the problem

(2)False position method


The false position method or regula falsi method is a term for problem-solving methods in
arithmetic, algebra, and calculus. In simple terms, these methods begin by attempting to evaluate
a problem using test ("false") values for the variables, and then adjust the values accordingly.

The convergence process in the bisection method is very slow. It depends only on the choice of the end
points of the interval [a,b]. The function f(x) does not have any role in finding the point c (which
is just the mid-point of a and b); it is used only to decide the next smaller interval [a,c] or [c,b].
A better approximation to c can be obtained by taking the straight line L joining the points
(a, f(a)) and (b, f(b)) and finding where it intersects the x-axis. To obtain the value of c we can
equate the two expressions for the slope m of the line L:
m = (f(b) - f(a)) / (b - a) = (0 - f(b)) / (c - b)
=> (c - b) * (f(b) - f(a)) = -(b - a) * f(b)
=> c = b - f(b) * (b - a) / (f(b) - f(a))
Now the next smaller interval which brackets the root can be obtained by checking
f(a) * f(c) < 0 then b = c
           > 0 then a = c
           = 0 then c is the root.
Selecting c by the above expression is called the Regula-Falsi method or false position method.
Algorithm - False Position Scheme
Given a function f(x) continuous on an interval [a,b] such that f(a) * f(b) < 0
Do
    c = ( a*f(b) - b*f(a) ) / ( f(b) - f(a) )
    if f(a) * f(c) < 0 then b = c
    else a = c
while (none of the convergence criteria C1, C2 or C3 is satisfied)
The false position method is again bound to converge because it brackets the root in the whole
of its convergence process.
Numerical Example :
Find a root of 3x + sin(x) - exp(x) = 0.
On solving the above equation, it is clear that there is a root between 0 and 0.5 and another
root between 1.5 and 2.0. Now let us consider the function f(x) in the interval [0, 0.5],
where f(0) * f(0.5) is less than zero, and use the regula-falsi scheme to obtain the zero of
f(x) = 0.

ITERATION NO.     a         b         c         f(a)*f(b)
1                 0         0.5       0.376     1.38 (+ve)
2                 0.376     0.5       0.36      -0.102 (-ve)

So one of the roots of 3x + sin(x) - exp(x) = 0 is approximately 0.36. Note: although the length
of the interval is getting smaller in each iteration, it is possible that it may not go to zero. If the
graph of y = f(x) is concave near the root 's', one of the endpoints becomes fixed and the other end
marches towards the root.
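
The following stand-alone sketch (illustrative only; the project's full menu-driven regulafalsi() routine appears in the CODING section) applies the scheme above to the same example, f(x) = 3x + sin(x) - exp(x) on [0, 0.5], stopping when successive values of c agree to within 0.0001.

#include <stdio.h>
#include <math.h>

double f(double x) { return 3.0 * x + sin(x) - exp(x); }

int main(void)
{
    double a = 0.0, b = 0.5, c = 0.5, c_old;
    do
    {
        c_old = c;
        c = (a * f(b) - b * f(a)) / (f(b) - f(a));   /* intersection of line L with x-axis */
        if (f(a) * f(c) < 0.0)
            b = c;                                   /* root lies in [a, c] */
        else
            a = c;                                   /* root lies in [c, b] */
    } while (fabs(c - c_old) > 0.0001);
    printf("root = %.4f\n", c);                      /* about 0.36 */
    return 0;
}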

Method For Solving Linear System Of Equation

(1)Gauss Elimination Method

In linear algebra, Gaussian elimination (also known as row reduction) is an algorithm for solving
systems of linear equations. It is usually understood as a sequence of operations performed on
the associated matrix of coefficients. This method can also be used to find the rank of a matrix,
to calculate the determinant of a matrix, and to calculate the inverse of an invertible square
matrix. The method is named after Carl Friedrich Gauss, although it was known to Chinese
mathematicians as early as 179 AD .

To perform row reduction on a matrix, one uses a sequence of elementary row operations to
modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as is
possible.

There are three types of elementary row operations:

1) Swapping two rows,

2) Multiplying a row by a non-zero number,

3) Adding a multiple of one row to another row.

Using these operations, a matrix can always be transformed into an upper triangular matrix, and
in fact one that is in row echelon form. Once all of the leading coefficients (the left-most non-
zero entry in each row) are 1, and every column containing a leading coefficient has zeros
elsewhere, the matrix is said to be in reduced row echelon form. This final form is unique; in
other words, it is independent of the sequence of row operations used. In a typical sequence of
row operations (where multiple elementary operations might be done at each step), the matrix is
first brought to row echelon form and then to its unique reduced row echelon form.
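
As a small illustration (not the project's general routine, which appears in the CODING section), the sketch below applies forward elimination and back substitution to the 3x3 system used in the GAUSS ELIMINATION METHOD sample output later in the report (4x+3y-2z=5, x+y+z=3, 3x-2y+z=2, whose solution is x=y=z=1). It assumes non-zero pivots and does no pivoting.

#include <stdio.h>

int main(void)
{
    /* augmented matrix of 4x+3y-2z=5, x+y+z=3, 3x-2y+z=2 */
    double m[3][4] = { {4, 3, -2, 5}, {1, 1, 1, 3}, {3, -2, 1, 2} };
    double x[3];
    int i, j, k;

    for (i = 0; i < 2; i++)                  /* eliminate entries below row i */
        for (j = i + 1; j < 3; j++)
        {
            double factor = m[j][i] / m[i][i];
            for (k = i; k < 4; k++)
                m[j][k] -= factor * m[i][k];
        }
    for (i = 2; i >= 0; i--)                 /* back substitution */
    {
        double sum = m[i][3];
        for (j = i + 1; j < 3; j++)
            sum -= m[i][j] * x[j];
        x[i] = sum / m[i][i];
    }
    printf("x = %.2f, y = %.2f, z = %.2f\n", x[0], x[1], x[2]);   /* 1.00, 1.00, 1.00 */
    return 0;
}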

(2)Gauss Jacobi Method

In numerical linear algebra, the Jacobi method is an algorithm for determining the solutions of a
diagonally dominant system of linear equations. Each diagonal element is solved for, and an
approximate value is plugged in. The process is then iterated until it converges. This algorithm is
a stripped-down version of the Jacobi transformation method of matrix diagonalization. The
method is named after Carl Gustav Jacob Jacobi.

Given a square system of n linear equations Ax = b, the matrix A can be decomposed into a
diagonal component D and the remainder R, so that A = D + R. The solution is then obtained
iteratively via the element-based formula

xi(k+1) = ( bi - Σ (j ≠ i) aij xj(k) ) / aii ,   i = 1, 2, . . ., n.

Note that the computation of xi(k+1) requires each element in x(k) except itself. Unlike the
Gauss–Seidel method, we can't overwrite xi(k) with xi(k+1), as that value will be needed by the
rest of the computation. The minimum amount of storage is two vectors of size n.
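
A minimal sketch of this update is shown below (illustrative only; the interactive gaussjacobi() routine is in the CODING section). The 3x3 diagonally dominant system used here is an arbitrary example chosen for illustration, with exact solution (1, 1, 1).

#include <stdio.h>

int main(void)
{
    double a[3][3] = { {10, 1, 1}, {2, 10, 1}, {2, 2, 10} };
    double b[3] = { 12, 13, 14 };
    double x[3] = { 0, 0, 0 }, xnew[3];
    int i, j, k;

    for (k = 0; k < 25; k++)                 /* fixed number of Jacobi sweeps */
    {
        for (i = 0; i < 3; i++)
        {
            double s = b[i];
            for (j = 0; j < 3; j++)
                if (j != i)
                    s -= a[i][j] * x[j];     /* uses only the old vector x(k) */
            xnew[i] = s / a[i][i];
        }
        for (i = 0; i < 3; i++)
            x[i] = xnew[i];                  /* copy: x(k) must not be overwritten early */
    }
    printf("x = %.4f %.4f %.4f\n", x[0], x[1], x[2]);   /* tends to 1, 1, 1 */
    return 0;
}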

(3) Gauss Jordan Method


This method is a variation of the Gaussian elimination method. In this
method, the elements above and below the diagonal are simultaneously
made zero, and thereby the given system is reduced to an equivalent
diagonal form using elementary transformations. The solution of the
resulting diagonal system can then be readily obtained.
Sometimes we normalize the pivot row with respect to the pivot
element before elimination. Partial pivoting is also used whenever
the pivot element becomes zero.
Method For Solving Ordinary Differential Equation

(1)Euler Method
In mathematics and computational science, the Euler method is a first-
order numerical procedure for solving ordinary differential equations (ODEs) with a
given initial value. It is the most basic explicit method for the numerical integration of ordinary
differential equations and is the simplest Runge–Kutta method. The Euler method is
named after Leonhard Euler, who treated it in his book Institutionum calculi integralis
(published 1768–70).
The Euler method is a first-order method, which means that the local error (error per step) is
proportional to the square of the step size, and the global error (error at a given time) is
proportional to the step size. The Euler method often serves as the basis to construct more
complex methods.
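
A minimal sketch of a few Euler steps, y(i+1) = yi + h f(xi, yi), is given below (illustrative only; the project's interactive euler() routine is in the CODING section). It assumes f(x,y) = x*y with y(1) = 5 and h = 0.1, which appears to be the data used in the euler() sample output recorded there.

#include <stdio.h>

#define F(x, y) ((x) * (y))      /* assumed right-hand side dy/dx = x*y */

int main(void)
{
    double x = 1.0, y = 5.0, h = 0.1;
    int i;
    for (i = 0; i < 5; i++)
    {
        y = y + h * F(x, y);     /* one explicit Euler step */
        x = x + h;
        printf("y(%.1f) = %.3f\n", x, y);
    }
    return 0;
}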

(2)Modified Euler method


The Euler forward scheme may be very easy to implement but it cannot give very accurate
solutions; a very small step size is required for any meaningful result. In this scheme, since the
starting point of each sub-interval is used to find the slope of the solution curve, the solution
would be exact only if the function is linear. An improvement over this is to take the
arithmetic average of the slopes at xi and xi+1 (that is, at the end points of each sub-interval). The
scheme so obtained is called the modified Euler method. It works by first approximating a value
of yi+1 and then improving it by making use of the average slope.

yi+1 = yi+ h/2 (y'i + y'i+1)


= yi + h/2(f(xi, yi) + f(xi+1, yi+1))
If Euler's method is used to find the first approximation of yi+1 then
yi+1 = yi + 0.5h(fi + f(xi+1, yi + hfi))
Truncation error:
yi+1 = yi + h y'i + h^2 y''i /2 + h^3 y'''i /3! + h^4 y(iv)i /4! + . . .
fi+1 = y'i+1 = y'i + h y''i + h^2 y'''i /2 + h^3 y(iv)i /3! + . . .
Substituting these expansions in the modified Euler formula gives
yi + h y'i + h^2 y''i /2 + h^3 y'''i /3! + . . . = yi + (h/2) ( y'i + y'i + h y''i + h^2 y'''i /2 + h^3 y(iv)i /3! + . . . )
So the truncation error is - h^3 y'''i /12 - h^4 y(iv)i /24 + . . . ; that is, the modified Euler method is of
order two.
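
A minimal sketch of the predict-and-correct cycle described above is given below (illustrative only; the full interactive modifiedeuler() routine is in the CODING section). It assumes f(x,y) = x^2 + y with y(0) = 1 and h = 0.05, which reproduces the values shown in the modifiedeuler() sample output.

#include <stdio.h>
#include <math.h>

#define F(x, y) ((x) * (x) + (y))     /* assumed right-hand side dy/dx = x^2 + y */

int main(void)
{
    double x = 0.0, y = 1.0, h = 0.05, yp, yc;
    int i;
    for (i = 0; i < 2; i++)
    {
        yp = y + h * F(x, y);                               /* Euler predictor y(i+1),0 */
        do
        {
            yc = y + (h / 2.0) * (F(x, y) + F(x + h, yp));  /* average-slope corrector */
            if (fabs(yc - yp) < 0.00001) break;             /* successive corrections agree */
            yp = yc;
        } while (1);
        x = x + h;
        y = yc;
        printf("y(%.2f) = %.3f\n", x, y);   /* 1.051 at x = 0.05, 1.106 at x = 0.10 */
    }
    return 0;
}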
(3)Adams-BashForth and Adams-Moulton methods

Given an initial value problem: y ' = f(x,y), y(x0) = y0 together with additional starting values y1
= y(x0 + h), . . . , yk-1 = y(x0 + (k-1) h) the k-step Adams-Bashforth method is an explicit
linear multistep method that approximates the solution, y(x) at x = x0+kh, of the initial value
problem by
yk = yk - 1 + h * ( a0 f(xk - 1,yk - 1) + a1 f(xk - 2,yk - 2) + . . . + ak - 1 f(x0,y0) )where a0, a1,
. . . , ak - 1 are constants.
The constants ai can be determined by assuming that the linear expression is exact for
polynomials in x of degree k - 1 or less, in which case the order of the Adams-Bashforth
method is k. The major advantage of the Adams-Bashforth method over the Runge-Kutta
methods is that only one evaluation of the integrand f(x,y) is performed for each step.
The (k-1)-step Adams-Moulton method is an implicit linear multistep method that iteratively
approximates the solution, y(x) at x = x0+kh, of the initial value problem by
yk = yk-1 + h * ( b0 f(xk, yk) + b1 f(xk-1, yk-1) + . . . + bk-1 f(x1, y1) ), where b0, b1, . . ., bk-1
are constants.
The constants bi can be determined by assuming that the linear expression is exact for
polynomials in x of degree k - 1 or less, in which case the order of the Adams-Moulton
method is k.
In order to start the Adams-Moulton iterative method, the Adams-Bashforth method is used to
generate an initial estimate for yk. The Adams-Moulton formula is then applied repeatedly to
generate successive estimates for yk. The process converges provided that the
step size h is chosen so that | h ∂f/∂y (x,y) | < 1 over the region of interest, where ∂f/∂y denotes the
partial derivative of f with respect to y.
The Adams-Bashforth method forms the predictor and Adams-Moulton method forms the
corrector for a predictor-corrector multistep procedure for approximating the solution of a
differential equation given historical values. Usually a k-step Adams-Bashforth method is
paired with a (k-1)-step Adams-Moulton method, but this is not necessary; it is possible to pair
any k-step Adams-Bashforth method with any l-step Adams-Moulton method.
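
The following sketch shows one predictor-corrector step with the common 4-step Adams-Bashforth predictor and the matching Adams-Moulton corrector (illustrative only; the interactive admosbashfourth() routine is in the CODING section). It assumes dy/dx = 1 + y^2 (the F(x,y) macro active in the CODING section) and the starting values from that routine's sample run; the exact solution is y = tan(x), so y(0.8) is about 1.0296.

#include <stdio.h>

#define F(x, y) (1.0 + (y) * (y))     /* assumed right-hand side dy/dx = 1 + y^2 */

int main(void)
{
    double h = 0.2;
    double x[4] = { 0.0, 0.2, 0.4, 0.6 };
    double y[4] = { 0.0, 0.2027, 0.4228, 0.6841 };   /* starting values y0..y3 */
    double f0 = F(x[0], y[0]), f1 = F(x[1], y[1]);
    double f2 = F(x[2], y[2]), f3 = F(x[3], y[3]);
    double yp, yc;

    /* Adams-Bashforth predictor: y4 = y3 + h/24 (55 f3 - 59 f2 + 37 f1 - 9 f0) */
    yp = y[3] + (h / 24.0) * (55.0 * f3 - 59.0 * f2 + 37.0 * f1 - 9.0 * f0);
    /* Adams-Moulton corrector:   y4 = y3 + h/24 (9 f(x4, yp) + 19 f3 - 5 f2 + f1) */
    yc = y[3] + (h / 24.0) * (9.0 * F(0.8, yp) + 19.0 * f3 - 5.0 * f2 + f1);
    printf("predicted y(0.8) = %.3f   corrected y(0.8) = %.3f\n", yp, yc);  /* 1.023, 1.030 */
    return 0;
}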
Method For Solving Integration Equation

(1)Trapeziodal Rule

In numerical analysis, the trapezoidal rule (also known as the trapezoid rule or trapezium rule)
is a technique for approximating the definite integral of a function f(x) over an interval [a, b].
The trapezoidal rule works by approximating the region under the graph of the function as a
trapezoid and calculating its area. It follows that

∫ from a to b of f(x) dx ≈ (b - a) ( f(a) + f(b) ) / 2.

Error analysis
The error of the composite trapezoidal rule is the difference between the value of the integral and
the numerical result. There exists a number ξ between a and b such that

Error = -((b - a) h^2 / 12) f''(ξ),   where h is the width of each sub-interval.

It follows that if the integrand is concave up (and thus has a positive second derivative), then the
error is negative and the trapezoidal rule overestimates the true value. The trapezoids include all
of the area under the curve and extend over it. Similarly, a concave-down function yields an
underestimate because area is unaccounted for under the curve, but none is counted above. If the
interval of the integral being approximated includes an inflection point, the error is harder to
identify.

(2)Simpson's Rule

In numerical analysis, Simpson's rule is a method for numerical integration, the numerical
approximation of definite integrals. Specifically, it is the following approximation:

∫ from a to b of f(x) dx ≈ ((b - a)/6) [ f(a) + 4 f((a+b)/2) + f(b) ].

Simpson's rule also corresponds to the three-point Newton-Cotes quadrature rule. The method
is credited to the mathematician Thomas Simpson (1710–1761) of Leicestershire,
England. Kepler used similar formulas over 100 years prior. In German, the method is
sometimes called Keplersche Fassregel for this reason. Simpson's rule is a staple of
scientific data analysis and engineering.

Simpson's 1/3 Rule for Integration


As the trapezoidal rule for integration finds the area under the line connecting the endpoints of a
panel, Simpson's rule finds the area under the parabola which passes through 3 points (the
endpoints and the midpoint) on a curve. In essence, the rule approximates the curve by a series of
parabolic arcs and the area under the parabolas is approximately the area under the curve. There
is a unique curve with the equation

y = ax2 + bx + c passing through the points (-x,y0), (0,y1), and (x,y2). There is a unique solution
for a, b, and c generated by the three equations:

y0 = a(-x)2 + b(-x) + c

y1 = c

y2 = a(x)2 + b(x) + c

The area under the curve from -x to x is

∫ from -x to x of (at^2 + bt + c) dt = (x/3) [ 2ax^2 + 6c ],

but the part in the square brackets can be rewritten as y0 + 4y1 + y2, and so

Area = (x/3) ( y0 + 4y1 + y2 ).

For the adjoining parabola, y2 is a collocation point; it is evaluated twice. The number of
collocation points is one less than the number of parabolas. The series of coefficients for the yi's
for N points is then

point:  0 1 2 3 4 5 6 7 ... N-3 N-2 N-1
coeff.: 1 4 2 4 2 4 2 4 ...  2   4   1
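
As a minimal sketch of both composite rules (illustrative only; the project's trapezoidal() and simsonrule() routines are in the CODING section), the code below integrates f(x) = 1/(1+x) over [0,1] with 4 sub-intervals. This appears to be the same tabulated data as the simsonrule() sample run, whose result 0.69325 is close to the exact value ln 2 = 0.693147.

#include <stdio.h>
#include <math.h>

double f(double x) { return 1.0 / (1.0 + x); }

int main(void)
{
    int i, n = 4;                            /* number of sub-intervals (even for Simpson) */
    double a = 0.0, b = 1.0, h = (b - a) / n;
    double trap = f(a) + f(b), simp = f(a) + f(b);

    for (i = 1; i < n; i++)
    {
        trap += 2.0 * f(a + i * h);                     /* interior points: weight 2 */
        simp += (i % 2 ? 4.0 : 2.0) * f(a + i * h);     /* weights 4, 2, 4, ... */
    }
    trap *= h / 2.0;
    simp *= h / 3.0;
    printf("trapezoidal = %.6f  Simpson = %.6f  exact = %.6f\n", trap, simp, log(2.0));
    return 0;
}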
BISECTION METHOD
Bisection method is the simplest among all the numerical schemes to solve the transcendental
equations. This scheme is based on the intermediate value theorem for continuous functions .

Consider a transcendental equation f (x) = 0 which has a zero in the interval [a,b] and f (a) * f
(b) < 0. Bisection scheme computes the zero, say c, by repeatedly halving the interval [a,b].
That is, starting with

c = (a+b) / 2

the interval [a,b] is replaced either with [c,b] or with [a,c] depending on the sign of f (a) * f (c) .
This process is continued until the zero is obtained. Since the zero is obtained numerically the
value of c may not exactly match with all the decimal places of the analytical solution of f (x) = 0
in the interval [a,b]. Hence any one of the following mechanisms can be used to stop the
bisection iterations :

C1. Fixing a priori the total number of bisection iterations N; the length of the interval (and
hence the maximum error) after N iterations is in this case less than | b-a | / 2^N.

C2. By testing the condition | ci - ci-1 | (where i is the iteration number) less than some
tolerance limit, say epsilon, fixed a priori.

C3. By testing the condition | f (ci ) | less than some tolerance limit alpha again fixed
a priori.

Algorithm - Bisection Scheme

Given a function f (x) continuous on an interval [a,b] and f (a) * f (b) < 0
Do
c = (a+b)/2
if f (a) * f (c) < 0 then b = c
else a = c
while (none of the convergence criteria C1, C2 or C3 is satisfied)

Numerical Example :

Find a root of f (x) = 3x + sin(x) - exp(x)= 0.

The graph of this equation is given in the figure.

It is clear from the graph that there are two roots: one lies
between 0 and 0.5 and the other lies between 1.5 and 2.0.

Consider the function f (x) in the interval [0, 0.5] since f (0) * f (0.5) is less than zero.

Then the bisection iterations are given by

Iteration No.    a        b        c        f(a) * f(c)
1                0        0.5      0.25     0.287 (+ve)
2                0.25     0.5      0.393    -0.015 (-ve)
3                0.25     0.393    0.34     9.69 E-3 (+ve)
4                0.34     0.393    0.367    -7.81 E-4 (-ve)
5                0.34     0.367    0.354    8.9 E-4 (+ve)
6                0.354    0.367    0.3605   -3.1 E-6 (-ve)
So one of the roots of 3x + sin(x) - exp(x) = 0 is approximately 0.3605
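
A minimal sketch of the bisection scheme for this same example, f(x) = 3x + sin(x) - exp(x) on [0, 0.5], is shown below (illustrative only; the project's menu-driven bisectionmethod() routine in the CODING section uses a different equation). It uses stopping criterion C2 with epsilon = 0.0001.

#include <stdio.h>
#include <math.h>

double f(double x) { return 3.0 * x + sin(x) - exp(x); }

int main(void)
{
    double a = 0.0, b = 0.5, c, c_old = 0.0;
    do
    {
        c = (a + b) / 2.0;                 /* midpoint of the bracketing interval */
        if (f(a) * f(c) < 0.0)
            b = c;
        else
            a = c;
        if (fabs(c - c_old) < 0.0001) break;
        c_old = c;
    } while (1);
    printf("root = %.4f\n", c);            /* about 0.3605 */
    return 0;
}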
Method For Solving Interpolation method

(1) Newton Forward Difference Interpolation Formula

Making use of the forward difference operator and the forward difference table (defined a
little later), this scheme simplifies the calculations involved in the polynomial approximation of
functions which are known at equally spaced data points.
Consider the equation of the linear interpolation obtained earlier:
f(x) ≈ P1(x) = ax + b = ((f1 - f0)/(x1 - x0)) x + (f0 x1 - f1 x0)/(x1 - x0)
             = 1/(x1 - x0) [ (x1 - x) f0 + (x - x0) f1 ]
             = f0 + ((x - x0)/(x1 - x0)) (f1 - f0)
             = f0 + r Δf0,   where r = (x - x0)/(x1 - x0) and Δf0 = f1 - f0.
Since x1 - x0 is the step length h, r can be written as (x - x0)/h and will lie between 0 and 1.
Error in the linear interpolation :
If e(x) is the error in the linear interpolation, then
e(x) = P1(x) - f(x) = f0 + r (f1 - f0) - f(x).
By Taylor's theorem,
f(x) = f(x0 + r h) = f0 + r h f0' + (1/2) r^2 h^2 f''(t),   x0 < t < x1
f1 = f(x1) = f(x0 + h) = f0 + h f0' + (1/2) h^2 f''(u),     x0 < u < x1
=> e(x) = f0 + r ( h f0' + (1/2) h^2 f''(u) ) - ( f0 + r h f0' + (1/2) r^2 h^2 f''(t) )
        = (1/2) h^2 ( r f''(u) - r^2 f''(t) ).
Let us assume that the second derivative of the function is bounded, such that | f''(x) | < M2.
Since r < 1, we have | e(x) | < (1/2) h^2 ( M2 + M2 ) = h^2 M2.
The general formula below is very convenient for finding the function value at various points if
the forward differences at those points are available.

Newton's forward difference formula :
Let the function f be known at n+1 equally spaced data points
a = x0 < x1 < . . . < xn = b in the interval [a,b], with values f0, f1, . . ., fn. Then the nth degree
polynomial approximation of f(x) can be given as

f(x) ≈ Pn(x) = Σ (i = 0 to n) C(r, i) Δ^i f0,

where r = (x - x0)/h, so that x = x0 + r h and 0 < r < n, and the binomial coefficients are defined
as C(r, 0) = 1 and C(r, i) = r (r - 1) . . . (r - i + 1) / i! for any integer i > 0.

Proof :
To prove that the given result is the nth degree polynomial approximation of f(x), it is
sufficient to prove that at each node, i.e., at x = xi, the polynomial approximation Pn(x) gives fi,
since the curve f(x) passes through the node points xi, i = 0, 1, . . ., n.
Take r = k, so that x = x0 + r h = x0 + k h = xk. Then
Pn(xk) = Σ (i = 0 to k) C(k, i) Δ^i f0 = f0 + k Δf0 + . . . + C(k, k) Δ^k f0;
the terms after i = k need not be considered, since C(k, i) = 0 for i > k.
We can prove the result by mathematical induction. Consider k = 0: Pn(x0) = f0, hence the result
is true for k = 0. Assume that the result is true up to k = 1, 2, . . ., p, that is,
fp = f0 + p Δf0 + . . . + C(p, p) Δ^p f0.
Consider
fp+1 = fp + Δfp
     = [ C(p,0) f0 + C(p,1) Δf0 + . . . + C(p,p) Δ^p f0 ] + [ C(p,0) Δf0 + . . . + C(p,p-1) Δ^p f0 + C(p,p) Δ^(p+1) f0 ]
     = Σ (i = 0 to p+1) C(p+1, i) Δ^i f0,
since C(p, i) + C(p, i-1) = C(p+1, i). Hence the nth degree polynomial approximation for f(x)
can be written as

f(x) ≈ Pn(x) = f0 + r Δf0 + r(r-1)/2! Δ^2 f0 + . . . + r(r-1) . . . (r-n+1)/n! Δ^n f0.

This formula is called Newton's (Newton-Gregory) forward interpolation formula.
So if we know the forward differences of f at x0 up to order n, then
the above formula is easy to use to find the value of f at any
non-tabulated point x in the interval [a,b]. The higher order forward
differences can be obtained by making use of the forward difference table.
Forward difference table :
Consider the function values (xi, fi), i = 0, 1, 2, 3, 4; then the forward difference table is

xi    fi       Δfi      Δ^2 fi    Δ^3 fi    Δ^4 fi
x0    f0
               Δf0
x1    f1                Δ^2 f0
               Δf1                 Δ^3 f0
x2    f2                Δ^2 f1               Δ^4 f0
               Δf2                 Δ^3 f1
x3    f3                Δ^2 f2
               Δf3
x4    f4
Example : If f(x) is known at the following data points
xi : 0    1    2    3    4
fi : 1    7    23   55   109
then find f(0.5) and f(1.5) using Newton's forward difference formula.
Solution :
Forward difference table

xi    fi       Δfi      Δ^2 fi    Δ^3 fi    Δ^4 fi
0     1
               6
1     7                 10
               16                  6
2     23                16                   0
               32                  6
3     55                22
               54
4     109

(Note : the given data satisfies f(x) = x^3 + 2x^2 + 3x + 1, i.e. the function is a third
degree polynomial, and hence the third forward differences are constant.)
By Newton's forward difference formula
f(x) = f0 + r Δf0 + r(r-1)/2! Δ^2 f0 + r(r-1)(r-2)/3! Δ^3 f0
At x = 0.5, r = (x - x0) / h = (0.5 - 0) / 1 = 0.5, so
f(0.5) = 1 + 0.5 x 6 + (0.5 (0.5 - 1) / 2) x 10 + (0.5 (0.5 - 1)(0.5 - 2) / 6) x 6
       = 1 + 3 - 1.25 + 0.375
       = 3.125
Exact value : f(0.5) = (0.5)^3 + 2(0.5)^2 + 3(0.5) + 1
= 0.125 + 0.5 + 1.5 + 1 = 3.125
Error in the Interpolation :
En(x) = (x - x0)(x - x1) . . . (x - xn) f^(n+1)(ξ) / (n+1)!,   x0 < ξ < xn
So for Newton's formula, where the nodal points xi, i = 0, 1, . . ., n are equally spaced, the error is
En(x) = (x - x0)(x - x0 - h) . . . (x - x0 - nh) f^(n+1)(ξ) / (n+1)!
      = r (r-1) . . . (r-n) h^(n+1) f^(n+1)(ξ) / (n+1)!
      = C(r, n+1) h^(n+1) f^(n+1)(ξ)

(2)Newton Divided Difference Interpolation Formula

Let us assume that the function f(x) is linear. Then [ f(xi) - f(xj) ] / (xi - xj),
where xi and xj are any two tabular points, is independent of xi and xj.
This ratio is called the first divided difference of f(x) relative to xi and xj and is denoted by
f [xi, xj]. That is,
f [xi, xj] = ( f(xi) - f(xj) ) / (xi - xj) = f [xj, xi]
Since the ratio is independent of xi and xj, we can write
f [x0, x] = f [x0, x1]
( f(x) - f(x0) ) / (x - x0) = f [x0, x1]
f(x) = f(x0) + (x - x0) f [x0, x1]
So if f(x) is approximated with a linear polynomial, then the function value
at any point x can be calculated by using
f(x) ≈ P1(x) = f(x0) + (x - x0) f [x0, x1],
where f [x0, x1] is the first divided difference of f relative to x0 and x1.

Similarly, if f(x) is a second degree polynomial, then the secant slope

defined above is not constant but a linear function of x. Hence
( f [x1, x2] - f [x0, x1] ) / (x2 - x0)
is independent of x0, x1 and x2. This ratio is defined as the second divided
difference of f relative to x0, x1 and x2, and is denoted by
f [x0, x1, x2] = ( f [x1, x2] - f [x0, x1] ) / (x2 - x0)

Now again, since f [x0, x1, x2] is independent of x0, x1 and x2, we have

f [x1, x0, x] = f [x0, x1, x2]
( f [x0, x] - f [x1, x0] ) / (x - x1) = f [x0, x1, x2]
f [x0, x] = f [x0, x1] + (x - x1) f [x0, x1, x2]
( f [x] - f [x0] ) / (x - x0) = f [x0, x1] + (x - x1) f [x0, x1, x2]
f(x) = f [x0] + (x - x0) f [x0, x1] + (x - x0) (x - x1) f [x0, x1, x2]
This is equivalent to the second degree polynomial approximation
passing through the three data points (x0, f0), (x1, f1) and (x2, f2).

So whenever f(x) is approximated with a second degree polynomial, the
value of f(x) at any point x can be computed using the above
polynomial. In the same way, if we define the kth divided
difference recursively by the relation
f [x0, x1, . . ., xk] = ( f [x1, x2, . . ., xk] - f [x0, x1, . . ., xk-1] ) / (xk - x0)
the kth degree polynomial approximation to f(x) can be written as
f(x) = f [x0] + (x - x0) f [x0, x1] + (x - x0) (x - x1) f [x0, x1, x2] + . . . + (x - x0) (x
- x1) . . . (x - xk-1) f [x0, x1, . . ., xk].
This formula is called Newton's Divided Difference Formula. Once we
have the divided differences of the function f relative to the tabular points,
we can use the above formula to compute f(x) at any non-tabular
point.
Computing divided differences using the divided difference table : Let us
consider the points (x1, f1), (x2, f2), (x3, f3) and (x4, f4), where x1, x2, x3
and x4 are not necessarily equidistant points.
Then the divided difference table can be written as

xi    fi       f [xi, xj]      f [xi, xj, xk]      f [xi, xj, xk, xl]
x1    f1
               f [x1, x2]
x2    f2                       f [x1, x2, x3]
               f [x2, x3]                           f [x1, x2, x3, x4]
x3    f3                       f [x2, x3, x4]
               f [x3, x4]
x4    f4
Example : Compute f(0.3) for the data
x : 0    1    3    4    7
f : 1    3    49   129  813
using Newton's divided difference formula.
Solution : Divided difference table

xi    fi      1st      2nd      3rd      4th
0     1
              2
1     3                7
              23                 3
3     49               19                 0
              80                 3
4     129              37
              228
7     813

Now Newton's divided difference formula is
f(x) = f [x0] + (x - x0) f [x0, x1] + (x - x0) (x - x1) f [x0, x1, x2] + (x - x0) (x - x1)
(x - x2) f [x0, x1, x2, x3]
f(0.3) = 1 + (0.3 - 0) x 2 + (0.3)(0.3 - 1) x 7 + (0.3)(0.3 - 1)(0.3 - 3) x 3 = 1.831
Since the given data is for the polynomial f(x) = 3x^3 - 5x^2 + 4x + 1,
the analytical value is f(0.3) = 1.831.
The analytical value matches the computed value because the given data
comes from a third degree polynomial, and five data points are available,
with which one can reproduce exactly any polynomial of degree up to four.
Properties :
(▪) If f(x) is a polynomial of degree N, then the Nth divided difference of f(x) is a constant.
CODING
#include <conio.h>
#include<stdio.h>
#include<process.h>
#include<stdlib.h>
#include<math.h>
# define MAX 10
#define ESP 0.0001
#define X1(x2,x3) ((17 - 20*(x2) + 2*(x3))/20)
#define X2(x1,x3) ((-18 - 3*(x1) + (x3))/20)
#define X3(x1,x2) ((25 - 2*(x1) + 3*(x2))/20)
//#define F(x) (x)*(x)*(x) + (x)*(x) + (x) + 7 // bisection
//dy/dx = f(x,y): keep exactly one of the following right-hand sides active
//#define F(x,y) (x)*(y)
//#define F(x,y) (x)*(x)+(y)
#define F(x,y) 1 + (y)*(y)
void algebricandtrans();
void linearsysequation();
void ordinarydiffequation();
void interpolationeq();
void numericalintegrationseq();
int user_power,i=0,cnt=0,flag=0;
int coef[10]={0};
float x1,x2,x3=0,t=0;
float fx1=0,fx2=0,fx3=0,temp=0;
/*
int user_power,i=0,cnt=0,flag=0;
int coef[10]={0};
float x1,x2,x3=0,t=0;
float fx1=0,fx2=0,fx3=0,temp=0;
*/
void main(void)
{
int i=0,ch;
clrscr();
for (;;)
{
//window(10,10,60,15);
//textbackground(RED);
//textcolor(WHITE+BLINK);
gotoxy(44,2);
cprintf("KEERTI SINGH::NUMERICAL PROGRAMMING");
// textattr(i+((i+1) << 4));
printf("\n-----MAIN MENU------\n\r");//textattr(i + ((i+1) << 4));i++;
printf("\n1.SOLUTIONN 0F ALGEBRIC AND TRANSCEDENTAL EQ\r");//textattr(i
+ ((i+1) << 4));i++;
printf("\n2.SOLUTION OF LINEAR SYSTEM OF EQUATION\r");//textattr(i +
((i+1) << 4));i++;
printf("\n3.ORDINARY DIFFERENTIAL EQ\r"); //textattr(i + ((i+1) << 4));i++;
printf("\n4.INTERPOLATION EQUATIONS\r"); //textattr(i + ((i+1) << 4));i++;
printf("\n5.NUMERICAL INTEGRATIONS EQUATIONS\r"); //textattr(i + ((i+1) <<
4));i++;
printf("\n6.EXIT\r");// textattr(i + ((i+1) << 4));i++;
printf("\nPLEASE ENTER YOUR CHOICE :");//textattr(i + ((i+1) << 4));i++;
scanf("%d",&ch);
switch(ch)
{
case 1: algebricandtrans();break;
case 2: linearsysequation();break;
case 3: ordinarydiffequation();break;
case 4: interpolationeq();break;
case 5: numericalintegrationseq();break;
case 6: exit(0);
}
i=0;
}
getch();
}
void algebricandtrans()
{
void bisectionmethod();
void mullermethod();
void regulafalsi();
int ch;
clrscr();
while(1)
{
clrscr();
printf("\n--WELCOME TO ALGEBRIC AND TRANSCENDENTAL EQUATION");
printf("\n1.MULLER METHOD");
printf("\n2.FALSE POSITION METHOD");
printf("\n3.BISECTION METHOD");
printf("\n4.RETURN TO MAIN MENU");
printf("\nPLEASE ENTER YOUR CHOICE :");
scanf("%d",&ch);
switch(ch)
{
case 1: mullermethod();break;
case 2: regulafalsi();break;
case 3: bisectionmethod();break;
case 4: main();
default :printf("\nWRONG CHOICE.TRY AGAIN");
}
}
}
void linearsysequation()
{
void gausseliminationmethod();
void gaussjacobi();
void gaussjorden();
int ch;
clrscr();
while(1)
{
clrscr();
printf("\n--WELCOME TO SOLUTION OF LINEAR SYSTEM OF EQUATION");
printf("\n1.GAUSS ELIMINATION METHOD");
printf("\n2.GAUSS JACOBI METHOD");
printf("\n3.GAUSS JORDAN ELIMINATION METHOD");
printf("\n4.RETURN TO MAIN MENU");
printf("\nPLEASE ENTER YOUR CHOICE :");
scanf("%d",&ch);
switch(ch)
{
case 1: gausseliminationmethod();break;
case 2: gaussjacobi();break;
case 3: gaussjorden();break;
case 4: main();
default :printf("\nWRONG CHOICE.TRY AGAIN");
}
}

getch();
}
void gausseliminationmethod()
{
int i,j,n,k;
float mat[MAX][MAX],x[MAX],temp,pivot,sum=0;
clrscr();
printf("\t\t\t GAUSS ELIMINITION METHOD\n");
printf("-------------------------------------------------------------------\n");
printf("Enter No of Equtions : ");
scanf("%d",&n);
printf("Enter Coefficients of Eqution \n");
for(i=1;i<=n;i++)
for(j=1;j<=n;j++)
scanf("%f",&mat[i][j]);
printf("Enter Constant value\n");
for(i=1;i<=n;i++)
{
scanf("%f",&mat[i][n+1]);
x[i]=mat[i][n+1];
}
for(i=2;i<=n;i++)
{
for(j=i;j<=n;j++)
{
pivot=mat[j][i-1]/mat[i-1][i-1];
for(k=i-1;k<=n+1;k++)
mat[j][k]=mat[j][k]-pivot*mat[i-1][k];
}
}
printf("Eliminated matrix as :- \n");
for(i=1;i<=n;i++)
{
for(j=1;j<=n+1;j++)
printf("\t%.2f",mat[i][j]);
printf("\n");
}
for(i=1;i<=n;i++)
{
if(mat[i][i]==0)
{
printf("Since diagonal element become zero\n Hence solution is not possible\n");
exit(1);
}
}
printf("Solution : \n");
for(i=0;i<n;i++) {
sum=0;
for(j=n;j>n-i;j--)
sum=sum+mat[n-i][j]*x[j];   /* add the contributions of the already-computed unknowns */

x[n-i]=(mat[n-i][n+1]-sum)/mat[n-i][n-i];
printf("X%d = %4.2f\n",n-i,x[n-i]);
}

getch();
/*
OUTPUT

GAUSS ELIMINITION METHOD


--------------------------------------------------------------------Enter No of Equtions : 3
Enter Coefficients of Eqution
4 3 -2
111
3 -2 1
Enter Constant value
532
Eliminated matrix as :-
4.00 3.00 -2.00 5.00
0.00 0.25 1.50 1.75
0.00 0.00 28.00 28.00
Solution :
X3 = 1.00
X2 = 1.00
X1 = 1.00
*/
}
void gaussjacobi()
{
int i,j,m,n,l;
float x[10],a[10][10],b[10],c[10];
printf("\nEnter the value of n : \n");
scanf("%d",&n);
printf("\nEnter the number of iterations : \n");
scanf("%d",&l);
printf("\nEnter the right hand side constants :
\n"); for(i=0;i<n;i++) {
scanf("%f",&b[i]);
}
printf("\nEnter the coefficients row wise : \n");
for(i=0;i<n;i++) {
x[i]=0;
for(j=0;j<n;j++) {
scanf("%f",&a[i][j]);
}
}
m=1;
line:
for(i=0;i<n;i++) {
c[i]=b[i];
for(j=0;j<n;j++) {
if(i!=j) {
c[i]=c[i]-a[i][j]*x[j];
}
}
}
for(i=0;i<n;i++) {
x[i]=c[i]/a[i][i];
}
m++;
if(m<=l) {
goto line;
}
else {
printf("\nThe Solution is : \n");
for(i=0;i<n;i++) {
printf("\nx(%d) = %f\n",i,x[i]);
}
}
getch();
}
void gaussjorden()
{
float mat[4][4],temp,temp1,x,y,z;
int i,n=3,j;
clrscr();
for(i=0; i<n; i++)
{
printf("\n\nenter the value of %d eqvation",i+1);
for(j=0; j<n; j++)
{
printf("\nenter the value of coeffcient %d:
",j+1); scanf("%f",&mat[i][j]);
}
printf("\nenter the value of constent: ");
scanf("%f",&mat[i][j]);
}
printf("\n* * * Your Matrix * * *\n\n");
for(i=0;i<n;i++)
{
for(j=0;j<n+1;j++)
{
printf(" %g ",mat[i][j]);
}
printf("\n\n");
}

temp=mat[1][0]/mat[0][0];
temp1=mat[2][0]/mat[0][0];
for(i=0,j=0;j<n+1;j++)
{
mat[i+1][j]=mat[i+1][j]-(mat[i][j]*temp);
mat[i+2][j]=mat[i+2][j]-(mat[i][j]*temp1);
}

temp=mat[2][1]/mat[1][1];
temp1=mat[0][1]/mat[1][1];
for(i=1,j=0;j<n+1;j++)
{
mat[i+1][j]=mat[i+1][j]-(mat[i][j]*temp);
mat[i-1][j]=mat[i-1][j]-(mat[i][j]*temp1);
}

temp=mat[0][2]/mat[2][2];
temp1=mat[1][2]/mat[2][2];
for(i=0,j=0;j<n+1;j++)
{
mat[i][j]=mat[i][j]-(mat[i+2][j]*temp);
mat[i+1][j]=mat[i+1][j]-(mat[i+2][j]*temp1);
}
for(i=0;i<n;i++)
{
for(j=0;j<n+1;j++)
{
printf(" %.3f ",mat[i][j]);
}
printf("\n\n");
}
z = mat[2][3]/mat[2][2];
y = mat[1][3]/mat[1][1];
x = mat[0][3]/mat[0][0];
printf("\n\nx = %.3f",x);
printf("\n\ny = %.3f",y);
printf("\n\nz = %.3f",z);

getch();
/*
______________________________________

OUT PUT
______________________________________

enter the value of 1 eqvation


enter the value of coeffcient 1: 2

enter the value of coeffcient 2: 1

enter the value of coeffcient 3: 1

enter the value of constent: 10


enter the value of 2 eqvation
enter the value of coeffcient 1: 3

enter the value of coeffcient 2: 2

enter the value of coeffcient 3: 3

enter the value of constent: 18

enter the value of 3 eqvation


enter the value of coeffcient 1: 1

enter the value of coeffcient 2: 4

enter the value of coeffcient 3: 9

enter the value of constent: 16

* * * Your Matrix * * *

2 1 1 10

3 2 3 18

1 4 9 16

2.000 0.000 0.000 14.000

0.000 0.500 0.000 -4.500

0.000 0.000 -2.000 -10.000


x = 7.000

y = -9.000

z=
5.000 */
}

void ordinarydiffequation()
{
void euler();
void modifiedeuler();
void admosbashfourth();
int ch;
clrscr();
while(1)
{
clrscr();
printf("\n--WELCOME TO SOLUTION OF ORDINARY DIFFERENTIAL EQUATION--
");
printf("\n1.EULER METHOD");
printf("\n2.MODIFIED EULER METHOD");
printf("\n3.ADOMS BASHFORTH METHOD ");
printf("\n4.RETURN TO MAIN MENU");
printf("\nPLEASE ENTER YOUR CHOICE :");
scanf("%d",&ch);
switch(ch)
{
case 1: euler();break;
case 2: modifiedeuler();break;
case 3: admosbashfourth();break;
case 4: main();
default :printf("\nWRONG CHOICE.TRY AGAIN");
}
}
getch();
}
void euler()
{
double y1,y2,x1,a,n,h;
int j;
clrscr();
printf("\nEnter the value of range: ");
scanf("%lf %lf",&a,&n);
printf("\nEnter the value of y1: ");
scanf("%lf",&y1);
printf("\n\nEnter the h: ");
scanf("%lf",&h);
printf("\n\n y1 = %.3lf ",y1);
for(x1=a,j=2; x1<=n+h; x1=x1+h,j++)
{
y2= y1 + h * F(x1,y1);
printf("\n\n x = %.3lf => y%d = %.3lf ",x1,j,y2);
y1=y2;

}
getch();

/*
OUT PUT
---------
Enter the value of range: 1 1.5

Enter the value of y1: 5


Enter the h: 0.1

y1 = 5.000

x = 1.000 => y2 = 5.500

x = 1.100 => y3 = 6.105

x = 1.200 => y4 = 6.838

x = 1.300 => y5 = 7.726

x = 1.400 => y6 = 8.808

x = 1.500 => y7 = 10.129

*/
}
void modifiedeuler()
{
double y0,x0,y1,x1,y1_0,a,n,h,f,f1;
int j,count,flag;
clrscr();
printf("\nEnter the value of x0: ");
scanf("%lf",&x0);
printf("\nEnter the value of y0: ");
scanf("%lf",&y0);
printf("\nEnter the value of h: ");
scanf("%lf",&h);
printf("\nEnter the value of last point: ");
scanf("%lf",&n);
for(x1=x0+h,j=1; x1<=n+h; x1=x1+h,j++)
{
count=0;
flag=0;
f=F(x0,y0);
y1_0 = y0 + (h * f);
printf("\n\n * * y%d_0 = %.3lf * *",j,y1_0);
do
{
count++;
f=F(x0,y0);
f1=F(x1,y1_0);
y1 = y0 + h/2 * ( f + f1);
printf("\n\n * * x = %.3lf => y%d_%d = %.3lf * *",x1,j,count,y1);
if(fabs(y1-y1_0)<0.00001)
{
printf("\n\n\n\n * * * * y%d = %.3lf * * *
*\n\n",j,y1); flag=1;
}
else
y1_0 = y1;
}while(flag!=1);
y0 = y1;
}
getch();

/*
OUT PUT
---------

Enter the value of x0: 0

Enter the value of y0: 1


Enter the value of h: 0.05

Enter the value of last point: 0.1

* * y1_0 = 1.050 * *

* * x = 0.050 => y1_1 = 1.051 * *

* * x = 0.050 => y1_2 = 1.051 * *

* * x = 0.050 => y1_3 = 1.051 * *

* * * * y1 = 1.051 * * * *

* * y2_0 = 1.104 * *

* * x = 0.100 => y2_1 = 1.105 * *

* * x = 0.100 => y2_2 = 1.106 * *

* * x = 0.100 => y2_3 = 1.106 * *

* * * * y2 = 1.106 * * * *
*/
}
void admosbashfourth()
{
double y0,x0,y1,y[10],xv[10],n,h,f,sum=0;
int j;
clrscr();
printf("\nEnter the value of x0: ");
scanf("%lf",&x0);
printf("\nEnter the value of y0: ");
scanf("%lf",&y0);
printf("\nEnter the value of h: ");
scanf("%lf",&h);
printf("\nEnter the value of X for finding Y(x):
"); scanf("%lf",&n);
for(x0,j=0; x0<n; x0=x0+h,j++)
{
printf("\nEnter the value of Y(%.2lf): ",x0);
scanf("%lf",&y[j]);
}
f=F(x0,y[3]);
sum = sum + 55 * f;
f = F(x0,y[2]);
sum = sum - 59 * f;
f = F(x0,y[1]);
sum = sum + 37 * f;
f = F(x0,y[0]);
sum = sum - 9 * f;
y1 = y[3] + (h/24) * (sum);
printf("\n\n Yp(%.2lf) = %.3lf ",n,y1);
sum = 0;
/* Adams-Moulton corrector: y4 = y3 + h/24 (9 f(x4,yp) + 19 f3 - 5 f2 + f1) */
f = F(n,y1);
sum = sum + 9 * f;
f=F(xv[3],y[3]);
sum = sum + 19 * f;
f = F(xv[2],y[2]);
sum = sum - 5 * f;
f = F(xv[1],y[1]);
sum = sum + f;
y1 = y[3] + (h/24) * (sum);
printf("\n\n Yc(%.2lf) = %.3lf ",n,y1);
getch();
/*
____________________________________
OUT PUT
____________________________________
Enter the value of x0: 0
Enter the value of y0: 0
Enter the value of h: 0.2
Enter the value of X for finding Y(x): 0.8
Enter the value of Y(0.00): 0
Enter the value of Y(0.20): 0.2027
Enter the value of Y(0.40): 0.4228
Enter the value of Y(0.60): 0.6841
Yp(0.80) = 1.023
Yc(0.80) = 1.030
*/

}
void interpolationeq()
{
void newtonforward();
void newtondividedformula();
int ch;
clrscr();
while(1)
{
clrscr();
printf("\n--WELCOME TO INTERPOLATION METHOD--");
printf("\n1.NEWTON'S FORWARD DIFFERENCE INTERPOLATION FORMULA");
printf("\n2.NEWTON'S DIVIDED DIFFERENCE FORMULA");
printf("\n3.RETURN TO MAIN MENU");
printf("\nPLEASE ENTER YOUR CHOICE :");
scanf("%d",&ch);
switch(ch)
{
case 1: newtonforward();break;
case 2: newtondividedformula();break;
case 3: main();
default :printf("\nWRONG CHOICE.TRY AGAIN");
}
}
getch();
}
void newtonforward()
{
float x[10],y[10][10],sum,p,u,temp;
int i,n,j,k=0,f,m;
float fact(int);
clrscr();
printf("\nhow many record you will be enter: ");
scanf("%d",&n);
for(i=0; i<n; i++)
{
printf("\n\nenter the value of x%d: ",i);
scanf("%f",&x[i]);
printf("\n\nenter the value of f(x%d): ",i);
scanf("%f",&y[k][i]);
}
printf("\n\nEnter X for finding f(x): ");
scanf("%f",&p);

for(i=1;i<n;i++)
{
for(j=0;j<n-i;j++)
{
y[i][j]=y[i-1][j+1]-y[i-1][j];
}
}
printf("\n_____________________________________________________\n");
printf("\n x(i)\t y(i)\t y1(i) y2(i) y3(i) y4(i)");
printf("\n_____________________________________________________\n");
for(i=0;i<n;i++)
{
printf("\n %.3f",x[i]);
for(j=0;j<n-i;j++)
{
printf(" ");
printf(" %.3f",y[j][i]);
}
printf("\n");
}

i=0;
do
{
if(x[i]<p && p<x[i+1])
k=1;
else
i++;
}while(k != 1);
f=i;
u=(p-x[f])/(x[f+1]-x[f]);
printf("\n\n u = %.3f ",u);

n=n-i+1;
sum=0;
for(i=0;i<n-1;i++)
{
temp=1;
for(j=0;j<i;j++)
{
temp = temp * (u - j);
}
m=fact(i);
sum = sum + temp*(y[i][f]/m);
}
printf("\n\n f(%.2f) = %f ",p,sum);
getch();
}

float fact(int a)
{
float fac = 1;

if (a == 0)
return (1);
else
fac = a * fact(a-1);

return(fac);

/*
______________________________________
OUT PUT

how many record you will be enter: 5

enter the value of x0: 2

enter the value of f(x0): 9

enter the value of x1: 2.25

enter the value of f(x1): 10.06

enter the value of x2: 2.5

enter the value of f(x2): 11.25

enter the value of x3: 2.75

enter the value of f(x3): 12.56

enter the value of x4: 3


enter the value of f(x4): 14

Enter X for finding f(x): 2.35

_____________________________________________________

x(i) y(i) y1(i) y2(i) y3(i) y4(i)


_____________________________________________________

2.000 9.000 1.060 0.130 -0.010 0.020

2.250 10.060 1.190 0.120 0.010

2.500 11.250 1.310 0.130

2.750 12.560 1.440

3.000 14.000

u = 0.400

f(2.35) = 10.522240

*/
}
void newtondividedformula()
{
float x[10],y[10][10],sum,p,u,temp;
int i,n,j,k=0,f,m;
float fact(int);
clrscr();
printf("\nhow many record you will be enter: ");
scanf("%d",&n);
for(i=0; i<n; i++)
{
printf("\n\nenter the value of x%d: ",i);
scanf("%f",&x[i]);
printf("\n\nenter the value of f(x%d): ",i);
scanf("%f",&y[k][i]);
}
printf("\n\nEnter X for finding f(x): ");
scanf("%f",&p);

for(i=1;i<n;i++)
{
k=i;
for(j=0;j<n-i;j++)
{
y[i][j]=(y[i-1][j+1]-y[i-1][j])/(x[k]-x[j]);
k++;
}
}
printf("\n_____________________________________________________\n");
printf("\n x(i)\t y(i)\t y1(i) y2(i) y3(i) y4(i)");
printf("\n_____________________________________________________\n");
for(i=0;i<n;i++)
{
printf("\n %.3f",x[i]);
for(j=0;j<n-i;j++)
{
printf(" ");
printf(" %.3f",y[j][i]);
}
printf("\n");
}

i=0;
do
{
if(x[i]<p && p<x[i+1])
k=1;
else
i++;
}while(k != 1);
f=i;

sum=0;
for(i=0;i<n-1;i++)
{
k=f;
temp=1;
for(j=0;j<i;j++)
{
temp = temp * (p - x[k]);
k++;
}
sum = sum + temp*(y[i][f]);
}
printf("\n\n f(%.2f) = %f ",p,sum);
getch();

/*
______________________________________

OUT PUT
______________________________________
how many record you will be enter: 5

enter the value of x0: 2.5

enter the value of f(x0): 8.85

enter the value of x1: 3

enter the value of f(x1): 11.45

enter the value of x2: 4.5

enter the value of f(x2): 20.66

enter the value of x3: 4.75

enter the value of f(x3): 22.85

enter the value of x4: 6

enter the value of f(x4): 38.60


Enter X for finding f(x): 3.5

_____________________________________________________

x(i) y(i) y1(i) y2(i) y3(i) y4(i)


_____________________________________________________

2.500 8.850 5.200 0.470 0.457 -0.029

3.000 11.450 6.140 1.497 0.354

4.500 20.660 8.760 2.560

4.750 22.850 12.600

6.000 38.600

f(3.50) = 13.992855

*/

}
void numericalintegrationseq()
{
void trapezoidal();
void simsonrule();
int ch;
clrscr();
while(1)
{
clrscr();
printf("\n--WELCOME TO SOLUTION OF NUMERICAL INTEGRATION
EQUATIONS METHOD--");
printf("\n1.TRAPEZOIDAL RULE");
printf("\n2.SIMPSON RULE");
printf("\n3.RETURN TO MAIN MENU");
printf("\nPLEASE ENTER YOUR CHOICE :");
scanf("%d",&ch);
switch(ch)
{
case 1: trapezoidal();break;
case 2: simsonrule();break;
case 3: main();
default :printf("\nWRONG CHOICE.TRY AGAIN");
}
}
getch();
}
void trapezoidal()
{
float x[10],y[10],sum=0,h,temp;
int i,n,j,k=0;
float fact(int);
clrscr();
printf("\nhow many record you will be enter: ");
scanf("%d",&n);
for(i=0; i<n; i++)
{
printf("\n\nenter the value of x%d: ",i);
scanf("%f",&x[i]);
printf("\n\nenter the value of f(x%d): ",i);
scanf("%f",&y[i]);
}
h=x[1]-x[0];
n=n-1;
for(i=0;i<n;i++)
{
if(k==0)
{
sum = sum + y[i];
k=1;
}
else
sum = sum + 2 * y[i];
}
sum = sum + y[i];
sum = sum * (h/2);
printf("\n\n I = %f ",sum);
getch();
/*
______________________________________

OUT PUT
______________________________________

how many record you will be enter: 6

enter the value of x0: 7.47

enter the value of f(x0): 1.93


enter the value of x1: 7.48

enter the value of f(x1): 1.95

enter the value of x2: 7.49

enter the value of f(x2): 1.98

enter the value of x3: 7.50

enter the value of f(x3): 2.01

enter the value of x4: 7.51

enter the value of f(x4): 2.03

enter the value of x5: 7.52

enter the value of f(x5): 2.06

I = 0.099652
*/
}
void simsonrule()
{
float x[10],y[10],sum=0,h,temp;
int i,n,j,k=0;
float fact(int);
clrscr();
printf("\nhow many record you will be enter: ");
scanf("%d",&n);
for(i=0; i<n; i++)
{
printf("\n\nenter the value of x%d: ",i);
scanf("%f",&x[i]);
printf("\n\nenter the value of f(x%d): ",i);
scanf("%f",&y[i]);
}
h=x[1]-x[0];
n=n-1;
sum = sum + y[0];
for(i=1;i<n;i++)
{
if(k==0)
{
sum = sum + 4 * y[i];
k=1;
}
else
{
sum = sum + 2 * y[i];
k=0;
}
}
sum = sum + y[i];
sum = sum * (h/3);
printf("\n\n I = %f ",sum);
getch();

/*
______________________________________

OUT PUT
______________________________________

how many record you will be enter: 5

enter the value of x0: 0

enter the value of f(x0): 1

enter the value of x1: 0.25

enter the value of f(x1): 0.8

enter the value of x2: 0.5

enter the value of f(x2): 0.6667


enter the value of x3: 0.75

enter the value of f(x3): 0.5714

enter the value of x4: 1

enter the value of f(x4): 0.5

I = 0.693250

*/
}
void bisectionmethod()
{
int i;
float f,x,a,b;
printf("Enter the value of a ::");
scanf("%f",&a);
printf("Enter the value of b ::");
scanf("%f",&b);

do
{
x = (a+b)/2.00;
f = (x*x*x)-(9*x)+1; // any equation can be put here
if (f>0)
b=x;
else
a=x;
printf("\n\n\ta = %f b = %f f = %f",a,b,f);
}while(b-a>0.0001);
printf("\n\t The Root of the equation is %f",x);
getch();
}
void mullermethod()
{
#undef ESP                       /* use a looser tolerance here than the global ESP */
#define ESP 0.001
#undef F                         /* replace the ODE right-hand side with the cubic f(x) */
#define F(x) ((x)*(x)*(x) + 2*(x)*(x) + 10*(x) - 20)
double x1,x2,x3,x4_1,x4_2,fx1,fx2,fx3;
double h1,h2,h3_1,h3_2,h4,D,d1,d2,a1,a2,a0;
int i=1;

clrscr();
printf("\nEnter the value of x1: ");
scanf("%lf",&x1);
printf("\nEnter the value of x2: ");
scanf("%lf",&x2);
printf("\nEnter the value of x3: ");
scanf("%lf",&x3);
fx1 = F(x1);
printf("\n\n f(x1) = %lf",fx1);
getch();
fx2 = F(x2);
printf("\n\n f(x2) = %lf",fx2);
getch();
fx3 = a0 = F(x3);
printf("\n\n f(x3) = %lf",fx3);
getch();
h1 = x1-x3;
h2 = x2-x3;
d1 = fx1-fx3;
d2 = fx2-fx3;
D = h1*h2*(h1-h2);
a1 = (d2*h1*h1 - d1*h2*h2)/D;
a2 = (d1*h2 - d2*h1)/D;
h3_1 = -((2*a0)/(a1 + sqrt(fabs(a1*a1 - (4*a2*a0)))));
h3_2 = -((2*a0)/(a1 - sqrt(fabs(a1*a1 - (4*a2*a0)))));
if( (a1 + sqrt(fabs(a1*a1 - (4*a2*a0)))) > (a1 - sqrt(fabs(a1*a1 - (4*a2*a0)))) )
{
h4 = h3_1;
}
else
{
h4 = h3_2;
}
x4_1 = x3 + h4;
printf("\n\n\n x4 = %lf \n",x4_1);
x1=x2;
x2=x3;
x3=x4_1;
printf("\n\nx1 = %lf",x1);
printf("\n\nx2 = %lf",x2);
printf("\n\nx3 = %lf",x3);
getch();
do
{
fx1 = F(x1);
fx2 = F(x2);
fx3 = a0 = F(x3);
h1 = x1-x3;
h2 = x2-x3;
d1 = fx1-fx3;
d2 = fx2-fx3;
D = h1*h2*(h1-h2);
a1 = (d2*h1*h1 - d1*h2*h2)/D;
a2 = (d1*h2 - d2*h1)/D;
h3_1 = -((2*a0)/(a1 + sqrt(fabs(a1*a1 - (4*a2*a0)))));
h3_2 = -((2*a0)/(a1 - sqrt(fabs(a1*a1 - (4*a2*a0)))));
if( (a1 + sqrt(fabs(a1*a1 - (4*a2*a0)))) >
(a1 - sqrt(fabs(a1*a1 - (4*a2*a0)))) )
{
h4 = h3_1;
}
else
{
h4 = h3_2;
}
x4_2 = x3 + h4;
printf("\n\n\n x4 = %lf \n",x4_2);
getch();
if(fabs(x4_1 - x4_2) < ESP)
{
printf("\n\nREAL ROOT = %.3lf",x4_2);
i=0;
}
else
{
x4_1=x4_2;
x1=x2;
x2=x3;
x3=x4_1;
printf("\n\nx1 = %lf",x1);
printf("\n\nx2 = %lf",x2);
printf("\n\nx3 = %lf",x3);
}
}while(i!=0);
getch();
/*
____________________________
OUT PUT
____________________________
Enter the value of x1: 0
Enter the value of x2: 1
Enter the value of x3: 2
f(x1) = -20.000000
f(x2) = -7.000000
f(x3) = 16.000000
x4 = 1.354066
x1 = 1.000000
x2 = 2.000000
x3 = 1.354066
x4 = 1.368647
x1 = 2.000000
x2 = 1.354066
x3 = 1.368647
x4 = 1.368808
REAL ROOT = 1.369
*/
}
void regulafalsi()
{
#define e 0.0001
int check();
clrscr();
printf("\n\n\t\t\t PROGRAM FOR REGULAR-FALSI GENERAL");
printf("\n\tENTER THE TOTAL NO. OF POWER:::: ");
scanf("%d",&user_power);
for(i=0;i<=user_power;i++)
{
printf("\n\t x^%d::",i);
scanf("%d",&coef[i]);
}
printf("\n");
printf("\n\t THE POLYNOMIAL IS ::: ");
for(i=user_power;i>=0;i--)//printing coeff.
{
printf(" %dx^%d",coef[i],i);
}
while(1)
{
if(check()==0)
{
flag=1;
break;
}
check();
}
printf("\n ******************************************************");
printf("\n ITERATION X1 FX1 X2 FX2 X3 FX3 "); printf("\n
**********************************************************"); if(flag==1)

{
do
{
cnt++;
fx1=fx2=fx3=0;
for(i=user_power;i>=1;i--)
{
fx1+=coef[i] * (pow(x1,i)) ;
fx2+=coef[i] * (pow(x2,i)) ;
}
fx1+=coef[0];
fx2+=coef[0];
temp=x3;
x3=((x1*fx2)-(x2*fx1))/(fx2-fx1);
for(i=user_power;i>=1;i--)
{
fx3+=coef[i]*(pow(x3,i));
}
fx3+=coef[0];
printf("\n %d %.4f %.4f %.4f %.4f %.4f %.4f",cnt,x1,fx1,x2,fx2,x3,fx3);
t=fx1*fx3;
if(t>0)
{
x1=x3;
}
if(t<0)
{
x2=x3;
}
fx3=0;
}while((fabs(temp-x3))>=e);
printf("\n\t ROOT OF EQUATION IS ::::: %f",x3);
}
getch();
}
int check()
{
printf("\n\tINTIAL X1---->");
scanf("%f",&x1);
printf("\n\tINTIAL X2---->");
scanf("%f",&x2);
fx1=fx2=fx3=0.0;
for(i=user_power;i>=1;i--)
{
fx1+=coef[i] * (pow(x1,i)) ;
fx2+=coef[i] * (pow(x2,i)) ;
}
fx1+=coef[0];
fx2+=coef[0];
if( (fx1*fx2)>0)
{
printf("\n\t INTIAL VALUES ARE NOT PERFECT.");
return(1);
}
return(0);
}
OUTPUT SCREEN
SCOPE OF THE PROJECT

By using this project we can operate on all of the types of numerical analysis problems covered
above: algebraic and transcendental equations, linear systems of equations, ordinary differential
equations, interpolation and numerical integration.

LIMITATIONS

This is a simple menu-driven programming project, and it operates only on the specific types of
equations and methods coded into it.

CONCLUSION

Due to various limitations, all the functions of a full-fledged computerized numerical analysis
package could not be provided. We have tried our best to keep the logic of this software as
simple as possible so that the software can be more flexible.
BIBLIOGRAPHY

1. An Introduction to C Programming Language.

2. Numerical Analysis: Shastri & Shastri

3. Complete Reference of C language By Herbert

WEBSITES

 http://www.scribd.com/

 http://www.freetutes.com/

 http://www.google.com/

 http://www.cprogram.com
