
Python Notes

The document discusses interpolation and its applications, including techniques like Lagrange and Newton's interpolation methods, as well as polynomial curve fitting and the least squares method for regression analysis. It also covers numerical integration methods such as the Trapezoidal Rule and Simpson's Rule, and introduces numerical differentiation and ordinary differential equations (ODEs). Key concepts include estimating unknown values, fitting curves to data, and approximating integrals.


Chapter 2.

1. What is interpolation and how is it used?


Interpolation is the technique of estimating the value of a function at any intermediate value of the independent variable, while the process of computing the value of the function outside the given range is called extrapolation.

Interpolation is a process of determining unknown values that lie in between known data points. It is mostly used to predict unknown values for geography-related data points such as noise level, rainfall, elevation, and so on.

For example, interpolation can be used to estimate the missing values in a dataset
or to make predictions for new data points based on the known data. Interpolation
can also be used in classification problems to estimate the probability of each
class for a given input. Image interpolation is a basic tool used extensively in
tasks such as rescaling, translating, shrinking or rotating an image. In computer
graphics, interpolation is the creation of new values that lie between known
values.

2. Lagrange Interpolation Formula


Question:
Find the value of y at x = 0 given the set of points (−2, 5), (1, 7), (3, 11), (7, 34).
Solution:
The known values are
x = 0; x0 = −2; x1 = 1; x2 = 3; x3 = 7; y0 = 5; y1 = 7; y2 = 11; y3 = 34
Using the Lagrange interpolation formula

y(x) = Σi yi Πj≠i (x − xj)/(xi − xj)

and substituting x = 0:

y(0) = 5(7/45) + 7(7/6) + 11(−7/20) + 34(1/36) = 1087/180 ≈ 6.04
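Carrying out the same computation in Python is a useful check. The sketch below is a direct transcription of the Lagrange formula; the function and variable names are illustrative, not from a particular library.

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        # Build the i-th term y_i * L_i(x), where L_i is the basis polynomial
        term = ys[i]
        for j in range(n):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

xs = [-2, 1, 3, 7]
ys = [5, 7, 11, 34]
print(lagrange_interpolate(xs, ys, 0))  # 1087/180 ≈ 6.0389
```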
3. Newton Forward And Backward Interpolation
Examples
1. Find the solution using Newton's forward difference formula
x f(x)
1891 46
1901 66
1911 81
1921 93
1931 101

x = 1895

Solution:
The table of values for x and y is:

x 1891 1901 1911 1921 1931


y 46 66 81 93 101

Newton's forward difference interpolation method to find solution

Newton's forward difference table is


x      y      Δy     Δ²y    Δ³y    Δ⁴y
1891   46
              20
1901   66            −5
              15             2
1911   81            −3            −3
              12            −1
1921   93            −4
               8
1931  101

The value of x at which we want to find f(x) is x = 1895. With h = 10 and p = (x − x0)/h = (1895 − 1891)/10 = 0.4, Newton's forward formula gives

f(x) = y0 + pΔy0 + [p(p−1)/2!]Δ²y0 + [p(p−1)(p−2)/3!]Δ³y0 + [p(p−1)(p−2)(p−3)/4!]Δ⁴y0

f(1895) = 46 + (0.4)(20) + [(0.4)(−0.6)/2](−5) + [(0.4)(−0.6)(−1.6)/6](2) + [(0.4)(−0.6)(−1.6)(−2.6)/24](−3)
= 46 + 8 + 0.6 + 0.128 + 0.1248
= 54.8528
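A short Python sketch (the function name is illustrative) builds the forward difference table and evaluates Newton's forward formula for this data:

```python
from math import factorial

def newton_forward(xs, ys, x):
    """Newton's forward difference interpolation for equally spaced xs."""
    n = len(ys)
    # table[k] holds the k-th forward differences; table[0] is ys itself
    table = [list(ys)]
    for k in range(1, n):
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    # Accumulate y0 + p*Dy0 + p(p-1)/2! * D2y0 + ...
    result, coeff = 0.0, 1.0
    for k in range(n):
        result += coeff * table[k][0] / factorial(k)
        coeff *= (p - k)
    return result

xs = [1891, 1901, 1911, 1921, 1931]
ys = [46, 66, 81, 93, 101]
print(newton_forward(xs, ys, 1895))  # 54.8528
```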


4. Polynomial curve fitting

Curve fitting is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points, possibly subject to constraints. Curve fitting can involve either interpolation, where an exact fit to the data is required, or smoothing, in which a "smooth" function is constructed that approximately fits the data.

Polynomial curve fitting is a mathematical technique used to approximate a relationship between two variables using a polynomial function. It has various applications in fields such as statistics, engineering, physics, economics, and more. While it can be a valuable tool in data analysis, it may not always be suitable for separating two species in a classification problem, as other machine learning or statistical techniques are often more effective for this purpose.

Here are some common applications of polynomial curve fitting:

1. Data Modeling: Polynomial curve fitting is often used to model
data when there is a suspected polynomial relationship between
variables. It can be used to describe and predict data trends.
2. Interpolation and Extrapolation: It's used to estimate data
points within a given range (interpolation) or outside the observed
data range (extrapolation) using a polynomial function.
3. Function Approximation: In engineering and physics,
polynomial curve fitting is used to approximate complex functions
with simpler polynomial functions for ease of analysis.
4. Signal Processing: Polynomial curve fitting can be applied in
signal processing to smooth data and remove noise from signals.
5. Regression Analysis: It's used for polynomial regression, where
a polynomial function is fitted to data to make predictions.
6. Scientific Research: Polynomial curve fitting is applied in
various scientific disciplines to model and analyze experimental
data.
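As a quick illustration of polynomial least squares fitting, NumPy's `polyfit` recovers a quadratic exactly when the data are sampled from one. The data here are made up for the example:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = x**2 + 1                       # points sampled from y = x^2 + 1
coeffs = np.polyfit(x, y, deg=2)   # least squares fit of degree 2
print(coeffs)                      # ≈ [1, 0, 1], i.e. y = x^2 + 0x + 1

# The fitted polynomial can then be evaluated at new points
p = np.poly1d(coeffs)
print(p(5.0))                      # ≈ 26.0, an extrapolated value
```

In practice the fitted coefficients will differ slightly from the true ones when the data contain noise; the degree should be chosen with care to avoid overfitting.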

5. Least Square Method

The least squares method is the process of finding the regression line, or best-fit line, for a data set described by an equation. The method minimizes the sum of the squares of the residuals, the distances of the data points from the curve or line, so that the trend of the outcomes can be quantified. Curve fitting of this kind arises in regression analysis, and the least squares method supplies the fitting equations used to derive the curve.
Let us look at a simple example. Ms. Dolma told her class, "Students who spend more time on their assignments get better grades." A student wants to estimate his grade for spending 2.3 hours on an assignment. Through the least-squares method, it is possible to determine a predictive model that will help him estimate his grade far more accurately. The method is simple because it requires nothing more than some data and maybe a calculator.

Least Square Method Definition

The least-squares method is a statistical method used to find the line of best fit, of the form y = mx + b, for given data. The line (or curve) so obtained is called the regression line. The main objective of the method is to make the sum of the squares of the errors as small as possible, which is why it is called the least-squares method. It is often used in data fitting, where the best-fit result minimizes the sum of squared errors, each error being the difference between an observed value and the corresponding fitted value. The sum of squared errors measures the variation in the observed data. For example, given four data points, the method produces the single line that best fits all of them.
The two basic categories of least-square problems are ordinary or linear least
squares and nonlinear least squares.

Limitations for Least Square Method

Even though the least-squares method is considered the best method to find the
line of best fit, it has a few limitations. They are:
• This method exhibits only the relationship between the two variables.
All other causes and effects are not taken into consideration.
• This method is unreliable when data is not evenly distributed.
• This method is very sensitive to outliers. In fact, this can skew the
results of the least-squares analysis.

Least Square Method Graph

In a least squares graph, a straight line shows the potential relationship between the independent variable and the dependent variable. The ultimate goal of this method is to reduce the difference between the observed responses and the responses predicted by the regression line; smaller residuals mean that the model fits better. The residuals of each point from the line are what the method minimizes. Residuals can be measured vertically or perpendicularly: vertical residuals are mostly used in polynomial and hyperplane problems, while perpendicular residuals are used in general.

Least Square Method Formula

The least squares method finds the curve that best fits a set of observations with a minimum sum of squared residuals or errors. Assume the given data points are (x1, y1), (x2, y2), (x3, y3), …, (xn, yn), in which all x's are independent variables and all y's are dependent ones. This method is used to find a linear line of the form y = mx + b, where y and x are variables, m is the slope, and b is the y-intercept. The formulas for the slope m and the intercept b are:

m = (n∑xy − ∑x∑y) / (n∑x² − (∑x)²)
b = (∑y − m∑x)/n

Here, n is the number of data points.
Following are the steps to calculate the least square using the above formulas.
• Step 1: Draw a table with 4 columns where the first two columns are
for the x and y points.
• Step 2: In the next two columns, find xy and x².
• Step 3: Find ∑x, ∑y, ∑xy, and ∑x².
• Step 4: Find the value of the slope m using the above formula.
• Step 5: Calculate the value of b using the above formula.
• Step 6: Substitute the values of m and b in the equation y = mx + b.
Example: Let's say we have data as shown below.
x 1 2 3 4 5

y 2 5 3 8 7

Solution: We will follow the steps to find the linear line.

x    y    xy    x²
1    2    2     1
2    5    10    4
3    3    9     9
4    8    32    16
5    7    35    25
∑x = 15   ∑y = 25   ∑xy = 88   ∑x² = 55

Find the value of m by using the formula:
m = (n∑xy − ∑x∑y) / (n∑x² − (∑x)²)
m = [(5×88) − (15×25)] / [(5×55) − (15)²]
m = (440 − 375)/(275 − 225)
m = 65/50 = 1.3
Find the value of b by using the formula:
b = (∑y − m∑x)/n
b = (25 − 1.3×15)/5
b = (25 − 19.5)/5
b = 5.5/5 = 1.1
So, the required equation of least squares is y = mx + b = 1.3x + 1.1.
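The arithmetic above can be verified with a few lines of Python, a direct transcription of the slope and intercept formulas (variable names are illustrative):

```python
xs = [1, 2, 3, 4, 5]
ys = [2, 5, 3, 8, 7]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxy = sum(x * y for x, y in zip(xs, ys))
sxx = sum(x * x for x in xs)

# Slope and intercept from the least squares formulas
m = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
b = (sy - m * sx) / n
print(m, b)  # 1.3 1.1
```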
Important Notes
• The least-squares method is used to predict the behavior of the
dependent variable with respect to the independent variable.
• The sum of the squares of the errors measures the variation of the
observed data about the fitted line.
• The main aim of the least-squares method is to minimize the sum of the
squared errors.
Chapter 2.2

1. What is the meaning of numerical integration?

Numerical integration methods can generally be described as combining evaluations of the integrand to get an approximation to the integral. The integrand is evaluated at a finite set of points called integration points, and a weighted sum of these values is used to approximate the integral.

In calculus, the Trapezoidal Rule is one of the important integration rules. The name comes from the fact that, when the area under the curve is evaluated, the total area is divided into small trapezoids instead of rectangles. The rule approximates definite integrals using linear approximations of the function.

The trapezoidal rule is mostly used in the numerical analysis process. To evaluate the definite
integrals, we can also use Riemann Sums, where we use small rectangles to evaluate the area
under the curve.

Trapezoidal Rule Definition


Trapezoidal Rule is a rule that evaluates the area under the curves by dividing the total area
into smaller trapezoids rather than using rectangles. This integration works by approximating
the region under the graph of a function as a trapezoid, and it calculates the area. This rule takes
the average of the left and the right sum.
Example 1:

Approximate the area under the curve y = f(x) between x =0 and x=8 using Trapezoidal Rule
with n = 4 subintervals. A function f(x) is given in the table of values.

x 0 2 4 6 8

f(x) 3 7 11 9 3

Solution:

The Trapezoidal Rule formula for n= 4 subintervals is given as:

T4 =(Δx/2)[f(x0)+ 2f(x1)+ 2f(x2)+2f(x3) + f(x4)]

Here the subinterval width Δx = 2.

Now, substitute the values from the table, to find the approximate value of the area under the
curve.

A≈ T4 =(2/2)[3+ 2(7)+ 2(11)+2(9) + 3]

A≈ T4 = 3 + 14 + 22+ 18+3 = 60

Therefore, the approximate value of area under the curve using Trapezoidal Rule is 60.

Example 2:

Approximate the area under the curve y = f(x) between x =-4 and x= 2 using Trapezoidal Rule
with n = 6 subintervals. A function f(x) is given in the table of values.

x -4 -3 -2 -1 0 1 2

f(x) 0 4 5 3 10 11 2

Solution:

The Trapezoidal Rule formula for n= 6 subintervals is given as:

T6 =(Δx/2)[f(x0)+ 2f(x1)+ 2f(x2)+2f(x3) + 2f(x4)+2f(x5)+ f(x6)]

Here the subinterval width Δx = 1.

Now, substitute the values from the table, to find the approximate value of the area under the
curve.
A≈ T6 =(1/2)[0+ 2(4)+ 2(5)+2(3) + 2(10)+2(11) +2]

A≈ T6 =(½) [ 8 + 10 + 6+ 20 +22 +2 ] = 68/2 = 34

Therefore, the approximate value of area under the curve using Trapezoidal Rule is 34.
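Both examples can be reproduced with a short composite trapezoidal rule sketch that works directly on the tabulated values (the function name is illustrative):

```python
def trapezoid(ys, dx):
    """Composite trapezoidal rule for equally spaced samples ys, spacing dx."""
    return dx * (ys[0] + ys[-1] + 2 * sum(ys[1:-1])) / 2

# Example 1: n = 4 subintervals, dx = 2
print(trapezoid([3, 7, 11, 9, 3], 2))         # 60.0

# Example 2: n = 6 subintervals, dx = 1
print(trapezoid([0, 4, 5, 3, 10, 11, 2], 1))  # 34.0
```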

2. Simpson’s Rule

Simpson's Rule is a numerical method that approximates the value of a definite integral by
using quadratic functions. This method is named after the English mathematician Thomas
Simpson (1710−1761).
3. What is the difference between Trapezoidal rule and Simpsons
rule?
The Trapezoidal Rule is not as accurate as Simpson's Rule when the underlying function is smooth, because Simpson's Rule uses a quadratic approximation instead of a linear one. Both rules give approximate values, but Simpson's Rule generally yields a more accurate approximation of the integral.
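The difference shows up in a quick sketch: Simpson's 1/3 rule integrates a cubic exactly, while the trapezoidal rule does not. The integrand and function names below are chosen for illustration:

```python
def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        # Interior points get weight 4 (odd index) or 2 (even index)
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
    return s * h

f = lambda x: x ** 3            # exact integral over [0, 1] is 0.25
print(simpson(f, 0, 1, 2))      # 0.25 (exact for cubics)
print(trapezoidal(f, 0, 1, 2))  # 0.3125 (linear approximation overshoots)
```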

4. Gaussian Quadrature

The Gauss integration is a very efficient method to perform numerical integration over
intervals. In fact, if the function to be integrated is a polynomial of an appropriate degree, then
the Gauss integration scheme produces exact results. The Gauss integration scheme has been
implemented in almost every finite element analysis software due to its simplicity and
computational efficiency. This section outlines the basic principles behind the Gauss
integration scheme.
Example
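As a sketch using NumPy's Gauss-Legendre routine, a 2-point Gauss rule already integrates any cubic exactly over [−1, 1], illustrating the exactness claim above (the test integrand is made up for the example):

```python
import numpy as np

# 2-point Gauss-Legendre nodes and weights on [-1, 1]
nodes, weights = np.polynomial.legendre.leggauss(2)

f = lambda x: x**3 + x**2 + 1   # cubic test integrand
approx = np.sum(weights * f(nodes))
print(approx)  # 8/3, the exact value of the integral over [-1, 1]
```

With n points, Gauss-Legendre quadrature is exact for polynomials up to degree 2n − 1, which is why it is so widely used in finite element software.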
Chapter 2.3
1. Numerical differentiation

Numerical differentiation involves the computation of a derivative of a function f from given values of f. Such formulas are basic to the numerical solution of differential equations.
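A common sketch of such a formula is the central difference quotient, which approximates f′(x) from nearby values of f with O(h²) error (the function name and step size are illustrative):

```python
import math

def central_diff(f, x, h=1e-5):
    """Central difference approximation to f'(x), with O(h^2) truncation error."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Example: the derivative of sin is cos
print(central_diff(math.sin, 0.5))  # close to cos(0.5)
print(math.cos(0.5))
```

Choosing h too small amplifies rounding error, so in practice h balances truncation and rounding effects.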

Ordinary Differential Equations (ODE)


An ordinary differential equation (abbreviated ODE), in mathematics, is an
equation which consists of one or more functions of one independent variable along
with their derivatives. A differential equation is an equation that contains a function
with one or more of its derivatives. In the case of an ODE, the word "ordinary"
indicates that the derivatives are taken with respect to a single independent variable.

Types
The ordinary differential equation is further classified into three types. They
are:

• Autonomous ODE
• Linear ODE
• Non-linear ODE

Autonomous Ordinary Differential Equations


A differential equation which does not depend explicitly on the independent
variable, say x, is known as an autonomous differential equation.

Linear Ordinary Differential Equations


If differential equations can be written as the linear combinations of the
derivatives of y, then they are called linear ordinary differential equations.
These can be further classified into two types:

• Homogeneous linear differential equations


• Non-homogeneous linear differential equations

Non-linear Ordinary Differential Equations


If the differential equations cannot be written in the form of linear
combinations of the derivatives of y, then it is known as a non-linear
ordinary differential equation.
Applications
ODEs have remarkable applications and the ability to model the world
around us. They are used in a variety of disciplines like biology, economics,
physics, chemistry and engineering. They help to predict exponential
growth and decay, and population and species growth. Some of the uses of
ODEs are:

• Modelling the growth of diseases
• Describing the movement of electricity
• Describing the motion of a pendulum or of waves
• Formulating Newton's second law of motion and the law of cooling

2. Runge-Kutta Method
Runge-Kutta methods are used to solve differential equations. They give
higher accuracy without requiring the higher-order derivatives that Taylor
series methods need. These methods agree with the Taylor series solution
up to the term in h^r, where r varies from method to method and is called
the order of the method. One of the most significant advantages of the
Runge-Kutta formulae is that they require only values of the function at
specified points.

Consider an ordinary differential equation of the form dy/dx = f(x, y) with
initial condition y(x0) = y0. For this, we can define the formulas for
Runge-Kutta methods as follows.

1st Order Runge-Kutta method

y1 = y0 + hf(x0, y0) = y0 + hy′0 {since y′ = f(x, y)}

This formula is the same as Euler's method.

2nd Order Runge-Kutta method

y1 = y0 + (½) (k1 + k2)

Here,

k1 = hf(x0, y0)

k2 = hf(x0 + h, y0 + k1)

3rd Order Runge-Kutta method


y1 = y0 + (⅙) (k1 + 4k2 + k3)

Here,

k1 = hf(x0, y0)

k2 = hf[x0 + (½)h, y0 + (½)k1]

k3 = hf(x0 + h, y0 + 2k2 − k1)

What is Fourth Order RK Method?


The most commonly used Runge-Kutta method for finding the solution of a
differential equation is the RK4 method, i.e., the fourth-order Runge-Kutta
method. The Runge-Kutta method provides the approximate value of y for
a given point x. The RK4 method applies directly to first-order ODEs; a
higher-order equation must first be rewritten as a system of first-order
equations.

Runge-Kutta Fourth Order Method Formula

The formula for the fourth-order Runge-Kutta method is given by:

y1 = y0 + (⅙) (k1 + 2k2 + 2k3 + k4)

Here,

k1 = hf(x0, y0)

k2 = hf[x0 + (½)h, y0 + (½)k1]

k3 = hf[x0 + (½)h, y0 + (½)k2]

k4 = hf(x0 + h, y0 + k3)

Runge-Kutta RK4 Method


Example 1:

Consider an ordinary differential equation dy/dx = x² + y², y(1) = 1.2. Find
y(1.05) using the fourth-order Runge-Kutta method.

Solution:

Given,

dy/dx = x² + y², y(1) = 1.2

So, f(x, y) = x² + y²

x0 = 1 and y0 = 1.2

Also, h = 0.05

Let us calculate the values of k1, k2, k3 and k4.

k1 = hf(x0, y0)

= (0.05)[x0² + y0²]

= (0.05)[(1)² + (1.2)²]

= (0.05)(1 + 1.44)

= (0.05)(2.44)

= 0.122

k2 = hf[x0 + (½)h, y0 + (½)k1]

= (0.05)[f(1 + 0.025, 1.2 + 0.061)] {since h/2 = 0.05/2 = 0.025 and k1/2 = 0.122/2 = 0.061}

= (0.05)[f(1.025, 1.261)]

= (0.05)[(1.025)² + (1.261)²]

= (0.05)(1.051 + 1.590)

= (0.05)(2.641)

= 0.1320

k3 = hf[x0 + (½)h, y0 + (½)k2]

= (0.05)[f(1 + 0.025, 1.2 + 0.066)] {since h/2 = 0.05/2 = 0.025 and k2/2 = 0.132/2 = 0.066}

= (0.05)[f(1.025, 1.266)]

= (0.05)[(1.025)² + (1.266)²]

= (0.05)(1.051 + 1.602)

= (0.05)(2.653)

= 0.1326

k4 = hf(x0 + h, y0 + k3)

= (0.05)[f(1 + 0.05, 1.2 + 0.1326)]

= (0.05)[f(1.05, 1.3326)]

= (0.05)[(1.05)² + (1.3326)²]

= (0.05)(1.1025 + 1.7758)

= (0.05)(2.8783)

= 0.1439

By the RK4 method, we have:

y1 = y(1.05) = y0 + (⅙)(k1 + 2k2 + 2k3 + k4)

By substituting the values of y0, k1, k2, k3 and k4, we get:

y(1.05) = 1.2 + (⅙) [0.122 + 2(0.1320) + 2(0.1326) + 0.1439]

= 1.2 + (⅙) (0.122 + 0.264 + 0.2652 + 0.1439)

= 1.2 + (⅙) (0.7951)

= 1.2 + 0.1325

= 1.3325
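A Python sketch of the RK4 step reproduces this result, up to the rounding used in the hand computation (the function name is illustrative):

```python
def rk4_step(f, x0, y0, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + h / 2, y0 + k1 / 2)
    k3 = h * f(x0 + h / 2, y0 + k2 / 2)
    k4 = h * f(x0 + h, y0 + k3)
    return y0 + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: x**2 + y**2
y1 = rk4_step(f, 1.0, 1.2, 0.05)
print(y1)  # ≈ 1.3326, matching the worked example to four decimals
```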

Example 2:

Find the value of k1 by the Runge-Kutta method of fourth order if dy/dx = 2x + 3y² and y(0.1) = 1.1165, h = 0.1.

Solution:

Given,

dy/dx = 2x + 3y² and y(0.1) = 1.1165, h = 0.1

So, f(x, y) = 2x + 3y²

x0 = 0.1, y0 = 1.1165

x0 = 0.1, y0 = 1.1165
By the Runge-Kutta method of fourth order, we have

k1 = hf(x0, y0)

= (0.1) f(0.1, 1.1165)

= (0.1)[2(0.1) + 3(1.1165)²]

= (0.1)[0.2 + 3(1.2465)]

= (0.1)(0.2 + 3.7395)

= (0.1)(3.9395)

= 0.39395

The primary disadvantages of Runge-Kutta methods are that they require
significantly more computer time than multistep methods of comparable
accuracy, and they do not easily yield good global estimates of the
truncation error.

3. Euler's Rule
Euler's method is a first-order numerical procedure for solving ordinary
differential equations (ODEs) with a given initial value.

Formula

y1 = y0 + hf(x0, y0)

Examples
1. Find y(0.2) for y′ = (x − y)/2, y(0) = 1, with step length 0.1 using Euler's method.

Solution:
Given y′ = (x − y)/2, y(0) = 1, h = 0.1, y(0.2) = ?

Euler's method:

y1 = y0 + hf(x0, y0) = 1 + (0.1)f(0, 1) = 1 + (0.1)(−0.5) = 1 + (−0.05) = 0.95

y2 = y1 + hf(x1, y1) = 0.95 + (0.1)f(0.1, 0.95) = 0.95 + (0.1)(−0.425) = 0.95 + (−0.0425) = 0.9075

∴ y(0.2) = 0.9075
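The arithmetic in the example (f(0, 1) = −0.5, f(0.1, 0.95) = −0.425) corresponds to the right-hand side y′ = (x − y)/2, and the two steps can be sketched directly in Python (function names are illustrative):

```python
def euler(f, x0, y0, h, steps):
    """Euler's method: repeatedly apply y_{n+1} = y_n + h*f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: (x - y) / 2   # the ODE y' = (x - y)/2
y = euler(f, 0.0, 1.0, 0.1, 2)
print(y)  # 0.9075
```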

Advantages:
➢Euler's method is simple and direct.
➢It can be used for nonlinear IVPs.

Disadvantages:
➢It is less accurate and numerically unstable.
