Python Notes
In computer graphics and elsewhere, interpolation is the creation of new values
that lie between known values. For example, interpolation can be used to estimate
the missing values in a dataset or to make predictions for new data points based
on the known data. Interpolation can also be used in classification problems to
estimate the probability of each class for a given input. Image interpolation is a
basic tool used extensively in tasks such as rescaling, translating, shrinking, or
rotating an image.
Example: from a table of x and y values, interpolation can be used to estimate a
value such as f(2) that lies between the tabulated points.
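As a minimal sketch of the idea, linear interpolation between two known points can be written in a few lines of Python. The points (2, 10) and (4, 18) below are made-up illustration values, not data from these notes:

```python
def lerp(x0, y0, x1, y1, x):
    """Linearly interpolate between (x0, y0) and (x1, y1) at the point x."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Estimate a missing value at x = 3 from the known points (2, 10) and (4, 18).
estimate = lerp(2, 10, 4, 18, 3)
print(estimate)  # 14.0, halfway between 10 and 18
```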
The least squares method is the process of finding the regression line, or line of
best fit, for a data set described by an equation. The method works by minimizing
the sum of the squares of the residuals, the offsets of the data points from the
curve or line, so that the trend in the outcomes is found quantitatively. This kind
of curve fitting is what is done in regression analysis, and fitting an equation to
the data in this way is the least squares method.
Let us look at a simple example. Ms. Dolma said in class, "Students who spend
more time on their assignments are getting better grades." A student wants to
estimate his grade for spending 2.3 hours on an assignment. Through the magic
of the least-squares method, it is possible to determine a predictive model that
will help him estimate his grade far more accurately. This method is simple
because it requires nothing more than some data and maybe a calculator.
The least-squares method is a statistical method used to find the line of best fit of
the form of an equation such as y = mx + b to the given data. The curve of the
equation is called the regression line. Our main objective in this method is to
reduce the sum of the squares of errors as much as possible. This is the reason
this method is called the least-squares method. It is often used in data fitting,
where the best-fit result is the one that minimizes the sum of squared errors, each
error being the difference between an observed value and the corresponding
fitted value. The sum of squared errors measures the variation in the observed
data. For example, with 4 data points this method produces the following graph.
The two basic categories of least-square problems are ordinary or linear least
squares and nonlinear least squares.
Even though the least-squares method is considered the best method to find the
line of best fit, it has a few limitations. They are:
• This method models only the relationship between the two variables;
all other causes and effects are not taken into consideration.
• This method is unreliable when the data are not evenly distributed.
• This method is very sensitive to outliers, which can skew the
results of the least-squares analysis.
In the graph below, the straight line shows the potential relationship between the
independent variable and the dependent variable. The ultimate goal of this
method is to reduce the difference between each observed response and the
response predicted by the regression line; smaller residuals mean that the model
fits better. The residual of each point from the line is what the method minimizes.
Residuals can be measured vertically or perpendicularly: vertical residuals are
mostly used for polynomial and hyperplane problems, while perpendicular
residuals are used in general, as seen in the image below.
The least squares method finds the curve that best fits a set of observations with a
minimum sum of squared residuals or errors. Let us assume that the given data
points are (x1, y1), (x2, y2), (x3, y3), …, (xn, yn), in which all x's are independent
variables, while all y's are dependent ones. This method is used to fit a straight
line of the form y = mx + b, where y and x are variables, m is the slope, and b is
the y-intercept. The formulas for the slope m and the intercept b are:
m = (n∑xy − ∑x∑y) / (n∑x² − (∑x)²)
b = (∑y − m∑x) / n
Here, n is the number of data points.
Following are the steps to calculate the least squares fit using the above formulas.
• Step 1: Draw a table with 4 columns, where the first two columns are
for the x and y points.
• Step 2: In the next two columns, find xy and x².
• Step 3: Find ∑x, ∑y, ∑xy, and ∑x².
• Step 4: Find the value of the slope m using the above formula.
• Step 5: Calculate the value of b using the above formula.
• Step 6: Substitute the values of m and b into the equation y = mx + b.
Example: Let's say we have data as shown below.

x: 1 2 3 4 5
y: 2 5 3 8 7

x    y    xy    x²
1    2    2     1
2    5    10    4
3    3    9     9
4    8    32    16
5    7    35    25

Here n = 5, ∑x = 15, ∑y = 25, ∑xy = 88, and ∑x² = 55, so
m = (5 × 88 − 15 × 25) / (5 × 55 − 15²) = 65/50 = 1.3
b = (25 − 1.3 × 15) / 5 = 1.1
and the line of best fit is y = 1.3x + 1.1.
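The summation formulas above can be checked in plain Python on this data set. This is a minimal sketch of the hand calculation, not a library implementation:

```python
# Least-squares fit y = m*x + b using the summation formulas above.
x = [1, 2, 3, 4, 5]
y = [2, 5, 3, 8, 7]
n = len(x)

sum_x = sum(x)                                  # ∑x  = 15
sum_y = sum(y)                                  # ∑y  = 25
sum_xy = sum(xi * yi for xi, yi in zip(x, y))   # ∑xy = 88
sum_x2 = sum(xi ** 2 for xi in x)               # ∑x² = 55

m = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
b = (sum_y - m * sum_x) / n
print(m, b)  # 1.3 1.1
```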
In calculus, the “Trapezoidal Rule” is one of the important integration rules. The name
trapezoidal comes from the fact that when the area under the curve is evaluated, the total area
is divided into small trapezoids instead of rectangles. The rule approximates definite integrals
using linear approximations of the function. For n equal subintervals of width Δx, the
composite rule is
Tn = (Δx/2)[f(x0) + 2f(x1) + 2f(x2) + ⋯ + 2f(xn−1) + f(xn)]
The trapezoidal rule is mostly used in numerical analysis. To evaluate definite integrals, we
can also use Riemann sums, where small rectangles are used to evaluate the area under the
curve.
Example 1:
Approximate the area under the curve y = f(x) between x = 0 and x = 8 using the Trapezoidal
Rule with n = 4 subintervals. The function f(x) is given in the table of values.

x:    0  2  4  6  8
f(x): 3  7  11 9  3

Solution:
Here Δx = 2. Substitute the values from the table to find the approximate value of the area
under the curve:
A ≈ T4 = (2/2)[3 + 2(7) + 2(11) + 2(9) + 3] = 3 + 14 + 22 + 18 + 3 = 60
Therefore, the approximate value of the area under the curve using the Trapezoidal Rule is 60.
Example 2:
Approximate the area under the curve y = f(x) between x = −4 and x = 2 using the Trapezoidal
Rule with n = 6 subintervals. The function f(x) is given in the table of values.

x:    −4 −3 −2 −1 0  1  2
f(x): 0  4  5  3  10 11 2

Solution:
Here Δx = 1. Substitute the values from the table to find the approximate value of the area
under the curve:
A ≈ T6 = (1/2)[0 + 2(4) + 2(5) + 2(3) + 2(10) + 2(11) + 2] = (1/2)(68) = 34
Therefore, the approximate value of the area under the curve using the Trapezoidal Rule is 34.
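Both worked examples can be verified with a short composite-trapezoid function. This is a minimal sketch; the sample lists below are the f(x) tables from the two examples:

```python
def trapezoid(ys, dx):
    """Composite trapezoidal rule for equally spaced samples ys with spacing dx."""
    return (dx / 2) * (ys[0] + 2 * sum(ys[1:-1]) + ys[-1])

# Example 1: x = 0..8, n = 4 subintervals, Δx = 2
print(trapezoid([3, 7, 11, 9, 3], 2))          # 60.0
# Example 2: x = -4..2, n = 6 subintervals, Δx = 1
print(trapezoid([0, 4, 5, 3, 10, 11, 2], 1))   # 34.0
```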
2. Simpson’s Rule
Simpson's Rule is a numerical method that approximates the value of a definite integral by
using quadratic functions. For an even number n of subintervals of width Δx, Simpson's 1/3
rule gives
Sn = (Δx/3)[f(x0) + 4f(x1) + 2f(x2) + ⋯ + 4f(xn−1) + f(xn)]
This method is named after the English mathematician Thomas Simpson (1710−1761).
3. What is the difference between the Trapezoidal Rule and Simpson's
Rule?
The Trapezoidal Rule does not give as accurate a value as Simpson's Rule when the underlying
function is smooth, because Simpson's Rule uses quadratic approximations instead of linear
ones. Both Simpson's Rule and the Trapezoidal Rule give approximate values, but Simpson's
Rule generally results in a more accurate approximation of the integral.
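A small experiment illustrates this difference. The integrand sin(x) on [0, π] and the choice n = 8 are illustrative assumptions, not examples from the notes:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    dx = (b - a) / n
    return (dx / 2) * (f(a) + 2 * sum(f(a + i * dx) for i in range(1, n)) + f(b))

def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    dx = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * dx)
    return (dx / 3) * s

# Integrate sin(x) over [0, π]; the exact value is 2.
t_err = abs(trapezoid(math.sin, 0, math.pi, 8) - 2.0)
s_err = abs(simpson(math.sin, 0, math.pi, 8) - 2.0)
print(s_err < t_err)  # True: Simpson's rule is more accurate for this smooth integrand
```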
4. Gaussian Quadrature
The Gauss integration is a very efficient method to perform numerical integration over
intervals. In fact, if the function to be integrated is a polynomial of an appropriate degree, then
the Gauss integration scheme produces exact results. The Gauss integration scheme has been
implemented in almost every finite element analysis software due to its simplicity and
computational efficiency. This section outlines the basic principles behind the Gauss
integration scheme.
Example
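As an illustration of Gauss integration, a two-point Gauss-Legendre rule (standard nodes ±1/√3 with weights 1) integrates any polynomial of degree ≤ 3 exactly. The cubic below is an arbitrary example chosen for this sketch:

```python
import math

def gauss2(f, a, b):
    """Two-point Gauss-Legendre quadrature on [a, b].
    Exact for polynomials of degree <= 3."""
    mid = (a + b) / 2
    half = (b - a) / 2
    node = 1 / math.sqrt(3)  # standard nodes on [-1, 1] are ±1/√3, weights 1
    return half * (f(mid - half * node) + f(mid + half * node))

# ∫ from -1 to 1 of (x³ + x² + x + 1) dx = 8/3, reproduced exactly by two evaluations
result = gauss2(lambda x: x**3 + x**2 + x + 1, -1, 1)
print(result)  # 8/3 ≈ 2.6666666666666665
```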
Chapter 2.3
1. Numerical differentiation
Types
The ordinary differential equation is further classified into three types. They
are:
• Autonomous ODE
• Linear ODE
• Non-linear ODE
2. Runge-Kutta Method
Runge-Kutta methods are used to solve differential equations. They give
higher accuracy without requiring more derivative calculations. These
methods agree with the Taylor series solution up to the term in h^r, where r
varies from method to method and represents the order of that method. One
of the most significant advantages of the Runge-Kutta formulae is that they
require only the function's values at some specified points.
Second-order Runge-Kutta method:
k1 = hf(x0, y0)
k2 = hf(x0 + h, y0 + k1)
y1 = y0 + (1/2)(k1 + k2)

Fourth-order Runge-Kutta method:
k1 = hf(x0, y0)
k2 = hf(x0 + h/2, y0 + k1/2)
k3 = hf(x0 + h/2, y0 + k2/2)
k4 = hf(x0 + h, y0 + k3)
y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)
Example 1: Use the fourth-order Runge-Kutta method to find y(1.05), given
dy/dx = x² + y², y(1) = 1.2, with h = 0.05.
Solution:
Given,
f(x, y) = x² + y², x0 = 1 and y0 = 1.2
Also, h = 0.05
k1 = hf(x0, y0)
= (0.05)(1 + 1.44)
= (0.05)(2.44)
= 0.122
k2 = hf(x0 + h/2, y0 + k1/2)
= (0.05) f(1 + 0.025, 1.2 + 0.061) {since h/2 = 0.05/2 = 0.025 and k1/2 =
0.122/2 = 0.061}
= (0.05)(2.641)
= 0.1320
k3 = hf(x0 + h/2, y0 + k2/2)
= (0.05) f(1 + 0.025, 1.2 + 0.066) {since k2/2 = 0.132/2 = 0.066}
= (0.05)(2.653)
= 0.1326
k4 = hf(x0 + h, y0 + k3)
= (0.05) f(1.05, 1.3326)
= (0.05)(2.8783)
= 0.1439
y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)
= 1.2 + 0.1325
= 1.3325
Therefore, y(1.05) ≈ 1.3325.
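The hand computation above can be checked with a single RK4 step in Python; the k values in the worked solution correspond to f(x, y) = x² + y². The full-precision answer differs from the rounded hand calculation only in the fourth decimal place:

```python
def rk4_step(f, x0, y0, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + h / 2, y0 + k1 / 2)
    k3 = h * f(x0 + h / 2, y0 + k2 / 2)
    k4 = h * f(x0 + h, y0 + k3)
    return y0 + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# dy/dx = x² + y², y(1) = 1.2, one step of size h = 0.05
y1 = rk4_step(lambda x, y: x**2 + y**2, 1.0, 1.2, 0.05)
print(y1)  # ≈ 1.3326; the hand values, rounded at each step, give 1.3325
```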
Example 2:
Solution:
Given,
x0 = 0.1, y0 = 1.1165
By the Runge-Kutta method of fourth order, we have
k1 = hf(x0, y0)
= (0.1)(0.2 + 3.7395)
= (0.1)(3.9395)
= 0.39395
3. Euler’s Rule
Euler's method is a first-order numerical procedure for
solving ordinary differential equations (ODEs) with a given
initial value.
Formula
1. Euler Rule
y1 = y0 + hf(x0, y0)
Examples
1. Find y(0.2) for y′ = (x − y)/2, y(0) = 1, with step length 0.1 using Euler's method.
Solution:
Given y′ = (x − y)/2, y(0) = 1, h = 0.1, y(0.2) = ?
Euler method:
y1 = y0 + hf(x0, y0) = 1 + (0.1) f(0, 1) = 1 + (0.1)(−0.5) = 1 + (−0.05) = 0.95
y2 = y1 + hf(x1, y1) = 0.95 + (0.1) f(0.1, 0.95) = 0.95 + (0.1)(−0.425) = 0.95 + (−0.0425) = 0.9075
∴ y(0.2) = 0.9075
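The two Euler steps above can be reproduced in Python; the numbers in the worked solution correspond to f(x, y) = (x − y)/2:

```python
def euler(f, x0, y0, h, steps):
    """Euler's method: repeatedly apply y_{n+1} = y_n + h f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(steps):
        y = y + h * f(x, y)
        x = x + h
    return y

# y' = (x - y)/2, y(0) = 1, step length h = 0.1: estimate y(0.2) after 2 steps
y_02 = euler(lambda x, y: (x - y) / 2, 0.0, 1.0, 0.1, 2)
print(y_02)  # ≈ 0.9075
```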
Advantages:
➢Euler's method is simple and direct.
➢It can be used for nonlinear IVPs.
Disadvantages:
➢It is less accurate than higher-order methods and can be numerically unstable.