CBNST Notes For BCA PU 3rd Sem Based On Syllabus PDF

The document provides information about floating point arithmetic, including:
- Floating point numbers represent numbers with a radix point that can vary, unlike fixed point numbers.
- The IEEE 754 standard defines common floating point formats such as binary32 and binary64, which use base 2.
- Operations on floating point numbers take more processing than integer operations because of their varying radix points.
- Errors can occur in numerical computations due to limited computer precision and approximations.
- Iterative methods like bisection repeatedly halve an interval to approximate the roots of equations.



Keep Learning with us VBSPU BCA 3rd sem.

Semester-3

BCA 301

CBNST
(Computer Based Numerical and Statistical Techniques)

(According to Purvanchal University Syllabus)

“Full Line By Line Notes”

For More Information go to www.dpmishra.com



Unit – 1
Floating Point Arithmetic
 The term floating point refers to the fact that a number's radix point
(decimal point, or, more commonly in computers, binary point) can "float";
that is, it can be placed anywhere relative to the significant digits of
the number.

Representation of Floating point number–


 The following description explains the terminology and primary details of IEEE
754 binary floating point representation. The discussion is confined to the
single and double precision formats.
 Usually, a real number in binary is represented in the following format,
Im Im-1 … I2 I1 I0 . F1 F2 … Fn-1 Fn
 where each Ik and Fk is either 0 or 1, in the integer and fraction parts
respectively.
 A finite number can also be represented by four integer components: a sign
(s), a base (b), a significand (m), and an exponent (e). The numerical
value of the number is then evaluated as
(−1)^s × m × b^e, where m < |b|
 Depending on the base and the number of bits used to encode the various
components, the IEEE 754 standard defines five basic formats. Among the
five formats, binary32 and binary64 are the single precision and double
precision formats respectively, in which the base is 2.
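The sign, exponent, and significand fields of a binary32 value can be inspected directly. Below is a minimal Python sketch using only the standard `struct` module; the helper name `binary32_fields` is ours, not from the notes:

```python
import struct

def binary32_fields(x: float):
    """Unpack the IEEE 754 binary32 fields of x: sign, exponent, fraction."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # biased by 127
    fraction = bits & 0x7FFFFF       # 23 stored mantissa bits (hidden bit omitted)
    return sign, exponent, fraction

# -6.25 = -1.5625 × 2^2, so sign = 1 and biased exponent = 127 + 2 = 129
print(binary32_fields(-6.25))  # → (1, 129, 4718592)
```

Note that the leading 1 of the normalized significand is the hidden bit discussed below; only the fractional 23 bits are stored.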

One More Simple Way to Understand –


An n-digit floating point number in base β has the form

x = ±(0.d_1 d_2 · · · d_n)_β × β^e

where
0.d_1 d_2 · · · d_n is a β-fraction called the mantissa and e is an integer called the
exponent. Such a floating point number is called normalised if
d_1 ≠ 0, or else, d_1 = d_2 = · · · = d_n = 0.
The exponent e is limited to a range m < e < M. Usually, m = −M.
Operations–
 A floating-point operation is any mathematical operation (such as +, −, *, /)
or assignment that involves floating-point numbers (as opposed to binary
integer operations).

 Floating-point numbers have decimal points in them. The number 2.0 is a
floating-point number because it has a decimal point in it. The number 2 (without
a decimal point) is a binary integer.

 Floating-point operations involve floating-point numbers and typically take
longer to execute than simple binary integer operations. For this reason,
most embedded applications avoid widespread usage of floating-point
math in favor of faster, smaller integer operations.

Normalization–
 Many floating point representations have an implicit hidden bit in the
mantissa. This is a bit which is present virtually in the mantissa, but not
stored in memory, because its value is always 1 in a normalized number.

 We say that a floating point number is normalized if the fraction is at
least 1/b, where b is the base. In other words, the mantissa would be too
large to fit if it were multiplied by the base. Non-normalized numbers are
sometimes called denormal; they contain less precision than the
representation can normally hold.

 If a number is not normalized, then you can subtract 1 from the exponent
while multiplying the mantissa by the base, and get another floating point
number with the same value. Normalization consists of doing this
repeatedly until the number is normalized. Two distinct normalized floating
point numbers cannot be equal in value.
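The repeated scale-and-decrement step described above can be sketched in a few lines. This is an illustrative base-10 version under the convention used here (mantissa at least 1/b); the helper name `normalize` is ours:

```python
def normalize(mantissa: float, exponent: int, base: int = 10):
    """Repeatedly multiply the mantissa by the base while subtracting 1 from
    the exponent, until the mantissa is at least 1/base (normalized)."""
    if mantissa == 0:
        return 0.0, 0
    while abs(mantissa) < 1 / base:
        mantissa *= base
        exponent -= 1
    return mantissa, exponent

# 0.00123 × 10^5 normalizes to 0.123 × 10^3 — same value, normalized mantissa.
print(normalize(0.00123, 5))
```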

Pitfalls of floating point representation–


 Current critical systems often use a lot of floating-point computations, and
thus the testing or static analysis of programs containing floating-point
operators has become a priority. However, correctly defining the semantics
of common implementations of floating-point is tricky, because the semantics
may change according to many factors beyond the source-code level, such as
choices made by compilers. Concrete examples of the problems that can appear,
and solutions for handling them in analysis software, can be given.
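A classic concrete example of such a pitfall: the decimal fraction 0.1 has no exact binary representation, so repeated decimal arithmetic drifts from the exact value. A quick demonstration:

```python
# 0.1 cannot be stored exactly in binary floating point, so summing it
# ten times does not give exactly 1.0.
total = sum(0.1 for _ in range(10))
print(total == 1.0)              # False
print(total)                     # 0.9999999999999999
print(abs(total - 1.0) < 1e-9)   # True — compare with a tolerance instead
```

This is why numerical code compares floating-point results with a tolerance rather than with `==`.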

Error in numerical computation–


 Often in Numerical Analysis we have to make approximations to
numerically compute things. That’s the thing; in Calculus we can take limits
and arrive at exact results, but when we use a computer to calculate, say
something simple like a derivative, we can’t take an infinite limit so we
have to approximate the answer, and therefore, it has error. Computers are
limited in their capacity to store and calculate the precision and magnitude
of numbers.
 We can characterize error in measurements and computations with respect
to their accuracy and their precision. Accuracy refers to how closely a
calculated value agrees with the true value. Precision refers to how closely
calculated values agree with each other. Inaccuracy (also called bias) is a
systematic deviation from the true values. Imprecision (also
called uncertainty) refers to how close together calculated results are to
each other. We use the term error to represent both inaccuracy and
imprecision in our results.
 The relationship between the exact result and the approximation can be
formulated as

true value = approximation + error
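The derivative example mentioned above can be made concrete. The forward difference (f(x + h) − f(x))/h approximates f′(x) with a truncation error that shrinks with h, until roundoff eventually dominates; this sketch (helper name ours) measures the error against the known derivative of sin:

```python
import math

def forward_diff(f, x, h):
    """Approximate f'(x) by the forward difference (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)  # true derivative of sin at x = 1
for h in (1e-1, 1e-4, 1e-8):
    approx = forward_diff(math.sin, 1.0, h)
    print(h, abs(exact - approx))  # error falls with h, then roundoff appears
```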


Iterative methods–
 In computational mathematics, an iterative method is a mathematical
procedure that uses an initial guess to generate a sequence of improving
approximate solutions for a class of problems, in which the n-th
approximation is derived from the previous ones.

Bisection method–
 The bisection method in mathematics is a root-finding method that
repeatedly bisects an interval and then selects a subinterval in which a root
must lie for further processing. The method is also called the interval
halving method, the binary search method, or the dichotomy method.

How To Solve

 The bisection method is an approximation method to find the roots of an
equation by continuously dividing an interval. It divides the interval in
half repeatedly until the resulting interval is extremely small.

There is no specific closed-form formula to find the root of a function using the
bisection method; instead the interval is halved iteratively:


For the ith iteration, the interval width is:

Δx_i = (1/2) Δx_{i-1} = (0.5)^i (n − m); where n > m

So the new midpoint is x_i = m_{i-1} + Δx_i

for i = 1, 2, 3, ..., n
 Below are the steps to get the solution for a continuous function g(x).

Step 1: Find two points, say m and n, such that m < n and g(m) * g(n) < 0.

Step 2: Find the midpoint of m and n, say t.

Step 3: t is a root of the function if g(t) = 0; else follow the next step.

Step 4: Divide the interval [m, n]: if g(t) * g(n) < 0, let m = t; else if g(t) *
g(m) < 0, then let n = t.

Step 5: Repeat steps 2-4 until g(t) = 0 or the interval is sufficiently small.
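The steps above translate directly into code. A minimal sketch (the function and tolerance names are our choices, not from the notes):

```python
def bisect(g, m, n, tol=1e-9, max_iter=100):
    """Bisection: repeatedly halve [m, n], keeping the half where g changes sign."""
    assert g(m) * g(n) < 0, "g(m) and g(n) must have opposite signs"
    for _ in range(max_iter):
        t = (m + n) / 2                      # Step 2: midpoint
        if g(t) == 0 or (n - m) / 2 < tol:   # Step 3 / interval small enough
            return t
        if g(m) * g(t) < 0:                  # Step 4: keep the sign-change half
            n = t
        else:
            m = t
    return (m + n) / 2

# Same polynomial as the worked example below: g(x) = x^3 - 5 + 3x on [1, 2]
root = bisect(lambda x: x**3 + 3*x - 5, 1, 2)
print(round(root, 4))  # ≈ 1.1542
```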


 Example: Find the root of the polynomial g(x) = x³ − 5 + 3x using the bisection
method, where m = 1 and n = 2.

Solution:
First find the value of g(x) at m = 1 and n = 2:

g(1) = 1³ − 5 + 3 × 1 = −1 < 0

g(2) = 2³ − 5 + 3 × 2 = 9 > 0

Since the function is continuous, a root lies in the interval [1, 2].

Let t be the midpoint of the interval, i.e. t = (1 + 2)/2 = 1.5


The value of the function at t is

g(1.5) = (1.5)³ − 5 + 3 × 1.5 = 2.875

As g(t) is positive while g(m) is negative, n = 2 is replaced with t = 1.5 for the
next iteration, so that g(m) and g(n) keep opposite signs.

The table below contains nine iterations of the method:

Iteration |     m      |    n    |      t      |        g(m)         |        g(n)       |        g(t)
    1     | 1          | 2       | 1.5         | -1                  | 9                 | 2.875
    2     | 1          | 1.5     | 1.25        | -1                  | 2.875             | 0.703125
    3     | 1          | 1.25    | 1.125       | -1                  | 0.703125          | -0.201171875
    4     | 1.125      | 1.25    | 1.1875      | -0.201171875        | 0.703125          | 0.237060546875
    5     | 1.125      | 1.1875  | 1.15625     | -0.201171875        | 0.237060546875    | 0.014556884765625
    6     | 1.125      | 1.15625 | 1.140625    | -0.201171875        | 0.014556884765625 | -0.0941429138183594
    7     | 1.140625   | 1.15625 | 1.1484375   | -0.0941429138183594 | 0.014556884765625 | -0.0400032997131348
    8     | 1.1484375  | 1.15625 | 1.15234375  | -0.0400032997131348 | 0.014556884765625 | -0.0127759575843811
    9     | 1.15234375 | 1.15625 | 1.154296875 | -0.0127759575843811 | 0.014556884765625 | 0.000877253711223602

Therefore we choose m = 1.15234375 as our approximated solution.


Regula – Falsi method–
 The regula falsi method is one of the oldest methods for computing the real
roots of an algebraic equation. The worked example below shows how to
compute the roots of an algebraic equation using the regula falsi method.
Practising similar problems on your own improves your problem-solving
capability.

Example:
Find the root between (2, 3) of x³ − 2x − 5 = 0 by using the regula falsi method.
Given
f(x) = x³ − 2x − 5
f(2) = 2³ − 2(2) − 5 = −1 (negative)


f(3) = 3³ − 2(3) − 5 = 16 (positive)

Let us take a = 2 and b = 3.

The first approximation to the root is x1 and is given by
x1 = (a f(b) − b f(a)) / (f(b) − f(a))
= (2 f(3) − 3 f(2)) / (f(3) − f(2))
= (2 × 16 − 3 × (−1)) / (16 − (−1))
= (32 + 3)/(16 + 1) = 35/17
= 2.058

Now f(2.058) = 2.058³ − 2 × 2.058 − 5
= 8.716 − 4.116 − 5
= −0.4
Since f(2.058) is negative and f(3) is positive, the root lies between 2.058 and 3.

Taking a = 2.058 and b = 3, we have the second approximation to the root given
by
x2 = (a f(b) − b f(a)) / (f(b) − f(a))
= (2.058 × f(3) − 3 × f(2.058)) / (f(3) − f(2.058))
= (2.058 × 16 − 3 × (−0.4)) / (16 − (−0.4))
= 2.081

Now f(2.081) = 2.081³ − 2 × 2.081 − 5
= −0.15
The root lies between 2.081 and 3.
Take a = 2.081 and b = 3.
The third approximation to the root is given by
x3 = (a f(b) − b f(a)) / (f(b) − f(a))
= (2.081 × 16 − 3 × (−0.15)) / (16 − (−0.15))
= 2.090
The root is approximately 2.09.
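The iteration used in the example can be sketched as code. This is a minimal version of the standard regula falsi update x = (a f(b) − b f(a)) / (f(b) − f(a)); the helper name and iteration count are our choices:

```python
def regula_falsi(f, a, b, n_iter=20):
    """Regula falsi: replace an endpoint with the secant-line x-intercept,
    always keeping the sign change bracketed between a and b."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must have opposite signs"
    x = a
    for _ in range(n_iter):
        x = (a * fb - b * fa) / (fb - fa)   # secant-line intercept
        fx = f(x)
        if fx == 0:
            break
        if fa * fx < 0:                     # root lies in [a, x]
            b, fb = x, fx
        else:                               # root lies in [x, b]
            a, fa = x, fx
    return x

# Same equation as the worked example: x^3 - 2x - 5 = 0 on [2, 3]
print(round(regula_falsi(lambda x: x**3 - 2*x - 5, 2, 3), 4))  # ≈ 2.0946
```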

Newton – Raphson method–



 The Newton-Raphson method, or Newton Method, is a powerful technique


for solving equations numerically. Like so much of the differential calculus,
it is based on the simple idea of linear approximation. The Newton Method,
properly used, usually homes in on a root with devastating efficiency.

Let x0 be a good estimate of r and let r = x0 + h. Since the true root is r, and h = r −
x0, the number h measures how far the estimate x0 is from the truth.

Since h is ‘small,’ we can use the linear (tangent line) approximation to


conclude that

0 = f(r) = f(x0 + h) ≈ f(x0) + hf’ (x0),

and therefore, unless f’(x0) is close to 0,

h ≈ − f(x0)/ f’(x0) .

It follows that

r = x0 + h ≈ x0 − f(x0) / f’(x0) .

Our new improved (?) estimate x1 of r is therefore given by

x1 = x0 − f(x0) / f’(x0) .

The next estimate x2 is obtained from x1 in exactly the same way as x1 was
obtained from x0:

x2 = x1 − f(x1) / f’(x1) .

Continue in this way. If xn is the current estimate, then the next estimate xn+1 is
given by

xn+1 = xn − f(xn) / f’(xn)
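The iteration x_{n+1} = x_n − f(x_n)/f′(x_n) can be sketched as code. The stopping tolerance and the demo function are our choices, not from the notes:

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        dfx = fprime(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) vanished; choose another x0")
        x_new = x - f(x) / dfx
        if abs(x_new - x) < tol:   # successive estimates agree: done
            return x_new
        x = x_new
    return x

# Root of x^3 - 2x - 5 again; Newton converges in a handful of steps.
print(round(newton(lambda x: x**3 - 2*x - 5,
                   lambda x: 3*x**2 - 2, 2.0), 6))  # ≈ 2.094551
```

Note how few iterations this needs compared with bisection on the same equation, matching the "devastating efficiency" remark above.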


Unit – 2
Simultaneous Linear Equations
Solution of systems of linear equations–
 The purpose of this section is to look at the solution of simultaneous linear
equations. We will see that solving a pair of simultaneous equations is
equivalent to finding the location of the point of intersection of two
straight lines.

2x − y = 3

This is a linear equation. It is a linear equation because there are no terms
involving x², y² or xy, or indeed any higher powers of x and y. The only terms we
have got are terms in x, terms in y and some numbers.

Example: Solve simultaneously for x and y:

x + y = 10

x − y = 2

This means that we must find values of x and y that will solve both equations.
We must find two numbers whose sum is 10 and whose difference is 2.
The two numbers, obviously, are 6 and 4:

6 + 4 = 10

6 − 4 = 2

Let us represent the solution as the ordered pair (6, 4).

Now, these two equations --
x + y = 10

x − y = 2


-- are linear equations (Lesson 33). Hence, the graph of each one is a straight
line, and the two lines intersect at the point (6, 4).

Gauss elimination (Direct) method–

Gaussian elimination is a method of solving a linear system (consisting of n
equations in n unknowns) by bringing the augmented matrix [A | b]
to an upper triangular form.

This elimination process is also called the forward elimination method.

The following example illustrates the Gauss elimination procedure.

EXAMPLE : Solve the linear system by the Gauss elimination method.


Solution: Write the augmented matrix [A | b] for the system (the specific
matrices of this worked example were given as figures). The method proceeds
along the following steps.

1. Interchange two equations (rows) so that a nonzero pivot stands in the
first position.

2. Divide the first equation by its pivot element.

3. Add a suitable multiple of the first equation to the second equation to
eliminate its first coefficient.

4. Add a suitable multiple of the first equation to the third equation, and
repeat for the remaining rows and columns.

5. Multiply the last equation by a suitable constant.

The last equation now gives the last unknown; the second equation then gives
the next unknown; finally the first equation gives the first unknown. Hence
the set of solutions is A UNIQUE SOLUTION.
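The forward elimination and back substitution steps can be sketched as code. This is a plain version without pivoting (pivoting is discussed next); the helper name is ours:

```python
def gauss_eliminate(A, b):
    """Forward elimination to upper triangular form, then back substitution.
    Plain version without pivoting; assumes nonzero pivots."""
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n):                               # eliminate below column k
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    x = [0.0] * n                                    # back substitution
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# The earlier example: x + y = 10, x - y = 2  →  x = 6, y = 4
print(gauss_eliminate([[1, 1], [1, -1]], [10, 2]))  # → [6.0, 4.0]
```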

Pivoting–
 The main drawback of the above elimination process is division by the
diagonal term while converting the augmented matrix into upper triangular
form. If the diagonal element is zero, or vanishingly small, then the
elements of the rows below this diagonal become very large in magnitude
and difficult to handle because of the finite storage capacity of
computers. The alternative is to transform the system so that the element
of largest magnitude in that column comes to the pivotal position, i.e.,
the diagonal position.

Partial Pivoting : If only row interchanges are used to bring the element of
largest magnitude in the pivotal column to the pivotal position at each step of
diagonalization, then such a process is called partial pivoting. In this process the
matrix may still have a larger element in a non-pivotal column; only the largest
element of the pivotal column (the column containing the pivot) is brought to the
pivotal (or diagonal) position, by making use of row transformations alone.

Complete Pivoting : In this process the largest element (in magnitude) of the
whole coefficient matrix A is first brought to the 1 × 1 position of the coefficient
matrix; then, leaving the first row and first column, the largest among the
remaining elements is brought to the pivotal 2 × 2 position, and so on, by using
both row and column transformations. This is called complete pivoting. During
row transformations the last column of the augmented matrix also has to be
carried along, but this column is not considered when finding the largest element
in magnitude. Since column transformations are also allowed in this process,
the positions of the individual elements of the unknown vector X change. Hence,
at the end, the elements of the unknown vector X have to be rearranged by
applying the inverse column transformations, in reverse order, to all the column
transformations performed.
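The row-interchange step of partial pivoting can be sketched on its own. This works on the augmented matrix rows and would be called once per elimination step; the helper name is ours:

```python
def pivot_rows(M, k):
    """Partial pivoting: swap row k with the row holding the largest
    |entry| in column k, at or below the diagonal."""
    n = len(M)
    p = max(range(k, n), key=lambda i: abs(M[i][k]))
    if p != k:
        M[k], M[p] = M[p], M[k]
    return M

# Zero pivot in row 0, so partial pivoting swaps the rows before dividing.
M = [[0.0, 2.0, 4.0],
     [3.0, 1.0, 5.0]]
pivot_rows(M, 0)
print(M[0])  # → [3.0, 1.0, 5.0]
```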

Ill conditioned system of equations–


 A system of equations is considered to be ill conditioned if a small change in
the coefficient matrix or a small change in the right hand side results in a
large change in the solution vector.
 It is well known that for a system of equations with an ill-conditioned
matrix, an erroneous solution can be obtained which seems to satisfy the
system quite well.
 Various measures of the ill-conditioning of a matrix have been proposed.
For example, the condition number associated with the linear equation Ax
= b gives a bound on how inaccurate the solution x will be after
approximation.

Note that this is before the effects of round-off error are taken into
account; conditioning is a property of the matrix, not the algorithm or
floating point accuracy of the computer used to solve the corresponding
system. In particular, one should think of the condition number as being
(very roughly) the rate at which the solution, x, will change with respect to
a change in b. Thus, if the condition number is large, even a small error in b
may cause a large error in x. On the other hand, if the condition number is
small then the error in x will not be much bigger than the error in b.
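A tiny demonstration of this sensitivity: the 2×2 system below is nearly singular, and a 0.005% change in b moves the solution x by an amount of order 1. The `solve2` helper (Cramer's rule for 2×2 systems) is our illustration, not from the notes:

```python
def solve2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 system by Cramer's rule (illustrative helper)."""
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Nearly singular coefficient matrix: rows are almost parallel.
x1 = solve2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0)      # b = (2, 2)      → x ≈ (2, 0)
x2 = solve2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0001)   # b nudged 0.0001 → x ≈ (1, 1)
print(x1)
print(x2)
```

A tiny perturbation in b produced a completely different solution: the hallmark of an ill-conditioned system.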

Refinement of solution–

 Iterative refinement is a technique introduced by Wilkinson for reducing


the roundoff error produced during the solution of simultaneous linear
equations. Higher precision arithmetic is required for the calculation of the
residuals.

The iterative refinement algorithm is easily described.

 Solve Ax = b, saving the triangular factors.
 Compute the residuals, r = Ax − b.
 Use the triangular factors to solve Ad = r.
 Subtract the correction, x = x − d.
 Repeat the previous three steps if desired.
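The refinement loop can be sketched on a small system. Here a 2×2 Cramer's-rule solve stands in for re-using the saved triangular factors; the helper names `solve` and `refine` are ours:

```python
def solve(A, b):
    """Tiny 2x2 solver via Cramer's rule, standing in for a factored solve."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def refine(A, b, x, steps=3):
    """Iterative refinement: residual r = Ax - b, correction d from Ad = r,
    then x = x - d."""
    for _ in range(steps):
        r = [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]
        d = solve(A, r)
        x = [x[i] - d[i] for i in range(2)]
    return x

A, b = [[4.0, 1.0], [2.0, 3.0]], [9.0, 13.0]
x0 = [1.0, 1.0]                 # deliberately rough initial "solution"
print(refine(A, b, x0))         # → approaches [1.4, 3.4], the exact solution
```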

Gauss–Seidel method–

 In numerical linear algebra, the Gauss–Seidel method, also known as the
Liebmann method or the method of successive displacement, is an iterative
method used to solve a linear system of equations.
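The worked Gauss–Seidel example in the original notes was given as figures; a minimal sketch of the iteration itself (helper name and demo system are our choices):

```python
def gauss_seidel(A, b, x0=None, iters=25):
    """Gauss-Seidel: sweep the equations, updating each unknown in place
    using the latest values of the other unknowns."""
    n = len(A)
    x = list(x0) if x0 else [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Diagonally dominant system, so the iteration converges:
# 4x + y = 9, 2x + 3y = 13  →  x = 1.4, y = 3.4
print([round(v, 4) for v in gauss_seidel([[4.0, 1.0], [2.0, 3.0]], [9.0, 13.0])])
```

Convergence is guaranteed, for instance, when the coefficient matrix is diagonally dominant, as in this demo.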


Unit – 3
Interpolation and approximation

Finite Differences–
 In mathematics, finite-difference methods (FDM) are numerical
methods for solving differential equations by approximating them
with difference equations, in which finite differences approximate the
derivatives. FDMs are thus discretization methods.
 Finite differences play a key role in the solution of differential equations
and in the formulation of interpolating polynomials. Interpolation is the
art of reading between the tabular values. The interpolation formulae are
also used to derive formulae for numerical differentiation and integration.


 Three forms are commonly considered: forward, backward, and central
differences.

A forward difference is an expression of the form
Δh[f](x) = f(x + h) − f(x).
Depending on the application, the spacing h may be variable or constant. When
omitted, h is taken to be 1: Δ[f](x) = Δ1[f](x).

A backward difference uses the function values at x and x − h, instead of the
values at x + h and x:
∇h[f](x) = f(x) − f(x − h).

Finally, the central difference is given by

δh[f](x) = f(x + h/2) − f(x − h/2).

Difference Table–
 An auxiliary table to facilitate interpolation between the numbers of the
principal table giving approximate differences in values of the tabulated
function corresponding to certain submultiples (such as tenths) of the
constant smallest increment of the independent variable in the table.
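A forward-difference table of the kind described can be built column by column, each column being the differences of the previous one. A quick sketch (helper name ours):

```python
def forward_difference_table(ys):
    """Build successive forward-difference columns Δy, Δ²y, ... from y values."""
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

# y = x^2 at x = 0..4: second differences are constant (2), third are 0,
# as expected for a degree-2 polynomial.
for row in forward_difference_table([0, 1, 4, 9, 16]):
    print(row)
```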

Polynomial Interpolation–
 In numerical analysis, polynomial interpolation is the interpolation of a
given data set by the polynomial of lowest possible degree that passes
through the points of the data set.

Newton Forward Formula–

This formula is particularly useful for interpolating the values of f(x) near the beginning
of the given set of values. h is called the interval of difference and u = (x − a)/h, where
a is the first term.


Example: estimating the value of sin 52° near the beginning of a table of sines
(the worked table and output were given as figures).
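Newton's forward formula P(x) = y0 + u Δy0 + u(u−1)/2! Δ²y0 + ... can be sketched as code, assuming equally spaced x values (helper name ours):

```python
def newton_forward(xs, ys, x):
    """Newton's forward interpolation near the start of an equally spaced table.
    u = (x - a)/h;  P(x) = y0 + u*Δy0 + u(u-1)/2! * Δ²y0 + ..."""
    n = len(xs)
    h = xs[1] - xs[0]
    diffs = [list(ys)]                       # forward-difference columns
    for k in range(1, n):
        prev = diffs[-1]
        diffs.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    u = (x - xs[0]) / h                      # a = xs[0] is the first term
    result, term = ys[0], 1.0
    for k in range(1, n):
        term *= (u - (k - 1)) / k            # accumulates u(u-1)...(u-k+1)/k!
        result += term * diffs[k][0]
    return result

# Sanity check: interpolating x^2 from four tabulated points is exact.
print(newton_forward([0, 1, 2, 3], [0, 1, 4, 9], 1.5))  # → 2.25
```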

Newton Backward Formula–

This formula is useful when the value of f(x) is required near the end of the table. h is
called the interval of difference and u = (x − a_n)/h, where a_n is the last term.

Example: estimating the population in 1925 from values near the end of a table
(the worked table was given as a figure).


Value in 1925 is 96.8368

Central Difference Formula–


Interpolation with unequal interval

Langrange’s interpolation–
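The notes give Lagrange's formula as a figure; the standard formula P(x) = Σ yᵢ Πⱼ≠ᵢ (x − xⱼ)/(xᵢ − xⱼ) works for unequally spaced points and can be sketched as follows (helper name ours):

```python
def lagrange(xs, ys, x):
    """Lagrange interpolation for (possibly unequally spaced) points:
    P(x) = sum_i y_i * prod_{j != i} (x - x_j) / (x_i - x_j)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Unequal intervals; the quadratic through these points is y = x^2:
print(lagrange([1, 2, 5], [1, 4, 25], 4))  # ≈ 16.0
```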


Newton Divided difference formula–
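The divided-difference formula in the notes was given as a figure; a minimal sketch of the standard construction (coefficients f[x0], f[x0,x1], ... built in place, then evaluated in nested Newton form; helper names ours):

```python
def divided_diff_coeffs(xs, ys):
    """Newton's divided differences: returns [f[x0], f[x0,x1], f[x0,x1,x2], ...]."""
    coef = list(ys)
    n = len(xs)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

def newton_poly(xs, coef, x):
    """Evaluate the Newton form with Horner-like nesting."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

# Same unequally spaced data as above (y = x^2):
xs, ys = [1, 2, 5], [1, 4, 25]
c = divided_diff_coeffs(xs, ys)
print(newton_poly(xs, c, 4))  # → 16.0
```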


Unit – 4
Statistics
Statistics and its role in decision making–

Internal and external source of data–

Formation of frequency distribution–

Types of frequency distribution–

Simple and weighted means–

Median and mode–
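The notes leave these topics as headings only. As a small grounded illustration of the standard definitions (simple mean, weighted mean Σwᵢxᵢ/Σwᵢ, median, mode), using Python's standard `statistics` module with data of our own choosing:

```python
from statistics import mean, median, mode

data = [2, 4, 4, 5, 7, 9]
weights = [1, 2, 2, 1, 3, 1]   # arbitrary demo weights

# Weighted mean: sum(w_i * x_i) / sum(w_i)
wmean = sum(w * x for w, x in zip(weights, data)) / sum(weights)

print(mean(data))    # simple mean ≈ 5.1667
print(median(data))  # middle of the sorted data → 4.5
print(mode(data))    # most frequent value → 4
print(wmean)         # → 5.3
```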

Unit – 5
Correlation


Significance of study of correlation–

Types of correlation–

Positive correlation–

Negative correlation–

Simple–

Partial–

Multiple correlation–
Linear and non-linear correlation–

Coefficient of correlation–
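The notes leave this topic as a heading only. The standard coefficient here is Pearson's r = Σ(x − x̄)(y − ȳ) / √(Σ(x − x̄)² Σ(y − ȳ)²); a minimal sketch with demo data of our own choosing:

```python
def pearson_r(xs, ys):
    """Pearson's coefficient of correlation:
    r = sum((x - mx)(y - my)) / sqrt(sum((x - mx)^2) * sum((y - my)^2))."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # perfect positive → 1.0
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))  # perfect negative → -1.0
```

r always lies between −1 and +1, matching the positive/negative correlation types listed above.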

Use of Regression analysis–


Difference between correlation and regression analysis–

Regression Lines–

Regression equation of Y on X–

Regression equation of X on Y–
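The notes leave the regression equations as headings only. The standard least-squares line of Y on X is y = a + bx with b = Σ(x − x̄)(y − ȳ)/Σ(x − x̄)² and a = ȳ − b x̄ (the X-on-Y line swaps the roles of x and y); a minimal sketch with demo data of our own choosing:

```python
def regression_y_on_x(xs, ys):
    """Least-squares regression line of Y on X: returns (a, b) for y = a + b*x,
    with b = sum((x - mx)(y - my)) / sum((x - mx)^2) and a = my - b*mx."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Data lying exactly on y = 2x + 1, so the fit recovers a = 1, b = 2.
a, b = regression_y_on_x([1, 2, 3, 4], [3, 5, 7, 9])
print(a, b)  # → 1.0 2.0
```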
