Numerical Methods: I. Finding Roots II. Integrating Functions
Finding roots
Even if exact procedures existed, we would still face the problem that a computer can only represent a finite number of values; thus we cannot validate our answer, because it will not come out exactly. However, we can say how accurate our solution is compared to the exact solution.
Problem-dependent decisions
Approximation: since we cannot have exactness, we specify our tolerance for error
Convergence: we also specify how long we are willing to wait for a solution
Method: we choose a method that is easy to implement, yet powerful and general enough
Put a human in the loop: since no general procedure can find roots of complex equations, we let a human specify a neighbourhood of a solution
Choose the method and the initial guess wisely. It is a good idea to start with a crude graph. If you are looking for a single root, you only need one positive and one negative value. If even a crude graph is difficult, generate a table of values and plot the graph from that.
Graph
Solution, continued
One crossover point is at about 0.5, another at about 2.0 (and there are very many of them). Compute the values of both functions at 0.5, then decrement/increment x slightly, watching the sign of the difference function. If there is an improvement, continue until you get closer; stop when you are close enough. The resulting table is shown below (a small program for producing such a table is sketched after it).
Step   x        e^-x     sin(πx/2)   difference
       0.3      0.741    0.454
       0.4      0.670    0.588
2      0.5      0.606    0.707       -0.101
3      0.45     0.638    0.649       -0.012
4      0.425    0.654    0.619        0.0347
5      0.4375   0.6456   0.6344       0.01126
6      0.44365  0.6417   0.6418      -0.00014
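A small program along the following lines could produce such a table. This is only a sketch; the range, step size, and output format are arbitrary choices, and the right-hand function is taken to be sin(πx/2), which matches the tabulated values.

#include <stdio.h>
#include <math.h>

/* Sketch: tabulate both sides of e^-x = sin(pi*x/2) to locate sign changes. */
int main(void)
{
    const double PI = 3.14159265358979323846;
    double x;
    for (x = 0.1; x <= 3.0; x += 0.1) {
        double left  = exp(-x);              /* e^-x        */
        double right = sin(PI * x / 2.0);    /* sin(pi*x/2) */
        printf("%5.2f  %8.4f  %8.4f  %9.5f\n", x, left, right, left - right);
    }
    return 0;
}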
Bisection Method
The Bisection Method slightly modifies the educated-guess approach of the hand-calculation method. Suppose we know that a function has a root between a and b (and that the function is continuous, and there is only one root there).
Bisection Method
Keep in mind the general approach in Computer Science: for complex problems we try to find a uniform, simple, systematic calculation. How can we express the preceding hand calculation in this way? Hint: use an approach similar to binary search.
Bisection method
Check whether a solution lies between a and b: is F(a) * F(b) < 0?
Try the midpoint m: compute F(m)
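For instance, with F(x) = x² - 4 (the example used below) on [0, 3]: F(0) = -4 and F(3) = 5 have opposite signs, so a root lies between them. The midpoint m = 1.5 gives F(1.5) = -1.75 < 0, so the root lies in [1.5, 3]; the next midpoint 2.25 gives F(2.25) = 1.0625 > 0, so the root lies in [1.5, 2.25], and so on.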
Bisection method
This method converges to within any pre-specified tolerance when a single root exists and the function is continuous on the interval.
Example exercise: write a function that finds the square root of any positive number and does not require the programmer to specify initial estimates.
[Graph: F(x) = x² - c, plotted for the example F(x) = x² - 4 over 0 ≤ x ≤ 3; the curve crosses zero at x = 2, the square root of c.]
double Sqrt(double c, double tol)
{
    double a, b, mid, f;
    // set initial boundaries of interval
    if (c < 1) { a = c; b = 1; }
    else       { a = 1; b = c; }
    do {
        mid = ( a + b ) / 2.0;      // midpoint of the current interval
        f = mid * mid - c;          // F(mid) = mid^2 - c
        if ( f < 0 )
            a = mid;                // root lies in the upper half
        else
            b = mid;                // root lies in the lower half
    } while ( fabs( f ) > tol );
    return mid;
}
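A minimal driver for this function might look as follows (the tolerance 1e-6 is an arbitrary choice):

#include <stdio.h>
#include <math.h>    /* fabs, used by Sqrt above */

int main(void)
{
    printf("%f\n", Sqrt(2.0, 1e-6));    /* prints approximately 1.41421 */
    printf("%f\n", Sqrt(9.0, 1e-6));    /* prints approximately 3.00000 */
    return 0;
}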
Improvement to Bisection
Regula Falsi, or the Method of False Position: use the shape of the curve as a cue. Use a straight line between the y values to select the interior point. As the curve segments become small, this closely approximates the root.
[Figure: the curve and its straight-line approximation between (a, fa) and (b, fb); similar triangles (sides adjacent1 and adjacent2) locate the interior point.]
The values needed for x are values already computed. (Different triangles are used here than in the text, but the idea is the same.)
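One way to obtain that interior point, in the notation of the code later in these notes: the straight line through (a, fa) and (b, fb) crosses zero at the x satisfying

( x - a ) / ( 0 - fa ) = ( b - a ) / ( fb - fa )

which gives

x = a - fa * ( b - a ) / ( fb - fa ) = ( fa*b - fb*a ) / ( fa - fb )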
Problem with Regula Falsi: if the graph is convex down, the interpolated point will repeatedly appear in the larger segment. Therefore a problem arises if we use the size of the current interval as the criterion that the root has been found.
[Figure: the interpolated points repeatedly fall in the larger segment, so one endpoint barely moves; labels a, b, fa, fb, fx.]
If we use the criterion abs(fx) < e, this is not a problem. But this criterion cannot always be used (e.g. if the function is very steep close to the root).
Another problem with Regula Falsi: if the function grows too fast, convergence is very slow.
[Figure: successive interior points interior1, interior2, interior3 creep slowly along from a toward the root; labels a, fa, fb.]
Secant Method
Exactly like Regula Falsi, except: there is no need to check for the sign.
Begin with a, b as usual
Compute the interior point as usual
NEW: set a to b, and b to the interior point
Loop until the desired tolerance is reached.
Secant method
Intuition (no animation ready yet): the method automatically flips back and forth, solving the problem with unmodified regula falsi. Sometimes both fa and fb are positive, but it quickly tracks secant lines to the root.
Secant Illustration
F(x) = x² - 10
[Graph: secant iterations for F(x) = x² - 10, starting from a = 1, b = 10.]
1: (a=1, fa=-9), (b=10, fb=90) → int = 1.8, fint = -6.7
2: (a=10, fa=90), (b=1.8, fb=-6.7) → int = 0.88, fint = -9.22
3: (a=1.8, fa=-6.7), (b=0.88, fb=-9.22) → int = 4.25, fint = 8
4: (a=0.88, fa=-9.22), (b=4.25, fb=8) → int = 2.68, fint = -2.8
etc.
Called function
All functions call this function: func(x, c) = x² - c
double func(double x, double c) {
    return x * x - c;
}

Note that the root of this function is the square root of c. The functions call it as follows: func( current_x, square_of_x ); that is, the second argument is the square of the root being sought.
Initializations
All functions implementing the root-finding algorithm have the same initialization:
double a, b;
// sqrt(c) lies between c and 1 when c < 1, and between 1 and c otherwise
if ( c < 1 ) { a = c; b = 1; }
else         { a = 1; b = c; }
double fa = func( a, c );
double fb = func( b, c );
Code
The earlier code gave a square-root example using the logic of the bisection method. The following programs, to fit on a slide, have no debug output and all share the same initializations (shown above).
double bisection(double c, int iterations, double tol)
{
    // same initializations of a, b, fa, fb as above
    for ( int i = 0; i < iterations; i++ ) {
        double x = ( a + b ) / 2;               // midpoint of current interval
        double fx = func( x, c );
        if ( fabs( fx ) < tol ) return x;       // close enough: done
        if ( fa * fx < 0 ) { b = x; fb = fx; }  // root lies in [a, x]
        else               { a = x; fa = fx; }  // root lies in [x, b]
    }
    return -1;                                  // did not converge
}

BISECTION METHOD: This is the heart of the algorithm. Compute the midpoint and the value of the function there; iterate on the half that contains the sign change.
double regula(double c, int iterations, double tol)
{
    // same initializations of a, b, fa, fb as above
    for ( int i = 0; i < iterations; i++ ) {
        double x = ( fa*b - fb*a ) / ( fa - fb );   // interior point on the secant line
        double fx = func( x, c );
        if ( fabs( fx ) < tol ) return x;
        if ( fa * fx < 0 ) { b = x; fb = fx; }
        else               { a = x; fa = fx; }
    }
    return -1;
}

REGULA FALSI: The main change from bisection is computing an interior point that more closely approximates the root.
double regulaMod(double c, int iterations, double tol)
{
    // same initializations of a, b, fa, fb as above
    for ( int i = 0; i < iterations; i++ ) {
        double x = ( fa*b - fb*a ) / ( fa - fb );
        double fx = func( x, c );
        if ( fabs( fx ) < tol ) return x;
        if ( fa * fx < 0 ) { b = x; fb = fx; fa *= RELAX; }   // relax the stale endpoint
        else               { a = x; fa = fx; fb *= RELAX; }
    }
    return -1;
}

MODIFIED REGULA FALSI: The only change from ordinary regula is the RELAXation factor.
double secant(double c, int iterations, double tol)
{
    // same initializations of a, b, fa, fb as above
    for ( int i = 0; i < iterations; i++ ) {
        double x = ( fa*b - fb*a ) / ( fa - fb );
        double fx = func( x, c );
        if ( fabs( fx ) < tol ) return x;
        a = b;  fa = fb;    // shift the points over
        b = x;  fb = fx;
    }
    return -1;
}

SECANT METHOD: The change from ordinary regula is that the sign check is dropped and the points are just shifted over.
Actual performance
The actual code includes other output commands. The 4 methods were used to compute the square roots of 4, 100, 1000000, and 0.25. Maximum allowable iterations: 1000. Tolerance: 1e-15. RELAXation factor: 0.8.
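A test driver along these lines could generate the comparison below. This is a sketch only: it assumes the four functions above (with their initializations filled in), and the actual program also counted iterations and printed further output.

#include <stdio.h>

#define RELAX 0.8   /* relaxation factor used by regulaMod */

int main(void)
{
    double cs[] = { 4.0, 100.0, 1000000.0, 0.25 };
    int i;
    for (i = 0; i < 4; i++) {
        double c = cs[i];
        printf("c = %g: bisection %g, regula %g, mod %g, secant %g\n",
               c,
               bisection(c, 1000, 1e-15),
               regula(c, 1000, 1e-15),
               regulaMod(c, 1000, 1e-15),
               secant(c, 1000, 1e-15));   /* -1 marks failure to converge */
    }
    return 0;
}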
[Table: iterations required by each method (Bisection, Regula, Mod, Secant) for each of c = 4, 100, 1000000, and 0.25. The reported counts were 51, 61, 32, 1000, 21, 46, 7, 20, 54, 47, 1000, 30, 28, 19, 10, and 6; a count of 1000 means the iteration limit was reached without convergence.]
For example, all four code examples above used the closeness of fx to zero as the convergence criterion.
Convergence criterion
Closeness of fx to zero is not always a good criterion: consider a very flat function, which may be close to zero a considerable distance before the root.
Convergence criteria
Fractional size of the search interval: (cur_b - cur_a) / (b - a). Used in the modified falsi in the text. Indicates that further search may not be productive. Does not guarantee a small value of fx.
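As a sketch, this criterion could be packaged as a small test like the one below; the names a0, b0 (the original endpoints) and frac are introduced here for illustration and are not part of the lecture code.

#include <math.h>

/* Returns nonzero when the current bracket [a, b] has shrunk to less than
   the fraction frac of the original bracket [a0, b0]. */
int interval_converged(double a, double b, double a0, double b0, double frac)
{
    return fabs(b - a) / fabs(b0 - a0) < frac;
}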
Numerical Integration
In numerical analysis, we take the visual view of integration as the area under the curve. Many integrals that occur in science or engineering practice do not have a closed-form solution and must be evaluated by numerical integration.
Trapezoidal Rule
The area under the curve from [a, fa] to [b, fb] is initially approximated by a trapezoid:
I1 = ( b - a ) * ( fa + fb ) / 2
A simple trapezoid over a large interval is prone to error. Divide the interval into halves.
With fc the value of the function at the midpoint c of [a, b]:
I2 = ( b - a )/2 * ( fa + 2*fc + fb ) / 2
(Note that interior sides count twice, since they belong to two trapezoids.)
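For example, for f(x) = x² on [0, 2] (exact area 8/3 ≈ 2.67): I1 = 2 * (0 + 4)/2 = 4, and with fc = f(1) = 1, I2 = (2/2) * (0 + 2*1 + 4)/2 = 3, already noticeably closer.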
Further development
I2 = ( b - a )/2 * ( fa + 2*fc + fb ) / 2
   = ( b - a )/2 * ( fa + fb + 2*fc ) / 2
   = ( b - a )/2 * ( fa + fb )/2 + ( b - a )/2 * fc
   = I1/2 + ( b - a )/2 * fc
Notice that (b-a)/2 is the new interval width and fc is the value of the function at the new interior point. If we call dk the new interval width at step k, and keep cutting the intervals in half:
Ik = Ik-1 / 2 + dk * (sum of all new interior f-values)
Trapezoidal algorithm
Compute the first trapezoid: I = ( fa + fb ) * ( b - a ) / 2
Then repeatedly: New I = Old I / 2, and divide the length of the interval by 2
Compute the sum of the function values at all new interior points, times the new interval length
Add this to New I. Continue until there is no significant difference between successive estimates.
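A sketch of this algorithm for a general function f follows; the name trapezoid and the use of the difference between successive estimates as the stopping test are illustrative choices, not taken from the lecture code.

#include <math.h>

/* Sketch: repeated interval halving following
   I_k = I_(k-1)/2 + d_k * (sum of f at the new interior points). */
double trapezoid(double (*f)(double), double a, double b, double tol)
{
    double I = (f(a) + f(b)) * (b - a) / 2.0;   /* first trapezoid          */
    long n = 1;                                 /* current number of panels */
    for (;;) {
        double d = (b - a) / (2.0 * n);         /* new interval width       */
        double sum = 0.0;
        long i;
        for (i = 0; i < n; i++)                 /* f at each new midpoint   */
            sum += f(a + (2 * i + 1) * d);
        double Inew = I / 2.0 + d * sum;
        if (fabs(Inew - I) < tol)               /* no significant change    */
            return Inew;
        I = Inew;
        n *= 2;
    }
}

For example, trapezoid(sin, 0.0, 3.141592653589793, 1e-6) approaches 2, the exact value of the integral of sin from 0 to π.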
Simpson's rule
Another approach: rather than using the straight line of best fit, use the parabola of best fit (a curve). This converges more quickly.
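For reference (a standard result, not derived in these notes), fitting a parabola through (a, fa), the midpoint (m, fm), and (b, fb) gives, for a single panel,

I = ( b - a ) / 6 * ( fa + 4*fm + fb )

A sketch of a composite version with a fixed, even number of subintervals n (the name simpson and the fixed-n interface are illustrative choices):

/* Sketch: composite Simpson's rule with n subintervals (n must be even). */
double simpson(double (*f)(double), double a, double b, int n)
{
    double h = (b - a) / n;
    double sum = f(a) + f(b);
    int i;
    for (i = 1; i < n; i++)
        sum += (i % 2 == 1 ? 4.0 : 2.0) * f(a + i * h);   /* 4 at odd, 2 at even points */
    return sum * h / 3.0;
}

With n = 2 this reduces to the single-panel formula above.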