
Gradient Based Methods

GBM
Introduction

Gradient-based methods consist of three methods, namely:


1 Bisection method
2 Newton-Raphson Method
3 Secant Method
These methods need the function to have derivatives; in other words,
they require a differentiable function.

GBM
Bisection Method

First, assume that the function is unimodal.


Here, in the bisection method, the computation of the second
derivative is not required; the first derivative alone is
enough.
This method is similar to the region-elimination methods
discussed earlier in our course.
We use the sign of the first derivative at two points to
eliminate a certain portion of the search space.

GBM
Bisection Method

Find two points a, b ∈ R such that f'(a) < 0 and f'(b) > 0.
Then choose the region (a, b).
Algorithm:
Step:1 Choose two points a and b such that f'(a) < 0 and
f'(b) > 0. Also choose a small number ε. Set x1 = a and x2 = b.
Step:2 Calculate z = (x1 + x2)/2 and evaluate f'(z).
Step:3 If |f'(z)| ≤ ε, terminate;
else if f'(z) < 0, set x1 = z and go to Step:2; else if f'(z) > 0, set
x2 = z and go to Step:2.
Continue the process until |f'(z)| ≤ ε.
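A minimal Python sketch of this procedure (the function name bisection_min and the callable df standing for f' are illustrative choices, not from the course material):

```python
def bisection_min(df, a, b, eps=1e-3, max_iter=100):
    """Bisection on the sign of f' to locate a minimizer of f.

    df   : callable returning f'(x)
    a, b : points with f'(a) < 0 and f'(b) > 0
    eps  : termination tolerance on |f'(z)|
    """
    x1, x2 = a, b
    for _ in range(max_iter):
        z = 0.5 * (x1 + x2)        # Step 2: midpoint of the current interval
        dz = df(z)
        if abs(dz) <= eps:         # Step 3: derivative close enough to zero
            return z
        if dz < 0:                 # minimum lies to the right of z
            x1 = z
        else:                      # minimum lies to the left of z
            x2 = z
    return 0.5 * (x1 + x2)
```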

GBM
Bisection Method

Consider the following problem (it is also used later with the Newton-Raphson method).


Example
Find the minimum of the function f(x) = x² + 54/x, where ε = 10⁻³.

First of all, f'(x) = 2x − 54/x².


Step:1 Choose a = 2 and b = 5 because f'(a) = −9.501 < 0 and
f'(b) = 7.841 > 0.
Step:2 Calculate z = (a + b)/2 = 3.5 and calculate
f'(z) = 2.591; observe that |f'(z)| ≮ ε.
Since f'(z) > 0, the right half of the search space needs to be
eliminated. Hence, the new interval is (2, 3.5); set x1 = 2
and x2 = 3.5.
End of first iteration; go to Step:2 again.

GBM
Bisection Method

Step:2 Calculate z = (x1 + x2)/2 = (2 + 3.5)/2 = 2.750 and calculate
f'(z) = −1.641.
Since f'(z) < 0, the new interval is (2.750, 3.5); set
x1 = 2.750 and x2 = 3.500.
Since |f'(z)| ≮ ε, go to Step:2 again.
Continue like this until |f'(z)| < ε. For this problem, we need to do
11 more iterations to attain this, and the final interval will be
(2.750, 3.125). Here, z = (2.750 + 3.125)/2 = 2.938 is the
approximate minimum of f with allowed error ε = 10⁻³; the actual
minimum is at x* = 3.0.
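Assuming the bisection_min sketch given earlier, this example can be reproduced as follows (small numerical differences from the slide values are possible):

```python
df = lambda x: 2 * x - 54 / x**2             # f'(x) for f(x) = x^2 + 54/x
z = bisection_min(df, a=2.0, b=5.0, eps=1e-3)
print(z)                                     # converges towards the exact minimizer x* = 3.0
```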

GBM
Newton Raphson Method

The goal of an unconstrained local optimization method is to find a
point where the derivative is as close to zero as possible.
In this method, we approximate the first derivative of the given
function at a point using the Taylor series expansion.
Suppose we want to solve min_{x∈X} f(x).
Here, assume that at x^(k) we can calculate f(x^(k)), f'(x^(k)), f''(x^(k)).
Our aim is to fit a quadratic function through x^(k) that matches its
first and second derivatives with those of the given f.
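Explicitly, the quadratic that matches f, f', and f'' at x^(k) is
q(x) = f(x^(k)) + f'(x^(k))(x − x^(k)) + (1/2) f''(x^(k))(x − x^(k))².
Setting q'(x) = f'(x^(k)) + f''(x^(k))(x − x^(k)) = 0 and calling the solution x^(k+1) gives the update in equation (1) below.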

GBM
Newton Raphson Method

By setting x = x^(k+1), we obtain

x^(k+1) = x^(k) − f'(x^(k)) / f''(x^(k)).   (1)

This gives the following algorithm.
Step:1 Make an initial guess x^(1) and choose a small value for ε. Set
k = 1 and compute f'(x^(k)).
Step:2 Compute f''(x^(k)).
Step:3 Calculate x^(k+1) = x^(k) − f'(x^(k)) / f''(x^(k)) and compute f'(x^(k+1)).
Step:4 If |f'(x^(k+1))| < ε, terminate;
else set k = k + 1 and go to Step:2.
Convergence of the algorithm depends mainly on the initial guess
and on the nature of the objective function.
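A minimal Python sketch of this iteration, assuming callables df and d2f for the first and second derivatives (the names are illustrative, not from the course material):

```python
def newton_min(df, d2f, x0, eps=1e-3, max_iter=100):
    """Newton-Raphson iteration for a stationary point of f.

    df, d2f : callables returning f'(x) and f''(x)
    x0      : initial guess x^(1)
    eps     : termination tolerance on |f'(x)|
    """
    x = x0
    for _ in range(max_iter):
        x_new = x - df(x) / d2f(x)     # update of equation (1)
        if abs(df(x_new)) < eps:       # derivative close enough to zero
            return x_new
        x = x_new
    return x
```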

GBM
Newton Raphson Method

For mathematical functions, the derivative is easy to compute, but
for practical problems, the gradients have to be computed numerically.
At a point x^(t), the first and second derivatives are computed as
follows, using the central difference method (Scarborough, 1966):

f'(x^(t)) = [f(x^(t) + ∆x^(t)) − f(x^(t) − ∆x^(t))] / (2 ∆x^(t)).   (2)

f''(x^(t)) = [f(x^(t) + ∆x^(t)) − 2 f(x^(t)) + f(x^(t) − ∆x^(t))] / (∆x^(t))².   (3)

GBM
Newton Raphson Method

The parameter ∆x^(t) is usually taken to be a small value.
Throughout our course, for calculations, we take ∆x^(t) to be
about 1 percent of x^(t):

∆x^(t) = 0.01 |x^(t)|,  if |x^(t)| > 0.01,
         0.0001,        otherwise.            (4)
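Equations (2)-(4) can be sketched in Python as follows (the helper names are illustrative):

```python
def delta_x(x):
    """Increment of equation (4): about 1 percent of x."""
    return 0.01 * abs(x) if abs(x) > 0.01 else 0.0001

def num_first_derivative(f, x):
    """Central-difference estimate of f'(x), equation (2)."""
    h = delta_x(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def num_second_derivative(f, x):
    """Central-difference estimate of f''(x), equation (3)."""
    h = delta_x(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2
```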

GBM
Example

Example
Find the minimum of the function f(x) = x² + 54/x, where
x^(1) = 1, ε = 10⁻³.

Step:1
Let k = 1. Now we compute f'(x^(k)) using equation (2),
with the increment ∆x^(k) as in equation (4).
Here, x^(1) = 1, ∆x^(1) = 0.01 × 1 = 0.01.
Substituting everything into
f'(x^(1)) = [f(x^(1) + ∆x^(1)) − f(x^(1) − ∆x^(1))] / (2 ∆x^(1)), we get

f'(x^(1)) = [f(1.01) − f(0.99)] / (2 × 0.01)
          = (54.48544 − 55.52554) / 0.02
          = −52.005.

GBM
Example

Step:2
Using equation (3), calculate f''(x^(1)):

f''(x^(1)) = [f(x^(1) + ∆x^(1)) − 2 f(x^(1)) + f(x^(1) − ∆x^(1))] / (∆x^(1))²
           = [f(1.01) − 2 f(1) + f(0.99)] / (0.01)²
           = 110.011.

GBM
Example

Step:3
Now compute

x^(2) = x^(1) − f'(x^(1)) / f''(x^(1))
      = 1 − (−52.005 / 110.011)
      = 1.473.

Also compute f'(x^(2)) using equation (2), with ∆x^(2) = 0.01 × 1.473.
We get f'(x^(2)) = −21.944.

Step:4
Since |f'(x^(2))| ≮ 10⁻³, take k = 2 and go to Step:2. This is the end
of the first iteration. The second iteration is done similarly.
At the end of the second iteration, we will have f''(x^(2)) = 85.796,
x^(3) = 2.086, ∆x^(3) = 0.02086, f'(x^(3)) = −8.239.
GBM
Example

Again, |f'(x^(3))| ≮ ε (ε = 10⁻³).


Set k = 3, go to Step:2, and do the third iteration.
Similarly, continue up to six iterations.
At the end of the sixth iteration, we will have
x^(7) = 3.0001, f'(x^(7)) = −4 × 10⁻⁸, which means |f'(x^(7))| < ε.
Hence, the minimum is attained at x^(7), and min_{x∈X} f(x) ≈ 27.
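Assuming the newton_min and numerical-derivative sketches given earlier, the whole example can be reproduced approximately as:

```python
f = lambda x: x**2 + 54 / x
x_min = newton_min(lambda t: num_first_derivative(f, t),
                   lambda t: num_second_derivative(f, t),
                   x0=1.0, eps=1e-3)
print(x_min, f(x_min))    # approximately x* = 3.0 and f(x*) = 27
```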

GBM
Secant Method

Like the bisection method, here also we find an interval inside which
the minimum exists, and in each iteration a new point z is obtained
using derivatives.
Since the derivatives at the boundary points have opposite signs and
the derivative changes continuously between the boundary points,
there exists some point z ∈ (x1, x2) such that f'(z) = 0.
If we know that x1 and x2 have derivatives of opposite signs, we can
estimate the point z ∈ (x1, x2) with f'(z) = 0 by linearly
interpolating f'. That is,

z = x2 − f'(x2) / [(f'(x2) − f'(x1)) / (x2 − x1)].   (5)

GBM
Secant Method

In this method, more than half of the search space may get eliminated
in one iteration, while in other iterations less than half of the search
space may be eliminated.
Everything depends on the derivatives at the corresponding boundary
points.
The algorithm for this method is the same as that of the bisection
method; the only difference is in Step:2, where, instead of choosing
the midpoint, the new point is calculated using equation (5).

GBM
Secant Method

Find two points a, b ∈ R such that f'(a) < 0 and f'(b) > 0.
Then choose the region (a, b).
Algorithm:
Step:1 Choose two points a and b such that f'(a) < 0 and
f'(b) > 0. Also choose a small number ε. Set x1 = a and x2 = b.
Step:2 Calculate z = x2 − f'(x2) / [(f'(x2) − f'(x1)) / (x2 − x1)] and
evaluate f'(z).
Step:3 If |f'(z)| ≤ ε, terminate;
else if f'(z) < 0, set x1 = z and go to Step:2; else if f'(z) > 0, set
x2 = z and go to Step:2.
Continue the process until |f'(z)| ≤ ε.
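A minimal Python sketch (again with an illustrative name, secant_min); it differs from the bisection sketch only in how the new point z is computed, via equation (5):

```python
def secant_min(df, a, b, eps=1e-3, max_iter=100):
    """Secant-based bracketing of a minimizer of f.

    df   : callable returning f'(x)
    a, b : points with f'(a) < 0 and f'(b) > 0
    """
    x1, x2 = a, b
    z = x2
    for _ in range(max_iter):
        # Step 2: zero of the line through (x1, f'(x1)) and (x2, f'(x2)) -- equation (5)
        z = x2 - df(x2) * (x2 - x1) / (df(x2) - df(x1))
        dz = df(z)
        if abs(dz) <= eps:             # Step 3: derivative close enough to zero
            return z
        if dz < 0:
            x1 = z                     # minimum lies to the right of z
        else:
            x2 = z                     # minimum lies to the left of z
    return z
```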

GBM
Secant Method

Again, consider the same problem.


Example
Find the minimum of the function f(x) = x² + 54/x, where ε = 10⁻³.

First of all, f'(x) = 2x − 54/x².


Step:1 Choose a = 2 and b = 5 because f'(a) = −9.501 < 0 and
f'(b) = 7.841 > 0.
Step:2 Calculate z using equation (5). That is,
z = 5 − f'(5) / [(f'(5) − f'(2)) / (5 − 2)] = 3.644. Also calculate
f'(z) = 3.221, and observe that |f'(z)| ≮ ε.
Since f'(z) > 0, the part of the search space to the right of z needs to
be eliminated. Hence, the new interval is (2, 3.644); set x1 = 2
and x2 = 3.644.
End of first iteration; go to Step:2 again.
GBM
Secant Method

Step:2 Again, calculate z using equation (5). That is,


z = 3.644 − f'(3.644) / [(f'(3.644) − f'(2)) / (3.644 − 2)] = 3.228. Also
calculate f'(z) = 1.127, and observe that |f'(z)| ≮ ε.
Since f'(z) > 0, the part of the search space to the right of z needs to
be eliminated. Hence, the new interval is (2, 3.228); set x1 = 2
and x2 = 3.228.
Note: The amount of eliminated search space is 0.416, which is
smaller than half of the previous search space, (3.644 − 2)/2 = 0.822.
In both these iterations, the eliminated region is less than
half of the search space, but in some iterations a region more
than half of the search space can also be eliminated.
End of second iteration; go to Step:2 again.

GBM
Secant Method

Step:2 Again, calculate z using equation (5). That is,


z = 3.228 − f'(3.228) / [(f'(3.228) − f'(2)) / (3.228 − 2)] = 3.101. Also
calculate f'(z) = 0.586, and observe that |f'(z)| ≮ ε.
Since f'(z) > 0, the part of the search space to the right of z needs to
be eliminated. Hence, the new interval is (2, 3.101); set x1 = 2
and x2 = 3.101.
End of third iteration; go to Step:2 again. Continue like this until
|f'(z)| < ε. Finally, we will get the optimum value as 3.037.
Observe that this is closer to the exact minimum x* = 3.0 than the
value obtained from the bisection method (z = 2.938).
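Assuming the secant_min sketch given earlier, this run can be reproduced as:

```python
df = lambda x: 2 * x - 54 / x**2
z = secant_min(df, a=2.0, b=5.0, eps=1e-3)
print(z)    # close to the exact minimizer x* = 3.0
```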

GBM
THANK YOU

GBM
