
OPTIMISATION TECHNIQUES WITH NUMPY & SCIPY – DSI434

3. Unconstrained Optimization

CONTENT

(a) Introduction to unconstrained optimization problems
(b) One-dimensional optimization algorithms: golden section search, Brent's method
(c) Multidimensional optimization algorithms: steepest descent, conjugate gradient, Newton's method
(d) Implementation of unconstrained optimization algorithms using NumPy and SciPy

(a) Introduction to unconstrained optimization problems

 Unconstrained optimization problems refer to finding the maximum or minimum value of a function without any constraints on the variables. In other words, the goal is to find the optimal value of a function without any limitations or restrictions.
 The general form of an unconstrained optimization problem is:
minimize or maximize f(x)

where x is a vector of variables and f(x) is the objective function that needs to be
optimized.
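
As a minimal added illustration (not from the original slides; the function and starting point are arbitrary choices), consider f(x) = (x1 - 1)^2 + (x2 + 2)^2, which has no restrictions on x and attains its minimum value 0 at x = (1, -2). SciPy recovers this directly:

import numpy as np
from scipy.optimize import minimize

def f(x):
    # Illustrative objective with minimum at (1, -2)
    return (x[0] - 1)**2 + (x[1] + 2)**2

print(minimize(f, np.zeros(2)).x)   # approximately [1, -2]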

 The solution to an unconstrained optimization problem involves finding the values of x that result in the minimum or maximum value of the objective function. This can be achieved using various optimization techniques such as gradient descent, Newton's method, and the conjugate gradient method.
 Unconstrained optimization problems have numerous applications in various fields such
as engineering, economics, finance, and machine learning. For example, in finance,
unconstrained optimization is used to find the optimal portfolio allocation that maximizes
returns while minimizing risks. In machine learning, unconstrained optimization is used to
optimize the parameters of a model to improve its performance.
Practical Examples of Unconstrained Optimization
1. Portfolio Optimization: In finance, investors often face the challenge of allocating their investments across different
assets to maximize their returns while minimizing risks. Unconstrained portfolio optimization aims to find the optimal
allocation of investments that maximizes the expected return or minimizes the risk, such as the variance of the portfolio.
2. Machine Learning Model Training: When training a machine learning model, the goal is to find the optimal set of model
parameters that minimize the error or loss function. This involves adjusting the weights and biases of the model to optimize
its performance on a given dataset.
3. Supply Chain Management: In supply chain optimization, the objective is to minimize costs while maximizing efficiency.
For example, a company may want to optimize its production and distribution processes to minimize transportation costs,
inventory holding costs, or production time.
4. Resource Allocation: Organizations often face the challenge of allocating limited resources, such as manpower, budget,
or time, to different projects or tasks. The objective is to find the optimal allocation that maximizes the overall productivity
or minimizes the completion time.
5. Process Optimization: In manufacturing or industrial processes, the objective is to optimize various parameters to
maximize the output or minimize the production cost. This can involve optimizing parameters like temperature, pressure,
flow rate, or material composition.
6. Marketing Campaign Optimization: In marketing, companies aim to optimize their advertising or promotional campaigns
to maximize customer engagement, sales, or return on investment. This involves finding the optimal allocation of resources
across different marketing channels or targeting specific customer segments.

These are just a few examples of how unconstrained optimization problems arise in real-world scenarios. In each case, the
objective is to find the optimal solution that maximizes or minimizes a certain objective, without any constraints on the
variables involved. Various optimization algorithms and techniques, such as gradient descent, genetic algorithms, or
simulated annealing, can be used to solve these problems and find the optimal solutions.
(b) One-dimensional optimization algorithms: golden section search, Brent's method
Golden section search is a one-dimensional optimization algorithm used to find the minimum or maximum value of a function within a given interval, assuming the function is unimodal on that interval. The algorithm places two interior points in the interval according to the golden ratio (approximately 1.618), evaluates the function at those points, and discards the sub-interval that cannot contain the optimum. The process is repeated until the desired level of accuracy is achieved.
One real-world example of using golden section search is in designing airplane wings. Engineers use this algorithm to optimize the shape of the
wing to minimize drag and maximize lift.
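
The following is a minimal sketch of golden section search in plain Python (an added illustration; the test function, interval, and tolerance are arbitrary choices, and the loop re-evaluates the function for clarity rather than caching values):

import math

def golden_section_min(f, a, b, tol=1e-6):
    # Minimize a unimodal function f on [a, b]
    invphi = (math.sqrt(5) - 1) / 2        # 1/phi, approximately 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            # Minimum lies in [a, d]; reuse c as the new d
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            # Minimum lies in [c, b]; reuse d as the new c
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Example: the minimum of (x - 2)^2 on [0, 5] is at x = 2
print(golden_section_min(lambda x: (x - 2)**2, 0, 5))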

Brent's method is another one-dimensional optimization algorithm used to find the minimum or maximum value of a function within a given interval. For minimization it combines the slow but reliable golden-section search with successive parabolic interpolation, which achieves faster convergence on smooth functions.

One real-world example of using Brent's method is in optimizing the performance of a computer program. Developers use this algorithm to find
the optimal values of parameters that affect the program's speed and accuracy.
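
In SciPy, Brent's method for one-dimensional minimization is available through minimize_scalar (a brief added illustration; the test function and bracket points are arbitrary choices):

from scipy.optimize import minimize_scalar

# Minimize (x - 2)^2 with Brent's method; the bracket (0, 1, 5) satisfies f(1) < f(0) and f(1) < f(5)
res = minimize_scalar(lambda x: (x - 2)**2, bracket=(0, 1, 5), method='brent')
print(res.x)  # approximately 2.0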

One-dimensional optimization algorithms are used to find the optimal solution in single-variable problems. Two commonly used algorithms for one-dimensional optimization are golden section search and Brent's method.
Cont.
1. Golden section search: This algorithm is used to find the minimum or maximum of a function within a given interval. It works by progressively narrowing down the interval that contains the optimal solution. The algorithm places interior points in the interval according to the golden ratio, discards the sub-interval that cannot contain the optimum, and updates the interval boundaries based on the function evaluations. This process continues until a satisfactory solution is found.

Real-world example: Suppose you want to find the optimal production rate for a manufacturing process that minimizes the
production cost per unit. By applying the golden sections search, you can iteratively narrow down the production rate range
and find the value that minimizes the cost function.

2. Brent's method: This algorithm is another approach to find the minimum or maximum of a function within a given interval. It combines golden-section search with successive parabolic interpolation to efficiently converge on the optimal solution. Brent's method is known for its robustness and relatively fast convergence.

Real-world example: Consider a scenario where you want to determine the vehicle speed that minimizes fuel consumption
for a given distance. By using Brent's method, you can iteratively update the speed based on fuel consumption
measurements until you find the optimal value that minimizes the fuel consumption.
Both golden section search and Brent's method are commonly used in optimization problems that involve a single variable. Their effectiveness and efficiency make them valuable tools in a wide range of real-world applications.
(c) Multidimensional optimization algorithms: steepest descent, conjugate gradient, Newton's method
 Steepest descent is a multidimensional optimization algorithm used to find the minimum value of a function by iteratively moving in the direction of steepest descent, i.e. the direction of the negative gradient. This algorithm is commonly used in machine learning and image processing applications.
 One real-world example of using steepest descent is in optimizing the weights of a neural network. The algorithm is used to
find the optimal weights that minimize the error between the predicted and actual outputs of the network.
 Conjugate gradient is another multidimensional optimization algorithm used to find the minimum value of a function. It works by iteratively searching along conjugate directions, which are orthogonal with respect to the Hessian of the objective rather than simply orthogonal to each other. This algorithm is commonly used in optimization problems with large numbers of variables.
 One real-world example of using conjugate gradient is in training support vector machines (SVMs) for classification tasks.
The algorithm is used to find the optimal values of the SVM parameters that minimize the classification error on a given
dataset.
 Newton's method is a multidimensional optimization algorithm that uses second-order derivatives (the Hessian) to iteratively find the minimum value of a function. It converges rapidly near the optimum and finds the exact minimizer of a quadratic function in a single step.
 One real-world example of using Newton's method is in optimizing the design of a car engine. The algorithm is used to find the optimal values of the engine parameters that maximize fuel efficiency.
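
As a complement to the SciPy-based examples in section (d), here is a minimal hand-rolled Newton iteration (an added sketch; the quadratic objective, gradient, and Hessian below are assumptions made for this example):

import numpy as np

# Illustrative quadratic objective: f(x) = x0^2 + 2*x1^2
def grad(x):
    return np.array([2*x[0], 4*x[1]])

def hess(x):
    return np.array([[2.0, 0.0], [0.0, 4.0]])

x = np.array([3.0, -2.0])
for _ in range(5):
    # Newton step: solve H p = grad(x), then move to x - p
    p = np.linalg.solve(hess(x), grad(x))
    x = x - p
print(x)  # for a quadratic, Newton's method reaches the minimizer (0, 0) in one step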
(d) Implementation of unconstrained optimization algorithms using NumPy and SciPy
NumPy and SciPy are popular Python libraries that provide a wide range of functions for scientific computing, including optimization routines. Here are some examples of how to implement unconstrained optimization algorithms using these libraries:
1. Steepest Descent
2. Conjugate Gradient
3. Newton's Method
1. Steepest Descent
SciPy's minimize function does not offer a plain steepest-descent solver, but the algorithm is simple enough to write directly with NumPy: repeatedly step in the direction of the negative gradient. Here's an example code snippet (a minimal sketch with a fixed step size):

import numpy as np

def objective(x):
    return x[0]**2 + 2*x[1]**2 + 3*x[2]**2

def gradient(x):
    return np.array([2*x[0], 4*x[1], 6*x[2]])

x = np.array([1.0, 1.0, 1.0])
step = 0.1                       # fixed step size (learning rate)
for _ in range(200):
    x = x - step * gradient(x)   # move in the direction of steepest descent
print(x)

 In this example, we define the objective function objective, its gradient gradient, and an initial guess. At every iteration the current point is moved a small step against the gradient, and the iterates converge towards the minimizer of the objective, which for this function is the origin.
2. Conjugate Gradient:
 To implement the conjugate gradient method using NumPy and SciPy, you can use the minimize function from SciPy's optimize module with method='CG'. Here's an example code snippet:

from scipy.optimize import minimize
import numpy as np

def objective(x):
    return x[0]**2 + 2*x[1]**2 + 3*x[2]**2

def gradient(x):
    return np.array([2*x[0], 4*x[1], 6*x[2]])

x0 = np.array([1.0, 1.0, 1.0])
res = minimize(objective, x0, method='CG', jac=gradient)
print(res.x)

 In this example, we define the objective function and its gradient and use an initial guess x0. The minimize function takes the objective function, the initial guess, and the optimization method ('CG' for conjugate gradient) as inputs. The jac=gradient argument supplies the gradient of the objective function; if it is omitted, SciPy approximates the gradient numerically.
3. Newton's Method:
 To implement Newton's method using NumPy and SciPy, you can use the minimize function with the Newton-CG optimization method. Here's an example code snippet:

from scipy.optimize import minimize


import numpy as np

def objective(x):
    return x[0]**2 + 2*x[1]**2 + 3*x[2]**2

def gradient(x):
    return np.array([2*x[0], 4*x[1], 6*x[2]])

def hessian(x):
    return np.array([[2, 0, 0], [0, 4, 0], [0, 0, 6]])

x0 = np.array([1, 1, 1])
res = minimize(objective, x0, method='Newton-CG', jac=gradient, hess=hessian)
print(res.x)

 In this example, we define the same objective function as before, but also define a gradient function gradient and a Hessian function hessian. The
minimize function takes the objective function, the initial guess, and the optimization method (Newton-CG for Newton's method) as inputs. The
jac=gradient argument specifies that the gradient of the objective function is provided by the gradient function, and the hess=hessian argument
specifies that the Hessian of the objective function is provided by the hessian function.
Other algorithms used in unconstrained optimization
The NumPy and SciPy libraries in Python provide various unconstrained optimization algorithms. Some commonly used algorithms include:

1. Nelder-Mead: This algorithm is a direct search method that does not require gradient information.

2. Powell: Powell's method is another derivative-free optimization algorithm that uses only function evaluations.

3. BFGS: Broyden-Fletcher-Goldfarb-Shanno (BFGS) is an iterative gradient-based optimization algorithm that approximates the Hessian
matrix.

4. L-BFGS-B: This algorithm is a limited-memory variant of BFGS and is particularly useful for problems with large numbers of variables.

5. CG: The conjugate gradient method is an iterative algorithm that uses only function and gradient evaluations.

To implement these algorithms using numpy and scipy, you can use the `minimize` function from the `scipy.optimize` module. You will need to
define the objective function and specify the desired optimization algorithm as an argument in the `method` parameter.
`minimize` function with the BFGS algorithm:
Here is an example of how you can use the `minimize` function with the BFGS algorithm:

import numpy as np
from scipy.optimize import minimize

# Define the objective function
def objective(x):
    return x[0]**2 + x[1]**2

# Initial guess
x0 = np.array([1, 1])

# Minimize the objective function using BFGS
result = minimize(objective, x0, method='BFGS')

# Print the optimized solution
print(result.x)

This code snippet demonstrates how to minimize a simple objective function using the BFGS algorithm provided by
scipy.optimize. You can replace the objective function with your own function and try other optimization algorithms as
well.
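
For instance, the other methods listed earlier can be tried by changing only the method argument (a brief added illustration using the same simple objective):

import numpy as np
from scipy.optimize import minimize

def objective(x):
    return x[0]**2 + x[1]**2

x0 = np.array([1.0, 1.0])

# Derivative-free simplex search
print(minimize(objective, x0, method='Nelder-Mead').x)

# Limited-memory BFGS
print(minimize(objective, x0, method='L-BFGS-B').x)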
THANK YOU
SIYABONGA
TWALUMBA
TATENDA
OBRIGADO
