
Optimization Methods in Finance

Gerard Cornuejols

Reha Tütüncü

Carnegie Mellon University, Pittsburgh, PA 15213 USA

January 2006
Chapter 1

Introduction

Optimization is a branch of applied mathematics that derives its importance both from the wide variety
of its applications and from the availability of efficient algorithms. Mathematically, it refers to the
minimization (or maximization) of a given objective function of several decision variables that must
satisfy a set of functional constraints. A typical optimization model addresses the allocation of scarce
resources among possible alternative uses in order to maximize an objective function such as total profit.

Decision variables, the objective function, and constraints are the three essential elements of any
optimization problem. Problems that lack constraints are called unconstrained optimization problems,
while the others are referred to as constrained optimization problems. Problems with no objective
function are called feasibility problems. Some problems may have multiple objective functions; these
problems are often addressed by reducing them to a single-objective optimization problem or a sequence
of such problems, as in the sketch below.
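
One standard way to carry out this reduction, sketched here with illustrative mean-variance notation that is not defined in this chapter ($\mu$ for expected returns, $Q$ for the covariance matrix), is to combine the objectives into a weighted sum. An investor who wishes to simultaneously maximize the expected return $\mu^T x$ and minimize the variance $x^T Q x$ of a portfolio $x$ may instead solve, for a fixed trade-off parameter $\lambda > 0$,

$$\min_x \; \lambda\, x^T Q x - \mu^T x,$$

and recover different compromises between the two objectives by varying $\lambda$.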

If the decision variables in an optimization problem are restricted to integers, or to a discrete set
of possibilities, we have an integer or discrete optimization problem. If there are no such restrictions on
the variables, the problem is a continuous optimization problem. Of course, some problems may have a
mixture of discrete and continuous variables; a small example follows below. We continue with a list of
problem classes that we will encounter in this book.
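
As a small illustration of the mixed case (the cardinality constraint below is a standard modeling device, used here purely as an example), consider choosing continuous portfolio weights $x_i \ge 0$ together with binary indicators $y_i \in \{0, 1\}$ that record whether asset $i$ is held at all:

$$x_i \le y_i, \quad i = 1, \dots, n, \qquad \sum_{i=1}^n y_i \le K,$$

so that at most $K$ of the $n$ assets can receive a positive weight. The $x_i$ are continuous variables while the $y_i$ are discrete.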

1.1 Optimization Problems


We start with a generic description of an optimization problem. Given a function $f: \mathbb{R}^n \to \mathbb{R}$
and a set $S \subset \mathbb{R}^n$, the problem of finding an $x^* \in \mathbb{R}^n$ that solves

$$\min_x \; f(x) \quad \text{s.t.} \quad x \in S \tag{1.1}$$

is called an optimization problem. We refer to $f$ as the objective function and to $S$ as the feasible region. If $S$
is empty, the problem is called infeasible. If it is possible to find a sequence $x^k \in S$ such that $f(x^k) \to -\infty$
as $k \to +\infty$, then the problem is unbounded. If the problem is neither infeasible nor unbounded, then
it is often possible to find a solution $x^* \in S$ that satisfies

$$f(x^*) \le f(x), \quad \forall x \in S.$$

Such an $x^*$ is called a global minimizer of the problem (1.1). If

$$f(x^*) < f(x), \quad \forall x \in S,\ x \ne x^*,$$

then $x^*$ is a strict global minimizer. In other instances, we may only find an $x^* \in S$ that satisfies

$$f(x^*) \le f(x), \quad \forall x \in S \cap B_{x^*}(\varepsilon)$$

for some $\varepsilon > 0$, where $B_{x^*}(\varepsilon)$ is the open ball with radius $\varepsilon$ centered at $x^*$, i.e.,

$$B_{x^*}(\varepsilon) = \{x : \|x - x^*\| < \varepsilon\}.$$

Such an $x^*$ is called a local minimizer of the problem (1.1). A strict local minimizer is defined similarly.
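
The distinction between local and global minimizers can be seen numerically. The following minimal sketch (the test function and the use of scipy.optimize.minimize are illustrative choices, not part of the text) shows a local method converging to different minimizers from different starting points:

```python
# Minimal sketch: f(x) = x^4 - 3x^2 + x has a global minimizer near
# x ≈ -1.30 and a strictly worse local minimizer near x ≈ 1.13.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0]**4 - 3 * x[0]**2 + x[0]

# A local method converges to whichever minimizer its starting point
# leads it toward; neither run "knows" about the other basin.
for x0 in (-2.0, 2.0):
    res = minimize(f, x0=np.array([x0]))
    print(f"start {x0:+.1f}  ->  x* = {res.x[0]:+.4f},  f(x*) = {res.fun:+.4f}")
```

Starting from $-2$ the method finds the global minimizer; starting from $+2$ it stops at the local one, even though a lower objective value exists elsewhere in $S = \mathbb{R}$.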

In most cases, the feasible set $S$ is described explicitly using functional constraints (equalities
and inequalities). For example, $S$ may be given as

$$S := \{x : g_i(x) = 0,\ i \in \mathcal{E} \text{ and } g_i(x) \ge 0,\ i \in \mathcal{I}\},$$

where $\mathcal{E}$ and $\mathcal{I}$ are the index sets for equality and inequality constraints.

Then, our generic optimization problem takes the following form:

$$\min_x \; f(x) \quad \text{s.t.} \quad g_i(x) = 0,\ i \in \mathcal{E}, \qquad g_i(x) \ge 0,\ i \in \mathcal{I}.$$
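
To connect this form to finance, here is a minimal sketch (the three-asset covariance data and the SLSQP solver choice are assumptions for illustration, not from the text) of a minimum-variance portfolio: the equality constraint $\sum_i x_i - 1 = 0$ plays the role of $g_i(x) = 0$, $i \in \mathcal{E}$, and the no-short-selling conditions $x_i \ge 0$ play the role of $g_i(x) \ge 0$, $i \in \mathcal{I}$:

```python
# Minimal sketch of min_x f(x) s.t. g_i(x) = 0 (i in E), g_i(x) >= 0 (i in I):
# minimize portfolio variance x^T Q x subject to sum(x) = 1 and x >= 0.
import numpy as np
from scipy.optimize import minimize

# Illustrative (made-up) covariance matrix for three assets.
Q = np.array([[0.10, 0.02, 0.04],
              [0.02, 0.08, 0.01],
              [0.04, 0.01, 0.12]])

def f(x):
    return x @ Q @ x  # objective: portfolio variance

constraints = [
    {"type": "eq",   "fun": lambda x: np.sum(x) - 1.0},  # equality: weights sum to one
    {"type": "ineq", "fun": lambda x: x},                # inequalities: x_i >= 0 (no short sales)
]

x0 = np.full(3, 1.0 / 3.0)  # feasible starting point
res = minimize(f, x0, method="SLSQP", constraints=constraints)
print("minimum-variance weights:", np.round(res.x, 4))
```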
