
Mathematical Optimization

Armin Darmawan

Department of Industrial Engineering


Hasanuddin University
Outline
01 Introduction to Optimization
02 Types of Mathematical Functions
03 Modeling
04 Classical Optimization Theory
05 Numerical Methods for Unconstrained Optimization
06 Research Example

2
01 Introduction

3
Introduction
Basic concept
Maximizing or minimizing some function relative to some set, often
representing a range of choices available in a certain situation. The
function allows comparison of the different choices for determining
which might be “best.”

Common applications: minimal cost, maximal profit, best approximation, optimal design, optimal management or control, variational principles.

4
Efficient vs Effective

                 Less Efficient   Efficient
Effective              I             II
Less Effective        III            IV
5
Basic concept
Important elements in optimization:
- Objective function: max or min (profit, cost, time, space, resources)
- Decision variables: X1, X2, … (single or multiple)
- Constraints: none, or with constraints (>=, <=, =, >, <, …)

Objective function: constrained (limited solution set) or unconstrained (unlimited solution set)
Decision variables: single variable or multiple variables
6
Basic concept
- Objective function: max or min (profit, cost, time, resources)
- Decision variables: X1, X2, …
- Constraints: none, or with constraints (>=, <=, =, >, <, …)

Variable type: continuous (measurement scale) or discrete (count scale)
Case type: static (single time scale) or dynamic (time trend scale)
7
Basic concept
Example:
- Case 1: A student must travel a long distance from home to campus every day. There are several routes that can be used to reach campus.
  Problem: Which route is the most effective?
Example:
- Case 2: PT XYZ makes 10 types of products using the same production facility. The products are produced in turn. The facility operates 8 hours a day, 6 days a week. On the 1st of every month the facility is cleaned for maintenance. The production cost of each product type differs, as does its selling price. All products use nearly the same raw materials.
- Problem for case 2: How many units of each product should be produced to obtain the maximum profit?
8
Basic concept
Benefits of optimization:
- Cost reduction: optimizing production processes, inventory management, and resource utilization to cut costs.
- Improved efficiency: maximizing production output while minimizing wasted time, energy, or material.
- Improved quality: optimizing product designs and processes to deliver higher-quality products at lower cost.
- Solving complex problems: tackling limited-resource allocation, production planning, and scheduling problems that often involve many variables and constraints.
9
Basic concept
Its main roles include:
- Manufacturing and production: optimizing production schedules to maximize machine and labor utilization while minimizing production time and cost.
- Supply chain: optimizing inventory management, shipment planning, and the selection of production or distribution facility locations to minimize delivery cost and time.
- System and process design: using optimization to design systems or processes that are efficient both technically and economically.
- Resource management: maximizing the use of limited resources, such as raw materials, labor, or machines, in production.
10
Basic concept
Mastery of optimization mathematics is essential for an industrial engineering professional because it:
- Provides a theoretical basis for decision making grounded in data and mathematical analysis.
- Enables the modeling of complex industrial problems involving many variables and constraints.
- Provides tools for designing efficient, resource-saving solutions in terms of time, labor, and cost.
- Strengthens a company's competitiveness by optimizing processes and minimizing waste.
11
Basic concept
Flowchart of the management science process:
Observation → Problem definition → Model construction (management science techniques) → Solution → Information → Implementation, with feedback from implementation back to the earlier steps.
12
02 Functions

13
Mathematical
Mathematical Function
Objective function: f(x)
Decision variables: (x1, x2, …, xn)
Constraints

Methods in optimization:
- Graphical methods – require reliable software
- Analytical methods – classical mathematical theory, solved by calculus (differential equations)
- Numerical methods – for problems that are difficult to differentiate analytically
14
Mathematical
Mathematical Function
Mathematical function types in optimization:
In solving mathematical optimization problems there are three distinctions: 1. linear and non-linear, 2. unimodal and multimodal, 3. convex and concave.

Function types:
- Linear
- Non-linear:
  - Unimodal: convex (concave up) or concave (concave down)
  - Multimodal
15
Mathematical
Mathematical Function
- Linear and non-linear
  Linear function → ax + b (polynomial of degree 1)
  Quadratic function → x^2 (polynomial of degree 2, non-linear)
  Cubic function → x^3 (polynomial of degree 3, non-linear)
  Exponential function → a^x (non-linear)
  Hyperbolic function → a/x (non-linear)
Cases:
1. 2x - 3 → linear
2. x^2 + 3 → quadratic (non-linear)
3. 3/x → hyperbolic (non-linear)
16
Mathematical
Mathematical Function
- Linear and non-linear
[Plots of linear, quadratic, cubic, exponential, and hyperbolic functions]

17
Mathematical
Mathematical Function
Non-linear:
- Unimodal (exactly one extreme point) and multimodal (more than one extreme point)
18
Mathematical
Mathematical Function
Non-linear → Unimodal
Convex (U) for minimization; concave (∩) for maximization
Strictly convex (V) and strictly concave (Λ)

Identifying convexity and concavity without a graph:
Convex → f''(x) >= 0; strictly convex → f''(x) > 0; concave → f''(x) <= 0; strictly concave → f''(x) < 0
19
Mathematical
Mathematical Function
Non-linear → Unimodal
Example:
1. f(x) = 3x^2 → f'(x) = 6x → f''(x) = 6 > 0 → strictly convex
20
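A supplementary illustration (not from the original slides): the same second-derivative test scripted in MATLAB, as a minimal sketch assuming the Symbolic Math Toolbox is available.
%matlab code (illustrative sketch, assumes the Symbolic Math Toolbox)
syms x
f = 3*x^2;
d2 = diff(f, x, 2);        % second derivative: 6
if isAlways(d2 > 0)        % holds for all x, hence strictly convex
    disp('strictly convex')
end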
Mathematical
Mathematical Function
Non-linear → Unimodal
Example (in the range -2 <= x <= 2):
1. f(x) = 2x^3 + x^2 - 10x → f'(x) = 6x^2 + 2x - 10 → f''(x) = 12x + 2. Substituting the lower and upper bounds of the range gives f''(-2) = -22 < 0 and f''(2) = 26 > 0, so f'' changes sign on the interval → neither convex nor concave (indefinite).

21
03 Modelling (Differential Equations)

22
Mathematical
Mathematical Modelling
Gradient of a straight line:

m = (y2 - y1) / (x2 - x1)

dy/dx = Δy/Δx
23
Mathematical
Mathematical Modelling
Differential Equation (derivative as a limit, for a curve):

dy/dx = lim(h→0) Δy/Δx = lim(h→0) [f(x + h) - f(x)] / h

For f(x) = x^n, what is f'(x)?

f'(x) = lim(h→0) [(x + h)^n - x^n] / h
      = lim(h→0) [x^n + n·x^(n-1)·h + … + h^n - x^n] / h   (binomial expansion)
      = n·x^(n-1)
24
Ordinary Differential Equations
• Where do ODEs arise?
• Notation and Definitions
• Solution methods for 1st order ODEs

ODE = a differential equation that contains only one independent variable.

PDE (partial differential equation) = a differential equation that contains partial derivatives.
Where do ODEs arise?
- All branches of engineering
- Economics
- Biology and medicine
- Chemistry, physics, etc.

Anytime you wish to find out how something changes with time (and sometimes space).
Mathematical
Mathematical Modelling
Example: f(x) = 3x^2 + 5, f'(x) = …?

f'(x) = lim(h→0) [f(x + h) - f(x)] / h
      = lim(h→0) [3(x + h)^2 + 5 - (3x^2 + 5)] / h
      = lim(h→0) [3(x^2 + 2xh + h^2) + 5 - 3x^2 - 5] / h
      = lim(h→0) [3x^2 + 6xh + 3h^2 + 5 - 3x^2 - 5] / h
      = lim(h→0) [6xh + 3h^2] / h
      = lim(h→0) h(6x + 3h) / h
      = lim(h→0) (6x + 3h) = 6x

In general (by the binomial expansion): if f(x) = a·x^n, then f'(x) = a·n·x^(n-1).
27
Mathematical
Mathematical Modelling
Basic differentiation rules:
f(x) = k            → f'(x) = 0
f(x) = x            → f'(x) = 1
f(x) = a·x^n        → f'(x) = a·n·x^(n-1)
f(x) = k·u(x)       → f'(x) = k·u'(x), k = constant
f(x) = u(x) ± v(x)  → f'(x) = u'(x) ± v'(x)
f(x) = u(x)·v(x)    → f'(x) = u'(x)·v(x) + u(x)·v'(x)
f(x) = u(x)/v(x)    → f'(x) = [u'(x)·v(x) - u(x)·v'(x)] / [v(x)]^2
f(x) = [u(x)]^n     → f'(x) = n·[u(x)]^(n-1)·u'(x)
28
Mathematical
Mathematical Modelling
Practice (differentiate each function):
1. f(x) = x^6 - 5x^4 + 2x^3 - 7x + 11
2. f(x) = 5x^6 + 2x^4 - 3x^2 + 2x
3. f(x) = (4x^2 - 5x + 1)(x + 2)
4. f(x) = (3x^2 - 5x)(x^3 + 2)
5. f(x) = (x^2 - 2x + 5) / (x^2 + 2x - 3)
6. f(x) = (x^2 + 3x) / (2x^2 - 3)
7. f(x) = (3x^2 - 2)^6
8. f(x) = (x^2 - 4) / x
9. f(x) = ((6x + 2)^2)^(1/3)
10. f(x) = e^(12x - 7)
29
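A supplementary illustration (not from the original slides): the practice answers can be checked by symbolic differentiation, a minimal sketch assuming the Symbolic Math Toolbox.
%matlab code (illustrative sketch, assumes the Symbolic Math Toolbox)
syms x
f1 = x^6 - 5*x^4 + 2*x^3 - 7*x + 11;
disp(diff(f1, x))             % 6*x^5 - 20*x^3 + 6*x^2 - 7
f7 = (3*x^2 - 2)^6;
disp(simplify(diff(f7, x)))   % 36*x*(3*x^2 - 2)^5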
Order
- The order of a differential equation is the order of the highest derivative used.

If y is differentiated with respect to x:
dy/dx = 5x + 3                    1st order
d^2y/dx^2 + dy/dx = 0             2nd order
dx/dy = x · d^3x/dy^3             3rd order

Other references write the first, second, third (and higher) derivatives as y', y'', y''', y^(n), or f'(x), f''(x), and so on.
Degree
- The degree is the power to which the highest-order derivative is raised.

If y is differentiated with respect to x:
dy/dx = 5x + 3                    1st order
d^2y/dx^2 + dy/dx = 0             2nd order, 1st degree
dx/dy = x · d^3x/dy^3             3rd order, 1st degree
Linearity
• The important issue is how the unknown y appears in the equation. A
linear equation involves the dependent variable (y) and its derivatives
by themselves. There must be no "unusual" nonlinear functions of y or
its derivatives.
• A linear equation must have constant coefficients, or coefficients
which depend on the independent variable (t). If y or its derivatives
appear in the coefficient the equation is non-linear.
Linearity - Examples
dy/dt + y = 0          is linear
dx/dt + x^2 = 0        is non-linear
dy/dt + t^2 = 0        is linear
y·(dy/dt) + t^2 = 0    is non-linear
Linearity – Summary

Linear            Non-linear
2y                y^2 or sin(y)
dy/dt             y·(dy/dt)
(2 + 3 sin t)·y   (2 - 3y^2)·y
t·(dy/dt)         (dy/dt)^2
04 Mathematical Optimization (Classical Optimization Theory) to solve Non-Linear Programming (NLP)

35
Mathematical
Optimization Classic Theory
Non-Linear Programming Problem (NLPP) – solution approaches: graphical, analytical, numerical.

NLPP without constraints (basic problem):
- 1 variable
- Multiple variables

NLPP with constraints:
- With linear equality (Lagrangian multiplier): 1 constraint & 2 variables; 1 constraint & 3 variables; 2 constraints & 2 variables; 2 constraints & 3 variables
- With linear inequality (Kuhn-Tucker): 1 constraint & 2 variables; 2 constraints & 2 variables
36
Note
Optimization is not root finding.
[Figure: a curve with its roots, maximum, minimum, and inflection point marked]
37
Mathematical
Optimization Classic Theory
NLPP without constraint

1 variable

For a function f(x), an optimum x* must satisfy two conditions:

a. Necessary condition
   f'(x*) = 0 (x* is a stationary point)
b. Sufficient condition
   f''(x*) > 0 → x* minimizes the initial function
   f''(x*) < 0 → x* maximizes the initial function
In the special case f''(x*) = 0, keep differentiating until some higher derivative f^(n)(x*) is non-zero. If n is even: f^(n)(x*) > 0 → x* minimizes f(x); f^(n)(x*) < 0 → x* maximizes f(x). If n is odd, x* is neither a maximum nor a minimum; it is a saddle point (inflection point).
38
Mathematical
Optimization Classic Theory
NLPP without constraint

1 variable

Example:
f(x) = 12x^5 - 45x^4 + 40x^3 + 5
1. Max or min of the function?
a. Take the first derivative (necessary condition):
f'(x) = 60x^4 - 180x^3 + 120x^2
f'(x) = 60x^2(x^2 - 3x + 2)
f'(x) = 60x^2(x - 1)(x - 2)
Stationary points → x = 0, x = 1, x = 2
39
Mathematical Optimization Classic Theory
NLPP without constraint

1 variable
Example:
f(x) = 12x^5 - 45x^4 + 40x^3 + 5
1. Max or min of the function?
a. Sufficient condition
f'(x) = 60x^4 - 180x^3 + 120x^2
f''(x) = 240x^3 - 540x^2 + 240x

f''(1) = -60 < 0, so x = 1 maximizes the initial function f(x); f(1) = 12
f''(2) = 240 > 0, so x = 2 minimizes the initial function f(x); f(2) = -11
f''(0) = 0, so differentiate again:
f'''(x) = 720x^2 - 1080x + 240 = 60(12x^2 - 18x + 4)
f'''(0) = 240 ≠ 0 with n = 3 odd, so x = 0 is a saddle point (inflection point).
40
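A supplementary illustration (not from the original slides): a minimal sketch, assuming the Symbolic Math Toolbox, that finds and classifies the stationary points.
%matlab code (illustrative sketch, assumes the Symbolic Math Toolbox)
syms x
f  = 12*x^5 - 45*x^4 + 40*x^3 + 5;
xs = solve(diff(f, x) == 0, x)    % stationary points: 0, 1, 2
subs(diff(f, x, 2), x, xs)        % f'' values: 0 (inconclusive), -60 (max), 240 (min)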
Mathematical Optimization Classic Theory
NLPP without constraint

1 variable

Example:
f(x) = 12x^5 - 45x^4 + 40x^3 + 5
[Plot of f(x) showing the inflection point at x = 0, the maximum at x = 1, and the minimum at x = 2]
Source: https://www.wolframalpha.com/
41
Mathematical Optimization Classic Theory
NLPP without constraint

1 variable

Example:
f(x) = 12x^5 - 45x^4 + 40x^3 + 5
%matlab code
clear; clc
% plot f and mark the stationary points x = 0, 1, 2 and the maximum value f(1) = 12
fplot(@(x) 12*x.^5 - 45*x.^4 + 40*x.^3 + 5, [-0.5 2.4])
xline(0); xline(1); xline(2);
yline(0); yline(12);
ylabel('f(x)'); xlabel('x');

42
Mathematical Optimization Classic Theory
NLPP without constraint

Multivariable

For a multivariable function f(x1, x2, …, xn), an optimum x* must satisfy two conditions:
a. Necessary condition
   ∇f(x*) = 0 (x* is a stationary point)
b. Sufficient condition
   The Hessian of f at x* is positive definite → x* minimizes the initial function
   The Hessian of f at x* is negative definite → x* maximizes the initial function
43
Mathematical Optimization Classic Theory
NLPP without constraint

Multivariable

Example:
f(x1, x2) = 3x1^2 + 2x2^2 + 4x1x2 - 6x1 - 8x2 + 6

1. Max or min of the function?
a. Take the first-order partial derivatives (necessary condition):

∇f(x) = [∂f/∂x1; ∂f/∂x2] = [6x1 + 4x2 - 6; 4x1 + 4x2 - 8] = 0
44
Mathematical Optimization Classic Theory
NLPP without constraint

Multivariable

Substitution or elimination:

6x1 + 4x2 - 6 = 0
4x1 + 4x2 - 8 = 0
Subtracting: 2x1 + 2 = 0
x1 = -1; x2 = 3. So x* = [-1; +3]
45
Mathematical Optimization Classic Theory
NLPP without constraint

Multivariable

Hessian matrix → contains the second-order derivatives; used to determine the type of the optimal point (minimum or maximum):

∇f(x) = [∂f/∂x1; ∂f/∂x2]  →  H = [∂²f/∂x1²  ∂²f/∂x1∂x2; ∂²f/∂x2∂x1  ∂²f/∂x2²] = [h11 h12; h21 h22]

H positive definite → x* is a minimum point:
|h11| > 0; det [h11 h12; h21 h22] > 0; det [h11 h12 h13; h21 h22 h23; h31 h32 h33] > 0

H negative definite → x* is a maximum point:
|h11| < 0; det [h11 h12; h21 h22] > 0; det [h11 h12 h13; h21 h22 h23; h31 h32 h33] < 0
46
Mathematical Optimization Classic Theory
NLPP without constraint

Multivariable

f(x1, x2) = 3x1^2 + 2x2^2 + 4x1x2 - 6x1 - 8x2 + 6
∇f(x) = [6x1 + 4x2 - 6; 4x1 + 4x2 - 8]

H = [6 4; 4 4]; calculate the determinants:
det H1 = |h11| = 6 > 0
det H2 = det [h11 h12; h21 h22] = (6 × 4) - (4 × 4) = 8 > 0
The Hessian matrix is positive definite, thus x* = [-1; 3] is a minimum point.
Then substitute [-1; 3] into the initial function to get f(-1, 3) = -3.
47
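A supplementary illustration (not from the original slides): a minimal sketch, assuming the Symbolic Math Toolbox, reproducing the stationary point and Hessian test.
%matlab code (illustrative sketch, assumes the Symbolic Math Toolbox)
syms x1 x2
f = 3*x1^2 + 2*x2^2 + 4*x1*x2 - 6*x1 - 8*x2 + 6;
S = solve(gradient(f, [x1 x2]) == 0, [x1 x2]);   % x1 = -1, x2 = 3
H = double(hessian(f, [x1 x2]))                  % [6 4; 4 4]
eig(H)                     % both eigenvalues > 0 -> positive definite -> minimum
subs(f, [x1 x2], [S.x1 S.x2])                    % f(-1, 3) = -3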
Mathematical Optimization Classic Theory
NLPP without constraint
Multivariable
f(x1, x2) = 3x1^2 + 2x2^2 + 4x1x2 - 6x1 - 8x2 + 6, written as
f(x, y) = 3x^2 + 2y^2 + 4xy - 6x - 8y + 6
[Contour plot: contour(X,Y,Z)]
48
Mathematical Optimization Classic Theory
NLPP without constraint

Multivariable

f(x, y) = 3x^2 + 2y^2 + 4xy - 6x - 8y + 6
[Surface plot of f(x, y)]
49
Mathematical Optimization Classic Theory
NLPP without constraint

Multivariable

f(x, y) = 3x^2 + 2y^2 + 4xy - 6x - 8y + 6
%matlab code
%multivariable1
xl = -10; xu = 10;
yl = -40; yu = 40;
x = linspace(xl, xu);
y = linspace(yl, yu);
[X, Y] = meshgrid(x, y);
Z = 3*X.^2 + 2*Y.^2 + 4*X.*Y - 6*X - 8*Y + 6;
surf(X, Y, Z);   % to add contours, change surf to surfc
xlabel('x'); ylabel('y'); zlabel('z');
set(gca, 'FontSize', 16, 'LineWidth', 2);

50
Mathematical Optimization Classic Theory
NLPP without constraint

Multivariable
From the previous example, the procedure to solve an unconstrained multivariable NLPP is:
1. Define the function f(x1, x2, …, xn)
2. Find the stationary point(s): ∂f/∂x1 = 0, ∂f/∂x2 = 0, …
3. Form the Hessian matrix
   H = [∂²f/∂x1²  ∂²f/∂x1∂x2; ∂²f/∂x2∂x1  ∂²f/∂x2²] = [h11 h12; h21 h22]
4. Find the leading principal minors of H:
   Δ1 = |h11|; Δ2 = det [h11 h12; h21 h22]; Δ3 = det [h11 h12 h13; h21 h22 h23; h31 h32 h33]
5. If Δ1, Δ2, and Δ3 are all positive, the stationary point is a minimum.
   If Δ1 is negative, Δ2 is positive, and Δ3 is negative, the stationary point is a maximum.
51
Mathematical Optimization Classic Theory
NLPP without constraint

Multivariable
Practice:
1. Find the relative maximum or minimum of the function
   f(x1, x2, x3) = x1^2 + x2^2 + x3^2 - 4x1 - 8x2 - 12x3 + 100
2. Optimize:
   f(x1, x2, x3) = x1^2 + x2^2 + x3^2 - 6x1 - 8x2 - 10x3
Group homework:
Find and solve 5 practical problems (case studies) on unconstrained NLPP: one single-variable problem and four multivariable problems.
Prepare the answers in PowerPoint. Note: use the MathType software for the mathematical formulations.
52
Mathematical
Optimization Classic Theory
NLPP without constraint

Multivariable

[Worked screenshots: the function, its partial derivatives set to zero, and the resulting global minimum]

53-54
Optimization Classic Theory
NLPP with constraint
With Linear Equality
(Lagrangian Multiplier)
Lagrange

Sometimes we need to optimize (maximize or minimize) a function subject to another function, a constraint that must be satisfied first.

To solve such a problem we can apply the Lagrange method: generate a new function that combines the objective function with the constraint function weighted by the Lagrange factor λ:

F(x, y, λ) = f(x, y) + λ·g(x, y)
55
Optimization Classic Theory
NLPP with constraint
With Linear Equality
(Lagrangian Multiplier)
Lagrange

To find the extreme value of a function we can apply the ordinary method:

1. Set the first derivative of the function equal to zero, f'(x) = 0.
2. Check the second derivative, f''(x):
   - If f''(x) < 0, the function attains a maximum value.
   - If f''(x) > 0, the function attains a minimum value.
56
Optimization Classic Theory
NLPP with constraint
With Linear Equality
(Lagrangian Multiplier)
Lagrange

Case study
A manufacturer tries to minimize cost in a complicated economic situation. Currently, the manufacturer's total cost (TC) function is:
TC = 5x^2 - 2x + 6y^2 + 4.5y - xy

subject to the constraint:
2x + 3y = 15

a. How many products (x and y) must be produced to reach the minimum cost?
b. What is the minimum total cost (TC)?
57
Optimization Classic Theory

Solution a (combine the functions)

Lagrange

TC = 5x^2 - 2x + 6y^2 + 4.5y - xy + λ(2x + 3y - 15)
   = 5x^2 - 2x + 6y^2 + 4.5y - xy + 2λx + 3λy - 15λ

∂TC/∂x = 10x - 2 - y + 2λ = 0        (1)
∂TC/∂y = 12y + 4.5 - x + 3λ = 0      (2)
∂TC/∂λ = 2x + 3y - 15 = 0            (3)
58
Optimization Classic Theory

Solution b (eliminate the lambda)

Lagrange

10x - y - 2.0 + 2λ = 0   | ×3 |  30x - 3y - 6.0 + 6λ = 0
-x + 12y + 4.5 + 3λ = 0  | ×2 |  -2x + 24y + 9.0 + 6λ = 0   (-)
                                  32x - 27y - 15 = 0         (4)
59
Optimization Classic Theory

Solution c (eliminate between (3) and (4))

Lagrange

2x + 3y - 15 = 0    | ×16 |  32x + 48y - 240 = 0
32x - 27y - 15 = 0  | ×1  |  32x - 27y - 15 = 0   (-)
                              75y - 225 = 0
75y = 225 → y = 225/75 = 3
Substitute y = 3 into (3): 2x + 3(3) - 15 = 0 → 2x + 9 - 15 = 0 → x = 6/2 = 3
60
Optimization Classic Theory

Calculate the minimum total cost (TC)

Lagrange

x = 3, y = 3
TC = 5x^2 - 2x + 6y^2 + 4.5y - xy
TC = 5(3^2) - 2(3) + 6(3^2) + 4.5(3) - (3 × 3)
TC = 97.5
61
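A supplementary illustration (not from the original slides): a minimal sketch, assuming the Symbolic Math Toolbox, solving the Lagrange conditions of this case study.
%matlab code (illustrative sketch, assumes the Symbolic Math Toolbox)
syms x y lam
TC = 5*x^2 - 2*x + 6*y^2 + 4.5*y - x*y;
L  = TC + lam*(2*x + 3*y - 15);          % the Lagrangian
S  = solve(gradient(L, [x y lam]) == 0, [x y lam]);
[S.x, S.y]                               % x = 3, y = 3
subs(TC, [x y], [S.x S.y])               % minimum TC = 97.5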
Mathematical
Optimization Classic Theory
NLPP with constraint
With Linear Equality
(Lagrangian Multiplier)

Practice: f(x) = 3x1^2 + x2^2 + 2x1x2 + 6x1 + 2x2
s.t.
2x1 - x2 = 4

Group homework:
Find and solve 5 practical problems (case studies) on NLPP with a linear equality constraint (Lagrangian multiplier).
Prepare the answers in PowerPoint. Note: use the MathType software for the mathematical formulations.
62
05 Numerical Methods

63
Mathematical
Optimization Classic Theory
(Recap: the Non-Linear Programming Problem (NLPP) solution map shown at the start of Section 04: graphical, analytical, and numerical approaches, with and without constraints.)

64
Mathematical
Optimization Classic Theory
Newton – Raphson

The Newton-Raphson method is based on the principle that if the initial guess of the root of f(x) = 0 is at xi, and one draws the tangent to the curve at f(xi), then the point xi+1 where the tangent crosses the x-axis is an improved estimate of the root (Figure 1).

f'(xi) = tan θ = [f(xi) - 0] / (xi - xi+1),

which gives

xi+1 = xi - f(xi) / f'(xi)

[Figure 1: geometric illustration of the Newton-Raphson method]

65
Mathematical Optimization Classic Theory
Newton – Raphson

The steps of the Newton-Raphson method to find the root of an equation f(x) = 0 are:
1. Evaluate f'(x) symbolically.
2. Use an initial guess of the root, xi, to estimate the new value of the root, xi+1, as
   xi+1 = xi - f(xi) / f'(xi)
3. Find the absolute relative approximation error |εa| as
   |εa| = |(xi+1 - xi) / xi+1|
4. Compare the absolute relative approximation error with the pre-specified relative error tolerance εs. If |εa| > εs, go back to Step 2; otherwise stop the algorithm. Also check whether the number of iterations has exceeded the maximum allowed; if so, terminate the algorithm and notify the user.
66
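A supplementary illustration (not from the original slides): a minimal plain-MATLAB sketch of this loop for the example that follows, f(x) = x^3 + 2x - 2.
%matlab code (illustrative sketch)
f  = @(x) x.^3 + 2*x - 2;        % function whose root is sought
df = @(x) 3*x.^2 + 2;            % its first derivative
x  = 1;                          % initial guess
es = 1e-8;                       % pre-specified relative error tolerance
for i = 1:50                     % maximum number of iterations allowed
    xnew = x - f(x)/df(x);       % Newton-Raphson update
    ea   = abs((xnew - x)/xnew); % absolute relative approximation error
    x    = xnew;
    if ea <= es, break, end
end
fprintf('root = %.6f after %d iterations\n', x, i)   % about 0.770917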
Mathematical
Optimization Classic Theory
Newton – Raphson: xi+1 = xi - f(xi) / f'(xi)
Example:
Given that x^3 + 2x - 2 = 0 has a root between 0 and 1, find the root to two decimal places using the Newton-Raphson method.
Let:
f(x) = x^3 + 2x - 2
f'(x) = 3x^2 + 2
Let i = 0 → x0 = 1, thus f(x0) = 1, f'(x0) = 5
x1 = 1 - (1/5) = 0.8, thus f(x1) = ?, f'(x1) = ?

Iteration      x         f(x)      f'(x)     xi+1
x0 (initial)   1         1         5         0.8
1              0.8       0.112     3.92      0.771429
2              0.771429  0.001936  3.785306  0.770917
3              0.770917  6.05E-07  3.78294   0.770917
4              0.770917  5.91E-14  3.782939  0.770917
5              0.770917  0         3.782939  0.770917

NLPP without constraint
1 variable
67
Mathematical
Optimization Classic Theory
Newton – Raphson
With initial guess x0 = 0.5:

Iteration      x         f(x)      f'(x)     xi+1
x0 (initial)   0.5       -0.875    2.75      0.818182
1              0.818182  0.184072  4.008264  0.772259
2              0.772259  0.00508   3.78915   0.770918
3              0.770918  4.16E-06  3.782944  0.770917
4              0.770917  2.8E-12   3.782939  0.770917
5              0.770917  0         3.782939  0.770917

With initial guess x0 = 0.1:

Iteration      x         f(x)      f'(x)     xi+1
x0 (initial)   0.1       -1.799    2.03      0.986207
1              0.986207  0.931603  4.917812  0.796773
2              0.796773  0.099373  3.904539  0.771322
3              0.771322  0.001532  3.784812  0.770917
4              0.770917  3.79E-07  3.78294   0.770917
5              0.770917  2.35E-14  3.782939  0.770917

%matlab code
fplot(@(x) x.^3 + 2*x - 2, [-1.0 1.0]);
xline(0); xline(0.770917);
yline(0);
ylabel('f(x)'); xlabel('x');

NLPP without constraint
1 variable
68
Mathematical Optimization Classic Theory
Newton – Raphson

The steps of the Newton-Raphson method to find the optimal solution of an equation f'(x) = 0 are:
1. Evaluate f'(x) and f''(x) symbolically.
2. Use an initial guess of the optimum, xi, to estimate the new value, xi+1, as
   xi+1 = xi - f'(xi) / f''(xi)
3. Find the absolute relative approximation error |εa| as
   |εa| = |(xi+1 - xi) / xi+1|
4. Compare the absolute relative approximation error with the pre-specified relative error tolerance εs. If |εa| > εs, go back to Step 2; otherwise stop the algorithm. Also check whether the number of iterations has exceeded the maximum allowed; if so, terminate the algorithm and notify the user.
69
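A supplementary illustration (not from the original slides): the same loop adapted to optimization, a minimal plain-MATLAB sketch for the example that follows, f(x) = 2 sin(x) - x^2/10.
%matlab code (illustrative sketch)
df  = @(x) 2*cos(x) - x/5;        % f'(x)
d2f = @(x) -2*sin(x) - 0.2;       % f''(x)
x = 2.5;                          % initial value
for i = 1:20
    xnew = x - df(x)/d2f(x);      % Newton-Raphson update applied to f'
    if abs((xnew - x)/xnew) < 1e-10, x = xnew; break, end
    x = xnew;
end
f = @(x) 2*sin(x) - x.^2/10;
fprintf('x* = %.6f, f(x*) = %.6f\n', x, f(x))   % x* ~ 1.427552 (a maximum)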
Mathematical
Optimization Classic Theory
Newton – Raphson: xi+1 = xi - f'(xi) / f''(xi)
Example:
Given f(x) = 2 sin(x) - x^2/10, find the optimal solution using the Newton-Raphson method with an initial value of 2.5.
Let:
f(x) = 2 sin(x) - x^2/10
f'(x) = 2 cos(x) - 2x/10
f''(x) = -2 sin(x) - 2/10
Let i = 0 → x0 = 2.5, thus f(x0) = 0.57194, f'(x0) = -2.1023, f''(x0) = -1.3969
x1 = 2.5 - (-2.1023 / -1.3969) = 0.99508, thus f(x1) = ?, f'(x1) = ?
NLPP without constraint
1 variable
70
Mathematical Optimization Classic Theory
Newton – Raphson
%matlab code
fplot(@(x) 2*sin(x) - x.^2/10, [-6.0 6.0]);

With initial guess x0 = 2.5:

Iteration      x         f(x)      f'(x)     f''(x)    xi+1
x0 (initial)   2.5       0.571944  -2.10229  -1.39694  0.995082
1              0.995082  1.578588  0.889853  -1.87761  1.469011
2              1.469011  1.773849  -0.09058  -2.18965  1.427642
3              1.427642  1.775726  -0.0002   -2.17954  1.427552
4              1.427552  1.775726  -1.2E-09  -2.17952  1.427552
5              1.427552  1.775726  0         -2.17952  1.427552

With initial guess x0 = 2:

Iteration      x         f(x)      f'(x)     f''(x)    xi+1
x0 (initial)   2         1.418595  -1.23229  -2.01859  1.389529
1              1.389529  1.774153  0.082647  -2.16723  1.427664
2              1.427664  1.775726  -0.00024  -2.17955  1.427552
3              1.427552  1.775726  -1.8E-09  -2.17952  1.427552
4              1.427552  1.775726  0         -2.17952  1.427552
5              1.427552  1.775726  0         -2.17952  1.427552

NLPP without constraint
1 variable
71
Mathematical Optimization Classic Theory
Newton – Raphson
%matlab code
fplot(@(x) 2*sin(x) - x.^2/10, [-6.0 6.0]);

With initial guess x0 = 5 (converging to a minimum, since f''(x*) > 0):

Iteration      x         f(x)      f'(x)     f''(x)    xi+1
x0 (initial)   5         -4.41785  -0.43268  1.717849  5.251871
1              5.251871  -4.47416  -0.02299  1.51595   5.267037
2              5.267037  -4.47434  -0.00012  1.500172  5.267116
3              5.267116  -4.47434  -3.3E-09  1.500088  5.267116
4              5.267116  -4.47434  0         1.500088  5.267116
5              5.267116  -4.47434  0         1.500088  5.267116

NLPP without constraint
1 variable
72
Mathematical Optimization Classic Theory
Golden Section
An iterative process to find the optimal point of a function in a certain domain.
The steps of the Golden Section method to find the optimal solution of an equation f'(x) = 0 are:
1. Define the range (limits), a <= x <= b.
2. Calculate the function values f(a) and f(b).
3. Determine the interior points x1 and x2 using the golden ratio (GR = 0.618): d = GR(b - a), x1 = a + d, x2 = b - d.
4. Calculate the function values f(x1) and f(x2).
5. Update the limits by evaluating the function values. For minimization: if f(x1) < f(x2), eliminate all x < x2; x2 becomes the new a and x1 becomes the new x2 (b is unchanged). Otherwise, eliminate all x > x1; x1 becomes the new b and x2 becomes the new x1 (a is unchanged).
6. Repeat until convergence.

73
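A supplementary illustration (not from the original slides): a minimal plain-MATLAB sketch of this loop for the case study that follows; for brevity it recomputes both interior points each pass instead of reusing one, which is the classic efficiency trick.
%matlab code (illustrative sketch)
f  = @(x) x.^2 - 6*x + 15;       % function from the case study
GR = 0.618;                      % golden ratio factor used in the slides
a = 0; b = 10;
while (b - a) > 1e-4
    d  = GR*(b - a);
    x1 = a + d;  x2 = b - d;
    if f(x1) < f(x2)
        a = x2;                  % minimum lies in [x2, b]
    else
        b = x1;                  % minimum lies in [a, x1]
    end
end
fprintf('x* = %.4f\n', (a + b)/2)   % about 3.0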
Mathematical Optimization Classic Theory
Golden Section
Case study: find the minimum of the following function between x = 0 and x = 10,
f(x) = x^2 - 6x + 15

Iteration 1:
d = GR(b - a) = 0.618 (10 - 0) = 6.18
x1 = a + d = 0 + 6.18 ≈ 6.2
x2 = b - d = 10 - 6.18 = 3.82 ≈ 3.8
Thus f(x1) > f(x2), so x1 becomes the new b.

74-75
Mathematical Optimization Classic Theory
Golden Section
Iteration 2:
d = GR(b - a) = 0.618 (6.18 - 0) = 3.82
x1 = a + d = 0 + 3.82 = 3.82
x2 = b - d = 6.18 - 3.82 = 2.36 ≈ 2.4
Thus f(x1) > f(x2), so x1 becomes the new b.

76
Mathematical Optimization Classic Theory
Golden Section
Iteration 3:
d = GR(b - a) = 0.618 (3.8 - 0) = 2.4
x1 = a + d = 0 + 2.4 = 2.4
x2 = b - d = 3.8 - 2.4 = 1.4
Thus f(x1) < f(x2), so x2 becomes the new a.

77
Mathematical Optimization Classic Theory
Golden Section
Iteration 4:
d = GR(b - a) = 0.618 (3.8 - 1.4) = 1.5
x1 = a + d = 1.4 + 1.5 = 2.9
x2 = b - d = 3.8 - 1.5 = 2.3
Thus f(x1) < f(x2), so x2 becomes the new a.

%matlab code
fplot(@(x) x.^2 - 6*x + 15, [0.0 10.0]);

78-79
Mathematical Optimization Classic Theory
Golden Section
Case study: find the minimum of f(x) = x^2 - 6x + 15 between x = 0 and x = 10.

Iteration  a         b         d         x1        x2        f(x1)     f(x2)
1          0         10        6.18034   6.18034   3.81966   16.11456  6.671843
2          0         6.18034   3.81966   3.81966   2.36068   6.671843  6.40873
3          0         3.81966   2.36068   2.36068   1.45898   6.40873   8.374742
4          1.45898   3.81966   1.45898   2.917961  2.36068   6.00673   6.40873
5          2.36068   3.81966   0.9017    3.262379  2.917961  6.068843  6.00673
6          2.36068   3.262379  0.557281  2.917961  2.705098  6.00673   6.086967
7          2.705098  3.262379  0.344419  3.049517  2.917961  6.002452  6.00673
8          2.917961  3.262379  0.212862  3.130823  3.049517  6.017115  6.002452
9          2.917961  3.130823  0.131556  3.049517  2.999267  6.002452  6.000001
10         2.917961  3.049517  0.081306  2.999267  2.968211  6.000001  6.001011
11         2.968211  3.049517  0.05025   3.018461  2.999267  6.000341  6.000001
12         2.968211  3.018461  0.031056  2.999267  2.987405  6.000001  6.000159
13         2.987405  3.018461  0.019194  3.006598  2.999267  6.000044  6.000001
14         2.987405  3.006598  0.011862  2.999267  2.994736  6.000001  6.000028
15         2.994736  3.006598  0.007331  3.002067  2.999267  6.000004  6.000001
16         2.994736  3.002067  0.004531  2.999267  2.997536  6.000001  6.000006
17         2.997536  3.002067  0.0028    3.000337  2.999267  6         6.000001
18         2.999267  3.002067  0.001731  3.000998  3.000337  6.000001  6
19         2.999267  3.000998  0.00107   3.000337  2.999928  6         6
20         2.999267  3.000337  0.000661  2.999928  2.999676  6         6

80
Mathematical
Optimization Classic Theory
Golden Section
Practice:
f(x) = 20x - 2x^2 + 10

81
Mathematical
Optimization Classic Theory
Steepest Ascent/Descent
This is an iterative method, also known as the Gradient Ascent/Descent method.
Steepest ascent → for maximizing a function; steepest descent → for minimizing a function.
It performs a series of transformation steps that turn a multivariable function into a single-variable function along the gradient search direction. This one-dimensional optimum search is then carried out iteratively until the expected level of convergence is reached.
It requires reliable software for the iteration process.

Optimum search:
As an illustration, for a function f(x, y) whose optimum (max/min) point is to be determined, with initial values x = x0 and y = y0, the new x and y values in the first iteration step are:

x = x0 + (∂f/∂x)|(x0, y0) · h    and    y = y0 + (∂f/∂y)|(x0, y0) · h

where ∂f/∂x and ∂f/∂y are the partial derivatives of the function f(x, y) with respect to x and y.

NLPP without constraint
multivariable
82

Mathematical
Optimization Classic Theory
Steepest Ascent/Descent
In this case, the gradient vector of the function can be written as:

∇f = (∂f/∂x) i + (∂f/∂y) j

Thus the two-variable function f(x, y) is transformed into a one-variable function g(h) in h. The values of x and y obtained then become x0 and y0 for the next iteration. Continue iterating until convergence.

NLPP without constraint
multivariable
83
Mathematical
Optimization Classic Theory
Steepest Ascent
Example:
Use steepest ascent to determine the optimal (maximum) point of the function below, with initial values x0 = -1 and y0 = 1:
f(x, y) = 2xy + 2x - x^2 - 2y^2
Steps:
1. Determine the partial derivatives of the function:
   ∂f/∂x = 2y + 2 - 2x    and    ∂f/∂y = 2x - 4y

NLPP without constraint
multivariable
84
Mathematical
Optimization Classic Theory
Steepest Ascent
Steps (continued): f(x, y) = 2xy + 2x - x^2 - 2y^2

2. For the first iteration with initial values x0 = -1 and y0 = 1, substitute the values into the initial function: f(x, y) = 2(-1)(1) + 2(-1) - (-1)^2 - 2(1)^2 = -7
3. Evaluate the partial derivatives at x0 and y0:
   ∂f/∂x|(x0, y0) = 2(1) + 2 - 2(-1) = 6    and    ∂f/∂y|(x0, y0) = 2(-1) - 4(1) = -6
   Thus the gradient vector can be written as: ∇f = (∂f/∂x) i + (∂f/∂y) j = 6i + (-6)j
   x = x0 + (∂f/∂x)|(x0, y0) · h = -1 + 6h    and    y = y0 + (∂f/∂y)|(x0, y0) · h = 1 + (-6h)

NLPP without constraint
multivariable
85
Mathematical Optimization Classic Theory
Steepest Ascent
Steps (continued): f(x, y) = 2xy + 2x - x^2 - 2y^2

4. Substitute the values of x and y (as functions of h) into f(x, y):
   f(x, y) = 2xy + 2x - x^2 - 2y^2
   f(x, y) = 2(-1 + 6h)(1 - 6h) + 2(-1 + 6h) - (-1 + 6h)^2 - 2(1 - 6h)^2
   f(x, y) = -180h^2 + 72h - 7 = g(h)  → a function of one variable
5. The first-order derivative of g(h) is g'(h) = -360h + 72. To obtain the optimum value, set the first-order derivative equal to zero, g'(h) = 0.
   Thus g'(h) = -360h + 72 = 0, so 360h = 72 and h = 0.2.
6. The new values of x and y are then:
   x = -1 + 6h = -1 + 6(0.2) = 0.2
   y = 1 - 6h = 1 - 6(0.2) = -0.2
7. Repeat the procedure until convergence; see the sketch below.
NLPP without constraint
multivariable
86
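A supplementary illustration (not from the original slides): a minimal plain-MATLAB sketch of the steepest-ascent loop; fminbnd (base MATLAB) stands in for solving g'(h) = 0 analytically at each step.
%matlab code (illustrative sketch)
f   = @(x,y) 2*x.*y + 2*x - x.^2 - 2*y.^2;
dfx = @(x,y) 2*y + 2 - 2*x;          % partial derivative w.r.t. x
dfy = @(x,y) 2*x - 4*y;              % partial derivative w.r.t. y
x = -1; y = 1;                       % initial values
for i = 1:20
    gx = dfx(x,y);  gy = dfy(x,y);
    if hypot(gx, gy) < 1e-8, break, end
    g = @(h) -f(x + h*gx, y + h*gy); % minus sign: maximize f along the ray
    h = fminbnd(g, 0, 2);            % one-dimensional search for the step h
    x = x + h*gx;  y = y + h*gy;
end
fprintf('x = %.4f, y = %.4f, f = %.4f\n', x, y, f(x,y))   % -> (2, 1), f = 2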
Mathematical Optimization Classic Theory
Steepest Ascent
Using an Excel calculation for the iterations:

i   xi     yi     df/dxi   df/dyi   h     xi+1   yi+1   f(xi, yi)
0   -1.00  1.00   6.0000   -6.0000  0.20  0.20   -0.20  -7.0000
1   0.20   -0.20  1.2000   1.2000   1.00  1.40   1.00   0.2000
2   1.40   1.00   1.2000   -1.2000  0.20  1.64   0.76   1.6400
3   1.64   0.76   0.2400   0.2400   1.00  1.88   1.00   1.9280
4   1.88   1.00   0.2400   -0.2400  0.20  1.93   0.95   1.9856
5   1.93   0.95   0.0480   0.0480   1.00  1.98   1.00   1.9971
6   1.98   1.00   0.0480   -0.0480  0.20  1.99   0.99   1.9994
7   1.99   0.99   0.0096   0.0096   1.00  2.00   1.00   1.9999
8   2.00   1.00   0.0096   -0.0096  0.20  2.00   1.00   2.0000
9   2.00   1.00   0.0019   0.0019   1.00  2.00   1.00   2.0000

NLPP without constraint
multivariable
87
Mathematical Optimization Classic Theory
Steepest Ascent
This is an iterative method, also known as the Gradient Ascent/Descent method.
Example: f(x, y) = 2xy + 2x - x^2 - 2y^2
[Surface/contour plots, simulation in MATLAB: Z = (2.*X.*Y)+(2.*X)-(X.^2)-(2.*Y.^2);]

NLPP without constraint
multivariable
88-89
Mathematical Optimization Classic Theory
Steepest Descent

Practice:
f(x, y) = (1/3)x^2 + (1/3)y^2 + (1/5)xy
with initial values x0 = -4 and y0 = 4

NLPP without constraint
multivariable
90
Mathematical
Optimization Classic Theory
Random Search
Based on the use of random numbers in searching for the optimal (max/min) point; see the sketch below.
For example:
f(x, y) = y - x - 2x^2 - 2xy - y^2
Domain range:
x → -2 to 2
y → 1 to 3
Random number r, 0 <= r <= 1:
x = xa + (xb - xa)r → -2 + 4r
y = ya + (yb - ya)r → 1 + 2r
(subscript a = lower bound; subscript b = upper bound)

NLPP without constraint
multivariable
91
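A supplementary illustration (not from the original slides): a minimal plain-MATLAB random-search sketch over this domain; the rng seed is an arbitrary choice for reproducibility.
%matlab code (illustrative sketch)
f = @(x,y) y - x - 2*x.^2 - 2*x.*y - y.^2;
rng(0)                          % arbitrary seed, for reproducibility
best = -inf;
for i = 1:10000
    x = -2 + 4*rand;            % x = xa + (xb - xa)*r
    y =  1 + 2*rand;            % y = ya + (yb - ya)*r
    if f(x,y) > best
        best = f(x,y);  xbest = x;  ybest = y;
    end
end
fprintf('best f = %.4f at (%.3f, %.3f)\n', best, xbest, ybest)  % ~1.25 at (-1, 1.5)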
Mathematical Optimization Classic Theory
Random Search
Evaluating the function on a grid of r values:

rx    x      ry    y     f(x)
0.00  -2.00  0.00  1.00  -2.00
0.00  -2.00  0.25  1.50  -0.75
0.00  -2.00  0.50  2.00  0.00
0.00  -2.00  0.75  2.50  0.25
0.00  -2.00  1.00  3.00  0.00
0.25  -1.00  0.00  1.00  1.00
0.25  -1.00  0.25  1.50  1.25
0.25  -1.00  0.50  2.00  1.00
0.25  -1.00  0.75  2.50  0.25
0.25  -1.00  1.00  3.00  -1.00
0.50  0.00   0.00  1.00  0.00
0.50  0.00   0.25  1.50  -0.75
0.50  0.00   0.50  2.00  -2.00
0.50  0.00   0.75  2.50  -3.75
0.50  0.00   1.00  3.00  -6.00
0.75  1.00   0.00  1.00  -5.00
0.75  1.00   0.25  1.50  -6.75
0.75  1.00   0.50  2.00  -9.00
0.75  1.00   0.75  2.50  -11.75
0.75  1.00   1.00  3.00  -15.00
1.00  2.00   0.00  1.00  -14.00
1.00  2.00   0.25  1.50  -16.75
1.00  2.00   0.50  2.00  -20.00
1.00  2.00   0.75  2.50  -23.75
1.00  2.00   1.00  3.00  -28.00

NLPP without constraint
multivariable
92
Mathematical
Optimization Classic Theory
Random Search
[Surface plot: Z = Y-X-(2.*X.^2)-(2.*X.*Y)-Y.^2;]

NLPP without constraint
multivariable
93
Mathematical
Optimization Classic Theory

Group homework:
Find and solve 5 practical problems (case studies) on NLPP with the numerical methods above (5 problems for each numerical approach).
Prepare the answers in PowerPoint. Note: use the MathType software for the mathematical formulations.
94
Mathematical
Optimization Theory
• …
• Bayesian
• Quadratic Programming
• Sequential Quadratic Programming (SQP)
• Etc.
Modern optimization methods:
• Genetic Algorithm
• Simulated Annealing
• Particle Swarm Optimization
• Ant Colony Optimization
• Optimization of Fuzzy Systems
• Neural Network Based Optimization
• Bee Colony Optimization
95
06 Research Example

96
Publications
Adopting SQP (Sequential Quadratic Programming) to solve the non-linear mathematical optimization model.

Wu, Chien-Wei, Darmawan, A., & Liu, Shih-Wen*. (2023). Stage-independent multiple sampling plan by variables inspection for lot determination based on the process capability index Cpk. International Journal of Production Research, 61(10), 3171-3183. https://doi.org/10.1080/00207543.2022.2078745 (SCI & Scopus indexed)
Link pdf: https://drive.google.com/file/d/1-cLO66_Ld6DQdFXu5tfi1DlPDSAoX7jy/view?usp=sharing
Wu, Chien-Wei*, & Darmawan, A. (2023). A modified sampling scheme for lot sentencing based on the third-generation capability index. Annals of Operations Research. https://doi.org/10.1007/s10479-023-05328-z (SCI & Scopus indexed)
Link pdf: https://drive.google.com/file/d/1t3wytmZoESKNRJRjEOf2wTK1byit-GhZ/view?usp=sharing
Wu, Chien-Wei*, Darmawan, A., & Liu, Shih-Wen. (2024). Developing a stage-independent multiple sampling plan (SIMSP) loss-based capability index for lot disposition. Journal of the Operational Research Society. (SCI & Scopus indexed)
Link: https://www.tandfonline.com/doi/abs/10.1080/01605682.2024.2363264
Darmawan, A., Wu, C. W., Wang, Z. H., & Chiang, P. J. (2024). Developing variables two-plan sampling scheme with consideration of process loss for lot sentencing. Quality Engineering, 37(2), 273-291. https://doi.org/10.1080/08982112.2024.2381012
97
Thank you

98
