Chapter 2: OR and Its Application in Description of Physiological Systems

Operational research techniques can be applied to describe physiological systems. Operations research seeks to optimize complex systems with limited resources using mathematical modeling. It involves techniques like linear and nonlinear programming to optimize variables and system performance while satisfying constraints. Stochastic processes model systems with randomly changing attributes over time. Optimization aims to find the best solution by determining maxima and minima while direct substitution converts constrained problems into unconstrained ones.


Chapter 2

Operational research applied to the description of physiological systems
Contents
• Operations research
• Optimization
• Signal processing by interfacing instrumentation
• Biomedical variability and probabilistic solutions to medical decision making
• Population dynamics perturbation techniques in dealing with the problems of thermodynamics
• Stochastic processes
Operations research
• Finds the most effective utilization of limited resources
• Is the study of mathematical models for complex organizational systems
• Optimization is a branch of OR which uses mathematical techniques such as linear and nonlinear programming to derive values for system variables that will optimize system performance
Common problems in OR
• Set covering, packing and partitioning
• Quadratic assignment problem
• Routing problem
• Production scheduling problem
• Portfolio selection and chemical equilibrium problems
Stochastic process
• A system whose attributes change randomly over time
• Example: an ATM
  – States = {working, idle, failed}
  – Number of customers waiting
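The ATM example above can be sketched as a small discrete-time Markov-chain simulation; the transition probabilities below are invented purely for illustration (a Python sketch, since no code accompanies this slide):

```python
import random

# Toy transition probabilities for the ATM states (assumed, not from the slide)
P = {
    "working": {"working": 0.90, "idle": 0.08, "failed": 0.02},
    "idle":    {"working": 0.50, "idle": 0.49, "failed": 0.01},
    "failed":  {"working": 0.30, "idle": 0.00, "failed": 0.70},
}

def simulate(steps, start="working", seed=0):
    """Count visits to each state over a random walk through the chain."""
    rng = random.Random(seed)
    state, visits = start, {s: 0 for s in P}
    for _ in range(steps):
        states, probs = zip(*P[state].items())
        state = rng.choices(states, probs)[0]
        visits[state] += 1
    return visits

counts = simulate(100_000)
```

With these (assumed) probabilities the machine spends most of its time working, which the long-run visit counts reflect.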
What is Optimization?
• Optimization is the mathematical discipline which is concerned with finding the maxima and minima of functions, possibly subject to constraints.
• To optimize means to make as perfect, effective or functional as possible
• Used to determine the best solution without actually testing all possible cases
Statement of an optimization problem

General Statement of a Mathematical Programming Problem

Find x which minimizes f(x)

Subject to:
gi(x) ≤ 0 for i = 1, 2, ..., h
li(x) = 0 for i = h+1, ..., m

f(x), gi(x) and li(x) are twice continuously differentiable real-valued functions.
gi(x) is known as an inequality constraint.
li(x) is known as an equality constraint.
x can be a vector with several variables.

Minimization of f(x) is the same as maximization of −f(x).
Some terminologies
• Design vector
• Design constraints
• Constraint surface
• Objective function
• Mathematical programming
Design vector

Two types of variables exist:
– Pre-assigned variables: variables whose values are known beforehand
– Design variables: the vector of decision variables, x = [x1 x2]ᵀ, whose values should be calculated using optimization techniques

Design space: the space spanned by the design vector, e.g. the (x1, x2) plane for f(x) = 9.82x1x2 + 2x1
Design constraints
• Restrictions on the variables

Find x which minimizes f(x)
Subject to:
gi(x) ≤ 0 for i = 1, 2, ..., h
li(x) = 0 for i = h+1, ..., m
Where gi(x) is an inequality constraint and li(x) is an equality constraint

• Behavioral constraints
  – e.g. blood pressure cannot be negative
• Geometric constraints
  – Constraints due to geometry
Constraint surface
• The set of values which satisfy a single constraint
• The plot of gi(x) = 0
• Four possible types of points
  – Free and acceptable
  – Free and unacceptable
  – Bound and acceptable
  – Bound and unacceptable
Example
• Draw the constraint surfaces for the problem of minimizing

f(x) = 9.82x1x2 + 2x1

Subject to:
2 ≤ x1 ≤ 14
0.2 ≤ x2 ≤ 0.8
2500/(π·x1·x2) − 500 ≤ 0
2500/(π·x1·x2) − π^2(0.85×10^6)(x1^2 + x2^2)/(8(250)^2) ≤ 0

(Plot: the constraint surfaces drawn in the (x1, x2) plane, x1 from 2 to 14, x2 from 0.1 to 1.)
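As a quick feasibility check of the constraint set above (a Python sketch; the constants follow the reconstructed forms, so treat the numbers as assumptions):

```python
import math

# Constraint functions in the form g(x) <= 0 (feasible when non-positive)
def g1(x1, x2):
    return 2500.0 / (math.pi * x1 * x2) - 500.0

def g2(x1, x2):
    return (2500.0 / (math.pi * x1 * x2)
            - math.pi**2 * 0.85e6 * (x1**2 + x2**2) / (8 * 250**2))

def feasible(x1, x2):
    """A point is feasible when every constraint and bound is satisfied."""
    return (g1(x1, x2) <= 0 and g2(x1, x2) <= 0
            and 2 <= x1 <= 14 and 0.2 <= x2 <= 0.8)
```

For instance an interior point such as (8, 0.5) satisfies all four conditions, while the lower corner (2, 0.2) violates the first constraint.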
Objective function
– The function which gives the relation between the objective we want to achieve and the variables involved
– May be single or multiple
Objective function
• Example: A power company operates two thermal power plants A and B using three different grades of coal C1, C2 and C3. The minimum power to be generated at plants A and B is 30 MWh and 80 MWh respectively.
• (Table: amount of coal required of each grade at each plant.)
• Write the objective function to minimize cost.
• Objective function surface
  – Locus of all points satisfying f(x) = C for some constant C

(Plot: red lines are objective function surfaces for C = 50 and C = 30, drawn over the (x1, x2) design space.)
Classification of optimization problems
• Based on existence of constraints
  • Constrained optimization
    – Formulation: Min F(X) subject to Gj(X) ≤ 0
  • Unconstrained optimization
    – Formulation: Min F(X)
Classification cont…

• Based on nature of design variables


• Static
– Design variables are simple variables
• Dynamic
– Design variables are function of other
variables
Classification cont…
• Based on the nature of the expressions for the objective function and constraints
  • Linear programming
    – Objective function and constraints are linear
    – Often an approximation
    – e.g. optimal power flow, steady-state security
  • Nonlinear programming
    – If any one of the objective function or constraints is nonlinear
    – Power system problems are nonlinear
Classification cont…
• Based on the expression of the objective function or constraints
  • Geometric programming
    – Objective function and/or constraints are expressed as power terms
  • Quadratic programming
    – Special case of NLP
    – Objective function is in quadratic form
Classical optimization techniques
• Used for continuous and differentiable functions
• Make use of differential calculus
• Disadvantage
  – Many practical problems have non-differentiable objective functions
Single variable optimization
• Local minimum
  – x* is a local minimum if f(x*) ≤ f(x* + h) for all sufficiently small positive and negative h
• Local maximum
  – x* is a local maximum if f(x*) ≥ f(x* + h) for all sufficiently small positive and negative h

(Plot: a curve with its local maximum and local minimum marked.)
Single variable cont…

Theorem 1 (necessary condition): If f(x) has a local extremum at x = x* and f'(x*) exists, then f'(x*) = 0.

Theorem 2 (sufficient condition): Let f'(x*) = f''(x*) = … = f⁽ⁿ⁻¹⁾(x*) = 0 and f⁽ⁿ⁾(x*) ≠ 0. Then x* is a local minimum if n is even and f⁽ⁿ⁾(x*) > 0, a local maximum if n is even and f⁽ⁿ⁾(x*) < 0, and neither (an inflection point) if n is odd.
Example

f(x) = 12x^5 − 45x^4 + 40x^3 + 5

Soln.
Find f'(x) and equate it to zero:
f'(x) = 60x^4 − 180x^3 + 120x^2 = 60x^2(x − 1)(x − 2) = 0
The extreme points are x = 0, 1 and 2.
x = 0 is an inflection point, x = 2 is a local minimum and x = 1 is a local maximum.

(Plot: f(x) on the interval [−1, 3].)
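The derivative test above can be verified numerically (a Python sketch):

```python
# f and its derivatives, from the example above
def f(x):   return 12*x**5 - 45*x**4 + 40*x**3 + 5
def fp(x):  return 60*x**4 - 180*x**3 + 120*x**2   # = 60x^2 (x-1)(x-2)
def fpp(x): return 240*x**3 - 540*x**2 + 240*x

stationary = [0, 1, 2]      # roots of f'(x) = 0
# Second-derivative test at each stationary point:
# f'' > 0 -> local min, f'' < 0 -> local max, f'' = 0 -> test inconclusive
kinds = {x: ("min" if fpp(x) > 0 else "max" if fpp(x) < 0 else "inconclusive")
         for x in stationary}
```

The test is inconclusive at x = 0 (where higher derivatives show it is an inflection point), consistent with the solution on the slide.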
Multivariable optimization
• Without constraints: similar conditions to the single variable case
• With constraints

Theorem 3 (necessary condition): If f has an extremum at X*, then ∂f/∂xi(X*) = 0 for all i.

Theorem 4 (sufficient condition): A stationary point X* is a local minimum if the Hessian matrix of f at X* is positive definite, and a local maximum if it is negative definite.

Example: find the extreme points of a function of two variables
• Soln.
  – Evaluate the first partial derivatives and find the extreme points
  – Check the Hessian matrix by determining the second derivatives and determinants
Multivariable optimization with equality constraints

• Problem formulation
  – Find X which minimizes f(X) subject to the constraints gi(X) = 0 for i = 1, 2, …, m, where m ≤ n
  – The solution can be obtained using:
    • Direct substitution
    • Constrained variation
    • The Lagrangian method
Direct substitution
• Converts a constrained optimization problem to an unconstrained one
• Used for simpler problems
• Technique
  – Express m of the constrained variables in terms of the remaining n − m variables
  – Substitute into the objective function
Direct substitution cont…
• Example: find the values of x1, x2 and x3 which maximize

f(x1, x2, x3) = 8x1x2x3

subject to the equality constraint

x1^2 + x2^2 + x3^2 = 1

Soln. Re-write the constraint equation to eliminate any one of the variables:

x2 = (1 − x1^2 − x3^2)^(1/2)

then

f(x1, x3) = 8x1x3(1 − x1^2 − x3^2)^(1/2)
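The reduced two-variable function can be checked with a coarse grid search (a Python sketch; the analytic optimum of 8x1x2x3 on the unit sphere is x1 = x2 = x3 = 1/√3):

```python
import math

def f_reduced(x1, x3):
    """Objective after substituting x2 = sqrt(1 - x1^2 - x3^2)."""
    s = 1.0 - x1 * x1 - x3 * x3
    return 8.0 * x1 * x3 * math.sqrt(s) if s > 0 else float("-inf")

# Coarse grid search over the now-unconstrained reduced problem
best = max((f_reduced(i / 200, j / 200), i / 200, j / 200)
           for i in range(1, 200) for j in range(1, 200))
fmax, x1s, x3s = best
# Analytic optimum: x1 = x2 = x3 = 1/sqrt(3), f* = 8/(3*sqrt(3)) ≈ 1.54
```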
Constrained variation method
• Finds a closed form expression for the first order
differential of f at all points where the constraints are
satisfied
• Example: minimize

• Subject to
Constrained variation
• At a minimum • Rewriting the equation

• If we take small • Substituting


variations dx1 and dx2
• Necessary condition
• After Taylor series expansion
Constrained variation
• For general case

• Under the assumption that


Example minimize the following function subject to given
constraint

• Minimize

• Subject to

• Soln. The partial • Using the necessary condition


differentials are
Method of Lagrange multipliers

• Problem formulation: minimize f(X) s.t. gj(X) = 0, j = 1, …, m
• Procedure:
A function L, the Lagrangian, can be formed as
L(X, λ) = f(X) + Σj λj·gj(X)

• The necessary conditions for an extremum are given by
∂L/∂xi = 0 for all i, and ∂L/∂λj = 0 for all j
Lagrange Multiplier method cont…

• For a general case L


Lagrangian method
• Sufficient condition: the corresponding bordered Hessian matrix has to be positive definite


Example: find maximum of f given by

• Subject to

• Soln. The Lagrangian is • Giving
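The slide's own worked example is not fully reproduced here, so as an illustration of the procedure take a hypothetical problem: minimize f = x² + y² subject to x + y = 1. Stationarity of the Lagrangian gives a linear system (a Python sketch, stdlib only):

```python
# L(x, y, lam) = x^2 + y^2 + lam*(x + y - 1)
# dL/dx = 2x + lam = 0, dL/dy = 2y + lam = 0, dL/dlam = x + y - 1 = 0
# -> a 3x3 linear system A z = b with z = (x, y, lam)
A = [[2.0, 0.0, 1.0],
     [0.0, 2.0, 1.0],
     [1.0, 1.0, 0.0]]
b = [0.0, 0.0, 1.0]

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            k = M[r][c] / M[c][c]
            M[r] = [mr - k * mc for mr, mc in zip(M[r], M[c])]
    z = [0.0] * n
    for r in range(n - 1, -1, -1):
        z[r] = (M[r][n] - sum(M[r][c] * z[c] for c in range(r + 1, n))) / M[r][r]
    return z

x, y, lam = solve(A, b)   # expect x = y = 0.5, lam = -1
```

For a quadratic objective with linear constraints the necessary conditions are exactly linear, which is why a single linear solve suffices here.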


Formulation of multivariable optimization

• When the constraints are inequality constraints, i.e.

• It can be transformed to an equality by adding a slack variable
• The Lagrangian method can then be used
Kuhn-Tucker conditions
• The necessary conditions for the above problem are

• When there are both equality and inequality constraints, the KT conditions are given as
Kuhn-Tucker conditions cont…

• Example: For the following problem, write the KT


conditions

Subject to
Linear programming
• History
  – George B. Dantzig, 1947: simplex method
  – Kuhn and Tucker: duality theory
  – Charnes and Cooper: industrial applications
• Problem statement
• Subject to the constraints
Properties of LP (standard form)
• The objective function is of minimization type
• All constraints are of equality type
• All decision variables are nonnegative
• Transformations
  – If the problem is maximization, use −f
  – If a variable is unrestricted in sign, write it as the difference of two nonnegative variables
  – If a constraint is an inequality, add slack or surplus variables
Simplex algorithm
• The objective of the simplex algorithm is to find a vector X ≥ 0 which minimizes f(X) and satisfies the equality constraints
• Algorithm
  1. Convert the system of equations to canonical form
  2. Identify the basic solution
  3. Test optimality and stop if optimum
  4. If not, select the next pivotal element and re-write the equations
  5. Go to step 2
Example: Maximize
Subject to
• Solution:
Step 1. Convert to canonical form
Use the tabular method to proceed through the algorithm

A basic variable is a variable having coefficient 1 in one of the equations and zero in the rest of the equations.
Various types of solutions
• Unbounded solution
  – If all the coefficients of the entering variable are negative
• Infinitely many solutions
  – If the coefficient of a nonbasic variable in the objective row is zero at an optimal solution
Modifications to simplex method
• Two phase method • Revised simplex method
– When an initial feasible – Solve the dual of the
solution is not readily basic solution
available
– Phase I is for rearranging
the equations
– Phase II is solving
Using MATLAB to solve LP

• Example:
Subject to

Soln.
1. Form the matrices containing the coefficients of the objective function, the constraint equations and the constants separately:
f=[5 2];
A=[3 4;1 -1;-1 -4;-3 -1];
b=[24;3;-4;-3];
lb=zeros(2,1);

2. Use the built-in function linprog():
[x, fmin]=linprog(f,A,b,[],[],lb);
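Reading the coefficients from the matrices above (minimize 5x1 + 2x2 subject to Ax ≤ b, x ≥ 0), the same LP can be cross-checked in Python by brute-force vertex enumeration, using the fact that an LP optimum lies at a vertex of the feasible region (a sketch for this tiny 2-variable case, not a substitute for linprog):

```python
from itertools import combinations

# Constraints A x <= b exactly as in the MATLAB snippet, plus x >= 0
A = [[3, 4], [1, -1], [-1, -4], [-3, -1], [-1, 0], [0, -1]]
b = [24, 3, -4, -3, 0, 0]
c = [5, 2]                       # minimize 5*x1 + 2*x2, as in f=[5 2]

def lp_by_vertices(A, b, c, tol=1e-9):
    best = None
    for i, j in combinations(range(len(A)), 2):
        (a1, a2), (a3, a4) = A[i], A[j]
        det = a1 * a4 - a2 * a3
        if abs(det) < tol:
            continue                      # parallel lines: no vertex
        x = (b[i] * a4 - a2 * b[j]) / det # Cramer's rule
        y = (a1 * b[j] - b[i] * a3) / det
        # keep the vertex only if it satisfies every constraint
        if all(r[0] * x + r[1] * y <= bk + 1e-7 for r, bk in zip(A, b)):
            val = c[0] * x + c[1] * y
            if best is None or val < best[0]:
                best = (val, x, y)
    return best

fmin, x1, x2 = lp_by_vertices(A, b, c)   # optimum at x = (8/11, 9/11)
```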


Nonlinear programming

• When objective function or constraint is nonlinear


• Analytical
– If objective function is differentiable
– Constraints are equality
• Numerical method
– When objective function is not differentiable
Nonlinear programming

– Numerical methods
– Used for non-differentiable or analytically unsolvable objective functions
– Example:

f(x) = 0.65 − 0.75/(1 + x^2) − 0.65x·tan⁻¹(1/x)

• Cases
  1. Unconstrained single-variable case
  2. Unconstrained multi-variable case
  3. Constrained single-variable case
  4. Constrained multi-variable case
General outline of NLP

• Start with an initial trial point X1 (set i = 1)
• Find a suitable direction Si that points in the direction of the optimum
• Find an appropriate step length λi for movement in the direction of Si
• Obtain the new approximation Xi+1 as
  Xi+1 = Xi + λi·Si
• Test whether Xi+1 is optimum; if not, set i = i + 1 and go to step 2, else stop
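The outline above, with Si chosen as the negative gradient and a fixed step length, can be sketched in Python on a hypothetical quadratic test function (the function and step length are my own illustrative choices):

```python
# Steepest descent on f(x, y) = (x - 3)^2 + 2*(y + 1)^2
def grad(x, y):
    return (2 * (x - 3), 4 * (y + 1))

x, y, lam = 0.0, 0.0, 0.1          # initial trial point X1 and step length
for i in range(200):
    gx, gy = grad(x, y)
    if (gx * gx + gy * gy) ** 0.5 < 1e-8:   # optimality test
        break
    x, y = x - lam * gx, y - lam * gy       # X_{i+1} = X_i + lam * S_i
```

The iterates converge to the minimizer (3, −1); in practice the step length would itself be found by a one-dimensional search rather than fixed.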
Methods available for the one-dimensional case
Elimination methods

• Unrestricted search with fixed step size

Example: Find the minimum of f(x) = x(x − 1.5) starting at x0 = 0.0 with a fixed step size of 0.05.

• f(0) = 0
• x1 = 0.0 + 0.05; f(0.05) = 0.05(0.05 − 1.5) = −0.0725 < f(0), continue
• x2 = 0.10; f(0.1) = 0.1(0.1 − 1.5) = −0.14 < f(0.05), continue
• x3 = 0.15; f(0.15) = 0.15(0.15 − 1.5) = −0.2025 < f(0.1), continue
• x4 = 0.20; f(0.2) = 0.2(0.2 − 1.5) = −0.26 < f(0.15), continue
• … the search continues in this way until f starts to increase
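The same fixed-step search can be written as a short loop (a Python sketch):

```python
def fixed_step_search(f, x0=0.0, step=0.05):
    """Advance with a fixed step while f keeps decreasing."""
    x, fx = x0, f(x0)
    while True:
        xn = x + step
        fn = f(xn)
        if fn >= fx:       # f started to increase: stop
            return x
        x, fx = xn, fn

xmin = fixed_step_search(lambda x: x * (x - 1.5))
# the true minimizer of x(x - 1.5) is x = 0.75
```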
Fibonacci method
• Makes use of the Fibonacci numbers
• Algorithm
  – Assume the initial interval of uncertainty is L0 = [a, b]
  – Let the total number of experiments be n
  – Define two points x1 and x2, placed symmetrically in the interval using Fibonacci ratios
  – Compare f(x1) and f(x2)
  – Delete the eliminated interval and form a new interval
  – Repeat the procedure until the specified number of iterations
Flow chart of the Fibonacci method

Example: minimize f(x) given above in the interval [0, 3] with n = 6.

When n = 6, L2* = (F4/F6)·L0 = (5/13)(3) = 1.153846; then x1 and x2 will be

x1 = 0 + L2* = 1.153846 and x2 = 3 − 1.153846 = 1.846154.

Accordingly f(x1) = −0.207270 and f(x2) = −0.115843; since f(x1) < f(x2), eliminate [x2, 3].
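A Python sketch of Fibonacci interval elimination applied to the function above (for clarity both interior points are re-evaluated each pass, whereas the classical method reuses one evaluation per iteration):

```python
import math

def f(x):
    return 0.65 - 0.75 / (1 + x * x) - 0.65 * x * math.atan(1.0 / x)

def fibonacci_search(f, a, b, n):
    F = [1, 1]                       # F_0 = F_1 = 1
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    for k in range(n, 1, -1):
        # interior points placed by Fibonacci ratios of the current interval
        x1 = a + (F[k - 2] / F[k]) * (b - a)
        x2 = b - (F[k - 2] / F[k]) * (b - a)
        if f(x1) < f(x2):
            b = x2                   # minimum lies in [a, x2]
        else:
            a = x1                   # minimum lies in [x1, b]
    return 0.5 * (a + b)

xmin = fibonacci_search(f, 0.0, 3.0, 12)
```

With n = 12 the final interval has width 3/F12 ≈ 0.013, bracketing the minimizer near x ≈ 0.48 found by the Newton-Raphson example that follows.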
Newton-Raphson method
• Is an NLP method based on the first and second derivatives of the objective function
• It starts with an initial guess
• An improved approximation is obtained as

x_{i+1} = x_i − f'(x_i)/f''(x_i)

• Stopping criterion: |f'(x_{i+1})| ≤ ε
Example: Find the minimum of the function below, with starting point x1 = 0.1 and stopping criterion ε = 0.01.

f(x) = 0.65 − 0.75/(1 + x^2) − 0.65x·tan⁻¹(1/x)

Solution: x1 = 0.1

f'(x) = 1.5x/(1 + x^2)^2 + 0.65x/(1 + x^2) − 0.65·tan⁻¹(1/x)

f''(x) = (2.8 − 3.2x^2)/(1 + x^2)^3

xi    | f       | f'      | f''    | xi+1  | check
0.1   | −0.1882 | −0.7448 | 2.6865 | 0.377 | |−0.1382| > ε
0.377 | −0.303  | −0.1382 | 1.573  | 0.465 | |−0.0179| > ε
0.465 | −0.3098 | −0.0179 | 1.171  | 0.480 | |−0.0005| < ε
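The iteration in the table can be reproduced directly from the derivatives above (a Python sketch):

```python
import math

def fp(x):    # f'(x), as derived above
    return (1.5 * x / (1 + x * x) ** 2
            + 0.65 * x / (1 + x * x)
            - 0.65 * math.atan(1.0 / x))

def fpp(x):   # f''(x)
    return (2.8 - 3.2 * x * x) / (1 + x * x) ** 3

x, eps = 0.1, 0.01
while abs(fp(x)) > eps:
    x -= fp(x) / fpp(x)      # x_{i+1} = x_i - f'(x_i)/f''(x_i)
```

The loop visits approximately 0.377, 0.465 and 0.480, matching the table, and stops once |f'(x)| falls below ε.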
Newton method for the multivariable case

• Gradient ∇f
  – n-component vector of partial derivatives
  – its negative gives the direction of steepest descent
• The next iterate is obtained as
  Xi+1 = Xi − Ji⁻¹·∇f(Xi)
  – where ∇f is the gradient of f and Ji is the Hessian matrix of second-order derivatives
• Disadvantages
  – Needs storage of J
  – Computation of J is difficult
  – Needs the inversion of J
• Solution: use a quasi-Newton method
Example: minimize f(x1, x2) given by

f(x1, x2) = x1 − x2 + 2x1^2 + 2x1x2 + x2^2

J = [4 2; 2 2],  J⁻¹ = [0.5 −0.5; −0.5 1]

xi+1 = xi − J⁻¹·∇f(xi)

xi        | xi+1      | J⁻¹                 | ∇f     | check
[0;0]     | [−1;1.5]  | [0.5 −0.5;−0.5 1]   | [1;−1] | ‖∇f‖ = 1.41 > ε
[−1;1.5]  | [−1;1.5]  | [0.5 −0.5;−0.5 1]   | [0;0]  | ‖∇f‖ = 0 < ε
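Since f is quadratic with constant Hessian, one Newton step from [0; 0] reaches the minimum at [−1; 1.5] (a Python sketch):

```python
# f(x1, x2) = x1 - x2 + 2*x1^2 + 2*x1*x2 + x2^2
def grad(x1, x2):
    return (1 + 4 * x1 + 2 * x2, -1 + 2 * x1 + 2 * x2)

# Constant Hessian J = [[4, 2], [2, 2]] and its inverse
Jinv = [[0.5, -0.5], [-0.5, 1.0]]

x1, x2 = 0.0, 0.0
for _ in range(5):
    g1, g2 = grad(x1, x2)
    if (g1 * g1 + g2 * g2) ** 0.5 < 1e-12:  # gradient vanished: optimum
        break
    # X_{i+1} = X_i - J^{-1} * grad f(X_i)
    x1 -= Jinv[0][0] * g1 + Jinv[0][1] * g2
    x2 -= Jinv[1][0] * g1 + Jinv[1][1] * g2
```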
Using MATLAB

• MATLAB has built in function named fminunc


• It solves unconstrained minimization
• Steps
  – Write the objective function as a user-defined function in an m-file
  – Call fminunc using the objective function and initial values as arguments
• Example: minimize the function

• With initial point


• Solution
1. Write the objective function in an m-file:
function y=objc(x)
y=100*(x(2)-x(1)^2)^2+(1-x(1))^2;
2. Call fminunc with the objective function as argument:
xo=[-1.2;1];
[x,fx]=fminunc(@objc,xo);
Reading assignment

• Steepest descent
• Quasi-Newton Method
Constrained NLP
• Problem statement
Sequential quadratic programming
• One of the most widely used and effective methods for constrained NLP
• Converts the constrained NLP into a sequence of quadratic subproblems using
  – the gradients of the functions
  – Lagrange multipliers
• Derivation
– Lagrangian equation of the above NLP is

– Converted Problem
Solution

• The solution procedure has the following steps


– Step 1 – start with an initial guess X
– Step 2 update X as

– Where S is the solution of an auxiliary optimization problem

– Where H is a positive definite matrix, initially taken as the identity and updated to converge to the Hessian of the Lagrangian, and the constant is given by
• And the step length is found from the solution of

• MATLAB solution
– Steps :
• Write the objective function as m-file and save it
• Write the constraint function as a separate m-file and save it
• Prepare the upper and lower bounds as vectors
• Call the build in function fmincon() using the objective function,
constrain functions and upper and lower bounds as arguments
Example: solve the following minimization problem using
MATLAB

• Minimize

• Use X=[11.8765 7] as starting point


• Solution: There are no equality constraints and
lower and upper bounds
Programs

• function y=objc2(x)
y=0.1*x(1)+0.05773*x(2);
end
Save this as objc2 in a separate m-file

• function [c ceq]=constr(x)
ceq=[];
c=[(0.6/x(1)) +(0.3464/x(2))-0.1;6-x(1);7-x(2)];
end
Save this also in a separate m-file named constr
• Write the main calling function:
xo=[ 11.8756 7];
[x, fx]=fmincon(@objc2,xo,[],[],[],[],[],[],@constr);
The answer will be

x=

9.4639 9.4642

fx =

1.4928
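The fmincon result can be cross-checked with a coarse grid search over the same objective and constraints (a Python sketch, stdlib only; the search box [6, 20] × [7, 20] is an assumption):

```python
def obj(x1, x2):
    return 0.1 * x1 + 0.05773 * x2

def feasible(x1, x2):
    # the inequality constraints from constr.m, as c <= 0
    return 0.6 / x1 + 0.3464 / x2 - 0.1 <= 0 and x1 >= 6 and x2 >= 7

best = None
x1 = 6.0
while x1 <= 20.0:
    x2 = 7.0
    while x2 <= 20.0:
        if feasible(x1, x2):
            v = obj(x1, x2)
            if best is None or v < best[0]:
                best = (v, x1, x2)
        x2 += 0.02
    x1 += 0.02
fmin, x1s, x2s = best
# should land near MATLAB's x = (9.4639, 9.4642), fx = 1.4928
```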
Modern optimization techniques
• Drawbacks of classical optimization techniques
  – Need derivatives
  – Single-directional search
  – Get stuck at local extrema
• AI-based techniques
  – Fuzzy logic systems
  – Neural networks
  – Evolutionary algorithms
    • Genetic algorithm
    • Simulated annealing
    • Particle swarm optimization
Signal processing by interfacing instrumentation
• Biomedical signals
  – Observations of the physiological activities of organisms
    • Gene and protein sequencing
    • Neural and cardiac rhythms
    • Tissue and organ images
• Biomedical signal processing
  – Extracting useful information from these signals
Biomedical signal processing contd…

• Filtering signals to remove noise
  – Classical techniques:
    • Suppressing unwanted frequency components
    • Statistical characterization of signals
• Sources of noise
  – Imprecision in instruments
  – The nature of the biological systems themselves
  – Power line interference
Biomedical signal processing contd…

• New techniques
– Segmentation
– Motion tracking
– Sequence analysis
– Classification
Biomedical variability and probabilistic solutions to medical decision making
• In the modeling of systems
  – The assumptions taken ignore certain details
  – Uncertainties should be considered
• Biomedical variability
  – Uncertainty or randomness in the model
• Example: orthopedic mechanics
  – Joint loading, material properties, failure properties

• Two types of variability
  – Intra-individual: variation of parameters within an individual (body weight, blood pressure, etc.)
  – Inter-individual: variation between individuals
Probabilistic approach
• Analysis of the risk of failure
  – Can be studied using a probabilistic approach

• Risk of failure
g(X) = R(X) − S(X)

• Where
  – R(X) is a random function describing the resistance or strength of the component
  – S(X) is the response of the structure
  – X is the vector of random variables
• Negative or zero g(X) indicates failure
Probabilistic approach
• Probability of failure
p_f = P(g(X) ≤ 0)

• For each risk analysis,
  – The functions R(X), S(X) and the random vector X have to be identified
  – Various methods can be used for the solution
• Example: orthopedic mechanics: probabilistic analysis of a cemented hip implant
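The failure probability p_f = P(g(X) ≤ 0) can be estimated by Monte Carlo simulation; the distributions below are hypothetical, chosen only to illustrate the method (a Python sketch):

```python
import random

# Hypothetical example: strength R ~ N(50, 5), load effect S ~ N(30, 8);
# failure occurs when g = R - S <= 0.
rng = random.Random(0)
n = 200_000
failures = sum(1 for _ in range(n)
               if rng.gauss(50, 5) - rng.gauss(30, 8) <= 0)
pf = failures / n
# analytically g ~ N(20, sqrt(25 + 64)), so p_f = Phi(-2.12) ≈ 0.017
```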
Probabilistic approach example contd…

• Performance functions
