CHAPTER 1
INTRODUCTION
technology, etc. The task of getting the right product to the right customer at
the right place and at the right time is not easy, and it is this task that leads to the
study of the 'supply chain'. Supply Chain Management (SCM) is a relatively
new term. It crystallizes concepts about integrated business planning that have
been espoused by logistics experts, strategists, and operations research
practitioners as far back as the 1950s. Today, integrated planning is finally
possible due to advances in information technology, but most companies still
have much to learn about implementing the new analytical tools needed to
achieve it.
A total supply chain cost analysis would look at all the cost
consequences of supply chain structure and policy decisions. This approach
suggests that participants in the supply chain should act as a team to maximize
the overall profit in the chain, rather than each optimizing its own portion of it.
The implication is that the pricing of goods moving between participants can be
adjusted to reflect a fair sharing of the profit pie.
This level of planning includes sourcing and deployment plans for each plant,
each distribution center, and each customer. It also considers the flow of
goods through the supply chain network. Generally, supply chain network
design is done infrequently (i.e., every few years) as companies do not need to
add new plants or distribution centers on a routine basis.
1.4.1 Introduction
• Minimizing lateness
[Figure: categories of solutions relative to the objective(s): optimum feasible
solution, optimized infeasible solution, optimized feasible solution, and locally
optimized infeasible solution.]
1.5 OPTIMIZATION
Optimization is the act of obtaining the best result for a given
problem under given circumstances. In the design, manufacture and
maintenance of any engineering system, engineers have to take many
technological and managerial decisions at several stages. The ultimate goal of
all such decisions is either to minimize the effort required or to maximize the
desired benefit. Since the effort required or the benefit desired in any practical
situation can be expressed as a function of certain decision variables,
optimization amounts to finding the conditions that give the maximum or
minimum value of that function. An optimization problem can be stated as
follows:
Find X = (x_1, x_2, …, x_n)^T which minimizes f(X)    (1.1)

subject to the constraints

g_j(X) ≤ 0,  j = 1, 2, …, m;
l_j(X) = 0,  j = 1, 2, …, p;
where X is an n-dimensional vector called the design vector, f(X) is termed
the objective function, and g_j(X) and l_j(X) are known as the inequality and
equality constraints respectively. The number of variables (dimensions) n
and the numbers of constraints m and/or p need not be related in any way.
The problem stated in equation (1.1) is called a constrained optimization
problem. Some optimization problems do not involve any constraints and are
called unconstrained optimization problems (see equation (1.2)). An
unconstrained optimization problem can be stated as follows:
Find X = (x_1, x_2, …, x_n)^T which minimizes f(X)    (1.2)
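For illustration, the following short Python sketch states a small instance of
the constrained problem (1.1) and solves it with scipy.optimize.minimize; the
quadratic objective and the two constraints are arbitrary examples chosen only
to show the g_j(X) ≤ 0 and l_j(X) = 0 forms.

import numpy as np
from scipy.optimize import minimize

def f(x):                                  # objective function f(X)
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

# SciPy expects inequality constraints in the form g(x) >= 0, so the
# g_j(X) <= 0 constraint of equation (1.1) is negated below.
cons = [
    {"type": "ineq", "fun": lambda x: -(x[0] + x[1] - 3.0)},  # g1(X) = x1 + x2 - 3 <= 0
    {"type": "eq",   "fun": lambda x: x[0] - x[1]},           # l1(X) = x1 - x2 = 0
]

res = minimize(f, x0=np.zeros(2), constraints=cons)
print(res.x, res.fun)                      # optimal design vector and objective value

Dropping the constraints list turns the same call into the unconstrained
problem of equation (1.2).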
When the design variables are restricted to take only integer (discrete) values,
the problem is known as an integer programming problem.
[Figure: classification of optimization methods. Direct analytical methods work
on complete solutions (examples: LP, gradient-based methods); other methods
construct solutions, either deterministically (examples: branch and bound,
dynamic programming) or stochastically (examples: local search, tabu search,
simulated annealing).]
For some problems it is not necessary to check all solutions during the
exhaustive search process: the properties of a linear fitness function and a
convex search space make it sufficient to check only a path along the boundary
of the search space. Optimization methods such as branch and bound and
dynamic programming work on partial solutions, and can likewise cut off parts
of the search space without examining them. Algorithms that perform
exhaustive search always find the global optimum, but are often too time
consuming or cannot be applied to real-world problems: either the search
space of these problems is too large, or the methods have to simplify the
problem to remain computationally tractable, which is usually not acceptable
for real-world problems. Local search methods and gradient-based methods
differ from exhaustive search methods in that they iteratively improve a single
solution rather than enumerating the whole search space.
The tabu list prevents the search from returning to the local optima already
visited, and the new search trajectory ensures that new regions of the search
space are explored and the global optimum located.
probabilistically according to P(E) = e^(−E/(k′T)), where k′ is the Boltzmann
constant and T is the temperature. The probability of the next point (from the
current point x(t)) being at x(t+1) depends on the difference in the function
values at these two points, ∆E = E(t+1) − E(t), and is calculated using the
Boltzmann probability distribution:

P(E(t+1)) = min{ 1, e^(−∆E/(k′T)) }    (1.3)
If ∆E ≤ 0, this probability is one: the function value at x(t+1) is better than
that at x(t), and the point x(t+1) must be accepted. The interesting situation
happens when ∆E > 0, which implies that the function value at x(t+1) is worse
than that at x(t). According to most traditional algorithms, the point x(t+1)
would not be chosen in this situation. But according to the Metropolis
algorithm, there is some finite probability of selecting the point x(t+1) even
though it is worse than the point x(t). However, this probability is not the same
in all situations; it depends on the relative magnitudes of ∆E and T. The
pseudo code of simulated annealing is shown in Figure 1.9.
Step 1 : Choose an initial point x(0), a termination parameter ε, a high
initial temperature T and the number of iterations n to be
performed at each temperature. Set t = 0.
Step 2 : Create a neighbouring point x(t+1) in the vicinity of x(t) and
calculate ∆E = E(t+1) − E(t).
Step 3 : If ∆E ≤ 0, accept x(t+1) and set t = t + 1;
Else accept x(t+1) with probability e^(−∆E/(k′T)); if accepted,
set t = t + 1;
Else go to Step 2.
Step 4 : If |x(t+1) − x(t)| < ε and T is small, Terminate;
Else if (t mod n) = 0, lower T according to a cooling schedule
and go to Step 2;
Else go to Step 2.
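A compact Python sketch of this procedure is given below. It is a minimal
illustration assuming a one-dimensional objective, a Gaussian neighbourhood
move and a geometric cooling schedule, and it simplifies the termination test
of Step 4 to a minimum-temperature check; these are all illustrative choices,
not prescribed by the figure.

import math, random

def simulated_annealing(f, x0, T=100.0, alpha=0.9, n_per_T=20,
                        T_min=1e-3, step=0.5):
    # Minimize f starting from x0, following the steps of Figure 1.9.
    x, t = x0, 0
    while T > T_min:                               # Step 4: stop when T is small
        x_new = x + random.gauss(0.0, step)        # Step 2: neighbouring point
        dE = f(x_new) - f(x)
        # Step 3: Metropolis acceptance rule, equation (1.3)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            x = x_new
        t += 1
        if t % n_per_T == 0:                       # lower T per cooling schedule
            T *= alpha
    return x

# Example: minimize a simple quadratic (illustrative only).
print(simulated_annealing(lambda x: (x - 3.0)**2, x0=0.0))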
[Figure: flow diagram of a generic evolutionary algorithm. Generate an initial
population and evaluate it; if the termination condition is satisfied, select the
best individuals; otherwise apply selection, recombination and mutation to
generate a new population and repeat.]
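The loop in the figure can be written out as a short Python sketch; the
bit-string encoding, tournament selection, one-point crossover and bit-flip
mutation used below are illustrative choices, since the figure does not fix a
particular representation.

import random

def evolutionary_algorithm(fitness, n_bits=20, pop_size=30,
                           p_cross=0.9, p_mut=0.01, generations=100):
    # Generate initial population of random bit strings.
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):                 # termination condition
        new_pop = []
        while len(new_pop) < pop_size:
            # Selection: binary tournament.
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            # Recombination: one-point crossover.
            if random.random() < p_cross:
                cut = random.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Mutation: independent bit flips.
            child = [b ^ 1 if random.random() < p_mut else b for b in child]
            new_pop.append(child)
        pop = new_pop                            # generate a new population
    return max(pop, key=fitness)                 # select the best individual

# Example: maximize the number of ones in the string (illustrative only).
print(evolutionary_algorithm(sum))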
The idea of imitating the behavior of ants for finding good solutions
to combinatorial optimization problems was initiated by Dorigo et al (1991).
Ant Colony Optimization (ACO) simulates the collective foraging habits of
ants venturing out for food and bringing it back to the nest. Real ants are
capable of finding the shortest path from a food source to their nest without
using visual cues, as they have poor vision. They communicate information
concerning food sources via an aromatic essence: the chemical substance
deposited by ants as they travel is called a pheromone. A greater amount of
pheromone on a path gives an ant a stronger stimulus and thus a higher
probability of following it. Ants essentially move randomly, but when they
encounter a pheromone trail, they decide whether or not to follow it; if they
do, they deposit their own pheromone on the trail, which reinforces the path.
Ants traveling to a food source by a shorter path complete more trips in the
same time, so the quantity of pheromone laid down on the shorter path grows
faster. However, there is always a small probability that an ant will not follow
a well-marked pheromone trail; this small probability allows other trails to be
explored. This foraging behavior has inspired algorithms for a wide range of
combinatorial optimization problems. Detailed descriptions of the ACO
algorithms and their implementations are presented in the book "Ant Colony
Optimization" (Dorigo and Stutzle 2004).
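The probabilistic trail-following rule can be illustrated with a short Python
sketch; the τ^α · η^β transition weighting and the evaporation-plus-deposit
update below follow the convention commonly used in ACO implementations
such as Ant System, and are an assumption here since the text above does not
give the formulas.

import random

def choose_next_city(current, unvisited, pheromone, distance,
                     alpha=1.0, beta=2.0):
    # Pick the next city with probability proportional to
    # (pheromone ** alpha) * ((1 / distance) ** beta).
    weights = [(pheromone[current][j] ** alpha)
               * ((1.0 / distance[current][j]) ** beta)
               for j in unvisited]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]

def deposit_pheromone(pheromone, tour, tour_length, rho=0.5, q=100.0):
    # Evaporate everywhere, then reinforce the edges of a completed tour;
    # shorter tours deposit more pheromone (q / tour_length).
    n = len(pheromone)
    for i in range(n):
        for j in range(n):
            pheromone[i][j] *= (1.0 - rho)        # evaporation
    for a, b in zip(tour, tour[1:] + tour[:1]):
        pheromone[a][b] += q / tour_length        # reinforcement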
The fitness function, defined by the optimization problem, assesses the extent
to which the swarm is good or bad.
Two key concepts underlie PSO and have contributed to its popularity:

pbest (personal best): the best position found so far by an individual particle.
gbest (global best): the position of the best particle of the entire swarm.
kth particle is recorded and represented by {P_kd} (i.e., (P_k1, P_k2, …, P_kD)), and
the global best solution (obtained so far) is denoted by {G_d} (i.e., (G_1, G_2, …,
G_D)). The rate of the position change (i.e., velocity) of the kth particle along
dimension d is updated as

v_kd^new = v_kd + c1 × [r1 × (P_kd − X_kd)] + c2 × [r2 × (G_d − X_kd)]

and the new position is then obtained as

X_kd^new = X_kd + v_kd^new    (1.4)
These velocity and position update equations describe the flying trajectory of
the population of particles: the velocity update equation describes how the
velocity is dynamically updated, and equation (1.4) describes the position
update of the flying particle. The velocity update equation consists of three
parts. The first part is known as the momentum part: the velocity cannot be
changed abruptly, but is adjusted from the current velocity. The second part is
known as the cognitive part, which represents the particle's private thinking,
i.e., learning from its own flying experience. The third part is the social part,
which represents the collaboration among particles, i.e., learning from the
group's flying experience. If the sum of the three parts on the right-hand side
exceeds a constant value specified by the
user, then the velocity on that dimension is set to ±v_max; that is, particles'
velocities on each dimension are clamped to a maximum velocity v_max, which
is an important parameter. Originally, v_max was the only parameter that users
were required to adjust. A large v_max leads the particles to fly past good
solution areas, while a small v_max can leave the particles trapped in local
minima, unable to fly into better solution areas. Usually a fixed constant value
is used as v_max, but a well-designed, dynamically changing v_max may further
improve performance.
[Figure 1.11 (a): The generic PSO algorithm for a minimization problem.
For d = 1, 2, …, D the velocity is updated as

v_kd^new = v_kd + c1 × [r1 × (P_kd − X_kd)] + c2 × [r2 × (G_d − X_kd)]

(momentum part, cognitive part and social part, respectively), and hence the
new position is obtained as X_kd^new = X_kd + v_kd^new. Here c1 and c2 are two
positive constants, and r1 and r2 are two uniformly distributed random
numbers in the range (0, 1).]
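A minimal Python sketch of this generic PSO loop is given below; the swarm
size, the values c1 = c2 = 2, the initialization range and the v_max value are
illustrative assumptions.

import random

def pso(f, dim, n_particles=20, iters=100, c1=2.0, c2=2.0,
        x_range=(-10.0, 10.0), v_max=4.0):
    # Generic PSO for minimizing f, following Figure 1.11 (a).
    lo, hi = x_range
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # pbest positions
    g = min(P, key=f)[:]                       # gbest position
    for _ in range(iters):
        for k in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # momentum + cognitive + social parts
                V[k][d] = (V[k][d]
                           + c1 * r1 * (P[k][d] - X[k][d])
                           + c2 * r2 * (g[d] - X[k][d]))
                V[k][d] = max(-v_max, min(v_max, V[k][d]))  # clamp to +/- v_max
                X[k][d] += V[k][d]             # position update, equation (1.4)
            if f(X[k]) < f(P[k]):              # update personal best
                P[k] = X[k][:]
                if f(P[k]) < f(g):             # update global best
                    g = P[k][:]
    return g

# Example: minimize the sphere function (illustrative only).
print(pso(lambda x: sum(xi * xi for xi in x), dim=3))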
Velocity:  v_kd^new = w × v_kd + c1 × [r1 × (P_kd − X_kd)] + c2 × [r2 × (G_d − X_kd)],
for d = 1, 2, …, D.    (1.5)
w = w_max − ((w_max − w_min) / iter_max) × iter    (1.6)

X_kd^new = X_kd + v_kd^new    (1.7)
Velocity:  v_kd^new = k′ × ( v_kd + c1 × r1 × (P_kd − X_kd) + c2 × r2 × (G_d − X_kd) ),
for d = 1, 2, …, D.    (1.8)

X_kd^new = X_kd + v_kd^new    (1.9)

with

k′ = 2 / | 2 − φ − √(φ² − 4φ) |,  where φ = c1 + c2, φ > 4.    (1.10)
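As a quick numerical check of equation (1.10), the common choice
c1 = c2 = 2.05 (so φ = 4.1) gives the frequently quoted constriction factor of
about 0.729; these parameter values are an assumption, not taken from the
text above.

import math

def constriction_factor(c1=2.05, c2=2.05):
    # k' from equation (1.10); requires phi = c1 + c2 > 4.
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

print(constriction_factor())   # approximately 0.7298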
A fuzzy system has also been proposed to nonlinearly change the inertia
weight (Shi and Eberhart 2001a, 2001b). Recently, a new variation of the PSO
model introducing nonlinear variation of the inertia weight along with
dynamic adaptation was proposed by Chatterjee and Siarry (2006). The search
process of a PSO algorithm is nonlinear and complicated; a PSO with a
well-selected parameter set can achieve good performance, but much better
performance could be obtained if a dynamically changing parameter set is
well designed (Shi et al 2005).
A larger inertia weight permitted larger changes in velocity per unit time step,
which meant exploration of new search areas in pursuit of a better solution. A
smaller inertia weight, however, meant less variation in velocity, providing
slower updating suited to fine-tuning a local search. It was therefore inferred
that the system should start with a high inertia weight for coarse global
exploration, and that the inertia weight should decrease linearly to facilitate
finer local exploration; this should help the system approach the optimum of
the fitness function quickly. The present method proposes a new
nonlinear-function-modulated inertia weight adaptation with time for
improved performance of the PSO algorithm.
The system starts with a high initial inertia weight (w_initial), which allows it
to explore new search areas aggressively, and then decreases it gradually
according to equation (1.12), following different paths for different values of
n to reach w_final at iter = iter_max. The proposed algorithm also attempts to
derive a reasonable choice of the free parameters of any given system, i.e.,
{w_initial, w_final, n}, on the basis of a fixed iter_max. The objective is to
arrive at an attractive solution for any given problem with the known, fixed
free parameters by applying the proposed PSO variation, which should require
less computational burden and time than trial-and-error approaches.
Velocity:  v_kd^new = w_iter × v_kd + c1 × r1 × (P_kd − X_kd) + c2 × r2 × (G_d − X_kd)    (1.11)

w_iter = ((iter_max − iter)^n / (iter_max)^n) × (w_initial − w_final) + w_final    (1.12)

where m = (w_initial − w_final) / iter_max    (1.13a)
iter_max : the maximum number of iterations for which the PSO algorithm
runs, with the iteration counter starting from zero. For example, if the
algorithm runs for 1000 iterations, iter_max = 999.
Equation (1.11) is the basic equation used to calculate the velocity, and hence
the new position of the particle is obtained as follows:

X_kd^new = X_kd + v_kd^new    (1.14)
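The nonlinear schedule of equation (1.12) is easy to verify in a few lines of
Python; the parameter values w_initial = 0.9, w_final = 0.4 and the sample
value of n below are illustrative assumptions.

def w_iter(t, iter_max=999, w_initial=0.9, w_final=0.4, n=1.2):
    # Nonlinear inertia weight schedule of equation (1.12).
    return ((iter_max - t) ** n / iter_max ** n) * (w_initial - w_final) + w_final

# w decreases from w_initial at t = 0 to w_final at t = iter_max;
# larger n makes the decay steeper early in the run.
for t in (0, 250, 500, 750, 999):
    print(t, round(w_iter(t), 3))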
f(x) ≰ f(x̂) for any x ≠ x̂ ∈ X    (1.17)
Thereafter, all the constraint violations are added together to get the overall
constraint violation:

Ω(x(i)) = Σ_{j=1}^{n} ω_j(x(i))    (1.19)
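A small Python helper makes equation (1.19) concrete; the per-constraint
violation measures used below (the violated amount of each g_j(x) ≤ 0 and
the absolute value of each l_j(x) = 0) are a common convention and an
assumption here.

def overall_violation(x, ineq_constraints, eq_constraints):
    # Omega(x) of equation (1.19): the sum of the individual
    # constraint violations omega_j(x).
    total = 0.0
    for g in ineq_constraints:          # feasible when g_j(x) <= 0
        total += max(0.0, g(x))         # count only the violated amount
    for l in eq_constraints:            # feasible when l_j(x) = 0
        total += abs(l(x))
    return total

# Example with the illustrative constraints used for equation (1.1):
print(overall_violation([2.0, 2.0],
                        [lambda x: x[0] + x[1] - 3.0],    # g1(X) <= 0
                        [lambda x: x[0] - x[1]]))         # l1(X) = 0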