Probability

1. The document discusses probability and probability distributions. It defines probability as the chance of an event occurring and introduces key probability concepts such as sample space, events and the axioms of probability.

2. It distinguishes between discrete and continuous random variables and probability distributions. A discrete random variable has a countable number of possible values, while a continuous random variable can take any value within an interval.

3. Probability distributions, whether discrete or continuous, describe the probabilities associated with all possible outcomes of a random variable. The document provides examples of how to calculate probabilities using both discrete and continuous distributions.

Uploaded by masing4christ

© All Rights Reserved

1.

PROBABILITY

Introduction: Events whose outcomes cannot be predicted with certainty are called probabilistic events. Knowledge of probability theory is a prerequisite for the simulation of probabilistic events. Outcomes of probabilistic (random) events generally follow a special pattern which can be expressed in mathematical form, called a Probability Distribution.

Probability can therefore be defined as the chances of the occurrence of an event.

[Prob. ≡ Probability]

The probability of occurrence of an event = (total favourable events) / (total number of experiments)

Thus, in tossing a coin once, the prob. of a head or a tail is 0.5.

Tossing an unbiased coin a hundred times and obtaining 49 heads means that the probability of getting a head equals 49/100 = 0.49 and the prob. of getting a tail is 51/100 = 0.51 (1 – 0.49).

This is an experimental /empirical approach to probability.
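The empirical approach can be illustrated with a short simulation (a sketch; the function name, seed and trial counts are illustrative, not from the original text):

```python
import random

def empirical_prob_head(trials, seed=1):
    """Toss a fair coin `trials` times and return the observed fraction of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(trials))
    return heads / trials

p_head = empirical_prob_head(100)
p_tail = 1 - p_head
# With only 100 tosses the estimate hovers near, but rarely equals, 0.5
print(p_head, p_tail)
```

As the number of trials grows, the empirical estimate approaches the theoretical value 0.5.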

Terms in Probability

(1) Sample Point – Each possible outcome of an experiment is called a sample point. Thus all

favourable and unfavourable events in an experiment are called sample points.

The outcomes of the 100 tosses (trials) of an unbiased coin are sample points.

(2) Sample Space – A set of sample points is called a Sample Space.

Example: S = {1, 2, 3, 4, 5, 6} ≡ sample space for a die

S = {Head, Tail} ≡ sample space for a coin ≡ {H, T}.

A sample space may be:

Finite (a countable number of points, e.g. a die)

Countably infinite (e.g. the natural numbers)

Uncountably infinite (e.g. the interval 0 ≤ x ≤ 1)

Thus a sample space that is finite or countably infinite is called a discrete sample space, while an uncountably infinite one is called a non-discrete (continuous) sample space.

(3) Event

An event is a subset of a sample space, S (it is a set of possible outcomes). A set is a collection of items of a similar type; it consists of elements, usually numbers.

Example: The scores in the throwing of two dice form a set.

A simple event consists of a single point in a sample space, S.

NB: Random Variable ≡ Stochastic Variable ≡ Variate

Universal set – contains subsets. In throwing a die, a set which contains both the set of even numbers (subset E) and the set of odd numbers (subset O) is called a universal set.

U ≡ {E, O}, E = {2, 4, 6}, O = {1, 3, 5}

Set Operations (A and B are events in set, S)

(1) A∪B ≡ Union of A and B implies either A or B

(2) A∩B ≡ Intersection of A and B implies both A and B

(3) Ac ≡ Complement of A implies not A, same as (S – A)

(4) A – B = A∩Bc implies A but not B.

Examples A = {1, 2, 3}, B = {3, 2, 5, 6, 7}, C = {a, b, c}

(a) S = {1, 2, 3, 5, 6, 7, a, b, c} i.e. S = A∪B∪C

(b) A∪B = {1, 2, 3, 5, 6, 7}

(c) A∩B = {2, 3}

(d) Ac = {5, 6, 7, a, b, c}

[Fig. 5 Venn Diagram]
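Python's built-in set type mirrors these operations directly. A short sketch using the sets A, B and C from the example above, with the universe S taken as A∪B∪C as in part (a):

```python
A = {1, 2, 3}
B = {3, 2, 5, 6, 7}
C = {'a', 'b', 'c'}
S = A | B | C                 # (a) the universe S = A U B U C

print(A | B)                  # (b) union
print(A & B)                  # (c) intersection: {2, 3}
print(S - A)                  # (d) complement of A within S
print(A - B)                  # A but not B, same as A intersected with Bc
assert A - B == A & (S - B)
```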

Two random events A and B are statistically independent if and only if

P(A∩B) = P(A) P(B)

Thus their joint probability can be expressed as a simple product of their individual probabilities (probs).

Similarly, P(A/B) = P (A)

P(B/A) = P(B)

Where P(A/B) ≡ Probability of the occurrence of A if B has already occurred.

In other words, if A and B are independent, then the conditional probability of A given B is simply the individual prob. of A alone, and vice versa. This is called SWINE THEOREM.

Thus P(A∩B) = P(A) P(B) = P(A) P(B/A) = P(B) P(A/B)

Two events A and B are mutually exclusive if and only if

P(A∩B) = 0

If P(A∩B) = 0 and P(B) ≠ 0, then P(A/B) = 0; similarly, if P(A) ≠ 0, then P(B/A) = 0.

In other words, the probability of A happening given that B happens is nil, since A and B cannot both happen in the same situation. Likewise, the probability of B happening given that A happens is also nil. Thus A and B are mutually exclusive if the occurrence of A excludes the occurrence of B and vice versa.
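Both definitions can be checked numerically on a single die (a small sketch; the events chosen are illustrative): A = even number and B = less than 3 are independent, while A and C = odd number are mutually exclusive.

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}          # sample space for one die

def P(event):
    """Probability of an event under equally likely outcomes."""
    return Fraction(len(event), len(S))

A = {2, 4, 6}   # even number
B = {1, 2}      # less than 3
C = {1, 3, 5}   # odd number

assert P(A & B) == P(A) * P(B)  # independent: 1/6 = (1/2)(1/3)
assert P(A & C) == 0            # mutually exclusive: even and odd cannot co-occur
```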

AXIOMS OF PROBABILITY
If the sample space S is discrete, all its subsets correspond to events, and conversely. But if S is not discrete, only special (measurable) subsets correspond to events. To each event A in the class C of events a real number P(A) is associated. Then P is called a probability function and P(A) is the probability of the event A, if the following axioms are satisfied.

Axiom 1: For every event A in the class C, P(A) ≥ 0

Axiom 2: For the sure or certain event S in the class C, P(S) = 1

Axiom 3: For any number of mutually exclusive events A1, A2, … in the class C,

P(A1 ∪ A2 ∪ A3 ∪ …) = P(A1) + P(A2) + P(A3) + …

a. Complement Rule: The probability of the complement of A is P(Ac) = 1 – P(A)

Proof

[Fig. 7: Venn diagram (complement); Fig. 7a: Venn diagram (union)]

Since the space S is partitioned into A and Ac, and P(S) = 1, then

1 = P(A) + P(Ac), so P(Ac) = 1 – P(A)

b. Difference Rule: If the occurrence of A implies the occurrence of B, then P(A) ≤ P(B), and the difference between these probabilities is the probability that B occurs and A does not occur:

P(B and not A) = P(BAc) = P(B) – P(A)

Proof: Since every outcome in A is an outcome in B, A is a subset of B. Since B can be partitioned into A and (B but not A), then

P(B) = P(A) + P(BAc)

and P(B) – P(A) = P(BAc)

Conditional Probabilities

P(A∩B) ≡ P(A) P(B/A)

This means that the probability of occurrence of both A and B is equal to the product of

the probability that A occurs and the probability that B occurs subject to the condition

that A has already occurred. Thus P(B/A) is the conditional probability of B given A.

Example (1)

Find the probability that a single toss of a die will result in a number less than 3 (a) if no other information is given and (b) if it is given that the toss resulted in an even number.

Solution: Let B denote the event (less than 3), then the probability of the occurrence of B is

given by

(a) P(B) = P(1) + P(2) = 1/6 + 1/6 = 1/3

(b) Let A be the event (even number); then P(A) = 3/6 = 1/2

Also P(A∩B) = 1/6, since A = {2, 4, 6}, B = {1, 2} and A∩B = {2}

Then P(B/A) = P(A∩B)/P(A) = (1/6) ÷ (1/2) = (1/6) × (2/1) = 1/3

= the probability of B given that A has occurred.
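The same example can be verified by direct enumeration (a sketch; Fraction keeps the arithmetic exact):

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}                    # even number
B = {1, 2}                       # less than 3

def P(event):
    return Fraction(len(event), len(S))

p_b = P(B)                       # unconditional: 1/3
p_b_given_a = P(A & B) / P(A)    # conditional: (1/6) / (1/2) = 1/3
print(p_b, p_b_given_a)          # 1/3 1/3
```

Here the extra information (the toss was even) happens not to change the probability, because A and B are independent.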

Example 2: A mathematics teacher gave her class two tests. 25% of the class passed both tests

and 42% of the class passed the first test. What percentage of those that passed the first test also passed the second?

Solution:

P(Second/First) = P(First and Second) / P(First)

= 0.25/0.42 = 0.595

≈ 0.60 = 60%

Probability Distributions

A distribution function can be discrete, as in counting, or continuous, as in measurement. A continuous distribution can be reduced to a discrete distribution by grouping. Continuous distributions arise in practice and therefore have advantages in real-life usage over discrete ones. However, all principles applicable to discrete distributions are transferable to continuous distributions by substituting the integral sign for the summation sign. Examples of discrete distributions include throwing a die (dice), tossing a coin and drawing a card from a pack of cards and recording the results.

1.1 DISCRETE RANDOM VARIABLE

A random variable may be discrete or continuous. (Pg.33)

A random variable X and its corresponding distribution are said to be discrete if X has the following properties:

(1) The number of values for which X has a probability different from 0 is finite or at most countably infinite.

(2) Each finite interval on the real line contains at most finitely many of those values. If an interval a ≤ X ≤ b does not contain such a value, then P(a < X ≤ b) = 0, where a and b are the lower and upper limits of the stochastic variable X.

If x1, x2, x3, … are values for which X has positive corresponding probabilities p1, p2, p3, …, then the function

f(x) = Prob.(X = xj) = pj, j = 1, 2, …, n ……….(1)

= 0 otherwise

with the conditions that f(x) ≥ 0 and ∑ f(x) = 1 ……….(2)

f(x) is the Probability Density Function (PDF) of X and determines the distribution of the random variable X.

Prob.(X = xj) = pj means that the distribution of the random variable X is given by f(x), which is equal to pj for X = xj.
Example: Throwing two dice and adding the scores. Since there are 36 (= 6²) equally likely outcomes, the probability of getting a score of 2 is given by:

f(2) = Prob.(2) = P(2) = 1/36 [only (1+1) gives 2]

Similarly f(4) = P(4) = 3/36 = 1/12 [(1+3), (3+1), (2+2)]

A PDF may also take a closed form; for example, the binomial distribution

f(x) = P(X = x) = [n!/(x!(n – x)!)] P^x (1 – P)^(n–x), x = 0, 1, …, n and 0 < P < 1 …. (3)

Usually a PDF is in the form of a table of probabilities associated with the values of the corresponding random variable. However, it is sometimes expressed in closed form, as in equation 3.
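The two-dice probabilities above can be reproduced by enumerating all 36 outcomes (a sketch):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely (die1, die2) outcomes, reduced to their sums
sums = [a + b for a, b in product(range(1, 7), repeat=2)]

def f(x):
    """PDF of the two-dice score: f(x) = P(X = x)."""
    return Fraction(sums.count(x), len(sums))

print(f(2))   # 1/36, only (1+1)
print(f(4))   # 1/12, from (1+3), (3+1), (2+2)
assert sum(f(x) for x in range(2, 13)) == 1   # condition (2)
```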

For continuous distributions, the use of a probability density (PDF) in place of a probability is important and necessary. If a distribution is continuous, the probability of an event having an exact value, say 5.00, is zero: there are infinitely many possible results, and the chance of exactly 5 is 1/∞ = 0. However, it is possible to consider a probability density in the neighborhood of 5, with units of probability for occurrence within an interval.

If Pd(X) represents the probability density at point X, which when multiplied by some interval of X converts to a probability, then Pd(X) depends on X and is a function of X,

i.e. Pd(X) = f(x) ………(4)

Then the mean or expected value is given by

μ = E(X) = ∫ X Pd(X) dX / ∫ Pd(X) dX ………(5)

And the variance is given by

σ² = ∫ (X − μ)² Pd(X) dX / ∫ Pd(X) dX ……….(6)

By analogy, for a discrete distribution the mean or expected value is given by

μ = E(X) = ∑ X P(X) = (∑ X)/N (summing over the N observations) …………(7)

And the variance is given by

σ² = [∑ (X − μ)²]/(N − 1) ………(8)

The variance is a measure of the spread of a distribution from the mean. A small variance

means that the distribution is peaked while a large variance indicates a spread out from the

mean. Standard deviation (SD) is the Square root of the Variance. SD is also a measure of the

spread of a distribution.

A continuous distribution function is so expressed that the area under the probability density function (PDF) is unity, and the cumulative probability is unity in the limit:

∫ from −∞ to ∞ of Pd(X) dX = 1 ………(9)

TABLE 3: Frequency Distribution with Dice

Sum |     ONE DIE      |          TWO DICE           |   THREE DICE
 X  |  Ways    P(x)    |  Ways    P(x)    Cum. P(x)  |  Ways    P(x)
----+------------------+-----------------------------+----------------
  1 |   1     0.1667   |                             |
  2 |   1     0.1667   |   1     0.0278    0.0278    |
  3 |   1     0.1667   |   2     0.0556    0.0833    |    1    0.0046
  4 |   1     0.1667   |   3     0.0833    0.1667    |    3    0.0139
  5 |   1     0.1667   |   4     0.1111    0.2778    |    6    0.0278
  6 |   1     0.1667   |   5     0.1389    0.4167    |   10    0.0463
  7 |  (6 = 6¹)        |   6     0.1667    0.5833    |   15    0.0694
  8 |                  |   5     0.1389    0.7222    |   21    0.0972
  9 |                  |   4     0.1111    0.8333    |   25    0.1157
 10 |                  |   3     0.0833    0.9167    |   27    0.1250
 11 |                  |   2     0.0556    0.9722    |   27    0.1250
 12 |                  |   1     0.0278    1.0000    |   25    0.1157
 13 |                  |  (36 = 6²)                  |   21    0.0972
 14 |                  |                             |   15    0.0694
 15 |                  |                             |   10    0.0463
 16 |                  |                             |    6    0.0278
 17 |                  |                             |    3    0.0139
 18 |                  |                             |    1    0.0046
    |                  |                             |  (216 = 6³)

("Ways" = number of ways of appearance of X; P(x) = probability of X.)

Thus the average (mean) is given by

μ1 = (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5 (for throwing one die)

or μ2 = [1(2) + 2(3) + 3(4) + …. + 3(10) + 2(11) + 1(12)]/36 = 7.0 (for throwing two dice)

or μ3 = [1(3) + 3(4) + 6(5) + …. + 6(16) + 3(17) + 1(18)]/216 = 10.5 (for throwing three dice)

Alternatively, from Table 3,

μ2 = 2(0.0278) + 3(0.0556) + … + 11(0.0556) + 12(0.0278) = 7.0

Expected value of X = sum of (value of X) × (prob. of X occurring)

Example 3

One die is thrown and the player receives twice the face value if odd and one-half the face value

if even. Find the expectation.

Solution: E(X) = ∑ X P(X)

E(X) = 2(1)(1/6) + (1/2)(2)(1/6) + 2(3)(1/6) + (1/2)(4)(1/6) + 2(5)(1/6) + (1/2)(6)(1/6)

= 4
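Example 3 can be checked by summing payoff × probability over the six faces (a sketch; `payoff` is an illustrative helper name):

```python
from fractions import Fraction

def payoff(face):
    """Twice the face value if odd, one-half the face value if even."""
    return 2 * face if face % 2 else Fraction(face, 2)

E = sum(Fraction(1, 6) * payoff(face) for face in range(1, 7))
print(E)  # 4
```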

Example 4

The life of a tool in hours before breakage is given by the following data obtained for 50 tools.
Tabulate the frequency distribution of the tool life and find the average tool life and its variance.

Table 4: Tool life (Hours)


28.1 31.6 29.9 31.4 31.6
29.9 33.7 30.4 33.9 31.5
30.9 26.5 33.6 31.7 36.4
32.6 30.9 29.3 33.7 32.8
27.7 30.5 30.8 34.4 33.5
29.1 37.8 30.4 33.3 30.5
30.5 32.0 26.5 29.6 31.4
29.6 29.3 29.8 34.4 34.4
30.4 31.0 29.8 38.8 38.9
27.7 26.7 31.6 28.3 26.5

Solution

Table 5: Frequency Distribution of 50 tools


Tool life Mid Point Frequency Prob.
(hrs) X f P(X) of X
26.0 – 26.9 26.45 4 0.08
27.0 – 27.9 27.45 2 0.04
28.0 – 28.9 28.45 2 0.04
29.0 – 29.9 29.45 9 0.18
30.0 – 30.9 30.45 9 0.18
31.0 – 31.9 31.45 8 0.16
32.0 – 32.9 32.45 3 0.06
33.0 – 33.9 33.45 6 0.12
34.0 – 34.9 34.45 3 0.06
35.0 – 35.9 35.45 0 0.00
36.0 – 36.9 36.45 1 0.02
37.0 – 37.9 37.45 1 0.02
38.0 – 38.9 38.45 2 0.04
50

NB: The data of table 4 are grouped and condensed in table 5.

Using the frequency distribution of Table 5, the average tool life is given by

μ = [4(26.45) + 2(27.45) + 2(28.45) + … + 1(37.45) + 2(38.45)]/50 (7a)

= 31.21 hrs

Using equation 8, the variance is given by

σ² = [∑ f(x − μ)²]/(N − 1)

= [4(26.45 − 31.21)² + 2(27.45 − 31.21)² + …… + 1(37.45 − 31.21)² + 2(38.45 − 31.21)²]/(50 − 1)

= 8.27
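The grouped mean and variance can be recomputed from the midpoints and frequencies of Table 5 (a sketch; note the sample variance divides by N − 1):

```python
midpoints = [26.45, 27.45, 28.45, 29.45, 30.45, 31.45, 32.45,
             33.45, 34.45, 35.45, 36.45, 37.45, 38.45]
freqs     = [4, 2, 2, 9, 9, 8, 3, 6, 3, 0, 1, 1, 2]

N = sum(freqs)                                           # 50 tools
mean = sum(f * x for f, x in zip(freqs, midpoints)) / N
var  = sum(f * (x - mean) ** 2
           for f, x in zip(freqs, midpoints)) / (N - 1)
print(round(mean, 2), round(var, 2))  # 31.21 8.27
```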

Cumulative – Distribution Function

A frequency distribution function can be shown as a plot of cumulative values known as a

Cumulative-distribution function. The plots are useful in certain applications.


It is emphasized that any frequency function may be changed to a probability function by scaling it so that the area under the function is equal to unity. The cumulative probability distribution function is, accordingly, unity in the limit.

Cumulative functions may be used to find the probability of a random variable X lying between two limits A and B. Thus if A < B, the probability is given by

P(A < X ≤ B) = ∑ P(Xi) [i = A+1 to B] = ∑ P(Xi) [i = 0 to B] − ∑ P(Xi) [i = 0 to A] ……….. (10)

Example 5: Find the probability of throwing a 4, 5, 6 or 7 with two dice. (The range is 3 < X ≤ 7.)

Solution

Using equation 10 and Table 3,

P(3 < X ≤ 7) = ∑ P(X) [X = 0 to 7] − ∑ P(X) [X = 0 to 3]

= 0.5833 – 0.0833

= 0.5000

Alternatively (using Table 3),

P(3 < X ≤ 7) = P(4) + P(5) + P(6) + P(7)

= 0.0833 + 0.1111 + 0.1389 + 0.1667 = 0.5000

Rectangular (Uniform) Distribution


Suppose a distribution is uniform over a range 0 to a; then the prob. density is given by

Pd(X) = 1/a, 0 ≤ X ≤ a
      = 0 elsewhere ….11

[Fig. 9: Uniform Distribution]

Notice that the condition for equation 9 is met:

∫ from −∞ to ∞ of Pd(X) dX = ∫ from 0 to a of (1/a) dX = 1 ….12

The cumulative probability of X or less is given by

Pc(X) = ∫ from 0 to X of (1/a) dX = X/a, 0 ≤ X ≤ a …13

[Fig. 10: Cumulative Distribution]

The rectangular (uniform) distribution can be used to simulate variables from almost any kind of distribution due to its simplicity.
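One standard way the uniform distribution simulates other distributions is inverse-transform sampling: set the cumulative probability equal to a uniform variate U and solve for X. A sketch for the exponential distribution of equation 21 (the function name is illustrative):

```python
import math
import random

def exponential_sample(a, rng):
    """Invert Pc(X) = 1 - e^(-aX): X = -ln(1 - U)/a with U uniform on [0, 1)."""
    u = rng.random()
    return -math.log(1.0 - u) / a

rng = random.Random(42)
a = 2.0
samples = [exponential_sample(a, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean)  # close to the theoretical mean 1/a = 0.5
```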

Distributions Functions: Other distributions include

1. Normal distribution/Log normal distribution.

2. Gamma distribution

3. Weibull distribution

4. Poisson distribution/Exponential distribution

5. Binomial distribution.

Normal Distribution

Normal distribution has a very wide range of applications in statistics, including the testing of
hypothesis. The probability density function is given by

Pd(X) = [1/(σ√(2π))] e^(−(x − μ)²/(2σ²)) ….. (14)

where μ = mean and σ² = variance.

In standard form, with Z = (x − μ)/σ,

Pd(Z) = [1/√(2π)] e^(−Z²/2) …..(15)

The Lognormal distribution is used to analyze data that has been transformed logarithmically.

Gamma Distribution

Gamma distribution can be used to study variables that may have a skewed distribution. It is commonly used in queuing analysis.

The standard gamma probability density function is given by

Pd(X) = X^(α−1) e^(−X) / Γ(α) …..(16)

When alpha α = 1, the gamma distribution becomes exponential with λ = 1/β,

β = beta parameter.

The gamma distribution is also called the Erlang distribution when alpha is a positive integer.

Weibull Distribution
Weibull distribution is used in reliability analysis, such as the calculation of the mean time to failure of a device. The probability density function is given by

Pd(X) = [αX^(α−1)/β^α] e^(−(X/β)^α) …..(17)

When α = 1, the Weibull distribution becomes exponential with λ = 1/β.

The Weibull cumulative distribution function is given by

Pc(X) = 1 − e^(−(x/β)^α) ……(17a)

Poisson Distribution

The Poisson distribution is applicable only when the events occur completely at random and the number that occurs is small compared to the potential number that could occur. The Poisson distribution is given by

P(X) = e^(−μ) μ^X / X! (18)

where P(X) = probability of exactly X occurrences, e = Naperian constant (2.71828) and μ = expected or average number of occurrences. The mean and variance of the Poisson function are both μ.

Example 8.4 (P175): At 3 pm telephone calls arrive at the company switchboard at the rate of 120/hr. Find the probability that exactly 0, 1, and 2 calls arrive between 3.00 and 3.01 pm.

Solution: Take 1 min. as the period of time. Then

μ = 120/60 = 2 calls per minute expected

Using equation 18 (i.e. P(X) = e^(−μ) μ^X / X!), we have

P(0) = e^(−2) 2⁰/0! = 0.1353

P(1) = e^(−2) 2¹/1! = 0.2707

P(2) = e^(−2) 2²/2! = 0.2707
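Equation 18 can be evaluated directly (a sketch reproducing the switchboard example):

```python
import math

def poisson(x, mu):
    """P(X) = e^(-mu) * mu^X / X!  (equation 18)."""
    return math.exp(-mu) * mu ** x / math.factorial(x)

mu = 120 / 60   # 2 calls per minute expected
for x in range(3):
    print(x, round(poisson(x, mu), 4))
# 0 0.1353
# 1 0.2707
# 2 0.2707
```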
Exponential Distribution

The probability density function for the exponential distribution is given by

Pd(X) = a e^(−aX), X ≥ 0
      = 0, X < 0 …..(19)

The condition for equation 9 is met:

∫ from 0 to ∞ of a e^(−aX) dX = −[e^(−aX)] from 0 to ∞ = 1 ……(20)

The cumulative probability for X or less is given by

Pc(X) = ∫ from 0 to X of a e^(−aX) dX = 1 − e^(−aX) …..(21)

Thus the mean of the exponential density distribution is 1/a and the variance is 1/a².

There is a connection between the Poisson distribution and the exponential distribution. For example, in queuing problems, if the arrival rate, in arrivals per unit time period, follows a Poisson distribution with λ average arrivals per period, then by equation 18,

P(X) = e^(−λ) λ^X / X! …..(22)

It can be shown that the time between arrivals has an exponential distribution with the following probability density:

Pd(Ta) = λe^(−λTa) …..(23)

where Ta is the time between arrivals measured in periods T. The cumulative probability for a time between arrivals of Ta or less is given by

Pc(Ta) = 1 − e^(−λTa) …..(24)

Binomial Distribution

Binomial distribution applies to events that can take on only two values, such as head or tail for a tossed coin or accept or reject for an object. The binomial distribution is given by

P(X) = [N!/(X!(N − X)!)] P^X (1 − P)^(N−X) ….(25)

Where P(X) = Probability of exactly X occurrences in N trials and P = Probability of success in

one trial.

In the case of an unbiased coin, P and (1 − P) are both 0.5, but in most problems P will not be 0.5. The binomial distribution is symmetrical if and only if P = 0.5. The mean and variance of a binomial distribution are, respectively, μ = NP and σ² = NP(1 − P).

The binomial distribution assumes that the trials are independent.
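Equation 25 and the stated mean and variance can be checked numerically (a sketch; N = 10 trials with P = 0.5 is an arbitrary choice):

```python
from math import comb

def binomial(x, n, p):
    """P(X) = [N!/(X!(N-X)!)] P^X (1-P)^(N-X)  (equation 25)."""
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

n, p = 10, 0.5
mean = sum(x * binomial(x, n, p) for x in range(n + 1))
var  = sum((x - mean) ** 2 * binomial(x, n, p) for x in range(n + 1))
print(mean, var)   # NP = 5.0 and NP(1 - P) = 2.5
```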

1.3. RANDOM NUMBERS (Random Variables)

A random variable (stochastic variable, variate) is a function X that assigns to each possible outcome in an experiment a real number. If X may assume any value in some given interval (bounded or unbounded), it is called a CONTINUOUS RANDOM VARIABLE. If it assumes only a number of separated values, it is called a DISCRETE RANDOM VARIABLE.

Methods of Generating Random Numbers (Variables) (V.P. Singh 2009)

The requirements for generating random numbers are such that each successive number in a sequence of random numbers must have an equal probability of taking on any one of the possible values and must be statistically independent of the other numbers in the sequence (Hillier & Lieberman, 1970).

An acceptable method of generating random numbers must yield sequences which satisfy the

following conditions:
(a) Uniformly distributed.

(b) Statistically independent.

(c) Reproducible.

(d) Non-repeating for any desired length (Period).

(e) Capable of generating random numbers at high rates of speed while requiring a minimum of computer memory capacity.

The methods used to generate random numbers include:-

(a) Congruential method


(b) Midsquare method
(c) Manual Method
(d) Library (Rand) table

(a) The Congruential Method: Generates random number from the last one obtained, given

an initial number called the SEED.

This method has 3 variants namely the Mixed, multiplicative and the additive congruential

methods.

(i) The mixed congruential method has the following recurrence relation:

r_(n+1) = (a r_n + c) (modulo M)

where a, c and M are positive integers (a < M, c < M).

If M = 2^b, a = 1, 5, 9, 13, … and c = 1, 3, 5, 7, 9, …; b = word size in bits (binary computer)

If M = 10^d, a = 1, 21, 41, 61, … and c = 1, 3, 5, 7, 9, …; d = word size in digits (decimal computer)

(ii) Multiplicative Congruential Method

The recurrence relation is given by

r_(n+1) = a r_n (modulo M)

(iii) The Additive Congruential Method

The recursive formula is given by

r_(n+1) = r_n + r_(n−k) (modulo M)

If k = 1, the popular Fibonacci sequence is generated.
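A minimal sketch of the mixed congruential method (the constants a = 5, c = 3, M = 16 are illustrative; a follows the rule a = 1, 5, 9, 13, … and c the rule c = 1, 3, 5, … for M = 2^b, so the generator attains its full period of M before repeating):

```python
def mixed_congruential(seed, a, c, M, n):
    """Generate n numbers by r_(n+1) = (a*r_n + c) mod M, starting from the seed."""
    r, out = seed, []
    for _ in range(n):
        r = (a * r + c) % M
        out.append(r)
    return out

seq = mixed_congruential(seed=7, a=5, c=3, M=16, n=16)
print(seq)
assert sorted(seq) == list(range(16))   # full period: every residue 0..15 appears once
```

Dividing each r_n by M gives pseudo-random fractions on [0, 1), which feed the simulation methods above.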

(b) The Midsquare Method.

This is used to generate a four-digit number from the middle of the square of the preceding four-digit random number.

(c) The Manual Method

This method involves the throwing of dice or tossing of coins and recording the outcome.

(d) The Library (RAND) Table

RAND Corporation publishes tables containing random numbers generated by

one of the previous methods.

2. QUEUING THEORY

A queue is a waiting line, but queuing is used broadly to cover a variety of problems, usually for economic balance and optimization, involving waiting and delay in serving people or servicing machines and equipment. Queuing also covers problems such as the optimum number of long-distance lines required between two cities or the optimum number of repairers and equipment required to keep an assembly line in operation. There is now strong competition among service industries to gain and retain customers, and as such companies must consider internal queuing for the economic balance of manufacturing and operational efficiency.
Queuing theory is the mathematical study of waiting lines or queues. In queuing theory a model is constructed so that queue lengths and waiting times can be predicted. Generally, queuing theory is a branch of operations research because the results are used in making business decisions about resources required in the provision of services. Queuing theory takes its origin from the research credited to Agner Krarup Erlang, who created models to describe the Copenhagen telephone exchange. The idea has been extended to applications such as telecommunication, traffic engineering, computing and the design of factories, shops, offices and hospitals.
Queue Networks: These are systems in which single queues are connected by a routing
network. The queuing system can be classified with respect to input source, queue and
service facility.
The input source or population is classified as infinite or finite which is chiefly the size of
the population relative to the number in the queue and being serviced. If the
characteristics of the input source are changed by the number of withdrawals, the
population is considered finite and the problem is solved with the changes in the
population taken into account.
The queue itself can be classified as infinite if it is allowed to grow to any size or finite if
it is characterized by a maximum permissible size. Queues can also be classified as single
or multiple. A one-chair barbershop is a good example of a single queue. The multiple-
queue case is exemplified by a work centre having two input queues from different
operations.
In a typical network, the main elements in the system are the input source and the
service system. The latter is characterized by queue discipline and service facility.
A machine needing servicing can follow three possible paths:
i. Return to the population without servicing (balking).
ii. Join the queue but return to population without servicing (renege).
iii. Return to the population after being serviced.

In the queue network, the servers are represented by circles, queues by a series of rectangles
and the routing network by arrows.

Fig.11: Queue Network

Markovian Systems

Markov Chain: This is a stochastic process with the Markov property and refers to the sequence of random variables which the process moves through. The Markov property defines serial dependence between adjacent periods only. It can thus be used to describe systems that follow a chain of linked events, where what happens next depends only on the current state of the system (memorylessness). Usually the term "Markov Chain" is reserved for a process with a discrete set of times [Discrete-Time Markov Chain (DTMC)]. The Markov Chain is credited to Andrey Markov.
A discrete-time random process involves a system which is in a certain state at each step,
with the state changing randomly between steps. The steps are often thought of as
moments of time, but they can equally well refer to physical distance or any other
discrete measurement. The Markov property states that the conditional probability
distribution for the system at the next step and in fact at all future steps depends only
on the current state of the system and not additionally on the state of the system at
previous steps. It is generally impossible to predict with certainty the state of a Markov
chain at a given point in the future since the system changes randomly. However, the
statistical properties of the system’s future can be predicted. In many applications, it is
these statistical properties that are important.
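A two-state chain makes the memorylessness concrete. The sketch below simulates a hypothetical weather chain and checks that the long-run fraction of time in each state matches the stationary distribution (all transition probabilities here are illustrative):

```python
import random

# Transition probabilities: the next state depends only on the current one
P = {'sunny': {'sunny': 0.9, 'rainy': 0.1},
     'rainy': {'sunny': 0.5, 'rainy': 0.5}}

def step(state, rng):
    """Move one step of the chain by sampling from the current state's row."""
    r, acc = rng.random(), 0.0
    for nxt, p in P[state].items():
        acc += p
        if r < acc:
            return nxt
    return nxt

rng = random.Random(0)
state, sunny, n = 'sunny', 0, 100_000
for _ in range(n):
    state = step(state, rng)
    sunny += (state == 'sunny')
print(sunny / n)   # long-run fraction near the stationary value 5/6 = 0.833
```

Solving the balance equations for this chain gives the stationary probabilities (5/6, 1/6), which is exactly the kind of predictable statistical property the text refers to.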

3. STOCHASTIC PROCESSES

Stochastic Process (Random Process) is the collection of random variables representing the
evolution of some system of random values over time. It is the probabilistic counterpart of a
deterministic process (system). In a stochastic process there is some indeterminacy. There are
several or infinitely many directions in which the process may evolve even if the initial condition
(starting point) is known. Deterministic process can only evolve in one direction or way. For
instance, the solutions to a differential equation follow a definite or particular way (direction).
For discrete time, as opposed to continuous time, a stochastic process involves a sequence of
random variables and the time series associated with these random variables. An example of
this process is the Markov Chain, otherwise known as the Discrete-Time Markov Chain (DTMC).
One approach treats stochastic processes as functions of one or several deterministic arguments (inputs), usually time, whose values (outputs) are random variables: non-deterministic single quantities associated with certain probability distributions. Random variables corresponding to various times or points may be completely different. It is only required that these different quantities take values in the same space.
Processes modeled as stochastic time series include stock market and exchange rate
fluctuations, speech, audio and video signals; medical data such as patient’s EKG, EEG, blood
pressure/temperature, and random movement such as Brownian motion or random walks.
Examples of random fields include static images, random terrains (landscapes), wind waves or
composition variations of a heterogeneous material. Stochastic process takes its origin in the
19th century when it was used to aid the understanding of financial market and Brownian
motion. Thorvald N. Thiele was the first person to describe the mathematics behind Brownian
motion in his paper on the method of least squares published in 1880.
Stochastic process can be classified according to its index set (time) and state space as follows:
 Discrete time and discrete state space
 Continuous time and continuous state space
 Discrete time and continuous state space
 Continuous time and discrete state space.

4. APPLICATIONS

4.1. Inventory:
This is a stock of physical assets having value, which can be material, money or labour. Material
inventory can be raw material; tools and accessories including spare parts used in production,
unfinished or in-process inventory and finished products. Inventories are maintained at a cost to
gain advantages having monetary value, such as avoiding a shutdown due to a temporary
absence of supplies or permitting uniform production for a reasonable supply or demand.
Inventory models which are usually mathematical (sometimes symbols) may be deterministic,
for which all information and values are treated as being definite or probabilistic (stochastic), for
which uncertainty in some values is recognized and considered. The classes of inventory in the
manufacture of steel are shown in fig. 11.1 while the general inventory system is illustrated in
fig. 11.2.
In the general inventory system, the ordinates represent the number of units available in stock against time on the abscissa. Consider a quantity Q units received at the beginning of the period T1. The use during this period, U1, is subtracted from the stock to give a lower stock for the second period, etc. At the end of the fourth period T4 the level of inventory reaches the reorder level R. At that time an order equal to Q units is placed, which is eventually received at the beginning of the period. The procurement time or lead time Ti is the time, in periods, required to receive an order after it is placed. During the procurement time the level of stock may go to zero, creating a shortage for which a penalty can arise. The objective is to minimize the total cost of the system, which is the minimum cost per period (not per cycle). Some costs are connected with the cycle, such as placing and receiving an order, whereas other costs are connected with a period, such as storage costs. Confusion between a period and a cycle is the most common source of error for a beginner.

Economic Lot Size


The following assumptions are made for the simplest case:
 All items are delivered simultaneously at the time the stock becomes zero
 The cost of placing and receiving an order is constant and is independent of the lot size
 Usage is at a constant rate
 No safety stock is provided
 No shortage is allowed
 The warehousing cost is calculated on a continuous basis and at any instant of time is
proportional to the number of items in the inventory.

Fig. 11.3 illustrates the schematic diagram for economic lot size. Let C 0 be the cost of
placing and receiving an order, Ch the storage or holding cost per item per period, M = Q
the maximum inventory where Q is the lot order, and U the rate of use in items per
period. Consider a time element T units of time from zero time lasting dT units of time.
The inventory at any time is a straight-line function of time and is Q at T = 0 and 0 at T =
N where N is the duration of the cycle. Then

Inventory = −(Q/N)T + Q
The number of periods of time N that the lot Q will last is given by
N = items / (items per period) = Q/U
The inventory at any time is given by
Inventory = -UT + Q
The total cycle cost is given by

Ccycle = C0 + ∫₀^(Q/U) Ch(−UT + Q) dT = C0 + Ch(−Q²/(2U) + Q²/U)

       = C0 + ChQ²/(2U)

The function to be optimized is the cost per period Ct and with


cycles/period = (items/period) / (items/cycle) = U/Q

The cost per period becomes

Ct = (naira/cycle)(cycles/period) = (C0 + ChQ²/(2U))(U/Q) = C0U/Q + ChQ/2

Differentiating and setting to zero, we obtain


dCt/dQ = −C0U/Q² + Ch/2 = 0

Finally,
Qopt = √(2C0U/Ch)

The lead time Ti multiplied by the rate of use per period U gives the reorder level, R. Thus

R = Ti U

By placing Qopt.in Ct, the optimum period cost is obtained,

Ct = C0U/√(2C0U/Ch) + (Ch/2)√(2C0U/Ch) = √(2ChC0U)

Example: The demand for an item is 100 per day. The cost of placing and receiving an order is #50
with an infinite delivery rate. The storage cost is #10 per item per year based on the actual
inventory at any time. No shortage is allowed. Delivery takes 8 days after an order is placed. Find
the optimum stock, the optimum period cost and the reorder level.

Solution:
i.The storage charge per item per day is 10/365 = 0.0274. Thus the optimum stock is
Qopt = √(2(50)(100)/0.0274) = 604

ii. The optimum period cost is given by

Ct = √(2(0.0274)(50)(100)) = 16.55/day
iii. The reorder level is given by
R = 8(100) = 800
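The example can be checked numerically; a minimal Python sketch of the lot-size formulas (the function and variable names are ours, not from the text):

```python
import math

def eoq(c0, ch, u):
    """Economic lot size and minimum period cost for the simplest model.

    c0: cost of placing and receiving an order
    ch: holding cost per item per period
    u:  usage rate in items per period
    """
    q_opt = math.sqrt(2 * c0 * u / ch)   # Qopt = sqrt(2*C0*U/Ch)
    ct_opt = math.sqrt(2 * ch * c0 * u)  # Ct   = sqrt(2*Ch*C0*U)
    return q_opt, ct_opt

# Example data: demand 100/day, order cost #50,
# storage #10 per item per year (10/365 per day), 8-day lead time
q_opt, ct_opt = eoq(50, 10 / 365, 100)
reorder = 8 * 100                        # R = Ti * U
```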

Safety Stock: This is added to the inventory to ensure that a shortage in an item does not occur.
The inventory model becomes (fig.11.4)
Inventory = -UT + Q + S (S = Safety Stock)
The cost per cycle (following the steps established previously) is
Ccycle = C0 + Ch(Q²/(2U) + SQ/U)
The cost per period becomes
Ct = C0U/Q + ChQ/2 + ChS
The optimum order for minimum cost becomes
Qopt = √(2C0U/Ch)
The optimum period cost becomes
Ct = √(2ChC0U) + ChS, which reduces to √(2ChC0U) when S = 0
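Carrying a safety stock leaves the optimum lot unchanged and simply adds a constant ChS to the period cost; a small sketch (identifiers are illustrative):

```python
import math

def eoq_with_safety(c0, ch, u, s):
    """Optimum lot and period cost when a safety stock s is carried.

    Q_opt is unchanged; the safety stock adds Ch*S to the period cost.
    """
    q_opt = math.sqrt(2 * c0 * u / ch)
    ct_opt = math.sqrt(2 * ch * c0 * u) + ch * s
    return q_opt, ct_opt

# Previous example's data with a hypothetical safety stock of 50 items
q_s, ct_s = eoq_with_safety(50, 10 / 365, 100, 50)
```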
Finite Rate Delivery
Finite rate delivery model is associated with manufacturing. The schematic diagram (fig.11.5)
illustrates the model concept. C0 is a setup cost and D item per unit of time is a manufacturing
rate. The use rate in items per unit of time is U. The value for Q is the total number of items
produced and used in the cycle of (T1 + T2) periods. The safety stock is S items and M is the
maximum inventory. In time T1 the inventory builds up by an amount (M – S) at a rate (D – U).
Thus
M – S = T1(D – U) and Q = T1D (since Q is made at rate D in time T1)
Eliminating T1 gives
M – S = Q(D – U)/D
But Q = UN (Q is used in N period at U rate), then
M – S = UN(D – U)/D
In this model (M – S) is linear with time on both sides of the maximum inventory. The average
inventory for the total N periods is half (M – S) plus the safety stock.
Average inventory = (UN/2)(D – U)/D + S
The average inventory is held at a cost Ch per item per period. The cost per period is
Ct = [C0 + ((UN/2)(D – U)/D + S)ChN] (naira/cycle)(1 cycle/N periods)
   = C0/N + Ch(UN/2)(D – U)/D + ChS
Differentiating, we obtain,
dCt/dN = −C0/N² + (ChU/2)(D – U)/D = 0

N = √(2C0D / (ChU(D – U)))
Thus the optimum stock is
Qopt = UN = √(2C0UD / (Ch(D – U)))    (since Q = UN)
And the minimum period cost is
Ct = √(2ChC0U(D – U)/D) + ChS

Example: The setup cost for a small operation is #600 and the storage cost is #0.40 per item per
day. Usage is uniform at 300 items per day and the production rate is 500 items per day. A safety
stock of 100 units is required. Find the optimum production lot size that gives minimum cost.
Solution:
i. The optimum lot size is given by
Qopt = √(2(600)(300)(500) / ((0.40)(500 − 300))) = 1500
ii. The minimum period cost is given by
Ct = √(2(0.40)(600)(300)(500 − 300)/500) + 0.40(100) = 280/day
iii. The production time is given by
T1 = Q/D = 1500/500 = 3 days
iv. The cycle time is given by
N = Q/U = 1500/300 = 5 days
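The four results above can be reproduced with a short script; a sketch of the finite-rate formulas derived earlier (identifiers are ours):

```python
import math

def finite_rate_lot(c0, ch, u, d, s=0):
    """Optimum production lot and minimum period cost for a finite delivery rate.

    c0: setup cost, ch: holding cost per item per period,
    u: usage rate, d: production rate (d > u), s: safety stock.
    """
    q_opt = math.sqrt(2 * c0 * u * d / (ch * (d - u)))
    ct_min = math.sqrt(2 * ch * c0 * u * (d - u) / d) + ch * s
    return q_opt, ct_min

# Example data: C0 = 600, Ch = 0.40/item/day, U = 300/day, D = 500/day, S = 100
q_opt, ct_min = finite_rate_lot(600, 0.40, 300, 500, 100)
t_production = q_opt / 500   # time to produce the lot, Q/D
t_cycle = q_opt / 300        # cycle length, Q/U
```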

Example: (Setting Up Inventory Problem)


The demand of an item is uniform at 1000 items per day. Delivery when placing an order is
immediate. The cost for placing and receiving an order size is given by
C0 = 13000 + 5Q^0.5
It costs #0.002/day for each item in storage. In addition, there is an antispoilage cost while in
storage which is proportional to the age of the inventory multiplied by the number of units in
the inventory at that time. If the inventory is 20 days old and there are 8000 items in stock, the
antispoilage cost would be (0.000015)(20)(8000) #/day, where 0.000015 is a constant.
Find the optimum lot size.

Solution:
Let a cycle last N days. The lot size will then be Q = 1000N and
C0 = 13000 + 5(1000N)^0.5 = 13000 + 158.11N^0.5
The inventory declines linearly from 1000N to zero in N days. The inventory at any time is
Inventory = 1000(N – T), where T is in days.

The cost for antispoilage at that time is given by

(0.000015)T[1000(N – T)] = 0.015(NT – T²) #/day

The cost per cycle = 13000 + 158.11N^0.5 + ∫₀^N 0.002[1000(N – T)]dT + ∫₀^N 0.015(NT – T²)dT

Integrating, we obtain the cost per cycle
13000 + 158.11N^0.5 + N² + 0.0025N³
and, dividing by N, the cost per period
Ct = 13000N^−1 + 158.11N^−0.5 + N + 0.0025N²

By repeated trials, the minimum occurs at N = 96.4, for which the optimum lot size is
Qopt = 1000N = 96,400 items.
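The "repeated trials" search for the minimizing N can be automated; a minimal sketch using a 0.1-day grid:

```python
# Cost per period Ct(N) = 13000/N + 158.11*N**-0.5 + N + 0.0025*N**2
ct = lambda n: 13000 / n + 158.11 * n ** -0.5 + n + 0.0025 * n ** 2

# Grid search over N = 10.0, 10.1, ..., 299.9 days
best_n = min((n / 10 for n in range(100, 3000)), key=ct)
q_opt = 1000 * best_n
```

The cost curve is very flat near the optimum, so a coarse grid is adequate here.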

Probabilistic Models
Example: (Optimum Stock By Incremental Analysis)
The cumulative probability for the sales of certain item is as shown in the table:

Cumulative sales Cumulative probability of this Cumulative probability of this


many sales or less many sales or more
0 0.00 1.00
1000 0.03 0.97
2000 0.09 0.91
3000 0.18 0.82
4000 0.29 0.71
5000 0.44 0.56
6000 0.60 0.40
7000 0.80 0.20
8000 0.94 0.06
9000 0.98 0.02
10000 1.00 0.00

An item costs #3, sells for #4 and is disposed of for #1 if not sold in season. There is no storage
charge. Determine the optimum number to stock at the start of season.

Solution:
The number to stock is such that the last item stocked has an expectation equal to its cost. If
another were stocked, its expectation would be less than the cost.
Recall that Expectation = ∑ (value)(probability)
Let P(X) be the probability of selling the last unit and 1 – P(X) be the probability of not selling
the last unit. The last unit costs #3 and brings in #4 or #1. Thus the mathematical relationship
for break-even is given by
3 = 4P(X) + 1[1 − P(X)]
P(X) = 0.67
The probability of selling the last unit must be 0.67 and 0.33 for not selling it. By interpolation in
the tabulation or from a plot, a stock of 4300 meets the requirements. If a stock of 4300 is
ordered, the probability of selling the last unit is the probability of selling 4300 or more which is
0.67. Analysts must recognize the two complementary cumulative probabilities in arriving at the
correct reasoning.
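The break-even probability and the interpolation step can be scripted; a sketch using the "sales or more" column (linear interpolation gives roughly 4290, which the text reads from a plot as 4300):

```python
cost, price, salvage = 3, 4, 1
# Break-even for the last unit stocked: cost = price*p + salvage*(1 - p)
p_break_even = (cost - salvage) / (price - salvage)   # = 2/3 ≈ 0.67

# Bracketing points from the "this many sales or more" column of the table
(x0, p0), (x1, p1) = (4000, 0.71), (5000, 0.56)
stock = x0 + (x1 - x0) * (p0 - p_break_even) / (p0 - p1)
```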

Example: (Expected optimum inventory)


The average demand for an item is 4 per week and follows a Poisson distribution. Find the
number of units to be stocked at the beginning of each week so that the number of customers
turned away on the average will not exceed 10% of demand.

Solution:
Poisson distribution with average demand of 4 per week is given by
P(X) = 4^X e^−4 / X!
Thus P(0) = 0.0183, P(1) = 0.0732, P(2) = 0.1464, P(3) = 0.1950, P(4) = 0.1950, P(5) = 0.1560,
P(6) = 0.1040, P(7) = 0.0594, P(8) = 0.0296, P(9) = 0.0132, P(10) = 0.0053, P(11) = 0.0019,
P(12) = 0.0006, P(13) = 0.0002, P(14) = 0.0001, P(15) = 0.0000.
Starting with an initial stock of 6 units, we have the following:

Demand, X    P(X)    Lost sales, (X − 6)    (X − 6)P(X)


7 0.0594 1 0.0594
8 0.0296 2 0.0592
9 0.0132 3 0.0396
10 0.0053 4 0.0212
11 0.0019 5 0.0095
12 0.0006 6 0.0036
13 0.0002 7 0.0014
14 0.0001 8 0.0008
15 0.0000 9 0.0000
Total = 0.195 (lost sales/week)
The expected lost sales is given by
Expected lost sales = ∑(lost sales)(probability).
Thus starting with an initial stock of 6 units, the expected lost sales is 0.195. The average
demand is 4 units. Then the lost sales as a fraction of demand is 0.195/4 = 0.05 (5%).
This is below 0.10 (10%). A stock of 6 units is therefore more than adequate.
If the calculation is repeated starting with a stock of 5 units, the expected lost sales is found to
be 0.409, which as a fraction of demand is 0.409/4 = 0.102 (10.2%). A stock of 5 units nearly
meets the condition of 10% maximum.
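The expected-lost-sales tabulation can be generated directly from the Poisson pmf; a sketch (the cutoff of 40 is an arbitrary safe truncation point):

```python
import math

def expected_lost_sales(stock, lam=4, cutoff=40):
    """E[max(X - stock, 0)] for Poisson demand X with mean lam."""
    return sum((x - stock) * math.exp(-lam) * lam ** x / math.factorial(x)
               for x in range(stock + 1, cutoff))

e6 = expected_lost_sales(6)   # ≈ 0.195, about 5% of the mean demand of 4
e5 = expected_lost_sales(5)   # ≈ 0.410, just over the 10% limit
```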

4.2. Queuing: (Simplest Model)


The simplest model is single channel, single phase and single queue with an infinite population.
Consider an interval of time Ta (period between arrivals) during which one item will enter the
service channel and remain there for Ts periods (servicing time). Ta is greater than Ts to avoid
queue forming. If Cw is the cost for one item waiting one time period, the cost for the waiting is
CwTs. The cost of servicing is also considered. The size of the servicing channel is optional. It is
assumed that the time for servicing an item Ts is inversely proportional to the size of the channel
and the period cost of a servicing channel is proportional to its size. Let a service channel which
can service an item in one time period cost Cf to operate in one time period. If the servicing
time to service one unit is Ts and not a unity time period, then a service channel of a different
size will be required and its cost for one time period is given by Cf/Ts.

The cost for a time interval Ta will be the sum of the cost for the lost or waiting time for the unit
being serviced plus the cost of operating the service channel for T a time periods which is
CwTs + (Cf/Ts)Ta
The total cost for one time period Ct is obtained by dividing by Ta:
Ct = Cw(Ts/Ta) + Cf/Ts

Differentiating with respect to Ts and equating to zero, we obtain the minimum cost/period.
dCt/dTs = Cw/Ta − Cf/Ts² = 0
Ts,opt = √(CfTa/Cw)
Ct,opt = 2√(CfCw/Ta)    (putting Ts,opt in Ct)
Please note that Cf is the cost for one time period for a servicing channel which when working
full time would service one item. If there are L channels or servicing facilities instead of one,
then

Ct = Cw(Ts/Ta) + CfL/Ts
Ts,opt = √(CfTaL/Cw)
Ct,opt = 2√(CfCwL/Ta)

Example
The arrival rate is constant at three items per hour. The cost of providing and maintaining a
service facility is #25/h and it can service four items per hour working full time. If an item waits
one day it represents a cost of #2400. Find the optimum time to service one item and the
minimum variable cost per item.

Solution
Take one hour as one period, then Cf = 25/4 = 6.25
Ta = 1/3 h between arrivals and Cw = 2400/24 = 100/h/item. Then

Ts,opt = √(6.25(0.333)/100) = 0.1443 h (optimum servicing time)
Ct,opt = 2√(6.25(100)/0.333) = 86.60
The total cost for servicing an item is 86.60/3 = 28.87 since three items enter the system per
period.
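The optimum-service-time formulas can be checked numerically; a sketch with this example's data (names are ours):

```python
import math

def optimum_service(cf, ta, cw, channels=1):
    """Optimum servicing time and minimum cost per period, simplest queuing model."""
    ts_opt = math.sqrt(cf * ta * channels / cw)
    ct_opt = 2 * math.sqrt(cf * cw * channels / ta)
    return ts_opt, ct_opt

# Cf = 25/4 (facility scaled to one item per period), Ta = 1/3 h, Cw = 2400/24
ts_opt, ct_opt = optimum_service(25 / 4, 1 / 3, 2400 / 24)
cost_per_item = ct_opt / 3    # three arrivals per hour share the period cost
```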

Example
Assuming that the cost of a service facility is proportional to 0.6 power of its size, redo the
previous example.

Solution
Let S be size of the service station. Then S is inversely proportional to the time required to
service one item, Ts. Thus S = k/Ts.
The cost of the service station per hour is given by
Cf = K′(k/Ts)^0.6 = K(1/Ts)^0.6, where K = K′k^0.6
But Ts = 1/4 and Cf = 25. Thus
K(1/0.25)^0.6 = 25, so K = 10.88
The cost for the service facility per hour is therefore
10.88(1/Ts)^0.6 = 10.88Ts^−0.6
The cost for waiting plus the cost for the service facility for an item serviced in interval Ta is
CwTs + 10.88Ts^−0.6·Ta
Dividing through by Ta, we obtain the cost per hour
Ct = Cw(Ts/Ta) + 10.88Ts^−0.6

Differentiate with respect to Ts and set to zero:


Cw/Ta − 0.6(10.88)Ts^−1.6 = 0

Solving we obtain optimum service time


Ts = (6.528Ta/Cw)^(1/1.6) = (6.528 × 0.333/100)^(1/1.6) = 0.0914 hour

Putting Ts in Ct we obtain optimum total cost per period


Ct,opt = 100(0.0914/0.3333) + 10.88(0.0914)^−0.6 = 73.14/h

Models Involving Probability


The arrival rate and servicing time are not constant in practice. If the number of arrivals per
period follows a Poisson distribution with λ average arrivals per period, and the number of items
serviced per period also follows a Poisson distribution with µ > λ, then the minimum cost per
period is obtained at
Ct,opt = 3Cw
λ′opt = Cw/Cf
µ′opt = 2Cw/Cf
where λ′opt and µ′opt are the specific values of the independent variables λ′ and µ′ at which the
lowest cost of the system occurs.

Example
The cost of waiting per period is #4 and the cost per hour for servicing an item in a service
centre which can handle one item in 1 hour is #2. If the arrival rates follow Poisson distribution,
find the lowest cost policy.

Solution
Ct.opt. = 3Cw = 3(4) = #12 per period
λ’opt. = Cw/Cf = 4/2 = 2 items per period
µ′opt = 2Cw/Cf = 2(4)/2 = 4 items per period

Finite Queuing
In practice, the number of items being serviced might have significant effect on the number of
items still in use. The situation becomes one of finite queuing. Analytical solutions for finite
queuing problems can be quite complicated but can be treated by the general methods with
certainty, noting that probabilities will be affected by the number of items requiring servicing.

General Method with Certainty


In many cases analytical solution is difficult or perhaps impossible. Such cases must be reduced
to tabulation and considerable ingenuity may be required to represent the problem on paper.

Example
Five units of a product must be processed at three different stations, A, B and C. The processes
are independent. The process time at each station is as follows;
A (5 days), B (7 days), C (9 days).
The five products must have the same operating sequence. The handling time and cost between
stations are negligible. Stations are all started at zero time and are shut individually when the
fifth unit has gone through the station. Idle time for a product is counted whenever a product
arrives at a station and must wait for entry. Idle time for a station is counted whenever a station
is waiting for work. Find the most economical sequence of work centres for the following
conditions, where Cw is the waiting cost per part per day and Ce is the cost of idle time per
station per day.

Cw Ce
(a) 10 30
(b) 20 20
(c) 30 10

Solution
Summary of Results from Time-Analysis Table is presented as follows:

Table 4.1: Part Waiting and Station Idle Times for the Sequences
Sequence Part Waiting Time (PWT) Station Idle Time (SIT)
ABC 40 17
ACB 40 27
BAC 20 27
BCA 20 39
CAB 0 47
CBA 0 49

Table 4.2: Cost Tabulation.


Cost of Waiting Cost of Idle Station Total Cost/Period
Seq. PWT (a) (b) (c) SIT (a) (b) (c) (a) (b) (c)
ABC 40 400 800 1200 17 510 340 170 910 1140 1370
ACB 40 400 800 1200 27 810 540 270 1210 1340 1470
BAC 20 200 400 600 27 810 540 270 1010 940 870
BCA 20 200 400 600 39 1170 780 390 1370 1180 990
CAB 0 0 0 0 47 1410 940 470 1410 940 470
CBA 0 0 0 0 49 1470 980 490 1470 980 490

Total cost per period is given by


Ct = Cw(Part Waiting Time) + Ce(Station Idle Time)
The minimum cost for (a) is #910 per period for sequence ABC, for (b) #940 per period for BAC
or CAB and for (c) #470 per period for CAB.
The total time required to produce five parts is 57 days for all the sequences (Table 4.3), but the
part waiting and station idle times vary (Table 4.1).

Table 4.3: Time-Analysis Tabulation for Example on General Method with Certainty

Station 1 Station 2 Station 3


Seq. Part In Out In Out Part Station In Out Part Station
ABC 1 0 5 5 12 0 5 12 21 0 12
2 5 10 12 19 2 0 21 30 2 0
3 10 15 19 26 4 0 30 39 4 0
4 15 20 26 33 6 0 39 48 6 0
5 20 25 33 40 8 0 48 57 8 0

ACB 1 0 5 5 14 0 5 14 21 0 14
2 5 10 14 23 4 0 23 30 0 2
3 10 15 23 32 8 0 32 39 0 2
4 15 20 32 41 12 0 41 48 0 2
5 20 25 41 50 16 0 50 57 0 2

BAC 1 0 7 7 12 0 7 12 21 0 12
2 7 14 14 19 0 2 21 30 2 0
3 14 21 21 26 0 2 30 39 4 0
4 21 28 28 33 0 2 39 48 6 0
5 28 35 35 40 0 2 48 57 8 0

BCA 1 0 7 7 16 0 7 16 21 0 16
2 7 14 16 25 2 0 25 30 0 4
3 14 21 25 34 4 0 34 39 0 4
4 21 28 34 43 6 0 43 48 0 4
5 28 35 43 52 8 0 52 57 0 4

CAB 1 0 9 9 14 0 9 14 21 0 14
2 9 18 18 23 0 4 23 30 0 2
3 18 27 27 32 0 4 32 39 0 2
4 27 36 36 41 0 4 41 48 0 2
5 36 45 45 50 0 4 50 57 0 2

CBA 1 0 9 9 16 0 9 16 21 0 16
2 9 18 18 25 0 2 25 30 0 4
3 18 27 27 34 0 2 34 39 0 4
4 27 36 36 43 0 2 43 48 0 4
5 36 45 45 52 0 2 52 57 0 4
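Table 4.3 can be reproduced with a small flow-shop simulation; a sketch assuming the rules stated in the problem (stations start at time zero; waiting of raw parts at the first station is not counted as part waiting):

```python
from itertools import permutations

TIMES = {"A": 5, "B": 7, "C": 9}   # processing days per station
PARTS = 5

def analyse(seq):
    """Return (part waiting time, station idle time) for one station sequence."""
    finish = [0] * len(seq)        # completion time of the previous part at each station
    pwt = sit = 0
    for _ in range(PARTS):
        ready = 0                  # time this part leaves the previous station
        for j, station in enumerate(seq):
            start = max(ready, finish[j])
            if j > 0:
                pwt += start - ready       # part waited for the station
            sit += start - finish[j]       # station waited for the part
            ready = finish[j] = start + TIMES[station]
    return pwt, sit

results = {"".join(s): analyse(s) for s in permutations("ABC")}
```

For case (a), Cw = 10 and Ce = 30 give 10(40) + 30(17) = 910 for ABC, the minimum in Table 4.2.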

Take Home Exercise


1. Assume that trucks with goods arrive at a market yard at the rate
of 30 trucks per day and that the inter-arrival times follow an exponential
distribution. The time to unload a truck is assumed to be exponential with an
average of 42 minutes. If the market yard can admit 10 trucks at a time, (a) calculate
P(the yard is empty) and find the average queue length. (b) If the unload time
increases to 48 minutes, recalculate the two quantities above.

2. Assume the inter-arrival time x and the service time y are exponentially
distributed with means 3 and 2 minutes respectively. Simulate the model
for 10 minutes using the following random numbers, and find:

RN for x 0.82 0.23 0.37 0.75 0.15 0.27


RN for y 0.66 0.31 0.48 0.92 0.38 0.72

i) Average number of customers in the system.

ii) Average number of customers in the queue.

iii) Average waiting time in the system.

iv) Average waiting time in the queue.

v) Proportion of idle time of the server.

3. Consider the problem of a repair shop to which machines are brought
in for repair according to a Poisson process (infinite population of machines
assumption) with rate λ = 5 per day. The manager has the option of hiring
a team of mechanics (team 1) in which both mechanics work together on
the same machine in serial stages: the first mechanic repairs the machine
and then the other one tests it, and they do not consider another machine before
completely repairing the current one (i.e., 2 serial stages); or team 2, where
the two mechanics work independently and in parallel to repair the machines.
Each mechanic has an exponential service time, with rate μ = 3
machines per day.

i) What is the equivalent service distribution for team 1?

ii) What are the equivalent queuing models for the two cases? Is the
queuing system stable in each of the two cases?

iii) If the cost of the two teams is the same, what is the best option for
the shop manager (best option gives the smallest delay)?

4.3 Replacement/Refurbishment of Machinery


A machinery replacement/refurbishment model (Ekeocha, 2011) is given by

K(T) = (1/R)[Q + B(t)R − S(t)R^t]

and

K(T) = (1/R)[Q + (b1 + b2t)R − Q(1 − d)^t R^t]
Where
K(T) = Present value of total cost
B(t) = Time-dependent (increasing) Maintenance cost
S(t) = Salvage value (Deterioration and time dependent)
Q = Cost of machine (constant)
R = Discount factor = 100/(r + 100)
r = Rate of return on replacement investment
d= Deterioration rate
As usual, the objective is to find the T that minimizes K(T). The time T that gives the minimum
present value of total cost K(T) corresponds to the replacement date of the machinery. Conversely,
the time T that produces the maximum K(T) corresponds to the refurbishment date of the machinery.
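The model can be evaluated year by year; a minimal sketch (the figures in Table 4.5 differ by small amounts because the source rounds R to 0.71 and rounds intermediate products):

```python
def present_cost(q, r, b_t, s_t, t):
    """K(T) = (1/R)[Q + B(t)R - S(t)R**t] evaluated at candidate year t."""
    return (q + b_t * r - s_t * r ** t) / r

R = 100 / (40 + 100)   # discount factor for the 40% capitalization rate
k1 = present_cost(60000, R, 2500, 0, 1)       # year 1: minor maintenance only
k5 = present_cost(60000, R, 22500, 40000, 5)  # year 5: incl. overhaul and resale value
```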
Example
Consider a machine with the following technical details (ASA, 2000)
Name of machine: Front-end Loader
Model: Y
Year of manufacture: 1995
Cost: $60,000.00
Resale Value (Active Market): $40,000.00 (5th year)
Deterioration rate: 25%
Capitalization rate: 40%
Minor maintenance: $2,500.00 annually
Major maintenance: (i) $10,000.00 (2nd year)
(ii) $3,500.00 (3rd year)
(iii) $6,000.00 (4th year)
(iv) $20,000.00 (5th year)

Solution: The tabular presentation is shown in Table 4.4.


Table 4.4: Data for model Y, Front-End Loader
T Q b1 b2t S(t)
1 60,000 2500 - -
2 60,000 2500 10,000 -
3 60,000 2500 3,500 -
4 60,000 2500 6,000 -
5 60,000 2500 20,000 40,000
6 60,000 2500 - -
7 60,000 2500 - -

Table 4.5: Enumeration of Data for Hypothetical front-end Loader

t Q R Rt b1 b2t B(t) B(t)R d S(t) S(t)Rt K(T)


1 60,000 0.71 0.71 2500 - 2500 1785 0.33 - - 86533
2 60,000 0.71 0.51 2500 10,000 12500 8925 0.50 - - 96533
3 60,000 0.71 0.36 2500 3500 6000 4284 0.50 - - 90033
4 60,000 0.71 0.26 2500 6000 8500 6069 0.83 - - 92533
5 60,000 0.71 0.19 2500 20,000 22500 16,065 0.83 40,000 7424 96136
6 60,000 0.71 0.13 2500 - 2500 1785 0.83 40,000 5300 79111*
7 60,000 0.71 0.09 2500 - 2500 1785 1.00 40,000 3784 81234
10 60,000 0.71 0.03 2500 - 2500 1785 1.00 40,000 1376 84606

Table 4.6: Predicted K (T) for Front-End Loader

T Q R Rt B(t) B(t)R d S(t) S(t)Rt K(T)


1 60000 0.71 0.71 6653.82 4750.83 0.33 89437
2 60000 0.71 0.51 16653.64 1189.70 0.50 99298
3 60000 0.71 0.36 10153.47 7249.58 0.50 92888
4 60000 0.71 0.26 12653.29 9034.45 0.83 95353
5 60000 0.71 0.19 26653.11 19030.32 0.83 8.52 1.58 109158
6 60000 0.71 0.13 6652.93 4750.19 0.83 1.45 0.19 89435*
7 60000 0.71 0.09 6652.75 4750.07 1.00 0 0 89435
10 60000 0.71 0.03 6652.22 4749.69 1.00 0 0 89435

(MAINTENANCE COST REGRESSION LINE,
B(t) = 6654 − 0.1778t)
b1 = 6654
b2 = −0.1778
b2t = Cost of refurbishment in any year of refurbishment (overhaul)

TABLE 4.7: K(T) against Time (Measured L and Predicted M)

T 1 2 3 4 5 6 7 8
K(T)L 86533 96533 90033 92533 96136 79111 81234 84606
K(T)M 89437 99298 92888 95353 109158 89435 89435 89435
%Dev. -3.24 -2.78 -3.07 -2.96 -11.93 -11.54 -9.17 -5.40
Fig 4.1: Comparison of the measured and predicted total cost K(T) of the Loader (total cost in $
against years 1–8; series: Measured Data, Predicted Values).

The results from the tables and the graph show a replacement date in the 6th year
(repair/refurbishment in the 2nd and 5th years).
