5 Random Numbers-2025
- Why do we need to do simulation?
- What do we need to create a simulation model?
[Diagram: random inputs enter the simulation model; outputs are summarised as statistics.]
Input modelling
[Diagram: data are fitted to probability distributions, from which random variates are generated, e.g. EXPO(..), TRIA(..), NORM(..), WEIB(..), …]
Random Variates
Example: an arrival process.
Common distributions: Triangular, Uniform, Normal, Exponential, Gamma, Weibull.
How does the computer generate random variates from probability distributions?
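One standard answer, sketched here as a preview, is the inverse-transform method: if U ~ U(0,1) and F is an invertible cdf, then X = F⁻¹(U) has distribution F. For the exponential distribution, F(x) = 1 − e^(−x/mean), this gives X = −mean·ln(1 − U). The function name `expo_variate` below is illustrative, not from the slides.

```python
import math
import random

def expo_variate(mean, u=None):
    """Exponential variate via the inverse-transform method.

    If U ~ U(0,1) and F(x) = 1 - exp(-x/mean), then
    X = F^{-1}(U) = -mean * ln(1 - U) is exponentially distributed.
    """
    if u is None:
        u = random.random()          # one pseudo-random number in [0, 1)
    return -mean * math.log(1.0 - u)

random.seed(42)
sample = [expo_variate(mean=10.0) for _ in range(100_000)]
print(sum(sample) / len(sample))     # sample mean should be close to 10
```

The same recipe works for any distribution whose cdf can be inverted in closed form; for the others on the slide (Normal, Gamma, …), later chapters use other techniques.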
1. What are the properties of Random Numbers?
Ideal random numbers R1, R2, … are uniformly distributed on [0, 1] and independent.
[Figure 1. pdf for random numbers: f(x) = 1 for 0 ≤ x ≤ 1.]
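Uniformity on [0, 1] implies E[R] = 1/2 and V(R) = 1/12 ≈ 0.0833. A quick empirical check, using Python's built-in generator purely for illustration (not part of the slides):

```python
import random

random.seed(1)
N = 200_000
rs = [random.random() for _ in range(N)]

mean = sum(rs) / N                           # should be close to 1/2
var = sum((r - mean) ** 2 for r in rs) / N   # should be close to 1/12

print(round(mean, 3), round(var, 4))
```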
2. Generation of Pseudo-Random Numbers
Goal: to produce a sequence of numbers between 0 and 1 that simulates the ideal properties: uniformity and independence.
Considerations for the method:
- Fast.
- Portable to different computers and programming languages.
- Long cycle. Cycle length (maximal period): the length of the RN sequence before a previous number repeats.
- RNs should be repeatable (given the same seed).
- Generated RNs should be uniform and independent.
3. Techniques for Generating RNs
3. Techniques for Generating RNs (cont.)
Example
3. Techniques for Generating RNs (cont.)
● Maximum Density
– Such that the values assumed by Ri, i = 1, 2, …, leave no large gaps on [0,1]
– Problem: instead of being continuous, each Ri is discrete
– Solution: a very large integer for the modulus m (e.g., 2^31 − 1, 2^48)
● The approximation appears to be of little consequence
● Maximum Period
– To achieve maximum density and avoid cycling.
– Achieved by: proper choice of a, c, m, and X0.
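A minimal sketch of a linear congruential generator, X_{i+1} = (a·X_i + c) mod m. The parameters below (a = 16807, c = 0, m = 2^31 − 1, the classic "minimal standard" multiplicative generator) are chosen for illustration; the slides only require a proper choice of a, c, m and X0.

```python
def lcg(seed, a=16807, c=0, m=2**31 - 1):
    """Linear congruential generator: X_{i+1} = (a*X_i + c) mod m.

    Yields R_i = X_i / m in (0, 1). Parameter values are illustrative
    (the "minimal standard" multiplicative generator); seed must be
    a nonzero integer less than m when c = 0.
    """
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

gen = lcg(seed=123457)
print([round(next(gen), 4) for _ in range(3)])
```

With these parameters the cycle length is m − 1 = 2^31 − 2, which illustrates the "long cycle" consideration above.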
3. Techniques for Generating RNs (cont.)
3.2. Combined Linear Congruential Generators (2)
– Suggested form:
  Xi = ( Σ_{j=1..k} (−1)^(j−1) Xi,j ) mod (m1 − 1)
– The coefficient (−1)^(j−1) implicitly performs the subtraction Xi,1 − 1.
3. Techniques for Generating RNs (cont.)
3.2. Combined Linear Congruential Generators (3)
● Example: For 32-bit computers, L’Ecuyer [1988] suggests combining k = 2
generators with m1 = 2,147,483,563, a1 = 40,014, m2 = 2,147,483,399 and a2 =
40,692. The algorithm becomes:
Step 1: Select seeds
● X1,0 in the range [1, 2,147,483,562] for the 1st generator
● X2,0 in the range [1, 2,147,483,398] for the 2nd generator.
Step 2: For each individual generator,
X1,j+1 = 40,014 X1,j mod 2,147,483,563
X2,j+1 = 40,692 X2,j mod 2,147,483,399.
Step 3: Xj+1 = (X1,j+1 − X2,j+1) mod 2,147,483,562.
Step 4: Return
Rj+1 = Xj+1 / 2,147,483,563 if Xj+1 > 0,
Rj+1 = 2,147,483,562 / 2,147,483,563 if Xj+1 = 0.
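The four steps above can be sketched directly (the constants are L’Ecuyer’s; the seeds of 1 are arbitrary choices within the allowed ranges):

```python
def lecuyer(seed1, seed2):
    """L'Ecuyer (1988) combined generator, following the slide's steps.

    m1 = 2,147,483,563, a1 = 40,014; m2 = 2,147,483,399, a2 = 40,692.
    """
    m1, a1 = 2_147_483_563, 40_014
    m2, a2 = 2_147_483_399, 40_692
    x1, x2 = seed1, seed2
    while True:
        x1 = (a1 * x1) % m1              # Step 2: advance each generator
        x2 = (a2 * x2) % m2
        x = (x1 - x2) % (m1 - 1)         # Step 3: combine
        # Step 4: map to (0, 1); the x == 0 case returns (m1 - 1) / m1
        yield x / m1 if x > 0 else (m1 - 1) / m1

gen = lecuyer(seed1=1, seed2=1)
print([round(next(gen), 6) for _ in range(3)])
```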
● A single random-number generator with k streams can act like k distinct virtual
random-number generators.
● To compare two or more alternative systems:
– It is advantageous to dedicate portions of the pseudo-random number sequence to
the same purpose in each of the simulated systems.
4. Tests for RNs
● Two categories:
– Testing for uniformity:
H0: Ri ~ U[0,1]
H1: Ri ≁ U[0,1]
● Failure to reject the null hypothesis, H0, means that evidence of
non-uniformity has not been detected.
– Testing for independence:
H0: the Ri are independent
H1: the Ri are not independent
● Failure to reject the null hypothesis, H0, means that evidence of
dependence has not been detected.
4. Tests for RNs (cont.)
4.1. Frequency Tests
● Test of uniformity
● Two different methods:
– Kolmogorov-Smirnov test
– Chi-square test
4. Tests for RNs (cont.)
4.1. Frequency Tests
4.1.1. Kolmogorov-Smirnov Test
● Compares the continuous cdf, F(x), of the uniform distribution with the
empirical cdf, SN(x), of the N sample observations.
– We know: F(x) = x, 0 ≤ x ≤ 1.
– If the sample from the RN generator is R1, R2, …, RN, then the empirical
cdf, SN(x), is:
  SN(x) = (number of R1, R2, …, RN which are ≤ x) / N
– The test statistic D = max |F(x) − SN(x)| is compared against a tabled
critical value Dα.
4. Tests for RNs (cont.)
4.1. Frequency Tests
4.1.1. Kolmogorov-Smirnov Test (2)
● Example: Suppose 5 generated numbers are 0.44, 0.81, 0.14, 0.05, 0.93.
Step 1: Arrange the R(i) from smallest to largest:
  R(i): 0.05, 0.14, 0.44, 0.81, 0.93
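The remaining steps compute D⁺ = max(i/N − R(i)), D⁻ = max(R(i) − (i−1)/N) and D = max(D⁺, D⁻); a sketch for this sample:

```python
data = [0.44, 0.81, 0.14, 0.05, 0.93]
r = sorted(data)                      # Step 1: order statistics R_(i)
N = len(r)

d_plus = max((i + 1) / N - r[i] for i in range(N))   # Step 2: D+
d_minus = max(r[i] - i / N for i in range(N))        # Step 3: D-
d = max(d_plus, d_minus)                             # Step 4: D

print(d_plus, d_minus, d)   # D+ ≈ 0.26, D- ≈ 0.21, so D ≈ 0.26
```

Here D ≈ 0.26 is below the tabled critical value for N = 5 at α = 0.05 (0.565 in standard K-S tables, worth double-checking against the course table), so H0 would not be rejected.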
4. Tests for RNs (cont.)
4.1. Frequency Tests
4.1.2. Chi-Square Test
● The chi-square test uses the sample statistic:
  χ0² = Σ_{i=1..n} (Oi − Ei)² / Ei
  where Oi is the observed number in the ith class, Ei is the expected number
  in the ith class (Ei = N/n for the uniform distribution), and n is the number
  of classes.
● Hypothesis: H0: Ri ~ U[0,1]; reject H0 if χ0² exceeds the critical value
  χ²_{α, n−1}.
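A sketch of the test on hypothetical data: N = 100 generated numbers grouped into n = 10 equal-width classes on [0, 1], so each class expects Ei = N/n = 10 under H0. The data here come from Python's built-in generator, purely for illustration.

```python
import random

random.seed(7)
N, n = 100, 10
rs = [random.random() for _ in range(N)]

observed = [0] * n
for r in rs:
    observed[min(int(r * n), n - 1)] += 1     # class index of each R_i

Ei = N / n
chi2 = sum((Oi - Ei) ** 2 / Ei for Oi in observed)   # the sample statistic
print(chi2)
```

The resulting χ0² is compared with the chi-square critical value on n − 1 = 9 degrees of freedom (about 16.9 at α = 0.05, per standard tables; verify against the course table).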
4. Tests for RNs (cont.)
4.2. Tests for Autocorrelation (2)
4. Tests for RNs (cont.)
4.2. Tests for Autocorrelation (4)
Example
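The slides do not reproduce the statistic here; in the formulation of Banks, Carson, Nelson and Nicol, the test of autocorrelation between every m-th number starting from the i-th uses ρ̂ = [1/(M+1)]·Σ_{k=0..M} R_{i+km}·R_{i+(k+1)m} − 0.25 and Z0 = ρ̂/σ_ρ̂, with σ_ρ̂ = √(13M + 7) / (12(M + 1)). A sketch under that assumption:

```python
import math
import random

def autocorr_test(rs, i, m):
    """Autocorrelation test statistic for lag m starting at index i (1-based):
        rho_hat = [1/(M+1)] * sum_{k=0}^{M} R_{i+km} * R_{i+(k+1)m} - 0.25
    where M is the largest integer with i + (M+1)*m <= N.
    Returns Z0 = rho_hat / sigma, approximately N(0,1) under H0.
    """
    N = len(rs)
    M = (N - i) // m - 1
    s = sum(rs[i - 1 + k * m] * rs[i - 1 + (k + 1) * m] for k in range(M + 1))
    rho = s / (M + 1) - 0.25
    sigma = math.sqrt(13 * M + 7) / (12 * (M + 1))
    return rho / sigma

random.seed(3)
rs = [random.random() for _ in range(1000)]
z0 = autocorr_test(rs, i=3, m=5)
print(abs(z0))    # compare with z_{alpha/2}, e.g. 1.96 at alpha = 0.05
```

H0 (independence) is not rejected when |Z0| ≤ z_{α/2}.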
4. Tests for RNs (cont.)
4.2. Tests for Autocorrelation (5)
Shortcomings
● The test is not very sensitive for small values of M, particularly when the
numbers being tested are on the low side.
● A problem arises when “fishing” for autocorrelation by performing numerous tests:
– If α = 0.05, there is a probability of 0.05 of rejecting a true hypothesis.
– If 10 independent sequences are examined:
● The probability of finding no significant autocorrelation in any of them,
by chance alone, is 0.95^10 ≈ 0.60.
● Hence, the probability of detecting significant autocorrelation at least
once when it does not exist is about 40%.
5. Summary
● Caution:
– Even generators that have been used for years, some of which are still in
use, have been found to be inadequate.
– This chapter provides only the basics.
– Also, even if the generated numbers pass all the tests, some underlying
pattern might have gone undetected.
What’s next?
Homework
(use Excel)